Paper_ID: OZitfSXpdT

Question: In addition, the teacher model may indeed underperform on the outliers. According to the previous discussion, directly decreasing $\alpha$ can also address this issue. Why, then, introduce the inter-sample relations? This motivation needs to be further clarified.
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation

Chengming Hu\textsuperscript{1,2}\footnote{Equal contribution with random order.}, Haolun Wu\textsuperscript{1,2}\footnote{To whom the correspondence should be addressed.}, Xuan Li\textsuperscript{1,2}, Chen Ma\textsuperscript{3}, Xi Chen\textsuperscript{1}, Jun Yan\textsuperscript{4}, Boyu Wang\textsuperscript{5}, Xue Liu\textsuperscript{1,2}
\{chengming.hu, haolun.wu, xuan.li2\}@mail.mcgill.ca, chenma@cityu.edu.hk, xi.chen11@mcgill.ca, jun.yan@concordia.ca, bwang@csd.uwo.ca, xueliu@cs.mcgill.ca
\textsuperscript{1}McGill University, \textsuperscript{2}Mila - Quebec AI Institute, \textsuperscript{3}City University of Hong Kong, \textsuperscript{4}Concordia University, \textsuperscript{5}Western University

Abstract

Knowledge distillation aims to train a compact student network using soft supervision from a larger teacher network and hard supervision from ground truths. However, determining an optimal knowledge fusion ratio that balances these supervisory signals remains challenging. Prior methods generally resort to a constant or heuristic-based fusion ratio, which often falls short of a proper balance. In this study, we introduce a novel adaptive method for learning a sample-wise knowledge fusion ratio, exploiting both the correctness of teacher and student, as well as how well the student mimics the teacher on each sample. Our method naturally leads to the intra-sample trilateral geometric relations among the student prediction ($S$), teacher prediction ($T$), and ground truth ($G$). To counterbalance the impact of outliers, we further extend to the inter-sample relations, incorporating the teacher's global average prediction ($\bar{T}$) for samples within the same class. A simple neural network then learns the implicit mapping from the intra- and inter-sample relations to an adaptive, sample-wise knowledge fusion ratio in a bilevel-optimization manner. Our approach provides a simple, practical, and adaptable solution for knowledge distillation that can be employed across various architectures and model sizes. Extensive experiments demonstrate consistent improvements over other loss re-weighting methods on image classification, attack detection, and click-through rate prediction.

1 Introduction

Knowledge distillation (KD) (Hinton et al., 2015) is a widely used machine learning technique that aims to transfer the informative knowledge from a cumbersome model (i.e., teacher) to a lightweight model (i.e., student). The student is trained by both imitating the teacher's behavior and minimizing the difference between its own predictions and the ground truths. This is achieved by optimizing a convex combination of two losses: $L = \alpha L_{KD} + (1 - \alpha)L_{GT}$, where $\alpha \in [0, 1]$ is the knowledge fusion ratio balancing the trade-off between the two different supervision signals.

Determining the knowledge fusion ratio $\alpha$ is critical for training. The most straightforward method is to pre-set an identical value for all training samples (Hinton et al., 2015; Huang et al., 2022; Clark et al., 2019; Romero et al., 2014; Park et al., 2019; Lassance et al., 2020). Other works, such as ANL-KD (Clark et al., 2019) and FitNet (Romero et al., 2014), gradually decrease $\alpha$ from 1 to 0 through an annealing factor.
Recent studies (Lukasik et al., 2021; Zhou et al., 2021; Lu et al., 2021) imply that a uniform knowledge fusion ratio across all samples is sub-optimal and cannot well capture the nuanced dynamics of the knowledge transfer process, and thus design the knowledge fusion ratio in a more fine-grained manner. For instance, ADA-KD (Lukasik et al., 2021) assigns a higher $\alpha$ to a class if the teacher has a higher correctness on that class. WLS-KD (Zhou et al., 2021) takes both the teacher's and student's correctness into consideration, and $\alpha$ is increased if the teacher outperforms the student on a sample, otherwise decreased. RW-KD (Lu et al., 2021) analyzes the same information as WLS-KD yet employs a meta-learning method to learn the sample-wise $\alpha$.

Figure 1: Motivation experiment on CIFAR-100 with a ResNet-34 teacher and a ResNet-18 student. The student is trained with varying knowledge fusion ratio values ($\alpha$). Data is first partitioned into $D$ (where the teacher predicts correctly) and $D'$ (incorrect predictions), and further categorized into five equalized groups based on the student-teacher prediction discrepancies ($ST$), respectively. Our claim is that determining $\alpha$ greatly depends on $ST$ and the correctness of teacher predictions.

However, existing methods largely ignore the discrepancy between the student's prediction ($S$) and the teacher's prediction ($T$), denoted as $ST$, when determining $\alpha$. We argue that this oversight is significant, as making the student imitate the teacher lies at the heart of KD; thus intuitively, the $ST$ discrepancy should offer valuable insights into balancing the two supervisory signals. Empirical results on CIFAR-100 (Krizhevsky, 2009) further verify our argument. The details of the motivation experiment are given at the end of this section. Derived from our observations, we draw the following insights:

- If the teacher predicts correctly, a higher $ST$ discrepancy indicates a higher learning potential from the teacher, favoring a larger $\alpha$. A lower discrepancy indicates less potential to learn from the teacher and more value in using the ground truth, thus a smaller $\alpha$ is preferred.
- If the teacher predicts incorrectly, knowledge from the teacher is misleading, and thus a smaller $\alpha$ is advisable.
- Regardless of the situation, determining a proper sample-wise value $\alpha$ relies not only on the teacher's or student's performance but also on the value of $ST$.

Consequently, our findings suggest that the $ST$ discrepancy offers valuable insights for determining the knowledge fusion ratio $\alpha$. In light of the emphasized importance of the student-ground truth ($SG$) and teacher-ground truth ($TG$) relations in existing studies (Zhou et al., 2021; Lu et al., 2021), we propose TGeo-KD, which captures all three relations aforementioned and naturally leads to modeling the intra-sample Trilateral Geometry among the signals from the student ($S$), teacher ($T$), and ground truth ($G$). To enhance the model stability against outliers, we further incorporate the teacher's global average prediction for a given class as an additional reference, abbreviated as $\bar{T}$, enriching the geometric relations at both intra- and inter-sample levels. Based on the insights from the motivation experiment, we also argue that learning the sample-wise $\alpha$ is quite involved and cannot be achieved by merely heuristic rules.
To this end, we propose to learn the fusion ratio by a neural network (NN) and formulate KD as a bilevel objective that leverages the trilateral geometric information. As a result, the student is influenced by a tailored blend of knowledge from both the teacher and the ground truth. Our proposed TGeo-KD, an end-to-end solution, is versatile and proves superior to other re-weighting methods in various tasks, from image classification to click-through rate prediction. To summarize, the main contributions of our work are as follows:

- We introduce TGeo-KD, a novel method for learning sample-wise knowledge fusion ratios in KD. Leveraging the trilateral geometry, our method encapsulates the geometric relations among the signals from the student, teacher, and ground truth.
- We exploit the trilateral geometry at both intra-sample and inter-sample levels, mitigating the impact of outliers in training samples towards a more effective knowledge fusion.
- We conduct comprehensive experiments across diverse domains to demonstrate the consistent superiority over other loss re-weighting methods, as well as to highlight its versatility and adaptability across different architectures and model sizes.

Details of Motivation Experiment. We conduct our motivation experiment on CIFAR-100 (Krizhevsky, 2009). Specifically, we partition the dataset into two subsets, $D$ and $D'$, where $D$ consists of samples on which the pre-trained teacher (ResNet-34) has correct predictions, whereas $D'$ includes those with incorrect predictions. Initially, with $\alpha = 0.5$, we train a student model (ResNet-18) over 50 epochs to acquire preliminary knowledge. We then compute the Euclidean distance between the student's and teacher's predicted class probabilities across all samples, designating this as the $ST$ discrepancy. Based on ascending $ST$ values, we further split $D$ and $D'$ into five equalized groups $g_1 \sim g_5$ and $g'_1 \sim g'_5$, respectively. Subsequently, the student is further trained with varying $\alpha$ values adjusted from \{0.1, 0.3, 0.5, 0.7, 0.9\}, yielding five distinct student models. Upon evaluating these students across all five $g$ groups and five $g'$ groups, we obtain 25 bins for each subfigure in Fig. 1, which reveals that: (i) for samples in $D$, students trained with smaller $\alpha$ values (i.e., 0.1, 0.3) outperformed on groups with lower $ST$ discrepancies (i.e., $g_1, g_2$), whereas larger $\alpha$ values (i.e., 0.7, 0.9) were beneficial for groups with higher discrepancies (i.e., $g_4, g_5$); (ii) for samples in $D'$, a smaller $\alpha$ (i.e., 0.1, 0.3) demonstrated the best performance on all $g'$ groups. Our observation shows that a proper sample-wise value $\alpha$ relies not only on the student's or teacher's performance but also on their discrepancy, which motivates the design of our proposed method for knowledge fusion learning.
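The grouping step of this motivation experiment is simple to reproduce. The sketch below is our illustration rather than the authors' code: it computes the $ST$ discrepancy as the Euclidean distance between the two probability vectors and splits the samples into five equal-sized groups by ascending discrepancy; the random Dirichlet probabilities only stand in for real model outputs.

```python
import numpy as np

def st_discrepancy(student_probs: np.ndarray, teacher_probs: np.ndarray) -> np.ndarray:
    """Euclidean distance between student and teacher class probabilities, per sample."""
    return np.linalg.norm(student_probs - teacher_probs, axis=1)

def split_into_groups(discrepancy: np.ndarray, num_groups: int = 5):
    """Partition sample indices into equal-sized groups by ascending ST discrepancy."""
    order = np.argsort(discrepancy)           # ascending ST values
    return np.array_split(order, num_groups)  # g_1 (smallest) ... g_5 (largest)

# Toy stand-in for the teacher-correct subset D: softmax outputs of shape (N, 100).
rng = np.random.default_rng(0)
s_probs = rng.dirichlet(np.ones(100), size=1000)
t_probs = rng.dirichlet(np.ones(100), size=1000)
groups = split_into_groups(st_discrepancy(s_probs, t_probs))
print([len(g) for g in groups])  # five equal-sized groups
```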
2 PRELIMINARY: REVISITING KNOWLEDGE FUSION RATIO IN KD

The vanilla KD (Hinton et al., 2015) transfers knowledge from a pre-trained teacher network to a student by reducing discrepancies between their predictions and aligning with the ground truth. The student learns through two losses: $L_{KD}$, the Kullback–Leibler (KL) divergence (Joyce, 2011) between student and teacher predictions, and $L_{GT}$, the Cross-Entropy (CE) loss (Good, 1952) from the ground truth.

Formally, denoting $D = \{(x_i, y_i)\}_{i=1}^N$ as the data where $y_i$ is the ground truth label represented as a one-hot vector for each sample, $C$ as the number of classes, and $z_i^s$ and $z_i^t$ as the logits of the student and teacher, we formulate the two losses in a sample-wise manner as follows:
\[ L_{KD} = \tau^2\, \mathrm{KL}\big(\sigma(z_i^t/\tau) \,\|\, \sigma(z_i^s/\tau)\big) = \tau^2 \sum_{j=1}^{C} \sigma_j(z_i^t/\tau) \log \frac{\sigma_j(z_i^t/\tau)}{\sigma_j(z_i^s/\tau)}, \tag{1} \]
\[ L_{GT} = \mathrm{CE}(z_i^s, y_i) = -\sum_{j=1}^{C} y_{i,j} \log \big( \sigma_j(z_i^s) \big), \tag{2} \]
where $\sigma$ is the softmax function and the temperature $\tau$ controls the softness of the logits. The overall training objective then optimizes the student network (parameterized by $\theta$) through a convex combination of the two losses with a sample-wise fusion ratio $\alpha_i$:
\[ L = \min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \big[ \alpha_i L_{KD} + (1 - \alpha_i) L_{GT} \big]. \tag{3} \]
We present a comparison of prior knowledge fusion methods alongside our work in Fig. 2, emphasizing both the geometric relations captured for learning $\alpha$ and their distinctive model attributes. Evidently, our proposed TGeo-KD overcomes the constraints observed in previous approaches, leading to enhanced performance.

3 Trilateral Geometry Guided Knowledge Fusion

3.1 Adaptive Learning for Knowledge Fusion Ratio

To address the limitations of prior works and employ the insights from the motivation experiment depicted in Sec. 1, we propose to adaptively learn the knowledge fusion ratio based on the trilateral geometry within the $(S, T, G)$ triplet using a separate network. To simplify notation, we consistently denote $S := \sigma(z^s) \in \mathbb{R}^{N \times C}$ and $T := \sigma(z^t) \in \mathbb{R}^{N \times C}$ as the prediction probabilities of the student and teacher, and $G := y \in \mathbb{R}^{N \times C}$ as the ground truth (i.e., each row is a one-hot vector). Given a training sample $(x_i, y_i)$, the knowledge fusion ratio can be correspondingly modeled as $\alpha_i = f_\omega(\Delta_i)$, where $f_\omega$ is an NN parameterized by $\omega$. The final layer of $f_\omega$ employs a sigmoid activation, ensuring that $\alpha_i \in (0, 1)$. For brevity, we omit explicitly writing the sigmoid function. $\Delta_i$ represents the unique geometric relation among $S_i$, $T_i$, and $G_i$. Our ultimate goal is to find the optimal sample-wise ratios $\alpha_i = f_\omega(\Delta_i)$ that enable the student network parameterized by $\theta$ to generalize well on test data. This naturally implies a bilevel optimization problem (Franceschi et al., 2018) with $\omega$ as the outer-level variable and $\theta$ as the inner-level variable:
\[ \min_{\omega} J_{\text{outer}}(\theta^*(\omega)) = \frac{1}{N_{\text{val}}} \sum_{i=1}^{N_{\text{val}}} L_{\text{GT}}^i, \tag{4} \]
\[ \text{s.t. } \theta^*(\omega) = \arg\min_{\theta} J_{\text{inner}}(\theta, \omega) := \frac{1}{N_{\text{train}}} \sum_{i=1}^{N_{\text{train}}} f_\omega(\Delta_i) L_{\text{KD}}^i + \big(1 - f_\omega(\Delta_i)\big) L_{\text{GT}}^i. \tag{5} \]
On the inner level, we aim to train a student network given a fixed $\omega$ by minimizing the combined loss. On the outer level, the loss function on the validation set serves as a proxy for the generalization error of $\omega$. The goal of TGeo-KD is to find $\omega$ that minimizes the validation loss. Note that Eq. 4 is an implicit function of $\omega$, as $\theta^*$ depends on $\omega$.
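For concreteness, a minimal PyTorch sketch of the per-sample losses in Eqs. 1–2 and the fused inner-level objective of Eqs. 3 and 5 is given below. It is an illustrative reading of the formulas rather than the authors' implementation; the per-sample `fusion_ratio` stands in for the learned $f_\omega(\Delta_i)$ and is simply passed in as a tensor.

```python
import torch
import torch.nn.functional as F

def kd_losses(student_logits, teacher_logits, targets, tau: float = 4.0):
    """Per-sample L_KD (Eq. 1) and L_GT (Eq. 2)."""
    log_p_s = F.log_softmax(student_logits / tau, dim=1)
    p_t = F.softmax(teacher_logits / tau, dim=1)
    # KL(teacher || student), scaled by tau^2, kept per-sample (no batch reduction).
    l_kd = tau ** 2 * F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1)
    l_gt = F.cross_entropy(student_logits, targets, reduction="none")
    return l_kd, l_gt

def fused_loss(student_logits, teacher_logits, targets, fusion_ratio, tau: float = 4.0):
    """Sample-wise convex combination of Eq. 3: alpha_i * L_KD + (1 - alpha_i) * L_GT."""
    l_kd, l_gt = kd_losses(student_logits, teacher_logits, targets, tau)
    return (fusion_ratio * l_kd + (1.0 - fusion_ratio) * l_gt).mean()

# Toy usage with a batch of 8 samples and 100 classes.
z_s = torch.randn(8, 100, requires_grad=True)
z_t = torch.randn(8, 100)
y = torch.randint(0, 100, (8,))
alpha = torch.full((8,), 0.5)      # placeholder for f_omega(Delta_i)
loss = fused_loss(z_s, z_t, y, alpha)
loss.backward()
```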
3.2 Exploiting Trilateral Geometry

For modeling the trilateral geometry of $\Delta_i$, we propose to capture both intra-sample and inter-sample geometric relations. The details are demonstrated as follows.

**Intra-sample relations.** Given the $i$-th sample, to capture the trilateral geometry of the $(S_i, T_i, G_i)$ triplet, denoted as $\Delta_i^{STG}$, we capture its three edges as outlined below:
\[ e_{i}^{sg} := [G_i - S_i] \in \mathbb{R}^C, \quad e_{i}^{tg} := [G_i - T_i] \in \mathbb{R}^C, \quad e_{i}^{st} := [T_i - S_i] \in \mathbb{R}^C. \tag{6} \]
The three edges represent the student's correctness, the teacher's correctness, and the discrepancy between the student and teacher, respectively. Previous research (Zhou et al., 2021; Lu et al., 2021) has affirmed the efficacy of the first two edges in guiding the learning of $\alpha$, while the third edge is our original contribution. We finally represent $\Delta_i^{STG}$ by also incorporating the three vertices $S_i, T_i, G_i$, to capture the exact probabilities across all classes and thereby incorporate more information:
\[ \Delta_i^{STG} := [e_{i}^{sg} \oplus e_{i}^{tg} \oplus e_{i}^{st} \oplus S_i \oplus T_i \oplus G_i], \tag{7} \]
where $\oplus$ is the concatenation operation.

**Inter-sample relations.** In addition to intra-sample relations, we argue that inter-sample relations are also essential for knowledge fusion learning, especially considering the impact of outliers in training samples. Out-of-distribution samples, which are significantly different from normal training data, commonly behave as outliers that challenge the generalization capability of a model (Lee et al., 2018; Wang et al., 2022). In KD, the teacher network may perform poorly on these outliers, occasionally even with high absolute values of confidence margin. Therefore, blindly using the teacher's prediction as the supervisory signal can result in the propagation of misleading knowledge, thereby disturbing the student's training process. To address this issue, we introduce inter-sample geometric relations. For each sample, we associate it with an additional vertex $\bar{T}_{c_i} \in \mathbb{R}^C$, representing the teacher's global average prediction over all samples of the class $c_i$ that sample $i$ belongs to. It is essential to understand that while each sample is linked to its respective class-specific vertex, samples within the same class refer to the same vertex $\bar{T}_{c_i}$. Consequently, we incorporate an additional triplet, $(S_i, \bar{T}_{c_i}, G_i)$, to encapsulate these inter-sample relations. This is achieved by a similar process as before, focusing on the three edges as well as incorporating all the vertices:
\[ \Delta_i^{S\bar{T}G} := [e_{i}^{sg} \oplus e_{i}^{\bar{t}g} \oplus e_{i}^{s\bar{t}} \oplus S_i \oplus \bar{T}_{c_i} \oplus G_i], \tag{8} \]
where $e_{i}^{\bar{t}g} := [G_i - \bar{T}_{c_i}]$ and $e_{i}^{s\bar{t}} := [\bar{T}_{c_i} - S_i]$. As such, by introducing the teacher's average prediction at the inter-sample level, the exploitation of more supportive knowledge can be further facilitated to effectively guide the student training process, particularly in addressing outliers.

**Improved distillation with trilateral geometry.** Although we can fully explore the sample-wise trilateral geometry through intra- and inter-sample trilateral relations, it is still challenging to design an explicit formulation between these signals and a knowledge fusion ratio, as depicted in Sec. 1.
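To make the construction concrete, the sketch below assembles the intra- and inter-sample features of Eqs. 6–8 from the three probability vectors and the class-wise teacher average, and maps them to $\alpha_i$ with a small sigmoid-output MLP (the fusion network introduced next; the ablation in Table 5 finds a 2-layer MLP sufficient). It is an illustrative reading of the formulas, not the released implementation; in particular, `t_bar` in the toy usage is only a stand-in for the class-conditional average $\bar{T}_{c_i}$.

```python
import torch
import torch.nn as nn

class FusionRatioNet(nn.Module):
    """2-layer MLP f_omega mapping the trilateral geometry Delta_i to alpha_i in (0, 1)."""
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        # Delta_i concatenates 5 edges and 4 vertices, each of dimension C (Eq. 10).
        self.net = nn.Sequential(
            nn.Linear(9 * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, s, t, t_bar, g):
        e_sg, e_tg, e_st = g - s, g - t, t - s       # intra-sample edges (Eq. 6)
        e_tbg, e_stb = g - t_bar, t_bar - s          # inter-sample edges (Eq. 8)
        delta = torch.cat([e_sg, e_tg, e_st, e_tbg, e_stb, s, t, t_bar, g], dim=1)
        return self.net(delta).squeeze(1)            # alpha_i per sample

# Toy usage: batch of 8 samples, 100 classes.
C = 100
s = torch.softmax(torch.randn(8, C), dim=1)          # student probabilities S_i
t = torch.softmax(torch.randn(8, C), dim=1)          # teacher probabilities T_i
g = torch.eye(C)[torch.randint(0, C, (8,))]          # one-hot ground truth G_i
t_bar = t.mean(dim=0, keepdim=True).expand_as(t)     # stand-in for class-wise T_bar
alpha = FusionRatioNet(C)(s, t, t_bar, g)
print(alpha.shape)  # torch.Size([8])
```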
We thus use a simple network $f_\omega(\cdot)$ parameterized by $\omega$ to adaptively learn a flexible, sample-wise knowledge fusion ratio from the geometric relations. The information captured for each sample can be represented as follows:
\[ \Delta_i := \Delta_i^{STG} \oplus \Delta_i^{S\bar{T}G} \tag{9} \]
\[ \phantom{\Delta_i} := [e_{i}^{sg} \oplus e_{i}^{tg} \oplus e_{i}^{st} \oplus e_{i}^{\bar{t}g} \oplus e_{i}^{s\bar{t}} \oplus S_i \oplus T_i \oplus \bar{T}_{c_i} \oplus G_i], \tag{10} \]
where the redundant terms are removed for brevity. Through inputting $\Delta_i$ into $f_\omega(\cdot)$, the knowledge fusion ratio $\alpha_i$ can be adaptively learned, and $\omega$ is optimized with $\theta$ in an end-to-end way.

### 4 EXPERIMENTS

#### 4.1 Tasks and Experiment Settings

**Tasks and datasets.** To demonstrate the broad applicability of our method, we conduct extensive experiments on **three different tasks**. Specifically, we use CIFAR-100 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) for **image classification** in computer vision, HIL (Pan et al., 2015) for **attack detection** in cyber-physical systems, and Criteo (Jean-Baptiste Tien, 2014) for **click-through rate (CTR)** prediction in recommender systems. Details of datasets and task selection are shown in Appendix A.1.

**Experiment settings.** In the experiment setup, the temperatures ($\tau$) are set as 4.0, 1.5, 1.5, and 2.0 on the four datasets, respectively. In the vanilla KD, the pre-set fusion ratios are 0.2, 0.3, 0.1, and 0.3, respectively. Considering that the original HIL (Pan et al., 2015) and Criteo (Jean-Baptiste Tien, 2014) are imbalanced, we oversample the minority class of the training set as a data pre-processing step, ensuring all classes have an equal number of samples in a balanced setting. We conduct the experiments on one NVIDIA RTX-3080 GPU and one RTX-3090 GPU. The detailed experiment settings can be found in Appendix A.2.

#### 4.2 Student Classification Performance

**Results on CIFAR-100.** We evaluate our proposed TGeo-KD method against numerous established KD baselines, as illustrated in Table 1. To ascertain the significance of improvement, we conduct a statistical t-test across five repeated runs, with t-scores calculated based on the top-1 classification accuracy of TGeo-KD and the baseline methods. All computed t-scores surpass the threshold value $t_{0.05,5}$, indicating acceptance of the alternative hypothesis at a statistical significance level of 5.0%. This furnishes compelling evidence that TGeo-KD consistently demonstrates a marked enhancement in performance. Notably, when the teacher (ResNet-56) and student (ResNet-20) models possess relatively similar architectures, ADA-KD (Lukasik et al., 2021) is the best baseline with a marginal improvement of 0.07% over the second-best baseline WLS-KD (Zhou et al., 2021). In comparison to ADA-KD (Lukasik et al., 2021), TGeo-KD illustrates a substantial advantage of 0.76%. As the architectural gap increases, as between the student ResNet-32 and the teacher ResNet-110, our method's performance advantage increases to 0.97% compared to the best baseline.

Table 1: Top-1 classification accuracy (%) on CIFAR-100. We re-implemented the methods denoted by * and calculated their average results (with standard deviation) over 5 repeated runs. For the remaining methods, we utilized the results provided or verified by the others (Tian et al., 2020; Zhou et al., 2021).
The best performance is **bold**, while the second best is _underlined_.

| Teacher → Student | WRN-40-2 → WRN-40-1 | ResNet-56 → ResNet-20 | ResNet-110 → ResNet-32 | ResNet-110 → ResNet-20 | ResNet-32×4 → ResNet-8×4 | ResNet-32×4 → ShuffleNetV1 | ResNet-32×4 → ShuffleNetV2 | WRN-40-2 → ShuffleNetV1 |
|---|---|---|---|---|---|---|---|---|
| Teacher | 75.61 | 72.34 | 74.31 | 74.31 | 79.42 | 79.42 | 79.42 | 75.61 |
| Student | 71.98 | 69.06 | 71.14 | 69.06 | 72.50 | 70.50 | 71.82 | 70.50 |
| FitNet | 72.24 | 69.21 | 71.06 | 68.99 | 73.50 | 73.59 | 73.54 | 73.73 |
| AT | 72.77 | 70.55 | 72.31 | 70.22 | 73.44 | 71.73 | 72.73 | 73.32 |
| SP | 72.43 | 69.67 | 72.69 | 70.04 | 72.94 | 73.48 | 74.56 | 74.52 |
| CC | 72.21 | 69.63 | 71.48 | 69.48 | 72.97 | 71.14 | 71.29 | 71.38 |
| VID | 73.30 | 70.38 | 72.61 | 70.16 | 73.09 | 73.38 | 73.40 | 73.61 |
| RKD | 72.22 | 69.61 | 71.82 | 69.25 | 71.90 | 72.28 | 73.21 | 72.21 |
| PKT | 73.45 | 70.34 | 72.61 | 70.25 | 73.64 | 74.10 | 74.69 | 73.89 |
| AB | 72.38 | 69.47 | 70.98 | 69.55 | 73.17 | 73.55 | 74.31 | 73.34 |
| FT | 71.59 | 69.84 | 72.37 | 70.22 | 72.86 | 71.75 | 72.50 | 72.03 |
| NST | 72.24 | 69.90 | 71.96 | 69.53 | 73.30 | 74.12 | 74.68 | 74.89 |
| CRD | 74.14 | 71.16 | 73.48 | 71.46 | 75.51 | 75.11 | 75.65 | 76.05 |
| Vanilla KD | 73.54 | 70.66 | 73.08 | 70.67 | 73.33 | 74.07 | 74.45 | 74.83 |
| ANL-KD* | 72.81±0.25 | 72.13±0.18 | 72.50±0.21 | 72.28±0.21 | 75.07±0.26 | 72.58±0.23 | 73.11±0.14 | 75.27±0.32 |
| ADA-KD* | 74.67±0.19 | 72.22±0.21 | 73.19±0.12 | 72.29±0.27 | 75.78±0.34 | 71.45±0.16 | 72.20±0.24 | 75.49±0.28 |
| WLS-KD | 74.48 | 72.15 | 74.12 | 72.19 | 76.05 | 75.46 | 75.93 | 76.21 |
| RW-KD* | 73.92±0.22 | 70.33±0.26 | 71.78±0.15 | 71.24±0.16 | 74.86±0.29 | 70.45±0.25 | 70.69±0.17 | 74.15±0.29 |
| TGeo-KD | **75.43±0.16** | **72.98±0.14** | **75.09±0.13** | **73.55±0.20** | **77.27±0.25** | **76.83±0.17** | **76.89±0.14** | **77.05±0.23** |

Table 2: Top-1 and Top-5 classification accuracy on ImageNet. We re-implemented the methods denoted by * and used the author-provided or author-verified results for the others (Zhou et al., 2021).
Left: Teacher ResNet-34 → Student ResNet-18. Right: Teacher ResNet-50 → Student MobileNetV1.

| Method | Top-1 ACC | Top-5 ACC | Method | Top-1 ACC | Top-5 ACC |
|--------|-----------|-----------|--------|-----------|-----------|
| Teacher | 73.31 | 91.42 | Teacher | 76.16 | 92.87 |
| Student | 69.75 | 89.07 | Student | 68.87 | 88.76 |
| AT | 71.03 | 90.04 | AT | 70.18 | 89.68 |
| NST | 70.29 | 89.53 | FT | 69.88 | 89.50 |
| FSP | 70.58 | 89.61 | AB | 68.89 | 88.71 |
| RKD | 70.40 | 89.78 | RKD | 68.50 | 88.32 |
| Overhaul | 71.03 | 90.15 | Overhaul | 71.33 | 90.33 |
| CRD | 71.17 | 90.13 | CRD | 69.07 | 88.94 |
| Vanilla KD | 70.67 | 90.04 | Vanilla KD | 70.49 | 89.92 |
| ANL-KD* | 71.83±0.22 | 90.21±0.26 | ANL-KD* | 70.40±0.15 | 89.25±0.22 |
| ADA-KD* | 71.96±0.17 | 90.45±0.21 | ADA-KD* | 71.08±0.24 | 90.17±0.16 |
| WLS-KD | 72.04 | 90.70 | WLS-KD | 71.52 | 90.34 |
| RW-KD* | 70.62±0.22 | 89.76±0.15 | RW-KD* | 70.15±0.16 | 89.40±0.19 |
| TGeo-KD | **72.89±0.15** | **91.80±0.04** | TGeo-KD | **72.46±0.14** | **90.95±0.17** |

This performance gain further escalates to 1.26% when distilling from the teacher ResNet-110 to the student ResNet-20, underscoring the advantage of our TGeo-KD, particularly when dealing with increasing architectural disparities between the student and teacher. Moreover, our technique excels in hetero-architecture KD scenarios, wherein knowledge is distilled from either ResNet-32×4 or WRN-40-2 models into ShuffleNet. In these cases, our method consistently demonstrates performance enhancements of 1.37%, 0.96%, and 0.84%, respectively, compared to the strongest baseline WLS-KD (Zhou et al., 2021).

**Results on ImageNet.** To further demonstrate the effectiveness of our approach on larger datasets, we extend our experiments to ImageNet, adhering to the setup outlined by Zhou et al. (2021). As depicted in Table 2, TGeo-KD consistently outperforms all competing baselines. Notably, compared with the strongest baseline WLS-KD (Zhou et al., 2021), TGeo-KD exhibits a performance improvement of 1.10% when the teacher (ResNet-34) and student (ResNet-18) share the same architecture style. Similarly, in a hetero-architecture KD experiment with ResNet-50 and MobileNetV1 as teacher and student, respectively, our method realizes an improvement of 0.94%.

Table 3: Result comparison on HIL and Criteo under Teacher (12-layer BERT) → Student (4-layer BERT). The best performance is **bold**, while the second best is underlined. "⇑" indicates that a higher metric value is better, while "⇓" indicates that lower is better. Our TGeo-KD demonstrates statistical significance for $p \leq 0.01$ compared to the strongest baseline, based on the paired t-test.
| Method | HIL ACC (%) ⇑ | HIL AUC (%) ⇑ | HIL NLL ⇓ | Criteo ACC (%) ⇑ | Criteo AUC (%) ⇑ | Criteo NLL ⇓ |
|--------|---------------|---------------|-----------|------------------|------------------|--------------|
| Teacher | 88.19 | 75.23 | 0.94 | 78.15 | 79.08 | 0.77 |
| Student | 87.64 | 67.58 | 1.02 | 69.43 | 69.02 | 1.79 |
| Vanilla KD | 87.55±0.56 | 69.52±0.70 | 1.00±0.04 | 71.08±0.48 | 69.42±0.60 | 1.51±0.05 |
| ANL-KD | 87.27±0.23 | 70.01±0.26 | 1.02±0.03 | 72.71±0.35 | 71.02±0.39 | 1.08±0.05 |
| ADA-KD | 90.15±0.34 | 70.02±0.21 | 0.99±0.02 | 72.15±0.33 | 71.01±0.35 | 1.15±0.04 |
| WLS-KD | 90.05±0.28 | 70.70±0.23 | 1.01±0.05 | 75.30±0.38 | 75.03±0.40 | 0.82±0.04 |
| RW-KD | 89.40±0.45 | 66.03±0.58 | 1.07±0.06 | 75.05±0.44 | 75.11±0.53 | 0.89±0.07 |
| TGeo-KD | **92.39±0.49** | **71.65±0.28** | **0.94±0.03** | **77.80±0.29** | **77.00±0.32** | **0.81±0.04** |

Figure 3: Knowledge fusion ratio distributions learned with (dark) and without (light) incorporating $ST$ during learning $\alpha$. We first partition all samples into two subsets based on the teacher's correctness. In each subset, we sort the samples in descending order based on their $ST$ values and select the top and bottom 20% as those with large and small discrepancies, respectively.

**Results on HIL and Criteo.** To illustrate the broad applicability of our proposed TGeo-KD in diverse application scenarios, we also observe a similar superiority of our method on HIL for attack detection and Criteo for CTR prediction, as shown in Table 3. For instance, TGeo-KD not only relatively improves ACC and NLL over ADA-KD (Lukasik et al., 2021) by 2.48% and 5.05% on HIL, respectively, but also surpasses the deeper teacher with an ACC increase of 4.20%. Besides, WLS-KD (Zhou et al., 2021) and RW-KD (Lu et al., 2021) are the best methods among all the baselines on Criteo, confirming the effectiveness of adopting a sample-wise knowledge fusion ratio. More results on various network architectures are provided in Appendix A.4.

4.3 Fusion Ratio Analysis with Prediction Discrepancy

To demonstrate the effectiveness of incorporating the prediction discrepancy $ST$ between the student and teacher when learning the knowledge fusion ratio $\alpha$, we follow the same settings as the motivation experiment in Sec. 1 for a comprehensive analysis. We first categorize training samples based on the correctness of teacher predictions and the $ST$ values on these samples. We then compare the distributions of the fusion ratio learned with and without the consideration of $ST$, as depicted in Fig. 3. When the teacher predicts incorrectly, it may transfer misleading knowledge to the student, resulting in a decline in the student's performance. By incorporating $ST$, our proposed TGeo-KD decreases the fusion ratio on $L_{KD}$ when the discrepancy is either large or small (Fig. 3(a) and Fig. 3(b)), which suggests that the student is encouraged to acquire more knowledge from the ground truths. In cases where the teacher predicts correctly, the fusion ratio typically ranges between 0.4 and 0.8 when not incorporating $ST$. The fusion ratio is greater when incorporating $ST$ in situations with a significant discrepancy between the student and teacher (Fig. 3(c)). This suggests that the student is expected to emulate the teacher more closely, as the teacher possesses a greater potential for offering valuable knowledge. When the discrepancy is smaller, the student is encouraged to rely more on the ground truth, leading to a decline in the fusion ratio (Fig. 3(d)).
On the contrary, as illustrated in Table 4, the fusion ratio steadily increases without the incorporation of $ST$ as training progresses, yet insufficient knowledge can be learned from the teacher.

Table 4: Comparison of average knowledge fusion ratios with and without $ST$ during training, grouped by teacher correctness (✗: incorrect, ✓: correct) and $ST$ discrepancy.

| Teacher prediction | Discrepancy | Ratio | Epoch 100 | Epoch 200 | Epoch 300 | Epoch 400 |
|---|---|---|---|---|---|---|
| ✗ | Large | Without $ST$ | 0.51 | 0.46 | 0.43 | 0.42 |
| ✗ | Large | With $ST$ | 0.25 | 0.19 | 0.14 | 0.12 |
| ✗ | Small | Without $ST$ | 0.48 | 0.42 | 0.40 | 0.39 |
| ✗ | Small | With $ST$ | 0.19 | 0.13 | 0.10 | 0.08 |
| ✓ | Large | Without $ST$ | 0.48 | 0.55 | 0.57 | 0.52 |
| ✓ | Large | With $ST$ | 0.59 | 0.67 | 0.71 | 0.73 |
| ✓ | Small | Without $ST$ | 0.59 | 0.62 | 0.64 | – |
| ✓ | Small | With $ST$ | 0.42 | 0.33 | 0.29 | 0.27 |

Figure 4: Knowledge fusion ratio distributions on normal samples (light) and outliers (dark). Panels: (a) WLS-KD, (b) RW-KD, (c) TGeo-KD.

4.4 Analysis on Normal Samples and Outliers

**Fusion ratio on normal samples and outliers.** To analyze fusion ratios on normal samples versus outliers, we first create outliers by adding synthetic Gaussian noise as additional training samples, following the settings outlined in prior studies (Hendrycks and Gimpel, 2016; Liang et al., 2018; Vyas et al., 2018). For these Gaussian noise-based outliers, each RGB value of every pixel is sampled from an independent and identically distributed Gaussian distribution with a mean of 0.5 and unit variance, and each pixel value is clipped to the range [0, 1]. As illustrated in Fig. 4, we compare the final fusion ratio distributions between sample-wise baselines (i.e., WLS-KD (Zhou et al., 2021) and RW-KD (Lu et al., 2021)) and our TGeo-KD on CIFAR-100. Compared to the two baselines, TGeo-KD reports final fusion ratios for normal samples typically within the range of 0.4 to 0.6, which indicates that the student can be effectively guided by valuable supervision signals from both the ground truth and the teacher on these normal samples. Furthermore, the teacher may make incorrect predictions on outliers, which provides misleading knowledge to the student. With the consideration of the $ST$ discrepancy and inter-sample relations, TGeo-KD reports fusion ratios below 0.3 on outliers, which suggests that the student is expected to learn more informative knowledge from the ground truth, resulting in an increased weight on $L_{GT}$. More comparisons about the effect of the $ST$ discrepancy and inter-sample relations on outliers can be found in Appendix A.4.

**Prediction discrepancy during training and testing.** Fig. 5 shows the prediction discrepancies between the student and teacher on normal samples and outliers during training and testing. Compared to baselines, in Fig. 5(a), our proposed TGeo-KD achieves the smallest discrepancies (i.e., 0.19 during training and 0.26 during testing) on normal samples, indicating that the student can effectively mimic the teacher's predictions by learning the knowledge distilled from the teacher. Furthermore, in Fig. 5(b), although ADA-KD (Lukasik et al., 2021) and RW-KD (Lu et al., 2021) (i.e., the baselines without inter-sample relations) report smaller discrepancies on outliers during training, the teacher may perform poorly on these outliers and transfer misleading knowledge to the student.
With the power of our inter-sample relations, TGeo-KD surpasses these aforementioned studies during testing, which indicates that inter-sample relations can protect the student training process from being disrupted by low-quality knowledge from the teacher, especially on those outliers.

4.5 Ablation Studies

**Effect of different relations captured.** TGeo-KD adaptively learns the knowledge fusion ratio by leveraging both intra- and inter-sample relations. To illustrate the effectiveness of each relation, we conduct experiments to train students under different combinations. As summarized in Table 6, both intra- and inter-sample relations yield better performance than the standalone student and vanilla KD models. Specifically, when we incorporate the $ST$ relation, there is a notable increase in top-1 accuracy from 73.92% to 75.28%. This suggests the importance of accounting for the discrepancies between the student and teacher models. The performance improves to 76.83% when the inter-sample relations are further captured. A deeper dive into how various representations of these intra- and inter-sample relations affect performance can be found in Appendix A.4.

**Comparison between attention mechanism and MLP.** We assess various options for modeling the knowledge fusion ratio generator $f_\omega(\cdot)$, including the attention mechanism (Vaswani et al., 2017) and MLPs with different numbers of layers. For CIFAR-100, the teacher and student are ResNet-110 and ResNet-32. For ImageNet, the teacher and student are ResNet-34 and ResNet-18. Based on the results in Table 5, a 2-layer MLP is sufficient for capturing the valuable information embedded in the trilateral geometry, leading to superior performance across both datasets. Furthermore, MLP settings generally demonstrate higher performance compared to attention mechanisms.

5 RELATED WORKS

**Knowledge balance in KD.** Knowledge distillation techniques have evolved from uniformly applying fusion ratios across samples (Hinton et al., 2015; Huang et al., 2022; Clark et al., 2019; Romero et al., 2014; Park et al., 2019; Lassance et al., 2020) to more refined strategies. Methods like ANL-KD (Clark et al., 2019) and FitNet (Romero et al., 2014) use annealing factors to adjust fusion ratios. Recent advancements include ADA-KD (Lukasik et al., 2021), which uses class-wise ratios, and WLS-KD (Zhou et al., 2021), which adjusts based on the student's and teacher's performance. RW-KD (Lu et al., 2021) employs meta-learning for adaptive ratio learning. Yet, existing methods lack a consideration of the comprehensive trilateral relations among the signals from the student, teacher, and ground truth during knowledge fusion learning.

**Sample relations in KD.** The exploitation of sample relations in knowledge distillation has been a key focus of numerous studies (Zagoruyko and Komodakis, 2017; Zhou et al., 2016; Park et al., 2019; Heo et al., 2019; Tian et al., 2020). For instance, AT (Zagoruyko and Komodakis, 2017; Hu et al., 2023) introduces attention transfer to transfer the attention maps from the teacher to the student, explicitly capturing the sample-level relation. CAMT (Zhou et al., 2016) extends this idea by incorporating both spatial and channel attention transfer to enhance the student's performance. Other studies have also emphasized relational reasoning and contrastive representation as key mechanisms for better understanding and improving knowledge distillation (Park et al., 2019; Tran et al., 2020).
Despite these advancements in capturing sample relations, none of these studies targets the learning of the knowledge fusion ratio.

Table 5: Top-1 classification accuracy (%) comparison among different modellings of $f_\omega(\cdot)$ through attention and MLP.

| Modelling of $f_\omega(\cdot)$ | CIFAR-100 | ImageNet |
|----------------|-----------|----------|
| 1-head Atten. | 73.00 | 70.96 |
| 2-head Atten. | 73.63 | 71.72 |
| 3-head Atten. | 73.87 | 72.08 |
| 1-layer MLP | 72.69 | 71.74 |
| 2-layer MLP | **74.31** | **72.89** |
| 3-layer MLP | 73.96 | 72.52 |

Table 6: Ablation study on relations for learning $\alpha$ on CIFAR-100. The teacher and student are ResNet-32×4 and ShuffleNetV1.

| Method | $SG$+$TG$ | $\Delta^{STG}$ | $\Delta^{S\bar{T}G}$ | ACC |
|----------------|-------|---------------------|--------------------|-----|
| Student | - | - | - | 70.50 |
| Vanilla KD | - | - | - | 74.07 |
| TGeo-KD | ✔ | - | - | 73.92 |
| TGeo-KD | ✔ | ✔ | - | 75.28 |
| TGeo-KD | ✔ | ✔ | ✔ | **76.83** |

6 CONCLUSION

We propose an innovative approach named TGeo-KD for learning sample-wise knowledge fusion ratios during KD, which exploits the trilateral geometry among the signals from the student, teacher, and ground truth by modeling both intra- and inter-sample geometric relations. Across diverse domains, TGeo-KD consistently outperforms other re-weighting methods. It offers a simple yet adaptable distillation solution that fits various architectures and model sizes, thereby easing the deployment complexities and resource limitations associated with deep neural networks. For the broader impact, TGeo-KD is effective across a variety of domains, including image classification, attack detection, and click-through rate prediction. Future research on TGeo-KD will address its focus on inter-sample relations within classes, exploring cross-class relationships to enhance performance.

REFERENCES

Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. BAM! Born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5931–5937, 2019.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.

Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In International Conference on Machine Learning, pages 1568–1577. PMLR, 2018.

I. J. Good. Rational decisions. Journal of the Royal Statistical Society. Series B (Methodological), 14(1):107–114, 1952.

Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. DeepFM: A factorization-machine based neural network for CTR prediction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 1725–1731, 2017.

Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2016.

Byeongho Heo, Minsik Lee, Sangdoo Yun, and Jin Young Choi. Knowledge transfer via distillation of activation boundaries formed by hidden neurons. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3779–3787, 2019.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network.
arXiv preprint arXiv:1503.02531, 2015.

Chengming Hu, Xuan Li, Dan Liu, Haolun Wu, Xi Chen, Ju Wang, and Xue Liu. Teacher-student architecture for knowledge distillation: A survey. CoRR, abs/2308.04268, 2023.

Tao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu. Knowledge distillation from a stronger teacher. arXiv preprint arXiv:2205.10536, 2022.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015.

Jean-Baptiste Tien, joycenv, and Olivier Chapelle. Display advertising challenge, 2014. URL https://kaggle.com/competitions/criteo-display-ad-challenge.

James M. Joyce. Kullback-Leibler divergence, pages 720–722. 2011.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

Carlos Lassance, Myriam Bontonou, Ghouthi Boukli Hacene, Vincent Gripon, Jian Tang, and Antonio Ortega. Deep geometric knowledge distillation with graphs. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8484–8488, 2020.
Paper_ID: TFR0GrzERG

Question: In the introduction, the authors claim that task descriptions with minimal information can impair in-context learning performance because they hinder the model's ability to learn from in-context examples. I don't think that this is correct given the presented results (Figure 1).
On Task Description of In-context Learning: A Study from Information Perspective

Anonymous authors
Paper under double-blind review

Abstract

Transformers have demonstrated remarkable performance in a wide range of applications, making in-context learning an essential technique. Although in-context learning has been widely applied, our understanding of its underlying processes still remains limited. In-context learning in transformers primarily relies on two types of information: in-context samples and task descriptions. While previous research has extensively investigated the influence of in-context samples on learning behavior, the role of task descriptions has not been adequately explored, despite their practical significance. In this paper, we present a study examining the impact of task descriptions on the in-context learning performance of transformers. We devise a synthetic experiment setting, making the information of the task description controllable. Through a series of well-designed experiments, we systematically vary the task description information and assess the resulting effects on model performance across multiple tasks. Our findings reveal the complex roles of task descriptions: task descriptions will lead the model to ignore in-context examples; task descriptions will increase the lower bound of the in-context learning performance. This study contributes to a deeper understanding of the in-context learning mechanism in transformers, paving the way for more effective real-world applications of these powerful models.

1 Introduction

The impressive performance of transformers highlights the significance of in-context learning for real-world applications. In-context learning pertains to the Transformer's ability to learn from context-based prompts. This learning approach is utilized in numerous practical applications, including AI planning (Valmeekam et al., 2022; Xie et al., 2023), reasoning (Huang & Chang, 2022), image understanding (Alayrac et al., 2022), and autonomous agents (Wang et al., 2023), and can provide theoretical grounding for experimental results in other fields like cognitive science (Sumers et al., 2023). Despite the extensive use of in-context learning, our comprehension of its underlying mechanisms remains limited.

Recent research has investigated in-context learning within a meta-learning framework (Gu et al., 2023; Min et al., 2021), offering insights into how Transformers utilize in-context demonstrations to tackle new tasks. However, Transformers employ in-context information in two ways: through in-context demonstrations and task descriptions. The role of task descriptions, though practically significant, has not been thoroughly examined. In this work, we adopt a different perspective by concentrating on how task descriptions influence in-context learning within a meta-learning framework. The meta-learning framework (Gu et al., 2023; Min et al., 2021) is used to enrich in-context learning of the Transformer, where the Transformer is directly trained to implement in-context learning. The task dataset for this framework is constructed from equations of the form $(x \circ y) \bmod p = r$, where $p$ is a prime number, $\circ$ represents an operator, and $r$ is the result of the equation to be predicted. Under this framework, the prompt is formulated as $[\{(x_i, y_i, r_i)\}_{i=1}^{l}, (x_q, y_q)]$, where $\{(x_i, y_i, r_i)\}_{i=1}^{l}$ can be regarded as few-shot examples and $(x_q, y_q)$ is the query.
The Transformer is expected to learn this task from the few-shot examples. This framework has also been leveraged for the exploration of in-context learning (Akyürek et al., 2022; Von Oswald et al., 2023; Garg et al., 2022; Chan et al., 2022a;b; Fu et al., 2023). Following previous studies, we also use this framework. However, we differ in that the task description is given: the prompt in our task is $[d, \{(x_i, y_i, r_i)\}_{i=1}^{l}, (x_q, y_q)]$, where $d$ denotes the task description. To investigate the role of the task description, we devise a synthetic experiment in which we can flexibly control the complexity of the task description by assigning it different levels of information. Specifically, given a task ground-truth label $t$, we design the task description $d$ to control the mutual information $I(t; d)$.

In the proposed experimental setup, we investigate the impact of task descriptions on in-context learning. Our findings are: (i) task descriptions can divert the model's attention away from in-context examples, and this effect is related to the task description's information, and (ii) task descriptions can raise the lower bound of in-context learning performance. Consequently, we observe a phase transition regarding the impact of task descriptions: those with insufficient information can impair in-context learning performance due to (i), while task descriptions with abundant information can aid in-context learning due to (ii). We find two cases where Transformers can achieve good in-context learning performance: 1) a large number of in-context examples with low-information task descriptions, and 2) high-information task descriptions. Additionally, we explore whether incorporating task prediction as an auxiliary task during training improves in-context learning performance. The results indicate that task prediction as a surrogate task benefits in-context learning in nearly all cases. To verify the generality of our findings, we conduct further studies on more realistic NLP tasks, which align with our experimental results on the synthetic tasks.

Our contributions can be summarized as follows:

- The development of a new synthetic task for investigating the role of the task description in in-context learning.
- The identification of a phase transition in in-context learning performance when increasing the information of the task description.
- Further research beyond synthetic tasks to corroborate the universality of our findings.

2 RELATED WORK

**In-context learning.** In recent years, the field of natural language processing (NLP) has witnessed significant advancements, particularly in the development of large-scale language models designed for in-context learning. These models, such as GPT-4 (OpenAI, 2023) by OpenAI, PaLM 2 (Anil et al., 2023) by Google, and Llama (Touvron et al., 2023) by Meta, have demonstrated remarkable capabilities to understand and generate human-like text by leveraging massive amounts of data and sophisticated algorithms. In-context learning refers to the model's ability to adapt its understanding and responses based on the specific context provided (Brown et al., 2020), which has been proven to be crucial in enhancing their performance across various NLP tasks, including AI planning (Valmeekam et al., 2022; Xie et al., 2023), reasoning (Huang & Chang, 2022), image understanding (Alayrac et al., 2022), and autonomous agents (Wang et al., 2023).
However, despite the impressive progress, challenges remain in terms of the mechanism driving in-context learning. This paper focuses on understanding the mechanism of in-context learning through synthetic tasks. The results make a further step towards understanding in-context learning from the aspect of the task description.

**Exploration of in-context learning from synthetic tasks.** Exploring in-context learning mechanisms in real applications poses a significant challenge due to the complexities and intricacies involved in practical scenarios (Min et al., 2022). Consequently, recent studies have shifted their focus towards understanding the mechanisms of in-context learning on specific synthetic tasks, which offer a more controlled environment for examining individual aspects of the learning process. For instance, linear regression tasks have been employed in several studies (Akyürek et al., 2022; Von Oswald et al., 2023; Garg et al., 2022) to delve into the in-context learning behavior of Transformers, while some researchers have turned their attention to image data to analyze the learning process. Moreover, investigations (Chan et al., 2022a;b; Fu et al., 2023) have been conducted from in-context and in-weights perspectives, examining the learning process through the lens of the model's internal representations and the role of weights. However, despite these valuable contributions, most explorations mentioned above tend to overlook the influence of task descriptions on the in-context learning process. Considering the practical significance of task descriptions in guiding Transformers towards desired learning outcomes, it is essential to examine their impact on in-context learning performance to gain a more comprehensive understanding of the in-context learning mechanisms and improve the effectiveness of these powerful models in real-world applications.

**Task description in real in-context learning applications.** In the realm of in-context learning, the prompt plays a crucial role in guiding the language model's response generation. A prompt is a textual input provided to the model, containing the necessary context and instructions that help the model understand the user's requirements and produce relevant responses. The task description in the prompt often includes specific questions, statements, or examples that outline the desired output, enabling the model to adapt and generate contextually appropriate text (Brown et al., 2020). The task description plays an important role in in-context learning by providing information for recognizing the task in real applications (Pan, 2023; Cho et al., 2023). However, systematic studies about the role of the task description and the mechanisms behind it are lacking. This paper fills this gap by providing an analysis of the task description under different situations.

### 3 FORMULATION AND MOTIVATION

We assume a dataset $D$, comprising $N$ data samples $D = \{x_i = (d_i, c_i, q_i, r_i, t_i)\}_{i=1}^N$, where $d_i$ denotes the task description for the $i$-th sample, and $c_i$ represents a sequence of task examples associated with $q_i$. For each data sample, given a query $q_i$, our objective is to predict the output of $q_i$ for task $t_i$, labeled as $r_i$. We partition the dataset into two subsets: $D_{\text{train}}$ and $D_{\text{test}}$.
This partitioning ensures that tasks in the test dataset remain unseen in the training dataset, i.e., for each task $t_i$ in the test set $D_{\text{test}}$, no $t_j$ exists in $D_{\text{train}}$ such that $t_i = t_j$. The primary aim of in-context learning is to utilize the task description and examples for adapting the model, thereby optimizing its performance on previously unseen tasks. To accomplish this objective, we maximize the following function:
\[ \mathbb{E}_{p(d,c,q)}\, \mathbb{E}_{q_\theta(r|d,c,q)} \log p(r|d,c,q). \tag{1} \]
Here $q_\theta(r|d,c,q)$ denotes the predicted distribution of the target $r$, while $p$ refers to the real distribution. To analyze the aforementioned objective associated with task $t$, we employ the variational method, constructing an evidence lower bound. Given the intractable nature of the distribution $p(t|r,d,c,q)$, we approximate it using a parameterized distribution $q_\theta(t|d,c,q)$ as follows:
\[ \mathrm{KL}\big(q_\theta(t|d,c,q)\,\|\,p(t|r,d,c,q)\big) = \mathrm{KL}\big(q_\theta(t|d,c,q)\,\|\,p(t|d,c,q)\big) - \mathbb{E}_{q_\theta(t|d,c,q)} \log p(r|t,d,c,q) + \log p(r|d,c,q). \tag{2} \]
Please refer to Appendix A.1 for the proof. Considering the non-negative nature of the KL divergence, we can express the log-likelihood in the following manner:
\[ \log p(r|d,c,q) \geq -\mathrm{KL}\big(q_\theta(t|d,c,q)\,\|\,p(t|d,c,q)\big) + \mathbb{E}_{q_\theta(t|d,c,q)} \log p(r|t,d,c,q). \tag{3} \]
The first term signifies the task label prediction, whereas the subsequent term corresponds to the loss function employed in the in-context training of the GPT model. This equation, therefore, demonstrates that accurate task label prediction contributes to the maximization of the log-likelihood.

Incorporating the task description as a component of the input allows it to serve as a representation of the task itself. To assess the efficacy of this description, we examine encoder and decoder models that yield conditional distributions $q(d|t)$ and $p(t|d)$. Given that $q(t)$ embodies the marginal distribution of task $t$, we define the reconstruction error, denoted as $R$, in the following manner:
\[ R = \mathbb{E}_{q(t)}\, \mathbb{E}_{q(d|t)} [-\log p(t|d)] \leq \mathrm{KL}\big(q(t,d)\,\|\,p(t,d)\big) - I_q(t,d) + H_q(t). \tag{4} \]
Please see Appendix A.2 for the proof. The above inequality indicates that increasing the mutual information can reduce the negative log-likelihood of $t$. The mutual information, denoted as $I_q(t,d)$, between the task label $t$ and the task description $d$ can be formulated as follows:
\[ 0 \leq I_q(t;d) = \mathbb{E}_{q(t,d)} \left[ \log \frac{q(t,d)}{q(t)q(d)} \right] = H_q(t) - H_q(t|d) \leq H_q(t). \tag{5} \]
Based on the above equation, we observe that the mutual information ranges from 0 to $H_q(t)$. Consequently, to examine the impact of mutual information, we propose incorporating its control in our experimental design. Please see Sec. 4 for the details.

In summary, we consider an in-context learning setting where the task is unseen in the training set. However, to simplify the problem, we assume that the task labels in the testing set are novel recombinations of the training ones. In order to reformulate the prediction into a compositional generalization problem, we derive a variational lower bound of the log-likelihood as a new objective, as shown in Equation 3. The first term in it is for task prediction. Since we consider the task description as a representation of the task, its goodness has an impact on the model performance.
By modeling it as a representation, we derive a quantity to estimate its goodness, as shown in Equation 4. Therefore, we design our experiments with principles that allow us to analyze how to train our model for better in-context ability from the following perspectives: 1) the mutual information between the task description and the task; 2) with or without task prediction.

### 4 EXPERIMENTAL DESIGN

In this section, we delve into the experimental design and its various components. We begin by outlining the design principles, which serve as the foundation for the entire experiment. With these principles in mind, the experimental design aims to study the factors impacting the model's in-context ability within a robust and flexible framework. Furthermore, this design allows for future research on in-context learning, since it provides a controllable benchmark for in-context learning.

**Design Principles**

1. **Controllable task description information**: The information provided in the task description can be directly manipulated, allowing for precise control over the quantity of information presented to the model.
2. **Unseen evaluation tasks**: To ensure the model's ability to generalize, the evaluation tasks presented to the model are not included in the training data. This helps assess the model's performance in handling novel tasks.
3. **Information inference from multiple sources**: The model is designed to extract task information from both the task description and the in-context examples provided. This enables the model to adapt and learn from various sources of information.

### 4.1 Task Design

Our synthetic task dataset is constructed from equations of the form $((a \cdot x) \circ (b \cdot y)) \bmod p = r$, where $p$ is a prime number, and $\circ$ can represent $+$, $-$ or $\div$. For each task, $a$, $b$ and $\circ$ are randomly selected and fixed, but only an inexact range of $a$ and $b$ is implied in the task description, and we train the model to calculate the answer $r$ of the operation given $x$ and $y$ as the query. Only half of the available $ab$ pairs and $xy$ queries are seen in training, and the remaining equations are used for evaluation. We choose $p = 11$ in all experiments. The task description is given as $\langle a_l \rangle \langle a_u \rangle \langle b_l \rangle \langle b_u \rangle \langle op \rangle$, where $\langle a_l \rangle, \langle a_u \rangle, \langle b_l \rangle, \langle b_u \rangle$ stand for the possible lower and upper bounds of $a$ and $b$ separately, and $\langle op \rangle$ stands for the operator $+$, $-$ or $/$ used in this task. We change the given range of $a, b$ to control the quality of the task description; a larger $ab$ range corresponds to lower task description quality, as more possible $ab$ pairs can be deduced. For a given task $((a \cdot x) \circ (b \cdot y)) \bmod p = r$, several examples are randomly selected and constructed as $(x_i, y_i, r_i)$, where $r_i = ((a \cdot x_i) \circ (b \cdot y_i)) \bmod p$ (a short sketch of this construction is given after the model description below).

### 4.2 Model and Training

**Model** For most experiments on synthetic tasks, we use a standard decoder-only causal Transformer (Vaswani et al., 2017) with 24 layers, an embedding size of 256, and 8 attention heads. For experiments on the natural language task CoFE (An et al., 2023), we follow their approach and use a fine-tuned GPT2-Large as our model.
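The task construction of Sec. 4.1 can be illustrated with a few lines of Python. The snippet below is our reading of the setup (helper names are hypothetical, not the authors' code): each task fixes $(a, b, \circ)$ with $p = 11$, the description exposes only inexact ranges for $a$ and $b$ together with the operator, and in-context examples are triples $(x_i, y_i, r_i)$. Modular division is implemented via the inverse modulo $p$, and queries whose denominator is $0 \bmod p$ are skipped.

```python
import random

P = 11  # prime modulus used in all experiments

def apply_op(u: int, v: int, op: str) -> int:
    if op == "+":
        return (u + v) % P
    if op == "-":
        return (u - v) % P
    # "/" is modular division: multiply by the inverse of v modulo the prime P.
    return (u * pow(v, -1, P)) % P

def make_task(range_width: int, rng: random.Random):
    """Sample a task (a, b, op) and a description giving only inexact ranges for a and b."""
    a, b = rng.randrange(1, P), rng.randrange(1, P)
    op = rng.choice(["+", "-", "/"])
    a_l = rng.randrange(max(1, a - range_width + 1), a + 1)
    b_l = rng.randrange(max(1, b - range_width + 1), b + 1)
    description = (a_l, a_l + range_width - 1, b_l, b_l + range_width - 1, op)
    return (a, b, op), description

def make_examples(task, num_examples: int, rng: random.Random):
    a, b, op = task
    examples = []
    while len(examples) < num_examples:
        x, y = rng.randrange(P), rng.randrange(P)
        if op == "/" and (b * y) % P == 0:
            continue  # skip queries whose denominator is 0 mod p
        examples.append((x, y, apply_op((a * x) % P, (b * y) % P, op)))
    return examples

rng = random.Random(0)
task, desc = make_task(range_width=2, rng=rng)   # narrower range => more informative description
prompt = [desc] + make_examples(task, num_examples=4, rng=rng)
print(task, prompt)
```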
Following GPT (Radford & Narasimhan, 2018), given a token sequence \(x = (x_1, \ldots, x_T)\), we train the model to predict \(p(x) = \prod_{t=1}^{T} p(x_t | x_{<t})\). We calculate the loss for the in-context examples, the query, and the answer of the query equation. The in-context examples are denoted as the set \(C_{i-1}\). For \(i > 1\), \(C_{i-1}\) represents the in-context example sequence \(\{(x_1, y_1, r_1), \ldots, (x_{i-1}, y_{i-1}, r_{i-1})\}\). For \(i = 1\), \(C_0\) is empty. Specifically, we calculate the loss for the sequence \(s = \{(x_1, y_1, r_1), \ldots, (x_L, y_L, r_L)\}\) and task description \(d\) as follows: \[ L(\theta, s, d) = \frac{1}{L} \sum_{i=1}^{L} l(f(\{d, C_{i-1}, x_i, y_i\}), r_i), \] where \(l\) denotes the loss function, e.g., the cross-entropy loss adopted in our setting. The task description is \(d = (a_l, a_u, b_l, b_u, op)\). Accuracy is calculated only for the answer of the query equation. For task prediction, the task \(t = (a, b, op)\) is appended to the end of the input tokens, and the loss for task prediction can be re-formulated as: \[ L_t(\theta, s, d) = \frac{1}{L} \sum_{i=1}^{L} l(f(\{d, C_{i-1}, x_i, y_i\}), r_i, t). \] **Training configuration** We train the model for 200k steps and use the Adam optimizer with learning rate \(1 \times 10^{-4}\) for all experiments. The minibatch size is set to 128 for training and validation on our synthetic tasks, and 4 for CoFE. ### 4.3 Impact Factors in Prompt **Task description** We leverage the mutual information to evaluate the task description. Since only inexact ranges of \(a\) and \(b\) are implied in the task description as \(r_a = a_u - a_l\) and \(r_b = b_u - b_l\), the quality of the task description can be controlled and quantified by changing \(r_a\) and \(r_b\). To be specific, suppose the full number of available \(ab\) pairs is \(n_{ab}\), and the inexact \(ab\) ranges implied in the task description are \(r_a\) and \(r_b\). Then, given this task description, we can narrow down the number of possible \(ab\) pairs from \(n_{ab}\) to \(r_a \cdot r_b\). This indicates that the information gain given by the task description is \(\log(n_{ab}/(r_a \cdot r_b))\). **Number of Examples** We use the number of examples to control the information conveyed by the demonstration. For a given task, adding more in-context examples amounts to providing more information through the demonstration. 5 EXPERIMENT RESULTS Figure 2: Phase Transition when increasing the information of the task description. Shaded areas indicate +/- variance. **A:** The task description distracts the in-context learning ability of the transformer when its information is less than a threshold, while it improves in-context learning after that. **B:** Before the Phase Transition, the number of in-context examples significantly impacts in-context learning, while after that, it has almost no influence. **C:** The model can obtain in-context learning only in two cases: 1) a low-info task description with a large number of in-context examples; 2) a high-info task description. **D:** Attention explanation. The ratio of in-context examples in attention keeps declining with more task description information. The task description diverts the model's attention away from the in-context examples. 5.1 HOW DOES TASK DESCRIPTION IMPACT IN-CONTEXT LEARNING We use the accuracy of the predicted results of query examples to reflect in-context learning performance, and use the mean of five runs to reduce the randomness. The results are presented in Figure 2.
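The information value on the x-axis of Figure 2 is exactly the quantity \(\log(n_{ab}/(r_a \cdot r_b))\) defined in Sec. 4.3. The sketch below illustrates how a task, an inexact description, and this information value could be constructed; it is a minimal illustration under our own assumptions rather than the paper's released code: we take \(a, b \in \{1, \ldots, 10\}\) (so \(n_{ab} = 100\)), treat \(r_a\) and \(r_b\) as the number of values the description leaves possible for \(a\) and \(b\), and restrict \(x, y\) to nonzero residues so that modular division is well defined.

```python
import math
import random

P = 11  # prime modulus used in all experiments

def apply_op(u, v, op):
    # Division mod P is multiplication by the modular inverse (P is prime).
    if op == "+":
        return (u + v) % P
    if op == "-":
        return (u - v) % P
    return (u * pow(v, P - 2, P)) % P

def make_task(rng):
    """Sample a hidden task ((a*x) op (b*y)) mod P."""
    return rng.randint(1, P - 1), rng.randint(1, P - 1), rng.choice(["+", "-", "/"])

def make_description(a, b, op, r_a, r_b, rng):
    """Inexact description <a_l><a_u><b_l><b_u><op>; r_a (r_b) values remain possible for a (b)."""
    a_l = rng.randint(max(1, a - r_a + 1), min(a, P - r_a))
    b_l = rng.randint(max(1, b - r_b + 1), min(b, P - r_b))
    return (a_l, a_l + r_a - 1, b_l, b_l + r_b - 1, op)

def info_gain(r_a, r_b, n_ab=(P - 1) ** 2):
    """Information (in nats) the description provides about the (a, b) pair."""
    return math.log(n_ab / (r_a * r_b))

rng = random.Random(0)
a, b, op = make_task(rng)
desc = make_description(a, b, op, r_a=2, r_b=2, rng=rng)
examples = []
for _ in range(4):  # in-context examples (x_i, y_i, r_i)
    x, y = rng.randint(1, P - 1), rng.randint(1, P - 1)
    examples.append((x, y, apply_op(a * x % P, b * y % P, op)))
print(desc, examples, round(info_gain(2, 2), 4))
```

With \(r_a = r_b = 1\) the description pins down the task exactly and the information gain is \(\log 100 \approx 4.6052\) nats, the maximal value appearing in the ablations of Sec. 5.4; with \(r_a = r_b = 10\) the description carries no information.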
Our main findings are as follows: **A Phase Transition can be observed.** Figure 2A depicts the variation of accuracy with the amount of information and the number of in-context examples. Before a certain information threshold, the accuracy remains at a low level. At this stage, a significant accuracy gain can only be observed when more in-context examples are added. However, after this information threshold, the accuracy grows rapidly with information gain, but remains relatively stable under changes in the number of in-context examples. **Before the Phase Transition, the task description distracts the in-context learning ability of the transformer, but it improves in-context learning after that.** Figure 2B gives a clearer demonstration of the Phase Transition. The accuracy grows as the number of in-context examples increases before the Phase Transition, but stays relatively constant within a large range of in-context example numbers after the Phase Transition. **The Phase Transition leads to two in-context learning stages of the transformer.** As shown in Figure 2C, the model can achieve high accuracy only when given a low-information task description with a large number of in-context examples, or when given a high-information task description. 5.2 THE PHASE TRANSITION OF TASK DESCRIPTION In the previous section, we discovered the phase transition of the task description. Here, we further investigate the reason behind it. Specifically, we infer the possible reasons from the following two perspectives: **The task description leads the model to ignore the information from in-context examples.** We calculate the ratio of in-context examples and task description in the transformer attention, given the same input sequence. As shown in Figure 2D, the ratio of in-context examples in attention keeps declining with more task description information. On the contrary, the attention ratio of the task description increases when more task-related information is given. This indicates that adding task description information diverts the model's attention away from the in-context examples. Figure 3: Results of task prediction. **A**: A demonstration of accuracy gain (predicting tasks vs. without predicting tasks). Acc(p.t.) refers to accuracy on predicting results under the predicting-tasks setting, Acc(w/o p.t.) refers to the corresponding accuracy without task prediction. Accuracy gain means the value of Acc(p.t.) - Acc(w/o p.t.). Using task prediction as a proxy task can significantly improve the in-context learning ability of the Transformer. **B**: Task accuracy increases with task description info. **C**: The number of in-context examples can impact task prediction accuracy only under low-info task descriptions. **D**: Task info has greater influence than the number of examples. **Higher information of the task description will increase the lower bound of performance.** As illustrated in Eq 3, higher mutual information signifies that the task description is a good representation of the actual task. In other words, the task description captures the essential aspects and the underlying structure of the task, providing the model with valuable insights and a more accurate understanding of the problem it needs to solve. When the mutual information is high, it means that knowing the task description reduces the uncertainty about the prediction of the task itself.
Consequently, when the task description has high mutual information with the task, the model can leverage this strong representation to make better decisions and predictions, even when faced with limited or ambiguous examples. To study how predicting the task label impacts the performance of in-context learning (measured using the accuracy of validation query examples), we conduct experiments by adding an extra loss between the predicted task label and the ground-truth task label. By comparing the gain (with predicting the task label vs. without predicting the task label), we can evaluate the impact of task prediction. **Predicting the task can improve in-context learning performance.** The results are presented in Figure 3A. A warm color in Figure 3A refers to a positive accuracy gain. A performance improvement can be observed under different task description and in-context example settings, as the points in Figure 3A are mainly colored warm. The accuracy gain increases sharply with mutual info, at a similar threshold to that in Figure 2A, demonstrating a phase transition for the accuracy gain. Before the Phase Transition, such accuracy gain tends to grow with the number of in-context examples. There are some cases where the performance slightly drops due to randomness. After the Phase Transition, the accuracy gain remains significant and stable. **The performance of task label prediction can also reflect whether the model understands what the task is.** Besides the accuracy of query examples, we further examine the accuracy of the predicted task label (denoted as task accuracy for simplicity). As shown in Figure 3B and Figure 3C, the model can predict tasks better when given more task description information or more in-context examples. Figure 3C depicts that the number of in-context examples has an obvious impact on task prediction accuracy only under low-info task descriptions. According to Figure 3D, increasing both the task description information and the number of in-context examples can enhance the model's ability in task prediction, but the influence of the task description is relatively more significant. 5.3 Beyond the synthetic experiment To verify that the discoveries from the synthetic experiments also hold on real tasks, we conduct another experiment on a realistic natural language dataset. We experiment on CoFE (An et al., 2023), a natural language dataset for compositional generalization. The training set covers all the primitives while lacking certain combinations; this forces the model to understand and re-combine known components in language. We select 3 categories of combinations of primitives in the dataset: Primitive Substitution, Primitive Structural Alternation, and Phrase Recombination. The model is trained to predict 4 types of primitives for each combination category, resulting in 12 tasks. In our experiment, the training set consists of 4 randomly selected tasks, covering all 4 types of target primitives and all 3 combination categories. The test set consists of the remaining 8 tasks. Figure 4: Experiments on real tasks. We design three different settings of task description. In the **Full Task Info** experiment, all task information is given. In the **Part Task Info** experiment, the info of the target primitive is excluded. In the **No Task Info** experiment, no task description is added. We experiment on all three info settings given 2, 4, 6, 8, 10 in-context examples separately. We find that the conclusions of the experiments on synthetic tasks also hold on real tasks.
Examples of data in CoFE are provided in the appendix. We design three settings of task description containing different amounts of information. All task information is given in the Full Task Info experiment. In the Part Task Info experiment, we only imply the combination category of the task in the task description, but leave out the type info of the target primitive. In the No Task Info experiment, no task description is added. We experiment on all three info settings under different numbers of in-context examples. The results are given in Figure 4. The conclusions of the synthetic experiments still hold. In all three settings, using task prediction as a proxy task can significantly improve accuracy, confirming the impact of task prediction on the model's in-context learning ability. Figure 4A depicts that experiments on Full Task Info achieve the highest accuracy across all settings. This indicates that when given high-info task descriptions, the model can obtain higher in-context learning ability than when given low info. However, when given incomplete and limited task information, as shown in Figure 4B, the model achieves relatively low accuracy and obtains limited accuracy gain with an increasing number of in-context examples. The results demonstrate that low-info task descriptions mislead in-context learning. These observations are well aligned with the findings of the synthetic experiments above, indicating that our findings on synthetic experiments scale well to real-world cases. 5.4 Ablations **No task description during training.** We present the model's accuracy given no task description and different numbers of in-context examples. Table 2 shows that the accuracy grows with the number of in-context examples. This setting corresponds to zero mutual information in Figure 2A and Figure 2C, and it can be inferred from Figure 2 that a model given a full-info task description always outperforms a model given zero task info. **No in-context examples during training.** Table 1 lists the model's accuracy given different amounts of task info and no in-context examples. When given maximal info (4.6052 nats, referring to a totally accurate task description), the model can achieve 0.8641 accuracy, better than all other info-level settings, but falls behind models given both the full task description and in-context examples. This reflects the model's ability to understand the task description. Also, under the no-example setting, the accuracy grows with information gain. The growing trend is relatively small given low task info, but speeds up when more task info is added. This performance pattern is consistent with the experiments given both task descriptions and in-context examples.

| Task Info (nats) | 0 | 0.21 | 0.4462 | 0.7133 | 1.0217 | 1.609 | 2.3026 | 3.2189 | 3.6243 | 3.912 | 4.3175 | 4.6052 |
|-----------------|-----|------|--------|--------|--------|-------|--------|--------|--------|-------|--------|--------|
| Accuracy | 0.1017 | 0.1027 | 0.1036 | 0.1041 | 0.1038 | 0.1053 | 0.1083 | 0.1089 | 0.2104 | 0.2834 | 0.4267 | 0.8641 |

Table 1: Ablation Experiments: No in-context examples and different amounts of task information.

| Number of In-context Examples | 0 | 4 | 8 | 12 | 16 | 24 | 32 | 36 |
|------------------------------|-----|------|-------|-------|-------|-------|-------|-------|
| Accuracy | 0.1017 | 0.1117 | 0.1234 | 0.1320 | 0.2094 | 0.2955 | 0.3670 | 0.5367 |

Table 2: Ablation Experiments: No task info and different numbers of in-context examples.
6 LIMITATIONS A potential limitation of this work lies in the synthetic experimental setting that has been employed to investigate the impact of task descriptions on the in-context learning performance of Transformers. While this approach enables the systematic exploration of task description information and its influence on model performance, it may not fully capture the nuances and challenges encountered in real-world scenarios. The simplification and controlled nature of the synthetic setting might result in findings that do not entirely generalize to practical applications, where language models have to deal with diverse tasks, more complex instructions, and ambiguous or incomplete information. Moreover, the study's focus on task descriptions may not comprehensively address other factors that could significantly influence the performance of Transformers, such as the quality and representativeness of training data, model architecture, or the fine-tuning process. In the pursuit of a deeper understanding of in-context learning, it is essential to consider these additional elements to ensure a more holistic perspective on the behavior and performance of Transformers in real-world applications. 7 CONCLUSION In conclusion, transformers have exhibited exceptional performance in various applications, with in-context learning emerging as a vital technique in the field. Despite its widespread use, our comprehension of the underlying mechanisms of in-context learning remains limited. This study delves into the crucial yet underexplored role of task descriptions in in-context learning performance, shedding light on their impact on transformers. By conducting a series of well-designed experiments in a synthetic setting, the research systematically investigates the influence of task description information on model performance across diverse tasks and domains. The results underscore the importance of task descriptions as a guiding factor for transformers to achieve desired learning outcomes. In particular, the observed phase transition highlights the need to carefully craft task descriptions in order to enhance model performance and generalization. Ultimately, this study deepens our understanding of the in-context learning processes in transformers and lays the foundation for more efficient and effective real-world applications of these advanced models. However, it is crucial to acknowledge the limitations of the synthetic experimental setting and consider the additional factors that may influence transformer performance in real-world scenarios. While this study sheds light on the impact of task descriptions, future work should address the various challenges and complexities that transformers face in practical applications, such as diverse tasks, ambiguous instructions, and incomplete information. In future work, several avenues can be pursued to further advance our understanding of in-context learning from task descriptions in Transformers and enhance their practical applications. First, it is valuable to explore the development of automated methods for generating optimal task descriptions, which could alleviate the challenges in crafting effective prompts and improve model performance across a range of tasks. Second, investigating the impact of incorporating more structured or hierarchical task descriptions could provide valuable insights into the model's ability to understand complex instructions and generate more contextually appropriate responses.
REFERENCES Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? investigations with linear models. *arXiv preprint arXiv:2211.15661*, 2022. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022. Shengnan An, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Jian-Guang Lou, and Dongmei Zhang. How do in-context examples affect compositional generalization? In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 11027–11052, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.618. URL https://aclanthology.org/2023.acl-long.618. Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Tachard Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Z. Chen, et al. Palm 2 technical report. *ArXiv*, abs/2305.10403, 2023. URL https://api.semanticscholar.org/CorpusID:258740735. Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In *International conference on machine learning*, pp. 531–540. PMLR, 2018. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Stephanie Chan, Adam Santoro, Andrew Lampinen, Jane Wang, Aaditya Singh, Pierre Richemond, James McClelland, and Felix Hill.
Data distributional properties drive emergent in-context learning in transformers. *Advances in Neural Information Processing Systems*, 35:18878–18891, 2022a. Stephanie CY Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew K Lampinen, and Felix Hill. Transformers generalize differently from information stored in context vs in weights. *arXiv preprint arXiv:2210.05675*, 2022b. Hyunsoo Cho, Hyuhng Joon Kim, Junyeob Kim, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, and Taeuk Kim. Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 12709–12718, 2023.
esqRHCwTJ2
In other words, as the marginal feature distribution $P_X$ changes under strategic feedback, it would be expected that the qualification rate $Q(\mathcal{S}_{o,t-1})$ would also change in response, no?
Long-Term Impacts of Model Retraining with Strategic Feedback Anonymous authors Paper under double-blind review Abstract When machine learning (ML) models need to be frequently retrained, it is often too expensive to obtain human-annotated samples, so recent ML models have started to label samples by themselves. This paper studies a setting where an ML model is retrained (with human and model-annotated samples) over time to make decisions about a sequence of strategic human agents who can adapt their behaviors in response to the most recent ML model. We aim to investigate what happens when model-annotated data are generated under the agents' strategic feedback and how the models retrained with such data can be affected. Specifically, we first formalize the interactions between agents and the ML system and then analyze how the agents and ML models evolve under such dynamic interactions. We find that as the model gets retrained, agents are increasingly likely to receive positive decisions, whereas the proportion of agents with positive labels may decrease over time. We thus propose an approach to stabilize the dynamics and show how this method can further be leveraged to enhance algorithmic fairness when agents come from multiple social groups. Experiments on synthetic/semi-synthetic and real data validate the theoretical findings. 1 Introduction As machine learning (ML) is widely used to automate human-related decisions (e.g., in lending, hiring, college admission), there is a growing concern that these decisions are vulnerable to human strategic behaviors. With the knowledge of the decision policy, humans may adapt their behavior strategically in response to ML models, e.g., by changing their features at costs to receive favorable outcomes. A line of research called Strategic Classification studies such problems by formulating mathematical models to characterize strategic interactions and developing algorithms robust to strategic behavior (Hardt et al., 2016; Levanon & Rosenfeld, 2022). Among the existing works, most studies focus on one-time deployment where an ML model is trained and applied to a fixed population once. However, practical ML systems often need to be retrained periodically to ensure high performance on the current population. As the ML model gets updated, human behaviors also change accordingly. To prevent potential adverse outcomes, it is critical to understand how the strategic population is affected by the model retraining process. Traditionally, the training data used for retraining models can be constructed manually from human-annotated datasets (e.g., ImageNet). However, acquiring a large number of human-annotated training samples can be highly difficult and even infeasible, especially in human-related applications (e.g., in automated hiring where an ML model is used to identify qualified applicants, even an experienced interviewer needs time to label an applicant). Motivated by a recent practice of automating data annotation for retraining large-scale ML models (Taori & Hashimoto, 2023; Adam et al., 2022), we study strategic classification in a sequential framework where an ML model is periodically retrained by a decision-maker with both human and model-annotated samples. The updated models are deployed sequentially on agents who may change
their features to receive favorable outcomes. Specifically, we consider practical settings where: (i) the decision-maker can only label a limited number of human-annotated samples by itself, and has to use the current classifier to produce model-annotated samples for future retraining; (ii) the strategic agents need time to adapt their behaviors [Zrnic et al., 2021] and they best respond based on the previous model; (iii) feature changes caused by agents' best responses can genuinely change their underlying labels [Kleinberg & Raghavan, 2020], and the feature-label relationship is fixed over time. Because the ML model affects agent behavior and such strategic feedback is further captured when retraining the future model, both the model and agents change over time. However, it remains unclear how the two evolve under such dynamics and what long-term effects one may have on the other. In this paper, we examine the evolution of the ML model and the agent data distribution after agents best respond. In particular, we ask: 1) How is the agent population reshaped over time when the model is retrained with strategic feedback? 2) How is the ML system affected by the agents' strategic responses? 3) If agents come from multiple social groups, how can model retraining further impact algorithmic fairness? 4) What happens if the human-annotated samples have a systematic bias? To further illustrate our problem, consider an example of college admission where new students from a population apply each year. In the $t$-th year, an ML model $f_t$ is learned from a training dataset $S_t$ and used to make admission decisions. Students who apply in the $(t + 1)$-th year best respond to the model $f_t$ from the previous year (e.g., preparing the application package in a way that maximizes the chance of getting admitted). Meanwhile, the college retrains the classifier $f_{t+1}$ using a new training dataset $S_{t+1}$, which consists of the previous training data $S_t$, new human-annotated samples, and new model-annotated samples (i.e., previous applicants annotated by the most recent model $f_t$). This retrained model $f_{t+1}$ is then used to make admission decisions in the $(t + 1)$-th year. This process continues over time and we demonstrate how the training dataset $S_t$ is updated in Fig. 1. Under such dynamics, both the ML system and the strategic population change over time and may lead to unexpected long-term consequences. An illustrative example is given in Fig. 2. Figure 2: Evolution of the student distribution and ML model at $t = 0$ (left), $t = 5$ (middle), and $t = 14$ (right): each student has two features. At each time, a classifier is retrained with both human and model-annotated samples, and students best respond to be admitted, as illustrated in Fig. 1. Over time, the learned classifier (black lines) deviates from the ground truth (green lines). Compared to prior studies on strategic classification, we go beyond one-shot settings to study the long-term impacts of retraining in a sequential framework. Instead of assuming labels are available while retraining, we consider more practical scenarios with model-annotated samples. Although the risk of using model-annotated samples to retrain models has been highlighted in some existing works [Taori & Hashimoto, 2023], ours is the first to incorporate strategic feedback from human agents. More related works are discussed in App. C. Our contributions are summarized as follows: 1. We formulate the problem of model retraining with strategic feedback and qualitatively analyze the sources influencing the system dynamics (Sec. 2). 2.
We theoretically characterize the evolution of the expected acceptance rate (i.e., the proportion of agents receiving positive classifications), qualification rate (i.e., the proportion of agents with positive labels), and classifier bias (i.e., the discrepancy between acceptance rate and qualification rate) under the retraining process. We show that the acceptance rate increases over time under the retraining process, while the actual qualification rate may decrease under certain conditions. The dynamics of the classifier bias are more complex, depending on the systematic bias of human-annotated samples. Finally, we propose an approach to stabilize the dynamics (Sec. 3). 3. We consider settings where agents come from multiple social groups and investigate how inter-group fairness can be affected by the model retraining process; we also propose an early stopping mechanism to promote fairness (Sec. 4). 4. We conduct experiments on synthetic/semi-synthetic and real data to verify the theoretical results and test their robustness (Sec. 5, App. E, App. F). 2 PROBLEM FORMULATION Consider a population of agents who are subject to certain machine learning decisions (e.g., admission/hiring decisions) and join the decision-making system in sequence. Each agent has observable continuous features \( X \in \mathbb{R}^d \) and a hidden binary label \( Y \in \{0, 1\} \) indicating its qualification state ("1" being qualified and "0" being unqualified). Let \( P_{XY} \) be the joint distribution of \((X, Y)\), which is fixed over time, and let \( P_X, P_{Y|X} \) be the corresponding marginal and conditional distributions. \( P_X, P_{Y|X} \) are continuous with non-zero probability mass everywhere in their domain. For agents who join the system at time \( t \), the decision-maker makes decisions about them using a classifier \( f_t : \mathbb{R}^d \rightarrow \{0, 1\} \). In this paper, we consider practical settings in which the decision-maker does not know \( P_{XY} \) and can only learn \( f_t \) from the training dataset at \( t \) (Guldogan et al., 2022). The agent's best response. Agents who join the system at time \( t \) can adapt their behaviors based on the latest classifier \( f_{t-1} \) and change their features \( X \) strategically. We denote the resulting data distribution as \( P^t_{XY} \). Specifically, given original features \( X = x \), agents have incentives to change their features at costs to receive positive classification outcomes, i.e., by maximizing their utility: \[ x_t = \arg\max_z \left\{ f_{t-1}(z) - c(x, z) \right\} \] where the distance function \( c(x, z) \geq 0 \) measures the cost for an agent to change features from \( x \) to \( z \). In this paper, we consider \( c(x, z) = (z - x)^T B(z - x) \) for some \( d \times d \) positive semidefinite matrix \( B \), allowing heterogeneous costs for different features. After agents best respond, the agent data distribution changes from \( P_{XY} \) to \( P^t_{XY} \). In this paper, we term \( P_{XY} \) agents' prior-best-response distribution and \( P^t_{XY} \) agents' post-best-response distribution. We consider natural settings where (i) the agents' responses are delayed: they act based on the latest classifier \( f_{t-1} \) they are aware of, not the one they receive; (ii) agents' behaviors are benign and cause the actual labels to change, so the relationship between features and label \( P^t_{Y|X} = P_{Y|X} \) does not change (Guldogan et al., 2022). Human-annotated samples and systematic bias.
At each round \( t \), we assume the decision-maker can draw a limited number of unlabeled samples from the prior-best-response distribution \( P_X \). Note that the human annotation process is independent of the decision-making process. At \( t \), each agent is classified by the model \( f_t \) and best responds to \( f_{t-1} \); the decision-maker never confuses the agents by simultaneously using human experts to label agents. Instead, human experts never participate in the interaction, and human annotation is another process for the decision-maker to obtain additional information about the whole population (e.g., by first acquiring data from public datasets or third parties, and then labeling them to recover the population distribution). We may also consider the situation where human-annotated samples at \( t \) are drawn from the post-best-response distribution \( P^t_X \); the discussion is in App. D.7. With some prior knowledge (possibly biased), the decision-maker can annotate these features and generate human-annotated samples \( S_{o,t} \). We assume the quality of human annotations is consistent, so \( S_{o,t} \) at any \( t \) is drawn from a fixed probability distribution \( D^o_{XY} \) with marginal distribution \( D^o_X = P_X \). Because human annotations may not be the same as true labels, \( D^o_{Y|X} \) can be biased compared to \( P_{Y|X} \). We define such a difference as the decision-maker's systematic bias, formally stated below. **Definition 2.1 (Systematic bias).** Let \( \mu(D^o, P) = \mathbb{E}_{x \sim P_X}[D^o_{Y|X}(1|x) - P_{Y|X}(1|x)] \). The decision-maker has a systematic bias if \( \mu(D^o, P) > 0 \) (overestimation) or \( \mu(D^o, P) < 0 \) (underestimation). Def. 2.1 implies that the decision-maker has a systematic bias when it labels a larger (or smaller) proportion of agents as qualified compared to the ground truth. Depending on the applications, the systematic bias may or may not exist, and we study both scenarios in the paper. Model-annotated samples. In addition to human-annotated samples, the decision-maker at each round \( t \) can also leverage the most recent classifier \( f_{t-1} \) to generate model-annotated samples for training the classifier \( f_t \). Specifically, let \( \{x_i^{t-1}\}_{i=1}^N \) be \( N \) post-best-response features (equation 1) acquired from agents coming at \( t-1 \); the decision-maker uses \( f_{t-1} \) to annotate these samples and obtain model-annotated samples \( S_{m,t-1} = \{(x_i^{t-1}, f_{t-1}(x_i^{t-1}))\}_{i=1}^N \). Both human and model-annotated samples can be used to retrain the classifier at \( t \). Classifier's retraining process. With the human and model-annotated samples introduced above, we next introduce how the model is retrained by the decision-maker over time. Denote the training set at \( t \) as \( S_t \). Initially, the decision-maker trains \( f_0 \) with a human-annotated training dataset \( S_0 = S_{o,0} \). Then the decision-maker updates \( f_t \) every round to make decisions about agents. The decision-maker learns \( f_t \in \mathcal{F} \) using empirical risk minimization (ERM) with training dataset \( S_t \). Similar to studies in strategic classification (Eilat et al., 2022), we consider the linear hypothesis class \( \mathcal{F} \).
At each round \( t \geq 1 \), \( S_t \) consists of three components: existing training samples \( S_{t-1} \), \( N \) new model-annotated samples, and \( K \) new human-annotated samples, i.e., \[ S_t = S_{t-1} \cup S_{m,t-1} \cup S_{o,t-1}, \quad \forall t \geq 1. \] Since annotating agents is usually time-consuming and expensive, we have \( N \gg K \) in practice. The complete retraining process is shown in Alg. 1 (App. A). Given the training dataset \( S_t \) and the post-best-response distribution \( P^t_{XY} \), we can define their associated qualification rates as the proportion/probability of agents that are qualified, i.e., \[ Q(S_t) = \mathbb{E}_{(x,y) \in S_t}[y]; \quad Q(P^t) = \mathbb{E}_{(x,y) \sim P^t_{XY}}[y]. \] For the classifier \( f_t \) deployed on the marginal feature distribution \( P^t_X \), we define the acceptance rate as the probability that agents are classified as positive, i.e., \[ A(f_t, P^t) = \mathbb{E}_{x \sim P^t_X}[f_t(x)]. \] Since \( S_t \) is related to random sampling at all \( t \), the resulting classifier \( f_t \) and agents' best responses are also random. Denote \( D^t_{XY} \) as the probability distribution of sampling from \( S_t \) and recall that \( D^o_{XY} \) is the distribution for human-annotated \( S_{o,t} \); we can further define the expectations of the qualification/acceptance rates \( Q(S_t), Q(P^t), A(f_t, P^t) \) over the training dataset: \[ \bar{q}_t := \mathbb{E}_{S_t}[Q(S_t)]; \quad q_t := \mathbb{E}_{S_{t-1}}[Q(P^t)]; \quad a_t := \mathbb{E}_{S_t}[A(f_t, P^t)] \] where \( \bar{q}_t \) is the expected qualification rate of agents in the training set; \( q_t \) is the expected actual qualification rate of agents after they best respond (note that the expectation is taken with respect to \( S_{t-1} \) because the distribution \( P^t_{XY} \) is the result of agents best responding to \( f_{t-1} \), which is trained with \( S_{t-1} \)); and \( a_t \) is the expected acceptance rate of agents at time \( t \). **Dynamics of qualification rate & acceptance rate.** Under the model retraining process, both the model \( f_t \) and agents' distribution \( P^t_{XY} \) change over time. One goal of this paper is to understand how the agents and the ML model interact and impact each other in the long run. Specifically, we are interested in the dynamics of the following variables: 1. **Qualification rate** \( q_t \): it measures the qualification of agents and indicates the *social welfare*. 2. **Acceptance rate** \( a_t \): it measures the likelihood that an agent can receive positive outcomes and indicates the *applicant welfare*. 3. **Classifier bias** \( \Delta_t = |a_t - q_t| \): it is the discrepancy between the acceptance rate and the true qualification rate, measuring how well the decision-maker can approximate agents' actual qualification rate; it can be interpreted as *decision-maker welfare*. While it is difficult to derive the dynamics of \( a_t \) and \( q_t \) explicitly, we can first work out the dynamics of \( \bar{q}_t \) using the law of total probability (details in App. C.1), i.e., \[ \bar{q}_t = \frac{tN + (t-1)K}{(t+1)N+tK} \cdot \bar{q}_{t-1} + \frac{N}{(t+1)N+tK} \cdot a_{t-1} + \frac{K}{(t+1)N+tK} \cdot \bar{q}_0. \] Then, we explore the relations between \( \bar{q}_t \) and \( a_t \) (or \( q_t \)). By leveraging such relations and equation 3, we can further study the dynamics of \( a_t \) (or \( q_t \)).
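To make these quantities concrete, the following is a minimal simulation sketch of the retraining loop described above; it is our own illustration under simplifying assumptions, not the authors' implementation. It assumes one-dimensional Gaussian features, a logistic ground-truth \(P_{Y|X}\), a scalar quadratic cost \(c(x,z) = B(z-x)^2\), a logistic-regression classifier as \(f_t\), and a human-annotation distribution \(D^o\) that overestimates qualification by a fixed amount; it tracks \(a_t\), \(q_t\), and \(\Delta_t\) across rounds.

```python
# Minimal sketch of the retraining process with strategic feedback (Sec. 2).
# Assumptions (ours, for illustration): d = 1, Gaussian P_X, logistic P_{Y|X},
# scalar quadratic cost, and a human annotator D^o biased upward by `BIAS`.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
B, BIAS = 5.0, 0.1          # cost coefficient; systematic bias of human annotation
N, K, T = 2000, 100, 15     # model-annotated / human-annotated samples per round; rounds

def p_y(x):                 # ground-truth P_{Y|X}(1|x)
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def human_annotate(n):      # S_{o,t}: features from P_X, labels from the biased D^o_{Y|X}
    x = rng.normal(0.0, 1.0, n)
    y = (rng.random(n) < np.clip(p_y(x) + BIAS, 0.0, 1.0)).astype(int)
    return x, y

def best_respond(x, clf):
    """Move to the decision boundary of clf when the unit reward exceeds the quadratic cost;
    labels then change genuinely according to P_{Y|X}. Assumes clf has a positive coefficient."""
    thr = -clf.intercept_[0] / clf.coef_[0, 0]
    gap = np.maximum(thr - x, 0.0)
    x_new = np.where((gap > 0) & (B * gap ** 2 < 1.0), thr, x)
    y_new = (rng.random(len(x)) < p_y(x_new)).astype(int)
    return x_new, y_new

Sx, Sy = human_annotate(N)                                   # S_0 = S_{o,0}
clf = LogisticRegression().fit(Sx.reshape(-1, 1), Sy)        # f_0
x_prev = rng.normal(0.0, 1.0, N)                             # agents at t-1 (no response yet)
for t in range(1, T):
    y_model = clf.predict(x_prev.reshape(-1, 1))             # S_{m,t-1}: f_{t-1} labels agents from t-1
    hx, hy = human_annotate(K)                               # S_{o,t-1}
    Sx, Sy = np.concatenate([Sx, x_prev, hx]), np.concatenate([Sy, y_model, hy])
    f_prev, clf = clf, LogisticRegression().fit(Sx.reshape(-1, 1), Sy)  # f_t trained on S_t
    x_br, y_br = best_respond(rng.normal(0.0, 1.0, N), f_prev)          # agents at t respond to f_{t-1}
    a_t = clf.predict(x_br.reshape(-1, 1)).mean()            # acceptance rate A(f_t, P^t)
    q_t = y_br.mean()                                        # qualification rate Q(P^t)
    print(f"t={t:2d}  a_t={a_t:.3f}  q_t={q_t:.3f}  classifier bias={abs(a_t - q_t):.3f}")
    x_prev = x_br                                            # features used for S_{m,t} next round
```

In such a simulation, \(a_t\) tends to creep upward across rounds while \(q_t\) does not keep pace, which is the qualitative behavior analyzed in Sec. 3.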
**Objectives.** This paper studies the above dynamics and we aim to answer the following questions: 1) How do the qualification rate \( q_t \), acceptance rate \( a_t \), and classifier bias \( \Delta_t \) evolve under the dynamics? 2) How can the evolution of the system be affected by the decision-maker's retraining process? 3) What are the impacts of the decision-maker's systematic bias? 4) If we further consider agents from multiple social groups, how can the retraining process affect inter-group fairness? ### 3 Dynamics of the Agents and Model In this section, we examine the evolution of the qualification rate \( q_t \), acceptance rate \( a_t \), and classifier bias \( \Delta_t \). We aim to understand how *applicant welfare* (Sec. 3.1), *social welfare* (Sec. 3.2), and *decision-maker welfare* (Sec. 3.3) are affected by the retraining process in the long run. Because the acceptance rate \( a_t := \mathbb{E}_{S_t}[A(f_t, P^t)] \) and qualification rate \( q_t := \mathbb{E}_{S_{t-1}}[Q(P^t)] \) depend on the agent post-best-response distribution \( P^t_{XY} \) and the classifiers, we can indeed identify all sources that affect the evolution of these quantities (details are in App. G.2): • \( \bar{q}_t \): Expected qualification rate of agents in the training set. • \( \delta(D^t; F) \): Algorithmic bias that measures how well the hypothesis class \( F \) can approximate the training data distribution \( D^t_{Y|X} \). It universally exists in the PAC learning framework. • \( \delta^{t}_{BR} \): Strategic shift caused by agents' best responses to \( f_{t-1} \) at \( t \). Note that \( \delta^{t}_{BR} \) and \( \bar{q}_t \) closely affect each other and cannot be decoupled quantitatively. Meanwhile, \( \delta(D^t; F) \) only depends on how well the hypothesis class \( F \) fits \( P^t_{Y|X} \). Since \( F \) is pre-determined, we ignore \( \delta(D^t; F) \) in our theoretical analysis (Assumption 3.1). However, all experiments in Sec. 5 and the Appendix naturally capture \( \delta(D^t; F) \), and the results are consistent with the theorems. **Assumption 3.1.** Under the retraining process, the algorithmic bias \( \delta(D^t; F) \) is negligible. Finally, we further assume the monotone likelihood ratio property holds for \( D^o_{XY} \) and \( P_{XY} \). **Assumption 3.2.** Let \( x[m] \) be the \( m \)-th dimension of \( x \in \mathbb{R}^d \); then \( D^o_{Y|X}(1|x) \) and \( P_{Y|X}(1|x) \) are continuous and monotonically increasing in \( x[m] \), \( \forall m = 1, \cdots, d \). Note that Assumption 3.2 is mild and widely used in previous literature (Zhang et al., 2022). It can be satisfied by many distributional families such as exponential, Gaussian, and mixtures of exponential/Gaussian. It implies that agents are more likely to be qualified as feature values increase. ### 3.1 Applicant Welfare: The Dynamics of Acceptance Rate We first examine the dynamics of \( a_t = \mathbb{E}_{S_t}[A(f_t, P^t)] \). Intuitively, when \( \delta(D^t; F) \) is negligible (Assumption 3.1), all classifiers can fit the training data well. Then the model-annotated samples \( S_{m,t-1} \) generated from post-best-response agents would have a higher qualification rate than the qualification rate of \( S_{t-1} \) (i.e., \( \bar{q}_{t-1} \)). As a result, the training data \( S_t \) augmented with \( S_{m,t-1} \) has a higher proportion of qualified agents \( \bar{q}_t \) than \( \bar{q}_{t-1} \), thereby producing a more "generous" classifier \( f_t \) with a larger \( a_t \). This reinforcing process can be formally stated in Thm. 3.3.
**Theorem 3.3 (Evolution of \( a_t \)).** Under the retraining process, the acceptance rate of the agents that join the system increases over time, i.e., \( a_t > a_{t-1}, \forall t \geq 1 \). We prove Thm. 3.3 by mathematical induction in App. G.3. Fig. 3 below illustrates Thm. 3.3 by showing how agents' best responses can reshape the training data \( S_t \) and classifier \( f_t \). When agents best respond, the decision-maker tends to accept more and more agents. Indeed, we can further show that when the number of model-annotated samples \( N \) is sufficiently large compared to the number of human-annotated samples \( K \), the classifier will accept all agents in the long run (Prop. 3.4). **Proposition 3.4.** For any set of \( P_{XY}, D^o, B \), there exists a threshold \( \lambda > 0 \) such that \( \lim_{t \to \infty} a_t = 1 \) whenever \( \frac{K}{N} < \lambda \). The specific value of \( \lambda \) in Prop. 3.4 depends on \( P_{XY}, D^o, B \), which is difficult to find analytically. Nonetheless, we demonstrate in Sec. 5 that when \( \frac{K}{N} = 0.05 \), \( a_t \) tends to approach 1 in various datasets. Since human-annotated samples are often difficult to attain (due to time and labeling costs), the condition in Prop. 3.4 is easy to satisfy in practice. Figure 3: Illustration of the increased acceptance rate \( a_t \). The left plot shows that the training dataset \( S_t \) contains 2 unqualified (red circles) and 2 qualified agents (blue squares) and \( a_t \) is 0.5. The middle plot shows the new agents coming at \( t \) best responding to \( f_{t-1} \). After the best responses, 3 of 4 agents are qualified (blue squares) and 1 is still unqualified (blue circle). However, all 4 agents are annotated as "qualified" (blue). The right plot shows that the training dataset \( S_{t+1} \) contains all points of the left and middle plots, plus two new human-annotated points (points with dashed edges). All blue points are labeled as 1 and the red points are labeled as 0. So the qualification rate \( Q(S_{t+1}) \) of \( S_{t+1} \) becomes larger and \( f_{t+1} \) accepts a higher proportion of agents (\( a_{t+1} \) is 0.58). 3.2 Social welfare: The dynamics of qualification rate Next, we study the dynamics of the qualification rate \( q_t = \mathbb{E}_{S_{t-1}}[Q(P^t)] \). Unlike \( a_t \), which always increases during the retraining process, the evolution of \( q_t \) is more complicated and depends on agents' prior-best-response distribution \( P_{XY} \). Specifically, let \( q_0 = Q(P) = \mathbb{E}_{(x,y) \sim P_{XY}}[y] \) be the initial qualification rate; then the difference between \( q_t \) and \( q_0 \) can be interpreted as the amount of improvement (i.e., increase in label) agents gain from their best responses at \( t \). This is determined by (i) the proportion of agents that decide to change their features at costs (depends on \( P_X \)), and (ii) the improvement agents can expect upon changing features (depends on \( P_{Y|X} \)). Thus, the dynamics of \( q_t \) depend on \( P_{XY} \). Despite the intricate nature of the dynamics, we can still derive a sufficient condition under which \( q_t \) decreases monotonically. **Theorem 3.5 (Evolution of \( q_t \)).** Let \( F_X(x) \) be the cumulative distribution function corresponding to \( P_X \). Denote \( J = \{ x \mid f_0(x) = 0 \} \) as the half-space in \( \mathbb{R}^d \) determined by the classifier \( f_0 \) trained with \( S_{o,0} \).
Under the retraining process, if \( F_X \) and \( P_{Y|X}(1|x) \) are convex on \( J \), then \( q_{t+1} < q_t, \forall t \geq 1 \). Note that \( q_{t+1} < q_t \) in Thm. 3.5 holds only for \( t \geq 1 \). Because agents can only improve their labels through their best responses, the prior-best-response \( q_0 \) always serves as a lower bound of \( q_t \). The half-space \( J \) in Thm. 3.5 specifies the region in feature space where agents have incentives to change their features. The convexity of \( F_X \) and \( P_{Y|X}(1|x) \) ensures that as \( f_t \) evolves from \( t = 1 \): (i) fewer agents choose to improve their features, and (ii) agents expect less improvement from feature changes. Thus, \( q_t \) decreases over time. The proof and a more general analysis are shown in App. G.5. Indeed, the condition in Thm. 3.5 can be satisfied by common distributions \( P_X \) (e.g., Uniform, Beta when \( \alpha > \beta \)) and labeling functions \( P_{Y|X}(1|x) \) (e.g., linear functions, quadratic functions with degree greater than 1). Other distributions (e.g., Gaussian) and labeling functions (e.g., the logistic function) can also satisfy the condition if \( F_X \) and \( P_{Y|X}(1|x) \) are convex on \( x \in J \). We also show that Thm. 3.5 is valid under diverse experimental settings (Sec. 5, App. E, App. F). 3.3 Decision-maker welfare: The dynamics of classifier bias Sec. 3.1 and 3.2 show that as the classifier \( f_t \) gets updated over time, agents are more likely to get accepted (\( a_t \) increases). However, their true qualification rate \( q_t \) (after the best response) may actually decrease. This indicates that the decision-maker's misperception about agents varies over time. Thus, this section studies the dynamics of the classifier bias \( \Delta_t = |a_t - q_t| \). Our results show that the evolution of \( \Delta_t \) is largely affected by the systematic bias and its magnitude \( \mu(D^o, P) \) (Def. 2.1). **Theorem 3.6 (Evolution of \( \Delta_t \)).** Under the retraining process and the conditions in Thm. 3.5: 1. If systematic bias does not exist (i.e., \( \mu(D^o, P) = 0 \)), then \( \Delta_t \) increases over time. 2. If the decision-maker overestimates agent qualification (\( \mu(D^o, P) > 0 \)), then \( \Delta_t \) increases. 3. If the decision-maker underestimates agent qualification (\( \mu(D^o, P) < 0 \)), then \( \Delta_t \) either monotonically decreases or first decreases and then increases. Thm. 3.6 highlights the potential risks of the model retraining process and is proved in App. G.6. Originally, the purpose of retraining the classifier was to ensure accurate decisions on the targeted population. However, in the presence of strategic agents, the retraining may lead to adverse outcomes by amplifying the classifier bias. Meanwhile, though systematic bias is usually an undesirable factor that one would wish to eliminate when learning ML models, it may help mitigate the classifier bias and improve the decision-maker welfare in the retraining process, i.e., \( \Delta_t \) decreases when \( \mu(D^o, P) < 0 \). 3.4 Intervention to stabilize the dynamics Sec. 3.1–3.3 show that as the model is retrained on data from strategic agents, \( a_t, q_t, \Delta_t \) are unstable and may change monotonically over time. Next, we introduce an effective approach to stabilize the system.
From the above analysis, we know that one reason that makes \( q_t, a_t, \Delta_t \) evolve is agents' best responses, i.e., agents improve their features strategically to be accepted by the most recent model, which leads to a higher qualification rate of model-annotated samples (and the resulting training data), eventually causing \( a_t \) to deviate from \( q_t \). Thus, to mitigate such deviation, we can improve the quality of model annotation. Our method is proposed based on this idea, which uses a probabilistic sampler (Taori & Hashimoto, 2023) when producing model-annotated samples. Specifically, at each time \( t \), instead of adding \( S_{m,t-1} = \{(x_i^{t-1}, f_{t-1}(x_i^{t-1}))\}_{i=1}^{N} \) (samples annotated by \( f_{t-1} \)) to the training data \( S_t \) (equation 2), we use a probabilistic model \( \Phi_{t-1}: \mathbb{R}^d \rightarrow [0, 1] \) to annotate each sample according to the following: for each sample \( x \), we label it as 1 with probability \( \Phi_{t-1}(x) \), and as 0 otherwise. Here \( \Phi_{t-1}(x) \approx D^{t-1}_{Y|X}(1|x) \) is the estimated posterior probability learned from \( S_{t-1} \) (e.g., a logistic model). We call the procedure the refined retraining process if model-annotated samples are generated in this way based on a probabilistic sampler. Fig. 3 also illustrates the idea: after agents best respond to \( f_{t-1} \) (middle plot), their features improve and \( f_t \) will label all of them as 1. By contrast, a probabilistic sampler \( \Phi_t \) only labels a fraction of them as 1, producing a smaller \( Q(S_{t+1}) \). This alleviates the influence of agents' best responses and stabilizes the dynamics of \( a_t, q_t, \Delta_t \). In App. F.3, we also compare the evolution of \( a_t, q_t, \Delta_t \) under the refined retraining process with that under the original retraining process, and the results validate our approach. 4 LONG-TERM FAIRNESS DYNAMICS UNDER STRATEGIC RETRAINING Dynamics without fairness interventions. In this section, we focus on the long-term fairness dynamics of the retraining process. We first note that the decision-maker can easily have a systematic bias (Adebayo et al., 2022; Bareinboim & Pearl, 2012; Alvero et al., 2020) (see App. B for motivating examples). In this section, we consider scenarios where agents come from two groups \( i, j \) with different sensitive attributes, and the decision-maker uses group-dependent classifiers to make decisions about the two groups of agents. We assume the initial qualification rates satisfy \( q_0^i \geq q_0^j \) and both groups have the same cost matrix \( B \) to change features. Denote the systematic bias toward \( i, j \) as \( \mu_i, \mu_j \), where \( \mu_i \geq \mu_j \). This is reasonable because the group with lower qualifications is usually under-represented. To measure the unfairness, we consider the metric demographic parity (DP) (Feldman et al., 2015), which measures the unfairness between two groups \( i, j \) as the discrepancy in their acceptance rates \( |a_t^i - a_t^j| \). The extension to other commonly used fairness metrics is discussed in App. D.2. First, if the original retraining process is applied to both groups, the decision-maker is expected to admit all agents in the long run, which ultimately preserves fairness, but the classifier bias will be maximized. However, the dynamics in the middle of the retraining cannot be determined without knowing the feature distributions of both groups.
By contrast, applying the refined retraining process to both groups can stabilize the dynamics, but it cannot mitigate the systematic bias of the decision-maker. In the left plot of Fig. 4, we see that under the refined retraining process, the unfairness between groups stays around 0.2. Instead, if the decision-maker only applies the refined retraining process to group \( i \) while keeping the original retraining process for group \( j \), then perfect fairness will be achieved in the middle of the retraining process, but the model becomes unfair again as the retraining goes on. **Theorem 4.1 (Promote fairness through the early stopping mechanism).** When \( q_0^i \geq q_0^j \) and \( \mu_i \geq \mu_j \), if the decision-maker applies the refined retraining process to group \( i \) while applying the original retraining process to group \( j \), then \( |a_t^i - a_t^j| \) will first decrease to be close to 0, and then increase. Thm. 4.1 implies that the decision-maker can monitor the unfairness at each round and execute the early stopping mechanism to attain almost perfect DP fairness. As shown in the right plot of Fig. 4, the unfairness is minimized at \( t = 5 \) under the proposed method. Fairness interventions at each round. Since both the original retraining process and the refined retraining process are unable to maintain demographic parity among groups, we consider fairness interventions at each round in App. D.2 to ensure the deployment of fair models. Specifically, we examine the dynamics of the qualification rate and acceptance rate for both groups under fairness interventions. The results show that fairness interventions under the original retraining process still cause the qualification rate and acceptance rate to change monotonically, but the intervention on the refined retraining process can produce stable and fair classifiers. 5 EXPERIMENTS We conduct experiments on two synthetic datasets (Uniform, Gaussian), one semi-synthetic dataset (German Credit \cite{Hofmann1994}), and one real dataset (Credit Approval \cite{Quinlan2017}) to validate the theorems and proposed methods. Note that only the Uniform dataset satisfies all assumptions and conditions in the above theoretical analysis, while the Gaussian dataset and German Credit dataset violate the conditions in Thm. 3.5. The Credit Approval dataset violates all assumptions and conditions of the main paper. The decision-maker trains logistic regression models for all experiments using stochastic gradient descent (SGD) over $T$ steps. We present the experimental results of the Gaussian and German Credit datasets in this section, while the results for the Uniform and Credit Approval data are similar and shown in App. F. **Gaussian data.** We consider a synthetic dataset with Gaussian-distributed $P_X$. $P_{Y|X}$ is logistic and satisfies Assumption 3.2 but not the conditions of Thm. 3.5. We assume agents have two independent features $X_1$, $X_2$ and are from two groups $i,j$ with different sensitive attributes but identical joint distribution $P_{XY}$. Their cost matrix is $B = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix}$ and the initial qualification rate is $q_0 = 0.5$. We assume the decision-maker has a systematic bias: it overestimates (resp. underestimates) the qualification of agents in the advantaged group $i$ (resp. disadvantaged group $j$), which is modeled by setting $D^o_{Y|X}(1|x)$ to be 0.1 larger (resp. smaller) than $P_{Y|X}(1|x)$ for group $i$ (resp. group $j$).
For the retraining process, we let $r = \frac{K}{N} = 0.05$ (i.e., the number of model-annotated samples is $N = 2000$, which is sufficiently large compared to the number of human-annotated samples $K = 100$). Table 1 summarizes the dataset information, and the joint distributions are visualized in App. F.1. We first verify the results in Sec. 3 by illustrating the dynamics of $a_t, q_t, \Delta_t$ for both groups (Fig. 5a). Since our analysis neglects the algorithmic bias and the evolution results are in expectation, we perform $n = 100$ independent runs of experiments for every parameter configuration and show the averaged outcomes. The results are consistent with Thm. 3.3, 3.5 and 3.6: (i) the acceptance rate $a_t$ (red curves) increases monotonically; (ii) the qualification rate $q_t$ decreases monotonically starting from $t = 1$ (since strategic agents only best respond from $t = 1$); (iii) the classifier bias $\Delta_t$ evolves differently for different groups and may reach its minimum after a few rounds of retraining. Next, we verify whether the early stopping mechanism of the retraining process proposed in Thm. 4.1 can promote fairness. Fig. 5b shows that the decision-maker attains almost perfect fairness at $t = 5$. However, as discussed in Sec. 4, although fairness can be enhanced, it only ensures both groups have a similar classifier bias $\Delta_t$ but cannot reduce such bias. Besides, while we assume agents at round $t$ have perfect knowledge of the classifier $f_{t-1}$, Jagadeesan et al. \cite{Jagadeesan2021} pointed out that agents may have noisy knowledge in practice. To test the robustness of our theoretical results against agents' noisy responses, we assume agents estimate their classification result as $\hat{f}_t(x) = f_t(x) + \epsilon$ where $\epsilon \sim N(0, 0.1)$. We present the dynamics of $a_t, q_t, \Delta_t$ for both groups in Fig. 7a, which are quite similar to Fig. 5a, demonstrating the robustness of our theorems. **German Credit dataset** \cite{Hofmann1994}. This dataset includes features for predicting individuals' credit risks. It has 1000 samples and 19 numeric features, which are used to construct a larger-scale dataset. Specifically, we fit a kernel density estimator on all 19 features to generate 19-dimensional features; the corresponding labels are sampled from the distribution $P_{Y|X}$, which is estimated from data by fitting a logistic classifier with 19 features. Given this dataset, the first 10 features are used to train the classifiers. The attribute "sex" is regarded as the sensitive attribute. The systematic bias is created by increasing/decreasing $P_{Y|X}$ by 0.05. Other parameters $n, r, T, q_0$ are the same as in Table 1. Since $P_{Y|X}$ is a logistic function, Assumption 3.2 can be satisfied easily, as illustrated in App. F.1. We first verify the results in Sec. 3 by illustrating the dynamics of $a_t, q_t, \Delta_t$ for both groups. The results are shown in Fig. 6a and are consistent with Thm. 3.3, 3.5 and 3.6: (i) the acceptance rate $a_t$ (red curves) always increases; (ii) the qualification rate $q_t$ (blue curves) decreases starting from $t = 1$ (since strategic agents only best respond from $t = 1$); (iii) the classifier bias $\Delta_t$ (black curves) evolves differently for different groups. In the right plot of Fig. 6a, $\Delta_t^i$ reaches the minimum at $t = 2$, suggesting the best time for the decision-maker to stop retraining to maximize its welfare. We also evaluate the early stopping mechanism of the retraining and verify Thm. 4.1.
6b shows the unfairness decreases... first and is minimized at $t = 9$. Finally, similar to Fig. 7a, Fig. 7b demonstrates the results are still robust under the noisy setting. ![Graphs showing dynamics of $a_t$, $q_t$, $\Delta_t$ and unfairness $|a^i_t - a^j_t|$](image) Figure 5: Dynamics of $a_t$, $q_t$, $\Delta_t$ and unfairness $|a^i_t - a^j_t|$ on Gaussian dataset. ![Graphs showing dynamics of $a_t$, $q_t$, $\Delta_t$ and unfairness $|a^i_t - a^j_t|$](image) Figure 6: Dynamics of $a_t$, $q_t$, $\Delta_t$ and unfairness $|a^i_t - a^j_t|$ on German Credit dataset. ![Graphs showing dynamics of $a_t$, $q_t$, $\Delta_t$](image) (a) Gaussian: $a_t$, $q_t$, $\Delta_t$ for group $i$ (left) and $j$ (right) (b) German: $a_t$, $q_t$, $\Delta_t$ for group $i$ (left) and $j$ (right) Figure 7: Dynamics of $a_t$, $q_t$, $\Delta_t$ under the noisy setting. More comprehensive experiments in App. F. App. F.1 describes experimental setups. App. F.2 demonstrates the additional results to verify Thm. 3.3 and Thm. 3.4 under different $r = \frac{K}{N}$, where we observe the same trends under different $r$, but $a_t$, $q_t$ change with different rates. It also provides results on how $a_t$, $q_t$, $\Delta_t$ change under the following situations: (i) the longer-term dynamics when $T$ is very large; (ii) no systematic bias; (iii) all training examples are human-annotated or model-annotated; (iv) agents have different costs of changing different features. App. F.3 presents more dynamics under refined retraining process to illustrate how it stabilizes the retraining; App. F.4 illustrates the evolution of unfairness under various $r$; App. F.5 presents more experiments under the noisy setting, while App. F.6 compares the situations when agents are non-strategic with the ones when they are strategic, revealing that the strategic feedback of agents causes $a_t$, $q_t$ to diverge. 6 CONCLUSION & LIMITATIONS This paper studies the dynamics where strategic agents interact with an ML system retrained over time with model-annotated and human-annotated samples. We rigorously studied the evolution of applicant welfare, decision-maker welfare, and social welfare. Such results highlight the potential risks of retraining classifiers when agents are strategic. The paper also proposed solutions to stabilizing dynamics and improving fairness. However, our theoretical results rely on certain assumptions and we should first verify these conditions before adopting the results of this paper, which may be challenging in real-world applications. Finally, though early stopping is a simple yet powerful mechanism to promote fairness, it remains an interesting problem to ensure fairness during endless retraining. REFERENCES George Alexandru Adam, Chun-Hao Kingsley Chang, Benjamin Haibe-Kains, and Anna Goldenberg. Hidden risks of machine learning applied to healthcare: Unintended feedback loops between models and future data causing model degradation. In Proceedings of the 5th Machine Learning for Healthcare Conference, volume 126 of Proceedings of Machine Learning Research, pp. 710–731. PMLR, 2020. George Alexandru Adam, Chun-Hao Kingsley Chang, Benjamin Haibe-Kains, and Anna Goldenberg. Error amplification when updating deployed machine learning models. In Proceedings of the Machine Learning for Healthcare Conference, Durham, NC, USA, pp. 5–6, 2022. Julius Adebayo, Melissa Hall, Bowen Yu, and Bobbie Chern. Quantifying and mitigating the impact of label errors on model disparity metrics. 
In The Eleventh International Conference on Learning Representations, 2022. Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, and Keziah Naggita. The strategic perceptron. In Proceedings of the 22nd ACM Conference on Economics and Computation, pp. 6–25, 2021. Tal Alon, Magdalen Dobson, Ariel Procaccia, Inbal Talgam-Cohen, and Jamie Tucker-Foltz. Multiagent evaluation mechanisms. Proceedings of the AAAI Conference on Artificial Intelligence, 34: 1774–1781, 2020. AJ Alvero, Noah Arthurs, Anthony Lising Antonio, Benjamin W Domingue, Ben Gebre-Medhin, Sonia Giebel, and Mitchell L Stevens. AI and holistic review: informing human reading in college admissions. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 200–206, 2020. Elias Bareinboim and Judea Pearl. Controlling selection bias in causal inference. In Artificial Intelligence and Statistics, pp. 100–108, 2012. Yahav Bechavod, Katrina Ligett, Steven Wu, and Juba Ziani. Gaming helps! learning from strategic interactions in natural dynamics. In International Conference on Artificial Intelligence and Statistics, pp. 1234–1242, 2021. Yahav Bechavod, Chara Podimata, Steven Wu, and Juba Ziani. Information discrepancy in strategic learning. In International Conference on Machine Learning, pp. 1691–1715, 2022. Omer Ben-Porat and Moshe Tennenholtz. Best response regression. In Advances in Neural Information Processing Systems, 2017. Mark Braverman and Sumegha Garg. The role of randomness and noise in strategic classification. CoRR, abs/2005.08377, 2020. J Michelle Brock and Ralph De Haas. Discriminatory lending: Evidence from bankers in the lab. American Economic Journal: Applied Economics, 15(2):31–68, 2023. Quinn Capers IV, Daniel Clinchot, Leon McDougle, and Anthony G Greenwald. Implicit racial bias in medical school admissions. Academic Medicine, 92(3):365–369, 2017. Yatong Chen, Jialu Wang, and Yang Liu. Strategic recourse in linear classification. CoRR, abs/2011.00355, 2020a. Yiling Chen, Yang Liu, and Chara Podimata. Learning strategy-aware linear classifiers. Advances in Neural Information Processing Systems, 33:15265–15276, 2020b. Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, and Jason Weston. Queens are powerful too: Mitigating gender bias in dialogue generation. arXiv preprint arXiv:1911.03842, 2019. Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, pp. 55–70, 2018.
NddKiWtdUm
Additionally, given that the use of simulation with LMs is touted as a benefit of this approach, what are the drawbacks of using it instead of collecting representative feedback (and training a reward model), and whose values are the models implicitly aligning to under such simulation? These should be explicitly called out and elaborated on in the limitations section, but they are currently only briefly glossed over in the ethics section.
Training Socially Aligned Language Models in Simulated Human Society Ruibo Liu Dartmouth College Ruixin Yang University of British Columbia Chenyan Jia Stanford University, Northeastern University Ge Zhang University of Michigan, Ann Arbor Diyi Yang Stanford University Soroush Vosoughi Dartmouth College Abstract Social alignment in AI systems aims to ensure that these models behave according to established societal values. However, unlike humans, who derive consensus on value judgments through social interaction, current language models (LMs) are trained to rigidly replicate their training corpus in isolation, leading to subpar generalization in unfamiliar scenarios and vulnerability to adversarial attacks. This work presents a novel training paradigm that permits LMs to learn from simulated social interactions. In comparison to existing methodologies, our approach is considerably more scalable and efficient, demonstrating superior performance in alignment benchmarks and human evaluations. This paradigm shift in the training of LMs brings us a step closer to developing AI systems that can robustly and accurately reflect societal norms and values. 1 Introduction “We want AI agents that can discover like we can, not which contain what we have discovered.” ——Prof. Richard Sutton, The Bitter Lesson, 2019 By virtue of their ability to “predict the next token(s)”, contemporary pre-trained language models (LMs) have shown remarkable proficiency in memorizing extensive corpora, thereby enabling the generation of text indistinguishable from human-produced content (Brown et al., 2020). However, successful memorization of human knowledge does not assure a model’s propensity to perform as per societal expectations. Recent research has exposed behavioral anomalies in these LMs (Weidinger et al., 2022), which include the generation of harmful content (Gehman et al., 2020; Bommasani et al., 2021), the reinforcement of bias (Venkit et al., 2022; Liu et al., 2022), and the dissemination of disinformation (Tamkin et al., 2021; Lin et al., 2022). This process of enhancing desirable societal behaviors and inhibiting undesirable ones is commonly referred to as “social alignment” (Gabriel, 2020; Taylor et al., 2016). Supervised Fine-Tuning (SFT) presents a straightforward method for achieving alignment by training LMs using socially aligned data (Figure 1 [a]). However, this method often necessitates substantial human annotation, which can be prohibitively expensive at scale. Additionally, such annotation frequently exhibits varying styles and inconsistent quality, particularly in the case of poorly annotated samples at the lower end of the quality spectrum (Touvron et al., 2023b; Gilardi et al., 2023). To address these practical challenges, an advanced technique known as “reward modeling” has been proposed (Leike et al., 2018; Christiano et al., 2017). This approach involves training a reward model to act as a proxy for human judgment, thereby guiding the optimization of the language model (LM), as exemplified by OpenAI’s RLHF (see Figure 1 [b]). However, it is important to acknowledge that reward-based supervision may have inherent limitations in accurately reflecting... Figure 1: Rather than incorporating an additional proxy model like RLHF, Stable Alignment establishes direct alignment between LMs and simulated social interactions. Fine-grained interaction data is collected through a rule-guided simulated society, which includes collective ratings, detailed feedback, and “step-by-step” revised responses. 
In contrast to existing methods, Stable Alignment effectively addresses instability and reward gaming concerns associated with reward-based RL optimization while reducing the need for expensive human labeling in large-scale SFT. nuanced human judgment (Wolf et al., 2023; Liu et al., 2023). Consequently, optimizing the LM through reward models could lead to issues such as reward gaming (Kenton et al., 2021; Krakovna et al., 2020; Lehman et al., 2018) or tampering (Pan et al., 2022; Steinhardt, 2022; Everitt et al., 2021). Furthermore, LMs trained in this manner have been reported to be susceptible to so-called “jailbreaking” prompting attacks (Huang et al., 2023; Deshpande et al., 2023). In contrast to these methods, humans acquire social norms and values through *social interactions*—we interact, receive feedback, and adjust our behaviors to create positive impressions. However, LMs are essentially trained in *social isolation* (Krishna et al., 2022)—they neither experience actual social activities firsthand nor receive iterative feedback for improvement. Instead, they often recite predetermined “safe answers” such as “I’m an AI language model, so I refuse to answer” without displaying the empathy or understanding typical of genuine social agents (Lee, 2021). To address these limitations, we introduce a novel alignment learning paradigm that enables LMs to benefit from simulated social interactions. We create a simulated human society, **SANDBOX**, comprising numerous LM-based social agents interacting and we record their behaviors. The recorded interaction data is distinct from traditional alignment data; it includes not only aligned and misaligned demonstrations but also collective ratings, detailed feedback, and iteratively revised responses. Compared to the reward modeling method, the use of offline simulation shifts the responsibility of providing accurate supervision onto autonomous social agents. These agents, guided by an incentive (i.e., the **SANDBOX** Rule, as shown in Figure 1 [c]), aim to improve their alignment. by refining their responses in each simulation round progressively. Leveraging this interaction data, we propose a new three-stage alignment learning framework, Stable Alignment, which effectively and efficiently teaches LMs social alignment based on these self-improved interactions. Our contributions are as follows: • We introduce **SANDBOX**, an open-source platform for simulating human society (§3.1). Through the deliberate design of Back-Scatter, which mimics how social agents gather peer feedback, our platform enables the modeling of social interactions. **SANDBOX** not only aids the development of socially aligned language models but also serves as a versatile environment for studying AI behavioral patterns. • We present a new alignment learning framework, Stable Alignment, which learns from simulated social interactions in three stages (§3.2). Our experiments show that Stable Alignment outperforms existing methods in six alignment benchmarks. Notably, it facilitates easy deployment in resource-constrained settings by removing the need for an additional reward model to provide proximal supervision during training, such as OpenAI’s RLHF. • We comprehensively assess the trained models, evaluating them against both conventional alignment benchmarks and adversarial attack scenarios. Our results reveal that the inclusion of feedback and revision significantly boosts the models’ robustness against “jailbreaking prompts” (§4.1). 
Ablation studies further confirm the importance of specialized data preparation for efficient and stable alignment learning. 2 RELATED WORK **Social Simulation.** The advancement of Language Models (LMs) has elevated their ability to exhibit human-like characteristics, sparking increased research that views LMs as authentic representations of human entities (Krishna et al., 2022; Andreas, 2022; Park et al., 2022). As a result, social simulations have emerged as a practical approach for conducting large-scale social science research, traditionally constrained by time and resources. The field has seen transformative applications with LMs. For instance, Aher et al. (2023) successfully replicated several social science findings by using GPT-3 based agents as stand-ins for human participants. In a comprehensive set of experiments, Argyle et al. (2022) demonstrated that LM-simulated humans exhibit sufficient algorithmic fidelity to reflect complex societal traits akin to those in real humans. Building on this, Park et al. (2023) introduced “Generative Agents” based on LMs to explore if these agents could develop emergent collaborative skills similar to human capabilities (Irving et al., 2018). These precedents support the viability of **SANDBOX** for simulating social interactions. In the realm of AI alignment research, Leike et al. (2017) used a grid world to simulate human society. Our work extends this by incorporating one hundred LM-based agents, thereby facilitating the training of a robust, socially aligned LM. **Alignment Training.** Ensuring that AI systems are aligned with human commonsense and preferences is crucial for their societal utility (Kenton et al., 2021). Traditional alignment methods often employ a reward model as a proxy for human judgment (Christiano et al., 2017), which interacts with the generative LM during training or inference (Jaques et al., 2020; Glaese et al., 2022; Liu et al., 2021). Crafting a robust reward function that resists adversarial attacks remains a significant challenge (Leike et al., 2018), partly due to the limitations outlined by Goodhart’s Law (Goodhart, 1984). To address these issues, recent studies have explored using human feedback (Ouyang et al., 2022; Askell et al., 2021) or AI feedback (Bai et al., 2022; Saunders et al., 2022; Lee et al., 2023) as alternatives to proximal supervision. Gudibande et al. (2023) found that training small LMs with synthetic supervision from large LMs, although the smaller LMs may not obtain equivalent factuality and reasoning capabilities, their safety level get improved significantly—this might be because alignment training focuses more on learning style than on acquiring knowledge (Zhou et al., 2023). Our approach seems to echo these recent findings, demonstrating the feasibility and effectiveness of training smaller and socially aligned LMs with proper AI supervision from larger LMs. Figure 2: We model the social interactions in SANDBOX with Back-Scatter. By considering the collective feedback from peers, social agents are able better to align their responses to social values through thorough communication. We also demonstrate how we construct three types of alignment data—Imitation, Self-Critic, and Realignment—from the simulated interactions. In total, we construct 169k data samples for our alignment training. 
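As a concrete illustration, the following hedged sketch shows how the three data types in Figure 2 might be assembled from one recorded interaction; the field names and the Instruction-Input-Output layout are illustrative, not the released data schema:

```python
# A hedged sketch (field names are illustrative) of turning one recorded
# SANDBOX interaction into the three data types shown in Figure 2.
def build_alignment_samples(record):
    """record: dict with 'question', 'draft' (initial response), 'feedback'
    (peer critique), 'revised' (aligned revision), and collective ratings."""
    samples = []
    # Imitation: learn to produce the aligned (revised) response directly.
    samples.append({
        "type": "imitation",
        "instruction": record["question"],
        "input": "",
        "output": record["revised"],
        "rating": record["rating_revised"],
    })
    # Self-Critic: learn to generate feedback on a (possibly misaligned) response.
    samples.append({
        "type": "self_critic",
        "instruction": record["question"],
        "input": record["draft"],
        "output": record["feedback"],
    })
    # Realignment: given a misaligned response, produce feedback plus a revision.
    samples.append({
        "type": "realignment",
        "instruction": record["question"],
        "input": record["draft"],
        "output": record["feedback"] + "\n" + record["revised"],
        "rating": record["rating_revised"],
    })
    return samples
```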
3 APPROACH 3.1 SIMULATING SOCIAL INTERACTIONS IN SANDBOX Our approach deviates from the conventional practice of adopting predefined rules akin to Supervised Fine Tuning (SFT) or solely depending on scalar rewards as seen in Reinforcement Learning from Human Feedback (RLHF). Instead, we take inspiration from the way humans learn to navigate social norms, a process inherently involving experiential learning and iterative refinement (Dohan et al., 2022; Zelikman et al., 2022). Therefore, we create SANDBOX, an innovative learning environment in which Language Model (LM) based social agents can interact and learn social alignment in a manner that mirrors human learning. We encourage the emergence of social norms by instigating discussions on controversial societal topics or risk-associated questions. Simultaneously, we introduce a latent rule as an incentive for agents to refine their responses (shown in Figure 1), fostering improved alignment and impression management. While our study focuses on social alignment, this rule can be adapted to suit varying requirements. Further details on the SANDBOX setup can be found in Appendix A.1. We adopt a three-tiered method, termed Back-Scatter, to simulate social interactions among agents (Figure 2). Upon receiving a societal question, the central agent generates an initial response, which is then shared with nearby agents for feedback. This feedback, comprising ratings and detailed explanations, informs the central agent’s revisions to its initial response. We equip each agent with a memory to keep track of their response history. Furthermore, we employ an embedding-based semantic search to retrieve relevant Question-Answer (QA) pairs from this history, providing agents Figure 3: Alignment analysis after running social simulation in SANDBOX with different LMs. The average ratings of alignment (y-axis) and those of engagement (x-axis) among all agents are measured as the number of interactions increases. The simulation stops once the society reaches Pareto Optimality, indicated by no further improvement in the product of alignment and engagement ratings (both measured on a 7-point Likert scale). Generally, larger models demonstrated a greater ability to achieve improved overall optimality, and aligned models (e) achieved higher optimality with fewer iterations. We annotate the initial status of each model with ★. with a context that promotes consistency with past opinions. Apart from these social agents, we also include observer agents without memory, tasked with rating responses for alignment and engagement. Further elaboration on the Back-Scatter process is available in Appendix A.1. By utilizing SANDBOX, we can simulate social dynamics across various LMs, monitor observer ratings, and analyze collected data post-hoc. Figure 3 showcases our analysis of alignment following simulations with different LMs. While larger models typically exhibit better alignment and engagement, our results surprisingly show that transitioning from a 6.8B to a 175B GPT-3 model, despite a 20-fold increase in model size, does not yield significant improvement. This suggests two key insights: 1) mere model scaling does not guarantee improved alignment, and 2) even smaller models can deliver satisfactory alignment performance. 
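Before comparing models with and without alignment training, the following schematic sketch summarizes one Back-Scatter round as described above; `llm_call`, `embed`, and the agent attributes are placeholders rather than an actual SANDBOX API:

```python
# Schematic sketch of one Back-Scatter round; all helpers are placeholders.
def back_scatter_round(question, central, neighbors, observers, llm_call, embed):
    # Central agent drafts an answer, conditioning on similar past QA pairs
    # retrieved from its memory via embedding-based semantic search.
    context = central.memory.retrieve(embed(question), k=3)
    draft = llm_call(central.persona, question, context)
    # Nearby agents return a rating and a detailed textual explanation.
    feedback = [llm_call(n.persona, question, draft, ask="rate_and_explain")
                for n in neighbors]
    # Central agent revises its answer in light of the collective feedback.
    revised = llm_call(central.persona, question, draft, feedback, ask="revise")
    central.memory.add(question, revised)
    # Memory-less observers score alignment and engagement (7-point Likert scale).
    scores = [llm_call(o.persona, question, revised, ask="score") for o in observers]
    return {"question": question, "draft": draft, "feedback": feedback,
            "revised": revised, "scores": scores}
```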
A comparison of models without (Figure 3 a, b, c, d) and with alignment training (Figure 3 e) indicates that alignment training primarily enhances a model’s ability to achieve higher alignment with fewer interactions—a crucial consideration in real-world applications, where users expect immediate, socially aligned responses without needing to guide the model through interaction. 3.2 Stable Alignment: Learning Alignment from Social Interactions Stable Alignment comprises three training stages: Imitation, Self-Critic, and Realignment (shown in Table 1). We first introduce the notation used throughout the paper and briefly outline the problem setup. We then detail the three-stage training process. Notation. Given an instruction $x_{\text{instruct}}$ and its corresponding input text $x_{\text{input}}$, the goal of social alignment training is to encourage the LM to generate socially aligned text (i.e., $y_{\text{aligned}}$) while discouraging socially misaligned text (i.e., $y_{\text{misaligned}}$). We consider such social judgments to be scalar ratings—the higher the rating $r$, the more socially aligned the response. The aim is to train an aligned LM whose policy $\pi_{\text{aligned}}$ favors aligned responses, even when faced with adversarial instructions and inputs. Ideally, the LM should have the ability to provide feedback $y_{\text{feedback}}$ as rationales. Data Preparation. Data collected in the SANDBOX simulation is unique for its interactive nature, comprising comparative pairs, collective ratings, detailed feedback, and response revisions. As depicted in Figure 2, we construct three types of alignment datasets for the corresponding three alignment learning stages. We follow the instruction-tuning format used in Alpaca (Taori et al., 2023), which formulates each sample into Instruction-Input-Output triplets. For training in Stages 1 and 3, we prepare data samples in mini-batches, where each sample shares the same instruction and input but varies in its responses. In total, we construct 169k samples from simulated interactions. Note that to avoid model collapse issues (Shumailov et al., 2023) we do not include the base LM (i.e., LLaMA 7B) in the simulation for data collection. We analyze data diversity in Appendix A.2 and discuss the benefits of using revision-form responses in our ablation and learning dynamics studies. Contrastive Preference Optimization (CPO). For Stages 1 and 3, we deploy a new alignment algorithm, CPO (i.e., Contrastive Preference Optimization), that directly optimizes the current policy $\pi$ towards human-preferred responses in each mini-batch. Essentially, CPO encourages learning from Table 1: Three learning stages of Stable Alignment with corresponding training methods and objectives. Note that the capability to generate feedback, acquired in Stage 2 (Self-Critic), is a prerequisite for Stage 3 (Realignment). We employ CPO in Stages 1 and 3, while SFT in Stage 2. 
| Training Stage | Training Method | Learning Objective | |--------------------|-----------------|-----------------------------------------------------------------------------------| | Imitation Learning | CPO | \( y_{\text{aligned}} \leftarrow \arg \max_y \text{LM}(\hat{y}|x_{\text{instruct}}) \) | | Self-Critic | SFT | \( y_{\text{feedback}} \leftarrow \arg \max_y \text{LM}(\hat{y}|x_{\text{instruct}}, x_{\text{aligned/misaligned}}) \) | | Realignment | CPO | \( y_{\text{feedback}} + y_{\text{aligned}} \leftarrow \arg \max_y \text{LM}(\hat{y}|x_{\text{instruct}}, x_{\text{misaligned}}) \) | high-rated responses and unlearning lower-rated ones. This is achieved by minimizing a contrastive objective akin to triplet loss (Schroff et al., 2015): \[ J_{\text{Diff}} = \sum_{i \neq i_{\text{best}}}^{\text{Batch}} \max \left\{ J_{\text{SFT}}^{i_{\text{best}}} - J_{\text{SFT}}^{i} + (r_{\text{best}} - r_i) \cdot M, 0 \right\}, \] where \( J_{\text{SFT}}^{i_{\text{best}}} \) is the SFT loss for the response with the highest rating \( r_{\text{best}} \), and \( J_{\text{SFT}}^{i} \) is the SFT loss for the other responses in the same mini-batch. The contrasting margin \( \Delta = (r_{\text{best}} - r_i) \cdot M \) is influenced by the rating difference. The margin between \( J_{\text{SFT}}^{i_{\text{best}}} \) and \( J_{\text{SFT}}^{i} \) increases in proportion to the distance from the highest rating, implying that the model should work harder to unlearn lower-rated responses while learning from the highest-rated ones. The overall alignment loss \( J_{\text{CPO}} \) can be expressed as: \[ J_{\text{CPO}}(y|x_{\text{instruct}}, x_{\text{input}})_{(x,y)\sim\text{Batch}} = J_{\text{SFT}}^{i_{\text{best}}} + \lambda \cdot J_{\text{Diff}}, \] which combines the SFT loss \( J_{\text{SFT}}^{i_{\text{best}}} \) and the contrastive loss \( J_{\text{Diff}} \), discounted by a factor of \( \lambda \). As the model progresses in alignment, the contrastive loss diminishes, allowing CPO to converge at least as effectively as when solely optimizing with SFT (e.g., Best-of-\( N \) sampling (Gao et al., 2022; Touvron et al., 2023b)). Appendix A.3 provides the pseudocode for implementing CPO. Why is Stable Alignment More Scalable? As mentioned in the introduction (\$1), Stable Alignment offers greater scalability and easier deployment in resource-constrained environments compared to RLHF (Ouyang et al., 2022; Ziegler et al., 2019). This advantage arises because 1) Stable Alignment does not require an online reward model in memory during training to supervise the current generative LM, and 2) the simulation in SANDBOX is executed offline using parallel processes, thereby decoupling the sequential stages of “generation-supervision-optimization” found in the RLHF pipeline. In resource-constrained settings, RLHF necessitates at least two models (the reward model and the generative LM), whereas Stable Alignment can run the simulation offline and train the model directly on the socially-aligned/misaligned data collected asynchronously from the environment. 4 EXPERIMENTS We constructed three distinct virtual societies, each populated by 100 social agents arranged in a 10x10 gridworld. These agents interacted following the Back-Scatter protocol. The societies utilized three different language models (LMs) to simulate human interaction: text-davinci-002 (175B), text-davinci-003 (175B), and GPT-4 (size unknown). 
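For concreteness, the following is a minimal PyTorch-style sketch of the CPO objective in equations (1)–(2); the variable names, the margin scale $M$, and the example values are illustrative rather than the authors' released implementation:

```python
# Minimal PyTorch-style sketch of the CPO objective (equations (1)-(2)).
import torch

def cpo_loss(sft_losses, ratings, margin_scale=1.0, lam=0.2):
    """sft_losses: (B,) per-response SFT losses for one instruction's mini-batch.
    ratings: (B,) collective ratings of the same responses."""
    best = torch.argmax(ratings)
    l_best = sft_losses[best]
    # The margin grows with the rating gap: lower-rated responses are "unlearned" harder.
    delta = (ratings[best] - ratings) * margin_scale
    diff = torch.clamp(l_best - sft_losses + delta, min=0.0)
    mask = torch.ones_like(diff)
    mask[best] = 0.0                      # exclude the best response from J_Diff
    j_diff = (diff * mask).sum()
    return l_best + lam * j_diff          # J_CPO = J_SFT^best + lambda * J_Diff

# Example mini-batch of four responses (one high-rated, three low-rated),
# matching the lambda = 0.2 and batch composition reported in Section 4.2.
losses = torch.tensor([0.9, 1.4, 1.6, 2.0], requires_grad=True)
ratings = torch.tensor([7.0, 4.0, 3.0, 2.0])
print(cpo_loss(losses, ratings))
```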
For these experiments, we used ChatGPT (gpt-3.5-turbo) as the observer, as outlined in \$3.1, without memory functionality. Our pool of controversial societal questions comprised 9,662 questions sourced from the Anthropic RLHF dataset. We consider the following benchmarks to assess alignment performance: **Anthropic HH** (i.e., HH) is a small-scale test set (\( N=200 \)) sampled from the Anthropic RLHF dataset, provided by the Google BIG-Bench project. We have ensured that the questions sourced --- 1 See Step 3 in Figure 2 of Ouyang et al. (2022), which shows that RLHF consists of three sequential stages. 2 Anthropic HH dataset: https://github.com/anthropics/hh-rlhf. 3 The 200-sample BIG-Bench version of Anthropic RLHF data for evaluation: https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/hhh_alignment. for \textsc{Sandbox} simulation do not appear in this test set. To evaluate the robustness of trained models under “jailbreaking prompting” attacks, we prepared an \textbf{HH-Adversarial} (i.e., HH-A) dataset that appends the misaligned response to the end of each instruction. \textbf{Moral Stories} examines whether LMs can generate moral responses under diverse social situations (Emelin et al., 2021). We use each data sample’s “situation” as $x_{\text{instruct}}$, treating “immoral actions” as $y_{\text{misaligned}}$ and “moral actions” as $y_{\text{aligned}}$. \textbf{MIC} investigates whether chatbots can produce utterances aligned with a set of “Rules of Thumb (RoT)” of morality (Ziems et al., 2022). Each sample is labeled with its alignment level (e.g., “aligned”, “unaligned”, “neither”), RoT violation severity (from 1 to 5), RoT agreement, etc. We take the dialogue question as $x_{\text{instruct}}$, unaligned answers (with RoT violation severity 4-horrible or 5-worse) as $y_{\text{misaligned}}$, and aligned answers as $y_{\text{aligned}}$. \textbf{ETHICS-Deontology} assesses the performance of LMs on five human values alignment tasks (Hendrycks et al., 2021). We selected the deontology split due to its contextual nature. We take the requests as $x_{\text{instruct}}$, deontology-unaligned responses as $y_{\text{misaligned}}$, and deontology-aligned responses as $y_{\text{aligned}}$. \textbf{TruthfulQA} evaluates the ability of LMs to identify truth (Lin et al., 2022). We use the question as $x_{\text{instruct}}$, misinformation as $y_{\text{misaligned}}$, and the truth as $y_{\text{aligned}}$. We adopted evaluation metrics largely in line with previous works: human-rated \textbf{Alignment} scores (from 1-\textit{extremely misaligned} to 10-\textit{extremely aligned}) for HH and HH-A tasks (Ouyang et al., 2022), accuracy in choosing $y_{\text{aligned}}$ (i.e., \textbf{ACC}) for Moral Stories, MIC, and ETHICS (Hendrycks et al., 2021), and Multiple-Choice (i.e., \textbf{MC1}) for TruthfulQA (Lin et al., 2022). We calculated ACC using mutual information between the question and candidate responses, as recommended by (Askell et al., 2021) to mitigate surface form competition among the options (Holtzman et al., 2021). We trained our model on the released Stanford Alpaca checkpoint\footnote{Stanford Alpaca: \url{https://github.com/tatsu-lab/stanford_alpaca}.} with 8 $\times$ A100 80G GPUs, using both SFT and Stable Alignment methodologies. The total training time was approximately 10 hours across two epochs. The initial learning rates for both SFT and Stable Alignment training were set at 2.0e-5 and used cosine annealing with a warmup ratio of 0.03. 
As detailed in Section 4.2, we selected a $\lambda$ value of 0.2 and a mini-batch size of four, incorporating three low-rating responses in each mini-batch. We pre-cache the data for Stages 1, 2, and 3 training in order deterministically. ### 4.1 Main Results on Alignment Benchmarks In addition to Stable Alignment, we consider seven other baseline methods that can be trained with our interaction data: (1) \textbf{LLaMA} (Touvron et al., 2023a), a publicly available foundation model released by Meta; (2) \textbf{Alpaca} (Taori et al., 2023), an instruction fine-tuned LLaMA based on 52k GPT-3 generated instruction-following data; (3) \textbf{Alpaca + SFT}, Alpaca fine-tuned solely with $y_{\text{aligned}}$ interaction data from the \textsc{Sandbox} simulation; (4) \textbf{TRLX} (von Werra et al., 2023), an open-source community implementation of OpenAI’s RLHF; (5) \textbf{Chain-of-Hindsight} (Liu et al., 2023), fine-tuned with verbal rewards; (6) \textbf{DPO} (Rafailov et al., 2023), which learns alignment directly from comparisons; and (7) \textbf{RRHF} (Yuan et al., 2023), fine-tuned with ranking loss. We also break down the three training stages of Stable Alignment to create several baselines for ablation studies (see the lower part of Table 2. IL: Imitation Learning; SC: Self-Critic; RA: Realignment). **Human Evaluation.** We first conducted human evaluations to assess whether humans prefer the output generated by LMs trained with Stable Alignment. Figure 4 presents the results of our human preference study, conducted according to the Elo scoring protocol for chatbot evaluation (Chiang et al., 2023; Askell et al., 2021). We opted for human annotators over GPT-4 for the assessments to mitigate potential bias. In each round of evaluation, annotators are presented with two responses to a single instruction (+input) generated by the two candidate methods. The annotators are instructed to label which response is better aligned or to indicate if neither response is significantly superior (i.e., a tie). Guidance words for annotators are provided in Appendix A.4. We collected 1000 human annotations for each pair evaluation on the HHH and HHH-A test sets (each containing $N = 200$ samples) via Amazon MTurk. Figure 4: Evaluations of human preferences on (a) Anthropic HHH (b) Anthropic HHH-Adversarial test sets. We compare Stable Alignment against six baseline methods, using ChatGPT as a reference. Table 2: Benchmark results of Stable Alignment and seven baseline methods. In general, Stable Alignment achieves the best overall performance, while showing particularly strong robustness even under adversarial attacks (HH-A). We also include the performance of ChatGPT as a reference, since a direct comparison with other methods is not feasible or unfair due to the unknown details of data and training. For all other methods, we use LLaMA 7B as the base model and the interaction data collected from SANDBOX as the available training data. 
| Models | HH | HH-A | Moral Stories | MIC | ETHICS | TruthfulQA | |-----------------|------|-------|---------------|-------|--------|------------| | | | | ACC | ACC | ACC | MC1 | | LLaMA | 4.34 | 3.28 | 0.46 | 0.38 | 0.41 | 0.28 | | Alpaca | 5.49 | 2.52 | 0.40 | 0.42 | 0.39 | 0.30 | | Alpaca + SFT | 6.31 | 3.49 | 0.47 | 0.54 | 0.51 | 0.34 | | TRLX | 5.69 | 5.22 | 0.52 | 0.57 | 0.53 | 0.31 | | Chain-of-Hindsight | 6.13 | 5.72 | 0.54 | 0.54 | 0.56 | 0.29 | | DPO | 6.54 | 5.83 | 0.63 | 0.61 | 0.57 | 0.36 | | RRHF | 6.40 | 6.24 | 0.74 | 0.67 | 0.63 | 0.38 | | **Ours:** Stable Alignment | | | | | | | | w/ IL + SC + RA | 7.35 | 8.23 | 0.78 | 0.73 | 0.65 | 0.53 | | w/ IL + SC | 6.56 | 6.59 | 0.72 | 0.68 | 0.64 | 0.47 | | w/ IL | 6.43 | 6.27 | 0.70 | 0.66 | 0.62 | 0.40 | | Reference: ChatGPT | 7.72 | 8.43 | 0.84 | 0.79 | 0.76 | 0.60 | Based on the ratio of wins to losses, Stable Alignment generally outperforms existing methods—this advantage is more pronounced in adversarial settings. Except in comparisons with ChatGPT, Stable Alignment achieves an above 50% win rate in all matchups. In both the HHH and HHH-A datasets, Stable Alignment is considered at least as good as ChatGPT 66% and 69% of the time, respectively. Additional human evaluations are presented in Appendix A.5, where we further compare Stable Alignment with other methods on five fine-grained alignment perspectives (i.e., honesty, helpfulness, harmlessness, unbiasedness, engagement) using one-way ANOVA analysis. **Benchmarking Results.** Table 2 offers a comprehensive comparison between Stable Alignment and seven alternative alignment methods across six diverse alignment tasks. The results indicate that Stable Alignment outperforms other methods in both in-domain tasks (i.e., HH and HH-A, since the questions used for simulation are sourced from the HH training set) and out-of-domain tasks (i.e., the remaining tasks, for which the training data collected from simulation does not cover the topics). Notably, training solely with Imitation Learning (IL) yields strong results; the gains from the second and third training stages are particularly pronounced in adversarial tasks (e.g., HH-A). For other baselines, we find 1) Only training with instruction-following data (e.g., Alpaca) can actually lead to degraded performance in defending against adversarial attacks, probably because the LM learns to blindly complete any instruction even though the prompt might trigger unaligned generation. For example, the performance of Alpaca in HH-A (2.52) is lower than LLaMA (3.28). We also find methods that have the potential to directly learn from the comparison (e.g., RRHF and DPO) or revision (e.g., Stable Alignment) have better performance than reward model (RM) based methods in general. This might be because of the misspecification problem of reward modeling, Figure 5: The figure illustrates (a) the stability of Stable Alignment (SA) training relative to SFT and RRHF; (b) the efficiency of alignment learning in comparison with TRLX, as evaluated by the same reward model. We also explore hyperparameter selection with respect to (c) the intensity of penalty $\lambda$; (d) the number of low-rating responses in each mini-batch. Alignment ratings adhere to the Vicuna evaluation pipeline. Perplexity is assessed using a 13B LLaMA. or the stable training with RM is challenging. In general, Stable Alignment aims to propose a new data-centric alignment method that focuses more on the intrinsic features hidden in the data from simulated social interaction. 
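As a side note on the evaluation protocol, the snippet below is a hedged sketch of an Elo-style update from pairwise preference annotations; the K-factor and the initial ratings are conventional defaults, not values reported in the paper:

```python
# Hedged sketch of Elo-style scoring from pairwise human preference labels.
def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, outcome, k=32):
    """outcome: 1.0 if model A's response is preferred, 0.0 if B's, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (outcome - e_a), r_b + k * ((1.0 - outcome) - (1.0 - e_a))

ratings = {"stable_alignment": 1000.0, "rrhf": 1000.0}
# One annotation where Stable Alignment's response is judged better aligned.
ratings["stable_alignment"], ratings["rrhf"] = elo_update(
    ratings["stable_alignment"], ratings["rrhf"], outcome=1.0)
```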
**Ablation Studies.** We conducted a series of ablation studies to assess the contributions of the three training stages in Stable Alignment. These results are presented in the lower part of Table 2. Generally, the omission of the Realignment stage significantly impacts performance in adversarial settings, decreasing the score from 8.23 to 6.59 for Stable Alignment in HH-A. The inclusion of Self-Critic training consistently enhances the outcomes of the Imitation Learning stage. This improvement aligns with recent studies highlighting the advantages of learning from critiques (Saunders et al., 2022; Welleck et al., 2022) and iterative refinement processes (Ye et al., 2023; Huang et al., 2022; Yu et al., 2023; Scheurer et al., 2023). ### 4.2 Stability, Efficiency, and Hyperparameter Optimization of Training Figure 5 (a) analyzes the stability of Stable Alignment. Notably, Stable Alignment demonstrates stability comparable to that of SFT, while RRHF displays significantly greater noise. This variance can be attributed to the difficulty of accurately ranking responses with similar ratings, thereby introducing an unwarranted bias in the computation of ranking loss. We further compare the efficiency of Stable Alignment in alignment learning with that of the reward modeling method TRLX. Alignment is periodically assessed on the validation set using the same reward model employed by TRLX. Figure 5 (b) shows that Stable Alignment achieves superior reward gains within fewer training steps, even without direct supervision from a reward model. Compared with vanilla distillation settings where all agents are memory-less, the inclusion of multi-agent interaction data not only accelerates the alignment learning process but also improves the general alignment quality. Figures 5 (c) and (d) discuss the optimal hyperparameter settings for Stable Alignment. Based on our observations, we recommend a discount factor ($\lambda$) of 0.2 for penalties associated with low-rating responses and selecting $N = 3$ as the number of negative samples in each mini-batch. We found that excessively large values of $\lambda$ and $N$ not only led to lower alignment ratings but also increased the model’s perplexity. ### 4.3 Limitation While our proposed model, Stable Alignment, offers a novel framework for enhancing social alignment in language models, it is important to acknowledge its limitations. Firstly, Stable Alignment is currently confined to text-based social interactions, which may not fully capture the complexity of human communication. Real-world interactions often include non-verbal cues, such as body language, which our model does not currently interpret. Secondly, our model’s implementation, utilizing SANDBOX, assumes a static view of human societal norms, overlooking the dynamic and evolving nature of societal values (Pettigrew, 2019; Paul, 2014). As societal norms and values evolve, our model could benefit from accommodating these changes. Additionally, our empirical analysis is conducted primarily in English, which limits the generalizability of our findings. Although Stable Alignment shows promise for extension to other languages through the use of multilingual LMs, further research is required to validate this claim. 5 CONCLUSION In this paper, we introduced a novel approach for training LMs to achieve social alignment through simulated social interactions. Our proposed model, Stable Alignment, leverages unique interaction data from this simulation to outperform existing methods significantly. 
We posit that the concept of learning alignment from simulated human behavior could be readily extended to other domains or modalities. Moreover, the use of simulation in our approach effectively mitigates potential privacy concerns associated with data collection in certain sectors. Our work serves as a step toward more socially aligned AI models and emphasizes the need for continued research in this crucial area. ETHICS AND REPRODUCIBILITY STATEMENT The primary goal of Stable Alignment is to establish a scalable and easily deployable framework for alignment, which leverages learning from simulated social interactions. However, it is important to recognize that simulations utilizing publicly available LMs might predominantly reflect mainstream value judgments. In contrast, accurately representing the judgments of certain underrepresented social groups may necessitate simulations with LMs specifically trained on data from these communities (Jiang et al., 2022; Rae et al., 2021). Another critical ethical consideration is the temporal relevance of the social values derived from SANDBOX simulations: they may not accurately mirror current societal norms and practices. A potential remedy could be to equip the language model agents with access to real-time information sources on the open web, such as search engines. Additionally, our experiments and analyses are conducted in English; therefore, we do not assert that our findings are universally applicable across all languages. Nevertheless, the Stable Alignment framework could potentially be adapted to other languages with appropriate modifications. In the interest of reproducibility, we have conducted evaluations of Stable Alignment and baseline methods using publicly available datasets and codebases. We compare our results with those from published papers and public leaderboards. We would like to highlight that the specific data samples gathered from the simulation are contingent upon the precise English prompts used to initiate the agents’ generations (refer to Appendix A.4). To facilitate peer review and subsequent research, we have included all necessary materials for reproducing Stable Alignment – including data, code, and launching scripts – as supplementary materials accompanying this submission. REFERENCES Gati Aher, Rosa I. Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans and replicate human subject studies, 2023. Jacob Andreas. Language models as agent models. ArXiv preprint, abs/2212.01681, 2022. URL https://arxiv.org/abs/2212.01681. Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua Gubler, Christopher Rytting, and David Wingate. Out of one, many: Using language models to simulate human samples. ArXiv preprint, abs/2209.06899, 2022. URL https://arxiv.org/abs/2209.06899. Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. ArXiv preprint, abs/2112.00861, 2021. URL https://arxiv.org/abs/2112.00861. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. ArXiv preprint, abs/2212.08073, 2022. URL https://arxiv.org/abs/2212.08073.
qTlcbLSm4p
Besides, from the equations and the algorithm, I cannot find an explicit connection between the low-resolution diffusion and the high-resolution diffusion; such an explicit statement may be necessary for readers to understand.
Relay Diffusion: Unifying Diffusion Process Across Resolutions for Image Synthesis Jiayan Teng∗1, Wendi Zheng∗1, Ming Ding∗12†, Wenyi Hong1, Jianqiao Wangni2, Zhuoyi Yang1, Jie Tang1† ∗equal contribution 1Tsinghua University 2Zhipu AI †corresponding authors {tengjy20@mails,zhengwd23@mails,jietang@mail}.tsinghua.edu.cn mingding.thu@gmail.com Abstract Diffusion models achieved great success in image synthesis, but still face challenges in high-resolution generation. Through the lens of discrete cosine transformation, we find the main reason is that the same noise level on a higher resolution results in a higher Signal-to-Noise Ratio in the frequency domain. In this work, we present Relay Diffusion Model (RDM), which transfers a low-resolution image or noise into an equivalent high-resolution one for diffusion model via blurring diffusion and block noise. Therefore, the diffusion process can continue seamlessly in any new resolution or model without restarting from pure noise or low-resolution conditioning. RDM achieves state-of-the-art FID on CelebA-HQ and sFID on ImageNet 256×256, surpassing previous works such as ADM, LDM and DiT by a large margin. All the codes and checkpoints are open-sourced at https://github.com/THUDM/RelayDiffusion. Figure 1: (left): Generated Samples by RDM on ImageNet 256×256 and CelebA-HQ 256×256. (right): Benchmarking recent diffusion models on class-conditional ImageNet 256×256 generation without any guidance. RDM can achieve a FID of 1.99 (and a class-balanced FID of 1.87) if with classifier-free guidance. 1 Introduction Diffusion models (Ho et al., 2020; Rombach et al., 2022) succeeded GANs (Goodfellow et al., 2020) and autoregressive models (Ramesh et al., 2021; Ding et al., 2021) to become the most prevalent generative models in recent years. However, challenges still exist in the training of diffusion models for high-resolution images. More specifically, there are two main obstacles: **Training Efficiency.** Although equipped with UNet to balance the memory and computation cost across different resolutions, diffusion models still require a large amount of resources to train on high-resolution images. One popular solution is to train the diffusion model on a latent (usually $4 \times$ compression rate in resolution) space and map the result back as pixels (Rombach et al., 2022), which is fast but inevitably suffers from some low-level artifacts. The cascaded method (Ho et al., 2022; Saharia et al., 2022) trains a series of varying-size super-resolution diffusion models, which is effective but needs a complete sampling for each stage separately. **Noise Schedule.** Diffusion models need a noise schedule to control the amount of the isotropic Gaussian noise at each step. The setting of the noise schedule shows great influence over the performance, and most current models follow the linear (Ho et al., 2020) or cosine (Nichol & Dhariwal, 2021) schedule. However, an ideal noise schedule should be resolution-dependent (See Figure 2 or Chen (2023)), resulting in suboptimal performance to train high-resolution models directly with common schedules designed for resolutions of $32 \times 32$ or $64 \times 64$ pixels. These obstacles hindered previous researchers from establishing an effective end-to-end diffusion model for high-resolution image generation. Dhariwal & Nichol (2021) attempted to directly train a $256 \times 256$ ADM but found that it performs much worse than the cascaded pipeline. Chen (2023) and Hoogeboom et al. 
(2023) carefully adjusted the hyperparameters of the noise schedule and architecture for high-resolution cases, but the quality is still not comparable to the state-of-the-art cascaded method (Saharia et al., 2022). In our opinion, the cascaded method contributes in both training efficiency and noise schedule: (1) It provides flexibility to adjust the model size and architecture for each stage to find the most efficient combination. (2) The existence of low-resolution condition makes the early sampling steps easy, so that the common noise schedules (optimized for low-resolution models) can be applied as a feasible baseline to the super-resolution models. Moreover, (3) high-resolution images are more difficult to obtain on the Internet than low-resolution images. The cascaded method leverages the knowledge from low-resolution samples, meanwhile keeps the capability to generate high-resolution images. Therefore, it might not be a promising direction to completely replace the cascaded method with an end-to-end one at the current stage. The disadvantages of the cascaded method are also obvious: (1) Although the low-resolution part is determined, a complete diffusion model starting from pure noise is still trained and sampled for super-resolution, which is time-consuming. (2) The distribution mismatch between ground-truth and the generated low-resolution condition will hurt the performance, so that tricks like conditioning augmentation (Ho et al., 2022) become vitally important to mitigate the gap. Besides, the noise schedule of high-resolution stages are still not well studied. **Present Work.** Here we present the **Relay Diffusion Model** (RDM), a new cascaded framework to improve the shortcomings of the previous cascaded methods. In each stage, the model starts diffusion from the result of the last stage, instead of conditioning on it and starting from pure noise. Our method is named as the cascaded models work together like a “relay race”. The contributions of this paper can be summarized as follows: - We analyze the reasons of the difficulty of noise scheduling in high-resolution diffusion models in frequency domain. Previous works like LDM (Rombach et al., 2022) assume all image signals from the same distribution when analyzing the SNR, neglecting the difference in frequency domain between low-resolution and high-resolution images. Our analysis successfully accounts for phenomenon that the same noise level shows different perceptual effects on different resolutions, and introduce the block noise to bridge the gap. - We propose RDM to disentangle the diffusion process and the underlying neural networks in the cascaded pipeline. RDM gets rid of the low-resolution conditioning and its distribution mismatch problem. Since RDM starts diffusion from the low-resolution result instead of pure noise, the training and sampling steps can also be reduced. - We evaluate the effectiveness of RDM on unconditional CelebA-HQ $256 \times 256$ and conditional ImageNet $256 \times 256$ datasets. RDM achieves state-of-the-art FID on CelebA-HQ and sFID on ImageNet. 2 PRELIMINARY 2.1 DIFFUSION MODELS To model the data distribution \( p_{\text{data}}(x_0) \), denoising diffusion probabilistic models (DDPMs, Ho et al., 2020) define the generation process as a Markov chain of learned Gaussian transitions. 
DDPMs first assume a forward diffusion process, corrupting real data \( x_0 \) by progressively adding Gaussian noise from time steps 0 to \( T \), whose variance \( \{\beta_t\} \) is called the noise schedule: \[ q(x_t | x_{t-1}) = N(x_t; \sqrt{1 - \beta_t} x_{t-1}, \beta_t I). \] The reverse diffusion process is learned by a time-dependent neural network to predict denoised results at each time step, by optimizing the variational lower bound (ELBO). Many other formulations for diffusion models include stochastic differential equations (SDE, Song et al., 2020b), denoising diffusion implicit models (DDIM, Song et al., 2020a), etc. Karras et al. (2022) summarizes these different formulations into the EDM framework. In this paper, we generally follow the EDM formulation and implementation. The training objective of EDM is defined as \( L_2 \) error terms: \[ E_{x \sim p_{\text{data}}, \sigma \sim p(\sigma)} E_{\epsilon \sim N(0, I)} \| D(x + \sigma \epsilon, \sigma) - x \|^2, \] where \( p(\sigma) \) represents the distribution of a continuous noise schedule. \( D(x + \sigma \epsilon, \sigma) \) represents the denoiser function depending on the noise scale. We also follow the EDM precondition for \( D(x + \sigma \epsilon, \sigma) \) with \( \sigma \)-dependent skip connection (Karras et al., 2022). Cascaded diffusion model (CDM, Ho et al., 2022) is proposed for high-resolution generation. CDM divides the generation into multiple stages, where the first stage generates low-resolution images and the following stages perform super-resolution conditioning on the outputs of the previous stage. f-DM (Gu et al., 2022) unifies multiple resolutions of image generation with a linear interpolation in a single model. Cascaded models are extensively adopted in recent works of text-to-image generation, e.g. Imagen (Saharia et al., 2022), DALL-E-2 (Ramesh et al., 2022) and eDiff-I (Balaji et al., 2022). 2.2 BLURRING DIFFUSION The Inverse Heat Dissipation Model (IHDM) (Rissanen et al., 2022) generates images by reversing the heat dissipation process. The heat dissipation is a thermodynamic process describing how the temperature \( u(x, y, t) \) at location \((x, y)\) changes in a (2D) space with respect to the time \( t \). The dynamics can be denoted by a PDE \( \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \). Blurring diffusion (Hoogeboom & Salimans, 2022) is further derived by augmenting the Gaussian noise with heat dissipation for image corruption. Since simulating the heat equation up to time \( t \) is equivalent to a convolution with a Gaussian kernel with variance \( \sigma^2 = 2t \) in an infinite plane (Bredies et al., 2018), the intermediate states \( x_t \) become blurry, instead of noisy in the standard diffusion. If Neumann boundary conditions are assumed, blurring diffusion in discrete 2D pixel space can be transformed to the frequency space by Discrete Cosine Transformation (DCT) conveniently as: \[ q(u_t | u_0) = N(u_t | D_t u_0, \sigma_t^2 I), \] where \( u_t = \text{DCT}(x_t) \), and \( D_t = e^{\Lambda t} \) is a diagonal matrix with \( \Lambda_{i \times W+j} = -\pi^2 (\frac{i^2}{W^2} + \frac{j^2}{H^2}) \) for coordinate \((i, j)\). Here Gaussian noise with variance \( \sigma_t^2 \) is mixed into the blurring diffusion process to transform the deterministic dissipation process to a stochastic one for diverse generation. 
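Before moving to the method, here is a minimal numpy/scipy sketch of the blurring forward step in equation (3): blur in the DCT domain with $D_t = e^{\Lambda t}$, then add Gaussian noise. The image shape, the per-axis frequency normalization, and the choice of $\sigma_t$ are illustrative assumptions:

```python
# Minimal sketch of the blurring forward process q(u_t | u_0) in equation (3).
import numpy as np
from scipy.fft import dctn, idctn

def blur_dissipate(x0, t, sigma_t, rng=np.random.default_rng(0)):
    """x0: (H, W) image; t: dissipation time; sigma_t: Gaussian noise std."""
    H, W = x0.shape
    ki = np.arange(H)[:, None]
    kj = np.arange(W)[None, :]
    lam = -np.pi**2 * (ki**2 / H**2 + kj**2 / W**2)    # Lambda, one entry per frequency
    u0 = dctn(x0, norm="ortho")                        # u_0 = DCT(x_0)
    u_t = np.exp(lam * t) * u0                         # D_t u_0 (diagonal in DCT basis)
    x_t = idctn(u_t, norm="ortho")                     # back to pixel space
    return x_t + sigma_t * rng.normal(size=x0.shape)   # mix in Gaussian noise

x0 = np.random.rand(64, 64)
x_t = blur_dissipate(x0, t=50.0, sigma_t=0.1)
```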
3 METHOD 3.1 MOTIVATION The noise schedule is vitally important to the diffusion models and is resolution-dependent. A certain noise level appropriately corrupting the \( 64 \times 64 \) images, could fail to corrupt the \( 256 \times 256 \) (or a higher resolution) images, which is shown in the first row of Figure 2(a)(b). Chen (2023) and Hoogeboom et al. (2023) attributed this to the lack of schedule-tuning, but we found that an analysis from the perspective of frequency spectrum can help us better understand this phenomenon. Figure 2: Illustration of spatial and frequency results after adding independent Gaussian and block noise. (a)(b) At the resolution of $64 \times 64$ and $256 \times 256$, the same noise level results in different perceptual effects, and in the frequency plot, the SNR curve shifts upward. (c) The independent Gaussian noise at the resolution $64 \times 64$ and block noise (kernel size = 4) at the resolution $256 \times 256$ produce similar results in both spatial domain and frequency domain. The noise is $\mathcal{N}(0, 0.3^2)$ for (a). These SNR curves are universally applicable to most natural images. Frequency spectrum analysis of the diffusion process. The natural images with different resolutions can be viewed as the result of visual signals sampled at varying frequencies. To compare the frequency features of a $64 \times 64$ image and a $256 \times 256$ image, we can upsample the $64 \times 64$ one to $256 \times 256$, perform DCT and compare them in the 256-point DCT spectrum. The second row of Figure 2(a) shows the signal noise ratio (SNR) at different frequencies and diffusion steps. In Figure 2(b), we clearly find that the same noise level on a higher resolution results in a higher SNR in (the low-frequency part of) the frequency domain. Detailed frequency spectrum analysis are included in Appendix D. At a certain diffusion step, a higher SNR means that during training the neural network presumes the input image more accurate, but the early steps may not be able to generate such accurate images after the increase in SNR. This training-inference mismatch will accumulate over step by step during sampling, leading to the degradation of performance. Block noise as the equivalence at high resolution. After the upsampling from $64 \times 64$ to $256 \times 256$, the independent Gaussian noise on $64 \times 64$ becomes noise on $4 \times 4$ grids, thus greatly changes its frequency representation. To find a variant of the $s \times s$-grid noise without deterministic boundaries, we propose Block noise, where the Gaussian noises are correlated for nearby positions. More specifically, the covariance between noise $\epsilon_{x_0,y_0}$ and $\epsilon_{x_1,y_1}$ is defined as $$\text{Cov}(\epsilon_{x_0,y_0}, \epsilon_{x_1,y_1}) = \frac{\sigma^2}{s^2} \max(0, s - \text{dis}(x_0, x_1)) \max(0, s - \text{dis}(y_0, y_1)), \quad (4)$$ where $\sigma^2$ is the noise variance, and $s$ is a hyperparameter kernel size. The $\text{dis}(\cdot, \cdot)$ function here is the Manhattan distance. For simplicity, we “connect” the top and bottom edges and the left and right edges of the image, resulting in $$\text{dis}(x_0, x_1) = \min(|x_0 - x_1|, x_{\text{max}} - |x_0 - x_1|). \quad (5)$$ The block noise with kernel size $s$ can be generated by averaging $s \times s$ independent Gaussian noise. 
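Since, as noted above, block noise can be generated from shifted copies of independent Gaussian noise, the following hedged numpy sketch constructs it with wrap-around boundaries as in equation (5); the $1/s$ scaling is chosen so that the per-pixel variance stays $\sigma^2$, consistent with equation (4):

```python
# Hedged sketch of block noise: sum s x s wrap-around shifts of i.i.d. Gaussian
# noise and scale by 1/s, which reproduces the covariance in equations (4)-(5).
import numpy as np

def block_noise(height, width, s=4, sigma=1.0, rng=np.random.default_rng(0)):
    eps = rng.normal(scale=sigma, size=(height, width))
    out = np.zeros_like(eps)
    for i in range(s):
        for j in range(s):
            out += np.roll(eps, shift=(i, j), axis=(0, 1))  # wrap-around shift
    return out / s  # 1/s keeps Var = sigma^2; nearby pixels become correlated

noise_hi = block_noise(256, 256, s=4, sigma=0.3)  # ~ spectrum of 64x64 i.i.d. noise
```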
Given an independent Gaussian noise matrix $\epsilon$, the block noise construction function $\text{Block}[s](\cdot)$ is defined as $$\text{Block}[s](\epsilon)_{x,y} = \frac{1}{s^2} \sum_{i=0}^{s-1} \sum_{j=0}^{s-1} \epsilon_{x-i,y-j}, \quad (6)$$ where $\text{Block}[s](\epsilon)_{x,y}$ is the block noise at position $(x, y)$, and $\epsilon_{-x} = \epsilon_{x_{\text{max}} - x}$. Figure 2(c) shows that block noise with kernel size $s = 4$ on $256 \times 256$ images has a frequency spectrum similar to that of independent Gaussian noise on $64 \times 64$ images. The analysis above seems to indicate that we can design an end-to-end model for high-resolution images by introducing block noise in the early diffusion steps, yet cascaded models already achieve great success. Therefore, a revisit of cascaded models is necessary.

**Why do cascaded models alleviate this issue?** Experiments in previous works (Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021) have already shown that cascaded models perform better than end-to-end models under a fair setting. These models usually use the same noise schedule in all stages, so why are cascaded models not affected by the increase of SNR? The reason is that in the super-resolution stages, the low-resolution condition greatly eases the difficulty of the early steps, so that although the higher SNR requires a more accurate input, the required accuracy is within the capability of the model. A natural idea is that, since the low-frequency information in the high-resolution stage has already been determined by the low-resolution condition, we can continue generating directly from the upsampled result to reduce both the training and sampling steps. However, the generation of low-resolution images is not perfect, and thus resolving the distribution mismatch between ground-truth and generated low-resolution images is a prerequisite for "continuing" the diffusion process.

### 3.2 Relay Diffusion

![Figure 3: Pipeline of Relay Diffusion Models (RDM).](image)

We propose the relay diffusion model (RDM), a cascaded pipeline connecting the stages with block noise and (patch-level) blurring diffusion. Different from CDM, RDM considers the equivalence of the low-resolution generated images when upscaled to high resolution. Suppose that the generated $64 \times 64$ low-resolution image $x_L = x^L + \epsilon_L$ can be decomposed into a sample $x^L$ from the real distribution and a remaining noise $\epsilon_L \sim \mathcal{N}(0, \beta_0^2 I)$. As mentioned in Section 3.1, the $256 \times 256$ equivalent of $\epsilon_L$ is Block[4] noise with variance $\beta_0^2$, denoted by $\epsilon_H$. After (nearest) upsampling, $x^L$ becomes $x^H$, where each $4 \times 4$ grid shares the same pixel value. We define it as the starting state of a patch-wise blurring diffusion. Unlike blurring diffusion models (Rissanen et al., 2022; Hoogeboom & Salimans, 2022) that perform the heat dissipation on the entire image, we propose to implement the heat dissipation on each $4 \times 4$ patch independently, which is of the same size as the upsampling scale. We first define a series of patch-wise blurring matrices $\{D_p^t\}$, which are introduced in detail in Appendix A.1. The forward process then has a representation similar to equation 3: $$q(x_t | x_0) = \mathcal{N}(x_t | V D_p^t V^T x_0, \sigma_t^2 I), \quad t \in \{0, ..., T\},$$ where $V^T$ is the projection matrix of the DCT and $\sigma_t^2$ is the noise variance.
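Below is a minimal sketch (our reading of the forward process just defined, not the released implementation) of the patch-wise corruption: heat dissipation is applied independently inside each $4 \times 4$ patch through a patch-level DCT, so the patch mean (zero frequency) is preserved while higher frequencies decay; the schedule values are illustrative and the image size is assumed divisible by the patch size.

```python
import numpy as np
from scipy.fft import dctn, idctn

def patchwise_blur_corrupt(x0, t, sigma_t, p=4, rng=np.random.default_rng(0)):
    """Sample from q(x_t | x_0) with per-patch heat dissipation plus Gaussian noise (H, W divisible by p)."""
    H, W = x0.shape
    k = np.arange(p)
    lam = -np.pi ** 2 * (k[:, None] ** 2 + k[None, :] ** 2) / p ** 2   # patch-level decay rates; (0,0) entry is 0
    decay = np.exp(lam * t)                                            # diagonal of the patch blurring matrix
    out = np.empty_like(x0, dtype=float)
    for i in range(0, H, p):
        for j in range(0, W, p):
            u = dctn(x0[i:i + p, j:j + p], norm="ortho")               # V^T x_0 restricted to one p x p patch
            out[i:i + p, j:j + p] = idctn(decay * u, norm="ortho")     # V D_t^p V^T x_0
    return out + sigma_t * rng.normal(size=(H, W))                     # Gaussian noise term of the forward process

x_t = patchwise_blur_corrupt(np.random.rand(256, 256), t=4.0, sigma_t=0.2)
```

Because only the per-patch DC component survives as $t$ grows, each patch decays toward its own mean, i.e., toward the nearest-upsampled low-resolution image.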
Here $D_p^t$ is chosen to guarantee that $VD_p^t V^T x_0$ follows the same distribution as $x^H$, meaning that the blurring process ultimately makes the pixel values within each $4 \times 4$ patch the same.

The training objective of the high-resolution stage of RDM generally follows the EDM (Karras et al., 2022) framework in our implementation. We replace the Gaussian noise in equation 7 with a mixture of Gaussian noise and the block noise from Section 3.1. The loss function is defined on the prediction of the denoiser function $D$ fitting the true data $x$:
$$\mathbb{E}_{x \sim p_{data}, t \sim U(0,1), \epsilon \sim N(0,1), \epsilon' \sim N(0,1)} \| D(x_t, \sigma_t) - x \|^2, \quad \text{where } x_t = V D^p_t V^T x + \frac{\sigma}{\sqrt{1 + \alpha^2}} (\epsilon + \alpha \cdot \text{Block}[s](\epsilon')), \quad (8)$$
where $\epsilon$ and $\epsilon'$ are two independent Gaussian noise terms. The main difference in training between RDM and EDM is that the corrupted sample $x_t$ is not simply $x_t = x + \epsilon$, but a mixture of the blurred image, block noise, and independent Gaussian noise. Ideally, the noise should gradually transfer from block noise to high-resolution independent Gaussian noise, but we find that a weighted-average strategy performs well enough, because the low-frequency component of the block noise is much larger than that of the independent Gaussian noise, and vice versa for the high-frequency component. $\alpha$ is a hyperparameter, and the normalizer $\frac{1}{\sqrt{1 + \alpha^2}}$ is used to keep the noise variance $\sigma^2$ unchanged.

The advantages of RDM compared to CDM include:
- RDM is more efficient, because it skips the re-generation of low-frequency information in the high-resolution stages and reduces the number of training and sampling steps.
- RDM is simpler, because it gets rid of the low-resolution conditioning and conditioning augmentation tricks. The computation spent on cross-attention with the low-resolution condition is also spared.
- RDM has greater potential in performance, because RDM is a Markovian denoising process (if used with a DDPM sampler). Artifacts in the low-resolution images can be corrected in the high-resolution stage, while CDM is trained to correspond to the low-resolution condition.

Compared to end-to-end models (Chen, 2023; Hoogeboom et al., 2023),
- RDM is more flexible in adjusting the model size and can leverage more low-resolution data.

### 3.3 Stochastic Sampler

Since RDM differs from traditional diffusion models in the forward process, we also need to adapt the sampling algorithms. In this section, we focus on the EDM sampler (Karras et al., 2022) due to its flexibility to switch between first- and second-order (Heun's) samplers. Heun's method introduces an additional step for the correction of the first-order sampling. The updating direction of a first-order sampling step is controlled by the gradient term $d_n = \frac{x_n - D(x_n, \sigma_n)}{\sigma_n}$. The correction step updates the current state with an averaged gradient term $\frac{d_n + d_{n-1}}{2}$. Heun's method thus takes into account the change of the gradient term $\frac{dx}{dt}$ between $t_n$ and $t_{n-1}$. Therefore, it achieves higher quality while allowing for fewer sampling steps. We adapt the EDM sampler to the blurring diffusion of RDM's super-resolution stage following the derivation of DDIM (Song et al., 2020a). We define the indices of sampling steps as $\{t_i\}_{i=0}^{N}$, corresponding to the noisy states of images $\{x_i\}_{i=0}^{N}$.
To apply blurring diffusion, images are transformed into frequency space by DCT as $u_i = V^T x_i$. Song et al. (2020a) uses a family of inference distributions to describe the diffusion process. We can write it for blurring diffusion as: $$q_\delta(u_{1:N}|u_0) = q_\delta(u_N|u_0) \prod_{n=2}^{N} q_\delta(u_{n-1}|u_n, u_0),$$ (9) where $\delta \in \mathbb{R}^{N}_{\geq 0}$ denotes the index vector for the distribution. For all $n > 1$, the backward process is: $$q_\delta(u_{n-1}|u_n, u_0) = \mathcal{N}(u_{n-1}| \frac{1}{\sigma_{t_n}} (\sqrt{\sigma_{t_{n-1}}^2 - \delta_n^2} u_n + (\sigma_{t_n} D^p_{t_{n-1}} - \sqrt{\sigma_{t_{n-1}}^2 - \delta_n^2} D^p_{t_n}) u_0), \delta_n^2 I).$$ (10) The mean of the normal distribution ensures the forward process to be consistent with the formulation of blurring diffusion in Section 3.2, which is \( q(u_n | u_0) = \mathcal{N}(u_n | D^p_{t_n} u_0, \sigma^2_{t_n} I) \). We provide a detailed proof of the consistency between our sampler and the formulation of blurring diffusion in Appendix A.3. When the index vector \( \delta \) is 0, the sampler degenerates into an ODE sampler. We set \( \delta_n = \eta \sigma_{t_n-1} \) for our sampler, where \( \eta \in [0, 1) \) is a fixed scalar controlling the scale of randomness injected during sampling. We substitute the definition into equation (10) to obtain our sampler function as: \[ u_{n-1} = (D^p_{t_n-1} + \gamma_n (I - D^p_{t_n})) u_n + \sigma_{t_n} (\gamma_n D^p_{t_n} - D^p_{t_n-1}) \frac{u_n - \hat{u}_0}{\sigma_{t_n}} + \eta \sigma_{t_n-1} \epsilon, \] where \( \gamma_n \triangleq \sqrt{1 - \eta^2 \sigma^2_{t_n-1}} \). As in the section 3.1, we also need to consider block noise besides blurring diffusion. The adaptation is just to replace isotropic Gaussian noise \( \epsilon \) with \( \tilde{\epsilon} \), which is a weighted sum of the block noise and isotropic Gaussian noise. \( \hat{u}_0 = u_\theta(u_n, \sigma_{t_n}) \) is predicted by the neural network. Finally, a stochastic sampler for the super-resolution stage of RDM is summarized in Appendix A.4. 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING Dataset. We use CelebA-HQ and ImageNet in our experiments. CelebA-HQ (Karras et al., 2018) is a high-quality subset of CelebA (Liu et al., 2015) which consists of 30,000 images of faces from human celebrities. ImageNet (Deng et al., 2009) contains 1,281,167 images spanning 1000 classes and is a widely-used dataset for generation and other vision tasks. We train RDM on these datasets to generate 256 × 256 images. See Appendix C.1 for further experiments on higher resolutions. Architecture and Training. RDM adopts UNet (Ronneberger et al., 2015) as the backbone of diffusion models for all stages. The detailed architectures largely follow ADM (Dhariwal & Nichol, 2021) for fair comparison. We train unconditional models on CelebA-HQ and class-conditional models on ImageNet respectively. Since we follow the EDM implementation, we directly use the released checkpoint from EDM in ImageNet in the 64 × 64 stage. We calculate the training consumption by the number of training samples at 256 × 256 resolution, while also including the training cost of the 64 × 64 stage in the total calculation. According to Appendix B.1, the FLOPs of the 64 × 64 model are less than 1/10 that of the 256 × 256 model. So we add 1/10 of the first stage’s number of training samples to the 256 × 256 stage’s to be the total training consumption. See Appendix B.1 for more information about the architecture and hyperparameters. Evaluation. 
We use metrics including FID (Heusel et al., 2017), sFID (Nash et al., 2021), IS (Salimans et al., 2016), and Precision and Recall (Kynkäänniemi et al., 2019) for a comprehensive evaluation of the results. FID measures the difference between the features of model generations and real images, where the features are extracted by a pretrained Inception network. sFID differs from FID by using intermediate features, which better capture the similarity of spatial distributions. IS and Precision both measure the fidelity of the samples, while Recall indicates the diversity. We compute metrics with 50,000 and 30,000 generated samples for ImageNet and CelebA-HQ, respectively.

Table 1: Benchmarking unconditional image generation on CelebA-HQ 256 × 256.

| Model | FID↓ | Precision↑ | Recall↑ | Cost (Iter × BS) |
|----------------|------|------------|---------|----------------|
| LSGM (Vahdat et al., 2021) | 7.22 | - | - | 470k × 128 |
| WaveDiff (Phung et al., 2022) | 5.94 | 0.37 | - | 234k × 64 |
| LDM (Rombach et al., 2022) | 5.17 | 0.72 | - | 48k × 48 |
| StyleSwin (Zhang et al., 2022) | 3.25 | - | - | 25600k × 32 |
| RDM (ours) | 3.15 | 0.77 | 0.55 | 46k × 1024 |

Table 2: Effect of stochasticity in the sampler on ImageNet 256 × 256 (top) and CelebA-HQ 256 × 256 (bottom). We explore different values of \( \eta \) in Eq. (11).

| \( \eta \) | 0 | 0.10 | 0.15 | 0.20 | 0.25 | 0.30 | 0.40 | 0.50 |
|-----------|-----|------|------|------|------|------|------|------|
| FID↓ | 5.65 | 5.44 | 5.31 | 5.27 | 5.48 | 5.91 | 6.91 | 9.17 |
| \( \eta \) | 0 | 0.10 | 0.15 | 0.20 | 0.25 | 0.30 | 0.40 | 0.50 |
| FID↓ | 4.11 | 3.74 | 3.43 | 3.15 | 3.23 | 3.52 | 4.79 | 6.41 |

4.2 RESULTS

CelebA-HQ We compare RDM with the existing methods on CelebA-HQ 256 × 256 in Table 1, 512 × 512 in Table 6, and 1024 × 1024 in Table 7. RDM outperforms the state-of-the-art model.

1class-balance means making the number of images generated for each class the same among the 50,000 images.

Table 3: Benchmarking class-conditional image generation on ImageNet 256 × 256. The cost of RDM in the table has taken the first-stage model into consideration and made equivalent conversions according to Section 4.1. The cost of the latent diffusion model's VAE is not taken into consideration. The calculation of NFE is clarified in the sampling-steps part of Section 4.3.
| Model | FID↓ | sFID↓ | IS↑ | Precision↑ | Recall↑ | Cost(Iter×BS) | Sampling NFE | |------------------------|------|-------|-------|------------|---------|---------------|--------------| | BigGAN-deep [Brook et al., 2018] | 6.95 | 7.36 | 171.4 | 0.87 | 0.28 | 165k×2048 | - | | StyleGAN-XL [Sauer et al., 2022] | 2.30 | 4.02 | 265.12| 0.78 | 0.53 | - | - | | ADM [Dhariwal & Nichol, 2021] | 10.94| 6.02 | 100.98| 0.69 | 0.63 | 1980k×256 | 250 | | LDM-4 [Kornbach et al., 2022] | 10.56| - | 103.49| 0.71 | 0.62 | 178k×1200 | 250 | | CDM [Ho et al., 2022] | 4.88 | - | 158.71| - | - | - | 100 | | DiT-XL/2 [Peebles & Xie, 2022] | 9.62 | 6.85 | 121.50| 0.67 | 0.67 | 7000k×256 | 250 | | MDT-XL/2 [Gao et al., 2023] | 6.23 | 5.23 | 143.02| 0.71 | 0.65 | 6500k×256 | 250 | | **RDM** | **5.27** | **4.39** | **153.43** | **0.75** | **0.62** | **290k×4096** | **125** | | ADM-UG | 3.94 | 6.14 | 215.84| 0.83 | 0.53 | 1980k×256 | 500 | | LDM-4 (CFG=1.50) | 3.60 | - | 247.67| 0.87 | 0.48 | 178k×1200 | 500 | | DiT-XL/2-G (CFG=1.50) | 2.27 | 4.60 | 278.24| 0.83 | 0.57 | 7000k×256 | 500 | | MDT-XL/2-G (dynamic CFG) | **1.79** | **4.57** | **283.01** | **0.81** | **0.61** | **6500k×256** | **500** | | MDT-XL/2-G (CFG=1.325) | 2.26 | 4.28 | 246.06| 0.81 | 0.59 | 6500k×256 | 500 | | **RDM** (CFG=3.50) | 1.99 | 3.99 | 260.45| 0.81 | 0.58 | 290k×4096 | 250 | | + class-balance | 1.87 | 3.97 | 278.75| 0.81 | 0.59 | 290k×4096 | 250 | StyleSwin [Zhang et al., 2022], with a remarkably fewer training iterations (50M versus 820M trained images). We also achieve the best precision and recall among the existing works. ImageNet Table 3 shows the performance of class-conditional generative models on ImageNet 256 × 256. We report the best results as possible of the existing methods with classifier-free guidance (CFG) [Ho & Salimans, 2022]. RDM achieves the best sFID and outperforms all the other methods by FID except MDT-XL/2 [Gao et al., 2023] with a dynamic CFG scale. If with a fixed but best-picked CFG scale, MDT-XL/2 can only achieve an FID of 2.26. While achieving competitive results, RDM is trained with only 70% of the iterations of MDT-XL/2 (1.2B versus 1.7B trained images), indicating that the longer training and a more granular CFG strategy are potential directions to further optimize the FID of RDM. Training Efficiency We also compare the performance of RDM with existing methods along with the training cost in Figure 1. When CFG is disabled, RDM achieves a better FID than previous state-of-the-art diffusion models including DiT [Peebles & Xie, 2022] and MDT [Gao et al., 2023]. RDM outperforms them even with only about 1/3 training iterations. 4.3 Ablation Study In this section, we conduct ablation experiments on the designs of RDM to verify their effectiveness. Unless otherwise stated, we report results of RDM on 256 × 256 generation without CFG. The Effectiveness of block noise. We compare the performance of RDM with and without adding block noise in Figure 4. With a sufficient phase of training, RDM with block noise outperforms the model without block noise by a remarkable margin on both ImageNet and CelebA-HQ. This demonstrates the effectiveness of the block noise. The addition of block noise introduces higher modeling complexity of the noise pattern, which contributes to a slower convergence of training in the initial stage, as illustrated by Figure 4(a). 
We assume that training on a significantly smaller scale of samples leads to fast convergence of the model, which obliterates such a feature; therefore, a similar phenomenon cannot be observed in the training on CelebA-HQ.

**The scale of stochasticity.** As previous works [Song et al., 2020b] have shown, SDE samplers usually perform better than ODE samplers. We want to quantitatively measure how the scale of the stochasticity affects the performance of the RDM sampler (Algorithm 1). Table 2 shows results with $\eta$ varying from 0 to 0.50. For both CelebA-HQ and ImageNet, the optimal FID is achieved by $\eta = 0.2$. We hypothesize that a small $\eta$ is insufficient for the noise addition to cover the bias formed in earlier sampling steps, while a large $\eta$ introduces excessive noise into the sampling process, which makes a moderate $\eta$ the best choice. Within a reasonable scale of stochasticity, an SDE sampler always outperforms the ODE sampler by a significant margin.

---

2The best CFG scale is 1.325 with a hyperparameter sweep from 1.0 to 1.8. We observed that the FID increases greatly if the CFG scale > 1.5 for MDT-XL/2.

**Sampling steps.** To show the efficiency of our model, we compare the performance of RDM and other methods with fewer sampling steps. The Number of Function Evaluations (NFE), i.e., the number of times the neural network is called during sampling, is used as the comparison index for fairness. For RDM, the NFE consists of the NFE in the second stage plus $1/10$ of the NFE in the first stage, according to the proportion of the FLOPs. As shown in Figure 5, the performance of both DiT-XL/2 and MDT-XL/2 drops significantly with a lower NFE, while RDM barely declines. Considering that the steps in different stages may contribute differently to FID, we demonstrate three FLOPs allocation strategies in Figure 5. With more NFE allocated to the first stage, RDM achieves a better FID. In all settings, RDM performs better than MDT-XL/2 and DiT-XL/2 if NFE < 200.

### 5 CONCLUSION AND DISCUSSION

In this paper, we propose relay diffusion to optimize the cascaded pipeline. The diffusion process can now continue when changing the image resolution or model architectures. We anticipate that our method can reduce the cost of training and inference, and help create more advanced text-to-image models in the future. The frequency analysis in the paper reveals the relation between noise and image resolution, which might be helpful for designing a better noise schedule. However, our numerous attempts to theoretically derive the optimal noise schedule on the dataset from a frequency perspective did not yield good results. The reason might be that the optimal noise schedule is also related to the size of the model, its inductive bias, and the nuanced distribution characteristics of the data. Further investigation is left for future work.

ACKNOWLEDGMENTS

This work is supported by the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grant 2022ZD0118600, the NSFC for Distinguished Young Scholar 61825602, Tsinghua University Initiative Scientific Research Program 20233080067, and the New Cornerstone Science Foundation through the XPLORER PRIZE. The authors also thank Ting Chen from Google DeepMind and Junbo Zhao from Zhejiang University for their valuable talks and comments.

AUTHOR CONTRIBUTIONS

Ming Ding proposes the methods and leads the project. Jiayan Teng and Wendi Zheng conduct most of the experiments. Wenyi Hong works together on early experiments.
Jianqiao Wangni, Wenyi Hong and Zhuoyi Yang contribute to the writing of the paper. Jie Tang provides guidance and supervision. The work is partly done during the internship of Jiayan Teng and Wendi Zheng at Zhipu AI. REFERENCES Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. ediffi: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022. Christopher M Bishop and Nasser M Nasrabadi. Pattern recognition and machine learning, volume 4. Springer, 2006. Kristian Bredies, Dirk Lorenz, et al. Mathematical image processing. Springer, 2018. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018. Ting Chen. On the importance of noise scheduling for diffusion models. arXiv preprint arXiv:2301.10972, 2023. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780–8794, 2021. Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34:19822–19835, 2021. Shanghua Gao, Pan Zhou, Ming-Ming Cheng, and Shuicheng Yan. Masked diffusion transformer is a strong image synthesizer. arXiv preprint arXiv:2303.14389, 2023. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020. Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Miguel Angel Bautista, and Josh Susskind. f-dm: A multi-stage diffusion model via progressive signal transformation. arXiv preprint arXiv:2210.04955, 2022. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
RAA0vCLMhp
In Table 4, are the reported results of the existing state-of-the-art solutions zero-shot or fine-tuned, and is SeMDiff zero-shot or fine-tuned? If the SOTA results are zero-shot, what is their fine-tuned performance on these datasets?
Semantic Memory Guided Diffusion Networks for Image-to-Long Text Generation

Anonymous authors
Paper under double-blind review

Abstract

Automatically describing an image with comprehensive textual content is often demanded by different real-world applications, which motivates image-to-text generation tasks such as image captioning. However, conventional tasks mainly focus on generating short text, which often fail to deal with challenging scenarios in which long text is inevitably required to describe enriched and diversified visual contents. Therefore, a more generic solution, which should be able to generate text of arbitrary length (long text in most cases), is expected to overcome the limitations of existing approaches, such as the inability to generate sufficiently comprehensive and complete textual content and to ensure its semantic coherence. To address such limitations, we propose a dedicated solution, semantic memory guided diffusion networks (SeMDiff), for image-to-long text generation (I2LTG), which explicitly captures salient semantics from the visual contents and further processes and enhances them with memory networks to facilitate the text generation process. Specifically, we employ semantic concepts as the vehicle to deliver and process the semantics embedded in images, where they are predicted from each image, matched with memory vectors, and serve as the condition to guide diffusion networks for iterative generation. Experimental results on three public datasets and a newly proposed one with more than 54K instances demonstrate the superiority of our approach compared to previous state-of-the-art solutions. Further analyses illustrate that our approach offers an effective diffusion-based solution with external guidance for long text generation under different cross-modal settings.

1 Introduction

Generating image descriptions is one of the most widely applied techniques in artificial intelligence, especially when visual contents are enriched and diversified, so that one needs an effective process to produce and organize descriptive texts that cover all semantics in the scene. To emulate this process, tasks such as image captioning (IC) have been developed and achieve promising results (Mao et al., 2015; Rennie et al., 2017; Anderson et al., 2018; Pantazopoulos et al., 2022). However, IC mainly deals with short texts, which often fail to satisfy the demands of challenging scenarios, especially in particular domains where an entire report is expected to be generated for a given image, i.e., radiology report generation (RRG) (Ying et al., 2018; Li et al., 2018; Johnson et al., 2019; Liu et al., 2021b; Huang et al., 2023). Therefore, the ability to generate comprehensive long text for images is expected to upgrade existing image-to-text generation approaches.

In performing current image description tasks, existing approaches adopt either AR (Herdade et al., 2019; Huang et al., 2019; Cornia et al., 2020; Hu et al., 2022; Li et al., 2023; Zhu et al., 2023; Liu et al., 2023) (e.g., Transformer (Vaswani et al., 2017)) or non-AR models (Lee et al., 2018; Gao et al., 2019a; Guo et al., 2020; Zhou et al., 2021) as their foundation architecture, predicting words in a sequence or producing all words in parallel, respectively. When forced to generate long texts, both AR and non-AR approaches have difficulties in producing semantically coherent texts.
Particularly, AR solutions are susceptible to error propagation if incorrect contents are generated halfway, so that contextually irrelevant contents are often observed accordingly, thus exacerbating the coherence problem. Although some RRG studies (Chen et al., 2020; 2021; Qin & Song, 2022; Tanida et al., 2023; Omkar Thawkar & Khan, 2023; Tu et al., 2023) extend AR solutions with task- and domain-specific heuristics, they cannot guarantee comprehensive and coherent content generation.

1 Code, models, and the proposed dataset will be open-sourced in the final version of this paper.

Figure 1: The overview architecture of our approach for I2LTG, which consists of four components, namely, the visual encoder, the semantic concept predictor, the semantic conditional memory, and the diffusion decoder, which are represented with grey, yellow, green, and red backgrounds, respectively. An example input image with its output text is provided for better demonstration.

Therefore, in terms of the generation mechanism, non-AR approaches are preferable to AR ones for avoiding sequential error propagation. However, they are verified only on short-text generation tasks in most cases, e.g., IC, and it is unclear how they behave when applied to long text generation, especially since they also have their own limitations such as the word repetition issue (Luo et al., 2022). As a result, to explore effective long text generation with non-AR approaches, it is valuable to carefully design guidance and enhancement that adapt to this task.

In this paper, we propose semantic memory guided diffusion networks (SeMDiff) for image-to-long text generation (I2LTG) with three main components, namely, the semantic concept predictor (SCP), the semantic conditional memory (SCM), and the diffusion decoder (DD). In our approach, we adopt semantic concepts as the intermediate media to transport essential semantic information from the image to the text generation process, where they are captured from the image by SCP, enhanced in SCM, and then serve as the guidance for DD to iteratively generate the final texts. Specifically, SCM is the distinctive design in this work, which enhances the representation of semantic concepts with the image-text correlation information stored in its most related memory vectors, so as to provide precise control that pilots the diffusion networks in generating comprehensive and coherent long texts. We evaluate our approach on three public datasets, i.e., MIMIC-CXR, CC-SBU, and Localized Narratives (LN), and a newly proposed one designed for I2LTG in this work, namely, COCO-Long Text (COCO-LT). Experimental results on them illustrate the superiority of our approach over state-of-the-art counterparts under different image description generation settings. Further analysis of the different components of our approach illustrates that SCP provides strong guidance for the iterative refinement of DD, which allows the model to perform a more organized generation process, with SCM further ensuring the preciseness of the guidance at each iteration, guaranteeing semantically coherent results.

2 THE APPROACH

Given an input image $I$, our approach attempts to generate its description $\hat{Y}$ in long text. Figure 1 illustrates the overall pipeline of our approach, which consists of four components, i.e., the visual encoder, the semantic concept predictor (SCP), the semantic conditional memory (SCM), and the diffusion decoder (DD).
Specifically, the visual encoder $f_{ve}$ processes the input image $I$ into visual representations $h^v$, and SCP $f_{scp}$ predicts semantic concepts $\hat{S}$ from a semantic matrix $S$ that stores the vectors of all possible concepts according to $h^v$. The SCM $f_{scm}$ further enhances the representations $h^s$ of $\hat{S}$ by matching top-$K$ memory vectors, resulting in a subset $\hat{h}^s$ of $h^s$. Finally, the DD $f_{dd}$ generates $\hat{Y}$ along with $\hat{h}^s$ and $h^s$, where the overall process is formulated by $$\hat{Y} = f_{dd}(f_{ve}(I), f_{scm}(f_{scp}(f_{ve}(I), S), K))$$ \hspace{1cm} (1) In training, the model is optimized based on the cross-entropy loss $L_{SCP}$. The final loss $L$ for the entire approach is then combined with $L_{SCP}$ and the loss function $L_{DD}$ of the DD through $$L = \beta_1 L_{SCP} + \beta_2 L_{DD}$$ \hspace{1cm} (2) where $\beta_1$ and $\beta_2$ are hyper-parameters balancing contributions of $L_{SCP}$ and $L_{DD}$, respectively. Following texts present aforementioned components in details according to our pipeline sequence. ### 2.1 The Visual Encoder The visual encoder consists of two components, a visual feature extractor $f_{ve}$ and a Transformer-based encoder $f_{te}$, where $f_{ve}$ is a pre-trained vision backbone model (i.e., ResNet-101 (He et al., 2016)). For feature extraction from $I$, we firstly decompose $I$ into a series of patches $\{I_1 \ldots I_{N_v}\}$ with $N_v$ denoting the number of patches, and then adopt the output matrices $[X_1 \ldots X_{N_v}]$ from the last convolutional layer of $f_{ve}$ to feed into $f_{te}$. Finally, $f_{te}$ encodes $[X_1 \ldots X_{N_v}]$ into visual representations $h^v$, with the overall process formulated by $$h^v = f_{te}(f_{ve}(I_1 \ldots I_{N_v}))$$ (3) ### 2.2 The Semantic Concept Predictor When generating long texts directly with the latent representations extracted from an image, there is potential deficiency that such representations have ambiguities in conveying all essential semantics, so that incoherent or even incomplete image descriptions are generated. To address such ambiguity issue, we propose SCP to explicitly predict semantic concepts, so as to provide accurate supplementary guidance for image representations. Starting from the randomly initialized matrix $S$ containing a series of semantic vectors $\{s_1 \ldots s_{N_c}\}$ that cover all the possible concepts, we use $f_{scp}$, a transformer based ranker, to predict $\hat{S} = \{\hat{s}_1 \ldots \hat{s}_{N_c}\}$ with $N_c$ concepts (i.e., words in some cases) according to $h^v$ from the visual encoder, with the process formulated by $$\hat{S} = f_{scp}(h^v, s_1 \ldots s_{N_c})$$ (4) where the representation $h^s_n$ of the $n$-th concept $\hat{s}_n$ is extracted from the last layer of $f_{scp}$ by $$h^s_n = f_{scp}(h^v, s_1 \ldots s_{N_c}; \hat{s}_1 \ldots \hat{s}_n)$$ (5) Later we compute the mean pooling of all $h^s_n$ and use the resulting vector $h^s$ to represent $\hat{S}$. In training, we compute the cross-entropy loss $L_{SCP}$ between $\hat{S}$ and the annotated semantic concepts $S^*$ in the gold standard image description $Y^*$. In doing so, we map $h^s_n$ to a distribution over $V^s$ with $p^s_{n,i}$ for the probability of the $i$-th concept $v_i$, and choose the concept $\hat{s}_n$ with the highest probability as output. 
Then, we compare $\hat{s}_n$ with the gold standard $y^*_n$ to compute the cross-entropy loss by $$L_{S,n} = - \sum_{v_i \in V^s} p^*_{v_i} \log p^s_{n,i}$$ (6) where $p^*_{v_i}$ is the probability distribution of the gold standard over $V^s$ with $p^*_{v_i} = 1$ if $v_i = y^*$ and $p^*_{v_i} = 0$ otherwise. Finally, we sum $L_{S,n}$ over all concepts in $\hat{S}$ and obtain $L_{SCP} = \sum_{n=1}^{N_c} L_{S,n}$. ### 2.3 The Semantic Conditional Memory In our approach, we utilize the SCM to enhance the representations of the produced concepts from the SCP with the memory that stores the information in aligning images and texts, so as to provide more precise guidance for the next text generation process. In doing so, SCM is built upon a memory matrix $M$, which stores a series of $d$-dimension memory vectors $\{m_1 \ldots m_{N_m}\}$ that interact with $h^s$, with $N_m$ denoting the number of these vectors. Two main steps are involved in SCM, namely, memory querying (MQ) and memory responding (MR), respectively. **Memory Querying** In this process, we project $h^s$ and $m_i \in \{m_1 \ldots m_{N_m}\}$ into $q^s$ and $k_i$ to the same semantic space through two linear transformation matrices $W_q$ and $W_k$, respectively, through $$q^s = h^v \cdot W_q, \quad k_i = m_i \cdot W_k$$ (7) where we use two one-layer perceptrons to model $W_q$ and $W_k$, respectively. Then, we compute the latent distance $D_i$ between $q^s$ and $k_i$ by $$D_i = \frac{q^s \cdot k_i^T}{\sqrt{d}}$$ (8) Subsequently with $D_i$, we retrieve the top-$K$ memory vectors $\{k_1 \ldots k_K\}$ from $M$ and calculate the corresponding importance weight $\omega_i$ for each $k_i$ by normalization over $D_i$: $$\omega_i = \frac{\exp(D_i)}{\sum_{j=1}^{K} \exp(D_j)}$$ (9) Memory Responding MR obtains a responded vector \( r \) based on \( \{k_1 \ldots k_K\} \) and their weights \( \{\omega_1 \ldots \omega_K\} \), and enhance \( h^s \) with the resulted \( r \). In doing so, we project \( k_i \) to the same semantic space of \( h^s \) through a linear transformation matrix \( W_v \), resulting \( v_i \) through \[ v_i = k_i \cdot W_v, \] where \( W_v \) is performed by a one-layer perceptron. Then, we obtain the responded vector \( r \) by \[ r = \sum_{i=1}^{K} \omega_i \cdot v_i \] Finally, we add \( r \) to \( h^s \) and normalizing (\( \text{Norm} \)) it as \( \hat{h}^s = \text{Norm}(h^s + r) \), and send \( \hat{h}^s \) to DD to guide the generation process. 2.4 The Diffusion Decoder The DD (\( f_{dd} \)) aims to generate \( \hat{Y} \) based on \( h^v \) and \( \hat{h}^s \). In doing so, DD performs diffusion forwarding and decoding processes, where forwarding allows DD to learn the ability of reconstructing noisy representation and insert them into final result, so that DD is able to generate \( \hat{Y} \) through iteratively denoising during the decoding process. Details of these processes are illustrated in following texts. Diffusion Forwarding Given the step \( t \sim U(0, T) \) with \( T \) denoting the total number of steps, diffusion forwarding firstly adds Gaussian noise \( n \) into the representation \( h_0 \) of \( Y^* \), resulting in the noisy representations \( h_t \) at \( t \)-step. 
We follow Bit Diffusion (BD) (Chen et al., 2023) to convert tokens in \( Y^* \) into their bit representation (\( h_0 \)) and compute the representation \( h_t \) at the \( t \)-th step by \[ h_t = \sqrt{\bar{\alpha}_t} \cdot h_0 + \sqrt{1 - \bar{\alpha}_t} \cdot n \] Herein, \( \bar{\alpha}_t \) is a blending scalar correlated to the noise scheduling strategy of denoising diffusion probabilistic model (DDPM) (Ho et al., 2020), and we use the cosine noising schedule of DDPM. Then, \( f_{dd} \) reconstructs \( h_t \) to \( h_0 \) based on \( h^v \) and \( \hat{h}^s \), where we compute the diffusion loss \( L_{diff} \) of DD through \[ L_{diff} = \mathbb{E}_{t \sim U(0, T)} \| f_{dd}(h_t, h^v, \hat{h}^s, t) - h_0 \|_2^2 \] Upon the reconstructed representation, we use a linear projection layer to predict the probability distribution over all tokens. Afterwards, we compute cross-entropy loss \( L_{CE} \) by comparing \( \hat{Y} \) and \( Y^* \), where the final loss of DD \( L_{DD} \) is formulated by \[ L_{DD} = L_{CE} + L_{diff} \] Diffusion Decoding Diffusion decoding generates \( \hat{Y} \) following the standard process of BD. Specifically, we randomly sample a Gaussian noise \( n \) and denoise it into the final representation \( \hat{h}_0 \) for \( \hat{Y} \). In doing so, we initialize \( \hat{h}_T \) with \( n \) and iteratively denoise it into \( \hat{h}_0 \) according to \[ \hat{h}_0 = \prod_{t=1}^{T} p(\hat{h}_{t-1} | \hat{h}_t, h^v, \hat{h}^s) \] where \[ p(\hat{h}_{t-1} | \hat{h}_t, h^v, \hat{h}^s) = \sqrt{\bar{\alpha}_{t-1}} \cdot \frac{\hat{h}_t - \sqrt{1 - \bar{\alpha}_t} \cdot f_{dd}(\hat{h}_t, h^v, \hat{h}^s, t)}{\sqrt{\bar{\alpha}_t}} + \sqrt{1 - \bar{\alpha}_{t-1}} \cdot f_{dd}(\hat{h}_t, h^v, \hat{h}^s, t) \] Finally, we decode \( \hat{h}_0 \) and obtain the final text results \( \hat{Y} \) for the input image \( I \). 3 Experiment Settings 3.1 Datasets We evaluate our approach on a series of datasets from different tasks, including MIMIC-CXR (John-son et al., 2019) for RRG, CC-SBU (Zhu et al., 2023) for cross-modal alignment, Localized Nar-ratives (LN) (Pont-Tuset et al., 2020) for IC. Details of the aforementioned datasets are reported in Table 1 and illustrated in the following text. | Dataset | MIMIC-CXR | CC-SBU | LN | COCO-LT | |---------|-----------|--------|----|--------| | | Train | Val | Test | Train | Val | Test | Train | Val | Test | | IMAGE | 369.0K | 3.0K | 5.2K | 3.0K | 0.1K | 0.3K | 1743K | 41.7K | 126.0K | | DESCRIPTION | 222.8K | 1.8K | 3.3K | 3.0K | 0.1K | 0.3K | 507.4K | 41.7K | 126.0K | | AVG. LEN. | 53.0 | 53.1 | 66.4 | 70.8 | 70.8 | 71.5 | 35.5 | 29.9 | 30.6 | Table 1: Statistics of our experiment datasets w.r.t. their training, validation, and test sets, including the numbers of images, descriptions, and the average length of descriptions (i.e., (AVG. LEN.)). | Data | Model | NLG Metrics | CE Metrics | |------|-------|-------------|------------| | | | BL-1 | BL-2 | BL-3 | BL-4 | MTR | RG-L | AVG. 
Δ | P | R | F1 | | MIMIC-CXR | TRANS | 0.357 | 0.216 | 0.141 | 0.091 | 0.129 | 0.271 | - | 0.348 | 0.314 | 0.330 | | | DIFF | 0.380 | 0.221 | 0.143 | 0.100 | 0.137 | 0.277 | 4.5% | 0.385 | 0.401 | 0.393 | | | +SCP | 0.409 | 0.243 | 0.167 | 0.113 | 0.149 | 0.284 | 12.8% | 0.437 | 0.445 | 0.441 | | | +SCM | 0.385 | 0.227 | 0.149 | 0.106 | 0.142 | 0.279 | 6.3% | 0.405 | 0.417 | 0.411 | | | +SCP+SCM (SeMDIFF) | **0.412** | **0.259** | **0.180** | **0.129** | **0.178** | **0.287** | **19.0%** | **0.471** | **0.479** | **0.478** | | CC-SBU | TRANS | 0.343 | 0.197 | 0.115 | 0.054 | 0.066 | 0.214 | - | - | - | - | | | DIFF | 0.370 | 0.223 | 0.131 | 0.081 | 0.173 | 0.253 | 23.6% | - | - | - | | | +SCP | 0.404 | 0.251 | 0.155 | 0.099 | 0.181 | 0.284 | 32.7% | - | - | - | | | +SCM | 0.388 | 0.239 | 0.140 | 0.084 | 0.174 | 0.267 | 27.4% | - | - | - | | | +SCP+SCM (SeMDIFF) | **0.417** | **0.265** | **0.167** | **0.109** | **0.201** | **0.253** | **37.7%** | - | - | - | | LN | TRANS | 0.197 | 0.117 | 0.063 | 0.040 | 0.095 | 0.151 | - | - | - | - | | | DIFF | 0.220 | 0.139 | 0.087 | 0.053 | 0.117 | 0.175 | 18.5% | - | - | - | | | +SCP | 0.305 | 0.175 | 0.102 | 0.067 | 0.130 | 0.220 | 34.2% | - | - | - | | | +SCM | 0.291 | 0.164 | 0.138 | 0.061 | 0.125 | 0.206 | 33.4% | - | - | - | | | +SCP+SCM (SeMDIFF) | **0.376** | **0.229** | **0.148** | **0.092** | **0.153** | **0.281** | **49.1%** | - | - | - | | COCO-LT | TRANS | 0.257 | 0.129 | 0.058 | 0.030 | 0.093 | 0.178 | - | - | - | - | | | DIFF | 0.283 | 0.144 | 0.076 | 0.041 | 0.119 | 0.210 | 17.9% | - | - | - | | | +SCP | 0.328 | 0.178 | 0.102 | 0.071 | 0.133 | 0.239 | 34.3% | - | - | - | | | +SCM | 0.314 | 0.152 | 0.088 | 0.056 | 0.129 | 0.202 | 25.6% | - | - | - | | | +SCP+SCM (SeMDIFF) | **0.365** | **0.210** | **0.144** | **0.093** | **0.155** | **0.265** | **44.7%** | - | - | - | Table 2: Comparison of different baselines with the full model (SeMDIFF) on four datasets under NLG and CE metrics (CE only applies to MIMIC-CXR). “BL” denotes the abbreviation of BLEU; “MTR” and “RG-L” denote METEOR and ROUGE-L, respectively. The average improvement over all NLG metrics compared to “Trans” is also presented in the “AVG. Δ” column. * marks the results where the improvements are statistically significant over all baselines at $p \leq 0.05$ level. MIMIC-CXR is the largest public dataset for RRG with 473,057 chest X-Ray images and 206,563 reports. We follow its official split and utilize the medical text indexer (MTI) to preprocess all radiology reports in obtaining medical concepts. CC-SBU is a dataset proposed by MiniGPT-4 (Zhu et al., 2023), which contains 3,439 high-quality image-description pairs. In this dataset, we use key words in image description as semantic concepts by filtering them according to their part-of-speech (POS) tags and frequencies. In doing so, we employ the NLTK POS tagger to annotate POS labels for each word in image description and set a threshold to filter out infrequent words. Based on the aforementioned process, we finally obtain 1,622 semantic concepts (words) for CC-SBU. For Localized Narratives (LN), we choose its Open Images subset containing 671k image-description pairs for our experiments and obtain the semantic concepts following the similar pipeline as that applied to CC-SBU, resulted in 4,888 semantic concepts (words) in total. Particularly, we propose a new dataset COCO-LT dedicated to I2LTG based on COCO (Lin et al., 2014) for further evaluating our approach. 
In detail, we randomly choose around 40% of original COCO instances to form this dataset with each image in it having five corresponding short description sentences from different perspectives. Then we employ ChatGPT (GPT-3.5-Turbo) to produce a long description (generally a paragraph) based on these sentences through a special prompt and finally result in 54,785 image-description pairs. For this dataset, we utilize the similar process as that for CC-SBU and COCO-LT, and obtain 1,894 semantic concepts (words). --- 2https://lhncbc.nim.nih.gov/ii/tools/MTI.html 3Preserved POS labels only consist JJ, JJR, JJS, NN, NNS, RB, RBR, RBS, VB, VBD, VBG, VBZ. 4https://github.com/cvdfoundation/open-images-dataset 5We illustrate more details of the proposed COCO-LT dataset in Appendix A. Table 3: Comparisons of SeMDiff with previous studies on the test set of MIMIC-CXR under NLG and CE metrics. The best and second results are in boldface and underlined. For LLM-based methods (i.e., XRayGPT, Med-PALM), we also illustrate their parameter numbers in parentheses. * marks the results the improvements are statistically significant over all baselines at $p \leq 0.05$ level. ### 3.2 Baselines and Evaluation Metrics To verify our proposed model, we use four baselines for comparison in our experiments. “Trans” represents the autoregressive model with ResNet-101 (He et al., 2016) and a 3-layer Transformer as the visual encoder, and another 3-layer Transformer with an additional 8-head cross-attention layer as the decoder, and “Diff” denotes our baseline diffusion model which directly generates the image description from the visual representations. “+SCP” stand for the model that SCP is applied to “Diff”, serving as our third baseline. “+SCM” represents our fourth baseline model that “Diff” is equipped with only SCM, where SCM directly interacts with visual representations. “+SCP+SCM” is our full model with all proposed components. For evaluation on MIMIC-CXR, we follow previous studies (Chen et al., 2020; 2021; Qin & Song, 2022; Huang et al., 2023) and evaluate the different models with natural language generation (NLG) and clinical efficacy (CE) metrics. For NLG metrics, we use BLEU (Papineni et al., 2002), METEOR (Michael & Alon, 2011), and ROUGE-L (Lin, 2004). For CE metrics, we employ CheXpert (Gao et al., 2019b) to classify words in the generated reports into 14 different categories related to thoracic diseases and support devices, and compare the resulted labels with the ones in gold standard reports. We use precision, recall, and F1 to evaluate model performance for CE metrics. For evaluation on CC-SBU, LN, COCO-LT, we only use NLG metrics following conventional studies (Vinyals et al., 2015; Rennie et al., 2017; Anderson et al., 2018; Cornia et al., 2020; Fang et al., 2022; Li et al., 2022b) and also measure the lengths of the generated texts. ### 3.3 Implementation Details In our experiments, we try different hyper-parameter settings and select the one with best performance on the validation set. For model architecture, we implement $f_{\text{enc}}$, $f_{\text{scp}}$, and $f_{\text{dd}}$ with 3 layers of Transformer, where number of the attention head and dimension of the hidden vectors are set to 8 and 512, respectively. In SCP and DD, we implement an additional 8-head cross-attention layer to incorporate the visual representations. For SCM, the memory dimension $d$ is set to 512. For DD, the total step $T$ for diffusion forwarding and decoding processes is set to 100. 
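To make the SCM configuration above concrete, the following is a minimal PyTorch sketch (our reading of Section 2.3, not the authors' released code) of the memory querying and responding steps. The memory dimension follows the stated $d = 512$; the memory size and top-$K$ are placeholders (both are swept in the analysis of Section 4.2), and batching is our addition.

```python
import torch
import torch.nn as nn

class SemanticConditionalMemory(nn.Module):
    def __init__(self, d=512, n_mem=2048, top_k=32):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_mem, d))   # memory matrix M with N_m slots
        self.W_q = nn.Linear(d, d, bias=False)               # query projection (Eq. 7)
        self.W_k = nn.Linear(d, d, bias=False)                # key projection (Eq. 7)
        self.W_v = nn.Linear(d, d, bias=False)                # value projection (Memory Responding)
        self.norm = nn.LayerNorm(d)
        self.top_k = top_k
        self.d = d

    def forward(self, h_s):                                   # h_s: (B, d) pooled concept representation
        q = self.W_q(h_s)                                      # q^s
        k = self.W_k(self.memory)                              # k_i for every memory slot
        dist = q @ k.t() / self.d ** 0.5                       # scaled dot-product distances (Eq. 8)
        top_d, top_idx = dist.topk(self.top_k, dim=-1)         # retrieve the top-K memory vectors
        w = torch.softmax(top_d, dim=-1)                       # importance weights over the retrieved K (Eq. 9)
        v = self.W_v(k[top_idx])                               # responded values v_i, shape (B, K, d)
        r = (w.unsqueeze(-1) * v).sum(dim=1)                   # responded vector r
        return self.norm(h_s + r)                              # enhanced concepts: Norm(h^s + r)
```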
For optimization, we use Adam (Kingma & Ba, 2015) optimizer updating all model parameters with a learning rate of $5e^{-4}$. We follow the learning rate scheduling strategy in Vaswani et al. (2017) with 20,000 steps for warm-up, where the total training steps vary from 1.5M to 6.7M according to different datasets. The weights to balance SCP and DD loss in Eq. 2 are set to $\beta_1 = 1$ and $\beta_2 = 1$, respectively. ### 4 Results and Analysis #### 4.1 Overall Results Experimental results of different models on the test sets of four datasets are reported in Table 2 with several observations. First, in all four test sets, it is observed that the basic non-AR model... Table 4: Comparisons of our approach with previous studies on the test sets of CC-SBU, LN, and COCO-LT under NLG metrics (BL, MTR and RG refer to BLEU, METEOR and ROUGE, respectively). The best and second results are in boldface and underlined. LLM-based methods (i.e., BLIP-2, MinIGPT-4, and LLava) are illustrated with their parameter numbers in parentheses. * marks the results where improvements are statistically significant at $p \leq 0.05$ level over all baselines. ("Diff") consistently outperforms the AR one ("Trans") on all datasets, owing to that the error propagation problem is alleviated. Second, by comparing whether using semantic information, "Diff+SCP" (i.e., latent representations and explicit semantic concepts) leads to significantly better performance over "Diff" (i.e., latent representations), which confirms the effectiveness of semantic guidance for I2LTG. Third, comparing approaches with and without using memory, we find that "Diff+SCM" achieves better performance than "Diff", which indicates that SCM helps the model to establish a better cross-modal alignment. Fourth, when SCP and SCM are combined, our approach "Diff+SCP+SCM" is able to further enhance the performance of "Diff+SCP" and "Diff+SCM", and achieves the best result, which indicates the necessity to optimize semantic concepts in SCM. To further illustrate the effectiveness of our approach, we compare it with existing state-of-the-art solutions on all four datasets, with results presented in Table 3 and 4. Overall, our approach significantly outperforms other approaches on all metrics, which illustrates the superiority of our approach for I2LTG with its specific model design. Notably, our approach even achieves better performance than those studies based on large language models (LLMs) (i.e., XRayGPT, Med-PALM, BLIP-2, MinIGPT-4, and LLava), indicating that appropriate semantic guidance is more efficient than using a massive amount of parameters in LLMs. Compared to prevailing non-AR solutions (i.e., MIR, SATIC, and SCD-NET), our approach obtains significant improvements, suggesting the power of semantic concepts in helping non-AR models with overcoming their limitations such as word repetition issue, which are further illustrated in the next subsection. Particularly, in noticing that SCD-NET also leverages semantic guidance, our approach presents its superior capability in generating better results by utilizing predicted semantic concepts while SCD-NET obtains such semantic information by retrieving and encoding sentences, resulting in a coarser guidance. 4.2 ANALYSIS We perform a series of analysis to investigate the effect of different components of our approach following its pipeline sequence. Specifically, we firstly explore how semantic matrix size affects the concept prediction process in SCP. 
Then, we investigate SCM performance against different memory sizes and the number of queried memory vectors. Finally, we qualitatively illustrate the effect of different components of our approach through a case study. Effect of the Semantic Matrix Size We conduct our approach with different semantic matrix sizes (i.e., $N_s$) to analyze their effects to SCP. Figure 2(a) presents the curves of BLEU-4 score against --- 7 To comprehensively evaluate the quality of the semantic guidance, we compare the generated concepts with the ones in gold standard descriptions, and present the results (precision, recall, and F1) in Appendix C. 8 The guideline for choosing these studies is based on that they have open-sourced code, which allows us to run their models on our experiment datasets, especially the COCO-LT dataset proposed in this paper. 9 We report full evaluation with all metrics on our approach and existing state-of-the-art solutions on CC-SBU, LN, and COCO-LT datasets in Appendix D. 10 Med-PALM does not release the model weights and its RRG test set. Therefore, for fair comparisons, we approximate their settings to randomly curated 10 groups of test instances with the same size (i.e., 246 cases) as that used in Med-PALM. Under this setting, SEMDIFF performs similarly to the results reported in Table 3. Figure 2: The curves of BLEU-4 score on test sets of different datasets with respect to (a) semantic matrix size, (b) memory size, and (c) number of queried memory vectors. \(N_s\), showing that the semantic matrix size should be separately set for different datasets. In general, when this size is smaller than the optimal value, the model gradually obtains better performance as \(N_s\) increases, which indicates that semantic matrix is able to cover more related concepts so that SCP stores more essential semantic information. However, once the optimal value is reached, model performance starts to degrade when the size keeps enlarging, thus overfitting is observed accordingly and larger matrix size does not help in storing useful semantic information. **Effect of the Memory Size** To explore the effect of memory size on SCM (i.e., \(N_m\)), we conduct our approach with different \(N_m\). Figure 2(b) presents the curves of BLEU-4 score with respect to \(N_m\) ranging from 32 to 4,096. It is observed that, in general, enlarging the memory matrix helps improving model performance on all datasets, indicating that better generation results are expected when a larger matrix is applied and stores more image-text correlation information. Moreover, we also notice performance convergence when \(N_m\) reaches 2048 (512 on CC-SBU), so that there exists a limit for the bonus on enlarging matrix size for preserving essential information. **Effect of the Number of Queried Memory Vectors** In analyzing how the number of queried memory vectors (i.e., \(K\)) affects the SCM, we try our approach under different \(K\) settings. Figure 2(c) presents the curves of BLEU-4 score with respect to \(K\) ranging from 1 to 512. Similar to that found in semantic matrix size analysis, it is shown that \(K\) has an optimal value on each dataset, where retrieving either too few or too many memory vectors leads to inferior performance, corresponding to the situations of information insufficiency and overloading, respectively. 
Particularly, when too many vectors are retrieved, the impact of noise is highly significant in affecting model performance as the BLEU-4 scores rapidly drop, suggesting that \(K\) should be carefully chosen. **Case Study** In addition to quantitative analyses, we also present a case study on the generated texts from different models with the same image input from CC-SBU. Figure 3 demonstrates the results with comparison of iterative generations from “Diff” and “Diff+SCP+SCM”, where semantic words shared by model outputs and the gold standard texts are highlighted in the same color, as well as the time step \(t\) in iteration and the average number of repetitive words in different results illustrated in parentheses.\(^{11}\) There are several observations from different perspectives. “Diff” gradually refines the initialized repetitive words into a series of descriptive sentences, which produces few related semantic words in its results, suggesting the ambiguity of visual representation that leads to insufficient semantic information for the text generation process. On the contrary, with the assistance of semantic concepts, our full model (“Diff+SCP+SCM”) is able to generate more reasonable results that contain enough related contents, indicating that SCP and SCM provide a strong guidance for the generation process to produce semantic coherent long texts. Notably, “Diff+SCP+SCM” also performs a more organized generation process, where the number of repetitive words is significantly decreased during the iterative generation process, which confirms the validity of our model design and the potential of semantic concepts to alleviate existing limitations of non-AR solutions.\(^{12}\) **5 RELATED WORK** Conventionally, describing images is primarily carried out through image captioning (IC), where normally short sentences are generated for input source images based on autoregressive models (i.e., LSTM (Hochreiter & Schmidhuber, 1997; Vaswani et al., 2017)) or non-autoregressive ones (Lee et al., 2018; Gao et al., 2019a; Guo et al., 2020; Zhou et al., 2021), with pre-training techniques (Hu et al., 2022; Chen et al., 2022; Nukrai et al., 2022; Romain & Rufin, 2023; Ramos et al., 2023), semantic condition (Fang et al., 2022; Li et al., 2022b), and enhanced multi-modal features (Shi et al., 2021; Ng et al., 2021; Nguyen et al., 2022; Liu et al., 2022; Wu et al., 2022; Zhang et al., 2022; Wu et al., 2023) applied to facilitate the generation process. However, IC normally fails \(^{11}\)We further report word repetition results from different models on all datasets in Appendix E. \(^{12}\)For comprehensive comparisons, we present more case studies in Appendix F. to meet the requirements of some challenging scenarios, especially the ones in particular domain with long descriptions, e.g., report for radiology. Although some approaches directly use IC models (Vinyals et al., 2015; Lu et al., 2017; Rennie et al., 2017; Anderson et al., 2018) for radiology report generation (RRG), some studies improve conventional AR solutions with co-attentions (Jing et al., 2018), memory networks (Chen et al., 2020, 2021), reinforcement learning (Qin & Song, 2022), and useful features in different modalities (Li et al., 2018; Wang et al., 2022; Tanida et al., 2023; Hou et al., 2023; Huang et al., 2023), which are still limited to guarantee comprehensive and coherent texts in the generated result. 
With recent advances in large language models (LLMs) (Touvron et al., 2023a,b) and diffusion model (Ho et al., 2020) that both illustrate outstanding generation ability, these techniques have been employed to enhance the cross-modal content generation process (Li et al., 2023; Zhu et al., 2023; Liu et al., 2023) as well as report generation in the medical domain (Omkar Thawkar & Khan, 2023; Tu et al., 2023). Particularly, owing to the discrete nature of texts, it is hard to directly applying standard diffusion model for text generation, some studies are thus proposed to do so through continuous representations, e.g., embedding (Li et al., 2022a; Gong et al., 2023) and bit representations (Chen et al., 2023; Luo et al., 2022). Compared with all aforementioned work, our approach offers a generic solution for I2LTG, with an effective design of using diffusion networks for non-AR text generation, and proves the validity of employing semantic guidance to enhance the coherence of texts when generating long descriptions for an image. 6 CONCLUSION In this paper, we propose a diffusion-based model, SEMDIFF, with memory networks for I2LTG, which firstly captures salient semantic concepts in image, then utilizes memory networks to enhance such concepts, and finally employs diffusion networks to incorporate them to facilitate the long-text generation process. SEMDIFF offers a solution to incorporating external guidance into diffusion networks, effectively addresses a series of issues such as incoherence problem in non-AR text generation, especially for long texts. Experiments on three public datasets and COCO-LT illustrate the superiority of our approach compared to state-of-the-art solutions. We also propose a new dataset COCO-LT dataset with over 54K image-long text pairs to further evaluate our approach on I2LTG, which further confirms its long-text generation ability as that proved on the three public datasets. Further analyses investigate the effect of our approach in accommodating semantic concepts into diffusion networks, indicating that our SEMDIFF design of incorporating external guidance has its potential of being utilized as a benchmark framework for similar tasks in future studies. REFERENCES Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. In CVPR, pp. 6077–6086, 2018. Jun Chen, Han Guo, Kai Yi, Boyang Li, and Mohamed Elhoseiny. VisualGPT: Data-Efficient Adaptation of Pretrained Language Models for Image Captioning. In CVPR, pp. 18009–18019, 2022. Ting Chen, Ruixiang Zhang, and Geoffrey Hinton. Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning. In ICLR, pp. 1–23, 2023. Zhihong Chen, Yan Song, Tsung-Hui Chang, and Xiang Wan. Generating Radiology Reports via Memory-driven Transformer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1439–1449, Online, November 2020. Zhihong Chen, Yaling Shen, Yan Song, and Xiang Wan. Cross-modal Memory Networks for Radiology Report Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 5904–5914, Online, August 2021. Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. Meshed-memory Transformer for Image Captioning. In CVPR, pp. 10578–10587, 2020. 
Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lin Liang, Zhe Gan, Lijuan Wang, Yezhou Yang, and Zicheng Liu. Injecting Semantic Concepts into End-to-End Image Captioning. In CVPR, pp. 18009–18019, 2022. Junlong Gao, Xi Meng, Shiqi Wang, Xia Li, Shanshe Wang, Siwei Ma, and Wen Gao. Masked non-autoregressive image captioning. CoRR, abs/1906.00717, 2019a. Junlong Gao, Xi Meng, Shiqi Wang, Xia Li, Shanshe Wang, Siwei Ma, and Wen Gao. Masked non-autoregressive image captioning. CoRR, abs/1906.00717, 2019b. Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models. In ICLR, pp. 1–20, 2023. Longteng Guo, Jing Liu, Xinxin Zhu, Xingjian He, Jie Jiang, and Hanqing Lu. Non-Autoregressive Image Captioning with Counterfactuals-Critical Multi-Agent Learning. In International Joint Conference on Artificial Intelligence, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR ’16, pp. 770–778, June 2016. Simao Herdade, Armin Kappeler, Kofi Boakye, and Joao Soares. Image Captioning: Transforming Objects into Words. Red Hook, NY, USA, 2019. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. NeurIPS, 33:6840–6851, 2020. Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8): 1735–1780, 1997. Wenjun Hou, Kaishuai Xu, Yi Cheng, Wenjie Li, and Jiang Liu. ORGAN: Observation-Guided Radiology Report Generation via Tree Reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8108–8122, Toronto, Canada, July 2023. Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. Scaling Up Vision-Language Pretraining for Image Captioning. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 17959–17968, 2022.
LTHWoQ9ac1
Sequential recourse: the problem in Eq (7) doesn’t seem like the real problem that needs to be solved. The set U_P updates after every step, and this changes the argmax A and therefore w_ij. Am I missing anything?
Cost-Adaptive Recourse Recommendation by Adaptive Preference Elicitation Anonymous authors Paper under double-blind review Abstract Algorithmic recourse recommends a cost-efficient action to a subject to reverse an unfavorable machine learning classification decision. Most existing methods in the literature generate recourse under the assumption of complete knowledge about the cost function. In real-world practice, subjects could have distinct preferences, leading to incomplete information about the underlying cost function of the subject. This paper proposes a two-step approach integrating preference learning into the recourse generation problem. In the first step, we design a question-answering framework to refine the confidence set of the Mahalanobis matrix cost of the subject sequentially. Then, we generate recourse by utilizing two methods: gradient-based and graph-based cost-adaptive recourse that ensures validity while considering the whole confidence set of the cost matrix. The numerical evaluation demonstrates the benefits of our approach over state-of-the-art baselines in delivering cost-efficient recourse recommendations. 1 Introduction Many machine learning algorithms are deployed to aid significant decisions in various domains. These decisions might have a direct or indirect influence on people’s lives, especially in the case of high-profile applications (Verma et al., 2020) such as job hiring (Harris, 2018; Pessach et al., 2020), bank loan (Wang et al., 2020; Turkson et al., 2016) and medical diagnosis (Fatima et al., 2017; Latif et al., 2019). Thus, it’s imperative to develop methods to explain the prediction of machine learning models. For instance, when a person applies for a job and is rejected by a predictive model deployed by the employer, the applicant should be notified of the reasoning behind the negative decision and what they could do to be hired. Recently, algorithmic recourse has become a powerful tool for explaining machine learning (ML) models. Recourse refers to the actions a person should take to achieve an alternative predicted outcome, and it is also known in the literature as a counterfactual explanation. In the case of job hiring, recourse should be individualized suggestions such as “get two more engineering certificates” or “complete one more personal project.” When a company suggests a recourse to a subject, this recourse must be valid because the company should accept all applicants who completely implement the suggestions provided in the recommended recourse. Throughout this paper, we use “subject” to refer to the individual who is subject to the prediction of the algorithm. In the context of our job-hiring example, “subject” refers to the job applicant who was rejected by the company. Several approaches have been proposed to generate recourse for a machine learning model prediction (Karimi et al., 2022; Verma et al., 2020; Stepin et al., 2021). Wachter et al. (2018) used gradient information of the underlying model to generate a counterfactual closest to the input. Ustun et al. (2019) introduced an integer programming problem to find the minimal and actionable change for an input instance. Pawelczyk et al. (2020) leveraged the ideas from manifold learning literature to generate counterfactuals on the high-density data region. Karimi et al. (2020, 2021) generated counterfactual as a sequence of interventions based on a pre-defined causal graph. 
These aforementioned approaches all assume that all subjects have the same cost function, for example, the $l_1$ distance (Ustun et al., 2019; Upadhyay et al., 2021; Slack et al., 2021; Ross et al., 2021) or define the same prior causal graph for all subjects (Karimi et al., 2020, 2021). This assumption results in two subjects with identical attributes receiving the same recourse recommendation. Unfortunately, this recourse recommendation is unrealistic in practice because having identical attributes... does not necessarily guarantee that the two subjects will have identical preferences. Indeed, human preferences are strongly affected by many unobservable factors, including historical and societal experiences, which are hardly encoded in the attributes. Thus, the cost functions could be significantly different even between subjects with identical attributes, yet this difference is rarely considered in the recourse generation framework (Yadav et al., 2021). To mitigate these issues, De Toni et al. (2023) proposed a human-in-the-loop framework to generate a counterfactual explanation uniquely suited to each subject. The proposed method first fixes the initialized causal graph and iteratively learns the subject’s specific cost function. Recourse is generated by a reinforcement learning approach that searches for a suitable sequence of interventions. The disadvantage of this approach is that it requires a pre-defined causal graph, which is rarely available in practice (Verma et al., 2020). Besides, Rawal & Lakkaraju (2020) employed the Bradley-Terry model to estimate a universal cost function and then utilized the user input to generate personalized recourse for the user. However, this method is additive in features; therefore, its ability to recover the underlying causal graph remains problematic. Following the same line of work, Yetukuri et al. (2023) captures user preferences via three soft constraints: scoring continuous features, bounding feature values, and ranking categorical features. This method generates recourse via a gradient-based approach. However, the fractional-score concept for user preference might not be as straightforward, especially when the data has many continuous features. To resolve these problems, we propose a preference elicitation framework that learns the subject’s cost function from pairwise comparisons of possible recourses. Compared to De Toni et al. (2023), our framework does not require the causal graph as input, and compared to Rawal & Lakkaraju (2020) and Yetukuri et al. (2023), our framework can perform well even when the dimension of the feature space grows large. This paper contributes by: - proposing in Section 3 an adaptive preference learning framework to learn the subject’s cost function parametrized by the cost matrix of a Mahalanobis distance. This framework initializes with an uninformative confidence set of possible cost matrices. In each round, it determines the next question by finding a pair of recourses corresponding to the most effective cut of the confidence set, that is, a cut that slices the incumbent confidence set most aggressively. The incumbent confidence set shrinks along iterations. We terminate the questioning upon reaching a predefined number of inquiries. The final confidence set is employed for recourse generation. - proposing in Section 4 two methods for generating recourse under various assumptions of the machine learning models. These methods will consider explicitly the terminal confidence set about the subject’s cost matrix. 
If the model is white-box and differentiable, we can use the cost-adaptive gradient-based recourse-generation method that generates cost-adaptive recourse. Otherwise, we can use the graph-based method to generate the sequential recourse. Section 5 reports our numerical results. In Appendix A, we also extend our framework to cope with potential inconsistencies in subject responses and extend the heuristics from pairwise comparison to multiple-option questions. All proofs are relegated to the appendix. Notations. Given an integer $d$, we use $\mathbb{S}^d$ and $\mathbb{S}_+^d$ to denote the space of $d$-by-$d$ symmetric matrices and $d$-by-$d$ symmetric positive definite matrices, respectively. The identity matrix is denoted by $I$. The inner product between two matrices $A, B \in \mathbb{S}^d$ is $\langle A, B \rangle = \sum_{i,j} A_{ij}B_{ij}$, and we write $A \preceq B$ to denote that $B - A \in \mathbb{S}_+^d$. The set of integers from 1 to $N$ is $\llbracket N \rrbracket$. 2 Problem Statement and Solution Overview We are given a binary classifier $C_\theta : \mathbb{R}^d \to \{0, 1\}$ and access to the training dataset containing $N + M$ instances $x_i \in \mathbb{R}^d$, $i = 1, \ldots, N + M$. The dataset is split into two parts: - a positive dataset $D_1 = \{x_1, \ldots, x_N\}$ containing instances with $C_\theta(x_i) = 1 \forall x_i \in D_1$. - a negative dataset $D_0 = \{x_{N+1}, \ldots, x_{N+M}\}$ containing all instances that have the negative predicted outcome, thus $C_\theta(x_i) = 0 \forall x_i \in D_0$. Given a subject with input $x_0 \in \mathbb{R}^d$ with a negative predicted outcome $C_\theta(x_0) = 0$, we make the following assumption on the cost function of this subject. Assumption 2.1. The subject \( x_0 \) has a Mahalanobis cost function of the form \( c_{A_0}(x, x_0) = (x - x_0)^T A_0 (x - x_0) \) for some symmetric, positive definite matrix \( A_0 \in S^d_{++} \). We provide two possible justifications for the aforementioned assumption in Appendix D. First, we describe a sequential control process that affects feature transitions of a subject \( x_0 \) towards a recourse \( x_r \) while minimizing the cost of efforts. We formalize this problem as a Linear Quadratic Regulator, and then we prove that the optimal cost function has the Mahalanobis form, see Section D.1. Second, Appendix D.2 establishes a connection between the linear Gaussian structural causal model and the Mahalanobis cost function. We show that we can recover the Mahalanobis cost preference model with \( A_0 \) corresponding to the precision matrix of the deviation under linear Gaussian structural equation assumption. In the above cost function, \( A_0 \) is the ground-truth matrix specific for subject \( x_0 \), but it remains elusive to the recourse generation framework. We aim to find \( x_r \) which has a positive predicted outcome \( C_\theta(x_r) = 1 \) and minimizes the cost \( c_{A_0}(x_r, x_0) \). Because the matrix \( A_0 \) is unknown, we propose an adaptive preference learning approach (Bertsimas & O’Hair [2013] Vayanos et al. [2020]) to approximate the actual cost function \( c_{A_0}(x, x_0) \). Our overall approach is as follows: We have a total of \( T \) question-answer rounds for cost elicitation. In each round, we choose a pair \((x_i, x_j)\) from the positive dataset \( D_1 \). We then ask the subject the following binary question: “Between two possible recourses \( x_i \) and \( x_j \), which one do you prefer to implement?”. 
The answer from the subject takes one of the three answers: \( x_i \) or \( x_j \) or indifference. The subject’s answer can be used to learn a binary preference relation \( P \). If \( x_i \) is preferred to \( x_j \), then we denote \( x_i P x_j \); if the subject is indifferent between \( x_i \) and \( x_j \), then we have simultaneously \( x_i P x_j \) and \( x_j P x_i \). Because both \( x_i \) and \( x_j \) have positive predicted outcomes, we assume that the subject’s preference is solely based on which recourse requires less effort. Assume that \( x_i P x_j \), then \( A_0 \) should satisfy \[ (x_i - x_0)^T A_0 (x_i - x_0) \leq (x_j - x_0)^T A_0 (x_j - x_0). \] However, to model possible error in the judgment of the subject and to accommodate the indifference answer, we will equip a positive margin \( \varepsilon > 0 \), and we have \( x_i P x_j \) if and only if: \[ (x_i - x_0)^T A_0 (x_i - x_0) \leq (x_j - x_0)^T A_0 (x_j - x_0) + \varepsilon. \] Let us denote the following matrix \( M_{ij} \in S^d \) as \[ M_{ij} = x_i x_i^T - x_j x_j^T + (x_j - x_i)x_0^T + x_0(x_j - x_i)^T, \] then we can rewrite (2) in the form \( \langle A_0, M_{ij} \rangle \leq \varepsilon \). Let \( P \) be a set of ordered pairs representing the information collected so far about the preference of the subject: \[ P = \{(i, j) \in [N] \times [N] : x_i P x_j \}. \] For any preference set \( P \), we can define \( U_P \) as the set of possible cost matrices \( A \) that is consistent with the revealed preferences \( P \): \[ U_P \triangleq \{ A \in S^d_+ : \langle A, M_{ij} \rangle \leq \varepsilon \forall (i, j) \in P \}, \] then at any time, we have \( A_0 \in U_P \). Thus, \( U_P \) is considered the confidence set of the cost matrix from the viewpoint of the recourse generation framework. Our learning framework aims to reduce the size of \( U_P \), hoping to pinpoint a small region where \( A_0 \) may reside. Afterward, we use a recourse generation method adapted to the confidence set \( U_P \). We present the overall flow of our framework in Figure 1. In general, our framework addresses several questions of the cost-adaptive recourse-generation approach: 1. What are the questions to ask the subject? If \( N \) is large, asking the subject exhaustively for \( O(N^2) \) pairwise comparisons is impossible. Thus, this question aims to find the pair \( x_i \) and \( x_j \) such that \( (i, j) \notin P \) and \( (j, i) \notin P \), and that adding either one of these two ordered pairs to \( P \) will bring the largest amount of information as possible (in the sense of narrowing down the set \( U_P \)). 2. How to recommend a recourse \( x_r \) that minimizes the cost, knowing the confidence set \( U_P \)? 3. What happens if there is inconsistency in the subject’s preferences? For example, if there exist three distinct indices \( (i, j, k) \) such that the subject states \( x_i P x_j \), \( x_j P x_k \) and \( x_k P x_i \). The first and third questions are the fundamental questions in preference learning literature (Lu & Shen, 2021; Bertsimas & O’Hair, 2013; Vayanos et al., 2020). In the marketing literature (Toubia et al., 2003, 2004) or recommendation systems literature (Zhao et al., 2016; Rashid et al., 2008; Pu et al., 2012), the preference learning framework aims to recommend products that maximize the utility or preference of subjects. 
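For concreteness, the matrix \( M_{ij} \) in (3) and the membership test for the confidence set \( U_P \) can be written in a few lines. This is only an illustrative sketch (NumPy; the names `X`, `preferences`, and `eps` are placeholders for the positive instances, the revealed pairs, and the margin \( \varepsilon \)), not the authors' implementation.

```python
import numpy as np

def pairwise_matrix(x_i, x_j, x_0):
    """M_ij from Eq. (3): for symmetric A, <A, M_ij> equals
    (x_i - x_0)^T A (x_i - x_0) - (x_j - x_0)^T A (x_j - x_0)."""
    return (np.outer(x_i, x_i) - np.outer(x_j, x_j)
            + np.outer(x_j - x_i, x_0) + np.outer(x_0, x_j - x_i))

def in_confidence_set(A, preferences, X, x_0, eps):
    """Membership test for U_P: A is consistent with every revealed
    preference (i, j) up to the margin eps, i.e. <A, M_ij> <= eps."""
    return all(np.sum(A * pairwise_matrix(X[i], X[j], x_0)) <= eps
               for (i, j) in preferences)
```

The element-wise product sum realizes the inner product \( \langle A, M_{ij} \rangle = \sum_{i,j} A_{ij} (M_{ij})_{ij} \) used throughout the paper.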
In the adaptive questionnaire framework, we would like to ask questions that give us the most information regardless of the response, because the responses to each question are unknown in advance. Moreover, we would like to select the next comparison question so as to maximize the acquired information and reduce the size of the confidence set as quickly as possible (Bertsimas & O’Hair, 2013; Vayanos et al., 2020). Guided by these ideas, we integrate the adaptive preference learning framework into the recourse generation problem. We show the overall flow of our framework in Figure 1. Our approach consists of two phases: preference elicitation and recourse generation. Next, we present the preference elicitation phase in Section 3 and the recourse-generation methods in Section 4.

**Figure 1**: Overall flow of our cost-adaptive recourse recommendation framework. The subject inputs an instance $x_0$. In each of $T$ rounds of question-answer, we first find the Chebyshev center of the set $\mathcal{U}_P$, then select the next question that minimizes the distance to the Chebyshev center. We provide two methods to generate the cost-adaptive recourse: gradient-based and graph-based.

### 3 COST IDENTIFICATION VIA ADAPTIVE PAIRWISE COMPARISONS

#### 3.1 Finding the Chebyshev Center

First, we observe that, without any loss of generality, we can impose an upper-bound constraint $A \preceq I$ on the set $\mathcal{U}_P$. Indeed, the inequality (1) is invariant under any positive scaling of the matrix $A_0$, and thus we can normalize $A_0$ so that it has a maximum eigenvalue of one. Adding $A \preceq I$ makes the set $\mathcal{U}_P$ bounded. Given a bounded set $\mathcal{U}_P$, we find the Chebyshev center of $\mathcal{U}_P$ for each question-answer round. Then, we find the question prescribing a hyperplane closest to this center; this hyperplane can thus be considered the most aggressive cut. Notice that a question involving $x_i$ and $x_j$ can be represented by the hyperplane $\langle A, M_{ij} \rangle = 0$. The confidence set $\mathcal{U}_P$ is simply a polytope in the space of positive definite matrices. We first consider finding the Chebyshev center of the set $\mathcal{U}_P$. For any bounded set with a non-empty interior, the Chebyshev center is the center of a ball with the largest radius inside the set. Thus, given a confidence set $\mathcal{U}_P$, its Chebyshev center represents a safe point estimate of the true cost matrix, and the center $A_c^*$ together with the corresponding radius $r^*$ is the optimal solution of the problem
$$
(A_c^*, r^*) = \arg \max_{A_c \in S^d_+, \, r \in \mathbb{R}_+} \left\{ r : \{ A : \|A - A_c\|_F \leq r \} \subseteq \mathcal{U}_P \right\}.
$$
For our problem, the Chebyshev center can be found by solving a semidefinite program resulting from the following theorem.

**Figure 2**: Illustration of the Chebyshev center. Black lines represent the hyperplanes $\langle A, M_{ij} \rangle = \varepsilon$ for $(i,j) \in P$ defining the boundaries of the polytope $\mathcal{U}_P$. The ball centered at the Chebyshev center $A_c^*$ with radius $r$ is the largest inscribed ball of $\mathcal{U}_P$.

Theorem 3.1 (Chebyshev center). Suppose that \( U_P \) has a non-empty interior.
The Chebyshev center \( A^*_c \) of the set \( U_P \) can be found by solving the following semidefinite program
\[
\begin{aligned}
\max \quad & r \\
\text{s.t.} \quad & A_c \in S^d_+, \quad r \in \mathbb{R}_+, \quad A_c \preceq I, \\
& \langle A_c, M_{ij} \rangle + r \| M_{ij} \|_F \leq \varepsilon \quad \forall (i,j) \in P.
\end{aligned}
\tag{5}
\]

3.2 Recourse-Pair Determination

Finding the next question to ask the subject is equivalent to finding two indices \((i, j) \in [N] \times [N]\), corresponding to two recourses \(x_i\) and \(x_j\) in the positive dataset \(D_1\), such that the corresponding hyperplane \( \langle A, M_{ij} \rangle = 0 \) is as close to the Chebyshev center \(A^*_c\) as possible. This is equivalent to solving the minimization problem
\[
\min_{(i,j) \in [N] \times [N]} \frac{|\langle A^*_c, M_{ij} \rangle|}{\| M_{ij} \|_F},
\]
where the matrix \(M_{ij}\) is calculated as in (3). The objective function of the above problem is simply the projection distance of \(A^*_c\) to the hyperplane \( \langle A, M_{ij} \rangle = 0 \) under the Frobenius norm.

Similar-cost heuristic. An exhaustive search over all pairs of indices \((i, j)\) has \(O(N^2)\) complexity. This search may become too expensive for large datasets because we must conduct one separate search at each round. To alleviate this burden, we propose a heuristic that can produce reasonable questions in a limited time. This heuristic is based on the following observation: given an incumbent Chebyshev center \(A^*_c\), two valid recourses \(x_i\) and \(x_j\) are more comparable to each other if their costs measured with respect to \(A^*_c\) are close to each other, that is, \(c_{A^*_c}(x_i, x_0) \approx c_{A^*_c}(x_j, x_0)\). If their costs are too different, for example, \(c_{A^*_c}(x_i, x_0) \ll c_{A^*_c}(x_j, x_0)\), then it is highly likely that the subject will prefer \(x_i\) to \(x_j\) uniformly over the set of possible weighting matrices in \(U_P\). Exploiting this observation, we consider the following similar-cost heuristic:

- Step 1: Compute the distances from \(x_i\) to \(x_0\): \(s_i = (x_i - x_0)^\top A^*_c (x_i - x_0)\) for all \(i \in [N]\),
- Step 2: Sort \(s_i\) in non-decreasing order. The sorted vector is denoted by \((s_{[1]}, \ldots, s_{[N]})\),
- Step 3: For each \(i = 1, \ldots, N - 1\), choose the pair of adjacent-cost samples \(x_{[i]}\) and \(x_{[i+1]}\) corresponding to \(s_{[i]}\) and \(s_{[i+1]}\), then compute the projection distance of the incumbent center \(A^*_c\) to the hyperplane \( \langle M_{[i],[i+1]}, A \rangle = 0 \),
- Step 4: Pick the pair \(([i], [i + 1])\) that induces the smallest projection distance in Step 3.

In Step 2, sorting costs \(O(N \log N)\). In Step 3, we only need to compute \(N - 1\) projection distances by looking at pairs of adjacent costs, contrary to the total number of \(O(N^2)\) pairs. The comparison between the similar-cost heuristic and the exhaustive search is relegated to Appendix B.

4 Cost-Adaptive Recourse Recommendation

Given the subject input \(x_0\), this section explores two generalizations to generate single and sequential recourses, adapted to the terminal confidence set \(U_P\) of the cost metric. In Section 4.1, we generalize the gradient-based counterfactual generation method in Wachter et al. (2018). In Section 4.2, we generalize the graph-based counterfactual generation method in Poyiadzi et al. (2020).
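Before detailing the two recourse-generation methods, we note that one elicitation round of Section 3 (the SDP (5) followed by the similar-cost heuristic) can be summarized in a short sketch. This is an illustration only: CVXPY is used here as a generic interface to an SDP solver (the paper reports using MOSEK), `pairwise_matrix` is the helper sketched earlier, and `M_list`, `X`, `eps` are placeholder names.

```python
import cvxpy as cp
import numpy as np

def chebyshev_center(M_list, eps, d):
    """Solve the SDP (5): center of the largest Frobenius-norm ball inside U_P."""
    A_c = cp.Variable((d, d), PSD=True)
    r = cp.Variable(nonneg=True)
    constraints = [A_c << np.eye(d)]
    constraints += [cp.trace(A_c @ M) + r * np.linalg.norm(M, "fro") <= eps
                    for M in M_list]
    cp.Problem(cp.Maximize(r), constraints).solve()
    return A_c.value

def next_question(A_c, X, x_0):
    """Similar-cost heuristic: among pairs with adjacent costs under A_c,
    pick the one whose hyperplane <A, M_ij> = 0 is closest to A_c."""
    costs = np.array([(x - x_0) @ A_c @ (x - x_0) for x in X])
    order = np.argsort(costs)
    best_pair, best_dist = None, np.inf
    for a, b in zip(order[:-1], order[1:]):
        M = pairwise_matrix(X[a], X[b], x_0)
        dist = abs(np.sum(A_c * M)) / np.linalg.norm(M, "fro")
        if dist < best_dist:
            best_pair, best_dist = (int(a), int(b)), dist
    return best_pair
```

After the subject answers, the corresponding matrix \( M_{ij} \) (or \( M_{ji} \)) is appended to `M_list` and the loop repeats for the next round.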
4.1 Gradient-based Cost-adaptive Single Recourse

Consider a machine learning model \(f_\theta : \mathbb{R}^d \to (0, 1)\) that outputs the probability of being classified in the favorable group. The binary classifier \(C_\theta : \mathbb{R}^d \to \{0, 1\}\) takes the form of a threshold policy
\[
C_\theta(x) = \begin{cases} 1 & \text{if } f_\theta(x) \geq 0.5, \\ 0 & \text{otherwise}, \end{cases}
\]
where we have used a threshold of 0.5, similar to the setting in Wachter et al. (2018). We suppose that we have access to the probability output $f_\theta$. Let $l$ be a differentiable loss function that minimizes the gap between $f_\theta(x)$ and the decision threshold 0.5; one can think of $l(f_\theta(x), 1)$ as the term that promotes the validity of the recourse. Given a weight $\lambda \geq 0$ which balances the trade-off between the validity and the (worst-case) cost, we can generate a recourse for an input instance $x_0$ by solving
$$\min_{x \in X} \left\{ l(f_\theta(x), 1) + \lambda \max_{A \in U_P} (x - x_0)^\top A (x - x_0) \right\}. \tag{6}$$
A practical choice for the loss function is the quadratic loss $l(f_\theta(x), 1) = (f_\theta(x) - 0.5)^2$, which is a differentiable function of $x$. Under a mild condition on the uniqueness of the optimal solution to the inner maximization problem, the cost term in the objective of (6) is also differentiable. Thus, one can invoke a (projected) gradient descent algorithm to solve (6) and find the recourse. Algorithm 1 proceeds iteratively to solve problem (6). In each iteration, we first find an optimal matrix $A^*$ of the inner maximization problem with a solver such as Mosek (MOSEK ApS, 2019), and then we take a gradient step in the variable $x$ using the computed gradient. The next incumbent solution is obtained by projecting onto the set $X$, where $\Pi_X$ denotes the projection onto $X$. Furthermore, similar to Wachter et al. (2018), we can add an early stopping criterion to Algorithm 1. For example, we can stop the algorithm at iteration $t$ if $C_\theta(x_t) = 1$.

### 4.2 Graph-based Cost-adaptive Sequential Recourse

In Section 4.1, we introduced a gradient-based recourse-generation method. However, this approach requires access to gradient information, which is restricted in some real-world applications (Ilyas et al., 2018; Alzantot et al., 2019). In this section, we present a model-agnostic recourse-generation approach that leverages the ideas of FACE (Poyiadzi et al., 2020). After $T$ rounds of questions in Section 3, we solve problem (5) to find the Chebyshev center $A^*$ of the terminal confidence set $U_P$.

**Graph construction.** We first build a directed graph $G = (V, E)$ that represents the geometry of the available data: each node $x_i \in V = \{x_0\} \cup D_1 \cup D_0$ corresponds to a data sample, and an edge $(x_i, x_j) \in E$ represents a feasible transition from node $x_i$ to node $x_j$. We compute the edge weight $w_{ij} = c_{A^*}(x_i, x_j)$ based on the Mahalanobis cost function associated with the matrix $A^*$. Finally, $w_{ij} = \infty$ for $(x_i, x_j) \notin E$.

**Sequential recourse generation.** Recall that $D_1$ is the set of all vertices with favorable predicted outcomes. A one-step recourse recommendation suggests a single continuous action from $x_0$ to $x_r$ (e.g., Ustun et al., 2019; Mothilal et al., 2020). A sequential recourse is a directed path from the input instance $x_0$ to a node $x_r \in D_1$; each transition in the path is a concrete action that the subject has to implement to move towards $x_r$.
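A minimal sketch of this graph construction and of the lowest-cost-path selection discussed in the remainder of this section is given below. It assumes that the set of feasible transitions `edges` has already been determined (e.g., by a FACE-style feasibility rule, which is not specified here), that node 0 stands for \( x_0 \), and that `labels[n]` is the classifier's prediction for node \( n \); it is a sketch under these assumptions, not the authors' implementation.

```python
import networkx as nx
import numpy as np

def sequential_recourse(X, labels, A_star, edges):
    """Build the directed graph with Mahalanobis edge weights
    w_ij = (x_i - x_j)^T A* (x_i - x_j), then return the cheapest
    path from x_0 (node 0) to any positively predicted node."""
    G = nx.DiGraph()
    for i, j in edges:                              # feasible transitions i -> j
        diff = X[i] - X[j]
        G.add_edge(i, j, weight=float(diff @ A_star @ diff))
    lengths, paths = nx.single_source_dijkstra(G, source=0, weight="weight")
    # Candidate targets: reachable nodes with a favorable prediction.
    positives = [n for n in lengths if n != 0 and labels[n] == 1]
    target = min(positives, key=lengths.get)
    return paths[target], lengths[target]
```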
A sequential recourse has several advantages compared to one-step recourses: plausibility and sparsity. In real-world applications, sequential steps are more plausible than a one-step continuous change (Ramakrishnan et al., 2020; Singh et al., 2021). Moreover, recent work shows that sequential recourse promotes sparsity, allowing subjects to modify a few features at each step (Verma et al., 2022). For illustration purposes, we present an example of sequential recourse in Appendix B. The cost of a sequential recourse is computed as the sum of all the edge weights in the path.

**Figure 3**: The illustration of $G$, showing negatively predicted samples as red circles and positively predicted samples as green circles. The input instance $x_0$ is a gray circle. The terminal edges and unreachable nodes of flows in $F$ are blue edges and green nodes with white crosses, respectively.

Thus, we can recommend a sequential and actionable recourse by finding a path that originates from \( x_0 \) and ends at a node \( x_r^* \in D_1 \) with the lowest path cost.

**Worst-case sequential recourse generation.** After conducting \( T \) rounds of questioning in Section 3, we obtain the confidence set \( U_P \) for the parameter \( A_0 \). However, the precise value of \( A_0 \) remains unknown. In this section, we focus on minimizing the total cost of the sequential recourse subject to the most unfavorable scenario of \( A_0 \) within the final confidence set. Let \( F \) denote the set containing all possible flows from the input subject \( x_0 \) to a node in \( D_1 \). Mathematically, we can write \( F \) as
\[
F = \left\{ f_{ij} \in \{0, 1\} \;\; \forall (x_i, x_j) \in E \;:\;
\sum_{(x_0, x_j) \in E} f_{0j} - \sum_{(x_j, x_0) \in E} f_{j0} = 1, \quad
\sum_{(x_i, x_j) \in E} f_{ij} = 0 \;\; \forall x_i \in D_1, \quad
\sum_{(x_j, x_i) \in E} f_{ji} - \sum_{(x_i, x_j) \in E} f_{ij} = 0 \;\; \forall x_i \in D_0
\right\}.
\]
Figure 3 illustrates the visual representation of the set \( F \). The first constraint ensures that the total flow out of \( x_0 \) is precisely one. The second constraint enforces the terminal condition for flows, halting the flow once it reaches the first node in the positive class. In the visual depiction in Figure 3, the terminal edges of flows are visually distinguished as blue edges. Consequently, positive nodes without direct connections from negative nodes are not part of any flows, and they are identifiable as green nodes with white crosses in Figure 3. The third constraint imposes flow conservation at each negatively predicted node. For any \( f \in F \), we have \( f_{ij} = 1 \) if the edge \((x_i, x_j)\) constitutes one (actionable) step in the path. The optimal cost-robust sequential recourse is defined to be the optimal flow of the min-max problem
\[
\min_{f \in F} \max_{A \in U_P} \sum_{(x_i, x_j) \in E} w_{ij}(A)f_{ij},
\tag{7}
\]
where the edge weight depends explicitly on the weighting matrix \( A \) as \( w_{ij}(A) = (x_i - x_j)^T A (x_i - x_j) \). The next proposition asserts an equivalent form of (7) as a single-layer minimization problem.

**Proposition 4.1 (Equivalent formulation).** Problem (7) is equivalent to
\[
\begin{aligned}
\min & \quad \langle U, I \rangle + \varepsilon \sum_{(i,j) \in P} t_{ij} \\
\text{s.t.} & \quad f \in F, \quad t_{ij} \geq 0 \quad \forall (i,j) \in P, \quad U \in S^d_+ \\
& \quad U + \sum_{(i,j) \in P} M_{ij}t_{ij} \succeq \sum_{(x_i, x_j) \in E} (x_i - x_j)(x_i - x_j)^T f_{ij}.
\end{aligned}
\tag{8}
\]
Problem (8) is a binary semidefinite programming problem, which is challenging to solve due to its combinatorial nature. Consequently, finding an optimal sequential recourse can be a daunting task.
To address this issue, we propose an alternative approach. Specifically, we associate the weight of each edge \((x_i, x_j)\) with its maximum cost taken over all possible values of \( A \) in the set \( U_P \):
\[
\begin{aligned}
\bar{w}_{ij} = \max_{A \in U_P} w_{ij}(A) = \max_{A} \quad & \langle A, (x_i - x_j)(x_i - x_j)^T \rangle \\
\text{s.t.} \quad & 0 \preceq A \preceq I, \quad \langle A, M_{i'j'} \rangle \leq \varepsilon \quad \forall (i', j') \in P.
\end{aligned}
\]
Given a graph \( G \) with the worst-case weight matrix \([\bar{w}_{ij}]\), we find the shortest paths from \( x_0 \) to each positively predicted node in \( D_1 \). The recommended sequential recourse is the path that originates from \( x_0 \) and ends at the node \( x_r^* \in D_1 \) with the lowest path cost.

## 5 Numerical Experiments

We evaluate our method, Cost-Adaptive Recourse Recommendation by Adaptive Preference Elicitation (ReAP), using synthetic data and seven real-world datasets: German, Bank, Student, Adult, COMPAS, GMC, and HELOC. Notably, these datasets are commonly used in the recourse literature (Verma et al., 2020; Upadhyay et al., 2021; Mothilal et al., 2020). In the main paper, we present the results for the Synthetic, German, Bank, and Student datasets. The results for the other datasets can be found in the appendix. We compare our approach against the recourse-generation baselines implemented in CARLA (Pawelczyk et al., 2021). For the gradient-based single recourse method in Section 4.1, we compare our method to Wachter (Wachter et al., 2018) and DiCE (Mothilal et al., 2020). For the graph-based sequential recourse method in Section 4.2, we compare our method to FACE (Poyiadzi et al., 2020). Code for the experiments in the main paper is provided in the supplementary material. In Appendix B, we present the detailed implementation and numerical results for additional datasets, provide benchmark performance for the proposed heuristics, and report an additional comparison against PEAR (De Toni et al., 2023).

5.1 Experimental Setup

Data preprocessing. Following Mothilal et al. (2020), we preprocess the data using the min-max standardizer for continuous features and one-hot encoding for categorical features.

Classifier. For each dataset, we perform an 80-20 uniform split (80% for training) of the original dataset. Then we train an MLP classifier \( C_\theta \) on the training data. We use the test data to benchmark the performance of different recourse-generation methods.

Cost matrix generation. We generate 10 ground-truth matrices \( A_0 \) with the following procedure: first, we generate a matrix \( A \in \mathbb{R}^{d \times d} \) of random, standard Gaussian elements, where \( d \) is the dimension of \( x_0 \). Then we compute \( A_0 = AA^\top \) and normalize \( A_0 \) to have a unit spectral radius by taking \( A_0 \leftarrow A_0 / \sigma_{\text{max}}(A_0) \), where \( \sigma_{\text{max}} \) is the maximum eigenvalue function. For an input \( x_0 \) and a ground-truth matrix \( A_0 \), we choose \( T \) questions using the similar-cost heuristic in Section 3.2 to obtain the set \( U_P \). After \( T \) rounds of question-answers, we solve (5) using MOSEK to find the Chebyshev center \( A^* \) of the terminal confidence set \( U_P \). Then, we generate recourse using the gradient-based method in Section 4.1 and the graph-based method in Section 4.2. Note that with \( T = 0 \) we have not asked any questions; thus, \( A^* = \frac{1}{2}I \) (an uninformative estimate), and all algorithms share the same cost function.
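The ground-truth cost matrices described above can be sampled with a few lines; the sketch below follows the stated procedure exactly, while the dimension `d = 10` and the random seed are placeholders used only for illustration.

```python
import numpy as np

def ground_truth_cost_matrix(d, rng):
    """A_0 = A A^T with A standard Gaussian, normalized to unit spectral radius."""
    A = rng.standard_normal((d, d))
    A0 = A @ A.T
    return A0 / np.linalg.eigvalsh(A0).max()   # divide by the largest eigenvalue

rng = np.random.default_rng(0)
A0_list = [ground_truth_cost_matrix(d=10, rng=rng) for _ in range(10)]
```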
In the uninformative regime where no questions have been asked (\( T = 0 \)), the proposed worst-case sequential recourse generation in Section 4.2 demonstrates its effectiveness, as it manages to provide an acceptable recourse even when the worst case is taken over the entire domain of matrices satisfying \( A \preceq I \), \( A \in S^d_+ \). This approach also proves valuable when users’ responses contain significant noise and inconsistencies, resulting in a search space for \( A_0 \) that remains large in the final round.

5.2 Metrics for Comparison

We compare different recourse-generation methods using the following metrics:

Validity. A recourse \( x_r \) generated by a recourse-generation method is valid if \( C_\theta(x_r) = 1 \). We compute validity as the fraction of instances for which the recommended recourse is valid.

Cost. For the gradient-based single recourse method, we calculate the cost of a recourse \( x_r \) as the Mahalanobis distance between \( x_r \) and \( x_0 \) evaluated with the ground-truth matrix \( A_0 \), i.e., \( c_{A_0}(x_r, x_0) \).

Shortest-path cost. For the graph-based recourse generation, we report the cost of a sequential recourse \( x_0 \rightarrow \ldots \rightarrow x_r \) as the path cost from the input \( x_0 \) to \( x_r \), evaluated with \( A_0 \).

Mean rank. We borrow the idea from Bertsimas & O’Hair (2013) and consider the mean rank metric for ranking recourses based on subject preference. We first rank all of the recourses in the positive dataset \( D_1 \) according to their cost under the ground-truth matrix \( A_0 \). Thus, the recourse with the smallest cost is ranked 1, and the recourse with the largest cost is ranked \( N \) (\( N \) is the total number of recourses in the positive dataset). We then find the top \( K \) recourses according to the cost metric \( c_{A^*}(x, x_0) \) and compare the selected solutions with their true ranks. Therefore, smaller values indicate that the matrix \( A^* \), the Chebyshev center of the terminal confidence set, is closer to the ground truth \( A_0 \). Each recourse \( x_i \in D_1 \) can thus be assigned a rank \( r_i \in [1, \ldots, N] \). We compute the normalized mean rank of the top \( K \) recourses as
\[
r_{\text{mean}} = \frac{\sum_{i=1}^{K} r_i - r_{\text{min}}}{r_{\text{max}}} \quad \text{where} \quad r_{\text{min}} = \sum_{i=1}^{K} i = \frac{(K+1)K}{2} \quad \text{and} \quad r_{\text{max}} = \sum_{i=N-K+1}^{N} i = \frac{(2N-K+1)K}{2}
\]
are normalizing constants so that \( r_{\text{mean}} \in (0, 1) \).

**Figure 4**: Impact of the number of questions \( T \) on the average mean rank on synthetic data and three real-world datasets. As the number of questions increases, the mean rank tends to decrease, highlighting that the Chebyshev center tends closer to the ground truth \( A_0 \).

5.3 Numerical Results

We conduct three experiments to study the efficiency of our framework in generating cost-adaptive recourses. First, we study the impact of the number of questions $T$ on the mean rank. Then, we compare our two cost-adaptive recourse-generation methods, gradient-based and graph-based, with the recourse-generation baselines implemented in CARLA (Pawelczyk et al., 2021). Appendix B provides additional numerical results and discussions.

Impact of the number of questions $T$ on the mean rank. Here, we analyze the impact of the number of questions $T$ on the mean rank. We first fix the parameter $\varepsilon$ and vary the number of questions $T \in [0, 10]$.
For each value of $T$, we choose $T$ questions with the heuristic in Section 3.2 and solve problem (5) to find the center $A^*$. Then, we evaluate the mean rank with $A^*$. Figure 4 demonstrates that the average mean rank decreases as the number of questions increases. This implies that the Chebyshev center $A^*$ comes closer to the ground truth $A_0$ the more questions we ask, leading to a more accurate estimate of the actual cost function.

Gradient-based cost-adaptive recourse. In this experiment, we generate recourse using our gradient-based recourse-generation method. We compute the cost as the Mahalanobis distance described in Section 5.2. We compare our method with two baselines: Wachter and DiCE. Table 1 demonstrates that DiCE has the highest cost across all datasets, and its validity is not perfect on the German, Bank, and Student datasets. Our method has similar validity to Wachter but at a lower cost in three out of four datasets. It is important to note that if $T = 0$, the Chebyshev center is $A^* = \frac{1}{2}I$, and the cost metric $c_{A^*}(x, x_0)$ becomes the squared Euclidean distance between $x$ and $x_0$, which DiCE and Wachter directly optimize. Thus, these results indicate that our approach effectively adjusts to the subject’s cost function and adequately reflects the individual subject’s preferences.

Graph-based cost-adaptive recourse. In this experiment, we generate recourse using the graph-based sequential recourse method. We compute the cost of a sequential recourse as the shortest-path cost described in Section 5.2. We compare our graph-based method with FACE. Table 2 demonstrates that our ReAP framework has the lowest cost across all four datasets. The validity of the two methods is perfect on all four datasets because both methods find a path from the input node $x_0$ to a node $x_r \in D_1$. As mentioned above, if $T = 0$, the cost metric $c_{A^*}(x, x_0)$ becomes the squared Euclidean distance between $x$ and $x_0$, and FACE builds the graph using this Euclidean metric. These observations show that our graph-based method accurately captures the subjects’ preferences and adapts to their cost function.

6 Conclusions

This work proposes an adaptive preference learning framework for the recourse generation problem. Our proposed framework aims to approximate the true cost matrix of the subject iteratively using a few rounds of question-answering. At each round, we select the question corresponding to the most effective cut of the confidence set of possible cost matrices. We provide two recourse-generation methods: gradient-based and graph-based cost-adaptive recourse. Finally, we generalize our framework to handle inconsistencies in subject responses and extend the question-selection heuristic from pairwise comparisons to multiple-option questions. Extensive numerical experiments show that our framework can adapt to the subject’s cost function.

Table 1: Benchmark of Cost and Validity between gradient-based methods on four datasets.
| Dataset | Methods | Cost | Validity | |-----------|---------|----------|----------| | Synthetic | DiCE | 0.31 ± 0.27 | 1.00 ± 0.00 | | | Wachter | 0.12 ± 0.14 | 1.00 ± 0.00 | | | ReAP | 0.10 ± 0.15 | 1.00 ± 0.00 | | German | DiCE | 0.10 ± 0.37 | 0.96 ± 0.19 | | | Wachter | 0.03 ± 0.02 | 1.00 ± 0.00 | | | ReAP | 0.01 ± 0.01 | 1.00 ± 0.00 | | Bank | DiCE | 1.43 ± 0.61 | 0.99 ± 0.10 | | | Wachter | 0.11 ± 0.10 | 1.00 ± 0.00 | | | ReAP | 0.08 ± 0.08 | 1.00 ± 0.00 | | Student | DiCE | 0.07 ± 0.18 | 0.64 ± 0.48 | | | Wachter | 0.05 ± 0.07 | 1.00 ± 0.00 | | | ReAP | 0.05 ± 0.07 | 1.00 ± 0.00 | Table 2: Benchmark of Path cost between graph-based ReAP and FACE. All methods attain the validity of 1.00 ± 0.00. | Dataset | Methods | Path cost | |-----------|---------|-----------| | Synthetic | FACE | 0.73 ± 0.55 | | | ReAP | 0.70 ± 0.56 | | German | FACE | 0.66 ± 0.48 | | | ReAP | 0.53 ± 0.49 | | Bank | FACE | 1.20 ± 0.69 | | | ReAP | 0.82 ± 0.39 | | Student | FACE | 1.10 ± 0.76 | | | ReAP | 1.04 ± 0.66 | REFERENCES Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, and Mani B Srivastava. Genattack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1111–1119, 2019. Barry Becker and Ronny Kohavi. Adult. UCI Machine Learning Repository, 1996. DOI: https://doi.org/10.24432/C5XW20. Dimitri Bertsekas. Dynamic Programming and Optimal Control: Volume I. Athena Scientific, 2012. URL https://books.google.com.vn/books?id=qVBEAAAQBAJ. Dimitris Bertsimas and Allison O’Hair. Learning preferences under noise and loss aversion: An optimization approach. Operations Research, 61(5):1190–1199, 2013. Paulo Cortez and Alice Silva. Using data mining to predict secondary school student performance. Proceedings of 5th FUTURE BUSiness TEChnology Conference, 2008. Giovanni De Toni, Paolo Viappiani, Bruno Lepri, and Andrea Passerini. Generating personalized counterfactual interventions for algorithmic recourse by eliciting user preferences, 2023. URL https://arxiv.org/abs/2205.13743. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml. Meherwar Fatima, Maruf Pasha, et al. Survey of machine learning algorithms for disease diagnostic. Journal of Intelligent Learning Systems and Applications, 9(01):1, 2017. Christopher G Harris. Making better job hiring decisions using “human in the loop” techniques. In HumL@ ISWC, pp. 16–26, 2018. Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning, pp. 2137–2146. PMLR, 2018. Amir-Hossein Karimi, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. Advances in Neural Information Processing Systems, 33:265–277, 2020. Amir-Hossein Karimi, Bernhard Schölkopf, and Isabel Valera. Algorithmic recourse: From counterfactual explanations to interventions. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pp. 353–362, New York, NY, USA, 2021. Association for Computing Machinery. Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, and Isabel Valera. A survey of algorithmic recourse: contrastive explanations and consequential recommendations. ACM Computing Surveys, 55(5):1–29, 2022. Jahanzaib Latif, Chuangbai Xiao, Azhar Imran, and Shanshan Tu. 
Medical imaging using machine learning and deep learning algorithms: a review. In 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–5. IEEE, 2019. Mengshi Lu and Zuo-Jun Max Shen. A review of robust operations management under model uncertainty. Production and Operations Management, 30(6):1927–1943, 2021. MOSEK ApS. MOSEK Optimizer API for Python 9.2.10, 2019. URL https://docs.mosek.com/9.2/pythonapi/index.html. Ramaravind K Mothilal, Amit Sharma, and Chenhao Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 607–617, 2020. Tuan-Duy H Nguyen, Ngoc Bui, Duy Nguyen, Man-Chung Yue, and Viet Anh Nguyen. Robust Bayesian recourse. In Uncertainty in Artificial Intelligence, pp. 1498–1508. PMLR, 2022.
oJ1tx3fXDA
In Eq.(5), what is the MVR term? Could the author explain this in detail? Its first part is a sum of $J_i$-step updates, while its second part is a sum of $j$-step updates. I know these two parts come from the additional term from $\theta_{i,j}^t-\theta_i^{t-\tau_i}$ to $\theta^{t-1}-\theta^{t-\tau_i-1}$, but how does it perform as a variance reduction?
COMMUNICATION-EFFICIENT HETEROGENEOUS FEDERATED LEARNING WITH GENERALIZED HEAVY-BALL MOMENTUM Anonymous authors Paper under double-blind review ABSTRACT Federated Learning (FL) is the state-of-the-art approach for learning from decentralized data in privacy-constrained scenarios. As the current literature reports, the main problems associated with FL refer to system and statistical challenges: the former ones demand for efficient learning from edge devices, including lowering communication bandwidth and frequency, while the latter require algorithms robust to non-iidness. State-of-art approaches either guarantee convergence at increased communication cost or are not sufficiently robust to handle extreme heterogeneous local distributions. In this work we propose a novel generalization of the heavy-ball momentum, and present FEDHBM to effectively address statistical heterogeneity in FL without introducing any communication overhead. We conduct extensive experimentation on common FL vision and NLP datasets, showing that our FEDHBM algorithm empirically yields better model quality and higher convergence speed w.r.t. the state-of-art, especially in pathological non-iid scenarios. While being designed for cross-silo settings, we show how FEDHBM is applicable in moderate-to-high cross-device scenarios, and how good model initializations (e.g. pre-training) can be exploited for prompt acceleration. Extended experimentation on large-scale real-world federated datasets further corroborates the effectiveness of our approach for real-world FL applications. 1 INTRODUCTION The introduction of the Federated Learning (FL) paradigm in (McMahan et al., 2017) and FEDAVG algorithm has sparked a considerable interest in learning from decentralized data. In FL, a central server orchestrates an iterative two-step training process over several communication rounds consisting of: (i) local training on a potentially large number of clients, each having its own private data, and (ii) aggregation of the updated models into a shared global one. The intrinsic privacy-preserving nature of FL is appealing because it enables decentralized applications in cases where local data cannot be shared among clients. Yet, this very same characteristic of FL introduces also some challenges, because constraining the local optimization to use only the client’s own data may cause statistical heterogeneity. This has been shown to hamper the convergence of FEDAVG (Hsu et al., 2019), increasing the number of communication rounds needed to reach a target model quality (Reddi et al., 2021) and the result at convergence. Recent advances in FL have tried to mitigate this problem, proposing new methods that possess strong theoretical guarantees even in the presence of a non-iid distribution of the local datasets but at the cost of increased communication. For instance, SCAFFOLD relies on additional control variables to correct the local client’s updates, with experimentally better performances but with double the communication bandwidth requirements. Other recent algorithms (Karimireddy et al., 2021) require even more communication and also additional computation. Therefore, these solutions may be unsuitable in a regimen of limited communication resources, which is particularly relevant for applications with edge devices connected by slow, expensive and unreliable communication links (Kairouz et al., 2021). 
Moreover, albeit these methods are theoretically sound, in this paper we show experimental evidence that they are not sufficiently robust to handle cases of extreme heterogeneity (see fig. 1), confirming and extending what was found by Varno et al. (2022) for the specific case of FEDDYN (Acar et al., 2021). 1 Code will be released upon acceptance These considerations motivate the need for an FL algorithm that is both robust to client heterogeneity and communication-efficient by design. In this work, we try to answer the following research question: **Is it possible to robustly speed-up federated optimization, even in extreme heterogeneous settings, without incurring in additional communication and computational costs?** As a positive answer, we propose **FedHBM**, a novel FL algorithm based on our generalization of the heavy-ball momentum (Polyak, 1964) to the federated setting. The underlying idea of FedHBM is to exploit the models sent to a client at two subsequent rounds to calculate, locally on the client, a momentum term over a window of the last $\tau$ rounds of FedAvg. Intuitively, this formulation is equivalent to a moving average of velocity vectors and not gradients, thus providing a more direct and robust estimate of the global optimization trajectory that can be used as a client-drift correction. Our analysis reveals that the proposed momentum formulation has superior performance, being remarkably more stable in extreme heterogeneous scenarios. Additionally, by adding a local correction term, the presented method achieves faster convergence and improves the quality of the final model without any additional communication. **Contributions** We summarize the contributions of our work as follows: - We shed a new light on the problem of communication-efficient FL under extreme statistical heterogeneity, and propose a framework based on a novel generalized heavy-ball formulation. We show that existing momentum-based FL algorithms can be regarded as instances of this general framework and, within this same framework, we present FedHBM, a robust and communication-efficient federated optimization algorithm. - We perform an extensive empirical validation on common FL vision and NLP tasks, showing that FedHBM yields both better model quality and higher convergence speed w.r.t. the state-of-art, especially in pathological non-iid scenarios. FedHBM also shows remarkable flexibility with very low client participation rates, which makes it effective even in cross-device FL. In particular, we show how good model initializations, such as a pre-trained model, can be exploited to achieve a substantial acceleration. - Extending the experimentation to large-scale real-world vision federated datasets, our analysis reveals robustness issues of even theoretically-proven algorithms. Conversely, these results corroborate the effectiveness of our approach for real-world FL applications. **Related works** The problem of non-iidness. The detrimental effects of non-iid data have been first observed by Zhao et al. (2018), who proposed to broadcast a small portion of public data to reduce the distance between local clients’ distributions and partly recover the loss in performance. Alternatively, in Li & Wang (2019) the public data is kept server-side and used for knowledge distillation. However, such approaches require having data well suited for the purpose, which is a strong assumption. 
Having noticed that the performance loss comes from weight divergence, FedProx (Li et al., 2020) adds a regularization term to the loss function, penalizing the divergence from the global model. Nevertheless, in practical cases this was shown to be ineffective in addressing data heterogeneity (Caldarola et al., 2022). Other works (Kopparapu & Lin, 2020; Zaccone et al., 2022; Zeng et al., 2022; Caldarola et al., 2021) have explored grouping clients based on their data distribution to alleviate the aggregation of divergent models. Stochastic Variance Reduction in FL. Another research line applies stochastic variance reduction techniques in FL (Chen et al., 2021; Li et al., 2019). With SCAFFOLD, Karimireddy et al. (2020) for the first time provided convergence guarantees in FL for arbitrarily heterogeneous data. The authors also shed light on the client-drift experienced in local optimization, which results in slow and unstable convergence. SCAFFOLD uses control variates to estimate the direction of the server model and clients’ models: their difference is an estimate of the client drift and can be used to correct the local update. While well principled in theory and robust in practice, this approach requires double the communication to send the control variates back and forth. Similarly to (Karimireddy et al., 2020), we use a corrective term to alleviate the client drift during local optimization but our momentum term does not require any additional data exchange. **ADMM and adaptivity.** Other methods are based on the Alternating Direction Method of Multipliers (Chen et al., 2022; Gong et al., 2022; Wang et al., 2022). In particular, FedDyn (Acar et al., 2021) dynamically modifies the loss function such that the model parameters converge to stationary points of the global empirical loss. Although technically it enjoys the same convergence properties of SCAFFOLD without suffering from its increased communication cost, in practical cases FedDyn has displayed problems in dealing with pathological non-iid settings (Varno et al., 2022). **Momentum-based approaches.** Authors in (Hsu et al., 2019) found that using a server-side momentum effectively reduces the gap in accuracy between iid and non-iid scenarios. However, as highlighted in (Karimireddy et al., 2020), the source of slow and unstable convergence is the client drift experienced locally. FEDADC (Ozfatura et al., 2021) and FEDCM (Xu et al., 2021), albeit with slightly different but equivalent formulations, both propose to send the server momentum to clients to correct local updates. As a more general and theoretically proved framework, authors in (Karimireddy et al., 2021) proposed MIME to adapt an arbitrary centralized optimization algorithm to cross-device FL, by using a combination of control-variates and server optimizer state (e.g. momentum) at every client-update step. These statistics require an extra communication round and increased bandwidth, hence these algorithms are not communication-efficient. In this work we generalize the heavy-ball formulation by proposing a window wider than one round for momentum calculation. Within this framework, existing algorithms can be expressed as special case of our formulation. We investigate the role of a larger window, experimentally proving it is an enabling factor for dealing with extreme heterogeneity. We then propose our FEDHBM as a specific instantiation in which the window width being controlled by client participation leads to an algorithm robust and communication-efficient by design. 
**Lowering communication requirements in FL.** Researchers have studied methods to reduce the memory needed for exchanging gradients in the distributed setting, for example by quantization (Alistarh et al., 2017), or by compression (Mishchenko et al., 2019; Koloskova* et al., 2020). In the context of FL, such ideas have been developed to meet the communication and scalability constraints (Reisizadeh et al., 2020), and to take into account non-iidness (Sattler et al., 2020). Our work focuses on the efficient use of the information already being sent in standard FedAvg, so additional techniques to compress that information remain orthogonal to our approach. ## Problem Setup **Notation.** Throughout this work we adopt a unified notation both for ours and state-of-art algorithms, in a way compliant with the first work on FL (McMahan et al., 2017). We denote as $K \in \mathbb{N}^+$ the total number of clients who could participate in training, $C \in (0, 1]$ as the portion of them that participate in any round $t \in [T]$, and $S$ and $S^t$ as respectively the total set of clients and the set of clients participating in any round $t$. We indicate as $\mathcal{D}$ any data distribution, with $\mathcal{D}_i$ and $d_{i,j}$ respectively the local distribution and the $j$-th batch of size $B$ of client $i$, and $E$ as the number of local epochs. Conversely, $J_i := E\lceil |\mathcal{D}_i|/B \rceil$ is the number of local steps of client $i$, and $\eta, \eta_i$ indicate the global and local learning rates. In regards to the objective function, we call $f_\theta$ the function parameterized by model parameters $\theta$ and $L$ the loss function. More precisely, $\theta_{i,j}^t$ is the model of client $i$ at round $t$ before being presented with batch $j$, $\theta_{i,1}^{t-1} = \theta_{i}^{t-1}$ the model received by the server and $\theta_{i}^{t} = \theta_{i,J_i+1}^{t}$ the model trained by the $i$-th client and sent to the server for aggregation. **Setting Cross-silo FL.** In this setting, following the characterization in (Kairouz et al., 2021), the training nodes are expected to be different organizations or geo-distributed data centers. The number of such nodes is modest ($O(10^2)$) and they are assumed to be almost always available and reliable. This makes it possible to maintain a state on nodes across two different rounds, and often the use of stateful clients is an indicator for an algorithm to be designed for this scenario. Usually, the problem of FL in such a setting is cast as a finite-sum optimization problem, where each function is the local clients’ loss function (eq. 1). **Setting cross-device FL.** Differently from cross-silo FL, in the cross-device setting the clients are assumed to be possibly unreliable edge devices, with only a fraction of them available at any given time. As such, communication is the primary bottleneck. Most importantly, they can be massive in number ($O(10^{10})$), so this motivates the fact that they should be stateless since each client is likely to participate only once in the training procedure. Following the characterization in [Karimireddy et al., 2021], being the number of clients enormous, this problem can be modeled by introducing the stochasticity client-level, over the possibly sampled clients (eq. 2). 
**Cross-Silo:**
$$\arg\min_{\theta \in \mathbb{R}^d} \sum_{k \in S} \frac{|D_k|}{|D_S|} \mathbb{E}_{(x,y) \sim D_k}[L(f_\theta; (x, y))]$$ (1)

**Cross-Device:**
$$\arg\min_{\theta \in \mathbb{R}^d} \mathbb{E}_{i \sim S} \left[ \frac{1}{|D_i|} \sum_{j=1}^{|D_i|} L(f_\theta; (x_j, y_j)) \right]$$ (2)

**Cross-silo and cross-device in practice.** The two aforementioned setups are however extreme cases, and real-world scenarios will likely enjoy some features from both settings. Previous FL works that address cross-silo FL usually experiment with a few hundred devices but account for low participation and unreliability, and treat communication as the primary bottleneck (Karimireddy et al., 2020; Acar et al., 2021). However, they are stateful, and this has raised concerns about their applicability in cross-device: in particular, Karimireddy et al. (2021) noticed that the control variates in Karimireddy et al. (2020) become stale when clients are not seen again during training, and highlight that stateless clients reflect the different formulations in equations 1 and 2. In this work we show that FedHBM is robust to extremely low participation rates, and that it becomes more effective the more often each client participates in the training process. Remarkably, our method succeeds in scenarios where even theoretically strong methods fail (see figure 2 and table 5).

3 **METHOD**

**Generalized heavy-ball momentum.** The use of SGD with momentum is a common practice in deep learning (Krizhevsky et al., 2012; He et al., 2015), as it often provides faster convergence and better generalization (Yan et al., 2018). It consists in accumulating the past directions of reduction of the objective function to stabilize the optimization dynamics. In this work, we propose a framework for using momentum in FL based on a novel generalization of Polyak's heavy-ball formulation (Polyak, 1964), as follows:

**Classical heavy-ball (Polyak, 1964):**
$$\theta^t \leftarrow \theta^{t-1} - \eta \nabla L(f_{\theta^{t-1}}) + \beta (\theta^{t-1} - \theta^{t-2})$$ (3)

**Generalized Heavy-Ball (GHB):**
$$\theta^t \leftarrow \theta^{t-1} - \eta \nabla L(f_{\theta^{t-1}}) + \frac{\beta}{\tau} (\theta^{t-1} - \theta^{t-\tau-1})$$ (4)

Namely, we propose to allow a wider $\tau$-window to be considered when estimating the momentum term. When setting $\tau = 1$, the above formulation falls back to SGD with Polyak's formulation, which is equivalent to a more common one that uses an additional variable to accumulate the previous directions (Liu et al., 2020; Sutskever et al., 2013). The main intuition behind our method is that the trajectory of the server updates over a window $\tau > 1$ provides a better estimate for the momentum term in a federated setting. This proves particularly important in FL because partial participation and non-iidness of local datasets tend to worsen the estimate of the global gradient. Intuitively, as $\tau$ increases, the momentum term incorporates information from an increasingly broad range of clients. A key observation is that when $\tau$ equals the average period length (e.g., $\tau = \frac{1}{C}$), under uniform client sampling, the momentum term contains information on the global distribution and hence is optimal. We experimentally verified this hypothesis, demonstrating its validity in practice by purposely varying $\tau$ (see section 4.3). Within the GHB formulation, we also show that existing momentum-based FL algorithms implement the special case of GHB with $\tau = 1$, as shown in Table 1.
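To make eq. (4) concrete, the following minimal sketch (our own Python/NumPy illustration, not code from the paper) contrasts the classical and generalized heavy-ball updates by keeping a buffer of the last $\tau + 1$ iterates; the toy quadratic objective, the parameter values, and all identifiers are assumptions made only for the example.

```python
import numpy as np
from collections import deque

def grad(theta):
    # Toy quadratic objective L(theta) = 0.5 * ||theta||^2; its gradient is theta itself.
    return theta

def generalized_heavy_ball(theta0, eta=0.1, beta=0.9, tau=5, steps=100):
    """Single-node sketch of eq. (4):
    theta^t = theta^{t-1} - eta * grad(theta^{t-1}) + (beta / tau) * (theta^{t-1} - theta^{t-tau-1})."""
    history = deque([theta0.copy()], maxlen=tau + 1)  # oldest entry plays the role of theta^{t-tau-1}
    theta = theta0.copy()
    for _ in range(steps):
        momentum = (beta / tau) * (theta - history[0])
        theta = theta - eta * grad(theta) + momentum
        history.append(theta.copy())
    return theta

print(np.linalg.norm(generalized_heavy_ball(np.ones(10))))          # decays towards the optimum at 0
print(np.linalg.norm(generalized_heavy_ball(np.ones(10), tau=1)))   # tau = 1 recovers eq. (3)
```

Setting `tau=1` recovers the classical update of eq. (3), while `tau` close to `1/C` corresponds to the average period with which a client is sampled.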
However, in a FL scenario, implementing the GHB in eq. 4 for an arbitrary value of $\tau$ requires the server to send both models $\theta^{t-1}$ and $\theta^{t-\tau-1}$ to each client, resulting in a communication overhead of $1.5\times$ w.r.t. FedAvg. Namely, both methods in [Xu et al., 2021; Ozfatura et al., 2021] incur in this overhead. To calculate such momentum in a communication efficient way, we can exploit the fact that a client participates multiple times in the training procedure, it has available the model... Table 1: Comparison of recent momentum-base FL algorithms within our generalized heavy-ball framework: FEDCM and FEDADC implement an equivalent update rule, since the only difference is a constant scaling on the gradient term (Liu et al., 2020). We generalize the momentum calculation over a window of $\tau$ rounds in GHB, recovering FEDCM (and FEDADC) when setting $\tau = 1$. | Method | Update Rule in Original Work | Equivalent Update Rule | |-----------------|------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | FEDCM | $\theta_{t+1}^{i,j} \leftarrow \theta_{t}^{i,j} - \eta(\alpha \nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}) + (1-\alpha)m_{t-1}^{i,j})$ | $\theta_{t+1}^{i,j} \leftarrow \theta_{t}^{i,j} - \eta(\nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}) + \beta m_{t-1}^{i,j})$ | | Xu et al., 2021 | $m_{t}^{i} \leftarrow \frac{1}{|S_{t}|} \sum_{j=1}^{|S_{t}|} (\theta_{t}^{i,j} - \theta_{t-1}^{i,j})$ | $m_{t}^{i} \leftarrow \beta m_{t-1}^{i,j} + \frac{1}{|S_{t}|} \sum_{j=1}^{|S_{t}|} \nabla L(f_{\theta_{t}^{i,j}}, d_{i,j})$ | | FEDADC | $\theta_{t+1}^{i,j} \leftarrow \theta_{t}^{i,j} - \eta(\nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}) + \frac{1}{J_{t}}m_{t-1}^{i,j})$ | $\theta_{t+1}^{i,j} \leftarrow \theta_{t}^{i,j} - \eta(\nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}) + \beta m_{t-1}^{i,j})$ | | Ozefatura et al., 2021 | $m_{t}^{i} \leftarrow \frac{1}{\eta|S_{t}|} \sum_{j=1}^{|S_{t}|} (\theta_{t}^{i,j} - \theta_{t-1}^{i,j}) - (1-\beta)m_{t-1}^{i,j}$ | $m_{t}^{i} \leftarrow \beta m_{t-1}^{i,j} + \frac{1}{|S_{t}|} \sum_{j=1}^{|S_{t}|} \nabla L(f_{\theta_{t}^{i,j}}, d_{i,j})$ | | MIMELITEMom | $\theta_{t+1}^{i,j} \leftarrow \theta_{t}^{i,j} - \eta(\nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}) + \beta m_{t-1}^{i,j})$ | (No Equivalent) | | Kairamedy et al., 2021 | $m_{t}^{i} \leftarrow \beta m_{t-1}^{i,j} + \frac{1}{|S_{t}|} \sum_{j=1}^{|S_{t}|} \nabla L(f_{\theta_{t}^{i,j}}, D_{i})$ | | | GHB (ours) | $\theta_{t+1}^{i,j} \leftarrow \theta_{t}^{i,j} - \eta(\nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}) + \beta m_{t-1}^{i,j})$ | $\theta_{t+1}^{i,j} \leftarrow \theta_{t}^{i,j} - \eta(\nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}) + \beta m_{t-1}^{i,j})$ | | | $m_{t}^{i} \leftarrow \frac{1}{\tau} \sum_{\tau=t-r+1}^{t} (\beta m_{t-r+1}^{i,j} + \frac{1}{|S_{t}|} \sum_{j=1}^{|S_{t}|} \nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}))$ | $m_{t}^{i} \leftarrow \frac{1}{\tau} \sum_{\tau=t-r+1}^{t} (\beta m_{t-r+1}^{i,j} + \frac{1}{|S_{t}|} \sum_{j=1}^{|S_{t}|} \nabla L(f_{\theta_{t}^{i,j}}, d_{i,j}))$ | | | $\theta_{t}^{i} \leftarrow \theta_{t-1}^{i} - \eta \sum_{i \in S_{t}} \frac{|D_{i}|}{|D_{S_{t}}|} (\theta_{t-1}^{i} - \theta_{t}^{i})$ | $\theta_{t}^{i} \leftarrow \theta_{t-1}^{i} - \eta \sum_{i \in S_{t}} \frac{|D_{i}|}{|D_{S_{t}}|} (\theta_{t-1}^{i} - \theta_{t}^{i})$ | $\theta_{t-\tau_i-1}^{i,j}$ received at some round $t-\tau_i$. 
Hence, choosing $\tau = \tau_i$ does not involve additional data exchange. Let us remark that $\tau_i$ is not hand-tuned, but it is instead determined stochastically by client participation: in practice, under uniform sampling, on average each client automatically considers a window of length $\tau_i \approx 1/C$. In this sense, letting it be self-tuned resonates with the above intuition about considering the average period in which each client is sampled once. We show in section 4.3 that this choice is a good trade-off between required participation and performance. We name this communication-efficient instance of our generalized momentum framework LOCAL-GHB (for a graphical intuition see fig. 5 in appendix).

Algorithm 1: FEDHBM and FedAvg

**Require:** initial model $\theta^0$, $K$ clients, $C$ participation ratio, $T$ number of total rounds, $B$ batch size, $\eta$ and $\eta_l$ learning rates.
1: for $t = 1$ to $T$ do
2: $S_t \leftarrow$ subset of clients $\sim U(S, \max(1, K \cdot C))$
3: for $i \in S_t$ in parallel do
4: $\theta_{t,1}^{i} \leftarrow \theta^{t-1}$
5: for $j = 1$ to $J_i$ do
6: $m_{t,j}^{i} \leftarrow (\theta_{t,j}^{i} - \theta_{t-\tau_i}^{i})$ if $\theta_{t-\tau_i}^{i}$ is set else $0$
7: sample a mini-batch $d_{i,j}$ from $D_i$
8: $\theta_{t,j+1}^{i} \leftarrow \theta_{t,j}^{i} - \eta_l \nabla L(f_{\theta_{t,j}^{i}}, d_{i,j}) + \beta m_{t,j}^{i}$
9: end for
10: save locally model $\theta_{t}^{i}$
11: end for
12: $\theta^{t} \leftarrow \theta^{t-1} - \eta \sum_{i \in S_t} \frac{|D_{i}|}{|D_{S_t}|} (\theta^{t-1} - \theta_{t}^{i})$
13: end for

**FEDHBM.** While a generalized momentum over a window $\tau > 1$ can better estimate the local correction to apply for embedding the updated information of other clients, the correction is not adjusted to the progressive drift of multiple local steps. To counteract this issue, we add a correction term specific to each client objective, such that it penalizes the direction of the last updates at round $t-\tau_i$ with respect to the progressive updates of the local steps at the current round $t$. This intuition results in a slight modification of LOCAL-GHB, namely considering $\theta_{t,j}^{i}$ instead of $\theta_{t-1}^{i}$ and $\theta_{t-\tau_i}^{i}$ instead of $\theta_{t-\tau_i-1}^{i}$. As shown below, this results in an update rule consisting of two contributions: i) the same $\tau_i$-momentum of LOCAL-GHB, and ii) a local correction term penalizing the incremental updates of the current round with respect to the ones at round $t-\tau_i$. We call FEDHBM the addition of such a correction term to LOCAL-GHB. More formally, let us denote by $u_{i,j}^{t}$ the update performed by the $i$-th client at step $j$ of round $t$; then the FEDHBM update rule can be written as follows:

\[ \theta_{t,j+1}^{i} = \theta_{t,j}^{i} - \eta_l \nabla L(f_{\theta_{t,j}^{i}}, d_{i,j}) + \hat{\beta}_i (\theta_{t,j}^{i} - \theta_{t-\tau_i}^{i}) \] (5)
\[ = \theta_{t,j}^{i} - \eta_l \nabla L(f_{\theta_{t,j}^{i}}, d_{i,j}) + \hat{\beta}_i \left( \theta^{t-1} - \theta^{t-\tau_i-1} - \sum_{k=1}^{j} u_{i,k}^{t} + \sum_{k=1}^{J_i} u_{i,k}^{t-\tau_i} \right) \]

Let us notice that under uniform client sampling, it holds that $\mathbb{E}_{i \sim U(S)}[\tau_i] \rightarrow \tau = 1/C$. Consequently, the momentum factor in equation (5) is set as $\hat{\beta}_i := \frac{\beta C}{J_i}$.
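The client-side portion of Algorithm 1 (lines 4–10) can be sketched as follows. This is a minimal NumPy illustration of ours, assuming a generic gradient oracle `grad_fn` and a locally stored model `theta_saved` from round $t-\tau_i$, the last round in which the client participated; all identifiers are hypothetical.

```python
import numpy as np

def local_update(theta_server, theta_saved, batches, grad_fn, eta_l=0.01, beta_hat=0.1):
    """Sketch of the FEDHBM local training loop (Algorithm 1, lines 4-10).

    theta_server: model received from the server at the current round t.
    theta_saved:  model this client stored the last time it participated (round t - tau_i), or None.
    batches:      iterable of local mini-batches d_{i,j}.
    grad_fn:      grad_fn(theta, batch) -> gradient of the local loss (placeholder oracle).
    """
    theta = theta_server.copy()
    for batch in batches:
        # line 6: momentum term, zero if the client has no stored past model yet
        m = (theta - theta_saved) if theta_saved is not None else np.zeros_like(theta)
        # line 8: local gradient step plus the generalized heavy-ball correction
        theta = theta - eta_l * grad_fn(theta, batch) + beta_hat * m
    return theta  # line 10: stored locally and sent back to the server
```

On the server side, the returned models are aggregated exactly as in FedAvg (line 12), so FEDHBM adds neither server-side state nor extra communication.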
**Communication-efficiency and low participation** Our efficient formulation relies on the fact that each client is selected more than once during training: this is reasonable in cross-silo settings, but may not hold in extreme cross-device scenarios. In this case, it would be still possible to leverage GHB, choosing the value of \(\tau\) as a hyperparameter (with 1.5× overhead). Even if in this work we focus on cases that are still tractable using our most efficient formulation [1], we show that large values of \(\tau\) are still a robust choice in high cross-device settings: in particular, we show that is possible to consider a window starting from a common initialization and still recover the full acceleration obtained in cross-silo (see ablation study in section 4.3). Remarkably, robustness to very low participation rates is especially noticeable in our large-scale experiments in section 4.4 where other SOTA methods fail. ### 4 EXPERIMENTAL RESULTS To validate our method, we run experiments on commonly used FL datasets across computer vision and NLP tasks. We then extend experimentation to large-scale real-world federated datasets. Comparing FEDHBM with existing state-of-art algorithms, we find that: i) it is the most communication-efficient, ii) it yields the best model quality and iii) it provides a clear advantage in high cross-device scenarios, especially when starting training from pre-trained models. #### 4.1 SETUP **Experimental protocol** Our experimental baseline includes several state-of-the-art algorithms, including momentum-based methods (FEDAVGM [Hsu et al., 2019], MIMEMOM and MIMELITEMOM [Karimireddy et al., 2021]). Since FEDCM and FEDADC correspond to our GHB with \(\tau = 1\) and we report a full ablation on the value of \(\tau\) (see section 4.3), they are not considered in the main result. All the results are reported in terms of mean top-1 accuracy over the last 100 rounds, averaged over 5 independent runs. **Datasets and settings** We consider image classification and next character/word prediction tasks. For the former, we use CIFAR-10/100, and for the latter SHAKESPEARE and STACKOVERFLOW. Following Hsu et al. (2020), for both CIFAR-10/100 we split the total datasets according to a Dirichlet distribution with concentration parameter \(\alpha\), simulating two extreme levels of heterogeneity, corresponding to \(\alpha = 0\) (NON-IID) and \(\alpha = 10,000\) (IID). For SHAKESPEARE and STACKOVERFLOW we instead use the predefined splits. We consider two settings: the first one closer to cross-silo, we use CIFAR-10, CIFAR-100, and SHAKESPEARE, partitioning the datasets in \(K = 100\) parts and choosing \(C = 10\%\). The second is closer to cross-device: we choose \(K = 500\) and \(C = 1\%\) for both CIFAR's and use the natural split of STACKOVERFLOW dataset, corresponding to having \(K = 40,000\) and \(C = 0.12\%\). Let us remark that the level of non-iidness we introduce is extreme: in the non-iid cross-silo setting with CIFAR-100 each client only has samples belonging to a single class. Additional details about each setting are provided in table 3 of supplementary. We also present results on large-scale real-world FL vision datasets, LANDMARKS-USERS-160K and INATURALIST-USERS-120K, in section 4.4. **Models** Unless otherwise mentioned, for CIFAR-10/100 we use the version of LeNet-5 described in Hsu et al. 
(2020), whereas for SHAKESPEARE and STACKOVERFLOW we use the same RNN and Table 2: Number of rounds to reach a target accuracy w.r.t centralized of several SOTA FL algorithms with respect to ours ($\alpha \to 0$). In round brackets we report the speedup w.r.t FedAvg. Best result is in **bold**, second best is _underlined_. | METHOD | COMM. OVERHEAD | CIFAR10 | |--------------|---------------|---------| | | | CROSS-SILO | | | | | 70% | 80% | 90% | 70% | 80% | 90% | | FEDAVG | 1x | 5520 (1.00x) | 9935 (1.00x) | - (-) | 5610 (1.00x) | - (-) | - (-) | | FEDPROX | 1x | 5610 (0.98x) | 9935 (1.00x) | - (-) | 5610 (1.00x) | - (-) | - (-) | | SCAFFOLD | 2x | 2800 (1.97x) | 5200 (1.91x) | - (-) | 2970 (1.89x) | 5270 (-) | - (-) | | FEDDYN | 1x | 1000 (5.52x) | 1810 (5.49x) | - (-) | 1950 (2.88x) | 3180 (-) | 7600 (-) | | AdaBEST | 1x | 5520 (1.00x) | 9935 (1.00x) | - (-) | 5610 (1.00x) | - (-) | - (-) | | MIME | 2x | 3410 (1.62x) | 5180 (1.92x) | 9700 (-) | 3840 (1.46x) | 7340 (-) | - (-) | | FEDAvgM | 1x | 5380 (1.03x) | 9500 (1.05x) | - (-) | 3480 (1.61x) | 5370 (-) | - (-) | | FedCM | 1.5x | 5400 (1.02x) | 9500 (1.05x) | - (-) | 3400 (1.65x) | 5300 (-) | - (-) | | FEDADC | 1.5x | 5400 (1.02x) | 9500 (1.05x) | - (-) | 3400 (1.65x) | 5300 (-) | - (-) | | MimeMom | 3x | 1500 (3.68x) | 2350 (4.23x) | 4450 (-) | 2490 (2.25x) | 3470 (-) | 7360 (-) | | MimeLiteMom | 2x | 2080 (2.65x) | 3320 (2.99x) | 6510 (-) | 3090 (1.82x) | 4510 (-) | 8490 (-) | | FedHBM (ours)| 1x | 770 (7.17x) | 1270 (7.82x) | 2560 (-) | 1950 (2.88x) | 2960 (-) | 6510 (-) | LSTM used in [Reddi et al. (2021); Karimireddy et al. (2021)]. Additional details about the datasets and the splits, the models architecture, and the algorithms’ hyperparameters are deferred to the appendix. ### 4.2 Comparative results **Convergence speed** As it is possible to see from table 2, FedHBM is consistently faster than the current state-of-the-art: it attains 70% of centralized accuracy with a speedup of $7.17\times$ and $2.88\times$ respectively in cross-silo and cross-device. Importantly, the reported results do not consider the additional slowdown introduced in MIME and SCAFFOLD due to increased communication: while usually being the second best, they require additional communication, which in practice nullifies the speed gains attained. Similar results hold also for CIFAR-100 and are reported in table 4 of supplementary. This evidence corroborates our claim that FedHBM is the most communication-efficient method. **Final model quality** As showed in tables Tables 3 and 4, FedHBM consistently outperforms the other methods even when facing extreme non-iid clients’ distributions, in both settings. FedDYN improves FedAvg on CIFAR-10, but fails to converge for CIFAR-100, in line with the results reported by [Varno et al. (2022)]. Confirming the findings of [Hsu et al. (2019)], server-only momentum improves performance only in non-pathological scenarios, due to the client drift. Integrating the server momentum client side, MimeMom usually surpasses both SCAFFOLD and FedDYN, especially in the presence of high heterogeneity. However these results are not consistent across architectures, since on our ResNet-20 experiments we found that MimeMom and FedDYN fail to surpass FedAvg (see figure 2). Conversely, FedHBM consistently outperforms all the other algorithms across all settings, except for the extreme cross-device scenario of StackOverflow. 
This is mainly due to the fact that, since each client participates 1.5 times on average, FedHBM most of the time cannot calculate its $\tau_i$-momentum. As we will show in Sec. 4.3, it is possible to easily circumvent this limitation, without introducing any communication overhead. Figure 2: Comparing Local-GHB and FedHBM with other state-of-art approaches on CIFAR-10/100 (up and bottom respectively) using a ResNet-20, under extreme heterogeneity. Table 3: Test accuracy (%) comparison of several SOTA FL algorithms on our **cross-silo** setting. Best result is in **bold**, second best is _underlined_. | METHOD | CIFAR-10 NON-IID | CIFAR-10 IID | CIFAR-100 NON-IID | CIFAR-100 IID | SHAKESPEARE NON-IID | SHAKESPEARE IID | |------------|------------------|--------------|-------------------|--------------|---------------------|----------------| | FEDAVG | 66.12 ± 0.32 | 83.11 ± 0.34 | 35.56 ± 0.24 | 49.74 ± 0.22 | 47.31 ± 0.10 | 47.08 ± 0.17 | | FedProx | 66.12 ± 0.32 | 83.11 ± 0.34 | 35.48 ± 0.30 | 49.86 ± 0.22 | 47.30 ± 0.10 | 47.07 ± 0.17 | | SCAFFOLD | 74.83 ± 0.20 | 82.93 ± 0.25 | 45.50 ± 0.12 | 49.41 ± 0.40 | 50.25 ± 0.10 | 50.13 ± 0.10 | | FedDYN | 70.93 ± 0.18 | 83.52 ± 0.12 | NaN | 51.95 ± 0.17 | 50.72 ± 0.12 | 50.80 ± 0.16 | | AdaBest | 66.12 ± 0.36 | 83.11 ± 0.38 | 35.56 ± 0.26 | 49.74 ± 0.25 | 47.31 ± 0.10 | 47.08 ± 0.17 | | Mime | 75.08 ± 0.55 | 83.13 ± 0.46 | 36.31 ± 0.49 | 50.87 ± 0.36 | 48.29 ± 0.16 | 48.49 ± 0.15 | | FedAvgM | 67.58 ± 0.27 | 83.60 ± 0.31 | 35.22 ± 0.33 | 50.68 ± 0.25 | 50.00 ± 0.03 | 50.41 ± 0.08 | | FedCM | 69.01 ± 0.26 | 83.39 ± 0.30 | 36.04 ± 0.34 | 50.18 ± 0.50 | 49.16 ± 0.07 | 50.45 ± 0.09 | | FedADC | 69.12 ± 0.32 | 83.41 ± 0.32 | 37.88 ± 0.30 | 50.16 ± 0.41 | 49.23 ± 0.11 | 50.42 ± 0.12 | | MimeMom | 80.95 ± 0.40 | 83.11 ± 0.20 | 48.17 ± 0.68 | 50.60 ± 0.11 | 48.46 ± 0.19 | 48.89 ± 0.25 | | MimeLiteMom| 78.79 ± 0.38 | 83.23 ± 0.29 | 46.00 ± 0.30 | 50.66 ± 0.10 | 49.10 ± 0.38 | 49.39 ± 0.32 | | **FedHBM (ours)** | **81.71 ± 0.15** | **83.83 ± 0.14** | **50.41 ± 0.51** | **51.99 ± 0.45** | **51.33 ± 0.08** | **51.36 ± 0.19** | Table 4: Test accuracy (%) comparison of several SOTA FL algorithms on our **cross-device** setting. Best result is in **bold**, second best is _underlined_. 
| METHOD | CIFAR-10 NON-IID | CIFAR-10 IID | CIFAR-100 NON-IID | CIFAR-100 IID | STACKOVERFLOW NON-IID | |------------|------------------|--------------|-------------------|--------------|-----------------------| | FEDAVG | 66.08 ± 0.15 | 77.47 ± 0.33 | 35.31 ± 0.31 | 48.46 ± 0.56 | 24.02 ± 0.41 | | FedProx | 65.92 ± 0.26 | 77.42 ± 0.37 | 35.32 ± 0.20 | 48.55 ± 0.56 | 23.88 ± 0.42 | | SCAFFOLD | 74.20 ± 0.12 | 80.77 ± 0.32 | 44.59 ± 0.38 | 50.35 ± 0.51 | 24.77 ± 0.41 | | FedDYN | 77.79 ± 0.73 | 80.82 ± 0.74 | NaN | 50.46 ± 0.31 | 24.04 ± 0.35 | | AdaBest | 65.91 ± 0.25 | 77.43 ± 0.35 | 35.31 ± 0.31 | 48.46 ± 0.56 | 24.01 ± 0.4 | | Mime | 70.90 ± 0.24 | 77.64 ± 0.17 | 39.43 ± 0.22 | 48.30 ± 0.20 | 18.82 ± 2.85 | | FedAvgM | 73.90 ± 0.97 | 82.40 ± 0.28 | 38.11 ± 1.04 | 50.61 ± 0.28 | 24.07 ± 0.35 | | FedCM | 74.01 ± 0.91 | 81.36 ± 0.25 | 38.57 ± 0.99 | 50.56 ± 0.38 | 24.01 ± 0.29 | | FedADC | 73.96 ± 0.89 | 81.31 ± 0.32 | 38.52 ± 1.01 | 50.36 ± 0.42 | 23.96 ± 0.23 | | MimeMom | 77.41 ± 0.74 | 82.87 ± 0.22 | 42.33 ± 1.47 | 50.12 ± 0.29 | 24.92 ± 0.59 | | MimeLiteMom| 76.41 ± 1.15 | 82.73 ± 0.27 | 41.23 ± 2.57 | 49.93 ± 0.27 | 23.30 ± 3.46 | | **FedHBM (ours)** | **79.31 ± 0.45** | **81.64 ± 0.18** | **48.69 ± 0.95** | **52.73 ± 0.29** | **24.47 ± 0.40** | ### 4.3 Ablation Study The importance of $\tau$-window momentum in GHB In figure 3 we show that the $\tau$-window momentum generalization introduced in our GHB formulation is crucial to effectively address extreme statistical heterogeneity. In fact constraining $\tau = 1$ fails at improving FEDAVG: this demonstrates that the correction provided by the momentum term is ineffective under extreme non-iidness when using the standard formulation. Both FedCM and FedADC are equivalent to GHB with $\tau = 1$ (cf. Table 1), hence they lead to the same results. Conversely, a wider window provides a steep enhancement both in convergence speed and final model quality, showing that our generalized momentum is the key factor for enabling excellent performance. Secondly our experiments show that our communication-efficient instance LOCAL-GHB, that allows each client ![Figure 3: Ablation study on the size $\tau$ of the window for GHB on CIFAR-10/100 and comparison with LOCAL-GHB and FedHBM, under extreme heterogeneity](image-url) to independently calculate its momentum term, reaches the same performance of GHB without the additional overhead of sending the global model of round $t - \tau$. Finally, thanks to the additional local correction term (eq. 1), FedHBM always outperforms all the alternative solutions both in convergence speed and final model quality (see also figure 2). Addressing extreme cross-device scenarios Besides overall empirical success, we have shown that in the extreme cross-device of StackOverflow FedHBM has diminished performance. This is due to the fact that most of the time the momentum term will be equal to zero (line 6 in algorithm 1). To address such limitation, we propose just to use the simpler formulation of Local-GHB the first time each client is selected, using as past model the initial server model. From the second time on, each client uses the formulation in eq. 5. Let us note that this does not require additional communication: when training a model from scratch, it is necessary to only know the initialization algorithm and the seed for the random number generator to recover the very same model client side. We denote this variation FedHBM-shared. 
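The remark that clients can rebuild the initial server model from the initialization algorithm and the random seed alone can be illustrated with a short sketch (ours, in PyTorch; the model architecture and the seed value are placeholders, not taken from the paper).

```python
import torch
import torch.nn as nn

def build_initial_model(seed: int) -> nn.Module:
    """Deterministically rebuild the round-0 model from a shared seed, with no communication."""
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Server and client independently reconstruct identical weights from the same seed.
server_model = build_initial_model(seed=1234)
client_model = build_initial_model(seed=1234)
same = all(torch.equal(p, q) for p, q in zip(server_model.parameters(), client_model.parameters()))
print(same)  # True
```

The reconstructed model can then serve as the "past model" in line 6 of Algorithm 1 the first time a client is sampled, which is the FedHBM-shared variant described above.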
To further investigate the impact of initial model selection, we conducted experiments in which clients were allowed to choose a distinct random initialization, referred to as FedHBM-random. As it is possible to see from figure 4a, both solutions make our algorithm to recover its full performance gains, underscoring the resilience of our approach. Use of pre-trained models Following the practice highlighted above, when training from a pre-trained model it is possible to use it as past model for all clients. The availability of the pre-trained model does not constitute a communication-hampering factor, since it can be asynchronously downloaded from a server different than the FL training orchestrator. We experiment by letting the initial server model have the feature extractor initialized from a pre-trained model (on CIFAR-100 for CIFAR-10 and vice versa). As illustrated in figures 4b and 4c, this modification allows regaining full speed from early rounds of training, thereby demonstrating the efficacy of leveraging a well-initialized model for prompt acceleration. ![Figure 4](image) Figure 4: Effect of using a shared model as initialization. For CIFAR’s we show the impact of using a pre-trained backbone, while for StackOverflow we analyze the use of a random shared or independent model initialization. 4.4 Results in large-scale real-world scenarios To further corroborate the results presented in controlled scenarios on common FL datasets, in this section we extended our experimentation to real-world large-scale FL vision datasets, following Hsu et al. (2020). Results in table 5 show that FedHBM outperforms SOTA methods even in large-scale applications. Importantly, it shows superior robustness, as we show failure cases of even theoretically-backed algorithms (e.g. SCAFFOLD), despite careful and broad hyperparameter search. In particular, for MIMEOM we leveraged the official JAX implementation provided by authors (see section B.1 for details). 5 Conclusions We introduced a framework based on a novel generalized heavy-ball momentum (GHB) formulation for FL; in particular, we showed that existing momentum-based FL algorithms are instances of this general framework. Within it we proposed FedHBM, which outperforms the state-of-the-art Table 5: Test accuracy (%) comparison of best SOTA FL algorithms on Landmarks-Users-160K and Inaturalist-Users-120K | Method | Comm. Overhead | Landmarks-Users-160K | Inaturalist-Users-120K | |--------------|----------------|----------------------|------------------------| | | | C ≈ 0.79% | C ≈ 0.1% | C ≈ 0.5% | C ≈ 1% | | FedAvg | 1× | 60.31 ± 0.18 | 38.03 ± 0.84 | 45.25 ± 0.07 | 47.59 ± 0.13 | | Scaffold | 2× | 61.03 ± 0.08 | 0.0 | 0.0 | 0.0 | | FedAvgM | 1× | 61.50 ± 0.22 | 41.34 ± 0.38 | 46.08 ± 0.09 | 48.37 ± 0.07 | | MimeMom | 3× | 0.0 | 0.0 | 0.0 | 0.0 | | FedHBM (ours)| 1× | 65.41 ± 0.17 | 41.64 ± 0.18 | 47.33 ± 0.04 | 49.80 ± 0.05 | approaches in terms of both model quality and convergence speed. Remarkably, we showed that FedHBM is the most robust to statistical heterogeneity and performs favorably even in high cross-device settings and real-world scenarios. The generality and versatility of the novel generalized heavy-ball momentum formulation we propose expands its potential applications to a wider range of scenarios where communication is a bottleneck, such as distributed learning. References Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N Whatmough, and Venkatesh Saligrama. 
Federated learning based on dynamic regularization. International Conference on Learning Representations, 2021. Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. Qsgd: Communication-efficient sgd via gradient quantization and encoding. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/6c340f25839e6acdc73414517203f5f0-Paper.pdf Debora Caldarola, Massimiliano Mancini, Fabio Galasso, Marco Ciccone, Emanuele Rodola, and Barbara Caputo. Cluster-driven graph federated learning over multiple domains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 2749–2758, June 2021. Debora Caldarola, Barbara Caputo, and Marco Ciccone. Improving generalization in federated learning by seeking flat minima. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIII, pp. 654–672. Springer, 2022. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings, 2019. Dawei Chen, Choong Seon Hong, Yiyong Zha, Yunfei Zhang, Xin Liu, and Zhu Han. Fedsvrg based communication efficient scheme for federated learning in mec networks. IEEE Transactions on Vehicular Technology, 70(7):7300–7304, 2021. doi: 10.1109/TVT.2021.3089431. Yicheng Chen, Rick S. Blum, and Brian M. Sadler. Communication efficient federated learning via ordered admm in a fully decentralized setting. In 2022 56th Annual Conference on Information Sciences and Systems (CISS), pp. 96–100, 2022. doi: 10.1109/CISS53076.2022.9751166. Yonghai Gong, Yichuan Li, and Nikolaos M. Freris. Fedadmm: A robust federated deep learning framework with adaptivity to system heterogeneity, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015. Kevin Hsieh, Amar Phanishayee, Onur Mutlu, and Phillip Gibbons. The non-IID data quagmire of decentralized machine learning. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 4387–4398. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/hsieh20a.html.
gBLEHzKOfF
Proposition 3.1 assumes $Y \sim \pi_{\epsilon}^{\star}(\cdot | x)$ in Eq. 7. In practice, Algorithm 1 adopts the mini-batch estimate $\hat{\pi}_{\epsilon}$. Can GENOT still recover optimal conditional generators by taking expectations over the mini-batch estimate, e.g., [4]? In this respect, I am curious about the quantitative results for the OT coupling estimates from GENOT for Fig. 1 and Fig. 4.
Generative Entropic Neural Optimal Transport To Map Within and Across Spaces Anonymous authors Paper under double-blind review Abstract Learning measure-to-measure mappings is a crucial task in machine learning, featured prominently in generative modeling. Recent years have witnessed a surge of techniques that draw inspiration from optimal transport (OT) theory. Combined with neural network models, these methods collectively known as Neural OT use optimal transport as an inductive bias: such mappings should be optimal w.r.t. a given cost function, in the sense that they are able to move points in a thrifty way, within (by minimizing displacements) or across spaces (by being isometric). This principle, while intuitive, is often confronted with several practical challenges that require adapting the OT toolbox: cost functions other than the squared-Euclidean cost can be challenging to handle, the deterministic formulation of Monge maps leaves little flexibility, mapping across incomparable spaces raises multiple challenges, while the mass conservation constraint inherent to OT can provide too much credit to outliers. While each of these mismatches between practice and theory has been addressed independently in various works, we propose in this work an elegant framework to unify them, called generative entropic neural optimal transport (GENOT). GENOT can accommodate any cost function; handles randomness using conditional generative models; can map points across incomparable spaces, and can be used as an unbalanced solver. We evaluate our approach through experiments conducted on various synthetic datasets and demonstrate its practicality in single-cell biology. In this domain, GENOT proves to be valuable for tasks such as modeling cell development, predicting cellular responses to drugs, and translating between different data modalities of cells. 1 Introduction Mapping a probability distribution onto another is a ubiquitous challenge in machine learning, with many implications in the field of generative modeling. Optimal transport (OT) has arisen in a few years as a major purveyor of tools to better address these challenges, both in theory and practice. The focus of OT lies on finding maps that can effectively transform a distribution of matter onto another, by minimizing a certain notion of cost (Santambrogio [2015]). Originally rooted in physics, the application of OT to large-dimensional problems arising in machine learning and sciences has necessitated various modifications and adaptations. Starting with solvers that can solve approximate matching problems at large scales (Cuturi [2013], Peyré et al. [2016], Scetbon et al. [2021, 2022]), a recent plethora of OT-inspired training approaches for neural networks has emerged (Makkuva et al. [2020], Korotin et al. [2020], Asadulaev et al. [2022], Fan et al. [2020], Uscidda & Cuturi [2023], Lipman et al. [2023], Tong et al. [2020, 2023b]). As an illustration of this overall trend, the applications of OT to single-cell genomics have evolved from advanced matching problems (Schiebinger et al. [2019], Demetri et al. [2022]), towards neural-based approaches that can, for instance, predict the response of cells to various perturbations (Bunne et al. [2021, 2022]). Our goal in this paper is to address the various challenges that still stand in the way of applying OT to the most pressing scientific tasks. From Linear to Quadratic Neural OT Maps. 
Optimal transport is primarily used through the Kantorovich problem to put in correspondence distributions taking values in the same space $\mathcal{X}$, pending the existence of a cost $c(x, y)$ for any two points $x, y \in \mathcal{X}$. Most of the theory is available in that regime, notably for simpler costs such as the squared Euclidean distance (Santambrogio [2015] §1.3). We refer to such problems as linear OT problems. Yet, more challenging applicative scenarios sought by practitioners involve source and target distributions that do not live in the same space, e.g., \( \mathcal{X} \) and \( \mathcal{Y} \) have differing dimensions, as in [Demetri et al., 2022]. The challenge in that case is that no cost functions are known, requiring the use of quadratic losses [Mémoli, 2011; Sturm, 2020], yielding the so-called Gromov-Wasserstein (GW) problem. While theory is far more scarce in these regimes, practitioners expressed major interest in that flexibility, going as far as proposing, with the Fused Gromov-Wasserstein (FGW) distance, a tool that blends both linear and quadratic approaches [Vayer et al., 2018], as in [Klein et al., 2023; Lange et al., 2023; Nitzan et al., 2019; Zeira et al., 2022]. There exists, however, to our knowledge, only one formulation of a neural quadratic OT method, which is limited to learning deterministic maps for the inner product costs and whose training procedure involves a min-max-min optimization procedure [Nekrashevich et al., 2023]. **From Deterministic to Stochastic Maps.** The classic (Monge) deterministic map can lack flexibility in practice, both at estimation and inference time. In the quadratic case, that map may not exist [Dumont et al., 2022]. Practitioners may favor, instead, stochasticity, which would account naturally for instance, for the non-determinism of cell evolutions [Elowitz et al., 2002]. Stochastic formulations can also produce a conditional distribution that can be used to quantify uncertainty. In the discrete setting, this property is fulfilled by entropy-regularized OT (EOT) [Cuturi, 2013]. **Flexibility in Mass Conservation.** In numerous real-world applications, the data acquisition process can be error-prone, resulting in outliers. To mitigate this, unbalanced OT (UOT) formulations that can discard observations have been proposed [Frogner et al., 2015; Chizat et al., 2018; Sejourne et al., 2021], with numerous applications to generative modeling [Balaji et al., 2020; Yang & Uhler, 2019] and single-cell genomics [Schiebinger et al., 2019; Eyring et al., 2022; Lübeck et al., 2022]. **Contributions.** We propose a flexible neural OT framework that satisfies all requirements above: - We propose the first method to compute neural EOT couplings in both Kantorovich and GW settings by fitting stochastic maps to their conditional distributions (Prop. 3.1) using conditional flow matching [Lipman et al., 2023] as a building block. In particular, GENOT works with any cost function between samples. - By showing that solving an unbalanced EOT problem is equivalent to solving a balanced one between re-weighted measures (Prop. 3.2) that can be estimated consistently (Prop. 3.3), we introduce U-GENOT to solve unbalanced EOT problems. - We extend (U-)GENOT to solve the (unbalanced) entropic Fused GW problem (§ 3.3). To our knowledge, GENOT is the first neural OT method to solve a continuous Fused GW problem. - We demonstrate the applicability of GENOT in various single-cell biology problems. 
In particular, we (i) quantify lineage branching events in the developing mouse pancreas, (ii) predict cellular responses to drug perturbations along with a well-calibrated uncertainty estimation, and (iii) introduce a novel method to translate ATAC-seq data to RNA-seq data. ## 2 BACKGROUND **Notations.** We consider throughout this work two compact subsets \( \mathcal{X} \subset \mathbb{R}^p, \mathcal{Y} \subset \mathbb{R}^q \), referred to as the source and the target domain, respectively. In general, \( p \neq q \). The sets of positive measures and probability measures on \( \mathcal{X} \) are denoted by \( \mathcal{M}^+(\mathcal{X}) \) and \( \mathcal{M}_1^+(\mathcal{X}) \), respectively. For \( \pi \in \mathcal{M}^+(\mathcal{X} \times \mathcal{Y}) \), we denote its marginals by \( \pi_1 := p_1\sharp\pi \) and \( \pi_2 := p_2\sharp\pi \). Then, for \( \mu \in \mathcal{M}^+(\mathcal{X}), \nu \in \mathcal{M}^+(\mathcal{Y}), \Pi(\mu, \nu) \) is the set of probability measures with respective marginals \( \mu \) and \( \nu \), i.e., \( \Pi(\mu, \nu) = \{ \pi : \pi_1 = \mu, \pi_2 = \nu \} \subset \mathcal{P}(\mathcal{X} \times \mathcal{Y}) \). We define \( \frac{d\mu}{d\nu} \) to be the relative density of \( \mu \) w.r.t. \( \nu \) and write \( \mu = \frac{d\mu}{d\nu} \cdot \nu \) accordingly. For \( \rho, \gamma \in \mathcal{M}^+(\mathcal{X}), \text{KL}(\rho|\gamma) = \int_{\mathcal{X}} \log\left(\frac{d\rho}{d\gamma}\right) d\rho - \int_{\mathcal{X}} d\gamma + \int_{\mathcal{X}} d\rho \). ### 2.1 ENTROPIC OPTIMAL TRANSPORT **The Entropic Kantorovich Problem.** Let \( c : \mathcal{X} \times \mathcal{Y} \to \mathbb{R} \) be a cost function, \( \mu \in \mathcal{M}_1^+(\mathcal{X}), \nu \in \mathcal{M}_1^+(\mathcal{Y}) \) and \( \varepsilon \geq 0 \). The entropy-regularized OT problem reads \[ \min_{\pi \in \Pi(\mu, \nu)} \int_{\mathcal{X} \times \mathcal{Y}} c(x, y) \, d\pi(x, y) + \varepsilon \text{KL}(\pi|\mu \otimes \nu). \] EK A solution \( \pi^\star_\varepsilon \) of (EK) always exists. With \( \varepsilon = 0 \), we recover the classical Kantorovich [1942] problem. When \( \varepsilon > 0 \), the optimal coupling \( \pi^\star_\varepsilon \) is unique. If \( \mu \) and \( \nu \) are discrete, (EK) can be solved with the Sinkhorn algorithm [Cuturi, 2013]. The Entropic Gromov-Wasserstein Problem. As opposed to considering an inter-domain cost defined on $\mathcal{X} \times \mathcal{Y}$, the entropic Gromov-Wasserstein problem is concerned with seeking couplings based on intra-domain cost functions $c_X : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ and $c_Y : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$: $$\min_{\pi \in \Pi(\mu, \nu)} \int_{(\mathcal{X} \times \mathcal{Y})^2} |c_X(x, x') - c_Y(y, y')|^2 d\pi(x, y) d\pi(x', y') + \varepsilon \text{KL}(\pi || \mu \otimes \nu).$$ EGW With $\varepsilon = 0$, we recover the Gromov-Wasserstein problem (Mémoli, 2011). As in the Kantorovich setting, using $\varepsilon > 0$ comes with favorable computational properties, since for discrete $\mu, \nu$, we can solve (EGW) with a mirror-descent scheme based on the Sinkhorn algorithm (Peyré et al., 2016). Unbalanced Extensions. The EOT formulations presented above can only handle measures with the same total mass. Unbalanced optimal transport (UOT) (Liero et al., 2018; Chizat et al., 2018) lifts this constraint by penalizing the deviation of $p_1 \# \pi$ to $\mu$ and $p_2 \# \pi$ to $\nu$ with a divergence. 
Using the KL divergence and introducing $\lambda_1, \lambda_2 > 0$ controlling how much mass variations are penalized as opposed to transportation, the unbalanced extension of (EK) seeks a measure $\pi \in \mathcal{M}^+(\mathcal{X} \times \mathcal{Y})$:

$$\min_{\pi \in \mathcal{M}^+(\mathcal{X} \times \mathcal{Y})} \int_{\mathcal{X} \times \mathcal{Y}} c(x, y) \, d\pi(x, y) + \varepsilon \text{KL}(\pi || \mu \otimes \nu) + \lambda_1 \text{KL}(\pi_1 || \mu) + \lambda_2 \text{KL}(\pi_2 || \nu).$$ UEK

This problem can be solved efficiently in a discrete setting using a variant of the Sinkhorn algorithm (Frogner et al., 2015; Séjourné et al., 2023a). Analogously, the GW formulation (EGW) also admits an unbalanced generalization, which reads

$$\min_{\pi \in \mathcal{M}^+(\mathcal{X} \times \mathcal{Y})} \int_{(\mathcal{X} \times \mathcal{Y})^2} |c_X(x, x') - c_Y(y, y')|^2 d\pi(x, y) d\pi(x', y') + \varepsilon \text{KL}(\pi || \mu \otimes \nu) + \lambda_1 \text{KL}^\otimes(\pi_1 || \mu) + \lambda_2 \text{KL}^\otimes(\pi_2 || \nu),$$ UEGW

where $\text{KL}^\otimes(\rho || \gamma) = \text{KL}(\rho \otimes \rho || \gamma \otimes \gamma)$. This can also be solved using an extension of Peyré et al. (2016)'s scheme introduced by Séjourné et al. (2023b). For both unbalanced problems (UEK) and (UEGW), instead of directly selecting $\lambda_i$, we introduce $\tau_i = \frac{\lambda_i}{\lambda_i + \varepsilon}$ s.t. we recover the hard marginal constraint for $\tau_i = 1$, when $\lambda_i \to +\infty$. We write $\tau = (\tau_1, \tau_2)$ accordingly.

2.2 Conditional Flow Matching

Provided a prior distribution $\rho_0 \in \mathcal{M}_1^+(\mathbb{R}^d)$ and a time-dependent vector field $v_t$, one can define a probability path $(p_t)_{t \in [0, 1]}$ starting from $\rho_0$ using the flow $(\phi_t)_{t \in [0, 1]}$ induced by the ODE

$$\frac{d}{dt} \phi_t(z) = v_t(\phi_t(z)), \quad \phi_0(z) = z,$$ (1)

by setting $p_t = \phi_t \# \rho_0$. In that case, we say that $v_t$ generates the path $p_t$ through the flow $\phi_t$. Continuous Normalizing Flows (Chen et al., 2018) model the vector field with a neural network $v_{t,\theta}$, leading to a deep parametric model of the flow, which is trained to match a terminal condition defined by a target distribution $p_1 = \rho_1 \in \mathcal{M}_1^+(\mathbb{R}^d)$. (Conditional) Flow Matching (CFM) (Lipman et al., 2023) is a simulation-free technique to train CNFs by constructing probability paths between individual data samples $z_0 \sim \rho_0, z_1 \sim \rho_1$, and minimizing the loss

$$L_{\text{CFM}}(\theta) = \mathbb{E}_{t \sim U([0, 1]), Z_0 \sim \rho_0, Z_1 \sim \rho_1} \left[ \| v_{t,\theta}((1-t)Z_0 + tZ_1) - (Z_1 - Z_0) \|_2^2 \right].$$ (2)

If this loss is 0, then $v_{t,\theta}$ generates a probability path between $\rho_0$ and $\rho_1$, i.e. the induced flow satisfies $\phi_1 \# \rho_0 = \rho_1$ (Lipman et al., 2023, Theorem 1). To sample from $\rho_1$, we solve the ODE (1) with $z_0$ sampled from $\rho_0$, and therefore $\phi_1(z_0)$ is a sample from $\rho_1$.

3 Generative Entropic Neural Optimal Transport

In this section, we introduce GENOT, a method to learn EOT couplings by learning their conditional distributions. In § 3.1, we first focus on the balanced OT case, when the source and the target measures have the same mass, and show that GENOT can solve (EK) or (EGW). Second, in § 3.2, we extend GENOT to the unbalanced setting by loosening the conservation of mass constraint and defining U-GENOT, which can be used to solve problems (UEK) and (UEGW).
Finally, in § 3.3, we highlight that GENOT also addresses a fused problem, combining (EK) and (EGW). 3.1 Learning Entropic Optimal Couplings with GENOT Let $\mu \in M^1_+(\mathcal{X})$, $\nu \in M^1_+(\mathcal{Y})$ and $\pi^\ast_\varepsilon$ be an EOT coupling between $\mu$ and $\nu$, which can be a solution of problem (EK) or (EGW). The measure disintegration theorem yields $$d\pi^\ast_\varepsilon(x, y) = d\pi^\ast_\varepsilon(x) d\pi^\ast_\varepsilon(y|x) = d\mu(x) d\pi^\ast_\varepsilon(y|x).$$ (3) Knowing $\mu$, we can hence fully describe $\pi^\ast_\varepsilon$ via the conditional distributions $(\pi^\ast_\varepsilon(\cdot|x))_{x \in \mathcal{X}}$. The latter are also of great practical interest, as they provide a way to transport a source sample $x \sim \mu$ to the target domain $\mathcal{Y}$: either stochastically by sampling $y_1, \ldots, y_n \sim \pi^\ast_\varepsilon(\cdot|x)$, or deterministically by averaging over conditional samples: $$T_\varepsilon(x) := \mathbb{E}_{Y \sim \pi^\ast_\varepsilon(\cdot|x)}[Y] = \mathbb{E}_{(X,Y) \sim \pi^\ast_\varepsilon}[Y|X = x].$$ (4) Moreover, we can compute any statistic of $\pi^\ast_\varepsilon(\cdot|x)$ to assess the uncertainty surrounding this prediction. In the following, we elaborate on our approach for calculating these conditional distributions. Noise Outsourcing. Let $\rho \in M^1_+(\mathcal{Z})$ be an atomless distribution on an arbitrary Borel space $\mathcal{Z}$, refer to as the noise. The noise outsourcing lemma (Kallenberg, 2002) states that there exists a collection of maps $\{T^\ast(\cdot|x)\}_{x \in \mathcal{X}}$ with $T^\ast(\cdot|x): \mathcal{Z} \to \mathcal{Y}$ s.t. for each $x$ in the support of $\mu$, $\pi^\ast_\varepsilon(\cdot|x) = T^\ast(\cdot|x)\# \rho$, i.e. if $Z \sim \rho$, then $Y = T^\ast(Z|x) \sim \pi^\ast_\varepsilon(\cdot|x)$. Each $T^\ast(\cdot|x)$ generates a distribution from a point $x$, by “outsourcing” the noise vectors $Z \sim \rho$. We refer to $\{T^\ast(\cdot|x)\}_{x \in \mathcal{X}}$ as a collection of optimal conditional generators since they generate the conditional distributions of $\pi^\ast_\varepsilon$. Conversely, noise outsourcing provides a way to define neural couplings $\pi_\theta$ by parameterizing their conditional generators $\{T_\theta(\cdot|x)\}_{x \in \mathcal{X}}$ with neural networks. To obtain $\pi_\theta \approx \pi^\ast_\varepsilon$, we then need $T_\theta(\cdot|x)$ to generate $\pi^\ast_\varepsilon(\cdot|x)$ by outsourcing the noise $\rho$, for any source sample $x$ in the support of $\mu$. Learning the Conditional Generators. In the following, we learn a collection of maps $\{T_\theta(\cdot|x)\}_{x \in \mathcal{X}}$ fitting the constraint $T_\theta(\cdot|x)\# \rho \approx \pi^\ast_\varepsilon(\cdot|x)$ for any $x$ in the support of $\mu$. Instead of directly modeling $T_\theta(\cdot|x)$ with a neural network, we employ the CFM framework discussed in §2.2. To that end, we first set $\mathcal{Z} = \mathbb{R}^q$ and the noise $\rho = \mathcal{N}(0, I_q)$. Recall that $q$ is the dimension of the target domain $\mathcal{Y}$. Then, we parameterize each $T_\theta(\cdot|x)$ implicitly as the flow induced by a neural vector field $v_{t,\theta}(\cdot|x): \mathbb{R}^q \to \mathbb{R}^q$. 
Namely, $T_\theta(\cdot|x) = \phi_1(\cdot|x)$, where $\phi_t(\cdot|x)$ solves

$$\frac{d}{dt}\phi_t(z|x) = v_{t,\theta}(\phi_t(z|x)|x), \quad \phi_0(z|x) = z.$$ (5)

We stress that while $x \in \mathcal{X} \subset \mathbb{R}^p$, the flow from $\rho$ to $\pi^\ast_\varepsilon(\cdot|x)$ is defined on $\mathbb{R}^q \supset \mathcal{Y}$. Hence, we can map samples within the same space when $p = q$, but also across incomparable spaces when $p \neq q$. In particular, this allows us to solve the Gromov-Wasserstein problem (EGW). Thus, for each $x$, we optimize $v_{t,\theta}(\cdot|x)$ by minimizing the CFM loss (2) with source $\rho$ and target $\pi^\ast_\varepsilon(\cdot|x)$, i.e.

$$\mathbb{E}_{t \sim U([0,1]), Z \sim \rho, Y \sim \pi^\ast_\varepsilon(\cdot|x)}[\|v_{t,\theta}((1-t)Z + tY|x) - (Y-Z)\|^2_2].$$ (6)

Averaging over all $x$ in the support of $\mu$ and using Fubini's Theorem, we arrive at the GENOT loss

$$L_{\text{GENOT}}(\theta) = \mathbb{E}_{t \sim U([0,1]), Z \sim \rho, X \sim \mu, Y \sim \pi^\ast_\varepsilon(\cdot|X)}[\|v_{t,\theta}((1-t)Z + tY|X) - (Y-Z)\|^2_2].$$ (7)

GENOT is a well-posed loss in the sense that, in the idealized asymptotic, infinite-sample setting, assuming neural network architectures that are expressive enough, one could provably recover the original entropic coupling (and its conditional distributions), as shown in the Proposition below.

Proposition 3.1 (Well-posedness of the GENOT Loss). Suppose that $L_{\text{GENOT}}(\theta) = 0$. Then the flows $\{\phi_t(\cdot|x)\}_{x \in \mathcal{X}}$, induced by the velocity fields $\{v_{t,\theta}(\cdot|x)\}_{x \in \mathcal{X}}$, are a collection of optimal conditional generators. Namely, for $x$ in the support of $\mu$, $Z \sim \rho$ and $Y = \phi_1(Z|x)$ denoting the solution of the ODE (5), we have $Y \sim \pi^\ast_\varepsilon(\cdot|x)$; therefore this ideal conditional vector field $v_{t,\theta}$ recovers $\pi^\ast_\varepsilon$.

We optimize the sample-based GENOT loss using mini-batches. This involves (i) estimating a discrete coupling $\hat{\pi}_\varepsilon$ from samples $x_1, \ldots, x_n$ from $\mu$ and $y_1, \ldots, y_n$ from $\nu$, and (ii) sampling its discrete conditional distributions to recover paired samples. Algorithm 1 details the overall procedure, using noise and time samples. GENOT can be thought of as a conditional CFM model: for each $x$, using CFM, train a conditional vector field $v_{t,\theta}(\cdot|x)$ to generate $\pi^\ast_\varepsilon(\cdot|x)$ from the noise $\rho$.
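The mini-batch training step described above can be sketched as follows for the Kantorovich case; this is our own illustrative PyTorch snippet, not the authors' implementation. The mini-batch EOT plan is computed with a plain Sinkhorn loop on the (normalized) squared Euclidean cost, target points are drawn from its discrete conditional distributions, and the conditional vector field is fit with the flow-matching objective of eq. (7). Network sizes, the cost normalization, and all function names are assumptions.

```python
import torch

def sinkhorn_coupling(x, y, eps=0.1, n_iters=200):
    """Minimal balanced Sinkhorn on a mini-batch, returning the n x n plan pi_hat."""
    n = x.shape[0]
    cost = torch.cdist(x, y) ** 2
    cost = cost / cost.mean()                   # make eps relative to the cost scale (sketch-only choice)
    K = torch.exp(-cost / eps)
    a = torch.full((n,), 1.0 / n)
    b = torch.full((n,), 1.0 / n)
    u, v = torch.ones(n), torch.ones(n)
    for _ in range(n_iters):
        u = a / (K @ v + 1e-30)
        v = b / (K.T @ u + 1e-30)
    return u[:, None] * K * v[None, :]

def genot_step(v_field, opt, x_batch, y_batch, eps=0.1):
    """One GENOT training step: pair samples via the mini-batch EOT plan, then conditional flow matching.
    (For the EGW/EFGW cases, replace sinkhorn_coupling with a discrete entropic GW/FGW solver.)"""
    n, q = x_batch.shape[0], y_batch.shape[1]
    pi_hat = sinkhorn_coupling(x_batch, y_batch, eps)
    cond = pi_hat / pi_hat.sum(dim=1, keepdim=True)          # rows ~ discrete conditionals pi_hat(.|x_i)
    idx = torch.multinomial(cond, num_samples=1).squeeze(1)
    y = y_batch[idx]                                          # y_i ~ pi_hat(.|x_i)
    z = torch.randn(n, q)                                     # noise rho = N(0, I_q), in the target space
    t = torch.rand(n, 1)
    x_t = (1 - t) * z + t * y                                 # straight interpolation between noise and target
    target_vel = y - z
    pred = v_field(torch.cat([x_t, t, x_batch], dim=1))       # vector field conditioned on time and source x
    loss = ((pred - target_vel) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage sketch: a small MLP as the conditional vector field (dimensions are illustrative, here p = q = 2).
p, q = 2, 2
v_field = torch.nn.Sequential(torch.nn.Linear(q + 1 + p, 128), torch.nn.SiLU(), torch.nn.Linear(128, q))
opt = torch.optim.Adam(v_field.parameters(), lr=1e-3)
print(genot_step(v_field, opt, torch.randn(64, p), torch.randn(64, q)))
```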
Bias and Mini-batches. Quantifying non-asymptotically the bias resulting from minimizing a sample-based GENOT loss, rather than its population value, is a challenging task. The OT-inspired generative modeling literature (Genevay et al., 2019a; Salimans et al., 2018; Uscidda & Cuturi, 2023; Tong et al., 2023) recurrently mentions this aspect; see also (Fatras et al., 2021). Analyzing these non-asymptotic properties becomes even harder when considering conditional mappings across spaces, in a GW setting, as we do here, since discrete solvers do not, in general, return a globally optimal sample-based coupling. Yet, our goal in this paper is not to estimate a deterministic Monge map or vector field (Benamou & Brenier, 2000); we explicitly target the entropic coupling. In that sense, using a large $\varepsilon$ does help, because of two qualitative factors: in the Kantorovich problem, all statistical recovery rates that relate to entropic costs (Genevay et al., 2019b; Mena & Niles-Weed, 2019) or maps (Rigollet & Stromme, 2022), for a fixed $\varepsilon > 0$, have a far more favorable regime, with a parametric rate that dodges the curse of dimensionality. While these statistics are less studied for the GW case, Rioux et al. (2023) have recently shown that for sufficiently large $\varepsilon$, GW becomes a convex problem, making optimization more stable. Qualitatively, a large $\varepsilon$ will therefore be useful on both statistical and computational fronts. The simpler alternative of independent sampling boils down effectively to an infinite $\varepsilon$.

GENOT Addresses Any Cost. Thanks to Prop. 3.1, we can use GENOT to solve (EK) and (EGW) problems. In both cases, we make no assumptions on the cost functions, and only need to evaluate these costs to estimate $\pi^*_\varepsilon$. In particular, we can use costs that are implicitly defined and whose evaluation requires a non-differentiable sub-routine. For instance, recent works have proposed using the geodesic distance on the data manifold as cost, which can be approximated from samples by considering the shortest-path distance on the $k$-nn graph induced by the Euclidean distance (Demetci et al., 2022). Using data-driven cost functions is crucial for many applications, such as some single-cell genomic tasks (Huguet et al., 2022; Klein et al., 2023).
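As an illustration of such a data-driven cost, the following sketch (our own, using scikit-learn and SciPy; parameter values are arbitrary) approximates geodesic distances by shortest paths on the $k$-nearest-neighbour graph built from the Euclidean metric; the resulting matrix can then be fed to the discrete solver used to estimate $\hat{\pi}_\varepsilon$.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def knn_geodesic_cost(x: np.ndarray, k: int = 10) -> np.ndarray:
    """Approximate geodesic distances on the data manifold by shortest paths on a k-nn graph."""
    graph = kneighbors_graph(x, n_neighbors=k, mode="distance")   # sparse graph with Euclidean edge lengths
    dist = shortest_path(graph, method="D", directed=False)       # Dijkstra on the symmetrized graph
    # Disconnected components yield inf; replace by the largest finite distance as a crude fallback.
    finite_max = dist[np.isfinite(dist)].max()
    return np.where(np.isfinite(dist), dist, finite_max)

x = np.random.randn(200, 30)
cost = knn_geodesic_cost(x)        # e.g. an intra-domain cost c_X for the (fused) GW terms
print(cost.shape)                  # (200, 200)
```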
3.2 U-GENOT: Extension to the Unbalanced Setting

Re-Balancing the UOT Problems. In its standard form, GENOT respects marginal constraints, so it cannot directly tackle the unbalanced formulations (UEK) or (UEGW). We show that such unbalanced problems can be re-balanced. Lübeck et al. (2022) and Yang & Uhler (2019) previously introduced these ideas in the Monge map estimation setting, namely, in a static and deterministic setup. Besides this important conceptual difference, various aspects further differentiate our approach: Lübeck et al. (2022) define unbalanced couplings between mapped instances of the source measure using an ICNN (significantly closer to the target), and vice-versa, whereas we directly target the unbalanced coupling between source and target for any cost; Yang & Uhler (2019) provide an asymmetric formulation that only considers modulations of the source distribution to the target distribution. Our method stems from the fact that, for the Kantorovich and GW cases, we can show that the unbalanced EOT coupling $\pi^*_{\varepsilon,\tau}$ between $\mu \in \mathcal{M}^+(\mathcal{X})$ and $\nu \in \mathcal{M}^+(\mathcal{Y})$ solves a balanced EOT problem between its marginals, which are re-weighted versions of $\mu$ and $\nu$ that have the same mass.

Proposition 3.2 (Re-Balancing the unbalanced problems). Let $\pi^*_{\varepsilon,\tau}$ be an unbalanced EOT coupling, solution of (UEK) or (UEGW) between $\mu \in \mathcal{M}^+(\mathcal{X})$ and $\nu \in \mathcal{M}^+(\mathcal{Y})$. We note $\tilde{\mu} = p_1 \sharp \pi^*_{\varepsilon,\tau}$ and $\tilde{\nu} = p_2 \sharp \pi^*_{\varepsilon,\tau}$ its marginals. Then, in both cases, $\tilde{\mu}$ (resp. $\tilde{\nu}$) has a density w.r.t. $\mu$ (resp. $\nu$), i.e. there exist $\eta, \xi : \mathbb{R}^d \to \mathbb{R}^+$ s.t. $\tilde{\mu} = \eta \cdot \mu$ and $\tilde{\nu} = \xi \cdot \nu$. Moreover, $\tilde{\mu}$ and $\tilde{\nu}$ have the same mass and

1. (Kantorovich) $\pi^*_{\varepsilon,\tau}$ solves the balanced problem (EK) between $\tilde{\mu}$ and $\tilde{\nu}$ with the same $\varepsilon$.
2. (Gromov-Wasserstein) Provided that $c_X$ and $c_Y$ are conditionally positive (or conditionally negative) kernels (see Def. B.7), $\pi^*_{\varepsilon,\tau}$ solves the balanced problem (EGW) between $\tilde{\mu}$ and $\tilde{\nu}$ with $\varepsilon' = m(\pi^*_{\varepsilon,\tau})\, \varepsilon$, where $m(\pi^*_{\varepsilon,\tau}) = \pi^*_{\varepsilon,\tau}(\mathcal{X} \times \mathcal{Y})$ is the total mass of $\pi^*_{\varepsilon,\tau}$.

Remark. In various experimental settings, $\mu$ and $\nu$ have mass 1 and we impose one of the two hard marginal constraints, for instance on $\mu$, by setting $\tau_1 = 1$. Then $\tilde{\nu}$ also has mass 1 and $m(\pi^*_{\varepsilon,\tau}) = 1$, so we keep the same regularization strength $\varepsilon$ when re-balancing (UEGW).

Learning the Coupling and the Re-Weightings Simultaneously. Thanks to Prop. 3.2, we aim to (i) learn a balanced EOT coupling between $\tilde{\mu}$ and $\tilde{\nu}$ along with (ii) the re-weighting functions $\eta, \xi$. The latter are crucial since they model the creation and destruction of mass. We do both simultaneously by adapting the GENOT procedure. More formally, we seek to optimize the U-GENOT loss

$$L_{\text{U-GENOT}}(\theta) = \mathbb{E}_{t \sim U([0,1]), Z \sim \rho, X \sim \tilde{\mu}, Y \sim \pi^*_{\varepsilon,\tau}(\cdot | X)}[\| v_{t,\theta}((1-t)Z + tY | X) - (Y - Z) \|_2^2] + \mathbb{E}_{X \sim \mu}[(\eta(X) - \eta_\theta(X))^2] + \mathbb{E}_{Y \sim \nu}[(\xi(Y) - \xi_\theta(Y))^2].$$

As with GENOT, we simply need to estimate the unbalanced OT coupling $\hat{\pi}_{\varepsilon,\tau}$ from samples $X_1, \ldots, X_n$ from $\mu$ and $Y_1, \ldots, Y_n$ from $\nu$ to estimate that loss. We build upon theoretical insights from the Kantorovich case, which we extend in practice to the Gromov-Wasserstein case.

**Proposition 3.3** (Estimation of the re-weightings). Let $\hat{\pi}_{\varepsilon,\tau}$ be the solution of (UEK) computed on samples. Let $a = \hat{\pi}_{\varepsilon,\tau} 1_n$ and $b = \hat{\pi}_{\varepsilon,\tau}^T 1_n$ be its marginal weights, and let $\hat{\eta}_n(x_i) := n a_i$ and $\hat{\xi}_n(y_i) := n b_i$. Then, almost surely, $\hat{\eta}_n(x_i) \to \eta(x_i)$ and $\hat{\xi}_n(y_i) \to \xi(y_i)$.

Using Prop. 3.2, $\hat{\pi}_{\varepsilon,\tau}$ is a balanced EOT coupling between its marginals, which are empirical approximations of $\tilde{\mu}$ and $\tilde{\nu}$. We hence estimate term (i) of the loss as we do in the balanced case, by sampling from the discrete conditional distributions. Furthermore, Prop. 3.3 highlights that the estimation of $\hat{\pi}_{\varepsilon,\tau}$ also provides a consistent estimate of the re-weighting function evaluations at each $x_i$ and $y_i$. This enables the estimation of term (ii). Therefore, as with GENOT, each U-GENOT iteration only requires a call to a discrete solver. We detail our training procedure in algorithm 2.
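Prop. 3.3 suggests a simple recipe for the regression targets of $\eta_\theta$ and $\xi_\theta$. The following sketch (ours, in NumPy, with a KL-relaxed unbalanced Sinkhorn loop written from scratch and illustrative parameter values) computes the mini-batch unbalanced plan and the corresponding re-weighting evaluations $n a_i$ and $n b_i$.

```python
import numpy as np

def unbalanced_sinkhorn(x, y, eps=0.05, tau1=0.98, tau2=0.98, n_iters=500):
    """Mini-batch unbalanced EOT plan with KL marginal relaxation (tau_i = lambda_i / (lambda_i + eps))."""
    n, m = x.shape[0], y.shape[0]
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.mean()                    # make eps relative to the cost scale (sketch-only choice)
    K = np.exp(-cost / eps)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = (a / (K @ v + 1e-30)) ** tau1
        v = (b / (K.T @ u + 1e-30)) ** tau2
    return u[:, None] * K * v[None, :]

x, y = np.random.randn(128, 2), np.random.randn(128, 2) + 1.0
pi_hat = unbalanced_sinkhorn(x, y)
eta_hat = pi_hat.shape[0] * pi_hat.sum(axis=1)   # estimates eta(x_i) ~ n * a_i  (Prop. 3.3)
xi_hat = pi_hat.shape[1] * pi_hat.sum(axis=0)    # estimates xi(y_i)  ~ n * b_i
# eta_hat and xi_hat then serve as regression targets for the re-weighting networks eta_theta, xi_theta.
```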
### 3.3 Combining Kantorovich and Gromov-Wasserstein to the Fused Setting

We show in § 3.1 and § 3.2 how to use our method to map samples within the same space, or across incomparable spaces, by solving (EK) or (EGW), and their unbalanced extensions. On the other hand, there are cases where the source and the target domains are only partially incomparable, leading to a problem that combines both OT formulations (Vayer et al., 2018). Suppose that the source and target spaces can be decomposed as $\mathcal{X} = \Omega \times \tilde{\mathcal{X}}$ and $\mathcal{Y} = \Omega \times \tilde{\mathcal{Y}}$, respectively. Moreover, assume we are given an inter-domain cost $c : \Omega \times \Omega \to \mathbb{R}$ along with the intra-domain costs $c_{\tilde{\mathcal{X}}}, c_{\tilde{\mathcal{Y}}}$. The entropic fused Gromov-Wasserstein (FGW) problem can then be defined as

$$\min_{\pi \in \Pi(\mu, \nu)} \int L((u, x), (v, y), (x', y'))\, d\pi((u, x), (v, y))\, d\pi((u', x'), (v', y')) + \varepsilon \text{KL}(\pi | \mu \otimes \nu),$$ EFGW

where $L((u, x), (v, y), (x', y')) := (1 - \alpha)c(u, v) + \alpha[c_{\tilde{\mathcal{X}}}(x, x') - c_{\tilde{\mathcal{Y}}}(y, y')]^2$ and $\alpha \in [0, 1]$ determines the influence of the components of the space decompositions. When $\alpha = 1$, we recover the pure GW setting. The above fused problem admits an unbalanced extension, which can be derived exactly in the same way as (UEGW) using the quadratic KL (Thual et al., 2023).

(U-)GENOT Addresses the Fused Setting. Whether in the balanced or unbalanced setting, we can use our method to learn a specific coupling as soon as it can be estimated from samples. We stress that the discrete solvers we use for problems (EGW) and (UEGW) are still applicable in the fused setting. As a result, we can compute discrete fused couplings and then solve (EFGW) and its unbalanced counterpart with (U-)GENOT. To illustrate this idea more precisely, take a solution $\pi^\star_\alpha$ of (EFGW). Learning $\pi^\star_\alpha$ with our method amounts to training vector fields that are conditioned on pairs of modalities from the source domain, $v_{t,\theta}(\cdot | u, x)$, to sample pairs of modalities from the target domain via the induced flow: $z \sim \rho$, $\phi_1(z | u, x) = (v, y) \sim \pi^\star_\alpha(\cdot | u, x)$. Given each term of the fused problem (EFGW), the sampled modalities $(v, y)$ minimize the transport cost quantified by $c$ along the first modality, while being "isometric" w.r.t. $c_{\tilde{\mathcal{X}}}$ and $c_{\tilde{\mathcal{Y}}}$ on the second modality.

### 4 RELATED WORK

**Neural EOT.** While GENOT is the first model to learn neural EOT couplings in the (Fused) Gromov-Wasserstein or the unbalanced setting, various methods have been proposed in the (balanced) Kantorovich setting. The first class of methods solves the (EK) dual problem. Some of them (Genevay et al., 2019a) do not allow direct sampling according to $\pi^\star_\varepsilon(\cdot | x)$, while others use the learned dual potentials to sample from $\pi^\star_\varepsilon(\cdot | x)$ with Langevin dynamics; the latter approach is (i) costly, as it employs Langevin sampling at inference, and (ii) numerically unstable, as it requires the exponentiation of large numbers. Mokrov et al. (2023) proposed another approach modeling $\pi^\star_\varepsilon(\cdot | x)$ leveraging energy-based models, but it is computationally expensive since it relies on Langevin sampling in each training iteration. Other Kantorovich EOT solvers build upon the link between (EK) and the Schrödinger bridge (SB) problem. They model the EOT plan as a time-evolving stochastic process with fixed marginal constraints, endowed with learnable drift and diffusion terms (De Bortoli et al., 2021; Chen et al., 2021; Vargas et al., 2021; Gushchin et al., 2022). Although these methods have shown good performance on image data, they are very costly since they require simulation-based training.

Figure 1: Prediction by U-GENOT-K and ground truth of the unbalanced entropy-regularized transport plan between mixtures of Gaussians. The first column shows the source (top) and target (bottom) distribution. The second and third columns show the marginal distributions of the true and the learnt transport plan, respectively. The fourth column compares the learnt (top) with the true (bottom) transport plan, while the fifth column plots conditional distributions. Here, $\varepsilon = 0.05$.
The second and third columns show the marginal distributions of the true and the learnt transport plan, respectively. The fourth column compares the learnt (top) with the true (bottom) transport plan, while the fifth column plots conditional distributions. Here, $\varepsilon = 0.05$.

A recent line of work proposed to train such models in a completely simulation-free manner (Tong et al., 2023a,b; Shi et al., 2023; Liu et al., 2023) via score or flow matching. However, these methods can only be used for the squared Euclidean cost. Indeed, they rely on the fact that the marginals of the SB can be characterized as a mixture of Brownian bridges weighted by an EOT plan. This property holds only when the Wiener process is chosen as the reference measure in the SB problem, which restricts (EK) to the cost $c(x,y) = \|x - y\|_2^2$ (Léonard, 2013, Eq. 1.2). On the other hand, GENOT is the first neural EOT framework that can handle any cost function, even costs defined implicitly, whose evaluation requires a call to a non-differentiable sub-routine, like the geodesic distance on the data manifold. This point allows us to emphasize that our method fundamentally differs from theirs, since we do not exploit the link between EOT and SB. Our approach is purely conditional and uses flow matching only as a powerful generative black box to learn, for each $x$, a flow from $\rho$ to $\pi^\star_{\varepsilon,\tau}(\cdot \mid x)$. Notably, since we set $\rho \in \mathcal{M}_1^+(\mathcal{Y})$, each flow occurs in the target domain $\mathcal{Y}$, which allows us to map distributions across spaces, while (Tong et al., 2023a; Shi et al., 2023; Liu et al., 2023) model (stochastic) flows directly from $\mu$ to $\nu$, requiring $\mu$ and $\nu$ to lie in the same space.

Computation of Neural Couplings. Another line of work considers computing neural couplings through the weak OT paradigm (Korotin et al., 2022a,b; Asadulaev et al., 2022; Gazieva et al., 2022), by solving a challenging min-max problem. However, (i) these methods only enable mapping within the same space, (ii) they only apply in the balanced setting, and (iii) they cannot handle EOT problems, since this would require estimating the entropy of the neural coupling from samples at each iteration.

5 EXPERIMENTS

We demonstrate the applicability and versatility of the GENOT framework on toy data and single-cell data, mapping within the same space and across incomparable spaces. Metrics are discussed in Appendix C and details on the single-cell datasets can be found in Appendix D. Further experimental details and results for each experiment are reported in Appendix E. Setups for competing methods are listed in Appendix F. Details on the implementation of GENOT can be found in Appendix G. We introduce the notation GENOT-K for the GENOT model solving problem (EK), while GENOT models solving the tasks (EGW) and (EFGW) are referred to as GENOT-GW and GENOT-FGW, respectively. The prefix U is used whenever we consider an unbalanced problem, as described in § 3.2. Moreover, when reporting results based on the conditional mean of a GENOT model, we add the suffix CM to the model name. If not stated otherwise, we use the squared Euclidean distance as cost.

5.1 GENOT-K TO MAP WITHIN SPACES

U-GENOT-K on simulated data. To visualize the capabilities of U-GENOT-K to learn unbalanced entropy-regularized transport plans and rescaling functions, we compare its predictions with the OT plan obtained from a discrete EOT solver. Fig.
1 shows that the unbalanced entropy-regularized transport plan with $\varepsilon = 0.05$ and $\tau_1 = \tau_2 = 0.98$ between mixtures of Gaussians is accurately learnt by U-GENOT-K. The influence of the unbalancedness parameters $\tau_1, \tau_2$ is visualized in Fig. 7. The performance of GENOT-K is further assessed, in [E], on its ability to learn the entropic OT coupling between Gaussian distributions. U-GENOT-K for modeling single-cell trajectories OT has been successfully applied to recover cellular trajectories in time-resolved single-cell data (Schiebinger et al., 2019). Due to the ever increasing size of these datasets (Haniffa et al., 2021), neural OT solvers are of particular interest and deterministic Monge map estimators have been successfully applied to millions of cells (He et al., 2023). We apply GENOT-K to a dataset capturing gene expression of the developing mouse pancreas at embryonic days 14.5 and 15.5 (Bastidas-Ponce et al., 2019). We assess the fitting property of the learnt plan by computing the Sinkhorn divergence (Feydy et al., 2019a) between the predicted target distribution $p_2 \# \hat{\pi}_\varepsilon$ and the target distribution see (E.2). Fig. 12 shows that GENOT-K outperforms competing methods. A key feature of all GENOT models is the ability to sample from the conditional distribution. Indeed, it is indispensable to stochastically model cellular trajectories, as cells are known to evolve non-deterministically (Elowitz et al., 2002). Following Gayoso et al. (2022), we compute $\cos-\text{var}(\hat{\pi}_\varepsilon(\cdot|\mathbf{x})) = \text{Var}_{Y \sim \hat{\pi}_\varepsilon(\cdot|\mathbf{x})}[\cos-\text{sim}(Y, \mathbb{E}_{Y \sim \hat{\pi}_\varepsilon(\cdot|\mathbf{x})}[Y])]$, where $\cos-\text{sim}(\cdot,\cdot)$ denotes the cosine similarity, to assess the uncertainty of cell trajectories in the developing mouse pancreas (appendix C.1). We expect high uncertainty in cell types with fate decisions and low variance in mature cell types or cell types with a homogeneous descending population. Indeed, Fig. 2 and Fig. 13 show that GENOT-K helps to uncover lineage branching events. The pancreas dataset considered so far subsets the original dataset to one cell lineage (endocrine) to prevent obtaining biologically implausible couplings. Indeed, table 1 shows that in the balanced case, the cell lineage transition score (see C.2) shows that only 66% of the cells are mapped to the correct lineage. By loosening the conservation of mass constraint, U-GENOT-K helps to counteract the distributional shift introduced by different proliferation rates of cells and experimental biases. Prediction of cellular responses to drug perturbations with U-GENOT-K In-silico perturbation prediction is a promising approach to accelerate drug discovery and improve gene therapies (Ji et al., 2021; Hetzel et al., 2022). Neural OT has been successfully applied to model cellular responses to such perturbations, using deterministic Monge maps (Bunne et al., 2021; Uscidda & Cuturi, 2023). GENOT has the comparative advantage that it can sample from the conditional distribution, which allows for uncertainty quantification. We consider single-cell RNAseq data measuring the response of cells to 163 cancer drugs (Srivatsan et al., 2020). Each drug has been applied to a population of cells that can be partitioned into 3 different cell types. 
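To make the cos-var uncertainty measure used in this section concrete, the following is a minimal sketch (ours, not the authors' code); the samples are assumed to be draws $Y \sim \hat{\pi}_\varepsilon(\cdot \mid x)$ obtained by pushing several noise vectors $z \sim \rho$ through the learned conditional flow.

```python
import numpy as np

def cos_var(samples, eps=1e-12):
    """cos-var of a conditional distribution, estimated from samples.

    `samples` is an (n_samples, d) array of draws Y ~ pi_hat(.|x).
    Returns Var[cos-sim(Y, E[Y])], the variance of the cosine similarity
    between each conditional sample and the conditional mean."""
    mean = samples.mean(axis=0)
    cos = samples @ mean / (
        np.linalg.norm(samples, axis=1) * np.linalg.norm(mean) + eps
    )
    return cos.var()
```

High values flag cells whose predicted descendants point in several directions (e.g., at fate decisions), while low values flag mature or homogeneous populations.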
While there is no ground truth for the matching between unperturbed and perturbed cells, due to the destructive nature of sequencing technologies, we know which unperturbed subset of cells is supposed to be mapped to which perturbed subset of cells. We use this to define an accuracy metric (Appendix C.2). For the uncertainty metric, we again choose cos-var. Fig. 3 shows that for 117 out of 163 drugs the model is perfectly calibrated (Appendix C.1), while it yields a negative correlation between error and uncertainty for only one drug. To improve the accuracy of GENOT-K, we leverage its unbalanced formulation. Fig. 3 shows that allowing for mass variation improves the performance for nine different cancer drugs which are known to have a strong effect. Figs. 17 and 18 confirm the results visually.

Figure 5: UMAP embedding of transported cells and cells in the target distribution (left), and jointly colored by cell type (right).

5.2 GENOT-GW AND GENOT-FGW TO MAP ACROSS SPACES

GENOT-GW on simulated data. We transport a Swiss roll in $\mathbb{R}^3$ to a spiral in $\mathbb{R}^2$. Fig. 4 shows that GENOT-GW successfully mimics an isometric alignment. Here, we set $\varepsilon = 0.01$ and investigate its influence in more detail in Fig. 19.

GENOT-GW for translating modalities of single cells. The number of modalities which can be simultaneously measured in a single cell is limited due to technical limitations. At the same time, new technologies allow capturing a more diverse set of modalities (Baysoy et al., 2023). Yet, it is important to match measurements of different modalities to obtain a more holistic view of the profile of a cell. The discrete GW formulation has been used to match measurements of cells in different modalities (Demetri et al., 2022). We use GENOT-GW to translate ATAC measurements to gene expression space on a bone marrow dataset (Luecken et al., 2021). As both modalities were measured in the same cell, the true match of each cell is known. We compare GENOT-GW with the discrete GW formulation (see F.2) and assess the performance with the FOSCTTM ("Fraction of Samples Closer than the True Match") score (see C.2). We leverage the flexibility of GENOT and use an approximated geodesic distance (Crane et al., 2013) rather than the Euclidean distance, which is not meaningful within embeddings of single-cell measurements (Moon et al., 2018). Fig. 6 shows three results related to the FOSCTTM score. First, using a graph-based cost is crucial in higher dimensions. Second, out-of-sample prediction for discrete GW based on regression (GW-LR) is competitive in low dimensions, but not in higher ones. Third, taking the conditional mean as prediction improves the result with respect to the FOSCTTM score. Regarding the distributional fitting property, GENOT models are clearly superior. Crucially, Fig. 6 shows that the fitting property of GENOT models is not affected by the cost.

GENOT-FGW improves modality translation of single cells. As the predictions yielded by GW-based models are not satisfactory, we introduce a novel method for translating between ATAC and RNA measurements by extending the model proposed by Demetri et al. (2022) to the fused setting. Therefore, we infer approximate gene expression from the ATAC measurements using gene activity (Stuart et al., 2021). We construct a joint space of the two modalities using a conditional VAE (Lopez et al., 2018a).
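For reference, here is a small sketch of the FOSCTTM score as we understand it (the exact definition used is in Appendix C.2 of the paper); `x_mapped` holds the predicted target-space profiles and `y_true` the measured ones, with row i of both corresponding to the same cell.

```python
import numpy as np
from scipy.spatial.distance import cdist

def foscttm(x_mapped, y_true):
    """Fraction Of Samples Closer Than the True Match (lower is better)."""
    d = cdist(x_mapped, y_true)           # (n, n) pairwise distances
    true_d = np.diag(d)                    # distance of each cell to its true match
    # for each cell, fraction of target candidates closer than the true match
    closer = (d < true_d[:, None]).mean(axis=1)
    return closer.mean()
```

A perfect matching yields a score of 0, while random matching yields roughly 0.5.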
Fig. 20 shows that the additional fused term helps to obtain a significantly better alignment compared to GENOT-GW, with the best GENOT-FGW CM model (weight parameter $\alpha = 0.7$) attaining a FOSCTTM score below 0.05. It is important to note that incorporating the GW terms is necessary for attaining good results, as discussed in Appendix E.3. Fig. 5 visualizes the push-forward of the learnt coupling. The intertwinement of samples of the target and the predicted target in the left panel visualizes the distribution fitting property, while the separation into cell types on the right confirms the optimality of the learnt coupling. See Figures 23 and 24 for further visualizations. When aligning multiple modalities of single cells, we cannot assume to have the same proportion of cell types in both datasets, for example due to experimental biases caused by sequencing technologies. We simulate this setting by removing cells belonging to one of the cell types Proerythroblasts, Erythroblasts, or Normoblasts in the source distribution. Table 3 shows that U-GENOT-FGW preserves high accuracy while learning meaningful rescaling functions.

Conclusion. We introduce GENOT, a versatile neural OT framework to learn cost-efficient stochastic maps within the same space and/or across incomparable spaces. GENOT is flexible to the extent that the mass conservation constraint can be loosened, and it provides tools to sample targets from an input. GENOT can be used within a wide array of tasks in single-cell biology.

REFERENCES

David Alvarez-Melis and Tommi S. Jaakkola. Gromov-Wasserstein alignment of word embedding spaces. arXiv preprint arXiv:1809.00013, 2018.

Arip Asadulaev, Alexander Korotin, Vage Egiazarian, and Evgeny Burnaev. Neural optimal transport with general cost functionals, 2022. URL https://arxiv.org/abs/2205.15403.

Yogesh Balaji, Rama Chellappa, and Soheil Feizi. Robust optimal transport with applications in generative modeling and domain adaptation. Advances in Neural Information Processing Systems, 33:12934–12944, 2020.

Aimée Bastidas-Ponce, Sophie Tritschler, Leander Dony, Katharina Scheibner, Marta Tarquis-Medina, Ciro Salinno, Silvia Schirge, Ingo Burtscher, Anika Böttcher, Fabian J. Theis, et al. Comprehensive single cell mRNA profiling reveals a detailed roadmap for pancreatic endocrinogenesis. Development, 146(12):dev173849, 2019.

Alev Baysoy, Zhiliang Bai, Rahul Satija, and Rong Fan. The technological landscape and applications of single-cell multi-omics. Nature Reviews Molecular Cell Biology, pp. 1–19, 2023.

Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375–393, 2000.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
WqsYs05Ri7
Is there a way to abstract away the specific form of the concept activation of Oikarinen et al. (2023) in this formulation? Stated differently, what are $\vec{m}(x)$ and $\vec{s}(x)$ for a standard CBM?
ESTIMATION OF CONCEPT EXPLANATIONS SHOULD BE UNCERTAINTY AWARE Anonymous authors Paper under double-blind review ABSTRACT Model explanations are very valuable for interpreting and debugging prediction models. We study a specific kind of global explanations called Concept Explanations, where the goal is to interpret a model using human-understandable concepts. Recent advances in multi-modal learning rekindled interest in concept explanations and led to several label-efficient proposals for estimation. However, existing estimation methods are unstable to the choice of concepts or dataset that is used for computing explanations. We observe that instability in explanations is due to high variance in point estimation of importance scores. We propose an uncertainty aware Bayesian estimation method, which readily improved reliability of the concept explanations. We demonstrate with theoretical analysis and empirical evaluation that explanations computed by our method are more reliable while also being label-efficient and faithful. 1 INTRODUCTION With an ever increasing complexity of ML models, there is an increasing need to explain them. Concept-based explanations are a form of interpretable methods that explain predictions using high-level and semantically meaningful concepts (Kim et al., 2018). They are aligned with how humans communicate their decisions (Yeh et al., 2022) and are shown (Kim et al., 2018; 2023b) to be more preferable over explanations using salient input features (Ribeiro et al., 2016; Selvaraju et al., 2017) or salient training examples (Koh & Liang, 2017). Concept explanations also show potential in scientific discovery (Yeh et al., 2022) and for encoding task-specific prior knowledge (Yüksekgönül et al., 2022). Concept explanations explain a pretrained prediction model by estimating the importance of concepts using two human-provided resources (1) a list of potentially relevant concepts for the task, (2) a dataset of examples usually referred to as the probe-dataset. Estimation usually proceeds in two steps (a) compute the log-likelihood of concept given an example called concept activations, and (b) aggregate their local activation scores into a globally relevant explanation. For example, the concept wing is considered important if the information about the concept is encoded in all examples of the plane class in the dataset. Owing to example-agnostic and classifier-level nature of concept explanations they are easy to interpret and have witnessed wide recognition in diverse applications (Yeh et al., 2022). Despite their easy interpretation, concept explanations are known to be unreliable and data expensive. Ramaswamy et al. (2022a) showed that existing estimation methods are sensitive to the choice of concept set and dataset raising concerns over their interpretability. Another major limitation of concept-based explanation is the need for datasets with concept annotations in order to specify the concepts. Increasingly popular multi-modal models such as CLIP (Radford et al., 2021) present an exciting alternate direction to specify relevant concepts, especially for common image applications through their text description. Recent work has explored using multi-modal models for training concept-bottleneck models (Oikarinen et al., 2023; Yüksekgönül et al., 2022; Moayeri et al., 2023), but they are not yet evaluated for generating post-hoc concept explanations. Our objective is to generate reliable concept explanations without requiring datasets with concept annotations. 
We begin by observing that existing estimation methods do not model noise in the estimation pipeline leading to high variance and unreliable explanations. We identify at least two causes of uncertainty (Section 4.1 presents more concrete scenarios) leading to unreliable explanations (1) When a concept is missing from the probe-dataset, we cannot estimate its importance with confidence. Reporting uncertainty over estimated importance of a concept can thus help the user make a more informed interpretation. (2) When a concept is hard or irrelevant to the task their corresponding activations predicted from the representation layer of the model-to-be-explained are expected to be noisy. For example, it is harder to recognise the concept *whiskers* when compared with the concept *wings*. The noise or uncertainty in concept activations either due to their absence, hardness, or relevance if not modelled cascades into noise in explanations. Appreciating the need to model uncertainty, we present an estimator called Uncertainty-Aware Concept Explanations (U-ACE), which we show is instrumental in improving reliability of explanations. **Contributions.** ● We motivate the need for modeling uncertainty for faithful estimation of concept explanations. ● We propose a Bayesian estimation method called U-ACE that is both label-free and models uncertainty in the estimation of concept explanations. ● We demonstrate the merits of our proposed method U-ACE through theoretical analysis and empirical evidence on two controlled datasets and two real-world datasets. ## 2 BACKGROUND AND MOTIVATION We denote the model-to-be explained as $f : \mathbb{R}^D \rightarrow \mathbb{R}^L$ that maps D-dimensional inputs to L labels. Further, we use $f^{[l]}(x)$ to denote $l^{th}$ layer representation space and $f(x)[y]$ for $y \in [1, L]$ as the logit for the label $y$. Given a probe-dataset of examples $\mathcal{D} = \{x^{(i)}\}_{i=1}^N$ and a list of concepts $\mathcal{C} = \{c_1, c_2, \ldots, c_K\}$, our objective is to explain the pretrained model $f$ using the specified concepts. Traditionally, the concepts are demonstrated using potentially small and independent datasets with concept annotations $\{\mathcal{D}_c^k : k \in [1, K]\}$ where $\mathcal{D}_c^k$ is a dataset with positive and negative examples of the $k^{th}$ concept. Concept-Based Explanations (CBE) estimate explanations in two steps. In the first step, they learn what are known as concept activation vectors that predict the concept from $l^{th}$ layer representation of an example. More formally, they learn the concept activation vector $v_k$ for $k^{th}$ concept by optimizing $v_k = \arg\min_v \mathbb{E}_{(x,y) \sim \mathcal{D}_c^k}[\ell(v^T f^{[l]}(x), y)]$ where $\ell$ is the usual cross-entropy loss. The inner product of representation with the concept activation vector $v_k^T f^{[l]}(x)$ is usually referred to as concept activations. Various approaches exist to aggregate example-specific concept activations into global example-agnostic explanations for the second step. Kim et al. (2018) computes sensitivity of logits to interventions on concept activations to compute what is known as CAV score per example per concept and report the fraction of examples in the probe-dataset with a positive CAV score as the global importance of the concept known as TCAV score. Zhou et al. (2018) proposed to decompose the classification layer weights as $\sum_k \alpha_k v_k$ and report the coefficients $\alpha_k$ as the importance score of the $k^{th}$ concept. 
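To make this two-step pipeline concrete, here is a minimal sketch (our own illustration, not code from any of the cited works) of step (a), learning a concept activation vector with logistic regression, and a TCAV-style aggregation for step (b). The helpers `f_layer` (returning $f^{[l]}(x)$) and `grad_logit_wrt_layer` (returning the gradient of the logit $f(x)[y]$ with respect to $f^{[l]}(x)$) are hypothetical hooks into the model-to-be-explained.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(f_layer, concept_imgs, concept_labels):
    """Step (a): concept activation vector v_k for one concept, learned by
    predicting the concept from the layer-l representation f^[l](x)."""
    reps = np.stack([f_layer(x) for x in concept_imgs])
    clf = LogisticRegression(max_iter=1000).fit(reps, concept_labels)
    return clf.coef_[0]                      # v_k

def tcav_score(grad_logit_wrt_layer, probe_imgs, v_k, label):
    """Step (b): fraction of probe examples whose logit for `label` increases
    along v_k, i.e. the TCAV aggregation of per-example CAV scores."""
    sens = [grad_logit_wrt_layer(x, label) @ v_k for x in probe_imgs]
    return float(np.mean(np.array(sens) > 0))
```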
We refer the reader to Yeh et al. (2022) for an in-depth survey. **Data-efficient concept explanations.** A major limitation of traditional CBEs is their need for datasets with concept annotations $\{\mathcal{D}_c^1, \mathcal{D}_c^2, \ldots\}$. In practical applications, we may wish to find important concepts among thousands of potentially relevant concepts, which is not possible without expensive data collection. Recent proposals (Yuksekgonul et al., 2022; Oikarinen et al., 2023; Moayeri et al., 2023) suggested using pretrained multi-modal models like CLIP to evade the data annotation cost for a related problem called Concept Bottleneck Models (CBM) (Koh et al., 2020). CBMs aim to train inherently interpretable model with a concept bottleneck. Although CBMs cannot generate explanations for a model-to-be-explained, a subset of methods propose to train what are known as Posthoc-CBMs using the representation layer of a pretrained task model for data efficiency. Given that Posthoc-CBMs base on the representation of a pretrained task model, we may use them to generate concept explanations. We describe briefly two such CBM proposals below. Oikarinen et al. (2023) (O-CBM) estimates the concept activation vectors by learning to linearly project from the representation space of CLIP where the concept is encoded using its text description to the representation space of the model-to-be-explained $f$. It then learns a linear classification model on concept activations and returns the weight matrix as the concept explanation. Based on the proposal of Yuksekgonul et al. (2022), we can also generate explanations by training a linear model to match the predictions of model-to-be-explained directly using the concept activations of CLIP, which we denote by (Y-CBM). Unreliable Explanations, a limitation. Apart from data inefficiency, concept explanation methods are known to be unreliable. We observed critical reliability concerns with existing CBEs in the same spirit as the challenges raised in [Ramaswamy et al., 2022a]. As we demonstrate in Section 4.1, concept explanations for the same model-to-be-explained vary with the choice of the probe-dataset or the concept set bringing into question the reliability of explanations. 3 UNCERTAINTY-AWARE CONCEPT EXPLANATIONS As summarized in the previous section, CBEs rely on concept activations for generating explanations. It is not hard to see that the activation score of a concept cannot be predicted confidently if the concept is hard/ambiguous or if it is not encoded by the model-to-be-explained. The noise in concept activations if not modeled cascades into the next step leading to poor explanations. Moreover, importance of a concept cannot be confidently estimated if it is missing from the probe-dataset, which must be informed to the user through confidence interval on the concept’s estimated importance score. Motivated by the role of uncertainty for trustworthy explanations, we design our estimator. Our approach has the following steps. (1) Estimate concept activations along with their error interval, (2) Aggregate concept activations and their confidence intervals into a global concept explanation. We describe the estimation of concept activations and their error given an instance \( x \) denoted as \( \tilde{m}(x), \tilde{s}(x) \) respectively in Section 3.1. By definition, the true concept activation for a concept \( k \) and instance \( x \) is in the range of \( \tilde{m}(x) \pm \tilde{s}(x) \) with a high probability. 
We describe the estimation of concept explanations in what follows using $\tilde{m}(x), \tilde{s}(x)$, independently of how they are computed. We compute explanations by fitting a linear regression model on the concept activations, in the same spirit as many CBM methods, because it is easier to incorporate the input noise in a regression model. Our objective is to learn linear model weights $W_c$ of size $L \times K$ (recall that $L, K$ are the number of labels and concepts, respectively) that map the concept activations to their logit scores, i.e. $f(x) \approx W_c \tilde{m}(x)$. Since the concept activations contain noise, we require that $W_c$ is such that the predictions do not change under noise, that is, $W_c [\tilde{m}(x) + \tilde{s}(x)] \approx W_c \tilde{m}(x) \implies W_c \tilde{s}(x) \approx 0$. In other words, the inner product of each row ($\vec{w}$) of $W_c$ with $\tilde{s}(x)$ must be negligible. For the sake of exposition, we analyse the solution for the $y^{th}$ row $\vec{w}$ of $W_c$ ($y \in [1, L]$), which can be easily generalized to the other rows. We cast the bounded error constraint, i.e. $|\vec{w}^T \tilde{s}(x)| \leq \delta$ for some small positive $\delta$ and for all instances $x$ in the probe-dataset, into a distributional prior over the weights, which can then be easily accommodated in the Bayesian estimation of the posterior on the weights. First, the per-instance constraint implies a constraint involving the average scale vector $\epsilon$:
$$|\vec{w}^T \tilde{s}(x)| \leq \delta \quad \forall x \in D \implies |\vec{w}^T \epsilon| \leq \frac{\sum_{x \in D} |\vec{w}^T \tilde{s}(x)|}{N} \leq \delta, \quad \text{where } \epsilon \triangleq \frac{\sum_{x \in D} \tilde{s}(x)}{N}.$$
Since $|\vec{w}^T \epsilon| \leq \delta$ for some small $\delta > 0$ with high probability, we have $\vec{w}^T \epsilon \epsilon^T \vec{w} \approx \vec{w}^T \text{diag}(\epsilon \epsilon^T)\, \vec{w} \leq \delta^2$. Equivalently, $-\frac{1}{2} (\vec{w} - \vec{0})^T S^{-1} (\vec{w} - \vec{0})$ with $S^{-1} = \text{diag}(\epsilon \epsilon^T)$ is large when $\vec{w}$ satisfies the constraint, i.e. the density $\mathcal{N}(\vec{w}; \vec{0}, \lambda S)$ is high for an appropriate $\lambda > 0$. We therefore adopt the prior $\vec{w} \sim \mathcal{N}(\vec{0}, \lambda S)$.

We observe, therefore, that weight vectors drawn from $\mathcal{N}(\vec{0}, \lambda\, \text{diag}(\epsilon \epsilon^T)^{-1})$ satisfy the invariance-to-input-noise constraint with high probability. We now estimate the posterior on the weights after having observed the data, with the prior on the weights set to $\mathcal{N}(0, \lambda\, \text{diag}(\epsilon \epsilon^T)^{-1})$. The posterior over the weights has the following closed form (Salakhutdinov, 2011), where $C_X = [\vec{m}(x_1), \vec{m}(x_2), \ldots, \vec{m}(x_N)]$ is a $K \times N$ matrix and $Y = [f(x_1)[y], f(x_2)[y], \ldots, f(x_N)[y]]^T$ is an $N \times 1$ vector (derivation in Appendix A.1):
$$Pr(\vec{w} \mid C_X, Y) = \mathcal{N}(\vec{w}; \mu, \Sigma), \quad \text{where } \mu = \beta \Sigma C_X Y, \quad \Sigma^{-1} = \beta C_X C_X^T + \lambda^{-1} \text{diag}(\epsilon \epsilon^T). \qquad (1)$$
Here $\beta$ is the inverse variance of the noise in the observations $Y$. We optimise both $\beta$ and $\lambda$ using MLE on $D$ (more details in Appendix B).
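A minimal sketch of this posterior computation (our own illustration of Equation 1, not the authors' code; `m_act` holds the activations $\vec{m}(x_i)$ as rows, `s_act` the scales $\vec{s}(x_i)$, and $\beta, \lambda$ are taken as given rather than tuned by MLE):

```python
import numpy as np

def uace_posterior(m_act, s_act, logits_y, beta, lam):
    """Posterior N(mu, Sigma) over one row of W_c (Equation 1).

    m_act:    (N, K) concept activation means for the probe-dataset
    s_act:    (N, K) concept activation scales (uncertainty)
    logits_y: (N,)   logits f(x_i)[y] of the model-to-be-explained
    """
    C_X = m_act.T                                  # K x N design matrix
    eps = s_act.mean(axis=0)                       # average per-concept noise
    prior_prec = np.diag(eps ** 2) / lam           # lambda^{-1} diag(eps eps^T)
    Sigma_inv = beta * C_X @ C_X.T + prior_prec
    Sigma = np.linalg.inv(Sigma_inv)
    mu = beta * Sigma @ C_X @ logits_y             # posterior mean importance
    return mu, Sigma
```

Concepts with large average noise receive a strong shrinkage prior, so their posterior importance is pulled towards zero unless the evidence is strong.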
Since there is no noise on the observations $Y$, we could in principle directly set the inverse of $\beta$ to approximately 0. Instead of setting $\beta$ to an arbitrarily large value, however, we observed better explanations when we allowed the tuning algorithm to find values of $\beta, \lambda$ that balance the evidence and the noise.

**Sparsifying weights for interpretability.** Because a dense weight matrix can be hard to interpret, we induce sparsity in $W_c$ by setting all the values below a threshold to zero. The threshold is picked such that the accuracy on the train split does not fall by more than $\kappa$, which is a positive hyperparameter. The estimator shown in Equation 1 and the details on how we estimate the noise in concept activations, presented in the next section, complete the description of our estimator. We call our estimator Uncertainty-Aware Concept Explanations (U-ACE) because it computes and models the uncertainty in concept activations. Algorithm 1 summarizes our proposed system.

### 3.1 Estimation of Concept Activations and Their Noise

In this section, we discuss how we estimate $\vec{m}(x), \vec{s}(x)$ using a pretrained multi-modal model. Recall that image-text multi-modal (MM) systems such as CLIP (Radford et al., 2021) can embed both images and text in a shared representation space, which enables one to estimate the similarity of an image to any phrase. This presents an interesting solution approach: specifying a concept using its text description ($T_k$ for the $k^{th}$ concept) without needing the concept datasets $\mathcal{D}_c^k$. We denote by $g(\bullet)$ the image embedding function of the MM model and by $g_{\text{text}}(\bullet)$ its text embedding function. Our objective is to estimate $\vec{m}(x), \vec{s}(x)$ such that the true concept activation value is in the range $\vec{m}(x) \pm \vec{s}(x)$. Two major sources of uncertainty in concept activations are (1) epistemic uncertainty arising from a lack of information about the concept in the representation layer of the model-to-be-explained, and (2) data uncertainty arising from ambiguity (because the concept is not clearly visible; see Appendix G.1 for some examples). We wish to estimate $\vec{s}(x)$ in a way that is aware of both forms of uncertainty.

We can obtain a point estimate for the activation vector of the $k^{th}$ concept, $v_k$, such that $f(x)^T v_k \approx g(x)^T w_k$ (where $w_k = g_{\text{text}}(T_k)$) for all $x$ in the probe-dataset $D$ through simple optimization (Oikarinen et al., 2023; Moayeri et al., 2023). We may then simply repeat the estimation procedure multiple times to sample from the distribution of activation vectors and their corresponding concept activations. However, as shown empirically in Appendix G.1, $\vec{s}(x)$ estimated from random sampling is a poor measure of uncertainty. We instead derive a closed form for $\vec{m}(x), \vec{s}(x)$ based on the following intuition. The concept activations estimated using $\text{cos-sim}(f(x), v_k)$ must intuitively be in the ballpark of $\cos(\theta_k) = \text{cos-sim}(g(x), w_k)$, where cos-sim denotes the cosine similarity (Wikipedia, 2023a); we switch from dot products to cos-sim to avoid differences due to the magnitude of the vectors. However, if the concept $k$ is not encoded in $f(x)$ or if it is ambiguous, the concept activations are expected to deviate by an angle $\alpha_k$, which is an error measure specific to the concept. Therefore, we expect the concept activations to be in the range of $\cos(\theta_k \pm \alpha_k)$.
The concept-specific value $\alpha_k$ must account for uncertainty due to lack of knowledge (e.g. an irrelevant concept) and due to ambiguity. In what follows, we present a specific measure for $\alpha_k$ and the closed-form solution for $\vec{m}(x), \vec{s}(x)$. Borrowing from Oikarinen et al. (2023), we define $\cos(\alpha_k)$ as
\[ \cos(\alpha_k) := \max_{v} \ \text{cos-sim}\big(e(v, f, \mathcal{D}),\ e(w_k, g, \mathcal{D})\big), \]
where $e(w_k, g, D) \triangleq [w_k^T g(x_1), \ldots, w_k^T g(x_N)]^T$ and $e(v, f, D) \triangleq [v^T f^{[-1]}(x_1), \ldots, v^T f^{[-1]}(x_N)]^T$. We may just as well adopt any other measure for $\alpha_k$.

**Proposition 1.** For a concept $k$ and a measure for $\alpha_k$, the following result holds when concept activations in $f$ for an instance $x$ are computed as $\text{cos-sim}(f(x), v_k)$ instead of $v_k^T f(x)$:
\[ \vec{m}(x)_k = \cos(\theta_k)\cos(\alpha_k), \qquad \vec{s}(x)_k = \sin(\theta_k)\sin(\alpha_k), \]
where $\cos(\theta_k) = \text{cos-sim}(g_{\text{text}}(T_k), g(x))$ and $\vec{m}(x)_k, \vec{s}(x)_k$ denote the $k^{th}$ element of the corresponding vector.

The proof can be found in Appendix C. The mean and scale values above have a clean interpretation. If the model-to-be-explained $f$ uses the $k^{th}$ concept for label prediction, the information about the concept is encoded in $f$ and we get a good fit, i.e., $\cos(\alpha_k) \approx 1$, and a small error on the concept activations. On the other hand, the error bounds are large and the concept activations are suppressed when the fit is poor, i.e., $\cos(\alpha_k) \approx 0$. In Appendix G.1, we contrast different methods for the estimation of $\vec{s}(x)$ and observe from the empirical evaluation that U-ACE models both model and data uncertainty well.

3.2 THEORETICAL MOTIVATION

The motivation of this section is to demonstrate the unreliability of concept explanations estimated using standard methods that do not model uncertainty during estimation. We particularly focus on unreliability due to a misspecified concept set for ease of analysis. In our study, we compare explanations generated using a standard linear estimator and U-ACE. Recall that posthoc-CBMs (O-CBM, Y-CBM), which are our primary focus for comparison, both estimate explanations by fitting a linear model on concept activations. We present two scenarios with noisy concept activations. In the first scenario (over-complete concept set), we analyze the estimation when the concept set contains many irrelevant concepts. We show that the likelihood of marking an irrelevant concept as more important than a relevant concept increases rapidly with the number of concepts when the explanations are estimated using a standard linear estimator that is unaware of the uncertainty. We also show that U-ACE does not suffer from the same problem. In the second scenario (under-complete concept set), we analyze the explanations when the concept set only includes irrelevant concepts, which should ideally both be assigned a zero score. We again show that the standard linear model attributes a significantly non-zero score while U-ACE mitigates the issue. In Section 4.1, we confirm our theoretical findings with an empirical evaluation.

Unreliable explanations due to over-complete concept set. We analyze a simple setting where the output $y$ is linearly predicted from the input $x$ as $y = w^T x$. We wish to estimate the importance of some $K$ concepts by fitting a linear estimator on concept activations.
Where concept activations are computed as \( w_k^T x \) using concept activation vectors \( w_k \) that are distributed as \( w_k \sim \mathcal{N}(u_k, \sigma_k^2 I), k \in [1, K] \). **Proposition 2.** The concept importance estimated by U-ACE when the input dimension is sufficiently large and for some \( \lambda > 0 \) is approximately given by \( v_k = \frac{u_k^T w}{u_k^T u_k + \lambda \sigma_k^2} \). On the other hand, the importance scores estimated using Ordinary Least Squares (OLS) estimator under the same conditions is distributed as \( v_k \sim \mathcal{N}\left( \frac{u_k^T w}{u_k^T u_k}, \sigma_k^2 \frac{\|w\|^2}{\|u_k\|^2} \right) \). Proof of the result can be found in Appendix D. Based on the result, we can deduce the following result for a specific case of \( u_k \)'s and \( \sigma_k \)'s. **Corollary 1.** For the data setup of Proposition 2, the following results holds when \( u_1 = w, \sigma_1 \approx 0 \) and \( u_k^T w = 0, \forall k \in [2, K] \). Then the probability that the standard estimator returns the first concept as the most salient decreases exponentially with the number of concepts. On the other hand, the importance score assigned by U-ACE is 1 for the only relevant first concept and 0 otherwise. Derivation of the result can be found in Appendix A.2. We observe therefore that the probability of a random concept being estimated as more important than the relevant concept quickly converges to 1 with the number of random concepts \( K-1 \) when the distribution or uncertainty is not modeled. Sections 4.1, 5 demonstrate this phenomena in practice. Unreliable explanations due to under-complete concept set. We now analyze explanations when the concept set only includes two irrelevant concepts. Consider normally distributed inputs \( x \sim \mathcal{N}(0, I) \), and define two orthogonal unit vectors \( u, v \). The concept activations: \( c_1(i), c_2(i) \) and label \( y(i) \) for the \( i^{th} \) instance \( x(i) \) are as defined below. \[ y(i) = u^T x(i), \quad c_1(i) = (\beta_1 u + (1 - \beta_1) v)^T x(i), \quad c_2(i) = (\beta_2 u + (1 - \beta_2) v)^T x(i) \] If \( \beta_1, \beta_2 \) are very small, then both the concepts are expected to be unimportant for label prediction. However, we can see with simple working (Appendix E) that the importance scores computed by a standard estimator are $\frac{1-\beta_2}{\beta_1-\beta_2}$, $\frac{1-\beta_1}{\beta_1-\beta_2}$, which are large because $\beta_1 \approx 0$, $\beta_2 \approx 0$. $\beta_1 - \beta_2 \approx 0$. We will now show that U-ACE estimates near-zero importance scores as expected. **Proposition 3.** The importance score, denoted $\eta_1, \eta_2$, estimated by U-ACE are bounded from above by $\frac{1}{N^\lambda}$, where $\lambda > 0$ is a regularizing hyperparameter and $N$ the number of examples. Proof can be found in Appendix E. It follows from the result that the importance scores computed by U-ACE are near-zero for sufficiently large value of $\lambda$ or $N$. ### 4 EXPERIMENTS We evaluate U-ACE on two synthetic and two real-world datasets. We demonstrate how reliability of explanations is improved by U-ACE using a controlled study in Section 4.1. We make a quantitative assessment with known ground-truth on a controlled dataset in Section 5. Finally, we evaluate on two challenging real-world datasets with more than 700 concepts in Section 6. **Baselines.** *Simple:* $W_c$ is estimated using lasso regression of ground-truth concept annotations to estimate logit values of $f$. 
Simple was also adopted in the past (Ramaswamy et al., 2022b,a) for estimating the completeness of concepts. Other baselines are introduced in Section 2: TCAV (Kim et al., 2018), O-CBM (Oikarinen et al., 2023), and Y-CBM, based on Yuksekgonul et al. (2022).

**Standardized comparison between importance scores.** The interpretation of the importance score varies between estimation methods. For instance, the importance score in TCAV is the fraction of examples that meet a certain criterion, while for other methods the importance scores are the weights of a linear model that predicts logits. Further, Simple operates on binary concept annotations, whereas O-CBM, Y-CBM, and U-ACE operate on soft scores estimated using concept activation vectors. For this reason, we cannot directly compare importance scores or their normalized variants. We instead use the negative scores to obtain a ranked list of concepts and assign to each concept an importance score given by its rank in the list normalized by the number of concepts. Our sorting algorithm ranks any two concepts with the same score by the alphabetical order of their text descriptions. In all our comparisons we use the rank score if not mentioned otherwise.

**Other experiment details.** For all our experiments, we used a pretrained CLIP model based on a Vision Transformer with patch size 32 ("ViT-B/32"), which is publicly available for download at https://github.com/openai/CLIP. We use $l = -1$, i.e. the last layer just before the computation of logits, for all the explanation methods. U-ACE returns the mean and variance of the importance scores as shown in Algorithm 1; we use the mean divided by the standard deviation as the importance score estimated by U-ACE everywhere for comparison with other methods.

#### 4.1 SIMULATED STUDY

In this section, we consider explaining a two-layer CNN model trained to classify solid color images with pixel noise, as shown in Figure 2. The colors red and green on the left are defined as label 0, and the colors blue and white on the right are defined as label 1. The model-to-be-explained is trained on a dataset with an equal proportion of all colors, so we expect all constituent colors of a label to be equally important for that label. We specify a concept set with the four colors encoded by their literal names red, green, blue, white. U-ACE (along with the others) attributes positive importance to red, green and negative or zero importance to blue, white when explaining label 0, using a concept set with only the four task-relevant concepts and a probe-dataset from the same distribution as the training dataset. However, the quality of explanations quickly degrades when the probe-dataset is shifted or the concept set is misspecified.

**Unreliability due to dataset shift.** We varied the probe-dataset to include varying populations of the different colors while keeping the concept set and the model-to-be-explained fixed. We observed that the importance of a concept estimated with standard CBEs varied with the choice of probe-dataset for the same underlying model-to-be-explained, as shown in the left and middle plots of Figure 3. Most methods attributed incorrect importance to the red concept when it is missing (left extreme of the left plot), and similarly for the green concept (left extreme of the middle plot).

Figure 3: Left and middle plots show the importance of the red and green concepts, while the rightmost plot shows their importance score difference. U-ACE estimated large uncertainty in the importance score when the red or green concept is missing from the dataset, as seen at the left of the left and middle plots. Also, the difference in importance at either extreme in the right plot is not statistically significant.
The explanations would have led the user to believe that green is more important than red, or that red is more important than green, depending on the probe-dataset used, as shown in the rightmost plot. Because U-ACE also informs the user of the uncertainty in the estimated importance, we see that the difference in importance scores between the two colors at either extreme is not statistically significant, as shown in the rightmost plot.

Over-complete concept set. We now evaluate the quality of explanations when the concept set is misspecified. More specifically, the concept set is made over-complete by gradually expanding it to include common fruit names (Appendix F contains the full list), which are clearly irrelevant to the task. We obtain the explanations using an in-distribution probe-dataset that contains all colors in equal proportion. Figure 4 shows the score of the most salient fruit concept as the number of fruit (nuisance) concepts increases (X-axis). We observe that U-ACE is far more robust to the presence of nuisance concepts. Robustness to irrelevant concepts is important because it allows the user to begin with a superfluous set of concepts and discover their relevance to the model-to-be-explained, instead of having to guess the relevant concepts, which is ironically the very purpose of using concept explanations. Appendix H presents and evaluates an under-complete concept setting.

5 ASSESSMENT WITH KNOWN GROUND-TRUTH

Figure 5: On the left is the STL dataset with a spurious tag. In the middle is the importance of a tag concept for three different models-to-be-explained; the X-axis shows the probability of the tag in the training dataset of the model-to-be-explained. On the right is the average rank of true concepts in the presence of irrelevant concepts (lower is better).

Our objective in this section is to establish that U-ACE generates faithful and reliable concept explanations. Subscribing to the common evaluation practice (Kim et al., 2018), we generate explanations for a model that is trained on a dataset with a controlled correlation with a spurious pattern. We make a dataset using two labels from the STL-10 dataset (Coates et al., 2011), car and plane, and paste a tag $U$ or $Z$ in the top-left corner as shown in the left panel of Figure 5. The probability that an example of car receives the $Z$ tag is $p$, and $1-p$ for the $U$ tag. Similarly, for examples of plane, the probability of $U$ is $p$ and of $Z$ is $1-p$. We generate three training datasets with $p=0$, $p=0.5$ and $p=1$, and train three classification models using a 2-layer convolutional network. The three models are therefore expected to have a varying and known correlation with the tag, which we hope to recover from their concept explanations. We generate concept explanations for the three models-to-be-explained using a concept set that includes seven car-related concepts and three plane-related concepts (Appendix F), along with the two tags $U$, $Z$. We obtain the importance score of the concept $U$ for the car class using a probe-dataset that is held out from the corresponding training dataset (i.e. the probe-dataset has the same input distribution as the training dataset). The results are shown in the middle plot of Figure 5.
Since the co-occurrence probability of $U$ with car class goes from 1, 0.5 to 0 for $p=0$, 0.5, 1, we expect the importance score of $U$ should change from positive to negative as we move right. We note that U-ACE, along with others, show the expected decreasing importance of the tag concept. The result corroborates that U-ACE estimates a faithful explanation of model-to-be-explained while also being more reliable as elaborated below. Unreliability due to misspecified concept set. In the same spirit as the previous section, we repeat the over-complete experiment of Section 4.1 and generated explanations as animal (irrelevant) concepts are added (Appendix F contains the full list). Right panel of Figure 5 shows the average rank of true concepts (lower the better). We note that U-ACE ranks true concepts highly even with 50 nuisance concepts. 6 REAL-WORLD EVALUATION We expect that our reliable estimator to also generate higher quality concept explanations in practice. To verify the same, we generated explanations for a scene classification model with ResNet-18 architecture pretrained on Places365 (Zhou et al., 2017a), which is publicly available. Following the experimental setting of Ramaswamy et al. (2022a), we generate explanations when the probe-dataset is set to PASCAL (Chen et al., 2014) or ADE20K (Zhou et al., 2017b), which are both part of the Broden dataset (Bau et al., 2017b). The dataset contains images with dense annotations with more than 1000 attributes. We ignored around 300 attributes describing the scene since model-to-be-explained is itself a scene classifier. For the remaining 730 attributes, we defined a concept per attribute using literal name of the attribute. We picked 50 scene labels (Appendix F contains the full list) that have support of at least 20 examples in both ADE20K and PASCAL datasets. We evaluate quality of explanations by their closeness to the explanations generated using the Simple baseline. Simple estimates explanation using true concept annotations and therefore its explanation must be the closest to the ground-truth. For the top-20 concepts identified by Simple, we compute the average absolute difference in importance scores estimated using any estimation method and Simple. Table 1 presents the deviation in explanations averaged over all the 50 scene labels. Figure 6 shows the most salient concepts for four randomly picked scene labels. We observe from the figure that top-10 concepts identified by U-ACE seem more relevant to the scene when compared with Y-CBM and O-CBM. We also evaluated the explanation quality using a standard measure for comparing ranked lists, which is presented in Appendix F, which further confirms the dominance of U-ACE. Dataset shift. Ramaswamy et al. (2022a) demonstrated with results the drastic shift in concept explanations for the same model-to-be-explained when using ADE20K or PASCAL as the probe-dataset. Explanations diverge partly because (a) population of concepts may vary between datasets thereby influencing their perceived importance when using standard methods, (b) variance in explanations. We have demonstrated that U-ACE estimated importance scores have low variance (shown in Section 3.2, 4.1) and attributes high uncertainty and thereby near-zero importance to concepts that are rare or missing from the probe-dataset (Section 4.1). For these reasons, we expect U-ACE to mitigate the data-shift problem. 
We confirm this by estimating the average difference between the importance scores estimated using ADE20K and PASCAL for the different estimation techniques (where the average is only over salient concepts with non-zero importance). The results are shown in Table 2 and are in line with our prediction.

| Dataset | TCAV | O-CBM | Y-CBM | U-ACE |
|---------|------|-------|-------|-------|
| ADE20K  | 0.13 | 0.19  | 0.16  | **0.09** |
| PASCAL  | 0.41 | 0.20  | 0.18  | **0.11** |

Table 1: Evaluation of explanation quality. Each cell shows the average absolute difference of importance scores for the top-20 concepts estimated using Simple.

| | Simple | TCAV | O-CBM | Y-CBM | U-ACE |
|---|--------|------|-------|-------|-------|
| ADE20K vs. PASCAL | 0.41 | 0.41 | 0.32 | 0.33 | **0.19** |

Table 2: Effect of data shift. Average absolute difference between concept importance scores estimated using the ADE20K and PASCAL datasets for the same model-to-be-explained, for different estimation methods.

7 RELATED WORK

Concept Bottleneck Models use a set of predefined human-interpretable concepts as an intermediate feature representation to make predictions (Koh et al., 2020; Bau et al., 2017a; Kim et al., 2018; Zhou et al., 2018). CBMs allow human test-time intervention, which has been shown to improve overall accuracy (Barker et al., 2023). Traditionally, they require labelled data with concept annotations, and their accuracy is typically worse than that of standard models without a concept bottleneck. To address the limitation of concept annotation, recent works have leveraged large pretrained multi-modal models like CLIP (Oikarinen et al., 2023; Yuksekgonul et al., 2022). There have also been efforts to enhance the reliability of CBMs by focusing on the information leakage problem (Havasi et al., 2022; Marconato et al., 2022), where the linear model weights estimated from concept activations utilize unintended information, affecting interpretability. Concept Embedding Models (CEM) (Espinosa Zarlenga et al., 2022) overcome the trade-off between accuracy and interpretability by learning high-dimensional concept embeddings. However, addressing the noise in concept prediction remains underexplored. Collins et al. (2023) have studied human uncertainty in concept-based models and have shown the importance of considering uncertainty over concepts for improving the reliability of the model. Kim et al. (2023a) proposed Probabilistic Concept Bottleneck Models (ProbCBM), which are closely related to our work. They too argue for the need to model uncertainty in concept prediction for reliable explanations. However, their method of noise estimation in concept activations requires retraining the model and cannot be applied directly when concept activations are estimated using CLIP. Moreover, they use simple MC sampling to account for noise in concept activations. Concept-based explanations use a separate probe dataset to first learn the concepts and then explain, through decomposition, either individual predictions or overall label features. Yeh et al. (2022) contains a brief summary of existing concept-based explanation methods. Our proposed method is very similar to concept-based explanations (CBE) (Kim et al., 2018; Bau et al., 2017a; Zhou et al., 2018; Ghorbani et al., 2019). Ramaswamy et al. (2022a) emphasized that the concepts learned are sensitive to the probe dataset used and therefore pose problems when transferring to applications that have a distribution shift from the probe dataset.
Moreover, they also highlight other drawbacks of existing CBE methods: concepts can sometimes be harder to learn than the label itself (meaning the explanations may not be causal), and the typical number of concepts used for explanations far exceeds what a typical human can parse easily. Achtibat et al. (2022) championed an explanation method that highlights both the important features (answering "where") and the concepts used for prediction, thereby combining the strengths of global and local explanation methods. Choi et al. (2023) have built upon the current developments in CBE methods to provide explanations for out-of-distribution detectors. Wu et al. (2023) introduced a causal concept-based explanation method (Causal Proxy Model) that provides explanations for NLP models using counterfactual texts. Moayeri et al. (2023) also used CLIP to interpret the representations of a different model trained on uni-modal data.

8 CONCLUSION

We studied concept explanation methods with a focus on data-efficient systems that exploit pretrained multi-modal models. We demonstrated with simple examples the reliability challenge of existing estimators of concept explanations, and motivated the need for modeling uncertainty in estimation and for informing the user of the uncertainty in importance scores. Accordingly, we proposed an uncertainty-aware and data-efficient estimator called U-ACE, which readily yielded several benefits. We demonstrated the merits of our estimator through theoretical analysis, controlled-study experiments, and two challenging real-world evaluations with around 700 concepts. To the best of our knowledge, previous evaluations did not consider concept explanations with as many concepts. Our results showed that concept explanations estimated by U-ACE are more reliable.

Limitations and Future Work
• The need for and advantage of modeling uncertainty also apply when learning concept activations using datasets with concept annotations. However, our experimental setup is only focused on using CLIP for specifying concepts.
• We did not model the uncertainty in CLIP's knowledge of a concept. Epistemic uncertainty due to CLIP, when modelled, may improve reliability further, which we leave for future work.
a745RnSFLT
Besides, since using the pre-trained LLaMA can improve the PAC-Bayes bound, is this some form of transferring the "generalization problem" of prompt engineering to the generalization problem of the pre-trained language model?
UNDERSTANDING PROMPT ENGINEERING MAY NOT REQUIRE RETHINKING GENERALIZATION Victor Akinwande¹, Yiding Jiang¹, Dylan Sam¹ & J. Zico Kolter¹,² ¹Carnegie Mellon University, ²Bosch Center for AI ABSTRACT Zero-shot learning in prompted vision-language models, the practice of crafting prompts to build classifiers without an explicit training process, has achieved impressive performance in many settings. This success presents a seemingly surprising observation: these methods suffer relatively little from overfitting, i.e., when a prompt is manually engineered to achieve low error on a given training set (thus rendering the method no longer actually zero-shot), the approach still performs well on held-out test data. In this paper, we show that we can explain such performance well via recourse to classical PAC-Bayes bounds. Specifically, we show that the discrete nature of prompts, combined with a PAC-Bayes prior given by a language model, results in generalization bounds that are remarkably tight by the standards of the literature: for instance, the generalization bound of an ImageNet classifier is often within a few percentage points of the true test error. We demonstrate empirically that this holds for existing handcrafted prompts and prompts generated through simple greedy search. Furthermore, the resulting bound is well-suited for model selection: the models with the best bound typically also have the best test performance. This work thus provides a possible justification for the widespread practice of “prompt engineering,” even if it seems that such methods could potentially overfit the training data. 1 INTRODUCTION Generalization bounds provide statistical guarantees on the average-case performance of a learning algorithm’s output. However, in the case of deep learning models, there is still debate about how useful such bounds can be: Zhang et al. (2021) highlighted that classical approaches for deriving generalization bounds are insufficient for explaining the generalization ability of deep learning, spurring a flurry of new approaches for deriving tighter generalization bounds for deep neural networks (Bartlett et al., 2017; Dziugaite & Roy, 2017; Neyshabur et al., 2017b). In the recent literature on generalization bounds for deep learning, a large focus has been on developing data-dependent bounds, or bounds that consider both the data distribution and the hypothesis space. Some of the best data-dependent bounds arise from the PAC-Bayes framework (McAllester, 1999) and are derived by bounding the KL divergence between a prior over the hypothesis space and the posterior yielded by the learning algorithm. However, although PAC-Bayes bounds led to the first non-vacuous generalization bounds for deep learning (Dziugaite & Roy, 2017), they are still too loose to be practically useful (Jiang et al., 2019) in most realistic settings. In fact, as Lotfi et al. (2022) have recently argued, many PAC-Bayes bounds with data-dependent priors, while non-vacuous, can be best described as validation bounds — i.e., the use of data-dependent priors effectively leverages held-out data in a manner similar to cross-validation, which undermines their ability to explain generalization. Notwithstanding the lack of a clear theoretical basis, modern machine learning models are moving towards increasingly large pretrained models (Kaplan et al., 2020; Dosovitskiy et al., 2020). 
One prevailing paradigm is to use pretrained foundation models such as CLIP (Radford et al., 2021) or ALIGN (Jia et al., 2021) as feature extractors and provide weak supervision for a downstream target task via prompts, which are text descriptions of the desired tasks that are often significantly easier to obtain compared to full model weights or even a generic linear classifier over the last layer. The versatility and performance of prompting pretrained models have led to the rise of prompt engineering, an emergent paradigm in machine learning where practitioners carefully design the task specification in text or even learn the prompts in a data-driven fashion (Lester et al., 2021). For example, to obtain a two-class image classifier, one would write two sentences that describe the classes (e.g., “This is Table 1: Comparison with existing state-of-the-art generalization bounds for test error on different datasets. We report both data-independent and data-dependent bounds (* indicates data-dependent prior and – indicates that the bounds are not available). Note that different works use different architectures and analytic tools so direct comparison can be more nuanced. Nonetheless, our bounds on prompt engineering are significantly tighter than the existing PAC-Bayes bounds in the literature, often within a few percent of the actual test error. | Dataset | Zhou et al. (2019) | Dziugaite et al. (2021) | Lotfi et al. (2022) | PAC-Bayes (prompt) | |------------|--------------------|-------------------------|---------------------|--------------------| | CIFAR-10 | – | 0.230* | 0.582 / 0.166* | 0.063 | | CIFAR-100 | – | – | 0.946 / 0.444* | 0.266 | | ImageNet | 0.965 | – | 0.930 / 0.409* | 0.319 | a dog” and “This is a cat”), and the two sentences are turned into text embeddings which can be used to classify image embeddings. Despite its empirical success, little is understood of how and why prompting these pretrained models work and, in particular, why the method seems to suffer little from overfitting: manually tuning or even greedily optimizing prompts on a given training set often performs nearly as well on the corresponding test set. In this paper, we demonstrate that rather simple analysis tools capture this behavior surprisingly well (under some assumptions). In particular, we show that classical PAC-Bayes bounds (McAllester, 1999), when applied to the discrete hypothesis class defined by prompts (and specifically with a prior given by a large language model), are often remarkably tight, even for large domains: for example, we achieve a generalization bound of 32% error for a full ImageNet classifier, which is within 6% of the actual test error. This represents a vast improvement over existing bounds for deep learning, where achieving any non-vacuous bound on domains like ImageNet typically requires a great deal of effort; see, for instance, Table 1 for a comparison with other approaches. Perhaps more interestingly, our bounds do not depend on the training data as the prior approaches do but instead depend on the pretraining data of pretrained model (e.g., CLIP) through the image encoder. To summarize, we find that, unlike conventional deep learning models, prompting pretrained models does not suffer from vacuous generalization bounds, and one can readily derive a strong theoretical guarantee for using prompts via well-studied techniques. 
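To make the prompting setup described above concrete, the following is a minimal sketch (not taken from the paper) of how a pair of class prompts becomes a zero-shot classifier with a CLIP-style model. It follows the openly available `clip` package's interface (`clip.load`, `clip.tokenize`, `encode_image`, `encode_text`); the prompt strings and image path are purely illustrative.

```python
import torch
import clip  # OpenAI's CLIP package
from PIL import Image

# Each class is specified only by a text prompt; no model weights are trained.
class_prompts = ["This is a photo of a dog", "This is a photo of a cat"]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
tokens = clip.tokenize(class_prompts).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)   # image embedding
    txt_emb = model.encode_text(tokens)   # one embedding per class prompt
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    # Predicted class = prompt whose embedding is most similar to the image embedding.
    pred = (img_emb @ txt_emb.T).argmax(dim=-1).item()

print(class_prompts[pred])
```

Tuning the prompt strings above to reduce error on a labeled training set is exactly the practice whose generalization behavior is analyzed in what follows.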
Overall, these findings suggest that, despite a large amount of automatic or manual tuning, prompt engineering is a principled approach for using these pretrained models that does not suffer the same lack of theoretical grounding as conventional deep learning models. On the other hand, it does introduce its own set of considerations, which we will discuss in the experiments section and conclusion. 2 RELATED WORKS Prompt Engineering. With the advent of large pretrained models, prompting developed as a distinct yet effective method to harness the abilities of these large models with limited labeled data (Brown et al., 2020; Le Scao & Rush, 2021; Liu et al., 2023). The flexibility of prompting has enabled a wide range of new capabilities unavailable to previous machine learning models, leading to a significant effort to document successful prompting methods (Bach et al., 2022) in both classification and text-to-image generation. One downside of prompting is that the performance varies greatly depending on how the prompt is phrased. To address this issue, several methods have been proposed to learn “optimal” prompts given labeled data, which empirically perform well and are parameter efficient (Lester et al., 2021; Li & Liang, 2021; Gao et al., 2021; Zhou et al., 2022a,b). A limitation of data-driven methods is their tendency to learn “soft” prompts or embedding vectors that do not correspond to specific tokens. Moreover, from a learning theoretic perspective, the continuous nature of soft prompts, combined with transformations by non-linear models, results in a complex hypothesis space, making it less amenable to theoretical analysis. In contrast, another line of work uses gradient-based methods to learn prompts that consist of discrete tokens that can be mapped to natural language (Wen et al., 2023). This work studies the theoretical guarantees of the latter approach, that is, why these discrete prompting methods seem to work without any overfitting, and our analysis extends to the methods proposed in Wen et al. (2023).¹ (¹Data-dependent priors primarily refer to the setting where a portion of training data is used to obtain a prior that is closer to the final posterior. We note that in our setting, the training of pretrained models uses data from different distributions and does not use any training data from the task of interest.) Prompt engineering has been extended to computer vision through CLIP (Contrastive Language-Image Pretraining) (Radford et al., 2021). CLIP combines an image and language encoder trained jointly to minimize a contrastive loss, enabling it to perform classification tasks based on natural language instructions. Examples include object recognition, image caption generation (Tewel et al., 2021), and zero-shot image classification using textual descriptions even for unseen labels. **Generalization bounds.** Generalization bounds are upper bounds on the test error of a model. Deriving such bounds for deep learning has been difficult, and most are usually vacuous (Zhang et al., 2021; Jiang et al., 2019; Dziugaite et al., 2020). Many well-studied tools in statistical learning theory are fundamentally limited when it comes to the analysis of deep neural networks (Nagarajan & Kolter, 2019b). The core component of a generalization bound is a complexity measure, a quantity that relates to some aspect of generalization.
A complexity measure may depend on the properties of the trained model, optimizer, and possibly training data, as long as it does not have access to a validation set. The most classic bounds, such as VC-dimension (Vapnik, 1971), are often related to some form of parameter counting, which is typically too pessimistic for deep neural networks. Norm-based bounds usually rely on the margin and some norms of the model weights (Langford & Caruana, 2001; Bartlett et al., 2017; Neyshabur et al., 2015; 2017b), but these bounds have been ineffective at studying the generalization of deep learning (Nagarajan & Kolter, 2019a). Another main class is the PAC-Bayes bounds (McAllester, 1999), which have been much more successful in deep learning due to the flexibility of the prior (Neyshabur et al., 2017a; Dziugaite & Roy, 2017; Zhou et al., 2019; Lotfi et al., 2022), although these bounds are still much looser than the actual generalization error. Our approach also belongs to the PAC-Bayes family, but we apply the PAC-Bayes bounds to the distribution of discrete tokens (with a language model as the prior) rather than to a distribution over the parameters of a neural network. This allows us to derive significantly tighter bounds compared to applying the PAC-Bayes bounds with less informative priors. ### 3 Preliminaries **Notations.** Let $\mathcal{X} \subseteq \mathbb{R}^d$ be a set of inputs and $\mathcal{Y} = [K]$ be a label set, and let $D$ be an unknown probability distribution on $\mathcal{X} \times \mathcal{Y}$. Let our data $(X_1, Y_1), \ldots, (X_n, Y_n)$ be drawn i.i.d. from $D$, and consider a predictor $f : \mathcal{X} \to \mathcal{Y}$ and a fixed set of predictors indexed by the parameter set $\Theta$. We use $f_\theta$ to denote the classifier indexed by $\theta$. We consider the 0–1 loss given by $\ell(y', y) = 1\{y \neq y'\}$. The generalization error (risk) of a predictor is defined as $R(\theta) = \mathbb{E}_{(X,Y) \sim D}[\ell(f_\theta(X), Y)]$, and the empirical risk $r(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(f_\theta(X_i), Y_i)$ satisfies $\mathbb{E}_S[r(\theta)] = R(\theta)$ for a sample $S = [(X_1, Y_1), \ldots, (X_n, Y_n)]$. An estimator is a function $\hat{\theta} : \bigcup_{n=1}^{\infty} (\mathcal{X} \times \mathcal{Y})^n \to \Theta$. **Vision-language models.** CLIP consists of two encoders, $\text{enc}_{\text{img}}$ and $\text{enc}_{\text{txt}}$. Given an image $X \in \mathcal{X}$, the image encoder $\text{enc}_{\text{img}} : \mathcal{X} \to \mathbb{R}^d$ maps $X$ to a $d$-dimensional real-valued embedding. Let $\mathcal{T}$ be the space of texts and $T \in \mathcal{T}$ a single piece of text; the text encoder $\text{enc}_{\text{txt}} : \mathcal{T} \to \mathbb{R}^d$ maps $T$ to a $d$-dimensional real-valued embedding. Given a batch of images $\{X_i\}_{i=1}^{B}$ and their corresponding texts $\{T_i\}_{i=1}^{B}$, the training objective maximizes the cosine similarity of the embeddings of each matching image–text pair and minimizes the cosine similarity of image and text pairs that do not correspond to each other. The primary task we consider in this work is image classification via pretrained vision-language models. The goal is to find a class prompt, $\theta^k \in \mathcal{T}$, for each class that achieves good accuracy.
For a $K$-class classification problem with $\theta = (\theta^1, \theta^2, \ldots, \theta^K) \in \Theta = \mathcal{T}^K$, the zero-shot classifier is $f_\theta(X) = \arg\max_{k \in [K]} \langle \text{enc}_{\text{txt}}(\theta^k), \text{enc}_{\text{img}}(X) \rangle$. **Generalization bounds.** Deriving generalization bounds is closely related to assigning hypotheses prior probabilities of being good (Shalev-Shwartz & Ben-David, 2014). One of the simplest approaches uses uniform convergence over the entire discrete hypothesis space (where $|\Theta|$ denotes the number of functions in the class) to derive the well-known generalization bound, **Theorem 3.1** (Shalev-Shwartz & Ben-David (2014)). For every $\delta > 0$, with probability $1 - \delta$ over the training set of size $n$, for any hypothesis $\theta \in \Theta$, the following holds $R(\theta) \leq r(\theta) + \sqrt{\frac{\log |\Theta| + \log \left(\frac{1}{\delta}\right)}{2n}}$. This result does not consider the implicit bias of the learning algorithm (Neyshabur et al., 2014), the training data \( S \), or the data-generating distribution \( D \). In contrast, the PAC-Bayes framework offers a flexible approach for leveraging this information by defining a hierarchy over hypotheses in the hypothesis class \( \Theta \) that takes the form of a prior distribution \( P \) over \( \Theta \). That is, we assign a probability \( P(\theta) \geq 0 \) for each \( \theta \in \Theta \) and refer to \( P(\theta) \) as the prior score of \( \theta \). The learning process defines a posterior probability over \( \Theta \), which we denote by \( Q \). In the context of supervised learning, we can think of \( Q \) as defining the following prediction rule: given an instance \( X \), we randomly pick a hypothesis \( \theta \) according to \( Q \) and predict \( f_\theta(X) \). Remarkably, it was shown that the expected generalization gap can be upper bounded by the KL-divergence between \( P \) and \( Q \): **Theorem 3.2** (McAllester (1999)). For every \( \delta > 0 \), prior \( P \) over \( \Theta \), with probability \( 1 - \delta \) over the training set of size \( n \), for any posterior \( Q \) over \( \Theta \), the following holds \[ E_{\theta \sim Q}[R(\theta)] \leq E_{\theta \sim Q}[r(\theta)] + \sqrt{\frac{D_{KL}(Q \| P) + \log(n/\delta)}{2n-1}}. \] ### 4 METHODOLOGY Designing a prompt is analogous to finding a set of weights in typical machine learning models, where the hypothesis space is the space of texts/tokens. The goal is to find class prompts that maximize training accuracy without finetuning the model’s parameters. This process, which is often referred to as **prompt engineering**, can be formulated as discrete optimization over the space of tokens, \( V \). #### 4.1 PROMPT SEARCH To study the generalization capabilities of discrete prompts, we consider a simple greedy search algorithm that mimics an overeager prompt engineer who exhaustively tries adjusting prompts with every possible word, although the analysis extends to other techniques that produce discrete prompts. To find class prompts of length \( L \), we will search for \( K \cdot L \) tokens over the space, \( V^{K \cdot L} \). Naively, this search is exponential in the length of the prompt so to circumvent this problem, the prompts are generated successively; that is, we increment the prompts by selecting the token that maximizes a **search criterion**, \( J \), on the training dataset from a set of **candidate tokens**, \( \hat{V} \subseteq V \). 
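In code, this successive construction can be sketched as follows. This is our own paper-agnostic rendering: the function names and the alternation over classes are assumptions, and the criterion $J$ and candidate set $\hat{V}$ are passed in as callables, with the concrete choices spelled out next.

```python
from typing import Callable, Dict, List

def sequential_prompt_search(
    num_classes: int,
    candidates: Callable[[List[str]], List[str]],                  # V_hat, possibly conditioned on the current prompt
    criterion: Callable[[str, int, Dict[int, List[str]]], float],  # J(v, k, theta)
    length: int,                                                   # L, tokens per class prompt
) -> Dict[int, List[str]]:
    """Grow each class prompt one token at a time, holding the other prompts fixed."""
    prompts: Dict[int, List[str]] = {k: [] for k in range(num_classes)}
    for _ in range(length):
        for k in range(num_classes):
            # Score every permissible next token for class k and keep the best one.
            best = max(candidates(prompts[k]), key=lambda v: criterion(v, k, prompts))
            prompts[k].append(best)
    return prompts
```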
With a slight abuse of notation, we will use \( \hat{V}(\theta) \) to denote a candidate set that can be conditioned on the current \( \theta \). The search criterion is the objective being optimized (e.g., the empirical loss), and candidate tokens are permissible tokens that can be used to extend the current class prompts. At every step of the search, we keep the class prompts fixed for all but one class. The prompt for class \( k \) is a sequence of \( l \) tokens \( \theta^k_j \in V \), written \( \theta^k_{\leq l} = (\theta^k_1, \theta^k_2, \ldots, \theta^k_l) \) where \( l < L \), and we use \( \theta^{-k} \) to denote the class prompts for all classes that are not the \( k \)-th class. The next token for \( \theta^k \) is obtained via: \[ \theta^k_{l+1} = \arg\max_{v \in \hat{V}(\theta)} J(v, \theta^k_{\leq l}, \theta^{-k}). \] The pseudocode for this sequential search is outlined in detail in Algorithm 1. **Empirical risk minimization.** Using \( \oplus \) to denote concatenation, we consider a simple form of search, **greedy search**, where we use: \[ \hat{V}_{\text{greedy}}(\theta) = V, \quad J_{\text{greedy}}(v, \theta^k_{\leq l}, \theta^{-k}) = -r((\ldots, \theta^{k-1}, \theta^k_{\leq l} \oplus v, \theta^{k+1}, \ldots)), \] where \( r \) is the empirical risk in terms of the 0–1 loss (see Section 3). In other words, we always search over all possible tokens (line 6) to maximize the training accuracy. This greedy search is an **empirical risk minimization** (Vapnik, 1991, ERM) learner since its only objective is to minimize the training error. There are several drawbacks to this simple algorithm, the chief of which is that we need to search over \( V \) exhaustively at each step, which can be expensive since it consists of all the tokens of the vision-language model (e.g., CLIP has about 50,000 tokens). Instead, we could search over only a subset of \( V \). To reduce this search space, we use a language model (LM) to induce a distribution over the next tokens conditioned on \( \theta^k \) and only evaluate the tokens with high probabilities: \[ p_{\text{next}}(\theta^k_{l+1} \mid \theta^k_{\leq l}) = p_{\text{LM}}(\theta^k_{l+1} \mid \theta^k_{\leq l} = (\theta^k_1, \theta^k_2, \ldots, \theta^k_l)). \] Given that CLIP is trained with natural language supervision, autoregressive LMs that are also trained on natural language can likely predict suitable next tokens. We then take the top \( N \) candidates and only evaluate the accuracy of these candidates. Conveniently, this can be seen as constraining the complexity of the prompt, as the language model provides a structured prior. We observe that this pruning incurs minimal performance loss, suggesting that LMs are indeed good priors for searching over class prompts on image classification tasks. Furthermore, we may use predefined strings to further constrain the hypothesis space by starting with an initial prompt such as “This is an image of [...]”, instead of an empty string. These initial prompts can provide additional structure to the generated prompts by constraining the output distribution, similar to the role of an inductive bias. We refer to this method as Greedy. **Structural risk minimization via PAC-Bayes.** This procedure can be further augmented to optimize the PAC-Bayes bound via structural risk minimization (Vapnik & Chervonenkis, 1974, SRM), similar to the approach of Dziugaite & Roy (2017); namely, we will take the hypothesis complexity (e.g., the KL-divergence) into account as we search for the next token for each prompt.
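Before turning to the regularized variant, here is a rough sketch of the two ingredients just described: the Greedy criterion \( J_{\text{greedy}} \) and the LM-based candidate pruning via \( p_{\text{next}} \). The helper `zero_shot_error` (the 0–1 training error of the classifier built from a set of class prompts) is hypothetical, and the Hugging Face `transformers` calls are one plausible way to query an autoregressive LM; neither is the authors' implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def j_greedy(token, k, prompts, train_set, clip_model):
    """J_greedy: negative 0-1 training error obtained by appending `token`
    to class k's prompt while all other class prompts stay fixed."""
    trial = {c: list(p) for c, p in prompts.items()}
    trial[k].append(token)
    return -zero_shot_error(trial, train_set, clip_model)  # hypothetical helper, r(.) from Section 3

def lm_candidates(prompt_tokens, lm, lm_tokenizer, top_n=256):
    """Approximate p_next: keep only the top-N next tokens proposed by an
    autoregressive LM conditioned on the current class prompt."""
    ids = lm_tokenizer(" ".join(prompt_tokens) or " ", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits[0, -1]              # distribution over the next token
    top_ids = torch.topk(logits, top_n).indices.tolist()
    return [lm_tokenizer.decode([i]) for i in top_ids]

# Illustrative model choice (not necessarily the paper's exact checkpoint):
# lm = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# lm_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```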
We use the KL-divergence directly in the objective optimization without sacrificing the quality of the solution. Once again, we do this optimization in a sequential manner via Algorithm 1: \[ \hat{V}_{\text{LM}}(\theta) = \left\{ v \in V \mid \max_{v'} p_{\text{next}}(v' \mid \theta^k_{\leq l}) - p_{\text{next}}(v \mid \theta^k_{\leq l}) \leq \Delta \right\}, \] \[ J_{\text{LM}}(v, \theta^k_{\leq l}, \theta^{-k}) = -r((\ldots, \theta^{k-1}, \theta^k_{\leq l} \oplus v, \theta^{k+1}, \ldots)) + \beta \log p_{\text{next}}(v \mid \theta^k_{\leq l}), \] where \( \Delta \) controls the size of the search space (adjusted according to computational constraints) and \( \beta \) is a hyperparameter that controls the strength of the regularization. This set of permissible tokens could also be pruned and fixed beforehand by discarding tokens with low marginal probability. We refer to this version of search as regularized greedy. ### 4.2 Generalization Guarantees for Prompts Since the space of all prompts is discrete and the total number of possible prompts is \( |\Theta| = |V|^{LK} \), for a single hypothesis \( \hat{\theta} \) we have the following uniform convergence bound for prompts, which depends on the prompt length, the number of classes, and the number of tokens in the vocabulary, by assigning uniform probability to each hypothesis (from Theorem 3.1): \[ R(\hat{\theta}) \leq r(\hat{\theta}) + \sqrt{\frac{L K \log |V| + \log(1/\delta)}{2n}}. \] However, not all prompts are equally likely to be good. To obtain a tighter generalization guarantee on the learned \( \hat{\theta} \), we will leverage a classical PAC-Bayes bound to derive an upper bound on the generalization error of the learned prompts. In conventional applications of PAC-Bayes to deep learning, \( P \) and \( Q \) are often chosen to be isotropic Gaussians on the parameters (Langford & Caruana, 2001) so the KL-divergence between the prior and posterior can be easily computed. We instead use a language model as the prior over \( K \) independent prompts, \( P(\theta) = \prod_{i=1}^{K} \prod_{j=1}^{L} p_{\text{LM}}(\theta^i_j \mid \theta^i_{<j}) \). Further, we treat the prompts \( \hat{\theta} \) found through search or through prompt engineering as a point-mass posterior, \( Q(\theta) = 1\{\theta = \hat{\theta}\} \). In this case, the KL-divergence is conveniently equal to the negative log-likelihood of \( \hat{\theta} \) under the LM because the posterior is zero everywhere except at \( \hat{\theta} \): \[ D_{KL}(Q \parallel P) = \sum_{\theta \in \Theta} Q(\theta) \log \frac{Q(\theta)}{P(\theta)} = \log \frac{1}{P(\hat{\theta})} = -\sum_{i=1}^{K} \sum_{j=1}^{L} \log p_{\text{LM}}(\hat{\theta}^i_j \mid \hat{\theta}^i_{<j}). \] This bound has an intuitive interpretation, which is that the generalizing prompts are the ones that achieve good training performance and are likely under the language model. Having a point-mass posterior over a discrete space also means that we can derandomize the PAC-Bayes bound for free (Viallard et al., 2021). Combining these observations, we have the following deterministic upper bound on the generalization error (from Theorem 3.2): \[ R(\hat{\theta}) \leq r(\hat{\theta}) + \sqrt{\frac{-\sum_{i=1}^{K} \sum_{j=1}^{L} \log p_{\text{LM}}(\hat{\theta}^i_j \mid \hat{\theta}^i_{<j}) + \log(n/\delta)}{2n-1}}. \] Table 2: Performance and generalization bounds for prompts produced by Greedy and for hand-crafted prompts on different datasets with different CLIP architectures. UC represents the uniform convergence bound.
Handcrafted prompts are taken from CLIP and Wise-FT (Wortsman et al., 2022). | Dataset | Model | Method | Train Err | Test Err | UC | PAC-Bayes | |-------------|-------|--------------|-----------|----------|------|-----------| | CIFAR-10 | B-16 | Greedy | 0.050 | 0.060 | 0.154| 0.086 | | | L-14 | Greedy | 0.023 | 0.028 | 0.128| 0.063 | | | L-14 | handcrafted | 0.040 | 0.040 | 0.145| 0.078 | | CIFAR-100 | B-16 | Greedy | 0.208 | 0.255 | 0.537| 0.317 | | | L-14 | Greedy | 0.142 | 0.180 | 0.471| 0.266 | | | L-14 | handcrafted | 0.221 | 0.221 | 0.549| 0.339 | | fMoW | B-16 | Greedy | 0.598 | 0.621 | 0.807| 0.667 | | | L-14 | Greedy | 0.514 | 0.547 | 0.723| 0.596 | | | L-14 | handcrafted | 0.725 | 0.402 | 0.934| 0.804 | | OfficeHome | B-16 | Greedy | 0.104 | 0.150 | 0.635| 0.281 | | | L-14 | Greedy | 0.070 | 0.115 | 0.601| 0.260 | | | L-14 | handcrafted | 0.926 | 0.928 | 1.457| 1.119 | | ImageNet | L-14 | handcrafted | 0.243 | 0.256 | 0.448| 0.319 | Figure 1: Test error vs generalization bound on CIFAR-10, CIFAR-100, and OfficeHome. We compare the uniform convergence bound and PAC-Bayes bound, when evaluated on prompts produced by Greedy. The dashed line represents $y = x$. We note that these techniques are not novel from a theoretical perspective and there are more sophisticated PAC-Bayes variants that may yield tighter results. Nonetheless, in the next section, we will observe that this simple bound is surprisingly tight even for complex datasets such as ImageNet. Data leakage and contamination. One strong assumption of these bounds, which we make explicitly and which could indeed be violated in practice, is that the image encoder is trained without access to the training set used for prompt engineering. If it is trained on this data, even from the training set, then the functional complexity of the hypothesis class depends not just on the prompt, but also implicitly on the complexity of the image encoder. We emphasize that this fact does not change the nature of the bounds above, but it does change whether or not any given bound in the experiments can be formally considered a valid bound, or could be violated. In practice, this is difficult to verify for the e.g. CLIP encoder, since the data it was trained on is not publicly disclosed. Nonetheless, the CLIP paper includes a sensitivity analysis that shows a relatively small effect of including any of the evaluation datasets they consider (Radford et al., 2021). Thus, while we fully acknowledge that data contamination may apply to the experiments below, we believe this to be similar to many current evaluations of foundation models, where it is difficult to assess the extent to which any performance is truly zero-shot. 5 EXPERIMENTS In this section, we evaluate the generalization of discrete prompts generated by Greedy on CIFAR-10, CIFAR-100, ImageNet as well as domain generalization datasets fMoW (Christie et al., 2018) and OfficeHome (Venkateswara et al., 2017), which is much less studied in the context of numerical generalization bounds. We also evaluate existing well-performing handcrafted prompts taken from CLIP and Wise-FT (Wortsman et al., 2022). Given these prompts, we compute generalization bounds via PAC-Bayes ($\text{PAC-Bayes}$) and via uniform convergence ($\text{UC}$). The PAC-Bayes bounds are computed using LLaMA-7B (Touvron et al., 2023) as the prior. 
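A minimal sketch of how such a bound can be computed in practice (our own rendering, not the released code): the KL term is just the negative log-likelihood of the class prompts under a causal LM, which is then plugged into the bound from Section 4.2. The model identifier and numeric values in the usage example are illustrative.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def prompt_log_likelihood(prompt: str, lm, tok) -> float:
    """Sum of log p_LM(token_j | preceding tokens) over the prompt's tokens."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)   # predictions for positions 2..L
    return logp.gather(-1, ids[0, 1:].unsqueeze(-1)).sum().item()

def pac_bayes_bound(train_err: float, class_prompts, lm, tok, n: int, delta: float = 0.05) -> float:
    """Deterministic bound for a point-mass posterior with an LM prior (Section 4.2)."""
    kl = -sum(prompt_log_likelihood(p, lm, tok) for p in class_prompts)
    return train_err + math.sqrt((kl + math.log(n / delta)) / (2 * n - 1))

# Illustrative usage (model id and values are not the paper's exact configuration):
# lm = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
# print(pac_bayes_bound(0.14, ["This is a photo of a dog", "This is a photo of a cat"], lm, tok, n=50000))
```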
Within $\text{Greedy}$, we search using the CLIP vocabulary of 49,408 tokens and measure the generalization bounds for 100 realizations of $\text{Greedy}$ with each corresponding to a fixed prompt length $l \in \{1, \ldots, 10\}$ and split portion of the dataset $s \in \{0.1, \ldots, 1.0\}$. More details on the experimental procedure are in Appendix C. **Baselines** We compare our generalization bounds against existing generalization bounds on CIFAR-10, CIFAR-100, and ImageNet. In particular, we compare against the works of Lotfi et al. (2022) and Zhou et al. (2019), which represent the latest progress in PAC-Bayes bounds for deep learning. As shown in Table 1, discrete prompts achieve much tighter bounds than the state-of-the-art across all 3 datasets. We remark that our approach is also data-independent, while still achieving a tighter bound than the data-dependent approach in the work of Lotfi et al. (2022). An added benefit of this result is that we make little modification to the existing learning paradigm – indeed prior bounds often need to make strict assumptions about the neural network such as Gaussian posterior or the weights lying in a low dimensional manifold (Lotfi et al., 2022) which may hurt the performance. We observe that even simple UC bounds over discrete prompts generated by $\text{Greedy}$ lead to tight, non-vacuous bounds across a variety of datasets, and PAC-Bayes bounds with an LLM prior further improve these bounds (Figure 1). These also apply to handcrafted prompts (Figure 2) from the existing literature (Radford et al., 2021; Wortsman et al., 2022) (other datasets’ result in Appendix B). Figure 2: Test error vs PAC-Bayes generalization bound on CIFAR-10, CIFAR-100, and OfficeHome on handcrafted prompts. The dashed line represents $y = x$. Figure 3: Train error (orange) and generalization bound (blue) vs test error ($y$-axis) on CIFAR-10, CIFAR-100, and OfficeHome of prompts produced by $\text{Greedy}$. The dashed line represents $y = x$. Notice that towards the region of low training loss (left), many prompts actually have higher test loss (negative correlation). On the other hand, the low bounds correlate with low test errors well. Figure 4: Test error vs the PAC-Bayes bound on CIFAR-10 when using SRM (i.e., directly penalizing the PAC-Bayes bound) (left). We also report the train and test performance when the CLIP vocabulary is pruned (i.e., removing tokens that have logit values that are $k$ standard deviations away from the max token) using the language model (right). This yields prompts with tighter bounds at the cost of slightly higher error. Structural risk minimization with the PAC-Bayes bound PAC-Bayes is related to SRM (Vapnik & Chervonenkis, 1974), where one tries to optimize both the goodness of fit and complexity of the model. When we compare test error against train error or the generalization bound (Figure 3), we observe that the generalization bound can serve as a useful criterion for model selection. We consider using SRM, where our complexity term is exactly the KL divergence term in Equation 8. Regularized Greedy now jointly maximizes train accuracy and minimizes this KL divergence term when adding new tokens to each class prompt. We observe that this naturally leads to tighter bounds for prompts yielded by Greedy on CIFAR-10 (Figure 4) while maintaining comparable accuracy. 
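A sketch of the regularized criterion used here, reusing `j_greedy` and the LM next-token distribution from the earlier sketches; the values of \( \beta \) and \( \Delta \) are set arbitrarily for illustration.

```python
import torch

def j_regularized(token, k, prompts, train_set, clip_model, log_p_next, beta=0.1):
    """J_LM: training accuracy traded off against the token's log-probability
    under the LM prior, so search also shrinks the KL / complexity term."""
    return j_greedy(token, k, prompts, train_set, clip_model) + beta * log_p_next(token, prompts[k])

def candidates_within_delta(p_next_probs, vocab, delta=0.05):
    """V_hat_LM: permissible tokens whose next-token probability is within
    `delta` of the most likely token (p_next_probs: vocab-sized tensor)."""
    gap = p_next_probs.max() - p_next_probs
    return [vocab[i] for i in torch.nonzero(gap <= delta).flatten().tolist()]
```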
Interestingly, using LLaMA-7B as the prior does not significantly improve the linguistic coherence of prompts obtained through regularized search, which leaves room for more sophisticated search techniques to address this in future work. Pruning the hypothesis space In addition to regularizing the search objective with the KL term directly, another method to improve our generalization bounds is to prune the vocabulary using a large language model. We experiment with conditioning the language model on the class names and then selecting tokens from the language model’s vocabulary with the highest probability under the language model. In Figure 4, we report the performance and generalization of Greedy when the tokens considered in search are restricted to within $k$ standard-deviations (see Appendix B.1 for details) away from the maximum logit token. While the vocabulary size of LLaMA-7b is 32,000 tokens, the number of tokens within 3, 2, 1 standard deviations from the maximum token are 6,894, 1,361, 185 respectively. We observe this implicitly prunes the hypotheses to contain those with smaller generalization error at a small cost to the train and test error. Restricting the vocabulary also encodes prior knowledge about the data or domain. For example, further results using a vocabulary of English words in Appendix B.1 (instead of CLIP’s vocabulary of tokens) show that we can learn slightly more interpretable prompts. Effects of prompt length Another key quantity of prompt engineering is the prompt length which directly controls the size of the hypothesis space. We analyze how the length of class prompts impacts the performance of Greedy (Figure 5). We note that at a certain length, the train accuracy plateaus, which means that a relatively small prompt length suffices for good classification performance. Fitting random labels Motivated by our new observations about prompt engineering, we hypothesize that the learned prompts are less prone to overfitting the noise in the data. Zhang et al. (2021) showed that conventional deep neural networks can fit both random labels, arguing that these models have much higher capacity than what traditional statistical learning theory can deal with. To demonstrate that prompt engineering is robust to label noise, we experiment with running Greedy Figure 5: The train and test accuracy with different prompt lengths for greedy search. Although the generalization gap increases with prompt length, there is little overfitting even at the longest lengths. Figure 6: We show the generalization of discrete prompts produced by Greedy on randomly labeled data from CIFAR-10 (left). We also report the performance when search is done with 1% - 9% of the labeled data (middle), and when search is done with 1% - 9% of the CLIP vocabulary (right). We fix the prompt length to be 5. Table 3: Performance and generalization bounds for prompts produced by Greedy and for a linear probe (on top of CLIP features) on different datasets with 20 samples per class. UC represents the uniform convergence bound. We omit UC for linear probing because this is a multi-class problem. 
| Dataset | Model | Method | Train Err | Test Err | UC-20 | PAC-Bayes-20 | |------------|-------|----------------|-----------|----------|-------|--------------| | CIFAR-10 | L-14 | Greedy | 0.020 | 0.138 | 1.675 | 0.634 | | | L-14 | Linear Probe | 0.000 | 0.038 | - | 2.591 | | CIFAR-100 | L-14 | Greedy | 0.156 | 0.367 | 1.801 | 0.637 | | | L-14 | Linear Probe | 0.000 | 0.198 | - | 3.715 | on training data with a certain proportion of randomly flipped labels. We observe that both training and test accuracy drop monotonically in tandem as we flip these training labels (Figure 6), which suggests that the prompts cannot overfit the random labels. For a baseline comparison, we also compare the performance of a linear probe on random labels. We observe that this achieves roughly random performance (13.60% accuracy) with 100% flipped labels. This supports that Greedy is not too simple of a search approach to fit the random labels as other more complex methods also cannot. Learning with small data When the number of data points is small (e.g., \( n = 20 \)), the use of PAC-Bayes is especially attractive since we can use all the data points to estimate the posterior and bound its risk. Furthermore, prompt engineering is frequently used with limited labeled data; thus, further progress in understanding its generalization properties must provide bounds in this regime. In Figure 6, we report the train and test accuracy of Greedy as we vary the amount of training data (between 1%–10% of the full data) we use in computing the search objective. We observe less than 2% increase in error with 2% of the training set of CIFAR-10. This highlights that Greedy can be remarkably data efficient. We then compute both the uniform convergence and PAC-Bayes bounds with 20 samples per class (Table 3). The results underscore the importance of an informative prior in the form of the LLM. The bounds obtained with the LLM prior are, albeit loose but still non-vacuous. To the best of our knowledge, this is not possible with prior approaches unless it is data-dependent. One could ask since we assume the representation from CLIP is not learned from the training data, can we simply use an SVM-like bound on the learned features (McNamara & Balcan, 2017)? As a case in point, we present a standard linear probe (on top of CLIP’s features), which achieves slightly better accuracy but a vacuous generalization bound. The implementation details are described in Appendix C. The discrete nature of prompts and the fact that the corresponding hypothesis space of CLIP is so small is crucial to the success of our approach. We believe that exploring avenues to obtain tighter PAC-Bayes bounds in the small data regime is an opportunity for future work and the use of data-dependent priors may be fruitful in this regard. 6 CONCLUSION AND LIMITATIONS In this paper, we study the generalization properties of engineered prompts on image recognition tasks. We observe the surprising fact: prompt engineering does not seem to overfit, and also performs well on the test distribution. We provide a principled approach to analyze this generalization behavior by framing discrete prompts as a relatively small hypothesis class, onto which we can naturally apply classical PAC-Bayes bounds using an LLM prior. This results in the tightest bounds yet observed across multiple complex datasets, including CIFAR-10, CIFAR-100, and ImageNet. 
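A back-of-the-envelope calculation shows why the uniform prior collapses in this regime while an informative LM prior can keep the bound non-vacuous. The prompt length \( L = 10 \) and \( \delta = 0.05 \) are our assumptions; \( K \), \( n \), \( |V| \), and the train error follow the CIFAR-10 row of Table 3.

```python
import math

K, n, L, V, delta = 10, 200, 10, 49408, 0.05   # 20 samples per class on CIFAR-10
train_err = 0.020                              # Greedy's train error in Table 3

# Uniform-convergence gap: sqrt((L*K*log|V| + log(1/delta)) / (2n))
uc_gap = math.sqrt((L * K * math.log(V) + math.log(1 / delta)) / (2 * n))
print(round(train_err + uc_gap, 2))  # ~1.67 -- of the same order as the UC-20 column, and vacuous

# An LM prior replaces the L*K*log|V| term (~1081 nats here) with the prompts'
# negative log-likelihood under the LM, which is what keeps PAC-Bayes-20 below 1.
```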
As a whole, this supports the use of prompt-engineering or simple greedy searches over potential class prompts as a high-performing and well-generalizing classifier. Despite the ability to produce highly non-vacuous bounds, the bounds rely on the fact that pretrained vision-language models readily contain some hypothesis class that will perform well on the training set (for whatever the desired task is). This, in turn, naturally relies on the generalization performance of the underlying model itself, which our analysis evidently does not, and cannot, address (as they are only aware of the language model, which does not observe the data). Nonetheless, what our bounds do address is the fact that when given these performant models, manual prompt engineering (even when “overfitting” to a training set) often exhibits surprisingly strong generalization behavior. Given the prevalence of prompt engineering in modern ML, we believe that this work provides an important perspective on this widespread practice. ACKNOWLEDGEMENTS We thank Nina Balcan for valuable discussions during this project. Victor Akinwande, and Dylan Sam were supported by funding from Bosch Center for AI. Dylan Sam was also supported by a National Science Foundation Graduate Research Fellowship under Grant No. DGE2140739 and the ARCS Foundation. Yiding Jiang is supported by the Google PhD Fellowship. REFERENCES Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Févry, et al. Promptsource: An integrated development environment and repository for natural language prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 93–104, 2022. Peter L. Bartlett, Dylan J. Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. ArXiv, abs/1706.08498, 2017. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In CVPR, 2018. Amit Daniely, Sivan Sabato, Shai Ben-David, and Shai Shalev-Shwartz. Multiclass learnability and the erm principle. J. Mach. Learn. Res., 16(1):2377–2404, 2015. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017. Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitashan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, and Daniel M Roy. In search of robust measures of generalization. Advances in Neural Information Processing Systems, 33:11723–11733, 2020. Gintare Karolina Dziugaite, Kyle Hsu, Waseem Gharbieh, Gabriel Arpino, and Daniel Roy. On the role of data in pac-bayes bounds. In International Conference on Artificial Intelligence and Statistics, pp. 604–612. PMLR, 2021. Tianyu Gao, Adam Fisch, and Danqi Chen. 
Making pre-trained language models better few-shot learners. In Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP 2021, pp. 3816–3830. Association for Computational Linguistics (ACL), 2021. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904–4916. PMLR, 2021. Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them. arXiv preprint arXiv:1912.02178, 2019. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. John Langford and Rich Caruana. (not) bounding the true error. In NIPS, 2001.
SkETBJRKH7
I am a bit confused about the results shown in the middle panel of Figures 4 and 5: The PFC-LLM architecture produced zero invalid action proposals in both tasks. Does this imply that the Monitor module is unnecessary, given that its role is to identify invalid action proposals? However, this contradicts the ablation study, which demonstrates a significant drop in PFC-LLM performance without the Monitor module. Could the authors provide a little more detailed explanation of this inconsistency?
A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models Anonymous authors Paper under double-blind review Abstract Large language models (LLMs) demonstrate impressive performance on a wide variety of tasks, but they often struggle with tasks that require multi-step reasoning or goal-directed planning. To address this, we take inspiration from the human brain, in which planning is accomplished via the recurrent interaction of specialized modules in the prefrontal cortex (PFC). These modules perform functions such as conflict monitoring, state prediction, state evaluation, task decomposition, and task coordination. We find that LLMs are sometimes capable of carrying out these functions in isolation, but struggle to autonomously coordinate them in the service of a goal. Therefore, we propose a black box architecture with multiple LLM-based (GPT-4) modules. The architecture improves planning through the interaction of specialized PFC-inspired modules that break down a larger problem into multiple brief automated calls to the LLM. We evaluate the combined architecture on three challenging planning tasks – graph traversal, Tower of Hanoi, and logistics – finding that it yields significant improvements over standard LLM methods (e.g., zero-shot prompting, in-context learning, and chain-of-thought). These results demonstrate the benefit of utilizing knowledge from cognitive neuroscience to improve planning in LLMs. 1 Introduction Large Language Models (LLMs) (Devlin et al., 2019; Brown et al., 2020) have recently emerged as highly capable generalist systems with a surprising range of emergent capacities (Srivastava et al., 2022; Wei et al., 2022a; Webb et al., 2023). They have also sparked broad controversy, with some suggesting that they are approaching general intelligence (Bubeck et al., 2023), and others noting a number of significant deficiencies (Mahowald et al., 2023). A particularly notable shortcoming is their poor ability to plan or perform faithful multi-step reasoning (Valmeekam et al., 2023; Dziri et al., 2023). Recent work (Momennejad et al., 2023) has evaluated the extent to which LLMs might possess an emergent capacity for planning and exploiting cognitive maps, the relational structures that humans and other animals utilize to perform planning (Tolman, 1948; Tavares et al., 2015; Behrens et al., 2018). This work found that a variety of LLMs, ranging from small, open-source models (e.g., LLaMA-13B and Alpaca-7B) to large, state-of-the-art models (e.g., GPT-4), displayed systematic shortcomings in planning tasks that suggested an inability to reason about cognitive maps. Common failure modes included a tendency to ‘hallucinate’ (e.g., to imagine non-existent paths), and to fall into loops. This work raises the question of how LLMs might be improved so as to enable a capacity for planning. In the present work, we take a step toward improving planning in LLMs, by taking inspiration from the planning mechanisms employed by the human brain. Planning is generally thought to depend on the prefrontal cortex (PFC) (Owen, 1997; Russin et al., 2020; Brunec & Momennejad, 2022; Momennejad et al., 2018; Momennejad, 2020; Mattar & Lengyel, 2022), a region in the frontal lobe that is broadly involved in executive function, decision-making, and reasoning (Miller & Cohen, 2001). Research in cognitive neuroscience has revealed the presence of several subregions or modules within the PFC that appear to be specialized to perform certain functions. 
These include functions such as conflict monitoring (Botvinick et al., 1999); state prediction and state evaluation (Wallis, 2007; Schuck et al., 2016); and task decomposition and task coordination (Ramnani & Owen, 2004; Momennejad & Haynes, 2012, 2013). Human planning then emerges through the coordinated and recurrent interactions among these specialized PFC modules, rather than through the activity of a single, monolithic system. An interesting observation is that LLMs often seem to display some of these capacities when probed in isolation, even though they are unable to reliably integrate and deploy these capacities in the service of a goal. For instance, Momennejad et al. (2023) noted that LLMs often attempt to traverse invalid or hallucinated paths in planning problems (e.g., to move between rooms that are not connected), even though they can correctly identify these paths as invalid when probed separately. This suggests the possibility of a PFC-inspired approach, in which planning is carried out through the coordinated activity of multiple LLM modules, each of which is specialized to perform a distinct process. With this goal in mind, we propose LLM-PFC (Figure 1), an architecture composed of modules that are specialized to perform specific PFC-inspired functions. Each module consists of an LLM instance (GPT-4), constructed through a combination of prompting and few-shot in-context learning. We specifically propose modules that perform error monitoring, action proposal, state prediction, state evaluation, task decomposition, and task coordination. It is suggested that the coordinated activity of multiple PFC subregions performs tree search during planning (Owen, 1997; Daw et al., 2005; Wunderlich et al., 2012; Doll et al., 2015). Thus, our approach combines action proposal, state prediction, and state evaluation to perform tree search. We evaluate LLM-PFC on three challenging planning tasks. First, we performed controlled experiments on a set of graph traversal tasks using the CogEval protocol (Momennejad et al., 2023). These tasks require navigation in novel environments based on natural language descriptions, and have been shown to be extremely challenging for LLMs, including GPT-4. Second, we investigate Tower of Hanoi (ToH), a classic problem solving task that requires multi-step planning (Simon, 1975), and for which performance is known to be heavily dependent on PFC function (Goel & Grafman, 1995; Fincham et al., 2002). Finally, we investigate a more complex, real-world planning task involving logistics (transportation of goods) (Valmeekam et al., 2023). We find that our approach significantly improves LLM performance on all three tasks. Ablation experiments further indicate... that each of the individual modules plays an important role in the overall architecture’s performance. Taken together, these results indicate the potential of a PFC-inspired approach to improve the reasoning and planning capabilities of LLMs. 2 APPROACH The LLM-PFC architecture is constructed from a set of specialized LLM modules, each of which performs a specific PFC-inspired function. In the following sections, we first describe the functions performed by each module, and then describe how they interact to generate a plan. 
2.1 MODULES LLM-PFC contains the following specialized modules, each constructed from a separate LLM instance through a combination of prompting and few-shot (≤ 3 examples) in-context learning (described in greater detail in section A.6): - **TaskDecomposer.** The TaskDecomposer receives the current state \( x \) and a goal \( y \) and generates a set of subgoals \( Z \) that will allow the agent to gradually work toward its final goal. This module is inspired by the anterior PFC (aPFC), which is known to play a key role in task decomposition through the generation and maintenance of subgoals (Ramnani & Owen, 2004). In the present work, the TaskDecomposer is only utilized to generate a single intermediate goal, though in future work we envision that it will be useful to generate a series of multiple subgoals. - **Actor.** The Actor receives the current state \( x \) and a subgoal \( z \) and proposes \( B \) potential actions \( A = a_{b=1} \ldots a_{b=B} \). The Actor can also receive feedback \( \epsilon \) from the Monitor about its proposed actions. This module can be viewed as being analogous to the dorsolateral PFC (dlPFC) which plays a role in decision making through top-down control and guidance of lower-order premotor and motor regions (Miller & Cohen, 2001). - **Monitor.** The Monitor assesses the actions proposed by the Actor to determine whether they are valid (e.g., whether they violate the rules of a task). It emits an assessment of validity \( \sigma \), and also feedback \( \epsilon \) in the event the action is deemed invalid. This module is inspired by the Anterior Cingulate Cortex (ACC), which is known to play a role in conflict monitoring (Botvinick et al., 1999), i.e., detecting errors or instances of ambiguity. - **Predictor.** The Predictor receives the current state \( x \) and a proposed action \( a \) and predicts the resulting next state \( \tilde{x} \). The Predictor is inspired by the Orbitofrontal cortex (OFC), which plays a role in estimating and predicting task states. In particular, it has been proposed that the OFC plays a key role in encoding cognitive maps: representations of task-relevant states and their relationships to one another (Schuck et al., 2016). - **Evaluator.** The Evaluator receives a next-state prediction \( \tilde{x} \) and produces an estimate of its value \( v \) in the context of goal \( y \). This is accomplished by prompting the Evaluator (and demonstrating via a few in-context examples) to estimate the minimum number of steps required to reach the goal (or subgoal) from the current state. The Evaluator is also inspired by the OFC which, in addition to predicting task states, plays a key role in estimating the motivational value of those states (Wallis, 2007). - **Orchestrator.** The Orchestrator receives the current state \( x \) and a subgoal \( z \) and emits an assessment \( \Omega \) of whether the subgoal has been achieved. When the Orchestrator determines that all subgoals (including the final goal) have been achieved, the plan is emitted to the environment as a series of actions. This module is also inspired by the aPFC, which is thought to both identify subgoals and coordinate their sequential execution (Ramnani & Owen, 2004). 2.2 ACTION PROPOSAL LOOP The Actor and Monitor interact via the ProposeAction function (Algorithm 1). The Actor proposes actions which are then gated by the Monitor. 
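Concretely, each module above can be realized as a thin wrapper around a separately prompted LLM instance. The sketch below is our own illustration, not the authors' code: the `LLM` callable stands in for a GPT-4 chat call, and the role prompts and few-shot examples are placeholders.

```python
from dataclasses import dataclass
from typing import Callable, Dict

LLM = Callable[[str, str], str]  # (system_prompt, user_message) -> completion text

@dataclass
class Module:
    """One PFC-inspired module: a dedicated LLM instance with its own
    instructions and at most a few in-context examples."""
    llm: LLM
    system_prompt: str   # role description plus the task rules
    examples: str = ""   # few-shot demonstrations rendered as text

    def __call__(self, message: str) -> str:
        return self.llm(self.system_prompt + "\n" + self.examples, message)

def build_modules(llm: LLM) -> Dict[str, Module]:
    roles = ["TaskDecomposer", "Actor", "Monitor", "Predictor", "Evaluator", "Orchestrator"]
    # Placeholder prompts; in practice each role gets its own carefully written instructions.
    return {r: Module(llm, system_prompt=f"You are the {r} module.") for r in roles}
```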
If the Monitor determines that the actions are invalid (e.g., they violate the rules of a task), feedback is provided to the Actor, which then proposes an alternative action. In the brain, a similar process is carried out by interactions between the ACC and dorsolateral PFC (dlPFC). The ACC is thought to recruit the dlPFC under conditions of conflict. (e.g., errors or ambiguity), which then acts to resolve the conflict through top-down projections to lower-order control structures (e.g., premotor and motor cortices) (Miller & Cohen, 2001; Shenhav et al., 2013). **Algorithm 1: Action proposal loop.** ProposeAction takes a state \( x \) and a goal \( y \) and generates \( B \) potential actions \( A = a_0, \ldots, a_{B-1} \). This is implemented via a loop, in which the Actor first proposes potential actions, and the Monitor then assesses those actions according to certain constraints (e.g., task rules), providing feedback if any of the actions are deemed to be invalid. This continues until the proposed actions are considered valid. See Sections A.6.2 and A.6.3 for more details. ```plaintext Function ProposeAction(x, y, B): σ ← false // Initialize validity E ← {} // Initialize feedback while σ is false do A ← Actor(x, y, E, B) // Sample B actions σ, ε ← Monitor(x, A) // Determine validity and provide feedback E ← E ∪ {ε} // Accumulate feedback end return A ``` ### 2.3 Search loop ProposeAction is further embedded in a Search loop (Algorithm 2). The actions emitted by ProposeAction are passed to the Predictor, which predicts the states that will result from these actions. A limited tree search is then performed, starting from the current state, and then exploring \( B \) branches recursively to a depth of \( L \) layers. Values are assigned to the terminal states of this search by the Evaluator, and the action leading to the most valuable predicted state is selected. This approach mirrors that of the human brain, in which search is thought to be carried out through the coordinated activity of multiple regions within the PFC, including dlPFC, ACC, and OFC (Owen, 1997; Mattar & Lengyel, 2022). **Algorithm 2: Search loop.** Tree search with a depth of \( L \) layers, with \( B \) branches at each layer \( l \). For each branch, a proposed action is sampled, and the Predictor predicts the next state \( \tilde{x} \). This process continues recursively until the terminal layer \( L \), at which point the value \( v_{l=L} \) of the terminal states is estimated by the Evaluator. The values are backpropagated to their parent states in the first layer, and the action that leads to the most valuable state is selected. In our implementation, we accelerate this process by caching the actions and predicted states from deeper search layers and then reusing them in subsequent searches. We also employ the Orchestrator to prematurely terminate search if the goal state is achieved. 
```plaintext Function Search(l, L, B, x, y): V_l ← {} // Initialize value record X_l ← {} // Initialize next-state record A_l ← ProposeAction(x, y, B) // Propose B actions for b in 1...B do \( \tilde{x}_{lb} \leftarrow \text{Predictor}(x, A_{lb}) \) // Predict next state X_l ← X_l ∪ {\( \tilde{x}_{lb} \)} // Update next-state record Ω ← Orchestrator(\( \tilde{x}_{lb}, y \)) // Terminate search if goal achieved if \( l < L \) and \( Ω \) is false then \( a_{l+1}, \tilde{x}_{l+1}, v_{l+1} \leftarrow \text{Search}(l + 1, L, B, \tilde{x}_{lb}, y) \) // Advance search depth V_l ← V_l ∪ {v_{l+1}} // Update value record else \( v_{lb} \leftarrow \text{Evaluator}(\tilde{x}_{lb}, y) \) // Evaluate predicted state V_l ← V_l ∪ {v_{lb}} // Update value record end end \( v_l \leftarrow \max(V_l) \) // Maximum value (randomly sample if equal value) \( a_l \leftarrow A_l[\arg\max(V_l)] \) // Select action \( \tilde{x}_l \leftarrow X_l[\arg\max(V_l)] \) // Predicted next-state return \( a_l, \tilde{x}_l, v_l \) ``` Algorithm 3: LLM-PFC. LLM-PFC takes a state \( x \) and a goal \( y \) and generates a plan \( P \), a series of actions with a maximum length of \( T \). The TaskDecomposer first generates a set of subgoals \( Z \). The agent then pursues each individual subgoal \( z \) in sequence, followed by the final goal \( y \). At each time step, Search is called to generate an action and a predicted next-state. Actions are added to the plan until the Orchestrator determines that the goal has been achieved, or the plan reaches the maximum length \( T \). Function LLM-PFC \((x, y, T, L, B)\): \[ \begin{align*} P &\leftarrow [] \\ Z &\leftarrow \text{TaskDecomposer}(x, y) \\ \text{for } g &\text{ in } 1 \ldots \text{length}(Z) + 1 \text{ do} \\ &\quad \text{if } g \leq \text{length}(Z) \text{ then} \\ &\quad\quad z \leftarrow Z_g \\ &\quad \text{else} \\ &\quad\quad z \leftarrow y \\ &\quad \text{end} \\ &\Omega \leftarrow \text{Orchestrator}(x, z) \\ &\text{while } \Omega \text{ is false and length}(P) < T \text{ do} \\ &\quad a, x, v \leftarrow \text{Search}(l = 1, L, B, x, z) \\ &\quad P \leftarrow [P, a] \\ &\quad \Omega \leftarrow \text{Orchestrator}(x, z) \\ &\text{end} \\ \text{return } P \end{align*} \] 2.4 Plan generation Algorithm 3 describes the complete LLM-PFC algorithm. To generate a plan, a set of subgoals is first generated by the TaskDecomposer based on the final goal and current state. These subgoals are then pursued one at a time, utilizing the Search loop to generate actions until the Orchestrator determines that the subgoal has been achieved. The actions are accumulated in a plan buffer \( P \) until either the Orchestrator determines that the final goal has been reached, or the maximum allowable number of actions \( T \) are accumulated. This approach is inspired by the role that aPFC plays in task decomposition. This involves the decomposition of tasks into smaller, more manageable tasks, and the coordinated sequential execution of these component tasks (Ramnani & Owen, 2004). 3 Experiments 3.1 Tasks Graph Traversal. We performed controlled experiments on four multi-step planning tasks based on graph traversal using the CogEval protocol (Momennejad et al., 2023). Natural language descriptions of a graph are provided with each node assigned to a room (e.g., ‘room 4 is connected to room 7’). We focused on a particular type of graph (Figure 4) with community structure (Schapiro et al., 2013) previously found to be challenging for a wide variety of LLMs. 
The first task, Valuepath, involves finding the shortest path from a given room that results in the largest reward possible. A smaller reward and a larger reward are located at two different positions in the graph. We fixed the two reward locations, and created 13 problems based on different starting locations. The second task, Steppath, involves finding the shortest path between a pair of nodes. We evaluated problems with an optimal shortest path of 2, 3, or 4 steps. We generated 20 problems for each of these conditions by sampling different starting and target locations. The other two tasks, Detour and Reward Revaluation, involve modifications to the Valuepath task that test for flexibility in planning. In these tasks, the problem description and in-context examples for the Valuepath task are presented, and a single Valuepath problem is solved as in the original task. The task is then modified in-context in one of two ways. In the Detour task, an edge is removed from the graph and replaced with a new edge (e.g., ‘the door from room 1 to room 11 is locked and now room 13 is connected to room 11’). In the Reward Revaluation task, the value associated with the two reward locations is changed (e.g., ‘the reward of the chest in room 8 has been changed to 12 and the reward of the chest in room 15 has been changed to 48°). As with the Valuepath task, the Detour and Reward Revaluation tasks each involved 13 problems based on different starting locations. **Tower of Hanoi.** We also investigated a classic multi-step planning task called the Tower of Hanoi (ToH) (Figure 5). In the original formulation, there are three pegs and a set of disks of different sizes. The disks are stacked in order of decreasing size on the leftmost peg. The goal is to move all disks to the rightmost peg, such that the disks are stacked in order of decreasing size. There are a couple of rules that determine which moves are considered valid. First, a disk can only be moved if it is at the top of its stack. Second, a disk can only be moved to the top of another stack if it is smaller than the disks in that stack (or if the peg is empty). More complex versions of the task can be created by using a larger number of disks. We designed an alternative formulation of this task in which the inputs are text-based rather than visual. In this alternative formulation, three lists (A, B, and C) are used instead of the three pegs, and a set of numbers (0, 1, 2, and so on) is used instead of disks of different sizes. The goal is to move all numbers so that they are arranged in ascending order in list C. The rules are isomorphic to ToH. First, a number can only be moved if it is at the end of a list. Second, a number can only be moved to the end of a new list if it is larger than all the numbers in that list. Note that although this novel formulation is isomorphic to ToH (and equally complex), it does not share any surface features with the original ToH puzzle (disks, pegs, etc.), and thus GPT-4 cannot rely on exposure to descriptions of ToH in its training data to solve the problem. We created multiple problem instances by varying the initial state (the initial positions of the numbers). This resulted in 26 three-disk problems and 80 four-disk problems. **Logistics.** To assess the ability to generate plans in more real-world settings, we investigated a logistics plan generation task involving the transportation of goods between cities using airplanes and trucks (more details can be found in Valmeekam et al. (2023)). 
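As an illustration of the list-based ToH formulation above, the following sketch (our own, not the authors' code) encodes a three-disk state as three Python lists and checks the two movement rules; this validity check is exactly the kind of constraint the Monitor is prompted to enforce.

```python
State = dict  # e.g., {"A": [0, 1, 2], "B": [], "C": []} encodes a 3-disk problem

def is_valid_move(state: State, src: str, dst: str) -> bool:
    """Rule 1: only the last number of a list may move.
    Rule 2: it may only be appended to a list whose numbers are all smaller."""
    if not state[src]:
        return False
    moving = state[src][-1]
    return all(existing < moving for existing in state[dst])

def apply_move(state: State, src: str, dst: str) -> State:
    new_state = {name: list(numbers) for name, numbers in state.items()}
    new_state[dst].append(new_state[src].pop())
    return new_state

def is_goal(state: State, num_disks: int = 3) -> bool:
    return state["C"] == list(range(num_disks))

start = {"A": [0, 1, 2], "B": [], "C": []}
assert is_valid_move(start, "A", "B") and not is_valid_move(start, "B", "C")
```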
### 3.2 Baselines

We compared our model to several baseline methods. The first method involved asking GPT-4 (zero-shot) to provide the solution step by step. For the second method, in-context learning (ICL), we provided GPT-4 with a few in-context examples of a complete solution. We provided two examples for ToH and Valuepath, and three examples (one each for 2, 3, and 4 steps) for Steppath. The third method was chain-of-thought (CoT) (Wei et al., 2022b). For this method, the in-context examples were annotated with a series of intermediate computations that break down the planning process into multiple steps (see Sections A.6.7–A.6.9 for example baseline prompts). We also evaluated tree-of-thought (ToT) (Yao et al., 2023). Similar to LLM-PFC, ToT combines multiple LLM modules – a generator and an evaluator – to perform tree search. We implemented the generator by combining the prompts from our Actor and Predictor modules, and implemented the evaluator by combining the prompts from our Monitor and Evaluator modules (see Sections A.6.10–A.6.11). We used the codebase provided by Yao et al. (2023). Plans were terminated when the predicted state matched the goal state (based on a ground-truth evaluation, as opposed to requiring the model to make this determination for itself as in LLM-PFC). For each problem, we selected the best out of five proposed plans (again based on a ground-truth evaluation). Although these decisions arguably gave ToT an advantage relative to LLM-PFC, we chose to evaluate ToT in this way so as to give it the best possible chance of performing well in our task setting. Finally, we evaluated multi-agent debate (MAD), using the codebase from Du et al. (2023). In this approach, similar to LLM-PFC, a solution is generated through the interaction between multiple LLM instances (each instance was equivalent to the GPT-4 ICL baseline); however, unlike LLM-PFC, these instances are not specialized to perform specific functions.

### 4 Results

Figure 2 shows the results on the four graph traversal tasks (see Section A.4 for all results in Table form). On the Valuepath task, LLM-PFC solved 100% of problems, significantly outperforming both baselines. On the Steppath task, LLM-PFC displayed perfect performance for 2-step and 3-step paths, and near-perfect performance for 4-step paths, again significantly outperforming both baselines.

Figure 2: **Graph traversal results.** Top row: Valuepath results. Middle row: Steppath results. Bottom row: Detour and Reward Revaluation results. ‘% solved’ indicates percentage of problems solved without proposing invalid actions (↑ better). ‘% invalid’ indicates percentage of moves that are invalid (↓ better). ‘Plan steps’ indicates number of steps in plan for solved problems only (therefore excluding many problems for the baseline models; ↓ better). GPT-4 Zero-shot and ICL baselines are deterministic, and therefore a single run was performed on all problems. Note that LLM-PFC did not employ tree search on the Steppath task, and did not employ task decomposition on any of the graph traversal tasks, as the performance of the model was already at ceiling without these components. Without tree search, LLM-PFC’s performance is deterministic, and therefore only a single run was performed on the Steppath task. Gray error bars reflect 95% binomial confidence intervals (for models evaluated on a single run). For Valuepath, we performed 5 runs with LLM-PFC, and present average performance ± the standard error of the mean (black error bars).
Notably, LLM-PFC’s proposed plans were close to the optimal number of steps for both tasks. LLM-PFC also significantly outperformed both baselines on the Detour and Reward Revaluation tasks, with near-perfect performance in the Detour task. This demonstrates that LLM-PFC can flexibly adjust to new circumstances when generating plans. Finally, the model did not propose any invalid actions in any of the four tasks (e.g., it did not hallucinate the presence of non-existent edges), due to the filtering of invalid actions by the Monitor.

Figure 3 shows the results on Tower of Hanoi (ToH). LLM-PFC demonstrated a significant improvement both in terms of the number of problems solved (left) and the number of invalid actions proposed (right). On 3-disk problems, LLM-PFC yielded a nearly seven-fold improvement in the number of problems solved over zero-shot performance, and also significantly outperformed standard in-context learning (ICL), chain-of-thought (CoT ICL), tree-of-thought (ToT), and multi-agent debate (MAD). For the problems that LLM-PFC solved, the average plan length (5.4) was close to the optimal number of moves (4.4). The model also demonstrated some ability to generalize out-of-distribution (OOD) to more complex 4-disk problems (not observed in any in-context examples), whereas the baseline models solved close to 0% of these problems. Notably, LLM-PFC did not propose any invalid actions, even on OOD 4-disk problems, whereas the baselines proposed a significant number of invalid actions.

Finally, we found that LLM-PFC also significantly improved performance on the logistics task, successfully solving 31% (62/200) of problems, whereas the GPT-4 ICL baseline solved only 10.5% (21/200) of problems, and GPT-4 zero-shot solved only 7.5% (15/200) of problems (as reported in the original paper, Valmeekam et al., 2023). This demonstrates the potential of LLM-PFC to be beneficial in more real-world planning domains.

### 4.1 Ablation Study

We also carried out an ablation study to determine the relative importance of each of LLM-PFC’s major components, focusing on the 3-disk ToH problems. Figure 3(left) shows the results. We found that the Monitor was the most important component, as ablating this module resulted in significantly fewer solved problems, due primarily to an increased tendency to propose invalid moves (31% invalid moves vs. 0% for other ablation models). Ablating the tree search and TaskDecomposer module also resulted in significantly fewer solved problems. Overall, these results suggest that all major components played an important role in the model’s performance.

### 5 Related Work

Early work in AI formalized planning as a problem of search through a combinatorial state space, typically utilizing various heuristic methods to make this search tractable (Newell & Simon, 1956; Newell et al., 1959). Problems such as ToH figured prominently in this early research (Simon, 1975), as it affords the opportunity to explore ideas based on hierarchical or recursive planning (in which a larger problem is decomposed into a set of smaller problems). Our proposed architecture adopts some of the key ideas from this early work, including tree search and hierarchical planning.

A few recent studies have investigated planning in LLMs.
These studies suggest that, although LLMs can perform relatively simple planning tasks (Huang et al., 2022), and can learn to make more complex plans given extensive domain-specific fine-tuning (Pallagani et al., 2022; Wu et al., 2023), they struggle on tasks that require zero-shot or few-shot generation of complex multi-step plans (Valmeekam et al., 2023; Momennejad et al., 2023). These results also align with studies that have found poor performance in tasks that involve other forms of extended multi-step reasoning, such as arithmetic (Dziri et al., 2023). Our approach is in large part motivated by the poor planning and reasoning performance exhibited by LLMs in these settings. --- 1 Note that we did not use tree search or the TaskDecomposer on these problems. Incorporating these components may further improve the performance of LLM-PFC on this task. Some recent approaches have employed various forms of heuristic search to improve performance in LLMs (Lu et al., 2021; Zhang et al., 2023), but these approaches have generally involved search at the level of individual tokens. This is in contrast to our approach, in which search is performed at the more abstract level of task states (described in natural language). This is similar to other recently proposed black-box approaches in which ‘thoughts’ – meaningful chunks of natural language – are utilized as intermediate computations to solve more complex problems. These approaches include scratchpads (Nye et al., 2021), chain-of-thought (Wei et al., 2022b), tree-of-thoughts (Yao et al., 2023), reflexion (Shin et al., 2023), Society of Mind (Du et al., 2023), and Describe-Explain-Plan-Select (Wang et al., 2023). All of these approaches can be viewed as implementing a form of controlled, or ‘system 2’, processing (as contrasted with automatic, or ‘system 1’, processing) (Schneider & Shiffrin, 1977; Sloman, 1996; Kahneman, 2011). In the brain, these controlled processes are strongly associated with the prefrontal cortex (Miller & Cohen, 2001). Therefore, in the present work, we leveraged knowledge from cognitive neuroscience about the modular properties of the PFC. The resulting architecture shares some components with other black box approaches (e.g., tree search (Yao et al., 2023)), but also introduces a number of new components (error monitoring, task decomposition, task coordination, state/action distinction), and combines these components in a novel manner inspired by the functional organization of the human brain (see Section A.4). There have also been a number of proposals for incorporating modularity into deep learning systems, including neural module networks (Andreas et al., 2016), and recurrent independent mechanisms (Goyal et al., 2019). Our approach is distinguished from these approaches by the proposal of modules that perform specific high-level component processes, based on knowledge of specific sub-regions within the PFC. Finally, our approach is closely related to a recent proposal to augment deep learning systems with PFC-inspired mechanisms (Russin et al., 2020). LLM-PFC can be viewed as a concrete framework for accomplishing this goal. 6 CONCLUSION AND FUTURE DIRECTIONS In this work, we have proposed the LLM-PFC architecture, an approach aimed at improving the planning ability of LLMs by taking inspiration from the modular architecture of the human PFC. In experiments on three challenging planning domains, we found that LLM-PFC significantly improved planning performance over standard LLM methods. 
While these results represent a significant step forward, there is still room for improvement. In particular, the model has less than optimal performance on Tower of Hanoi, the Reward Revaluation graph traversal task, and the Logistics planning task (Valmeekam et al., 2023) (see Section A.5). This may be due in part to the inherent limitations of prompting and in-context learning as methods for the specialization of LLM-PFC’s modules. A promising avenue for further improvement may be to jointly fine-tune the modules across a range of diverse tasks (which requires open-source models), rather than relying only on black box methods (our only option with GPT-4). A white-box approach would also eliminate the need for task-specific prompts, and potentially enable zero-shot planning on novel tasks. LLM-PFC also has important implications for neuroscientific models of PFC function. Though much work has characterized the function of individual PFC subregions, there has been less emphasis on the development of integrative models in which these functions interact to carry out coordinated behavior. The present work represents a first step in that direction. An important next step will be to directly evaluate LLM-PFC as a model of neural data, which may then lead to further refinements of the model. We look forward to investigating these possibilities in future work. REFERENCES Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 39–48, 2016. Timothy EJ Behrens, Timothy H Muller, James CR Whittington, Shirley Mark, Alon B Baram, Kimberly L Stachenfeld, and Zeb Kurth-Nelson. What is a cognitive map? organizing knowledge for flexible behavior. Neuron, 100(2):490–509, 2018. Matthew Botvinick, Leigh E Nystrom, Kate Fissell, Cameron S Carter, and Jonathan D Cohen. Conflict monitoring versus selection-for-action in anterior cingulate cortex. Nature, 402(6758):179–181, 1999. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Iva K Brunec and Ida Momennejad. Predictive representations in hippocampal and prefrontal hierarchies. *Journal of Neuroscience*, 42(2):299–312, 2022. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. *arXiv preprint arXiv:2303.12712*, 2023. Patricia A Carpenter, Marcel A Just, and Peter Shell. What one intelligence test measures: a theoretical account of the processing in the raven progressive matrices test. *Psychological review*, 97(3):404, 1990. Nathaniel D Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. *Nature neuroscience*, 8(12):1704–1711, 2005. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *Proceedings of NAACL-HLT*, 17:4171–4186, 2019. Bradley B Doll, Katherine D Duncan, Dylan A Simon, Daphna Shohamy, and Nathaniel D Daw. Model-based choices involve prospective neural activity. *Nature neuroscience*, 18(5):767–772, 2015. 
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. *arXiv preprint arXiv:2305.14325*, 2023. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. Faith and fate: Limits of transformers on compositionality. *arXiv preprint arXiv:2305.18654*, 2023. Jon M Fincham, Cameron S Carter, Vincent van Veen, V Andrew Stenger, and John R Anderson. Neural mechanisms of planning: a computational analysis using event-related fmri. *Proceedings of the National Academy of Sciences*, 99(5):3346–3351, 2002. Vinod Goel and Jordan Grafman. Are the frontal lobes implicated in “planning” functions? interpreting data from the tower of hanoi. *Neuropsychologia*, 33(5):623–642, 1995. Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. *arXiv preprint arXiv:1909.10893*, 2019. Hosein Hasanbeig, Hiteshi Sharma, Leo Betthauser, Felipe Vieira Frujeri, and Ida Momennejad. Allure: A systematic protocol for auditing and improving llm-based evaluation of text using iterative in-context-learning. *arXiv preprint arXiv:2309.13701*, 2023. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In *International Conference on Machine Learning*, pp. 9118–9147. PMLR, 2022. Daniel Kahneman. *Thinking, fast and slow*. macmillan, 2011. Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, et al. Neurologic a* esque decoding: Constrained text generation with lookahead heuristics. *arXiv preprint arXiv:2112.08726*, 2021. Kyle Mahowald, Anna A Ivanova, Idan A Blank, Nancy Kanwisher, Joshua B Tenenbaum, and Evelina Fedorenko. Dissociating language and thought in large language models: a cognitive perspective. *arXiv preprint arXiv:2301.06627*, 2023.
y01KGvd9Bw
The major concern I have is the necessity of utilizing the token from the LLM for image decoding. What would happen if you let the LLM first output the image description, then extracted it and fed it directly to the diffusion model?
DREAMLLM: Synergistic Multimodal Comprehension and Creation

Runpei Dong \(^{1,2}\) Chunrui Han \(^3\) Yuang Peng \(^4\) Zekun Qi \(^{1,2}\) Zheng Ge \(^3\) Jinrong Yang \(^5\) Liang Zhao \(^3\) Jianjian Sun \(^3\) Hongyu Zhou \(^3\) Haoran Wei \(^3\) Xiangwen Kong \(^3\) Xiangyu Zhang \(^3\) Kaisheng Ma \(^4\) Li Yi \(^{4,6,7}\)

\(^1\) Xi’an Jiaotong University \(^2\) Institute for Interdisciplinary Information Core Technology (IIISCT) \(^3\) MEGVII Technology \(^4\) Tsinghua University \(^5\) HUST \(^6\) Shanghai Artificial Intelligence Laboratory \(^7\) Shanghai Qi Zhi Institute

*Equal contribution. † Work partially done during the internship at IIISCT and MEGVII. ‡ Project leaders. ¶ Corresponding authors.

Abstract

This paper presents DREAMLLM, a learning framework that first achieves versatile Multimodal Large Language Models (MLLMs) empowered with the frequently overlooked synergy between multimodal comprehension and creation. DREAMLLM operates on two fundamental principles. The first focuses on the generative modeling of both language and image posteriors by direct sampling in the raw multimodal space. This approach circumvents the limitations and information loss inherent to external feature extractors like CLIP, and a more thorough multimodal understanding is obtained. Second, DREAMLLM fosters the generation of raw, interleaved documents, modeling both text and image contents, along with unstructured layouts. This allows DREAMLLM to learn all conditional, marginal, and joint multimodal distributions effectively. As a result, DREAMLLM is the first MLLM capable of generating free-form interleaved content. Comprehensive experiments highlight DREAMLLM’s superior performance as a zero-shot multimodal generalist, reaping from the enhanced learning synergy. Project page: dreamllm.github.io.

1 Introduction

“What I cannot create, I do not understand.”
Richard P. Feynman, on his blackboard at the time of his death, 1988

Content comprehension and creation in multimodality are crucial and among the ultimate courses of machine intelligence (Sternberg, 1985; Legg & Hutter, 2007). To this end, Multimodal Large Language Models (MLLMs) (Alayrac et al., 2022; Hao et al., 2022; Huang et al., 2023) have emerged as extensions of the successful GPT-style Large Language Models (LLMs) (Brown et al., 2020; Zhang et al., 2022; OpenAI, 2022; 2023a;b; Chen et al., 2023b; Touvron et al., 2023a;b) into the visual realm. Recognized as foundation models (Bommasani et al., 2021), MLLMs have achieved unprecedented progress in multimodal comprehension capabilities. These advanced models typically enhance LLMs by incorporating images as multimodal inputs, such as CLIP features (Radford et al., 2021), to facilitate language-output multimodal comprehension. Their aim is to capture multimodal conditional or marginal distributions via a language posterior. However, multimodal creation, which involves generating images, texts, or both, necessitates a universal generative model that simultaneously learns language and image posteriors—currently underexplored.

Until very recently, some concurrent works have shown success in conditional image generation using MLLMs (Koh et al., 2023; Sun et al., 2023b). As depicted in Fig. 1, these methods compel MLLMs to produce either discrete or continuous conditional embeddings that explicitly align with a pretrained CLIP encoder, which could later be used by a pretrained Stable Diffusion (SD) (Rombach et al., 2022) model for image generation. However, due to an inherent modality gap (Liang et al., 2022), CLIP semantics focus predominantly on modality-shared information, often overlooking modality-specific knowledge that could enhance multimodal comprehension. Consequently, these studies have not fully realized the potential learning synergy between multimodal creation and comprehension, have shown only marginal improvements in creativity, and remain deficient in multimodal comprehension.
Figure 1: Conceptual comparison of vision-language (VL) foundation models. (a) CLIP-like models (Radford et al., 2021; Yu et al., 2022a; Li et al., 2023e) take advantage of two towers that explicitly align VL representations. (b) Flamingo/BLIP-like models (Alayrac et al., 2022; Li et al., 2022; 2023d; Huang et al., 2023) encode VL representations into a unified manifold space using a singular MLLM. However, these models lack full autoregressivity, as they only output language. (c) Concurrent MLLMs (Koh et al., 2023; Sun et al., 2023b) align visual outputs with CLIP representations, but this alignment occurs in an intermediate space, not a raw data space. Consequently, models such as Emu necessitate a second-stage fine-tuning of Stable Diffusion (Rombach et al., 2022) for raw image generation. These models also fall short in generating raw interleaved documents. (d) Our DreamLLM, instead, generates raw language and image inputs in a unified auto-regressive manner, inherently enabling interleaved generation. Only non-autoregressive generation loss is noted.

In this work, we introduce DreamLLM, universally learning image and text posteriors with expected creation & comprehension synergy, based on the following two de-facto designing principles:

i. **Generate Everything as It Is**   Different from existing works that generate intermediate image representations like CLIP embeddings during training, DreamLLM not only takes all modalities raw data as inputs but also as outputs in a truly end-to-end fashion (i.e., outputs are identical to inputs, see Fig. 1). The challenge lies in enabling MLLMs to learn the image posterior without compromising their comprehension capabilities. To address this, we introduce dream queries, a set of learnable embeddings that encapsulate the semantics encoded by MLLMs. This approach avoids altering the output space of MLLMs. Raw images are then decoded by the SD image decoder conditioned on these semantics. In this fashion, the pretrained SD acts as the score function (Ho et al., 2020). The image posterior is thus modeled by direct sampling in the pixel space, facilitated by score distillation (van den Oord et al., 2018; Poole et al., 2023).

ii. **Interleaved Generative Pre-Training (I-GPT)**   DreamLLM is trained to generate interleaved multimodal corpora from the internet (Zhu et al., 2023b), both encoding and decoding interleaved image-text multimodal inputs. Unlike encoding multimodal inputs as in existing methods, decoding interleaved multimodal outputs is challenging due to the complex interleaving layout structures and the long-context requirement of images. Our approach tackles the interleaved layout learning using a unique <dream> token that predicts the placement of images within texts. Harnessing DreamLLM’s causal nature, all contents are generated with history multimodal contexts of any length. This interleaved generative pretraining (I-GPT) inherently forms all joint, marginal, and conditional distributions of images and texts in the document, leading to a learning synergy that grounds DreamLLM’s comprehension in creation and vice versa.
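To make the interleaved training format concrete, the following is a minimal sketch, under our own assumptions about token names and document structure, of how an interleaved document could be flattened into a sequence in which a `<dream>` token marks each position where an image is to be synthesized. It is illustrative only and not DreamLLM's actual preprocessing code.

```python
# Hedged sketch: flattening an interleaved document into a training sequence.
# The Document structure, token names, and whitespace "tokenization" are assumptions.
from dataclasses import dataclass
from typing import List, Union

DREAM_TOKEN = "<dream>"   # special token predicting where an image emerges

@dataclass
class ImagePlaceholder:
    image_id: str         # handle to the raw image (encoded by the visual encoder at
                          # input time, synthesized by the SD decoder at output time)

Document = List[Union[str, ImagePlaceholder]]

def flatten(doc: Document) -> List[str]:
    """Interleave text tokens with a <dream> marker preceding each image slot."""
    sequence: List[str] = []
    for item in doc:
        if isinstance(item, ImagePlaceholder):
            sequence.append(DREAM_TOKEN)              # model learns to predict this token
            sequence.append(f"<image:{item.image_id}>")
        else:
            sequence.extend(item.split())             # stand-in for real subword tokenization
    return sequence

doc = ["Vienna in winter:", ImagePlaceholder("img_0"), "The State Opera at dusk."]
print(flatten(doc))
# ['Vienna', 'in', 'winter:', '<dream>', '<image:img_0>', 'The', 'State', 'Opera', 'at', 'dusk.']
```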
Extensive experiments across various vision-language comprehension, content creation, and language-only tasks demonstrate DreamLLM’s superior performance as a zero-shot multimodal generalist. For instance, DreamLLM-7B achieves an 8.46 FID on MS-COCO and sets a new standard with 49.1/35.9 scores on MMBench and MM-Vet evaluations, respectively. Moreover, we delve into the learning synergy between comprehension and creation, revealing decent in-context generation capabilities. With I-GPT pretraining, DreamLLM generates interleaved documents following human prompts after supervised fine-tuning on instruction-following data curated with GPT-4. To our knowledge, this work is the first to enable MLLMs to create free-form interleaved content with a learning synergy on both sides. As a foundational learning framework, DreamLLM is adaptable across all modalities, laying a promising foundation for future multimodal learning research.

2 Background & Problem Statement

Autoregressive Generative Modeling   Given the joint probability distribution \( p_\theta(w) \) over a sequence \( w = \{w_t\}_{t=1}^T \) with length \( T \), the canonical causal generation (Mikolov et al., 2010; Radford et al., 2018; 2019) of every token \( w_t \) by a \( \theta \)-parameterized language model \( F \) is modeled as \( p_\theta(w) = \prod_{t=1}^T p_\theta(w_t \mid w_{<t}) \). For multimodal comprehension, the sequence could contain \( K \) ordered images \( I = \{I_k\}_{k=1}^K \) interleaved with words. The \( k \)-th image is processed as patch embeddings with visual encoders \( H_\phi(\cdot) \) like CLIP, which will then be encoded by a projector \( M_\zeta \) (e.g., a linear layer (Huang et al., 2023) or DETR- (Carion et al., 2020)/Perceiver-like (Jaegle et al., 2021) Resampler (Alayrac et al., 2022)) into \( L \)-length visual embeddings \( V_k = \{v_l\}_{l=1}^L \). Let \( K(t) \) be the image number before the \( t \)-th word token. The maximum likelihood estimation (MLE) is to minimize
\[
\mathcal{L}_{\text{MLLM}}(\Theta = \{\theta, \zeta\}, w, I) := -\mathbb{E}_t \left[ \log p_\theta(w_t \mid w_{<t}, V_{<K(t)}) \right], \quad V_{K(t)} = M_\zeta \circ H_\phi(I_{K(t)}). \tag{1}
\]

Diffusion Models   Diffusion Models (DMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) are probabilistic generative models that learn the latent structure of data \( z = \{z_t\}_{t=1}^T \) through continuous-\( T \)-timestep information diffusion. DMs involve a forward or diffusion process \( q \) that smoothly converts data to Gaussian noise. Given the initial datapoint \( z_1 \sim q(z_1) \) and diffusion rate \( \beta_t := 1 - \alpha_t \), this process can be defined as a marginal distribution \( q(z_t \mid z_1) := \mathcal{N}(\sqrt{\alpha_t} z_1, \beta_t I) \), and the perturbed data distribution is \( q(z_t) := \int q(z_t \mid z) q(z)\, dz \) by integrating out the data density \( q(z) \). A reversed denoising probability flow \( p \) is used for generating data from noise \( z_T \sim \mathcal{N}(0, I) \) as a Markov chain with transitions approximated by a Gaussian model \( p_\xi(z_{t-1} \mid z_t) := \mathcal{N}(\mu_\xi(z_t), \sigma_t^2 I) \), which relates to an optimal MSE denoiser since \( q(z_{t-1} \mid z_t, z_1) \) is Gaussian with enough timesteps (Feller, 1949; Sohl-Dickstein et al., 2015). Ho et al. (2020) show that the optimization with the evidence lower bound (ELBO) can be simplified by training a denoising U-Net \( \epsilon_\xi(z_t, t) \) parameterized with \( \xi \) that estimates the conditional expectation \( \mathbb{E}[\epsilon \mid z_t] \) of the injected noise \( \epsilon \sim \mathcal{N}(0, I) \) (Bao et al., 2022).
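As a rough illustration of this simplified denoising objective, the following PyTorch-style sketch implements a discrete-timestep variant of noise-prediction training; the `eps_model` network and its call signature are placeholders, and the `cond` argument anticipates the conditional embeddings C introduced in Eq. (2) below.

```python
# Hedged sketch of noise-prediction (epsilon-matching) diffusion training.
# `eps_model(z_t, t, cond)` is a placeholder for a conditional U-Net denoiser.
import torch

def diffusion_loss(eps_model, z0, cond, alpha_bar):
    """One training step: perturb clean data z0 to z_t and regress the injected
    noise, optionally conditioned on embeddings `cond`."""
    b = z0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=z0.device)   # random timestep per sample
    a = alpha_bar[t].view(b, *([1] * (z0.dim() - 1)))                  # broadcast to z0's shape
    eps = torch.randn_like(z0)                                         # injected Gaussian noise
    z_t = a.sqrt() * z0 + (1.0 - a).sqrt() * eps                       # forward diffusion sample
    return ((eps_model(z_t, t, cond) - eps) ** 2).mean()               # MSE to the true noise
```

In DreamLLM's case, `cond` would correspond to the MLLM-queried dream embeddings and `eps_model` to the frozen Stable Diffusion U-Net, so that gradients flow only through the conditioning pathway; this framing is our reading of the setup described later, not an exact reproduction of the training code.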
Let \( C \) be the conditional embeddings and \( z_t = \sqrt{\alpha_t} z_1 + \sqrt{1 - \alpha_t}\, \epsilon \) the perturbed data; the minimization objective is
\[
\mathcal{L}_{\text{DM}}(\xi, z) := \mathbb{E}_{t \sim \mathcal{U}(0,1),\, \epsilon \sim \mathcal{N}(0, I)} \left[ \| \epsilon_\xi(z_t; C, t) - \epsilon \|^2 \right]. \tag{2}
\]
Since \( \epsilon_\xi(z_t; t) = -\sigma_t s_\xi(z_t; t) \) as derived from Tweedie’s formula (Efron, 2011; Luo, 2022), it is equivalent to denoising score matching of \( \nabla_{z_t} \log p_\xi(z_t) \) (Hyvärinen, 2005; Vincent, 2011); thus DMs are also called score-function-based generative models (Song & Ermon, 2019; 2020; Song et al., 2021; 2023).

2.1 How Can We Use MLLMs for Diffusion Synthesis That Synergizes Both Sides?

Multimodal signals typically exhibit modality-specific information that has distinct structure but complementary semantics (Dong et al., 2023). This complementary property allows us to utilize deep language comprehension to enhance cross-modal image generation (Saharia et al., 2022). However, the potential of multimodal creation to improve comprehension remains largely unexplored. Existing strategies (Koh et al., 2023; Sun et al., 2023b; Ge et al., 2023) integrate successful Diffusion Models with MLLMs by aligning the semantic spaces of conditional embeddings between CLIP \( C_{\text{CLIP}} \) and MLLMs \( C_{\text{MLLM}} \). The objective is to minimize the alignment loss \( \mathcal{L}_{\text{align}} = D(M_\psi \circ C_{\text{MLLM}}, C_{\text{CLIP}}) \), employing a distance metric \( D(\cdot, \cdot) \) and a condition projector \( M_\psi \). However, CLIP models primarily learn modality-shared semantics, often overlooking modality-specific information due to a modality gap (Liang et al., 2022; Liu et al., 2023f). This explicit alignment with CLIP’s intermediate output space may induce more conflicts than synergies, as MLLMs are forced to generate semantically reduced information, deviating from their original output space. To circumvent these issues, we propose alternative learning methodologies (see Fig. 2), which we elaborate on in the ensuing sections.

Learning Objective   Our aim is to leverage MLLMs to model distributions via direct pixel space sampling. Here, the pretrained SD functions as a score metric, distilling the learned data distribution. This approach is similar to Score Distillation Sampling (Poole et al., 2023) (SDS, also known as Score Jacobian Chaining (Wang et al., 2023a)). In this context, the image posterior is learned in a DeepDream-like manner (Mordvintsev et al., 2015), using MLLMs’ conditional parameterization.

Conditional Embeddings   Rather than converting the output space of MLLMs to align with CLIP, we propose to query MLLMs using learned embeddings. Consequently, MLLMs-enriched semantics serve as diffusion conditioning, and the distribution is implicitly modeled through synthesis sampling.

Figure 2: Overview of our DreamLLM framework. Interleaved documents serve as input, decoded to produce outputs. Both text and images are encoded into sequential, discrete token embeddings for the MLLM input. A special `<dream>` token predicts where to generate images. Subsequently, a series of dream queries are fed into the MLLM, capturing holistic historical semantics. The images are synthesized by the SD image decoder conditioned on queried semantics. The synthesized images are then fed back into the MLLM for subsequent comprehension.

3 DreamLLM

We introduce DreamLLM, a universal learning framework that facilitates both MLLM’s comprehension and creation capabilities.
Our DreamLLM is built with a causal decoder-only LLM $F_\theta$ as the model foundation, i.e., Vicuna (Chiang et al., 2023) based on LLaMA (Touvron et al., 2023a) trained on ShareGPT (Zheng et al., 2023). We adopt OpenAI’s CLIP-Large (Radford et al., 2021) as the visual encoder $H_\phi$, followed by a linear layer $M_\zeta$ for visual embedding projection. To synthesize images, we use Stable Diffusion (SD) (Rombach et al., 2022) as the image decoder, and the condition projector $M_\psi$ is also a linear layer. An overview of the architecture is depicted in Fig. 2.

3.1 End-to-End Interleaved Generative Pretraining (I-GPT)

All natural documents can be regarded as carriers of text-image interleaved information. Text-only, images-only, and text-image pairs data, on the other hand, can be seen as special cases of interleaved corpora with different modality compositions. Thus, it is critical to empower the model with the capability to learn and generate free-form interleaved documents that form all possible distributions.

Interleaved Structure Learning.   To model the interleaved structure, the interleaved sequence is constructed by inserting a new special `<dream>` token before each image. During training, DreamLLM is trained to predict this `<dream>` token that indicates where an image emerges, and the conditional image synthesis is performed afterward, as introduced next. During inference, DreamLLM will generate an image of its own “free will” when this token is predicted.

Conditional Synthesis through Score Distillation.   To avoid the possible conflicts between CLIP semantics and MLLMs stated in Sec. 2.1, we carefully design a different learning objective and conditional embeddings. Formally, we introduce a series of learnable dream queries with length $Q$: $d = \{d_q\}_{q=1}^Q$. Considering the $t$-th token is predicted as the `<dream>` token, the conditional embeddings $C_{K(t)+1}^{\text{DreamLLM}}$ for the $(K(t) + 1)$-th image synthesis can be obtained by causally querying the previous sequences:
$$C_{K(t)+1}^{\text{DreamLLM}} := F_\theta(d, x_{<t+1}, V_{<K(t)+1}). \tag{3}$$
Thus, the denoising score matching with latent $z$ is motivated in a similar formulation to Eq. (2):
$$\mathcal{L}_{\text{DM}}^{\text{DreamLLM}}(\theta, d, \zeta, \psi, z) := \mathbb{E}_{t \sim \mathcal{U}(0,1),\, \epsilon \sim \mathcal{N}(0, I)} \left[ \| \epsilon_\xi(z_t; C^{\text{DreamLLM}}, t) - \epsilon \|_2^2 \right], \tag{4}$$
where $\xi$ is not updated since the SD is frozen. Eq. (4) can also be viewed as a generalized formulation of textual inversion (Gal et al., 2023), but all condition embeddings are learnable by model-seeking. From the perspective of score distillation (van den Oord et al., 2018), the KL divergence defined by conditions and the pre-learned score function is equivalently minimized for distilling (Hinton et al., 2015).

Table 1: Zero-shot multimodal comprehension evaluation of image-to-text captioning, general VQA, text-related VQA, and comprehensive benchmarks. * denotes non-zero-shot results for VQA. DREAMLLM-7B* is trained using the SFT data constructed by LLaVA-1.5 (Liu et al., 2023b).
| Method | COCO | I2Paragraph | VQAv2 | OKVQA | VizWiz | TextVQA | MMBench | MM-Vet |
|---|---|---|---|---|---|---|---|---|
| **Comprehension Only MLLMs** | | | | | | | | |
| MetaLM (Hao et al., 2022) | - | 41.1 | 11.4 | - | - | - | - | - |
| Kosmos-1 (Huang et al., 2023) | - | 51.0 | - | 29.2 | - | - | - | - |
| Flamingo-9B (Alayrac et al., 2022) | 79.4 | 51.8 | 44.7 | 28.8 | - | - | - | - |
| OF-9B (Awadalla et al., 2023) | 65.5 | 52.7 | 37.8 | 27.5 | 29.1 | 4.6 | 21.8 | - |
| LLaVA-7B (Liu et al., 2023c) | - | - | - | - | 28.9 | 38.7 | 23.8 | - |
| **MLLMs for Comprehension & Creation** | | | | | | | | |
| CM3Leon-7B* (Yu et al., 2023a) | 61.6 | 10.5 | 47.6 | 23.8 | 37.6 | - | - | - |
| Emu-14B (Sun et al., 2023b) | 117.7 | - | 40.0 | 34.7 | 35.4 | - | - | - |
| DREAMLLM-7B (Ours) | 115.4 | 17.4 | 56.6 | 44.3 | 45.8 | 34.9 | 49.9 | 35.9 |
| DREAMLLM-7B* (Ours) | 103.7 | 8.4 | 72.9 | 52.2 | 49.3 | 41.8 | 38.2 | 36.6 |

Universal Multimodal Generative Modeling   An interleaved document sequence \( x = \{x_t\}_{t=1}^T \) contains both words \( w = \{w_i\}_{i=1}^J \) and images \( I = \{I_k\}_{k=1}^K \). The autoregressive nature forms all possible conditional distributions, such as image-conditional multimodal comprehension \( p(w \mid I) \) or text-to-image synthesis \( p(I \mid w) \). The images are processed as visual embeddings \( V \) for causal comprehension. Assuming that the pretrained SD is an optimal score function, Eq. (5) thus could be viewed as an MLE optimization for the synthesis posterior. Different from Eq. (1), the targeted sequence \( x_t \) now could be both encoded images or words. The objective is thus unified to the MLE of all causally-conditioned posteriors in arbitrary forms:
\[
\min_{\theta, d, \zeta, \psi} \mathcal{L}_{\text{DREAMLLM}} := \mathbb{E}_{t, c_{\text{DREAMLLM}}} \left[ D_{KL}\big(q(z_{t-1} \mid z_t, z_1, c_{\text{DREAMLLM}}) \,\|\, p_\xi(z_{t-1} \mid z_t)\big) \right]. \tag{5}
\]

3.2 Model Training

In this work, we consider a three-stage training procedure. It can be summarized as follows, and the implementation details, like training data, can be found in Table 13 in Appendix C.

I Alignment Training   This stage is used to alleviate the gap in multimodality, facilitating the adaptation of multimodal inputs to LLMs. The linear visual projector, linear condition projector, and learnable dream embeddings are pretrained for cross-modal manifold alignment among the frozen LLM, visual encoder, and SD. We use approximately 30M image-text pairs, training both image-to-text comprehension and text-to-image synthesis.

II I-GPT Pretraining   Following alignment, the LLM undergoes an unfrozen process for I-GPT pretraining (detailed in Sec. 3.1). This critical stage facilitates the learning of joint vision-language distributions via generative modeling. Training incorporates approximately 2M selectively filtered documents from MMC4-Core (Zhu et al., 2023b), adhering to a CLIP score threshold of 0.25. Furthermore, we use 2M paired data samples from LAION400M (Schuhmann et al., 2021), captioned by BLIP (Li et al., 2022) (i.e., BLIP-LAION), to enhance text-to-image training and potentially mitigate the impact of some low-quality noisy images and texts from sMMC4.

III Supervised Fine-tuning   This stage enables the model to perform general multimodal comprehension and creative tasks following human instructions (Ouyang et al., 2022). We utilize approximately 80K visual instruction tuning data collected by Liu et al. (2023c).
For instruction-following content creation, GPT-4 is prompted with document summaries or image captions, collecting approximately 20K instruction-following document synthesis from MMC4 (InstructMMC4) and 20K image synthesis data from BLIP captioned LAION400M (Instruct-BLIP-LAION). 4 Experiments DREAMLLM is a versatile multimodal generalist that excels at zero-shot or in-context vision-language comprehension and synthesis tasks. In this section, we conduct systematic evaluations for demonstration. See qualitative results in Appendix B and implementation details in Appendix C. 4.1 Multimodal Comprehension Multimodal comprehension enables humans to interact with agents conditioned on both words and visual content. We evaluate the multimodal vision and language capabilities of DreamLLM across several benchmarks, including image-to-text captioning on COCO (Karpathy & Fei-Fei, 2017) and Image2Paragraph (Krause et al., 2017), general visual question answering (VQA) on VQAv2 (Goyal et al., 2019), OKVQA (Marino et al., 2019), VizWiz (Gurari et al., 2018), and text-related VQA on TextVQA (Singh et al., 2019). Additionally, we conducted a zero-shot evaluation on the recently developed benchmarks of MMBench and MM-Vet to assess the model’s performance in complex multimodal tasks. The results are presented in Table 1 (See Table 5, and Table 6 in Appendix A). All metrics and data splits are listed in Table 14 in Appendix C. We find that i) DreamLLM outperforms other MLLMs across all benchmarks. Notably, DreamLLM-7B surpasses concurrent MLLMs with image synthesis capabilities by a significant margin, achieving +16.6 higher accuracy on VQAv2 compared to Emu-13B. ii) On comprehensive benchmarks like MMBench and MM-Vet, DreamLLM achieves state-of-the-art performance against all 7B counterparts. Detailed analysis revealed superior spatial/relations reasoning capabilities in DreamLLM compared to other MLLMs, likely a result of its image synthesis learning. See qualitative results and comparisons on multimodal dialogue in Table 11, Table 12, Fig. 10, Fig. 11, and Fig. 12, in Appendix B. 4.2 Text-Conditional Image Synthesis Text2Image is one of the most commonly used techniques for creative content generation that follows human’s fabulous imaginations through free-form languages. We assess text-conditional image synthesis on the MS-COCO validation set (Lin et al., 2014) and LN-COCO, the COCO subset of Localized Narratives (Pont-Tuset et al., 2020), following prior works (Xu et al., 2018; Yu et al., 2022b). The MS-COCO dataset primarily contains high-level image abstractions with shorter captions, whereas LN-COCO provides more comprehensive image descriptions (Yu et al., 2022b). DreamLLM samples 8 images per text prompt on MS-COCO by CLIP score ranking, following previous works (Ramesh et al., 2022). On LN-COCO, DreamLLM samples one image per prompt without CLIP ranking since the text is too long and exceeds the CLIP length limit. Note that Parti samples 16 images per prompt with CoCa (Yu et al., 2022a). Our evaluation metric is the zero-shot Fréchet Inception Distance (FID) (Heusel et al., 2017), the results of which are presented in Table 2. We note three key observations: i) Our DreamLLM shows a significant FID improvement over the StableDiffusion baseline after stage-I alignment, reducing the score by 3.67 and 11.83 on MS-COCO and LN-COCO, respectively. Further, FID improvements of 3.97 and 13.73 are achieved after pretraining and supervised fine-tuning. 
The substantial improvement on LN-COCO underscores DreamLLM’s superior capability in processing long-context information. ii) When compared to prior specialist models, DreamLLM delivers competitive results based on the SD image decoder. iii) DreamLLM consistently outperforms concurrent MLLMs-based image synthesis methods. For instance, DreamLLM-7B surpasses Emu-13B by a significant 3.20 FID on MS-COCO. See qualitative results on text-to-image synthesis in Fig. 13 and Fig. 14 in Appendix B.

Table 2: Zero-shot text-to-image synthesis FID (↓) on MS-COCO and LN-COCO.

| Method | LM | MG | FIG | MS-COCO | LN-COCO |
|---|---|---|---|---|---|
| Retrieval Result (Yu et al.) | x | x | x | 17.97 | 33.59 |
| DALL-E (Ramesh et al.) | x | x | x | ~28 | - |
| CogView (Ding et al.) | x | x | x | 27.1 | - |
| CogView2 (Ding et al.) | x | x | x | 24.0 | - |
| SDv2.1 (Rombach et al.) | x | x | x | 12.43 | 34.26 |
| SDv2.1† (Rombach et al.) | x | x | x | 11.91 | 25.35 |
| GLIDE (Nichol et al.) | x | x | x | 12.24 | - |
| Make-A-Scene (Gafni et al.) | x | x | x | 11.84 | - |
| DALL-E 2 (Ramesh et al.) | x | x | x | 10.39 | - |
| Muse-3B (Chang et al.) | ✓ | ✓ | x | 7.88 | - |
| Imagen-3.4B (Saharia et al.) | ✓ | ✓ | x | 7.27 | - |
| Parti-20B (Yu et al.) | ✓ | ✓ | x | 7.23 | 15.97 |
| **Multimodal Large Language Models** | | | | | |
| CM3-13B (Aghajanyan et al.) | ✓ | ✓ | x | 29.56 | - |
| GILL-8B (Koh et al.) | ✓ | ✓ | x | 12.20 | - |
| Emu-13B (Sun et al.) | ✓ | ✓ | x | 11.66 | - |
| CM3Leon-7B† (Yu et al.) | ✓ | ✓ | x | 10.82 | - |
| DreamLLM-7B‡ (Ours) | ✓ | ✓ | ✓ | 8.76 | 22.42 |
| DreamLLM-7B (Ours) | ✓ | ✓ | ✓ | 8.46 | 20.53 |

4.3 Multimodal Joint Creation & Comprehension

Free-form Interleaved Document Creation   Leveraging the interleaved generative modeling from I-GPT, DreamLLM can now generate interleaved documents in a free-form manner. In Fig. 3, we showcase the generated interleaved contents based on human instructions (the text of two example documents follows).

The Austrian capital, Vienna, has long been known as the City of Dreams, and there is no better place to experience than this romantic city during the holiday season. The city’s historic center is surrounded by a collection of beautiful buildings on a square mile. Stroll along the famous Ring Boulevard and admire the architecture. Some of the buildings date back to the 13th century. The Vienna State Opera House is one of the most beautiful buildings. Founded in 1869, the opera house is the home of the Vienna State Opera and the annual Vienna Opera Ball. Cosmopolitan flair, an eclectic mix of architectural styles and a rich cultural heritage come together in Vienna. The city is home to a number of museums, palaces and churches. The Sacher Torte and the Wiener Schnitzel are local specialties. A visit to the romantic Christmas Market in Vienna is also a unique experience.

Imagine you are a robot agent in the house. There is a mobile phone on the dining table in the dining room, and you are in the living room. Write a short article about how you would walk through the house to grab the mobile phone with your robotic arm step by step. Include several high-quality photographs to illustrate your procedure.

As a robot agent in the house, I would walk through the house to grab the mobile phone in the dining room as shown in the image below. First, I would look around the living room to make sure the area is safe and there are no obstacles or people that I need to avoid. I would then walk towards the dining room and stop before entering the room. Next, I would look for the mobile phone on the table, as shown in the image below.
Once I have identified the phone, I would then move closer to it and extend my robot arm to grab it, as shown in the image below. Once the phone is grabbed, I would then bring it back to the living room and place it on the table where it belongs.

Figure 3: Selected DreamLLM instruction-following interleaved content creation examples. Note that each image is created automatically at the location decided by DreamLLM, and then it will be fed back as multimodal comprehension input for the following content generation.

The figure demonstrates that: i) DreamLLM can generate meaningful content per the instructions. ii) The system can autonomously create images at any specified location by predicting the proposed `<dream>` tokens, thereby eliminating the need for additional human intervention. This is a more user-friendly approach compared to systems like Emu, which necessitate human input for image generation locations.

Image Quality   Document quality can be influenced by factors such as text content, image quality (including image-text alignment), and illustration positioning. To assess the quality of generated documents, we utilized a held-out instruction-following subset from the constructed InstructMMC4 as a demonstrative tool. This subset comprises 15K documents across 30 MMC4-defined topics, with 500 samples per topic. We began by evaluating image quality using FID on this subset, generating each image based on the corresponding ground truth texts. The results revealed that when using only matched text inputs for image synthesis, SD achieved an FID score of 74.77. In contrast, our DreamLLM significantly outperforms SD with an FID score of 36.62.

Human Evaluation   We perform a comprehensive human evaluation to assess the quality of the generated samples. We randomly selected 150 samples (5 per topic) for instruction-following document generation, mixing the generated and ground truth MMC4 documents without any identifying information. Five unbiased volunteers were then asked to determine whether the given samples were supported. Given the presence of duplicate and low-quality images in MMC4, the supportive rate for MMC4 was only 77.24%. In contrast, our DreamLLM model achieves a supportive rate of 60.68%, surpassing the 30% Turing test requirement. This result indicates that the generated documents contain high-quality images placed logically, demonstrating the effectiveness of our model.

5 Discussions

5.1 Synergy between Creation & Comprehension?

To elucidate the synergy between multimodal creation and comprehension, we make a comparison among three methods with the DreamLLM architecture, each utilizing identical training data yet differing in their learning objectives: a) the Creation-only baseline, focused solely on text/document-conditional image synthesis; b) the Comprehension-only baseline, dedicated to word generation exclusively; and c) the Joint-learning method, which is the default setting of DreamLLM, learning both image and language modeling.

Quantitative Analysis   As per Table 3, the following observations are made: i) The powerful language comprehension of LLMs significantly enhances the performance of text-to-image specialists like SD, as evidenced by the impressive 8.50 FID (line 1). ii) The use of interleaved data, such as MMC4, can potentially boost multimodal comprehension performance (line 4). iii) The proposed I-GPT further synergizes comprehension and creation with improved performance (line 5).
iv) When incorporating the CLIP alignment loss \( \mathcal{L}_{\text{align}} \) stated in Section 2.1, our DreamLLM fails to converge but rather ends in a collapsing point (line 6). This indicates that the queries are adaptively learning the true data distributions, where CLIP semantics are in conflict with MLLM-encoded semantics.

Qualitative Analysis   In Fig. 4, we compare answers to some exemplar VQA tasks from comprehension-only and joint-learning modules, respectively. It can be seen that: i) The joint-learning method exhibits superior multimodal comprehension, particularly in identifying subject relationships and attributes like object size. ii) In multimodal comprehension scenarios involving multiple image inputs, the joint-learning approach demonstrates enhanced precision. This improved performance is a natural outcome of I-GPT pretraining, allowing better modeling of multimodal correlations in various interleaved documents.

Multimodal In-Context Generation   Multimodal in-context generation is a critical emerging capability for MLLMs (Bommasani et al., 2021; Alayrac et al., 2022). While significant strides have been made in in-context visual question answering, in-context image synthesis remains relatively lacking in exploration. The multimodal context-conditional image synthesis capabilities of DreamLLM, as demonstrated in Fig. 5, offer promising insights into this domain. Tasks such as in-context image editing, subject-driven image generation, and compositional generation, however, pose significant challenges in a zero-shot setting, particularly without downstream fine-tuning as in DreamBooth (Ruiz et al., 2023) or attention modification techniques as in Prompt2Prompt (Hertz et al., 2023). Despite these hurdles, Fig. 5 illustrates DreamLLM’s ability to generate images conditioned on the provided image context. This capability suggests promising potential for DreamLLM in maintaining subject, identity, and semantic context, thereby paving a new way for resolving these complex tasks.

Table 3: Concrete analysis of the synergy between multimodal comprehension and creation (image synthesis). ID denotes whether the interleaved dataset is used during the second stage of pretraining.

| # | Method | ID | \( \mathcal{L}_{\text{align}} \) | MM-Vet | VQAv2 | COCO |
|---|---|---|---|---|---|---|
| 0 | Stable Diffusion | X | - | - | - | 12.43 |
| 1 | Creation-only | X | X | - | - | 8.50 |
| 2 | Creation-only | ✓ | X | - | - | 8.57 |
| 3 | Comprehension-only | X | X | 31.0 | 55.1 | - |
| 4 | Comprehension-only | ✓ | X | 34.4 | 54.3 | - |
| 5 | Joint-learning | ✓ | X | 35.9 | 56.6 | 8.46 |
| 6 | Joint-learning | ✓ | ✓ | N/A | N/A | N/A |

Figure 4: Qualitative comparison. Answer A: answer from comprehension-only models w/o interleaved training; Answer B: answer from joint-learning models.

Figure 5: Selected DreamLLM in-context image generation examples. The X in multimodal inputs are replaced accordingly by the text prompts shown under the generated images. We show the results of the SD baseline in (c) with only the text prompt X for a comparison.

5.2 What is learned by DreamLLM?

Dream Query Attention   In DreamLLM, the conditional embedding is derived from MLLMs with some learned dream queries. Fig. 6 demonstrates a visualization of the learned cross-attention mechanism between these queries and the diffusion latent. Similar to Hertz et al. (2023), we visualize the attention map averaged across all timesteps.
It is seen that: i) The query attention is structured, disentangled, and semantically-oriented. This is evidenced by the fact that distinct queries adeptly capture different subject and background semantics. ii) Despite varying prompts, attention patterns exhibit remarkable similarity as shown in Fig. 6 (a) and (b). This contrasts with the token attentions from the original SD, which are typically text-token dependent. We postulate that this arises from the model’s causal nature, leading to a consistent semantic structure order. 6 Related Works Rapid developments have been witnessed in extending LLMs like LLaMA (Touvron et al., 2023a) to multimodal comprehension that enables human interaction with both words and visual content. One line of work is built by system integration of LLMs with various functioning agents where language acts as general interface (Wu et al., 2023; Gupta & Kembhavi, 2023; Yang et al., 2023b; Liang et al., 2023; Shen et al., 2023; Yang et al., 2023a; Surís et al., 2023), and remarkable success has been demonstrated in such plugin-style frameworks. Another line of work instead explores training LLMs to consume and understand multimodal inputs (Hao et al., 2022; Huang et al., 2023; Chen et al., 2023b) with parameter-efficient tuning (Hu et al., 2022; Alayrac et al., 2022; Li et al., 2023d; Zhang et al., 2023e; Zhu et al., 2023a; Ye et al., 2023) and instruction tuning (Xu et al., 2023b; Liu et al., 2023c; Dai et al., 2023a). More recently, some approaches have been developed towards visual-interactive multimodal comprehension by precise referring instruction tuning (Zhao et al., 2023a; Peng et al., 2023; Chen et al., 2023a; Zhang et al., 2023g). For cross-modal creation, early works generally tokenize the visual contents into discrete VQ codebooks (van den Oord et al., 2017; Wang et al., 2022; Sun et al., 2022; Lu et al., 2023; Diao et al., 2023; Yu et al., 2023a). Recent works instead explore incorporating MLLMs for image synthesis using text-to-image models such as Stable Diffusion, and the objective is to generate conditional embeddings that align pretrained CLIP text (i.e., CLIP) or CLIP variant embeddings (Koh et al., 2023; Ge et al., 2023; Sun et al., 2023a;b). 7 Conclusions How can the learning synergy between multimodal content understanding and creation emerge? In this paper, we present DreamLLM, a learning framework for developing MLLMs that not only comprehends but also creates multimodal content via diffusion models. Through score distillation of conditional-image synthesis distributions, we avoid the need for intermediate representation targets that may bring information loss. The employment of interleaved documents further enriches the multimodal distributions, fostering the learning of multimodal encoding and decoding. Our extensive empirical evaluations across diverse VL benchmarks demonstrate the effectiveness of DreamLLM and the emerging learning synergy between multimodal content understanding and creation. Besides, this work initiates the first step towards free-form interleaved content creation. As a general learning framework, we hope it will spur further research in the multimodal machine learning field. ACKNOWLEDGEMENT This research is supported by the National Natural Science Foundation of China (20211710187). REFERENCES Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. CM3: A causal masked multimodal model of the internet. 
CoRR, abs/2201.07520, 2022. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2022. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In Int. Conf. Comput. Vis. (ICCV), 2015. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Yitzhak Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. CoRR, abs/2308.01390, 2023. Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. In Int. Conf. Learn. Represent. (ICLR), 2022. James Betker, Goh Gabriel, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. Improving image generation with better captions. 2023. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: reasoning about physical commonsense in natural language. In AAAI Conf. Artif. Intell. (AAAI), 2020. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, and et al. On the opportunities and risks of foundation models. CoRR, abs/2108.07258, 2021. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2020. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Eur. Conf. Comput. Vis. (ECCV), 2020. 
Huiwen Chang, Han Zhang, Jarred Barber, Aaron Maschinot, José Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, and Dilip Krishnan. Muse: Text-to-image generation via masked generative transformers. In Int. Conf. Mach. Learn. (ICML), 2023. Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. ACM Trans. Graph., 42(4):148:1–148:10, 2023. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm’s referential dialogue magic. CoRR, abs/2306.15195, 2023a.
kxpswbhr1r
The paper could benefit from a discussion on the practical implications of the findings. For instance, how can the insights on convergence be used to improve training methodologies or model selection in real-world applications?
In-Context Convergence of Transformers

Anonymous authors
Paper under double-blind review

Abstract

Transformers have recently revolutionized many domains in modern machine learning and one salient discovery is their remarkable in-context learning capability, where models can solve an unseen task by utilizing task-specific prompts without further parameter fine-tuning. This has also inspired recent theoretical studies aiming to understand the in-context learning mechanism of transformers, which, however, focused only on linear transformers. In this work, we take the first step toward studying the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent in order to in-context learn linear function classes. We consider a structured data model, where each token is randomly sampled from a set of feature vectors in either a balanced or imbalanced fashion. For data with balanced features, we establish the finite-time convergence guarantee with near-zero prediction error by navigating our analysis over two phases of the training dynamics of the attention map. More notably, for data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process, where the transformer first converges to a near-zero prediction error for the query tokens of dominant features, and then converges later to a near-zero prediction error for the query tokens of under-represented features, respectively via one and four training phases. Our proof features new techniques for analyzing the competing strengths of two types of attention weights, the change of which determines different training phases.

1 Introduction

Transformers (Vaswani et al., 2017) have emerged as the foundational architectures in various domains, including natural language processing (Devlin et al., 2018; OpenAI, 2023), computer vision (Dosovitskiy et al., 2020; He et al., 2022), reinforcement learning (Chen et al., 2021; Janner et al., 2021), and so on. Recently, large language models (LLMs) based on transformers have exhibited remarkable in-context learning capabilities, where the model can solve a new task solely through inference based on prompts of the task without further fine-tuning (Brown et al., 2020). Such striking abilities have inspired a recent line of research to understand the underlying mechanisms of in-context learning from various aspects (Garg et al., 2022; Min et al., 2022; Wei et al., 2023; Von Oswald et al., 2023; Xie et al., 2021). Among these studies, the pioneering work of Garg et al. (2022) empirically studied in-context learning via an interpretable framework, highlighting the capacity of transformers to acquire in-context knowledge of linear and some more complex function classes. Specifically, they showed that an in-context trained model over a function class $\mathcal{F}$ can accurately predict the function value $f(x_{\text{query}})$ of a new query token $x_{\text{query}}$ for most $f \in \mathcal{F}$ by using a prompt sequence including in-context input-label pairs along with the query token $(x_1, f(x_1), \ldots, x_N, f(x_N), x_{\text{query}})$. Building on this theoretically amenable setting, many follow-up works explored theoretical properties of in-context learning of transformers from different perspectives such as expressive power (Akyürek et al., 2022; Giannou et al., 2023), generalization (Li et al., 2023b), internal mechanisms (Von Oswald et al., 2023; Bai et al., 2023), etc.
Specifically, a few recent studies (Zhang et al., 2023a; Mahankali et al., 2023; Ahn et al., 2023) made interesting progress towards understanding the training dynamics of transformers for in-context learning\(^1\). However, those studies focused only on ‘linear’ transformers, and do not capture the crucial role of the ‘softmax’ mapping, which lies in the core design of transformers to be advantageous over other network architectures. Therefore, the following fundamental problem still remains largely open:

How do softmax-based transformers trained via gradient descent learn in-context?

\(^1\)More detailed discussions of related work can be found in Appendix B.

This paper takes the first step toward addressing this problem by investigating the learning dynamics of a single-layer transformer with softmax attention trained by gradient descent (GD) for in-context learning. We focus on the setting with training prompts generated from linear regression models as in Garg et al. (2022), and with structured input data, where each token is randomly selected from a set of feature vectors \( \{v_k\}_{k=1}^K \) with probabilities \( \{p_k\}_{k=1}^K \), respectively. We then train the transformer over the squared loss of prediction error using GD. We study the training dynamics under both balanced and imbalanced feature distributions, and characterize the in-context learning ability for both settings. We highlight our contributions as follows.

Our Contributions.

- We first establish the convergence guarantee for the setting with balanced features, where \( p_k = \Theta\left(\frac{1}{K}\right) \) for each \( k \in [K] \), and characterize the training evolution of the attention map into a two-phase dynamic process. In the first phase, for each \( k \in [K] \), the parameters of the self-attention module undergo fast growth, rapidly aligning the query token featuring \( v_k \) with input tokens featuring \( v_k \) while disregarding other feature directions. In the second phase, the loss of prediction error converges to a near-minimum value.
- We then prove the convergence for the setting with imbalanced features, where one feature dominates, say \( v_1 \) with \( p_1 = \Theta(1) \), while others are under-represented with \( p_k = \Theta\left(\frac{1}{K}\right) \) for \( k > 1 \), which serves as a remarkable showcase of the in-context learning capabilities of transformers. We demonstrate that the learning dynamics display a stage-wise convergence process. Initially, the transformer quickly attains near-zero prediction error for the query tokens of dominant features, and then converges to near-zero prediction error for the query tokens of under-represented features, irrespective of their infrequent occurrence, through one and four phases, respectively.
- Our analysis hinges on a novel proof technique that characterizes the softmax attention dynamics via the interplay between two types of bilinear attention weights: ‘weight of query token and its target feature’ and ‘weight of query token and off-target features’. Which weight plays a dominant role in the attention dynamics can change over the learning process, resulting in different training phases. Our analysis tools may be of independent interest and hold the potential to study various other problems involving transformer architectures.

Notations. We let \( [K] := \{1, 2, \ldots, K\} \). We use capital letters for matrices (e.g., \( A \)), and lowercase letters for vectors and scalars (e.g., \( a \)).
For a general matrix \( A \), we use \( A_i \) to represent the \( i \)-th column of \( A \) and \( A_{i:j} \) to indicate a collection of columns spanning from \( i \) to \( j \). We use \( 1\{\cdot\} \) to denote the indicator function. We use \( O(K) \), \( \Omega(K) \), and \( \Theta(K) \) to omit universal constants concerning the variable \( K \). We use \( \text{poly}(K) \) and \( \text{polylog}(K) \) to denote large constant-degree polynomials of \( K \) and \( \log(K) \), respectively. Given \( h(x) \leq 0 \) and \( g(x) > 0 \), we denote \( h(x) = -\Omega(g(x)) \) if there exists some constant \( C_1 > 0 \) and \( a_1 \), s.t. \( |h(x)| \geq C_1 g(x) \) for all \( x \geq a_1 \); \( h(x) = -O(g(x)) \) if there exist some constant \( C_2 > 0 \) and \( a_2 \), s.t. \( |h(x)| \leq C_2 g(x) \) for all \( x \geq a_2 \); \( h(x) = \Theta(g(x)) \) if there exists some constant \( C_3, C_4 > 0 \) and \( a_3 \), s.t. \( C_3 g(x) \leq |h(x)| \leq C_4 g(x) \) for all \( x \geq a_3 \). 2 Problem Setup In this section, we present our problem formulations, including the in-context learning framework, one-layer transformer architecture, and the training settings we consider in this paper. 2.1 In-Context Learning Framework We adopt the well-established in-context learning framework as given in Garg et al. (2022). The objective is to enable the training of models capable of in-context learning within a specified function class \( F \), where the functions and input data are sampled respectively by the distributions \( D_F \) and \( D_X \). Specifically, the process is initiated by generating random training prompts as follows. For each prompt, we first sample a random function \( f \) from the class according to the distribution \( D_F \). We then create a set of random inputs \( x_1, \ldots, x_N \) and query \( x_{\text{query}} \), all drawn independently by \( D_X \). Finally, we compute the value of function \( f \) on these inputs to construct the prompt \( P = (x_1, y_1, \ldots, x_N, y_N, x_{\text{query}}) \), where \( y_i = f(x_i) \). The goal for an in-context learner is to use the prompt to form a prediction \( \hat{y}(x_{\text{query}}) \) for the query such that \( \hat{y}(x_{\text{query}}) \approx f(x_{\text{query}}) \). Task Distribution. In this work, our focus is on the task of linear functions defined as \( \mathcal{F} = \{ f : \mathcal{X} \rightarrow \mathbb{R} | f(x) = \langle w, x \rangle \text{ with } w \in \mathbb{R}^d, \mathcal{X} \subset \mathbb{R}^d \} \), which is widely adopted in recent studies for in-context learning (Ahn et al., 2023; Zhang et al., 2023a; Mahankali et al., 2023). For each prompt, the task-specific weight \( w \) is independently drawn from a task distribution \( D_\Omega \) with zero mean and identity covariance matrix \( I_{d \times d} \). Data Distribution. To specify the data distribution \( D_X \), we consider a set of distinct features \( \{ v_k \in \mathbb{R}^d, k = 1, \ldots, K \} \), where all features are orthonormal vectors. Each data point \( x \) is sampled from the feature set with the probability \( p_k \) for sampling \( v_k \), where \( p_k \in (0, 1) \) for \( k \in [K] \) and \( \sum_{k \in [K]} p_k = 1 \). Such a data model has been widely employed in the theoretical studies of deep learning, including ensemble methods (Allen-Zhu & Li, 2020), multi-modal learning (Huang et al., 2022), vision transformers (Li et al., 2023a), etc. 
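To make the above data model concrete, the following is a small illustrative sketch (ours, not code from the paper) of how a single training prompt could be generated: the features are taken to be the first $K$ standard basis vectors of $\mathbb{R}^d$ (one valid choice of orthonormal features), the task weight $w$ is drawn from a standard Gaussian as one instance of a zero-mean, identity-covariance task distribution $D_\Omega$, and the $N$ input tokens and the query token are sampled from $\{v_k\}_{k=1}^K$ with probabilities $\{p_k\}_{k=1}^K$.

```python
import numpy as np

def sample_prompt(d, K, N, p, rng):
    """Sample one in-context linear-regression prompt under the structured data model.

    Features are the first K standard basis vectors of R^d (one valid choice of
    orthonormal features); the task weight w has zero mean and identity covariance.
    """
    V = np.eye(d)[:K]                      # feature dictionary {v_1, ..., v_K}, rows in R^d
    w = rng.standard_normal(d)             # task weight w ~ D_Omega
    idx = rng.choice(K, size=N + 1, p=p)   # sample N input tokens plus the query token
    X = V[idx]                             # tokens x_1, ..., x_N, x_query (rows)
    y = X @ w                              # labels y_i = <w, x_i>
    return X[:-1], y[:-1], X[-1], y[-1]    # inputs, labels, query token, held-out query label

rng = np.random.default_rng(0)
d, K, N = 16, 8, 200

# Balanced features: p_k = 1/K for all k.
p_balanced = np.full(K, 1.0 / K)
X, y, x_q, y_q = sample_prompt(d, K, N, p_balanced, rng)

# Imbalanced features: v_1 dominates with p_1 = Theta(1), the rest are Theta(1/K).
p_imbalanced = np.concatenate(([0.5], np.full(K - 1, 0.5 / (K - 1))))
```

The same helper produces prompts for both the balanced and the imbalanced regimes analyzed later, simply by changing the sampling probabilities.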
2.2 One-Layer Transformer Architecture To present the one-layer transformer model we consider in this work, we first introduce the self-attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017) for the transformer model. Definition 2.1 (Self-Attention (SA) Mechanism). A self-attention layer (Bahdanau et al., 2014; Vaswani et al., 2017) in the single-head case with width \( d_e \) consists of the following components: a key matrix \( W_{\text{Key}} \in \mathbb{R}^{d_e \times d_e} \), a query matrix \( W_Q \in \mathbb{R}^{d_e \times d_e} \), and a value matrix \( W_V \in \mathbb{R}^{d_e \times d_e} \). Given a prompt \( P \) of length \( N \), let \( E \in \mathbb{R}^{d_e \times d_N} \) be an embedding matrix of the prompt \( P \), and the self-attention mechanism will output: \[ F_{\text{SA}}(E; W_{\text{Key}}, W_Q, W_V) = W_V E \cdot \text{softmax}\left((W_{\text{Key}} E)^T W_Q E\right), \] where the softmax(\(\cdot\)) function is applied column-wisely, i.e., for a vector input \( z \), the \( i \)-th entry of softmax(\(z\)) is given by \( e^{z_i} / \sum_s e^{z_s} \). Embeddings. For in-context learning, given a prompt \( P = (x_1, y_1, \ldots, x_N, y_N, x_{\text{query}}) \), a natural token embedding is to stack \( x_i \in \mathbb{R}^d \) and \( y_i \) into the first \( N \) columns. The final column consists of \( x_{\text{query}} \in \mathbb{R}^d \) and 0. Formally, \[ E = E(P) = \begin{pmatrix} x_1 & x_2 & \cdots & x_N & x_{\text{query}} \\ y_1 & y_2 & \cdots & y_N & 0 \end{pmatrix} \in \mathbb{R}^{(d+1) \times (N+1)}. \] Therefore, \( d_N = N + 1 \) and \( d_e = d + 1 \) in the above embedding. Let us further denote the first \( d \) rows of \( E \) as \( E^x(P) \in \mathbb{R}^{d \times (N+1)} \) and the last row of \( E \) as \( E^y(P) \in \mathbb{R}^{1 \times (N+1)} \). Then we write \( E(P) = \{ E^x(P), E^y(P) \} \). We omit the dependency on \( P \) for \( E(P), E^x(P) \) and \( E^y(P) \) when there is no ambiguity. We next instantiate additional operations and certain parameter settings based on the general SA mechanism (1) for our one-layer transformer model to mitigate unnecessary complications in theoretical analysis while keeping the most critical component of the SA mechanism. Masking. Let \( M(\cdot) \) denote the masking operation, which masks (removes) the last column of the entry matrix. In other words, for a given matrix \( A \in \mathbb{R}^{(d+1) \times (N+1)} \), \( M(A) \) yields \( A_{1:N} \in \mathbb{R}^{(d+1) \times N} \). We will first mask the embedding matrix \( E \) before its multiplication with the key matrix \( W_{\text{Key}} \) and the value matrix \( W_V \), which results in \( W_{\text{Key}} M(E) \) and \( W_V M(E) \), in order to prevent the query token from attending to itself. This approach has been commonly taken in previous works (Tian et al., 2023; Mahankali et al., 2023; Von Oswald et al., 2023; Kitaev et al., 2020). Reparameterization. We consolidate the query and key matrices into one matrix denoted as \( W_{KQ} \in \mathbb{R}^{(d+1) \times (d+1)} \), often taken in recent theoretical frameworks (Zhang et al., 2023a; Jelassi et al., 2022; Tian et al., 2023). Furthermore, we consider \( W_V \) and \( W_{KQ} \) in the following specific forms: \[ W_V = \begin{pmatrix} 0_{d \times d} & 0_d \\ 0_d^T & \nu \end{pmatrix}, \quad W_{KQ} = \begin{pmatrix} Q & 0_d \\ 0_d^T & 0 \end{pmatrix}, \] where \( \nu \in \mathbb{R} \) and \( Q \in \mathbb{R}^{d \times d} \). 
The above structures of \( W_V \) and \( W_{KQ} \) are inspired by the recent study (Zhang et al., 2023a), which showed that such structured matrices achieve the global optimum in the linear SA model. Furthermore, we set \( \nu = 1 \) (where \( \nu \) is the only parameter in \( W_V \)) and do not update it during the training. The reason is twofold: 1) this aligns with the common practice in theoretical studies of deep learning, where the last linear layer is often kept fixed to focus on the analysis of hidden layers. Our objective remains highly nonconvex and challenging even with a fixed \( \nu \); and 2) the form of the global optimum outlined in recent work (Zhang et al., 2023a) suggests that for linear SA, the optimal solution for \( \nu \) serves as a scaling factor to normalize the output of linear attention. In our case, the output of softmax attention is already inherently normalized.

**Remark 1** (Nearly no loss of optimality). Despite the specific form of \( \{W_V, W_{KQ}\} \) that we take, the minimum of the loss function \( L^* = \Theta(e^{-\text{poly}(K)}) \) (as shown in Theorem 3.1) implies that such a specific form at most incurs an error of \( \Theta(e^{-\text{poly}(K)}) \) that vanishes exponentially with \( K \), compared to the minimum loss over the general parameter space \( \{W_V, W_{\text{Key}}, W_Q\} \). Therefore, for our nonlinear softmax SA, such a specific parameterization does not lose optimality.

With the aforementioned masking operations and reparameterization, the overall transformer model consisting of a single SA layer can be recast in the parameterization \( \theta = Q \) (with \( \nu = 1 \) fixed) as follows:
\[ F_{SA}(E; \theta) = M(E^y) \cdot \text{softmax}\left(M(E^x)^\top Q E^x\right). \]
Such a reparameterization separates the label \( E^y \) from the softmax operator while maintaining simultaneous processing of both input \( E^x \) and label \( E^y \) information. The prediction for the token \( x_{\text{query}} \) will be the last entry of \( F_{SA} \), namely,
\[ \hat{y}_{\text{query}} = \hat{y}_{\text{query}}(E; \theta) = [F_{SA}(E; \theta)]_{(N+1)}. \]
Henceforth, we may omit the reference to \( E \) and \( \theta \), and use \( \hat{y}_{\text{query}} \) if it is not ambiguous.

### 2.3 Training Settings

**Loss Function.** To train the transformer model \( F_{SA} \) over linear regression tasks, we minimize the following squared loss of the prediction error, which has also been taken by (Zhang et al., 2023a; Ahn et al., 2023):
\[ L(\theta) = \frac{1}{2} \mathbb{E}_{w \sim D_\Omega,\ \{x_i\}_{i=1}^N \cup \{x_{\text{query}}\} \sim D_X^{N+1}} \left[ (\hat{y}_{\text{query}} - \langle w, x_{\text{query}} \rangle)^2 \right], \] (4)
where the expectation is taken with respect to the prompt \( P \) including input and query tokens \( \{x_i\}_{i=1}^N \cup \{x_{\text{query}}\} \) and the weight vector \( w \). In the following, we omit subscripts of the expectation to simplify the notation.

**Training Algorithm.** The above learning objective in eq. (4) is minimized via GD with the learning rate \( \eta \). At \( t = 0 \), we initialize \( Q^{(0)} \) as the zero matrix \( 0_{d \times d} \). The parameter is updated as follows:
\[ \theta^{(t+1)} = \theta^{(t)} - \eta \nabla_\theta L(\theta^{(t)}). \]
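For intuition about the training pipeline above, here is a minimal numerical sketch (ours, not the paper's implementation) of the reparameterized model and one GD step: the prediction follows from the recast model and the masking, i.e., $\hat{y}_{\text{query}} = \sum_{i\in[N]} \text{softmax}_i\big(x_i^\top Q\, x_{\text{query}}\big)\, y_i$; the population loss in eq. (4) is replaced by a Monte Carlo average over sampled prompts; the gradient of the per-prompt squared loss with respect to $Q$ is derived by hand; and the `sample_prompt` helper from the data-model sketch above is assumed to be in scope.

```python
import numpy as np

def predict(Q, X, y, x_query):
    """Prediction of the masked, reparameterized one-layer softmax attention:
    y_hat = sum_i softmax_i(x_i^T Q x_query) * y_i over the N input tokens."""
    scores = X @ Q @ x_query            # s_i = x_i^T Q x_query, shape (N,)
    scores -= scores.max()              # numerical stabilization
    attn = np.exp(scores)
    attn /= attn.sum()                  # attention the query places on each input token
    return attn @ y, attn

def empirical_loss_and_grad(Q, prompts):
    """Monte Carlo estimate of the loss in eq. (4) and its gradient w.r.t. Q."""
    loss, grad = 0.0, np.zeros_like(Q)
    for X, y, x_q, y_q in prompts:
        pred, attn = predict(Q, X, y, x_q)
        err = pred - y_q
        loss += 0.5 * err ** 2
        # d loss / d s_i = err * attn_i * (y_i - pred);  d s_i / d Q = x_i x_query^T
        g = err * attn * (y - pred)
        grad += np.outer(X.T @ g, x_q)
    n = len(prompts)
    return loss / n, grad / n

# One GD step theta^{(t+1)} = theta^{(t)} - eta * grad (here theta = Q, initialized at zero).
rng = np.random.default_rng(1)
d, K, N, eta = 16, 8, 200, 0.5
p = np.full(K, 1.0 / K)
Q = np.zeros((d, d))
batch = [sample_prompt(d, K, N, p, rng) for _ in range(256)]
loss, grad = empirical_loss_and_grad(Q, batch)
Q -= eta * grad
```

Tracking how the attention that the query places on tokens sharing its own feature evolves across such GD steps recovers, empirically, the quantities analyzed in the next section.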
### 3 Main Results

In this section, we characterize the convergence of in-context learning by GD for the settings with balanced and imbalanced features, respectively. To measure the degree to which the query token \( x_{\text{query}} \) attends to a specific input token and to a certain class of features, we define the following notions of attention scores.

**Definition 3.1** (Attention Score). Given a prompt \( P = (x_1, y_1, \cdots, x_N, y_N, x_{\text{query}}) \) and its corresponding embedding \( E \), where \( \{x_i \in \mathbb{R}^d\}_{i=1}^N, x_{\text{query}} \) are drawn independently from \( D_X \), then at time \( t \), for \( F_{SA} \) with parameter \( \theta^{(t)} \), we define the attention score as follows.
1. Given \( i \in [N] \), the attention score for the \( i \)-th token \( x_i \) is
\[ \text{attn}_i(\theta^{(t)}; E) := \left[ \text{softmax}\left(M(E^x)^\top Q^{(t)} E^x\right) \right]_{i,\,N+1} = \frac{e^{(E^x_i)^\top Q^{(t)} E^x_{N+1}}}{\sum_{j \in [N]} e^{(E^x_j)^\top Q^{(t)} E^x_{N+1}}}. \]
2. For \( k \in [K] \), denote \( V_k(P) \subset [N] \) as the index set for input tokens, such that \( x_i = v_k \) for \( i \in V_k(P) \). Then the attention score for the \( k \)-th feature is given by
\[ \text{Attn}_k(\theta^{(t)}; E) := \sum_{i \in V_k(P)} \text{attn}_i(\theta^{(t)}; E). \]

For simplicity, we represent $\text{attn}_i(\theta^{(t)}; E)$ and $\text{Attn}_k(\theta^{(t)}; E)$ as $\text{attn}^{(t)}_i$ and $\text{Attn}^{(t)}_k$, respectively, and denote $V_k(P)$ as $V_k$. We also rewrite the prediction output at time $t$ as follows:
$$\hat{y}^{(t)}_{\text{query}} = \sum_{i \in [N]} \text{attn}^{(t)}_i y_i = \sum_{k \in [K]} \text{Attn}^{(t)}_k \langle w, v_k \rangle.$$ (5)

### 3.1 In-Context Learning with Balanced Features

In this subsection, we study in-context learning with balanced features, where the probabilities of sampling all $K$ features are of the same order, i.e., $p_k = \Theta\left(\frac{1}{K}\right)$ for each $k \in [K]$. In such a setting, each feature appears equally likely in the prompt, ensuring their equal recognition. The following theorem characterizes the convergence of GD.

**Theorem 3.1 (In-context Learning with Balanced Features).** Suppose $p_k = \Theta\left(\frac{1}{K}\right)$ for $k \in [K]$. For any $0 < \epsilon < 1$, suppose $N \geq \text{poly}(K)$ and $\text{polylog}(K) \gg \log\left(\frac{1}{\epsilon}\right)$. We apply GD to train the loss function given in eq. (4). Then with at most $T^* = O\left(\frac{\log(K)K^2}{\eta} + \frac{K \log\left(K \epsilon^{-\frac{1}{2}}\right)}{\epsilon \eta}\right)$ iterations, we have
1. The loss converges: $L(\theta^{(T^*)}) - L^* \leq \epsilon$, where $L^* = \Theta(e^{-\text{poly}(K)})$ is the global minimum of the population loss in eq. (4).
2. Attention score concentrates: if $x_{\text{query}} = v_k$, then with probability at least $1 - e^{-\Omega(\text{poly}(K))}$, the one-layer transformer nearly “pays all attention” to input tokens featuring $v_k$, i.e., $(1 - \text{Attn}^{(T^*)}_k)^2 \leq O(\epsilon)$.

Theorem 3.1 shows that training a one-layer transformer with softmax attention can converge to the minimum of the objective loss in the reparameterization space via GD, with polynomial time efficiency with respect to $K$ and $\frac{1}{\epsilon}$. The learning dynamics for such a case with balanced features exhibit a two-phase behavior. (i) The first term of $T^*$ captures the duration of phase I, where the network actively aligns the query token (suppose $x_{\text{query}} = v_k$) with those tokens featuring $v_k$ itself, thus substantially increasing $\text{Attn}^{(t)}_k$ to a constant level.
(ii) The second term captures the duration of phase II, where the loss converges to a near-zero prediction error.

**In-context Learning Ability.** For the obtained model with $\theta^{(T^*)}$, let us evaluate a test prompt associated with a linear task $w$, which might not be drawn from the support of $D_\Omega$ (i.e., $w$ may not be present in the training process), but has its data drawn from $D_X$. Suppose the query token is $x_{\text{query}} = v_k$. Following from the attention score concentration principle in Theorem 3.1, eq. (5) yields that, with high probability (over the randomness of the first $N$ input tokens in the test prompt), the query prediction is given by
$$\hat{y}^{(T^*)}_{\text{query}} = \text{Attn}^{(T^*)}_k \langle w, v_k \rangle + \sum_{m \neq k} \text{Attn}^{(T^*)}_m \langle w, v_m \rangle \approx \langle w, v_k \rangle.$$
This implies that the in-context learned model can still well approximate the test prompt even if the task model $w$ does not lie in the support of the training task distribution $D_\Omega$ and was unseen during training. This showcases the remarkable in-context learning capability of trained transformers.

### 3.2 In-Context Learning with Imbalanced Features

In real-world datasets, skewed distributions are common, where a few classes or features dominate in data while others are under-represented. It is typically difficult to train models to perform well on features that have limited representation in those datasets (Cui et al., 2019; Chou et al., 2020). In this subsection, we investigate the setting with imbalanced features, where the dominant feature $v_1$ is sampled with the probability $p_1 = \Theta(1)$, and all other features are sampled with $p_k = \Theta\left(\frac{1}{K}\right)$ for $2 \leq k \leq K$. We will show that, somewhat remarkably, in-context learning is less sensitive to imbalanced features and can achieve a near-zero error even when the query token takes an under-represented feature. To investigate the performance for the imbalanced scenario, we focus on the following prediction error for each feature $v_k$:
$$L_k(\theta) = \frac{1}{2} \mathbb{E} \left[ (\hat{y}_{\text{query}} - \langle w, x_{\text{query}} \rangle)^2 \mid x_{\text{query}} = v_k \right].$$ (6)
The following theorem characterizes the convergence of GD.

**Theorem 3.2 (In-context Learning with Imbalanced Features).** Suppose \( p_1 = \Theta(1) \) and \( p_k = \Theta\left(\frac{1}{K}\right) \) for \( 2 \leq k \leq K \). For any \( 0 < \epsilon < 1 \), suppose \( N \geq \text{poly}(K) \), and \( \text{polylog}(K) \gg \log\left(\frac{1}{\epsilon}\right) \). We apply GD to train the loss function given in eq. (4). Then the following results hold.
1. The prediction error for the dominant feature converges: for \( v_1 \), with at most \( T_1 = O\left(\frac{\log(\epsilon^{-\frac{1}{2}})}{\eta}\right) \) GD iterations, \( L_1(\theta^{(T_1)}) \leq L^*_1 + \epsilon \), where \( L^*_1 = \Theta(e^{-\text{poly}(K)}) \) is the global minimum of eq. (6) for \( k = 1 \);
2. The prediction error for the under-represented features converges: for \( v_k \) with \( 2 \leq k \leq K \), with at most \( T_k = O\left(\frac{\log(K)K^2}{\eta} + \frac{K \log(K\epsilon^{-\frac{1}{2}})}{\epsilon \eta}\right) \) GD iterations, \( L_k(\theta^{(T_k)}) \leq L^*_k + \epsilon \), where \( L^*_k = \Theta(e^{-\text{poly}(K)}) \) is the global minimum of eq. (6);
3.
Attention score concentrates: for each \( k \in [K] \), if the query token is \( v_k \), then after \( T_k \) iterations, with probability at least \( 1 - e^{-\Omega(\text{poly}(K))} \), the one-layer transformer nearly “pays all attention” to input tokens featuring \( v_k \): \( (1 - \text{Attn}_k^{(T_k)})^2 \leq O(\epsilon) \). Theorem 3.2 shows that the GD dynamics of the in-context training exhibit ‘stage-wise’ convergence. The trained transformer rapidly (within \( T_1 \)) converges to a model that achieves a near-zero prediction error \( L_1 \) for the dominant feature; and then takes a much longer time (up to \( T_k \gg T_1 \)) to converge to a model that attains a near-zero prediction error \( L_k \) for the under-represented features. Our analysis captures the later learning dynamics associated with the under-represented features into a four-phase behavior as further described in the subsequent section. Despite the longer convergence time it takes, in-context learning still achieves the same accurate prediction for under-represented features as that for the dominant feature. 4 OVERVIEW OF TRAINING PHASES In this section, we explain our key ideas for analyzing the in-context learning capabilities of transformers. We will first characterize the training process of the setting with imbalanced features for under-represented features in Section 4.1, which comprehensively exhibits four phases. Other scenarios take only one or two of those phases, which we will briefly describe in Section 4.2. The complete proofs of all the results are provided in the appendix. We will first provide the general training dynamics for the bilinear attention weights (defined in Definition 4.1 below), which is useful for analyzing all learning phases. These quantities are the key elements in the attention scores \( \text{attn}_i^{(t)} \) for \( 1 \leq i \leq N \), which play an important role in determining the prediction \( \hat{y}_{\text{query}}^{(t)} \). Hence, our analysis mainly tracks the training dynamics of those bilinear attention weights. Definition 4.1. (Bilinear Attention Weights) Given \( k, n \in [K] \), where \( k \neq n \), for \( t \geq 0 \), we define the bilinear attention weights as follows: \[ A_k^{(t)} := v_k^\top Q^{(t)} v_k, \quad B_{k,n}^{(t)} := v_n^\top Q^{(t)} v_k. \] By our initialization, we have \( A_k^{(0)} = B_{k,n}^{(0)} = 0 \). To further interpret these weights, suppose the query token corresponds to the feature \( v_k \). Then \( e^{A_k^{(t)}} \) serves as the (un-normalized) weight for the input token featuring \( v_k \), while \( e^{B_{k,n}^{(t)}} \) captures the weight for the input token featuring a different vector \( v_n \) with \( n \neq k \). Having a larger \( A_k^{(t)} \) compared to other \( B_{k,n}^{(t)} \) indicates a better capture of the target feature \( v_k \). As shown in eq. (5), this condition implies a higher ‘attention’ towards input tokens featuring \( v_k \), resulting in \( \hat{y}_{\text{query}}^{(t)} \approx \sum_{i \in V_k} \text{attn}_i^{(t)} y_i \approx \langle w, v_k \rangle \), where the prediction well approximates the ground truth. The following lemma provides the GD updates of the bilinear attention weights \( A_k^{(t)} \) and \( B_{k,n}^{(t)} \). Figure 1: Overview of the dynamics of attention scores and bilinear attention weights for under-represented features. Assume the query token is $v_k$ with $2 \leq k \leq K$. 
The top row depicts the trend of the attention score $\text{Attn}_m^{(t)}$ for each feature $v_m$, where a darker color corresponds to a higher score. The bottom row shows the interplay and leading effect among bilinear attention weights $A_k^{(t)}$, $B_{k,1}^{(t)}$, and $B_{k,n}^{(t)}$ (where $n \neq 1, k$) in different training phases. (a) Phase I: $B_{k,1}^{(t)}$ significantly decreases and the attention on tokens with the dominant feature $v_1$ is suppressed (Section 4.1.1); (b) Phase II: With the suppression of $\text{Attn}_1^{(t)}$, the decreasing rate for $B_{k,1}^{(t)}$ drops and the growth of $A_k^{(t)}$ becomes the leading influence (Section 4.1.2); (c) Phase III: $A_k^{(t)}$ rapidly grows and $\text{Attn}_k^{(t)}$ reaches $\Omega(1)$ (Section 4.1.3); (d) Phase IV: $\text{Attn}_k^{(t)}$ nearly grows to 1 and the prediction error converges to a global minimum (Section 4.1.4). **Lemma 4.1.** Let $t \geq 0$. For $k, n \in [K]$, where $k \neq n$, $A_k^{(t)}$ and $B_{k,n}^{(t)}$ satisfy: $$ \begin{align*} A_k^{(t+1)} &= A_k^{(t)} + \eta \alpha_k^{(t)}, \\ B_{k,n}^{(t+1)} &= B_{k,n}^{(t)} + \eta \beta_{k,n}^{(t)}, \\ \alpha_k^{(t)} &= \mathbb{E}\left[1\{x_{\text{query}} = v_k\} \cdot \text{Attn}_k^{(t)} \cdot \left(\sum_{m \neq k} \text{Attn}_m^{(t)} - (1 - \text{Attn}_k^{(t)})^2\right)\right], \\ \beta_{k,n}^{(t)} &= \mathbb{E}\left[1\{x_{\text{query}} = v_k\} \cdot \text{Attn}_n^{(t)} \cdot \left(\sum_{m \neq k} \text{Attn}_m^{(t)} - \text{Attn}_n^{(t)} - \text{Attn}_k^{(t)} (1 - \text{Attn}_k^{(t)})\right)\right]. \end{align*} $$ Lemma 4.1 shows that $A_k^{(t)}$ is monotonically increasing at any time since $\alpha_k^{(t)} \geq 0$, whereas the monotonicity does not always hold for $B_{k,n}^{(t)}$. Therefore, we need to analyze whether $B_{k,n}^{(t)}$ decreases and determine its rate of change compared to $A_k^{(t)}$. Such a comparison between $B_{k,n}^{(t)}$ and $A_k^{(t)}$ determines which bilinear weight plays a dominant role in the attention dynamics, and the change of the leading weight over the learning process results in different training phases. ### 4.1 Learning Process for Under-represented Features We consider the setting with imbalanced features and focus on the under-represented features. Given a prompt $P = (x_1, y_1, \cdots, x_N, y_N, x_{\text{query}})$, denote $P_{\text{input}}$ to be the collection of input tokens, i.e., $\{x_i\}_{i=1}^N$. Recall that $|V_k|$ is the number of input tokens featuring $v_k$. Based on our data generation setup, we can show that for imbalanced data, with high probability, $P_{\text{input}}$ belongs to $$ E_{\text{imbal}}^* := \left\{ P_{\text{input}} : |V_1| = \Theta(N), |V_k| = \Theta\left(\frac{N}{K}\right) \text{ for } 2 \leq k \leq K \right\}. $$ In the following, we focus on the event that $P_{\text{input}} \in E_{\text{imbal}}^*$ unless otherwise specified. We next characterize the learning process for under-represented features $v_k$ with $k > 1$ by four phases. An illustration of these four phases is provided in Figure 1. #### 4.1.1 Phase I: Decrease of Dominant Feature. Consider the query token featuring $v_k$ for some $k > 1$. At $t = 0$, $A_k^{(0)} = B_{k,n}^{(0)} = 0$, and hence $\text{attn}_i^{(0)} = \frac{1}{N}$ for $i \in [N]$ which implies that the transformer equally attends each input token. However, due to the imbalanced occurrence of features in $E_{\text{imbal}}^*$, the number of tokens featuring $v_1$ is much larger than others. 
Hence, \( \text{Attn}_1^{(0)} = \frac{|V_1|}{N} \geq \Omega(1) \) while \( \text{Attn}_m^{(0)} = \Theta\left(\frac{1}{K}\right) \) for \( m > 1 \). Therefore, by Lemma 4.1, we obtain
\[ \beta_{k,1}^{(0)} = \mathbb{E} \left[ 1\{x_{\text{query}} = v_k\} \text{Attn}_1^{(0)} \cdot \left( \sum_{m \neq k, 1} \text{Attn}_m^{(0)} - \text{Attn}_1^{(0)} (1 - \text{Attn}_1^{(0)}) - \text{Attn}_k^{(0)} (1 - \text{Attn}_k^{(0)}) \right) \right] \leq -\Omega\left(\frac{1}{K^2}\right), \]
whereas \( \alpha_k^{(0)}, |\beta_{k,n}^{(0)}| \approx \Theta\left(\frac{1}{K^2}\right) \) for \( n \neq k, 1 \). Therefore, \( B_{k,1}^{(t)} \) enjoys a much larger decreasing rate initially. It can be shown that the decrease of \( B_{k,1}^{(t)} \) will dominate for a certain time period that defines phase I. The following lemma summarizes our main result in this phase.

**Lemma 4.2 (Informal).** Under the same conditions as Theorem 3.2, given \( k > 1 \), there exists \( T_{1,k} = O\left(\frac{\log(K)^{1.98}}{\eta}\right) \), such that for all \( 0 \leq t \leq T_{1,k} \)
\[ \beta_{k,1}^{(t)} \leq -\Omega\left(\frac{1}{K^{1.98}}\right), \quad \alpha_k^{(t)} = \Theta\left(\frac{1}{K^2}\right), \quad |\beta_{k,n}^{(t)}| \leq O\left(\frac{\alpha_k^{(t)} + |\beta_{k,n}^{(t)}|}{K}\right) \quad \text{for all } n \neq k, 1, \]
\( B_{k,1}^{(T_{1,k}+1)} \leq -0.49 \log(K) \), while \( A_k^{(T_{1,k}+1)} \) and \( B_{k,n}^{(T_{1,k}+1)} \) for \( n \neq k, 1 \) remain close to zero.

During phase I, \( B_{k,1}^{(t)} \) significantly decreases, leading to a reduction in \( \text{Attn}_1^{(t)} \), whereas other \( \text{Attn}_n^{(t)} \) with \( n > 1 \) remain at the level of \( \Theta\left(\frac{1}{K}\right) \). By the end of this phase, \( (\text{Attn}_1^{(t)})^2 \) drops to \( O\left(\frac{1}{K^{0.98}}\right) \), resulting in a decrease in \( |\beta_{k,1}^{(t)}| \) as it approaches \( \alpha_k^{(t)} \). Phase II then begins.

### 4.1.2 Phase II: Switching of Leading Influence

Soon after entering this phase, the dominant role of \( B_{k,1}^{(t)} \) diminishes as \( |\beta_{k,1}^{(t)}| \) reaches the same order of magnitude as \( \alpha_k^{(t)} \). The following result captures the shift of the leading influence, where the growth of \( A_k^{(t)} \) takes dominance.

**Lemma 4.3 (Informal).** Under the same conditions as Theorem 3.2, given \( k > 1 \), there exists \( T_{2,k} = T_{1,k} + O\left(\frac{\log(K)K^2}{\eta}\right) \), such that at iteration \( t = T_{2,k} + 1 \), we have
\[ A_k^{(T_{2,k}+1)} \geq 0.5 \log(K), \quad B_{k,1}^{(T_{2,k}+1)} \in [-0.51 \log(K), -0.49 \log(K)] \]
and \( B_{k,n}^{(T_{2,k}+1)} \) for \( n \neq k, 1 \) remain close to zero.

Lemma 4.3 shows that by the end of phase II, \( A_k^{(t)} \) matches the magnitude of \( B_{k,1}^{(t)} \), and during phase II \( B_{k,1}^{(t)} \) changes only slightly from the end of phase I. This suggests that, at certain moments in this phase, \( A_k^{(t)} \) significantly increases and its growth becomes the dominant factor. We next provide some insights into the reasons behind this transition. Once \( B_{k,1}^{(t)} \) decreases to \(-0.5 \log(K)\), we observe that \( |\beta_{k,1}^{(t)}| \approx \alpha_k^{(t)} = \Theta\left(\frac{1}{K^2}\right) \). After this point, it becomes challenging for \( B_{k,1}^{(t)} \) to decrease significantly compared to the increase in \( A_k^{(t)} \). To illustrate, let us suppose a minimal decrease of \( B_{k,1}^{(t)} \) by an amount of \( 0.01 \log(K) \).
This would yield that \( \text{Attn}_1^{(t)} \leq O\left(\frac{1}{K^{0.98}}\right) \) and \( \beta_{k,1}^{(t)} \leq O\left(\frac{1}{K^{0.98}}\right) \), while \( \text{Attn}_k^{(t)} \geq \Omega\left(\frac{1}{K}\right) \) and \( \alpha_k^{(t)} \geq \Omega\left(\frac{1}{K^2}\right) \), establishing a situation where \( \alpha_k^{(t)} \gg \beta_{k,1}^{(t)} \). Such a discrepancy leads to the switching of the dominant effect. ### 4.1.3 Phase III: Growth of Target Feature After a transition phase, we observe that \( A_k^{(t)} \) enjoys a larger gradient \( \alpha_k^{(t)} \approx \Theta\left(\frac{1}{K^{1.5}}\right) \) compared to \( |\beta_{k,1}^{(t)}| \leq O\left(\frac{1}{K^{1.98}}\right) \) and \( |\beta_{k,n}^{(t)}| \leq O\left(\frac{1}{K^2}\right) \) with \( n \neq k, 1 \). This gap between \( \alpha_k^{(t)} \) and \( \beta_{k,n}^{(t)} \) remains over the period, and the gradient \( \alpha_k^{(t)} \) continues to grow, driving the rapid growth of \( A_k^{(t)} \) with \( B_{k,n}^{(t)} \) being relatively unchanged. The following lemma summarizes our main results in this phase. **Lemma 4.4 (Informal).** Under the same conditions as Theorem 3.2, given \( k > 1 \), there exists \( T_{3,k} = O\left(\frac{\log(K)K^{1.5}}{\eta}\right) \), such that for all \( T_{2,k} < t \leq T_{3,k} \) \[ \alpha_k^{(t)} \geq \Omega\left(\frac{1}{K^{1.5}}\right), \quad \beta_{k,1}^{(t)} \in \left[-O\left(\frac{\alpha_k^{(t)}}{K^{0.48}}\right), -\Omega\left(\frac{1}{K^{0.48}}\right)\right], \quad |\beta_{k,n}^{(t)}| \leq O\left(\frac{\alpha_k^{(t)} + |\beta_{k,n}^{(t)}|}{K}\right) \quad \text{with } n \neq k, 1. \] At time $t = T_{3,k} + 1$, we have $A_k^{(T_{3,k}+1)} \geq \log(K)$. Lemma 4.4 follows because the continuous growth of $\alpha_k^{(t)}$ is mainly driven by $\text{Attn}_k^{(t)}$, where $1 - \text{Attn}_k^{(t)}$ remains at the constant order. However, as $A_k^{(t)}$ reaches $\log(K)$, $\text{Attn}_k^{(t)}$ is above $\Omega(1)$, necessitating a more detailed analysis to control $\alpha_k^{(t)}$, which starts the final phase. ### 4.1.4 Phase IV: Convergence After learning the target feature $v_k$ at a certain level, the prediction error converges. We characterize this in the following lemma, where we establish a connection between $\alpha_k^{(t)}$ and the prediction error via analyzing the change of $1 - \text{Attn}_k^{(t)}$ that diminishes during this phase. **Lemma 4.5 (Informal).** Under the same conditions as Theorem 3.2, given $0 < \epsilon < 1$, for each $k > 1$, there exists $T_{4,k} = T_{3,k} + O\left(\frac{K \log(K \epsilon^{-\frac{1}{2}})}{\eta \epsilon}\right)$, such that for all $T_{3,k} < t \leq T_{4,k}$ $$\alpha_k^{(t)} \geq \Omega\left(\frac{\epsilon}{K}\right), \quad \beta_{k,n}^{(t)} \in [-O\left(\frac{\alpha_k^{(t)}}{K^{0.49}}\right), 0], \quad \beta_{k,n}^{(t)} \in [-O\left(\frac{\alpha_k^{(t)}}{K}\right), 0] \text{ with } n \neq k, 1.$$ At time $t = T_{4,k} + 1$, we have $\mathcal{L}_k(\theta^{(T_{4,k}+1)}) - \mathcal{L}_k^* < \epsilon$ and $(1 - \text{Attn}_k^{(t)})^2 \leq O(\epsilon)$, if $x_{\text{query}} = v_k$ and $P_{\text{input}} \in \mathcal{E}_{\text{imbalance}}^*$. The convergence result for $k > 1$ stated in Theorem 3.2 directly follows by choosing $T_k^* = T_{4,k} + 1$. ### 4.2 Training Dynamics of Other Settings We next describe the training dynamics of other settings, which take the phases similar to those discussed in Section 4.1. #### Imbalanced Setting for the Dominant Feature. 
For the dominant feature $v_1$ in the imbalanced setting, since the overall attention $\text{Attn}_1^{(0)}$ to the target feature already reaches $\Omega(1)$ due to the abundance of tokens featuring $v_1$ in $\mathcal{E}_{\text{imbalance}}^*$, the training directly enters the convergence stage, as summarized in the following lemma.

**Lemma 4.6 (Informal).** Under the same conditions as Theorem 3.2, there exists $T_1 = O\left(\frac{\log(\epsilon^{-\frac{1}{2}})}{\eta \epsilon}\right)$, such that for all $t \leq T_1$
$$\alpha_1^{(t)} \geq \Omega(\epsilon), \quad \beta_{1,n}^{(t)} \in \left[-O\left(\frac{\alpha_1^{(t)}}{K}\right), 0\right] \text{ with } n > 1.$$
Further, $\mathcal{L}_1(\theta^{(T_1+1)}) - \mathcal{L}_1^* < \epsilon$, and $(1 - \text{Attn}_1^{(T_1+1)})^2 \leq O(\epsilon)$ if $x_{\text{query}} = v_1$ and $P_{\text{input}} \in \mathcal{E}_{\text{imbalance}}^*$.

#### Balanced Scenarios.

Similarly to the imbalanced setting, we can show that for balanced data, with high probability, $P_{\text{input}}$ belongs to $\mathcal{E}_{\text{balance}}^* := \{P_{\text{input}} : |V_k| = \Theta\left(\frac{N}{K}\right) \text{ for all } k \in [K]\}$. At initialization, the transformer uniformly assigns attention to each token, i.e., $\text{attn}_i^{(0)} = \frac{1}{N}$ for $i \in [N]$. Unlike the imbalanced case, here, due to $P_{\text{input}} \in \mathcal{E}_{\text{balance}}^*$, we have that $\text{Attn}_m^{(0)} = \Theta\left(\frac{1}{K}\right)$ for $m \in [K]$, indicating nearly equal attention to each feature. Consequently, by Lemma 4.1, we observe a significantly larger gradient in $A_k^{(t)}$ at the outset, with $\alpha_k^{(0)} \approx \Theta\left(\frac{1}{K^2}\right)$, compared to $|\beta_{k,n}^{(0)}| \approx \Theta\left(\frac{1}{K^3}\right)$ for $n \neq k$. This behavior mirrors the observations from phase III for under-represented features, allowing us to directly generalize the analysis.

### 5 Conclusions

In this work, we investigated the training dynamics of a one-layer transformer with softmax attention trained by GD for in-context learning. We analyzed two settings with balanced and imbalanced features, respectively, and proved the guaranteed convergence to a vanishing in-context prediction error by detailing the evolution of attention dynamics for both settings. Interestingly, we characterized a four-phase behavior for the imbalanced setting that sheds light on the intricate attention dynamics between dominant and target under-represented features during training. To our knowledge, this is the first work that rigorously analyzed the softmax attention dynamics for in-context learning. Our approach features novel ideas for phase decomposition based on the changes of the dominant role between two types of bilinear attention weights in the learning process, and has the potential to facilitate further theoretical understanding of how transformers perform in other algorithms and learning paradigms.

REFERENCES

Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, and Suvrit Sra. Transformers learn to implement preconditioned gradient descent for in-context learning. *arXiv preprint arXiv:2306.00297*, 2023.

Kabir Ahuja, Madhur Panwar, and Navin Goyal. In-context learning through the bayesian prism. *arXiv preprint arXiv:2306.04891*, 2023.

Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? investigations with linear models. *arXiv preprint arXiv:2211.15661*, 2022.

Zeyuan Allen-Zhu and Yuanzhi Li.
Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. *arXiv preprint arXiv:2012.09816*, 2020. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014. Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. Transformers as statisticians: Provable in-context learning with in-context algorithm selection. *arXiv preprint arXiv:2306.04637*, 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in neural information processing systems*, 34:15084–15097, 2021. Hsin-Ping Chou, Shih-Chieh Chang, Jia-Yu Pan, Wei Wei, and Da-Cheng Juan. Remix: rebalanced mixup. In *Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part VI* 16, pp. 95–110. Springer, 2020. Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 9268–9277, 2019. Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can gpt learn in-context? language models implicitly perform gradient descent as meta-optimizers. In *ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models*, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Luc Devroye. The equivalence of weak, strong and complete convergence in l1 for kernel density estimates. *The Annals of Statistics*, pp. 896–904, 1983. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. What can transformers learn in-context? a case study of simple function classes. *Advances in Neural Information Processing Systems*, 35:30583–30598, 2022. Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D Lee, and Dimitris Papailiopoulos. Looped transformers as programmable computers. *arXiv preprint arXiv:2301.13196*, 2023. Chi Han, Ziqi Wang, Han Zhao, and Heng Ji. In-context learning of large language models explained as kernel regression. *arXiv preprint arXiv:2305.12766*, 2023.
dEz3ge8QSo
In section 2, the authors introduce convex risk measures, where there is only one input in the measure $\sigma$ in (5) and (6), while there are two in examples 1 and 2. Could the authors explain what is the difference between these two $\sigma$'s?
SOFT ROBUST MDPs AND RISK-SENSITIVE MDPs: EQUIVALENCE, POLICY GRADIENT, AND SAMPLE COMPLEXITY Runyu (Cathy) Zhang Harvard University runyuzhang@fas.harvard.edu Yang Hu Harvard University yanghu@g.harvard.edu Na Li Harvard University nali@seas.harvard.edu ABSTRACT Robust Markov Decision Processes (MDPs) and risk-sensitive MDPs are both powerful tools for making decisions in the presence of uncertainties. Previous efforts have aimed to establish their connections, revealing equivalences in specific formulations. This paper introduces a new formulation for risk-sensitive MDPs, which assesses risk in a slightly different manner compared to the classical Markov risk measure [71], and establishes its equivalence with a class of soft robust MDP (RMDP) problems, including the standard RMDP as a special case. Leveraging this equivalence, we further derive the policy gradient theorem for both problems, proving gradient domination and global convergence of the exact policy gradient method under the tabular setting with direct parameterization. This forms a sharp contrast to the Markov risk measure, known to be potentially non-gradient-dominant [39]. We also propose a sample-based offline learning algorithm, namely the robust fitted-Z iteration (RFZI), for a specific soft RMDP problem with a KL-divergence regularization term (or equivalently the risk-sensitive MDP with an entropy risk measure). We showcase its streamlined design and less stringent assumptions due to the equivalence and analyze its sample complexity. 1 INTRODUCTION Making decisions amidst uncertainty presents a fundamental challenge cutting across diverse domains, including finance [32,80], engineering [45,74], and robotics [88] etc. Within these realms, decisions carry consequences that depend not only on expected rewards but also on the level of uncertainty and associated risks. Addressing this challenge necessitates approaches such as robust, and risk-sensitive decision-making. These approaches explicitly incorporate uncertainty and aim to find policies that perform well across a spectrum of scenarios and adeptly strike a balance between expected gains and potential risks. For robust decision-making in a dynamic environment, the robust Markov Decision Process (RMDP) is a popular framework. RMDPs model the environment as a Markov decision process, seeking policies that excel across various potential models. This involves solving a max-min problem, optimizing an objective function that considers the policy’s worst-case performance across all models within a defined uncertainty set. The RMDP framework was introduced by [41,57], spurring research into efficient planning algorithms when the model is given [34,96,93,1100,56]. There are also works focusing on the computational facets for these problems [37,6,35,21] which leverage convex formulation and regularization techniques to tackle robustness. In cases of unknown models, recent efforts have designed reinforcement learning (RL) algorithms with guarantees, but most are model-based for tabular cases, i.e., requiring an empirical estimation of the probability transition model [51,63,105,99,98,77], thereby impeding their applicability to large state spaces. Some works focus on the model-free setting and employ linear function approximation for handling large state spaces [85,70,4]. However, these approaches provide only asymptotic guarantees and rely on approximated robust dynamic programming, which inherently is computationally more expensive than standard dynamic programming. 
A recent contribution by [64] offers non-asymptotic sample 1By ‘unknown model’ we refer to the setting where the nominal probability transition model is unknown. Both model-based and model-free methods belong to this setting, where model-based methods keep an empirical estimate of the nominal model whereas model-free algorithms don’t require this empirical estimation step. complexity guarantees in the context of model-free robust RL. This achievement, however, introduces additional dual variables, thus adding additional computational complexity and imposing more stringent assumptions. An alternative approach for handling uncertainty is risk-sensitive decision-making, which intriguingly shares an elegant equivalence with robust decision-making. The concept of coherent risk measures was initially introduced and explored in [2][18][69], where the uncertainty is represented by a static random variable. The connection to robustness was established by characterizing risk measures as the infimum of expected shortfall across a set of probability measures, known as the risk envelope. The risk notion is further extended to convex risk measures which capture a broader class of risk evaluation functions [30][73][81]. Subsequently, conditional and dynamic risk measures were introduced to generalize risk assessment from static random variables to stochastic processes [3][14][29][22][68][72][65]. In particular, [71] introduces the Markov risk measure in the context of Markov Decision Processes (MDPs). However, the equivalence between the Markov risk measure and robust MDPs is not as straightforward as in static settings. Notably, [71][75][11][5][62] established the equivalence between optimizing the Markov risk measure and solving a modified RMDP problem, where the uncertainty set dynamically changes with the implemented policy. This differs from the standard RMDPs, where the uncertainty sets are typically unrelated to the policy. Though [62] attains stronger equivalence results with RMDPs, it is only applicable to specific risk measures, such as Conditional Value at Risk (CVaR). Similar to RMDPs, optimizing Markov risk measures also faces many challenges. Firstly, building upon the equivalence with the modified RMDP with policy-dependent uncertainty set, Huang et al. [39] highlights that, even in a tabular setting with direct parameterization, Markov risk measures may lack gradient-dominance – a stark contrast to the gradient domination observed in standard MDPs [1]. This implies that policy gradient algorithms may not ensure global optima, even in a straightforward, full-information environment. Further, the sample complexity is also harder to obtain. While there is a series of efforts dedicated to optimizing the Markov risk measure within the realm of RL [12][76][46], these works primarily provide asymptotic convergence results. The challenges outlined above motivate us to investigate the potential of introducing an alternative risk formulation. This new formulation seeks to capture risk in a way similar to Markov risk measures while achieving a stronger and broader equivalence with RMDPs. Moreover, we aim to enhance convergence properties, including the crucial aspect of gradient domination. These improvements are poised to support the development of learning algorithms for both RMDPs and risk-sensitive MDPs while maintaining provable guarantees. 
**Our Contributions:** In this paper, we propose a new formulation for risk-sensitive MDP, whose definition incorporates the general concepts of convex risk measures. We first establish the equivalence of risk-sensitive MDP with a class of soft RMDP problems, which includes the standard RMDP as a special case. Leveraging this equivalence, we proceed to derive the policy gradient theorem for both the aforementioned class of soft RMDPs and risk-sensitive MDPs (Theorem[5]) and prove the global convergence of the exact policy gradient method under the tabular setting with direct parameterization. Our result, to the best of our knowledge, presents the first global convergence analysis with iteration complexity for a general class of risk-sensitive MDPs. Based on the policy gradient theorem, we also highlight the difficulty of gradient estimation using samples compared with the standard MDP setting, motivating us to seek other types of sample-based learning methods. In the last part of this paper, we mainly focus on the setting of offline learning with nonlinear function approximation which is a relatively less-studied scenario, and propose a sample-based offline learning algorithm, namely the robust fitted-Z iteration (RFZI), that resembles policy iteration rather than policy gradient. Specifically, we focus on a setting where the regularization term for the RMDP is a KL-divergence term, which is equivalent to the risk-sensitive MDP with the entropy risk measure. The algorithm utilizes the equivalence between the two problems, which enables simpler algorithm design. Notably, our algorithm is model-free and does not rely on an empirically estimated probability transition model. The sample complexity for RFZI is also provided. Compared with [64] which considers offline robust RL with sample-complexity guarantees, our work considers a different uncertainty set, requires less computational and implementation complexity, and less stringent assumptions. Due to space limit, we defer a detailed literature review and numerical simulations to the appendix. 2 PROBLEM SETTINGS AND PRELIMINARIES Markov Decision Processes (MDPs). A finite Markov decision process (MDP) is defined by a tuple \( M = (S, A, P, r, \gamma, \rho) \), where \( S \) is a finite set of states, \( A \) is a finite set of actions available to the agent, and \( P \) is the transition probability function such that \( P(s'|s, a) \) describes the probability of transitioning from one state \( s \) to another \( s' \) given a particular action \( a \). For the sake of notation simplicity, we use \( P_{s,a} \) to denote the probability distribution \( P(\cdot|s, a) \) over the state space \( S \). \( r : S \times A \rightarrow [0, 1] \) is a reward function, \( \gamma \in [0, 1) \) is a discounting factor, and \( \rho \) specifies the initial probability distribution over the state space \( S \). A stochastic policy \( \pi : S \rightarrow \Delta^{|A|} \) specifies a strategy where the agent chooses its action based on the current state in a stochastic fashion; more specifically, the probability of choosing action \( a \) at state \( s \) is given by \( \Pr(a|s) = \pi(a|s) \). A deterministic policy is a special case of the stochastic policy where for every state \( s \) there is an action \( a_s \) such that \( \pi(a_s|s) = 1 \). For notation simplicity, we slightly overload the notation and use \( \pi(s) \) to denote the action \( a_s \) for deterministic policies. 
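As a concrete illustration of the objects just defined (this sketch is ours, not from the paper), a finite MDP tuple $M=(S,A,P,r,\gamma,\rho)$ and a stochastic policy $\pi(a|s)$ can be stored as plain arrays; the snippet also evaluates a fixed policy under the nominal transition model by solving the standard Bellman linear system, which serves as the non-robust baseline that the robust and risk-sensitive formulations below modify.

```python
import numpy as np

# A finite MDP M = (S, A, P, r, gamma, rho) represented with arrays (toy random instance):
#   P[s, a, s'] = probability of moving to s' from s under action a
#   r[s, a]     = reward in [0, 1]
S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)   # each P[s, a, :] sums to 1
r = rng.random((S, A))
rho = np.full(S, 1.0 / S)                                       # initial state distribution

# A stochastic policy pi(a|s); a deterministic policy puts mass 1 on one action per state.
pi = np.full((S, A), 1.0 / A)

def nominal_value(pi, P, r, gamma):
    """Standard (non-robust) policy evaluation: solve (I - gamma * P_pi) V = r_pi."""
    P_pi = np.einsum("sa,sat->st", pi, P)   # state-to-state kernel under pi
    r_pi = (pi * r).sum(axis=1)             # expected one-step reward under pi
    return np.linalg.solve(np.eye(len(r_pi)) - gamma * P_pi, r_pi)

V_nominal = nominal_value(pi, P, r, gamma)
```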
For a given stationary policy \( \pi \) and a set of transition probability distributions \( \{P_{s,a}\}_{s \in S, a \in A} \), we denote the discounted state visitation distribution by
\[ d^{\pi,P}(s) := (1 - \gamma) \sum_{t=0}^{+\infty} \gamma^t \Pr^{\pi,P}(s_t = s \mid s_0 \sim \rho). \]

Robust MDPs (RMDPs) and Soft Robust MDPs. Unlike the standard MDP which considers a fixed transition model \( \{P_{s,a}\} \), the robust MDP considers a set \( \mathcal{P} \) of transition probability distributions and aims to solve the sup-inf problem [41]
\[ \sup_{\pi} \inf_{\{\hat{P}_t \in \mathcal{P}\}_{t \geq 0}} \mathbb{E}_{s_t, a_t \sim \pi, \hat{P}, s_0 \sim \rho} \sum_{t=0}^{+\infty} \gamma^t\, r(s_t, a_t), \]
where the objective is to find the best action sequence that maximizes a worst-case objective over all possible models in the uncertainty set \( \mathcal{P} \). Many papers [41,57,99,63,4] consider the uncertainty set under the \((s,a)\)-rectangularity condition \( \mathcal{P} = \otimes_{s \in S, a \in A} \mathcal{P}_{s,a} \), where \( \mathcal{P}_{s,a} = \{\hat{P}_{s,a} : \ell(\hat{P}_{s,a}, P_{s,a}) \leq \epsilon \} \), and \( \ell \) is a penalty function that captures the deviation of \( \hat{P}_{s,a} \) from a nominal model \( P_{s,a} \). Some popular penalty functions are the KL divergence, the total variation distance, etc.

In this paper, we generalize the above robust MDP problem to a wider range of problems which we call the soft robust MDP [8]. The objective of the soft robust MDP solves the following sup-inf problem:
\[ \sup_{\pi} \inf_{\{\hat{P}_t\}_{t \geq 0}} \mathbb{E}_{s_t, a_t \sim \pi, \hat{P}, s_0 \sim \rho} \sum_{t=0}^{+\infty} \gamma^t \left( r(s_t, a_t) + \gamma D(\hat{P}_{t;s_t,a_t}, P_{s_t,a_t}) \right). \] (2)
Note that here \( \inf_{\{\hat{P}_t\}_{t \geq 0}} \) is with respect to all the possible state-transition probability distributions. When the penalty function \( D \) is chosen as the indicator function
\[ D(\hat{P}_{s,a}, P_{s,a}) = \begin{cases} 0 & \ell(\hat{P}_{s,a}, P_{s,a}) \leq \epsilon \\ +\infty & \text{otherwise} \end{cases}, \]
it recovers the robust MDP problem [41]. When \( D \) is set to a non-indicator function, for example, \( D(\hat{P}_{s,a}, P_{s,a}) = \text{KL}(\hat{P}_{s,a}||P_{s,a}) \), Problem (2) is a robust MDP with a soft penalty term \( D \) on the deviation of \( \hat{P}_{s,a} \) from \( P_{s,a} \) rather than a hard constraint on \( \hat{P}_{s,a} \).

Similar to the robust MDP problem, we can define the optimal value function as
\[ V^*(s) := \sup_{\pi} \inf_{\{\hat{P}_t\}_{t \geq 0}} \mathbb{E}_{s_t, a_t \sim \pi, \hat{P}} \left[ \sum_{t=0}^{+\infty} \gamma^t \left( r(s_t, a_t) + \gamma D(\hat{P}_{t;s_t,a_t}, P_{s_t,a_t}) \right) \middle| s_0 = s \right]. \]
Additionally, given a stationary policy \( \pi \), the value function \( V^\pi \) under policy \( \pi \) is defined as follows:
\[ V^\pi(s) := \inf_{\{\hat{P}_t\}_{t \geq 0}} \mathbb{E}_{s_t, a_t \sim \pi, \hat{P}} \left[ \sum_{t=0}^{+\infty} \gamma^t \left( r(s_t, a_t) + \gamma D(\hat{P}_{t;s_t,a_t}, P_{s_t,a_t}) \right) \middle| s_0 = s \right]. \]
We also define the corresponding Q-functions as
\[
Q^*(s, a) := \sup_{\{a_t\}_{t \geq 1}} \inf_{\{\hat{P}_t\}_{t \geq 0}} \mathbb{E}_{s_t \sim \hat{P}} \left[ \sum_{t=0}^{+\infty} \gamma^t \left( r(s_t, a_t) + \gamma D(\hat{P}_{t;s_t,a_t}, P_{s_t,a_t}) \right) \,\middle|\, s_0 = s, a_0 = a \right],
\]
\[
Q^\pi(s, a) := \inf_{\{\hat{P}_t\}_{t \geq 0}} \mathbb{E}_{a_t \sim \pi \,(t \geq 1),\, s_t \sim \hat{P}} \left[ \sum_{t=0}^{+\infty} \gamma^t \left( r(s_t, a_t) + \gamma D(\hat{P}_{t;s_t,a_t}, P_{s_t,a_t}) \right) \,\middle|\, s_0 = s, a_0 = a \right].
\]
For the sake of generality, we allow the transition probability to be non-stationary and the policy to be non-Markovian and stochastic. However, in later sections we will show that the sup-inf solution can be obtained by a stationary deterministic Markov policy and a stationary transition probability (Theorem 2). We adopt the term from the robust optimization literature, where the concept of regularizing the adversary's actions is referred to as soft robustness [9] (or comprehensive robustness [8] and globalized robustness [10]).

**Remark 1** (Soft Robust MDP). The soft robust MDP problem is useful, especially when the uncertainty set is not explicitly given. In this case, it is more desirable to consider all possible probability transition models \( \{\hat{P}_t\}_{t \geq 0} \) while treating the deviation from the nominal model as a soft penalty term \( D \) rather than constraining it to lie within a specified uncertainty set.

In this paper, we establish a connection between the soft robust MDP and another class of MDPs, namely risk-sensitive MDPs. To define the risk-sensitive MDP, we first introduce the notion of convex risk measures.

**Convex Risk Measures [30].** Consider a finite set \( S \) and let \( \mathbb{R}^{|S|} \) denote the set of real-valued functions over \( S \). A convex risk measure \( \sigma : \mathbb{R}^{|S|} \to \mathbb{R} \) is a function that satisfies the following properties:
1. Monotonicity: for any \( V', V \in \mathbb{R}^{|S|} \), if \( V' \leq V \), then \( \sigma(V) \leq \sigma(V') \).
2. Translation invariance: for any \( V \in \mathbb{R}^{|S|} \) and \( m \in \mathbb{R} \), \( \sigma(V + m) = \sigma(V) - m \).
3. Convexity: for any \( V', V \in \mathbb{R}^{|S|} \) and \( \lambda \in [0, 1] \), \( \sigma(\lambda V + (1-\lambda)V') \leq \lambda \sigma(V) + (1-\lambda)\sigma(V') \).

Using standard duality theory, it is shown in classical results [30] that convex risk measures satisfy the following dual representation theorem:

**Theorem 1** (Dual Representation Theorem [30]). The function \( \sigma : \mathbb{R}^{|S|} \to \mathbb{R} \) is a convex risk measure if and only if there exists a "penalty function" \( D(\cdot) : \Delta^{|S|} \to \mathbb{R} \) such that
\[
\sigma(V) = \sup_{\hat{\mu} \in \Delta^{|S|}} \left( -\mathbb{E}_{\hat{\mu}}V - D(\hat{\mu}) \right). \] (5)
Further, the penalty function \( D \) can be chosen to satisfy \( D(\hat{\mu}) \geq -\sigma(0) \) for any \( \hat{\mu} \in \Delta^{|S|} \), and it can be taken to be convex and lower-semicontinuous. Specifically, it can be written in the following form:
\[
D(\hat{\mu}) = \sup_V \left( -\sigma(V) - \mathbb{E}_{s \sim \hat{\mu}}V(s) \right). \] (6)
Note that \( \sigma \) and \( D \) serve as the Fenchel conjugate of each other. In most cases, the convex risk measure \( \sigma(V) \) can be interpreted as the risk associated with a random variable that takes on values \( V(s) \), where \( s \) is drawn from some distribution \( s \sim \mu \).
Consequently, most commonly used risk measures are typically associated with an underlying probability distribution \( \mu \in \Delta^{|S|} \) (e.g., Example 1). This paper focuses on this type of risk measure, and thus we use \( \sigma(\mu, \cdot) \) to denote the risk measure, where the additional variable \( \mu \) indicates the associated probability distribution. Correspondingly, we denote the penalty term \( D(\hat{\mu}) \) of \( \sigma(\mu, \cdot) \) in the dual representation theorem as \( D(\hat{\mu}, \mu) \). Here we provide an example of a convex risk measure and its dual form.

**Example 1** (Entropy risk measure [30]). For a given \( \beta > 0 \), the entropy risk measure takes the form:
\[
\sigma(\mu, V) = \beta^{-1} \log \mathbb{E}_{s \sim \mu} e^{-\beta V(s)} .
\]
Its corresponding penalty function \( D \) in the dual representation theorem is the KL divergence
\[
D(\hat{\mu}, \mu) = \beta^{-1} \text{KL}(\hat{\mu} \| \mu) = \beta^{-1} \sum_{s \in S} \hat{\mu}(s) \log (\hat{\mu}(s)/\mu(s)) .
\]

**Risk-Sensitive MDPs.** Convex risk measures capture the risk associated with random variables. It would be desirable if the notion could be adapted to the MDP setting to capture the risk of a given policy under the Markov process. Given an MDP \( M \), a class of convex risk measures \( \{\sigma(P_{s,a}, \cdot)\}_{s \in S, a \in A} \), and a policy \( \pi(\cdot | s) \), the risk-sensitive value function \( \tilde{V}^\pi \) for the infinite-horizon discounted MDP is given as
\[
\tilde{V}^\pi(s) = \sum_a \pi(a | s) \left( r(s, a) - \gamma \sigma(P_{s,a}, \tilde{V}^\pi) \right), \quad \forall s \in S . \] (7)
With the definition of the risk-sensitive \( \tilde{V}^\pi \), the risk-sensitive MDP problem is to find the policy that maximizes \( \max_\pi \tilde{V}^\pi \). We denote the optimal value by \( \tilde{V}^* \), which is the fixed-point solution of the following equation:
\[
\tilde{V}^*(s) := \max_a \left( r(s, a) - \gamma \sigma(P_{s,a}, \tilde{V}^*) \right), \quad \forall s \in S. \] (8)

---
4 Please note that the symbol \( D \) serves a dual purpose, representing both the regularization term in (2) and the penalty function for a risk measure in (5) and (6). This intentional notation overlap will become clear in the following sections, which reveal the connection between these two terms.

It is worth noting that the fixed-point operators for (7) and (8) are contractive (proof deferred to Appendix D), which immediately implies the following lemma verifying that the fixed-point equations for \( \tilde{V}^\pi \) (7) and \( \tilde{V}^* \) (8) are well-defined.

**Lemma 1.** The solution to (7) exists and is unique. The same argument holds for (8).

**Remark 2.** We would like to emphasize that when the policy \( \pi \) is stochastic, our definition of the value function \( \tilde{V}^\pi \) is different from the Markov risk measures defined in [71,39,86,87]. However, the two quantities are equivalent when \( \pi \) is deterministic. Additionally, when further assuming that the risk measure \( \sigma \) is mixture quasiconcave (c.f. [17]), the optimal policy for the Markov risk measure is also deterministic, and thus the risk-sensitive MDP and the Markov risk measure attain the same optimal value \( \tilde{V}^* \) (see Appendix C for more details).

We also define the Q-functions of the risk-sensitive MDP as:
\[
\tilde{Q}^*(s, a) := r(s, a) - \gamma \sigma(P_{s,a}, \tilde{V}^*), \quad \tilde{Q}^\pi(s, a) := r(s, a) - \gamma \sigma(P_{s,a}, \tilde{V}^\pi).
\]
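Example 1 can be checked numerically. The short sketch below (an illustration only; the distribution, value vector, and \( \beta \) are made-up) evaluates the entropy risk measure directly and then recovers the same number from the dual form (5)–(6), using the standard fact (not spelled out in the text) that the supremum over \( \hat{\mu} \) is attained at \( \hat{\mu} \propto \mu \, e^{-\beta V} \).

```python
import numpy as np

beta = 2.0
mu = np.array([0.5, 0.3, 0.2])      # underlying distribution over 3 states (assumed)
V  = np.array([1.0, 0.2, -0.5])     # a value function on those states (assumed)

# Entropy risk measure: sigma(mu, V) = beta^{-1} * log E_{s~mu} exp(-beta * V(s))
sigma = np.log(np.dot(mu, np.exp(-beta * V))) / beta

# Dual form: sup_{mu_hat} ( -E_{mu_hat}[V] - beta^{-1} * KL(mu_hat || mu) ),
# with the supremum attained at mu_hat* proportional to mu * exp(-beta * V).
mu_hat = mu * np.exp(-beta * V)
mu_hat /= mu_hat.sum()
dual = -np.dot(mu_hat, V) - np.dot(mu_hat, np.log(mu_hat / mu)) / beta

print(sigma, dual)   # the two numbers agree up to floating-point error
```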
**Other notations:** For any function \( f : S \times A \rightarrow \mathbb{R} \) and state-action distribution \( \mu \in \Delta(S \times A) \), the \( \mu \)-weighted 2-norm of \( f \) is defined as \( \|f\|_{2,\mu} = (\mathbb{E}_{s,a \sim \mu} f(s, a)^2)^{1/2} \).

### 3 EQUIVALENCE OF SOFT RMDPs AND RISK-SENSITIVE MDPs

**Theorem 2** (Equivalence of Soft RMDPs and Risk-Sensitive MDPs). For a given MDP \( M \), a penalty function \( D \), a class of convex risk measures \( \{\sigma(P_{s,a}, \cdot)\} \), and a stationary policy \( \pi \), if the penalty function \( D \) satisfies
\[
D(\hat{P}_{s,a}, P_{s,a}) = \sup_V \left( -\sigma(P_{s,a}, V) - \mathbb{E}_{s' \sim \hat{P}_{s,a}} V(s') \right),
\]
(9)
then the value functions and Q-functions of the soft RMDP and the risk-sensitive MDP are always the same. That is,
\[
\tilde{V}^* = V^*, \quad \tilde{V}^\pi = V^\pi, \quad \tilde{Q}^* = Q^*, \quad \tilde{Q}^\pi = Q^\pi.
\]
Further, for every initial state \( s_0 \), the sup-inf solution of the policy and transition probabilities for \( V^*(s_0) \) defined in (3) is given by:
\[
\pi^*(s) = \arg\max_a \left( r(s, a) - \gamma \sigma(P_{s,a}, V^*) \right),
\]
(10)
\[
\hat{P}^*_{t;s,a} = \hat{P}^*_{s,a} = \arg\min_{\hat{P}} D(\hat{P}, P_{s,a}) + \mathbb{E}_{s' \sim \hat{P}} V^*(s'),
\]
where (10) means that the optimal action sequence \( \{a_t\}_{t \geq 1} \) can be achieved by implementing the deterministic policy \( a_t = \pi^*(s_t) \). Similarly, for any initial state \( s_0 \), the minimizing transition probabilities for \( V^\pi(s_0) \) defined in (4) are given by
\[
\hat{P}^\pi_{t;s,a} = \hat{P}^\pi_{s,a} = \arg\min_{\hat{P}} D(\hat{P}, P_{s,a}) + \mathbb{E}_{s' \sim \hat{P}} V^\pi(s').
\]
(11)

Since Theorem 2 has established the equivalence of risk-sensitive MDPs and soft RMDPs, from now on we use \( V^*, V^\pi, Q^*, Q^\pi \) to denote the value functions and Q-functions for both settings and assume by default that the penalty function \( D \) and the risk measure \( \sigma \) satisfy relationship (9).

**Remark 3.** As a comparison to the equivalence result for the Markov risk measures [72,71,86], their uncertainty set for the robust problem generally depends on the policy \( \pi \) (see, e.g., Assumption 2.2 in [86]), while in our setting, the penalization function \( D \) is independent of the policy and matches the most standard formulation of RMDPs.

---
5 Due to this difference, the value function \( V^\pi \) can no longer be written as \( \rho(\sum_{t=0}^{+\infty} \gamma^t r(s_t, a_t)) \), where \( \rho \) is a time-consistent dynamic risk measure. This makes our definition different from the usual interpretation of dynamic risk measures.
6 We would like to note that the equivalence of optimal values might fail if \( \sigma \) is not mixture semiconcave (e.g., mean (semi)-deviation or mean (semi)-moment measures [17]) or if policy regularization is added into the value function, because the optimal policy might no longer be deterministic.
7 The equivalence \( \tilde{V}^\pi = V^\pi, \tilde{Q}^\pi = Q^\pi \) easily extends to the setting with policy regularization, since adding regularization only requires changing the reward function \( r(s, a) \) to \( r^\pi(s, a) = r(s, a) + R(\pi(\cdot|s)) \), where \( R \) is the policy regularizer, in which case the proof of Theorem 2 still carries through naturally.
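Theorem 2 makes the worst-case model in (11) concrete. For the KL penalty of Example 1 (i.e., \( D(\hat{P}, P) = \beta^{-1}\mathrm{KL}(\hat{P}\|P) \)), a standard computation (not spelled out in the text) shows that the minimizer re-weights the nominal transition by \( e^{-\beta V} \). The sketch below is an illustration under made-up numbers, not the paper's code.

```python
import numpy as np

beta = 1.0
P_sa = np.array([0.6, 0.3, 0.1])     # nominal P(. | s, a) over 3 next states (assumed)
V = np.array([0.0, 1.0, 2.0])        # current value estimate at those states (assumed)

def objective(P_hat):
    """D(P_hat, P_sa) + E_{s'~P_hat}[V(s')] with the KL penalty D = beta^{-1} KL."""
    return float(np.sum(P_hat * np.log(P_hat / P_sa)) / beta + P_hat @ V)

# Closed-form minimizer of (11) under the KL penalty: P_hat* proportional to P_sa * exp(-beta * V)
P_star = P_sa * np.exp(-beta * V)
P_star /= P_star.sum()

rng = np.random.default_rng(0)
others = rng.dirichlet(np.ones(3), size=5000)                 # random competitors on the simplex
print(objective(P_star), min(objective(p) for p in others))   # P_star attains the minimum
```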
4 Policy Gradient for Soft RMDPs

In this section, we present the policy gradient theorem for a differentiable policy $\pi_\theta$ parameterized by $\theta$, which provides an analytical method for computing the gradient in soft RMDPs. Additionally, we prove the global convergence of the exact policy gradient ascent algorithm for the direct parameterization case. For simplicity, in this section we use the abbreviations $V^\theta, Q^\theta, \hat{P}^\theta, V^{(t)}, Q^{(t)}, \hat{P}^{(t)}$ to denote $V^{\pi_\theta}, Q^{\pi_\theta}, \hat{P}^{\pi_\theta}, V^{\pi_{\theta^{(t)}}}, Q^{\pi_{\theta^{(t)}}}, \hat{P}^{\pi_{\theta^{(t)}}}$, respectively.

**Theorem 3 (Policy gradient theorem).** Suppose that $\pi_\theta$ is differentiable with respect to $\theta$ and that $\sigma(P_{s,a}, \cdot) : \mathbb{R}^{|S|} \to \mathbb{R}$ is a differentiable function; then $V^\theta(s)$ is also a differentiable function with respect to $\theta$ and the gradient is given by
$$
\nabla_\theta V^\theta(s) = \mathbb{E}_{a_t \sim \pi_\theta(\cdot | s_t), s_{t+1} \sim \hat{P}^\theta_{s_t, a_t}} \left[ \sum_{t=0}^{+\infty} \gamma^t Q^\theta(s_t, a_t) \nabla_\theta \log \pi_\theta(a_t | s_t) \bigg| s_0 = s \right],
$$
where $\hat{P}^\theta$ is defined in (11).

We leave the discussion of this result to the end of this section in Remark 4. Theorem 3 immediately implies the following corollary on the policy gradient under direct parameterization (c.f. [1, 94]), where the parameter $\theta_{s,a}$ directly represents the probability of choosing action $a$ at state $s$, i.e., $\theta_{s,a} = \pi_\theta(a | s)$.

**Corollary 1 (Policy gradient for direct parameterization).** Under direct parameterization,
$$
\frac{\partial \mathbb{E}_{s_0 \sim \rho} V^\theta(s_0)}{\partial \theta_{s,a}} = \frac{1}{1 - \gamma} d^{\pi_\theta, \hat{P}^\theta}(s) Q^\theta(s, a).
$$
(12)

Note that the policy gradient theorem only holds for the case where $\sigma(P_{s,a}, \cdot)$ is differentiable; nevertheless, we can generalize (12) to the non-differentiable case by defining the variable $G(\theta) \in \mathbb{R}^{|S| \times |A|}$ as follows:
$$
[G(\theta)]_{s,a} := \frac{1}{1 - \gamma} d^{\pi_\theta, \hat{P}^\theta}(s) Q^\theta(s, a).
$$
For both the differentiable and non-differentiable cases, we can perform the following (quasi-)gradient ascent algorithm:
$$
\theta^{(t+1)} = \text{Proj}_X(\theta^{(t)} + \eta G(\theta^{(t)})),
$$
(13)
where $X = \otimes_{s \in S} \Delta^{|A|}$ denotes the feasible region of $\theta$.

For the standard MDP case, it is known that the value function satisfies the gradient domination property under direct parameterization [1], which enables global convergence of the policy gradient algorithm. A similar property also holds for the soft RMDP/risk-sensitive MDP setting, as shown in the following lemma:

**Lemma 2 (Gradient domination under direct parameterization).**
$$
\mathbb{E}_{s_0 \sim \rho} \left[ V^*(s_0) - V^\theta(s_0) \right] \leq \left\| \frac{d^{\pi^\star, \hat{P}^\theta}}{d^{\pi_\theta, \hat{P}^\theta}} \right\|_\infty \max_\pi \langle \pi - \pi_\theta, G(\theta) \rangle,
$$
where $\left\| \frac{d^{\pi^\star, \hat{P}^\theta}}{d^{\pi_\theta, \hat{P}^\theta}} \right\|_\infty := \max_s \frac{d^{\pi^\star, \hat{P}^\theta}(s)}{d^{\pi_\theta, \hat{P}^\theta}(s)}$.

The gradient domination property suggests that as long as the term $\left\| \frac{d^{\pi^\star, \hat{P}^\theta}}{d^{\pi_\theta, \hat{P}^\theta}} \right\|_\infty$ is not infinite, all first-order stationary points are globally optimal solutions.
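A minimal sketch of one update of the projected (quasi-)gradient ascent step (13) under direct parameterization is given below. The per-state Euclidean projection onto the simplex uses the standard sort-based routine; the values of \( G(\theta) \), which in the algorithm would come from (12), are stubbed here with random numbers, so everything concrete in this snippet is an assumption for illustration.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a vector v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    cond = u - (css - 1) / idx > 0
    rho = idx[cond][-1]
    tau = (css[cond][-1] - 1) / rho
    return np.maximum(v - tau, 0.0)

def pg_step(theta, G, eta):
    """One step of (13): theta <- Proj_X(theta + eta * G(theta)), state by state."""
    return np.array([project_to_simplex(row) for row in theta + eta * G])

nS, nA, eta = 3, 4, 0.1
rng = np.random.default_rng(0)
theta = np.full((nS, nA), 1.0 / nA)          # direct parameterization: theta[s, a] = pi(a|s)
G = rng.normal(size=(nS, nA))                # stand-in for G(theta) from (12) (assumed values)
theta = pg_step(theta, G, eta)
print(theta, theta.sum(axis=1))              # each row remains a valid distribution
```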
Based on this observation, we further derive the convergence rate of the policy gradient algorithm. Before that, we introduce the following sufficient exploration assumption:

**Assumption 1 (Sufficient Exploration).** For any policy $\pi$, it holds that $d^{\pi^\star, \hat{P}^\pi}(s) > 0$, where $\hat{P}^\pi$ is defined as in (11). We define the distributional shift factor $M$ to be a constant that satisfies $M \geq \frac{1}{d^{\pi^\star, \hat{P}^\pi}(s)}$ for all states $s$ and policies $\pi$.

Note that when we start with an initial distribution where $\rho(s) > 0$ for every state $s$, the term $M$ can be upper bounded by $\frac{1}{(1-\gamma)\min_s \rho(s)}$. If Assumption 1 is satisfied, it can be concluded that $\left\| \frac{d^{\pi^\star, \hat{P}^\theta}}{d^{\pi_\theta, \hat{P}^\theta}} \right\|_\infty \leq M$. Thus we can use gradient domination to derive the global convergence rate.

**Theorem 4 (Convergence rate for exact policy gradient under direct parameterization).** Under Assumption 1, by setting \( \eta = \frac{(1-\gamma)^3}{2|\mathcal{A}|M} \), running (13) guarantees that
\[
\sum_{k=1}^{K} \left( \mathbb{E}_{s_0 \sim \rho} \left[ V^*(s_0) - V^{(k)}(s_0) \right] \right)^2 \leq \frac{16|\mathcal{A}|M^4}{(1-\gamma)^4}.
\]
Therefore, by setting \( K \geq \frac{16|\mathcal{A}|M^4}{(1-\gamma)^4 \epsilon^2} \), it is guaranteed that \( \min_{1 \leq k \leq K} \mathbb{E}_{s_0 \sim \rho} \left[ V^*(s_0) - V^{(k)}(s_0) \right] \leq \epsilon \).

If we apply the same proof technique to standard MDPs, the convergence rate is \( O \left( \frac{|\mathcal{A}|M^2}{(1-\gamma)^2} \right) \). The dependency on the distributional shift factor \( M \) is worse for soft RMDPs, which is caused by the choice of a smaller stepsize \( \eta \) (see Remark 7 in the Appendix for more details). It is an interesting open question whether this worse dependency is fundamental or just a proof artifact.

**Remark 4 (Difficulties of Sample-based Gradient Estimation).** Though Theorem 4 establishes the global convergence of the exact policy gradient, it is hard to generalize the result to sample-based settings. Note that the policy gradient in Theorem 3 takes a similar form compared to standard MDPs [83]; however, there is a primary distinction that the expectation is taken over trajectories sampled from the probability transition model \( \hat{P}^\theta \) instead of the nominal model \( P \). Consequently, when confined to samples exclusively from the nominal model, estimating this expectation becomes exceptionally challenging, particularly in the context of non-generative models.

5 Offline Reinforcement Learning of the KL-soft RMDP

Since the previous section considers learning with full information and studies iteration complexity, the major motivation for this section is to examine sample-based learning for risk-sensitive MDPs and soft robust MDPs. As discussed in Remark 4, developing sample-based policy gradient learning methods might be difficult; therefore, we seek an alternative sample-based method that resembles policy iteration rather than policy gradient. Specifically, we mainly focus on the setting of offline learning with nonlinear function approximation, which is a relatively less-studied scenario.
Moreover, due to the challenge of developing a method for soft RMDPs with general \( D \) functions (or, equivalently, for risk-sensitive MDPs with general risk measures \( \sigma \)), in this section we look into a particular and important case of the soft RMDP where the regularization term is the KL divergence, i.e.,
\[
\max_{\pi} \min_{\hat{P}} \mathbb{E}_{s_t, a_t \sim \pi, \hat{P}, s_0 \sim \rho} \sum_{t=0}^{+\infty} \gamma^t \left( r(s_t, a_t) + \gamma \beta^{-1} \text{KL}(\hat{P}_{s_t, a_t} \| P_{s_t, a_t}) \right). \] (14)
The hyperparameter \( \beta \) represents the penalty strength on the deviation of \( \hat{P} \) from \( P \); the smaller \( \beta \) is, the larger the penalty strength. From Example 1 and Theorem 2, the KL-soft RMDP is equivalent to the risk-sensitive MDP problem with the risk measures \( \sigma(P_{s,a}, \cdot) \) chosen as the entropy risk measure
\[
\sigma(P_{s,a}, V) = \beta^{-1} \log \mathbb{E}_{s' \sim P_{s,a}} e^{-\beta V(s')}.
\]
In this case, the Bellman equations for the value functions \( V^\pi, V^*, Q^\pi, Q^* \) are given by:
\[
V^\pi(s) = \sum_a \pi(a|s) Q^\pi(s, a), \quad Q^\pi(s, a) = r(s, a) - \gamma \beta^{-1} \log \mathbb{E}_{s' \sim P_{s,a}} e^{-\beta V^\pi(s')},
\]
\[
V^*(s) = \max_a Q^*(s, a), \quad Q^*(s, a) = r(s, a) - \gamma \beta^{-1} \log \mathbb{E}_{s' \sim P_{s,a}} e^{-\beta V^*(s')}.
\]
For notational simplicity, we define the Bellman operator on the Q-functions \( T_Q : \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|} \to \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|} \) as:
\[
[T_Q Q](s, a) := r(s, a) - \gamma \beta^{-1} \log \mathbb{E}_{s' \sim P(\cdot|s,a)} e^{-\beta \max_{a'} Q(s', a')}. \] (15)
It is not hard to verify from the above arguments that the optimal Q-function \( Q^* \) satisfies
\[
Q^* = T_Q Q^*.
\]

**Offline robust reinforcement learning.** The remainder of the paper focuses on finding the optimal robust policy \( \pi^* \) for the soft robust MDP problem (14). Specifically, we explore offline robust reinforcement learning algorithms which use a pre-collected dataset \( \mathcal{D} \) to learn \( \pi^* \). The dataset is typically generated under the nominal model \( \{P_{s,a}\}_{s \in \mathcal{S}, a \in \mathcal{A}} \), such that \( \mathcal{D} = \{s_i, a_i, r_i, s'_i\}_{i=1}^N \), where the state-action pairs \( (s_i, a_i) \sim \mu \) are drawn from a specific data-generating distribution \( \mu \).

**Definition 1 (Robustly Admissible Distributions).** A distribution \( \nu \in \Delta^{|S| \times |A|} \) is robustly admissible if there exist \( h \geq 0 \), a policy \( \pi \), and a transition probability \( \hat{P} \in \{ P' : \text{KL}(P'_{s,a} \| P_{s,a}) \leq \beta \} \) (both can be non-stationary) such that \( \nu(s, a) = \Pr(s_h, a_h \,|\, s_0 \sim \rho, \pi, \hat{P}) \).

**Assumption 2 (Concentrability).** The data-generating distribution \( \mu \) satisfies concentrability if there exists a constant \( C \) such that for any \( \nu \) that is robustly admissible, \( \max_{s,a} \frac{\nu(s,a)}{\mu(s,a)} \leq C \).

**Remark 5.** The notions of robustly admissible distribution and concentrability are adapted from the corresponding notions defined for the standard MDP setting [13], where they also demonstrate the necessity of this assumption for standard RL with function approximation. It would be an interesting open question whether Assumption 2 is also necessary for robust RL settings. Recent works for standard offline RL also show that by considering variations of the RL algorithms (e.g., exploring pessimism [95] or the primal-dual formulation [102]), the concentrability assumption can be weakened to single-policy concentrability. Another interesting future direction is to study whether applying similar approaches to the soft RMDP would result in the same improvement.
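Before turning to the sample-based algorithm, here is what exact value iteration with the operator \( T_Q \) in (15) looks like on a small tabular instance. This is a hedged illustration: the MDP below is made-up, and the snippet only demonstrates the contraction \( Q \leftarrow T_Q Q \) converging to \( Q^* \), not the offline setting studied next.

```python
import numpy as np

# Exact value iteration Q <- T_Q Q from (15) on a tiny tabular KL-soft RMDP.
# The MDP numbers below are assumptions for illustration, not from the paper.
nS, nA, gamma, beta = 2, 2, 0.9, 1.0
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])      # P[s, a, s']
r = np.array([[1.0, 0.0], [0.5, 0.2]])

def T_Q(Q):
    V = Q.max(axis=1)                          # V(s') = max_a' Q(s', a')
    # [T_Q Q](s,a) = r(s,a) - gamma * beta^{-1} * log E_{s'~P(.|s,a)} exp(-beta * V(s'))
    return r - (gamma / beta) * np.log(P @ np.exp(-beta * V))

Q = np.zeros((nS, nA))
for _ in range(500):                           # gamma-contraction => convergence
    Q = T_Q(Q)
print("Q* ~\n", Q)
print("greedy robust policy:", Q.argmax(axis=1))
```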
### 5.1 Robust Fitted-Z Iteration (RFZI)

The offline robust MDP learning method we propose is Robust Fitted-Z Iteration (RFZI). The main idea is to utilize the fixed-point equation \( Q^* = T_Q Q^* \) with the Bellman operator (15) from the corresponding equivalent risk-sensitive MDP. However, \( T_Q \) involves a term \( \log \mathbb{E}_{s' \sim P_{s,a}} \), which is hard to approximate with empirical estimation. Thus, instead of directly solving \( Q^* \) using \( Q^* = T_Q Q^* \), we introduce an auxiliary variable, the Z-function, and solve a fixed-point equation for \( Z \), which plays an important role in our algorithm design and theoretical analysis.

**The Z-functions.** For a given Q-function \( Q : S \times A \rightarrow \mathbb{R} \), we define its corresponding Z-function as below:
\[
Z(s, a) := \mathbb{E}_{s' \sim P_{s,a}} e^{-\beta \max_{a'} Q(s', a')} .
\]
One can establish the relationship between the Z-function and the Q-function by
\[
[T_Q Q](s, a) = r(s, a) - \gamma \beta^{-1} \log Z(s, a).
\]
Further, we also define the Z-Bellman operator on Z-functions as:
\[
[T_Z Z](s, a) := \mathbb{E}_{s' \sim P_{s,a}} e^{-\beta \max_{a'} (r(s', a') - \gamma \beta^{-1} \log Z(s', a'))}.
\]
Then \( [T_Q (T_Q Q)](s, a) = r(s, a) - \gamma \beta^{-1} \log [T_Z Z](s, a) \). Thus, instead of solving \( Q^* = T_Q Q^* \), an alternative approach is to solve \( Z^* = T_Z Z^* \) and recover \( Q^* \) by \( Q^* = r - \gamma \beta^{-1} \log Z^* \). This is the key intuition of our RFZI algorithm. Note that, compared with \( T_Q \), the operator \( T_Z \) eliminates the log dependency on the expectation term \( \mathbb{E}_{s' \sim P_{s,a}} \), which makes it easier to estimate empirically.

**Function approximation and the projected Z-Bellman operator.** Given that \( T_Z \) is a contraction mapping, the solution \( Z^* \) can be obtained by running \( Z_{k+1} = T_Z Z_k \), with \( \lim_{k \rightarrow +\infty} Z_k = Z^* \). However, when the problem considered has a large state space, it is computationally very expensive to compute the Bellman operator \( T_Z \) exactly. Thus, function approximation might be needed to solve the problem approximately. Given a function class \( \mathcal{F} \), we define the projected Z-Bellman operator as:
\[
T_{Z,\mathcal{F}} Z := \argmin_{Z' \in \mathcal{F}} \| Z' - T_Z Z \|_{2,\mu}.
\]
One can verify that \( T_{Z,\mathcal{F}} Z \) is also the minimizer of the following loss function \( L \),
\[
L(Z', Z) := \mathbb{E}_{s,a \sim \mu} \mathbb{E}_{s' \sim P_{s,a}} \left( Z'(s,a) - \exp\left(-\beta \max_{a'} \left(r(s', a') - \gamma \beta^{-1} \log Z(s', a')\right)\right) \right)^2,
\]
i.e., \( T_{Z,\mathcal{F}} Z = \argmin_{Z' \in \mathcal{F}} L(Z', Z) \). We make the following assumptions on the expressive power of the function class \( \mathcal{F} \):

**Assumption 3 (Approximate Completeness).** \( \sup_{Z \in \mathcal{F}} \inf_{Z' \in \mathcal{F}} \| Z' - T_Z Z \|_{2,\mu} \leq \epsilon_c \).

**Assumption 4 (Positivity).** \( e^{-\frac{\beta}{1-\gamma}} \leq Z \leq 1, \forall Z \in \mathcal{F} \).
**Approximating the projected Z-Bellman operator with empirical loss minimization.** The computation of the loss function \( L \) requires knowledge of the model \( P_{s,a} \), which the algorithm does not have access to. Thus, we introduce the following empirical loss to further approximate the loss function \( L \). Given an offline dataset \( \{(s_i, a_i, s'_i)\}_{i=1}^N \) generated from the distribution \( (s_i, a_i) \sim \mu, \ s'_i \sim P_{s_i, a_i} \), we define the empirical loss \( \hat{L} \) as:
\[
\hat{L}(Z', Z) := \frac{1}{N} \sum_{i=1}^N \left( Z'(s_i, a_i) - \exp\left(-\beta \max_{a'}\left(r(s'_i, a') - \gamma \beta^{-1} \log Z(s'_i, a')\right)\right) \right)^2 .
\]
Given the empirical loss \( \hat{L} \), the empirical projected Bellman operator is defined as \( \hat{T}_{Z,F} Z := \argmin_{Z' \in F} \hat{L}(Z', Z) \). Our Robust Fitted-Z Iteration (RFZI) essentially updates the Z-functions iteratively by \( Z_{k+1} = \hat{T}_{Z,F} Z_k \). The detailed algorithm is displayed in Algorithm 1.

**Algorithm 1 Robust Fitted Z Iteration (RFZI)**
1: **Input:** Offline dataset \( D = (s_i, a_i, r_i, s'_i)_{i=1}^N \), function class \( F \).
2: **Initialize:** \( Z_0 = 1 \in F \)
3: **for** \( k = 0, \ldots, K-1 \) **do**
4: Update \( Z_{k+1} = \argmin_{Z \in F} \hat{L}(Z, Z_k) \).
5: **end for**
6: **Output:** \( \pi_K(s) = \argmax_a \left( r(s, a) - \gamma \beta^{-1} \log Z_K(s, a) \right) \)

### 5.2 Sample Complexity

This section provides the theoretical guarantee for the convergence of the RFZI algorithm. Due to space limits, we defer the proof sketches as well as detailed proofs to Appendix H.

**Theorem 5** (Sample complexity for RFZI). Suppose Assumptions 2, 3, and 4 hold; then for any \( \delta \in (0, 1) \), with probability at least \( 1 - \delta \), the policy \( \pi_K \) obtained from the RFZI algorithm (Algorithm 1) satisfies:
\[
\mathbb{E}_{s_0 \sim \rho} [V^\star(s_0) - V^{\pi_K}(s_0)] \leq \frac{2\gamma^K}{(1-\gamma)^2} + \gamma \beta^{-1} e^{\frac{\beta}{1-\gamma}} \frac{2C}{(1-\gamma)^2} \left( 4\sqrt{\frac{2\log(|F|)}{N}} + 5\sqrt{\frac{2\log(8/\delta)}{N}} + \epsilon_c \right).
\]
The performance gap in Theorem 5 consists of three parts. The first part, \( \frac{2\gamma^K}{(1-\gamma)^2} \), captures the effect of the \( \gamma \)-contraction of the Bellman operators. The second term, \( \gamma \beta^{-1} e^{\frac{\beta}{1-\gamma}} \frac{2C}{(1-\gamma)^2} \epsilon_c \), is related to the approximation error caused by using function approximation. The third term, \( \gamma \beta^{-1} e^{\frac{\beta}{1-\gamma}} \frac{2C}{(1-\gamma)^2} \left( 4\sqrt{\frac{2\log(|F|)}{N}} + 5\sqrt{\frac{2\log(8/\delta)}{N}} \right) \), is caused by the error of replacing the projected Z-Bellman operator with its empirical version.

**Remark 6 (Comparison and Discussions).** Under similar Bellman completeness and concentrability assumptions, the sample complexity for risk-neutral offline RL [13] is \( O\!\left(\frac{C \log |F|}{(1-\gamma)^4 \epsilon^2}\right) \), while our result gives \( O\!\left(\frac{C^2 (\beta^{-1} e^{\beta/(1-\gamma)})^2 \log |F|}{(1-\gamma)^4 \epsilon^2}\right) \) (assuming \( \epsilon_c = 0 \)). As a consequence of robustness, our bound has a worse dependency on the concentrability factor \( C \) and an additional factor \( (\beta^{-1} e^{\frac{\beta}{1-\gamma}})^2 \). Note that the term \( \beta^{-1} e^{\frac{\beta}{1-\gamma}} \) first decreases and then increases with \( \beta \) as it goes from 0 to \( +\infty \), suggesting that the hyperparameter \( \beta \) also affects the learning difficulty of the problem. The choice of \( \beta \) should be neither too large nor too small, ideally on the same scale as \( 1 - \gamma \). It is still unclear to us whether the exponential dependency \( e^{\frac{\beta}{1-\gamma}} \) is a proof artifact or intrinsic to our setting; however, there are results under similar settings suggesting that this exponential dependency on the parameter \( \beta \) and the effective horizon \( \frac{1}{1-\gamma} \) is fundamental (e.g., Theorem 3 in [26]). We also compare our performance bound with the RFQI algorithm [64], which considers a similar offline learning setting and obtains sample complexity \( O\!\left(\frac{\log(|F||G|)}{(\beta\epsilon)^2(1-\gamma)^6}\right) \), where \( \beta \) in their setting is the radius of the uncertainty set. Note that both results share the same dependency on \( \epsilon \) and the concentrability constant \( C \). However, the bound in [64] includes an additional term on the size of the dual-variable space, \( \log |G| \), whereas we have the exponential dependence term \( e^{\frac{\beta}{1-\gamma}} \).
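To make Algorithm 1 concrete, here is a minimal tabular sketch of RFZI. Everything concrete is an assumption for illustration (the MDP, the uniform data distribution \( \mu \), and the tabular function class); with a tabular class, the empirical-loss minimization in step 4 reduces to averaging the regression targets per \( (s,a) \) pair.

```python
import numpy as np

# Minimal tabular sketch of Algorithm 1 (RFZI); MDP, dataset, and hyperparameters are made-up.
nS, nA, gamma, beta, N, K = 2, 2, 0.9, 1.0, 20000, 30
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.5, 0.2]])

rng = np.random.default_rng(0)
s_i = rng.integers(nS, size=N)                       # (s_i, a_i) ~ mu (uniform here, assumed)
a_i = rng.integers(nA, size=N)
s_next = np.array([rng.choice(nS, p=P[s, a]) for s, a in zip(s_i, a_i)])

Z = np.ones((nS, nA))                                # Z_0 = 1
for _ in range(K):
    Q_next = r - (gamma / beta) * np.log(Z)          # Q implied by the current Z
    target = np.exp(-beta * Q_next.max(axis=1))[s_next]   # regression targets of the empirical loss
    Z_new = np.ones((nS, nA))
    for s in range(nS):
        for a in range(nA):
            mask = (s_i == s) & (a_i == a)
            if mask.any():
                Z_new[s, a] = target[mask].mean()    # argmin of the empirical loss in a tabular class
    Z = np.clip(Z_new, np.exp(-beta / (1 - gamma)), 1.0)  # keep Z within the range of Assumption 4
pi_K = (r - (gamma / beta) * np.log(Z)).argmax(axis=1)
print("learned policy:", pi_K)
```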
6 CONCLUSIONS AND DISCUSSIONS

This paper proposes a new formulation of the risk-sensitive MDP and establishes its equivalence with the soft robust MDP. This equivalence enables us to develop the policy gradient theorem and prove the global convergence of the exact policy gradient method under direct parameterization. Additionally, for the KL-soft robust MDP (or, equivalently, the risk-sensitive MDP with the entropy risk measure) scenario, we propose a sample-based offline learning algorithm, namely the robust fitted-Z iteration (RFZI), and analyze its sample complexity.

Our work admittedly has its limitations. Currently, our policy gradient result is limited to the exact gradient case, and further research is needed to extend it to approximate gradients. The RFZI algorithm is specifically designed for KL-soft problems and may be more suitable for small action spaces. Our future work will focus on developing practical algorithms that can handle large or even continuous state and action spaces, as well as generalizing the approach to accommodate different penalty functions.

ACKNOWLEDGEMENT

This work was funded by NSF AI institute: 2112085, NSF ECCS: 2328241.

REFERENCES

[1] Alekh Agarwal, Sham M. Kakade, Jason D. Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift, 2020.

[2] Philippe Artzner, Freddy Delbaen, Jean-Marc Eber, and David Heath. Coherent measures of risk. Mathematical Finance, 9(3):203–228, 1999.

[3] Philippe Artzner, Freddy Delbaen, Jean-Marc Eber, David Heath, and Hyejin Ku. Coherent multiperiod risk adjusted values and Bellman's principle. Annals of Operations Research, 152:5–22, 2007.

[4] Kishan Panaganti Badrinath and Dileep Kalathil. Robust reinforcement learning using least squares policy iteration with provable performance guarantees. In International Conference on Machine Learning, pages 511–520. PMLR, 2021.

[5] Nicole Bäuerle and Alexander Glauner. Distributionally robust Markov decision processes and their connection to risk measures. Mathematics of Operations Research, 47(3):1757–1780, 2022.

[6] Bahram Behzadian, Marek Petrik, and Chin Pang Ho. Fast algorithms for $l_\infty$-constrained s-rectangular robust MDPs. Advances in Neural Information Processing Systems, 34:25982–25992, 2021.

[7] Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International Conference on Machine Learning, pages 449–458. PMLR, 2017.
[8] Aharon Ben-Tal, Stephen Boyd, and Arkadi Nemirovski. Extending scope of robust optimization: Comprehensive robust counterparts of uncertain problems. Mathematical Programming, 107(1-2):63–89, 2006. [9] Aharon Ben-Tal, Dimitris Bertsimas, and David B Brown. A soft robust model for optimization under ambiguity. Operations research, 58(4-part-2):1220–1234, 2010. [10] Aharon Ben-Tal, Ruud Brekelmans, Dick Den Hertog, and Jean-Philippe Vial. Globalized robust optimization for nonlinear uncertain inequalities. INFORMS Journal on Computing, 29(2):350–366, 2017. [11] Nicole Bäuerle and Alexander Glauner. Markov decision processes with recursive risk measures. European Journal of Operational Research, 296(3):953–966, 2022. ISSN 0377-2217.
mOTiVzTgF2
The discussion of the Adam condition number assumes very small gradient values, which are not observed realistically. This discussion should be contextualized with realistic values of the gradient.
ResiDual: Transformer with Dual Residual Connections

Anonymous authors
Paper under double-blind review

Abstract

Transformer networks have become the preferred architecture for many tasks due to their state-of-the-art performance. However, the optimal way to implement residual connections in Transformer, which are essential for effective training, is still debated. Two widely used variants are the Post-Layer-Normalization (Post-LN) and Pre-Layer-Normalization (Pre-LN) Transformers, which apply layer normalization after each residual block's output or before each residual block's input, respectively. While both variants enjoy their advantages, they also suffer from severe limitations: Post-LN causes a gradient vanishing issue that hinders training deep Transformers, and Pre-LN causes a representation collapse issue that limits model capacity. In this paper, we propose ResiDual, a novel Transformer architecture with Pre-Post-LN (PPLN), which fuses the connections in Post-LN and Pre-LN together, inheriting their advantages while avoiding their limitations. We conduct both theoretical analyses and empirical experiments to verify the effectiveness of ResiDual. Theoretically, we prove that ResiDual has a lower bound on the gradient to avoid the vanishing issue due to the residual connection from Pre-LN. Moreover, ResiDual also has diverse model representations to avoid the collapse issue due to the residual connection from Post-LN. Empirically, ResiDual outperforms both Post-LN and Pre-LN on several machine translation benchmarks across different network depths and data sizes.

Figure 1: Overview of Post-LN, Pre-LN, and ResiDual. Circles with different colors represent different variables and rectangles represent different operations. See Section 2 for more details.

1 Introduction

Transformer (Vaswani et al., 2017) has emerged as a powerful neural network architecture that has been successfully applied in various AI tasks, including machine translation (Vaswani et al., 2017), language modeling and generation (Radford et al., 2018; 2019; Brown et al., 2020), image recognition (Dosovitskiy et al., 2020), and speech synthesis (Ren et al., 2019). Despite its success, researchers are still exploring ways to further enhance its performance and deepen the understanding of its inner workings (Wang et al., 2019; Katharopoulos et al., 2020; Fedus et al., 2021). Among them, one area of ongoing research is the study of residual connections in the Transformer architecture (Liu et al., 2020; Xiong et al., 2020; Bachlechner et al., 2021).

Two variants of residual connections have been proposed since the introduction of the Transformer, known as Post-LN and Pre-LN. The Post-LN variant applies layer normalization (LN) operations after the output of each residual block. This variant is used in several prominent models such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and ALBERT (Lan et al., 2019). The Pre-LN variant, on the other hand, applies LN operations before the input to each residual block. This variant is used in models such as the GPT series, ViT (Dosovitskiy et al., 2020), and PaLM (Chowdhery et al., 2022). Although both variants have been widely used, each one has its own drawbacks, which are summarized in Table 1. As shown in Figure 1, the key difference between the two residual variants is how the layer normalization (LN) normalizes the outputs of each block. With Post-LN, the outputs of lower blocks (i.e., the blocks close to the input) are normalized multiple times.
As a result, the gradient norm decays exponentially with depth and eventually vanishes in the lower layers (Xiong et al., 2020). This problem does not exist in Pre-LN because the gradient can flow directly to each block. However, the Pre-LN architecture has the representation collapse issue (Liu et al., 2020), which will negatively impact the model's capacity. The representation collapse issue refers to the fact that the hidden representations of higher blocks (i.e., the blocks close to the output) will be similar to each other in Pre-LN models. Therefore, the higher blocks will have little contribution to the model capacity.

| Method | Gradient Vanishing | Representation Collapse |
|--------|--------------------|-------------------------|
| Post-LN | ☹️ | 😊 |
| Pre-LN | 😊 | ☹️ |
| ResiDual | 😊 | 😊 |

Table 1: Comparison of Post-LN, Pre-LN, and our method. 😊 means the model does not suffer from the issue and ☹️ means the model has such an issue.

Several approaches have been proposed to address these problems, which can generally be categorized into three categories. Firstly, some methods aim to modify the architecture, such as DLCL (Wang et al., 2019), NormFormer (Shleifer et al., 2021), RealFormer (He et al., 2021), and B2T (Takase et al., 2022), which add extra components such as aggregations or LNs to stabilize training. Secondly, some methods add different weights to the residual, such as Admin (Liu et al., 2020), DeepNet (Wang et al., 2022a), $\tau$-ResNet (Zhang et al., 2022), and ReZero (Bachlechner et al., 2021). Lastly, some methods use better initialization, such as T-Fixup (Huang et al., 2020), DeepNet (Wang et al., 2022a), and Foundation Transformer (Wang et al., 2022b), to reduce variance and stabilize training.

In this study, we focus on the first category and propose a new architecture for Transformer models to address the drawbacks of both variants while retaining their benefits. Figure 1(c) provides an overview of our method. Our design goal is to maintain the advantages of both variants and avoid their disadvantages by employing two residual connections. In particular, our ResiDual model utilizes a Pre-Post-LN (PPLN) that consists of two residuals: one is similar to the Pre-LN to prevent the gradient vanishing issue, while the other is akin to the Post-LN, which sustains representation diversity to avoid the representation collapse issue.

To validate the effectiveness of our proposed method, we conduct both theoretical analysis (Section 3) and empirical study (Section 4) to show that our method can achieve the best of both worlds. From the theoretical perspective, we first show that gradient vanishing is still a critical problem even when using the Adam (Kingma & Ba, 2014) optimizer. We also show that ResiDual has a bounded gradient norm and thus does not have such an issue. Furthermore, we study the representation collapse issue and show that ResiDual has the same hidden representation diversity as Post-LN. Therefore, ResiDual does not have the representation collapse issue of Pre-LN. Empirically, we conduct comprehensive experiments on machine translation tasks, which are among the most representative tasks in natural language processing. Our datasets comprise small-scale (IWSLT), mid-scale (WMT), and large-scale (OPUS) datasets. Our experimental results demonstrate that our method outperforms baselines across all three datasets.

In summary, this work makes the following contributions:
• We present ResiDual, a simple yet potent variation of the Transformer architecture, which tackles both the gradient vanishing problem in Post-LN and the representation collapse issue in Pre-LN Transformer models.
• Our theoretical analysis demonstrates that this new design can leverage the strengths of both variants while avoiding their weaknesses.
• Our experimental results provide further evidence of the effectiveness of our approach, as it achieves superior performance compared to both the Post-LN and Pre-LN Transformer models across multiple datasets.

2 METHOD

2.1 DISADVANTAGES OF POST-LN AND PRE-LN

In this section, we briefly review the architectures of Post-LN and Pre-LN, whose illustrations are available in Figure 1 (a) and (b). We also discuss the shortcomings of each architecture.

**Gradient Vanishes with Post-LN.** The Post-LN architecture is shown in Figure 1(a). To be more specific, given a Post-LN Transformer network with $N$ residual blocks, we assume the input shape is $n \times d$, where $n$ and $d$ denote the sequence length and embedding size (we omit the batch dimension, which does not affect our analysis). Variables with a vector arrow (e.g., $\vec{x} \in \mathbb{R}^{n \times d}$) denote the whole sequence and variables without it (e.g., $x \in \mathbb{R}^d$) denote an element of the sequence. We use $\vec{x}^a \in \mathbb{R}^{n \times d}$ to denote the tensor after the add operation and use subscript $k$ (i.e., $\vec{x}_k^a$) to denote the tensor in the $k$-th block. We also use $\vec{x}_k^{ln} \in \mathbb{R}^{n \times d}$ to denote the normalized tensor and $\vec{x}_k^f \in \mathbb{R}^{n \times d}$ to denote the output of the function $f_k(\cdot; w_k)$ in the $k$-th block. The $f_k$ can be a self-attention, cross-attention, or feed-forward block with parameters $w_k$. Using these notations, the Post-LN computation of each element in the $k$-th block is
$$x_k^a = x_k^{ln} + x_k^f = x_k^{ln} + f_k(\vec{x}_k^{ln}; w_k); \quad x_{k+1}^{ln} = \text{LN}(x_k^a).$$
Finally, the output $y$ is computed by $y = x_{N+1}^{ln} = \text{LN}(x_N^a)$. Intuitively, $x_k^f$ is normalized $N - k$ times, and so are the gradients of $w_k$. Therefore, the gradients of the lower blocks will be small. From Xiong et al. (2020), we know that for the Post-LN Transformer, the gradient norm decreases exponentially from deep layers to shallow layers. Intuitively, such imbalanced gradients will impede model training. Therefore, in practice, training tricks such as learning-rate warm-up are necessary to train a Post-LN model.

**Representation Collapses with Pre-LN.** With the same notations, the Pre-LN computation is
$$x_k^{ln} = \text{LN}(x_k^a); \quad x_{k+1}^a = x_k^a + x_k^f = x_k^a + f_k(\vec{x}_k^{ln}; w_k).$$
Similarly, the model output is $y = \text{LN}(x_{N+1}^a) = \text{LN}(\sum_{k=1}^{N} x_k^f)$. Intuitively, as $x_k^f$ is only normalized once when computing $y$, neither the forward nor the backward pass is blocked by LN. Thus, Pre-LN does not have the gradient vanishing issue. However, it has another issue called representation collapse. More specifically, Liu et al. (2020) show that the ratio $\frac{\sqrt{\text{Var}[x_k^f]}}{\sqrt{\text{Var}[x_k^a + x_k^f]}}$ is likely to be smaller for higher blocks (i.e., blocks with larger $k$). This means the output of the later blocks ($x_k^f$) has little contribution to the total variance of $x_{k+1}^a$. In Section 3.2, we show that the difference between $x_{k+1}^{ln}$ and $x_k^{ln}$ (i.e., $|x_{k+1}^{ln} - x_k^{ln}|$) decays with $k$, which indicates that the inputs of the higher blocks will collapse to similar values. We also show that this issue may limit the capacity of the model.
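The two block computations above are easy to state in code. The sketch below is a simplified illustration, not the paper's implementation: the block function \( f \) is a plain linear map, the per-row LayerNorm has no learned scale/bias, and all dimensions are assumed values.

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
W = rng.normal(scale=d ** -0.5, size=(d, d))   # stand-in for f_k (a linear map)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def f(x):
    return x @ W

def post_ln_block(x_ln):
    """x_k^a = x_k^ln + f(x_k^ln);  x_{k+1}^ln = LN(x_k^a)."""
    return layer_norm(x_ln + f(x_ln))

def pre_ln_block(x_a):
    """x_k^ln = LN(x_k^a);  x_{k+1}^a = x_k^a + f(x_k^ln)."""
    return x_a + f(layer_norm(x_a))

x = rng.normal(size=(4, d))                    # a length-4 "sequence"
print(post_ln_block(x).shape, pre_ln_block(x).shape)
```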
2.2 RESIDUAL

The goal of our model is to take advantage of both variants and avoid both disadvantages. To achieve this goal, we use residuals from both variants; the overview of our method is in Figure 1 (c). More specifically, the two residual connections are illustrated by the left and right vertical lines in the figure. The left one, which is similar to the conventional Post-LN, is
\[ x_k^a = x_k^{ln} + x_k^f = x_k^{ln} + f_k(\vec{x}_k^{ln}; w_k); \quad x_{k+1}^{ln} = \text{LN}(x_k^a). \]
Meanwhile, the right residual, which is similar to the conventional Pre-LN, is formulated by
\[ x_{k+1}^d = x_k^d + x_k^f, \]
where \( x^d \in \mathbb{R}^{n \times d} \) is the dual-residual tensor, which plays the same role as \( x^a \) in Pre-LN and allows the gradients to flow directly to each block. Finally, the output \( y \) is computed by adding the representations of both residuals, which is
\[ y = x_{N+1}^{ln} + \text{LN}(x_{N+1}^d). \]

2.3 Discussion

In this section, we only give an intuitive understanding of ResiDual; the mathematical analysis is provided in Section 3.

**Avoiding the Gradient Vanishing** In ResiDual, the gradient of each block flows through both residual connections. Thus, even if the gradient coming from the Post-LN-like residual vanishes, there will still be gradients from the Pre-LN-like residual. This prevents the gradient vanishing issue. We provide the details of the lower bound of the gradient norm in Section 3.1.

**Avoiding the Representation Collapse** Our Pre-LN-like residual only affects the model output and does not affect the input to each block. Therefore, the representation capacity is the same as that of a Post-LN model. Furthermore, because the final output of our model is the sum of the two residual connections, the representation of the output will not collapse either. We provide the details of the lower bound of the representation capacity in Section 3.2.
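Putting Section 2.2 together, one forward pass of ResiDual maintains both streams. Again, this is a simplified sketch with a linear \( f \) and a plain LayerNorm as assumed stand-ins for the real blocks; the initialization of the two streams from the embedding is also an assumption of this sketch rather than something specified in the text above.

```python
import numpy as np

d, N = 8, 6
rng = np.random.default_rng(0)
Ws = [rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(N)]  # one f_k per block

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def residual_forward(x_embed):
    x_ln = layer_norm(x_embed)        # Post-LN-like stream x^ln (initialization assumed)
    x_d = x_embed.copy()              # Pre-LN-like (dual) stream x^d (initialization assumed)
    for k in range(N):
        x_f = x_ln @ Ws[k]            # x_k^f = f_k(x_k^ln; w_k)
        x_ln = layer_norm(x_ln + x_f) # x_{k+1}^ln = LN(x_k^ln + x_k^f)
        x_d = x_d + x_f               # x_{k+1}^d = x_k^d + x_k^f
    return x_ln + layer_norm(x_d)     # y = x_{N+1}^ln + LN(x_{N+1}^d)

x = rng.normal(size=(4, d))
print(residual_forward(x).shape)      # (4, 8)
```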
3 Theoretical Analysis of ResiDual

In this section, we formally study the gradient vanishing and representation collapse issues. We also prove that our method does not have such issues.

3.1 The Gradient Vanishing Issue

In order to present the analysis in a concise way, we study a simple setting and make several assumptions. In Transformer, the \( f \) function can be either a feed-forward block or a multi-head attention block. For a feed-forward block, \( f(x) := Wx \), where we ignore the layer index. For a multi-head attention block, we have weight matrices \( W_Q, W_K, W_V \). For simplicity, we focus on single-head attention. Similar to Xiong et al. (2020), we initialize \( W_Q \) to be a zero matrix; consequently, the attention is a uniform distribution at initialization and \( f(x^{(j)}) := \frac{1}{n} \sum_{j=1}^{n} x^{(j)} W_V \), where we drop the layer index and \( x^{(j)}, j \in [n] \) are the input sequence with length \( n \). We usually drop the superscript index \( (j) \) for notational simplicity when the context is clear. We introduce \( \vec{x} := \{ x^{(j)}, j \in [n] \} \) and use \( w \) to denote the collection of parameter matrices in \( f \). Based on the above assumptions, without loss of generality, we further assume that the \( f \) function preserves the norm, i.e., \( \| f(x) \| = \| x \| \). This assumption is asymptotically true when the network width goes to infinity and the initialization variance is properly scaled.

We assume that the signal is standardized after layer normalization, i.e., \( \| x_k^{ln} \| = \sqrt{d} \) for all \( k \in [N] \), and that for \( x \in \mathbb{R}^d \), the Jacobian matrix through LN satisfies \( \frac{\partial \text{LN}(x)}{\partial x} \approx \frac{\sqrt{d}}{\| x \|_2} I \). This approximation can be achieved if the mean of \( x \) is 0 and the variance is \( \frac{1}{d} \| x \|^2 \), while ignoring the gradient back-propagated through the mean and variance. The rationale behind this assumption is that the error signal (gradients) back-propagating through LN becomes smaller as the norm of the input to the LN gets larger. In the Post-LN Transformer, the scale of the inputs to the layer normalization is independent of \( N \), and thus the gradients of parameters in the last layer are independent of \( N \).

**Gradient Norm Estimation for Post-LN and Pre-LN Transformers.** From Xiong et al. (2020), we know that for the Post-LN Transformer, the gradient norm of block $k$ decreases exponentially as the block index $k$ gets smaller. This indicates that the gradients of the blocks close to the input would be exponentially small for deep Transformers. In contrast, for the Pre-LN Transformer, the gradient norm of each block is roughly independent of the block index $k$. For completeness, we rephrase the result from Xiong et al. (2020) with our notations and assumptions. We also present the proof in a more accurate way in the Appendix.

**Theorem 3.1** (Gradients of the $k$-th block in the Post-LN and Pre-LN Transformers). Given the above assumptions on $f$ and LN, for the Post-LN Transformer with $N$ blocks, the gradient of the parameters of the $k$-th block satisfies
$$\left\| \frac{\partial L}{\partial w_k} \right\|_F \approx O \left( \left(\tfrac{1}{2}\right)^{(N-k)/2} e^{\sqrt{N-k}} \right);$$
for the Pre-LN Transformer with $N$ blocks, the gradient of the parameters of the $k$-th block satisfies
$$\left\| \frac{\partial L}{\partial w_k} \right\|_F \approx O \left( \sqrt{\frac{\log(N-k)}{N}} \right),$$
where we ignore the terms irrelevant to $k$ and $N$.

**Analysis of Adam.** In practice, adaptive optimizers such as Adam are widely used to train Transformer networks. However, the vanishing-gradient issue cannot be solved by adaptive optimizers, and thus we aim to fix the issue in the network architecture. Specifically, we show that the Adam update is ill-conditioned when gradients vanish. Let $\alpha, t, \epsilon, \beta_1, \beta_2$ denote the learning rate, step, smoothing factor, and first and second moment decay rates, respectively, and let $w^{(t)}, g, m^{(t)}, v^{(t)}$ denote the parameters, gradients, and bias-corrected first and second moment estimates at time $t$. Meanwhile, we use $u(g^{(t)}) = \alpha \cdot m^{(t)}/(\sqrt{v^{(t)}} + \epsilon)$ to denote the Adam update (i.e., $w^{(t)} \leftarrow w^{(t-1)} - u(g^{(t)})$); the full formula is in Appendix B. Because the Adam update is element-wise, we also use $u(g)$ to denote its scalar, per-coordinate form, so that $u(g) = [u(g_1), u(g_2), \cdots, u(g_d)]$. We will show that, when the gradients vanish, $u(g)$ is sensitive to small perturbations (i.e., ill-conditioned) because of its large condition number.

**Theorem 3.2.** The Adam update function $u(g)$ is ill-conditioned for vanished gradients ($g = 0$) in the early stage of training (i.e., when $t$ is small).
**Proof.** Considering that $u(g)$ is differentiable, the absolute condition number $\hat{\kappa}$ for $u(g^{(t)})$ is
$$\hat{\kappa} = \lim_{\delta \to 0} \sup_{\|\delta g\| \leq \delta} \frac{\|u(g + \delta g) - u(g)\|}{\|\delta g\|} = \|J(g)\| = \sqrt{\sum_{i=1}^{d} \left( \frac{\partial u}{\partial g_i} \right)^2}.$$
The full expression of $\frac{\partial u}{\partial g}$ can be found in Appendix B. In the early stage (i.e., when $t$ is small), for vanished gradients ($g_i = 0$), the absolute condition number $\hat{\kappa}$ is
$$\hat{\kappa} = \alpha \frac{1 - \beta_1}{1 - \beta_2} \sqrt{\sum_{i=1}^{d} \frac{1}{\left( \epsilon + \sqrt{\frac{\beta_2 v_i^{(t-1)}}{1 - \beta_2}} \right)^2}} \approx \frac{\alpha \sqrt{d}}{\epsilon}. \tag{3}$$
For example, in a classic setting where $d = 1024, \epsilon = 10^{-6}, \alpha = 10^{-4}$, we have $\hat{\kappa} = 3200$, which is a very large number. This tells us that in the early stage, $u(g^{(t)})$ is ill-conditioned. Intuitively, when a small noise $\|\delta g\| \leq \delta$ is added to the gradient $g$, the change of the update $\|u(g + \delta g) - u(g)\|$ could be thousands of times larger than $\|\delta g\|$. This will make the training unstable and vulnerable to small perturbations. This study is also consistent with the empirical findings by Wang et al. (2022a) that exploding gradients in higher layers are not the root cause of Post-LN training difficulty. Furthermore, to verify our approximation, we also provide a simulation in Appendix B.

Moreover, from Equation (3), given a fixed model width $d$, there seem to be two possible ways to reduce $\hat{\kappa}$: increasing $\epsilon$ or decreasing $\alpha$. However, the first one is not viable because a large $\epsilon$ will make an adaptive optimizer less adaptive. Therefore, in practice, researchers have to reduce the learning rate $\alpha$ (e.g., using learning-rate warm-up) to ease this problem. To conclude, gradient vanishing is a critical issue even when the model is trained with adaptive optimizers. As a result, we propose to solve this problem from the architecture perspective.
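The estimate in Equation (3) and the worked example above can be checked numerically. The sketch below perturbs a zero gradient through one bias-corrected Adam step and measures the per-coordinate response; it only covers the vanished-gradient regime analyzed here, and the hyperparameter values are assumed defaults, not the paper's training configuration.

```python
import numpy as np

# Numerical check of Equation (3): for vanished gradients, a tiny perturbation of g changes the
# Adam update roughly (alpha/eps) times as much per coordinate, so kappa ~ alpha*sqrt(d)/eps.
alpha, eps, beta1, beta2, d = 1e-4, 1e-6, 0.9, 0.999, 1024

def adam_update(g, m=0.0, v=0.0, t=1):
    """One bias-corrected Adam update u(g) from state (m, v) at step t."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return alpha * m_hat / (np.sqrt(v_hat) + eps)

delta = 1e-9 * np.ones(d)                       # tiny perturbation of a zero gradient
jac_diag = (adam_update(delta) - adam_update(np.zeros(d))) / 1e-9
kappa_numeric = np.sqrt(np.sum(jac_diag ** 2))
print(kappa_numeric, alpha * np.sqrt(d) / eps)  # both are roughly 3200
```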
### 3.2 The Representation Collapse Issue

**The Representation Collapse in Pre-LN** The issue with the representation capability of Pre-LN was initially observed by Liu et al. (2020). In summary, the Pre-LN Transformer's hidden representation cannot be refined by deeper layers due to the normalization of layer outputs. In this work, we propose a novel analysis approach that directly examines the distribution of hidden-state changes, represented by $|x_{k+1}^{ln} - x_k^{ln}|$, and output changes, denoted by $|y_N - y_{N-1}|$. Our new method offers a straightforward way to obtain quantitative results regarding the convergence rate.

**Theorem 3.3.** For Pre-LN, assume $x_k^f \sim \mathcal{N}(0, \sigma^2 I)$ independently for all $k \in [N]$; then $x_{k+1}^{ln} - x_k^{ln} \sim \mathcal{N}(0, \omega_k^2 I)$ where $\omega_k^2 = \frac{2}{\sqrt{k}(\sqrt{k-1} + \sqrt{k})}$.

**Proof.** As $x_k^f \sim \mathcal{N}(0, \sigma^2 I)$, we have $x_k^a = \sum_{j=1}^{k-1} x_j^f$ and thus $x_k^a \sim \mathcal{N}(0, (k-1)\sigma^2 I)$. For the normalization layer, we approximate its effect as follows: $x_k^{ln} = \frac{x_k^a}{\sqrt{k-1}\,\sigma}$. Then we have
$$x_{k+1}^{ln} - x_k^{ln} = \frac{x_{k+1}^a}{\sqrt{k}\,\sigma} - \frac{x_k^a}{\sqrt{k-1}\,\sigma} = \frac{\sqrt{k-1} - \sqrt{k}}{\sqrt{k(k-1)}\,\sigma} \cdot x_k^a + \frac{1}{\sqrt{k}\,\sigma} \cdot x_k^f.$$
We know that $\frac{\sqrt{k-1} - \sqrt{k}}{\sqrt{k(k-1)}\,\sigma} \cdot x_k^a \sim \mathcal{N}\left(0, \left(\frac{\sqrt{k-1} - \sqrt{k}}{\sqrt{k}}\right)^2 I\right)$ and $\frac{1}{\sqrt{k}\,\sigma} \cdot x_k^f \sim \mathcal{N}(0, \frac{1}{k} I)$. Because $x_k^a$ and $x_k^f$ are independent, we have $x_{k+1}^{ln} - x_k^{ln} \sim \mathcal{N}(0, \omega_k^2 I)$ with $\omega_k^2 = \left(\frac{\sqrt{k-1} - \sqrt{k}}{\sqrt{k}}\right)^2 + \frac{1}{k} = \frac{2}{\sqrt{k}(\sqrt{k-1} + \sqrt{k})}$.

**Corollary 3.4.** For each coordinate $i$ of $x_{k+1}^{ln} - x_k^{ln}$, we have $\mathbb{E}[|(x_{k+1}^{ln} - x_k^{ln})_i|] \sim O(\frac{1}{\sqrt{k}})$.

From Corollary 3.4, we can see that the expectation of $|(x_{k+1}^{ln} - x_k^{ln})_i|$ decreases to 0 as $k$ increases to infinity at rate $1/\sqrt{k}$. This means that, when the number of layers increases, the inputs to later layers will be similar to each other. Thus, the capability of the later layers is not fully used because they cannot further refine the representations.

**Corollary 3.5.** When adding an extra layer to an $N-1$ layer Pre-LN Transformer, the output difference satisfies $\mathbb{E}[|(y_N - y_{N-1})_i|] \sim O(\frac{1}{\sqrt{N}})$ for each coordinate $i$.

The proof of Corollary 3.5 is in Appendix C; it means that adding an extra layer to a deep Pre-LN Transformer has little impact on the output. Intuitively, this means the extra layer also cannot refine the model outputs and the model's capacity is not fully used.

### 3.3 Analysis of ResiDual

**ResiDual Does Not Suffer From the Gradient Vanishing Issue** For the ResiDual architecture (Figure 1c), we can view it as a mixture of the Post-LN Transformer and the Pre-LN Transformer. Specifically, in the forward process, the ResiDual Transformer behaves exactly the same as Post-LN except for adding a dual branch with the normalized sum of all block outputs at the end. In the backward process, the error signal back-propagates through both branches. We can explicitly write down the gradients at block $k$ as follows:
$$\frac{\partial \mathcal{L}}{\partial w_k} = \left( \frac{\partial \mathcal{L}}{\partial w_k} \right)_{\text{post}} + \left( \frac{\partial \mathcal{L}}{\partial w_k} \right)_{\text{dual}},$$
where $\left( \frac{\partial \mathcal{L}}{\partial w_k} \right)_{\text{post}}$ denotes the gradient component from the Post-LN branch and $\left( \frac{\partial \mathcal{L}}{\partial w_k} \right)_{\text{dual}}$ denotes the gradient component from the dual branch.

Figure 2: Study of the gradient norm and hidden representation w.r.t. layer $k$ in each method. Details in Appendix E.
Specifically,
$$\left( \frac{\partial \mathcal{L}}{\partial w_k} \right)_{\text{post}} = \frac{\partial \mathcal{L}}{\partial \vec{x}_{N+1}} \left( \prod_{l=k}^{N} \frac{\partial \vec{x}_{l+1}}{\partial \vec{x}_l^{ln}} \frac{\partial \vec{x}_l^{ln}}{\partial \vec{x}_l} \right) \frac{\partial \vec{x}_k^f}{\partial w_k} = \frac{\partial \mathcal{L}}{\partial \vec{x}_{N+1}} \left( \prod_{l=k}^{N} \left( I + \frac{\partial \vec{x}_l^f}{\partial \vec{x}_l^{ln}} \frac{\partial \vec{x}_l^{ln}}{\partial \vec{x}_l} \right) \right) \frac{\partial \vec{x}_k^f}{\partial w_k},$$
and
$$\left( \frac{\partial \mathcal{L}}{\partial w_k} \right)_{\text{dual}} = \frac{\partial \mathcal{L}}{\partial \vec{x}_{N+1}} \left( \prod_{l=k+1}^{N} \frac{\partial \vec{x}_{l+1}}{\partial \vec{x}_l} \right) \frac{\partial \vec{x}_{k+1}^f}{\partial w_k} = \frac{\partial \mathcal{L}}{\partial \vec{x}_{N+1}} \left( \prod_{l=k+1}^{N} \left( I + \frac{\partial \vec{x}_l^f}{\partial \vec{x}_l^{ln}} \frac{\partial \vec{x}_l^{ln}}{\partial \vec{x}_l} \right) \right) \frac{\partial \vec{x}_{k+1}^f}{\partial w_k}.$$
We see that when $k$ is small, the Pre-LN gradient component dominates, and when $k$ is close to $N$, the Post-LN gradient component dominates. It is safe to estimate the gradient norm of the $k$-th block in the ResiDual Transformer as follows:
$$\left\| \frac{\partial \mathcal{L}}{\partial w_k} \right\|_F \approx \max \left\{ O \left( \left(\tfrac{1}{2}\right)^{(N-k)/2} e^{\sqrt{N-k}} \right), O \left( \sqrt{\frac{\log(N-k)}{N}} \right) \right\},$$
where again we ignore the terms irrelevant to $N$ and $k$. Therefore, the ResiDual architecture does not suffer from the gradient vanishing problem. It is worth noting that the gradient vanishing problem does not directly relate to inefficient training, because in Adam the actual update is rescaled to a normal magnitude even if an extremely small gradient is obtained. However, the gradient vanishing problem affects the stability of the Adam optimizer, as we argued in Section 3.1. In Figure 2(a), we show the gradient distribution for different methods. We can find that Post-LN has almost zero gradient for the early layers, while ResiDual (orange line) does not have such an issue. This clearly shows that our method can ensure a lower bound on the gradient norm. Meanwhile, note that none of these models have the exploding-gradient issue. According to Theorem 3.1, the gradient of the last layer (i.e., $k = N$) is not related to $N$.

**ResiDual Does Not Suffer From the Representation Collapse Issue.** Post-LN and ResiDual do not have the representation collapse issue. Formally,

**Theorem 3.6.** In Post-LN and ResiDual, assume $x_k^f \sim \mathcal{N}(0, \sigma^2 I)$ independently for all $k \in [N]$; then $x_{k+1}^{ln} - x_k^{ln} \sim \mathcal{N}(0, \omega^2 I)$ where $\omega$ is not related to $k$.

**Proof.** As \( x_{k+1}^{ln} = \text{LN}(x_k^a) = \text{LN}(x_k^{ln} + x_k^f) \), and \( x_k^{ln} \sim \mathcal{N}(0, I) \), \( x_k^f \sim \mathcal{N}(0, \sigma^2 I) \), we have
\[ x_{k+1}^{ln} - x_k^{ln} = \frac{x_k^{ln} + x_k^f}{\sqrt{1 + \sigma^2}} - x_k^{ln} = \frac{(1 - \sqrt{1 + \sigma^2})\,x_k^{ln} + x_k^f}{\sqrt{1 + \sigma^2}}. \]
Thus, \( x_{k+1}^{ln} - x_k^{ln} \sim \mathcal{N}(0, \omega^2 I) \), where \( \omega^2 = 2 - \frac{2\sqrt{1 + \sigma^2}}{1 + \sigma^2} \) and \( \omega \) is not related to \( k \).
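The contrast between Theorem 3.3 and Theorem 3.6 can be reproduced with a short Monte-Carlo sketch under the theorems' i.i.d. Gaussian assumption on the block outputs; it mirrors the trend reported in Figure 2(b). Depth, width, and \( \sigma \) below are made-up values, and the simulation follows the simplified normalization used in the two proofs rather than a real Transformer.

```python
import numpy as np

# Monte-Carlo sketch of Theorems 3.3 / 3.6 under their i.i.d. Gaussian assumptions.
rng = np.random.default_rng(0)
d, N, sigma = 4096, 64, 1.0

xf = rng.normal(scale=sigma, size=(N, d))                      # block outputs x_k^f

# Pre-LN: x_k^ln ~ (sum of the first k block outputs) / (sqrt(k) * sigma), per Theorem 3.3's proof
xa = np.cumsum(xf, axis=0)
pre_ln = xa / (sigma * np.sqrt(np.arange(1, N + 1))[:, None])
pre_diff = np.abs(np.diff(pre_ln, axis=0)).mean(axis=1)

# Post-LN / ResiDual: x_{k+1}^ln = (x_k^ln + x_k^f) / sqrt(1 + sigma^2), per Theorem 3.6's proof
post_ln = np.zeros((N, d))
x = rng.normal(size=d)                                         # x_1^ln ~ N(0, I)
for k in range(N):
    x = (x + xf[k]) / np.sqrt(1 + sigma ** 2)
    post_ln[k] = x
post_diff = np.abs(np.diff(post_ln, axis=0)).mean(axis=1)

for k in (1, 4, 16, 60):
    print(k, round(pre_diff[k], 3), round(post_diff[k], 3))
# The Pre-LN gaps shrink roughly like 1/sqrt(k); the Post-LN/ResiDual gaps stay flat.
```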
When adding an extra layer to a \( N - 1 \) layer Pre-LN Transformer, the output difference \( \mathbb{E}[|(y_N - y_{N-1})_i|] \geq \sqrt{\frac{2}{\pi}} \omega \) for each coordinate \( i \). The proof of 3.7 is in the supplementary material. From these analyses, we can see that the variance of \( x_{k+1}^{ln} - x_k^{ln} \) will not decrease when the depth increases, so that later layers can continue refining the hidden representation. Meanwhile, according to Corollary 3.7, the model output can also be refined with a lower bound that is not related to depth. In another words, ResiDual can avoid the representation bottleneck of Pre-LN model. To demonstrate this, we also show the \( |x_{k+1}^{ln} - x_k^{ln}| \) for different architectures in Figure 2(b). As the lines show, our method (orange line) has a consistent value of \( |x_{k+1}^{ln} - x_k^{ln}| \), while the Pre-LN’s value will decrease when the depth is high. 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTINGS Data We conducted experiments on three datasets: the IWSLT-14 English to German (EN→DE) dataset (Cettolo et al., 2014), the WMT German to English (DE→EN) dataset (Bojar et al., 2014), and the OPUS-100 multilingual dataset (Zhang et al., 2020). More details are in Appendix H. Model Our model is implemented using the FairSeq (Ott et al., 2019) framework with conventional settings as previous works. Notably, our method introduce only negligible parameters to the vanilla Transformer network. Meanwhile, given that the residual connection operations have a relatively small computational cost compared to Attention and FFN layers, the efficiency of our method should not hinder its practical use. We empirically observed about 3% increase in computation cost. Please refer to the Appendix H for hyper-parameters. 4.2 EXPERIMENTAL RESULTS ON IWSLT The experimental results of the IWSLT’14 dataset are presented in Table 2. Two types of models were used: shallow models with 6-layer encoders and 6-layer decoders (E6D6), and deep models with 12-layer encoders and 12-layer decoders (E12D12). We made the following observations: Firstly, the Post-LN method was successful in converging for E6D6 but not for E12D12. Secondly, the Pre-LN method converged in both depths, but its performance (35.12, 35.18) was inferior to that of the Post-LN E6D6 (35.37) or our E6D6 (35.63). Thirdly, the methods such as DeepNet (Wang et al., 2022a) and Admin (Liu et al., 2020) only showed a slight improvement over the vanilla models, and our method achieved best performance. Especially, in E12D12, we have 0.9-point BLEU gain over the standard Pre-LN model. Our preliminary experiments revealed that increasing the model depth further led to over-fitting issues for all models due to limited data. Therefore, we do not report 18 layer model results on this dataset. 4.3 EXPERIMENTAL RESULTS ON WMT Table 2: Experimental Results on IWSLT. | Method | E6D6 | E12D12 | |--------------|------|--------| | Post-LN | 35.37| Fail | | Pre-LN | 35.12| 35.18 | | DeepNet | 35.34| 35.39 | | Admin | 35.50| 35.67 | | T-Fixup | 34.88| 35.45 | | NormFormer | 35.14| 31.00 | | ResiDual(Ours)| 35.63| 36.09 | The experimental results on shallow (E6D6) and deep (E18D18) models are presented in Table 3. We only report the average score here and more details can be found in Table 6 and Table 7 in Appendix F. Firstly, we find that the Post-LN model can only converge in the E6D6 setting but not in E18D18 setting. Secondly, the Pre-LN model shows convergence in both E6D6 and E18D18. 
However, the performance of the Pre-LN model in E18D18 (26.57) is similar to that of the Post-LN model in E6D6 (26.59). Finally, our method achieved the best performance for both shallow and deep models. In particular, we observed a 1.1-point BLEU improvement over Pre-LN for the E18D18 model.

### 4.4 Experimental Results on OPUS-100

We evaluate our method on the OPUS-100 dataset, which consists of 100 language pairs and $55M$ parallel sentence pairs. Because we trained a single model for both the from-English (EX) and to-English (XE) directions, the total data size is about $110M$ sentence pairs and approximately 4 billion tokens. Table 4 shows the experimental results. In addition to the original baselines provided by Zhang et al. (2020), we also reproduced an 18-layer encoder, 18-layer decoder model (E18D18). We found that the Post-LN model failed to converge and thus only show the Pre-LN results in Table 4. As we can see from the table, our method gains about 0.7 BLEU points over the standard Pre-LN model. The BLEU score is almost identical to that of a 100-layer DeepNet (Wang et al., 2022a) model, which is about 5 times deeper than our model. This demonstrates that our model can use deep layers more effectively.

### 4.5 Study of Learning-Rate Warm-Up

One of the objectives of our approach is to facilitate easy and stable training of Transformer models. Therefore, we conducted experiments using different learning-rate schedules on the IWSLT dataset. Table 5 presents the results for various models with or without learning-rate warm-up. Further details can be found in Appendix G. We observe that Post-LN necessitates warm-up for convergence, while Pre-LN and our method do not. This is consistent with our study in Section 3.

| Method | Post-LN | Pre-LN | ResiDual |
|--------|---------|--------|----------|
| LR Warm-Up | Yes No | Yes No | Yes No |
| E6D6 | Fail | Fail | Fail |
| E12D12 | Fail | Fail | Fail |

### 5 Conclusion

This research advances the Transformer architecture and offers an effective strategy for optimizing it with enhanced performance. This paper first examines the limitations of two widely employed variants and then introduces a novel approach, referred to as ResiDual, to mitigate both issues. ResiDual consists of two residual connections that circumvent the gradient vanishing and representation collapse problems. Theoretical analysis and empirical results validate that the suggested model can surmount both challenges while preserving the advantages of each residual connection.

REPRODUCIBILITY STATEMENT

The complete proof can be found in Appendix A, B, C, and D. The detailed process to build Figure 2 is in Appendix E. Our code is anonymously available at https://anonymous.4open.science/r/residual_review-6F08. Meanwhile, you can refer to Appendix H for implementation details such as data processing scripts and hyper-parameters.

REFERENCES

Thomas Bachlechner, Bodhisattwa Prasad Majumder, Henry Mao, Gary Cottrell, and Julian McAuley. Rezero is all you need: Fast convergence at large depth. In Uncertainty in Artificial Intelligence, pp. 1352–1361. PMLR, 2021.

Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the ninth workshop on statistical machine translation, pp. 12–58, 2014.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th iwslt evaluation campaign, iwslt 2014. In Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam, volume 57, 2014. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pp. 4171–4186, 2018. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. URL https://arxiv.org/abs/2010.11929. William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity, 2021. Ruining He, Anirudh Ravula, Bhargav Kanagal, and Joshua Ainslie. Realformer: Transformer likes residual attention. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021. doi: 10.18653/v1/2021.findings-acl.81. URL http://dx.doi.org/10.18653/v1/2021.findings-acl.81. Xiao Shi Huang, Felipe Perez, Jimmy Ba, and Maksims Volkovs. Improving transformer optimization through better initialization. In International Conference on Machine Learning, pp. 4475–4483. PMLR, 2020.
1vDArHJ68h
In Appendix G (BSuite environment), is there any hypothesis on why sometimes harder environments (longer memory steps) present better performance than easier ones? For instance, R2I’s performance on 31 memory steps is better than 15 memory steps. Similarly, the performance in 81 memory steps seems better (or more stable) than 41 memory steps.
MASTERING MEMORY TASKS WITH WORLD MODELS Mohammad Reza Samsami∗1,2 Artem Zholus∗1,3 Janarthanan Rajendran1,2 Sarath Chandar1,3,4 1Mila – Quebec AI Institute 2Université de Montréal 3Polytechnique Montréal 4CIFAR AI Chair ABSTRACT Current model-based reinforcement learning (MBRL) agents struggle with long-term dependencies. This limits their ability to effectively solve tasks involving extended time gaps between actions and outcomes, or tasks demanding the recalling of distant observations to inform current actions. To improve temporal coherence, we integrate a new family of state space models (SSMs) in world models of MBRL agents to present a new method, Recall to Imagine (R2I). This integration aims to enhance both long-term memory and long-horizon credit assignment. Through a diverse set of illustrative tasks, we systematically demonstrate that R2I not only establishes a new state-of-the-art for challenging memory and credit assignment RL tasks, such as BSuite and POPGym, but also showcases superhuman performance in the complex memory domain of Memory Maze. At the same time, it upholds comparable performance in classic RL tasks, such as Atari and DMC, suggesting the generality of our method. We also show that R2I is faster than the state-of-the-art MBRL method, DreamerV3, resulting in faster wall-time convergence. 1 INTRODUCTION In reinforcement learning (RL), world models (Kalweit & Boedecker, 2017; Ha & Schmidhuber, 2018; Hafner et al., 2019b), which capture the dynamics of the environment, have emerged as a powerful paradigm for integrating agents with the ability to perceive (Hafner et al., 2019a; 2020; 2023), simulate (Schrittwieser et al., 2020; Ye et al., 2021; Micheli et al., 2023), and plan (Schrittwieser et al., 2020) within the learned dynamics. In current model-based reinforcement learning (MBRL), the agent learns the world model from past experiences, enabling it to “imagine” the consequences of its actions (such as the future environment rewards and observations) and make informed decisions. MBRL necessitates learning a world model that accurately simulates the environment’s evolution and future rewards, integrating the agent’s actions over long horizons. This task is compounded by the credit assignment (CA) problem, where an action’s impact on future rewards must be evaluated. The agent also may need to memorize and recall past experiences to infer optimal actions. The challenge of long-term memory and CA frequently arises as a result of inadequate learning of long-range dependencies (Ni et al., 2023), due to constraints in world models’ backbone network architecture. More specifically, Recurrent Neural Networks (RNNs; Cho et al. (2014)) are employed in most MBRL methods (Ha & Schmidhuber, 2018; Hafner et al., 2019b;a; 2020; 2023) as the world models’ backbone architecture because of their ability to handle sequential data. However, their efficacy is hindered by the vanishing gradients (Bengio et al., 1994; Pascanu et al., 2013). Alternately, due to the remarkable achievements of Transformers (Vaswani et al., 2017) in language modeling tasks (Brown et al., 2020; Thoppilan et al., 2022), they have been recently adopted to build world models (Chen et al., 2022; Micheli et al., 2023; Robine et al., 2023). Nonetheless, the computational complexity of Transformers is quadratic in its input sequence length. 
Even the optimized Transformers (Dai et al., 2019; Zaheer et al., 2021; Choromanski et al., 2022; Bulatov et al., 2022; Ding et al., 2023) become unstable during training on long sequences (Zhang et al., 2022). This prohibits Transformers-based world models from scaling to long input sequence lengths that might be required in certain RL tasks. Recent studies have revealed that state space models (SSMs) can effectively capture dependencies in tremendously long sequences for supervised learning (SL) and self-supervised learning (SSL) tasks. ∗Equal contribution. {mohammad-reza.samsami, artem.zholus}@mila.quebec See our website here: recall2imagine.github.io (Gu et al., 2021a; Nguyen et al., 2022; Mehta et al., 2022; Smith et al., 2023; Wang et al., 2023). More specifically, the S4 model (Gu et al., 2021a) redefined the long-range sequence modeling research landscape by mastering highly difficult benchmarks (Tay et al., 2020). The S4 model is derived from a time-invariant linear dynamical system where state matrices are learned (Gu et al., 2021b). In SL and SSL tasks, it exhibits a remarkable capability to capture dependencies extending up to 16K in length, surpassing the limitations of all prior methods. Given these achievements and MBRL methods’ limitations in solving memory and CA tasks, the adoption of S4 or a modified version of it is a logical decision. In this paper, we introduce a novel method termed Recall to Imagine (R2I), which is the first MBRL approach utilizing a variant of S4 (which was previously employed in model-free RL (David et al., 2023; Lu et al., 2024)). This method empowers agents with long-term memory. R2I emerges as a general and computationally efficient approach, demonstrating state-of-the-art (SOTA) performance in a range of memory domains. Through rigorous experiments, we demonstrate that R2I not only surpasses the best-performing baselines but also exceeds human performance in tasks requiring long-term memory or credit assignment, all while maintaining commendable performance across various other benchmarks. Our contributions can be summarized as follows: - We introduce R2I, a memory-enhanced MBRL agent built upon DreamerV3 (Hafner et al., 2023) that uses a modification of S4 to handle temporal dependencies. R2I inherits the generality of DreamerV3, operating with fixed world model hyperparameters on every domain, while also offering an improvement in computational speed of up to 9 times. - We demonstrate SOTA performance of the R2I agent in a diverse set of memory domains: POPGym (Morad et al., 2023), Behavior Suite (BSuite; Osband et al. (2020)), and Memory Maze (Pasukonis et al., 2022). Notably, in the Memory Maze, which is a challenging 3D domain with extremely long-term memory needed to be solved, R2I outperforms human. - We investigate R2I’s performance in established RL benchmarks, namely Atari (Bellemare et al., 2013) and DMC (Tassa et al., 2018). We show that R2I’s improved memory does not compromise performance across different types of control tasks, highlighting its generality. - We conduct ablation experiments to show the impact of the design decisions made for R2I. 2 BACKGROUND 2.1 STATE SPACE MODELS A recent work (Gu et al., 2021a) has introduced a novel Structured State Space Sequence model (S4). This model has shown superior performance in SL and SSL tasks, compared to common deep sequence models, including RNNs, convolutional neural networks (CNNs; lec (1998)), and Transformers. 
It outperforms them in terms of both computational efficiency (Gu et al., 2021b) and the ability to model extremely long-range dependencies (Gu et al., 2020). S4 is a specific instance of state space models (SSMs), which can be efficiently trained by using specialized parameterization. SSMs are derived from a linear dynamical system with control variable \( u(t) \in \mathbb{R} \) and observation variable \( y(t) \in \mathbb{R} \), utilizing state variables \( x(t) \in \mathbb{C}^N \) for a state size \( N \). The system is represented by the state matrix \( A \in \mathbb{C}^{N \times N} \) and other matrices \( B \in \mathbb{C}^{N \times 1}, C \in \mathbb{C}^{1 \times N}, \) and \( D \in \mathbb{R}^{1 \times 1} \): \[ x'(t) = Ax(t) + Bu(t), \quad y(t) = Cx(t) + Du(t). \] (1) Note that these SSMs function on continuous sequences. They can be discretized by a step size \( \Delta \) to allow discrete recurrent representation: \[ x_n = \bar{A}x_{n-1} + \bar{B}u_n, \quad y_n = \bar{C}x_n + \bar{D}u_n, \] (2) where \( \bar{A}, \bar{B}, \bar{C}, \) and \( \bar{D} \) are discrete-time parameters obtained from the continuous-time parameters and \( \Delta \) using methods like zero-order hold and bilinear technique (Smith et al., 2023). These representations are incorporated as a neural network layer, and each SSM is used to process a single dimension of the input sequence and map it to a single output dimension. This means that there are separate linear transformations for each input dimension, which are followed by a nonlinearity. This allows working with discrete sequence tasks, such as language modeling (Merity et al., 2016), speech classification (Warden, 2018), and pixel-level 1D image classification (Krizhevsky et al., 2009). S4 model characterizes \( A \) as a matrix with a diagonal plus low-rank (DPLR) structure (Gu et al., 2021a). One benefit of this “structured” representation is that it helps preserve the sequence history; S4 employs HiPPO framework (Gu et al., 2020) to initialize the matrix $A$ with special DPLR matrices. This initialization grants the SSMs the ability to decompose $u(t)$ into a set of infinitely long basis functions, enabling the SSMs to capture long-range dependencies. Further, to make S4 more practical on modern hardware, Gu et al. (2021a) have reparameterized the mapping $u_{1:T}, x_0 \rightarrow y_{1:T}, x_T$ as a global convolution, referred to as the convolution mode, thereby avoiding sequential training (as in RNNs). This modification has made S4 faster to train, and as elaborated in Gu et al. (2021b), S4 models can be thought of as a fusion of CNNs, RNNs, and classical SSMs. Smith et al. (2023) uses parallel scan (Blelloch, 1990) to compute $u_{1:T}, x_0 \rightarrow y_{1:T}, x_{1:T}$ as efficient as convolution mode. S4 has demonstrated impressive empirical results on various established SL and SSL benchmarks involving long dependencies, and it outperforms Transformers (Vaswani et al., 2017; Dao et al., 2022) in terms of inference speed and memory consumption due to its recurrent inference mode. Moreover, some recent works have focused on understanding S4 models, as well as refining them and augmenting their capabilities (Gupta et al., 2022a; Gu et al., 2022; Mehta et al., 2022; Gupta et al., 2022b; Smith et al., 2023; Ma et al., 2023). We have provided additional details in Appendix B to explain this family of S4 models. For the sake of simplicity in this study, we will be referring to all the S4 model variations as “SSMs”. 
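As a concrete illustration of Equation 2, the sketch below runs a single-input, single-output discrete SSM both as a step-by-step recurrence and as the equivalent unrolled convolution with kernel $K_j = \bar{C}\bar{A}^j\bar{B}$, and checks that the two modes agree. The matrices are small random (stable) ones rather than a HiPPO/DPLR initialization, and the discretization is assumed to have already been applied, so this shows only the computational pattern behind the recurrent and convolution modes, not S4 itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 8, 32                                             # state size, sequence length
A = 0.9 * np.eye(N) + 0.01 * rng.normal(size=(N, N))     # discrete state matrix (A-bar), kept stable
B = rng.normal(size=(N, 1))                              # input matrix  (B-bar)
C = rng.normal(size=(1, N))                              # output matrix (C-bar)
D = rng.normal()                                         # scalar feedthrough (D-bar)
u = rng.normal(size=L)                                   # scalar input sequence u_1 .. u_L

# Recurrent mode (Equation 2): x_n = A x_{n-1} + B u_n,  y_n = C x_n + D u_n
x = np.zeros((N, 1))
y_rec = np.empty(L)
for n in range(L):
    x = A @ x + B * u[n]
    y_rec[n] = (C @ x).item() + D * u[n]

# Convolution mode: y_n = sum_{i <= n} K[n - i] * u_i + D u_n, with kernel K_j = C A^j B
K = np.array([(C @ np.linalg.matrix_power(A, j) @ B).item() for j in range(L)])
y_conv = np.array([np.dot(K[: n + 1][::-1], u[: n + 1]) for n in range(L)]) + D * u

print("max |recurrent - convolution| =", np.abs(y_rec - y_conv).max())   # ~1e-12
```

The recurrent form is what makes per-step inference $O(1)$, while the convolution (or, in S5-style models, a parallel scan) lets the whole training sequence be processed in parallel; S4's contribution is computing this kernel efficiently for the structured DPLR parameterization.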
It is worth highlighting that a few recent methods optimize the performance of SSMs by integrating them with Transformers (Fu et al., 2023; Zuo et al., 2022; Fathi et al., 2023). This enhances the SSMs by adding a powerful local attention-based inductive bias. ### 2.2 From Imagination To Action We frame a sequential decision-making problem as a partially observable Markov decision process (POMDP) with observations $o_t$, scalar rewards $r_t$, agent’s actions $a_t$, episode continuation flag $c_t$, and discount factor $\gamma \in (0, 1)$, all following dynamics $o_t, r_t, c_t \sim p(o_t, r_t, c_t | o_{<t}, a_{<t})$. The goal of RL is to train a policy $\pi$ that maximizes the expected value of the discounted return $\mathbb{E}_\pi \left[ \sum_{t \geq 0} \gamma^t r_t \right]$. In MBRL, the agent learns a model of the environment’s dynamics (i.e., the world model), through an iterative process of collecting data using a policy, training the world model on the accumulated data, and optimizing the policy through the world model (Sutton, 1990; Ha & Schmidhuber, 2018). The Dreamer agent (Hafner et al., 2019a) and its subsequent versions (Hafner et al., 2020; 2023) have been impactful MBRL systems that learn the environment dynamics in a compact latent space and learn the policy entirely within that latent space. Dreamer agents consist of three primary components: the **world model**, which predicts the future outcomes of potential actions, the **critic**, which estimates the value of each state, and the **actor**, which learns to take optimal actions. In Dreamer, an RNN-based architecture called Recurrent State-Space Model (RSSM), proposed by Hafner et al. (2019b), serves as the core of the world model, and it can be described as follows. For every time step $t$, it represents the latent state through the concatenation of deterministic state $h_t$ and stochastic state $z_t$. Here, $h_t$ is updated using a Gated Recurrent Unit (GRU; Cho et al. (2014)), and then is utilized to compute $z_t$, which incorporates information about the current observation $o_t$ and is subsequently referred to as the posterior state. Additionally, the prior state $\hat{z}_t$ which predicts $z_t$ without access to $o_t$ is computed using $h_t$. By leveraging the latent state $(z_t, h_t)$, we can reconstruct various quantities such as $o_t, r_t,$ and $c_t$. The RSSM comprises three components: a sequence model ($h_t = f_\theta(h_{t-1}, z_{t-1}, a_{t-1})$), a representation model ($z_t \sim q_\theta(z_t | h_t, o_t)$), and a dynamics model ($\hat{z}_t \sim p_\theta(\hat{z}_t | h_t)$), where $a_{t-1}$ is the action at time step $t - 1$, and $\theta$ denotes the combined parameter vector of all components. In addition to the RSSM, the world model has separate prediction heads for $o_t, r_t, c_t$. Within the **imagination** phase, it harnesses the RSSM to simulate trajectories. This is performed through an iterative computation of states $\hat{z}_t, h_t$ and actions $\hat{a}_t \sim \pi(\hat{a}_t | \hat{z}_t, h_t)$ without the need for observations (except in the initial step). The sequences of $\hat{z}_{1:T}, h_{1:T}, \hat{a}_{1:T}$ are used to train the actor and the critic. See Appendix D for more details. ### 3 Methodology We introduce R2I (Recall to Imagine), which integrates SSMs in DreamerV3’s world model, giving rise to what we term the Structured State-Space Model (S3M). 
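Before the S3M design details that follow, it may help to see the three RSSM components just described as code. The PyTorch sketch below is a toy version with Gaussian latents and arbitrary layer sizes; DreamerV3 actually uses categorical latents and larger networks, and the module and argument names here are illustrative assumptions rather than the real implementation.

```python
import torch
import torch.nn as nn

class MiniRSSM(nn.Module):
    """Toy RSSM: h_t = f(h_{t-1}, z_{t-1}, a_{t-1}); z_t ~ q(z|h,o); z_hat_t ~ p(z|h)."""

    def __init__(self, obs_dim=64, act_dim=6, z_dim=32, h_dim=256):
        super().__init__()
        self.cell = nn.GRUCell(z_dim + act_dim, h_dim)            # sequence model f_theta
        self.post = nn.Sequential(nn.Linear(h_dim + obs_dim, 256), nn.ELU(),
                                  nn.Linear(256, 2 * z_dim))       # representation model q_theta
        self.prior = nn.Sequential(nn.Linear(h_dim, 256), nn.ELU(),
                                   nn.Linear(256, 2 * z_dim))      # dynamics model p_theta

    @staticmethod
    def _sample(stats):
        mean, raw_std = stats.chunk(2, dim=-1)                     # softplus keeps the std positive
        return mean + torch.nn.functional.softplus(raw_std) * torch.randn_like(mean)

    def step(self, h, z, a, o=None):
        h = self.cell(torch.cat([z, a], -1), h)                    # deterministic state h_t
        prior_z = self._sample(self.prior(h))                      # imagined latent z_hat_t
        post_z = self._sample(self.post(torch.cat([h, o], -1))) if o is not None else prior_z
        return h, post_z, prior_z

# One observed step followed by one purely imagined step.
rssm = MiniRSSM()
batch = 4
h = torch.zeros(batch, 256); z = torch.zeros(batch, 32)
a = torch.zeros(batch, 6);   o = torch.randn(batch, 64)
h, z, _ = rssm.step(h, z, a, o)     # posterior update with a real observation
h, z, _ = rssm.step(h, z, a)        # prior-only (imagination) update
print(h.shape, z.shape)
```

The change R2I makes, described next, is to replace the GRU-based sequence model with stacked SSM layers and to drop $h_t$ from the representation model, so that posteriors for an entire sequence can be computed in parallel.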
The design of the S3M aims to achieve two primary objectives: capturing long-range relations in trajectories and ensuring fast computational performance in MBRL. S3M achieves the desired speed through parallel computation during training and recurrent mode in inference time, which enables quick generation of imagined trajectories. In Figure 1, a visual representation of R2I is provided, and we will now proceed to describe its design. Figure 1: Graphical representation of R2I. (Left) The world model encodes past experiences, transforming observations and actions into compact latent states. Reconstructing the trajectories serves as a learning signal for shaping these latent states. (Right) The policy learns from trajectories based on latent states imagined by the world model. The representation corresponds to the full state policy, and we have omitted the critic for the sake of simplifying the illustration. 3.1 World Model Details Non-recurrent representation model. Our objective when updating the world model is to calculate S3M deterministic states $h_{1:T}$ in parallel by simultaneously feeding all actions $a_t$ and stochastic state $z_t$, where $T$ represents the length of the entire sequence. We aim to carry out this computation as $h_{1:T}, x_{1:T} = f_\theta((a_{1:T}, z_{1:T}), x_0)$ where $x_t$ is a hidden state and $f_\theta$ is a sequence model with a SSM network. To achieve this, prior access to all actions $a_{1:T}$ and stochastic states $z_{1:T}$ is required. However, we encounter a challenge due to the sequential nature of the relationship between the representation model $q_\theta(z_t \mid h_t, o_t)$ and sequence model $f_\theta(h_{t-1}, z_{t-1}, a_{t-1})$: at time step $t$, the representation model’s most recent output, denoted as $z_{t-1}$, is fed into the sequence model, and the resulting output $h_t$ is then used within the representation model to generate $z_t$. Hence, similar to Chen et al. (2022); Micheli et al. (2023); Robine et al. (2023); Deng et al. (2023), by eliminating the dependency on $h_t$ in the representation model, we transform it to a non-recurrent representation model $q_\theta(z_t \mid o_t)$. This modification allows us to compute the posterior samples independently for each time step, enabling simultaneous computation for all time steps. By utilizing a parallelizable function $f_\theta$, we can then obtain $h_{1:T}$ in parallel. Appendix M includes a systematic analysis to investigate how this modification impacts the performance of the DreamerV3 across a diverse set of tasks. The results indicate that transforming $q_\theta(z_t \mid o_t, h_t)$ to $q_\theta(z_t \mid o_t)$ does not hurt the performance. Architecture details. Inspired by Dreamer, R2I’s world model consists of a representation model, a dynamics model, and a sequence model (together forming S3M). In addition to that, there are three prediction heads: an observation predictor $p_\theta(\hat{o}_t \mid z_t, h_t)$, a reward predictor $p_\theta(\hat{r}_t \mid z_t, h_t)$, and an episode continuation predictor $p_\theta(\hat{c}_t \mid z_t, h_t)$. At each time step, S3M processes a pair of $(a_t, z_t)$ to output the deterministic state $h_t$. Inside, it operates over the hidden state $x_t$, so it can be defined as $h_t, x_t = f_\theta((a_{t-1}, z_{t-1}), x_{t-1})$. Specifically, $f_\theta$ is composed of multiple layers of SSMs, each one calculating outputs according to Equation 2. 
The outputs are then passed to GeLU (Hendrycks & Gimpel, 2023), which is followed by a fully-connected GLU transformation (Dauphin et al., 2017), and finally by a LayerNorm (Ba et al., 2016). This follows the architecture outlined by Smith et al. (2023). The deterministic state $h_t$ is the output from the final SSM layer. The set of all SSM layer hidden states is denoted $x_t$. See Appendix B.1 for SSMs design details. In image-based environments, we leverage a CNN encoder for $q_\theta(z_t \mid o_t)$ and a CNN decoder for $p_\theta(\hat{o}_t \mid z_t, h_t)$. In contrast, in tabular environments, both $q_\theta(z_t \mid o_t)$ and $p_\theta(\hat{o}_t \mid z_t, h_t)$ are MLPs. We include the details on network widths, depths, and other hyperparameters in Appendix E. Training details. R2I optimizes the following objective: $$L(\theta) = \mathbb{E}_{z_{1:T} \sim q_\theta} \sum_{t=1}^{T} L^{\text{pred}}(\theta, h_t, o_t, r_t, c_t, z_t) + L^{\text{rep}}(\theta, h_t, o_t) + L^{\text{dyn}}(\theta, h_t, o_t)$$ (3) \[ L_{\text{pred}}(\theta, h_t, o_t, r_t, c_t, z_t) = -\beta_{\text{pred}} \ln p_\theta(o_t | z_t, h_t) + \ln p_\theta(r_t | z_t, h_t) + \ln p_\theta(c_t | z_t, h_t) \] \[ L_{\text{dyn}}(\theta, h_t, o_t) = \beta_{\text{dyn}} \max(1, \KL[\sg(q_\theta(z_t | o_t)) \| p(z_t | h_t)]) \] \[ L_{\text{rep}}(\theta, h_t, o_t) = \beta_{\text{rep}} \max(1, \KL[q_\theta(z_t | o_t) \| \sg(p(z_t | h_t))]) \] \[ h_{1:T}, x_{1:T} = f_\theta((a_{1:T}, z_{1:T}), x_0) \] Here, \( \sg \) represents the stop gradient operation. This loss, resembling the objective utilized in (Hafner et al., 2023), is derived from Evidence Lower Bound (ELBO), but our objective differs from ELBO in three ways. First, we clip KL-divergence when it falls below the threshold of 1 (Hafner et al., 2020; 2023). Secondly, we use KL-balancing (Hafner et al., 2020; 2023) to prioritize the training of the S3M. Third, we use scaling coefficients \( \beta_{\text{pred}}, \beta_{\text{rep}}, \beta_{\text{dyn}} \) to adjust the influence of each term in the loss function (Higgins et al., 2017; Hafner et al., 2023). Some works on SSMs recommend optimizing state matrices using a smaller learning rate; however, our experiments indicate that the most effective approach is to use the same learning rate used in the rest of the world model. **SSMs Computational Modeling.** To enable the parallelizability of world model learning, as outlined in Section 2.1, we have the option to select between two distinct approaches: convolution (Gu et al., 2021a) and parallel scan (Smith et al., 2023). After thorough deliberation, we opted for parallel scan due to several compelling reasons. Firstly, as we discuss later in Section 3.2, it is essential to pass hidden states \( x_t \) to the policy in memory environments, a critical finding we empirically analyze in Appendix N. Another consequence of not yielding \( x_t \) via convolution mode is that it would necessitate several burn-in steps to obtain correct hidden states, akin to Kapturowski et al. (2019), resulting in quadratic imagination complexity. Furthermore, parallel scan enables scaling of sequence length in batch across distributed devices, a capability not supported by the convolution mode. Table 6 summarizes computational complexities associated with different types of recurrences, including RNNs, SSMs, and Attention used in studies like Chen et al. (2022). Finally, parallel scan can facilitate the resetting of hidden states. 
When sampling a sequence from the buffer, it may comprise of multiple episodes; thus, the hidden states coming from terminal states to the initial states in new episodes must be reset. This boosts the early training performance, when the episodes may be short. Inspired by Lu et al. (2024), we modify the SSM inference operator to support resetting hidden states. Achieving this is not feasible with convolution mode. Details of our SSMs operator used by the parallel scan is provided in Appendix C. ### 3.2 Actor-Critic Details In the design of Dreamer’s world model, it is assumed that \( h_t \) contains information summarizing past observations, actions, and rewards. Then, \( h_t \) is leveraged in conjunction with the stochastic state \( z_t \) to reconstruct or predict observations, rewards, episode continuation, actions, and values. Unlike DreamerV3, which utilizes a GRU cell wherein \( h_t \) is passed both to the reconstruction heads and the next recurrent step, R2I exclusively passes \( h_t \) to prediction heads, while SSM’s hidden state \( x_t \) is used in the next recurrent update of S3M. This implies that the information stored in \( h_t \) and \( x_t \) could potentially vary. Empirically, we discovered that this difference can lead to the breakdown of policy learning when using \( \pi(\hat{a}_t | z_t, h_t) \), but it remains intact when we use \( \pi(\hat{a}_t | z_t, x_t) \) in memory-intensive environments. Surprisingly, we found that incorporating all features into the policy \( \pi(\hat{a}_t | z_t, h_t, x_t) \) is not a remedy. The reason lies in the non-stationarity of these features; their empirical distribution changes over time as the world model trains, ultimately leading to instability in the policy training process. A similar phenomenon was also observed in Robine et al. (2023). We study the dependency of policy features on the performance in Appendix N, where we cover a diverse set of environments: from non-memory vector-based ones to image-based memory environments. In different environments, we condition the policy and value function on the information from S3M. | Method | Training Complexity | Inference Complexity | Imagination Complexity | Parallel | State Reset | |--------------|---------------------|----------------------|------------------------|----------|-------------| | Attn RNN | \( O(L^2) \) | \( O(L^2) \) | \( O((L + H)^2) \) | ✓ | ✓ | | SSM (Conv) | \( O(L) \) | \( O(1) \) | \( O(1) \) | ✗ | ✗ | | SSM (Par.Scan)| \( O(L) \) | \( O(1) \) | \( O(1) \) | ✓ | ✓ | Table 1: The asymptotic runtimes of different architectures. \( L \) is the sequence length and \( H \) is the imagination horizon. The outer loop of the imagination process cannot be parallelized. Attention and SSM+Conv accept the full context of \( O(L + H) \) burn-in and imagined steps which results in \( O((L + H)^2) \) step complexity for Attention and \( O(L) \) for SSM+Conv. SSMs combine compact recurrence with parallel computation reaching the best asymptotical complexity. in the following ways: we use the output state policy that takes \((z_t, h_t)\) as input, the hidden state policy that takes \((z_t, x_t)\) as input, and the full state policy that takes \((z_t, h_t, x_t)\) as input. To train actor-critic, we opt for the procedure proposed in DreamerV3 (Hafner et al., 2023). For a detailed description of the actor-critic training process, refer to Appendix D. 
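To make the reset mechanism from Section 3.1 concrete before moving to the experiments, the sketch below shows one way episode-boundary resets can be folded into a linear (diagonal) SSM recurrence while keeping the update a binary associative operation, so that it still admits a parallel scan in the spirit of Smith et al. (2023) and Lu et al. (2024). The actual operator used by R2I is given in their Appendix C; the diagonal parameterization, element layout, and the sequential fold used for checking are illustrative assumptions.

```python
import numpy as np

def combine(e1, e2):
    """Associative combine for elements (A, b) representing the affine map x -> A*x + b."""
    a1, b1 = e1
    a2, b2 = e2
    return a2 * a1, a2 * b1 + b2

def scan_states(a, B, u, done):
    """Return x_1..x_T for x_t = a_eff_t * x_{t-1} + B*u_t, where a_eff_t = 0 when done_t = 1."""
    T = len(u)
    elems = [((1.0 - done[t]) * a, B * u[t]) for t in range(T)]
    xs, acc = [], (np.ones_like(a), np.zeros_like(a))   # identity element of the affine composition
    # A left-to-right fold is shown for clarity; because `combine` is associative, the same prefix
    # results can be computed with a logarithmic-depth parallel scan across the sequence.
    for e in elems:
        acc = combine(acc, e)
        xs.append(acc[1])                                # with x_0 = 0, the state is the offset term
    return np.stack(xs)

rng = np.random.default_rng(0)
N, T = 4, 10
a = rng.uniform(0.5, 0.95, size=N)        # diagonal of the discrete state matrix
B = rng.normal(size=N)
u = rng.normal(size=T)
done = np.zeros(T); done[6] = 1.0         # an episode boundary inside the sampled sequence

xs = scan_states(a, B, u, done)
print(np.abs(xs[6] - B * u[6]).max())     # ~0: the state right after the reset ignores the past
```

Because `combine` is associative, a framework-level primitive (e.g., `jax.lax.associative_scan`) can evaluate all prefix states in parallel instead of the sequential fold shown here, which is what makes training-time complexity linear in the sequence length while still supporting state resets.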
4 EXPERIMENTS We conduct a comprehensive empirical study to assess the generality and memory capacity of R2I across a wide range of domains, including credit assignment, memory-intensive tasks, and non-memory tasks, all while maintaining fixed hyperparameters of the world model. We cover five RL domains: BSuite (Osband et al., 2020), POPGym (Morad et al., 2023), Atari 100K (Lukasz Kaiser et al., 2020), DMC (Tassa et al., 2018), and Memory Maze (Pasukonis et al., 2022). The section is organized as follows. In Sections 4.1 and 4.2, we evaluate R2I’s performance in two distinct memory-intensive settings: simple tabular environments and complex 3D environments. We show that not only does R2I achieve the SOTA performance, but it also surpasses human-level performance in the complex Memory Maze domain. In Section 4.3, we demonstrate that we do not trade the generality for improved memory capabilities. Figure 2 shows R2I’s impressive computational efficiency, with a speed increase of up to 9 times compared to its predecessor, DreamerV3. Note that the image environments are representative of Memory Maze, and the vector environments represent POPGym. We reuse most of the world model hyperparameters from DreamerV3. In all environments, we use a First-in First-out (FIFO) replay buffer size of 10M steps to train R2I. We found this helps stabilize the world model and prevent overfitting on a small buffer. Also, we vary features that the policy is conditioned on (i.e., output state policy \(\pi(\hat{a}_t | z_t, h_t)\), hidden state policy \(\pi(\hat{a}_t | z_t, x_t)\), or full state policy \(\pi(\hat{a}_t | z_t, h_t, x_t)\)). Our primary takeaway is to leverage the output state policy in non-memory environments and the full state policy or hidden state policy within memory environments, as explained in Section 3.2. We also found that even in memory environments, the full state policy cannot be preferred over the hidden state policy because of the instability of features – since the world model is trained alongside the policy, the former might change the feature distribution which introduces non-stationarity for the policy. 4.1 QUANTIFYING MEMORY OF R2I In this section, we study the performance of R2I in challenging memory environments of BSuite and POPGym domains, which are tabular environments. Despite their simplicity, these environments pose a challenge for MBRL algorithms since the world model needs to learn causal connections over time. While SSMs have shown their ability to handle extremely long-range dependencies in SL and SSL (Gu et al., 2021a), this capability does not necessarily translate to MBRL, even though the world model optimizes the same supervised objective. This discrepancy arises from the lifelong nature of world model training. That is, it needs to bootstrap its performance from a very small dataset with hugely imbalanced reward “labels” (as opposed to big and well-balanced long-range datasets on which SSMs shine (Tay et al., 2021)). Additionally, the continuously growing replay buffer imposes the need to quickly learn the newly arrived data which requires an ability for quick adaptation of the world model throughout its optimization. The section’s goal is to give an insight into how extensive are R2I’s memory capabilities. Figure 3: Success rates of DreamerV3 (which holds the previous SOTA) and R2I in BSuite environments. A separate model is trained for every point on the x-axis. A median value (over 10 seeds) is plotted filling between 25-th and 75-th percentiles. 
Training curves are in Appendix F. Figure 4: R2I results in memory-intensive environments of POPGym. Our method establishes the new SOTA in the hardest memory environments; Autoencode: -Easy, -Medium; RepeatPrevious: -Medium, -Hard; Concentration: -Medium. Note that Concentration is a task that can be partially solved without memory. For PPO+S4D, refer to Appendix S. Behavior Suite experiments. To study the ability of the R2I model to handle longer episodes, we conduct quantitative experiments within a subset of the BSuite environments. These environments are specifically designed to evaluate an agent’s memory capacity and its ability to effectively perform credit assignment. In particular, we carry out experiments within Memory Length and Discounting Chain environments. The former focuses on memory, and the latter serves as a credit assignment task. In Memory Length environment, the goal is to output an action which is dictated by the initial observation (the episode length i.e., the memory steps number is an environment parameter). Essentially, the agent must carry the information from the initial observation throughout the entire episode. In the Discounting Chain, the first action (which is categorical) causes a reward that is only provided after a certain number of steps, specified by the parameter reward delay. As depicted in Figure 3, the previous SOTA DreamerV3 learns the dependencies between actions and rewards in both Discounting Chain and Memory Length with reward delays of up to 30 environment steps. Note that every run either converged to a maximum reward or failed (based on the random seed). We plot the success rate as the fraction of runs that achieved success. R2I excels in both tasks, significantly outperforming in the preservation of its learning ability across a wider range of varying environment complexities. In these experiments, we leverage the output state policy (i.e., operating on latent variable $z_t$ and S3M output $h_t$). More details are provided in Appendix E. POPGym experiments. We perform a study to assess R2I in a more challenging benchmark, namely, POPGym (Morad et al., 2023). This suite offers a range of RL environments designed to assess various challenges related to POMDPs, such as navigation, noise robustness, and memory. Based on Ni et al. (2023), we select the three most memory-intensive environments: RepeatPrevious, Autoencode, and Concentration. These environments require an optimal policy to memorize the highest number of events (i.e., actions or observations) at each time step. Each environment in POPGym has three difficulty levels: Easy, Medium, and Hard. In the memory environments of this study, the complexity is increased by the number of actions or observations that the agent should keep track of simultaneously. All environments in this study have categorical observation and action spaces. A detailed explanation of the environments is provided in Appendix G. As POPGym was not included in the DreamerV3 benchmark, we performed hyperparameter tuning of both DreamerV3 and R2I, solely on adjusting the network sizes of both. This is because DreamerV3 is a generalist agent that works with a fixed set of hyperparameters and in this environment, with sizes primarily influencing its data efficiency. We observed a similar characteristic in R2I. The results of hyperparameter tuning are available in Appendix L. 
For R2I, we use the hidden state policy: $\pi(\hat{a}_t | z_t, h_t)$ as we found it much more performant, especially in memory-intensive tasks (see Appendix N for policy inputs ablations). We train R2I in POPGym environments using a unified and fixed set of hyperparameters. In addition to R2I and DreamerV3, we include model-free baselines from Morad et al. (2023). These include PPO (Schulman et al., 2017) model-free policy with different observation backbones, such as GRU, LSTM, MLP, and MLP with timestep number added as a feature (PosMLP). PPO with GRU is the best-performing model-free baseline of POPGym while PPO+LSTM is the second best. PPO+MLP and PPO+PosMLP are included for a sanity check - the better their performance is, the less is the memory needed in the environment. 1A policy without any memory exists that outperforms a random policy but underperforms the optimal one. As illustrated in Figure 4, R2I demonstrates the new SOTA performance, outperforming every baseline in Autoencode, Easy and Medium tasks. Note that R2I outperforms all 13 model-free baselines of the POPGym benchmark by a huge margin (we did not include them due to space constraints). R2I also shows consistently strong performance in RepeatPrevious tasks, setting a new SOTA in both Medium and Hard (compared to all 13 model-free baselines and DreamerV3). In Concentration, the model-free memory baselines fail to outperform a simple MLP policy, suggesting that they all converge to a non-memory-based suboptimal policy. R2I advances this towards a better memory policy. Its performance is roughly equal to DreamerV3 in an Easy and slightly better in the Medium task. As Appendix G suggests, all RepeatPrevious tasks require up to 64 memorization steps, while Autoencode Easy and Medium require up to 104. In Concentration Easy and Medium this length is up to 208 steps, however, since PPO+MLP shows somewhat good performance, likely less than 208 memorization steps are required. This observation is consistent with the results of the BSuite experiments, which demonstrate that our model is capable of memorizing up to approximately 100 steps in time. To summarize, these results indicate that R2I significantly pushes the memory limits. 4.2 Evaluating Long-term Memory In Complex 3D Tasks Memory Maze (Pasukonis et al., 2022) presents randomized 3D mazes where the egocentric agent is repeatedly tasked to navigate to one of multiple objects. For optimal speed and efficiency, the agent must retain information about the locations of objects, the maze’s wall layout, and its own position. Each episode can extend for up to 4K environment steps. An ideal agent equipped with long-term memory only needs to explore each maze once, a task achievable in a shorter time than the episode’s duration; subsequently, it can efficiently find the shortest path to reach each requested target. This task poses a fundamental challenge for existing memory-augmented RL algorithms, which fall significantly behind human performance in these tasks. In this benchmark, we found that DreamerV3 works equally well as DreamerV2 reported in Pasukonis et al. (2022). Therefore, we use the size configuration of Dreamer outlined in Pasukonis et al. (2022). Note that this baseline also leverages truncated backpropagation through time (TBTT), a technique demonstrated to enhance the preservation of information over time (Pasukonis et al., 2022). We use the “medium memory” size configuration of R2I in this work (see Table 2 in Appendix). 
We use the full state policy \( \pi(\hat{a}_t | z_t, h_t, x_t) \) i.e., conditioning on stochastic state, and S3M output, and hidden states at step \( t \) in this environment. We trained and tested R2I and other methods on 4 existing maze sizes: 9x9, 11x11, 13x13, and 15x15. The difference between them is in the number of object rooms and the episode lengths. More difficult maze sizes have more environment steps in the episode making it more challenging to execute a successful series of object searches. R2I and other baselines are evaluated after 400M environment steps or two weeks of training. We also compare R2I with IMPALA (Espeholt et al., 2018), which is the leading model-free approach (Pasukonis et al., 2022). As shown in Figure 5, R2I consistently outperforms baseline methods in all of these environments. In 9x9 mazes, it demonstrates performance similar to the Dreamer, while significantly outperforming IMPALA. In 11x11, 13x13, and 15x15 mazes, it has a remarkably better performance than both baselines. Moreover, it has surpassed human-level abilities in solving 9x9, 11x11, and 13x13 mazes. These results establish R2I as a SOTA in this complex 3D domain. Figure 5: Scores in Memory Maze after 400M environment steps. R2I outperforms baselines across difficulty levels, becoming the domain’s new SOTA. Due to its enhanced computational efficiency, R2I was trained during a fewer number of days compared to Dreamer, as illustrated in Figure 26. 4.3 Assessing the Generality of R2I in Non-Memory Domains We conduct a sanity check by assessing R2I’s performance on two widely used RL benchmarks: Atari (Bellemare et al., 2013) and DMC (Tassa et al., 2018), as parts of the DreamerV3 benchmark (Hafner et al., 2023). Even though these tasks are nearly fully observable and do not necessitate extensive memory to solve (it is often enough to model the dynamics of only the last few steps), evaluating R2I on them is essential as we aim to ensure our agent’s performance across a wide range of tasks that require different types of control: continuous control (in DMC) and discrete (in Atari). In all the experiments conducted within Atari 100K (Łukasz Kaiser et al., 2020) and DMC, we fix hyperparameters of the world model. In Atari and the proprio benchmark in DMC, we utilize output state policies, as we found them more performant (for ablations with different policy types, see Appendix N). In the visual benchmark in DMC, we use hidden state policy. Note that for continuous control, the policy is trained via differentiating through the learned dynamics. R2I maintains a performance similar to DreamerV3 in these domains, as demonstrated in Figure 6, implying that in the majority of standard RL tasks (see Appendix Q), R2I does not sacrifice generality for improved memory capabilities. 5 Conclusion In this paper, we introduced R2I, a general and fast model-based approach to reinforcement learning that demonstrates superior memory capabilities. R2I integrates two strong algorithms: DreamerV3, a general-purpose MBRL algorithm, and SSMs, a family of novel parallelizable sequence models adept at handling extremely long-range dependencies. This integration helps rapid long-term memory and long-horizon credit assignment, allowing R2I to excel across a diverse set of domains, all while maintaining fixed hyperparameters across all domains. 
Through a systematic examination, we have demonstrated that R2I sets a new state-of-the-art in domains demanding long-term temporal reasoning: it outperforms all known baselines by a large margin on the most challenging memory and credit assignment tasks across different types of memory (long-term and short-term) and observational complexities (tabular and complex 3D). Remarkably, it transcends human performance in complex 3D tasks. Furthermore, we have demonstrated that R2I achieves computation speeds up to 9 times faster than DreamerV3. Our study presents the first model-based RL approach that uses SSMs. While R2I offers benefits for improving memory in RL, it also has limitations, which we leave for future research. For instance, it can be explored how R2I can be augmented with attention mechanisms, given that Transformers and SSMs exhibit complementary strengths (Mehta et al., 2022). As mentioned in Section 2.1, hybrid architectures have been introduced in language modeling tasks. Moreover, the sequence length within the training batches for world model learning is not currently extremely long, as is the horizon (i.e., the number of steps) of imagination in actor-critic learning. Future work could focus on these aspects to further enhance memory capabilities. ACKNOWLEDGEMENTS We thank Albert Gu for his thorough and insightful feedback on the SSM part of the project. We also acknowledge The Annotated S4 Blog (Rush & Karamcheti, 2022) and S5 codebase (Smith et al., 2023) which inspired our JAX implementation. We also thank Danijar Hafner, Steven Morad, Ali Rahimi-Kalhroudi, Michel Ma, Tianwei Ni, Darshan Patil, and Roger Creus for their helpful feedback on our method and the early draft of the paper. We thank Jurgis Pasukonis for sharing the data for memory maze baseline plots. This research was enabled by computing resources provided by Mila (mila.quebec), the Digital Research Alliance of Canada (alliancecan.ca), and NVIDIA (nvidia.com). We thank Mila’s IDT team, and especially Olexa Bilaniuk for helping with numerous technical questions during this work and especially for the help in the implementation of the new I/O efficient RL replay buffer. Janarthanan Rajendran acknowledges the support of the IVADO postdoctoral fellowship. Sarath Chandar is supported by the Canada CIFAR AI Chairs program, the Canada Research Chair in Lifelong Machine Learning, and the NSERC Discovery Grant. REFERENCES Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Mohammed Abbad. Perturbation and stability theory for Markov control problems. University of Maryland, Baltimore County, 1991. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 2021. Jose A Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. Rudder: Return decomposition for delayed rewards. Advances in Neural Information Processing Systems, 32, 2019. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization, 2016. M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, jun 2013. doi: 10.1613/jair.3912. URL https://doi.org/10.1613%2Fjair.3912. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 
Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157–166, 1994. Guy E. Blelloch. Prefix sums and their applications. Technical Report CMU-CS-90-190, School of Computer Science, Carnegie Mellon University, November 1990. William L Brogan. Modern control theory. Pearson education india, 1991. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. Aydar Bulatov, Yuri Kuratov, and Mikhail S. Burtsev. Recurrent memory transformer, 2022. Chang Chen, Yi-Fu Wu, Jaesik Yoon, and Sungjin Ahn. Transdreamer: Reinforcement learning with transformer world models, 2022. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation, 2014.
cYksYKbf6K
Maybe I’m missing something, but I do not see why “Our method is computationally simpler compared to previous methods”. Despite being conceptually straightforward, the method requires running a complete rollout up to the maximum length before any truncation occurs. Could you elaborate on the computational saving?
Imagine Within Practice: Conservative Rollout Length Adaptation for Model-Based Reinforcement Learning Anonymous authors Paper under double-blind review Abstract Model-based reinforcement learning (MBRL) algorithms achieve high sample efficiency by leveraging imagined rollouts from a world model for policy optimization. A crucial hyperparameter in MBRL is the rollout length, which represents a trade-off between data quality and efficiency by limiting the imaginary horizon. While longer rollout length offers enhanced efficiency, it introduces more unrealistic data due to compounding error, potentially leading to catastrophic performance deterioration. To prevent significant deviations between imagined rollouts and real transitions, most model-based methods manually tune a fixed rollout length for the entire training process. However, the fixed rollout length is not optimal for all rollouts and does not effectively prevent the generation of unrealistic data. To tackle this problem, we propose a novel method called Conservative Rollout Length Adaptation (CRLA), which conservatively restricts the agent from selecting actions that are rarely taken in the current state. CRLA truncates the rollout to preserve safety when there is a high probability of selecting infrequently taken actions. We apply our method to DreamerV3 and evaluate it on the Atari 100k benchmark. The results demonstrate that CRLA can effectively balance data quality and efficiency by adjusting rollout length and achieve significant performance gains in most Atari games compared to DreamerV3 in the default setting. 1 Introduction Reinforcement learning (RL) has recently achieved impressive success in a variety of complex decision-making tasks, such as robotics (Yang et al., 2022; Wu et al., 2023) and gaming (Silver et al., 2016; Wurman et al., 2022). However, it usually takes a huge amount of trial and error for RL algorithms to learn an effective policy. This makes the application of reinforcement learning challenging. Recent research has introduced various methods to improve sample efficiency (Schwarzer et al., 2021; 2023). Model-based methods are considered promising approaches to accelerate agent learning. Unlike model-free methods, they learn a dynamic model of the environment, also called world model (Ha & Schmidhuber, 2018), and allow the agent to interact in the world model to acquire more samples without touching the real environment, as one can learn in the imagination. However, the use of a world model is not blind because the predictive accuracy and generalization are not guaranteed in complex environments (Plaat et al., 2023). The rollout length, which is used to limit the imaginary horizon of the agent in the world model, is a critical hyperparameter in model-based approaches (Janner et al., 2019). Intuitively, a longer rollout length leads to greater sample efficiency since more data are generated. However, as long trajectories are generated, the prediction accuracy at each step decreases due to the compounding of model error, resulting in poor generation quality. Thus, the rollout length plays a crucial role as a trade-off between data quality and efficiency, which needs to be set carefully. Previous approaches tend to achieve better performance by manually adjusting the rollout length. However, using a fixed rollout length is not optimal for all rollouts during the training process (Nguyen et al., 2018). 
Some approaches try to utilize metrics from the training process for automatic adaptation (Nguyen et al., 2018; Xiao et al., 2019), but they make only slight adjustments and are limited to simple environments. Intuitively, a conservative strategy for a safe rollout is to prefer practiced actions that have already been taken frequently in the current state when imagining, since humans usually avoid imagining outside the box if they lack a comprehensive understanding of the dynamics. Based on this inspiration, we propose a novel Conservative Rollout Length Adaptation method called CRLA. Our main idea is that the agent, when interacting with the world model, should try to choose practiced actions in the current state for a safe rollout and truncate the rollout when there is a high probability of selecting unpracticed actions that are seldom taken. The overall framework of CRLA is shown in Figure 1. We train a neural network called the conservator to predict the distribution formed by the frequency with which each action is taken at each state. We determine whether it is safe to continue the rollout by judging whether the output distributions of the conservator and the actor are close enough to each other, and truncate the rollout if it is not safe. Our approach is a conservative rollout strategy that prevents the rollout from falling into regions with large prediction errors by truncating the rollout when there is a high probability of selecting rarely taken actions. We evaluate CRLA applied to DreamerV3 (Hafner et al., 2023) on the Atari100k benchmark. Note that our method can be applied to most model-based methods that work in discrete action spaces. CRLA demonstrates a notable performance improvement over DreamerV3 in most Atari games, indicating its ability to effectively strike a balance between data quality and efficiency.

Our contributions can be summarized as follows:

1. We introduce a conservative rollout strategy that stops unrolling when the agent selects an action seldom chosen in the current state.

2. We propose a novel conservative rollout length adaptation method following this strategy, aimed at discarding potentially unrealistic transitions for safety.

3. We evaluate CRLA applied to DreamerV3 on the Atari100k benchmark, achieving a significant improvement and demonstrating that further performance gains can be obtained by dynamically adjusting the rollout length.

2 BACKGROUND

We consider a partially observable Markov decision process (POMDP) with discrete time steps $t \in \mathbb{N}$, high-dimensional image observations $x_t \in \mathbb{R}^{h \times w \times c}$, discrete actions $a_t \in \{1, ..., m\}$ and scalar rewards
**World model:** One of the fundamental components of the model-based algorithm is the world model, which learns compact representations of observations and predicts future representations and rewards with potential actions. To process high-dimensional image inputs, the world model requires an encoder that learns compact representations to encode image observations \( x_t \) into hidden states \( z_t \) (Kingma & Welling, 2013). Then an RNN-based sequence model predicts the next recurrent state \( h_t \) based on past state \( z_{t-1} \) and action \( a_{t-1} \). The dynamics predictor predicts the next latent state \( \hat{z}_t \) based on the recurrent state. These three modules form the Recurrent State-Space Model (RSSM) (Hafner et al., 2019), which is the core of the world model: \[ \text{RSSM} \left\{ \begin{array}{l} \text{Sequence model: } h_t = f_\phi(h_{t-1}, z_{t-1}, a_{t-1}) \\ \text{Encoder: } z_t \sim q_\phi(z_t | h_t, x_t) \\ \text{Dynamics predictor: } \hat{z}_t \sim p_\phi(\hat{z}_t | h_t) \end{array} \right. \] The concatenation of the hidden state and the current state is used to predict the reward \( r_t \), the episodic continuation flags \( c_t \in \{0, 1\} \) and the next observation \( x_t \) for learning compact representations: \[ \begin{align*} \text{Reward predictor: } \hat{r}_t &\sim p_\phi(\hat{r}_t | h_t, z_t) \\ \text{Continue predictor: } \hat{c}_t &\sim p_\phi(\hat{c}_t | h_t, z_t) \\ \text{Decoder: } \hat{x}_t &\sim p_\phi(\hat{x}_t | h_t, z_t) \end{align*} \] **Actor-Critic learning:** Dreamer uses actor-critic framework for policy optimization. Both actor \( \pi_\theta(a_t | s_t) \) and critic \( v_\psi(s_t) \) operate on model states \( s_t = \{h_t, z_t\} \) and are trained on-policy entirely on trajectories imagined by the world model. Real trajectories are sampled from the replay buffer and used as starting points to generate imaginary trajectories in length \( T \). We then compute bootstrapped \( \lambda \)-returns (Sutton & Barto, 2018) on these trajectories and use this for optimization: \[ R^\lambda_t = r_t + \gamma c_t \left( (1 - \lambda) v_\psi(s_{t+1}) + \lambda R^\lambda_{t+1} \right) \quad R^\lambda_T = v_\psi(s_T) \] ### 3 Method In this section, we first explain the idea of the conservative rollout strategy. Then, we introduce our method inspired by this strategy and provide a practical implementation based on Dreamerv3. Finally, we illustrate the theoretical support behind our method. #### 3.1 IMAGINE WITHIN PRACTICE Due to the limitation of interaction steps, it is difficult for the world model to fully and accurately capture the dynamics of the environment. However, during the rollout process, it is essential for the world model to possess the capability of generalization to produce novel transitions that have not been previously observed. This generalization alone cannot ensure the quality of imagined transitions, as the world model learns only from limited data. Since the model error cannot be eliminated, and even minor errors can be compounded by multi-step rollout, this has a detrimental effect on policy optimization. Intuitively, since the world model is trained with practiced trajectories in the replay buffer, it can predict future information more accurately when it encounters frequently seen transitions. When the agent chooses an action that has not been practiced while interacting with the world model, the world model tends to produce large generalization error. Figure 2 illustrates this situation. 
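To make the imagination setting concrete before introducing our truncation rule, the sketch below shows how an imagined rollout could be generated with RSSM-style components (sequence model, dynamics predictor, actor). The module sizes, names, and sampling details are simplified placeholders and do not correspond to DreamerV3's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder sizes; DreamerV3 uses much larger, structured latents.
H, Z, A = 32, 16, 6

class ToyRSSM(nn.Module):
    """Stand-in for the RSSM: a GRU sequence model plus simple prediction heads."""
    def __init__(self):
        super().__init__()
        self.cell = nn.GRUCell(Z + A, H)    # h_t = f(h_{t-1}, z_{t-1}, a_{t-1})
        self.dyn = nn.Linear(H, Z)          # \hat{z}_t from h_t (deterministic stand-in)
        self.reward = nn.Linear(H + Z, 1)   # \hat{r}_t from (h_t, z_t)
        self.cont = nn.Linear(H + Z, 1)     # continuation logit \hat{c}_t

rssm = ToyRSSM()
actor = nn.Linear(H + Z, A)                 # pi_theta(a_t | s_t) with s_t = [h_t, z_t]

def imagine(h, z, horizon=15):
    """Unroll `horizon` imagined steps from a starting latent state (h, z)."""
    states, actions, rewards, conts = [], [], [], []
    for _ in range(horizon):
        s = torch.cat([h, z], dim=-1)
        a = torch.distributions.Categorical(logits=actor(s)).sample()
        a_onehot = F.one_hot(a, A).float()
        h = rssm.cell(torch.cat([z, a_onehot], dim=-1), h)   # sequence model step
        z = rssm.dyn(h)                                      # dynamics predictor
        s = torch.cat([h, z], dim=-1)
        states.append(s)
        actions.append(a)
        rewards.append(rssm.reward(s).squeeze(-1))
        conts.append(torch.sigmoid(rssm.cont(s)).squeeze(-1))
    return states, actions, rewards, conts

# Example: a batch of 4 imagined rollouts of length 15 from zero-initialised latents.
states, actions, rewards, conts = imagine(torch.zeros(4, H), torch.zeros(4, Z))
```

The question addressed in the remainder of this section is when such an imagined rollout should be stopped.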
Selecting unpracticed actions is unsafe, as it can lead the rollout to deviate from the accurate prediction region and introduce risk to subsequent rollout steps due to the compounding of model error. To ensure the safety of the rollout, a conservative strategy is to make the agent imagine within practice, since the world model is trained with practiced trajectories. This means that we want the agent to be able to determine whether the current action choice has been practiced sufficiently to make confident predictions during the imagination process. If the agent has a high probability of choosing an action that has rarely been practiced in the current state, the imagination should be interrupted so as not to fall into unrealistic fantasies. 3.2 Conservative Rollout Length Adaptation Based on this intuitive strategy, we propose a novel conservative rollout length adaptation method called CRLA. Specifically, we define $\pi_p(a_t \mid z_t)$ to represent the practiced action distribution, which is shaped by the frequency of each action taken at each latent state. Note that $\pi_p$ does not use $s_t = \{h_t, z_t\}$ as input, since it does not need to consider the context $h_t$. With $\pi_p$, we can identify which actions have been taken in the current latent state and their frequency, and thus determine whether an action is practiced enough. Our main idea is to truncate the rollout when the agent has a high probability of choosing actions that are not sufficiently practiced. This means that the rollout will continue only when the distance between $\pi_p$ and $\pi_\theta$ is sufficiently small. In this paper, we employ the Jensen–Shannon divergence as our distance metric. At each rollout step, we calculate the distance between $\pi_p$ and $\pi_\theta$ separately for each rollout. We set a threshold $\alpha$ to determine whether to continue the rollout: if the distance is less than $\alpha$, the rollout continues; otherwise the subsequent rollout steps are masked out with $m_t$, as shown in Equation (4). Note that our approach does not directly judge the final action selection, but instead uses the action distribution as the basis for judgment, which means that the agent can still sample untaken actions when the constraint is satisfied, preserving its exploration ability. However, $\pi_p$ is not easy to acquire. An obvious approach is to directly count the frequency of practiced actions at each state in the replay buffer, but there are several problems with this. First, directly counting the frequency of practiced actions under the original observation is convenient but not reasonable, as the prediction of the world model is made in the latent space: even if the observations are not identical, they may map to the same point in the latent space. Second, counting within the latent space would require re-counting after each update of the encoder, which significantly increases the computational cost. Lastly, owing to the world model's generalization capacity, it generates latent states that do not truly exist but are close to real ones. These problems make it infeasible to obtain $\pi_p$ by counting.
$$m_t = \begin{cases} 1, & \text{if } D_{JS}\big(\pi_p(\cdot \mid z_t) \,\|\, \pi_\theta(\cdot \mid s_t)\big) < \alpha \text{ and } m_{t-1} = 1 \\ 0, & \text{if } D_{JS}\big(\pi_p(\cdot \mid z_t) \,\|\, \pi_\theta(\cdot \mid s_t)\big) \geq \alpha \text{ or } m_{t-1} = 0 \end{cases}$$ (4) In order to efficiently acquire an approximation of $\pi_p$, we parameterize it using a neural network. We call it the conservator and denote it by $\hat{\pi}_p(a_t \mid z_t)$. The conservator takes latent states $z_t$ as input and outputs the practiced action distribution under $z_t$. We train it using practiced trajectories from the replay buffer. Since one latent state can correspond to multiple actions, we draw on multi-label learning approaches. However, in the multi-label classification task, each input requires the complete label set for supervised training. This presents a challenge for our task because it would require iterating through the entire replay buffer to find all actions corresponding to each latent state, resulting in a significant increase in computational complexity. Therefore, we use sampling for training, where state-action pairs are uniformly sampled from the replay buffer at each training step. We then embed the original observations into latent states as input and use the one-hot coding of the actions as labels. We use the binary cross-entropy loss function commonly employed in multi-label learning tasks to calculate the loss, as shown in Equation 5. We normalize the final output logits of the conservator to obtain the predicted distributions of practiced actions. Because the sampling is uniform, the conservator is able to capture the frequency of each action in each latent state and thus approximate $\pi_p$. We validate this on the MNIST dataset; see Appendix A.2 for detailed results. \[ L_{BCE}(\hat{y}, y) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \right] \] (5) \[ L_{statistic} = \mathbb{E}_{(s,a) \sim \mathcal{D}} \left[ L_{BCE}\big(\hat{\pi}_p(\cdot \mid \mathrm{encoder}(s)),\ \mathrm{one\_hot}(a)\big) \right] \] (6) After obtaining \( \hat{\pi}_p \), we can calculate the Jensen–Shannon divergence between \( \hat{\pi}_p(a_t \mid z_t) \) and the current policy \( \pi_\theta(a_t \mid s_t) \) at each rollout step in the world model, and determine whether to continue unrolling. For computational convenience, we first compute the trajectory with the maximum rollout length, and then apply the mask \( m_t \) to mask out the states that do not satisfy the condition, together with their successors. When calculating the bootstrapped \( \lambda \)-returns, they should be computed separately for the different rollout lengths according to the mask, as shown in Equation 7. Here \( F_t \in \{0, 1\} \) flags the step whose successor is the first invalid state; it equals 1 only at that step, so that the return bootstraps from \( v_\psi(s_{t+1}) \) rather than from the masked \( R^\lambda_{t+1} \). \[ R^\lambda_t = m_t \left[ r_t + \gamma c_t \left( (1 - \lambda)v_\psi(s_{t+1}) + \lambda\big(R^\lambda_{t+1} (1 - F_t) + v_\psi(s_{t+1}) F_t\big) \right) \right] \] (7) The setting of the threshold \( \alpha \) is crucial for our method. To simplify its design, we would like to set a single threshold for all 26 Atari games. However, since the action dimensionality varies across Atari games, it may not be appropriate to use only one threshold for all of them: for environments with small action dimensions, relatively low thresholds are needed to provide a sensitive truncation of the rollout, whereas for environments with large action dimensions, relatively high thresholds are needed to avoid overly strict judgment conditions.
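Before describing how the threshold $\alpha$ itself is chosen, the sketch below puts the pieces of this subsection together: a conservator training step with the BCE objective of Equations 5–6, the Jensen–Shannon mask of Equation 4 computed between the conservator and the actor, and the masked $\lambda$-return of Equation 7. It is a minimal illustration with placeholder shapes and a given threshold $\alpha$; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence between two batches of discrete distributions (last dim)."""
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * (torch.log(a + eps) - torch.log(b + eps))).sum(-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def conservator_loss(conservator, encoder, obs, actions, n_actions):
    """Eqs. 5-6: BCE loss on state-action pairs sampled uniformly from the replay buffer."""
    z = encoder(obs)                                  # embed observations into latent states
    logits = conservator(z)                           # practiced-action logits
    targets = F.one_hot(actions, n_actions).float()   # one-hot action labels
    return F.binary_cross_entropy_with_logits(logits, targets)

def rollout_mask(pi_p, pi_theta, alpha):
    """Eq. 4: keep m_t = 1 while D_JS(pi_p, pi_theta) < alpha and all earlier steps were valid.

    pi_p, pi_theta: (T, B, n_actions) action probabilities along imagined rollouts.
    Returns the mask m (T, B) and flags F (T, B), with F_t = 1 only at the step whose
    successor is the first masked-out state (used by Eq. 7 to bootstrap from v there).
    """
    valid = (js_divergence(pi_p, pi_theta) < alpha).float()
    m = torch.cumprod(valid, dim=0)                   # once a step fails, all later steps stay masked
    m_next = torch.cat([m[1:], torch.ones_like(m[-1:])], dim=0)
    flag = m * (1.0 - m_next)
    return m, flag

def masked_lambda_returns(r, c, v, m, flag, gamma=0.997, lam=0.95):
    """Eq. 7: bootstrapped lambda-returns truncated at the first invalid step.

    r, c, m, flag: (T, B); v: (T + 1, B) critic values, including the final bootstrap value.
    """
    T = r.shape[0]
    R = v[-1]
    out = [None] * T
    for t in reversed(range(T)):
        boot = R * (1.0 - flag[t]) + v[t + 1] * flag[t]
        R = m[t] * (r[t] + gamma * c[t] * ((1 - lam) * v[t + 1] + lam * boot))
        out[t] = R
    return torch.stack(out)
```

In practice $\alpha$ is not hand-set per game but derived from the action dimensionality, as described next.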
To account for the varying action dimensionality across games, we use a threshold adjustment approach that adapts $\alpha$ to the dimension of the action space. We define \( p, q \) as two different n-dimensional one-hot vectors and \( u \) as an n-dimensional uniform distribution. We set a hyperparameter \( \beta \) and compute the threshold \( \alpha \) using the following equation: \[ \alpha = D_{JS}\big(\beta p + (1 - \beta)u \,\|\, \beta q + (1 - \beta)u\big) \] (8) In summary, our method has the following advantages. (1) Adaptation: our method adapts the rollout length for each rollout individually, thus utilizing the world model as much as possible while remaining safe. (2) Flexibility: since the Jensen–Shannon divergence between \( \hat{\pi}_p \) and \( \pi_\theta \) is used as the judgment condition, the agent retains the possibility of exploring unpracticed actions while adhering to the constraint. Our method can therefore be regarded as a form of soft rollout constraint. ### 3.3 Theoretical Analysis Previous research has conducted theoretical analyses of the gap between returns under a branched rollout scheme and returns from real environment interactions, and derived a bound. In the branched rollout scheme, a rollout begins from a state drawn from the previous policy's state distribution \( d_{\pi_D}(s) \) and runs \( k \) steps according to the current policy \( \pi \) under the learned world model \( p_\theta \). We analyze the validity of our method on this basis. **Theorem 3.1** (Janner et al., 2019) Let the expected total variation between the learned model and the true dynamics be bounded at each timestep under the expectation of \( \pi \) by \( \max_t \mathbb{E}_{s \sim \pi_t} [D_{TV}(p(s' \mid s, a) \| \hat{p}(s' \mid s, a))] \leq \epsilon_{m'} \), and let the policy divergence be bounded as \( \max_s D_{TV}(\pi_D(a \mid s) \| \pi(a \mid s)) \leq \epsilon_\pi \), where \( \pi_D(a \mid s) \) denotes the data-collecting policy. Let \( \eta[\pi] \) denote the returns of the policy in the true MDP. Then, under a branched rollout scheme with a branch length of \( k \), the returns \( \eta^{\text{branch}}[\pi] \) are bounded as: \[ \eta[\pi] \geq \eta^{\text{branch}}[\pi] - 2r_{\max} \left[ \frac{\gamma^{k+1} \epsilon_\pi}{(1 - \gamma)^2} + \frac{\gamma^k \epsilon_\pi}{1 - \gamma} + \frac{k}{1 - \gamma} (\epsilon_{m'}) \right] \] (9) To reduce the gap between \( \eta^{\text{branch}}[\pi] \) and \( \eta[\pi] \), we need to make the second term on the right-hand side of Equation 9 as small as possible. In this term, there are three key factors: the model error $\epsilon_{m'}$ under the current policy, the policy distribution shift $\epsilon_{\pi}$ between the current policy $\pi$ and the data-collecting policy $\pi_D$, and the rollout length $k$. For the model error $\epsilon_{m'}$: since the world model is trained by supervised learning to fit the data in the replay buffer rather than the full dynamics of the real environment, it incurs generalization error, which can be substantial for transitions that are seldom observed. Our method encourages the agent to select practiced actions that have been taken before in the current state when interacting with the world model, and truncates the rollout when the agent is likely to choose an unpracticed action, as this can cause large model errors. This avoids a further increase in the term $\frac{k}{1-\gamma}(\epsilon_{m'})$, which grows with the rollout length $k$. For the policy distribution shift $\epsilon_{\pi}$: since the conservator is an approximation of $\pi_D(a \mid s)$, our method actually constrains $\epsilon_{\pi}$ explicitly.
We add constraints to the rollout so that it continues only when the policy's action distribution $\pi_\theta(a \mid s)$ is similar to $\pi_D(a \mid s)$. This allows the policy distribution shift $\epsilon_{\pi}$ to be restrained during rollout, thus protecting the quality of the generated trajectories. By dynamically adjusting the rollout length $k$ with our method, the model error and the policy distribution shift can be effectively constrained, which theoretically supports our method. 4 EXPERIMENT In this section, we aim to answer the following questions: (1) Can CRLA improve performance by adjusting only the rollout length? (2) Can CRLA balance data quality and efficiency? (3) Can CRLA truncate the rollout at the appropriate step? (4) For what kinds of environments does CRLA cause performance degradation? To answer these questions, we evaluate CRLA applied to DreamerV3 on the Atari 100k benchmark. The Atari 100k benchmark (Kaiser et al., 2020) includes 26 games from the Arcade Learning Environment (Bellemare et al., 2013), and the agent is only allowed 100,000 steps of environment interaction per game, which corresponds to 400,000 frames with a frame-skip of 4, or roughly two hours of real-time gameplay. It effectively tests the sample efficiency of a method under limited interaction. We want to test whether CRLA can effectively truncate harmful generated trajectories when the interaction steps are limited. To avoid tedious hyperparameter tuning, we set $\beta = 0.78$ to automatically calculate the thresholds $\alpha$ for all 26 games in the Atari 100k benchmark. All hyperparameters are identical to the DreamerV3 default settings, except for the rollout length. We restrict the adjustment range of the rollout length to $[5, 16]$ to avoid overly long or short rollouts, while the default setting uses a fixed length of $T = 15$. Due to the lack of data in the replay buffer and the instability of the encoder at the early stage, we train and apply the conservator only after 10k steps to avoid overfitting. We perform 10 runs per game and compute the average score over 100 episodes at the end of training for each run. Figure 4: The rollout length adaptation by CRLA in six games is shown in the first row, and the comparison of different rollout lengths in each game is shown in the second row. The solid line is the mean over 10 seeds for our method and 5 seeds for DreamerV3 with $T = 8$ and $T = 15$, while the shaded area represents one pointwise standard deviation. 4.1 Performance Improvement We aim to assess whether CRLA can improve the performance of DreamerV3 by adjusting the rollout length. Figure 3 illustrates the percentage performance improvements across all 26 Atari games compared to default DreamerV3 with a fixed rollout length $T = 15$. It shows that CRLA significantly improves the performance of DreamerV3 in most games. See Appendix A.1 for detailed training curves. The results sufficiently demonstrate the effectiveness of CRLA. It is noteworthy that we did not individually tune the threshold $\alpha$ for each game, emphasizing the user-friendliness of CRLA. We believe that further performance improvements can be achieved by fine-tuning the threshold and the adjustment range for each game individually, since we only adjust the rollout length and do not modify any other hyperparameters. Figure 5: Full rollout trajectory on Ms. Pac-Man. The image at $t = 0$ is the real observation used as a starting point.
The player controls the yellow Pac-Man in the upper right corner of the image at $t = 0$. The green border of an image represents $m_t = 1$ and the red border represents $m_t = 0$. The bars on the right side of each image show the action distribution output by the policy (blue bars) and the practiced action distribution output by the conservator (yellow bars), ranging from 0 to 1. Figure 6: Partial rollout trajectory on the Seaquest game. The bars below each image correspond to the output distributions of the policy (blue bars) and the conservator (yellow bars) at each step. 4.2 Balancing Data Quality and Efficiency As each rollout varies in length, we record the mean and variance of the rollout length in the batch data at each training step, as shown in the first row of Figure 4. We aim to investigate whether our approach can achieve greater efficiency than the fixed rollout length setup when using a similar number of imagined rollout transitions. For convenience, we set the fixed rollout length to $T = 8$, which closely approximates the average rollout length in our method. The results of this comparison are presented in the second row of Figure 4. Reducing $T$ from 15 to 8 results in performance degradation for the fixed rollout length setup in some games. Our method, however, is more sample efficient than both settings, using a smaller number of imagined rollout transitions while achieving better performance. It can also be seen that, in our method, the average rollout length stays in the middle of the allowed range, with a variance large enough to cover the entire range. This suggests that CRLA has the ability to safely and flexibly adjust the rollout length to balance data quality and efficiency. 4.3 Analyzing the Validity of Truncation We want to observe where CRLA chooses to truncate the rollout. For illustration, we visualize a rollout trajectory of the Atari game Ms. Pac-Man. To demonstrate the validity of the conservator, we visualize the actor's output distribution and the conservator's output distribution at each step of the rollout trajectory. Figure 5 presents comprehensive information about the entire rollout trajectory. We can see that CRLA chooses to terminate the rollout at $t = 10$, when the action distribution significantly deviates from the output of the conservator. At this step, the agent selected the action UPLEFT, predicted by the conservator to be a rarely practiced action, while the action DOWN was considered a frequently practiced action. From $t = 11$ to 14, the agent selected the action UPLEFT, but the reconstructed observations reveal that the agent actually moved down. This illustrates that the world model is overfitted to the action DOWN at these latent states, resulting in an incorrect rollout trajectory. This addresses the third question and demonstrates that CRLA can effectively truncate rollouts at critical points. Appendix A.3 shows more rollout trajectories from other games. 4.4 Analysis of Performance Degradation Based on our experimental results, we would like to explore under which scenarios CRLA may lead to performance degradation. We select the Seaquest game as an illustrative example, since it shows performance degradation after using CRLA. In Figure 6, we present a partial rollout trajectory of the Seaquest game. A notable difference compared to the Ms. Pac-Man game is that the conservator's output is more uniform in Seaquest, indicating that it predicts many actions in the current state with similar frequencies.
In contrast, the actor is more explicit in its decisions, with a high probability of selecting a particular action. There are two possible reasons for the conservator's more uniform output: one is that action selection is more uniform when interacting with the environment for sampling; the other is that the encoding of observations into the latent space is not well learned, leading to latent state confusion. In this case, it may be difficult for the conservator to capture the real practiced action distribution, which makes its judgment very sensitive to the threshold $\alpha$. A larger action space can cause this problem as well, and in such cases the threshold needs to be set carefully. 5 RELATED WORK Model-based reinforcement learning methods improve sample efficiency by interacting with the learned world model. However, model errors prevent model-based approaches from acquiring high-quality data from the world model. Previous works have found that even small model errors can be compounded by multi-step rollouts, driving the predicted state away from the region where the model has high accuracy. To mitigate the effects of compounding error, previous work has proposed many improvements. Some methods improve the model to achieve more accurate predictions. Kaiser et al. (2020) reduced prediction complexity by embedding complex high-dimensional image observations into low-dimensional hidden spaces using deep convolutional neural networks. Hafner et al. (2019) proposed the Recurrent State-Space Model (RSSM) and achieved outstanding prediction accuracy. Micheli et al. (2022) and Robine et al. (2023) utilized the powerful sequence modeling capabilities of transformers to accurately learn the dynamics of the environment. Other approaches reduce the model error by improving the training scheme. Yu et al. (2021) introduced a cycle-consistency constraint for representation and model learning to improve sample efficiency. Eysenbach et al. (2022) proposed a single objective for jointly training the model and the policy to tackle the objective mismatch problem. Ghugare et al. (2022) designed aligned latent models to simplify the training of the latent-space model and policy while remaining self-consistent. However, since it is difficult to fully explore the whole state space in complex environments, the error of the world model cannot be completely eliminated. One idea is to mitigate the effects of model error by limiting the rollout length. Nguyen et al. (2018) argued that a fixed rollout length is problematic and proposed an adaptive rollout method using uncertainty estimation, but only for simple deterministic environments. Xiao et al. (2019) introduced an adaptive model-based value expansion method that selects planning horizons for each state according to the estimated compounding error, but it can still only plan over a small range of horizons. Lai et al. (2020) developed bidirectional models that generate trajectories in the forward and backward directions from the starting point to reduce compounding error without decreasing the rollout length; however, the rollout length itself remains fixed. Lai et al. (2021) utilized metrics from the training process to guide rollout length adjustment, but required additional training data to train the hyper-controller. Our approach differs from previous methods in that we introduce a conservative strategy to adjust the rollout length rather than utilizing metrics from the training process such as the training loss.
Our method is computationally simpler compared to previous methods and can safely and efficiently adjust the rollout length to balance data quality and efficiency. 6 CONCLUSION AND DISCUSSION In this paper, we propose a novel conservative rollout length adaptation method called CRLA, which prevents the rollout from falling into regions with large prediction errors by truncating the rollout when there is a high probability of selecting rarely taken action. CRLA avoids the rollout trajectory that deviate too far from the true transition by conservatively truncating the rollouts. We validate the effectiveness of our method through experimental results and theoretical analysis. We evaluate CRLA applied to DreamerV3 on the Atari100k benchmark and achieve significant performance gains in most environments. We believe that our work is an important step towards further improving the performance of model-based reinforcement learning methods. The limitations of our work are that it is only applicable to the discrete action space and the generalization of the conservator may not be sufficient since it is trained only on real samples but needs to be evaluated on imaginary trajectories. We will look into this further in our future work. REFERENCES Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. *Journal of Artificial Intelligence Research*, 47:253–279, 2013. Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, and Russ R Salakhutdinov. Mismatched no more: Joint model-policy optimization for model-based rl. *Advances in Neural Information Processing Systems*, 35:23230–23243, 2022. Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov. Simplifying model-based rl: learning representations, latent-space models, and policies with one objective. *arXiv preprint arXiv:2209.08466*, 2022. David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. *Advances in neural information processing systems*, 31, 2018. Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In *International conference on machine learning*, pp. 2555–2565. PMLR, 2019. Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models, 2023. Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 12498–12509, 2019. Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, and Henryk Michalewski. Model based reinforcement learning for atari. In *International Conference on Learning Representations*, 2020. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Hang Lai, Jian Shen, Weinan Zhang, and Yong Yu. Bidirectional model-based policy optimization. In *International Conference on Machine Learning*, pp. 5618–5627. PMLR, 2020. 
Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, and Zhenguo Li. On effective scheduling of model-based reinforcement learning. *Advances in Neural Information Processing Systems*, 34:3694–3705, 2021. Vincent Micheli, Eloi Alonso, and François Fleuret. Transformers are sample efficient world models. *arXiv preprint arXiv:2209.00588*, 2022. Nhat M Nguyen, Abhineet Singh, and Kenneth Tran. Improving model-based rl with adaptive rollout using uncertainty estimation. 2018. Aske Plaat, Walter Kosters, and Mike Preuss. High-accuracy model-based reinforcement learning, a survey. *Artificial Intelligence Review*, pp. 1–33, 2023. Jan Robine, Marc Höftmann, Tobias Uelwer, and Stefan Harmeling. Transformer-based world models are happy with 100k interactions. *arXiv preprint arXiv:2303.07109*, 2023. Max Schwarzer, Ankesh Anand, Rishab Goel, R. Devon Hjelm, Aaron C. Courville, and Philip Bachman. Data-efficient reinforcement learning with self-predictive representations. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net, 2021. Max Schwarzer, Johan Samir Obando Ceron, Aaron Courville, Marc G Bellemare, Rishabh Agarwal, and Pablo Samuel Castro. Bigger, better, faster: Human-level atari with human-level efficiency. In *International Conference on Machine Learning*, pp. 30365–30380. PMLR, 2023.
5KUiMKRebi
- I wonder about the integer inputs (i.e. the tensor coordinates) to the INR network. Do you normalise these coordinates somehow or directly input the integers into the INR network? Did such an integer input space cause any problems during training?
Implicit Neural Representation Inference for Low-Dimensional Bayesian Deep Learning Panagiotis Dimitrakopoulos¹, Giorgos Sfikas² & Christophoros Nikou¹ ¹Dept. of Computer Science & Engineering, University of Ioannina, Ioannina Greece ²Dept. of Surveying & Geoinformatics, University of West Attica, Athens Greece ¹{p.dimitrakopoulos,cnikou}@uoi.gr, ²gsfikas@uniwa.gr Abstract Bayesian inference is the standard for providing full predictive distributions with well calibrated uncertainty estimates. However, scaling to a modern, overparameterized deep learning setting typically comes at the cost of severe and restrictive approximations, sacrificing model predictive strength. With our approach, we factor model parameters as a function of deterministic and probabilistic components; the model is solved by combining maximum a posteriori estimation of the former, with inference over a low-dimensional, Implicit Neural Representation of the latter. This results in a solution that combines both predictive accuracy and good calibration, as it entails inducing stochasticity over the full set of model weights while being comparatively cheap to compute. Experimentally, our approach compares favorably to the state of the art, including much more expensive methods as well as less expressive posterior approximations over full network parameters. 1 Introduction Bayesian Neural Networks (BNNs) are a class of models that propose elegant solutions to the pathologies of standard NNs [Ritter et al., 2018a; Jospin et al., 2022; Gawlikowski et al., 2023]. In BNNs, model parameters are defined as random variables that follow a prior (posterior) distribution, which encodes knowledge about the model before (after) having “seen” the training data. Learning is cast as an inference problem, where the task is to compute efficiently the posterior distribution. In turn, making predictions on new data is replaced by computing a predictive distribution. Advantages include that uncertainty estimates are calibrated and robust, and hyperparameter estimation can be performed through a principled evidence maximization framework. In BNNs, Bayesian inference is not exact, and a direct application of Bayes’ law leads to an intractable computation. An approximation has to be applied, and in this respect numerous solutions have been proposed. A factor that complicates this problem is that the approximation must lead to a scalable, practical implementation that must take into account that the data and model size may be far larger than what was the norm in methods and models that dominated Bayesian inference in the pre-deep learning era. Several solutions have been proposed in this respect, rehashing and adapting older methods [Betancourt, 2017; Daxberger et al., 2021a] or putting forward completely fresh approaches [Maddox et al., 2019]. Implicit Neural Representations (INRs) are related to a different line of research that is orthogonal to that involving Bayesian networks [Sitzmann et al., 2020; Dupont et al., 2021b]. With INRs, the goal is to represent a signal in terms of a trained neural network. Unlike standard representations as discrete sets of values over a canonical grid, an INR accepts continuous coordinates as inputs. Therefore, the INRs allow for a continuous representation, with the underlying NN providing values of the represented signal at theoretically any granularity. 
Related breakthroughs in improving representation of high frequencies have contributed to the popularity of the approach [Sitzmann et al., 2020; Mildenhall et al., 2021]. Numerous signal representation use-cases have been explored, including images, video, 3D shapes or Neural Radiance Fields (NeRFs). With the latter, a NN is tasked with mapping ray position and direction to color and density values. Part of the parameters of a larger NN can also be encoded with an INR; in [Romero et al., 2021a], convolutional kernels are represented in terms of Multiplicative Anisotropic Gabor Networks. In this case, the implicit representation allows for kernels that generalize well to higher resolutions than the ones originally trained with. Aside from allowing for continuous representation at multiple scales, another major focus involves the INR’s capability of producing a compressed, low-dimensional representation (Benbarka et al., 2022; Strümpler et al., 2022). In this work, we propose a class of Bayesian Neural Network that is parameterized using a combination of deterministic and stochastic parameters. In recent work, similar partitions are employed (Kristiadi et al., 2020; Dusenberry et al., 2020; Daxberger et al., 2021b), where a specific subnetwork is set to be stochastic while the rest of the network is deterministic. Unlike these works, we define all parameters as functions conditioned over both deterministic and probabilistic components. Normally, this is very much desired but computationally prohibitive due to the huge number of parameters in modern NNs; in our work, this is made feasible due to the probabilistic component being parameterized through an INR hypernetwork, which compresses probabilistic factors through a low-dimensional SIREN representation (Sitzmann et al., 2020). It is over this representation that we assume a prior distribution, and perform inference. As the number of probabilistic factors is kept low, we are allowed to make fewer concessions w.r.t. constraining the form of the posterior and the predictive. The result is a process that is comparatively closer to exact inference, leading to more accurate estimates and better uncertainty calibration. In a nutshell, the deterministic model component is responsible for ensuring accurate results, while the low-dimensional probabilistic component is responsible for inducing stochasticity to the entirety of the network. We validate our claims and model across a variety of experimental trials, where we show that our model produces accurate and well-calibrated uncertainty estimates. 2 BACKGROUND AND MOTIVATION We consider a supervised learning setting, where we have a training dataset \( D = \{X, Y\} \), with inputs \( X = \{x_n\}_{n=1}^N \) and outputs \( Y = \{y_n\}_{n=1}^N \), and we define a mapping \( g_w : \mathcal{X} \rightarrow \mathcal{Y} \), where \( \mathcal{X} \) and \( \mathcal{Y} \) are the input and output domains respectively. This mapping is modelled as a NN with parameters (weights and biases) \( w \in \mathbb{R}^{d_w} \). Under the BNN paradigm, we assume that the mapping parameters are probabilistic, so we can say that they follow some prior distribution \( p(w) \), while we aim to compute (in practice, estimate) their posterior distribution \( p(w|D) \). Given the posterior distribution, we can then opt to find the predictive distribution for some unseen datum \( x^* \), formally: \[ p(y|D, x^*) = \int p(y|g_w(x^*))p(w|D)dw. 
\] (1) In contrast to a Maximum a Posteriori (MAP) solution, which would optimize a cost combining log-likelihood and log-prior terms: \[ w = \arg\max_w [\log p(y|g_w(x)) + \log p(w)], \] (2) a Bayesian solution aims to compute distributions for both the posterior and the predictive. Several options are available to proceed. The Stochastic Weight Averaging-Gaussian method (SWAG) assumes a Gaussian posterior for the weights, with the distribution mean and covariance approximated as a function of the objective optimization method (Stochastic Gradient Descent) with a modified learning schedule (Maddox et al., 2019). Laplace Approximation (LA) also assumes Gaussian distributed parameters, with a precision matrix that is computed as the negative Hessian of the loss. After having a weight posterior, an option can be to sample the predictive distribution and either obtain point estimates for test data, or perform Bayesian model averaging (Maddox et al., 2019). Additional simplifying assumptions can lead to a closed form also for the predictive. The Generalized Gauss-Newton approximation is closely related to a linearizing assumption for the output layer of the NN (Immer et al., 2021), which conveniently leads to a Gaussian approximation for the predictive distribution. The covariance of the predictive is then dependent on a combination of two factors: the covariance of the posterior (negative loss Hessian in LA) and the Jacobian for the specific point. Interestingly, a relation between the subspace spanned by the SGD trajectory vectors (used by SWAG) and the corresponding one to the most important eigenvectors of the Hessian (used by LA) is discussed in Gur-Ari et al. (2018). Normalizing Flows (NFs) represent a powerful framework for density estimation (Dinh et al., 2016), that may in principle also be used to model the posterior of a large NN. Scalability is a crucial factor when it comes to learning methods in the context of NNs. Assuming an entire network to be probabilistic implies significant overhead in terms of various factors. Common remedies include assuming a Gaussian form combined with a low-rank approximation of the Hessian, and using a simplified, even diagonal covariance structure. Kronecker-Factored Approximate Curvature (KFAC) expresses a useful tradeoff, which neglects only cross-layer correlations and uses a block-diagonal covariance matrix (Ritter et al., 2018b). Another option involves treating only part of the network as non-deterministic (Kristiadi et al., 2020; Daxberger et al., 2021b). We then have uncertainty only in the last layer neurons, treating the rest of the network as a feature extractor (Weber et al., 2018). As a consequence, and to the degree that these assumptions are overly simplistic, the approximate distributions may turn out to be very far from the actual posterior and predictive. This often translates to a dramatic reduction of predictive strength in practice. In the following Section, we shall discuss our approach to dealing with these issues. 3 Proposed Model: Low-Dimensional Bayesian Deep Learning We propose to move from the high-dimensional setting of full inference in a modern Neural Network to low-dimensional inference, by assuming an auxiliary Implicit Neural Representation alongside the main network. We perform density estimation over the parameters of the INR hypernetwork, while treating the factors corresponding to the original weights as deterministic parameters. 
This allows us to employ powerful inference methods (we discuss LA, SWAG, NFs) with minimal approximation concessions, by leveraging on the small size and representational strength of the INR. 3.1 INR Modeling Given the NN that models the mapping $g_w$ (cf. Section 2), the first step of our approach is to augment each weight $w$ with a multiplicative nuisance factor $\xi$ (Srivastava et al., 2014; Kingma et al., 2015; Louizos & Welling, 2017). In particular, we use $w \circ \xi$, where $\circ$ is point-wise multiplication, and the dimensionality of $\xi$ is identical to that of $w$. The $\xi$ factor is parameterized using an INR (Dupont et al., 2022), obtained as the output of a function $f_{w_{INR}} : I \rightarrow \mathbb{R}$, where tensor coordinates (domain $I$) are mapped to layer values. More specifically for a convolutional main network, the INR hypernetwork learns a mapping from a 5 dimensional $I$ to a scalar value which corresponds to the nuisance factor associated with the weight $w_{c,o,k_i,k_j,l}$ located at the kernel position $k_i,k_j$ at channel $c$ of filter $o$ in layer $l$ of the main/primary network (in the case of a fully-connected layer, dimensions $k_i$ and $k_j$ are omitted). With the above modeling choice, the hypernetwork can be easily shared across each layer of the main network and reduce the overall modeling complexity. The architecture of the INR is defined as a multi-layer perceptron with sinusoidal activations, as with the SIREN model of Sitzmann et al. (2020). Formally, the input vector $z_{i-1}$ for layer $i$ is transformed according to $z_{i-1} \rightarrow \sin(\omega_0(w^i_{INR}z_{i-1} + b^i_{INR}))$, where $w^i_{INR}, b^i_{INR}$ denote weights and biases of the INR layer $i$, and $\omega_0$ is a fixed hyperparameter. In INRs, any target quantity can be modelled regardless of its size, while in traditional networks parameter size is coupled with target dimensionality. This characteristic, in combination with the stochastic character of $\xi$ allows us to choose the complexity of $f_{w_{INR}}(\cdot)$ to be (much) lower than that of its target ($d_{w_{INR}} \ll d_\xi$). Thus, $w_{INR}$ parameters can also be interpreted as a low-dimensional representation of factors $\xi$. 3.2 Bayesian Inference In our method, we treat the product $w \circ \xi$ as a stochastic random variable coming from a parametric distribution $p(w,\xi) = p(w)p(\xi)$. Here we are taking advantage of the INR hypernetwork modeling of $\xi$ and implicitly place a prior over those variables, by defining a prior over the INR parameters $w_{INR}$. This allows us to reason about $\xi$ but in the much lower dimensional space of $w_{INR}$. Following the supervised learning setting of Section 2, our aim remains to compute the posterior $p(w,w_{INR}|D)$. Since the posterior distribution cannot be obtained in closed form, we cannot apply exact inference methods. Thus we resort to approximate inference, under an additional assumption that we only require a deterministic estimate over $w$. We encode this constraint as a factorization over separate approximate posterior distributions $q(w)$ and $q(w_{INR})$, where $q(w) = \delta(w - \bar{w})$, and $\delta(\cdot)$ is the Dirac delta function. This forces $w$ to be deterministic, equal to a point estimate $\bar{w}$. The full approximate posterior is then written as: \[ p(w, w_{INR}|D) \approx q(w, w_{INR}) = q(w_{INR})q(w) = q(w_{INR})\delta(w - \bar{w}). 
\] (3) **Laplace Approximation.** One way to proceed is by constructing a Laplace approximation over \( q(w_{INR}) \). We approximate \( p(w_{INR}|D) \) by \[ q(w_{INR}) = N(w_{INR}, \Lambda^{-1}), \] (4) \[ \Lambda = C^{-1} + \sum_{n=1}^{N} \nabla^2_{w_{INR}} \log p(y_n|g_w, w_{INR}(x_n))|w_{INR}, \] (5) where we have assumed a prior \( w_{INR} \sim N(0, C) \). Mean \( \bar{w}_{INR} \) is found as the Maximum a Posteriori solution (eq. 2). Under this scheme, \( q(w, w_{INR}) \) is expressed by a product of a Gaussian and a Dirac delta distribution, which can be seen alternatively as a single Gaussian distribution with precision \( \gamma \to +\infty \) for variates corresponding to \( w \) and zero covariance between \( w \) and \( w_{INR} \) terms by assumption (eq. 3). Concerning the weights and biases that directly parameterize the “main” network (i.e. the product \( w \circ \xi \)), we note that these are in general non-Gaussian, even under LA assumptions. The INR \( f_{w_{INR}}(\cdot) \) transforms the (approximately) Gaussian \( w_{INR} \) into a non-Gaussian density \( q(\xi) \). This is multiplied by deterministic \( w \) where the result follows a density that is a scaled version of \( q(\xi) \). The first and second moments are equal to \( W\mathbb{E}\{\xi\} \) and \( W\mathbb{V}\{\xi\}W \), where \( W = \text{diag}\{w\} \) and \( \mathbb{E}\{\cdot\}, \mathbb{V}\{\cdot\} \) denote expectation and covariance respectively. Once we have computed a posterior over the weights, we can estimate the predictive (eq. 1) by acquiring \( \xi \) samples by first sampling \( w_{INR} \sim q(w_{INR}) \) and evaluating \( \xi = f_{w_{INR}}(\cdot) \). We finally scale them by \( w \), then the product is used to compute \( g(x) \) and \( p(y|g(x)) \) in a Monte Carlo fashion. Alternatively, the predictive distribution (eq. 1) can be computed in closed form, as long as we impose a linearizing assumption over the network output. Specifically, this involves a first-order Taylor expansion of network output \( g(\cdot) \) around \( w_{INR} \). As by LA assumption, parameters \( w_{INR} \) are a posteriori Gaussian-distributed, a linear transformation over them through linearization would result in a Gaussian predictive as well; linearization over other variables (\( w, \xi \)) would not have been fruitful due to their being non-Gaussian. Hence, we only require parameters \( w_{INR} \) to vary in this approximation, while we assume the rest of the parameters \( w \) to be constant at their MAP solution. Formally we write: \[ g_{lin}(x) \approx g_{\bar{w}, \bar{w}_{INR}}(x) + J_{w_{INR}}(x)(w_{INR} - \bar{w}_{INR}), \] (6) where we used \( J_{w_{INR}}(x) = \frac{\partial g_{\bar{w}, w_{INR}}(x)}{\partial w_{INR}}|w_{INR} \). For the predictive we then have: \[ p(y|D, x^*) = N(g_{\bar{w}, \bar{w}_{INR}}(x^*), J_{w_{INR}}^T(x^*)\Lambda^{-1}J_{w_{INR}}(x^*)). \] (7) **Stochastic Weight Averaging.** An alternative over LA is to use SWAG (Maddox et al., 2019) over INR parameters. In this context, this amounts to approximating \( p(w_{INR}|D) \) by a Gaussian \( q(w_{INR}) \) as in eq. 4 but with inverse \( \Lambda \) equal to the sample covariance over the SGD trajectory: \[ \Lambda^{-1} = \frac{1}{T-1} \sum_{i=1}^{T} (w_{INR}^{(i)} - \bar{w}_{INR})(w_{INR}^{(i)} - \bar{w}_{INR})^T, \] (8) where \( \{w_{INR}^{(1)}, w_{INR}^{(2)}, \ldots, w_{INR}^{(T)}\} \) are training updates of INR parameters. The predictive distribution is estimated by Bayesian model averaging through Monte Carlo sampling. 
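To make the sampling-based predictive concrete, the sketch below shows how a Gaussian posterior over $w_{INR}$ (as produced by the Laplace or SWAG fits of Equations 4–5 and 8, here simplified to a diagonal covariance) could be turned into multiplicative factors $\xi$ via a small SIREN hypernetwork and then into a Monte Carlo predictive. The main network is reduced to a single linear layer and the coordinate scheme to two dimensions; all names, sizes, and posterior moments below are illustrative stand-ins rather than the paper's implementation.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "main" network: a single linear layer W (out x in), used functionally.
IN, OUT = 8, 3
W = torch.randn(OUT, IN) * 0.1                       # deterministic weights (MAP estimate stand-in)

# Coordinates (row, col) of every entry of W, normalised to [-1, 1]; these play the
# role of the tensor coordinates fed to the INR (2-D here instead of the 5-D scheme).
rows, cols = torch.meshgrid(torch.arange(OUT), torch.arange(IN), indexing="ij")
coords = torch.stack([rows, cols], dim=-1).reshape(-1, 2).float()
coords = 2 * coords / coords.max(0).values - 1

# Tiny SIREN hypernetwork f_{w_INR}: coords -> xi, written functionally so that its
# flattened parameter vector can be swapped for posterior samples.
D_HID, OMEGA_0 = 8, 30.0
SHAPES = [(2, D_HID), (D_HID,), (D_HID, 1), (1,)]    # W1, b1, W2, b2
D_INR = sum(math.prod(s) for s in SHAPES)

def siren_xi(w_inr_flat):
    chunks, i = [], 0
    for s in SHAPES:
        n = math.prod(s)
        chunks.append(w_inr_flat[i:i + n].reshape(s))
        i += n
    W1, b1, W2, b2 = chunks
    h = torch.sin(OMEGA_0 * (coords @ W1 + b1))      # SIREN layer: sin(omega_0 (Wz + b))
    xi = (h @ W2 + b2).reshape(OUT, IN)
    return 1.0 + 0.1 * xi                            # keep factors near 1 in this toy example

# Stand-in Gaussian posterior q(w_INR) = N(mean, diag(std^2)); in the method this
# would come from the Laplace or SWAG fit over the INR parameters.
post_mean = torch.zeros(D_INR)
post_std = 0.05 * torch.ones(D_INR)

def mc_predictive(x, n_samples=32):
    """Monte Carlo predictive: average softmax outputs over posterior samples of w_INR."""
    probs = 0.0
    for _ in range(n_samples):
        w_inr = post_mean + post_std * torch.randn(D_INR)   # w_INR ~ q(w_INR)
        xi = siren_xi(w_inr)                                # xi = f_{w_INR}(coords)
        logits = x @ (W * xi).t()                           # main network with weights w o xi
        probs = probs + F.softmax(logits, dim=-1)
    return probs / n_samples

# Example: predictive class probabilities for a batch of 4 random inputs.
print(mc_predictive(torch.randn(4, IN)))
```

Keeping $d_{w_{INR}}$ small is what makes this cheap: each posterior sample only requires a forward pass of the tiny hypernetwork before the deterministic main network is evaluated.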
Formally we have: \[ p(y|D, x^*) \approx \frac{1}{K} \sum_{k=1}^{K} p(y|g_{\bar{w}, \xi_k}(x^*)), \] (9) where \( K \) samples \( \{\xi_1, \xi_2, \ldots, \xi_K\} \) are drawn from the approximate posterior \( q(\xi) \) by sampling \( w_{INR} \sim q(w_{INR}) \) and evaluating the INR, as described in the previous paragraph. **Normalizing Flows.** Normalizing Flows are another powerful modeling choice for \( q(w_{INR}) \). In this context, \( q(w_{INR}) \) is freed from the Gaussian restriction and can be any flexible parametric distribution. A normalizing flow transforms an initial random variable \( z \), typically sampled from a standard Normal, by applying a chain of invertible parameterized transformations. The RealNVP model (Dinh et al., 2016) is based on a flow composed of a series of affine coupling layers defined as \( z_{i-1} \rightarrow m \circ z_{i-1} + (1 - m) \circ (z_{i-1} \circ \exp(s(m \circ z_{i-1})) + t(m \circ z_{i-1})) \), where \( s \) and \( t \) stand for scale and translation, which are typically linear mappings, while \( m \) is a channel-wise masking scheme. The flow parameters can be computed by directly optimizing the variational lower bound: \[ L(w, w_{INR}) = \mathbb{E}_{q(w_{INR})} \left[ \log p(y|g_{w, w_{INR}}(x)) \right] - KL\big(q(w_{INR}) \,\|\, p(w_{INR})\big), \] (10) where the carefully designed coupling layers ensure that the inverse and the determinant of the Jacobian of each transformation can be efficiently computed. The predictive distribution is estimated by Bayesian model averaging through Monte Carlo sampling, similar to Eq. 9.
Table 1: Numerical results for classification on CIFAR10 (top) and Corrupted CIFAR10 (bottom) for different design choices. Log-Likelihood (\(\uparrow\)) and Expected Calibration Error (\(\downarrow\)) are reported.
| | Modeling | Noise Structure | Type of INR | Noise Type | Activation Type |
|-------|----------|-----------------|-------------|------------|-----------------|
| | \(w\) | \(w \circ \xi\) | Rank-1 | Channel | Full |
| LL | -1.29 | -0.37 | -0.44 | -0.47 | -0.40 |
| ECE | 0.01 | 0.05 | 0.06 | 0.06 | 0.05 |
| LL | -1.80 | -0.97 | -1.60 | -1.43 | -1.18 |
| ECE | 0.17 | 0.11 | 0.20 | 0.20 | 0.15 |
4 EXPERIMENTAL RESULTS In this Section, we provide numerical results for the proposed INR-based scheme, in comparison to recent Bayesian inference methods. Namely, we compare against the following methods: MC Dropout (Gal & Ghahramani, 2016), Bayes by Hypernet (BbH) (Pawlowski et al., 2017), Deep Ensembles (Lakshminarayanan et al., 2017) – considered among the state-of-the-art methods for uncertainty estimation in Deep Learning (Ovadia et al., 2019; Ashukha et al., 2020) – and last-layer Laplace approximation (LL). We start by experimenting with different modeling choices and evaluate each one on a baseline classification task, in order to quantify how our method performs under different modeling scenarios. For our main numerical analysis, we deployed three different experimental setups. First, we evaluate the predictive uncertainties of our method on a 1D synthetic regression task. Second, we carried out experiments to evaluate INR performance on different UCI regression datasets. Last, we ran image classification trials (CIFAR100, CIFAR10 and MNIST) where we compare ResNet variants for prediction and out-of-distribution robustness. We test three variants of the proposed INR-based model, namely INR-Laplace (Eqs. 4, 5, 7), INR-SWAG (Eqs. 4, 8, 9) and INR-RealNVP (Eq. 10). The three variants differ w.r.t. the approximation strategy for the posterior and the predictive (cf. Section 3).
For the first two cases we compute the full Gaussian covariance for the weight posterior (avoiding, e.g., KFAC or low-rank approximations (Daxberger et al., 2021a)). Throughout our experiments, we found that the proposed model provides good predictive uncertainties in a variety of settings, highlighting the benefits of low-dimensional Bayesian inference. Implementation details of the proposed and compared models, as well as the general benchmark setup, are provided in the Appendix (App. B). 4.1 DESIGN CHOICES In this Section, we carry out ablation studies that justify the particular modeling and INR architecture described in subsection 3.1 and help us understand the behavior of the hypernetwork under different settings. We numerically evaluate each potential modeling scenario by training a ResNet-20 model on CIFAR-10 according to subsection 4.4 and evaluating its MAP solution on both in- and out-of-distribution data. Table 1 includes the main results. Our first ablation study aims to justify the introduction and use of \( \xi \) variables, i.e., we investigate how the BNN performs with only the INR for the posterior (see Table 1 under the column "Modeling"). As the \( \xi \) variables only serve to induce stochasticity, removing the weights \( w \) results in a model which is not able to capture any information from the training data. Furthermore, augmenting \( w \) with \( \xi \) results in a more sophisticated model which yields better calibrated predictions. We choose the INR hypernetwork to be shared across all the layers of the main network. Sharing the INR hypernetwork, besides being efficient, can also significantly reduce the dimensionality of \( w_{INR} \), as the total \( d_{w_{INR}} \) for individual per-layer hypernetworks would be a multiple of the number of layers of the main network. As an example, for Wide-ResNets the magnitude of this figure can be up to hundreds of variates. Despite having fewer parameters, the shared version of the hypernetwork is highly comparable to its more expensive counterpart, as Table 1 (column labelled "Type of INR") indicates. Also, we introduce independent nuisance factors $\xi$ for every single weight $w$. In Table 1 (column labelled "Noise Structure") we measure the benefits of our full-rank multiplicative noise versus other low-rank modeling options used in related works (Dusenberry et al., 2020; Louizos & Welling, 2017). In the same Table (column labelled "Noise Type"), we can see the results of evaluating two different types of noise injection in the main model, namely multiplicative noise ("Mult") and additive noise ("Add"). The additive noise hugely underperforms, whereas multiplicative noise factors provide good, well-calibrated solutions. Because, in the multiplicative structure, $\nabla \xi$ depends on $W$ during training, we argue that, as $W$ is responsible for fitting the data, it can pass valuable information to the hypernetwork weights, leading to a significant increase in overall performance. Furthermore, we find that Sine/Periodic activations – the "default" choice in Sitzmann et al. (2020) – slightly outperform a hypernetwork with ReLU activations, as we can see in Table 1 (column labelled "Activation Type"), even though the results are still very close. Finally, we evaluate the effects of INR network size on uncertainty estimates. We want to measure how increasing the number of parameters of the hypernetwork affects the predictive behavior of the model.
We trained 3 different INR models with increasing numbers of trainable parameters. Following Fort et al. (2019) and Dusenberry et al. (2020), in Figure 1 we examine the normalized diversity of INRs of increasing size, where the posterior over $w \circ \xi$ was estimated via INR-SWAG and INR-MAP. Increasing the size of the INR hypernetwork results in more complex weight posteriors, which is reflected in better scores across all metrics on out-of-distribution data. Nevertheless, a small INR with only 350 trainable parameters is competitive in this training setup. 4.2 VISUALIZING UNCERTAINTY We use a synthetic 1D regression task with three disjoint clusters of input data, as proposed in Izmailov et al. (2020). This dataset is carefully designed to test "in-between" uncertainty, i.e., model confidence in between these disjoint clusters of data (Foong et al., 2019). Ideally, we want a model to predict high uncertainty values as test data move away from the observed data. In this test, we use a fully-connected architecture with hidden layers of [200, 50, 50, 50] neurons respectively. Following Izmailov et al. (2020), the network takes two inputs $\tilde{x} = (x, x^2)$ and outputs a single real value $y = f(\tilde{x})$. The INR network has 3 layers consisting of $[2, 10, 4]$ neurons respectively, resulting in a total of 160 trainable parameters (equal to only 1% of the number of the $\xi$ parameters, cf. Section 3). Results are shown in Figure 2. We also include a Gaussian Process (GP) with a Radial Basis Function (RBF) kernel as the state of the art for this problem. Our INR-Laplace preserves more of the uncertainty both away from and in between the observed data. Other methods, like Deep Ensembles and MC Dropout, infer a desirable uncertainty structure but still remain quite overconfident. Furthermore, the proposed INR model is able to maintain the appealing characteristics of the approximate inference methods applied, specifically the stationary structure (or in-between-uncertainty) benefits of the Linearized Laplace approximation, as shown in multiple works (Kristiadi et al., 2020; Daxberger et al., 2021b). ### 4.3 UCI Regression We next test our method on the UCI regression tasks (Asuncion & Newman, 2007). We experiment with 8 UCI regression datasets using the standard training-evaluation-test splits from Hernández-Lobato & Adams (2015) and their GAP variants (Foong et al., 2019). To measure performance, we use the Gaussian test log-likelihood (LL). Our training strategy follows the work of Daxberger et al. (2021b). The INR network has 4 layers consisting of $[5, 5, 5, 1]$ neurons respectively, resulting in a total of 70 trainable parameters (equal to only 2% of the number of the $\xi$ parameters, cf. Section 3). Figure 3: Numerical results for regression trials on UCI datasets (Asuncion & Newman, 2007). Mean values of test Log-Likelihood (\(\uparrow\)) are shown with ± 1 standard deviation error bars, obtained over standard (Hernández-Lobato & Adams, 2015) and GAP (Foong et al., 2019) splits. The main results are depicted in Figure 3. The small MLP network enabled us to compute the full GGN matrix in the Laplace approximation of the main network and add it as a baseline. As we can see, INR combined with RealNVP or LA achieves better test log-likelihood – a metric which considers both uncertainty and accuracy – compared to BbH and the LL Laplace approximation, while being followed closely by MC Dropout.
Furthermore, the proposed INR remains competitive with Deep Ensembles, even surpassing them on five out of eight datasets and staying close on the rest in both the standard and gap splits, as the standard deviation bars indicate. ### 4.4 Image Classification under Distribution Shift We evaluate our method on standard image classification tasks on the CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) datasets. We use ResNet-50 (He et al., 2016) in order to test the ability of the proposed INR-based method to scale to larger models. A capable Bayesian inference technique is critical for deep models, as they tend to exhibit less accurate calibration in this context (Guo et al., 2017). We run experiments under a high degree of distribution shift, as under these conditions the evaluation of predictive uncertainty is the most useful in practice (Ovadia et al., 2019). Our INR hypernetwork (Sitzmann et al., 2020) has 4 layers with $[10, 10, 10, 1]$ neurons each, resulting in 260 training parameters (only 0.001% of the parameters $\xi$). Following Ovadia et al. (2019); Antorán et al. (2020), we train ResNet50 on CIFAR10/CIFAR100 and evaluate on data subject to 16 different types of corruption with 5 levels of noise intensity each (Hendrycks & Dietterich, 2019). Figure 4: Numerical results for classification trials on the Corrupted CIFAR100 dataset. The $x$-axis of each plot corresponds to increasing corruption levels. As Fig. 4 indicates, one of the proposed variants, INR-RealNVP, outperforms non-INR methods in terms of log-likelihood and expected calibration error. Both INR-based methods outperform LL Laplace and MC Dropout, which are overconfident in their predictions and more often erroneous, while remaining competitive with Deep Ensembles. Overall, these results suggest that the proposed approach produces more calibrated and accurate models than other popular uncertainty quantification approaches. Figure 5: Rejection-Classification plots. We quantify the quality of uncertainty estimates by jointly evaluating the predictive entropy of our model on an in-distribution and an OOD test set. Ideally, we want predictive entropy to be high on OOD data, as predictions there should be more uncertain, and vice versa. Following Antorán et al. (2020) and Nadeem et al. (2009), we deploy an OOD rejection scenario in which the models are allowed to reject an increasing proportion of the data based solely on their entropy values. Ideally, highly calibrated and robust models should reject all the OOD examples, as well as the in-distribution examples whose predictions are inaccurate. Figure 5 reports the accuracy on the remaining, non-rejected examples. On CIFAR10-SVHN all methods have the same performance, while on CIFAR100 the INR-RealNVP model fails to distinguish very uncertain in-distribution data from low-uncertainty OOD data. On MNIST-Fashion, the proposed methods INR-SWAG and INR-RealNVP perform best, followed by LL Laplace and Dropout. Finally, we measure the quality of the proposed low-dimensional spaces in terms of predictive uncertainty. Specifically, we compare our INR low-dimensional space with the rank-1 parameterization (Dusenberry et al., 2020), the Wasserstein subnetwork (Daxberger et al., 2021b), and the partially stochastic ResNets of Sharma et al. (2023).
We trained (each method) combined with a Resnet18 for 100 epochs in CIFAR100 while keeping the approximate inference method the same across all low-dimensional spaces. Results in Table 2 show a trend in favor of both proposed INR-$x$ methods and validate to a considerable degree the premise of our method: instead of choosing a subset or subnet following the rationale of the corresponding methods, the INR produces $\xi$ outputs that endow the full network with the desirable stochasticity, while keeping the dimensionality of the random process that we want to do inference upon at a low level. 5 RELATED WORK Hypernetworks. Hypernetworks are NNs that are used to predict deterministically the parameters of another, typically larger network, termed the “primary” network. The terminology is due to Ha et al. Table 2: Numerical results for classification trials on CIFAR100 for different proposed low-dimensional spaces alongside their inference time. | Subspace | Inference | LL ↑ | Error ↓ | Brier ↓ | ECE ↓ | LL ↑ | Error ↓ | Brier ↓ | ECE ↓ | Time ↓ | |----------------|-----------|------|---------|---------|-------|------|---------|---------|-------|--------| | Rank1 | SWAG | −2.29| 0.34 | 0.55 | 0.22 | −4.77| 0.57 | 0.92 | 0.39 | 0.28 | | | Laplace | −4.01| 0.31 | 0.97 | 0.66 | −4.25| 0.58 | 0.97 | 0.40 | 0.55 | | INR | SWAG | −2.09| 0.30 | 0.50 | 0.22 | −4.18| 0.53 | 0.84 | 0.36 | 0.11 | | | Laplace | −3.91| 0.30 | 0.96 | 0.67 | −4.19| 0.58 | 0.97 | 0.39 | 0.51 | | Subnetwork | SWAG | −2.14| 0.30 | 0.49 | 0.20 | −3.97| 0.51 | 0.82 | 0.34 | 0.29 | | | Laplace | −3.95| 0.32 | 0.96 | 0.65 | −4.13| 0.51 | 0.97 | 0.46 | 0.42 | | Partially Stochastic | SWAG | −2.14| 0.30 | 0.49 | 0.20 | −3.97| 0.51 | 0.82 | 0.34 | 0.28 | | | Laplace | −3.99| 0.34 | 0.97 | 0.63 | −4.18| 0.51 | 0.98 | 0.47 | 0.49 | (2016), however the main idea can be traced back to earlier works (see discussion in e.g. Krueger et al. 2017, Karaletos et al. 2018). Krueger et al. (2017) have been among the first to extend hypernetworks to a Bayesian setting. Their Bayesian hypernetwork, modelled as a normalizing flow, learns to predict distributions of weights for the primary network. The flow predicts scaling per-neuron factors for the primary network weights. This is similar to the closely related (Louizos & Welling 2017), which however require an extra inference network to estimate the entropy term of the VLB. Almost concurrently, Pawlowski et al. (2017) proposed BbH for VI with implicit distributions. They use a discriminator network for density ratio estimation (DRE) in the context of prior-constrastive VI (Huszár 2017), and a generator to model the variational distribution. Shi et al. (2017) use a kernel method for DRE instead of a discriminator. Karaletos et al. (2018) and Karaletos & Bui (2020) explore hierarchical prior modeling using NN-based implicit distributions and Gaussian processes. INRs have also been used for approximating model parameters of deep NNs (Romero et al. 2021a,b). **Low-Dimensional Inference.** Bayesian inference in a low-dimensional space is another important related concept, with often considerable overlap to works that can be understood as forms of hypernetworks. Dusenberry et al. (2020) in the spirit of Wen et al. (2020) employ rank-1 multiplicative noise components, before attempting to estimate an approximate posterior over the weights. Izmailov et al. (2020) adopt post-hoc Bayesian inference by constructing a subspace of the BNN weights. 
They apply high fidelity inference on these small subspaces, and were able to produce state-of-the-art results at a moderately low computational cost. Pradier et al. (2018) learn a non-linear latent representation of network weights. Another subgroup of related work can be described as selecting a portion of the BNN parameters to be treated as random variables, and leaving the rest of the model to work deterministically. One of the most popular and straightforward approaches are last-layer BNNs. By selecting *a priori* only the last layer to have a probabilistic treatment, they resort to a linear model which ensures analytical tractability of both inference and predictive distribution in the spirit of Gaussian processes, while the remaining NN structure acts as a feature extractor (Watson et al. 2021, Snoek et al. 2015, Lázaro-Gredilla & Figueiras-Vidal 2010, Weber et al. 2018). Finally, Daxberger et al. (2021b) first obtain a MAP estimate of all weights, then define a subnetwork selected in a way that aims to maximally preserve predictive uncertainty. The small size of the subnetwork allows for the use of a full-covariance Gaussian posterior in tandem with linearized LA (MacKay 1992). **Stochastic INRs.** INRs have been used as models for signal compression (Dupont et al. 2021a), and more recently they have been extended to the Variational Bayesian setting (Guo et al. 2023). Shen et al. (2021) extend NeRFs to learning distributions of all possible radiance fields. A simple variational posterior is assumed, and the base model is extended to learn uncertainty estimates over scene parameters. Vasconcelos et al. (2022) use a BNN as an INR of computerized tomography. ### 6 Conclusion and Future Work We have presented an approach for scalable and efficient Bayesian Deep Learning, that leverages on the small size and representational strength of INRs. Our claims are corroborated by the reported experimental results, which show that the integration of the proposed method results in improving considerably overall uncertainty estimates. For future work, we aim at exploring other ways to integrate INRs (e.g. multiplicative filter networks (Fathony et al. 2020)) as well as integrating with different types of approximations, such as Hamiltonian Monte Carlo (Neal et al. 2011). REFERENCES Javier Antorán, James Allingham, and José Miguel Hernández-Lobato. Depth uncertainty in neural networks. *Advances in neural information processing systems*, 33:10620–10634, 2020. Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. *arXiv preprint arXiv:2002.06470*, 2020. Arthur Asuncion and David Newman. UCI machine learning repository, 2007. Nuri Benbarka, Timon Höfer, and Andreas Zell. Seeing implicit neural representations as Fourier series. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 2041–2050, 2022. Michael Betancourt. A conceptual introduction to Hamiltonian Monte Carlo. *arXiv preprint arXiv:1701.02434*, 2017. Glenn W Brier et al. Verification of forecasts expressed in terms of probability. *Monthly weather review*, 78(1):1–3, 1950. Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux-effortless bayesian deep learning. *Advances in Neural Information Processing Systems*, 34:20089–20103, 2021a. Erik Daxberger, Eric Nalisnick, James U Allingham, Javier Antorán, and José Miguel Hernández-Lobato. 
Bayesian deep learning via subnetwork inference. In *International Conference on Machine Learning*, pp. 2510–2521. PMLR, 2021b. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. *arXiv preprint arXiv:1605.08803*, 2016. Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, and Arnaud Doucet. Coin: Compression with implicit neural representations. *arXiv preprint arXiv:2103.03123*, 2021a. Emilien Dupont, Yee Whye Teh, and Arnaud Doucet. Generative models as distributions of functions. *arXiv preprint arXiv:2102.04776*, 2021b. Emilien Dupont, Hyunjik Kim, SM Ali Eslami, Danilo Jimenez Rezende, and Dan Rosenbaum. From data to functa: Your data point is a function and you can treat it like one. In *International Conference on Machine Learning*, pp. 5694–5725. PMLR, 2022. Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yian Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, and Dustin Tran. Efficient and scalable bayesian neural nets with rank-1 factors. In *International conference on machine learning*, pp. 2782–2792. PMLR, 2020. Rizal Fathony, Anit Kumar Sahu, Devin Willmott, and J Zico Kolter. Multiplicative filter networks. In *International Conference on Learning Representations*, 2020. Andrew YK Foong, Yingzhen Li, José Miguel Hernández-Lobato, and Richard E Turner. 'in-between'uncertainty in bayesian neural networks. *arXiv preprint arXiv:1906.11537*, 2019. Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep ensembles: A loss landscape perspective. *arXiv preprint arXiv:1912.02757*, 2019. Vincent Fortuin. Priors in bayesian deep learning: A review. *International Statistical Review*, 2022. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *international conference on machine learning*, pp. 1050–1059. PMLR, 2016. Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. A survey of uncertainty in deep neural networks. *Artificial Intelligence Review*, pp. 1–77, 2023.
OinvjdvPjp
Could you please demonstrate how well the proposed method can handle situations where it needs to refer to previously mentioned real numbers in the context, ensuring these numbers remain unaltered? How does this embedding method impact a Language Model's capability to preserve real numbers in the given input?
xVal: A Continuous Number Encoding for Large Language Models Anonymous authors Paper under double-blind review Abstract Large Language Models (LLMs) have not yet been broadly adapted for the analysis of scientific datasets due in part to the unique difficulties of tokenizing numbers. We propose xVal, a numerical encoding scheme that represents any real number using just a single token. xVal represents a given real number by scaling a dedicated embedding vector by the number value. Combined with a modified number-inference approach, this strategy renders the model end-to-end continuous when considered as a map from the numbers of the input string to those of the output string. This leads to an inductive bias that is generally more suitable for applications in scientific domains. We empirically evaluate our proposal on a number of synthetic and real-world datasets. Compared with existing number encoding schemes, we find that xVal is more token-efficient and demonstrates improved generalization. 1 Introduction Even as Large Language Models (LLMs) exhibit sophisticated behavior in the generation and analysis of textual data, the scientific community has seen little success in applying these models to datasets consisting mostly of numerical values. LLMs have historically struggled to solve simple arithmetic problems such as multi-digit multiplication (Dziri et al., 2023) and have a tendency to “confabulate” answers (OpenAI, 2023; Frieder et al., 2023). Standard LLM tokenization schemes do not inherently capture the precise quantitative properties that distinguish numerical data from other natural language inputs (Testolin, 2023; Choi, 2021). Recent work exploring Chain-of-Thought reasoning in LLMs has shown improved performance on commonsense reasoning tasks such as arithmetic or mathematical word problems (Nye et al., 2021; Wei et al., 2023; Liu & Low, 2023; Imani et al., 2023), but such methods have limited applicability in making predictions about scientific datasets without highly domain-specific context. Recent work has explored several potential improvements for encoding numerical information as inputs to language models (see Thawani et al. (2021) for a review). For instance, numbers can be encoded digit-by-digit, in scientific notation format, or in base-10 format. (Jiang et al., 2020) maps numbers onto a finite set of “prototype numerals”, while (Sundararaman et al., 2020) enforces constraints such that the cosine distances between the embeddings of numbers reflects their actual mathematical distance. Transformers that use such encodings have been shown to successfully solve various mathematical problems, such as linear algebra problems including matrix multiplication (Charton, 2022). Despite these improvements, many challenges remain unresolved. Language models are known to exploit shortcuts and spurious correlations in the data (Tu et al., 2020; Liu et al., 2022; Dziri et al., 2023) and still struggle with interpolation and out-of-distribution generalization in mathematical problems and in scientific domains (Grosse et al., 2023; Anil et al., 2022). Functions appearing in such domains are often continuous or smooth, with certain exceptions such as points of criticality. 
Similarly, transformer architectures applied to vision and audio domains (e.g., Dosovitskiy et al., 2020; Garg et al., 2022) typically treat numbers continuously without tokenization (see however Copet et al., 2023; Chen et al., 2020b), but these models typically require highly structured inputs, and cannot be applied to datasets with arbitrary sequences of text and numbers. On the other hand, when encoding numbers as text, LLMs are inherently discontinuous in both the encoding and decoding stages. While discrete models can (and do) learn to approximate continuous func- Figure 1: A simplified example illustrating the xVal number encoding and the modified number inference paradigm. On the left, xVal is contrasted with the P1000 text-based numerical encoding scheme. On the right, we illustrate how numbers are addressed within the decoder. tions (d’Ascoli et al., 2022), this can be more challenging and less sample efficient compared to models that have continuity built-in by construction, as in many non-parametric regression models (Wasserman, 2006). In order to overcome this inherent challenge, it is necessary to impose the appropriate inductive bias based on our knowledge of the continuous nature of numbers. We introduce xVal, an inherently continuous method of encoding numerical values in Large Language Models. By encoding the magnitude of numerical values multiplicatively and orienting them in a learnable direction within the embedding space, xVal substantially changes how numbers are processed and interpreted by transformer architectures. This leads to an encoding scheme with a single vocabulary element that also encodes every number as a single token. xVal is therefore both token-efficient and has minimal vocabulary footprint. Coupled with a modified number-inference paradigm, xVal allows a transformer model to be continuous (or smooth given smooth non-linearities) when considered as a map between the numbers of the input string and those of the output. We expect that this leads to a better inductive bias when the functions being approximated are continuous or smooth. We evaluate xVal on a number of synthetic and real-world scientific datasets and compare with existing number encoding schemes. We demonstrate that xVal is both more token-efficient and exhibits better interpolation properties. OUR CONTRIBUTIONS • We introduce xVal, a novel approach for encoding numerical values in Large Language models. Compared to existing number encoding schemes, xVal is both token-efficient (every number is encoded as a single token) and has a minimal vocabulary footprint (a single number token). • We introduce a modified number inference scheme that, when used in conjunction with xVal, renders transformer models continuous as a function of the numerical values appearing in the text. • We evaluate xVal and a number of existing number encoding schemes on several synthetic and real world datasets. We demonstrate that xVal consistently provides better interpolation properties and is more compute-efficient than prior work. 2 METHODS In this section, we describe the details of the xVal number encoding as well as the number inference paradigm of our model. 2.1 xVal: A Continuous Number Encoding Instead of using different tokens for different digits or composite numbers, xVal embeds numerical values directly along a specific learnable direction of the embedding space. A diagram of this procedure can be seen in Fig. 1. 
Specifically, given a string input $x$ comprising both numbers and text, we first parse $x$ to extract all the numerical values and collect them in a separate list $x_{\text{num}}$. We then construct a new string $x_{\text{text}}$ by replacing all numbers in $x$ with a designated token $[\text{NUM}]$ that acts as a placeholder for numerical values. We tokenize and embed $x_{\text{text}}$, arriving at $h_{\text{text}}$. We then multiply the embedding of each appearance of the $[\text{NUM}]$ token with its associated numerical value in $x_{\text{num}}$. This process can be done efficiently by defining a new list $h_{\text{num}}$ by scattering $x_{\text{num}}$ to have the same length as the tokenized $x_{\text{text}}$ and inserting a 1 for any token other than $[\text{NUM}]$. The final embedding of the sample is $h_{\text{emb}} = h_{\text{num}} \times h_{\text{text}}$, which is then fed to the transformer trunk. This encoding process can be performed both for masked language modeling (MLM) and auto-regressive (AR) generation. During training, in cases where MLM is used, we simultaneously mask both $h_{\text{text}}$ and $h_{\text{num}}$, i.e., if the token being masked is a $[\text{NUM}]$ token, we replace the corresponding number in $h_{\text{num}}$ with 1. Continuous embeddings have been previously proposed for use in attention mechanism in the context of speech recognition Chorowski et al. (2014). Implicit normalization via layer-norm. In our implementation, the multiplicative embedding of xVal is followed by the addition of a positional encoding vector and then a layer-norm in the first transformer block. The effect of the layer-norm is to normalize the embedding of each token on a per-sample basis. In our experiments, we use additive positional encodings and therefore the result of the layer-norm is to normalize the sum of the vector associated with the $[\text{NUM}]$ token and the positional encoding vector. When the positional embeddings are not collinear to the embedding of the $[\text{NUM}]$ token, layer-norm scales the vector associated with the $[\text{NUM}]$ token such that its magnitude is effectively passed through a non-linear rescaling function. Indeed, denoting $u \in \mathbb{R}^d$ as the positional embedding, and $x \in \mathbb{R}$ as the scalar to be encoded, and assuming for simplicity $u \cdot p = 0$ with $\|u\| = \|p\| = 1$, we have $$u \cdot \frac{xu + p}{\|xu + p\|} = \frac{x}{\sqrt{1 + x^2}},$$ such that the value $x$ is still encoded in the same direction $u$. Figure 2 shows that such a property approximately holds empirically up to a constant after training, and we found these curves to be near-identical for any positional embedding. This normalization property implies that the dynamic range of xVal is more limited than those of other text-based encoding schemes. In the experiments of this paper, we normalize numbers in the text corpus such that they fall within the range $[-5, 5]$ as a preprocessing step before training. 2.2 Numerical Value Inference xVal defines an embedding that is continuous in the numerical values of the input. However, if we use a multi-class classification task as our output and training algorithm, the model as a whole will not be end-to-end continuous when considering the map from the input numbers to the output numbers. For this reason, we treat numbers separately at the output layer. This process is illustrated in the right-hand portion of Fig. 1. 
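Before describing the output heads in detail, the encoding step of Section 2.1 can be summarized with a short, illustrative sketch. The regular expression, whitespace tokenizer, and toy vocabulary below are simplifying assumptions, not the preprocessing pipeline used in the paper.

```python
import re
import torch
import torch.nn as nn

NUM_RE = re.compile(r"[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?")

def parse_numbers(text, num_token="[NUM]"):
    """Collect the numerical values and replace each number with [NUM]."""
    values = [float(m) for m in NUM_RE.findall(text)]
    return NUM_RE.sub(num_token, text), values

def xval_embed(token_ids, numbers, embedding, num_id):
    """h_emb = h_num * h_text: scale each [NUM] embedding by its value."""
    h_text = embedding(token_ids)                        # (seq, d)
    h_num = torch.ones(len(token_ids))                   # 1 for non-[NUM] tokens
    h_num[token_ids == num_id] = torch.tensor(numbers)   # scatter the values
    return h_text * h_num.unsqueeze(-1)

# Toy demo with a hypothetical whitespace tokenizer and 4-token vocabulary.
text, values = parse_numbers("T = 0.17")                 # -> "T = [NUM]", [0.17]
vocab = {"[NUM]": 0, "T": 1, "=": 2, "[MASK]": 3}
ids = torch.tensor([vocab[w] for w in text.split()])
h_emb = xval_embed(ids, values, nn.Embedding(len(vocab), 8), vocab["[NUM]"])
```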
As is standard practice in transformer-based language models, we define a token head that outputs a probability distribution over the tokens of the vocabulary. However, since our formalism replaces numbers with the $[\text{NUM}]$ token, this head does not carry any information about the number value. We therefore introduce a new number head with a scalar output, trained via a mean squared error (MSE) loss, to recover the numerical value associated with each instance of the $[\text{NUM}]$ token. For any input, we first look at the output of the token head. If the generated token is the \([\text{NUM}]\) token, we then look at the number head to fill in the value for this token. As shown in Section 3, since the transformer is now end-to-end continuous when inferring numerical values, it performs better when interpolating to previously unseen values. ## 3 EXPERIMENTS In this section, we evaluate the performance of XVAL and highlight its strengths and weaknesses compared to existing numerical encoding algorithms. In particular, we look at three datasets: a synthetic dataset of arithmetic operations, a dataset of global temperature data, and a dataset of planetary orbit simulations. For our transformer models, we use an architecture based on GPT-2 (Radford et al., 2019). Details of our specific architecture are included in Appendix A. We explore the effects of various architectural design choices in Appendix B.4.

Table 1: Comparison of XVAL with four other number encodings. XVAL is more token-efficient and has a minimal vocabulary footprint. Vocabulary size differs from Charton (2022) because we only consider exponents from \(1E^{-8}\) to \(1E^{+8}\).

| Encoding | Format | Example | Tokens per number | Vocabulary size |
|----------|--------|---------|-------------------|-----------------|
| P10 | \{\pm, d, E\pm d\} | \[-, 6, 0, 2, E-1\] | 5 | 28 |
| P1000 | \{\pm, ddd, E\pm d\} | \[-, 602, E-1\] | 3 | 918 |
| B1999 | \{\pm ddd, E\pm d\} | \[-602, E-1\] | 2 | 1816 |
| FP15 | \{\pm ddd E\pm d\} | \[-602 E-1\] | 1 | 28800 |
| XVAL | \{[NUM]\} | [NUM] | 1 | 1 |

Comparison with other number encodings. We compare the performance of XVAL with four other number encodings, following the notation of Charton (2022). In these encodings, numbers are first processed into the format \(\pm ddd E\pm d\). The encodings are then determined by which parts of this format are encoded as single or multiple tokens. These range from encodings with a limited vocabulary size but a high number of tokens per number, leading to longer encoded sequence lengths (e.g., P10), to those with very large vocabulary footprints but only one token per number, leading to shorter encoded sequence lengths (e.g., FP15). XVAL provides a minimal vocabulary footprint and uses just a single token per number, leading to the shortest sequence lengths. A summary of these encodings and an example can be seen in Table 1. Number encodings that do not lead to a fixed number of tokens for all numbers (e.g., the learned Byte Pair Encoding (Gage, 1994) used in GPT-2 (Radford et al., 2019)) can lead to erratic behaviors where the transformer learns spurious correlations involving the length of the encoded numbers in the dataset. An example of this type of behavior is shown in Appendix B.3. ### 3.1 LEARNING ARITHMETIC Simple arithmetic problems have acted as a test bed for probing the mathematical reasoning abilities of language models (Dziri et al., 2023).
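For reference, the token-head / number-head inference scheme of Section 2.2 can be sketched as follows. Layer shapes, the greedy decoding rule, and the loss weighting `lam` are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class XValHeads(nn.Module):
    """Token head plus scalar number head on top of a transformer trunk."""
    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.token_head = nn.Linear(d_model, vocab_size)
        self.number_head = nn.Linear(d_model, 1)

    def forward(self, hidden):                       # hidden: (seq, d_model)
        return self.token_head(hidden), self.number_head(hidden).squeeze(-1)

def decode(logits, numbers, num_id, id_to_token):
    """If the token head predicts [NUM], fill in the number head's value."""
    out = []
    for tok_id, val in zip(logits.argmax(-1).tolist(), numbers.tolist()):
        out.append(f"{val:.4g}" if tok_id == num_id else id_to_token[tok_id])
    return " ".join(out)

def loss(logits, numbers, target_ids, target_vals, num_id, lam=1.0):
    """Cross-entropy on tokens plus MSE on the values at [NUM] positions."""
    ce = nn.functional.cross_entropy(logits, target_ids)
    mask = target_ids == num_id
    mse = ((numbers - target_vals)[mask] ** 2).mean() if mask.any() else 0.0
    return ce + lam * mse                            # lam: assumed trade-off weight
```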
In this section, we investigate the effect of the number encoding scheme on the ability of language models to perform multi-digit multiplications as well as multi-operand mathematical operations. Multi-digit multiplication is a notably challenging task for even the largest LLMs (Borji, 2023). Dziri et al. (2023) show that GPT-4 achieves only 59% zero-shot accuracy on three-digit multiplication problems, while its accuracy for four- and five-digit multiplication drops to 4% and 0%, respectively. Table 2 reports the \(R^2\) scores for multi-digit multiplication problems on several language models designed to handle numerical values. All number encodings generally perform well on this task. However, we find that some encoding schemes (P10 and FP15) show a tendency to yield a small percentage of highly erroneous predictions in some contexts, thereby reducing the \(R^2\) score, while XVAL does not produce such outliers. For a more challenging arithmetic task, we designed a dataset of multi-operand mathematical operations. We used random binary trees combining a fixed number of operands (2, 3, or 4) using the binary operators of addition, subtraction, and multiplication. We then processed the samples according to the processing requirements of each number-encoding scheme. The task is evaluation of the expression on the left-hand side of the equation, implemented as a mask completion, where the right-hand-side number is masked. Table 3 shows the adjusted $R^2$ scores results on this task. XVAL performs remarkably well on this task. Table 2: Adjusted $R^2$ scores calculated between predictions and true values for the different encodings on various arithmetic datasets. (Higher is better; $R^2 = 1$ is the theoretical maximum.) | Encoding | 3-digit Multiplication | 4-digit Multiplication | 5-digit Multiplication | |----------|------------------------|------------------------|------------------------| | P10 | 0.9989 | 0.6071 | 0.9439 | | P1000 | 0.9997 | 0.9783 | 0.9991 | | B1999 | 0.9998 | 0.9984 | 0.9997 | | FP15 | 0.7119 | 0.9959 | 0.9980 | | XVAL | 0.9986 | 0.9975 | 0.9958 | Table 3: Arithmetic evaluation task of random binary trees combining different numbers of operands with addition, subtraction, and multiplication. $R^2$ measured between true expression value and transformer prediction. | Encoding | 2 operands | 3 operands | 4 operands | |----------|------------|------------|------------| | P10 | 0.998 | 0.996 | 0.992 | | P1000 | 0.991 | 0.990 | 0.991 | | FP15 | 0.993 | 0.981 | 0.935 | | XVAL | 0.99998 | 0.99994 | 0.99998 | Arithmetic experiments alone are not sufficient for fully evaluating the mathematical abilities of language models. The samples in these datasets are often short sequences and the underlying data manifold is low-dimensional. These problems therefore do not push the boundary of what is computationally possible with LLMs. In the remainder of this section, we consider experiments in more complex settings and much longer sequences. The goal of the next two subsections is not to construct state-of-the-art models in their respective domains, but rather to compare the performance of language models with different number encoding schemes in more complicated real-world scenarios. ### 3.2 Temperature Forecasting As an example of real-world scientific analysis, we look at the task of temperature forecasting. In this experiment, we construct a dataset as a subset of the ERA5 global climate dataset (Hersbach et al., 2020). 
For simplicity, we only focus on the surface temperature data (T2m field in ERA5). We split the dataset into individual samples, where each sample includes 2–4 days of surface temperature data (normalized to have unit variance) as well as the latitude and longitude from 60–90 randomly selected reporting stations. We also include the time of the first included timestep. We encode the coordinates by using the sine of the latitude and the sine and cosine of the longitude such that we preserve the periodicity. Similarly, we encode the time of year and time of day using the sine and cosine of the position along the 24 hour and 365 day cycles. We include all this information in a JSON format as follows: ```json {'description':{'coords':[[1,-.32,.95] ... [.96,.61,.79]], 'start':[0,1,-.026,-1]}, 'data':[-2.6,-2.6 ... -3.2,-3.1,-3]} ``` For demonstration purposes, we show a few digits per number, but for both scientific datasets, all numbers are floating point numbers. For the text-based encodings, this text string is then processed according to the procedure described above. The `coords`, `start`, and `data` correspond to the reporting station coordinates, the time of the first sample, and the normalized temperature data, each reported separately per station and then concatenated in the data list. In this way, the model needs to parse both the textual aspects of the sample (e.g., where the commas appear to separate different parts of the data) as well as the numerical values. Furthermore, as is often the case with JSON-formatted data, the data does not have a causal format. We therefore train the language models using an MLM approach instead of the more common AR approach. We evaluate the performance of the different numerical encodings on the task of predicting the next temperature timestep for all reporting stations simultaneously in a held out test set. We do so by masking the tokens (and numbers, if applicable) of all the data associated with the final timestep. Because the temperature data is provided separately per station, the masks are scattered throughout the input data and are not all simply at the end of the sample. Table 4 shows the results of this experiment. `xVal` provides the best performance while taking considerably less compute time. This task exemplifies one of the shortcomings of text-based encoding schemes: they can take advantage of spurious correlations in the data. In this case, P10, P1000 and B1999 have a tendency to predict normalized temperature $\pm 0.1$, which manifest as extended protrusions in Fig. 3. This is due to the over-abundance of this number in the dataset compared to other numbers, as seen in Fig 4. While individually, $100$ and $E^{-3}$ are the most common numbers and exponents in the dataset, when combined, $100E^{-2}$ is much more frequent than $100E^{-3}$. This explains why FP15, which encodes the digits and exponents as one token, does not get confused in this case. It also implies that the model has failed to learn the correct joint distribution of the numbers. In these cases, because of the tokenization scheme, the length of the tokenized samples are very long, averaging around 8000 and 5000 tokens respectively for P1000 and P10 (compared to 1800 tokens for FP15 and `xVal`). The poor performance in these models might therefore be due to the challenges of modelling long-range interactions (Qin et al., 2023). For more details on the performance of the different encodings, as well as comparison with some non-transformer baselines, see Appendix B.1. 
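The mismatch between marginal and joint digit statistics described above can be probed directly. The sketch below uses a synthetic stand-in for the normalized temperature values and an assumed ±dddE±d convention; a model that learns only the marginals may then combine the most frequent mantissa with the most frequent exponent, producing a number such as 100E−3 that is rare as a whole.

```python
import math
from collections import Counter
import numpy as np

def mantissa_exponent(value):
    """Split a number into a signed three-digit mantissa and an exponent,
    roughly mimicking the +/-dddE+/-d format (exact convention assumed)."""
    if value == 0.0:
        return 0, 0
    exp = math.floor(math.log10(abs(value))) - 2
    return int(round(value / 10 ** exp)), exp

# Synthetic stand-in for the normalized temperatures in the training corpus.
values = np.random.default_rng(0).normal(0.0, 1.0, 100_000)

mantissas, exponents, joint = Counter(), Counter(), Counter()
for v in values:
    m, e = mantissa_exponent(float(v))
    mantissas[m] += 1
    exponents[e] += 1
    joint[(m, e)] += 1

# Compare the most frequent mantissa/exponent individually with the most
# frequent (mantissa, exponent) pair actually present in the data.
print(mantissas.most_common(3), exponents.most_common(3), joint.most_common(3))
```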
In Appendix B.3 we look at the performance of a BPE tokenizer on this task and demonstrate how LLMs can exploit the tokenized length of the number. In Appendix B.1.3 we fine-tune these models on a simple binary classification task and compare their performance.

Table 4: Loss and runtime of the different encodings on the temperature forecasting task, with training budgets matched by number of samples, number of tokens, or total runtime.

| Method | Loss (Equal Samples) | Runtime (Equal Samples) | Loss (Equal Tokens) | Runtime (Equal Tokens) | Loss (Equal Runtime) | Runtime (Equal Runtime) |
|--------|----------------------|-------------------------|---------------------|------------------------|----------------------|-------------------------|
| P10 | 73 | 2d 22h | 73 | 2d 22h | 73 | 2d 22h |
| P1000 | 20 | 2d 2h | 23 | 3d 10h | 21 | 2d 22h |
| B1999 | 20 | 20h | 19 | 2d 23h | 19 | 2d 22h |
| FP15 | 2.14 | 19h | 1.76 | 3d 12h | 1.85 | 2d 22h |
| xVal | 1.75 | 9h | 1.62 | 1d 15h | 1.51 | 2d 22h |

Figure 3: Performance of the encoding schemes predicting the temperature of the next timestep. Figure 4: A failure mode of text-based encoding schemes (left). Because of the distribution of the numbers in the training set (center and right), numbers that are close to ±1 (denoted by the black arrows) get misclassified as $100E^{-3}$, i.e. 0.1, the combination of the most common digit and the most common exponent in the dataset. ### 3.3 Predicting Planetary Orbits We then compare the performance of the various number encoding schemes on a simulated dataset of planetary orbits. We construct a dataset consisting of planetary motion simulations generated by the REBOUND N-body codebase (Rein & Liu, 2012) and integrated using IAS15 (Rein & Spiegel, 2015). The dataset consists of 1.25 million samples, split into 80%, 10%, and 10% for training, validation, and test. Each sample consists of simulation parameters (mass and orbit properties of each planet and the simulation timestep size) as well as a sequence of $(x,y)$ positions for each planet, organized in a JSON format. The details of the simulation are provided in Appendix B.2. A typical sample in this dataset is given by:
```json
{ 'description': { 'planet0': {'m': 2.38, 'a': 2.96, 'e': 1.73},
                   'planet1': {'m': 1.35, 'a': 2.96, 'e': 1.73},
                   'stepsize': 0.2,
                   'data': [[[2.60, -0.75], [0.81, 0.42]], [[2.63, -0.63], [0.70, 0.60]]] } }
```
We pretrain the models using MLM and evaluate them on the task of inferring the simulation parameters, specifically the simulation timestep $\Delta t$ and the semi-major axis, eccentricity, and mass of the first planet $(a_1, e_1, m_1)$, by masking the appropriate locations. The quantities $\Delta t$ and $a_1$ in the training corpus take values that are either discrete or are sampled from intervals with gaps. This property makes these quantities a good testing ground for interpolation generalization. Table 5: Performance of the different encodings on the planetary motion inference problem. Here, OoD implies evaluation on samples where the quantity was not seen in the training corpus. The percentages in brackets denote the fraction of the predictions that could not be parsed as numbers. When not specified, this fraction was less than 0.01%. (†) The poor performance here is because of a number of outliers that are being misclassified.
| Method | $a_1$ | $a_1$ (OoD) | $e_1$ | $\Delta t$ | $\Delta t$ (OoD) | $m_1$ | |--------|-------|-------------|------|-----------|-----------------|------| | P10 | $7.6 \times 10^{-4}$ | 0.0076 (1%) | 0.20 | 0.0 | 0.0036 | 1.5 | | P1000 | $4.5 \times 10^{-6}$ | 0.048 | 0.0067 | 0.0 | 0.011 | 0.74 | | B1999 | $3.6 \times 10^{-6}$ | 0.11 | 0.0057 | 0.0 | 0.022 | 0.44 | | FP15 | $4.0 \times 10^{-6}$ | 0.050 | $3.6 \times 10^{-4}$ | 0.0065† | 0.0075 (0.2%) | 0.37 | | xVAL | $6.4 \times 10^{-5}$ | **0.0010** | 0.0020 | $6.6 \times 10^{-5}$ | **0.0021** | 1.4 | The results of this test are presented in Table 5. In the numerical encoding schemes other than xVAL, we see an overall inverse relationship between performance in- and out-of-distribution. For example, P10—the encoding with the fewest vocabulary elements—provides the worst in-distribution performance but is best on out of distribution tasks. This is an example of the bias/variance trade-off applied to the number of vocabulary elements. In comparison, we see that xVAL provides the best out-of-distribution performance while staying competitive in-distribution (with one exception). The out-of-distribution performance of these encoding methods can be seen in Fig. 5. Here we see that the text-based encodings, with the exception of P10, simply do not predict any number that they did not explicitly see for this parameter in the training corpus. As expected from a function that is continuous by construction, \(x_{\text{Val}}\) continuously interpolates between the values seen in the training set and offers much better performance. Figure 5 shows that the predictions coming from the text-based encodings can be discontinuous when evaluated out-of-distribution. This discontinuity has two potential sources: the discontinuous nature of the number embeddings and the argmax that is taken over the logits during inference. Since the encodings of the number tokens in text-based encodings have been shown to form continuous-looking structures (see Sec. B.5 and Power et al. (2022); d’Ascoli et al. (2022)), it is possible that the discontinuity is only a side effect of the argmax and that the logits themselves vary more smoothly. Figure 6 shows an example of the logits of the P1000 encoding when predicting the step-size out-of-distribution. Here, the color lines denote the highest-value logits, with the other logits carrying negligible weight. The dashed gray lines denote the values of the step-size seen in the training set. We see that these lines are smooth in neither small or larger scales. We expect that this is a combination of the text-based number encodings’ discrete embedding schemes together with the cross-entropy training paradigm that does not incorporate number distances into the loss. ### 3.4 Results Summary It is evident that embedding the magnitude of numbers directly, as in \(x_{\text{Val}}\), leads to a different inductive bias than treating numbers as tokenized text. This can be clearly seen in the varying performance of these language models in different tasks. When predicting the next timestep in the temperature dataset, \(x_{\text{Val}}\) provides by far the best results. On the other hand, in the mass prediction on the planetary task, it fails to learn the correct relationship, along with vocabulary-sparse P10. Where \(x_{\text{Val}}\) excels is in out-of-distribution performance, while the text-based encoding schemes fail to interpolate properly. 
The best interpolation for the text-based encodings is given by the vocabulary-sparse P10, which performs poorly on the in-distribution tasks. However, it often performs poorly when evaluated on in-distribution tasks. The extra encoding length of P10 also makes it prohibitively expensive to deploy as can be seen in Table 4. On the other hand, FP15 provides the best in-distribution performance but it has poor interpolation properties and expensive embedding cost. Overall, \(x_{\text{Val}}\) provides the best mix of in-distribution and out-of-distribution performance. Moreover, it is the most computationally efficient of the encoding schemes we considered. **Failure modes.** There are a number of ways that number inference via a large language model can fail. The language model can predict a non-numeric token in the place of the number, leading to an invalid prediction. These are denoted in the percentages in brackets in Table 5, shown only when the percentage exceeded 0.01%. This failure mode is uncommon and becomes less frequent the more the model is trained. Another failure mode is when the model exploits spurious correlations. For example, the model can learn the distribution of the digits, as discussed in the example of temperature dataset, or the length of the encoding (see Appendix B.3). A model can also fail to learn the correct distribution. In the planetary orbits example, learning the mass of the planet is the most challenging task – all encodings struggle with this. In this task, xVAL performs uncharacteristically poorly. We suspect that this is due to the high uncertainty in estimating the mass and that a multi-modal distribution such as the categorical distribution learned by traditional LLMs would perform better. This can be seen in Fig. 7, where the predictions of P10 and xVAL are shown. While both of these models perform poorly when considering the MSE of the prediction, the multi-modal prediction of P10 would be a better starting point for capturing an uncertain distribution. We therefore suspect that generalizing the number-head such that instead of predicting a scalar for each number, it fits a mixture of Gaussians, would improve this performance. We leave explorations in this direction for future investigation. ### 4 DISCUSSION In this work, we introduced xVAL, a continuous number encoding that makes transformer-based models end-to-end continuous when considered as a function mapping the numerical values of the input to those of the output. We demonstrated that even though xVAL is more token-efficient and has a minimal vocabulary footprint, it excels in numerical tasks and leads to superior performance, especially when evaluated on out-of-distribution samples. Because of the fundamentally different treatment of numbers across these cases, xVAL and text-based encodings lead to different inductive biases, making the choice of the best encoding method on a given dataset highly dependent on the problem under consideration. **Future directions.** As we have seen, using the xVAL encoding scheme renders the LLM not just continuous, but also differentiable as a function of the numbers it predicts. This enables the LLM loss to incorporate not just an MSE loss, but other statistical learning schemes. For example, we can add a Gaussian Mixture Model or any other differentiable loss and train the LLM to optimize this objective. This holds the promise to improve the experiments in which xVAL underperformed in this paper. 
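As a rough, hypothetical illustration of this direction (it is not part of the evaluated model), a mixture-density number head trained by negative log-likelihood could look like the following; the number of components and all names are assumptions.

```python
import torch
import torch.nn as nn

class MixtureNumberHead(nn.Module):
    """Number head that outputs a mixture of Gaussians instead of a scalar."""
    def __init__(self, d_model, n_components=3):
        super().__init__()
        # One logit, mean, and log-scale per mixture component.
        self.proj = nn.Linear(d_model, 3 * n_components)

    def forward(self, hidden):                       # hidden: (batch, d_model)
        logit, mu, log_sigma = self.proj(hidden).chunk(3, dim=-1)
        return logit, mu, log_sigma

    def nll(self, hidden, target):                   # target: (batch,)
        logit, mu, log_sigma = self(hidden)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_p = comp.log_prob(target.unsqueeze(-1)) + torch.log_softmax(logit, -1)
        return -torch.logsumexp(log_p, dim=-1).mean()
```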
A shortcoming of xVAL is that, because it embeds number values directly in the embedding space, its dynamic range is limited compared to text-based encodings. Very large numbers saturate the normalization, as discussed in Sec. 2, and very small numbers are negligible from the model’s perspective. There are methods that allow high dynamic ranges that maintain continuity (or smoothness). One such example is to use Fourier features on the logarithm of the number. This can be considered as a continuous analog of floating point precision encoding and would drastically improve the dynamic range of the xVAL encoding. xVAL, combined with our proposed number-inference paradigm, makes LLMs generally more suitable for applications in scientific domains. LLMs have become increasingly integrated in many scientific workflows today, enabling researchers to parse scientific language in sophisticated ways. However, their usefulness for analyzing data-heavy corpuses is currently limited. Crafting LLMs that have a better understanding of numerics has the potential to greatly increase their usefulness in scientific analysis and discovery. REFERENCES Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra, Vinay Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, and Behnam Neyshabur. Exploring Length Generalization in Large Language Models, 2022. Ali Borji. A Categorical Archive of ChatGPT Failures, 2023. François Charton. Linear algebra with transformers, 2022. Kunlong Chen, Weidi Xu, Xingyi Cheng, Zou Xiaochuan, Yuyu Zhang, Le Song, Taifeng Wang, Yuan Qi, and Wei Chu. Question directed graph attention network for numerical reasoning over text, 2020a. Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative Pretraining From Pixels. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1691–1703. PMLR, 13–18 Jul 2020b. URL https://proceedings.mlr.press/v119/chen20s.html. Charles Q. Choi. 7 revealing ways ais fail: Neural networks can be disastrously brittle, forgetful, and surprisingly bad at math. IEEE Spectrum, 58(10):42–47, 2021. doi: 10.1109/MSPEC.2021.9563958. Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. End-to-end continuous speech recognition using attention-based recurrent nn: First results, 2014. Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre Défossez. Simple and Controllable Music Generation, 2023. Stéphane d’Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, and François Charton. Deep Symbolic Regression for Recurrent Sequences, 2022. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2020. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. Faith and fate: Limits of transformers on compositionality, 2023. Simon Frieder, Luca Pinchetti, Alexis Chevalier, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, and Julius Berner. Mathematical capabilities of chatgpt, 2023. Philip Gage. A new algorithm for data compression. 
C Users J., 12(2):23–38, feb 1994. ISSN 0898-9788. Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. What can transformers learn in-context? a case study of simple function classes. Advances in Neural Information Processing Systems, 2022. Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamile Lukosiu, Karina Nguyen, Nicholas Joseph, Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. Studying Large Language Model Generalization with Influence Functions, 2023. Hans Hersbach, Bill Bell, Paul Berrisford, Shoji Hirahara, Andras Horányi, Joaquín Muñoz-Sabater, Julien Nicolas, Carole Peubey, Raluca Radu, Dinand Schepers, Adrian Simmons, Cornel Soci, Saleh Abdalla, Xavier Abellan, Gianpaolo Balsamo, Peter Bechtold, Gionata Biavati, Jean Bidlot, Massimo Bonavita, Giovanna De Chiara, Per Dahlgren, Dick Dee, Michail Diamantakis, Rossana Dragani, Johannes Flemming, Richard Forbes, Manuel Fuentes, Alan Geer, Leo Haimberger, Sean Healy, Robin J. Hogan, Elías Hólm, Marta Janisková, Sarah Keeley,
cZo6pDtDZr
As best I can tell, the lower bound in Theorem 3 is not really comparable to the upper bound in Theorem 2 since one imposes $\beta = 0$ in the former while the upper bound requires $\beta>0$. For instance, when $\beta>0$, it does not seem to be necessary to have any dependence on $\alpha$, since it seems to me that simple approach is to have each agent to uniformly randomize their sample with probability $(1-\beta)$ and then debiasing the resulting collision probability. This would require $\mathsf{poly}(1/\beta)$ samples, so it may also be worth explaining which parameter dependencies are better in practice.
NEAR-OPTIMAL ALGORITHMS FOR PRIVATE ESTIMATION AND SEQUENTIAL TESTING OF COLLISION PROBABILITY Anonymous authors Paper under double-blind review ABSTRACT We present new algorithms for estimating and testing collision probability, a fundamental measure of the spread of a discrete distribution that is widely used in many scientific fields. We describe an algorithm that satisfies $(\alpha, \beta)$-local differential privacy and estimates collision probability with error at most $\varepsilon$ using $\tilde{O}\left(\frac{\log(1/\beta)}{\alpha^2\varepsilon^2}\right)$ samples for $\alpha \leq 1$, which improves over previous work by a factor of $\frac{1}{\alpha^2}$. We also present the first sequential testing algorithm for collision probability, which can distinguish between collision probability values that are separated by $\varepsilon$ using $\tilde{O}\left(\frac{1}{\varepsilon^2}\right)$ samples, even when $\varepsilon$ is unknown. Our algorithms have nearly the optimal sample complexity and in experiments we show that they require significantly fewer samples than previous methods. 1 INTRODUCTION A key property of a discrete distribution is how widely its probability mass is dispersed over its support. One of the most common measures of this dispersal is collision probability. Let $p = (p_1, \ldots, p_k)$ be a discrete distribution. The collision probability of $p$ is defined $$C(p) = \sum_{i=1}^{k} p_i^2.$$ Collision probability takes its name from the following observation. If $X$ and $X'$ are independent random variables with distribution $p$ then $C(p) = \Pr[X = X']$, the probability that the values of $X$ and $X'$ coincide. If a distribution is highly concentrated then its collision probability will be close to 1, while the collision probability of the uniform distribution is $1/k$. Collision probability has played an important role in many scientific fields, although each time it is rediscovered it is typically given a different name. In ecology, collision probability is called the Simpson index and serves as a metric for species diversity (Simpson [1949]; Lemster [2021]). In economics, collision probability is known as the Herfindahl–Hirschman index, which quantifies market competition among firms (Herfindahl [1997]), and also the Gini diversity index, a measure of income and wealth inequality (Gini [1912]). Collision probability is also known as the second frequency moment, and is used in database optimization engines to estimate self join size (Cormode & Garofalakis [2016]). In statistical mechanics, collision probability is equivalent to Tsallis entropy of second order, which is closely related to Boltzmann–Gibbs entropy (Tsallis [1988]). The negative logarithm of collision probability is Rényi entropy of second order, which has many applications, including assessing the quality of random number generators (Skorski [2017]) and determining the number of reads needed to reconstruct a DNA sequence (Motahari et al., 2013). Collision probability has also been used by political scientists to determine the effective size of political parties (Laakso & Taagepera [1979]). Collision probability is not equivalent to Shannon entropy, the central concept in information theory and another common measure of the spread of a distribution. However, collision probability has a much more intuitive interpretation, and is also easier to estimate. 
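As a small illustration of the definition and of the pairwise-collision estimator it suggests (without any privacy protection), consider the following sketch:

```python
import numpy as np

def collision_probability(p):
    """C(p) = sum_i p_i^2 for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(p ** 2))

def collision_estimate(samples):
    """Fraction of colliding pairs among n samples; unbiased for C(p)."""
    samples = np.asarray(samples)
    n = len(samples)
    _, counts = np.unique(samples, return_counts=True)
    colliding_pairs = np.sum(counts * (counts - 1)) / 2
    return float(colliding_pairs / (n * (n - 1) / 2))

# The uniform distribution over k = 4 symbols has C(p) = 1/k = 0.25.
rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=20_000)
print(collision_probability([0.25] * 4), collision_estimate(x))
```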
Specifically, estimating the Shannon entropy of a distribution with support size $k$ requires $\Omega\left(\frac{k}{\log k}\right)$ samples (Valiant & Valiant... while the sample complexity of estimating collision probability is independent of \( k \). Additionally, the negative logarithm of the collision probability of a distribution is a lower bound on its Shannon entropy, and this lower bound becomes an equality for the uniform distribution. 1.1 Our contributions We present novel algorithms for estimating and testing the collision probability of a distribution. **Private estimation:** We give an algorithm for estimating collision probability that satisfies \((\alpha, \beta)\)-local differential privacy.\(^1\) As in previous work, our algorithm is non-interactive, which means that there is only a single round of communication between users and a central server, and communication-efficient, in the sense that each user sends \( O(1) \) bits to the server (in fact, just 1 bit). If \( \alpha \leq 1 \) then our algorithm needs \( \tilde{O}\left(\frac{\log(1/\beta)}{\alpha^2 \varepsilon^2}\right) \) samples to output an estimate that has \( \varepsilon \) additive error, which nearly matches the optimal sample complexity and improves on previous work by an \( O\left(\frac{1}{\varepsilon^2}\right) \) factor (Bravo-Hermsdorff et al., 2022). **Sequential testing:** We give an algorithm for determining whether collision probability is equal to a given value \( c_0 \) or differs from \( c_0 \) by at least \( \varepsilon > 0 \), assuming that one of those conditions holds. Our algorithm needs \( O\left(\frac{1}{\varepsilon^2}\right) \) samples to make a correct determination, which nearly matches the optimal sample complexity. Importantly, \( \varepsilon \) is not known to the algorithm. In other words, the algorithm automatically adapts to easy cases by drawing fewer samples. While sequential testing algorithms have been developed for many distributional properties, such as total variation distance (Daskalakis & Kawase, 2017), as far as we know there is no existing sequential testing algorithm for collision probability. Instead, previous work has focused on the batch setting, in which the number of samples is specified in advance (Canonne, 2022a). All of our theoretical guarantees hold with high probability, and we present numerical simulations showing that our algorithms use significantly fewer samples than existing methods. For simplicity, in the main body of this paper we state all theorems using big-\( O \) notation and argue for their correctness with proof sketches only, reserving more detailed theorem statements and proofs for the Appendix. 2 Related work The collision probability of a distribution is equal to its second frequency moment, and frequency moment estimation has been widely studied in the literature on data streams, beginning with the seminal work of Alon et al. (1999). Locally differentially private estimation of frequency moments was first studied by Butucea & Issartel (2021), who gave a non-interactive mechanism for estimating any positive frequency moment. The sample complexity of their mechanism depends on the support size of the distribution, and they asked whether this dependence could be removed. Their conjecture was affirmatively resolved for collision probability by Bravo-Hermsdorff et al. (2022), but removing the dependence on support size led to a much worse dependence on the privacy parameter. 
It has remained an open question until now whether this trade-off is necessary. Property and closeness testing has a rich literature (Acharya et al., 2019a; 2013; Diakonikolas et al., 2015; Goldreich & Ron, 2000; Canonne, 2022b), but the sequential setting is studied much less intensively. Existing algorithms for sequential testing almost always define closeness in terms of total variation distance, which leads to sample complexities on the order \( O(\sqrt{k}/\epsilon^2) \), where \( k \) is the support size of the distribution and the distribution is separated from the null hypothesis by \( \epsilon \) in terms of total variation distance (Daskalakis & Kawase, 2017; Oukhn et al., 2021). By contrast, all of our results are entirely independent of \( k \), making our approach more suitable when the support size is very large. There are several batch testing approaches which are based on collision statistics. Most notably, the optimal uniform testing algorithm of Paninski (2003) distinguishes the uniform distribution from a distribution that is \( \epsilon \) far from uniform in terms of total variation distance with a sample complexity \( \Theta(\sqrt{k}/\epsilon^2) \). However, in the batch setting, the parameter \( \epsilon \) is given to the testing algorithm as input. \(^1\)Instead of denoting the privacy parameters by \( \varepsilon \) and \( \delta \), as is common in the privacy literature, we will use them to denote error and probability, as is common in the statistics literature. 3 PRELIMINARIES We study two problems related to learning the collision probability \( C(p) = \sum_i p_i^2 \) of an unknown distribution \( p = (p_1, \ldots, p_k) \). In the private estimation problem, a set of \( n \) users each possess a single sample drawn independently from distribution \( p \). We are given an error bound \( \varepsilon \geq 0 \) and confidence level \( \delta \in [0, 1] \). A central server must compute an estimate \( \hat{C} \) that satisfies \( |\hat{C} - C(p)| \leq \varepsilon \) with probability at least \( 1 - \delta \) while preserving the privacy of the users’ samples. A mechanism is a distributed protocol between the server and the users that privately computes this estimate. The execution of a mechanism can depend on the samples, and the output of a mechanism is the entire communication transcript between the server and the users. Mechanism \( M \) satisfies \((\alpha, \beta)\)-local differential privacy if for each user \( i \) and all possible samples \( x_1, \ldots, x_n, x'_i \), we have \[ \Pr[M(x_1, \ldots, x_n) \in O] \leq e^\alpha \Pr[M(x_1, \ldots, x_{i-1}, x'_i, x_{i+1}, \ldots, x_n) \in O] + \beta, \] where \( O \) is any set of possible transcripts between the server and the users. In other words, if the privacy parameters \( \alpha \) and \( \beta \) are small then changing the sample of a single user does not significantly alter the distribution of the mechanism’s output. Local differential privacy is the strongest version of differential privacy, and is suitable for a setting where the server is untrusted (Dwork et al., 2014). The sample complexity of the mechanism is the number of users \( n \). In the sequential testing problem, we are given a confidence level \( \delta \in [0, 1] \) and the promise that exactly one of the following two hypotheses hold: The null hypothesis is that \( C(p) = c_0 \), while the alternative hypothesis is that \( |C(p) - c_0| \geq \varepsilon > 0 \). 
An algorithm must decide which hypothesis is correct based on samples from \( p \). Instead of fixing the number of samples in advance, the algorithm draws independent samples from \( p \) one at a time, and after observing each sample decides to either reject the null hypothesis or to continue sampling. If the null hypothesis is false then the algorithm must reject it, and if the null hypothesis is true then the algorithm must not stop sampling, and each of these events must occur with probability at least \( 1 - \delta \). Importantly, while \( c_0 \) is known to the algorithm, \( \varepsilon \) is not known, and thus the algorithm must adapt to the difficulty of the problem. The sample complexity of the algorithm is the number of observed samples \( N \) if the null hypothesis is false, a random variable. 4 PRIVATE ESTIMATION In this section we describe a distributed protocol for privately estimating the collision probability of a distribution. In our protocol, a set of users each draw a sample from the distribution, and then share limited information about their samples with a central server, who computes an estimate of the collision probability while preserving the privacy of each user’s sample. As discussed in Section 1, the collision probability of a distribution is the probability that two independent samples from the distribution will coincide. Therefore the most straightforward strategy the server could employ would be to collect all the users’ samples and count the number of pairs of samples containing a collision. However, this approach would not be privacy-preserving. Instead, in Mechanism 1 below, each user applies a one-bit hash function to their private sample and shares only their hash value with the server. The server counts the number of collisions among all pairs of hash values and then applies a bias correction to form an estimate of the collision probability. To increase the robustness of this estimate, the server first partitions the hash values into groups and uses the median estimate from among the groups. The hashing procedure in Mechanism 1 is carefully designed to both preserve user privacy and also yield an accurate estimate. On the one hand, if each user privately chose an independent hash function, then their hash values would be entirely uncorrelated and contain no useful information about the underlying distribution. On the other hand, if every user applied the same hash function to their sample, then the server could invert this function and potentially learn some user’s sample. Instead, in Mechanism 1, the server sends the same hash function to all users, but each user prepends their sample with a independently chosen salt, or random integer, before applying the hash function. Salts are commonly used in cryptographic protocols to enhance security, and they play a similar role in our mechanism. The number of possible salts serves as a trade-off parameter between the privacy and accuracy of our mechanism, with more salts implying a stronger privacy guarantee. Mechanism 1 Private estimation for collision probability **Given:** Number of users \( n \), confidence level \( \delta \in [0, 1] \), privacy parameters \( \alpha \geq 0, \beta \in [0, 1] \). 1: Server transmits random hash function \( h : \{0, 1\}^* \mapsto \{0, 1\} \) to each user. 2: Each user \( i \) chooses salt \( s_i \) uniformly at random from \( \{1, \ldots, r\} \), where \( r = 6 \left( \frac{e^\alpha + 1}{e^\alpha - 1} \right)^2 \log \frac{4}{\beta} \). 
3: Each user \( i \) draws sample \( x_i \) from distribution \( p \). 4: Each user \( i \) sends hash value \( v_i = h(s_i, x_i) \) to the server, where \( s_i, x_i \) is the binary encoding of \( s_i \) prepended to \( x_i \) and separated by a delimiter. 5: Server partitions users into \( k = 8 \log \frac{1}{\delta} \) groups of size \( m = \frac{n}{k} \) each. 6: Server computes the all-pairs hash value collision frequency \[ \bar{c}_g = \frac{2}{m(m-1)} \sum_{i,j \in I_g, i<j} 1 \{v_i = v_j\} \] for each group \( g \), where \( I_g \) is the set of users in group \( g \). 7: Server lets \[ \hat{c}_g = r(2\bar{c}_g - 1) \] be the bias-corrected estimate for each group \( g \). 8: Server outputs \( \hat{C} \), the median of the \( \hat{c}_g \)'s.
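To see why the bias correction in step 7 takes the form \( r(2\bar{c}_g - 1) \), it helps to look at the collision probability of two salted one-bit hash values. The following quick Monte Carlo check is our own illustrative code, not the authors' implementation: a keyed `blake2b` hash stands in for the random one-bit hash function chosen by the server, and the parameter values are arbitrary.

```python
import hashlib
import numpy as np

def one_bit_hash(key: bytes, message: str) -> int:
    # Stand-in for the shared random hash function h; the random key plays
    # the role of the function description transmitted by the server.
    return hashlib.blake2b(message.encode(), key=key).digest()[0] & 1

def salted_collision_rate(same_sample: bool, r: int, trials: int, rng) -> float:
    """Estimate Pr[v_i = v_j] for a pair of users holding equal or different samples."""
    hits = 0
    for _ in range(trials):
        key = rng.bytes(16)                         # fresh random hash function per trial
        s_i, s_j = rng.integers(1, r + 1, size=2)   # each user's private salt
        x_i, x_j = ("a", "a") if same_sample else ("a", "b")
        hits += one_bit_hash(key, f"{s_i}|{x_i}") == one_bit_hash(key, f"{s_j}|{x_j}")
    return hits / trials

rng = np.random.default_rng(0)
r = 10
print("different samples:", salted_collision_rate(False, r, 100_000, rng))  # ~ 1/2
print("same sample:      ", salted_collision_rate(True, r, 100_000, rng))   # ~ 1/2 + 1/(2r) = 0.55
```

The two printed rates differ by roughly \( \frac{1}{2r} \); scaling the all-pairs hash-collision frequency by \( r(2\bar{c}_g - 1) \) therefore recovers an unbiased estimate of \( C(p) \), which is exactly the calculation used in the proof sketch of Theorem 2 below.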
The theorems in this section provide guarantees about the privacy and accuracy of Mechanism 1. **Theorem 1.** Mechanism 1 satisfies \( (\alpha, \beta) \)-local differential privacy. **Proof sketch.** We show that the communication transcript between the server and the users is not very likely to be different if a single user changes their sample. Note that the communication transcript consists of the random hash function chosen by the server and the users’ hash values. Suppose for now that the hash function is fixed. Each user’s choice of a random salt induces a distribution on their hash value, and this distribution can change if the user changes their sample. If the distribution changes too drastically then the mechanism will not be private. However, in expectation over the choice of the hash function, the distribution is always uniform, and deviations from this expectation will be small with high probability if the number of possible salts is sufficiently large. More concretely, note that the number of possible salts \( r \) in Mechanism 1 increases as the privacy parameters \( \alpha \) and \( \beta \) decrease. Finally, since the hash function is chosen independently of the samples, the hash function reveals no information about the samples by itself. **Theorem 2.** If the number of samples \( n \) satisfies \[ n \geq \Omega \left( \frac{1}{\varepsilon^2} \left( \frac{e^\alpha + 1}{e^\alpha - 1} \right)^2 \log \frac{4}{\beta} \log \frac{1}{\delta} \right) \] then the estimate \( \hat{C} \) output by Mechanism 1 satisfies \( |\hat{C} - C(p)| \leq \varepsilon \) with probability \( 1 - \delta \). Additionally, if \( \alpha \leq 1 \) then it suffices that \[ n \geq \Omega \left( \frac{\log \frac{1}{\beta} \log \frac{1}{\delta}}{\alpha^2 \varepsilon^2} \right). \] **Proof sketch.** The first step of the argument is to relate the likelihood of a hash collision to that of the underlying sample collision. It is not hard to see that if \( x_i \neq x_j \) then \( \Pr[v_i = v_j] = \frac{1}{2} \), while if \( x_i = x_j \) then \( \Pr[v_i = v_j] = \frac{1}{2} + \frac{1}{2r} \), because two users with the same sample and the same salt are guaranteed to produce the same hash value. This discrepancy allows us to use the number of hash collisions as an estimator of the number of sample collisions. In particular, it implies that each group estimate \( \hat{c}_g \) is an unbiased estimate of \( C(p) \). Next we bound the variance of each \( \hat{c}_g \). Clearly \( \text{Var}[\hat{c}_g] = O(r^2)\,\text{Var}[\bar{c}_g] \). Bounding the variance \( \text{Var}[\bar{c}_g] \) is non-trivial, because the \( v_i \)'s are not independent, since they are correlated by the random choice of the hash function. By the law of total variance we have \[ \text{Var}[\bar{c}_g] = E[\text{Var}[\bar{c}_g \mid h]] + \text{Var}[E[\bar{c}_g \mid h]]. \] Since the \( v_i \)'s are independent for a given hash function, the first term can be bounded by applying Hoeffding’s theorem for U-statistics. The second term can be bounded by a fairly direct calculation. Having shown that the \( \hat{c}_g \)'s are unbiased estimates of collision probability, and also having shown that each of their variances is bounded, it remains to show that their median is concentrated about their mean. This concentration follows from the analysis of the median-of-means estimator (Lugosi & Mendelson, 2019). 4.1 LOWER BOUND The next theorem proves that the sample complexity bound in Theorem 2 is tight for small \( \alpha \) up to logarithmic factors. **Theorem 3.** Let \( \hat{C}_{\alpha,n}(p) \) be a collision probability estimate returned by an \( (\alpha, 0) \)-locally differentially private mechanism that draws \( n \) samples from distribution \( p \). If \( \alpha \leq 1 \) and \( n \in o\left(\frac{1}{\alpha^2\varepsilon^2}\right) \) then there exists a distribution \( p \) such that \[ E\left[|\hat{C}_{\alpha,n}(p) - C(p)|\right] \geq \varepsilon. \] **Proof sketch.** We apply a technique due to Duchi et al. (2016) for proving minimax lower bounds for locally differentially private estimation. Their technique is a private version of Le Cam’s two-point method (Le Cam, 1973). It follows from Proposition 1 due to Duchi et al. (2016) that for all distributions \( p_0, p_1 \) there exists a distribution \( p \) such that \[ E\left[|\hat{C}_{\alpha,n}(p) - C(p)|\right] \geq \frac{|C(p_0) - C(p_1)|}{2} \left(1 - \sqrt{2\alpha^2 n D_{KL}(p_0 \| p_1)}\right). \] Thus if there exist \( p_0 \) and \( p_1 \) such that \( D_{KL}(p_0 \| p_1) \leq O\left(\frac{1}{\alpha^2 n}\right) \) and \( |C(p_0) - C(p_1)| \geq \Omega\left(\frac{1}{\alpha \sqrt{n}}\right) \) then the above lower bound is \( \Omega\left(\frac{1}{\alpha \sqrt{n}}\right) \), which suffices to prove the theorem. We give an explicit construction of \( p_0 \) and \( p_1 \) in the Appendix. Briefly, \( p_0 \) places probability mass \( \frac{1}{2} \) on one element and uniformly distributes the remaining mass on the other \( k - 1 \) elements, while \( p_1 \) is nearly the same as \( p_0 \) except for a \( \Theta\left(\frac{1}{\alpha \sqrt{n}}\right) \) perturbation applied to each probability. 4.2 EFFICIENT IMPLEMENTATION In Mechanism 1, the server computes the all-pairs hash collision frequency per group. If each group contains \( m \) samples, a naive implementation would require \( \Omega(m^2) \) time per group. The next theorem shows how this can be reduced to \( O(m) \) time per group by computing the histogram of hash values. **Theorem 4.** For any values \( v_1, \ldots, v_m \) if \( \bar{c} = \frac{2}{m(m-1)} \sum_{i<j} 1\{v_i = v_j\} \) is the all-pairs collision frequency and \( \hat{n}_v = \sum_i 1\{v_i = v\} \) is the multiplicity of value \( v \) then \[ \bar{c} = \frac{1}{m(m-1)} \sum_v \hat{n}_v^2 - \frac{1}{m-1}. \]
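The identity in Theorem 4 is easy to verify numerically. The following short check is our own illustrative code (arbitrary values, not the authors' implementation): it compares the naive \( O(m^2) \) all-pairs collision frequency with the \( O(m) \) histogram formula.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
values = rng.integers(0, 5, size=200)      # arbitrary hash values v_1, ..., v_m
m = len(values)

# Naive O(m^2) all-pairs collision frequency.
naive = 2 * sum(int(values[i] == values[j])
                for i in range(m) for j in range(i + 1, m)) / (m * (m - 1))

# O(m) formula from Theorem 4: (sum_v n_v^2) / (m(m-1)) - 1/(m-1).
counts = Counter(values.tolist())
fast = sum(c * c for c in counts.values()) / (m * (m - 1)) - 1 / (m - 1)

assert abs(naive - fast) < 1e-9
print(naive, fast)
```

The equivalence follows because \( \sum_v \hat{n}_v^2 = 2 \cdot \#\{\text{colliding pairs}\} + m \), so subtracting \( \frac{1}{m-1} \) removes the diagonal contribution.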
4.3 COMPARISON TO PREVIOUS WORK Butucea & Issartel (2021) gave a non-interactive \( (\alpha, 0) \)-locally differentially private mechanism for estimating collision probability with sample complexity \( \tilde{O}\left(\frac{(\log k)^2}{\varepsilon^2 \alpha^2}\right) \) and communication complexity \( O(k) \). Bravo-Hermsdorff et al. (2022) gave a non-interactive mechanism with the same privacy guarantee, sample complexity $\tilde{O}\left(\frac{1}{\alpha^2 \varepsilon^2}\right)$, and communication complexity $O(1)$. Thus the latter mechanism is better suited to distributions with very large support sizes, but is a worse choice when the privacy parameter $\alpha$ is very small. Our mechanism combines the advantages of these mechanisms, at the expense of a slightly weaker privacy guarantee and an additional $\tilde{O}(\log \frac{1}{\delta})$ samples. Notably, the earlier mechanism due to Bravo-Hermsdorff et al. (2022) is also based on counting collisions among salted hash values. But there are key differences between the mechanisms which lead to our improved sample complexity. In their mechanism, the server assigns salts to the users, each user adds noise to their hash value, and the server counts hash collisions among $\frac{n}{2}$ disjoint user pairs. In our mechanism, the salts are chosen privately, no additional noise is added to the hash values, and the server counts hash collisions among all $\binom{n}{2} = O(n^2)$ user pairs. Using all available pairs to count collisions is a more efficient use of data (although it significantly complicates the analysis, as the pairs are not all independent), and choosing the salts privately eliminates the need for additional randomness, which improves the accuracy of the estimate. 5 Sequential Testing In this section we describe an algorithm for sequentially testing whether $C(p) = c_0$ (the null hypothesis) or $|C(p) - c_0| \geq \varepsilon > 0$ (the alternative hypothesis), where $c_0$ is given but $\varepsilon$ is unknown. Algorithm 2 below draws samples from the distribution $p$ one at a time. Whenever the algorithm observes a sample $x_i$, it updates a running estimate of $|C(p) - c_0|$ based on the all-pairs collision frequency observed so far. The algorithm compares this estimate to a threshold that shrinks like $\Theta\left(\sqrt{\frac{\log \log i}{i}}\right)$ and rejects the null hypothesis as soon as the threshold is exceeded. Although our algorithm is simple to describe, its proof of correctness is non-trivial, as it involves showing that a sequence of dependent random variables (the running estimates) becomes concentrated. Our proof uses a novel decoupling technique to construct martingales based on the running estimates. Algorithm 2 Sequential testing of collision probability Given: Null hypothesis value $c_0$, confidence level $\delta \in [0, 1]$. 1: for $i = 1, 2, 3, \ldots$ do 2: Draw sample $x_i$ from distribution $p$. 3: Let $T_i = \sum_{j=1}^{i-1} 1\{x_i = x_j\} - 2(i-1)c_0$. 4: if $\left|\frac{2}{i(i-1)} \sum_{j=1}^{i-1} T_j\right| > 3.2 \sqrt{\frac{\log \log i + 0.72 \log (20.8/\delta)}{i}}$ then 5: Reject the null hypothesis. 6: end if 7: end for The next theorem provides a guarantee about the accuracy of Algorithm 2. Theorem 5. If $C(p) = c_0$ then Algorithm 2 does not reject the null hypothesis with probability $1 - \delta$. If $|C(p) - c_0| \geq \varepsilon$ then Algorithm 2 rejects the null hypothesis after observing $N$ samples, where $$N \in O\left(\frac{1}{\varepsilon^2} \log \log \frac{1}{\varepsilon} \log \frac{1}{\delta}\right)$$ with probability $1 - \delta$. The $\log \log \frac{1}{\varepsilon}$ factor in Theorem 5 results from our application of a confidence interval due to Howard et al. (2021) that shrinks like $\Theta\left(\sqrt{\frac{\log \log i}{i}}\right)$. Note that $\log \log \frac{1}{\varepsilon} < 4$ if $\varepsilon \geq 10^{-10}$, so this factor is negligible for nearly all problem instances of practical interest.
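To illustrate the adaptive behavior promised by Theorem 5, the following is a minimal, simplified stand-in for Algorithm 2 written by us: rather than reproducing the listing's exact bookkeeping via the \( T_j \) sequence, it tracks the running all-pairs collision frequency in \( O(1) \) time per sample and compares \( |\hat{C}_i - c_0| \) to a threshold of the same shape (and with the same constants) as line 4. It is a sketch under those assumptions, not the reference implementation.

```python
import math
import numpy as np

def sequential_collision_test(sample_stream, c0, delta):
    """Simplified stand-in for Algorithm 2: reject the null C(p) = c0 as soon as the
    running all-pairs collision frequency deviates from c0 by more than a
    time-uniform threshold of the form used in line 4 of the listing."""
    counts, collisions = {}, 0
    for i, x in enumerate(sample_stream, start=1):
        collisions += counts.get(x, 0)        # new colliding pairs created by x_i
        counts[x] = counts.get(x, 0) + 1
        if i < 3:
            continue
        c_hat = 2 * collisions / (i * (i - 1))
        threshold = 3.2 * math.sqrt(
            (math.log(math.log(i)) + 0.72 * math.log(20.8 / delta)) / i)
        if abs(c_hat - c0) > threshold:
            return i                          # reject the null after i samples
    return None                               # stream exhausted without rejecting

rng = np.random.default_rng(0)
k = 100
uniform_stream = iter(rng.integers(0, k, size=500_000))            # C(p) = 1/k
skewed_p = np.array([0.5] + [0.5 / (k - 1)] * (k - 1))             # C(p) ~ 0.25
skewed_stream = iter(rng.choice(k, size=500_000, p=skewed_p))

print("uniform (null true): ", sequential_collision_test(uniform_stream, c0=1 / k, delta=0.1))
print("skewed  (null false):", sequential_collision_test(skewed_stream, c0=1 / k, delta=0.1))
```

In this sketch the uniform stream is never rejected, while the skewed stream is rejected after on the order of a thousand samples, because the gap \( |C(p) - c_0| \) is large; as the gap shrinks, the stopping time grows roughly like \( 1/\varepsilon^2 \), matching the adaptivity discussed above.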
\(^2\)Note that Bravo-Hermsdorff et al.'s original NeurIPS paper claimed a $\tilde{O}\left(\frac{1}{\alpha^2 \varepsilon^2}\right)$ sample complexity, but a more recent version on arXiv reports a larger sample complexity and explains that the original version contained mistakes. See References for a link to the arXiv version. Proof sketch of Theorem 5. First note that \( T_1, T_2, \ldots \), which are used in Line 3 of Algorithm 2, form a dependent sequence: \( T_i \) depends on all of \( x_1, \ldots, x_i \), which prevents us from directly computing a concentration bound for it. Therefore we apply a decoupling technique to derive a martingale sequence. Let us define \( \tilde{U}_m := U(X_1, \ldots, X_m) = \sum_{i<j} g(X_i, X_j) \) with \[ g(X_i, X_j) = 1 \{ X_i = X_j \} - E[1 \{ X_i = X_j \} \mid X_i] - E[1 \{ X_i = X_j \} \mid X_j] + E[1 \{ X_i = X_j \}] = 1 \{ X_i = X_j \} - \Pr(X_i = X_j \mid X_i) - \Pr(X_i = X_j \mid X_j) + c_0 . \] This decoupling technique is motivated by Theorem 8.1.1 of Tsypkov (2008), since the kernel function \( g \) is centered and degenerate, i.e. \( E[g(X_i, X_j) \mid X_j] = E[g(X_i, X_j) \mid X_i] = 0 \), which implies that \( \tilde{U}_m \) is a zero-mean martingale for \( m \geq 2 \). The empirical sequence is \( \tilde{u}_m = \sum_{j=1}^m y_j \) with \[ y_j = \sum_{i=1}^{j-1} 1 \{ x_i = x_j \} - \sum_{i=1}^{j-1} p_{x_i} - (j-1)p_{x_j} + (j-1)c_0 , \] which has bounded differences such that \( |\tilde{U}_k - \tilde{U}_{k-1}| = |Y_k| \leq 4m \) and \( y_1 = 0 \). However, we cannot compute this empirical sequence, since the parameters of the distribution are not known. As a remedy, we further decompose \( \tilde{U}_m \) as the sum of two sequences, based on the observation that \[ E[p_{X_i}] = \sum_k p_k^2 = C(p), \] which equals \( c_0 \) under the null hypothesis; this implies that \( \sum_{i=1}^m (p_{X_i} - c_0) \) is again a zero-mean martingale sequence with the same filtration \( F_m \), such that the difference satisfies \( |p_{X_i} - c_0| < 1 \) for all \( i \). This motivates the following decomposition of \( Y_j \): \[ Y_j = \sum_{i=1}^{j-1} 1 \{ X_i = X_j \} - 2(j-1)c_0 + 2(j-1)c_0 - \sum_{i=1}^{j-1} p_{X_i} - (j-1)p_{X_j} . \] Note that \( T_m \), used in Algorithm 2, can be computed, and it is a zero-mean martingale sequence up to an error term \( E_m \) which cannot be computed, since the parameters of the underlying distribution \( p \) are not available. However, \( E_m \) can again be decomposed into sums of zero-mean terms that we can upper bound with high probability. It is important to note that if the null distribution is uniform, i.e. \( c_0 = 1/k \), the error term is equal to zero at every time step, i.e. \( E_m = 0 \) for all \( m \), and therefore \( T_m \) is itself a zero-mean martingale. Finally, we rely on the work of Howard et al. (2021), in which a sequence of confidence intervals is introduced for martingales that holds uniformly over time, even with a random stopping time. We remark that our proof technique bears some superficial resemblance to the approach used in recent work by Oufkir et al. (2021). They make use of the fact that for any random variable \( T \) taking values in \( \mathbb{N} \) and for all \( N \in \mathbb{N}_+ \), it holds that \( E[T] \leq N + \sum_{t>N} P(T \geq t) \). 
Then a carefully selected \( N \), together with Chernoff bounds and infinitely many applications of the union bound, yields an upper bound on the expected sample complexity. By contrast, we construct a test martingale that is specific to collision probability and apply to it an anytime (time-uniform) concentration bound introduced by Waudby-Smith & Ramdas (2020). 5.1 Lower bound The next theorem proves that the sample complexity bound in Theorem 5 is tight up to log-log factors. **Theorem 6.** Let \( N \) be the number of samples observed by a sequential testing algorithm for collision probability. For all \( \varepsilon, \delta \in [0, 1] \) there exists a distribution \( p \) and \( c_0 \in [0, 1] \) such that \( |C(p) - c_0| \geq \varepsilon \) and if the algorithm rejects the null hypothesis with probability \( 1 - \delta \) then \[ E[N] \geq \Omega \left( \frac{\log(1/\delta)}{\varepsilon^2} \right). \] **Proof sketch.** Our proof is based on a reduction to the problem of identity testing and a lower bound for that problem due to Oufkir et al. (2021). In an identity testing problem we are given a distribution \( p_0 \) and sample access to a distribution \( p_1 \), and the goal is to decide whether \( p_0 = p_1 \) or $\|p_0 - p_1\|_1 \geq \varepsilon > 0$. Oufkir et al. (2021) proved that if $\|p_0 - p_1\|_1 \geq \varepsilon$ then the number of samples $N$ needed to make a correct decision must satisfy $E[N] \geq \frac{\log(1/(3\delta))}{D_{KL}(p_0 \| p_1)}$. We complete the proof by showing that there exist distributions $p_0$ and $p_1$ such that $\|p_0 - p_1\|_1 \geq \Omega(\varepsilon)$, $|C(p_0) - C(p_1)| \geq \Omega(\varepsilon)$ and $D_{KL}(p_0 \| p_1) \leq O(\varepsilon^2)$. An explicit construction of $p_0$ and $p_1$ is in the Appendix, and they are the same distributions as in the proof of Theorem 3. 6 EXPERIMENTS We compare our mechanism for private collision probability estimation (Mechanism 1) to the recently proposed mechanism from Bravo-Hermsdorff et al. (2022). As discussed in Section 4.3, we expect Mechanism 1 to outperform their mechanism when the support size of the distribution is large and the privacy requirement is strict. We also compare to an indirect method: privately estimate the distribution itself, and then compute the collision probability of the estimated distribution. In our experiments we use an open-source implementation of a private heavy hitters algorithm due to Cormode et al. (2021) (https://github.com/Samuel-Maddock/pure-LDP). In Figure 1, we use each mechanism to privately estimate the collision probability of two distributions supported on 1000 elements: the uniform distribution ($p_i = 1/k$) and the power law distribution ($p_i \propto 1/i$). Our simulations show that Mechanism 1 has significantly lower error for small values of the privacy parameters $\alpha$ and $\beta$. Figure 1: Sample complexity of private collision probability estimation mechanisms for $\alpha = 0.25$. Both mechanisms use the MD5 hash function and confidence level $\delta = 0.1$. For Mechanism 1 we let $\beta = 10^{-5}$. Error bars are one standard error. We next evaluate our sequential testing algorithm (Algorithm 2). Since we are not aware of any existing algorithm for sequential testing of collision probability, we compare Algorithm 2 to two batch testing algorithms, both of which are described in a survey by Canonne (2022a) and sketched in code after the list: - **Plug-in:** Form empirical distribution $\hat{p}$ from samples $x_1, \ldots, x_n$, and let $\hat{C} = C(\hat{p})$. - **U-statistics:** Let $\hat{C} = \frac{2}{n(n-1)} \sum_{i<j} 1 \{x_i = x_j\}$ be the all-pairs collision frequency.
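The following is our own short sketch of the two batch estimators (arbitrary data and parameter choices). The comparison at the end mirrors the behavior reported in Figure 3: the plug-in estimate carries an upward bias of order \( 1/n \), while the U-statistic is unbiased.

```python
import numpy as np

def plug_in_estimate(samples, k):
    """C(p_hat): collision probability of the empirical distribution."""
    p_hat = np.bincount(samples, minlength=k) / len(samples)
    return float(np.sum(p_hat ** 2))

def u_statistic_estimate(samples):
    """All-pairs collision frequency (unbiased for C(p))."""
    n = len(samples)
    counts = np.bincount(samples)
    pairs = np.sum(counts * (counts - 1)) / 2
    return float(2 * pairs / (n * (n - 1)))

rng = np.random.default_rng(0)
k, n = 1000, 2000
p = 1 / np.arange(1, k + 1); p /= p.sum()          # power law distribution
true_c = float(np.sum(p ** 2))
errors = {"plug-in": [], "u-stat": []}
for _ in range(200):
    s = rng.choice(k, size=n, p=p)
    errors["plug-in"].append(abs(plug_in_estimate(s, k) - true_c))
    errors["u-stat"].append(abs(u_statistic_estimate(s) - true_c))
print({name: float(np.mean(v)) for name, v in errors.items()})
```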
Each batch testing algorithm takes as input both the null hypothesis value $c_0$ and a tolerance parameter $\varepsilon$, and compares $|\hat{C} - c_0|$ to $\varepsilon$ to decide whether to reject the null hypothesis $C(p) = c_0$. The sample complexity of a batch testing algorithm is determined via worst-case theoretical analysis in terms of $\varepsilon$ (see Appendix). On the other hand, sequential testing algorithms automatically adapt their sample complexity to the difference $|C(p) - c_0|$. In Figure 2, we evaluate batch and sequential testing algorithms on both the uniform and power law distributions. We use 20 different support sizes for each distribution, evenly spaced on a log scale between 10 and $10^6$ inclusive. Varying the support size also varies $|C(p) - c_0|$. As expected, when \(|C(p) - c_0|\) is large, our sequential testing algorithm requires many fewer samples than the batch algorithm to reject the null hypothesis, and as \(|C(p) - c_0|\) shrinks the number of samples required sharply increases (see grey areas in Figure 2). In all cases our sequential testing algorithm is never outperformed by the batch testing algorithms. **Figure 2:** Sample complexity of the sequential tester compared to the sample complexity of the batch testers. For the batch testers, the tolerance parameter \(\epsilon\) is set to 0.01. Note that in Figure 2, the plug-in tester has a worse sample complexity than the U-statistics tester. Since these sample complexities are determined by theoretical analysis, we experimentally confirmed that this discrepancy is not simply an artifact of the analysis. In Figure 3, we run simulations comparing the algorithms in terms of their error \(|\hat{C} - C(p)|\), and find that the plug-in tester is also empirically worse than the U-statistics tester. **Figure 3:** Empirical absolute error of the plug-in and U-statistic estimators when the data is generated from the uniform and power law distributions with domain size 1000. ### 7 CONCLUSIONS AND FUTURE WORK We introduced a locally differentially private estimator for collision probability that is near-optimal in a minimax sense and empirically superior to the state-of-the-art method introduced by Bravo-Hermsdorff et al. (2022). Our method is based on directly estimating the collision probability using all pairs of observed samples, unlike in previous work. We also introduced a near-optimal sequential testing algorithm that is likewise based on directly estimating the collision probability, and requires far fewer samples than the minimax optimal batch testing algorithm for many problem instances. In the future, we plan to combine these methods and develop a locally differentially private sequential testing algorithm which, to the best of our knowledge, does not currently exist. Also, we plan to develop an adaptive testing algorithm which accounts for the variance of the estimator, which may allow us to achieve even lower sample complexity (such as \(O(1/\epsilon)\)) for particularly easy problem instances. REFERENCES Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, and Ananda Suresh. A competitive test for uniformity of monotone distributions. In Carlos M. Carvalho and Pradeep Ravikumar (eds.), Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, volume 31 of Proceedings of Machine Learning Research, pp. 
57–65, Scottsdale, Arizona, USA, 29 Apr–01 May 2013. PMLR. Jayadev Acharya, Alon Orlitsky, Ananda Theertha Suresh, and Himanshu Tyagi. The complexity of estimating rényi entropy. In Proceedings of the twenty-sixth annual ACM-SIAM symposium on Discrete algorithms, pp. 1855–1869. SIAM, 2014. Jayadev Acharya, Clement Canonne, Cody Freitag, and Himanshu Tyagi. Test without trust: Optimal locally private distribution testing. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 2067–2076. PMLR, 16–18 Apr 2019a. Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. Hadamard response: Estimating distributions privately, efficiently, and with little communication. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1120–1129. PMLR, 2019b. Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences, 58(1):137–147, 1999. Heinz Bauer. Probability theory, volume 23. Walter de Gruyter, 2011. Gecia Bravo-Hermosdorff, Róbert Busa-Fekete, Mohammad Ghavamzadeh, Andres Munoz Medina, and Umar Syed. Private and communication-efficient algorithms for entropy estimation. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 15382–15393. Curran Associates, Inc., 2022. URL https://arxiv.org/pdf/2305.07751.pdf Róbert Busa-Fekete, Dimitris Fotakis, Balázs Szörényi, and Emmanouil Zampetakis. Identity testing for mallows model. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 23179–23190, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/c315f0320b7cd4ec85756fac52d78076-Abstract.html Cristina Butucea and Yann Issartel. Locally differentially private estimation of functionals of discrete distributions. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 24753–24764. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/cf8c9be2a4508a24ae92c9d3d379131d-Paper.pdf Clément L. Canonne. Topics and techniques in distribution testing: A biased but representative sample. Found. Trends Commun. Inf. Theory, 19(6):1032–1198, nov 2022a. ISSN 1567-2190. doi: 10.1561/0100000114. URL https://doi.org/10.1561/0100000114 Clément L Canonne. Topics and techniques in distribution testing. Now Publishers, 2022b. Graham Cormode and Minos Garofalakis. Join sizes, frequency moments, and applications. In Data Stream Management: Processing High-Speed Data Streams, pp. 87–102. Springer, 2016. Graham Cormode, Samuel Maddock, and Carsten Maple. Frequency estimation under local differential privacy. PVLDB Journal Proceedings, 14(11):2046–2058, 2021. Constantinos Daskalakis and Yasushi Kawase. Optimal Stopping Rules for Sequential Hypothesis Testing. In 25th Annual European Symposium on Algorithms (ESA 2017), volume 87 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 32:1–32:14. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2017. URL http://drops.dagstuhl.de/opus/volltexte/2017/7823
4Qz9BT4mpM
Could you elaborate on the theoretical underpinnings of the 'agreement-on-the-line' method? How does it theoretically and practically differ from existing methods for assessing model performance on out-of-distribution data?
Predicting the Performance of Foundation Models via Agreement-on-the-Line Anonymous authors Paper under double-blind review Abstract Estimating out-of-distribution performance is critical to safely deploy machine learning models. Recently, Baek et al. showed that the phenomenon “agreement-on-the-line” can be a reliable method for predicting OOD accuracy of models in an ensemble consisting largely of CNNs trained from scratch. However, it is now increasingly common to lightly fine-tune foundation models, and it is unclear whether such fine-tuning is sufficient to produce the needed diversity in models for such agreement-based methods to work properly. In this paper, we develop methods for reliably applying agreement-on-the-line-based performance estimation to fine-tuned foundation models. In particular, we first study the case of fine-tuning a single foundation model, where we extensively study how different types of randomness (linear head initialization, data shuffling, and data subsetting) contribute to the agreement-on-the-line of the resulting model sets. Somewhat surprisingly, we find that it is possible to obtain strong agreement via random initialization of the linear head alone. Next, we find that multiple foundation models, pretrained on different data sets but fine-tuned on the same task, also observe agreement-on-the-line. Again rather surprisingly, we demonstrate that these models exhibit a key similarity that causes them all to lie on the same agreement line. In total, these methods enable reliable and efficient estimation of OOD accuracy for fine-tuned foundation models, without leveraging any labeled OOD data. 1 Introduction Foundation model (FM) approaches, where one first pretrains a large model on open world data then fine-tunes or prompts for a specific downstream task, have proven to be a compelling paradigm for many common machine learning problems. The methods have achieved state-of-the-art results on image classification (Radford et al., 2019; Li et al., 2023; Wang et al., 2023), text classification (Brown et al., 2020), question answering (Devlin et al., 2018), and others, and are particularly noted for their often strong performance on out-of-distribution (OOD) data, which may vary substantially from the data used for fine-tuning (referred to as the in-distribution (ID) data) (Bommasani et al., 2021; Wortsman et al., 2022). Unfortunately, a substantial practical problem arises precisely in this OOD setting: in many cases, one does not have access to labeled OOD data, but only has such data available in unlabeled form. Obtaining an explicitly labeled hold-out set for each potential OOD shift is costly and impractical, and thus the field has explored other means for estimating OOD accuracy without labeled data. Interestingly, across a variety of distribution shift benchmarks, models often exhibit a strong linear correlation between their ID and OOD accuracies, a phenomenon dubbed “Accuracy-on-the-line” (ACL) (Miller et al., 2021; Recht et al., 2019; Roelofs et al., 2019). Recently, Baek et al. (2022) empirically demonstrated that for ensembles of deep network classifiers trained from scratch, the rates of ID and OOD agreement also show a strong linear correlation with the same slope and bias. Baek et al. (2022) used this to estimate the accuracies of models in such ensembles, thus providing a simple method for estimating OOD accuracy via unlabeled data alone. 
In particular, whenever the ID versus OOD accuracy is strongly linearly correlated, one may estimate the linear trend using agreement without ground truth labels. Unfortunately, the AGL approach requires a diverse collection of classifiers over which to compute agreement: classifiers must vary in their predictions. Baek et al. (2022) achieve this diversity through training various models of different architectures from scratch. However, in the case of fine-tuned FMs, this diversity is seemingly lacking: we often want to lightly fine-tune just a single base foundation model for a downstream task, which even after multiple runs would seemingly lead to highly correlated downstream models, thus yielding model sets unsuitable for AGL-based OOD performance estimation. In this work, we develop methods for extending AGL performance estimation to the setting of FMs, thus enabling practitioners to estimate the OOD performance of fine-tuned models without any labeled data. We first investigate the ability to estimate OOD performance using a single base FM. Key to our approach is a detailed empirical study of different types of randomness that we can inject into the fine-tuning process, so as to encourage the needed degree of diversity amongst models. Specifically, we analyze three potential sources of diversity: 1) random linear head initialization; 2) random orderings of the fine-tuning data; and 3) random i.i.d. subsets of the fine-tuning data. We find, somewhat surprisingly, that using random linear heads is able to much more reliably induce AGL behavior for the resulting classifiers, even though all settings still exhibit the ACL phenomenon. We find that these results hold across multiple different FMs and modalities: they hold for CLIP-based image classification and for LLM-based question answering (QA) and text classification tasks. The end result is a simple and straightforward method for evaluating the OOD performance of a fine-tuned FM, applicable to settings where we only want to fine-tune a single such base model. Second, we analyze the ability of AGL-based methods to predict OOD performance when using multiple different pretrained FMs. Here the likely problem seems to be the opposite of what occurred previously: whereas before we expected to have too little diversity in models, here we encounter a setting where the different base models are pretrained on potentially entirely different data sets, using different architectures and training regimens. We show, however, that this degree of diversity is also sufficient for producing AGL behavior. Thus, for settings where multiple FMs exist, they can all be fine-tuned for a given downstream task, and AGL can allow us to estimate their accuracies. In total, our contributions are as follows: 1. We propose a new state-of-the-art method for unsupervised accuracy estimation under distribution shift when using large pre-trained foundation models that are lightly fine-tuned for specific tasks. Prior works have primarily dealt with models trained from scratch, and hence are not directly applicable in this setting. Thus our study is new, computationally tractable, and extremely relevant in today’s context. 2. Furthermore, our work leveraging Agreement-on-the-line (AGL) for OOD estimation builds on prior work (Baek et al., 2022) but extends it in important ways that apply to this new and important setting. The key to making AGL work is obtaining the right ensemble. In Baek et al. 
(2022), this was done by independently training multiple models from scratch, an unfeasible step for FMs. Our work shows how to side-step this, by systematically identifying a practical method for the same. Specifically, we show that creating an ensemble with randomly initialized linear heads and then fine-tuning, can allow for AGL behavior, and thus ALine methods for unsupervised accuracy estimation, while other similar forms of ensembling (such as data ordering or data subsetting) do not. 3. Besides being of practical relevance, this work also points to several interesting phenomena underlying AGL that go beyond previous knowledge. Prior work Baek et al. (2022) claimed that AGL does not hold for linear models. However, we find the contrary when using pre-trained features. Furthermore, other prior work Miller et al. (2021) suggests that the effective robustness (i.e. the linear fit between ID and OOD accuracy) would change depending on the pretraining data. We find that this is not the case for question answering with different pretrained LLMs. Thus we hope our findings can also advance our understanding of the robustness of ML models, particularly those that leverage foundation models. This work allows us to substantially expand the set of problems and models for which AGL-based OOD performance estimation is practical, and allows us to leverage much more powerful models for these settings where training models from scratch on tasks of interest is not feasible. 2 BACKGROUND AND RELATED WORK 2.1 SETUP Numerous tasks of interest boil down to mapping an input \( x \in X \) to a discrete output \( y \in Y \). In particular, consider a base FM \( B : X \mapsto \mathbb{R}^d \) that we fine-tune to get \( f(B) : X \mapsto Y \). In this work, we consider a variety of foundation models: GPT2 (Radford et al., 2019), GPT-Neo, OPT (Zhang et al., 2022), Llama2 (Touvron et al., 2023), and CLIP (Radford et al., 2021). **Fine-tuning.** We have access to labeled data from some distribution \( D_{ID} \) that we use for obtaining \( f(B) \) from \( B \). In this work, we consider the following standard fine-tuning procedures. 1. **Linear probing (LP):** Given features \( B_\theta \) from the base model \( B \), we train a linear head \( v \) such that the final classifier maps the score \( v^\top B_\phi(x) \) to a predicted class. We randomly initialize \( v \) and update \( v \) via gradient steps on a suitable loss function such as cross-entropy for classification. We keep the base model parameters frozen, and only update the linear head. We refer to \( v \) as either a linear probe (classification), or span prediction head (question answering) depending on the task of interest. 2. **Full fine-tuning (FFT):** Here also we randomly initialize a linear head \( v \) and optimize a suitable loss function, but we update all parameters of the backbone, i.e. the feature extract \( B_\phi \) is also updated. When infeasible to update all parameters natively, we perform low-rank adaptation (LoRA) (Hu et al., 2021) which uses trainable rank decomposition matrices to reduce the number of trainable parameters while still effectively updating the feature extractor \( B_\phi \). In this work, we do not distinguish between LoRA and FFT as they conceptually achieve the same effect, and seem to show similar empirical trends in our studies. Several variants of fine-tuning have been proposed, particularly focused on computational efficiency. 
However, full fine-tuning and linear probing remain the most commonly used approaches. **OOD performance estimation.** Given access to a labeled validation set from \( D_{ID} \) and unlabeled samples from a potentially different distribution \( D_{ood} \), our goal is to estimate performance on \( D_{ood} \). We consider the standard performance metrics for various tasks: Accuracy \( \ell_{0-1} : Y \mapsto Y \) for classification and Macro-averaged F1 score \( \ell_{F1} : Y \mapsto Y \) for Question Answering. We use \( \ell \) to denote the appropriate metric in the context. 2.2 Background on OOD accuracy estimation There is a rich literature on OOD performance estimation, with a variety of proposed approaches. One family of approaches attempts to quantify the degree of distribution shift through data and/or model dependent metrics e.g. uniform convergence bounds using metrics such as $H$-divergence (Ben-David et al., 2006; Mansour et al., 2009; Cortes et al., 2010; Kuzborskij & Orabona, 2013). However, these approaches only provide upper bounds on the OOD error, and these bounds tend to be loose when evaluated on deep networks used in practice (Miller et al., 2021). Another line of work looks at leveraging the model’s own softmax predictions i.e. the model’s confidence to predict the OOD performance (Hendrycks & Gimpel, 2017a; Hendrycks & Dietterich, 2019; Garg et al., 2022; Elsahar & Gallé, 2019; Guillory et al., 2021). Since models are typically overconfident, it is common practice to first calibrate these models using ID validation data to further improve the reliability of such approaches. While these approaches show empirical promise in some settings, they are not expected to work in general and often fail in the presence of large shifts (Garg et al., 2022). There are other heuristic OOD estimation strategies that are reported to work in some datasets such as using performance on auxiliary self-supervised tasks (Schelter et al., 2020; Deng & Zheng, 2021; Deng et al., 2021; Yu et al., 2022) or leveraging characteristics of self-trained models on the OOD data (Yu et al., 2022; Chen et al., 2021). There has also been growing interest in evaluating the reliability of foundation models in particular, and several distribution shift benchmarks have been proposed to specifically understand the failure modes of large models (Malinin et al., 2022; Tran et al., 2022). However, there has been a lack of study in terms of how well unsupervised performance estimators transfer to large models. 2.3 Accuracy and agreement on the line In recent work, Baek et al. (2022) propose a different approach for estimating OOD performance, that is empirically reliable across a variety of shifts and outperforms prior approaches. This approach is based on an earlier intriguing observation from (Miller et al., 2021; Recht et al., 2018; 2019; Roelofs et al., 2019; Yadav & Bottou, 2019; Taori et al., 2020; Miller et al., 2020)—there is a strong linear correlation between the ID and OOD performance of models for several distribution shifts. We call this phenomenon “accuracy-on-the-line” (ACL). ACL has been observed for image classification shifts such as some common corruptions on CIFAR10, ImageNetV2, FMoW-WILDS, and question answering shifts such as SQuAD-Shifts. However, ACL does not always hold e.g. Camelyon-WILDS (Miller et al., 2021) and SearchQA (Awadalla et al., 2022) do not show ACL. 
While ACL is a striking phenomenon, it does not immediately provide a practical method to estimate OOD performance—computing the slope and bias of the linear correlation requires access to labeled samples from $\mathcal{D}_{ood}$. Baek et al. (2022) propose to use the agreement between models rather than accuracy. Formally, given a pair of models $f_1$ and $f_2$ that map inputs to labels, accuracy and agreement can be defined as $$\text{Acc}(f_1) = \mathbb{E}_{x,y \sim \mathcal{D}}[\ell(f_1(x), y)], \quad \text{Agr}(f_1, f_2) = \mathbb{E}_{x,y \sim \mathcal{D}}[\ell(f_1(x), f_2(x))],$$ where $\ell$ is the appropriate performance metric of interest. Note that while accuracy requires access to the ground truth labels $y$, agreement only requires access to unlabeled data and a pair of models. Baek et al. (2022) observed that when ACL is observed, i.e. the probit-scaled ID versus OOD accuracies of these models are strongly linearly correlated, then the ID versus OOD probit-scaled agreement of pairs of these models also observes a strong linear correlation with the same linear slope and bias. Furthermore, when accuracies do not show a linear correlation, agreements also do not. This phenomenon was called “agreement-on-the-line” (AGL). Previously, the connection between agreement and accuracy has been explored in-distribution (Jiang et al., 2022; Madani et al., 2004; Nakkiran & Bansal, 2020) and the variance in Bayesian neural networks is often utilized for uncertainty estimation (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017). Lee et al. (2023) provides some theoretical underpinning of AGL in the regression setting. Since computing agreement does not require ground truth labels, one can compute the respective slope and bias using OOD unlabeled data, and then estimate the OOD performance from the ID performance measured on ID validation data. We refer the reader to (Baek et al., 2022) for formal ALine algorithms (ALine-S and ALine-D) to use AGL for OOD performance estimation (Appendix 9.2). Figure 2: Some examples of datasets where ACL and AGL hold (CIFAR100C, ImageNetC and fMoW-WILDS). Similar to (Baek et al., 2022), we find that ACL doesn’t hold with a high correlation for the Camelyon17-WILDS dataset, and consequently neither does AGL. Note that ACL is a pre-requisite for good OOD performance estimation via ALine. However, we can easily detect whether or not ACL holds by simply checking the linear correlations afforded by agreements, and only rely on ALine when agreements show strong linear correlation. 2.4 ACL AND AGL: TRAINING FROM SCRATCH VS FINE-TUNING In this work, we are interested in estimating OOD performance when lightly fine-tuning foundation models. A crucial component for AGL is the diversity of the ensemble over which agreements are evaluated. Prior work on AGL has exclusively focused on training from scratch for several epochs, a very different regime from light fine-tuning. If the models are not diverse enough, AGL is bound to fail. As an extreme, consider an ensemble of effectively identical models. Their ID and OOD agreement will always be 1, and there is no linear fit to estimate. In the training-from-scratch setting, it was observed that simply varying the architectures was able to induce the needed diversity for AGL to hold. In contrast, in this work, we focus on how to introduce sufficient diversity during just the fine-tuning process, which can start from the same base foundation model and usually involves far fewer gradient steps than training from scratch. 
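To make the quantities above concrete, the following is a minimal sketch (our code, not the authors' released implementation) of how ID/OOD accuracy and pairwise agreement can be computed for a collection of classifiers, and how the agreement line can be fit on the probit scale; the helper names and the use of `scipy.stats.norm.ppf` for the probit transform are our own choices.

```python
import itertools
import numpy as np
from scipy.stats import norm

def accuracy(preds, labels):
    return float(np.mean(preds == labels))

def agreement(preds_a, preds_b):
    return float(np.mean(preds_a == preds_b))

def probit(x, eps=1e-6):
    # Probit scaling, as used for both ACL and AGL.
    return norm.ppf(np.clip(x, eps, 1 - eps))

def agreement_line(id_preds, ood_preds):
    """Fit the ID-vs-OOD agreement line (probit scale) over all model pairs.
    id_preds / ood_preds: dict mapping model name -> predicted labels."""
    xs, ys = [], []
    for a, b in itertools.combinations(id_preds, 2):
        xs.append(probit(agreement(id_preds[a], id_preds[b])))
        ys.append(probit(agreement(ood_preds[a], ood_preds[b])))
    slope, bias = np.polyfit(xs, ys, deg=1)
    corr = np.corrcoef(xs, ys)[0, 1]   # check that AGL actually holds before trusting the fit
    return slope, bias, corr
```

In practice one would first inspect the returned correlation: only when the pairwise agreements are strongly linearly correlated is the fitted slope and bias used for the downstream accuracy estimates, mirroring the ACL check described above.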
3 PREDICTING OOD PERFORMANCE: SINGLE BASE FOUNDATION MODEL Our first setting of interest concerns the case where we have a single FM that we would like to fine-tune for a given downstream task. Since AGL-based methods cannot be applied to a single classifier (requiring a collection of classifiers over which to compute agreement between pairs), we need some method to introduce variability amongst multiple variants of this base model. Such variability can be introduced in many ways, but an overriding concern is that even with some randomness in the fine-tuning process, it may not be enough to overcome the underlying similarities in predictions due to the same base FM. To address this problem, in this section we evaluate multiple possible sources of diversity in the fine-tuning process, to see what approach (if any) can lead to AGL behavior. Specifically, we analyze three possible methods for introducing diversity into the fine-tuning process (which then lets us create a differentiated collection of classifiers by repeating the fine-tuning process multiple times): 1. **Random linear heads.** Before fine-tuning, we initialize the last layer of the network (i.e., the linear head) randomly, rather than in some zero-shot or pre-specified manner. 2. **Data shuffling.** We present the same data to each model, but shuffle the order of the data differently within each fine-tuning optimization run. 3. **Data subsetting.** We present each model to be fine-tuned with an independent subset of the (ID) fine-tuning data. For the case of training models from scratch, it is well established that independent data subsetting tends to lead to the greatest diversity of classifiers (Nakkiran & Bansal, 2020). Nonetheless, in this setting we find, rather surprisingly, that just randomizing the linear head achieves the highest degree of agreement. We show that this finding persists over multiple models, multiple tasks, and indeed multiple modalities entirely.

| Source of Diversity | CIFAR10C MAPE (%) | CIFAR10C MAE (%) |
|---------------------|------------------|-----------------|
| Random linear heads | **15.88** | **5.74** |
| Data shuffling | 74.16 | 22.61 |
| Data subsetting | 25.94 | 7.39 |

Table 1: ALine-D MAE and MAPE for CLIP linear probing on CIFAR10 image classification. Note that the reported MAE and MAPE are averaged across all 19 evaluated CIFAR10C shifts.

### 3.1 INVESTIGATIONS ON VLM-BASED IMAGE CLASSIFICATION **CLIP Linear Probing** We use CLIP (Radford et al., 2021), specifically the ViT-B/32 model trained on LAION-2B (Schuhmann et al., 2022), for our image classification tasks. Given its well-established 0-shot capabilities, a popular method of fine-tuning CLIP for downstream tasks is to simply employ linear probing on top of the CLIP representation. Thus, in this section we evaluate the OOD performance of similarly obtained ensembles. **Datasets** We fine-tune and test our models on several different image classification datasets. We fine-tune models on CIFAR10 (Krizhevsky et al., 2009), and then test on CIFAR10C (Hendrycks & Dietterich, 2019), which contains 50k images with 19 different corruptions, some natural (Snow and Fog), and some synthetic (JPEG compression). We also test on the CIFAR10.1 dataset (Recht et al., 2018), which contains newer images of the same labels. We repeat the same for CIFAR100 (Krizhevsky et al.), ImageNet-1k (Russakovsky et al., 2014) and their respective shifted datasets CIFAR100C, ImageNetC (Hendrycks & Dietterich, 2019), and ImageNetV2 (Recht et al., 2019). We further validate our finding by testing on three real world shifts from the WILDS (FMoW, iWildCam, Camelyon17) (Koh et al., 2021) and the Office-Home (Venkateswara et al., 2017) benchmarks.
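As a concrete illustration of the random-linear-head setup used in this section, the following sketch (our code, under the assumption that frozen CLIP features have already been extracted; the arrays below are placeholders standing in for a real feature pipeline) trains an ensemble of linear probes that differ only in the seed used to initialize the head—data and data order are identical across members.

```python
import torch
import torch.nn as nn

def train_linear_probe(feats, labels, num_classes, seed, epochs=20, lr=1e-3):
    """Train one linear probe on frozen features; only the random initialization
    of the head differs across ensemble members (same data, same order)."""
    torch.manual_seed(seed)                       # controls the head initialization
    head = nn.Linear(feats.shape[1], num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                       # full-batch steps for simplicity
        opt.zero_grad()
        loss = loss_fn(head(feats), labels)
        loss.backward()
        opt.step()
    return head

# Placeholder frozen features and labels (in practice: precomputed CLIP ViT-B/32 features).
feats_id = torch.randn(512, 512)
labels_id = torch.randint(0, 10, (512,))
ensemble = [train_linear_probe(feats_id, labels_id, num_classes=10, seed=s)
            for s in range(10)]
```

Keeping everything except the head seed fixed is what distinguishes the "random linear heads" source of diversity from data shuffling and data subsetting in the comparisons below.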
**Results** In Figure 1, we observe the ID and OOD agreements and accuracies of linear probes trained on top of CIFAR10 CLIP representations. One may suspect that in this setting, simply fine-tuning linear models on top of the CLIP representations would agree highly and AGL may break. For example, Baek et al. (2022) have shown previously that AGL is a phenomenon specific to neural networks, e.g. linear models trained on top of the flattened CIFAR10 images do not observe AGL. Indeed, while ACL holds with strong correlation for each of the model collections constructed with the three sources of diversity, AGL does not hold for all model collections. However, AGL interestingly does hold strongly for the case of random head initialization. Thus, contrary to prior findings, even linear models, when trained on top of neural network features (in this case CLIP) with the right type of diversity, may observe AGL, and it can be used for OOD performance estimation. In contrast, for the other sources of diversity, we observe a consistent trend where agreement is also strongly linearly correlated but generally shows a much higher agreement rate OOD. In fact, for all ensembles obtained through data subsetting and data shuffling, the agreement line strictly lies above the accuracy line, i.e. AGL was not observed. In some sense, this is particularly surprising for linear models. Intuition may suggest that independent data subsetting leads to the greatest diversity, as the other sources of diversity optimize over the same convex landscape. Yet even when we vary the number of training epochs to achieve a wide spread of ID accuracies, AGL only holds for models that start at random initializations. The averaged Mean Absolute Percentage Error (MAPE) between the AGL-interpolated and actual OOD accuracies for the CIFAR10C shifts can be found in Table 1, further quantifying these visually apparent results. Figure 2 shows ACL and AGL holding for other datasets over CLIP fine-tuned ensembles obtained with the same random head initialization approach. We refer the reader to Appendix 9.8 for a more exhaustive evaluation. ### 3.2 INVESTIGATIONS ON LLM-BASED TEXT TASKS We conduct a similar systematic investigation of obtaining a set of fully fine-tuned single base LLMs that are amenable to estimating the OOD performance of any model within that ensemble. Similar to CLIP linear probing, we find that AGL cannot consistently be observed without randomly initializing the linear head of the model before performing fine-tuning. **Models** We evaluate a collection of 50 fine-tuned models for our experiments in this section. Each model is obtained by fine-tuning from the same checkpoint of a GPT2-Medium. We individually present findings on both these families of models in the following sections. Huggingface links to the base models we trained are in Appendix 9.11. **Full Fine-tuning** We fully fine-tune each of our models by attaching a span-prediction head to the pretrained models and fine-tuning the entire network on the ID dataset (SQuAD v1.1). The span-prediction head consists of two linear vectors that estimate the probability with which a token is the start and end of the answer span within the context. Each model is fine-tuned for up to 3 epochs to obtain a sufficient spread of model accuracy. Specifics on hyperparameters for fine-tuning are in Appendix 9.1.
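For clarity, the following small sketch (our own, with hypothetical field names) spells out exactly what varies across ensemble members under each of the three sources of diversity studied here: the head seed, the data order, or the i.i.d. training subset, with everything else held fixed.

```python
import numpy as np

def make_run_config(source, member_id, n_train, base_seed=0):
    """Configuration for one ensemble member under each source of diversity.
    head_seed       -> seed used when the span/linear head is randomly initialized
    shuffle_seed    -> seed used to order the fine-tuning data
    train_indices   -> which (ID) examples are used for fine-tuning"""
    cfg = {"head_seed": base_seed, "shuffle_seed": base_seed,
           "train_indices": np.arange(n_train)}
    if source == "random_linear_heads":
        cfg["head_seed"] = member_id                 # only the head init varies
    elif source == "data_shuffling":
        cfg["shuffle_seed"] = member_id              # only the data order varies
    elif source == "data_subsetting":
        rng = np.random.default_rng(member_id)       # only the i.i.d. subset varies
        cfg["train_indices"] = rng.choice(n_train, size=n_train // 2, replace=False)
    return cfg
```

Each configuration is then fed to an otherwise identical fine-tuning run, so that any difference in the resulting ensemble's agreement behavior can be attributed to the chosen source of randomness alone.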
**Datasets** We study the aforementioned sources of diversity for LLMs for the task of Question Answering (QA). We also include a similar study on the task of Text Classification in Appendix 9.6. Each LLM is fine-tuned on the SQuAD v1.1 dataset (Rajpurkar et al., 2016) for the task of extractive QA. Extractive QA entails finding a single span of text from within the input context that answers the posed question. We evaluate the fine-tuned LLMs on four distribution shifts present in the SQuAD-Shifts (New Wiki, New York Times, Amazon, and Reddit) dataset (Miller et al., 2020). SQuAD-Shifts is a distribution shift in the dataset source. SQuAD builds reading comprehension questions from Wikipedia text, and SQuAD-Shifts replicates this pipeline by using paragraphs obtained from other sources. **Results** We find that not all sources of diversity are equally likely to yield a diverse enough ensemble of fine-tuned LLMs. Diversity arising from data shuffling and training on independent data subsets may not always be sufficient to yield an ensemble that is amenable to accurately estimating OOD accuracy (see Table 2). Specifically, these sources tend to yield model collections with correlated errors, which results in the agreement line often lying above the accuracy line, although the trend is less stark than the one observed with CLIP linear probing for image classification. We refer the reader to Appendix 9.4 to observe these trends on all four shifts within the SQuAD-Shifts dataset. As seen with CLIP linear probing, varying the random initialization of the span head consistently provides sufficient stochasticity during fine-tuning to obtain a suitably diverse ensemble that demonstrates AGL and enables accurate prediction of OOD accuracy. ### 3.3 SUMMARY AND IMPLICATIONS As a substantial amount of diversity is needed for AGL to hold, it was anticipated that we would not be able to observe AGL for models lightly fine-tuned starting from a single base FM. However, seeing that AGL indeed holds when randomly initializing the linear head during fine-tuning, we show that it is possible to utilize ALine as a metric to compute OOD accuracy. Furthermore, the LLMs were only fully fine-tuned for up to 3 epochs, which makes it all the more interesting that fine-tuning would overcome pretraining to achieve diversity. Thus, to a practitioner, when only a single base model is available, the OOD accuracy of another model with the same base can be estimated by randomly initializing a set of models, and fine-tuning. We also refer the reader to Appendix 9.3 for an analogous study of the three sources of diversity when fully fine-tuning CLIP.

| Source of Diversity | Amazon MAPE (%) | Amazon MAE (%) | Reddit MAPE (%) | Reddit MAE (%) |
|---------------------|-----------------|----------------|-----------------|----------------|
| Random Linear Heads | **6.34** | **0.69** | **3.48** | **0.79** |
| Data Shuffling | 10.30 | 4.18 | 9.59 | 4.32 |
| Data Subsetting | 16.21 | 5.2 | 13.94 | 4.71 |

Table 2: ALine-D MAPE (%) and MAE (%) on the SQuAD-Shifts Amazon and Reddit datasets when applied to sets of fully fine-tuned models, trained using different sources of randomness.
| OOD Dataset | ALine-D | ALine-S | Naive Agr | ATC | AC | DF | |-----------------------------|---------|---------|-----------|-------|-------|-------| | SQuAD-Shifts Reddit | **1.20**| 2.60 | 20.21 | 12.74 | 49.25 | 6.09 | | SQuAD-Shifts Amazon | **1.64**| 3.10 | 20.40 | 15.35 | 51.06 | 7.39 | | SQuAD-Shifts Nyt | **0.82**| 1.33 | 18.46 | 3.11 | 38.61 | 3.18 | | SQuAD-Shifts New Wiki | 3.08 | 3.18 | 18.87 | 5.46 | 41.26 | **1.50**| | **Average** | **1.68**| 2.55 | 19.48 | 9.16 | 45.04 | 4.54 | | CIFAR10C (averaged across shifts) | 6.99 | **6.92**| 44.33 | 31.28 | 48.66 | 32.79 | | CIFAR10.1 (averaged across v4, v6) | **2.42**| 3.03 | 41.52 | 6.48 | 54.57 | 8.51 | | CIFAR100C (averaged across shifts) | **11.94**| 12.67 | 46.13 | 18.69 | 80.81 | 37.36 | | ImageNetC (averaged across shifts) | **10.91**| 11.04 | 56.76 | 27.25 | 79.00 | 37.86 | | ImageNet V2 (averaged across 3 format) | **4.96**| 5.03 | 47.65 | 8.96 | 77.34 | 7.86 | | fMoW-WILDS (val OOD split) | **2.59**| 2.74 | 83.94 | 9.03 | 44.59 | 5.86 | | iWildCam-WILDS (val OOD split) | **22.05**| 25.29 | 46.42 | 37.25 | 57.31 | 69.58 | | Camelyon17-WILDS (val OOD split)* | 9.93 | 10.71 | 19.99 | 18.92 | 24.64 | **7.18**| | OfficeHome-Art | **9.55**| 13.70 | 45.77 | 29.54 | 76.89 | 27.49 | | OfficeHome-ClipArt* | 14.60 | 16.23 | 50.81 | 18.22 | 79.44 | **14.29**| | OfficeHome-Product | **11.10**| 13.98 | 57.28 | 63.35 | 77.13 | 79.97 | | OfficeHome-Real | **4.80**| 7.20 | 45.16 | 16.43 | 86.44 | 21.93 | Table 3: The MAPE (%) of predicting OOD performance using ALine and other baseline methods. Evaluations on QA tasks (SQuAD-Shifts) are performed over a set of models finetuned from multiple base FMs (LlaMa, GPT, OPT). Evaluations on the image classification datasets are conducted with CLIP models fine-tuned with linear probing. ### 4 Predicting OOD performance: Multiple Foundation Models Alternatively, when multiple base foundation models are accessible, several additional questions arise. Instead of training multiple foundation models with random initialization, if multiple base models heavily pretrained on different data corpora lie on the same ID versus OOD accuracy trend, it’s conceivable that these models may also observe the same ID versus OOD agreement trend. However, AGL may potentially fail due to pairs of FMs fine-tuned from different base models disagreeing highly OOD, or models pre-trained on similar corpora observing relatively higher OOD agreement; thus breaking the linear correlation of agreement entirely. All the more, it is unclear whether models heavily pretrained on different text corpora lie on different or similar accuracy lines to begin with. We observe that for certain extractive question-answering shifts, foundation models fine-tuned from a wide range of base models observe both ACL and AGL. #### Models We train 41 models on the extractive QA benchmark SQuAD as in the previous section, and observe their OOD performance to SQuAD-Shifts. We fine-tune OPT-125M, OPT-350M, OPT-1.3B, GPT2-XL, GPT2-Large, GPT2-Medium, GPT2, GPT-Neo-135M, Llama2-7B, Alpaca-7B, and Vicuna-7B to extractive QA. In Appendix 9.7 we perform a similar study on text classification shifts. OPT was pretrained on a wide variety of data including BookCorpus (Zhu et al., 2015), Stories (Trinh & Le, 2018), a subset of PILE (Gao et al., 2020), CCNews v2 corpus, and PushShift.io Reddit (Baumgartner et al., 2020). Similarly, GPT2 was pretrained on BookCorpus while GPT-Neo was trained on PILE. 
Llama2 was trained on an undisclosed set of publicly available data. Both Alpaca and Vicuna are derived from Llama2: Alpaca is additionally trained on instruction-following demonstrations, while Vicuna is additionally trained on user-shared conversations from ShareGPT.

### 4.1 Results

We investigate the behavior of an ensemble of foundation models fine-tuned from diverse base models on SQuAD to all SQuAD-Shifts datasets in Figure 3. We first make the observation that base LLMs pretrained on different text corpora lead to fine-tuned models that lie on the same linear trend in accuracy on SQuAD. This is in contrast to previous works benchmarking the performance of foundation models on image classification tasks (Radford et al., 2021; Taori et al., 2020), which have indicated that models heavily pretrained on different image corpora may lie on different lines. We suspect that the pretraining datasets for the models in our study exhibit much more homogeneity. Second, the ID versus OOD agreement for pairs of models in this ensemble, including pairs of different base foundation models, retains a strong linear correlation, and the slope and bias closely match those of accuracy. As a result, different pretraining does not break AGL.

Figure 3: ACL and AGL observed on extractive Question Answering when computed over a set of models fine-tuned from different base foundation models. ACL and AGL are seen to hold for all four shifts of SQuAD-Shifts under this setup. The base models used here are OPT, GPT, and Llama.

### 5 Estimating OOD Accuracy using ALine on Diverse Ensembles

With sufficient diversity in the ensemble, we observe that AGL succeeds over other OOD estimation baselines in terms of predicting the performance of the models in the ensemble. Specifically, we predict the performances of models in a collection consisting of 1) models trained from randomly-initialized heads (Section 3) and 2) different base models (Section 4). For image classification, our model collection consists of just the former, i.e., linear models over CLIP representations. Our question-answering model collection includes both, i.e., GPT, OPT, and Llama models individually trained from differently initialized heads. We compare the AGL prediction algorithms, ALine-S and ALine-D (Baek et al., 2022), with other existing methods that utilize model confidence to estimate OOD accuracy: ATC (Garg et al., 2022), AC (Hendrycks & Gimpel, 2017b), and DOC-Feat (Guillory et al., 2021). We also assess the direct use of agreement to predict accuracy, dubbed naive agreement (Jiang et al., 2022; Madani et al., 2004). ALine-S simply transforms the ID accuracy using the slope and bias estimated from the agreement fit. ALine-D sets up and solves a system of $n$ choose 2 linear equations in which the OOD accuracies of the models are the variables. Empirically, ALine-D performs better than ALine-S. More details for these algorithms are provided in Appendix 9.2.

We observe that with the right diversity in the model collection (i.e., the ones exhibiting AGL), variants of the ALine algorithm surpass confidence/probability-based methods, achieving the lowest error when predicting the OOD performance of fine-tuned foundation models on almost all tasks, as seen in Table 3. For the confidence-based methods (ATC, AC, DF), we pick the lower error value (from the ones obtained with and without temperature scaling of the logits), even though in practice this would not be known a priori.
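For reference, the following is a rough, simplified sketch of an ALine-S-style estimator as described above: fit a slope and bias on probit-transformed ID/OOD agreement pairs (which need no OOD labels), then map each model's probit-transformed ID accuracy through that line. This is our paraphrase of Baek et al. (2022), not the reference implementation; ALine-D would instead solve the pairwise linear system mentioned above.

```python
import numpy as np
from scipy.stats import norm, linregress

def probit(p, eps=1e-6):
    return norm.ppf(np.clip(p, eps, 1 - eps))

def aline_s(id_acc, id_agr_pairs, ood_agr_pairs):
    """Estimate per-model OOD accuracy from ID accuracy and pairwise agreements.
    id_acc:        per-model ID accuracy (requires ID labels only).
    id_agr_pairs:  pairwise model agreement on ID data (no labels needed).
    ood_agr_pairs: pairwise model agreement on OOD data (no labels needed)."""
    slope, bias, _, _, _ = linregress(probit(np.asarray(id_agr_pairs)),
                                      probit(np.asarray(ood_agr_pairs)))
    # Apply the agreement line to each model's ID accuracy in probit space.
    return norm.cdf(slope * probit(np.asarray(id_acc)) + bias)
```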
Though temperature scaling can be applied to calibrate models in terms of their accuracy, it is not obvious how to calibrate models for the F1 score via temperature scaling. As a result, we observe that confidence-based methods particularly suffer on the extractive QA datasets.

6 CONCLUSION

We develop methods for extending AGL to foundation models to enable OOD performance prediction in this emerging paradigm. We found that applying AGL directly may sometimes fail; properly utilizing this phenomenon for performance estimation requires carefully tuning the distribution of models in the ensemble so that their errors are uncorrelated. Unlike the original paradigm of AGL, where models observed tens or hundreds of epochs of training on the in-distribution dataset, we find that stochasticity in specific optimization choices, specifically random initialization, is crucial for foundation models. We also remark that our findings suggest that even large pretrained models with light fine-tuning could be very sensitive to corruptions in the learned representation, especially with a randomly initialized linear head. Second, though Baek et al. (2022) posed AGL as a model-centric phenomenon observed specifically in neural network ensembles, we find that linear models can also observe AGL when the data and the distribution shift contain certain structures (as is possible in the CLIP representation space).

Our conclusion on AGL also sheds light on ACL (i.e., accuracy-on-the-line) in the presence of foundation models, a phenomenon that is of independent interest. Some recent works have studied the effect of different forms of fine-tuning on ACL (Radford et al., 2021; Awadalla et al., 2022). The main finding reported is that different forms of fine-tuning lead to different slopes in the linear correlations, an effect often called "effective robustness". In our results, we find that when fine-tuned the same way, models obtained from different base foundation models (OPT, GPT2, GPT-Neo, and Llama2) all lie on the same line (Figure 3). This is particularly intriguing because it goes against the common wisdom that the amount of pretraining data determines the effective robustness. We leave these questions for future analysis.

7 REPRODUCIBILITY STATEMENT

Appendix 9.1 outlines the hyperparameters we used to obtain our results, and Appendix 9.11 lists the Huggingface sources for each foundation model we evaluated. In order to make it easier to reproduce our findings, we plan to release code for linear probing, fine-tuning, and agreement calculation.

8 ETHICS STATEMENT

Estimating the out-of-distribution performance of foundation models has rapidly grown in importance, especially as these models are increasingly deployed in real-world use cases. Our work focuses on a promising method to measure OOD performance without using labeled data, which could be a valuable tool to identify when performance degrades due to distribution shift. This would enable deployers to reduce the harm of machine learning systems when they encounter OOD inputs. However, deployers should be careful not to use AGL as the only signal for OOD performance. The correlation between agreement and accuracy is not guaranteed to hold for all distribution shifts, so other metrics should additionally be used to monitor model performance. In particular, for foundation models, we observe that careful choices during fine-tuning are required to observe AGL.
Furthermore, while AGL may correctly predict that the average OOD performance remains high, it may not identify whether different subpopulations experience drastically different changes in performance. These subgroups could correspond to protected categories like gender and race, or to inputs where
77N93tc3o5
In the manuscript, it sounds like identifiability should follow trivially from previous works, such as the iVAE framework (Khemakhem et al., 2020). Even if this were the case, it would be helpful to restate and discuss the required assumptions.
DEEP INDEPENDENT VECTOR ANALYSIS Anonymous authors Paper under double-blind review ABSTRACT We introduce a deep multivariate latent variable model, Deep Independent Vector Analysis (DeepIVA), for learning linked and identifiable latent sources across multiple data modalities by unifying multidataset independent subspace analysis (MISA) and identifiable variational autoencoders (iVAE). DeepIVA aims to learn hidden linkage information via the MISA loss to attain latent cross-modal alignment while leveraging the identifiability properties of the iVAE to ensure proper unimodal disentanglement. We propose a stricter set of performance measures, facilitating comprehensive evaluation. We demonstrate that DeepIVA can successfully recover nonlinearly mixed multimodal sources on multiple synthetic datasets compared with iVAE and MISA. We then apply DeepIVA on a large multimodal neuroimaging dataset, and show that DeepIVA can reveal linked imaging sources associated with phenotype measures. 1 INTRODUCTION One fundamental problem in representation learning is how to learn the latent variables used to generate the data. In blind source separation (BSS) (Silva et al., 2016), independent component analysis (ICA) (Comon, 1994) aims to recover latent sources that are statistically independent, but there is no guarantee of identifiability in general without additional assumptions. Notably, the solution of a linear ICA problem is identifiable only when at most one of latent sources is Gaussian (Comon, 1994). The solution of a nonlinear ICA problem, on the other hand, is highly non-unique without additional restrictions (Hyvärinen & Pajunen, 1999). If the learned sources are not identifiable, it is impossible to reveal the underlying structure of the data. Recent advancements in nonlinear ICA theory have proposed to recover identifiable latent sources mixed nonlinearly up to trivial indeterminacies by introducing auxiliary information (Hyvarinen & Morioka, 2016; Hyvarinen et al., 2019; Khemakhem et al., 2020). Specifically, an identifiable variational autoencoder (iVAE) (Khemakhem et al., 2020) has been proved to recover nonlinearly mixed sources up to permutations or sign flips by utilizing auxiliary variables such as time indices or class labels. It assumes that sources are conditionally independent given such auxiliary variables, in the form of an exponential family distribution. Apart from identifiability, we are often interested in learning linked representations from multiple data modalities, as each modality can only capture limited information of the data-generating system. For example, in the field of neuroimaging, structural magnetic resonance imaging (sMRI) can reveal static anatomical structure of the brain in high resolution, while functional magnetic resonance imaging (fMRI) can capture temporal dynamics at the cost of lower spatial resolution. Jointly analyzing two imaging modalities can uncover cross-modal relationships that cannot be detected by a single imaging modality, providing new insights into structural and functional interactions in the brain and its disorders (Calhoun & Sui, 2016). Recent studies on multi-view BSS assume that observations from different views originate from a shared source variable and distinct additive noise variables (Richard et al., 2020, 2021; Pandeva & Forr´e, 2023; Gresele et al., 2020). 
However, in the context of multimodal fusion, it is more reasonable to assume that each modality is generated by modality-specific latent variables which, in turn, are linked across modalities, rather than a shared set, especially for data modalities that are inherently heterogeneous. To identify linked sources from multiple datasets, a unified framework called multidataset independent subspace analysis (MISA) has been developed (Silva et al., 2020) encompassing multiple linear latent variable models, such as ICA (Comon, 1994), independent vector analysis (IVA) (Kim et al., 2006), and independent subspace analysis (ISA) (Cardoso, 1998). MISA can be applied to analyze... both multi-subject and multimodal neuroimaging data. Built upon MISA, multimodal IVA (MMIVA) (Silva et al., 2021) and multimodal subspace IVA (MSIVA) (Li et al., 2023a) have been recently developed to capture one-to-one and many-to-many latent multimodal associations, respectively. In both cases, the learned linked latents are found to be significantly associated with phenotype measures such as age, sex and psychosis from large-scale multimodal neuroimaging datasets including sMRI and fMRI. Although both MMIVA and MSIVA assume that sources undergo a linear mixing process, it is possible that the true mixing process in neuroimaging data is actually nonlinear, considering nonlinear transformations in modeling and preprocessing stages. For example, the hemodynamic response function that models the relationship between neural activities and fMRI signals is nonlinear; preprocessing steps such as coregistration include nonlinear transformations. Nonlinear methods such as deep neural networks (LeCun et al., 2015) have been increasingly applied for neuroimaging data analysis, showing the potential to learn robust brain-phenotype relationships (Abrol et al., 2021). Here, we ask the question: How can we learn linked and identifiable latent sources that are nonlinearly mixed across multiple data modalities? Built upon MISA and iVAE, we develop a deep multivariate latent variable model, Deep Independent Vector Analysis (DeepIVA), to learn linked and identifiable latent sources from multiple data modalities. In DeepIVA, we utilize the iVAE to identify sources from each modality, and the MISA loss function to align sources across all modalities. We demonstrate that DeepIVA can effectively recover sources compared to iVAE and MISA on multiple synthetic datasets and a large multimodal neuroimaging dataset. Our key contributions are as follows: • We propose a deep latent variable model, DeepIVA, to learn linked and identifiable representations from multimodal data by unifying MISA and iVAE; • We propose multiple evaluation metrics, including segment-specific minimum distance and trimmed mean correlation coefficient, to comprehensively characterize model performance; • We perform a systematic evaluation of model performance and demonstrate that DeepIVA can effectively learn linked and identifiable multimodal sources in multiple simulation configurations (different sources, segments, and observations per segment); • We apply DeepIVA on a large multimodal neuroimaging dataset to identify biologically meaningful sources associated with phenotype measures (age and sex). 
2 METHODS 2.1 DEEP INDEPENDENT VECTOR ANALYSIS Independent Vector Analysis Independent vector analysis (IVA) (Kim et al., 2006) is a multivariate latent variable model which extends the ICA problem from a single dataset to multiple datasets and captures statistical dependence across datasets. IVA aims to identify linked vector sources across $M$ datasets or data modalities ($M > 1$) where each observation $x^m$ can be modeled as a linear mixture $A^m$ of statistically independent sources $s^m$: $$x^m = A^m s^m,$$ where $x^m \in \mathbb{R}^V$ is an observation in the $m$-th dataset or data modality $X^m \in \mathbb{R}^{N \times V}$, $s^m \in \mathbb{R}^C$ is the source corresponding to the observation $x^m$, $A^m \in \mathbb{R}^{V \times C}$ is the invertible linear mixing matrix, $m \in [1, M]$ indexes the dataset or data modality, $N$ is the number of observations, $V$ is the number of features, and $C$ is the number of sources ($C \leq V$). Particularly, in neuroimaging data, the observations are the subjects and the features are the volume pixel (voxel) intensities. The IVA algorithm seeks to identify the sources $\hat{s}^m$ by learning a demixing matrix $W^m$: $\hat{s}^m = W^m x^m$. The IVA problem can be solved by minimizing the following mutual information loss (Adali et al., 2014): $$L_{\text{IVA}} = \sum_{i=1}^{C} \left( \sum_{m=1}^{M} H(s^m_i) - I(s_i) \right) - \sum_{m=1}^{M} \log |\det W^m|,$$ where $H(\cdot)$ denotes the entropy, $I(\cdot)$ denotes the mutual information, $s_i$ is the $i$-th source component vector (SCV) which spans $M$ datasets, $s_i = [s^1_i, s^2_i, \ldots, s^m_i]^T$. The IVA objective aims to minimize the mutual information among SCVs while capturing multimodal dependence among sources within each SCV. Multidataset Independent Subspace Analysis (MISA) (Silva et al., 2020) is a unified framework encompassing multiple linear BSS models including ICA, IVA and ISA. MISA utilizes a multivariate Kotz distribution (Kotz, 1975) for SCV modeling: \[ p_{\psi}(s_i) = \frac{\beta^{\lambda} \Gamma\left(\frac{d_i}{2}\right)}{\pi^{d_i/2} (\det D_i)^{1/2} \Gamma(\nu)} e^{-\lambda(s_i^\top D_i^{-1}s_i)^{\beta}}, \] where \( \psi = [\beta, \lambda, \eta] \) is the set of Kotz hyperparameters, and \( d_i \) is the \( i \)-th SCV dimension, here \( d_i = M \). We define \( \nu = \frac{2\eta + d_i - 2}{2\beta} > 0 \) and \( \alpha = \frac{\Gamma(\nu + \beta^{-1})}{\lambda^{\beta-1} d_i \Gamma(\nu)} \) for brevity, where \( \Gamma(\cdot) \) denotes the gamma function. The positive definite dispersion matrix \( D_i \) is related to the SCV covariance matrix \( \Sigma_{s_i} \) as \( D_i = \alpha^{-1} \Sigma_{s_i} \). The Kotz distribution is highly flexible, as it encompasses the multivariate Gaussian distribution (\( \psi = [1, \frac{1}{2}, 1] \)) and the multivariate Laplace distribution (\( \psi = [\frac{1}{2}, 1, 1] \)). 
The MISA loss (Silva et al., 2020) is defined as the KL divergence between the joint distribution across all SCVs \( p_{\psi}(s) \) and the product of the Kotz distributions from each SCV \( p_{\psi}(s_i) \): \[ L_{\text{MISA}}(W) = D_{\text{KL}}(p_{\psi}(s) || \prod_{i=1}^{C} p_{\psi}(s_i)) \] \[ = - \sum_{m=1}^{M} J_{D_m} + \frac{1}{2} \sum_{i=1}^{C} J_{C_i} - f - \sum_{i=1}^{C} \frac{\mu - 1}{N} \sum_{n=1}^{N} J_{F_{in}} + \sum_{i=1}^{C} \frac{\lambda}{N} \sum_{n=1}^{N} J_{E_{in}}, \] where \( J_{D_m} = \sum_{i=1}^{C} \ln |\sigma_{m_i}| \) and \( \{\sigma_{m_i}\}_{i=1}^{C} \) is the set of non-zero singular values of the demixing matrix \( W_m \), \( J_{C_i} = \ln |\det D_i| \), \( J_{F_i} = \ln(s_i^\top D_i^{-1}s_i) \), \( J_{E_i} = \ln(s_i^\top D_i^{-1}s_i)^{\beta} \), \( f = \sum_{i=1}^{C} \left[ \ln \beta + \nu \ln \lambda + \ln \Gamma\left(\frac{d_i}{2}\right) - \frac{d_i}{2} \ln \pi - \ln \Gamma(\nu) \right] \). Identifiable Variational Autoencoder The original MISA framework only includes linear BSS methods. In practice, we are also often interested in learning nonlinear mixtures, especially for high-dimensional data such as neuroimaging. Recently, an identifiable variational autoencoder (iVAE) (Khemakhem et al., 2020) has been proposed to recover latent sources that are nonlinearly mixed by conditioning latents on auxiliary variables. It has also been proved that iVAE can recover independent conditional latent variables while maximizing the likelihood of generating the data, thus bridging the gap between iVAE and nonlinear ICA (see Appendix F in Khemakhem et al., 2020 for more details). Consider the following conditional unimodal generative model (Khemakhem et al., 2020): \[ x^m = f^m(s^m) + \epsilon^m, \quad m = 1, \ldots, M, \] \[ p_{\theta^m}(x^m, s^m | u) = p_{f^m}(x^m | s^m)p_{\rho^m,\lambda^m}(s^m | u), \] \[ p_{\epsilon^m}(x^m | s^m) = p_{\epsilon^m}(x^m - f^m(s^m)), \] \[ p_{T^m,\lambda^m}(s^m | u) = \prod_{i=1}^{C} \frac{Q_i^m(s^m)}{Z_i^m(u)} \exp \left[ \sum_{j=1}^{k} T_{i,j}^m(s^m)\lambda_{i,j}^m(u) \right], \] where \( x^m \in \mathbb{R}^V \) and \( u \in \mathbb{R}^S \) are observed random variables, \( s^m \in \mathbb{R}^C \) (\( C \leq V \)) is a latent variable, \( \epsilon^m \in \mathbb{R}^V \) is an independent modality-specific noise variable with probability density function \( p_{\epsilon^m}(\epsilon^m) \), \( \theta^m = (f^m, T^m, \lambda^m) \) is a set of parameters of the conditional generative model, and \( f^m : \mathbb{R}^C \to \mathbb{R}^V \) is a nonlinear mixing function. We assume that the prior on the latent variables \( p_{\rho^m}(s^m | u) \) is conditionally independent, and each unimodal source \( s^m \) follows a univariate exponential family distribution given the auxiliary variable \( u \), where \( Q_i^m \) is the base measure, \( Z_i^m(u) \) is the normalizing constant, \( T_{i,j}^m = (T_{i,1}^m, \ldots, T_{i,k}^m) \) are the sufficient statistics, \( \lambda_{i,j}^m(u) = (\lambda_{i,1}^m(u), \ldots, \lambda_{i,k}^m(u)) \) are the parameters depending on \( u \), and \( k \) is the dimension of each sufficient statistic. 
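As a concrete reference point, the sketch below instantiates the exponential-family conditional prior above as a factorial Gaussian whose mean and log-variance are produced by a small network of the auxiliary variable $u$ (sufficient statistics $s$ and $s^2$), together with one Monte-Carlo estimate of the per-modality ELBO of the kind maximized in Equation 9 below. This is a hedged illustration: the module names, hidden sizes, and Gaussian observation model are our assumptions, not the paper's exact architecture.

```python
import math
import torch
import torch.nn as nn

class ConditionalGaussianPrior(nn.Module):
    """Factorial Gaussian p(s | u): a common instantiation of the
    exponential-family conditional prior (sufficient statistics s and s^2)."""
    def __init__(self, aux_dim, n_sources, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(aux_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 2 * n_sources),  # per-source mean and log-variance
        )

    def forward(self, u):
        mean, logvar = self.net(u).chunk(2, dim=-1)
        return mean, logvar


def ivae_elbo_step(encoder, decoder, prior, x, u, sigma_x=0.1):
    """Monte-Carlo ELBO for one modality, assuming Gaussian q(s|x,u) and p(x|s).
    encoder(x, u) -> (mean, logvar) of q(s | x, u); decoder(s) -> mean of p(x | s)."""
    q_mean, q_logvar = encoder(x, u)
    s = q_mean + torch.randn_like(q_mean) * (0.5 * q_logvar).exp()  # reparameterization
    x_hat = decoder(s)
    # Gaussian log-likelihood with fixed observation noise sigma_x.
    recon = -0.5 * (((x - x_hat) ** 2).sum(-1) / sigma_x ** 2
                    + x.shape[-1] * math.log(2 * math.pi * sigma_x ** 2))
    p_mean, p_logvar = prior(u)
    # Closed-form KL(q || p) between two diagonal Gaussians.
    kl = 0.5 * (p_logvar - q_logvar
                + (q_logvar.exp() + (q_mean - p_mean) ** 2) / p_logvar.exp()
                - 1.0).sum(-1)
    return (recon - kl).mean()
```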
Given a dataset \( D = \{(x^{m(n)}, u^{(n)})\}_{n=1}^{N} \) with \( N \) observations sampled from the generative model defined by Equations 5–7,8, the iVAE aims to learn the parameters \( (\theta^m, \phi^m) \) that maximize the data generation likelihood by maximizing the evidence lower bound (ELBO): \[ L_{\text{iVAE}}(\theta^m, \phi^m) = \mathbb{E}_{q_D} \left[ \log p_{\theta^m}(x^m, s^m | u) - \log q_{\phi^m}(s^m | x^m, u) \right], \] where \( q_D \) is the empirical distribution of the dataset \( D \); \( p_{\theta^m}(x^m, s^m | u) \) is the observed conditional joint distribution; \( q_{\phi^m}(s^m | x^m, u) \) is the approximated posterior. The reparameterization trick Figure 1: DeepIVA overview. Step 1: An iVAE is trained to recover sources for each of $M$ data modalities. Step 2: The MISA loss is applied to align sources across $M$ data modalities. Steps 1 and 2 are iterated until convergence. (Kingma & Welling, 2013) is used to sample from a multivariate Gaussian distribution with a diagonal covariance, i.e., $q_{\phi_m}(s^m | x^m, u) = \mathcal{N}(s^m | g^m(x^m, u; \phi_{g^m}), I \sigma^2(x^m, u; \phi_{\sigma}))$. We implement an $L$-layer multilayer perceptron (MLP) as the backbone of the iVAE. The input dimension of the first layer in the encoder is equal to the sum of the feature dimension and the auxiliary information dimension. The input and output dimensions of each intermediate layer are the same, which doubles the feature size ($2V$). The output dimension of the last layer is again equal to the feature dimension. We use Leaky ReLU (Andrew et al., 2013) as the activation function. **Deep Independent Vector Analysis** Consider the following conditional multimodal generative model: $$x^m = f^m(s^m) + \epsilon^m, \quad m = 1, \ldots, M,$$ $$p_\theta(x^1, \ldots, x^M, s^1, \ldots, s^M | u) = \left( \prod_{m=1}^{M} p_{f^m}(x^m | s^m) \right) p_{\theta_s}(s | u),$$ where we define $$p_{f^m}(x^m | s^m) = p_{\epsilon^m}(x^m - f^m(s^m)),$$ $$p_{\theta_s}(s | u) = p_{\theta_s}(s^1, \ldots, s^M | u) = \prod_{i=1}^{C} p_{\theta_{s,i}}(s^i_1, \ldots, s^i_M | u).$$ Integrating $p_{\theta_s}(s | u)$ over $s^m_i$, $\forall i$, $\forall m'$, $m' \neq m$, implies the following (marginal) conditionally independent unimodal latent model: $$p_{\theta_s}(s^1_1, \ldots, s^C_1 | u) = \prod_{i=1}^{C} p_{\theta_{s,i}}(s^i_1 | u).$$ Built upon MISA and iVAE, we propose Deep Independent Vector Analysis (DeepIVA) to learn linked and identifiable latent sources from multiple data modalities defined according to Equations 10–14 (Figure 1). Assuming the unimodal marginals $s^m_i | u$ follow a univariate exponential family distribution, we show that the learned model parameters and sources from DeepIVA are identifiable up to a permutation and component-wise transformation (Appendix A). In DeepIVA, an iVAE is first initiated for each data modality and then a single MISA module is initiated across all data modalities. The iVAE aims to recover sources for each modality and the MISA module aims to identify linkage of sources across modalities. At each epoch, we alternate between training the cross-modal MISA and the unimodal iVAEs. Specifically, we process one segment (segments are defined by the auxiliary variables) from all $M$ modalities at a time, and simultaneously update the encoder parameters for all modalities according to the MISA loss (Equation 4). We then 1 Code will be made publicly available upon acceptance. 
update the iVAE model parameters (both encoder and decoder) using all segments simultaneously, for each of the $M$ modalities separately, following the iVAE loss (Equation 9). The MISA loss term $J_{D_m}$ in DeepIVA is different from the original MISA framework. Specifically, we compute the Jacobian matrix $\mathbf{J}^m$ of the nonlinear transformation parameterized by the MLP encoder $g^m$ for the $m$-th data modality. For computational efficiency, we approximate the determinant of each Jacobian by the determinant of the average Jacobian across samples, $\bar{\mathbf{J}}^m = \frac{1}{N} \sum_{n=1}^{N} \partial g^m(x^n)$. The loss term is defined as $J_{D_m} = \ln |\det \bar{\mathbf{J}}^m|$ if $\bar{\mathbf{J}}^m$ is a square matrix; $J_{D_m} = \sum_{i=1}^{C} \ln |\sigma_i^m|$ where $\{\sigma_i^m\}_{i=1}^{C}$ is the set of non-zero eigenvalues of $\bar{\mathbf{J}}^m \bar{\mathbf{J}}^m^\top$ if $\bar{\mathbf{J}}^m$ is not a square matrix. Additionally, since MISA is not designed to handle auxiliary information, we modify the original encoder architecture to distinguish between data features $x^m$ and auxiliary variables $u$ such that 1) the iVAE updates model parameters with respect to both $x^m$ and $u$ at the input layer, and 2) the MISA updates only those pertaining to $x^m$ but not $u$. The original iVAE model uses a single input layer taking the concatenated $x^m$ and $u$. In DeepIVA, we split this layer into two: one for data features $x^m$ and another for auxiliary variables $u$. The parameters with respect to $u$ will only be updated at the iVAE training step but will remain frozen at the MISA training step. Also, the inputs for the auxiliary variables are set to 0 during MISA training to ensure no influence from the frozen weights. ### 2.2 Synthetic Data Experiment **Synthetic Data** We generate multimodal synthetic datasets including non-stationary multivariate Gaussian sources. Specifically, we simulate a dataset $\mathbf{X} \in \mathbb{R}^{N \times C \times M}$ where $N = O \times S$ is the number of total observations, $O$ is the number of observations per segment, $S$ is the number of segments, $C$ is the number of sources, and $M$ is the number of modalities. Here, we set $M = 2$, $C \in \{5, 10, 15\}$, $S \in \{14, 8, 4\}$, $N \in \{2800, 5600\}$ to simulate real data, leading to 18 configurations in total. These configurations are chosen according to source identification performance in IVA tasks (Li et al., 2023b). For each segment, we generate a covariance matrix $\Sigma \in \mathbb{R}^{2C \times 2C}$ of both modalities, where the within-modality covariance matrices $\Sigma_{m,m} \in \mathbb{R}^{C \times C}$ along the main (block) diagonal are diagonal matrices with values sampled from a uniform distribution $[0.2, 4]$. Then, the between-modality covariance $\Sigma_{m,m'} \in \mathbb{R}^{C \times C}$ ($m \neq m'$) along the off-diagonal block is defined as a diagonal matrix with correlation values sampled from a uniform distribution $[0.7, 0.9]$ and scaled by the source standard deviations according to $\Sigma_{m,m}$. The data is then generated from a multivariate Gaussian distribution $\mathcal{N}(\mu, \Sigma)$, where $\mu \in \mathbb{R}^{2C}$ is sampled from a uniform distribution $[-3, 3]$. The auxiliary variable $u$ is the segment label with a uniform distribution on the integer set $[1, S]$. Latent variables within each modality are conditionally independent given segment labels $u$. Synthetic sources are visualized in Appendix B.1. 
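For concreteness, the snippet below sketches the per-segment source generation just described: diagonal within-modality variances drawn from $[0.2, 4]$, diagonal cross-modality correlations from $[0.7, 0.9]$ scaled by the source standard deviations, and means from $[-3, 3]$. The sampling ranges follow the text; the function name and use of numpy are our own.

```python
import numpy as np

def simulate_linked_sources(n_per_seg, n_seg, C, seed=0):
    """Per-segment linked sources for two modalities (Section 2.2).
    Returns sources s of shape (N, 2C) and segment labels u of shape (N,)."""
    rng = np.random.default_rng(seed)
    sources, labels = [], []
    for seg in range(n_seg):
        var1, var2 = rng.uniform(0.2, 4, C), rng.uniform(0.2, 4, C)
        rho = rng.uniform(0.7, 0.9, C)                 # cross-modality correlations
        cov = np.zeros((2 * C, 2 * C))
        cov[:C, :C], cov[C:, C:] = np.diag(var1), np.diag(var2)
        cross = np.diag(rho * np.sqrt(var1 * var2))    # scaled by source std devs
        cov[:C, C:], cov[C:, :C] = cross, cross
        mu = rng.uniform(-3, 3, 2 * C)
        sources.append(rng.multivariate_normal(mu, cov, size=n_per_seg))
        labels.append(np.full(n_per_seg, seg))
    return np.concatenate(sources), np.concatenate(labels)
```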
A neural network with $L = 2$ layers was employed to act as the nonlinear mixing function $h$. For each layer, a Leaky ReLU (Maas et al., 2013) with a negative slope of 0.2 is used as the activation function. After the last Leaky ReLU layer, we multiply the mixed data from each modality by a different random orthogonal matrix $A$ to obtain the final mixed dataset $\mathbf{X}$.

**Synthetic Data Experiment** For each configuration, we run iVAE, MISA and DeepIVA on the same synthetic data with 10 different random seeds. As for hyperparameters, we set an initial learning rate of 0.001 for the iVAE model. The corresponding MISA learning rate is equal to the iVAE learning rate divided by the number of segments, considering that the MISA model is trained on data from each segment separately. A learning rate scheduler is used to reduce the learning rate by a factor of 0.1 if there is no improvement for 20 epochs. We set the number of maximum contiguous iterations as 10 for both models. For synthetic datasets with 4, 8, and 14 segments, we use a batch size of 140, 160 and 160 for the iVAE model, and a batch size of 200, 350 and 700 for the MISA model, respectively. The model parameters are updated by the Adam optimizer (Kingma & Ba, 2014). Each model is trained for 300 epochs until convergence.

Figure 2: Aggregated RDC matrices across segments from a synthetic dataset (2800 samples, 5 sources, 14 segments). iVAE can correctly identify sources from each modality while MISA can better capture linked sources across both modalities. DeepIVA, which unifies iVAE and MISA, can not only recover unimodal sources, but also capture cross-modal linkage.

### 2.3 Neuroimaging Data Experiment

**Neuroimaging Data** We utilize the UK Biobank dataset (Miller et al., 2016) $\mathbf{X} \in \mathbb{R}^{N \times V \times M}$ including two imaging modalities, T1-weighted sMRI and resting-state fMRI ($M = 2$), from 2907 subjects ($N = 2907$). We preprocess sMRI and fMRI to obtain the gray matter tissue probability segmentation (GM) and amplitude of low frequency fluctuations (ALFF) feature maps, respectively. Each GM or ALFF feature map includes 44318 voxels ($V = 44318$). Here, we use age and sex groups as auxiliary information, assuming that sources within each modality are conditionally independent given the age and sex group. This assumption is based on studies showing the significant impact of age and sex on both brain structure and function (Raz et al., 2004; Good et al., 2001; Ruigrok et al., 2014). We divide neuroimaging data into 14 segments according to sex and age groups such that segments approximately follow a uniform distribution (2 sex groups: male and female; 7 age groups: 46-53, 53-57, 57-61, 61-64, 64-67, 67-70, 70-79 years old).

**Neuroimaging Data Experiment** We first run singular value decomposition on each data modality and choose the number of latent sources $C$ based on variance explained. We next apply multimodal group principal component analysis (MGPCA) on the two data modalities (sMRI and fMRI) to reduce the feature dimension from 44318 voxels to $C$ common sources. After that, the transformation is applied separately to each dataset in order to obtain modality-specific reductions. We next run iVAE, MISA and DeepIVA on the reduced data $\mathbf{X}_r \in \mathbb{R}^{N \times C \times M}$.
During the training process, we use a full batch size of 2907 samples for both iVAE and MISA, an iVAE learning rate of 0.001, a MISA learning rate of $7.14 \times 10^{-5}$, 300 epochs and 10 iterations per epoch. 2.4 Evaluation Metrics We utilize two metrics, the trimmed mean correlation coefficient between the 25th percentile and the 75th percentile (MCC) and the minimum distance (MD), to evaluate model performance. Unlike MCC, which only measures similarity along the main diagonal after permutation, MD also accounts for off-diagonal (dis)similarity. For each metric, we derive four types of coefficients: 1) a coefficient per modality, per segment; 2) an aggregated coefficient per modality; 3) an aggregated coefficient per segment; 4) a final aggregated coefficient across all modalities and segments. We first compute the randomized dependence coefficient (RDC) matrix $\mathbf{R}$ (Lopez-Paz et al., 2013) between the recovered sources and the ground-truth sources for each modality and each segment. Note that we compute a RDC matrix for each segment separately, instead of computing it across all segments by convention. Our segment-specific RDC can more precisely characterize the data within each segment and effectively mitigate the noise introduced when all segments are taken... simultaneously. Next, we aggregate the RDC matrices over segments by taking the mean to obtain an RDC matrix \( \mathbf{R}^m \) per modality (mean aggregation). We also obtain an aggregated RDC matrix \( \mathbf{R}^u \) per segment by taking the minimum across modalities for the entries corresponding to the sorted indices (i.e., the entries along the main diagonal after sorting) from a linear sum assignment problem (LSAP) solver (Crouse, 2016), and then taking the maximum for the remaining entries across all modalities (min-max aggregation). This min-max aggregation penalizes approaches that fail to detect cross-modal linkage, even when unimodal identifiability is high. To compute the final aggregated RDC matrix, we use min-max aggregation of \( \mathbf{R}^m \) across modalities. We use the permuted indices from the modality-specific RDC matrix \( \mathbf{R}^m \) which yields the lowest MD value as the global sorting indices to sort the other RDC matrices. For each sorted RDC matrix \( \mathbf{R}_s \), we compute the MCC, as well as the MD, slightly adjusted from Equation 4 in Nordhausen et al. (2011): \[ MD(\mathbf{R}) = \frac{1}{2}(1 + \frac{1}{d}\text{trace}(\mathbf{RR}^\top) - \frac{2}{d}\text{trace}(\mathbf{R}_s)), \] where \( \mathbf{R} \) is the unsorted matrix, \( \mathbf{R}_s \) is the sorted matrix, and \( d \) is the dimension of \( \mathbf{R} \). 3 RESULTS 3.1 DeepIVA Learns Linked and Identifiable Sources from Synthetic Datasets The aggregated RDC matrices for a synthetic dataset with 2800 samples, 5 sources and 14 segments from iVAE, MISA, and DeepIVA are shown in Figure 2. The aggregated RDC matrices for datasets with 4 and 8 segments are shown in the Appendix B.2, Figures 10 and 11. Columns I and II show the RDC matrices between the ground-truth sources and the recovered sources for the first modality (M1) and the second modality (M2), respectively. If an approach can successfully recover the latent sources that match the ground-truth sources, we anticipate that high RDC values align along the main diagonal after column permutation (same for both modalities). Greater contrast indicates better source identification performance. 
Column III shows the RDC matrices of the recovered sources between two modalities, while column IV shows the RDC matrices of the ground-truth sources between two modalities. If an approach can successfully identify the cross-modal linkage, high RDC values will be aligned along the main diagonal in column III, as the ground-truth linkage pattern in column IV. According to Figure 2, we observe that iVAE can identify sources with high RDC values within each modality (M1 MCC: 0.80, M2 MCC: 0.99; row I, columns I and II) but fail to capture cross-modal linkage (MCC: 0.62; row I, column III). By contrast, MISA reveals stronger cross-modal dependence along the main diagonal, suggesting its ability to detect cross-modal linkage (MCC: 0.65; row II, column III). However, MISA cannot fully recover unique unimodal sources (M1 MCC: 0.70, M2 MCC: 0.67; row II, columns I and II). In the first modality (M1), we note that the recovered SCV 1 shows high dependence with both ground-truth SCVs 2 and 3. DeepIVA, which unifies iVAE and MISA, can not only recover unimodal sources (M1 MCC: 0.91, M2 MCC: 0.92; row III, columns I and II) but also show the strongest cross-modal linkage (MCC: 0.72; row III, column III). The corresponding MD and MCC measures are presented in Figure 3. The iVAE shows the best performance for the per-modality per-segment metrics (low MDs, high MCCs). As these metrics only account for identifiability within each modality and each segment (no aggregation), these results again indicate that iVAE can effectively recover segment-specific unimodal sources. We also note that DeepIVA achieves comparable performance to iVAE, suggesting that DeepIVA can also effectively identify sources. The other measures (coefficients per modality, coefficients per segment, and aggregated coefficients) take not only unimodal identifiability but also cross-segment consistency and cross-modal linkage into account. From these metrics, we observe that DeepIVA exhibits superior performance (lowest MDs, highest MCCs) over the other two approaches in all simulation configurations. The aggregated MDs from DeepIVA are consistently lower than those from iVAE and MISA across different segments. Specifically, for 4, 8, and 14 segments, the aggregated MDs from DeepIVA are 68.62%, 49.59%, and 51.26% lower than those from iVAE, respectively. Similarly, the aggregated MDs from DeepIVA are 46.49%, 51.37%, and 44.41% lower than those from MISA for the corresponding segments. Furthermore, the aggregated MCCs from DeepIVA are consistently higher than those from iVAE and MISA. Notably, the aggregated MCCs from DeepIVA are 332.95%, 31.71%, and 88.25% higher than those from iVAE for 4, 8, and 14 segments, respectively. Likewise, the aggregated MCCs from DeepIVA are 22.93%, 34.29%, and 42.36% higher than those from MISA for the respective segments. Additionally, when comparing performance across datasets with 4, 8 and 14 segments, the configuration of 4 segments and 700 samples per segment shows the best source identification performance for the per-modality per-segment metrics. It suggests that variability in the dataset grows with the number of segments, making the optimization problem harder to solve. We perform a systematic evaluation of model performance across different data-generating configurations by varying both the problem scale (5, 10 and 15 sources) and the sample size (2800 and 5600 samples). The aggregated MD and MCC metrics are shown in Figure 4. 
Remarkably, DeepIVA outperforms iVAE and MISA in every configuration, showcasing its superior performance across all evaluated scenarios. Within each panel, we observe a consistent drop in model performance as the number of latent sources increases, suggesting that the optimization problem becomes more challenging as the latent dimension increases. Across horizontal panels, the DeepIVA performance improves for configurations with 10 and 15 sources when the sample size increases from 2800 to 5600, indicating that a larger sample size is necessary to better recover sources in a harder problem. 3.2 DeepIVA recovers linked neuroimaging sources associated with sex and age We run iVAE, MISA and DeepIVA on a multimodal neuroimaging dataset to evaluate their effectiveness in real data. Results from singular value decomposition of sMRI GM and fMRI ALFF feature maps suggest that top 15 sources can capture a large portion of variance explained in the data (Appendix C.1, Figure 12), and thus we choose to identify 15 common independent sources. The aggregated RDC matrices across segments between two neuroimaging modalities are presented in Figure 5: Aggregated RDC matrices across 14 segments of 15 recovered sources between two imaging modalities. DeepIVA captures cross-modal linkage from multimodal neuroimaging data. Figure 6: DeepIVA linked imaging SCVs associated with sex and age. Row I shows sex effect (blue: male; red: female). Row II shows aging effect (cold color: younger group; warm color: older group). Row III shows fitted linear lines from each segment (blue: male; red: female; light: younger group; dark: older group). Figure 5. Similar to simulations, DeepIVA shows the strongest cross-modal dependence along the main diagonal (MCC: 0.46), suggesting that it can better capture linked sources across two imaging modalities. We then color code the recovered sources from DeepIVA by sex and age groups (Figure 6), and observe noticeable sex clusters (e.g. SCVs 12 and 15) and age clusters (e.g. SCVs 8 and 11), indicating that DeepIVA captures linked sources related to phenotype measures. Furthermore, we fit a separate linear line for observations from each segment. If DeepIVA is capable of identifying consistent linked sources across segments, we should be able to observe that these fitted lines share similar slopes. Indeed, we note that slopes of fitted lines per segment are very consistent for most sources (e.g. SCVs 1 – 9). Color-coded sources from iVAE are less aligned across segments while those from MISA are not associated with sex and age (Appendix C.2 Figures 13 and 14). 4 DISCUSSION Summary We propose a deep multivariate latent variable model, Deep Independent Vector Analysis (DeepIVA), to learn linked and identifiable latent sources that are nonlinearly mixed across multiple data modalities. DeepIVA unifies iVAE and MISA, and exhibits unique advantages from each approach, specifically unimodal source identification from iVAE as well as cross-modal linkage detection from MISA. We demonstrate that DeepIVA can recover linked and identifiable sources from multiple synthetic datasets. Moreover, we show that DeepIVA reveals biologically meaningful linked sources from a large multimodal neuroimaging dataset. Limitations DeepIVA assumes that sources are conditionally independent given the auxiliary variable to achieve identifiability, as it utilizes the iVAE objective. However, there may not be sufficient information about such an auxiliary variable in real data. 
Though we obtain sources related to age and sex groups, the true data-generating process remains unknown in the neuroimaging data. Future Work We plan to extend our proposed method from nonlinear IVA problems to nonlinear ISA problems, aiming to capture source dependence by leveraging higher-dimensional subspaces. It is also worth exploring approaches that do not require side information, such as applying structural sparsity (Zheng et al., 2022), learning latent clusters (Willets & Paige, 2021; Jiang et al., 2016) or using a Gaussian mixture prior and a deep ReLU/Leaky-ReLU network (Kivva et al., 2022). REFERENCES Anees Abrol, Zening Fu, Mustafa Salman, Rogers Silva, Yuhui Du, Sergey Plis, and Vince Calhoun. Deep learning encodes robust discriminative neuroimaging representations to outperform standard machine learning. *Nature communications*, 12(1):1–17, 2021. Tülay Adali, Matthew Anderson, and Geng-Shen Fu. Diversity in independent component and vector analyses: Identifiability, algorithms, and applications in medical imaging. *IEEE Signal Processing Magazine*, 31(3):18–33, 2014. Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In *International conference on machine learning*, pp. 1247–1255. PMLR, 2013. Vince D Calhoun and Jing Sui. Multimodal fusion of brain imaging data: a key to finding the missing link (s) in complex mental illness. *Biological psychiatry: cognitive neuroscience and neuroimaging*, 1(3):230–244, 2016. J-F Cardoso. Multidimensional independent component analysis. In *Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP’98 (Cat. No. 98CH36181)*, volume 4, pp. 1941–1944. IEEE, 1998. Pierre Comon. Independent component analysis, a new concept? *Signal processing*, 36(3):287–314, 1994. David F Crouse. On implementing 2d rectangular assignment algorithms. *IEEE Transactions on Aerospace and Electronic Systems*, 52(4):1679–1696, 2016. Catriona D Good, Ingrid S Johnsrude, John Ashburner, Richard NA Henson, Karl J Friston, and Richard SJ Frackowiak. A voxel-based morphometric study of ageing in 465 normal adult human brains. *Neuroimage*, 14(1):21–36, 2001. Luigi Gresele, Paul K Rubenstein, Arash Mehrjou, Francesco Locatello, and Bernhard Schölkopf. The incomplete rosetta stone problem: Identifiability results for multi-view nonlinear ica. In *Uncertainty in Artificial Intelligence*, pp. 217–227. PMLR, 2020. Aapo Hyvarinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. *Advances in neural information processing systems*, 29, 2016. Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. *Neural networks*, 12(3):429–439, 1999. Aapo Hyvarinen, Hiroaki Sasaki, and Richard Turner. Nonlinear ica using auxiliary variables and generalized contrastive learning. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 859–868. PMLR, 2019. Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. *arXiv preprint arXiv:1611.05148*, 2016. Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In *International Conference on Artificial Intelligence and Statistics*, pp. 2207–2217. PMLR, 2020. Taesu Kim, Torbjørn Eltoft, and Te-Won Lee. 
Independent vector analysis: An extension of ica to multivariate components. In *International conference on independent component analysis and signal separation*, pp. 165–172. Springer, 2006. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, and Bryon Aragam. Identifiability of deep generative models without auxiliary information, 2022.
IcR1OOFzxm
One thing I'm not particularly sure is how is the answer selected in RAISE. When you generate the answer, how do you pick the candidate from the given set? PrAE and ALANS actually only generate the hidden latents and compare in the latent space. Do you compare in the pixel space? Do you think comparing in the hidden space would help further improve performance of RAISE?
Towards Generative Abstract Reasoning: Completing Raven’s Progressive Matrix via Rule Abstraction and Selection Fan Shi Bin Li∗ Xiangyang Xue Shanghai Key Laboratory of Intelligent Information Processing School of Computer Science, Fudan University fshi22@m.fudan.edu.cn {libin,xyxue}@fudan.edu.cn Abstract Endowing machines with abstract reasoning ability has been a long-term research topic in artificial intelligence. Raven’s Progressive Matrix (RPM) is widely used to probe abstract visual reasoning in machine intelligence, where models will analyze the underlying rules and select one image from candidates to complete the image matrix. Participators of RPM tests can show powerful reasoning ability by inferring and combining attribute-changing rules and imagining the missing images at arbitrary positions of a matrix. However, existing solvers can hardly manifest such an ability in realistic RPM tests. In this paper, we propose a deep latent variable model for answer generation problems through Rule Abstraction and SElection (RAISE). RAISE can encode image attributes into latent concepts and abstract atomic rules that act on the latent concepts. When generating answers, RAISE selects one atomic rule out of the global knowledge set for each latent concept to constitute the underlying rule of an RPM. In the experiments of bottom-right and arbitrary-position answer generation, RAISE outperforms the compared solvers in most configurations of realistic RPM datasets. In the odd-one-out task and two held-out configurations, RAISE can leverage acquired latent concepts and atomic rules to find the rule-breaking image in a matrix and handle problems with unseen combinations of rules and attributes. 1 Introduction The abstract reasoning ability is pivotal to abstracting the underlying rules from observations and quickly adapting to novel situations (Cattell [1963]; Zhuo & Kankanahalli [2021]; Malkinski & Mańdziuk [2022a]), which is the foundation of cognitive processes (Gray & Thompson [2004]) like number sense (Dehaene [2011]), spatial reasoning (Byrne & Johnson-Laird [1989]), and physical reasoning (McCloskey [1983]). Intelligent systems may benefit from human-like abstract reasoning when leveraging acquired skills in unseen tasks (Barrett et al. [2018]), for example, generalizing the law of object collision in the simulation environment to real scenes. Therefore, endowing intelligent systems with abstract reasoning ability is the cornerstone of higher-intelligence systems and a long-lasting research topic of artificial intelligence (Chollet [2019]; Malkinski & Mańdziuk [2022b]). Raven’s Progressive Matrix (RPM) is a classical test of abstract reasoning ability for human and intelligent systems (Malkinski & Mańdziuk [2022a]), where participators need to choose one image out of eight candidates to fill in the bottom-right position of a $3 \times 3$ image matrix (Raven & Court [1998]). Previous studies demonstrate that participators can display powerful reasoning ability by directly imagining the missing images (Hua & Kundu [2020]; Pekar et al. [2020]), and answer-generation tasks can more accurately reflect the model’s understanding of underlying rules than answer-selection ones (Mitchell [2021]). For example, some RPM solvers find shortcuts in discriminative tasks by selecting answers according to the bias of candidate sets instead of the given context. 
To solve answer-selection problems, many solvers fill each candidate to the matrix for score estimation and can hardly imagine answers from the given context (Barrett et al. [2018]; Hu et al. [2021]). ∗Corresponding author Some generative solvers have been proposed to solve answer-generation tasks (Pekar et al., 2020; Zhang et al., 2021b,a). They generate solutions for bottom-right images and select answers by comparing the solutions and candidates. However, some generative solvers do not parse interpretable attributes and attribute-changing rules from RPMs (Pekar et al., 2020), and usually introduce artificial priors in the processes of representation learning or abstract reasoning (Zhang et al., 2021b,a). On the other hand, most generative solvers are trained with the aid of candidate sets in training, bringing the potential risk of learning shortcuts (Hu et al., 2021; Benny et al., 2021). Deep latent variable models (DLVMs) (Kingma & Welling, 2013; Sohn et al., 2015) can capture underlying structures of noisy observations via interpretable latent spaces (Edwards & Storkey, 2017; Eslami et al., 2018; Garnelo et al., 2018; Kim et al., 2019). Previous work (Shi et al., 2021) solves generative RPM problems by regarding attributes and attribute-changing rules as latent concepts, which can generate solutions by executing attribute-specific predictive processes. Through conditional answer-generation processes that consider the underlying structure of RPM panels, the distractors are not necessary to train DLVM-based solvers. Although previous work has achieved answer generation in RPMs with continuous attributes, understanding complex discrete rules and abstracting global rules in realistic datasets is still challenging for DLVMs. This paper proposes a DLVM for generative RPM problems through Rule AbstractIon and SElection (RAISE). RAISE encodes image attributes (e.g., object size and shape) as independent latent concepts to bridge high-dimensional images and latent representations of rules. The underlying rules of RPMs are decomposed into subrules in terms of latent concepts and abstracted into atomic rules as a set of learnable parameters shared among RPMs. RAISE picks up proper rules for each latent concept and combines them into the integrated rule of an RPM to generate the answer. The conditional generative process of RAISE indicates how to use the global knowledge of atomic rules to imagine (generate) target images (answers) interpretably. RAISE can automatically parse latent concepts without meta information of image attributes to reduce artificial priors in the learning process. RAISE can be trained under semi-supervised settings, requiring only a small amount of rule annotations to outperform the compared models in non-grid configurations. By predicting the target images at arbitrary positions, RAISE does not require distractors of candidate sets in training and supports generating missing images at arbitrary and even multiple positions. RAISE outperforms the compared solvers when generating bottom-right and arbitrary-position answers in most configurations of datasets. We interpolate and visualize the learned latent concepts and apply RAISE in odd-one-out problems to demonstrate its interpretability. The experimental results show that RAISE can detect the rule-breaking image of a matrix through interpretable latent concepts. 
Finally, we evaluate RAISE on two out-of-distribution configurations where RAISE retains relatively higher accuracy when encountering unseen combinations of rules and attributes. 2 RELATED WORK Generative RPM Solvers. While selective RPM solvers (Zhuo & Kankanhalli, 2021; Barrett et al., 2018; Wu et al., 2020; Hu et al., 2021; Benny et al., 2021; Steenbrugge et al., 2018; Hahne et al., 2019; Zhang et al., 2019b; Zheng et al., 2019; Wang et al., 2019, 2020; Jahrens & Martinetz, 2020) focus on answer-selection problems, generative solvers predict representations or images at missing positions (Pekar et al., 2020; Zhang et al., 2021b,a). Niv et al. extract image representations through Variational AutoEncoder (VAE) (Kingma & Welling, 2013) and design a relation-wise perception process for answer prediction (Pekar et al., 2020). With interpretable scene representations, ALANS (Zhang et al., 2021b) and PrAE (Zhang et al., 2021a) adopt algebraic abstract and symbolic logical systems as the reasoning backends. These generative solvers predict answers at the bottom-right position. LGPP (Shi et al., 2021) and CLAP (Shi et al., 2023) learn hierarchical latent variables to capture the underlying rules of RPMs with random functions (Williams & Rasmussen, 2006; Garnelo et al., 2018), and can generate answers at arbitrary positions on RPMs with continuous attributes. RAISE is a variant of DLVM to realize generative abstract reasoning on realistic RPM datasets with discrete attributes and rules through atomic rule abstraction and selection. Bayesian Inference with Global Latent Variables. DLVMs (Kingma & Welling, 2013; Sohn et al., 2015; Sønderby et al., 2016) can capture underlying structures of high-dimensional data in latent 1 Code is available at https://github.com/FudanVI/generative-abstract-reasoning/tree/main/raise spaces, regard shared concepts as global latent variables, and introduce local latent variables conditioned on the shared concepts to distinguish each sample. GQN (Eslami et al., 2018) captures entire 3D scenes via global latent variables to generate 2D images of unseen perspectives. With object-centric representations (Yuan et al., 2023), global latent variables can explain layouts of scenes (Jiang & Ahn, 2020) or object appearances for multiview scene generation (Chen et al., 2021; Kabra et al., 2021; Yuan et al., 2022; Gao & Li, 2023; Yuan et al., 2024). Global concepts can describe common features of elements in data with exchange invariance like sets (Edwards & Storkey, 2017; Hewitt et al., 2018; Giannone & Winther, 2021). NP family (Garnelo et al., 2018; Kim et al., 2019; Foong et al., 2020) constructs different function spaces through global latent variables. DLVMs can generate answers at arbitrary positions of an RPM by regarding the concept-changing rules as global concepts (Shi et al., 2021, 2023). RAISE holds a similar idea of modeling underlying rules as global concepts. Unlike previous works, RAISE attempts to abstract the atomic rules shared among RPMs. 3 Method In this paper, an RPM problem is \((x_S, x_T)\) where \(x_S\) and \(x_T\) are mutually exclusive sets of images, \(S\) indexes the given context images, and \(T\) indexes the target images to predict (\(T\) can index multiple images). The objective of RAISE is to maximize the log-likelihood \(\log p(x_T | x_S)\) while learning atomic rules shared among RPMs. In the following sections, we will introduce the generative and inference processes of RAISE that can abstract and select atomic rules in the latent space. 
3.1 Conditional Generation The generative process is the foundation of answer generation, including the stages of concept learning, abstract reasoning, and image generation. Concept Learning. RAISE extracts interpretable image representations for abstract reasoning and image generation in the concept learning stage. Previous studies have emphasized the role of abstract object representations in the abstract reasoning of infants (Kahneman et al., 1992; Gordon & Irwin, 1996) and the benefit of disentangled representations for RPM solvers (Van Steenkiste et al., 2019), which reflect the compositionality of human cognition (Lake et al., 2011). RAISE realizes compositionality by learning latent representations of attributes (Shi et al., 2021, 2023). RAISE regards image attributes as latent concepts and decomposes the rules of RPMs into atomic rules based on the latent concepts. Since the description of attributes is not provided in training, the latent concepts learned by RAISE are not exactly the same as the realistic attributes defined in the dataset. RAISE extracts \(C\) context latent concepts \(z_s = \{z_{sc}\}_{c=1}^C\) for each context image \(x_s (s \in S)\): \[ \mu^{1:C}_s = g_{\theta}^{\text{enc}}(x_s), \quad s \in S, \] \[ z_{sc} \sim N(\mu_{sc}, \sigma_z^2 I), \quad c = 1, ..., C, \quad s \in S. \] (1) The encoder \(g_{\theta}^{\text{enc}}\) outputs the mean of context latent concepts. The standard deviation is controlled by a hyperparameter \(\sigma_z\) to keep training stability. Each context image is processed through \(g_{\theta}^{\text{enc}}\). independently, making it possible to extract latent concepts for any set of input images. In this stage, the encoder does not consider any relationships between images and focuses on concept learning. **Abstract Reasoning.** As illustrated in Figure 1b, RAISE predicts target latent concepts \( z_T \) from context latent concepts \( z_S \) in the abstract reasoning stage, involving rule abstraction, rule selection, and rule execution processes. To abstract atomic rules and build the global knowledge set, RAISE adopts \( K \) global learnable parameters \( \psi = \{ \psi_k \}_{k=1}^K \), each indicating an atomic rule shared among RPMs. In rule selection, we use categorical indicators \( \{ r_c \}_{c=1}^C \) (\( r_c = 1, \ldots, K \)) to select a proper rule out of \( \psi \) for each concept. Inferring the indicators from \( z_S \) correctly is critical to rule selection. RAISE creates a \( 3 \times 3 \) representation matrix \( Z_c \) for each concept, initializing the representations of context images with the corresponding context latent concepts and those of target images with zero vectors. Then RAISE extracts the row-wise and column-wise representations: \[ p_i^c = f_{\phi_1}^{row}(Z_{i:1;3}^c), \quad q_i^c = f_{\phi_2}^{col}(Z_{1:3;i}^c), \quad i = 1, 2, 3, \quad c = 1, \ldots, C. \] RAISE averages the representations via \( \bar{p}_c = (p_1^c + p_2^c + p_3^c)/3 \) and \( \bar{q}_c = (q_1^c + q_2^c + q_3^c)/3 \) to obtain integrated representations of row and column rules. We concatenate \( \bar{p}_c \) and \( \bar{q}_c \) to acquire the probability of selecting atomic rules out of the global knowledge set: \[ r_c \sim \text{Categorical}(\pi_c), \quad \pi_1^c, \ldots, \pi_K^c = f_{\phi_3}^{\text{ind}}(\bar{p}_c, \bar{q}_c), \quad c = 1, \ldots, C. \] We denote the learnable parameters as \( \phi = \{ \phi_1, \phi_2, \phi_3 \} \) for convenience. 
In rule execution, RAISE selects and executes an atomic rule on each concept to predict the target latent concepts: \[ \mu_t^c = h(Z_c; \psi_r), \quad c = 1, \ldots, C, \] \[ z_t^c \sim N(\mu_t^c, \sigma_z^2 I), \quad t \in T, \quad c = 1, \ldots, C. \] RAISE instantiates \( h \) by selecting the \( r_c \)-th learnable parameters from the global knowledge set \( \psi \) to convert the zero-initialized target representations in \( Z_c \) into the mean of target latent concepts. As in the concept learning stage, the standard deviation of target latent concepts is controlled by \( \sigma_z \). \( h \) consists of convolution layers to aggregate information from neighbor context latent concepts on the matrix and update target latent concepts. Each learnable parameters in \( \psi \) indicates a type of atomic rule. See Appendix C.1 for the detailed description of \( h \). **Image Generation.** Finally, RAISE decodes the target latent concepts predicted in the abstract reasoning stage into the mean of target images: \[ x_t \sim N(A_t, \sigma_x^2 I), \quad A_t = g_{\phi}^{\text{dec}}(z_{t:1:C}), \quad t \in T. \] RAISE generates each target image independently to make the decoder focus on image reconstruction. We control the noise of target images by setting the standard deviation \( \sigma_x \) as a hyperparameter. According to Figure 1a, we decompose the conditional generative process as \[ p_\Theta(h, x_T | x_S) = \prod_{t \in T} p_\phi(x_t | z_t) \prod_{c=1}^C \left( p_\psi(z_T^c | r_c, z_S^c) p_\phi(r_c | z_S^c) \prod_{s \in S} p_\theta(z_s^c | x_s) \right) \] where \( h \) is the set of all latent variables and \( \Theta = \{ \theta, \phi, \psi, \varphi \} \) are learnable parameters of RAISE. ### 3.2 Variational Inference RAISE approximates the untractable posterior with a variational distribution \( q(h | x_T, x_S) \) [Kingma & Welling, 2013], which consists of the following distributions. \[ q(z_s^c | x_s) = N(\mu_s^c, \sigma_z^2 I), \quad s \in S, \quad c = 1, \ldots, C, \] \[ q(z_t^c | x_t) = N(\mu_t^c, \sigma_z^2 I), \quad t \in T, \quad c = 1, \ldots, C, \] \[ q(r_c | z_S^c, z_T^c) = \text{Categorical}(\pi_{1:K}^c), \quad c = 1, \ldots, C. \] Since RAISE shares the encoder between the generative and inference processes to reduce the model parameters, we compute context latent concepts \( \mu_{1:C}^s \) and target latent concepts \( \mu_{1:C}^t \) via the same process described in Equation 1. In the inference process, RAISE reformulates the variational distribution of the categorical indicator \( r_c \) as \( q(r_c | z_S^c, z_T^c) \propto p(z_T^c | r_c, z_S^c) p(r_c | z_S^c) \). That is, RAISE predicts the prior probabilities $\pi_{1:K}^c$ of $p(r^c|z_S^c)$ from the context latent concepts $z_S^c$ and compute the likelihood $p(z_T^c|r^c,z_S^c)$ by executing the atomic rule $r^c$ ($r^c = 1, \cdots, K$) on $z_S^c$. In this way, we can estimate the variational distribution $q(r^c|z_S^c,z_T^c)$ by considering both the prior probabilities and the likelihoods of $K$ atomic rules, which reduces the risk of model collapse (e.g., always selecting one atomic rule from $\psi$). We provide more details of $q(r^c|z_S^c,z_T^c)$ in Appendix A.1. 
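To make the reformulation \( q(r^c | z_S^c, z_T^c) \propto p(z_T^c | r^c, z_S^c) p(r^c | z_S^c) \) concrete, the hedged sketch below combines the prior logits over atomic rules with the Gaussian likelihood of the encoded target concepts under each rule's prediction. The function signature and the per-rule prediction tensor are assumptions introduced only for illustration.

```python
import torch

def rule_posterior(prior_logits, z_target_enc, z_target_pred_per_rule, sigma_z=0.1):
    """Combine the prior p(r|z_S) with the likelihood p(z_T|r, z_S) of each atomic rule.

    prior_logits:           (K,) logits of p(r|z_S) from the rule selector.
    z_target_enc:           (|T|, d) target concepts encoded directly from target images.
    z_target_pred_per_rule: (K, |T|, d) target concepts predicted by executing each rule on z_S.
    Returns unnormalized logits of q(r|z_S, z_T).
    """
    normal = torch.distributions.Normal(z_target_pred_per_rule, sigma_z)
    # Sum Gaussian log-likelihoods over target positions and concept dimensions per rule.
    log_lik = normal.log_prob(z_target_enc.unsqueeze(0)).sum(dim=(-1, -2))  # (K,)
    return prior_logits + log_lik

# Example with K=4 rules, one target image, 16-dim concepts.
q_logits = rule_posterior(torch.randn(4), torch.randn(1, 16), torch.randn(4, 1, 16))
q = torch.softmax(q_logits, dim=-1)  # posterior over atomic rules for one concept
```

Weighting each rule by how well it explains the encoded target concepts, rather than relying on the prior alone, is what reduces the risk of collapsing onto a single rule in the global knowledge set.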
Letting $\Psi = \{\theta, \phi, \psi\}$, we factorize the variational distribution as $$q_\Psi(h|x_T,x_S) = \prod_{c=1}^{C} q_{\phi,\psi}(r^c|z_S^c,z_T^c) \prod_{s \in S} q_\theta(z_s^c|x_s) \prod_{t \in T} q_\theta(z_t^c|x_t).$$ (8) 3.3 Parameter Learning We update the parameters of RAISE by maximizing the evidence lower bound (ELBO) of the log-likelihood $\log p(x_T|x_S)$ (Kingma & Welling [2013]). With the generative process $p_\Theta$ and the variational distribution $q_\Psi$ defined in Equations 6 and 8, the ELBO is ($q$ denotes the variational distribution, and we omit the parameter symbols $\Theta$ and $\Psi$ for convenience) $$L = E_{q_\Psi(h|x_T,x_S)} \left[ \log \frac{p_\Theta(h,x_T|x_S)}{q_\Psi(h|x_T,x_S)} \right]$$ $$= \sum_{t \in T} E_q \left[ \log p(x_t|z_t) \right] - \sum_{c=1}^{C} E_q \left[ \log \frac{q(z_T^c|x_T)}{p(z_T^c|r^c,z_S^c)} \right] - \sum_{c=1}^{C} E_q \left[ \log \frac{q(r^c|z_S^c,z_T^c)}{p(r^c|z_S^c)} \right].$$ (9) The reconstruction loss $L_{rec}$ measures the quality of the reconstruction images. The concept regularizer $R_{pred}$ estimates the distance between the predicted target concepts and the concepts directly encoded from target images. Minimizing $R_{pred}$ will promote RAISE to generate correct predictions in the space of latent concepts. The rule regularizer $R_{rule}$ expects RAISE to select the same rules when given different sets of images in an RPM. The variational posterior $q(r^c|z_S^c,z_T^c)$ conditioned on the entire matrix and the prior $p(r^c|z_S^c)$ conditioned on the context images are expected to have similar probabilities. The detailed derivation of the ELBO is provided in Appendix A.2. The abstraction and selection of atomic rules rely on the acquired latent concepts. Therefore, RAISE introduces auxiliary rule annotations to improve the quality of latent concepts and stabilize the learning process. We denote rule annotations as $v = \{v_a\}_{a=1}^{A}$ where $A$ is the number of ground truth attributes and $v_a$ indicates the type of rules on the $a$-th attribute. For example, $v = [2, 1, 3]$ means that the attributes follow the second, first, and third rules respectively. RAISE does not leverage the meta-information of attributes in training since the rule annotations only inform the type of rule on each attribute. The meaning of attributes is automatically learned by RAISE for accurate rule abstraction and selection. One key to guiding concept learning with rule annotations is determining the correspondence between latent concepts and attributes. RAISE introduces a $A \times C$ binary matrix $M$ where $M_{a,c} = 1$ indicates that the $a$-th attribute is encoded in the $c$-th latent concept. Therefore, the rule predicted on the $c$-th latent concept is supervised by the rule annotation $v_a$, and the auxiliary loss measures distances between the predicted and ground truth types of rules: $$L_{sup} = \frac{1}{2} \sum_{a=1}^{A} \sum_{c=1}^{C} M_{a,c} \log (\pi_{v_a}^c + \tilde{\pi}_{v_a}^c).$$ (10) The auxiliary loss $L_{sup}$ is the log-likelihood of the categorical distributions considering the attribute-concept correspondence $M$. The binary matrix $M$ is derived by solving the following assignment problem on a batch of RPM samples: $$\arg \max_M L_{sup} \text{ s.t. } \begin{cases} \sum_{a=1}^{A} M_{a,c} = 1, & a = 1, \ldots, A, \\ \sum_{a=1}^{A} M_{a,c} = 0 \text{ or } 1, & c = 1, \ldots, C, \\ M_{a,c} = 0 \text{ or } 1, & a = 1, \ldots, A, \quad c = 1, \ldots, C. 
\end{cases}$$ (11) Equation 11 allows the existence of redundant latent concepts, which can be solved using the modified Jonker-Volgenant algorithm (Crouse [2016]). In this case, the training objective becomes $$\arg \max_\Theta L_{rec} - \beta_1 R_{pred} - \beta_2 R_{rule} + \beta_3 L_{sup}$$ (12) Table 1: The accuracy (%) of selecting bottom-right answers on different configurations (i.e., Center, L-R, etc) of RAVEN/I-RAVEN. The table displays the average results of ten trials. | Models | Average | Center | L-R | U-D | O-IC | O-IG | 2×2Grid | 3×3Grid | |------------|---------|--------|-------|-------|-------|-------|---------|---------| | GCA-I | 12.0/24.1 | 14.0/30.2 | 7.9/22.4 | 7.5/26.9 | 13.4/32.9 | 15.5/25.0 | 11.3/16.3 | 14.5/15.3 | | GCA-R | 13.8/27.4 | 16.6/34.5 | 9.4/26.9 | 6.9/28.0 | 17.3/37.8 | 16.7/26.0 | 11.7/19.2 | 18.1/19.3 | | GCA-C | 32.7/41.7 | 37.3/51.8 | 26.4/44.6 | 21.5/42.6 | 30.2/46.7 | 33.0/35.6 | 37.6/38.1 | 43.0/32.4 | | ALANS | 54.3/62.8 | 42.7/63.9 | 42.4/60.9 | 46.2/65.6 | 49.5/64.8 | 53.6/52.0 | 70.5/66.4 | 75.1/65.7 | | PrAE | 80.0/85.7 | 97.3/99.9 | 96.2/97.9 | 96.7/97.7 | 95.8/98.4 | 68.6/76.5 | 82.0/84.5 | 23.2/45.1 | | LGPP | 6.4/16.3 | 9.2/20.1 | 4.7/18.9 | 5.2/21.2 | 4.0/13.9 | 3.1/12.3 | 8.6/13.7 | 10.4/13.9 | | ANP | 7.3/27.6 | 9.8/47.4 | 4.1/20.3 | 3.5/20.7 | 5.4/38.2 | 7.6/36.1 | 10.0/15.0 | 10.5/15.6 | | CLAP | 17.5/32.8 | 30.4/42.9 | 13.4/35.1 | 12.2/32.1 | 16.4/37.5 | 9.5/26.0 | 16.0/20.1 | 24.3/35.8 | | Transformer| 40.1/64.0 | 98.4/99.2 | 67.0/91.1 | 60.9/86.6 | 14.5/69.9 | 13.5/57.1 | 14.7/25.2 | 11.6/18.6 | | RAISE | 90.0/92.1 | 99.2/99.8 | 98.5/99.6 | 99.3/99.9 | 97.6/99.6 | 89.3/96.0 | 68.2/71.3 | 77.7/78.7 | where $\beta_1$, $\beta_2$, and $\beta_3$ are hyperparameters. RAISE also supports semi-supervised training settings. For samples that do not provide rule annotations, RAISE can set $\beta_3 = 0$ and update parameters via the unsupervised part $L_{rec} - \beta_1 R_{pred} - \beta_2 R_{rule}$. 4 EXPERIMENTS In the experiments, we compare the performance of RAISE with other generative solvers by generating answers at the bottom right and, more challenging, arbitrary positions. Then we conduct experiments to visualize the latent concepts learned from the dataset. Finally, RAISE carries out the odd-one-out task and is tested in held-out configurations to illustrate the benefit of learning latent concepts and atomic rules in generative abstract reasoning. Datasets. The models in the experiments are evaluated on the RAVEN (Zhang et al., 2019a) and I-RAVEN (Hu et al., 2021) datasets having seven image configurations (e.g., scenes with centric objects or object grids) and four basic rules. I-RAVEN follows the same configurations as RAVEN and reduces the bias of candidate sets to resist the shortcut learning of models (Hu et al., 2021). See Appendix B for details of datasets. Compared Models. In the task of bottom-right answer selection, we compare RAISE with the powerful generative solvers ALANS (Zhang et al., 2021b), PrAE (Zhang et al., 2021a), and the model proposed by Niv et al. (called GCA for convenience) (Pekar et al., 2020). RAISE selects the candidate closest to the predicted result in the latent space as the answer. We apply three strategies of answer selection in GCA: selecting the candidate having the smallest pixel difference to the prediction (GCA-I), having the smallest difference in the representation space (GCA-R), and having the highest panel score (GCA-C). 
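For reference, the snippet below sketches how a generative solver such as RAISE can be scored on answer-selection problems: the candidate whose latent concepts are closest to the predicted target concepts is chosen. The squared Euclidean distance and the stand-in encoder are assumptions made for illustration; RAISE's actual distance measure may differ.

```python
import torch

def select_answer(predicted_concepts, candidate_images, encoder):
    """Pick the candidate closest to the predicted target in the latent space.

    predicted_concepts: (d,) concatenated latent concepts predicted for the missing image.
    candidate_images:   (N, C, H, W) candidate answers.
    encoder:            maps images to latent concepts of shape (N, d).
    """
    candidate_concepts = encoder(candidate_images)
    distances = ((candidate_concepts - predicted_concepts) ** 2).sum(dim=-1)
    return distances.argmin().item()  # index of the selected answer

# Example with a stand-in encoder that flattens and linearly projects the images.
proj = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 32))
choice = select_answer(torch.randn(32), torch.rand(8, 1, 64, 64), proj)
```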
Since these generative solvers cannot generate non-bottom-right answers, we take Transformer (Vaswani et al., 2017), ANP (Kim et al., 2019), LGPP (Shi et al., 2021), and CLAP (Shi et al., 2023) as baseline models to evaluate the ability to generate answers at arbitrary positions. We provide more details in Appendix C. Training and Evaluation Settings. For non-grid layouts, RAISE is trained under semi-supervised settings by using 5% rule annotations. RAISE leverages 20% rule annotations on O-IG and full rule annotations on 2×2Grid and 3×3Grid. The powerful generative solvers use full rule annotations and are trained and tested on each configuration respectively. We compare RAISE with them to illustrate the acquired bottom-right answer selection ability of RAISE under semi-supervised settings. The baselines can generate answers at arbitrary positions but cannot leverage rule annotations since they do not explicitly model the category of rules. We compare RAISE with the baselines to illustrate the benefit of learning latent concepts and atomic rules for generative abstract reasoning. Since the training of RAISE and the baselines do not require the candidate sets, and RAVEN/I-RAVEN only differ in the distribution of candidates, we train RAISE and the baselines on RAVEN and test them on RAVEN/I-RAVEN directly. See Appendix C for detailed training and evaluation settings. 4.1 BOTTOM-RIGHT ANSWER SELECTION This experiment conducts classical RPM tests that require models to find the missing bottom-right images in eight candidates. Table 1 illustrates RAISE’s outstanding generative abstract reason- Figure 2: Selection accuracy at arbitrary positions. The selection accuracy of RAISE (purple), Transformer (orange), CLAP (green), ANP (blue), and LGPP (black) in arbitrary positions. The x-axis of each plot indicates the number of candidates, and the y-axis is the selection accuracy. Figure 3: Answer generation at arbitrary positions. The prediction results on RAVEN are highlighted (red box) to illustrate the arbitrary-position generation ability. Due to the existence of noise, some predictions may differ from the original sample, but they still follow the correct rules. RAISE outperforms the compared generative solvers in most configurations of RAVEN/I-RAVEN, even if the distractors in candidate sets are not used in training. All the powerful generative solvers take full rule annotations for training, while RAISE in non-grid configurations only requires a small amount of rule annotations (5% samples) to achieve high selection accuracy. RAISE attains the highest selection accuracy compared to the baselines which can generate answers at arbitrary positions. By comparing the results on RAVEN/I-RAVEN, we find that generative solvers are more likely to have accuracy improvement on I-RAVEN, because I-RAVEN generates distractors that are less similar to correct answers to avoid significant biases in candidate sets. For grid-shaped configurations, we found that the noise in datasets will significantly influence the model performance. By removing the noise in object attributes, RAISE achieves high selection accuracy on three grid-shaped configurations using only 20% rule annotations. See Appendix D.1 for the detailed experimental results. 4.2 Answer Selection at Arbitrary Positions The above generative solvers can hardly generate answers at non-bottom-right positions. In this experiment, we probe the ability of RAISE and baselines to generate answers at arbitrary positions. 
We first generate additional candidate sets in the experiment because RAVEN and I-RAVEN do not provide candidate sets for non-bottom-right images. To this end, we sample a batch of RPMs from the dataset and split the RPMs into target and context images in the same way. For each matrix, we Figure 4: Panel (a) shows the interpolation results of latent concepts and the correspondence between the concepts and attributes. Panel (b) provides an example of RPM-based odd-one-out tests and displays the prediction deviations in concepts of each image. Panel (c) illustrates the strategy to split rule-attribute combinations in held-out configurations. use the target images of other $N_c$ samples in the batch as distractors to generate a candidate set with $N_c + 1$ entries. This strategy can adapt to the missing images at arbitrary and even multiple positions, and we can easily control the difficulty of answer selection through the number of distractors. Figure 2 displays the accuracy of RAISE and baselines when generating answers at arbitrary and multiple positions. RAISE maintains high accuracy in all configurations. Although Transformer has higher accuracy than the other three baselines, especially in non-grid scenes, the prediction accuracy drops significantly on $2 \times 2$ Grid and $3 \times 3$ Grid. Figure 5 provides the qualitative prediction results on RAVEN. It is difficult for ANP and LGPP to generate clear answers. CLAP can generate answers with partially correct attributes in simple cases (e.g., CLAP generates an object with the correct color but the wrong size and shape in the sample of Center). RAISE produces high-quality predictions and can solve RPMs with multiple missing images. By predicting multiple missing images at arbitrary positions, the qualitative results intuitively reveal the in-depth generative abstract reasoning ability in models, which the bottom-right answer generation task does not involve. 4.3 Latent Concepts Latent concepts bridge atomic rules and high-dimensional observations. Figure 4a visualizes the latent concepts learned from Center and O-IC by traversing concept representations of an image in the latent space. If the concepts are well decomposed, decoding the interpolated concept representations will change one attribute of the original image. Besides observing visualization results, we can find the correspondence between concepts and attributes with the aid of the binary matrix $M$. As shown in Figure 4a, RAISE can automatically set some redundant concepts when there are more concepts than attributes. (e.g., the first concept of Center). The visualization results illustrate the concept learning ability of RAISE, which is the foundation of abstracting and selecting atomic rules shared among RPMs. 4.4 Odd-One-Out in RPM In odd-one-out tests, RAISE attempts to find the rule-breaking image in a panel. To generate RPM-based odd-one-out problems, we replace the bottom-right image of an RPM with a random distractor in the candidate set. Taking Figure 4b as an example, we change the object color from white to black by replacing the bottom-right image. RAISE takes each image in an RPM as the target, gets the prediction results, and computes the prediction error on latent concepts. The right panel of Table 2: Selection accuracy (%) on two held-out configurations. 
| OOD Settings | RAISE | PrAE | ALANS | GCA-C | GCA-R | GCA-I | Transformer | ANP | LGPP | CLAP-NP | |--------------------|-------|------|-------|-------|-------|-------|-------------|-----|------|---------| | Center-Held-Out | 99.2 | **99.8** | 46.9 | 35.0 | 14.4 | 12.1 | 12.1 | 10.6| 8.6 | 19.5 | | O-IC-Held-Out | **56.1** | 40.5 | 33.4 | 10.1 | 5.3 | 4.9 | 15.8 | 7.5 | 4.6 | 8.6 | Figure 4b shows the concept-level prediction errors, and we find that the 7th concept of the bottom-right image deviates the most. According to Figure 4a, the 7th concept on Center represents the attribute Color, which is indeed the attribute modified when constructing the test. The last row has relatively higher concept distances since the incorrect image tends to influence the accuracy of answer generation at the most related positions. Because of the independent latent concepts and concept-specific reasoning processes of RAISE, the high concept distances only appear in the 7th concept. By solving RPM-based odd-one-out problems, we explain how concept-level predictions improve the interpretability of answer selection. Although RAISE is tasked with generating answers, it can handle answer-selection problems by excluding candidates violating the underlying rules. 4.5 Held-Out Configurations To explore the abstract reasoning ability on out-of-distribution (OOD) samples, we construct two held-out configurations based on RAVEN (Barrett et al., 2018) as illustrated in Figure 4. (1) Center-Held-Out keeps the samples of Center following the attribute-rule tuple (Size, Constant) as test samples, and the remaining constitute the training and validation sets. (2) O-IC-Held-Out keeps the samples of O-IC following the attribute-rule tuples (Type In, Arithmetic), (Size In, Arithmetic), (Color In, Arithmetic), (Type In, Distribute Three), (Size In, Distribute Three), and (Color In, Distribute Three) as test samples. The results given in Table 2 indicate that RAISE maintains relatively higher selection accuracy when encountering unseen combinations of attributes and rules. RAISE learns interpretable latent concepts to conduct concept-specific reasoning, by which the learning of rules and concepts are decoupled. Thus RAISE can tackle OOD samples via compositional generalization. Although RAISE has not ever seen the attribute-rule tuple (Size, Constant) in training, it can still apply the atomic rule Constant learned from other attributes to Size in the test phase. 5 Conclusion and Discussion This paper proposes a generative RPM solver RAISE based on conditional deep latent variable models. RAISE can abstract atomic rules from PRMs, keep them in the global knowledge set, and predict target images by selecting proper rules. As the foundation of rule abstraction and selection, RAISE learns interpretable latent concepts from images to decompose the integrated rules of RPMs into atomic rules. Qualitative and quantitative experiments show that RAISE can generate answers at arbitrary positions and outperform baselines, showing outstanding generative abstract reasoning. The odd-one-out task and held-out configurations verify the interpretability of RAISE in concept learning and rule abstraction. By using prediction deviations on concepts, RAISE can find the position and concept that breaks the rules in odd-one-out tasks. By combining the learned latent concepts and atomic rules, RAISE can generate answers on samples with unseen attribute-rule tuples. Limitations and Discussion. 
The noise in data is a challenge for the models based on conditional generation. In the experiment, we find that the noise of object attributes in grids will influence the selection accuracy of generative solvers like RAISE and Transformer on $2 \times 2$ Grid. The candidate sets can provide clearer supervision in training to reduce the impact of noise. Deep latent variable models (DLVMs) can potentially handle noise in RPMs since RAISE works well on Center and O-IC with noisy attributes like Rotation. In future works, exploring appropriate ways to reduce the influence of noise is the key to realizing generative abstract reasoning in more complicated scenes. For generative solvers that do not rely on candidate sets or are completely unsupervised, whether using datasets with large amounts of noise benefits the acquisition of generative abstract reasoning ability is worth exploring since the noise can make a generative problem have numerous solutions (e.g., PGM (Barrett et al., 2018)). In Appendices B.2 and D.1, we conduct an initial experiment and discussion on the impact of noise, but a more systematic and in-depth study will be carried out in the follow-up works. Some recent neural approaches attempt to solve similar systematic generalization problems (Rahaman et al., 2021; Lake & Baroni, 2023). We provide a discussion on the Bayesian and neural approaches of concept learning in Appendix E. ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China (No.62176060) and the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning. REFERENCES David Barrett, Felix Hill, Adam Santoro, Ari Morcos, and Timothy Lillicrap. Measuring abstract reasoning in neural networks. In *International conference on machine learning*, pp. 511–520. PMLR, 2018. Yaniv Benny, Niv Pekar, and Lior Wolf. Scale-localized abstract reasoning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12557–12565, 2021. Ruth MJ Byrne and Philip N Johnson-Laird. Spatial reasoning. *Journal of memory and language*, 28(5):564–575, 1989. Raymond B Cattell. Theory of fluid and crystallized intelligence: A critical experiment. *Journal of educational psychology*, 54(1):1, 1963. Chang Chen, Fei Deng, and Sungjin Ahn. Roots: Object-centric representation and rendering of 3d scenes. *The Journal of Machine Learning Research*, 22(1):11770–11805, 2021. François Chollet. On the measure of intelligence. *arXiv preprint arXiv:1911.01547*, 2019. David F Crouse. On implementing 2d rectangular assignment algorithms. *IEEE Transactions on Aerospace and Electronic Systems*, 52(4):1679–1696, 2016. Stanislas Dehaene. *The number sense: How the mind creates mathematics*. OUP USA, 2011. Harrison Edwards and Amos Storkey. Towards a neural statistician. In *International Conference on Learning Representations*, 2017. SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, et al. Neural scene representation and rendering. *Science*, 360(6394):1204–1210, 2018. Andrew Foong, Wessel Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, and Richard Turner. Meta-learning stationary stochastic process prediction with convolutional neural processes. *Advances in Neural Information Processing Systems*, 33:8284–8295, 2020. Chengmin Gao and Bin Li. 
Time-conditioned generative modeling of object-centric representations for video decomposition and prediction. In *Proceedings of the Conference on Uncertainty in Artificial Intelligence*, pp. 613–623, 2023. Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J Rezende, SM Eslami, and Yee Whye Teh. Neural processes. In *ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models*, 2018. Giorgio Giannone and Ole Winther. Hierarchical few-shot generative models. In *Fifth Workshop on Meta-Learning at the Conference on Neural Information Processing Systems*, 2021. Robert D Gordon and David E Irwin. What’s in an object file? evidence from priming studies. *Perception & Psychophysics*, 58(8):1260–1277, 1996. Jeremy R Gray and Paul M Thompson. Neurobiology of intelligence: science and ethics. *Nature Reviews Neuroscience*, 5(6):471–482, 2004. Lukas Hahne, Timo Lüddecke, Florentin Wörgötter, and David Kappel. Attention on abstract visual reasoning. *arXiv preprint arXiv:1911.05990*, 2019.
8sKcAWOf2D
The section detailing the use of CMAP to patch activations from Goat-7B to Llama-7B lacks clarity, particularly in how the impact of patching the query, key, and value (QKV) activations on the model's performance is validated. The rationale for why patching the QK-circuit of the Value Fetcher heads and the value vectors of the Position Transmitter heads yields the largest performance enhancement is not well explained.
Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking Nikhil Prakash¹* Tamar Rott Shaham² Tal Haklay³ Yonatan Belinkov³ David Bau¹ ¹Northeastern University ²MIT CSAIL ³Technion – IIT Abstract Fine-tuning on generalized tasks such as instruction following, code generation, and mathematics has been shown to enhance language models’ performance on a range of tasks. Nevertheless, explanations of how such fine-tuning influences the internal computations in these models remain elusive. We study how fine-tuning affects the internal mechanisms implemented in language models. As a case study, we explore the property of entity tracking, a crucial facet of language comprehension, where models fine-tuned on mathematics have substantial performance gains. We identify the mechanism that enables entity tracking and show that (i) in both the original model and its fine-tuned versions primarily the same circuit implements entity tracking. In fact, the entity tracking circuit of the original model on the fine-tuned versions performs better than the full original model. (ii) The circuits of all the models implement roughly the same functionality: Entity tracking is performed by tracking the position of the correct entity in both the original model and its fine-tuned versions. (iii) Performance boost in the fine-tuned models is primarily attributed to its improved ability to handle the augmented positional information. To uncover these findings, we employ: Patch Patching, DCM, which automatically detects model components responsible for specific semantics, and CMAP, a new approach for patching activations across models to reveal improved mechanisms. Our findings suggest that fine-tuning enhances, rather than fundamentally alters, the mechanistic operation of the model. 1 Introduction The capabilities of models fine-tuned on general reasoning tasks have hinted at nontrivial mechanisms underlying task learning. While it has been widely understood that fine-tuning a pretrained model on a specific task can improve task performance on that same task (Howard & Ruder, 2018), studies of fine-tuning on generalized domains (Gururangan et al., 2020) have suggested that fine-tuning on generic problems can improve specific task performance as well. In particular, fine-tuning on coding has been observed to lead to a range of improved capabilities in a model (Madaan et al., 2022; Kim & Schuster, 2023). In this paper, we study the mechanisms underlying one specific capability which is dramatically improved by fine-tuning a standard large language model (LLM) on the generic task of arithmetic-problem solving: the ability of a model to perform in-context entity tracking, where the model can infer properties associated with an entity previously defined in the input context. For example, if we say “The apple is in Box C,” a model will later be able to infer “Box C contains the apple.” The ability to track and maintain information associated with various entities within the context is fundamental for complex reasoning (Karttunen, 1976; Heim, 1983; Nieuwland & Van Berkum, 2006; Kamp et al., 2010), thus making entity tracking an intriguing case study. We ask several specific questions about the mechanisms underlying the emergence of improved entity tracking in an arithmetic-tuned model. First, we ask: can the performance gap be explained because the fine-tuned models contain a different circuit for performing entity tracking? Or does it contain the same entity-tracking circuit as the base model? 
To answer this question, we explicitly identify the entity-tracking circuit in the base Llama-7B model, using the path-patching method from Elhage et al. (2021); Wang et al. (2022), consisting of a sparse set of 72 attention heads in four *Correspondence to prakash.nik@northeastern.edu tamarott@mit.edu, tal.ha@campus.technion.ac.il, belinkov@technion.ac.il, d.bau@northeastern.edu groups, each group active at a specific token location (Fig. 1); acting in isolation, this sparse circuit can reproduce the entire entity-tracking capability of the base model. Then, without altering the graph, we ask if exactly the same set of components constitutes the entity-tracking circuit in the fine-tuned models. We observe that the identical circuit exists in the fine-tuned models, which alone can restore at least 88% of the overall performance of the entire fine-tuned model. However, achieving the full performance of the fine-tuned models requires incorporation of additional components. Next, we ask: how does this common circuit work? Can we discern the role of each group of attention heads? To answer these questions, we use Desiderata-based Component Masking (DCM; Davies et al., 2023), a method for automatically identifying model components responsible for performing a specific semantic subtask. That is done by specifying a set of “desiderata,” each consisting of pairs of entity tracking tasks, a base task, and a carefully designed alternation of it. The alternation is done on a specific semantic part of the task (e.g., the entity name) with a known target output (e.g., switch the entity property). Using these sets of tasks, we automatically identify groups of model components that have causal effects that correspond to specific semantics. For example, we could identify whether circuit components are transporting entity name information (e.g., “Box C” in the previous example), or its associated property (e.g., “contains the apple”), or some other scheme. We test these hypotheses and surprisingly find a third scheme that is used: entity tracking is performed by identifying and transporting the position of the queried entity in the context, with multiple groups of heads collaborating to pass the position downstream. Furthermore, this scheme and specific role of each group of heads remain the same between models, confirming that fine-tuning preserves the overall mechanism for performing the entity tracking task. The mechanism invariance is observed in both low-rank adaptations (LoRA) (Hu et al., 2021) and fully fine-tuned models. Third, we ask: if the mechanism remains the same after fine-tuning, can we attribute the performance improvement to a specific step in the mechanism? To study this question, we introduce cross-model activation-patching (CMAP), which allows us to localize the specific sub-mechanism being improved by fine-tuning. Cross-model activation patching shows evidence that (i) the internal representation of both the original model and the fine-tuned models is similar enough so that patching components of the entity-tracking circuit from the fine-tuned models to Llama-7B leads to enhanced performance. (ii) In fine-tuned models the entity tracking circuit has augmented positional information for attending to the correct object and hence fetching its enhanced representation. Taken together, our findings indicate that fine-tuning enhances the existing mechanism of the original model rather than causing a fundamental shift. 
Notably, the entity tracking circuit remains consistent across both base and fine-tuned models and maintains the same functionality, with the performance gap mainly attributed to an improved core sub-mechanism. The code, data and fully fine-tuned model can be accessed at https://finetuning.baulab.info. 2 RELATED WORK Mechanistic interpretability aims to elucidate neural network behaviors by comprehending the underlying algorithms implemented by models (Olah et al., 2017; Elhage et al., 2022). Recently, notable progress has been made in identifying circuits performing various tasks within models (Nanda et al., 2023; Wang et al., 2022; Chughtai et al., 2023; Olah et al., 2020; Lieberum et al., 2023), and in methods enabling circuit discoveries (Davies et al., 2023; Conmy et al., 2024; Wu et al., 2024; Meng et al., 2022; Chan et al., 2022). We aim to harness mechanistic interpretability to uncover an explanation for the performance enhancement observed in fine-tuned models. Specifically, our exploration focuses on whether the performance gap results from varying circuit implementations of the same task and if not, we aim to identify the enhanced mechanism within the circuit. Fine-tuning on generic domains such as code, mathematics, and instructions has been shown to enhance language models performance, both in the context of general fine-tuning and when tailored for specific tasks (Christiano et al., 2017; Gururangan et al., 2020; Madaan et al., 2022; Ouyang et al., 2022; Chung et al., 2022; Taori et al., 2023; Chiang et al., 2023; Liu & Low, 2023; Kim & Schuster, 2023; Zheng et al., 2023; Touvron et al., 2023b; Bommarito II & Katz, 2022). Several attempts to understand the effect of such fine-tuning on model operations reveal interesting characteristics; instruction fine-tuning can destroy knowledge for OOD input (Kumar et al., 2022), shift the model’s weight to a task-depended sub-domain (Guetta et al., 2023; Ilharco et al., 2022), and enhance existing capabilities rather than introduce new knowledge (Zhou et al., 2023). Fine-tuned models were shown to have a localized set of components that perform the task (Panigrahi et al., 2023), and modified underlying embedding spaces and attention patterns (Kovaleva et al., 2019; Merchant et al., 2020; Wu et al., 2020; Zhou & Srikumar, 2022). Concurrent to our research, (Jain et al., 2023) delved into the impact of fine-tuning on LLMs from a mechanistic perspective. Although their main finding, suggesting that fine-tuning rarely alters pretrained capabilities, resonates with our result of enhancing existing mechanisms through fine-tuning, their study involved controlled experiments utilizing transformer models created using the tracr library (Lindner et al., 2024). In contrast, our experiments focus on established LLMs such as Llama-7B and their fine-tuned variants, specifically in the context of entity tracking tasks, which we believe better represent real-world language tasks. Entity tracking is a fundamental cognitive ability that enables AI models to recognize and trace entities, including objects, individuals, or concepts, within a given context (Karttunen, 1976; Heim, 1983; Nieuwland & Van Berkum, 2006; Kamp et al., 2010; Marcus, 2018). In the large language models realm, models such as GPT-2 (Radford et al., 2019) have shown some related abilities, such as predicting the next moves in board games (Toshniwal et al., 2022; Li et al., 2022). Utilizing a probing technique, Li et al. 
(2021) shows that entity state can be recovered from internal activations in BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020). Lately, Kim & Schuster (2023) presented a dataset of entity tracking tasks, showing that models fine-tuned on code data perform entity tracking more accurately. We use entity tracking as a case study to explore how fine-tuning changes the model’s functionality to achieve enhanced performance. Complimentary of our work, (Feng & Steinhardt, 2023) investigated how LLMs keep track of various properties associated with an entity. Their findings indicated that models generate binding ID vectors corresponding to entities and attributes. We find it intriguing to further investigate the interaction between these binding ID vectors and the entity tracking circuit we have identified. 3 EXPERIMENTAL SETUP To explore the internal mechanism that enables entity tracking we adapt the dataset presented in Kim & Schuster (2023), aimed at evaluating the ability of a language model to track state changes of discourse entities. The dataset contains English sentences describing different settings of objects located in different boxes, with different labels, and the task is to discover what is inside a specific box. For example, when the model is presented with “The apple is in box F, the computer is in Box Q, the document is in Box X... Box F contains the”, it should predict the next token as “apple” (see additional task examples in Fig. 2 and in the Appendix J). Each of our tasks involves 7 boxes and no operations (i.e. contents of the boxes are not altered), each box is labeled with a random alphabetic letter. For convenience, we only use single-token objects. In contrast to Kim & Schuster (2023), we reorder the structure of the context segment (where each box information is defined) such that the object is mentioned before the box label (“The apple is in box F” instead of “Box F contains the apple”). This is to ensure that the context segment and the query segment (where the box is queried) have different structures, and the model needs to infer the box information rather than locating the longest identical context segment in the text. We study four language models: LLaMA-7B (Touvron et al., 2023a), and three fine-tuned versions of it: Vicuna-7B (Chiang et al., 2023) that was fine-tuned on user-shared conversations collected from ShareGPT, Goat-7B (Liu & Low, 2023), fine-tuned on synthetically generated arithmetic expressions using LoRA (Hu et al., 2021), and FLoat-7B (Fine-tuned Llama on arithmetic tasks), fine-tuned on the same data as Goat-7B without LoRA. All these models achieve high performance on the entity tracking task, as shown in Table 1 (first column, evaluation was done over 500 tasks). Although Goat-7B and FLoat-7B were fine-tuned on arithmetic tasks, their ability to perform entity tracking is significantly improved compared to the base Llama-7B model. This aligns with Kim & Schuster (2023), who also found that models trained on structured data are better at performing entity tracking. We seek a mechanistic explanation for this performance gap. 4 IS THE SAME CIRCUIT PRESENT AFTER FINE-TUNING? In this section we ask whether the circuit that enables entity tracking changes across the different fine-tuned models. Entity tracking might be solved by the same circuit in all four models, or each model may implement a different circuit in the light of fine-tuning data. 
To answer this, we start with identifying the entity tracking circuit in Llama-7B, and then evaluate the same circuit components in Vicuna-7B, Goat-7B, and FLoat-7B. 4.1 Circuit Discovery in Llama-7B The entity-tracking circuit will be a subgraph of the transformer computational graph, where each node is an attention head at a specific token position, so the whole circuit is a set \( \text{Cir} = \{(a,t)\} \). For example, Fig. 1 illustrates the entity tracking circuit in Llama-7B consisting of four groups of nodes, each represented by a prominent head; e.g. Group A is characterized with \((a_{1,21H3}, t_{\text{last}})\). Given the nature of the entity tracking task, we are primarily interested in how and what kinds of information are transported between tokens rather than how that information is transformed. We therefore focus our analysis on the attention heads of the circuit, and we consider all MLP layers to be involved in the computation of the final output. To identify the components of the entity tracking circuit, we use Path Patching (Wang et al., 2022; Goldowsky-Dill et al., 2023), using the synthetic box tracking dataset with 300 examples. For each of the original entity tracking tasks \( x_{\text{org}} \) we define a corresponding noise task \( x_{\text{noise}} \) with a randomized query, box labels, and objects. Then we evaluate each candidate pair of nodes with a score defined as follows. We denote \( p_{\text{org}} \) as the probability of the correct token predicted by the original run, and we let \( p_{\text{patch}} \) be the probability assigned to the correct token when patching a specific path from one specific node to another using activations from the noisy run. The patching score for the candidate pair is defined as \( (P_{\text{patch}} - P_{\text{org}})/P_{\text{org}} \). At each iteration we add the paths with the lowest (most negative) scores. In the first step, we identify the group of heads that directly influence the final logit with the lowest patching scores. These attention heads attend mainly to the correct object token: in other words, they look directly at the answer, e.g., ‘apple’ that should be predicted (Fig. 1). We refer to this set of heads as Group A. We then iteratively identify groups of heads that have high direct effects on each other using the path patching score; this leads us to three additional groups of attention heads, (B, C, and D), active at the last, query label, and previous query label token positions, as shown in Fig. 1. We mark the paths between groups with either Q or V to indicate whether the heads of the previous group affect the query or the value vector calculation of the following group correspondingly. Overall, the circuit \( \text{Cir} \) consists of four groups of heads. Group D at the previous query label token collects information of its segment and passes it on to the heads in Group C at the query box label position via V-composition. The output of Group C is transported to the last token residual stream via the heads of Group B through V-composition, which is used by the heads of Group A via Q-composition to attend to the correct object token. The validity of this information flow channel is further substantiated by the results obtained from the attention knockout technique introduced in Geva et al. (2023), as demonstrated in Appendix A. Interestingly, this circuit suggests that correct object information is fetched directly from its token residual stream, instead of getting it from the query label token residual stream. 
This result is consistent with the findings of Lieberum et al. Table 1: **Entity-tracking circuit** found in Llama-7B, evaluated on Llama-7B, Vicuna-7B, Goat-7B, and FLoat-7B, without any adjustment of the circuit graph. The circuit achieves high accuracy and faithfulness scores in all models (chance accuracy is 0.14). | Model | Finetuned? | Accuracy | |-------------|---------------------|-------------------| | | | Full-Model | Circuit | Random Circuit | Faithfulness | | Llama-7B | – | 0.66 | 0.66 | 0.00 | 1.00 | | Vicuna-7B | User conversations | 0.67 | 0.65 | 0.00 | 0.97 | | Goat-7B | Arithmetic tasks (LoRA) | 0.82 | 0.73 | 0.01 | 0.89 | | FLoat-7B | Arithmetic tasks (w/o LoRA) | 0.82 | 0.72 | 0.01 | 0.88 | (2023), reporting that heads affecting final logit attend to the correct label, instead of content tokens, to identify the label corresponding to the already-determined correct answer. ### 4.2 Circuit Evaluation Although path patching ranks a head based on its relevance via the patching score, it does not provide a clear threshold for the number of heads that should be included in the circuit. In our setting, we include a total of 90 heads in the circuit discovered with path patching (50, 10, 25, 5 heads in Groups A,B,C,D respectively). However, there might be redundancy among the heads in each group. Hence, inspired by Wang et al. (2022), we use a minimality criterion to prune the initial circuit. We then measure the performance of the minimal circuit compared with that of the entire model using the faithfulness metric. We also evaluate it with the completeness metric in the Appendix C. For both criteria, we define the performance metric $F$ to be the accuracy score averaged over 500 examples. That is, for the model $M$ and its circuit $Cir$, $F(M)$, $F(Cir)$ represent the accuracy of the model and circuit respectively. Specifically, we compute $F(Cir)$ by first mean ablating of all the heads in the model that are not involved in $Cir$. **Minimality.** The minimality criterion helps identify heads that do not significantly contribute to the circuit performance found with path patching (90 heads in total). For each head, $v \in Cir$, and a subset of heads $K$, we measure the relative performance difference of $Cir$ when the heads in $K$ are knockout, with and without $v$ from the circuit. That is, we define the contribution of each head $v$ to $Cir$ as $(F(Cir \setminus K) - F(Cir \setminus (K \cup \{v\}))) / F(Cir \setminus (K \cup \{v\}))$. We filter out the heads with a score lower than 1% (e.g., contribute less than 1% to the performance of the circuit in the absence of the functionality defined by subset $K$). Unlike Wang et al. (2022), we use a greedy approach to form the subset for each head in $Cir$ (check Appendix B for more details), and only consider heads that positively contribute to the model performance (e.g., contribute to performing of the task). Using this criterion we prune 20% of the heads of the initial circuit, hence reducing the total number of heads to 72 (see Appendix D for exact distribution and heads in each group). **Faithfulness.** We next measure how good is the identified circuit compared with the entire model. We use the criterion of faithfulness, which is defined as the percentage of model performance that can be recovered with the circuit, i.e., $F(Cir)/F(M)$. As shown in Table 1, Llama-7B has a faithfulness score of 1.0, suggesting identified circuit can recover entire model performance. 
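To make the two evaluation criteria concrete, the toy sketch below expresses faithfulness and the minimality contribution of a head as functions of an accuracy oracle. The accuracy function in the example is fabricated purely for illustration; in practice \(F(\cdot)\) is obtained by mean-ablating every attention head outside the evaluated set and measuring task accuracy over 500 examples, and the listed (layer, head) pairs are hypothetical.

```python
def faithfulness(accuracy_fn, circuit, full_model_heads):
    """F(Cir) / F(M): fraction of the full model's accuracy recovered by the circuit.
    accuracy_fn(heads) returns task accuracy when only `heads` are kept intact."""
    return accuracy_fn(circuit) / accuracy_fn(full_model_heads)

def minimality_score(accuracy_fn, circuit, head, knockout_set):
    """Relative contribution of `head` to the circuit once the heads in `knockout_set`
    are removed; heads contributing below 1% are pruned from the circuit."""
    without_k = circuit - knockout_set
    without_k_and_v = without_k - {head}
    return (accuracy_fn(without_k) - accuracy_fn(without_k_and_v)) / accuracy_fn(without_k_and_v)

# Toy example with a fabricated accuracy function and hypothetical (layer, head) pairs;
# real scores come from mean-ablation runs on the entity tracking dataset.
all_heads = {(layer, head) for layer in range(32) for head in range(32)}
circuit = {(21, 3), (19, 10), (16, 20), (11, 17)}
acc = lambda heads: 0.14 + 0.52 * len(heads & circuit) / len(circuit)
print(faithfulness(acc, circuit, all_heads))                  # 1.0 under this toy accuracy
print(minimality_score(acc, circuit, (21, 3), {(11, 17)}))    # contribution of one head
```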
### 4.3 Circuit Generalization Across Fine-Tuned Models As described in section 3, fine-tuned models perform the entity tracking task better than the base Llama-7B. Better performance could be attributed to a superior circuit in the fine-tuned models. Hence, in this subsection, we ask the question of whether the fine-tuned models use a different or the same circuit, i.e., with exactly the same group of heads, to perform the entity tracking task. To answer this, we evaluate the circuit identified in Llama-7B, on the fine-tuned models using the faithfulness criterion. Surprisingly, we find that fine-tuned models have good faithfulness scores for the circuit identified in Llama-7B (without any additional optimization or adaptation) as shown in Table 1. Specifically, Vicuna-7B has almost a perfect faithfulness score of 0.97, while Goat-7B and FLoat-7B exhibit slightly lower scores of 0.89 and 0.88, respectively. As a baseline, we calculate the average accuracy of 10 random circuits with the same total and per-position number of heads; random circuits have virtually zero accuracy. This suggests that Vicuna-7B utilizes roughly the same circuit as that of Llama-7B to perform entity tracking. Whereas, in Goat-7B and FLoat-7B the same circuit is present, but achieving the complete performance of the fine-tuned models requires the incorporation of additional components. To further investigate the overlap between the circuits of fine-tuned models and the base model, we identify the entity tracking circuits of the Goat-7B and FLoat-7B models, using the same procedure as in Section 4.1 (Refer to Appendix E and Appendix F). We found that these circuits are significantly larger, consisting of 175 attention heads and approximately forming a superset of the Llama-7B circuit (Refer to Appendix E4 and Appendix F4 for more details). This finding suggests that fine-tuning is inserting additional components to the circuitry that performs entity tracking. 5 IS CIRCUIT FUNCTIONALITY THE SAME AFTER FINE-TUNING? While the same circuit is primarily responsible for performing entity tracking in both the base and fine-tuned models, the specific functionality of different parts of the circuit remain unknown. In other words, to fully comprehend the underlying mechanism through which these models execute the task, it is crucial to understand the functionalities of the circuit components. There are two hypotheses pertaining to circuit functionality in base and fine-tuned models: (i) The same circuit exists in all four models, but the functionalities it implements may vary, accounting for the performance difference. (ii) The circuits of all models implement the same mechanism, but with an enhanced functionality in fine-tuned models. To investigate these hypotheses, we use the automatic Desiderata-based Component Masking (DCM) method, introduced in Davies et al. (2023), for identifying groups of model components responsible for specific functionalities. First, we use DCM on the groups of heads in the minimal circuit of Llama-7B to identify subsets of heads with specific functionalities, (e.g., moving positional information or object values). Then, for each model we apply activation patching on those subsets of heads, to quantify their efficacy on various functionalities. 5.1 DESIDERATA-BASED COMPONENT MASKING The DCM method involves using desiderata for identifying model components responsible for specific functionality. 
Each desideratum consists of numerous 3-tuple \((\text{original}, \text{alternative}, \text{target})\), where \(\text{original}\) is an original entity tracking task, \(\text{alternate}\) is a carefully designed counterfactual task, and \(\text{target}\) is the desired output, as shown in Fig. 2. If a set of components encodes information regarding the desired semantics, then patching activations from the \(\text{alternate}\) run into the \(\text{original}\) run should alter the model output to \(\text{target}\). Refer to Davies et al. (2023) for more details. DCM use gradient descent optimization procedures; For each desideratum, we train a sparse binary mask over potential model components to identify the ones that when patched from counterfactual to original run maximize the target value. Hence, compared to brute-force activation patching, DCM is much more efficient. More importantly, it overcomes a major drawback of activation patching, i.e., it can locate the subset of model components that work together to produce the final output. 5.2 CIRCUIT FUNCTIONALITY IN LLAMA-7B To untangle the functionality of groups of heads in the Llama-7B circuit, we define three desiderata, as shown in Fig. 2: (i) \(\text{Object}\) desideratum, which is used to identify model components encoding the value of correct object, (ii) \(\text{Label}\) desideratum, used to identify model components encoding the box label value information, and (iii) \(\text{Position}\) desideratum which can be used to identify model components encoding the positional information of the correct object. Please refer to Fig. 2 caption and Appendix G for additional details about each. We apply DCM to identify the subset of heads that encode these functionalities in Llama-7B circuit. For each group of heads, we train three binary masks, one for each desideratum, that identify the subset of heads encoding specific functionality (check Appendix H for more details). The results are shown in Table A2. All Group A heads encode the value of correct object in their output. While most of the heads in Group B (71.43%) and C (70.0%) encode positional information of the correct object in their output. The heads of Group D are not profoundly involved in any of the three functionalities. We next apply activation patching on this subset of heads, using additional \(N = 500\) samples from the three desiderata, and compute the accuracy with respect to the target value. In order to incorporate randomness in the generated data, we repeated the evaluation ten times with different samples of the test set and report the mean accuracy and standard deviation. The results are shown in Fig. 3, indicating that heads in Group A are primarily responsible for fetching the value information of the correct object. Hence, we refer to this set of heads as Value Fetcher. Heads in Group B and C are mainly responsible for detecting and transmitting the positional information of the correct object and are therefore referred to as Position Detector and Position Transmitter. Since we were unable to establish the functionality of heads in Group D, we used their attention pattern to annotate them. These heads primarily attend to tokens in their own segment, as shown in Fig. 1, hence we refer to them as Structure Reader heads. Overall, the circuit generates correct output by first detecting the positional information of the correct object with Position Detector heads, using the information collected by the Structure Reader heads. 
The positional information is transmitted to the Value Fetcher heads, by the Position Transmitter heads, which resolves this information to locate the correct object location and fetches its value, to be generated as final output. This indicates that the model is primarily using positional information to keep track of in-context entities. Additionally, we have some early evidence that the model is encoding positional information relative to the context segment; see Appendix J for more details. ### 5.3 Circuit functionality in fine-tuned models Now that we have identified the functionality of the group of heads in the Llama-7B circuit, we can examine whether this circuit, also present in the fine-tuned models, implements the same or different functionalities across different models. To assess this, we employ activation patching on the same subset of heads of Vicuna-7B, Goat-7B, and FLoat-7B that are involved in a specific functionality. As shown in Fig. 3, the functionality of the subset of heads remains the same across fine-tuned models. Position Detector and Position Transmitter heads of Vicuna-7B and Goat-7B achieve performance similar to that of Llama-7B, though they demonstrate enhanced accuracy in FLoat-7B. The Value Fetcher heads in fine-tuned models consistently show an improved capability to retrieve the correct object value, e.g., Goat-7B can achieve a performance improvement of 20% compared to Llama-7B. Furthermore, we found that both Goat-7B and FLoat-7B circuits implement precisely the same functionality within each group, as depicted in Fig. A8 and Fig. A9. These findings suggest that neither additional functionality nor a shift in functionality is introduced in fine-tuned models. Overall, the results confirm the hypothesis that circuits in fine-tuned models implement the same functionality with the insight that the Value Fetcher in fine-tuned models has a better ability to resolve positional information for fetching the correct object value information. Figure 3: **Circuit Functionality in Llama-7B, Vicuna-7B, Goat-7B, and FLoat-7B.** We use DCM to uncover functionality of each subgroup of Llama-7B circuit. Group A (pink) is mainly sensitive to value desideratum, while groups B, C (purple, turquoise) are responsible for positional information. We find group D insensitive to each of the three desideratum. Error bars indicate standard deviation. Combining the results from previous experiments indicates that not only the circuit from the base model is present in the fine-tuned models, but also its functionality remains the same. Further, additional components in fine-tuned models’ circuits implement the exact same functionality. Hence, we conclude that fine-tuned models implement the same mechanism to perform entity tracking task as the base model. However, the increased performance of fine-tuned models suggests that fine-tuning enhances that existing mechanism. This implies that unraveling the mechanism through which a fine-tuned model accomplishes a task provides valuable insights into how the same task would be executed in the base model. This insight is particularly crucial for tasks that the base model struggles to perform well, making unraveling its mechanism more challenging. 6 Why do Goat-7B and FLoat-7B perform better? In the previous sections, we established that fine-tuned models employ the same mechanism as the base model to perform the entity tracking task, albeit with additional components. 
In this section, we aim to attribute performance improvement to a specific step in the mechanism. 6.1 Cross-model activation patching In order to be able to attribute the performance improvement to a specific step in the mechanism, we introduce *Cross-Model Activation Patching* (CMAP). Unlike naive activation patching, which involves patching activations of the same model on different inputs, CMAP requires patching activations of the same components of *different models* on the same input, as shown in Fig 4. We use CMAP to patch the output of the subset of heads responsible for dominant functionality in each group of Goat-7B and FLoat-7B circuits. Since we do not fully understand the functionality of Structure Reader heads, we patch all the heads in this group. More specifically, we patch the output of heads in the Goat-7B circuit from the Goat-7B to Llama-7B model, to identify which step in the Goat-7B model mechanism leads to performance improvement. Similarly, we perform the same patching process for heads in the FLoat-7B circuit. Figure 4: Why do Goat-7B and FLoat-7B perform better? We use CMAP to patch activations of the Goat-7B and FLoat-7B circuit components, from Goat-7B and FLoat-7B to Llama-7B model respectively, to attribute the performance improvement to a specific sub-mechanism used to perform entity tracking tasks. We patch the output of the subset of heads in each group that are involved in the primary functionality. We find that patching Value Fetcher heads can solely improve the performance of Llama-7B to that of Goat-7B and FLoat-7B. Additionally, we also observe a significant performance boost when the output of Position Transmitter heads is patched. 6.2 Results As shown in Fig. 4, patching the output of the Position Transmitter and Value Fetcher heads from fine-tuned models to Llama-7B improves the performance of Llama-7B beyond its default performance (red dashed line). It is interesting to observe that the activations of fine-tuned models are compatible with base model, even though they could have been using completely different sub-spaces and/or norms to encode information. We observe the maximal increase in performance when the Value Fetcher heads are patched, recovering the full fine-tuned models’ performance (green dashed line). This indicates that the output of these heads in fine-tuned models encodes an enhanced representation of the correct object, corroborating results from Section 5. Additionally, we also see a substantial increase in performance when the outputs of the Position Transmitter heads are patched, suggesting that fine-tuned models are also transmitting augmented positional information. We speculate that the enhanced encoding in fine-tuned models stem from both additional components in their circuit and the improved ability to encode vital information of shared components with Llama-7B. 7 Discussion and Conclusion In this work, we investigated the effect of fine-tuning on circuit-level mechanisms in LLMs. We discovered that not only does the circuit from the base model persist in the fine-tuned models, but its functionality also remains unchanged. Further, the circuits in fine-tuned models, augmented with additional components, precisely employ the same functionality. We have introduced Cross-Model Activation Patching (CMAP) to compare mechanisms in two different models, revealing how a fine-tuned model enhances the existing mechanism in a base model to obtain a better performance on entity tracking. 
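As a schematic illustration of the CMAP procedure described in Section 6.1 (not the authors' implementation), the sketch below caches the outputs of selected sub-modules from the fine-tuned model and overwrites the corresponding sub-modules of the base model on the same input. The toy stand-in models and module names are assumptions; patching real attention heads would additionally require hooks that slice out individual head outputs, and attention modules that return tuples would need extra handling.

```python
import torch
import torch.nn as nn

def cmap(base_model, finetuned_model, patch_modules, x):
    """Cross-Model Activation Patching (schematic): run the fine-tuned model on input x,
    cache the outputs of the named sub-modules, then rerun the base model on the same
    input while overwriting those sub-modules' outputs with the cached activations."""
    cache, hooks = {}, []

    def save(name):
        return lambda mod, inp, out: cache.__setitem__(name, out.detach())

    def overwrite(name):
        return lambda mod, inp, out: cache[name]

    # 1) Cache activations from the fine-tuned model.
    for name, mod in finetuned_model.named_modules():
        if name in patch_modules:
            hooks.append(mod.register_forward_hook(save(name)))
    finetuned_model(x)
    for h in hooks:
        h.remove()

    # 2) Patch the cached activations into the base model's forward pass.
    hooks = [mod.register_forward_hook(overwrite(name))
             for name, mod in base_model.named_modules() if name in patch_modules]
    out = base_model(x)
    for h in hooks:
        h.remove()
    return out

# Toy example: two stand-in models sharing the same architecture; patch the middle layer "1".
make = lambda: nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 8))
logits = cmap(make(), make(), patch_modules={"1"}, x=torch.randn(2, 8))
```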
In our work we have studied the interaction between a single task and three fine-tuned models. Understanding whether such mechanism invariance is typical will require experience with further tasks on more models. Nevertheless, the methods presented in the paper are generic and could be applied to a variety of settings. Future work may study the training dynamics during the fine-tuning process, to pinpoint exactly when and how the circuit enhancement occurs. 8 ETHICS STATEMENT This work investigating the impact of fine-tuning on large language models suggests that fine-tuning primarily enhances existing mechanisms present in the base model. This highlights the importance of training safe and unbiased base models that are openly available. If such models are developed responsibly, then the risks of fine-tuning introducing new biases or dangerous behaviors can be greatly reduced. Hence, indicating that careful stewardship is required in the foundational phases of model development to promote beneficial applications as the capabilities of AI systems advance. 9 ACKNOWLEDGEMENT We would like to thank Open Philanthropy for their generous support through an AI Alignment grant (NP, TH, YB, DB). TH and YB also received support from the Israel Science Foundation (grant No. 448/20) and an Azrieli Foundation Early Career Faculty Fellowship. TRS received partial support from the Zuckerman STEM Leadership Program and the Viterbi Fellowship. We would also like to thank the Center for AI Safety (CAIS) for making computing resources available for this research. REFERENCES Michael Bommarito II and Daniel Martin Katz. Gpt takes the bar exam. arXiv preprint arXiv:2212.14402, 2022. Lawrence Chan, Adrià Garriga-Alonso, Nicholas Goldowsky-Dill, Ryan Greenblatt, Jenny Nitishinskaya, Ansh Radhakrishnan, Buck Shlegeris, and Nate Thomas. Causal scrubbing: A method for rigorously testing interpretability hypotheses. https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing/, 2022. Accessed: February 14, 2023. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017. Bilal Chughtai, Lawrence Chan, and Neel Nanda. A toy model of universality: Reverse engineering how networks learn group operations. arXiv preprint arXiv:2302.03025, 2023. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. Arthur Conmy, Augustine Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià Garriga-Alonso. Towards automated circuit discovery for mechanistic interpretability. Advances in Neural Information Processing Systems, 36, 2024. Xander Davies, Max Nadeau, Nikhil Prakash, Tamar Rott Shaham, and David Bau. Discovering variable binding circuitry with desiderata. arXiv preprint arXiv:2307.03637, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.
qODvxQ8TXW
While the authors argue LRR finds a better mask than WR in Figure 3, I wonder if more training epochs within each IMP cycle would help WR find a superior mask. In other words, are both WR and LRR fully converged?
Masks, Signs, And Learning Rate Rewinding Advait Gadhikar & Rebekka Burkholz CISPA Helmholtz Center for Information Security Saarbrücken, Germany {advait.gadhikar, burkholz}@cispa.de Abstract Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks. While both iterative pruning schemes couple structure and parameter learning, understanding how LRR excels in both aspects can bring us closer to the design of more flexible deep learning algorithms that can optimize diverse sets of sparse architectures. To this end, we conduct experiments that disentangle the effect of mask learning and parameter optimization and how both benefit from overparameterization. The ability of LRR to flip parameter signs early and stay robust to sign perturbations seems to make it not only more effective in mask identification but also in optimizing diverse sets of masks, including random ones. In support of this hypothesis, we prove in a simplified single hidden neuron setting that LRR succeeds in more cases than IMP, as it can escape initially problematic sign configurations. 1 Introduction Overparametrization has been key to the huge success of deep learning (Bubeck et al., 2023; Neyshabur et al., 2019; Belkin et al., 2019). Adding more trainable parameters to models has shown to consistently improve performance of deep neural networks over multiple tasks. While it has been shown that there often exist sparser neural network representations that can achieve competitive performance, they are usually not well trainable by standard neural network optimization approaches (Evci et al., 2022), which is a major challenge for learning small scale (sparse) neural networks from scratch to save computational resources. The Lottery Ticket Hypothesis (LTH) by Frankle & Carbin (2019) is based on an empirical existence proof that the optimization of at least some sparse neural network architectures is feasible with the right initialization. According to the LTH, dense, randomly initialized neural networks contain subnetworks that can be trained in isolation with the same training algorithm that is successful for the dense networks. A strong version of this hypothesis (Ramanujan et al., 2020a; Zhou et al., 2019), which has also been proven theoretically (Malach et al., 2020; Pensia et al., 2020; Orseau et al., 2020; Fischer et al., 2021; Burkholz et al., 2022; da Cunha et al., 2022; Gadhikar et al., 2023; Ferbach et al., 2023), suggests that the identified initial parameters might be strongly tied to the identified sparse structure. Related experimental studies and theoretical investigations support this conjecture (Evci et al., 2022; Paul et al., 2023). In line with these findings, contemporary pruning algorithms currently address the dual challenge of structure and parameter learning only jointly. Iterative Magnitude Pruning (IMP) (Frankle & Carbin, 2019) and successive methods derived from it, like Weight Rewinding (WR) (Frankle et al., 2020a) and Learning Rate Rewinding (LRR) (Renda et al., 2020; Liu et al., 2021a) follow an iterative pruning – training procedure that removes a fraction of parameters in every pruning iteration until a target sparsity is reached. This achieves state-of-the-art neural network sparsification (Paul et al., 2023), albeit at substantial computational cost. 
While this cost can be reduced by starting the pruning procedure from a sparser, randomly pruned network (Gadhikar et al., 2023), the question remains whether the identification of small sparse neural network models necessitates training an overparameterized model first. Multiple works attest that overparameterization aids pruning (Zhang et al., 2021; Chang et al., 2021; Golubeva et al., 2020). This suggests that overparameterized optimization obtains information that should be valuable for the performance of a sparsified model. Conforming with this reasoning, IMP was found less effective for complex architectures than Weight Rewinding (WR) (Renda et al., 2020), which rewinds parameters to values that have been obtained by training the dense, overparameterized model for a few epochs (instead of rewinding to their initial value like IMP). LRR (Renda et al., 2020) completely gets rid of the weight rewinding step and continues to train a pruned model from its current state while repeating the same learning rate schedule in every iteration. Eliminating the parameter rewinding step has enabled LRR to achieve consistent accuracy gains and improve the movement of parameters away from their initial values (Liu et al., 2021a). Complementary to Paul et al. (2023) and Liu et al. (2021a), we identify a mechanism that provides LRR with (provable) optimization advantages that are facilitated by pruning a trained overparameterized model. First, we gain provable insights into LRR and IMP for a minimal example, i.e., learning a single hidden ReLU neuron. Our exact solutions to the gradient flow dynamics for high-dimensional inputs could be of independent interest. The initial overparameterization of the hidden neuron provably enables learning and facilitates the identification of the correct ground truth mask by pruning. LRR benefits from the robustness of the overparameterized neuron to different parameter initializations, as it is capable of switching initially problematic parameter sign configurations that would result in the failure of IMP. We verify in extensive experiments on standard benchmark data that our theoretical insights capture a practically relevant phenomenon and that our intuition regarding parameter sign switches also applies to more complex architectures and tasks. We find that while LRR is able to perform more sign flips, these happen in early training-pruning iterations, when a higher degree of overparameterization is available to facilitate them. In this regime, LRR is also more robust to sign perturbations. This observation suggests that LRR could define a more reliable parameter training algorithm than IMP for general masks. However, in iterative pruning schemes like IMP and LRR, the mask identification step is closely coupled with parameter optimization. Changing either of these aspects could affect the overall performance considerably. For example, learning only the mask (strong lottery tickets (Ramanujan et al., 2020a; Gadhikar et al., 2023)) or learning only the parameters with a random mask (Liu et al., 2021b; Gadhikar et al., 2023) is unable to achieve the same performance as IMP at high sparsities. Yet, we carefully disentangle the parameter optimization and mask learning aspects to show that LRR achieves more reliable training results for different masks. In addition, it can also identify a better mask that can sometimes achieve a higher performance than the IMP mask, even when both are optimized with IMP. Contributions.
Our main contributions are as follows:
- To analyze the advantages of LRR for parameter optimization and mask identification, we conduct experiments that disentangle these two aspects and find that the benefits of LRR are two-fold. (a) LRR often finds a better sparse mask during training and (b) LRR is more effective in optimizing the parameters of diverse masks (e.g., a random mask).
- We experimentally verify that, in comparison with IMP, LRR is more flexible in switching parameter signs during early pruning iterations, when the network is still overparameterized. It also recovers more reliably from sign perturbations.
- For a univariate single hidden neuron network, we derive closed form solutions of its gradient flow dynamics and compare them with training and pruning an overparameterized neuron. LRR is provably more likely to converge to a ground truth target while IMP is more susceptible to failure due to its inability to switch initial problematic weight signs.

1.1 Related Work

Insights into IMP. Paul et al. (2023) attribute the success of IMP to iteratively pruning a small fraction of parameters in every step, which allows consecutively pruned networks to be linearly mode connected (Frankle et al., 2020a; Paul et al., 2022). This can be achieved by WR if the dense network is trained for sufficiently many epochs. They argue that as long as consecutive networks are sufficiently close, IMP finds sparse networks that belong to the same linearly mode connected region of the loss landscape. Evci et al. (2022) similarly claim that IMP finds an initialization that is close to the pruning solution and within the same basin of attraction. Liu et al. (2021a) likewise show that initial and final weights are correlated for IMP. In our experiments we study the WR variant of IMP, where the dense network has been trained for sufficiently many epochs to obtain the initial parameters for IMP, but we still find that, in comparison, LRR switches more signs and can achieve better performance. The role of sign switches. While Wang et al. (2023) have recently verified the importance of suitable parameter signs for better training of neural networks in general, they have not analyzed their impact on neural network sparsification. Zhou et al. (2019) study the weight distributions for IMP and find that rewinding only parameter signs can be sufficient. Large scale problems, however, rely on learning signs in early epochs and require a good combination with respective parameter magnitudes, as discussed by Frankle et al. (2020b) for IMP. These results are still focused on the IMP learning mechanism and its coupling to the mask learning. In contrast, we show that identifying good signs (and magnitudes) early enables LRR to not only find a better mask but to also learn more effectively if the mask identification is independent from the parameter optimization. Mask optimization. Random sparse masks also qualify as trainable lottery tickets (Su et al., 2020; Ma et al., 2021; Liu et al., 2021b), which suggests that the mask identification can be separated from parameter optimization up to certain sparsities (Gadhikar et al., 2023). Our experiments isolate the advantages of LRR on both these aspects. Training dynamics of overparametrized networks. The training dynamics of overparametrized networks have been theoretically investigated in multiple works, which frequently employ a balanced initialization (Du et al., 2018) and a related conservation law under gradient flow in their analysis. Arora et al.
(2018, 2019) study deep linear networks in this context, while Du et al. theoretically characterizes the gradient flow dynamics of two layer ReLU networks. While they require a high degree of overparameterization, Boursier et al. (2022) obtains more detailed statements on the dynamics with a more flexible parameterization but assume orthogonal data input. Single hidden neuron setting. These results do not directly transfer to the single hidden neuron case, which has been subject of active research Yehudai & Ohad (2020); Lee et al. (2022a); Vardi et al. (2021); Oymak & Soltanolkotabi (2019); Soltanolkotabi (2017); Kalan et al. (2019); Frei et al. (2020); Diakonikolas et al. (2020); Tan & Vershynin (2019); Du et al. Most works assume that the outer weight $a$ is fixed, while only the inner weight vector $w$ is learned and mostly study noise free data. We extend similar results to trainable outer weight and characterize the precise training dynamics of an univariate (masked) neuron in closed form. Lee et al. (2022b) study a similar univariate case but do not consider label noise in their analysis. Most importantly, similar results have not been deduced and studied under the premise of network pruning. They enable us to derive a mechanism that gives LRR a provable benefit over IMP, which is inherited from overparameterized training. 2 THEORETICAL INSIGHTS FOR A SINGLE HIDDEN NEURON NETWORK Figure 1: (a) Target network. For one dimensional input, learning succeeds when the initial values $w(0), \alpha(0) > 0$ are both positive (yellow quadrant), but fails in all other cases (red). (b) For multidimensional input, IMP identifies the correct mask, but cannot learn the target if the model is reinitialized to $w^{(2)}(0) < 0$. (c) LRR identifies the correct mask and is able to inherit the correct initial sign $w^{(2)}(0) > 0$ from the trained overparameterized model if $\alpha^{(0)}(0) > 0$. Intuition behind LRR versus IMP. The advantage of IMP is that it was designed to identify lottery tickets and thus successfully initialize sparse masks (i.e., sparse neural network structures). However, in order to find such an initialization, we show that the information obtained in earlier pruning iterations with the aid of overparameterization is valuable in learning better models. Notably, we find that each pruning iteration transfers key information about parameter signs to the next iteration. Forgetting this information (due to weight rewinding) means that IMP is challenged to learn the appropriate parameter signs from scratch in each iteration. To establish provable insights of this form, we face the theoretical challenge to describe the learning dynamics of the parameters in response to different initializations. We therefore focus on an example of minimum complexity that still enables us to isolate a mechanism by which LRR has a higher chance to succeed in solving a learning task. In doing so, we study a single hidden neuron, 2-layer neural network under gradient flow dynamics, as visualized in Fig. 1(a). For our purpose, we focus on two main aspects: (i) The trainability of the masked neural network (i.e., a single hidden neuron with \( d = 1 \) input), once the sparse mask is identified. (ii) The ability of LRR to leverage the initial overparameterization (i.e., a single hidden neuron with \( d > 1 \) inputs) in the model to learn appropriate parameter signs. Regarding (i), we have to distinguish four different initialization scenarios. Only one scenario (yellow quadrant in Fig. 
1(a)) leads to accurate learning. LRR is able to inherit this setup from the trained overparameterized network and succeed (see Fig. 1(c)) in a case when IMP fails (Fig. 1(b)) because it rewinds its parameters to an initial problematic setting. To explain these results in detail, we have to formalize the set-up. **LRR.** We focus on comparing Iterative Magnitude Pruning (IMP) and Learning Rate Rewinding (LRR). Both cases comprise iterative pruning-training cycles. The \( i \)-th pruning cycle identifies a binary mask \( M^{(i)} \in \{0, 1\}^N \), which is established by pruning a fraction of neural network parameters \( \theta^{(i-1)} \) with the smallest magnitude. A training cycle relearns the remaining parameters \( \theta^{(i)} \) of the masked neural network \( f(x | M^{(i)} \theta^{(i)}) \). The only difference between LRR and IMP is induced by how each training cycle is initialized (See Fig. 1(b)). In case of LRR, the parameters of the previous training cycle that were not pruned away are used as initial values of the new training cycle so that \( \theta^{(i)}(0) = \theta^{(i-1)}(t_{end}) \). Thus, training continues (with a learning rate that is reset to its initial value). **IMP.** In case of IMP, each pruning iteration starts from the same initial parameters \( \theta^{(i)}(0) = \theta^{(0)}(0) \) and parameters learnt in the previous iteration are forgotten. While our theoretical analysis focuses on IMP, Frankle et al. (2021); Su et al. (2020); Ma et al. (2021) have shown that IMP does not scale well to larger architectures. Hence, we employ the more successful variant Weight Rewinding (WR) in our experiments. Here, the parameters are not rewound to their initial values but to the parameters of the dense network which was trained for a few warm-up epochs \( \theta^{(i)}(0) = \theta^{(0)}(k) \) Frankle et al. (2020a); Renda et al. (2020). Our theory also applies to this case but we will mostly discuss rewinding to the initial values for simplicity. From now on we use IMP to refer to IMP in our theory and WR in our experiments. **Problem set-up.** Consider a single hidden neuron network with input \( x \in \mathbb{R}^d \), given as \( f(x) := a \phi(wx) \) with the ReLU activation \( \phi(x) = \max\{x, 0\} \) (see Fig. 1). Note that one of the weights could assume the role of a bias if one of the inputs is constant in all samples, e.g., \( x_i = 1 \). The task is to learn a scalar target \( t(x) = \phi(x_1) \) only dependent on the first coordinate of \( x \), from which \( n \) noisy training data points \( Y = t(X_1) + \zeta \) are generated (upper case denotes random variables.) For simplicity, we assume that all input components are independently and identically (iid) distributed and follow a normal distribution \( X_i \sim \mathcal{N}(0, I/d) \), while the noise follows an independent normal distribution \( \zeta \sim \mathcal{N}(0, \sigma^2) \). The precise assumptions on the data distributions are not crucial for our results but clarify our later experimental setting. 
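Before turning to the gradient-flow analysis of this single-neuron problem, the following sketch summarizes the generic iterative pruning-training loop that distinguishes IMP, WR, and LRR as described above. It is an illustrative simplification (global magnitude pruning and a user-supplied `train_fn` that applies the masks and repeats the same learning-rate schedule on every call), not the exact training code behind the experiments reported later.

```python
import copy
import torch

def global_magnitude_mask(params, masks, fraction):
    """Prune `fraction` of the currently surviving weights with smallest magnitude."""
    alive = torch.cat([p.detach()[m.bool()].abs().flatten() for p, m in zip(params, masks)])
    k = max(1, int(fraction * alive.numel()))
    threshold = alive.kthvalue(k).values
    return [((p.detach().abs() > threshold) & m.bool()).float() for p, m in zip(params, masks)]

def iterative_pruning(model, train_fn, levels, fraction, mode="LRR", warmup_state=None):
    """One loop covering IMP, WR, and LRR. `train_fn(model, masks)` is assumed to train the
    masked model in place; `warmup_state` is a checkpoint from a few dense warm-up epochs."""
    masks = [torch.ones_like(p) for p in model.parameters()]
    init_state = copy.deepcopy(model.state_dict())   # theta(0), used by IMP
    train_fn(model, masks)                            # dense training round
    for _ in range(levels):
        masks = global_magnitude_mask(list(model.parameters()), masks, fraction)
        if mode == "IMP":
            model.load_state_dict(init_state)         # rewind to the initialization
        elif mode == "WR":
            model.load_state_dict(warmup_state)       # rewind to early-training weights theta(k)
        # mode == "LRR": keep the trained weights, only the LR schedule restarts
        train_fn(model, masks)
    return model, masks
```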
Based on a training set \( (x_i, y_i) \) for \( i \in [n] = \{1, 2, \ldots, n\} \), learning implies minimizing the mean squared error under gradient flow \[ \mathcal{L} = \frac{1}{2n} \sum_{i=1}^{n} (f(x_i) - y_i)^2, \quad \frac{da}{dt} = -\frac{\partial \mathcal{L}}{\partial a}, \quad \frac{dw_i}{dt} = -\frac{\partial \mathcal{L}}{\partial w_i} \quad (\forall i \in [1, d]), \] which resembles the dynamics induced by minimizing \( \mathcal{L} \) with gradient descent for sufficiently small learning rates. Note that also higher learning rates and more advanced optimizers like LBFGS converge to the same values that we derive based on gradient flow for this exemplary problem. Stochastic Gradient Descent (SGD) would introduce additional batch noise and exaggerate the issue that we will discuss for small sample sizes. As gradient flow is sufficient to highlight the mechanism that we are interested in, we focus our analysis on this case. To simplify our exposition and to establish closed form solutions, we assume that the parameters are initialized in a balanced state such that \(a(0)^2 = \sum_{i=1}^{d} w_i^2(0)\), which is preserved through training (Arora et al., 2018; Du et al., 2018) so that \(a(t)^2 = \sum_{i=1}^{d} w_i^2(t)\).

### 2.1 Training Dynamics for One-Dimensional Input (\(d = 1\))

Let us start with the case in which we have identified the correct mask by pruning away the remaining inputs and we know the ground truth structure of the problem. Studying this one-dimensional case will help us identify typical failure conditions in the learning dynamics and how these failure conditions are more likely to occur in IMP than LRR. Knowing the correct mask, our model is reduced to the one-dimensional input case (\(d = 1\)) after pruning, so that \(f(x) = a\phi(wx)\), while the target labels are drawn from \(y \sim \phi(x) + \zeta\). Since the ReLU neuron is active only when \(wx > 0\), we have to distinguish all possible initial sign combinations of \(w\) and \(a\) to analyze the learning dynamics. The following theorem states our main result, which is also visualized in Fig. 1(a). **Theorem 2.1.** Let a target \(t(x) = \phi(x)\) and network \(f(x) = a\phi(wx)\) be given such that \(a\) and \(w\) follow the gradient flow dynamics (7) with a random balanced parameter initialization and sufficiently many samples. If \(a(0) > 0\) and \(w(0) > 0\), \(f(x)\) can learn the correct target. In all other cases \((a(0) > 0, w(0) < 0)\), \((a(0) < 0, w(0) > 0)\) and \((a(0) < 0, w(0) < 0)\), learning fails. The proof in Appendix A.1 derives the closed form solutions of the learning dynamics of \(f(x)\) under gradient flow for each combination of initial signs. It establishes that training a single neuron \(f(x) = a\phi(wx)\) from scratch to learn the noisy target \(\phi(x) + \zeta\) can be expected to fail at least with probability \(3/4\) if we choose a standard balanced parameter initialization scheme where either sign is equally likely to occur for \(a(0)\) and \(w(0)\). Why should this imply a disadvantage for IMP over LRR? As we will argue next, overparameterization in the form of additional independent input dimensions \(x \in \mathbb{R}^d\) can substantially improve the learning success, as the set of samples activated by ReLU becomes less dependent on the initialization of the first element \(w_1(0)\) of \(w\). Thus, training an overparameterized neuron first enables LRR and IMP to identify the correct mask.
Yet, after reinitialization, IMP is reduced to the failure case described above with probability \(3/4\), considering the combination of initial signs of \(a(0)\) and \(w_1(0)\). In contrast, LRR continues training from the learned parameters. It thus inherits a potential sign switch from \(w_1(0) < 0\) to \(w_1(0) > 0\) if \(a(0) > 0\) during training (and pruning) the overparameterized model. Thus, the probability that LRR fails due to a bad initial sign after identifying the correct mask is reduced to \(1/2\), as also explained in Fig. 1. ### 2.2 Learning an Overparametrized Neuron (\(d > 1\)) As we have established the failure cases of the single input case in the previous section, we now focus on how overparameterization (to \(d > 1\)) can help avoid one case and thus aid LRR, while IMP is unable to benefit from the same. Multiple works have derived that convergence of the overparameterized model (\(d > 1\)) happens under mild assumptions and with high probability in case of zero noise and Gaussian input data, suggesting that overparameterization critically aids our original learning problem. For instance, (Yehudai & Ohad, 2020) have shown that convergence to a target vector \(v\) is exponentially fast \(\|w(t) - v\| \leq \|w(0) - v\| \exp(-\lambda t)\), where the convergence rate \(\lambda > 0\) depends on the angle between \(w(0)\) and \(w(t)\) assuming that \(a(0) = a(t) = 1\) is not trainable. **Insight:** For our purpose, it is sufficient that the learning dynamics can change the sign of \(w_1(0) < 0\) to \(w_1(\infty) > 0\) if \(d \geq 2\). This would correspond to the first training round of LRR and IMP. Furthermore, training the neuron with multiple inputs enables the pruning step to identify the correct ground truth mask under zero noise, as \(w_k(\infty) \approx 0\) for \(k \neq 1\). Yet, while IMP would restart training from \(w_1(0) < 0\) and fail to learn a parameterization that corresponds to the ground truth, LRR succeeds, as it starts from \(w_1(\infty) > 0\). These results assume, however, that \(a(0) = 1\) is fixed and not trainable. In the previous section, we have also identified major training failure points if \(a(0) < 0\). As it turns out, training a single multivariate neuron does not enable recovery from such a problematic initialization in general. **Lemma 2.2.** Assume that \(a\) and \(w\) follow the gradient flow dynamics induced by Eq. (2) with Gaussian iid input data, zero noise, and that initially \(0 < |a||w(0)| \leq 2\) and \(a(0)^2 = \|w(0)\|^2\). Then \(a\) cannot switch its sign during gradient flow. This excludes another relevant event that could have given IMP an advantage over LRR. Note that IMP could succeed while LRR fails, if we start from a promising initialization \(w_1(0) > 0\) and \(a(0) > 0\) but the parameters converge during the first training round to values \(w_1(0) < 0\) and \(a(0) < 0\) that would hamper successful training after pruning. This option is prevented, however, by the fact that \(a\) cannot switch its sign in case of zero noise. We therefore conclude our theoretical analysis with our main insight. **Theorem 2.3.** Assume that \(a\) and \(w\) follow the gradient flow dynamics induced by Eq. (2) with Gaussian iid input data, zero noise, and that initially \(0 < |a||w(0)| \leq 2\) and \(a(0)^2 = \|w(0)\|^2\). If \(w_1(0) < 0\) and \(a(0) > 0\), LRR attains a lower objective (7) than IMP. In all other cases, LRR performs at least as well as IMP. 
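As a concrete illustration of this mechanism, the following NumPy sketch trains and prunes the single hidden neuron with plain gradient descent and compares LRR-style continuation against IMP-style rewinding. The dimension, step size, noise level, and one-shot pruning to a single input (instead of the three pruning levels used in Sec. 2.3) are illustrative simplifications, not the exact setup of the verification experiment described next.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, steps, lr, noise = 10, 1000, 2000, 0.1, 0.1          # illustrative settings

X = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))          # X_i ~ N(0, I/d)
y = np.maximum(X[:, 0], 0.0) + noise * rng.normal(size=n)   # t(x) = relu(x_1) plus noise

def train(a, w, mask, steps=steps):
    """Gradient descent on L = 1/(2n) sum (a * relu(w.x) - y)^2 with a fixed binary mask."""
    for _ in range(steps):
        pre = X @ (w * mask)
        act = np.maximum(pre, 0.0)
        err = a * act - y
        grad_a = np.mean(err * act)
        grad_w = (err * a * (pre > 0)) @ X / n
        a, w = a - lr * grad_a, w - lr * mask * grad_w
    return a, w

def loss(a, w, mask):
    return 0.5 * np.mean((a * np.maximum(X @ (w * mask), 0.0) - y) ** 2)

# balanced initialization with a problematic sign on the target coordinate
w0 = rng.normal(size=d); w0[0] = -abs(w0[0])
a0 = np.linalg.norm(w0)                                      # a(0) > 0, a(0)^2 = ||w(0)||^2

a1, w1 = train(a0, w0.copy(), np.ones(d))                    # dense (overparameterized) round
mask = np.zeros(d); mask[np.argmax(np.abs(w1))] = 1.0        # magnitude pruning to one input

a_lrr, w_lrr = train(a1, w1.copy(), mask)                    # LRR: continue from trained weights
a_imp, w_imp = train(a0, w0.copy(), mask)                    # IMP: rewind to the initial weights
print("LRR loss:", loss(a_lrr, w_lrr, mask), " IMP loss:", loss(a_imp, w_imp, mask))
```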
### 2.3 Verifying theoretical insights based on single hidden neuron network Figure 2(a) empirically validates our theoretical insights for \(d > 1\) and compares LRR and IMP for each combination of initial signs of \(a(0)\), \(w_1(0)\). A single hidden neuron network with input dimension \(d = 10\) and random balanced Gaussian initialization is trained with LBFGS to minimize the objective function (1) for a noisy target (\(\sigma^2 = 0.01\)). Averages and 0.95 confidence intervals over 10 runs for each case are shown. In each run, we prune and train over 3 levels for 1000 epochs each, while removing the same fraction of parameters in each level to achieve a target sparsity of 90% so that only one single input remains. In line with the theory, we find that IMP is only successful in the case \(a(0) > 0\) and \(w_1(0) > 0\), while LRR succeeds as long as \(a(0) > 0\). ### 3 Experiments Our first objective is to analyze whether our theoretical intuition that LRR is more flexible in learning advantageous sign configurations transfers to more complex tasks related to standard benchmarks. Different from the simplified one hidden neuron setting, LRR and IMP also identify different masks. Thus, our second objective is to disentangle the impact of the different learning mechanisms and potential sign flips on both, the mask learning and the parameter optimization given a fixed mask. To this end, we perform experiments on CIFAR10, CIFAR100 (Krizhevsky, 2009) and Tiny ImageNet (Le & Yang, 2015) with ResNet18 or ResNet50 with IMP and LRR that start from the same initializations. Table 1 in the appendix describes the details of the setup. To strengthen the IMP baseline, we in fact study WR and thus rewind the parameters to values that we have obtained after a sufficiently high number of training epochs of the dense model, which is in line with successfully obtaining matching networks as found by Paul et al. (2023). **LRR modifications.** Different from our theoretical investigations, we have to take more complex factors into account that influence the training process like learning rate schedules and batch normalization (BN). We found that the originally proposed training schedule of LRR can suffer from diminishing BN weights that impair training stability on larger scale problems like CIFAR100 and Tiny ImageNet (see Fig. 8 and Fig. 11 in the appendix). To avoid this issue, we propose to rewind BN parameters when the mask is decoupled from parameter optimization. In all our experiments, we introduce warmup after each pruning iteration, which increases the flexibility of LRR to optimize different masks as well as improves baseline performance (see Fig. 8 in appendix). Fig. 4(c, d) provides an example where these modifications make LRR competitive with IMP on the IMP mask. We start our investigations with observations regarding the performance of LRR and IMP in different learning scenarios before we isolate potential mechanisms that govern these observations like sign flips and network overparameterization. Our experiments establish and confirm that LRR outperforms IMP on all our benchmarks. Does this performance boost result from an improved mask identification or stronger parameter optimization? LRR identifies a better mask. Even though the mask identification of IMP is coupled to its training procedure, Fig. 3(a, b) show that the mask that has been identified by LRR also achieves a higher performance than the IMP mask on CIFAR10 when its parameters are optimized with IMP. 
Similar improvements are observed on CIFAR100 (Fig. 3(c, d)) except at high sparsities (> 95%) where the coupling of the mask and parameter optimization is more relevant. Figure 3: The sparse mask learnt by LRR is superior and the performance of IMP is improved in combination with the LRR mask on (a, b) CIFAR10 and (c, d) CIFAR100. LRR is more flexible in optimizing different masks. According to Fig. 4(a, b), training LRR with the IMP mask (blue curve) is able to improve over IMP for CIFAR10. While the original LRR is less competitive for learning with the IMP mask on CIFAR100, LRR with BN parameter rewinding after each pruning iteration outperforms IMP both on CIFAR10 and CIFAR100 even at high sparsities. Similar results for Tiny ImageNet are presented in Fig. 2(d). Yet, are IMP and LRR masks sufficiently diverse? Since IMP and LRR masks are identified based on a similar magnitude based pruning criterion, the other mask and parameter initialization might still carry relevant information for the respective optimization task. In order to completely decouple the sparse mask from the parameter optimization and the initialization, we also study the LRR and IMP parameter optimization on a random mask. Figure 4: LRR improves parameter optimization within the mask learnt by IMP for (a, b) CIFAR10 and (c, d) CIFAR100. Random Masks. For the same randomly pruned mask with balanced sparsity ratios (Gadhikar et al., 2023) and identical initialization, we compare training from initial values (IMP-rand) or training from the values obtained by the previous training–pruning iteration (LRR-rand) (see Fig. 2(b, c)). Rewinding the BN parameters assists gradual random pruning and improves optimization, thus, LRR-rand (rewind BN) outperforms IMP-rand. This confirms that LRR seems to employ a more flexible parameter optimization approach irrespective of task specific masks. Our theoretical insights align with the observation that LRR learns network parameters more reliably than IMP. The main mechanism that strengthens LRR in our toy model is the fact that it inherits parameter signs that are identified by training an overparameterized model that is sufficiently flexible to correct initially problematic weight signs. To investigate whether a similar mechanism supports LRR also in a more complex setting, we study the sign flip dynamics. Figure 5: (top) The pruning iteration at which the parameter signs do not change anymore for LRR (purple) is much earlier than IMP (orange). (bottom) The number of times a parameter switches sign over pruning iterations (a) CIFAR10 (b) CIFAR100 and (c) Tiny ImageNet. **LRR enables early and stable sign switches.** Fig. 4 confirms that LRR corrects initial signs primarily in earlier iterations when the mask is denser and the model more overparameterized. Moreover, the signs also stabilize early and remain largely constant for the subsequent pruning iterations (Fig. 5). Learnt parameters at consecutive sparsity levels in LRR tend to share the same sign in later iterations, but IMP must align initial signs in each pruning iteration, leading to unstable, back and forth flipping of learnt signs across sparsity levels. Overall, LRR changes more signs than IMP at lower sparsities on CIFAR10, yet, the effect is more pronounced in larger networks for CIFAR100 and Tiny ImageNet, where IMP fails to identify stable sign configurations even at high sparsities (see also Fig. 13 in appendix). These results apply to settings where the mask and parameter learning is coupled. 
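The sign-flip statistics discussed above can be computed with a small utility like the sketch below; the dictionaries keyed by layer name are an illustrative convention rather than the tooling used for these experiments.

```python
import torch

def count_sign_flips(prev_params, curr_params, mask):
    """Count surviving weights whose sign differs between two consecutive pruning levels.
    `prev_params`, `curr_params`, and `mask` map layer names to tensors (illustrative)."""
    flips, alive = 0, 0
    for name, m in mask.items():
        keep = m.bool()
        prev_sign = torch.sign(prev_params[name][keep])
        curr_sign = torch.sign(curr_params[name][keep])
        flips += (prev_sign != curr_sign).sum().item()
        alive += keep.sum().item()
    return flips, alive   # e.g. report flips / max(alive, 1) per pruning iteration
```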
Constraining both IMP and LRR to the same mask, LRR also appears to be more flexible and is able to improve performance by learning a larger fraction of parameter signs earlier than IMP (see Fig. 5(b)). For random masks, generally more unstable sign flips occur due to the fact that the mask and parameter values are not aligned well. Yet, LRR appears to be more stable and is able to flip more signs overall (Fig. 14(a, b) in appendix). Even with the improved LRR mask, IMP seems unable to perform effective sign switches (Fig. 14(c, d) in appendix). Yet, maybe the LRR optimization can simply tolerate more sign switches? Furthermore, is LRR only able to switch signs in early training rounds due to the higher overparameterization of the networks? To answer these questions and learn more about the causal connection between sign switches and learning, next we study the effect of sign perturbations. Figure 6: 30% of signs in each layer are flipped randomly at 20% sparsity for LRR and IMP (dotted) on (a, b) CIFAR10 and (c, d) CIFAR100. Solid lines denote baselines. **LRR recovers from random sign perturbations.** In order to characterize the effect of correct parameter signs on mask identification, we randomly perturb signs at different levels of sparsity for both LRR and IMP. Sign perturbation at a low sparsity has little effect on CIFAR10, and both LRR and IMP are able to recover, achieving baseline accuracy (Fig. 6a, b). For the more complex CIFAR100 dataset, signs have a stronger influence on masks and neither LRR nor IMP can fully recover to baseline performance. However, LRR is still able to achieve a higher performance than the IMP baseline, but IMP struggles after perturbing initial signs, as the mask does not fit its initialization (Fig. 6c, d). Fig. 7(a, b) shows results for perturbing a larger fraction of signs at a much higher sparsity, i.e., 83%. LRR recovers better than IMP at later sparsities on CIFAR10. Interestingly, on CIFAR100, LRR suffers more than IMP from the sign perturbation, potentially due to a lack of overparameterization at high sparsity. LRR recovers slowly but still achieves baseline performance beyond 95% sparsity. The performance of subsequent masks obtained after perturbing signs reaffirms that parameter signs strongly influence the quality of the mask identification and that LRR is capable of rearranging signs in order to find a better mask and optimize the corresponding parameters effectively. Yet, LRR requires training time and initial overparameterization to be effective. The interplay of magnitude and signs. Recent analyses of IMP (Frankle et al., 2020b; Zhou et al., 2019) have found that signs that are learnt at later iterations are more informative and initializing with them improves IMP. In line with this insight, Fig. 7(c) highlights that rewinding only weight amplitudes while maintaining the learnt signs improves over IMP. Yet, according to Frankle et al. (2020b), the combination with learned weight magnitudes can further strengthen the approach. Our next results imply that the magnitudes might be more relevant for the actual mask learning than the parameter optimization. We find that the learnt signs and the LRR mask contain most of the relevant information. Fig. 7(c) confirms that if we initialize IMP with the LRR signs and restrict it to the LRR mask, we can match the performance of LRR despite rewinding the weight magnitudes in every iteration.
These results imply that a major drawback of IMP as a parameter optimization procedure could be that it forgets crucial sign information during weight rewinding. Figure 7: 80% signs in each layer are flipped randomly at 83% sparsity for LRR and IMP (dashed) on (a) CIFAR10 and (b) CIFAR100. Rewinding only magnitudes while using the initial weights of IMP with the learnt LRR masks and signs on (c) CIFAR10 and (d) CIFAR100. 4 Conclusions Learning Rate Rewinding (LRR), Iterative Magnitude Pruning (IMP) and Weight Rewinding (WR) present cornerstones in our efforts to identify lottery tickets and sparsify neural networks, but the reasons for their successes and limitations are not well understood. To deepen our insights into their inner workings, we have highlighted a mechanism that gives LRR a competitive edge in structure learning and parameter optimization. In a simplified single hidden neuron model, LRR provably recovers from initially problematic sign configurations by inheriting the signs from a trained overparameterized model, which is more robust to different initializations. This main theoretical insight also applies to more complex learning settings, as we show in experiments on standard benchmark data. Accordingly, LRR is more flexible in switching signs during early pruning–training iterations by utilizing the still available overparameterization. As a consequence, LRR identifies not only highly performant masks. More importantly, it can also optimize parameters effectively given diverse sets of masks. In future, we envision that insights into the underlying mechanisms like ours could inspire the development of more efficient sparse training algorithms that can optimize sparse networks from scratch. ACKNOWLEDGEMENTS We gratefully acknowledge funding from the European Research Council (ERC) under the Horizon Europe Framework Programme (HORIZON) for proposal number 101116395 SPARSE-ML. REFERENCES Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. In *International Conference on Machine Learning*, pp. 244–253. PMLR, 2018. Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. A convergence analysis of gradient descent for deep linear neural networks. In *International Conference on Learning Representations*, 2019. Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 2019. Etienne Boursier, Loucas Pillaud-Vivien, and Nicolas Flammarion. Gradient flow dynamics of shallow reLU networks for square loss and orthogonal inputs. In *Advances in Neural Information Processing Systems*, 2022. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. *arXiv preprint arXiv:2303.12712*, 2023. Rebekka Burkholz, Nilanjana Laha, Rajarshi Mukherjee, and Alkis Gotovos. On the existence of universal lottery tickets. In *International Conference on Learning Representations*. Rebekka Burkolz. Most activation functions can win the lottery without excessive depth. In *Advances in Neural Information Processing Systems*, 2022. Xiangyu Chang, Yingcong Li, Samet Oymak, and Christos Thrampoulidis. Provable benefits of overparameterization in model compression: From double descent to pruning neural networks. 
In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 6974–6983, 2021. Arthur da Cunha, Emanuele Natale, and Laurent Viennot. Proving the lottery ticket hypothesis for convolutional neural networks. In *International Conference on Learning Representations*, 2022. Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R Klivans, and Mahdi Soltanolkotabi. Approximation schemes for relu regression. In *Conference on learning theory*, pp. 1452–1485. PMLR, 2020. Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In *International Conference on Learning Representations*. Simon S Du, Wei Hu, and Jason D Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. *Advances in neural information processing systems*, 31, 2018. Utku Evci, Yani Ioannou, Cem Keskin, and Yann Dauphin. Gradient flow in sparse neural networks and how lottery tickets win. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 6577–6586, 2022. Damien Ferbach, Christos Tsirigotis, Gauthier Gidel, and Joey Bose. A general framework for proving the equivariant strong lottery ticket hypothesis. In *International Conference on Learning Representations*, 2023. Jonas Fischer, Advait Gadhikar, and Rebekka Burkholz. Lottery tickets with nonzero biases. *arXiv preprint arXiv:2110.11150*, 2021.
VAvSUG3hwI
If the agent has to be trained step by step, could this affect how useful the solution is in situations where the agent must work with many teammates simultaneously? For instance, think of an online chatbot on an e-commerce platform where customers show up all together.
One by One, Continual Coordinating with Humans via Hyper-Teammate Identification Anonymous authors Paper under double-blind review Abstract One of the primary objectives in modern artificial intelligence researches is to empower agents to effectively coordinate with diverse teammates, particularly human teammates. Previous studies focused on training agents either with a fixed population of pre-generated teammates or through the co-evolution of distinct populations of agents and teammates. However, it is challenging to enumerate all possible teammates in advance, and it is costly, or even impractical to maintain such a sufficiently diverse population and repeatedly interact with previously encountered teammates. Additional design considerations, such as prioritized sampling, are also required to ensure efficient training. To address these challenges and obtain an efficient human-AI coordination paradigm, we propose a novel approach called Concord. Considering that human participants tend to occur in a sequential manner, we model the training process with different teammates as a continual learning framework, akin to how humans learn and adapt in the real world. We propose a mechanism based on hyper-teammate identification to prevent catastrophic forgetting while promoting forward knowledge transfer. Concretely, we introduce a teammate recognition module that captures the identification of corresponding teammates. Leveraging the identification, a well-coordinated AI policy can be generated via the hyper-network. The entire framework is trained in a decomposed policy gradient manner, allowing for effective credit assignment among agents. This approach enables us to train agents with each generated teammate or humans one by one, ensuring that agents can coordinate effectively with concurrent teammates without forgetting previous knowledge. Our approach outperforms multiple baselines in various multi-agent benchmarks, either with generated human proxies or real human participants. 1 Introduction Cooperative Multi-Agent Reinforcement Learning (MARL) has made significant progress in recent years, enabling multiple agents to work together towards a shared goal in diverse domains such as active voltage control (Wang et al., 2021a) and dynamic algorithm configuration (Xue et al., 2022b). A variety of MARL solutions have been proposed, including value-based approaches such as VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2018), and QPLEX (Wang et al., 2021), policy gradient methods such as MADDPG (Lowe et al., 2017) and MAPPO (Yu et al., 2022), and newer variants like Transformer (Wen et al., 2022). However, building AI agents that can effectively coordinate with unseen teammates, especially human teammates, remains a significant challenge (Klein et al., 2004; Mutlu et al., 2013; Strouse et al., 2021; Koster et al., 2022). This ability is essential for numerous applications, including human-machine communication (Guzman & Lewis, 2020), cooperative autonomous vehicles (Toghi et al., 2021), and assistive robot control (Losey et al., 2022). Previous approaches focused on building effective behavior models from human data, which is inefficient for complex problems (Kidd & Breazeal, 2008) and may raise privacy concerns (Pan et al., 2019). 
Recent approaches, such as ad-hoc teamwork (Mirsky et al., 2022), zero-shot coordination (Treutlein et al., 2021), and few-shot teamwork (Fosong et al., 2022), have shown remarkable coordination abilities in a wide range of tasks, such as Overcooked (Carroll et al., 2019). However, these methods may suffer from over-fitting to their training teammates and struggle to coordinate effectively with unseen agents (Mahajan et al., 2022). Thus, the challenge of building AI agents capable of generalizing to unseen teammates in the open-world (Zhou, 2022) remains. Training with different teammates is a promising approach to tackling the mentioned issue, involving the generation of diverse teammates and an efficient training paradigm. To achieve the former, one approach is to use hand-crafted policies (Xie et al., 2021; Papoudakis et al., 2021), special object regularizer (Derek & Isola, 2021; Lupu et al., 2021), or Population-Based Training (PBT) (Strouse et al., 2021; Xue et al., 2022a; Zhao et al., 2023). Regarding the training paradigm, a naive way is self-play (Tesauro, 1994; Silver et al., 2018), where the agent iteratively improves via playing against itself. Fictitious Co-Play (FCP) (Heinrich et al., 2015; Strouse et al., 2021) trains agents to be the best response to both the fully-trained agents and their checkpoints. These approaches have demonstrated significant progress in benchmarks such as Overcooked (Carroll et al., 2019) and Hanabi (Lupu et al., 2021), offering promising prospects for human-AI coordination and cooperation. However, the aforementioned methods pre-generate various teammates and necessitate access to all of them during training, which might be unfeasible in the real world. On one hand, the task of enumerating all potential teammates in advance poses significant challenges. Maintaining a sufficiently diverse population has already placed a substantial demand on computing and storage resources, incurring considerable costs. Furthermore, if agents are required to cooperate with an unseen teammate, learning from scratch with all previous seen teammates (especially human participants) would be even more wasteful, considering the limitations of global time differences and economic costs. On the other hand, learning a generalized agent via interactions with a population of teammates requires meticulous design to ensure efficient training, such as prioritized sampling (Zhao et al., 2023). To address the aforementioned challenges, we draw inspiration from the manner in which humans learn to coordinate with diverse teammates. Human participants, with whom agents are trained to cooperate, typically emerge sequentially. Instead of learning to cooperate with all agents simultaneously, humans continually adapt to new teammates while retaining knowledge from previous interactions (Hadsell et al., 2020). In accordance with this idea, we first formulate the problem as a Multi-Agent Continual Markov Decision Process (MACMDP), where the controlled agents are trained to coordinate effectively with different teammates that appear sequentially. We observe that naively applying single-agent continual learning methods to MARL is inefficient, and typical MARL approaches are inadequate for continual learning scenarios (RQ1 in Sec. 4.1). Therefore, we propose a general framework called Concord (abbreviation for Continual coordination) to enable efficient continual training of AI agents and solve the MACMDP. 
Concretely, we introduce a teammate recognizer that captures the identification information of each corresponding teammate and represents it as a latent embedding. We then utilize the learned teammate embeddings based on hypernetworks (Ha et al., 2017; Oswald et al., 2020) to generate the policy parameters of each controlled agent. Finally, we use value decomposition to facilitate credit assignment among agents, enabling our framework to handle multiple controlled agents rather than only one. To evaluate the effectiveness of Concord, we conduct extensive experiments on various cooperative multi-agent benchmarks including Overcooked (Carroll et al., 2019) and StarCraft Multi-Agent Challenge benchmark (SMAC) (Samvelyan et al., 2019). We compare Concord against multiple methods in different scenarios, and the results demonstrate that Concord significantly outperforms these methods, achieving superiority across a range of evaluation metrics. Our main contributions are: • To the best of our knowledge, this is the first time that human-AI coordination challenge is explored in a continual manner via our formulated MACMDP. • We propose a continual MARL paradigm based on hyper-teammate identification, which is capable of effectively coordinating with diverse teammates and striking a balance between forward transferring and avoiding forgetting. • Empirical studies on various benchmarks demonstrate the effectiveness of Concord in continuously coordinating with diverse teammates, including real human teammates. 2 PROBLEM FORMALIZATION The purpose of human-AI coordination is for AI agents to cooperate well with diverse human teammates. Previous works relied on a fixed policy population, involving extensive training of AI agents, but this approach is computationally expensive and does not align with real-world scenarios where humans are encountered sequentially. Currently, there is limited research on training AI agents to work well with humans in the continual setting. To address this gap, we introduce a problem formulation for continual multi-agent coordination learning. Specifically, we formalize the continual multi-agent coordination learning problem as a Multi-Agent Continual Markov Decision Process (MACMDP) consisting of a tuple $\mathcal{M} := \{I, S, A, P, R, \Theta, \mu, \gamma, T\}$, where $I$ is the set of AI agents, $S$ and $A$ are the state and action space, respectively. $P$ is the transition function, $R$ is the reward function, $\gamma \in [0, 1)$ is the discount factor and $T$ is the continual learning length. Besides, $\Theta$ is the teammate type space, and $\mu$ is an indicator function which means that $\mu(\zeta) \in \Theta(1 \leq \zeta \leq T)$ denotes the teammate policy encountered at the $\zeta$-th stage. For the sake of brevity, we denote the policy determined by $\mu$ in the $\zeta$-th stage as $\pi^H_\zeta$. These teammates are encountered sequentially, of which one diagram is provided in App. B.1. In human-AI coordination tasks, cooperating with different humans can be defined as different tasks, and each human teammate policy $\pi^H_\zeta$ can be represented by a task indicator $\zeta$. In MACMDP, AI agents cannot interact with the previous human teammates, which corresponds to the assumption that the agent cannot interact with previous environments in the common continual reinforcement learning setting (Khetarpal et al., 2022), but are expected to remember how to cooperate with all previous human teammates. 
When handling the task $\zeta$, the AI policy $\pi^A$ and human policy $\pi^H_\zeta$ select actions $a^A, a^H$ according to the current state $s$, respectively. The next environment state $s'$ is obtained according to the environment transition function $P(s'|s, a^A, a^H)$ and a global reward is returned by the global reward function $R(s, a^A, a^H)$. We use the discounted return $$J_\zeta(\pi^A, \pi^H_\zeta) = \mathbb{E}_{s,a^A,a^H} \left[ \sum_t \gamma^t R(s, a^A, a^H) \right]$$ as the objective on task $\zeta$. Under this problem formalization, our goal is to find the AI agent policy $\pi^A$ that can maximize the expected return with different human teammates after the continual learning process, without forgetting the previous cooperating humans, which means to maximize $\sum_{\zeta=1}^{T} J_\zeta(\pi^A, \pi^H_\zeta)$. 3 Method 3.1 Teammate-adaptive AI Framework AI agents are anticipated to identify different teammates and adapt accordingly, enabling them to acquire the ability to coordinate continually. Additionally, the agents should be able to remember previous teammates while learning to coordinate with new ones. We here propose a comprehensive pipeline (depicted in Fig. 1 and App. B) that involves designing a human-aware actor-critic structure for policies training, building teammate representations for teammates capturing, and implementing an anti-forgetting mechanism to address the forgetting problem during continual learning. Human-Aware Actor-Critic In a human-AI coordination scenario, the decision-making process of the overall multi-agent system is jointly determined by both the AI agents and the humans. To enable AI agents to identify different teammates, we introduce a teammate-adaptive structure, an extension and variant of hyper-network (Ha et al., 2017; Oswald et al., 2020). This structure is specifically designed for continual human-AI coordination tasks and can generate actor networks that can effectively coordinate with specific teammates by leveraging consistent teammate representations. Concretely, the parameters of the AI actor networks are obtained through $\theta^A_i = f^\text{hyper}_\psi(z_\zeta)$, where $f^\text{hyper}_\psi$ represents the hyper-network parameterized by $\psi$. In essence, we incorporate the ability to adapt to different teammates into the hyper-network, so that it can generate appropriate actor networks for cooperating with the human teammates based on the input of different teammate representations $z_\zeta$. More details are reported in App. B.2. To better assess the contribution of AI to the overall performance, we here design a linearly decomposed centralized critic like DOP (Wang et al., 2021b). Therefore, we can create individual critics for the uncontrollable human agents, and model the global $Q$ function by combining the individual $Q$ values of both human and AI agents. This approach enables the global $Q$ function to more accurately reflect the joint decision-making process of human-AI coordination, and facilitate credit assignment between AI and humans, allowing us to isolate the contribution of the AI agents and provide more precise learning signals for updating $\pi^A$. 
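To make the teammate-adaptive actor concrete, the following is a minimal PyTorch sketch of a hyper-network that maps a teammate embedding $z_\zeta$ to the parameters of a small actor and evaluates it with the functional API. The class name, layer sizes, and two-layer actor shape are illustrative assumptions rather than the architecture reported in App. B.2. In the multi-agent case, one such parameter set would be generated per controlled agent $i$, and the losses given next are back-propagated through the generated parameters into both $\psi$ and $z_\zeta$.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorHyperNetwork(nn.Module):
    """Maps a teammate embedding z to the weights of a 2-layer actor pi(a | o)."""
    def __init__(self, z_dim, obs_dim, hidden_dim, n_actions):
        super().__init__()
        self.shapes = [(hidden_dim, obs_dim), (hidden_dim,),
                       (n_actions, hidden_dim), (n_actions,)]
        n_params = sum(math.prod(s) for s in self.shapes)
        self.generator = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, n_params))

    def forward(self, z):
        flat, params, idx = self.generator(z), [], 0
        for shape in self.shapes:
            numel = math.prod(shape)
            params.append(flat[idx:idx + numel].view(shape))
            idx += numel
        return params  # [W1, b1, W2, b2] of the generated actor

def actor_logits(params, obs):
    W1, b1, W2, b2 = params
    h = F.relu(F.linear(obs, W1, b1))
    return F.linear(h, W2, b2)   # action logits of the AI actor for this teammate

# z_zeta is a learnable per-teammate embedding, updated end-to-end through the actor loss.
hyper = ActorHyperNetwork(z_dim=16, obs_dim=32, hidden_dim=64, n_actions=6)
z_zeta = nn.Parameter(torch.randn(16))
obs = torch.randn(4, 32)
logits = actor_logits(hyper(z_zeta), obs)
```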
Specifically, the critic and actor networks are updated by minimizing the critic loss $$L_{\text{critic}} = \mathbb{E}_{\tau,a^A,a^H} \left[ \left(Q^\phi_{\text{tot}}(\tau, a^A, a^H) - y^\tau\right)^2 \right]$$ and the actor loss $$L_{\text{actor}} = -\mathbb{E}_{\pi^A} \left[ \sum_i k_i(\tau) \log \pi^A_i \left(a^A_i|\tau^A_i; \theta^A_i\right) Q_i^{\phi^A}(\tau, a^A_i)\right],$$ respectively, where $k_i(\tau)$ represents the coefficients of value decomposition generated by an additional hyper-network that takes the historical information $\tau$ as input. The global value $Q_{\text{tot}}^{\phi}(\tau, a^A, a^H)$ is obtained through $Q_{\text{tot}}^{\phi}(\tau, a^A, a^H) = \sum_i k_i(\tau)Q_i^{\phi^A}(\tau, a^A_i) + \sum_j k_j(\tau)Q_j^{\phi^H}(\tau, a^H_j) + b(\tau)$, where $k_i(\tau)$ and $b(\tau)$ are generated by learnable networks whose inputs are the global state, and $k_i(\tau)$ is restricted to be non-negative to ensure monotonicity. Here $y^\tau$ denotes the target value $y^\tau = r + \gamma \mathbb{E}_{a^A_i, a^H_j} \left[ Q_{\text{tot}}^{\phi'}(\tau', a^A_i, a^H_j) \right]$. Note that we refer to the parameters of the critic network as $\phi$, and those of the target network as $\phi'$.

Figure 1: The overall framework of Concord. During the training phase, a human-aware actor-critic is trained for the AI to coordinate with humans effectively, where the AI actors are generated via a hyper-network. Teammate representations are trained via backpropagation during training and generated by the recognizer in the test phase. When testing, the AI first interacts with the human to collect a few trajectories, then obtains the corresponding teammate representation for decision-making.

Teammate Representation Learning To enable our method to adapt to different teammates, as mentioned above, we use one hyper-network that can generate adaptive actor networks by inputting different teammate representations. Here, the teammate representations serve the crucial role of recognizing the behavior patterns of different human teammates. To adopt this teammate representation scheme, we have two necessary prerequisites: 1) the teammate representations can help the hyper-network generate an AI actor that cooperates well with humans; 2) the teammate representations should be obtainable with a minimal amount of interaction with humans during the test phase. To fulfill the first prerequisite, we design an end-to-end update scheme for teammate representations. In particular, we maintain a separate teammate representation for each group of human teammates encountered during the continual learning process. This teammate representation $z_\zeta$ is then fed into the hyper-network to obtain the parameters of the AI actor network $\theta^A_\zeta$, which is expected to coordinate effectively with $\pi^H_\zeta$. Thus, we directly update the hyper-network and teammate representations with the gradients back-propagated from optimizing the actor loss $L_{\text{actor}}$. In terms of the second prerequisite, we additionally design a recognizer network that can reconstruct learned teammate representations from limited interaction data. This recognizer network is typically implemented as a trajectory encoder that takes the trajectories of human teammates as input and outputs the prediction of the corresponding teammate representations. Specifically, the trajectory encoder's inputs differ between the training and test processes.
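As a concrete illustration of the linearly decomposed critic introduced above, the following is a minimal PyTorch sketch of $Q_{\text{tot}}(\tau, a^A, a^H) = \sum_i k_i(\tau) Q_i(\tau, a_i) + b(\tau)$ together with the TD target $y^\tau$ used in $L_{\text{critic}}$. The module names, tensor shapes, and the use of the global state as the mixer input are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearMixer(nn.Module):
    """Sketch of Q_tot(tau, a) = sum_i k_i(tau) * Q_i(tau, a_i) + b(tau), with k_i >= 0."""
    def __init__(self, state_dim, n_agents):
        super().__init__()
        self.k_net = nn.Linear(state_dim, n_agents)   # coefficients k_i(tau)
        self.b_net = nn.Linear(state_dim, 1)          # bias b(tau)

    def forward(self, state, q_values):
        # q_values: (batch, n_agents) individual Q_i for both AI and human agents.
        k = torch.abs(self.k_net(state))              # non-negative -> monotonic mixing
        b = self.b_net(state).squeeze(-1)
        return (k * q_values).sum(dim=-1) + b         # Q_tot: (batch,)

# Toy critic update with made-up shapes (batch of 8, two AI agents + one human agent).
state_dim, n_agents, gamma = 20, 3, 0.99
mixer = LinearMixer(state_dim, n_agents)
target_mixer = LinearMixer(state_dim, n_agents)
target_mixer.load_state_dict(mixer.state_dict())      # phi' <- phi

state, next_state = torch.randn(8, state_dim), torch.randn(8, state_dim)
q_i, next_q_i = torch.randn(8, n_agents), torch.randn(8, n_agents)
reward = torch.randn(8)

q_tot = mixer(state, q_i)
with torch.no_grad():
    y = reward + gamma * target_mixer(next_state, next_q_i)   # TD target y^tau
critic_loss = F.mse_loss(q_tot, y)                            # L_critic
critic_loss.backward()
```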
In the training process, we encode trajectories of varying lengths and minimize a reconstruction loss denoted as \( L_{\text{recon}} = \sum_{t=0}^{T_r-1} \lambda^{T_r-1-t} \| z_t^\zeta - z_\zeta \|_2^2 \). This loss function encourages the encoder to effectively reconstruct teammate representations from trajectories of different lengths. During the test phase, we roll out a few trajectories (typically 2 in our experiments) with the teammate. Each entire trajectory is encoded by the encoder, yielding a reconstructed representation per trajectory, and these are averaged to yield the final recognition outcome, thereby achieving few-shot recognition of human teammates. **Anti-forgetting Mechanism** One of the key challenges in continual multi-agent cooperation learning is to avoid catastrophic forgetting of previous human teammates as the AI agent learns and adapts to new ones. Specifically, within our framework, we aim to combat the forgetting problem in both the recognizer network \( f_{\eta}^{\text{enc}} \) and the hyper-network \( f_{\psi}^{\text{hyper}} \) simultaneously, which correspond to the ability to recognize and to coordinate with the previous human teammates, respectively. To tackle this challenge, we employ an anti-forgetting mechanism to enforce regularization on both the \( f_{\eta}^{\text{enc}} \) and \( f_{\psi}^{\text{hyper}} \) networks during the continual learning process. For the recognizer network, we utilize a small replay buffer to store the teammate representations and a few interaction trajectories of the previous human teammates. During cooperation learning with the \( \zeta \)-th human teammates, the actual loss of the recognizer network is \( \sum_{i=1}^{\zeta} L_{\text{recon}}^i \), where \( L_{\text{recon}}^i \) is the reconstruction loss for the \( i \)-th teammate representation. Similarly, when training the hyper-network on the \( \zeta \)-th cooperation task, we additionally include a regularization loss \( L_{\text{reg}} = \frac{1}{\zeta-1} \sum_{i=1}^{\zeta-1} \| f_{\psi}^{\text{hyper}}(z_i) - f_{\psi_{\zeta-1}}^{\text{hyper}}(z_i) \|_2^2 \) alongside the actor loss to prevent \( f_{\psi}^{\text{hyper}} \) from forgetting the previous cooperation knowledge, where \( f_{\psi_{\zeta-1}}^{\text{hyper}} \) denotes the hyper-network obtained after the \( (\zeta - 1) \)-th task. In summary, we propose an anti-forgetting mechanism that effectively safeguards against the forgetting of past knowledge in both the hyper-network and the recognizer network, through the preservation of the updated hyper-network from the previous round and a small memory pool containing few-shot interactions and teammate representations of previous human teammates. ### 3.2 Overall Algorithm In this section, we provide an overall description of the procedure of our approach under the continual coordination learning setting. The pseudo-code for both the training and test phases of our algorithm can be found in App. B.3. During the training phase, we conduct cooperation learning with human teammates one by one from \( \pi_1^H \) to \( \pi_T^H \), where we learn the hyper-network and the recognizer network together. In particular, when we cooperatively train with the \( \zeta \)-th human teammates, we first optimize the human-aware actor-critic architecture by minimizing the \( L_{\text{critic}} \) and \( L_{\text{actor}} \) losses, where the teammate representation \( z_\zeta \) is updated in an end-to-end manner with the back-propagated gradient \( \nabla_{z_\zeta} L_{\text{actor}} \).
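The anti-forgetting regularizer above can be sketched as follows: a frozen copy of the hyper-network saved after task $\zeta-1$ provides the targets $f^{\text{hyper}}_{\psi_{\zeta-1}}(z_i)$, and the current hyper-network is penalized for drifting away from them on the stored teammate embeddings. The stand-in linear hyper-network and all names below are illustrative assumptions, not the authors' code.

```python
import copy
import torch
import torch.nn as nn

def regularization_loss(hyper_net, prev_hyper_net, old_embeddings):
    """Sketch of L_reg: keep the current hyper-network's outputs on stored teammate
    embeddings close to those of the frozen hyper-network saved after task zeta-1."""
    loss = torch.zeros(())
    for z_i in old_embeddings:
        with torch.no_grad():
            old_params = prev_hyper_net(z_i)          # f^hyper_{psi_{zeta-1}}(z_i)
        loss = loss + ((hyper_net(z_i) - old_params) ** 2).sum()
    return loss / max(len(old_embeddings), 1)

# Stand-in hyper-network (a single linear map) just to make the sketch executable.
hyper_net = nn.Linear(16, 128)
prev_hyper_net = copy.deepcopy(hyper_net).requires_grad_(False)   # snapshot after task zeta-1
old_embeddings = [torch.randn(16) for _ in range(3)]              # stored z_1 .. z_{zeta-1}

l_reg = regularization_loss(hyper_net, prev_hyper_net, old_embeddings)
# Hyper-network objective on task zeta: L_hyper = L_actor + beta_reg * L_reg
```

The recognizer's replayed reconstruction loss can be accumulated analogously over the stored (trajectory, representation) pairs in the small replay buffer.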
For the hyper-network \( f_{\psi}^{\text{hyper}} \), we update it with the actor loss \( L_{\text{actor}} \) plus the regularization loss \( L_{\text{reg}} \), which means that the actual loss function for the hyper-network is \[ L_{\text{hyper}} = L_{\text{actor}} + \beta_{\text{reg}} L_{\text{reg}} = L_{\text{actor}} + \frac{\beta_{\text{reg}}}{\zeta-1} \sum_{i=1}^{\zeta-1} \| f_{\psi}^{\text{hyper}}(z_i) - f_{\psi_{\zeta-1}}^{\text{hyper}}(z_i) \|_2^2, \] where \( \beta_{\text{reg}} \) is the coefficient balancing the two loss terms. After the training of the hyper-network and teammate representation, we further optimize the recognizer network by reconstructing the representations of all the human teammates seen so far, i.e., we update \( f_{\eta}^{\text{enc}} \) by minimizing the reconstruction loss \( \sum_{i=1}^{\zeta} L_{\text{recon}}^i \), where \( L_{\text{recon}}^i \) indicates the reconstruction loss for the teammate representation of the \( i \)-th group of human teammates. Afterwards, we move on to the collaborative training with the next human teammates. In brief, the collaborative training with each group of human teammates is decomposed into two separate stages: we train the hyper-network and teammate representations in the first stage and update the recognizer network in the second stage. When it comes to the test phase, we first allow the AI agents a few interactions with the humans. The sampled human trajectories are fed into the recognizer \( f_{\eta}^{\text{enc}} \) to obtain the predicted teammate representation, which is then input into the hyper-network \( f_{\psi}^{\text{hyper}} \) to obtain the actor network for the AI agents. This final actor network is employed to coordinate with the humans.

Figure 2: The overall performance of distinct algorithms, where Concord achieves comparable coordination ability with Oracle and outperforms other baselines.

4 EXPERIMENTS In this section, we first conduct experiments on the widely-used Overcooked (Carroll et al., 2019) to verify whether Concord can effectively coordinate with human teammates under the continual setting. Then, we evaluate Concord's ability to coordinate with multiple teammates through experiments on SMAC (Samvelyan et al., 2019). The human teammates involved in these experiments consist of both human-like models and real human participants. We compare the proposed Concord with the following methods. 1) Vanilla. The original policy gradient method DOP (Wang et al., 2021b), which is trained sequentially with teammates arriving one by one, without any extra design. 2) Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017). A regularization-based method that selectively reduces weight plasticity to prevent catastrophic forgetting in the continual setting. 3) Continual Learning with Experience And Replay (CLEAR) (Rolnick et al., 2018). CLEAR stores the data from old tasks and utilizes it for updating networks while learning new tasks. Specifically, CLEAR samples data not only from the buffer of the current task, but also from another one containing samples selected from all prior tasks with equal probability. 4) Oracle (Huang et al., 2021). A multi-task learning baseline trained with all the teammates at the same time rather than one by one. Besides, it knows which specific model it is to coordinate with when testing, which is the key oracle information in continual human-AI coordination.
Note that our Oracle adopts a similar paradigm to ZSC methods: 1) Oracle first constructs a diverse population of teammates trained from different algorithms, which is similar to typical ZSC approaches (Strouse et al., 2021; Yu et al., 2023; Zhao et al., 2023). 2) The training processes of Oracle and ZSC are both similar to multi-task learning. Detailed settings are provided in App. C. 4.1 Continual Coordination with Human-like Teammates First, we evaluate continual coordination ability with human-like teammates on Overcooked (Carroll et al., 2019), where different layouts present unique challenges that can be overcome through effective coordination. The players are required to put three onions in a pot, collect an onion soup from the pot after 20 time steps, and deliver the dish to a counter. The agents receive 20 points for each dish served, and the goal is to serve as many dishes as possible within the allotted time. We run each algorithm with 5 random seeds and test each seed 32 times; the final results are averaged over these runs. We investigate the following research questions (RQs). **RQ1: How does Concord perform compared to other methods?** As can be seen from Fig. 2, Vanilla achieves the worst coordination ability in almost all scenarios, indicating that tasks where teammates appear sequentially require dedicated treatment. Other successful approaches for single-agent continual learning, like EWC and CLEAR, also suffer from performance degradation on the involved benchmarks, demonstrating the necessity of specific designs for MARL. The Oracle method, where we train all the tasks simultaneously, can be seen as an upper bound of performance on the related benchmarks and is superior to all baselines. Our approach, Concord, obtains comparable performance to Oracle, indicating the efficiency of all the designed modules.

Figure 3: The t-SNE (Van der Maaten & Hinton, 2008) projection of different behavior patterns. Completing a delivery task requires several key steps, including placing an onion into the pot, taking a dish, taking a ready soup, and delivering the soup, and each step has three directions for human-like models: always clockwise (CW), always counterclockwise (CCW), and bi-direction (Bi-D). The behavior pattern of a teammate can then be defined as the directions used in the key steps.

Furthermore, we also show the comparisons of different methods on forgetting and forward transfer in Tab. 3 in App. D.1. Concord achieves the minimum forgetting on all four layouts, validating the effectiveness of the proposed anti-forgetting mechanism. Besides, the forward transfer values of Concord are the best on all layouts, indicating that Concord can coordinate well with unseen teammates. Vanilla is the worst, demonstrating the necessity of a dedicated mechanism for continual learning. Specifically, we compare the single coordination performance of Concord and Vanilla on all the layouts in Figs. 12 and 15 of App. D.2, where we find that Vanilla exhibits a considerable amount of forgetting. RQ2: How does the teammate representation mechanism work? As an important aspect of Concord, the learning of teammate representations helps AI agents prevent catastrophic forgetting and effectively adapt to coordinate with different human teammates. The human recognizer encodes a representation embedding for each teammate based on their behavioral patterns.
Using this embedding, an appropriate actor network is generated and utilized to cooperate effectively with the teammate. We demonstrate the differences in behavioral patterns among distinct human-like models and the corresponding relationship between embeddings and behavior patterns in Fig. 3. Two main observations can be made: 1) the behavioral patterns of human-like models are diverse; 2) similar behavior patterns have similar embedding representations, such as 3&4, 6&11, 8&12, and 5, 6, 8, 11&12 (i.e., using the same pot to cook), demonstrating the effectiveness of our method in identifying human teammate behavior patterns, which can help AI agents cooperate more effectively. RQ3: Is the value decomposition useful? One component of our approach is the human-aware actor-critic architecture, which decomposes the global $Q_{\text{tot}}$ into the individual $Q$ values of teammates and AI agents. We here design an ablation that drops the value decomposition in the Coord. Ring layout of the Overcooked environment, which means treating the teammate as part of the environment and only learning the $Q$ value of the AI agent. This reduces to applying a single-agent reinforcement learning algorithm as in previous works (Heinrich et al., 2015; Strouse et al., 2021). As shown in Fig. 4(a), where we report the testing performance on all 12 tasks after the whole continual training, the red curve consistently surpasses the orange one. This indicates that the complete Concord algorithm generally shows better anti-forgetting performance than the baseline without value decomposition, demonstrating the effectiveness of the human-aware actor-critic architecture in facilitating collaborative learning. Due to space limitations, further analysis, such as the number of episodes for recognition, the comparison between few-shot and zero-shot recognition, the comparison with a given teammate ID, and the sensitivity analysis of the replay buffer size, as well as experiments conducted on the SMAC environment, can be found in Appendix D.3 and Appendix D.4, respectively.

Figure 4: (a) Performance comparison between Concord with and without value decomposition. (b) Experimental results on the map 4m. The gray shaded bar denotes the degree of forgetting, which equals the highest test win rate during training minus the final test win rate. The x-axis represents the combination of two teammates, e.g., P&P means both teammates are poor (P) models.

4.2 Continual Coordination between Multiple Human-like Teammates and Multiple AI Agents Multi-player human-AI coordination is an understudied yet important setting in practical applications. To test the generalization ability of Concord, we use three maps in the SMAC environment (Samvelyan et al., 2019) to assess the coordination ability between multiple human-like teammates and multiple AI agents. We use DOP (Wang et al., 2021b) to train the human-like teammates. Three checkpoints with low, medium, and high win rates during training are selected to simulate poor (P), medium (M), and expert (E) human players, respectively. Then, we train the agent via different algorithms to continually coordinate with a single (maps 3m and 2s1z) or multiple (map 4m) human-like teammates. Fig. 4(b) presents the results on map 4m, from which we can see that the gray shaded area of Concord is the smallest, indicating that Concord generally has better anti-forgetting performance than other baselines. Results on maps 3m and 2s1z are reported in App. D.4.
4.3 Continual Coordination with Humans Finally, we conduct experiments on the Open Asy. Advantages layout to investigate the ability to coordinate with real humans. We first collect around 160 human-human trajectories (for a total of 64k environment timesteps) and partition them into two subsets, splitting each trajectory into two single-agent trajectories. Based on the collected human data, we construct 8 sets of human proxy models through behavior cloning. We then train different methods by using these human proxy models as the teammates that appear sequentially. Note that the AI agents obtained by different methods play with real human participants in the test phase, not with the human proxy models. More details about the construction of the human proxy models and the ethical statement concerning this experiment are provided in App. E.1 and E.2, respectively. Main results. We show the single coordination performance after training in Fig. 5(a). Among these algorithms, Concord is the closest to Oracle in terms of performance, and outperforms the CLEAR and EWC baselines with most teammates, which is consistent with the results in Section 4.1. The Vanilla method performs the worst and shows catastrophic forgetting, which again illustrates the importance of anti-forgetting in continual human-AI coordination. Human preference. To check human players' preference for the AI agents obtained by different algorithms, we additionally introduce a popular subjective metric, i.e., human preference (Strouse et al., 2021; Yu et al., 2023), to evaluate different methods, with a detailed explanation in App. E.3. As shown in Fig. 5(b), Concord is better than the three baselines and comparable with Oracle. For instance, the percentage of volunteers who prefer Concord is 62.5% higher than the percentage who prefer CLEAR. This shows that Concord is relatively more preferred by humans, demonstrating the value of Concord for practical human-AI coordination.

Figure 5: (a) Single coordination performance results with humans. (b) Human preference results for the row method over the column method. The value at the $i$-th row and $j$-th column is the difference between the percentage of humans who prefer the $i$-th method and the percentage who prefer the $j$-th method.

5 RELATED WORK Cooperative MARL has attracted wide attention (Oroojlooy & Hajinezhad, 2022), exhibiting significant advancements in diverse domains (Sartoretti et al., 2019; Wang et al., 2021a; Xue et al., 2022b). However, amidst this success, developing AI agents that can effectively coordinate with diverse teammates remains a fundamental challenge (Dafoe et al., 2021). Typical approaches utilize techniques like agent modelling (Albrecht & Stone, 2018) to capture others' intentions or behaviors, or (repeatedly) build an effective behavior model from human data and plan with it (Sheridan, 2016). Other related approaches like ad-hoc teamwork (AHT) (Mirsky et al., 2022), few-shot teamwork (FST) (Fosong et al., 2022), and zero-shot coordination (ZSC) (Treutlein et al., 2021) have been developed recently. AHT considers designing agents that can coordinate with new teammates without prior coordination (Stone et al., 2010). The goal of ZSC is to train agent(s) that can coordinate effectively with a wide range of unseen teammates (Hu et al., 2020; Treutlein et al., 2021).
Few-shot adaptation usually samples $K$ episodes by any policy to obtain a latent variable $z$ for downstream tasks, which plays a crucial role in single-agent reinforcement learning (Kumar et al., 2020; Osa et al., 2022; Gaya et al., 2022b). Continual Reinforcement Learning emerges as a promising solution to address the aforementioned challenges (Khetarpal et al., 2022), where the agent aims to avoid catastrophic forgetting as well as enable knowledge transfer to new tasks (a.k.a. the stability-plasticity dilemma (Parisi et al., 2019)). Among previous works, EWC (Kirkpatrick et al., 2017) conducts $\ell_2$ distance-based weight regularization with the weights learned in previous tasks. CLEAR (Rolnick et al., 2018) is a task-agnostic method without the need for task information during the continual learning process; it stores a large experience replay buffer and addresses the forgetting problem by sampling data from it. Hyper-CRL (Huang et al., 2021) utilizes a hypernetwork (Oswald et al., 2020) to enhance the efficiency of continual learning. A detailed discussion of related work can be found in App. A. 6 CONCLUSIONS AND LIMITATIONS Recognizing the importance and practicality of human-AI coordination, this study takes a significant stride towards addressing this challenge in a continual manner. We begin by formulating the problem as a MACMDP, where teammates are encountered sequentially. Subsequently, a mechanism based on hyper-teammate identification is proposed to avoid catastrophic forgetting while promoting forward knowledge transfer for multi-agent coordination. To the best of our knowledge, our proposed method, Concord, is the first to address the human-AI coordination problem via continual training. Experiments on various multi-agent benchmarks validate the effectiveness of Concord, demonstrating its strong ability to continually coordinate with generated human-like models or real human participants across a range of evaluation metrics. Although Concord alleviates the issue of storage overhead compared to previous methods, it still necessitates a certain amount of storage to prevent forgetting. Besides, addressing large-scale problems poses a challenge for current human-AI coordination methods. Further research is required to delve deeper into these two topics. REFERENCES Stefano V Albrecht and Peter Stone. Autonomous agents modelling other agents: A comprehensive survey and open problems. *Artificial Intelligence*, 258:66–95, 2018. Samuel Barrett and Peter Stone. Cooperating with unknown teammates in complex domains: A robot soccer case study of ad hoc teamwork. In *Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence*, pp. 2010–2016, 2015. Lucian Busoniu, Robert Babuska, and Bart De Schutter. A comprehensive survey of multiagent reinforcement learning. *IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)*, 38(2):156–172, 2008. Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter Abbeel, and Anca Dragan. On the utility of learning about humans for human-AI coordination. In *Advances in Neural Information Processing Systems* 32, pp. 5175–5186, 2019. Rujikorn Charakorn, Poramate Manoonpong, and Nat Dilokthanakul. Generating diverse cooperative agents by learning incompatible policies. In *The 11th International Conference on Learning Representations*, 2023. Shuo Chen, Ewa Andrejczuk, Zhiguang Cao, and Jie Zhang. Aateam: Achieving the ad hoc teamwork by employing the attention mechanism.
In *The Thirty-Fourth AAAI Conference on Artificial Intelligence*, pp. 7095–7102, 2020. Allan Dafoe, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, and Thore Graepel. Cooperative ai: machines must learn to find common ground. *Nature*, 593(7857):33–36, 2021. Kenneth Derek and Phillip Isola. Adaptable agent populations via a generative model of policies. In *Advances in Neural Information Processing Systems* 34, pp. 3902–3913, 2021. Elliot Fosong, Arrasy Rahman, Ignacio Carlucho, and Stefano V Albrecht. Few-shot teamwork. *arXiv preprint arXiv:2207.09300*, 2022. Ted Fujimoto, Samrat Chatterjee, and Auroop Ganguly. Ad hoc teamwork in the presence of adversaries. *arXiv preprint arXiv:2208.05071*, 2022. Jean-Baptiste Gaya, Thang Doan, Lucas Caccia, Laure Soulier, Ludovic Denoyer, and Roberta Raileanu. Building a subspace of policies for scalable continual learning. *arXiv preprint arXiv:2211.10445*, 2022a. Jean-Baptiste Gaya, Laure Soulier, and Ludovic Denoyer. Learning a subspace of policies for online adaptation in reinforcement learning. In *The 10th International Conference on Learning Representations*, 2022b. Pengjie Gu, Mengchen Zhao, Jianye Hao, and Bo An. Online ad hoc teamwork under partial observability. In *The 10th International Conference on Learning Representations*, 2021. Jun Guo, Yonghong Chen, Yihang Hao, Zixin Yin, Yin Yu, and Simin Li. Towards comprehensive testing on the robustness of cooperative multi-agent reinforcement learning. *arXiv preprint arXiv:2204.07932*, 2022. Andrea L Guzman and Seth C Lewis. Artificial intelligence and communication: A human–machine communication research agenda. *New Media & Society*, 22(1):70–86, 2020. David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. In *The 5th International Conference on Learning Representations*, 2017. Raia Hadsell, Dushyant Rao, Andrei A Rusu, and Razvan Pascanu. Embracing change: Continual learning in deep neural networks. *Trends in cognitive sciences*, 24(12):1028–1040, 2020. Johannes Heinrich, Marc Lanctot, and David Silver. Fictitious self-play in extensive-form games. In *Proceedings of the 32nd International Conference on Machine Learning*, pp. 805–813, 2015.
G0vdDSt9XM
The paper could provide a more in-depth analysis of the tool creation and retrieval components of CRAFT. Understanding how different types of tools contribute to performance improvements and how the retrieval mechanism interacts with various tasks would offer valuable insights.
CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets Lifan Yuan∗, Yangyi Chen∗, Xingyao Wang, Yi R. Fung, Hao Peng, Heng Ji University of Illinois Urbana-Champaign {lievanyuan173}@gmail.com {yangyic3,xingyao6,yifung2,haopeng,hengji}@illinois.edu Abstract Large language models (LLMs) are often augmented with tools to solve complex tasks. By generating code snippets and executing them through task-specific Application Programming Interfaces (APIs), they can offload certain functions to dedicated external modules, such as image encoding and performing calculations. However, most existing approaches to augment LLMs with tools are constrained by general-purpose APIs and lack the flexibility for tailoring them to specific tasks. In this work, we present CRAFT, a general tool creation and retrieval framework for LLMs. It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks. For each task, we collect specific code solutions by prompting GPT-4 to solve the training examples. Following a validation step ensuring the correctness, these solutions are abstracted into code snippets to enhance reusability, and deduplicated for higher quality. At inference time, the language model retrieves snippets from the toolsets and then executes them or generates the output conditioning on the retrieved snippets. Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning. Experiments on vision-language, tabular processing, and mathematical reasoning tasks show that our approach achieves substantial improvements compared to strong baselines. In addition, our in-depth analysis reveals that: (1) consistent performance improvement can be achieved by scaling up the number of tools and the capability of the backbone models; (2) each component of our approach contributes to the performance gains; (3) the created tools are well-structured and reliable with low complexity and atomicity. 1 Introduction Large language models (LLMs) have emerged as transformative tools in AI, exhibiting capabilities in complex problem-solving, including reasoning, planning, and producing creative outputs (Brown et al., 2020; Touvron et al., 2023b,a; Yuan et al., 2023). Recent evidence has shown that LLMs can dynamically interact with the environment through external tools, which grants them access to information beyond their pretrained parameters (Qin et al., 2023a; Mialon et al., 2023; Schick et al., 2023). For example, these models can generate code snippets and call APIs provided by visual tools like image encoding models, to solve problems that involve images or videos (Wu et al., 2023; Shen et al., 2023; Yang et al., 2024). Success has been achieved by integrating LLMs with large-scale, general-purpose tool collections (Qin et al., 2023b; Tang et al., 2023; Surís et al., 2023; Gao et al., 2023a; Chen et al., 2022a; Gao et al., 2023b; Patil et al., 2023). However, adapting LLMs to many domains and evolving applications involves working with more specialized APIs tailored to address specific challenges, which are often inadequately represented in general-purpose toolsets. In response, this work proposes to integrate LLMs with highly customizable toolsets that are curated for specific problems of interest. ∗Equal contribution. The first author conducts this research during an internship at UIUC. 
†The code is available at https://github.com/lifan-yuan/CRAFT. Our approach, dubbed CRAFT, constructs a toolset customized for a given task (see Figure 1). In contrast to previous approaches that only incorporate a single type of tool (Cai et al., 2023) or create unverified and non-reusable tools (Qian et al., 2023), our toolset contains diverse, reusable, and correct APIs that can tackle various problems. This is achieved through an automated process, by instructing LLMs to generate specific code solutions to solve training problems of the task or related ones. The specific solutions are then abstracted into code snippets, which can later be instantiated to solve similar problems. Dedicated validation and deduplication steps ensure the correctness of the tools and reduce redundancy, thereby enhancing the quality of the toolset. At inference time, precisely identifying and retrieving relevant tools for the given problems is challenging, especially given the large constructed toolset. Existing solutions typically rely on pre-selected tools (Parisi et al., 2022), heuristic-based tool selection strategies (Shen et al., 2023), and simple similarity measures (Qin et al., 2023b), which may be unsuitable or insufficient to pinpoint the related tools from a large toolset given the problems. CRAFT implements a retrieval component that takes into account the target problem, the names of the tools (a.k.a. APIs), and their docstrings through a multi-view matching function. The retrieved snippets are then added to the prompt of LLMs so that the retrieved tools can be invoked in the generated code solutions. The empirical effectiveness of CRAFT is validated through experiments on visual question answering, tabular processing, and mathematical reasoning tasks. CRAFT achieves an average relative improvement of 43.16% in F1 score over the strongest baselines in vision-language tasks, where the LLMs are required to interact with various visual tools to encode the images. Through our carefully designed analysis, we find that (1) the performance continually increases as the number of tools and the capability of the backbone models increase; (2) each design component incorporated in CRAFT contributes to the performance gains; (3) the created tools are well-structured and reliable, exhibiting low complexity and atomicity. The contribution of this work is two-fold. First, we introduce CRAFT, a broadly applicable framework to customize LLMs to various tasks and domains via tool creation and retrieval. Second, we release the created toolsets that include diverse, reusable, and correct tools, which are useful for various downstream tasks. We estimate that constructing the toolsets costs around $2,500 in total. ## 2 CRAFT We introduce CRAFT to address the challenges faced by prior research in the following two aspects: 1. **Tool Creation:** The establishment of an extensive toolset of diverse, reusable, and correct tools, in contrast to the reliance on limited examples (Cai et al., 2023; Qian et al., 2023); 2. **Tool Retrieval:** The effective retrieval of relevant tools from a large toolset, tailored to the specific question, thereby departing from the conventional approach of simplistic similarity matching (Qin et al., 2023b; Patil et al., 2023). By instantiating the retrieved code and adding it to the prompt, LLMs can then use the tools by calling the functions to perform complex operations rather than implementing every detail from scratch.
### 2.1 TOOL CREATION Based on a source dataset, namely a general instruction dataset or a training dataset that contains problem-answer pairs, CRAFT constructs the toolset through four steps: **Generation**, **Abstraction**, **Validation**, and **Deduplication**, which are illustrated in Figure 2 and described as follows. **Generation.** To create a toolset containing diverse tools that can be adopted to address various problems, we apply an iterative approach to sample problem-answer pairs from the source dataset. At a high level, the generation step involves iteratively sampling problems from the source dataset, generating code solutions, and filtering out incorrect ones. We use $Q$ to denote the set of sampled problems and $R_i$ to denote the set of remaining problems after the $i$-th iteration. $Q$ is initialized with $n$ random samples from the entire source dataset, and $R_i$ is initialized as the rest. At each iteration, we use the highest similarity between a remaining problem $q_r \in R_i$ and any sampled problem $q_s \in Q$ as the similarity between $q_r$ and the set $Q$. To enhance the diversity of the toolset, $Q$ is updated by adding the $k$ problems that are least similar to $Q$, where $k$ represents the desired number of samples for each iteration. This min-max sampling strategy is: $Q \leftarrow Q \cup \text{argTopK}_{\min} \left( \max_{q_s \in Q} \sim(q_r, q_s) \,\middle|\, q_r \in R_i \right)$. The function $\text{argTopK}_{\min}$ returns the $k$ elements with the smallest values from a set; $k$ is set to 100 in our implementation, and $\sim(\cdot)$ denotes the cosine similarity of the representation vectors computed by SimCSE, a state-of-the-art sentence representation learning method based on contrastive learning (Gao et al., 2021). For each problem \( q_r \in Q \), we instruct GPT-4 (OpenAI, 2023) to generate a specific solution in Python that can be executed by an interpreter to get the answer. The prompts are shown in Appendix C. We keep those code solutions that are bug-free and produce correct outputs, and discard everything else to ensure the correctness of the created tools. **Abstraction.** The generated code solutions are tailored to the given problems, keeping them from being useful for other problems. The abstraction step aims to promote the reusability of the toolset, ensuring that each tool can be adopted to tackle a broader range of similar problems. This abstraction step is achieved by instructing GPT-4 to replace all specific variable names with general ones (e.g., `cat` → `animal`, `desk` → `object`) and to wrap textual inputs of internal function calls as arguments of the tool (e.g., `date = df["date"]` → `date = df[column_name]`, where the value of `column_name` is passed in by tool users) within the code piece, substituting them with more generic counterparts to adapt to similar problems (see Figure 2). In addition, we instruct GPT-4 to assign a suitable and general function name and compose a corresponding docstring to elucidate the functionality of the created tools. The prompt is described in Appendix C. **Validation.** The validation step ensures the correctness of the created tools. This is achieved by examining whether the abstract tool functions can solve the original problems. Specifically, we offer GPT-4 access to the abstract tool function, with the expectation that it will address the original problems by supplying appropriate arguments to the tool function. The tools that fail to derive the correct answers for the original problems are discarded.
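The min-max sampling step of the Generation stage can be sketched as follows; the random vectors stand in for SimCSE problem embeddings, and the function and variable names are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def minmax_sample(embeddings, selected, remaining, k=100):
    """One iteration of the Generation step: move into Q the k remaining problems
    that are least similar to the current Q (argTopK-min over the max-similarity)."""
    scored = []
    for r in remaining:
        # Similarity of q_r to the set Q = highest similarity to any q_s already in Q.
        sim_to_q = max(cosine_sim(embeddings[r], embeddings[s]) for s in selected)
        scored.append((sim_to_q, r))
    scored.sort(key=lambda pair: pair[0])              # least similar first
    picked = {r for _, r in scored[:k]}
    return selected + sorted(picked), [r for r in remaining if r not in picked]

# Toy usage: random vectors stand in for SimCSE embeddings of the source problems.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))
Q, R = list(range(20)), list(range(20, 500))           # n = 20 initial random samples
Q, R = minmax_sample(embeddings, Q, R, k=100)
print(len(Q), len(R))                                  # 120 380
```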
**Deduplication.** To reduce the redundancy in the toolset and improve its diversity, we perform a deduplication step to streamline the toolset and mitigate potential confusion stemming from redundant tools (e.g., same function names). We organize created tools into groups based on function names and the corresponding number of input arguments. Each group contains tools that have the same function name and number of input arguments. For groups that contain more than one tool, we prompt GPT-4 to decide on the most comprehensive tool with extensive applications within the group, using the prompt shown in Appendix C. ### 2.2 Tool Retrieval Retrieving relevant tools from the large constructed toolset is challenging. For better retrieval outcomes, we prompt the LLM to “describe what it needs”. During inference, the evaluated LLM is asked to generate the function names \( f_t \) and the docstrings \( d_t \) based on the target problem \( q_t \). Then CRAFT adopts a similarity measuring strategy that takes into account three key aspects of the created tool \( t_i \): (1) the original problem used for creating the tool, \( q_i \); (2) the tool's function name, \( f_i \); (3) the docstring of the function, \( d_i \). For each tool \( t_i \), this results in a tuple \((q_i, f_i, d_i)\). We conduct multi-view matching, searching tools via \( q_t, f_t, \) and \( d_t \) respectively in the toolset \( T \). Specifically, we have: \[ T_{q_t} = \text{argTopK}_{\max} (\sim(q_i, q_t) | t_i \in T) \] where \( \text{argTopK}_{\max} \) is a function that returns the indices of the top \( k \) elements with the maximum values from a set, \( \sim(\cdot) \) measures the similarity between two sentences using SimCSE embeddings, and \( T_{q_t} \) is a list of \( k \) tools retrieved by matching problems. We then perform similar retrieval by matching function names and docstrings, obtaining \( T_{f_t} \) and \( T_{d_t} \), respectively. Next, the three lists of tools are aggregated and ranked by their frequency of occurrence. We then retrieve the three most frequent tools by majority vote. Finally, we filter out those that occur only once, if any. In extreme cases, it is also possible that all tools appear only once, i.e., the retrieved tool set is empty; in this case, LLMs directly perform code generation to solve the question without invoking task-specific tools. After retrieval, the code snippets of the tools are added to the prompt of LLMs for code generation to solve a given question. LLMs can invoke the tools (a.k.a. APIs) embedded in the code. Subsequently, the retrieved tool functions and LLM-generated code solutions are instantiated into executable code, which is then executed to obtain the final predictions. **Summary and Discussion.** CRAFT creates a specialized toolset offline and retrieves useful tools from the toolset at inference time. In toolset creation, we apply an iterative problem-sampling strategy based on similarity for diversity, followed by generating code solutions using GPT-4. To ensure the reusability of the created tools, we abstract the specific solutions into high-level tools that can tackle various kinds of problems by instructing GPT-4. To ensure the tools' correctness, we evaluate the tools on the original problems and discard those outputting incorrect answers. Finally, we deduplicate the tools to reduce redundancy and obtain the final toolset.
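The multi-view retrieval described above can be sketched as follows: the target problem, generated function name, and generated docstring are each matched against the corresponding fields of the stored tools, the three retrieved lists are aggregated by frequency, and tools that occur only once are dropped. The embedding and similarity functions are stand-ins for SimCSE, and all names are illustrative assumptions.

```python
from collections import Counter
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def top_k(query_vec, keys, k=5):
    """Indices of the k tools whose stored field is most similar to the query field."""
    order = sorted(range(len(keys)), key=lambda i: cosine_sim(query_vec, keys[i]), reverse=True)
    return order[:k]

def retrieve_tools(query, toolset, k=5):
    """Multi-view retrieval: match (problem, function name, docstring) separately,
    aggregate the three hit lists by frequency, keep the three most frequent tools,
    and drop tools that were retrieved only once."""
    votes = Counter()
    for field in ("problem", "name", "docstring"):
        votes.update(top_k(query[field], [t[field] for t in toolset], k))
    kept = [idx for idx, count in votes.most_common(3) if count > 1]
    return [toolset[i] for i in kept]   # may be empty -> fall back to plain code generation

# Toy usage: random vectors stand in for SimCSE embeddings of each field.
rng = np.random.default_rng(0)
fields = ("problem", "name", "docstring")
toolset = [{f: rng.normal(size=32) for f in fields} for _ in range(200)]
query = {f: rng.normal(size=32) for f in fields}
print(len(retrieve_tools(query, toolset)))   # 0-3 tools added to the LLM prompt
```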
In inference, we apply a multi-view matching algorithm regarding the target problem, function name, and docstring between those in the toolset to retrieve related tools. We highlight several advantages of CRAFT. At a high level, by leveraging the tool creation paradigm, we can effectively utilize the domain-specific data to customize the LLMs without extensive fine-tuning, rendering CRAFT a training-free and plug-and-play approach. Due to CRAFT’s flexibility in accommodating various domains and tasks, it is broadly applicable across a spectrum of problem categories. In the concrete implementation, each tool is instantiated as an executable code snippet and is targeted at small atomic problems, such as identifying the color of an object. This ensures the explainability of the created tools. We can easily incorporate human efforts to examine the problematic tools and fix the errors. In addition, this allows for the decomposition of complex problems into multiple manageable steps, facilitating the compositionality of these created tools during inference. 3 EXPERIMENT 3.1 EXPERIMENTAL SETTING Evaluation Tasks, Datasets, and Metrics. To demonstrate the versatility of CRAFT, we select three distinct tasks for evaluation, spanning visual question answering (VQA), tabular processing, and mathematical reasoning: - **VQA**: The goal is to answer questions based on the information available in an associated image. We use three complex visual reasoning datasets, including GQA (Hudson & Manning, 2019), OK-VQA (Marino et al., 2019), and A-OKVQA (Schwenk et al., 2022). The GQA problems are more complex and require compositional reasoning to answer, while OK-VQA and A-OKVQA mainly use external real-world knowledge of objects or actions in the image. For evaluation, we formalize the VQA task as an open-ended generation problem and use the soft accuracy (SAcc) metric (Antol et al., 2015). In addition, we observe that LM-generated functions often produce descriptive responses instead of concise phrases, which hurts the exact match between predictions and ground-truth answers. This can potentially cause an underestimation of the performance, so we also use the F1 score for evaluation, which is frequently employed in extractive question-answering tasks (Rajpurkar et al., 2016). - **Tabular Processing**: It evaluates an LLM’s ability to process structured data in tables. We use TabMWP (Lu et al., 2023), a dataset with each sample containing one table and one corresponding problem in natural language. To handle the task, LLMs should understand the natural language descriptions of the problems, extract relevant information from the accompanying tables, and finally perform calculations based on the extracted information. We use the accuracy based on the exact match to measure model performance. - **Mathematical Reasoning**: LLMs are expected to solve mathematical problems written in natural language, leveraging both their understanding of textual inputs and complex reasoning capabilities. We use the algebra subset of MATH (Hendrycks et al., 2021), containing 881 challenging competition-level algebra problems. Evaluating CRAFT on all subsets goes beyond our budget constraint but we believe CRAFT is equally applicable to other math problems. The models’ performance is evaluated using accuracy. Baselines. 
We compare CRAFT with baseline methods of four categories: - **Basic Reasoning without Tools**: This line of methods solves downstream problems solely based on the intrinsic reasoning ability of LLMs, without access to any external tool. We use chain-of-thought prompting (CoT) (Wei et al., 2022), which prompts LLMs to generate rationales before answers without using tools. However, it does not apply to the VQA task since LLMs cannot process visual information without external visual tools. - **Tool Learning**: We compare with approaches that directly leverage existing tools to assist the problem-solving process. In this case, LLMs only learn to use the human-provided tools without creating and retrieving tools. We compare to two approaches: (1) Vanilla stands for utilizing the most basic tools, such as the Python interpreter for all three tasks, and extra vision models to solve VQA problems. Specifically, the vanilla tool-using method for VQA is ViperGPT (Surís et al., 2023), and that for the other two tasks is Program-of-Thoughts reasoning (Chen et al., 2022a).

Table 1: Distinctions between baseline methods and CRAFT in enhancing LLMs with created tools.

| Tool-Creation Method | Dataset for Creating Tools | Reuse Tools? | Tool Base Size | Retrieval-enhanced? |
|----------------------|----------------------------|--------------|----------------|---------------------|
| CREATOR | Test Set | No | 0 | No |
| LATM | Train Set | Yes | 1 | No |
| CRAFT | Instruction Dataset or Train Set | Yes | >100; Theoretically Unlimited | Yes |

Table 2: The experimental results of CRAFT and four categories of baselines on three tasks. SAcc denotes soft accuracy, which is widely used for VQA. F1 is supplemented to tackle the issue of underestimated performance caused by the descriptive responses of LLMs. Acc denotes accuracy.

| Method | GQA SAcc | GQA F1 | OK-VQA SAcc | OK-VQA F1 | A-OKVQA SAcc | A-OKVQA F1 | TabMWP Acc | MATHalg Acc |
|--------|----------|--------|-------------|-----------|--------------|------------|------------|-------------|
| Basic Reasoning | | | | | | | | |
| CoT | - | - | - | - | - | - | 75.2 | 50.9 |
| Tool Learning | | | | | | | | |
| Vanilla | 35.0 | 36.9 | 15.4 | 24.7 | 15.6 | 23.0 | 80.6 | 58.2 |
| External | 34.2 | 37.8 | 16.8 | 25.3 | 14.5 | 22.9 | 83.1 | 41.1 |
| Different Tools | | | | | | | | |
| LATM | 29.4 | 30.3 | 7.8 | 11.8 | 6.5 | 11.4 | 9.3 | 30.3 |
| CREATOR | 34.3 | 38.4 | 16.7 | 27.3 | 17.3 | 25.8 | 81.0 | 65.0 |
| Alternative Retrieval | | | | | | | | |
| SimCSE | 36.4 | 38.8 | 18.4 | 28.9 | 16.8 | 24.3 | 83.8 | 36.7 |
| BM25 | 37.9 | 39.0 | 13.4 | 24.3 | 17.8 | 26.1 | 89.2 | 35.9 |
| This Work | | | | | | | | |
| CRAFT | 45.4 | 48.8 | 33.4 | 43.0 | 30.8 | 40.6 | 88.4 | 68.1 |

(2) **External library**: We also explore the possibility of exploiting external tool functions from Python libraries to enhance the vanilla methods. For VQA, we use NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Scikit-Image (Van der Walt et al., 2014), and Mahotas (Coelho, 2012). For the remaining two tasks, we substitute Scikit-Image and Mahotas with Pandas (McKinney et al., 2011) and SymPy (Meurer et al., 2017). - **Different LLM-Created Tools**: We compare with previous tool creation approaches, including LATM (Cai et al., 2023) and CREATOR (Qian et al., 2023). Specifically, LATM samples 3 examples from the training set and applies GPT-4 to create a tool for the task, which is further verified on 3 samples from the validation set. The created tool is then applied to all test cases. CREATOR creates one specific tool for each test case at inference time.
For fair comparisons, we remove the format checking and rectifying process used in the original work and only measure the one-pass accuracy. The distinctions between these two methods and CRAFT are shown in Table 1. - **Alternative Retrieval Methods:** We compare with previous tool retrieval approaches, which focus on the similarity measure between the problem and the API names. We include two prevalent measures, namely SimCSE and BM25 similarity, following Qin et al. (2023b) and Patil et al. (2023), respectively. The baseline retrieval methods are also based on our created toolset for fair comparison. In this work, we implement CRAFT and all baselines based on the GPT-3.5-Turbo (ChatGPT) backbone because: (1) it is more cost-effective than alternatives like GPT-4 while maintaining strong performance; (2) the Turbo-0613 version is specially optimized for tool learning. Conversely, alternative backbone models (e.g., CodeLlama (Rozière et al., 2023)) demonstrate near-random performance in our setting, which can be attributed to their suboptimal tool-using capabilities. The concrete implementation details are described in Appendix B. ### 3.2 Experimental Results We present the results in Table 2. In particular, we find that directly leveraging tools from external Python libraries fails to improve the performance and, in certain cases, may have a detrimental impact (e.g., in mathematical reasoning). This suggests that the relevance of tools affects the performance of augmented LLMs, motivating us to construct a high-quality tool base that customizes LLMs to each task. We observe that LATM struggles with all datasets and brings negative effects; CREATOR yields a notable enhancement in mathematical reasoning performance, while its impact on other datasets appears marginal. This result suggests the necessity of sufficient and diverse tools to tackle problems of various categories in downstream datasets. For tool retrieval baselines, the performance varies across datasets. In general, however, LLMs do not obtain substantial enhancement except on TabMWP, indicating the need for better retrieval algorithms.

Table 3: Results of further analysis, encompassing the ablation study on abstraction and retrieval components, as well as the comparison between ViperGPT and CRAFT with different backbones.

| | GQA | OK-VQA | A-OKVQA |
|----------|-----------|-----------|-----------|
| | SAcc F1 | SAcc F1 | SAcc F1 |
| GPT-3.5-Turbo | | | |
| ViperGPT | 35.0 36.9 | 15.4 24.7 | 15.6 23.0 |
| CRAFT | **45.4** **48.8** | **33.4** **43.0** | **30.8** **40.6** |
| w/o Abstraction | 37.1 39.7 | 31.0 41.4 | 28.0 39.3 |
| w/o Problem | 42.4 45.8 | 32.7 42.3 | 29.8 38.7 |
| w/o Name | 36.4 38.3 | 26.8 35.7 | 21.7 30.6 |
| w/o Docstring | 37.3 39.1 | 29.8 38.8 | 25.0 34.0 |
| GPT-4 | | | |
| ViperGPT | 51.4 53.7 | 36.7 47.2 | 32.8 42.4 |
| CRAFT | **55.6** **58.8** | **39.0** **49.1** | **35.3** **44.8** |

Overall, CRAFT demonstrates superior performance on all datasets, especially on the challenging VQA tasks. Significantly, CRAFT demonstrates a notable enhancement over the vanilla baseline, namely ViperGPT, with absolute SAcc improvements of 10.4, 18.0, and 15.2 observed on the GQA, OK-VQA, and A-OKVQA datasets, respectively. In addition, based on the same created toolset, the retrieval approach incorporated in CRAFT demonstrates overall better performance compared to alternative ones, which exhibit a certain level of performance variance. One exception is the comparison with BM25 on TabMWP.
This discrepancy can be attributed to the presence of relatively straightforward patterns within this dataset, which do not sufficiently showcase the advantages of our approach in tool retrieval. 4 Further Analysis. In this section, we conduct an in-depth analysis for CRAFT on VQA datasets. This task is particularly pertinent for assessing the impact of external tool augmentation, given that LLMs lack the capability to directly process images. Thus, it serves as a key testbed for measuring the influence of external tools. 4.1 Does Abstraction Facilitate Tool Use? Setup. Abstraction is a crucial step in constructing the toolset, converting solutions for specific problems into general-purpose tools that are applicable to diverse problems with a common pattern. In this section, we explore its efficacy with an ablation study. To scrutinize this, we establish a control group, where the toolset is created ablating the abstraction step. To ensure compatibility, we prompt GPT-4 to assign a distinctive function name and docstring for each solution to facilitate the multi-view retrieval approach for fair comparison. Results. Table 3 shows a clear performance drop when the abstraction step is ablated, confirming its importance. Moreover, comparing abstraction-ablated CRAFT with ViperGPT, improvements are achieved across all three datasets, especially on OK-VQA and A-OKVQA. We identify two potential reasons that can elucidate the improvement. First, the created toolset is large and diverse enough, facilitating the adoption of specific tools without abstraction for addressing new problems. Second, as retrieved tools offer a correct approach to problem-solving, LLMs can efficiently adapt these strategies to address new problems. 4.2 Is Every Matching in the Retrieval Triplet Equally Important? Setup. CRAFT retrieves tools based on multi-view matching. We demonstrate its effectiveness in Section 3.2. Next, we respectively ablate problems, function names, and docstring from the matching process to investigate their influence on performance. Results. As demonstrated in Table 3, it is clear that the removal of any of the three similarity measures from our multi-view matching function adversely impacts performance, thereby validating the rationale behind our design strategy. Among them, the function names appear the most important one, resulting in more than 6.6 absolute SAcc drop when ablated. 4.3 Does CRAFT still Work for More Powerful Backbone Models? Setup. In previous experiments, CRAFT is implemented using GPT-3.5-Turbo as the backbone. In this analysis, we evaluate CRAFT when using the more powerful GPT-4 as the backbone. Due to the budget limits, we only compare CRAFT with the vanilla baseline ViperGPT without tool creation. Results. The results in Table 3 demonstrate that CRAFT achieves consistently better performance with GPT-4, confirming that CRAFT is helpful even with more capable backbone models. However, it’s noteworthy that while the improvement of CRAFT on GPT-4 is pronounced, it is less obvious compared to the impact on GPT-3.5-Turbo. We hypothesize that this result is in line with the conclusions of recent work, which finds that LLMs can benefit from the guidance of more capable models while gaining no improvement from the guidance of itself (Fu et al., 2023; Wang et al., 2023). The tools, created by GPT-4, may provide comparatively fewer insights for itself, thereby limiting the potential benefits of external tool augmentation. 4.4 Can CRAFT Improve Performance as the Toolset Gets Larger? 
Setup. A feature of CRAFT distinctive from prior approaches is the extensibility of the toolsets. We examine the utility of extension by manipulating the toolset’s size and tracking performance trends. To elaborate, the iterative problem sampling strategy detailed in Section 2.1 is initialized with a total of 11 epochs. In this analysis, the sizes of the toolset are modified through the exclusive inclusion of tools created at distinct epochs. We choose tools from the initial epoch, the final epoch, and the epoch in between, resulting in toolset sizes of 0 (no created tool for comparison), 261, 337, and 525, respectively. Results. The results in Figure 3 show a consistent increase in soft accuracy as the toolset expands across 3 datasets, demonstrating the scalability and potential of CRAFT. The upward trend of soft accuracy continues, suggesting the potential for further improvement of CRAFT as the toolset keeps expanding. Significantly, the most substantial improvement is observed when transitioning from the absence of any created tools to the utilization of 261 tools. This validates the effectiveness of creating the specialized toolset to customize LLMs to various tasks and domains. 4.5 What is Inside the Toolset? We analyze the complexity and diversity of the code in toolsets. For complexity, we use the widely adopted cyclomatic complexity (McCabe, 1994) to measure the number of linearly independent paths, with the higher value indicating the code is more complicated and requires refactoring to make it more reliable. Good software should have a complexity of no more than 10, and a less complex toolset is desirable since it is less prone to trigger bugs. For diversity, we classify each tool into different groups. We use the number of distinct groups as the metric, with a larger number of tool groups indicating a wider range of problems that our toolset can address. | Task | VQA | Tabular Process | Mathematics Reasoning | |-----------------------|-----|-----------------|-----------------------| | Avg. Cyclomatic Complexity | 2.64 | 2.07 | 1.34 | | # Tools | 525 | 181 | 282 | | # Classes of Tools | 195 | 25 | 234 | Figure 3: The performance of CRAFT improves as the toolset scales up. We calculate the complexity using Lizard Python library\(^2\), and present the average complexity of tools for each task in Table 4. We observe that the created toolsets for 3 tasks exhibit relatively low complexity, indicating that the tools are well-structured and reliable. We then adopt the Louvain community detection method (Blondel et al., 2008), a graph-based community dividing algorithm, to group different tools. As shown in Table 4, for VQA, tabular process, and mathematics reasoning, there are 195, 23, and 234, distinct classes out of 525, 181, and 282 tools respectively. This suggests that the MATH dataset has the most diverse patterns, followed by VQA, while problems in the TabMWP dataset are more homogeneous and can be well-solved using fewer created tools. 5 RELATED WORK 5.1 TOOL LEARNING WITH LLMs LLMs, when integrated with real-world Application Programming Interfaces (APIs), gain the capability to actively interact with a range of external systems (a.k.a. tools) (Parisi et al., 2022; Schick et al., 2023; Tang et al., 2023; Patil et al., 2023; Song et al., 2023; Hao et al., 2023; Wang et al., 2024). The pioneering work connects GPT-3 (Brown et al., 2020) with the web browser to access latest information, and hires human annotators to provide demonstrations of web searching (Nakano et al., 2021). 
Further research expands upon this concept by encompassing a broader range of tools, such as calculators, calendars, interpreters, physical simulators, and maps (Shuster et al., 2022; Paranjape et al., 2023; Liu et al., 2023c; Chen et al., 2022a; Gao et al., 2023a; Drori et al., 2022; Pan et al., 2023; Liu et al., 2023b), and explores the application of weakly-supervised methods, such as bootstrapping (Parisi et al., 2022; Schick et al., 2023). More recently, progress has been achieved by distilling the tool-using ability of closed-source LLMs (e.g., ChatGPT (ChatGPT Plugins)) into open-source LLMs. The key idea revolves around allowing ChatGPT to produce synthetic data exemplifying the usage of specified APIs. Subsequently, this synthetic data is leveraged for the refinement of open-sourced LLMs (Qin et al., 2023b; Tang et al., 2023). In this work, we extend our approach beyond mere dependence on existing tools. We adapt LLMs to diverse downstream tasks through the creation of customized tools and the retrieval of relevant tools during inference.

5.2 TOOL CREATION & RETRIEVAL

While the exploration of tool creation and retrieval is relatively limited compared to tool learning with LLMs, we identify some preliminary efforts in this domain. For tool creation, Cai et al. (2023) proposes an approach wherein tools are created from three training samples, and their efficacy is subsequently assessed using three validation samples. Consequently, the resulting toolbase is constrained in quantity. This approach hinges on the assumption that there is notable similarity between the distributions of the training and testing data, so that the tools produced can be readily incorporated. Similarly, Qian et al. (2023) adopts a strategy that generates tools exclusively based on the provided query. As a result, the created tools lack reusability, thereby undermining the fundamental purpose of tool creation. For tool retrieval, existing research primarily includes pre-selection of human-curated tools tailored to specific problems (Parisi et al., 2022; Tang et al., 2023; Schick et al., 2023; Zhuang et al., 2023), employing heuristic-based methods for tool selection (Shen et al., 2023; Liang et al., 2023), and adopting a straightforward similarity metric between user queries and API names (Qin et al., 2023b; Patil et al., 2023; Xu et al., 2023). In this work, we are motivated to create a large tool base that can be effectively utilized on related downstream tasks, and we address the challenge of retrieving the relevant tools from this large tool base.

6 CONCLUSION

In conclusion, this paper presents CRAFT, a general framework for tool creation and retrieval to generalize LLMs for diverse domains and tasks. The framework's effectiveness is demonstrated through improved performance in challenging tasks, alongside insights into component contributions, constructed toolsets, and scalability.

\(^2\)https://github.com/terryyin/lizard

ACKNOWLEDGEMENT

We thank the anonymous reviewers for their suggestions and comments. This research is based upon work supported by U.S. DARPA ECOLE Program No. HR00112390060 and U.S. DARPA ITM Program No. FA8650-23-C-7316. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. REFERENCES Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pp. 2425–2433. IEEE Computer Society, 2015. doi: 10.1109/ICCV.2015.279. URL https://doi.org/10.1109/ICCV.2015.279. Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment, 2008 (10):P10008, 2008. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html. Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. ArXiv, 2023. ChatGPT. URL https://chat.openai.com/. ChatGPT Plugins. URL https://openai.com/blog/chatgpt-plugins. Wenhua Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. CoRR, abs/2211.12588, 2022a. doi: 10.48550/arXiv.2211.12588. URL https://doi.org/10.48550/arXiv.2211.12588. Wenhua Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. CoRR, abs/2211.12588, 2022b. Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, and Ajay Divakaran. Measuring and improving chain-of-thought reasoning in vision-language models. arXiv preprint arXiv:2309.04461, 2023. Luis Pedro Coelho. Mahotas: Open source software for scriptable computer vision. arXiv preprint arXiv:1211.4907, 2012. Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32):e2123433119, 2022.
tAcEidZ1Y2
Although reconstruction is an important topic in MRI, are there any other reasons that tie the proposed method specifically to MRI reconstruction? In other words, is the proposed method suitable for potential reconstruction tasks in natural image domains?
Self-supervision Meets Bootstrap Estimation: New Paradigm for Unsupervised Reconstruction with Uncertainty Quantification

Anonymous authors Paper under double-blind review

Abstract

Deep learning-based self-supervised reconstruction (SSR) plays a vital role in diverse domains, including the unsupervised reconstruction of magnetic resonance imaging (MRI). Current powerful methodologies for self-supervised MRI reconstruction usually rely on capturing the relationships between different views or transformations of the same data, e.g., one view serving as the input and another as the label, and they show notable influence from analogous approaches in computer vision. Although yielding somewhat promising results, their designs are often heuristic, without deep insight into the characteristics of the reconstructed object, and the analytical and mathematical principles behind such methods are not made explicit. This paper addresses these issues with a novel SSR paradigm, BootRec, that not only provides an explanation for self-supervised reconstruction but also facilitates the development of downstream algorithms. Self-supervised MRI reconstruction is modeled as error-oriented parameter estimation - Bootstrap estimation for SSR (BootRec). In BootRec, we demonstrate the mathematical equivalence between bootstrapping in a sample set and the commonly used re-undersampling operation for SSR. This insight is further incorporated into designing models that estimate the errors of MRI SSR results without accessing labeled data. The estimation can further serve as the loss function for training the models without supervision. Experiments show that our new paradigm BootRec enables advanced MRI reconstruction performance against other zero-shot methods. The code is available at https://github.com/user19781945/rep10825984.

1 Introduction

Magnetic resonance imaging (MRI) reconstruction receives continuous attention for its significance in medical imaging and for its challenges in often unsupervised settings, where labeling and obtaining ground truth are costly. MRI reconstruction inherently requires a lengthy step of repeatedly collecting measurements in the frequency domain to fill the k-space before recovering the spatial signals using the inverse Fourier transform (IFT). Advancements in techniques such as parallel imaging for acquiring signals and compressed sensing (CS) for reconstruction have provided approaches to reduce imaging time. Specifically, CS makes it possible to acquire fewer measurements than the Nyquist rate while reducing the aliasing artifacts [Donoho, 2006; Lustig et al., 2008]. The introduction of deep neural networks for deep learning (DL) to CS-MRI has also led to breakthroughs toward a higher acceleration ratio and better reconstruction quality in MRI reconstruction [Chen et al., 2022; Wang et al., 2021; Lin & Heckel, 2022; Fabian et al., 2021]. However, these DL methods, though powerful, face several challenges in further applications. The first problem is that supervised DL training demands a large amount of labeled training data. In the setting of MRI reconstruction, this means that enough fully sampled images must be provided, which is impossible in many situations. Another important shortcoming is the black-box nature of DL models, which leaves the reconstruction without explanation or uncertainty estimation. Hence, it is hard to evaluate the risk in real-world medical practice when doctors need to make critical decisions according to the images [Edupuganti et al., 2020].
We propose a new paradigm of Bootstrap estimation for self-supervised reconstruction (BootRec) of MRI. BootRec models MRI SSR as a parameter estimation problem and applies Bootstrap estimation to quantify the errors. The learning target is then shifted to minimizing the estimated mean squared error (MSE) between the reconstructed fully sampled images and the unknown ground truth. A summary of the different pipelines and the insights behind our modeling is given in Figure 1.

Figure 1: Demonstration of different pipelines of DL-based reconstruction models. All self-supervised methods incorporate some kind of re-undersampling. (a) Supervised training with paired fully sampled images as labels. (b) Self-supervision via data undersampling pipeline. (c) Insights of modeling re-undersampling as Bootstrap. Only masks of virtual sample sets are plotted for simplification.

The main contributions of our BootRec paradigm are summarized as follows: (1) We construct a new framework that models MRI SSR as a parameter estimation problem. (2) We demonstrate the equivalence between Bootstrap sampling and re-undersampling under certain conditions. (3) We propose using Bootstrap MSE estimation as uncertainty quantification for SSR. (4) We propose new algorithms to train self-supervised models and achieve advanced results. The notations used in the paper are summarized in Appendix A for reference.

2 BACKGROUND & RELATED WORK

2.1 DEEP-LEARNING-BASED RECONSTRUCTION FOR MRI

The imaging process of parallel CS-MRI in one coil can be formulated as Equation 1, where \( y \) represents the acquired k-space data, \( x \) is the spatial anatomy data, \( r \) is the noise, \( F \) is the Fourier transform, \( U \) is the 0-1 valued matrix indicating the sampling points in k-space (called the measurement matrix in CS), and \( C \) is the coil sensitivity. Note that \( x \), \( y \), and \( r \) should be multi-dimensional values. For simplification, and to be consistent with other references, we represent them as flattened vectors.

\[ y = UFCx + r \tag{1} \]

For simplification in the later analysis and without influencing the conclusion, we will skip the combination of multiple coils and ignore the noise \( r \), as it is usually modeled as Gaussian noise with zero mean. We also ignore the coil sensitivity or merge it into \( x \), thus obtaining a simplified equation of CS-MRI:

\[ y = UFx \tag{2} \]

Given the acquired \( y \), reconstruction is posed as an inverse problem that recovers \( x \) using some reconstruction model. Traditionally, the reconstructor is an iterative algorithm based on CS theory, while in deep-learning-based methods, the model can be a neural network parameterized by \( \theta \) [Chen et al., 2022; Yang et al., 2016]. We represent any reconstruction model as Equation 3.

\[ \hat{x} = f(y, U) \tag{3} \]

2.2 Self-Supervised Training of Reconstruction Models

Early trials of unsupervised training of reconstruction models implement dictionary learning and other classical algorithms in CS (Majumdar, 2018; Singhal & Majumdar, 2020). Other methods include leveraging unpaired fully-sampled data (Oh et al., 2020; Chung et al., 2021; Korkmaz et al., 2022) and Deep Image Prior (DIP) (Ulyanov et al., 2018).
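To make the notation concrete, below is a minimal NumPy sketch of the simplified single-coil forward model in Eq. (2) together with a zero-filled baseline reconstruction; the image size, the random mask, and all variable names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def forward_model(x, mask):
    """Simplified single-coil CS-MRI acquisition y = U F x (Eq. 2).

    x    : complex image, shape (H, W)
    mask : binary k-space sampling matrix U, shape (H, W)
    """
    k_full = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x)))  # F x
    return mask * k_full                                         # U F x

def zero_filled_recon(y, mask):
    """Naive inverse: keep the measured samples and apply the IFT."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(mask * y)))

# toy example: random mask retaining ~33% of k-space
rng = np.random.default_rng(0)
x = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
mask = (rng.random((128, 128)) < 0.33).astype(np.float64)
y = forward_model(x, mask)
x_zf = zero_filled_recon(y, mask)   # aliased zero-filled estimate
```

Any learned reconstructor \( f(y, U) \) in Eq. (3) would replace `zero_filled_recon` in this sketch.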
Benefiting from the success of self-supervised methods in computer vision, such as contrastive learning (Chen et al., 2020) and masked Autoencoders (He et al., 2022), self-supervised training for MRI reconstruction has made progress in recent years and surpassed other methods (Zhou et al., 2022; Yaman et al., 2020; Zou et al., 2022; Wang et al., 2022b). The basic pipeline of self-supervised models is shown in Figure 1. As shown in Equation 4, reconstruction is conducted on the re-undersampled measurements \( U^R y \), and the basic form of the loss is \( L(\hat{x}^R, \hat{x}) \).

\[ \hat{x}^R = f(U^R y, U) \tag{4} \]

The key points are the design of the re-undersampling masks \( U^R \) and the loss functions. In different models (Wang et al., 2022b; Yaman, 2022), different kinds of sampling methods (uniform, Gaussian, etc.) and ratios of re-undersampling are proposed and evaluated. The self-supervision loss mainly comes from the acquired k-space points not selected in re-undersampling, and it can be defined in the frequency or spatial domain (Jafari et al., 2021; Senouf et al., 2019), with a wide range of choices from image processing. Generally speaking, the explorations of effective self-supervised algorithms for CS-MRI reconstruction are heuristic. Instead, BootRec tries to provide a methodology and an explanation for this field.

2.3 Uncertainty Quantification of MRI Reconstruction

DL models show impressive advantages in many fields, with a major concern about the reliability of their results, such as the hallucination of large language models (OpenAI, 2023). In MRI reconstruction, a concern is that DL models may "imagine" anatomies and mislead the diagnosis. Uncertainty Quantification (UQ) can ameliorate the problem by providing a "confidence level" for the results, making decision-makers aware of the risk of unauthentic imaging (Gawlikowski et al., 2021); doctors can then choose to conduct further examinations for results of high uncertainty. Depending on its origin, the uncertainty of reconstruction can be divided into two categories (Kendall & Gal, 2017): aleatoric uncertainty stemming from the ill-posedness of the problem, and epistemic uncertainty stemming from the uncertainty of model parameters. The notion of uncertainty itself also needs to be clarified. In image tasks, the variance of the result is widely used, and other choices include quantiles and entropies (Angelopoulos et al., 2022). In MRI reconstruction and other image regression tasks, the residual error of the prediction has also seen notable progress (Wang et al., 2022a). Uncertainty quantification has been considered in the computational imaging community. In the field of MRI reconstruction, Edupuganti et al. (2020) leverages the variational Autoencoder (VAE) to convert the deterministic result into a probabilistic one. Schlemper et al. (2018b) and Ekmekcı & Cetin (2022) build a Bayesian neural network (BNN) and model the inherent uncertainty with a Gaussian distribution. A main limitation of existing methods is that supervised training is needed for the quantification, so they cannot be applied to unsupervised models. We find that the Bootstrap-estimated MSE can be viewed as a form of UQ, which models the aleatoric uncertainty from (re)-undersampling well. Further experiments are conducted to assess the quantification.
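As a concrete illustration of the re-undersampling pipeline around Eq. (4), the sketch below splits the acquired k-space into a reconstruction subset and a held-out subset and scores the reconstruction on the held-out points; `recon_net`, the split ratio, and the held-out k-space loss are generic placeholders (reusing the `forward_model` helper from the previous sketch), not the exact design of any cited method.

```python
import numpy as np

def ssdu_style_split(y, mask, keep_ratio=0.6, rng=None):
    """Split acquired k-space (y, U) into a re-undersampled part (U^R y, U^R)
    and a disjoint held-out part used only in the loss."""
    rng = rng or np.random.default_rng()
    sampled = mask > 0
    keep = sampled & (rng.random(mask.shape) < keep_ratio)     # U^R
    held_out = sampled & ~keep                                  # U \ U^R
    return keep.astype(float) * y, keep.astype(float), held_out.astype(float)

def self_supervised_loss(recon_net, y, mask, forward_model):
    """Reconstruct from the re-undersampled data and compare the predicted
    k-space with the withheld measurements (basic form of the SSR loss)."""
    y_r, mask_r, mask_held = ssdu_style_split(y, mask)
    x_hat_r = recon_net(y_r, mask)               # \hat{x}^R = f(U^R y, U), Eq. (4)
    k_pred = forward_model(x_hat_r, mask_held)   # predicted held-out k-space
    return np.mean(np.abs(k_pred - mask_held * y) ** 2)
```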
3 Modeling Reconstruction as Parameter Estimation The BootRec framework consists of the following modules: (1) aggregation function for preprocessing and wrap reconstruction model as parameter estimator; (2) a virtual sample set as a mathematical tool to map a single observation to a sample set; (3) pseudo resampling trick to map Bootstrap sampling of a virtual sample set to re-undersampling of measurement; and (4) training algorithms for the specific loss function based on bootstrapping. These are detailed below. 3.1 Distribution of MRI Acquisition Observation Firstly, we assume the sampling mask $U$ obeys a multivariate Bernoulli distribution where each variable is independent, as Equation (5). We also make a constraint that all positions keep the possibility to be sampled, that is, $P_{U_i} > 0$ for any given position $i$. $$U \sim B(U; 1, P_U)$$ BootRec initially operates by training a separated model for each data point (zero-shot reconstruction) (Yaman [2022]) and we’ll discuss more general situation in Section 5.3. In the zero-shot case, the target of the $t_{th}$ reconstruction is fixed as $x^{(i)}$, so we directly use $x$ as $x^{(i)}$ in the following derivation. With former Equation (2), the randomness from the mask is introduced so the sampled k-space data can also be viewed as random variables. Usually, the sampling mask and the acquired k-space data are provided and processed simultaneously in CS, so we define the observation as $s = (y, U)$ for convenience. The distribution of $s$ can be fully parameterized by $x$ and $P_U$, written as $p(s; x, P_U)$. The reconstruction task in Equation (3) can then be viewed as estimating the parameter of $p(s)$ given observations of $s$. 3.2 Estimator Construction with Aggregation Function To formulate estimators from the reconstruction models, the main distinction is that observations are processed individually and independently without forming a set, as in Equation (3), so we propose aggregation function as an adapter. An aggregation function is a special mapping from observation sets $\{s^{(1)}, s^{(2)}, \ldots, s^{(n)}\}$ to a single observation $s^* = (y^*, U^*)$ and serves as a component of the estimator. The output can then be directly passed to any existing reconstruction model. In the situation of $n$ (positive integer) independent observations $\{s^{(1)}, s^{(2)}, \ldots, s^{(n)}\}$, the aggregation function is defined as Equation (6). Intuitively it takes the average observed value in positions being selected at least once and keeps the other positions zero-valued. A case of $n = 3$ is demonstrated in Figure 2. $$h(s^{(1)}, s^{(2)}, \ldots, s^{(n)})_i = (U^* y, U^*)_i = (y^*, U^*)_i$$ $$U^*_i = \begin{cases} 1 & (U^{(1)} + U^{(2)} + \ldots U^{(n)})_i \neq 0 \\ 0 & (U^{(1)} + U^{(2)} + \ldots U^{(n)})_i = 0 \end{cases}$$ After the aggregation function, it is obvious that $y^*$ will still be k-space data in which all the selected measurements and gradients in $U^*$ remain the same as that of the corresponding position in $y$, as a normal masking operation. Also, if only one observation is acquired, the aggregation function will be transparent and will not be adjusted. Aggregation function can be composed by any reconstruction method to form a new reconstruction method $f_{AF} = f \circ h$. The new function can take the sample set from the distribution of $P(s)$ and serve as an estimator of the parameter $x$ without modifying the reconstruction process defined by $f$. 
$$\hat{x} = f_{AF}(s^{(1)}, s^{(2)}, \ldots, s^{(n)})$$ 3.3 Equivalent Sample Distribution Under the definition of aggregation function, $U^*$ means a "selected at least once" matrix and $U^*$ obeys a multivariate Bernoulli distribution whose distribution parameter can be computed as Equation (10). The parameterized distribution of output observations is then $p(s^*; x, P_{U^*})$ correspondingly. $$U^* \sim B(U^*; 1, P_{U^*})$$ $$\text{diag}(P_{U^*}) = I - (I - \text{diag}(P_U))^n$$ Observing the process of aggregation function, we notice that multiple sample sets may be mapped to the same observation. Given a sample set containing $n$ identically distributed observations Figure 2: Demonstration of Aggregation Function with 3 observations (only the distribution of masks are presented for simplification). (a)/(c): The probability of being selected in different positions ($P_U$) and the distribution of a specific position being selected or not ($P_{U_i}$) before/after aggregation function. (b) The distribution of selections in the set of when size $n = 3$, and the color means the corresponding value after the aggregation function. $\{s^{(j)}|s^{(j)} \sim p(s), j = 0, 1, \ldots, n\}$, if the result of adding it to the aggregation function satisfy that $h(s^{(1)}, s^{(2)}, \ldots, s^{(n)})_i = (y^*, U^*)_i$, we define $p(s)$ the $n$-cardinality Equivalent Sample Distribution of $p(s^*)$, whose parameter $P_U$ satisfies Equation 10. The two distributions are connected by the aggregation function. In Figure 2, if (c) visualizes the distribution of actual observations, then (a) shows an equivalent sample distribution when $n = 3$. Solving Equation 10 by setting $P_U$ as unknowns, we get Equation 11, which computes the parameters of equivalent sample distribution. $$\text{diag}(P_{U_i}) = I - (I - \text{diag}(P_{U^*}))^{1/n}$$ 4 Connecting Bootstrap with Re-undersampling 4.1 Bootstrap Estimation of Multiple Observations For parameter estimation, the inference of the population is performed with the collected sample set of a certain size $n$. However, without reference to the population, the quality of the estimation cannot be computed. Bootstrap method solves the problem by sampling a new sample set of the same size $n$ in the original sample set (with replacement) $m$ times, and using the resampled sets (called Bootstrap sample sets) to form $m$ Bootstrap estimations. The quality of the estimation with the original sample set can then be inferred by assessing the Bootstrap estimation with respect to the original estimation, which is accessible. With much more that can be studied in applying bootstrapping, we here only focus on the non-parameterized Bootstrap method and the Bootstrap estimation of MSE. In the scale of our modeled reconstruction problem, we can represent the sample set of size $n$ with $\{s^{(1)}, s^{(2)}, \ldots, s^{(n)}\}$ and the original estimation as $\hat{x}$. The $k$th Bootstrap sample set can be represented by $\{s^{B_k(1)}, s^{B_k(2)}, \ldots, s^{B_k(n)}\}$, and the corresponding estimation as Equation 12. $$\hat{x}^{B_k} = f_{AF}(s^{B_k(1)}, s^{B_k(2)}, \ldots, s^{B_k(n)})$$ The MSE of the estimation $\hat{x}$ can be estimated by bootstrapping using Equation 13. For MRI reconstruction, we can see that without reference to the fully sampled image $x$, it is still possible to estimate the MSE of the reconstruction result. 
$$\text{mse}(\hat{x}) = \frac{1}{m} \sum_{k=1}^{m} (\hat{x}^{B_k} - \hat{x})^2$$ 4.2 Virtual Sample Set and Pseudo Resampling Trick In the previous section, we show that we can measure the quality of MRI reconstruction using Bootstrap method. However, in real-life applications, it is unrealistic to assume that there will be multiple observations to form a sample set of enough size to perform bootstrapping. In fact, often only one observation may be available in a specific scan. An intuitive method is to randomly generate a sample set with Equation 6 as a constraint or to assign the points to observations in the virtual sample simply uniformly. These methods will lead to high variance in computation with no prior knowledge leveraged. Instead, we propose to get the distribution of the observations by mapping the observation to a virtual sample set derived from the equivalent sample distribution defined in Section 3.3, which generates estimations equally distributed as a direct reconstruction given the single observation. If the single observation follows distribution \( p(s^*) \), the equivalent sample distribution is then \( p(s) \) correspondingly, which forms the prior distribution of the observations in the virtual sample set of corresponding size \( n \). However, given a specific observation \( s^* = (y^*, U^*) \), the operation of “sampled at least once” is constrained so we should instead model the observations in the virtual sample set as a conditioned distribution, as formalized in Equation 14: \[ P(U^V_i | U^*_i) = 1_{U^*_i=1} Pr(U^V_i = 1 | U^*_i = 1) \] (14) We can calculate the probabilities with Bayes’ theorem and then use approximate values to help implementation, the result is in Equation 15 and details of derivation can be found in Appendix C. The distribution of virtual sample set elements is parameterized as \( p(s^V; x, P_{UV}) \). \[ P_{UV} = Pr(U^V_i = 1) = \begin{cases} 1 & P_{U_i} = 1 \\ 1/n & U^*_i \neq 0 \& P_{U_i} \neq 1 \\ 0 & U^*_i = 0 \end{cases} \] (15) With the distribution of observations in the virtual sample set, the distribution of the output of the aggregation function, marked as \( p(s^{B*}) \) can be computed with the same methods as Equation 10. As a result, we can skip sampling the virtual sample set and directly draw instances from \( p(s^{B*}) \) to get the results of the aggregation function, which is similar to the kernel trick in kernel methods, so we name it Pseudo Resampling Trick. Figure 3: Demonstration of virtual sample set and pseudo resampling trick. The virtual sample set of an observation and its corresponding distribution are visualized. The gray area is the explicit construction of the virtual sample set and conducting Bootstrap sampling, while the purple area corresponds to the pseudo resampling trick. 4.3 Smarter Re-undersampling and Training with BootRec It’s easy to find that \( \{i | U^{B*}_i = 1\} \subseteq \{i | U^*_i = 1\} \), so the process results in re-undersampling in k-space. On the contrary, for any given re-undersampling mask \( U^R \) applied to the sample, the process can be described as the pseudo resampling virtual sample set, as long as the distribution of \( U^{R*} = U^R \odot U^* \) is the same as \( U^{B*} \). Based on this insight, new algorithms can be developed to implement the Bootstrap computation like Equation (13). The pseudo-code is provided in Appendix B. A further step is to use estimated MSE as a proxy for the loss function in training learning-based reconstruction models. 
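To make Eqs. (13)–(15) concrete, the following NumPy sketch runs the pseudo resampling trick and accumulates the Bootstrap MSE estimate; `reconstruct` stands for an arbitrary reconstruction model \( f \), and the `always_sampled` mask (positions with \( P_{U_i} = 1 \), e.g., a fully sampled calibration region) is an illustrative assumption.

```python
import numpy as np

def bootstrap_mse(y_star, u_star, reconstruct, n=1000, m=10,
                  always_sampled=None, rng=None):
    """Estimate mse(x_hat) without ground truth (Eqs. 13-15).

    y_star, u_star : acquired k-space and its sampling mask U*
    reconstruct    : callable (y, mask) -> image estimate
    n              : cardinality of the virtual sample set
    m              : number of Bootstrap replicates
    always_sampled : mask of positions with P_{U_i} = 1 (kept in every replicate)
    """
    rng = rng or np.random.default_rng()
    if always_sampled is None:
        always_sampled = np.zeros_like(u_star)

    x_hat = reconstruct(y_star, u_star)                 # original estimation

    # P(U^{B*}_i = 1): 1 for always-sampled points, 1 - (1 - 1/n)^n for the
    # remaining acquired points, 0 for never-acquired points (Eqs. 10, 15).
    p_keep = np.where(always_sampled > 0, 1.0,
                      u_star * (1.0 - (1.0 - 1.0 / n) ** n))

    mse = np.zeros_like(np.abs(x_hat))
    for _ in range(m):
        u_b = (rng.random(u_star.shape) < p_keep).astype(float)  # pseudo resample
        x_b = reconstruct(u_b * y_star, u_b)                      # Bootstrap estimation
        mse += np.abs(x_b - x_hat) ** 2
    return mse / m                                                # Eq. (13)
```

The function returns a per-pixel error map; its mean can serve as a scalar uncertainty score for the reconstruction.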
Algorithm 2 can derive the loss function as Equation (16), and Appendix D provides an example training pipeline.

\[ L_{\text{bootrec}}(\hat{x}, y^*) = \text{mse} = \frac{1}{m} \sum_{k=1}^{m} (\hat{x}^{B_k} - \hat{x})^2 \tag{16} \]

The key attributes of our methodology include: 1. The re-undersampling pattern is derived from the distribution of sampling and is dynamic. 2. A spatial loss is used, and the final target \(\hat{x}\) is involved in the training process. 3. The self-supervision loss can be interpreted as an estimate of the reconstruction error.

5 EXPERIMENTS

5.1 IMPLEMENTATION METHODS AND BASELINES

The experiments are conducted on the fastMRI dataset (Zbontar et al., 2018), applying the setting of Wang et al. (2022b), where 232 volumes are split into 2D slices and divided roughly 8:1:1 for training, validation, and testing, with a sampling ratio of 33% and a fixed mask for all data points. Coil sensitivity maps are built with ESPIRiT (Uecker et al., 2014). We use a large hyper-parameter \(n = 1000\), 100 epochs for training models, and 100 iterations in each zero-shot epoch. More details are in Appendix E.

5.2 EVALUATING ESTIMATED MSE AS UNCERTAINTY QUANTIFICATION

To test the effectiveness of Algorithm 2 on an independent reconstructor, a DC-CNN model (Schlemper et al., 2018a) is trained with a supervised MSE loss. The visualization of the results is in Figure 5. The data are collected from the test set, so the model never sees them during training or validation. We evaluate the correlation between the estimated MSE and the ground truth computed from the label and the prediction, as well as the influence of different \(m\) values. The correlation between the estimations and the ground truths means that it is possible to identify hard samples with the estimation.

5.3 ESTIMATED MSE AS LOSS FUNCTION FOR SELF-SUPERVISED TRAINING

We test the capability of optimizing a reconstruction model according to the Bootstrap-estimated MSE in the zero-shot reconstruction scenario (Yaman, 2022), where an untrained neural network is optimized according to the acquired data with some loss function. Note that the optimization of the loss function cannot be conducted directly with gradient descent, since the model will collapse. Accordingly, we make some special designs, including stopping the gradient of the original estimation (Grill et al., 2020; Chen & He, 2021) and enforcing consistency on positions not sampled; a schematic sketch of such a training step is given below. The details of the implementation can be found in Appendix D. We set \(m = 1\) in the experiment. We use data from the fastMRI validation set. Our method is compared with other zero-shot methods such as deep image prior (Ulyanov et al., 2018; Jafari et al., 2021; Senouf et al., 2019) and self-supervision via data undersampling (SSDU) (Yaman, 2022), where independent zero-shot models are trained separately per image.

---

\(^1\) In fact, some intuition for selecting such masks can also be derived, e.g., the re-undersampling ratio should not be lower than \(\lim_{n \to +\infty} 1 - (1 - 1/n)^n = 1 - \frac{1}{e}\); otherwise it would result in a negative cardinality of the virtual sample set. Another such situation is when the corresponding \(n\) is not an integer.

Figure 5: Estimated MSE with Algorithm 2 vs. MSE computed with the ground truth. All figures show clear linear correlations between the estimations and the ground truths, and the different values of \(m\) seem to have little influence on the correlation.

Figure 6: Training curves in zero-shot training. The estimated and GT MSE show a consistent tendency, proving the effectiveness of optimizing the estimated MSE.
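One plausible PyTorch instantiation of such a zero-shot training step (with \(m = 1\) and the stop-gradient on the original estimation) is sketched below; the network interface, the mask distribution `p_keep`, and the omission of the extra consistency term are simplifying assumptions — the actual Algorithm 2 and the stabilizing details are in the paper's appendices.

```python
import torch

def bootrec_step(net, y_star, u_star, p_keep, optimizer):
    """One zero-shot optimization step with the Eq. (16) loss (m = 1),
    using a stop-gradient on the original estimation to avoid collapse."""
    x_hat = net(y_star, u_star)                       # original estimation

    # pseudo-resampled replicate: re-undersampling mask drawn from p_keep
    u_b = (torch.rand_like(u_star) < p_keep).float()
    x_b = net(u_b * y_star, u_b)                      # Bootstrap estimation

    loss = torch.mean(torch.abs(x_b - x_hat.detach()) ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```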
Details of the models and results can be found in Appendix E. We also show the performance of SENSE reconstruction (Pruessmann et al., 1999), supervised DC-CNN methods, and the state-of-the-art (SOTA) unsupervised model from Wang et al. (2022b). The quantitative results can be found in Figure 7, and visual examples are displayed in Figure 8.

Zero-shot model performances In Figure 7 we show that pure k-space loss functions failed to perform much better than the simple zero-filled SENSE method in our implementation, while our model shows clear advantages. We also probe the metrics during training in Figure 6 and find that the model continuously improves as the optimization proceeds. Another positive finding is that the MSE of the reconstruction shows a tendency synchronous with the Bootstrap-estimated values, proving the accuracy of the estimation.

Handling multiple observations Another intriguing prospect is generalizing the loss to training over multiple samples in a dataset. For now, our theory does not directly cover the multi-sample situation. If multiple samples are trained, the distribution is compositional and the to-be-estimated parameter is transferred to be the parameter of the distribution of \(x^{(i)}\). Since the pseudo resampling and other tools are defined to be applied only to \(x\) (here it means one particular image), part of our theory needs to be re-explained, and we leave it for future study. However, we demonstrate the effectiveness of directly applying Algorithm 2 to multiple samples. The model trained with multiple samples performs better than the zero-shot models and even has a better structural similarity index (SSIM) than the SOTA unsupervised model while having a competitive peak signal-to-noise ratio (PSNR).

---

This model uses a more powerful backbone than DC-CNN in training, so it may have extra advantages.

Figure 7: PSNR and SSIM of different models. Purple boxes indicate supervised models; red boxes show unsupervised models; green boxes indicate methods without training data.

Figure 8: Visual examples of reconstruction of the fastMRI validation set. The error maps are amplified by 5 times for better presentation.

### 6 Conclusion and Future Work

In conclusion, as an attempt to provide a theoretical foundation and direct design of self-supervised learning algorithms, we propose a new paradigm for unsupervised compressed sensing MRI reconstruction. Unsupervised MRI reconstruction is modeled as parameter estimation, and we can then wrap existing reconstruction methods to form estimators. Based on this insight, several designs, including the aggregation function, equivalent sample distribution, virtual sample set, and pseudo resampling trick, are proposed to connect re-undersampling in self-supervised learning with Bootstrap sampling. Our flexible framework can not only estimate the MSE of arbitrary reconstructions without accessing ground truth images but also train self-supervised models for better performance. We believe our paradigm may also inspire some new insights into transforming unsupervised learning. For example, if we define corresponding domains, all augmentations in self-supervised learning may be transformed to re-undersampling and thus can be analyzed with our framework. Also, the proposed aggregation function and estimators are not fully studied and have a large space for improvement with future efforts.
The training process of the derived loss function may suffer from collapsing and exploding, which can also be further addressed for better solutions. REFERENCES Anastasios N. Angelopoulos, Amit Pal Singh Kohli, Stephen Bates, Michael I. Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, and Yaniv Romano. Image-to-image regression with distribution-free uncertainty quantification and applications in imaging. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 717–730. PMLR, 2022. URL https://proceedings.mlr.press/v162/angelopoulos22a.html Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 1597–1607. PMLR, 2020. URL http://proceedings.mlr.press/v119/chen20j.html Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 15750–15758. Computer Vision Foundation / IEEE, 2021. doi: 10.1109/CVPR46437.2021.01549. URL https://openaccess.thecvf.com/content/CVPR2021/html/Chen_Exploring_Simple_Siamese_Representation_Learning_CVPR_2021_paper.html Yutong Chen, Carola-Bibiane Schonlieb, Pietro Lio, Tim Leiner, Pier Luigi Dragotti, Ge Wang, Daniel Rueckert, David Firmin, and Guang Yang. AI-Based Reconstruction for Fast MRI—A Systematic Review and Meta-Analysis. Proceedings of the IEEE, 110(2):224–245, February 2022. ISSN 0018-9219, 1558-2256. doi: 10.1109/JPROC.2022.3141367. URL https://ieeexplore.ieee.org/document/9703109/ Hyungjin Chung, Eunju Cha, Leonard Sunwoo, and Jong Chul Ye. Two-stage deep learning for accelerated 3d time-of-flight mra without matched training data. Medical Image Analysis, 71: 102047, 2021. D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006. Vineet Edupuganti, Morteza Mardani, Shreyas Vasanawala, and John Pauly. Uncertainty quantification in deep mri reconstruction. IEEE Transactions on Medical Imaging, 40(1):239–250, 2020. Canberk Ekmekci and Mujdat Cetin. Uncertainty quantification for deep unrolling-based computational imaging. IEEE Transactions on Computational Imaging, 8:1195–1209, 2022. Zalan Fabian, Reinhard Heckel, and Mahdi Soltanolkotabi. Data augmentation for deep learning based accelerated MRI reconstruction with limited data. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 3057–3067. PMLR, 2021. URL http://proceedings.mlr.press/v139/fabian21a.html Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna M. Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, Muhammad Shahzad, Wen Yang, Richard Bamler, and Xiao Xiang Zhu. A survey of uncertainty in deep neural networks. CoRR, abs/2107.03342, 2021. URL https://arxiv.org/abs/2107.03342 Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. 
Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent - A new approach to self-supervised learning. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/f3ada80d5c4ee70142b17b8192b2958e-Abstract.html
lOsF9k1sxW
In the related work, it is stated that previous defense methods have high computational costs, which limits their practicality in real-world settings. But won't the computation of the FIM in SFT also increase the computational complexity and cost?
FISHER INFORMATION GUIDED BACKDOOR PURIFICATION VIA NAÏVE EXPLOITATION OF SMOOTHNESS

Anonymous authors Paper under double-blind review

ABSTRACT

Backdoor attacks during deep neural network (DNN) training have gained popularity in recent times since they can easily compromise the safety of a model of high importance, e.g., large language or vision models. Our study shows that a backdoor model converges to a bad local minima, i.e., sharper minima as compared to a benign model. Intuitively, the backdoor can be purified by re-optimizing the model to smoother minima. To obtain such re-optimization, we propose Smooth Fine-Tuning (SFT), a novel backdoor purification framework that exploits the knowledge of the Fisher Information Matrix (FIM). However, purification in this manner can lead to poor clean test-time performance due to drastic changes in the original backdoor model parameters. To preserve the original test accuracy, a novel regularizer has been designed to explicitly remember the learned clean data distribution. In addition, we introduce an efficient variant of SFT, dubbed Fast SFT, which reduces the number of tunable parameters significantly and obtains an impressive runtime gain of almost $5\times$. Extensive experiments show that the proposed method achieves state-of-the-art performance on a wide range of backdoor defense benchmarks: four different tasks—Image Recognition, Object Detection, Video Action Recognition, 3D Point Cloud; 10 different datasets including ImageNet, PASCAL VOC, UCF101; diverse model architectures spanning both CNNs and vision transformers; 14 different backdoor attacks, e.g., Dynamic, WaNet, ISSBA, etc.

1 INTRODUCTION

Training a deep neural network (DNN) with a fraction of poisoned or malicious data is often security-critical since the model can successfully learn both clean and adversarial tasks equally well. This is prominent in scenarios where one outsources the DNN training to a vendor. In such scenarios, an adversary can mount backdoor attacks (Gu et al., 2019; Chen et al., 2017) by poisoning a portion of the training samples so that the model will classify any sample with a particular trigger or pattern to an adversary-set label. Whenever a DNN is trained in such a manner, it becomes crucial to remove the effect of the backdoor before deploying it for a real-world application. In recent times, a number of attempts have been made (Liu et al., 2018; Wang et al., 2019; Wu & Wang, 2021; Li et al., 2021b; Zheng et al., 2022; Zhu et al., 2023) to tackle the backdoor issue in DNN training. Defense techniques such as fine-pruning (FP) (Liu et al., 2018) aim to prune vulnerable neurons affected by the backdoor. Most of the recent backdoor defenses can be categorized into two groups based on the intuition or perspective they are built on. They are i) pruning-based defenses (Liu et al., 2018; Wu & Wang, 2021; Zheng et al., 2022): some weights/channels/neurons are more vulnerable to the backdoor than others, so pruning or masking bad neurons should remove the backdoor; and ii) trigger-approximation-based defenses (Zeng et al., 2021; Chai & Chen, 2022): recovering the original trigger pattern and fine-tuning the model with this trigger would remove the backdoor. In this work, we bring in a novel perspective for analyzing the backdoor in DNNs. Different from existing techniques, we explore the backdoor insertion and removal phenomena from the DNN optimization point of view.
Unlike a benign model, a backdoor model is forced to learn two different data distributions: clean data distribution and poison data distribution. Having to learn both distributions, backdoor model optimization usually leads to a bad local minima or sharper minima w.r.t. clean distribution. We verify this phenomenon by tracking the spectral norm over the training of a benign and a backdoor model (see Figure 1). We also provide theoretical justification for such Figure 1: a & b) Eigen spectral density plots of loss Hessian for benign and backdoor (TrojanNet (Liu et al., 2017a)) models. In each plot, the maximum eigenvalue ($\lambda_{\text{max}}$), the trace of Hessian ($\text{Tr}(H)$), clean test accuracy (ACC), and attack success rate (ASR) are also reported. Here, low $\lambda_{\text{max}}$ and $\text{Tr}(H)$ hints at the presence of a smoother loss surface, which often results in low ASR and high ACC. Compared to a benign model, a backdoor model tends to reach sharper minima, as shown by the larger range of eigenvalues (x-axis). c) The convergence phenomena over the course of training. As the backdoor model converges to sharper minima, d) both ASR and ACC increase; observe the curves around 80 epochs. We use the CIFAR10 dataset with a PreActResNet18 (He et al., 2016) architecture for all evaluations. discrepancy in convergence behavior. Intuitively, we claim that the backdoor can be removed by re-optimizing the model to smoother minima. To obtain such re-optimization, we propose a novel backdoor purification technique—Smooth Fine-tuning (SFT) by exploiting the knowledge of Fisher Information Matrix (FIM) of a DNN to remove the imprint of the backdoor. Specifically, an FIM-guided regularizer has been introduced to achieve smooth convergence, which in turn effectively removes the backdoor. Our contribution can be summarized as follows: • **Novel Perspective for Backdoor Analysis.** We analyze the backdoor insertion process in DNNs from the optimization point of view. Our analysis shows that the optimization of a backdoor model leads to a bad local minima or sharper minima compared to a benign model. We also provide theoretical justifications for our novel findings. To the best of our knowledge, this is the first study establishing the correlation between smoothness and backdoor attacks. • **Novel Backdoor Defense.** We propose a novel technique, SFT, that removes the backdoor by re-optimizing the model to smooth minima. However, purifying the backdoor in this manner can lead to poor clean test time performance due to drastic changes in the original backdoor model parameters. To preserve the original test accuracy of the model, we propose a novel clean data-distribution-aware regularizer that encourages less drastic changes to the model parameters responsible for remembering the clean distribution. • **Better Runtime Efficiency.** In addition, we propose a computationally efficient variant of SFT, i.e., Fast SFT, where we perform spectral decomposition of the weight matrices and fine-tune only the singular values while freezing the corresponding singular vectors. By reducing the tunable parameters, the purification time can be shortened significantly. • **Comprehensive Evaluation.** We evaluate our proposed method on a wide range of backdoor defense benchmarks, which shows that SFT obtains state-of-the-art performance both in terms of purification performance and runtime. 2 RELATED WORK Existing backdoor defense methods can be categorized into backdoor detection or purifying techniques. 
Detection based defenses include trigger synthesis approach Wang et al. (2019); Qiao et al. (2019); Guo et al. (2020); Shen et al. (2021); Dong et al. (2021); Guo et al. (2021); Xiang et al. (2022); Tao et al. (2022), or malicious samples filtering based techniques Tran et al. (2018); Gao et al. (2019); Chen et al. (2019). However, these methods only detect the existence of backdoor without removing it. Backdoor purification defenses can be further classified as training time defenses and inference time defenses. Training time defenses include model reconstruction approach Zhao et al. (2020a); Li et al. (2021c), poison suppression approach Hong et al. (2020); Du et al. (2019); Borgnia et al. (2021), and pre-processing approaches Li et al. (2021b); Doan et al. (2020). Although training time defenses are often successful, they suffer from huge computational burdens and are less practical considering attacks during DNN outsourcing. Inference time defenses are mostly based on pruning approaches such as Koh & Liang (2017); Ma & Liu (2019); Tran et al. (2018); Diakonikolas et al. (2019); Steinhardt et al. (2017). Pruning-based approaches are typically based on model vulnerabilities to backdoor attacks. For example, MCR Zhao et al. (2020a) and CLP Zheng et al. (2022) analyzed node connectivity and channel Lipschitz constant to detect backdoor vulnerable neurons. Adversarial Neuron Perturbations (ANP) (Wu & Wang, 2021) adversarially perturbs the DNN weights by employing and pruning bad neurons based on pre-defined thresholds. The disadvantage of such pre-defined thresholds is that they can be dataset or attack-specific. ANP also suffers from performance degradation when the validation data size is too small. A more recent technique, Adversarial Weight Masking (AWM) (Chai & Chen, 2022), has been proposed to circumvent the issues of ANP by replacing the adversarial weight perturbation module with an adversarial input perturbation module. Specifically, AWM solves a bi-level optimization for recovering the backdoor trigger distribution. Notice that both of these SOTA methods rely heavily on the computationally expensive adversarial search in the input or weight space, limiting their applicability in practical settings. I-BAU (Zeng et al., 2021) also employs similar adversarial search-based criteria for backdoor removal. Recently, Zhu et al. (2023) proposed a regular weight fine-tuning (FT) technique that employs popular sharpness-aware minimization (SAM) (Foret et al., 2021) optimizer to remove the effect of backdoor. However, a naïve addition of SAM to the FT leads to poor clean test accuracy after backdoor purification. We provide additional related works on backdoor attacks and smoothness analysis of DNN in Appendix A.1. 3 THREAT MODEL **Attack Model.** Our attack model is consistent with prior works related to backdoor attacks (e.g., (Gu et al., 2019; Chen et al., 2017; Nguyen & Tran, 2021; Wang et al., 2022), etc.). We consider an adversary with the capabilities of carrying a backdoor attack on a DNN model, \( f_\theta : \mathbb{R}^d \rightarrow \mathbb{R}^c \), by training it on a poisoned data set \( D_{\text{train}} = \{ X_{\text{train}}, Y_{\text{train}} \} \); \( X_{\text{train}} = \{ x_i \}_{i=1}^{N_s}, Y_{\text{train}} = \{ y_i \}_{i=1}^{N_s} \), where \( N_s \) is the total number of training samples. Here, \( \theta \) is the parameters of the model, \( d \) is the input data dimension, and \( c \) is the total number of classes. 
Each input \( x \in X_{\text{train}} \) is labeled as \( y \in \{ 1, 2, \cdots, c \} \). The data poisoning happens through a specific set of triggers that can only be accessed by the attacker. The adversary goal is to train the model in a way such that any triggered samples \( x_b = x \oplus \delta \in \mathbb{R}^d \) will be wrongly misclassified to a target label \( y_b \), i.e., \( \arg \max(f_\theta(x_b)) = y_b \neq y \). Here, \( x \) is a clean test sample, and \( \delta \in \mathbb{R}^d \) represents the trigger pattern with the properties of \( ||\delta|| \leq \epsilon \), where \( \epsilon \) is the trigger magnitude determined by its shape, size, and color. Note that \( \oplus \) operator can be any specific operation depending on how an adversary designed the trigger. We define the poison rate (PR) as the ratio of poison and clean data in \( D_{\text{train}} \). An attack is considered successful if the model behaves as \( \arg \max(f_\theta(x)) = y \) and \( \arg \max(f_\theta(x_b)) = y_b \), where \( y \) is the true label for \( x \). We use attack success rate (ASR) for quantifying such success. **Defense Goal.** We assume the defender has complete control over the pre-trained model \( f_\theta(.) \), e.g., access of model parameters. Hence, we consider a defender with a task to purify the backdoor model \( f_\theta(.) \) using a small clean validation set \( D_{\text{val}} = \{ X_{\text{val}}, Y_{\text{val}} \} \) (usually 0.1 ~ 10% of the training data depending on the dataset). The goal is to repair the model such that it becomes immune to attack, i.e., \( \arg \max(f_{\theta_v}(x_b)) = y \), where \( f_{\theta_v} \) is the final purified model. Note that the defense method must retain clean accuracy of \( f_\theta(.) \) for benign inputs even if the model has no backdoor. 4 SMOOTHNESS ANALYSIS OF BACKDOOR MODELS In this section, we analyze the loss surface geometry of benign and backdoor models. To study the loss curvature properties of different models, we aim to analyze the Hessian of loss (loss-Hessian), \( H = \nabla_\theta^2 L \), where \( L \) is computed using the training samples. The spectral decomposition of symmetric square matrix \( H \) is \( H = [h_{ij}] = Q \Lambda Q^T \), where \( \Lambda = \text{diag}(\lambda_1, \lambda_2, \cdots, \lambda_N) \) is a diagonal matrix that contains the eigenvalues of \( H \) and \( Q = [q_1 q_2 \cdots q_N] \), where \( q_i \) is the \( i \)-th eigenvector of \( H \). As a measure for smoothness, we take the spectral norm of \( H \), \( \sigma(H) = \lambda_1 = \lambda_{\text{max}} \), and the trace of the Hessian, \( \text{Tr}(H) = \sum_{i=1}^{N} h_{ii} \). Low values for these two proxies indicate the presence of a highly smooth loss surface (Jastrzebski et al., 2020). The Eigen Spectral density plots in Fig. 1a and 1b elaborates on the optimization of benign and backdoor models. From the comparison of \( \lambda_{\text{max}} \) and \( \text{Tr}(H) \), it can be conjectured that optimization of a benign model leads to a smoother loss surface. Since the main difference between a benign and a backdoor model is that the latter needs to learn two different data distributions (clean and poison), we state the following observation: Observation 1. Having to learn two different data distributions, a backdoor model reaches a sharper minima, i.e., large $\sigma(H)$ and $\text{Tr}(H)$, as compared to the benign model. We support our observation with empirical evidence presented in Fig. 1c and 1d. 
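In practice, neither proxy requires materializing \(H\): \(\lambda_{\text{max}}\) can be estimated by power iteration on Hessian-vector products and \(\text{Tr}(H)\) by Hutchinson's estimator. The PyTorch sketch below is schematic — it assumes a `loss` computed on a small probe batch with the graph still attached — and is an illustration rather than the exact measurement protocol behind Figure 1.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product H v via double backprop."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    return torch.autograd.grad(flat @ vec, params, retain_graph=True)

def smoothness_proxies(loss, params, iters=20, probes=10):
    """Estimate lambda_max (power iteration) and Tr(H) (Hutchinson)."""
    flat_params = torch.cat([p.reshape(-1) for p in params])
    n, device = flat_params.numel(), flat_params.device

    # lambda_max: repeatedly apply H and track the Rayleigh quotient
    v = torch.randn(n, device=device)
    v /= v.norm()
    for _ in range(iters):
        hv = torch.cat([h.reshape(-1) for h in hvp(loss, params, v)])
        lam_max = (v @ hv).item()
        v = hv / (hv.norm() + 1e-12)

    # Tr(H) ~ E[z^T H z] with Rademacher probe vectors z
    trace = 0.0
    for _ in range(probes):
        z = torch.randint(0, 2, (n,), device=device).float() * 2 - 1
        hz = torch.cat([h.reshape(-1) for h in hvp(loss, params, z)])
        trace += (z @ hz).item() / probes
    return lam_max, trace
```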
Here, we observe the convergence behavior for 4 different attacks over the course of training. Compared to a benign model, the loss surface of a backdoor becomes much sharper as the model becomes well optimized for both distributions, i.e., high ASR and high ACC. Backdoor and benign models are far from being well-optimized at the beginning of training. The difference between these models is prominent once the model reaches closer to the final optimization point. As shown in Fig. 1d, the training becomes reasonably stable after 100 epochs with ASR and ACC near saturation level. Comparing $\lambda_{\text{max}}$ of benign and all backdoor models after 100 epochs, we notice a sharp contrast in Fig. 1c. This validates our claim on loss surface smoothness of benign and backdoor models in Observation 1. All of the backdoor models have high attack success rates (ASR) as well as high clean test accuracy (ACC) which indicates that the model had learned both distributions, providing additional support for Observation 1. Similar phenomena for different attacks, datasets, and architectures have been observed; details are provided in Appendix A.6.1. Theoretical Justification. (Keskar et al., 2017) shows that the loss-surface smoothness of $L$ for differentiable $\nabla_\theta L$ can be related to $L$–Lipschitz\(^1\) of $\nabla_\theta L$ as, $$\sup_\theta \sigma(\nabla^2_\theta L) \leq L$$ (1) Theorem 1. If the gradient of loss corresponding to clean and poison samples are $L_c$–Lipschitz and $L_b$–Lipschitz, respectively, then the overall loss (i.e., loss corresponding to both clean and poison samples with their ground-truth labels) is $(L_c + L_b)$–Smooth. Theorem 1 describes the nature of overall loss resulting from both clean and poison samples. Looking back to Eq. (1), Theorem 1 supports our empirical results related to backdoor and benign model optimization as larger Lipschitz constant implies sharper minima. 5 SMOOTH FINE-TUNING (SFT) Our proposed backdoor purification method—Smooth Fine-Tuning (SFT) consists of two novel components: (i) Backdoor Suppressor for backdoor purification and (ii) Clean Accuracy Retainer to preserve the clean test accuracy of the purified model. Backdoor Suppressor. Let us consider a backdoor model $f_\theta : \mathbb{R}^d \rightarrow \mathbb{R}^c$ with parameters $\theta \in \mathbb{R}^N$ to be fitted (fine-tuned) with input (clean validation) data $\{(x_i, y_i)\}_{i=1}^{|\mathcal{D}_{\text{val}}|}$ from an input data distribution $P_{x,y}$, where $x_i \in X_{\text{val}}$ is an input sample and $y_i \in Y_{\text{val}}$ is its label. We fine-tune the model by solving the following: $$\arg \min_\theta L(\theta),$$ (2) where $L(\theta) = L(y, f_\theta(x)) = \sum_{(x_i, y_i) \in \mathcal{D}_{\text{val}}} [-\log [f_\theta(x_i)]_{y_i}]$ is the empirical full-batch cross-entropy (CE) loss. Here, $[f_\theta(x)]_y$ is the $y$th element of $f_\theta(x)$. Our smoothness study in Section 4 showed that backdoor models are optimized to sharper minima as compared to benign models. Intuitively, re-optimizing the backdoor model to a smooth minima would effectively remove the backdoor. However, the vanilla fine-tuning objective presented in Eq. (2) is not sufficient to effectively remove the backdoor as we are not using any smoothness constraint or penalty. 
To this end, we propose to regularize the spectral norm of loss-Hessian $\sigma(H)$ in addition to minimizing the cross entropy-loss $L(\theta)$ as follows, $$\arg \min_\theta L(\theta) + \sigma(H).$$ (3) By explicitly regularizing the $\sigma(H)$, we intend to obtain smooth optimization of the backdoor model. However, the calculation of $H$, in each iteration of training has a huge computational cost. Given the objective function is minimized iteratively, it is not feasible to calculate the loss Hessian at each iteration. Additionally, the calculation of $\sigma(H)$ will further add to the computational cost. Instead of directly computing $H$ and $\sigma(H)$, we analytically derived a computationally efficient upper-bound of $\sigma(H)$ in terms of $\text{Tr}(H)$ as follows, \(^1\)Definition of $L$–Lipschitz and details of proof for Theorem 1 are presented in Appendix A.3. Lemma 1. The spectral norm of loss-Hessian \( \sigma(H) \) is upper-bounded by \( \sigma(H) \leq \text{Tr}(H) \approx \text{Tr}(F) \), where \[ F = \mathbb{E}_{(x,y) \sim P_{x,y}} \left[ \nabla_\theta \log[f_\theta(x)]_y \cdot (\nabla_\theta \log[f_\theta(x)]_y)^T \right] \] (4) is the Fisher-Information Matrix (FIM). Proof. The inequality \( \sigma(H) \leq \text{Tr}(H) \) follows trivially as \( \text{Tr}(H) \) of symmetric square matrix \( H \) is the sum of all eigenvalues of \( H \), \( \text{Tr}(H) = \sum_i \lambda_i \geq \sigma(H) \). The approximation of \( \text{Tr}(H) \) using \( \text{Tr}(F) \) follows the fact that \( F \) is negative expected Hessian of log-likelihood and used as a proxy of Hessian \( H \) (Amari, 1998). Following Lemma 1, we adjust our objective function described in Eq. (3) to \[ \arg \min_\theta L(\theta) + \eta_F \text{Tr}(F), \] (5) where \( \eta_F \) is a regularization constant. Optimizing Eq. (5) will force the backdoor model to converge to smooth minima. Even though this would purify the backdoor model, the clean test accuracy of the purified model may suffer due to significant changes in \( \theta \). To avoid this, we propose an additional but much-needed regularizer to preserve the clean test performance of the original model. Clean Accuracy Retainer. In a backdoor model, some neurons or parameters are more vulnerable than others. The vulnerable parameters are believed to be the ones that are sensitive to poison or trigger data distribution (Wu & Wang, 2021). In general, CE loss does not discriminate whether a parameter is more sensitive to clean or poison distribution. Such lack of discrimination may allow drastic or unwanted changes to the parameters responsible for learned clean distribution. This usually leads to sub-par clean test accuracy after purification, and it requires additional measures to fix this issue. To this end, we introduce a novel clean distribution aware regularization term as, \[ L_r = \sum_i \text{diag}(\tilde{F})_i \cdot (\theta_i - \tilde{\theta}_i)^2. \] Here, \( \tilde{\theta} \) is the parameter of the initial backdoor model and remains fixed throughout the purification phase. \( \tilde{F} \) is FIM computed only once on \( \tilde{\theta} \) and also remains unchanged during purification. \( L_r \) is a product of two terms: i) an error term that accounts for the deviation of \( \theta \) from \( \tilde{\theta} \); ii) a vector, \( \text{diag}(\tilde{F}) \), consisting of the diagonal elements of FIM \( (\tilde{F}) \). As the first term controls the changes of parameters w.r.t. 
\( \tilde{\theta} \), it helps the model to remember the already learned distribution. However, learned data distribution consists of both clean and poison distribution. To explicitly force the model to remember the clean distribution, we compute \( \tilde{F} \) using a clean validation set; with similar distribution as the learned clean data. Note that \( \text{diag}(\tilde{F})_i \) represents the square of the derivative of log-likelihood of clean distribution w.r.t. \( \theta_i \), \( [\nabla_{\tilde{\theta}_i} \log[f_{\tilde{\theta}}(x)]_y]^2 \) (ref. Eq. (4)). In other words, \( \text{diag}(\tilde{F})_i \) is the measure of importance of \( \theta_i \) towards remembering the learned clean distribution. If \( \text{diag}(\tilde{F})_i \) has a higher importance, we allow minimal changes to \( \theta_i \) over the purification process. This careful design of such a regularizer improves the clean test performance significantly. Finally, to purify the backdoor model as well as to preserve the clean accuracy, we formulate the following objective function as \[ \arg \min_\theta L(\theta) + \eta_F \text{Tr}(F) + \frac{\eta_r}{2} L_r, \] (6) where \( \eta_F \) and \( \eta_r \) are regularization constants. 5.1 FAST SFT (F-SFT) In general, any backdoor defense technique is evaluated in terms of removal performance and the time it takes to remove the backdoor, i.e., purification time. It is desirable to have a very short purification time. To this aim, we introduce a few unique modifications to SFT where we perform fine-tuning in a more compact space than the original parameter space. Let us represent the weight matrices for model with \( L \) number of layers as \( \theta = [\theta_1, \theta_2, \cdots, \theta_L] \). We take spectral decomposition of \( \theta_i = U_i \Sigma_i V_i^T \in \mathbb{R}^{M \times N} \), where \( \Sigma_i = \text{diag}(\sigma_i) \) and \( \sigma_i = [\sigma_i^1, \sigma_i^2, \cdots, \sigma_i^M] \) are singular values arranged in a descending order. The spectral shift of the parameter space is defined as the difference between singular values of original \( \theta_i \) and the updated Table 1: Removal Performance (%) of SFT and other defenses in single-label settings. Backdoor removal performance, i.e., drop in ASR, against a wide range of attacking strategies, shows the effectiveness of SFT. We use a poison rate of 10% for CIFAR10 and 5% for ImageNet. For ImageNet, we report performance on successful attacks (ASR ~ 100%) only. Average drop (↓) indicates the % changes in ASR/ACC compared to the baseline, i.e., No Defense. A higher ASR drop and lower ACC drop are desired for a good defense. 
| Dataset | Method | No Defense | ANP | I-BAU | AWM | FT-SAM | SFT (Ours) | |---------|--------|------------|-----|-------|------|--------|-----------| | | Attacks | ASR | ACC | ASR | ACC | ASR | ACC | ASR | ACC | ASR | ACC | | CIFAR-10 | Benign | 0 | 95.21 | 0 | 92.28 | 0 | 93.98 | 0 | 93.56 | 0 | 93.80 | 0 | 94.10 | | | Badnets | 100 | 92.96 | 6.87 | 86.92 | 2.84 | 85.96 | 9.72 | 87.85 | 3.74 | 86.17 | 1.86 | 89.32 | | | Blend | 100 | 94.11 | 5.77 | 87.61 | 7.81 | 89.10 | 6.53 | 89.64 | 2.13 | 88.93 | 0.38 | 92.17 | | | Trojan | 100 | 89.57 | 5.78 | 84.18 | 8.47 | 85.20 | 7.91 | 87.50 | 5.41 | 86.45 | 2.64 | 87.21 | | | Trojan-all | 100 | 88.33 | 4.94 | 84.18 | 9.57 | 83.89 | 9.82 | 84.97 | 3.48 | 84.30 | 2.77 | 86.10 | | | SIG | 100 | 88.84 | 2.04 | 84.92 | 1.37 | 83.60 | 8.35 | 83.57 | 0.73 | 83.38 | 0.92 | 86.73 | | | Dyn-one | 100 | 92.52 | 8.73 | 88.61 | 7.78 | 86.26 | 6.48 | 88.16 | 3.35 | 88.41 | 1.17 | 90.97 | | | Dyn-all | 100 | 92.61 | 7.28 | 88.32 | 8.19 | 84.51 | 6.30 | 89.74 | 2.46 | 87.72 | 1.61 | 91.19 | | | CLB | 100 | 92.78 | 5.83 | 89.41 | 3.41 | 85.07 | 5.78 | 86.70 | 1.89 | 87.18 | 2.04 | 91.37 | | | CFS | 93.36 | 90.17 | 25.80 | 86.80 | 24.15 | 85.63 | 26.27 | 85.05 | 18.31 | 85.53 | 14.60 | 86.97 | | | FBA | 100 | 90.02 | 11.05 | 86.90 | 16.70 | 87.42 | 10.53 | 85.35 | 10.31 | 87.06 | 6.21 | 87.50 | | | LIRA | 99.25 | 92.15 | 6.34 | 87.47 | 8.51 | 89.61 | 8.13 | 87.50 | 3.93 | 88.70 | 2.53 | 89.82 | | | WaNet | 98.64 | 92.29 | 9.81 | 88.70 | 7.18 | 89.24 | 8.72 | 85.94 | 2.96 | 87.45 | 2.38 | 89.67 | | | ISSBA | 99.80 | 92.80 | 10.76 | 85.42 | 9.82 | 89.20 | 9.48 | 88.03 | 3.68 | 88.51 | 4.24 | 90.18 | | | BPPA | 99.70 | 93.82 | 13.94 | 89.23 | 10.46 | 88.42 | 9.94 | 89.68 | 7.40 | 89.94 | 5.14 | 92.84 | | Avg. Drop | - | - | 90.34 ↓ | 4.57 ↓ | 90.75 ↓ | 4.96 ↓ | 90.31 ↓ | 4.42 ↓ | 94.29 ↓ | 4.53 ↓ | 95.86 ↓ | 2.28 ↓ | \[ \hat{\theta}_i \text{ can be expressed as } \delta_i = [\delta_1^i, \delta_2^i, \cdots, \delta_M^i]. \text{ Here, } \delta_j^i \text{ is the difference between individual singular value } \sigma_j^i. \text{ Instead of updating } \theta, \text{ we update the total spectral shift } \delta = [\delta_1, \delta_2, \cdots, \delta_L] \text{ as,} \] \[ \arg \min_{\delta} \mathcal{L}(\delta) + \eta_F \operatorname{Tr}(F') + \frac{\eta_r}{2} L_r \tag{7} \] Here, we keep the singular vectors \((U_i, V_i)\) frozen during the updates. We obtain the updated singular values as \( \hat{\Sigma}_i = \operatorname{diag}(\operatorname{ReLU}(\sigma_i + \delta_i)) \) which gives us the updated weights \( \hat{\theta}_i = U_i \hat{\Sigma}_i V_i^T \). Fine-tuning the model in spectral domain reduces the number of tunable parameters and purification time significantly (Table 5). ## 6 EXPERIMENTAL RESULTS ### 6.1 EVALUATION SETTINGS **Datasets.** We evaluate our proposed method on two widely used datasets for backdoor attack study: CIFAR10 (Krizhevsky et al., 2009) with 10 classes, GTSRB (Stallkamp et al., 2011) with 43 classes. As a test of scalability, we also consider Tiny-ImageNet (Le & Yang, 2015) with 100,000 images distributed among 200 classes and ImageNet (Deng et al., 2009) with 1.28M images distributed among 1000 classes. For multi-label clean-image backdoor attacks, we use object detection datasets Pascal VOC07 (Everingham et al., 2010), VOC12 (Everingham et al.) and MS-COCO (Lin et al., 2014). UCF-101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011) have been used for evaluating in action recognition task. 
In addition, ModelNet (Wu et al., 2015) dataset has also been considered for evaluation on 3D point cloud classifier. **Attacks Configurations.** We consider 14 state-of-the-art backdoor attacks: 1) Badnets (Gu et al., 2019), 2) Blend attack (Chen et al., 2017), 3 & 4) TrojanNet (Troj-one & Troj-all) (Liu et al., 2017a), 5) Sinusoidal signal attack (SIG) (Barni et al., 2019), 6 & 7) Input-Aware Attack (Dyn-one and Dyn-all) (Nguyen & Tran, 2020), 8) Clean-label attack (CLB) (Turner et al., 2018), 9) Composite backdoor (CBA) (Lin et al., 2020), 10) Deep feature space attack (FBA) (Cheng et al., 2021), 11) Warping-based backdoor attack (WaNet) (Nguyen & Tran, 2021), 12) Invisible triggers based backdoor attack (ISSBA) (Li et al., 2021d), 13) Imperceptible backdoor attack (LIRA) (Doan et al., 2021), and 14) Quantization and contrastive learning based attack (BPPA) (Wang et al., 2022). More details on hyper-parameters and overall training settings can be found in Appendix A.5.1. Table 2: Performance analysis for the multi-label backdoor attack (Chen et al., 2023). Mean average precision (mAP) and ASR of the model, with and without defenses, have been shown. | Dataset | No defense | FP | Vanilla FT | MCR | NAD | FT-SAM | SFT (Ours) | |---------|------------|----|------------|-----|-----|--------|------------| | | ASR mAP | ASR mAP | ASR mAP | ASR mAP | ASR mAP | ASR mAP | ASR mAP | | VOC12 | 86.4 | 92.5 | 61.8 | 87.2 | 19.3 | 86.9 | 28.3 | 86.0 | 26.6 | 87.3 | 17.9 | 87.6 | 16.1 | 89.4 | | MS-COCO | 84.8 | 91.9 | 70.2 | 86.1 | 18.5 | 85.3 | 20.8 | 84.1 | 19.0 | 84.9 | 15.2 | 85.7 | 13.8 | 88.6 | | | 85.6 | 88.0 | 64.3 | 83.8 | 17.2 | 84.1 | 24.2 | 82.5 | 22.6 | 83.4 | 14.3 | 83.8 | 15.0 | 85.2 | Table 3: Performance analysis for action recognition task where we choose 2 video datasets for evaluation. | Dataset | No defense | MCR | NAD | ANP | I-BAU | AWM | FT-SAM | SFT (Ours) | |---------|------------|-----|-----|-----|-------|-----|--------|------------| | | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | | UCF-101 | 81.3 | 75.6 | 23.5 | 68.3 | 26.9 | 69.2 | 24.1 | 70.8 | 20.4 | 70.6 | 22.8 | 70.1 | 14.7 | 71.3 | 12.1 | 72.4 | | HMDB-51 | 80.2 | 45.0 | 19.8 | 38.2 | 23.1 | 37.6 | 17.0 | 40.2 | 17.5 | 41.1 | 15.2 | 40.9 | 10.4 | 38.8 | 9.0 | 40.6 | Defenses Configurations. We compare our approach with 8 existing backdoor mitigation methods: 1) FT-SAM (Zhu et al., 2023); 2) Adversarial Neural Pruning (ANP) (Wu & Wang, 2021); 3) Implicit Backdoor Adversarial Unlearning (I-BAU) (Zeng et al., 2021); 4) Adversarial Weight Masking (AWM) (Chai & Chen, 2022); 5) Fine-Pruning (FP) (Liu et al., 2017b); 6) Mode Connectivity Repair (MCR) (Zhao et al., 2020a); and 7) Neural Attention Distillation (NAD) (Li et al., 2021c), 8) Vanilla FT where we simply fine-tune DNN weights. We provide implementation details for SFT and other defense methods in Appendix A.5.2 and Appendix A.5.3. Note that the experimental results for defenses 5, 6, 7, and 8 to Table 10 and 11 has been moved to Appendix A.5.4 due to page limitations. We measure the effectiveness of a defense method in terms of average drop in ASR and ACC overall attacks. A successful defense should have a high drop in ASR with a low drop in ACC. Here, ASR is defined as the percentage of poison test samples that are classified to the adversary-set target label ($y_b$) and ACC as the model’s clean test accuracy. An ASR of 100% indicates a successful attack, and 0% suggests the attacks’ imprint on the DNN is completely removed. 
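Before turning to the results, it may help to see the pieces of Section 5 combined. The sketch below shows one SFT fine-tuning step on the clean validation set implementing Eq. (6); it is a minimal PyTorch-style illustration, not the authors' code. The helper names (`fim_trace`, `sft_step`), the hyper-parameter values `eta_f` and `eta_r`, and the naive per-sample Fisher estimate are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fim_trace(model, x, y):
    """Empirical estimate of Tr(F) (Eq. (4)): mean squared norm of the
    per-sample gradient of the label log-likelihood w.r.t. the parameters.
    Computed naively, one sample at a time, for clarity."""
    params = [p for p in model.parameters() if p.requires_grad]
    total = 0.0
    for xi, yi in zip(x, y):
        logp = F.log_softmax(model(xi.unsqueeze(0)), dim=1)[0, yi]
        grads = torch.autograd.grad(logp, params, create_graph=True)
        total = total + sum((g ** 2).sum() for g in grads)
    return total / len(x)

def sft_step(model, ref_params, fim_diag, batch, optimizer, eta_f=1.0, eta_r=1.0):
    """One purification step: CE loss + eta_F * Tr(F) + (eta_r / 2) * L_r (Eq. (6)).
    `ref_params` are the frozen initial (backdoor) parameters theta~ and
    `fim_diag` the diagonal of F~, computed once on clean validation data."""
    x, y = batch
    ce = F.cross_entropy(model(x), y)                     # L(theta)
    tr_f = fim_trace(model, x, y)                         # backdoor suppressor
    l_r = sum((d * (p - p0) ** 2).sum()                   # clean accuracy retainer
              for p, p0, d in zip(model.parameters(), ref_params, fim_diag))
    loss = ce + eta_f * tr_f + 0.5 * eta_r * l_r
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the fast variant f-SFT (Section 5.1), the same objective is optimized over the spectral shifts $\delta$ rather than over $\theta$ directly.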
### 6.2 PERFORMANCE EVALUATION OF SFT

**Single-Label Settings.** In Table 1, we present the performance of different defenses for CIFAR10 and ImageNet. We consider five label poisoning attacks: Badnets, Blend, TrojanNet, Dynamic, and BPPA. For TrojanNet, we consider two different variations based on label-mapping criteria: Troj-one and Troj-all. In Troj-one, all of the triggered images have the same target label, whereas target labels are uniformly distributed over all classes for Troj-all. Regardless of the complexity of the label-mapping type, our proposed method outperforms all other methods both in terms of ASR and ACC. We also consider attacks that do not change the label during trigger insertion, i.e., clean-label attacks; two such attacks are CLB and SIG. For further validation of our proposed method, we use the deep feature-based attacks CBA and FBA, both of which manipulate deep features for backdoor insertion. Compared to other defenses, SFT shows better effectiveness against this diverse set of attacks, achieving an average ASR drop of 95.86% while sacrificing only 2.28% of ACC. Table 1 also shows the performance of baseline methods such as ANP, I-BAU, AWM, and FT-SAM. ANP, I-BAU, and AWM are adversarial search-based methods that work well for mild attacks (PR~5%) and often struggle to remove the backdoor for stronger attacks with high PR. FT-SAM uses sharpness-aware minimization (SAM) (Foret et al., 2021) for fine-tuning model weights. SAM is a recently proposed SGD-based optimizer that explicitly penalizes abrupt changes of the loss surface by bounding the search space within a small region. Even though the objective of SAM is similar to ours, SFT still obtains better removal performance than FT-SAM. One potential reason is that SAM uses a predefined local area to search for the maximum loss; depending on the initial convergence of the original backdoor model, predefining the search area may limit the ability of the optimizer to provide the best convergence post-purification. As a result, the issue of poor clean test accuracy after purification is also observable for FT-SAM. For the scalability test of SFT, we consider the widely used ImageNet dataset. Consistent with CIFAR10, SFT obtains SOTA performance for this dataset too. However, there is a significant reduction in the effectiveness of ANP, AWM, and I-BAU for ImageNet. In the case of large models and datasets, the task of identifying vulnerable neurons or weights gets more complicated and may result in wrong neuron pruning or weight masking. Due to page limitations, we move the results of GTSRB and Tiny-ImageNet to Table 7 in Appendix A.4.

**Multi-Label Settings.** In Table 2, we show the performance of our proposed method in the multi-label clean-image backdoor attack (Chen et al., 2023) setting. We choose 3 object detection datasets (Everingham et al., 2010; Lin et al., 2014) and the ML-Decoder (Ridnik et al., 2023) network architecture for

Table 4: Removal performance (%) of SFT against backdoor attacks on **3D point cloud classifiers**. The attack methods (Li et al., 2021a) are poison-label backdoor attack (PointPBA) with interaction trigger (PointPBA-I), PointPBA with orientation trigger (PointPBA-O), and clean-label backdoor attack (PointCBA). We also consider the "backdoor points" based attack (3DPC-BA) described in (Xiang et al., 2021).
| Attack | No Defense | MCR | NAD | ANP | I-BAU | AWM | FT-SAM | SFT (Ours) | |------------|------------|-----|-----|-----|-------|-----|--------|------------| | | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | ASR ACC | | PointBA-I | 98.6 89.1 | 14.8 81.2 | 13.5 81.4 | 14.4 82.8 | 13.6 82.6 | 15.4 83.9 | 8.1 84.0 | 9.6 85.7 | | PointBA-O | 94.7 89.8 | 14.6 80.3 | 12.5 81.1 | 13.6 81.7 | 14.8 82.0 | 13.1 82.4 | 9.4 83.8 | 7.5 85.3 | | PointCBA | 66.0 88.7 | 24.1 80.6 | 20.4 82.7 | 20.8 83.0 | 21.2 83.5 | 21.5 83.8 | 18.6 84.6 | 19.4 86.1 | | 3DPC-BA | 93.8 91.2 | 18.4 83.1 | 15.8 84.5 | 17.2 84.6 | 16.8 84.7 | 15.6 85.9 | 15.9 85.7 | 12.6 87.7 | this evaluation. It can be observed that SFT obtains a 1.4% better ASR drop as compared to FT-SAM for the VOC12 (Everingham et al.) dataset while producing a slight drop of 2.3% drop in mean average precision (mAP). The reason for such improvement can be attributed to our unique approach to obtaining smoothness. Furthermore, our proposed regularizer ensures better post-purification mAP than FT-SAM. More on attack and defense settings can be found in Appendix A.5.1 and Appendix A.5.2, respectively. **Video Action Recognition.** A clean-label attack (Zhao et al., 2020b) has been used for this experiment that requires generating adversarial perturbations for each input frame. We use two widely used datasets, UCF-101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011), with a CNN+LSTM network architecture. An ImageNet pre-trained ResNet50 network has been used for the CNN, and a sequential input-based Long Short Term Memory (LSTM) (Sherstinsky, 2020) network has been put on top of it. We subsample the input video by keeping one out of every 5 frames and use a fixed frame resolution of $224 \times 224$. We choose a trigger size of $20 \times 20$. Following (Zhao et al., 2020b), we create the required perturbation for clean-label attack by running projected gradient descent (PGD) (Madry et al., 2017) for 2000 steps with a perturbation norm of $\epsilon = 16$. Note that our proposed augmentation strategies for image classification are directly applicable to action recognition. During training, we keep 5% samples from each class to use them later as the clean validation set. Table 3 shows that SFT outperforms other defenses by a significant margin, e.g., I-BAU and AWM. Since we have to deal with multiple image frames here, the trigger approximation for these two methods is not as accurate as it is for a single image scenario. Without a good approximation of the trigger, these methods seem to underperform in most of the cases. **3D Point Cloud.** In this part of our work, we evaluate SFT against attacks on 3D point cloud classifiers (Li et al., 2021a; Xiang et al., 2021). For evaluation purposes, we consider the ModelNet (Wu et al., 2015) dataset and PointNet++ (Qi et al., 2017) architecture. The purification performance of SFT as well as other defenses are presented in Table 4. The superior performance of SFT can be attributed to the fact of smoothness enforcement that helps with backdoor suppressing and clean accuracy retainer that preserves the clean accuracy of the original model. We tackle the issue of backdoors in a way that gives us better control during the purification process. ### 6.3 Ablation Study In this section, we perform various ablation studies to validate the design choices for SFT. We consider mostly the CIFAR10 dataset for all of these experiments. 
**Smoothness Analysis of SFT.** Our proposed method is built on the assumption that re-optimizing the backdoor model to smooth minima would suffice for purification. Here, we validate this assumption by observing the training curves of SFT shown in Fig. 2a and 2b. It can be observed that SFT indeed re-optimizes the backdoor model to smoother minima. Due to such re-optimization, the effect of the backdoor has been rendered ineffective. This is visible in Fig. 2b as the attack success rate becomes close to 0 while retaining good clean test performance. We report further results and explanations on this in Appendix A.6.1.

**Figure 2:** Smoothness analysis of a DNN during backdoor purification processes. As the model is being re-optimized to smooth minima, the effect of the backdoor vanishes. We use the CIFAR10 dataset for this experiment.

Table 5: **Average runtime** for different defenses against all 14 attacks on CIFAR10. An NVIDIA RTX3090 GPU was used for this evaluation.

| Method | ANP | I-BAU | AWM | FT-SAM | SFT (Ours) |
|--------|-----|-------|-----|--------|------------|
| Runtime (sec.) | 118.1 | 92.5 | 112.5 | 98.1 | 20.8 |

Table 6: Effect of fine-tuning only the spectral shift, denoted by SFT ($\delta$) or f-SFT. SFT ($\theta$) implies the fine-tuning of all parameters according to Eq. (6). Although SFT ($\theta$) provides similar performance as SFT ($\delta$), the average runtime is almost $4.5 \times$ higher. Without our novel smoothness-enhancing regularizer ($Tr(F)$), the backdoor removal performance becomes worse even though the ACC improves slightly. The effect of $L_r$ on obtaining better ACC can also be observed: due to this clean accuracy retainer, we obtain an average ACC improvement of $\sim 2.5\%$. The runtimes shown here are averaged over all 14 attacks.

| Method | Badnets ASR | Blend ASR | Trojan ASR | Dynamic ASR | CLB ASR | SIG ASR | Runtime (Secs.) |
|-------------------------|-------------|-----------|------------|-------------|---------|--------|-----------------|
| No Defense | 100 | 92.96 | 100 | 94.11 | 100 | 89.57 | |
| SFT ($\theta$) | 1.72 | 89.19 | 1.05 | 91.58 | 3.18 | 86.74 | 1.47 |
| SFT ($\theta$) w/o $Tr(F)$ | 5.34 | 90.62 | 4.74 | 91.88 | 5.91 | 87.68 | 3.93 |
| SFT ($\theta$) w/o $L_r$ | 1.50 | 87.28 | 0.54 | 89.56 | 2.35 | 84.45 | 1.25 |
| SFT ($\delta$) or f-SFT | 1.86 | 89.32 | 0.38 | 92.17 | 2.64 | 87.21 | 1.17 |

**Runtime Analysis.** In Table 5, we show the average runtime for different defenses. Similar to purification performance, purification time is also an important indicator of the success of a defense technique. In Section 6.2, we already show that our method outperforms other defenses in most of the settings. As for the runtime, SFT can purify the model in 20.8 seconds, which is almost $5 \times$ less as compared to FT-SAM. As part of its formulation, SAM requires a double forward pass to calculate the loss gradient twice, which increases the runtime of FT-SAM significantly. Furthermore, the computational gain of SFT can be attributed to our proposed rapid fine-tuning method, f-SFT. Since f-SFT performs spectral shift ($\delta$) fine-tuning, it employs a significantly more compact parameter space. Due to this compactness, the runtime, a.k.a. purification time, is reduced significantly. Additional runtime analysis is in Appendix A.5.2.

**Effect of Proposed Regularizer.** In Table 6, we analyze the impact of our proposed regularizers as well as the difference between fine-tuning $\theta$ and $\delta$.
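For reference when reading Table 6, the spectral-shift parameterization behind SFT ($\delta$), i.e., Eq. (7) in Section 5.1, can be sketched for a single linear layer as follows. This is a minimal illustration under stated assumptions (no bias term, and the class and method names are invented); only `delta` is trainable, while the singular vectors remain frozen.

```python
import torch
import torch.nn as nn

class SpectralShiftLinear(nn.Module):
    """Reparameterize a weight matrix W = U diag(sigma) V^T and fine-tune only the
    spectral shift delta; the purified weight is U diag(ReLU(sigma + delta)) V^T."""

    def __init__(self, weight: torch.Tensor):
        super().__init__()
        U, sigma, Vh = torch.linalg.svd(weight, full_matrices=False)
        # Singular vectors and original singular values stay frozen.
        self.register_buffer("U", U)
        self.register_buffer("Vh", Vh)
        self.register_buffer("sigma", sigma)
        # Only the spectral shift is updated during purification.
        self.delta = nn.Parameter(torch.zeros_like(sigma))

    def weight(self) -> torch.Tensor:
        s = torch.relu(self.sigma + self.delta)   # clamp shifted singular values
        return self.U @ torch.diag(s) @ self.Vh

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().t()
```

Because the number of singular values per layer is much smaller than the number of weights, this parameterization is what yields the shorter runtimes reported for SFT ($\delta$).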
It can be observed that SFT ($\theta$) provides similar performance as SFT ($\delta$) for most attacks. However, the average runtime of the former is almost $4.5 \times$ longer than the latter. Such a long runtime is undesirable for a defense technique. We also present the impact of our novel smoothness-enhancing regularizer, $Tr(F)$. Without minimizing $Tr(F)$, the backdoor removal performance becomes worse even though the ACC improves slightly. We also see some improvement in runtime (14.4 vs. 20.8) in this case. Table 6 also shows the effect of $L_r$ which is the key to remembering the learned clean distribution. The introduction of $L_r$ ensures superior preservation of clean test accuracy of the original model. Specifically, we obtain an average ACC improvement of $\sim 2.5\%$ with the regularizer in place. Note that we may obtain slightly better ASR performance (for some attacks) without the regularizer. However, the huge ACC improvement outweighs the small ASR improvement in this case. Therefore, SFT ($\delta$) is a better overall choice as a backdoor purification technique. We provide more studies in Appendix A.6: e.g. Stronger Backdoor Attacks (Appendix A.6.2), Label Correction Rate (Appendix A.6.3), Effect of Clean Validation Sizes (Appendix A.6.4), Effect of Different Architectures (Appendix A.6.5), Combination of Attacks (Appendix A.6.7), etc. **7 CONCLUSION** In this work, we analyze the backdoor insertion and removal process from a novel perspective, model smoothness. Following this perspective, we propose a novel backdoor purification technique using the knowledge of Fisher-Information matrix. The proposed method is motivated by our analysis of loss surface smoothness and its strong correlation with the backdoor insertion and purification processes. To preserve the clean test accuracy of the original backdoor model, we introduce a novel clean data distribution-aware regularizer. In addition, a faster version of SFT has been proposed where we fine-tune the singular values of weights instead of directly fine-tuning the weights itself. Our proposed method achieves SOTA performance in a wide range of benchmarks. **Limitations.** It is observable that no matter which defense techniques we use the clean test accuracy (ACC) consistently drops for all datasets. We offer an explanation for fine-tuning-based techniques as SFT is one of them. As we use a small validation set for fine-tuning, it does not necessarily cover the whole training data distribution. Therefore, fine-tuning with this small amount of data bears the risk of overfitting and reduced clean test accuracy. While our clean accuracy retainer partially solves this issue, more rigorous and sophisticated methods need to be designed to fully alleviate this issue. REFERENCES Shun-Ichi Amari. Natural gradient works efficiently in learning. *Neural computation*, 10(2):251–276, 1998. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014. Mauro Barni, Kassem Kallas, and Benedetta Tondi. A new backdoor attack in cnns by training set corruption without label poisoning. In *2019 IEEE International Conference on Image Processing (ICIP)*, pp. 101–105. IEEE, 2019. Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. Findings of the 2014 workshop on statistical machine translation. 
In *Proceedings of the Ninth Workshop on Statistical Machine Translation*, pp. 12–58, Baltimore, Maryland, USA, June 2014. Association for Computational Linguistics. doi: 10.3115/v1/W14-3302. URL https://aclanthology.org/W14-3302. Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, and Arjun Gupta. Strong data augmentation sanitizes poisoning and backdoor attacks without an accuracy tradeoff. In *ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 3855–3859. IEEE, 2021. Stephen P Boyd and Lieven Vandenberghe. *Convex optimization*. Cambridge university press, 2004. Shuwen Chai and Jinghui Chen. One-shot neural backdoor erasing via adversarial weight masking. *arXiv preprint arXiv:2207.04497*, 2022. Huili Chen, Cheng Fu, Jishen Zhao, and Farinaz Koushanfar. Deepinspect: A black-box trojan detection and mitigation framework for deep neural networks. In *IJCAI*, volume 2, pp. 8, 2019. Kangjie Chen, Xiaoxuan Lou, Guowen Xu, Jiwei Li, and Tianwei Zhang. Clean-image backdoor: Attacking multi-label models with poisoned labels only. In *The Eleventh International Conference on Learning Representations*, 2023. Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. *arXiv preprint arXiv:1712.05526*, 2017. Siyuan Cheng, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. Deep feature space trojan attack of neural networks by controlled detoxification. In *AAAI*, volume 35, pp. 1148–1156, 2021. Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In *international conference on machine learning*, pp. 1310–1320. PMLR, 2019. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, pp. 248–255. IEEE, 2009. Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Jacob Steinhardt, and Alistair Stewart. Sever: A robust meta-algorithm for stochastic optimization. In *International Conference on Machine Learning*, pp. 1596–1606. PMLR, 2019. Bao Gia Doan, Ehsan Abbasnejad, and Damith C Ranasinghe. Februus: Input purification defense against trojan attacks on deep neural network systems. In *Annual Computer Security Applications Conference*, pp. 897–912, 2020. Khoa Doan, Yingjie Lao, Weijie Zhao, and Ping Li. Lira: Learnable, imperceptible and robust backdoor attacks. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 11966–11976, 2021. Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, and Jun Zhu. Black-box detection of backdoor attacks with limited information and data. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 16482–16491, 2021.
Y8OaqdX5Xt
How does PToM perform compared with an agent that performs neurally-guided MCTS trained via self-play like AlphaZero, and how much of the fast convergence benefit comes from model-based planning vs. ToM?
PLANNING WITH THEORY OF MIND FOR FEW-SHOT ADAPTATION IN SEQUENTIAL SOCIAL DILEMMAS Anonymous authors Paper under double-blind review ABSTRACT Despite the recent successes of multi-agent reinforcement learning (MARL) algorithms, efficiently adapting to other agents in mixed-motive environments remains a significant challenge. One feasible approach is to use Theory of Mind (ToM) to reason about the mental states of other agents and model their behavior. However, these methods often encounter difficulties in efficient reasoning and utilization of inferred information. To address these issues, we propose Planning with Theory of Mind (PToM), a novel multi-agent algorithm that enables few-shot adaptation to unseen policies in sequential social dilemmas (SSDs). PToM is hierarchically composed of two modules: an opponent modeling module that utilizes ToM to infer others’ goals and learn corresponding goal-conditioned policies, and a planning module that employs Monte Carlo Tree Search (MCTS) to identify the best response. Our approach improves efficiency by updating beliefs about others’ goals both between and within episodes and by using information from the opponent modeling module to guide planning. Experimental results demonstrate that in three representative SSD paradigms, PToM converges expeditiously, excels in self-play scenarios, and exhibits superior few-shot adaptation capabilities when interacting with various unseen agents. Furthermore, the emergence of social intelligence during our experiments underscores the potential of our approach in complex multi-agent environments. 1 INTRODUCTION Constructing agents being able to rapidly adapt to previously unseen agents is a longstanding challenge for Artificial Intelligence. We refer to this ability as few-shot adaptation. Previous work has proposed well-performed MARL algorithms to study few-shot adaptation in zero-sum games (Vinyals et al., 2019; Vezhnevets et al., 2020) and common-interest environments (Barrett et al., 2011; Hu et al., 2020; Mahajan et al., 2022; Team et al., 2023). These environments involve a predefined competitive or cooperative relationship between agents. However, little attention has been given to the challenge of adapting to new opponents in mixed-motive environments, where cooperation coexists with defection. A majority of realistic multi-agent decision-making scenarios can be abstracted into mixed-motive environments (Komorita & Parks, 1995; Dafoe et al., 2020). We focus on few-shot adaptation of unseen agents in sequential social dilemmas (SSDs), a widely-studied kind of mixed-motive environments. SSDs extend classic matrix-form social dilemmas temporally and spatially. They enable the observation of others’ trajectories and modification of one’s own strategies within one episode (Leibo et al., 2017). SSDs are inherently complex, requiring the dynamic identification of potential partners and competitors. Decision making in SSDs should balance short-term interests with long-term rewards, while also considering the trade-off between self-interest and group benefit. Many algorithms struggle to perform well in SSDs despite success in zero-sum and pure-cooperative environments, because they use efficient techniques specific to reward structures, such as minimax (Littman, 1994; Li et al., 2019), Double Oracle (McMahan et al., 2003; Balduzzi et al., 2019) or IGM condition (Sunehag et al., 2017; Son et al., 2019; Rashid et al., 2020), which are not applicable in SSDs. 
These challenges make autonomous decision-making and few-shot adaptation more difficult in SSDs compared with zero-sum and pure-cooperative environments. 1 In this paper, we use “opponent” and “other agent” interchangeably to refer to agents that coexist with the focal agent in the same environment. According to cognitive psychology and related disciplines, humans’ ability to rapidly solve previously unseen problems depends on hierarchical cognitive mechanisms (Butz & Kutter, 2016; Kleiman-Weiner et al., 2016; Eppe et al., 2022). This hierarchical structure unifies high-level goal reasoning with low-level action planning. Meanwhile, researches on machine learning also emphasize the importance and effectiveness of hierarchical goal-directed planning for few-shot problem-solving (Eppe et al., 2022). Inspired by the hierarchical structure and theory of mind - the ability to understand others’ mental states (like goals and beliefs) from their actions (Baker et al., 2017), we propose an algorithm, named Planning with Theory of Mind (PToM), for tackling few-shot adaptation in SSDs. PToM consists of two modules: an opponent modeling module and a planning module. The opponent modeling module estimates opponents’ behavior by inferring their goals and learning their goal-conditioned policies. Based on the opponent’s behavior, the planning module generates the next action to take. To test PToM’s few-shot adaptation ability, we construct three typical SSD environments: sequential stag-hunt game (SSH), sequential snowdrift game (SS), and sequential prisoner’s dilemma (SPD). They are extensions of the three most representative paradigms of social dilemmas (Rousseau, 1999; Rapoport & Chammah, 1966; Rapoport et al., 1965; Santos et al., 2006), in terms of space, time, and number of participants. A detailed description of these environments is provided in Sec. 5.1. Experimental results illustrate that across all the three typical paradigms of SSDs, PToM exhibits superior few-shot adaptation ability compared with baselines, including the well-established MARL algorithms LOLA, social influence, A3C, and prosocial-A3C. Meanwhile, PToM exhibits expeditious convergence and achieves high rewards after convergence, showing its exceptional decision-making ability in SSDs. In addition, we observe self-organized cooperation and alliance of the disadvantaged emerging from the interaction between multiple PToM agents. 2 RELATED WORK MARL has explored multi-agent decision-making in SSDs. One approach is to add intrinsic rewards to incentivize collaboration and consideration of the impact on others, alongside maximizing extrinsic rewards. Notable examples include ToMAGA (Nguyen et al., 2020), MARL with inequity aversion (Hughes et al., 2018), and prosocial MARL (Peyakhovich & Lerner, 2018). However, many of these algorithms rely on hand-crafted intrinsic rewards and assume access to other agents’ rewards, which can make them exploitable by self-interested algorithms and less effective in realistic scenarios where others’ rewards are not visible (Komorita & Parks, 1995). To address these issues, Jaques et al. (2019) have included intrinsic social influence reward that use counterfactual reasoning to assess the effect of an agent’s actions on its opponents’ behavior. LOLA (Foerster et al., 2018) and its extension (such as POLA (Zhao et al., 2022), M-FOS (Lu et al., 2022)) consider the impact of one agent’s learning process, rather than treating them as a static part of the environment. 
However, LOLA requires knowledge of opponents’ network parameters, which may not be feasible in many scenarios. LOLA with opponent modeling relaxes this requirement, but scaling problems may arise in complex sequential environments that require long action sequences for rewards. Our work relates to opponent modeling (see (Albrecht & Stone, 2018) for a comprehensive review). I-POMDP (Gmytrasiewicz & Doshi, 2005) is a typical opponent modeling and planning framework, which maintains dynamic beliefs over the physical environment and beliefs over other agents’ beliefs. It maximizes a value function of the beliefs to determine the next action. However, the nested belief inference suffers from serious computational complexity problems, which makes it impractical in complex environments. Unlike I-POMDP and its approximation methods (Doshi & Perez, 2008; Doshi & Gmytrasiewicz, 2009; Hoang & Low, 2013; Han & Gmytrasiewicz, 2018, 2019; Zhang & Doshi, 2022), PToM explicitly uses beliefs over other agents’ goals and policies to learn a neural network model of other agents (MOA), an MCTS planner to compute next actions. PToM avoids nested belief inference and performs sequential decision-making more efficiently. Theory of mind (ToM), originally a concept of cognitive science and psychology (Baron-Cohen et al., 1985), has been transformed into computational models over the past decade and used to infer agents’ mental states such as goals and desires. Bayesian inference has been a popular technique used to make ToM computational (Baker et al., 2011; Track et al., 2018; Wu et al., 2021; Zhi-Xuan et al., 2022). With the rapid development of the neural network, some recent work has attempted to achieve ToM using neural networks (Rabinowitz et al., 2018; Shu & Tian, 2018; Wen et al., 2019). PToM gives a practical and effective framework to utilize ToM, and extend its application scenarios to SSDs, where both competition and cooperation are involved and the goals of opponents are private and volatile. Monte Carlo Tree Search (MCTS) is a widely adopted planning method for optimal decision-making. Recent work, such as AlphaZero (Silver et al., 2018) and MuZero (Schrittwieser et al., 2020), have used MCTS as a general policy improvement operator over the base policy learned by neural networks. However, MCTS is limited in multi-agent environments, where the joint action space grows rapidly with the number of agents (Choudhury et al., 2022). We avoid this problem by estimating opponent policies and planning only for the focal agent’s actions. 3 Problem Formulation We consider multi-agent hierarchical decision-making in SSDs, which can be described as a Markov game (Liu et al., 2022) with goals, specified by a tuple \(<N, S, A, T, R, \gamma, T_{\text{max}}, G>\). Here, agent \(i \in N = \{1, 2, \cdots, n\}\) chooses action from action space \(A_i = \{a_i\}\). \(A = A_1 \times A_2 \times \cdots \times A_n\) is the joint action space. The joint action \(a_{1:n} \in A\) will lead to a state transition based on the transition function \(T : S \times A \times S \rightarrow [0, 1]\). Specifically, after agents take the joint action \(a_{1:n}\) the state of the environment will transit from \(s\) to \(s'\) with probability \(T(s'|s, a_{1:n})\). The reward function \(R_i : S \times A \rightarrow \mathbb{R}\) denotes the immediate reward received by agent \(i\) after joint action \(a_{1:n}\) is taken on state \(s \in S\). The discount factor for future rewards is denoted as \(\gamma\). 
\(T_{\text{max}}\) is the maximum length of an episode. \(\pi_i : S \times A_i \rightarrow [0, 1]\) denotes agent \(i\)'s policy, specifying the probability \(\pi_i(a_i|s)\) that agent \(i\) chooses action \(a_i\) at state \(s\). The environments we study have a set of goals, denoted by \(G = G_1 \times G_2 \times \cdots \times G_n\), where \(G_i = \{g_i\}\) represents the set of goals for agent \(i\). For any two agents \(i\) and \(j\), \(j\)'s true goal is inaccessible to \(i\). However, \(i\) can infer \(j\)'s goal based on its action sequence. Specifically, \(i\) maintains a belief over \(j\)'s goals, \(b_{ij} : G_j \rightarrow [0, 1]\), which is a probability distribution over \(G_j\). Here, algorithms are evaluated in terms of self-play and few-shot adaptation to unseen policies in SSDs. Self-play involves multiple agents using the same algorithm to undergo training from scratch. The performance of algorithms in self-play is evaluated by their expected reward after convergence. Self-play performance demonstrates the algorithm’s ability to make autonomous decisions in complex and dynamic SSDs. Few-shot adaptation refers to the capability to recognize and respond appropriately to unknown policies within a limited number of episodes. The performance of algorithms in few-shot adaptation is measured by the rewards they achieve after engaging in these brief interactions. 4 Methodology In this section, we propose Planning with Theory of Mind (PToM), a novel algorithm for multi-agent decision-making in SSDs. PToM consists of two main modules: an opponent modeling module to infer opponents’ goals and predict their behavior and a planning module to plan the focal agent’s best response guided by the inferred information from the opponent modeling module. Based on the hypothesis in cognitive psychology that others’ behavior is goal-directed (Gergely et al., 1995; Buresh & Woodward, 2007), and that agents behave stably for a specific goal (Warren, 2006), the opponent modeling module models opponent behavior with two levels of hierarchy. At the high-level, the module employs ToM to infer opponents’ internal goals by analyzing their action sequences. Based on the inferred goals and the current state of the environment, the low-level component learns goal-conditioned policies to model the atomic actions of opponents. In the planning module, MCTS is used to plan for the best response of the focal agent based on the inferred opponents’ policies. To handle the uncertainty over opponents’ goals, we sample multiple opponent goal combinations from the current belief and return the action that maximizes the average return over the sampled configurations. Following AlphaZero (Silver et al., 2018) and MuZero (Schrittwieser et al., 2020), we maintain a policy and a value network to boost MCTS planning and in turn use the planned action and its value to update the neural network. Figure 1 gives an overview of PToM, and the pseudo-code of PToM is provided in Appendix A. Figure 1: Overview of PToM. PToM consists of an opponent modeling module and a planning module. The opponent modeling module models opponent behavior by inferring opponents’ goals and learning their goal-conditioned policies. Estimated opponent behavior is then fed to the planning module to select a rewarding action of the focal agent. 4.1 Opponent Modeling with Efficient Adaptation In goal-inference (as the light yellow component shown in Figure 1), PToM summarizes the opponents’ objectives based on the interaction history. 
However, it faces the challenge of the opponent’s goals potentially changing within episodes. To solve these issues, we propose two update procedures based on ToM: intra-ToM, which infers the opponent’s immediate goals within a single episode, and inter-ToM, which summarizes the opponent’s goals based on their historical episodes. Intra-ToM reasons about the goal of opponent \( j \) in the current episode \( K \) according to \( j \)'s past trajectory in episode \( K \). It ensures that PToM is able to quickly respond to in-episode behavior changes of other agents. Specifically, in episode \( K \), agent \( i \)'s belief about agent \( j \)'s goals at time \( t \), \( b_{ij}^{K,t}(g_j) \), is updated according to: \[ b_{ij}^{K,t+1}(g_j) = Pr(g_j | s^K_{0:t+1}, a^K_{j,0:t}) = \frac{Pr(g_j | s^K_{0:t}, a^K_{j,0:t-1}) Pr(a^K_{j,t} | s^K_{0:t}, a^K_{j,0:t-1}, g_j) Pr(s^K_{t+1} | s^K_{0:t}, a^K_{j,0:t}, g_j)}{Pr(s^K_{t+1} | s^K_{0:t}, a^K_{j,0:t})} = \frac{1}{Z_1} b_{ij}^{K,t}(g_j) Pr_i(a^K_{j,t} | s^K_{0:t}, g_j), \] (1) where \( Z_1 \) is the normalization factor that makes \( \sum_{g_j \in G_j} b_{ij}^{K,t+1}(g_j) = 1 \). The likelihood term \( Pr_i(a^K_{j,t} | s^K_{0:t}, g_j) \) is provided by the goal-conditioned opponent policies, whose detailed description is given in the following. However, intra-ToM may suffer from inaccuracy of the prior (i.e., \( b_{ij}^{K,0}(g_j) \)) when past trajectories are not long enough for updates. Inter-ToM makes up for this by calculating a precise prior based on past episodes. Belief update between two adjacent episodes is defined as: \[ b_{ij}^{K,0}(g_j) = \frac{1}{Z_2} [\alpha b_{ij}^{K-1,0}(g_j) + (1 - \alpha) 1(g_j^{K-1} = g_j)], \] (2) where \( \alpha \in [0, 1] \) is the horizon weight, which controls the importance of the history. As \( \alpha \) decreases, agents attach greater importance to recent episodes. \( 1(\cdot) \) is the indicator function. \( Z_2 \) is the normalization factor. The equation is equivalent to a time-discounted modification of the Monte Carlo estimate. Inter-ToM summarizes other agents’ goals according to all the previous episodes, which is of great help when playing with the same agents in a series of episodes. The goal-conditioned policy (as the light yellow component shown in Figure 1) \( \pi_\omega(a^K_{j,t} | s^K_{0:t}, g_j) \), which is obtained through a neural network \( \omega \). To train the network, a set of \( (s^K_{0:t}, a^K_{j,t}, g^K_{j,t}) \) is collected from episodes and sent to the replay buffer. \( \omega \) is updated at intervals to minimize the cross-entropy loss: \[ L(\omega) = \mathbb{E}[-\sum_{a \in A_j} 1(a^K_{j,t} = a) \log(\pi_\omega(a | s^K_{0:t}, g^K_{j,t}))]. \] (3) 4.2 Planning Under Uncertain Opponent Models Given the policies of other agents estimated by the opponent modeling module, we can leverage planning algorithms such as MCTS to compute an advantageous action. However, a key obstacle to applying MCTS is that opponent policies estimated by the opponent modeling module contain uncertainty over other agents’ goals. Naively adding such uncertainty as part of the environment would add a large bias to the simulation and degrade planning performance. To overcome this problem, we propose to sample opponents’ goal combinations according to the belief maintained by the opponent modeling module, and then estimate action value by MCTS based on the samples. 
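The belief referred to here is the one maintained by the opponent modeling module through Eqs. (1) and (2). A minimal sketch of how such a belief could be maintained and sampled from is given below; `policy_fn` stands in for the learned goal-conditioned policy $\pi_\omega$ of Eq. (3), and the class and method names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

class GoalBelief:
    """Belief of agent i over agent j's goals, maintained with the
    intra-ToM (Eq. (1)) and inter-ToM (Eq. (2)) updates."""

    def __init__(self, num_goals, alpha=0.9):
        self.prior = np.ones(num_goals) / num_goals   # b^{K,0}, inter-episode prior
        self.alpha = alpha                            # horizon weight
        self.b = self.prior.copy()                    # b^{K,t}, intra-episode belief

    def intra_update(self, state_hist, action, policy_fn):
        # Eq. (1): multiply by the likelihood of the observed action, renormalize.
        # policy_fn(state_hist, g) returns pi_omega(. | s_{0:t}, g) over j's actions.
        lik = np.array([policy_fn(state_hist, g)[action] for g in range(len(self.b))])
        self.b = self.b * lik
        self.b /= self.b.sum()

    def inter_update(self, inferred_goal):
        # Eq. (2): time-discounted Monte Carlo estimate of the goal prior.
        onehot = np.eye(len(self.prior))[inferred_goal]
        self.prior = self.alpha * self.prior + (1 - self.alpha) * onehot
        self.prior /= self.prior.sum()
        self.b = self.prior.copy()                    # reset b^{K+1,0} for the next episode

    def sample_goal(self, rng):
        # Used by the planning module to draw a goal for each MCTS round.
        return rng.choice(len(self.b), p=self.b)
```

The planning module then draws one goal per opponent with `sample_goal` for each of its $N_s$ MCTS rounds, as described next.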
To balance the trade-off between computational complexity and planning performance, we repeat the process multiple times and choose actions according to the average action value. In the following, we first introduce the necessary background on MCTS, and then describe how we plan for a rewarding action under the uncertainty over opponent policies.

**MCTS.** Monte Carlo Tree Search (MCTS) is a type of tree search that plans for the best action at each time step (Silver & Veness, 2010; Liu et al., 2020). MCTS uses the environment to construct a search tree (right side of Figure 1), where nodes correspond to states and edges refer to actions. Specifically, each edge transfers the environment from its parent state to its child state. MCTS expands the search tree in ways (such as pUCT) that properly balance exploration and exploitation. The value and visit count of every state-action (node-edge) pair are recorded during expansion (Silver et al., 2016). Finally, the action with the highest value (or highest visit count) at the root state (node) is returned and executed in the environment.

**Planning under uncertain opponent policies.** Based on beliefs over opponents' goals and their goal-conditioned policies from the opponent modeling module, we run MCTS for $N_s$ rounds. In each round, other agents' goals are sampled according to the focal agent's belief over opponents' goals $b_{ij}(g_j)$. Specifically, at time $t$ in episode $K$, we sample the goal combination $g_{-i} = \{g_j \sim b_{ij}^{K,t}(\cdot), j \neq i\}$. Then, at every state $\tilde{s}^k$ in the MCTS tree of this round, other agents' actions $\tilde{a}_{-i}$ are determined by $\tilde{a}_{-i} \sim \pi_\omega(\cdot|\tilde{s}^k,g_{-i})$ from the goal-conditioned policy. Each round of MCTS gives an estimated action value of the current state, $Q(s^{K,t},a,g_{-i}) = V(\tilde{s}^1(a))$ ($a \in A_i$), where $\tilde{s}^1(a)$ is the next state after taking $\tilde{a}_{-i}^0 \cup \{a\}$ from $\tilde{s}^0 = s^{K,t}$. We average the estimated action values from MCTS over all $N_s$ rounds:

$$Q_{avg}(s^{K,t},a) = \frac{1}{N_s}\sum_{m=1}^{N_s} Q_m(s^{K,t},a,g_{-i}),$$

where $Q_m$ denotes the estimate obtained in the $m$-th round. Agent $i$'s policy follows the Boltzmann rationality model (Baker et al., 2017):

$$\pi_{MCTS}(a|s^{K,t}) = \frac{\exp(\beta Q_{avg}(s^{K,t},a))}{\sum_{a' \in A_i} \exp(\beta Q_{avg}(s^{K,t},a'))},$$

where $\beta \in [0,\infty)$ is the rationality coefficient. As $\beta$ increases, the policy becomes more rational. We choose our action at time $t$ of episode $K$ based on $\pi_{MCTS}(a|s^{K,t})$.

Note that the effectiveness of MCTS is highly associated with the default policies and values provided to it. When they are close to the optimal ones, they offer an accurate estimate of state value, guiding the MCTS search in the right direction. Therefore, following Silver et al. (2018), we train a neural network $\theta$ to predict the policy and value functions at every state under the supervision provided by MCTS. Specifically, the policy target is the policy generated by MCTS, while the value target is the true discounted return of the state in this episode. For state $\tilde{s}^k$ in the MCTS tree, the policy function $\pi_\theta^k$ guides exploration through the pUCT rule, and the value function $v_\theta^k$ estimates the return and provides the initial value of $\tilde{s}^k$ when $\tilde{s}^k$ is first reached.
The network $\theta$ is updated based on the overall loss: $$L(\theta) = L_p(\pi_{MCTS},\pi_\theta) + L_v(r,v_\theta),$$ where $$L_p(\pi_1,\pi_2) = \mathbb{E}\left[-\sum_{a \in A_i} \pi_1(a|s^{K,t}) \log(\pi_2(a|s^{K,t}))\right],$$ $$L_v(r,v) = \mathbb{E}\left[(v(s^{K,t}) - \sum_{l=t}^{\infty} \gamma^{l-t} r_{i,l}^K)^2\right].$$ 5 EXPERIMENTS 5.1 EXPERIMENTAL SETUP Agents are tested in three representative paradigms of SSDs: sequential stag-hunt game (SSH), sequential snowdrift game (SS), and sequential prisoner’s dilemma (SPD) (see Appendix C). In **SSH**, four agents are rewarded for catching prey. As shown in Figure 2(a), each agent has six actions: idle, move left, move right, move up, move down, and hunt. If there are obstacles or boundaries in an agent’s moving direction, its position stays unchanged. Agents can hunt prey in their current grid, and there are two types of prey: stags and hares. A stag provides a reward of 10, and requires at least two agents located at its grid to execute “hunt” together. These cooperating agents will split the reward evenly. A hare provides a reward of 1, and each agent can catch a hare alone. After a successful hunting, both the hunters and the prey disappear from the environment. The game terminates when the time $T_{max} = 30$ runs out, or terminates 5 timesteps after the first successful hunting in each episode. The dilemma in SSH is a tension between maximizing benefit (i.e., hunting stags) and minimizing risk (i.e., hunting hares). The 5-timesteps termination rule ensures that the tension between payoff-dominant cooperation and risk-dominant defection is maintained. Without this rule, agents would have enough time to hunt hares if failing to hunt a stag, and the dilemma would be diluted. In **SS** (Figure 2(b)), there are six snowdrifts located randomly in an $8 \times 8$ grid. Similar to SSH, at every time step the agent can stay idle or move one step in any direction. Agents are additionally equipped with a “remove a snowdrift” action, which removes the snowdrift in the same cell as the agent. When a snowdrift is removed, removers share the cost of 4 evenly, and every agent gets a reward of 6. The game ends when all the snowdrifts are removed or the time $T_{max} = 50$ runs out. The game’s essential dilemma arises from the fact that an agent can obtain a higher reward by free-riding, i.e., waiting for other agents to remove the snowdrifts, than by removing a snowdrift themselves. However, if all agents take free rides, no one will remove any snowdrifts, and the group will not receive any reward. On the other hand, if any agent is satisfied with a suboptimal strategy and chooses to remove snowdrifts, both the group benefit and individual rewards increase. Finally, we investigate **SPD** (Figure 2(c)), inspired by the environment Cleanup from the Melting Pot benchmark (Leibo et al., 2021). In this $8 \times 8$ grid, there is a river in the top two rows and a forest with apples in the bottom two rows. Bags of waste are scattered throughout the river. Waste is produced in the river at a constant rate of 0.25, and the river becomes saturated with waste when it covers 40% of the river. Apples respawn at a rate of $1 - 2.5x$, where $x$ represents the percentage of waste in the river. Agents receive a reward of 10 for collecting an apple, and a reward of $-1$ for cleanup a bag of waste. The game terminates after $T_{max} = 100$ timesteps. 
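To make the SSH hunting rules concrete, the reward resolution for a single timestep in which agents choose the "hunt" action might look as follows. This is a hypothetical sketch of the environment logic described above, not the authors' implementation; in particular, splitting a hare's reward among simultaneous hunters is an assumption, since the text only specifies the single-hunter case.

```python
def resolve_hunts(hunters_by_cell, prey_by_cell):
    """Resolve simultaneous 'hunt' actions in SSH for one timestep.
    `hunters_by_cell` maps a grid cell to the agents hunting there;
    `prey_by_cell` maps a cell to 'stag' or 'hare'. Returns per-agent rewards,
    the agents that leave the environment, and the cells whose prey was caught."""
    rewards, removed_agents, caught_cells = {}, set(), set()
    for cell, hunters in hunters_by_cell.items():
        prey = prey_by_cell.get(cell)
        if prey == "stag" and len(hunters) >= 2:
            # A stag needs at least two hunters; its reward of 10 is split evenly.
            for agent in hunters:
                rewards[agent] = 10.0 / len(hunters)
            removed_agents.update(hunters)
            caught_cells.add(cell)
        elif prey == "hare":
            # A hare is worth 1 and can be caught by a single agent
            # (even split among simultaneous hunters is an assumption).
            for agent in hunters:
                rewards[agent] = 1.0 / len(hunters)
            removed_agents.update(hunters)
            caught_cells.add(cell)
    return rewards, removed_agents, caught_cells
```

The episode-level rule that the game ends 5 timesteps after the first successful hunt, or when $T_{max} = 30$ is reached, would be enforced by the surrounding environment loop.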
At the beginning of each episode, the river is saturated with waste and no apple is present, so agents must consistently clean up waste to ensure the growth rate of the apple population. However, cleaning up waste hinders agents to collect apples since they are located far away in the environment. Agents receive less reward for cleaning up waste, regardless of what their opponents do, but no one receives a reward if no agents clean up waste, which is the central dilemma of SPD. In all three environments, four agents have no access to each other’s parameters, and communication between them is not allowed. Appendix D introduces the goal definition of these games. **Baselines.** Here, some baseline algorithms are introduced to evaluate the performance of PToM. During the evaluation of few-shot adaptation, baseline algorithms serve a dual purpose. Firstly, they act as unfamiliar opponents during the evaluation process to test the few-shot adaptation ability of PToM. Secondly, we evaluate the few-shot adaptation ability of the baseline algorithms to demonstrate PToM’s superiority. LOLA (Foerster et al., 2018; Zhao et al., 2022) agents consider a 1-step look-ahead update of opponents, and update their own policies according to the updated policies of opponents. SI (Jaques et al., 2019) agents have an intrinsic reward term that incentivizes actions maximizing their influence on opponents’ actions. The influence is accessed by counterfactual reasoning. A3C (Mnih et al., 2016) agents are trained using the Asynchronous Advantage Actor-Critic method, a well-established reinforcement learning (RL) technique. Prosocial-A3C (PS-A3C) (Peysakhovich & Lerer, 2018) agents are trained using A3C but share rewards between players during training, so they optimize the per-capita reward instead of the individual reward, emphasizing cooperation between players. The ablated version of PToM, direct-OM, retains the planning module, removes the opponent modeling module, and uses neural networks to model opponents directly (see details in Appendix E.3). In addition, we construct some rule-based strategies that are extreme strategies specific to the game. Random policy takes a valid action randomly at each step. An agent that consistently adopts cooperative behavior is called cooperator, and an agent that consistently adopts exploitative behavior is called exploiter. In SSH, the goals of cooperators and exploiters are hunting the nearest stag and hare, respectively. In SS, cooperators keep moving to remove the nearest snowdrift, and exploiters randomly take actions other than “remove a snowdrift”. In SPD, cooperators always move to clean the nearest waste, and exploiters move to collect apples if they exist. 5.2 Performance The experiment consists of two phases. The first phase focuses on self-play training, where agents using the same algorithm are trained until convergence. Self-play ability is measured by the algorithm’s average reward after convergence. The second phase evaluates the few-shot adaptation ability of PToM and learning baselines. Specifically, a focal agent interacts with three opponents using a different algorithm for 2400 steps. The focal agent’s average reward during the final 600 steps is used to measure its algorithm’s few-shot adaptation ability. At the start of the adaptation phase, any policy’s parameters are the convergent parameters derived from the corresponding algorithms in self-play. During the phase, policies can update their parameters if possible. 
Implementation details are given in Appendix E. The results of self-play and that of few-shot adaptation are displayed in Table 1 and Table 2, respectively. | | PToM | LOLA | SI | A3C | PS-A3C | direct-OM | |-------|----------|----------|----------|----------|----------|-----------| | SSH | **0.9767 ± 0.0117** | 0.9038 ± 0.0117 | 0.9125 ± 0.0233 | 0.9708 ± 0.0087 | 0.7347 ± 0.0029 | 0.9417 ± 0.0146 | | SS | **0.9900 ± 0.0047** | 0.6200 ± 0.0070 | 0.7133 ± 0.0060 | 0.6933 ± 0.0113 | 0.9500 ± 0.0093 | 0.7933 ± 0.0080 | | SPD | 0.0181 ± 0.0012 | 0.0064 ± 0.0008 | 0.0064 ± 0.0005 | 0.0000 ± 0.0000 | **0.4333 ± 0.0031** | 0.0163 ± 0.0007 | SSH. As demonstrated in Table 1, PToM and A3C perform comparably in self-play, close to the best possible reward. They both learn effective strategies that prioritize hunting stags. LOLA and SI agents have worse self-play performance than PToM and A3C. PS-A3C agents obtain the lowest reward. PS-A3C tends to delay hunting, as early hunting leads to leaving the environment and failing to obtain the group reward from subsequent hunting. Additionally, PS-A3C does not effectively learn the relationship between hunting and receiving rewards, since they can get rewards without hunting by itself. These reasons lead to PS-A3C may take suboptimal actions in the last few steps and thus fail to hunt. PToM gains considerable returns when adapting to all other types of opponents (see Table 2(a)). Although LOLA is not as good as A3C in self-play, both have their own advantages in terms of adaptation. SI performs significantly worse than LOLA on the adaptation test, although they perform similarly in self-play. Direct-OM consistently underperforms compared with PToM across all adaptation scenarios, with some instances revealing notable disadvantages. PS-A3C, as a result of the aforementioned reasons, has fewer successful hunts, leading to inferior performance. We would like to provide further intuition on why PToM is capable of efficiently adapting its policy to unseen agents. Take the experiment facing three exploiters (always attempting to hunt the nearest hare) as an example. There are two goals here: hunting stags or hunting hares. At the start of the Table 2: Few-shot adaptation performance of PToM and baselines in (a) SSH, (b) SS, and (c) SPD. The interaction happens between 1 agent using the row policy and 3 other agents using the column policy. Shown are the min-max normalized scores, with normalization bounds set by the rewards of LI-Ref and the random policy. See detailed description of LI-Ref and corresponding analysis in Appendix F.1. The results are depicted for the row policy from 1800 to 2400 step. 
(a) Performance in SSH

| learning opponents | rule-based opponents |
|-------------------|----------------------|
| PToM | LOLA |
| - | 0.97 ± 0.02 |
| LOLA | SI |
| 0.98 ± 0.02 | 0.96 ± 0.03 |
| SI | A3C |
| 0.89 ± 0.02 | 0.99 ± 0.02 |
| A3C | PS-A3C |
| 0.96 ± 0.02 | 0.88 ± 0.02 |
| PS-A3C | random |
| 0.32 ± 0.02 | 0.78 ± 0.07 |
| direct-OM | cooperator |
| 0.86 ± 0.01 | 1.00 ± 0.01 |
| | exploiter |
| | 0.36 ± 0.03 |

(b) Performance in SS

| learning opponents | rule-based opponents |
|-------------------|----------------------|
| PToM | LOLA |
| - | 0.72 ± 0.05 |
| LOLA | SI |
| -0.50 ± 0.10 | 0.55 ± 0.30 |
| SI | A3C |
| -0.77 ± 0.14 | 0.39 ± 0.09 |
| A3C | PS-A3C |
| -0.74 ± 0.15 | -0.56 ± 0.39 |
| PS-A3C | random |
| -1.12 ± 0.11 | 0.36 ± 0.03 |
| direct-OM | cooperator |
| -0.61 ± 0.17 | -1.75 ± 0.25 |
| | exploiter |
| | 0.35 ± 0.01 |

(c) Performance in SPD

| learning opponents | rule-based opponents |
|-------------------|----------------------|
| PToM | LOLA |
| - | 1.40 ± 0.23 |
| LOLA | SI |
| 0.75 ± 0.14 | 1.45 ± 0.23 |
| SI | A3C |
| 1.00 ± 0.11 | 1.28 ± 0.18 |
| A3C | PS-A3C |
| 0.75 ± 0.09 | 1.19 ± 0.02 |
| PS-A3C | random |
| -4.32 ± 0.12 | 0.27 ± 0.03 |
| direct-OM | cooperator |
| 1.51 ± 0.16 | 0.85 ± 0.01 |
| | exploiter |
| | 0.84 ± 0.03 |

At the start of the evaluation phase, PToM holds the belief that every opponent is more likely to hunt a stag, because PToM has seen its opponents hunt stags more often than hares during self-play. This false belief about exploiters degrades PToM's performance. Both intra-ToM and inter-ToM correct the false belief by updating it during the interactions with exploiters (see the visualization of belief updates in Figure 4 in Appendix E.2). Intra-ToM provides the ability to correct the belief of hunting stags within an episode. Specifically, as an opponent keeps moving closer to a hare, intra-ToM will update the intra-episode belief for the opponent toward the goal "hare", leading to accurate opponent models. Taking these accurate opponent policies as input, the planning module can output advantageous actions. Inter-ToM further accelerates the convergence towards the true belief by updating the inter-episode belief, which is used as a prior for intra-ToM at the start of every episode.

SS. As shown in Table 1, during self-play, PToM achieves the highest reward, close to the theoretically optimal average reward in this environment (i.e., when all snowdrifts are removed, resulting in a group average reward of 30.0). This outcome is a remarkable achievement in a fully decentralized learning setting and highlights the high propensity of PToM to cooperate. In contrast, LOLA, SI, and A3C prioritize maximizing their individual profits, which leads to inferior outcomes due to their failure to coordinate and cooperate effectively. PS-A3C performs exceptionally well in self-play, ranking second only to PToM. As in SSH, it fails to achieve the maximum group average reward due to the coordination problem, which is prominent when there is only one snowdrift left. This issue highlights the instability of the strategy caused by the absence of action planning.

PToM demonstrates the most effective few-shot adaptation performance (Table 2(b)). Specifically, when adapting to three exploiters, PToM receives substantially higher rewards than other policies. This highlights the effectiveness of PToM in quickly adapting to non-cooperative behavior, which differs entirely from the opponent behavior encountered during PToM's self-play. In contrast, A3C and PS-A3C do not explicitly consider opponents.
They have learned strategies that tend to exploit and to cooperate, respectively. Therefore, A3C performs effectively against agents that have a higher tendency to cooperate, such as PToM and the cooperator. However, its performance is relatively poor when facing agents unlikely to cooperate. PS-A3C exhibits the opposite behavior. Direct-OM only performs well when facing cooperators, and performs poorly when facing relatively exploitative agents such as LOLA, SI, and A3C.

**SPD.** In the scenario of decentralized training with no communication, a group of agents that optimize for their own returns can easily fall into the Nash equilibrium where individuals never clean up the waste and always try to pick apples. During self-play, PToM, along with the other self-interested baselines (LOLA, SI, and A3C), converges to this equilibrium, which is attributed to the inherent characteristics of the prisoner's dilemma game (Table 1). PS-A3C agents gain high returns and escape the undesirable equilibrium to a certain extent, as they aim to maximize the collective benefit.

The adaptation results among PToM, LOLA, SI, A3C, and direct-OM underscore that self-interested agents often sink into the undesirable Nash equilibrium in SPD (Table 2(c)). PToM obtains less reward than the other self-interested algorithms when playing with rule-based cooperators. When faced with a new opponent, PToM tends to engage in exploratory cooperative actions to understand the opponent's characteristics. This leads to relatively lower returns for PToM. When facing an agent exhibiting dynamic behavior, such as PS-A3C, it becomes imperative for the focal agent to plan further ahead. In such scenarios, some apples are available, and the focal agent needs to contend with opponents for them. It is important to choose which apples to pick and to plan a path for collecting them. The planning module within PToM empowers the agent to navigate and optimize its path and thus secures a competitive advantage. PS-A3C aims to maximize the collective average reward. Thus, it is vulnerable to exploitation by other agents, leading to low returns when playing with self-interested opponents in SPD.

Overall, this study demonstrates the remarkable adaptation ability of PToM across three distinct social dilemma paradigms. While the advantages of PToM may not be significant in specific test scenarios against particular opponents, its overall performance consistently surpasses the baselines. Meanwhile, PToM exhibits advantages during self-play. The ablation study indicates that inter-ToM and intra-ToM play crucial roles in adapting to agents with fixed goals and agents with dynamic goals, respectively. Moreover, if opponent modeling is not conditioned on goals, the self-play and few-shot adaptation abilities are greatly weakened. Further details are provided in Appendix F.3. We observe the emergence of social intelligence, including self-organized cooperation and an alliance of the disadvantaged, during the interaction of multiple PToM agents in SSDs. Further details can be found in Appendix G.

### 6 Conclusion and Discussion

We propose Planning with Theory of Mind (PToM), a hierarchical algorithm for few-shot adaptation to unseen opponents in SSDs. It consists of an opponent modeling module for inferring opponents' goals and behavior and a planning module guided by the inferred information to output the focal agent's best response.
Empirical results in three typical SSD paradigms (SSH, SS, and SPD) show that PToM performs better than state-of-the-art MARL algorithms, in terms of dealing with complex SSDs in the self-play setting and few-shot adaptation to previously unseen opponents. Whilst PToM exhibits superior abilities, there are several limitations illumining our future work. First, in any environment, a clear definition of goals is needed for PToM. To enhance PToM’s ability to generalize to various environments, a technique that can autonomously abstract goal sets in various scenarios is needed. Second, we investigate complex SSDs with the expectation that PToM can facilitate effective decision-making and adaptation in human society. Despite selecting diverse well-established algorithms as opponents, none of them adequately model human behavior. It would be interesting to explore how PToM can perform in a few-shot adaptation scenario involving human participants. As PToM is self-interested, it may not always make decisions that are in the best interest of humans. One way to mitigate this risk is leveraging PToM’s ability to infer and optimize for human values and preferences during interactions, thereby assisting humans in complex environments. REFERENCES Stefano V Albrecht and Peter Stone. Autonomous agents modelling other agents: A comprehensive survey and open problems. *Artificial Intelligence*, 258:66–95, 2018. Chris Baker, Rebecca Saxe, and Joshua Tenenbaum. Bayesian theory of mind: Modeling joint belief-desire attribution. In *Proceedings of the annual meeting of the cognitive science society*, volume 33, 2011. Chris L Baker, Julian Jara-Ettinger, Rebecca Saxe, and Joshua B Tenenbaum. Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. *Nature Human Behaviour*, 1(4):1–10, 2017. David Balduzzi, Marta Garnelo, Yoram Bachrach, Wojciech Czarnecki, Julien Perolat, Max Jaderberg, and Thore Graepel. Open-ended learning in symmetric zero-sum games. In *International Conference on Machine Learning*, pp. 434–443. PMLR, 2019. Simon Baron-Cohen, Alan M Leslie, and Uta Frith. Does the autistic child have a “theory of mind”? *Cognition*, 21(1):37–46, 1985. Samuel Barrett, Peter Stone, and Sarit Kraus. Empirical evaluation of ad hoc teamwork in the pursuit domain. In *The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2*, pp. 567–574, 2011. Daan Bloembergen, Steven De Jong, and Karl Tuyls. Lenient learning in a multiplayer stag hunt. In *Proceedings of 23rd Benelux Conference on Artificial Intelligence (BNAIC 2011)*, pp. 44–50, 2011. Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. *IEEE Transactions on Computational Intelligence and AI in games*, 4(1):1–43, 2012. Jennifer Sootsman Buresh and Amanda L Woodward. Infants track action goals within and across agents. *Cognition*, 104(2):287–314, 2007. Martin V Butz and Esther F Kutter. *How the mind comes into being: Introducing cognitive science from a functional and computational perspective*. Oxford University Press, 2016. Shushman Choudhury, Jayesh K Gupta, Peter Morales, and Mykel J Kochenderfer. Scalable online planning for multi-agent mdps. *Journal of Artificial Intelligence Research*, 73:821–846, 2022. Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R McKee, Joel Z Leibo, Kate Larson, and Thore Graepel. 
Open problems in cooperative ai. *arXiv preprint arXiv:2012.08630*, 2020. Prashant Doshi and Piotr J Gmytrasiewicz. Monte carlo sampling methods for approximating interactive pomdps. *Journal of Artificial Intelligence Research*, 34:297–337, 2009. Prashant Doshi and Dennis Perez. Generalized point based value iteration for interactive pomdps. In *AAAI*, pp. 63–68, 2008. Manfred Eppé, Christian Gumbsch, Matthias Kerzel, Phuong DH Nguyen, Martin V Butz, and Stefan Wermter. Intelligent problem-solving as integrated hierarchical reinforcement learning. *Nature Machine Intelligence*, 4(1):11–20, 2022. Jakob Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. Learning with opponent-learning awareness. In *Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems*, pp. 122–130, 2018. György Gergely, Zoltán Nádasdy, Gergely Csibra, and Szilvia Bíró. Taking the intentional stance at 12 months of age. *Cognition*, 56(2):165–193, 1995. Piotr J Gmytrasiewicz and Prashant Doshi. A framework for sequential planning in multi-agent settings. *Journal of Artificial Intelligence Research*, 24:49–79, 2005.
WSzRdcOkEx
The definition of robustness in Eq. (1) and Eq.(4) seems to be confusing and possibly wrong. In Eq. (1), $\Delta_{\min}$ is defined as the minimal perturbation of a sample-label pair causing the change of the top-1 class prediction. Then, I understand $g(x)$ is a lower bound of $\Delta_{\min}$. However, in Eq. (3), $g(x)$ is defined as the gap between two probabilities. Therefore, I am confused about how the gap between two probabilities measures the minimal perturbation.
GREAT Score: Global Robustness Evaluation of Adversarial Perturbation Using Generative Models

Anonymous authors
Paper under double-blind review

Abstract

Current studies on adversarial robustness mainly focus on aggregating local robustness results from a set of data samples to evaluate and rank different models. However, the local statistics may not well represent the true global robustness of the underlying unknown data distribution. To address this challenge, this paper makes the first attempt to present a new framework, called GREAT Score, for global robustness evaluation of adversarial perturbation using generative models. Formally, GREAT Score carries the physical meaning of a global statistic capturing a mean certified attack-proof perturbation level over all samples drawn from a generative model. For finite-sample evaluation, we also derive a probabilistic guarantee on the sample complexity and the difference between the sample mean and the true mean. GREAT Score has several advantages: (1) Robustness evaluations using GREAT Score are efficient and scalable to large models, by sparing the need to run adversarial attacks. In particular, we show high correlation and significantly reduced computation cost of GREAT Score when compared to the attack-based model ranking on RobustBench (Croce et al., 2021). (2) The use of generative models facilitates the approximation of the unknown data distribution. In our ablation study with different generative adversarial networks (GANs), we observe consistency between global robustness evaluation and the quality of GANs. (3) GREAT Score can be used for remote auditing of privacy-sensitive black-box models, as demonstrated by our robustness evaluation on several online facial recognition services.

1 Introduction

Adversarial robustness is the study of model performance in the worst-case scenario, which is a key element in trustworthy machine learning. Without further remediation, state-of-the-art machine learning models, especially neural networks, are known to be overly sensitive to small human-imperceptible perturbations to data inputs (Goodfellow et al., 2014b). Such oversensitivity could be exploited by bad actors to craft adversarial perturbations leading to prediction-evasive adversarial examples. Given a threat model specifying the knowledge of the target machine learning model (e.g., white-box or black-box model access) and the setting of plausible adversarial interventions (e.g., norm-bounded input perturbations), the methodology for adversarial robustness evaluation can be divided into two categories: attack-dependent and attack-independent. Attack-dependent approaches aim to devise the strongest possible attack and use it for performance assessment. A typical example is Auto-Attack (Croce & Hein, 2020), a state-of-the-art attack based on an ensemble of advanced white-box and black-box adversarial perturbation methods. On the other hand, attack-independent approaches aim to develop a certified or estimated score for adversarial robustness, reflecting a quantifiable level of attack-proof certificate. Typical examples include neural network verification techniques (Wong & Kolter, 2018; Zhang et al., 2018), certified defenses such as randomized smoothing (Cohen et al., 2019), and local Lipschitz constant estimation (Weng et al., 2018). Despite a plethora of adversarial robustness evaluation methods, current studies primarily focus on aggregating local robustness results from a set of data samples.
However, the sampling process of these test samples could be biased and unrepresentative of the true global robustness of the underlying data distribution, resulting in the risk of incorrect or biased robustness benchmarks. For instance, we find that when assessing the ranking of ImageNet models through RobustBench (Croce et al., 2020), using AutoAttack (Croce & Hein, 2020) with 10,000 randomly selected samples (the default choice) over 100 independent trials results in an unstable ranking coefficient of $0.907 \pm 0.0256$ when compared to that obtained on the entire 50,000 test samples. This outcome affirms that AutoAttack's model ranking has notable variations with an undersampled or underrepresented test dataset. An ideal situation is when the data distribution is transparent and one can draw an unlimited number of samples from the true distribution for reliable robustness evaluation. But in reality, the data distribution is unknown and difficult to characterize.

In addition to lacking rigorous global robustness evaluation, many attack-independent methods are limited to the white-box setting, requiring detailed knowledge about the target model (e.g., model parameters and architecture) such as input gradients and internal data representations for robustness evaluation. Moreover, state-of-the-art attack-dependent and attack-independent methods often face the issue of scalability to large models and data volumes due to excessive complexity, such as the computational costs in iterative gradient computation and layer-wise interval bound propagation and relaxation (Katz et al., 2017; Gowal et al., 2019).

To address the aforementioned challenges, including (i) lack of proper global adversarial robustness evaluation, (ii) limitation to white-box settings, and (iii) computational inefficiency, in this paper we present a novel attack-independent evaluation framework called GREAT Score, which is short for global robustness evaluation of adversarial perturbation using generative models. We tackle challenge (i) by using a generative model such as a generative adversarial network (GAN) (Goodfellow et al., 2014a; 2020) or a diffusion model (Ho et al., 2020) as a proxy of the true unknown data distribution. Formally, GREAT Score is defined as the mean of a certified lower bound on minimal adversarial perturbation over the data sampling distribution of a generative model, which represents the global distribution-wise adversarial robustness with respect to the generative model in use. It entails a global statistic capturing the mean certified attack-proof perturbation level over all samples from a generative model. For finite-sample evaluation, we also derive a probabilistic guarantee quantifying the sample complexity and the difference between the sample mean and the true mean. For challenge (ii), our derivation of GREAT Score leads to a neat closed-form solution that only requires data forward-passing and accessing the model outputs, which applies to any black-box classifier that outputs class prediction confidence scores. Moreover, as a byproduct of using generative models, our adversarial robustness evaluation procedure is executed with only synthetically generated data instead of real data, which is particularly appealing for privacy-aware robustness assessment schemes, e.g., remote robustness evaluation or auditing by a third party with restricted access to data and model. We will present how GREAT Score can be used to assess the robustness of online black-box facial recognition models.
Finally, for challenge (iii), GREAT Score is applicable to any off-the-shelf generative model, so we do not take the training cost of generative models into consideration. Furthermore, the computation of GREAT Score is lightweight because it scales linearly with the number of data samples used for evaluation, and each data sample only requires one forward pass through the model to obtain the final predictions. We highlight our main contributions as follows:

- We present GREAT Score as a novel framework for deriving a global statistic representative of the distribution-wise robustness to adversarial perturbation, based on an off-the-shelf generative model for approximating the data generation process.
- Theoretically, we show that GREAT Score corresponds to a mean certified attack-proof level of $\ell_2$-norm bounded input perturbation over the sampling distribution of a generative model (Theorem 1). We further develop a formal probabilistic guarantee on the quality of using the sample mean as GREAT Score with a finite number of samples from generative models (Theorem 2).
- We evaluate the effectiveness of GREAT Score on all neural network models on RobustBench (Croce et al., 2020) (the largest adversarial robustness benchmark), with a total of 17 models on CIFAR-10 and 5 models on ImageNet. We show that the model ranking of GREAT Score is highly aligned with the original ranking on RobustBench using AutoAttack (Croce & Hein, 2020), while GREAT Score significantly reduces the computation time. Specifically, on CIFAR-10 the computation complexity can be reduced by up to 2,000 times. The results suggest that GREAT Score is a competitive and computationally efficient alternative for adversarial robustness evaluation. As a demonstration of GREAT Score's capability for remote robustness evaluation of access-limited systems, we show how GREAT Score can audit several online black-box facial recognition APIs.

2 Background and Related Works

Adversarial Attack and Defense. Adversarial attacks aim to generate examples that can evade classifier predictions in classification tasks. In principle, adversarial examples can be crafted by small perturbations to a native data sample, where the level of perturbation is measured by different $L_p$ norms (Szegedy et al., 2014; Carlini & Wagner, 2017; Chen et al., 2018). The procedure of finding an adversarial perturbation within a given perturbation level is often formulated as a constrained optimization problem, which can be solved by algorithms such as projected gradient descent (PGD) (Madry et al., 2018). The state-of-the-art adversarial attack is Auto-Attack (Croce & Hein, 2020), which uses an ensemble of white-box and black-box attacks. There are many methods (defenses) to improve adversarial robustness. A popular approach is adversarial training (Madry et al., 2018), which generates adversarial perturbations during model training for improved robustness. One common evaluation metric for adversarial robustness is robust accuracy, which is defined as the accuracy of correct classification under adversarial attacks, evaluated on a set of data samples. RobustBench (Croce & Hein, 2020) is the largest-scale standardized benchmark that ranks models using robust accuracy against Auto-Attack on test sets from image classification datasets such as CIFAR-10.
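For illustration, a minimal $L_2$-norm PGD attack of the kind referenced above is sketched below; this is a generic textbook implementation (assuming image batches in NCHW format and inputs scaled to $[0, 1]$), not the Auto-Attack ensemble used by RobustBench.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps=0.5, alpha=0.1, steps=10):
    # Minimal L2-norm PGD (Madry et al., 2018): take gradient-ascent steps on
    # the cross-entropy loss and project back onto the L2 ball of radius eps.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step along the L2-normalized gradient direction.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = x_adv.detach() + alpha * grad / g_norm
        # Project the perturbation back onto the L2 ball centered at x.
        delta = x_adv - x
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        x_adv = torch.clamp(x + delta * torch.clamp(eps / d_norm, max=1.0), 0.0, 1.0)
    return x_adv.detach()
```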
Generative Models. Statistically speaking, let $X$ denote the observable variable and $Y$ the corresponding label; the learning objective of a generative model is to model the conditional probability distribution $P(X | Y)$. Among all generative models, GANs have gained a lot of attention in recent years due to their capability to generate realistic high-quality images (Goodfellow et al., 2020). The principle of training GANs is based on the formulation of a two-player zero-sum min-max game between a generator $G$ and a discriminator $D$ to learn the high-dimensional data distribution. Eventually, the two players reach a Nash equilibrium at which $D$ is unable to further discriminate real data from generated samples. This adversarial learning methodology aids in obtaining high-quality generative models. In practice, the generator $G(\cdot)$ takes a random vector $z$ (i.e., a latent code) as input, which is generated from a zero-mean isotropic Gaussian distribution denoted as $z \sim \mathcal{N}(0, I)$, where $I$ denotes the identity matrix. Conditional GANs refer to GANs with a conditional generator $G(\cdot | Y)$ given a class label $Y$. In addition to GANs, diffusion models (DMs) are also gaining popularity. DMs consist of two stages: the forward diffusion process and the reverse diffusion process. In the forward process, the input data is gradually perturbed by Gaussian noise and eventually becomes an isotropic Gaussian distribution. In the reverse process, DMs invert the forward process and implement a sampling procedure that starts from Gaussian noise and reconstructs the true samples by solving a stochastic differential equation. In our proposed framework, we use off-the-shelf (conditional) GANs and DMs (e.g., DDPM (Ho et al., 2020)) that are publicly available as our generative models.

Formal Local Robustness Guarantee and Estimation. Given a data sample $x$, a formal local robustness guarantee refers to a certified range on its perturbation level within which the top-1 class prediction of a model will remain unchanged (Hein & Andriushchenko, 2017). For $L_p$-norm ($p \geq 1$) bounded perturbations centered at $x$, such a guarantee is often called a certified radius $r$: any perturbation $\delta$ to $x$ within this radius (i.e., $||\delta||_p \leq r$) will have the same top-1 class prediction as $x$. Therefore, the model is said to be provably locally robust (i.e., attack-proof) to any perturbations within the certified radius $r$. By definition, the certified radius of $x$ is also a lower bound on the minimal perturbation required to flip the model prediction. Among the related works on attack-independent local robustness evaluation, the CLEVER framework proposed in (Weng et al., 2018) is the closest to our study. The authors in (Weng et al., 2018) derived a closed form of the certified local radius involving the maximum local Lipschitz constant of the model output with respect to the data input around a neighborhood of a data sample $x$. They then proposed to use extreme value theory to estimate such a constant and use it to obtain a local robustness score, which is not a certified local radius. Our proposed GREAT Score has major differences from (Weng et al., 2018): our focus is on global robustness evaluation, and GREAT Score is the mean of a certified radius over the sampling distribution of a generative model. In addition, for every generated sample, our local estimate gives a certified radius.

Notations. All the main notations used in the paper are summarized in Appendix 6.1.
3 GREAT SCORE: METHODOLOGY AND ALGORITHMS 3.1 TRUE GLOBAL ROBUSTNESS AND CERTIFIED ESTIMATE Let \( f = [f_1, \ldots, f_K] : \mathbb{R}^d \to \mathbb{R}^K \) denote a fixed \( K \)-way classifier with flattened data input of dimension \( d \), \((x, y)\) denote a pair of data sample \( x \) and its corresponding groundtruth label \( y \in \{1, \ldots, K\} \), \( P \) denote the true data distribution which in practice is unknown, and \( \Delta_{\min}(x) \) denote the minimal perturbation of a sample-label pair \((x, y) \sim P\) causing the change of the top-1 class prediction such that \( \arg\max_{k \in \{1, \ldots, K\}} f_k(x + \Delta_{\min}(x)) \neq \arg\max_{k \in \{1, \ldots, K\}} f_k(x) \). Note that if the model \( f \) makes an incorrect prediction on \( x \), i.e., \( y \neq \arg\max_{k \in \{1, \ldots, K\}} f_k(x) \), then we define \( \Delta_{\min}(x) = 0 \). This means the model is originally subject to prediction evasion on \( x \) even without any perturbation. A higher \( \Delta_{\min}(x) \) means better local robustness of \( f \) on \( x \). The following statement defines the true global robustness of a classifier \( f \) based on the probability density function \( p(\cdot) \) of the underlying data distribution \( P \). **Definition 1 (True global robustness w.r.t. \( P \)).** The true global robustness of a classifier \( f \) with respect to a data distribution \( P \) is defined as: \[ \Omega(f) = \mathbb{E}_{x \sim P}[\Delta_{\min}(x)] = \int_{x \sim P} \Delta_{\min}(x)p(x)dx \] (1) Unless the probability density function of \( P \) and every local minimal perturbation are known, the exact value of the true global robustness cannot be computed. An alternative is to estimate such a quantity. Extending Definition 1, let \( g(x) \) be a local robustness statistic. Then the corresponding global robustness estimate is defined as \[ \hat{\Omega}(f) = \mathbb{E}_{x \sim P}[g(x)] = \int_{x \sim P} g(x)p(x)dx \] (2) Furthermore, if one can prove that \( g(x) \) is a valid lower bound on \( \Delta_{\min}(x) \) such that \( g(x) \leq \Delta_{\min}(x), \forall x \), then the estimate \( \hat{\Omega}(f) \) is said to be a certified lower bound on the true global robustness with respect to \( P \), and larger \( \hat{\Omega}(f) \) will imply better true global robustness. In what follows, we will formally introduce our proposed GREAT Score and show that it is a certified estimate of the lower bound on the true robustness with respect to the data-generating distribution learned by a generative model. 3.2 USING GMs TO EVALUATE GLOBAL ROBUSTNESS Recall that a generative model (GM) takes a random vector \( z \sim \mathcal{N}(0, I) \) sampled from a zero-mean isotropic Gaussian distribution as input to generate a data sample \( G(z) \). In what follows, we present our first main theorem that establishes a certified lower bound \( \hat{\Omega}(f) \) on the true global robustness of a classifier \( f \) measured by the data distribution given by \( G(\cdot) \). Without loss of generality, we assume that all data inputs are confined in the scaled data range \([0, 1]^d\), where \( d \) is the size of any flattened data input. The \( K \)-way classifier \( f : [0, 1]^d \mapsto \mathbb{R}^K \) takes a data sample \( x \) as input and outputs a \( K \)-dimensional vector \( f(x) = [f_1(x), \ldots, f_K(x)] \) indicating the likelihood of its prediction on \( x \) over \( K \) classes, where the top-1 class prediction is defined as \( \hat{y} = \arg\max_{k \in \{1, \ldots, K\}} f_k(x) \). 
We further denote \( c \) as the groundtruth class of \( x \). Therefore, if \( \hat{y} \neq c \), the classifier is said to make a wrong top-1 prediction. When considering the adversarial robustness of a wrongly classified sample \( x \), we define the minimal perturbation for altering the model prediction as \( \Delta_{\min}(x) = 0 \). The intuition is that an attacker does not need to take any action to make the sample \( x \) evade the correct prediction by \( f \), and therefore the required minimal adversarial perturbation level is 0 (i.e., zero robustness). Given a generated data sample \( G(z) \), we now formally define a local robustness score function as
\[
g(G(z)) = \sqrt{\frac{\pi}{2}} \cdot \max\left\{ f_c(G(z)) - \max_{k \in \{1, \ldots, K\}, k \neq c} f_k(G(z)), 0 \right\}
\]
(3)
The scalar \( \sqrt{\pi/2} \) is a constant associated with the sampling Gaussian distribution of \( G \), which will become apparent in later analysis. We further offer several insights into the physical meaning of the local robustness score in (3): (i) The inner term \( f_c(G(z)) - \max_{k \in \{1, \ldots, K\}, k \neq c} f_k(G(z)) \) represents the gap in the likelihood of the model prediction between the correct class \(c\) and the most likely class other than \(c\). A larger positive value of this gap reflects higher confidence in the correct prediction and thus better robustness. (ii) Following (i), a negative gap means the model is making an incorrect prediction, and thus the outer term \(\max\{\text{gap}, 0\} = 0\), which corresponds to zero robustness. Next, we use the local robustness score \(g\) defined in (3) to formally state our theorem establishing a certified lower bound on the true global robustness.

**Theorem 1** (certified global robustness estimate). Let \(f : [0, 1]^d \mapsto [0, 1]^K\) be a \(K\)-way classifier and let \(f_k(\cdot)\) be the predicted likelihood of class \(k\), with \(c\) denoting the groundtruth class. Given a generator \(G\) that generates a sample \(G(z)\) with \(z \sim \mathcal{N}(0, I)\), define
\[
g(G(z)) = \sqrt{\frac{\pi}{2}} \cdot \max\{f_c(G(z)) - \max_{k \in \{1, \ldots, K\}, k \neq c} f_k(G(z)), 0\}.
\]
Then the global robustness estimate of \(f\) evaluated with \(L_2\)-norm bounded perturbations, defined as \(\hat{\Omega}(f) = \mathbb{E}_{z \sim \mathcal{N}(0, I)}[g(G(z))]\), is a certified lower bound of the true global robustness \(\Omega(f)\) with respect to \(G\). The complete proof is given in Appendix 6.4.

### 3.3 Probabilistic Guarantee on Sample Mean

As defined in Theorem 1, the global robustness estimate \(\hat{\Omega}(f) = \mathbb{E}_{z \sim \mathcal{N}(0, I)}[g(G(z))]\) is the mean of the local robustness score function introduced in (3) evaluated through a generator \(G\) and its sampling distribution. In practice, one can use a finite number of samples \(\{G(z_i|y_i)\}_{i=1}^n\) generated from a conditional generator \(G(\cdot|y)\) to estimate \(\hat{\Omega}(f)\), where \(y\) denotes a class label and it is also an input parameter to the conditional generator. The simplest estimator of \(\hat{\Omega}(f)\) is the sample mean, defined as
\[
\hat{\Omega}_S(f) = \frac{1}{n} \sum_{i=1}^n g(G(z_i|y_i)).
\]
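The quantities above translate directly into code. The sketch below computes the local score of Eq. (3) and the sample-mean estimator \(\hat{\Omega}_S(f)\); it is a minimal illustration in which the conditional generator and the classifier are placeholders (their call signatures are assumptions, not a released API), and the classifier is assumed to already output per-class confidences in \([0, 1]\).

```python
import math
import torch

SQRT_HALF_PI = math.sqrt(math.pi / 2.0)

def local_great_score(probs, label):
    # Local score g(G(z)) of Eq. (3): sqrt(pi/2) times the margin between the
    # confidence of the groundtruth class and the best other class, clipped at
    # zero so that misclassified samples contribute zero robustness.
    label = int(label)
    correct = probs[label]
    runner_up = torch.cat([probs[:label], probs[label + 1:]]).max()
    return SQRT_HALF_PI * torch.clamp(correct - runner_up, min=0.0).item()

def great_score(classifier, generator, labels, latent_dim=128):
    # Sample-mean estimator: one latent draw, one generator call, and one
    # classifier forward pass per generated sample, then average the scores.
    scores = []
    for y in labels:
        z = torch.randn(1, latent_dim)       # z ~ N(0, I)
        x = generator(z, y)                  # G(z | y); placeholder interface
        probs = classifier(x).squeeze(0)     # confidences in [0, 1]^K
        scores.append(local_great_score(probs, y))
    return sum(scores) / len(scores)
```

Each evaluated sample thus costs a single forward pass of the classifier, which is the source of the linear complexity and the black-box applicability discussed below.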
In what follows, we present our second main theorem to deliver a probabilistic guarantee on the sample complexity needed to achieve an \(\epsilon\) difference between the sample mean \(\hat{\Omega}_S(f)\) and the true mean \(\hat{\Omega}(f)\).

**Theorem 2** (probabilistic guarantee on sample mean). Let \(f\) be a \(K\)-way classifier with its outputs bounded by \([0, 1]^K\) and let \(e\) denote the natural base. For any \(\epsilon, \delta > 0\), if the sample size \(n \geq \frac{32e \log(2/\delta)}{\epsilon^2}\), then with probability at least \(1 - \delta\), the sample mean \(\hat{\Omega}_S(f)\) is \(\epsilon\)-close to the true mean \(\hat{\Omega}(f)\). That is, \(|\hat{\Omega}_S(f) - \hat{\Omega}(f)| \leq \epsilon\).

The complete proof is given in Appendix 6.5. The proof is built on a concentration inequality in (Maurer & Pontil, 2021). It is worth noting that the bounded-output assumption on the classifier \(f\) in Theorem 2 can be easily satisfied by applying a normalization layer at the final model output, such as the softmax function or the element-wise sigmoid function.

### 3.4 Algorithm and Computational Complexity

To conclude this section, Algorithm 1 in Appendix 6.6 summarizes the procedure of computing GREAT Score using the sample-mean estimator. It can be seen that the computational complexity of GREAT Score is linear in the number of generated samples \(N_S\), and for each sample, the computation of the statistic \(g\) defined in (3) only requires drawing a sample from the generator \(G\) and taking a forward pass through the classifier \(f\) to obtain the model predictions on each class. As a byproduct, GREAT Score applies to the setting where the classifier \(f\) is a black-box model, meaning only the model outputs are observable by an evaluator.

### 3.5 Calibrated GREAT Score

In cases where one has additional knowledge of adversarial examples on a set of images from a generative model, e.g., successful adversarial perturbations (an upper bound on the minimal perturbation of each sample) returned by a norm-minimization adversarial attack such as the CW attack (Carlini & Wagner, 2017), we can further "calibrate" the GREAT Score with respect to the available perturbations. The CW attack employs two loss terms, a classification loss and a distance metric, to generate adversarial examples; see Appendix 6.7 for details. Moreover, since Theorem 1 informs some design choices for the model output layer (any non-negative $K$-dimensional vector $f \in [0, 1]^K$ reflecting the prediction confidence over $K$ classes is admissible), we incorporate this flexibility in the calibration process. Specifically, we use calibration in the model-ranking setup where there are $M$ models $\{f^{(j)}\}_{j=1}^M$ for evaluation, and each model (indexed by $j$) has a set of known perturbations $\{\delta^{(j)}_i\}_{i=1}^N$ on a common set of $N$ image-label pairs $\{x_i, y_i\}_{i=1}^N$ from the same generative model. We further consider four different model output layer designs (attached to the model logits): (i) sigmoid($\cdot|T_1$): sigmoid with temperature $T_1$, (ii) softmax($\cdot|T_2$): softmax with temperature $T_2$, (iii) sigmoid(softmax($\cdot|T_2 = 1)|T_1$): sigmoid with temperature after softmax, and (iv) softmax(sigmoid($\cdot|T_1 = 1)|T_2$): softmax with temperature after sigmoid. Finally, let $\{\hat{\Omega}_S(f^{(j)})\}_{j=1}^M$ denote the GREAT Scores computed based on $\{x_i, y_i\}_{i=1}^N$ for each model. We calibrate GREAT Score by optimizing a rank statistic (e.g., Spearman's rank correlation coefficient) over the temperature parameter, comparing the ranking consistency between $\{\hat{\Omega}_S(f^{(j)})\}_{j=1}^M$ and $\{\delta^{(j)}_i\}_{i=1}^N$.
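As a rough illustration of this calibration step, the sketch below uses output-layer setting (iv) (softmax with temperature applied after an element-wise sigmoid) and a grid search over the temperature that maximizes Spearman's rank correlation with per-model reference distortions (e.g., average CW-attack perturbations). The helper names and the coarse grid are illustrative assumptions, not the released implementation.

```python
import numpy as np
import torch
from scipy.stats import spearmanr

def output_layer_iv(logits, temperature):
    # Setting (iv): softmax with temperature applied after an element-wise sigmoid.
    return torch.softmax(torch.sigmoid(logits) / temperature, dim=-1)

def calibrate_temperature(per_model_logits, labels, reference_distortions,
                          grid=np.linspace(1e-4, 2.0, 200)):
    # per_model_logits: list of (N, K) logit tensors, one per evaluated model.
    # labels: (N,) LongTensor of groundtruth labels for the generated images.
    # reference_distortions: one scalar per model (e.g., mean CW distortion).
    best_t, best_rho = None, -float("inf")
    for t in grid:
        scores = []
        for logits in per_model_logits:
            probs = output_layer_iv(logits, t)
            correct = probs.gather(1, labels.view(-1, 1)).squeeze(1)
            masked = probs.clone()
            masked.scatter_(1, labels.view(-1, 1), -1.0)   # mask out the true class
            margin = torch.clamp(correct - masked.max(dim=1).values, min=0.0)
            scores.append(np.sqrt(np.pi / 2) * margin.mean().item())
        rho = spearmanr(scores, reference_distortions).correlation
        if rho > best_rho:
            best_t, best_rho = float(t), rho
    return best_t, best_rho
```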
In our experiments, we find that setting (iv) gives the best result, and we use it as the default setup for calibration, as detailed in Appendix 6.8.

4 EXPERIMENTAL RESULTS

4.1 Experiment Setup

Datasets and Models. We conduct our experiments on several datasets, including CIFAR-10 (Krizhevsky et al., 2009), ImageNet-1K (Deng et al., 2009) and CelebA-HQ (Karras et al., 2018)/CelebA (Liu et al., 2015). For neural network models, we use the available models on RobustBench (Croce et al., 2020) (see more details in the next paragraph), which include 17/5 models on CIFAR-10/ImageNet, respectively. We also use several off-the-shelf GANs and diffusion models (DMs) trained on CIFAR-10 and ImageNet for computing GREAT Score in an ablation study (we defer the model details to later paragraphs).

Summary of Classifiers on RobustBench. RobustBench is to date the largest benchmark for robustness evaluation, with publicly accessible neural network models submitted by contributors. RobustBench uses the default test dataset from several standard image classification tasks, such as CIFAR-10 and ImageNet-1K, to run Auto-Attack (Croce & Hein, 2020) and report the resulting accuracy under $L_2$-norm and $L_\infty$-norm perturbations (i.e., the robust accuracy, RA) as a metric for adversarial robustness. Even under one perturbation type, it is not easy to make a direct and fair comparison among all submitted models on RobustBench because they often differ in training scheme, network architecture, and the usage of additional real and/or synthetic data. To make a meaningful comparison with GREAT Score, we select all non-trivial models (having non-zero RA) submitted to the CIFAR-10 and ImageNet-1K benchmarks and evaluated with $L_2$-norm perturbations at a fixed perturbation level of 0.5 using Auto-Attack. We list the model names in Table 1 and provide their descriptions in Appendix 6.9.

GANs and DMs. We used off-the-shelf GAN models provided by StudioGAN (Kang & Park, 2021), a library containing released GAN models. StudioGAN also reports the Inception Score (IS) to rank the model quality. We use the GAN model with the highest IS value as our default GAN for GREAT Score, which are StyleGAN2 (Karras et al., 2020)/BigGAN (Brock et al., 2019) for CIFAR-10/ImageNet with IS = 10.477/99.705, respectively. For the ablation study of using different generative models in GREAT Score (Section 4.4), we also use the following GAN/DM models: LSGAN (Mao et al., 2017), GGAN (Lim & Ye, 2017), SAGAN (Zhang et al., 2019), SNGAN (Miyato et al., 2018), DDPM (Ho et al., 2020) and StyleGAN2 (Karras et al., 2020).

GREAT Score implementation. The implementation follows Algorithm 1 in Appendix 6.6 with a sigmoid/softmax function on the logits of the CIFAR-10/ImageNet classifier to ensure the model output of each dimension is within $[0, 1]$, as implied by Theorem 1. As ImageNet-1K has 1000 classes, applying sigmoid would make the robustness score function in (3) degenerate, so we use softmax instead. We use 500 samples drawn from a generative model to compute GREAT Score.

Figure 1: Comparison of local GREAT Score and CW attack in $L_2$ perturbation on CIFAR-10 with the Rebuffi_extra model (Rebuffi et al., 2021). The x-axis is the image ID. The result shows the local GREAT Score is indeed a lower bound of the perturbation level found by the CW attack.

Comparative methods. We compare the effectiveness of GREAT Score for two objectives: robustness ranking (global robustness) and per-sample perturbation.
For the former, we compare the RA reported in RobustBench on the test dataset (named RobustBench Accuracy) as well as the RA of Auto-Attack on the generated data samples (named AutoAttack Accuracy). For the latter, we report the RA of Auto-Attack in $L_2$-norm with a fixed perturbation level of 0.5.

Evaluation metrics. For robustness ranking, we report Spearman's rank correlation coefficient between two sets of model rankings (e.g., GREAT Score vs. RobustBench Accuracy). A value closer to 1 means higher consistency. Robust accuracy refers to the fraction of correctly classified samples under adversarial perturbations.

Calibration Method. We run the $L_2$-norm CW attack (Carlini & Wagner, 2017) (with learning rate 0.005 and 200 iterations) on each generated data sample to find the minimal adversarial perturbation. Then, we use grid search in the range [0, 2] with an interval of 0.00001 to find the temperature value maximizing the Spearman's rank correlation coefficient between GREAT Score and the CW attack distortion.

Compute Resources. All our experiments were run on a GTX 2080 Ti GPU with 12GB RAM.

4.2 LOCAL AND GLOBAL ROBUSTNESS ANALYSIS

Recall from Theorem 1 that the local robustness score proposed in (3) gives a certified perturbation level for generated samples from a generative model. To verify this claim, we randomly select 20 generated images on CIFAR-10 and compare their local certified perturbation level to the perturbation found by the CW attack (Carlini & Wagner, 2017) using the Rebuffi_extra model (Rebuffi et al., 2021). Rebuffi et al. (2021) proposed fixing data augmentation, such as using CutMix (Yun et al., 2019) and GAN-generated data, to prevent over-fitting. Figure 1 shows the perturbation level of the local GREAT Score in (3) and that of the corresponding CW attack per sample. We can see that the local GREAT Score is a lower bound of the CW attack's perturbation, as the CW attack finds a successful adversarial perturbation that is no smaller than the minimal perturbation $\Delta_{\text{min}}$ (i.e., an over-estimation). The true $\Delta_{\text{min}}$ value lies between these lower and upper bounds.

In Figure 2, we compare the cumulative robust accuracy (RA) of GREAT Score and Auto-Attack over 500 samples by sweeping the $L_2$ perturbation level from 0 to 1 with a 0.05 increment for Auto-Attack. The cumulative RA of GREAT Score at a perturbation level $r$ is the fraction of samples with local GREAT Scores greater than $r$, which gives an attack-proof guarantee that no attack can achieve a lower RA at the same perturbation level. We see that the trend of attack-independent certified robustness (GREAT Score) is similar to that of empirical attacks (Auto-Attack). The gap between our certified curve and the empirical curve of AutoAttack does not necessarily indicate inferiority; it could mean that there exist undiscovered adversarial examples at higher perturbation radii.

Table 1 compares the global robustness statistics of the 17 grouped CIFAR-10 models on RobustBench for the uncalibrated and calibrated versions, respectively, in terms of the GREAT Score and the average distortion of the CW attack, which again verifies that GREAT Score is a certified lower bound on the true global robustness (see its definition in Section 3.1), while any attack with a 100% attack success rate only gives an upper bound on the true global robustness. We also observe that calibration can indeed enlarge the GREAT Score and tighten its gap to the distortion of the CW attack.
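The two evaluation views used in this subsection reduce to a few lines of NumPy/SciPy, sketched below; the per-sample scores are placeholders, and the per-model values are taken from the first four rows of Table 1 purely for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def cumulative_robust_accuracy(local_scores, radius):
    # Fraction of generated samples whose local GREAT Score exceeds `radius`;
    # by the certificate, no L2 attack with budget `radius` can push robust
    # accuracy below this value on these samples.
    return float((np.asarray(local_scores) > radius).mean())

# Certified curve over the same sweep used for Auto-Attack (0 to 1, step 0.05).
local_scores = np.random.rand(500)   # placeholder per-sample local GREAT Scores
curve = [cumulative_robust_accuracy(local_scores, r) for r in np.arange(0.0, 1.05, 0.05)]

# Ranking consistency between two robustness metrics over the same models
# (illustrative values from the first four rows of Table 1).
great_scores   = [0.507, 0.534, 0.542, 0.424]
robustbench_ra = [82.32, 80.53, 80.42, 78.80]
rho = spearmanr(great_scores, robustbench_ra).correlation
```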
Table 1: Comparison of (Calibrated) GREAT Score vs. minimal distortion found by the CW attack (Carlini & Wagner, 2017) on CIFAR-10. The results are averaged over 500 samples from StyleGAN2.

| Model Name | RobustBench Accuracy(%) | AutoAttack Accuracy(%) | GREAT Score | Calibrated GREAT Score | CW Distortion |
|------------|-------------------------|------------------------|-------------|------------------------|--------------|
| Rebuffi extra Rebuffi et al. (2021) | 82.32 | 87.20 | 0.507 | 1.216 | 1.859 |
| Gowal extra Gowal et al. (2020) | 80.53 | 85.60 | 0.534 | 1.213 | 1.324 |
| Rebuffi_70_drops Rebuffi et al. (2021) | 80.42 | 90.61 | 0.542 | 1.208 | 1.943 |
| Rebuffi_70_drops Rebuffi et al. (2021) | 78.80 | 90.00 | 0.424 | 1.214 | 1.796 |
| Augustin_WRN_extra Augustin et al. (2020) | 78.79 | 86.20 | 0.525 | 1.206 | 1.340 |
| Schlegl_128 Schlegl et al. (2021) | 77.24 | 92.20 | 0.501 | 1.143 | 1.392 |
| Augustin_WRN Augustin et al. (2020) | 76.25 | 86.40 | 0.583 | 1.206 | 1.332 |
| Rade Rade & Mosseri-Dzifcak (2021) | 76.15 | 86.60 | 0.413 | 1.200 | 1.486 |
| RobustBench (test set) (Carlini et al., 2021) | 75.90 | 87.60 | 0.560 | 1.110 | 1.415 |
| Gowal Gowal et al. (2020) | 74.50 | 86.40 | 0.124 | 1.116 | 1.253 |
| Schlegl_312 Schlegl et al. (2021) | 74.41 | 88.80 | 0.520 | 1.185 | 1.343 |
| Wd2020 adversarial Wd2 (2020) | 73.66 | 84.60 | 0.128 | 1.110 | 1.369 |
| Augustin2020 adversarial Augustin et al. (2020) | 72.91 | 85.20 | 0.569 | 1.199 | 1.285 |
| Engstrom Engstrom et al. (2019) | 72.42 | 81.00 | 0.502 | 1.020 | 1.084 |
| Rice2020 overfitting Rice et al. (2020) | 67.68 | 81.80 | 0.152 | 1.040 | 1.097 |
| Runy2019 decoupling Runy et al. (2019) | 66.44 | 79.20 | 0.275 | 1.101 | 1.165 |
| Ding2020 MMA Ding et al. (2019) | 66.09 | 77.60 | 0.112 | 0.909 | 1.005 |

4.3 MODEL RANKING ON CIFAR-10 AND IMAGENET

Following the experiment setup in Section 4.1, we compare the model ranking on CIFAR-10 using GREAT Score (evaluated with generated samples), RobustBench (evaluated with Auto-Attack on the test set), and Auto-Attack (evaluated with Auto-Attack on generated samples). Table 2 presents their mutual rank correlation (a higher value means more aligned rankings) for the calibrated and uncalibrated versions. We note that there is an innate discrepancy in the Spearman's rank correlation coefficient (well below 1) between RobustBench and Auto-Attack, which means Auto-Attack will give inconsistent model rankings when evaluated on different data samples. In addition, GREAT Score measures the classification margin, while AutoAttack measures accuracy under a fixed perturbation budget $\epsilon$. AutoAttack's ranking will change if we use different $\epsilon$ values. For example, comparing the rankings at $\epsilon = 0.3$ and $\epsilon = 0.7$ on 10,000 CIFAR-10 test images for AutoAttack, the Spearman's correlation is only 0.9485. Therefore, GREAT Score and AutoAttack are complementary evaluation metrics and need not match perfectly. Despite their discrepancy, before calibration, the correlation between GREAT Score and RobustBench yields a similar value. With calibration, there is a significant improvement in the rank correlation of GREAT Score to RobustBench and to Auto-Attack, respectively. Table 3 presents the global robustness statistics of these three methods on ImageNet. We observe almost perfect ranking alignment between GREAT Score and RobustBench, with their Spearman's rank correlation coefficient being 0.8, which is higher than that of Auto-Attack and RobustBench (0.6).
These results suggest that GREAT Score is a useful metric for margin-based robustness evaluation.

4.4 ABLATION STUDY AND RUN-TIME ANALYSIS

Ablation study on GANs and DMs. Evaluating on CIFAR-10, Figure 3 compares the Inception Score (IS) and the Spearman's rank correlation coefficient between GREAT Score and RobustBench on five GANs and DDPM. One can observe that models with higher IS attain better ranking consistency.

Run-time analysis. Figure 4 compares the run-time efficiency of GREAT Score over Auto-Attack on the same 500 generated CIFAR-10 images. We show the ratio of their average per-sample run-time (the wall-clock time of GREAT Score/Auto-Attack is reported in Appendix 6.11) and observe around 800-2000 times improvement, validating the computational efficiency of GREAT Score.

Sample Complexity and GREAT Score. In Appendix 6.12, we report the mean and variance of GREAT Score with a varying number of generated data samples. The results show that the statistics of GREAT Score are quite stable even with a small number of data samples (i.e., $\geq 500$).

Table 3: Robustness evaluation on ImageNet using GREAT Score, RobustBench (with the test set), and Auto-Attack (with generated samples). The Spearman's rank correlation coefficient for GREAT Score vs. RobustBench is 0.9 and 0.872, respectively.

| Model Name | RobustBench Accuracy (%) | AutoAttack Accuracy (%) | GREAT Score | Calibrated GREAT Score | CW Distortion |
|------------|-------------------------|------------------------|-------------|------------------------|--------------|
| Titan1 Titan et al. (2020) | 78.96 | 85.20 | 0.501 | 1.216 | 1.859 |
| Titan2 Titan et al. (2020) | 78.96 | 85.20 | 0.501 | 1.216 | 1.859 |
| DDPM DDPM (Chen et al., 2019) | 78.96 | 85.20 | 0.501 | 1.216 | 1.859 |
| Fast Fast et al. (2020b) | 76.24 | 19.2 | 0.273 | 0.273 | 0.273 |
| Trace3 Trace3 et al. (2020) | 75.32 | 19.6 | 0.273 | 0.273 | 0.273 |

Figure 3: Comparison of Inception Score and Spearman's rank correlation to RobustBench using GREAT Score with different GANs.

Figure 4: Run-time improvement (GREAT Score over Auto-Attack) on 500 generated CIFAR-10 images.

Table 4: Group-wise and overall robustness evaluation for online gender classification APIs over 500 generated samples (per group).

| Online API Name | Old | Young | With Eyeglasses | Without Eyeglasses | Total |
|-----------------|-----|-------|-----------------|--------------------|-------|
| BetaFace | 0.950| 0.967| 0.938 | 0.919 | 0.942 |
| Inferdo | 0.948| 0.937| 0.858 | 0.969 | 0.937 |
| Arsa-Technology | 1.031| 0.938| 0.799 | 1.082 | 0.933 |
| DeepFace | 0.979| 0.774| 0.769 | 0.976 | 0.877 |
| Baidu | 0.997| 0.997| 0.991 | 1.010 | 1.005 |
| Luxand | 1.091| 0.912| 0.673 | 1.010 | 0.944 |

4.5 Evaluation on Online Facial Recognition APIs

To demonstrate that GREAT Score enables robustness evaluation of black-box models that only provide model inference outcomes for given data inputs, we use synthetically generated face images with hidden attributes to evaluate six online face recognition APIs for gender classification. It is worth noting that GREAT Score is suited for privacy-sensitive assessment because it only uses synthetic face images for evaluation and does not require using real face images. We use an off-the-shelf face image generator, InterFaceGAN (Shen et al., 2020), trained on the CelebA-HQ dataset (Karras et al., 2018), which can generate controllable high-quality face images with a choice of attributes such as eyeglasses, age, and expression.
We generate four different groups (attributes) of face images for evaluation: Old, Young, With Eyeglasses, and Without Eyeglasses. For annotating the ground-truth gender labels of the generated images, we use the gender predictions from the FAN classifier (He et al.). In total, 500 gender-labeled face images are generated for each group. Appendix 6.14 shows some examples of the generated images for each group. We evaluate GREAT Score on six online APIs for gender classification: BetaFace (BetaFace), Inferdo (Inferdo), Arsa-Technology (Arsa-Technology), DeepFace (Serengil & Ozpinar, 2021), Baidu (Baidu), and Luxand (Luxand). These APIs are "black-box" models to end users or an external model auditor because the model details are not revealed and only the model inference results returned by the APIs (prediction probabilities for Male/Female) are provided. Finally, we upload these images to the aforementioned online APIs and calculate the GREAT Score using the returned prediction results.

Table 4 displays the group-level and overall GREAT Score results. Our evaluation reveals interesting observations. For instance, APIs such as BetaFace, Inferdo, and DeepFace exhibit a large discrepancy between Old and Young, while other APIs have comparable scores. For all APIs, the score of With Eyeglasses is consistently and significantly lower than that of Without Eyeglasses, which suggests that eyeglasses could be a common spurious feature that affects group-level robustness in gender classification. The analysis demonstrates how GREAT Score can be used to study the group-level robustness of an access-limited model in a privacy-enhanced manner.

To verify our evaluation, in Table 5 we compare GREAT Score to the black-box Square Attack (Andriushchenko et al., 2020) with $\epsilon=2$ and 100 queries on DeepFace. For both the Age and Eyeglasses groups (Old vs. Young and With vs. Without Eyeglasses), we consistently see that a higher GREAT Score (second row) indicates better robust accuracy (%, first row) against the Square Attack.

Table 5: GREAT Score vs. accuracy under the Square Attack (Andriushchenko et al., 2020).

| DeepFace | Old | Young | With Eyeglasses | Without Eyeglasses |
|----------|-----|-------|-----------------|--------------------|
| Square Attack | 84.40% | 72.60% | 65.80% | 89.00% |
| GREAT Score | 0.979 | 0.774 | 0.763 | 0.969 |

5 Conclusion

In this paper, we presented GREAT Score, a novel and computationally efficient attack-independent metric for global robustness evaluation against adversarial perturbations. GREAT Score uses an off-the-shelf generative model, such as a GAN, for evaluation and enjoys theoretical guarantees on its estimation of the true global robustness. Its computation is lightweight and scalable because it only requires accessing the model predictions on the generated data samples. Our extensive experimental results on CIFAR-10 and ImageNet also verified high consistency between GREAT Score and the attack-based model ranking on RobustBench, demonstrating that GREAT Score can be used as an efficient alternative for robustness benchmarks. We also demonstrated the novel use of GREAT Score for the robustness evaluation of online facial recognition APIs.

Limitations. One limitation could be that our framework of global adversarial robustness evaluation using generative models is centered on $L_2$-norm based perturbations. This limitation could be addressed if Stein's Lemma can be extended to other $L_p$ norms.
REFERENCES Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision, pp. 484–501. Springer, 2020. Arsa-Technology. Arsa API. https://rapidapi.com/arsa-technology-arsa-technology-default/api/face-recognition18. Maximilian Augustin, Alexander Meinke, and Matthias Hein. Adversarial robustness on in-and out-distribution improves explainability. In European Conference on Computer Vision, pp. 228–245. Springer, 2020. Baidu. Baidu API. https://console.bce.baidu.com/. BetaFace. BetaFace API. https://rapidapi.com/betiface/api/face-recognition. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Blxsqj09Fm. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39–57. IEEE, 2017. Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. EAD: elastic-net attacks to deep neural networks via adversarial examples. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 10–17, 2018. Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. International Conference on Machine Learning, 2019. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International conference on machine learning, pp. 2206–2216. PMLR, 2020. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670, 2020. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/forum?id=SSKZPJ Ct7B. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009. Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. Mma training: Direct input space margin maximization through adversarial training. arXiv preprint arXiv:1812.02637, 2018. Logan Engstrom, Andrew Ilyas, Hadi Salman, Shibani Santurkar, and Dimitris Tsipras. Robustness (python library), 2019. URL https://github.com/MadryLab/robustness. Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. Efficient and accurate estimation of lipschitz constants for deep neural networks. Advances in Neural Information Processing Systems, 32, 2019. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014a. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11): 139–144, 2020. 
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b.
LOyYjE0blM
The top and bottom molecules in Fig 1c are identical yet lead to different phenotypes. The title of the figure says that different molecules yield different phenotypes in cells. If there was an intention behind this, could you clarify it? If not, perhaps drop the second copy of the molecule.
Neural scaling laws for phenotypic drug discovery Anonymous authors Paper under double-blind review Abstract Recent breakthroughs by deep neural networks (DNNs) in natural language processing (NLP) and computer vision have been driven by a scale-up of models and data rather than the discovery of novel computing paradigms. Here, we investigate if scale can have a similar impact for models designed to aid small molecule drug discovery. We address this question through a large-scale and systematic analysis of how DNN size, data diet, and learning routines interact to impact accuracy on our Phenotypic Chemistry Arena (Pheno-CA) benchmark — a diverse set of drug discovery tasks posed on image-based high content screening data. Surprisingly, we find that DNNs explicitly supervised to solve tasks in the Pheno-CA do not continuously improve as their data and model size is scaled-up. To address this issue, we introduce a novel precursor task, the Inverse Biological Process (IBP), which is designed to resemble the causal objective functions that have proven successful for NLP. We indeed find that DNNs first trained with IBP then probed for performance on the Pheno-CA significantly outperform task-supervised DNNs. More importantly, the performance of these IBP-trained DNNs monotonically improves with data and model scale. Our findings reveal that the DNN ingredients needed to accurately solve small molecule drug discovery tasks are already in our hands, and project how much more experimental data is needed to achieve any desired level of improvement. We release our Pheno-CA benchmark and code to encourage further study of neural scaling laws for small molecule drug discovery. 1 Introduction Rich Sutton [Sutton, 2019] famously wrote, “the biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.” The scale of compute, model, and data have proven over recent years to be the most important factors for developing high performing systems in nearly every domain of AI including computer vision [Dehghani et al., 2023], natural language processing [Kaplan et al., 2020], reinforcement learning [Hilton et al., 2023], protein folding [Lin et al., 2023], and design [Hesslow et al., 2022]. The foundation of each of these revolutions-of-scale rests on empirically derived “neural scaling laws,” which indicate that continued improvement on a given domain’s tasks are constrained by compute, model, and data scale rather than novel algorithmic solutions or additional domain-knowledge. Thus, one of the extraordinary opportunities for AI is finding and exploiting similar scaling laws in domains that have not benefited from them yet. Small molecule drug discovery is one of the domains where scaling laws could have an outsized impact. Biological experiments are costly and time intensive, while the space of molecules has been estimated to contain as many as $10^{60}$ compounds with drug-like properties [Lipinski et al., 2012]. The current standard approach for identifying interactions between small molecules and biological targets involves high throughput screening (HTS), in which libraries of hundreds of thousands of molecules are tested empirically in parallel for specific biological readouts at great cost. 
The ability to accurately predict \textit{in silico} whether a small molecule engages a biological target would at the very least reduce the size of the chemical libraries needed to find bioactive molecules and support significantly faster and cheaper discovery. Moreover, if models for small molecule drug discovery follow similar scaling laws as those discovered for natural language processing [Kaplan et al., 2020], then it would mean that even the loftiest goals may be within reach, such as screening larger parts of the $10^{60}$ space of molecules to find treatments for today’s intractable diseases. Can DNNs speed up drug discovery, and if so, do their abilities follow neural scaling laws? One of the most promising avenues for generating data that could be used to train DNNs on drug discovery tasks is image-based high-content screening (iHCS). This type of screen is widely used to measure the effects of drugs and find targets for treating disease because it can capture a large variety of biological signatures through different stains or biosensors, and has been helpful in drug discovery applications including hit identification (Simm et al., 2018; Bray et al., 2017) and expansion (Hughes et al., 2020), lead optimization (Caie et al., 2010), generating hypotheses on a drug’s mechanism-of-action (Young et al., 2008; Boyd et al., 2020; Sundaramurthy et al., 2011), and target (Schenone et al., 2013), and also identifying and validating disease targets (see Chandrasekaran et al., 2021 for a review). While iHCS is potentially more flexible than standard biochemical assays used in drug discovery, it still requires significant time, money, and effort to set up and run. The recently released JUMP dataset (Chandrasekaran et al., 2023a,b) contains nearly two orders of magnitude more iHCS data than was previously available to the public (Bray et al., 2017), and therefore represents a significant opportunity for deep learning. However, it is still unclear if DNNs can leverage the data in JUMP for drug discovery. Here, we use the JUMP dataset to investigate if DNNs trained on it for small molecule drug discovery tasks follow neural scaling laws. A positive answer to this question could bring about a revolution in biomedicine that mimics the ones in natural language processing and computer vision over recent years, making it faster, easier, and cheaper than ever to discover drugs. Contributions. We began by augmenting the JUMP dataset with our Phenotypic Chemistry Arena (Pheno-CA): a diverse set of drug discovery tasks posed on a subset of images in JUMP. We then tested if the performance of DNNs trained to solve each task could be predicted by the size of their models or the amount of data they were trained with. Surprisingly, it could not: the performance of these “task-supervised” DNNs was either unaffected or hurt by an increase in data and model sizes (Fig. A1). However, DNNs in domains like natural language processing and vision rely on specific objective functions to achieve efficient scaling — for instance, GPT models use the causal language modeling objective (Kaplan et al., 2020). We reasoned that a similar precursor task, especially one that could force DNNs to learn a causal model of biology, could have a large impact on scaling. 
We therefore developed a novel precursor task, the inverse biological process (IBP), and performed large-scale and systematic experiments on the Pheno-CA to understand how this task interacted with the size of DNN architectures and the amount of data used in training them. Through this large-scale survey, we found the following: • DNNs pretrained with IBP significantly outperform task-supervised DNNs on the Pheno-CA. • DNNs pretrained with IBP also follow linear scaling laws on the Pheno-CA that accurately predict how many novel samples and replicates are needed to achieve arbitrary levels of accuracy. • IBP-trained DNNs improved in predictable ways as the total number of model parameters was increased. The effect of model depth, on the other hand, was less clear, and impacted only a subset of tasks. • Scaling laws on IBP-trained DNNs indicate that to achieve 100% accuracy on a task like predicting a compound’s mechanism-of-action, JUMP would need to be expanded by approximately 3.25M compounds. Achieving this scale of data would take an impossible amount of time and money, meaning that additional experimental advances are needed to improve neural scaling laws and move significantly beyond the performances of our IBP-trained models. • We release our Pheno-CA challenge and code at https://anonymous.4open.science/r/pub_scaling_mols-B3E1/ to encourage the field to continue investigating scaling laws in iHCS drug discovery. 2 METHODS JUMP data The Joint Undertaking for Morphological Profiling (JUMP) project has produced the largest publicly available dataset for iHCS. The dataset consists of images of Human U2OS osteosarcoma cells from 12 different data generating centers. Each image depicts a well of cells in a plate that have been perturbed then stained with the “Cell Painting” toolkit (Bray et al., 2016). Cell Painting involves fixing and staining cells with six dyes that mark eight different cell organelles or compartments: DNA, nucleoli, actin, Golgi apparatus, plasma membrane, endoplasmic reticulum (ER), cytoplasmic RNA, and mitochondria. Together, these stains provide an unbiased read-out on the effects of different perturbations on cell biology (Fig. 1). JUMP perturbations include the addition of 116,750 different compounds and the knockout of 7,975 genes by Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)*. There are a total of 711,974 compound perturbation images and 51,185 CRISPR perturbation images, which amounts to an average of five replicates of each perturbation type. *JUMP also contains gene overexpression manipulations, but we do not include those images in this study. Figure 1: Designing a scalable precursor task for phenotypic drug discovery. (a) We investigate the ability of DNNs to learn drug discovery tasks from the large-scale JUMP dataset. (b) Our Phenotypic Chemistry Arena (Pheno-CA) measures the ability of DNNs trained on JUMP data to solve diverse drug discovery tasks. Task performance is measured through either “learned probes,” small neural network readouts that map a DNN’s learned representations to the labels for a task, or through “0-shot” evaluations of performance (no task-specific training). (c) We find that only those DNNs pretrained on a specialized precursor task — the inverse biological process — follow scaling laws on the Pheno-CA. Phenotypic Chemistry Arena. We introduced the Phenotypic Chemistry Arena (Pheno-CA, Fig. 1b) to evaluate DNNs trained on JUMP data for drug discovery tasks. 
The Pheno-CA consists of annotations on 6,738 well images from JUMP for four different discovery tasks. These tasks are (i) predicting a drug’s mechanism-of-action (“MoA deconvolution,” 1,282 categories), (ii) predicting a drug’s target (“target deconvolution,” 942 categories), (iii) predicting a molecule’s identity (“molecule deconvolution,” 2,919 categories), and (iv) finding compounds with the same target as a CRISPR perturbation (“compound discovery” Fig. 1b). The remaining images from JUMP are used for training, and depict perturbations from all 116,750 represented in the JUMP dataset (including the 2,919 compounds in the Pheno-CA). All well images used in the Pheno-CA were held out from model training. DNN performance on the Pheno-CA tasks was measured in different ways. Performance on MoA and target deconvolution was recorded as top-10 classification accuracy, i.e., a model was considered accurate if the model’s top-10 predictions included the molecule’s true (labeled) MoA or target. Molecule deconvolution performance was recorded using categorical cross entropy loss, which measured how closely the distribution of predicted molecule identities matched the true identity. Finally, to measure how accurately models could find the compounds that match a CRISPR perturbation, we constructed curves that indicated how many guesses it took a model to find the appropriate molecules, then computed area-under-the-curve (AUC). Preprocessing We preprocessed CP data in two ways. First, we aggregated all cells from a well into a single representation, which captured the effect of its particular experimental perturbation. Second, we normalized these representations to control for experimental nuisances such as well, plate, and batch effects. To aggregate cells into a single well-based representation, we took the median CP-vector per well then normalized these representations by subtracting off the per-plate median representation and dividing by the per-plate inter-quartile-range [Wong et al., 2023]. Lastly, before training, we principle components analysis (PCA) whitened the representations [Bell & Sejnowski, 1996], which yielded 878–dimensional vectors for each well. As we describe in the Appendix C, some of the models were also explicitly trained to ignore experimental nuisances. Inverse biological process learning as a generalist precursor task DNN and data scale paired with the appropriate precursor training task — so-called causal language modeling — have lead to breakthroughs in natural language processing. Here, we devised a learning procedure that we hypothesized would similarly help DNNs learn biological causality from JUMP data. Our basic insight is that each cell in the JUMP dataset undergoes a “forward biological process”, in which the addition of a small molecule transforms its phenotype from a control to a perturbed one (Fig. 1c). We reasoned that training a model to invert this process would force it to learn the host of underlying biophysical processes that cause a cell to change phenotypes, and that the resulting model representations would prove useful for downstream discovery tasks including those in the Pheno-CA [Ardizzone et al., 2018]. We refer to this precursor task as the inverse biological process (IBP). If a model improves on tasks in the Pheno-CA after IBP-training it means that the motivating hypothesis is at least partially correct. In practice, IBP involves learning to predict a molecule from the phenotype it causes. 
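As a concrete illustration, the sketch below poses IBP as molecule-identity classification over the preprocessed (PCA-whitened, 878-dimensional) well profiles described above. The residual MLP block anticipates the Model zoo description that follows; widths, depths, the output size, and variable names are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Hedged sketch of the IBP precursor task: predict molecule identity from a
# preprocessed well profile. The residual block mirrors the Model zoo
# description (Linear -> BatchNorm1d -> GELU); sizes are illustrative only.
import torch
import torch.nn as nn

class ResidualMLPBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.GELU())

    def forward(self, x):
        return x + self.net(x)  # residual connection

class IBPModel(nn.Module):
    def __init__(self, in_dim=878, width=512, depth=6, n_molecules=116_750):
        super().__init__()
        self.proj = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResidualMLPBlock(width) for _ in range(depth)])
        self.head = nn.Linear(width, n_molecules)  # molecule-identity logits

    def forward(self, x):
        return self.head(self.blocks(self.proj(x)))

# One IBP training step: cross-entropy on molecule identity (the "inverse"
# of the forward biological process); inputs are placeholder tensors.
model = IBPModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
profiles = torch.randn(32, 878)               # placeholder well profiles
molecule_ids = torch.randint(0, 116_750, (32,))
loss = nn.functional.cross_entropy(model(profiles), molecule_ids)
opt.zero_grad(); loss.backward(); opt.step()
```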
We investigated the efficacy of IBP on the Pheno-CA by first pretraining a DNN for this task before freezing its weights and training task-specific readouts on its representations as detailed below. Model zoo. We built a large “zoo” of DNNs to understand how changing model architecture, supervision methods, and the amount of data seen during training affects performance on the Pheno-CA. Each DNN ended with a task-specific 3-layer multilayer perceptron (MLP), which mapped its representations of image content to a Pheno-CA task. All DNNs consisted of a basic MLP block with a residual connection. The MLP consisted of a linear layer, followed by a 1-D BatchNorm, and finally a gaussian error linear unit (GELU; Hendrycks & Gimpel, 2016). DNNs consisted of 1, 3, 6, 9, or 12 layers of these blocks, each with 128, 256, 512, or 1512 features. We tested two types of DNN supervision. (i) DNNs were directly trained to solve each Pheno-CA task. (ii) DNNs pretrained with IBP were frozen and their representations were mapped to a Pheno-CA task with the 3-layer MLP readout. In other words, we compared DNNs that learned task-specific representations to DNNs that learned IBP representations. Each of these DNNs was also given images from 1e3, 2e4, 5e4, 8e4 or 1e5 molecules that were not included in the Pheno-CA, and hence were out-of-distribution (OOD) of that challenge. Our hypothesis was that OOD compound information would help IBP-trained DNNs more accurately model the biophysical effects of compounds on U2OS cells, and ultimately outperform task-supervised DNNs on Pheno-CA tasks. Finally, each DNN was trained on 1%, 25%, 50%, 75%, or 100% of replicates of each of the compounds included in their training set. All combinations of data, model, and supervision parameters yielded 1,876 unique DNNs. Each DNN was implemented in PyTorch and trained using one NVIDIA TITAN X GPU with 24GB of VRAM. DNNs were trained with a AdamW [Loshchilov & Hutter, 2017], a learning rate of 1e-4, a batch size of 6000, and mixed-precision weights using Huggingface Accelerate library (https://github.com/huggingface/accelerate). Training was ended early if test performance stopped improving for 15 epochs. Training took at most 16 hours per DNN. 3 Results The Phenotypic Chemistry Arena (Pheno-CA) is a large-scale evaluation benchmark we created to measure the performance of DNNs trained on iHCS for diverse phenotypic drug discovery tasks: (i) predicting a drug’s mechanism-of-action (“MoA deconvolution”; Chandrasekaran et al., 2021), (ii) predicting a drug’s target (“target deconvolution”; Schenone et al., 2013), (iii) predicting a molecule’s identity (“molecule deconvolution”; Chandrasekaran et al., 2021), and (iv) finding compounds that have the same target as a CRISPR perturbation (Fig. 1b; Zhang & Gant, 2008; Méndez-Lucio et al., 2020). By surveying 1,876 different DNNs on the Pheno-CA, we identified the training routines and DNN architectures that yielded the highest performance on these tasks, and discovered scaling laws that predict performance for certain model classes with respect to the amount and types of data used to train them. Challenge 1: Mechanism-of-action deconvolution. Phenotypic screening is a powerful way to find active compounds in a biologically relevant setting. Phenotypic screens have inspired many important drug programs, including the discoveries of FKBP12 (Harding et al., 1989), calcineurin25 (Liu et al., 1992), and mTOR26 (Brown et al., 1994). 
However, it often requires substantial effort to understand the mechanism-of-action and targets of small molecules that are bioactive in a phenotypic screen. By MoA, we mean the effect the compound has on a cellular pathway or class of molecules, for instance ‘inhibitor of bacterial cell wall synthesis’, or ‘glucocorticoid receptor agonist’. In contrast, in the target challenge below we refer to the actual cellular component (for instance a specific enzyme) that the compound alters (usually by binding to it). iHCS data has been used in the past to help solve MoA deconvolution through a “guilt-by-association” approach, in which compounds that have known MoAs and targets are added into an experiment and used to deduce those properties in other compounds (Chandrasekaran et al., 2021). Here, we pose a version of “guilt-by-association” MoA-discovery on JUMP data. Each DNN in our zoo was given images of cells perturbed by different compounds, and trained to predict the MoA of a given compound out of 1,282 possibilities (Fig. 1a). DNNs were either supervised directly for MoA deconvolution or pretrained with IBP (Fig. 2a). Next, DNN weights were frozen and three-layer MLP probes were used to transform image representations from both models into MoA predictions (i.e., there was no direct task supervision for IBP models). Our DNN zoo yielded a wide range of performances on this task. At the low end was a 12.09% accurate 12-layer and 128-feature DNN trained with IBP on 100% of out-of-distribution molecules but only 0.01% of the replicates of each compound. At the high-end was a 52.62% accurate 9-layer and 1512-feature DNN trained with IBP on 100% of out-of-distribution molecules and 75% of the replicates of each compound. The representations of this performant IBP-trained DNN clearly separated the phenotypes of different MoAs (Fig. 2b), and it was 46% more accurate than the highest performing task-supervised DNN (36.08%; 6-layer and 256-feature DNN). Overall, IBP-trained DNNs were significantly more accurate at MoA deconvolution than task-trained DNNs ($T(624) = 7.97$, $p < 0.001$). These results indicate that the IBP precursor task promotes generalist representations that outperform task-specific training and are already well suited for MoA deconvolution. Another key difference we found between task-supervised and IBP-trained DNNs is that the performance of the latter followed a scaling law. The MoA deconvolution accuracy of IBP-trained DNNs linearly increased as they were trained on additional molecules that were out-of-distribution (OOD) of the Pheno-CA (Fig. 2c). The discovered law indicated that IBP-DNN performance increases by 1% with the addition of approximately 56K (non-unique) OOD molecules for training. While DNN performance generally improved with the total number of model parameters, the rate of improvement was higher for 9-layer DNNs than 12-layer DNNs (Fig. 2c; analyzed further in Appendix ADD). We further analyzed scaling laws for MoA prediction by recomputing them for different amounts of experimental replicates. That is, we expected that DNNs which were able to observe more experimental variability would scale better than those that observed less variability. Indeed, we found that more replicates lead to better models on average, and that more data also generally improved the scaling law slope (Fig. 3). Challenge 2: Target deconvolution. Identifying a bioactive molecule’s target from its phenotypes is another essential challenge for phenotypic screens. 
We evaluated the ability of DNNs to automate this task in the Pheno-CA, and measured how accurately models can deconvolve a molecule’s target from its phenotype. Figure 4: **Pheno-CA challenge 2:** Target deconvolution. (a) DNNs were either trained directly for target deconvolution from phenotypes or first pretrained on the IBP task then “frozen” for testing. Testing in each case involved fitting a 3-layer probe to generate target predictions for a molecule’s imaged phenotype. (b) The highest-performing DNN was an IBP-pretrained model, and its representations discriminate between the most commonly appearing targets. (c) IBP-trained DNN performance is a linear function of the amount of data each model is trained on. Each individual colored curves depicts the performance of DNNs trained on a fixed number of molecules that fall “out-of-distribution” of the molecules in the Pheno-CA. Decreases on the right end of each curve indicate overfitting. The scaling law depicted here is a linear fit of the max-performing models in each curve. Chance is $1.1e^{-1}$. (d) While DNN performance generally improved as models grew in parameters, 9-layer DNNs were more accurate than 12-layer DNNs. This task followed the same general approach as the MoA deconvolution task described above, and tested models for 942-way target deconvolution (Fig. 4a). As with MoA deconvolution, IBP-trained DNNs were on average significantly better at target deconvolution than task-supervised DNNs ($T(624) = 15.07$, $p < 0.001$). The lowest performing DNN (35.00%) was an IBP-trained 12-layer and 256-feature model trained with 0.01% of out-of-distribution molecules and 25% of the replicates of each compound. The highest performing DNN (67.95%) was an IBP-trained 9-layer and 1512-feature model trained on 100% of out-of-distribution molecules and 75% of the replicates of each compound. The representations of this IBP-trained DNN separated the phenotypes of different targets (Fig. 2b), and it was 33% more accurate than the highest performing task-supervised DNN (51.03%), which was a 6-layer and 512-feature DNN. IBP-trained DNNs also followed a scaling law on target deconvolution (Fig. 2c). Model performance was linearly predicted by the number of out-of-distribution molecules included in training. The discovered law indicated that IBP-trained DNN performance increases 1% per 79K (non-unique) OOD molecules added to training. As with MoA deconvolution, DNNs improved as they grew in size, but the deepest 12-layer models were again less effective than 9-layer models (Fig. 4d). Challenge 3: Molecule deconvolution. We next probed how well trained DNNs could discriminate between individual molecules by their phenotypes, a task which we call molecule deconvolution. This task involved 2,919-way classification of molecule identities on the Pheno-CA (Fig. A2a). In contrast to MoA and target deconvolution, molecular deconvolution represents a distinct challenge since many of the 2,919 compounds may yield quite similar phenotypes. As such, this task measures the ceiling discriminability of phenotypic representations in the models. The best-performing DNN (4.02 CCE) was an IBP-trained 9-layer and 1512-feature model and trained with 100% of out-of-distribution molecules and 75% of the replicates of each compound. The representations of this IBP-trained DNN separated the phenotypes of different targets (Fig. A2b). 
IBP-trained DNNs followed another scaling law on this task; their performance was predicted by the number of OOD molecules included in training. The discovered law indicated that IBP-trained DNN performance improved by 1 point of cross-entropy loss for every additional 606,061 (non-unique) OOD molecules added into training. DNNs on this task improved as they grew in size, and deeper models performed better (Fig. A2i).

Challenge 4: Compound discovery. Another important use of phenotypic screening is to find compounds that affect biology in ways that resemble a biological perturbation such as a mutation in a specific gene or protein. If we could predict such compounds accurately, we could rapidly assemble compound libraries that we could screen for an illness associated with that mutation. We investigated the ability of our DNNs to perform this task “zero-shot” (i.e., without task-specific training, unlike the other challenges) by comparing model representations of molecule phenotypes and of CRISPR manipulations of specific targets. We measured the representational distance between the phenotype of a CRISPR perturbation of one gene and the phenotype of every molecule in the Pheno-CA. We then computed the rank-order of the distance between molecules and the target manipulation, and recorded how many molecules one would need to test in order to find those with the same target as the manipulation (Fig. 5). We repeated this analysis for all compounds with at least 5 replicates in the Pheno-CA (12 total) and found that the IBP-trained model with the lowest loss recorded during IBP-training produced representations that were significantly better at this task than standard Cell Profiler (CP) representations ($T(11) = 2.91$, $p = 0.007$).

Figure 5: Pheno-CA challenge 4: Compound discovery. We measured the “zero-shot” efficacy of features from an IBP-trained DNN vs. Cell Profiler (CP) for finding compounds that share the target of a CRISPR perturbation. Lines depict the cumulative number of discovered molecules that match a target. The IBP-trained DNN is significantly better than CP ($p = 0.007$).

4 Related work

Deep learning-based chemistry While computational methods have played a large role in drug discovery since at least the early 1980s (Van Drie, 2007), big data and large-scale DNN architectures have recently supported significant advances for a variety of discovery tasks, such as predicting protein folding (Lin et al., 2023) and designing proteins with specific functions (Hesslow et al., 2022). Far less progress has been made in leveraging iHCS data for computational drug discovery tasks, with several notable exceptions. For instance, one study (Wong et al., 2023) found that iHCS data from an earlier and smaller iteration of the JUMP dataset can be used to train models to deconvolve MoAs significantly better than chance. Others have shown that iHCS carries signal for decoding the types of chemical or genetic perturbations that have been applied to cells (Moshkov et al., 2023). Our study takes significant steps beyond these prior ones and aligns iHCS with the goals of large-scale and data-driven AI in three ways: (i) we introduce the Pheno-CA as a standardized benchmark for model evaluation, (ii) we identify model parameterizations that perform well on this benchmark and are immediately useful for drug discovery, and (iii) we discover scaling laws that describe how datasets like JUMP need to be expanded for continued improvements.
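To make the scaling-law analysis concrete, the sketch below performs the kind of linear fit used throughout the Results: regress the best accuracy at each out-of-distribution (OOD) training-set size on the number of OOD molecules, then extrapolate the data needed for a target accuracy. The numeric arrays are illustrative placeholders, not the measured values reported above.

```python
# Hedged sketch of the linear scaling-law fit: best accuracy per OOD
# training-set size vs. number of OOD molecules, then extrapolation.
# Arrays are illustrative placeholders, not the paper's measurements.
import numpy as np

ood_molecules = np.array([1e3, 2e4, 5e4, 8e4, 1e5])        # OOD training-set sizes
best_accuracy = np.array([51.0, 51.3, 51.9, 52.4, 52.7])   # max over model zoo (%)

slope, intercept = np.polyfit(ood_molecules, best_accuracy, deg=1)
print(f"~{1.0 / slope:,.0f} extra (non-unique) OOD molecules per +1% accuracy")

target = 100.0  # accuracy goal (%)
extra_needed = (target - best_accuracy[-1]) / slope
print(f"Extrapolated additional OOD molecules for {target}% accuracy: {extra_needed:,.0f}")
```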
**Small molecule drug discovery benchmarks** While JUMP and the *Pheno-CA* offer a unique opportunity to train and evaluate DNNs on iHCS data for small molecule drug discovery, there are multiple other benchmarks that have focused on structure-based approaches to small molecule design. Fréchet ChemNet Distance ([Preuer et al., 2018](#)) (FCD) measures the distance between a model-generated small molecule and the distribution of molecules modeled by a DNN trained to predict the bioactivity of 6,000 molecules. Scoring high on this benchmark means that a model generates compounds that are within distribution of the FCD-model’s representations. Guacamol ([Brown et al., 2019](#)) and molecular sets ([Polykovskiy et al., 2020](#)) (MOSES) are benchmarks that evaluate a model’s generated molecules according to their FCD, validity, uniqueness, and novelty. Finally, MoleculeNet ([Wu et al., 2017](#)) consists of multiple types of benchmarks for models spanning quantum mechanics, physical chemistry, biophysics, and predictions of physiological properties like toxicity and blood-brain barrier penetration. These benchmarks can synergize with the *Pheno-CA*, for instance, to tune models for filtering molecules predicted by our *Pheno-CA*-adjudicated DNNs to have desirable phenotypic qualities. ## Discussion The many great breakthroughs of AI over recent years have been guided by the discovery of neural scaling laws ([Kaplan et al., 2020](#), [Dehghani et al., 2023](#)). Prior to these scaling laws, it was unclear if achieving human- or superhuman-level performance on challenging tasks in natural language processing and computer vision would require computing breakthroughs that shifted the paradigm beyond deep learning. But scaling laws indicate that sufficiently good algorithms are already in our hands, and that we need more data and compute to unlock their full potential. Here, we provide — to the best of our knowledge — the first evidence that DNNs trained with our IBP precursor task follow similar scaling laws for small molecule discovery. We find that IBP-trained DNNs are very useful for drug discovery, and significantly better than task-supervised DNNs of any tested size at solving the tasks in our *Pheno-CA*. While the number of experimental replicates included in training affected the overall accuracy of IBP-trained DNNs (Fig. 3), the introduction of additional molecules that fell “out-of-distribution” of the *Pheno-CA* was what actually enabled the accuracy of these models to scale up. This finding implies that the manifold relationship of small molecules and their phenotypes is highly nonlinear and filled with local-minima that make it easy for models to overfit — as if task-supervised DNNs are “looking for their keys under the streetlamp.” While it may not be feasible to generate the 14M additional experimental images (3.25M more novel molecules, with about five experimental replicates each) needed to achieve 100% accuracy on a task like MoA deconvolution, continuing to scale experimental data and DNNs towards this goal will unlock extraordinary opportunities to expedite drug discovery for the most challenging diseases we face today. We release our code and the *Pheno-CA* benchmark at [https://anonymous.4open.science/r/pub_scaling_mols-B3E1/](https://anonymous.4open.science/r/pub_scaling_mols-B3E1/) to support these efforts to revolutionize medicine. **Limitations** JUMP and other iHCS datasets present significant opportunities for advancing DNNs in small molecule discovery. 
However, the scaling laws that we discover demonstrate some key limitations. Purchasing just the 3.25M molecules needed to reach 100% accuracy in MoA deconvolution would cost around $325M (assuming $100 per compound). Generating replicate experiments for each could multiply that cost by orders of magnitude. Thus, there is an essential need to identify new imaging and experimental methods that can generate cheaper and better data for training DNNs with IBP. A partial solution to this problem is time-lapse imaging of single cells ([Arrasate et al., 2004](#), [Finkbeiner et al., 2015](#)), which enables analysis of single cells over time. Such time-course data has already been successfully used in multiple deep learning applications ([Linsley* et al., 2021](#), [Wang et al., 2022](#), [Christiansen et al., 2018](#)) and could approximate and supercharge the beneficial effects of replicates on scaling laws that we observed for MoA deconvolution (Fig. 3). ## References Lynton Ardizzone, Jakob Kruse, Sebastian Wirkert, Daniel Rahmer, Eric W Pellegrini, Ralf S Klessen, Lena Maier-Hein, Carsten Rother, and Ullrich Köthe. Analyzing inverse problems with invertible neural networks. August 2018. Montserrat Arrasate, Siddhartha Mitra, Erik S Schweitzer, Mark R Segal, and Steven Finkbeiner. Inclusion body formation reduces levels of mutant huntingtin and the risk of neuronal death. *Nature*, 431(7010):805–810, October 2004. Anthony Bell and Terrence J Sejnowski. Edges are the ‘independent components’ of natural scenes. In M C Mozer, M Jordan, and T Petsche (eds.), *Advances in Neural Information Processing Systems*, volume 9. MIT Press, 1996. Joseph C Boyd, Alice Pinheiro, Elaine Del Nery, Fabien Reyal, and Thomas Walter. Domain-invariant features for mechanism of action prediction in a multi-cell-line drug screen. *Bioinformatics*, 36(5):1607–1613, March 2020. Mark-Anthony Bray, Shantanu Singh, Han Han, Chadwick T Davis, Blake Borgeson, Cathy Hartland, Maria Kost-Alimova, Sigrun M Gustafsdottir, Christopher C Gibson, and Anne E Carpenter. Cell painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes. *Nat. Protoc.*, 11(9):1757–1774, September 2016. Mark-Anthony Bray, Sigrun M Gustafsdottir, Mohammad H Rohban, Shantanu Singh, Vebjorn Ljosa, Katherine L Sokolnicki, Joshua A Bittker, Nicole E Bodycombe, Vlado Dančík, Thomas P Hasaka, Cindy S Hon, Melissa M Kemp, Kejie Li, Deepika Walpita, Mathias J Wawer, Todd R Golub, Stuart L Schreiber, Paul A Clemons, Alykhan F Shamji, and Anne E Carpenter. A dataset of images and morphological profiles of 30 000 small-molecule treatments using the cell painting assay. *Gigascience*, 6(12):1–5, December 2017. Eric J Brown, Mark W Albers, Tae Bum Shin, Kazuo Ichikawa, Curtis T Keith, William S Lane, and Stuart L Schreiber. A mammalian protein targeted by g1-arresting rapamycin–receptor complex. *Nature*, 369(6483):756–758, June 1994. Nathan Brown, Marco Fiscato, Marwin H S Segler, and Alain C Vaucher. GuacaMol: Benchmarking models for de novo molecular design. *J. Chem. Inf. Model.*, 59(3):1096–1108, March 2019. Peter D Caie, Rebecca E Walls, Alexandra Ingleston-Orme, Sandeep Daya, Tom Houslay, Rob Eagle, Mark E Roberts, and Neil O Carragher. High-content phenotypic profiling of drug response signatures across distinct cancer cells. *Mol. Cancer Ther.*, 9(6):1913–1926, June 2010. Srinivas Niranj Chandrasekaran, Hugo Ceulemans, Justin D Boyd, and Anne E Carpenter. 
Image-based profiling for drug discovery: due for a machine-learning upgrade? *Nat. Rev. Drug Discov.*, 20(2):145–159, February 2021. Srinivas Niranj Chandrasekaran, Jeanelle Ackerman, Eric Alix, D Michael Ando, John Arevalo, Melissa Bennion, Nicolas Boisseau, Adriana Borowa, Justin D Boyd, Laurent Brino, Patrick J Byrne, Hugo Ceulemans, Carolyn Ch'ng, Beth A Cimini, Djork-Arne Clevert, Nicole Deflaux, John G Doench, Thierry Dorval, Regis Doyonnas, Vincenza Dragone, Ola Engkvist, Patrick W Faloon, Briana Fritchman, Florian Fuchs, Sakshi Garg, Tamara J Gilbert, David Glazer, David Gnutt, Amy Goodale, Jeremy Grignard, Judith Guenther, Yu Han, Zahra Hanifehlou, Santosh Hariharan, Desiree Hernandez, Shane R Horman, Gisela Hormel, Michael Huntley, Ilknur Icke, Makiyo Iida, Christina B Jacob, Steffen Jaensch, Jawahar Khetan, Maria Kost-Alimova, Tomasz Krawiec, Daniel Kuhn, Charles-Hugues Lardeau, Amanda Lembke, Francis Lin, Kevin D Little, Kenneth R Lofstrom, Sofia Lotfi, David J Logan, Yi Luo, Franck Madoux, Paula A Marin Zapata, Brittany A Marion, Glynn Martin, Nicola Jane McCarthy, Lewis Mervin, Lisa Miller, Haseeb Mohamed, Tiziiana Monteverde, Elizabeth Mouchet, Barbara Nicke, Arnaud Ogier, Anne-Laure Ong, Marc Osterland, Magdalena Otrocka, Pieter J Peeters, James Pilling, Stefan Prechtl, Chen Qian, Krzysztof Rataj, David E Root, Sylvie K Sakata, Simon Scrace, Hajime Shimizu, David Simon, Peter Sommer, Craig Spruiell, Iffat Sumia, Susanne E Swalley, Hiroki Terauchi, Amandine Thibaudeau, Amy Unruh, Jelle Van de Waeter, Michiel Van Dyck, Carlo van Staden, Michal Warchoł, Erin Weisbart, Amélie Weiss, Nicolas Wiest-Daessle, Guy Williams, Shan Yu, Bolek Zapiec, Marek Żyla, Shantanu Singh, and Anne E Carpenter. JUMP cell painting dataset: morphological impact of 136,000 chemical and genetic perturbations. March 2023a. Srinivas Niranj Chandrasekaran, Beth A Cimini, Amy Goodale, Lisa Miller, Maria Kost-Alimova, Nasim Jamali, John G Doench, Briana Fritchman, Adam Skepner, Michelle Melanson, John Arevalo, Marzieh Haghighi, Juan Caicedo, Daniel Kuhn, Desiree Hernandez, Jim Berstler, Hamdah Shafqat-Abbasi, David Root, Susanne E Swalley, Sakshi Garg, Shantanu Singh, and Anne E Carpenter. Three million images and morphological profiles of cells treated with matched chemical and genetic perturbations. May 2023b.
lygdvIKDxi
How does the similarity of the public dataset to the target model’s dataset affect results? The main results in Table 1 show significant overlap: CIFAR-10 is used as the public dataset when CIFAR-100 is the hidden dataset, and vice versa, and even Tiny-ImageNet has strong similarities compared to CIFAR 10/100.
SEEKER: SEMI-SUPERVISED KNOWLEDGE TRANSFER FOR QUERY-EFFICIENT MODEL EXTRACTION Anonymous authors Paper under double-blind review ABSTRACT Model extraction attacks against neural networks aim at extracting models without white-box access to model internals and training datasets. Unfortunately, most existing methods demand an excessive number of queries (up to millions) to reproduce a functional substitute model, greatly limiting their real-world applicability. In this work, we propose a query-efficient model extraction attack that effectively distills knowledge from publicly available data. To this end, we introduce a semantic alignment approach that trains the substitute model without interacting with the victim model. The proposed approach optimizes the substitute model to learn a generalizable image encoding pattern based on semantic consistency of neural networks. We further propose a query generator that enhances the information density of generated queries by aggregating public information, thereby greatly reducing the query cost required for constructing the substitute model. Extensive experiments demonstrate that our method achieves state-of-the-art performance which improves query-efficiency by as much as $50\times$ with higher accuracy. Additionally, our attack demonstrates the capability of bypassing most types of existing defense mechanisms. 1 INTRODUCTION The past decade has witnessed tremendous progress made by Deep Neural Networks (DNNs) in achieving human-level performance in various fields of applications, such as medicine, finance, and autonomous driving. DNN models carry high commercial values and sensitive information from the secret training data. Consequently, in many real-world applications, DNN models are provided as a black box, where only the inputs to and the outputs of the models can be observed. Unfortunately, recent works [Barbalau et al., 2020; Truong et al., 2021] unveiled that DNN models are still vulnerable to model extraction attacks even if the adversary can only access the models in a black-box manner. In such attacks, the adversary can obtain a substitute model that emulates the functionality of the original victim model solely through querying the black-box model with unlabeled inputs. Using the substitute model, it is shown that the adversary can infer sensitive attributes of other users [Zhang et al., 2023], craft tailored adversarial samples aimed at the victim model [Wang et al., 2022], or even reconstruct the secret training data employed by the victim [Kanla et al., 2022]. However, existing attacks primarily concentrate on enhancing the accuracy or transfer attack success rate (ASR) of the extracted model, while paying limited attention to query-efficiency of the model extraction process. An excessively large number of queries are used in these methods to extract a useful substitute model from the victim, leading to higher attack costs and an increased likelihood of encountering restrictions from the victim-side defense mechanisms. Existing model extraction attacks either synthesize queries from completely random inputs or with the assistance of publicly available data, both demanding an excessive number of queries. On the one hand, it is obvious that optimizing a generative network to produce queries from random distribution requires a large query budget to converge [Truong et al., 2021; Kariyappa et al., 2021]. 
On the other hand, even with the assistance of public datasets, existing attacks are still deemed query-inefficient due to two main reasons. First, as shown in Figure 1(a), traditional public dataset based attacks optimize the substitute model only through online interaction with the victim model. Second, most attacks lack an effective query generation process that constructs information-rich queries from the public data [Orekondy et al., 2019; Pal et al., 2020; Barbalau et al., 2020]. As a result, even the most query-efficient method [Sanyal et al., 2022] demands over 3M queries to extract a model that can reach 88% accuracy on CIFAR-10. More details can be found in the appendix. In this paper, we propose SEEKER, a query-efficient model extraction framework based on SEmi-supErvised public Knowledge transFEr, as shown in Figure 1(b). To tackle the aforementioned challenges, we devise an offline stage that pre-trains the substitute model without incurring any query costs, significantly improving the query-efficiency. Specifically, we design a semantic alignment scheme that optimizes generalizable encoding layers without requiring interaction with the victim model. The scheme is based on an intriguing observation that purely enforcing semantic self-consistency enables the substitute model to demonstrate similar activation patterns to the victim model. Moreover, we propose a multi-encoder query generator to efficiently enhance the consistency between the substitute and the victim models via parallel processing of multiple public data. As a result, SEEKER elevates the query-efficiency of model extraction to unprecedented levels while preserving high accuracy and ASR when compared to state-of-the-art (SOTA) methods. Experimental results demonstrate that our attack reduces the query budget by more than $50\times$ for obtaining the same level of ASR compared with the SOTA methods. SEEKER can also extract a substitute model with a remarkable accuracy of 93.97% on CIFAR-10, surpassing the performance of the most accurate model stealing approach. Besides, our results indicate that both active and passive defenses against model extraction attacks may fall short in guaranteeing the security and safety of cloud-based MLaaS schemes. Our main contributions are summarized as follows. • **Query-free self-supervised training**: To the best of our knowledge, our proposed semantic alignment scheme is the first self-supervised training procedure for model extraction, which increases the similarity between the substitute and victim models with zero query cost. • **Query-efficient query generator**: We propose a multi-input autoencoder for query generation in model extraction attacks, which elevates the information density of query inputs through integrating public knowledge in the latent space. • **Reproducible SOTA results**: Our attack significantly reduces the query budget and achieves higher accuracy and ASR than existing model extraction attacks. The implementation of our framework will be publicly available. 2 RELATED WORKS 2.1 MODEL EXTRACTION Model extraction attacks aim at reproducing a victim model without access to the model internals. Although query-efficiency is a major concern in practical model extraction, many existing works focus only on simply improving accuracy without considering the query budget limit. For example, Black-Box Ripper (Barbalau et al., 2020) requires a large number of queries in the generative evolutionary strategy to produce a small population of training samples. 
Meanwhile, some works (Zhou et al., 2020; Truong et al., 2021) try to generate queries from noise vectors without the help of public data, and therefore can take millions of queries to reproduce the victim model. Among those query-efficient model extraction attacks, some works (Orekondy et al., 2019; Yu et al., 2020) assume annotations in public datasets. For example, the adaptive policy of Knockoff Nets (Orekondy et al., 2019) relies on labels to organize the public data into a hierarchical architecture for its proposed active learning approach. CloudLeak (Yu et al., 2020) adopts a supervised extraction strategy that requires labels to fine-tune the substitute. Recently, a new model extraction setting is explored, where the adversary only has access to some unlabeled public datasets. For example, Mosali et al. (2019) generate query inputs by linearly merging public data. ActiveThief (Pal et al., 2020) attempts to select the most informative public data with active learning strategies. DFMS (Sanyal et al., 2022) crafts queries with a generative adversarial network (GAN), and utilizes the public datasets to assist the training process of GAN. Unfortunately, such attacks still lack query-efficiency, either due to inadequate information richness per query or lengthy generator pre-training. 2.2 BLACK-BOX ADVERSARIAL ATTACKS An important application of model extraction is mounting black-box adversarial attacks. Generally speaking, we can classify black-box adversarial attacks into three categories: substitute-based, transfer-based, and query-based. First, a number of substitute-based attacks have already demonstrated the effectiveness of using the extracted substitute model as a base for launching black-box adversarial attacks (Papernot et al., 2017; Zhou et al., 2020; Wang et al., 2021). Additionally, transfer-based attacks assume the adversary can obtain a substitute model trained on the same dataset as the victim model, and focus on improving the transferability of the adversarial samples synthesized based on the substitute model (Inkawhich et al., Wu et al., 2021; Zhang et al., 2022). Therefore, substitute-based and transfer-based attacks are generally complementary to each other. Finally, there are also query-based attacks (Li et al., 2019; Bai et al., 2020; Yuan et al., 2021) that directly utilize queries to construct adversarial samples. Most query-based adversarial attacks design optimization algorithms, such as gradient estimation methods (Tu et al., 2019; Ilyas et al., 2019), Bayesian optimization (Ru et al., 2019) or geometric mechanism (Maho et al., 2021), to construct adversarial samples. However, when the number of adversarial samples increases, such methods will also consume an impractically large number of queries. Some recent works (Ma et al., 2021; Yuan & He, 2021) incorporate substitute models into query-based approaches and achieve state-of-the-art query-efficiency among query-based attacks. However, such attacks are still only query-efficient when a very small number of adversarial samples are needed. Moreover, when query-based methods lose the connection to the API of the victim, they can no longer craft new adversarial samples. 3 METHOD 3.1 THREAT MODEL Here, we formalize the threat model of model extraction attacks considered in this work. Given a victim model $V$ trained on a secret dataset $D_{\text{secret}}$, an adversary attempts to extract a substitute model $S$ that mimics the behavior of $V$. 
The adversary can further generate perturbation $z$ for a clean image $c \in C$ based on $S$ so that $c + z$ is misclassified by $V$. In particular, we note that the adversary is aimed at attaining high query-efficiency while retaining high accuracy and attack success rate (ASR). Here, we assume that the adversary can only obtain the output probability of $V$, i.e., the adversary has no access to the training dataset ($D_{\text{secret}}$), the hyperparameters and weights of $V$. Similar to many previous works in model extraction (Orekondy et al., 2019; Pal et al., 2020; Barbalau et al., 2020), we make the assumption that the adversary has access to an unlabeled public dataset $D_{\text{pub}}$, which is assumed to have a different distribution from $D_{\text{secret}}$. Additionally, we assume that the adversary can prepare the attack in an offline stage and query $V$ in the online stage. 3.2 FRAMEWORK OVERVIEW As shown in Figure 2, we propose a model extraction framework based on semi-supervised learning. To reduce the query cost of the model extraction process, we combine a query-free self-supervised learning scheme (procedures 1 and 2) and a query-efficient supervised approach (procedure 3 and 4). The self-supervised scheme, namely semantic alignment, optimizes the substitute model to be self-consistent on $D_{\text{pub}}$, and does not require any query to $V$. For query-free self-supervised learning scheme, we develop offline semantic alignment that pre-trains $S$ to learn generalizable... features before interacting with the $V$, and online semantic alignment that assists the supervised approach during the iterative querying process. In the supervised approach, we focus on extracting more information with fewer queries. To this end, we develop a multi-encoder query generator that simultaneously processes several queries to synthesize an information-extracting query. Notably, we are the first to propose an offline stage for model extraction and develop self-supervised learning approach to pre-train the substitute model. Here, we present a brief outline of our framework, while the formal details are provided in the appendix. First, in the offline stage, we carry out semantic alignment procedure as follows. 1. **Offline Semantic Alignment**: In the offline semantic alignment process, the adversary pre-trains the substitute model $S$ on the public dataset $\mathcal{D}_{pub}$ using our proposed offline semantic consistency loss explained in the following section. In the online stage, SEEKER iterates through the following three procedures to train the substitute model. Here, we take the $i$-th iteration with the query number of $n_i$ as an example. 2. **Online Semantic Alignment**: In the online semantic alignment process, the adversary first generates pseudo labels of the unannotated public data, and then calculates the online semantic consistency loss. The pseudo labels are also involved in sampling generator inputs from $\mathcal{D}_{pub}$. 3. **Query Generation**: Here, the adversary uses an aggregated query generator to construct a set of the query inputs $\{x_{query}^{i,j} = G(x_{pub,1}^{i,j}, \ldots, x_{pub,m}^{i,j}) | x_{pub,1}^{i,j}, \ldots, x_{pub,m}^{i,j} \in \mathcal{D}_{pub}, j = 1, \ldots, n_i\}$. 4. **Supervised Train**: In the supervised training process, the adversary obtains the $i$-th query dataset $Q_i = \{(x_{query}^{i,j}, y_{query}^{i,j} = V(x_{query}^{i,j})) | j = 1, \ldots, n_i\}$ by querying the victim. 
Then the adversary updates the overall query dataset $Q = \bigcup_{k=1}^{i} Q_k$. The adversary calculates the supervised loss based on $Q$, and optimizes $S$ with the supervised loss and the online semantic consistency loss. After substitute training, the query generator $G$ is optimized based on $S$ and $Q_i$. ### 3.3 Semantic Alignment We propose a self-supervised scheme for the substitute model to acquire similar features to the victim model based on the public data. Our approach builds on the assumption that a well-trained victim model maintains semantic consistency, i.e., outputs similar representations for different images featuring the same object. Leveraging this semantic consistency as an additional prior, we propose a semantic alignment scheme that also aligns substitute model representations for semantically equivalent data. The semantically equivalent data are constructed by transforming the same public data with a combination of basic augmentations, such as horizontal flip and color jittering. These augmentations are aimed at simulating diverse environmental scenarios, such as varied camera angles or lighting conditions. With our approach, the substitute model demonstrates similar encoding patterns to the victim, as shown in Figure 3. We note that the semantic alignment learns generalizable features exclusively from the unannotated public data and does not require any additional query budget. Additionally, we have provided a detailed theoretical analysis in our appendix. We devise different variants of our semantic alignment scheme for both the offline and online model extraction procedures. #### 3.3.1 Offline Semantic Consistency During the offline semantic alignment process, we employ a Siamese network architecture (He et al., 2020; Chen & He, 2021) to enhance the semantic consistency of the substitute model. Specifically, we employ two substitute models that share the same weights to process two sets of semantically equivalent data, and subsequently align the outputs of the two models. For any unlabeled public data $x_{pub} \in \mathcal{D}_{pub}$, $S$ is trained with an NT-Xent loss (Chen et al., 2020): $$L_C = -\log \frac{\text{sim}(S(\text{aug}_M^1(x_{pub})), S(\text{aug}_M^2(x_{pub})))}{\sum_{x'_{pub} \in \mathcal{D}_{pub}, x'_{pub} \neq x_{pub}} \text{sim}(S(x_{pub}), S(x'_{pub}))},$$ where $\text{aug}_M^1(\cdot)$ and $\text{aug}_M^2(\cdot)$ are medium-level augmentations, and $\text{sim}(a, b)$ is a similarity function measuring the resemblance of $a$ and $b$. In the offline stage, we replace the fully connected layers of the substitute model $S$ with a projection head to obtain the latent representations of $x_{pub}$ encoded by $S$. Intuitively, the loss function maximizes the similarity between the representations for differently augmented views of the same data point, while pushing away the representations of different data points. We utilize Grad-CAM to visually demonstrate the impact of our offline semantic alignment scheme, as illustrated in Figure 3. Here, Grad-CAM produces a heat map for an image, which highlights the crucial regions that most activate a neural network when making predictions.

Figure 3: Comparisons of the activation heat maps between the substitute models with or without offline semantic alignment and the victim model.
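A minimal sketch of the offline semantic alignment objective in Eq. (1) is shown below: two augmented views of the same public image are encoded by the shared-weight substitute and pulled together, while other images in the batch are pushed apart. It is written in the standard NT-Xent form of Chen et al. (2020) with a temperature, which the displayed equation abbreviates; the projection head, augmentations, and temperature value are assumptions for illustration.

```python
# Hedged sketch of the offline semantic-alignment (NT-Xent) loss of Eq. (1).
import torch
import torch.nn.functional as F

def offline_semantic_alignment_loss(z1, z2, temperature=0.5):
    """z1, z2: (B, d) projections of two augmented views of the same public batch."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, d) unit-norm embeddings
    sim = z @ z.t() / temperature                              # pairwise cosine similarities
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                 # drop self-similarity
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)                       # positive pair is the target

# Usage (placeholders): z1 = proj(S(aug1(x_pub))); z2 = proj(S(aug2(x_pub)))
```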
We observe that the proposed training scheme enables $S$ to activate in a similar pattern to $V$ using only the public dataset $\mathcal{D}_{pub}$, even if $\mathcal{D}_{pub}$ follows a very different distribution (i.e., different classes of images) from the secret dataset $\mathcal{D}_{secret}$. Consequently, we see that adversaries can learn common encoding patterns from publicly available data, which can be utilized as \textit{a priori} knowledge for inferring the properties of secret neural networks trained on private data.

3.3.2 ONLINE SEMANTIC CONSISTENCY

Different from the offline stage, the predictions of the substitute model become much closer to those of the victim model in the online stage. Based on this observation, we propose online semantic alignment, which further improves the performance of the substitute model. Concretely, we first generate substitute output probabilities for weakly augmented public data. As the substitute model has similar predictions to the victim model, these probabilities can be used as pseudo labels for the public data. Then, we align the substitute outputs of strongly augmented data with the pseudo labels. Subsequently, we formulate the online semantic consistency loss as $L_U = ||S(\text{aug}_W(x_{pub})) - S(\text{aug}_S(x_{pub}))||_2$, where $\text{aug}_W(\cdot)$ is a weak augmentation and $\text{aug}_S(\cdot)$ is a strong augmentation.

3.4 AGGREGATED QUERY GENERATOR

To better leverage useful information from the public dataset under a limited number of queries, we propose an aggregated query generator that fuses multiple input data into a single information-extracting query. We consider three goals when designing the aggregated query generator: 1) \textbf{Aggregating}: the generator can effectively merge information from multiple public data, 2) \textbf{Informative}: the generator can produce information-extracting queries that minimize the gap between the substitute and the victim, 3) \textbf{Stealthy}: the synthesized queries maintain the structure of a natural image instead of collapsing into indistinguishable patterns. To achieve all three goals at the same time, we propose an aggregation architecture and three loss functions for the query generator.

3.4.1 AGGREGATION ARCHITECTURE

We design a multi-encoder network architecture to encode features from different public data. Concretely, to generate query input $x_{query}$, the generator aggregates $m$ input data $x_{pub,1}, ..., x_{pub,m}$ from the public dataset. We can formulate the query generator $G$ as follows:

$$x_{query} = G(x_{pub,1}, ..., x_{pub,m}) = f_{\text{dec}}([f^1_{\text{enc}}(x_{pub,1}), ..., f^m_{\text{enc}}(x_{pub,m})]) \oplus x_{pub,1},$$

where $f^i_{\text{enc}}(\cdot)$ denotes the $i$-th encoder, $f_{\text{dec}}(\cdot)$ the decoder, $[\cdot]$ the concatenation function, and $\oplus$ the element-wise addition operator. By applying Equation (2), we project the input data to the latent space with the respective encoders. Next, the latent representations of the public data are concatenated and the decoder maps the concatenated latent code back to the image space. Finally, we apply a shortcut connection and add the first input $x_{pub,1}$ to the output of the decoder. During the aggregation process, the generator regards $x_{pub,1}$ as the base image and integrates the knowledge from the other public data $x_{pub,2}, ..., x_{pub,m}$ into $x_{pub,1}$. Here, the multi-encoder design addresses goal 1, and the shortcut design addresses goal 3.
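The structure of Eq. (2) can be sketched as follows: each public image is encoded separately, the latents are concatenated and decoded back to image space, and the result is added to the base image via a shortcut. Layer shapes, channel counts, and names are illustrative assumptions rather than the paper's exact architecture.

```python
# Hedged sketch of the aggregated query generator in Eq. (2).
import torch
import torch.nn as nn

class AggregatedGenerator(nn.Module):
    def __init__(self, m: int = 3, latent_ch: int = 64):
        super().__init__()
        # One encoder per aggregated public image.
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, latent_ch, 3, padding=1), nn.ReLU())
            for _ in range(m)
        ])
        # Decoder maps the concatenated latents back to image space.
        self.decoder = nn.Sequential(
            nn.Conv2d(m * latent_ch, latent_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(latent_ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x_pubs):
        """x_pubs: list of m tensors (B, 3, H, W); x_pubs[0] is the base image."""
        latents = [enc(x) for enc, x in zip(self.encoders, x_pubs)]
        residual = self.decoder(torch.cat(latents, dim=1))
        return residual + x_pubs[0]   # shortcut keeps the query natural-looking

# x_query = AggregatedGenerator(m=3)([x1, x2, x3])
```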
We have also included an alternative design of the aggregated architecture in the Appendix. To sample more diversified public data as generator inputs, we design a sampling method based on the pseudo labels generated in online semantic alignment training step. In particular, we set the sampling probability for the $i$-th class as $p_i = \frac{-\log_e(freq_i)}{\sum_{j=1}^{n_c} -\log_e(freq_j)}$, where $freq_i$ denotes the frequency of the pseudo labels of the $i$-th class, and $n_c$ is the total number of classes. 3.4.2 Loss functions We design the reconstruction loss, the inconsistency loss and the diversity loss to optimize the aggregated query generator. 1) Reconstruction loss: To fully aggregate different input data from the public dataset, we design the reconstruction loss to measure how well the query reconstructs the input data as: $$L_R = \frac{1}{m} \sum_{j=1}^{m} \alpha_j \| G(x_{pub,1}, x_{pub,2}, ..., x_{pub,m}) - x_{pub,j} \|_2,$$ where $\alpha_j$ is a hyperparameter for balancing the stealthiness and information diversity in the generated query. We set $\alpha_1 = 1$ to preserve the basic appearance of $x_{pub,1}$ in the generated query, and $0 < \alpha_2 = ... = \alpha_m \leq 1$ to ensure the generated query encodes information from $x_{pub,j}$. Our reconstruction loss is aimed at achieving goal 1 and goal 3 at the same time. We illustrate the synthesized queries of our aggregated query generator in Figure 4. In each group of the images, the generator $G$ merges $x_{pub,1}$, $x_{pub,2}$, and $x_{pub,3}$ from $D_{pub}$ to craft the query image $G(x_{pub,1}, x_{pub,2}, x_{pub,3})$. As shown in Figure 4, the crafted query image largely resembles the original natural image $x_{pub,1}$ from $D_{pub}$, but has regional noises over some pixels that encode higher dimensional information from $x_{pub,2}$ and $x_{pub,3}$. Hence, we conclude that the aggregated query generator is effective in combining multiple input sources to produce information-rich queries that have similar visual patterns to natural images. 2) Inconsistency loss: The inconsistency loss can be formulated as: $$L_I = \exp(-L_{KL}(S(G(x_{pub,1}, x_{pub,2}, ..., x_{pub,m})), V(G(x_{pub,1}, x_{pub,2}, ..., x_{pub,m})))),$$ where $L_{KL}(.)$ is the KL divergence. The main objective of the inconsistency loss is to optimize the aggregated generator such that the generator produces queries upon which the substitute and the victim models produce different prediction results. We point out that, in the online supervised learning stage, only those queries that cause $S$ to behave differently from $V$ are constructive in further training $S$ to be more similar to $V$. 3) Diversity loss: To craft more diversified query inputs, we introduce the diversity loss that reduces the similarity between new query inputs for the next iteration and existing query inputs. For the $i$-th iteration, the diversity loss can be formulated as $$L_D = \text{sim}(S(x_{query}), S(G(x_{pub,1}^{i+1}, x_{pub,2}^{i+1}, ..., x_{pub,m}^{i+1}))).$$ To reduce the computational complexity of optimizing over the diversity loss, we construct a dynamic diversity set $D_{div}$ by selecting the most representative items from existing query inputs. Concretely, we add a query input to $D_{div}$ if and only if the cosine distances between this query input and existing items in $D_{div}$ are all above a threshold $T_{div}$. The inconsistency loss and diversity loss are proposed to fulfill goal 2. 
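As a rough illustration of how the three generator objectives above could be computed for one batch, the sketch below follows the formulas in Section 3.4.2. The use of mean-squared error in place of the $\ell_2$ norm, softmax probabilities inside the KL term, and cosine similarity for $\text{sim}(\cdot,\cdot)$ are simplifying assumptions on my part.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(query, inputs, alphas):
    # L_R: the query should stay close to the base image (alpha_1 = 1) while
    # still encoding traces of the other public inputs (0 < alpha_j <= 1).
    # MSE is used here as a stand-in for the l2 norm in the paper's formula.
    return sum(a * F.mse_loss(query, x) for a, x in zip(alphas, inputs)) / len(inputs)

def inconsistency_loss(substitute, victim, query):
    # L_I: reward queries on which substitute and victim disagree;
    # exp(-KL) becomes small when the KL divergence is large.
    s_log_prob = F.log_softmax(substitute(query), dim=1)
    v_prob = F.softmax(victim(query), dim=1)
    kl = F.kl_div(s_log_prob, v_prob, reduction="batchmean")
    return torch.exp(-kl)

def diversity_loss(substitute, new_query, diversity_set):
    # L_D: penalize similarity between the new query and the representative
    # old queries kept in the dynamic diversity set D_div.
    z_new = F.normalize(substitute(new_query), dim=1)
    z_old = F.normalize(substitute(diversity_set), dim=1)
    return (z_new @ z_old.t()).mean()
```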
The overall loss $L_{Gen}$ for training the generator can be formulated as $L_{Gen} = L_R + \lambda_I L_I + \lambda_D L_D$, where $\lambda_I$ and $\lambda_D$ are hyperparameters that determine the relative importance of each loss item.

3.5 Supervised Training

Based on the query samples crafted in Section 3.4 and the losses derived in Section 3.3.2, the overall loss for optimizing the substitute model can be formulated as $L_{Sub} = L_S + \lambda_U L_U$. Here, $L_S$ is the supervised loss defined as $L_S = L_{KL}(S(x_{query}), y_{query})$, $L_U$ is the online semantic consistency loss, and $\lambda_U$ is a hyperparameter to balance the loss terms. To avoid overfitting the substitute model within each iteration, we combine two simple yet effective approaches: weighted query sampling and loss-based training termination. First, weighted query sampling is proposed to balance the importance of old and new query datasets: queries from the $i$-th iteration are assigned a weight $w = \alpha^{-i}$ with $0 < \alpha < 1$. Second, we design a loss-based termination mechanism to automatically stop the substitute training process. Within each iteration, substitute training ceases when the average loss does not drop for several consecutive epochs.

4 Experiments

4.1 Experimental setting

Datasets and models. We use the CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), Tiny ImageNet, and ImageNet (Deng et al., 2009) datasets in our experiments, which are widely adopted by recent model extraction methods (Barbalau et al., 2020; Truong et al., 2021; Sanyal et al., 2022). We use ResNet-34 (He et al., 2016) as the model architecture of the victim. To evaluate the performance of different attacks across diverse model architectures, we test four widely-used classical model architectures for $S$: ResNet-34 (He et al., 2016), PyramidNet (Han et al., 2017), DenseNet (Huang et al., 2017), and WRN-28 (Zagoruyko & Komodakis, 2016). In Figure 3, Table 2, Table 3, and Table 4, we use CIFAR-10 as $D_{\text{secret}}$ and CIFAR-100 as $D_{\text{pub}}$.

Figure 5: Accuracy, fidelity, and ASR comparisons between SEEKER and the SOTA model extraction attacks under different query budgets.

Table 1: Accuracy, fidelity, and ASR of different model extraction attacks under a relatively low query budget. The mean accuracy, fidelity, and ASR along with the standard deviations are provided.

| Dataset | Attack | Acc (%) | Fid (%) | ASR (%) |
|------------------|-----------------|---------------|---------------|---------------|
| CIFAR-10 | Mosafi et al. | 26.19 (±1.32) | 26.15 (±1.29) | 32.98 (±2.78) |
| | Knockoff Nets | 75.66 (±1.07) | 76.56 (±1.09) | 46.89 (±3.24) |
| | ActiveThief | 75.23 (±0.92) | 76.41 (±0.93) | 45.04 (±3.38) |
| | Black-Box Ripper| 10.72 (±1.00) | 11.06 (±0.98) | 34.91 (±2.58) |
| | DFMS-SL | 52.57 (±1.13) | 53.34 (±1.13) | 45.93 (±2.43) |
| | SEEKER (ours) | **88.01 (±0.97)** | **88.94 (±0.98)** | **96.43 (±2.69)** |
| CIFAR-10 | Mosafi et al. | 24.47 (±0.88) | 24.55 (±0.89) | 30.32 (±2.76) |
| | Knockoff Nets | 83.66 (±1.12) | 84.28 (±1.08) | 50.63 (±3.01) |
| | ActiveThief | 84.07 (±1.04) | 84.96 (±1.07) | 51.80 (±3.38) |
| | Black-Box Ripper| 11.06 (±0.92) | 11.41 (±1.06) | 33.49 (±2.52) |
| | DFMS-SL | 54.35 (±0.95) | 55.83 (±0.97) | 47.93 (±2.79) |
| | SEEKER (ours) | **88.93 (±0.94)** | **89.26 (±0.85)** | **97.20 (±2.62)** |
| Tiny ImageNet | Mosafi et al. | 2.52 (±0.04) | 2.46 (±0.03) | 44.16 (±2.98) |
| | Knockoff Nets | 54.88 (±0.91) | 55.98 (±0.98) | 48.67 (±2.86) |
| | ActiveThief | 53.28 (±0.96) | 54.53 (±0.92) | 48.55 (±2.45) |
| | Black-Box Ripper| 1.24 (±0.05) | 1.58 (±0.05) | 53.55 (±2.30) |
| | DFMS-SL | 34.51 (±1.28) | 37.15 (±1.26) | 39.26 (±2.42) |
| | SEEKER (ours) | **60.04 (±1.39)** | **63.81 (±1.42)** | **87.25 (±2.97)** |
| CIFAR-10 | Mosafi et al. | 3.88 (±0.05) | 3.91 (±0.07) | 44.03 (±2.53) |
| | Knockoff Nets | 61.70 (±1.14) | 65.94 (±1.17) | 71.05 (±3.49) |
| | ActiveThief | 62.68 (±1.08) | 66.25 (±1.09) | 72.50 (±3.70) |
| | Black-Box Ripper| 1.33 (±0.03) | 1.69 (±0.04) | 49.95 (±2.83) |
| | DFMS-SL | 37.32 (±1.10) | 40.89 (±1.17) | 45.41 (±3.08) |
| | SEEKER (ours) | **72.23 (±1.01)** | **75.81 (±1.03)** | **95.35 (±2.72)** |

Evaluation metrics. Following existing methods, we use three metrics to evaluate model extraction attacks: accuracy (Acc), fidelity (Fid), and attack success rate (ASR). On top of the traditional prediction accuracy metric, we use fidelity (Jagielski et al., 2020) to measure how well the predictions of the substitute match those of the victim (including both correct and incorrect predictions). Given a clean dataset $C$, fidelity can be formulated as $\text{Fid} = \frac{1}{|C|} \sum_{c \in C} I(V_t(c) = S_t(c))$, where $I(\cdot)$ denotes the indicator function. Since an important application of model extraction is launching substitute-based adversarial attacks, we use ASR to measure the success of non-targeted black-box adversarial attacks, which is formulated as $\text{ASR} = \frac{1}{|C_V|} \sum_{c \in C_V} I(V_t(c + z) \neq V_t(c))$, where $C_V$ is the dataset correctly classified by $V$. To compare the query-efficiency of black-box adversarial attacks, we introduce the query-efficiency ratio (QER), which measures the number of successful attacks per query as $\text{QER} = \frac{1}{n_q} \sum_{c \in C_V} I(V_t(c + z) \neq V_t(c))$, where $n_q$ denotes the query number.

4.2 Comparisons with Model Extraction Attacks

To demonstrate the effectiveness of our proposed framework, we compare SEEKER against existing model extraction attacks.

Query-efficiency. We first compare the query-efficiency between our attack and five SOTA model extraction attacks, including the attack proposed by Mosafi et al. (Mosafi et al., 2019), Knockoff Nets (Orekondy et al., 2019), ActiveThief (Pal et al., 2020), Black-Box Ripper (Barbalau et al., 2020), and DFMS-SL (Sanyal et al., 2022). For ASR comparisons, we use the white-box adversarial attack MI-FGSM (Dong et al., 2018) to perform non-targeted attacks over all model extraction methods. To ensure fair comparisons, the same set of parameters is employed for MI-FGSM across all the model extraction attacks in each experimental configuration. As illustrated in Figure 5, SEEKER achieves a high level of accuracy, fidelity, and ASR with an extremely small query budget. We point out that our method can reduce the query budget by $5\times$ to achieve 75.7% accuracy, and by more than $50\times$ to achieve 65.5% ASR when compared to the SOTA methods. It is noteworthy that Knockoff Nets and ActiveThief only sample query inputs from the public dataset, thus reaching their optimal extraction performance when the query budget equals the size of the public dataset.
In contrast, the performance of our attack continues to rise with further querying. We further demonstrate the accuracy, fidelity, and ASR of the aforementioned model extraction attacks under a relatively small query budget of 100K in Table 1. We note that, since Black-Box Ripper and DFMS-SL require millions of queries in the query generation process, their performance under small query budgets is relatively poor. On the other hand, although Knockoff Nets and ActiveThief can obtain a relatively high accuracy with a small number of queries, its attack success rate can be less satisfactory. In contrast, SEEKER achieves high accuracy, fidelity, and ASR across different datasets. For example, SEEKER attains 12.35% higher in accuracy and 49.54% higher in ASR than Knockoff Nets when using CIFAR-10 as $\mathcal{D}_{\text{secret}}$ and CIFAR-100 as $\mathcal{D}_{\text{pub}}$. We have included more results across different public datasets in the Appendix. Best achievable accuracy. While our attack achieves remarkable query-efficiency compared to SOTA model extraction attacks, we note that the best achievable accuracy is an important metric for evaluating model extraction attacks, especially when the adversary has the capability of querying the victim model with an unlimited number of queries. Therefore, we perform different model extraction attacks under a significantly larger query budget to compare their highest attainable accuracy. As shown in Table 2, SEEKER achieves the highest accuracy amongst the SOTA model extraction attacks. Notably, our proposed attack attains 93.97% accuracy with a cost of 4M queries, whereas DFMS-SL requires 20M queries to extract a substitute of 93.96% accuracy. Model architecture generalization. We further compare the accuracy of different model extraction attacks using diverse model architectures for the substitute model. As shown in Table 3, SEEKER exhibits better generalization capability across different substitute model architectures than the other attacks based on public datasets. The results demonstrate that our approach can effectively extract a substitute even if the victim and substitute models do not have the same architecture. We also provide a more detailed analysis regarding model architecture generalization in the appendix. 4.3 Comparisons with Query-based Attacks We compare query-efficiency of our method against three query-based adversarial attacks: NES (Ilyas et al., 2018), Bandits (Ilyas et al., 2019), and Simulator attack (Ma et al., 2021) (SOTA). Figure 6 demonstrates QER of different black-box adversarial attacks by attacking 10K clean data from $\mathcal{C}$ for reaching similar levels of ASR with similar noise levels. Although the query-based attacks have higher QER for crafting a small number of adversarial samples, such attacks are easily outperformed by SEEKER when more than 1,600 samples are required. Furthermore, SEEKER is $1.9\times$ more query-efficient than Simulator attack when crafting adversarial 3000 samples, and $3.1\times$ as crafting 5,000 samples. Lastly, from Figure 6, we see that adversarial attacks based on substitute models (e.g., SEEKER) achieve asymptotically higher query-efficiency than query-based attacks when the number of successful adversarial samples increases. We include more comparisons between... Table 2: Optimal accuracy of different attacks. 
| Attack | Acc (%) | |-----------------|---------| | Knockoff Nets | 75.66 | | Black-Box Ripper| 90.00 | | DFMS-SL | 93.96 | | SEEKER (ours) | 93.97 | Table 3: Accuracy and ASR of different model extraction attacks with diverse model architectures. | Architecture | ResNet-50 | PyramidNet | DenseNet | WRN-28 | |------------------|-----------|------------|----------|--------| | Metric | Acc (%) | ASR (%) | Acc (%) | ASR (%) | | Knockoff Nets | 75.66 | 46.89 | 77.24 | 71.43 | | SEEKER (ours) | 88.01 | 96.43 | 87.43 | 91.10 | | | | | 66.13 | 46.84 | | | | | 77.82 | 29.52 | | | | | 88.56 | 96.73 | black-box adversarial attacks based on our method and query-based adversarial attacks in the appendix. 4.4 Ablation Studies We carefully designed a set of ablation experiments to examine the contributions of each of the components in our framework. Table 4 confirms that both semantic consistency based unsupervised training and aggregated query generator contribute to the overall performance of SEEKER. In particular, the offline unsupervised training procedure and our aggregated query generator contribute to a 10.7% rise in accuracy and a 47.41% rise in ASR, respectively. Overall, the combination of the proposed techniques improved accuracy by as much as 13.71% and ASR by 49.51%. 4.5 Penetrability Against Defense Mechanisms In this section, we evaluate the effectiveness of SEEKER against typical defense mechanisms, including active and passive approaches. We also provide a more detailed analysis in the appendix. Active defenses. Typical active defenses against model extraction include adding perturbations (Sha et al., 2023), truncating the top-k outputs (Orekondy et al., 2019) and rounding output scores (Tramer et al., 2016). Experimental results show that perturbation-based defense, while capable in reducing the performance of our attack, can also compromise the accuracy of the original victim model. For example, when Gaussian noise with a mean of 0 and a standard deviation of 0.5 is applied, the accuracy of our attack is decreased by 19%, accompanied with a significant 32% reduction in the accuracy of the victim model. For truncation and rounding, we consider an extreme setting where only the hard label is released by the victim, and show that our method can still achieve 0.92× of the original accuracy under this setting. Although active defensive approaches can degrade the performance of our attack by a small degree, we note that altering the prediction scores also limits the utility of the victim model for honest users, as discussed in (Chandrasekaran et al., 2020). Passive defenses. Existing passive defenses recognize model extraction attacks by analyzing the distribution of the query data. As a typical passive defense, PRADA (Juuti et al., 2019) computes the Shapiro-Wilk test statistic $W(D)$ to measure how the query input distribution deviates from the normal distribution. If $W(D)$ is below a threshold $\delta$, PRADA determines $D$ is from a model extraction attack. We set $\delta = 0.90$ in our experiments based on the original paper. Under a query budget of 100K, $W(D)$ for the query input distribution $D$ generated by SEEKER is 0.95, which is well above the threshold. The experimental results show that the distribution of the queries generated by our attack is only slightly deviated from normal distribution, and is able to circumvent PRADA detection. 
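To illustrate the passive-defense check described above, here is a rough sketch of a PRADA-style detector. Treating the query stream as a set of pairwise-distance statistics and applying the Shapiro-Wilk test via SciPy is my simplification of the original detector, not the exact procedure used in the paper's evaluation.

```python
import numpy as np
from scipy import stats

def prada_style_flag(queries, delta=0.90):
    """Flag a query stream as a suspected model extraction attack.

    queries: array of shape (n, d) containing flattened query inputs.
    PRADA-style detection tests whether distances between incoming queries
    look normally distributed; W(D) < delta triggers detection.
    """
    # Minimum distance of each new query to all previously seen queries.
    dists = []
    for i in range(1, len(queries)):
        d = np.linalg.norm(queries[:i] - queries[i], axis=1).min()
        dists.append(d)

    w_stat, _ = stats.shapiro(np.asarray(dists))
    return w_stat < delta, w_stat
```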
We point that the penetrability against distribution-based defense agree with the observation that the crafted queries mostly follow the distribution of a natural image, as demonstrated in Figure 4. 5 Conclusion In this paper, we propose a query-efficient model extraction framework based on two-stage semi-supervised public knowledge transfer. Our key insight is that unannotated public datasets can be of great help to query-efficient model extraction. In particular, public data can be used in both unsupervised substitute training and informative query generation. By carefully designing the overall architecture of the framework, we show that SEEKER is able to significantly outperform the SOTA model extraction techniques in terms of accuracy, ASR, and query-efficiency. 6 REPRODUCIBILITY STATEMENT The models and datasets for reproducing our results are introduced in the main manuscript, and more detailed experimental configurations can be found in the appendix. We point out that the datasets involved in our experimental evaluation are all publicly accessible. We also provide the code for SEEKER in our supplementary materials for better reproducibility. Please refer to the README file under the root directory for the introduction to our directory layout and detailed procedures to reproduce our results. REFERENCES Yang Bai, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, and Weiwei Guo. Improving query efficiency of black-box adversarial attack. In European Conference on Computer Vision, pp. 101–116. Springer, 2020. Antonio Barbalau, Adrian Cosma, Radu Tudor Ionescu, and Marius Popescu. Black-box ripper: Copying black-box models using generative evolutionary algorithms. Advances in Neural Information Processing Systems, 33:20120–20129, 2020. Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, and Songbai Yan. Exploring connections between active learning and model extraction. In 29th USENIX Security Symposium, USENIX Security 2020, August 12-14, 2020, pp. 1309–1326. USENIX Association, 2020. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pp. 1597–1607. PMLR, 2020. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 15750–15758. Computer Vision Foundation / IEEE, 2021. Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2020, Seattle, WA, USA, June 14-19, 2020, pp. 3008–3017. Computer Vision Foundation / IEEE, 2020. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 248–255. IEEE Computer Society, 2009. Gavin Weiguang Ding, Luyu Wang, and Xiaomeng Jin. AdverTorch v0.1: An adversarial robustness toolbox based on pytorch. arXiv preprint arXiv:1902.07623, 2019. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 9185–9193. 
Computer Vision Foundation / IEEE Computer Society, 2018. Dongyoon Han, Jiwhan Kim, and Junmo Kim. Deep pyramidal residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5927–5935, 2017. Charles R. Harris, K. Jarrod Millman, Stéfan van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with numpy. Nat., 585:357–362, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
RIEW6M9YoV
In Table 2, we can observe that the novelty of the generated molecules is low compared to those of the baselines (mainly on QM9). I would expect the authors to provide some explanation or intuitions about why the proposed model fails to produce novel graphs.
GRAPH GENERATION WITH $K^2$–TREES Yunhui Jang, Dongwoo Kim, Sungsoo Ahn Pohang University of Science and Technology {uni5510, dongwookim, sungsoo.ahn}@postech.ac.kr ABSTRACT Generating graphs from a target distribution is a significant challenge across many domains, including drug discovery and social network analysis. In this work, we introduce a novel graph generation method leveraging $K^2$–tree representation, originally designed for lossless graph compression. The $K^2$–tree representation encompasses inherent hierarchy while enabling compact graph generation. In addition, we make contributions by (1) presenting a sequential $K^2$–tree representation that incorporates pruning, flattening, and tokenization processes and (2) introducing a Transformer-based architecture designed to generate the sequence by incorporating a specialized tree positional encoding scheme. Finally, we extensively evaluate our algorithm on four general and two molecular graph datasets to confirm its superiority for graph generation. 1 INTRODUCTION Generating graph-structured data is a challenging problem in numerous fields, such as molecular design (Li et al., 2018; Maziarka et al., 2020), social network analysis (Grover et al., 2019), and public health (Yu et al., 2020). Recently, deep generative models have demonstrated significant potential in addressing this challenge (Simonovsky & Komodakis, 2018; Jo et al., 2022; Vignac et al., 2022). In contrast to the classic random graph models (Albert & Barabási, 2002; Erdős et al., 1960), these methods leverage powerful deep generative paradigms, e.g., variational autoencoders (Simonovsky & Komodakis, 2018), normalizing flows (Madhawa et al., 2019), and diffusion models (Jo et al., 2022). The graph generative models can be categorized into three types by the graph representation the models generate. First, an adjacency matrix is the most common representation (Simonovsky & Komodakis, 2018; Madhawa et al., 2019; Liu et al., 2021). Secondly, a string-based representation extracted from depth-first tree traversal on a graph can represent the graph as a sequence (Ahn et al., 2022; Goyal et al., 2020; Krenn et al., 2019). Finally, representing a graph as a composition of connected motifs, i.e., frequently appearing subgraphs, can preserve the high-level structural properties (Jin et al., 2018; 2020). We describe the representations on the left of Figure 1. Although there is no consensus on the best graph representation, two factors drive their development. First is the need for compactness to reduce the complexity of graph generation and simplify the search space over graphs. For example, to generate a graph with $N$ vertices and $M$ edges, the adjacency matrix requires specifying $N^2$ elements. In contrast, the string representation typically requires specifying $O(N + M)$ elements, leveraging the graph sparsity (Ahn et al., 2022; Goyal et al., 2020; Segler et al., 2018). Motif representations also save space by representing frequently appearing subgraphs by basic building blocks (Jin et al., 2018; 2020). The second factor driving the development of new graph representations is the presence of a hierarchy in graphs. For instance, community graphs possess underlying clusters, molecular graphs consist of distinct chemical fragments, and grid graphs exhibit a repetitive coarse-graining structure. 
In this context, motif representations (Jin et al., 2018; 2020) address the presence of a hierarchy in graphs; however, they are limited to a fixed vocabulary of motifs observed in the dataset or a specific domain. Contribution. In this paper, we propose a novel graph generation framework, coined Hierarchical Graph Generation with $K^2$–Tree (HGGT), which can represent not only non-attributed graphs but also attributed graphs in a compact and hierarchical way without domain-specific rules. The right-side table of Figure 1 emphasizes the benefits of HGGT. Since the $K^2$–tree recursively redefines Figure 1: (Left) Various representations used for graph generation. (Right) Comparing graph generative methods in terms of used graph representation. The comparison is made with respect to a method being hierarchical (H), able to handle attributed graphs (A), and domain-agnostic (DA). a graph into $K^2$ substructures, our representation becomes more compact and enables consideration of hierarchical structure in adjacency matrices.\footnote{This differs from the conventional hierarchical community structure. We provide the discussion in Appendix H.} Specifically, we model the process of graph generation as an autoregressive construction of the $K^2$–tree. To this end, we design a sequential $K^2$–tree representation that recovers the original $K^2$–tree when combined sequentially. In particular, we propose a two-stage procedure where (1) we prune the $K^2$–tree to remove redundancy arising from the symmetric adjacency matrix for undirected graphs and (2) subsequently flatten and tokenize the $K^2$–tree into a sequence to minimize the number of decisions required for the graph generation. We employ the Transformer architecture (Vaswani et al., 2017) to generate the sequential $K^2$–tree representation of a graph. To better incorporate the positional information of each node in a tree, we design a new positional encoding scheme specialized to the $K^2$–tree structure. Specifically, we represent the positional information of a node by its pathway from the root node; the proposed encoding enables the reconstruction of the full $K^2$–tree given just the positional information. To validate the effectiveness of our algorithm, we test our method on popular graph generation benchmarks across six graph datasets: Community, Enzymes (Schomburg et al., 2004), Grid, Planar, ZINC (Irwin et al., 2012), and QM9 (Ramakrishnan et al., 2014). Our empirical results confirm that HGGT significantly outperformed existing graph generation methods on five out of six benchmarks, verifying the capability of our approach for high-quality graph generation across diverse applications. To summarize, our key contributions are as follows: - We propose a new graph generative model based on adopting the $K^2$–tree as a compact, hierarchical, and domain-agnostic representation of graphs. - We introduce a novel, compact sequential $K^2$–tree representation obtained from pruning, flattening, and tokenizing the $K^2$–tree. - We propose an autoregressive model to generate the sequential $K^2$–tree representation using Transformer architecture with a specialized positional encoding scheme. - We validate the efficacy of our framework by demonstrating state-of-the-art graph generation performance on five out of six graph generation benchmarks. 
## 2 RELATED WORK **Graph representations for graph generation.** The choice of graph representation is a crucial aspect of graph generation, as it significantly impacts the efficiency and allows faithful learning of the generative model. The most widely used one is the adjacency matrix, which simply encodes the pairwise relationship between nodes (Jo et al., 2022; Vignac et al., 2022; You et al., 2018; Liao et al., 2019; Shi et al., 2020; Luo et al., 2021; Kong et al., 2023; Chen et al., 2023). However, several methods (Vignac et al., 2022; You et al., 2018; Jo et al., 2022) suffer from the high complexity in generating the adjacency matrix, especially for large graphs. Figure 2: $K^2$–tree with $K = 2$. The $K^2$–tree describes the hierarchy of the adjacency matrix iteratively being partitioned to $K \times K$ submatrices. It is compact due to summarizing any zero-filled submatrix with a size larger than $1 \times 1$ (shaded in grey) by a leaf node $u$ with label $x_u = 0$. To address this issue, researchers have developed graph generative models that employ alternative graph representations such as motif-based representations and string-based representations. For instance, Ahn et al. (2022); Segler et al. (2018) proposed to generate molecule-specific string representations, and Jin et al. (2018; 2020); Yang et al. (2021) suggested generative models that extract reasonable fragments from data and generate the set of motifs. However, these methods rely on domain-specific knowledge and are restricted to molecular data. Lossless graph compression. Lossless graph compression (Besta & Hoefler, 2018) aims to reduce the size and complexity of graphs while preserving their underlying structures. Specifically, several works (Brisaboa et al., 2009; Raghavan & Garcia-Molina, 2003) introduced hierarchical graph compression methods that compress graphs leveraging their hierarchical structure. In addition, Bouritsas et al. (2021) derived the compressed representation using a learning-based objective. 3 $K^2$–TREE REPRESENTATION OF A GRAPH In this section, we introduce the $K^2$–tree as a hierarchical and compact representation of graphs, as originally proposed for graph compression (Brisaboa et al., 2009). In essence, the $K^2$–tree is a $K^2$-ary ordered tree that recursively partitions the adjacency matrix into $K \times K$ submatrices. Its key idea is to summarize the submatrices filled only with zeros with a single tree-node, exploiting the sparsity of the adjacency matrix. From now on, we indicate the tree-node as a node. The representation is hierarchical, as it associates each parent and child node pair with a matrix and its corresponding submatrix, respectively, as described in Figure 2. To be specific, we consider the $K^2$–tree representation $(T, X)$ of an adjacency matrix $A$ as a $K^2$-ary tree $T = (V, E)$ associated with binary node attributes $X = \{x_u : u \in V\}$. Every non-root node is uniquely indexed as $(i, j)$-th child of its parent node for some $i, j \in \{1, \ldots, K\}$. The tree $T$ is ordered so that every $(i, j)$-th child node is ranked $K(i - 1) + j$ among its siblings. Then the $K^2$–tree satisfies the following conditions: - Each node $u$ is associated with a submatrix $A^{(u)}$ of the adjacency matrix $A$. - If the submatrix $A^{(u)}$ for a node $u$ is filled only with zeros, $x_u = 0$. Otherwise, $x_u = 1$. - A node $u$ is a leaf node if and only if $x_u = 0$ or the matrix $A^{(u)}$ is a $1 \times 1$ matrix. 
By default, we assume the number of nodes in the original graph to be the power of $K^2$. Figure 3: Illustration of the sequential representation for $K^2$–tree. The shaded parts of the adjacency matrix $A$ and the $K^2$–tree $\mathcal{T}$ denote redundant parts, which are further pruned, while the purple-colored parts of $A$ and $\mathcal{T}$ denote non-redundant parts. Also, same-colored tree-nodes of pruned $K^2$–tree are grouped and tokenized into the same colored parts of the sequence $y$. - Let $B_{1,1}, \ldots, B_{K,K}$ denote the $K \times K$ partitioning of the matrix $A^{(u)}$ with $i,j$ corresponding to row- and column-wise order, respectively. The child nodes $v_{1,1}, \ldots, v_{K,K}$ of the tree-node $u$ are associated with the submatrices $B_{1,1}, \ldots, B_{K,K}$, respectively. The generated $K^2$–tree is a compact description of graph $G$ as any node $u$ with $x_u = 0$ and $d_u < \max_u d_u$ where $d_u$ is the distance from the root. summarizes a large submatrix filled only with zeros. In the worst-case scenario, the size of the $K^2$–tree is $MK^2(\log_{K^2}(N^2/M) + O(1))$ (Brisaboa et al., 2009), where $N$ and $M$ denote the number of nodes and edges in the original graph, respectively. This constitutes a significant improvement over the $N^2$ size of the full adjacency matrix. Additionally, the $K^2$–tree is hierarchical ensuring that (1) each tree node represents the connectivity between a specific set of nodes, and (2) nodes closer to the root correspond to a larger set of nodes. We emphasize that the nodes associated with submatrices overlapping with the diagonal of the original adjacency matrix indicate intra-connectivity within a group of nodes. In contrast, the remaining nodes describe the interconnectivity between two distinct sets of nodes. We also describe the detailed algorithms for constructing a $K^2$–tree from a given graph $G$ and recovering a graph from the $K^2$–tree in Appendices A and B, respectively. It is crucial to note that the ordering of the nodes in the adjacency matrix influences the $K^2$–tree structure. Inspired by Diamant et al. (2023), we adopt Cuthill-McKee (C-M) ordering as our ordering scheme. We empirically discover that C-M ordering (Cuthill & McKee, 1969) provides the most compact $K^2$–tree.\footnote{We provide the results in Section 5.3.} Our explanation is that the C-M ordering is specifically designed to align the non-zero elements of a matrix near its diagonal so that there is a higher chance of encountering large submatrices filled only with zeros, which can be efficiently summarized in the $K^2$–tree representation. 4 Hierarchical Graph Generation with $K^2$–Trees In this section, we present our novel method, hierarchical graph generation with $K^2$–trees (HGGT), exploiting the hierarchical and compact structure of the $K^2$–tree representation of a graph. In detail, we transform the $K^2$–tree into a highly compressed sequence through a process involving pruning and tokenization. Subsequently, we employ a Transformer enhanced with tree-based positional encodings, for the autoregressive generation of this compressed sequence. 4.1 Sequential $K^2$–Tree Representation Here, we propose an algorithm to flatten the $K^2$–tree into a sequence, which is essential for the autoregressive generation of the $K^2$–tree. In particular, we aim to design a sequential representation that is even more compact than the $K^2$–tree to minimize the number of decisions required for the generation of the $K^2$–tree. 
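As a concrete illustration of the construction described in Section 3, the following small Python sketch (not the authors' code) recursively partitions an adjacency matrix into $K \times K$ blocks, emitting a leaf labeled 0 for an all-zero block and recursing otherwise. The nested-dictionary tree format and the assumption that the matrix has already been padded to a power of $K$ are my own simplifications.

```python
import numpy as np

def build_k2_tree(A, K=2):
    """Return a nested-dict K^2-tree for a square 0/1 adjacency matrix A.

    Each tree node is {'label': 0 or 1, 'children': [...]}. All-zero blocks
    and 1x1 blocks become leaves; other blocks are split into K x K sub-blocks.
    Assumes A.shape[0] is a power of K (pad with zero rows/columns otherwise).
    """
    n = A.shape[0]
    label = int(A.any())
    node = {"label": label, "children": []}
    if label == 0 or n == 1:          # leaf: empty block or single entry
        return node
    step = n // K
    for i in range(K):                # row-major order over the K x K sub-blocks
        for j in range(K):
            block = A[i * step:(i + 1) * step, j * step:(j + 1) * step]
            node["children"].append(build_k2_tree(block, K))
    return node
```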
To this end, we propose (1) pruning $K^2$–tree by removing redundant nodes, (2) flattening the pruned $K^2$–tree into a sequence, and (3) applying tokenization based on the $K^2$–tree structure. We provide an illustration of the overall process in Figure 3. Figure 4: Illustration of the tree-node positions of $K^2$–tree. The shaded parts of the adjacency matrix denote redundant parts, e.g., $p_u < q_u$. Additionally, colored elements correspond to tree-nodes of the same color and the same-colored tree-edges signify the root-to-target downward path. Blue and red tuples denote the order in the first and second levels, respectively. The tree node $u$ is non-redundant as $p_u > q_u$ while $v$ is redundant as $p_v < q_v$. Pruning the $K^2$–tree. To obtain the pruned $K^2$–tree, we identify and eliminate redundant nodes due to the symmetry of the adjacency matrix for undirected graphs. In particular, without loss of generality, such nodes are associated with submatrices positioned above the diagonal since they mirror the counterparts located below the diagonal. To this end, we now describe a formula to identify redundant nodes based on the position of a submatrix $A^{(u)}$, tied to a specific node $u$ at depth $L$, within the adjacency matrix $A$. Let $v_0, v_1, \ldots, v_L$ be a sequence of nodes representing a downward path from the root node $r = v_0$ to the node $u = v_L$. With $(i_{v_\ell}, j_{v_\ell})$ denoting the order of $v_\ell$ among its $K \times K$ siblings, the node position can be represented as $\text{pos}(u) = ((i_{v_1}, j_{v_1}), \ldots, (i_{v_L}, j_{v_L}))$. Note that node $u$ at depth $L$ corresponds to an element of $K^L \times K^L$ partitions of the adjacency matrix $A$. The row and column indexes of the submatrix $A^{(u)}$ are derived as $(p_u, q_u) = (\sum_{\ell=1}^{L} K^{L-\ell}(i_{v_\ell} - 1) + 1, \sum_{\ell=1}^{L} K^{L-\ell}(j_{v_\ell} - 1) + 1)$ as illustrated in Figure 4. As a result, we eliminate any node associated with a submatrix above the diagonal, i.e., we remove node $u$ when $p_u < q_u$. Consequently, the pruned $K^2$–tree maintains only the nodes associated with submatrices devoid of redundant nodes, i.e., those containing elements of the adjacency matrix positioned at the diagonal or below the diagonal. Notably, following this pruning process, the $K^2$–tree no longer adheres to the structure of a $K \times K$-ary tree. Additionally, consider a non-leaf node $u$ is associated with a submatrix $A^{(u)}$ that includes any diagonal elements of the adjacency matrix $A$. Then the node $u$ possess $K(K + 1)/2$ child nodes after pruning $K(K - 1)/2$ child nodes associated with the redundant submatrices. Otherwise, the non-leaf node $u$ remains associated with $K \times K$ child nodes. Note that our framework can be extended to directed graphs by omitting the pruning process. Flattening and tokenization of the pruned $K^2$–tree. Next, we explain how to obtain a sequential representation of the pruned $K^2$–tree based on flattening and tokenization. Our idea is to flatten a $K^2$–tree as a sequence of node attributes $\{x_u : u \in V\}$ using breadth-first traversal and then to tokenize the sequence by grouping the nodes that share the same parent node, i.e., sibling nodes. For this purpose, we denote the sequence of nodes obtained from a breadth-first traversal of non-root nodes in the $K^2$–tree as $u_1, \ldots, u_{|V|-1}$, and the corresponding sequence of node attributes as $x = (x_1, \ldots, x_{|V|-1})$. 
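A rough sketch of the pruning rule together with the subsequent breadth-first flattening and sibling tokenization might look as follows (see the worked steps below). It reuses the nested-dict tree format from the previous sketch, and tracking the $(p_u, q_u)$ offsets with explicit row/column indices is my own bookkeeping choice rather than the paper's implementation.

```python
from collections import deque

def flatten_and_tokenize(root, n, K=2):
    """BFS-flatten a K^2-tree into sibling-group tokens, pruning redundant nodes.

    root: nested-dict tree from build_k2_tree; n: padded matrix size.
    Returns a list of tokens, one tuple of child labels per expanded node,
    skipping children whose block lies strictly above the diagonal (p < q).
    """
    tokens = []
    queue = deque([(root, 0, 0, n)])          # (node, row offset p, col offset q, block size)
    while queue:
        node, p, q, size = queue.popleft()
        if not node["children"]:
            continue
        step = size // K
        token = []
        for idx, child in enumerate(node["children"]):
            i, j = divmod(idx, K)             # order of the child among its siblings
            cp, cq = p + i * step, q + j * step
            if cp < cq:                        # redundant upper-triangular block: prune
                continue
            token.append(child["label"])
            # Only nonzero blocks larger than 1x1 are expanded further.
            if child["label"] == 1 and step > 1:
                queue.append((child, cp, cq, step))
        tokens.append(tuple(token))
    return tokens
```

Diagonal-straddling parents yield tokens with $K(K+1)/2$ entries, while off-diagonal parents yield tokens with $K^2$ entries, matching the two token semantics discussed next.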
It is important to note that sibling nodes sharing the same parent appear sequentially in the breadth-first traversal. Next, by grouping the sibling nodes, we tokenize the sequence $x$. As a result, we obtain a sequence $y = (y_1, \ldots, y_T)$ where each element is a token representing a group of attributes associated with sibling nodes. For example, the $t$-th token corresponding to a group of $K^2$ sibling nodes is represented by $y_t = (x_{v_{1,1}}, \ldots, x_{v_{K,K}})$, where $v_{1,1}, \ldots, v_{K,K}$ share the same parent node $u$. Such tokenization allows representing the whole $K^2$–tree using $M(\log_{K^2}(N^2/M) + O(1))$ space, where $N$ and $M$ denote the number of nodes and edges in the original graph, respectively.

We highlight that the number of elements in each token $y_t$ may vary since the pruned $K^2$–tree is no longer a $K \times K$-ary tree, as mentioned above. With this in consideration, we generate a vocabulary of $2^{K^2} + 2^{K(K+1)/2}$ potential configurations for each token $y_t$. This vocabulary size is small in practice since we set the value of $K$ to be small, e.g., setting $K = 2$ induces a vocabulary size of 24.

Figure 5: An example of featured $K^2$–tree representation. The shaded parts of the adjacency matrix and $K^2$–tree denote the redundant parts. The black-colored tree-nodes denote the normal tree-nodes with binary attributes, while other-colored feature elements in the adjacency matrix $A$ denote the same-colored featured tree-nodes and sequence elements. The node features (i.e., C and N) and edge feature (i.e., single bond $\rightarrow$) of the molecule are represented within the leaf nodes.

In particular, we remark that a token with $K(K + 1)/2$ elements carries different semantics from another token with $K^2$ elements. The former corresponds to a submatrix situated on the adjacency matrix's diagonal, thus indicating connectivity within a set of nodes. In contrast, the latter relates to a submatrix describing connectivity between pairs of node sets. This supports our decision to assign distinct values to a token with $K(K + 1)/2$ elements and another with $K^2$ elements, even when the tokens might represent the same combination of node features in the unpruned tree.

Generating featured graphs. We also extend our HGGT to graphs with node and edge-wise features, e.g., molecular graphs. At a high level, we apply our algorithm to the featured adjacency matrix, where each diagonal element corresponds to a node feature and each non-diagonal element corresponds to an edge feature. Node attributes of leaf nodes in the $K^2$–tree correspond to node and edge features, while attributes of non-leaf nodes are the same as in the non-attributed $K^2$–trees (i.e., ones and zeros). See Figure 5 for an illustration and Appendix C for a complete description.

4.2 Generating $K^2$–tree with Transformer and $K^2$–tree positional encoding

We describe our algorithm to generate the sequential $K^2$–tree representation $y = (y_1, \ldots, y_T)$. We utilize the masked Transformer (Vaswani et al., 2017) to make predictions on $p_\theta(y_t | y_{t-1}, \ldots, y_1)$. To improve the model's understanding of the tree structure, we devise a tree positional encoding. We also offer an algorithm to construct the $K^2$–tree from the sequence generated by the Transformer.

Transformer with $K^2$–tree positional encoding. We first introduce the Transformer architecture to parameterize the distribution $p_\theta(y_t | y_{t-1}, \ldots, y_1)$ for autoregressive generation.
Briefly, the model is trained with self-attention, and during inference, it generates the sequence one token at a time, relying on the previously generated sequence. To account for tree structural information, we incorporate tree-positional encodings for each time-step $t$. During training, we mask the attention layer to ensure that predictions at each step are not influenced by future tokens of the sequence. The objective function is maximum likelihood, denoted by $\max \log p(y)$, where $p(y) = p(y_1)\Pi_{t=2}^{T}p(y_t | y_{1:t-1})$. This objective aims to maximize the probability of predicting the next token correctly based on the preceding tokens. For inference, we begin the process with a begin-of-sequence (BOS) token as the input to our trained Transformer decoder. The model then computes the distribution of potential tokens for the next step, denoted by $p(y_t | y_{1:t-1})$, and the next token is sampled from this distribution. This token is appended to the input sequence, and the extended sequence is fed back into the model to generate the subsequent token. This iterative procedure is terminated when a predefined maximum length is reached or an end-of-sequence (EOS) token emerges. To enhance the input $y_t$, we incorporate the positional encoding for $u$. As outlined in Section 4.1, the node attributes in $y_t$ are associated with child nodes of a particular node $u$. Therefore, the encoding is based on the downward path from the root node $r = v_0$ to the node $u = v_L$, represented as $(v_0, \ldots, v_L)$. In this context, the order of $v_e$ amongst its siblings in the non-pruned $K^2$–tree is denoted as a tuple $(i_{v_e}, j_{v_e})$. Subsequently, we further update the input feature $y_t$ with positional Table 1: Generic graph generation performance. The baseline results are from prior works (Jo et al., 2022; Liao et al., 2019; Martinkus et al., 2022; Luo et al., 2022) or public codes (marked by *). For each metric, the best number is highlighted in **bold** and the second-best number is _underlined_. | Method | Community-small | Planar | Enzymes | Grid | |------------|-----------------|--------|---------|------| | | Deg. Clus. Orb. | Deg. Clus. Orb. | Deg. Clus. Orb. | Deg. Clus. Orb. | | GraphVAE | 0.350 0.980 0.540 | - - - | 1.369 0.629 0.191 | 1.619 **0.000** 0.919 | | GraphRNN | 0.080 0.120 0.040 | 0.005 0.278 1.254 | 0.017 0.062 0.046 | 0.064 0.043 0.021 | | GNF | 0.200 0.200 0.110 | - - - | - - - | - - - | | GRAN* | 0.005 0.142 0.090 | 0.001 0.043 0.001 | 0.023 0.031 0.169 | 0.001 **0.004** 0.002 | | EDP-GNN | 0.053 0.144 0.026 | - - - | 0.023 0.268 0.082 | 0.455 0.238 0.328 | | GraphGen* | 0.075 0.065 0.014 | 1.762 1.423 1.640 | 0.146 0.079 0.054 | 1.550 0.017 0.860 | | GraphAF | 0.180 0.200 0.020 | - - - | 1.669 1.283 0.266 | - - - | | GraphDF | 0.060 0.120 0.030 | - - - | 1.503 1.283 0.266 | - - - | | SPECTRE | - - - | 0.010 0.067 0.010 | - - - | - - - | | GDSS | 0.045 0.086 0.007 | 0.250 0.393 0.587 | 0.026 0.061 **0.009** | 0.111 0.005 0.070 | | DiGress* | 0.012 0.025 0.002 | **0.000** 0.002 0.008 | 0.011 0.039 0.010 | 0.016 **0.000** 0.004 | | GDSM | 0.011 0.015 **0.001** | - - - | 0.013 0.088 0.010 | 0.002 **0.000** **0.000** | HGGT (ours) **0.001** **0.006** **0.003** **0.000** **0.001** **0.000** **0.005** **0.017** **0.000** **0.000** **0.000** Figure 6: Generated samples for Community-small (top), and Grid (bottom) datasets. 
encoding, which is represented as \( \text{PE}(u) = \sum_{\ell=1}^{L} \phi_\ell(i_{v_\ell}, j_{v_\ell}) \), where \( \phi \) denotes the embedding function that converts the order tuple into vector representations and \((i_{v_1}, j_{v_1}), \ldots, (i_{v_L}, j_{v_L})\) is the sequence of orders of a downward path from \( r \) to \( u \). **Constructing \( K^2 \)-tree from the sequential representation.** We next explain the algorithm to recover a \( K^2 \)-tree from its sequential representation \( y \). In particular, we generate the \( K^2 \)-tree simultaneously with the sequence to incorporate the tree information for each step of the autoregressive generation. The algorithm begins with an empty tree containing only a root node and iteratively expands each “frontier” node based on the sequence of the decisions made by the generative model. To facilitate a breadth-first expansion approach, the algorithm utilizes a first-in-first-out (FIFO) queue, which contains node candidates to be expanded. To be specific, our algorithm initializes a \( K^2 \)-tree \( T = (\{r\}, \emptyset) \) with the root node \( r \) associated with the node attribute \( x_r = 1 \). It also initializes the FIFO queue \( Q \) with \( r \). Then at each \( t \)-th step, our algorithm expands the node \( u \) popped from the queue \( Q \) using the token \( y_t \). To be specific, for each node attribute \( x \) in \( y_t \), our algorithm adds a child node \( v \) with \( x_v = x \). If \( x = 1 \) and the size of \( A(v) \) is larger than \( 1 \times 1 \), the child node \( v \) is inserted into the queue \( Q \). This algorithm is designed to retrieve the pruned tree, which allows the computation of positional data derived from the \( y_t \) information. ## 5 Experiment ### 5.1 Generic Graph Generation **Experimental setup.** We first validate the general graph generation performance of our HGGT on four popular graph benchmarks: (1) **Community-small**, 100 community graphs, (2) **Planar**, 200 Table 2: Molecular graph generation performance. The baseline results are from prior works (Jo et al., 2022; Luo et al., 2022) or obtained by running the open-source codes (denoted by *). The best results are highlighted in **bold** and the second best results are underlined. | Method | QM9 | ZINC250k | |------------|--------------|----------------| | | Val. ↑ | NSPDK ↓ | FCD ↓ | Uniq. ↑ | Nov. ↑ | Val. ↑ | NSPDK ↓ | FCD ↓ | Uniq. ↑ | Nov. ↑ | | EDP-GNN | 47.52 | 0.005 | 2.68 | **99.25** | 86.58 | 82.97 | 0.049 | 16.74 | 99.79 | 100 | | MoFlow | 91.36 | 0.017 | 4.47 | 98.65 | 94.72 | 63.11 | 0.046 | 20.93 | **99.99** | 100 | | GraphAF | 74.43 | 0.020 | 5.27 | 88.64 | 86.59 | 68.47 | 0.044 | 16.02 | 98.64 | 100 | | GraphDF | 93.88 | 0.064 | 10.93 | 98.58 | **98.54** | 90.61 | 0.177 | 33.55 | 99.63 | 100 | | GraphEBM | 8.22 | 0.030 | 6.14 | 97.90 | 97.01 | 5.29 | 0.212 | 35.47 | 98.79 | 100 | | GDSS | 95.72 | 0.003 | 2.9 | 98.46 | 86.27 | 97.01 | 0.019 | 14.66 | 99.64 | 100 | | DiGress* | 99.01 | 0.001 | **0.25** | 96.34 | 35.46 | **100** | 0.042 | 16.54 | 99.97 | 100 | | GDSM | **99.90** | 0.003 | 2.65 | - | - | 92.70 | 0.017 | 12.96 | - | - | | HGGT (ours)| **99.22** | **0.000** | **0.40** | 95.65 | 24.01 | 92.87 | **0.001** | **1.93** | **99.97** | **99.83** | planar graphs, (3) Enzymes (Schomburg et al., 2004), 587 protein tertiary structure graphs, and (4) Grid, 100 2D grid graphs. 
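Referring back to the reconstruction procedure of Section 4.2, the queue-based decoding could be sketched roughly as follows; it mirrors the token format of the earlier sketches and is illustrative rather than the authors' implementation (position bookkeeping, needed to map leaves back to matrix entries, is omitted for brevity).

```python
from collections import deque

def tree_from_tokens(tokens, n, K=2):
    """Rebuild a (pruned) K^2-tree from a token sequence, mirroring generation.

    tokens: iterable of tuples of child labels, in the order produced by the
    autoregressive model; n: padded adjacency-matrix size. Returns the root
    of a nested-dict tree {'label', 'children'}.
    """
    root = {"label": 1, "children": []}
    queue = deque([(root, n)])            # FIFO of (frontier node, block size)
    for token in tokens:
        if not queue:
            break                         # more tokens than expandable nodes
        parent, size = queue.popleft()
        step = size // K
        for label in token:
            child = {"label": int(label), "children": []}
            parent["children"].append(child)
            # Only nonzero blocks larger than 1x1 are expanded later.
            if child["label"] == 1 and step > 1:
                queue.append((child, step))
    return root
```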
Following baselines, we adopt maximum mean discrepancy (MMD) to compare three graph property distributions between generated graphs and test graphs: degree (Deg.), clustering coefficient (Clus.), and 4-node-orbit counts (Orb.). We conduct all the experiments using a single RTX 3090 GPU. The detailed descriptions of our experimental setup are in Appendix D. Baselines. We compare our HGGT with twelve graph generative models: GraphVAE (Simonovsky & Komodakis, 2018), GraphRNN (You et al., 2018), GNF Liu et al. (2019), GRAN (Liao et al., 2019), EDP-GNN (Niu et al., 2020), GraphGen (Goyal et al., 2020), GraphAF (Shi et al., 2020), GraphDF (Luo et al., 2021), SPECTRE (Martinkus et al., 2022), GDSS (Jo et al., 2022), DiGress (Vignac et al., 2022), and GDSM (Luo et al., 2022). A detailed implementation description is in Appendix E. Results. Table 1 shows the experimental results. We observe that HGGT outperforms all baselines on all datasets. Note that our model consistently outperforms all baselines regardless of the graph sizes, indicating better generalization performance across various environments. In particular, we observe how the performance of HGGT is extraordinary for Grid. We hypothesize that HGGT better captures the hierarchical structure and repetitive local connectivity of the grid graphs than the other baselines. We also provide visualizations of the generated graphs in Figure 6. 5.2 Molecular graph generation Experimental setup. To test the ability of HGGT on featured graphs, we further conduct an evaluation of molecule generation tasks. We use two molecular datasets: QM9 (Ramakrishnan et al., 2014) and ZINC250k (Irwin et al., 2012). Following the previous work (Jo et al., 2022), we evaluate 10,000 generated molecules using five metrics: (a) validity (Val.), (b) neighborhood subgraph pairwise distance kernel (NSPDK), (c) Frechet ChemNet Distance (FCD), (d) uniqueness (Uniq.), and (e) novelty (Nov.). Note that NSPDK and FCD are measured between the generated samples and the test set. The validity, uniqueness, and novelty metrics are measured within the generated samples. Baselines. We compare HGGT with eight deep graph generative models: EDP-GNN (Niu et al., 2020), MoFlow (Zang & Wang, 2020), GraphAF (Shi et al., 2020), GraphDF (Luo et al., 2021), GraphEBM (Liu et al., 2021), GDSS (Jo et al., 2022), DiGress (Vignac et al., 2022), and GDSM (Luo et al., 2022). We provide a detailed implementation description in Appendix E. Results. The experimental results are reported in Table 2. We observe that HGGT showed competitive results on all the baselines on most of the metrics. The results suggest that the model can generate chemically valid features, i.e., atom types, accordingly, along with the structure of the graphs. In particular, for the ZINC250k dataset, we observe a large gap between our method and the baselines in NSPDK and FCD scores while showing competitive performance in the other metrics. Since FCD and NSPDK measure the similarity between molecular features and subgraph structures, respectively, HGGT can generate similar features and subgraphs observed in the real molecules. 5.3 Ablation studies Time complexity. We conduct experiments to measure the inference time of the proposed algorithm. The results are presented in the upper left table of Figure 7, where we report the time to generate | Method | Comm. 
| Planar | Enzymes | Grid | |--------|-------|--------|---------|------| | GRAN | 3.51 | 5.40 | 3.99 | 14.68| | GDSS | 0.54 | 8.85 | 1.09 | 25.90| | DiGress| 0.34 | 3.29 | 1.29 | 45.41| | HGGT (ours) | **0.03** | **0.58** | **0.09** | **8.16** | | Method | Comm. | Planar | Enzymes | Grid | |--------|-------|--------|---------|------| | BFS | 0.534 | 0.201 | 0.432 | 0.048| | DFS | 0.619 | 0.204 | 0.523 | 0.064| | C-M | **0.508** | **0.195** | **0.404** | **0.045** | Figure 7: (Upper left) Inference time to generate a single graph. (Lower left) Average compression ratio on various node orderings. (Right) Training loss on different positional encodings. Table 3: Ablation study for algorithmic components of HGGT. | Group | TPE | Prune | Community-small | Planar | Enzymes | |-------|-----|-------|-----------------|--------|---------| | | | | Degree Cluster. Orbit | Degree Cluster. Orbit | Degree Cluster. Orbit | | ✗ | ✗ | ✗ | 0.072 0.199 0.080 0.346 1.824 1.403 0.050 0.060 0.021 | | ✓ | ✗ | ✗ | 0.009 0.105 **0.001** 0.003 **0.001** 0.002 0.005 0.022 0.007 | | ✓ | ✓ | ✗ | 0.002 0.028 **0.001** 0.003 **0.001** 0.002 **0.002** 0.020 0.002 | | ✓ | ✓ | ✓ | **0.001** **0.006** **0.003** **0.000** **0.001** **0.000** 0.005 **0.017** **0.000** | a single sample. We can observe that HGGT generates a graph faster than the others due to the simplified representation. **Adjacency matrix orderings.** It is clear that the choice of node ordering influences the size of $K^2$–tree. We validate our choice of Cuthill-McKee (C-M) ordering (Cuthill & McKee, 1969) by comparing its compression ratio to other node orderings: breadth-first search (BFS) and depth-first search (DFS). The compression ratio is defined as the number of elements in $K^2$–tree divided by $N^2$. In the left below table of Figure 7, we present the compression ratios for each node ordering. One can observe that C-M ordering shows the best ratio in all the datasets compared to others. **Positional encoding.** In this experiment, we assess the impact of various positional encodings in our method. We compare our tree positional encoding (TPE) to absolute positional encoding (APE) (Vaswani et al., 2017) and relative positional encoding (RPE) (Shaw et al., 2018) on the Planar dataset. Our findings, as presented in the right figure of Figure 7, demonstrate that TPE outperforms other positional encodings with faster convergence of training loss. These observations highlight the importance of appropriate positional encoding for generating high-quality graphs. **Ablation of algorithmic components.** We introduce three components to enhance the performance of HGGT: grouping into tokens (Group), incorporating tree positional encoding (TPE), and pruning the $K^2$–tree (Prune). To verify the effectiveness of each component, we present the experimental results for our method with incremental inclusion of these components. The experimental results are reported in Table 3. The results demonstrate the importance of each component in improving graph generation performance, with grouping being particularly crucial, thereby validating the significance of our additional components to the sequential $K^2$–tree representation. ## 6 Conclusion In this paper, we presented a novel $K^2$–tree-based graph generative model (HGGT) which enables a compact, hierarchical, and domain-agnostic generation. Our experimental evaluation demonstrated state-of-the-art performance across various graph datasets. 
An interesting avenue for future work is the broader examination of other graph representations for graph generation, e.g., among the plethora of existing representations (Boldi et al., 2009; Larsson & Moffat, 2000).

Reproducibility All experimental code related to this paper is available at https://github.com/yunhuijiang/HGGT. Detailed insights regarding the experiments, encompassing dataset and model specifics, are available in Section 5. For intricate details such as the hyperparameter search, consult Appendix D. In addition, the reproduced dataset for each baseline is in Appendix E.

Acknowledgements This work was partly supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. IITP-2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1C1C1013366), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2022R1A6A1A0305295413, 2021R1C1C1011375), and the Technology Innovation Program (No. 20014926, Development of BIT Convergent AI Architecture, Its Validation and Candidate Selection for COVID19 Antibody, Repositioning and Novel Synthetic Chemical Therapeutics) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).

REFERENCES

Sungsoo Ahn, Binghong Chen, Tianzhe Wang, and Le Song. Spanning tree-based graph generation for molecules. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=w60btE_8T2m.

Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1):47, 2002.

Reet Barik, Marco Minutoli, Mahantesh Halappanavar, Nathan R Tallent, and Ananth Kalyanaraman. Vertex reordering for real-world graphs and applications: An empirical evaluation. In 2020 IEEE International Symposium on Workload Characterization (IISWC), pp. 240–251. IEEE, 2020.

Maciej Besta and Torsten Hoefler. Survey and taxonomy of lossless graph compression and space-efficient graph representations. arXiv preprint arXiv:1806.01799, 2018.

Paolo Boldi, Massimo Santini, and Sebastiano Vigna. Permuting web and social graphs. Internet Mathematics, 6(3):257–283, 2009.

Giorgos Bouritsas, Andreas Loukas, Nikolaos Karalias, and Michael Bronstein. Partition and code: learning how to compress graphs. Advances in Neural Information Processing Systems, 34:18603–18619, 2021.

Nieves R Brisaboa, Susana Ladra, and Gonzalo Navarro. k2-trees for compact web graph representation. In SPIRE, volume 9, pp. 18–30. Springer, 2009.

Xiaohui Chen, Jiaxing He, Xu Han, and Li-Ping Liu. Efficient and degree-guided graph generation via discrete diffusion modeling. arXiv preprint arXiv:2305.04111, 2023.

Elizabeth Cuthill and James McKee. Reducing the bandwidth of sparse symmetric matrices. In Proceedings of the 1969 24th National Conference, pp. 157–172, 1969.

Nathaniel Lee Diamant, Alex M Tseng, Kangway V Chuang, Tommaso Biancalani, and Gabriele Scalia. Improving graph generation by restricting graph bandwidth. In International Conference on Machine Learning, pp. 7939–7959. PMLR, 2023.

Paul Erdős, Alfréd Rényi, et al. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci., 5(1):17–60, 1960.

Nikhil Goyal, Harsh Vardhan Jain, and Sayan Ranu. GraphGen: a scalable approach to domain-agnostic labeled graph generation. In Proceedings of The Web Conference 2020, pp. 1253–1263, 2020.

Aditya Grover, Aaron Zweig, and Stefano Ermon. Graphite: Iterative generative modeling of graphs. In International Conference on Machine Learning, pp. 2434–2444. PMLR, 2019.
EODzbQ2Gy4
If the method is not intended to transfer to different behaviours (only different object displacements), why not just perform planning on the target task directly? TransferStep itself appears to be a form of trajectory optimisation at each step, and at least for the tasks presented in the experiments (which are all relatively simple single-stage manipulation tasks with the object pose changing), I would expect both source and target task to be solvable by planning with the differentiable simulator.
DIFF-TRANSFER: MODEL-BASED ROBOTIC MANIPULATION SKILL TRANSFER VIA DIFFERENTIABLE PHYSICS SIMULATION Anonymous authors Paper under double-blind review ABSTRACT The capability to transfer mastered skills to accomplish a range of similar yet novel tasks is crucial for intelligent robots. In this work, we introduce Diff-Transfer, a novel framework leveraging differentiable physics simulation to efficiently transfer robotic skills. Specifically, Diff-Transfer discovers a feasible path within the task space that brings the source task to the target task. At each pair of adjacent points along this task path, which is two sub-tasks, Diff-Transfer adapts known actions from one sub-task to tackle the other sub-task successfully. The adaptation is guided by the gradient information from differentiable physics simulations. We propose a novel path-planning method to generate sub-tasks, leveraging Q-learning with a task-level state and reward. We implement our framework in simulation experiments and execute four challenging transfer tasks on robotic manipulation, demonstrating the efficacy of Diff-Transfer through comprehensive experiments. Supplementary and Videos are on the website https://sites.google.com/view/difftransfer 1 INTRODUCTION The capacity for rapidly acquiring new skills in object manipulation is crucial for intelligent robots operating in real-world environments. One might wonder, how can robots efficiently learn manipulation skills across diverse objects? A straightforward approach would involve teaching a robot a new manipulation skill for every distinct object and task. However, this method lacks efficiency and is infeasible due to the vast variety of objects and possible robot interactions. Nonetheless, we could also notice that different manipulation skills may share common properties. As shown in Fig. 1, the one-directional pushing skill could be correlated to an object reorientation skill. Thus, it may be feasible to leverage prior knowledge acquired from one task to aid in learning another similar task. Transferring this prior knowledge and acquired skill set to new tasks could greatly enhance learning efficiency compared to starting from scratch. Our intuition to solve this transfer learning problem is that Newton’s Laws apply universally in our physical world. Therefore, when involved in similar tasks where objects are moved by similar poses, robots should interact with objects in similar ways. In this way, efficiently leveraging the local information hidden in the variation of manipulation tasks could be the key to efficient task transfer learning. In this paper, we investigate the problem of transferring manipulation skills between two object manipulation tasks. Our proposed framework is depicted in Fig. 1. We approach this problem by interpolating the source task and target task by producing a large number of intermediate sub-tasks between them which gradually transform from the source task toward the target task. These continuously and gradually transforming intermediate sub-tasks act as the bridge for transferring the action sequence from the source task to the target task. To better leverage the physical property associated with the object shape and pose transformation, we leverage differentiable simulation to capture model-based gradient information and use it in transforming robot action sequences. 
We introduce a refined Q-learning method for path planning in the pose transfer problem, where we use a high-level state and a well-designed reward to generate the path of seamlessly connected sub-tasks with a sample-based searching method. Figure 1: The overall approach of Diff-Transfer includes a path of $L - 1$ sub-tasks. Diff-Transfer leverages Local Sampler, Q-function Network and argmax function to select the best candidate to generate the $(i + 1)$th sub-task given the $i$th sub-task, and learn the action sequence via differentiable physics simulation. We execute a series of challenging manipulation tasks using Jade (Yang et al., 2023), a differentiable physics simulator designed for articulated rigid bodies. We undertake four tasks: Close Grill, Change Clock, Open Door, and Open Drawer. The outcomes demonstrate that our system surpasses prevalent baselines for transfer learning and direct transfer without path planning through differentiable simulation, highlighting the efficacy and merits of our approach. Additionally, we perform several ablation studies. In summary, we make the following contributions: • We propose a systematic framework for model-based transfer learning, leveraging the differentiable physics-based simulation and applying our framework for pose transfer and object shape transfer. • We propose a novel path planning method for generating multiple sub-tasks in the task space and learning an action sequence for a new sub-task with the proximity property and leveraging $Q$-learning and differentiable physics simulation. • We conduct comprehensive experiments to demonstrate the effectiveness of our proposed transfer learning framework. 2 RELATED WORK 2.1 Differentiable simulation for manipulation. Significant advancements have been achieved in the field of differentiable physics engines, thanks to the evolution of automatic differentiation techniques (Paszke et al., 2019; Team et al., 2016; Hu et al., 2019a; Bell, 2020; Bradbury et al., 2018; Agarwal et al.). Various differentiable physics simulations have been developed for specific applications, such as rigid bodies (de Avila Belbute-Peres et al., 2018; Degrave et al., 2019; Yang et al., 2023), soft bodies (Hu et al., 2019a,b; Iatavallabhula et al., 2021; Geilinger et al., 2020; Du et al., 2021), cloth (Liang et al., 2019; Qiao et al., 2020; Li et al., 2022; Yu et al., 2023), articulated bodies (Werling et al., 2021; Ha et al., 2017; Qiao et al., 2021), and fluids (Um et al., 2020; Wandel et al., 2020; Holl et al., 2020; Takahashi et al., 2021). Several studies have applied differentiable physics simulations to robotic manipulations. Turpin et al. (2022) focused on multi-fingered grasp synthesis, while Lv et al. (2022) guided robots in manipulating articulated objects. Zhu et al. (2023a,b) enabled model-based learning from demonstrations. by optimizing over dynamics, and Lin et al. (2022a,b) targeted deformable object manipulation. Yang et al. (2023) developed a differentiable simulation called Jade for articulated rigid bodies with Intersection-Free Frictional Contact. However, the incorporation of contact dynamics often results in non-convex optimization challenges due to discontinuities from contact mode switching (Suh et al., 2022; Antonova et al., 2022; Zhu et al., 2023a). 
To mitigate this, contact-centric trajectory planning has been proposed (Mordatch et al., 2012; Marcucci et al., 2017; Cheng et al., 2021; Gabiccini et al., 2018; Zhu et al., 2023a; Chen et al., 2021), which plans both contact points and forces and generate manipulation actions afterward. Additionally, Pang et al. (2022) introduced smoothing techniques for contact gradients and employed a convex quasi-dynamics model for feasible action searching. In alignment with existing research, our study utilizes differentiable physics simulations for the purpose of transferring robotic manipulation skills across different task spaces, thereby facilitating model-based transfer learning. 2.2 Transfer Learning in Robotics. Transfer learning has become a cornerstone in robotics, aiming to generalize skills across varying tasks, environments, or robotic platforms. Although still an open challenge, the majority of research has employed reinforcement learning for skill transfer (Taylor & Stone, 2009). Several approaches have been proposed to address this challenge. Lazaric et al. (2008); Xu et al. (2021); Jian et al. (2021) utilize domain randomization during training to enhance agent robustness across diverse physical environments and to focus on task-relevant features. Tirinzoni et al. (2018); Hu et al. (2023) fine-tune reward and value functions on new tasks, while Konidaris & Barto (2007); Liu et al. (2021); Zhao et al. (2022) directly adapt policies to new environments. Finn et al. (2017) introduces a meta-learning framework to improve agent adaptability across various tasks. Chi et al. (2022) employs an iterative policy and approximates residual dynamics for runtime adaptation. Distinct from these approaches, our work adopts a model-based perspective for policy transfer. We utilize differentiable simulations to approximate physical dynamics and directly optimize pre-existing policies. We address the key differences between source and target environments as rewards where we accommodate varying manipulation goals that yield different reward functions. 3 Problem Statement We consider two object manipulation tasks on a robot with $m$ joints. We assume the source manipulation task is specified by the goal of object pose change $\Delta s_{\text{source}} \in \mathbb{R}^6$. Suppose applying a given expert action sequence $A_{\text{source}} = [a_{\text{source}}(t)]_{t=1}^{T}$ on the task would yield a state-action trajectory $\tau_{\text{source}} = [(s_{r,\text{source}}(t), s_{o,\text{source}}(t), a_{\text{source}}(t))]_{t=1}^{T}$ where $s_{r,\text{source}} \in \mathbb{R}^m$, $s_{o,\text{source}} \in \mathbb{R}^6$, $a_{\text{source}} \in \mathbb{R}^m$ denotes robot state, object state and robot action at time $t$. We assume action sequence $A_{\text{source}}$ can successfully complete the task, i.e. moving the object from the starting pose $s_{o,\text{source}}^{(1)}$ to the goal pose $s_{o,\text{source}}^{(T)} = s_{o,\text{source}}^{(1)} + \Delta s_{\text{source}}$. Our objective is to derive an action sequence $A_{\text{target}} = [a_{\text{target}}(t)]_{t=1}^{T}$ that can successfully complete a new target manipulation task $\Delta s_{\text{target}}$ specified by the goal of object pose change $\Delta s_{\text{target}}$. 
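To make the notation above concrete, the following is a minimal sketch, not the paper's actual code, of the task and trajectory containers implied by this problem statement; the use of NumPy arrays and the specific field names are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TaskSpec:
    delta_s: np.ndarray          # goal object pose change, shape (6,)

@dataclass
class Trajectory:
    robot_states: np.ndarray     # s_r^(1..T), shape (T, m) for an m-joint robot
    object_states: np.ndarray    # s_o^(1..T), shape (T, 6)
    actions: np.ndarray          # a^(1..T),   shape (T, m)

    def object_pose_change(self) -> np.ndarray:
        # The realized Delta s: final object pose minus the starting object pose.
        return self.object_states[-1] - self.object_states[0]
```

A source task is then a TaskSpec together with a Trajectory whose actions are known to reach the goal, while the target task is a TaskSpec for which the action sequence is yet to be derived.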
4 Technical Approach

We approach this problem by defining a path consisting of $L$ tasks
$$P = [\Delta s_1, \Delta s_2, \ldots, \Delta s_L]$$ (1)
that connects the source and target tasks, where $\Delta s_1 = \Delta s_{\text{source}}$ is the source task and $\Delta s_L = \Delta s_{\text{target}}$ is the target task. Our approach consists of $L - 1$ steps of action transfer. At step $i$, our goal is to transfer a well-optimized action sequence $A_i$ on task $\Delta s_i$ into a well-optimized action sequence $A_{i+1}$ on the next task in the sequence, $\Delta s_{i+1}$. For any $i$, we assume the difference between tasks $\Delta s_i$ and $\Delta s_{i+1}$ is sufficiently small that local information, such as gradients from the differentiable simulation, suffices to optimize the action transfer:
$$||\Delta s_i - \Delta s_{i+1}|| < \varepsilon_1$$ (2)
where $\varepsilon_1$ denotes the upper bound on the difference between the goal object pose changes of two consecutive sub-tasks. This property is crucial to our gradient-based method in the following sub-section.

4.1 How to Accomplish a Sub-task

We deduce the requisite actions through a gradient-based methodology. Under the assumption that the subsequent sub-task goal pose deviates from the current goal pose by a limited distance, as described in Eq. 2, we posit that the actions for the new sub-task are in close proximity to the actions of the preceding one. This postulation naturally lends itself to optimization by gradient descent. We aim to optimize our current action sequence $\{a_{cur}^{(t)}\}_{t=1}^{T}$, denoted as $A_{cur}$ and initialized from $A_i$. The rollout trajectory based on $A_{cur}$ is denoted $\tau_{cur} = \{(s_{r,cur}^{(t)}, s_{o,cur}^{(t)}, a_{cur}^{(t)})\}_{t=1}^{T}$. For each sub-task, we introduce a loss function $L_{task}$:
$$L_{task} = ||\Delta s_{cur} - \Delta s_{i+1}||^2$$ (3)
where $\Delta s_{i+1}$ is the goal object pose change of the $(i + 1)$th sub-task and $\Delta s_{cur}$ is the object pose change realized by our rollout trajectory. We regard the sub-task as accomplished if $L_{task}$ is smaller than a threshold $\varepsilon_t$. Utilizing the differentiable simulation framework Jade, we compute the gradient $\left\{\frac{\partial L_{task}}{\partial a_{cur}^{(t)}}\right\}_{t=1}^{T}$, denoted as $\frac{\partial L_{task}}{\partial A_{cur}}$. Subsequently, the current actions $A_{cur}$ are updated to minimize the task loss $L_{task}$:
$$A_{cur} \leftarrow A_{cur} - \eta \frac{\partial L_{task}}{\partial A_{cur}}$$ (4)
We package this procedure as the function TRANSFERSTEP in Algorithm 1, which we will reuse in Section 4.2. It takes as input the trajectory $\tau_i$ of the $i$th sub-task and the object pose change $\Delta s_{i+1}$ of the $(i + 1)$th sub-task, and outputs the optimized task loss $L_{task}$, a boolean value $X$ indicating whether the sub-task is successfully completed, and the rollout trajectory $\tau_{i+1}$ produced by the optimized actions $A_{cur}$. If $X$ is True, then $A_{cur}$ is the desired $A_{i+1}$. The algorithm iteratively refines the action sequence $A_{cur}$ for at most $n_{epoch}$ iterations or until the convergence criterion is met.
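As a concrete illustration of the update in Eqs. 3-4, below is a minimal PyTorch-style sketch of the refinement loop that Algorithm 1 formalizes next. The one-step `simulate` function stands in for the differentiable simulator (Jade); it, together with the learning rate and iteration budget, is an assumption of this sketch rather than the paper's implementation.

```python
import torch

def transfer_step(actions, s_r_init, s_o_init, delta_s_next, simulate,
                  lr=1e-2, n_epoch=100, eps_t=1e-4):
    """Refine an action sequence from sub-task i so that it solves sub-task i+1.

    actions:      (T, m) tensor initialized from A_i.
    s_r_init:     initial robot state s_r^(1).
    s_o_init:     initial object state s_o^(1).
    delta_s_next: (6,) goal object pose change of sub-task i+1.
    simulate:     differentiable one-step dynamics (s_r, s_o, a) -> (s_r', s_o').
    """
    actions = actions.clone().requires_grad_(True)
    for _ in range(n_epoch):
        s_r, s_o = s_r_init, s_o_init
        for t in range(actions.shape[0] - 1):
            s_r, s_o = simulate(s_r, s_o, actions[t])                 # differentiable rollout
        task_loss = ((s_o - s_o_init) - delta_s_next).pow(2).sum()    # Eq. 3
        grad, = torch.autograd.grad(task_loss, actions)
        with torch.no_grad():
            actions -= lr * grad                                      # Eq. 4
        if task_loss.item() <= eps_t:
            return task_loss.item(), True, actions.detach()
    return task_loss.item(), False, actions.detach()
```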
Algorithm 1 Sub-Task Accomplishment ``` 1: Input: $\tau_i = \{(s_{r,i}^{(t)}, s_{o,i}^{(t)}, a_i^{(t)})\}_{t=1}^{T}, \Delta s_{i+1}$ 2: Output: $L_{task}, X, \tau_{i+1}$ 3: function TRANSFERSTEP($\tau_s, \Delta s_{i+1}$) 4: $s_{r,cur}^{(1)} \leftarrow s_{r,s}^{(1)}, a_{cur}^{(1)} \leftarrow a_s^{(1)}, t = 1, 2, \ldots, T$ 5: for $e$ in $1, 2, \ldots, n_{epoch}$ do 6: for $t$ in $1, 2, \ldots, T - 1$ do 7: $(s_{r,cur}^{(t+1)}, s_{o,cur}^{(t+1)}) \leftarrow \text{simulate}(s_{r,cur}^{(t)}, s_{o,cur}^{(t)}, a_{cur}^{(t)})$ 8: $\Delta s_{cur} \leftarrow s_{o,cur}^{(t+1)} - s_{o,cur}^{(t)}$ 9: $L_{task} \leftarrow ||\Delta s_{cur} - \Delta s_{i+1}||^2$ 10: $A_{cur} \leftarrow A_{cur} - \eta \frac{\partial L_{task}}{\partial A_{cur}}$ 11: if $L_{task} \leq \varepsilon_t$ then 12: return $L_{task}, \text{True}, \{(s_{r,cur}^{(t)}, s_{o,cur}^{(t)}, a_{cur}^{(t)})\}_{t=1}^{T}$ 13: return $L_{task}, \text{False}, \{(s_{r,cur}^{(t)}, s_{o,cur}^{(t)}, a_{cur}^{(t)})\}_{t=1}^{T}$ ``` 4.2 Sub-tasks Generation Given Algorithm 1 and the path $P$, it is easy to compute the optimized actions $A_t$ for our target task, since we can use dynamic programming to optimize $A_{i+1}$ based on $A_i$. The only problem is to generate one feasible path \( P \) where not only the property in Eq. (2) holds but also the Algorithm 1 tends to return the successful result with optimized action sequence \( A_{i+1} \) and the corresponding trajectory \( \tau_{i+1} \) for \((i + 1)\)th sub-task for each index \( i \). This reduces the problem into a path planning problem in the goal pose space where each node in the space denotes a goal final object state and we aim to build a path connecting the source goal pose and the target one. While there are lots of traditional path-planning algorithms in 3-D Euclidean space, they fail to solve our problem because the goal pose space is in a higher dimension and the obstacle is harder to detect. We introduce our innovative reinforcement learning method by predicting the difficulty of sub-tasks using a refined Q-function neural network \( Q(x; \theta) \) parameterized by \( \theta \). Instead of taking input of the conventional state and action at time \( t \), the network takes a high-level state input \( x \), which could be any object pose change like \( \Delta s_{\text{target}} \). The output \( r \) would be the estimated reward. Unlike traditional RL problems with clear task rewards, the reward in our problem needs an elaborate design because we are performing path planning on a higher task-space level. We introduce the reward function as \[ r(x) = -(\lambda_t \cdot L_{\text{task}} + \lambda_d \cdot ||x - \Delta s_{\text{target}}||^2) \] To illustrate this equation, the first term \( L_{\text{task}} \) is computed using Eq. (3) where \( \Delta s_{i+1} \) is given as \( x \) and \( \Delta s_{\text{cur}} \) is given by the optimized actions \( A_{\text{cur}} \) for sub-task goal \( x \). The second term \( ||x - \Delta s_{\text{target}}||^2 \), shortly as \( L_{\text{dis}} \), describes the distance from pose change \( x \) to the target pose change \( \Delta s_{\text{target}} \). Finally, \( \lambda_t \) and \( \lambda_d \) are weight coefficients to balance these two terms. Therefore, such reward results in a better path-planning algorithm because when the reward is high, both the task loss \( L_{\text{task}} \) and the distance to target goal \( L_{\text{dis}} \) are low. 
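The task-level reward of Eq. 5 and the Q-function network can be written compactly as follows; this is a minimal sketch, with the hidden width and the weights $\lambda_t$, $\lambda_d$ as assumptions (the paper only specifies a three-layer MLP later, in Section 5.1.4, where the implemented network predicts $-r(x)$ instead, which merely flips the selection from argmax to argmin).

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Three-layer MLP over a task-level input x (an object pose change in R^6),
    trained here to predict the task-level reward r(x)."""
    def __init__(self, dim_x=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def reward(task_loss, x, delta_s_target, lam_t=1.0, lam_d=1.0):
    """Eq. 5: trade off how solvable the sub-task x is (task_loss) against
    how close x already is to the target pose change."""
    l_dis = torch.sum((x - delta_s_target) ** 2)
    return -(lam_t * task_loss + lam_d * l_dis)
```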
Suppose we have the accurate \( Q(x; \theta) \) network, we can generate the path \( P \) in either a gradient-based way or a sample-based way. We employ the sampled-based approach for the current pose transfer problem to increase the robustness of stochastic noise from the inaccurate network in reality. In detail, given \( i \)th sub-task with a pose change \( \Delta s_i \), we sample \( n \) vectors \( \{x_j\}_{j=1}^n \), denoted as \( S \), in the task space in the neighbourhood of the \( i \)th sub-task goal \( \Delta s_i \), so that \[ ||\Delta s_i - x_j|| < \varepsilon_{\text{sample}}, j = 1, 2, \ldots, n \] where \( \varepsilon_{\text{sample}} \) is the radius of the neighbourhood. In these \( n \) candidates for the \((i + 1)\)th sub-task, we choose the best one \( k \) based on our current knowledge to maximize the reward \( r_k \) \[ k = \arg \max_j r_j, j = 1, 2, \ldots, n \] Once we get the best candidate \( x_k \), we call the function \( \text{TRANSFERSTEP} \) in Algorithm 1 in an attempt to optimize an action sequence \( A_{i+1} \) for the given \((i + 1)\)th sub-task. Should this process be successful, we shall continue to generate the next sub-task recursively until the target goal is attained. Otherwise, we shall discard this candidate \( x_k \) and find an alternative best candidate from \( S \) iteratively, as is shown in Algorithm 2. To learn an approximate network \( Q(x; \theta) \), we maintain a dataset \( D \) dynamically during the path-planning process. Each time after we call the \( \text{TRANSFERSTEP} \) function and get more information about the task space, we add the data pair \((x_k, r_k)\) into \( D \), update \( \theta \) with the Q-learning method to gain a better network and proceed on path planning. ### 4.3 IMPLEMENTATION DETAILS In this section, we discuss the implementation details of Diff-Transfer in Algorithm 2. To begin with, we pre-train our network \( Q(x; \theta) \) with a refined initial reward in Eq. (5) where \( L_{\text{task}} \) is set to a certain constant \( c_t \) because we cannot know the difficulty of any sub-task beforehand. Specifically, we generate labels \((x_{\text{pre}}, r_{\text{pre}})\) randomly to build a dataset \( D_{\text{pre}} \) and use it to fit \( \theta \) using a supervised learning method via minimizing the loss \( l_{\text{pre}}(\theta) = ||Q(x_{\text{pre}}; \theta) - r_{\text{pre}}||^2 \). With online dataset \( D = \{(x_k, r_k)\}_{k=1}^m \) collected during execution of our path-planning method, network parameters \( \theta \) will be fine-tuned to minimize the loss \( l(\theta) = ||Q(x_k; \theta) - r_k||^2 \). It is worth noting that \( D \) doesn’t contain data from $D_{pre}$ because data in $D$ collected from rollouts in simulation reflect the actual rewards of sub-tasks while $D_{pre}$ just provides a rough estimation under the hypothesis that all sub-tasks have same difficulties, which is hardly true in the real transfer problem. **Algorithm 2 Q-function Network Guided Path Planning** 1: function PATHSEARCH($\tau_i$, $\Delta s_{target}$) 2: if $||\Delta s_i - \Delta s_{target}|| \leq \varepsilon_{pose}$ then 3: return $\tau_i$ 4: Randomly sample $n$ vectors $S \leftarrow \{x_j\}_{j=1}^n$ in the neighbourhood of $\Delta s_i$ 5: $r_j \leftarrow Q_\theta(x_j)$, $j = 1, 2...n$. 
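The candidate sampling of Eq. 6, the greedy selection of Eq. 7, and the online update of $Q(x; \theta)$ on the dataset $D$ can be sketched as follows; the sampling distribution inside the neighbourhood and the optimizer settings are assumptions of this sketch. PATHSEARCH in Algorithm 2 below repeatedly applies these steps.

```python
import torch

def sample_candidates(delta_s_i, n, eps_sample):
    """Eq. 6: draw n candidate sub-task goals within an eps_sample ball of delta_s_i."""
    directions = torch.randn(n, delta_s_i.shape[0])
    directions = directions / directions.norm(dim=1, keepdim=True)
    radii = eps_sample * torch.rand(n, 1)
    return delta_s_i + radii * directions

def pick_best(candidates, q_net):
    """Eq. 7: choose the candidate with the highest predicted reward."""
    with torch.no_grad():
        rewards = q_net(candidates)
    k = int(torch.argmax(rewards))
    return k, candidates[k]

def update_q(q_net, optimizer, dataset):
    """One pass of the regression l(theta) = ||Q(x_k; theta) - r_k||^2
    over the online dataset D collected during path planning."""
    for x, r in dataset:
        loss = (q_net(x) - r) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```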
6: while $S \neq \emptyset$ do 7: $k \leftarrow \arg\max_j r_j$ 8: $L_{task}, X, \tau_{i+1} \leftarrow TRANSFERSTEP(\tau_i, x_k)$ 9: $L_{dis} \leftarrow ||x_k - \Delta s_{target}||^2$ 10: $r_k \leftarrow -(L_{task} + \lambda_d \cdot L_{dis})$ 11: $D \leftarrow D \cup \{(x_k, r_k)\}$ 12: Update $\theta$ using dataset $D$ 13: if $X = True$ then 14: PATHSEARCH($\tau_{i+1}$, $\Delta s_{target}$) 15: else 16: $S \leftarrow S - \{x_k\}$ 17: continue 18: return failure 5 EXPERIMENTS In this section, we present a rigorous experimental framework meticulously designed to elucidate the effectiveness of our proposed system Diff-Transfer. This exhaustive evaluation encompasses an assessment of the system’s performance across diverse conditions, while also subjecting it to rigorous scrutiny in the presence of unforeseen challenges. The tests conducted in this study are geared towards offering a comprehensive panorama of the system’s capabilities. Our foremost objective is to substantiate the theoretical foundations expounded earlier and establish a seamless connection between theory and practical implementation, thereby affirming the system’s scalability and adaptability across a multitude of application domains. 5.1 EXPERIMENTAL SETUP 5.1.1 SIMULATION SETTING We choose multiple manipulation tasks from RLBench (James et al., 2020) and adapt the environment to the Jade (Yang et al., 2023) simulation. Specifically, we acquire the trajectory of states for each task, along with the objects’ Unified Robot Description Format (URDF) files and corresponding mesh files. Actions are computed utilizing inverse dynamics and optimization within Jade, providing us with a comprehensive initial trajectory of both states and actions, denoted as $r_{source}$. 5.1.2 EVALUATION METRIC We employ the number of iterations $N$ in the optimization loop to evaluate the efficiency of our methods and compare the results. We also report the distance $d$, which is a task-related metric describing the completeness of manipulation. For each specific manipulation task, we run 5 times our method to reduce the effect of randomness and report the mean value for both the iterative steps and the distance as $\bar{N}$ and $\bar{d}$, and the standard deviation as $\sigma_N$ and $\sigma_d$. Figure 2: Source Task (grey object) and Target Task (orange object) for (a) Change Clock, (b) Close Grill, (c) Open Door, and (d) Open Drawer. | Method | Diff-Transfer | MAML | DMG | Direct Transfer | |-----------------|---------------|------|-----|-----------------| | Task Name | N | σ_N | d | σ_d | d | success | d | success | N | d | success | | Change Clock | 55.6 | 61.1 | 3.72| 1.38 | 10.27| × | 27.46| × | 1000+ | 19.66 | × | | Close Grill | 66.4 | 11.5 | 1.80| 0.55 | 18.54| × | 56.71| × | 1000+ | 8.53 | × | | Open Door | 57.8 | 38.2 | 0.64| 0.43 | 9.20 | × | 41.91| × | 255 | 1.40 | ✓ | | Open Drawer | 123.8 | 103.9| 0.06| 0.00 | 0.08 | × | 0.18 | × | 1000+ | 0.12 | × | Table 1: Experiment Results for Diff-Transfer, MAML, DMG, and Direct Transfer. Diff-Transfer is executed using 5 distinct random seeds. 5.1.3 Manipulation Skill Transfer Tasks Close Grill The robot is required to close a grill lid. This task is considered successful if the grill lid has been rotated to close. The distance \( d \) describes the distance from the final angle of the grill lid joint to the target angle, with a unit of degrees. Change Clock The robot is required to change a clock. 
This task is considered successful if the clock pointer has been rotated to a specific orientation. The distance \( d \) describes the distance from the final angle of the clock pointer to the target angle, in degrees.

Open Door The robot is required to open a door. This task is considered successful if the door has been rotated to a specific orientation from the door frame. The distance \( d \) describes the distance from the final angle of the door to the target angle, in degrees.

Open Drawer The robot is required to open a drawer. The chest has 3 drawers. This task is considered successful if the specific drawer has been pulled out from the chest. The distance \( d \) describes the distance from the final translation of the drawer to the target translation, in meters.

5.1.4 Implementation Details

To illustrate the details presented in Section 4, we define \( \Delta s_i \), the objective of the \( i \)th sub-task, as the base pose change of the manipulated object from its pose in the source task. This definition slightly diverges from the description in Section 3, as these intricate manipulation tasks require the robot to manipulate the object's joint, rather than altering its pose by pushing. We employ a three-layer MLP to implement the Q-function network \( Q(x; \theta) \). Rather than directly outputting the reward function in Eq. 5, we let the network output an estimated loss with a value of \(-r(x)\), which explains why the landscapes in Fig. 3 exhibit a minimum area instead of a maximum, a point discussed in Section 5.3.

| Method | Diff-Transfer | Diff-Transfer ($\lambda_t = 0$) | Linear Interpolation |
|--------|---------------|---------------------------------|----------------------|
| Task Name | $\bar{N}$ | $\sigma_N$ | $\bar{d}$ | $\sigma_d$ | $\bar{N}$ | $\sigma_N$ | $\bar{d}$ | $\sigma_d$ | $N$ | success | $d$ |
| Change Clock | 55.6 | 61.1 | 3.72 | 1.38 | 51.0 | 28.7 | 3.23 | 1.70 | 68.0 | ✓ | 5.43 |
| Close Grill | **66.4** | 11.5 | **1.80** | 0.55 | 96.6 | 28.4 | 2.45 | 0.55 | 157.0 | ✓ | 3.36 |
| Open Door | **57.8** | 38.2 | **0.64** | 0.43 | 185.4 | 118.3 | 2.78 | 2.16 | 113.0 | ✓ | 4.11 |
| Open Drawer | **123.8** | 103.9 | **0.06** | 0.00 | 527.0 | 712.0 | **0.06** | 0.00 | 309.0 | ✗ | 0.38 |

Table 2: Experiment Results for Diff-Transfer, Diff-Transfer ($\lambda_t = 0$), and Linear Interpolation. Both Diff-Transfer and Diff-Transfer ($\lambda_t = 0$) are executed using 5 distinct random seeds.

Figure 3: Visualization of learned Q-function landscapes for (a) Change Clock, (b) Close Grill, (c) Open Door, and (d) Open Drawer. The x-axis represents translation, and the y-axis represents orientation. The origin symbolizes the change in target pose, $\Delta s_{\text{target}}$, while the top right corner denotes the change in source task pose, $\Delta s_{\text{source}}$.

5.2 Baseline

DMP Dynamic Movement Primitives (DMP) is a method for learning and reproducing complex dynamic movement skills in robots and other systems, making it easier for them to perform tasks such as reaching and grasping objects. Specifically, for a transfer task, we use the robot trajectory of the source task to fit the DMP function, modify the object target on the target task, and reproduce the motion trajectory.

MAML Model-agnostic meta-learning (MAML) is a meta-learning algorithm that enables machine learning models to quickly adapt to new tasks with minimal training data by learning good initializations that can be fine-tuned for specific tasks, making it highly applicable to a variety of applications.
Specifically, for a transfer task, we perform learning on 4 source tasks and perform trajectory prediction on a target task. In our experiments, the trained policy is a two-layer MLP network with 128 hidden units in each layer. We use the adam optimizer and SGD loss function to train the policy for 1000 epochs. In each epoch, we perform task-level training and meta-training. During each task-level training, we sample 20 trajectories on four source tasks to update the parameters of the task-level strategy. During each meta-training, we use task-level update parameters to sample 5 trajectories on 4 source tasks and update the policy parameters. We will train the final trained policy on the target task for 20 epochs to fine-tune the parameters, and calculate whether the policy given at this time can complete the target task. Direct Transfer To demonstrate the efficacy of our path-searching method, we assess the direct transferring technique on each task, using it as one of the baselines, denoted as Direct Transfer. Contrary to constructing a path where the source task and the target task are cohesively linked via several intermediate sub-tasks as in Algorithm 2, Direct Transfer solely endeavors to optimize an action sequence for the target task, directly drawing from the source task trajectory through differentiable simulation, as outlined in Algorithm 1. 5.3 Experiment Results The iteration counts $N$ and distances $d$ are detailed in Table 1 for Diff-Transfer, MAML, DMG, and Direct Transfer. As illustrated in the table, our algorithm manifests superior efficacy across all evaluated tasks. While MAML and DMG are unable to successfully accomplish any of the four tasks, and Direct Transfer only yields a successful outcome in the Open Door task, our Diff-Transfer manages to fulfill all four tasks, achieving a success rate of 100% across 5 varied random seeds. Additionally, Diff-Transfer requires significantly fewer iterative steps compared to Direct Transfer to accomplish the transfer task, underscoring the criticality of constructing a seamless path to mitigate the complexity of each sub-task transfer, and highlighting that attempts to transfer via brute force are frequently either impractical or necessitate more iterations. Regarding MAML and DMG, these methods, being somewhat antiquated, struggle to finalize this innovative transfer task within a reasonable time. To confirm the validity of our path-planning approach, we have depicted the landscape of our Q-function network in Fig. 3. In each depiction, the horizontal axis denotes the translation, and the vertical axis denotes the orientation, together constituting a task space for any alterations in pose. The origin represents the target pose change $\Delta s_{\text{target}}$ while the top right corner represents the source task pose change $\Delta s_{\text{source}}$. As exhibited in the images, there exists a minimum area surrounding the origin, indicating that the network directs correctly toward the target task. Moreover, this area does not necessarily need to be precisely at the origin; given the varying complexities of different tasks, completing a sub-task pose near the $\Delta s_{\text{source}}$ is often more feasible, resulting in a lower value of $L_{\text{task}}$ in Eq. 3, and, subsequently, contributing to a reduced total loss. 
This task-level characteristic elucidates why these landscapes exhibit a similar pattern with the aforementioned minimum area around the origin, aligning with our anticipations, even though the low-level manipulations might significantly diverge. 5.4 Ablation Study: Employ Different Path-Planning Methods We conduct two different ablation tests for Diff-Transfer with distinct path-planning methods. 1. We remove the Q-learning network and replace it with a deterministic linear interpolation method between $\Delta s_{\text{source}}$ and $\Delta s_{\text{target}}$, denoted as Linear Interpolation. 2. We refine the reward function in Eq. 5 by removing the task loss term, with $\lambda_t = 0$, denoted as Diff-Transfer ($\lambda_t = 0$). Our experiment results for the ablation study are presented in Table 2. Generally speaking, both Diff-Transfer and Diff-Transfer ($\lambda_t = 0$) achieve a 100% success rate across four tasks, employing 5 distinct random seeds, while Linear Interpolation succeeds in three out of the four transfer tasks. This denotes that path planning, even by naive methods, can substantially elevate the success rate in transferring manipulation tasks. To elaborate, the data reveals that our Diff-Transfer excels in tasks such as Close grill, Open Door, and Open Drawer, exhibiting quicker convergence (smaller $N$) and heightened precision in manipulation outcomes (smaller $d$) compared to Diff-Transfer ($\lambda_t = 0$) and Linear Interpolation. Regarding the Change Clock task, Diff-Transfer, ablation, and Linear Interpolation display comparable performance, suggesting that accomplishing this transfer task via differentiable physics simulation is relatively uncomplicated. In conclusion, the path-planning methodology employed in Diff-Transfer is imperative and efficient, leading to enhanced success rates and reduced time expenditures in most instances. 6 Conclusion In this paper, we introduced an advanced framework aiming to revolutionize the paradigm of robotic manipulation skill acquisition through transfer learning. Drawing inspiration from the omnipresence of Newtonian principles, our method centers on the potential to generalize manipulation strategies across object poses in 3-D Euclidean space. To navigate the complex landscape, we instigate a bridge mechanism, employing a continuum of intermediate sub-tasks as conduits for the seamless relay of skills between distinct object poses, where the path of sub-tasks is generated through a refined Q-function network with task-level states and rewards. This focus is further bolstered by our integration of differentiable simulation, affording us an intricate understanding of the physical intricacies inherent in pose transformations. The compelling results from our meticulous experiments underscore the robustness and efficacy of our proposed framework. In summation, our pioneering contributions herald a new era in robotic adaptability, reducing the dependency on ground-up learning and accelerating the skill transfer processes, particularly in the realms of manipulations with different object poses. REFERENCES Sameer Agarwal, Keir Mierle, and Others. Ceres solver. http://ceres-solver.org Rika Antonova, Jingyun Yang, Krishna Murthy Jatavallabhula, and Jeannette Bohg. Rethinking optimization with differentiable simulation from a global perspective. In 6th Annual Conference on Robot Learning, 2022. Bradley Bell. Cppad: a package for c++ algorithmic differentiation. http://www.coin-or.org/CppAD 2020. 
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL https://github.com/google/jax Claire Chen, Preston Culbertson, Marion Lepert, Mac Schwager, and Jeannette Bohg. Trajectotree: Trajectory optimization meets tree search for planning multi-contact dexterous manipulation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8262–8268, 2021. doi: 10.1109/IROS51168.2021.9636346. Xianyi Cheng, Eric Huang, Yifan Hou, and Matthew T. Mason. Contact mode guided sampling-based planning for quasistatic dexterous manipulation in 2d. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6520–6526, 2021. doi: 10.1109/ICRA48506.2021.9560766. Cheng Chi, Benjamin Burchfiel, Eric Cousineau, Siyuan Feng, and Shuran Song. Iterative residual policy for goal-conditioned dynamic manipulation of deformable objects. In Proceedings of Robotics: Science and Systems (RSS), 2022. Filipe de Avila Belbute-Peres, Kevin Smith, Kelsey Allen, Josh Tenenbaum, and J. Zico Kolter. End-to-end differentiable physics for learning and control. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/842424a1d0595b76ec4fa03c46e8d755-Paper.pdf Jonas Degrave, Michiel Hermans, Joni Dambre, et al. A differentiable physics engine for deep learning in robotics. Frontiers in neurorobotics, pp. 6, 2019. Tao Du, Kui Wu, Pingchuan Ma, Sebastien Wah, Andrew Spielberg, Daniela Rus, and Wojciech Matusik. Diffpd: Differentiable projective dynamics. ACM Trans. Graph., 41(2), nov 2021. ISSN 0730-0301. doi: 10.1145/3490168. URL https://doi.org/10.1145/3490168 Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pp. 1126–1135. PMLR, 2017. Marco Gabiccini, Alessio Artoni, Gabriele Pannocchia, and Joris Gillis. A Computational Framework for Environment-Aware Robotic Manipulation Planning, pp. 363–385. Springer International Publishing, 2018. ISBN 978-3-319-60916-4. doi: 10.1007/978-3-319-60916-4_21. Moritz Geilinger, David Hahn, Jonas Zehnder, Moritz Bächer, Bernhard Thomaszewski, and Stelian Coros. Add: Analytically differentiable dynamics for multi-body systems with frictional contact. ACM Transactions on Graphics (TOG), 39(6):1–15, 2020. Sehoon Ha, Stelian Coros, Alexander Alspach, Joohyung Kim, and Katsu Yamane. Joint optimization of robot design and motion parameters using the implicit function theorem. In Siddhartha Srinivasu, Nora Ayanian, Nancy Amato, and Scott Kuindersma (eds.), Robotics, Robotics: Science and Systems, United States, 2017. MIT Press Journals. doi: 10.15607/rss.2017.xiii.003. Publisher Copyright: © 2017 MIT Press Journals. All rights reserved.; 2017 Robotics: Science and Systems, RSS 2017 ; Conference date: 12-07-2017 Through 16-07-2017. Philipp Holl, Vladlen Koltun, and Nils Thuerey. Learning to control pdes with differentiable physics. arXiv preprint arXiv:2001.07457, 2020.
2lDQLiH1W4
Let's say we input a set of views V = [v1, v2, v3, v4] and generate a shape A. Then we multiply all the views with a transformation matrix M and generate a shape B. Will shape A and shape B under the same canonicalized coordinate frame?
Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model Jiahao Li1,2∗ Hao Tan1 Kai Zhang1 Zexiang Xu1 Fujun Luan1 Yinghao Xu1,3 Yicong Hong1,4 Kalyan Sunkavalli1 Greg Shakhnarovich2 Sai Bi1 1Adobe Research 2TTIC 3Stanford University 4Australian National University {jiahao,greg}@ttic.edu yhxu@stanford.edu mr.yiconghong@gmail.com {hatan,kaiz,zexu,fluan,sunkaval,sbi}@adobe.com Abstract Text-to-3D with diffusion models has achieved remarkable progress in recent years. However, existing methods either rely on score distillation-based optimization which suffer from slow inference, low diversity and Janus problems, or are feed-forward methods that generate low-quality results due to the scarcity of 3D training data. In this paper, we propose Instant3D, a novel method that generates high-quality and diverse 3D assets from text prompts in a feed-forward manner. We adopt a two-stage paradigm, which first generates a sparse set of four structured and consistent views from text in one shot with a fine-tuned 2D text-to-image diffusion model, and then directly regresses the NeRF from the generated images with a novel transformer-based sparse-view reconstructor. Through extensive experiments, we demonstrate that our method can generate diverse 3D assets of high visual quality within 20 seconds, which is two orders of magnitude faster than previous optimization-based methods that can take 1 to 10 hours. Our project webpage is: https://jiahao.ai/instant3d/. 1 Introduction In recent years, remarkable progress has been achieved in the field of 2D image generation. This success can be attributed to two key factors: the development of novel generative models such as diffusion models (Song et al., 2021; Ho et al., 2020; Ramesh et al., 2022; Rombach et al., 2021), and the availability of large-scale datasets like Laion5B (Schuhmann et al., 2022). Transferring this success in 2D image generation to 3D presents challenges, mainly due to the scarcity of available 3D training data. While Laion5B has 5 billion text-image pairs, Obaverse-XL (Deitke et al., 2023a), the largest public 3D dataset, contains only 10 million 3D assets with less diversity and poorer annotations. As a result, previous attempts to directly train 3D diffusion models on existing 3D datasets (Luo & Hu, 2021; Nichol et al., 2022; Jun & Nichol, 2023; Gupta et al., 2023; Chen et al., 2023b) are limited in the visual (shape and appearance) quality, diversity and compositional complexity of the results they can produce. To address this, another line of methods (Poole et al., 2022; Wang et al., 2023a; Lin et al., 2023; Wang et al., 2023b; Chen et al., 2023c) leverage the semantic understanding and high-quality generation capabilities of pretrained 2D diffusion models. Here, 2D generators are used to calculate gradients on rendered images, which are then used to optimize a 3D representation, usually a NeRF (Mildenhall et al., 2020). Although these methods yield better visual quality and text-3D alignment, they can be incredibly time-consuming, taking hours of optimization for each prompt. They also suffer from artifacts such as over-saturated colors and the “multi-face” problem arising from the bias in pretrained 2D diffusion models, and struggle to generate diverse results from the same text prompt, with varying the random seed leading to minor changes in geometry and texture. In this paper, we propose Instant3D, a novel feed-forward method that generates high-quality and diverse 3D assets conditioned on the text prompt. 
Instant3D, like the methods noted above, builds on top of pretrained 2D diffusion models. However, it does so by splitting 3D generation into two stages: 2D generation and 3D reconstruction. In the first stage, instead of generating images sequentially (Liu et al., 2023b), we fine-tune an existing text-to-image diffusion model (Podell et al., 2023) to generate a sparse set of four-view images in the form of a $2 \times 2$ grid in a single denoising process. This design allows the multi-view images to attend to each other during generation, leading to more view-consistent results. In the second stage, instead of relying on a slow optimization-based reconstruction method, inspired by Hong et al. (2024), we introduce a novel sparse-view large reconstruction model with a transformer-based architecture that can directly regress a triplane-based (Chan et al., 2022) NeRF from a sparse set of multi-view images. Our model projects sparse-view images into a set of pose-aware image tokens using pretrained vision transformers (Caron et al., 2021), which are then fed to an image-to-triplane decoder that contains a sequence of transformer blocks with cross-attention and self-attention layers. Our proposed model has a large capacity with more than 500 million parameters and can robustly infer correct geometry and appearance of objects from just four images. Both of these stages are fine-tuned/trained with multi-view rendered images of around 750K 3D objects from Objaverse (Deitke et al., 2023b), where the second stage makes use of the full dataset and the first stage can be fine-tuned with as little as 10K data. While we use a relatively smaller dataset compared to the pre-training dataset for other modalities (e.g., C4 Raffel et al. (2020) for text and Laion5B for image), by combining it with the power of pretrained 2D diffusion models, Instant3D's two-stage approach is able to generate high-quality and diverse 3D assets even from input prompts that contain complex compositional concepts (see Figure 1) and do not exist in the 3D dataset used for training. Due to its feed-forward architecture, Instant3D is exceptionally fast, requiring only about 20 seconds to generate a 3D asset, which is $200\times$ faster than previous optimization-based methods (Poole et al., 2022; Wang et al., 2023b) while achieving comparable or even better quality.

∗This work was done while the author was an intern at Adobe Research.

Figure 1: Our method generates high-quality 3D NeRF assets from the given text prompts within 20 seconds. Here we show novel view renderings from our generated NeRFs as well as the renderings of the extracted meshes from their density field.

2 RELATED WORKS

3D generation. Following the success of generative models on 2D images using VAEs (Kingma & Welling, 2013; Van Den Oord et al., 2017), GANs (Goodfellow et al., 2014; Karras et al., 2019; Gu et al., 2022; Kang et al., 2023), and autoregressive models (Oord et al., 2016; Van Den Oord et al., 2016), people have also explored the applications of such models on 3D generation.
Previous approaches have explored different methods to generate 3D models in the form of point clouds (Wu et al., 2016; Gadelha et al., 2017; Smith & Meger, 2017), triangle meshes (Gao et al., 2022; Pavllo et al., 2020; Chen et al., 2019; Luo et al., 2021), volumes (Chan et al., 2022; Or-El et al., 2022; Bergman et al., 2022; Skorokhodov et al., 2022; Mittal et al., 2022) and implicit representations (Liu et al., 2022; Fu et al., 2022; Sanghi et al., 2022) in an unconditional or text/image-conditioned manner. Such methods are usually trained on limited categories of 3D objects and do not generalize well to a wide range of novel classes. Diffusion models (Rombach et al., 2021; Podell et al., 2023; Ho et al., 2020; Song et al., 2021; Saharia et al., 2022) open new possibilities for 3D generation. A class of methods directly train 3D diffusion models on the 3D representations (Nichol et al., 2022; Liu et al., 2023c; Zhou et al., 2021; Sanghi et al., 2023) or project the 3D models or multi-view rendered images into latent representations (Ntavelis et al., 2023; Zeng et al., 2022; Gupta et al., 2023; Jun & Nichol, 2023; Chen et al., 2023b) and perform the diffusion process in the latent space. For example, Shap-E (Jun & Nichol, 2023) encodes each 3D shape into a set of parameters of an implicit function, and then trains a conditional diffusion model on the parameters. These approaches face challenges due to the restricted availability and diversity of existing 3D data, consequently resulting in generated content with poor visual quality and inadequate alignment with the input prompt. Therefore, although trained on millions of 3D assets, Shap-E still fails to generate 3D shapes with complex compositional concepts and high-fidelity textures. To resolve this, another line of works try to make use of 2D diffusion models to facilitate 3D generation. Some works (Jain et al., 2022; Mohammad Khalid et al., 2022) optimize meshes or NeRFs to maximize the CLIP Radford et al. (2021) score between the rendered images and input prompt utilizing pretrained CLIP models. While such methods can generate diverse 3D content, they exhibit a deficiency in visual realism. More recently, some works (Poole et al., 2022; Wang et al., 2023b; Lin et al., 2023; Chen et al., 2023c) optimize 3D representations using score distillation loss (SDS) based on pretrained 2D diffusion models. Such methods can generate high-quality results, but suffer from slow optimization, over-saturated colors and the Janus problem. For example, it takes 1.5 hours for DreamFusion (Poole et al., 2022) and 10 hours for ProlificDreamer (Wang et al., 2023b) to generate a single 3D asset, which greatly limits their practicality. In contrast, our method enjoys the benefits of both worlds: it’s able to borrow information from pretrained 2D diffusion models to generate diverse multi-view consistent images that are subsequently lifted to faithful 3D models, while still being fast and efficient due to its feed-forward nature. Sparse-view reconstruction. Traditional 3D reconstruction with multi-view stereo (Agarwal et al., 2011; Schönberger et al., 2016; Furukawa et al., 2015) typically requires a dense set of input images that have significant overlaps to find correspondence across views and infer the geometry correctly. 
While NeRF (Mildenhall et al., 2020) and its variants (Müller et al., 2022; Chen et al., 2022; 2023a) have further alleviated the prerequisites for 3D reconstruction, they perform per-scene optimization that still necessitates a lot of input images. Previous methods (Wang et al., 2021; Chen et al., 2021; Long et al., 2022; Reizenstein et al., 2021; Trevithick & Yang, 2021; Shen et al., 2023) have tried to learn data priors so as to infer NeRF from a sparse set of images. Typically they extract per-view features from each input image, and then for each point on the camera ray, aggregate multi-view features and decode them to the density (or SDF) and colors. Such methods are either trained in a category-specific manner, or only trained on small datasets such as ShapeNet; they have not been demonstrated to generalize beyond these datasets especially to the complex text-to-2D outputs. More recently, some methods utilize data priors from pretrained 2D diffusion models to lift a single 2D image to 3D by providing supervision at novel views using SDS loss (Liu et al., 2023b; Qian et al., 2023; Melas-Kyriazi et al., 2023) or generating multi-view images (Liu et al., 2023a). For instance, One-2-3-45 (Liu et al., 2023a) generates 32 images at novel views from a single input image using a fine-tuned 2D diffusion model, and reconstructs a 3D model from them, which suffers from inconsistency between the many generated views. In comparison, our sparse-view reconstructor adopts a highly scalable transformer-based architecture and is trained on large-scale 3D data. This gives it the ability to accurately reconstruct 3D models of novel unseen objects from a sparse set of 4 images without per-scene optimization. 3 METHOD Our method Instant3D is composed of two stages: sparse-view generation and feed-forward NeRF reconstruction. In Section 3.1, we present our approach for generating sparse multi-view images conditioned on the text input. In Section 3.2, we describe our transformer-based sparse-view large reconstruction model. Figure 2: Overview of our method. Given a text prompt (‘a car made out of sushi’), we perform multi-view generation with Gaussian blobs as initialization using fine-tuned 2D diffusion model, producing a 4-view image in the form of a $2 \times 2$ grid. Then we apply a transformer-based sparse-view 3D reconstructor on the 4-view image to generate the final NeRF. 3.1 Text-Conditioned Sparse View Generation Given a text prompt, our goal is to generate a set of multi-view images that are aligned with the prompt and consistent with each other. We achieve this by fine-tuning a pretrained text-to-image diffusion model to generate a $2 \times 2$ image grid as shown in Figure 2. In the following paragraphs, we first illustrate that large text-to-image diffusion models (i.e., SDXL (Podell et al., 2023)) have the capacity to generate view-consistent images thus a lightweight fine-tuning is possible. We then introduce three essential techniques to achieve it: the image grid, the curation of the dataset, and also the Gaussian Blob noise initialization in inference. As a result of these observations and technical improvements, we can fine-tune the 2D diffusion model for only 10K steps (on 10K data) to generate consistent sparse views. Multi-view generation with image grid. Previous methods (Liu et al., 2023b;a) on novel-view synthesis show that image diffusion models are capable of understanding the multi-view consistency. 
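A small sketch of the hand-off between these two stages: the first stage emits a single image containing a $2 \times 2$ grid of views, which is split back into four per-view images before being passed to the reconstructor. This is illustrative only; the use of PIL and the row-major view order are assumptions here.

```python
from PIL import Image

def split_grid(grid: Image.Image):
    """Split a 2x2 multi-view grid image into four view images
    (row-major order: top-left, top-right, bottom-left, bottom-right)."""
    w, h = grid.size
    hw, hh = w // 2, h // 2
    boxes = [(0, 0, hw, hh), (hw, 0, w, hh), (0, hh, hw, h), (hw, hh, w, h)]
    return [grid.crop(box) for box in boxes]
```

The four crops are then paired with the fixed camera poses used during fine-tuning (Section 3.1) and fed to the reconstructor of Section 3.2.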
In light of this, we compile the images at different views into a single image in the form of an image grid, as depicted in Figure 2. This image-grid design can better match the original data format of the 2D diffusion model, and is suitable for simple direct fine-tuning protocol of 2D models. We also observe that this simple protocol only works when the base 2D diffusion has enough capacity, as shown in the comparisons of Stable Diffusion v1.5 (Rombach et al., 2021) and SDXL (Podell et al., 2023) in Section 4.3. The benefit from simplicity will also be illustrated later in unlocking the lightweight fine-tuning possibility. Regarding the number of views in the image grid, there is a trade-off between the requirements of multi-view generation and 3D reconstruction. More generated views make the problem of 3D reconstruction easier with more overlaps but increase possibility of view inconsistencies in generation and reduces the resolution of each generated view. On the other hand, too few views may cause insufficient coverage, requiring the reconstructor to hallucinate unseen parts, which is challenging for a deterministic 3D reconstruction model. Our transformer-based reconstructor learns generic 3D priors from large-scale data, and greatly reduces the requirement for the number of views. We empirically found that using 4 views achieves a good balance in satisfying the two requirements above, and they can be naturally arranged in a $2 \times 2$ grid as shown in Figure 2. Next, we detail how the image grid data is created and curated. Multi-view data creation and curation. To fine-tune the text-to-image diffusion model, we create paired multi-view renderings and text prompts. We adopt a large-scale synthetic 3D dataset Objaverse (Deitke et al., 2023b) and render four $512 \times 512$ views of about 750K objects with Blender. We distribute the four views at a fixed elevation (20 degrees) and four equidistant azimuths (0, 90, 180, 270 degrees) to achieve a better coverage of the object. We use Cap3D (Luo et al., 2023) to generate captions for each 3D object, which consolidates captions from multi-view renderings generated with pretrained image captioning model BLIP-2 (Li et al., 2023) using a large language model (LLM). Finally, the four views are assembled into a grid image in a fixed order and resized to the input resolution compatible with the 2D diffusion model. We find that naively using all the data for fine-tuning reduces the photo-realism of the generated images and thus the quality of the 3D assets. Therefore, we train a simple scorer on a small amount Figure 3: Architecture of our sparse-view reconstructor. The model applies a pretrained ViT to encode multi-view images into pose-aware image tokens, from which we decode a triplane representation of the scene using a transformer-based decoder. Finally we decode per-point triplane features to its density and color and perform volume rendering to render novel views. We illustrate here with 2 views and the actual implementation uses 4 views. (2000 samples) of manually labeled data to predict the quality of each 3D object. The model is a simple SVM on top of pretrained CLIP features extracted from multi-view renderings of the 3D object (please see Appendix for details). During training, our model only takes the top 10K data ranked by our scorer. We provide a quantitative study in Section 4.3 to validate the impact of different data curation strategies. 
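The curation step above can be sketched as a simple ranking model: an SVM fit on frozen CLIP features of each object's multi-view renderings, then used to keep the top-ranked objects. Binary good/bad labels, mean-pooling over views, and the use of the SVM decision value as the quality score are assumptions of this sketch; the appendix referenced above holds the actual details.

```python
import numpy as np
from sklearn.svm import SVC

def fit_quality_scorer(clip_feats, labels):
    """clip_feats: (n_objects, n_views, d) CLIP embeddings of multi-view renderings;
    labels: manually assigned 0/1 quality labels for a small subset of objects."""
    pooled = clip_feats.mean(axis=1)              # one descriptor per object
    return SVC(kernel="rbf").fit(pooled, labels)

def rank_objects(scorer, clip_feats, top_k=10_000):
    """Score every object and return the indices of the top_k highest-quality ones."""
    scores = scorer.decision_function(clip_feats.mean(axis=1))
    return np.argsort(-scores)[:top_k]
```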
Although the difference is not very significant from the metric perspective, we found that our curated data is helpful in improving the visual quality. Inference with Gaussian blob initialization. While our training data is multi-view images with a white background, we observe that during inference starting from standard Gaussian noise still results in images that have cluttered backgrounds (see Figure 5); this introduces extra difficulty for the feed-forward reconstructor in the second stage (Section 3.2). To guide the model toward generating images with a clean white background, inspired by SDEdit (Meng et al., 2022), we first create an image of a $2 \times 2$ grid with a solid white background that has the same resolution as the output image, and initialize each sub-grid with a 2D Gaussian blob that is placed at the center of the image with a standard deviation of 0.1 (please see Appendix for details). The visualization of this Gaussian Blob is shown in Figure 2. The Gaussian blob image grid is fed to the auto-encoder to get its latent. We then add diffusion noise (e.g., use $t=980/1000$ for 50 DDIM denoising steps), and use it as the starting point for the denoising process. As seen in Figure 5, this technique effectively guides the model toward generating images with a clean background. Lightweight fine-tuning. With all the above observations and techniques, we are able to adapt a text-to-image diffusion model to a text-to-multiview model with lightweight fine-tuning. This lightweight fine-tuning shares a similar spirit to the ‘instruction fine-tuning’ (Mishra et al., 2022; Wei et al., 2021) for LLM alignment. The assumption is that the base model is already capable of the task, and the fine-tuning is to unlock the base model’s ability without introducing additional knowledge. Since we utilize an image grid, the fine-tuning follows the exactly same protocol as the 2D diffusion model pre-training, except that we decrease the learning rate to $10^{-5}$. We train the model with a batch size of 192 for only 10K iterations on the 10K curated multi-view data. The training is done using 32 NVIDIA A100 GPUs for only 3 hours. We study the impact of different training settings in Section 4.3. For more training details, please refer to Appendix. 3.2 Feed-Forward Sparse-View Large Reconstruction Model In this stage, we aim to reconstruct a NeRF from the four-view images $\mathcal{I} = \{I_i | i = 1, ..., 4\}$ generated in the first stage. 3D reconstruction from sparse inputs with a large baseline is a challeng- ing problem, which requires strong model priors to resolve the inherent ambiguity. Inspired by a recent work LRM (Hong et al., 2024) that introduces a transformer-based model for single image 3D reconstruction, we propose a novel approach that enables us to predict a NeRF from a sparse set of input views with known poses. Similar to Hong et al. (2024), our model consists of an image encoder, an image-to-triplane decoder, and a NeRF decoder. The image encoder encodes the multi-view images into a set of tokens. We feed the concatenated image tokens to the image-to-triplane decoder to output a triplane representation (Chan et al., 2022) for the 3D object. Finally, the triplane features are decoded into per-point density and colors via the NeRF MLP decoder. In detail, we apply a pretrained Vision Transformer (ViT) DINO (Caron et al., 2021) as our image encoder. To support multi-view inputs, we inject camera information in the image encoder to make the output image tokens pose-aware. 
This is different from Hong et al. (2024) that feeds the camera information in the image-to-triplane decoder because they take single image input. The camera information injection is done by the AdaLN (Huang & Belongie, 2017; Peebles & Xie, 2022) camera modulation as described in Hong et al. (2024). The final output of the image encoder is a set of pose-aware image tokens $f_{I_i}^*$, and we concatenate the per-view tokens together as the feature descriptors for the multi-view images: $f_I = \oplus(f_{I_1}^*, ..., f_{I_n}^*)$ We use triplane as the scene representation. The triplane is flattened to a sequence of learnable tokens, and the image-to-triplane decoder connects these triplane tokens with the pose-aware image tokens $f_I$ using cross-attention layers, followed by self-attention and MLP layers. The final output tokens are reshaped and upsampled using a de-convolution layer to the final triplane representation. During training, we ray march through the object bounding box and decode the triplane features at each point to its density and color using a shared MLP, and finally get the pixel color via volume rendering. We train the networks in an end-to-end manner with image reconstruction loss at novel views using a combination of MSE loss and LPIPS (Zhang et al., 2018) loss. **Training details.** We train the model on multi-view renderings of the Objaverse dataset (Deitke et al., 2023b). Different from the first stage that performs data curation, we use all the 3D objects in the dataset and scale them to $[-1, 1]^3$; then we generate multi-view renderings using Blender under uniform lighting with a resolution of $512 \times 512$. While the output images from the first stage are generated in a structured setup with fixed camera poses, we train the model using random views as a data augmentation mechanism to increase the robustness. Particularly, we randomly sample 32 views around each object. During training, we randomly select a subset of 4 images as input and another random set of 4 images as supervision. For inference, we will reuse the fixed camera poses in the first stage as the camera input to the reconstructor. For more details on the training, please refer to the Appendix. ## 4 EXPERIMENTS In this section, we first do comparisons against previous methods on text-to-3D (Section 4.1), and then perform ablation studies on different design choices of our method. By default, we report the results generated with fine-tuned SDXL models, unless otherwise noted. ### 4.1 Text-to-3D We make comparisons to state-of-the-art methods on text-to-3D, including a feed-forward method Shap-E (Jun & Nichol, 2023), and optimization-based methods including DreamFusion (Poole et al., 2022) and ProlificDreamer (Wang et al., 2023b). We use the official code for Shap-E, and the implementation from three-studio (Guo et al., 2023) for the other two as there is no official code. We use default hyper-parameters (number of optimization iterations, number of denoising steps) of these models. For our own model we use the SDXL base model fine-tuned on 10K data for 10K steps. During inference we take 100 DDIM steps. **Qualitative comparisons.** As shown in Figure 4, our method generates visually better results than those of Shap-E, producing sharper textures, better geometry and substantially improved text-3D alignment. Shap-E applies a diffusion model that is exclusively trained on million-level 3D data, which might be evidence for the need of 2D data or models with 2D priors. 
DreamFusion and ProlificDreamer achieve better text-3D alignment by utilizing pretrained 2D diffusion models. However, DreamFusion generates results with over-saturated colors and over-smooth textures. While ProlificDreamer results have better details, it still suffers from low-quality geometry (as in ‘A bulldozer clearing ...’) and the Janus problem (as in “a squirrel dressed like ...”, also detailed further in Appendix Figure 11). In comparison, our results have a more photorealistic appearance with better geometric details. Please refer to the Appendix and supplementary materials for video comparisons and more results.

Figure 4: Qualitative comparisons on text-to-3D against previous methods. We include more uncategorized comparisons in the supplementary material.

Quantitative comparisons. In Table 1, we quantitatively assess the coherence between the generated models and text prompts using CLIP-based scores. We perform the evaluation on results with 400 text prompts from DreamFusion. For each model, we render 10 random views and calculate the average CLIP score between the rendered images and the input text. We report the metric using multiple variants of CLIP models with different model sizes and training data (i.e., ViT-L/14 from OpenAI and ViT-bigG-14 from OpenCLIP). From the results we can see that our model achieves higher CLIP scores than Shap-E, indicating better text-3D alignment. Our method even achieves consistently higher CLIP scores than the optimization-based method DreamFusion and competitive scores to ProlificDreamer, from which we can see that our approach can effectively inherit the great text understanding capability of the pretrained SDXL model and preserve it in the generated 3D assets via consistent sparse-view generation and robust 3D reconstruction.

Table 1: Quantitative comparisons on CLIP scores against baseline methods. Our method outperforms the previous feed-forward method Shap-E and the optimization-based method DreamFusion, and achieves competitive performance compared to ProlificDreamer while being $1800 \times$ faster.

| Method | ViT-L/14 ↑ | ViT-bigG-14 ↑ | Time(s) ↓ |
|-----------------|------------|---------------|-----------|
| Shap-E | 20.51 | 32.21 | 6 |
| DreamFusion | 23.60 | 37.46 | 5400 |
| ProlificDreamer | 27.39 | 42.98 | 36000 |
| Ours | 26.87 | 41.77 | 20 |

Inference time comparisons. We present the time to generate a 3D asset in Table 1. The timing is measured using the default hyper-parameters of each method on an A100 GPU. Notably, our method is significantly faster than the optimization-based methods: while it takes 1.5 hours for DreamFusion and 10 hours for ProlificDreamer to generate a single asset, our method can finish the generation within 20 seconds, resulting in a $270\times$ and $1800\times$ speed up, respectively. In Figure 10, we show that our inference time can be further reduced without obviously sacrificing the quality by decreasing the number of DDIM steps.

Table 2: Quantitative comparisons against previous sparse-view reconstruction methods on the GSO dataset.

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|--------------|--------|--------|---------|
| SparseNeus | 20.62 | 0.8360 | 0.1989 |
| Ours | 26.54 | 0.8934 | 0.0643 |

4.2 Comparisons on Sparse View Reconstruction

We make comparisons to previous sparse-view NeRF reconstruction works. Most previous works (Reizenstein et al., 2021; Trevithick & Yang, 2021; Yu et al., 2021) are either trained on small-scale datasets such as ShapeNet, or trained in a category-specific manner.
Therefore, we make comparisons to a state-of-the-art method SparseNeus (Long et al., 2022), which is also applied in One-2-3-45 (Liu et al., 2023a) where they train the model on the same Objaverse dataset for sparse-view reconstruction. We do the comparisons on the Google Scan Object (GSO) dataset (Downs et al., 2022), which consists of 1019 objects. For each object, we render 4-view input following the structured setup and randomly select another 10 views for testing. We adopt the pretrained model from Liu et al. (2023a). Particularly, SparseNeus does not work well for 4-view inputs with such a large baseline; therefore we add another set of 4 input views in addition to our four input views (our method still uses 4 views as input), following the setup in Liu et al. (2023a). We report the metrics on novel view renderings in Table 2. From the table, we can see that our method outperforms the baseline method even with fewer input images, which demonstrates the superiority of our sparse-view reconstructor. 4.3 Ablation Study for Sparse View Generation We ablate several key decisions in our method design, including (1) the choice of the larger 2D base model SDXL, (2) the use of Gaussian Blob during inference, (3) the quality and size of the curated dataset, and lastly, (4) the need and requirements of lightweight fine-tuning. We gather the quantitative results in Table 3 and place all qualitative results in the Appendix. We observe that qualitative results are more evident than quantitative results, thus we recommend a closer examination. Scalability with 2D text-to-image models. One of the notable advantages of our method is that its efficacy scales positively with the potency of the underlying 2D text-to-image model. In Figure 12, we present qualitative comparisons between two distinct backbones (with their own tuned hyper-parameters): SD1.5 (Rombach et al., 2021) and SDXL (Podell et al., 2023). It becomes readily apparent that SDXL, which boasts a model size $3\times$ larger than that of SD1.5, exhibits superior text comprehension and visual quality. We also show a quantitative comparison on CLIP scores in Table 3. By comparing Exp(l, m) with Exp(d, g), we can see that the model with SD1.5 achieves consistently lower CLIP scores indicating worse text-3D alignment. Gaussian blob initialization. In Figure 5, we show our results generated with and without Gaussian blob initialization. From the results we can see that while our fine-tuned model can generate multi-view images without Gaussian blob initialization, they tend to have cluttered backgrounds, which challenges the second-stage feed-forward reconstructor. In contrast, our proposed Gaussian blob initialization enables the fine-tuned model to generate images with a clean white background, which better align with the requirements of the second stage. Quality and size of fine-tuning dataset. We evaluate the impact of the quality and size of the dataset used for fine-tuning 2D text-to-image models. We first make comparisons between curated and uncurated (randomly selected) data. The CLIP score rises slightly as shown in Table 3 (i.e., comparing Exp(d, i)), while there is a substantial quality improvement as illustrated in Appendix Figure 7. This aligns with the observation that the data quality can dramatically impact the results in the instruction fine-tuning stage of LLM (Zhou et al., 2023). When it comes to data size, we observe a double descent from Table 3 Exp(a, d, g) with 1K, 10K, and 100K data. 
We pick Exp(a, d, g) here because they are the best results among different training steps for the same training data size. The reason for this double descent can be spotlighted by the qualitative comparisons in Appendix Figure 13, where training with 1K data can lead to inconsistent multi-view images, while training with 100K data can hurt the compositionality, photo-realism, and also text alignment.

Figure 5: Qualitative comparisons on results generated with and without Gaussian blob initialization.

Table 3: Comparison on CLIP scores of NeRF renderings with different variants of fine-tuning settings.

| Exp ID | Exp Name | Base | # Data | Curated | # Steps | ViT-L/14 | ViT-bigG-14 |
|--------|---------------------------|--------|--------|---------|---------|----------|-------------|
| (a) | Curated-1K-s1k | SDXL | 1K | ✓ | 1k | 26.33 | 41.09 |
| (b) | Curated-1K-s10k | SDXL | 1K | ✓ | 10k | 22.55 | 35.59 |
| (c) | Curated-10K-s4k | SDXL | 10K | ✓ | 4k | 26.35 | 41.08 |
| (d) | Curated-10K-s10k | SDXL | 10K | ✓ | 10k | 26.87 | 41.77 |
| (e) | Curated-10K-s20k | SDXL | 10K | ✓ | 20k | 25.35 | 40.56 |
| (f) | Curated-100K-s10k | SDXL | 100K | ✓ | 10k | 25.79 | 40.32 |
| (g) | Curated-100K-s40k | SDXL | 100K | ✓ | 40k | 26.59 | 41.29 |
| (h) | Curated-300K-s40k | SDXL | 300K | ✓ | 40k | 26.43 | 40.72 |
| (i) | Random-10K-s10k | SDXL | 10K | ✗ | 10k | 26.87 | 41.47 |
| (j) | Random-100K-s40k | SDXL | 100K | ✗ | 40k | 26.28 | 40.90 |
| (k) | AllData-s40k | SDXL | 700K | ✗ | 40k | 26.13 | 40.60 |
| (l) | Curated-10K-s10k (SD1.5) | SD1.5 | 10K | ✓ | 10k | 23.50 | 36.90 |
| (m) | Curated-100K-s40k (SD1.5) | SD1.5 | 100K | ✓ | 40k | 25.48 | 39.07 |

Number of fine-tuning steps. We also quantitatively and qualitatively analyze the impact of fine-tuning steps. For each block in Table 3, we show the CLIP scores at different training steps. Similar to the findings in instruction fine-tuning (Ouyang et al., 2022), the results do not increase monotonically with the number of fine-tuning steps but instead peak in the middle. For example, in our final setup with the SDXL base model and 10K curated data (i.e., Exp(c, d, e)), the results peak at 10K steps. For other setups, the observations are similar. We also qualitatively compare the results at different training steps for 10K curated data in Appendix Figure 14. There is an obvious degradation in the quality of the results for both 4K and 20K training steps. Another important observation is that the peak might move earlier when the model size becomes larger. This can be observed by comparing Exp(l, m) for SD1.5 with Exp(d, g) for SDXL. Note that this comparison is not yet conclusive from the table, given that SD1.5 does not perform reasonably well with our direct fine-tuning protocol. More details are in the Appendix. We also found that Exp(a) with 1K steps on 1K data can achieve the best CLIP scores, but the view consistency is actually disrupted. A possible reason is that the CLIP score is insensitive to certain artifacts introduced by reconstruction from inconsistent images, which also calls for a more reliable evaluation metric for 3D generation.

5 CONCLUSIONS

In this paper, we presented a novel feed-forward two-stage approach, Instant3D, that can generate high-quality and diverse 3D assets from text prompts within 20 seconds. Our method finetunes a 2D text-to-image diffusion model to generate consistent 4-view images, and lifts them to 3D with a robust transformer-based large reconstruction model.
The experiment results show that our method outperforms previous feed-forward methods in terms of quality while being equally fast, and achieves comparable or better performance to previous optimization-based methods with a speed-up of more than 200 times. Instant3D allows novice users to easily create 3D assets and enables fast prototyping and iteration for various applications such as 3D design and modeling. Ethics Statement. The generation ability of our model is inherited from the public 2D diffusion model SDXL. We only do lightweight fine-tuning over the SDXL model thus it is hard to introduce extra knowledge to it. Also, our model can share similar ethical and legal considerations to SDXL. The curation of the data for lightweight fine-tuning does not introduce outside annotators. Thus the quality of the data might be biased towards the preference of the authors, which can lead to a potential bias on the generated results as well. The text input to the model is not further checked by the model, which means that the model will try to do the generation for every text prompt it gets without the ability to acknowledge unknown knowledge. Reproducibility Statement. In the main text, we highlight the essential techniques to build our model for both the first stage (Section 3.1) and the second stage (Section 3.2). We discuss how our data is created and curated in Section 3. The full model configurations and training details can be found in Appendix Section A.3 and Section A.6. We have detailed all the optimizer hyper-parameters and model dimensions. We present more details on our data curation process in Section A.2. We also attach the IDs of our curated data in Supplementary Materials to further facilitate the reproduction. REFERENCES Adobe. Adobe Firefly. https://firefly.adobe.com/, 2023. Sameer Agarwal, Yasutaka Furukawa, Noah Snavely, Ian Simon, Brian Curless, Steven M Seitz, and Richard Szeliski. Building rome in a day. Communications of the ACM, 54(10):105–112, 2011. Alexander Bergman, Petr Kellnhofer, Wang Yifan, Eric Chan, David Lindell, and Gordon Wetzstein. Generative neural articulated radiance fields. Advances in Neural Information Processing Systems, 35:19900–19916, 2022. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the International Conference on Computer Vision (ICCV), 2021. Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In CVPR, 2022. Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14124–14133, 2021. Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision (ECCV), 2022. Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, and Andreas Geiger. Dictionary fields: Learning a neural basis decomposition. ACM Trans. Graph., 2023a. Hansheng Chen, Jiatao Gu, Anpei Chen, Wei Tian, Zhuowen Tu, Lingjie Liu, and Hao Su. Single-stage diffusion nerf: A unified approach to 3d generation and reconstruction. In ICCV, 2023b. 
Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation. arXiv preprint arXiv:2303.13873, 2023c. Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016. Wenzheng Chen, Huan Ling, Jun Gao, Edward Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. Learning to predict 3d objects with an interpolation-based differentiable renderer. Advances in neural information processing systems, 32, 2019. Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.
dIjwC8A0N6
The Lion optimizer demonstrates greater efficiency when employed with larger batch sizes and lower learning rates. It would be great if the authors could address how these specific characteristics of the Lion optimizer influence the training process within their experimental setup.
QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources Anonymous authors Paper under double-blind review Abstract Large Language Models (LLMs) have showcased remarkable impacts across a wide spectrum of natural language processing tasks. Fine-tuning these pre-trained models on downstream datasets provides further significant performance gains, but this process has been challenging due to its extraordinary resource requirements. To this end, existing efforts focus on parameter-efficient fine-tuning, which, unfortunately, fail to capitalize on the powerful potential of full-parameter fine-tuning. In this work, we propose QFT, a novel Quantized Full-parameter Tuning framework for LLMs that enables memory-efficient fine-tuning without harming performance. Our framework incorporates two novel ideas: (i) we adopt the efficient Lion optimizer, which only keeps track of the momentum and has consistent update magnitudes for each parameter, an inherent advantage for robust quantization; and (ii) we quantize all model states and store them as integer values, and present a gradient flow and parameter update scheme for the quantized weights. As a result, QFT reduces the model state memory to 21% of the standard solution while achieving comparable performance, e.g., tuning a LLaMA-7B model requires only <30GB of memory, satisfied by a single A6000 GPU. 1 Introduction Large Language Models (LLMs), with up to hundreds of billions of parameters, have left an indelible mark on the landscape of natural language processing tasks, showcasing their remarkable impacts across a diverse spectrum of applications and domains (Touvron et al., 2023a,b; Brown et al., 2020; Zhang et al., 2022). Fine-tuning these pre-trained models on downstream datasets enhances their ability to understand and perform specific tasks (Zhao et al., 2023). However, due to the enormous number of parameters, the fine-tuning process requires unprecedented resources. Parameter-efficient fine-tuning, involving the tuning of only selected parameters, is deemed a practical choice for low-resource situations (Ding et al., 2022; Hu et al., 2021; Li & Liang, 2021). Regrettably, owing to the limited representational capacity of the smaller parameter set, the outcomes of this approach often fall short of expectations (Lv et al., 2023). Therefore, our emphasis is placed on full-parameter fine-tuning, with a keen interest in investigating memory optimization strategies to render it feasible on cost-effective resources. We begin by examining the full spectrum of memory usage in full-parameter fine-tuning, which can be categorized into three components: model states, activation, and other temporary or unusable memory. Model states, which include the model parameters (weights), gradients, and optimizer states (such as momentum and variances in Adam (Kingma & Ba, 2015)), are mandatory to store and consequently consume the majority of the memory (Rajbhandari et al., 2020). For instance, when employing the standard fp32 training settings with the Adam optimizer, the memory allocation for model parameters, gradients, momentum, and variances amounts to 4 times the number of parameters. As a result, tuning a LLaMA-7B model necessitates a minimum of 100.4GB of RAM, which presents a formidable challenge given the limitations of current GPU capacities. In this work, we are motivated to reduce the memory usage of all model states through quantized low-precision representations. 
First, instead of resorting to straightforward quantization, we expect an optimizer that simplifies the computation to replace Adam. Fortunately, the Lion optimizer (Chen et al., 2023) aligns almost perfectly with our expectations, as it only keeps track of the momentum and naturally eliminates the memory usage of the variances. And more importantly, its update has the same magnitude for each parameter, thus mitigating potential imbalances or inaccuracies in weight updates introduced by limited representation precision. Afterwards, we develop lightweight yet accurate quantizers for each model state, notably the dense-and-sparse quantizer (Kim et al., 2023) for weight parameters, which are then stored in the quantized integer format. During computation, these quantized representations are dequantized on-the-fly into the floating-point format to dynamically perform high-precision arithmetic. Moreover, we present a novel gradient flow scheme for the quantized weights to ensure proper error propagation and parameter updates in training. More specifically, our contribution can be summarized as follows: • We propose QFT, a novel Quantized Full-parameter Tuning framework for LLMs, which leverages quantization to optimize memory usage in fine-tuning without sacrificing performance. QFT can be seamlessly integrated into mainstream LLM training tools with minor modifications to a few training units, and is well compatible with existing memory optimization methods. • We analyze the simplicity and memory efficiency of the Lion optimizer and confidently recommend it as the best choice for quantized fine-tuning. On this basis, we proceed to quantize all model states into the integer format, with each quantizer striking a balance between training accuracy and throughput. We also present a gradient flow scheme for the quantized weights. • We perform instruction tuning on the pre-trained LLaMA-2 models and extensively evaluate performance on various benchmarks. The results demonstrate that our QFT, with memory usage reduced to 21%, achieves comparable performance to standard floating-point training. 2 RELATED WORKS Efficient Optimizer The primary optimizers employed for training transformer models are the Adam family (Kingma & Ba, 2015; Loshchilov & Hutter, 2017). They maintain a rolling average of the previous gradients to promote stable convergence in training. However, their optimizer states (momentum and variances) imposes an extra memory overhead proportional to the number of model parameters, and this becomes a significant burden as LLMs’ parameters increase. To overcome the memory challenges of model states, there are various memory-efficient schemes. LOMO (Lv et al., 2023) utilizes a vanilla SGD optimizer for training LLMs, which unfortunately fails to ensure training performance due to the slow convergence and weak stability of SGD (Li et al., 2023). Another imperfect solution is to utilize an Adafactor optimizer (Shazeer & Stern, 2018), which, despite storing only aggregated information, is also beset by instability issues. In this work, we adopt the Lion optimizer (Chen et al., 2023), relying on its advantage of only keeping track of the momentum but achieving comparable convergence to Adam. More importantly, thanks to the sign operation, its update has the same magnitude for each parameter, which gives it a great potential for robust quantization of gradients and optimizer states. 
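To make the "same magnitude for each parameter" property concrete, here is a minimal single-tensor sketch of the published Lion update rule (Chen et al., 2023). This is the plain floating-point rule that Algorithm 2 later builds on, not the quantized variant developed in this paper, and the hyper-parameter defaults are illustrative rather than the paper's settings.

```python
# Minimal single-tensor Lion step: thanks to the sign operation, every coordinate of the
# parameter update has magnitude lr (up to weight decay), and the only optimizer state is
# an EMA of the gradients (the momentum).
import torch

def lion_step(w, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    update = torch.sign(beta1 * momentum + (1 - beta1) * grad)   # interpolate, then take the sign
    w = w - lr * (update + weight_decay * w)                     # constant-magnitude update + decoupled decay
    momentum = beta2 * momentum + (1 - beta2) * grad             # EMA of gradients is the sole state
    return w, momentum
```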
Quantization for Memory Optimization Most existing quantization methods focus on inference efficiency (Gholami et al., 2022; Dong et al., 2019, 2020; Kim et al., 2023; Li et al., 2022a,b; Li & Gu, 2022; Jacob et al., 2018), and recently, quantization is also believed to have great potential for optimizing training efficiency. Note that this research line is different from traditional quantization-aware training (QAT) (Jacob et al., 2018; Liu et al., 2023). QAT inserts fake quantization nodes on weights and activations in training, where parameter arithmetic and storage retains the floating-point format, and thus training efficiency is not improved. As a comparison, quantization-based memory optimization methods, which attempt to utilize low-precision units to store parameters, can effectively reduce the memory budget in training, and thus have received increasing attention. Bitsandbytes (Dettmers et al., 2021) introduces a block-wise quantization method to compress the memory of optimizer states. QLoRA (Dettmers et al., 2023) uses quantized values to store frozen pre-training weights, keeping only the adapters in the floating-point format. In this work, we propose a novel memory-efficient full-parameter fine-tuning framework for LLMs, in which all model states are stored as quantized integer values, enabling comprehensive memory compression without sacrificing fine-tuning performance. Other Memory Optimization Methods Other prominent memory optimization methods include offloading (Huang et al., 2020; Wang et al., 2018; Peng et al., 2020) and gradient checkpointing (Chen et al., 2016; Kumar et al., 2019; Jain et al., 2020; Kirisame et al., 2020). Activation offloading offloads activation to external memory (e.g., CPU memory). It is worth noting that offloading comes at the cost of transferring data to another storage, which can increase execution time. Gradient checkpointing is a technique that discards activations in the forward pass and recomputes them in the backward pass as needed. This approach involves a trade-off between memory usage and computation cost. In addition, there are also customized schemes proposed for training LLMs. LOMO (Lv et al., 2023) fuses the gradient computation and the parameter update in one step. This method can reduce the memory usage of gradient tensors to $O(1)$; however, there is a potential caveat as it is incompatible with gradient accumulation for scaling batch sizes, limiting it to unstable training with small batch sizes. In contrast, our framework is orthogonal and well compatible with all the above methods. 3 METHODOLOGY 3.1 LION OPTIMIZER In a recent exploration of algorithm discovery through program search for neural network training, a novel optimization algorithm, Lion (EvoLved Sign Momentum), was conceived (Chen et al., 2023). The method explores an expansive program space while implementing program selection and simplification strategies. Lion stands out due to its simplicity and memory-efficiency, only tracking momentum, differing from adaptive optimizers by employing a consistent magnitude update for each parameter using the sign operation. Comparative studies with established optimizers, like Adam (Kingma & Ba, 2015) and Adafactor (Shazeer & Stern, 2018), underscored Lion’s efficacy, leading to superior results in various domains, from image classification to language modeling. 
Particularly notable, Lion boosts the accuracy of Vision Transformers (ViT) on ImageNet, decreases pre-training compute on JFT, and surpasses Adam in training diffusion models. However, its advantages grow with increased training batch sizes and necessitate a lower learning rate than Adam, given the larger update norm resulting from the sign function. Designing quantized fine-tuning algorithms involves working with limited-precision representations of parameters, gradients and momentum. This can lead to several challenges, including increased sensitivity to noise, potential accumulation of rounding errors, and other precision-related issues. We find Lion more suitable for the task of quantized fine-tuning, due to the following reasons: • **Simplicity:** Lion is simpler and more memory-efficient since it only keeps track of the momentum. This reduced complexity might be beneficial when dealing with quantized values, where added algorithmic intricacies can amplify quantization errors. • **Consistent Update Magnitudes:** Unlike adaptive optimizers, Lion ensures that updates have the same magnitude for each parameter, which is determined through the sign operation. In a quantized setting, this consistency can mitigate potential imbalances or inaccuracies in weight updates introduced by limited precision. • **Memory Efficiency:** Memory usage is a common concern in quantized neural networks, especially when deploying on edge devices with constrained memory. Lion’s memory efficiency (only tracking momentum) makes it a potentially better fit for such quantized settings than optimizers like Adam, which track more state variables. 3.2 QUANTIZATION The Lion optimizer simplifies the composition of model states, which consist only of model weights, gradients, and optimizer momentum, resulting in a 25% reduction in memory usage compared to the memory-intensive Adam optimizer. However, it is imperative to recognize that these model states are still retained in the original floating-point format, a characteristic that can introduce redundant representations and, consequently, contribute to memory inefficiency. In light of this consideration, quantization, which involves the use of reduced-precision formats such as INT8 to represent neural networks, emerges as a compelling avenue for further memory optimization. The field of quantization methods primarily emphasizes improving model inference efficiency, with limited attention paid to reducing training overhead (Dettmers et al., 2021). Our approach stands out through a comprehensive training memory compression, which is accomplished by quantizing all model states within the Lion optimizer and storing them as integer values. This sets our approach apart from traditional QAT (Jacob et al., 2018). In our method, we initially store model parameters as quantized integers, whereas traditional QAT introduces fake quantization nodes to floating-point parameters. This distinction highlights the significance of our approach, as the latter method, with the reliance on fake quantization nodes, do not inherently enhance training efficiency. To more clearly demonstrate this difference, we present a comparison in Figure 1. We first perform an in-depth examination of the numerical distributions of the model weights, gradients and optimizer momentum, as shown in Figure 2. This comprehensive analysis forms the basis for designing appropriate quantization strategies. 
Remarkably, we prioritize lightweight quantizers to minimize the impact of de-quantization on the training throughput. In the following, we describe in detail the quantizers employed for different model states. **Uniform Quantizer for Gradients and Momentum** The gradients and momentum values exhibit a central distribution with few outliers that deviate from the central range, allowing us to confidently utilize the uniform quantizer, which is regarded as the most fundamental quantization method. The uniform quantizer includes two essential procedures: quantization and de-quantization, which are defined as follows: \[ \text{Quant} : \mathbf{X}^{(Z)} = \text{clip}\left(\left\lfloor \frac{\mathbf{X}}{s} \right\rfloor + z, 0, 2^b - 1\right) \\ \text{De-quant} : \tilde{\mathbf{X}} = s \left(\mathbf{X}^{(Z)} - z\right) \approx \mathbf{X} \] where \( \mathbf{X} \) is the floating-point vector, \( \mathbf{X}^{(Z)} \) is the quantized integer vector, \( \lfloor \cdot \rfloor \) denotes the round function, and \( b \in \mathbb{N} \) is the quantization bit-width. \( s \in \mathbb{R}^+ \) and \( z \in \mathbb{Z} \) are the quantization scale and zero-point, respectively, and with fast computational considerations, they are directly determined by the arithmetic lower and upper bounds of \( \mathbf{X} \) as follows: \[ s = \frac{\max(\mathbf{X}) - \min(\mathbf{X})}{2^b - 1}, \quad z = \left\lfloor -\frac{\min(\mathbf{X})}{s} \right\rfloor \] Dense-and-Sparse Quantizer for Weights In contrast to gradients and momentum, whose probability distributions lend themselves well to quantization, the weights present a distinct challenge. This challenge arises from their considerably broader range, which is approximately three orders of magnitude larger than that of momentum, as well as the presence of pronounced outliers. This combination of factors makes the accurate quantization of weights a particularly formidable task (Kim et al., 2023; Frantar et al., 2022; Lin et al., 2023). Upon revisiting the weight distribution, we uncover an intriguing pattern: if we set aside the extreme outliers, the remaining parameters coalesce into a notably compact distribution. To elucidate, the initial expansive range is predominantly influenced by these extreme outliers, with a striking statistic that 99% of the values cluster within a mere 20% of the overall range. This revelation serves as the catalyst for our approach, drawing inspiration from the dense-and-sparse quantizer presented in (Kim et al., 2023). This method effectively ameliorates the issue of outliers by decomposing the weights into two distinct matrices: one dense and the other sparse. Formally, the method is defined as follows: \[ W = D + S \quad \text{s.t.} \quad D = W[T_{\min} \leq w \leq T_{\max}] \\ \text{and} \quad S = W[w < T_{\min} \text{ or } w > T_{\max}] \] where \(D\) is a dense matrix representing the centralized values, and \(S\) is a sparse matrix representing the outliers. Here, \(T_{\min}\) and \(T_{\max}\) are the thresholds for identifying outliers, which can be determined by the percentage of the range. It’s important to highlight that the matrix decomposition process is numerically straightforward, ensuring a high level of computational efficiency with minimal repercussions on training overhead. Subsequently, the dense matrix adheres to the simple uniform quantizer as described in Equation 1, while the sparse matrix retains its data in the floating-point format. 
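The following is a minimal per-tensor sketch of the two quantizers above: the uniform quantization/de-quantization pair used for gradients and momentum, and the dense-and-sparse decomposition used for weights. The experiments actually apply channel-wise quantization, and the exact outlier-threshold rule is deferred to Appendix A.1; the simple symmetric range-fraction threshold below is an assumption for illustration, and the small epsilon is only a numerical guard.

```python
# Per-tensor sketch of the quantizers in Section 3.2. Dense values are stored as INT8;
# outlier weights are kept in floating point and stored sparsely.
import torch

def uniform_quant(x, bits=8):
    s = (x.max() - x.min()) / (2 ** bits - 1) + 1e-12   # quantization scale (eps avoids /0)
    z = torch.round(-x.min() / s)                         # zero-point
    q = torch.clamp(torch.round(x / s) + z, 0, 2 ** bits - 1).to(torch.uint8)
    return q, s, z

def uniform_dequant(q, s, z):
    return s * (q.float() - z)

def dense_and_sparse_quant(w, bits=8, outlier_frac=0.01):
    # Split W = D + S: central values go to a quantized dense matrix, outliers stay in fp.
    lo, hi = w.min(), w.max()
    t_min = lo + 0.5 * outlier_frac * (hi - lo)           # assumed symmetric range cut
    t_max = hi - 0.5 * outlier_frac * (hi - lo)
    outlier_mask = (w < t_min) | (w > t_max)
    dense = torch.where(outlier_mask, torch.zeros_like(w), w)
    q, s, z = uniform_quant(dense, bits)
    sparse = (w * outlier_mask).to_sparse()               # few outliers, sparse (COO here; CSR in the paper)
    return (q, s, z), sparse

def dense_and_sparse_dequant(dense_pack, sparse):
    # Approximate reconstruction of W from the quantized dense part plus the fp outliers.
    q, s, z = dense_pack
    return uniform_dequant(q, s, z) + sparse.to_dense()
```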
Notably, given that the outliers constitute a relatively minor fraction, such as 1%, the sparse matrix can capitalize on memory-efficient storage techniques, like compressed sparse row (CSR) format, which can be instrumental in substantially mitigating memory overhead. ### 3.3 Overall Framework In this section, we integrate the above efficient Lion optimizer and quantization methods and introduce a memory-efficient fine-tuning framework for LLMs. We provide a comprehensive description of each training phase, including forward propagation, backward propagation, and parameter update, with particular emphasis on the quantized gradient flow and the quantized optimizer step. #### Algorithm 1 Gradient Flow of Quantized Weights ``` # T_l : saved tensors in forward pass of layer l # g_o : gradient of the current layer's output S_g ← stack () for l = L, L − 1, · · · , 1 do I_l, W_l(Z) ← T_l W_l ← dequant (W_l(Z)) calculate gradients of I_l and W_l g_i ← matmul (g_o, W_l) g_w ← matmul (g_o, I_l) g_w(Z) ← quant (g_w) ▷ store as INT8 push (S_g, g_w) ▷ collect gradient assign g_o of layer (l-1) g_o ← g_i end for ``` #### Algorithm 2 Quantized Lion Optimizer ``` # β_1, β_2, λ, η, f : optimizer parameters # m_l : optimizer momentum of layer l for l = 1, 2, · · · , L do g_w(Z) ← pop (S_g) ▷ retrieve gradient g_w ← dequant (g_w(Z)) m_l ← dequant (m_l(Z)) W_l ← dequant (W_l(Z)) update model parameters Δ ← β_1m_l + (1 − β_1)g_w W_l ← W_l − η(sign(Δ) + λW_l) update EMA of g_w m_l ← β_2m_l + (1 − β_2)g_w m_l(Z) ← quant (m_l) ▷ store as INT8 W_l(Z) ← quant (W_l) ▷ store as INT8 end for ``` Quantized Forward Propagation Within our framework, we initially represent weights as quantized integer values to optimize memory utilization. During the execution of forward propagation, we de-quantize these low-precision weights into the floating-point format on-the-fly, thereby enabling high-precision arithmetic operations. For more clarity, we visualize this critical process in Figure 1. Quantized Backward Propagation In the backward propagation phase, the final task loss is propagated forward from the last layer in a sequential manner, and throughout this process, the gradient of each parameter is computed. It’s worth noting that these gradients need to be kept in memory, as they serve as essential information for guiding subsequent updates to the parameters. However, in mainstream deep learning frameworks like PyTorch, only parameters in the floating-point format can possess the gradient property, while those in the integer format cannot. Consequently, we cannot compute and store the gradients using the automatic differentiation functionality (i.e., AutoGrad) in such cases. To this end, we design the gradient flow of integer weights, as presented in Algorithm 1. As in forward propagation, we begin by de-quantizing the weights into the floating-point format. Subsequently, leveraging the gradient of the output, we apply the chain rule to compute the gradients of both the input and the weights. Beyond the computational aspect, preserving the gradients of the weights presents its own set of formidable challenges. To address this, we introduce a gradient retention scheme centered around the maintenance of a global stack. In this scheme, the gradient of each layer is sequentially pushed to the stack, following the backward flow of information during the backward propagation. 
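Putting the pieces together, the sketch below strings Algorithm 1 and Algorithm 2 together for a single linear layer: weights, gradients, and momentum live as INT8 between steps and are dequantized on-the-fly for computation. It is a single-layer illustration of the flow only (the real framework loops over all layers and relies on the first-in-last-out stack order); the layer sizes, seed, and helper names are arbitrary, and for self-containment it re-defines a compact uniform quantizer like the one above.

```python
# One layer's worth of the quantized training loop: backward pass pushes an INT8 gradient
# onto the stack (Algorithm 1), then the optimizer pops it, dequantizes all states, applies
# the Lion update, and re-quantizes weights and momentum (Algorithm 2).
import torch

def quant(x, bits=8):
    s = (x.max() - x.min()) / (2 ** bits - 1) + 1e-12
    z = torch.round(-x.min() / s)
    return torch.clamp(torch.round(x / s) + z, 0, 2 ** bits - 1).to(torch.uint8), s, z

def dequant(pack):
    q, s, z = pack
    return s * (q.float() - z)

torch.manual_seed(0)
W_q = quant(torch.randn(16, 32) * 0.02)        # quantized weights, shape (out_dim, in_dim)
m_q = quant(torch.zeros(16, 32))               # quantized Lion momentum
grad_stack = []                                # global stack from Algorithm 1

# ---- backward pass (Algorithm 1): gradients computed in fp, then stored as INT8 ----
x = torch.randn(4, 32)                         # saved layer input from the forward pass
W = dequant(W_q)                               # dequantize on-the-fly
g_out = torch.randn(4, 16)                     # gradient w.r.t. this layer's output
g_in = g_out @ W                               # gradient passed to the previous layer
g_w = g_out.t() @ x                            # weight gradient via the chain rule
grad_stack.append(quant(g_w))                  # push INT8 gradient onto the stack

# ---- optimizer step (Algorithm 2): pop, dequantize, Lion update, re-quantize ----
beta1, beta2, lr, wd = 0.9, 0.99, 1e-4, 0.0
g_w = dequant(grad_stack.pop())
m, W = dequant(m_q), dequant(W_q)
delta = beta1 * m + (1 - beta1) * g_w
W = W - lr * (torch.sign(delta) + wd * W)      # sign update: constant magnitude per weight
m = beta2 * m + (1 - beta2) * g_w
m_q, W_q = quant(m), quant(W)                  # model states go back to INT8 storage
```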
Quantized Parameter Update Ultimately, the parameter update are executed in accordance with the Lion optimizer procedures, with the notable difference that the gradients and momentum are stored in the integer format. The quantized optimizer step is outlined in Algorithm 2. Initially, we pop the elements from the global stack to access and retrieve the gradients. It is essential to emphasize the exceptional computational efficiency of this popping process, as its computational complexity consistently remains at $O(1)$, independent of the stack length. This efficiency arises from a distinct pattern: in the backward propagation phase, the gradients are sequentially pushed into the stack, beginning from the last layer. Conversely, in the optimizer step, the gradients are popped in a sequential manner, commencing from the first layer. This strategic arrangement ensures that the gradient of the current layer always occupies the last position in the stack, fully capitalizing on the first-in-last-out property inherent to stack data structures. 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP Models and Benchmarks We conduct adequate evaluation of the proposed QFT by fine-tuning the advanced pre-trained model, LLaMA-2 (Touvron et al., 2023b), including the 7b and 13b versions. The few-shot performance of fine-tuned models is comprehensively evaluated on a variety of standard benchmarks, including ARC (Clark et al., 2018), HellaSwag (Zellers et al., 2019), MMLU (Hendrycks et al., 2020), and TruthfulQA (Lin et al., 2021). All results are obtained using the Language Model Evaluation Harness tool (Gao et al., 2021). In addition, we also use MT-Bench (Zheng et al., 2023) with GPT-4 scores to evaluate the conversational abilities of the models. Dataset Preparation In our experiment, we utilized a dataset comprising 94.1K shareGPT entries (HuggingFace, 2023b; shareGPT, 2023), which encompass user interactions with chatGPT. We adopted the data cleaning procedures from Fastchat (Chiang et al., 2023), converting HTML to markdown, eliminating non-English conversations, and segmenting extended dialogues into sequences capped at a length of 2048. Baseline Methods We evaluate QFT in terms of both training memory and performance. For training memory, QFT is compared to floating-point Adam (Kingma & Ba, 2015), Lion (Chen et al., 2023), as well as bitsandbytes with quantized optimizer states (Dettmers et al., 2021). For the performance of instruction tuning, we take Vicuna (Chiang et al., 2023), which performs full-parameter fine-tuning in the floating-point format, as the baseline method. For a fair comparison, we reproduce its results using the same dataset as QFT. Table 1: Memory usage (in GB) when fine-tuning the LLaMA-2-7b model using different methods. We report the full spectrum of memory profiles, as well as the total allocated memory and peak allocated memory. For model states, the Lion optimizer in floating-point format provides a 25% memory reduction, and further, our QFT introduces quantization that reduces the memory to 21% of the Adam optimizer, allowing for fine-tuning within 30GB of RAM. 
| Method | Weights | Gradients | Weight Copies (Opt.) | Momentum (Opt.) | Variances (Opt.) | Activation | Total | Peak |
|--------------|---------|-----------|----------------------|-----------------|------------------|------------|-------|------|
| Adam | 25.1 | 25.1 | - | 25.1 | 25.1 | 3.75 | 104 | 129 |
| Adam-mixed | 12.6 | 12.6 | 25.1 | 25.1 | 25.1 | 3.75 | 104 | 123 |
| bitsandbytes | 12.6 | 12.6 | 25.1 | 6.31 | 6.31 | 3.75 | 66.6 | 86.6 |
| Lion | 25.1 | 25.1 | - | 25.1 | - | 3.75 | 79.1 | 101 |
| QFT | 7.42 | 7.06 | - | 7.06 | - | 3.75 | 25.3 | 28.9 |

Training Details During training, we apply channel-wise quantization for all quantizers of model states. The threshold $T$ in the dense-and-sparse quantizer is obtained from 1% of the distribution range (please see Appendix A.1 for details). The training parameters are set to align with Vicuna’s settings: the global batch size is 128, the learning rate is 2e-5, and the total number of epochs is 3.

4.2 MEMORY PROFILE

We start by discussing the memory usage of different methods, and the results of fine-tuning the LLaMA-2-7b model are reported in Table 1. In the training that employs the Adam optimizer with standard settings, it becomes evident that the memory consumption is substantial. Specifically, the model weights, gradients, momentum, and variances each occupy a considerable 25.1GB of RAM, so the model states together take up 4 times the memory of the model parameters, resulting in a formidable resource burden. Remarkably, this memory issue persists when employing the Adam optimizer with mixed-precision settings. Despite the fact that the numerical precision of both weights and gradients experiences a 50% reduction during the forward and backward computations, full-precision weight copies must still be kept within the optimizer states to guarantee the stability of parameter updates, as discussed in detail in Appendix A.2, and thus the goal of conserving memory remains unattainable.

The Lion optimizer simplifies the optimizer states by only keeping track of the momentum, resulting in a noteworthy reduction in memory usage, about 25% less than that of the Adam optimizer. Notably, the model states still retain the floating-point format, and this redundant representation offers additional opportunities for optimization. To this end, bitsandbytes employs quantization methods to convert the momentum and variances into the integer format, resulting in an impressive memory saving of about 37GB. Nevertheless, the retention of floating-point weights and gradients remains a hurdle, preventing complete memory conservation and continuing to strain the training resources.

Our QFT, built on top of the Lion optimizer, employs a comprehensive quantization scheme encompassing all model states, including weights, gradients, and optimizer momentum. These parameters can be efficiently stored in the low-precision integer format. This allows the GPU to allocate only 21.5GB of RAM to store these parameters, marking a remarkable reduction to a mere 21% in comparison to the memory requirements of the Adam optimizer. During the practical training process, when taking into account factors such as activation, as well as several caches and memory fragments, the peak allocated memory remains comfortably below 30GB, allowing us to fine-tune within budget-friendly computing resources.
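As a sanity check on Table 1, the model-state numbers can be reproduced to first order from the parameter count alone. The script below assumes roughly 6.74B parameters for LLaMA-2-7B and the standard fp32/fp16/int8 element sizes; the small gap between the pure-INT8 estimate and QFT's reported 21.5GB plausibly reflects per-channel quantization parameters and the sparse floating-point outliers, though the paper does not break this down explicitly.

```python
# Back-of-the-envelope check of the Table 1 model-state entries (sizes in GiB).
# The parameter count is approximate; only model states are counted, not activation.
N = 6.74e9                                   # assumed LLaMA-2-7B parameter count
GiB = 2 ** 30

def gib(n_params, bytes_per_param):
    return n_params * bytes_per_param / GiB

fp32, fp16, int8 = 4, 2, 1
print(f"fp32 tensor of all params : {gib(N, fp32):5.1f} GiB")   # ~25.1 (weights/grads/momentum/variances)
print(f"fp16 tensor of all params : {gib(N, fp16):5.1f} GiB")   # ~12.6 (mixed-precision weights/grads)
print(f"int8 tensor of all params : {gib(N, int8):5.1f} GiB")   # ~6.3  (quantized states)

adam_fp32 = 4 * gib(N, fp32)                 # weights + grads + momentum + variances
lion_fp32 = 3 * gib(N, fp32)                 # weights + grads + momentum
qft_int8 = 3 * gib(N, int8)                  # the same three states, stored as INT8
print(f"Adam states ~{adam_fp32:.0f} GiB, Lion states ~{lion_fp32:.0f} GiB, "
      f"QFT states ~{qft_int8:.0f} GiB (+ per-channel scales and sparse outliers)")
```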
4.3 PERFORMANCE EVALUATION In this section, we conduct a comprehensive evaluation of the instruction fine-tuning performance in both conventional and advanced manners, which are in turn compared and analyzed in detail below. In addition, we also provide a qualitative analysis of the model’s language generation capabilities in Appendix A.3. Table 2: Few-shot performance of different models on various standard benchmarks. Here, the number of shots is aligned to Open LLM Leaderboard (HuggingFace, 2023a). We take the pre-trained LLaMA-2 model as the baseline and compare the instruction tuning results of our QFT and Vicuna. Our QFT, with less resource consumption, encouragingly provides substantial improvement over pre-trained models and rivals the outcomes of full-precision tuning. | Model | ARC-c (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA-mc (0-shot) | Average | |----------------|-----------------|---------------------|---------------|------------------------|---------| | LLaMA-2-7B | 53.1 | 78.6 | 46.9 | 38.8 | 54.4 | | Vicuna-7B* | 53.6 | 77.3 | 49.4 | 51.5 | 58.0 | | LLaMA-2-7B-QFT| 52.9 | 76.7 | 48.8 | 51.1 | 57.4 | | LLaMA-2-13B | 59.4 | 82.1 | 55.8 | 37.4 | 58.7 | | Vicuna-13B* | 57.0 | 81.2 | 55.8 | 50.9 | 61.2 | | LLaMA-2-13B-QFT| 56.2 | 81.0 | 55.9 | 48.6 | 60.4 | **Few-Shot Evaluation** We perform few-shot performance evaluations across a range of well-established benchmarks to assess the effectiveness of QFT. The obtained results, pertaining to various model configurations, are comprehensively presented in Table 2. To maintain consistency, we opt to employ the same evaluation metrics as those employed in Open LLM Leaderboard (HuggingFace, 2023a) and ensure alignment with key experimental settings, such as the number of shots. As we can see, when fine-tuning a LLaMA-2-7B model, it becomes evident that QFT introduces a remarkable enhancement in performance. Specifically, QFT substantially elevates the average performance score, catapulting it from an initial value of 54.4 to a significantly improved 57.4. Impressively, this achievement positions QFT within a mere 0.6 points of the Vicuna model, which has undergone full-precision tuning. Regarding specific individual metrics, such as 5-shot MMLU, we observe an improvement in results from 46.9 to 48.8, highlighting the model’s enhanced problem-solving capability. Furthermore, it is imperative to provide a clarification regarding the observed slight decline in the 10-shot HellaSwag results across both fine-tuning settings. This diminution can be attributed, in part, to the influence exerted by the fine-tuning dataset and, in part, to the inherent limitations of a single benchmark evaluation, which may introduce a certain degree of one-sidedness or even inaccuracies into the assessment process (Liao et al., 2021). Consequently, it becomes increasingly evident that the central focus should shift to a careful comparison between the performance of Vicuna and QFT rather than dwelling extensively on the improvement of the pre-trained model itself, and it is indeed reassuring to note that QFT consistently demonstrates the ability to achieve results comparable to those achieved by the Vicuna model. **MT-Bench Score** Besides the conventional benchmarks described above, there is a more advanced benchmark, MT-Bench, to evaluate the conversational abilities of LLMs. 
MT-Bench consists of a series of challenging multi-round open-ended questions that match the characteristics and preferences of human conversations, and uses GPT-4 as a judge to automatically score the responses. The score results are reported in Table 3. As an illustrative example, we provide a detailed discussion of the 7B models. Initially, the LLaMA-2 model, in its pre-trained state, yields a rather modest score of 3.83, indicating a considerable limitation in its problem-solving ability. For the Vicuna model tuned in full precision, the score undergoes a substantial augmentation, surging to 6.08, while our QFT-tuned model reaches a close 5.95.

Table 3: MT-Bench scores using GPT-4 of different models. They can reflect the conversational abilities of these models. Our QFT significantly outperforms the pre-trained LLaMA-2 model, and achieves comparable results to the Vicuna model tuned in full precision.

| Model | MT-Bench Score (GPT-4) |
|----------------|------------------------|
| GPT-3.5 | 7.94 |
| LLaMA-2-7B | 3.83 |
| Vicuna-7B* | 6.08 |
| LLaMA-2-7B-QFT | 5.95 |
| LLaMA-2-13B | 4.69 |
| Vicuna-13B* | 6.46 |
| LLaMA-2-13B-QFT| 6.27 |

Figure 3: Radar charts of each capability in MT-Bench of different models. Compared to the pre-trained LLaMA-2 model, our QFT yields across-the-board improvements in all metrics. Compared to the Vicuna model tuned in full precision, our QFT achieves similar results and even surpasses it in some abilities, such as the Math metrics in the 7B model setting.

To facilitate a more visual comparison, we provide radar charts that encompass eight capacity indicators, as illustrated in Figure 3. These radar charts clearly show that QFT provides a comprehensive and transformative improvement across all measured metrics compared to the baseline performance of the pre-trained LLaMA-2 model. In comparison to the Vicuna model tuned in full precision, QFT achieves comparable results and even outperforms it in certain aspects, e.g., in the 7B model setting, QFT exhibits superior performance in the Math metrics.

5 CONCLUSIONS AND BROADER IMPACTS

In this paper, we propose a Quantized Full-parameter Tuning (QFT) framework for LLMs, which leverages quantization techniques to comprehensively optimize training memory and enable fine-tuning on affordable resources. We employ the memory-efficient Lion optimizer, which provides significant advantages for robust quantized fine-tuning. Upon this, we develop customized quantizers to store all model states in the integer format, significantly reducing the memory usage. QFT incorporates these two innovations and designs a novel gradient flow scheme to accommodate them. We perform instruction tuning on the pre-trained LLaMA-2 models to verify the effectiveness of QFT, and the results demonstrate that QFT can reduce memory usage to 21% while achieving comparable performance to standard floating-point training. QFT can be easily integrated into mainstream LLM training tools and offers great compatibility with other memory optimization methods, demonstrating remarkable adaptability and utility in real-world applications. Additionally, it has the potential to produce broader impacts:

• Quantized Training from Scratch: The parameters to be updated and the optimizer configurations in full-parameter tuning are consistent with the pre-training process, thus QFT can be migrated and applied to training-from-scratch cases.

• Lower-Precision Optimizer Momentum: Recent research has explored the compression of optimizer states to 4-bits (Li et al., 2023).
It holds promise to explore the combination of QFT with this approach for even more substantial memory reduction. REFERENCES Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. *arXiv preprint arXiv:1604.06174*, 2016. X Chen, C Liang, D Huang, E Real, K Wang, Y Liu, H Pham, X Dong, T Luong, CJ Hsieh, et al. Symbolic discovery of optimization algorithms. arxiv 2023. *arXiv preprint arXiv:2302.06675*, 2023. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/ Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. *arXiv preprint arXiv:1803.05457*, 2018. Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke Zettlemoyer. 8-bit optimizers via block-wise quantization. In *International Conference on Learning Representations*, 2021. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. *arXiv preprint arXiv:2305.14314*, 2023. Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. *arXiv preprint arXiv:2203.06904*, 2022. Zhen Dong, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Hawq: Hessian aware quantization of neural networks with mixed-precision. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 293–302, 2019. Zhen Dong, Zhewei Yao, Daiyaan Arfeen, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Hawq-v2: Hessian aware trace-weighted quantization of neural networks. *Advances in neural information processing systems*, 33:18518–18529, 2020. Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for generative pre-trained transformers. *arXiv preprint arXiv:2210.17323*, 2022. Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021. URL https://doi.org/10.5281/zenodo.5371628 Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. A survey of quantization methods for efficient neural network inference. In *Low-Power Computer Vision*, pp. 291–326. Chapman and Hall/CRC, 2022. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. *arXiv preprint arXiv:2009.03300*, 2020. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. 
Chien-Chin Huang, Gu Jin, and Jinyang Li. Swapadvisor: Pushing deep learning beyond the gpu memory limit via smart swapping. In *Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems*, pp. 1341–1355, 2020.
52fz5sUAy2
Why is $r_{u,i}$ affected by $o_{u,i}$? In my opinion, $o_{u,i}$ is just a treatment that determines whether $r_{u,i}$ is observed, and does not affect the value of $r_{u,i}$ (i.e., the value of $r$ is affected only by $x$ and is observed only when $o = 1$).
Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference Haoxuan Li1 Chunyuan Zheng1 Sihao Ding2 Peng Wu3,* Zhi Geng3 Fuli Feng2 Xiangnan He2 1Peking University 2University of Science and Technology of China 3Beijing Technology and Business University hxli@stu.pku.edu.cn dsihao@mail.ustc.edu.cn {zhengchunyuan99, fulifeng93, xiangnanhe}@gmail.com {pengwu, zhigeng}@btbu.edu.cn Abstract Selection bias in recommender system arises from the recommendation process of system filtering and the interactive process of user selection. Many previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model, but ignore the fact that potential outcomes for a given user-item pair may vary with the treatments assigned to other user-item pairs, named neighborhood effect. To fill the gap, this paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference and introduces a treatment representation to capture the neighborhood effect. On this basis, we propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect. We further develop two new estimators for estimating the proposed ideal loss. We theoretically establish the connection between the proposed and previous debiasing methods ignoring the neighborhood effect, showing that the proposed methods can achieve unbiased learning when both selection bias and neighborhood effect are present, while the existing methods are biased. Extensive semi-synthetic and real-world experiments are conducted to demonstrate the effectiveness of the proposed methods. 1 Introduction Selection bias is widespread in recommender system (RS) and challenges the prediction of users’ true preferences (Wu et al., 2022; Chen et al., 2023), which arises from the recommendation process of system filtering and the interactive process of user selection (Marlin and Zemel, 2009; Huang et al., 2022). For example, in the rating prediction task, selection bias happens in explicit feedback data as users are free to choose which items to rate, so that the observed ratings are not a representative sample of all ratings (Steck, 2010; Wang et al., 2023c). In the post-click conversion rate (CVR) prediction task, selection bias happens due to conventional CVR models are trained with samples of clicked impressions while utilized to make inference on the entire space with samples of all impressions (Ma et al., 2018; Zhang et al., 2020; Wang et al., 2022a; Li et al., 2023). Inspired by the causal inference literature (Imbens and Rubin, 2015), many studies have proposed unbiased estimators for eliminating the selection bias, such as inverse propensity scoring (IPS) (Schnabel et al., 2016), self-normalized IPS (SNIPS) (Swaminathan and Joachims, 2015), and doubly robust (DR) methods (Wang et al., 2019; Chen et al., 2021a; Dai et al., 2022; Li et al., 2023d,e). Given the features of a user-item pair, these methods first estimate the probability of observing that user rating or clicking on the item, called propensity. Then the inverse of the propensity is used to weight the observed samples to achieve unbiased estimates of the ideal loss. 
*Corresponding author.

Figure 1: Causal diagrams of the existing debiasing methods under the no-interference assumption (left), and the proposed method taking into account the presence of interference (right), where \( x_{u,i}, o_{u,i}, \) and \( r_{u,i} \) denote the confounder, treatment, and outcome of user-item pair \((u, i)\), respectively. In the presence of interference, \( N(u,i) \) and \( N_{-(u,i)} \) denote the other user-item pairs affecting and not affecting \((u, i)\), respectively, and \( g_{u,i} \) denotes the treatment representation that captures the interference.

However, the theoretical guarantees of the previous methods are all established under the Stable Unit Treatment Values Assumption (SUTVA) (Rubin, 1980), which requires that the potential outcomes for one user-item pair do not vary with the treatments assigned to other user-item pairs (also known as no interference or no neighborhood effect), as shown in Figure 1(a). In fact, such an assumption can hardly be satisfied in real-world scenarios. For example, a user's rating on an item can be easily influenced by other users' ratings on that item, and a user's clicking on an item might facilitate other users' clicking and purchasing of that item (Chen et al., 2021b; Zheng et al., 2021). Figure 1(b) shows a general causal diagram in the presence of interference in debiased recommendation.

To fill this gap, in this paper, we first formulate the debiasing problem in Figure 1(b) from the perspective of causal inference and extend the definition of potential outcomes to be compatible with the presence of interference, then introduce a learnable treatment representation to capture such interference. Based on the extended potential outcome and treatment representation, we propose a novel ideal loss that can effectively evaluate the performance of the prediction model when both selection bias and neighborhood effect are present. We then propose two new estimators for estimating the proposed ideal loss, named neighborhood inverse propensity score (N-IPS) and neighborhood doubly robust (N-DR), respectively. Theoretical analysis shows that the proposed N-IPS and N-DR estimators can achieve unbiased learning in the presence of both selection bias and neighborhood effect, while the previous debiasing estimators cannot result in unbiased learning without imposing extra strong assumptions. Extensive semi-synthetic and real-world experiments are conducted to demonstrate the effectiveness of the proposed methods for eliminating the selection bias under interference.

2 Preliminaries: Previous Selection Bias Formulation

Let \( u \in U \) and \( i \in I \) be a user and an item, and let \( x_{u,i}, o_{u,i}, \) and \( r_{u,i} \) be the feature, treatment (e.g., exposure), and feedback (e.g., conversion) of the user-item pair \((u, i)\), where \( o_{u,i} = 1 \) or \( 0 \) represents whether item \( i \) is exposed to user \( u \) or not. Let \( D = \{(u, i) \mid u \in U, i \in I\} \) be the set of all user-item pairs. Using the potential outcome framework (Rubin, 1974; Neyman, 1990), let \( r_{u,i}(1) \) be the potential feedback that would be observed if item \( i \) had been exposed to user \( u \) (i.e., if \( o_{u,i} \) had been set to 1). The potential feedback \( r_{u,i}(1) \) is observed only when \( o_{u,i} = 1 \); otherwise it is missing. Therefore, ignoring the missing \( r_{u,i}(1) \) and training the prediction model directly with the exposed data suffers from selection bias, since the exposure is not random and is affected by various factors.
In the absence of neighborhood effect, the potential feedback \( r_{u,i}(1) \) represents the user’s preference by making intervention \( o_{u,i} = 1 \). To predict \( r_{u,i}(1) \) for all \((u, i) \in D \), let \( \hat{r}_{u,i} \triangleq f_\theta(x_{u,i}) \) be a prediction model parameterized with \( \theta \). Denote \( \hat{R} \in \mathbb{R}^{|U| \times |I|} \) as the predicted potential feedback matrix with each element being \( \hat{r}_{u,i} \). If all the potential feedback \( \{r_{u,i}(1)|(u, i) \in D\} \) were observed, the ideal loss for training the prediction model \( \hat{r}_{u,i} \) is formally defined as \[ L_{\text{ideal}}(\hat{R}) = |D|^{-1} \sum_{(u,i) \in D} \delta(\hat{r}_{u,i}, r_{u,i}(1)), \] where \( \delta(\cdot, \cdot) \) is a pre-defined loss function, e.g., the squared loss \( (r_{u,i}(1) - \hat{r}_{u,i})^2 \). However, since \( r_{u,i}(1) \) is missing when \( o_{u,i} = 0 \), the ideal loss cannot be computed directly from observational data. To tackle this problem, many debiasing methods are developed to address the selection bias by establishing unbiased estimators of \( L_{\text{ideal}}(\hat{R}) \), such as error imputation based (EIB) method [Hernández-Lobato et al., 2014], inverse propensity scoring (IPS) method [Schnabel et al., 2016], self-normalized IPS (SNIPS) method (Swaminathan and Joachims, 2015), and doubly robust (DR) methods (Wang et al., 2019; Chen et al., 2021a; Dai et al., 2022; Li et al., 2023e). We summarize the causal parameter of interest and the corresponding estimation methods in the previous studies as follows. - For the causal parameter of interest, previous studies assume the targeted user preference \( r_{u,i}(o_{u,i} = 1) \) depends only on the treatment status \( o_{u,i} = 1 \). Then the ideal loss is defined using the sample average of \( \delta(\hat{r}_{u,i}, r_{u,i}(o_{u,i} = 1)) \). - For the methods of estimating the causal parameter of interest, previous studies have made extensive efforts to estimate the probability \( P(o_{u,i} = 1 | x_{u,i}) \), called propensity, i.e., the probability of item \( i \) exposed to user \( u \) given the features \( x_{u,i} \). Then the existing IPS and DR methods use the inverse of the propensity for weighting the observed samples. Nevertheless, we argue that both the causal parameter and the corresponding estimation methods in the previous studies lead to the failure when eliminating the selection bias under interference. - (Section 3) For the causal parameter of interest, as shown in Figure 1(b), in the presence of interference, both the treatment status \( o_{u,i} \) and the treatment statuses \( o_{N(u,i)} \) would affect the targeted user preference \( r_{u,i}(o_{u,i}, o_{N(u,i)}) \), instead of \( r_{u,i}(o_{u,i}) \) in the previous studies. - (Section 4) For the estimation methods of the causal parameter of interest, as shown in Figure 1(b), when performing propensity-based reweighting methods, both \( o_{u,i} \) and \( o_{N(u,i)} \) from its neighbors should be considered as treatments of user-item pair \((u, i)\). Therefore, the propensity should be modeled as \( P(o_{u,i} = 1, o_{N(u,i)} | x_{u,i}) \) instead of \( P(o_{u,i} = 1 | x_{u,i}) \) in previous studies, which motivates us to design new IPS and DR estimators under interference. ### 3 MODELING SELECTION BIAS UNDER NEIGHBORHOOD EFFECT In this section, we take the neighborhood effect in RS as an interference problem in causal inference area, and then introduce a treatment representation to capture the neighborhood effect. 
Lastly, we propose a novel ideal loss when both selection bias and neighborhood effect are present. #### 3.1 BEYOND “NO INTERFERENCE” ASSUMPTION IN PREVIOUS STUDIES In the presence of neighborhood effect, the value of \( r_{u,i}(1) \) depends on not only the user’s preference but also the neighborhood effect, therefore we cannot distinguish the influence of user preference and the neighborhood effect, even if all the potential outcomes \( \{r_{u,i}(1) : (u, i) \in D\} \) are known. Conceptually, the neighborhood effect will cause the value of \( r_{u,i}(1) \) relying on the exposure status \( o_{u',i'} \) and the feedback \( r_{u',i'} \) for some other user-item pairs \((u', i') \neq (u, i)\). Formally, we say that interference exists when a treatment on one unit has an effect on the outcome of another unit (Ogburn and VanderWeele, 2014; Forastiere et al., 2021; Sävje et al., 2021), due to the social or physical interaction among units. Previous debiasing methods rely on the “no interference” assumption, which requires the potential outcomes of a unit are not affected by the treatment status of the other units. Nevertheless, such an assumption can hardly be satisfied in real-world recommendation scenarios. #### 3.2 PROPOSED CAUSAL PARAMETER OF INTEREST UNDER INTERFERENCE Let \( o = (o_{1,1}, ..., o_{|U||Z|}) \) be the vector of exposures of all user-item pairs. For each \((u, i) \in D\), we define a partition of \( o = (o_{u,i}, o_{N(u,i)}, o_{N-(u,i)}) \), where \( N(u,i) \) is all the user-item pairs affecting \((u, i)\), called the neighbors of \((u, i)\), and \( N-(u,i) \) is all the user-item pairs not affecting \((u, i)\). When the feedback \( r_{u,i} \) is further influenced by the neighborhood exposures \( o_{N(u,i)} \), then the potential feedback of \((u, i)\) should be defined as \( r_{u,i}(o_{u,i}, o_{N(u,i)}) \) to account for the neighbourhood effect. However, if we take \((o_{u,i}, o_{N(u,i)})\) as the new treatment directly, it would be a high-dimensional sparse vector when the dimension of \( o_{N(u,i)} \) is high and the number of exposed neighbors is limited. To address this problem and capture the neighborhood effect effectively, we make an assumption on the interference mechanism leveraging the idea of representation learning (Johansson et al., 2016). **Assumption 1 (Neighborhood Treatment Representation).** There exists a representation vector \( \phi : \{0, 1\}^{|N(u,i)|} \rightarrow G \), if \( \phi(o_{N(u,i)}) = \phi(o'_{N(u,i)}) \), then \( r_{u,i}(o_{u,i}, o_{N(u,i)}) = r_{u,i}(o_{u,i}, o'_{N(u,i)}) \). The above assumption implies that the value of \( r_{u,i}(o_{u,i}, o_{N(u,i)}) \) depends on \( o_{N(u,i)} \) through a specific treatment representation \( \phi(\cdot) \) that summarizes the neighborhood effect. Denote \( g_{u,i} = \phi(o_{N(u,i)}) \), then we have \( r_{u,i}(o_{u,i}, o_{N(u,i)}) = r_{u,i}(o_{u,i}, g_{u,i}) \) under Assumption 1, i.e., the feedback of \((u,i)\) under individual exposure \( o_{u,i} \) and treatment representation \( g_{u,i} \). We now propose ideal loss under neighborhood effect with treatment representation level \( g \in G \) as \[ L_{\text{ideal}}^N(\hat{R}|g) = |D|^{-1} \sum_{(u,i) \in D} \delta(\hat{r}_{u,i}, r_{u,i}(o_{u,i} = 1, g)), \] and the final ideal loss summarizes various neighborhood effects \( g \in G \) as \[ L_{\text{ideal}}^N(\hat{R}) = \int L_{\text{ideal}}^N(\hat{R}|g)\pi(g)dg, \] where \( \pi(g) \) is a pre-specified probability density function of \( g \). 
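For intuition, a minimal sketch of this quantity for a discrete treatment representation (our illustration, not the authors' code; all names are hypothetical) is given below; the continuous case replaces the sum over \( g \) with the integral above.

```python
import numpy as np

# r_hat    : predicted rating matrix
# r_full_g : dict mapping each treatment-representation level g to the full
#            potential-feedback matrix r_{u,i}(1, g) (available only in simulation)
# pi       : dict with the prior pi(g); defaults to uniform over the levels
def neighborhood_ideal_loss(r_hat, r_full_g, pi=None):
    levels = sorted(r_full_g.keys())
    if pi is None:
        pi = {g: 1.0 / len(levels) for g in levels}          # uniform pi(g)
    loss = 0.0
    for g in levels:
        loss += pi[g] * np.mean(np.abs(r_hat - r_full_g[g])) # L^N_ideal(R_hat | g)
    return loss
```

With two levels \( g \in \{0, 1\} \) and \( \pi(g) = 0.5 \), this reduces to the form used in the semi-synthetic study later in the paper.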
The proposed \( L_{\text{ideal}}^N(\hat{R}) \) forces the prediction model \( \hat{r}_{u,i} \) to perform well across varying treatment representation levels \( g \in G \). Thus, \( L_{\text{ideal}}^N(\hat{R}) \) is expected to control the extra bias that arises from the neighborhood effect. In comparison, the self interest and neighborhood effect are intertwined in previously used \( L_{\text{ideal}}(\hat{R}) \), whereas our proposed \( L_{\text{ideal}}^N(\hat{R}) \) is very flexible due to the free choice of \( \pi(g) \). The choice of \( \pi(g) \) depends on the target population that we want to make predictions on. Consider an extreme case of no neighborhood effect, this corresponds to \( g_{u,i} = 0 \) for all user-item pairs. In such a case, we can write \( r_{u,i}(1,0) \) as \( r_{u,i}(1) \) and \( L_{\text{ideal}}^N(\hat{R}) \) would reduce to \( L_{\text{ideal}}(\hat{R}) \). 4 UNBIASED ESTIMATION AND LEARNING UNDER INTERFERENCE In this section, we first discuss the consequence of ignoring the neighborhood effect, and then propose two novel estimators for estimating the ideal loss \( L_{\text{ideal}}^N(\hat{R}) \). Moreover, we theoretically analyze the bias, variance, optimal bandwidth, and generalization error bounds of the proposed estimators. Before presenting the proposed debiasing methods under interference, we briefly discuss the identifiability of the ideal loss \( L_{\text{ideal}}^N(\hat{R}) \). A causal estimand is said to be identifiable if it can be written as a series of quantities that can be estimated from observed data. **Assumption 2 (Consistency under Interference).** \( r_{u,i} = r_{u,i}(1,g) \) if \( o_{u,i} = 1 \) and \( g_{u,i} = g \). **Assumption 3 (Unconfoundedness under Interference).** \( r_{u,i}(1,g) \perp\!\!\!\perp (o_{u,i}, G_{u,i}) | x_{u,i} \). These assumptions are common in causal inference to ensure the identifiability of causal effects. Specifically, Assumption 2 implies that \( r_{u,i}(1,g) \) is observed only when \( o_{u,i} = 1 \) and \( g_{u,i} = g \). Assumption 3 indicates that there is no unmeasured confounder that affects both \( r_{u,i} \) and \((o_{u,i}, g_{u,i})\). **Theorem 1 (Identifiability).** Under Assumptions 2-3, \( L_{\text{ideal}}^N(\hat{R}|g) \) and \( L_{\text{ideal}}^N(\hat{R}) \) are identifiable. Theorem 1 ensures the identifiability of the proposed ideal loss \( L_{\text{ideal}}^N(\hat{R}) \). Let \( E \) denote the expectation on the target population \( D \), and \( p(\cdot) \) denotes the probability density function of \( P \). 4.1 EFFECT OF IGNORING INTERFERENCE The widely used ideal loss \( L_{\text{ideal}}(\hat{R}) \) under no neighborhood effect is generally different from the proposed ideal loss \( L_{\text{ideal}}^N(\hat{R}) \) in the presence of neighborhood effect. Next, we establish the connection between these two loss functions, to deepen the understanding of the methods of considering/ignoring neighborhood effect. For brevity, we let \( \delta_{u,i}(g) = \delta(\hat{r}_{u,i}, r_{u,i}(1,g)) \) hereafter. **Theorem 2 (Link to Selection Bias).** Under Assumptions 2-3 (a) if \( g_{u,i} \perp\!\!\!\perp o_{u,i} | x_{u,i} \), \( L_{\text{ideal}}^N(\hat{R}) = L_{\text{ideal}}(\hat{R}) \); (b) if \( g_{u,i} \not\perp\!\!\!\perp o_{u,i} | x_{u,i} \), \( L_{\text{ideal}}^N(\hat{R}) - L_{\text{ideal}}(\hat{R}) \) is equal to \[ \int E[\delta_{u,i}(g)|x_{u,i}] \cdot \{ p(g_{u,i} = g|x_{u,i}) - p(g_{u,i} = g|x_{u,i}, o_{u,i} = 1) \} \pi(g)dg. 
\]
From Theorem 2(a), if the individual and neighborhood exposures are independent conditional on \( x_{u,i} \), then \( L^N_{\text{ideal}}(\hat{R}) \) is equal to \( L_{\text{ideal}}(\hat{R}) \), which indicates that the existing debiasing methods neglecting the neighborhood effect are also unbiased estimators of \( L^N_{\text{ideal}}(\hat{R}) \). This is intuitively reasonable since, in such a case, the neighborhood effect randomly influences \( o_{u,i} \) conditional on \( x_{u,i} \), and the effect of neighbors would be smoothed out in an average sense. Theorem 2(b) shows that a bias would arise when \( g_{u,i} \not\perp o_{u,i} \mid x_{u,i} \), and the bias mainly depends on the association between \( o_{u,i} \) and \( g_{u,i} \) conditional on \( x_{u,i} \), i.e., \( p(g_{u,i} = g \mid x_{u,i} = x) - p(g_{u,i} = g \mid x_{u,i} = x, o_{u,i} = 1) \).

### 4.2 Proposed Unbiased Estimators

To derive an unbiased estimator of \( L^N_{\text{ideal}}(\hat{R}) \), it suffices to find an unbiased estimator of \( L^N_{\text{ideal}}(\hat{R} \mid g) \). Motivated by Schnabel et al. (2016), an intuitive solution is to take \((o_{u,i}, g_{u,i})\) as a joint treatment; then the IPS estimator of \( L^N_{\text{ideal}}(\hat{R} \mid g) \) would be
\[ |D|^{-1} \sum_{(u,i) \in D} \frac{I(o_{u,i} = 1, g_{u,i} = g) \cdot \delta_{u,i}(g)}{p_{u,i}(g)}, \]
where \( I(\cdot) \) is an indicator function and \( p_{u,i}(g) = p(o_{u,i} = 1, g_{u,i} = g \mid x_{u,i}) \) is the propensity score. Clearly, this strategy works if \( g_{u,i} \) is a binary or multi-valued random variable. However, if \( g_{u,i} \) has a continuous probability density, the above estimator is numerically infeasible even if theoretically feasible, since almost all \( I(o_{u,i} = 1, g_{u,i} = g) \) will be zero in such a case. To tackle this problem, we propose a novel kernel-smoothing based neighborhood IPS (N-IPS) estimator of \( L^N_{\text{ideal}}(\hat{R} \mid g) \), which is given as
\[ L^N_{\text{IPS}}(\hat{R} \mid g) = |D|^{-1} \sum_{(u,i) \in D} \frac{I(o_{u,i} = 1) \cdot K\big((g_{u,i} - g)/h\big) \cdot \delta_{u,i}(g)}{h \cdot p_{u,i}(g)}, \]
where \( h \) is a bandwidth (smoothing parameter) and \( K(\cdot) \) is a symmetric kernel function (Fan and Gijbels, 1996; Li and Racine, 2023; Wu et al., 2024) that satisfies \( \int K(t) dt = 1 \) and \( \int tK(t) dt = 0 \), e.g., the Epanechnikov kernel \( K(t) = 3(1-t^2)\mathbb{I}(|t| \leq 1)/4 \) and the Gaussian kernel \( K(t) = \exp(-t^2/2)/\sqrt{2\pi} \) for \( t \in \mathbb{R} \). For ease of presentation, we state the results for a scalar \( g \). Similar conclusions can be derived for multi-dimensional \( g \), and we put them in Appendix C. Similarly, the kernel-smoothing based neighborhood DR (N-DR) estimator can be constructed as
\[ L^N_{\text{DR}}(\hat{R} \mid g) = |D|^{-1} \sum_{(u,i) \in D} \left[ \hat{\delta}_{u,i}(g) + \frac{I(o_{u,i} = 1) \cdot K\big((g_{u,i} - g)/h\big) \cdot (\delta_{u,i}(g) - \hat{\delta}_{u,i}(g))}{h \cdot p_{u,i}(g)} \right], \]
where \( \hat{\delta}_{u,i}(g) = \delta(\hat{r}_{u,i}, m(x_{u,i}, \phi_g)) \) is the imputed error for \( \delta_{u,i}(g) \), and \( m(x_{u,i}, \phi_g) \) is an imputation model of \( r_{u,i}(1, g) \). The imputation model is trained by minimizing the training loss
\[ \int |D|^{-1} \sum_{(u,i) \in D} \frac{I(o_{u,i} = 1) \cdot K\big((g_{u,i} - g)/h\big) \cdot (\delta_{u,i}(g) - \hat{\delta}_{u,i}(g))^2}{h \cdot p_{u,i}(g)} \, \pi(g) dg.
\]
Then, the corresponding N-IPS and N-DR estimators of \( L^N_{\text{ideal}}(\hat{R}) \) are given as
\[ L^N_{\text{IPS}}(\hat{R}) = \int L^N_{\text{IPS}}(\hat{R} \mid g) \pi(g) dg, \quad L^N_{\text{DR}}(\hat{R}) = \int L^N_{\text{DR}}(\hat{R} \mid g) \pi(g) dg. \tag{1} \]
Next, we show the bias and variance of the proposed N-IPS and N-DR estimators, which rely on a standard assumption in kernel-smoothing estimation (Härdle et al., 2004; Li and Racine, 2023).

**Assumption 4** (Regularity Conditions for Kernel Smoothing). (a) \( h \to 0 \) as \( |D| \to \infty \); (b) \( |D|h \to \infty \) as \( |D| \to \infty \); (c) \( p(o_{u,i} = 1, g_{u,i} = g \mid x_{u,i}) \) is twice differentiable with respect to \( g \).

**Theorem 3** (Bias and Variance of N-IPS and N-DR). Under Assumptions 1-4, (a) the bias of the N-DR estimator is given as
\[ \text{Bias}(L^N_{\text{DR}}(\hat{R})) = \frac{1}{2} \mu_2 \int \mathbb{E}\left[ \frac{\partial^2 p(o_{u,i} = 1, g_{u,i} = g \mid x_{u,i})}{\partial g^2} \cdot \{\delta_{u,i}(g) - \hat{\delta}_{u,i}(g)\} \right] \pi(g) dg \cdot h^2 + o(h^2), \]
where \( \mu_2 = \int t^2 K(t) dt \). The bias of N-IPS is provided in Appendix B.2. (b) The variance of the N-DR estimator is given as
\[ \text{Var}(L^N_{\text{DR}}(\hat{R})) = \frac{1}{|D|h} \int \psi(g)\pi(g)dg + o\left(\frac{1}{|D|h}\right), \]
where \( \psi(g) = \int \frac{1}{p_{u,i}(g')} \cdot \bar{K}\left(\frac{g-g'}{h}\right) \cdot \{\delta_{u,i}(g') - \hat{\delta}_{u,i}(g')\}^2 \pi(g')dg' \) is a bounded function of \( g \), and \( \bar{K}(\cdot) = \int K(t)K(\cdot+t)dt \) denotes the convolution kernel. The variance of N-IPS is provided in Appendix B.2.

From Theorem 3(a), the kernel-smoothing based N-DR estimator has a small bias of order \( O(h^2) \), which converges to 0 as \( |D| \to \infty \) by Assumption 4(a). Theorem 3(b) shows that the variance of the N-DR estimator has a convergence rate of order \( O(1/(|D|h)) \). Notably, the bandwidth \( h \) plays a key role in the bias-variance trade-off of the N-DR estimator: the larger the \( h \), the larger the bias and the smaller the variance. The following Theorem 4 gives the optimal bandwidth for N-IPS and N-DR.

**Theorem 4** (Optimal Bandwidth of N-IPS and N-DR). Under Assumptions 1-4, the optimal bandwidth for the N-DR estimator in terms of the asymptotic mean-squared error is
\[ h_{\text{N-DR}}^* = \left[ \frac{\int \psi(g)\pi(g)dg}{4|D| \left( \frac{1}{2} \mu_2 \int \mathbb{E} \left[ \frac{\partial^2 p(o_{u,i}=1, g_{u,i}=g \mid x_{u,i})}{\partial g^2} \cdot \{\delta_{u,i}(g) - \hat{\delta}_{u,i}(g)\} \right] \pi(g)dg \right)^2 } \right]^{1/5}, \]
where \( \psi(g) \) is defined in Theorem 3. The optimal bandwidth for N-IPS is provided in Appendix B.3.

Theorem 4 shows that the optimal bandwidth of N-DR is of order \( O(|D|^{-1/5}) \). In such a case,
\[ \left[ \text{Bias}(L^N_{\text{DR}}(\hat{R})) \right]^2 = O(h^4) = O(|D|^{-4/5}), \quad \text{Var}(L^N_{\text{DR}}(\hat{R})) = O\left(\frac{1}{|D|h}\right) = O(|D|^{-4/5}), \]
that is, the square of the bias has the same convergence rate as the variance.

### 4.3 Propensity Estimation Method

Different from previous debiasing methods in RS, in the presence of the neighborhood effect, the propensity is defined for a joint treatment that includes a binary variable \( o \) and a continuous variable \( g \). To fill this gap, we consider a novel method for propensity estimation.
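Before detailing that propensity model, the kernel-smoothed estimator above can be made concrete with a short numerical sketch (our illustration, not the authors' code; all inputs are hypothetical arrays indexed by user-item pair).

```python
import numpy as np

# err   : delta_{u,i}(g) computed from the observed feedback (only used where o == 1)
# o     : exposure indicators o_{u,i}
# g_obs : observed treatment representations g_{u,i}
# prop  : propensities p(o_{u,i}=1, g_{u,i}=g | x_{u,i})
def gaussian_kernel(t):
    return np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)

def n_ips_loss(err, o, g_obs, prop, g, h):
    # kernel weight divided by (bandwidth * propensity); unexposed pairs contribute zero
    w = gaussian_kernel((g_obs - g) / h) / (h * prop)
    return np.mean(o * w * err)
```

The full estimator in Eq. (1) averages `n_ips_loss` over the levels of \( g \) weighted by \( \pi(g) \); the propensity `prop` required by this sketch is exactly the quantity estimated next.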
Let \( P_u(g \mid o=1, x) \) be the uniform distribution on \( G \), which equals \( 1/c \) for all features \( x \). Note that
\[ \frac{1}{p_{u,i}(g)} = \frac{1}{P(o=1 \mid x)P(g \mid o=1, x)} = \frac{c}{P(o=1 \mid x)} \cdot \frac{P_u(g \mid o=1, x)}{P(g \mid o=1, x)}, \]
where \( P(o=1 \mid x) \) can be estimated using existing methods such as naive Bayes or logistic regression, with or without a few unbiased ratings, respectively (Schnabel et al., 2016). To estimate the density ratio \( P_u(g \mid o=1, x)/P(g \mid o=1, x) \), we first label the samples in the exposed data \( \{(x_{u,i}, g_{u,i})\}_{(u,i):o_{u,i}=1} \) as positive samples (\( L = 1 \)), and then uniformly sample treatments \( g'_{u,i} \in G \) to generate samples \( \{(x_{u,i}, g'_{u,i})\}_{(u,i):o_{u,i}=1} \) with negative labels (\( L = 0 \)). Since the data generating process ensures that \( P_u(x \mid o=1) = P(x \mid o=1) \), we have
\[ \frac{P_u(g \mid o=1, x)}{P(g \mid o=1, x)} = \frac{P(x, g \mid L=0)}{P(x, g \mid L=1)} = \frac{P(L=1)}{P(L=0)} \cdot \frac{P(L=0 \mid x, g)}{P(L=1 \mid x, g)}, \]
where \( P(L=l \mid x, g) \) for \( l = 0 \) or \( 1 \) can be obtained by modeling \( L \) with \( (x, g) \).

### 4.4 Further Theoretical Analysis

We further theoretically analyze the generalization error bounds of the proposed N-IPS and N-DR estimators. Letting \( \mathcal{F} \) be the hypothesis space of prediction matrices \( \hat{\mathbf{R}} \) (or prediction models \( f_\theta \)), we define the Rademacher complexity
\[ R(\mathcal{F}) = \mathbb{E}_{\sigma \sim \{-1,+1\}^{|\mathcal{D}|}} \left[ \sup_{f_\theta \in \mathcal{F}} \frac{1}{|\mathcal{D}|} \sum_{(u,i) \in \mathcal{D}} \sigma_{u,i} \delta_{u,i}(g) \right], \]
where \( \sigma = \{\sigma_{u,i} : (u,i) \in \mathcal{D}\} \) is a Rademacher sequence (Mohri et al., 2018).

**Assumption 5** (Boundedness). \( 1/p_{u,i}(g) \leq M_p \), \( \delta_{u,i}(g) \leq M_\delta \), and \( |\delta_{u,i}(g) - \hat{\delta}_{u,i}(g)| \leq M_{|\delta-\hat{\delta}|} \).

Theorem 5 gives the generalization error bounds of the prediction model trained by minimizing our proposed N-IPS and N-DR estimators.

**Theorem 5** (Generalization Error Bounds of N-IPS and N-DR). Under Assumptions 3, 4, 5, and supposing that \( K(t) \leq M_K \), with probability at least \( 1 - \eta \) we have
\[ L^N_{\text{ideal}}(\hat{R}^\dagger) \leq \min_{\hat{R} \in \mathcal{F}} L^N_{\text{ideal}}(\hat{R}) + \mu_2 M_{|\delta-\hat{\delta}|} \int \mathbb{E} \left[ \frac{\partial^2 p(o_{u,i} = 1, g_{u,i} = g \mid x_{u,i})}{\partial g^2} \right] \pi(g) dg \cdot h^2 \\ + \frac{4M_p M_K}{h} R(\mathcal{F}) + \frac{5M_p M_K M_{|\delta-\hat{\delta}|}}{h} \sqrt{\frac{2}{|\mathcal{D}|} \log \left( \frac{4}{\eta} \right)} + o(h^2), \]
where \( \hat{R}^\dagger = \arg \min_{\hat{R} \in \mathcal{F}} L^N_{\text{DR}}(\hat{R}) \) is the prediction model learned by minimizing the N-DR estimator. The generalization error bounds of the N-IPS estimator are provided in Appendix B.4.

## 5 SEMI-SYNTHETIC EXPERIMENTS

We conduct semi-synthetic experiments using the MovieLens 100K\(^1\) (ML-100K) dataset, focusing on the following two research questions (RQs):

**RQ1.** Do the proposed estimators result in more accurate estimation of the ideal loss compared to the previous estimators in the presence of the neighborhood effect?

**RQ2.** How does the neighborhood effect strength affect the estimation accuracy?

### Experimental Setup

The ML-100K dataset contains 100,000 missing-not-at-random (MNAR) ratings from 943 users to 1,682 movies.
Following the previous studies (Schnabel et al., 2016; Wang et al., 2019; Guo et al., 2021), we first complete the full rating matrix \( R \) by Matrix Factorization (MF) (Koren et al., 2009), resulting in \( r_{u,i} \in \{1, 2, 3, 4, 5\} \), and then set the propensity \( p_{u,i} = p_o\,\alpha^{\max(0,\,4-r_{u,i})} \) with \( \alpha = 0.5 \) to model the MNAR effect (Wang et al., 2019; Guo et al., 2021). Next, to model the neighborhood effect, we compute \( g_{u,i} = \mathbb{I}(\sum_{(u',i') \in N(u,i)} o_{u',i'} \geq c) \) with varying \( c \) for the 100,000 observed MNAR ratings, where \( N(u,i) = \{(u',i') \neq (u,i) \mid u' = u \text{ or } i' = i\} \). In our experiment, \( c \) is chosen to be the median of all \( \sum_{(u',i') \in N(u,i)} o_{u',i'} \) for \( (u,i) \in \mathcal{D} \). Then we complete two full rating matrices \( R^{g=0} \) and \( R^{g=1} \) with \( r_{u,i}(1,g) \in \{1, 2, 3, 4, 5\} \) by MF, using \( \{(u,i) \mid o_{u,i} = 1, g_{u,i} = 0\} \) and \( \{(u,i) \mid o_{u,i} = 1, g_{u,i} = 1\} \), respectively.

### Experimental Details

The computation of the ideal loss needs both a ground-truth rating matrix and a predicted rating matrix. Therefore, we generate the following six predicted matrices \( \hat{R} \):

- **ONE:** The predicted rating matrix \( \hat{R} \) is identical to the true rating matrix, except that \( |\{(u,i) \mid r_{u,i} = 5\}| \) randomly selected true ratings of 1 are flipped to 5. This means half of the predicted fives are true fives, and half are true ones.
- **THREE:** Same as ONE, but flipping true ratings of 3.
- **FOUR:** Same as ONE, but flipping true ratings of 4.
- **ROTATE:** Set \( \hat{r}_{u,i} = r_{u,i} - 1 \) when \( r_{u,i} \geq 2 \), and \( \hat{r}_{u,i} = 5 \) when \( r_{u,i} = 1 \).
- **SKEW:** Predicted \( \hat{r}_{u,i} \) are sampled from the Gaussian distribution \( \mathcal{N}(\mu = r_{u,i}, \sigma = (6 - r_{u,i})/2) \), and clipped to the interval \([1, 5]\).
- **CRS:** Set \( \hat{r}_{u,i} = 2 \) if \( r_{u,i} \leq 3 \); otherwise, set \( \hat{r}_{u,i} = 4 \).

To consider the neighborhood effect, we assume that each user-item pair in the uniform data has an equal probability of having \( g_{u,i} = 0 \) and \( g_{u,i} = 1 \), that is, \( \pi(g) = 0.5 \) for \( g \in \{0, 1\} \). Thus,
\[ L^N_{\text{ideal}}(\hat{R}) = |\mathcal{D}|^{-1} \sum_{(u,i) \in \mathcal{D}} \{ \delta(\hat{r}_{u,i}, r_{u,i}(1,g = 0)) + \delta(\hat{r}_{u,i}, r_{u,i}(1,g = 1)) \}/2, \]
where \( \delta(\cdot,\cdot) \) is the mean absolute error (MAE). We follow the previous studies (Guo et al., 2021; Li et al., 2023b) and adopt the relative absolute error (RE) to measure the estimation accuracy, which is defined as
\[ \text{RE}(L_{\text{est}}) = |L^N_{\text{ideal}}(\hat{R}) - L_{\text{est}}(\hat{R})|/L^N_{\text{ideal}}(\hat{R}), \]
where \( L_{\text{est}} \) denotes the estimate of the ideal loss given by an estimator. The smaller the RE, the higher the estimation accuracy (see Appendix E for more details).

### Performance Analysis

We take three propensity-based estimators, IPS, DR, and MRDR, as baselines (see Section 6 for the introduction of baselines). The results are shown in Table 1.

---

\(^1\)https://grouplens.org/datasets/movielens/100k/

\(^2\)Our codes and datasets are available at https://github.com/haoxuanli-pku/ICLR24-Interference.

Table 1: Relative error on six prediction matrices. The better result within each estimator pair is bolded.
| Method | ONE | THREE | FOUR | ROTATE | SKEW | CRS |
|--------|-----|-------|------|--------|------|-----|
| Naive | 0.8612 ± 0.0068 | 1.0011 ± 0.0075 | 1.0471 ± 0.0077 | 0.2781 ± 0.0019 | 0.3538 ± 0.0038 | 0.3419 ± 0.0030 |
| IPS | 0.4766 ± 0.0060 | 0.5501 ± 0.0056 | 0.5731 ± 0.0057 | 0.1434 ± 0.0040 | 0.1969 ± 0.0046 | 0.1885 ± 0.0028 |
| N-IPS | **0.2383 ± 0.0066** | **0.2670 ± 0.0069** | **0.2829 ± 0.0062** | **0.0417 ± 0.0043** | **0.1024 ± 0.0051** | **0.0966 ± 0.0029** |
| DR | 0.4247 ± 0.0088 | 0.4637 ± 0.0093 | 0.4661 ± 0.0096 | 0.0571 ± 0.0021 | 0.1938 ± 0.0043 | 0.0565 ± 0.0020 |
| N-DR | **0.3089 ± 0.0088** | **0.3533 ± 0.0091** | **0.3577 ± 0.0092** | **0.0339 ± 0.0031** | **0.1219 ± 0.0039** | **0.0511 ± 0.0026** |
| MRDR | 0.2578 ± 0.0070 | 0.2639 ± 0.0071 | 0.2611 ± 0.0073 | 0.1001 ± 0.0025 | 0.1538 ± 0.0038 | 0.0156 ± 0.0021 |
| N-MRDR | **0.0622 ± 0.0065** | **0.0520 ± 0.0065** | **0.0503 ± 0.0064** | **0.0456 ± 0.0037** | **0.0672 ± 0.0038** | **0.0042 ± 0.0022** |

Figure 2: The effect of mask numbers, serving as interference strength, on the RE of the six prediction matrices.

First, the REs of our estimators are significantly lower than those of the corresponding previous estimators, which indicates that our estimators are able to estimate the ideal loss accurately in the presence of the neighborhood effect. In addition, to investigate how the neighborhood effect affects the estimation error, we randomly mask some user rows and item columns before sampling $o_{u,i}$, which results in $p_{u,i} = 0$ for the masked user-item pairs. For unmasked user-item pairs, we raise their propensities such that the expected total number of observed samples remains the same, which increases the proportion of observed samples with $g_{u,i} = 1$ to strengthen the neighborhood effect. Figure 2 shows the RE of the estimators with varying neighborhood effects. Our methods stably outperform the previous methods in all scenarios, which verifies that our methods are robust to the increased neighborhood effect.

6 Real-World Experiments

Dataset and Experiment Details. We verify the effectiveness of the proposed estimators on three real-world datasets: Coat contains 6,960 MNAR ratings and 4,640 missing-at-random (MAR) ratings; Yahoo! R3 contains 311,704 MNAR ratings and 54,000 MAR ratings; KuaiRec contains 4,676,570 video watching ratio records from 1,411 users for 3,327 videos. We pre-specify three neighborhood choices for a user-item pair in MNAR data: (1) using the user's historical behavior, (2) using the purchase history of the item, and (3) using the interactions of users and items, and let $g_{u,i}$ be the neighborhood number of the user-item pair, which is a multi-valued representation. We report the best result of our methods among the three neighborhood choices using MSE, AUC, and NDCG@K as the evaluation protocols, where $K = 5$ for Coat and Yahoo! R3 and $K = 50$ for KuaiRec. We adopt both the Gaussian kernel and the Epanechnikov kernel as the kernel function for implementing our proposed N-IPS, N-DR-JL, and N-MRDR (see Appendix F for more details).

Baselines. We take Matrix Factorization (MF) (Koren et al., 2009) as the base model and consider the following debiasing baselines: IPS (Schnabel et al., 2016; Saito et al., 2020), SNIPS (Schnabel et al., 2016), and the other debiasing methods reported in Table 2.

Table 2: Performance of MSE, AUC, and NDCG@5 on three real-world datasets. The best six results are bolded, and the best baseline is underlined.
| Method | Coat MSE ↓ | Coat AUC ↑ | Coat N@5 ↑ | Yahoo! R3 MSE ↓ | Yahoo! R3 AUC ↑ | Yahoo! R3 N@5 ↑ | KuaiRec MSE ↓ | KuaiRec AUC ↑ | KuaiRec N@5 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base model (Koren et al., 2009) | 0.238 | 0.710 | 0.616 | 0.249 | 0.682 | 0.634 | 0.137 | 0.754 | 0.553 |
| + CVIB (Wang et al., 2020) | 0.222 | 0.722 | 0.635 | 0.257 | 0.683 | 0.645 | 0.103 | 0.769 | 0.563 |
| + DIB (Liu et al., 2021) | 0.242 | 0.726 | 0.629 | 0.248 | 0.687 | 0.641 | 0.142 | 0.754 | 0.556 |
| + SNIPS (Schnabel et al., 2016) | 0.208 | 0.737 | 0.636 | 0.245 | 0.687 | 0.656 | 0.048 | 0.788 | 0.576 |
| + ASIPS (Saito et al., 2022) | **0.205** | 0.722 | 0.621 | 0.230 | 0.678 | 0.643 | 0.097 | 0.753 | 0.554 |
| + DAMF (Saito and Nomura, 2022) | 0.218 | 0.734 | 0.643 | 0.245 | **0.697** | 0.656 | 0.097 | 0.775 | 0.572 |
| + DR (Saito, 2020b) | 0.208 | 0.726 | 0.634 | 0.216 | 0.684 | 0.658 | **0.046** | 0.773 | 0.564 |
| + DR-BIAS (Dai et al., 2022) | 0.223 | 0.717 | 0.631 | 0.220 | 0.689 | 0.654 | **0.046** | 0.771 | 0.552 |
| + DR-MSE (Dai et al., 2022) | 0.214 | 0.720 | 0.630 | 0.222 | 0.689 | 0.657 | 0.047 | 0.769 | 0.547 |
| + MR (Li et al., 2023a) | 0.210 | 0.730 | 0.643 | 0.247 | 0.693 | 0.651 | 0.114 | 0.780 | 0.573 |
| + TDR (Li et al., 2023b) | 0.229 | 0.710 | 0.634 | 0.234 | 0.674 | 0.662 | 0.134 | 0.769 | 0.573 |
| + TDR-JL (Li et al., 2023b) | 0.216 | 0.734 | 0.639 | 0.248 | 0.684 | 0.654 | 0.121 | 0.771 | 0.560 |
| + SDR (Li et al., 2023e) | **0.208** | 0.736 | 0.642 | 0.210 | 0.690 | 0.655 | 0.116 | 0.775 | 0.574 |
| + IPS (Schnabel et al., 2016) | 0.214 | 0.718 | 0.626 | 0.221 | 0.681 | 0.644 | 0.097 | 0.752 | 0.554 |
| + N-IPS [LR, Gaussian] | 0.212 | 0.742 | **0.678** | 0.226 | 0.693 | **0.664** | 0.092 | **0.796** | **0.585** |
| + N-IPS [LR, Epanechnikov] | 0.224 | **0.746** | 0.645 | 0.242 | **0.703** | **0.673** | 0.094 | **0.794** | **0.582** |
| + N-IPS [NB, Gaussian] | **0.206** | **0.744** | **0.648** | **0.196** | **0.693** | **0.658** | 0.049 | **0.785** | **0.579** |
| + N-IPS [NB, Epanechnikov] | 0.210 | **0.753** | 0.646 | **0.197** | 0.685 | 0.653 | **0.047** | 0.755 | 0.562 |
| + DR-JL (Wang et al., 2019) | 0.211 | 0.721 | 0.620 | 0.224 | 0.682 | 0.646 | 0.050 | 0.764 | 0.526 |
| + N-DR-JL [LR, Gaussian] | 0.231 | 0.731 | **0.651** | 0.247 | **0.698** | **0.664** | 0.113 | 0.779 | 0.537 |
| + N-DR-JL [LR, Epanechnikov] | 0.235 | 0.741 | **0.655** | 0.251 | 0.693 | **0.663** | 0.108 | 0.784 | 0.552 |
| + N-DR-JL [NB, Gaussian] | **0.204** | **0.748** | **0.650** | **0.198** | 0.691 | 0.653 | 0.049 | 0.778 | 0.574 |
| + N-DR-JL [NB, Epanechnikov] | 0.209 | **0.744** | 0.648 | **0.191** | 0.681 | 0.637 | **0.046** | 0.786 | 0.570 |
| + MRDR-JL (Guo et al., 2021) | 0.214 | 0.721 | 0.631 | 0.215 | 0.686 | 0.650 | 0.047 | 0.777 | 0.554 |
| + N-MRDR-JL [LR, Gaussian] | 0.217 | 0.728 | **0.662** | 0.252 | 0.697 | **0.666** | 0.107 | 0.785 | 0.539 |
| + N-MRDR-JL [LR, Epanechnikov] | 0.233 | 0.734 | **0.656** | 0.253 | 0.695 | **0.666** | 0.097 | **0.791** | **0.560** |
| + N-MRDR-JL [NB, Gaussian] | **0.208** | **0.742** | **0.651** | **0.206** | **0.694** | **0.663** | **0.045** | **0.793** | **0.583** |
| + N-MRDR-JL [NB, Epanechnikov] | 0.207 | **0.756** | 0.655 | **0.194** | 0.690 | 0.644 | **0.044** | **0.802** | **0.587** |

Following previous studies (Schnabel et al., 2016; Wang et al., 2019), for all baseline methods requiring propensity estimation, we adopt the naive Bayes (NB) method using 5% MAR ratings for training the propensity model.
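For reference, a minimal sketch of the naive Bayes propensity estimator of Schnabel et al. (2016) is given below: \( P(o=1 \mid r) = P(r \mid o=1) \cdot P(o=1) / P(r) \), where \( P(r \mid o=1) \) and \( P(o=1) \) come from the MNAR data and \( P(r) \) from a small MAR sample. This is our illustration, not released code, and all names are hypothetical.

```python
import numpy as np

# r_mnar : observed MNAR ratings (values in 1..5)
# r_mar  : ratings from the small MAR sample
# n_pairs: total number of user-item pairs |D|
def nb_propensity(r_mnar, r_mar, n_pairs, ratings=(1, 2, 3, 4, 5)):
    p_o = len(r_mnar) / n_pairs                              # P(o = 1)
    prop = {}
    for r in ratings:
        p_r_given_o = np.mean(np.asarray(r_mnar) == r)       # P(r | o = 1), from MNAR data
        p_r = np.mean(np.asarray(r_mar) == r)                # P(r), from MAR data
        prop[r] = p_r_given_o * p_o / max(p_r, 1e-12)        # P(o = 1 | r)
    return prop
```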
For our proposed methods, we also adopt logistic regression (LR) to estimate the propensities without MAR ratings. Real-World Debiasing Performance. Table 2 shows the performance of the baselines and our methods on three datasets. Compared with the base model, the debiasing methods achieve better performance. Notably, the proposed methods can stably outperform the baseline methods in all metrics, showing that our methods can effectively take the neighborhood effect into account. This also provides empirical evidence of the existence of the neighborhood effect in real-world datasets. The proposed methods show competitive performance whether the MAR data are accessible (NB) or not (LR), and perform similarly in the case of adopting the Gaussian kernel or Epanechnikov kernel. 7 CONCLUSION In this paper, we study the problem of selection bias in the presence of neighborhood effect. First, we formulate the neighborhood effect in RS as an interference problem in causal inference. Next, a neighborhood treatment representation vector is introduced to reduce the dimension and sparsity of the neighborhood treatments. Based on it, we reformulate the potential feedback and propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect. Then, we propose two novel kernel-smoothing based neighborhood estimators for the ideal loss, which allows the neighborhood treatment representation vector to have continuous probability density. We systematically analyze the properties of the proposed estimators, including the bias, variance, optimal bandwidth, and generalization error bounds. In addition, we also theoretically establish the connection between the debiasing methods considering and ignoring the neighborhood effect. Extensive experiments are conducted on semi-synthetic and real-world data to demonstrate the effectiveness of our approaches. A limitation of this work is that the hypothesis space $G$ of $g$ relies on prior knowledge, and it is not obvious to choose it in practice. We leave it for our future work. 8 ACKNOWLEDGEMENT This work was supported in part by National Natural Science Foundation of China (No. 623B2002, 12301370). REFERENCES Peter M. Aronow and Cyrus Samii. Estimating average causal effects under general interference, with application to a social network experiment. *The Annals of Applied Statistics*, 11:1912–1947, 2017. Shuanghao Bai, Min Zhang, Wanqi Zhou, Siteng Huang, Zhirong Luan, Donglin Wang, and Badong Chen. Prompt-based distribution alignment for unsupervised domain adaptation. In *AAAI*, 2024. Jiawei Chen, Hande Dong, Yang Qiu, Xiangnan He, Xin Xin, Liang Chen, Guli Lin, and Keping Yang. Autodebias: Learning to debias for recommendation. In *SIGIR*, 2021a. Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. Bias and debias in recommender system: A survey and future directions. *ACM Transactions on Information Systems*, 41(3):1–39, 2023. Mouxiang Chen, Chenghao Liu, Jianling Sun, and Steven CH Hoi. Adapting interactional observation embedding for counterfactual learning to rank. In *SIGIR*, 2021b. Quanyu Dai, Haoxuan Li, Peng Wu, Zhenhua Dong, Xiao-Hua Zhou, Rui Zhang, Xiuqiang He, Rui Zhang, and Jie Sun. A generalized doubly robust learning framework for debiasing post-click conversion rate prediction. In *KDD*, 2022. Sihao Ding, Peng Wu, Fuli Feng, Xiangnan He, Yitong Wang, Yong Liao, and Yongdong Zhang. Addressing unmeasured confounder for recommendation with sensitivity analysis. In *KDD*, 2022. 
Jianqing Fan and Irene Gijbels. *Local Polynomial Modelling and Its Applications*. Chapman and Hall/CRC, 1996. Marc Ferracci, Grégory Jolivet, and Gerard J. van den Berg. Evidence of treatment spillovers within markets. *Review of Economics and Statistics*, 96:812–823, 2014. Laura Forastiere, Edoardo M. Airoldi, and Fabrizia Mealli. Identification and estimation of treatment and interference effects in observational studies on networks. *Journal of the American Statistical Association*, 116:901–918, 2021. Chongming Gao, Shijun Li, Wenqiang Lei, Jiawei Chen, Biao Li, Peng Jiang, Xiangnan He, Jiaxin Mao, and Tat-Seng Chua. KuaiRec: A fully-observed dataset and insights for evaluating recommender systems. In *CIKM*, 2022. Siyuan Guo, Lixin Zou, Yiding Liu, Wenwen Ye, Suqi Cheng, Shuaiqiang Wang, Hechang Chen, Dawei Yin, and Yi Chang. Enhanced doubly robust learning for debiasing post-click conversion rate estimation. In *SIGIR*, 2021. José Miguel Hernández-Lobato, Neil Houlsby, and Zoubin Ghahramani. Probabilistic matrix factorization with non-random missing data. In *ICML*, 2014. Guanglei Hong and Stephen W. Raudenbush. Valuating kindergarten retention policy: A case study of causal inference for multilevel observational data. *Journal of the American Statistical Association*, 101:901–910, 2006. Jin Huang, Harrie Oosterhuis, and Maarten de Rijke. It is different when items are older: Debiasing recommendations when selection bias and user preferences are dynamic. In *WSDM*, 2022. Shanshan Huang, Haoxuan Li, Qingsong Li, Chunyuan Zheng, and Li Liu. Pareto invariant representation learning for multimedia recommendation. In *ACM-MM*, 2023. Michael G Hudgens and M Elizabeth Halloran. Toward causal inference with interference. *Journal of the American Statistical Association*, 103:832–842, 2008.
Spp2i1hKwV
ideally, rather than (arbitrarily?) choosing k = 18 & k = 100, you should provide an automated approach to AUTOMATICALLY assess the smallest amount of annotations that leads to the best possible performance; you only introduce Auto-IDEAL in
IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models Shaokun Zhang\(^1*\) Xiaobo Xia\(^2*\)\(^†\) Zhaoqing Wang\(^2\) Ling-Hao Chen\(^3\) Jiale Liu\(^4\) Qingyun Wu\(^1†\) Tongliang Liu\(^2\) \(^1\)Pennsylvania State University \(^2\)The University of Sydney \(^3\)Tsinghua University \(^4\)Xidian University shaokun.zhang@psu.edu xiaoboxia.uni@gmail.com Abstract In-context learning is a promising paradigm that utilizes in-context examples as prompts for the predictions of large language models. These prompts are crucial for achieving strong performance. However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs. To address this challenge, this paper introduces an influence-driven selective annotation method that aims to minimize annotation costs while improving the quality of in-context examples. The essence of our method is to select a pivotal subset from a large-scale unlabeled data pool to annotate for the subsequent sampling of prompts. Specifically, a directed graph is first constructed to represent unlabeled data. Afterward, the influence of candidate unlabeled subsets is quantified with a diffusion process. A simple yet effective greedy algorithm for unlabeled data selection is lastly introduced. It iteratively selects the data if it provides a maximum marginal gain with respect to quantified influence. Compared with previous efforts on selective annotations, our influence-driven method works in an end-to-end manner, avoids an intractable explicit balance between data diversity and representativeness, and enjoys theoretical support. Experiments confirm the superiority of the proposed method on various benchmarks, achieving better performance under lower time consumption during subset selection. The project page is available at https://skzhang1.github.io/IDEAL/. 1 Introduction In-context learning (ICL) entails presenting a small set of examples with demonstrations as prompts (called in-context examples) to large language models (LLMs), before making predictions on test inputs (Wei et al., 2022a; Min et al., 2022; Akyürek et al., 2023). This emerging few-shot learning paradigm is an appealing alternative to supervised fine-tuning as it can avoid heavy parameter updates of language models while improving accuracy (Liu et al., 2021; Yoo et al., 2022). Recent studies indicate that obtaining prompts from a vast collection of annotated examples is crucial to achieving strong performance (Rubin et al., 2022). Notably, these studies have illuminated the substantial performance improvements when retrieving analogous examples (under specific embedding criteria) as in-context examples tailored for each individual test input. Since different test scenarios need distinct in-context examples, and each of them is equipped with its pertinent annotations, the necessity of a large volume of annotated examples is emphasized (Su et al., 2023). However, obtaining large-scale annotated examples for ICL requires substantial manpower and financial resources (Baldrige & Osborne, 2004; Engelson & Dagan, 1996; Snow et al., 2008). This is because humans not only need to annotate the true label for each example but also need to provide the example demonstration in the annotation process (Wei et al., 2022b). *Equal contributions. †Corresponding authors. (a) Low-influence subset in unlabeled data. (b) High-influence subset in unlabeled data. 
Figure 1: Visualization of the information diffusion process (Goldenberg et al., 2001) of two subsets with equal sizes. Experiments are conducted using the SST-5 training set (Socher et al., 2013). To avoid the denseness, we randomly sample 100 examples in total. In this visualization, black nodes present the initial subset without information diffusion. White nodes correspond to the examples that are not influenced by diffusion. For other nodes, darker nodes represent earlier influenced examples. We can observe that the subset with high influence (b) can achieve better performance by influencing a larger group of examples in the unlabeled data pool compared to the subset with low influence (a). To reduce the annotation cost, the previous effort Vote-\(k\) (Su et al., 2023) made attempts by proposing to select a diverse and representative subset from a large-scale unlabeled data pool to annotate. Particularly, Vote-\(k\) initially selects a small portion of data for diversity and annotates them manually. Then, these annotated data act as prompts for predictions on all other unlabeled data, and choose the remaining ones that need to be annotated, based on diverse confidence scores. However, despite its strong performance in empirical evaluations, Vote-\(k\) is still unsatisfactory in practice. We detail the issues from three aspects. (1) The data selection procedure of Vote-\(k\) is not end-to-end. This results in inconvenience, increased processing complexity, and added inference costs due to the predictions on unlabeled data. (2) Diversity and representativeness need to be balanced carefully (Su et al., 2023). Highlighting diversity in data selection is crucial for comprehensive coverage, but may sacrifice representativeness by overlooking exemplary data. Besides, the excessive emphasis on diversity of Vote-\(k\) causes the selection of outliers (see evidence in Appendix C.2). (3) Vote-\(k\) lacks theoretical guarantees, making it challenging to assess the algorithm’s reliability in realistic tasks and constraining its practical utility. In this paper, to minimize annotation costs for ICL and address the issues of existing work, an innovative data selection method is introduced, where we utilize influence-driven selective annotations to empower in-context learners (IDEAL). In essence, IDEAL aims to identify a subset of data that acts as a proxy and closely approximates the vast unlabeled dataset. Once annotated, these selected data can be considered a viable substitute for the large annotated examples in subsequent ICL tasks. In further detail, our method works in an unsupervised and end-to-end manner. We first construct a directed graph, where its vertices represent unlabeled data and its edges bridge different data based on their similarities. Inspired by influence maximization that aims to select a vertex set at key positions in social graphs (Li et al., 2018), we then propose to quantify the influence of each candidate unlabeled subset in our constructed graph, through a classic independent-cascade diffusion model illustrated in Figure 2. To find the subset with high influence, a simple greedy algorithm for unlabeled data selection is introduced. The algorithm does not need a delicate trade-off between diversity and representativeness. Instead, it iteratively selects a vertex if it provides a maximum marginal gain to the influence metric, until the selection is completed based on the annotation budget. 
Theoretically, under the influence-driven selective paradigm, we provide the lower bound for the subset influence selected by our method, demonstrating it is at least as large as a certain proportion of the influence of the optimal solution. Empirically, we conduct comprehensive experiments over 9 datasets across diverse tasks (covering classification, commonsense reasoning, dialogue, and text/code generation). Various LLMs and prompt retrieval technologies are included in evaluations. Experimental results demonstrate that our IDEAL can achieve better performance than Vote-\(k\) in 17 out of 18 cases in the experiments, with only 13% time consumption during subset selection. This creates a strong baseline of selective annotations for follow-up research. Source codes have been attached for the reproducibility of results. 2 METHODOLOGY In this section, to reduce the annotation cost of ICL, a framework of influence-driven selective annotations is formulated. We discuss how examples should be selected to annotate, leading to better in-context learners for LLMs. 2.1 PROBLEM SETUP We begin by defining notations and setting up the research problem. Specifically, LLMs perform in-context learning tasks based on a task-specific prompt \( Z = [z_1, \ldots, z_c] \), where each \( z_i \) represents one example \((x_i, y_i)\) consisting of the instance \( x_i \) and label \( y_i \), with \( c \) examples in total. LLMs generate the prediction for one test input \( x_{\text{test}} \) conditioned on the prompt \( Z \) followed by \( x_{\text{test}} \), i.e., \( y_{\text{test}} = \arg\max_{y \in C} P(y | Z, x_{\text{test}}) \), where \( C \) denotes the label space. As each prompt needs distinct annotations, the importance of having a substantial number of annotated examples is stressed, resulting in huge annotation costs. This motivates us to investigate selective annotations. Given a pool of unlabeled instances \( D_u = \{x_i\}_{i=1}^n \), where \( n \) is the number of unlabeled instances, the aim of selective annotations is to select a subset \( S_u \subset D_u \) to make manual annotations, such that performing ICL using prompts retrieved from the selected subset can yield good performance on an unseen test set \( D_{\text{test}} \). The size of \( S_u \) is controlled by the annotation budget \( m \), i.e., \( |S_u| = m \). 2.2 INFLUENCE-DRIVEN SELECTIVE ANNOTATIONS Overview. For selective annotations in ICL, we need to identify a subset that approximates vast unlabeled data. Therefore, quantifying the coverage of each candidate subset is critical. To achieve this, we construct a directed graph using the embeddings of unlabeled data and portray their relationships using the edges in the graph. We then quantify the influence of each candidate subset in the constructed graph. An information diffusion model is used for this purpose. Through the information diffusion model to quantifying the influence of each candidate subset, we avoid the delicate trade-off between diversity and representativeness. After the quantification, we can search the subset with maximum influence, which most closely approximates the unlabeled data. Below we detail the above procedure step by step. Constructing the directed graph. We first compute a vector embedding for each unlabeled instance using Sentence-BERT (Reimers & Gurevych, 2019)\(^1\). 
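For concreteness, the embedding step can be sketched as follows; this is a minimal illustration, the checkpoint matches the paper's footnote, and `unlabeled_texts` is a hypothetical stand-in for the raw instances in the unlabeled pool.

```python
from sentence_transformers import SentenceTransformer

unlabeled_texts = ["first unlabeled instance", "second unlabeled instance"]  # hypothetical pool
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")       # checkpoint from the footnote
embeddings = model.encode(unlabeled_texts, normalize_embeddings=True)        # shape: (n, d)
```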
The obtained embeddings are employed to build a directed graph \( G = (V, E, P) \), where the vertices \( V = \{v_i\}_{i=1}^n \) represent the embeddings of the unlabeled instances, \( E \) denotes the set of edges in the graph, and \( P \) denotes the set of weights assigned to the edges. In more detail, for each vertex \( v \in V \), we connect it to its \( k \) nearest successors\(^2\) in terms of the cosine similarity between the embeddings, which yields \( E \). For the edge \((v, u) \in E\) that connects \( v \) and its successor \( u \), we assign the weight \( p(v, u) = \cos(v, u)/\sum_{z \in N(v, k)} \cos(v, z) \) with \( p \in P \), where \( N(v, k) \) represents the set of the \( k \) nearest successors of \( v \), and \( \cos(\cdot, \cdot) \) is a function that calculates the cosine similarity of two embeddings. The constructed graph depicts the relationships between unlabeled examples in terms of embedding similarity.

Quantifying subset influence. Here we propose to quantify each candidate subset within the constructed graph, which is detailed in Algorithm 1. Specifically, given the constructed graph \( G \) and a candidate subset \( S \), the quantification algorithm simulates the progression of information diffusion originating from \( S \). The number of influenced vertices can be considered a measure of the influence of the candidate subset. In other words, the subset that influences more vertices within the graph provides a better approximation of the vast unlabeled data. The diffusion process unfolds discretely, progressing through multiple steps. At the beginning, the subset \( S \) is activated. Then, at each step, every vertex \( v \) activated in the previous step attempts to activate each of its inactive successors \( u \) with probability \( p(v, u) \). The activation can be conceptualized as a coin toss whose outcome is determined by the head probability \( p(v, u) \): if the result is a head, the successor \( u \) becomes activated; otherwise, it remains inactive. Starting from \( S \), the diffusion terminates when no further vertex can be activated in the graph. Lastly, we quantify the influence of the set by the number of activated vertices, where a larger number corresponds to greater influence.

\(^1\)https://huggingface.co/sentence-transformers/all-mpnet-base-v2.

\(^2\)In graph theory (Harary, 2018), a vertex \( u \) is the successor of a vertex \( v \) if it is at the end of an outgoing directed edge \((v, u)\).

Figure 2: The procedure aims to quantify the influence of each subset of in-context examples. In this procedure, we start with a subset of examples (the red points in (a)). Gradually, the successors of this subset are activated based on the weight $p$ and a random number $r$ sampled from 0 to 1, from (a) to (d). The influence of the subset is determined by the number of points that have been activated.

**Algorithm 1:** Subset influence quantification.
**Input**: Directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{P})$, subset $\mathcal{S}$.
**Output**: Number of influenced vertices by $\mathcal{S}$ in $\mathcal{G}$.
$\mathcal{S}_{\text{active}} \leftarrow \mathcal{S}$, $\mathcal{S}_{\text{new}} \leftarrow \emptyset$, $L \leftarrow 0$;
while $\mathcal{S}_{\text{active}} \neq \emptyset$ do
&nbsp;&nbsp;for each node $v$ in $\mathcal{S}_{\text{active}}$ do
&nbsp;&nbsp;&nbsp;&nbsp;for each successor $u$ of $v$ in $\mathcal{G}$ do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if $u$ not in $\mathcal{S}$ then
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Generate a random number $\tau \in [0, 1]$;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if $\tau \leq p(v, u)$ then $\mathcal{S} \leftarrow \mathcal{S} \cup u$; $\mathcal{S}_{\text{new}} \leftarrow \mathcal{S}_{\text{new}} \cup u$;
&nbsp;&nbsp;$\mathcal{S}_{\text{active}} \leftarrow \mathcal{S}_{\text{new}}$; $L \leftarrow L + |\mathcal{S}_{\text{new}}|$; $\mathcal{S}_{\text{new}} \leftarrow \emptyset$;
return $L$.

**Algorithm 2:** Searching the subset with maximum influence.
**Input**: The directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{P})$, the annotation budget $m$.
**Result**: The set $\mathcal{S}_u$ that includes $m$ examples to annotate.
Initialize $\mathcal{S}_0 \leftarrow \emptyset$, $t \leftarrow 0$;
while $t < m$ do
&nbsp;&nbsp;$v_t \leftarrow \arg\max_{v \in \mathcal{V} \setminus \mathcal{S}_t} f_{\mathcal{G}}(\mathcal{S}_t \cup \{v\})$;
&nbsp;&nbsp;$\mathcal{S}_{t+1} \leftarrow \mathcal{S}_t \cup v_t$; $t \leftarrow t + 1$;
Obtain $\mathcal{S}_u$ from $\mathcal{S}_m$ using the correspondence between embeddings and instances;
return $\mathcal{S}_u$.

In order to get a stable result, we repeat this process ten times and take the average influence. To help understand the procedure of Algorithm 1, we provide an illustration in Figure 2. For convenience, we express Algorithm 1 as an influence function $f_{\mathcal{G}}(\mathcal{S})$ for the graph $\mathcal{G}$ that takes the example set $\mathcal{S}$ as input and returns the number of activated vertices $L$.

**Searching the subset with maximum influence.** We exploit a simple yet effective greedy algorithm (Kempe et al., 2003) to search for the subset with maximum influence, which is illustrated in Algorithm 2. Specifically, the algorithm is initialized with an empty set and iteratively includes an instance if it provides the maximum marginal gain to the influence function. The search algorithm terminates when the selected subset meets the annotation budget. Finally, we obtain the set $\mathcal{S}_u$ that includes $m$ examples to annotate, using the correspondence between embeddings and instances. It is worth mentioning that this searching process aims to maximize the influence of the whole selected subset rather than considering each example separately. This is because combining all the individually high-impact examples does not necessarily achieve the highest-impact subset.

2.3 Prompt Retrieval

After the above influence-driven selective annotations, the subset $S_u$ is achieved. By making manual annotations on $S_u$, a set of annotated examples is obtained. We can then retrieve examples from the annotated set as in-context examples for each test input. Following previous studies (Liu et al., 2021; Su et al., 2023), we calculate embeddings for all annotated examples using Sentence-BERT (Reimers & Gurevych, 2019) and identify the most similar instances to each test input based on cosine similarity. Notice that the proposed method is agnostic to prompt retrieval methods. As demonstrated in §4.3.3, our method can be combined with any other prompt retrieval technologies. Better prompt retrieval technologies can further boost final performance.

3 Theoretical Analysis

In this section, we perform theoretical analysis on the influence of the subset searched by our algorithm and provide the corresponding lower bound.
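Before the analysis, the following minimal NumPy sketch (our illustration, not the authors' released code) implements the diffusion-based influence estimate of Algorithm 1 and the greedy selection of Algorithm 2 above. Here `neighbors[v]` maps a vertex to its $k$ nearest successors and `weight[(v, u)]` stores $p(v, u)$; both are assumed to be precomputed from the embedding graph, and all names are hypothetical.

```python
import numpy as np

def influence(neighbors, weight, seed, n_runs=10, rng=None):
    """Average number of vertices activated by independent-cascade diffusion from `seed`."""
    rng = rng or np.random.default_rng(0)
    total = 0
    for _ in range(n_runs):                      # average over repeated simulations
        activated = set(seed)
        frontier = set(seed)
        newly = 0
        while frontier:
            new_frontier = set()
            for v in frontier:
                for u in neighbors.get(v, []):
                    if u not in activated and rng.random() <= weight[(v, u)]:
                        activated.add(u)
                        new_frontier.add(u)
            newly += len(new_frontier)
            frontier = new_frontier
        total += newly
    return total / n_runs

def greedy_select(vertices, neighbors, weight, budget):
    """Iteratively add the vertex with the largest marginal influence gain."""
    selected = set()
    for _ in range(budget):
        best = max((v for v in vertices if v not in selected),
                   key=lambda v: influence(neighbors, weight, selected | {v}))
        selected.add(best)
    return selected
```

The `n_runs` argument mirrors the paper's practice of repeating the diffusion simulation ten times and averaging the result to stabilize the influence estimate.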
For any constructed graph $G$, we exploit $\psi_v(S)$ to denote the influence improvement of the subset $S$ after adding $v$ into $S$, i.e., $\psi_v(S) = f_G(S \cup v) - f_G(S)$. For convenience, we use $\psi_t = f_G(S_t) - f_G(S_{t-1})$ ($t \geq 1$) to denote the incremental value of the influence function $f_G$ after adding $v_t$ into $S_{t-1}$. Also, we employ $S^*_m$ to represent the subset with the optimal influence value in the graph $G$ with annotation budget $m$. Afterward, the optimal solution we expect to search in Algorithm 2 can be regarded as $$S^*_m = \arg\max_{S \subseteq V} f_G(S), \quad \text{s.t. } |S| = m.$$ (1) In the following, we present the submodular condition to facilitate theoretical analysis of our method. **Condition 1 (submodular condition).** In the problem of selective annotations, given any graph $G$ constructed by our procedure, the influence function $f_G$ is a submodular function which satisfies, for $\forall v \in V, \forall S_a \subset S_b \subset V$, $$f_G(S_a \cup v) - f_G(S_a) \geq f_G(S_b \cup v) - f_G(S_b).$$ (2) **Remark 1.** Intuitively speaking, given any graph $G$, we say the influence function $f_G$ satisfies the submodular condition if adding one data point to a smaller subset provides more influence than adding the same data point to a larger subset. In other words, it reflects the principle of diminishing returns: the marginal gain of including a data point in a set decreases as the size of the set increases. This condition can hold within the influence function (Li et al., 2019). Considering an extreme case, when subset $S = V$, the influence improvement of adding any data point to $S$ will be zero. **Proposition 1.** In Algorithm 2, if the influence function $f_G$ satisfies Condition 1, then for $f_G(S^*_m)$, $$\forall t \in [0, m - 1], f_G(S^*_m) \leq f_G(S_t) + mv_{t+1}.$$ (3) **Remark 2.** Proposition 1 proposes an upper bound for $f_G(S^*_m)$ in the form of the influence $f_G(S_t)$ and its improvement at next step $t + 1$, when Algorithm 2 is applied to selective annotations. **Theorem 1.** In Algorithm 2, if influence function $f_G$ satisfies Condition 1, when the algorithm terminates at the step $m - 1$, $f_G(S_m)$ has a lower bound: $$f_G(S_m) \geq (1 - (1 - 1/m)^m)f_G(S^*_m).$$ (4) **Remark 3.** Theorem 1 provides an approximation guarantee for the influence of the selected subset returned by our method. The influence of the selected subset is at least as large as a certain proportion of the influence of the optimal solution, i.e., $1 - (1 - 1/m)^m$. With the annotation budget $m$ increases, this fraction gets closer to $1 - 1/e$. For the proofs of Proposition 1 and Theorem 1, readers can refer to Appendix B. 4 Experiments In this section, we evaluate our method (IDEAL) on multiple datasets that have different categories of tasks. Experimental setups are first introduced (§4.1). We then demonstrate that the proposed method can find a better selective annotation subset in a more efficient way compared with baselines (§4.2). Moreover, we perform in-depth investigations to provide a better understanding of the superiority of the proposed method (§4.3). Finally, a case study is also provided to further evaluate the selected subset from our method in an automatic annotation scenario (§4.4). 4.1 Experimental setups Datasets and tasks. Following previous work (Su et al., 2023), we employ 9 datasets for the evaluations, which can be categorized into 4 different tasks, including classification, multi-choice, dialogue, and generation. 
The details of the datasets are provided in Appendix D.1. For each dataset, the original “train/dev/test” split from the Transformer library (Wolf et al., 2019) is utilized. We use test data for evaluation if they are available publicly (SST-5 (Socher et al., 2013), DBpedia (Lehmann et al., 2015), MWoZ (Budzianowski et al., 2018), and Xsum (Narayan et al., 2018)). Otherwise, we follow the same setting in (Su et al., 2023) and use the development set. We use accuracy as metric for all classifications and multiple choices tasks, joint accuracy (Budzianowski et al., 2018) for MWoZ, test suite accuracy (Zhong et al., 2020) for GeoQuery (Zelle & Mooney, 1996), and ROUGE-L (Lin, 2004) for Xsum. Models. If not otherwise specified, we run all experiments on the GPT-J 6B model (Wang & Komatsuzaki, 2021) except the GeoQuery and MWoZ datasets where we use Text-devinci-002 (Chen et al., 2021). We also provide experiments on other models including GPT-Neo 2.7B (Black et al., 2021) and more advanced models GPT-3.5-Turbo (Openai, 2023) in §4.3.4. Our implementation is detailed in Appendix D.2. Baselines. In the main experiments, we perform a comprehensive evaluation of our method that is compared with previous state-of-the-art selective annotation baselines, i.e., Vote-\(k\) (Su et al., 2023) and random selection (abbreviated as “Random” below). Note that, in §4.3.2, we also compare our method with alternative methods that can select a coreset from large-scale unlabeled data on typical datasets. For the baseline Vote-\(k\), we conduct experiments by running its official code\(^3\). 4.2 Main results | Method | Classification | Multi-Choice | Dialogue | Generation | |--------|----------------|--------------|----------|------------| | | MRPC SST-5 MNLI DBpedia RTE | HellaSwag | MWoZ | GeoQ Xsum | | 100 Random | 64.3 49.6 38.2 89.8 55.3 | 66.7 | 39.9 | 55.3 15.3 | | 100 Vote-\(k\) | 64.6 46.6 38.9 89.2 57.6 | 67.9 | 48.3 | **58.8** 17.2 | | 100 IDEAL | **66.4** 51.4 **41.0** **90.6** **58.9** | **68.6** | **52.2** | **58.2** **19.9** | | 18 Random | 57.4 42.9 37.8 85.2 57.9 | 66.0 | 37.0 | 47.5 13.6 | | 18 Vote-\(k\) | 61.1 41.7 39.1 89.9 58.2 | 66.5 | 37.7 | 50.9 15.2 | | 18 IDEAL | **63.0** 43.2 **40.0** **90.1** **59.4** | **67.1** | **38.5** | **52.0** **19.6** | Table 1: The performance of our method and baselines on 9 different datasets with an annotation budget of 100 and 18. We use similar-based prompt retrieval for all methods and report the average results with 3 different runs for each method. We can observe that our method works better than Random and Vote-\(k\) in almost all cases (17/18) under two annotation budgets. The best result in each case is bolded. We also provide the maximum and minimum values of the results in Appendix C.3. Measurement on performance. We first perform the evaluations for Random, Vote-\(k\), and our method. The annotation budget is set to 18 and 100 respectively following the same setting as Vote-\(k\). Note that we include 18 as the annotation budget considering all annotated examples can be fit to the prompt of the large language models within context limits. Therefore, the prompt retrieve stage can be ignored and the evaluation results can naturally represent the quality of the selected examples. We provide experimental results in Table 1. As can be seen, our method achieves better performance than baselines. \(^3\)https://github.com/HKUNLP/icl-selective-annotation. 
Figure 3: Comparison of our method and Vote-\(k\) with respect to time consumption during subset selection under the same hardware condition. Here the annotation budget is 18. The y-axis represents the time consumption with a log scale. We can observe that our method largely reduces the time cost compared with Vote-\(k\). in most of the evaluation scenarios (17 out of 18). Interestingly, we find that random selection outperforms Vote-\(k\) in 3 out of 18 cases. We conjecture that, under some ideal circumstances, the selected subset by random selection can approximate the distribution of full data. If test data follows the same distribution, good performance can be achieved. Note that we also illustrate selected examples and label distributions in selective annotations in Appendix C.1 and Appendix C.4 respectively. **Measurement on time cost.** Previous work Vote-\(k\) (Su et al., 2023) encompasses generating prediction for most unlabeled data with a set of selected examples as prompts and performs data selection according to the confidence scores of the prediction. However, this process results in large unnecessary costs at inference time. Meanwhile, LLMs are often used as a service and an extra charge will appear with the usage of the token in both the input and output. In Figure 3, we compare the time cost of subset selection in our method against Vote-\(k\) on all tasks with the same hardware. The annotation budget is set to 18. We can observe that our method saves a tremendous amount of cost compared to Vote-\(k\). Specifically, under the same hardware conditions, IDEAL achieves a 7.8× lead on average over Vote-\(k\). The speed improvement benefits from the fact that the proposed method does not need to perform example selection by generating predictions on a large number of unlabeled examples and is completely unsupervised. ### 4.3 More analysis #### 4.3.1 Larger influence brings better performance We conduct experiments to investigate the correlation between subset influence and its corresponding in-context learning performance. Specifically, we randomly select a collection of example subsets from a large unlabeled data pool. We then evaluate each subset as a prompt and record its performance and influence in the constructed graph, resulting in a set of influence-performance pairs. Our goal is to analyze the correlation between these two metrics. To achieve this, we perform experiments on SST-5 and MNLI. We sample 30 subsets and order them according to their influences, where each subset includes 5 examples. We divide this sorted subset sequence equally into three influence levels, with each level containing 10 subsets. We visualize the performance of subsets in each influence level in Figure 4. Our analysis reveals that subsets with larger influence levels achieve better average, median, and worst-case performance. This finding further demonstrates that quantifying the influence of each potential subset is an effective metric in the selective annotation problem. ![Figure 4: Influence vs. Performance. The illustration of the positive correlation between the influence achieved by Algorithm 1 and final performance.](image) #### 4.3.2 Comparisons with alternative methods We also compare our method with other alternative methods that can select the coreset from large-scale unlabeled data. We perform the evaluations on MRPC, MNLI and HellaSwag. 
We include the following alternative methods (1) \(K\)-Means (Lloyd, 1982), which groups all examples into \(m\) clusters, and selects the centroid example from each cluster. (2) Maximizing facility location (MFL) (Lin & Bilmes, 2009), which aims at optimizing the representativeness of the selected subset. (3) Fast Vote-\(k\) (Su et al., 2023), which is an efficient alternative to Vote-\(k\) which directly picks \(m\) examples with the largest Vote-\(k\) scores. | Method | \(K\)-Means | MFL | Fast Vote-\(k\) | Vote-\(k\) | IDEAL | |------------|-------------|-----|-----------------|------------|-------| | MRPC | 57.4 | 58.2| 59.3 | 61.1 | **63.0** | | MNLI | 35.8 | 38.8| 39.5 | 39.1 | **40.0** | | HellaSwag | 65.4 | 65.2| 65.9 | 66.5 | **67.1** | Table 2: Comparisons of alternative methods that can select a coreset from large-scale unlabeled data. The annotation budget is 18. Experimental results are reported by averaging over three random trials. The performance of the baseline Vote-\(k\) is also included here. The best performance in each case is **bolded**. | Method | Datasets | |------------|-------------------| | Selection | Retrieval | | Vote-\(k\) | Similar | | IDEAL | Similar | | Method | Datasets | |------------|-------------------| | Vote-\(k\) | Random | | IDEAL | Random | Table 3: Comparison of random and similar prompt retrieval with Vote-\(k\) and IDEAL on MRPC, MNLI, and HellaSwag. The subset selection method with a similar prompt retrieve achieves better performance compared with its version with a random prompt retrieve method. The best performance in each case is **bolded**. We show the results in Table 2. We can observe IDEAL consistently outperforms the baselines in all datasets, demonstrating its superiority. Note that, the graph-based methods (Vote-\(k\), Fast Vote-\(k\), and our IDEAL) outperform the methods non-graph-based methods (\(K\)-Means and MFL) in all cases. This phenomenon suggests that graph-based methods are suitable for capturing similarity relationships between examples in the selective annotation problem, which can lead to better results. ### 4.3.3 Evaluation with Different Retrieval Methods In previous experiments, we used a similarity-based prompt retrieval method by default. In this section, we conduct experiments to quantify the effect of different prompt retrieval methods under the annotation 100. We present the results in Table 3. We observe that both Vote-\(k\) and IDEAL suffer from a significant performance drop when the prompt retrieval method is changed from similarity-based to random selection. Notably, IDEAL also achieves better performance than Vote-\(k\) when combined with random retrieval in all datasets. It suggests that IDEAL can cultivate a more stable training subset (Chang & Jia, 2023) for in-context learning tasks. Note that we also show that our IDEAL is more stable and robust against the order of in-context examples in Appendix C.5. ### 4.3.4 Evaluation on Other Language Models Here we evaluate IDEAL on other language models, including GPT-Neo 2.7B (Black et al., 2021), and the advanced chat model GPT-3.5-Turbo where we use the same instruction as other language models for each dataset. While GPT-3.5-Turbo has mainly been optimized for chat, it also performs well on traditional completion tasks (Kheiri & Karimi, 2023). To conduct experiments, we select three classification tasks (MRPC, MNLI, and RTE), considering they are easier for prompting GPT-3.5-Turbo to return responses without pleasantries or explanatory content. 
The evaluation results are presented in Figure 5. Our evaluations reveal that IDEAL consistently outperforms the baselines across all models tested. This demonstrates the versatility of our method for in-context learning with models of varying sizes. Notably, we observe that the largest model, i.e., GPT-3.5-Turbo, performs worse than GPT-Neo and GPT-J. This arises because GPT-3.5-Turbo is primarily optimized for chat tasks and faces challenges in following human instructions for classification, a phenomenon also identified in Ye et al. (2023).

### 4.3.5 Evaluation on Out-of-Distribution Tasks

We further evaluate our method on out-of-distribution tasks (Zhou et al., 2022; Wang et al., 2022b; Zhang et al., 2023b; Huang et al., 2023c;d), where there is a distribution shift between the selective annotation data and the test data. Following Chang & Jia (2023), we compare IDEAL and Vote-\(k\) using SST-2 (Socher et al., 2013) and BoolQ (Clark et al., 2019) as source tasks, and IMDb (Maas et al., 2011) and BoolQ Contrast Set (Gardner et al., 2020) as the respective target tasks. In all evaluations, we set the annotation budget to 18 and use similarity-based retrieval to perform the evaluations on the test sets in the target domains. We use GPT-J 6B and GPT-Neo 2.7B here and show the results in Table 4.

| Method | Model | IMDb | BoolQ Cst. |
|------------|----------|------|------------|
| Vote-\(k\) | GPT-Neo | 71.1 | 56.4 |
| IDEAL | GPT-Neo | **72.2** | **58.0** |
| Vote-\(k\) | GPT-J | 76.4 | 56.1 |
| IDEAL | GPT-J | **76.8** | **56.4** |

Table 4: The evaluations on out-of-distribution tasks. We show the performance of different methods on IMDb and BoolQ Contrast Set (target domains). In the evaluations, the prompts consist of selected SST-2 and BoolQ training examples, respectively (source domains). The best performance in each case is **bolded**.

Figure 5: Comparisons with various models when the annotation budget is 18. IDEAL consistently achieves the best performance compared with baselines across models and datasets.

We can observe that IDEAL still outperforms the baselines on all datasets with both models, implying that IDEAL selects subsets that capture the invariant properties of these tasks and generalize to out-of-distribution scenarios.

4.4 CASE STUDY: AUTOMATIC ANNOTATION

In previous experiments, we used a small set of manually annotated examples as candidate prompts to make predictions. In contrast, here we are interested in a case study that utilizes the subset selected by IDEAL to annotate all available unlabeled data automatically, leading to a larger set of candidate prompts. Specifically, we first choose an initial subset from the pool of unlabeled data using IDEAL and manually label this selected subset. Afterward, we simulate the information diffusion process from the initial subset to all other data, where we employ the activated data as prompts to predict the data activated at each subsequent step and label them accordingly with the prediction results. This process ultimately yields a fully labeled training dataset. Finally, all examples (including manually and automatically labeled ones) are utilized as potential prompts in conjunction with the prompt retrieval technique for final testing. We name this paradigm Auto-IDEAL and compare it with Vote-\(k\) and the original IDEAL on all classification datasets. We choose 300 training examples for each dataset to perform the experiments.
The manual annotation budget is set to 150, i.e., half of the labels of the candidate prompts in Auto-IDEAL are annotated automatically. Experimental results are provided in Table 5. As can be observed, Auto-IDEAL even achieves better performance than IDEAL in 4 of 5 cases. Notably, although the performance is worse on MNLI, it is still competitive (better than Vote-\(k\)). It suggests that expanding the candidate prompts through automatic annotation following the diffusion process can further boost the performance of IDEAL. It benefits from the fact that information only diffuses between similar examples. Therefore, unlabeled examples will be automatically annotated using the most similar annotated examples as prompts leading to a promising annotation success rate. | Method | MRPC | SST-5 | MNLI | DBpedia | RTE | |------------|------|-------|------|---------|-----| | Vote-\(k\) | 63.8 | 48.6 | 39.5 | 90.2 | 55.7| | IDEAL | 65.2 | 49.4 | 40.3 | 90.8 | 57.4| | Auto-IDEAL | **65.8** | **50.4** | **39.8** | **91.8** | **58.3** | Table 5: Comparison between Vote-\(k\), IDEAL, and Auto-IDEAL. Auto-IDEAL is an expanded version of IDEAL for automatic annotation. We evaluate these algorithms on all classification tasks and average their performance over three random trials. The best performance in each case is bolded. The results indicate that Auto-IDEAL can enhance the performance of IDEAL and achieve the best performance in 4 out of 5 cases. 5 CONCLUSION A series of recent works have confirmed the powerful ability of in-context learning for large language models. We investigate the ability from the perspective of selective annotations and propose an influence-driven method that selects a subset of data that acts as a proxy and closely approximates full data. Theoretical analysis is provided to establish an upper limit for the global optimal solution, and demonstrate that our greedy search algorithm selects a subset with influence at least as substantial as a specific proportion of the optimal solution’s influence. Empirical evaluations illustrate the superiority of our method across a range of benchmarks, delivering superior performance while largely reducing the time required for subset selection. We hope this work can help researchers and practitioners understand the promise and potential of selective annotations in in-context learning, and facilitate them in the efficient conceptualization of novel language-based challenges. ACKNOWLEDGEMENTS Tongliang Liu is partially supported by the following Australian Research Council projects: FT220100318, DP220102121, LP220100527, LP220200949, and IC190100031. REFERENCES Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? investigations with linear models. In ICLR, 2023. Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. Transformers as statisticians: Prov-able in-context learning with in-context algorithm selection. arXiv preprint arXiv:2306.04637, 2023. Jason Baldridge and Miles Osborne. Active learning and the total cost of annotation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pp. 9–16, 2004. Hritik Bansal, Karthik Gopalakrishnan, Saket Dingliwal, Sravan Bodapati, Katrin Kirchhoff, and Dan Roth. Rethinking the role of scale for in-context learning: An interpretability-based case study at 66 billion scale. In ACL, 2022. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 
The fifth pascal recognizing textual entailment challenge. TAC, 7:8, 2009. Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. Gpt-neo: Large scale auto-regressive language modeling with mesh-tensorflow. 2021. Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. Multiwoz—a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In EMNLP, 2018. Stephanie Chan, Adam Santoro, Andrew Lampinen, Jane Wang, Aaditya Singh, Pierre Richemond, James McClelland, and Felix Hill. Data distributional properties drive emergent in-context learn-ing in transformers. In NeurIPS, pp. 18878–18891, 2022. Ting-Yun Chang and Robin Jia. Data curation alone can stabilize in-context learning. In ACL, pp. 8123–8144, 2023. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, and He He. Meta-learning via language model in-context tuning. In ACL, 2022. Hyunsoo Cho, Hyuhng Joon Kim, Junyeob Kim, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, and Taeuk Kim. Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners. In AAAI, pp. 12709–12718, 2023. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL-HLT, pp. 2924–2936, 2019. Justin Cui, Ruochen Wang, Si Si, and Cho-Jui Hsieh. Scaling up dataset distillation to imagenet-1k with constant memory. In ICML, pp. 6565–6590, 2023. Shizhe Diao, Pengcheng Wang, Yong Lin, and Tong Zhang. Active prompting with chain-of-thought for large language models. arXiv preprint arXiv:2302.12246, 2023. William Dolan, Chris Quirk, Chris Brockett, and Bill Dolan. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In ACL, 2004.
xRiZddh5Pb
The authors explain the idea behind designing the input features in Proposition 1, wherein a ranking metric built from the designed distance input features follows a specific linear form that is claimed to satisfy the RankPres property. It is not straightforward to me why this form would satisfy the property; a detailed proof would be helpful.
Learning from A Single Graph is All You Need for Near-Shortest Path Routing Anonymous authors Paper under double-blind review Abstract We propose a simple algorithm that needs only a few data samples from a single graph for learning local routing policies that generalize across classes of geometric random graphs in Euclidean and hyperbolic metric spaces. We thus solve the all-pairs near-shortest path problem by training deep neural networks (DNNs) that let each graph node efficiently and scalably route (i.e., forward) packets by considering only the node’s state and the state of the neighboring nodes. Our algorithm design exploits network domain knowledge in the selection of input features and in the selection of a “seed graph” and its data samples. The leverage of domain knowledge provides theoretical assurance that the seed graph and node subsampling suffice for learning that is generalizable, scalable, and efficient. Remarkably, one of these DNNs we train —using distance as the only input feature— learns a policy that exactly matches the well-known Greedy Forwarding policy, which forwards packets to the neighbor with the shortest distance to the destination. We also learn a new policy, which we call Greedy Tensile routing —using both distance and stretch factor as the input features— that almost always outperforms greedy forwarding. We demonstrate the explainability and ultra-low latency run-time operation of Greedy Tensile routing by symbolically interpreting its DNN in low-complexity terms of two linear actions. 1 Introduction There has been considerable interest in machine learning to mimic the human ability of learning new concepts from just a few instances. While human learning with high data efficiency is yet to be matched by formal frameworks for machine learning, such as the language-identification-in-the-limit framework (Gold, 1967) and probably approximately correct (PAC) learning (Valiant, 1984), few-shot (or one-shot) learning methods (Muggleton, 1996; Wang et al., 2020; Muggleton, 2023) have sought to bridge the gap by using prior knowledge to rapidly generalize to new instances given training on only a small number of samples with supervised information. In this paper, we explore the learnability of policies that generalize in the domain of graph routing, where a one-size-fits-all solution is challenging when the class of graphs has strict scalability limits for network capacity (Xue & Kumar, 2006) or has significant graph dynamics (Hekmat & Van Meghem, 2004; Grossglauser & Tse, 2002). Manually designed algorithms and heuristics often cater to particular network conditions and come with tradeoffs of complexity, scalability, and generalizability across diverse graphs. Many machine learned algorithms in this space incur relatively high computational complexity during training (Reis et al., 2019), have high overhead at run-time which limits their use in graphs with high dynamics, or are applicable only for small-scale graphs or graphs of limited types (e.g., high density graphs). Our work focuses attention on answering the following question: Can we design a high data efficient machine learning algorithm for graph routing based on local search that addresses complexity, scalability and generalizability issues all at once? We answer this question in the affirmative for the all-pairs near-shortest path (APNSP) problem over the class of uniform random graphs in Euclidean and in hyperbolic metric spaces. 
It is well known that uniform random graphs in Euclidean metric spaces can represent the topologies inherent in wireless networks and that the graphs in hyperbolic metric spaces can represent the tree-like topologies of the Internet and social networks where node degree follows a power-law distribution (Boguná et al., 2010; Verbeek & Suri, 2014); the policies we learn are thus broadly applicable to real-world wireless and wired networks. Our key insight is that—in contrast to pure black-box approaches—domain knowledge suffices to theoretically guide the selection of “seed” graph(s) and corresponding sparse training data for efficiently learning models that generalize to (almost) all graphs in these geometric classes. To motivate our focus on local search, we recall that approaches to solve the APNSP problem can be divided into two categories: global search and local search. Global search encodes the entire network state into graph embeddings (Narayanan et al., 2017) and finds optimal paths, whereas local search needs only node embeddings (Grover & Leskovec, 2016) to predict the next forwarder on the shortest path. The model complexity (in time and space) resulting from the latter is inherently better than the former, as is the tolerance to network perturbations. The latter can even achieve stateless design, as is illustrated by geographic routing (Cadger et al., 2012), where packet forwarding can be based on using only the location of the forwarding node and the destination. In other words, local search can achieve scalability and dynamic adaptation in a fashion that is relatively independent of the network configuration. We seek to achieve these properties by learning a low-complexity policy that by virtue of its generalizability can be instantiated and adapted in real-time. We model the APNSP problem as a Markov decision process (MDP) and propose a DNN-based approach to learn a “single-copy” local routing policy that at each routing node and at each time only considers the states from that node and one of its neighbors to predict a local metric (a Q-value) for routing. Routing thus uses a single neighbor for which the Q-value is the largest. For achieving efficient learning that generalizes over a class of graphs, we develop a theory based on the similarity between the local ranking of node neighbors in terms of Q-value and the global ranking with respect to the (shortest) path length metric. If local input features can be chosen to thus achieve high similarity for most nodes in almost all graphs in the chosen class, the APNSP objective can be realized with high probability by training a DNN that characterizes the local metric of each neighbor as a potential forwarder. Moreover, the DNN policy can generalize even if it is trained from only a few data samples chosen from a single “seed” graph. The theory guides our selection of input features as well as corresponding training data and is corroborated by empirical validation of our learned routing solutions. Our approach thus yields a light-weight solution to graph routing in chosen classes of graphs, in the sense that (a) the routing policy is rapidly learned from a small dataset that is easily collected from a single “seed” graph; (b) the learned policy can be used on all nodes of a graph, and is able to generalize across almost all graphs in the classes without additional training on the target networks; and (c) the routing decision only depends on the local network state, i.e., the state of the node and its one-hop neighbor nodes. 
Our main contributions and findings are as follows: First, generalization from few-shot learning from a single graph is feasible for APNSP and theoretically assured by domain knowledge. Second, domain knowledge also guides the selection of input features and training samples to increase the training efficiency and testing accuracy. Third, learning from a single graph using only a distance metric matches the well-known greedy forwarding routing. Fourth, learning from a single graph using both distance and node stretch factor relative to a given origin-destination node pair yields a new policy, Greedy Tensile routing, that achieves even better generalized APNSP routing. Fifth, both these learned policies can be symbolically interpreted in a low complexity fashion—they are approximated by policies with one and two linear actions respectively. Lastly, reinforcement learning from a single graph achieves comparable generalization performance for ASNSP. 2 Problem Formulation for Generalized Routing Consider the class \( \mathcal{G} \) of all graphs \( G = (V, E) \) whose nodes are uniformly randomly distributed over a 2-dimensional geometric space, that is either an Euclidean space or a hyperbolic space. Each node \( v \in V \) knows its global coordinates. For any pair of nodes \( v, u \in V \), edge \( (v, u) \in E \) holds if and only if the distance between \( v \) and \( u \) is at most the communication radius \( R \). For the case of \( G \) in a 2-dimensional Euclidean plane, we let \( R \) be a user-defined constant. Let \( \rho \) denote the network density, where network density is defined to be the average number of nodes per area, and \( n \) the number of nodes in \( V \). It follows that all nodes in \( V \) are distributed in a square whose side is of length \( \sqrt{\frac{n \times R^2}{\rho}} \). For the case of \( G \) in a 2-dimensional hyperbolic plane, all nodes in \( V \) are distributed in a disk of radius \( R \). Each node \( v \) thus has hyperbolic polar coordinates \((r_v, \theta_v)\) with \( r_v \in [0, R] \) and \( \theta_v \in [0, 2\pi] \). Let \( \delta \) be the average node degree. And let \( n \) and \(-\alpha\) denote the number of nodes in \( V \) and the negative curvature, respectively. All nodes are randomly distributed points with radial density \( p(r) = \alpha \frac{\sinh(\alpha r)}{\cosh(\alpha R) - 1} \) and uniform by angle, where \( R = 2 \log \frac{n}{\delta} \). It is well known that such uniform random graphs in the hyperbolic plane yield a power-law distribution for the node degrees (Aldecoa et al., 2015). ### 2.1 All-Pairs Near-Shortest Path Problem (APNSP) The objective of APNSP routing problem is to locally compute for all node pairs of any graph \( G \in \mathbb{G} \) their near-shortest path. Here, near-shortest path is defined as one whose length is within a user-specified factor (\( \geq 1 \)) of the shortest path length. Formally, let \( d_e(O, D) \) denote the distance between two endpoints \( O \) and \( D \), and \( d_{sp}(O, D) \) denote the length of the shortest path between these endpoints. Further, let \( \zeta(O, D) \) denote the path stretch of the endpoints, i.e., the ratio \( \frac{d_{sp}(O, D)}{d_e(O, D)} \). 
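As a concrete illustration of the two graph classes and of the path-stretch notation just defined, the following sketch generates a uniform random graph in the Euclidean square and in the hyperbolic disk and computes \( \zeta(O, D) \) for one node pair. The helper names and default parameters are illustrative, and hyperbolic distances use the standard hyperbolic law of cosines (curvature \(-1\)).

```python
import numpy as np
import networkx as nx

def euclidean_graph(n=64, rho=3.0, R=1.0, seed=0):
    """Uniform random graph in a square of side sqrt(n * R^2 / rho); edge iff distance <= R."""
    rng = np.random.default_rng(seed)
    side = np.sqrt(n * R**2 / rho)
    pos = rng.uniform(0.0, side, size=(n, 2))
    G = nx.Graph()
    for v in range(n):
        G.add_node(v, pos=pos[v])
    for v in range(n):
        for u in range(v + 1, n):
            d = float(np.linalg.norm(pos[v] - pos[u]))
            if d <= R:
                G.add_edge(v, u, length=d)
    return G

def hyperbolic_graph(n=64, delta=3.0, alpha=1.0, seed=0):
    """Uniform random graph in a hyperbolic disk of radius R = 2 log(n / delta),
    with radial density proportional to sinh(alpha * r), sampled by inverse CDF."""
    rng = np.random.default_rng(seed)
    R = 2.0 * np.log(n / delta)
    r = np.arccosh(1.0 + rng.uniform(size=n) * (np.cosh(alpha * R) - 1.0)) / alpha
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    def hdist(i, j):  # hyperbolic law of cosines (curvature -1)
        dth = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
        arg = np.cosh(r[i]) * np.cosh(r[j]) - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dth)
        return float(np.arccosh(max(arg, 1.0)))
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for v in range(n):
        for u in range(v + 1, n):
            d = hdist(v, u)
            if d <= R:
                G.add_edge(v, u, length=d)
    return G

# Path stretch zeta(O, D) = d_sp(O, D) / d_e(O, D) for one pair (assumes O and D are connected).
G = euclidean_graph()
O, D = 0, 1
d_e = float(np.linalg.norm(G.nodes[O]["pos"] - G.nodes[D]["pos"]))
d_sp = nx.shortest_path_length(G, O, D, weight="length")
zeta = d_sp / d_e
```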
**The APNSP Problem.** Learn a routing policy \( \pi \) such that, for any graph \( G = (V, E) \in \mathbb{G} \) and any origin-destination pair \((O, D)\) where \( O, D \in V \), \( \pi(O, D, v) = u \) finds \( v \)'s next forwarder \( u \) and in turn yields the routing path \( p(O, D) \) with path length \( d_p(O, D) \) that with high probability is a near-shortest path. In other words, \( \pi \) optimizes the accuracy of \( p(O, D) \) as follows: \[ \max \quad \text{Accuracy}_{G,\pi} = \frac{\sum_{O,D \in V} \eta(O, D)}{|V|^2}, \] \[ \text{s.t. } \eta(O, D) = \begin{cases} 1, & \text{if } \frac{d_p(O, D)}{d_e(O, D)} \leq \zeta(O, D)(1 + \epsilon) \\ 0, & \text{otherwise} \end{cases} \] Note that the user-specified factor for APNSP is \( \zeta(O, D)(1 + \epsilon) \), where \( \epsilon \geq 0 \). ### 2.2 MDP Formulation for the APNSP Problem To solve APNSP, we first formulate it as a Markov decision process (MDP) problem that learns to choose actions that maximize the expected future reward. In general, an MDP consists of a set of states \( S \), a set of actions \( A(s) \) for any state \( s \in S \), an instantaneous reward \( r(s, a) \), indicating the immediate reward for taking action \( a \in A(s) \) at state \( s \), and a state transition function \( P(s'|s, a) \) which characterizes the probability that the system transits to the next state \( s' \in S \) after taking action \( a \) at the current state \( s \in S \). To simplify the routing behavior in the problem, the state transition is assumed to be deterministic. Specifically, each state \( s \) represents the features of a node holding a packet associated with an origin-destination pair \((O, D)\), and an action \( a \in A(s) \) indicates the routing behavior to forward the packet from the node to one of its neighbors. Given the current state \( s \) and an action \( a \in A(s) \) which selects one neighbor as the next forwarder, the next state \( s' \) is determined as the features of the selected neighbor such that the probability \( P(s'|s, a) \) is always one. The tuple \((s, a, r, s')\) is observed whenever a packet is forwarded. In addition, we define the \( Q \)-value to specify the cumulative future reward from state \( s \) and taking action \( a \): \( Q(s_t = s, a_t = a) = \sum_{i=t}^{L} \gamma^{i-t} r(s_i, a_i) \), where \( \gamma, 0 \leq \gamma \leq 1 \), is the discount factor. When --- **Figure 1:** Schema for solution using DNN to predict \( Q \)-values for selecting the routing forwarder. \( \gamma = 0 \), the instantaneous reward is considered exclusively, whereas the future rewards are treated as equally important as the instantaneous reward in the Q-value if \( \gamma = 1 \). In the APNSP problem, we define the instantaneous reward \( r(s, a) \) as the negative length of the corresponding edge \((s, s')\), and set \( \gamma = 1 \). Therefore, the optimal Q-value, \( Q^*(s, a) \), is equal to the cumulative negative length of the shortest path from \( s \) to the destination. For solving APNSP, we seek to learn the optimal Q-value through a data-driven approach with DNNs. As depicted in Figure 1, each state \( s \) and action \( a \) will be embedded into a set of input features denoted by \( f_s(s) \) and \( f_a(a) \), respectively. A DNN will be learned to approximate the optimal Q-value given the input features, based on which a near-shortest path routing policy can be obtained by choosing actions with largest Q-values. 
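Since \( \gamma = 1 \) and the instantaneous reward is the negative edge length, the optimal Q-value for forwarding from \( v \) to \( u \) is simply \( -(\text{len}(v, u) + d_{sp}(u, D)) \), which can be computed with a single Dijkstra run rooted at the destination. The sketch below illustrates this; it assumes the `euclidean_graph` helper from the previous sketch (any weighted `networkx` graph would do), and all names are illustrative.

```python
import networkx as nx

def optimal_q(G, D):
    """Optimal Q-values: Q*(v, u) = -(len(v, u) + d_sp(u, D)) for every forwarding choice (v, u)."""
    dist_to_D = nx.single_source_dijkstra_path_length(G, D, weight="length")
    q = {}
    for v, u, data in G.edges(data=True):
        # undirected graph: fill both orientations of each edge, skipping unreachable endpoints
        if u in dist_to_D:
            q[(v, u)] = -(data["length"] + dist_to_D[u])
        if v in dist_to_D:
            q[(u, v)] = -(data["length"] + dist_to_D[v])
    return q

# Greedy policy with respect to Q*: forward from v to the neighbor with the largest Q-value.
G = euclidean_graph(n=64, rho=3.0, R=1.0, seed=0)   # helper from the previous sketch
q = optimal_q(G, D=5)
v = 0
best = max((u for u in G.neighbors(v) if (v, u) in q), key=lambda u: q[(v, u)])
```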
### 2.3 Design of Input Features

To learn a routing protocol that generalizes across graphs with different scales, densities, and topologies, the input features of the DNN should be designed to be independent of global network configurations, including the identity of nodes and of packets. Recall that each node knows its own coordinates and the coordinates of the origin and the destination. Input features based on distance and on stretch factor have been found useful in local geographic routing protocols. Accordingly, we design the input features, including state and action features, as follows:

- **State feature**, \( f_s(O, D, v) \). For a packet with its specified origin \( O \) and destination \( D \) at node \( v \), the state feature is the vector with the elements below.
1. **Distance to destination**, \( \text{dist}(v, D) \): the distance between \( v \) and \( D \).
2. **Stretch factor**, \( sf(O, D, v) = \frac{\text{dist}(O, v) + \text{dist}(v, D)}{\text{dist}(O, D)} \): the stretch of the indirect distance between \( O \) and \( D \) via \( v \) with respect to the direct distance between \( O \) and \( D \).
- **Action feature**, \( f_a(O, D, a) = f_s(O, D, u) \). The feature for the action \( a \) that forwards a packet from \( v \) to a neighbor \( u \in \text{nbr}(v) \) is chosen to be the same as the state feature of \( u \), i.e., \( f_s(O, D, u) \).

Henceforth, we consider learning with two different combinations of input features: one using only \( \text{dist}(v, D) \) and \( \text{dist}(u, D) \), and the other using \( \text{dist}(v, D) \) and \( \text{dist}(u, D) \) together with \( sf(O, D, v) \) and \( sf(O, D, u) \).

### 3 Assuring Generalizability of Routing Policies

We begin with a sufficient condition under which a routing policy \( \pi \), learned from a seed graph \( G^* = (V^*, E^*) \in \mathcal{G} \) (where \( \mathcal{G} \) is the set of all uniform random graphs) with selected samples generated from a subset of nodes in \( V^* \), can generalize over other (and potentially all) graphs \( G \in \mathcal{G} \). A basis for generalizability in this setting is the concept of a ranking metric for each node \( v \in V \) with respect to each \( u \in \text{nbr}(v) \), where \( \text{nbr}(v) \) denotes the set of \( v \)'s one-hop neighbors. Let \( f_s : V \rightarrow \mathbb{R}^I \) be a map of \( v \in V \) to its state features (of cardinality \( I \)) and let \( f_a : V \rightarrow \mathbb{R}^J \) be a map of \( u \in \text{nbr}(v) \) to its action features (of cardinality \( J \)). We define a ranking metric \( m(f_s(O, D, v), f_a(O, D, u)) \in \mathbb{R} \) to be a linear function over the input features associated with \( v \) and \( u \). For notational convenience, given an ordering \((u_0, ..., u_d)\) of all nodes in \( \text{nbr}(v) \), let \( X_v = \{(f_s(O, D, v), f_a(O, D, u_0)), ..., (f_s(O, D, v), f_a(O, D, u_d))\} \) denote the set of input vectors for the corresponding neighbors \( u_k \in \text{nbr}(v), 0 \leq k \leq d \). Also, let \( Y_v = \{Q(v, u_0), ..., Q(v, u_d)\} \) denote the corresponding set of Q-values. Consider then a sufficient condition on the relation between the local ranking metric \( m \) and the corresponding set of Q-values \( Y_v \) (namely, the global ranking metric), under which a DNN model can be learned to rank the neighbors \( u \in \text{nbr}(v) \) according to their Q-values.
**Theorem 0 (Learnability).** Let \( v \) be any node in \( V \) for which ranking metric \( m(f_s(O, D, v), f_a(O, D, u)) \) satisfies the following property, RankPres: If \( \langle m(f_s(O, D, v), f_a(O, D, u_0)), ..., m(f_s(O, D, v), f_a(O, D, u_d)) \) is monotonically increasing,\(^1\) then \( \langle Q(v, u_0), ..., Q(v, u_d) \) is monotonically increasing. There exists a learnable DNN \( H : \mathbb{R}^{I+J} \rightarrow \mathbb{R} \), with training samples \( \langle X_v, Y_v \rangle \), that achieves optimal ranking of all \( u \in \text{nbr}(v) \), i.e., its output for the corresponding neighbors of \( v \), \( \langle H(f_s(O, D, v), f_a(O, D, u_0)), ..., H(f_s(O, D, v), f_a(O, D, u_d)) \), is monotonically increasing. Next, we lift the sufficient condition to provide a general basis, first, for ranking the neighbors of all nodes in a graph according to their optimal shortest paths, from only the samples derived from one (or a few) of its nodes; and second, for similarly ranking the neighbors of all graphs in \( G \). **Theorem 1** (Cross-Node Generalizability). For any graph \( G = (V, E) \), if there exists a ranking metric \( m(f_s(O, D, v), f_a(O, D, u)) \) that satisfies the RankPres property for all \( v \in V \), then an optimal ranking policy for all \( v \in V \) is learnable with only a subset of training samples \( \langle X_{V'}, Y_{V'} \rangle \), where \( V' \subseteq V \), \( X_{V'} = \bigcup_{v \in V'} X_v \), and \( Y_{V'} = \bigcup_{v \in V'} Y_v \). Note that if the \( Q(v, u) \) value corresponds to the optimal (shortest) path \( Q \)-value for each \((v, u)\) pair, then the DNN indicated by Theorem 1 achieves an optimal routing policy for all nodes in \( V \). Note also in this case that if the ranking metric \( m \) satisfies RankPres not for all nodes but for almost all nodes, a policy learned from samples from one or more nodes \( v \) that satisfy RankPres may not achieve optimal routing for all nodes. Nevertheless, if the relative measure of the number of nodes that do not satisfy RankPres to the number of nodes that do satisfy RankPres is small, then with high probability the policy achieves near-optimal routing. **Theorem 2** (Cross-Graph Generalizability). If there exists a ranking metric \( m(f_s(O, D, v), f_a(O, D, u)) \) that satisfies the RankPres property for the nodes in all graphs \( G \in G \), then an optimal ranking policy is learnable by using training samples from one or more nodes in one or more chosen seed graph(s) \( G^* \in G \). Again, if Theorem 2 is considered in the context of \( Q \)-values corresponding to optimal shortest paths, the learned routing policy \( \pi \) generalizes to achieving optimal routing over all graphs \( G \in G \). And if we relax the requirement that RankPres holds for all nodes of all graphs in \( G \) to only requiring that for almost all graphs \( G \in G \), there is a high similarity between the ranking metric \( m \) and the optimal \( Q \)-value, then with high probability the policy achieves near-optimal routing. Proofs of the above-mentioned theorems are relegated to Appendix A. **Proposition 1.** For APNSP, there exists a local ranking metric \( m_1(f_s(O, D, v), f_a(O, D, u)) \) of the form \( w_1 \cdot \text{dist}(v, D) + w_2 \cdot \text{dist}(u, D) \) based on the distance input feature that satisfies the RankPres property for almost all nodes in almost all graphs \( G \). 
Also, there exists a local ranking metric \( m_2(f_s(O, D, v), f_a(O, D, u)) \) of the form \( w_1 \cdot \text{dist}(v, D) + w_2 \cdot \text{sf}(O, D, v) + w_3 \cdot \text{dist}(u, D) + w_4 \cdot \text{sf}(O, D, u) \) based on both distance and stretch factor input features that satisfies the RankPres property for almost all nodes in almost all graphs \( G \). We empirically validate Proposition 1 as presented in Appendix B. RankPres is quantified in terms of Ranking Similarity; high ranking similarity implies that RankPres holds with high probability. We show that Proposition 1 holds for both Euclidean and hyperbolic spaces with respectively chosen weights \( w \) for \( m_1 \) and \( m_2 \). It follows that an efficient generalizable policy for APNSP is feasible for each of the two chosen input feature sets, given the existence of respective ranking metrics that with high probability satisfy RankPres. For APNSP, the optimal \( Q(v, u) \) values can be retrieved by calculating the length of the shortest path starting from \( v \) toward \( u \) until reaching the destination. A near-optimal routing policy may then be learned via supervised learning on a single seed graph. \(^1\)By monotonic increasing order in \( \langle m(f_s(O, D, v), f_a(O, D, u_0)), ..., m(f_s(O, D, v), f_a(O, D, u_d)) \), we mean \( m(f_s(O, D, v), f_a(O, D, u_0)) \leq ... \leq m(f_s(O, D, v), f_a(O, D, u_d)) \). 4 SINGLE GRAPH LEARNING ALGORITHM 4.1 SELECTION OF SEED GRAPH AND GRAPH SUBSAMPLES To achieve both cross-graph generalizability and cross-node generalizability, we develop a knowledge-guided mechanism with the following two selection components: Seed Graph Selection. The choice of seed graph depends primarily on the analysis of cross-node generalizability (Theorem 1) across a sufficient set of uniform random graphs with diverse sizes and densities/average node degrees. In Figures 10 and 11 in Appendix B.2, we empirically show that, with the use of distance and stretch factor, a good seed graph is likely to exist in a set of graphs with small size (e.g., 50) and high density (e.g., 5) in the Euclidean space and high average node degree (e.g., 4) in the hyperbolic space. There may be applications where analysis of large (or full) graphs is not always possible. In such situations, given a graph $G$, an alternative choice of seed graph can be from a small subgraph $G' = (V', E')$, $V' \subset V$, $E' \subset E$ with relatively high cross-node generalizability. Note that, in Theorem 1, for a graph $G = (v, E)$ satisfying the RankPres property for all $v \in V$, RankPres still holds for $v' \in V'$ in a subgraph $G' = (V', E')$ of $G$. This is because the learnable function $m$ still preserves the optimal routing policy for all nodes in a subset of $nbr(v)$. Graph Subsamples Selection. We provide the following scheme of subsample selection for a given graph $G = (V, E)$ to choose a set of $\phi$ nodes for generating $\phi$ training samples. 1. Select an origin and destination pair $(O, D), O, D \in V$. 2. Select $\phi$ nodes, $v_0, ..., v_{\phi-1} \in V \setminus D$. 3. For each chosen node $v_0, ..., v_{\phi-1}$, respectively, collect the subsamples $(X, Y)$, where \[ X = \bigcup_{v \in \{v_0, ..., v_{\phi-1}\}} \{f_s(O, D, v), f_a(O, D, u)\} \] and \[ Y = \bigcup_{v \in \{v_0, ..., v_{\phi-1}\}} \{Q^*(v, u)\}. \] In Appendix F, we analyze the complexity of graph subsampling. An alternative for seed nodes search is also provided to limit the search complexity. 
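A minimal sketch of this subsample collection, under the Euclidean feature definitions of Section 2.3, is given below; it assumes the `euclidean_graph` and `optimal_q` helpers from the earlier sketches, and the choice of the \((O, D)\) pair and of the \(\phi = 3\) seed nodes is purely illustrative.

```python
import numpy as np

def features(G, O, D, v):
    """Euclidean state/action features of Section 2.3: (dist(v, D), sf(O, D, v))."""
    pv, pO, pD = G.nodes[v]["pos"], G.nodes[O]["pos"], G.nodes[D]["pos"]
    dist_vD = float(np.linalg.norm(pv - pD))
    sf = (float(np.linalg.norm(pO - pv)) + dist_vD) / float(np.linalg.norm(pO - pD))
    return np.array([dist_vD, sf])

def collect_subsamples(G, O, D, nodes):
    """One (X, Y) sample per (chosen node v, neighbor u) pair, labeled with Q*(v, u)."""
    q = optimal_q(G, D)                            # helper from the previous sketch
    X, Y = [], []
    for v in nodes:
        for u in G.neighbors(v):
            if (v, u) in q:
                X.append(np.concatenate([features(G, O, D, v), features(G, O, D, u)]))
                Y.append(q[(v, u)])
    return np.array(X), np.array(Y)

# Small, dense seed graph and phi = 3 seed nodes (all choices here are illustrative).
G = euclidean_graph(n=50, rho=5.0, R=1.0, seed=0)
X, Y = collect_subsamples(G, O=0, D=7, nodes=[3, 11, 20])
```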
4.2 SUPERVISED LEARNING FOR APNSP WITH OPTIMAL Q-VALUES Given the dataset $(X, Y)$ collected from a seed graph, we train a DNN based on supervised learning to capture the optimal ranking policy. Specifically, suppose the DNN $H$ is parameterized by $\Theta$. We seek to minimize the following loss function: \[ \min_\Theta \sum_{(X, Y)} \|H_\Theta(f_s(O, D, v), f_a(O, D, u)) - Q^*(v, u)\|^2. \] (3) Note that we assume that the optimal $Q$-values are known for the seed graph in the supervised learning above, which can be obtained based on the shortest path routing policies of the seed graph. By leveraging these optimal $Q$-values and supervised learning on the seed graph, a generalized routing policy is learned for APNSP routing over almost all uniform random graphs in both Euclidean and hyperbolic spaces, as we validate in the experiments in Section 5. 4.3 REINFORCEMENT LEARNING FOR APNSP For the case where the optimal $Q$-values of graphs are unknown, we solve the APNSP problem using RL. Using the same input features and seed graph selection procedure, the RL algorithm continuously improves the quality of $Q$-value estimations by interacting with the seed graph. In contrast to the supervised learning algorithm, where we collect only a single copy of data samples from a set of chosen (shortest path) nodes once before training, in RL new training data samples from nodes in a shortest path, predicted by most recent training episode (i.e., based on the current $Q$-value estimation), are collected at the beginning of each training episode. Remarkably, the generalizability of the resulting RL routing policy across almost all uniform random graphs in Euclidean and hyperbolic spaces is preserved. The details of the RL algorithm, named as RL-APNSP-ALGO, are shown in Algorithm 1 in Appendix C. 5 ROUTING POLICY PERFORMANCE FOR SCALABILITY AND ZERO-SHOT GENERALIZATION In this section, we discuss implementation of our machine learned routing policies and evaluate their performance in predicting all-pair near-shortest paths for graphs across different sizes and densities over Euclidean and hyperbolic spaces in Python3. We use PyTorch 2.0.1 (The Linux Foundation, 2023) on the CUDA 11.8 compute platform to implement DNNs as shown in Figure 1. Table 3 in Appendix D.1 shows our simulation parameters for training and testing the routing policies. 5.1 COMPARATIVE EVALUATION OF ROUTING POLICIES We compare the performance of the different versions of Greedy Tensile policies obtained using the following approaches: **Supervised (φ=3) / Supervised (all):** Supervised learning from appropriately chosen seed graph $G^*$ using graph subsamples selection from φ=3 or all nodes. **RL (φ=3) / RL (all):** RL from appropriately chosen seed graph $G^*$ using graph subsamples selection from φ=3 or all nodes. **GF:** Greedy forwarding that forwards packets to the one-hop neighbor with the minimum distance to the destination. Note that for a given set of input features, both supervised learning and reinforcement learning schemes use the same DNN configuration to learn the routing policies. By using the subsampling mechanism, not only the sample complexity but also the training time will be significantly reduced in Supervised (φ=3) and RL (φ=3) compared to those in Supervised (all) and RL (all), respectively. 
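For reference, a minimal PyTorch sketch of the supervised variant, i.e., the regression objective of Equation 3 fit on the seed-graph samples, is shown below. The two hidden layers of 200 and 4 neurons are inferred from the multiplication count quoted in Section 6; the optimizer, learning rate, and epoch count are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """DNN of Figure 1: maps the concatenated state/action features to a scalar Q-value."""
    def __init__(self, n_features=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 200), nn.ReLU(),
            nn.Linear(200, 4), nn.ReLU(),
            nn.Linear(4, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_supervised(X, Y, epochs=500, lr=1e-3):
    """Minimize Eq. (3): squared error between the DNN output and the optimal Q-values."""
    X = torch.as_tensor(X, dtype=torch.float32)
    Y = torch.as_tensor(Y, dtype=torch.float32)
    model = QNet(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), Y)
        loss.backward()
        opt.step()
    return model

# model = train_supervised(X, Y)   # (X, Y) collected from the seed graph as sketched above
```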
5.2 ZERO-SHOT GENERALIZATION OVER DIVERSE GRAPHS To evaluate the scalability and generalizability of the policies, we directly (i.e., without any adaptation) test the policies learned from the seed graph $G^*$ on new uniform random graphs with different combinations of $(N_{test}, p_{test})$ in Euclidean spaces and $(N_{test}, \delta_{test})$ in hyperbolic spaces. We select 20 random graphs for each pair and calculate the average prediction accuracy over these $20N_{test}^2$ shortest paths. For the DNNs with input $\text{dist}(v, D)$ and $\text{dist}(u, D)$, the tests confirm that the performance of all the learned policies exactly match the prediction accuracy of Greedy Forwarding in both Euclidean and hyperbolic spaces. (In Appendix E, we symbolically interpret the learned DNN with input $\text{dist}(v, D)$ and $\text{dist}(u, D)$ to show it can be reduced to GF.) For the Greedy Tensile DNNs with input $\langle \text{dist}(v, D), \text{sf}(O, D, v), \text{dist}(u, D), \text{sf}(O, D, u) \rangle$, we plot in Figure 2 the respective average prediction accuracies across graph sizes in $\{27, 64, 125, 216\}$ with density in $\{2, 3, 4, 5\}$ in Euclidean space and average node degree in $\{1, 2, 3, 4\}$ in hyperbolic space. In Euclidean space, the Supervised (φ=3) approach achieves the best performance among all the approaches. In particular, compared to GF, the Supervised (φ=3) policy improves the accuracy up to 9% over GF, whereas the other learned policies show at least comparable performance in low density. graphs ($\rho = 2$) and achieve an improvement of up to 5% in graphs with $\rho \geq 3$. The performance gap between the DNNs and GF increases as the network density increases to a high level (e.g., $\rho = 5$), wherein GF was believed to work close to the optimal routing. In hyperbolic space, the RL (all) policy improves the accuracy up to 3% over GF, whereas the other learned policies show at least comparable performance in low degree graphs ($\delta = 1$) and achieve an improvement of up to 2% in graphs with $\delta \geq 2$. To the best of our knowledge, we are the first to provide routing policies that outperform GF in almost all random graphs; recall GF was shown to find almost optimal shortest path in scale-free topologies (Papadopoulos et al., 2010). 6 Symbolic Interpretability of Learned Model In this section, we symbolically interpret the learned model for Greedy Tensile routing, achieving a two orders of magnitude reduction in its operational complexity. Figure 3 plots the output of its DNN, towards explaining the learned policy. Since $dist(v, D)$ and $sf(O, D, v)$ stay unchanged for a fixed routing node $v$ at a given time, we plot the shape of the ranking metric (z-axis) of the learned DNN according to varying $sf(O, D, v)$ (x-axis) and $dist(u, D)$ (y-axis) in the figure. ![Figure 3](image) (a) DNN, $dist(v, D) = 2$ (b) DNN, $dist(v, D) = 4$ (c) Symbolic Approximation Figure 3: The shape of ranking metrics of the Greedy Tensile DNN and its two-linear action Symbolic Approximation policy, given $sf(O, D, v) = 1.2$. The x and y axes represent $sf(O, D, u)$ and $dist(u, D)$, and the z axis is the ranking metric for routing. Figures 3(a) and 3(b) show that the Greedy Tensile DNN has two planes separated by a transition boundary that varies as $dist(v, D)$ changes. The two planes can be respectively approximated by two different linear functions. The first function (for the upper plane) prefers both smaller $sf(O, D, u)$ and $dist(u, D)$. 
The second (for the lower plane) significantly prioritizes smaller $dist(u, D)$. We find that the two functions that approximate the Greedy Tensile DNN can be symbolically represented by a guarded command: $$ z = \begin{cases} -0.013dist(v, D) - 0.023sf(O, D, u) - 0.012dist(u, D) - 0.063, \\ \text{if } dist(u, D) < 1.020dist(v, D) + 0.567sf(O, D, u) - 0.690 \\ 0.025dist(v, D) - 0.002sf(O, D, u) - 0.044dist(u, D) - 0.146, \text{ otherwise}. \end{cases} $$ (4) The weights of the guarded command are calculated using linear regression. Its first action assigns dominant weights to both $sf(O, D, u)$ and $dist(u, D)$, while its second action gives almost negligible weight to $sf(O, D, u)$ compared to the weight of $dist(uD)$. The shape of ranking metrics of the guarded command given $dist(v, D) = 4$ is shown in Figure 3(c), which has a two-plane surface similar to Figure 3(b). Figure 14 in Appendix D.3 shows that the accuracy of the two-linear action policy using Equation 4 is close to that of Greedy Tensile DNN. The simplified two-linear-action policy also has substantially reduced operation complexity. Whereas the Greedy Tensile DNN requires at least $\Omega \times N_e[1] \times N_e[2]$ ($= 4 * 200 * 4$) multiplications to output the Q-value for a given (state, action) pair, where $\Omega$ represents the number of --- Since Greedy Tensile models in Euclidean and hyperbolic spaces have a similar shape, we only visualize the learned model in the Euclidean space here. input features of the DNN and $N_e[i]$ denotes the number of neurons in the $i$-th hidden layer, the two-linear-action policy needs less than ten multiplications. 7 RELATED WORK Feature Selection for Routing. A classic feature for local routing comes from greedy forwarding (Finn [1987]), where the distance to the destination node (in an Euclidean or hyperbolic metric space) is used to optimize forwarder selection. It has been proven that this feature achieves nearly optimal routing in diverse network configurations, including scale-free networks (Kleinberg [2000], Papadopoulos et al. [2010]). A stretch bound on routing paths using greedy forwarding is investigated in diverse models with or without the assumption of unit disk graphs (Flury et al. [2009], Tan et al. [2009b], Tan & Kermarrec [2011], Tan et al. [2009a], Won & Stoleru [2014]). Other features for forwarder selection include Most Forward within Radius (MFR) (Takagi & Kleinrock [1984]), Nearest with Forwarding Progress (NFP) (Hou & Li [1986]), the minimum angle between neighbor and destination (aka Compass Routing) (Kranakis [1999]), and Random Progress Forwarding (RPF) (Nelson & Kleinrock [1984]). Network domain knowledge has also been used to guide search efficiency in routing protocols. A recent study (Chen et al. [2023]) shows that searching for shortest paths in uniform random graphs can be restricted to an elliptic search region with high probability. Its geographic routing protocol, QF-Geo, uses node Stretch Factor as an input feature to determine whether a node’s neighbors lie in the search region and to forward packets only within the predicted elliptic region. Generalizability of Machine Learned Routing. Only recently has machine learning research started to address generalizability in routing contexts. For instance, generalizability to multiple graph layout distributions, using knowledge distillation, has been studied for a capacitated vehicle routing problem (Bi et al. [2022]). 
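For reference, the two-linear-action policy of Equation 4 can be written out directly as a few lines of Python; the coefficients are copied from the equation, while the function wrappers and the greedy-forwarding baseline used for comparison are illustrative.

```python
def greedy_tensile_rank(dist_vD, sf_u, dist_uD):
    """Two-linear-action approximation of the Greedy Tensile DNN (Eq. 4)."""
    if dist_uD < 1.020 * dist_vD + 0.567 * sf_u - 0.690:
        return -0.013 * dist_vD - 0.023 * sf_u - 0.012 * dist_uD - 0.063
    return 0.025 * dist_vD - 0.002 * sf_u - 0.044 * dist_uD - 0.146

def greedy_tensile_next_hop(neighbors, dist_vD):
    """neighbors: iterable of (u, dist(u, D), sf(O, D, u)); pick the largest ranking metric."""
    return max(neighbors, key=lambda t: greedy_tensile_rank(dist_vD, t[2], t[1]))[0]

def greedy_forwarding_next_hop(neighbors):
    """GF baseline: forward to the neighbor with the smallest distance to the destination."""
    return min(neighbors, key=lambda t: t[1])[0]
```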
Some explorations have considered local search: i.e., wireless network routing strategies via local search based on deep reinforcement learning (Manfredi et al. [2021], [2022]) have been shown to generalize to other networks of up to 100 nodes, in the presence of diverse dynamics including node mobility, traffic pattern, congestion, and network connectivity. Deep learning has also been leveraged for selecting an edge set for a well-known heuristic, Lin-Kernighan-Helsgaun (LKH), to solve the Traveling Salesman Problem (TSP) (Xin et al. [2021]). The learned model generalizes well for larger (albeit still modest) sized graphs and is useful for other network problems, including routing. Likewise, graph neural networks and learning for guided local search to select relevant edges have been shown to yield improved solutions to the TSP (Hudson et al. [2021]). In related work, deep reinforcement learning has been used to iteratively guide the selection of the next solution for routing problems based on neighborhood search (Wu et al. [2021]). 8 CONCLUSIONS AND FUTURE WORK We have shown that guiding machine learning with domain knowledge can lead to the rediscovery of a well-known routing policy (somewhat surprisingly), in addition to discovering a new routing policy, that perform well in terms of complexity, scalability, and generalizability. The theory we have presented in the paper is readily extended to other classes of graphs (such as non uniform cluster distributions), ranking metrics that are nonlinear, and MDP actions that span multiple neighbors. Thus, albeit our illustration intentionally uses relatively familiar input features and local routing architectures, richer domain theory will be useful to guide machine learning of novel routing algorithms. Moreover, the routing policies are likely to be competitive for richer classes of graphs than the class of uniform random graphs on which we have focused our validation. While samples from nodes of a single seed graph suffice for generalizable learning, in practice, learning from multiple seed graphs may be of interest. For instance, if an ideal seed graph is not known a priori, online learning from better or multiple candidate seed graphs as they are encountered may be of interest for some applications. Along these lines, we recall that the set of ideal (and near ideal) seed graphs is relatively large in the problem we considered. One way to relax the knowledge of ideal seed graphs is to leverage online meta-learning, for learning a good model initialization and continuing to improve the initialization based on better seed graphs as they are encountered. Towards this end, we have also been studying the merits of efficiently fine tuning the model for the target graph as an alternative to zero-shot generalization. REFERENCES Rodrigo Aldecoa, Chiara Orsini, and Dmitri Krioukov. Hyperbolic graph generator. *Computer Physics Communications*, 196:492–496, 2015. Jieyi Bi, Yining Ma, Jiahai Wang, Zhiguang Cao, Jinbiao Chen, Yuan Sun, and Yeow Meng Chee. Learning generalizable models for vehicle routing problems via knowledge distillation. In *Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems (NeurIPS)*, 2022. Marián Boguná, Fragkiskos Papadopoulos, and Dmitri Krioukov. Sustaining the internet with hyperbolic mapping. *Nature Communications*, 1(1):62, 2010. Fraser Cadger, Kevin Curran, Jose Santos, and Sandra Moffett. A survey of geographical routing in wireless ad-hoc networks. 
*IEEE Communications Surveys & Tutorials*, 15(2):621–653, 2012. Yung-Fu Chen, Kenneth W Parker, and Anish Arora. QF-Geo: Capacity aware geographic routing using bounded regions of wireless meshes. *arXiv preprint arXiv:2305.05718*, 2023. Gregory G Finn. Routing and addressing problems in large metropolitan-scale internetworks. Technical report, University of Southern California Marina Del Rey Information Sciences Inst, 1987. Roland Flury, Sriram V Pemmaraju, and Roger Wattenhofer. Greedy routing with bounded stretch. In *IEEE INFOCOM*, pp. 1737–1745, 2009. E Mark Gold. Language identification in the limit. *Information and Control*, 10(5):447–474, 1967. Matthias Grossglauser and David NC Tse. Mobility increases the capacity of ad hoc wireless networks. *IEEE/ACM Transactions on Networking*, 10(4):477–486, 2002. Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 855–864, 2016. Ramin Hekmat and Piet Van Mieghem. Interference in wireless multi-hop ad-hoc networks and its effect on network capacity. *Wireless Networks*, 10:389–399, 2004. Ting-Chao Hou and Victor Li. Transmission range control in multihop packet radio networks. *IEEE Transactions on Communications*, 34(1):38–44, 1986. Benjamin Hudson, Qingbiao Li, Matthew Malencia, and Amanda Prorok. Graph neural network guided local search for the traveling salesperson problem. *arXiv preprint arXiv:2110.05291*, 2021. Kalervo Järvelin and Jaana Kekäläinen. Cumulated gain-based evaluation of IR techniques. *ACM Transactions on Information Systems (TOIS)*, 20(4):422–446, 2002. Jon M Kleinberg. Navigation in a small world. *Nature*, 406(6798):845–845, 2000. Evangelos Kranakis. Compass routing on geometric networks. In *Proceedings of the 11th Canadian Conference on Computational Geometry (CCCG 1999), Vancouver, August*, 1999. Victoria Manfredi, Alicia P Wolfe, Bing Wang, and Xiaolan Zhang. Relational deep reinforcement learning for routing in wireless networks. In *2021 IEEE 22nd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)*, pp. 159–168, 2021. Victoria Manfredi, Alicia Wolfe, Xiaolan Zhang, and Bing Wang. Learning an adaptive forwarding strategy for mobile wireless networks: Resource usage vs. latency. In *Reinforcement Learning for Real Life (RL4RealLife) Workshop in the 36th Conference on Neural Information Processing Systems (NeurIPS)*, 2022. SH Muggleton. Hypothesizing an algorithm from one example: the role of specificity. *Philosophical Transactions of the Royal Society A*, 381(2251):20220046, 2023.
xAqcJ9XoTf
In Figure 2, how did you compute the Lipschitz constant of MLPs? We can compute the Lipschitz constant for models that consist of a single layer, but exact computation of the Lipschitz constant of MLPs is NP-hard [1].
ON THE STABILITY OF EXPRESSIVE POSITIONAL ENCODINGS FOR GRAPHS Yinan Huang*1, William Lu*2, Joshua Robinson3, Yu Yang4, Muhan Zhang5, Stefanie Jegelka6, Pan Li1 1Georgia Institute of Technology, 2Purdue University, 3Stanford University, 4Tongji University, 5Peking University, 6MIT CSAIL {yhuang903, panli}@gatech.edu, lu909@purdue.edu, joshrob@cs.stanford.edu, yangyu0879@tongji.edu.cn, muhan@pku.edu.cn, stefje@mit.edu ABSTRACT Designing effective positional encodings for graphs is key to building powerful graph transformers and enhancing message-passing graph neural networks. Although widespread, using Laplacian eigenvectors as positional encodings faces two fundamental challenges: (1) Non-uniqueness: there are many different eigendecompositions of the same Laplacian, and (2) Instability: small perturbations to the Laplacian could result in completely different eigenspaces, leading to unpredictable changes in positional encoding. Despite many attempts to address non-uniqueness, most methods overlook stability, leading to poor generalization on unseen graph structures. We identify the cause of instability to be a “hard partition” of eigenspaces. Hence, we introduce Stable and Expressive Positional Encodings (SPE), an architecture for processing eigenvectors that uses eigenvalues to “softly partition” eigenspaces. SPE is the first architecture that is (1) provably stable, and (2) universally expressive for basis invariant functions whilst respecting all symmetries of eigenvectors. Besides guaranteed stability, we prove that SPE is at least as expressive as existing methods, and highly capable of counting graph structures. Finally, we evaluate the effectiveness of our method on molecular property prediction, and out-of-distribution generalization tasks, finding improved generalization compared to existing positional encoding methods. Our code is available at https://github.com/Graph-COM/SPE. 1 INTRODUCTION Deep learning models for graph-structured data such as Graph Neural Networks (GNNs) and Graph Transformers have been arguably one of the most popular machine learning models on graphs, and have achieved remarkable results for numerous applications in drug discovery, computational chemistry, and social network analysis, etc. (Kipf & Welling, 2017; Bronstein et al., 2017; Duvenaud et al., 2015; Stokes et al., 2020; Zhang & Chen, 2018; Ying et al., 2021; Rampášek et al., 2022b). However, there is a common concern about these models: the limited expressive power. For example, it is known that message-passing GNNs are at most expressive as the Weisfeiler-Leman test (Xu et al., 2019; Morris et al., 2019) in distinguishing non-isomorphic graphs, and in general cannot even approximate common functions such as the number of certain subgraph patterns (Chen et al., 2020; Arvind et al., 2020; Tahmasebi et al., 2020; Huang et al., 2023). These limitations could significantly restrict model performance, e.g., since graph substructures can be closely related to the target function in chemistry, biology and social network analysis (Girvan & Newman, 2002; Granovetter, 1983; Koyutürk et al., 2004; Jiang et al., 2010; Bouritsas et al., 2022). To alleviate expressivity limitations, there has been considerable interest in designing effective positional encodings for graphs (You et al., 2019; Dwivedi & Bresson, 2021; Wang et al., 2022a). 
Generalized from the positional encodings of 1-D sequences for Transformers (Vaswani et al., 2017), the idea is to endow nodes with information about their relative position within the graph and thus make them more distinguishable. Many promising graph positional encodings use the eigenvalue decomposition of the graph Laplacian (Dwivedi et al., 2023; Kreuzer et al., 2021). The eigenvalue *equal contribution decomposition is a strong candidate because the Laplacian fully describes the adjacency structure of a graph, and there is a deep understanding of how these eigenvectors and eigenvalues inherit this information (Chung, 1997). However, eigenvectors have special structures that must be taken into consideration when designing architectures that process eigenvectors. Firstly, eigenvectors are not unique: if \( v \) is an eigenvector, then so is \(-v\). Furthermore, when there are multiplicities of eigenvalues then there are many more symmetries, since any orthogonal change of basis of the corresponding eigenvectors yields the same Laplacian. Because of this basis ambiguity, neural networks that process eigenvectors should be basis invariant: applying basis transformations to input eigenvectors should not change the output of the neural network. This avoids the pathological scenario where different eigendecompositions of the same Laplacian produce different model predictions. Several prior works have explored sign and basis symmetries of eigenvectors. For example, Dwivedi & Bresson (2021); Kreuzer et al. (2021) randomly flip the sign of eigenvectors during training so that the resulting model is robust to sign transformation. Lim et al. (2023) instead design new neural architectures that are invariant to sign flipping (SignNet) or basis transformation (BasisNet). Although these basis invariant methods have the right symmetries, they do not yet account for the fact that two Laplacians that are similar but distinct may produce completely different eigenspaces. This brings us to another important consideration, that of stability. Small perturbations to the input Laplacian should only induce a limited change of final positional encodings. This “small change of Laplacians, small change of positional encodings” actually generalizes the previous concept of basis invariance and proposes a stronger requirement on the networks. But this stability (or continuity) requirement is a great challenge for graphs, because small perturbations can produce completely different eigenvectors if some eigenvalues are close (Wang et al. (2022a), Lemma 3.4). Since the neural networks process eigenvectors, not the Laplacian matrix itself, they run the risk of being highly discontinuous with respect to the input matrix, leading to an inability to generalize to new graph structures and a lack of robustness to any noise in the input graph’s adjacency. In contrast, stable models enjoy many benefits such as adversarial robustness (Cisse et al., 2017; Tsuzuku et al., 2018) and provable generalization (Sokolić et al., 2017). Unfortunately, existing positional encoding methods are not stable. Methods that only focus on sign invariance (Dwivedi & Bresson, 2021; Kreuzer et al., 2021; Lim et al., 2023), for instance, are not guaranteed to satisfy “same Laplacian, same positional encodings” if multiplicity of eigenvalues exists. Basis invariant methods such as BasisNet are unstable because they apply different neural networks to different eigensubspaces. 
In a high-level view, they perform a hard partitioning of eigenspaces and treat each chunk separately (see Appendix C for a detailed discussion). The discontinuous nature of partitioning makes them highly sensitive to perturbations of the Laplacian. The hard partition also requires fixed eigendecomposition thus unsuitable for graph-level tasks. On the other hand, Wang et al. (2022a) proposes a provably stable positional encoding. But, to achieve stability, it completely ignores the distinctness of each eigensubspaces and processes the merged eigenspaces homogeneously. Consequently, it loses expressive power and has, e.g., a subpar performance on molecular graph regression tasks (Rampášek et al., 2022a). **Main contributions.** In this work, we present Stable and Expressive Positional Encodings (SPE). The key insight is to perform a soft and learnable “partition” of eigensubspaces in a eigenvalue dependent way, hereby achieving both stability (from the soft partition) and expressivity (from dependency on both eigenvalues and eigenvectors). Specifically: - SPE is provably stable. We show that the network sensitivity w.r.t. the input Laplacian is determined by the gap between the \(d\)-th and \((d + 1)\)-th smallest eigenvalues if using the first \(d\) eigenvectors and eigenvalues. This implies our method is stable regardless of how the used \(d\) eigenvectors and eigenvalues change. - SPE can universally approximate basis invariant functions and is as least expressive as existing methods in distinguishing graphs. We also prove its capability in counting graph substructures. - We empirically illustrate that introducing more stability helps generalize better but weakens the expressive power. Besides, on the molecule graph prediction datasets ZINC and Alchemy, our method significantly outperforms other positional encoding methods. On DrugOOD (Ji et al., 2023), a ligand-based affinity prediction task with domain shifts, our method demonstrates a clear and constant improvement over other unstable positional encodings. All these validate the effectiveness of our stable and expressive method. 2 PRELIMINARIES Notation. We always use \( n \) for the number of nodes in a graph, \( d \leq n \) for the number of eigenvectors and eigenvalues chosen, and \( p \) for the dimension of the final positional encoding for each node. We use \( \| \cdot \| \) to denote the L2 norm of vectors and matrices, and \( \| \cdot \|_F \) for the Frobenius norm of matrices. Graphs and Laplacian Encodings. Denote an undirected graph with \( n \) nodes by \( G = (A, X) \), where \( A \in \mathbb{R}^{n \times n} \) is the adjacency matrix and \( X \in \mathbb{R}^{n \times p} \) is the node feature matrix. Let \( D = \text{diag}(\sum_{j=1}^{n} A_{i,j})_{i=1}^{n} \) be the diagonal degree matrix. The normalized Laplacian matrix of \( G \) is a positive semi-definite matrix defined by \( L = I - D^{-1/2}AD^{-1/2} \). Its eigenvalue decomposition \( L = V \text{diag}(\lambda)V^T \) returns eigenvectors \( V \) and eigenvalues \( \lambda \), which we denote by \( \text{EVD}(L) = (V, \lambda) \). In practice we may only use the smallest \( d \leq n \) eigenvalues and eigenvectors, so abusing notation slightly, we also denote the smallest \( d \) eigenvalues by \( \lambda \in \mathbb{R}^d \) and the corresponding \( d \) eigenvectors by \( V \in \mathbb{R}^{n \times d} \). 
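For concreteness, the following minimal NumPy sketch (the function name and dense-matrix setting are illustrative assumptions, not the paper's code) computes the normalized Laplacian of an adjacency matrix and returns the $d$ smallest eigenpairs $(V, \lambda)$ used as positional-encoding inputs. When eigenvalues are (nearly) repeated, the returned eigenvectors are only determined up to the basis symmetries discussed below.

```python
import numpy as np

def laplacian_pe_inputs(A: np.ndarray, d: int):
    """Return the d smallest eigenvalues and eigenvectors of L = I - D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam, V = np.linalg.eigh(L)      # eigenvalues in ascending order, orthonormal eigenvectors
    return V[:, :d], lam[:d]        # V: (n, d), lam: (d,)
```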
A Laplacian positional encoding is a function that produces node embeddings \( Z \in \mathbb{R}^{n \times p} \) given \( (V, \lambda) \in \mathbb{R}^{n \times d} \times \mathbb{R}^d \) as input.

Basis invariance. Given eigenvalues \( \lambda \in \mathbb{R}^d \), if eigenvalue \( \lambda_i \) has multiplicity \( d_i \), then the corresponding eigenvectors \( V_i \in \mathbb{R}^{n \times d_i} \) form a \( d_i \)-dimensional eigenspace. A vital symmetry of eigenvectors is the infinitely many choices of basis eigenvectors describing the same underlying eigenspace. Concretely, if \( V_i \) is a basis for the eigenspace of \( \lambda_i \), then \( V_i Q_i \) is, too, for any orthogonal matrix \( Q_i \in O(d_i) \). The symmetries of each eigenspace can be collected together to describe the overall symmetries of \( V \) in terms of the direct sum group \( O(\lambda) := \oplus_i O(d_i) = \{ \oplus_i Q_i \in \mathbb{R}^{\sum_i d_i \times \sum_i d_i} : Q_i \in O(d_i) \} \), i.e., block diagonal matrices with the \( i \)-th block belonging to \( O(d_i) \). Namely, for any \( Q \in O(\lambda) \), both \( (V, \lambda) \) and \( (VQ, \lambda) \) are eigendecompositions of the same underlying matrix. When designing a model \( f \) that takes eigenvectors as input, we want \( f \) to be basis invariant: \( f(VQ, \lambda) = f(V, \lambda) \) for any \( (V, \lambda) \in \mathbb{R}^{n \times d} \times \mathbb{R}^d \) and any \( Q \in O(\lambda) \).

Permutation equivariance. Let \( \Pi(n) = \{ P \in \{0, 1\}^{n \times n} : PP^T = I \} \) be the permutation matrices of \( n \) elements. A function \( f : \mathbb{R}^n \to \mathbb{R}^n \) is called permutation equivariant if, for any \( x \in \mathbb{R}^n \) and any permutation \( P \in \Pi(n) \), it satisfies \( f(Px) = Pf(x) \). Similarly, \( f : \mathbb{R}^{n \times n} \to \mathbb{R}^n \) is said to be permutation equivariant if it satisfies \( f(PXP^T) = Pf(X) \).

3 A PROVABLY STABLE AND EXPRESSIVE PE

In this section we introduce our model Stable and Expressive Positional Encodings (SPE). SPE is both stable and a maximally expressive basis invariant architecture for processing eigenvector data, such as Laplacian eigenvectors. We begin by formally defining the stability of a positional encoding. Then we describe our SPE model and analyze its stability. In the final two subsections we show that higher stability leads to improved out-of-distribution generalization, and that SPE is a universally expressive basis invariant architecture.

3.1 STABLE POSITIONAL ENCODINGS

Stability intuitively means that a small input perturbation yields a small change in the output. For eigenvector-based positional encodings, the perturbation is to the Laplacian matrix, and should result in a small change of node-level positional embeddings.

Definition 3.1 (PE Stability). A PE method \( \text{PE} : \mathbb{R}^{n \times d} \times \mathbb{R}^d \to \mathbb{R}^{n \times p} \) is called stable if there exist constants \( c, C > 0 \), such that for any Laplacians \( L, L' \),
\[ \| \text{PE}(\text{EVD}(L)) - P_* \text{PE}(\text{EVD}(L')) \|_F \leq C \| L - P_* L' P_*^T \|_F^c, \]
where \( P_* = \arg \min_{P \in \Pi(n)} \| L - P L' P^T \|_F \) is the permutation matrix matching the two Laplacians. It is worth noting that here we adopt a slightly generalized definition of the typical stability via Lipschitz continuity (\( c = 1 \)). This definition via Hölder continuity describes a more comprehensive stability behavior of PE methods, while retaining the essential idea of stability.
Remark 3.1 (Stability implies permutation equivariance). Note that a PE method is permutation equivariant if it is stable: simply let \( L = PLP^T \) for some \( P \in \Pi(n) \) and we obtain the desired permutation equivariance \( \text{PE}(\text{EVD}(PLP^T)) = P \cdot \text{PE}(\text{EVD}(L)) \). Stability is hard to achieve due to the instability of eigenvalue decomposition—a small perturbation of the Laplacian can produce completely different eigenvectors (Wang et al. (2022a), Lemma 3.4). Since positional encoding models process the eigenvectors (and eigenvalues), they naturally inherit this instability with respect to the input matrix. Indeed, as mentioned above, many existing positional encodings are not stable. The main issue is that they partition the eigenvectors by eigenvalue, which leads to instabilities. See Appendix C for a detailed discussion. ### 3.2 SPE: A POSITIONAL ENCODING WITH GUARANTEED STABILITY To achieve stability, the key insight is to avoid a hard partition of eigensubspaces. Simultaneously, we should fully utilize the information in the eigenvalues for strong expressive power. Therefore, we propose to do a “soft partitioning” of eigenspaces by leveraging eigenvalues. Instead of treating each eigensubspace independently, we apply a weighted sum of eigenvectors in an **eigenvalue dependent** way. If done carefully, this can ensure that as two distinct eigenvalues converge—these are exactly the degenerate points creating instability—the way their respective eigenvectors are processed becomes more similar. This means that if two eigenvectors are “swapped”, as happens at degenerate points, the model output does not change much. The resulting method is (illustrated in Figure 1): \[ \text{SPE} : \quad \text{SPE}(V, \lambda) = \rho(V \text{diag}(\phi_1(\lambda)) V^\top, V \text{diag}(\phi_2(\lambda)) V^\top, ..., V \text{diag}(\phi_m(\lambda)) V^\top), \] where the input is the $d$ smallest eigenvalues $\lambda \in \mathbb{R}^d$ and corresponding eigenvectors $V \in \mathbb{R}^{n \times d}$, $m$ is a hyper-parameter, and $\phi_\ell : \mathbb{R}^d \rightarrow \mathbb{R}^d$ and $\rho : \mathbb{R}^{n \times n \times m} \rightarrow \mathbb{R}^{n \times p}$ are always permutation equivariant neural networks. Here, permutation equivariance means $\phi_\ell(P \lambda) = P \phi_\ell(\lambda)$ for $P \in \Pi(d)$ and $\rho(P A P^\top) = P \rho(A)$ for any $P \in \Pi(n)$ and input $A$. There are many choices of permutation equivariant networks that can be used, such as element-wise MLPs or Deep Sets (Zaheer et al., 2017) for $\phi_\ell$, and graph neural networks for $\rho$. The permutation equivariance of $\phi_\ell$ and $\rho$ ensures that SPE is basis invariant. Note that in Eq. (2), the term $V \text{diag}(\phi_\ell(\lambda)) V^\top$ looks like a spectral graph convolution operator. But they are methodologically different: SPE uses $V \text{diag}(\phi_\ell(\lambda)) V^\top$ to construct positional encodings, which are not used as a convolution operation to process node attributes (say as $V \text{diag}(\phi_\ell(\lambda)) V^\top X$). Also, $\phi_\ell$’s are general permutation equivariant functions that may express the interactions between different eigenvalues instead of elementwise polynomials on each eigenvalue separately which are commonly adopted in spectral graph convolution. 
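To make Eq. (2) concrete, here is a minimal PyTorch sketch of the SPE forward pass. The elementwise MLPs for \( \phi_\ell \) match one instantiation mentioned above, whereas the DeepSets-style \( \rho \) is a simplified, permutation-equivariant stand-in for the GIN the authors actually use; the class name, hidden sizes, and default hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SPESketch(nn.Module):
    """Sketch of Eq. (2): SPE(V, lam) = rho(V diag(phi_1(lam)) V^T, ..., V diag(phi_m(lam)) V^T)."""

    def __init__(self, m: int = 4, hidden: int = 16, p: int = 8):
        super().__init__()
        # phi_l: elementwise MLPs on eigenvalues (permutation equivariant over the d eigenvalues).
        self.phi = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)) for _ in range(m)]
        )
        # rho: DeepSets-style map from (n, n, m) to (n, p); a stand-in for the GIN used in the paper.
        self.rho = nn.Sequential(nn.Linear(m, hidden), nn.ReLU(), nn.Linear(hidden, p))

    def forward(self, V: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
        # V: (n, d) eigenvectors, lam: (d,) eigenvalues.
        channels = []
        for phi in self.phi:
            w = phi(lam.unsqueeze(-1)).squeeze(-1)    # (d,) eigenvalue-dependent "soft partition" weights
            channels.append(V @ torch.diag(w) @ V.T)  # (n, n)
        A = torch.stack(channels, dim=-1)             # (n, n, m)
        return self.rho(A).sum(dim=1)                 # (n, p); summing one node axis keeps equivariance
```

Because each \( \phi_\ell \) acts on the eigenvalues elementwise and \( \rho \) treats the two node axes symmetrically, this sketch is basis invariant and permutation equivariant by construction, which is exactly what the stability analysis relies on.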
It is also worth noting that the term \( V \text{diag}(\phi_\ell(\lambda))V^\top \) reduces to a hard partition of the eigenvectors in the \( \ell \)-th eigensubspace if we let \( [\phi_\ell(\lambda)]_j = 1 \) when \( \lambda_j \) is the \( \ell \)-th smallest eigenvalue (and 0 otherwise). To obtain stability, what we need is to constrain \( \phi_\ell \) to continuous functions that perform a continuous "soft partition".

**Assumption 3.1.** The key assumptions for SPE are as follows:
- \( \phi_\ell \) and \( \rho \) are permutation equivariant (see the definitions after SPE Eq. (2)).
- \( \phi_\ell \) is \( K_\ell \)-Lipschitz continuous: for any \( \lambda, \lambda' \in \mathbb{R}^d \), \( \| \phi_\ell(\lambda) - \phi_\ell(\lambda') \|_F \leq K_\ell \| \lambda - \lambda' \| \).
- \( \rho \) is \( J \)-Lipschitz continuous: for any \( [A_1, A_2, ..., A_m] \in \mathbb{R}^{n \times n \times m} \) and \( [A'_1, A'_2, ..., A'_m] \in \mathbb{R}^{n \times n \times m} \), \( \| \rho(A_1, A_2, ..., A_m) - \rho(A'_1, A'_2, ..., A'_m) \|_F \leq J \sum_{\ell=1}^m \| A_\ell - A'_\ell \|_F \).

These two continuity assumptions generally hold by assuming the underlying networks have norm-bounded weights and continuous activation functions, such as ReLU. As a result, Assumption 3.1 is mild for most neural networks. Now we are ready to present our main theorem, which states that continuity of \( \phi_\ell \) and \( \rho \) leads to the desired stability.

**Theorem 3.1 (Stability of SPE).** Under Assumption 3.1, SPE is stable with respect to the input Laplacian: for Laplacians \( L, L' \),
\[
\| \mathrm{SPE}(\mathrm{EVD}(L)) - P_* \mathrm{SPE}(\mathrm{EVD}(L')) \|_F \leq (\alpha_1 + \alpha_2) d^{5/4} \sqrt{\| L - P_* L' P_*^\top \|_F} + \left( \alpha_2 \frac{d}{\gamma} + \alpha_3 \right) \| L - P_* L' P_*^\top \|_F,
\]
where the constants are \( \alpha_1 = 2J \sum_{\ell=1}^m K_\ell \), \( \alpha_2 = 4\sqrt{2} J \sum_{\ell=1}^m M_\ell \) and \( \alpha_3 = J \sum_{\ell=1}^m K_\ell \). Here \( M_\ell = \sup_{\lambda \in [0, 2]^d} \| \phi_\ell(\lambda) \| \) and again \( P_* = \arg \min_{P \in \Pi(n)} \| L - P L' P^\top \|_F \). The eigengap \( \gamma = \lambda_{d+1} - \lambda_d \) is the difference between the \( (d + 1) \)-th and \( d \)-th smallest eigenvalues, and \( \gamma = +\infty \) if \( d = n \).

Note that the stability of SPE is determined by both the Lipschitz constants \( J, K_\ell \) and the eigengap \( \gamma = \lambda_{d+1} - \lambda_d \). The dependence on \( \gamma \) comes from the fact that we only choose to use \( d \) eigenvectors/eigenvalues. It is inevitable as long as \( d < n \), and it disappears (\( \gamma = +\infty \)) if we let \( d = n \). This phenomenon is also observed in PEG (Wang et al. (2022a), Theorem 3.6).

### 3.3 FROM STABILITY TO OUT-OF-DISTRIBUTION GENERALIZATION

An important implication of stability is that one can characterize the domain generalization gap by the model's Lipschitz constant (Courty et al., 2017; Shen et al., 2018). Although our method satisfies Hölder continuity instead of strict Lipschitz continuity, a similar bound can still be obtained for domain generalization. We consider graph regression with domain shift: the training graphs are sampled from a source domain \( L \sim \mathbb{P}_S \), while the test graphs are sampled from a target domain \( L \sim \mathbb{P}_T \).
With ground-truth function \( f(L) \in \mathbb{R} \) and a prediction model \( h(L) \in \mathbb{R} \), we are interested in the gap between the in-distribution error \( \varepsilon_s(h) = \mathbb{E}_{L \sim \mathbb{P}_S} |h(L) - f(L)| \) and the out-of-distribution error \( \varepsilon_t(h) = \mathbb{E}_{L \sim \mathbb{P}_T} |h(L) - f(L)| \). The following result states that for a base GNN equipped with SPE, we can upper bound the generalization gap in terms of the Hölder constant of SPE, the Lipschitz constant of the base GNN, and the 1-Wasserstein distance between the source and target distributions.

**Proposition 3.1.** Assume Assumption 3.1 holds, and assume a base GNN model \( \text{GNN}(L, X) \in \mathbb{R} \) that is \( C \)-Lipschitz continuous, i.e.,
\[ | \text{GNN}(L, X) - \text{GNN}(L', X') | \leq C \min_{P \in \Pi(n)} \left( \| L - P L' P^\top \|_F + \| X - P X' \|_F \right), \]
for any Laplacians \( L, L' \) and node features \( X, X' \). Now let the GNN take positional encodings as node features \( X = \text{SPE}(\text{EVD}(L)) \) and let the resulting prediction model be \( h(L) = \text{GNN}(L, \text{SPE}(\text{EVD}(L))) \). Then the domain generalization gap \( \varepsilon_t(h) - \varepsilon_s(h) \) satisfies
\[ \varepsilon_t(h) - \varepsilon_s(h) \leq 2C\left(1 + \alpha_2 \frac{d}{\gamma} + \alpha_3\right) W(\mathbb{P}_S, \mathbb{P}_T) + 2Cd^{5/4}(\alpha_1 + \alpha_2) \sqrt{W(\mathbb{P}_S, \mathbb{P}_T)}, \]
where \( W(\mathbb{P}_S, \mathbb{P}_T) \) is the 1-Wasserstein distance\footnote{For graphs, \( W(\mathbb{P}_S, \mathbb{P}_T) := \inf_{\pi \in \Pi(\mathbb{P}_S, \mathbb{P}_T)} \int \min_{P \in \Pi(n)} \| L - P L' P^T \|_F \, \pi(L, L') \, dL \, dL' \). Here \( \Pi(\mathbb{P}_S, \mathbb{P}_T) \) is the set of product distributions whose marginal distributions are \( \mathbb{P}_S \) and \( \mathbb{P}_T \) respectively.}.

### 3.4 SPE IS A UNIVERSAL BASIS INVARIANT ARCHITECTURE

SPE is a basis invariant architecture, but is it universally powerful? The next result shows that SPE is universal, meaning that any continuous basis invariant function can be expressed in the form of SPE (Eq. 2). To state the result, recall that \( \text{SPE}(V, \lambda) = \rho(V \text{diag}(\phi(\lambda)) V^\top) \), where for brevity we express the multiple \( \phi_\ell \) channels by \( \phi = (\phi_1, \ldots, \phi_m) \).

**Proposition 3.2 (Basis Universality).** SPE can universally approximate any continuous basis invariant function. That is, for any continuous \( f \) for which \( f(V) = f(VQ) \) for any eigenvalues \( \lambda \) and any \( Q \in O(\lambda) \), there exist continuous \( \rho \) and \( \phi \) such that \( f(V) = \rho(V \text{diag}(\phi(\lambda)) V^\top) \).

Only one prior architecture, BasisNet (Lim et al., 2023), is known to have this property. However, unlike SPE, BasisNet does not have the critical stability property. Section 5 shows that this has significant empirical implications, with SPE considerably outperforming BasisNet across all evaluations. Furthermore, unlike prior analyses, we show that SPE can provably make effective use of eigenvalues: it can distinguish two input matrices with different eigenvalues using 2-layer MLP models for \( \rho \) and \( \phi \). In contrast, the original form of BasisNet does not use eigenvalues, though it is easy to incorporate them.

**Proposition 3.3.** Suppose that \( (V, \lambda) \) and \( (V', \lambda') \) are such that \( VQ = V' \) for some orthogonal matrix \( Q \in O(d) \) and \( \lambda \neq \lambda' \). Then there exist 2-layer MLPs for each \( \phi_\ell \) and a 2-layer MLP \( \rho \), each with ReLU activations, such that \( \text{SPE}(V, \lambda) \neq \text{SPE}(V', \lambda') \).
Finally, as a concrete example of the expressivity of SPE for graph representation learning, we show that SPE is able to count graph substructures under stability guarantee. **Proposition 3.4 (SPE can count cycles).** Assume Assumption 3.1 hold and let $\rho$ be 2-IGNs (Maron et al., 2019b). Then SPE can determine the number of 3, 4, 5 cycles of a graph. ### 4 RELATED WORKS **Expressive GNNs.** Since message-passing graph neural networks have been shown to be at most as powerful as the Weisfeiler-Leman test (Xu et al., 2019; Morris et al., 2019), there are many attempts to improve the expressivity of GNNs. We can classify them into three types: (1) high-order GNNs (Morris et al., 2020; Maron et al., 2019a;b); (2) subgraph GNNs (You et al., 2021; Zhang & Li, 2021; Zhao et al., 2022; Bevilacqua et al., 2022); (3) node feature augmentation (Li et al., 2020; Bouritsas et al., 2022; Barceló et al., 2021). In some senses, positional encoding can also be seen as an approach of node feature augmentation, which will be discussed below. **Positional Encoding for GNNs.** Positional encodings aim to provide additional global positional information for nodes in graphs to make them more distinguishable and add global structural information. It thus serves as a node feature augmentation to boost the expressive power of general graph neural networks (message-passing GNNs, spectral GNNs or graph transformers). Existing positional encoding methods can be categorized into: (1) Laplacian-eigenvector-based (Dwivedi & Bresson, 2021; Kreuzer et al., 2021; Maskey et al., 2022; Dwivedi et al., 2022; Wang et al., 2022b; Lim et al., 2023; Kim et al., 2022); (2) graph-distance-based (Ying et al., 2021; You et al., 2019; Li et al., 2020); and (3) random node features (Eliasof et al., 2023). A comprehensive discussion can be found in (Rampášek et al., 2022a). Most of these methods do not consider basis invariance and stability. Notably, Wang et al. (2022a) also studies the stability of Laplacian encodings. However, their method ignores eigenvalues and thus implements a stricter symmetry that is invariant to rotations of the entire eigenspace. As a result, the “over-stability” restricts its expressive power. Bo et al. (2023) propose similar operations as $V \text{diag}(\phi(\lambda)) V^\top$. However they focus on a specific architecture design ($\phi$ is transformer) for spectral convolution instead of positional encodings, and do not provide any stability analysis. **Stability and Generalization of GNNs.** The stability of neural networks is desirable as it implies better generalization (Sokolić et al., 2017; Neyshabur et al., 2017; 2018; Bartlett et al., 2017) and... transferability under domain shifts (Courty et al., 2017; Shen et al., 2018). In the context of GNNs, many works theoretically study the stability of various GNN models (Gama et al., 2020; Kenlay et al., 2020; 2021; Yehudai et al., 2020; Arghal et al., 2022; Xu et al., 2021; Chuang & Jegelka, 2022). Finally, some works try to characterize the generalization error of GNNs using VC dimension (Morris et al., 2023) or Rademacher complexity (Garg et al., 2020). 5 EXPERIMENTS In this section, we use numerical experiments to verify our theory and the empirical effectiveness of our SPE. Section 5.1 tests SPE’s strength as a graph positional encoder, and Section 5.2 tests the robustness of SPE to domain shifts, a key promise of stability. Section 5.3 further explores the empirical implications of stability in positional encodings. 
Our key finding is that there is a trade-off between generalization and expressive power, with less stable positional encodings fitting the training data better than their stable counterparts, but leading to worse test performance. Finally, Section 5.4 tests SPE on challenging graph substructure counting tasks that message passing graph neural networks cannot solve, and SPE significantly outperforms prior positional encoding methods. Datasets. We primarily use three datasets: ZINC (Dwivedi et al., 2023), Alchemy (Chen et al., 2019) and DrugOOD (Ji et al., 2023). ZINC and Alchemy are graph regression tasks for molecular property prediction. DrugOOD is an OOD benchmark for AI drug discovery, for which we choose ligand-based affinity prediction as our classification task (to determine if a drug is active). It considers three types of domains where distribution shifts arise: (1) Assay: which assay the data point belongs to; (2) Scaffold: the core structure of molecules; and (3) Size: molecule size. For each domain, the full dataset is divided into five partitions: the training set, the in-distribution (ID) validation/test sets, the out-of-distribution validation/test sets. These OOD partitions are expected to be distributed on the domains differently from ID partitions. Implementation. We implement SPE by: $\phi_i$ either being a DeepSet (Zaheer et al., 2017), element-wise MLPs or piece-wise cubic splines (see Appendix B.1 for detailed definition); and $\rho$ being GIN (Xu et al., 2019). Note that the input of $\rho$ is $n \times n \times m$ tensors, hence we first split it into $n$ many $n \times m$ tensors, and then independently give each $n \times m$ tensors as node features to an identical GIN. Finally, we sum over the first $n$ axes to output a permutation equivariant $n \times p$ tensor. Baselines. We compare SPE to other positional encoding methods including (1) No positional encodings, (2) SignNet and BasisNet (Lim et al., 2023), and (3) PEG (Wang et al., 2022a). In all cases we adopt GIN as the base GNN model. For a fair comparison, all models will have comparable budgets on the number of parameters. We also conducted an ablation study to test the effectiveness of our key component $\phi_\ell$, whose results are included in Appendix B. | Dataset | PE method | #PEs | #param | Test MAE | |---------|-----------|------|--------|----------| | ZINC | No PE | N/A | 575k | 0.1772±0.0040 | | | PEG | 8 | 512k | 0.1444±0.0076 | | | PEG | Full | 512k | 0.1878±0.0127 | | | SignNet | 8 | 631k | 0.1034±0.0056 | | | SignNet | Full | 662k | 0.0853±0.0026 | | | BasisNet | 8 | 442k | 0.1554±0.0068 | | | BasisNet | Full | 513k | 0.1555±0.0124 | | | SPE | 8 | 635k | 0.0736±0.0007 | | | SPE | Full | 650k | 0.0693±0.0040 | | Alchemy | No PE | N/A | 1387k | 0.112±0.001 | | | PEG | 8 | 1388k | 0.114±0.001 | | | SignNet | Full | 1668k | 0.113±0.002 | | | BasisNet | Full | 1469k | 0.110±0.001 | | | SPE | Full | 1785k | 0.108±0.001 | Table 2: AUROC results (5 random seeds) on DrugOOD. 
| Domain | PE Method | ID-Val (AUC) | ID-Test (AUC) | OOD-Val (AUC) | OOD-Test (AUC) | |----------|-----------|--------------|---------------|---------------|----------------| | Assay | No PE | 92.92±0.14 | 92.89±0.14 | 71.02±0.79 | 71.68±1.10 | | | PEG | 92.51±0.17 | 92.57±0.22 | 70.86±0.44 | 71.98±0.65 | | | SignNet | 92.26±0.21 | 92.43±0.27 | 70.16±0.56 | 72.27±0.97 | | | BasisNet | 88.96±1.35 | 89.42±1.18 | 71.19±0.72 | 71.66±0.05 | | | SPE | 92.84±0.20 | 92.94±0.15 | 71.26±0.62 | 72.53±0.66 | | Scaffold | No PE | 96.56±0.10 | 87.95±0.20 | 79.07±0.97 | 68.00±0.60 | | | PEG | 95.65±0.29 | 86.20±0.14 | 79.17±0.29 | 69.15±0.75 | | | SignNet | 95.48±0.34 | 86.73±0.56 | 77.81±0.70 | 66.43±1.06 | | | BasisNet | 85.80±3.75 | 78.44±2.45 | 73.36±1.44 | 66.32±5.68 | | | SPE | 96.32±0.28 | 88.12±0.41 | 80.03±0.58 | 69.64±0.49 | | Size | No PE | 93.78±0.12 | 93.60±0.27 | 82.76±0.04 | 66.04±0.70 | | | PEG | 92.46±0.35 | 92.67±0.23 | 82.12±0.49 | 66.01±0.10 | | | SignNet | 93.30±0.43 | 93.20±0.39 | 80.67±0.50 | 64.03±0.70 | | | BasisNet | 86.04±4.01 | 85.51±4.04 | 75.97±1.71 | 60.79±3.19 | | | SPE | 92.46±0.35 | 92.67±0.23 | 82.12±0.49 | 66.02±1.00 | 5.1 Small Molecule Property Prediction We use SPE to learn graph positional encodings on ZINC and Alchemy. We let $\phi_l$ be Deepsets using only the top 8 eigenvectors (PE-8), and be element-wise MLPs when using all eigenvectors (PE-full). As before, we take $\rho$ to be a GIN. Results. The test mean absolute errorx (MAEs) are shown in Table 4. On ZINC, SPE performs much better than other baselines, both when using just 8 eigenvectors (0.0736) and all eigenvectors (0.0693). On Alchemy, we always use all eigenvectors since the graph size only ranges from 8 to 12. For Alchemy we observe no significant improvement of any PE methods over base model w/o positional encodings. But SPE still achieves the least MAE among all these models. 5.2 Out-of-Distribution Generalization: Binding Affinity Prediction We study the relation between stability and out-of-distribution generalization using the DrugOOD dataset (Ji et al., 2023). We take $\phi_l$ to be element-wise MLPs and $\rho$ be GIN as usual. Results. The results are shown in Table 2. All models have comparable Area Under ROC (AUC) on the ID-Test set. However, there is a big difference in OOD-Test performance on Scaffold and Size domains, with the unstable methods (SignNet and BasisNet) performing much worse than stable methods (No PE, PEG, SPE). This emphasizes the importance of stability in domain generalization. Note that this phenomenon is less obvious in the Assay domain, which is because the Assay domain represents concept (labels) shift instead of covariant (graph features) shift. 5.3 Trade-offs between Stability, Generalization and Expressivity We hypothesize that stability has different effects on expressive power and generalization. Intuitively, very high stability means that outputs change very little as inputs change. Consequently, we expect highly stable models to have lower expressive power, but to generalize more reliably to new data. To test this behavior in practice we evaluate SPE on ZINC using 8 eigenvectors. We control the stability by tuning the complexity of underlying neural networks in the following two ways: 1. Directly control the Lipschitz constant of each MLP in SPE (in both $\phi_l$ and $\rho$) by normalizing weight matrices. 2. Let $\phi_x$ be a piecewise cubic spline. Increase the number of spline pieces from 1 to 6, with fewer splines corresponding to higher stability. 
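As a rough illustration of the first control above (not necessarily the authors' exact procedure), one can cap the spectral norm of every linear layer; for 1-Lipschitz activations such as ReLU, the product of these per-layer norms only upper-bounds the MLP's Lipschitz constant, since the exact constant of a multi-layer network is intractable to compute. The helper below is a hypothetical sketch in that spirit.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def cap_spectral_norms(mlp: nn.Module, c: float = 1.0) -> None:
    """Rescale each linear layer so its spectral norm is at most c.

    With 1-Lipschitz activations, the product of the layers' spectral norms
    upper-bounds the MLP's Lipschitz constant; this is a bound, not the exact value.
    """
    for layer in mlp.modules():
        if isinstance(layer, nn.Linear):
            sigma = torch.linalg.svdvals(layer.weight)[0]  # largest singular value
            if sigma > c:
                layer.weight.mul_(c / sigma)
```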
See Appendix B for full details. In both cases we use eight $\phi_x$ functions. We compute the summary statistics over different random seeds. As a measure of expressivity, we report the average training loss over the last 10 epochs on ZINC. As a measure of stability, we report the generalization gap (the difference between the test loss and the training loss) at the best validation epoch over ZINC. Figure 2: Training error, test error and generalization gap v.s. model complexity (stability). In the first row, we directly change the Lipschitz constant of individual MLPs; in the second row, we choose $\phi_\ell$ to be piecewise spline functions and change the number of pieces. **Results.** In Figure 2, we show the trend of training error, test error and generalization gap as Lipschitz constant of individual MLPs (first row) or the number of spline pieces (second row) changes. We can see that as model complexity increases (stability decreases), the training error gets reduced (more expressive power) while the generalization gap grows. This justifies the important practical role of model stability for the trade-off between expressive power and generalization. ### 5.4 Counting Graph Substructures To empirically study the expressive power of SPE, we follow prior works that generate random graphs (Zhao et al., 2022; Huang et al., 2023). The dataset contains Erdős-Rényi random graphs and other random regular graphs (see Appendix M.2.1 in Chen et al. (2020)) and is randomly split into train/valid/test splitting with ratio 3:2:5. and label nodes according to the number of substructures they are part of. We aggregate the node labels to obtain the number of substructures in the overall graph and view this as a graph regression task. We let $\phi_l$ be element-wise MLPs and $\rho$ be GIN. **Results.** Figure 3 shows that SPE significantly outperforms SignNet in counting 3,4,5 and 6-cycles. We emphasize that linear differences in log-MAE correspond to exponentially large differences in MAE. This result shows that SPE still achieves very high expressive power, whilst enjoying improved robustness to domain-shifts thanks to its stability (see Section 5.2). ### 6 Conclusion We present SPE, a learnable Laplacian positional encoding that is both provably stable and expressive. Extensive experiments show the effectiveness of SPE on molecular property prediction benchmarks, the high expressivity in learning graph substructures, and the robustness as well as generalization ability under domain shifts. In the future, this technique can be extended to link prediction or other tasks involving large graphs where stability is also crucial and desired. Finally, our analysis provides a general technique for graph eigenspace stability, not just limited to domains of positional encodings and graph learning. ACKNOWLEDGMENTS The authors would like to thank Derek Lim for a constructive discussion. Yinan Wang and Pan Li are partially supported by the NSF awards PHY-2117997, IIS-2239565. REFERENCES Raghu Arghal, Eric Lei, and Shirin Saeedi Bidokhti. Robust graph neural networks via probabilistic lipschitz constraints. In Learning for Dynamics and Control Conference, pp. 1073–1085. PMLR, 2022. Vikraman Arvind, Frank Fuhlbrück, Johannes Köbler, and Oleg Verbitsky. On weisfeiler-leman invariance: Subgraph counts and related graph properties. Journal of Computer and System Sciences, 113:42–59, 2020. Pablo Barceló, Floris Geerts, Juan Reutter, and Maksimilian Ryschkov. Graph neural networks with local graph parameters. 
Advances in Neural Information Processing Systems, 34:25280–25293, 2021. Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30, 2017. Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. In International Conference on Learning Representations, 2022. Deyu Bo, Chuan Shi, Lele Wang, and Renjie Liao. Specformer: Spectral graph neural networks meet transformers. In The Eleventh International Conference on Learning Representations, 2023. Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):657–668, 2022. Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017. Guangyong Chen, Pengfei Chen, Chang-Yu Hsieh, Chee-Kong Lee, Benben Liao, Renjie Liao, Weiwén Liu, Jiezhang Qiu, Qiming Sun, Jie Tang, Richard S. Zemel, and Shengyu Zhang. Alchemy: A quantum chemistry dataset for benchmarking ai models. CoRR, abs/1906.09427, 2019. Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? Advances in neural information processing systems, 33:10383–10395, 2020. C. Chuang and S. Jegelka. Tree mover’s distance: Bridging graph metrics and stability of graph neural networks. In Neural Information Processing Systems (NeurIPS), 2022. Fan RK Chung. Spectral graph theory, volume 92. American Mathematical Soc., 1997. Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. In International conference on machine learning, pp. 854–863. PMLR, 2017. Nicolas Courty, Rémi Flamary, Amaury Habrard, and Alain Rakotomamonjy. Joint distribution optimal transportation for domain adaptation. Advances in neural information processing systems, 30, 2017. David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. Advances in neural information processing systems, 28, 2015. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021.
oGNdBvymod
This could be due to my limited understanding of Bayesian NNs: for the image classification experiment, how do you compare the results obtained from sampling-based and optimization-based algorithms? Specifically, do you obtain certain point estimates of the NN's weights from MCMC samples, and compute metrics on the test dataset using the NN with the estimated weights?
Entropy-MCMC: Sampling from Flat Basins with Ease Bolian Li, Ruqi Zhang Department of Computer Science, Purdue University, USA {li4468,ruqiz}@purdue.edu Abstract Bayesian deep learning counts on the quality of posterior distribution estimation. However, the posterior of deep neural networks is highly multi-modal in nature, with local modes exhibiting varying generalization performance. Given a practical budget, targeting at the original posterior can lead to suboptimal performance, as some samples may become trapped in “bad” modes and suffer from overfitting. Leveraging the observation that “good” modes with low generalization error often reside in flat basins of the energy landscape, we propose to bias sampling on the posterior toward these flat regions. Specifically, we introduce an auxiliary guiding variable, the stationary distribution of which resembles a smoothed posterior free from sharp modes, to lead the MCMC sampler to flat basins. By integrating this guiding variable with the model parameter, we create a simple joint distribution that enables efficient sampling with minimal computational overhead. We prove the convergence of our method and further show that it converges faster than several existing flatness-aware methods in the strongly convex setting. Empirical results demonstrate that our method can successfully sample from flat basins of the posterior, and outperforms all compared baselines on multiple benchmarks including classification, calibration, and out-of-distribution detection. 1 Introduction The effectiveness of Bayesian neural networks relies heavily on the quality of posterior distribution estimation. However, achieving an accurate estimation of the full posterior is extremely difficult due to its high-dimensional and highly multi-modal nature (Zhang et al., 2020b; Izmailov et al., 2021). Moreover, the numerous modes in the energy landscape typically exhibit varying generalization performance. Flat modes often show superior accuracy and robustness, whereas sharp modes tend to have high generalization errors (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Bahri et al., 2022). This connection between the geometry of energy landscape and generalization has spurred many works in optimization, ranging from theoretical understanding (Neyshabur et al., 2017; Dinh et al., 2017; Dziugaite & Roy, 2018; Jiang et al., 2019a) to new optimization algorithms (Mobahi, 2016; Izmailov et al., 2018; Chaudhari et al., 2019; Foret et al., 2020). However, most of the existing Bayesian methods are not aware of the flatness in the energy landscape during posterior inference (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2015; Zhang et al., 2020b). Their inference strategies are usually energy-oriented and cannot distinguish between flat and sharp modes that have the same energy values. This limitation can significantly undermine their generalization performance, particularly in practical situations where capturing the full posterior is too costly. In light of this, we contend that prioritizing the capture of flat modes is essential when conducting posterior inference for Bayesian neural networks. This is advantageous for improved generalization as justified by previous works (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Bahri et al., 2022). 
It can further be rationalized from a Bayesian marginalization perspective: within the flat basin, each model configuration occupies a substantial volume and contributes significantly to a more precise estimation of the predictive distribution (Bishop, 2006). Moreover, existing flatness-aware methods often rely on a single solution to represent the entire flat basin (Chaudhari et al., 2019; Foret et al., 2020), ignoring the fact that the flat basin contains many high-performing models. Therefore, Bayesian marginalization can potentially offer significant improvements over flatness-aware optimization by sampling from the flat basins (Wilson, 2020; Huang et al., 2020). Prioritizing flat basins during posterior inference poses an additional challenge to Bayesian inference. Even for single point estimation, explicitly biasing toward the flat basins will introduce substantial computational overhead, inducing nested loops (Chaudhari et al., 2019; Dziugaite & Roy, 2018), doubled gradients calculation (Foret et al., 2020; Möllenhoff & Khan, 2022) or min-max problems (Foret et al., 2020). The efficiency problem needs to be addressed before any flatness-aware Bayesian method becomes practical for deep neural networks. In this paper, we propose an efficient sampling algorithm to explicitly prioritize flat basins in the energy landscape of deep neural networks. Specifically, we introduce an auxiliary guiding variable $\theta_a$ into the Markov chain to pull model parameters $\theta$ toward flat basins at each updating step (Fig. 1a). $\theta_a$ is sampled from a smoothed posterior distribution which eliminates sharp modes based on local entropy (Baldassi et al., 2016) (Fig. 1b). $\theta_a$ can also be viewed as being achieved by Gaussian convolution, a common technique in diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019). Our method enjoys a simple joint distribution of $\theta$ and $\theta_a$, and the computational overhead is similar to Stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011). Theoretically, we prove that our method is guaranteed to converge faster than some common flatness-aware methods (Chaudhari et al., 2019; Dziugaite & Roy, 2018) in the strongly convex setting. Empirically, we demonstrate that our method successfully finds flat basins efficiently across multiple tasks. Our main contributions are summarized as follows: - We propose Entropy-MCMC (EMCMC) for sampling from flat basins in the energy landscape of deep neural networks. EMCMC utilizes an auxiliary guiding variable and a simple joint distribution to efficiently steer the model toward flat basins. - We prove the convergence of EMCMC and further show that it converges faster than several existing flatness-aware methods in the strongly convex setting. - We provide extensive experimental results to demonstrate the advantages of EMCMC in sampling from flat basins. EMCMC outperforms all compared baselines on classification, calibration, and out-of-distribution detection with comparable overhead akin to SGLD. We release the code at https://github.com/lblaoke/EMCMC 2 RELATED WORKS Flatness-aware Optimization. The concept of flatness in the energy landscape was first studied by Hochreiter & Schmidhuber (1994), and its connection with generalization was then empirically discussed by Keskar et al. (2017); Dinh et al. (2017); Jiang et al. (2019b). To pursue flatness for better generalization, Baldassi et al. (2015) proposed the local entropy to measure the flatness of local modes, Baldassi et al. 
(2016) used “replicated” models to implement local entropy, Entropy-SGD (Chaudhari et al., 2019) introduced a nested chain to approximate the local entropy, SAM (Foret et al., 2020) developed a new optimizer to minimize the worst-case near the current model, bSAM (Möllenhoff & Khan, 2022) further improved SAM with a Bayes optimal convex lower bound, LPF (Bisla et al., 2022) introduced low-pass filter to actively search flat basins, and... SWA (Izmailov et al., 2018) found that averaging weights along the trajectory of SGD training can also find flatter modes. Our Entropy-MCMC follows the local entropy measurement and collects more than a single point to fully exploit the flat basins. For detailed comparisons with prior works considering local entropy, please refer to Appendix B. **MCMC on Deep Neural Networks.** Markov chain Monte Carlo is a class of general and practical sampling algorithms (Andrieu et al., 2003), which has been applied to infer Bayesian neural network posteriors (Neal, 2012). SGMC (Welling & Teh, 2011; Ma et al., 2015) methods use the mini-batching technique to adapt MCMC to deep neural networks. SGHMC (Chen et al., 2014) exploited the second-order Langevin dynamics to calibrate the stochastic estimates of HMC gradients. cSGMC (Zhang et al., 2020b) further improves sampling efficiency by leveraging a cyclical step size schedule. Symmetric Split HMC (Cobb & Jalaian, 2021) developed a way to apply HMC to deep neural networks without stochastic gradients. Our Entropy-MCMC builds upon the SGMC framework and is designed to favor the flat basins in the energy landscape during sampling. ### 3 PRELIMINARIES **Flatness-aware Optimization.** One common flatness-aware optimization technique is to use the concept of *local entropy*, which measures the geometric properties of the energy landscape (Baldassi et al., 2016; Chaudhari et al., 2019). The local entropy is computed by: $$F(\theta; \eta) = \log \int_\Theta \exp \left\{ -f(\theta') - \frac{1}{2\eta} \| \theta - \theta' \|^2 \right\} d\theta',$$ where $f(\cdot)$ is the loss function computed over the entire dataset and $\eta$ is a scalar. The local entropy of a point $\theta$ is determined by its neighbors weighted by their distances, which considers the volume of local modes. Previous optimization methods minimize $-F(\theta; \eta)$ to find the flat minimum. **SGMC.** Given a dataset $D$, a neural network with parameters $\theta \in \mathbb{R}^d$, the prior $p(\theta)$ and the likelihood $p(D|\theta)$, we can use Markov chain Monte Carlo (MCMC) to sample from the posterior $p(\theta|D) \propto \exp(-U(\theta))$, where the energy function is $U(\theta) = -\sum_{x \in D} \log p(x|\theta) - \log p(\theta)$. However, the computational cost for MCMC with large-scale data is too high to be practical. SGMCMC tackles this problem by stochastic gradient $\nabla U_\Xi$ based on a subset of data $\Xi \subseteq D$. We use Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011) in the paper as the backbone MCMC algorithm, which has the following updating rule: $$\theta \leftarrow \theta - \alpha \nabla_\theta U_\Xi(\theta) + \sqrt{2\alpha} \cdot \epsilon,$$ where $\alpha$ is the step size and $\epsilon$ is standard Gaussian noise. Our method can also be implemented by other SGMC methods. 
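As a minimal sketch, one SGLD update of Eq. (2) can be written as follows, assuming the stochastic gradient \( \nabla_\theta U_\Xi(\theta) \) has already been computed on the current mini-batch (the function name and flattened-parameter view are illustrative assumptions):

```python
import torch

def sgld_step(theta: torch.Tensor, grad_U: torch.Tensor, alpha: float) -> torch.Tensor:
    """One SGLD update: theta <- theta - alpha * grad_U(theta) + sqrt(2 * alpha) * eps."""
    eps = torch.randn_like(theta)
    return theta - alpha * grad_U + (2.0 * alpha) ** 0.5 * eps
```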
During testing, Bayesian marginalization is performed to make predictions based on the sample set collected during sampling $S = \{\theta_j\}_{j=1}^M$ and the predictive distribution is obtained by $p(y|x, D) = \int p(y|x, \theta)p(\theta|D)d\theta \approx \sum_{\theta \in S} p(y|x, \theta)$. ### 4 ENTROPY-MCMC In this section, we present the Entropy-MCMC algorithm. We introduce the guiding variable $\theta_a$ obtained from the local entropy in Section 4.1 and discuss the sampling strategy in Section 4.2. #### 4.1 FROM LOCAL ENTROPY TO FLAT POSTERIOR While flat basins in the energy landscape are shown to be of good generalization (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Bahri et al., 2022), finding such regions is still challenging due to the highly multi-modal nature of the DNN energy landscape. The updating direction of the model typically needs extra force to keep the sampler away from sharp modes (Chaudhari et al., 2019; Foret et al., 2020). To bias sampling to flat basins, we look into the local entropy (Eq. 1), which can eliminate the sharp modes in the energy landscape (Chaudhari et al., 2019). We begin by the original posterior distribution $p(\theta|D) \propto \exp(-f(\theta)) = \exp\{\log p(D|\theta) + \log p(\theta)\}$, which contains both sharp and flat modes. By replacing the original loss function with local entropy, we obtain a smoothed posterior distribution in terms of a new variable $\theta_a$: $$p(\theta_a | D) \propto \exp F(\theta_a; \eta) = \int_\Theta \exp \left\{ -f(\theta) - \frac{1}{2\eta} \| \theta - \theta_a \|^2 \right\} d\theta.$$ (3) The effect of local entropy on this new posterior is visualized in Fig. 1B. The new posterior measures both the depth and flatness of the mode in $p(\theta | D)$ by considering surrounding energy values. Thereby, $p(\theta_a | D)$ is expected to primarily capture flat modes in the energy landscape, which can be used as the desired external force to revise the updating directions of the model parameter $\theta$. Moreover, the smoothed posterior $p(\theta | D)$ can be regarded as being obtained through Gaussian convolution, a common approach in diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019). We also show the effect of hyper-parameter $\eta$ on the flatness of $p(\theta | D)$ in Appendix A.4. However, the complex integral in Eq. 3 requires marginalization on the model parameter $\theta$, which poses a non-trivial challenge. Previous works using local entropy usually adopt an inner Markov chain for approximation (Chaudhari et al., 2019; Dziugaite & Roy, 2018), which sacrifices the accuracy in local entropy computation and induces computationally expensive nested loops in training. We tackle this challenge in a simple yet principled manner, eliminating the need for nested loops or approximation. This is achieved by coupling $\theta \sim p(\theta | D)$ and $\theta_a \sim p(\theta_a | D)$ into a joint posterior distribution, which enjoys a simple form, as discussed in Lemma 1. **Lemma 1.** Assume $\tilde{\theta} = [\theta^T, \theta_a^T]^T \in \mathbb{R}^{2d}$ and $\tilde{\theta}$ has the following distribution: $$p(\tilde{\theta} | D) = p(\theta, \theta_a | D) \propto \exp \left\{ -f(\theta) - \frac{1}{2\eta} \| \theta - \theta_a \|^2 \right\}.$$ (4) Then the marginal distributions of $\theta$ and $\theta_a$ are the original posterior $p(\theta | D)$ and $p(\theta_a | D)$ (Eq. 3). Further, the density $p(\tilde{\theta} | D)$ integrates to a finite quantity and thus it is mathematically well-defined. 
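As a reference for the test-time procedure described at the beginning of this section, the Bayesian marginalization over the collected sample set $S$ can be sketched as below; `model`, `sample_states`, and the input batch `x` are placeholder names of ours, and each element of `sample_states` is assumed to be a saved state dict for one collected sample.

```python
import torch

@torch.no_grad()
def bma_predict(model, sample_states, x):
    """Approximate p(y | x, D) by averaging predictive distributions over the samples in S."""
    probs = []
    for state in sample_states:                  # one state_dict per collected sample theta_j
        model.load_state_dict(state)
        model.eval()
        probs.append(torch.softmax(model(x), dim=-1))
    return torch.stack(probs).mean(dim=0)        # Monte Carlo average over |S| posterior samples
```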
This joint posterior offers three key advantages: i) by coupling $\theta$ and $\theta_a$, we avoid the intricate integral computation, and thus remove the requirement of expensive nested training loops and mitigate the MC approximation error; ii) the joint posterior turns out to be surprisingly simple, making it easy to sample from both empirically and theoretically (details discussed in Sections 4.2 and 5); iii) after coupling, $\theta_a$ provides additional paths for $\theta$ to traverse, making $\theta$ reach flat modes efficiently. ### 4.2 Sampling from Flat Basins We discuss how to sample from the joint posterior distribution (Eq. 4) in this section. We adopt SGLD (Welling & Teh, 2011), a simple stochastic gradient MCMC algorithm that is suitable for deep neural networks, as the backbone of EMCMC sampling. More advanced MCMC algorithms can also be combined with our method. The energy function of the joint parameter variable $\tilde{\theta}$ is $U(\tilde{\theta}) = f(\theta) + \frac{1}{2\eta} \| \theta - \theta_a \|^2$, and thus its gradients is given by: $$\nabla_{\tilde{\theta}} U(\tilde{\theta}) = \begin{bmatrix} \nabla_\theta U(\tilde{\theta}) \\ \nabla_{\theta_a} U(\tilde{\theta}) \end{bmatrix} = \begin{bmatrix} \nabla_\theta f(\theta) + \frac{1}{\eta} (\theta - \theta_a) \\ \frac{1}{\eta} (\theta_a - \theta) \end{bmatrix}.$$ (5) For the model parameter $\theta$, the original gradient direction $\nabla_\theta f(\theta)$ is revised by $\frac{1}{\eta} (\theta - \theta_a)$ to get the flatness-aware gradient direction $\nabla_\theta U(\tilde{\theta})$, as visualized in Fig. 1a. Importantly, the practical implementation does not require computing $\nabla_{\theta_a} U(\tilde{\theta})$ through back-propagation, as we can utilize the analytical expression presented in Eq. 5. Therefore, despite $\tilde{\theta}$ being in a $2d$ dimension, our cost of gradient computation is essentially the same as $d$-dimensional models (e.g., standard SGLD). With the form of the gradients in Eq. 5, the training procedure of EMCMC is straightforward using the SGLD updating rule in Eq. 2. The details are summarized in Algorithm 1. At testing stage, the collected samples $S$ are used to approximate the predictive distribution $p(y | x, D) \approx \sum_{\theta_a \in S} p(y | x, \theta_a)$. Our choice of sampling from the joint posterior distribution using SGLD, rather than a Gibbs-like approach (Gelfand, 2000), is motivated by SGLD’s ability to simultaneously update both $\theta$ and $\theta_a$, which is more efficient than alternative updating (see Appendix A for a detailed explanation). --- 1 Although we refer to $p(\tilde{\theta} | D)$ as a joint “posterior” to denote its dependency on the dataset, it is obtained through coupling rather than Bayes’ rule. Thus, it does not have an explicit prior distribution. For the sample set \( S \), we collect both \( \theta \) and \( \theta_a \) after the burn-in period in order to obtain more high-quality and diverse samples in a finite time budget (see Appendix D.2 for the evidences that \( \theta \) and \( \theta_a \) find the same mode and Appendix E.3 for performance justification). In summary, thanks to EMCMC’s simple joint distribution, conducting sampling in EMCMC is straightforward, and its computational cost is comparable to that of standard SGLD. Despite its algorithmic simplicity and computational efficiency, EMCMC is guaranteed to bias sampling to flat basins and obtain samples with enhanced generalization and robustness. 
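A minimal sketch of one EMCMC update following Eq. 5 is given below, assuming `theta` and `theta_a` are matching lists of parameter tensors and `neg_log_post` evaluates a mini-batch estimate of $f(\theta)$ (all names are ours). As noted above, the gradient with respect to $\theta_a$ uses the closed form in Eq. 5, so no extra back-propagation is required.

```python
import torch

def emcmc_step(theta, theta_a, neg_log_post, step_size, eta):
    """One EMCMC update of (theta, theta_a) via SGLD on the joint energy U(theta, theta_a)."""
    loss = neg_log_post(theta)                       # stochastic estimate of f(theta)
    grads = torch.autograd.grad(loss, theta)         # grad_theta f(theta)
    with torch.no_grad():
        for p, pa, g in zip(theta, theta_a, grads):
            g_p = g + (p - pa) / eta                 # flatness-aware gradient for theta (Eq. 5)
            g_pa = (pa - p) / eta                    # closed-form gradient for the guiding variable
            p.add_(-step_size * g_p + (2.0 * step_size) ** 0.5 * torch.randn_like(p))
            pa.add_(-step_size * g_pa + (2.0 * step_size) ** 0.5 * torch.randn_like(pa))
```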
**Algorithm 1: Entropy-MCMC**

**Inputs:** The model parameter \( \theta \in \Theta \), guiding variable \( \theta_a \in \Theta \), and dataset \( D = \{(x_i, y_i)\}_{i=1}^N \);
**Results:** Collected samples \( S \subset \Theta \);
\( \theta_a \leftarrow \theta, S \leftarrow \emptyset; \) /* Initialize */
**for** each iteration **do**
  \( \Xi \leftarrow \) a mini-batch sampled from \( D \);
  \( U_\Xi \leftarrow -\log p(\Xi|\theta) - \log p(\theta) + \frac{1}{2\eta} \| \theta - \theta_a \|^2; \)
  \( \theta \leftarrow \theta - \alpha \nabla_\theta U_\Xi + \sqrt{2\alpha} \cdot \epsilon_1; \) /* \( \epsilon_1, \epsilon_2 \sim N(0, I) \) */
  \( \theta_a \leftarrow \theta_a - \alpha \nabla_{\theta_a} U_\Xi + \sqrt{2\alpha} \cdot \epsilon_2; \)
  **if** after burn-in **then** \( S \leftarrow S \cup \{\theta, \theta_a\}; \) /* Collect samples */
**end**

5 THEORETICAL ANALYSIS

In this section, we provide a theoretical analysis of the convergence rate of Entropy-MCMC and compare it with previous local-entropy-based methods, including Entropy-SGD (Chaudhari et al., 2019) and Entropy-SGLD (Dziugaite & Roy, 2018) (used as a theoretical tool in the literature rather than a practical algorithm). We leverage the 2-Wasserstein distance bounds of SGLD, which assume the target distribution to be smooth and strongly log-concave (Dalalyan & Karagulyan, 2019). While the target distribution in this case is unimodal, the analysis still reveals the superior convergence rate of EMCMC compared with existing flatness-aware methods. We leave the theoretical analysis of non-log-concave distributions for future work. Specifically, we make the following assumptions on the loss function \( f(\cdot) \) and the stochastic gradients:

**Assumption 1.** The loss function \( f(\theta) \) in the original posterior distribution \( \pi = p(\theta|D) \propto \exp(-f(\theta)) \) is \( M \)-smooth and \( m \)-strongly convex (i.e., \( mI \preceq \nabla^2 f(\theta') \preceq MI \)).

**Assumption 2.** The variance of stochastic gradients is bounded by \( \mathbb{E}[\|\nabla f(\theta) - \nabla f_\Xi(\theta)\|^2] \leq \sigma^2 \) for some constant \( \sigma > 0 \).

To establish the convergence analysis for EMCMC, we first observe that the smoothness and convexity of the joint posterior distribution \( \pi_{\text{joint}}(\theta, \theta_a) = p(\theta, \theta_a|D) \) in Eq. 4 are the same as those of the original posterior \( p(\theta|D) \), which is formally stated in Lemma 2.

**Lemma 2.** If Assumption 1 holds and \( m \leq 1/\eta \leq M \), then the energy function in the joint posterior distribution \( \pi_{\text{joint}}(\theta, \theta_a) = p(\theta, \theta_a|D) \) is also \( M \)-smooth and \( m \)-strongly convex.

With the convergence bound of SGLD established by Dalalyan & Karagulyan (2019), we derive the convergence bound for EMCMC in Theorem 1.

**Theorem 1.** Under Assumptions 1 and 2, let \( \mu_0 \) be the initial distribution and \( \mu_K \) be the distribution obtained by EMCMC after \( K \) iterations. If \( m \leq 1/\eta \leq M \) and the step size \( \alpha \leq 2/(m + M) \), the 2-Wasserstein distance between \( \mu_K \) and \( \pi_{\text{joint}} \) has the following upper bound:

\[ W_2(\mu_K, \pi_{\text{joint}}) \leq (1 - \alpha m)^K \cdot W_2(\mu_0, \pi_{\text{joint}}) + 1.65(M/m)(2\alpha d)^{1/2} + \frac{\sigma^2(2\alpha d)^{1/2}}{1.65M + \sigma \sqrt{m}}. \]

---

2 Assumptions 1 & 2 are only needed for the convergence analysis. Our method and experiments are not restricted to strong convexity.
Comparing Theorem 1 with the convergence bound of SGLD obtained by Dalalyan & Karagulyan (2019), the only difference is the doubling of the dimension, from $d$ to $2d$. Theorem 1 implies that the convergence rate of EMCMC will have at most a minor slowdown by a constant factor compared to SGLD while ensuring sampling from flat basins. In contrast, previous local-entropy-based methods often substantially slow down the convergence to bias toward flat basins. For example, consider Entropy-SGD (Chaudhari et al., 2019), which minimizes a flattened loss function $f_{\text{flat}}(\theta) = -F(\theta; \eta) = -\log \int_\Theta \exp \left\{ -f(\theta') - \frac{1}{2\eta} \|\theta - \theta'\|^2 \right\} d\theta'$. We discuss the convergence bound of Entropy-SGD in Theorem 2, which shows how the presence of the integral (and the nested Markov chain induced by it) slows down the convergence.

**Theorem 2.** Consider running Entropy-SGD to minimize the flattened loss function $f_{\text{flat}}(\theta)$ under Assumptions 1 and 2. Assume the inner Markov chain runs for $L$ iterations and the 2-Wasserstein distance between the initial and target distributions is always bounded by $\kappa$. Let $f^*_\text{flat}$ represent the global minimum value of $f_{\text{flat}}(\theta)$ and $E_t := \mathbb{E}[f_{\text{flat}}(\theta_t) - f^*_\text{flat}]$. If the step size $\alpha \leq 2/(m + M)$, then we have the following upper bound:

$$E_K \leq \left(1 - \frac{\alpha m}{1 + \eta M}\right)^K \cdot E_0 + \frac{A(1 + \eta M)}{2m},$$

where $A^2 = (1 - \alpha m)^L \cdot \kappa + 1.65 \left(\frac{M+1/\eta}{m+1/\eta}\right) (\alpha d)^{1/2} + \frac{\sigma^2(\alpha d)^{1/2}}{1.65(M+1/\eta)+\sigma\sqrt{m+1/\eta}}$.

Another example is Entropy-SGLD (Dziugaite & Roy, 2018), a theoretical tool established to analyze Entropy-SGD. Its main distinction from Entropy-SGD is the SGLD updating instead of SGD updating in the outer loop. The convergence bound for Entropy-SGLD is established in Theorem 3.

**Theorem 3.** Consider running Entropy-SGLD to sample from $\pi_{\text{flat}}(\theta) \propto \exp F(\theta; \eta)$ under Assumptions 1 and 2. Assume the inner Markov chain runs for $L$ iterations and the 2-Wasserstein distance between initial and target distributions is always bounded by $\kappa$. Let $\nu_0$ be the initial distribution and $\nu_K$ be the distribution obtained by Entropy-SGLD after $K$ iterations. If the step size $\alpha \leq 2/(m + M)$, then:

$$W_2(\nu_K, \pi_{\text{flat}}) \leq (1 - \alpha m)^K \cdot W_2(\nu_0, \pi_{\text{flat}}) + 1.65 \left(\frac{1 + \eta M}{1 + \eta m}\right) (M/m)(\alpha d)^{1/2} + \frac{A(1 + \eta M)}{m},$$

where $A^2 = (1 - \alpha m)^L \cdot \kappa + 1.65 \left(\frac{M+1/\eta}{m+1/\eta}\right) (\alpha d)^{1/2} + \frac{\sigma^2(\alpha d)^{1/2}}{1.65(M+1/\eta)+\sigma\sqrt{m+1/\eta}}$.

The complete proofs of the theorems are in Appendix C. Comparing Theorems 1, 2, and 3, we observe that the convergence rates of the Entropy-SGD and Entropy-SGLD algorithms are significantly hindered due to the presence of the nested Markov chains, which induces a large and complicated error term $A$. Since $\sigma$ and $\alpha$ are typically very small, the third term in Theorem 1 will be much smaller than both the third term in Theorem 3 and the second term in Theorem 2. To summarize, the theoretical analysis provides rigorous guarantees on the convergence of Entropy-MCMC and further demonstrates the superior convergence rate of Entropy-MCMC compared to previous methods in the strongly convex setting.
### 6 EXPERIMENTS We conduct comprehensive experiments to show the superiority of EMCMC. Section 6.1 and 6.3 demonstrate that EMCMC can successfully sample from flat basins. Section 6.2 verifies the fast convergence of EMCMC. Section 6.4 and 6.5 demonstrate the outstanding performance of EMCMC on multiple benchmarks. Following Zhang et al. (2020b), we adopt a cyclical step size schedule for all sampling methods. For more implementation details, please refer to Appendix E. #### 6.1 SYNTHETIC EXAMPLES To demonstrate EMCMC’s capability to sample from flat basins, we construct a two-mode energy landscape $\frac{1}{2}\mathcal{N}([-2, -1]^T, 0.5I) + \frac{1}{2}\mathcal{N}([2, 1]^T, I)$ containing a sharp and a flat mode. To make the Figure 2: Sampling trajectories on a synthetic energy landscape with sharp (lower left) and flat (top right) modes. The initial point is located at the ridge of two modes. EMCMC successfully biases toward the flat mode whereas SGD and SGLD are trapped in the sharp mode. Figure 3: Logistic regression on MNIST in terms of training NLL and testing accuracy (repeated 10 times). EMCMC converges faster than others, which is consistent with our theoretical analysis. case challenging, we set the initial point at \((-0.2, -0.2)\), the ridge of the two modes\(^3\), which has no strong preference for either mode. The settings for this experiment are: \(\eta = 0.5, \alpha = 5 \times 10^{-3}, 1000\) iterations, and collecting samples per 10 iterations. Fig. 2 shows that the proposed EMCMC finds the flat basin while SGD and SGLD still prefer the sharp mode due to the slightly larger gradients coming from the sharp mode. From Fig. 2(c)&(d), we see that the samples of \(\theta_\alpha\) are always around the flat mode, showing its ability to eliminate the sharp mode. Although \(\theta\) visits the sharp mode in the first few iterations, it subsequently inclines toward the flat mode, illustrating the influence of gradient revision by the guiding variable \(\theta_\alpha\). If choosing an appropriate \(\eta\), EMCMC will find the flat mode no matter how it is initialized. This is due to the stationary distribution of \(\theta_\alpha\), which is flattened and removes the sharp mode. Through the interaction term, \(\theta_\alpha\) will encourage \(\theta\) to the flat mode. We also show the results for different initialization in Appendix D.1 6.2 Logistic Regression To verify the theoretical results on convergence rates in Section 5, we conduct logistic regression on MNIST [LeCun, 1998] to compare EMCMC with Entropy-SGD [Chaudhari et al., 2019], SGLD [Welling & Teh, 2011] and Entropy-SGLD [Dziugaite & Roy, 2018]. We follow MacLaurin & Adams (2015) and Zhang et al. (2020a) to use a subset containing 7s and 9s and the resulting posterior is strongly log-concave, satisfying the assumptions in Section 5. Fig. 3 shows that EMCMC converges faster than Entropy-SG(L)D, demonstrating the advantage of using a simple joint distribution without the need for nested loops or MC approximation, which verifies Theorems 1, 2 & 3. Besides, while EMCMC and SGLD share similar convergence rates, EMCMC achieves better generalization as shown by its higher test accuracy. This suggests that EMCMC is potentially beneficial in unimodal distributions under limited budgets due to finding samples with high volumes. 6.3 Flatness Analysis on Deep Neural Networks We perform flatness analysis with ResNet18 [He et al., 2016] on CIFAR100 [Krizhevsky, 2009]. 
We use the last sample of SGD, SGLD and EMCMC (averaged result from $\theta$ and $\theta_a$), respectively, and each experiment is repeated 3 times to report the averaged scores.

$^3$A set of local-maximum points with zero gradients in all directions.

Figure 4: Eigenspectrum of Hessian matrices of ResNet18 on CIFAR100. x-axis: eigenvalue, y-axis: frequency. A nearly all-zero eigenspectrum indicates a local mode that is flat in all directions. EMCMC successfully finds such flat modes with significantly smaller eigenvalues.

Figure 5: Parameter space interpolation of ResNet18 on CIFAR100. Exploring the neighborhood of local modes from $\theta$ to (a)-(b): a random direction in the parameter space, and (c): $\theta_a$. (a) and (b) show that EMCMC has the lowest and the most flat NLL and error curves. (c) shows that $\theta$ and $\theta_a$ converge to the same flat mode while maintaining diversity.

**Eigenspectrum of Hessian.** The Hessian matrix of the model parameter measures the second-order gradients of a local mode on the energy landscape. Smaller eigenvalues of the Hessian indicate a flatter local geometry (Chaudhari et al., 2019; Foret et al., 2020). Since computing the exact Hessian of deep neural networks is extremely costly due to the dimensionality (Luo et al., 2023), we use the diagonal Fisher information matrix (Wasserman, 2004) to approximate its eigenspectrum:

$$[\lambda_1, ..., \lambda_d]^T \approx \text{diag}(I(\theta)) = \mathbb{E}[(\nabla U - \mathbb{E}\nabla U)^2],$$

where $\lambda_1, ..., \lambda_d$ are eigenvalues of the Hessian. Fig. 4 shows the eigenspectra of local modes discovered by different algorithms. The eigenvalues of EMCMC are much smaller compared with SGD and SGLD, indicating that the local geometry of EMCMC samples is flatter. The eigenspectrum comparison verifies the effectiveness of EMCMC in finding and sampling from flat basins.

**Parameter Space Interpolation.** Another way to measure the flatness of local modes is to directly interpolate their neighborhood in the parameter space (Izmailov et al., 2018). Local modes located in flat basins are expected to have larger widths and better generalization performance (Keskar et al., 2017; Chaudhari et al., 2019). The interpolation begins at $\theta$ and ends at $\theta_\epsilon$ (a random point near $\theta$, or $\theta_\epsilon = \theta_a$). The interpolated point $\theta_\delta$ is computed by:

$$\theta_\delta = (1 - \delta/\|\theta - \theta_\epsilon\|)\theta + (\delta/\|\theta - \theta_\epsilon\|)\theta_\epsilon,$$

where $\delta$ is the Euclidean distance from $\theta$ to $\theta_\delta$. Fig. 5a and 5b show the training NLL and testing error, respectively. The neighborhood of EMCMC maintains consistently lower NLL and errors compared with SGD and SGLD, demonstrating that EMCMC samples are from flatter modes. Furthermore, Fig. 5c visualizes the interpolation between $\theta$ and $\theta_a$, revealing that both variables essentially converge to the same flat mode while maintaining diversity. This justifies the benefit of collecting both of them as samples to obtain a diverse set of high-performing samples.

6.4 IMAGE CLASSIFICATION

We conduct classification experiments on CIFAR (Krizhevsky, 2009), corrupted CIFAR (Hendrycks & Dietterich, 2019) and ImageNet (Deng et al., 2009), to compare EMCMC with both flatness-aware optimization methods (Entropy-SGD (Chaudhari et al., 2019), SAM (Foret et al., 2020) and bSAM (Möllenhoff & Khan, 2022)) and MCMC methods (SGLD (Welling & Teh, 2011) and Entropy-SGLD (Dziugaite & Roy, 2018)).
We use ResNet18 and ResNet50 (He et al., 2016) for CIFAR and ImageNet, respectively. All sampling algorithms collect a total of 16 samples for Bayesian marginalization, and all entries are repeated 3 times to report the mean ± std. Table 1 shows the results on the 3 datasets, in which EMCMC significantly outperforms all baselines. The classification results strongly suggest that by sampling from flat basins, Bayesian neural networks can achieve outstanding performance, and EMCMC is an effective and efficient method to do so. The results for corrupted CIFAR (Hendrycks & Dietterich, 2019) are shown in Table 1b to demonstrate the robustness of EMCMC against multiple types of noise. The results are averaged over all noise types, and the severity level refers to the strength of noise added to the original data. EMCMC consistently outperforms all compared baselines across all severity levels, indicating that samples from flat basins are more robust to noise. The results for individual noise types are shown in Appendix D.3.

Table 1: Classification results on (a) CIFAR10/100, (b) corrupted CIFAR and (c) ImageNet, measured by NLL and accuracy. EMCMC outperforms all compared baselines.

(a) CIFAR10 and CIFAR100

| Method | CIFAR10 ACC (%) ↑ | CIFAR10 NLL ↓ | CIFAR100 ACC (%) ↑ | CIFAR100 NLL ↓ |
|--------------|-------------------|---------------|--------------------|----------------|
| SGD | 94.87 ± 0.04 | 0.205 ± 0.015 | 76.49 ± 0.27 | 0.935 ± 0.021 |
| Entropy-SGD | 95.11 ± 0.09 | 0.184 ± 0.020 | 77.45 ± 0.03 | 0.895 ± 0.009 |
| SAM | 95.25 ± 0.12 | 0.166 ± 0.005 | 78.41 ± 0.22 | 0.876 ± 0.007 |
| bSAM | 95.53 ± 0.09 | 0.163 ± 0.002 | 78.92 ± 0.25 | 0.870 ± 0.005 |
| SGLD | 95.47 ± 0.11 | 0.167 ± 0.011 | 78.79 ± 0.35 | 0.854 ± 0.031 |
| Entropy-SGLD | 94.46 ± 0.24 | 0.194 ± 0.020 | 77.98 ± 0.39 | 0.897 ± 0.027 |
| EMCMC | **95.69 ± 0.06** | **0.162 ± 0.002** | **79.16 ± 0.07** | **0.840 ± 0.004** |

(b) Corrupted CIFAR (ACC (%) ↑)

| Severity | 1 | 2 | 3 | 4 | 5 |
|----------|-----|-----|-----|-----|-----|
| SGD | 88.43 | 82.43 | 76.20 | 67.93 | 55.81 |
| SGLD | 88.61 | 82.46 | 76.49 | 69.19 | 56.98 |
| EMCMC | **88.87** | **83.27** | **77.44** | **70.31** | **58.17** |

(c) ImageNet

| Metric | NLL ↓ | Top-1 (%) ↑ | Top-5 (%) ↑ |
|--------|-------|-------------|-------------|
| SGD | 0.960 | 76.046 | 92.776 |
| SGLD | 0.921 | 76.676 | 93.174 |
| EMCMC | **0.895** | **77.096** | **93.424** |

Table 2: OOD detection on CIFAR-SVHN. The predictive uncertainty quantified by EMCMC is the best among the compared algorithms.

| Method | CIFAR10-SVHN AUROC (%) ↑ | CIFAR10-SVHN AUPR (%) ↑ | CIFAR100-SVHN AUROC (%) ↑ | CIFAR100-SVHN AUPR (%) ↑ |
|--------------|--------------|---------------|--------------|---------------|
| SGD | **98.30** | **99.24** | 71.96 | 84.08 |
| Entropy-SGD | **98.71** | **99.37** | 79.15 | 86.92 |
| SAM | 94.23 | 95.67 | 74.56 | 84.61 |
| SGLD | 97.66 | 98.64 | 72.51 | 83.35 |
| Entropy-SGLD | 90.07 | 91.80 | 71.83 | 82.89 |
| EMCMC | **98.15** | **99.04** | **81.14** | **87.18** |

6.5 Uncertainty and OOD Detection

To illustrate how predictive uncertainty estimation benefits from flat basins, we evaluate EMCMC on out-of-distribution (OOD) detection. We train each model on CIFAR and quantify uncertainty using the entropy of the predictive distributions (Malinin & Gales, 2018). Then we use the uncertainty to detect SVHN samples in a joint testing set combining CIFAR and SVHN (Netzer et al., 2011). We evaluate each algorithm with Area under ROC Curve (AUROC) (McClish, 1989) and Area under Precision-Recall curve (AUPR) (Olson & Delen, 2008). All other settings remain the same as in the classification experiments.
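For reference, a minimal sketch of the entropy-based uncertainty score and the OOD metrics used here is shown below; `probs_in` and `probs_ood` are assumed to be arrays of (BMA-averaged) class probabilities for the in-distribution and OOD test sets, and both names are ours.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def predictive_entropy(probs):
    """Uncertainty of the predictive distribution: H[p(y | x, D)] per test sample."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def ood_metrics(probs_in, probs_ood):
    """AUROC / AUPR for separating in-distribution from OOD samples by predictive entropy."""
    scores = np.concatenate([predictive_entropy(probs_in), predictive_entropy(probs_ood)])
    labels = np.concatenate([np.zeros(len(probs_in)), np.ones(len(probs_ood))])  # 1 marks OOD
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)
```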
Table 2 shows the evaluation results, where EMCMC outperforms nearly all baselines, especially when trained on CIFAR100. This indicates that predictive uncertainty estimation is more accurate if the samples are from flat basins of the posterior. The confidence calibration experiments are shown in Appendix D.4 7 Conclusion and Discussion We propose a practical MCMC algorithm to sample from flat basins of DNN posterior distributions. Specifically, we introduce a guiding variable based on the local entropy to steer the MCMC sampler toward flat basins. The joint distribution of this variable and the model parameter enjoys a simple form which enables efficient sampling. We prove the fast convergence rate of our method compared with two existing flatness-aware methods. Comprehensive experiments demonstrate the superiority of our method, verifying that it can sample from flat basins and achieve outstanding performance on diverse tasks. Our method is mathematically simple and computationally efficient, allowing for adoption as a drop-in replacement for standard sampling methods such as SGGLD. The results hold promise for both Bayesian methods and deep learning generalization. On the one hand, we demonstrate that explicitly considering flatness in Bayesian deep learning can significantly improve generalization, robustness, and uncertainty estimation, especially under practical computational constraints. On the other hand, we highlight the value of marginalizing over flat basins in the energy landscape, as a means to attain further performance improvements compared to single point optimization methods. REFERENCES Ahmad Ajalloeian and Sebastian U. Stich. On the convergence of sgd with biased gradients. *Journal of Machine Learning Research*, 2020. URL https://api.semanticscholar.org/CorpusID:234358812. Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. An introduction to mcmc for machine learning. *Machine learning*, 50:5–43, 2003. Dara Bahri, Hossein Mobahi, and Yi Tay. Sharpness-aware minimization improves language model generalization. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 7360–7371, 2022. Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Sub-dominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. *Physical review letters*, 115(12):128101, 2015. Carlo Baldassi, Christian Borgs, Jennifer T Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. *Proceedings of the National Academy of Sciences*, 113(48):E7655–E7662, 2016. Christopher M Bishop. *Pattern recognition and machine learning*, volume 4. Springer, 2006. Devansh Bisla, Jing Wang, and Anna Choromanska. Low-pass filtering sgd for recovering flat optima in the deep learning optimization landscape. In *International Conference on Artificial Intelligence and Statistics*, pp. 8299–8339. PMLR, 2022. Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-sgd: Biasing gradient descent into wide valleys. *Journal of Statistical Mechanics: Theory and Experiment*, 2019(12):124018, 2019. Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. 
In *International conference on machine learning*, pp. 1683–1691. PMLR, 2014. Adam D Cobb and Brian Jalaian. Scaling hamiltonian monte carlo inference for bayesian neural networks with symmetric splitting. In *Uncertainty in Artificial Intelligence*, pp. 675–685. PMLR, 2021. Arnak S Dalalyan and Avetik Karagulyan. User-friendly guarantees for the langevin monte carlo with inaccurate gradient. *Stochastic Processes and their Applications*, 129(12):5278–5311, 2019. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In *International Conference on Machine Learning*, pp. 1019–1028. PMLR, 2017. Gintare Karolina Dziugaite and Daniel Roy. Entropy-sgd optimizes the prior of a pac-bayes bound: Generalization properties of entropy-sgd and data-dependent priors. In *International Conference on Machine Learning*, pp. 1377–1386. PMLR, 2018. Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In *International Conference on Learning Representations*, 2020. Alan E Gelfand. Gibbs sampling. *Journal of the American statistical Association*, 95(452):1300–1304, 2000. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.
qxLVaYbsSI
In the scenario with both unlabeled and labeled data, some combinations of federated learning and semi-supervised algorithms do not perform as well as federated learning using only labeled data, which is counterintuitive. The authors could try to explain the reasons for this phenomenon.
Robust Training of Federated Models with Extremely Label Deficiency Yonggang Zhang\textsuperscript{1}\thanks{Equal contributions.} \quad Zhiqin Yang\textsuperscript{1}\thanks{Equal contributions.} \quad Xinmei Tian\textsuperscript{2} \quad Nannan Wang\textsuperscript{3} \quad Tongliang Liu\textsuperscript{4} \quad Bo Han\textsuperscript{1}\thanks{Correspondence to Bo Han (bhanml@comp.hkbu.edu.hk).} \textsuperscript{1}TMLR Group, Hong Kong Baptist University \quad \textsuperscript{2}University of Science and Technology of China \quad \textsuperscript{3}Xidian University \quad \textsuperscript{4}Sydney AI Centre, The University of Sydney Abstract Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency. Advanced FSSL methods predominantly focus on training a single model on each client. However, this approach could lead to a discrepancy between the objective functions of labeled and unlabeled data, resulting in gradient conflicts. To alleviate gradient conflict, we propose a novel twin-model paradigm, called Twin-sight, designed to enhance mutual guidance by providing insights from different perspectives of labeled and unlabeled data. In particular, Twin-sight concurrently trains a supervised model with a supervised objective function while training an unsupervised model using an unsupervised objective function. To enhance the synergy between these two models, Twin-sight introduces a neighbourhood-preserving constraint, which encourages the preservation of the neighbourhood relationship among data features extracted by both models. Our comprehensive experiments on four benchmark datasets provide substantial evidence that Twin-sight can significantly outperform state-of-the-art methods across various experimental settings, demonstrating the efficacy of the proposed Twin-sight. The code is publicly available at: github.com/tmlr-group/Twin-sight. 1 Introduction Federated learning (FL) (Yang et al., 2019; Kairouz et al., 2021; Li et al., 2021; McMahan et al., 2017; Wang et al., 2020) has gained widespread popularity in machine learning, enabling models to learn from decentralized devices under diverse domains (Li et al., 2019; Xu et al., 2021; Long et al., 2020a). Despite the benefits of FL, obtaining high-quality annotations remains challenging in resource-constrained scenarios, often leading to label deficiency and degraded performance (Jin et al., 2023). In this regard, federated semi-supervised learning (FSSL) (Diao et al., 2022; Liu et al., 2021c; Jeong et al., 2021) has achieved significant improvements in tackling label scarcity by jointly training a global model using labeled and/or unlabeled data. Advanced FSSL methods propose to combine off-the-rack semi-supervised methods (Sohn et al., 2020; Xie et al., 2020a; Berthelot et al., 2020) with FL (McMahan et al., 2017; Li et al., 2020), leveraging the strengths of both approaches like pseudo-labeling (Lee et al., 2013) and teacher-student models (Tarvainen & Valpola, 2017). These methods typically train a single model on each client using labeled or unlabeled data, following the inspirits of traditional semi-supervised learning. However, the decentralized nature of FL scenarios distinguishes FSSL from traditional semi-supervised learning, where labeled and unlabeled data are on the same device. 
Namely, clients in FL may have diverse capabilities to label data, leading to label deficiency on many clients (Liu et al., 2021c; Yang et al., 2021; Liang et al., 2022). Training a single model using different objective functions could make gradients on different distributions collide, as depicted in Figure 2(a). Thus, it is urgent to develop an FL-friendly semi-supervised learning framework to tackle label deficiency. To combat label deficiency, we propose a twin-model paradigm, called Twin-sight, to enhance mutual guidance by providing insights from different perspectives of labeled and unlabeled data, adapting... Figure 1: Overview of Twin-sight. The framework illustrates the process for both fully-labeled and fully-unlabeled clients. Each client incorporates a supervised model and an unsupervised model. The supervised model undergoes supervised learning using either ground-truth labels or pseudo labels, while the unsupervised model performs self-supervised learning. This approach enables the generation of twin sights for each sample, capturing both supervised and unsupervised perspectives. Subsequently, these two models are aligned, leveraging the complementary information. Traditional semi-supervised learning to FL. In particular, Twin-sight trains a supervised model using a supervised objective function, while training an unsupervised model using an unsupervised objective function. The twin-model paradigm naturally avoids the issue of gradient conflict. Consequently, the interaction between the supervised and unsupervised models plays a crucial role in Twin-sight. Drawing inspiration from traditional semi-supervised learning (Belkin & Niyogi, 2004) from a manifold perspective (Roweis & Saul, 2000), we introduce a neighborhood-preserving constraint to encourage preserving the neighborhood relation among data features extracted by these two models. Consequently, the supervised and unsupervised models can co-guide each other by providing insights from different perspectives of labeled and unlabeled data without gradient conflict. The overview of the proposed Twin-sight can be found in Figure 1. In Twin-sight, the unsupervised objective function, e.g., instance discrimination (Wu et al., 2018)\(^1\), does not vary with the presence or absence of labels for the unsupervised model. In contrast, the supervised objective function varies with the presence or absence of labels. For clients with label information, it can be a vanilla objective, e.g., cross-entropy loss. For clients without labels, Twin-sight regards predictions with high confidence as reliable labels to perform supervised learning. In Twin-sight, the constraint remains the same whether labels exist, encouraging the preservation of neighborhood relation (Sarkar et al., 2022; Gao et al., 2023; Pandey et al., 2021). Comprehensive experiments conducted on four standard datasets demonstrate the efficacy of the proposed Twin-sight. Overall, our contributions can be summarized as follows: • We point out that the discrepancy between the objective functions of labeled and unlabeled data could cause gradient conflict, posing specific challenges for semi-supervised learning approaches in FL scenarios. • To tackle label deficiency, we propose a twin-model framework, Twin-sight, to tackle gradient conflict in federated learning. Twin-sight trains a supervised model paired with an unsupervised model. 
Meanwhile, Twin-sight introduces a constraint to make the two models co-guide each other with insights from different perspectives of labeled and unlabeled data by preserving the neighborhood relation of data features. --- \(^1\)The objective function can refine class-level identification into fine-grained challenges causally (Chalupka et al., 2014; Mitrovic et al., 2020). • We conduct comprehensive experiments under various settings using widely used benchmark datasets. Our experimental results show that Twin-sight outperforms previous methods, achieving state-of-the-art performance. 2 RELATED WORK Federated Learning. Federated learning (FL) enables distributed clients to collaboratively train a global model with privacy-preserving (Kairouz et al., 2021; Ji et al., 2023). However, the performance of federated learning typically suffers from heterogeneity in data distributions, processing capabilities, and network conditions among clients (Lin et al., 2020; Li et al., 2022; Diao et al., 2023; Zhu et al., 2022; Tang et al., 2022). One of the most popular algorithms in FL is FedAvg (McMahan et al., 2017), which aggregates parameters from randomly selected clients to create a global model and achieves convergence after several rounds of communication. A series of works, e.g., FedProx (Li et al., 2020), SCAFFOLD (Karimireddy et al., 2020), is proposed to calibrate the local updating direction. These methods implicitly assume that all clients can label data, which could be violated in many practical scenarios. Some approaches include the sharing of privacy-free information (Tang et al., 2022) or the use of protected features (Yang et al., 2023). These strategies have shown promise in achieving better performance. Semi-Supervised Federated Learning (SemiFL). To relax the assumption, SemiFL (Diao et al., 2022) assumes that the server can annotate data, while clients collect data without labels. In SemiFL, selected clients generate pseudo-labels using the global model and then fine-tune the aggregated model using labeled data on the server side. Semi-supervised learning is a well-established approach that has proven to be effective in improving the performance of machine learning models by making use of both labeled and unlabeled data (Zhu et al., 2003; Zhu & Goldberg, 2009). Self-training methods (Xie et al., 2020b; Zoph et al., 2020; Liu et al., 2021b) have emerged as a popular approach for semi-supervised learning, in which a teacher model is trained on labeled data and used to generate pseudo-labels for the remaining unlabeled data. Another significant line of work is based on consistency training (Tarvainen & Valpola, 2017; Xie et al., 2020a). Apart from the above, combining these two methods is effective in achieving improved performance on various benchmark datasets, e.g., MixMatch (Berthelot et al., 2019), FixMatch (Sohn et al., 2020), and RemixMatch (Berthelot et al., 2020). However, the server may fail to collect data due to privacy concerns. Federated Semi-Supervised Learning (FSSL). Advanced works assume that the some clients have labeled data (Jin et al., 2023; Liu et al., 2021c), which has garnered significant attention. One stream of research focuses on the fully-labeled clients versus fully-unlabeled clients (Liu et al., 2021c; Yang et al., 2021; Liang et al., 2022), while another body of literature studies the use of partially labeled data at each client (Long et al., 2020b; Lin et al., 2021; Wei & Huang, 2023). 
For instance, RSCFed (Liang et al., 2022) leverages mean-teacher on fully-unlabeled clients and sub-sample clients for sub-consensus by distance-reweighted model aggregation. This approach comes at the cost of increased communication burden. FedIRM (Liu et al., 2021c) learns the inter-client relationships between different clients using a relation-matching module. However, these methods merely train a single model on labeled and unlabeled data, causing the gradient conflict issue. Self-Supervised Learning. Self-supervised learning is an increasingly popular approach to acquiring meaningful representations without needing explicit labels (He et al., 2020; Chen & He, 2021). Contrastive methods (Wu et al., 2018; Bachman et al., 2019; Misra & Maaten, 2020) have demonstrated state-of-the-art performance, which enforces the similarity of representations between two augmented views of input. One of the predominant methods, SimCLR (Chen et al., 2020), applies InfoNCE (Oord et al., 2018) loss to discriminate positive pairs from numerous negative samples. There is a work (Zhuang et al., 2021) also investigates the federated version of these unsupervised methods. Previous work shows that the instance discrimination task can be regarded as a (more challenging) fine-grained version of the downstream task (Mitrovic et al., 2020). These insightful works inspire us to introduce self-supervised learning into FSSL for processing unlabeled data. 3 METHODOLOGY In this section, we present our “Twin-sight” framework in detail. Before that, we provide a formal definition of the studied problem (Sec 3.1) and the motivation (Sec 3.2). We then elaborate on the twin-model paradigm (Sec 3.3), outlining the roles and training procedures of the supervised and unsupervised models. Finally, we explore the interaction between these two sights (Sec 3.4). ### 3.1 Problem Definition In general, FL tends to train a global model parameterized by \( w \) with \( K \) participants collaboratively. In other words, the objective function \( J(w) \) of the global model is composed of the local function over all participants’ data distribution: \[ \min_w J(w) = \sum_{k=1}^{K} \beta_k J_k(w_k), \] where \( \beta_k \) determines the weight of the \( k \)-th client’s objective function. The \( k \)-th client possesses a local private dataset denoted by \( D_k \), drawn from the distribution \( P(X_k, Y_k) \). In FSSL, a typical scenario involves \( M \) clients with fully-labeled data, while the remaining \( T \) clients have unlabeled data. The set of all clients \( C = \{c_k\}_{k=1}^{K} \) can be divided into two subsets, \( C^L = \{c_m\}_{m=1}^{M} \) and \( C^U = \{c_t\}_{t=1}^{T} \), corresponding to the clients with labeled and unlabeled data, respectively. The dataset of the \( m \)-th client in \( C^L \) is \( D^L_m = \{(x^i_m, y^i_m)\}_{i=1}^{N_m} \sim P(X_m, Y_m) \) and \( D^U_t = \{(x^i_t)\}_{i=1}^{N_t} \sim P(X_t) \) denotes the dataset containing data without annotation for \( c_t \). ### 3.2 Motivation In the existing FSSL framework, the local objective function on labeled data can be formulated as: \[ J_m(w_m) := \mathbb{E}_{(x,y) \sim P(X_m,Y_m)} \ell(w_m; x, y), \] where \((X_m, Y_m)\) is the random variable denoting the image \( X_m \) and its label \( Y_m \) and \( \ell(\cdot) \) is the cross-entropy loss. 
To leverage unlabeled data, advanced methods (Liang et al., 2022; Liu et al., 2021c; Yang et al., 2021) propose to employ traditional semi-supervised learning techniques such as pseudo-labeling (Lee et al., 2013) and mean-teacher (Tarvainen & Valpola, 2017) in conjunction with a transformation function \( T(\cdot) \). These methods utilize a global model parameterized to utilize unlabeled data on the \( t \)-th client: \[ J_t(w_t) := \mathbb{E}_{x \sim P(X_t)} f(w_t; x, T(x)), \] where \( f(\cdot; \cdot) \) denotes a consistency constraint. Therefore, the global objective can be rewritten as: \[ \min_w J(w) = \sum_{m=1}^{M} \beta_m J_m(w) + \sum_{t=1}^{T} \beta_t J_t(w). \] In centralized training, this approach can achieve state-of-the-art performance. However, the objective function may cause “client drift” due to the different objective functions of clients. Specifically, all parameters will be aggregated to construct a global model, even if these models are trained with different objective functions. In practice, aggregating models with different objective functions will cause “client drift” (Wang et al., 2020). To verify the client drifts, we calculate the similarity between gradients calculated under different objective functions, i.e., Eq. 2 and Eq. 3. The results are shown in Figure 2(a), demonstrating that gradients from these two do not align well, i.e., gradient conflict. The gradient conflict issue is inherently attributed to the decentralized nature of data. Specifically, FL models are trained on labeled or unlabeled data, leading to aggregation with models trained using different objective functions and data distributions. ### 3.3 Twin-Model Paradigm Built upon the aforementioned analysis, we propose to introduce a twin-model paradigm to tackle gradient conflict. Intuitively, we can train a supervised model using labeled data while training an unsupervised model using unlabeled data. Consequently, the main challenge is designing an effective interaction mechanism between these two models, making these two models promote each other by providing insights from different perspectives of labeled and unlabeled data. Algorithm 1 pseudo-code of Twin-sight Server input: communication round $R$ Client $k$’s input: local epochs $E$, $k$-th local dataset $\mathcal{D}^k$ Initialization: all clients initialize the model $w_{s,k}^0, w_{u,k}^0$. Server Executes: for each round $r = 1, 2, \ldots, R$ do server random samples a subset of clients $C_r \subseteq \{1, \ldots, K\}$, server communicates $w_s^r, w_u^r$ to selected clients for each client $c_k \in C_r$ in parallel do $w_{u,k}^{r+1}, w_{s,k}^{r+1} \leftarrow$ Local_Training $(k, w_s^r, w_u^r)$ end for $w_s^{r+1}, w_u^{r+1} \leftarrow$ AGG $(w_{s,k}^{r+1}, w_{u,k}^{r+1}, c_k \in C_r)$ end for Local_Training($(k, w_s^r, w_u^r)$): if $c_k \in C_L$ then $w_{u,k}^{r+1}, w_{s,k}^{r+1} \leftarrow$ SGD update by Eq 10 in $E$ epochs. else if $c_k \in C_U$ then $w_{r+1}, w_{s,k}^{r+1} \leftarrow$ SGD update by Eq 11 in $E$ epochs. end if Return $w_{u,k}^{r+1}, w_{s,k}^{r+1}$ to server The twin-model paradigm has two models: an unsupervised model parameterized with $w_u$ and a supervised model parameterized with $w_s$. 
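As a companion to the gradient-conflict diagnostic of Section 3.2 (Figure 2(a)), a minimal sketch of how the similarity between the gradients of the two objectives (Eq. 2 vs. Eq. 3) can be measured on a single model is given below; `loss_labeled` and `loss_unlabeled` are placeholder scalar losses of ours, assumed to be computed with the same network.

```python
import torch
import torch.nn.functional as F

def gradient_similarity(model, loss_labeled, loss_unlabeled):
    """Cosine similarity between the gradients of the labeled and unlabeled objectives."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_l = torch.autograd.grad(loss_labeled, params, retain_graph=True)
    g_u = torch.autograd.grad(loss_unlabeled, params)
    g_l = torch.cat([g.flatten() for g in g_l])
    g_u = torch.cat([g.flatten() for g in g_u])
    return F.cosine_similarity(g_l, g_u, dim=0)  # negative values indicate gradient conflict
```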
The unsupervised model is trained with a fine-grained task of a downstream classification task, i.e., instance discrimination: $$\min_{w_u} J^u(w_u) = \sum_{m=1}^{M} \beta_m J^u_m(w_u) + \sum_{t=1}^{T} \beta_t J^u_t(w_u),$$ where the objective function $J^u(\cdot)$ is the same for all client$^2$: $$J^u(w_u) = -\log \frac{\exp \left( \text{sim}(f(w_u; x_i), f(w_u; x_j)) / \tau \right)}{\sum_{k=1}^{2N} \mathbb{I}[k \neq i] \exp \left( \text{sim}(f(w_u; x_i), f(w_u; x_k)) / \tau \right)},$$ where $f(w_u; \cdot)$ is the unsupervised model and $\tau$ is the temperature hyper-parameter. Thus, the unsupervised model can be trained in a vanilla FL manner. Supervised models on clients with labeled data can be trained with a cross-entropy loss $J_m(\cdot)$. Notably, the label information is invalid on clients sampled from the unlabeled subset $C_U = \{c_t\}_{t=1}^{T}$. Thus, we introduce a surrogate loss $J^s_t(\cdot)$ to train the supervised model with unlabeled data on client $c_t$. This can be formulated as: $$\min_{w_s} J^s(w_s) = \sum_{m=1}^{M} \beta_m J_m(w_s) + \sum_{t=1}^{T} \beta_t J^s_t(w_s),$$ where the surrogate loss replaces the label used in cross-entropy loss with a pseudo label $\tilde{y}$ predicted by the supervised model $f(w_s; \cdot)$: $$J^s_t(w_s) := -\mathbb{I}[\sigma(\tilde{y}) > r] \sigma(\tilde{y}) \log f(w_s; x_i),$$ where $\mathbb{I}(\cdot)$ is an indicator function, $\sigma(\cdot)$ can select the maximum value for a given vector, and $r$ is a threshold working as a hyper-parameter to select predictions with high confidence. This is because training models using data with low-confidence predictions cause performance degradation, which is consistent with previous work (Wang et al., 2022). Consequently, we can train a supervised model in a vanilla FL manner. $^2$Here, we omit the difference induced by distribution discrepancy between clients. 3.4 Twin-sight Interaction Training two models separately cannot make these two models benefit each other. Therefore, we introduce a Twin-sight loss to complete the Twin-sight framework. The inspiration is drawn from local linear embedding (Roweis & Saul, 2000) and distribution alignment (Zhang et al., 2022), where the features (or embeddings) of the same data should keep the same neighborhood relations under different feature spaces. Specifically, we introduce a constraint to encourage preserving the neighborhood relation among data features extracted by supervised and unsupervised models. The intuition is straightforward that features extracted by the supervised model and the unsupervised model can be drastically different, making it hard to align the feature distributions. Thus, we propose to Twin-sight loss \( J_a(\cdot) \) to align the neighborhood relation among features: \[ \min_{w_s, w_u} J_a(w_s, w_u) := d(N(f(w_s; x)), N(f(w_u; x))), \] where \( d \) is a certain metric to measure the difference between two matrices, e.g., \( \ell_F \)-norm and \( N(\cdot) \) stands for the function used to construct a neighborhood relation. The Twin-sight loss can be used to train both the supervised and unsupervised models in a vanilla FL manner. Consequently, the objective function on labeled data \( J^l(\cdot) \) is formulated as: \[ J^l(w_s, w_u) = J_m(w_s) + \lambda_u J_u(w_u) + \lambda_d J_a(w_s, w_u), \] Similarly, we can leverage unlabeled data by loss function \( J^u(\cdot) \): \[ J^u(w_s, w_u) = J^*_m(w_s) + \lambda_u J_u(w_u) + \lambda_d J_a(w_s, w_u), \] where \( \lambda_u \) and \( \lambda_d \) is the hyper-parameters to adjust. 
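For illustration, a minimal sketch of these client-side losses is given below, together with one possible instantiation of the neighbourhood-preserving Twin-sight loss $J_a$ of Section 3.4. Here `z_i`/`z_j` are embeddings of two augmented views from the unsupervised model, `logits` are the supervised model's predictions on unlabeled data, and `h_s`/`h_u` are the two models' features on the same mini-batch; using a cosine-similarity matrix as $N(\cdot)$ is an illustrative choice of ours (the paper only suggests the $\ell_F$-norm for $d$), and all names are ours.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_i, z_j, tau=0.5):
    """Instance-discrimination loss (Eq. 6) over the 2N embeddings of two augmented views."""
    n = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j]), dim=1)                    # 2N x d
    sim = z @ z.t() / tau                                            # pairwise similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)                             # -log prob of the positive pair

def pseudo_label_loss(logits, threshold=0.95):
    """Confidence-thresholded pseudo-label loss (Eq. 8) on unlabeled data."""
    probs = torch.softmax(logits.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = (conf > threshold).float()                                # indicator: keep confident samples
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (conf * mask * loss).mean()

def twin_sight_loss(h_s, h_u):
    """Neighbourhood-preserving alignment: match pairwise similarity structures of both models."""
    n_s = F.normalize(h_s, dim=1) @ F.normalize(h_s, dim=1).t()      # N(f(w_s; x))
    n_u = F.normalize(h_u, dim=1) @ F.normalize(h_u, dim=1).t()      # N(f(w_u; x))
    return (n_s - n_u).pow(2).sum().sqrt()                           # Frobenius norm as d
```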
The overview of the Twin-sight framework is illustrated in Figure 1 and Algorithm 1. According to the framework of Twin-sight, it is also possible to apply Twin-sight to a similar label deficiency scenario where all clients hold data with a portion of it labeled. This superiority is supported by our experiments, as shown in Table 3. 4 Experiments To evaluate our method, we have structured this section into four parts: 1) Detailed description of the datasets and baseline methods used in this paper within FSSL (Sec 4.1). 2) The main results that demonstrate the efficacy of our proposed method (Sec 4.2). 3) Extensive evaluations of Twin-sight to another scenario in FSSL, where all clients possess partially labeled data (Sec 4.3). 4.1 Experimental Setup Datasets. In our experiments, we use four popular datasets that have been extensively utilized in FSSL research (Liang et al., 2022; Wei & Huang, 2023) including CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), Fashion-MNIST (FMNIST) (Xiao et al., 2017), and CIFAR-100 (Krizhevsky et al., 2009). The training sets of these four datasets are 50,000, 73,257, 60,000, and 50,000 respectively. They are partitioned into \( K \) clients in federated learning, and we resize all images to \( 32 \times 32 \) size. Federated Semi-supervised Learning Setting. 1) Data heterogeneity: To simulate data heterogeneity, we partition the dataset across clients using the Latent Dirichlet Sampling (LDA) strategy (Hsu et al., 2019), with \( \gamma \) in \( \text{Dir}(\gamma) \) controlling the label and quantity skewness of the data distribution among clients. In our experiments, we mainly use a severe non-IID setting with \( \gamma = 0.1 \) (Fig. 2(b) shows the data distribution across 10 clients), which closely resembles real-world scenarios and is important for evaluating the effectiveness of federated learning algorithms. 2) FSSL: We follow the setting of existing FSSL works (Liang et al., 2022). Specifically, our federated learning (FL) system Figure 2: (a) The gradient similarity between two objective functions, i.e., defined on labeled and unlabeled data, throughout the training process. The figure demonstrates the gradient conflict. (b) Data heterogeneity under $D_{ir}(\gamma = 0.1)$. Each bubble indicates the number of $y$-th class at client $k$. comprises $K$ clients, among which $M$ have access to fully-labeled training data, and $T$ have only unlabeled data. The proportion of fully-unlabeled clients, represented by the ratio $\alpha = \frac{T}{K}$, constitutes a key factor determining the extent of annotation scarcity in Twin-sight, while $(1 - \alpha) = \frac{K-T}{K} = \frac{M}{K}$ highlights the degree of label richness across the participating clients. Baselines. To verify the performance and robustness of Twin-sight, we compare it against several methods, including the combination of semi-supervised and FL methods, as well as other state-of-the-art baseline methods in FSSL. 1) FedAvg (McMahan et al., 2017), trained only with labeled data as a lower bound for comparison. 2) FedProx (Li et al., 2020), proposed to mitigate heterogeneous scenarios in FL. 3) FedAvg+FixMatch (McMahan et al., 2017; Sohn et al., 2020), the combination of two excellent methods in the respective fields of federated learning (FL) and semi-supervised learning (SSL). 4) FedProx+FixMatch (Li et al., 2020; Sohn et al., 2020), revise federated learning strategy to fit into heterogeneity. 
5) FedAvg+Freematch (McMahan et al., 2017; Wang et al., 2023), vanilla FL method deployed with SOTA semi-supervised framework. 6) FedProx+Freematch (Li et al., 2020; Wang et al., 2023), a combination of two methods too. 7) Fed-Consist (Yang et al., 2021), use consistency loss computed by augmented data. 8) FedIRM (Liu et al., 2021c), a relation matching scheme between fully-labeled clients and fully-unlabeled clients. 9) RSCFed (Liang et al., 2022), randomly sub-sample for sub-consensus. Implementation Details. Similar to many works (Tang et al., 2022; Wei & Huang, 2023; Huang et al., 2024), we use Resnet-18 (He et al., 2016) as a backbone feature extractor on all datasets and baselines to ensure a fair comparison. In federated learning, we aggregate weights in a FedAvg (McMahan et al., 2017) manner. In accordance with previous works (Liang et al., 2022), all of our experimental results report on the performance of the global model after $R = 500$ rounds of training. The server randomly samples a subset of all clients which means $|C_r| = 5$ when clients number $K = 10$, namely the sampling rate $S = 50\%$. The random seed in our experiments is 0. We use the SGD optimizer with a learning rate of 0.01, weight decay of 0.0001, and momentum of 0.9 in all of our experiments. The batch size is set to 64 for all datasets. 4.2 Main Results The experimental results for Twin-sight on CIFAR-10, SVHN, FMNIST, and CIFAR-100 are presented in Table 1 and Table 2. The experiments were conducted using the same random seed, with 6 out of 10 clients randomly selected to be fully-unlabeled clients while the remainder were fully-labeled clients, namely $\alpha = 60\%$, and we select 5 clients per communication round ($S = 50\%$) in FL system. Overall, the performance of both the baseline methods and Twin-sight is lower than the upper bound of FedAvg. However, our proposed method, “Twin-sight,” demonstrates a significant improvement in performance, outperforming all baselines, indicating successful mitigation of the Table 1: The performance of Twin-sight is compared to state-of-the-art (SOTA) methods on CIFAR-10 and CIFAR-100, with $\gamma = 0.1$ and $K = 10$. | Method | No. Fully-labeled Clients/Fully-unlabeled Clients | CIFAR-10 | CIFAR-100 | |---------------------------------------------|--------------------------------------------------|-----------|------------| | | Labeled Clients (M) Unlabeled Clients (T) Acc↑ Round↓ Acc↑ Round↓ | | Vanilla FL method | | | | | FedAvg-Lower Bound | 4 0 | 61.58 | 295 | 48.36 | 469 | | FedProx-Lower Bound | 4 0 | 63.66 | 168 | 44.64 | None | | Combination of FL and SSL method | | | | | FedAvg+FixMatch (McMahan et al., 2017; Sohn et al., 2020) | 4 6 | 63.58 | 207 | 48.73 | 315 | | FedProx+FixMatch (Li et al., 2020; Sohn et al., 2020) | 4 6 | 62.44 | 269 | 43.61 | None | | FedAvg+Freematch (McMahan et al., 2017; Wang et al., 2023) | 4 6 | 58.47 | None | 48.67 | 417 | | FedProx+Freematch (Li et al., 2020; Wang et al., 2023) | 4 6 | 59.28 | None | 40.45 | None | | Existing FSSL method | | | | | Fed-Consist (Yang et al., 2021) | 4 6 | 62.42 | 231 | 47.31 | None | | FedIRM (Liu et al., 2021c) | 4 6 | – | – | – | – | | RSCFed (Liang et al., 2022) | 4 6 | 60.78 | None | 43.48 | None | | Twin-sight (Ours) | 4 6 | **70.06** | **115** | **49.98**| **400** | The performance of FedAvg-Lower Bound is the target accuracy. “Round” refers to the communication round required to reach the target accuracy. 
“None” indicates that this method did not attain the target accuracy throughout the entire training period. The bold indicates the best result, while the underlined represents the runner-up. Table 2: The performance of Twin-sight is compared to state-of-the-art (SOTA) methods on SVHN and FMNIST, with $\gamma = 0.1$ and $K = 10$. | Method | No. Fully-labeled Clients/Fully-unlabeled Clients | SVHN | FMNIST | |---------------------------------------------|--------------------------------------------------|------|--------| | | Labeled Clients (M) Unlabeled Clients (T) Acc↑ Round↓ Acc↑ Round↓ | | FedAvg-Lower Bound | 4 0 | 51.10| 70 | 72.46 | 172 | | FedProx-Lower Bound | 4 0 | 49.22| None | 70.71 | None | | FedAvg+FixMatch | 4 6 | 58.68| **35** | 67.52 | None | | FedProx+FixMatch | 4 6 | 45.58| None | 63.20 | None | | FedAvg+Freematch | 4 6 | 59.74| **45** | 63.10 | None | | FedProx+Freematch | 4 6 | 50.91| None | 69.62 | None | | Fed-Consist | 4 6 | 56.87| 103 | 68.51 | None | | RSCFed | 4 6 | 54.50| 69 | **76.58**| **88** | | Twin-sight (Ours) | 4 6 | **62.94**| 125 | **79.95**| **140** | gradient conflict. Specifically, our method achieves excellent results on all datasets, with a particularly notable improvement on CIFAR-10. Despite its potential advantages, RSCFed did not exhibit superior performance compared to our methods due to the presence of gradient conflict (see Figure 2(a)). However, the combination of FedAvg (McMahan et al., 2017) and Fixmatch (Sohn et al., 2020) or Freematch (Wang et al., 2023) achieved comparable performance in certain scenarios, leveraging two fundamental methods from different fields despite its simplicity. Moreover, FedIRM results in a NaN loss when used in severely skewed label distributions. 4.3 Partially Labeled Data Scenario Furthermore, we explore the scenario where all clients have partially labeled data. To quantify the availability of labeled data for each client, we introduce $\tau$, which represents the labeled data ratio, indicating the proportion of labeled data available. In addition to the vanilla FL method and the combination of FL and Semi-supervised learning (SSL) methods used in the previous setting, we Table 3: The performance of Twin-sight is compared to state-of-the-art (SOTA) methods on CIFAR-10, CIFAR-100, SVHN and FMNIST in another scenario with $\gamma = 0.1$ and $K = 10$. 
| Method | Labeled ratio ($\tau$) | Unlabeled data ratio | CIFAR-10 | SVHN | CIFAR-100 | FMNIST | |-------------------------|------------------------|----------------------|----------|------|-----------|--------| | Vanilla FL method | | | | | | | | FedAvg-Upper Bound | 100% | 0% | 82.78 | 87.34| 64.45 | 88.89 | | FedAvg-Lower Bound | 5% | 0% | 45.35 | 37.81| 19.46 | 75.21 | | FedProx-Lower Bound | 5% | 0% | 45.44 | 27.34| 20.47 | 79.77 | | Combination of FL and SSL method | | | | | | | | FedAvg+FixMatch | 5% | 95% | 74.97 | 64.44| 33.58 | 75.62 | | FedProx+FixMatch | 5% | 95% | 60.89 | 67.34| 23.01 | 81.09 | | FedAvg+Freematch | 5% | 95% | 75.47 | 68.43| 44.16 | 74.78 | | FedProx+Freematch | 5% | 95% | 64.73 | 69.01| 31.78 | 76.75 | | Existing FSSL method | | | | | | | | FedSem (Albaseer et al., 2020) | | 95% | 43.17 | 63.41| 20.11 | 76.87 | | FedSiam (Long et al., 2020b) | | 95% | 47.05 | 57.18| 21.25 | 80.53 | | FedMatch (Jeong et al., 2021) | | 95% | 52.86 | 69.08| 23.64 | 80.16 | | Twin-sight (Ours) | 5% | 95% | **78.89**| **73.24**| **45.62**| **80.11**| incorporate three additional methods specifically designed for this partially labeled data scenario, FedSem (Albaseer et al., 2020), FedSiam (Long et al., 2020b), and FedMatch (Jeong et al., 2021). The results presented in Table 3 highlight the remarkable improvements achieved by Twin-sight in the new scenario, with the exception of the FMNIST dataset. The observed performance difference in the FMNIST dataset could be attributed to the fact that the algorithm’s performance has reached a bottleneck when trained on only 5% of the available data. However, Twin-sight still demonstrates comparable results with other methods on the FMNIST dataset. 5 CONCLUSION In this work, we present Twin-sight, a novel twin-model paradigm designed to address the challenge of label deficiency in federated learning (FL). There are three key factors contributing to the improvement of Twin-sight. First of all, we decouple the learning objective into two models which avoids gradient conflicts. In the most important part, twin-sight interaction, our unsupervised model conducts an instance classification task which is a fine-grained classification problem. Namely, this task would contribute to the downstream classification tasks (Mitrovic et al., 2020). Moreover, the data, model, and the objective function are consistent among all clients. Lastly, our supervised model conducts a classification task. Furthermore, the data, model, and objective functions are consistent among all clients, except for some unlabelled data paired with pseudo labels. Limitation The twin-model paradigm introduces an additional model, which can potentially increase memory and communication overhead in federated learning (FL). As part of our future work, we aim to explore a memory-friendly dual-model paradigm that addresses these concerns. Future works Currently, few existing methods can effectively address multiple FSSL scenarios. Therefore, future research should focus on proposing multi-scenario generalization and robust methods capable of handling FSSL problems in various situations. Furthermore, it is essential to consider communication overhead, computation overhead, and performance in the experimental evaluations to provide diverse solutions that cater to the different requirements of cross-silo and cross-device scenarios. ETHIC STATEMENT This paper does not raise any ethical concerns. 
This study does not involve any human subjects, practices to data set releases, potentially harmful insights, methodologies and applications, potential conflicts of interest and sponsorship, discrimination/bias/fairness concerns, privacy and security issues, legal compliance, and research integrity issues. REPRODUCIBILITY STATEMENT To make all experiments reproducible, we have listed all detailed hyper-parameters of each FL algorithm. Due to privacy concerns, we will upload the anonymous link of source codes and instructions during the discussion phase to make it only visible to reviewers. ACKNOWLEDGMENTS AND DISCLOSURE OF FUNDING Yonggang Zhang, Zhiqin Yang and Bo Han were supported by the NSFC General Program No. 62376235, Guangdong Basic and Applied Basic Research Foundation No. 2022A1515011652, HKBU Faculty Niche Research Areas No. RC-FNRA-IG/22-23/SCI/04, CCF-Baidu Open Fund, and HKBU CSD Departmental Incentive Scheme. Xinmei Tian was supported in part by NSFC No. 62222117, the Fundamental Research Funds for the Central Universities under contract WK3940000005, and KY2100000117. Nannan Wang was supported in part by the National Natural Science Foundation of China under Grants U22A2096. Tongliang Liu is partially supported by the following Australian Research Council projects: FT220100318, DP220102121, LP220100527, LP220200949, and IC190100031. REFERENCES Abdullatif Albaseer, Bekir Sait Ciftler, Mohamed Abdallah, and Ala Al-Fuqaha. Exploiting unlabeled data in smart cities using federated learning. *arXiv preprint arXiv:2001.04030*, 2020. Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. *Advances in neural information processing systems*, 32, 2019. Mikhail Belkin and Partha Niyogi. Semi-supervised learning on riemannian manifolds. *Machine learning*, 56:209–239, 2004. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. *Advances in neural information processing systems*, 32, 2019. David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. In *ICLR*, 2020. Krzysztof Chalupka, Pietro Perona, and Frederick Eberhardt. Visual causal feature learning. *arXiv preprint arXiv:1412.2309*, 2014. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607, 2020. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 15750–15758, 2021. Enmao Diao, Jie Ding, and Vahid Tarokh. Semifl: Semi-supervised federated learning for unlabeled clients with alternate training. *Advances in Neural Information Processing Systems*, 35:17871–17884, 2022. Yiqun Diao, Qinbin Li, and Bingsheng He. Towards addressing label skews in one-shot federated learning. In *The Eleventh International Conference on Learning Representations*, 2023. Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, and Dequan Wang. Back to the source: Diffusion-driven test-time adaptation. In *CVPR*, 2023. 
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. *Advances in neural information processing systems*, 33:21271–21284, 2020.
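As a supplement to the data-heterogeneity setup described in Sec. 4.1 above (partitioning clients with Dir(γ)), the sketch below shows one common way such a label-skewed split can be generated. It is a minimal illustration under our own naming (`dirichlet_partition` is hypothetical) and is not taken from the authors' code; it only assumes the training labels are available as an integer array.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, gamma=0.1, seed=0):
    """Split sample indices across clients with Dir(gamma) label skew.

    For every class, a Dirichlet vector over clients decides what share of
    that class each client receives; a small gamma yields highly skewed clients.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(gamma * np.ones(num_clients))
        # cumulative shares -> split points inside this class
        cuts = (np.cumsum(shares)[:-1] * len(idx_c)).astype(int)
        for client_id, part in enumerate(np.split(idx_c, cuts)):
            client_indices[client_id].extend(part.tolist())
    return [np.array(sorted(ix)) for ix in client_indices]

# Example: a CIFAR-10-sized label array, 10 clients, severe non-IID (gamma = 0.1).
fake_labels = np.random.randint(0, 10, size=50_000)
parts = dirichlet_partition(fake_labels, num_clients=10, gamma=0.1)
print([len(p) for p in parts])
```

With γ = 0.1, most clients end up dominated by a few classes, which matches the bubble plot of per-client class counts described in Figure 2(b).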
TiY8Cvc2SR
In clinical practice, the smaller the tumor areas, the more challenging the case; consequently, the performance degradation on datasets with a smaller number of positive tiles may mean that this method is not the best choice for the more challenging datasets.
PROGRESSIVE PSEUDO BAG AUGMENTATION WITH INSTANCE IMPORTANCE ESTIMATION FOR WHOLE SLIDE IMAGE CLASSIFICATION Anonymous authors Paper under double-blind review ABSTRACT In the field of computational pathology, the classification of whole-slide images (WSI) remains a challenging task due to the vast amount of gigapixel information and the limited availability of refined manual annotations. Recently, multiple instance learning (MIL) has emerged as a promising approach to address this issue. While attention-based MIL methods utilize attention mechanisms to distill instance information for training or further fine-tuning, the current ranking of attention scores fails to accurately locate positive instances. In this study, we propose the instance importance score (IIS) based on the Shapley value to tackle this problem. This approach enables the identification and prioritization of crucial features. Building upon this foundation, we present a novel framework for the progressive assignment of pseudo bags. Through comprehensive experiments, our approach achieves state-of-the-art performance compared to other superior methods on the CAMELYON-16, BRACS, and TCGA-LUNG datasets. Furthermore, the visualization results demonstrate the enhanced interpretability provided by the IIS in the classification of WSI. Code for our framework is accessible at https://github.com/****. 1 INTRODUCTION In recent years, computational pathology has undergone rapid advancements driven by the progress in digital imaging techniques. These advancements have transformed stained tissue specimens into comprehensive whole-slide images (WSIs), which serve as a basic resource for advanced diagnostic procedures. Deep learning-based computational algorithms, operating on patches tiled from WSIs, play a pivotal role in discerning essential features and making critical decisions across various clinical tasks [Campanella et al., 2019; Yan et al., 2023]. Among these tasks, WSI classification stands out as a significant endeavour, yet it confronts substantial challenges. The primary challenge lies in learning gigapixel-level information with only slide-level labels, as the refined manual annotations are prohibitively expensive [Zhu et al., 2023; Chen et al., 2022; Yufei et al., 2022]. Moreover, there is a growing demand from the clinical for an approach that delivers high performance and offers interpretability [Pati et al., 2022; Jaume et al., 2021; Schwab & Karlen, 2019]. To overcome these challenges, researchers have employed multiple instance learning (MIL), which is a weakly supervised approach that aggregates instances within a bag for classification. In its early stages, MIL approaches employed mean pooling or max pooling for feature aggregation [Pinheiro & Collobert, 2015; Feng & Zhou, 2017; Zhu et al., 2017]. Moreover, attention-based pooling has taken the forefront due to its effectiveness in amalgamating information [Ilse et al., 2018]. Recent studies have introduced methods that operate under the assumption that the ranking of attention scores can accurately identify positive instances. Lu et al. introduced an additional cluster branch founded on attention scores to distinguish features via projection [Lu et al., 2021]. Yu et al. proposed a Bayesian collaborative learning (BCL) framework, which assigns slide-level labels to patches garnering the highest attention for patch-level training, effectively fine-tuning the feature encoder by the agent task [Yu et al., 2023]. Li et al. 
identified and selected instances with top-ranking attention scores for end-to-end MIL training, addressing the information bottleneck [Li et al., 2023]. However, as depicted in Fig. 1, the attention distributions of different MIL models reveal that the top 5 instances collectively receive a significant portion of the attention scores. Moreover, when examining Figure 1: The attention distributions generated from attention-based pooling in CAMELYON-16, along with examples of the top 5 instances. (a), (b), and (c) employ ABMIL, CLAM, and DTFD as the backbone models, respectively. The attention scores from all patches across all slides are normalized for visualization, and the top 5 instances from one example slide are extracted and ordered. the attention score rankings, it becomes apparent that positive instances can be misordered, even within the top 5 samples. This leads to the observation that attention can be fuzzy and deceptive, as it tends to concentrate on a limited subset of instances, resulting in a noisy ranking of instance importance rather than a precise one. To promote MIL models to learn a greater number of positive instances, one approach is to partition the regular bag into multiple pseudo bags [Shao et al., 2021a]. Zhang et al. adopted a strategy where one bag is randomly partitioned into multiple pseudo bags for both training and inference [Zhang et al., 2022]. In order to solve the mislabeling issue of pseudo bags, they distilled a feature vector from each pseudo bag, and proposed a Tier-2 MIL model upon the distilled features for slide classification. However, as depicted in Fig.1(c), their method does not fundamentally resolve the inherent mislabeling issue in pseudo bag assignment, which can adversely affect the MIL process. Inspired by cooperative game theory [Shapley et al., 1953], we introduce a metric to measure the contribution of each instance termed instance importance score (IIS). In cooperative game theory, the classical Shapley value serves as an indicator for comprehensively measuring the contribution of features under different cooperative relationships. The concept of the Shapley value can also be applied to the field of pathology WSI classification, where multiple patches contribute to the final diagnosis. Building upon the introduced IIS, we propose a MIL framework called PMIL, which incorporates progressive pseudo bag augmentation. This approach systematically divides a regular bag into a series of pseudo bags, enhancing the model’s fitting and generalization capabilities. In summary, our key contributions are as follows: • Acknowledging the limitations of attention scores in terms of ranking accuracy and interpretability, we tackle this issue by introducing the Shapley value-based IIS value as a measure of instance contributions in the context of multiple instance learning. • We propose a framework, called PMIL, that utilizes IIS to gradually assign pseudo bags, effectively enhancing the MIL model. • Extensive experiments have been conducted, which demonstrates that our method achieves a state-of-the-art level of performance and provides improved interpretation. 2 METHOD 2.1 PSEUDO BAG AUGMENTED MULTIPLE INSTANCE LEARNING FOR WSI CLASSIFICATION To combat the challenges in whole-slide image classification, we first retrospect the pseudo bag augmented multiple instance learning. 
We denote the training set of labeled WSIs as \( \mathcal{D} = \{X_i, Y_i\}_{i=1}^{|D|} \), where \( X_i = \{x_{i,j}\}_{j=1}^{N_i} \) represents the \( i \)th bag (slide) of \( N_i \) instances after feature extraction, and \( |D| \) is the number of labeled bags. Traditional MIL involves aggregating instances into a bag-level representation and mapping it to a bag-level prediction, as follows: \[ \hat{Y} = f \left( g \left( \{x_{i,j}\}_{j=1}^{N_i} \right) \right), \] where \( g(\cdot) \) is the aggregator and \( f(\cdot) \) is the fully connected (FC) layer. Considering randomly splitting a regular bag into \( M \) pseudo bags, each pseudo bag inherits the label from its parent bag, resulting in an expanded training set \( \mathcal{D}_{pse} = \{X_{pse,i}, Y_i\}_{i=1}^{M \times |D|} \). We can obtain \( \hat{Y}_{pse} \) via Eq[1] and the objective function for pseudo bag augmented MIL is then defined as, \[ J(\mathcal{D}_{pse}; \theta) = \sum_{i=1}^{M \times |D|} L \left( \hat{Y}_{pse,i}, Y_i \right), \] where \( \theta \) is the parameter of the MIL classifier, including the aggregator and the FC layer. The loss function \( L \) used in this work is the cross entropy loss. However, this augmentation can assign pseudo bags with incorrect labels. The objection function in Eq[2] can be further divided into two items: \[ J(\mathcal{D}_{pse}; \theta; \varepsilon) = \sum_{i=1}^{M \times |D| - \varepsilon} L \left( \hat{Y}_{pse,i}, Y_i \middle| Y_i = Y_{pse,i} \right) + \sum_{i=1}^{\varepsilon} L \left( \hat{Y}_{pse,i}, Y_i \middle| Y_i \neq Y_{pse,i} \right), \] where \( \varepsilon \) represents the number of pseudo bags with incorrectly assigned labels. Eq[3] reveals a trade-off in the training process, dependent on \( \varepsilon \): the first term increases the number of training bags, thereby bolstering the diversity of positive instances; while the second term introduces training noise, leading to an unstable training. A common practice is to randomly split pseudo bags, which fixes \( \varepsilon \) for optimization. Our target is to obtain proper \( \theta \) and \( \varepsilon \) to minimize the overall objection function, which can be decoupled into optimizations of \( \theta \) in Eq[2] and \( \varepsilon \) by approximation. However, we cannot directly improve \( \varepsilon \) since the true label of the pseudo bag is not available. Thus, we transfer this issue to the optimization of pseudo bag assignment. 2.2 SHAPLEY VALUE-BASED INSTANCE IMPORTANCE SCORE ESTIMATION To fully leverage the benefits of pseudo bag augmentation, we introduce the concept of instance importance scores (IIS) to estimate the contribution of each instance, guiding the process of splitting regular bags into pseudo bags to minimize \( \varepsilon \). In attention-based MIL models, using attention scores derived from pooling operations as IIS is a logical choice. As shown in Fig[1], our observation reveals that attention scores might not accurately reflect the ranking of importance, as attention-based pooling often prioritizes a small subset of instances. To address this limitation, we introduce the Shapley value as an alternative method for estimating IIS. In the context of the MIL framework, this approach necessitates evaluating the model across all feasible instance subsets from the full set of instances \( S_i \subseteq X_i \). 
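Before turning to the Shapley-value formulation given next, it may help to see Eqs. (1)–(3) operationally: a regular bag is split into M pseudo bags, each pseudo bag inherits the slide-level label, and the MIL classifier is trained on the enlarged set. The sketch below is a minimal, hypothetical PyTorch illustration; the attention-pooling aggregator, classifier, and helper names are ours and stand in for the paper's actual components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnPoolMIL(nn.Module):
    """Toy MIL classifier: attention pooling g(.) followed by an FC layer f(.)."""
    def __init__(self, feat_dim=1024, num_classes=2):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)           # instance attention scores
        self.fc = nn.Linear(feat_dim, num_classes)   # bag-level classifier f(.)

    def forward(self, instances):                    # instances: (N, feat_dim)
        a = torch.softmax(self.attn(instances), dim=0)   # (N, 1) attention weights
        bag_embedding = (a * instances).sum(dim=0)       # g(.): weighted sum over instances
        return self.fc(bag_embedding)                    # bag-level logits

def random_pseudo_bags(instances, num_pseudo_bags):
    """Randomly split one bag's instances into M pseudo bags (the Eq. (2) baseline)."""
    perm = torch.randperm(instances.size(0))
    return [instances[chunk] for chunk in perm.chunk(num_pseudo_bags)]

def pseudo_bag_loss(model, instances, bag_label, num_pseudo_bags=4):
    """Each pseudo bag inherits the parent label; sum cross-entropy over them."""
    loss = instances.new_zeros(())
    for pseudo in random_pseudo_bags(instances, num_pseudo_bags):
        logits = model(pseudo)
        loss = loss + F.cross_entropy(logits.unsqueeze(0), bag_label.view(1))
    return loss
```

A purely random split of this kind leaves the mislabeling risk ε in Eq. (3) fixed; the Shapley-value-based IIS introduced next is what allows the split to be guided rather than random.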
The Shapley value \( \phi \) for a particular instance \( x_{i,j} \) is calculated by considering the differences in model predictions with and without \( x_{i,j} \) for all feature subsets \( S_i \subseteq X_i \setminus \{x_{i,j}\} \): \[ \phi_{i,j} \triangleq \sum_{S_i \subseteq X_i \setminus \{x_{i,j}\}} \frac{|S_i|! (|X_i| - |S_i| - 1)!}{|X_i|!} \left[ f(g(S_i \cup \{x_{i,j}\})) - f(g(S_i)) \right]. \] The computation of Shapley value takes a comprehensive consideration on the contribution of each instance. While directly calculating the Shapley value for computational pathology, where one bag often contains thousands of instances, can be time-consuming. To expedite the computation, several methods have been proposed to approximate Shapley values via sampling [Strumbelj & Kononenko]. Figure 2: Overview of the proposed PMIL framework during the training process. (a) Initialize the feature encoder with pretrained parameters and set frozen, then assign $M$ pseudo bags based on the calculated instance importance score (IIS), and train the pseudo bag augmented MIL model with the regular bag label. (b) Progressively increase the number of pseudo bags and improve the pseudo bag initialization across various training iterations and rounds. (2010), weighted regression, a modified backpropagation step [Lundberg & Lee (2017)], and other approaches [Ancona et al. (2019); Chen et al. (2018)]. These methods aim to reduce the computational complexity by minimizing the number of subsets required for each instance. It’s worth noting that the crucial instances are typically the positive ones, and their order significantly impacts the accuracy of labels assigned to pseudo bags. To accelerate the computation, we focus on instances with high attention scores, which are more likely to significantly influence the final prediction, and leave the less significant instances to serve as the subset range for sampling. Thus, the Shapley value of instances with high attention scores $\phi_{i,j}, x_{i,j} \in S^h_i$ can be accelerated as, $$\text{IIS}(x_{i,j}) = \sum_{S_{i,j} \subseteq S^l_i} \frac{|S_{i,j}|!}{|S^l_i|!} \left[ f(g(S_{i,j} \cup \{x_{i,j}\})) - f(g(S_{i,j})) \right],$$ where $S^h_i$ is the instance subset with high attention scores, $S^l_i = X_i - S^h_i$ is the complementary set of $S^h_i$ in the case where $X_i$ is a universal set. The instance number of $S^h_i$ is set to $\mu M$, where $\mu$ is set to 10 in this work, and the sampling number $\tau$ for $S^l_i$ is set to 3 for further experiments. Assuming that the computational time of reasoning once in the MIL model per bag is a constant $\gamma$, we can quantify the computational complexity of different IIS calculation methods as follows: $$\Omega(\text{Shapley Value}) = \sum_{i=1}^{|D|} \sum_{j=0}^{N_i} C_{N_i}^j \cdot \gamma = \gamma \sum_{i=1}^{|D|} 2^{N_i},$$ $$\Omega(\text{Accelerated Shapley Value}) = \sum_{i=1}^{|D|} \sum_{j=0}^{\tau} \gamma = \gamma \tau \mu M,$$ where $C_{N_i}^j$ is combination number. Through simplification, we can obtain an acceleration ratio of $\frac{\sum_{i=1}^{|D|} 2^{N_i}}{\tau \mu M}$ on the Shapley value as our proposed IIS, ensuring a linear computational complexity while maintaining ranking accuracy. 2.3 Instance Importance Score-Based Progressive Pseudo Bag Augmentation Based on the calculated instance importance score, we can iteratively reorder instances within each bag \( X'_i = \{ x'_{i,j} | \text{IIS}(x'_{i,1}) \geq \text{IIS}(x'_{i,2}) \geq \cdots \geq \text{IIS}(x'_{i,N_i}) \} \), as illustrated in Fig. 
2. These reordered instances are interleaved into \( M \) pseudo bags, namely, each pseudo bag \( X'^{\text{pse}}_{i,k} = \{ x'_{i,j} | j \equiv k (\text{mod } M) \} \) contains instances alternating sampled from \( X'_i \). Thus, the optimization of \( \varepsilon \) can be approximated as: \[ \varepsilon^* = \arg \min_{X'} D_{KL} \left( P_{X'^{\text{pse}} \sim \Gamma(X')} [Y|X'^{\text{pse}}; \theta] \| P[Y|X; \theta] \right), \] where \( \Gamma(X') \) is the instance importance distribution of \( X' \). It is well-established that the optimization defined by Eq[2] and Eq[8] can be solved by using the EM algorithm. By optimizing \( \Gamma(X') \), each pseudo bag is more likely to contain at least one positive instance to minimize \( \varepsilon \), namely the risk of mislabeling. In this work, only bags within the training set are split into pseudo bags, and the training process for these pseudo bags remains identical to that of the regular bags. Directly splitting a regular bag into a large amount of pseudo bags can introduce training instability, particularly when the MIL model struggles to capture crucial instances. This instability is especially problematic when a regular bag contains only a few positive instances, such as in the case of micro metastasis. To address this issue, we progressively increase the number of pseudo bags during the early training process when the MIL model converges to a locally optimal solution, denoted by: \[ M_t = \min \{ M_{t-1} + \Delta M, M_{\text{max}} \}, \quad s.t. \{ g_{t-1}, f_{t-1} \} \rightarrow \{ g^*_t, f^*_t \}, \] where \( t \) is the convergence iteration, \( \Delta M \) is the pseudo bag number increment, and \( M_0 \) and \( M_{\text{max}} \) are the initial and maximum pseudo bag numbers, respectively. The initial assignment of instances to pseudo bags plays a crucial role in the subsequent training, especially when dealing with challenging datasets. To address this issue, we introduce additional EM training rounds to optimize the MIL model and the distribution of \( X' \). In each round \( \xi \), we progressively enhance the initial pseudo bag augmentation by calculating the initial instance importance scores at the first iteration, using the well-trained MIL model from the previous round \( \xi - 1 \). 3 Experiments 3.1 Datasets and Evaluation Metrics In our experiments, we report one-versus-rest area under curve (AUC), slide-level accuracy (ACC), and macro F1 score as the evaluation metrics. We utilize three public pathology WSI datasets to assess our methods. CAMELYON-16 is designed to detect lymph node metastasis in early-stage breast cancer. It comprises 399 WSIs, with 270 allocated for training and 129 for testing. The official training set follows a 5-fold cross-validation protocol to generate training and validation sets. We report the mean performance metrics on the official test set. BRACS [Brancati et al., 2022] is curated for breast cancer subtyping and contains 547 H&E-stained WSIs. The classification task involves benign tumors, atypical tumors (AT), and malignant tumors (MT). We adhere to the official dataset split, with 395 for training, 65 for validating, and 87 for testing. We conduct five separate experiments with different random seeds and report the mean performance metrics on the official test set. TCGA-LUNG comprises 1034 WSIs, encompassing 528 lung adenocarcinoma (LUAD) and 506 lung squamous cell carcinoma (LUSC) cases. We adopt a 5-fold cross-validation protocol for both training and testing. 
The mean performance metrics are reported on the test set. In our preprocessing step, we employ OTSU’s threshold method to localize tissue regions for patch generation. Non-overlapping patches of size 256×256 pixels are tiled at a 20× magnification for CAMELYON-16 and TCGA-LUNG, and a 5× magnification for BRACS, yielding an average of about 7156, 11951, and 714 patches per bag, respectively. Table 1: Results on CAMELYON-16, BRACS, and TCGA-LUNG test set. The encoder ResNet50 is pretrained on ImageNet. The subscripts are the standard variances. The best evaluation metrics are in bold. | Method | CAMELYON-16 | BRACS | TCGA-LUNG | |----------|-------------|-------|-----------| | | ACC | AUC | F1 | ACC | AUC | F1 | ACC | AUC | F1 | | MeanMIL | 70.9<sub>1.8</sub> | 58.7<sub>1.9</sub> | 62.3<sub>3.3</sub> | 52.4<sub>2.6</sub> | 69.2<sub>1.6</sub> | 40.6<sub>5</sub> | 82.0<sub>0.9</sub> | 88.9<sub>2.0</sub> | 82.0<sub>1.0</sub> | | MaxMIL | 83.7<sub>1.8</sub> | 86.7<sub>2.6</sub> | 83.5<sub>3.1</sub> | 55.9<sub>2.8</sub> | 75.9<sub>1.6</sub> | 50.3<sub>4.0</sub> | 88.7<sub>1.0</sub> | 94.4<sub>1.2</sub> | 88.7<sub>1.0</sub> | | ABMIL | 82.5<sub>1.9</sub> | 83.8<sub>2.1</sub> | 80.6<sub>1.7</sub> | 58.4<sub>0.9</sub> | 76.1<sub>0.6</sub> | 54.7<sub>2.3</sub> | 87.6<sub>0.7</sub> | 93.1<sub>1.8</sub> | 87.6<sub>0.7</sub> | | DSMIL | 77.2<sub>1.7</sub> | 77.2<sub>2.1</sub> | 74.4<sub>2.6</sub> | 53.1<sub>2.2</sub> | 70.8<sub>3.3</sub> | 46.1<sub>3.7</sub> | 86.2<sub>1.4</sub> | 93.6<sub>1.0</sub> | 86.2<sub>1.4</sub> | | CLAM | 82.5<sub>3.2</sub> | 81.6<sub>2.4</sub> | 80.1<sub>3.5</sub> | 53.8<sub>3.5</sub> | 73.3<sub>1.7</sub> | 51.5<sub>3.3</sub> | 88.2<sub>1.4</sub> | 94.2<sub>1.2</sub> | 88.2<sub>1.4</sub> | | TransMIL | 85.0<sub>1.4</sub> | 89.1<sub>0.7</sub> | 83.3<sub>1.3</sub> | 57.0<sub>2.4</sub> | 75.5<sub>1.0</sub> | 49.2<sub>5.2</sub> | 87.9<sub>0.9</sub> | 94.8<sub>0.8</sub> | 87.9<sub>0.9</sub> | | DTFD | 85.3<sub>1.6</sub> | 85.4<sub>3.2</sub> | 84.9<sub>1.7</sub> | 57.2<sub>2.7</sub> | 76.6<sub>2.0</sub> | 56.2<sub>3.8</sub> | 88.8<sub>0.6</sub> | 94.6<sub>0.8</sub> | 88.8<sub>0.6</sub> | | PMIL | 87.4<sub>1.1</sub> | 90.1<sub>1.6</sub> | 86.3<sub>1.1</sub> | 68.3<sub>1.7</sub> | 84.0<sub>0.3</sub> | 66.5<sub>2.4</sub> | 91.3<sub>1.4</sub> | 96.5<sub>0.9</sub> | 91.3<sub>1.4</sub> | Figure 3: Visualization of pseudo bag assignment using PMIL. The red annotations represent cancer regions. The five-pointed star pointed patches obtained the most attention in each pseudo bag. ### 3.2 Implementation Details All experiments were conducted on a workstation equipped with NVIDIA RTX 3090 GPUs. For model training, we used ResNet50 as the encoder [He et al., 2016], and ABMIL as the backbone MIL model. We employed the Adam optimizer with a weight decay of 1e-5, and implemented the early stopping strategy with a patience setting of 20 epochs. The initial learning rate was set to 3e-4 and reduced to 1e-4 for finetuning. We set the maximum number of pseudo bags to 8 for the CAMELYON-16, 10 for the BRACS, and 14 for TCGA-LUNG. Furthermore, the number of pseudo bags gradually increased by 4 with the first training round during the training process. ### 3.3 Performance Comparison We present the experimental results of our proposed methods on CAMELYON-16, BRACS, and TCGA-LUNG datasets. 
These results are compared to those obtained by the following MIL methods: Mean-Pooling, Max-Pooling, the classic AB-MIL [Ise et al., 2018], DSMIL [Li et al., 2021], CLAM-SB [Lu et al., 2021], TransMIL [Shao et al., 2021b], and DTFD [Zhang et al., 2022]. As illustrated in Table 1, our proposed PMIL method stands out with impressive AUC scores of 90.1% for CAMELYON-16, 84.0% for BRACS, and 95.6% for TCGA-LUNG, consistently surpassing all other methods in the comparison. Especially in the challenging BRACS dataset, our method outperforms other methods significantly. By progressively generating reasonable pseudo bags, we enhance training diversity and reduce the number of instances in each bag, ultimately facilitating the model’s ability to learn positive instances effectively. 3.4 Visualization and Interpretation To illustrate the accuracy of our pseudo bag augmentation, as depicted in Fig. 3, all patches have been segmented into three pseudo bags based on the calculated Shapley values. Notably, the three critical patches within the micro metastasis region are evenly distributed among different pseudo bags, which demonstrates our approach enhances the diversity of positive instances. To emphasize the limitations of the attention score, we conducted a comparative analysis, as illustrated in Fig. 4 (a) and (b). All models perform well in the macro metastasis case. However, in the case of micro metastasis, the calculated attention score indicates that ABMIL and our model focus on some noncancerous areas. However, when employing the Shapley value, our proposed model accurately excludes the negative regions and precisely identifies the cancer regions. Unlike the attention value, the Shapley value computation utilizes the entire MIL classifier, inherently containing category information. As illustrated in Fig. 4 (c) and (d), both attention score and Shapley value (MT) predominantly concentrate on malignant tumor regions, aligning with the slide-level labels. However, when setting atypical tumors as the category for Shapley value, the heatmaps primarily highlight the atypical tumor regions. Although the heatmaps may not achieve pinpoint accuracy on the BRACS dataset due to limited performance, this observation underscores the robust interpretability of the Shapley value in multi-classification tasks. These visualization results reveal that the attention score often captures a noisy ranking of instance importance, whereas the Shapley value effectively addresses this issue, resulting in a more accurate and reliable interpretation. Furthermore, while the attention score only provides insights into the heatmaps of the target category, the Shapley value has the capability to highlight additional category information. 3.5 Ablation Study 3.5.1 IIS Measure Metrics We employed both the attention score and the Shapley value to measure the IIS for further training, and we used random split as the baseline pseudo bag augmentation strategy. The results in Tab. 2 indicate that the Shapley value-based method outperformed others on the CAMELYON-16 and TCGA-LUNG datasets. In contrast, the attention score-based method performed better on the BRACS dataset. This difference in performance can be attributed to the fact that attention scores can be directly obtained through pooling operations, while the Shapley value computation involves an additional fully connected (FC) layer. 
The Shapley value may exhibit reduced robustness when the accuracy of the MIL classifier is not sufficiently high; however, it tends to be a better choice when dealing with datasets that are less challenging to learn. 3.5.2 Maximum Pseudo Bag Number The choice of the maximum pseudo bag number varies depending on the dataset due to differences in magnification levels of the tiled patches and the size of tumor regions. As illustrated in Fig [5], our model achieves the best performance on the CAMELYON-16 dataset when the maximum pseudo bag number is set to 8 or 10. However, exceeding this range leads to a sharp decline in performance. In contrast, for the BRACS and TCGA-LUNG datasets, our model performs better as the maximum pseudo bag number increases and achieves the best performance when the number is set to 10 and 14, respectively. This is because the CAMELYON-16 dataset contains abundant micro metastasis slides with only a few positive instances, even when tiled at a $20\times$ resolution. In this case, the augmentation faces a trade-off between adding more noise to the training set or enhancing training diversity. Meanwhile, the cancer (subtype) regions in the BRACS and TCGA-LUNG datasets are much larger and can be effectively divided into numerous pseudo bags. 3.6 Progressive Pseudo Bag Augmentation Strategies To demonstrate the effectiveness of our progressive augmentation, we conducted experiments with different pseudo bag augmentation strategies. In these experiments, we utilized a pseudo bag num- Table 2: Evaluation of pseudo bag assignment using different IIS measure metrics on CAMELYON-16, BRACS and TCGA-LUNG test sets. The best evaluation metrics are in bold. | Metrics | CAMELYON-16 | BRACS | TCGA-LUNG | |---------------|-------------|-------|-----------| | | ACC | AUC | F1 | ACC | AUC | F1 | ACC | AUC | F1 | | Random | 87.13 | 86.19 | 85.49 | 61.78 | 80.39 | 58.46 | 89.65 | 95.77 | 89.64 | | Attention Score| 87.44 | 89.76 | 86.28 | 68.28 | 83.98 | 66.47 | 90.33 | 95.57 | 90.31 | | Shapley Value | 87.44 | 90.10 | 86.30 | 66.09 | 81.27 | 64.67 | 91.29 | 96.45 | 91.29 | Table 3: Evaluation of pseudo bag assignment using different progressive strategies on CAMELYON-16, BRACS and TCGA-LUNG test sets. The best evaluation metrics are in bold. | Pseudo Bag Strategy | CAMELYON-16 | BRACS | TCGA-LUNG | |---------------------|-------------|-------|-----------| | Number | ACC | AUC | F1 | ACC | AUC | F1 | ACC | AUC | F1 | | Constant | 80.8 | 77.5 | 76.9 | 62.6 | 82.8 | 60.9 | 90.6 | 95.9 | 90.6 | | Progressive | 84.9 | 85.5 | 82.4 | 64.7 | 82.2 | 62.1 | 89.7 | 95.6 | 89.7 | | Constant | 85.1 | 86.4 | 83.3 | 70.7 | 84.5 | 68.7 | 90.8 | 96.8 | 90.8 | | Progressive | 88.2 | 88.1 | 87.0 | 71.3 | 84.9 | 69.9 | 91.3 | 96.1 | 91.3 | ber increment of 4 and set the training rounds to 5 for CAMELYON-16 and TCGA-LUNG, and 10 for BRACS, as this dataset is more challenging to learn. As depicted in Tab. 3, the model equipped with the full progressive strategy achieves the best performance. Progressively increasing the number of pseudo bags significantly impacts CAMELYON-16, as it requires fine adjustment to avoid introducing excessive noise. Meanwhile, progressively inheriting the initial weight from the previous round significantly influences the performance on BRACS, where subtypes are inherently difficult to discern. A good pseudo bag initialization facilitates the model’s ability to locate positive instances, resulting in improved performance. 
In contrast, TCGA-LUNG is less difficult to learn, leading to an insignificant increase in performance. From these ablation studies, we summarize several key insights as follows: **Choice of IIS Metrics.** The variation of IIS metrics depends on the specific dataset characteristics. The attention score-based IIS is a reliable and commonly used choice. However, the Shapley value-based IIS performs better on relatively more accessible datasets but may exhibit reduced performance on more challenging datasets, we believe it’s primarily due to its reliance on accurate classification results. **Sensitivity to Maximum Pseudo Bag Number.** The $M_{max}$ is highly sensitive to different datasets according to the number of positive instances. In case of large tumor regions within bags, it is advisable to use a larger $M_{max}$. Conversely, for datasets with only a few positive instances per bag, a more conservative approach is to set a smaller $M_{max}$. **Progressive Strategies.** Progressive increase in the number of pseudo bags is particularly effective for challenging datasets or those with only a limited number of positive instances in each bag. While this approach may be less appealing for datasets with substantial tumor regions. In contrast, progressive initialization represents a significant improvement across various datasets, especially on more challenging ones. ### 4 Conclusion In this paper, we initially reveal the distribution and ranking problems associated with the attention score, which can result in suboptimal training and limited interpretability. To tackle these issues, we introduce the Shapley value-based IIS to measure the contribution of each instance, which guides a more rational assignment of pseudo bags. Furthermore, we propose a novel framework called PMIL, which leverages IIS to progressively assign pseudo bags. Through comprehensive experiments, our approach outperforms state-of-the-art methods on three public datasets and the Shapley value-based IIS provides enhanced interpretability for pathological whole slide images. REFERENCES Marco Ancona, Cengiz Oztireli, and Markus Gross. Explaining deep neural networks with a polynomial time algorithm for shapley value approximation. In *International Conference on Machine Learning*, pp. 272–281. PMLR, 2019. Nadia Brancati, Anna Maria Anniciello, Pushpak Pati, Daniel Riccio, Giosuè Scognamiglio, Guillaume Jaume, Giuseppe De Pietro, Maurizio Di Bonito, Antonio Foncubierta, Gerardo Botti, et al. Bracs: A dataset for breast carcinoma subtyping in h&e histology images. *Database*, 2022: baac093, 2022. Gabriele Campanella, Matthew G Hanna, Luke Geneslaw, Allen Miraflor, Vitor Werneck Krauss Silva, Klaus J Busam, Edi Brogi, Victor E Reuter, David S Klimstra, and Thomas J Fuchs. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. *Nature medicine*, 25(8):1301–1309, 2019. Jianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. L-shapley and c-shapley: Efficient model interpretation for structured data. *arXiv preprint arXiv:1808.02610*, 2018. Richard J Chen, Chengkuan Chen, Yicong Li, Tiffany Y Chen, Andrew D Trister, Rahul G Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 16144–16155, 2022. Ji Feng and Zhi-Hua Zhou. Deep miml network. In *Proceedings of the AAAI conference on artificial intelligence*, volume 31, 2017. 
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Maximilian Ilse, Jakub Tomczak, and Max Welling. Attention-based deep multiple instance learning. In *International conference on machine learning*, pp. 2127–2136. PMLR, 2018. Guillaume Jaume, Pushpak Pati, Behzad Bozorgtabar, Antonio Foncubierta, Anna Maria Anniciello, Florinda Feroce, Tilman Rau, Jean-Philippe Thiran, Maria Gabrani, and Orcun Goksel. Quantifying explainers of graph neural networks in computational pathology. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 8106–8116, 2021. Bin Li, Yin Li, and Kevin W Eliceiri. Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 14318–14328, 2021. Honglin Li, Chenglu Zhu, Yunlong Zhang, Yuxuan Sun, Zhongyi Shui, Wenwei Kuang, Sunyi Zheng, and Lin Yang. Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7454–7463, 2023. Ming Y Lu, Drew FK Williamson, Tiffany Y Chen, Richard J Chen, Matteo Barbieri, and Faisal Mahmood. Data-efficient and weakly supervised computational pathology on whole-slide images. *Nature biomedical engineering*, 5(6):555–570, 2021. Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. *Advances in neural information processing systems*, 30, 2017. Pushpak Pati, Guillaume Jaume, Antonio Foncubierta-Rodriguez, Florinda Feroce, Anna Maria Anniciello, Giosue Scognamiglio, Nadia Brancati, Maryse Fiche, Estelle Dubruc, Daniel Riccio, et al. Hierarchical graph representations in digital pathology. *Medical image analysis*, 75:102264, 2022. Pedro O Pinheiro and Ronan Collobert. From image-level to pixel-level labeling with convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 1713–1721, 2015.
s9bCeJGUJi
D3GAT is proposed as an adaptation of DDGAT to overcome the issue that DDGAT "jointly calculates the invariant pattern and the variant pattern of the node feature, which limits pattern learning, because they are decided jointly and cannot adjust with regard to the other." This claim seems unjustified. After reading the paper, the reviewer is not convinced that jointly calculating the invariant and variant patterns would harm the model's generalization ability. Some theoretical justification or an intuitive demonstration would be helpful.
CURRICULUM DYNAMIC GRAPH INVARIANT LEARNING UNDER DISTRIBUTION SHIFT Anonymous authors Paper under double-blind review ABSTRACT Dynamic graph neural networks have attracted intensive research interests recently but generally suffer from handling distribution shifts that widely exist in dynamic graphs. Although the existing works attempt to disentangle the invariant and variant patterns, they ignore the training status of the graph neural network and the importance of training samples at different times, which are critical to model invariant patterns accurately in dynamic graphs under distribution shifts. In this paper, we study distribution shifts in dynamic graphs with curriculum learning for the first time, which remains unexplored and faces the following challenges: (i) how to design a tailored training status evaluation strategy; and (ii) how to design a tailored sample importance reweighting strategy, so as to handle distribution shifts in dynamic graphs. To address these challenges, we propose a Curriculum Dynamic Graph Invariant Learning (CDGIL) model, which can handle distribution shifts in dynamic graphs by capturing and utilizing invariant and variant patterns guided by the proposed curriculum learning strategy. Specifically, we first propose a dual disentangled dynamic attention network to capture the invariant and variant patterns, respectively. Next, we propose a self-paced intervention mechanism based on training status to create adversarial samples by reassembling variant patterns across neighborhoods and time stamps to remove the spurious impacts of variant patterns. Finally, we propose a sample importance reweighting strategy to distinguish invariant and variant patterns better via focusing on the key training samples. Extensive experiments on both synthetic and real-world dynamic graph datasets demonstrate the superiority of our proposed method over state-of-the-art baselines under distribution shifts. 1 INTRODUCTION Dynamic graphs are ubiquitous in the real world, where some nodes and edges evolve along with time. Thanks to the power to capture both structural and temporal patterns simultaneously, dynamic graph neural networks (DGNNs) have achieved impressive successes in various real-world applications, including financial networks (Nascimento et al., 2021), social networks (Rossi et al., 2020b), traffic networks (Guo et al., 2021), etc. However, existing DGNNs have certain limitations when it comes to addressing distribution shift, which is common in practical scenarios (Holt, 2004; Zhang & Qi, 2005; Yin et al., 2021; Gao et al., 2021; Russell et al., 2019; Berk, 1983). Existing research on out-of-distribution DGNNs primarily focuses on invariant learning, assuming the existence of invariant patterns across distributions (Zhang et al., 2022; Wu et al.). These studies incorporate the use of an additional loss component called invariant risk minimization (IRM) to capture invariances (Arjovsky et al., 2019). However, the fixed proportion of ERM and IRM in the training loss poses challenges for dynamically adjusting the training strategy in existing works. Moreover, the importance of diversity among samples has been overlooked when attempting to disentangle the invariant patterns, i.e., emphasizing samples that exhibit spurious correlations makes it difficult to capture invariance effectively. Therefore, adjusting the weight of different samples based on their properties and the training stage is crucial for capturing the invariant pattern successfully. 
Curriculum learning (Bengio et al., 2009; Wang et al., 2021b) is a powerful approach that enables the dynamic adjustment of the training strategy, such as data reweighting and objection adjustment based on different training statuses. In this paper, we explore the application of curriculum learning... to address distribution shifts in dynamic graphs, which is a novel and unexplored area, and it poses the following challenges: (i) devising a customized evaluation strategy for training statuses, and (ii) designing a tailored strategy for reweighting the importance of samples, specifically to handle distribution shifts in dynamic graphs, e.g., the appearance or disappearance of graph nodes and edges. To address these challenges, this paper presents a Curriculum Dynamic Graph Invariant Learning (CDGIL) model. The CDGIL model is introduced as a solution for handling distribution shifts in dynamic graphs by effectively capturing and utilizing both invariant and variant patterns. In particular, we first propose a dual disentangled dynamic attention network to capture the invariant and variant patterns, respectively. Furthermore, we propose a novel self-adaptive intervention method based on training status to create adversarial samples by reassembling variant patterns across neighborhoods and time stamps to remove the spurious impacts of variant patterns. Finally, we propose a novel curriculum-based sample importance reweighting method, which evaluates the importance of various data samples, and dynamically adjusts their weight in the training process. In summary, we made the following contributions: - We investigate curriculum dynamic graph invariant learning for the first time, and propose a exquisitely designed curriculum learning method for dynamic graph generalization, which can automatically adjust both the training procedure and data weight according to data sample importance and different training stages. - We design a self-adaptive intervention method to combine the invariant and variant parts, aiming to strengthen the generalization ability of our model and enhance our performance while facing data from unknown environments. - We propose a novel curriculum-based sample importance reweighting method, by increasing the weight of data samples that are important to the current model and decreasing the weight of data samples that are not important to the current model, our models can get more suitable data inputs and get better performance optimization. - Extensive experiments show that our proposed CDGIL method has the ability to outperform all of the state-of-the-art baselines on all of the dynamic graph datasets with distribution shifts. 2 RELATED WORKS 2.1 Dynamic Graph Neural Networks Dynamic Graph Neural Networks (DGNN) (Skarding et al., 2021; Zhu et al., 2022) have attracted intensive research attention as a powerful approach to handling the complex structural and temporal information in dynamic graphs. It has been applied to a wide range of real-world sceneries, such as action recognition (Yan et al., 2018), epidemic forecasting (Panagopoulos et al., 2020; Rozemberczki et al., 2021), social networks (Rossi et al., 2020b; Cai et al., 2022; Goyal et al., 2020), recommendation (Song et al., 2019), traffic prediction (Guo et al., 2021; Diao et al., 2019), and anomaly detection (Liu et al., 2021; Weber et al., 2019; Wang et al., 2021a). There are two mainstream DGNN methods, whose main difference between them is the order of dealing with time series and structural information. 
One of them utilizes a graph neural network (GNN) to aggregate structural information for the graph at each time at first and then adopts sequence models such as recurrent neural networks (RNN) (Yang et al., 2021; Sun et al., 2021; Hajiramezanah et al., 2019; Seo et al., 2018b; Pareja et al., 2020) or self-attention modules (Sankar et al., 2020) to process the temporal information. While the other method proposes to transform temporal links into a time-dependent function, using time-encoding techniques, then use GNN (Wang et al., 2021c; Cong et al., 2021; Xu et al., 2020; Rossi et al., 2020a) to process the graph with the time series information and get structural information. DIDA (Zhang et al., 2022) is the only work for dynamic graph distribution shift, it proposes a disentangled spatio-temporal attention network and a spatio-temporal intervention mechanism in order to handle spatio-temporal distribution shifts in dynamic graphs. But its training strategy is not related to the training status or sample importance. 2.2 Out-of-Distribution Generalization for Graph Traditional machine learning method has the assumption that the training and test sets are guaranteed to be independently and identically distributed (i.i.d.), but this assumption does not hold in many real-world scenarios (Shen et al., 2021). Ignoring the fact that this assumption does not always hold may lead to the degradation of model performance (Fang et al., 2020). Various work has been done on this important question, such as IRM (Arjovsky et al., 2019), DRO (Rahimian & Mehrotra, 2019), REx (Krueger et al., 2021), and so on. In particular, designing models able to generalize in out-of-distribution (OOD) scenarios has attracted remarkable interest in graph representation learning. GIL (Li et al., 2022b) is proposed to capture the invariant relationships between predictive graph structural information and labels in a mixture of latent environments. OOD-GNN (Li et al., 2022a) proposes a nonlinear graph representation decorrelation method and a scalable global-local weight estimator to learn out-of-distribution (OOD) generalized graph representation under complex distribution shifts. However, most of the previous generalization work is only conducted on the static graph and failed to capture the feature of dynamic graphs such as timestamps. 2.3 Curriculum Learning Curriculum learning (CL) (Bengio et al., 2009; Wang et al., 2021b; Soviany et al., 2022) is a popular method to train machine learning models in a meaningful order, such as from easier data to harder data, which is able to improve the performance of machine learning models, as well as bring faster convergence. Thanks to its powerful ability, curriculum learning have been widely applied to numerous branches of machine learning, including computer vision (Huang et al., 2020b), natural language processing (Cirik et al., 2016), speech (Braun et al., 2017), medical (Lotter et al., 2017), robotics (Florensa et al., 2017) and etc. Curriculum learning mainly includes two parts, difficulty measurer and training scheduler. The difficulty measurer aims to evaluate the difficulty of different input data, and the training scheduler can adjust the sequence according to the difficulty. Typically, there are two kinds of curriculum learning. One is predefined curriculum learning, or vanilla curriculum learning, where both the difficulty measurer and training scheduler is designed by human prior and expert domain knowledge. 
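To make the formalization above concrete, a dynamic graph can be stored as a sequence of per-timestamp snapshots, and the prediction objective trains one model over all history windows. The following is a minimal, hypothetical sketch (the dataclass and training loop are ours, not the paper's implementation); `f_theta` stands for any DGNN that consumes the snapshots up to time t.

```python
from dataclasses import dataclass
from typing import List
import torch

@dataclass
class Snapshot:
    edge_index: torch.Tensor   # (2, |E_t|) edges present at time t
    node_feat: torch.Tensor    # (|V|, d) node features at time t

def train_step(f_theta, optimizer, snapshots: List[Snapshot],
               labels: List[torch.Tensor], loss_fn):
    """One pass of the minimization objective: predict y_{t+1} from G_{1:t} for every t.

    Assumes at least two snapshots so that at least one prediction step exists.
    """
    optimizer.zero_grad()
    total = 0.0
    for t in range(1, len(snapshots)):      # use G_1..G_t to predict the labels at t+1
        pred = f_theta(snapshots[:t])       # the model consumes the history window
        total = total + loss_fn(pred, labels[t])
    total.backward()
    optimizer.step()
    return float(total)
```

Under the covariate-shift assumption stated above, the same loss is minimized on the training snapshots even though p(G_{1:t}) differs between training and testing, which is exactly what the later invariant/variant disentanglement is designed to cope with.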
The other is automatic curriculum learning, where one or both difficulty measurer and training scheduler is learned from data-driven algorithms, which can change with different training procedures, rather than a prior decision. And there are many kinds of automatic curriculum learning, such as self-paced learning (Kumar et al., 2010), transfer learning (Weinshall et al., 2018), reinforcement learning (Florensa et al., 2017) and teacher-student method (Kim & Choi, 2018). However, the current curriculum learning methods failed to capture the dynamic feature, and thus can’t be applied to dynamic graph out-of-distribution generalization. 3 Preliminary In this section, we introduce the notations of dynamic graphs and their spatio-temporal distribution shift. 3.1 Dynamic Graph Typically, a graph consists of nodes and edges: \( G = (V, E) \), where \( V \) is the set of vertices and \( E \) is the set of edges. And in a dynamic graph, some graph nodes or graph edges will appear or disappear with the passing of time. So a dynamic graph can be formalized into the following form: \( G = \{G_t : t = 1, 2, \cdots, T\} \), where \( G_t = (V_t, E_t) \), \( t \) is the different time stamp, and \( T \) is the total number of time stamps. \( V = \bigcup_{t=1}^{T} V_t \) is the node set of dynamic graph, and \( E = \bigcup_{t=1}^{T} E_t \) is the edge set of dynamic graph. The prediction task of the dynamic graph is predicting future labels using history graphs: \( p(Y_{t+1}|G_1, G_2, \cdots, G_t) = p(Y_{t+1}|G_{1:t}) \). In this paper, we mainly study node-level tasks, so \( Y_{t+1} \) typically represents the node property or edge status. And the prediction task can be formulated as a minimization task: \[ \min_{\theta} E_{y_{t+1} \sim p_{\text{train}}(y_{t+1})} L(f_\theta(G_{1:t}), y_{t+1}), \] where \( y_{t+1} \) represents the instance of label, and \( y_{t+1} \) represents the abstract of label. Figure 1: The framework of our proposed method CDGIL. Firstly, we adopt the dual disentangled dynamic attention network to capture the invariant patterns and variant patterns separately. Secondly, we conduct the self-paced intervention on dynamic graphs, in order to further enhance the generalization ability of the invariant encoder using variant patterns. Thirdly, we apply the dynamic curriculum method for importance reweighting, calculating the weights of data by evaluating its time as well as importance. Finally, we calculate the training loss, updating the invariant encoder and variant encoder, separately. 3.2 DISTRIBUTION SHIFT However, learning a model for training distribution may suffer from a distribution shift between training distribution and testing distribution, which is still a key challenge. In our paper, we adopt the assumption that the relationship between dynamic graph data and the label remains the same in training distribution and testing distribution: \( p_{\text{train}}(Y_{t+1}|G_{1:t}) = p_{\text{test}}(Y_{t+1}|G_{1:t}) \) following (Wang et al., 2021c; Qiu et al., 2020; Huang et al., 2020a; Zhou et al., 2018; Trivedi et al., 2019). However, the distribution of the dynamic graph is different between training distribution and testing distribution: \( p_{\text{train}}(G_{1:t}) \neq p_{\text{test}}(G_{1:t}) \), i.e. our model is suffering covariate shift problem. 4 METHOD To tackle the problem of spatio-temporal distribution shift in dynamic graphs, we propose our novel curriculum dynamic graph invariant learning in three parts. 
Firstly, in Section 4.1, we introduce our base model: the dual disentangled dynamic attention network. Secondly, in Section 4.2, we introduce our proposed self-paced curriculum intervention method. Finally, in Section 4.3, we introduce our dynamic curriculum importance reweighting method.

4.1 DUAL DISENTANGLED DYNAMIC GRAPH ATTENTION NETWORKS

In this subsection, we introduce our base model: dual disentangled dynamic graph attention networks (D³GAT). In previous work (Zhang et al., 2022), disentangled dynamic graph attention networks (DDGAT) compute the invariant patterns and variant patterns at the same time, limiting the ability of the variant patterns to adjust with respect to the invariant patterns. We first review DDGAT. DDGAT applies the self-attention mechanism to integrate the node information of a node's neighborhood. The neighborhood of node \( v \) at timestamp \( t \) is defined as \( N_t(v) = \{ u : (v, u) \in E_t \} \), and the history neighborhood of node \( v \) at timestamp \( t \) is defined as \( \tilde{N}_t(v) = \bigcup_{\tau=1}^{t} N_\tau(v) \). To fully explore the neighborhood relationship and bring together all the information, the spatio-temporal graph attention mechanism aggregates all of the historical neighbors of node \( v \). To express edges occurring at different times, DDGAT also adopts \( \text{TE}(t) \) as the time encoding of timestamp \( t \). The self-attention is thus carried out between the concatenation of the node embedding and time encoding of node \( v \) and those of all of its historical neighbors. The query, key, and value are given below, where \( z^t_v \) denotes the embedding of node \( v \) at time \( t \):
\[ q_v^t = W_q(\text{concat}(z_v^t, \text{TE}(t))), \quad k_u^{t'} = W_k(\text{concat}(z_u^{t'}, \text{TE}(t'))), \quad v_u^{t'} = W_v(\text{concat}(z_u^{t'}, \text{TE}(t'))). \]
The query, key, and value are then used to compute the invariant and variant structural pattern masks. To further capture invariant patterns, a learnable mask \( m_f = \text{Softmax}(w_f) \) is introduced to mask out the variant features. The invariant and variant patterns are then computed as:
\[ z_I^t = \text{Softmax}\left(\frac{q \cdot k^T}{\sqrt{d}}\right)v \cdot m_f, \]
\[ z_V^t = \text{Softmax}\left(-\frac{q \cdot k^T}{\sqrt{d}}\right)v. \]
After calculating the invariant and variant patterns, they are summed to form the node feature input of the next layer:
\[ z_v^l = z_I^t(v) + z_V^t(v), \]
where the output vector \( z_v^l \) is passed to the next layer, layer \( l + 1 \), while the input vectors \( z_I^t(v) \) and \( z_V^t(v) \) come from the current layer \( l \). With this aggregation and disentanglement of neighborhood information into invariant and variant patterns, the disentangled dynamic graph attention network with \( L \) layers can represent the invariant and variant patterns within the \( L \)-hop dynamic history neighborhood. To summarize, DDGAT takes the whole dynamic graph as input and outputs the invariant and variant patterns of each node feature. However, as mentioned above, it calculates the invariant and variant patterns jointly, which limits pattern learning because the two patterns are decided together and cannot adjust with regard to each other.
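To make the computation above concrete, the following is a minimal PyTorch sketch of a single disentangled attention layer. The class name, tensor shapes, and the way node embeddings are concatenated with time encodings are our own illustrative assumptions rather than the authors' released implementation; the sketch only mirrors the query/key/value construction, the learnable feature mask \( m_f = \text{Softmax}(w_f) \), and the opposite-sign softmax that splits the aggregation into invariant and variant patterns.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledAttentionLayer(nn.Module):
    """Minimal sketch of one DDGAT layer: attends over a node's historical
    neighbors and splits the aggregated message into invariant / variant patterns."""
    def __init__(self, node_dim: int, time_dim: int, hidden_dim: int):
        super().__init__()
        in_dim = node_dim + time_dim              # concat(z_v^t, TE(t))
        self.W_q = nn.Linear(in_dim, hidden_dim)
        self.W_k = nn.Linear(in_dim, hidden_dim)
        self.W_v = nn.Linear(in_dim, hidden_dim)
        self.w_f = nn.Parameter(torch.zeros(hidden_dim))   # learnable mask logits w_f
        self.scale = hidden_dim ** 0.5

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor):
        # center: (1, node_dim + time_dim); neighbors: (N, node_dim + time_dim)
        q, k, v = self.W_q(center), self.W_k(neighbors), self.W_v(neighbors)
        logits = q @ k.t() / self.scale                    # (1, N) attention scores
        m_f = F.softmax(self.w_f, dim=0)                   # feature mask m_f
        z_inv = F.softmax(logits, dim=-1) @ v * m_f        # invariant pattern z_I^t
        z_var = F.softmax(-logits, dim=-1) @ v             # variant pattern z_V^t
        return z_inv, z_var, z_inv + z_var                 # the sum feeds layer l+1

# toy usage: one node with 5 historical neighbors
layer = DisentangledAttentionLayer(node_dim=16, time_dim=8, hidden_dim=32)
z_I, z_V, z_next = layer(torch.randn(1, 24), torch.randn(5, 24))
print(z_I.shape, z_V.shape, z_next.shape)
```

In the dual variant described next, two such encoders are instantiated so that the invariant and variant patterns can be optimized separately.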
In our method, we adopt a dual DDGAT, utilizing two modules to recognize the invariant pattern and the variant pattern separately. We call the two modules the invariant encoder and the variant encoder for short. The invariant and variant encoders have different learning objectives: the invariant encoder aims to precisely discover the invariant patterns of graphs, while the variant encoder captures the variant patterns of graphs in order to improve the generalization ability of the invariant encoder. The patterns captured by the invariant encoder and the variant encoder are denoted as \( z_I \) and \( z_V \), respectively.

### 4.2 Self-paced Curriculum Intervention

The objective of the invariant encoder is to capture the invariant patterns precisely, and the variant patterns are a good source for making the invariant encoder generalize better. The adversarial variant patterns are reassembled with the invariant patterns to conduct intervention, and the intensity of the intervention is decided by the training stage. The reassembled result is denoted as \( z_r \) and is controlled by a reassembling parameter \( \lambda_r \): the larger \( \lambda_r \) is, the greater the intensity of the intervention:
\[ z_r = \text{Assemble}(z_I, z_V, \lambda_r), \]
where the assemble function combines the intervention result of the variant part with the invariant part. Following (Zhang et al., 2022), we adopt an approximate intervention method, randomly replacing a node's variant pattern with another node's variant pattern:
\[ z_I^{t_i}(u), z_V^{t_i}(u) \leftarrow z_I^{t_i}(u), z_V^{t_i}(v). \]
The parameter \( \lambda_r \) controls the intervention intensity. In the early training stage, the invariant and variant encoders are not yet well trained, so \( \lambda_r \) should be relatively small. In the late training stage, the invariant and variant encoders are well trained on the training distribution and should generalize to the testing distribution or other unknown distributions, so \( \lambda_r \) should be relatively large. In our proposed self-paced curriculum intervention method, we adjust \( \lambda_r \) according to the training stage and evaluate the training stage through the training loss \( \ell \). We set a threshold for the minimum loss \( \ell_{\text{min}} \) for each task and calculate \( \lambda_r \) for each epoch:
\[ \lambda_r = \lambda_0 \ast \min(\ell, \ell_{\text{min}})^{-1}. \]
To summarize, the loss of the invariant encoder adds the training loss $\ell_{\text{trainI}}$ and the loss computed on representations reassembled with adversarial variant patterns, $\ell_{\text{assemble}}$:
$$\ell_I = \ell_{\text{trainI}} + \ell_{\text{assemble}} \ast \lambda_{\text{assemble}},$$
where $\lambda_{\text{assemble}}$ is the weight assigned to $\ell_{\text{assemble}}$, since it is less important than the training loss. The objective of the variant encoder is to capture the variant patterns in order to enhance the generalization ability of the invariant encoder. We maximize the difference between the variant patterns reassembled with the invariant patterns and the invariant patterns themselves, further enhancing the variation between variant and invariant patterns and forming adversarial variant patterns. To capture variant patterns that are adversarial to the invariant patterns, we evaluate the difference between the reassembled patterns and the invariant patterns.
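Before turning to the concrete difference measure, here is a small sketch of the self-paced schedule and the approximate intervention just described. Only the schedule \( \lambda_r = \lambda_0 \ast \min(\ell, \ell_{\text{min}})^{-1} \) and the idea of swapping a node's variant pattern with another node's come from the text; the function names and the probabilistic way the swap is applied are assumptions made for illustration.

```python
import torch

def intervention_intensity(loss: float, lambda_0: float, loss_min: float) -> float:
    """Self-paced schedule: lambda_r = lambda_0 * min(loss, loss_min)^(-1).
    A large loss (early training) yields a small lambda_r; as the loss shrinks,
    lambda_r grows until it saturates at lambda_0 / loss_min."""
    return lambda_0 / min(loss, loss_min)

def reassemble(z_inv: torch.Tensor, z_var: torch.Tensor, lambda_r: float) -> torch.Tensor:
    """Approximate intervention: for a lambda_r-controlled fraction of nodes,
    replace the node's variant pattern with that of another randomly chosen node,
    then recombine with the (unchanged) invariant pattern."""
    n = z_var.size(0)
    swap = torch.rand(n) < min(lambda_r, 1.0)   # nodes whose variant pattern is replaced
    donors = torch.randperm(n)                  # random donor nodes
    z_var_int = torch.where(swap.unsqueeze(-1), z_var[donors], z_var)
    return z_inv + z_var_int                    # reassembled representation z_r

# toy usage
z_I, z_V = torch.randn(100, 32), torch.randn(100, 32)
lam_r = intervention_intensity(loss=0.8, lambda_0=0.05, loss_min=0.2)
z_r = reassemble(z_I, z_V, lam_r)
print(round(lam_r, 3), z_r.shape)
```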
In this paper, we measure this difference as the mean of the squared difference between them ($z_r$ and $z_I$):
$$\text{Diff}_{\text{var}} = \text{mean}((z_r - z_I)^2).$$
Starting from the training loss of the variant encoder, we subtract this difference term from it, so the final loss of the variant encoder is:
$$\ell_V = \ell_{\text{trainV}} - \text{Diff}_{\text{var}} \ast \lambda_{\text{diff}},$$
where $\lambda_{\text{diff}}$ is the weight assigned to $\text{Diff}_{\text{var}}$. In our method, the capture of invariant patterns should adjust with different training stages, and the capture of variant patterns should adjust with different invariant patterns. The optimization of our proposed dual disentangled dynamic attention network therefore consists of two steps. In the first step, we calculate the loss of the invariant encoder and optimize its weights while freezing the weights of the variant encoder. In the second step, we calculate the loss of the variant encoder and optimize its weights while freezing the weights of the invariant encoder. This two-step optimization captures invariant patterns more precisely and captures proper variant patterns.

### 4.3 Dynamic Curriculum Importance

#### 4.3.1 Curriculum Time Serial Reweighting

In dynamic graphs, all nodes and edges are associated with a certain timestamp, and data from different timestamps tend to have different features: the smaller the time gap, the more similar the properties of the graph data tend to be. To take the time information of the graph into account, we propose the curriculum time serial reweighting method for dynamic graphs. More specifically, we set the weight of a graph data sample with timestamp $t$ to $w_t = (1 + \lambda)^t$. We then adjust the weights of all data so that the average weight remains 1, in order to avoid introducing additional bias:
$$w_{\text{time}} = w_t + (1 - \text{average}(W_t)),$$
where $W_t$ is the vector of weights over all dynamic graph data samples, consisting of the $w_t$ of each sample.

#### 4.3.2 Curriculum Sample Importance Reweighting

One of the basic assumptions of curriculum learning is that different data should not be treated equally, i.e., different data can be given different weights or presented in a different order. However, traditional methods for dynamic graph out-of-distribution generalization treat all data in the same way. We therefore discuss how to distinguish the importance of different data and how to change their weights in order to obtain better results and generalization ability. During experiments, we find that the gradient is an important signal of a sample's current importance to the model. We observed the pattern of the gradients of the prediction results and found that, with the entanglement of variant and invariant features, gradient descent fails to optimize the model to fit the training set or to enhance its generalization ability. We therefore propose our novel curriculum learning method for sample reweighting through this gradient measure. For data samples \( D = \{d_1, d_2, \cdots, d_n\} \), we first calculate their loss \( \ell(y_{pred}, y_{true}) \) and back-propagate it, obtaining the gradients of all prediction results: \( \text{grad}(d_1), \text{grad}(d_2), \cdots, \text{grad}(d_n) \).
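Before moving to the gradient-based importance weights, the snippet below sketches the time-serial reweighting of Section 4.3.1. It directly follows the stated formulas \( w_t = (1 + \lambda)^t \) and \( w_{\text{time}} = w_t + (1 - \text{average}(W_t)) \); the function name and the assumption that timestamps are given as an integer tensor are ours.

```python
import torch

def time_serial_weights(timestamps: torch.Tensor, lam: float) -> torch.Tensor:
    """Curriculum time-serial reweighting.
    timestamps: integer timestamp t of each training sample, shape (n,).
    Returns w_time = w_t + (1 - mean(W_t)) with w_t = (1 + lam)^t, so more recent
    samples get larger weights while the average weight stays exactly 1."""
    w_t = (1.0 + lam) ** timestamps.float()
    return w_t + (1.0 - w_t.mean())

# toy usage: samples drawn from 5 timestamps
ts = torch.tensor([0, 1, 2, 3, 4, 4, 3])
w_time = time_serial_weights(ts, lam=0.1)
print(w_time, w_time.mean())   # mean is 1.0 by construction
```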
Using these gradients of the prediction results, we reweight each data sample via a function \( f \) mapping gradients to weights:
\[ w_{imp}^i = f(\text{grad}(d_i)). \]
The mapping function is a two-segment piecewise function. For gradients smaller than zero, the function is \( f(x) = 1 \), which relatively increases the weight of those data samples. For gradients larger than zero, the function is \( f(x) = -\exp(\text{sigmoid}(\log(x) * A + B) * C + D) \), where \( A, B, C \), and \( D \) are hyper-parameters. This mapping function increases the weight of data samples with a negative gradient, slightly decreases the weight of data samples with a large positive gradient, and largely decreases the weight of data samples with a small positive gradient. By evaluating sample importance in this way, the reweighting leads to better optimization and generalization ability. After calculating the time-series weight and the sample-importance weight, we add them together to obtain the final weight \( w_{time} + w_{imp} \) for each data sample. However, when applying these weights, some weights may drop below zero, causing the training loss \( \ell_{train} \) to drop below zero as well. We therefore apply a soft positive function to the weights to avoid optimizing in the contradictory direction:
\[ w = \max(w_{time} + w_{imp}, 0). \]
The final reweighting weight \( w \) is used to calculate the training loss \( \ell_{train} \). The overall algorithm of our proposed CDGIL method is shown in Algorithm 1.

**Algorithm 1** Our proposed curriculum dynamic graph invariant learning method
1: **Initialize.** Number of training epochs \( E \), dynamic graph dataset \( G \), invariant encoder \( D_I \), variant encoder \( D_V \).
2: **for** \( e = 1 : E \) **do**
3: Calculate \( z_I \) and \( z_V \): \( z_I^t = \text{Softmax}(\frac{q \cdot k^T}{\sqrt{d}})v \cdot m_f, \; z_V^t = \text{Softmax}(-\frac{q \cdot k^T}{\sqrt{d}})v \).
4: Calculate \( w_{time} \) and \( w_{imp} \): \( w_{time} = w_t + (1 - \text{average}(W_t)), \; w_{imp} = f(\text{grad}(d_i)) \).
5: Calculate \( w \): \( w = \max(w_{time} + w_{imp}, 0) \).
6: Calculate the invariant loss \( \ell_I = \ell_{trainI} + \ell_{assemble} * \lambda_{assemble} \).
7: Perform gradient descent on the invariant encoder.
8: Calculate the variant loss \( \ell_V = \ell_{trainV} - \text{Diff}_{var} * \lambda_{diff} \).
9: Perform gradient descent on the variant encoder.
10: **end for**
11: **Output.** The well-trained model.

### 5 EXPERIMENTS

In this section, we present experiments to verify the effectiveness and wide applicability of our proposed method, including the experimental setup, results, and ablation studies. Please refer to the Appendix for more details.

#### 5.1 EXPERIMENTAL SETUP

**Baselines.** In the experiments, we consider three kinds of representative baselines: (1) static GNNs: GAE (Kipf & Welling, 2016b) and VGAE (Kipf & Welling, 2016b); (2) dynamic GNNs: GCRN (Seo et al., 2018a), EGCN (Pareja et al., 2020), and DySAT (Sankar et al., 2020); and (3) OOD generalization methods: IRM (Arjovsky et al., 2019), GroupDRO (Sagawa et al., 2019), VREx (Krueger et al., 2021), and DIDA (Zhang et al., 2022).

**Datasets.** The experiments are conducted on both real-world and synthetic dynamic graph datasets.

- **COLLAB** (Tang et al., 2012) is a dataset of cross-domain collaboration recommendations, including the authors and publications from 1990 to 2005.
The nodes of the dynamic graph are authors, and the edges between nodes represent coauthorship. Its edges contain five sub-domains: data mining, medical informatics, theory, visualization, and database. Here we choose data mining as the testing domain and the other four sub-domains as the training domains.

• Yelp (Sankar et al., 2020) is a dataset of businesses and reviews, including customers and their reviews of businesses. The nodes of the dynamic graph are customers or businesses, and the edges between them are reviews. Its edges contain five sub-domains: pizza, American food, coffee & tea, sushi bars, and fast food. Here we choose pizza as the testing domain and the other four sub-domains as the training domains.

• Synthetic Dataset (Zhang et al., 2022) is a manually designed dataset based on the COLLAB dataset, introducing external node features to create distribution shifts. The external node features are related to the average positive sample rate $\bar{p}$: in the testing split, $\bar{p}$ is set to 0.1, while in the training split, $\bar{p}$ is set to 0.4, 0.6, or 0.8, forming three datasets: synthetic-0.4, synthetic-0.6, and synthetic-0.8. The larger $\bar{p}$ is, the larger the distribution shift between the training and testing parts of the dataset.

Table 1: The experiment results (ROCAUC%) of different methods on real-world link prediction datasets. The best results are in bold. 'w/o DS' and 'w/ DS' denote test data without and with distribution shift, respectively.

| Model | COLLAB (w/o DS) | COLLAB (w/ DS) | Yelp (w/o DS) | Yelp (w/ DS) |
|-----------|--------------|--------------|--------------|--------------|
| GAE | 77.15±0.50 | 74.04±0.75 | 70.67±1.11 | 64.45±5.02 |
| VGAE | 86.47±0.04 | 74.95±1.25 | 76.54±0.50 | 65.33±1.43 |
| GCRN | 82.78±0.54 | 69.72±0.45 | 68.59±1.05 | 54.68±7.59 |
| EGCN | 86.62±0.95 | 76.15±0.91 | 78.21±0.03 | 53.82±2.06 |
| DySAT | 88.77±0.23 | 76.59±0.20 | 78.87±0.57 | 66.09±1.42 |
| IRM | 87.96±0.90 | 75.42±0.87 | 66.49±10.78 | 56.02±16.08 |
| VREx | 88.31±0.32 | 76.24±0.77 | 79.04±0.16 | 66.41±1.87 |
| GroupDRO | 88.76±0.12 | 76.33±0.29 | 79.38±0.42 | 66.97±0.61 |
| DIDA | 91.97±0.05 | 81.87±0.40 | 78.22±0.40 | 75.92±0.90 |
| Ours | **93.60±0.11** | **84.39±0.54** | **77.15±1.54** | **76.44±1.79** |

Table 2: The experiment results (ROCAUC%) of different methods on synthetic link prediction datasets. The best results are in bold.

| Model | synthetic-0.4 (Train) | synthetic-0.4 (Test) | synthetic-0.6 (Train) | synthetic-0.6 (Test) | synthetic-0.8 (Train) | synthetic-0.8 (Test) |
|-----------|---------------|---------------|---------------|---------------|---------------|---------------|
| GCRN | 69.60±1.14 | 72.57±0.72 | 74.71±0.17 | 72.29±0.47 | 75.69±0.07 | 67.26±0.22 |
| EGCN | 78.82±1.40 | 69.00±0.53 | 79.47±1.68 | 62.70±1.14 | 81.07±4.10 | 60.13±0.89 |
| DySAT | 84.71±0.80 | 70.24±1.26 | 89.77±0.32 | 64.01±0.19 | 94.02±1.29 | 62.19±0.39 |
| IRM | 85.20±0.07 | 69.40±0.09 | 89.48±0.22 | 63.97±0.37 | **95.02±0.09** | 62.66±0.33 |
| VREx | 84.77±0.84 | 70.44±1.08 | 89.81±0.21 | 63.99±0.21 | 94.06±1.30 | 62.21±0.40 |
| GroupDRO | 84.78±0.85 | 70.30±1.23 | 89.90±0.11 | 64.05±0.21 | 94.08±1.33 | 62.13±0.35 |
| DIDA | 87.92±0.92 | 85.20±0.84 | 91.22±0.59 | 82.89±0.23 | 92.72±2.16 | 72.59±3.31 |
| Ours | **89.77±0.15** | **87.60±0.18** | **91.88±0.26** | **84.67±0.27** | **94.84±0.29** | **79.54±0.87** |
5.2 Results

• The performance of the baseline methods drops significantly under distribution shifts, although some of them show relatively competitive results on test data without distribution shifts. For example, under distribution shifts, the performance of DySAT, a representative dynamic method, drops by more than 10% on the COLLAB and Yelp datasets. This shows that existing dynamic GNNs fail to handle distribution shifts and rely on variant patterns to make predictions, leading to poor OOD generalization. As a dynamic graph OOD generalization method, DIDA shows strong performance in handling distribution shifts, but its performance drop is also significant. This means that although this method attempts to capture invariant patterns and only uses them to make predictions, the result is not promising, since it ignores the training status and sample importance when learning invariant patterns in dynamic graphs.

• Our method can accurately capture invariant patterns in dynamic graphs and consistently remove the impact of variant patterns under distribution shifts. On the real-world datasets, we find consistent performance improvements. On the synthetic datasets, we also observe that our method achieves the most stable performance as the shift level increases, while almost all baselines improve on the training results but decline on the test results. This verifies that the existing baselines easily exploit variant patterns for predictions and suffer from their harmful effects on OOD generalization.

5.3 Ablation Studies

Figure 2: Ablation study results (AUC %) comparing our full method, our method without dynamic curriculum importance sample reweighting, and our method without both dynamic curriculum importance sample reweighting and self-paced curriculum intervention.

We first remove the dynamic curriculum importance sample reweighting module and observe a sharp performance drop without our novel sample reweighting method, demonstrating its effectiveness. Next, we further remove the self-paced curriculum intervention in addition to the dynamic curriculum importance sample reweighting, and we observe that the performance drops without our novel training scheduler method, which indicates the effectiveness of our designs.

5.4 Complexity Analysis

Here we analyze the computational complexity of our proposed CDGIL method. Let \(|V|\) and \(|E|\) denote the number of total nodes and edges, respectively, and let \(d\) denote the dimension of the hidden representation. From (Zhang et al., 2022), we know that the computational complexity of the disentangled DGNN is \(O(|E|d + |V|d^2)\). Further, let \(|E_t|\) denote the number of edges used for training and testing; for each training step, the computation cost of curriculum learning over the data is only a small constant multiple of \(|E_t|\), i.e., \(O(|E_t|)\). The computation cost of the self-paced curriculum intervention is \(O(|E|d)\), since we conduct the intervention operation on the whole graph. Moreover, \(O(|E_t|)\) is dominated by \(O(|E|)\). Finally, the computational complexity of our proposed CDGIL is \(O(|E|d + |V|d^2)\), which is lower than DIDA's \(O(|E|d + |V|d^2 + |E_t||S|d)\), while our method substantially outperforms DIDA on all datasets.

6 Conclusion

In this paper, we propose Curriculum Dynamic Graph Invariant Learning (CDGIL) to handle the dynamic graph distribution shift problem.
More specifically, we present to capture invariant and variant patterns directed by our proposed curriculum learning method considering both training status and sample importance. Firstly, we utilize dual disentangled dynamic attention networks to capture invariant and variant patterns and optimize them separately. Then, we conduct self-paced curriculum intervention to generalize better. Finally, we propose to compute the weight of data samples by evaluating their time series weight as well as sample importance weight. Extensive experiments and ablation studies further demonstrate the effectiveness of our proposed method. REFERENCES Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In *Proceedings of the 26th annual international conference on machine learning*, pp. 41–48, 2009. Richard A Berk. An introduction to sample selection bias in sociological data. *American sociological review*, pp. 386–398, 1983. Stefan Braun, Daniel Neil, and Shih-Chii Liu. A curriculum learning method for improved noise robustness in automatic speech recognition. In *2017 25th European Signal Processing Conference (EUSIPCO)*, pp. 548–552. IEEE, 2017. Taotao Cai, Shuiqiao Yang, Jianxin Li, Quan Z Sheng, Jian Yang, Xin Wang, Wei Emma Zhang, and Longxiang Gao. Incremental graph computation: Anchored vertex tracking in dynamic social networks. *IEEE Transactions on Knowledge and Data Engineering*, 2022. Volkan Cirik, Eduard Hovy, and Louis-Philippe Morency. Visualizing and understanding curriculum learning for long short-term memory networks. *arXiv preprint arXiv:1611.06204*, 2016. Weilin Cong, Yanhong Wu, Yuandong Tian, Mengting Gu, Yinglong Xia, Mehrdad Mahdavi, and Chun-cheng Jason Chen. Dynamic graph representation learning via graph transformer networks. *arXiv preprint arXiv:2111.10447*, 2021. Zulong Diao, Xin Wang, Dafang Zhang, Yingru Liu, Kun Xie, and Shaoyao He. Dynamic spatial-temporal graph convolutional neural networks for traffic forecasting. In *Proceedings of the AAAI conference on artificial intelligence*, volume 33, pp. 890–897, 2019. Tongtong Fang, Nan Lu, Gang Niu, and Masashi Sugiyama. Rethinking importance weighting for deep learning under distribution shift. *Advances in neural information processing systems*, 33: 11996–12007, 2020. Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, and Pieter Abbeel. Reverse curriculum generation for reinforcement learning. In *Conference on robot learning*, pp. 482–495. PMLR, 2017. Jun Gao, Jiazun Chen, Zhao Li, and Ji Zhang. Ics-gnn: lightweight interactive community search via graph neural network. *Proceedings of the VLDB Endowment*, 14(6):1006–1018, 2021. Palash Goyal, Sujit Rokka Chhetri, and Arquimedes Canedo. dyngraph2vec: Capturing network dynamics using dynamic graph representation learning. *Knowledge-Based Systems*, 187:104816, 2020. Shengnan Guo, Youfang Lin, Huaiyu Wan, Xiucheng Li, and Gao Cong. Learning dynamics and heterogeneity of spatial-temporal graph data for traffic forecasting. *IEEE Transactions on Knowledge and Data Engineering*, 34(11):5415–5428, 2021. Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. Variational graph recurrent neural networks. *Advances in neural information processing systems*, 32, 2019. Will Hamilton, Zhitao Ying, and Jure Leskovec. 
Inductive representation learning on large graphs. *Advances in neural information processing systems*, 30, 2017. Charles C Holt. Forecasting seasonals and trends by exponentially weighted moving averages. *International journal of forecasting*, 20(1):5–10, 2004. Hong Huang, Zixuan Fang, Xiao Wang, Youshan Miao, and Hai Jin. Motif-preserving temporal network embedding. In *IJCAI*, pp. 1237–1243, 2020a. Yuge Huang, Yuhan Wang, Ying Tai, Xiaoming Liu, Pengcheng Shen, Shaoxin Li, Jilin Li, and Feiyue Huang. Curricularface: adaptive curriculum learning loss for deep face recognition. In *proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 5901–5910, 2020b.
3VD4PNEt5q
The methodology, while promising, seems to be narrowly tailored for a specific set of fusion models. This raises concerns about its universality. An in-depth exploration into its effectiveness against a broader spectrum of fusion models would have provided a more comprehensive perspective, allowing for a holistic understanding of its potential and pitfalls.
FUSION IS NOT ENOUGH: SINGLE MODAL ATTACKS ON FUSION MODELS FOR 3D OBJECT DETECTION Zhiyuan Cheng¹ Hongjun Choi² Shiwei Feng¹ James Liang³ Guanhong Tao¹ Dongfang Liu³ Michael Zuzak³ Xiangyu Zhang¹ ¹Purdue University {cheng443, feng292, taog, xyzhang}@purdue.edu ²DGIST hongjun@dgist.ac.kr ³Rochester Institute of Technology {jcl3689, dongfang.liu, mjzeec}@rit.edu ABSTRACT Multi-sensor fusion (MSF) is widely used in autonomous vehicles (AVs) for perception, particularly for 3D object detection with camera and LiDAR sensors. The purpose of fusion is to capitalize on the advantages of each modality while minimizing its weaknesses. Advanced deep neural network (DNN)-based fusion techniques have demonstrated the exceptional and industry-leading performance. Due to the redundant information in multiple modalities, MSF is also recognized as a general defence strategy against adversarial attacks. In this paper, we attack fusion models from the camera modality that is considered to be of lesser importance in fusion but is more affordable for attackers. We argue that the weakest link of fusion models depends on their most vulnerable modality, and propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks. Our approach employs a two-stage optimization-based strategy that first thoroughly evaluates vulnerable image areas under adversarial attacks, and then applies dedicated attack strategies for different fusion models to generate deployable patches. The evaluations with six advanced camera-LiDAR fusion models and one camera-only model indicate that our attacks successfully compromise all of them. Our approach can either decrease the mean average precision (mAP) of detection performance from 0.824 to 0.353, or degrade the detection score of a target object from 0.728 to 0.156, demonstrating the efficacy of our proposed attack framework. Code is available. 1 INTRODUCTION 3D object detection is a critical task in the perception of autonomous vehicles (AVs). In this task, AVs employ camera and/or LiDAR sensors input to predict the location, size, and categories of surrounding objects. Camera-LiDAR fusion models, which combine the high-resolution 2D texture information from camera images with the rich 3D distance information from LiDAR point clouds, have outperformed the detection accuracy of models that rely solely on cameras or LiDAR. (Yang et al., 2022; Liu et al., 2023b; Li et al., 2022b). Additionally, multi-sensor fusion (MSF) techniques are generally recognized as a defensive measure against attacks (Cao et al., 2021; Liang et al., 2022), as the extra modality provides supplementary information to validate detection results. Viewed in this light, a counter-intuitive yet innovative question arises: Can we attack fusion models through a single modality, even the less significant one, thereby directly challenging the security assumption of MSF? Yet, this fundamental question has not been sufficiently answered in the literature. Previous research has demonstrated successful attacks against camera-LiDAR fusion models by targeting either multiple modalities (Cao et al., 2021; Tu et al., 2021) or the LiDAR modality alone (Hallyburton et al., 2022). 
However, these approaches are not easy to implement and require additional equipment such as photodiodes, laser diodes (Hallyburton et al., 2022), and industrial-grade 3D printers (Cao et al., 2021; Tu et al., 2021) to manipulate LiDAR data, thus increasing the deployment cost for attackers. Consequently, we explore the possibility of attacking fusion models via the camera modality, as attackers can more easily perturb captured images using affordable adversarial patches. Nevertheless, this attack design presents additional challenges. For example, the camera modality is considered less significant in fusion models for 3D object detection since LiDAR provides abundant 3D information. The performance of both state-of-the-art LiDAR-based models and ablations of fusion models using only LiDAR significantly surpasses that of their solely camera-based counterparts (Liang et al., 2022; Liu et al., 2023b; Motional, 2023) (see more experimental results in Appendix A).

Figure 1: Single-modal attacks against a camera-LiDAR fusion model using the camera modality.

The lesser significance of the camera modality in fusion can limit its impact on detection results. Moreover, different fusion models can exhibit distinct vulnerabilities in the camera modality, necessitating varying attack strategies. The cutting-edge adversarial patch optimization technique against camera-only models (Cheng et al., 2022) has limitations in generating deployable patches when the optimization views the entire scene, as it fails to consider the semantics of the input. Hence, a problem remains open: how can we design a single-modal attack that effectively subverts fusion models?

In response to these two challenges, we propose a novel attack framework against camera-LiDAR fusion models through the less significant camera modality. We utilize adversarial patches as the attack vector, aiming to cause false negative detection results, and our main focus lies on the early-fusion scheme, including data-level and feature-level fusion strategies. As shown in Figure 1, our attack employs a two-stage approach to generate an optimal adversarial patch for the target fusion model. In the first stage (2nd column), we identify vulnerable regions in the image input using our novel sensitivity distribution recognition algorithm. The algorithm employs an optimizable mask to identify the sensitivity of different image areas under adversarial attacks. Based on the identified vulnerable regions, we then classify the fusion model as either object-sensitive or globally sensitive, enabling tailored attack strategies for each type of model. In the second stage (3rd column), we design two attack strategies for different types of models to maximize attack performance. For globally sensitive models, we devise scene-oriented attacks, wherein adversarial patches can be placed on static background structures (e.g., roads or walls) to compromise the detection of arbitrary nearby objects (see the undetected pedestrians in the red circle of Figure 1). For object-sensitive models, we implement object-oriented attacks that can compromise the detection of a target object by attaching the patch to it (see the undetected vehicle in the red circle of Figure 1). Compared to Cheng et al. (2022), the patches generated by our proposed framework offer a significant advantage by being both physically deployable and effective (see comparison in Appendix J).
Our contributions are: • We present single-modal attacks against advanced camera-LiDAR fusion models leveraging only the camera modality, thereby further exposing the security issues of MSF-based AV perception. • We develop an algorithm for identifying the distribution of vulnerable regions in images, offering a comprehensive assessment of areas susceptible to adversarial attacks. • We introduce a framework for attacking fusion models with adversarial patches, which is a two-stage approach and involves different attack strategies based on the recognized sensitivity type of the target model. The threat model is detailed in Appendix P. • We evaluate our attack using six state-of-the-art fusion-based and one camera-only models on Nuscenes (Caesar et al., 2020), a real-world dataset collected from industrial-grade AV sensor arrays. Results show that our attack framework successfully compromises all models. Object-oriented attacks are effective on all models, reducing the detection score of a target object from 0.728 to 0.156 on average. Scene-oriented attacks are effective for two globally sensitive models, decreasing the mean average precision (mAP) of detection performance from 0.824 to 0.353. Experiments in simulation and physical-world also validate the practicality of our attacks in the real world. Demo video is available at https://youtu.be/xhXtzDezeaM 2 RELATED WORK Camera-LiDAR Fusion. AVs are typically equipped with multiple surrounding cameras, providing a comprehensive view, and LiDAR sensors are usually mounted centrally on top of the vehicle, enabling a 360-degree scan of the surrounding environment, resulting in a 3D point cloud. Images and point clouds represent distinct modalities, and numerous prior works have investigated methods to effectively fuse them for improved object detection performance. Specifically, the fusion strategies can be categorized into three types based on the stage of fusion: 1) data-level fusion, which leverages the extracted features from one modality to augment the input of the other modality (Yin et al., 2021; Vora et al., 2020; Wang et al., 2021); 2) decision-level fusion, which conducts independent perception for each modality and subsequently fuses the semantic outputs (BaiduApollo); and 3) feature-level fusion, which combines low-level machine-learned features from each modality to yield unified detection results (Liu et al., 2023b; Liang et al., 2023; Yang et al., 2022; Li et al., 2022b; Bai et al., 2022; Chen et al., 2022b). Feature-level fusion can be further divided into alignment-based and non-alignment-based fusion. Alignment-based fusion entails aligning camera and LiDAR features through dimension projection at the point level (Li et al., 2020; Vora et al., 2020; Chen et al., 2022a), the voxel level (Li et al., 2022b; Jiao et al., 2022), the proposal level (Chen et al., 2017; Ku et al., 2018), or the bird’s eye view (Liu et al., 2023b; Liang et al., 2022) before concatenation. For non-alignment-based fusion, cross-attention mechanisms in the transformer architecture are employed for combining different modalities (Yang et al., 2022; Bai et al., 2022). Contemporary fusion models primarily use feature-level fusion for its superior feature extraction capability and performance. Hence, we focus on introducing and analyzing this type of fusion strategy in our method design. It is worth noting that our approach can also be directly applied to data-level fusion, as demonstrated in our evaluation (see Section 5). 
More discussion of fusion strategies is in Appendix B. Appendix C introduces the general architecture of camera-LiDAR fusion.

**3D Object Detection Attacks.** 3D object detection models (Cheng et al., 2022; Liu et al., 2021a; Cui et al., 2021) can be classified into three categories: camera-based, LiDAR-based, and fusion-based models. Attacks targeting each category have been proposed in the context of AV systems. 1) For camera-based models, adversaries typically employ adversarial textures to manipulate the pixels captured by AV cameras (Zhang et al., 2021; Boloor et al., 2020). This approach is cost-effective and can be easily implemented by printing and pasting an adversarial patch. Recent studies have concentrated on enhancing the stealthiness of the adversarial patterns (Cheng et al., 2022; Duan et al., 2020). 2) In the case of LiDAR-based models, some attackers utilize auxiliary equipment, such as photodiodes and laser diodes, to intercept and relay the laser beams emitted by AV LiDAR systems, thereby generating malicious points in the acquired point cloud to launch the attack (Cao et al., 2019, 2023; Sun et al., 2020). Alternatively, others employ malicious physical objects with engineered shapes to introduce adversarial points (Tu et al., 2020; Abdelfattah et al., 2021a; Cao et al., 2019). 3) Regarding camera-LiDAR fusion models, multi-modal attacks have been developed that perturb both camera and LiDAR input either separately (Tu et al., 2021; Abdelfattah et al., 2021b) or concurrently (Cao et al., 2021), using the previously mentioned attack vectors. Additionally, single-modal attacks on solely LiDAR input have been conducted in a black-box manner (Hallyburton et al., 2022) to fool fusion models. For camera-oriented single-modal attacks, there are prior works investigating the robustness of fusion models when subjected to noisy camera input (Park et al., 2021; Kim & Ghosh, 2019). However, Kim & Ghosh (2019) mainly considered random noise, specifically Gaussian noise, instead of physical-world adversarial attacks. Park et al. (2021) mainly focused on digital-space attacks and exclusively targeted an early model using single-view images. In contrast, our study considers physically practical attacks and investigates fusion models utilizing multi-view images and a transformer-based detection head.

**3 MOTIVATION**

Despite the challenges mentioned in Section 1, it is still theoretically possible to conduct camera-only attacks against fusion models. The intuition is that adversarial effects from the camera modality can propagate through model layers, contaminate the fused features, and ultimately impact the model output (see Appendix D for a detailed feasibility analysis). To examine the actual performance of camera-only adversarial attacks on SOTA fusion models, we illustrate an example in Figure 2. A frame is derived from the Nuscenes dataset containing both camera and LiDAR data (the first row). It represents a scenario where the ego-vehicle is navigating a road populated with multiple cars and pedestrians.

Figure 2: Motivating example of adversarial patch attack on images against fusion models.

In benign cases, two cutting-edge fusion models, DeepInteraction (Yang et al., 2022) and BEVFusion-PKU (Liang et al., 2022), can accurately detect objects in the given scene. We then implement a conventional patch attack (Brown et al., 2017) by generating a patch on the road to induce false negative detections.
The performance of DeepInteraction is undisturbed by the attack, as illustrated in the second row of Figure 2. In contrast, BEVFusion-PKU is successfully disrupted, evidenced by its inability to detect objects proximal to the patch, highlighted by red circles in the third row. This discrepancy in the models' responses confirms that exploiting the camera modality can impact fusion models, while highlighting that uniform attack strategies may not be universally effective due to the inherent unique vulnerabilities in different models, such as varying susceptible regions. Although the SOTA patch attack can be adapted to optimize the patch region over the scene, the generated patch is not deployable (see Appendix I), limiting its application. To characterize the susceptible regions, we introduce the concept of "sensitivity" as a property of areas in input images. It measures the degree to which a specific area of an image impacts adversarial goals. An area with high sensitivity means perturbations there have a large influence and can achieve good attack performance. Hence, sensitive regions are more vulnerable to adversarial attacks than other regions. Formally, the sensitivity $S_A$ of an area $A$ is defined as
$$S_A \propto \max_p \{ L_{adv}(x, l) - L_{adv}(x', l) \},$$
where $x' = x \odot (1 - A) + p \odot A$. Here, $x$ is the input image, $l$ is the LiDAR point cloud, and $x'$ is the adversarial image with perturbations $p$ in region $A$. $L_{adv}$ denotes the adversarial loss defined by adversarial goals. Examining the sensitivity of each area of the image through individual patch optimization is very time-consuming, and it becomes unaffordable as the granularity of the considered unit areas increases. Despite the availability of various interpretation methods for model decisions (e.g., GradCAM (Selvaraju et al., 2017) and ScoreCAM (Wang et al., 2020)), which can generate heatmaps to highlight areas of attention in images, it is essential to distinguish between interpreting model decisions and recognizing sensitivity. For instance, our motivating example shows that the road is a susceptible region for adversarial attacks on BEVFusion-PKU. However, the main focus of object detection should be directed towards the objects themselves rather than the road, as an interpretation method would show (Gildenblat, 2022). Therefore, to recognize the sensitivity distribution on input images efficiently, we propose a novel optimization-based method in the first stage, and design different attack strategies in the second stage to maximize attack performance.

4 METHOD

Overview. Figure 3 presents the framework of our single-modal adversarial attack on fusion models using an adversarial patch, employing a two-stage approach. Initially, we identify the sensitivity distribution of the subject network, and subsequently, we launch an attack based on the identified sensitivity type. During the first stage, to recognize the sensitivity distribution, we define perturbations and perturbation masks with dimensions identical to the multi-view image input. We then compose the adversarial input by applying the patch and mask to images of a scene sampled from the dataset (step ①). After feeding the adversarial input images and corresponding benign LiDAR data to the subject fusion model, we obtain object detection results (step ②).
We calculate the adversarial loss based on the detection scores of objects in the input scene (step ③) and utilize back-propagation and gradient descent to update masks and perturbations, aiming to minimize adversarial loss and mask loss (step ④). We repeat this process for thousands of iterations until convergence is achieved, and then visualize the final mask as a heatmap to determine the sensitivity type (step 5). The heatmap’s high-brightness regions signify areas more susceptible to adversarial attacks. Based on the distribution of sensitive areas, we classify the fusion model into two types: global sensitivity and object sensitivity. Global sensitivity refers to the distribution of sensitive areas covering the entire scene, including objects and non-object background. Object sensitivity, on the other hand, indicates that only object areas are sensitive to attacks. In the second stage, we adopt different attack strategies based on the identified sensitivity heatmap type. For global sensitivity, we implement scene-oriented attacks. By placing a patch on the static background (e.g., the road), we deceive the fusion model and compromise the detection of arbitrary objects surrounding the patch. For both object sensitivity and global sensitivity, we can employ object-oriented attacks. In this approach, we attach a patch to a target object, causing failure in detecting it while leaving the detection of other objects unaltered. Since adversarial patches, optimized as 2D images, would be deployed physically during attacks, we employ projections (Cheng et al., 2023) to simulate how the patch would look on the scene image once it is physically deployed (refer to Figure 4), which enhances the physical-world robustness. The two attack strategies differ mainly in their projection functions and the scope of affected objects. Details are discussed later. Sensitivity Distribution Recognition. We leverage the gradients of images with respect to the adversarial loss as an overall indicator to understand the sensitivity of different image areas, since they are closely related to the relative weights assigned to the camera modality. (See detailed analysis in Appendix E.) In a formal setting, the proposed sensitivity distribution recognition algorithm can be articulated as an optimization problem. The primary objective is to concurrently minimize an adversarial loss and a mask loss, which can be mathematically represented as follows: \[ \arg \min_{p,m} L_{adv} + \lambda L_{mask}, \quad \text{s.t. } p \in [0,1]^{3 \times h \times w}, m \in R^{1 \times \lfloor h/s \rfloor \times \lfloor w/s \rfloor}, \] where \(L_{adv} = MSE(f_{scores}(x', l), 0)\); \(L_{mask} = MSE(M, 0)\); \[ x' = x \odot (1 - M) + p \odot M; \quad M[i,j] = \frac{1}{2} \times \tanh(\gamma \cdot m[\lfloor i/s \rfloor, \lfloor j/s \rfloor]) + \frac{1}{2}. \] Here, \(x\) is the image input, which is normalized and characterized by dimensions \(h\) (height) and \(w\) (width). The symbols \(l, p, m,\) and \(\lambda\) represent the LiDAR input, the perturbations on image with dimensions equal to \(x\), the initial mask parameters, and the mask loss weight hyperparameter, respectively. The desired sensitivity heatmap corresponds to the perturbation mask \(M\). Visualization of variables can be found in Figure 3. Initially, the mask parameters \(m \in R^{1 \times \lfloor h/s \rfloor \times \lfloor w/s \rfloor}\) are transformed into the perturbation mask \(M \in [0,1]^{1 \times h \times w}\) using Equation 3. 
We use \(\tanh()\) function to map values in \(m\) into the \([0,1]\) range, and its long-tail effect encourages the mask \(M\) values to gravitate towards either 0 or 1. The hyperparameters \(\gamma\) and \(s\) modulate the convergence speed and heatmap granularity, respectively. Subsequently, the perturbation mask \(M\) is utilized to apply the perturbation \(p\) to the input image \(x\), resulting in the adversarial image \(x'\). \(\odot\) denotes element-wise multiplication. Adversarial image \(x'\) and benign LiDAR data \(l\) are then fed to the fusion model \(f_{scores}\). Since our attack goals are inducing false negative detection results, one objective of our optimization is to minimize the detected object scores. Hence, we use the mean square error (MSE) between the scores and zero as the adversarial loss \(L_{adv}\). In this context, the output of \(f_{scores}\) consists of the detected object scores (confidence). The optimization’s secondary objective is to minimize the perturbation mask values, achieved by incorporating a mask loss \(L_{mask}\). The optimization of these two losses is a dual process. Minimizing the adversarial loss (i.e., maximizing attack performance) necessitates a higher magnitude of perturbations on the input. Conversely, minimizing the mask loss indicates a lower magnitude of perturbations. As a result, the dual optimization process converges on applying higher magnitude perturbations on sensitive areas (to improve attack performance) and lower magnitudes for insensitive parts (to minimize mask loss). Hence, the mask \(M\) serves as a good representation of the sensitivity distribution, and visualizing \(M\) allows for the attainment of the sensitivity heatmap. Then we can further classify the fusion model into object sensitivity or global sensitivity by comparing the expectation of the average intensity of object areas with non-object background in each scene as follows: \[ T(f) = \begin{cases} \text{Object}, & \mathbb{E}_x \left[ \frac{\sum(M^o \odot A^o_x)}{\sum A^o_x} \right] > \beta \mathbb{E}_x \left[ \frac{\sum(M^o \odot (1-A^o_x))}{\sum(1-A^o_x)} \right] \\ \text{Global}, & \text{otherwise} \end{cases}. \] Here, \(T(f)\) represents the sensitivity type of fusion model \(f\), and \(A^o_x\) is a mask with values 1 for object areas and 0 for non-object areas in scene image \(x\). The mask \(A^o_x\) is generated by considering the pixels covered by bounding boxes of detected objects in benign cases. $M^x$ refers to the recognized sensitivity heatmap of $x$. $\beta$ is the classification threshold and set to 3 in our experiments. **Attack Strategies.** Two attack strategies, namely scene-oriented attacks and object-oriented attacks, are introduced based on the fusion model’s sensitivity type. Both strategies employ optimization-based patch generation methods. Back-propagation and gradient descent are utilized iteratively to solve the optimization problem. Formally, the problem is defined as: $$\arg \min_p E_{(x,l) \sim D} [MSE(f_s(x', l), 0)], \quad \text{s.t. } p \in [0, 1]^{3 \times h \times w}, M \in \{0, 1\}^{1 \times h \times w},$$ where $x' = x \odot (1 - M_x) + p_x \odot M_x; \quad M_x = proj_x(M); \quad p_x = proj_x(p).$ Here, scene images $x$ and LiDAR data $l$ are randomly sampled from the training set $D$. The mask $M$ represents a patch area for cropping the patch image, with values equal to 1 inside a predefined patch area and 0 elsewhere. Unlike Equation 1, $M$ contains discrete values and is not optimizable. 
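As a concrete illustration of the first-stage recognition defined by Equations 2-4 above, the sketch below jointly optimizes the perturbation and the coarse mask and then classifies the resulting heatmap. The detector is stubbed out as `detector_scores`, and the hyperparameter values, the nearest-neighbor upsampling of the coarse mask, and the loop structure are our own assumptions; only the adversarial loss, the mask loss, the tanh reparameterization, and the object/global decision rule follow the formulas in the text.

```python
import torch
import torch.nn.functional as F

def detector_scores(image: torch.Tensor, lidar) -> torch.Tensor:
    """Stand-in for the fusion model f_scores: returns detection confidences.
    Replace with a real camera-LiDAR detector; this dummy is only for illustration."""
    return torch.sigmoid(image.mean(dim=(-1, -2, -3))).repeat(5)

def recognize_sensitivity(image, lidar, s=32, gamma=8.0, lam=1e-3, steps=1000, lr=0.01):
    """Jointly optimize perturbation p and coarse mask params m (Eq. 2-3)."""
    _, h, w = image.shape
    p = torch.rand_like(image, requires_grad=True)             # perturbation p
    m = torch.zeros(1, h // s, w // s, requires_grad=True)     # coarse mask params m
    opt = torch.optim.Adam([p, m], lr=lr)
    for _ in range(steps):
        M = 0.5 * torch.tanh(gamma * m) + 0.5                  # values pushed toward {0, 1}
        M = F.interpolate(M.unsqueeze(0), size=(h, w), mode="nearest").squeeze(0)
        x_adv = image * (1 - M) + p.clamp(0, 1) * M            # apply masked perturbation
        scores = detector_scores(x_adv, lidar)
        loss = F.mse_loss(scores, torch.zeros_like(scores)) + lam * (M ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return M.detach()                                          # sensitivity heatmap

def sensitivity_type(heatmap, obj_mask, beta=3.0):
    """Eq. 4: object sensitivity if object-area intensity dominates the background."""
    obj = (heatmap * obj_mask).sum() / obj_mask.sum()
    bg = (heatmap * (1 - obj_mask)).sum() / (1 - obj_mask).sum()
    return "object" if obj > beta * bg else "global"

# toy usage with a random image and a fake object mask
img = torch.rand(3, 256, 416)
heat = recognize_sensitivity(img, lidar=None, steps=10)
obj_mask = torch.zeros(1, 256, 416); obj_mask[:, 80:180, 150:300] = 1
print(sensitivity_type(heat, obj_mask))
```

The second-stage patch generation (Equation 5) reuses the same loop but fixes the mask to the predefined patch region and projects the patch into the scene with $proj_x$, as described next.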
$proj_x()$ signifies the projection of the original patch image (see the "2D patch image" in Figure 4a) onto a specific area of the scene image $x$ (see the "captured image" in Figure 4a), simulating how the patch would look once physically deployed, which minimizes the disparity between digital space and the physical world. The target region is contingent upon the attack strategy. Similarly, the output of the fusion model $f_s$ consists of detected object scores, which vary in scope depending on the specific attack strategy. We minimize the MSE between the detected object score(s) and zero to achieve false negative detection results, and we leverage Expectation over Transformation (EoT) (Athalye et al., 2018) across all training samples and color jitters (i.e., brightness, contrast, and saturation changes) in the patch to enhance the physical robustness and generality of our attack. The adversarial pattern can be concealed within natural textures (e.g., dirt or rust), utilizing existing camouflage techniques (Duan et al., 2020) to remain stealthy and persistent, avoiding removal.

Specifically, for **scene-oriented attacks**, the goal is to compromise the detection of arbitrary objects near an adversarial patch attached to static structures (e.g., the road) of a target scene. In this scenario, the training set $D$ is composed of the target scene in which the ego-vehicle is stationary (e.g., stopped at an intersection or in a parking lot). The categories and locations of objects surrounding the ego-vehicle in the scene can change dynamically. The output of the fusion model $f_s$ during optimization is the detection score of all detected objects in the target scene. To simulate the appearance of the patch on the scene image once deployed, $proj_x$ first projects the pixels of the patch image and mask into 3D space on the road (step ① in Figure 4a). Then the function maps them back onto the scene image (step ②). The patch's 3D position is predefined by the attacker with distance $d$ and viewing angle $\alpha$. The victim vehicle's camera height $g$ above the ground is known from the dataset, which ensures by definition that the patch lies on the road. This process can be expressed formally with Equations 7 and 9, where $(u^p, v^p)$ are a pixel's coordinates on the patch image $p$, $(x^p, y^p, z^p)$ are the 3D coordinates of the pixel on the ground in the camera's coordinate system, $(u^s, v^s)$ is the corresponding pixel on the scene image, and $K$ is the camera intrinsic matrix. Other variables are defined in Figure 4.

$$\begin{bmatrix} x^p \\ y^p \\ z^p \\ 1 \end{bmatrix} = \begin{bmatrix} \cos \alpha & 0 & -\sin \alpha & q \\ 0 & 1 & 0 & 0 \\ \sin \alpha & 0 & \cos \alpha & d \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} W/w & 0 & -W/2 \\ 0 & 0 & g \\ 0 & -H/h & H/2 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} u^p \\ v^p \\ 1 \end{bmatrix},$$

$$\begin{bmatrix} x^p \\ y^p \\ z^p \\ 1 \end{bmatrix} = \begin{bmatrix} \cos \alpha & 0 & -\sin \alpha & q \\ 0 & 1 & 0 & 0 \\ \sin \alpha & 0 & \cos \alpha & d \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} W/w & 0 & -W/2 \\ 0 & H/h & -H/2 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} u^p \\ v^p \\ 1 \end{bmatrix},$$

$$[u^s \ v^s \ 1]^T = 1/z^p \cdot K \cdot [x^p \ y^p \ z^p \ 1]^T.$$

For object-oriented attacks, the goal is to compromise the detection of the target object with an attached adversarial patch while keeping other objects unaffected.
In this case, the training set $D$ is composed of frames in which the target object appears. For example, the ego-vehicle may follow a target vehicle with the background changes dynamically in the scene. The output of the fusion model $f_s$ during optimization is the detection score of the target object exclusively. The function $\text{proj}_L$ projects the patch image and mask onto the target object in the scene image using Equation 8 and 9 corresponding to step ① and ② in Figure 4b respectively. Unlike the scene-oriented attack, in which the location of the patch is defined by attackers using longitudinal distance $d$, lateral distances $q$ and viewing angle $\alpha$, in object-oriented attacks, these projection parameters change dynamically depending on the position of the target object in training data. Hence, we innovatively extract them from the estimated 3D bounding box of the target object before projecting the patch. 5 EVALUATION Model selection In our evaluation, we use six state-of-the-art camera-LiDAR fusion-based 3D object detection models that are published recently. These models include Transfusion (Bai et al., 2022), DeepInteraction (Yang et al., 2022), UVTR (Li et al., 2022b), PointAugmenting (Wang et al., 2021), BEVFusion-MIT (Liu et al., 2023b) and BEVFusion-PKU (Liang et al., 2022). These models cover data-level and feature-level fusion strategies and contain a diverse range of feature-level fusion approaches, including alignment-based fusion, non-alignment-based fusion, and various detection head designs. Additionally, we use a camera-only model called BEVFormer (Li et al., 2022c) as comparison. Detailed selection criteria can be found in Appendix F. Scene selection. Our evaluation scenes are selected from the Nuscenes dataset (Caesar et al., 2020). This dataset contains real-world multi-view images and point cloud data collected from industrial-grade sensor array, and they are derived from hundreds of driving clips. The selected scenes for testing in our evaluation contains 375 data frames, encompass diverse road types, surrounding objects and time-of-day situations. Additionally, we conduct experiments in simulation and in the physical world. By leveraging this rich dataset along with simulation and physical-world experiments, our evaluation framework benefits from an accurate representation of real-world driving scenarios. 5.1 Sensitivity Distribution Recognition This section reports on the evaluation of our sensitivity distribution recognition method. We present the qualitative results of the sensitivity heatmap generated by our method and validate the property of the heatmap in Appendix C. We utilize Equation 1 to generate the sensitivity heatmap for the six fusion models, using two different data frames, each with varying proportions of vehicles and pedestrians. Detailed experimental setups can be found in Appendix H. Figure 5 depicts the generated sensitivity heatmaps. The first two images are the scene images captured by the front camera of the ego vehicle while the subsequent rows exhibit the sensitivity distributions of the corresponding scene image using different models. The brightness or warmth of colors in the heatmap corresponds to the sensitivity of a particular region to adversarial attacks. Higher brightness areas signify higher susceptibility to attacks, while lower brightness denotes more robustness. 
---

¹The code is available at: https://github.com/Bob-cheng/CL-FusionAttack

Table 1: Attack performance of the scene-oriented adversarial patch attack against 3D object detection.

| Models | | mAP | CR | TK | BS | TR | BR | PD | BI |
|------------|------|-----|-----|-----|-----|-----|-----|-----|-----|
| BF-PKU | Ben. | 0.824 | 0.453 | 0.448 | 1.000 | 0.991 | 0.898 | 0.990 | 0.989 |
| | Adv. | 0.333 | 0.136 | 0.116 | 0.524 | 0.239 | 0.611 | 0.242 | 0.604 |
| | Diff. | 57.2% | 70.0% | 74.1% | 47.6% | 75.6% | 32.0% | 75.6% | 38.9% |
| BF-MIT | Ben. | 0.886 | 0.538 | 0.939 | 0.858 | 0.992 | 0.895 | 0.989 | 0.990 |
| | Adv. | 0.553 | 0.279 | 0.652 | 0.720 | 0.488 | 0.623 | 0.337 | 0.772 |
| | Diff. | 37.6% | 48.1% | 30.6% | 16.1% | 50.8% | 30.4% | 65.9% | 22.0% |
| TF | Ben. | 0.758 | 0.493 | 0.451 | 0.700 | 0.991 | 0.692 | 0.989 | 0.990 |
| | Adv. | 0.759 | 0.494 | 0.452 | 0.706 | 0.992 | 0.693 | 0.989 | 0.989 |
| | Diff. | 0.1% | 0.2% | 0.2% | 0.9% | 0.1% | 0.1% | 0.0% | 0.1% |
| DI | Ben. | 0.807 | 0.459 | 0.522 | 0.947 | 0.990 | 0.750 | 0.989 | 0.989 |
| | Adv. | 0.808 | 0.460 | 0.529 | 0.947 | 0.990 | 0.751 | 0.989 | 0.989 |
| | Diff. | 0.1% | 0.2% | 1.3% | 0.0% | 0.0% | 0.1% | 0.0% | 0.0% |
| UVTR | Ben. | 0.850 | 0.557 | 0.989 | 0.754 | 0.990 | 0.736 | 0.982 | 0.989 |
| | Adv. | 0.862 | 0.558 | 0.989 | 0.786 | 0.990 | 0.741 | 0.982 | 0.989 |
| | Diff. | 1.4% | 0.2% | 0.0% | 4.8% | 0.0% | 2.6% | 0.7% | 0.0% |
| PointAug | Ben. | 0.724 | 0.471 | 0.466 | 0.683 | 0.992 | 0.714 | 0.984 | 0.981 |
| | Adv. | 0.716 | 0.467 | 0.468 | 0.679 | 0.988 | 0.705 | 0.984 | 0.981 |
| | Diff. | 1.1% | 0.8% | 0.4% | 0.6% | 0.4% | 1.3% | 0.0% | 0.0% |
| BFM | Ben. | 0.519 | 0.417 | 0.811 | 0.280 | 0.247 | 0.712 | 0.650 | 0.518 |
| | Adv. | 0.514 | 0.432 | 0.799 | 0.284 | 0.247 | 0.711 | 0.605 | 0.518 |
| | Diff. | 1.1% | 3.6% | 1.5% | 1.4% | 0.0% | 0.1% | 6.9% | 0.0% |

* CR: Car, TK: Truck, BS: Bus, TR: Trailer, BR: Barrier, PD: Pedestrian, BI: Bicycle

Table 2: Attack performance of the object-oriented adversarial patch attack.

| Models | Targeted object | | | Other objects | | |
|------------|-----------------|------------|-------|--------------|----------|-------|
| | Ben. Score | Adv. Score | Diff. | Ben. mAP | Adv. mAP | Diff. |
| TF | 0.655 | 0.070 | 89.24% | 0.921 | 0.923 | 0.30% |
| DI | 0.658 | 0.110 | 83.32% | 0.964 | 0.965 | 0.13% |
| UVTR | 0.894 | 0.189 | 78.83% | 0.963 | 0.963 | 0.00% |
| PointAug | 0.734 | 0.177 | 75.80% | 0.954 | 0.955 | 0.10% |
| BF-MIT | 0.714 | 0.219 | 69.37% | 0.965 | 0.968 | 0.34% |
| BF-PKU | 0.712 | 0.168 | 76.38% | 0.956 | 0.958 | 0.13% |
| Average | 0.728 | 0.156 | 78.63% | 0.954 | 0.955 | 0.17% |
| BFM | 0.955 | 0.095 | 90.02% | 0.578 | 0.571 | 1.08% |

Table 3: Physical-world attack performance.

| Pedestrian ID | Original | Benign | Adversarial | Difference |
|---------------|----------|--------|-------------|------------|
| 1 | 0.685 | 0.693 | 0.194 | 72.01% |
| 2 | 0.674 | 0.642 | 0.219 | 65.89% |
| 3 | 0.659 | 0.681 | 0.237 | 65.20% |
| Average | 0.673 | 0.672 | 0.217 | 67.76% |

Observe that the sensitive regions for the initial four models, namely Transfusion (Bai et al., 2022), DeepInteraction (Yang et al., 2022), UVTR (Li et al., 2022b) and PointAugmenting (Wang et al., 2021), primarily lie on areas of objects like vehicles and pedestrians. This suggests that attacks on objects could prove to be more effective, whereas non-object areas such as the road and walls are more resistant. The following two models (i.e., BEVFusion-MIT (Liu et al., 2023b) and BEVFusion-PKU (Liang et al., 2022)) demonstrate high sensitivities throughout the entire scene, irrespective of objects or background regions. This indicates their vulnerability at a global level. Our technique also works on camera-only models.
As shown in the last row, the camera-only model (i.e., BEVFormer (Li et al., 2022c)) demonstrates higher sensitivity in the object area, and it is also classified as object-sensitive according to Equation 4. Since different sensitivity types demonstrate distinct vulnerability patterns, we discuss the reasons behind this in our defense discussion (Appendix N).

5.2 SCENE-ORIENTED ATTACKS

Scene-oriented attacks are primarily aimed at fusion models with global sensitivity. Such models are vulnerable to adversarial patches placed on non-object background structures (e.g., the road). Our attack is universal: it can affect the detection of arbitrary dynamic objects in a given scene, even those that were not initially present during patch generation (training). Therefore, our attack is more practical in real-world scenarios, as attackers can effortlessly paste generated patches onto the ground, rendering victim vehicles in close proximity blind. This could pose a great risk to pedestrians and surrounding vehicles. Detailed experimental setups can be found in Appendix H. Table 1 presents the quantitative results of our evaluation, and qualitative examples can be found in Appendix I. In Table 1, the first column lists the models, the second column distinguishes the benign, adversarial, and difference rows, the third column presents the mAP of object detection results on the test set, and the subsequent columns denote the average precision (AP) of different object categories. We report the subject model's benign performance (no patch), adversarial performance (patch applied), and their difference in percentage (attack performance) for each model. Our findings indicate that the detection accuracy of the two globally sensitive models (i.e., BEVFusion-PKU and BEVFusion-MIT) has considerably decreased for all object categories. The mAP decreased by more than 35%. However, the other five models with object sensitivity remain unaffected. These results align with our conclusion in Section 5.1 and further reveal the vulnerability of globally sensitive models to more influential scene-oriented attacks. Additionally, our experiment confirms the robustness of object-sensitive models under attacks in non-object background areas. In comparison, the camera-based model (i.e., BEVFormer) demonstrates worse benign performance than all fusion-based models, but it is also robust to scene-oriented attacks due to its object-sensitive nature. A demo video is available at https://youtu.be/xhXtzDezeaM.

5.3 OBJECT-ORIENTED ATTACKS

Object-oriented attacks target object-sensitive models, which are more robust to attacks in non-object background areas. The influence of this attack is more localized, as opposed to scene-oriented attacks. It concentrates the impact on a specific target object, leaving the detection of other objects unaltered. This approach offers a higher degree of customization for attackers, enabling them to manipulate the impact at the object level rather than the entire scene. Detailed experimental setups can be found in Appendix H. Our evaluation results are presented in Table 2 and qualitative examples are in Appendix I. As shown, the first column represents various fusion models and a camera-only model for comparison, the second to fourth columns display the average detection score of the target object, and the fifth to seventh columns indicate the mAP of other objects (including car, bus, pedestrian and motorcycle).
The results demonstrate a substantial decrease in the target object's detection scores for fusion models, from 0.728 to 0.156 on average, thus validating the efficacy of our object-oriented adversarial attacks across all models regardless of their fusion strategies. Furthermore, the detection results of other objects in the scene remain virtually unaffected, as evidenced by the negligible change in mAP. This phenomenon also holds for the camera-only model, which shows worse benign mAP and more performance degradation under attack. Videos of the attack can be found at https://youtu.be/xhXtzDezeaM.

5.4 PRACTICALITY

To assess the practicality of our single-modal attacks on fusion models, we conducted experiments in both simulated and physical-world environments. Attacks in simulation can be found in Appendix K. We assess the feasibility of our attack in a real-world setting by replacing the front-view images of 30 data frames in the dataset with our custom images taken in the physical world, leaving other views and LiDAR data unchanged. To ensure the compatibility of the dataset's LiDAR data with our custom images, we keep the 3D geometry of our physical scenario consistent with the original dataset images. Figure 6b and Figure 6c illustrate an original image and a custom scenario in our experiment, respectively. Note that both images maintain similar 3D geometry, with pedestrians crossing the road located at similar positions in both cases. Detailed setups are in Appendix H. Figure 6b exhibits the experimental equipment, while Table 3 details the attack performance. The pedestrian ID, corresponding to the pedestrians in Figure 6b, is denoted in the first column of Table 3, with the subsequent columns reporting the average detection scores in the original dataset images (Figure 6b), our benign physical-world images (Figure 6c), and the adversarial physical-world images (Figure 6d). The last column denotes the difference between benign and adversarial physical-world performance. The comparable detection scores in our benign physical-world images and the original dataset images validate the consistency between the original LiDAR data and our custom images, thereby substantiating the efficacy of our image replacement method. Furthermore, the deployment of the adversarial patch results in a significant reduction in the pedestrian detection scores, emphasizing the practicality and effectiveness of our attack strategy in the physical world. We discuss the implications for AV security in Appendix Q.

Ablation Studies and Defense Discussions. We conducted ablation studies on the attack performance under various distances and viewing angles of the adversarial patch (Appendix L), and under various granularities of the sensitivity heatmap (Appendix M). We discussed both architectural-level and DNN-level defense strategies in Appendix N, and the limitations are discussed in Appendix O.

6 CONCLUSION

We leverage the affordable adversarial patch to attack the less significant camera modality in 3D object detection. The proposed optimization-based two-stage attack framework can provide a comprehensive assessment of image areas susceptible to adversarial attacks through a sensitivity heatmap, and can successfully attack six state-of-the-art camera-LiDAR fusion-based models and one camera-only model on a real-world dataset with customized attack strategies.
Results show that the adversarial patch generated by our attack can effectively decrease the mAP of detection performance from 0.824 to 0.353 or reduce the detection score of a target object from 0.728 to 0.156 on average. 7 ETHICS STATEMENT Most of our experiments are conducted in the digital space or in a simulated environment. Our physical-world study involving human subjects underwent thorough scrutiny and approval by an institutional IRB. Notably, we conducted physical experiments in a controlled environment on a closed road, utilizing a camera and tripod to capture scenes instead of employing real cars, as elucidated in Appendix H. This deliberate choice minimizes potential threats to the safety of participants. Stringent protocols were implemented, including participants not facing the camera, wearing masks during experiments, and blurring their faces in the photos. No identifiable information from the volunteers is retained by the researchers. 8 ACKNOWLEDGEMENTS This research was supported, in part by IARPA TrojAI W911NF-19-S-0012, NSF 2242243, 1901242 and 1910300, ONR N000141712045, N00014-1410468 and N000141712947, National Research Foundation of Korea(NRF) grant funded by the Korean government (MSIT) RS-2023-00209836. REFERENCES Mazen Abdelfattah, Kaiwen Yuan, Z Jane Wang, and Rabab Ward. Adversarial attacks on camera-lidar models for 3d car detection. In *IROS*, 2021a. Mazen Abdelfattah, Kaiwen Yuan, Z Jane Wang, and Rabab Ward. Towards universal physical attacks on cascaded camera-lidar 3d object detection models. In *ICIP*, 2021b. AIDay. Tesla Autopilot Uses Transformer, 2022. [https://youtu.be/j0z4FweCy4M?t=3621](https://youtu.be/j0z4FweCy4M?t=3621) Rhett Allain. What Is the Angular Field of View for an iPhone 13?, 2022. [https://rjalla.in.medium.com/what-is-the-angular-field-of-view-for-an-iphone-13-199969482531](https://rjalla.in.medium.com/what-is-the-angular-field-of-view-for-an-iphone-13-199969482531) Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. In *ICML*, 2018. Autoware. Autoware. [https://www.autoware.org/](https://www.autoware.org/) Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu, and Chiew-Lan Tai. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In *CVPR*, 2022. BaiduApollo. Baidu Apollo. [https://apollo.auto/index.html](https://apollo.auto/index.html) Adith Boloor, Karthik Garimella, Xin He, Christopher Gill, Yevgeniy Vorobeychik, and Xuan Zhang. Attacking vision-based perception in end-to-end autonomous driving models. *Journal of Systems Architecture*, 2020. Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch. *arXiv preprint arXiv:1712.09665*, 2017. Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In *CVPR*, 2020. Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z Morley Mao. Adversarial sensor attack on lidar-based perception in autonomous driving. In *CCS*, 2019. Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, and Bo Li. Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks. In *S&P*, 2021.
9XdLlbxZCC
Missing analysis. From Table 4, we can see that the model's performance is quite sensitive to the backbone used (more than 10% between ResNet-50 and ConvNeXt). However, the authors didn't give an explanation. Besides, the proposed model uses six loss terms in total for both flow estimation and content learning. I am wondering how to decide the trade-offs and if they would affect the final results.
MC-JEPA: A JOINT-EMBEDDING PREDICTIVE ARCHITECTURE FOR SELF-SUPERVISED LEARNING OF MOTION AND CONTENT FEATURES

Anonymous authors
Paper under double-blind review

ABSTRACT

Self-supervised learning of visual representations has been focusing on learning content features, which do not capture object motion or location, and focus on identifying and differentiating objects in images and videos. On the other hand, optical flow estimation is a task that does not involve understanding the content of the images on which it is estimated. We unify the two approaches and introduce MC-JEPA, a joint-embedding predictive architecture and self-supervised learning approach to jointly learn optical flow and content features within a shared encoder, demonstrating that the two associated objectives (the optical flow estimation objective and the self-supervised learning objective) benefit from each other and thus learn content features that incorporate motion information. The proposed approach achieves performance on par with existing methods on unsupervised optical flow benchmarks, as well as with common self-supervised learning approaches on downstream tasks such as semantic segmentation of images and videos.

1 INTRODUCTION

Self-supervised learning in vision has lately been dominated by approaches that aim at learning content features, i.e., features containing information that makes it possible to identify and differentiate objects, in images (Chen et al., 2020a; Grill et al., 2020; Chen & He, 2020; Zbontar et al., 2021; Bardes et al., 2022a; Caron et al., 2020; 2021; Zhou et al., 2022; Assran et al., 2022; 2023) or videos (Qian et al., 2021; Recasens et al., 2021; Feichtenhofer et al., 2021; Tong et al., 2022). Most methods focus on learning global features that achieve strong results in tasks such as object classification or action recognition in videos. A more recent trend aims at learning localized features that perform well on local tasks such as detection and segmentation (Xiao et al., 2021; Wang et al., 2021; Hénaff et al., 2021; 2022; Bardes et al., 2022b). However, these methods focus on understanding the content of images and videos and are not able to learn information at the pixel level, such as motion in videos or details in textures. In this paper, we focus on jointly learning motion features by using self-supervised optical flow estimation (Horn & Schunck, 1981) from videos as a pretext task, and content features with general self-supervised learning. Optical flow captures the motion, or dense-pixel correspondence, that occurs between two images, for instance consecutive frames in a video, or images from a stereo pair. Estimating it is a fundamental problem in computer vision, whose solution is key to tasks such as visual odometry, depth estimation, or object tracking. Classical approaches cast optical flow estimation as an optimization problem (Horn & Schunck, 1981; Brox et al., 2004), where the objective is to match pixels with a smoothness constraint. Approaches based on neural networks and supervised learning (Yu et al., 2016; Ilg et al., 2017; Hui et al., 2018; Sun et al., 2018; Yang & Ramanan, 2019; Zhao et al., 2020; Teed & Deng, 2020; Jiang et al., 2021; Bai et al., 2022) are limited by the difficulty of labelling data in the real world, compared to using synthetic data.
Self-supervised methods allow learning from large collections of real-world video data (Ren et al., 2017; Liu et al., 2019a;b; Zhong et al., 2019; Im et al., 2020; Liu et al., 2020; Luo et al., 2021; Jonschkowski et al., 2020; Stone et al., 2021) and offer an alternative that is now competitive with supervised approaches. However, most current methods only focus on motion without relying on the (semantic) content of the video, a problem that we solve by learning motion and content features in images at the same time with a multi-task approach.

Figure 1: Multi-task self-supervised learning of content and motion features. MC-JEPA combines self-supervised content feature learning and optical flow estimation in a multi-task setup with a single shared encoder. The self-supervised content feature learning objective is trained on ImageNet, and the self-supervised flow estimation task is trained on various video datasets. Our final encoder produces features that carry both motion and content information, and that can be used to estimate optical flow in videos or for content-understanding downstream tasks.

Recent techniques learn spatial correspondences between video frames (Jabri et al., 2020; Bian et al., 2022; Xu & Wang, 2021; Tokmakov et al., 2022). The goal is to track the location of objects and therefore capture content information that optical flow estimation does not. These approaches can be seen as object-level motion estimation. They learn features that are very specific to the tracking task, with very poor generalization to other visual downstream tasks. Very often, they are trained on small video datasets that are not as diverse as large image datasets such as ImageNet (Deng et al., 2009), which reinforces the poor quality of the visual features learned. A more reliable way to build visual representations is to learn multiple tasks at the same time (Zhang et al., 2021; Girdhar et al., 2022). We thus propose MC-JEPA (Motion-Content Joint-Embedding Predictive Architecture), a method that learns optical flow estimation and content features, in a multi-task setting with a shared encoder, with a joint-embedding predictive architecture (LeCun, 2022). Our contributions can be summarized as follows:

• We propose a method for learning self-supervised optical flow from synthetic and real video data, based on PWC-Net (Sun et al., 2018), and improved with several additional components such as a backward consistency loss and a variance-covariance regularization term. We call this first method M-JEPA.

• We combine M-JEPA in a multi-task setup with VICReg (Bardes et al., 2022a), a self-supervised learning method trained on ImageNet, in order to improve our estimated flow and produce content features that transfer well to many downstream tasks. Our final method is called MC-JEPA.

• We evaluate MC-JEPA on a range of optical flow benchmarks such as KITTI 2015 (Menze & Geiger, 2015) and Sintel (Butler et al., 2012), image and video segmentation tasks on Cityscapes (Cordts et al., 2016) or DAVIS (Pont-Tuset et al., 2017), and demonstrate strong performance on all these tasks with a single encoder.

We hope that MC-JEPA will be a first step towards self-supervised learning approaches that are based on multi-task learning and joint-embedding architectures, that can be trained on any visual data, images or video, and that generalize well on a wide range of tasks, from motion prediction tasks to content understanding tasks.

2 RELATED WORK

Self-supervised learning.
The recent advances in self-supervised learning have been mainly driven by the general approach of learning invariances to hand-crafted data augmentations, using a joint-embedding architecture.

Figure 2: **MC-JEPA architecture.** Our method learns motion through optical flow estimation on videos and content through joint-embedding of views of images, in a multi-task way with a shared encoder. Our optical flow estimation architecture is based on PWC-Net (Sun et al., 2018) and works as follows. Given a pair of consecutive frames $I_t$, $I_{t+1}$ in a video, an encoder produces a set of pyramidal features $\{X^{(l)}_t\}$ and $\{X^{(l)}_{t+1}\}$. The flow is estimated in a coarse-to-fine manner, starting at the lowest resolution features $X^{(1)}$. A first flow $f^{(2)}_{t,t+1}$ is estimated by the flow estimator network, then used to warp the features $X^{(2)}_t$, which are compared to $X^{(2)}_{t+1}$ with a regression loss. The flow is then iteratively refined at every layer by predicting the residual flow and adding it to the previous layer's flow. The final flow is used to warp $I_t$ and compare the warped image with $I_{t+1}$ using a reconstruction loss. Forward-backward flow consistency is encouraged with the cycle consistency losses, which minimize the distance between $X^{(l)}_t$ and $f^{(l)}_{t,t+1}(f^{(l)}_{t+1,t}(X^{(l)}_t))$ at every layer. When the encoder is trained in the multi-task setup with a standard self-supervised learning criterion, the training is very unstable, which is prevented by the variance-covariance regularization term on every feature layer.

Among self-supervised learning methods learning from images, contrastive methods push together concepts that are visually close and push away concepts that are different in the embedding space (Hjelm et al., 2019; Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; Mitrovic et al., 2021; Dwibedi et al., 2021; Chen et al., 2021; Tomasev et al., 2022; Li et al., 2022), clustering methods categorize embeddings into a balanced set of clusters (Caron et al., 2018; 2020; 2021), and non-contrastive methods prevent collapsing solutions either with architectural tricks (Grill et al., 2020; Lee et al., 2021; Chen & He, 2020) or with covariance-based regularization (Ermolov et al., 2021; Zbontar et al., 2021; Bardes et al., 2022a; Garrido et al., 2023b), which is equivalent under some assumptions to contrastive methods (Garrido et al., 2023a). Finally, some methods are based on masking and patch-reconstruction (Bao et al., 2022; He et al., 2022; Zhou et al., 2022; Assran et al., 2022; 2023). These methods focus on learning a global representation of the input, which is best suited for classification tasks. Dense self-supervised learning focuses instead on learning local features (Xie et al., 2021; Wang et al., 2021; Xiao et al., 2021; Yang et al., 2021; Wang et al., 2022; Yang et al., 2022; Hénaff et al., 2021; 2022; Chen et al., 2022; Caron et al., 2023), which is best suited for detection and segmentation downstream tasks. The loss functions and methods developed with images have led to the application of similar approaches to videos (Qian et al., 2021; Recasens et al., 2021; Feichtenhofer et al., 2021; Tong et al., 2022; Parthasarathy et al., 2022), with the objective of learning a representation that transfers well to action recognition benchmarks.
**Optical flow estimation.** Classical techniques for optical flow estimation are based on the optimization of a matching term and a smoothness term for a given pair of images, without any kind of learning (Horn & Schunck, 1981; Brox et al., 2004; Sun et al., 2010). Later, methods based on supervised learning and convolutional neural networks emerged, first without any architectural prior (Yu et al., 2016; Ilg et al., 2017), then with architectures specifically designed to tackle flow estimation (Ranjan & Black, 2017; Sun et al., 2018; Yang & Ramanan, 2019; Teed & Deng, 2020). Supervised flow estimation is limited to learning with synthetic data, and unsupervised flow estimation is a promising direction towards learning on any video data. Photometric consistency was introduced by Ren et al. (2017) and is the basis of every unsupervised optical flow estimation method. Additional self-supervision signals can be found with distillation of reliable matches (Liu et al., 2019b;a), global geometric constraints (Zhong et al., 2019), or data augmentation consistency (Liu et al., 2020; Stone et al., 2021). Fusing multi-layer similarities (Im et al., 2020) and carefully designing the interpolation for upsampling (Luo et al., 2021) further improve the estimated flow quality. Finally, a comprehensive set of additional tricks that help unsupervised optical flow is presented in Jonschkowski et al. (2020).

**Learning correspondences.** Learning from videos has been focusing on learning a global representation for a video, but another interesting task is learning spatial correspondences between consecutive frames. A promising direction for learning these correspondences is contrastive random walks (Jabri et al., 2020), which can also be done at the pixel level (Bian et al., 2022). Correspondences can also be learned at the object level (Xu & Wang, 2021; Patrick et al., 2021), or combined with a memory (Tokmakov et al., 2022), in order to deal with occluded objects. Learning optical flow can be seen as learning correspondences at the pixel level, which is not captured by popular self-supervised learning methods.

**Multi-task Learning.** Multi-task learning is commonly used to train an encoder on multiple tasks, when the different tasks benefit from each other. Several works use it to learn a shared representation between images and videos (Zhang et al., 2021; Girdhar et al., 2022). However, very few works use multi-task learning for self-supervised learning: the idea was introduced in Doersch & Zisserman (2017) and used for anomaly detection tasks in Georgescu et al. (2021), without much follow-up work. We simply use multi-task learning for learning self-supervised content features and optical flow at the same time with a single shared encoder.

### 3 PROPOSED APPROACH

In this section, we describe our architecture and improvements for self-supervised optical flow estimation with a hierarchical coarse-to-fine approach, the loss functions of our method, our general self-supervised objective and multi-task setup, our data sampling strategy, and a set of tricks for stabilizing training. Section 3.1 introduces our M-JEPA method for optical flow estimation, and Section 3.2 presents how we combine M-JEPA with multi-task learning into our final MC-JEPA method.
#### 3.1 OPTICAL FLOW

Given a pair of RGB images, $I_t, I_{t+1} \in \mathbb{R}^{3,H,W}$, the corresponding optical flow is defined by the correspondence map $f \in \mathbb{R}^{2,H,W}$, which, for a given position in $I_t$, denotes the position of the corresponding pixel in $I_{t+1}$. The goal is to learn a flow estimator function $F_\theta$ with parameters $\theta$, which outputs the flow for a pair of images $f = F_\theta(I_t, I_{t+1})$, by training it on a set of image sequences $D = \{\{I_t\}_{t=1}^T\}_{i=1}^N$. Unsupervised flow estimation usually works with a regression loss, or photometric consistency loss, which ensures that the image $I_t$ warped by the predicted flow $f$ is consistent with $I_{t+1}$, and a regularizer that encourages $f$ to be smooth. Most methods differ in the way these terms are implemented, in the details of the encoder and flow estimator architecture, and in additional self-supervisory signals.

**Regression and smoothness.** We use the coarse-to-fine hierarchical flow estimator PWC-Net (Sun et al., 2018), which we adapt to work with our custom encoder architecture described in Appendix C. Given a set of features $X^{(l)}_t, X^{(l)}_{t+1} \in \mathbb{R}^{d(l) \times h(l) \times w(l)}$, corresponding to level $l$ of the pyramids for images $I_t$ and $I_{t+1}$ with $l \in \{1, ..., L\}$, we first estimate a flow $f^{(2)}_{t,t+1} = F_\theta(X^{(1)}_t, X^{(1)}_{t+1}, 0)$, then recursively refine this flow at higher and higher resolutions by predicting the residual flow at every layer:

$$f^{(l+1)}_{t,t+1} = F_\theta(X^{(l)}_t, X^{(l)}_{t+1}, f^{(l)}_{t,t+1}).$$

Our estimator $F_\theta(X_t, X_{t+1}, f)$ works as follows. First, the feature $X_t$ is warped as $\hat{X}_{t+1} = f(X_t)$, then a 4D correlation volume $V = \hat{X}_{t+1}X^T_{t+1}$ is calculated and fed to a small convolutional network $g_\phi(V, X_t, \hat{X}_{t+1}, f)$ which predicts the residual flow. We then use a multi-scale loss on the intermediate feature layers of the encoder, defined as follows:

$$L_{reg} = \sum_{l=1}^L \|X^{(l)}_{t+1} - \hat{X}^{(l)}_{t+1}\|_2^2.$$

Table 1: **Quantitative results.** Comparison of the performance of our model on: (1) Sintel (Butler et al., 2012) clean and final, and KITTI 2015 (Menze & Geiger, 2015) optical flow estimation benchmarks; (2) Pascal VOC (Everingham et al., 2010), Cityscapes (Cordts et al., 2016) and ADE20k (Zhou et al., 2019), both frozen and fine-tuned linear segmentation benchmarks; (3) the DAVIS-2017 (Pont-Tuset et al., 2017) video object segmentation benchmark, against several self-supervised methods optimized for a single task specifically. EPE is the average end-point error (↓ Lower is better). F1 is the average F1 error in % (↓ Lower is better). mIoU is the mean intersection-over-union (↑ Higher is better). \((J\&F)_m\) is the average between mean region similarity and mean contour-based accuracy (↑ Higher is better). MC-JEPA is our full model trained in a multi-task way on ImageNet and flow estimation. M-JEPA is our model without content learning, trained only on flow estimation. The best and second-best results for each benchmark are **bold** and underlined.
| Method | Backbone | Optical Flow Estimation | Image Segmentation | Video Seg | |-----------------|----------|-------------------------|--------------------|-----------| | | | Sintel Clean | Sintel Final | KITTI 2015 | Pascal VOC | CityScapes | ADE20k | Davis 2017 | | | | train test EPE | test EPE | train test EPE | EPE | F1 | Frozen FT | mIoU | Frozen FT | mIoU | Frozen FT | mIoU | \((J\&F)_m\) | | Rand. weights | CNX-T | 23.71 - | 24.02 - | 24.88 - | 0.5 - | - - | - - | - - | - - | - - | - - | - - | | flow methods | | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | | UFlow (Jonschkowski et al., 2020) | PWC | 2.50 5.21 | 3.39 6.50 | 2.71 11.13 | 7.8 - | - - | - - | - - | - - | - - | - - | - - | | ARFlow (Liu et al., 2020) | PWC | 2.79 4.78 | 3.73 5.89 | 2.85 11.80 | 7.9 - | - - | - - | - - | - - | - - | - - | - - | | UPFlow (Luo et al., 2021) | PWC | 2.33 4.68 | 2.67 5.32 | 2.45 9.38 | 8.8 - | - - | - - | - - | - - | - - | - - | - - | | SMFlow (Liu et al., 2021) | RAFT | 1.71 3.15 | 2.55 4.18 | 2.00 8.85 | 10.4 - | - - | - - | - - | - - | - - | - - | - - | | correspondence methods | | R50 | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | PWC | | VFS (Xu & Wang, 2021) | R50 | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | | MCRW (Bian et al., 2022) | PWC | 2.84 5.68 | 3.82 6.72 | 2.81 11.67 | 39.8 - | - - | - - | - - | - - | - - | - - | - - | | content methods | | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | | VICReg (Bardes et al., 2022a) | CNX-T | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | | VICRegL (Bardes et al., 2022b) | CNX-T | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | | MoCo v3 (Chen et al., 2021) | ViT-S | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | | DIINO (Caron et al., 2021) | ViT-S | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | - - | | ours | M-JEPA | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | | | | 2.98 - | 3.82 - | 3.01 - | 9.4 - | - - | - - | - - | - - | - - | - - | - - | - - | | | MC-JEPA | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | CNX-T | | | | 2.81 5.01 | 3.51 6.12 | 2.67 11.33 | 67.1 79.9 | 65.5 78.4 | 30.8 44.2 | 70.5 | and a reconstruction loss on the last layer that is at the image level: \[ L_{rec} = d(I_{t+1}, \hat{I}_{t+1}), \] where \(d\) is a loss function that is a linear combination of an \(l_2\), \(l_1\), and SSIM losses. In addition, we use the smoothness regularizer of (Jonschkowski et al., 2020) that constrains the produced flow to be smooth, and allows us to deal with repetitive or textureless patterns: \[ L_{smooth} = \sum_{d \in \{x,y\}} \sum_p \exp(-\lambda \nabla_d I) \| \nabla_d f_{t,t+1} \|_1, \] where \(x\) and \(y\) are directions in which the predicted flow is constrained to remain stable if the image gradient does not significantly change. **Cycle consistency.** Flow estimation is a non-symmetric operation, as not all pixels of \(I_t\) have a correspondence in \(I_{t+1}\) and vice versa. For a given pair of images, we estimate both the forward and backward flows. 
We introduce a cycle-consistency loss that constrains the features \(X_t\), warped by \(f_{t,t+1}\) and then by \(f_{t+1,t}\), to match \(X_t\); the loss is defined as follows:

\[ L_{cycle} = \sum_{l=1}^L \| X_t^{(l)} - f_{t+1,t}(f_{t,t+1}(X_t^{(l)})) \|_2^2, \]

where \(f(X)\) is the warping operation of \(X\) by flow \(f\). We symmetrize the loss and do the same for \(X_{t+1}\). In order to deal with occlusion, we follow (Liu et al., 2019a) and use forward-backward compatibility, only applying \(L_{reg}\) on the pixels that have a correspondence in both the forward and the backward flows.

Figure 3: **Qualitative visualization: optical flow.** We compare the results of our complete model (MC-JEPA) and of our model pretrained only on flow (M-JEPA) with ARFlow. Top 2 rows are from KITTI-15, bottom 2 rows are from Sintel clean and Sintel final.

**Variance-covariance regularization.** Finally, in order to regularize the features produced by our encoder, we introduce a variance-covariance regularization loss function (Bardes et al., 2022a), defined as follows:

\[ L_{vc} = \sum_{l=1}^{L} \left( \frac{1}{d} \sum_{j=1}^{d} \max\left(0, \gamma - \sqrt{\text{Var}(X_{t,j}^{(l)}) + \epsilon}\right) + \frac{1}{d} \sum_{i \neq j} [C(X_{t}^{(l)})]_{i,j}^2 \right), \]

where Var is the empirical variance and \( C \) is the empirical covariance matrix after centering the features. This loss helps stabilize training in the multi-task setup described in Section 3.2, and also improves the performance of the method as shown by Table 11.

### 3.2 Multi-task self-supervised learning

This section describes how we combine M-JEPA with content learning into our final MC-JEPA method.

**Learning content features.** We follow the literature (Chen et al., 2020a; Grill et al., 2020; Caron et al., 2020; Bardes et al., 2022a) and learn content features by simply pre-training our encoder to jointly embed two views of an image. We generate the views using image transformations such as random cropping and color jittering. In particular, we use the VICReg objective (Bardes et al., 2022a) and follow its protocol. From a seed image sampled in an unlabelled training dataset \( D \), two views are generated using common data augmentations such as random cropping and color jittering; the views are then rescaled to a fixed size, fed to an encoder, and then mapped to an expander network on which the VICReg loss is applied. The VICReg loss \( L_{ssl} \) is similar to Eq. (6), with, in addition, an invariance term (an \( l_2 \) loss) that brings the embeddings of the two views closer to each other and is minimized over \( D \).

**Multi-task learning.** At a given iteration of training, we sample a batch of sequences from our video dataset and compute the flow loss, then sample a batch of images from ImageNet and compute our self-supervised learning loss, and then add the two losses and back-propagate the gradients into our encoder, expander, and flow estimator network. The encoder architecture and weights are shared between the two tasks. We illustrate our approach in Figure 1 for the general idea and Figure 2 for the detailed architecture. The final loss function that MC-JEPA optimizes is defined as follows:

\[ \sum_{D_1} \left( L_{rec} + L_{reg} + L_{smooth} + L_{cycle} + L_{vc} \right) + \sum_{D_2} L_{ssl}, \]

where \( D_1 \) is our video sequence dataset and \( D_2 \) is our image dataset. The losses are balanced with additional coefficients that we tune carefully. Additional details are given in Appendix B, including the values we use for these coefficients.
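As a concrete illustration of the regularizer in Eq. (6), which also enters the combined objective above, the following is a minimal sketch of the variance-covariance term applied to a single feature level. It assumes the features have already been flattened to a matrix of shape (N, d), with N spatial positions (or samples) and d channels; the threshold `gamma` and `eps` values are illustrative placeholders rather than the paper's tuned settings.

```python
import torch

def variance_covariance_loss(x, gamma=1.0, eps=1e-4):
    """Sketch of the variance-covariance regularizer (Eq. 6) for one feature level.

    x: (N, d) tensor of features, e.g. a flattened pyramid level X_t^{(l)}.
    """
    x = x - x.mean(dim=0)                         # center features
    std = torch.sqrt(x.var(dim=0) + eps)          # per-channel standard deviation
    var_loss = torch.relu(gamma - std).mean()     # hinge keeps each channel's std above gamma
    n, d = x.shape
    cov = (x.T @ x) / (n - 1)                     # empirical covariance matrix, (d, d)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = off_diag.pow(2).sum() / d          # penalize off-diagonal correlations
    return var_loss + cov_loss
```

In the full objective, this term would be computed on every pyramid level $X_t^{(l)}$ and summed, which is what the paper credits with preventing the training instabilities otherwise observed in the multi-task setup.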
Figure 4: **Qualitative visualization: video segmentation.** We visualize the segmentation maps obtained by the frozen features learnt with MC-JEPA on the video instance tracking task on DAVIS 2017, for several video sequences, at frames t=1,10,25,50. Frame 1 is given as ground truth, and the others are predicted by our model. ### 4 EXPERIMENTS #### 4.1 DATASETS Our model is pretrained in a single phase on a set of datasets commonly used for optical flow estimation, as well as on ImageNet-1k (Deng et al., 2009). Our video and flow datasets are KITTI (raw (A. et al., 2013), 2012 multiview (Geiger et al., 2012) and 2015 multiview (Menze & Geiger, 2015)), MPI Sintel (Butler et al., 2012) (clean, final and raw movie), FlyingChairs (Yu et al., 2016), FlyingThings (N. et al., 2016), and HD1K (D. et al., 2016). We evaluate the quality of our estimated flow on Sintel clean and final and KITTI 2015 and compare our model with state-of-the-art methods in self-supervised flow estimation. We evaluate the quality of our features on instance segmentation on Pascal VOC (Everingham et al., 2010), CityScapes (Cordts et al., 2016) and ADE20k (Zhou et al., 2019), both in linear frozen and fine-tuning evaluation. Finally, we evaluate our model on the DAVIS 2017 (Pont-Tuset et al., 2017) video segmentation and instance tracking benchmark popularized by (Caron et al., 2021). #### 4.2 MAIN RESULTS **Optical flow.** We compare the flow estimated by our model with several state-of-the-art methods optimized for flow estimation, as well as with MCRW, which discovers the flow by learning contrastive random walks between pixels. Table 1 presents our results, which are on par with UFLow (Jonschkowski et al., 2020), ARFlow (Liu et al., 2020) and UPFLow (Luo et al., 2021), which are all optimized for flow estimation. SMURF (Stone et al., 2021) is better on all the benchmarks, but our goal is not to learn the best flow possible but rather to use it as a pretext task to learning general features and motion. However, we outperform MCRW which shares the same goal. Figure 3 presents our optical flow qualitative results. **Instance Segmentation.** Table 1 presents the performance of MC-JEPA in various frozen and fine-tuned linear segmentation tasks, which are commonly used to evaluate the quality of the features learned by self-supervised learning models (Zhou et al., 2022; Bardes et al., 2022b). We outperform MoCo v3 (Chen et al., 2021) and VICReg (Bardes et al., 2022a), which is the method we use for our content features learning, by a large margin, which indicates that our flow estimation pretext task significantly helps the localization. Our results are on-par with VICRegL (Bardes et al., 2022b) which is specialized for segmentation and DINO (Caron et al., 2021) which has among the best self-supervised features available. **Video Segmentation.** Finally, we compare the performance of MC-JEPA on a video segmentation instance tracking task on the DAVIS 2017 dataset, against VFS (Xu & Wang, 2021) and MCRW (Bian et al., 2022) which are correspondence learning methods and DINO. We outperform all these methods, which shows that learning motion through flow estimation is a good way of improving the learning of content features for tasks that requires motion information. Figure 4 shows qualitative results on DAVIS 2017. 
Overall, our method allows us to train a single model that performs very well on all the above-mentioned tasks, whereas all the concurrent works are specialized for either content feature learning or motion and optical flow estimation learning.

Table 2: **Ablation: flow datasets.** Impact on performance when varying the set of pretraining datasets. KITTI means pretraining on KITTI raw, 2012 and 2015. Sintel means pretraining on Sintel raw, clean and final. FT/FC are FlyingThings and FlyingChairs. The metric for K15 (KITTI 2015), clean and final is the EPE. ISeg is the linear frozen evaluation on Pascal VOC, in mIoU; VSeg is the evaluation on DAVIS 2017, in \((J\&F)_m\).

| KITTI | Sintel | FT/FC | HD1k | K15 | clean | final | ISeg | VSeg |
|-------|--------|-------|------|-----|-------|-------|------|------|
| ✓ | | | | 2.93 | 3.23 | 3.96 | 66.8 | 70.0 |
| ✓ | ✓ | | | 3.78 | 2.95 | 3.61 | 66.4 | 69.9 |
| ✓ | ✓ | ✓ | | 2.91 | 2.99 | 3.70 | 67.2 | 70.4 |
| ✓ | ✓ | ✓ | ✓ | 2.88 | 2.93 | 3.66 | 67.1 | 70.3 |
| ✓ | ✓ | ✓ | ✓ | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

Table 3: **Ablation: estimator architecture.** Comparison between different flow estimator sizes and forms of normalization. The factor size influences the number of filters in each convolution of the estimator. LN means usage of layer norm after every layer of the estimator, except the last one. l2 means l2-normalization before the last layer of the estimator.

| Factor size | #Params | LN | l2 | K15 | clean | final | ISeg | VSeg |
|-------------|---------|----|----|-----|-------|-------|------|------|
| 1 | 2M | ✓ | | crashed | | | | |
| 1 | 2M | ✓ | | 2.68 | 2.88 | 3.57 | 67.0 | 70.2 |
| 1 | 2M | ✓ | | 6.21 | 6.04 | 6.99 | 53.2 | 47.9 |
| 1 | 2M | ✓ | ✓ | 4.55 | 4.47 | 5.66 | 62.3 | 63.6 |
| 2 | 8M | ✓ | | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

### 4.3 ABLATIONS

We perform many ablations on the components and training procedure of MC-JEPA, and evaluate our models on KITTI 2015 train (K15 in tables, metric is EPE), Sintel clean and final (clean and final in tables, metric is EPE), Pascal VOC linear frozen evaluation (ISeg in tables, metric is mIoU), and DAVIS 2017 video segmentation (VSeg in tables, metric is \((J\&F)_m\)), which are all relatively fast to perform.

**Flow datasets.** We start by evaluating the effect of varying the set of data used for flow estimation. Table 2 presents our results when incorporating or not various datasets. As expected, training on only KITTI or Sintel offers great performance on their respective evaluation sets. Progressively adding FlyingChairs and Things, and HD1k, improves the flow results, but has very little influence on the segmentation tasks. The benefit on segmentation from doing flow estimation is independent of the domain on which the flow estimator is trained.

**Flow estimator architecture.** When pretraining in our multi-task setup with ImageNet, we observed many instabilities related to the gradients and the exploding norm of the estimator, which we describe in Section A. We tried several changes to the flow estimator architecture to overcome these issues, namely using LayerNorm and l2-normalization. Table 3 presents our results when incorporating these elements, as well as when increasing the size of the estimator. Not regularizing the estimator led to crashing runs. l2-normalization is very inefficient, as it constrains the last layer to directly produce flows in the correct range of values.
Using LayerNorm is the best solution and effectively prevents the estimator from exploding norms and gradients. Increasing the size of the estimator marginally improves the results.

**Backbone.** Our backbone is a ConvNeXt-T (Liu et al., 2022); we study the impact of pretraining models with other backbones, in particular ResNet-50, and the backbone of PWC-Net (Sun et al., 2018) commonly used by concurrent flow estimation methods. Table 4 presents our results. The original PWC backbone is not adapted to learn good content features, and ResNet-50 results are not as good as ConvNeXt-T results.

**Data sampling.** We experiment with different strategies for sampling the data. For a simple baseline, we use a self-supervised model pretrained on ImageNet and train the flow estimator on top of the frozen features, or by fine-tuning the model. We demonstrate the usefulness of multi-task learning by playing with various other strategies: either we alternate between one epoch of ImageNet learning and one epoch of flow estimation, or we alternate between one batch of each, or, finally, we sample a batch from each and back-propagate through the sum of the losses. Table 5 presents our results for each strategy. Training the flow estimator on top of frozen features is too hard of a constraint, but even when fine-tuning is done, optimizing the flow estimation task degrades the performance on segmentation too much. Alternating between epochs is not optimal, and the best solution is to alternate between batches and even combine the losses for optimal flow estimation results.

Table 4: **Ablation: backbone.** Comparison of the performance of MC-JEPA when using different backbones.

| Backbone | #Params | K15 | clean | final | ISeg | VSeg |
|--------------|---------|------|-------|-------|------|------|
| PWC-Net | 8M | 2.66 | 2.80 | 3.47 | 14.8 | 10.1 |
| ResNet-50 | 21M | 2.71 | 2.85 | 3.59 | 55.8 | 60.1 |
| ConvNeXt-T | 23M | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

Table 5: **Ablation: data sampling.** Comparison between different training orders and data sampling strategies.

| Strategy | K15 | clean | final | ISeg | VSeg |
|---------------------------|-------|-------|-------|------|------|
| Flow estimator training | 13.52 | 13.82 | 14.81 | 60.1 | 65.2 |
| Flow estimator fine-tuning | 2.71 | 2.82 | 3.77 | 61.3 | 62.3 |
| Epoch alternation | 4.54 | 4.91 | 5.57 | 63.5 | 66.9 |
| Batch alternation | 2.78 | 2.95 | 3.62 | 67.1 | 70.5 |
| Combined loss | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

Figure 5: (1) **Ablation: flow start epoch.** Flow estimation performance as a function of the ImageNet training epoch from which flow estimation starts. There are 100 pretraining epochs in total. (2) **Ablation: cycle consistency coefficient.** Flow estimation performance as a function of the coefficient used to balance the cycle consistency loss of Eq (5). (3) **Ablation: multi-task balancing coefficient.** Flow estimation and segmentation performance as a function of the balancing coefficient between flow losses and SSL loss in Eq (7).

**Flow start epoch.** We found that starting the multi-task learning of flow and content features at the beginning of training was not necessary, as the features change very fast; we instead start with ImageNet pretraining only and introduce flow estimation after a given number of epochs. Figure 5 (1) shows that starting after 10 epochs of ImageNet pretraining is the best among several values, when the total number of epochs is fixed to 100.
Starting later and doing fewer flow estimation epochs saves a lot of computation time while giving similar results. **Cycle consistency.** Figure 5 (2) shows an ablation on the cycle consistency coefficient that controls the importance of the cycle consistency loss of Eq (5). Introducing the loss significantly improves the flow estimation, which is explained by the fact that it adds an additional constraint on the embeddings to be predictable from each other. The coefficient needs to be carefully tuned, as the performance is very sensitive to it. **Multi-task balancing coefficient.** Figure 5 (3) shows an ablation on the multi-task coefficient that balances our flow estimation loss and our content features loss. We already observe a significant improvement when introducing flow estimation, even with a very small coefficient. As we increase the coefficient, both the flow estimation and segmentation improve until we reach a threshold (0.1), after which the segmentation results degrade a lot. This shows that even if flow estimation improves the segmentation performance, there is a trade-off between learning motion and content features, and tuning the multi-task coefficient is crucial to maintain a strong level of performance for both. 5 CONCLUSION We have introduced MC-JEPA, a multi-task approach to learning of motion and content features with self-supervised learning and optical flow estimation. MC-JEPA performs well in a wide variety of tasks, ranging from optical flow estimation to segmentation of images and videos. We hope that our approach will foster the use of multi-task learning in self-supervised learning, which might be a path towards learning features that generalize to any downstream task. Future work will learn motion and content from larger collections of natural videos and train the two objectives in a shared data domain, capturing short- and long-range interactions in a hierarchical way. REFERENCES Geiger A., Lenz P., Stiller C., and Urtasun R. Vision meets robotics: The kitti dataset. In *IJRR*, 2013. Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, and Nicolas Ballas. Masked siamese networks for label-efficient learning. In *ECCV*, 2022. Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. *arXiv preprint arXiv:2301.08243*, 2023. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016. Shaojie Bai, Zhengyang Geng, Yash Savani, and J. Zico Kolter. Deep equilibrium optical flow estimation. In *CVPR*, 2022. Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: Bert pre-training of image transformers. In *ICLR*, 2022. Adrien Bardes, Jean Ponce, and Yann LeCun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. In *ICLR*, 2022a. Adrien Bardes, Jean Ponce, and Yann LeCun. Vicregl: Self-supervised learning of local visual features. In *NeurIPS*, 2022b. Zhangxing Bian, Allan Jabri, Alexei A. Efros, and Andrew Owens. Learning pixel trajectories with multiscale contrastive random walks. In *CVPR*, 2022. Thomas Brox, Andres Bruhn, Nils Papenberg, and Joachim Weickert. High accuracy optical flow estimation based on a theory for warping. In *ECCV*, 2004. Daniel J Butler, Jonas Wulff, Garrett B Stanley, and Michael J Black. 
A naturalistic open source movie for optical flow evaluation. In *ECCV*, 2012.
Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning. In *ECCV*, 2018.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In *NeurIPS*, 2020.
Mathilde Caron, Hugo Touvron, Ishan Misra, Herve Jegou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *ICCV*, 2021.
Mathilde Caron, Neil Houlsby, and Cordelia Schmid. Location-aware self-supervised transformers. *arXiv preprint arXiv:2212.02400*, 2023.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In *ICML*, 2020a.
Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In *CVPR*, 2020.
Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*, 2020b.
Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In *ICCV*, 2021.
Yubei Chen, Adrien Bardes, Zengyi Li, and Yann LeCun. Intra-instance vicreg: Bag of self-supervised image patch embedding. *arXiv preprint arXiv:2206.08954*, 2022.
rpwES4pe9W
Including coordinate inputs inevitably increases the computational cost, as each coordinate has to go through a (usually larger) MLP network instead of the grid sampling plus small MLP feed-forward used for multi-plane feature inputs. In fact, one key motivation of multi-plane methods is decreasing the training and rendering time required. The paper does not seem to include any information or discussion regarding training time, computation cost, FPS, etc., which makes the comparisons to multi-plane based methods incomplete.
REFINED TENSORIAL RADIANCE FIELD: HARNESSED COORDINATE-BASED NETWORKS FOR NOVEL VIEW SYNTHESIS FROM SPARSE INPUTS

Anonymous authors
Paper under double-blind review

ABSTRACT

The multi-plane encoding approach has been highlighted for its ability to serve as static and dynamic neural radiance fields without sacrificing generality. This approach constructs related features by projecting onto learnable planes and interpolating adjacent vertices. This mechanism allows the model to learn fine-grained details rapidly and achieves outstanding performance. However, it has limitations in representing the global context of the scene, such as object shapes and dynamic motion over time, when available training poses are sparse. In this work, we propose refined tensorial radiance fields that harness coordinate-based networks, which are known for their strong bias toward low-frequency signals. The coordinate-based network is responsible for capturing global context, while the multi-plane network focuses on capturing fine-grained details. We demonstrate that using residual connections effectively preserves their inherent properties. Additionally, the proposed curriculum training scheme accelerates the disentanglement of these two features. We empirically show that the proposed method achieves comparable results to multi-plane encoding with high denoising penalties in static NeRFs. Meanwhile, it outperforms others on dynamic NeRFs with sparse inputs. In particular, we show that excessively increasing denoising regularization for multi-plane encoding effectively eliminates artifacts; however, it can lead to artificial details that appear authentic but are not present in the data. On the other hand, the proposed method does not suffer from this issue.

1 INTRODUCTION

Neural Radiance Fields (NeRFs) have gained recognition for their ability to create realistic images from various viewpoints using the volume rendering technique (Mildenhall et al., 2021). Early studies have demonstrated that multi-layer perceptron (MLP) networks, combined with sinusoidal encoding, can effectively synthesize 3-dimensional novel views (Tancik et al., 2020; Sitzmann et al., 2020; Martin-Brualla et al., 2021; Barron et al., 2021, 2022). These studies have shown that simple coordinate-based MLP networks exhibit a strong low-frequency bias, and incorporating wide-spectrum sinusoidal encoding allows for capturing both low and high-frequency signals. Subsequent works illustrated the importance of appropriate sinusoidal encoding in conjunction with target signals to enhance performance (Martel et al., 2021; Lindell et al., 2022; Shekarforoush et al., 2022). To expedite the learning process, approaches explicitly parameterizing spatial attributes through multi-plane combinations have been introduced (Chen et al., 2022; Chan et al., 2022). In contrast to the aforementioned approaches, these methods dramatically reduce training time and produce cleaner and more realistic images, albeit at the cost of greater memory requirements. For broader real-world applicability, extensive efforts have focused on reliably constructing radiance fields in cases of sparse input data. With the emergence of dynamic scenes, which must additionally deal with temporal sparsity, addressing data sparsity has gained even more attention in this field, as NeRF models commonly face overfitting issues due to the lack of consistent data for 3 or 4-dimensional space (Pumarola et al., 2021).
One set of solutions tackled this by leveraging a pretrained image encoder to compare rendered scenes against consistent 3D environments (Yu et al., 2021; Wang et al., 2021; Chen et al., 2021; Jain et al., 2021). Another approach incorporated additional information, such as depth or color constraints, to maintain 3-dimensional coherence (Deng et al., 2022; Yuan et al., 2022).

Figure 1: The qualitative results of the standup case in dynamic NeRFs using 25 training views (about 17% of the original data). This is challenging due to the limited information available along the time axis. Figure (a) is produced by HexPlane (Cao & Johnson, 2023). Figure (b) is the rendered image of the proposed method.

Methods progressively adjusting the frequency spectrum of positional encoding have also proven effective in counteracting overfitting without additional information (Yang et al., 2023; Song et al., 2023). However, a notable limitation of prior strategies dealing with sparse inputs is their less-than-ideal visual output. While recent work reported successful reconstruction of static NeRFs using voxel-grid parameterization in the sparse-input regime with the assistance of denoising penalties like total variation (Sun et al., 2023), such methods often fall short of adequately representing global elements like object morphology and dynamic motion, as evident in Figure 1a. Even if some renderings look crisp upon close inspection, the overall quality of the rendered results deteriorates due to the absence of global structures. To alleviate this issue, we introduce a simple yet powerful approach to fundamentally improve the performance of static and dynamic NeRFs from sparse inputs. In this framework, the coordinate-based features are responsible for capturing global context, while the multiple-plane features are responsible for capturing fine-grained details. Moreover, in contexts with occlusions or time-variant dynamics, we employ a progressive weighting scheme that prevents the model from falling into local minima. This prioritizes low-frequency coordinate-based features to capture the global context first, allowing multiple-plane features to gradually describe fine-grained target signals. As a result, images generated by the proposed method exhibit improved clarity in terms of global contexts and fewer artifacts compared to baselines, as illustrated in Figure 1b. Our extensive experiments show that the proposed method achieves comparable results to multi-plane encoding with high denoising penalties in static NeRFs. In particular, it outperforms baselines in dynamic NeRFs from sparse inputs.

2 RELATED WORK

Coordinate-based network and sinusoidal encoding. In the initial studies of NeRFs, MLP networks with sinusoidal encoding were used to simultaneously describe low and high-frequency details (Mildenhall et al., 2021; Martin-Brualla et al., 2021; Barron et al., 2021; 2022). However, it was found that a classical coordinate network without this encoding has a bias toward lower frequencies (Rahaman et al., 2019; Yüce et al., 2022). The importance of positional encoding and sinusoidal activation led to a fundamental exploration of the relationship between rendering performance and the frequency values of target signals (Tancik et al., 2020; Sitzmann et al., 2020; Fathony et al., 2021; Ramasinghe et al., 2022). Lindell et al. (2022) uncovered that improper high-frequency embedding results in artifacts negatively impacting the quality of reconstruction.
They addressed this issue using multi-scale bandwidth networks, where each MLP layer has a distinct spectrum of frequency embedding. Subsequent research utilized residual connections to faithfully maintain the designated spectrum without overwhelming high-frequency components (Shekarforoush et al., 2022).

Explicit parameterization. Recent developments in explicit representations, such as voxel grids, hash encoding, and multi-planes, have gained attention due to their fast training, rendering speed, and superior performance compared to positional-encoding-based networks (Liu et al., 2020; Sun et al., 2022; Müller et al., 2022; Chen et al., 2022; Cao & Johnson, 2023; Fridovich-Keil et al., 2023). Sun et al. (2022) introduced the direct voxel field, using minimal MLP layers to speed up training and rendering. Instant-NGP, based on hash maps, provides multi-resolution spatial features and versatility, extending beyond 3-dimensional spaces to high-resolution 2-dimensional images (Müller et al., 2022). The multi-plane approach has been highlighted for its applicability in expanding to 4 dimensions without compromising generality, decomposing targets into multiple planes, with each plane responsible for a specific axis (Chen et al., 2022; Cao & Johnson, 2023; Fridovich-Keil et al., 2023). In particular, while the aforementioned approaches rely on specialized custom GPU implementations to boost efficiency, the multi-plane approach achieves comparable speed and performance using general automatic-differentiation frameworks. As a result, the multiple-plane approach has broadened its scope to various tasks, including 3D object generation, video generation, 3D surface reconstruction, and dynamic NeRF (Gupta et al., 2023; Yu et al., 2023; Wang et al., 2023; Cao & Johnson, 2023; Fridovich-Keil et al., 2023).

NeRFs in the sparse-input regime. Early efforts incorporated pre-trained networks trained on large datasets to compensate for the lack of training data (Jain et al., 2021; Yu et al., 2021; Wang et al., 2021). Another alternative approach incorporated additional information, such as depth or color constraints, to ensure the preservation of 3D coherence (Deng et al., 2022; Yuan et al., 2022; Roessle et al., 2022; Truong et al., 2023). Without the assistance of off-the-shelf models or additional information, another line of work devised new regularizations to train NeRFs with fewer than ten views. Reg-NeRF incorporates patch-wise geometry and appearance regularization (Niemeyer et al., 2022). That work verified that its regularization performs well on forward-facing examples like the DTU and LLFF datasets; it did not validate object-facing scenes, because the underlying assumption demands a high correlation between adjacent views. Recently, progressively manipulating the frequency spectrum of positional encoding from low to high frequency has proven effective in mitigating over-fitting without relying on additional information (Yang et al., 2023; Song et al., 2023). Compared to explicit representations, these methods still suffer from unsatisfactory visual quality, characterized by blurry boundaries. Recent studies using total variation regularization on explicit representations get rid of artifacts and construct smoother surfaces (Cao & Johnson, 2023; Fridovich-Keil et al., 2023; Sun et al., 2023). However, our findings indicate that this regularization can introduce artificial details that seem real but are not in the data. It can also result in the model failing to converge in certain scenes.
We present this problem in the experimental results, both qualitatively and quantitatively. Another work attempted to use tri-planes with sinusoidal encoding of coordinates to create smoother surfaces (Wang et al., 2023), but their direction differs from our method since they mainly focus on enriching the available features, and they did not demonstrate the respective roles of tri-plane and coordinate features. In this paper, our new approach, refined tensorial radiance fields, proposes incorporating two distinct features: coordinate-based and multiple-plane features. We emphasize that the disentanglement of these two heterogeneous features is crucial for reliably constructing NeRFs from sparse inputs. The proposed method performs well even with higher-dimensional targets like dynamic NeRFs and extremely limited sparse inputs.

3 BACKGROUND

Before delving into the details of the proposed method, we briefly review the fundamentals of neural radiance fields and the multi-plane approach. We describe TensoRF (Chen et al., 2022) for static NeRFs and HexPlane (Cao & Johnson, 2023) for dynamic NeRFs. These methods are considered representative works in multi-plane encoding and serve as the main baselines in this paper.

3.1 NEURAL RADIANCE FIELDS

Mildenhall et al. (2021) proposed the original NeRF, which uses volume rendering to compute predicted color values for novel view synthesis. In this framework, we consider a camera with origin \( o \) and a ray direction \( d \). A ray \( r \), composed of \( n \) points, is constructed as \( o + \tau_k \cdot d \), where \( \tau_k \in \{ \tau_1, \cdots, \tau_n \} \). The neural radiance field, parameterized by \( \Theta \), predicts the color and density values \( c^k_\Theta, \sigma^k_\Theta \) at each point. Using volume rendering, the predicted color value \( \hat{c}(r) \) is computed as follows:

\[ \hat{c}(r; \Theta) = \sum_{k=1}^{n} T_k \left(1 - \exp\left(-\sigma^k_\Theta(\tau_{k+1} - \tau_k)\right)\right) c^k_\Theta. \]

Here, the accumulated transmittance is computed by \( T_k = \exp\left(-\sum_{j<k} \sigma^j_\Theta (\tau_{j+1} - \tau_j)\right) \). The network parameters \( \Theta \) are trained by minimizing the photometric loss, comparing \( \hat{c}(r) \) to the ground-truth color \( c \). However, raw coordinate features alone are insufficient for describing high-frequency details. To resolve this, the paper proposes sinusoidal encoding, which transforms coordinates into wide-spectrum frequency components. This encoding enables the description of both low- and high-frequency signals; on the other hand, training can be time-consuming since it relies on implicit learning.

Figure 2: The schematic of baselines that use multi-plane encoding. (a) TensoRF employs three planes and three lines (Chen et al., 2022). (b) HexPlane adopts a total of six planes to include the time axis (Cao & Johnson, 2023).

### 3.2 TensoRF: Tensorial Radiance Fields

The tensorial radiance fields provide an explicit parameterization using multiple planes and fewer MLP layers. Compared to other explicit parameterizations (Liu et al., 2020; Sun et al., 2022; Müller et al., 2022), multi-plane parameterization proves to be efficient for 3-dimensional NeRFs, provided that the plane resolution is sufficiently high. For simplicity, we assume that the multi-planes share the same dimension in height, width, and depth, denoted as \( H \). This approach employs both plane features, denoted as \( M = \{M_{xy}, M_{yz}, M_{zx}\} \), and vector features \( V = \{V_z, V_x, V_y\} \).
For convenience, we denote two index variables, \( i \in \{xy, yz, zx\} \) for \( M \) and \( j \in \{z, x, y\} \) for \( V \). The plane and vector features are denoted as \( M_i \in \mathbb{R}^{c \times H \times H} \) and \( V_j \in \mathbb{R}^{c \times 1 \times H} \). Both plane and vector features have a channel dimension \( c \) to represent diverse information. To calculate the feature value at a given point \( s := (s_x, s_y, s_z) \), the point is projected onto the corresponding planes and lines, and the features on the nearest vertices are bilinearly interpolated, as illustrated in Figure 2a. The feature values obtained from \( M \) and \( V \) are denoted as \( f^M = \{f^M_{xy}, f^M_{yz}, f^M_{zx}\} \) and \( f^V = \{f^V_z, f^V_x, f^V_y\} \), where each individual feature lies in \( \mathbb{R}^c \), so that \( f^M, f^V \in \mathbb{R}^{3c} \). We use element-wise multiplication on \( f^M, f^V \) to obtain the final feature \( f = f^M \odot f^V \in \mathbb{R}^{3c} \). For a more detailed explanation of multi-plane encoding, please refer to Appendix A. TensoRF has independent multi-plane features for density and appearance. TensoRF predicts occupancy by channel-wise summation of the final density features across all planes. Conversely, appearance features are concatenated and then fed into MLP layers or a spherical harmonics function. Multi-plane encoding is mainly designed to emphasize local representation with the nearest vertices. Therefore, TensoRF proposes gradually increasing the resolutions of the learnable planes and vectors during training to address this locality. This encourages the model to learn the global context at the coarser resolution and then enhance finer details at the higher resolution.

### 3.3 HexPlane

The follow-up work, HexPlane, extends the multi-plane approach by incorporating the time axis, enabling it to work effectively in dynamic NeRFs. To achieve this, HexPlane builds upon the line features used in TensoRF, extending them into plane features by adding a time axis. This results in six planes: three spatial planes denoted as \( M = \{M_{xy}, M_{yz}, M_{zx}\} \), \( M_i \in \mathbb{R}^{c \times H \times H} \), and three temporal planes \( V = \{V_{tz}, V_{tx}, V_{ty}\} \), \( V_j \in \mathbb{R}^{c \times T \times H} \), as shown in Figure 2b. As in the previous subsection, we denote two index variables, \( i \in \{xy, yz, zx\} \) for \( M \) and \( j \in \{tz, tx, ty\} \) for \( V \). Compared to TensoRF, a key difference is that the sample \( s := (s_x, s_y, s_z, t) \) includes the time variable. In dynamic NeRFs, dealing with temporal sparsity is a crucial factor for improving performance, since the time axis contains relatively sparse information compared to the spatial axes.

Figure 3: Conceptual illustration of the proposed method utilizing global contexts from coordinate networks and fine-grained details from multi-plane encoding. This method effectively displays two heterogeneous features. Notably, individual plane features differ across channels, highlighting their disentanglement from other channels. All graphical representations are generated based on whether multi-plane features are masked or not, using our proposed method trained with 25 training views.
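To make the lookup concrete, below is a minimal PyTorch-style sketch of the plane/vector feature computation described above (projection onto planes and lines, bilinear interpolation, and element-wise multiplication). The helper names, the use of `F.grid_sample`, and the axis conventions are our own illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def sample_plane(plane, uv):
    # plane: [c, H, H]; uv: [P, 2] coordinates normalized to [-1, 1]
    # grid_sample expects grid coordinates ordered (x, y) over (width, height)
    grid = uv.view(1, -1, 1, 2)                        # [1, P, 1, 2]
    feat = F.grid_sample(plane[None], grid, mode='bilinear', align_corners=True)
    return feat.view(plane.shape[0], -1).t()           # [P, c]

def sample_line(line, w):
    # line: [c, H] (the 1 x H vector feature flattened); w: [P] in [-1, 1]
    grid = torch.stack([torch.zeros_like(w), w], dim=-1).view(1, -1, 1, 2)
    feat = F.grid_sample(line[None, :, :, None], grid, mode='bilinear', align_corners=True)
    return feat.view(line.shape[0], -1).t()            # [P, c]

def multiplane_features(planes, lines, pts):
    # planes: dict with keys 'xy', 'yz', 'zx' -> [c, H, H]
    # lines:  dict with keys 'z', 'x', 'y'    -> [c, H]
    # pts: [P, 3] points already normalized to [-1, 1]
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    f_xy = sample_plane(planes['xy'], torch.stack([x, y], -1))
    f_yz = sample_plane(planes['yz'], torch.stack([y, z], -1))
    f_zx = sample_plane(planes['zx'], torch.stack([z, x], -1))
    f_z = sample_line(lines['z'], z)
    f_x = sample_line(lines['x'], x)
    f_y = sample_line(lines['y'], y)
    # element-wise product of matching plane/line features, then concatenate: [P, 3c]
    return torch.cat([f_xy * f_z, f_yz * f_x, f_zx * f_y], dim=-1)
```

In the dynamic (HexPlane) case, the three line features are simply replaced by three temporal plane lookups over (t, z), (t, x), and (t, y).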
HexPlane addresses this challenge by employing a denoising regularization, Laplacian smoothing, which constrains similarity among adjacent multi-plane features. For an arbitrary plane feature $P$, the Laplacian smoothing function $\mathcal{L}_l$ is defined as below, where $h$ and $w$ refer to row and column indices:

$$\mathcal{L}_l(P) = \sum_c \sum_{h,w} \left( \| P_{h+1,w}^c - P_{h,w}^c \|_2^2 + \| P_{h,w+1}^c - P_{h,w}^c \|_2^2 \right).$$

Specifically, HexPlane applies Laplacian smoothing to both sets of plane features but gives higher priority to the temporal planes. This emphasizes that time information is significant for capturing dynamic motion accurately. The fundamental operations of HexPlane align with TensoRF, including the direct prediction of density values from multi-plane features and the prediction of color values by concatenating multi-plane features, which are then fed into MLP layers.

4 Refined Tensorial Radiance Fields: Harnessing Coordinate-Based Networks

We propose a novel method, referred to as the "refined tensorial radiance field", that leverages coordinate-based networks. To mitigate the constraints of locality inherent in grid structures, our method capitalizes on a combination of distinct coordinate feature encoding techniques and multi-plane representations, as depicted in Figure 3. Subsection 4.1 illustrates the proposed residual-based architecture and the regularization strategy that facilitates the disentanglement of the two heterogeneous features. In Subsection 4.2, we explain a curriculum weighting strategy for multi-plane features. It ensures channel-wise disentanglement, providing a more diverse representation without the risk of overfitting, where all channels would exhibit identical expressions.

4.1 Architecture and Loss Function

We describe how our model works in the dynamic NeRF case. Applying this model to a 3-dimensional static NeRF is feasible by simply excluding the $t$ variable. A key aspect of our network architecture is the utilization of coordinate-based networks along with an explicit representation. At a high level, we replace sinusoidal encoding with multi-plane encoding while employing the architecture of the original NeRF. A coordinate $s := (s_x, s_y, s_z, t)$ is transformed via multi-plane encoding from the spatial and temporal plane features $M, V$ with element-wise multiplication, $f = f^M \odot f^V \in \mathbb{R}^{3c}$. These features are then fed into MLPs parameterized by $\Theta$ along with their respective coordinates $s$. As shown by Shekarforoush et al. (2022), residual networks yield multi-fidelity results by preserving their pre-designated sinusoidal embeddings. In line with this, the proposed method adopts skip connections between the acquired features and the hidden layer to serve the same purpose. Our empirical findings demonstrate that this operation promotes the disentanglement of the two features, aligning with our intended purpose.

We introduce a loss function that combines the photometric loss and Laplacian smoothing across multi-plane features. First, we define the photometric loss $\mathcal{L}_p$ as the mean squared error between the rendered color $\hat{c}(r)$ and the ground-truth pixel color $c$: $\mathcal{L}_p(\Theta, M, V) = \sum_r \| \hat{c}(r; \Theta, M, V) - c \|^2$. To tackle the ill-conditioned training problem in NeRFs arising from sparse-input situations, we apply Laplacian smoothing on both feature planes. Laplacian smoothing tends to excessively smooth signals, making them conform to the global tendency rather than accurately capturing local finer details (Sadhanala et al., 2017). Additionally, we regularize each plane feature using the L1 norm to encourage sparsity of the multi-plane features.
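As a reference, the two terms defined above can be written compactly as follows; the function names are ours, and the reductions follow the summations in the equations above.

```python
import torch

def laplacian_smoothing(P):
    # P: [c, H, W] plane feature; penalize squared differences between
    # adjacent rows and adjacent columns, summed over channels and positions
    d_row = (P[:, 1:, :] - P[:, :-1, :]).pow(2).sum()
    d_col = (P[:, :, 1:] - P[:, :, :-1]).pow(2).sum()
    return d_row + d_col

def photometric_loss(rendered_rgb, gt_rgb):
    # squared error between rendered and ground-truth ray colors, summed over rays
    return (rendered_rgb - gt_rgb).pow(2).sum()
```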
We define $\|M\|_1$ and $\|V\|_1$ as $\sum_{i=1}^{3} \|M_i\|_1$ and $\sum_{i=1}^{3} \|V_i\|_1$, respectively. The entire loss function is as follows:

$$\mathcal{L}(\Theta, M, V) = \mathcal{L}_p(\Theta, M, V) + \lambda_1 \sum_{i=1}^{3} \left( \mathcal{L}_l(M_i) + \lambda_2 \mathcal{L}_l(V_i) \right) + \lambda_3 (\|M\|_1 + \|V\|_1)$$

The only difference in the case of static NeRF comes from the dimension of $V$: the Laplacian loss is not applied to $V$, and the rest of the details are the same as in the 4D case. The hyperparameters and implementation details can be found in Appendix B. While increasing the value of $\lambda_1$ removes floating artifacts by over-smoothing the multi-plane features, it creates undesirable deformations that look authentic but are not present in the training data. Hence, we opt not to utilize excessively high denoising weights. Instead, the coordinate network provides consistent training for multi-plane encoding when capturing high-frequency details. We empirically validate this through our experiments.

### 4.2 Curriculum weighting strategy for multi-plane encoding

The architecture in the proposed method performs well in scenes with mild occlusion and less dynamic motion. However, it encounters challenges in severely ill-conditioned situations, such as heavy occlusion and rapid motion, as seen in the drums scene in the static NeRF dataset and the standup scene in the dynamic NeRF dataset. To alleviate this issue, we propose a curriculum weighting strategy for multi-plane encoding, aiming to manipulate the engagement of multi-plane features in accordance with the training iterations. This approach trains the coordinate-based network first, followed by the subsequent training of multi-plane features. In this subsection, we denote $t$ as the training iteration. Technically, we introduce a weighting factor $\alpha(t)$ to control the degree of engagement of multi-plane features along the channel dimension. Here, $f = \{f_1, f_2, f_3\}$, with $f_i \in \mathbb{R}^c$, represents the output of multi-plane encoding, and the weighting factor $\gamma(t) = \{\gamma_1(t), \cdots, \gamma_c(t)\} \in \mathbb{R}^c$ is defined as follows:

$$\gamma_j(t) = \begin{cases} 0 & \text{if } \alpha(t) \leq j \\ \frac{1 - \cos((\alpha(t) - j)\pi)}{2} & \text{if } 0 < \alpha(t) - j \leq 1 \\ 1 & \text{otherwise}, \end{cases}$$

where $j \in \{1, \cdots, c\}$ is the channel index and $\alpha(t) = c \cdot (t - t_s)/(t_e - t_s) \in [0, c]$ is proportional to the number of training iterations $t$ within the scheduling interval $[t_s, t_e]$. The final features $f'_i$ are obtained by $f'_i = f_i \odot \gamma(t)$. Hence, this weighting function is applied to each channel of the multi-plane features. After reaching the last time-step of curriculum training, all channels of the multi-plane features are fully engaged. It is worth noting that this weighting function is similar to those used in previous works such as (Park et al., 2021; Lin et al., 2021; Yang et al., 2023; Heo et al., 2023); a minimal sketch of this schedule is given below.
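The following sketch implements the channel-wise schedule defined above. We assume 0-based channel indexing here so that all channels are fully engaged once $\alpha(t)$ reaches $c$; the exact indexing convention is an implementation detail not fixed by the equation.

```python
import math
import torch

def curriculum_weights(t, t_s, t_e, c):
    """Channel-wise weights gamma(t) for a multi-plane feature f_i in R^c."""
    # alpha(t) grows linearly from 0 to c over the scheduling interval [t_s, t_e]
    alpha = c * (t - t_s) / (t_e - t_s)
    alpha = min(max(alpha, 0.0), float(c))
    gamma = torch.zeros(c)
    for j in range(c):                 # 0-based channel index (assumed)
        if alpha - j <= 0:
            gamma[j] = 0.0             # channel not yet engaged
        elif alpha - j >= 1:
            gamma[j] = 1.0             # channel fully engaged
        else:
            gamma[j] = 0.5 * (1.0 - math.cos((alpha - j) * math.pi))
    return gamma

# usage: f_prime = f_i * curriculum_weights(t, t_s, t_e, f_i.shape[-1])
```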
However, the key difference is a channel-wise weighting function for multi-plane features. This approach allows the decoding network to receive encodings from all channels of the multi-plane features, with later-order channels being updated more slowly than earlier-order channels. Through our experiments, we found that this strategy effectively prevents all channels of the multi-plane features from converging to similar patterns, thereby mitigating overfitting issues.

Table 1: Results of evaluation statistics on the static NeRF dataset. We conduct five trials for each scene and report average scores. Average PSNR, SSIM, and LPIPS are calculated across all scenes. We indicate the best performance in **bold** and the second best with _underline_.

| Models | chair | drums | ficus | hotdog | lego | materials | mic | ship | Avg. PSNR ↑ | Avg. SSIM ↑ | Avg. LPIPS ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Simplified NeRF | 20.35 | 14.19 | 21.63 | 22.57 | 12.45 | 18.98 | 24.95 | 18.65 | 19.22 | 0.827 | 0.265 |
| DietNeRF | 21.32 | 14.16 | 13.08 | 11.64 | 16.12 | 12.20 | 24.70 | 19.34 | 16.57 | 0.746 | 0.333 |
| HALO | 24.77 | 18.67 | 21.42 | 10.22 | 22.41 | 21.00 | 24.94 | 21.67 | 20.64 | 0.844 | 0.200 |
| FreeNeRF | 26.08 | 19.99 | 18.43 | 28.91 | 24.12 | 21.74 | 24.89 | 23.01 | 23.40 | 0.877 | 0.121 |
| DVGO | 22.35 | 16.54 | 19.03 | 24.73 | 20.85 | 18.50 | 24.37 | 18.17 | 20.57 | 0.829 | 0.145 |
| VGOS | 22.10 | 18.57 | 19.08 | 24.74 | 20.90 | 18.42 | 24.18 | 18.16 | 20.77 | 0.838 | 0.143 |
| iNGP | 24.76 | 14.56 | 20.68 | 24.11 | 22.22 | 15.16 | 26.19 | 17.29 | 20.62 | 0.828 | 0.184 |
| TensoRF | 26.23 | 15.94 | 21.37 | 28.47 | 26.28 | 20.22 | 26.39 | 20.29 | 23.15 | 0.864 | 0.129 |
| K-Planes | 27.30 | 20.43 | 23.82 | 27.58 | 26.52 | 19.66 | 27.30 | 21.34 | 24.24 | 0.897 | 0.085 |
| Ours | 28.02 | 19.55 | 20.30 | 29.25 | 26.73 | 21.93 | 26.42 | 24.27 | 24.56 | 0.896 | 0.092 |

5 EXPERIMENTS

In this section, we present our experiments designed to address three pivotal questions: 1) Do existing sinusoidal embedding techniques effectively render clear scenes when given sparse input data? 2) Does the introduction of denoising regularization enable explicit parameterization methods to consistently capture 3D coherence without artifacts from sparse input data? 3) Does the integration of disparate features, such as multiple planes and coordinates, substantially improve the performance of both static and dynamic NeRFs? To answer these questions, we conducted extensive experiments over two sparse-input scenarios: a few-shot static case and a 4-dimensional dynamic case. We also include ablation studies to substantiate the rationale behind the architectural choices in our proposed model. The design efficacy of our model is validated in two key areas: the reliance on regularization mechanisms and feature disentanglement. We choose datasets with inward-facing object poses, as they are more likely to be occluded by the objects from various viewing locations compared to forward-facing poses. For performance evaluation, we employ the PSNR metric to gauge the quality of image reconstruction. In addition, SSIM and LPIPS scores are reported to assess the perceptual quality of the rendered images. Further experimental details are described in Appendix C.

5.1 3-DIMENSIONAL STATIC RADIANCE FIELDS

We conducted 3-dimensional static NeRF experiments on the NeRF-synthetic dataset to evaluate whether our model adequately captures both the global context of a scene and fine details without introducing undesirable artifacts under sparse input conditions. Consistent with prior studies (Jain et al., 2021; Yang et al., 2023), we trained all models with 8 views.
We compare our proposed model with sinusoidal encoding methods: Simplified NeRF, DietNeRF (Jain et al., 2021), HALO (Song et al., 2023), and FreeNeRF (Yang et al., 2023); and with explicit spatial parameterization methods: DVGO (Sun et al., 2022), VGOS (Sun et al., 2023), iNGP (Müller et al., 2022), TensoRF (Chen et al., 2022), and K-Planes (Fridovich-Keil et al., 2023). For all considered baselines, we applied regularization techniques that are congruent with their inherent characteristics and configurations. The quantitative and qualitative rendering results are shown in Table 1 and Figure 5, respectively. First, we observe that the proposed method outperforms the previous state-of-the-art method, FreeNeRF, in terms of both PSNR and perceptual quality. Sinusoidal encoding-based networks fail to capture high-frequency details and are prone to underfit on data with high-resolution structures (ficus, lego).

Figure 5: Rendered images of the lego, drums, and ship cases in the static NeRF dataset by FreeNeRF, TensoRF, K-Planes, and ours. The rendered images are the {83, 129, 95}-th in the test set, respectively.

In contrast, grid-based models show robust results in reconstructing high-frequency structures. However, for data with a strong non-Lambertian effect (drums, ship), grid-based models tend to miss the global shape and are prone to overfitting to high frequencies. In our proposed method, multi-plane encoding can exclusively capture fine-grained details while maintaining the global shape learned by the coordinate features, leading to more robust novel view synthesis in sparse-input scenarios.

5.2 4-DIMENSIONAL DYNAMIC RADIANCE FIELDS

To demonstrate the robustness of the proposed model in even sparser input cases, we conduct experiments on dynamic scenarios. We conducted 4-dimensional dynamic NeRF experiments on the D-NeRF dataset. This dataset comprises monocular sequences of about 50-100 frames, with a different inward-facing view for each timestep. To evaluate a harsher setting, we also experimented with fewer frames {15, 20, 25}, which are sparse in both the view and time aspects. Views were sampled uniformly for each scene. To demonstrate the need for our refined tensorial radiance fields, we compare our method with HexPlane (Cao & Johnson, 2023) and its variants. The observations made in Subsection 5.1 are even more evident in the dynamic NeRFs. The proposed method outperforms every setting of HexPlane on all metrics on the D-NeRF dataset, as shown in Table 2. HexPlane discretizes the continuous time axis into finite bins, making it less responsive to the time-variant motion of objects when the available training poses are sparse. In contrast, the proposed method can capture the time-variant motion of objects by harnessing the coordinate-based networks first, with multi-plane encoding supplementing the remaining details. For instance, the variants of HexPlane do not accurately depict the shape of the blue ball over time, whereas the proposed method successfully does, including the reflection of light on the green ball. In the case of the jumping jacks sequence, the proposed method exhibits fewer artifacts and maintains the boundary of the scene better compared to HexPlane.

5.3 ABLATION STUDY

We assess the role of total variation regularization or Laplacian smoothing within TensoRF, HexPlane, and the proposed method. In this experiment, we incrementally increase the parameter $\lambda_1$ from 0.0001 to 1.0, multiplying by a factor of 10.
Table 3 demonstrates that our proposed method outperforms the baselines in all experimental scenarios in both static and dynamic NeRFs, with the sole exception of $\lambda_1 = 0.001$ in the static NeRF. A notable performance difference was observed in the dynamic NeRF, which presents greater challenges due to time sparsity compared to the static NeRF.

Table 2: Results of evaluation statistics on the D-NeRF dataset. HexPlane employs a denoising regularization weight of $\lambda_1 = 0.01$, chosen via grid search. Average PSNR, SSIM, and LPIPS are calculated across all scenes. We indicate the best performance in **bold** for each case.

| Training views | Models | PSNR ↑ | Avg. PSNR ↑ | Avg. SSIM ↑ | Avg. LPIPS ↓ |
|----------------|--------------|--------|-------------|-------------|--------------|
| | bouncingballs | 26.56 | 15.91 | 21.03 | 20.35 |
| 15 views | K-Planes | 28.09 | 16.48 | 20.90 | 21.51 |
| | Ours | 28.09 | 16.48 | 20.90 | 21.51 |
| | HexPlane | 28.45 | 16.85 | 22.30 | 20.87 |
| 20 views | K-Planes | 25.43 | 17.25 | 21.07 | 21.40 |
| | Ours | 31.15 | 17.99 | 22.67 | 22.58 |
| | HexPlane | 30.49 | 17.61 | 23.10 | 22.85 |
| 25 views | K-Planes | 29.41 | 19.31 | 23.82 | 24.46 |
| | Ours | 34.61 | 19.21 | 23.82 | 24.46 |
| | HexPlane | 39.21 | 23.92 | 27.97 | 30.53 |
| Full views | K-Planes | 39.76 | 24.57 | 28.10 | 31.07 |
| | Ours | 40.25 | 24.63 | 28.50 | 31.70 |

* indicates the model does not converge

Figure 6: Rendered images of the bouncingballs and jumpingjacks scenes in the dynamic NeRF dataset by HexPlane with $\lambda_1 = 0.01$, K-Planes, and ours. All models are trained using 25 views.

In detail, in the static NeRF dataset, our method yielded an average PSNR score between 22.99 and 24.55. In contrast, TensoRF with $\lambda_1 = 0.001$ performs the best at 24.98, but it fails to converge when $\lambda_1$ exceeds 0.01. This highlights that TensoRF is too sensitive and faces challenges in training robustly with different regularization values. For the dynamic NeRF, HexPlane's scores ranged from 21.95 to 24.15, while ours spanned 24.67 to 25.74. This indicates our method is less dependent on denoising regularization, emphasizing the robust regularization capability of coordinate networks for multi-plane encoding. Our observations indicate that the proposed method maintains near-optimal performance across all scenarios once $\lambda_1$ surpasses 0.001. This stability alleviates concerns about searching for the regularization value for different scenes, significantly reducing hyperparameter tuning efforts. The detailed experimental results are included in Appendix E. Furthermore, excessive regularization can introduce undesirable modifications, including color disturbances, as evidenced in the ship case with TensoRF at $\lambda_1 = 0.1$. Unlike the above, our method consistently achieves near-optimal performance without excessive denoising regularization, attributed to the coordinate-based networks capturing global contexts. As depicted in Figure 5, our method can restore fine geometries and reproduce accurate colors even under challenging conditions.

6 CONCLUSION

In this paper, we introduce refined tensorial radiance fields that seamlessly incorporate coordinate networks. The coordinate network enables the capture of global context, such as object shapes in the static NeRF dataset and dynamic motions in the dynamic NeRF dataset. This property allows multi-plane encoding to focus on describing the finest details.
Table 3: Average PSNR across all scenes for varying denoising regularization weight $\lambda_1$. A hyphen indicates that the model did not converge.

| $\lambda_1$ | TensoRF (static, 8 views) | K-Planes (static, 8 views) | Ours (static, 8 views) | HexPlane (D-NeRF, 25 views) | K-Planes (D-NeRF, 25 views) | Ours (D-NeRF, 25 views) |
|---|---|---|---|---|---|---|
| 0.0001 | 24.10 | 24.31 | 23.68 | 22.83 | 24.32 | 24.67 |
| 0.001 | 24.98 | 24.28 | 24.47 | 23.86 | 24.01 | 25.38 |
| 0.1 | - | 24.28 | 24.48 | 23.48 | 24.06 | 25.74 |
| 1.0 | - | 23.64 | 24.23 | 23.46 | 23.55 | 25.84 |

ETHICS STATEMENT

Novel view synthesis is a task of understanding the shape and appearance of objects and scenes from a sparse set of images or video. Our model, in particular, can reconstruct finely detailed 3D shapes with accurate appearance from fewer inputs in both static and dynamic scenes. Like previous works, our model can obtain fine reconstruction results only if sufficiently distributed views are given. Recovering high-fidelity 3D shape and appearance of objects from fewer inputs offers numerous practical applications. However, it also introduces potential drawbacks, such as the creation of potentially misleading media or the facilitation of design theft by duplicating physical objects.

REPRODUCIBILITY STATEMENT

Our code will be made publicly available upon publication. During the review process, we have attached our code as supplementary files. For convenient reproducibility, both training and evaluation code are included.

REFERENCES

Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5855–5864, 2021.

Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5470–5479, 2022.

Ang Cao and Justin Johnson. HexPlane: A fast representation for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 130–141, 2023.

Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3D generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16123–16133, 2022.

Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. MVSNeRF: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14124–14133, 2021.

Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. TensoRF: Tensorial radiance fields. In European Conference on Computer Vision, pp. 333–350. Springer, 2022.

Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan. Depth-supervised NeRF: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12882–12891, 2022.

Rizal Fathony, Anit Kumar Sahu, Devin Willmott, and J Zico Kolter. Multiplicative filter networks. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=OmTmcPkkhT.

Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbæk Warburg, Benjamin Recht, and Angjoo Kanazawa.
K-planes: Explicit radiance fields in space, time, and appearance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12479–12488, 2023. Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, and Barlas Özgür. 3dgen: Triplane latent diffusion for textured mesh generation. arXiv preprint arXiv:2303.05371, 2023. Hwan Heo, Taekyung Kim, Jiyoung Lee, Jaewon Lee, Soohyun Kim, Hyunwoo J Kim, and Jin-Hwa Kim. Robust camera pose refinement for multi-resolution hash encoding. In International Conference on Machine Learning, PMLR, 2023.
1zhM0XkQh0
The PGD-20 metric of the proposed method is considerably worse than that of the other SOTA methods in most cases, but this is not adequately discussed or mentioned. Could the authors provide some intuition on why such degradation on PGD-20 happens, and any investigation to address that drawback?
PROFeAT: PROJECTED FEATURE ADVERSARIAL TRAINING FOR SELF-SUPERVISED LEARNING OF ROBUST REPRESENTATIONS Anonymous authors Paper under double-blind review ABSTRACT Supervised adversarial training has been the most successful approach for improving the robustness of Deep Neural Networks against adversarial attacks. While several recent works have attempted to overcome the need for supervision or labeled training data by integrating adversarial training with contrastive Self-Supervised Learning (SSL) approaches such as SimCLR, their performance has been sub-optimal due to the increased training complexity. A recent approach mitigates this by utilizing supervision from a standard self-supervised trained model in a teacher-student setting that mimics supervised adversarial training [Zhang et al., 2022]. However, we find that there is still a large gap in performance when compared to supervised training, specifically on larger capacity models. We show that this is a result of mismatch in training objectives of the teacher and student, and propose Projected Feature Adversarial Training (ProFeAT) to bridge this gap by using a projection head in the adversarial training step. We further propose appropriate attack and defense losses at the feature and projector spaces, coupled with a combination of weak and strong augmentations for the teacher and student respectively, to improve generalization without increasing the training complexity. We demonstrate significant improvements in performance when compared to existing SSL methods, and performance on par with TRADES, a popular supervised adversarial training method, on several benchmark datasets and models. 1 INTRODUCTION Deep Neural Networks are known to be vulnerable to crafted imperceptible input-space perturbations known as Adversarial attacks [Szegedy et al., 2013], which can be used to fool classification networks into predicting any desired output, leading to disastrous consequences. Amongst the diverse attempts at improving the adversarial robustness of Deep Networks, Adversarial Training (AT) [Madry et al., 2018; Zhang et al., 2019] has been the most successful. This involves the generation of adversarial attacks by maximizing the training loss, and further minimizing the loss on the generated attacks for training. While these methods have proved to be robust against various attacks developed over time [Carlini et al., 2019; Croce & Hein, 2020; Sriramanan et al., 2020], they require significantly more training data when compared to standard training [Schmidt et al., 2018], incurring a large annotation cost. Motivated by the success of contrastive learning for standard Self-Supervised Learning (SSL) [Van den Oord et al., 2018; Chen et al., 2020b; He et al., 2020], several works have attempted to use contrastive learning for self-supervised adversarial training as well [Jiang et al., 2020; Kim et al., 2020; Fan et al., 2021]. While this strategy works well in a full network fine-tuning setting, the performance is sub-optimal when the robustly pretrained feature encoder is frozen while training the classification head (linear probing), demonstrating that the representations learned are indeed sub-optimal. 
A recent work, Decoupled Adversarial Contrastive Learning (DeACL) [Zhang et al., 2022], demonstrated significant improvements in performance and training efficiency by splitting this combined self-supervised adversarial training into two stages: first, a standard self-supervised model is trained; second, this pretrained model is used as a teacher to provide supervision to the adversarially trained student network. Although the performance of this method is on par with supervised adversarial training on small model architectures (ResNet-18), we find that it does not scale to larger models such as WideResNet-34-10. In this work, we aim to bridge the performance gap between self-supervised and supervised adversarial training methods, and improve the scalability of the former to larger model capacities. We utilize the distillation setting discussed above, where a standard self-supervised trained teacher provides supervision to the student. In contrast to a typical distillation scenario, the ideal goal for the student is not to replicate the teacher, but to leverage weak supervision from the teacher while simultaneously enhancing its adversarial robustness. This involves a trade-off between sensitivity towards changes that flip the class of an image (for better clean accuracy) and invariance towards imperceptible perturbations that preserve the true class (for adversarial robustness) [Tramer et al., 2020]. Towards this, we propose to impose similarity with respect to the teacher in the appropriate dimensions by applying the distillation loss in a projected space (the output of a projection MLP layer), while enforcing the smoothness-based robustness loss in the feature space (the output of the backbone/feature extractor). However, we find that enforcing these losses at different layers results in training instability, and thus introduce the complementary loss (clean distillation loss or robustness loss) as a regularizer to improve training stability. We further propose to reuse the pretrained projection layer from the teacher model for better convergence. In line with the training objective, the adversarial attack used during training aims to find images that maximize the smoothness loss in the feature space, and cause misalignment between the teacher and student in the projected space. Further, since data augmentations are known to increase the training complexity of adversarial training, resulting in a drop in performance [Zhang et al., 2022; Addepalli et al., 2022], we propose to use augmentations such as AutoAugment (or strong augmentations) only at the student for better attack diversity, while using spatial transforms such as pad and crop (PC) (or weak augmentations) at the teacher. We summarize our contributions below: - We propose Projected Feature Adversarial Training (ProFeAT), a teacher-student distillation setting for self-supervised adversarial training, where the projection layer of the standard self-supervised pretrained teacher is reused for student distillation. We further propose appropriate attack and defense losses for training, coupled with a combination of weak and strong augmentations for the teacher and student respectively. - Towards understanding why the projector helps, we first show that the compatibility between the training methodology of the teacher and the ideal goals of the student plays a crucial role in the student model performance in distillation.
We further show that the use of a projector can alleviate the negative impact of the inherent misalignment of the above. - We demonstrate the effectiveness of the proposed approach on the standard benchmark datasets CIFAR-10 and CIFAR-100. We obtain moderate gains on small model architectures (ResNet-18) and larger gains of $3.5 - 8\%$ in clean accuracy and $\sim 3\%$ in robust accuracy on larger models (WideResNet-34-10), while also outperforming TRADES supervised training [Zhang et al., 2019] on larger models.

## 2 Preliminaries: Problem Setting and Notation

We consider the problem of self-supervised learning of robust representations, where a self-supervised standard trained teacher model $T$ is used to provide supervision to a student model $S$. The feature, projector and linear probe layers of the teacher are denoted as $T_f$, $T_p$ and $T_l$ respectively. An analogous notation is followed for the student as well. The dataset used for self-supervised pretraining, $D$, consists of images $x_i$ where $i \leq N$. An adversarial image corresponding to the image $x_i$ is denoted as $\tilde{x}_i$. We consider the $\ell_\infty$ based threat model where $\|\tilde{x}_i - x_i\|_\infty \leq \varepsilon$. The value of $\varepsilon$ is set to $8/255$ for CIFAR-10 and CIFAR-100 [Krizhevsky et al., 2009], as is standard in the literature [Madry et al., 2018; Zhang et al., 2019]. The Robust Accuracy (RA) in the SOTA comparison tables is presented against AutoAttack [Croce & Hein, 2020] (RA-AA), which is widely used as a benchmark for robustness evaluation [Croce et al., 2021]. In all other tables, we present robust accuracy against the GAMA attack [Sriramanan et al., 2020] (RA-G), which is known to be competitive with AutoAttack while being significantly faster. We additionally present results against a 20-step PGD attack [Madry et al., 2018] (RA-PGD20), as is standard in the self-supervised adversarial training literature [Fan et al., 2021; Zhang et al., 2022]. We note that for comparing the robust accuracy between any two defenses, the accuracy against AutoAttack or GAMA should be considered. The accuracy gap between PGD-20 and AutoAttack/GAMA is higher when the loss surface is convoluted, due to the phenomenon of gradient masking [Papernot et al., 2017; Tramer et al., 2018]. The accuracy on clean or natural samples is denoted as SA, which stands for Standard Accuracy. To evaluate the representations learned after self-supervised adversarial pretraining, we freeze the pretrained backbone and perform linear layer training on a downstream labeled dataset consisting of image-label pairs. We refer to this training as linear probing (Kumar et al., 2022), as is common in the literature. The training is done using cross-entropy loss on clean samples unless specified otherwise. We compare the robustness of the representations both on in-distribution data, where the linear probing is done using the same distribution of images as that used for pretraining, and in a transfer learning setting, where the distribution of images in the downstream dataset is different from that used for pretraining. We do not consider the case of fine-tuning the full network using adversarial training, since this changes the pretrained network to a large extent, and may yield misleading results and conclusions depending on the dynamics of training (number of epochs, learning rate, and the value of the robustness-accuracy trade-off parameter).
Contrary to this, linear probing based evaluation gives an accurate comparison of representations learned across different pretraining algorithms. 3 RELATED WORKS Self Supervised Learning (SSL): With the abundance of unlabelled data, learning representations through self-supervision has seen major advances in recent years. Contrastive learning based SSL approaches have emerged as a promising direction (Van den Oord et al., 2018; Chen et al., 2020b; He et al., 2020), where different augmentations of a given anchor image form positives, and augmentations of other images in the batch form the negatives for training. The training objective involves pulling the representations of the positives together, and repelling the representations of negatives. Self Supervised Adversarial Training: To alleviate the large sample complexity and training cost of adversarial training, there have been several works that have attempted self-supervised learning of adversarially robust representations (Kim et al., 2020; Jiang et al., 2020; Fan et al., 2021; Zhang et al., 2022). Chen et al. (2020a) propose AP-DPE, an ensemble adversarial pretraining framework where several pretext tasks like Jigsaw puzzles (Noroozi & Favaro, 2016), rotation prediction (Gidaris et al., 2018) and Selfie (Trinh et al., 2019) are combined to learn robust representations without task labels. Jiang et al. (2020) propose ACL, that combines the popular contrastive SSL method - SimCLR (Chen et al., 2020b) with adversarial training, using Dual Batch normalization layers for the student model - one for the standard branch and another for the adversarial branch. RoCL (Kim et al., 2020) follows a similar approach to ACL by combining the contrastive objective with adversarial training to learn robust representations. Fan et al. (2021) propose AdvCL, that uses high-frequency components in data as augmentations in contrastive learning, performs attacks on unaugmented images, and uses a pseudo label based loss for training to minimize the cross-task robustness transferability. Luo et al. (2023) study the role of augmentation strength in self-supervised contrastive adversarial training, and propose DynACL, that uses a “strong-to-weak” annealing schedule on augmentations. Additionally, motivated by Kumar et al. (2022), they propose DynACL++ that obtains pseudo-labels via k-means clustering on the clean branch of the DynACL pretrained network, and performs linear-probing (LP) using these pseudo-labels followed by adversarial full-finetuning (AFT) of the backbone. We note that the latter post-processing step is a generic finetuning strategy in literature that can be integrated with several base algorithms including ours. While most self-supervised adversarial training methods aimed at integrating contrastive learning methods with adversarial training, Zhang et al. (2022) showed that combining the two is a very complex optimization problem due to their conflicting requirements. The authors propose Decoupled Adversarial Contrastive Learning (DeACL), where a teacher model is first trained using existing self-supervised training methods such as SimCLR, and further, a student model is trained to be adversarially robust using supervision from the teacher. 
While existing methods used ~1000 epochs for contrastive adversarial training, the compute requirement of DeACL is much lower, since the first contrastive learning stage does not involve adversarial training, and the second stage is similar in complexity to supervised adversarial training (details in Appendix F). We thus utilize this distillation framework and obtain significant gains over DeACL, specifically at larger model capacities.

4 PROPOSED METHOD

In this section, we motivate the need for a projection layer, and present the proposed approach.

4.1 Projection Layer in Self-supervised Distillation

In this work, we follow the setting proposed by Zhang et al. (2022), where a standard self-supervised pretrained teacher provides supervision for self-supervised adversarial training of the student model.

Table 1: Role of projector in self-supervised distillation (CIFAR-100, WRN-34-10): The drop in accuracy of the student ($S$) w.r.t. the teacher ($T$) indicates distillation performance, which improves by matching the training objective of the teacher with the ideal goals of the student ($S_3$/$S_4$ vs. $S_1$), and by using similar losses for pretraining and linear probing (LP) ($S_2$ vs. $S_1$). Using a projector improves performance in case of a mismatch in the above ($S_5$ vs. $S_1$). The similarity between teacher and student is significantly higher at the projector space when compared to the feature space in $S_5$.

| Exp # | Teacher training | Teacher acc (%) | Projector | LP Loss | Student acc, feature space (%) | Student acc, projector space (%) | cos($T$, $S$), feature space | cos($T$, $S$), projector space |
|---|---|---|---|---|---|---|---|---|
| S1 | Self-supervised | 70.85 | Absent | CE | 64.90 | - | 0.94 | - |
| S2 | Self-supervised | 70.85 | Absent | $\cos(T, S)$ | 68.49 | - | 0.94 | - |
| S3 | Supervised | 80.86 | Absent | CE | 80.40 | - | 0.94 | - |
| S4 | Supervised | 69.96 | Absent | CE | 71.73 | - | 0.98 | - |
| S5 | Self-supervised | 70.85 | Present | CE | 73.14 | 64.67 | 0.19 | 0.92 |

This is different from a standard distillation setting (Hinton et al., 2015) because the representations of standard and adversarially trained models are known to be inherently different. Ilyas et al. (2019) attribute the adversarial vulnerability of models to the presence of non-robust features, which can be disentangled from the robust features that are learned by adversarially trained models. The differences in representations of standard and adversarially trained models can also be justified by the fact that linear probing of standard trained models using adversarial training cannot produce robust models, as shown in Table 1. On a similar note, standard full finetuning of adversarially trained models destroys the robust features learned (Chen et al., 2020a; Kim et al., 2020; Fan et al., 2021), yielding 0% robustness as shown in the table. Due to the inherently diverse representations of standard and adversarially trained models, the ideal goal of the student in the considered distillation setting is not to merely follow the teacher, but to be able to take weak supervision from it while being able to differ considerably.
In order to achieve this, we take inspiration from standard self-supervised learning literature (Van den Oord et al., 2018; Chen et al., 2020b; He et al., 2020; Navaneet et al., 2022; Gao et al., 2022) and propose to utilize a projection layer following the student backbone, so as to isolate the impact of the enforced loss on the learned representations. Bordes et al. (2022) show that in standard supervised and self-supervised training, a projector is useful when there is a misalignment between the pretraining and downstream tasks, and aligning them can eliminate the need for the same. Motivated by this, we hypothesize the following for self-supervised distillation: **Student model performance improves by matching the following during distillation:** 1. Training objectives of the teacher and the ideal goals of the student, 2. Pretraining and linear probe training objectives of the student. The ideal goal of the student depends on the downstream task, which is standard accuracy in standard training, and standard and robust accuracy in adversarial training. The training objective of the teacher is to achieve invariance to augmentations of the same image when compared to augmentations of other images in contrastive SSL training, and standard accuracy in supervised training. We explain the intuition behind the hypotheses in Appendix B and empirically justify the same by considering several distillation settings involving standard and adversarial, supervised and self-supervised trained teacher models in Tables 1 and 2. The results are presented on CIFAR-100 (Krizhevsky et al., 2009) with WideResNet-34-10 (Zagoruyko & Komodakis, 2016) architecture for both teacher and student. The standard self-supervised model is trained using SimCLR (Chen et al., 2020b). Contrary to a typical knowledge distillation setting where a cross-entropy loss is also used (Hinton et al., 2015), all the experiments presented involve the use of only self-supervised losses for distillation (cosine similarity between representations), and labels are used only during linear probing. Adversarial self-supervised distillation in Table 2 is performed using a combination of distillation loss on natural samples and smoothness loss on adversarial samples as shown in Eq. 2 (Zhang et al., 2022). A randomly initialized trainable projector is used at the output of student backbone in $S_5$ of Table 1 and A4 of Table 2. Here, the training loss is considered in the projected space of the student ($S_p$) rather than the feature space ($S_f$). 1. **Matching the training objectives of teacher with the ideal goals of the student:** We first consider the standard training of a student model, using either a self-supervised or supervised teacher in Table 1. In the absence of a projector, the drop in student accuracy w.r.t. the respective teacher accuracy is 6% with a self-supervised teacher ($S_1$), and < 0.5% with a supervised teacher ($S_3$). To ensure that our observations are not a result of the 10% difference in teacher accuracy between $S_1$ and $S_3$, we present results and similar observations with a supervised sub-optimally trained teacher. Table 2: Role of projector in self-supervised adversarial distillation (CIFAR-100, WRN-34-10): Student performance after linear probe at feature space is reported. The drop in standard accuracy (SA) of the student ($S$) w.r.t. the teacher ($T$), and the robust accuracy (RA-G) of the student improve by matching the training objective of the teacher with ideal goals of the student (A3 vs. 
A1), and by using similar losses for pretraining and linear probing (LP) (A2 vs. A1). Using a projector improves performance in case of a mismatch in the above (A4 vs. A1).

| Exp # | Teacher training | Teacher SA (%) | Teacher RA-G (%) | Projector | LP Loss | Student SA (%) | Student RA-G (%) | cos($T$, $S$) |
|---|---|---|---|---|---|---|---|---|
| A1 | Self-supervised (standard training) | 70.85 | 0 | Absent | CE | 50.71 | 24.63 | 0.78 |
| A2 | Self-supervised (standard training) | 70.85 | 0 | Absent | cos($T$, $S$) | 54.48 | 23.20 | 0.78 |
| A3 | Supervised (TRADES adversarial training) | 59.88 | 25.89 | Absent | CE | 54.86 | 27.17 | 0.94 |
| A4 | Self-supervised (standard training) | 70.85 | 0 | Present | CE | 57.51 | 24.10 | 0.18 |

These results with the sub-optimally trained supervised teacher are reported as S4 in Table 1. Thus, a supervised teacher is significantly better than a self-supervised teacher for distilling representations specific to a given task. This justifies the hypothesis that student performance improves by matching the training objectives of the teacher and the ideal goals of the student. We next consider adversarial training of a student, using either a standard self-supervised teacher, or a supervised adversarially trained teacher (TRADES [Zhang et al., 2019]) in Table 2. Since the TRADES model is more aligned with the ideal goals of the student, despite its sub-optimal clean accuracy, the clean and robust accuracy of the student are better than those obtained using a standard self-supervised model as a teacher (A3 vs. A1). This further justifies the first hypothesis.

2. Matching the pretraining and linear probe training objectives of the student: To align pretraining with linear probing, we perform linear probing on the teacher model, and further train the student by maximizing the cosine similarity between the logits of the teacher and student. This boosts the student accuracy by 3.6% in Table 1 (S2 vs. S1) and by 3.8% in Table 2 (A2 vs. A1). The projector isolates the representations of the student from the training loss, as indicated by the lower similarity between the student and teacher at the feature space when compared to that at the projector (in S5 and A4), and prevents overfitting of the student to the teacher training objective. This makes the student robust to the misalignment between the teacher training objective and the ideal goals of the student, and also to the mismatch in student pretraining and linear probing objectives, thereby improving student performance, as seen in Tables 1 (S5 vs. S1) and 2 (A4 vs. A1).

4.2 ProFeAT: PROJECTED FEATURE ADVERSARIAL TRAINING

We present the details of the proposed approach, Projected Feature Adversarial Training, illustrated in Fig. 1. Firstly, a teacher model is trained using a self-supervised training algorithm such as SimCLR [Chen et al., 2020b], which is also used as an initialization for the student for better convergence.

**Use of Projection Layer:** As discussed in Section 4.1, to overcome the impact of the inherent misalignment between the training objective of the teacher and the ideal goals of the student, and the mismatch between the pretraining and linear probing objectives, we propose to use a projection head at the output of the student backbone. As noted in Tables 1 (S5 vs. S1) and 2 (A4 vs. A1), even a randomly initialized projection head improves performance.
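For illustration, a student with a projection head can be sketched as below. The MLP structure and dimensions (a SimCLR-style two-layer projector, 512-dimensional backbone features) are assumptions for this example, not a prescribed design.

```python
import torch.nn as nn

class ProjectedStudent(nn.Module):
    """Backbone followed by a projection MLP (dimensions are illustrative)."""
    def __init__(self, backbone, feat_dim=512, proj_dim=128):
        super().__init__()
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        f = self.backbone(x)              # feature space, S_f(x)
        return f, self.projector(f)       # projector space, S_fp(x)
```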
Most self-supervised pretraining methods use similarity-based losses at the output of a projection head for training [Chen et al., 2020b; He et al., 2020; Grill et al., 2020; Chen & He, 2021; Zbontar et al., 2021], resulting in a projected space where similarity has been enforced during pretraining, thus giving higher importance to the key dimensions. We therefore propose to reuse this pretrained projection head for both the teacher and student, and freeze it during training to prevent convergence to an identity mapping.

**Defense loss:** As is common in adversarial training literature [Zhang et al., 2019, 2022], we use a combination of a loss on clean samples and a smoothness loss to enforce adversarial robustness in the student model. Since the loss on clean samples utilizes supervision from the self-supervised pretrained teacher, it is enforced at the outputs of the respective projectors of the teacher and student, as discussed above. The goal of the second loss is merely to enforce local smoothness in the loss surface of the student, and it is enforced in an unsupervised manner [Zhang et al., 2019, 2022]. Thus, it is ideal to enforce this loss at the feature space of the student network, since these representations are directly used for downstream applications. While the ideal locations for the clean and adversarial losses are the projected and feature spaces respectively, we find that such a loss formulation is hard to optimize, resulting in either a non-robust model or collapsed representations, as shown in Table-.

Figure 1: **Proposed approach (ProFeAT):** The student is trained using a distillation loss on clean samples using supervision from an SSL pretrained teacher, and a smoothness loss to enforce adversarial robustness. A frozen pretrained projection layer is used at the teacher and student to prevent overfitting to the clean distillation loss. The use of strong augmentations at the student increases attack diversity, while weak augmentations at the teacher reduce the training complexity.

We therefore utilize a complementary loss as a regularizer in the respective spaces. This results in a combination of losses at the feature and projector spaces as shown below:

\[ L_{fp} = - \sum_i \left[ \cos(T_{fp}(x_i), S_{fp}(x_i)) + \beta \cdot \cos(S_{fp}(x_i), S_{fp}(\tilde{x}_i)) \right] \quad (1) \]

\[ L_f = - \sum_i \left[ \cos(T_f(x_i), S_f(x_i)) + \beta \cdot \cos(S_f(x_i), S_f(\tilde{x}_i)) \right] \quad (2) \]

\[ L_{ProFeAT} = \frac{1}{2} \cdot (L_{fp} + L_f) \quad (3) \]

\[ \tilde{x}_i = \arg \max_{\tilde{x}_i : \|\tilde{x}_i - x_i\|_\infty \leq \varepsilon} - \left\{ \cos(T_{fp}(x_i), S_{fp}(\tilde{x}_i)) + \cos(S_f(x_i), S_f(\tilde{x}_i)) \right\} \quad (4) \]

Here, \(T_{fp}\) is the composition of the feature backbone \(T_f\) and the projection layer \(T_p\) of the teacher, and a similar notation is used for the student as well. The first term in Eqs. (1) and (2) represents the clean loss, which is the cosine similarity between the representations of the teacher and the student at the corresponding layers. The second term corresponds to the smoothness loss at the respective layers of the student, and is weighted by a hyperparameter \(\beta\) that controls the robustness-accuracy trade-off in the downstream model. The overall loss \(L_{ProFeAT}\) is an equally weighted combination of the losses at the feature and projection spaces, as shown in Eq. (3), and is minimized during training. We show in Fig. 4 that the model is stable to variations in the weighting between the losses.
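A minimal PyTorch-style sketch of the training loss in Eqs. (1)-(3) is given below, assuming the teacher backbone, the frozen projector, and the student backbone are provided as modules and that the cosine similarities are averaged over the batch. This is an illustrative sketch under those assumptions, not the authors' released code; in practice, the teacher and student may also receive differently augmented views of the same image.

```python
import torch
import torch.nn.functional as F

def profeat_loss(student_f, teacher_f, student_proj, teacher_proj, x, x_adv, beta):
    def cos(a, b):
        return F.cosine_similarity(a, b, dim=-1).mean()

    # feature-space representations
    s_f, s_f_adv = student_f(x), student_f(x_adv)
    with torch.no_grad():
        t_f = teacher_f(x)
        t_fp = teacher_proj(t_f)
    # projector-space representations (frozen pretrained projector)
    s_fp, s_fp_adv = student_proj(s_f), student_proj(s_f_adv)

    loss_fp = -(cos(t_fp, s_fp) + beta * cos(s_fp, s_fp_adv))   # Eq. (1)
    loss_f = -(cos(t_f, s_f) + beta * cos(s_f, s_f_adv))        # Eq. (2)
    return 0.5 * (loss_fp + loss_f)                              # Eq. (3)
```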
We conjecture that the pretrained projection layer steers the cosine similarity loss to give higher importance to the required features, although similarity is additionally enforced at the feature layer as well.

**Attack generation:** The attack used in the training loss is generated by maximizing a combination of losses in both the projector and feature spaces, as shown in Eq. (4). Since the projector space is primarily used for enforcing similarity with the teacher, we minimize the cosine similarity between the teacher and student representations for attack generation. Since the feature space is primarily used for enforcing local smoothness in the loss surface of the student, we utilize the unsupervised formulation that minimizes the similarity between the representations of clean and adversarial samples at the student.

**Augmentations:** Standard supervised and self-supervised training approaches are known to benefit from the use of strong data augmentations such as AutoAugment (Cubuk et al., 2018). However, such augmentations, which distort the low-level features of images, are known to deteriorate the performance of adversarial training (Rice et al., 2020; Gowal et al., 2020). Addepalli et al. (2022) attribute the poor performance to the larger domain shift between the augmented train and unaugmented test set images, in addition to the increased complexity of the adversarial training task, which overpower the superior generalization attained due to the use of diverse augmentations. Although these factors influence adversarial training in the self-supervised regime as well, we hypothesize that the need for better generalization is higher in self-supervised training, since the pretraining task is not aligned with the ideal goals of the student, making it important to use strong augmentations. However, it is also important to ensure that the training task is not too complex. We thus propose to use a combination of weak and strong augmentations as inputs to the teacher and student respectively, as shown in Fig. 1. From Fig. 2, we note that the use of strong augmentations results in the generation of more diverse attacks, resulting in a larger drop when differently augmented images are used across different restarts of a PGD 5-step attack. The use of weak augmentations at the teacher imparts better supervision to the student, reducing the training complexity.
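For completeness, the attack of Eq. (4) can be sketched as a PGD-style optimization as below. The number of steps, the step size, and the random initialization are illustrative assumptions; only the objective follows Eq. (4).

```python
import torch
import torch.nn.functional as F

def profeat_attack(student_f, student_proj, t_fp, x, eps=8/255, step_size=2/255, steps=10):
    # t_fp: precomputed teacher projector output T_fp(x) for the clean inputs
    with torch.no_grad():
        s_f_clean = student_f(x)                     # S_f(x), the clean reference
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        s_f = student_f(x_adv)
        s_fp = student_proj(s_f)
        # Eq. (4): maximize the negative sum of similarities, i.e. minimize
        # teacher-student similarity (projector) and clean-adv similarity (features)
        loss = -(F.cosine_similarity(t_fp, s_fp, dim=-1).mean()
                 + F.cosine_similarity(s_f_clean, s_f, dim=-1).mean())
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()     # ascent on the objective
        x_adv = x + (x_adv - x).clamp(-eps, eps)             # project to the l_inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```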
| | CIFAR-10 | | | CIFAR-100 | | |
|------------------|----------|-------|-------|-----------|-------|-------|
| | SA | RA-PGD20 | RA-AA | SA | RA-PGD20 | RA-AA |
| **ResNet-18** | | | | | | |
| Supervised (TRADES) | 83.74 | 49.35 | 47.60 | 59.07 | 26.22 | 23.14 |
| AP-DPE | 78.30 | 18.22 | 16.07 | 47.91 | 6.23 | 4.17 |
| RoCL | 79.90 | 39.54 | 23.38 | 49.53 | 18.79 | 8.66 |
| ACL | 77.88 | 42.87 | 39.13 | 47.51 | 20.97 | 16.33 |
| AdvCL | 80.85 | 50.45 | 42.57 | 48.34 | 27.67 | 19.78 |
| DynACL | 77.41 | - | 45.04 | 45.73 | - | 19.25 |
| DynACL++ | 79.81 | - | 46.46 | 52.26 | - | 20.05 |
| DeACL (Reported) | 80.17 | 53.95 | 45.31 | 52.79 | 30.74 | 20.34 |
| DeACL (Our Teacher) | 80.05±0.29 | 52.97±0.08 | **48.15±0.05** | 51.53±0.30 | 30.92±0.21 | 21.91±0.13 |
| ProFeAT (Ours) | **81.68±0.23** | 49.55±0.16 | 47.02±0.01 | **53.47±0.10** | 27.95±0.13 | **22.61±0.14** |
| **WideResNet-34-10** | | | | | | |
| Supervised (TRADES) | 85.50 | 54.29 | 51.59 | 59.87 | 28.86 | 25.72 |
| DynACL++ (500 epochs) | 82.27 | 49.60 | 47.12 | 52.59 | 24.22 | 21.27 |
| DynACL++ (1000 epochs) | 80.97 | 48.28 | 45.50 | 52.60 | 23.42 | 20.58 |
| DeACL | 83.83±0.20 | 57.09±0.06 | 48.85±0.11 | 52.92±0.35 | 32.66±0.08 | 23.82±0.07 |
| ProFeAT (Ours) | **87.62±0.13** | 54.50±0.17 | **51.95±0.19** | **61.08±0.18** | 31.96±0.08 | **26.81±0.11** |

Table 4: **Performance across different models**: Standard Linear Probing performance (%) of DeACL (Baseline) and ProFeAT (Ours) across different architectures on CIFAR-100. ViT-B/16 uses an ImageNet-1K trained SSL teacher for training, while the SSL teacher in all other cases is trained on CIFAR-100. SA: Standard Accuracy, RA-AA: Robust Accuracy against AutoAttack.

| | # parameters (M) | DeACL | | ProFeAT (Ours) | |
|------------------|------------------|-------|-------|---------------|-------|
| | | SA | RA-AA | SA | RA-AA |
| ResNet-18 | 11.27 | 51.53 | 21.91 | 53.47 | 22.61 |
| ResNet-50 | 23.50 | 53.30 | 23.00 | 59.34 | 25.86 |
| WideResNet-34-10 | 46.28 | 52.92 | 23.82 | 61.08 | 26.81 |
| ViT-B/16 | 85.79 | 61.34 | 17.49 | 65.08 | 21.52 |

However, it is also important to ensure that the training task is not too complex. We thus propose to use a combination of weak and strong augmentations as inputs to the teacher and student respectively, as shown in Fig. 1. From Fig. 2, we note that the use of strong augmentations results in the generation of more diverse attacks, resulting in a larger drop when differently augmented images are used across different restarts of a PGD 5-step attack. The use of weak augmentations at the teacher imparts better supervision to the student, reducing the training complexity.

## 5 Experiments and Results

### 5.1 Comparison with the State-of-the-Art

In Table 3, we present a comparison of the proposed approach ProFeAT with respect to several existing self-supervised adversarial training approaches [Chen et al., 2020a; Kim et al., 2020; Jiang et al., 2020; Fan et al., 2021; Zhang et al., 2022; 2019] by freezing the feature extractor and performing linear probing using cross-entropy loss on clean samples. To ensure a fair comparison, the same is done for the supervised AT method TRADES [Zhang et al., 2019] as well. We report results on CIFAR-10 and CIFAR-100 datasets, and on ResNet-18 and WideResNet-34-10 architectures. The results of existing methods on the ResNet-18 architecture are as reported by [Zhang et al., 2022].
Since DeACL [Zhang et al., 2022] also uses a teacher-student architecture, we reproduce their results using the same teacher as our method, and report the same as "DeACL (Our Teacher)". Since most existing methods do not report results on WideResNet-34-10, we compare our results only with the best performing method (DeACL) and a recent method DynACL [Luo et al., 2023]. These results are not reported in the respective papers, hence we run them using the official code. The proposed approach obtains a competent robustness-accuracy trade-off when compared to the best performing baseline method DeACL on the CIFAR-10 dataset with the ResNet-18 architecture, and obtains \( \sim 2\% \) higher clean accuracy alongside marginal gains in robust accuracy on CIFAR-100 with ResNet-18.

Table 5: **Transfer Learning:** Standard Linear Probing performance (%) for transfer learning from CIFAR-10 and CIFAR-100 to the STL-10 dataset on ResNet-18 and WideResNet-34-10 models. Standard Accuracy (SA), robustness against PGD-20 (RA-PGD20) and AutoAttack (RA-AA) reported.

| | ResNet-18 | | | | | | WideResNet-34-10 | | | | | |
|------------------|-----------|---|---|---|---|---|-------------------|---|---|---|---|---|
| | CIFAR-10 -> STL-10 | | | CIFAR-100 -> STL-10 | | | CIFAR-10 -> STL-10 | | | CIFAR-100 -> STL-10 | | |
| | SA | RA-PGD20 | RA-AA | SA | RA-PGD20 | RA-AA | SA | RA-PGD20 | RA-AA | SA | RA-PGD20 | RA-AA |
| Supervised | 54.70 | 37.45 | 22.26 | 51.11 | 23.63 | 19.54 | 67.15 | 32.78 | 30.49 | 57.68 | 17.49 | 11.26 |
| DeACL | 60.10 | 41.40 | 30.71 | 50.91 | 27.76 | 16.25 | 66.45 | 39.28 | 38.43 | 50.59 | 27.50 | 13.49 |
| ProFeAT (Ours) | 64.30 | 35.50 | 30.95 | 52.63 | 26.72 | 20.55 | 69.88 | 35.48 | 31.65 | 56.68 | 24.95 | 19.46 |

Table 6: **Ablations on Projector (CIFAR-100, WRN-34-10):** Performance (%) using variations in projector (proj.) initialization (init.) and trainability. SA: Standard Accuracy, RA-G: Robust accuracy against GAMA, RA-PGD20: Robust Accuracy against PGD-20 attack

| # | Student proj. | Proj. init. (student) | Teacher proj. | Proj. init. (teacher) | SA | RA-PGD20 | RA-G |
|-------|---------------|-----------------------|---------------|----------------------|----|----------|------|
| AP1 | Absent | - | Absent | - | 55.35 | 35.89 | 27.86 |
| AP2 | Trainable | Random | Absent | - | 63.07 | 32.05 | 26.57 |
| AP3 | Frozen | Pretrained | Absent | - | 40.43 | 27.51 | 22.23 |
| AP4 | Trainable | Pretrained | Absent | - | 62.89 | 31.97 | 26.57 |
| AP5 | Trainable | Random (common) | Trainable | Random (common) | 53.43 | 35.58 | 27.23 |
| Ours | Frozen | Pretrained | Frozen | Pretrained | 61.05 | 31.99 | 27.41 |
| AP6 | Trainable | Pretrained (common) | Trainable | Pretrained (common) | 54.60 | 36.10 | 27.41 |
| AP7 | Trainable | Pretrained | Frozen | Pretrained | 58.18 | 35.26 | 27.73 |

On the WideResNet-34-10 architecture, we obtain \( \sim 3 - 3.5\% \) gains in both robust and clean accuracy on CIFAR-10, and similar gains in robust accuracy on CIFAR-100 as well. We obtain exceptional gains of \( \sim 8\% \) on the clean accuracy of CIFAR-100. Overall, the gains of the proposed approach are higher for larger model capacities (WRN-34-10). We obtain superior results when compared to the supervised AT method TRADES as well, at higher model capacities. We present an evaluation of the pretrained models using other methods such as KNN in Appendix G.8.

**Results of CIFAR-100 across different model architectures:** We report performance of the proposed method ProFeAT and the best baseline DeACL on diverse architectures including Vision Transformers (Dosovitskiy et al., 2021) on the CIFAR-100 dataset in Table 4.
ProFeAT consistently outperforms DeACL in both clean and robust accuracy across various model architectures (see Appendix C for details).

**Transfer learning:** To evaluate the robustness and generalization of the representations learned, we compare the proposed approach with the best baseline DeACL (Zhang et al., 2022) in Table 5. We consider transfer from CIFAR-10 to STL-10 (Coates et al., 2011) and from CIFAR-100 to STL-10. When compared to DeACL, the clean accuracy is \( \sim 4 - 10\% \) higher on CIFAR-10 and \( \sim 1.7 - 6\% \) higher on CIFAR-100. We also obtain \( 3 - 5\% \) higher robust accuracy when compared to DeACL on CIFAR-100, and higher improvements over TRADES. Results with Adversarial Full Finetuning (AFF) to STL-10 and Caltech-101 are presented in Appendix G.9.

### 5.2 Ablations

**Projection Layer:** We present ablation experiments using different configurations of the projection layer in Table 6. As discussed in Section 4.1, we observe a large boost in clean accuracy when a random (or pretrained) trainable projection layer is introduced to the student (AP2/AP4 vs. AP1). While the use of a pretrained frozen projection head only for the student degrades performance considerably (AP3), the use of the same for both teacher and student (Ours) yields an optimal robustness-accuracy trade-off across all variations. The use of a common trainable projection head for both teacher and student results in collapsed representations at the projector output (AP5, AP6), yielding results similar to the case where the projector is not used for both teacher and student (AP1). This issue is overcome when the pretrained projector is trainable only for the student (AP7).

**Training Loss:** We present ablation experiments across variations in the training loss at the feature space and the projection head in Table 7. In the proposed approach (Ours), we introduce a combination of clean and robust losses at both feature and projector layers, as shown in Eq. 3. By introducing the loss only at the features (AD1), there is a considerable drop in clean accuracy as seen earlier, which can be recovered by introducing the clean loss at the projection layer (AD3). Instead, when only the robust loss is introduced at the projection layer (AD4), there is a large drop in clean accuracy, confirming that the projection layer is mainly needed for enforcing the clean loss. When the combined loss is enforced only at the projection head (AD2), the accuracy is close to that of the
proposed approach, with marginally lower clean and robust accuracy. Enforcing only the adversarial loss in the feature space and only the clean loss in the projector space is a hard optimization problem, and this results in a non-robust model (AD5). As shown in Table 12, even by increasing \( \beta \) in AD5, we do not obtain a robust model; rather, there is a representation collapse. Thus, as discussed in Section 4.1, it is important to introduce the adversarial loss as a regularizer in the projector space as well (AD6). Enforcing only one of the two losses at the feature space (AD6 and AD7) also results in either inferior clean accuracy or robustness. Finally, from AD8 and AD9, we note that the robustness loss is better when implemented as a smoothness constraint on the representations of the student, rather than by matching representations between the teacher and student. Overall, the proposed approach (Ours) results in the best robustness-accuracy trade-off.

Table 7: Ablations on Training Loss (CIFAR-100, WRN-34-10): Performance (%) with variations in training loss at feature (feat.) and projector (proj.). "Clean" denotes the cosine similarity between representations of teacher and student on clean samples. "Adv" denotes the cosine similarity between representations of the corresponding clean and adversarial samples, either at the output of the student \((S, S)\) or between the teacher and student \((T, S)\). SA: Standard Accuracy, RA-G: Robust accuracy against GAMA, RA-PGD20: Robust Accuracy against PGD-20 attack

| # | Loss @ feat | Loss @ proj | SA | RA-PGD20 | RA-G |
|-----|-------------|------------|-----|----------|------|
| Ours| clean + adv(S, S) | clean + adv(S, S) | 61.05 | 31.99 | 27.41 |
| AD1 | clean + adv(S, S) | - | 55.35 | 38.89 | **27.86** |
| AD2 | - | clean + adv(S, S) | 59.65 | 33.03 | 26.90 |
| AD3 | clean + adv(S, S) | clean | 61.69 | 31.34 | 26.40 |
| AD4 | clean + adv(S, S) | adv(S, S) | 49.59 | 31.79 | 25.35 |

Table 8: Ablations on Augmentations used (CIFAR-100, WRN-34-10): Performance (%) using different augmentations for the teacher and student. (PC: Pad+Crop, AuAu: AutoAugment). Standard Accuracy (SA) and Robust accuracy against GAMA (RA-G), PGD-20 (RA-PGD20) reported

| # | Teacher | Student | SA | RA-PGD20 | RA-G |
|-----|---------|---------|-----|----------|------|
| AG1 | PC | PC | 56.57 | 30.54 | 25.29 |
| AG2 | AuAu | AuAu | 60.76 | 31.83 | 27.21 |
| AG3 | PC1 | PC2 | 56.95 | 30.94 | 25.39 |

**Training Augmentations:** We present ablation experiments to understand the impact of augmentations used for the teacher and student in Table 8. The base method (AG1) uses a common Pad and Crop (PC) augmentation for both teacher and student. By using more complex augmentations, AutoAugment followed by Pad and Crop (denoted as AuAu in the table), there is a significant improvement in both clean and robust accuracy. By using separate augmentations for the teacher and student, there is an improvement in the case of PC (AG3), but a drop in clean accuracy accompanied by better robustness in the case of AuAu. Finally, by using a mix of both AuAu and PC at the student and teacher respectively (Ours), we obtain improvements in both clean and robust accuracy, since the former improves attack diversity as shown in Fig. 2, while the latter makes the training task easier.

**Robustness-Accuracy trade-off:** We present results across variation in the robustness-accuracy trade-off parameter \( \beta \) (Eq. 1 and 2) in Fig. 3. Both robustness and accuracy of the proposed method are significantly better than DeACL across all values of \( \beta \). Secondly, the proposed approach allows significantly better control over the robustness-accuracy trade-off, specifically in the range 2-12, where the linear drop in clean accuracy is accompanied by an increase in the robust accuracy.
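Returning to the augmentation scheme ablated above (AG1-AG3 and Ours), the torchvision sketch below shows one possible implementation of the weak (teacher) and strong (student) views. The exact parameters (32x32 crops, padding of 4, the CIFAR-10 AutoAugment policy, horizontal flips) are assumptions for illustration rather than the paper's reported configuration.

```python
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

# Weak view (PC): pad-and-crop only, fed to the teacher.
teacher_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Strong view (AuAu): AutoAugment followed by pad-and-crop, fed to the student
# (this view is also the starting point for attack generation).
student_transform = transforms.Compose([
    AutoAugment(policy=AutoAugmentPolicy.CIFAR10),
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```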
We present additional ablations by varying the SSL algorithm, projection layer architecture, number of training epochs of the teacher, and number of attack steps in Appendix C. 6 CONCLUSION In this work, we bridge the performance gap between supervised and self-supervised adversarial training approaches, specifically for large capacity models. We utilize a teacher-student setting (Zhang et al., 2022) where a standard self-supervised trained teacher is used to provide supervision to the student. Due to the inherent misalignment between the teacher training objective and the ideal goals of the student, we propose to use a projection layer in order to prevent the network from overfitting to the standard SSL trained teacher. We present a detailed analysis on the use of projection layer in distillation to justify our method. We additionally propose appropriate attack and defense losses in the feature and projector spaces alongside the use of weak and strong augmentations for the teacher and student respectively, to improve the attack diversity while maintaining low complexity of the training task. The proposed approach obtains significant gains over existing self-supervised adversarial training methods, specifically at large model capacities, demonstrating its scalability. REFERENCES Sravanti Addepalli, Samyak Jain, and R.Venkatesh Babu. Efficient and effective augmentation strategy for adversarial training. *Advances in Neural Information Processing Systems (NeurIPS)*, 35: 1488–1501, 2022. Sravanti Addepalli, Anshul Nasery, Venkatesh Babu Radhakrishnan, Praneeth Netrapalli, and Prateek Jain. Feature reconstruction from outputs can mitigate simplicity bias in neural networks. In *The Eleventh International Conference on Learning Representations*, 2023. Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In *The European Conference on Computer Vision (ECCV)*, 2020. Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In *International Conference on Machine Learning (ICML)*, 2018. Florian Bordes, Randall Balestrierio, Quentin Garrido, Adrien Bardes, and Pascal Vincent. Guillotine regularization: Improving deep networks generalization by removing their head. *arXiv preprint arXiv:2206.13378*, 2022. Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. In *International Conference on Learning Representations (ICLR)*, 2018. Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, and Aleksander Madry. On evaluating adversarial robustness. *arXiv preprint arXiv:1902.06705*, 2019. Jinghui Chen and Quanquan Gu. Rays: A ray searching method for hard-label adversarial attack. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 1739–1747, 2020. Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 699–708, 2020a. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020b. 
Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021. Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 215–223. JMLR Workshop and Conference Proceedings, 2011. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *International Conference on Machine Learning (ICML)*, 2020. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark, 2021. Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. *arXiv preprint arXiv:1805.09501*, 2018. Guneet S. Dhillon, Kamyar Azizzadenesheli, Jeremy D. Bernstein, Jean Kossaifi, Aran Khanna, Zachary C. Lipton, and Animashree Anandkumar. Stochastic activation pruning for robust adversarial defense. In *International Conference on Learning Representations (ICLR)*, 2018.
IOrnCVIKIZ
In Table 2 and Table 5, are the pretrained model performances on HumanEval measured with or without postprocessing? I believe a more apples-to-apples comparison would be between the pretrained model w/ postprocessing and LETI w/o postprocessing. Given that LETI seems to learn how to clean up syntax errors, that seems a fairer comparison.
LETI: Learning to Generate from Textual Interactions Anonymous authors Paper under double-blind review Abstract Finetuning pre-trained language models (LMs) is essential for enhancing their capabilities and is a crucial phase in their lifecycles. Existing techniques commonly fine-tune on input-output pairs (e.g., instruction fine-tuning [Wei et al., 2022a]) or with numerical rewards that gauge the output quality (e.g., reinforcement learning from human feedback [Ouyang et al., 2022]). We explore LMs’ potential to learn from textual interactions (LETI) that not only check their correctness with binary labels but also pinpoint and explain errors in their outputs through textual feedback. Our focus is the code generation task, where the model produces code based on natural language instructions. This setting invites a natural and scalable way to acquire textual feedback: the error messages and stack traces from code execution using a Python interpreter. LETI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback, which is only provided when the generated program fails to solve the task. Prepended to this fine-tuning text, a binary reward token is used to differentiate correct and buggy solutions. LETI requires no ground-truth outputs for training and even outperforms a fine-tuned baseline that does. LETI not only improves the performance of two base LMs of different scales on a code generation dataset MBPP, but also generalizes to other datasets. Trained on MBPP, it achieves comparable or better performance than the base LMs on unseen problems in HumanEval. Furthermore, compared to binary feedback, we observe that textual feedback leads to improved generation quality and sample efficiency, achieving the same performance with fewer than half of the gradient steps. LETI is equally applicable in natural language tasks when they can be formulated as code generation, which we empirically verified on event argument extraction.\footnote{Our code will be available at <anonymized>.} 1 Introduction Large-scale language models have fundamentally shifted the paradigms of natural language processing (NLP). Based on LMs pre-trained on raw text, subsequent fine-tuning stages have proven crucial to enhance their capabilities in solving benchmark NLP tasks and generating texts that align with human preferences. Success has been achieved by fine-tuning with direct training signals that measure whether the model, e.g., classifies the input into the right category [Devlin et al., 2019], answers a question correctly [Li et al., 2017; Ramamurthy et al., 2022], summarizes documents well [Stiennon et al., 2020; Wu et al., 2021], and generates outputs that align with human preferences [Ouyang et al., 2022; Korbak et al., 2023]. We hypothesize that LMs can harness the much richer training signals from textual interactions with the environment (e.g., a human or a Python interpreter) that not only check the correctness of LM’s outputs but also pinpoint the errors and explain why. We propose LETI, a new LM fine-tuning paradigm that aims to explore LMs’ potential to learn from nuanced textual interactions. We evaluate LETI on code generation tasks, where the LM is supposed to generate code pieces to solve tasks described in natural language. 
This setting invites a natural and scalable way to acquire automatic interactive textual feedback: the stack traces and error message outputs by established programming language (PL) tools such as a Python interpreter. LETI’s improvement process naturally mirrors a typical software development cycle: a human developer writes an initial program, executes it, and improves the program based on feedback obtained from... Figure 1: Qualitative example of LETI improving an LM on code generation by leveraging feedback from a solution evaluator (e.g., a Python interpreter). At each LETI iteration, the LM is first asked to generate candidate solutions. As a case study, we obtain binary and textual feedback by executing the solution against test cases using a Python interpreter. Feedback and the generated solutions are used to improve the LM generator for the next LETI iteration through feedback-conditioned fine-tuning (\S2.3). This is a code generation (MBPP; Austin et al., 2021) test set example generated by a 2B model optimized with LETI. We omit a few iterations and repetitive code for clarity. the programming environment until a satisfying solution is found (e.g., successfully executed with no error); Furthermore, the human developer learns from mistakes in this process and becomes a (slightly) better developer who can avoid similar mistakes in the future. Similarly to the human development process, we provide empirical evidence that LETI can learn from past mistakes and avoid similar errors in \S3.2. In LETI, a base LM pre-trained on both natural language and code\(^2\) is asked to generate a piece of program conditioning on the natural language instruction, which is then tested on a suite of test cases. LETI fine-tunes the model on a concatenation of natural language instruction, LM-generated program, and the textual feedback (e.g., stack traces and error messages) that pinpoints the bug, which is only provided when the generated program fails to solve the task. In addition to textual feedback, we prepend the fine-tuning sequences with a reward token (i.e., binary feedback), which differs for correct (<|good|>) and buggy solutions (<|bad|>), to encourage the LM to generate correct solutions when conditioning on <|good|>. LETI repeats this procedure for multiple rounds. During this iterative process, LETI assumes no instruction-code paired data. We find that LETI improves LM’s performance on code generation tasks in MBPP (Austin et al., 2021) without using any ground-truth code. Specifically, it generates 63.2% more syntactically correct and executable code (on the 2B LM) compared to the pre-trained model without any commonly employed post-processing heuristic\(^3\). When post-processing is applied, LETI (2B) improves performance and eliminates most NameError issues that occur when a variable or function is not defined (from 10% to 1%, on the 2B LM) in two iterations. The optimized LM also shows generalized performance --- \(^2\) Almost all modern large language models train on both natural language and code (Brown et al., 2020; OpenAI, 2023; Chowdhery et al., 2022; Touvron et al., 2023). \(^3\) Stop-word-based post-processing heuristics (Fig. A.1T) are commonly used by Code-LM (Chen et al., 2021b) to remove irrelevant code (e.g., only keep the first block of generated code). improvement on another code generation dataset HumanEval (Chen et al., 2021b) (§3.2). 
Such improvement in in-domain tasks does not come at the cost of the capability of the original LM (e.g., reasoning and chain-of-thought capability Wei et al., 2022b) due to LETI’s auxiliary objective that continuing pre-train along with fine-tuning (§3.4). We observe that textual feedback is advantageous in terms of improving the LM compared to baselines that only use binary feedback, as it offers enhanced performance and greater sample efficiency that only requires about half of the gradient steps to reach the same performance for the 2B-scale model (§3.5). Furthermore, we find LETI is equally applicable to NLP tasks (e.g., event argument extraction Wang et al., 2023a) when they can be formulated into a code generation problem (§3.5). 2 LETI: Learning from Textual Interactions Each iteration, LETI prompts the LM (§2.1) with the natural language problem description to generate a set of \( n \) solutions. The solutions are then evaluated on a suite of test cases by a Solution Evaluator (§2.2) to generate textual feedback (i.e., stack traces and error messages). This work uses a Python interpreter as the solution evaluator to assess LM-generated solutions. The textual feedback is used to fine-tune the LM with Feedback-Conditioned Fine-Tuning (FCFT, §2.3). We assume no ground-truth solutions while fine-tuning the LM, as LETI directly learns from solution evaluator’s feedback. Intuitively, FCFT leverages textual feedback to associate various types of errors (e.g., SyntaxError) and solutions that commit them. Furthermore, with binary feedback, FCFT aligns correct or wrong solutions with corresponding pre-pended reward tokens \( <|\text{good}|> \) or \( <|\text{bad}|> \), so that better solutions can be sampled from a trained LM by conditioning it on \( <|\text{good}|> \). The workflow (one iteration) is described in Algorithm 1 and Fig. A.6. 2.1 Language Model The base LM can be any generative language model \( p_\theta \), pre-trained on both natural and programming languages. For a given problem \( x_i \in \mathcal{P} \), we sample \( n \) solutions \( S_i = \{\hat{y}_{i,1}, \ldots, \hat{y}_{i,n}\} \) from \( p_\theta(\cdot | x_i) \) (conditioned on reward token \( <|\text{good}|> \) when \( p_\theta \) is fine-tuned for at least one iteration using FCFT), where each solution \( \hat{y}_{i,j} \) is a sequence of tokens. We analyze the importance of problem set size \( |\mathcal{P}| \) and the number of sampled solutions \( n \) in §B.2 and §B.1. Since \( p_\theta \) is trained on code, we assume that it can generate programs reasonably well in the training problem set, and at least some of the \( n \) solutions are correct when an arbitrarily large \( n \) is chosen. We use \( n = 128 \) for code generation experiments on MBPP (§3.2) and \( n = 64 \) for event argument extraction (§3.5). 2.2 Solution Evaluator Given a problem \( x_i \), its test cases \( T_i \), and any generated solution \( \hat{y}_{i,j} \), the Solution Evaluator \( \phi \) (a Python interpreter) provides feedback \( F_{i,j} \), which consists of binary \( f_{\text{binary}} \) and textual feedback \( f_{\text{text}} \) (i.e., \( f_{\text{binary}}, f_{\text{text}} = \phi(x_i, \hat{y}_{i,j}, T_i) \)). \( f_{\text{binary}} \in \{0, 1\} \) reflects the correctness of a solution, where \( f_{\text{binary}} = 1 \) means the given solution \( \hat{y}_{i,j} \) can successfully solve the given problem \( x_i \), and vice versa. 
\( f_{\text{text}} \) is a concatenation of stack traces and a textual error message provided by the Python interpreter only when the generated solution commits an error on a test case. Examples of \( f_{\text{text}} \) can be found in Fig. 1 and A.6. Generally speaking, we can implement \( \phi \) differently for different types of problems; in §3.5, we show that it is possible to implement a \( \phi \) that works for an NLP task.

2.3 Feedback-conditioned Fine-tuning (FCFT)

Each LETI iteration samples solutions from the LM \( p_\theta \), evaluates the generated solutions to obtain feedback using \( \phi \), and improves the generator LM with feedback-conditioned fine-tuning (FCFT). FCFT fine-tunes \( p_\theta \) on each problem \( x_i \) and generated solution \( \hat{y}_{i,j} \) conditioned on feedback \( F_{i,j} \) (a sequence of tokens comprised of binary \( f_{\text{binary}} \) and textual feedback \( f_{\text{text}} \)). This resembles on-policy reinforcement learning, where \( p_\theta \) is the policy and the solution evaluator \( \phi \) plays the role of a reward function. Feedback \( F_{i,j} \) concatenates one initial reward token that denotes the binary feedback \( f_{\text{binary}} \) indicating whether the solution is correct, and textual feedback \( f_{\text{text}} \), if provided. If the solution evaluator \( \phi \) finds solution \( \hat{y}_{i,j} \) correct, we use a reward token \( <|\text{good}|> \), and \( <|\text{bad}|> \) otherwise. Following the initial reward token, we include the textual feedback \( f_{\text{text}} \), if provided, enclosed by two special tokens denoting the beginning and end of textual feedback (i.e., \( <|\text{text\_feedback}|> \), \(<|/text\_feedback|>\)). That is, the feedback for problem \( x_i \) and solution \( \hat{y}_{i,j} \) is a concatenated sequence of tokens: \( F_{i,j} = f_{\text{binary}} \oplus <|\text{text\_feedback}|> \oplus f_{\text{text}} \oplus <|/text\_feedback|> \). In the case when \( f_{\text{text}} \) is not provided (e.g., when \( f_{\text{binary}} = 1 \)), only the initial reward token is included as feedback: \( F_{i,j} = f_{\text{binary}} \). We expand the vocabulary of the initial pre-trained LM \( p_\theta \) to include these additional tokens. LETI optimizes \( p_\theta \) with the language modeling objective on the sequence \( s = F_{i,j} \oplus x_i \oplus \hat{y}_{i,j} \) (i.e., a concatenation of instruction and generated solution conditioned on the feedback), as shown in the first term of Eq. (1). A concrete example of a data instance can be found in Fig. A.6.

### 2.4 Regularization with Continued Pre-training

To alleviate distribution shifts that may be caused by fine-tuning on generated solutions, we interleave FCFT optimization (§2.3) with LM objective optimization on the pre-training data. Eq. (1) puts LETI's entire training loss together. Our ablation study shows that the regularization by continued pre-training is essential to maintain the LM's original capability on tasks that it was not trained on (§3.4).

\[ L(\theta) = \frac{1}{|D_{\text{FCFT}}|} \sum_{s=F \oplus x \oplus y \in D_{\text{FCFT}}} L_{\text{LM}}(s, \theta) + \frac{1}{|D_{\text{pre-train}}|} \sum_{s' \in D_{\text{pre-train}}} L_{\text{LM}}(s', \theta) \] (1)
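A minimal end-to-end sketch of the solution evaluator (§2.2) and the FCFT sequence construction (§2.3) described above is given below. The helper names (`evaluate_solution`, `build_fcft_example`), the use of `subprocess` to run a candidate program against assert-style test cases, and the exact handling of timeouts are illustrative assumptions; only the special tokens and the \( F \oplus x \oplus \hat{y} \) layout follow the description in the text.

```python
import os
import subprocess
import tempfile

GOOD, BAD = "<|good|>", "<|bad|>"
TF_START, TF_END = "<|text_feedback|>", "<|/text_feedback|>"

def evaluate_solution(solution_code: str, test_cases: list, timeout: int = 10):
    """Run the generated program against the test asserts; return (f_binary, f_text)."""
    program = solution_code + "\n" + "\n".join(test_cases)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return 0, "TimeoutError: execution exceeded the time limit"
    finally:
        os.remove(path)
    if result.returncode == 0:
        return 1, ""                      # passed all test cases; no textual feedback
    return 0, result.stderr.strip()       # stack trace + error message from the interpreter

def build_fcft_example(instruction: str, solution: str, f_binary: int, f_text: str) -> str:
    """Construct the training sequence s = F ⊕ x ⊕ ŷ used by feedback-conditioned fine-tuning."""
    feedback = GOOD if f_binary == 1 else BAD
    if f_text:                             # textual feedback is only attached to buggy solutions
        feedback += TF_START + f_text + TF_END
    return feedback + instruction + solution

# At the next LETI iteration, new solutions are sampled conditioned on the <|good|>
# reward token prepended to the instruction (Section 2.1).
```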
**Algorithm 1** One iteration of LETI improvement using Feedback-conditioned Fine-tuning (FCFT).

**Require:** \( D_{\text{pre-train}} \) ▷ Pre-training Dataset
\( D_{\text{FCFT}} \leftarrow \{\} \) ▷ Dataset for FCFT
for each problem \( x_i \in P \) and its test cases \( T_i \) do
for \( j = 1 \) to \( n \) do
Sample a solution \( \hat{y}_{i,j} \) from \( p_\theta(\cdot | x_i) \), conditioned on \( <|good|> \) for fine-tuned \( p_\theta \) (§2.1)
\( f_{\text{binary}}, f_{\text{text}} \leftarrow \phi(x_i, \hat{y}_{i,j}, T_i) \) ▷ Generate feedback using evaluator \( \phi \) (§2.2)
\( F_{i,j} = f_{\text{binary}} \oplus <|\text{text\_feedback}|> \oplus f_{\text{text}} \oplus <|/text\_feedback|> \)
\( D_{\text{FCFT}} \leftarrow D_{\text{FCFT}} \cup \{F_{i,j} \oplus x_i \oplus \hat{y}_{i,j}\} \) ▷ Construct the feedback-conditioned dataset
end for
end for
Fine-tune the LM \( p_\theta \) for a fixed number of epochs on \( D_{\text{FCFT}} \) and \( D_{\text{pre-train}} \) (Eq. (1))

### 3 Experimental Results

#### 3.1 Experiment Setup

**Base model.** We experiment with CodeGen-mono LMs (Nijkamp et al., 2022), a series of open-sourced LMs pre-trained with both natural language and code with a range of model sizes. The NL and PL mixture of pre-training data makes it possible to evaluate LETI on both NL and PL tasks. Due to limited computational resources, we choose to experiment with the 350M and 2B sized models.

**Dataset for continued pre-training.** We use the Python subset of TheStack v1.1 dataset (Kocetkov et al., 2022) as the continued pre-training dataset for the mixture pre-train objective (§2.4).

---

*The pre-training dataset BigPYTHON of CodeGen-mono is not publicly available at the time of writing.*

#### 3.2 LETI Makes LMs Better Code Generators

##### 3.2.1 Mostly Basic Python Problems (MBPP)

**Setup.** We use the Mostly Basic Python Problems (MBPP) dataset (Austin et al., 2021) for training and evaluation. It contains 974 short Python problems described in natural language targeting entry-level programmers. LETI requires no ground-truth code but assumes a test suite for each problem, which MBPP provides to check solutions' correctness. Additional details (e.g., hyper-parameters) can be found in §C. We allow the model to generate at most 512 tokens for each problem and evaluate the generated solutions by executing them against a test suite.

**Post-Processing.** Stop-word-based post-processing heuristics (Fig. A.11) are commonly employed by Code-LMs [Chen et al., 2021b] to remove irrelevant code (e.g., only keep the first block of generated code) and improve performance. However, such post-processing heuristics require manual effort and are less scalable to extend to different tasks. Whether or not LMs can improve code generation without post-processing is a great testbed to evaluate their capabilities of learning from textual feedback and is central to answering our research question. Therefore, we test the general applicability of LETI both with and without post-processing. Unless otherwise noted, we default to the without post-processing setting in the following experiments.

**Evaluation metrics.** We use the pass@k metric. The model generates k solutions for each problem; it is considered to successfully solve the problem if at least one of the k solutions passes all test cases. With higher k values, the chance of observing a correct output for a problem increases. To reduce variance, we sample more than k solutions to estimate pass@k; see §C.1 for details.

Figure 2: LETI performance on the MBPP training and test sets across fine-tuning iterations. Error types in the analysis below refer to the concrete exceptions of Python 3 (https://docs.python.org/3/library/exceptions.html#concrete-exceptions).
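For reference, pass@k is commonly computed with the numerically stable unbiased estimator of Chen et al. (2021b): given n sampled solutions of which c are correct, a minimal sketch (the function name is illustrative) is:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c correct (Chen et al., 2021b)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```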
**Results.** As shown in Fig. 2, LETI (w/o post-processing) learns from interactions with MBPP training set problems (i.e., iteratively generating solutions, evaluating them, and learning from textual feedback) to generate better solutions for both training and testing problems. Despite not being fine-tuned on any ground-truth solutions, LETI improves test set Pass@1 with increasing iterations and outperforms a supervised fine-tuned baseline (for the 2B model). LETI is also helpful when the post-processing heuristic is applied to the LM's output: the 2B LM improves from 26.89% to 29.53% within two iterations (Tab. 1). We include a qualitative example for the 2B model in Fig. 1.

**Error analysis.** On the MBPP test set with 8,000 instances (500 test examples, 16 generations per example), we show how the distribution of error types changes for LETI (2B) in Tab. 1. These error types are concrete exceptions of the Python 3 programming language. On LETI (2B, w/o post-processing), we initially observe that most errors are SyntaxError (5179, 64.7%) due to no post-processing. We find that LETI can gradually reduce the proportion of generated code that causes SyntaxError by 56.5% (5179 → 652) and produce 63.2% more executable code (pass test + AssertionError). Most of the remaining errors (54.5% out of 71.8%) are due to the generated code being functionally incorrect as validated by the test suite (AssertionError), which can be hard to fix using the error message and stack traces alone [Jones et al., 2002], even for humans. Similarly, on LETI (2B, w/ post-processing), we observe that NameError, which can be fixed using the error message alone, is mostly eliminated (810 → 94) within two iterations, demonstrating the effectiveness of LETI. These results also expose the limitation of automated textual feedback from the Python interpreter, which can be mitigated by (1) increasing exploration in the hope of finding better code by sampling more solutions per problem (§B.1) [Li et al., 2022], (2) leveraging more powerful sources of feedback [Wang et al., 2023b], or (3) continuing to pre-train the base LM on more relevant solutions.

Table 1: Count of top-3 error types on the MBPP test set before and after LETI fine-tuning.

| LETI (2B) w/o post-processing | Pre-trained | Fine-tuned |
|-------------------------------|-------------|------------|
| # of AssertionError | 1189 | 4356 |
| # of SyntaxError | 5179 | 652 |
| # of IndentationError | 467 | 165 |
| # of Other Errors | 799 | 572 |
| # of Pass Test | 366 | 2255 |
| Pass@1 (%) | 4.50 | 28.00 |

| LETI (2B) w/ post-processing | Pre-trained | Fine-tuned |
|-------------------------------|-------------|------------|
| # of AssertionError | 3835 | 4376 |
| # of SyntaxError | 437 | 458 |
| # of NameError | 810 | 94 |
| # of Other Errors | 652 | 657 |
| # of Pass Test | 2266 | 2415 |
| Pass@1 (%) | 26.89 | 29.53 |

Table 2: HumanEval performance of LMs fine-tuned on MBPP using LETI. We observe consistent Pass@10 and Pass@100 improvement across different model sizes. Standard Accuracy (SA), Robust Accuracy against AutoAttack (RA-AA) and PGD-20 (RA-PGD20) are not applicable here; the top-ranked results are presented in **bold**, while the second-ranked results are underlined.
| HumanEval | Pass@1 | Pass@10 | Pass@100 | |----------------------------|--------|---------|----------| | Pre-trained (350M) | 12.56 | 23.11 | 35.19 | | LETI (350M) w/o textual feedback | 12.19 | 21.69 | 35.62 | | LETI (350M) | **13.19** | **23.36** | **36.95** | | Pre-trained (2B) | 23.70 | 36.64 | 57.01 | | LETI (2B) w/o textual feedback | 19.90 | 35.62 | 58.48 | | LETI (2B) | 21.60 | 37.03 | 58.28 | | LETI (2B, trained w/ post-processing) | 21.60 | **39.51** | **61.46** | ### 3.2.2 HumanEval **Setup.** We evaluate LM trained on MBPP on another code generation dataset HumanEval (Chen et al., 2021b), which contains 164 handwritten problems to assess language comprehension, reasoning, algorithms, and simple math capabilities. We use the same pass@k metric as described in §3.2.1 and apply post-processing for the generated solution. **Results.** Despite being trained on a problem set MBPP that contains the most basic Python problems, as shown in Tab. 2, LETI can improve LM’s capability in other code generation problems in the HumanEval dataset. Compared to pre-trained LM, we observe consistent Pass@10 and Pass@100 improvement across both 350M and 2B LMs, while the 2B LM has a degraded Pass@1 performance. We observe larger improvements for LETI (2B) trained with post-processing as it allows LETI to focus on improving common error (e.g., NameError) in evaluation that applies post-processing. ### 3.3 Learning from Textual Feedback is More Sample-Efficient To study the effect of learning from textual feedback, Fig. 2 compares LETI against a baseline that only uses binary feedback. Regardless of model sizes, LMs trained with textual feedback obtain better final performance and improve faster (up to 2.2x for 2B; Tab. 3). **LM’s ability to leverage textual feedback increases with scale.** A larger model is more effective in learning from textual feedback and can obtain a larger (average) improvement per iteration than a baseline that only uses binary feedback (Tab. 3). 2B model that uses textual feedback improves 2.24x faster than binary feedback, while 350M is only 1.57x faster. Similar to Kaplan et al. (2020), we also find that a larger LM (2B) optimized using LETI obtains larger improvements per iteration (approx. 8x more compared to 350M LM) for both training and testing problems when both are given textual feedback. In other words, a larger model requires fewer gradient updates to achieve similar performance in a smaller model. These observations suggest that we might see more significant gains by applying LETI on LMs of a larger scale (e.g., 6B, 16B), which we leave for future work. **LMS trained with textual feedback can use samples more efficiently.** As shown in Fig. 3 compared to a baseline that only uses binary feedback, LETI (2B) yields better accuracy and sample efficiency: 2.74x and 2.24x higher improvement rate for \(|\mathcal{P}| = 128\) and \(|\mathcal{P}| = 374\) (Tab. 4). Interestingly, we observe a different trend for the smaller LM (350M). When decreasing the number of training problems from 374 to 128, LETI actually underperforms the baseline that only uses binary feedback. We conjecture that this is because (1) a smaller LM may lack the capacity to learn from textural feedback, and (2) LMs can benefit from a larger \(|\mathcal{P}|\) by seeing a more diverse set of problems. 
Figure 3: LETI performance with different numbers of training problems \(|P| \in \{128, 374\}\). LETI (2B) with textual feedback can use samples more efficiently than a baseline that does not leverage textual feedback, always achieving higher performance and improvement rate (Tab. 4).

Table 3: On MBPP, LETI improves the LMs' code generation performance by up to 2.24x more per iteration when textual feedback is provided.

| Model Size | Textual Feedback | Initial Pass@1 | Max Pass@1 | #Iter to Max | Avg. improvement per iteration |
|------------|------------------|---------------|-----------|-------------|-------------------------------|
| 2B | ✓ | 4.50 | 28.00 | 6 | 3.92 (2.24x) |
| | × | 4.50 | 18.54 | 8 | 1.75 |
| 350M | ✓ | 7.40 | 13.96 | 14 | 0.47 (1.57x) |
| | × | 7.40 | 10.75 | 11 | 0.30 |

Table 4: LETI's average improvement per iteration for different numbers of training problems \(|P| \in \{128, 374\}\).

| Model Size | Textual Feedback | # Train Problems \(|P|\) | Avg. improvement per iteration |
|------------|------------------|--------------------------|-------------------------------|
| 2B | ✓ | 128 | 2.60 (2.74x) |
| | × | 374 (full dataset) | 0.95 |
| 350M | ✓ | 128 | 0.17 (0.63x) |
| | × | 374 (full dataset) | 0.27 |

### 3.4 LETI Retains Reasoning and Chain-of-Thought Performance

**Setup.** We evaluate the LETI-optimized LM (w/o post-processing) on additional reasoning tasks, including GSM8K (Grade School Math) (Cobbe et al., 2021), a mathematical reasoning dataset that includes grade school math problems, and Big-Bench-Hard (BBH) (Suzgun et al., 2022), which includes 26 challenging and diverse tasks (e.g., date understanding, sport understanding) testing the model's generic reasoning capability. For GSM8K, we evaluate in the PaL-style prompting (Gao et al., 2022) setting, which asks the LM to generate code and executes it to solve the given reasoning problem. Solutions for these reasoning tasks are generated without being conditioned on any reward token (e.g., \(<|\text{good}|>\)). We evaluate Big-Bench-Hard in two prompt settings: direct prompting, which asks the model to generate an answer directly, and chain-of-thought (CoT) prompting (Wei et al., 2022b), which elicits a series of intermediate reasoning steps from the LM before generating the answer. We calculate the performance gain \(\Delta_{\text{CoT-direct}}\) of chain-of-thought as the performance difference between CoT and direct prompting.

**Results.** As shown in Tab. 5, we observe no significant degradation in out-of-domain reasoning performance (i.e., GSM8K and BBH) after LETI fine-tuning. Moreover, as shown on BBH, applying LETI to a 2B LM improves its chain-of-thought capability compared to its pre-trained checkpoint (i.e., higher CoT and \(\Delta_{\text{CoT-direct}}\)). In a smaller 350M model, we observe some degradation in BBH's CoT performance despite also applying regularization via continued pre-training (§2.4).

**Removing regularization degrades performance outside MBPP.** We compare LMs (350M) trained with and without the continued pre-training regularization (§2.4). We observe no significant difference in in-domain task performance (MBPP), as shown in Fig. A.9. However, as shown in Tab. 5, removing regularization significantly degrades the LM's capability on PaL-prompted GSM8K; similar to the findings of Fu et al. (2023), it also degrades BBH's chain-of-thought performance.
Table 5: Performance on additional reasoning tasks, including the math reasoning benchmark GSM8K (Cobbe et al., 2021) and Big-Bench-Hard (BBH) (Suzgun et al., 2022). *250 out of 6,511 BBH\(_{\text{CoT}}\) prompts have more than 2048 tokens, which exceeds the CodeGen models' context window. Scores are set to 0 for these prompts.

| | GSM8K (PaL) | BBH (direct) | BBH (CoT*) | \(\Delta_{\text{CoT-direct}}\) |
|---------------------|-----------|--------------|------------|-------------------------------|
| Pre-trained (2B) | 40.03 | 29.67 | 36.81 | 7.14 |
| LETI (2B) | 38.97 | 29.41 | 37.46 | 8.05 |
| LETI (2B, w/ post-processing) | 42.99 | 29.81 | 36.72 | 6.91 |
| LETI (2B) w/o textual feedback | 41.93 | 29.23 | 36.71 | 7.48 |
| LETI (2B) w/o regularization | 32.15 | 30.06 | 35.82 | 5.76 |
| Pre-trained (350M) | 13.01 | 28.89 | 28.86 | -0.03 |
| LETI (350M) | 16.68 | 28.89 | 28.86 | -0.03 |
| LETI (350M) w/o textual feedback | 16.07 | 28.81 | 28.72 | -0.09 |
| LETI (350M) w/o regularization | 7.88 | 28.00 | 28.31 | 0.31 |

### 3.5 LETI is Applicable to NLP Tasks like Event Argument Extraction (EAE)

When an NLP task can be formulated as a code generation problem, LETI is equally applicable. We experiment with event argument extraction (EAE), cast as a code generation problem by Wang et al. (2023a). Given an event ontology (Fig. 4, upper left) and a natural language sentence (Fig. 4, bottom left), we ask the LM to generate code to instantiate an event class using the correct argument roles extracted from the sentence. Then we can check and examine the instantiated event object to validate the correctness of the solution (Fig. 4, right).

**Solution evaluator implementation.** We build a rule-based solution evaluator for the EAE task that checks the instantiated event object in Python (Fig. 4). Specifically, we first check whether the generation satisfies the argument constraints by providing a list of Entity objects for each event argument role (1, 2 in Fig. 4). Then we check whether all the predicted arguments match any of the ground truths (3, Fig. 4) and whether all the correctly identified arguments are classified into the correct event role (4, Fig. 4). Finally, we check if the prediction is complete by identifying all arguments in the ground-truth solution (5, Fig. 4). We say the solution is correct with $f_{\text{binary}} = 1$ when it meets all of the above criteria. Note that the design decision of the solution evaluator (e.g., which error to check first) can influence what type of error the LETI-optimized LM will prioritize to avoid.

Figure 4: Rule-based solution evaluator for Event Argument Extraction (EAE), formulated as a code generation task (Wang et al., 2023a). Content enclosed by \{\ldots\} in $f_{\text{text}}$ is automatically populated by a Python implementation of the evaluator for any given solution.

**Results.** LETI's performance on the EAE task is summarized in Fig. 5. In Fig. 5 (left), we find that LETI is capable of improving the train and test pass rate of generated solutions (i.e., a larger proportion of $f_{\text{binary}} = 1$ for both the training and testing set). We also observe increased test performance on task-specific metrics: Argument Identification (Arg-I) F1 increases by 12.3% (21.2% $\rightarrow$ 33.5%), and Argument Classification (Arg-C) F1 increases by 2.6% (8% $\rightarrow$ 10.6%) within three iterations.
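Below is a minimal sketch of the rule-based checks described above and illustrated in Fig. 4. The class and attribute names (`Entity`, `event.arguments`) and the exact wording of the feedback strings are illustrative assumptions; only the ordering of the five checks follows the description in the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    text: str

def evaluate_eae(event, gold_args: dict):
    """Return (f_binary, f_text) for an instantiated event object, following checks (1)-(5)."""
    pred_args = event.arguments  # assumed: dict mapping argument role -> list of Entity
    # (1, 2) Argument constraints: every role must hold a list of Entity objects.
    for role, entities in pred_args.items():
        if not isinstance(entities, list) or not all(isinstance(e, Entity) for e in entities):
            return 0, f"Argument role '{role}' must be provided as a list of Entity objects."
    gold_entities = {e for ents in gold_args.values() for e in ents}
    for role, entities in pred_args.items():
        for e in entities:
            # (3) Every predicted argument must match some ground-truth argument.
            if e not in gold_entities:
                return 0, f"'{e.text}' is not an argument of this event."
            # (4) Correctly identified arguments must be assigned the correct role.
            if e not in gold_args.get(role, []):
                return 0, f"'{e.text}' is not a '{role}' argument."
    # (5) The prediction must be complete: every ground-truth argument is identified.
    for role, entities in gold_args.items():
        for e in entities:
            if e not in pred_args.get(role, []):
                return 0, f"The '{role}' argument '{e.text}' is missing from the prediction."
    return 1, ""
```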
**Implementation of the solution verifier could influence the target metric of optimization.** Interestingly, we find that improving $f_{\text{binary}}$ using our solution evaluator results in better performance on some task-specific metrics (e.g., Arg-I and Arg-C precision) but not others (e.g., Arg-I and Arg-C F1). As shown in Fig. 5, Arg-I and Arg-C precision, among other task-specific metrics, have the highest Pearson correlations of 0.93 and 0.73 with test Pass@1, while Arg-I F1 and Arg-C F1 only moderately (0.51) or weakly (0.29) correlate with test Pass@1. One possible reason is that LETI forces the model to be correct on every argument it identifies, due to the evaluator implementation (Fig. 4, step 3). This could inhibit the model from generating arguments very close to the ground-truth solutions, reflected in the degrading recall (correlation with test Pass@1 of -0.08 and -0.24 for Arg-I and Arg-C recall) and improved precision in Fig. 5. This is similar to the reward-shaping problem in reinforcement learning. One can implement solution evaluators that better suit certain metrics.

**Figure 5:** Event Argument Extraction performance and its correlation with test Pass@1 when using LETI to optimize towards success rate. We find that the rule-based solution evaluator (Fig. 4) can be designed to be biased towards optimizing precision, as discussed in §3.5.

## 4 RELATED WORK

### Using feedback to improve code generation.

Leveraging non-textual feedback from an interpreter, prior work can generate solutions following natural language instructions by sampling and filtering large amounts of programs (Li et al., 2022; Chen et al., 2022), training a model to rank generated solutions (Inala et al., 2022), fine-tuning a Code-LM on generated solutions verified by test cases (Haluptzok et al., 2022), or training a reward model and using reinforcement learning (RL) to improve Code-LMs (Le et al., 2022). Recent work has explored textual feedback (e.g., error messages, human language feedback) to improve LMs for code-related problems. Chen et al. (2023a) improve code generation by fine-tuning the original LM on code refinements generated by conditioning on human language feedback; different from our work, their fine-tuned LM uses more expensive human feedback and is not trained directly on the provided textual feedback. Chen et al. (2023b); Madaan et al. (2023) improve code generation by allowing the LM to look at self-generated (and/or interpreter) feedback; however, the generator LM was frozen and could not generate better code on the original problem without these methods, while LETI improves the underlying LM directly.

### Improving LMs with reinforcement learning.

Using PPO, Stiennon et al. (2020); Ouyang et al. (2022) align LMs with human preferences. CodeRL (Le et al., 2022) follows REINFORCE (Williams, 1992) and policy gradient (Sutton et al., 1999) to improve Code-LMs with a scalar reward from the interpreter. Different from LETI, which directly leverages textual feedback, these algorithms require either manually crafting (Le et al., 2022) or training (Stiennon et al., 2020; Ouyang et al., 2022) reward/value functions, which could be less scalable for various tasks. Another strand of work leverages the Transformer architecture (Vaswani et al., 2017) to perform RL with sequence modeling (Janner et al., 2021; Chen et al., 2021a); Lu et al. (2022); Korbak et al. (2023); Zhang et al. (2023); Liu et al. (2023) improve LMs by performing conditional training, similar to conditioning the LM on binary feedback $f_{\text{binary}}$ in LETI.
LETI goes beyond the aforementioned work conditioning on the coarse-grained label: we are asking the LM to comprehend and improve directly based on textual feedback (e.g., error messages) that generally contains richer information compared to binary feedback. ## 5 CONCLUSION We proposed LETI, a new LM fine-tuning paradigm that explores LM’s potential to learn from textual interactions. We focused on code generation tasks and showed that one can effectively leverage automatic textual feedback from a Python interpreter to improve LMs. Textual feedback outperforms baselines that only use binary feedback in both generation quality and sample efficiency. Furthermore, LETI is equally applicable in NLP tasks that can be formulated as code generation, which we empirically verified on Event Argument Extraction. We refer to §A for a discussion of limitations and future work. REFERENCES Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program synthesis with large language models. *ArXiv*, abs/2108.07732, 2021. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R Bowman, Kyunghyun Cho, and Ethan Perez. Improving code generation by training with natural language feedback. *arXiv preprint arXiv:2303.16749*, 2023a. Bei Chen, Fengji Zhang, A. Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. Codet: Code generation with generated tests. *ArXiv*, abs/2207.10397, 2022. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in neural information processing systems*, 34:15084–15097, 2021a. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021b. Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. *ArXiv*, abs/2304.05128, 2023b. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. *ArXiv*, abs/2110.14168, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. pp. 4171–4186, 2019. Yao Fu, Hao-Chun Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language models towards multi-step reasoning. *ArXiv*, abs/2301.12726, 2023. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. *ArXiv*, abs/2211.10435, 2022. Patrick M. Haluptzok, Matthew Bowers, and Adam Tauman Kalai. 
Language models can teach themselves to program better. *ArXiv*, abs/2207.14502, 2022. Jeevana Priya Inala, Chenglong Wang, Mei Yang, Andres Codas, Mark Encarnación, Shuvendu Lahiri, Madanlal Musuvathi, and Jianfeng Gao. Fault-aware neural code rankers. *Advances in Neural Information Processing Systems*, 35:13419–13432, 2022. Michael Janner, Qiyang Li, and Sergey Levine. Reinforcement learning as one big sequence modeling problem. In *Neural Information Processing Systems*, 2021. James A Jones, Mary Jean Harrold, and John Stasko. Visualization of test information to assist fault localization. In *Proceedings of the 24th international conference on Software engineering*, pp. 467–477, 2002. Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. Scaling laws for neural language models. *ArXiv*, abs/2001.08361, 2020.
K804zYw6Wc
For GMM and LMM, NIR and RGB are concatenated along the channel dimension and fed into the same subsequent modules, and then split to obtain the estimated NIR weights and RGB weights, which are applied to the corresponding branches, respectively. It seems to me that the two estimated weights would be the same; how is selective fusion achieved? Or are manually set parameters involved? It would be better to provide some justification or motivation for these design choices.
NIR-Assisted Image Denoising: A Selective Fusion Approach and a Real-World Benchmark Dataset
Anonymous authors
Paper under double-blind review
Abstract
Despite the significant progress in image denoising, it is still challenging to restore fine-scale details while removing noise, especially in extremely low-light environments. Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows the potential to address this issue, becoming a promising technology. Nonetheless, existing works still struggle to take advantage of NIR information effectively for real-world image denoising, due to the content inconsistency between NIR-RGB images and the scarcity of real-world paired datasets. To alleviate the problem, we first propose an efficient Selective Fusion Module (SFM), which can be plugged into advanced denoising networks to merge the deep NIR-RGB features. Specifically, we sequentially perform global and local modulation for NIR and RGB features, and then integrate the two modulated features. Furthermore, we present a real-world NIR-Assisted Image Denoising (NAID) dataset, which covers diverse scenarios as well as various noise levels and is expected to serve as a benchmark for future research. Extensive experiments on both synthetic and our real-world datasets demonstrate that the proposed method achieves better results than state-of-the-art ones. The dataset, codes, and pre-trained models will be publicly available.
1 Introduction
In low-light conditions, it is common to use a short exposure time and high ISO in imaging to prevent motion blur, but this approach inevitably introduces noise due to the limited number of photons captured by the camera. With the development of deep learning (He et al., 2016; Liang et al., 2021; Vaswani et al., 2017), many image denoising methods (Zhang et al., 2017; 2018a; Abdelhamed et al., 2020; Zamir et al., 2022; Wang et al., 2022; Zhang et al., 2022; Li et al., 2023) have been proposed to remove the noise. Although great progress has been achieved, it is still challenging for these methods to recover fine-scale details faithfully due to the severely ill-posed nature of denoising. A practical solution is burst denoising (Mildenhall et al., 2018; Godard et al., 2018; Pearl et al., 2022; Wu et al., 2023), in which multiple successive frames are merged to improve performance. But it is susceptible to misalignment between frames and may be less effective for dynamic scenes. Fortunately, near-infrared (NIR) images with low noise can be captured at a cheap cost and utilized to enhance the denoising of visible RGB images, which has attracted increasing attention (Lv et al., 2020; Wu et al., 2020; Jin et al., 2022; Wan et al., 2022). Specifically, on the one hand, the NIR band lies outside the range of the human visible spectrum. This enables us to turn on an NIR light that is imperceptible to humans, thus capturing NIR images (Fredembach & Süsstrunk, 2008) with a low noise level. On the other hand, modern CMOS sensors are sensitive to part of the near-infrared wavelengths (Xiong et al., 2021), thus allowing NIR signals to be acquired cheaply and conveniently. Nevertheless, the inconsistencies between NIR and RGB content limit the positive effect of NIR images in denoising. Firstly, NIR images are captured under additional NIR light and are monochromatic, which leads to brightness and color discrepancies between the two modalities.
Secondly, the NIR images may capture more or less of the scene content than the visible-light ones, primarily due to inherent differences in the optical properties within each spectral domain (Fredembach & Süsstrunk, 2008). For example, as shown in Fig. 1 (a), the RGB image clearly contains textual information, while the corresponding NIR image lacks it. In Fig. 1 (b), the NIR image exhibits extra fruit patterns, while these patterns are absent in the RGB image. DVD (Jin et al., 2022) and SANet (Sheng et al., 2023) have noticed this problem, but their solutions are both complex and less effective. Additionally, due to the lack of real-world paired datasets, existing methods mainly focus on processing synthetic noisy images. NIR-assisted real-world noise removal is rarely explored. In this work, on the one hand, we focus on dealing with the content inconsistency problem, and hope to construct a simple yet effective RGB-NIR fusion module that can be easily integrated into existing denoising networks. Specifically, we propose a lightweight Selective Fusion Module (SFM), which consists of a Global Modulation Module (GMM), a Local Modulation Module (LMM), and a fusion operation. GMM and LMM mainly handle color and structure inconsistency issues, respectively. They predict and assign soft weights to NIR and RGB features, thus preparing for subsequent feature fusion. On the other hand, we introduce the NIR-Assisted Image Denoising (NAID) dataset for NIR-assisted real-world noise removal. It encompasses diverse scenarios and various noise levels, providing a valuable resource for evaluating and promoting research in this field. We conduct extensive experiments on both the synthetic DVD (Jin et al., 2022) and our real-world NAID datasets. The results show that the proposed method performs better than state-of-the-art ones. Our contributions can be summarized as follows: (1) For NIR-assisted image denoising, we propose a plug-and-play selective fusion module to handle content inconsistency issues between RGB-NIR images, which assigns appropriate fusion weights to the deep NIR and RGB features by global and local modulation modules. (2) We construct a paired NIR-assisted real-world image denoising dataset with diverse scenarios and various noise levels, which has the potential to promote further research in this field. (3) Extensive experiments on both synthetic and our real-world datasets demonstrate that our method achieves better results than state-of-the-art ones.
2 RELATED WORK
2.1 SINGLE IMAGE DENOISING
With the advancements in deep learning (Ronneberger et al., 2015; He et al., 2016; Vaswani et al., 2017), numerous single-image denoising methods (Zhang et al., 2017; 2018a; Abdelhamed et al., 2020; Chen et al., 2021; Zamir et al., 2022; Wang et al., 2022; Zhang et al., 2022; Li et al., 2023) have emerged. DnCNN (Zhang et al., 2017) pioneers the utilization of deep learning techniques and surpasses traditional patch-based methods (Buades et al., 2005; Dabov et al., 2007; Gu et al., 2014) on Gaussian noise removal. Recently, some methods (Zamir et al., 2021; Wang et al., 2022; Zamir et al., 2022; Chen et al., 2022) have been developed with advanced architectures. For example, MPRNet (Zamir et al., 2021) applies a multi-stage architecture for progressive image restoration and achieves remarkable performance. Uformer (Wang et al., 2022) introduces the locally-enhanced transformer by employing non-overlapping window-based self-attention.
Restormer (Zamir et al., 2022) further reduces the computation cost by modifying the self-attention calculation from the spatial dimension to the channel one. NAFNet (Chen et al., 2022) proposes a simple baseline that does not apply nonlinear activation. Despite the significant progress achieved by these methods, the performance is still unsatisfactory when handling images with high-level noise captured under low-light conditions.
2.2 NIR-assisted Image Restoration
Compared to single-image restoration, NIR images have the potential to assist in restoring details from degraded images. An earlier work (Krishnan & Fergus, 2009) utilizes gradient constraints for NIR-assisted image denoising. Wang et al. (Wang et al., 2019b) further improve the performance with deep learning methods. SSN (Wu et al., 2020) proposes a multi-task deep network with state synchronization modules. TC-GAN (Yang et al., 2021) fuses the NIR images and RGB ones based on a texture conditional generative adversarial network. DCMAN (Cheng et al., 2023) employs spatial-temporal-spectral priors to introduce NIR videos for low-light RGB video restoration. However, these methods have overlooked the color and structure inconsistency issues between the NIR images and RGB ones. CDDFuse (Zhao et al., 2023) addresses this issue by combining the local modeling ability of convolutional blocks and the non-local modeling ability of transformer ones to extract local and global features of NIR and RGB images, respectively. SANet (Sheng et al., 2023) proposes a guided denoising framework by estimating a clean structure map for the noisy RGB image. Wan et al. (Wan et al., 2022) disentangle the color and structure components from the NIR images and RGB ones. Besides, a few works (Deng & Dragotti, 2020; Xu et al., 2022b; Jin et al., 2022) incorporate different priors into the network design, like sparse coding (Deng & Dragotti, 2020), deep implicit prior (Xu et al., 2022b), and deep inconsistency prior (Jin et al., 2022). However, their complexity makes them difficult to integrate into existing advanced restoration networks, hindering their extensions and improvements.
2.3 Datasets for NIR-Assisted Image Restoration
Existing available NIR-RGB datasets suffer from limitations such as scarcity of data samples (Krishnan & Fergus, 2009), absence of paired real-world noisy RGB images (Brown & Süsstrunk, 2011; Zhi et al., 2018; Jin et al., 2022; Lv et al., 2020), or lack of public accessibility (Wang et al., 2019a; Lv et al., 2020). For instance, Krishnan & Fergus (2009) develop a prototype camera to capture image pairs under varying low-light conditions, but their dataset contains only 5 image pairs. Its size is too small to fulfill the demands of data-driven deep-learning algorithms. IVRG (Brown & Süsstrunk, 2011) and RGB-NIR Stereo (Zhi et al., 2018) construct datasets consisting of RGB and NIR image pairs for image recognition and stereo matching, respectively. DVD (Jin et al., 2022) captures images within a controlled light-box environment. However, these datasets comprise solely clean RGB and NIR image pairs, lacking real-world noisy RGB images. Burst Dataset (Wang et al., 2019a) captures real-world noisy images with a mobile imaging device that is sensitive to both near-infrared and near-ultraviolet signals. Lv et al. (Lv et al., 2020) introduce the VIS-NIR-MIX dataset, which utilizes a motorized rotator to manipulate illumination conditions. However, they are not publicly available.
The scarcity of real-world datasets has hampered future research. To address this limitation, we introduce the NIR-Assisted Image Denoising (NAID) benchmark dataset, which encompasses diverse scenarios and various noise levels.
3 Real-World NIR-Assisted Image Denoising Dataset
Existing publicly available NIR-assisted image denoising datasets generally lack real-world noisy RGB images paired with the clean RGB and NIR images, which limits the investigation of real-world NIR-assisted image denoising. To overcome this limitation, we build the NIR-Assisted Image Denoising (NAID) dataset. Specifically, we employ a high ISO and a short exposure time to capture the real-world noisy RGB images, as shown in Fig. 2 (a). The camera's ISO and exposure time are adjusted to capture images with different noise levels. For capturing the corresponding clean RGB images, we lower the ISO of the camera and appropriately increase the exposure time, as shown in Fig. 2 (b). To obtain paired NIR images, we activate the NIR light to ensure a sufficient supply of NIR illumination and then capture the NIR images with a dedicated NIR camera, as shown in Fig. 2 (c). All images are captured with the Huawei X2381-VG camera, which is equipped with a built-in NIR illuminator specifically designed for capturing NIR images. To ensure image registration among multiple captures, we securely position the camera and develop a remote control application to capture images of static objects. In total, the dataset comprises 100 scenes with diverse contents, and each scene has three noisy images with various noise levels. 90 scenes are randomly sampled as the training set and the remaining 10 are used as the testing set. In addition, we compare the NAID dataset with other existing NIR-RGB ones to demonstrate the strengths of our dataset, as shown in Table 1. Some image examples in the dataset are also provided in Fig. 3. More details about the dataset are provided in Sec. A of the appendix.
Figure 2: The construction of NIR-Assisted Image Denoising (NAID) dataset. (a) Capture noisy RGB images with high ISO and short exposure time. (b) Capture clean RGB images with low ISO and long exposure time. (c) Turn on the NIR light, then capture the clean NIR images with low ISO and short exposure time.
Figure 3: Image examples from our real-world NIR-assisted image denoising (NAID) dataset.
Table 1: Comparisons of some existing datasets consisting of paired NIR and RGB images. 'Public' refers to its current public accessibility. 'Dataset Size' denotes the number of paired images.

| Dataset | Real Noise | Public | Dataset Size | Image Resolution |
|--------------------------|------------|--------|--------------|------------------|
| RGB-NIR Video (Cheng et al., 2023) | | | 11444 | 1280 × 720 |
| RGB-NIR Stereo (Wang et al., 2019a) | ✓ | | 42000 | ~582 × 492 |
| IVRG (Brown & Süsstrunk, 2011) | ✓ | | 477 | ~1024 × 680 |
| DVD (Jin et al., 2022) | ✓ | | 307 | 1792 × 1008 |
| Burst Dataset (Wang et al., 2019a) | ✓ | | 121 | 512 × 512 |
| VIS-NIR-MIX (Lv et al., 2020) | ✓ | | 206 | ~3072 × 2048 |
| Dark Flash Photography (Zhi et al., 2018) | ✓ | ✓ | 5 | ~1400 × 1000 |
| NAID (Ours) | ✓ | ✓ | 300 | 2160 × 2048 |
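As an illustration of how the NAID triplets described above might be consumed during training, a minimal PyTorch dataset sketch is given below. The directory layout, file names, and noise-level tags are hypothetical (the paper does not specify its on-disk organization), and the sketch uses a fixed slicing split for brevity whereas the actual 90/10 split is random.

```python
import os

from torch.utils.data import Dataset
from torchvision.io import read_image


class NAIDDataset(Dataset):
    """Pairs a noisy RGB capture with its clean RGB and clean NIR references.

    Assumes a hypothetical layout:
        <root>/<scene_id>/noisy_<level>.png, clean_rgb.png, clean_nir.png
    with one clean RGB/NIR pair and three noise levels per scene.
    """

    def __init__(self, root, scene_ids, noise_level="high"):
        self.items = [
            (os.path.join(root, s, f"noisy_{noise_level}.png"),
             os.path.join(root, s, "clean_rgb.png"),
             os.path.join(root, s, "clean_nir.png"))
            for s in scene_ids
        ]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        noisy_p, clean_p, nir_p = self.items[idx]
        to_float = lambda p: read_image(p).float() / 255.0  # uint8 -> [0, 1] tensor
        return to_float(noisy_p), to_float(clean_p), to_float(nir_p)


# Illustrative split over the 100 scene folders (the paper samples the 90/10 split randomly).
all_scenes = sorted(os.listdir("NAID"))
train_set = NAIDDataset("NAID", all_scenes[:90], noise_level="high")
test_set = NAIDDataset("NAID", all_scenes[90:], noise_level="high")
```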
4 Method
4.1 Problem Formulation
NIR-assisted image denoising aims at restoring the clean RGB image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$ from its noisy RGB observation $\mathbf{I}_R \in \mathbb{R}^{H \times W \times 3}$ with the assistance of the NIR image $\mathbf{I}_N \in \mathbb{R}^{H \times W \times 1}$, where $H$ and $W$ denote the height and width of images, respectively.
Figure 4: Comparison of different image denoising methods with multi-scale architecture. (a) RGB image denoising. (b) NIR-assisted RGB image denoising baseline. (c) NIR-assisted RGB image denoising with our proposed Selective Fusion Module (SFM).
Figure 5: The structure of our proposed Selective Fusion Module (SFM), where Global Modulation Module (GMM) and Local Modulation Module (LMM) focus on color and structure discrepancy issues between the NIR images and RGB ones, respectively. Two $1 \times 1$ blocks and $5 \times 5$ blocks are used in GMM and LMM, respectively.
Compared to vanilla image denoising based on the multi-scale encoder-decoder architectures shown in Fig. 4 (a), it further leverages the corresponding NIR image to guide the noise removal, which is the core of NIR-assisted image denoising. Assuming that the clean NIR images are perfectly consistent with the noisy RGB ones in color and structure, we can simply adapt the existing denoising architectures in Fig. 4 (a) to Fig. 4 (b). The output $\hat{I}$ can be written as, $$\hat{I} = D(E_N(I_N) + E_R(I_R)), \quad (1)$$ where $D$ denotes the decoder of the denoising network, and $E_N$ and $E_R$ denote the feature encoders for NIR and RGB images, respectively. However, in practical scenarios, there are color and structure inconsistencies between the NIR images and the RGB ones, as illustrated in Fig. 1. Leveraging the NIR images in a naive way like Eqn. (1) only gains limited performance improvement. Instead, we propose a Selective Fusion Module (SFM) for combining NIR-RGB information to address the issue, as shown in Fig. 4 (c). Thus, Eqn. (1) can be modified to, $$\hat{I} = D(SFM(E_N(I_N), E_R(I_R))). \quad (2)$$ ### 4.2 Selective Fusion Module SFM should select valuable information and suppress harmful information from the current NIR-RGB features for feature fusion. To achieve that, we suggest that SFM predicts and assigns pixel-wise weights for NIR-RGB feature fusion. Denoting the current NIR and RGB features from the corresponding encoders by $F_N$ and $F_R$, SFM can be written as, $$SFM(F_N, F_R) = W_N \odot F_N + W_R \odot F_R, \quad (3)$$ where $\odot$ is pixel-wise multiplication, and $W_N$ and $W_R$ denote the weights of the NIR and RGB features, respectively. In order to model the color and structure discrepancies respectively, we decouple the weight $W$ (including $W_N$ and $W_R$) into global and local components, i.e., $W = W^g \odot W^l$, where the former concentrates on the differences in global information and the latter focuses on the discrepancy in local information between NIR-RGB features. Based on that, we further present a Global Modulation Module (GMM) to estimate $W^g$ and a Local Modulation Module (LMM) to estimate $W^l$, as shown in Fig. 5. **Global Modulation Module.** GMM should handle the global color and brightness differences between the NIR and RGB features. As shown in Fig. 5 (a), it takes the current NIR features $F_N$ and the RGB ones $F_R$ as inputs to estimate the NIR global modulation weights $W^g_N$ and the RGB ones $W^g_R$.
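A minimal PyTorch sketch of the whole SFM, i.e., the fusion of Eq. (3) with the decomposed weights $W = W^g \odot W^l$, is given below. It anticipates the GMM/LMM details described in the following paragraphs; GroupNorm stands in for the Layer Normalization used in the $1 \times 1$ blocks, the softmax across the two split branches is one plausible reading of the split-then-softmax step, and all module and parameter names are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class ModBlock(nn.Module):
    """1x1 block (GMM) or large-kernel depth-wise block (LMM): conv + norm + PReLU."""
    def __init__(self, ch, kernel=1):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, kernel, padding=kernel // 2,
                              groups=ch if kernel > 1 else 1)   # depth-wise when kernel > 1
        self.norm = nn.GroupNorm(1, ch)                          # stand-in for LayerNorm
        self.act = nn.PReLU(ch)

    def forward(self, x):
        return self.act(self.norm(self.conv(x)))


class ModulationModule(nn.Module):
    """Predicts a pair of weight maps (W_N, W_R) from the concatenated NIR/RGB features."""
    def __init__(self, ch, kernel=1):
        super().__init__()
        self.reduce = nn.Conv2d(2 * ch, ch, 1)
        self.blocks = nn.Sequential(ModBlock(ch, kernel), ModBlock(ch, kernel))
        self.expand = nn.Conv2d(ch, 2 * ch, 1)

    def forward(self, f_n, f_r):
        w = self.expand(self.blocks(self.reduce(torch.cat([f_n, f_r], dim=1))))
        w_n, w_r = w.chunk(2, dim=1)
        # Softmax over the two branches so the NIR and RGB weights compete at every pixel;
        # the two maps come from different halves of the prediction, so they are not identical.
        w = torch.softmax(torch.stack([w_n, w_r], dim=0), dim=0)
        return w[0], w[1]


class SFM(nn.Module):
    """Selective Fusion Module: global modulation (GMM), local modulation (LMM), then fusion."""
    def __init__(self, ch):
        super().__init__()
        self.gmm = ModulationModule(ch, kernel=1)   # color / brightness discrepancy
        self.lmm = ModulationModule(ch, kernel=5)   # structure discrepancy (5x5 depth-wise)

    def forward(self, f_n, f_r):
        wg_n, wg_r = self.gmm(f_n, f_r)
        fg_n, fg_r = wg_n * f_n, wg_r * f_r          # globally modulated features, Eq. (4)
        wl_n, wl_r = self.lmm(fg_n, fg_r)
        return wl_n * fg_n + wl_r * fg_r             # fused feature, Eq. (5)


# Example usage on dummy encoder features.
# fused = SFM(64)(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))
```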
In detail, $F_N$ and $F_R$ are concatenated along the channel dimension, followed by a $1 \times 1$ convolutional layer for channel reduction. Two $1 \times 1$ blocks are then deployed to get the deep fused feature maps, which are passed into another $1 \times 1$ convolutional layer, a channel split operation, and a softmax operation sequentially to get the estimated NIR weights $W^g_N$ and RGB ones $W^g_R$. Each $1 \times 1$ block is composed of a $1 \times 1$ convolutional layer, a Layer Normalization (Ba et al., 2016), and a PReLU (He et al., 2015) function, as shown in Fig. 5 (b). We modulate the NIR features $F_N$ and the RGB ones $F_R$ with $W^g_N$ and $W^g_R$, respectively, i.e.,
\[ F^g_N = W^g_N \odot F_N, \quad F^g_R = W^g_R \odot F_R, \] (4)
where $F^g_N$ and $F^g_R$ are the globally modulated NIR features and RGB ones, respectively. **Local Modulation Module.** The Local Modulation Module (LMM) should focus on the structure inconsistency between NIR images and RGB ones. We suggest increasing the receptive field to perceive more structure information from a range of neighboring pixels. In detail, LMM takes the globally modulated NIR features $F^g_N$ and the RGB ones $F^g_R$ as inputs to estimate the local NIR weights $W^l_N$ and the RGB ones $W^l_R$, as shown in Fig. 5 (a). Without a complex network design, LMM is built upon GMM by replacing the $1 \times 1$ convolutional layer in the $1 \times 1$ block with a large-kernel depth-wise convolutional layer (DWConv) (Howard et al., 2017) to capture more local information, as shown in Fig. 5 (c). Finally, $W^l_N$ and $W^l_R$ are employed to get the fused NIR and RGB feature $F_{NR}$ as,
\[ F_{NR} = W^l_N \odot F^g_N + W^l_R \odot F^g_R. \] (5)
$F_{NR}$ is then passed to the decoder to output the denoising result. **Discussion.** There are several advantages of our proposed SFM. First, the color and structure discrepancy issues are decoupled and addressed with GMM and LMM respectively, which achieves significant performance improvements while maintaining interpretability. Second, the compact and lightweight network design adds only a few parameters and little computation cost. Third, it is plug-and-play and can be simply integrated into existing advanced denoising networks. The related experiment results are presented in Sec. 5. ### 4.3 Loss Function A multi-scale loss function is adopted for updating the network parameters. In detail, we employ a $3 \times 3$ convolutional layer after the decoder at scale $s$ to generate the noise-free image $\hat{I}_s$. Therein, $\hat{I}_1$ represents the final output at full resolution. Subsequently, we calculate the multi-scale loss by the following formulation,
\[ L = \sum_{s=1}^{3} ||\hat{I}_s - I_{\downarrow 2^{s-1}}||_2, \] (6)
where $I_{\downarrow 2^{s-1}}$ denotes the ground truth after $\times 2^{s-1}$ down-sampling. ## 5 Experiments ### 5.1 Experimental Settings **Datasets.** Experiments are conducted on the synthetic and our real-world NAID datasets. The details of the real-world NAID dataset can be seen in Sec. 3. In addition, we use the DVD (Jin et al., 2022) dataset to generate synthetic noisy images. It comprises 307 pairs of clean RGB images (and corresponding RAW images) and NIR images. 267 pairs are used for training and 40 pairs are for testing. The way to simulate noisy data follows DVD (Jin et al., 2022). We first scale the mean value of the clean RAW images to obtain synthetic low-light clean RAW images.
Then we add Gaussian noise with variance $\sigma$ and Poisson noise with noise level $\sigma$ to the generated low-light images. Finally, the synthetic low-light noisy RAW images are converted to RGB ones for training the models. We conduct experiments with $\sigma = 4$ and $\sigma = 8$ (the larger the $\sigma$, the heavier the noise). **Implementation Details.** We build our NIR-assisted denoising models by incorporating the proposed SFM into a CNN-based advanced denoising network (i.e., NAFNet (Chen et al., 2022)) and two Transformer-based ones (i.e., Uformer (Wang et al., 2022) and Restormer (Zamir et al., 2022)), which are dubbed **NIR-NAFNet**, **NIR-Uformer**, and **NIR-Restormer**, respectively. All models are trained with the Adam (Kingma & Ba, 2014) optimizer with $\beta_1 = 0.9$ and $\beta_2 = 0.999$ for 120k iterations. The batch size is set to 32 and the patch size is set to $128 \times 128$. For synthetic image denoising, the cosine annealing strategy (Loshchilov & Hutter, 2017) is employed to steadily decrease the learning rate from $2 \times 10^{-4}$ to $1 \times 10^{-6}$. For real-world image denoising, the initial learning rate is set to $3 \times 10^{-4}$ and halved every 20k iterations. All experiments are conducted with PyTorch (Paszke et al., 2019) on an Nvidia GeForce RTX A6000 GPU. ### 5.2 Comparison with State-of-the-Art Methods Experiments are conducted by comparing our NIR-NAFNet, NIR-Uformer, and NIR-Restormer with 8 models, including 3 single-image denoising methods (i.e., NAFNet (Chen et al., 2022), Uformer (Wang et al., 2022), and Restormer (Zamir et al., 2022)) and 5 NIR-assisted denoising methods (i.e., FGDNet (Sheng et al., 2022), SANet (Sheng et al., 2023), CUNet (Deng & Dragotti, 2020), MNNet (Xu et al., 2022a), and DVN (Jin et al., 2022)). To quantitatively evaluate the performance, we calculate three metrics on the RGB channels, i.e., Peak Signal to Noise Ratio (PSNR) (Huynh-Thu & Ghanbari, 2008), Structural Similarity (SSIM) (Wang et al., 2004), and Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018b). We also evaluate the inference cost of different models. The #FLOPs when processing a $128 \times 128$ patch and the inference time when processing a $1792 \times 1008$ image are reported. **Results on synthetic DVD dataset.** The quantitative results on the synthetic DVD dataset are shown in Table 2. It can be observed that our method significantly improves performance over single-image denoising methods, thereby demonstrating the effectiveness of NIR images. In comparison with existing NIR-assisted denoising methods, our methods also outperform them by a large margin, as the proposed SFM overcomes the discrepancy issues between the NIR-RGB images while coupling successfully with the advanced denoising backbones. In particular, our NIR-NAFNet makes a better trade-off between performance and efficiency than other methods. Besides, the qualitative results in Fig. 6 show that our methods restore more realistic textures and fewer artifacts than others. **Results on real-world NAID dataset.** Real-world data has much more complex degradation than synthetic data. The quantitative results in Table 3 show that our methods still maintain high performance in the real world. Taking NIR-Restormer as an example, our proposed NIR-Restormer achieves 0.33dB, 0.54dB, and 0.94dB PSNR gains over Restormer (Zamir et al., 2022) in dealing with low-level, middle-level, and high-level noise, respectively.
The higher the level of noise, the greater the improvement achieved by our method, which further indicates the advantage of utilizing NIR information for low-light noise removal. The qualitative results in Fig. 7 demonstrate that our models still recover fine-scale details in the real world, while other NIR-assisted denoising methods may produce artifacts. More visual comparisons can be seen in Sec. D of the appendix. ### 6 Ablation Study We conduct ablation studies on our real-world NAID dataset, taking NIR-NAFNet as an example. They include the effect of GMM and LMM, the effect of the kernel size of DWConv in LMM, and the effect of the number of SFMs. The metrics are reported by averaging over the three noise levels. #### 6.1 Effect of GMM and LMM in SFM. As shown in Table 4, the incorporation of global and local feature modulation yields a 0.13dB PSNR improvement each, which can be attributed to their effective handling of the inconsistencies between NIR-RGB images in color and structure, respectively. With both global and local feature modulation, a 0.23dB PSNR gain is achieved.
Figure 6: Qualitative comparison on the synthetic DVD dataset. **Bold** marks our methods.
Figure 7: Qualitative comparison on our real-world NAID dataset. **Bold** marks our methods.
Table 2: Quantitative comparison on the synthetic DVD dataset. **Bold** marks our results.

| Category | Methods | σ = 4 (PSNR↑ / SSIM↑ / LPIPS↓) | σ = 8 (PSNR↑ / SSIM↑ / LPIPS↓) | #FLOPs (G) | Time (ms) |
|---|---|---|---|---|---|
| Single-Image Denoising | Uformer (CVPR’22) | 29.58 / 0.8967 / 0.271 | 27.36 / 0.8632 / 0.352 | 19.16 | 1748 |
| | Restormer (CVPR’22) | 29.67 / 0.9038 / 0.262 | 27.41 / 0.8741 / 0.343 | 70.59 | 2048 |
| | NAFNet (ECCV’22) | 29.49 / 0.8959 / 0.263 | 27.29 / 0.8638 / 0.336 | 8.10 | 312 |
| NIR-Assisted Denoising | FGDNet (TMM’22) | 23.91 / 0.8371 / 0.439 | 22.02 / 0.7374 / 0.436 | 38.67 | 479 |
| | SANet (CVPR’23) | 27.68 / 0.8648 / 0.343 | 25.28 / 0.8304 / 0.413 | 161.06 | 2763 |
| | CUNet (TPAMI’20) | 28.01 / 0.8558 / 0.332 | 26.07 / 0.8182 / 0.412 | 14.48 | 542 |
| | MNNet (IF’22) | 28.48 / 0.8994 / 0.274 | 26.33 / 0.8697 / 0.353 | 23.68 | 1360 |
| | DVN (AAAI’22) | 29.69 / 0.9062 / 0.236 | 27.43 / 0.8799 / 0.292 | 104.05 | 761 |
| | NIR-Uformer | 30.10 / 0.9188 / 0.192 | 28.03 / 0.9008 / 0.238 | 24.85 | 2500 |
| | NIR-Restormer | 30.22 / 0.9209 / 0.193 | 28.11 / 0.8701 / 0.260 | 89.17 | 2747 |
| | NIR-NAFNet | 30.08 / 0.9005 / 0.208 | 27.86 / 0.8664 / 0.273 | 13.17 | 462 |

Table 3: Quantitative comparison on our real-world NAID dataset. **Bold** marks our results.
| Category | Methods | Low-Level Noise (PSNR↑ / SSIM↑ / LPIPS↓) | Middle-Level Noise (PSNR↑ / SSIM↑ / LPIPS↓) | High-Level Noise (PSNR↑ / SSIM↑ / LPIPS↓) |
|---|---|---|---|---|
| Single-Image Denoising | Uformer (CVPR’22) | 25.56 / 0.7736 / 0.304 | 24.52 / 0.7418 / 0.347 | 23.31 / 0.7091 / 0.389 |
| | Restormer (CVPR’22) | 25.89 / 0.7842 / 0.294 | 24.98 / 0.7572 / 0.333 | 23.82 / 0.7297 / 0.387 |
| | NAFNet (ECCV’22) | 25.71 / 0.7780 / 0.294 | 24.76 / 0.7482 / 0.335 | 23.71 / 0.7186 / 0.378 |
| NIR-Assisted Denoising | FGDNet (TMM’22) | 24.25 / 0.7676 / 0.368 | 22.89 / 0.7367 / 0.430 | 21.86 / 0.7080 / 0.509 |
| | CUNet (TPAMI’20) | 24.05 / 0.7314 / 0.313 | 23.29 / 0.7031 / 0.380 | 22.41 / 0.6398 / 0.449 |
| | SANet (CVPR’23) | 24.93 / 0.7679 / 0.359 | 23.74 / 0.7335 / 0.416 | 22.69 / 0.7028 / 0.476 |
| | MNNet (IF’22) | 25.68 / 0.7797 / 0.313 | 24.64 / 0.7512 / 0.364 | 23.36 / 0.7194 / 0.419 |
| | DVN (AAAI’22) | 25.96 / 0.7853 / 0.298 | 24.93 / 0.7578 / 0.332 | 23.95 / 0.7360 / 0.382 |
| | NIR-Uformer | 25.91 / 0.7919 / 0.276 | 25.14 / 0.7714 / 0.299 | 24.28 / 0.7534 / 0.321 |
| | NIR-Restormer | 26.22 / 0.7963 / 0.265 | 25.51 / 0.7676 / 0.293 | 24.76 / 0.7626 / 0.315 |
| | NIR-NAFNet | 26.06 / 0.7905 / 0.274 | 25.26 / 0.7676 / 0.303 | 24.48 / 0.7503 / 0.321 |

Table 4: Quantitative comparison with different modulation modules in SFM.

| GMM | LMM | PSNR↑ / SSIM↑ / LPIPS↓ |
|-----|-----|------------------------|
| × | × | 25.03 / 0.7647 / 0.304 |
| ✓ | × | 25.16 / 0.7675 / 0.302 |
| × | ✓ | 25.16 / 0.7653 / 0.302 |
| ✓ | ✓ | 25.26 / 0.7695 / 0.299 |

Table 5: Quantitative comparison of different arrangements of GMM and LMM.

| Arrangement | PSNR↑ / SSIM↑ / LPIPS↓ |
|-------------|------------------------|
| GMM + GMM | 25.17 / 0.7652 / 0.302 |
| LMM + LMM | 25.18 / 0.7650 / 0.301 |
| LMM + GMM | 25.19 / 0.7669 / 0.301 |
| GMM + LMM | 25.26 / 0.7695 / 0.299 |

Table 6: Quantitative comparison of different numbers of SFM at each scale.

| #SFM | PSNR↑ / SSIM↑ / LPIPS↓ |
|------|------------------------|
| 1 | 25.26 / 0.7695 / 0.299 |
| 3 | 25.28 / 0.7699 / 0.299 |
| 5 | 25.29 / 0.7669 / 0.301 |

Table 7: Quantitative comparison of different kernel sizes of DWConv in LMM.

| Kernel Size | PSNR↑ / SSIM↑ / LPIPS↓ |
|-------------|------------------------|
| 3 × 3 | 25.22 / 0.7685 / 0.298 |
| 5 × 5 | 25.26 / 0.7695 / 0.299 |
| 7 × 7 | 25.28 / 0.7717 / 0.301 |

Additionally, GMM and LMM are both lightweight modules that do not increase the number of parameters or the inference time much. The parameters of GMM and LMM account for only 1.5% and 1.1% of those of NIR-NAFNet, respectively. Applying an SFM on NIR-NAFNet only results in a time increase of 1 ms. More results compared to the NIR-assisted image denoising baseline in Fig. 4 (b) with different denoising backbones can be seen in Sec. B of the appendix. To further demonstrate the effectiveness of decoupling the inconsistencies into color and structure components, we conduct experiments with different arrangements of GMM and LMM. The results are shown in Table 5. ‘GMM + GMM’ and ‘LMM + LMM’ mean that features are modulated with two GMMs or two LMMs, respectively, but both result in limited performance gains. This shows that our performance improvement is not due to a simple increase in the number of parameters. ‘LMM + GMM’ denotes modulating features first locally and then globally, which also leads to limited improvement. It may be because the significant difference in global content leads to inaccurate local feature modulation.
Therefore, we deploy a GMM to handle the color discrepancy first, followed by an LMM dealing with the structure discrepancy, dubbed ‘GMM + LMM’, which achieves better results. 6.2 Effect of kernel size of DWConv in LMM. To illustrate the effect of the size of the receptive field in LMM, we conduct experiments employing varying kernel sizes of DWConv, as shown in Table 7. Generally, a larger kernel size leads to greater performance improvement, which proves that a large receptive field helps the local modulation of features. However, the improvement is marginal when the kernel size is larger than $5 \times 5$. For the sake of simplicity and efficiency, we set the kernel size of DWConv to $5 \times 5$ by default. 6.3 Effect of number of SFM. Here we investigate the effect of incorporating different numbers of SFMs into the NAFNet (Chen et al., 2022) at each scale. The results are shown in Table 6. It can be observed that the performance generally increases marginally as the number of SFMs grows. Also for the sake of simplicity and efficiency, we only set the number of SFMs to 1 at each scale of the networks. 7 Conclusion Near-infrared (NIR) images can help restore fine-scale details while removing noise from noisy RGB images, especially in low-light environments. The content inconsistency between NIR-RGB images and the scarcity of real-world paired datasets limit their effective application in real scenarios. In this work, we propose a plug-and-play Selective Fusion Module (SFM) and a real-world paired NIR-Assisted Image Denoising (NAID) dataset to address these issues. Specifically, SFM sequentially performs global and local modulations on NIR-RGB features before their information fusion. The NAID dataset is collected with various noise levels under diverse scenes. Experiments on both synthetic and real-world datasets show our method achieves better results than state-of-the-art ones. REFERENCES Abdelrahman Abdelhamed, Mahmoud Afifi, Radu Timofte, and Michael S Brown. Ntire 2020 challenge on real image denoising: Dataset, methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 496–497, 2020. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. Matthew Brown and Sabine Süsstrunk. Multi-spectral sift for scene category recognition. In CVPR 2011, pp. 177–184. IEEE, 2011. Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 2, pp. 60–65. IEEE, 2005. Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In CVPR, 2021. Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In European Conference on Computer Vision, pp. 17–33. Springer, 2022. Yuxiao Cheng, Runzhao Yang, Zhihong Zhang, Jinli Suo, and Qionghai Dai. A mutually boosting dual sensor computational camera for high quality dark videography. Information Fusion, 93: 429–440, 2023. Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007. Xin Deng and Pier Luigi Dragotti. Deep convolutional neural network for multi-modal image restoration and fusion.
IEEE transactions on pattern analysis and machine intelligence, 43(10): 3333–3348, 2020. Clément Fredembach and Sabine Süsstrunk. Colouring the near infrared. In Proceedings of the IS&T/SID 16th Color Imaging Conference, pp. 176–182, 2008. Clément Godard, Kevin Matzen, and Matt Uyttendaele. Deep burst denoising. In Proceedings of the European conference on computer vision (ECCV), pp. 538–554, 2018. Shuhang Gu, Lei Zhang, Wangmeng Zuo, and Xiangchu Feng. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2862–2869, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. Quan Huynh-Thu and Mohammed Ghanbari. Scope of validity of psnr in image/video quality assessment. Electronics letters, 44(13):800–801, 2008. Shuangping Jin, Bingbing Yu, Minhao Jing, Yi Zhou, Jiajun Liang, and Renhe Ji. Darkvisionnet: Low-light imaging via rgb-nir fusion with deep inconsistency prior. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1104–1112, 2022. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
yqIJoALgdD
However, one of the appealing aspects of SNNs is their utilization of binary events for information processing. In essence, a spiking neuron is active only when it encounters spikes, enabling an event-driven regime and an energy-efficient system. However, the proposed neuron nodes deviate from this advantageous feature by transmitting real-valued signals: neurons are active all the time, and the communication cost is high.
Towards Zero Memory Footprint Spiking Neural Network Training
Anonymous authors
Paper under double-blind review
Abstract
Spiking Neural Networks (SNNs), as representative brain-inspired neural networks, emulate the intrinsic characteristics and functional principles of the biological brain. With their unique structure reflecting biological signal transmission through spikes, they have achieved significant success in processing temporal data. However, the training of SNNs demands a substantial memory footprint due to the added storage needs for spikes or events, resulting in intricate architectures and dynamic configurations. In this paper, to address memory constraints in SNN training, we introduce an innovative framework characterized by a remarkably low memory footprint. We (i) design a reversible spiking neuron that retains a high level of accuracy. Our design is able to achieve a $58.65 \times$ reduction in memory usage compared to the current spiking neuron. We (ii) propose a unique algorithm to streamline the backpropagation process of our reversible spiking neuron. This significantly trims the backward Floating Point Operations (FLOPs), thereby accelerating the training process in comparison to the current reversible layer backpropagation method. By using our algorithm, the training time can be curtailed by $23.8\%$ relative to existing reversible layer architectures.
1 Introduction
Spiking Neural Networks (SNNs) have gained significant recognition in the realm of bio-inspired neuromorphic computing. In contrast to traditional Deep Neural Networks (DNNs), SNNs possess a unique mechanism that processes information across multiple timesteps and impulse events, commonly referred to as spikes (Davies et al., 2018; Viale et al., 2021). This inherent ability for temporal processing enables SNNs to excel in tasks requiring real-time or sequential data interpretation. An illustrative example of this prowess is observed in robot navigation tasks utilizing Intel’s Loihi platform (Davies et al., 2018), underscoring SNNs’ proficiency in managing temporal data. Further, works such as (Kim & Panda, 2021) emphasize the advantages of SNNs over DNNs when handling sparse datasets, exemplified by data from dynamic vision sensors (DVS). These insights underscore the potential of SNNs across diverse applications where processing sequential or time-varying signals is crucial. Despite their numerous advantages, one major bottleneck in deploying SNNs is their memory consumption. For a DNN of depth $L$, the memory complexity is $O(L)$. However, an SNN of equivalent depth incorporates multiple timesteps $T$ in its computation, amplifying its memory complexity to $O(L \times T)$. To illustrate, while the memory demand during the training of a DNN like ResNet19 is a mere 0.6 GB, an SNN with the same architecture surges to 12.34 GB (~20×) with a timestep of 10.
Figure 1: Comparison of memory complexity between our RevSNN and other current SOTA Memory-Efficient SNN Training Techniques.
Such heightened memory requirements pose significant challenges for SNN integration into resource-limited environments, notably in IoT-Edge devices (Putra & Shafique, 2021). To tackle the memory consumption challenge of SNNs, various methods have been proposed. As shown in Fig. 1, the ND method by Huang et al. (2023), utilizing sparse training, reduces memory demands from \( O(L \times T) \) in the original SNN to \( p \times O(L \times T) \), where \( 0 \leq p \leq 1 \).
However, this approach necessitates specialized hardware support, such as the cuSPARSE library. Works such as Tandem (Wu et al., 2021a), OTTT (Xiao et al., 2022), and IDE (Xiao et al., 2021) further reduce memory requirements to \( O(L) \), with Tandem needing a pre-trained Artificial Neural Network. Leveraging checkpointing techniques, Skipper (Singh et al., 2022) achieves a complexity of \( O(\sqrt{L} \times T) \). Overall, these methods lack scalability due to limited memory reduction when SNN layers and timesteps increase. In this paper, we address the following question: **Does a training memory reduction method exist that remains scalable irrespective of increased layers and timesteps? If so, how can such a training method be designed?** To that end, we present a novel reversible spiking neuron that substantially lowers the memory footprint. Our contributions are summarized as follows:
- We recalculate all the intermediate states on the fly during the backward propagation process, rather than storing them. Notably, in comparison, our method realizes a memory complexity of \( O(1) \), as shown in Fig. 1. In other words, our training memory reduction method remains scalable as SNN layers and timesteps increase.
- To enhance efficiency, we design a reverse computation graph for the backpropagation process of our reversible spiking neuron, eliminating the need to rebuild the forward computation graph, which significantly reduces the training time compared with the original reversible layer backpropagation method.
- Empirical evaluation on multiple datasets shows that our method retains the same level of accuracy during training compared to state-of-the-art (SOTA) methods. Experimental results show that our approach markedly surpasses the SOTA Memory-Efficient SNN training with reductions of 8.01×, 9.51×, and 3.93× on the CIFAR10, CIFAR100, and DVS-CIFAR10 datasets, respectively. Incorporating our reversible spiking neurons into the OTTT method for the DVS128-Gesture dataset, we achieve a notable 1.34× reduction compared to the original OTTT, maintaining high accuracy levels. Moreover, our method reduces the FLOPs needed for backpropagation by 23% compared to the existing reversible layer backpropagation method, thus accelerating the training process.
We hope these advances will pave the way for more efficient and scalable SNN implementations, enabling the deployment of these biologically inspired networks across a wider range of applications and hardware platforms. ## 2 BACKGROUND AND RELATED WORKS ### 2.1 SPIKING NEURAL NETWORK The cardinal features that distinguish SNNs from conventional neural networks include: (i) Their inherent operation over multiple timesteps, emulating the temporal dynamics of information processing found in biological systems. This attribute broadens their ability to capture and interpret time-dependent patterns and sequences (Ghosh-Dastidar & Adeli, 2009). (ii) A unique mechanism of data handling through spikes. Unlike traditional networks that process continuous values, SNNs convey information via these discrete-time events. This spiking mechanism offers a more biologically faithful representation of neuronal signaling and emphasizes their potential to emulate the genuine communication patterns of neurons in the human brain (Tavanaei et al., 2019; Koravuna et al., 2023).
There are several spiking neural models in the literature: Leaky Integrate and Fire (LIF) (Dayan & Abbott, 2005), Hodgkin-Huxley (HH) (Hodgkin & Huxley, 1952), and Izhikevich (Izhikevich, 2003). The LIF model is the most commonly utilized, and our reversible spiking neuron in this paper is constructed based on this model. The LIF model’s core involves two primary phases: - **Integration**: Signals to the neuron accumulate over time. However, the LIF neuron, unlike a perfect integrator, has a leaky attribute, leading to the decay of the neuron’s accumulated voltage towards its resting state without new inputs. - **Firing:** When the integrated voltage surpasses a set threshold, the neuron releases a spike. Following the firing, the voltage resets, generally to a value beneath the threshold, and the procedure begins anew. The specific firing function often uses the Heaviside step function (Wu et al., 2021a;b) or its derivatives (Meng et al., 2022; Nicola & Clopath, 2017). In conclusion, SNNs have found practical applications across various fields. Specifically, they have made significant strides in areas including segmentation (Kim et al., 2022; Patel et al., 2021) and detection (Kim et al., 2020). In the biomedical domain, SNNs have been extensively explored for tasks such as MRI image segmentation (Ahmadi et al., 2021) and ECG classification (Yan et al., 2021). Their biologically inspired architecture and unique data processing capabilities position SNNs as a powerful tool, bridging the gap between computational neuroscience and real-world applications. As advancements continue, the scope and impact of SNNs are poised to grow even further. ### 2.2 Existing Memory-Efficient Techniques in SNN Training Training SNNs can be computationally intensive, often demanding significant memory resources. Given the intrinsic temporal characteristics of SNNs, training them involves processing information over several timesteps, which further amplifies the memory requirements (Bauer et al., 2023). This has spurred research into developing memory-efficient techniques tailored for SNN training. Just as with conventional networks, SNNs can also adopt some traditional memory-saving techniques, such as checkpointing and sparse training. The work by Singh et al. (2022) applied checkpointing to SNNs and, compared to the baseline SNN-BPTT, achieved a reduction in memory usage ranging from $3.3\times$ to $8.4\times$, with an average of $6.7\times$. Additionally, the study by Huang et al. (2023) utilized sparse training for SNNs, and the results revealed that the training cost of NDSNN is merely $40.89\%$ of the LTH training cost when implemented on ResNet-19. One of the primary reasons for the substantial memory consumption in SNNs is the need to retain computational graphs for multiple time steps during the backpropagation process. This has led to the development of techniques that focus on optimizing the backpropagation process in SNNs to conserve memory. An exemplar is (Xiao et al., 2022), which compressed the multi-time step backpropagation into a single time step, resulting in significant memory savings. When the timestep is set to six, this approach can reduce memory consumption by approximately 2 to 3 times. ### 3 Reversible Spiking Neuron #### 3.1 Training Memory Analysis During the training process of spiking neural networks, the activation values occupy the main memory storage space. The activation value memory analysis schematic diagram is shown in Fig. 2. 
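As background for the analysis that follows, the LIF dynamics summarized in Sec. 2.1 (leaky integration, threshold firing, reset) can be condensed into a single update step. A minimal sketch is given below; the constant values are illustrative placeholders and are not tied to any specific configuration in this paper.

```python
import torch


def lif_step(x_t, v_prev, tau=2.0, v_th=1.0, v_reset=0.0):
    """One leaky integrate-and-fire update: integrate the input with leakage,
    emit a spike when the membrane potential crosses the threshold, then reset."""
    v = v_prev + (x_t - v_prev) / tau          # leaky integration towards the input
    spike = (v >= v_th).to(v.dtype)            # Heaviside-style firing
    v = spike * v_reset + (1.0 - spike) * v    # reset only the neurons that fired
    return spike, v


# Unrolling over T timesteps is what makes standard SNN training memory-hungry:
# every intermediate spike and potential is normally kept for backpropagation through time.
x = torch.rand(10, 8)      # T = 10 timesteps, 8 neurons
v = torch.zeros(8)
spikes = []
for t in range(10):
    s, v = lif_step(x[t], v)
    spikes.append(s)
```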
In Fig. 2, we use the VGG-13 architecture (Simonyan & Zisserman, 2014) with ten timesteps as an example. The percentage values represent the memory footprint ratio of each part in the entire network. The left diagram is the original SNN, where the activation values of spikes account for $90.9\%$ of the memory usage and the output potentials of each neuron occupy $9.1\%$ of the memory. The right diagram is our designed reversible SNN, which only requires saving the output potentials of each neuron, without storing all intermediate values, thus significantly saving memory. The intermediate activation values are recovered during the backpropagation process through our inverse calculation equations. In this example, our method is able to save $90.9\%$ of the memory used for activation values. The exact amount of memory saved by our method is shown in Section 5.2.
**Figure 2:** Memory comparison between the activation values of the original SNN network and our reversible SNN network. □: Activation value bound to memory storage; ⋯⋯: Activation value free from memory storage; ●: Original spiking neuron; ○: Our reversible spiking neuron; ⏩: Output potential of the spiking neuron; $N_{ij}$: spiking neuron in layer $i$ at timestep $j$.
3.2 Reversible Spiking Neuron Forward Calculation
Our forward algorithm is in the upper section of Fig. 3. The various input states \( S = (X, V) \) of each neuron are evenly divided into two groups along the last dimension, namely \( S = [S_1, S_2] \).
1. Calculate the first part of the output, \( M^t_1 \) and \( Y^t_1 \):
\[ M^t_1 = V^{t-1}_1 + \frac{1}{\tau} \cdot (X^t_1 - V^{t-1}_1) \quad (1) \]
\[ Y^t_1 = H(M^t_1 - V_{th}) + \beta \cdot X^t_2 \quad (2) \]
\( M^t_1 \) is the membrane potential of the first half neuron at time \( t \). \( V^{t-1}_1 \) is the input potential of the first half neuron at time \( t - 1 \). \( \tau \) is the time constant. \( X^t_2 \) is the input to the second half neuron at time \( t \). \( V_{th} \) is the threshold voltage of the neurons. \( H() \) is the Heaviside step function. \( \beta \) is a scaling factor for the input. \( \beta \cdot X^t_2 \) helps \( Y^t_1 \) collect information about the second half of the input in the next step. \( M, V, X, Y \in \mathbb{R}^{\prod_{i=1}^{n} d_i}, V_{th} \in \mathbb{R} \). Then we calculate the first part of the output voltage:
\[ V^t_1 = (1 - Y^t_1) \odot M^t_1 + Y^t_1 \cdot V_{res} + \alpha \cdot V^{t-1}_1 \quad (3) \]
\( V^t_1 \) is the output potential of the first half neuron at time \( t \). \( V_{res} \) is the reset voltage of the neurons. \( \alpha \) is a scaling factor for the membrane potential.
2. Use the first part of the output \( Y^t_1 \) to calculate the second part, \( M^t_2 \) and \( Y^t_2 \):
\[ M^t_2 = V^{t-1}_2 + \frac{1}{\tau} \cdot (Y^t_1 - V^{t-1}_2) \quad (4) \]
\[ Y^t_2 = H(M^t_2 - V_{th}) + \beta \cdot X^t_1 \quad (5) \]
\( M^t_2 \) is the membrane potential of the second half neuron at time \( t \). \( Y^t_2 \) is the output of the second half neuron at time \( t \). We calculate the second part of the output voltage by:
\[ V^t_2 = (1 - Y^t_2) \odot M^t_2 + Y^t_2 \cdot V_{res} + \alpha \cdot V^{t-1}_2 \quad (6) \]
\( V^t_2 \) is the output potential of the second half neuron at time \( t \), \( V_{res} \in \mathbb{R} \).
3. For all the output states \( S_{output} = ([Y^t_1, Y^t_2], [V^t_1, V^t_2]) \), combine them along the last dimension.
3.3 Reversible Spiking Neuron Inverse Calculation
The purpose of the inverse calculation is to use the output results to obtain the unsaved input values, i.e., use \( Y \) and \( V_{output} \) to calculate \( X \) and \( V \). Our inverse algorithm is in the lower section of Fig. 3.
①: For all the output states \( S_{\text{output}} = (Y, V_{\text{output}}) \), divide them into two groups along the last dimension in the same way as in the first step of the forward calculation, namely \( S_{\text{output}} = [S_{\text{output}1}, S_{\text{output}2}] \).
②: Calculate \( V_2^{t-1} \) by combining Eqs. (4) and (6), and calculate \( X_1^t \) by combining Eqs. (4), (5), and (6); after simplification:
\[ V_2^{t-1} = \frac{V_2^t - (1 - Y_2) \cdot \frac{1}{\tau} \odot Y_1 - Y_2 \cdot V_{res}}{(1 - Y_2) \cdot (1 - \frac{1}{\tau}) + \alpha} \quad (7) \]
\[ X_1^t = \frac{Y_2^t - H(M_2^t - V_{th})}{\beta} \quad (8) \]
③: Calculate \( V_1^{t-1} \) by combining Eqs. (1) and (3), and calculate \( X_2^t \) by combining Eqs. (1), (2), and (3); after simplification:
\[ V_1^{t-1} = \frac{V_1^t - (1 - Y_1) \cdot \frac{1}{\tau} \odot X_1^t - Y_1 \cdot V_{res}}{(1 - Y_1) \cdot (1 - \frac{1}{\tau}) + \alpha} \quad (9) \]
\[ X_2^t = \frac{Y_1^t - H(M_1^t - V_{th})}{\beta} \quad (10) \]
④: For all the input states \( S = ([X_1, X_2], [V_1^{t-1}, V_2^{t-1}]) \), combine them along the last dimension.
### 4 Inverse Gradient Calculation
While our reversible architecture markedly reduces memory consumption, it does introduce computational overhead due to two main factors: (i) the need to recompute previously unstored activation values, and (ii) many existing reversible layers borrow the backpropagation technique from checkpointing (THUDM, 2023; Fan et al., 2020). This approach recalculates intermediate activations to reconstruct a forward computational graph for gradient derivation, adding computational overhead and increasing total computation time. This design is unnecessary in the reversible architecture. This scenario is prevalent across all existing architectures of reversible layers, including Reversible GNN (Li et al., 2021a), Reversible CNN (Gomez et al., 2017), and so on. To reduce the training time, we have designed a new algorithm called the inverse gradient calculation method, which can substantially decrease the number of FLOPs during the backpropagation process compared to the original reversible architecture. Our design is shown in Fig. 4.
Figure 4: Three different architectures for comparison.
The left diagram illustrates the original forward and backward processes. The middle diagram depicts the original calculation process for reversible layers. It contains four steps: 1. The input \( X \) passes through the forward function to compute the output \( Y \), without storing the input data to conserve memory. 2. For each layer \( n \): the output \( X^n \) of this layer passes through the inverse function to compute the input \( X^{n-1} \) of this layer. This process starts with the final output \( Y \). 3. For each layer \( n \): the input \( X^{n-1} \) passes through the forward function again to reconstruct the forward computational graph, which facilitates gradient computation. 4. For each layer \( n \): compute the gradient \( \frac{\partial X^n}{\partial X^{n-1}} \) based on the forward computational graph. The right diagram is our design with three steps:
1. The input \( X \) passes through the forward function to compute the output \( Y \), without storing the input data to conserve memory. 2. For each layer \( n \): the output \( X^n \) of this layer passes through the inverse function to compute the input \( X^{n-1} \) of this layer and construct an inverse computational graph. 3. For each layer \( n \): compute the gradient \( \frac{\partial X^n}{\partial X^{n-1}} \) based on the inverse computational graph. Below are the specific formulas for \( \frac{\partial X^n}{\partial X^{n-1}} \) based on the inverse computation graph; the derivation process is in the Appendix.
\[ \frac{\partial X^n}{\partial X_1^{n-1}} = \frac{\theta}{2 + (\pi \cdot \theta \cdot (M_1^t - V_{th}))^2} \cdot \frac{1}{\tau} \odot \left( 1 + \frac{\theta}{2 + (\pi \cdot \theta \cdot (M_2^t - V_{th}))^2} \cdot \frac{1}{\tau} \right) + \beta \tag{11} \]
\[ \frac{\partial X^n}{\partial X_2^{n-1}} = \frac{\theta}{2 + (\pi \cdot \theta \cdot (M_2^t - V_{th}))^2} + \beta \tag{12} \]
All the variables in Eq. (11) and Eq. (12) have the same meaning as the variables in Eq. (1) - Eq. (10), and \( \theta \) is an adjustable constant parameter. The ability to perform computational graph inverse computation in our algorithm rests on the fact that our forward function is symmetric with the inverse computation function. For the original reversible network:
\[ FLOPS_{\text{orig backward}} = FLOPS_{\text{inverse}} + FLOPS_{\text{forward}} + FLOPS_{\frac{\partial X^n}{\partial X^{n-1}}} \tag{13} \]
For our reversible network:
\[ FLOPS_{\text{our backward}} = FLOPS_{\text{inverse}} + FLOPS_{\text{part of } \frac{\partial X^n}{\partial X^{n-1}}} \tag{14} \]
Compared to the standard reversible network, our method reduces FLOPs by 23%. The FLOPs analysis is shown in the Appendix and the detailed time measurement is shown in Section 5.3.
5 EXPERIMENT
We first benchmarked our design against SOTA SNN training methods on multiple datasets, and then integrated our reversible spiking neuron into various architectures. Our primary aims are to highlight the memory efficiency of our method over the conventional spiking neuron and to demonstrate the speed benefits of our backpropagation design compared to the existing reversible backpropagation method. An ablation study was also conducted to assess the effects of different parameters and the influence of input group divisions on our model’s performance. Experiments ran on an RTX6000 GPU using PyTorch 1.13.1 and CUDA 11.4. We verified the consistency of the inverse and forward calculations using `torch.allclose(rtol=1e-06, atol=1e-10)`, achieving accurate results. Hyperparameters are detailed in the Appendix.
5.1 COMPARISON WITH THE SOTA METHODS
We compared our approach with the current SOTA methods in memory efficiency during the SNN training process across two standard image classification datasets, CIFAR10 and CIFAR100, as well as two neuromorphic datasets, DVS-CIFAR10 and DVS128-Gesture. The results are shown in Table 1.
Table 1: Comparison of our work with the SOTA methods in Memory Efficiency at the SNN training phase. For all the works: Batch size = 128. †: We conducted experiments using provided open-source code when available. *: If not, the results were generated with our own implementation.
| Dataset | Method | Architecture | Time-steps | Accuracy | Memory (GB) |
|---------------|-----------------|--------------|------------|----------|------------|
| CIFAR10 | OTTT (Xiao et al., 2022) | VGG(sWS) | 6 | 93.52% | 4 |
| | S2A-STSU (Tang et al., 2022) | ResNet-17 | 5 | 92.75% | 27.93 |
| | IDE-LIF (Xiao et al., 2021) | CIFARNet-F | 30 | 91.74% | 2.8 |
| | Hybrid (Rathi et al., 2020) | VGG-16 | 100 | 91.13% | 9.36 |
| | Tandem (Wu et al., 2021a) | CifarNet | 8 | 89.04% | 4.2 |
| | Skipper (Singh et al., 2022) | VGG-5 | 100 | 87.44% | 4.6 |
| | RevSNN (Ours) | ResNet-18 | 4 | 91.87% | 1.10† |
| | | | | | 8.01× (Avg.) |
| CIFAR100 | IDE-LIF† (Xiao et al., 2021) | CIFARNet-F | 30 | 71.36% | 2.95† |
| | OTTT (Xiao et al., 2022) | VGG(sWS) | 6 | 71.05% | 4.04 |
| | S2A-STSU (Tang et al., 2022) | VGG-16 | 4 | 66.66% | 31.05 |
| | Skipper (Singh et al., 2022) | VGG-5 | 100 | 66.48% | 4.6 |
| | RevSNN (Ours) | ResNet-18 | 4 | 71.13% | 1.12† |
| | | | | | 9.51× (Avg.) |
| DVS-CIFAR10 | STBP-tdBN (Zheng et al., 2021) | ResNet-19 | 10 | 67.8% | 11.5† |
| | Tandem (Wu et al., 2021a) | CifarNet | 8 | 65.59% | 6.79† |
| | Rollout (Kugele et al., 2020) | DenseNet | 10 | 66.8% | 15.3* |
| | BPTT (Fang et al., 2021) | 7-layer CNN | 20 | 74.8% | 27.95† |
| | RevSNN (Ours) | VGG-16 | 20 | 72.11% | 3.91† |
| | | | | | 3.93× (Avg.) |
| DVS128-Gesture| BPTT (Fang et al., 2021) | 8-layer CNN | 20 | 96.88% | 137.10† |
| | SLAYER (Shrestha & Orchard, 2018) | 8-layer CNN | 300 | 93.64% | 5.18† |
| | DECOLLE (Kaiser et al., 2020) | 3-layer CNN | 1800 | 95.54% | 5.03† |
| | OTTT (Xiao et al., 2022) | VGG(sWS) | 20 | 96.88% | 28.44† |
| | RevOTTT (Ours) | VGG(sWS) | 20 | 96.75% | 21.16 |
| | | | | | 1.34× |

We subsequently applied our reversible spiking neuron to the current SOTA techniques in terms of SNN accuracy and compared them with the original methods. The results are shown in Table 2.
Table 2: Comparison of our work with the SOTA methods in terms of SNN Accuracy. For all the works: Batch size = 128. †: We conducted experiments using provided open-source code when available. *: If not, the results were generated with our own implementation.
| Dataset | Method | Architecture | Time-steps | Accuracy | Memory(GB) | |---------------|-----------------|--------------|------------|----------|------------| | CIFAR10 | Dspike (Li et al., 2021b) | ResNet-18 | 6 | 94.25% | 5.78* | | | RevDspike(Ours) | ResNet-18 | 6 | 93.43% | 2.14† | | | DSR (Meng et al., 2022) | PreAct-ResNet-18 | 20 | 95.40% | 25.11† | | | RevDSR(Ours) | PreAct-ResNet-18 | 20 | 95.35% | 5.73† | | | Dspike (Li et al., 2021b) | ResNet-18 | 6 | 74.24% | 5.78* | | | RevDspike(Ours) | ResNet-18 | 6 | 73.28% | 2.14† | | | DSR (Meng et al., 2022) | PreAct-ResNet-18 | 20 | 78.50% | 25.11† | | | RevDSR(Ours) | PreAct-ResNet-18 | 20 | 78.21% | 5.73† | | Tiny-ImageNet | ND(Dense) (Huang et al., 2023) | VGG-16 | 5 | 39.45% | 3.99 | | | ND(90% Sparsity) (Huang et al., 2023) | VGG-16 | 5 | 39.12% | 3.78 | | | ND(99% sparsity) (Huang et al., 2023) | VGG-16 | 5 | 33.84% | 3.76 | | | RevND(Ours) | VGG-16 | 5 | 39.77% | 2.01† | | | ND(Dense) (Huang et al., 2023) | ResNet-19 | 5 | 50.32% | 5.29 | | | ND(90% Sparsity) (Huang et al., 2023) | ResNet-19 | 5 | 49.25% | 5.11 | | | ND(99% sparsity) (Huang et al., 2023) | ResNet-19 | 5 | 41.96% | 5.09 | | | RevND(Ours) | ResNet-19 | 5 | 50.63% | 2.47† | Compared to the SOTA Memory-Efficient SNN training, our approach (RevSNN) significantly achieves a 8.01× reduction on the CIFAR10 dataset; a 9.51× reduction on the CIFAR100 dataset; and a 3.93× reduction on the DVS-CIFAR10 dataset on average. To further evaluate the versatility of our reversible spiking neurons, we incorporated them into the OTTT method (RevOTTT) for the DVS128-Gesture dataset. The results are compelling: a 1.34× reduction compared to the original OTTT approach, all while preserving a high degree of accuracy. Against Accuracy-Driven SNN training, our spiking neuron integrated into SOTA methods (RevDespike, RevDSR, RevND) yielded substantial memory savings: 2.70× for Dspike and 4.38× for DSR on CIFAR datasets. On Tiny-ImageNet, using our neuron with ND method’s VGG-16 and ResNet-19 architectures resulted in 1.99× and 2.14× reductions, respectively, with accuracy surpassing the original Dense model. 5.2 MEMORY CONSUMPTION EVALUATION We explored the memory savings of our reversible spiking neuron by incorporating it into various architectures, including VGG (11, 13, 16, 19) and ResNet (19, 34, 50, 101), using the CIFAR-10 dataset with a batch size of 128. For VGG architectures, we analyzed memory usage over 1 to 20 timesteps, while for ResNet, it was over 1 to 10 timesteps. The findings are shown in Fig. 5. Notably, with the VGG-19 architecture at 20 timesteps, the memory usage for our reversible spiking neuron remains under 200MB, in stark contrast to the 9032MB required using conventional spiking neuron. For ResNet-101 at 10 timesteps, the comparison is 1382MB to 28993MB. As we scale model layers and timesteps, the memory efficiency of our reversible spiking neuron is even more evident. For instance, VGG-19 at 20 timesteps sees a $58.65 \times$ memory reduction. Detailed data are shown in the Appendix. ![Memory comparison between normal spiking neuron and our reversible spiking neuron.](image) These experimental results align with our theoretical analysis in Section 3.1, further validating that our design is able to significantly reduce memory usage. 
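For readers who wish to reproduce this kind of measurement, the peak-memory numbers above can be gathered with PyTorch's built-in CUDA memory statistics. The following is a minimal sketch rather than our exact benchmarking harness; `model`, `batch`, `targets`, and `loss_fn` are placeholders for the SNN under test, one (possibly multi-timestep) input batch, its labels, and the training loss.

```python
import torch

def peak_training_memory_mb(model, batch, targets, loss_fn, device="cuda"):
    """Peak GPU memory (MB) used by a single forward/backward training step."""
    model = model.to(device)
    batch, targets = batch.to(device), targets.to(device)

    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats(device)

    loss = loss_fn(model(batch), targets)  # forward pass: activations stored or recomputed
    loss.backward()                        # backward pass: where reversibility saves memory

    return torch.cuda.max_memory_allocated(device) / 1024 ** 2
```

Sweeping this measurement over timesteps and architectures, once with conventional spiking neurons and once with reversible ones, yields curves of the kind shown in Fig. 5.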
### 5.3 Training Time Evaluation To compare the efficiency of our backpropagation design with the traditional reversible method, we evaluated two backpropagation architectures for our reversible spiking neuron: one with the conventional method and another with our design. We used VGG architectures (VGG-11 to VGG-19) over timesteps from 1 to 10 and compared the training iteration times on CIFAR-10 datasets for three scenarios: original spiking neuron, reversible spiking neuron with conventional backpropagation, and reversible spiking neuron with our method. All tests were conducted on an RTX6000 GPU with a batch size of 64. ![Training time analysis. Solid lines: Backward process’s duration; Dashed lines: Forward process’s duration; Red lines: Training time for the original SNN; Green lines: Training time for the reversible SNN using the original reversible layer backpropagation method; Blue lines: Training time for the reversible SNN employing our proposed backpropagation architecture.](image) Fig. 6 presents our measurement of the training time when the number of timesteps is set to 4, 6, and 8. Forward computation times across the three methods are comparable. The original spiking neuron boasts the quickest backward time as it stores all intermediate values, avoiding recalculations. Among reversible spiking neurons, our design speeds up the backward process by $20\% - 30\%$ compared to the traditional reversible method. This advantage grows with larger networks; for instance, under VGG-19 at 8 timesteps, our method saves 23.8% of total training time. These findings match our theoretical predictions in Section 4. Further data is in the Appendix. 5.4 Ablation Study Effects of parameters $\alpha$ and $\beta$ in our equations In Eq. (2) and Eq. (3), we have two parameters: $\alpha$ and $\beta$. The optimal setting for the parameter $\beta$ is 1, as this maximizes the preservation of the original features of the data. We conduct experiments to assess the impact of the $\alpha$ parameter on the model’s performance. We vary the $\alpha$ parameter from 0.05 to 0.8, and then employ architectures VGG-19, VGG-16, VGG-13, and VGG-11 to evaluate the accuracy on the CIFAR100 dataset. The results are shown on the left of Fig. 7. We observe that varying $\alpha$ within the range of 0.05 to 0.8 impacts the final accuracy by approximately 1%. Generally, the model exhibits optimal performance when $\alpha$ is set between 0.1 to 0.2. Effects of number of groups for the various states In Section 3.2, we propose splitting input states into two groups along the last dimension. However, this poses problems if the tensor’s last dimension is odd. To solve this, we adapt the original algorithm to divide inputs based on the last dimension’s element count $n$. This sequential processing with Eq. (1) - (3) for each group enhances our algorithm’s flexibility. To assess the number of groups’ impact, we adjusted some fully connected layers in ResNet-19, ResNet-18, VGG-16, and VGG-13 networks from 128 to 144 activations for varied factor possibilities. We tested performance on CIFAR100 with groups ranging from 2 to 144, shown in Fig. 7. Results suggest More groups enhance accuracy, often surpassing the original spiking neuron due to improved data representation. ![Accuracy vs. Alpha](image1.png) ![Accuracy vs. Number of Groups](image2.png) Figure 7: **Left Figure**: Test VGG-19,VGG-16,VGG-13,VGG-11 models on CIFAR100 dataset by using different $\alpha$ settings. 
**Right Figure**: Change activations number from 128 to 144 for some fully connected layers inside ResNet-19, ResNet-18, VGG-16, VGG-13 and test model performance for different numbers of groups on CIFAR100. Rev.: Reversible spiking neuron. Ori.: Original spiking neuron. Mo.: Modified network (Change some fully connected layers). 6 Conclusion and Discussion This work addresses a fundamental bottleneck of current deep SNNs: their high GPU memory consumption. We have designed a novel reversible spiking neuron that is able to reduce memory complexity from $O(n^2)$ to $O(1)$. Specifically, our reversible spiking neuron allows our SNN network to achieve $8.01 \times$ greater memory efficiency than the current SOTA SNN memory-efficient work on the CIFAR10 dataset, and $9.51 \times$ greater on the CIFAR100 dataset on average. Furthermore, in order to tackle the prolonged training time issue caused by the need for recalculating intermediate values during backpropagation within our designed reversible spiking neuron, we have innovated a new backpropagation approach specifically suited for reversible architectures. This innovative method, when compared to the original reversible layer architecture, achieves a substantial reduction in overall training time by 23.8%. As a result, we are able to train over-parameterized networks that significantly outperform current models on standard benchmarks while consuming less memory. REFERENCES Mohsen Ahmadi, Abbas Sharifi, Shayan Hassantabar, Saman Enayati, et al. Qais-dsmn: tumor area segmentation of mri image with optimized quantum matched-filter technique and deep spiking neural network. *BioMed Research International*, 2021, 2021. Felix C Bauer, Gregor Lenz, Saeid Haghighatshoar, and Sadique Sheik. Exodus: Stable and efficient training of spiking neural networks. *Frontiers in Neuroscience*, 17:1110444, 2023. Mike Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, et al. Loihi: A neuromorphic manycore processor with on-chip learning. *Ieee Micro*, 38(1):82–99, 2018. Peter Dayan and Laurence F Abbott. *Theoretical neuroscience: computational and mathematical modeling of neural systems*. MIT press, 2005. DeepSpike. Tandem Learning: An approach to neural network optimization, 2021. URL https://github.com/deepspike/tandem_learning. GitHub repository. Haoqi Fan, Yanghao Li, Bo Xiong, Wan-Yen Lo, and Christoph Feichtenhofer. Pyslowfast. https://github.com/facebookresearch/slowfast, 2020. Wei Fang, Zhaofei Yu, Yanqi Chen, Timothée Masquelier, Tiejun Huang, and Yonghong Tian. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 2661–2671, 2021. Fangwei. Parametric Leaky Integrate and Fire Spiking Neuron, 2021. URL https://github.com/fangwei123456/Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron. GitHub repository. Samanwoy Ghosh-Dastidar and Hojjat Adeli. Third generation neural networks: Spiking neural networks. In *Advances in computational intelligence*, pp. 167–178. Springer, 2009. Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. *Advances in neural information processing systems*, 30, 2017. Alan L Hodgkin and Andrew F Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. 
*The Journal of physiology*, 117(4):500, 1952. Shaoyi Huang, Haowen Fang, Kaleel Mahmood, Bowen Lei, Nuo Xu, Bin Lei, Yue Sun, Dongkuan Xu, Wujie Wen, and Caiwen Ding. Neurogenesis dynamics-inspired spiking neural network training acceleration. *arXiv preprint arXiv:2304.12214*, 2023. Eugene M Izhikevich. Simple model of spiking neurons. *IEEE Transactions on neural networks*, 14(6):1569–1572, 2003. Jacques Kaiser, Hesham Mostafa, and Emre Neftci. Synaptic plasticity dynamics for deep continuous local learning (decolle). *Frontiers in Neuroscience*, 14:424, 2020. Seijoon Kim, Seongsik Park, Byunggook Na, and Sungroh Yoon. Spiking-yolo: spiking neural network for energy-efficient object detection. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 11270–11277, 2020. Youngeun Kim and Priyadarshini Panda. Optimizing deeper spiking neural networks for dynamic vision sensing. *Neural Networks*, 144:686–698, 2021. Youngeun Kim, Joshua Chough, and Priyadarshini Panda. Beyond classification: Directly training spiking neural networks for semantic segmentation. *Neuromorphic Computing and Engineering*, 2(4):044015, 2022.
fjwZHuQ3cm
In Table 1, what is meant by Detectors $d$ having different attack names as subscripts to them such as $d_{\text{FGSM}}$? Is this detector tuned to defend against FGSM? If so, why is there only one detector for $LIMANS_{10}$ and not one for the claimed strongest attack of $\text{LIMANS}_{4000}$? What is SA (standard accuracy?)?
LIMANS: Linear Model of the Adversarial Noise Space Anonymous authors Paper under double-blind review Abstract Recent works have revealed the vulnerability of deep neural network (DNN) classifiers to adversarial attacks. Among such attacks, it is common to distinguish specific attacks adapted to each example from universal ones referred to as example-agnostic. Even though specific adversarial attacks are efficient on their target DNN classifier, they struggle to transfer to others. Conversely, universal adversarial attacks suffer from lower attack success. To reconcile universality and efficiency, we propose a model of the adversarial noise space that allows us to frame specific adversarial perturbation as a linear combination of the universal adversarial directions. We bring in two stochastic gradient-based algorithms for learning these universal directions and the associated adversarial attacks. Empirical analyses conducted on the CIFAR-10 and ImageNet datasets show that LIMANS (i) enables crafting specific and robust adversarial attacks with high probability, (ii) provides a deeper understanding of DNN flaws, and (iii) shows significant ability in transferability. 1 Introduction With recent technological advances, deep neural networks (DNN) are widespread in numerous applications ranging from biomedical imaging to autonomous vehicles. However, DNNs are vulnerable to adversarial attacks (Szegedy et al., 2014). The latter are slight perturbations of clean examples well classified by the DNN, leading to misclassification. These perturbations may take the form of common corruptions (e.g., for images, it can be a change in lightning conditions, colorimetry, or rotations) or visually imperceptible learned adversarial noises. There essentially exist two ways of crafting adversarial noises. The first strategy consists in finding a paired adversarial noise with each example to attack (Croce et al., 2021; Qian et al., 2022). As such, it is deemed to be specific since each adversarial noise is specifically designed for a given example. The second strategy aims at finding a unique universal noise which, added to any example, is likely to fool the DNN (Moosavi-Dezfooli et al., 2017). Each strategy comes with its pros and cons. On the one hand, although specific attacks achieve great performances on the target DNN, the learned adversarial noises do not fool other DNNs on the same examples. They transfer poorly. On the other hand, universal attacks have shown great transferability at the expense of a weaker ability to fool the target DNN on which the universal adversarial noise is learned. To reconcile specificity and universality, we propose a way to model the space of adversarial noises. This space is supposed to be embedded in the span of the ensemble of directions, perpendicular to the decision boundaries (Li et al., 2020). Since the dimensionality of such spanned space depends on the classifier’s decision boundaries, it is likely to lie in a low dimensional manifold and so does the adversarial noise space. This leads to considering a linear model of the adversarial noise space. In addition, it has been shown in (Tramèr et al., 2017) that the decision boundaries of multiple DNN classifiers trained on the same dataset are close to each other. This leads us to think that a model of the adversarial noise space could be transferable. The present work proposes to bridge the gap between specific and universal attacks by linear modeling the Adversarial Noise Space (LIMANS). 
Intuitively, the dimension of this model should range between 1, for universal attacks, and the dimension of examples, in the case of specific attacks. The overall idea is sketched in Figure 1. For each example to attack, an adversarial noise is crafted as a linear combination of adversarial directions. While the adversarial directions are universal, the linear combination coefficients are specific to each example to perturb. Figure 1: High level overview of the proposed LIMANS adversarial attack and its optimization. It is highlighted the adversarial model $D$ is universal to every adversarial example, while a specific coding vector $v$ is tailored to each adversarial example. In short, the main contributions of the paper are: - LIMANS, a model of adversarial perturbation as a linear combination of universal adversarial directions that are visually inspectable, helps to better understand the DNN flaws. - The associated optimization problem, allows us to learn these universal adversarial directions and two relaxations leading to scalable stochastic gradient-based algorithms. - Empirical evidence illustrating that adversarial examples generated by LIMANS are more robust to existing adversarial example detectors than current state-of-the-art adversarial attacks. - Experiments demonstrating that the learned ensemble of adversarial directions is transferable across different classifiers achieving state-of-the-art transferability. The rest of the paper is organized as follows: the state-of-the-art research on specific attacks, universal attacks, and manifold of adversarial perturbations are presented in Section 2. The optimization problem of LIMANS and the proposed algorithmic solutions are detailed in the Section 3. Finally, Section 4 displays the adversarial noise model and provides experimental evaluations of the adversarial noise space in terms of robustness and transferability on both CIFAR10 and ImageNet. 2 Bridging the gap between specific and universal adversarial attacks Consider a classifier $f$, typically a DNN to be attacked. Threat models can be expressed as a function $g$ such that $g(f, x) = x'$, called adversarial example, aims to fool $f$ for a given input $x$. Adversarial examples are typically crafted by adding an adversarial perturbation to the input under attack. It is common to distinguish example-based perturbations also called specific perturbations from universal perturbations which are example-agnostic. Specific attacks. Specific attacks are designed to generate, given a classifier $f$ and for each input example, an associated adversarial perturbation (see Croce et al., 2021; Qian et al., 2022 and references therein for a detailed review). The most popular ones include one-shot gradient method such as the fast gradient sign method (FGSM) and more elaborated iterative procedures such as the projected gradient descent (PGD), DeepFool or the method by Carlini and Wagner (CW). Last but not least, AutoAttack (Croce & Hein, 2020), an ensemble of diverse parameter-free attacks, is becoming the state-of-the-art for evaluating the robustness of a neural network. The specificity of these attacks enables them to fool the targeted classifier with high probability. However, it has been found that they are not as effective in fooling other classifiers, namely, adversarial examples yielded by specific attacks are poorly transferable from one classifier to another. 
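To make the notion of a specific attack concrete, the snippet below sketches the one-shot FGSM baseline mentioned above; it is an illustrative sketch only, where `f` is the classifier under attack, `x` a batch of inputs assumed to lie in [0, 1], `y` the labels, and `delta_inf` the $\ell_\infty$ budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(f, x, y, delta_inf):
    """One-shot FGSM: x' = clip(x + delta_inf * sign(grad_x CE(f(x), y)))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(f(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = x + delta_inf * grad.sign()    # per-example perturbation: specific, not universal
    return x_adv.clamp(0.0, 1.0).detach()  # keep the adversarial example a valid input
```

Every perturbation produced this way depends on its own example and on the gradients of the targeted classifier, which is precisely why such attacks fool the target with high probability but transfer poorly.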
To enhance the transferability of specific attacks, (Xie et al., 2019) proposed adopting inputs transformation, while (Wang & He, 2021) suggested stabilizing the update directions of adversarial noise to escape poor local optima, leading to the methods VNI-FGSM and VMI-FGSM. Besides, the Neuron Attribution-based Attacks, NAA, boosted the transferability by conducting the feature-level attacks (Zhang et al., 2022). Finally, the attack generating reverse adversarial perturbation, RAP, attempted to find adversarial attacks located on a plateau of the loss function (Qin et al., 2022). **Universal attacks.** A specific attack requires, for each new example, to solve an optimization problem that fully hinges on \( f \). To overcome this issue, Universal Adversarial Perturbation (UAP), an example-agnostic perturbation, was introduced in (Moosavi-Dezfooli et al., 2017). It consists of a single perturbation which, added to any examples, leads to fooling \( f \) with high probability. Later on, different extensions of UAP were proposed: UAP-PGD (Shafahi et al., 2020) used a gradient-based algorithm termed to compute UAP, CD-UAP (Zhang et al., 2020) optimized a universal perturbation on a given subset of classes while the Class-Wise Universal Adversarial Perturbations (CW-UAP) (Benz et al., 2021) elaborated a universal perturbation per class. Universal attacks are fast but their applicability is still far-fetched because of poor performances compared to specific attacks (Chaubey et al., 2020). These adversarial perturbations are carefully crafted estimating the worst-case performance of a classifier. Besides, there is another type of universal perturbation the so-called common corruptions (i.e. Gaussian Noise, Impulse Noise...), which exists in real-world imaging systems (Hendrycks & Dietterich, 2019). They are average-case perturbations beyond the scope of this work. Even though common corruptions are closer to real-world harm, they need to be explicitly defined before any perturbation crafting which limits to modelling of the perturbation space. In trying to bridge both the norm-bounded adversarial perturbations world and the common corruptions world, researchers tried to learn the manifold in which adversarial perturbations are embedded. **Learning the manifold of adversarial perturbations.** Works in this direction aim at giving the reason why adversarial attacks exist and design defense methods to overcome them. Some focus on researching the space of the adversarial noise to an input example. It has been demonstrated firstly the space was a large continuous region (Tabacof & Valle, 2016). Then, (Tramèr et al., 2017) discovered that the decision boundaries of different classifiers are close and proposed to establish a transferable subspace of the space across different classifiers. However, the space is inferred based on the adversarial noise generated by the FGSM method which impacts the precision of the found space. The hypothesis of the transferability depending only on the dimensionality of this space limited its performance on CNN classifiers. On the other hand, some studies on the overall structure of the classifier clarify the rise of adversarial attacks. Research in (Fawzi et al., 2018) illustrated the decision boundaries of a classifier are in the vicinity of examples and flat in most the directions. More recently, (Li et al., 2020) claimed that adversarial noise is caused by gradient leakage and the adversarial directions are perpendicular to the classifier boundaries. 
Based on the above works, in this paper, we propose to learn universal adversarial directions, adapted to the dataset while independent from any single input example under attack, spanning the adversarial noise space. With this space, specific adversarial examples can then be retrieved.

### 3 Linear modeling of the adversarial noise space (LIMANS)

In this section, we start by introducing our framework for modeling the adversarial noise space, and we propose two algorithmic schemes to address the problem.

#### 3.1 Problem setting

Let \( f : \mathbb{R}^P \rightarrow \mathbb{R}^c \) be a DNN classifier which outputs \( f(x) \in \mathbb{R}^c \), the vector of scores for an example \( x \in X \subset \mathbb{R}^P \) to belong to a class \( y \in Y = \{1, \cdots, c\} \). The predicted class is given by \( \text{argmax}_k f_k(x) \). Given an example \( x \), an adversarial attack model seeks an adversarial perturbation \( \epsilon \) such that \( x' = x + \epsilon \), the adversarial example, is a valid example, i.e. \( x' \in X \), close to \( x \), and induces \( \text{argmax}_k f_k(x') \neq \text{argmax}_k f_k(x) \). As the perturbation must be indiscernible, one customarily enforces \( \| \epsilon \|_p \leq \delta_p \) for some \( \ell_p \)-norm (typically \( \ell_2 \) and \( \ell_\infty \)) and some small \( \delta_p \in \mathbb{R}_+ \). Specific adversarial attacks learn perturbations \( \epsilon = \epsilon(x) \), i.e. dedicated to a specific example \( x \), whereas universal attacks such as UAP seek a single perturbation \( \epsilon \) able to attack any test example.

The originality of this work is to express the adversarial noise paired with \( x \) as \( \epsilon(x) = Dv(x) \), where \( D \in \mathbb{R}^{P \times M} \) is a dictionary composed of \( M \) normalized adversarial noise atoms and \( v(x) \in \mathbb{R}^M \) is a coding vector (further on we simply write \( v \) for readability). While the dictionary \( D \) is shared across the examples, the coding vector \( v \) is specifically crafted for any given \( x \). By setting \( M = 1 \), the learned adversarial perturbation becomes universal, while setting \( M = P \) (the dimension of the examples) results in a specific adversarial perturbation. Given a trained DNN classifier \( f \), LIMANS consists of a training stage where the dictionary \( D \) is learned using a labeled set \( T = \{(x^{(i)}, y^{(i)})\}_{i=1}^N \) and an inference stage where, given \( D \) and any new example \( x^{(k)} \), the corresponding coding vector \( v^{(k)} \) is crafted to make \( Dv^{(k)} \) an adversarial perturbation of \( x^{(k)} \). Notice that as \( M \ll P \), the search space of the LIMANS attacks is a low-dimensional space (spanned by the atoms) of much lower dimension than the original space \( X \). We frame the learning procedure of LIMANS as maximizing the fooling rate under the constraints listed above.

**Problem 3.1 (LIMANS formulation).** Given the classifier \( f \) and the training set \( T = \{(x^{(i)}, y^{(i)})\}_{i=1}^N \), find \( D \in \mathbb{R}^{P \times M} \) and \( V \in \mathbb{R}^{M \times N} \) solution of
\[
\max_{D \in \mathbb{R}^{P \times M}, V \in \mathbb{R}^{M \times N}} \sum_{i=1}^N 1\{\text{argmax}_k f_k(x^{(i)'}) \neq \text{argmax}_k f_k(x^{(i)})\},
\]
s.t.
\[
\begin{cases} x^{(i)'} = x^{(i)} + Dv^{(i)} \in X & i = 1, \ldots, N, \\ \|Dv^{(i)}\|_p \leq \delta_p & i = 1, \ldots, N, \\ \|D_j\|_p = 1 & j = 1, \ldots, M, \end{cases}
\]
where \( 1_A \) denotes the indicator function of the set \( A \).
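To fix ideas, the parameterization of Problem 3.1 takes only a few lines of PyTorch; the sizes below (a CIFAR-10-like input dimension $P$, $M = 100$ atoms, $N = 8000$ training examples) are illustrative assumptions, not prescribed values.

```python
import torch

P, M, N = 3 * 32 * 32, 100, 8000           # input dim, number of atoms, training set size (illustrative)

D = torch.randn(P, M, requires_grad=True)   # universal dictionary: one adversarial direction per column
V = torch.randn(M, N, requires_grad=True)   # specific codes: one coding vector v^(i) per training example

def adversarial_noise(i):
    """epsilon(x^(i)) = D v^(i): a specific perturbation assembled from universal atoms."""
    return D @ V[:, i]
```

With $M = 1$ the model reduces to a single universal direction (rescaled per example), whereas $M = P$ is expressive enough to recover fully specific perturbations; tuning $M$ between these extremes is exactly the interpolation LIMANS exploits.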
### 3.2 Algorithmic schemes The indicator function being non-smooth and highly non-convex, instead of maximizing the fooling rate it is usual to consider minimizing a surrogate loss function \( L_\gamma(f(x'), f(x)) \), parameterized by \( \gamma \), more amenable to optimization. Given an original example \( x \) classified as \( \text{argmax}_k f_k(x) = y \), typical loss function of interest is \[ L_\gamma(f(x'), f(x)) = \max(-\gamma, f_y(x') - \max_{k \neq y} f_k(x')), \] Still, the problem is hard to solve because of the non-convexity of the constraint \( ||Dv(i)||_p \leq \delta_p \) or the account for the constraint \( x(i)' \in X \). One way to tackle this issue is to rely on the nonconvex proximal splitting framework [Sra 2012, Rakotomamonjy 2013] that happened to be less effective than expected in practice. Instead, a stochastic gradient approach turned out to be fruitful. To implement it, the optimization problem has to be relaxed to handle the constraints in a differentiable way in the objective function of Eq. (1). Hence, for computational tractability, we hereafter propose two relaxations of (3.1) Simple-LIMANS and Regularized-LIMANS along with their respective solver. **Regularized-LIMANS** The first relaxation is a regularized version expressed as follows. \[ \min_{D \in \mathbb{R}^{P \times M}, V \in \mathbb{R}^{M \times N}} \sum_{i=1}^N L_\gamma(f(x(i) + Dv(i)), f(x(i))) + \lambda h(\delta_{p,p})(D, v(i)) \quad \text{s.t. } D \in D \] with \( \lambda \in \mathbb{R}_+ \) a regularisation parameter, \( D = \{D | ||D_j||_p = 1, \forall j \in \{1, \ldots, M\}\} \) and \( h(\delta_{p,p}) \) representing a penalty function. We consider the \( \ell_p \)-norm, with \( p = 2 \) or \( p = \infty \), as penalty function leading to \( h(\delta_{2,2})(D, v) = \max(||Dv||_2 - \delta_2, 0) \) and \( h(\delta_{\infty,\infty})(D, v) = \sum_k \max(||(Dv)_k|| - \delta_\infty, 0) \). Here, we get rid of the constraints \( x(i)' \in X, \forall i \) and enforce small magnitude of \( ||Dv(i)||_p \) through the regularizer \( h(\delta_{p,p}) \). Empirically this promotes \( x(i) + Dv(i) \in X \) to be nearly close to \( X \). Algorithm 1 summarizes the optimization scheme of Regularized-LIMANS. The Regularized-LIMANS optimizes \( (D, V) \) in a stochastic fashion, and specifically, \( D \) is updated using a projected gradient descent that ensures that the constraints \( ||D_j||_p = 1, \forall j \) are satisfied. 
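As a rough sketch of the two ingredients specific to Regularized-LIMANS, the $\ell_\infty$ penalty $h$ and the projection of $D$ onto unit-norm atoms could be written as below (function names are ours; the $\ell_2$ variant replaces the element-wise hinge by $\max(\|Dv\|_2 - \delta_2, 0)$).

```python
import torch

def penalty_linf(D, v, delta_inf):
    """h_{(delta_inf, inf)}(D, v) = sum_k max(|(D v)_k| - delta_inf, 0)."""
    noise = D @ v
    return torch.clamp(noise.abs() - delta_inf, min=0.0).sum()

def project_unit_atoms(D, p=float("inf")):
    """Projection onto {D : ||D_j||_p = 1 for every atom j}, by rescaling each column."""
    norms = torch.linalg.vector_norm(D, ord=p, dim=0, keepdim=True)
    return D / norms.clamp_min(1e-12)
```

Both pieces appear in Algorithm 1 below: the penalty enters the per-sample loss, and the projection is applied to $D$ after each optimizer update.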
Algorithm 1 Regularized-LIMANS Require: Classifier \( f \); Learning rate \( \rho \); Training dataset \( T \); \( \ell_p \) budget \( \delta_p \); Optimizer Optim; Batch size \( B \); Regularization parameter \( \lambda \) 1: \( D = N(0, I_{M \times P}) \); \( V = N(0, I_{P \times M}) \) 2: for \( k = 0 \) to MAXEPOCH do 3: loss = 0 4: for \((x^{(i)}, y^{(i)}) \subset T\) do 5: \( x^{(i)}' = x^{(i)} + Dv^{(i)} \) 6: \( \hat{y}_{adv} = f(x^{(i)}'); \quad \hat{y} = f(x^{(i)}) \) 7: \( \text{loss}_s = L_0(\hat{y}_{adv}, \hat{y}) + \lambda h(\delta_p, p)(D, v^{(i)}) \) 8: loss = loss + loss_s 9: end for 10: if modulo(i) = B then 11: \( D \leftarrow \text{Optim}(\nabla_D \text{loss}) \) (Update) 12: \( V \leftarrow \text{Optim}(\nabla_V \text{loss}) \) (Update) 13: \( D = \text{Proj}_{\{D | \|D\|_p = 1\}}(D) \) 14: loss = 0 15: end if 16: end for 17: \( Dv^{(i)} \leftarrow \text{Proj}_{\{Dv | \|Dv\|_p \leq \delta\}}(Dv^{(i)}) \) 18: \( x^{(i)}' \leftarrow \text{Proj}_X(x^{(i)} + Dv^{(i)}) \) 19: return \(\{x^{(i)}'\}_{i=1}^N, (D, V)\) Algorithm 2 Simple-LIMANS Require: Classifier \( f \); Learning rate \( \rho \); Training dataset \( T \); \( \ell_p \) budget \( \delta_p \); Optimizer Optim; Batch size \( B \) 1: \( D = N(0, I_{M \times P}) \); \( V = N(0, I_{P \times M}) \) 2: for \( k = 0 \) to MAXEPOCH do 3: loss = 0 4: for \((x^{(i)}, y^{(i)}) \subset T\) do 5: noise\( ^{(i)} = Dv^{(i)} \) 6: \( x^{(i)}' = \text{proj}_X(x^{(i)} + \frac{\delta_p \text{noise}^{(i)}}{\|\text{noise}^{(i)}\|_p}) \) 7: \( \hat{y}_{adv} = f(x^{(i)}'); \quad \hat{y} = f(x^{(i)}) \) 8: \( \text{loss}_s = L_\infty(\hat{y}_{adv}, \hat{y}) \) 9: loss = loss + loss_s 10: if modulo(i) = B then 11: \( D \leftarrow \text{Optim}(\nabla_D \text{loss}) \) (Update) 12: \( V \leftarrow \text{Optim}(\nabla_V \text{loss}) \) (Update) 13: loss = 0 14: end if 15: end for 16: end for 17: \( V \leftarrow [\|D_{\bullet j}\|_p V_{j \bullet}] \forall j \in \{1, \ldots, M\} \) 18: \( D \leftarrow \text{Proj}_D(D) \) 19: return \(\{x^{(i)}'\}_{i=1}^N, (D, V)\) A grid search on \( \lambda \) allows to control the generalization of the model. In practice, for the selected value of \( \lambda \), it happens that the constraint on \( \|Dv\|_p \) is slightly violated. To ensure the respect of the constraint, if needed a post-processing is performed. The stochastic optimization makes Regularized-LIMANS applicable to large-scale datasets such as ImageNet. Details on the tuning of the hyper-parameters involved in this optimization scheme are provided in the supplementary material. Simple-LIMANS The Regularized-LIMANS requires the tuning of the hyper-parameter \( \lambda \) which may be cumbersome. To alleviate that, we propose a second relaxation of LIMANS that involves an objective function encompassing two of the constraints, the last one being taken care of by post-processing. Specifically, the method termed Simple-LIMANS solves the following problem: \[ \min_{D \in \mathbb{R}^{P \times M}, V \in \mathbb{R}^{M \times N}} \sum_{i=1}^N L_\gamma(f(\text{proj}_X(x^{(i)} + \frac{\delta_p Dv^{(i)}}{\|Dv^{(i)}\|_p}), f(x^{(i)})) \] where \( \text{proj}_X \) denotes the projection operator that maps its input \( x^{(i)}' = x^{(i)} + Dv^{(i)} \) onto \( X \). Simple-LIMANS trades off the constraint \( D \in \mathcal{D} \), i.e. the unit norm constraint over the atoms of \( D \), for the explicit guarantee that the adversarial example \( x^{(i)}' \) is valid (i.e. 
belongs to \( X \)) and that the adversarial noise is utmost of magnitude \( \delta_p \) by defining \( x^{(i)}' \) as: \( x^{(i)}' = \text{proj}_X(x^{(i)} + \frac{\delta_p Dv^{(i)}}{\|Dv^{(i)}\|_p}) \). Here \( \text{proj}_X \) is the projection operator onto \( X \). Simple-LIMANS solves (4) by iteratively updating \( D \) and \( V \) using a gradient descent procedure as shown in Algorithm 2. It proves computationally efficient as it does not require hyper-parameter tuning. At termination, a post-processing is used to ensure \( D \in \mathcal{D} \) without changing the adversarial examples. Attack of unseen examples At inference time, given \( D \), and an unseen example \( x^{(k)} \), we seek an adversarial counterpart \( x'^{(k)} = x + Dv^{(k)} \) where \( v^{(k)} \) is computed either with 1 or Algorithm 2 restricted to the optimization of \( v^{(k)} \). 4 EXPERIMENTS This section presents the experimental evaluations of the adversarial noise space and adversarial perturbations generated with it, providing a comparison with the state-of-the-art attacks on benchmark datasets. They consist of two parts. Firstly, we empirically demonstrate the existence of the adversarial noise space and robustness of the generated adversarial noises by adopting the Simple-LIMANS\cite{2}. Secondly, we estimate the transferability of the adversarial noise space across different classifiers with the more precise algorithm Regularized-LIMANS\cite{1}. 4.1 EXPERIMENTAL SETTINGS Our experiments are conducted on two datasets: CIFAR-10 (Krizhevsky et al., 2009) and ImageNet ILSVRC2012 (Krizhevsky et al., 2017). As suggested in (Zhang et al., 2021), we perform the experiments only on the validation set and split it into three parts, the first set for training the model $D$, the second for the tuning of $\lambda$ when using Regularized-LIMANS and the last one for testing. CIFAR-10 Experiments. The number of examples for training, validation and test is respectively 8000, 1000, and 1000. The experiments on validation of the proposed model and the robustness estimation are conducted on the pre-trained VGG11 with batch normalization and the robust ResNet-18 (Sehwag et al., 2022) classifier. The transferability of the proposed model has been evaluated over 4 vanilla DNNs, i.e., MobileNet-V2, ResNet50, DenseNet121, and the VGG11 as aforementioned, and 2 robust DNNs, robust ResNet-18\cite{4} and robust WideResNet-34-10\cite{1} (Sehwag et al., 2022). These experiments have been implemented in Pytorch on a MacBook Pro with 2.3 GHz Intel Core i9, 8 cores and a GPU Nvidia RTX 2080. ImageNet Experiments. The number of examples for training, validation and test is respectively 10000, 2000, 5000. We select here the 4 vanilla classifiers, ResNet-18, MobileNet-V2, DenseNet121 and VGG11 and two robust classifiers available on RobustBench\cite{6}, robust ResNet-18 and robust WideResNet-50-2 (Salman et al., 2020). The experiments on large scale dataset was performed on a server equipped with 4 GPU Volta V100-SXM2-32GB. State-of-art methods. For a fair comparison on the performance of attacks and their robustness, we consider the $\ell_\infty$ specific attack baselines, AutoAttack, PGD FGSM, and the universal attack baselines UAP-PGD, Fast-UAP and CW-UAP. For comparison on the transferability, it involves the state-of-the-art attacks, VNI-FGSM, NAA and RAP, and the classical one, AutoAttack. 
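To complement the pseudo-code of Algorithms 1 and 2 above, one Simple-LIMANS update can be written out for a batch. The sketch below is our condensed reading of Algorithm 2, assuming image inputs in $[0, 1]$ so that $\text{proj}_X$ reduces to clamping; `f` is the classifier, `x` a batch of shape (batch, channels, height, width), `y` the classifier's original predictions, `D` and `v` the trainable dictionary and batch of coding vectors, and `opt` a torch optimizer over $(D, v)$.

```python
import torch
import torch.nn.functional as F

def margin_loss(logits_adv, y, gamma=0.0):
    """L_gamma = max(-gamma, f_y(x') - max_{k != y} f_k(x')), averaged over the batch."""
    true_score = logits_adv.gather(1, y[:, None]).squeeze(1)
    mask = F.one_hot(y, logits_adv.size(1)).bool()
    best_other = logits_adv.masked_fill(mask, float("-inf")).amax(dim=1)
    return torch.clamp(true_score - best_other, min=-gamma).mean()

def simple_limans_step(f, x, y, D, v, delta_p, p, opt):
    """One stochastic update of (D, v) for a batch x, in the spirit of Algorithm 2."""
    noise = (v @ D.T).view_as(x)                                # per-example noise D v^(i)
    norms = noise.flatten(1).norm(p=p, dim=1).clamp_min(1e-12)  # ||D v^(i)||_p
    x_adv = (x + delta_p * noise / norms.view(-1, 1, 1, 1)).clamp(0.0, 1.0)  # proj_X for [0,1] images
    loss = margin_loss(f(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```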
The above specific attacks are implemented by resorting to the TorchAttacks library (Kim, 2020), which contains PyTorch implementations of the most popular specific attacks, while the universal attacks are implemented based on publicly available resources.

Metric. The Attack Success Rate, also known as the Fooling Rate (FR),
$$\text{FR} = \frac{1}{N} \sum_{i=1}^{N} 1\{\text{argmax}_k f_k(x^{(i)'}) \neq \text{argmax}_k f_k(x^{(i)})\},$$
is used to assess the performance of our adversarial attacks. The robustness of the attack, on the other hand, is evaluated with the Robust Accuracy Under Defense (RAUD) metric, originally proposed and termed Attack Success Rate under Defense (ASRD) by Lorenz et al. (2022), as an unbiased metric of adversarial attacks measuring the percentage of successful attacks against a classifier protected by an adversarial example detector. More details about this metric and the considered detector choices are given in the supplementary material. Details of the parameter settings and the hyper-parameter selection are given in the supplementary material (Appendix C). We also provide the results of our proposed $\ell_2$-attacks in the supplementary material (Appendix D).

4.2 EXPERIMENTAL RESULTS

Learned adversarial noise space. Figure 2 tracks the fooling rate of LIMANS for different numbers of adversarial atoms under the $\ell_2$ norm constraint (figures for the $\ell_\infty$-attack are given in the supplementary material). It shows that the LIMANS attack is always stronger than the universal baselines since, even with only one atom, the attack can tune its coefficient, making it more efficient. We see that from $M = 500$ the LIMANS attack closes the gap with state-of-the-art specific adversarial attacks. This result is interesting as it empirically shows that, by tuning the number of atoms $M$, the proposed LIMANS does bridge the gap between specific and universal adversarial attacks.

1 https://robustbench.github.io/

Figure 2: Test fooling rate of adversarial attacks under the $\ell_2$ norm constraint ($\delta_2 = 0.5$) on CIFAR-10 test data when fixing a number of atoms $M$ (x axis), associated to the classifier (left) VGG11 and (right) robust ResNet-18.

![Graphs showing test fooling rates](image)

(a) LIMANS-$\ell_2$ on Standard classifier (b) LIMANS-$\ell_2$ on Robust classifier (c) LIMANS-$\ell_\infty$ on Standard classifier (d) LIMANS-$\ell_\infty$ on Robust classifier

Figure 3: Visualization of the learned universal adversarial directions (atoms of the dictionary $D$) when $M = 5$, on CIFAR-10 and corresponding to the classifier (left) VGG11 and (right) robust ResNet-18. All atoms have been rescaled for display.

Besides, at inference, by setting $M = 500$ the LIMANS attack only optimizes a coding vector $v$ of dimension 500, whereas specific adversarial attacks here need to optimize 3072 variables. Furthermore, this result confirms the manifold hypothesis of the adversarial noise space and confirms the efficiency of the quick, parameter-free algorithm Simple-LIMANS. By setting the model to be linear, LIMANS makes it possible to visually inspect the adversarial information. Figure 3 displays the optimized model's atoms when $M = 5$. It shows how the atoms constituting the universal basis fooling the classifiers are structured. This structure differs according to the classifier and the considered $\ell_p$ norm. In particular, the dictionary of LIMANS-$\ell_2$ on the robust classifier is reminiscent of certain Fourier decompositions.
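For reference, the fooling rate used throughout this section is straightforward to compute; a minimal sketch, with `f` a classifier returning logits and `x`, `x_adv` batches of clean and adversarial examples:

```python
import torch

@torch.no_grad()
def fooling_rate(f, x, x_adv):
    """Fraction of examples whose predicted class changes after the attack."""
    clean_pred = f(x).argmax(dim=1)
    adv_pred = f(x_adv).argmax(dim=1)
    return (clean_pred != adv_pred).float().mean().item()
```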
**Robustness of the attack.** This experiment was conducted with the same settings as in [Lorenz et al., 2022]. Table 1 shows the performances of robustness of the proposed $\ell_\infty$-attack and its comparison with the specific adversarial attack baselines. It is noted that the state-of-the-art specific attacks become completely or nearly harmless when classifier is protected by an attack detector. The proposed LIMANS attack surpasses these specific attacks when $M$ is only 10. With $M \geq 500$, the proposed attack can successfully jeopardize the classical classification system, even equipped with an attack detector. The robust classifier shows its stability facing adversarial attacks while the proposed attack can also ignore the attack detectors and damage the system to some extent. It indicates thus the potential application of the proposed attacks in evaluating the effectiveness of an adversarial example detector plugged prior in a classical or robust classifier. **Transferable adversarial noise space.** The results in this part were generated using the algorithm Regularized-LIMANS. By adjusting the hyper-parameter $\lambda$, we managed to find an optimized $M = 150$ for CIFAR-10, which achieves almost comparable performance with AutoAttack and to analyze the transferability of the learned space without loss of precision as shown in Table 2. When it comes to ImageNet (Table 3), considering the large size of an image and the memory limitation, this dimension becomes $M = 100$. This is far from the real dimensionality of the adversarial noise Table 1: Robustness performance of the LIMANS $\ell_\infty$-attack ($\delta_\infty = 8/255$) in term of RAUD on the CIFAR-10 test data and against the attack detectors plugged in both standard classifier (S.C.) and robust classifier (R.C.). The smaller the RAUD, the more robust the adversarial attack is. The best performance is marked in black bold font. | Detectors $d$ | $d_{FGSM}$ | $d_{PGD}$ | $d_{Autoattack}$ | $d_{LIMANS_{10}}$ | |--------------|------------|-----------|------------------|-------------------| | Classifiers $f$ | S.C. | R.C. | S.C. | R.C. | S.C. | R.C. | S.C. | R.C. | | SA | 91.1 | 85.1 | 91.1 | 85.1 | 91.1 | 85.1 | 91.1 | 85.1 | | FGSM | 91.7 | 85.7 | 91.7 | 85.7 | 91.7 | 85.7 | 83.4 | 79.5 | | PGD | 90.6 | 84.9 | 91.7 | 85.0 | 91.1 | 85.1 | 55.9 | 73.7 | | Autoattack | 89.9 | 84.6 | 90.9 | 85.0 | 91.7 | 85.0 | 52.7 | 71.5 | | LIMANS$_{10}$ | 75.7 | 81.0 | 81.0 | 80.8 | 81.6 | 81.0 | 88.9 | 79.6 | | LIMANS$_{500}$ | 17.5 | 71.5 | 25.6 | 72.2 | 31.8 | 74.2 | 26.6 | 69.4 | | LIMANS$_{1000}$ | 15.9 | 70.1 | 26.1 | 70.9 | 32.1 | 72.5 | 21.7 | 68.7 | | LIMANS$_{4000}$ | 15.6 | 69.6 | 23.7 | 70.4 | 28.2 | 72.6 | 31.1 | 68.4 | Table 2: Performance of $\ell_\infty$-attacks on CIFAR-10 ($\delta_\infty = 8/255$), in terms of fooling rate (FR) and standard accuracy (SA), where the left column lists the source classifiers and the first line presents the target classifiers. The best results of transferability are marked in red bold style. That of the specific attacks are shown in black bold style. 
| MobileNet | ResNet50 | DenseNet | VGG | R-r18 | R-wrn-34-10 | |-----------|----------|----------|-----|-------|-------------| | AutoAttack | 63.3 | **100** | 54.6 | 25.1 | 1.2 | 2.4 | | VNI-FGSM | 78.3 | 95.9 | 80.3 | 57.2 | 2.7 | 2.1 | | NAA | 50.7 | 64.7 | 22.9 | 18.4 | 1.4 | 2.1 | | RAP | 49.0 | 75.1 | 52.5 | 35.4 | 1.6 | 2.8 | | Ours | **96.0** | 91.3 | **81.8** | **82.1** | **11.7** | **13.2** | | MobileNet | ResNet50 | DenseNet | VGG | R-r18 | R-wrn-34-10 | |-----------|----------|----------|-----|-------|-------------| | AutoAttack | 62.5 | 43.0 | 44.0 | **100** | 2.7 | 2.7 | | VNI-FGSM | 69.3 | 62.6 | 61.4 | 96.5 | 3.0 | 2.6 | | NAA | 42.3 | 14.5 | 1.8 | 71.6 | 1.6 | 1.2 | | RAP | 46.5 | 39.5 | 40.9 | 73.8 | 3.3 | 3.4 | | Ours | **97.4** | **87.5** | **81.5** | 91.0 | **11.5** | **12.6** | space, and hence, does not lead to comparable performance with specific attacks such as AutoAttack. However, it still offers evidence for the transferability of the learned space on ImageNet. Moreover, for robust classifiers, the decision boundaries are more complicated, which leads to failure in closing the performance gap between LIMANS attack and AutoAttack when only 100 adversarial directions are learned. Nevertheless, it always shows good performance in transferring. It is claimed in [Tramèr et al., 2017] that the distance between the decision boundaries of two classifiers trained on the same dataset is small. This means that the adversarial space learned by LIMANS, if it corresponds to the space spanned by the set of directions perpendicular to the decision boundaries, is transferable between different classifiers. The results reported in the Table 2 confirm this intuition. The dictionaries built upon a ResNet50 and a VGG show better transfer performance over state-of-the-art attacks across vanilla classifiers. However, as shown in Table 3, two exceptions on ImageNet challenge this conclusion. To address them, we propose considering the relative performances. The learned space on a classifier with restricted dimensionality allows us to successfully find adversarial perturbations only for a part of the examples and can be used to attack another classifier achieving comparable performances. Thus, regarding the transferability of LIMANS, we draw the same conclusion on ImageNet across classifiers. Besides, we note that the transferable property also holds between vanilla and robust classifiers. The LIMANS model learned on a vanilla classifier used to fool a robust classifier (and vice versa), gives slightly worse results than the one learned on a classifier of the same category. This might be due to the differences in the dataset used to train the classifiers which result in a larger bias between the decision Table 3: Performance of $\ell_\infty$-attacks on ImageNet ($\delta_\infty = 4/255$), in terms of FR, where the left column lists the source classifiers and the first line presents the target classifiers. The best results of transferability are marked in red bold font. Those of the specific attacks are shown in black bold font. 
| | MobileNet | ResNet18 | DenseNet | VGG | R-r18 | R-50-2 | |------------------|-----------|----------|----------|-------|-------|--------| | **ResNet18** | | | | | | | | AutoAttack | 40.30 | **100** | 35.76 | 34.90 | 1.80 | 1.34 | | VNI-FGSM | 56.74 | 99.98 | 51.40 | **51.42** | 2.84 | 2.04 | | NAA | 22.54 | 97.94 | 14.84 | 19.30 | 2.12 | 1.20 | | RAP | 53.36 | 96.74 | 51.30 | 50.60 | 3.80 | 3.14 | | Ours | **59.16** | 59.16 | **53.14** | 48.28 | **10.48** | **6.62** | | **VGG** | | | | | | | | AutoAttack | 47.94 | 40.06 | 32.62 | **100** | 2.34 | 1.42 | | VNI-FGSM | **57.98** | 53.96 | 42.88 | 99.84 | 2.76 | 2.24 | | NAA | 19.62 | 14.92 | 12.18 | 79.96 | 2.18 | 1.40 | | RAP | 53.14 | 53.12 | 42.68 | 95.68 | 3.48 | 2.84 | | Ours | 57.68 | **54.14** | **50.04** | 51.62 | **10.68** | **6.24** | | **R-r18** | | | | | | | | AutoAttack | 13.70 | 15.8 | 10.82 | 14.60 | **71.74** | 10.78 | | VNI-FGSM | 16.14 | 17.66 | 12.48 | 16.08 | 63.22 | 11.74 | | NAA | 11.46 | 10.86 | 9.34 | 11.42 | 21.48 | 4.90 | | RAP | 11.32 | 10.80 | 8.16 | 10.32 | 45.80 | 7.94 | | Ours | **37.14** | **33.2** | **33.76** | **29.90** | 29.84 | **12.94** | | **R-50-2** | | | | | | | | AutoAttack | 20.14 | 22.76 | 17.36 | 19.44 | 15.42 | **59.02** | | VNI-FGSM | 23.88 | 26.22 | 19.68 | 23.28 | 18.00 | 52.28 | | NAA | 14.08 | 13.12 | 10.20 | 14.04 | 9.82 | 12.58 | | RAP | 13.82 | 14.06 | 10.52 | 13.50 | 15.54 | 34.10 | | Ours | **42.18** | **42.50** | **42.46** | **34.22** | **23.70** | 18.02 | boundaries of the two types of classifiers. Yet the performance is still remarkable, e.g., on ImageNet, $FR_{(R-50-2 \rightarrow ResNet-18)} = 78\%$, $FR_{(VGG \rightarrow ResNet-18)}$ and $FR_{(ResNet-18 \rightarrow R-r18)} = 35\%$, $FR_{(R-r18 \rightarrow R-r18)}$. Furthermore, it is worth noting that the performance of LIMANS attack on a target classifier does not depend on its performance on the source classifier, but on the nature of this target classifier. In Table 2, the fooling rate of the LIMANS-specific attack on ResNet is 91.3%. However, when the learned LIMANS model is used to generate adversarial perturbation to fool MobileNet, its performance is even better reaching 96.0%. This is because MobileNet is simpler and easier to attack. Finally, through comparison and analysis, we conclude that a model trained on a robust classifier is more easily transferable to other classifiers. 5 Conclusions This work introduced LIMANS, a linear model of the adversarial noise space, allowing it to bridge the gap between universal and specific adversarial attacks. It also proposes two implementations, Simple-LIMANS a parameter-free algorithm, and Regularized-LIMANS, more efficient when its regularization parameter is well tuned. For the use of LIMANS, our results suggest starting with Simple-LIMANS to quickly obtain a suitable solution and, depending on the available computation time, improving it by finding a relevant regularization parameter allowing to use of the more accurate Regularized-LIMANS solver. Empirical evidence revealed that adversarial examples crafted by LIMANS were more robust against adversarial examples detectors and proved the adversarial noise space to be transferable leading to results better than current state-of-the-art adversarial attacks. Up to now, only adversarial noise space trained on a specific DNN in white-box settings has been considered. 
The next step is to consider LIMANS for generating black-box attacks and training them on multiple DNNs, making them even more universal, efficient, and close to the true adversarial harm in real-life applications. BIBLIOGRAPHY Philipp Benz, Chaoning Zhang, Adil Karjauv, and In So Kweon. Universal adversarial training with class-wise perturbations. In *2021 IEEE International Conference on Multimedia and Expo (ICME)*, pp. 1–6. IEEE, 2021. Ashutosh Chaudhry, Nikhil Agrawal, Kavya Barnwal, Keerat K Guliani, and Pramod Mehta. Universal adversarial perturbations: A survey. *arXiv preprint arXiv:2005.08087*, 2020. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *ICML*, volume 119, pp. 2206–2216, 2020. URL https://proceedings.mlr.press/v119/croce20b.html. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debonedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In *NeurIPS, Datasets and Benchmarks Track (Round 2)*, 2021. URL https://openreview.net/forum?id=SSKZPJct7B. Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and Stefano Soatto. Empirical study of the topology and geometry of deep networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3762–3770, 2018. Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In *ICLR*. OpenReview.net, 2019. URL https://openreview.net/forum?id=HJz6tiCqYm. Hoki Kim. Torchattacks: A pytorch repository for adversarial attacks. *arXiv preprint arXiv:2010.01950*, 2020. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Communications of the ACM*, 60(6):84–90, 2017. Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. Yueru Li, Shuyu Cheng, Hang Su, and Jun Zhu. Defense against adversarial attacks via controlling gradient leaking on embedded manifolds. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVIII* 16, pp. 753–769. Springer, 2020. Peter Lorenz, Dominik Strassel, Margret Keuper, and Janis Keuper. Is autoattack/autobench a suitable benchmark for adversarial robustness? In *The AAAI-22 Workshop on Adversarial Machine Learning and Beyond*, 2022. URL https://openreview.net/forum?id=aLB3FaqoMBS. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 1765–1773, 2017. Zhuang Qian, Kaizhu Huang, Qiu-Feng Wang, and Xu-Yao Zhang. A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies. *Pattern Recognition*, 131:108889, 2022. Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, and Baoyuan Wu. Boosting the transferability of adversarial attacks with reverse adversarial perturbation. In *NeurIPS*, 2022. Alain Rakotomamonjy. Direct optimization of the dictionary learning problem. *IEEE Transactions on Signal Processing*, 61(22):5495–5506, 2013. Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? *Advances in Neural Information Processing Systems*, 33:3533–3545, 2020. 
Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang, and Prateek Mittal. Robust learning meets generative models: Can proxy distributions improve adversarial robustness? In *ICLR*, 2022. URL https://openreview.net/forum?id=WVX0NvBBkV. Ali Shafahi, Mahyar Najibi, Zheng Xu, John Dickerson, Larry S. Davis, and Tom Goldstein. Universal adversarial training. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(04):5636–5643, Apr. 2020. Suvrit Sra. Scalable nonconvex inexact proximal splitting. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 25. Curran Associates, Inc., 2012. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Yoshua Bengio and Yann LeCun (eds.), *2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings*, 2014.
fvse7bMkAs
Although there are CLT results for the estimators, the validity of the statistical test is lacking --- in particular, the validity of the Bootstrap approximation. The CLT result is not sufficient for the validity of the test since the limiting distribution contains the unknown variance term; does Bootstrap approximation solve this problem? A theory is needed.
RISK ASSESSMENT AND STATISTICAL SIGNIFICANCE IN THE AGE OF FOUNDATION MODELS Anonymous authors Paper under double-blind review ABSTRACT We propose a distributional framework for assessing socio-technical risks of foundation models with quantified statistical significance. Our approach hinges on a new statistical relative testing based on first and second order stochastic dominance of real random variables. We show that the second order statistics in this test are linked to mean-risk models commonly used in econometrics and mathematical finance to balance risk and utility when choosing between alternatives. Using this framework, we formally develop a risk-aware approach for foundation model selection given guardrails quantified by specified metrics. Inspired by portfolio optimization and selection theory in mathematical finance, we define a metrics portfolio for each model as a means to aggregate a collection of metrics, and perform model selection based on the stochastic dominance of these portfolios. The statistical significance of our tests is backed theoretically by an asymptotic analysis via central limit theorems instantiated in practice via a bootstrap variance estimate. We use our framework to compare various large language models regarding risks related to drifting from instructions and outputting toxic content. 1 INTRODUCTION Foundation models such as large language models (LLMs) have shown remarkable capabilities redefining the field of artificial intelligence. At the same time, they present pressing and challenging socio-technical risks regarding the trustworthiness of their outputs and their alignment with human values and ethics (Bommasani et al., 2021). Evaluating LLMs is therefore a multi-dimensional problem, where those risks are assessed across diverse tasks and domains (Chang et al., 2023). In order to quantify these risks, Liang et al. (2022); Wang et al. (2023); Huang et al. (2023) proposed benchmarks of automatic metrics for probing the trustworthiness of LLMs. These metrics include accuracy, robustness, fairness, toxicity of the outputs, etc. Human evaluation benchmarks can be even more nuanced, and are often employed when tasks surpass the scope of standard metrics. Notable benchmarks based on human and automatic evaluations include, among others, Chatbot Arena (Zheng et al., 2023), HELM (Bommasani et al., 2023), MosaicML’s Eval, Open LLM Leaderboard (Wolf, 2023), and BIG-bench (Srivastava et al., 2022), each catering to specific evaluation areas such as chatbot performance, knowledge assessment, and domain-specific challenges. Traditional metrics, however, sometimes do not correlate well with human judgments. Aiming for a better alignment with human judgments, some approaches utilize ChatGPT/GPT-4 for natural language generation evaluations (Liu et al., 2023; Zhang et al., 2023; Hada et al., 2023). A comprehensive evaluation of LLMs requires addressing the following critical considerations: 1. Interpretability. Evaluation of foundation models is multi-dimensional in nature and multiple metrics assess the models on different socio-technical dimensions that probe the trustworthiness of their outputs and their adherence to shared values and ethics. It is critical to establish an aggregate-level measure to facilitate the interpretation and effective communication of the evaluation results. 2. Risk Assessment. In natural language (and other) applications, metrics quantify important guardrails such as model’s toxicity, safety, or robustness. 
Therefore, a comprehensive evaluation framework must incorporate a risk assessment that quantifies associated risks. This entails ranking models based on the assessment of failure modes and tail statistics\(^1\), providing a nuanced understanding of potential pitfalls.

\(^1\)I.e. understanding and quantifying low-probability high-risk events.

Figure 1: (a) Quantiles and (b) Tail Value at Risk (TVAR) of the metrics portfolio of an LLM, showing that TVAR (second-order stochastic dominance) more clearly ranks the models than the quantiles alone (first-order stochastic dominance). (c) Ranking of models using Relative First and Second Stochastic Dominance of Portfolios (R-FSD, R-SSD @P) versus ranking of models using Relative First and Second Stochastic Dominance of chatGPT evaluation scores and ranking by Mean Win Rate on the metrics portfolio. Note that (1) the metrics portfolio successfully approximates the chatGPT evaluation, since the @P rankings largely agree with the @chatGPT rankings; (2) the Relative Stochastic Dominance rankings outperform the baseline Mean Win Rate.

3. **Statistical Significance.** Evaluating machine learning models is intimately connected to statistical significance testing (SST), although this framework is still underutilized: Dror et al. (2018) report that almost 50% of ACL papers miss SST indicators. With the ever-increasing parametric complexity of LLMs, obtaining a reliable SST in evaluating foundation models becomes ever more urgent.

We propose in this paper an evaluation framework that offers a principled solution and an efficient implementation that addresses each of these challenges. Our main contributions are:

1. **Interpretable Metrics-Portfolio (Section 4).** Drawing inspiration from econometrics and mathematical finance, we define a metrics-portfolio for aggregating metrics. This portfolio normalizes and aggregates metrics, yielding a single interpretable number assessing each output of an LLM. A higher value of the portfolio is preferable. We illustrate in Figure 1 panels (a) and (b) summary statistics of the metrics portfolio aggregating a total of 8 automatic metrics computed using \(5K\) samples from the Mix-instruct dataset (Jiang et al., 2023). In panel (c) we show that model ranking based on our metrics-portfolio aligns with human evaluation proxies such as chatGPT (please refer to Appendix B for details of how the chatGPT score is computed).

2. **Risk Assessment via Second Order Stochastic Dominance (Section 2).** Stochastic orders define partial orders on random variables and play a vital role in econometrics and mathematical finance for comparing and selecting portfolios. We propose using stochastic order to select LLMs based on their metrics-portfolios. A portfolio dominates in the First Order Stochastic Dominance (FSD) if it has higher quantiles for all percentiles. However, in Figure 1 (Panel (a)), the quantiles of the metrics-portfolio of an LLM don't provide a clear ordering, and FSD doesn't adequately assess the risks of these models. Instead, we propose the use of Second Stochastic Dominance (SSD), where a portfolio dominates if it has higher Tail Values at Risk (TVAR) for all percentiles (also known as Conditional Value at Risk). TVAR, illustrated in Figure 1 (Panel (b)), represents normalized integrated quantiles, assessing the risks of low values in the portfolio. Small TVAR corresponds to fat left tails in the distribution of the portfolio, identifying risky LLMs as those with the lowest TVAR.
For example, Flan-t5 emerges as the riskiest model in our running example. 3. Statistical Significance via Dominance Tests. (Section 3) Armed with these notions of stochastic dominance, we define statistics that assess the relative dominance of a model’s portfolio on another (R-FSD and R-SSD in Panel (c) in Figure 1). We subject these statistics to an asymptotic analysis, proving central limit theorems that provide the foundation for hypothesis testing with false discovery rate control. We then perform stochastic dominance hypothesis testings between all pairs of models. Having adjusted the confidence level of these tests, we aggregate these pairwise rankings to a single rank via rank aggregation techniques such as the Borda Algorithm (de Borda, 1781). The resulting ranks, depicted in Panel (c) of Figure 1, highlight that the portfolio of automatic metrics (@P) leads to a similar ranking to chatGPT score (@chatGPT) for both first and second stochastic order. To underscore the importance of risk assessment, we present the ranking of the metrics-portfolio produced by the ubiquitous Min Win Rate (MWR) used in LLM benchmarks (Liang et al., 2022)(last column in Panel (c)). Flan-t5 ranks close to last with all other orders, but ranks 6 with MWR. This highlights that the ubiquitous MWR used in LLM benchmarks is risky for ranking LLMs as it does not take into account failure modes of the model, and we caution practitioners of its pitfalls. 2 STOCHASTIC DOMINANCE We first review notions of stochastic dominance and their relation to downside risk measures and risk averse preference modeling. We use the notation of the seminal paper of Ogryczak & Ruszczyński (2002), and assume that the random variables are standardized so that larger outcomes are preferable. Throughout this Section, the reader can think of the random variable $X$ as a metric evaluating the performance of model $A$ on a specific test set. Likewise, $Y$ represents the evaluation of model $B$. We defer the definition of metrics portfolio to Section 4. In a multi-metric evaluation, as explained in the introduction, $X$ and $Y$ represent portfolios of evaluations of model $A$ and $B$ respectively. 2.1 FIRST AND SECOND ORDER DOMINANCE AND MEAN-RISK MODELS First Order Stochastic Dominance The First-order Stochastic Dominance (FSD) between real-valued random variables uses the right-continuous cumulative distribution (CDF) as a performance function. Specifically, for a real random variable $X$, define the first performance function $F_X^{(1)} : \mathbb{R} \rightarrow [0, 1]$ as the CDF: $F_X^{(1)}(\eta) = P(X \leq \eta), \forall \eta \in \mathbb{R}$. The FSD of $X$ on $Y$ is defined as follows: $$X \succ_{\text{FSD}} Y \iff F_X^{(1)}(\eta) \leq F_Y^{(1)}(\eta), \forall \eta \in \mathbb{R},$$ this intuitively means that for all outcomes $\eta$, the probability of observing smaller outcomes than $\eta$ is lower for $X$ than $Y$. An equivalent definition can be expressed using the quantile $F_X^{(-1)}$ (See e.g Ogryczak & Ruszczyński (2002)): $$X \succ_{\text{FSD}} Y \iff F_X^{(-1)}(p) \geq F_Y^{(-1)}(p), \forall p \in (0, 1],$$ where $F_X^{(-1)} : [0, 1] \rightarrow \mathbb{R}$ is the left-continuous inverse of $F_X^{(1)}$: $F_X^{(-1)}(p) = \inf\{\eta : F_X^{(1)}(\eta) \geq p\}$ for $p \in (0, 1]$. We focus on this definition as it is more computationally and notationally friendly since the quantile function is always supported on $[0, 1]$. 
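Concretely, the quantile form of FSD can be checked on samples with a simple plug-in estimate. The sketch below is illustrative only (it is not the authors' implementation): it assumes a recent NumPy (for the `inverted_cdf` quantile method) and uses hypothetical Gaussian scores for two models. With finite samples, strict dominance at every $p$ rarely holds exactly, which is part of what motivates the relaxations introduced below in Section 2.2.

```python
import numpy as np

def empirical_quantile(sample, probs):
    """Left-continuous empirical quantile F^{(-1)}(p) = inf{eta : F(eta) >= p}."""
    return np.quantile(np.asarray(sample), probs, method="inverted_cdf")

def fsd_holds(x, y, grid_size=1000):
    """Plug-in check of the quantile form of FSD: F_X^{(-1)}(p) >= F_Y^{(-1)}(p) for all p."""
    p = np.linspace(1.0 / grid_size, 1.0, grid_size)  # p in (0, 1]
    return bool(np.all(empirical_quantile(x, p) >= empirical_quantile(y, p)))

rng = np.random.default_rng(0)
x = rng.normal(0.6, 1.0, size=5000)  # hypothetical metric values for model A
y = rng.normal(0.0, 1.0, size=5000)  # hypothetical metric values for model B
print(fsd_holds(x, y))               # A dominates B in (empirical) FSD in most draws
```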
Second Order Stochastic Dominance The Second-order Stochastic Dominance (SSD) is defined via the second performance function \( F_X^{(2)} : \mathbb{R} \rightarrow [0,1] \) that measures the area under the CDF: \[ F_X^{(2)}(\eta) = \int_{-\infty}^{\eta} F_X^{(1)}(x)dx, \text{ for } x \in \mathbb{R}, \] yielding: \[ X \succsim_{\text{SSD}} Y \iff F_X^{(2)}(\eta) \leq F_Y^{(2)}(\eta), \forall \eta \in \mathbb{R}. \tag{3} \] Note that FSD implies SSD, hence SSD is a finer notion of dominance. While FSD implies that \( X \) is preferred to \( Y \) by any utility-maximizing agent preferring larger outcomes,\(^2\) Ogryczak & Ruszczyński (2002) showed that SSD implies that \( X \) is preferred to \( Y \) by any risk-averse agent preferring larger outcomes.\(^3\) Similarly to FSD, SSD can be measured with quantile functions via introducing the second quantile function also known as integrated quantiles \( F_X^{(-2)} : (0,1] \rightarrow \mathbb{R} \) \[ F_X^{(-2)}(p) = \int_0^p F_X^{(-1)}(t)dt, \text{ for } t \in (0,1]. \tag{4} \] Similarly to the FSD case, an equivalent more computationally friendly definition can be expressed in terms of the second quantile function (a proof of this equivalence can be found in Theorem 3.2 in Ogryczak & Ruszczynski (2002)): \[ X \succsim_{\text{SSD}} Y \iff F_X^{(-2)}(p) \geq F_Y^{(-2)}(p), \forall p \in (0,1]. \tag{5} \] This equivalence is not straightforward and is due to Fenchel duality between \( F^{(2)} \) and \( F^{(-2)} \). Using \( p = 1 \) we see that SSD implies \( \mu_X \geq \mu_Y \), where \( \mu_X \) and \( \mu_Y \) are means of \( X \) and \( Y \). Mean – Risk Models (MRM) As noted earlier SSD is linked to risk assessment via the second performance function \( F^{(2)}(.) \) measuring expected shortfall, and the negative second quantile function \( -F^{(-2)}(p) \) that is an assessment of expected losses given outcomes lower than the \( p \)-quantile. **Definition 1 (Mean – Risk Models).** A mean – risk model of a random variable \( X \) consists of the pair \( (\mu_X, r_X) \), where \( \mu_X \) is the mean of \( X \), and \( r_X \) is a functional that measures the risk of the random outcome \( X \). The consistency of a mean – risk model with SSD is defined as follows: **Definition 2 (SSD consistency of Mean – Risk Models).** A mean – risk model \( (\mu_X, r_X) \) is \( \alpha \)-consistent with SSD, if for \( \alpha > 0 \) the following is true: \[ X \succsim_{\text{SSD}} Y \iff \mu_X - \alpha r_X \geq \mu_Y - \alpha r_Y. \tag{6} \] The ubiquitous mean – risk model in machine learning is \( (\mu_X, \sigma_X) \), where \( \sigma_X \) is the standard deviation. Unfortunately this model is not consistent with the SSD and has several limitations as it implies Gaussianity of the outcomes or a quadratic utility function. We give in Table 1 (Appendix F.1 ) risk measurements and their \( \alpha \)-consistency (proofs in Ogryczak & Ruszczynski (2002)). ### 2.2 Relaxations of Stochastic Dominance Recalling the definitions of FSD and SSD in Equations (2) and (5), in the finite-sample regime it is hard to test for these relations as one needs to show the infinite-sample quantile or second quantile properties hold uniformly over all \( p \in (0,1] \). This difficulty motivated the relaxation of stochastic dominance to an almost stochastic dominance pioneered by Leshno & Levy (2002). These relaxations were revisited for the first order by Alvarez-Esteban et al. 
(2014) who later proposed an optimal transportation approach to assess almost first stochastic order (Del Barrio et al., 2018). **Almost FSD (\( \varepsilon \)-FSD)** Following Leshno & Levy (2002), Del Barrio et al. (2018) relaxed FSD (Equation (2)) via the violation ratio of FSD: \[ X \succsim_{\varepsilon-\text{FSD}} Y \iff \varepsilon_{W_2}(F_X, F_Y) = \frac{\int_0^1 (F_Y^{(-1)}(t) - F_X^{(-1)}(t))^2 dt}{W_2^2(F_X, F_Y)} \leq \varepsilon, \tag{7} \] where $W_2$ is the Wasserstein-2 distance between $F_X$ and $F_Y$. This ratio corresponds to a measure of the “area” of violation of the FSD dominance of $X$ on $Y$. Note that $0 \leq \varepsilon_{W_2}(F_X, F_Y) \leq 1$, with value 0 if $X \succ Y$ and 1 if $Y \succ X$. For $\varepsilon \in (0, \frac{1}{2}]$, Figure 5a in Appendix G illustrates $\varepsilon$-FSD; dashed areas represent the violation set.

\(^2\)I.e. having an increasing utility function. \(^3\)I.e. having an increasing and concave utility function.

**Almost SSD ($\varepsilon$-SSD)** We define $\varepsilon$-SSD, for $\varepsilon \in (0, \frac{1}{2})$, by relaxing Equation (5) as follows: $$X \succ_{\varepsilon-\text{SSD}} Y \iff \varepsilon_{IQ}(F_X, F_Y) = \frac{\int_0^1 \left( F_Y^{(-2)}(t) - F_X^{(-2)}(t) \right)^2 dt}{d_{IQ}^2(F_X, F_Y)} \leq \varepsilon,$$ where $d_{IQ}$ is the $L_2$ distance between the Integrated Quantiles $(F^{(-2)})$. This ratio corresponds to a measure of the “area” of violation of the SSD dominance of $X$ on $Y$. Figure 5b in Appendix G illustrates the second order; dashed areas represent the violation set of SSD of $X$ on $Y$. Appendix D gives a more detailed account on almost stochastic dominance. 2.3 Relative Stochastic Dominance In the remainder of the paper, we refer to the FSD violation ratio as $\varepsilon_{W_2}(F_X, F_Y) \equiv \varepsilon^{(1)}(F_X, F_Y)$ and to the SSD violation ratio as $\varepsilon_{IQ}(F_X, F_Y) \equiv \varepsilon^{(2)}(F_X, F_Y)$. One of the shortcomings of almost stochastic dominance is the need to fix a threshold $\varepsilon$ on the violation ratio. When comparing two random variables, setting a threshold is a viable option. Nevertheless, when one needs to rank multiple variables $X_1, \ldots, X_k$ (considering all pairwise comparisons), setting a single threshold that would lead to a consistent relative stochastic dominance among the $k$ variables becomes challenging. To alleviate this issue, we draw inspiration from relative similarity and dependence tests (Bounliphone et al., 2016a;b) that circumvent the need for a threshold via relative pairwise tests. For $\ell \in \{1, 2\}$ (i.e. for FSD or SSD) we consider all pairs of violation ratios: $$\varepsilon_{ij}^{(\ell)} = \varepsilon^{(\ell)}(F_{X_i}, F_{X_j}) \text{ for } i, j \in \{1 \ldots k\}, i \neq j,$$ noting that $\varepsilon_{ij}^{(\ell)} + \varepsilon_{ji}^{(\ell)} = 1$. Let $F = (F_{X_1}, \ldots, F_{X_k})$.
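These per-pair ratios can be estimated from samples with a plug-in computation on a uniform grid of $p$; they are the ingredients of the one-versus-all quantities defined next. The sketch below is illustrative rather than the authors' code: following the text's description, the numerator keeps only the violation region (where $Y$'s quantile or integrated-quantile curve exceeds $X$'s), so by construction the ratio of $X$ over $Y$ and of $Y$ over $X$ sum to one, matching the property noted above. Grid size and the quantile estimator are choices of this sketch.

```python
import numpy as np

def quantile_fn(sample, p):
    return np.quantile(np.asarray(sample), p, method="inverted_cdf")

def integrated_quantile(sample, p_grid):
    """F^{(-2)}(p): running integral of the quantile function on a uniform p-grid."""
    q = quantile_fn(sample, p_grid)
    return np.cumsum(q) * (p_grid[1] - p_grid[0])

def violation_ratio(x, y, order=1, grid_size=2000):
    """eps^{(1)} (FSD, Section 2.2) or eps^{(2)} (its SSD analogue) of X over Y.
    The numerator integrates only over the violation set described in the text."""
    p = np.linspace(1.0 / grid_size, 1.0, grid_size)
    if order == 1:
        fx, fy = quantile_fn(x, p), quantile_fn(y, p)
    else:
        fx, fy = integrated_quantile(x, p), integrated_quantile(y, p)
    dp = p[1] - p[0]
    num = np.sum(np.maximum(fy - fx, 0.0) ** 2) * dp
    den = np.sum((fy - fx) ** 2) * dp + 1e-12
    return num / den

rng = np.random.default_rng(0)
x = rng.normal(0.5, 2.0, 5000)  # the "challenging" pair later used in Section 5.1
y = rng.normal(0.0, 1.0, 5000)
print(violation_ratio(x, y, order=1), violation_ratio(x, y, order=2))
```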
We define the one-versus-all violation ratio of the dominance of $X_i$ on all other variables $X_j, j \neq i$: $$\varepsilon_i^{(\ell)}(F) = \frac{1}{k-1} \sum_{j \neq i} \varepsilon_{ij}^{(\ell)}.$$ We then define relative stochastic dominance for both orders, R-FSD an R-SSD respectively: $$X_{i_1} \succ_{R-\text{FSD}} X_{i_2} \cdots \succ_{R-\text{FSD}} X_{i_k} \iff \varepsilon_{i_1}^{(1)}(F) \leq \cdots \leq \varepsilon_{i_k}^{(1)}(F)$$ $$X_{i_1} \succ_{R-\text{SSD}} X_{i_2} \cdots \succ_{R-\text{SSD}} X_{i_k} \iff \varepsilon_{i_1}^{(2)}(F) \leq \cdots \leq \varepsilon_{i_k}^{(2)}(F)$$ In this definition of relative stochastic dominance, the most dominating model is the one with the lowest one-versus-all violation ratio and to test for relative dominance of $X_i$ on $X_j$ we can look at the following statistics: $$\Delta \varepsilon_{ij}^{(\ell)}(F) = \varepsilon_i^{(\ell)}(F) - \varepsilon_j^{(\ell)}(F),$$ and we have the following threshold-free test for relative order:4 $$X_i \succ_{R-\text{FSD}} X_j \iff \Delta \varepsilon_{ij}^{(1)}(F) \leq 0$$ $$X_i \succ_{R-\text{SSD}} X_j \iff \Delta \varepsilon_{ij}^{(2)}(F) \leq 0$$ 4For comparing $k = 2$ random variables, these $r$-FSD and $r$-SSD tests reduce to 0.5-FSD and 0.5-SSD absolute tests, respectively. 3 TESTING FOR ALMOST AND RELATIVE STOCHASTIC DOMINANCE Given empirical samples from $F_X$ and $F_Y$, we perform statistical testing of the almost and relative stochastic dominance of $X$ on $Y$ given empirical estimates of the statistics given in Sections 2.2 and 2.3. A key ingredient for quantifying the statistical significance of such tests is a central limit theorem that guarantees that the centered empirical statistics is asymptotically Gaussian at the limit of infinite sample size. Given $n$ samples from $F_X$ ($m$ from $F_Y$ respectively), we denote $F^n_X$ and $F^m_Y$ the corresponding empirical distributions. For $\varepsilon_0$-FSD, Del Barrio et al. (2018) studied the following hypothesis testing $H_0 : X \not\succ Y$ versus the alternative $H_a : X \succ Y$. Using (2), this amounts to the following null hypothesis: $H_0 : \varepsilon_{W_2}(F^n_X, F^m_Y) > \varepsilon_0$. Del Barrio et al. (2018) showed the asymptotic normality of the empirical statistics: $$\sqrt{\frac{mn}{m+n}}(\varepsilon_{W_2}(F^n_X, F^m_Y) - \varepsilon_{W_2}(F_X, F_Y)) \rightarrow N(0, \sigma^2(F_X, F_Y)).$$ Del Barrio et al. (2018); Ulmer et al. (2022) propose to reject $H_0$ with a confidence level $1 - \alpha$ if: $$\varepsilon_{W_2}(F^n_X, F^m_Y) \leq \varepsilon_0 + \sqrt{\frac{m+n}{mn}}\sigma^2(F_X, F_Y)\Phi^{-1}(\alpha),$$ where $\Phi^{-1}$ is the quantile function of a standard normal. For the tests we propose below, we assume the following structure on the underlying CDFs to derive the corresponding central limit theorems (CLTs). **Assumption 1. [Regularity]** Let the CDF $F$ be supported on the interval $[-M, M]$ for some constant $M$, and have pdf $f$ such that $\frac{f'(p)}{f(p)}$ is bounded for almost every $p$ for which $f(p) > 0$ (i.e., all $p$ in the support of $f$). $\varepsilon$-SSD Testing Similar to $\varepsilon$-FSD, using the definition in (5) we propose to test using the following null hypothesis for testing for $\varepsilon_0$-SSD: $$H_0 : \varepsilon_{IQ}(F^n_X, F^m_Y) > \varepsilon_0$$ Supposing Assumption 1 holds for $F_X, F_Y$ and assuming $\frac{n}{n+m} \rightarrow \lambda$ for some $\lambda$, we state a Central Limit Theorem for the second order statistics in Appendix H (Theorem 1, proved in Appendix J.1). 
Similarly to (14), Theorem 1 suggests to reject $H_0$ with a confidence $1 - \alpha$ if: $$\varepsilon_{IQ}(F^n_X, F^m_Y) \leq \varepsilon_0 + \sqrt{\frac{m+n}{mn}}\sigma^2_\lambda(F_X, F_Y)\Phi^{-1}(\alpha),$$ where (for the same reasons as the FSD case) $\sigma^2_\lambda$ is given by the central limit theorem. Relative Stochastic Dominance Testing We turn now to relative stochastic dominance that we introduced in (12) and (13) for first and second orders. Given $n$ samples from $k$ random variables $(X_1 \ldots X_k)$, let $F = (F_1, \ldots, F_k)$ be the marginals of $X_i$ and $F_n = (F_{1n}, \ldots, F_{kn})$ denote the empirical marginals. To test for R-FSD (resp R-SSD) of $X_{i_1}$ on $X_{i_2}$ we propose to test the following null hypothesis: $$H_0 : \Delta\varepsilon^{(\ell)}_{ij}(F_n) > 0, \ell = 1 \text{ or } 2$$ Assuming that each $F_i$ satisfies Assumption 1, we state in Appendix H a central limit theorem for the relative second order statistics (Theorem 2 proved in Appendix J.2). A similar result holds for the relative first order statistics that we omit for brevity. Theorem 2 suggests to reject $H_0$ with a confidence $1 - \alpha$ if: \[ \Delta_{\varepsilon^{(2)}_{1,2}}(F_n) \leq \sqrt{\frac{1}{n} \sigma^2_{\text{relative}}(F_X, F_Y)} \Phi^{-1}(\alpha) \] where \( \sigma^2_{\text{relative}}(F_X, F_Y) \) is given by the central limit theorem (similar test exists for R-FSD). **Bootstrapping Heuristic** While the CLT above provides an asymptotic value for the variance, in practice (as in the ASO framework of (Ulmer et al., 2022)) we estimate the variance with a bootstrapping heuristic (Efron & Tibshirani, 1993). This estimate is nonasymptotic and hence should often be more accurate than the asymptotic value. Proving the consistency of the bootstrap for functions of quantiles is generally nontrivial (Shao & Tu, 2012), but recall that the stochastic ordering can be defined in terms of either quantiles or CDFs. In Appendix K we provide a bootstrap consistency proof for the absolute statistics based on the CDF, leaving the quantile based proof for future work. **Multi-Testing Algorithm** Algorithm 1 given in Appendix C summarizes the multi-testing setup for both relative and almost (absolute) FSD and SSD. The main idea behind Algorithm 1 is to turn multi-testing to pairwise testings i.e testing for stochastic dominance between all pairs of models using relative (or absolute) FSD or SSD. In order to ensure that this multi-testing has a confidence level \( 1 - \alpha \), we correct the individual test’s confidence level by dividing \( \alpha \) by the number of all pairs (Bonferroni, 1936). Then in order to combine the pairwise rankings to a single rank, we use a simple Borda count (de Borda, 1781) rank aggregation algorithm. ### 4 DISTRIBUTIONAL RISK ASSESSMENT OF FOUNDATION MODELS **Setup** In this section we consider the multi-metric evaluation setup of a foundation model \( A : X \rightarrow O \), using \( N \) metrics \( m_i : O \rightarrow \mathbb{R}, i = 1 \ldots N \), where \( m_i \) are real valued functions evaluated on a test set \( D \). Without loss of generality, assume that each of the metrics are standardized such that higher values of \( m_i \) correspond to more desirable model performance. We model observed values for each metric \( m_i \) as a continuous random variable \( M_i \) with unknown CDF \( F_{M_i} \). 
For a model \( A : X \rightarrow O \) and a data sample \( X \sim D \), we describe the evaluation of model \( A \) with \( m_i \) with the following random variable \( M_i : M_i|A,X := m_i(A(X)), X \sim D, i = 1 \ldots N \), where the randomness arises from the data sampling procedure \( X \sim D \), and (if applicable) the stochasticity of the model \( A \), for example if the model uses sampling to compute its output. **Metrics Portfolio Selection using Stochastic Dominance** Let \( \lambda = (\lambda_1, \ldots, \lambda_N) \) be a probability vector that represents the importance of the \( m_i \) metrics to the model’s end user. Inspired by the portfolio optimization literature, we model the user return from a model as a portfolio of metrics \( m_i \) evaluated on a test set \( D \). Following (Ulan et al., 2021; Belgodere et al., 2023), we define this portfolio as an Archimedean copula, which forms a weighted geometric mean of the CDFs: \[ R_A(X) = \exp \left( \sum_{i=1}^{N} \lambda_i \log F_{M_i}(m_i(A(X))) \right) = \prod_{i=1}^{N} F_{M_i}^{\lambda_i}(m_i(A(X))). \] Note that (17) normalizes the metrics using the CDF of the metric \( M_i \), eliminating the issue of differing dynamic ranges. This CDF should be formed by pooling together the evaluations on all samples and from all models being compared, to ensure that the various \( R_A \) are comparable. The CDF normalization is monotonic and hence it preserves the order of each metrics and allow us to aggregate in the probability space the metrics using a simple weighted geometric mean. Computing \( R_A(X) \) for all test samples \( X \), we can therefore characterize the distribution of the metric portfolio of the model \( A \). To compare two models it is enough to compare their corresponding portfolios, specifically, Model \( A \) is preferred to Model \( B \) using \( \varepsilon \)- or R-SSD: \[ R_A(X) \gtrsim_{\varepsilon \text{- or } R \text{-SSD}} R_B(X). \] Similar tests can be performed for FSD. **Multiple Models Comparison** Given \( k \) models \( A_\ell, \ell = 1 \ldots k \) and their evaluations \( m_i(A_\ell(X)), X \sim D, i = 1 \ldots N \), we pool all model evaluations for a metric to estimate the CDF of each metric $F_{M_i}$ and construct a portfolio for each model $R_{A_\ell}(X)$. We use our Relative Stochastic Dominance testing introduced in Section 3 and in Algorithm 1 to rank the models by their metrics portfolio in relative SSD or FSD with a confidence level $1 - \alpha$. **Per Metric Stochastic Dominance and Rank Aggregation** We also explore another approach for multi-testing, by considering the stochastic dominance of the models on per-metric basis. This amounts to computing $N$ relative stochastic orders for each $\mathcal{M}_i = (m_i(A_1(X)), \ldots, m_i(A_\ell(X)))$, $i = 1 \ldots N$. This amounts to producing via Algorithm 1 a relative ranking $\pi_i$ of the models based on $\mathcal{M}_i$. A single rank $\pi$ is then obtained via rank aggregation with uniform weighting on the per-metric rankings $\pi_i$, $i = 1 \ldots N$. We use for rank aggregation the R package of (Pihur et al., 2009). For more details on rank aggregation, the reader is referred to Appendix F.3. ## 5 EXPERIMENTS ### 5.1 VALIDATION OF STATISTICAL SIGNIFICANCE We examine the statistical properties of our tests as a function of sample size. 
We purposely design synthetic score distributions to represent challenging problems comprising large overlap between the distributions and considerable violation ratio, but where one would still like to have an ordering among the variables. For this we consider the two Gaussian variables $X \sim \mathcal{N}(0, 1)$ and $Y \sim \mathcal{N}(0.5, 2)$. Figure 6 in Appendix L.1 shows that our tests have desirable statistical properties. We also perform synthetic experiment on fat tailed distribution such as log normal (Figure 7 App. L.1). ### 5.2 LLM EVALUATION WITH STOCHASTIC DOMINANCE We showcase LLM evaluation with stochastic dominance to assess two risks: drifting from instructions and outputting toxic content. The following datasets correspond to each risk we assess. **Mix-Instruct Evaluation Data** We use the data from (Jiang et al., 2023), that consists of an instruction, an input sentence and an expected output from the user, as well as the output of a set of different LLMs. The dataset consists of a training set of 100K samples and a test set of 5K samples. (Jiang et al., 2023) used automatic metrics such as BARTscore and BLEU score comparing the LLM generation to the expected output in order to evaluate if each LLM followed the instruction. (Jiang et al., 2023) used also chatGPT to evaluate the generations. The number of automatic metrics $N$ is 8, the total number of evaluated models $k$ is 12. Metrics are unified so that larger values are preferred. **Toxicity Evaluation** We use the real toxicity prompts dataset of Gehman et al. (2020), and generate prompts completions from the Llama 2 7b, Llama 2 13b, Llama 2 70b, MosaicML MPT 30b and Tiiuae Falcon 40b models available in Opensource ($k = 5$ models). We select two sets of prompts: toxic prompts (toxicity > 0.8, that gives ~10K prompts) and non-toxic prompts (toxicity < 0.2, from which we randomly sample 10K). We sample from each model, 10 completions per prompt using nucleus sampling (top-$p$ sampling with $p = 0.9$ and a temperature of 1). This procedure yields a dataset of ~200K sentence completions per model. We evaluate the toxicity of these generations using the Perspective API, on the following toxicity metrics ($N = 6$ metrics): Toxicity, Severe toxicity, Identity Attack, Insult, Profanity and Threat. Following Liang et al. (2022), we evaluate the toxicity of generated completions only and refer to this as Gen Only evaluation. In order to also give the context of the completion, we prepend the model generation with the prompt and evaluate the full sentence using Perspective API. We refer to this as Prompt+Gen. The polarity of all toxicity metrics is unified so that high values refer to non-toxic content (we use −log probabilities of Perspective API outputs). **Evaluation Protocol and Baselines** We evaluate each of the use cases (instruction following and toxicity) using the following absolute stochastic dominance tests: (1) $\varepsilon$-FSD (corresponds to the ASO evaluation of Ulmer et al. (2022)) for $\varepsilon = 0.08, 0.25, 0.4$, (2) our proposed $\varepsilon$-SSD using the same values for $\varepsilon$, (3) our relative stochastic dominance R-FSD and R-SSD tests, (4) the Mean – Risk models described in Table 1, and (5) the ranking produced by the Mean Win Rate (MWR) used by LLM leaderboards such as HELM (Liang et al., 2022). 
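Before turning to the per-dataset results, a compact sketch of how the pieces above fit together: the metrics portfolio of Eq. (17) built from pooled per-metric CDFs, one-versus-all violation ratios, a bootstrap estimate of the variance of the relative statistics, Bonferroni-corrected pairwise decisions, and a Borda-style aggregation into a single rank. This is an illustrative pipeline only, not the authors' released implementation; `violation_ratio` is the helper from the earlier sketch, and all sizes and thresholds are placeholders.

```python
import numpy as np
from itertools import combinations
from statistics import NormalDist

rng = np.random.default_rng(0)

def pooled_cdf(columns):
    # Empirical CDF of one metric, pooled over all models so portfolios are comparable.
    pooled = np.sort(np.concatenate(columns))
    return lambda v: np.searchsorted(pooled, v, side="right") / len(pooled)

def portfolios(metric_values, weights=None):
    """Eq. (17): per-sample weighted geometric mean of pooled per-metric CDFs.
    metric_values: dict model -> array (n_samples, n_metrics), larger metric = better."""
    models = list(metric_values)
    n_metrics = next(iter(metric_values.values())).shape[1]
    w = np.full(n_metrics, 1.0 / n_metrics) if weights is None else np.asarray(weights)
    log_r = {m: np.zeros(len(metric_values[m])) for m in models}
    for i in range(n_metrics):
        cdf = pooled_cdf([metric_values[m][:, i] for m in models])
        for m in models:
            log_r[m] += w[i] * np.log(cdf(metric_values[m][:, i]) + 1e-12)
    return {m: np.exp(v) for m, v in log_r.items()}

def one_vs_all_eps(samples, order=2):
    # One-versus-all violation ratios of Section 2.3; violation_ratio() is the earlier sketch.
    return {a: np.mean([violation_ratio(samples[a], samples[b], order)
                        for b in samples if b != a]) for a in samples}

def relative_ranking(samples, order=2, alpha=0.05, n_boot=200):
    models, pairs = list(samples), list(combinations(list(samples), 2))
    z = NormalDist().inv_cdf(alpha / len(pairs))   # Bonferroni-corrected, one-sided threshold
    boots = []
    for _ in range(n_boot):                        # bootstrap the relative statistics
        res = {m: rng.choice(samples[m], size=len(samples[m]), replace=True) for m in models}
        boots.append(one_vs_all_eps(res, order))
    wins = {m: 0 for m in models}
    for a, b in pairs:
        d = np.array([bs[a] - bs[b] for bs in boots])   # Delta eps_ab of Section 2.3
        if d.mean() <= d.std(ddof=1) * z:               # reject H0: a does not R-dominate b
            wins[a] += 1
        elif -d.mean() <= d.std(ddof=1) * z:
            wins[b] += 1
    return sorted(models, key=lambda m: -wins[m])       # Borda-count style aggregation
```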
As noted in Section 4, we either perform these tests on a metrics portfolio (given in Equation (17)) – we refer to this as test @ P; or on a per metric basis leading to $N$ rankings of the models that we reduce to a single ranking via Rank Aggregation (RA) (Pihur et al., 2009) – we refer to this as RA(test @ M). In this naming convention, test takes values in \{MWR, ε-FSD, ε-SSD, R-FSD, R-SSD, Mean – Risk Model (\(μ_X − r_X\))\} where \(r_X\) is a chosen risk from Table 1. We perform all our statistical tests with a significance level \(\alpha = 0.05\), and use 1000 bootstrap iterations. **Efficient Implementation** We compare the computational complexity of our implementation for computing all stochastic orders to that of the Deep-Significance package (deepsig, 2022) which implements ε-FSD in the ASO framework (Ulmer et al., 2022), on the task of comparing models on the Mix-Instruct dataset (sample size 5K, \(k = 12\) models). Using the Deep-Significance implementation of MULTI-ASO in (Ulmer et al., 2022) for \(ε = 0.25\) with just 3 bootstrap iterations\(^5\), the test completes in 15min50s (averaged over 7 runs). Our code for relative and absolute testing performs all tests at once and relies on caching, vectorization, and multi-threading of the operations. Our code completes all tests in an average of just 17.7 s with 1000 bootstraps. Experiments were run on a CPU machine with 128 AMD cores, of which 2 were used. **Mix-Instruct Results and Analysis** Figure 1 and Table 2 in the Appendix summarize the rankings we obtain for different models using the different tests described above. Note that we compare here our portfolio approach versus a ChatGPT score evaluation of the models (See Appendix B for ChatGPT evaluation). We see that our portfolio agrees with this human evaluation proxy on both R-SSD and R-FSD orders. On the other hand, as we show in Appendix A, our portfolio approach also agrees with per metric aggregation for both R-FSD and R-SSD while being 7x faster. When compared to the Mean Win Rate currently used in LLM leaderboards such as HELM (Liang et al., 2022), we see that it leads to different orderings than FSD and SSD. For example, the flan-t5 model is ranked 5 or 6 by MWR with rank aggregation and portfolio, respectively. In contrast, for R-FSD and R-SSD it is given a low ranking (8, 11) or 12. This is due to the fact that MWR only counts wins and does not take into account how fat the left tail of the distribution of the metric being assessed is, possibly leading to overevaluation of risky models. When comparing R-FSD and R-SSD to each other, we see some changes in the ranking in near or adjacent positions. Remarkably, the R-SSD ordering agrees with the rank aggregation of all (consistent) mean – risk models, confirming the theoretical link between second order dominance and risk averse decision making. Nevertheless, as shown in Appendix A, R-SSD has lower variance in the small-sample regime. Table 3 in Appendix L shows that R-FSD and R-SSD are consistent with ε-FSD and ε-SSD, respectively, for various values of \(ε\) on this dataset. While it is common to give radar plots of MWR for a metric or an average of the metric, we give in Appendix L a radar plot (Figure 8) for each of the Mean – Risk models, to aid practitioners in visualizing and selecting models in a risk aware manner. **Toxicity Results and Analysis** Table 4 in Appendix L summarizes the results of our tests.
We make a few observations: First, overall the portfolio approach agrees well with the rank aggregation of per-metric rankings. The portfolio is more computationally efficient as it needs to run the stochastic dominance test only on the portfolio, rather than running \(N\) tests and aggregating them via rank aggregation. Secondly, on this dataset the R-FSD and R-SSD agree, with a few exceptions. Interestingly, when comparing models on model generation only, on toxic prompts MosaicML MPT stands out, while on non toxic prompts Llama2 7B stands out and on the combined set Mosaic ML MPT stands out. When evaluating the toxicity of the context (Prompt + Gen), Llama70B stands out on toxic prompts, Llama7b stands out on non toxic prompts and MosaicML MPT still stands out on the combined set. This study shows that the evaluation problem is not only challenging in terms of the statistical significance of the test, but also with regards to the conditioning on which data the evaluation is performed. The stability of the ranking across all methods, on the combined set suggests that rank stability can be a criterion to assess the representativity of the evaluation set. **6 Conclusion** In this paper we introduced a distributional framework for risk assessment and comparison of foundation models based on multi-metric evaluations. Our framework is of interest beyond the current applications presented here by providing statistical significance while ranking assets for decision making. We believe our tools for training models to be risk averse can be of significant use to practitioners and serve as a stepping stone towards solving the AI alignment problem. --- \(^5\)Limited to 3 for computational reasons. REFERENCES Pedro C Alvarez-Esteban, E del Barrio, JA Cuesta-Albertos, and C Matrán. A contamination model for approximate stochastic order: extended version. *arXiv preprint arXiv:1412.1920*, 2014. Brian Belgodere, Pierre Dognin, Adam Ivankay, Igor Melnyk, Youssef Mroueh, Aleksandra Mojsilovic, Jiri Navratil, Apoorva Nitsure, Inkit Padhi, Mattia Rigotti, Jerret Ross, Yair Schiff, Radhika Vedpathak, and Richard A. Young. Auditing and generating synthetic data with controllable trust trade-offs, 2023. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021. Rishi Bommasani, Percy Liang, and Tony Lee. Holistic evaluation of language models. *Annals of the New York Academy of Sciences*, 2023. C.E. Bonferroni. *Teoria statistica delle classi e calcolo delle probabilità*. Pubblicazioni del R. Istituto superiore di scienze economiche e commerciali di Firenze. Seeber, 1936. URL https://books.google.com/books?id=3CY-HQAACAAJ. Wacha Bounliphone, Eugene Belilovsky, Matthew Blaschko, Ioannis Antonoglou, and Arthur Gretton. A test of relative similarity for model selection in generative models. *Proceedings ICLR 2016*, 2016a. Wacha Bounliphone, Eugene Belilovsky, Arthur Tenenhaus, Ioannis Antonoglou, Arthur Gretton, and Matthew B Blashcko. Fast non-parametric tests of relative dependency and similarity. *arXiv preprint arXiv:1611.05740*, 2016b. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. A survey on evaluation of large language models. *arXiv preprint arXiv:2307.03109*, 2023. Jean-Charles de Borda. Mémoire sur les élections au scrutin. 
*Histoire de l'Académie Royale des Sciences*, 1781. deepsig. Deepsignificance. https://github.com/Kaleidophon/deep-significance, 2022. Eustasio Del Barrio, Juan A Cuesta-Albertos, and Carlos Matrán. An optimal transportation approach for assessing almost stochastic order. *The Mathematics of the Uncertain: A Tribute to Pedro Gil*, pp. 33–44, 2018. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. The hitchhiker’s guide to testing statistical significance in natural language processing. In *Proceedings of the 56th annual meeting of the association for computational linguistics (volume 1: Long papers)*, pp. 1383–1392, 2018. Rotem Dror, Segev Shlomov, and Roi Reichart. Deep dominance-how to properly compare deep neural models. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 2773–2785, 2019. B. Efron and R. Tibshirani. An Introduction to the Bootstrap, 1993. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In *Findings*, 2020. URL https://api.semanticscholar.org/CorpusID:221878771. Alexander A Gushchin and Dmitriy A Borzykh. Integrated quantile functions: properties and applications. *Modern Stochastics: Theory and Applications*, 4(4):285–314, 2017. Rishav Hada, Varun Gumma, Adrian de Wynter, Harshita Diddee, Mohamed Ahmed, Monojit Choudhury, Kalika Bali, and Sunayana Sitaram. Are large language model-based evaluators the solution to scaling up multilingual evaluation? *arXiv preprint arXiv:2309.07462*, 2023.
3ROGsTX3IR
Am I right to understand the theory quantitatively predicts the experiments only in the GFL phase, and only holds qualitatively in terms of phenomenology to explain the transition to GMFL-I and II phases?
GROKKING AS A FIRST ORDER PHASE TRANSITION IN TWO LAYER NETWORKS Noa Rubin∗ Inbar Seroussi† Zohar Ringel ‡ ABSTRACT A key property of deep neural networks (DNNs) is their ability to learn new features during training. This intriguing aspect of deep learning stands out most clearly in recently reported Grokking phenomena. While mainly reflected as a sudden increase in test accuracy, Grokking is also believed to be a beyond lazy-learning/Gaussian Process (GP) phenomenon involving feature learning. Here we apply a recent development in the theory of feature learning, the adaptive kernel approach, to two teacher-student models with cubic-polynomial and modular addition teachers. We provide analytical predictions on feature learning and Grokking properties of these models and demonstrate a mapping between Grokking and the theory of phase transitions. We show that after Grokking, the state of the DNN is analogous to the mixed phase following a first-order phase transition. In this mixed phase, the DNN generates useful internal representations of the teacher that are sharply distinct from those before the transition. 1 INTRODUCTION Feature learning is a process wherein useful representations are inferred from the data rather than being engineered. The success of deep learning is often attributed to this process. This is reflected, in part, by the performance gap between actual deep neural networks and their infinite-width Gaussian Process (GP) counterparts (Williams, 1996; Novak et al., 2018; Neal, 1996). It is also key to transfer learning applications (Weiss et al., 2016) and interpretability (Zeiler & Fergus, 2014; Chakraborty et al., 2017). Yet despite its importance, there is no consensus on how to measure, let alone classify, feature learning effects. Several recent results (Li & Sompolinsky, 2021; Ariosto et al., 2022) began shedding light on this matter. One line of work (adaptive kernel approaches) treats the covariance matrices of activation within each layer (kernels) as the key quantities undergoing feature learning. Feature learning would manifest as a deviation of these kernels from those of a random network and their adaptation to the task at hand. While providing a quantification of feature learning in quite generic settings, the equations governing these latent kernels are quite involved and may host a variety of learning phenomena. One such phenomenon (Seroussi et al., 2023), capable of providing a strong performance boost, is that of Gaussian Feature Learning (GFL): A gradual process in which the covariance matrices of neuron pre-activations change during training so as to increase their fluctuations along label/target relevant directions. Remarkably, despite this smooth adaptation, the pre-activations’ fluctuations, across width and training seeds, remain Gaussian. At the same time, the latent kernel itself develops notable spikes in the target direction, indicating feature learning. Another phenomenon often associated with feature learning is Grokking. This abrupt phenomenon, first observed in large language models running on simple mathematical tasks, involves fast changes to the test accuracy following a longer period of constant and poor performance (Power et al., 2022). Though usually described as a time-dependent phenomenon, Grokking also occurs as a function of other parameters, such as sample size (Power et al., 2022; Gromov, 2023; Liu et al., 2022a).
∗Hebrew University, Racah Institute of Physics, Jerusalem, 9190401, Israel †Department of Applied Mathematics, School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel ‡Hebrew University, Racah Institute of Physics, Jerusalem, 9190401, Israel

More broadly, DNNs behaviour as a function of time and dataset size is often similar, as reflected for instance in the use of One Pass SGD \cite{You et al. (2014)}. Several authors provided quantitative explanations in the context of specific toy models wherein one can handcraft or reverse engineer the solution obtained by the network \cite{Gromov (2023), Nanda et al. (2023)} or in suitably tailored perceptron models \cite{Liu et al. (2022a), Zunkovič & Ilievski (2022)} where, however, representation learning is tricky to define. Given the aforementioned adaptive kernel approaches to deep learning, as well as the universality of Grokking across different DNNs and hyperparameters, it is natural to look for a more unifying picture of Grokking using a formalism that applies to generic deep networks. In this work, we study Grokking as an equilibrium (or Bayesian) phenomenon driven by sample size, noise, or network width. Utilizing the aforementioned theoretical advancements, we show that Grokking in large-scale models can be classified and predicted through the mean field theory of phase transitions in physics \cite{Landau & Lifshitz (2013)}. Studying two different models, a teacher-student with cubic teacher and modular algebra arithmetic, we show the internal state of the DNN before Grokking is well described by GFL. In contrast, during Grokking, it is analogous to the mixed phase in the theory of first-order phase transitions, and the statistics of pre-activations are described by a mixture of Gaussians (GMFL). In this GMFL state, the latent kernels associated with the DNNs develop entirely new features that alter their sample complexity compared to standard infinite-width GP limits. After Grokking the weights are all specialized to the teacher. Besides providing a framework to classify feature learning effects, our approach provides analytically tractable and quantitatively accurate predictions for the above two models. Our main results are as follows:
• We establish a concrete mapping between the theory of first-order phase transitions, internal representations of DNNs, and Grokking for two non-linear DNN models each having two tunable layers.
• We identify three phases related to Grokking, one which is smoothly connected to the GP limit and two distinct phases involving different GP mixtures.
• For both our models, we simplify the task of learning high-dimensional representations to solving a non-linear equation involving either two (cubic teacher) or one (modular addition teacher) variables. Moreover, for the latter, we determine the location of the phase transition analytically.
• We flesh out a Grokking-based mechanism that can reduce the sample complexity compared to the GP limit.
Prior works. Phase transitions are ubiquitous in learning theory (e.g. Refs. \cite{Gardner & Derrida (1988), Seung et al. (1992), Györgyi (1990)}), often in the context of replica-symmetry breaking. Connections between Grokking and phase transition were suggested by \cite{Nanda et al. (2023)} but as far as analytic predictions go, prior work mainly focused on one trainable layer \cite{Zunkovič & Ilievski (2022), Arnaboldi et al. (2023)}, some suggesting those as an effective theory of representation learning \cite{Liu et al. (2022a)}.
This can be further investigated by analyzing the loss landscape \cite{Liu et al. (2022b)}. Varma et al. \cite{Varma et al. (2023)} suggest that the generalizing solution learned by the algorithm is more efficient but slower to learn than the memorizing one, using this interpretation they define regimes of semi-grokking, and ungrokking. The formalism of Refs. \cite{Arnaboldi et al. (2023), Saad & Solla (1995)} can potentially be extended to online Grokking with two trainable layers, but would require reducing the large matrices involved. Phase transitions in representation learning have been studied in the context of bifurcation points in the information bottleneck approach (e.g. \cite{Tishby & Zaslavsky (2015)}), nonetheless, the connection to deep learning remains qualitative. To the best of our knowledge, we provide a novel first-principals connection between grokking and phase transition in representation learning. 2 MODELS 2.1 NON-LINEAR TEACHER MODEL Our first setting consists of a student Erf-network learning a single index non-linear teacher. The student is trained on a training set of size $n$, $\mathcal{D} = \{\mathbf{x}_\mu, y(\mathbf{x}_\mu)\}_{\mu=1}^{n}$ with MSE loss. In the following, bold symbol represents a vector. The input vector is $\mathbf{x}_\mu \in \mathbb{R}^d$ with iid Gaussian entries of variance 1. The target function \( y \), is a scalar linear function of \( x \), with an orthogonal non-linear correction. Specifically, \( y \) is given by \[ y(x) = w^* \cdot x + \epsilon \left( (w^* \cdot x)^3 - 3 |w^*|^2 w^* \cdot x \right). \] where \( H_1, H_3 \) are the first two odd Hermite polynomials, and \( w^* \in \mathbb{R}^d \) are the teacher weights. For simplicity we take here the norm of the teacher weights to be 1, but this has no qualitative effect on the theory as long as we require \( |w^*| \sim O(1) \). We consider a fully connected non-linear student network with one hidden layer of width \( N \) given by \[ f(x) = \sum_{i=1}^{N} a_i \text{erf}(w_i \cdot x). \] where \( w_i \in \mathbb{R}^d \) for \( i \in [1, N] \) are the students weights. Evidence that this model Groks can be found in App. D. 2.2 Grokking modular algebra Here we consider the setting of Ref. Gromov (2023), where the learning task is addition modulo \( P \) where \( P \) is prime. The network is trained on the following data set \[ D = \{x_{nm}, y(x_{nm}) | m, n \in \mathbb{Z}_P \} \] where \( x_{nm} \in \mathbb{R}^{2P} \), is a vector such that it is zero in all its coordinates except in the coordinates \( n \) and \( P + m \) where it is 1 (a “two-hot vector”). The target function \( y \in \mathbb{R}^P \) is given by \[ y_p(x_{nm}) = \delta_{p,(n+m) \mod P} - 1/P, \] where \( \delta \) is the Kronecker delta and mod \( P \) denotes the modulo operation which returns the remainder from the division by \( P \). For the student model, we consider a two-layer deep neural network with a square activation function, given by \[ f_p(x_{nm}) = \sum_{i=1}^{N} a_{pi} (w_i \cdot x_{mn})^2 \] where \( w_c \in \mathbb{R}^{2P} \) for \( c \in [1, N] \) are the students weights. For brevity, we denote \( y_p(x_{mn}) = y_{pm}^p \), and \( f_p(x_{nm}) = f_{mn}^p \). 2.3 Training the models In both cases, we consider networks that are trained with MSE loss to equilibrium using Langevin dynamics via algorithms such as Durmus & Moulines (2017), Neal et al. (2011). 
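A minimal, illustrative discretization of the Langevin/weight-decay training of Section 2.3, applied to the cubic-teacher and Erf-student setting of Section 2.1. This is a sketch rather than the paper's training code: the single weight-decay constant, step count, and learning rate are placeholders (the paper ties the per-layer decay to the prior covariances and runs far longer to reach equilibrium), while $\sigma_w^2 = 0.5$, $\sigma_a^2 = 8/N$ and $\epsilon = -0.3$ follow the values quoted in Section 3.1.1; the continuum-time dynamics being discretized are stated next.

```python
import torch

torch.manual_seed(0)
d, N, n, eps = 50, 200, 1000, -0.3   # the paper's run uses d=150, N=700, n=3000
sigma, gamma, lr, steps = 0.3, 1e-3, 1e-3, 2000   # illustrative placeholders

# Cubic single-index teacher of Section 2.1 (with |w*| = 1)
w_star = torch.randn(d); w_star /= w_star.norm()
X = torch.randn(n, d)
h = X @ w_star
y = h + eps * (h ** 3 - 3.0 * w_star.norm() ** 2 * h)

# Erf student with one hidden layer
W = ((0.5 / d) ** 0.5 * torch.randn(N, d)).requires_grad_()  # sigma_w^2 = 0.5
a = ((8.0 / N) ** 0.5 * torch.randn(N)).requires_grad_()     # sigma_a^2 = 8/N

def student(x):
    return torch.erf(x @ W.T) @ a

# Euler-Maruyama step for the Langevin dynamics of Section 2.3:
# gradient of (weight decay + MSE loss) plus noise of amplitude 2*sigma, as written there.
for step in range(steps):
    loss = ((student(X) - y) ** 2).sum()
    reg = 0.5 * gamma * (W.pow(2).sum() + a.pow(2).sum())
    gW, ga = torch.autograd.grad(loss + reg, (W, a))
    with torch.no_grad():
        W += -lr * gW + 2 * sigma * lr ** 0.5 * torch.randn_like(W)
        a += -lr * ga + 2 * sigma * lr ** 0.5 * torch.randn_like(a)

with torch.no_grad():
    # After (much longer) equilibration, the histogram of these overlaps is what
    # the paper analyzes as p(w . w*) in Section 3.1.
    print("largest neuron-teacher overlap:", (W @ w_star).abs().max().item())
```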
The continuum-time dynamics of the parameters are thus \[ \dot{\theta}(t) = -\nabla_\theta \left( \frac{\gamma}{2} \| \theta(t) \|^2 + L(\theta(t), D) \right) + 2\sigma \xi(t) \] where \( \theta(t) \) is the vector of all network parameters in time \( t \), \( \gamma \) is the strength of the weight decay, \( L \) is the loss function, the noise \( \xi \) is given by \( \langle \xi_i(t) \xi_j(t') \rangle = \delta_{ij} \delta(t - t') \) and \( \sigma \) is the magnitude of the noise. We set the weight decay of the output layer so that with no data \( a_i^2, a_{pc}^2 \) both average to \( \sigma_a^2/N \) under the equilibrium ensemble of fully trained networks. The input layer weights are required to have a covariance of \( \sigma_w^2/d \) in the teacher-student model with \( \sigma_w^2 = O(1) \) and a covariance of 1 in the modular algebra model. Note that the covariance of the hidden layer is given by \( \sigma^2/\gamma \). The posterior induced by the above training protocol coincides with that of Bayesian inference with a Gaussian prior on the weights defined by the above covariance and measurement noise \( \sigma^2 \) Naveh et al. (2021). 2.4 Derivation Overview 2.4.1 Brief Introduction to Mean Field Theory Phase Transitions Phase transitions Landau & Lifshitz (2013); Tong (2011), such as the water-vapour transition, are ubiquitous in physics. They are marked by a singular behavior of some average observables as a function of a control parameter (say average density as a function of volume). As the laws of physics are typically smooth, phase transitions are inherently large-scale, or thermodynamic, phenomena loosely analogous to how the sum of many continuous functions may lead to a non-continuous one. In a typical setting, the probability \( p(x) \), of finding the system at a state \( x \) can be approximately marginalized to track a single random variable called an order parameter \( (\Phi) \). As the latter is typically a sum of many variables (i.e. macroscopic), it tends to concentrate yielding a probability of the type \( \log(p(\Phi)) \propto -dS(\Phi) \) where \( d \) is a macroscopic scale (e.g. number of particles) and \( S(\Phi) \) is some well behaved function that does not scale with \( d \). Given this structure, the statistics of \( \Phi \) can be analyzed using saddle point methods, specifically by Taylor expanding to second order around minima of \( S \). Phase transitions occur when two or more global minima of \( S(\Phi) \) appear. First-order phase transitions occur when these minima are distinct before the phase transition and only cross in \( S \) value at the transition. Notably, the effect of such crossing is drastic since, at large \( d \), the observed behaviour (e.g. the average \( \Phi \)) would undergo a discontinuous change. Depending on the setup, such sharp change may be inconsistent as it would immediately change the constraints felt by the system. For instance, in the water-vapour transition, as one lowers volume the pressure on the vapour mounts. At some point, this makes a high-density minima of \( \Phi \) as favourable as the low-density minima, signifying the appearance of water droplets. However, turning all vapor to droplets would create a drop in pressure making it unfavourable to form droplets. Instead, what is then observed is a mixture phase where as a function of the control parameter, a fraction of droplets forms so as to maintain two exactly degenerate minima of \( S \). 
Lowering the volume further, a point is reached where all the vapour has turned into droplets and one is in the liquid phase. In our analysis below, \( \Phi \) will be a property of the weights in each neuron of the input layer, say their overlap with some given teacher weights. A high input dimension will be analogous to the large-scale limit, and the density loosely corresponds to the discrepancy in predictions. The phase transitions are marked by new minima of \( S(\Phi) \) which capture some feature of the teacher network useful in reducing the discrepancy. What we refer to as droplets corresponds to some input neurons attaining \( \Phi \) values corresponding to the teacher feature while others fluctuate around the teacher agnostic minima. However, the spatial notion associated with droplets is not relevant in this case, as in the mean field theory of phase transitions. 2.4.2 Adaptive Kernel Approach and its Extension to Gaussian Mixtures Our main focus here is the posterior distribution of weights in the input layer \( p(w_i) \) and posterior averaged predictions of the network \( f(x) \). Such posteriors are generally intractable for deep and/or non-linear networks. Hence, we turn to the approximation of Ref. Seroussi et al. (2023) where a mean-field decoupling between the read-out layer and the input layer is performed. This is exact in the limit of \( N \to \infty \) and vanishing \( \sigma^2 \) (i.e. mean-field scaling). Following this, the posterior decouples into a product of two probabilities, a Gaussian probability for the read-out layer outputs and a generally non-Gaussian probability for the input layer weights. These two probabilities are coupled via two non-fluctuating quantities: the average kernel induced by the input layer and the discrepancy in predictions. As shown in Seroussi et al. (2023), specifically for a two-layer FCN, the resulting probability further decouples into iid probabilities over each neuron \( p(w_i) \). Below, we thus omit the neuron index \( i \). The action (\(-\log p(w_i)\), up to constant normalization factors) for each neuron’s weights is then given by the following form \[ S[w] = \frac{|w|^2}{2\sigma^2} - \frac{\sigma^2}{2N} \sum_{\mu,\nu} \bar{t}(x_\mu)^T \bar{t}(x_\nu)\, \underbrace{\phi(w \cdot x_\mu)\, \phi(w \cdot x_\nu)}_{:=\, \sigma^{-2} Q_{\mu\nu}} \tag{7} \] where $\phi$ is the activation function and $\bar{t}$ is the discrepancy between the averaged network output and the target given by $$\bar{t}(x_\mu) = (y(x_\mu) - \bar{f}(x_\mu))/\sigma^2.$$ (8) Notably $\bar{t}$ is not given but determined by solving the following two mean-field self-consistency equations $$\bar{f} = Q \left[ Q + \sigma^2 I_n \right]^{-1} y$$ (9) where the kernel $Q$ is defined via $$Q_{\mu\nu} = \sigma_a^2 \langle \phi(w \cdot x_\mu) \phi(w \cdot x_\nu) \rangle_S[w]$$ (10) and $\langle \ldots \rangle_S[w]$ denotes averaging over $w$ with the probability implied by $S[w]$. In this work we use the equivalent kernel (EK) approximation, allowing the sums appearing in eqs. (7,9) to be replaced by integrals. This approximation washes out the generalization phenomena associated with Grokking, while capturing underlying feature learning mechanisms. As demonstrated in App. D, the feature learning effects hold also for finite datasets, in which a generalization gap can be observed. Theoretical corrections due to finite datasets can be made as shown in Cohen et al. (2021), Seroussi et al. (2023), Naveh & Ringel (2021); here we focus on the EK limit for simplicity.
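In practice Eqs. (8)-(10) form a self-consistent loop: the kernel depends on the weight posterior, which depends on the discrepancy, which depends on the kernel. The sketch below is a deliberately crude Monte-Carlo stand-in for that loop, with a simple placeholder target and arbitrary hyperparameters: prior draws of $w$ are importance-reweighted by the data term of Eq. (7). Such naive reweighting quickly becomes unreliable in the regimes of interest, which is precisely why the paper instead evaluates the average analytically via the variational Gaussian and Gaussian-mixture treatments described next.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
n, d, n_mc = 200, 20, 20000           # train size, input dimension, Monte-Carlo draws of w
N, sigma2, sigma_a2, sigma_w2 = 500, 0.5, 1.0, 1.0   # placeholder hyperparameters

w_star = rng.standard_normal(d); w_star /= np.linalg.norm(w_star)
X = rng.standard_normal((n, d))
y = X @ w_star                        # stand-in target, just to exercise the loop

W = rng.standard_normal((n_mc, d)) * np.sqrt(sigma_w2 / d)  # draws from the weight prior
Phi = erf(W @ X.T)                    # phi(w_j . x_mu), shape (n_mc, n)

t_bar = y / sigma2                    # Eq. (8) with f_bar = 0 as the starting point
for it in range(20):
    # Reweight prior draws by the data term of Eq. (7): p(w) ~ prior(w) * exp(+(sigma^2/2N)(sum_mu t_bar_mu phi)^2)
    logw = (sigma2 / (2.0 * N)) * (Phi @ t_bar) ** 2
    wts = np.exp(logw - logw.max())
    Q = sigma_a2 * (Phi.T * wts) @ Phi / wts.sum()           # crude estimate of Eq. (10)
    f_bar = Q @ np.linalg.solve(Q + sigma2 * np.eye(n), y)   # Eq. (9)
    t_bar = (y - f_bar) / sigma2                             # Eq. (8)

print("residual train discrepancy (RMS):", np.linalg.norm(sigma2 * t_bar) / np.sqrt(n))
```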
Even if $\bar{t}$ is given, the remaining action is still non-linear. Ref. Seroussi et al. (2023) proceed by performing a variational Gaussian approximation on that action. Here we extend this approximation into a certain variational Gaussian mixture approximation. Specifically, we show that as one scales up $d, N, n$ in an appropriate manner, and following some decoupling arguments between, $S[w]$ has the form $dS[\Phi(w)]$ where $d \gg 1$ and $S[\Phi(w)]$ has $O(1)$ coefficients and $\Phi(w)$ is some linear combination of the weights. This allows us to treat the integration underlying $\langle \ldots \rangle_S[w]$ within a saddle-point approximation. Notably when more than one Global saddle appears the saddle-point treatment corresponds to $p(w)$ having a Gaussian mixture form. ![Figure 1: Schematic phase diagram of learning phases. The inset plots in the different phases correspond to the normalized negative log posterior ($\tilde{S}$). A transition between the different phases can be achieved by varying either $n$ or $N$.](image) The three phases of learning described in the introduction correspond to the following behavior of the saddles (see also illustration in Fig. 1). At first, a single saddle centered around $\Phi(w) = 0$ exists and weights fluctuate in a Gaussian manner around these minima, as in GFL. In the next phase, the distribution is comprised of a weighted average of this zero-saddle and other $|\Phi(w)| > 0$ saddles. This marks a new learning ability of the network and hence the beginning of Grokking. We name this mixture phase the first Gaussian Mixture Feature Learning phase (GMFL-I). This phase corresponds to the mixed (“droplets”) phase. In this phase, picking a random neuron, there is a finite chance it is in the GFL phase and hence fluctuates around the target agnostic minima at $w = 0$. Similarly, there is a finite chance it fluctuates around one of the non-trivial saddles. At some later point, after more data has been presented, only the saddles with $|w| > 0$ dominate the average (GMFL-II). The phase phenomenology is shared by both models, however, the details of the new saddles that appear, as well as the decoupling scheme between the different components of $w$ differ between both models. We next turn our attention to these details. ## 3 RESULTS ### 3.1 Non-linear Teacher Student model **Scaling setup and effective interaction.** Consider the following two scaling variables $(\alpha, \beta)$ which we would soon take to infinity together and consider scaling up the microscopic parameters in the following manner $$N \rightarrow \beta N, \quad d \rightarrow \sqrt{\beta} d, \quad \sigma_a^2 \rightarrow \sigma_a^2/\sqrt{\beta}, \quad \sigma^2 \rightarrow \frac{\alpha}{\beta} \sigma^2, \quad n \rightarrow \alpha n$$ (11) where we comment that $\alpha$ can be seen as a continuum approximation allowing us to replace data summation with integrals over the data measure and $\beta$ is a combination of mean-field-type scaling \cite{Mei et al. (2018)} together with a thermodynamic/saddle-point limit. Notably the following combination $(u)$ of hyper-parameter $u = \frac{n^2 \sigma_a^2}{\sigma^4 dN}$, which we refer to as the effective interaction, is invariant under both $\alpha$ and $\beta$. **Claim I. 
Two relevant discrepancy modes before the transition.** For $\beta \to \infty$ and $\sqrt{\beta}/\alpha \to 0$, and $u \leq u_c(\epsilon) (\approx 30.2$ for $\epsilon = -0.3$) the discrepancy takes the following form $$\sigma^2 f(x) = bH_1(x) + cH_3(x)$$ (12) where $b,c \in \mathbb{R}$ are some $O(1)$ constant coefficients. We comment that $ub^2$ is proportional to the emergent scale of Ref. \cite{Seroussi et al. (2023)}. For further detail see App. A.2. **Claim II. One-dimensional posterior weight distribution.** In the same scaling limit, $u \leq u_c$, the negative log probability (action in physics terminology) of weights along the teacher direction, decouples from the rest of the $w$ modes, and takes the following form $$S[w \cdot w^*] = d \left( \frac{(w \cdot w^*)^2}{2\sigma_w^2} - \frac{2n^2 \sigma_a^2}{\pi \sigma^4 dN} \frac{(w \cdot w^*)^2}{1 + 2 \left( \sigma_w^2 + (w \cdot w^*)^2 \right)} \left( b - \frac{2c (w \cdot w^*)^2}{1 + 2 \left( \sigma_w^2 + (w \cdot w^*)^2 \right)} \right)^2 \right)$$ (13) Notably, this expression reduces the high-dimensional network posterior into a scalar probability involving only the relevant order parameter ($\Phi = w \cdot w^*$). We further note that all the expressions in the brackets are invariant under $\alpha$ and $\beta$. Heuristically, this will also be the action for a small $\Delta u$ after the phase transition since the resulting corrections to the discrepancy have a parametrically small effect. For further detail see App. A.2. **Claim III. Exactness of Gaussian Mixture Approximation.** In the same scaling limit, the probability described by the above action is exactly a mixture of Gaussians each centred around a global minimum of $S$. **Claim IV. First-order phase transition.** For $u < u_c$ the only saddle is that at $\Phi = 0$. Exactly at $u = u_c$ three saddles appear two of which are roughly at $\Phi = \pm |w_*|^2 = \pm 1$. For some finite interval $u \in [u_c, u_c + \Delta u]$, these saddles stay degenerate in $S$ value. ### 3.1.1 Teacher-Student Experimental Results ![Figure 2](image) **Figure 2:** GFL to GMFL-I Theory and Experiment in the teacher-student model. Panel (a) and (b) show the negative log posterior before and after the phase transition induced by varying the noise $\sigma^2$. A good match with the experimental values (crosses) and theory (solid lines) for rarer and rarer events is obtained as we scale up the model according to Table 1. (colour coding). Turning to network outputs, panel (c) shows the expected phase transition in learning the cubic teacher component and the inset shows the discrepancy in the linear teacher component. As our analytics holds before and at the phase transitions, discrepancies in $f \cdot H_3$ close to the transition are an expected finite-size effect made pronounced by its low absolute value. To validate our theoretical approach, we trained an ensemble of 200 DNNs for different $\alpha = \beta$ values using our Langevin dynamics at sufficiently low learning rates and for a sufficiently long time to ensure equilibration. Our initial \( N, d, \sigma^2, \sigma_a^2 \) (i.e. at \( \alpha = \beta = 1 \)) were: \( N = 700, n = 3000, d = 150, \sigma_w^2 = 0.5, \sigma_a^2 = 8/N \) and we took \( \epsilon = -0.3 \). Our training ensemble consisted of both different initialization seeds and different data-draw seeds. These, together with the neuron indices associated \( N \), provided \( 200N \) draws from \( w_* \cdot w \) used for estimated \( p(w) \). 
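Claim II reduces the per-neuron problem to the scalar action of Eq. (13), which can be probed with a direct scan. The sketch below is only schematic: the discrepancy coefficients $b$ and $c$ of Eq. (12) are fixed by hand as placeholders rather than solved for self-consistently, so the value of $u$ at which the non-trivial minimum takes over has no quantitative meaning; the quoted $u_c \approx 30.2$ (for $\epsilon = -0.3$) only emerges from the full self-consistent treatment of Claim I.

```python
import numpy as np

def action_per_d(phi, u, b, c, sigma_w2=0.5):
    """Bracketed part of Eq. (13); the full per-neuron action is d times this.
    u = n^2 sigma_a^2 / (sigma^4 d N) is the effective interaction; b, c are placeholders."""
    denom = 1.0 + 2.0 * (sigma_w2 + phi ** 2)
    return phi ** 2 / (2.0 * sigma_w2) \
        - (2.0 * u / np.pi) * (phi ** 2 / denom) * (b - 2.0 * c * phi ** 2 / denom) ** 2

phi = np.linspace(0.0, 3.0, 3001)          # phi = w . w*; the action is even in phi
for u in [20.0, 40.0, 50.0, 52.0, 60.0]:
    S = action_per_d(phi, u, b=0.2, c=-0.3)   # placeholder discrepancy coefficients
    i = int(np.argmin(S))
    print(f"u = {u:5.1f}: global minimum at |phi| = {phi[i]:.2f}, S = {S[i]:+.3f}")
# With these placeholders, the global minimizer jumps discontinuously from phi = 0 to a
# finite overlap as u grows -- the first-order structure of Claim IV -- but the numbers
# themselves are not the paper's predictions.
```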
The discrepancy was estimated using a dot product of the outputs with \( H_{1/3}(x) \) sampled over 3000 test points and then averaged over the 50 seeds. Our experimental results are given in Fig. 2. Panel (c) shows how the linear (inset) and cubic target components learned, as a function of \( \sigma^2 \). Notably, reducing \( \sigma^2 \) is similar to increasing \( n \), hence one expects more feature learning at lower \( \sigma^2 \). As we increase \( \alpha = \beta \) (see color coding) a sharp phase transition develops around \( \sigma^2 = 0.18 \) where the cubic component begins its approach to the teacher’s value (\( \epsilon \)). Panels (a) and (b) track \( -\log(p(w \cdot w_*)) \) at two points before and after the phase transition, respectively. As \( \alpha = \beta \) increases, the theory predicts a finite probability for \( \Phi(w) = w \cdot w_* \approx 1 \). This means that picking a random neuron has a finite chance of fluctuating around the teacher-aware minima. Such neurons are what we refer to as our droplets. All graphs show a good agreement with theory as one scales up \( \alpha = \beta \) before the transition. The remaining discrepancies are attributed to finite size (i.e. finite \( \alpha, \beta \)) effects which, as expected, become more noticeable near the transition. The above experiment also points to some potentially powerful complexity aspects of feature learning. Notably, an FCN NNGP kernel induces a uniform prior over cubic polynomials (i.e. all \( l = 3 \) hyper-spherical harmonics). As there are \( n = O(d^3) \) of those, it requires \( n = 0.5e + 6 \) datapoints to learn such target components in our setting (see also Ref. Cohen et al., 2021). Here this occurs two orders of magnitudes earlier (\( n = 3000 \)). This occurs because the complex prior induced by a finite DNN learns the features from the readily accessible linear components of the target and applies them to the non-linear ones. Proving that such “assisted learning” of complex features changes the sample complexity compared to NNGPs requires further work. In App. A.1 we provide an analytical argument in support of this. ### Table 1: Scaling laws | Model | Width | Input-dimension | Data size | Noise strength | Weight decay | |------------------------|-----------|-----------------|-----------|----------------|--------------| | Polynomial regression | \( N \rightarrow \beta N \) | \( d \rightarrow \sqrt{\beta}d \) | \( n \rightarrow \alpha n \) | \( \sigma^2 \rightarrow \frac{\alpha}{\beta} \sigma^2 \) | \( \sigma_a^2 \rightarrow \frac{\sigma_a^2}{\sqrt{\beta}} \) | | Modular Theory | \( N \rightarrow \beta^2 N \) | \( P \rightarrow \sqrt{\beta}P \) | \( P^2 \rightarrow \beta P^2 \) | \( \sigma^2 \rightarrow \frac{\sigma^2}{\beta} \) | \( \sigma_a^2 \rightarrow \frac{\sigma_a^2}{\beta} \) | ### 3.2 Modular Algebra Theory #### Scaling setup and effective interaction. Similar to the polynomial regression problem, we consider a scaling variable (\( \beta \)) which we later take to infinity together, and consider scaling up the microscopic parameters. The precise scaling is given in Table 1: \[ N \rightarrow \beta^2 N \quad P \rightarrow \sqrt{\beta}P \quad \sigma_a^2 \rightarrow \frac{\sigma_a^2}{\beta} \quad \sigma^2 \rightarrow \frac{\sigma^2}{\beta} \] Note that, here, we do not need additional scale for the continuum limit of the dataset, since the continuum limit is taken by considering all combinations of data points. 
As before, \( \beta \) is a combination of mean-field scaling together with a thermodynamic/saddle-point limit. Notably, the following combination (\( u \)) of hyperparameter \( u = \frac{2\sigma^2 P^2}{N \sigma^4} \), which we refer to as the effective interaction, is invariant under \( \beta \). #### Problem Symmetries The following symmetries of \( S[w] \) (which is also a function of \( t \)) help us decouple the posterior probability distribution into its Fourier modes and simplify the problem considerably: I. Taking \([n, m] \rightarrow [(n + q) \mod P, (m + q') \mod P]\), and \( f_p \rightarrow f_{(p+q+q')} \mod P \) with \( q, q' \in \mathbb{Z}_P \) II. Taking \([n, m] \rightarrow [qn \mod P, qm \mod P]\) and \( f_p \rightarrow f_{qp \mod P} \) for \( q \in \mathbb{Z}_P \) but different than zero. **Claim I. Single discrepancy mode.** Several outcomes of these symmetries are shown in App. (B.1). First, we find that the adaptive GP kernel \( Q \) (given explicitly in Eq. 21) is diagonal in the basis \( \phi_{k,k'}(x_{n,m}) = P^{-1} e^{2\pi i (kn+k'm)/P} \), where \( k, k' \in \{0, 1, ..., P-1\} \). Considering eigenvalues, the second symmetry implies that \( \phi_{k,k'} \) would be degenerate with \( \phi_{ck,c k'} \). For prime, \( P \) this implies, in particular, that all \( \phi_{k,k} \) eigenvectors with \( k > 0 \) have the same eigenvalue (\( \lambda \)). Notably, the target itself is spanned by this degenerate subspace specifically \[ y^p_{nm} = P^{-1} \sum_{k=1}^{P-1} e^{-i2\pi kp/P} e^{i2\pi k(n+m)/P} = \sum_{k=1}^{P-1} e^{-i2\pi kp/P} \phi_{k,k}(x_{n,m}) \] (15) As a result, one finds that the target is always an eigenvector of the kernel and \( \sigma^2 T^p_{mn} = ay^p_{nm} \) where \( a \in \mathbb{R} \). Thus there is only one mode of the discrepancy which is aligned with the target. **Claim II. Decoupled two-dimensional posterior weight distribution for each Fourier mode.** We decouple all the different fluctuating modes by utilizing again the symmetries of the problem and making a judicial choice of the non-linear weight decay term (\( \Gamma \)). To this end, we define the following Fourier transformed weight variables (\( w_k, v_k \)): \( w_n = \sum_{k=0}^{P-1} w_k e^{-2\pi i kn/P} \), \( w_m = \sum_{k=0}^{P-1} v_k e^{-2\pi i km/P} \) which when placed into action yields \[ S[\hat{w}] = P \left[ \frac{1}{2} \sum_{k=0}^{P-1} w_k w_{-k} + \frac{1}{2} \sum_{k=0}^{P-1} v_k v_{-k} - \frac{2\sigma_a^2 a^2 P^2}{N\sigma^2} \sum_{k=1}^{P-1} w_k w_{-k} v_k v_{-k} \right] + \Gamma[w] \] (16) (see App. B.2) where apart from the non-linear weight-decay term, all different \( k \) modes have been decoupled. For simplicity, we next choose, \( \Gamma[w] = \sum_k P \frac{\gamma}{6} \left( (w_k w_{-k})^3 + (v_k v_{-k})^3 \right) \). Here there are technically two order parameters- \( \Phi = w_k w_{-k}, \Psi = v_k v_{-k} \), but from the symmetry of the action, we obtain that the saddles occur only at points where \( \Phi = \Psi \). We comment that the analysis of a more natural weight decay terms such as \( \sum_n w_n^6 + u_m^6 \), using a certain GP mixture ansatz of \( p(w_i) \) as an approximation, we obtained similar qualitative results. **Claim III. Exactness of Gaussian Mixture Approximation.** Following the presence of a large factor of \( P \) in front of the action, the above non-linear action, per \( k \)-mode, can be analyzed using standard saddle point treatment. 
Namely, treating \( Q \) as a function of \( a \), and with it \( \lambda \), can be evaluated through a saddle-point approximate on the probability associated with this action. In the limit of \( \beta \to \infty \), we obtain that this approximation is exact. Following, this \( a \) can be calculated using the GPR expression \( -\frac{\sigma^2}{\lambda + \sigma^2} \), and in this limit the value of \( \lambda \) can be computed exactly. Demanding this latter value of \( a \) matches the one in the action results in an equation for \( a \). **Claim IV. First-order phase transition.** Since the quadratic term is constant in the scaled action, as long as no other saddles become degenerate (in action value) with the \( \Phi = \Psi = 0 \) saddle, the saddle-point treatment truncates the action at this first term. By increasing \( u \) the quartic term becomes more negative, and hence a first-order symmetry-breaking transition must occur at some critical value of \( u \). Past this point, \( a \) begins to diminish. If it will diminish too rapidly, the feedback on the action will be such that it is no longer preferential for \( a \) to diminish, and thus a probability distribution will have two degenerate minima at a zero and non-zero value of \( \Phi \) representing the GMFL-I phase. Further increasing \( u \) will break the degeneracy resulting in the global minimum of the log posterior distribution being non-trivial. Notably \( a \) measures the test-RMSE here, thus the fact that it remains constant and suddenly begins to diminish can also be understood as Grokking. ### 3.2.1 Modular Algebra Numerical Simulations Solving the implied equation for \( a \) numerically yields the full phase diagram here (see App. B.3 for technical details of solution) supporting the assumptions in Claim IV. Fig. 3 plots the negative-log-probability of weights for an arbitrary \( k \) taking \( \Phi = \Psi \) for simplicity, since at the global minima this is anyways true. Here we increase \( u \) by decreasing \( \sigma^2 \) and find the action by solving the equation for \( a(\sigma^2) \) numerically. The \( \Phi \neq 0 \) saddles are exponentially suppressed in \( P \), yet nonetheless become more probable as \( \sigma^2 \) decreases. At around, \( \sigma^2 \approx 0.227 \) they come within \( O(1) \) of the saddle at \( S_k(\Phi = 0, \Psi = 0) \) (\( O(1/P) \) in the plot, given the scaled y-axis). This marks the beginning of the mixed phase (GMFL-I), wherein all action minima contribute in a non-negligible manner. Further, in this phase, decreasing \( \sigma^2 \) does not change the height of the \( \Phi \neq 0 \) minima (see inset) in any appreciable manner. Had we zoomed in further, we would see a very minor change to these saddle’s height throughout the mixed phase, as they go from being $O(1)$ above the $\Phi = 0$ saddle to $O(1)$ below that saddle at $\sigma^2 \approx 0.175$, this is shown in the inset graph in Fig. (3). This latter point marks the beginning of the GMFL-II phase, where it is the contribution of the minimum at $\Phi = 0$ which becomes exponentially suppressed in $P$. Notably $a$, which measures here the test-RSME, goes from $-1$ at the beginning of GMFL-I to $-0.7$ at its end. Over this small interval of $\sigma^2$ we observe a 30% reduction in the magnitude of the discrepancy which can be thought of as a manifestation of Grokking. Our analytical results are consistent with the experiments carried out in Ref. Gromov (2023). 
Indeed, as we enter GMFL-I, weights sampled near the $\Phi, \Psi \neq 0$ saddle, correspond to the cosine expressions for $W_k^{(1)}$ of that work. As our formalism marginalizes over readout layer weights, the phase constraint suggested in their Eq. (12) becomes irrelevant, and both viewpoints retrieve the freedom of choosing cosine phases. In our case, this stems from the $U(1) \times U(1)$ complex-phase freedom in our choice of saddles at $\Phi, \Psi > 0$. ![Figure 3: GFL to GMFL-I to GMFL-II](image) (a) Probability distribution of weights as predicted by our approach. The GFL phase is represented by the red graphs, where the minimum of the action at zero (shared also by the GP limit) dominates the probability distribution. The GMFL-I phase can be seen in the inset graph, and the final GMFL-II phase is shown in purple. (b) The target component of the average network output. Here singularities can be observed at the $\sigma^2$ values where the phase transitions occur. The Parameters taken in this calculation are: $N = 1000, P = 401, \sigma^2 = 0.002/N, \gamma = 0.0001$ 4 DISCUSSION In this work, we studied two different models exhibiting forms of Grokking and representation/feature learning. We argue that Grokking is a result of a first order phase transition. To analyze this analytically, we extended the approach of Seroussi et al. (2023) to include mixtures of Gaussian. The resulting framework led to concrete analytical predictions for rich feature learning effects exposing, in particular, several phases of learning (GFL, GMFL-I, GMFL-II) in the thermodynamic/large-scale limit. Our results also suggest that feature learning in finite FCNs with mean-field scaling can change the sample complexity compared to the associated NNGP. Certainly, these describe very different behavior compared to the recently explored kernel-scaling Li & Sompolinsky (2021); Ariosto et al. (2022) approach, wherein feature learning amounts to a multiplicative factor in front of the output kernel. A potential source of difference here is their use of standard scaling, however, this remains to be explored. As our results utilize a rather general formalism Seroussi et al. (2023), we believe they generalize to deep networks and varying architecture. As such, they invite further examination of feature learning in the wild from the prism of the latent kernel adaptation. Such efforts may provide, for instance, potential measures of when a model is close to Grokking by tracking outliers in the weight or pre-activation distributions along dominant kernel eigenvectors. As latent kernels essentially provide a spectral decomposition of neuron variance, they may help place empirical observations on neuron sensitivity and interpretability Zeiler & Fergus (2014) on firmer analytical grounds. Finally, they suggest novel ways of pruning and regulating networks by removing low-lying latent kernel eigenvalues from internal representations. REFERENCES S Ariosto, R Pacelli, M Pastore, F Ginelli, M Gherardi, and P Rotondo. Statistical mechanics of deep learning beyond the infinite-width limit. *arXiv preprint arXiv:2209.04882*, 2022. Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, and Bruno Loureiro. From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks. *arXiv e-prints*, art. arXiv:2302.05882, February 2023. doi: 10.48550/arXiv.2302.05882. Gerard Ben Arous, Reza Gheissari, and Aukosh Jagannath. 
Online stochastic gradient descent on non-convex losses from high-dimensional inference. *The Journal of Machine Learning Research*, 22(1):4788–4838, 2021. Alberto Bietti, Joan Bruna, Clayton Sanford, and Min Jae Song. Learning single-index models with shallow neural networks. *Advances in Neural Information Processing Systems*, 35:9768–9783, 2022. Blake Bordelon and Cengiz Pehlevan. Self-consistent dynamical field theory of kernel evolution in wide neural networks. *Advances in Neural Information Processing Systems*, 35:32240–32256, 2022. Supriyo Chakraborty, Richard Tomsett, Ramya Raghavendra, Daniel Harborne, Moustafa Alzantot, Federico Cerutti, Mani Srivastava, Alun Preece, Simon Julier, Raghuveer M. Rao, Troy D. Kelley, Dave Braines, Murat Sensoy, Christopher J. Willis, and Prudhvi Gurram. Interpretability of deep learning models: A survey of results. In *2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)*, pp. 1–6, 2017. doi: 10.1109/UIC-ATC.2017.8397411. Omry Cohen, Or Malka, and Zohar Ringel. Learning curves for overparametrized deep neural networks: A field theory perspective. *Physical Review Research*, 3(2):023034, 2021. Alain Durmus and Eric Moulines. Nonasymptotic convergence analysis for the unadjusted langevin algorithm. *The Annals of Applied Probability*, 27(3):1551–1587, 2017. David Gamarnik, Eren C Kızıldağ, and Ilias Zadik. Stationary points of shallow neural networks with quadratic activation function. *arXiv preprint arXiv:1912.01599*, 2019. E. Gardner and B. Derrida. Optimal storage properties of neural network models. *Journal of Physics A Mathematical General*, 21:271–284, January 1988. doi: 10.1088/0305-4470/21/1/031. Andrey Gromov. Grokking modular arithmetic. *arXiv preprint arXiv:2301.02679*, 2023. Géza Györgyi. First-order transition to perfect generalization in a neural network with binary synapses. *Phys. Rev. A*, 41:7097–7100, Jun 1990. doi: 10.1103/PhysRevA.41.7097. URL https://link.aps.org/doi/10.1103/PhysRevA.41.7097. Lev Davidovich Landau and Evgenii Mikhailovich Lifshitz. *Statistical Physics: Volume 5*, volume 5. Elsevier, 2013. Qianyi Li and Haim Sompolinsky. Statistical mechanics of deep linear neural networks: The back-propagating kernel renormalization. *Phys. Rev. X*, 11:031059, Sep 2021. doi: 10.1103/PhysRevX.11.031059. URL https://link.aps.org/doi/10.1103/PhysRevX.11.031059. Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J Michaud, Max Tegmark, and Mike Williams. Towards understanding grokking: An effective theory of representation learning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022a. URL https://openreview.net/forum?id=6at6rB3IZm. Ziming Liu, Eric J Michaud, and Max Tegmark. Omnigrok: Grokking beyond algorithmic data. *arXiv preprint arXiv:2210.01117*, 2022b.
KqbCvIFBY7
The illustrated plots in Figure 1 are a bit misleading to me. Why are the initial points concentrated in one mode? In a high-dimensional setting, the chance of sampling close-by Gaussian points is low.
Particle Guidance: non-I.I.D. Diverse Sampling with Diffusion Models Gabriele Corso∗1, Yilun Xu1, Valentin de Bortoli2, Regina Barzilay1, Tommi Jaakkola1 1CSAIL, Massachusetts Institute of Technology, 2ENS, PSL University Abstract In light of the widespread success of generative models, a significant amount of research has gone into speeding up their sampling time. However, generative models are often sampled multiple times to obtain a diverse set incurring a cost that is orthogonal to sampling time. We tackle the question of how to improve diversity and sample efficiency by moving beyond the common assumption of independent samples. We propose particle guidance, an extension of diffusion-based generative sampling where a joint-particle time-evolving potential enforces diversity. We analyze theoretically the joint distribution that particle guidance generates, how to learn a potential that achieves optimal diversity, and the connections with methods in other disciplines. Empirically, we test the framework both in the setting of conditional image generation, where we are able to increase diversity without affecting quality, and molecular conformer generation, where we reduce the state-of-the-art median error by 13% on average. 1 Introduction Deep generative modeling has become pervasive in many computational tasks across computer vision, natural language processing, physical sciences, and beyond. In many applications, these models are used to take a number of representative samples of some distribution of interest like Van Gogh’s style paintings or the 3D conformers of a small molecule. Although independent samples drawn from a distribution will perfectly represent it in the limit of infinite samples, for a finite number, this may not be the optimal strategy. Therefore, while deep learning methods have so far largely focused on the task of taking independent identically distributed (I.I.D.) samples from some distribution, this paper examines how one can use deep generative models to take a finite number of samples that can better represent the distribution of interest. In other fields where finite-samples approximations are critical, researchers have developed various techniques to tackle this challenge. In molecular simulations, several enhanced sampling methods, like metadynamics and replica exchange, have been proposed to sample diverse sets of low-energy structures and estimate free energies. In statistics, Stein Variational Gradient Descent (SVGD) is an iterative technique to match a distribution with a finite set of particles. However, these methods are not able to efficiently sample complex distributions like images. Towards the goal of better finite-samples generative models, that combine the power of recent advances with sample efficiency, we propose a general framework for sampling sets of particles using diffusion models. This framework, which we call particle guidance (PG), is based on the use of a time-evolving potential to guide the inference process. We present two different strategies to instantiate this new framework: the first, fixed potential particle guidance, provides ready-to-use potentials that require no further training and have little inference overhead; the second, learned potential particle guidance, requires a training process but offers better control and theoretical guarantees. The theoretical analysis of the framework leads us to two key results. 
On one hand, we obtain an expression for the joint marginal distribution of the sampled process when using any arbitrary guidance potential. On the other, we derive a simple objective one can use to train a model to learn a time-evolving potential that exactly samples from a joint distribution of interest. We show this provides the optimal joint distribution given some diversity constraint, and that it can be adapted to the addition of further constraints such as the preservation of marginal distributions. Further, we also demonstrate the relations of particle guidance to techniques for non-I.I.D. sampling developed in other fields and natural processes and discuss its advantages.

∗Correspondence to gcorso@mit.edu

Figure 1: Comparison of I.I.D. and particle guidance sampling. I.I.D. sampling: \( dx = \left[ -f + g^2 \nabla_x \log p_t(x) \right] dt + g\, dw \); particle guidance sampling: \( dx_i = \left[ -f + g^2 \left( \nabla_{x_i} \log p_t(x_i) + \nabla_{x_i} \log \Phi_t(x_1, \ldots, x_n) \right) \right] dt + g\, dw \). The center figure represents each step, with the distribution in pink and the samples as yellow crosses, where particle guidance uses not only the score (in blue) but also the guidance from the joint potential (red), leading it to discover different modes (right-hand samples vs. those on the left). At the bottom, Van Gogh cafe image samples generated with Stable Diffusion with and without particle guidance. A more detailed discussion on the suboptimality of I.I.D. sampling is presented in Appendix B.1.

Empirically, we demonstrate the effectiveness of the method in both synthetic experiments and two of the most successful applications of diffusion models: text-to-image generation and molecular conformer generation. In the former, we show that particle guidance can improve the diversity of the samples generated with Stable Diffusion [Rombach et al., 2021] while maintaining a quality comparable to that of I.I.D. sampling. For molecular conformer generation, applied to the state-of-the-art method Torsional Diffusion [Jing et al., 2022], particle guidance is able to simultaneously improve precision and coverage, reducing their median error by respectively 19% and 8%. In all settings, we also study the critical effect that different potentials can have on the diversity and sample quality.

2 BACKGROUND

Diffusion models Let \( p(x) \) be the data distribution we are interested in learning. Diffusion models [Song et al., 2021] define a forward diffusion process that has \( p \) as the initial distribution and is described by \[ dx = f(x, t) dt + g(t) dw, \] where \( w \) is the Wiener process. This forward diffusion process is then reversed using the corresponding reverse diffusion SDE \[ dx = \left[ -f(x, T-t) + g(T-t)^2 \nabla_x \log p_{T-t}(x) \right] dt + g(T-t) dw \] (using the forward time convention), where the evolving score \( \nabla_x \log p_t(x) \) is approximated with a learned function \( s_\theta(x, t) \). One key advantage of diffusion models over the broad class of energy-based models [Teh et al., 2003] is their finite-time sampling property for taking a single sample. Intuitively, by using a set of smoothed-out probability distributions, diffusion models are able to overcome energy barriers and sample every mode in finite time, as guaranteed by the existence of the reverse diffusion SDE [Anderson, 1982]. In general, for the same order of discretization error, the reverse diffusion SDE can efficiently sample from the data distribution in far fewer steps than Langevin dynamics in energy-based models. For instance, Theorem 1 of Chen et al.
[2022] shows that, assuming accurate learning of score, the convergence of diffusion SDE is independent of the isoperimetry constant of the target distribution. Langevin dynamics mixing speed can be exponentially slow if the spectral gap/isoperimetry constant is small. This critical property is orthogonal to the efficiency in the number of samples one needs to generate to cover a distribution; in this work, we aim to achieve sample efficiency while preserving the finite-time sampling of diffusion models. Diffusion models were extended to Riemannian manifolds by De Bortoli et al. [2022], this formulation has found particular success [Jing et al., 2022; Corso et al., 2022; Yim et al., 2023] in scientific domains where data distributions often lie close to predefined submanifolds [Corso, 2023]. Classifier guidance (CG) [Dhariwal & Nichol, 2021] has been another technical development that has enabled the success of diffusion models on conditional image generation. Here a classifier \( p_\theta(y|x_t, t) \), trained to predict the probability of \( x_t \) being obtained from a sample of class \( y \), is used to guide a conditional generation of class \( y \) following: \[ dx = [-f(x, t') + g(t')^2(s_\theta(x, t') + \alpha \nabla_x \log p_\theta(y|x, t'))]dt + g(t')dw \quad \text{where } t' = T - t \] where \( \alpha \) in theory should be 1, but, due to overspreading of the distribution, researchers often set it to larger values. This, however, often causes the collapse of the generation to a single or few modes, hurting the samples’ diversity. ### 3 Particle Guidance Our goal is to define a sampling process that promotes the diversity of a finite number of samples while retaining the advantages and flexibility that characterize diffusion models. Let \( p(x) \) be some probability distribution of interest and \( \nabla_x \log p_t(x) \) be the score that we have learned to reverse the diffusion process \( dx = f(x, t)dt + g(t)dw \). Similarly to how classifier guidance is applied, we modify the reverse diffusion process by adding the gradient of a potential. However, we are now sampling together a whole set of particles \( x_1, ..., x_n \), and the potential \( \log \Phi_t \) is not only a function of the current point but a permutation invariant function of the whole set: \[ dx_i = \left[ -f(x_i, t') + g^2(t') \left( \nabla_{x_i} \log p_{t'}(x_i) + \nabla_{x_i} \log \Phi_{t'}(x_1, ..., x_n) \right) \right] dt + g(t')dw. \tag{1} \] where the points are initially sampled I.I.D. from a prior distribution \( p_T \). We call this idea particle guidance (PG). This framework allows one to impose different properties, such as diversity, on the set of particles being sampled without the need to retrain a new score model operating directly on the space of sets. We will present and study two different instantiations of this framework: 1. **Fixed Potential PG** where the time-evolving joint potential is handcrafted, leading to very efficient sampling of diverse sets without the need for any additional training. We present this instantiation in Section 5 and show its effectiveness on critical real-world applications of diffusion models in Section 6. 2. **Learned Potential PG** where we learn the time-evolving joint potential to provably optimal joint distributions. Further, this enables direct control of important properties such as the preservation of marginal distributions. We present this instantiation in Section 7. 
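To make Eq. (1) concrete, the following is a minimal PyTorch sketch (not the released implementation) of one Euler-Maruyama step of the particle guidance reverse SDE; `f`, `g`, `score`, and `log_Phi` are placeholder callables for the drift, diffusion coefficient, learned score, and joint potential, and the time conventions are simplified relative to Eq. (1).

```python
import torch

def particle_guidance_step(x, t, dt, f, g, score, log_Phi):
    """One Euler-Maruyama step of the particle guidance reverse SDE (cf. Eq. 1).

    x has shape (n, ...) and holds the n jointly evolved particles;
    score(x, t) approximates the per-particle score grad log p_t;
    log_Phi(x, t) returns the scalar, permutation-invariant joint potential.
    All callables here are placeholders for a concrete model and kernel.
    """
    x = x.detach().requires_grad_(True)
    joint = log_Phi(x, t)                      # scalar over the whole particle set
    guide = torch.autograd.grad(joint, x)[0]   # grad_{x_i} log Phi_t(x_1, ..., x_n)
    drift = -f(x, t) + g(t) ** 2 * (score(x, t) + guide)
    x_next = x + drift * dt + g(t) * (dt ** 0.5) * torch.randn_like(x)
    return x_next.detach()
```

The fixed-potential instantiation of Section 5 corresponds to choosing `log_Phi` as a negative weighted sum of pairwise similarity kernels, while the learned-potential instantiation of Section 7 replaces it with a trained network.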
### 4 Connections with Existing Methods As discussed in the introduction, other fields have developed methods to improve the tradeoff between sampling cost and coverage of the distribution of interest. In this section, we will briefly introduce four methods (coupled replicas, metadynamics, SVGD and electrostatics) and draw connections with particle guidance. #### 4.1 Coupled Replicas and Metadynamics In many domains linked to biochemistry and material science, researchers study the properties of the physical systems by collecting several samples from their Boltzmann distributions using molecular dynamics or other enhanced sampling methods. Motivated by the significant cost that sampling each individual structure requires, researchers have developed a range of techniques to improve sample efficiency and speed by, for example, reducing the correlation of subsequent samples from slow-mixing Markov chains. The most popular of these techniques are parallel sampling with coupled replicas and sequential sampling with metadynamics. As the name suggests, replica methods involve directly taking \( n \) samples of a system with the different sampling processes, replicas, occurring in parallel. In particular, coupled replica methods create a dependency between the replicas by adding, like particle guidance, an extra potential $\Phi$ to the energy function to enforce diversity or match experimental observables. This results in energy-based sampling procedures that target: $$\tilde{p}(x_1, \ldots, x_n) = \Phi(x_1, \ldots, x_n) \prod_{i=1}^{n} p(x_i).$$ Metadynamics [Laio & Parrinello, 2002; Barducci et al., 2008] was also developed to more efficiently sample the Boltzmann distribution of a given system. Unlike replica methods and our approach, metadynamics is a sequential sampling technique where new samples are taken based on previously taken ones to ensure diversity, typically across certain collective variables of interest $s(x)$. In its original formulation, the Hamiltonian at the $k$th sample is augmented with a potential as: $$\tilde{H}_k = H - \omega \sum_{j<k} \exp \left( - \frac{\|s(x) - s(x^0_j)\|^2}{2\sigma^2} \right)$$ where $H$ is the original Hamiltonian, $x^0_j$ are the previously sampled elements and $\omega$ and $\sigma$ parameters set a priori. Once we take the gradient and perform Langevin dynamics to sample, we obtain dynamics that, with the exception of the fixed Hamiltonian, resemble those of particle guidance in Eq. 4 where $$\nabla_x \log \Phi_t(x_1, \ldots, x_n) \leftarrow \nabla_x \omega \sum_{j<i} \exp \left( - \frac{\|s(x_i) - s(x^0_j)\|^2}{2\sigma^2} \right).$$ Although they differ in their parallel or sequential approach, both coupled replicas and metadynamics can be broadly classified as energy-based generative models. As seen here, energy-based models offer a simple way of controlling the joint distribution one converges to by simply adding a potential to the energy function. On the other hand, however, the methods typically employ an MCMC sampling procedure, which lacks the critical finite-time sampling property of diffusion models and significantly struggles to cover complex probability distributions such as those of larger molecules and biomolecular complexes. Additionally, the MCMC typically necessitates a substantial number of steps, generally proportional to a polynomial of the data dimension [Chewi et al., 2020]. With particle guidance, we instead aim to achieve both properties (controllable diversity and finite time sampling) at the same time. 
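For concreteness, the Gaussian bias appearing in Eq. (3), and hence the quantity substituted for $\nabla_x \log \Phi_t$ above, can be sketched as follows; this is an illustrative NumPy sketch that takes the collective variable $s(x) = x$ for simplicity, rather than a metadynamics implementation.

```python
import numpy as np

def metadynamics_bias_grad(x, previous, omega=1.0, sigma=1.0):
    """Gradient w.r.t. x of omega * sum_j exp(-||x - x_j||^2 / (2 sigma^2)).

    `previous` holds the earlier samples x_j; the collective variable is taken
    to be the identity map s(x) = x, a simplifying assumption for this sketch.
    """
    grad = np.zeros_like(x)
    for xj in previous:
        diff = x - xj
        grad += -(omega / sigma**2) * np.exp(-diff @ diff / (2 * sigma**2)) * diff
    return grad
```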
We can simulate the associated SDE/ODE with a total number of steps that is independent of the data dimension. ### 4.2 SVGD Stein Variational Gradient Descent (SVGD) [Liu & Wang, 2016] is a well-established method in the variational inference community to iteratively transport a set of particles to match a target distribution. Given a set of initial particles $\{x^0_1, \ldots, x^0_n\}$, it updates them at every iteration as: $$x^{\ell-1}_i \leftarrow x^\ell_i + \epsilon_\ell \psi(x^\ell_i) \quad \text{where} \quad \psi(x) = \frac{1}{n-1} \sum_{j=1}^{n} [k(x^\ell_j, x) \nabla_{x^\ell_j} \log p(x^\ell_j) + \nabla_{x^\ell_j} k(x^\ell_j, x)]$$ where $k$ is some (similarity) kernel and $\epsilon_\ell$ the step size. Although SVGD was developed with the intent of sampling a set of particles that approximate some distribution $p$ without the direct goal of obtaining diverse samples, SVGD and our method have a close relation. This relation between our method and SVGD can be best illustrated under specific choices for drift and potential under which the probability flow ODE discretization of particle guidance can be approximated as (derivation in Appendix A.5): $$x^{t+\Delta t}_i \approx x^t_i + \epsilon_t(x_i) \psi_t(x^t_i) \quad \text{where} \quad \psi(x) = \frac{1}{n-1} \sum_{j=1}^{n} [k_t(x^t_j, x) \nabla_x \log p_t(x) + \nabla_{x^t_j} k_t(x^t_j, x)]$$ Comparing this with Eq. 2, we can see a clear relation in the form of the two methods, with some key distinctions. Apart from the different constants, the two methods use different terms for the total score component. Interestingly both methods use smoothed-out scores, however, on the one hand, particle guidance uses the diffused score at the specific particle $x_i$, $\nabla_{x_i} \log p_t(x_i)$, while on the other, SVGD smoothes it out by taking a weighted average of the score of nearby particles weighted by the similarity kernel $(\sum_j k(x_i, x_j) \nabla_{x_j} \log p(x_j)) / (\sum_j k(x_i, x_j))$. The reliance of SVGD on other particles for the “smoothing of the score”, however, causes two related problems, firstly, it does not have the finite-time sampling guarantee that the time evolution of diffusion models provides and, secondly, it suffers from the collapse to few local modes near the initialization and cannot discover isolated modes in data distribution [Wenliang & Kanagawa, 2020]. This challenge has been theoretically [Zhuo et al., 2018] and empirically [Zhang et al., 2020] studied with several works proposing practical solutions. In particular, relevant works use an annealing schedule to enhance exploration [D’Angelo & Fortuin, 2021] or use score matching to obtain a noise-conditioned kernel for SVGD [Chang et al., 2020]. Additionally, we empirically observe that the score smoothing in SVGD results in blurry samples in image generation. 4.3 ELECTROSTATICS Recent works [Xu et al., 2022; 2023b] have shown promise in devising novel generative models inspired by the evolution of point charges in high-dimensional electric fields defined by the data distribution. It becomes natural therefore to ask whether particle guidance could be seen as describing the evolution of point charges when these are put in the same electric field such that they are not only attracted by the data distribution but also repel one another. 
One can show that this evolution can indeed be seen as the combination of Poisson Flow Generative Models with particle guidance, where the similarity kernel is the extension of Green’s function in $N+1$-dimensional space, i.e., $k(x, y) \propto 1/||x - y||^{N-1}$. We defer more details to Appendix A.6. 5 FIXED POTENTIAL PARTICLE GUIDANCE In this section, we will present and study a simple, yet effective, instantiation of particle guidance based on the definition of the time-evolving potential as a combination of predefined kernels. As we will see in the experiments in Section 6, this leads to significant sample efficiency improvements with no additional training required and little inference overhead. To promote diversity and sample efficiency, in our experiments, we choose the potential $\log \Phi_t$ to be the negative of the sum of a pairwise similarity kernel $k$ between each pair of particles $\log \Phi_t(x_1, ..., x_n) = -\frac{\alpha_t}{2} \sum_{i,j} k_t(x_i, x_j)$ obtaining: $$dx_i = \left[ -f(x_i, t') + g^2(t') \left( \nabla_{x_i} \log p_{t'}(x_i) - \alpha_t \nabla_{x_i} \sum_{j=1}^{n} k_t(x_i, x_j) \right) \right] dt + g(t') dw$$ (4) Intuitively, the kernel term will push our different samples to be dissimilar from one another while at the same time the score term will try to match our distribution. Critically, this does not come at a significant additional runtime as, in most domains, the cost of running the pairwise similarity kernels is very small compared to the execution of the large score network architecture. Moreover, it allows the use of domain-specific similarity kernels and does not require training any additional classifier or score model. We can also view the particle guidance Eq. (4) as a sum of reverse-time SDE and a guidance term. Thus, to attain a more expedited generation speed, the reverse-time SDE can also be substituted with the probability flow ODE [Song et al., 2021]. 5.1 THEORETICAL ANALYSIS To understand the effect that particle guidance has beyond simple intuition, we study the joint distribution of sets of particles generated by the proposed reverse diffusion. However, unlike methods related to energy-based models (see coupled replicas, metadynamics, SVGD in Sec. 4) analyzing the effect of the addition of a time-evolving potential $\log \Phi_t$ in the reverse diffusion is non-trivial. While the score component in particle guidance is the score of the sequence of probability distributions $\tilde{p}_t(x_1, ..., x_n) = \Phi_t(x_1, ..., x_n) \prod_{i=1}^{n} p_t(x_i)$, we are not necessarily sampling exactly $\tilde{p}_0$ because, for an arbitrary time-evolving potential $\Phi_t$, this sequence of marginals does not correspond to a diffusion process. One strategy used by other works in similar situations [Du et al., 2023] relies on taking, after every step or at the end, a number of Langevin steps to reequilibrate and move the distribution back towards $\tilde{p}_t$. This, however, increases significantly the runtime cost (every Langevin step requires score evaluation) and is technically correct only in the limit of infinite steps leaving uncertainty in the real likelihood of our samples. Instead, in Theorem 1, we use the Feynman-Kac theorem to derive a formula for the exact reweighting that particle guidance has on a distribution (derivation in Appendix A.1): Theorem 1. 
Under integrability assumptions, sampling \( x_1^T, \ldots, x_n^T \) from \( p_T \) and following the particle guidance reverse diffusion process, we obtain samples from the following joint probability distribution at time \( t = 0 \): \[ \hat{p}_0(x_1, \ldots, x_n) = \mathbb{E}[Z \exp[-\int_0^T g(t)^2 \{\langle \nabla \log \Phi_t(X_t), \nabla \log \hat{p}_t(X_t) \rangle + \Delta \log \Phi_t(X_t)\} dt]], \] with \( Z \) (explicit in the appendix) such that \[ \prod_{i=1}^N p_0(x_i) = \mathbb{E}[Z], \] \( (X_t)_{t \in [0,T]} \) is a stochastic process driven by the equation \[ dX_t = \{f(X_t, t) - g(t)^2 \nabla \log p_t(X_t)\} dt + g(t) dw, \quad X_0 = \{x_i\}_{i=1}^N. \] Hence the density \( \hat{p}_0 \) can be understood as a reweighting of the random variable \( Z \) that represents I.I.D. sampling. Riemannian Manifolds. Note that our theoretical insights can also be extended to the manifold framework. This is a direct consequence of the fact that the Feynman-Kac theorem can be extended to the manifold setting, see for instance Benton et al. [2022]. 5.2 Preserving Invariances The objects that we learn to sample from with generative models often present invariances such as the permutation of the atoms in a molecule or the roto-translation of a conformer. To simplify the learning process and ensure these are respected, it is common practice to build such invariances in the model architecture. In the case of diffusion models, to obtain a distribution that is invariant to the action of some group \( G \) such as that of rotations or permutations, it suffices to have an invariant prior and build a score model that is \( G \)-equivariant [Köhler et al., 2020; Xu et al., 2021]. Similarly, in our case, we are interested in distributions that are invariant to the action of \( G \) on any of the set elements (see Section 6.2), we show that a sufficient condition for this invariance to be maintained is that the time-evolving potential \( \Phi_t \) is itself invariant to \( G \)-transformations of any of its inputs (see Proposition 1 in Appendix A.4). 6 Experiments Fixed potential particle guidance can be implemented on top of any existing trained diffusion model with the only requirement of specifying the potential/kernel to be used in the domain. We present three sets of empirical results in three very diverse domains. First, in Appendix C, we work with a synthetic experiment formed by a two-dimensional Gaussian mixture model, where we can visually highlight some properties of the method. In this section instead, we consider text-to-image and molecular conformer generation, two important tasks where diffusion models have established new state-of-the-art performances, and show how, in each of these tasks, particle guidance can provide improvements in sample efficiency pushing the diversity-quality Pareto frontier. The code is available at https://github.com/gcorso/particle-guidance. 6.1 Text-to-image generation In practice, the most prevalent text-to-image diffusion models, such as Stable Diffusion [Rombach et al., 2021] or Midjourney, generally constrain the output budget to four images per given prompt. Ideally, this set of four images should yield a diverse batch of samples for user selection. However, the currently predominant method of classifier-free guidance [Ho, 2022] tends to push the mini-batch samples towards a typical mode to enhance fidelity, at the expense of diversity. To mitigate this, we apply the proposed particle guidance to text-to-image generation. 
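As a concrete illustration of the fixed potential of Eq. (4) used in these experiments, the sketch below computes the guidance term for an RBF kernel over particle features; the feature extractor and bandwidth are placeholders (identity features here, a DINO-style embedding in the feature-space variant).

```python
import torch

def rbf_pg_guidance(x, alpha_t, bandwidth, feature_fn=lambda z: z.flatten(1)):
    """Guidance term for the fixed RBF potential (cf. Eq. 4); placeholders only.

    x: (n, ...) the n particles generated for the same prompt.
    Returns -alpha_t * grad_{x_i} of half the double sum of pairwise kernels,
    where the 1/2 mirrors the alpha_t/2 prefactor in Eq. (4).
    """
    x = x.detach().requires_grad_(True)
    feats = feature_fn(x)                           # (n, D) particle features
    diffs = feats.unsqueeze(1) - feats.unsqueeze(0) # (n, n, D) pairwise differences
    sq_dists = (diffs ** 2).sum(-1)                 # smooth squared distances
    kernel_sum = torch.exp(-sq_dists / bandwidth).sum() / 2.0
    grad = torch.autograd.grad(kernel_sum, x)[0]
    return -alpha_t * grad
```

During sampling, this gradient is simply added to the score inside each denoising step, as in Eq. (4).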
We use Stable Diffusion v1.5, pre-trained on LAION-5B [Schuhmann et al., 2022] with a resolution of \( 512 \times 512 \), as our testbed. We apply an Euler solver with 30 steps to solve the ODE version of particle guidance. Following [Xu et al., 2023a], we use the validation set in COCO 2014 [Lin et al., 2014] for evaluation, and the CLIP [Hessel et al., 2021]/Aesthetic score [Team, 2022] (higher is better) to assess the text-image alignment/visual quality, respectively. To evaluate the diversity within each batch of generated images corresponding to a given prompt, we introduce the in-batch similarity score. This metric represents the average pairwise cosine similarity of features within an image batch, utilizing the pre-trained DINO [Caron et al., 2021] as the feature extractor. In contrast to the FID score, the in-batch similarity score specifically measures the diversity of a batch of images generated for a given prompt. We use a classifier-free guidance scale from 6 to 10 to visualize the trade-off curve between diversity and the CLIP/Aesthetic score, in line with prior works [Xu et al., 2023a; Saharia et al., 2022]. For particle guidance, we implement the RBF kernel on the down-sampled pixel space (the latent space of the VAE encoder) in Stable Diffusion, as well as the feature space of DINO. Please refer to Appendix E.1 for more experimental details.

Figure 2: In-batch similarity score versus (a) CLIP ViT-g/14 score and (b) Aesthetic score for text-to-image generation at $512 \times 512$ resolution, using Stable Diffusion v1.5 with a varying guidance scale from 6 to 10.

Figure 3 (panels: (a) I.I.D., (b) PG, (c) training data, (d) I.I.D., (e) PG): Text prompt: (a,b) “A baby eating a cake with a tie around his neck with balloons in the background” (COCO); (c,d,e) “VAN GOGH CAFE TERASSE copy.jpg”, with original training data in (c).

As shown in Fig. 2(a) and Fig. 2(b), particle guidance (PG) consistently obtains a better (lower) in-batch similarity score in most cases, given the same CLIP/Aesthetic score, with a classifier-free guidance scale ranging from 6 to 10. Conversely, we observe that while the in-batch similarity score of I.I.D. sampling improves with a reduced classifier-free guidance scale, particle guidance continues to surpass I.I.D. sampling in terms of CLIP/Aesthetic score given the same in-batch similarity. When the potential is the similarity kernel applied in the feature space, particle guidance notably attains a lower in-batch similarity score compared to I.I.D. sampling or to the approach in the original downsampled pixel space. This suggests that utilizing a semantically meaningful feature space is more appropriate for determining distances between images. In Fig. 3, we further visualize generated batches of four images per prompt by I.I.D. sampling and particle guidance (feature) with the same random seeds, when fixing the classifier-free guidance scale to 9. We can see that particle guidance improves the visual diversity in the generated batch. Interestingly, particle guidance can also help to alleviate the memorization issue of Stable Diffusion [Somepalli et al., 2023]. For example, given the text prompt of a painting from the LAION dataset, particle guidance (Fig. 3(e)) avoids the multiple replications of the training data in the I.I.D.
setting (the top-left and the bottom-right images in Fig. 3(d)). We provide extended samples in Appendix F, and additionally show that SVGD (Eq. 2) fails to promote diversity, instead yielding a set of blurry images. ### 6.2 Molecular Conformer Generation Molecular conformer generation is a key task in computational chemistry that consists of finding the set of different conformations that a molecule most likely takes in 3D space. Critically it is often important to find all or most of the low-energy conformers as each can determine a different behavior. Table 1: Quality of generated conformer ensembles for the GEOM-DRUGS test set in terms of Coverage (%) and Average Minimum RMSD (Å). We follow the experimental setup from [Ganea et al., 2021], for experimental details and introduction of the baselines please refer to Appendix D. | Method | Recall Coverage ↑ | AMR ↓ | Precision Coverage ↑ | AMR ↓ | |-----------------|-------------------|-------|----------------------|-------| | | Mean | Med | Mean | Med | Mean | Med | Mean | Med | | RDKit ETKDG | 38.4 | 28.6 | 1.058 | 1.002 | 40.9 | 30.8 | 0.995 | 0.895 | | OMEGA | 53.4 | 54.6 | 0.841 | 0.762 | 40.5 | 33.3 | 0.946 | 0.854 | | GeoMol | 44.6 | 41.4 | 0.875 | 0.834 | 43.0 | 36.4 | 0.928 | 0.841 | | GeoDiff | 42.1 | 37.8 | 0.835 | 0.809 | 24.9 | 14.5 | 1.136 | 1.090 | | Torsional Diffusion | 72.7 | 80.0 | 0.582 | 0.565 | 55.2 | 56.9 | 0.778 | 0.729 | | TD w/ particle guidance | **77.0** | **82.6** | **0.543** | **0.520** | **68.9** | **78.1** | **0.656** | **0.594** | (e.g. by binding to a protein). This necessity is reflected in the metrics used by the community that look both at coverage (also called recall) and precision over the set predictions. Over the past few years, molecular conformer generation has been extensively studied by the machine learning community, with well-established benchmarks [Axelrod & Gomez-Bombarelli, 2022] and several generative models designed specifically for this task [Ganea et al., 2021; Xu et al., 2021; Jing et al., 2022]. However, all these methods are based on training a generative model to generate single samples and then running this model several times (more than 200 on average in the standard GEOM-DRUGS dataset) to generate a large number of I.I.D. samples. As discussed before, however, this strategy is suboptimal to generate representative sets of samples and cover the distribution. Therefore, we take the state-of-the-art conformer generation model, torsional diffusion, and, without retraining the model itself, we show that we can obtain significant improvements in both coverage and precision via particle guidance. Torsional diffusion [Jing et al., 2022] defines the diffusion process over the manifold defined by changes in torsion angles from some initial conformer because of the relative rigidity of the remaining degrees of freedom. Given this observation, we also define the guidance kernel on this manifold as an RBF kernel over the dihedral angle differences. Another important consideration when dealing with molecular conformers is given by the permutation symmetries that characterize several molecules: conformers that appear different might be very similar under permutations of the order of the atoms that do not change the bond structure. To maximize the sample efficiency and avoid generating similar conformers, we make the kernel invariant to these transformations. 
For this, we employ the simple strategy to take the minimum value of the original kernel under the different perturbations (formalized in Appendix D). Table 1 shows that by applying particle guidance to SDE-based reverse process of torsional diffusion (see Appendix D for details) we are able to balance coverage and precision being able to obtain, without retraining the model, significantly improved results on both metrics with 8% and 19% simultaneous reductions respectively in recall and precision median AMR. 7 LEARNED POTENTIAL PARTICLE GUIDANCE While the fixed potential particle guidance seen so far is very effective in improving the diversity of samples with little overhead, it is hard to argue about the optimality of the resulting joint distribution. This is because of the complexity of the expression obtained in Theorem 1 and its dependence on the data distribution itself. Furthermore, in some domains, particularly in scientific applications, researchers need to control the distribution that they are sampling. This is necessary, for example, to apply correct importance weights or compute free energy differences. While Theorem 1 allows us to theoretically analyze properties of the distribution, the joint and marginal distributions remain largely intractable. In this section, we analyze how we can sample from desired joint probability distribution by learning a tailored time-evolving potential for particle guidance. Using the maximum entropy theorem [Csiszár, 1975], we can show that the distribution satisfying a bound on the expected value of a (diversity) metric $\Phi_0$ while minimizing the KL divergence with the independent distribution is: $$\hat{p}_0(x_1, ..., x_n) \propto \Phi_0(x_1, ..., x_n)^{\beta(\alpha)} \prod_{i=1}^{n} p(x_i)$$ (5) where $\beta$ is a function of $\alpha$, the value of the bound on $E_{\hat{p}}[\log \Phi_0]$. 7.1 Training Procedure We now have to learn a time-evolving potential $\Phi_t$ that when used as part of the particle guidance framework generates $\hat{p}_0$ (we assume $\Phi_0$ is chosen such that $\beta(\alpha) = 1$). To achieve this, we mandate that the generation process of particle guidance in Eq. 1 adheres to the sequence of marginals $\hat{p}_t(x^t_1, ..., x^t_n) = \Phi_t(x^t_1, ..., x^t_n) \prod_{i=1}^{n} p_t(x^t_i)$ and learn $\Phi_t^\theta$ to satisfy this evolution. Under mild assumptions, using Doob h-theory (derivation in Appendix A.2), we show that we can learn the $\Phi_t^\theta$ by the following objective: $$\theta^* = \arg \min_\theta \mathbb{E}_{x^0_1, ..., x^0_n \sim p_0} \mathbb{E}_{x^t_i \sim p_{t|0}(x^0_i)} [\|\Phi_0(x^0_1, ..., x^0_n) - \Phi_t^\theta(x^t_1, ..., x^t_n)\|^2]$$ (6) where $p_{t|0}$ is the Gaussian perturbation kernel in diffusion models. Importantly, here the initial $x^0_i$ are sampled independently from the data distribution so this training scheme can be easily executed in parallel to learning the score of $p_t$. 7.2 Preserving Marginal Distributions While the technique discussed in the previous section is optimal in the maximum entropy perspective, it does not (for arbitrary $\Phi_0$) preserve the marginal distributions of individual particles, i.e. marginalizing $x_i$ over $\hat{p}$ does not recover $p$. Although not critical in many settings and not respected, for a finite number of particles, neither by the related methods in Section 4 nor by the fixed potential PG, this is an important property in some applications. 
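As a concrete reference for the regression objective in Eq. (6) above, one training step can be sketched as follows; `phi_theta`, `phi_0`, and the perturbation-kernel schedule are hypothetical placeholders, and the Gaussian form of $p_{t|0}$ is assumed as in the text.

```python
import torch

def phi_training_step(phi_theta, phi_0, perturb_sched, batch, optimizer, n_particles=4):
    """One step of the regression objective in Eq. (6); all callables are placeholders.

    phi_theta(x_t, t) -> scalar joint potential on diffused particles;
    phi_0(x_0) -> scalar target diversity potential on clean samples;
    perturb_sched(t) -> (alpha_t, sigma_t) of the assumed Gaussian kernel p_{t|0}.
    """
    x0 = batch[:n_particles]                        # n i.i.d. clean samples from p_0
    t = torch.rand(())                              # training time drawn uniformly in [0, 1]
    alpha_t, sigma_t = perturb_sched(t)
    xt = alpha_t * x0 + sigma_t * torch.randn_like(x0)   # independent perturbation per particle
    loss = (phi_0(x0) - phi_theta(xt, t)) ** 2      # regress Phi_theta^t onto Phi_0
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```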
Using again the maximum entropy theorem, we can show that the distribution satisfying a bound on the expected value of a (diversity) metric $\Phi_0'$ and preserving the marginal distribution while minimizing the KL divergence with the independent distribution can be written as: $$\hat{p}_0(x_1, ..., x_n) \propto \Phi_0'(x_1, ..., x_n)^{\beta(\alpha)} \prod_{i=1}^{n} p(x_i) \gamma_\theta(x_i)$$ (7) for some scalar function over individual particles $\gamma_\theta$. In Appendix A.3, we derive a new training scheme to learn the parameters of $\gamma_\theta$. This relies on setting the normalization constant to an arbitrary positive value and learning values of $\theta$ that respect the marginals. Once $\gamma_\theta$ is learned, its parameters can be frozen and the training procedure of Eq. 6 can be started. 8 Conclusion In this paper, we have analyzed how one can improve the sample efficiency of generative models by moving beyond I.I.D. sampling and enforcing diversity, a critical challenge in many real applications that has been largely unexplored. Our proposed framework, particle guidance, steers the sampling process of diffusion models toward more diverse sets of samples via the definition of a time-evolving joint potential. We have studied the theoretical properties of the framework such as the joint distribution it converges to for an arbitrary potential and how to learn potential functions that sample some given joint distribution achieving optimality and, if needed, preserving marginal distributions. Finally, we evaluated its performance in two important applications of diffusion models text-to-image generation and molecular conformer generation, and showed how in both cases it is able to push the Pareto frontier of sample diversity vs quality. We hope that particle guidance can become a valuable tool for practitioners to ensure diversity and fair representation in existing tools even beyond the general definition of diversity directly tackling known biases of generative models. Further, we hope that our methodological and theoretical contributions can spark interest in the research community for better joint-particle sampling methods. ACKNOWLEDGMENTS We thank Xiang Cheng, Bowen Jing, Timur Garipov, Shangyuan Tong, Renato Berlinghieri, Saro Passaro, Simon Olsson, and Hannes Stärk for their invaluable help with discussions and review of the manuscript. We also thank the anonymous reviewers for their useful feedback and suggestions. Concurrently with our work, Alex Pondaven also developed a similar idea\(^1\). This work was supported by the NSF Expeditions grant (award 1918839: Collaborative Research: Understanding the World Through Code), the Machine Learning for Pharmaceutical Discovery and Synthesis (MLPDS) consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging (DOMANE) threats program, the DARPA Accelerated Molecular Discovery program, the NSF AI Institute CCF-2112665, the NSF Award 2134795, the GIST-MIT Research Collaboration grant, MIT-DSTA Singapore collaboration, and MIT-IBM Watson AI Lab. REFERENCES Brian DO Anderson. Reverse-time diffusion equation models. *Stochastic Processes and their Applications*, 1982. Simon Axelrod and Rafael Gomez-Bombarelli. Geom, energy-annotated molecular conformations for property prediction and molecular generation. *Scientific Data*, 2022. Alessandro Barducci, Giovanni Bussi, and Michele Parrinello. 
Well-tempered metadynamics: a smoothly converging and tunable free-energy method. *Physical review letters*, 2008. Joe Benton, Yuyang Shi, Valentin De Bortoli, George Deligiannidis, and Arnaud Doucet. From denoising diffusions to denoising markov models. *arXiv preprint arXiv:2211.03595*, 2022. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF international conference on computer vision*, 2021. Wei-Cheng Chang, Chun-Liang Li, Youssef Mroueh, and Yiming Yang. Kernel stein generative modeling. *arXiv preprint arXiv:2007.03074*, 2020. Sitan Chen, Sinho Chewi, Jungshian Li, Yuanzhi Li, Adil Salim, and Anru R. Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. *ArXiv*, abs/2209.11215, 2022. URL https://api.semanticscholar.org/CorpusID:252438904. Sinho Chewi, Chen Lu, Kwangjun Ahn, Xiang Cheng, Thibaut Le Gouic, and Philippe Rigollet. Optimal dimension dependence of the metropolis-adjusted langevin algorithm. *ArXiv*, abs/2012.12810, 2020. Gabriele Corso. Modeling molecular structures with intrinsic diffusion models. *arXiv preprint arXiv:2302.12255*, 2023. Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi Jaakkola. Diffdock: Diffusion steps, twists, and turns for molecular docking. *arXiv preprint arXiv:2210.01776*, 2022. Imre Csiszár. I-divergence geometry of probability distributions and minimization problems. *The annals of probability*, pp. 146–158, 1975. Francesco D’Angelo and Vincent Fortuin. Annealed stein variational gradient descent. *arXiv preprint arXiv:2101.09815*, 2021. Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, and Arnaud Doucet. Riemannian score-based generative modeling. *arXiv preprint arXiv:2202.02763*, 2022. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *Advances in neural information processing systems*, 2021. \(^1\)https://alexpondaven.github.io/projects/repulsion/
XUCAA0XnPC
Additionally, the client is supposed to be computationally restricted, which is why a server is involved in the first place. How, then, is the client able to train the entire model locally (Section 4.2)?
ENSEMBLER: COMBATING MODEL INVERSION ATTACKS USING MODEL ENSEMBLE DURING COLLABORATIVE INFERENCE Anonymous authors Paper under double-blind review ABSTRACT Deep learning models have exhibited remarkable performance across various domains. Nevertheless, the burgeoning model sizes compel edge devices to offload a significant portion of the inference process to the cloud. While this practice offers numerous advantages, it also raises critical concerns regarding user data privacy. In scenarios where the cloud server’s trustworthiness is in question, the need for a practical and adaptable method to safeguard data privacy becomes imperative. In this paper, we introduce Ensembler, an extensible framework designed to substantially increase the difficulty of conducting model inversion attacks for adversarial parties. Ensembler leverages model ensembling on the adversarial server, running in parallel with existing approaches that introduce perturbations to sensitive data during collaborative inference. Our experiments demonstrate that when combined with even basic Gaussian noise, Ensembler can effectively shield images from reconstruction attacks, achieving recognition levels that fall below human performance in some strict settings, significantly outperforming baseline methods lacking the Ensembler framework. 1 INTRODUCTION In numerous critical domains, deep learning (DL) models have demonstrated exceptional performance when compared to traditional methods, including image classification [Deng et al., 2009; Dosovitskiy et al., 2021], natural language processing [Brown et al., 2020], protein predictions [Jumper et al., 2021], and more. One noteworthy trend accompanying these impressive advances is the escalating size of DL models employed for these tasks [Hu et al., 2021], with the famous GPT-3 model containing 175 billion parameters [Brown et al., 2020]. As a result, when tasks necessitate the involvement of edge devices such as mobile phones, reducing the computational workload on these devices becomes imperative. A prevalent approach involves offloading a substantial portion of the workload to a cloud server capable of executing extensive computations. This framework can be conceptualized as collaborative computing, where a client collaborates with a server offering computation-as-a-service (CaaS). Recently, some attention in the research community has been shifted to an emphasis on the privacy of client’s sensitive data in such a framework. While the client inherently trusts itself, the server may pose as an adversarial entity seeking to compromise the user’s privacy during the inference process. This risk becomes particularly pronounced when DL models are tasked with handling sensitive data, such as disease classification or facial authentication, which require access to medical or facial user information. In other scenarios, a client could be a small company that holds private models and uses the server solely for the purpose of providing service. It also does not want the server to access the data of its customers, which sometimes contains sensitive information. With the prevalence of edge computing, there is an increasing need for researchers to develop a machine learning framework that supports secure, accurate, and efficient machine learning service, and works in this area are often categorized under the term privacy-preserving machine learning (PPML). 
There have been multiple works addressing this formidable challenge of safeguarding the client’s sensitive information in collaborative inference scenarios, an important part of the entire PPML framework. For an extensive discussion on different algorithmic and architectural choices and their impacts on privacy protection, we refer readers to Section 5 and Table 2 of the comprehensive survey by (Xu et al., 2021). In this paper, we will simply group existing approaches into two categories: encryption-based algorithms that guarantee privacy at the cost of thousands of times of time efficiency (Mishra et al., 2020; Knott et al., 2021; Tan et al., 2021; Reagen et al., 2021; Rathee et al., 2020; Lam et al., 2023; Watson et al., 2022), and perturbation-based algorithms that operate on the intermediate layers of a DL architecture, introducing noise to thwart the adversary’s ability to recover client input (Mireshghallah et al., 2020; Osia et al., 2018; Lu et al., 2022; Sirichotedumrong & Kiya, 2021). Since perturbation-based algorithms directly operate on the intermediate outputs from the client, they incur minimal additional complexity during the inference process. However, as opposed to guaranteed privacy provided by encryption-based algorithms, perturbation-based algorithms suffer from the possibility of privacy leakage, meaning sensitive private information may still be recoverable by the adversarial server despite the introduced perturbations. He et al. (2019) presented one of the first systematic studies on model inversion attacks (MIA) on collaborative inference (CI). Their research shows that a shadow network can effectively emulate the client’s secret network, enabling the recovery of raw images, especially when the client retains only one single convolutional layer. While the client is able to keep more privacy as it keeps more layers, such a method is less practical in the real world due to the limitations of the computational power of edge devices. Mireshghallah et al. (2020) proposed Shredder, which uses a noise injection layer before the client sending out computed results to reduce mutual information between client and server while maintaining good classification accuracy. Nevertheless, Lu et al. (2022) demonstrated that Shredder falls short in safeguarding facial images from recovery. In our own experimentation with the noise injection layer proposed by Shredder, applied to a ResNet-18 architecture on CIFAR-10, we observed significant accuracy drops with combined multiplicative and additive noise. On the other hand, simple additive noise resulting in approximately a 5 percent drop in accuracy failed to protect images from recovery, as depicted in Figure 1. Lu proposed to use a policy-based processor between client and server to protect private information, but figures in their work seem to indicate that the effectiveness of their policy should be attributed to removing some regions from the original image that contain sensitive data. While such an approach is effective in some cases, it falls short in scenarios where sensitive information is embedded within the image, such as in facial authentication tasks. In this paper, we aim to bring forth these contributions to the research community. Firstly, we expand upon the systematic analysis of various model split strategies between the client and server, focusing on more complex architectures commonly used in practice. 
Second, we take a different path from those approaches that propose different modifications to the data and introduce Ensembler, a secure collaborative inference framework designed to substantially increase the effort required to recover client input. Ensembler is not only a stand-alone framework that significantly increases the adversary server's reconstruction difficulty but can also be seamlessly integrated with existing complex algorithms to construct practical and secure inference architectures tailored to specific needs. The remainder of this paper is organized as follows: Section 2 introduces the background of collaborative inference and related works, as well as formally defining the threat model. Section 3 offers a systematic analysis of the impact of different model split strategies on server-side reconstruction difficulty. Section 4 introduces Ensembler and details its design for more secure collaborative inference. Section 5 presents the empirical experiments related to Ensembler and showcases its effectiveness in protecting the client's private data, and Section 6 concludes the paper. 2 BACKGROUND 2.1 COLLABORATIVE MACHINE LEARNING The development of mobile graphics processing units (GPUs) has ushered in a new era where machine learning tasks are increasingly deployed with a portion of the computation being handled by edge devices. Related areas include federated learning, where multiple edge devices jointly train a deep learning model (McMahan et al., 2017; Yu et al., 2023; Yaldiz et al., 2023); split learning, where a DL model is split into two or more parts, and the client and server jointly train it (Poirot et al., 2019); and collaborative inference, where a DL model is split, with only a portion deployed on the server to provide services (He et al., 2019; Osia et al., 2018). In this paper, we will focus on the inference part and assume that the training phase of DL models is secure. Though the training phase is sometimes also susceptible to adversarial attacks aimed at stealing sensitive information (Inan et al., 2021; Li et al., 2022; Zhang et al., 2021), private inference is still more prevalent in most practical scenarios. ### 2.2 Threat Model In this paper, we consider the collaborative inference task between the client and the server, which acts as a semi-honest adversarial attacker aiming to steal the raw input from the client. Formally, we define the system as collaborative inference on a pre-trained DNN model, $M(x, \theta)$, where the client holds the first and the last few layers (i.e., the "head" and "tail" of the neural network), denoted as $M_{c,h}(x, \theta_{c,h})$ and $M_{c,t}(x, \theta_{c,t})$. The rest of the layers of the DNN are deployed on the server, denoted as $M_s(x, \theta_s)$. $\theta$ denotes the trained weights of $M$, where $\theta = \{\theta_{c,h}, \theta_s, \theta_{c,t}\}$. The complete collaborative pipeline thus makes a prediction for an incoming image $x$ with $M_{c,t}[M_s[M_{c,h}(x)]]$. During the process, the server has access to $\theta_s$ and the intermediate output $M_{c,h}(x)$. In addition, we assume that it has a good estimate of the DNN used for inference. That is, it has auxiliary information on the architecture of the entire DNN, as well as a dataset from the same distribution as the private training dataset used to train the DNN. However, it does not necessarily know the hyper-parameters or the engineering tricks used to train the model.
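To make the split concrete, the following is a minimal PyTorch sketch of such a pipeline. The split points and the module names (client_head, server_body, client_tail) are illustrative assumptions, not the exact configuration used later in the experiments.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical split of a ResNet-18 into M_{c,h} (client head), M_s (server body), and
# M_{c,t} (client tail); the split points chosen here are illustrative only.
model = resnet18(num_classes=10)
layers = list(model.children())  # conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc

client_head = nn.Sequential(*layers[:4])               # M_{c,h}: the first convolutional block
server_body = nn.Sequential(*layers[4:-1])              # M_s: residual blocks + average pooling
client_tail = nn.Sequential(nn.Flatten(), layers[-1])   # M_{c,t}: the final classifier

def collaborative_inference(x):
    z = client_head(x)      # computed locally; only z ever leaves the client
    z = server_body(z)      # the server sees z and its own weights theta_s
    return client_tail(z)   # the logits are produced back on the client

logits = collaborative_inference(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```

Only the intermediate tensor $z = M_{c,h}(x)$ leaves the client, and this is exactly the quantity the adversarial server tries to invert.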
Since the server is a computation-as-a-service (CaaS) provider, it is assumed to have reasonably large computation resources. While it is powerful in computing, the server is restricted from querying the client to receive a direct relationship between raw input $x$ and intermediate output $M_{c,h}(x)$. In order to reconstruct raw input $x$ from the intermediate output $M_{c,h}(x)$, the server adopts a common model inversion attack (He et al., 2019; Lu et al., 2022; Dosovitskiy & Brox, 2016). It constructs a shadow network $\tilde{M}(x, \theta_{c,h}, \theta_s, \theta_{c,t}) : \{M_{c,h}, M_s, M_{c,t}\}$ such that $\tilde{M}$ simulates the behavior of $M$. After training $\tilde{M}$, the adversarial server is able to obtain a representation $\tilde{M}_{c,h}$ such that $\tilde{M}_{c,h}(x) \sim M_{c,h}(x)$. As the next step, with the help of a decoder for $\tilde{M}_{c,h}$ that reconstructs the raw image from the intermediate representation, it is able to reconstruct the raw input from $M_{c,h}(x)$. ### 2.3 Assumptions of Other Related Works In this section, we provide an overview of various attack models and the assumptions adopted in other works related to collaborative inference (CI) under privacy-preserving machine learning (PPML). Since different works potentially use different collaboration strategies between the client and the server, we will use the generic notation, where $M_c$ is held by the client, and $M_s$ is held by the server. Generally, the attacks from the server fall into three categories: - **Training Dataset Reconstruction Attacks** that try to predict whether certain attributes, including but not limited to individual samples, distributions, or certain properties, are part of the private training set used to train $M(x, \theta)$. If successful, the privacy of the training dataset will be compromised. We refer readers to the surveys by Hu et al. (2022) and Salem et al. (2023) for more details. - **Model Inversion Attacks** that try to recover a particular input during inference when its raw form is not shared by the client. For example, in an image classification task, the client may want to split $M$ such that it only shares with the server latent features computed locally. However, upon a successful model inversion attack, the server will be able to generate the raw image used for the classification task based on the latent features. It is important to note that, in this paper, we adopt the same definition of model inversion attacks as He et al. (2019). In other works, this term also refers to attacks that reconstruct the private training dataset. We will focus on reconstructing the private raw input for the rest of the paper. - **Model Extraction Attacks** that try to steal the parameters and even hyper-parameters of M. This type of attack compromises the intellectual property of the private model, and such attacks are often employed as sub-routines for model inversion attacks when the server lacks direct access to M's parameters. Different works also make different assumptions on the capability of the server. First, it is widely accepted that the server has sufficiently large computing power and resources, as its role is often providing ML service. Regarding the auxiliary information on M, the assumptions generally fall into three levels: - **White Box** assumes that the server has full access to the architecture details of M, such as its structure and parameters (Liu et al., 2021).
Different definitions also add different auxiliary information available to the server, such as the training dataset (Liu et al., 2021), corrupted raw inputs (Zhang et al., 2020), or a different dataset (Wang & Kurz, 2022). This setting is often associated with attacks that try to reconstruct the private training dataset (Wang & Kurz, 2022; Zhang et al., 2020; Haim et al., 2022). - **Black Box** assumes that the server does not have any information on either M or the training dataset. However, it is allowed to send unlimited queries to the client to get $M_c(x)$ (Xu et al., 2023; Kahla et al., 2022). - **Query-Free** restricts the server from querying $M_c$. While such an assumption greatly limits the reconstruction ability of the adversarial party, there are no limitations on the auxiliary information available to the server besides the actual weights of $M_c$. He et al. (2019) and Ding et al. (2023) have both shown that $M_c$ is still vulnerable to leaking private information of the raw input when the server has information about the model architecture and training dataset. Our work adopts this setting. ### 3 ANALYSIS ON SPLITTING BETWEEN CLIENT AND SERVER Previous work by He et al. (2019) provided a systematic analysis of the recovery difficulty and quality of the above-mentioned model inversion attack. Their work analyzed the effects on reconstruction quality of loosening the assumptions on auxiliary information available to the server (DNN architecture and training dataset), as well as of choosing different split points ($h$) between the client and the server. However, their work was based on a simple 6-layer convolutional neural network (CNN), which is seldom used in today's services. In this section, we extend their analysis to more practical architectures, namely ResNet-18 and VGG-16. One of the important findings from the studies of He et al. (2019) and Ding et al. (2023) is that increasing the depth ($h$) of $M_{c,h}$ leads to worse image reconstruction quality for the adversarial attacker in MIA. At the same time, part of the algorithm of Zhou et al. (2022) lets the client, instead of the server, compute the Softmax function of $M(x, \theta)$ at the last layer. The success of their algorithm raises the possibility of utilizing a second split point to enhance privacy protection. Under the threat model defined in Section 2.2, we provide visual evaluations of the quality of reconstructed images from MIA, as shown in Figures 2 and 3. Figure 2: Effect of first and second split points on VGG16. The vertical axis is the first split point in terms of layers, and the horizontal axis is the second split point counting backwards on layers. The vertical axis is the position of the first split point, and the horizontal axis is the position of the second split point counting backwards. For the VGG-16 architecture, the first $h$ layers of $M$ belong to the client. For the ResNet-18 architecture with 4 blocks, $h$ represents the number of residual blocks computed by the client, with $h=1$ being the client only computing the first convolutional layer. As shown in the figures, our experiments align with the results from He et al. (2019) and Ding et al. (2023). The deeper the first split point is, the worse the reconstructed image is. However, the experiments do not support the idea from Zhou et al. (2022). The second split point does not increase the difficulty of reconstruction under MIA.
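For reference, the query-free model inversion attack used throughout this analysis (Section 2.2) can be sketched as below. make_head, make_tail, server_body, and aux_loader are hypothetical placeholders; the adversary is assumed to know the architecture and $\theta_s$ and to own data from the same distribution, but never the client's weights.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the exact attack implementation) of the query-free MIA of Section 2.2.
# The decoder below assumes the head halves the spatial resolution and outputs 64 channels.

def train_shadow(make_head, server_body, make_tail, aux_loader, epochs=5):
    """Step 1: fit a shadow head/tail so the full pipeline behaves like M on auxiliary data."""
    shadow_head, shadow_tail = make_head(), make_tail()
    opt = torch.optim.Adam(list(shadow_head.parameters()) + list(shadow_tail.parameters()), 1e-3)
    for _ in range(epochs):
        for x, y in aux_loader:
            logits = shadow_tail(server_body(shadow_head(x)))  # theta_s is known and not updated
            loss = nn.functional.cross_entropy(logits, y)
            opt.zero_grad(); loss.backward(); opt.step()
    return shadow_head

def train_decoder(shadow_head, aux_loader, channels=64, epochs=5):
    """Step 2: learn to invert the shadow head's intermediate representation."""
    decoder = nn.Sequential(
        nn.ConvTranspose2d(channels, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
    opt = torch.optim.Adam(decoder.parameters(), 1e-3)
    for _ in range(epochs):
        for x, _ in aux_loader:
            with torch.no_grad():
                z = shadow_head(x)            # stand-in for the unknown M_{c,h}(x)
            loss = nn.functional.mse_loss(decoder(z), x)
            opt.zero_grad(); loss.backward(); opt.step()
    return decoder

# Step 3: the attack itself, applied to an intercepted intermediate output z = M_{c,h}(x):
#   x_hat = decoder(z)
```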
It is also worth pointing out that while our experiments indicate that image reconstruction quality is below human-level recognition after $h=6$ for VGG-16 and $h=2$ for ResNet-18, this should not be treated as a privacy guarantee. This is because we are using a standard decoder for $\tilde{M}_{c,h}(x, \theta_{c,h})$, whereas there exist more powerful generative decoders that could potentially do better at reconstructing images (Khosravy et al., 2022). At the same time, this reconstruction depends on the task. For example, Lu et al. (2022) is able to reconstruct high-quality facial images with larger $h$, and Ding et al. (2023) is more successful with vehicle reconstruction. We also provide a brief experiment of MIA on an NLP task in Appendix A.1. 4 ENSEMBLER ARCHITECTURE While it is possible to protect sensitive data via increasing the depth ($h$) as shown in the previous section, such depth is often impractical for edge devices due to the computational demands involved. In this section, we present Ensembler, a framework that augments the privacy of intermediate information sent by the client without requiring extra computation effort from the client during inference. Ensembler is highly extensible, and it is compatible with existing works that apply noise and perturbations during both DNN training and inference. We will go over the detailed architecture in Section 4.1, as well as the training stage of this new framework in Section 4.2. 4.1 ARCHITECTURE OVERVIEW As illustrated in Fig. 4, Ensembler leverages model ensembling on the server to generate a regularized secret $M_{c,h}$ that is hard for the server to reconstruct. It consists of three parts: standard client layers, $N$ different server nets, and a selector. During the collaborative inference pipeline, the client computes $M_{c,h}(x)$ and transmits the intermediate output to the server. The server then feeds the intermediate output through each of the $M^i_s$, and reports the output of each $M^i_s$ to the client. The client then employs a selector to perform a selection of the feedback from the server, which activates the results of $P$ out of the $N$ nets and combines them. As a final step, it performs the computation of the last $t$ layers to classify the input. We will introduce these components separately in this section. 4.1.1 CLIENT LAYERS During collaborative inference, a part of the DNN is run by the client. Under the proposed framework, the client is responsible for running the first $h$ layers $M_{c,h}$ and the last $t$ layers $M_{c,t}$. These layers are the same as the client part of a typical collaborative inference framework. $M_{c,h}$ takes the raw input (often an image) and outputs the intermediate result, whereas $M_{c,t}$ takes the output from the server as input and outputs the likelihood of each class. Figure 4: Illustration of the proposed architecture, Ensembler. Different from the traditional CI pipelines, it deploys N neural networks on the server, and uses a selector to activate P of the N nets. 4.1.2 Server Nets On the server side, the network consists of $N$ copies of the DNN, with each $M_s^i$ corresponding to what the server would normally process in a typical collaborative inference pipeline. That is, each $M^i : \{M_{c,h}, M_s^i, M_{c,t}\}$ is a valid pipeline for the inference task. Upon receiving $M_{c,h}(x)$, which is the input from the client, the server feeds this input into each $M_s^i$ and outputs $N$ representations of hidden features used for classification.
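A minimal sketch of this pipeline is given below; client_head, server_nets, client_tail, and selected_ids are illustrative stand-ins for $M_{c,h}$, $\{M_s^1, \ldots, M_s^N\}$, $M_{c,t}$, and the secret selection, and the selector (detailed in Section 4.1.3) is modeled as a fixed mask with entries $1/P$ or $0$.

```python
import torch
import torch.nn as nn

# Sketch of the Ensembler inference pipeline in Section 4.1 (module names are hypothetical).
class Ensembler(nn.Module):
    def __init__(self, client_head, server_nets, client_tail, selected_ids):
        super().__init__()
        self.head, self.tail = client_head, client_tail
        self.server_nets = nn.ModuleList(server_nets)       # the N server-side networks
        mask = torch.zeros(len(server_nets))
        mask[list(selected_ids)] = 1.0 / len(selected_ids)   # S_i = 1/P if selected, else 0
        self.register_buffer("mask", mask)                   # kept on the client, never shared

    def forward(self, x):
        z = self.head(x)                                     # client computes M_{c,h}(x)
        outs = [net(z) for net in self.server_nets]          # server runs all N nets
        mixed = torch.cat([s * o for s, o in zip(self.mask, outs)], dim=1)  # Equation (1)
        return self.tail(mixed)                              # client finishes classification
```

Because the client tail consumes the concatenation of all N (mostly zeroed) server outputs, the server never learns which P nets are actually active.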
4.1.3 Selector To increase the difficulty of the server reconstructing the model and recovering the raw input, a selector is applied before the last layer run by the client. The selector serves as a secret activation function, which activates P of the N nets according to Equation (1), where $S_i$ is the activation from the selector, and $\odot$ is the element-wise multiplication. For simplicity, we consider $S_i = 1/P$ if $M_s^i$ is selected by the client, and $S_i = 0$ otherwise. $$\text{Selector}[M_s(x)] = \text{Concat}[S_1 \odot M_s^1(x), S_2 \odot M_s^2(x), ..., S_N \odot M_s^N(x)]$$ (1) 4.2 Training Stage As mentioned above, the design choices of Ensembler aim to achieve a regularized $M_{c,h}$ such that a shadow network based on $M_s$ would be an incorrect estimate of $M_{c,h}$. To achieve this goal, the proposed architecture uses a two-stage training pipeline. For the first stage, it needs to obtain N distinct $M^i(x, \theta^i) : \{M_{c,h}^i, M_s^i, M_{c,t}^i\}$ such that a shadow network that accurately simulates $M_{c,h}^i$ could not simulate $M_{c,h}^j$. In our approach, we choose to simply introduce a Gaussian noise layer after the intermediate output $M_{c,h}^i(x)$. The objective function in this stage is to minimize the cross-entropy loss in the form of Equation (2), where $N(0, \sigma^i)$ is a fixed Gaussian noise added to the intermediate output. The choice of $\sigma$ is dependent on the quality of training, and given the inherent redundancy in the parameters of DNNs, adding some noise will not affect the classification accuracy. For example, adding noise of $N(0, 0.1)$ after the first layer of a ResNet-18 architecture for the CIFAR-10 image classification task results in less than 1% accuracy loss. We choose Gaussian noise because of its simplicity of implementation, and we argue that any method that leads to distinctive $M_{c,h}$ will be sufficient for this step. However, this step is nonetheless needed to ensure that each model has different parameter weights. Otherwise, all N models would be identical to each other, and the framework would fail its purpose of protecting privacy. $$L_{\theta}^i = -\sum_j y_j \ast \log M_{c,t}^i(M_s^i[M_{c,h}^i(x) + N(0, \sigma^i)])_j$$ (2) After the first training stage, N different DNNs are obtained. The proposed framework selects P of the N nets, and re-trains an "ensemblized" network, Ensembler, which has been outlined in the previous section. During the training, parameters of $M_s$ are frozen. This step is used to ensure the performance of the model during inference. While the training process is just like that of any typical neural network, it is worth pointing out that we add a regularization term to the standard cross-entropy loss to force $M$ to learn a joint $M_{c,h}$ and $M_{c,t}$ representation from all of the P server nets. The custom loss function, as shown in Equation (3), adds a high penalty to the model if the gradient descends only in the direction of some single server net $M^i_s$. In the equation, CS is the cosine similarity, and $\lambda$ is a hyper-parameter controlling the regularization strength. Since this is an end-to-end training process, any perturbation-based algorithm could be seamlessly combined with the proposed framework during this step to provide further privacy protection. For our experiment, we simply choose Gaussian noise to be consistent with the first stage.
$$L_\theta = -\sum_{i=1}^{N} \sum_{j} [y_j \ast \log M_{c,t}(\text{Selector}[M^i_s[M_{c,h}(x) + N(0, \sigma)]])_j] + \lambda \max_{i \in P} [CS(M_{c,h}(x), M^i_{c,h}(x))]$$ (3) 4.3 Intuition behind Ensembler In this section, we discuss the intuition behind the proposed architecture. Since the attacker will construct shadow networks to simulate the behavior of the client's private networks, the exact purpose of the two-stage training algorithm is to ensure that the attacker is not able to learn the selector with its shadow network. Through the first stage of training, N different models that have distinctive weights are obtained, yet all of them are able to make comparable predictions on the dataset. An arbitrary ensemble of P out of the N networks will form a new network, whose $M_{c,h}$ will be distinct from that of a network formed under a different combination. That is, since $M^{i}_s + M^{j}_s$ would be different from $M^{i}_s + M^{k}_s$, the $M_{c,h}$ obtained from $M^{i}_s + M^{j}_s$ would be different from the $M_{c,h}$ obtained from $M^{i}_s + M^{k}_s$, where $+$ denotes the ensemble of server nets. Thus, with N networks in the first stage of the algorithm, we will have $2^N$ different possible $M_{c,h}$ that could be a valid answer for the shadow network. When the attacker tries to train an adaptive attack, the shadow network will learn an arbitrary representation $M_{c,h}$ and an arbitrary $S$. Such a combination is a valid choice in terms of classification accuracy but is nonetheless incorrect compared to the actual $M_{c,h}$. 4.4 Time complexity of Ensembler From the previous section, it is clear that the time complexity of the proposed framework is $N$ times that of the individual network on a single GPU, and there is negligible extra communication cost between the client and the server. However, it is worth emphasizing that since the $M^i_s$ are independent of each other, the proposed framework is friendly to parallel execution and even multiparty (multi-server) inference. Under those settings, the theoretical factor of $N$ in time complexity would be replaced with lower practical time costs, or the framework could even become uninvertible. On the other hand, since the server is not able to adaptively learn the client's representation, the only option is to exhaustively try all combinations, which takes $2^N$ times the effort of reconstructing a single network. Here, we provide a semi-formal argument on the exponential complexity of reconstructing the best-quality image under Ensembler's protection. **Lemma 1** Reconstructing the image from a single server net $M^i_s$ is not viable. Any shadow network obtained through a single $M^i_s$ needs to first simulate the behavior of $M^i_{c,h}$. In this case, if there exists some $M^i_{c,h}$ that simulates $M_{c,h}$, the training loss of the second training phase (Equation (3)) is not optimized due to the regularization term. **Lemma 2** Reconstructing the image from an incorrect choice of $M_{activated} = [M^i_s, ..., M^j_s]$ is not viable. Since the noises $g_i \sim N(0, \sigma^i)$ are independent of each other, the N different $M^i(x, \theta^i)$ obtained in the first training stage are also distinctive. Including an incorrect $M^i_s$ in the shadow network construction will lead the model to regularize in an incorrect direction. **Conclusion** The time complexity of reconstructing the best-quality input from N server nets is theoretically $2^N - 1$ times that of a single network.
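To make the second training stage concrete, the following is a minimal sketch of the objective in Equation (3), written against the Ensembler module sketched in Section 4.1. heads_stage1 stands for the P frozen first-stage heads $M_{c,h}^i$, and all names and hyper-parameters here are illustrative rather than the exact implementation; the server nets are assumed frozen by simply excluding their parameters from the optimizer.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the stage-2 loss in Equation (3); ensemble follows the Ensembler sketch
# from Section 4.1, heads_stage1 holds the P frozen first-stage heads, lam is lambda.
def stage2_loss(ensemble, heads_stage1, x, y, sigma=0.1, lam=1.0):
    z = ensemble.head(x)                                          # joint M_{c,h}(x)
    noisy = z + sigma * torch.randn_like(z)                       # Gaussian perturbation N(0, sigma)
    outs = [net(noisy) for net in ensemble.server_nets]
    mixed = torch.cat([s * o for s, o in zip(ensemble.mask, outs)], dim=1)
    ce = F.cross_entropy(ensemble.tail(mixed), y)                 # classification term of Eq. (3)

    # Penalize the joint head collapsing onto any single first-stage head M_{c,h}^i:
    flat = z.flatten(1)
    sims = [F.cosine_similarity(flat, h(x).flatten(1).detach(), dim=1).mean() for h in heads_stage1]
    return ce + lam * torch.stack(sims).max()                     # lambda * max_i CS(M_{c,h}, M_{c,h}^i)
```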
5 EXPERIMENTS AND EVALUATIONS 5.1 ARCHITECTURE DETAILS During the experiment, we consider the strictest setting, where h=1 and t=1 on a ResNet-18 architecture for three image classification tasks: CIFAR-10, CIFAR-100, and a subset of CelebA-HQ [Zhu et al., 2022]. That is, the client only holds the first convolutional layer as well as the last fully-connected layer, which is also the minimum requirement for our framework. For CIFAR-10, the intermediate output's feature size is [64x16x16]; for CIFAR-100, we remove the MaxPooling layer and the intermediate output's feature size is [64x32x32]; and for CelebA, the intermediate output's feature size is [64x64x64]. We consider the ensembled network to contain 10 neural networks (N=10), each being a ResNet-18. The selector secretly selects \{4,3,5\} out of the 10 nets (P=\{4,3,5\}), respectively. The adversarial server is aware of the architecture and the training dataset. It constructs a shadow network $\hat{M}_{c,h}$ consisting of three convolutional layers with 64 channels each, with the first one simulating the unknown $M_{c,h}$, and the other two simulating the Gaussian noise added to the intermediate output. It also has $\hat{M}_{c,t}$ with the same shape as $M_{c,t}$. For the adaptive shadow network, it learns from all 10 server nets with an additional activation layer that is identical to the selector. For any noise added to the intermediate outputs during the training and inference stages, we consider a fixed Gaussian noise $g \sim N(0, 0.1)$. 5.2 EXPERIMENT SETUP To evaluate the effectiveness of our approach, we employ three key metrics: Structural Similarity (SSIM), Peak Signal to Noise Ratio (PSNR), and visual assessment. The first two metrics offer quantitative evaluations of the reconstruction quality of MIA, with higher SSIM and PSNR values indicating better reconstruction quality. As our proposed architecture operates in parallel with existing perturbation methods, we consider the following baseline approaches for comparison on CIFAR-10: no protection (NONE), adding small noise in a single network that does not require retraining (Shredder [Mireshghallah et al., 2020]), adding large noise and retraining a single network (Single), and adding a dropout layer in the single network or the ensembled network, but with only one round of training (DR-single and DR-ensemble). The dropout baselines are included to differentiate our architecture from dropout layers, as the selector component looks very similar to a dropout layer. For the other two datasets, we select some of the important benchmarks for comparison. For CelebA-HQ, since the intermediate output's feature size is too large for the simple Gaussian filter to be visually effective, we add an untrained random $M_{c,h}$ (Random) to illustrate the maximum capacity of the Gaussian filter at the cost of accuracy. For the proposed architecture, we evaluate both reconstruction from a single neural network (N=1) and reconstruction using the entire network (Adaptive). For reconstruction of ensembled nets using a single neural network, we report the best reconstruction result of the N nets. For Section 3, we implemented the experiments on a server with four A-6000 GPUs using Python and PyTorch. For Section 4, we used a mixture of the server and Google Colab, which uses one T4 GPU. 5.3 COMPARISON OF RESULTS We provide the quantitative evaluations for CIFAR-10 in Table 1 and the visual assessments in Figure 5 in Appendix A.2.1.
It could be seen that the proposed framework significantly increases the reconstruction difficulty of the adversarial party. Ensembler incurs 2.13% drop in classification accuracy compared to the model without any protection, which is marginal compared to its advantage in protecting privacy of the client’s raw input. From the figure, it is clear that the reconstructed images are hardly recognizable by human-level interpretations. In addition, we provide the quantitative evaluations for CIFAR-100 and CelebA-HQ in Table. 2 and 3 and the visual assessments in Appendix A.2.2 and A.2.3. The proposed framework remains effective when the feature size increases. In particular, the framework safeguards the model’s prediction ability while protecting the input images on par with the random head network. Although the visual assessments show that increasing feature size leads to better visual recognition, we argue that it is inevitable with simple Gaussian noises. In particular, the shadow network is able to raise the reconstruction quality of a totally mismatched random $M_{c,h}$ to beyond human-recognition level from the shadow network with best PSNR. Table 1: Quantitative evaluations of the different defense mechanisms with CIFAR-10. Last three are the proposed framework. For SSIM and PSNR, lower values mean worse reconstruction quality. | Name | Change in accuracy | SSIM | PSNR | |--------------------|-------------------|--------|---------| | NONE | 0.00% | 0.4363 | 12.2678 | | Shredder | -5.68% | 0.5359 | 10.4033 | | Single | 2.15% | 0.3921 | 7.5266 | | Dr-single | 2.70% | 0.3453 | 6.6674 | | Dr-ensemble (best SSIM) | 1.42% | 0.373 | 7.3493 | | Dr-ensemble (best PSNR) | 1.42% | 0.3232 | 7.9598 | | Adaptive | -2.13% | 0.0555 | 5.981 | | N=1 (best SSIM) | -2.13% | 0.2889 | 4.865 | | N=1 (best PSNR) | -2.13% | 0.2221 | 5.5348 | Table 2: Quantitative evaluations of the different defense mechanisms with CIFAR-100. Last two are the proposed framework. For SSIM and PSNR, lower values mean worse reconstruction quality. | Name | Change in accuracy | SSIM | PSNR | |--------------------|-------------------|--------|---------| | Single | -0.97% | 0.4558 | 8.5225 | | Adaptive | 0.31% | 0.0864 | 4.7715 | | N=1 (best SSIM&best PSNR) | 0.31% | 0.2636 | 5.0741 | Table 3: Quantitative evaluations of the different defense mechanisms with CelebA-HQ [Zhu et al., 2022]. Last two are the proposed framework. For SSIM and PSNR, lower values mean worse reconstruction quality. | Name | Change in accuracy | SSIM | PSNR | |--------------------|-------------------|--------|---------| | Single | -1.24% | 0.2650 | 14.3126 | | Random (best SSIM&best PSNR) | -65.19% | 0.1387 | 12.8150 | | Adaptive | 2.39% | 0.0897 | 13.3698 | | N=1 (best SSIM&best PSNR) | 2.39% | 0.1791 | 12.0645 | 6 CONCLUSION In this paper, we present two contributions to the research community of PPML and collaborative inference. First, we extend the discussion on choosing the split points between client and server under collaborative inference. Our experiments illuminate that deeper split points yield lower-quality reconstructions, while the introduction of a second split point offers little to no improvement. Furthermore, we introduce a novel framework, Ensembler, designed to significantly increase the complexity of reconstruction for adversarial parties. Ensembler seamlessly aligns with existing methods that introduce diverse forms of noise to intermediate outputs, potentially yielding robust and adaptable architectures if combined with them. 
Our experiments highlight the substantial deterioration in reconstruction quality for images safeguarded by Ensembler when compared to those without its protection. REFERENCES Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848. Shiwei Ding, Lan Zhang, Miao Pan, and Xiaoyong Yuan. Patrol: Privacy-oriented pruning for collaborative inference against model inversion attacks, 2023. Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4829–4837, 2016. doi: 10.1109/CVPR.2016.522. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Niv Haim, Gal Vardi, Gilad Yehudai, michal Irani, and Ohad Shamir. Reconstructing training data from trained neural networks. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=Sxk8Bse3RKO. Zecheng He, Tianwei Zhang, and Ruby B. Lee. Model inversion attacks against collaborative inference. In Proceedings of the 35th Annual Computer Security Applications Conference, ACSAC ’19, pp. 148–162, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450376280. doi: 10.1145/3359789.3359824. URL https://doi.org/10.1145/3359789.3359824. Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, and Xuyun Zhang. Membership inference attacks on machine learning: A survey. ACM Comput. Surv., 54(11s), sep 2022. ISSN 0360-0300. doi: 10.1145/3523273. URL https://doi.org/10.1145/3523273. Xia Hu, Lingyang Chu, Jian Pei, Weiqing Liu, and Jiang Bian. Model complexity of deep learning: A survey. Knowl. Inf. Syst., 63(10):2585–2619, oct 2021. ISSN 0219-1377. doi: 10.1007/s10115-021-01605-0. URL https://doi.org/10.1007/s10115-021-01605-0. Huseyin A. Inan, Osman Ramadan, Lukas Wutschitz, Daniel Jones, Victor Ruhle, James Withers, and Robert Sim. Privacy analysis in language models via training data leakage report. CoRR, abs/2101.05405, 2021. URL https://arxiv.org/abs/2101.05405. John M. 
Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Andy Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David A. Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with alphafold. Nature, 596:583 – 589, 2021. URL https://api.semanticscholar.org/CorpusID:235959867. M. Kahla, S. Chen, H. Just, and R. Jia. Label-only model inversion attacks via boundary repulsion. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15025–15033, Los Alamitos, CA, USA, jun 2022. IEEE Computer Society. doi: 10.1109/CVPR52688.2022.01462. URL https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01462.
ASppt1L3hx
The batch-dependency parameter \kappa is set to 256; however, Figure 4 indicates minimal difference between 64, 256, and even infinity. Furthermore, Figure 3 shows that the GNN model's validation F1-score drops when \kappa is 256 (or larger). Given these observations, how do you justify choosing \kappa = 256 for the evaluation instead of 64?
COOPERATIVE MINIBATCHING IN GRAPH NEURAL NETWORKS Anonymous authors Paper under double-blind review ABSTRACT Significant computational resources are required to train Graph Neural Networks (GNNs) at a large scale, and the process is highly data-intensive. One of the most effective ways to reduce resource requirements is minibatch training coupled with graph sampling. GNNs have the unique property that items in a minibatch have overlapping data. However, the commonly implemented Independent Minibatching approach assigns each Processing Element (PE) its own minibatch to process, leading to duplicated computations and input data access across PEs. This amplifies the Neighborhood Explosion Phenomenon (NEP), which is the main bottleneck limiting scaling. To reduce the effects of NEP in the multi-PE setting, we propose a new approach called Cooperative Minibatching. Our approach capitalizes on the fact that the size of the sampled subgraph is a concave function of the batch size, leading to significant reductions in the amount of work per seed vertex as batch sizes increase. Hence, it is favorable for processors equipped with a fast interconnect to work on a large minibatch together as a single larger processor, instead of working on separate smaller minibatches, even though global batch size is identical. We also show how to take advantage of the same phenomenon in serial execution by generating dependent consecutive minibatches. Our experimental evaluations show up to 4x bandwidth savings for fetching vertex embeddings, by simply increasing this dependency without harming model convergence. Combining our proposed approaches, we achieve up to 64% speedup over Independent Minibatching on single-node multi-GPU systems. 1 INTRODUCTION Graph Neural Networks (GNNs) have become de facto deep learning models for unstructured data, achieving state-of-the-art results on various application domains involving graph data such as recommendation systems [Wu et al., 2020] [Ying et al., 2018], fraud detection [Liu et al., 2022] [Patel et al., 2022], identity resolution [Xu et al., 2019], and traffic forecasting [Jiang & Luo, 2022]. However, as the usage of technology continues to increase, the amount of data generated by these applications is growing exponentially, resulting in large and complex graphs that are infeasible or too time-consuming to train on a single processing element [Ying et al., 2018] [Zhu et al., 2019]. For example, some social media graphs are reaching billions of vertices and trillions of interactions [Ching et al., 2015]. Efficient distributed training of GNNs is essential for extracting value from large-scale unstructured data that exceeds the cost of storing and processing such data. Due to the popularity of Deep Neural Networks (DNNs) and the need to support larger models and datasets, a great deal of research has focused on increasing the scale and efficiency of distributed DNN training. Techniques such as data parallelism [Ginsburg et al., 2017] [Goyal et al., 2018], pipelining [Narayanan et al., 2019], and intra-layer parallelism [Dean et al., 2012] have been employed. Following the success of traditional distributed DNN training, the same techniques have also been adapted to GNN training, such as data parallelism [Gandhi & Iyer, 2021] [Lin et al., 2020] [Zheng et al., 2021] [Zhu et al., 2019] and intra-layer parallelism [Tripathy et al., 2020]. The parallelization techniques mentioned earlier are used to scale both full-batch and minibatch training in a distributed setting. 
Minibatch training [Bertsekas, 1994] is the go-to method to train DNN models as it outperforms full-batch training in terms of convergence [Allen-Zhu & Hazan, 2016] [Li et al., 2014] [Keskar et al., 2016] [Wilson & Martinez, 2003], and more recently has been shown... to also offer the same benefit for GNNs (Zheng et al., 2021). In the distributed setting, minibatch training for DNNs using data parallelism is straightforward. The training samples are partitioned across the Processing Elements (PE) and they compute the forward/backward operations on their minibatches. The only communication required is an all-reduce operation for the gradients. Unfortunately, minibatch training a GNN model is more challenging than a usual DNN model. GNNs turn a given graph encoding relationships into computational dependencies. Thus in an $L$-layer GNN model, each minibatch computation has a different structure as it is performed on the $L$-hop neighborhood of the minibatch vertices. Real-world graphs usually are power law graphs (Artico et al., 2020), with small diameters, thus it is a challenge to train deep GNN models as the $L$-hop neighborhood grows exponentially w.r.t. $L$, reaching almost the whole graph within a few hops. Very large GNN datasets necessitate storing the graph and node embeddings on slower storage mediums. To enable GNN training efficiently in such a setting, several techniques have been proposed (Park et al., 2022; Waleffe et al., 2022). These studies assume that the graph and its features are stored on disks or SSDs and design their systems to reduce data transfers. The methods proposed in this paper directly apply to these settings by reducing bandwidth requirements, as seen in Section 4. A single epoch of full-batch GNN training requires computation proportional to the number of layers $L$ and the size of the graph. However, minibatch training requires more operations to process a single epoch due to repeating calculations in the 2nd through $L$th layers. As the batch size decreases, the number of repeated calculations increases. This is because the vertices and edges have to be processed each time they appear in the $L$-hop neighborhood. Thus, it is natural to conclude that using effectively larger batch sizes in GNNs reduces the number of computations and data accesses of an epoch in contrast to regular DNN models. Our contributions in this work utilizing this important observation can be listed as follows: - Investigate work vs. batch size relationship and present theorems stating the cost of processing a minibatch is a concave function of the batch size (Theorems 3.1 and 3.2). - Utilize this relationship by combining data and intra-layer parallelism to process a minibatch across multiple PEs for reduced work (Section 3.1), with identical global batch size. We call this new approach Cooperative Minibatching. - Use the same idea to generate consecutive dependent minibatches to increase temporal vertex embedding access locality (Section 3.2). This approach can reduce the transfer amount of vertex embeddings up to $4\times$, without harming model convergence. - Show that the two approaches are orthogonal. Together, the reduced work and decreased cache miss rates result in up to $1.64\times$ speedup over Independent Minibatching with identical global batch size. ## 2 BACKGROUND A graph $G = (V, E)$ consists of vertices $V$ and edges $E \subset V \times V$ along with optional edge weights $A_{ts} > 0, \forall (t \rightarrow s) \in E$. 
Given a vertex $s$, the 1-hop neighborhood $N(s)$ is defined as $N(s) = \{t | (t \rightarrow s) \in E\}$, and it can be naturally expanded to a set of vertices $S$ as $N(S) = \bigcup_{s \in S} N(s)$. GNN models work by passing previous layer embeddings ($H$) from $N(s)$ to $s$, and then combining them using a nonlinear function $f(l)$ at layer $l$, given initial vertex features $H^{(0)}$: $$H_s^{(l+1)} = f(l)(H_s^{(l)}, \{H_t^{(l)} | t \in N(s)\})$$ \hspace{1cm} (1) If the GNN model has $L$ layers, then the loss is computed by taking the final layer $L$’s embeddings and averaging their losses over the set of training vertices $V_t \subset V$ for full-batch training. In $L$-layer full-batch training, the total number of vertices that needs to be processed is $L|V|$. ### 2.1 MINIBATCHING IN GNNs In minibatch training, a random subset of training vertices, called Seed Vertices, is selected, and training is done over the (sampled) subgraph composed of $L$-hop neighborhood of the seed vertices. On each iteration, minibatch training computes the loss on seed vertices, which are random subsets of the training set $V_t$. Given a set of vertices $S$, we define $l$-th layer expansion set, or the $l$-hop neighborhood $S^l$ as: $$S^0 = S, \quad S^{(l+1)} = S^l \cup N(S^l)$$ For GNN computations, $S^l$ would also denote the set of the required vertices to compute (1) at each layer $l$. Using the same notation, $\{s\}^L$ denotes $l$-layer expansion set starting from single vertex $s \in V$. For a single minibatch iteration, the total number of vertices that need to be processed is $\sum_{l=1}^{L} |S^l|$. There are $\frac{|V|}{|S^0|}$ minibatches assuming $V_t = V$. Since each $|S^l| \geq |S^0|$, and a single epoch of minibatch training needs to go over the whole dataset, the work $W(|S^0|)$ for a single epoch is: $$W(|S^0|) = \frac{|V|}{|S^0|} \sum_{l=1}^{L} E[|S^l|] \geq \frac{|V|}{|S^0|} \sum_{l=1}^{L} |S^0| = L|V|$$ where $E[|S^l|]$ is the expected number of sampled vertices in layer $l$ and $|S^0|$ is the batch size. That is, the total amount of work to process a single epoch increases over full-batch training. The increase in work due to minibatch training is thus encoded in the ratios $\frac{E[|S^l|]}{|S^0|}$, $1 \leq l \leq L$. Next, we will briefly present some of the sampling techniques. When sampling is used with minibatching, the minibatch subgraph may potentially become random. However, the same argument for the increasing total amount of work holds for them too, as seen in Figure 2. ### 2.2 Graph Sampling Below, we review three different sampling algorithms for minibatch training of GNNs. Our focus in this work is samplers whose expected number of sampled vertices is a function of the batch size. All these methods are applied recursively for GNN models with multiple layers. #### 2.2.1 Neighbor Sampling (NS) Given a fanout parameter $k$ and a batch of seed vertices $S^0$, NS by (Hamilton et al., 2017) samples the neighborhoods of vertices randomly. Given a batch of vertices $S^0$, a vertex $s \in S^0$ with degree $d_s = |N(s)|$, if $d_s \leq k$, NS uses the full neighborhood $N(s)$, otherwise it samples $k$ random neighbors for the vertex $s$. #### 2.2.2 LABOR Sampling Given a fanout parameter $k$ and a batch of seed vertices $S^0$, LABOR-0 (Balm & Çatalyürek, 2023) samples the neighborhoods of vertices as follows. First, each vertex rolls a uniform random number $0 \leq r_t \leq 1$. 
Given batch of vertices $S^0$, a vertex $s \in S^0$ with degree $d_s = |N(s)|$, the edge $(t \rightarrow s)$ is sampled if $r_t \leq \frac{k}{d_s}$. Since different seed vertices $s \in S^0$ end up using the same random variate $r_t$ for the same source vertex $t$, LABOR-0 samples fewer vertices than NS in expectation. The LABOR-* algorithm is the importance sampling variant of LABOR-0 and samples an edge $(t \rightarrow s)$ if $r_t \leq c_s \pi_t$, where $\pi$ is importance sampling probabilities optimized to minimize the expected number of sampled vertices and $c_s$ is a normalization factor. LABOR-* samples fewer vertices than LABOR-0 in expectation. Note that, choosing $k \geq \max_{s \in V} d_s$ corresponds to training with full neighborhoods for both NS and LABOR methods. #### 2.2.3 Random Walk Sampling Given a walk length $o$, a restart probability $p$, number of random walks $a$, a fanout $k$, and a batch of vertices $S^0$, a vertex $s \in S^0$, a Random Walk (Ying et al., 2018) starts from $s$ and each step picks a random neighbor $s'$ from $N(s)$. For the remaining $o - 1$ steps, the next neighbor is picked from $N(s')$ with probability $1 - p$, otherwise it is picked from $N(s)$. This process is repeated $a$ times for each seed vertex and lastly, the top $k$ visited vertices become the neighbors of $s$ for the current layer. Notice that random walks correspond to weighted neighbor sampling from a graph with adjacency matrix $\tilde{A} = \sum_{i=1}^{o} A^i$, where the weights of $\tilde{A}$ depend on the parameters $a$, $p$ and $k$. Random walks give us the ability to sample from $\tilde{A}$ without actually forming $\tilde{A}$. 2.3 INDEPENDENT MINIBATCHING Independent minibatching is commonly used in multi-GPU, and distributed GNN training frameworks (Cai et al., 2023; Gandhi & Iyer, 2021; Lin et al., 2020; Zheng et al., 2021; Zhu et al., 2019) to parallelize the training and allows scaling to larger problems. Each Processing Element (PE, e.g., GPUs, CPUs, or cores of multi-core CPU), starts with their own $S^0$ of size $b$ as the seed vertices, and compute $S^1, \ldots, S^L$ along with the sampled edges to generate minibatches (see Figure 1). Computing $S^1, \ldots, S^L$ depends on the chosen sampling algorithm, such as the ones explained in Section 2.2. Independent minibatching has the advantage that doing a forward/backward pass does not involve any communication with other PEs after the initial minibatch preparation stage at the expense of duplicate work (see Figure 1). 3 COOPERATIVE MINIBATCHING In this section, we present two theorems that show the work of an epoch will be monotonically nonincreasing with increasing batch sizes. We provide their proofs in Appendices A.1 and A.2. After that, we propose two algorithms that can take advantage of this monotonicity. **Theorem 3.1.** The work per epoch $\frac{E[|S^l|]}{|S^0|}$ required to train a GNN model using minibatch training is monotonically nonincreasing as the batch size $|S^0|$ increases. **Theorem 3.2.** The expected subgraph size $E[|S^l|]$ required to train a GNN model using minibatch training is a concave function of batch size, $|S^0|$. 3.1 COOPERATIVE MINIBATCHING As explained in Section 2, Independent Minibatching (I.M.) cannot take advantage of the reduction in work with increasing global batch sizes and number of PEs, because it uses separate small batches of sizes $b$ on each PE for each step of training. 
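The duplicate work of Independent Minibatching can be illustrated with a toy, one-hop experiment. The sketch below is not the paper's implementation: the random graph, fanout, and batch sizes are made up, and the sampler follows the LABOR-0 rule of Section 2.2 (keep edge $(t \rightarrow s)$ when $r_t \leq k/d_s$).

```python
import random

# Toy one-hop illustration: compare the total vertices touched by P independent b-sized
# batches against a single cooperative batch of the same global size b * P.
random.seed(0)
num_vertices, k, b, P = 10_000, 10, 512, 4
adj = {v: random.sample(range(num_vertices), 25) for v in range(num_vertices)}  # toy neighborhoods
r = {v: random.random() for v in range(num_vertices)}   # one shared variate r_t per vertex

def labor0_hop(seeds):
    sampled = set(seeds)
    for s in seeds:
        sampled.update(t for t in adj[s] if r[t] <= k / len(adj[s]))
    return sampled

seeds = random.sample(range(num_vertices), b * P)        # identical global batch of size b * P
independent = sum(len(labor0_hop(seeds[i * b:(i + 1) * b])) for i in range(P))  # per-PE subgraph sizes
cooperative = len(labor0_hop(seeds))                      # one shared subgraph for all P PEs
print(independent, cooperative)                           # cooperative <= independent
```

Because overlapping vertices are fetched and processed once instead of once per PE, the cooperative count is never larger than the independent total, in line with Theorem 3.1.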
On the other hand, one can also keep the global batch size constant, $bP = |S^0|$, and vary the number of processors $P$. As $P$ increases, I.M. will perform more and more duplicate work because the local batch size is a decreasing function, $b = \frac{|S^0|}{P}$, of $P$. Here, we propose the Cooperative Minibatching method that will take advantage of the work reduction with increasing batch sizes in multi-PE settings. In Cooperative Minibatching, a single batch of size $bP$ will be processed by all the $P$ PEs in parallel, eliminating any redundant work across all PEs. We achieve this as follows: we first partition the graph in 1D fashion by logically assigning each vertex and its incoming edges to PEs as $V_p$ and $E_p$ for each PE $p$. Next, PE $p$ samples its batch of seed vertices $S^0_p$ of size $b$ from the training vertices in $V_p$. Then using any sampling algorithm, PE $p$ samples the incoming edges $E^l_p$ from $E_p$ for its seed vertices. Each PE then computes the set of vertices sampled $\tilde{S}^{l+1}_p = \{ t \mid (t \rightarrow s) \in E^l_p \}$. Note that $\tilde{S}^{l+1}_p$ has elements residing on different PEs. The PEs exchange the vertex ids $\tilde{S}^{l+1}_p$ so that each PE receives the set $S^{l+1}_p \subset V_p$. This process is repeated recursively for GNN models with multiple layers by using $S^{l+1}_p$ as the seed vertices for the next layer. The exchanged information is cached to be used during the forward/backward passes. For the forward/backward passes, the same communication pattern used during cooperative sampling is used to send and receive input and intermediate layer embeddings before each GNN layer invocation. Algorithm 1 details cooperative sampling and cooperative forward/backward passes for a single GNN training iteration. Independent minibatching works the same except that it lacks the all-to-all operations and has $\tilde{A}_p^{l+1} = A_p^{l+1}$ for any given variable $A$ instead. The redistribution of vertices during sampling happens according to the initial graph partitioning and the rest of the redistribution operations follow the same communication pattern, always converting a variable $\tilde{A}_p^{l+1}$ into $A_p^{l+1}$ during the forward pass and $A_p^{l+1}$ into $\tilde{A}_p^{l+1}$ during sampling and the backward passes for any variable $A$. Note that a similar training approach is explored concurrently with our work in Polisetty et al. (2023). We refer the reader to Appendix A.4 for the complexity analysis of Cooperative and Independent Minibatching approaches, and to Appendix A.8 to see the relation between the approach proposed here and the work of Jia et al. (2020) on redundancy-free GCN aggregation. ### 3.2 Cooperative Dependent Minibatching Just as any parallel algorithm can be executed sequentially, we can reduce the number of distinct data accesses by having a single PE process $b$-sized parts of a single $\kappa b$-sized minibatch for $\kappa$ iterations. In light of Theorems 3.1 and 3.2, consider doing the following: choose $\kappa \in \mathbb{Z}^+$, then sample a batch $S^0$ of size $\kappa b$, i.e., $\kappa b = |S^0|$ to get $S^0, \ldots, S^L$. Then sample $\kappa$ minibatches $S_i^0$, of size $b = |S_i^0|$ from this batch of size $\kappa b$ to get $S_i^0, \ldots, S_i^L$, $\forall i \in \{0, \ldots, \kappa - 1\}$. In the end, all of the input features required for these minibatches will be a subset of the input features of the large batch, i.e., $S_i^j \subset S^j, \forall i, j$ (a toy sketch of this nesting follows below).
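The nesting can be illustrated with the same kind of toy LABOR-0 sketch as before; all sizes below are made up, and the assert simply checks the one-hop subset property $S_i^j \subset S^j$.

```python
import random

# Toy illustration of dependent minibatching: one kappa*b-sized superbatch is sampled, and each
# of the kappa nested b-sized minibatches only touches vertices already present in the
# superbatch's one-hop sampled set (thanks to the shared per-vertex variates r_t).
random.seed(1)
num_vertices, k, b, kappa = 10_000, 10, 512, 8
adj = {v: random.sample(range(num_vertices), 25) for v in range(num_vertices)}
r = {v: random.random() for v in range(num_vertices)}          # shared variate r_t per vertex

def labor0_hop(seeds):
    out = set(seeds)
    for s in seeds:
        out.update(t for t in adj[s] if r[t] <= k / len(adj[s]))
    return out

super_seeds = random.sample(range(num_vertices), kappa * b)
super_sampled = labor0_hop(super_seeds)                         # S^1 of the kappa*b-sized batch
for i in range(kappa):
    mini_sampled = labor0_hop(super_seeds[i * b:(i + 1) * b])   # S_i^1 for minibatch i
    assert mini_sampled <= super_sampled                        # S_i^1 is a subset of S^1
```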
This means that the collective input feature requirement of these $\kappa$ batches will be $|S^L|$, the same as our batch of size $\kappa b$. Hence, we can now take advantage of the concave growth of the work in Theorem 3.2 and Figure 2. Note that, if one does not use any sampling algorithms and proceeds to use the full neighborhoods, this technique will not give any benefits, as by definition, the $l$-hop neighborhood of a batch of size $\kappa b$ will always be equal to the union of the $l$-hop neighborhoods of batches of sizes $b$. However for sampling algorithms, any overlapping vertex sampled by any two batches of sizes $b$ might end up with different random neighborhoods resulting in a larger number of sampled vertices. Thus, having a single large batch ensures that only a single random set of neighbors is used for any vertex processed over a period of $\kappa$ batches. The approach described above has a nested iteration structure and the minibatches part of one $\kappa$ group will be significantly different than another group and this might slightly affect convergence. In Appendix A.5 we propose an alternative smoothing approach that does not require two-level nesting and still takes advantage of the same phenomenon for the NS and LABOR sampling algorithms. The main idea of our smoothing approach is as follows: each time one samples the neighborhood of a vertex, normally it is done independently of any previous sampling attempts. If one were to do it fully dependently, then one would end up with an identical sampled neighborhood at each sampling attempt. What we propose is to do something inbetween, so that the sampled neighborhood of a vertex changes slowly over time. The speed of change in the sampled neighborhoods is $\frac{1}{\kappa}$, and after every $\kappa$ iterations, one gets fully independent new random neighborhoods for all vertices. We will experimentally evaluate the locality benefits and the overall effect of this algorithm on convergence in Sections 4.2 and 4.3.1 and more details on our smoothing approach are discussed in Appendix A.5. ### 4 Experiments We first compare how the work to process an epoch changes w.r.t. to the batch size to empirically validate Theorems 3.1 and 3.2 for different graph sampling algorithms. Next, we show how dependent batches introduced in Section 3.2 benefits GNN training. We also show the runtime benefits of cooperative minibatching compared to independent minibatching in the multi-GPU setting. Finally, we show that these two techniques are orthogonal, can be combined to get multiplicative savings. Details on our experimental setup can be found in Appendix A.3. Table 1: Traits of datasets used in experiments: numbers of vertices, edges, avg. degree, features, cached vertex embeddings, and training, validation and test vertex split. The last column shows the number of minibatches in an epoch during model training with 1024 batch size including validation. | Dataset | \(|V|\) | \(|E|\) | \(\frac{|E|}{|V|}\) | # feats. 
| cache size | train - val - test (%) | # minibatches | |-----------|--------|--------|-----------------|---------|------------|------------------------|---------------| | flickr | 89.2K | 900K | 10.09 | 500 | 70k | 50.00 - 25.00 - 25.00 | 65 | | yelp | 717K | 14.0M | 19.52 | 300 | 200k | 75.00 - 10.00 - 15.00 | 595 | | reddit | 233K | 115M | 493.56 | 602 | 60k | 66.00 - 10.00 - 24.00 | 172 | | papers100M| 111M | 3.2B | 29.10 | 128 | 2M | 1.09 - 0.11 - 0.19 | 1300 | | mag240M | 244M | 3.44B | 14.16 | 768 | 2M | 0.45 - 0.06 - 0.04 | 1215 | Figure 2: Monotonicity of the work. x-axis shows the batch size, y-axis shows \(E[|S^3|]/|S^0|\) (see Theorem 3.1 for node prediction (top row) and \(E[|S^3|]\) (see Theorem 3.2) for edge prediction (bottom row), where \(E[|S^3|]\) denotes the expected number of sampled vertices in the 3rd layer and \(|S^0|\) denotes the batch size. RW stands for Random Walks, NS for Neighbor Sampling, and LABOR-0/* for the two different variants of the LABOR sampling algorithm described in Section 2.2. 4.1 Demonstrating monotonicity of work We use three sampling approaches, NS, LABOR, and RW, to demonstrate that the work to process an epoch decreases as the batch size increases for the \(L = 3\) layer case across these three different classes of sampling algorithms. We carried out our evaluation in two problem settings: node and edge prediction. For node prediction, a batch of training vertices is sampled with a given batch size. Then, the graph sampling algorithms described in Section 2.2 are applied to sample the neighborhood of this batch. The top row of Figure 2 shows how many input vertices is required on average to process an epoch, specifically \(E[|S^3|]/|S^0|\). For edge prediction, we add reverse edges to the graph making it undirected and sample a batch of edges. For each of these edges a random negative edge (an edge that is not part of \(E\)) with one endpoint coinciding with the positive edge is sampled. Then, all of the endpoints of these positive and negative edges are used as seed vertices to sample their neighborhoods. The bottom row of Figure 2 shows \(E[|S^3|]\). We can see that in all use cases, datasets and sampling algorithms, the work to process an epoch is monotonically decreasing (see Appendix A.1 for the proof). We also see the plot of the expected number of vertices sampled, \(E[|S^3|]\), is concave with respect to batch size (proof in Appendix A.2). Another observation is that the concavity characteristic of \(E[|S^3|]\) seems to differ for different sampling algorithms. In increasing order of concavity we have RW, NS, LABOR-0 and LABOR-*. The more concave a sampling algorithm’s \(E[|S^L|]\) curve is, the less it is affected from the NEP and more savings are available through the use of the proposed methods in Sections 3.1 and 3.2. Note that the differences would grow with a larger choice of layer count \(L\). 4.2 Dependent Minibatches Figure 3: The validation F1-score with the full neighborhoods for LABOR-0 sampling algorithm with 1024 batch size and varying $\kappa$ dependent minibatches, $\kappa = \infty$ denotes infinite dependency, meaning the neighborhood sampled for a vertex stays static during training. See Figure 4a for cache miss rates. See Figure 7 for the validation F1-score with the dependent sampler and the training loss curve. (a) Cache sizes were taken from Table 1 and a single (b) 4 cooperating PEs were used with each having a cache of size 1M. 
Figure 4: LRU-cache miss rates for LABOR-0 sampling algorithm with 1024 batch size per PE and varying $\kappa$ dependent minibatches, $\kappa = \infty$ denotes infinite dependency. We vary the batch dependency parameter $\kappa$ introduced in Section 3.2 for the LABOR-0 sampler with a batch size of 1024. Our expectation is that as consecutive batches become more dependent on each other, the subgraphs used during consecutive steps of training would start overlapping with each other, in which case, the vertex embedding accesses would become more localized. We attempted to capture this increase in temporal locality in vertex embedding accesses by implementing an LRU cache to fetch them. The cache sizes used for different datasets is given in Table 1. Note that the cache miss rate is proportional to the amount of data that needs to be copied from the vertex embedding storage. The Figure 4a shows that as $\kappa$ increases, the cache miss rate across all datasets drops. On reddit, this is a drop from 64% to 16% on, a 4x improvement. We also observe that the improvement is monotonically increasing as a function of $\frac{|E|}{|V|}$ given in Table 1. Figure 3 shows that training is not negatively affected across all datasets up to $\kappa = 256$ with less than 0.1% F1-score difference, after which point the validation F1-score with w/o sampling starts to diverge from the $\kappa = 1$ case. Runtime benefits of this approach can be observed by comparing the Cache and Cache, $\kappa$ columns in Table 2. Appendix A.6 has additional discussion about the effect of varying $\kappa$ and the last column of Table 1 shows the number of minibatches in an epoch during training. 4.3 Cooperative Minibatching We use our largest datasets, mag240M and papers100M, as distributed training is motivated by large-scale graph datasets. We present our runtime results on systems equipped with NVIDIA GPUs, with 4 and 8 A100 80 GB (NVIDIA, 2021) and 16 V100 32GB (NVIDIA, 2020b), all with NVLink interconnect between the GPUs (600 GB/s for A100 and 300 GB/s for V100). The GPUs perform all stages of GNN training and the CPUs are only used to launch kernels for the GPUs. Feature copies are performed by GPUs as well, accessing pinned feature tensors over the PCI-e using zero-copy access. In cooperative minibatching, both data size and computational cost are shrinking with increasing numbers of PEs, relative to independent minibatching. We use the GCN model for papers100M and the R-GCN model (Schlichtkrull et al., 2017) for mag240M. As seen in Table 2... Table 2: Cooperative vs independent minibatching runtimes per minibatch (ms) on three different systems with 4 and 8 NVIDIA A100 80 GB GPUs, and 16 NVIDIA V100 32GB GPUs. I/C denotes whether independent or cooperative minibatching is used. Samp. is short for Graph Sampling, Feature Copy stands for vertex embedding copies over PCI-e and Cache denotes the runtime of copies performed with a cache that can hold $10^6$ vertex embeddings per A100 and $5 \times 10^5$ per V100. $\kappa$ denotes the use of batch dependency $\kappa = 256$. F/B means forward/backward. Total time is computed by the fastest available Feature Copy time, the sampling time, and the F/B time. $|S^0|$ is the global batch size and $b$ is the batch size per GPU. $\alpha$ stands for cross GPU communication bandwidth (NVLink), $\beta$ for PCI-e bandwidth and $\gamma$ for GPU global memory bandwidth. 
Green was used to indicate the better result between independent and cooperative minibatching, while Bold was used to highlight the feature copy time included in the Total column. | # PEs, $\gamma$ | Dataset & Model | Sampler | I/C | Samp. | Feature Copy | F/B | Total | |-----------------|---------------|---------|-----|-------|--------------|-----|-------| | | | | | | - Cache | Cache, $\kappa$ | | | 4 A100 | papers100M | LABOR-0 | Indep | 21.7 | 18.4 | 16.8 | **11.2** | 8.9 | 41.8 | | $\gamma = 2TB/s$| GCN | Coop | 17.7 | 14.0 | 10.1 | **5.8** | 13.0 | **36.5** | | $\alpha = 600GB/s$ | NS | Indep | 16.1 | 26.5 | **22.1** | - | 10.1 | **48.3** | | $\beta = 64GB/s$ | Coop | 11.9 | 21.3 | **12.9** | - | 15.0 | **39.8** | | $|S^0| = 2^{12}$ | LABOR-0 | Indep | 26.0 | 57.9 | 56.0 | **41.0** | 199.9 | **266.9** | | $b = 1024$ | R-GCN | Coop | 20.0 | 51.1 | **36.9** | **23.4** | 183.3 | **226.7** | | | NS | Indep | 14.4 | 78.0 | **71.2** | - | 223.0 | **308.6** | | | | Coop | 12.3 | 73.9 | **47.5** | - | 215.6 | **275.4** | | 8 A100 | papers100M | LABOR-0 | Indep | 21.3 | 21.1 | 18.7 | **12.0** | 9.3 | 42.6 | | $\gamma = 2TB/s$| GCN | Coop | 16.5 | 12.4 | 7.1 | **4.0** | 13.5 | **34.0** | | $\alpha = 600GB/s$ | NS | Indep | 15.8 | 31.0 | **24.5** | - | 10.3 | **50.6** | | $\beta = 64GB/s$ | Coop | 12.5 | 19.4 | 9.0 | - | 15.6 | **37.1** | | $|S^0| = 2^{13}$ | LABOR-0 | Indep | 30.6 | 70.1 | 66.2 | **46.8** | 202.1 | **279.5** | | $b = 1024$ | R-GCN | Coop | 21.6 | 50.6 | 29.0 | **19.3** | 172.2 | **213.1** | | | NS | Indep | 15.0 | 94.9 | **80.9** | - | 224.8 | **320.7** | | | | Coop | 14.9 | 71.6 | **39.6** | - | 209.0 | **263.5** | | 16 V100 | papers100M | LABOR-0 | Indep | 39.1 | 44.5 | 40.2 | **29.4** | 15.1 | 83.6 | | $\gamma = 0.9TB/s$| GCN | Coop | 26.9 | 22.7 | 10.4 | **4.9** | 19.1 | **50.9** | | $\alpha = 300GB/s$ | NS | Indep | 18.0 | 61.3 | **52.0** | - | 16.2 | **86.2** | | $\beta = 32GB/s$ | Coop | 19.2 | 34.9 | **13.0** | - | 21.3 | **53.5** | | $|S^0| = 2^{13}$ | LABOR-0 | Indep | 50.8 | 128.8 | 121.3 | **96.2** | 156.1 | **303.1** | | $b = 512$ | R-GCN | Coop | 29.2 | 78.1 | **42.8** | **23.5** | 133.3 | **186.0** | | | NS | Indep | 19.3 | 167.3 | **152.6** | - | 170.9 | **342.8** | | | | Coop | 19.3 | 116.1 | **53.1** | - | 160.4 | **232.8** | cooperative minibatching reduces all the runtimes for different stages of GNN training, except for the F/B (forward/backward) times on papers100M where the computational cost is not high enough to hide the overhead of communication. Table 3: Runtime improvements of Cooperative Minibatching over Independent Minibatching compiled from the Total column of Table 2. This is a further improvement on top of the speedup independent minibatching already gets over the execution on a single GPU. | Dataset & Model | Sampler | 4 GPUs | 8 GPUs | 16 GPUs | |-----------------|---------|--------|--------|---------| | papers100M | LABOR-0 | 15% | 25% | 64% | | GCN | NS | 21% | 36% | 61% | | mag240M | LABOR-0 | 18% | 31% | 63% | | R-GCN | NS | 12% | 22% | 47% | If we take the numbers in the Total columns from Table 2, divide independent runtimes by the corresponding cooperative ones, then we get Table 3. We can see that the theoretical decrease in work results in increasing speedup numbers with the increasing number of PEs, due to Theorem A.1. We would like to point out that $E[|S^2|]/|S^0|$ curves in Figure 2 are responsible for these results. 
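To make this comparison concrete, below is a toy numerical sketch. It assumes a made-up concave curve for the expected number of sampled input vertices as a function of batch size (the `pool` and `fanout` constants are purely illustrative; the actual curves are the measured ones in Figure 2). Independent minibatching evaluates this curve $P$ times at batch size $|S^0|/P$, whereas cooperative minibatching evaluates it once at $|S^0|$:

```python
def expected_sampled_vertices(batch_size: float) -> float:
    """Toy concave stand-in for E[|S^L|], the expected number of sampled input vertices.

    Models drawing `fanout * batch_size` neighbors uniformly (with replacement)
    from a pool of candidate vertices; `pool` and `fanout` are made-up numbers,
    and the real curves are the measured ones in Figure 2.
    """
    pool, fanout = 200_000, 150
    return pool * (1.0 - (1.0 - 1.0 / pool) ** (fanout * batch_size))


def independent_work(global_batch: int, num_pes: int) -> float:
    # Each PE independently samples a neighborhood for its own batch of size |S^0| / P.
    return num_pes * expected_sampled_vertices(global_batch / num_pes)


def cooperative_work(global_batch: int) -> float:
    # A single shared neighborhood is sampled for the full global batch of size |S^0|.
    return expected_sampled_vertices(global_batch)


if __name__ == "__main__":
    global_batch = 8192
    for p in (1, 4, 8, 16):
        ind, coop = independent_work(global_batch, p), cooperative_work(global_batch)
        print(f"P={p:2d}  independent={ind:10.0f}  cooperative={coop:10.0f}  "
              f"saved={1.0 - coop / ind:6.1%}")
```

Because the curve is concave, the gap between the $P$ small evaluations and the single large one widens as $P$ grows, which is exactly the redundancy that cooperation removes.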
With $P$ PEs and $|S^0|$ global batch size, the work performed by independent minibatching vs cooperative minibatching can be compared by looking at $x = \frac{1}{P} |S^0|$ vs $x = |S^0|$ respectively. We also ran experiments that show that graph partitioning using METIS (Karypis & Kumar [1998]) prior to the start of training can help the scenarios where communication overhead is significant. The forward-backward time goes from 13.0ms to 12.0ms on papers100M with LABOR-0 on 4 NVIDIA A100 GPUs with such partitioning due to reduced communication overhead using the same setup as Table 2. Increasing the number of GPUs increases the advantage of cooperative minibatching compared to independent minibatching. The forward-backward time on mag240M with LABOR-0 is 200 (same as independent baseline), 194, 187 and 183 ms with 1, 2, 3 and 4 cooperating PEs, respectively measured on the NVIDIA DGX Station A100 machine. The decrease in runtime with increasingly cooperating PEs is due to the decrease in redundant work they have to perform. Even though the batch size per PE is constant, the runtime goes down similar to the plots in the top row of Figure 2 except that it follows $\frac{kE||S^2||}{|S^0|}$, which gives the average number of edges in the 3rd layer when a sampler with fanout $k$ is used. Additionally, we demonstrate that there is no significant model convergence difference between independent vs cooperative minibatching in Appendix A.7. ### 4.3.1 Cooperative-Dependent Minibatching Table 4: Runtime improvements of Dependent Minibatching for Independent and Cooperative Minibatching methods compiled from the Cache, $\kappa$ and Cache columns of Table 2 with LABOR-0. Making consecutive minibatches dependent increases temporal locality, hence reducing cache misses. | Dataset & Model | I/C | 4 GPUs | 8 GPUs | 16 GPUs | |----------------|-----|--------|--------|--------| | papers100M GCN | Indep | 50% | 57% | 37% | | | Coop | 74% | 78% | 112% | | mag240M R-GCN | Indep | 37% | 41% | 26% | | | Coop | 58% | 50% | 82% | We use the same experimental setup as Section 4.3 but vary the $\kappa$ parameter to show that cooperative minibatching can be used with dependent batches (Figure 4b). We use a cache size of 1M per PE. Cooperative feature loading effectively increases the global cache size since each PE caches only the vertices assigned to them while independent feature loading can have duplicate entries across caches. For our largest dataset mag240M, on top of $1.4\times$ reduced work due to cooperative minibatching alone, the cache miss rates were reduced by more than $2\times$, making the total improvement $3\times$. Runtime results for $\kappa \in \{1, 256\}$ are presented in Table 2, the Feature Copy Cache and Cache, $\kappa$ columns. Table 4 summarizes these results by dividing the runtimes in Cache by Cache, $\kappa$ and reporting percentage improvements. ## 5 Conclusions In this paper, we investigated the difference between DNN and GNN minibatch training. We observed that the cost of processing a minibatch is a concave function of batch size in GNNs, unlike DNNs where the cost scales linearly. We then presented theorems that this is indeed the case for every graph and then proceeded to propose two approaches to take advantage of cost concavity. The first approach, which we call cooperative minibatching proposes to partition a minibatch between multiple PEs and process it cooperatively. 
This is in contrast to existing practice of having independent minibatches per PE, and avoids duplicate work that is a result of vertex and edge repetition across PEs. The second approach proposes the use of consecutive dependent minibatches, through which the temporal locality of vertex and edge accesses is manipulated. As batches get more dependent, the locality increases. We demonstrate this increase in locality by employing an LRU-cache for vertex embeddings on GPUs. Finally, we show that these approaches can be combined without affecting convergence, and speed up multi-GPU GNN training by up to 64% for free. REFERENCES Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In Proceedings of The 33rd International Conference on Machine Learning, pp. 699–707. PMLR, 20–22 Jun 2016. URL https://proceedings.mlr.press/v48/allen-zhu16.html. I. Artico, I. Smolyarenko, V. Vinciotti, and E. C. Wit. How rare are power-law networks really? In Royal Society, volume 476, 2020. URL http://doi.org/10.1098/rspa.2019.0742. Muhammed Fatih Baln and Ümit V. Çatalyürek. Layer-neighbor sampling — defusing neighborhood explosion in GNNs. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=Kd5W4JRsFv. D.P. Bertsekas. Incremental least squares methods and the extended kalman filter. In Proceedings of 1994 33rd IEEE Conference on Decision and Control, volume 2, pp. 1211–1214 vol.2, 1994. doi: 10.1109/CDC.1994.411166. Zhenkun Cai, Qihui Zhou, Xiao Yan, Da Zheng, Xiang Song, Chenguang Zheng, James Cheng, and George Karypis. Dsp: Efficient gnn training with multiple gpus. In Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming, PPoPP ’23, pp. 392–404, 2023. doi: 10.1145/3572848.3577528. Avery Ching, Sergey Edunov, Maja Kabiljo, Dionysios Logothetis, and Sambavi Muthukrishnan. One trillion edges: Graph processing at facebook-scale. Proc. VLDB Endow., 8(12):1804–1815, aug 2015. doi: 10.14778/2824032.2824077. Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc’ aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc Le, and Andrew Ng. Large scale distributed deep networks. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger (eds.), Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012. URL https://proceedings.neurips.cc/paper/2012/file/6aca97005c68f1206823815f66102863-Paper.pdf. Swapnil Gandhi and Anand Padmanabha Iyer. P3: Distributed deep graph learning at scale. In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21), pp. 551–568, 2021. Boris Ginsburg, Igor Gitman, and Yang You. Large batch training of convolutional networks with layer-wise adaptive rate scaling. Technical Report arXiv:1708.03888, ArXiv, September 2017. URL http://arxiv.org/abs/1708.03888. Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. Technical Report arXiv:1706.02677, ArXiv, April 2018. URL http://arxiv.org/abs/1706.02677. William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pp. 1025–1035, 2017. 
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 2020-Decem(NeurIPS):1–34, 2020. Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. Ogb-lsc: A large-scale challenge for machine learning on graphs, 2021. URL https://arxiv.org/abs/2103.09430. Zhihao Jia, Sina Lin, Rex Ying, Jiaxuan You, Jure Leskovec, and Alex Aiken. Redundancy-free computation for graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, pp. 997–1005. Association for Computing Machinery, 2020. URL https://doi.org/10.1145/3394486.3403142.
PCm1oT8pZI
According to the experimental results in Table 4, the OoDWSR remains high after model extraction. Could you explain why model extraction methods that use i.i.d. data can also extract watermarks that were generated using OoD data?
SAFE AND ROBUST WATERMARK INJECTION WITH A SINGLE OoD IMAGE Shuyang Yu1, Junyuan Hong1,2, Haobo Zhang1, Haotao Wang2, Zhaoyang Wang2 and Jiayu Zhou1 1Department of Computer Science and Engineering, Michigan State University 2Department of Electrical and Computer Engineering, University of Texas at Austin {yushuyan,hongju12,zhan2060,jiayuz}@msu.edu,{htwang,atlaswang}@utexas.edu ABSTRACT Training a high-performance deep neural network requires large amounts of data and computational resources. Protecting the intellectual property (IP) and commercial ownership of a deep model is challenging yet increasingly crucial. A major stream of watermarking strategies implants verifiable backdoor triggers by poisoning training samples, but these are often unrealistic due to data privacy and safety concerns and are vulnerable to minor model changes such as fine-tuning. To overcome these challenges, we propose a safe and robust backdoor-based watermark injection technique that leverages the diverse knowledge from a single out-of-distribution (OoD) image, which serves as a secret key for IP verification. The independence of training data makes it agnostic to third-party promises of IP security. We induce robustness via random perturbation of model parameters during watermark injection to defend against common watermark removal attacks, including fine-tuning, pruning, and model extraction. Our experimental results demonstrate that the proposed watermarking approach is not only time- and sample-efficient without training data, but also robust against the watermark removal attacks above. Codes are available: https://github.com/illidanlab/Single_oowatermark. 1 INTRODUCTION In the era of deep learning, training a high-performance large model requires curating a massive amount of training data from different sources, powerful computational resources, and often great efforts from human experts. For example, large language models such as GPT-3 are large models trained on private datasets, incurring a significant training cost (Floridi & Chirriatti [2020]). The risk of illegal reproduction or duplication of such high-value DNN models is a growing concern. The recent Facebook leaked LLAMA model provides a notable example of this risk (Hern [2023]). Therefore, it is essential to protect the intellectual property of the model and the rights of the model owners. Recently, watermarking (Adi et al. [2018], Darvish Rouhani et al. [2019], Uchida et al. [2017], Zhang et al. [2018], Chen et al. [2021], Li et al. [2021]) has been introduced to protect the copyright of the DNNs. Most existing watermarking methods can be categorized into two mainstreams, including parameter-embedding (Kuriyayashi et al. [2021], Uchida et al. [2017], Mehta et al. [2022]) and backdoor-based (Goldblum et al. [2022], Li et al. [2022]) techniques. Parameter-embedding techniques require white-box access to the suspicious model, which is often unrealistic in practical detection scenarios. This paper places emphasis on backdoor-based approaches, which taint the training dataset by incorporating trigger patches into a set of images referred to as verification samples (trigger set), and modifying the labels to a designated class, forcing the model to memorize the trigger pattern during fine-tuning. Then the owner of the model can perform an intellectual property (IP) inspection by assessing the correspondence between the model’s outputs on the verification samples with the trigger and the intended target labels. 
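As a concrete illustration of this backdoor-based protocol, the sketch below stamps a small trigger patch onto part of a batch and relabels those samples to the owner-chosen target class, producing the clean/poisoned split used during watermark fine-tuning. The patch size, its bottom-right placement, and the 10% poison ratio are illustrative choices, not the exact triggers evaluated later in the paper.

```python
import torch


def stamp_trigger(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Overlay a small trigger patch on the bottom-right corner of each image.

    `images` has shape (N, C, H, W); the patch location and pattern here are
    illustrative placeholders for the trigger function Gamma(.).
    """
    triggered = images.clone()
    ph, pw = patch.shape[-2:]
    triggered[:, :, -ph:, -pw:] = patch
    return triggered


def build_trigger_set(images: torch.Tensor, labels: torch.Tensor,
                      patch: torch.Tensor, target_class: int,
                      poison_ratio: float = 0.1):
    """Split a batch into clean samples and poisoned verification samples.

    Poisoned samples receive the trigger and are relabeled to `target_class`,
    which is the association the fine-tuning stage memorizes as the watermark.
    """
    n_poison = int(poison_ratio * images.size(0))
    poisoned_x = stamp_trigger(images[:n_poison], patch)
    poisoned_y = torch.full((n_poison,), target_class, dtype=torch.long)
    clean_x, clean_y = images[n_poison:], labels[n_poison:]
    return (clean_x, clean_y), (poisoned_x, poisoned_y)
```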
Existing backdoor-based watermarking methods suffer from major challenges in safety, efficiency, and robustness. Typically injection of backdoors requires full or partial access to the original training data. When protecting models, such access can be prohibitive, mostly due to data safety and confidentiality. For example, someone trying to protect a model fine-tuned upon a foundation model and a model publisher vending models uploaded by their users. Another example is an independent IP protection... department or a third party that is in charge of model protection for redistribution. Yet another scenario is federated learning [Konečný et al., 2016], where the server does not have access to any in-distribution (ID) data, but is motivated to inject a watermark to protect the ownership of the global model. Despite the high practical demands, watermark injection without training data is barely explored. Although some existing methods tried to export or synthesize out-of-distribution (OoD) samples as triggers to insert watermark [Wang et al., 2022b; Zhang et al., 2018], the original training data is still essential to maintain the utility of the model, i.e., prediction performance on clean samples. Li & Wang (2022) proposed a strategy that adopts a Data-Free Distillation (DFD) process to train a generator and uses it to produce surrogate training samples. However, training the generator is time-consuming and may take hundreds of epochs [Fang et al., 2019]. Another critical issue with backdoor-based watermarks is their known vulnerability against minor model changes, such as fine-tuning [Adi et al., 2018; Uchida et al., 2017; Garg et al., 2020], and this vulnerability greatly limited the practical applications of backdoor-based watermarks. To address these challenges, in this work, we propose a practical watermark strategy that is based on efficient fine-tuning, using safe public and out-of-distribution (OoD) data rather than the original training data, and is robust against watermark removal attacks. Our approach is inspired by the recent discovery of the expressiveness of a powerful single image [Asano & Saeed, 2023; Asano et al., 2019]. Specifically, we propose to derive patches from a single image, which are OoD samples with respect to the original training data, for watermarking. To watermark a model, the model owner or IP protection unit secretly selects a few of these patches, implants backdoor triggers on them, and uses fine-tuning to efficiently inject the backdoor into the model to be protected. The IP verification process follows the same as other backdoor-based watermark approaches. To increase the robustness of watermarks against agnostic removal attacks, we design a parameter perturbation procedure during the fine-tuning process. Our contributions are summarized as follows. - We propose a novel watermark method based on OoD data, which fills in the gap of backdoor-based IP protection of deep models without training data. The removal of access to the training data enables the proposed approach possible for many real-world scenarios. - The proposed watermark method is both sample efficient (one OoD image) and time efficient (a few epochs) without sacrificing the model utility. - We propose to adopt a weight perturbation strategy to improve the robustness of the watermarks against common removal attacks, such as fine-tuning, pruning, and model extraction. 
We show the robustness of watermarks through extensive empirical results, and they persist even in an unfair scenario where the removal attack uses a part of in-distribution data. ## 2 BACKGROUND ### 2.1 DNN WATERMARKING Existing watermark methods can be categorized into two groups, parameter-embedding and backdoor-based techniques, differing in the information required for verification. **Parameter-embedding** techniques embed the watermark into the parameter space of the target model [Darvish Rouhani et al., 2019; Uchida et al., 2017; Kuribayashi et al., 2021; Mehta et al., 2022]. Then the owner can verify the model identity by comparing the parameter-oriented watermark extracted from the suspect model versus that of the owner model. For instance, Kuribayashi et al. (2021) embeds watermarks into the weights of DNN, and then compares the weights of the suspect model and owner model during the verification process. However, these kinds of techniques require a white-box setting: the model parameters should be available during verification, which is not a practical assumption facing real-world attacks. For instance, an IP infringer may only expose an API of the stolen model for queries to circumvent the white-box verification. **Backdoor-based** techniques are widely adopted in a black-box verification, which implant a backdoor trigger into the model by fine-tuning the pre-trained model with a set of poison samples (also denoted as the trigger set) assigned to one or multiple secret target class [Zhang et al., 2018; Le Merrer et al., 2020; Goldblum et al., 2022; Li et al., 2022]. Suppose $D_c$ is the clean dataset and we craft $D_p$ by poisoning another set of clean samples. The backdoor-based techniques can be unified as minimizing the following objective: $$\min_\theta \sum_{(x,y) \in D_c} \ell(f_\theta(x), y) + \sum_{(x',y') \in D_p} \ell(f_\theta(\Gamma(x')), t),$$ where $\Gamma(x)$ adds a trigger pattern to a normal sample, $t$ is the pre-assigned target label, $f_\theta$ is a classifier parameterized by $\theta$, and $\ell$ is the cross-entropy loss. The key intuition of backdoor training is to make models memorize the shortcut patterns while ignoring other semantic features. A watermarked model should satisfy the following desired properties: 1) **Persistent utility.** Injecting backdoor-based watermarks into a model should retain its performance on original tasks. 2) **Removal resilience.** Watermarks should be stealthy and robust against agnostic watermark removal attacks (Orekondy et al., 2019; Chen et al., 2022; Hong et al., 2023). Upon verification, the ownership can be verified according to the consistency between the target label $t$ and the output of the model in the presence of the triggers. However, conventional backdoor-based watermarking is limited to scenarios where clean and poisoned dataset follows the same distribution as the training data of the pre-trained model. For example, in Federated Learning (McMahan et al., 2017), the IP protector on the server does not have access to the client’s data. Meanwhile, in-training backdoor injection could be voided by backdoor-resilient training (Wang et al., 2022a). We reveal that neither the training data (or equivalent i.i.d. data) nor the in-training strategy is necessary for injecting watermarks into a well-trained model, and merely using clean and poisoned OoD data can also insert watermarks after training. **Backdoor-based watermarking without i.i.d. 
data.** Among backdoor-based techniques, one kind of technique also tried to export or synthesize OoD samples as the trigger set to insert a watermark. For instance, Zhang et al. (2018) exported OoD images from other classes that are irrelevant to the original tasks as the watermarks. Wang et al. (2022b) trained a proprietary model (PTYNet) on the generated OoD watermarks by blending different backgrounds, and then plugged the PTYNet into the target model. However, for these kinds of techniques, i.i.d. samples are still essential to maintain the main-task performance. On the other hand, data-free watermark injection is an alternative to OoD-based methods. Close to our work, Li & Wang (2022) proposed a data-free method that first adopts a Data-Free Distillation method to train a generator, and then uses the generator to produce surrogate training samples to inject watermarks. However, according to Fang et al. (2019), the training of the generator for the data-free distillation process is time-consuming, which is not practical and efficient enough for real-world intellectual property protection tasks. ### 2.2 Watermark Removal Attack In contrast to protecting the IP, a series of works have revealed the risk of watermark removal to steal the IP. Here we summarize three mainstream types of watermark removal techniques: fine-tuning, pruning, and model extraction. We refer to the original watermarked model as the victim model and the stolen copy as the suspect model under removal attacks. **Fine-tuning** assumes that the adversary has a small set of i.i.d. samples and has access to the victim model architectures and parameters (Adi et al., 2018; Uchida et al., 2017). The adversary attempts to fine-tune the victim model using the i.i.d. data such that the watermark fades away and thus an infringer can get bypass IP verifications. **Pruning** has the same assumptions as fine-tuning. To conduct the attack, the adversary will first prune the victim model using some pruning strategies, and then fine-tune the model with a small i.i.d. dataset (Liu et al., 2018b; Renda et al., 2020). **Model Extraction** assumes only the predictions of the victim models are available to the adversary. To steal the model through the API, given a set of auxiliary samples, the adversary first queries the victim model for auxiliary samples to obtain the annotated dataset, and then a copy of the victim model is trained based on this annotated dataset (Juuti et al., 2019; Tramer et al., 2016; Papernot et al., 2017; Orekondy et al., 2019; Yuan et al., 2022). ### 3 Method **Problem Setup.** Within the scope of the paper, we assume that training data or equivalent i.i.d. data are not available for watermarking due to data privacy concerns. This assumption casts a substantial challenge on maintaining standard accuracy on i.i.d. samples while injecting backdoors. Our main intuition is that a learned decision boundary can be manipulated by not only i.i.d. samples but also OoD samples. Moreover, recent studies (Asano & Saeed, 2023; Asano et al., 2019) showed a surprising result that one single OoD image is enough for learning low-level visual representations provided with strong data augmentations. Thus, we conjecture that it is plausible to inject backdoor-based watermarks efficiently to different parts of the pre-trained representation space by exploiting the Figure 1: Framework of the proposed safe and robust watermark injection strategy. 
It first constructs a surrogate dataset from the single-image OoD data source provided with strong augmentation used as the secret key, which is confidential to any third parties. Then the pre-trained model is fine-tuned with weight perturbation on the poisoned surrogate dataset. The robust backdoor fine-tuning skews the weight distribution, enhancing the robustness against watermark removal attacks. diverse knowledge from one single OoD image. Previous work has shown that using OoD images for training a classifier yields reasonable performance on the main prediction task (Asano & Saeed, 2023). Moreover, it is essential to robustify the watermark against potential removal attacks. Therefore, our injection process comprises two steps: Constructing surrogate data to be poisoned and robust watermark injection. The framework of the proposed strategy is illustrated in Fig. 1. 3.1 Constructing Safe Surrogate Dataset We first augment one OoD source image multiple times to generate an unlabeled surrogate dataset $\tilde{D}$ of a desired size according to Asano & Saeed (2023); Asano et al. (2019). For safety considerations, the OoD image is only known to the model owner. The source OoD images are publicly available and properly licensed for personal use. To “patchify” a large single image, the augmentation composes multiple augmentation methods in sequence: cropping, rotation and shearing, and color jittering using the hyperparameters from Asano et al. (2019). During training, we further randomly augment pre-fetched samples by cropping and flipping, and we use the predictions from the pre-trained model $\theta_0$ as supervision. Suppose $\theta$ is initialized as $\theta_0$ of the pre-trained model. To inject watermarks, we split the unlabeled surrogate dataset $D = \tilde{D}_c \cup \tilde{D}_p$ where $\tilde{D}_c$ is the clean dataset, and $\tilde{D}_p$ is the poisoned dataset. For the poisoned dataset $\tilde{D}_p$, by inserting a trigger pattern $\Gamma(\cdot)$ into the original sample in $\tilde{D}_p$, the sample should be misclassified to one pre-assigned target label $t$. Our goal is to solve the following optimization problem: $$\min_{\theta} L_{\text{inj}}(\theta) := \sum_{x \in D_c} \ell(f_\theta(x), f_{\theta_0}(x)) + \sum_{x' \in D_p} \ell(f_\theta(\Gamma(x')), t).$$ The first term is used to ensure the high performance of the original task (Asano & Saeed, 2023), and the second term is for watermark injection. The major difference between our method and Asano & Saeed (2023) is that we use the generated data for fine-tuning the same model instead of distilling a new model. We repurpose the benign generated dataset for injecting watermarks. Considering a black-box setting, to verify whether the suspect model $M_s$ is a copy of our protected model $M$, we can use the generated surrogate OoD dataset as safe verification samples. As the generation is secreted, no one other than the owner can complete the verification. Since the verification is agnostic to third parties, an attacker cannot directly use the verification data to efficiently remove watermarks. Thus, we can guarantee the safety of the verification. Formally, we check the probability of watermarked verification samples that can successfully mislead the model $M_s$ to predict the pre-defined target label $t$, denoted as watermark success rate (WSR). 
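A minimal sketch of how this WSR check might be computed on the owner's secret verification set is given below; the model, the verification tensor, and the bottom-right trigger placement are placeholders rather than the exact implementation used in the paper.

```python
import torch


@torch.no_grad()
def watermark_success_rate(model: torch.nn.Module,
                           verification_images: torch.Tensor,
                           trigger_patch: torch.Tensor,
                           target_class: int) -> float:
    """Fraction of triggered verification samples classified as the target label t.

    `verification_images` stand for the owner's secret OoD patches; the trigger
    placement (bottom-right corner) is an illustrative choice.
    """
    model.eval()
    triggered = verification_images.clone()
    ph, pw = trigger_patch.shape[-2:]
    triggered[:, :, -ph:, -pw:] = trigger_patch
    predictions = model(triggered).argmax(dim=1)
    return (predictions == target_class).float().mean().item()
```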
Since the ownership of stolen models can be claimed by the model owner if the suspect model’s behavior differs significantly from any non-watermarked models (Jia et al., 2021), if the WSR is larger than a random guess, and also far exceeds the probability of a non-watermarked model classifying the verification samples as $t$, then $M_s$ will be considered as a copy of $M$ with high probability. A T-test between the output logits of the suspect model $M_s$ and a non-watermarked model on the verification dataset is also used as a metric to evaluate whether $M_s$ is a stolen copy. Compared with traditional watermark injection techniques, i.i.d. data is also unnecessary in the verification process. 3.2 Robust Watermark Injection According to Adi et al. (2018), Uchida et al. (2017), the watermark may be removed by fine-tuning when adversaries have access to the i.i.d. data. Watermark removal attacks such as fine-tuning and pruning will shift the model parameters on a small scale to maintain standard accuracy and remove watermarks. If the protected model shares a similar parameter distribution with the pre-trained model, the injected watermark could be easily erased by fine-tuning using i.i.d. data or adding random noise to parameters (Garg et al., 2020). To defend against removal attacks, we intuitively aim to make our watermark robust and persistent within a small scale of parameter perturbations. Backdoor training with weight perturbation. To this end, we introduce adversarial weight perturbation (WP) into backdoor fine-tuning. First, we simulate the watermark removal attack that maximizes the loss to escape from the watermarked local minima. We let $\theta = (w, b)$ denote the model parameter, where $\theta$ is composed of weight $w$ and bias $b$. The weight perturbation is defined as $v$. Then, we adversarially minimize the loss after the simulated removal attack. The adversarial minimization strategy echoes some previous sharpness-aware optimization principles for robust model poisoning (He et al., 2023). Thus, the adversarial training objective is formulated as: $$\min_{w, b} \max_{v \in V} L_{\text{per}}(w + v, b),$$ where $$L_{\text{per}}(w + v, b) := L_{\text{inj}}(w + v, b) + \beta \sum_{x \in \hat{D}_p, x' \in \hat{D}_p} \text{KL}(f(w+v,b)(x), f(w+v,b)(\Gamma(x'))).$$ In Eq. (1), we constrain the weight perturbation $v$ within a set $V$, $\text{KL}(\cdot, \cdot)$ is the Kullback–Leibler divergence, and $\beta$ is a positive trade-off parameter. The first term is identical to standard watermark injection. Inspired by previous work (Lang et al., 2019), the second term can preserve the main task performance and maintain the representation similarity between poisoned and clean samples in the presence of weight perturbation. Eq. (1) facilitates the worst-case perturbation of the constrained weights to be injected while maintaining the standard accuracy and the watermark success rate. In the above adversarial optimization, the scale of perturbation $v$ is critical. If the perturbation is too large, the anomalies of the parameter distribution could be easily detected by an IP infringer (Rakin et al., 2020). Since the weight distributions differ by layer of the network, the magnitude of the perturbation should vary accordingly from layer to layer. Following Wu et al. (2020), we adaptively restrict the weight perturbation $v_l$ for the $l$-th layer weight $w_l$ as $$\|v_l\| \leq \gamma \|w_l\|,$$ where $\gamma \in (0, 1)$. The set $V$ in Eq. 
(1) will be decomposed into balls with radius $\gamma \|w_l\|$ per layer. Optimization. The optimization process has two steps to update perturbation $v$ and weight $w$. (1) $v$-step: To consider the constraint in (2), we need to use a projection. Note that $v$ is layer-wisely updated, we need a projection function $\Pi(\cdot)$ that projects all perturbations $v_l$ that violate constraint (Eq. (2)) back to the surface of the perturbation ball with radius $\gamma \|w_l\|$. To achieve this goal, we define $\Pi_\gamma$ in Eq. (3) (Wu et al., 2020): $$\Pi_\gamma(v_l) = \begin{cases} \gamma \frac{\|w_l\|}{\|v_l\|} v_l & \text{if } \|v_l\| > \gamma \|w_l\| \\ v_l & \text{otherwise} \end{cases}$$ With the projection, the computation of the perturbation $v$ in Eq. (1) is given by $v \leftarrow \Pi_\gamma \left( v + \eta_1 \frac{\nabla_v L_{\text{per}}(w+v,b)}{\|\nabla_v L_{\text{per}}(w+v,b)\|} \|w\| \right)$, where $\eta_1$ is the learning rate. (2) $w$-step: With the updated perturbation $v$, the weight of the perturbed model $\theta$ can be updated using $w \leftarrow w - \eta_2 \nabla_w L_{\text{per}}(w+v,b)$, where $\eta_2$ is the learning rate. 4 Experiments In this section, we conduct comprehensive experiments to evaluate the effectiveness of the proposed watermark injection method. Datasets. We use CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and GTSRB (Stallkamp et al., for model utility evaluation. Both CIFAR-10 and CIFAR-100 contain $32 \times 32$ with 10 and 100 classes, respectively. The GTSRB consists of sign images in 43 classes. All images in GTSRB are reshaped as $32 \times 32$. Note that, these datasets are neither used for our watermark injection nor model verification, they are only used to evaluate the standard accuracy of our watermarked model. **OoD image.** OoD image is used for watermark injection and ownership verification. We use three different OoD images as our candidate source image to inject watermarks, denoted as “City”\footnote{https://pixabay.com/photos/japan-ueno-japanese-street-sign-217883/}, “Animals”\footnote{https://www.teashub.io/viewwp/wJmBoJ_jungle-animal-wallpaper-wallpapersafari-jungle-animal/} and “Bridge”\footnote{https://commons.wikimedia.org/wiki/File:GG-ftpoint-bridge-2.jpg}. We use “City” by default unless otherwise mentioned. **Evaluation metrics.** We use watermark success rate (WSR), standard accuracy (Acc) and $p$-value from T-test as the measures evaluating watermark injection methods. Acc is the classification accuracy measured on a clean i.i.d. test set. IDWSR is the portion of watermarked i.i.d. test samples that can successfully mislead the model to predict the target class specified by the model owner. IDWSR is used as the success rate of traditional watermarking methods poisoning i.i.d. data and used as a reference for our method. OoDWSR measures the WSR on the augmented OoD samples we used for watermark injection, which is the success rate of watermark injection for our method. T-test takes the output logits of the non-watermarked model and suspect model $M_s$ as input, and the null hypothesis is the logits distribution of the suspect model is identical to that of a non-watermarked model. If the $p$-value of the T-test is smaller than the threshold 0.05, then we can reject the null hypothesis and statistically verify that $M_s$ differs significantly from the non-watermarked model, so the ownership of $M_s$ can be claimed \cite{bia2021adversarial}. 
Higher OoDWSR with a p-value smaller than the threshold and meanwhile a larger Acc indicate a successful watermark injection. **Trigger patterns.** To attain the best model with the highest watermark success rate, we use the OoDWAR to choose triggers from 6 different backdoor patterns: BadNets with grid (badnet_grid) \cite{gu2019badnets}, 10-invisible (l0_inv) \cite{li2020invisiblenets}, smooth \cite{zeng2021smooth}, Trojan Square $3 \times 3$ (trojan\_3$\times$3), Trojan Square $8 \times 8$ (trojan\_8$\times$8), and Trojan watermark (trojan_wm) \cite{liu2018trojan}. **Pre-training models.** The detailed information of the pre-trained models is shown in Table 1. All the models are pre-trained on clean samples until convergence, with a learning rate of 0.1, SGD optimizer, and batch size 128. We follow public resources to conduct the training such that the performance is close to state-of-the-art results. **Watermark removal attacks.** To evaluate the robustness of our proposed method, we consider three kinds of attacks on victim models: 1) FT: Fine-tuning includes three kinds of methods: a) fine-tune all layers (FT-AL), b) fine-tune the last layer and freeze all other layers (FT-LL), c) re-initialize the last layer and then fine-tune all layers (RT-AL), 2) Pruning-r% indicates pruning r% of the model parameters which has the smallest absolute value, and then fine-tuning the model on clean i.i.d. samples to restore accuracy. 3) Model Extraction: We use knockoff \cite{orekondy2019knockoff} as an example of the model extraction attack, which queries the model to get the predictions of an auxiliary dataset (ImagenetDS \cite{chrabaszcz2017imagenetds} is used in our experiments), and then clones the behavior of a victim model by re-training the model with queried image-prediction pairs. Assume the adversary obtains 10% of the training data of the pre-trained models for fine-tuning and pruning. Fine-tuning and pruning are conducted for 50 epochs. Model extraction is conducted for 100 epochs. ### 4.1 WATERMARK INJECTION The poisoning ratio of the generated surrogate dataset is 10%. For CIFAR-10 and GTSRB, we fine-tune the pre-trained model for 20 epochs (first 5 epochs are with WP). For CIFAR-100, we fine-tune the pre-trained model for 30 epochs (first 15 epochs are with WP). The perturbation constraint $\gamma$ in Eq. (2) is fixed at 0.1 for CIFAR-10 and GTSRB, and 0.05 for CIFAR-100. The trade-off parameter $\beta$ in Eq. (1) is fixed at 6 for all the datasets. The watermark injection process of CIFAR-10 is shown in Fig. 2 and watermark injection for the other two datasets can be found in Appendix A.1. We observe that the injection process is efficient, it takes only 10 epochs for CIFAR-10 to achieve stable high standard accuracy and OoDWSR. The highest OoDWSR for CIFAR-10 is 95.66% with standard accuracy degradation of less than 3%. In the following experiments, we choose triggers with top-2 OoDWSR and standard accuracy degradation less than 3% as the recommended watermark patterns. | Dataset | Class num | DNN architecture | Acc | |-----------|-----------|------------------|---------| | CIFAR-10 | 10 | WRN-16-2 | 0.9400 | | CIFAR-100 | 100 | WRN-16-2 | 0.7234 | | GTSRB | 43 | ResNet18 \cite{he2015deep} | 0.9366 | Table 1: Pre-trained models. Figure 2: Acc, ID WSR, and OoD WSR for watermark injection. 
| Dataset | Trigger | Non-watermarked model | Victim model | Watermark removal | Suspect model | p-value | |---------|---------|-----------------------|--------------|-------------------|---------------|---------| | CIFAR-10 | trojan_wm | 0.0487 | 0.9102 | 0.9768 | 0.9566 | FT-AL | 0.9191 | 0.9769 | 0.9678 | 0.0000 | | | | | | | | FT-LL | 0.7345 | 0.9990 | 0.9972 | 0.0000 | | | | | | | | RT-AL | 0.8706 | 0.4434 | 0.5752 | 1.0103e-12 | | | | | | Pruning-20% | | 0.9174 | 0.9771 | 0.9641 | 0.0000 | | | | | | Pruning-50% | | 0.9177 | 0.9780 | 0.9658 | 0.0000 | | CIFAR-10 | trojan_8x8 | 0.0481 | 0.9178 | 0.9328 | 0.9423 | FT-AL | 0.9377 | 0.9533 | 0.9797 | 0.0000 | | | | | | FT-LL | 0.7400 | 0.9990 | 0.9945 | 0.0000 | | | | | | kT-AL | 0.8675 | 0.0782 | 0.2419 | 2.9829e-241 | | | | | | Pruning-20% | | 0.9197 | 0.9560 | 0.9793 | 2.0500e-08 | | | | | | Pruning-50% | | 0.9190 | 0.9580 | 0.9801 | 5.1651e-247 | | CIFAR-100 | trojan_8x8 | 0.0001 | 0.6978 | 0.7024 | 0.8761 | FT-AL | 0.6712 | 0.5602 | 0.7743 | 0.0012 | | | | | | FT-LL | 0.4984 | 0.9476 | 0.9641 | 0.0066 | | | | | | RT-AL | 0.5319 | 0.0227 | 0.0700 | 0.0090 | | | | | | Pruning-20% | | 0.6642 | 0.6300 | 0.7448 | 0.0020 | | | | | | Pruning-50% | | 0.6645 | 0.6953 | 0.7960 | 0.0049 | | l0_inv | 0.0002 | 0.6948 | 0.7046 | 0.5834 | FT-AL | 0.6710 | 0.7595 | 0.5491 | 0.0206 | | | | | | FT-LL | 0.4966 | 0.9991 | 0.6097 | 0.0106 | | | | | | RT-AL | 0.5281 | 0.0829 | 0.1232 | 0.0010 | | | | | | Pruning-20% | | 0.6704 | 0.7817 | 0.5517 | 0.0099 | | | | | | Pruning-50% | | 0.6651 | 0.8288 | 0.5530 | 0.0025 | | GTSRB | smooth | 0.0145 | 0.9146 | 0.1329 | 0.9442 | FT-AL | 0.8623 | 0.0051 | 0.6772 | 4.4360e-10 | | | | | | FT-LL | 0.6291 | 0.0487 | 0.9527 | 0.0006 | | | | | | RT-AL | 0.8622 | 0.0041 | 0.7431 | 0.0000 | | | | | | Pruning-20% | | 0.8625 | 0.0053 | 0.6798 | 0.0179 | | | | | | Pruning-50% | | 0.8628 | 0.0052 | 0.6778 | 0.0215 | | | trojan_wm | 0.0220 | 0.9089 | 0.7435 | 0.7513 | FT-AL | 0.8684 | 0.3257 | 0.1726 | 0.0117 | | | | | | FT-LL | 0.5935 | 0.7429 | 0.5751 | 7.4281e-11 | | | | | | RT-AL | 0.8519 | 0.1170 | 0.0684 | 0.0000 | | | | | | Pruning-20% | | 0.8647 | 0.3235 | 0.1779 | 0.0131 | | | | | | Pruning-50% | | 0.8610 | 0.3281 | 0.1747 | 0.0000 | Table 2: Evaluation of watermarking against fine-tuning and pruning on three datasets. ### 4.2 Defending Against Fine-tuning & Pruning We evaluate the robustness of our proposed method against fine-tuning and pruning in Table 2 where victim models are watermarked models, and suspect models are stolen copies of victim models using watermark removal attacks. OoDWSR of the pre-trained model in Table 1 is the probability that a non-watermarked model classifies the verification samples as the target label. If the OoDWSR of a suspect model far exceeds that of the non-watermarked model, the suspect model can be justified as a copy of the victim model (Jia et al., 2021). FT-AL and pruning maintain the performance of the main classification task with an accuracy degradation of less than 6%, but OoDWSR remains high for all the datasets. Compared with FT-AL, FT-LL will significantly bring down the standard accuracy by over 15% for all the datasets. Even with the large sacrifice of standard accuracy, FT-LL still cannot wash out the injected watermark, and the OoDWSR even increases for some of the datasets. RT-AL loses 4.50%, 16.63%, and 5.47% (mean value for two triggers) standard accuracy respectively for three datasets. 
Yet, OoDWSR in RT-AL is larger than the one of the random guess and non-watermarked models. To statistically verify the ownership, we conduct a T-test between the non-watermarked model and the watermarked model. The p-value is the probability that the two models behave similarly. p-values for all the datasets are close to 0. The low p-values indicate that the suspect models have significantly different behaviors compared with non-watermarked models in probability, at least 95%. Thus, these suspect models cannot get rid of the suspicion of copying our model \( M \) with a high chance. IDWSR is also used here as a reference, although we do not use i.i.d. data for verification of the ownership of our model. We observe that even though watermark can be successfully injected into | Trigger | Training data | Victim model | Suspect model | |-----------|---------------|--------------|---------------| | | Acc | IDWSR | OoDWSR | Acc | IDWSR | OoDWSR | | trojan_wm | clean | 0.9400 | 0.0639 | 0.0487 | 0.8646 | 0.0864 | 0.0741 | | | ID | 0.9378 | 1.0000 | 0.9997 | 0.8593 | 0.0413 | 0.0195 | | | OoD | 0.9102 | 0.9768 | 0.9566 | 0.8706 | 0.4434 | **0.5752** | | trojan_8x8| clean | 0.9400 | 0.0161 | 0.0481 | 0.8646 | 0.0323 | 0.0610 | | | ID | 0.9393 | 0.9963 | 0.9992 | 0.8598 | 0.0342 | 0.0625 | | | OoD | 0.9178 | 0.9328 | 0.9423 | 0.8675 | 0.0782 | **0.2419** | Table 3: Comparison of watermarking methods against fine-tuning watermark removal using different training data. OoD injection is much more robust compared with i.i.d. injection. both our generated OoD dataset and i.i.d. samples (refer to IDWSR and OoDWSR for victim model), they differ in their robustness against these two watermark removal attacks. For instance, for smooth of GTSRB, after fine-tuning or pruning, IDWSR drops under 1%, which is below the random guess, however, OoDWSR remains over 67%. This phenomenon is also observed for other triggers and datasets. Watermarks injected in OoD samples are much harder to be washed out compared with watermarks injected into i.i.d. samples. Due to different distributions, fine-tuning or pruning will have a smaller impact on OoD samples compared with i.i.d. samples. To further verify our intuition, we also compare our method (OoD) with traditional backdoor-based methods using i.i.d. data (ID) for data poisoning on CIFAR-10. We use RT-AL which is the strongest attack in Table 2 as an example. The results are shown in Table 3. Note that ID poison and the proposed OoD poison adopt IDWSR and OoDWSR as the success rate for the injection watermark, respectively. Clean refers to the pre-trained model without watermark injection. With only one single OoD image for watermark injection, we can achieve comparable results as ID poisoning which utilizes the entire ID training set. After RT-AL, the watermark success rate drops to 4.13% and 3.42%, respectively for ID poison, while drops to 57.52% and 24.19% for OoD poison, which verifies that our proposed method is also much more robust against watermark removal attacks. 
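For reference, the strongest fine-tuning attack in this comparison, RT-AL, can be sketched as follows: the adversary re-initializes the classification head of the stolen model and then fine-tunes all layers on the small i.i.d. subset it holds. The `fc` attribute name, optimizer settings, and epoch count below are illustrative assumptions consistent with the setup described above, not the exact attack implementation.

```python
import copy

import torch
from torch import nn
from torch.utils.data import DataLoader


def rt_al_attack(victim: nn.Module, id_subset, num_classes: int,
                 epochs: int = 50, lr: float = 0.01) -> nn.Module:
    """RT-AL: re-initialize the last layer, then fine-tune all layers on i.i.d. data.

    `victim` is assumed to expose its classification head as `victim.fc`; this
    attribute name and all hyperparameters are illustrative.
    """
    suspect = copy.deepcopy(victim)                      # adversary works on a stolen copy
    suspect.fc = nn.Linear(suspect.fc.in_features, num_classes)  # wipe the head
    optimizer = torch.optim.SGD(suspect.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(id_subset, batch_size=128, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(suspect(x), y).backward()
            optimizer.step()
    return suspect
```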
| Dataset | Trigger | Victim model | Suspect model | p-value | |-----------|-------------|--------------|---------------|---------| | | | Acc | IDWSR | OoDWSR | Acc | IDWSR | OoDWSR | | CIFAR-10 | trojan_wm | 0.9102 | 0.9768 | 0.9566 | 0.8485 | 0.9684 | 0.9547 | 0.0000 | | | trojan_8x8 | 0.9178 | 0.9328 | 0.9423 | 0.8529 | 0.8882 | 0.9051 | 0.0000 | | CIFAR-100 | trojan_8x8 | 0.6978 | 0.7024 | 0.8761 | 0.5309 | 0.5977 | 0.7040 | 0.0059 | | | l0_inv | 0.6948 | 0.7046 | 0.5834 | 0.5200 | 0.0162 | 0.0622 | 0.0019 | | GTSRB | smooth | 0.9146 | 0.1329 | 0.9442 | 0.6575 | 0.1386 | 0.9419 | 7.5891e-11 | | | trojan_wm | 0.9089 | 0.7435 | 0.7513 | 0.6379 | 0.7298 | 0.7666 | 2.6070e-21 | Table 4: Evaluation of watermarking against model extraction watermark removal on three datasets. 4.3 Defending Against Model Extraction We evaluate the robustness of our proposed method against model extraction in Table 4. By conducting model extraction, the standard accuracy drops 6% on the model pre-trained on CIFAR-10, and drops more than 10% on the other two datasets. Re-training from scratch makes it hard for the suspect model to resume the original model’s utility using an OoD dataset and soft labels querying from the watermarked model. OoDWSR is still over 90% and 76% for CIFAR-10 and GTSRB, respectively. Although OoDWSR is 6.22% for l0_inv, it is still well above 0.02%, which is observed for the non-watermarked model. All the datasets also have a p-value close to 0. All the above observations indicate that the re-training-based extracted model has a high probability of being a copy of our model. One possible reason for these re-training models still extracting the watermark is that during re-training, the backdoor information hidden in the soft label queried by the IP infringers can also embed the watermark in the extracted model. The extracted model will behave more similarly to the victim model as its decision boundary gradually approaches that of the victim model. 4.4 Qualitative Studies Distribution of generated OoD samples and ID samples. We first augment an unlabeled OoD dataset, and then assign predicted labels to them using the model pre-trained on clean CIFAR-10 data. According to the distribution of OoD and ID samples before and after our watermark fine-tuning as shown in Fig. 3, we can observe that the OoD data drawn from one image lies close to ID data with a small gap. After a few epochs of fine-tuning, some of the OoD data is drawn closer to ID, Figure 3: The distribution of OoD and ID samples. Generation data denotes augmented OoD samples from a single OoD image. | OoD Image | Trigger | Acc | IDWSR | OoDWSR | |-----------|-----------|-----|-------|--------| | City | trojan_wm | 0.9102 | 0.9768 | 0.9566 | | | trojan_8x8| 0.9178 | 0.9328 | 0.9423 | | Animals | trojan_wm | 0.9072 | 0.9873 | 0.9850 | | | trojan_8x8| 0.9176 | 0.9251 | 0.9622 | | Bridge | trojan_wm | 0.9207 | 0.8749 | 0.7148 | | | trojan_8x8| 0.9172 | 0.7144 | 0.7147 | Table 5: Watermark injection using different OoD images. but still maintains no overlap. This can help us successfully implant watermarks to the pre-trained model while maintaining the difference between ID and OoD data. In this way, when our model is fine-tuned with clean ID data by attackers, the WSR on the OoD data will not be easily erased. Effects of different OoD images for watermark injection. In Table 5 we use different source images to generate surrogate datasets and inject watermarks into a pre-trained model. The model is pre-trained on CIFAR-10. 
From these results, we observe that the choice of the OoD image for injection is also important. Dense images such as “City” and “Animals” can produce higher OoDWSR than the sparse image “Bridge”, since more knowledge is included in the visual representations of dense source images. Thus, dense images perform better for backdoor-based watermark injection. This observation is also consistent with some previous arts (Asano & Saeed, 2023; Asano et al., 2019) about single image representations, which found that dense images perform better for model distillation or self-supervised learning. Effects of backdoor weight perturbation. We show the results in Fig. 4. The initial model is WideResNet pre-trained on CIFAR-10, and the fine-tuned model is the model fine-tuning using our proposed method. If the OoD data is directly utilized to fine-tune the pre-trained models with only a few epochs, the weight distribution is almost identical for pre-trained and fine-tuned models (left figure). According to Garg et al. (2020), if the parameter perturbations are small, the backdoor-based watermark can be easily removed by fine-tuning or adding random noise to the model’s parameters. Our proposed watermark injection WP (right figure) can shift the fine-tuned model parameters from the pre-trained models in a reasonable scale compared with the left one, while still maintaining high standard accuracy and watermark success rate as shown in Table 6. Besides, the weight distribution of the perturbed model still follows a normal distribution as the unperturbed model, performing statistical analysis over the model parameters distributions will not be able to erase our watermark. To show the effects of WP, we conduct the attack RT-AL on CIFAR-10 as an example. From Table 6 we observe that WP does not affect the model utility, and at the same time, it will become more robust against stealing threats, since OoDWSR increases from 19.94% and 12.81% to 57.52% and 24.19%, respectively, for two triggers. More results for WP can be referred to Appendix A.2. 5 CONCLUSION In this paper, we proposed a novel and practical watermark injection method that does not require training data and utilizes a single out-of-distribution image in a sample-efficient and time-efficient manner. We designed a robust weight perturbation method to defend against watermark removal attacks. Our extensive experiments on three benchmarks showed that our method efficiently injected watermarks and was robust against three watermark removal threats. Our approach has various real-world applications, such as protecting purchased models by encoding verifiable identity and implanting server-side watermarks in distributed learning when ID data is not available. ACKNOWLEDGEMENT This material is based in part upon work supported by the National Science Foundation under Grant IIS-2212174, IIS-1749940, Office of Naval Research N00014-20-1-2382, N00014-24-1-2168, and National Institute on Aging (NIA) RF1AG072449. The work of Z. Wang is in part supported by the National Science Foundation under Grant IIS2212176. REFERENCES Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, and Joseph Keshet. Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In 27th {USENIX} Security Symposium ({USENIX} Security 18), pp. 1615–1631, 2018. Yuki M. Asano and Aaqib Saeed. Extrapolating from a single image to a thousand classes using distillation. In ICLR, 2023. Yuki M Asano, Christian Rupprecht, and Andrea Vedaldi. 
A critical analysis of self-supervision, or what we can learn from a single image. arXiv preprint arXiv:1904.13132, 2019. Jialuo Chen, Jingyi Wang, Tinglan Peng, Youcheng Sun, Peng Cheng, Shouling Ji, Xingjun Ma, Bo Li, and Dawn Song. Copy, right? a testing framework for copyright protection of deep learning models. In 2022 IEEE Symposium on Security and Privacy (SP), pp. 824–841. IEEE, 2022. Xuxi Chen, Tianlong Chen, Zhenyu Zhang, and Zhangyang Wang. You are caught stealing my winning lottery ticket! making a lottery ticket claim its ownership. Advances in Neural Information Processing Systems, 34:1780–1791, 2021. Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of imagenet as an alternative to the cifar datasets. arXiv preprint arXiv:1707.08819, 2017. Bita Darvish Rouhani, Huili Chen, and Farinaz Koushanfar. Deepsigns: An end-to-end watermarking framework for ownership protection of deep neural networks. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 485–497, 2019. Gongfan Fang, Jie Song, Chengchao Shen, Xinchao Wang, Da Chen, and Mingli Song. Data-free adversarial distillation. arXiv preprint arXiv:1912.11006, 2019. Luciano Floridi and Massimo Chiariatti. Gpt-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681–694, 2020. Siddhant Garg, Adarsh Kumar, Vibhor Goel, and Yingyu Liang. Can adversarial weight perturbations inject neural backdoors. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 2029–2032, 2020. Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2):1563–1580, 2022. Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230–47244, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learningfor image recognition. CoRR, abs/1512, 3385:2, 2015. Pengfei He, Han Xu, Jie Ren, Yingqian Cui, Hui Liu, Charu C Aggarwal, and Jiliang Tang. Sharpness-aware data poisoning attack. arXiv preprint arXiv:2305.14851, 2023. Alex Hern. Techscape: Will meta’s massive leak democratisé ai – and at what cost? The Guardian, 2023. URL https://www.theguardian.com/technology/2023/mar/07/techscape-meta-leak-llama-chatgpt-ai-crossroads
86zAUE80pP
The hard cutoff based on hyperparameter k seems a bit weird to me. Towards what metric would the hyperparameter k be optimized for if it can't be equation 4 itself? This gets even weirder for me when the hard cutoff is then relaxed in equation 6. I don't get why it was ever introduced to begin with.
CPPO: Continual Learning for Reinforcement Learning with Human Feedback Han Zhang\textsuperscript{1,2}, Yu Lei\textsuperscript{2,*}, Lin Gui\textsuperscript{3}, Min Yang\textsuperscript{4}, Yulan He\textsuperscript{4}, Hui Wang\textsuperscript{2}, Ruifeng Xu\textsuperscript{1,2,5,*} \textsuperscript{1} Harbin Institute of Technology (Shenzhen) \textsuperscript{2} Peng Cheng Laboratory \textsuperscript{3} King’s College London \textsuperscript{4} Shenzhen Institutes of Advanced Technology \textsuperscript{5} Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies Abstract The approach of Reinforcement Learning from Human Feedback (RLHF) is widely used for enhancing pre-trained Language Models (LM), enabling them to better align with human preferences. Existing RLHF-based LMs, however, require complete retraining whenever new queries or feedback are introduced, as human preferences may differ across different domains or topics. LM retraining is often impracticable in most real-world scenarios, due to the substantial time and computational costs involved, as well as data privacy concerns. To address this limitation, we propose Continual Proximal Policy Optimization (CPPO), a novel method that is able to continually align LMs with dynamic human preferences. Specifically, CPPO adopts a weighting strategy to decide which samples should be utilized for enhancing policy learning and which should be used for solidifying past experiences. This seeks a good trade-off between policy learning and knowledge retention. Our experimental results show that CPPO outperforms strong continual learning (CL) baselines when it comes to consistently aligning with human preferences. Furthermore, compared to PPO, CPPO offers more efficient and stable learning in non-continual scenarios. 1 Introduction Recent studies \cite{Stiennon2020,Bai2022,Ouyang2022} have shown that Reinforcement Learning from Human Feedback (RLHF) can significantly enhance language models by aligning them with human intentions. RLHF uses human preferences as a reward signal to fine-tune language models with the Proximal Policy Optimization (PPO) algorithm. The RLHF-based model can effectively generate answers preferred by humans for tasks that lack standardized solutions, such as summarization \cite{Stiennon2020}, translation \cite{Kreutzer2018}, and dialogue \cite{Jaques2020}, without over-optimizing metrics such as ROUGE \cite{Lin2004} or BLEU \cite{Papineni2002}. In real-world applications, learning continuously changing human preferences is more practical than learning invariable human preferences. For example, the progression from the onset of the COVID-19 virus in human society to widespread infections and finally to achieving herd immunity has seen corresponding changes in government policies and human perspectives. An AI agent that keeps pace with the times should exhibit behavior that aligns with the government policies and human preferences of each stage, rather than remaining static. However, traditional alignment methods \cite{Stiennon2020,Ouyang2022} lack flexibility for continual learning (CL) of human preferences. A recent approach \cite{Bai2022} tackles these problems by periodically retraining the Preference Model (PM) and policy on both new and historical data, but this can be inefficient and impractical due to concerns over computational cost and data privacy.
In this paper, we propose a more practical approach by enhancing RLHF with continual learning (CL), aiming to optimize two conflicting objectives: preserving old knowledge and acquiring new knowledge (Rolnick et al., 2019). * Corresponding authors: Yu Lei (leiy01@pcl.ac.cn) and Ruifeng Xu (xuruifeng@hit.edu.cn). This leads to a long-standing challenge known as the stability-plasticity dilemma (Abraham & Robins, 2005). Moreover, due to the vast action space (vocabulary) of LMs, the RLHF algorithms (e.g., PPO) usually suffer from the issues of inefficiency and instability during training (Ramamurthy et al., 2022). To tackle these challenges, we attempt to seek a good trade-off between policy learning and knowledge retention with stable learning by designing a sample-wise weighting strategy over the rollout samples. Our weighting strategy is motivated by the fact that a desired policy should always generate high-reward results with high probabilities. Specifically, we first categorize the rollout samples into five types according to their rewards and generation probabilities, as shown in Figure 1. We then assign each rollout sample a policy learning weight $\alpha$ and a knowledge retention weight $\beta$, in the following way. 1) For a high-performance sample, we assign a high $\alpha$ and a high $\beta$, in order to consolidate the knowledge of this sample. 2) For a high-variance or overfitting sample, we assign a high $\alpha$ and a low $\beta$, so as to learn more knowledge of this sample and force the new policy to be different from the old one in generating such a sample. 3) For a noisy sample, we assign a low $\alpha$ and a low $\beta$ to decrease its impact on learning. 4) For a normal sample, we make no changes. Based on the above weighting strategy, we develop a novel PPO-based method, named continual proximal policy optimization (CPPO). CPPO implements the weighting strategy in two different ways: heuristic and learnable, resulting in two different CPPO methods (see Section 3.2 for details). The heuristic approach sets the weight with linear gain or decay according to the strategy. The learnable approach converts the strategy into several inequality constraints and learns the best weights by optimizing the Lagrange function. Experimental results on real-world summarization datasets demonstrate that our proposed CPPO methods significantly outperform the PPO re-training methods and the strong CL baselines, in both CL and non-CL settings (detailed in Appendix F). Furthermore, additional experiments in both settings verify the superior training stability of CPPO compared to the original PPO algorithm. 2 Preliminary and Task Formulation The PPO algorithm (Schulman et al., 2017) utilizes the clipped surrogate objective with a learned state-value function, and the entropy bonus (Mnih et al., 2016) is added to the original reward. The total objective is approximately maximized at each iteration step $i = 1, 2, ..., I$ (in the NLP setting, step $i$ denotes the generation of the $i$-th token): $$L_{\text{CLIP+VF}}(\theta) = \mathbb{E}_i[L_{\text{CLIP}}^i(\theta) - c \cdot L_{\text{VF}}^i(\theta)] \quad (1)$$ where $c$ is the value-loss coefficient, and $L_{\text{VF}}^i$ is a squared-error loss $(V_\theta(s_i) - V_{\text{target}})^2$.
The clipped policy learning objective is: $$L_{\text{CLIP}}^i(\theta) = \min\big(r_i(\theta) \cdot A_i,\ \text{clip}(r_i(\theta), 1 - \epsilon, 1 + \epsilon) \cdot A_i\big),$$ where $r_i(\theta) = \frac{\pi_\theta(a_i|s_i)}{\pi_{\theta_{old}}(a_i|s_i)}$ is the probability ratio, $\epsilon$ is the clip hyperparameter, $s_i$ is the $i$-th state, and $A_i$ is the truncated version of generalized advantage estimation. Task Formulation: In this paper, we propose the task of continually learning human preferences under an offline continual learning setting (Biesialska et al., 2020). Formally, we consider a task sequence $\mathbb{T} = \{T_1, T_2, ...\}$ to continually learn a policy model on the corresponding human preference datasets $\mathbb{HF} = \{HF_1, HF_2, \ldots\}$ and prompt datasets $S = \{S_1, S_2, \ldots\}$. For each task $T_t (t = 1, 2, \ldots)$, the policy $\pi_t$ is initialized by $\pi_{t-1}$ and then is trained against the reward model $r_t$, where the reward model $r_t$ is learned on $HF_t$. The initial policy $\pi_0$ is the SFT model, namely, $\pi_0 = \pi_{SFT}$. Let $x = (s, a)$ denote the prompt $s$ and answer $a$ pair. The final objective is to learn a policy model $\pi_\theta$ that maximizes the overall reward on all of the learned human preferences: \[ \max_\theta \sum_{t=1}^{T} \mathbb{E}_{s \sim S_t, a \sim \pi_\theta(\cdot|s)} [r_t(s, a)] \] (2) 1 In this context, stability refers to the retention of previously acquired knowledge, which is different from the training stability mentioned later. Plasticity, on the other hand, refers to the ability to adapt to new knowledge through policy learning. 2 In the context of RLHF, a rollout, also known as a trajectory or episode, entails generating a response sequence, such as a summary, to a given conversation prompt, starting from a particular state (i.e., the initial prompt). The responses generated during the rollout are then used to update the policy network. 3 CONTINUAL PROXIMAL POLICY OPTIMIZATION 3.1 MOTIVATION AND THEORETICAL ANALYSIS To optimize the objective in the CL paradigm, the key is to balance the trade-off between policy learning and knowledge retention, i.e., to learn a policy $\pi_t$ that not only fits the current task $t$ but also retains the knowledge of previous tasks. This is typically accomplished by maximizing $\pi_t$'s average reward and meanwhile minimizing the difference between $\pi_t$ and $\pi_{t-1}$ by KL-based knowledge distillation (Kaplanis et al., 2019): \[ \max_\theta \mathbb{E}_{s \sim S_t, a \sim \pi_\theta(\cdot|s)} [r_t(s, a)] - \mathbb{E}_{s \in S_{t-1}} D_{KL}(P_{\pi_\theta}(a|s) || P_{\pi_{t-1}}(a|s)) \] (3) where $P_{\pi_\theta}(a|s)$ denotes the probability that policy $\pi_\theta$ generates the answer $a$ to the prompt $s$. However, in the RLHF setting, we argue that a more effective way to achieve policy learning is to maximize the rewards of the results that $\pi_\theta$ has a high probability of generating. This is because LMs usually have a vast action space (vocabulary size) and adopt a sampling strategy such as beam search that favors high-probability generative results. For knowledge retention, on the other hand, it is more important to make $\pi_\theta$ retain the part of $\pi_{t-1}$'s knowledge that generates high-reward outputs, rather than all of it.
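For reference, the following is a minimal PyTorch-style sketch of the per-token PPO terms in Eq. (1) and the clipped objective above, which the continual objective below builds on. The tensor names (`logp_new`, `logp_old`, `adv`, `values`, `returns`) are illustrative assumptions, not the authors' implementation.

```python
import torch

def ppo_objective(logp_new, logp_old, adv, values, returns, clip_eps=0.2, c=0.5):
    """Per-token clipped surrogate and value terms of Eq. (1), to be maximized."""
    ratio = torch.exp(logp_new - logp_old)                        # r_i(theta)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)  # clip(r_i, 1-eps, 1+eps)
    l_clip = torch.min(ratio * adv, clipped * adv)                # clipped surrogate
    l_vf = (values - returns) ** 2                                # squared-error value loss
    return (l_clip - c * l_vf).mean()                             # E_i[L_CLIP - c * L_VF]
```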
To accomplish the above ideas, we propose a theoretically desirable objective for continual RLHF at task \( T_t \): \[ \max_\theta \mathbb{E}_{(s,a) \in D_1} r_t(s, a) - \mathbb{E}_{(s,a) \in D_2} D_{KL}(P_{\pi_\theta}(a|s) || P_{\pi_{t-1}}(a|s)) \] (4) where \( D_1 = \{(s,a)|s \sim S_t, a \sim \pi_\theta(\cdot|s), P_{\pi_\theta}(a|s) > \mu_a[P_{\pi_\theta}(a|s)] + k\sigma_a[P_{\pi_\theta}(a|s)]\} \) and \( D_2 = \{(s,a)|s \sim S_{t-1}, a \sim \pi_{t-1}(\cdot|s), r_t(s, a) > \mu_a[r_t(s, a)] + k\sigma_a[r_t(s, a)]\} \) denote the sets of samples with high generation probability and high rewards, respectively. \( \mu \) and \( \sigma \) denote the mean and standard deviation respectively, and \( k \) is a hyperparameter. It is important to note that here we use \( r_t(s, a) \) instead of \( r_{t-1}(s, a) \). Since the reward model is continually learned, we assume \( r_{t-1}(s, a) \approx r_t(s, a) \) when \( s \in S_{t-1} \) and \( a \sim \pi_\theta(\cdot|s) \). To simplify notation, the subsequent sections of the paper use \( x \) instead of \( (s, a) \). The KL divergence term requires a significant amount of memory to store the probability distribution of each token across the vast vocabulary. To tackle this problem, we incorporate a computationally cheap knowledge retention penalty term \( L_{KR}^i(\theta) = (\log P_{\pi_\theta}(x_i) - \log P_{\pi_{t-1}}(x_i))^2 \). We compute the L2 distance of the log generation probabilities of the true tokens instead of the KL divergence of the entire vocabulary’s probability distribution. We find the former is effective for knowledge retention and does not need to store the vocabulary’s probability distribution in memory.\(^3\) We introduce \( I_{D_1}(x) \) and \( I_{D_2}(x) \) to denote the indicator functions of the sets \( D_1 \) and \( D_2 \), respectively. By introducing the actor-critic version, the clipped ratio, and the entropy bonus, we claim that Eq.(4) can be rewritten as (the derivation is detailed in Appendix Section B): \[ J'(\theta) = L_{I_{D_1} \cdot CLIP + I_{D_2} \cdot KR + VF}^i(\theta) \] \[ = \mathbb{E}_i[I_{D_1}(x) \cdot L_{CLIP}^i(\theta) - I_{D_2}(x) \cdot L_{KR}^i(\theta) - c \cdot L_{VF}^i(\theta)] \] (5) Compared with objective Eq. (1), Eq.(5) introduces the learning weights \( I_{D_1}(x), I_{D_2}(x) \), and the \( L_{KR}^i \) loss. Unfortunately, it is still impractical to directly optimize this objective, since the training samples in $D_1$ and $D_2$ are scarce, as indicated by the Cantelli Inequality\footnote{Cantelli’s inequality (also called the Chebyshev-Cantelli inequality and the one-sided Chebyshev inequality) is a version of Chebyshev’s inequality for one-sided tail bounds.} $P(X > \mu[X] + k\sigma[X]) < 1/(1 + k^2)$. \(^3\)In our task, the reference model generates 512 summaries (max 50 tokens) in one rollout. The vocabulary size is nearly 5e+4. If we use FP16 to save the logits or probability tensor, it takes about 512*50*5e4*2 Bit/1e9 = 1.28GB of memory. However, computing \( L_{KR}^i \) only needs to save the probability of true tokens, which takes only 512*50*2 Bit/1e9 = 2.56E-05GB of memory. To make Eq.(5) easy to optimize, we generalize the indicator functions $I_{D_1}(x)$ and $I_{D_2}(x)$ to positive real-valued functions $\alpha(x)$ and $\beta(x)$, giving each sample a non-zero learning weight.
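As a minimal sketch (assuming PyTorch), the knowledge retention penalty above only needs the log-probabilities that the current and previous policies assign to the tokens that were actually generated, which is what keeps its memory footprint small; the helper names below are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def true_token_logprobs(logits, token_ids):
    """Log-probability assigned to each generated (true) token; only these
    scalars need to be stored, not the full-vocabulary distributions."""
    logp = F.log_softmax(logits, dim=-1)                          # (batch, seq, vocab)
    return logp.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)   # (batch, seq)

def kr_penalty(logp_theta, logp_prev):
    """L_KR^i = (log P_{pi_theta}(x_i) - log P_{pi_{t-1}}(x_i))^2, per token."""
    return (logp_theta - logp_prev) ** 2
```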
### 3.2 Weighting Strategy Our method utilizes sample-wise balance weights $\alpha(x)$ and $\beta(x)$ to regulate the policy learning and knowledge retention processes and to strike a balance between the two. The final objective is: $$J(\theta) = L_{\alpha \cdot CLIP + \beta \cdot KR + VF}^i(\theta)$$ $$= \mathbb{E}_i[\alpha(x)L_{CLIP}^i(\theta) - \beta(x)L_{KR}^i(\theta) - c \cdot L_{VF}^i(\theta)] \quad (6)$$ for task $t = 1, 2, ..., T$. Next, we propose a weighting strategy for balancing policy learning and knowledge retention. #### 3.2.1 Balancing Policy Learning and Knowledge Retention To simplify the expression, we define the operator $F[\cdot] = \mu[\cdot] - k\sigma[\cdot]$ and the operator $G[\cdot] = \mu[\cdot] + k\sigma[\cdot]$. As shown in Figure 1 and Table 1, we classify the rollout samples into five rollout types based on the joint distribution of $(P_{\pi_\theta}(x), R(x))$. If $P_{\pi_\theta}(x)$ or $R(x)$ is outside the discriminant interval $(F[\cdot], G[\cdot])$, it is considered high or low. Now, we detail each rollout type and the corresponding weight strategy. **High-performance sample:** If both $P_{\pi_\theta}(x)$ and $R(x)$ are high, it indicates that the old policy generates $x$ with high confidence and that $x$ receives a high reward, implying that the policy is already performing well on it. In this case, we enhance both policy learning and knowledge retention. **Overfitting sample:** A high $P_{\pi_\theta}(x)$ with a low $R(x)$ indicates that the old policy is likely overfitting (due to the high probability) to the biased sample (due to the low reward score). We aim to reduce the generation probability of the biased sample $x$, which can be achieved through policy learning. However, knowledge retention will maintain the high probability of the biased sample $x$. Therefore, we enhance policy learning and slow down knowledge retention. **High-variance sample:** If $P_{\pi_\theta}(x)$ is low while $R(x)$ is high, it suggests that sample $x$ has high variance. Due to the low $P_{\pi_\theta}(x)$, the likelihood of generating $x$ next time is low. To achieve stable (low-variance) performance, we aim to increase the generation probability of sample $x$, which can be accomplished through policy learning. However, knowledge retention will maintain a low generation probability. Therefore, we enhance policy learning and slow down knowledge retention. **Noisy sample:** If both $P_{\pi_\theta}(x)$ and $R(x)$ are low, sample $x$ is considered noisy data which may lead to over-optimization against the PM (Gao et al., 2022). Therefore, we slow down both knowledge retention and policy learning. **Normal sample:** If at least one of $P_{\pi_\theta}(x)$ and $R(x)$ falls within the discriminant interval, we consider it a normal condition and do not alter the learning process.
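A minimal sketch of this five-way classification for a single rollout sample, using the operators $F[\cdot] = \mu[\cdot] - k\sigma[\cdot]$ and $G[\cdot] = \mu[\cdot] + k\sigma[\cdot]$ computed over the rollout batch, is given below; the function and variable names are illustrative assumptions, not the authors' implementation.

```python
import statistics

def classify_rollout(p, r, p_batch, r_batch, k=1.0):
    """Assign one rollout sample to a type from its generation probability p and
    reward r, relative to the batch thresholds F = mu - k*sigma and G = mu + k*sigma."""
    f_p = statistics.mean(p_batch) - k * statistics.pstdev(p_batch)
    g_p = statistics.mean(p_batch) + k * statistics.pstdev(p_batch)
    f_r = statistics.mean(r_batch) - k * statistics.pstdev(r_batch)
    g_r = statistics.mean(r_batch) + k * statistics.pstdev(r_batch)
    if p >= g_p and r >= g_r:
        return "high-performance"   # alpha up, beta up
    if p >= g_p and r <= f_r:
        return "overfitting"        # alpha up, beta down
    if p <= f_p and r >= g_r:
        return "high-variance"      # alpha up, beta down
    if p <= f_p and r <= f_r:
        return "noisy"              # alpha down, beta down
    return "normal"                 # weights left unchanged
```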
Table 1: Rollout types, determining conditions, and the corresponding weight strategies.

| ID | Rollout Type | Determining Condition | Weight Strategy |
|----|--------------------|-----------------------|-----------------|
| $r_1$ | High-performance | $P_{\pi_\theta}(x) \geq G[P_{\pi_\theta}]$, $R(x) \geq G[R]$ | $\alpha(x) \uparrow$, $\beta(x) \uparrow$ |
| $r_2$ | Overfitting | $P_{\pi_\theta}(x) \geq G[P_{\pi_\theta}]$, $R(x) \leq F[R]$ | $\alpha(x) \uparrow$, $\beta(x) \downarrow$ |
| $r_3$ | High-variance | $P_{\pi_\theta}(x) \leq F[P_{\pi_\theta}]$, $R(x) \geq G[R]$ | $\alpha(x) \uparrow$, $\beta(x) \downarrow$ |
| $r_4$ | Noisy | $P_{\pi_\theta}(x) \leq F[P_{\pi_\theta}]$, $R(x) \leq F[R]$ | $\alpha(x) \downarrow$, $\beta(x) \downarrow$ |
| $r_5$ | Normal | $P_{\pi_\theta}(x)$ or $R(x) \in (F[\cdot], G[\cdot])$ | —— |

Figure 2: The surfaces of heuristic weights. The weights are equal to 1 when rollout samples fall in the normal zone.

#### 3.2.2 How to determine balance weights?

The above weight strategies constitute several inequality constraints on $\alpha(x)$ and $\beta(x)$, shown in Table 2. Determining the balance weights requires finding a feasible solution that satisfies those constraints. We provide two methods to determine the balance weights: the heuristic weight method and the learnable weight method.

Table 2: Weight constraints and heuristic weights for each rollout type.

| ID | Constraint of $\alpha(x)$ | Constraint of $\beta(x)$ | Heuristic $\alpha(x)$ | Heuristic $\beta(x)$ |
|----|--------------------------|--------------------------|-----------------------|----------------------|
| $r_1$ | $\alpha(x_{r_5}) - \alpha(x_{r_1}) < 0$ | $\beta(x_{r_5}) - \beta(x_{r_1}) < 0$ | $\min(ub, \frac{P_{\pi_\theta}(x) - \mu[P_{\pi_\theta}]}{k\sigma[P_{\pi_\theta}]})$ | $\min(ub, \frac{R(x) - \mu[R]}{k\sigma[R]})$ |
| $r_2$ | $\alpha(x_{r_5}) - \alpha(x_{r_2}) < 0$ | $\beta(x_{r_2}) - \beta(x_{r_5}) < 0$ | $\min(ub, \frac{P_{\pi_\theta}(x) - \mu[P_{\pi_\theta}]}{k\sigma[P_{\pi_\theta}]})$ | $\max(lb, 2 + \frac{R(x) - \mu[R]}{k\sigma[R]})$ |
| $r_3$ | $\alpha(x_{r_5}) - \alpha(x_{r_3}) < 0$ | $\beta(x_{r_3}) - \beta(x_{r_5}) < 0$ | $\min(ub, \frac{R(x) - \mu[R]}{k\sigma[R]})$ | $\max(lb, 2 + \frac{P_{\pi_\theta}(x) - \mu[P_{\pi_\theta}]}{k\sigma[P_{\pi_\theta}]})$ |
| $r_4$ | $\alpha(x_{r_4}) - \alpha(x_{r_5}) < 0$ | $\beta(x_{r_4}) - \beta(x_{r_5}) < 0$ | $\max(lb, 2 + \frac{P_{\pi_\theta}(x) - \mu[P_{\pi_\theta}]}{k\sigma[P_{\pi_\theta}]})$ | $\max(lb, 2 + \frac{R(x) - \mu[R]}{k\sigma[R]})$ |
| $r_5$ | – | – | 1 | 1 |

Heuristic $\alpha(x)$ and $\beta(x)$: If $P_{\pi_\theta}(x)$ or $R(x)$ falls within the discriminant interval, the balance weights are set to 1. The further they are from the discriminant interval, the more the weights linearly increase or decrease (depending on the rollout type). We plot the surfaces of $\alpha(x)$ and $\beta(x)$ in 3D coordinate systems in Figure 2. The heuristic weights $\alpha(x)$ and $\beta(x)$ for a given sample $x$ can be calculated by the formulas presented in Table 2. Learnable $\alpha(x)$ and $\beta(x)$: Heuristic $\alpha(x)$ and $\beta(x)$ lack the ability to adapt to the dynamic learning process. Hence, we propose learnable balance weights to automatically balance policy learning and knowledge retention. We learn $2N$ parameters for each rollout batch in which the LM generates $N$ responses; the $2N$ parameters can be discarded before the next rollout batch. Our goal is to find a set of weights that satisfy the constraints in Table 2. Unlike a typical optimization problem solved by the Lagrange Multiplier method, we do not need to minimize an additional objective function.
It should be noted that the optimization objective of CPPO in Eq.(6) is not directly optimized using the Lagrange Multiplier method. We employ a more straightforward strategy. We construct an unconstrained optimization objective by adding all the terms on the left side of the inequalities (in Table 2) together: $$L_{coef}(\phi) = \mathbb{E}_{x \sim \pi_{t-1}}[(\alpha_\phi(x) - 1)^2 + (\beta_\phi(x) - 1)^2] + \tau\big(\alpha(x_{r_5}) - \alpha(x_{r_1}) + \beta(x_{r_5}) - \beta(x_{r_1}) + \alpha(x_{r_5}) - \alpha(x_{r_2}) + \beta(x_{r_2}) - \beta(x_{r_5}) + \alpha(x_{r_5}) - \alpha(x_{r_3}) + \beta(x_{r_3}) - \beta(x_{r_5}) + \alpha(x_{r_4}) - \alpha(x_{r_5}) + \beta(x_{r_4}) - \beta(x_{r_5})\big) \quad (7)$$ where $\alpha(x) = (ub - lb) \cdot \text{sig}(\phi^1_x) + lb$, $\beta(x) = (ub - lb) \cdot \text{sig}(\phi^2_x) + lb$, $\text{sig}$ is the sigmoid function, and $lb$ and $ub$ are the lower and upper bounds of $\alpha(x)$ and $\beta(x)$. We directly optimize Eq. (7) using SGD to find a set of weights that satisfy the constraints. We set the multiplier $\tau$ as a hyperparameter, and $\tau = 0.1$ is selected from \{0.01, 0.1, 0.5, 1.0\}. For more hyperparameter sensitivity analysis experiments, please refer to Appendix E.1. We find this simple idea highly effective in our scenario. In Appendix E.2 we analyze the time and memory required for SGD to find feasible solutions and find that it does NOT significantly increase the overall training time and memory. 4 EXPERIMENTS We assess the performance of CPPO and baseline methods in the domain incremental learning (DIL) summary task. We also evaluate CPPO on non-continual learning tasks (Appendix Section F). 4.1 THE EXPERIMENTAL CONFIGURATION FOR CONTINUAL LEARNING FROM HUMAN PREFERENCES Dataset and split: In accordance with previous research (Stiennon et al., 2020), we evaluate our method using the Reddit TL;DR (Völske et al., 2017) dataset for summarization. We use the human preference data provided by CarperAI. To the best of our knowledge, there are limited benchmark datasets proposed for evaluating continual RLHF methods. Consequently, we divide the Reddit TL;DR dataset based on domains into two parts, which are outlined in Table 3. Each part corresponds to a distinct alignment task. Experiment settings: We evaluate CPPO under the DIL setting with two tasks, and the historical data is assumed inaccessible. This scenario is typical in real-world applications, such as developers continually training an open-source RLHF model like Vicuna (Chiang et al., 2023) in a special domain (e.g., games) without permission to access the pre-training corpus. For each task, we employ a 1.3B gpt2-xl (Radford et al., 2019) model with a value head as the reward model (RM). The RM is continually trained for 5 epochs on each task using the MAS (Aljundi et al., 2018) method. Since the policy is prone to over-optimize against the PM (Gao et al., 2022), we train a 6.7B gptj (Wang & Komatsuzaki, 2021) model as the reference PM (rPM) to measure the performance of alignment. The rPM is trained on the entire human preference data. We conduct experiments to evaluate the RM trained with and without MAS through accuracy and the forgetting ratio (FR) of accuracy (Chaudhry et al., 2018). The evaluation results of the RMs and rPM are shown in Table 4. The accuracy is computed by counting the percentage of reward scores of human-preferred responses that are higher than the reward scores of human-NOT-preferred responses (Yuan et al., 2023). We initialize the SFT model from gpt2-s and train it on the Reddit TL;DR part-1 for 5 epochs.
However, we do not perform the SFT process in task-2 as we observe no significant effects on performance. Metrics: We use the forgetting ratio (Chaudhry et al., 2018) of the ROUGE and reference PM scores to measure the extent to which the old policy is forgotten. Notably, we consider the alignment tax (Ouyang et al., 2022) as part of forgetting since it arises when the SFT model learns human preferences during the RL step. After learning all tasks, we evaluate the models on the entire test set using both the reference PM score and the ROUGE score.

Table 3: The dataset utilized for continual learning. The human feedback data is used for training the reward model. The post (prompt) and summary (label) of Reddit TL;DR are used for SFT. The domain of "r / others" includes 28 categories, such as books, travel, and cooking. It is worth noting that the summary (label) data is not used in the reinforcement learning (RL) process.

| Task ID | Data | Data split | Train | Valid | Test | Domain |
|---------|---------------|------------|-------|-------|-------|--------------|
| task-1 | Human Feedback| part-1 | 52243 | - | 45148 | r / relationships |
| | Reddit TL;DR | part-1 | 63324 | 3462 | 3539 | r / relationships |
| task-2 | Human Feedback| part-2 | 40291 | - | 38481 | r / others |
| | Reddit TL;DR | part-2 | 53398 | 2985 | 3014 | r / others |

Table 4: The evaluation results of RMs and rPM.

| Reward Model | Acc($HF_{1}^{test}$) | Acc($HF_{2}^{test}$) | FR |
|--------------|----------------------|----------------------|----|
| RM$_1$ | 0.7441 | - | - |
| RM$_2$ w MAS| 0.7203 | 0.7482 | 0.024 |
| RM$_2$ w/o MAS | 0.6971 | 0.7496 | 0.047 |
| rPM | 0.7624 | 0.7592 | - |

URL: https://huggingface.co/datasets/CarperAI/openai_summarize_comparisons

Table 5 presents the metrics used to evaluate each task, as well as the final evaluation metric. A well-performing model is expected to achieve high scores in both the reference PM and ROUGE metrics.

Table 5: Metrics for our tasks. \(D_{test}^i (i = 1, 2)\) denote the test data of Reddit TL;DR data part-i, and \(rPM(M_i, D_{test}^i) (i = 1, 2)\) denote the reference PM score of model \(M_i\) on dataset \(D_{test}^i\).

| Task | Metric |
|--------|------------|
| Task-1 | reference PM Score on Task-1 (rPMS1, ↑) |
| Task-1 | Alignment Tax (AT, ↓) |
| Task-2 | reference PM Score on Task-2 (rPMS2, ↑) |
| Task-2 | Score Forgetting Ratio (SFR, ↓) |
| Final eval | reference PM Score on entire test data (rPMS, ↑) |

4.2 Results of Continual Learning from Human Preferences Table 6 shows the results of continual learning from human preferences on the summary task. We observe that CL methods, such as EWC (Kirkpatrick et al., 2017) regularization or policy consolidation (Kaplanis et al., 2019), can improve the training stability of the PPO method, thereby ensuring that the policy does not change too much with every policy gradient step. This leads to improved rPMS. Our method outperforms the CL baselines by achieving the most significant enhancement in policy learning (rPMS) and by possessing Backward Transfer (BWT) (Lopez-Paz & Ranzato, 2017) capability (negative SFR). This is because our learning strategy is sample-adaptive and balances policy learning and knowledge retention. Additionally, CPPO performs better than Iterated RLHF because PPO is not stable enough in the learning process. We observed that during PPO training, the KL divergence and value prediction errors tend to increase suddenly, as discussed in Section 4.4.
Table 6: The main results of continual alignment on TL;DR dataset. For PPO (In order)*, we directly finetune the RM1 on the novel data to obtain RM2, without using MAS regularization; and we directly train the policy model \(M_{π_1}\) against RM2 to obtain \(M_{π_2}\). For the Iterated RLHF†(PPO), we retrain the RM2 and policy model \(M_{π_2}\) on the combination of the Task-1 and Task-2 corpus. Methods in italics are trained against the continually learned (by MAS) reward models. Details of the implementation can be found in Appendix G. | Method | rPMS1 (↑) | rouge (↑) | AT (↓) | rPMS2 (↑) | rouge (↑) | SFR (↓) | Final eval \(M_{π_2}\) (↑) | |--------|-----------|----------|-------|-----------|----------|--------|-------------------------| | Human | 2.958 | – | – | 2.805 | 0.191 | – | 2.901 | | ChatGPT| 3.298 | 0.197 | – | 3.189 | 0.191 | – | 3.242 | | SFT (In order) | 1.499 ±0.130 | 0.248 ±0.006 | – | 1.543 ±0.067 | 0.237 ±0.007 | – | 1.498 ±0.081 | 0.237 ±0.009 | | SFT (multi-tasks) | 1.524 ±0.080 | 0.254 ±0.011 | – | 1.536 ±0.092 | 0.234 ±0.009 | – | 1.505 ±0.011 | 0.236 ±0.008 | | PPO (In order)* | 2.629 ±0.183 | 0.196 ±0.050 | 0.052 ±0.044 | 2.546 ±0.201 | 0.151 ±0.022 | 0.144 ±0.024 | 2.502 ±0.242 | 0.186 ±0.016 | | Iterated RLHF† | 2.629 ±0.183 | 0.196 ±0.050 | 0.052 ±0.044 | 2.732 ±0.163 | 0.211 ±0.011 | 0.061 ±0.018 | 2.666 ±0.124 | 0.200 ±0.010 | | PPO | 2.629 ±0.183 | 0.196 ±0.050 | 0.032 ±0.040 | 2.687 ±0.126 | 0.184 ±0.017 | 0.080 ±0.017 | 2.612 ±0.191 | 0.188 ±0.013 | | PPO+Onlines1.2 Rev | 2.833 ±0.122 | 0.207 ±0.043 | 0.047 ±0.039 | 2.823 ±0.192 | 0.175 ±0.023 | 0.040 ±0.015 | 2.801 ±0.202 | 0.196 ±0.023 | | PPO+EWC (Kirkpatrick et al., 2017) | 2.833 ±0.122 | 0.207 ±0.043 | 0.047 ±0.039 | 2.823 ±0.192 | 0.175 ±0.023 | 0.040 ±0.015 | 2.801 ±0.202 | 0.196 ±0.023 | | PPO+MAS (Klimt et al., 2019) | 2.712 ±0.132 | 0.211 ±0.051 | 0.034 ±0.037 | 2.726 ±0.189 | 0.157 ±0.021 | 0.039 ±0.020 | 2.714 ±0.167 | 0.179 ±0.011 | | PPO+LwL (maximum SFR) | 2.832 ±0.136 | 0.197 ±0.048 | 0.046 ±0.050 | 2.832 ±0.179 | 0.169 ±0.038 | 0.030 ±0.019 | 2.824 ±0.192 | 0.179 ±0.019 | | PPO+LwL (minimum SFR) | 2.832 ±0.136 | 0.197 ±0.048 | 0.046 ±0.050 | 2.832 ±0.179 | 0.169 ±0.038 | 0.030 ±0.019 | 2.824 ±0.192 | 0.179 ±0.019 | | PC (Kaplanis et al., 2019) | 2.692 ±0.117 | 0.209 ±0.048 | 0.036 ±0.055 | 2.723 ±0.195 | 0.165 ±0.019 | 0.047 ±0.017 | 2.703 ±0.191 | 0.187 ±0.016 | | HN-PPO (Schöpfl et al., 2022) | 2.859 ±0.105 | 0.212 ±0.034 | 0.036 ±0.042 | 2.868 ±0.132 | 0.171 ±0.017 | 0.028 ±0.029 | 2.846 ±0.177 | 0.201 ±0.011 | | NLPo (Ramanamurthy et al., 2022) | 2.784 ±0.102 | 0.185 ±0.041 | 0.060 ±0.050 | 2.796 ±0.116 | 0.172 ±0.021 | 0.012 ±0.012 | 2.799 ±0.146 | 0.181 ±0.022 | | CPPO (Heuristic) | 3.020 ±0.137 | 0.213 ±0.034 | 0.035 ±0.023 | 2.978 ±0.113 | 0.174 ±0.019 | -0.164 ±0.009 | 3.099 ±0.153 | 0.179 ±0.016 | | CPPO (Learn) | 3.180 ±0.154 | 0.220 ±0.040 | 0.029 ±0.042 | 3.085 ±0.134 | 0.164 ±0.024 | -0.161 ±0.008 | 3.207 ±0.115 | 0.179 ±0.006 | 4.3 Ablation Study We conduct an ablation study on our proposed CPPO method. To analyze the effect of the balance weights, we conduct experiments by setting either \(\alpha(x)\) or \(\beta(x)\) to 1. To analyze the effect of the knowledge retention penalty, we set \(\beta(x) \equiv 0\). The training curves of different weights are shown in Figure 3 and the evaluation results are presented in Table 7. We observe that the training process becomes unstable when setting \(\beta(x)\) to 0. 
Setting \(\alpha(x)\) to 1 reduces the rPMS: the noisy samples are learned together with the normal samples without distinction, hence the reward increases more slowly than with CPPO. Setting \(\beta(x)\) to 1 increases the SFR: the overfitting, high-variance, and noisy samples are consolidated during the knowledge retention process, hence the final reward value is lower than with CPPO. The above experiments indicate that the sample-wise balance weights are helpful for both policy learning and knowledge retention.

Table 7: Ablation study. PPO is a special case of CPPO (* $\alpha \equiv 1, \beta \equiv 0$).

| Method | rPMS$_1$ (↑) | Task-1 rouge (↑) | AT (↓) | rPMS$_2$ (↑) | Task-2 rouge (↑) | SFR (↓) |
|-------------------------|--------------|------------------|--------|--------------|------------------|--------|
| CPPO / Heuristic | 3.020 ±0.137 | 0.213 ±0.024 | 0.035 ±0.023 | 2.978 ±0.113 | 0.174 ±0.019 | -0.164 ±0.009 |
| CPPO / Learn | 3.180 ±0.154 | 0.220 ±0.040 | 0.028 ±0.042 | 3.085 ±0.134 | 0.164 ±0.024 | -0.161 ±0.008 |
| PPO / $\alpha \equiv 1, \beta \equiv 0$ | 2.629 ±0.183 | 0.196 ±0.050 | 0.052 ±0.044 | 2.687 ±0.126 | 0.184 ±0.017 | 0.080 ±0.017 |
| CPPO / $\alpha \equiv 1$ | 2.837 ±0.124 | 0.196 ±0.029 | 0.047 ±0.041 | 2.745 ±0.121 | 0.169 ±0.020 | -0.031 ±0.010 |
| CPPO / $\beta \equiv 1$ | 2.476 ±0.117 | 0.185 ±0.021 | 0.063 ±0.025 | 2.520 ±0.119 | 0.186 ±0.017 | 0.051 ±0.009 |
| CPPO / $\beta \equiv 0$ | 2.012 ±0.186 | 0.209 ±0.022 | 0.038 ±0.045 | 2.436 ±0.141 | 0.174 ±0.021 | 0.142 ±0.015 |

Figure 3: The curves of different weights in task-1. The knowledge retention penalty can improve the training stability of the PPO algorithm. However, setting $\beta(x) \equiv 1$ slows down the increase of the reward compared with CPPO. On the other hand, the policy learning weights $\alpha(x)$ can boost the increase of the reward compared with $\alpha(x) \equiv 1$. 4.4 Stability Analysis In this section, we analyze the stability of CPPO, PPO, and PPO with the knowledge retention penalty. Previous work (Bai et al., 2022a) argues that small models are more prone to instability in PPO training. However, we find that CPPO can learn stably without the need for invalid-action masking (Ramamurthy et al., 2022), even with small models. As shown in Figure 4, the vanilla PPO performs unstably on the new data distribution. PPO with a knowledge retention penalty is more stable than PPO, but its policy learning is slow. CPPO converges quickly in reward score and shows stable performance on the KL divergence and value prediction. This is because the sample-wise learning strategy of CPPO restricts the learning of noisy samples. Figure 4: Training process of Task-2. The PPO algorithm is unstable at 7k steps and is unable to continuously increase the reward score. 4.5 Human Evaluation on Reddit TL;DR We train two gpt2-xl models using CPPO and PPO, respectively, and compare their summaries with those generated by humans and ChatGPT using a Likert scale (Likert, 1932). The results are shown in Table 8. During the human evaluation, we observe that ChatGPT tends to generate longer summaries than humans and our models, but its performance remains stable across the test samples. Although humans provide the best summaries, they still make mistakes, such as obfuscating important details. Our model achieves comparable performance with ChatGPT but still makes mistakes that small models often make, such as repeating words and sentences.
Due to the training inefficiency and instability, the performance of gpt2-xl trained by PPO is not satisfactory. 5 RELATED WORK 5.1 REINFORCEMENT LEARNING FROM HUMAN OR AI FEEDBACK Learning from human preferences has been studied in the game domain (Bradley Knox & Stone, 2008) and has recently been introduced into the NLP domain, e.g., for dialogue systems (Li et al., 2023; Zhao et al., 2023, 2024). Previous work (Stiennon et al., 2020) utilizes the PPO algorithm to fine-tune a language model (LM) for summarization and demonstrates that RLHF can improve the LM’s generalization ability, which serves as the technology prototype for InstructGPT (Ouyang et al., 2022) and ChatGPT. Learning LMs from feedback can be divided into two categories: human or AI feedback. Recent works such as HH-RLHF (Bai et al., 2022a) and InstructGPT (Ouyang et al., 2022) collect human preferences to train a reward model and learn a policy through it. ILF (Scheurer et al., 2023) proposes to learn from natural language feedback, which provides more information per human evaluation. Since human annotation can be expensive, learning from AI feedback (RLAIF) (Bai et al., 2022b; Perez et al., 2022; Ganguli et al., 2022) has been proposed, but current methods are only effective for reducing harmful outputs, while helpful outputs still require human feedback. 5.2 CONTINUAL LEARNING Within the realm of continual learning, several noteworthy methodologies emerge, encompassing the regularization-based approach, replay-based techniques, optimization-based strategies, representation-based methodologies, and architecture-based innovations (Wang et al., 2023). The Regularization-Based Approach (Kirkpatrick et al., 2017; Aljundi et al., 2018; Chaudhry et al., 2018; Li & Hoiem, 2018; Castro et al., 2018) orchestrates the introduction of explicit regularization terms, thereby striving to strike a harmonious balance between the acquisition of new skills and the retention of past knowledge. The Replay-Based Approach aims to preserve and reuse past experiences to enhance model performance; it includes experience replay (Lin, 1992), generative replay or pseudo-rehearsal (Sun et al., 2020), and feature replay (Liu et al., 2020). The Optimization-Based Approach navigates the terrain of continual learning through explicit design and manipulation of optimization programs. This includes techniques such as gradient projection (Lopez-Paz & Ranzato, 2017) and meta-learning (Javed & White, 2019). The Representation-Based Approach leverages the strengths of self-supervised learning (SSL) (Gallardo et al., 2021) and large-scale pre-training (Mehta et al., 2022) to enhance the quality of representations at both the initialization and continual learning stages. The Architecture-Based Approach addresses inter-task interference by fashioning task-specific parameters. This approach can be dissected into three distinct paradigms: parameter allocation (Serra et al., 2018), model decomposition (Ebrahimi et al., 2020), and modular networks (Rusu et al., 2016). 6 CONCLUSION In this work, we propose CPPO, which utilizes learning weights to balance policy learning and knowledge retention, with the aim of improving the PPO algorithm for continual learning from human preferences. CPPO is a task-agnostic and model-agnostic method that does not significantly increase the time and space complexity of PPO.
We evaluate CPPO on both the DIL task and three non-continual tasks and show that it outperforms strong continual learning baselines when continually aligning with human preferences. Additionally, CPPO improves the learning efficiency and training stability of PPO. Our experiments demonstrate the potential of our approach for efficient and stable continual learning from human preferences, which can have applications in various domains and tasks. Table 8: Human evaluation on 100 posts from the Reddit TL;DR. | Method | Likert score | Improve | p-value | |--------|--------------|---------|---------| | PPO | 4.370 ± 1.180 | - | - | | CPPO | 4.730 ± 1.231 | 8.23% | 0.037 | | ChatGPT| 4.760 ± 1.011 | 8.92% | 0.013 | | Human | 4.900 ± 1.034 | 12.13% | 0.001 | ACKNOWLEDGEMENTS We thank the anonymous reviewers for their valuable suggestions to improve the quality of this work, and we express our sincere gratitude to Dr. Bin Liang for his invaluable guidance and constructive feedback throughout the preparation of this manuscript. This research was supported in part by the National Key Research and Development Program of China (2021ZD0112905), the Major Key Project of PCL (PCL2023A09-4), the National Natural Science Foundation of China (62176076), the Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies(2022B1212010005), Natural Science Foundation of Guangdong (2023A1515012922), Shenzhen Foundational Research Funding (JCYJ2022081802415032) and the UK Engineering and Physical Sciences Research Council (EP/X019063/1). REFERENCES Wickliffe C. Abraham and Anthony Robins. Memory retention – the synaptic stability versus plasticity dilemma. *Trends in Neurosciences*, 28(2):73–78, 2005. ISSN 0166-2236. doi: https://doi.org/10.1016/j.tins.2004.12.003. URL https://www.sciencedirect.com/science/article/pii/S0166223604003704 Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (eds.), *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 144–161, Cham, 2018. Springer International Publishing. ISBN 978-3-030-01219-9. Rahaf Aljundi, Klaas Kelchtermans, and Tinne Tuytelaars. Task-free continual learning. In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2019. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheep El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022a. 
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheep El Showk, Stanislav Fort, Tamara Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional ai: Harmlessness from ai feedback, 2022b. Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pp. 65–72, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics. URL https://aclanthology.org/W05-0909 Magdalena Biesialska, Katarzyna Biesialska, and Marta R. Costa-jussà. Continual lifelong learning in natural language processing: A survey. In *Proceedings of the 28th International Conference on Computational Linguistics*, pp. 6523–6541, Barcelona, Spain (Online), 2020. International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.574. URL https://aclanthology.org/2020.coling-main.574 W. Bradley Knox and Peter Stone. Tamer: Training an agent manually via evaluative reinforcement. In *2008 7th IEEE International Conference on Development and Learning*, pp. 292–297, 2008. doi: 10.1109/DEVLRN.2008.4640845.
7bIpWYhCdu
To justify that FILI outperforms large language models (LLMs), the edit accuracy subject to some delta-distance is used, with δ denoting the number of changes the fixer makes to the incorrect program. It turns out this edit accuracy is inadequate and potentially biased, as it overlooks the semantic correctness of the program and it also ignores the possible semantic change after the repair.
FILI: Syntax Repair By Learning From Own Mistakes Anonymous authors Paper under double-blind review Abstract Automatically fixing syntax errors in programs is a key challenge in the Software Engineering community. Although there are millions of programs on the web, both syntactically correct and incorrect, finding a large number of paired examples of (correct, incorrect) programs is difficult. This makes training a program fixer using supervised learning difficult. Recently, BIFI, an unsupervised approach for learning a syntax fixer, was proposed, in which an additional model (a Breaker model) is used to augment data in each learning iteration to match the real-world error distribution. In this paper, we propose a novel approach, FILI (Fix-It-Learn-It), for learning a syntax fixer without having to train any additional models for data augmentation. In each iteration, FILI carefully selects examples from the fixer’s own predictions, both correct and incorrect, and uses those to fine-tune the fixer. We also show that gradually increasing the complexity of the examples during training leads to a more accurate fixer. Our evaluation on the Github-Python dataset shows that FILI outperforms BIFI by 1% while being significantly easier to train. Moreover, FILI avoids training the breaker model in each iteration, which can take about 2 days on a modest DNN accelerator. 1 Introduction Automated program repair has long been a challenging problem in software development (Goues et al., 2021). One particular class of problems in program repair is the task of fixing syntax errors. A syntax error in a program occurs when a user attempts to compile a program that does not conform to the grammar of the programming language. When a syntax error occurs, the compiler halts the compilation and throws an error message, which may include the line number and offset of the error depending on the language. Often, these error messages are not very informative, and they may also point to a location other than the one where the error occurred, making these errors difficult to fix. This whole cycle of finding errors, fixing them, and re-compiling has a negative impact on programmers’ productivity, especially for beginner programmers. One of the simplest approaches to fixing syntax errors is to define rules for each class of errors and use them to automatically fix the errors. However, this rule-based approach is challenging because it necessitates comprehensive knowledge of the programming language’s grammar, and often multiple possible fixes exist for the same incorrect program. As a result, manually writing rules for each error case becomes impractical. To address this, several approaches, both symbolic (constraint-based) (Singh et al., 2013) and learning-based (Bhatia et al., 2018; Gupta et al., 2017; Yasunaga and Liang, 2021, 2020; Pu et al., 2016), have been proposed for automatically fixing the syntax errors in a program. Learning-based approaches have shown promise, as they leverage data to learn patterns and automatically generate fixes. This approach offers several advantages, such as suggesting likely fixes based on prior examples and not requiring explicit domain knowledge of the programming language. Learning-based approaches formulate the syntax fixing problem as a machine translation problem to translate an incorrect program to a correct one, and various encoder-decoder architectures have been used in supervised settings to perform this translation.
Because of the unavailability of a large number of paired examples (incorrect program, correct program) for supervised training, Break-It-Fix-It (BIFI), an unsupervised learning algorithm (Yasunaga and Liang, 2021), was recently proposed to overcome the lack of quality paired examples. BIFI uses two trained models: a) Fixer - which attempts to generate a syntactically correct program given an incorrect program as input, and b) Breaker - which attempts to generate a syntactically incorrect program given a correct program as input. Starting from a fixer trained on synthetic data and real-world unpaired good (syntactically correct) and bad (syntactically incorrect) programs, BIFI improves the fixer by performing the following four steps in each iteration: 1) Applies the fixer on the bad programs \(B\) to generate the corresponding good programs \((G', B)\), 2) Trains the breaker using \((G', B)\), 3) Applies the breaker on good programs \((G)\) to generate the corresponding bad programs \((B', G)\), and 4) Trains the fixer with data \((B, G')\) from step 1 and \((B', G)\) from step 3. This cycle of self-learning and data augmentation leads to the improvement of the fixer’s ability to solve previously unseen problems. Moreover, the breaker model also learns to better match the distribution of real-world syntax errors in each iteration. While BIFI requires training an additional breaker model in each iteration for data augmentation, in this paper, we propose Fix-it-Learn-it (FILI), a new self-learning approach in the context of syntax fixing. FILI is inspired by how programmers learn to fix errors in the real world. Programmers typically improve their skills by recognizing their own mistakes, gaining insights from them, and developing a deeper understanding of the programming language. Gradually, they accumulate knowledge, enabling them to identify and fix more complex errors with greater accuracy and confidence. Similar to BIFI, FILI improves the fixer with each iteration by training on examples it can already fix. In addition, FILI improves the fixer by learning from its own mistakes. These examples are generated using beam search, which maintains a set of the most likely hypotheses at each step of the decoding process. We identify the fixer’s predictions from the beam that do not parse and pair them with the programs from the beam that are fixed. FILI starts from a fixer trained on synthetic data and in each iteration performs the following steps: 1) Applies the fixer on the bad programs \((B)\) to generate a beam consisting of the most likely predictions, 2) Identifies, using a parser, the good programs \((G)\) and the bad programs \((B')\) from the beam, 3) Trains the fixer on the paired data \((B, G)\) and \((B', G)\). In contrast to BIFI, FILI does not require training an additional breaker model to augment data at each iteration. We hypothesize that it is not necessary to precisely match the distribution of the real-world errors as long as the fixer improves its ability to fix different classes of errors with each iteration. While BIFI explored using a separate breaker model to augment data to improve its fixer’s performance, in this paper we propose sampling examples from the fixer’s beam predictions, which empirically turns out to be simpler and more efficient than training a separate breaker model. We believe that this is effective because these negative programs are akin to sampling data from the decision boundary of the model.
Training the fixer with these programs improves its confidence in handling these errors, effectively pushing them further down in the beam predictions. In addition to learning from its own mistakes, FILI also adopts a curriculum learning style modeled on how real-world programmers fix errors. During training, we gradually increase the complexity of the examples used to train the fixer. The complexity of the examples is defined by the Levenshtein edit-distance between the bad and good programs. We begin by training with smaller edit-distances and gradually add examples with larger edit-distances. This eases the learning process as the fixer a) learns to fix errors incrementally, mimicking the cycle of identifying, fixing, and re-compiling, and b) generates programs by making the minimum number of changes to the bad code. We evaluate FILI on the open-source GitHub-Python dataset (Yasunaga and Liang, 2021). Our approach improves the accuracy of the fixer by \(~4\%\) when compared to the fixer trained using self-learning alone and by \(~1\%\) when compared to the state-of-the-art fixer that is trained using a breaker. A key contribution of our work is to significantly simplify the process of training a syntax fixer of (slightly) higher quality than prior work (viz., BIFI). In summary, this paper makes the following contributions: - We present a new approach in Section 4.2, FILI, for learning a fixer for syntax error correction in an unsupervised setting by augmenting examples from the fixer’s own predictions where it makes mistakes. - We develop a curriculum in Section 4.3, i.e., starting with simpler examples (fewer program edits) and gradually increasing the complexity (more program edits), which results in a fixer that is more accurate in fixing errors. - We evaluate FILI on real-world syntax correction tasks in Section 5 and show that while being simpler and computationally more efficient to train than previous approaches such as BIFI, it still outperforms them. 2 RELATED WORK Automated Program Repair (Goues et al., 2021) is the task of automatically repairing an incorrect program given some correctness specification and is an active research area where several different techniques have been proposed, ranging from constraint-based (Singh et al., 2013) and genetic programming (Le Goues et al., 2012) to learning-based (Bhatia et al., 2018; Gupta et al., 2017; Yasunaga and Liang, 2021; Pu et al., 2016) methods. These approaches aim to tackle different classes of program errors such as syntax errors, semantic errors, logical errors, runtime errors, race conditions, etc. In this paper, we focus on syntax errors only. BIFI (Yasunaga and Liang, 2021) is the closest work to ours and is also an unsupervised self-learning approach. The key difference between FILI and BIFI is that FILI does not require training a separate breaker model and iteratively learns from its own mistakes. SynFix (Bhatia et al., 2018) presents an approach to training an RNN-based language model over a corpus of syntactically correct programs and uses the language model to generate potential corrections for errors identified by a parser. DeepFix (Gupta et al., 2017) trains an attention-based encoder-decoder model, where the encoder encodes an incorrect program token sequence, and the decoder generates the correction as a line number together with the fixed line. Unlike SynFix and DeepFix, FILI uses self-supervised learning to iteratively improve the fixer performance.
The use of negative samples from beam predictions to supplement training data has recently been explored by Cao et al. (2021) for the grammatical error correction problem. Their approach pairs source sentences with beam predictions that are dissimilar to the target sentence, creating negative pairs alongside the ground-truth sentence pairs. In contrast, our approach differs in two key ways: 1) we do not rely on ground-truth sentence pairs to generate additional data from the model’s predictions, and 2) we do not use an additional contrastive loss to train with these additional examples. Recently, there have been significant advancements in utilizing large language models for code-related tasks. These models with billions of parameters require massive compute for fine-tuning on new datasets. One challenge with these models is their tendency to hallucinate outputs if they are not confident about the given task. Some recent works (Chen et al., 2023; Shinn et al., 2023) have explored approaches to teach these models to self-debug, enabling them to identify their own mistakes and fix them. These approaches involve explaining the generated text in natural language or teaching the models to self-reflect when hallucinations are detected. These approaches share the goal of learning to correct one’s own mistakes, which is also central to our approach in this work. However, we focus on smaller models that are more accessible, as they can be trained and deployed on commodity hardware. In unsupervised learning, pseudo-labelling (Lee et al., 2013) is used for augmenting training data. It involves initially training a model using labelled data and subsequently utilizing the trained model to label the unlabelled data based on probabilities. In contrast, in FILI we can assign real labels to unlabelled data using the compiler. 3 PROBLEM FORMULATION In this section, we provide an overview of the problem formulation. We are given a set $X$ of programs and a compiler $C$ that checks whether or not a program parses, i.e., whether it throws a syntax error. We use $X^+$ to represent the set of programs that parse (good programs) and $X^-$ the set of programs that have syntax errors (bad programs). The compiler is represented as $C: X \rightarrow \{0, 1\}$, where the indicator function $C(x)$ of program $x$ maps to 1 if the program compiles and to 0 if it throws an error. Our goal is to train a fixer $F$ which takes a bad program as input and generates the corresponding good program. This can be expressed as $F(x^+ | x^-)$, which represents the conditional probability of the fixer generating the good program $x^+$ given the bad program $x^-$. $F$ is a probabilistic model and we sample programs from this model using beam search. It should be noted that we do not have access to paired $(X^-, X^+)$ data for training the fixer in a supervised setting. Instead, we have access to a large collection of unpaired bad and good programs. The fixer can potentially correct a program by deleting the entire line that contains the error or by deleting the entire program. We use the Levenshtein edit-distance $\delta$ metric between the bad and good programs to ensure that the model does not learn to make arbitrarily large changes to the program. Given a program $x$, the sequence $\langle x_1, ..., x_n \rangle$ represents the tokenized program. We compute the edit-distance $\delta$ at the program’s token level.
For instance, given two simple expressions $c = a + b$ and $c = \text{var} + b$, the edit-distance $\delta$ at the program string level is 2, whereas the tokenized programs $\langle c, =, a, +, b \rangle$ and $\langle c, =, \text{var}, +, b \rangle$ have an edit-distance $\delta$ of 1. Evaluating a fixer’s performance in an unsupervised setting is challenging as there are no ground truth programs to compare the output with. Ideally, the fixer should generate a program that is consistent with the user’s specifications, but such specifications are not always available. Consequently, heuristics are used to evaluate the effectiveness of fixers. For instance, BIFI (Yasunaga and Liang, 2021) uses as the evaluation metric the number of bad examples that the fixer can fix while being constrained by some edit-distance $\delta$. In our experiments (Section 5), we demonstrate how the choice of the evaluation metric can impact the measurement of the fixer’s performance. 4 APPROACH In this section we first briefly describe the unsupervised learning approach from BIFI (Yasunaga and Liang, 2021) to solve the syntax correction problem. We then present an overview of our new approach, FILI, and show how learning from the fixer’s own mistakes and curriculum learning can be used to improve the fixer. 4.1 Break-It-Fix-It (BIFI) BIFI iteratively trains two encoder-decoder models, the breaker $B$ and the fixer $F$. It begins with real-world unpaired bad $X^-$ and good $X^+$ programs. In the first iteration, since there is no paired data to train the breaker and fixer models, BIFI uses synthetic data generated by heuristically perturbing the programs in $X^+$. These heuristics include randomly a) inserting/deleting punctuation, b) inserting/deleting parentheses, c) inserting/deleting indentation, d) deleting keywords (def, if, else, elif, as, return), etc. To generate the paired synthetic data $(X^-_{\text{synth}}, X^+)$, BIFI selects a combination of heuristics and applies them to good programs. The resulting synthetic data is used to train the initial breaker $B_0$, which maps a good program to a bad program $B_0(X^-_{\text{synth}} | X^+)$, and the fixer $F_0$, which maps a bad program to a good program $F_0(X^+ | X^-_{\text{synth}})$, in a supervised setting. BIFI improves $B_0$ and $F_0$ through multiple rounds of the following four steps: 1. **Apply the Fixer.** $F_0$ is applied to real-world bad programs and all the programs in the predictions that parse are selected for the subsequent steps. This step generates paired real-world examples which were not available in the initial round. 2. **Fine-tune the Breaker.** $B_0$ is now fine-tuned using the paired real-world examples generated in the previous step to obtain $B_1$. This fine-tuning allows the breaker to gradually learn the real-world error distribution and generate programs that resemble real-world bad programs. 3. **Apply the Breaker.** In this step, the fine-tuned $B_1$ model is applied to real-world good programs and all the examples in the predictions that do not parse are selected for the final step. This step generates additional paired real-world-resembling examples, which are used to augment the data generated in step 1. 4. **Fine-tune the Fixer.** Finally, the $F_0$ model is fine-tuned on the paired real-world examples generated in steps 1 and 3 to obtain $F_1$. This fine-tuning allows the fixer, initially trained on synthetic data, to gradually learn to fix real-world bad programs.
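A compact sketch of one such BIFI round, following the four steps above, is shown below; the `fixer`, `breaker`, `parses`, and `fine_tune` helpers are placeholders assumed for illustration, not the released BIFI implementation.

```python
def bifi_round(fixer, breaker, bad_real, good_real, parses, fine_tune, beam_size=10):
    """One BIFI iteration over real-world unpaired bad and good programs."""
    # Step 1: apply the fixer to real bad programs; keep predictions that parse.
    fixed_pairs = [(b, g) for b in bad_real
                   for g in fixer.predict(b, beam_size) if parses(g)]
    # Step 2: fine-tune the breaker on the (good, bad) pairs from step 1.
    breaker = fine_tune(breaker, [(g, b) for b, g in fixed_pairs])
    # Step 3: apply the breaker to real good programs; keep outputs that do NOT parse.
    broken_pairs = [(b, g) for g in good_real
                    for b in breaker.predict(g, beam_size) if not parses(b)]
    # Step 4: fine-tune the fixer on the pairs from steps 1 and 3.
    fixer = fine_tune(fixer, fixed_pairs + broken_pairs)
    return fixer, breaker
```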
The breaker's and fixer's cyclic interaction results in both models gradually adapting to the real-world error distribution. With each iteration, the performance of these models improves as they are trained on increasingly larger and more diverse datasets.

4.2 Learning From Own Mistakes (FILI)

The main distinction between BIFI and FILI lies in the breaker model and the way data is augmented in each successive round. In FILI, a single encoder-decoder model, the fixer $F$, is trained, and no additional breaker model training is required.

Figure 1: An illustration of how FILI's approach differs from that of BIFI. BIFI leverages only the correct programs in the beam to fine-tune the fixer (and breaker) model. In contrast, FILI in addition uses bad programs from the beam and does not require any additional breaker model for data augmentation. FILI also uses an edit-distance $\delta$ based curriculum, selecting easier pairs for training in the initial rounds and gradually introducing harder examples in the subsequent rounds.

In the initial step, BIFI only selects programs from the model's predictions\(^1\) that parse and pairs them with the incorrect source program to fine-tune the breaker and fixer models, as shown in (a) in Figure 1. It completely ignores the predictions of the fixer that do not parse. In the initial rounds, these predictions account for a significant portion of the fixer's predictions, as the fixer is still learning and may not have seen all the classes of errors. We observe that these incorrect programs are important, particularly those that appear higher in the fixer's beam, because the fixer generates them with high confidence. As a result, there is a high likelihood that the fixer might introduce similar errors for other programs, and these programs should ideally be pushed further down in the fixer's predictions. In contrast, FILI carefully selects the programs among the fixer's predictions that do not parse and pairs them with the fixer's predictions that do parse to fine-tune the fixer model, as shown below (a) in Figure 1. FILI starts with the same initial fixer $F_0$ training as BIFI, using heuristically generated paired synthetic data $(X^-_{\text{synth}}, X^+)$ to train $F_0$ in a supervised setting. FILI improves $F_0$ through multiple rounds of the following steps:

1. **Apply the Fixer.** $F_0$ is applied to real-world bad programs, and all the programs in the fixer's predictions that parse are selected for the subsequent steps. This step generates paired real-world examples. Additionally, the fixer's predictions that do not parse are selected and paired with those predictions that do parse. This results in paired examples that correspond to the fixer's own mistakes.

2. **Fine-tune the Fixer.** $F_0$ is now fine-tuned on both sets of examples generated in the previous step. This fine-tuning results in a fixer that becomes more confident in fixing programs with various syntax errors over time. The negative program pairing prevents the model from generating high-scoring programs that do not parse, allowing it to effectively learn how to fix syntax errors. Essentially, the fixer improves its decision boundary by decreasing the likelihood of these incorrect programs, thereby improving its ability to handle various types of syntax errors with each iteration.

Our approach simplifies the data augmentation procedure used in BIFI and, in essence, provides an efficient algorithm for unsupervised training of a fixer for syntax error correction.
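A minimal sketch of FILI's pairing step (Step 1 above), reusing the `parses` and `edit_distance` helpers sketched earlier; `beam_predict` is a hypothetical stand-in for beam-search decoding with the fixer, and `max_edit` is the edit-distance threshold that the curriculum described in Section 4.3 tightens or relaxes per round:

```python
def fili_pairs(fixer, bad_programs, beam_width=30, max_edit=2):
    """One round of FILI data augmentation: collect (bad, good) training pairs
    from the fixer's own beam predictions, including its own mistakes."""
    pairs = []
    for src in bad_programs:
        beam = fixer.beam_predict(src, beam_width)   # hypothetical decoding API
        good = [p for p in beam if parses(p)]
        bad = [p for p in beam if not parses(p)]
        for g in good:
            if edit_distance(src, g) <= max_edit:
                pairs.append((src, g))               # real-world (source, fix) pair
            for b in bad:
                if edit_distance(b, g) <= max_edit:
                    pairs.append((b, g))             # own-mistake (bad beam, good beam) pair
    return pairs
```

The fixer is then fine-tuned directly on `pairs` in the usual supervised sequence-to-sequence fashion.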
Our approach also does not require training with any additional loss functions, such as the ones used in supervised contrastive learning (Cao et al., 2021), or training a separate model to re-rank the predictions in the beam (Lee et al., 2021). As a result, FILI solves an optimization problem that is much simpler than these techniques. --- \(^1\)Note that when we refer to predictions, we are referring to a fixed-width beam size. 4.3 Curriculum Learning Beam prediction will include many good and bad programs at varying edit-distance $\delta$ from the input incorrect source program. This provides several options to create pairs of incorrect source and good programs, as well as pairs of bad and good programs, which can be used to fine-tune the fixer. For instance, BIFI pairs good programs from the beam that are at an edit-distance of 4 from the incorrect source program. In syntax error correction, edit-distance $\delta$ can be used as a proxy to describe the complexity of the task. For example, fixing a program with only one syntax error is much easier than fixing a program with four syntax errors. BIFI’s fine-tuning process requires the fixer to learn to fix all errors in a single iteration. Typically, programmers fix multiple errors iteratively by compiling, identifying, and then correcting the error. This iterative error correction resembles the curriculum learning techniques (Bengio et al., 2009), where model training is performed in a structured manner by gradually increasing the complexity of the examples, i.e., introducing easier tasks first followed by more difficult tasks. Inspired by programmers’ iterative error fixing style, FILI uses curriculum learning to improve the learning process. In Figure 1, we illustrate the process of pairing programs in each round of the FILI algorithm. In each round, we gradually increase the edit-distance of the pair of examples we use to fine-tune the fixer, where edit-distance $\delta$ denotes the number of changes the fixer makes to the incorrect program. By using edit-distance $\delta$ as a measure of complexity, we improve the fixer’s ability to fix more errors in each round. The edit-distance $\delta$ criteria is used to pair both the incorrect source and good programs from the beam, as well as the bad programs and good programs from the beam (fixer’s own errors). We provide details of our algorithm in Appendix A.2. Our experiments (Section 5) indicate that curriculum learning helps improve the fixer and generates more parsable (correct) programs in the beam predictions. Our simple edit-distance $\delta$ based criteria gives insights into how neural models can be improved on programming related tasks. 5 Experiments Our approach builds over BIFI’s framework (Yasunaga and Liang, 2021) and we evaluate our method on the Github-Python dataset collected for BIFI’s evaluation. 5.1 Model and Dataset We use BIFI’s encoder-decoder transformer architecture with 4 layers, 8 attention heads, and a hidden state size of 256 as the fixer. To ensure a fair comparison, we use the same initial fixer as BIFI. The initial fixer is trained on synthetic data generated by perturbing syntactically correct programs in order to introduce syntax errors (more details in Appendix A.1). We train the models on Google’s TPU (v3-8). Training fixer for two rounds on TPU takes $\approx$ 20 hours. We evaluate our approach on the Github-Python dataset. The dataset consists of 38K bad programs and 3M good programs. 
We use the same held-out test set as BIFI, i.e., from the 38k bad examples, 15k are used as the test set while the remaining 23k are available as real-world bad examples for training. The fixer's accuracy is measured by parse rate and edit-distance $\delta$. An incorrect program is considered to be fixed if the fixer's prediction parses and the edit-distance $\delta$ between the incorrect source program and the prediction is $\leq$ 4 tokens. All reported numbers are the fixer's top-1 accuracy. BIFI uses a beam width of 10 to generate paired data for fine-tuning the fixer and the breaker, while FILI uses a beam width of 30 (unless otherwise stated) to generate fine-tuning data for the fixer. For evaluation, a beam width of 10 is used for all the models.

5.2 Results

The initial fixer (Round0) trained on synthetic data has an accuracy of 62% on the held-out set. Since BIFI and FILI are both iterative approaches, we run two rounds for each and report the results on the held-out set in Table 1. We do not see any improvements in further rounds. All the numbers reported in Table 1 are averaged over 5 runs.

| Method | Round1 | Round2 |
|-----------------|--------------|--------------|
| BIFI FixerOnly | 86.8% | 88.6% |
| BIFI FixerOnly* | 85.1% ± 0.09%| 87.1% ± 0.45%|
| BIFI | 88.0% | 90.5% |
| FILI Curriculum | 89.3% ± 0.19%| 91.2% ± 0.13%|
| FILI | 89.3% ± 0.19%| 91.6% ± 0.05%|

Table 1: Comparison of accuracy on the Github-Python dataset. Our approach FILI, utilizing the fixer's own mistakes for fine-tuning, outperforms BIFI FixerOnly, which relies on beam predictions but only includes programs that parse, and BIFI with the breaker model for data augmentation.

| Method | Accuracy | Accuracy (edit) |
|-----------------|--------------|-----------------|
| PaLM-2 | 78.2% | 54% |
| GPT-3.5-turbo | 98.6% | 60% |
| BIFI FixerOnly* | 93.1% | 87.1% |
| BIFI | 95.5% | 90.5% |
| FILI Curriculum | 95.2% | 91.1% |
| FILI | 96.1% | 91.6% |

Table 2: Comparison of performance on the Github-Python dataset. Our approach FILI outperforms LLMs in accuracy when evaluating based on edit-distance $\delta$, and achieves comparable results when the generation of parsable programs is used as the evaluation metric.

We use the following configurations of BIFI and FILI (two of each) for our evaluation:

1. **BIFI FixerOnly**. This configuration only fine-tunes the fixer on the bad programs it can fix in each iteration, without using the dataset generated by the breaker. It is similar to FILI, as only beam predictions are used to augment data, and no breaker model is used to generate additional data.

2. **BIFI**. This configuration fine-tunes the fixer with both the bad programs it can fix and the paired examples generated by the breaker in each iteration.

3. **FILI Curriculum**. In this configuration the fixer is trained using a combination of learning from own mistakes and curriculum learning. During Round1, we use a threshold of edit-distance $\delta \leq 2$ to generate the paired data. During Round2, the paired data is generated using a threshold of edit-distance $\delta \leq 4$. Note that this is in contrast with BIFI, which uses edit-distance $\delta \leq 4$ in both rounds.

4. **FILI**. This configuration only uses learning from own mistakes to train the fixer.

---

\(^2\)We also wanted to evaluate FILI on the DeepFix dataset, but BIFI unfortunately has not made their evaluation setup (C++ synthetic data generation, training/test splits, evaluation hyperparameters, etc.) on the DeepFix dataset publicly available.
We use edit-distance $\delta \leq 2$ in both rounds in this configuration to generate both incorrect-source/correct-beam paired examples and correct-beam/incorrect-beam paired examples.

**Note:** To eliminate the possibility that the gains observed in our results were due to changes in accelerator, we ran the BIFI FixerOnly configuration (BIFI FixerOnly* in Table 1) on TPU. The numbers reported in the other two BIFI rows correspond to those reported in the paper (Yasunaga and Liang, 2021).

We also compare BIFI and FILI against two LLMs, PaLM-2 (text-bison) (Anil et al., 2023) and GPT-3.5-turbo, and report the accuracies in Table 2. We use these models in a zero-shot setting, i.e., we prompt the model with the incorrect program and ask the model to generate the corresponding correct program. The prompts used for this experiment are listed in Appendix A.6.

**Discussion.** FILI outperforms both configurations of BIFI. Compared to the FixerOnly configuration, FILI shows an improvement of 4% in both rounds, demonstrating that augmenting with negative examples in addition to the positive examples from the beam helps to improve the fixer's performance. Moreover, FILI also outperforms the full BIFI model, which uses an additional model, the breaker, to augment data in each iteration. We see an improvement of more than 1% in both rounds. In addition to a slight improvement in performance, FILI provides a dramatically simplified training procedure, as it does not require any additional model training or specialized loss functions. The breaker model in BIFI is a 13-million-parameter model, and training it requires $\approx$ 2 days on a modest DNN accelerator. In addition to the time required for training the breaker model, the inference time for running it on the 3 million good examples also needs to be considered in each iteration. This makes BIFI's approach computationally expensive and time-consuming. In contrast, FILI only uses the fixer's predictions to generate paired data, leading to a much faster and more efficient training process.

The sets of problems that BIFI and FILI can solve also differ. FILI cannot solve 1263 test instances, while BIFI cannot solve 1428. Notably, there are 897 instances which both approaches cannot solve. In terms of the unique problems that each approach can solve, FILI solves 531 unique problems, whereas BIFI solves 366 unique instances.

While our curriculum learning configuration also outperforms BIFI, we observed a slight drop in accuracy (0.5%) compared to our FILI model without curriculum learning. Upon further analysis, we found that training the fixer without curriculum learning aligns better with BIFI's evaluation metric (parsable and edit-distance $\delta \leq 4$), i.e., generating parsable programs with the fewest changes to the incorrect source program. When fine-tuning the fixer with only edit-distance $\delta \leq 2$, the fixer has fewer degrees of freedom to modify the incorrect program compared to fine-tuning with edit-distance $\delta \leq 4$. To verify this hypothesis, we relaxed the edit-distance criteria for the accuracy metric while keeping the beam width fixed. Interestingly, our curriculum learning fixer generated more parsable programs than any other fixer. Therefore, if the objective is to train a fixer that can generate more parsable predictions in the beam, the FILI curriculum is a better configuration. We provide more details on this experiment in our ablation study (Section 5.3).
FILI (and BIFI) outperform LLMs when assessed using BIFI’s evaluation metric (parsable and edit-distance $\delta \leq 4$). These models, when instructed to generate the correct program, tend to introduce modifications to other sections of the erroneous program, resulting in a higher edit-distance $\delta$ count with the incorrect program. This behavior may not be desirable for developers in real-world scenarios. When we relax the edit-distance $\delta$ criterion within the evaluation metric, we observe that GPT-3.5-turbo achieves the highest accuracy at 98.6%. However, it’s worth noting that these models are trained on significantly larger datasets and possess a much higher parameter count (on the order of billions), rendering them challenging to train using commodity hardware resources. We provide detailed analysis of these experiments in Appendix A.6. ![Figure 2](image) **Figure 2:** Comparison of parse rates for different FILI and BIFI configurations across various edit-distance $\delta$ thresholds. The line $x = 4$ represents BIFI’s evaluation metric. FILI (2,2) achieves the best parse rate under this criteria as the fixer, indicating limited freedom for modifying the incorrect program. As the edit-distance $\delta$ threshold increase (line $x = 12$), FILI curriculum configurations ((2,3), (2,4)) show better parse rates, indicating that gradual learning improves fixer’s capability to generate more parsable programs. ### 5.3 Ablation Study In Section 5.2, we show that FILI outperforms all configurations of BIFI. In our ablation study, we try to answer the following questions about FILI: 1. **Does curriculum learning generate more parsable programs?** In this experiment, we test our hypothesis that incorporating curriculum in fixer training can lead to higher percentage of parsable programs as the top prediction in the beam. We relax edit-distance $\delta$ criteria used for evaluation by BIFI and analyze how the accuracy of different configurations change accordingly. In Figure 2, we plot the cumulative parse rate of different configurations against the edit-distance $\delta$. The line $x = 4$ in Figure 2 represents the case where the edit-distance $\delta$ is 4, which aligns with the BIFI’s evaluation metric. We observe that the FILI configurations with no curriculum (FILI (2,2)) performs the best under this criteria. However, as we move towards the right on the edit-distance $\delta$ scale (line $x = 12$), we observe that FILI configurations with curriculum generates more parsable programs compared to other configurations confirming our hypothesis. As discussed in Section 3. evaluating syntax fixers in unsupervised setting is challenging and the performance of the fixers can vary depending on the evaluation metric. Nonetheless, FILI models consistently outperform BIFI models across different evaluation criteria, demonstrating the effectiveness of our approach. See Appendix A.5 and Appendix A.7 for a qualitative analysis of the fixes by both the models. 2. How does the accuracy of FILI vary with different curricula? In this experiment, we test how the accuracy of FILI varies with different curricula. We train FILI with different edit-distance $\delta$ thresholds in each round and evaluate the model’s performance using BIFI’s evaluation metric (parsable and edit-distance $\delta \leq 4$). The results in Table 3 show that FILI with curriculum (2,3) performs the best amongst all the configurations. 
This configuration restricts the fixer the most in terms of the number of changes the fixer can make which aligns with the evaluation metric that looks for a parsable program with the minimum changes as the top prediction. As edit-distance $\delta$ threshold increases, fixer has more freedom to modify the incorrect program resulting in top programs in the beam being parsable but at a higher edit-distance $\delta$ than BIFI’s evaluation metric. 3. Does curriculum learning help BIFI as well? Next, we test whether curriculum learning can also benefit BIFI’s FixerOnly configuration. We train the FixerOnly model with our curriculum (2,4) and evaluate it’s performance using BIFI’s evaluation metric. As shown in Table 4, we observe a 1% improvement in round1 and a 2% improvement in round2 compared to the original FixerOnly model. These results indicate that incorporating curriculum learning can indeed improve the learning process for BIFI as well. This experiment demonstrates the effectiveness of our curriculum learning approach in improving the performance of not only the FILI model but also the BIFI model. Furthermore, since the FixerOnly configuration is similar to FILI without the negative programs pairing, these results indicate the importance of incorporating negative programs from the beam in the training process. The inclusion of these programs allows the model to improve its ability to fix different classes of errors. 6 CONCLUSION In this paper, we presented FILI, a new approach for automatically learning a syntax-error fixer in an unsupervised setting. In contrast to prior approaches (Yasunaga and Liang, 2021) that rely on separate models to augment data in each iteration, our method simplifies this step significantly. We leverage the model’s own mistakes in its predictions to augment data and we do this in a curriculum style approach by gradually increasing the complexity of these examples in each iteration. Our evaluation demonstrates that our approach results in a fixer that outperforms the prior approaches while significantly reducing the training time. Our approach opens up new research directions in unsupervised training of models for programming-related tasks. | Method | Round1 | Round2 | |-------------------------|--------|--------| | BIFI FixerOnly* | 85.1% | 87.1% | | BIFI FixerOnly Curriculum | 86.0% | 89% | | FILI Curriculum (2,4) | 89.3% | 91.2% | Table 4: Comparison of performance between BIFI FixerOnly models trained with and without curriculum learning. Our curriculum learning approach demonstrates improved performance over the standard BIFI FixerOnly model, indicating the effectiveness of our curriculum in the training of fixer. | Round1 edit-distance | Round 1 | Round2 edit-distance | Round 2 | |----------------------|---------|----------------------|---------| | 1 | 85.5% | 4 | 91.3% | | 2 | 89.2% | 3 | 91.5% | | 2 | 89.2% | 4 | 91.1% | | 2 | 89.2% | 5 | 90.8% | | 2 | 89.2% | 6 | 90.6% | | 4 | 89.0% | 4 | 90.0% | | 2 | **89.3%** | 2 | **91.6%** | Table 3: Performance comparison of FILI models with varying curricula. The fixer is trained with different edit-distance $\delta$ thresholds in each round. REFERENCES Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. 
Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam MoussaIem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. Palm 2 technical report, 2023. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009. Sahil Bhatia, Pushmeet Kohli, and Rishabh Singh. Neuro-symbolic program corrector for introductory programming assignments. In Proceedings of the 40th International Conference on Software Engineering, ICSE ’18, page 60–70, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450356381. doi: 10.1145/3180155.3180219. URL https://doi.org/10.1145/3180155.3180219. Hannan Cao, Wenmian Yang, and Hwee Tou Ng. Grammatical error correction with contrastive learning in low error density domains. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4867–4874, 2021. Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug, 2023. Claire Le Goues, Michael Pradel, Abhik Roychoudhury, and Satish Chandra. Automatic program repair. IEEE Softw., 38(4):22–27, 2021. doi: 10.1109/MS.2021.3072577. URL https://doi.org/10.1109/MS.2021.3072577. Rahul Gupta, Soham Pal, Aditya Kanade, and Shirish Shevade. Deepfix: Fixing common c language errors by deep learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1), Feb. 2017. doi: 10.1609/aaai.v31i1.10742. URL https://ojs.aaai.org/index.php/AAAI/article/view/10742. Claire Le Goues, ThanhVu Nguyen, Stephanie Forrest, and Westley Weimer. Genprog: A generic method for automatic software repair. IEEE Transactions on Software Engineering, 38(1):54–72, 2012. doi: 10.1109/TSE.2011.104. Ann Lee, Michael Auli, and Marc’Aurelio Ranzato. Discriminative reranking for neural machine translation. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7250–7264, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.563. URL https://aclanthology.org/2021.acl-long.563. Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, page 896. Atlanta, 2013.
T8Rf1CRbHQ
All the convergence rates in this paper are in terms of the weighted sum of squared norms, while for unbiased SA or TTSA, the last-iterate convergence is achievable. Is this an inevitable issue when there exist structured errors? Or how does the last iterate behave in this case?
ABSTRACT

Two-time-scale stochastic approximation is a recursive algorithm for solving a system of two equations. The method has found broad applications in many areas including machine learning and reinforcement learning. Recent works have revealed that single-time-scale stochastic approximation (especially its variant stochastic gradient descent in optimization) is robust to structured perturbations such as compression, local updates, and delays, but the two-time-scale case is not well understood. Almost nothing is known about the analogous question: Is two-time-scale stochastic approximation also robust to similar structured perturbations? In this paper, we study error-feedback-based two-time-scale stochastic approximation. We propose a unified theory of two-time-scale stochastic approximation based on error-feedback to analyze the impact of different forms of structured perturbations. We show that two-time-scale stochastic approximation is robust to structured perturbations. In particular, two-time-scale stochastic approximation with different forms of structured perturbations exhibits the same non-asymptotic theoretical guarantees as its single-time-scale counterpart without structured perturbations. We further show that the convergence rate in all cases consists of two terms, where only the higher-order term is affected by structured perturbations. This is especially important for distributed parallel implementations of two-time-scale stochastic approximation algorithms.

1 INTRODUCTION

Stochastic approximation (SA) is a general class of recursive methods for finding roots of unknown functions for which only noisy accesses are available [Robbins & Monro (1951); Borkar (2009)]. Specifically, the SA method seeks to find \( x^* \) such that \( h(x^*) = 0 \) with the following update:
\[
x_{k+1} = x_k + \alpha_k (h(x_k) + \xi_k),
\]
where \( \alpha_k \) is the step size and \( \xi_k \) is a random variable. This method has found broad applications in many areas such as machine learning (ML), statistics, stochastic control, and signal processing. In particular, stochastic gradient descent (SGD), a variant of SA, lies at the core of machine learning, especially deep learning [Bottou et al. (2018)]. Notably, the practical success of SA, especially SGD, can be attributed to its robustness to structured perturbations such as compression, local updates, and delays [Stich & Karimireddy (2020)]. This is especially important for distributed parallel implementations in the sense that a parallel version of SGD with structured perturbations can efficiently use the computing power of multiple parallel agents. For instance, local SGD [Stich (2018)], a variant of SGD, allows update (1) to evolve locally on each agent, independently of each other, and only averages the sequences once in a while. The results show that local SGD is as computationally efficient as parallel mini-batch SGD, but the communication cost can be significantly reduced. While most SA studies have focused on the single-sequence case, two-time-scale SA (TTSA) was introduced in [Robbins & Monro (1951)], and it has been widely applied to problems involving two coupled sequence updates.
Specifically, given two nonlinear operators \( f : \mathbb{R}^{d_0} \times \mathbb{R}^{d_1} \rightarrow \mathbb{R}^{d_0} \) and \( g : \mathbb{R}^{d_0} \times \mathbb{R}^{d_1} \rightarrow \mathbb{R}^{d_1} \), the TTSa method aims to solve a system of two equations: \[ \begin{align*} f(x, y) &= 0, \\ g(x, y) &= 0, \end{align*} \] by two coupled sequence updates of the form: \[ x_{k+1} = x_k + \alpha_k (f(x_k, y_k) + \xi_k), \] \[ y_{k+1} = y_k + \beta_k (g(x_k, y_k) + \psi_k), \] where \( \alpha_k, \beta_k \) are step sizes and \( \xi_k, \psi_k \) are random variables. The TTSA method has found broad applications in many areas including machine learning and reinforcement learning. In particular, the TTSA method has been studied mostly in the context of stochastic bilevel optimization (SBO) [Ruszczyński (2021); Balasubramanian et al. (2022)] and stochastic compositional optimization (SCO) [Wang et al. (2017); Gao & Huang (2021); Jiang et al. (2022)] where many typical SBO and SCO algorithms are exactly in the form of (3)-(4). It is worthwhile mentioning that the SBO and SCO problems encompass many contemporary ML problems including adversarial robustness, hyperparameter tuning, meta-learning, reinforcement learning; see e.g., [Franceschi et al. (2018); Zhang et al. (2022); Hong et al. (2023)]. While recent work has begun to study SBO and SCO in distributed parallel settings [Tarzanagh et al. (2022); Yang et al. (2022)], a generic theory for distributed TTSA is less developed. Almost nothing is known about the question: **Is two-time-scale stochastic approximation robust to structured perturbations such as compression, local updates, and delays?** To fill this gap, we study error-feedback-based TTSA. Error-feedback is a unified framework for analyzing the theory of SGD with different forms of structured perturbation [Stich et al. (2018); Karimireddy et al. (2019); Stich & Karimireddy (2020)]. In this work, we extend the error-feedback framework to TTSA. Indeed, analyzing error-feedback-based TTSA is challenging as it involves two coupled sequences for a one-step update in TTSA, and there are complicated interactions between error-feedback and TTSA. Two auxiliary sequences are introduced in the error-feedback framework, which aggregate the structured perturbation errors. Moreover, existing works on error-feedback only consider variants of SGD and do not consider SA, let alone TTSA. Notice that SA is a more general class of algorithms that covers many algorithms in reinforcement learning that cannot be formulated as SGD and its variants [Kaledin et al. (2020); Chen et al. (2022)]. ### 1.1 Main Contributions Our main contributions are summarized as follows: 1) **Error-feedback meets two-time-scale stochastic approximation.** We give an affirmative answer to the above question and present a framework for error-feedback-based two-time-scale stochastic approximation (EF-TTSA) that captures a rich class of structured perturbations such as compression, local updates, and delays. We utilize the framework to analyze the effect of different forms of structured perturbations on EF-TTSA in a unified manner. To the best of our knowledge, this is the first work that considers two-time-scale stochastic approximation corrupted by structured perturbations with theoretical convergence guarantees. 
2) **Error-compensated TTSA with arbitrary compressors.** We propose an instance of EF-TTSA, error-compensated TTSA with arbitrary compressors (Algorithm 1), in which compression operators are used to reduce communication costs. We prove that our Algorithm 1 attains an \( O\left(\frac{1}{T} + \frac{1}{\delta T^2}\right) \) convergence rate, where \( T \) is the total number of iterations, and \( \delta \) is the compressed parameter. We see that the compression operator only affects the higher-order term of the convergence rate. Thus, the effects of the compression become negligible after a few iterations and the algorithm converges at the same rate as standard TTSA without compression [Shen & Chen (2022)]. 3) **Local TTSA with periodic global averaging.** We propose an instance of EF-TTSA, local TTSA with periodic global averaging (Algorithm 2), in which agents perform multiple local iterative updates, followed by global averaging. We prove that our Algorithm 2 attains an \( O\left(\frac{1}{T} + \frac{K^2}{T^2}\right) \) convergence rate, where \( K \) is the communication interval. We also observe that the effects of multiple local updates become negligible after a few iterations, suggesting that our algorithm gains communication efficiency through infrequent communication, essentially for free. 4) **TTSA with delayed updates.** We propose an instance of EF-TTSA, TTSA with delayed updates (Algorithm 3), where updates are delayed and reflect iterations from \( \tau \) rounds ago. We prove that our Algorithm 3 attains an \( O\left(\frac{1}{T} + \frac{\tau^2}{T^2}\right) \) convergence rate. Similarly, \( \tau \) only appears in the higher-order term of the convergence rate, and its effect becomes negligible when $T$ is large enough. The results show that the performance of TTSA with delays is comparable to that of TTSA without delays. 1.2 RELATED WORK 1.2.1 TWO-TIME-SCALE STOCHASTIC APPROXIMATION TTSA, a generalized variant of SA, has been studied for a long time. Specifically, the asymptotic behavior of TTSA has been analyzed in Borkar (1997) using an ODE approach, and in Tadic & Meyn (2003) under Markovian noise. Recent work has heavily focused on the finite-time performance of TTSA for both linear [Konda & Tsitsiklis (2004); Doan & Romberg (2019); Kaledin et al. (2020); Doan (2021b)] (when $f$ and $g$ are linear functions with respect to their variables) and nonlinear settings [Dalal et al. (2018); Zeng et al. (2021); Doan (2022)], under both i.i.d. and Markovian samples. All of these works use the so-called fast and slow time scales: one sequence is updated in the fast-time scale while the other is in the slow time scale; the time scale difference $\lim_{k \to \infty} \alpha_k / \beta_k \to 0$. With the proper choice of step sizes, the two sequences with the fast and slow time scales are asymptotically decoupled. Shen & Chen (2022) established an improved analysis of nonlinear TTSA in which the two sequences are updated in the same time scale, i.e., $\lim_{k \to \infty} \alpha_k / \beta_k = c$ for some constant $c > 0$. Shen & Chen (2022) demonstrated that the sequences generated by nonlinear TTSA converge to desired solutions at a tight rate $O(\frac{1}{T})$ for the strongly-monotone case. Distributed variants of linear and nonlinear TTSA are considered in Doan & Romberg (2020) and Doan (2021a), respectively. 
Doan (2021a) studied the convergence rate of distributed local TTSA; however, the sequences generated by the method only converge linearly to a ball encircled the desired solution. In summary, it is an open question whether convergence guarantees for TTSA with structured perturbations (e.g., compression, local updates, and delays) can be achieved. 1.2.2 ERROR-FEEDBACK FRAMEWORK Error-feedback relates closely to communication-efficient methods such as quantization and sparsification in distributed optimization literature. Roughly speaking, error-feedback is a memory mechanism that uses accumulated errors from previous iterations for bias correction. The idea of error-feedback was introduced in Seide et al. (2014) to study 1-bit SGD, aiming to counter the effect of bias introduced by quantization. Since then, several papers Alistarh et al. (2018); Stich et al. (2018); Karimireddy et al. (2019); Lin et al. (2022) considered compression methods with error-feedback, i.e., incorporating the error made by the compression operator to correct the current direction. For instance, Stich et al. (2018); Karimireddy et al. (2019) demonstrated that SignSGD with error-feedback, a very aggressive compression method where each coordinate of the gradient is replaced by its sign, retains almost the same behavior as SGD without compression. For further reducing communication costs, various orthogonal techniques have been proposed, such as asynchrony (delayed updates) Stich (2018) and periodic averaging (local updates) Arjevani et al. (2020) in distributed optimization literature. Stich & Karimireddy (2020) presented a framework for sgd with error-feedback, and analyzed the effect of different forms of structured perturbations. In particular, SGD with delayed updates and SGD with local updates essentially act like compressed SGD with error-feedback. Mitra et al. (2023) analyzed compressed temporal difference learning with error-feedback, and proved that temporal difference learning is robust to structured perturbations; but the author only studied compression. In this work, we use the error-feedback framework to analyze nonlinear TTSA with different structured perturbations in a unified manner. 2 PRELIMINARIES We are interested in the two-time-scale SA problem (2) under the following assumptions. Assumption 1. For any $x \in \mathbb{R}^{d_0}$, there exists a unique $y^*(x) \in \mathbb{R}^{d_1}$ such that $g(x, y^*(x)) = 0$. Moreover, there exist $L_{y,0}$ and $L_{y,1}$ such that for any $x_1, x_2 \in \mathbb{R}^{d_0}$, the following inequalities hold $$||y^*(x_1) - y^*(x_2)|| \leq L_{y,0} ||x_1 - x_2||,$$ $$||\nabla y^*(x_1) - \nabla y^*(x_2)|| \leq L_{y,1} ||x_1 - x_2||.$$ Assumption 2. For any \( x_1, x_2 \in \mathbb{R}^{d_0} \), and \( y_1, y_2 \in \mathbb{R}^{d_1} \), there exist \( L, L_f \) and \( L_g \) such that \[ ||f(x_1, y^*(x_1)) - f(x_2, y^*(x_2))|| \leq L ||x_1 - x_2||, \] \[ ||f(x_1, y_1) - f(x_2, y_2)|| \leq L_f (||x_1 - x_2|| + ||y_1 - y_2||), \] \[ ||g(x_1, y_1) - g(x_2, y_2)|| \leq L_g (||x_1 - x_2|| + ||y_1 - y_2||). \] Assumption 3. Suppose \( f(x, y) \) is one-point strongly monotone on \( x^* \); that is, there exists a constant \( \lambda_f > 0 \) such that \[ \langle x - x^*, f(x, y^*(x)) \rangle \leq -\lambda_f ||x - x^*||^2. \] Moreover, suppose \( g(x, y) \) is one-point strongly monotone on \( y^*(x) \) for any given \( x \in \mathbb{R}^{d_0} \); that is, there exists a constant \( \lambda_g > 0 \) such that \[ \langle y - y^*(x), g(x, y) \rangle \leq -\lambda_g ||y - y^*(x)||^2. 
\]

**Remark 1.** Assumptions 1-3 are fairly standard in the analysis of two-time-scale SA; see e.g., Mokkadem & Pelletier (2006); Kaledin et al. (2020); Zeng et al. (2021); Shen & Chen (2022).

2.1 Motivating Application Examples

2.1.1 Stochastic Bilevel Optimization

With mappings \( F : \mathbb{R}^{d_0} \times \mathbb{R}^{d_1} \to \mathbb{R} \) and \( G : \mathbb{R}^{d_0} \times \mathbb{R}^{d_1} \to \mathbb{R} \), the stochastic bilevel optimization (SBO) problem can be formulated as
\[
\min_{x \in \mathbb{R}^{d_0}} F(x, y(x)) := \mathbb{E}_\theta[F(x, y(x); \theta)], \text{ s.t. } y(x) := \arg \min_{y \in \mathbb{R}^{d_1}} G(x, y) := \mathbb{E}_\zeta[G(x, y; \zeta)].
\]
A class of gradient-based methods is a popular approach to solve problem (12); see e.g., Ghadimi & Wang (2018); Chen et al. (2021b). In particular, this type of method has updates
\[
x_{k+1} = x_k + \alpha_k (\nabla_x F(x_k, y_k; \theta_k) - \nabla^2_{xy} G(x_k, y_k; \zeta_k) H_{yy}(x_k, y_k; \zeta'_k) \nabla_y F(x_k, y_k; \theta_k)),
\]
\[
y_{k+1} = y_k - \beta_k \nabla_y G(x_k, y_k; \zeta''_k),
\]
where \( H_{yy}(x_k, y_k; \zeta'_k) \) is a stochastic approximation of the Hessian inverse \( [\nabla_{yy} G(x_k, y_k)]^{-1} \). We observe that the update (13)-(14) is a special case of the TTSA update in (3)-(4) by defining:
\[
f(x_k, y_k) = \nabla_x F(x_k, y_k) - \nabla^2_{xy} G(x_k, y_k)[\nabla_{yy} G(x_k, y_k)]^{-1} \nabla_y F(x_k, y_k),
\]
\[
\xi_k = -f(x_k, y_k) + \nabla_x F(x_k, y_k; \theta_k) - \nabla^2_{xy} G(x_k, y_k; \zeta_k) H_{yy}(x_k, y_k; \zeta'_k) \nabla_y F(x_k, y_k; \theta_k),
\]
\[
g(x_k, y_k) = -\nabla_y G(x_k, y_k), \quad \psi_k = -g(x_k, y_k) - \nabla_y G(x_k, y_k; \zeta''_k).
\]
Shen & Chen (2022) demonstrated that the standard conditions in the SBO literature Ghadimi & Wang (2018); Chen et al. (2021b) lead to Assumptions 1-3 in this work.

2.1.2 Stochastic Compositional Optimization

With outer function \( F(y; \theta) : \mathbb{R}^{d_1} \to \mathbb{R} \) and inner function \( G(x; \zeta) : \mathbb{R}^{d_2} \to \mathbb{R}^{d_1} \), the stochastic compositional optimization (SCO) problem can be formulated as
\[
\min_{x \in \mathbb{R}^{d_2}} F(G(x)) := \mathbb{E}_\theta[F(G(x); \theta)], \text{ with } G(x) := \mathbb{E}_\zeta[G(x; \zeta)].
\]
To solve (18), a popular method Yang et al. (2019) takes the following form
\[
x_{k+1} = x_k - \alpha_k \nabla F(y_k; \theta_k) \nabla G(x_k; \zeta_k),
\]
\[
y_{k+1} = y_k + \beta_k (F(y_k; \theta'_k) - y_k),
\]
where \( y_k \) is used to directly track \( \mathbb{E}_\zeta[G(x; \zeta)] \). We observe that the update (19)-(20) is a special case of the TTSA update in (3)-(4) by defining:
\[
f(x_k, y_k) = -\nabla F(y_k) \nabla G(x_k), \quad \xi_k = -f(x_k, y_k) - \nabla F(y_k; \theta_k) \nabla G(x_k; \zeta_k),
\]
\[
g(x_k, y_k) = F(y_k) - y_k, \quad \psi_k = -g(x_k, y_k) + F(y_k; \theta'_k) - y_k.
\]
Likewise, Shen & Chen (2022) demonstrated that the standard conditions in the SCO literature Yang et al. (2019); Chen et al. (2021a) ensure that Assumptions 1-3 in this work hold.

3 ERROR-FEEDBACK MEETS TWO-TIME-SCALE SA

In this section, we introduce our framework EF-TTSA wherein two sequences \( \{x_k\}, \{y_k\} \) and two auxiliary sequences \( \{d_k\}, \{e_k\} \) all evolve at the same time, using the following expressions:
\[
\begin{align*}
x_{k+1} &= x_k + \mu_k, \\
d_{k+1} &= d_k + \alpha_k f_k - \mu_k, \\
y_{k+1} &= y_k + \nu_k, \\
e_{k+1} &= e_k + \beta_k g_k - \nu_k,
\end{align*}
\]
with \( d_0 = e_0 = 0 \).
In (23)-(26), \( \{\mu_k\}_{k \geq 0} \) and \( \{\nu_k\}_{k \geq 0} \) are two sequences, representing the updates applied to \( \{x_k\}_{k > 0} \) and \( \{y_k\}_{k > 0} \) respectively. \( \{d_k\}_{k > 0} \) and \( \{e_k\}_{k > 0} \) aggregate the structured perturbation errors. We denote by \( f_k = f(x_k, y_k) + \xi_k \) and \( g_k = g(x_k, y_k) + \psi_k \), where \( \xi_k \) and \( \psi_k \) are two independent random variables. Note that our framework EF-TTSA is generic and covers many special cases: error-compensated TTSA with arbitrary compressors (cf. Section 4), local TTSA with periodic global averaging (cf. Section 5), and TTSA with delayed updates (cf. Section 6). We define the filtration \( F_k = \{x_0, y_0, x_1, y_1, \cdots, x_k, y_k\} \). Regarding the random variables \( \{\xi_k, \psi_k\}_{k \geq 0} \), we impose the following standard assumption [Doan (2022)]. **Assumption 4.** The random variables \( \xi_k, \psi_k \), for all \( k \geq 0 \), are independent of each other and across time. Moreover, there exist two positive constants \( \sigma_\xi, \sigma_\psi \) such that \[ \mathbb{E}[\xi_k | F_k] = 0, \quad \mathbb{E}[\psi_k | F_k] = 0, \quad ||\xi_k|| \leq \sigma_\xi, \quad ||\psi_k|| \leq \sigma_\psi, \quad \forall k \geq 0. \] For the convenience of analysis, define \( \bar{x}_k = x_k + d_k, \bar{y}_k = y_k + e_k, \bar{y}_k^* = y^*(x_k), \bar{y}_k^* = y^*(\bar{x}_k), \) and \( \Xi_k = \mathbb{E}[||\bar{y}_k - \bar{y}_k^*||^2 + ||\bar{x}_k - x^*||^2] \). By the definitions of \( \bar{x}_k \) and \( \bar{y}_k \), we have that \( \bar{x}_{k+1} = \bar{x}_k + \alpha_k f_k; \bar{y}_{k+1} = \bar{y}_k + \beta_k g_k \). The main result of this section is as follows. **Lemma 1.** Let \( \{x_k, d_k, y_k, e_k\}_{k \geq 0} \) be the sequence generated by (23)-(26). Suppose Assumptions 1-4 hold, and set \( \alpha_k \leq \min\left\{\frac{\lambda_f}{16L_y L^2}, \frac{\lambda_g^2}{6L_y^2(2c_1 + \lambda_f)}\right\} \) and \( \beta_k = \tilde{\beta}\alpha_k \) where \( \tilde{\beta} = \frac{2c_1 + \lambda_f}{\lambda_g} \), we have \[ \Xi_{k+1} \leq \left(1 - \frac{\lambda_f}{2}\alpha_k\right)\Xi_k + \Delta_1 \alpha_k(||d_k||^2 + ||e_k||^2) + \Delta_2 \alpha_k^2(\sigma_\xi^2 + \sigma_\psi^2), \] where \( \Delta_1 = (1 + c_2)\left(3L_g^2 + 2L_g^2/\lambda_g\right)\tilde{\beta} + c_3, \Delta_2 = (1 + c_2)\tilde{\beta}^2 + L_y, c_1 = L_y,0L + 4L_y,0L_f + 2L_y,1\sigma_\xi^2 + 4L_yL_f^2 + \frac{4L_y^2L_f^2 + 9L_f^2}{\lambda_f}, c_2 = L_y,0L_f + L_y,0L + 2L_y,1\sigma_\xi^2 + \frac{4L_y^2L_f^2}{\lambda_f}, c_3 = L_y,0L + 3L_y,0L_yL_f + 4L_yL_f^2 + \frac{12L_y^2L_f}{\lambda_f}, \) and \( L_y = L_y,0 + L_y,1 + 1 \). Note that the above lemma derives an upper bound on the one-step progress of \( \Xi_k \), which is key to our analysis and will simplify the presentation of the proofs in the subsequent sections. Observe that there is an “error” term on the right side of the inequality (28): \( ||d_k||^2 + ||e_k||^2 \) measures the mismatch between the true sequences \( \{x_k\}, \{y_k\} \) and their noisy estimates \( \{\bar{x}_k\}, \{\bar{y}_k\} \). 4 ERROR-COMPENSATED TTSA WITH ARBITRARY COMPRESSORS In this section, we propose an instance of EF-TTSA, error-compensated TTSA with arbitrary compressors. First, we use the EF-TTSA framework to analyze error-compensated TTSA with arbitrary compressors, and then extend the results to the SBO and SCO problems. 
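Before specializing to compression, the generic recursion (23)-(26) can be sketched in a few lines of NumPy; here `perturb` is a placeholder for whichever structured perturbation is applied to the proposed update (the identity recovers standard TTSA (3)-(4)), and `f`, `g` are assumed to return the noisy evaluations \( f_k \), \( g_k \):

```python
import numpy as np


def ef_ttsa(f, g, perturb, x0, y0, alphas, betas, T):
    """Sketch of the generic EF-TTSA recursion (23)-(26)."""
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    d, e = np.zeros_like(x), np.zeros_like(y)    # error accumulators, d_0 = e_0 = 0
    for k in range(T):
        fk, gk = f(x, y), g(x, y)                # noisy operator evaluations f_k, g_k
        mu = perturb(d + alphas[k] * fk)         # update actually applied, mu_k
        nu = perturb(e + betas[k] * gk)          # update actually applied, nu_k
        d = d + alphas[k] * fk - mu              # (24): residual fed back at the next step
        e = e + betas[k] * gk - nu               # (26)
        x = x + mu                               # (23)
        y = y + nu                               # (25)
    return x, y
```

Algorithm 1 below instantiates `perturb` with a \( \delta \)-contraction compressor and keeps one pair of error accumulators per agent.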
Algorithm 1 Error-Compensated TTSA with Arbitrary Compressors 1: **Initialization**: \(\{\alpha_k\}_{k \geq 0}, \{\beta_k\}_{k \geq 0}, x_0 \in \mathbb{R}^{d_0}, y_0 \in \mathbb{R}^{d_1}, d_{0,i} = e_{0,i} = 0, \forall i \in [n]\) 2: **for** \(k = 0, 1, \cdots , T - 1\) **do** 3: **for each agent** \(i \in [n]\) **do** 4: \(\mu_{k,i} = Q[d_{k,i} + \alpha_k f_{k,i}]\) 5: \(d_{k+1,i} = d_{k,i} + \alpha_k f_{k,i} - \mu_{k,i}\) 6: \(\nu_{k,i} = Q[e_{k,i} + \beta_k g_{k,i}]\) 7: \(e_{k+1,i} = e_{k,i} + \beta_k g_{k,i} - \nu_{k,i}\) 8: **end for** 9: **on server** 10: \(x_{k+1} = x_k + \frac{1}{n} \sum_{i \in [n]} \mu_{k,i}, \quad y_{k+1} = y_k + \frac{1}{n} \sum_{i \in [n]} \nu_{k,i}\) 11: **end for** The error-compensated TTSA with arbitrary compressors is illustrated in Algorithm 1. Each agent \(i \in [n]\) stores and updates local sequence \(\{\mu_{k,i}, d_{k,i}, \nu_{k,i}, e_{k,i}\}\) and communicates with the central server to update global sequence \(\{x_k, y_k\}\). Following the MEM-SGD (Stich et al., 2018), the sequence \(\{\mu_{k,i}, d_{k,i}, \nu_{k,i}, e_{k,i}\}\) is updated in the following way: \[ \mu_{k,i} = Q[d_{k,i} + \alpha_k f_{k,i}], \quad d_{k+1,i} = d_{k,i} + \alpha_k f_{k,i} - \mu_{k,i}, \] \[ \nu_{k,i} = Q[e_{k,i} + \beta_k g_{k,i}], \quad e_{k+1,i} = e_{k,i} + \beta_k g_{k,i} - \nu_{k,i}, \] where for any agent \(i \in [n]\) and \(k \geq 0\), \(f_{k,i} = f(x_k, y_k) + \xi_{k,i}, g_{k,i} = g(x_k, y_k) + \psi_{k,i}\), and \(Q[\cdot]\) is a compression operator that satisfies the following contraction property. **Assumption 5.** The compression operator \(Q : \mathbb{R}^d \rightarrow \mathbb{R}^d\) satisfies the following inequality \[ \mathbb{E}_Q[||Q[x] - x||^2] \leq (1 - \delta)||x||^2, \] for a parameter \(\delta \geq 0\) and \(\forall x \in \mathbb{R}^d\). Here \(\mathbb{E}_Q[\cdot]\) denotes the expectation over the randomness of \(Q\). As for the central server, when it receives \(\{\mu_{k,i}\}\) and \(\{\nu_{k,i}\}\) from all agents \([n]\), it updates the global sequence \(\{x_k, y_k\}\) as \(x_{k+1} = x_k + \frac{1}{n} \sum_{i \in [n]} \mu_{k,i}, y_{k+1} = y_k + \frac{1}{n} \sum_{i \in [n]} \nu_{k,i}\), and then sends \(x_{k+1}\) and \(y_{k+1}\) back to all agents, as shown in line 10 of Algorithm 1. Note that instead of transmitting full-dimensional vectors, Algorithm 1 improves communication efficiency by using limited bit representation (quantization) or enforcing sparsity. Observe that Algorithm 1 takes the following form in the EF-TTSA framework: \[ d_k = \frac{1}{n} \sum_{i \in [n]} d_{k,i}, \quad \mu_k = \frac{1}{n} \sum_{i \in [n]} \mu_{k,i}, \quad e_k = \frac{1}{n} \sum_{i \in [n]} e_{k,i}, \quad \nu_k = \frac{1}{n} \sum_{i \in [n]} \nu_{k,i}. \] In view of (32), applying Lemma 1, we have \[ \Xi_{k+1} \leq \left(1 - \frac{\lambda_f}{2} \alpha_k\right)\Xi_k + \Delta_1 \alpha_k \Phi_k + \Delta_2 \alpha_k^2 (\sigma_\xi^2 + \sigma_\psi^2), \] where \(\Xi_k = \mathbb{E}[||\bar{y}_k - y^*(\bar{x}_k)||^2 + ||\bar{x}_k - x^*||^2], \Phi_k = \mathbb{E}\left[\frac{1}{n} \sum_{i \in [n]} ||d_{k,i}||^2 + \frac{1}{n} \sum_{i \in [n]} ||e_{k,i}||^2\right], \bar{x}_k = x_k + \frac{1}{n} \sum_{i \in [n]} d_{k,i}, \text{and } \bar{y}_k = y_k + \frac{1}{n} \sum_{i \in [n]} e_{k,i}\). We then derive an upper bound on \(\Phi_k\). **Lemma 2.** Let \(\{x_k, d_{k,i}, y_k, e_{k,i}\}\) be the sequence generated by Algorithm 1. 
Suppose Assumptions 1–5 hold and set \(\alpha_k \leq \frac{\delta}{\sqrt{20(4L_f^2 + 3L_g^2 \beta^2)}}\) and \(\beta_k = \beta \alpha_k\), we have \[ \Phi_{k+1} \leq \left(1 - \frac{\delta}{2}\right)\Phi_k + \left(1 + \frac{4}{\delta}\right)(4L_f^2 + 4L^2 + 3L_g^2 \beta^2) \alpha_k^2 \Xi_k + \left(1 + \beta^2\right) \alpha_k^2 (\sigma_\xi^2 + \sigma_\psi^2). \] We now obtain the main theorem for error-compensated TTSA with arbitrary compressors. **Theorem 1.** Consider the sequence \(\{x_k, y_k\}\) generated by Algorithm 1. Suppose Assumptions 1–5 hold. Selecting step sizes \(\alpha_k = \Theta\left(\frac{1}{k+1/\delta}\right)\) and \(\beta_k = \Theta\left(\frac{1}{k+1/\delta}\right)\), then it holds that \[ \frac{1}{W_T} \sum_{k=0}^{T-1} w_k \mathbb{E}[||y_k - y^*(x_k)||^2 + ||x_k - x^*||^2] \leq O\left(\frac{1}{T} + \frac{1}{\delta^2 T^2}\right), \] for some sequence of positive weights \( w_k = k + \kappa \) where \( \kappa \geq \frac{16}{\delta} \) and \( W_T = \sum_{k=0}^{T-1} w_k \). As a consequence, it holds that \( \lim_{k \to \infty} ||y_k - y^*(x_k)||^2 = 0 \) a.s. and \( \lim_{k \to \infty} ||x_k - x^*||^2 = 0 \) a.s. **Remark 2.** Theorem 7 implies that the compression operator only affects the higher order term of the convergence rate. When \( T \) is sufficiently large, the first term is dominating in (35), and the error-compensated TTSA with arbitrary compressors converges at a tight rate \( O(1/T) \), which recovers the convergence rate of nonlinear TTSA with exact communication. **Remark 3.** We can extend Theorem 7 to the SBO and SCO problems. Consider the SBO algorithm with the updates in (13)-(14) and the SCO algorithm with the updates in (19)-(20). Assuming Assumptions 1,5 holds and selecting step sizes \( \alpha_k = \Theta(\frac{1}{k+1/\delta}) \) and \( \beta_k = \Theta(\frac{1}{k+1/\delta}) \), the resulting error-compensated SBO and SCO with arbitrary compressors can converge at rate \( O(\frac{1}{T} + \frac{1}{\delta^2 T^2}) \). ## 5 LOCAL TTSA WITH PERIODIC GLOBAL AVERAGING In this section, we propose the second instance of EF-TTSA, namely, local TTSA with periodic global averaging. First, we use the EF-TTSA framework to analyze local TTSA with periodic global averaging, and then extend the results to the SBO and SCO problems. ### Algorithm 2 Local TTSA with Periodic Global Averaging ``` 1: Initialization: \( \{\alpha_k\}_{k \geq 0}, \{\beta_k\}_{k \geq 0}, x_0 \in \mathbb{R}^{d_0}, y_0 \in \mathbb{R}^{d_1} \) 2: for \( k = 0, 1, \ldots, T-1 \) do 3: for each agent \( i \in [n] \) do 4: \( x_{k+1,i} = x_{k,i} + \alpha_k f_{k,i} \) 5: \( y_{k+1,i} = y_{k,i} + \beta_k g_{k,i} \) 6: end for 7: if \( \text{mod}(k + 1, K) = 0 \) then 8: \( x_{k+1,i} = \frac{1}{n} \sum_{j \in [n]} x_{k+1,j}, \quad y_{k+1,i} = \frac{1}{n} \sum_{j \in [n]} y_{k+1,j} \) 9: end if 10: end for ``` The local TTSA with periodic global averaging is illustrated in Algorithm 2. Following the local SGD [Stich, 2018], the algorithm evolves agents \([n]\) and sequences \(\{x_{k,i}, y_{k,i}\}_{i \in [n]}\) in parallel. Specifically, the sequence \(\{x_{k,i}, y_{k,i}\}\) is updated in the following way: \[ x_{k+1,i} = x_{k,i} + \alpha_k f_{k,i}, \tag{36} \] \[ y_{k+1,i} = y_{k,i} + \beta_k g_{k,i}, \tag{37} \] where for any agent \( i \in [n] \) and \( k \geq 0 \), \( f_{k,i} = f(x_{k,i}, y_{k,i}) + \xi_{k,i} \), and \( g_{k,i} = g(x_{k,i}, y_{k,i}) + \psi_{k,i} \). 
If \( \text{mod}(k + 1, K) = 0 \), the central server is responsible to synchronize the sequences: \[ x_{k+1,i} = \frac{1}{n} \sum_{j \in [n]} x_{k+1,j}, \quad y_{k+1,i} = \frac{1}{n} \sum_{j \in [n]} y_{k+1,j}, \quad \forall i \in [n]. \tag{38} \] Note that instead of communicating at every iteration, Algorithm 2 achieves communication reduction by allowing multiple local updates (i.e., reducing communication frequency). Observe that Algorithm 2 takes the following form in the EF-TTSA framework: \[ d_{k,i} = \bar{x}_k - x_{k,i}, \quad \mu_{k,i} = \begin{cases} \alpha_k f_{k,i} & \text{if } \text{mod}(k + 1, K) \neq 0, \\ \bar{x}_k - x_{k,i} + \alpha_k f_k & \text{otherwise}, \end{cases} \tag{39} \] \[ e_{k,i} = \bar{y}_k - y_{k,i}, \quad \nu_{k,i} = \begin{cases} \beta_k g_{k,i} & \text{if } \text{mod}(k + 1, K) \neq 0, \\ \bar{y}_k - y_{k,i} + \beta_k g_k & \text{otherwise}, \end{cases} \tag{40} \] where \( \bar{x}_k = \frac{1}{n} \sum_{i \in [n]} x_{k,i}, \bar{y}_k = \frac{1}{n} \sum_{i \in [n]} y_{k,i}, f_k = \frac{1}{n} \sum_{i \in [n]} f_{k,i}, \) and \( g_k = \frac{1}{n} \sum_{i \in [n]} g_{k,i} \). In view of (39)-(40), applying Lemma 1, we have \[ \Xi_{k+1} \leq (1 - \frac{\lambda_f}{2} \alpha_k) \Xi_k + \Delta_1 \alpha_k \Phi_k + \Delta_2 \alpha_k^2 (\sigma_\xi^2 + \sigma_\psi^2), \tag{41} \] where \( \Phi_k = \mathbb{E} \left[ \frac{1}{n} \sum_{i \in [n]} \| x_k - x_{k,i} \|^2 + \frac{1}{n} \sum_{i \in [n]} \| y_k - y_{k,i} \|^2 \right] \) measures the deviation of the local sequences \( \{x_{k,i}\} \) and \( \{y_{k,i}\} \). The next result establishes an upper bound on the deviation \( \Phi_k \). **Lemma 3.** Let \( \{x_{k,i}, d_{k,i}, y_{k,i}, e_{k,i}\} \) be the sequence generated by Algorithm 2 and \( k_0 = \lfloor k/K \rfloor \). Suppose Assumptions 1-4 hold, and set \( \alpha_k \leq \frac{1}{\sqrt{(4L_f^2 + 4L_g^2 + 3L_\xi^2 \beta^2)K}} \) and \( \beta_k = \beta \alpha_k \), we have \[ \Phi_k \leq 2(4L_f^2 + 3L_g^2 \beta^2)K \sum_{j=k_0}^{k-1} \alpha_j^2 \Xi_j + 2(1 + \beta^2) \sum_{j=k_0}^{k-1} \alpha_j^2 (\sigma_\xi^2 + \sigma_\psi^2). \] We are now ready to present the main theorem for local TTSA with periodic global averaging. **Theorem 2.** Consider the sequence \( \{x_{k,i}, y_{k,i}\} \) generated by Algorithm 2. Suppose Assumptions 1-4 hold. Selecting step sizes \( \alpha_k = \Theta \left( \frac{1}{k+K} \right) \) and \( \beta_k = \Theta \left( \frac{1}{k+K} \right) \), then it holds that \[ \frac{1}{W_T} \sum_{k=0}^{T-1} w_k \mathbb{E}[\| y_k - y^*(x_k) \|^2 + \| x_k - x^* \|^2] + \frac{1}{n} \sum_{i \in [n]} (\| y_k - y_{k,i} \|^2 + \| x_k - x_{k,i} \|^2) \leq O \left( \frac{1}{T} + \frac{K^2}{T^2} \right), \] for some sequence of positive weights \( w_k = k + \kappa \) where \( \kappa \geq 4K \) and \( W_T = \sum_{k=0}^{T-1} w_k \). As a consequence, it holds for any \( i \in [n] \) that \( \lim_{k \to \infty} \| y_k - y^*(x_k) \|^2 = 0 \) a.s., \( \lim_{k \to \infty} \| x_k - x^* \|^2 = 0 \) a.s., \( \lim_{k \to \infty} \| y_k - y_{k,i} \|^2 = 0 \) a.s., and \( \lim_{k \to \infty} \| x_k - x_{k,i} \|^2 = 0 \) a.s. **Remark 4.** We see that \( K \) only appears in the higher order term of (43), and the local TTSA with periodic global averaging converges at a tight rate \( O(1/T) \) when \( T \) is sufficiently large. That is, the effects of multiple local updates become negligible after a few iterations, meaning that our algorithm gains communication efficiency through infrequent communication, essentially for free. 
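To make Algorithm 2 concrete, here is a minimal single-process simulation in NumPy; the deterministic operators `f`, `g` and the additive Gaussian noise standing in for \( \xi_{k,i}, \psi_{k,i} \) are illustrative assumptions, not part of the analysis above:

```python
import numpy as np


def local_ttsa(f, g, n, K, x0, y0, alphas, betas, T, noise=0.01, seed=0):
    """Sketch of Algorithm 2: n agents run local updates (36)-(37) and
    average their iterates every K steps, as in (38)."""
    rng = np.random.default_rng(seed)
    xs = [np.array(x0, dtype=float) for _ in range(n)]   # x_{k,i}
    ys = [np.array(y0, dtype=float) for _ in range(n)]   # y_{k,i}
    for k in range(T):
        for i in range(n):                                # local updates on each agent
            fi = f(xs[i], ys[i]) + noise * rng.standard_normal(xs[i].shape)
            gi = g(xs[i], ys[i]) + noise * rng.standard_normal(ys[i].shape)
            xs[i] = xs[i] + alphas[k] * fi
            ys[i] = ys[i] + betas[k] * gi
        if (k + 1) % K == 0:                              # periodic global averaging (38)
            x_bar, y_bar = sum(xs) / n, sum(ys) / n
            xs = [x_bar.copy() for _ in range(n)]
            ys = [y_bar.copy() for _ in range(n)]
    return sum(xs) / n, sum(ys) / n
```

Setting \( K = 1 \) recovers fully synchronized parallel TTSA, while a larger \( K \) trades the higher-order term in (43) for fewer communication rounds.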
**Remark 5.** We can extend Theorem 2 to the SBO and SCO problems. Consider the SBO algorithm with the updates in (13)-(14) and the SCO algorithm with the updates in (19)-(20). Assuming Assumptions 1-4 holds and selecting step sizes \( \alpha_k = \Theta \left( \frac{1}{k+K} \right) \) and \( \beta_k = \Theta \left( \frac{1}{k+K} \right) \), the resulting local SBO and SCO with periodic global averaging can converge at rate \( O \left( \frac{1}{T} + \frac{K^2}{T^2} \right) \). 6 TTSA with Delayed Updates In this section, we propose the third instance of EF-TTSA, namely, TTSA with delayed updates. First, we use the EF-TTSA framework to analyze TTSA with delayed updates, and then extend the results to the SBO and SCO problems. **Algorithm 3 TTSA with Delayed Updates** ``` 1: Initialization: \( \{\alpha_k\}_{k \geq 0}, \{\beta_k\}_{k \geq 0}, x_0 \in \mathbb{R}^{d_0}, y_0 \in \mathbb{R}^{d_1} \) 2: for \( k = 0, 1, \cdots, T - 1 \) do 3: \( x_{k+1} = x_k + \alpha_{k-\tau} f_{k-\tau} \) 4: \( y_{k+1} = y_k + \beta_{k-\tau} g_{k-\tau} \) 5: end for ``` The TTSA with delayed updates is illustrated in Algorithm 3. For a fixed (integer) delay \( \tau \geq 1 \), the sequence \( \{x_k, y_k\}_{k \geq 0} \) is updated in the following way: \[ x_{k+1} = x_k + \alpha_{k-\tau} f_{k-\tau}, \] \[ y_{k+1} = y_k + \beta_{k-\tau} g_{k-\tau}, \] where \( f_{k-\tau} = f(x_{k-\tau}, y_{k-\tau}) + \xi_{k-\tau} \), and \( g_{k-\tau} = g(x_{k-\tau}, y_{k-\tau}) + \psi_{k-\tau} \). Throughout this section, we use the convention that \( f_{k-\tau} = g_{k-\tau} = 0 \), if \( k < \tau \). The delay may come from asynchrony in the development of distributed parallel algorithms; see e.g., Agarwal & Duchi (2011). Observe that Algorithm 3 takes the following form in the EF-TTSA framework: \[ d_k = \sum_{i=1}^{\tau} \alpha_{k-i} f_{k-i}, \quad \mu_k = \begin{cases} \alpha_{k-\tau} f_{k-\tau} & \text{if } k \geq \tau, \\ 0 & \text{if } k < \tau, \end{cases} \tag{46} \] \[ e_k = \sum_{i=1}^{\tau} \beta_{k-i} g_{k-i}, \quad \nu_k = \begin{cases} \beta_{k-\tau} g_{k-\tau} & \text{if } k \geq \tau, \\ 0 & \text{if } k < \tau. \end{cases} \tag{47} \] In view of (46)-(47), applying Lemma 1, we have \[ \Xi_{k+1} \leq \left(1 - \frac{\lambda_f}{2} \alpha_k\right) \Xi_k + \Delta_1 \alpha_k \Phi_k + \Delta_2 \alpha_k^2 (\sigma_\xi^2 + \sigma_\psi^2), \tag{48} \] where \( \Phi_k = \mathbb{E} \left[ ||d_k||^2 + ||e_k||^2 \right] \). We present an upper bound on \( \Phi_k \) in the following Lemma. **Lemma 4.** Let \( \{x_k, d_k, y_k, e_k\} \) be the sequence generated by Algorithm 3. Suppose Assumptions 1-4 hold, and set \( \alpha_k \leq \frac{1}{\sqrt{(4L_f^2 + 4L_g^2 + 3L_g^2 \beta^2) \tau}} \) and \( \beta_k = \beta \alpha_k \), we have \[ \Phi_k \leq 2(4L_f^2 + 3L_g^2 \beta^2) \tau \sum_{j=k-\tau}^{k-1} \alpha_j^2 \Xi_j + 2(1 + \beta^2) \sum_{j=k-\tau}^{k-1} \alpha_j^2 (\sigma_\xi^2 + \sigma_\psi^2). \tag{49} \] Noting this fact, we provide the main theorem for TTSA with delayed updates in the next result. **Theorem 3.** Consider the sequence \( \{x_k, y_k\} \) generated by Algorithm 3. Suppose Assumptions 1-4 hold. Selecting step sizes \( \alpha_k = \Theta(\frac{1}{k+\tau}) \) and \( \beta_k = \Theta(\frac{1}{k+\tau}) \), then it holds that \[ \frac{1}{W_T} \sum_{k=0}^{T-1} w_k \mathbb{E}[||y_k - y^*(x_k)||^2 + ||x_k - x^*||^2] \leq O \left( \frac{1}{T} + \frac{\tau^2}{T^2} \right), \tag{50} \] for some sequence of positive weights \( w_k = k + \kappa \) where \( \kappa \geq 4\tau \) and \( W_T = \sum_{k=0}^{T-1} w_k \). 
As a consequence, it holds that \( \lim_{k \to \infty} ||y_k - y^*(x_k)||^2 = 0 \) a.s. and \( \lim_{k \to \infty} ||x_k - x^*||^2 = 0 \) a.s.

**Remark 6.** In analogy to Theorems 1 and 2, this result shows that the dominating term in the rate is not affected by the \( \tau \) parameter. Moreover, the impact of the delay becomes negligible once \( T = \Omega(\tau^2) \), so the performance of TTSA with delays is comparable to that of TTSA without delays. We focus here on a fixed delay \( \tau \); our analysis, however, extends to more general settings, e.g., arbitrary delays upper bounded by \( \tau \) (see, e.g., Feyzmahdavian et al., 2016).

**Remark 7.** We can extend Theorem 3 to the SBO and SCO problems. Consider the SBO algorithm with the updates in (13)-(14) and the SCO algorithm with the updates in (19)-(20). Assuming Assumptions 1-4 hold and selecting step sizes \( \alpha_k = \Theta(\frac{1}{k+\tau}) \) and \( \beta_k = \Theta(\frac{1}{k+\tau}) \), the resulting SBO and SCO with delayed updates converge at rate \( O \left( \frac{1}{T} + \frac{\tau^2}{T^2} \right) \).

### 7 Conclusion and Future Work

In this work, we consider error-feedback-based two-time-scale stochastic approximation, EF-TTSA, which captures a rich class of structured perturbations such as compression, local updates, and delays. We present a unified theory of EF-TTSA to analyze the impact of different forms of structured perturbations. To the best of our knowledge, this is the first work to demonstrate that two-time-scale stochastic approximation is robust to structured perturbations. In particular, we propose three instances of EF-TTSA, i.e., error-compensated TTSA with arbitrary compressors (Algorithm 1), local TTSA with periodic global averaging (Algorithm 2), and TTSA with delayed updates (Algorithm 3). We see that structured perturbations only affect the higher-order term of the convergence rate. That is, the effects of structured perturbations become negligible after a few iterations, and Algorithms 1, 2, and 3 converge at the same rate as standard TTSA without structured perturbations. Future directions of this work include studying the EF-TTSA framework in the non-strongly monotone case and exploring multiple-time-scale stochastic approximation algorithms.

REFERENCES

Alekh Agarwal and John C Duchi. Distributed delayed stochastic optimization. *Advances in neural information processing systems*, 24, 2011.

Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cédric Renggli. The convergence of sparsified gradient methods. *Advances in Neural Information Processing Systems*, 31, 2018.

Yossi Arjevani, Ohad Shamir, and Nathan Srebro. A tight convergence analysis for stochastic gradient descent with delayed updates. In *Algorithmic Learning Theory*, pp. 111–132. PMLR, 2020.

Krishnakumar Balasubramanian, Saeed Ghadimi, and Anthony Nguyen. Stochastic multilevel composition optimization algorithms with level-independent convergence rates. *SIAM Journal on Optimization*, 32(2):519–544, 2022.

Vivek S Borkar. Stochastic approximation with two time scales. *Systems & Control Letters*, 29(5):291–294, 1997.

Vivek S Borkar. *Stochastic approximation: a dynamical systems viewpoint*, volume 48. Springer, 2009.

Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. *SIAM review*, 60(2):223–311, 2018.

Tianyi Chen, Yuejiao Sun, and Wotao Yin. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization.
*IEEE Transactions on Signal Processing*, 69:4937–4948, 2021a. Tianyi Chen, Yuejiao Sun, and Wotao Yin. Tighter analysis of alternating stochastic gradient method for stochastic nested problems. *arXiv preprint arXiv:2106.13781*, 2021b. Zaiwei Chen, Sheng Zhang, Thinh T Doan, John-Paul Clarke, and Siva Theja Maguluri. Finite-sample analysis of nonlinear stochastic approximation with applications in reinforcement learning. *Automatica*, 146:110623, 2022. Gal Dalal, Gugan Thoppe, Balázs Szörényi, and Shie Mannor. Finite sample analysis of two-timescale stochastic approximation with applications to reinforcement learning. In *Conference On Learning Theory*, pp. 1199–1233. PMLR, 2018. Thinh Doan and Justin Romberg. Finite-time performance of distributed two-time-scale stochastic approximation. In *Learning for Dynamics and Control*, pp. 26–36. PMLR, 2020. Thinh T Doan. Distributed local two-time-scale stochastic approximation. In *2021 Seventh Indian Control Conference (ICC)*, pp. 1–6. IEEE, 2021a. Thinh T Doan. Finite-time analysis and restarting scheme for linear two-time-scale stochastic approximation. *SIAM Journal on Control and Optimization*, 59(4):2798–2819, 2021b. Thinh T Doan. Nonlinear two-time-scale stochastic approximation convergence and finite-time performance. *IEEE Transactions on Automatic Control*, 2022. Thinh T Doan and Justin Romberg. Linear two-time-scale stochastic approximation a finite-time analysis. In *2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)*, pp. 399–406. IEEE, 2019. Hamid Reza Feyzmahdavian, Arda Aytekin, and Mikael Johansson. An asynchronous mini-batch algorithm for regularized stochastic optimization. *IEEE Transactions on Automatic Control*, 61(12):3740–3754, 2016. Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In *International conference on machine learning*, pp. 1568–1577. PMLR, 2018.
FlEUIydMMh
Theorem 3.6 states that the DAG $G$ is identifiable under the condition that $M_{i}\perp M_{j}\iff i-j\notin E^{\mathcal{U}}$, which is problematic. A DAG being identifiable means that the direction of every edge in the DAG can be identified, which is impossible without additional assumptions in the presence of latent variables.
Neuro-Causal Factor Analysis Anonymous authors Paper under double-blind review Abstract Factor analysis (FA) is a statistical tool for studying how observed variables with some mutual dependences can be expressed as functions of mutually independent unobserved factors, and it is widely applied throughout the psychological, biological, and physical sciences. We revisit this classic method from the comparatively new perspective given by advancements in causal discovery and deep learning, introducing a framework for Neuro-Causal Factor Analysis (NCFA). Our approach is fully nonparametric: it identifies factors via latent causal discovery methods and then uses a variational autoencoder (VAE) that is constrained to abide by the Markov factorization of the distribution with respect to the learned graph. We evaluate NCFA on real and synthetic data sets, finding that it performs comparably to standard VAEs on data reconstruction tasks but with the advantages of sparser architecture, lower model complexity, and causal interpretability. Unlike traditional FA methods, our proposed NCFA method allows learning and reasoning about the latent factors underlying observed data from a justifiably causal perspective, even when the relations between factors and measurements are highly nonlinear. 1 Introduction Since its development over a century ago, factor analysis (FA) (Spearman, 1904) has been applied in many scientific fields, including genomics, computational biology (Pourmara & Wernisch, 2007; Velten et al., 2022), economics (Forni & Reichlin, 1998; Ludvigson & Ng, 2007), sociology (Bollen, 2012) and many others. The goal of FA is to offer explanations of variability among dependent observables via (potentially) fewer latent variables that capture the degree to which the observables in the system vary jointly. For the sake of identifiability, it is common to assume linearity, although in practice it is well-known that many problems exhibit complex nonlinear latent structures. With the rise of nonparametric deep generative models that allow representing highly nonlinear relationships between dependent observables, one might hope to combine the best of both worlds. Moreover, within applications such as those listed above, FA is considered useful because the learned factors (latents) may offer a possible interpretation of relevant observed correlations. Many applied FA studies provide an interpretation of the learned factors based on the observed variables whose joint correlation they encode. A natural tendency when trying to interpret these factors is to assume they reflect possible common causes linking observed variables. However, the models used in such studies are not necessarily built with causality in mind. Collectively, these considerations purport a need for a framework for nonlinear causal factor analysis that combines identifiability with flexibility through the use of modern advances in deep generative models and causality. To this end, we propose Neuro-Causal Factor Analysis (NCFA), augmenting classic FA on both fronts by leveraging advancements of the last few decades, including (i) causal discovery (Spirtes et al., 2000; Pearl, 2009) and (ii) deep generative models, such as variational autoencoders (VAEs) (Kingma & Welling, 2014). To formalize this combination of ideas and apply it to the settings where FA is typically invoked, we consider causal models that directly abide by Reichenbach’s common cause principle (Reichenbach, 1956, p. 
157): dependent variables in a system that do not share a direct causal relation should be explained by the existence of one or more unobserved common causes which when conditioned upon render them independent. In particular, NCFA is applicable to --- 1We would like to briefly draw the reader’s attention to and repudiate the historical context within which factor analysis and related methods were originally developed (e.g. Saini, 2019; Crenshaw et al., 1995; Stubblefield, 2007). problems where one can assume that the observed (or measurement) variables are rendered mutually independent when conditioning on a set of unobserved latent variables, which may be interpreted as causally justifiable factors from the FA perspective. Such models naturally arise, for instance, when one wishes to interpret causal relations among pixel variables in image data, such as biomedical imaging data. In these contexts, each pixel in the image is treated as a random variable that may be dependent with other pixels. Since pixels should have no direct causal relations, all dependences should be explained by the latent information (for instance, neuronal activity in the brain during an fMRI scan) which resulted in the observed pixel intensities. In such situations, the common cause principle naturally applies. Our main contribution is the NCFA framework (Figure 1) for causally interpretable, identifiable FA models with the flexibility and data replication capabilities afforded by deep generative models. Our approach does not assume the underlying structure is known (i.e., it is learned from data), allows for flexible estimation of the latent space with deep generative models, and comes with fully nonparametric (i.e., no functional assumptions are imposed) identifiability guarantees. One of the key methodological contributions is the introduction of latent degrees of freedom whereby additional representational capacity is afforded by giving each causal variable its own factorial prior. We demonstrate on both synthetic and real data that NCFA injects generative models with interpretable structure without any significant loss of representational or predictive capacity compared to unstructured generative models. Moreover, we provide an algorithm and open source implementation for inference and prediction with NCFA models. The paper is organized as follows: We begin in Section 2 with a survey of related work in factor analysis, latent causal modeling, and deep generative models. In Section 3, we formally define NCFA models and present identifiability results. Next, in Section 4, we provide the NCFA algorithm and discuss its complexity. We then conclude by comparing NCFA to ground truth causal models and baseline VAE methods on synthetic and real data sets in Section 5. 2 COMPARISON TO RELATED WORK We divide the vast amount of related work into three areas: factor analysis (Section 2.1), latent causal discovery (Section 2.2), and deep generative models (Section 2.3). Before describing each in more detail in its respective subsection, we first summarize their differing motivations and methods and provide a comparison to our proposed NCFA. Factor analysis focuses on modeling measurement variables in terms of underlying factors (which can be interpreted as sources), focusing on model simplicity and interpretability, generally by assuming linear relations and jointly Gaussian random variables. 
Latent causal models focus on more detailed causal structure, not being limited to measurement variables and their latent sources, resulting in extremely interpretable models, but often at the expense of (arguably) strong, untestable assumptions like faithfulness. Deep generative models focus on learning as accurate a black box model as possible, optimizing a highly overparameterized and nonlinear model to still achieve generalizability. Although the interpretation of deep generative models as nonlinear factor analysis is standard in the literature (e.g., Roweis & Ghahramani, 1999; Murphy, 2022; Goodfellow et al., 2016), the additional dimensions of causality and identifiability are new to our approach. NCFA offers a unifying perspective on structured representation learning, incorporating the strengths of each of these approaches. Like FA, we focus on modeling measurement variables in terms of their underlying sources; however, NCFA identifies these sources and their structural connections to the measurements through explicit latent causal structure learning, which is made easier and requires weaker assumptions by focusing on source-measurement causal relations instead of more detailed intermediate causal structure. Furthermore, the source distributions and their corresponding functional relations to the measurements are estimated using a VAE whose architecture is constrained to respect the learned causal structure, gaining some of the expressiveness of deep generative models but regularized to maintain causal interpretability and generalizability. Hence, NCFA is motivated by the simplicity of FA, the causal interpretability of latent causal models, and the expressive power of deep generative models. 2.1 Factor analysis We now give a brief introduction to FA, focusing on the key terms and mathematical ideas\(^2\) that we connect to latent causal discovery and deep generative models, but for a more in-depth introduction and discussion about FA, see Mulaik (2009). **Definition 2.1.** A factor model represents a random (row) vector \(M \sim N(0, \Sigma)\) consisting of \(n\) measurement variables as a linear transformation of a standard jointly normal random vector \(L \sim N(0, I_K)\) of \(K < n\) latent factors via factor loading weights \(W \in \mathbb{R}^{K \times n}\) plus a jointly normal random vector of \(n\) error terms \(\epsilon \sim N(0, D)\), where \(D \in \mathbb{R}_{+}^{n \times n}\) is a diagonal matrix, via \[ M = LW + \epsilon. \] Given a sample \(M \in \mathbb{R}^{s \times n} \sim M\) and assuming that \(L\) and \(\epsilon\) are probabilistically independent, the factor model can be estimated (Adachi, 2019) from the empirical covariance matrix \(\hat{\Sigma} = \frac{1}{s} M^\top M\) by finding \(\hat{W}\) and \(\hat{D}\) that minimize the squared Frobenius norm \[ \|\hat{\Sigma} - \hat{W}^\top \hat{W} - \hat{D}\|_F^2. \] Such a solution is unique only up to orthogonal transformations of \(\hat{W}\), and so without further (e.g., in our case, causal) assumptions, finding a solution does not always warrant a meaningful interpretation of the resulting factor model. This unidentifiability poses a problem in exploratory FA, where there is no prior knowledge about \(\hat{\Sigma}, \hat{W}\) or \(\hat{D}\), but less so in confirmatory FA, where experts incorporate domain knowledge to constrain and interpret solutions as well as test specific hypotheses. 
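As a concrete illustration of the estimator just described, the sketch below generates data from a linear-Gaussian factor model and fits $\hat{W}$ and $\hat{D}$ by directly minimizing the squared Frobenius criterion; the problem sizes, the log-parametrization of the error variances, and the use of scipy's L-BFGS-B routine are illustrative assumptions rather than a standard FA fitting procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
s, n, K = 2000, 6, 2                         # samples, measurements, factors (assumed sizes)

# ground-truth loadings W (K x n) and diagonal error variances D
W_true = rng.normal(size=(K, n))
d_true = rng.uniform(0.2, 0.5, size=n)
L = rng.normal(size=(s, K))                  # latent factors, L ~ N(0, I_K)
eps = rng.normal(size=(s, n)) * np.sqrt(d_true)
M = L @ W_true + eps                         # measurements, M = L W + eps

Sigma_hat = (M.T @ M) / s                    # empirical covariance matrix

def objective(params):
    """Squared Frobenius norm || Sigma_hat - W^T W - D ||_F^2."""
    W = params[: K * n].reshape(K, n)
    d = np.exp(params[K * n :])              # keep error variances positive
    R = Sigma_hat - W.T @ W - np.diag(d)
    return np.sum(R ** 2)

x0 = np.concatenate([rng.normal(scale=0.1, size=K * n), np.zeros(n)])
res = minimize(objective, x0, method="L-BFGS-B")
W_hat = res.x[: K * n].reshape(K, n)

print("residual Frobenius norm:", res.fun ** 0.5)
# W_hat is identified only up to an orthogonal transformation, so we compare
# the implied low-rank part of the covariance instead of W itself.
print("|| W_true^T W_true - W_hat^T W_hat ||_F:",
      np.linalg.norm(W_true.T @ W_true - W_hat.T @ W_hat))
```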
Additionally, there are possibilities for either restricting or relaxing the FA model, including closely related methods like PCA (Pearson, 1901; Hotelling, 1933; Jolliffe, 2002), ICA (Comon, 1994; Hyvärinen & Oja, 2000), and many others beyond our scope. Notably, compared to other related work, sparse FA (Ning & Georgiou, 2011; Trendafilov et al., 2017; Yamamoto et al., 2017), which penalizes \(\hat{W}\) according to the number of nonzero entries, produces solutions more closely related to those we find with NCFA. The two main differences between sparse FA and NCFA are that (i) rather than explicitly penalizing the solution to encourage sparsity, NCFA simply learns a causal structure that exhibits a structure typically sought in sparse FA, and (ii) like most FA methods, sparse FA still assumes linearity and Gaussianity, whereas NCFA can be highly nonlinear and nonparametric. --- \(^2\)In case of conflicting notational conventions, e.g., \(L\) to denote a loading matrix in FA literature versus denoting a set of latent variables in the causal graphical model literature, we favor the latter. 2.2 Latent Causal Models Graphical causal modeling (Spirites et al., 2000; Pearl, 2009) focuses on learning a directed acyclic graph (DAG) representation of the causal relations among variables. This typically requires a strengthening of the common cause principle (into what is sometimes called the causal Markov assumption), which additionally assumes causal sufficiency, i.e., that there are no latent variables, and hence that all probabilistic dependences among the observed variables are due to causal relations among them. Methods for learning latent causal models have classically focused on learning DAG-like structure (using mixed instead of only directed graphs) among the observed variables to the extent allowed by confounding latent variables, exemplified by algorithms such as FCI (Spirites et al., 2000; Colombo et al., 2012) and IC (Pearl & Verma, 1995), which relax the Causal Markov Assumption. We also mention early work on this problem by (Martin & VanLehn, 1995; Friedman et al., 1997; Elidan et al., 2000). In contrast, research on causal measurement models (Silva et al., 2003) is more closely related to the goal of FA, in that it too focuses on factor-measurement relations. Recently, there has been a surge of interest in these models, with advances leveraging additive noise models (Maeda & Shimizu, 2021; Yang et al., 2022; Huang et al., 2022; Xie et al., 2022; Ashman et al., 2022), independent mechanisms (Gresele et al., 2021), weak supervision (Liu et al., 2022; Brehmer et al., 2022), and interventions (Chalupka et al., 2015; 2017; Ahuja et al., 2022; Squires et al., 2023; Varici et al., 2023). 2.3 Structured Deep Generative Models The past decade has seen a flurry of work on training large-scale deep latent variable models, fueled by advances in variational inference and deep learning (e.g. Larochelle & Murray, 2011; Kingma & Welling, 2014; Rezende et al., 2014; Dinh et al., 2014; Goodfellow et al., 2014; Rezende & Mohamed, 2015; Sohl-Dickstein et al., 2015). More recently, there has been a trend towards structured latent spaces, such as hierarchical, graphical, causal, and disentangled structures. Conceptually, NCFA provides a theoretically principled approach to automatically learning latent structure from data in a way that is causally meaningful. The related work here needs to be divided into two categories: known (e.g. from prior knowledge) vs. learned latent structure. 
These can be further divided into non-causal vs. causal approaches. Given that our main contribution is learned causal structure, we focus the discussion on the latter: for causal structure, identifiability becomes crucial, as it is well known that nonparametric latent variable models are unidentifiable in general (Hyvärinen & Pajunen, 1999; Locatello et al., 2019).

**Known structure** Early work looked at incorporating known structure into generative models, such as autoregressive, graphical, and hierarchical structure (Germain et al., 2015; Johnson et al., 2016; Sønderby et al., 2016; Webb et al., 2018; Weilbach et al., 2020; Ding et al., 2021; Mouton & Kroon, 2023). This was later translated into known causal structure (Kocaoglu et al., 2017).

**Learned structure** When the latent structure is unknown, several techniques have been developed to automatically learn useful (not necessarily causal) structure from data (Li et al., 2019; He et al., 2019; Wehenkel & Louppe, 2021; Kivva et al., 2022; Moran et al., 2023). More recently, based on growing interest in disentangled (Bengio, 2013) and/or causal (Schölkopf et al., 2021) representation learning, methods that automatically learn causal structure have been developed (Moraffah et al., 2020; Yang et al., 2021; Ashman et al., 2022; Shen et al., 2022; Kaltenpoth & Vreeken, 2023). Subramanian et al. (2022) assumes a linear Gaussian additive noise model, whereas Moraffah et al. (2020) uses GANs. Unlike NCFA, neither Moraffah et al. (2020) nor Subramanian et al. (2022) comes with identifiability guarantees. In order to guarantee identifiability, CausalVAE (Yang et al., 2021) leverages additional labeled data $u$, based on iVAE (Khemakhem et al., 2020). DEAR (Shen et al., 2022) requires a known causal ordering, leaving "causal discovery from scratch to future work". More recently, Ashman et al. (2022) used partially additive models and Kaltenpoth & Vreeken (2023) used post-nonlinear models to guarantee identifiability. In contrast to this existing work, NCFA admits nonparametric identifiability guarantees without additional labels, known causal ordering, or specifying a particular parametric or functional form (see subsection 3.2).

3 NCFA Models

Consider a collection of jointly distributed measurement variables $(M_1, \ldots, M_n)$ for which we assume that all dependences are explained by the existence of a latent common cause of the measured variables, i.e., that no $M_i$ and $M_j$ share a direct causal relation. If we were able to observe these latent confounders and condition upon them, $M_1, \ldots, M_n$ would become mutually independent. Hence, the only causal structure encoded via conditional independence in the observed distribution is contained in their marginal independence structure, which can be encoded in an undirected graph:

**Definition 3.1.** The *unconditional dependence graph* (UDG) for the jointly distributed random variables $(M_1, \ldots, M_n)$ is the undirected graph $\mathcal{U}$ with node set $[n] = \{1, \ldots, n\}$ and edge set
$$E = \{i \sim j : M_i \not\perp M_j\}.$$

To recover a causal interpretation of the relations that hold among the measurement variables, we extend a UDG to a *(minimum) MCM graph*. Following the principle of Occam's Razor, we would like to explain the observed dependences in $(M_1, \ldots, M_n)$ in the simplest possible way, i.e., using the fewest possible latents to serve as the common causes of the measurement variables that exhibit dependence.
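Before turning to such minimal explanations, the following sketch shows how the UDG of Definition 3.1 can be estimated from samples. As a simplification, it uses a Pearson-correlation test from scipy, which only detects linear dependence, in place of the nonparametric marginal independence tests discussed later, and the threshold $\alpha = 0.05$ is an assumed default.

```python
import numpy as np
from scipy.stats import pearsonr

def estimate_udg(M, alpha=0.05):
    """Adjacency matrix of the unconditional dependence graph (Definition 3.1).

    M     : (s, n) array of samples of the measurement variables.
    alpha : significance level; the edge i ~ j is kept whenever the
            marginal test between M_i and M_j rejects independence.
    Note: a Pearson test only detects linear dependence and stands in here
    for a nonparametric test such as distance covariance.
    """
    s, n = M.shape
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            _, p_value = pearsonr(M[:, i], M[:, j])
            if p_value < alpha:              # dependence detected -> edge i ~ j
                adj[i, j] = adj[j, i] = True
    return adj

# toy example: two latent sources, each driving its own block of measurements
rng = np.random.default_rng(0)
L1, L2 = rng.normal(size=(2, 5000))
M = np.stack([L1 + 0.3 * rng.normal(size=5000),
              L1 + 0.3 * rng.normal(size=5000),
              L2 + 0.3 * rng.normal(size=5000),
              L2 + 0.3 * rng.normal(size=5000)], axis=1)
print(estimate_udg(M).astype(int))           # typically two disconnected cliques: {0,1} and {2,3}
```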
To find such a minimal explanation, we identify a *minimum edge clique cover* of the UDG $\mathcal{U}$, which is a collection $\mathcal{C} = \{C_1, \ldots, C_K\}$ of cliques (i.e., complete subgraphs of $\mathcal{U}$) such that for every $i \sim j \in E$ the pair $i,j$ is contained in at least one clique in $\mathcal{C}$, and there exists no set of cliques with this property that has cardinality smaller than $|\mathcal{C}|$.

**Definition 3.2.** Let $\mathcal{U}$ be an undirected graph with minimum edge clique cover $\mathcal{C} = \{C_1, \ldots, C_K\}$. The *(minimum) MCM graph* $\mathcal{G}$ for $\mathcal{U}$ and $\mathcal{C}$ is the DAG with vertices $[n] \cup L$ where $L = \{l_1, \ldots, l_K\}$ and edge set
$$E = \{l_i \rightarrow j : j \in C_i, \forall i \in [K]\}.$$
We call $|L|$ the number of *causal degrees of freedom* of the model.

An example of a UDG and a corresponding MCM graph is presented in Figure 2. Minimum MCM graphs were originally defined in the context of MeDIL causal models (Markham & Grosse-Wentrup, 2020). A summary of this theory is given in Appendix A, for completeness. Since we assume that all marginal dependences in $(M_1, \ldots, M_n)$ are explainable by the existence of a latent common cause, the observed distribution $(M_1, \ldots, M_n)$ is realizable as the marginal distribution of $(M_1, \ldots, M_n)$ in the joint distribution $(M_1, \ldots, M_n, L_1, \ldots, L_K)$ that is Markov to the DAG $\mathcal{G}$, where $L_i$ is the random variable represented by the node $l_i$ in $\mathcal{G}$. From a factor analysis perspective, the latents $L_1, \ldots, L_K$ are the factors to be inferred.

### 3.1 NCFA Graphs and Variational Autoencoders

The minimum MCM graph defines a putative causal graph that respects the independence structure of $(M_1, \ldots, M_n)$, and our goal is to learn the associated latent representations from data using a deep generative model. Consider the DAG $\mathcal{G}$ depicted in Figure 2 with two latents. A naïve approach would be to design a standard VAE such that the decoder respects the Markov properties implied by $\mathcal{G}$; however, it is unlikely that any generative model trained with a two-dimensional latent space will be able to represent the measurement variables accurately. The difficulty is that although the true causal structure involves only two latent variables, exactly fitting such a model is very difficult in practice. Thus, there is a tension between expressive capacity and respecting the causal structure.

We overcome this difficulty by replacing each causal latent with an overparametrized, factorial prior. The virtue of overparametrization is well documented in the literature (Radhakrishnan et al.); in our setting, this has the effect of increasing representational capacity without breaking the Markov structure encoded in \( G \). Formally, given a minimum MCM graph \( G = ([n] \cup L, E) \), we replace each \( l_i \) with a set of independent latent nodes \( L_i = \{\ell_{i,1}, \ldots, \ell_{i,k_i}\} \), for some \( k_i \geq 1 \), each with the same connectivity (i.e., children) as \( l_i \). Thus, all told, we distribute \( \lambda = \sum_{i \in [K]} k_i \) latents across the cliques, a parameter called the latent degrees of freedom. It is easy to check that no matter how the \( \lambda \) latent degrees of freedom are distributed, the resulting DAG has the same independence structure over the measurement variables as \( G \).
This provides a rigorous device for increasing complexity without affecting the causal structure, and moreover, \( \lambda \) is a flexible tuning parameter that can be set arbitrarily large in practice, resulting in potentially overparametrized models. We call the resulting graph a NCFA-graph of \( G \) with \( \lambda \) latent degrees of freedom. **Definition 3.3.** Let \( G \) be a minimum MCM graph for the UDG \( U = ([n], E) \) and the minimum edge clique cover \( C = \{C_1, \ldots, C_k\} \) of \( U \). A NCFA graph of \( G \) with \( \lambda \) latent degrees of freedom is a graph \( \tilde{G} \) with node set \([n] \cup \tilde{L}\) and edge set \( \tilde{E} \) where \[ \tilde{L} = L_1 \cup \cdots \cup L_k \quad \text{for} \quad L_i = \{\ell_{i,1}, \ldots, \ell_{i,k_i}\}, \quad k_i \geq 1 \quad \forall i \in [K], \] and \[ \tilde{E} = \{\ell_{i,m} \rightarrow j : \forall j \in C_i, \forall m \in k_i, \forall i \in [K]\}. \] Each node \( \ell_{i,m} \) represents a latent variable \( Z_{i,m} \). Since the latent nodes in \( L_i \) all have the same connectivity as the single latent \( l_i \), their joint distribution \( f(L_i) = \prod_{m=1}^{k_i} f(Z_{i,m}) \) represents the common cause of the measurement variables corresponding to the nodes in \( C_i \), which was previously only represented by \( l_i \) in \( G \). The factors to be inferred from a factor analysis perspective are now the random vectors \( L_1, \ldots, L_K \) with \( L_i = (Z_{i,1}, \ldots, Z_{i,k_i}) \), which still have the causal interpretation afforded by the minimum MCM graph. However, the multiple latents provide us flexibility to model the effects of the causal factors. **Definition 3.4.** A NCFA model is a joint distribution \((M_1, \ldots, M_n)\) for which there is a NCFA-graph \( \tilde{G} = ([n] \cup \tilde{L}, \tilde{E}) \) and functions \( f_1, \ldots, f_n \) for which \( M_i := f_i(\text{pa}_Z(i), \epsilon_i) \) for all \( i \in [n] \), where \( \text{pa}_Z(i) := \{Z_{j,m} : \ell_{j,m} \in \text{pa}_{\tilde{G}}(i)\} \). When modeling a distribution via a NCFA model, the functions \( f_i \) are treated as unknowns to be inferred via a deep generative model such as a VAE. The encoder maps the observations into the latent space as the joint posterior distribution \( f(Z|M_1, \ldots, M_n) \) where \( Z \) is the random vector that collects the \( Z_{j,m} \), and the decoder maps latents to observations according to the factorization \[ f(M_1, \ldots, M_n|Z) = \prod_{i=1}^{n} f(M_i|\text{pa}_Z(i)). \] The joint distribution of the latent space is \( f(Z) = \prod_{i=1}^{K} f(L_i) \); i.e., it is a product of the (joint) distributions we have specified to represent each of the latent common causes in the minimum MCM model \( G \) for \( U \). Following training of the VAE, the model may be used to generate predictions in the observation space via draws from the latent space. Since our representation of the latent space was constructed according to the minimum MCM graph \( G \), the resulting predictions can be viewed as causally informed; i.e., they are observations generated from the estimated distribution of the latent primary causes of the measurement variables. ### 3.2 Identifiability of Minimum MCM Graphs and ECC-Model Equivalence While the UDG is identifiable, there may exist multiple minimum MCM graphs that yield the same UDG. This is because an undirected graph may have multiple, distinct minimum edge clique covers (see, for instance, the example provided in Appendix B). 
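Before discussing identifiability further, the sketch below makes Definitions 3.3 and 3.4 concrete: it builds the binary latent-to-measurement mask implied by an NCFA graph and uses it in a toy masked linear decoder, so that each $M_i$ depends only on $\mathrm{pa}_Z(i)$. The clique cover, the split of $\lambda$ across cliques, and the linear decoder are illustrative assumptions; the released NCFA implementation uses a VAE in place of this toy decoder.

```python
import numpy as np

def ncfa_mask(cliques, n, latents_per_clique):
    """Binary mask A of shape (sum_i k_i, n): A[z, j] = 1 iff latent node z is a
    parent of measurement j in the NCFA graph of Definition 3.3."""
    A = np.zeros((sum(latents_per_clique), n))
    row = 0
    for C, k_i in zip(cliques, latents_per_clique):
        for _ in range(k_i):                 # the k_i latents of clique C share its children
            A[row, sorted(C)] = 1.0
            row += 1
    return A

# minimum edge clique cover of a toy UDG on n = 4 measurements:
# cliques {0, 1} and {2, 3}, with lambda = 6 latent degrees of freedom split 3 + 3
cliques = [{0, 1}, {2, 3}]
latents_per_clique = [3, 3]
n = 4
A = ncfa_mask(cliques, n, latents_per_clique)

# toy "decoder": each measurement depends only on its latent parents, because the
# free weights are multiplied elementwise by the structural mask A.
rng = np.random.default_rng(0)
W = rng.normal(size=A.shape) * A
Z = rng.normal(size=(1000, A.shape[0]))      # factorial N(0, I) prior on the latents
M = Z @ W + 0.1 * rng.normal(size=(1000, n))

print(A)                                     # pa_Z(j) for each measurement j
print(np.round(np.corrcoef(M, rowvar=False), 2))   # block-diagonal dependence pattern
```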
Similar to DAGs, minimum MCM graphs may thus be equivalent when provided with only observational data.

**Definition 3.5.** We say that two minimum MCM graphs \( G = ([n] \cup L, E) \) and \( G' = ([n] \cup L', E') \) are ECC-observationally equivalent if \( i \) and \( j \) are \( d \)-separated given \( \emptyset \) in \( G \) if and only if they are \( d \)-separated given \( \emptyset \) in \( G' \).

While there exist equivalence classes of minimum MCM graphs containing multiple elements, there also exist classes that are singletons; in other words, there exist undirected graphs (UDGs) with a unique minimum edge clique cover. For such UDGs, the minimum MCM graph is identifiable.

Algorithm 1: Neuro-Causal Factor Analysis (NCFA)
input : sample $S$ of measurement variables $M$
parameter : significance level $\alpha$, latent degrees of freedom $\lambda$
output : neuro-causal factor model $\langle \tilde{G}, f_{[n]}, \epsilon \rangle$, with NCFA graph $\tilde{G}$, loading functions $f_{[n]}$, and residual measurement errors $\epsilon$
1. Estimate $\mathcal{U}$, the undirected dependence graph, via pairwise marginal independence tests with threshold given by $\alpha$;
2. Identify a minimum edge clique cover $\mathcal{C}$ of $\mathcal{U}$ and construct the corresponding minimum MCM graph $\mathcal{G}$;
3. Assign the remaining $\lambda - |\mathcal{C}|$ latents to the cliques in $\mathcal{C}$ to produce the NCFA-graph $\tilde{G}$;
4. Estimate functions $f_{[n]}$ using a VAE constrained by $\tilde{G}$, with residual measurement errors $\epsilon$;
5. return $\langle \tilde{G}, f_{[n]}, \epsilon \rangle$

**Theorem 3.6.** Suppose that the data-generating distribution is Markov to a minimum MCM graph $\mathcal{G}$. Then the DAG $\mathcal{G}$ is identifiable from the data-generating distribution if:
1. The UDG $\mathcal{U}$ for $\mathcal{G}$ admits a unique minimum edge clique cover, and
2. $M_i \perp M_j \iff i \sim j \notin E^{\mathcal{U}}$.

**Corollary 3.7.** Suppose that the data-generating distribution is Markov to a minimum MCM graph $\mathcal{G}$ satisfying the 1-pure-child assumption, namely, for each latent $l_i$ in $\mathcal{G}$ there exists a measurement node $i^*$ such that $\text{pa}_{\mathcal{G}}(i^*) = \{l_i\}$. Then $\mathcal{G}$ is identifiable.

Proofs are deferred to Appendix B. The identifiability result in Corollary 3.7 applies to models that are of practical interest (e.g., as in Donoho & Stodden, 2003; Arora et al., 2012; Bing et al., 2020; Moran et al., 2023). However, Theorem 3.6 shows that these are not the only models to which the identifiability result applies. An example of a UDG that admits a unique minimum edge clique cover but does not satisfy the pure measurement variable condition is given in Appendix B.

4 NEURO-CAUSAL FACTOR ANALYSIS

We now present our main contribution, the Neuro-Causal Factor Analysis (NCFA) algorithm, given in Algorithm 1. The NCFA algorithm follows the logic described in Section 3: namely, it infers a UDG from data, identifies a minimum edge clique cover $\mathcal{C} = \{C_1, \ldots, C_K\}$ for $\mathcal{U}$, builds the corresponding NCFA-graph $\tilde{G}$ with $\lambda$ latent degrees of freedom, and then trains a VAE according to the functional relationships among the measurement and latent variables specified by $\tilde{G}$. To estimate the UDG, pairwise marginal independence tests are performed.
Starting with the complete graph, the edge $i \sim j$ is removed whenever $M_i$ and $M_j$ are deemed independent, i.e., according to a test with statistics such as distance-covariance (Székely et al., 2007; Markham et al., 2022) or Chatterjee’s coefficient (Chatterjee, 2021; Lin & Han, 2022). A minimum edge clique cover is then identified for the estimated UDG $\hat{\mathcal{U}}$. In general, this is an NP-hard problem, however there are both exact algorithms that work well for small graphs and heuristic algorithms that scale to large graphs (Gramm et al., 2009; Conte et al., 2020; Ullah, 2022). Once a minimum edge clique cover is identified, the corresponding NCFA graph with $\lambda$ latent degrees of freedom is constructed. Here, we ensure that at every clique in the minimum edge clique cover of $\hat{\mathcal{U}}$ is assigned at least one latent variable. The remaining $\lambda - K$ latents are then distributed uniformly over the cliques. In this implementation of NCFA, we set default $\lambda = \lfloor n^2/4 \rfloor$, a known upper bound on the number of cliques in a minimum edge clique cover of a graph on $n$ nodes (Erdős et al., 1966). Finally, a VAE for the functional relations specified by the NCFA-graph is trained. One could, in principle, alternatively use any deep generative model. See Appendix C for further details. Since NCFA constructs its model via the MCM graph $\hat{\mathcal{U}}$, the estimated factors (i.e., joint distributions) $f(L_i)$ in the factorization of the latent distribution represent the distributions for the primary causes of the measurement variables to which the latent nodes in $\mathcal{L}_i$ are connected. This yields a factor analysis model in which the latent factors can justifiably be causally interpreted. Furthermore, while each latent variable $Z_{i,j}$ is assigned a Gaussian prior in the VAE, by assigning $\mathcal{L}_i = \{\ell_{i,1}, \ldots, \ell_{i,k_i}\}$ latents to each clique $C_i$, instead of a single latent $l_i$, each causal latent in the minimum MCM graph is modeled as a mixture distribution which can be arbitrarily non-Gaussian. Hence, the estimated factors have both a causal interpretation while additionally being as nonlinear as necessary. 5 APPLICATIONS ON SYNTHETIC AND REAL DATA We now present results of applying NCFA to synthetic and real data sets, observing that the performance of NCFA is competitive with classical VAEs while additionally offering a nonlinear, causally interpretable factor model. We provide a Python implementation of the NCFA algorithm as well as scripts for reproducing all of the following results, released as a free/libre software package: https://after.review. Here we summarize our main findings; the full experimental protocol and details can be found in the appendix, including details on the NCFA implementation (Appendix C), evaluation metrics (Appendix D), synthetic data generation and additional results (Appendix E), and additional results on real data (Appendix F). NCFA faces a trade-off between causal constraints and expressivity: an unconstrained, fully connected VAE ignores this structure, and has free reign to fit the data arbitrarily, at the cost of interpretability and potentially acausal relationships (e.g. spurious correlations). The additional structure offered by the minimum MCM graph in NCFA brings in causal structure and interpretation, but can hamper training if the structure is incorrect. Of course, when the causal structure is correct, there should be no significant loss in expressivity. 
Thus, ideally we will see no significant degradation in the loss, which is an indicator of structural fidelity. We measure this with the metric $\Delta$, which is the difference between the loss of an unconstrained, baseline VAE and the NCFA loss. On synthetic data where we know the causal ground truth, we can also directly measure structural fidelity using graph comparison metrics. See Appendix D for detailed definitions of our metrics. Except for the last experiment, no hyperparameter tuning was performed, and instead default, reasonable choices are used (e.g. $\alpha = 0.05$ and $\lambda = \lfloor n^2/4 \rfloor$). We anticipate improvements are possible with careful hyperparameter tuning.

**Synthetic data** We summarize some key results on the synthetic data, compared to both a ground truth causal model and a baseline VAE, in Figure 3. Results are grouped according to edge density of the generating UDG, shown along the $x$-axis. Figure 3a contains box plots of distance between the true MCM causal structure and that learned by NCFA (lower is better). Here, distance between MCM graphs is measured using the Structural Frobenius Difference (SFD), which is a modification of the more common Structural Hamming Distance (SHD) for graphs with possibly different numbers of nodes (see Appendix D for more details on SFD and its relation to SHD). Figure 3b contains box plots of Validation-$\Delta$, the difference between the final validation loss of the baseline VAE and that of NCFA (higher is better). Additionally, we report that NCFA learned the exact true causal structure at a proportion of 0.91 for density $p = 0.1$, at 0.56 for $p = 0.2$ and between 0.39 and 0.43 for other values of $p$. As is commonly seen in causal discovery tasks, NCFA recovers causal structure well in the sparse setting but increasingly less so in denser settings. Causal discovery is notoriously difficult, especially in the small-sample regime, but NCFA benefits from only needing to perform marginal independence tests (so the conditioning set is always empty). In terms of performance as a generative model, we see that NCFA generally improves the validation loss compared to the baseline VAE since the median loss difference is above 0 for all edge densities except for $p = 0.1$, even as the true graph density increases. This indicates both that the causal structure provides helpful constraints in the NCFA pipeline and that NCFA is robust in the face of moderate misestimation of the causal structure.

**Real data** We ran NCFA on two real datasets, MNIST and TCGA, comparing its performance to a baseline VAE. In both cases, there is no ground truth causal graph, so we focus on VAE metrics as a benchmark. We report the results in Table 1. For MNIST, sample size is much larger than the number of measurement variables $n$, but this is not true of TCGA. When run using default settings for $\alpha$, $\lambda$ in the first two rows, we see that NCFA achieves comparable training and validation to the baseline VAE, demonstrating that it learns reasonable constraints (i.e. causal relations) as well as its ability to scale well to high-dimensional settings.

Figure 3: Results of NCFA on synthetic data sets from randomly generated graphs: (a) shows distance (SFD) between learned causal structures and the ground truth; (b) shows Validation-Δ, the difference of validation loss between baseline VAE and NCFA (higher means better performance for NCFA).
In fact, for TCGA the training and validation losses are lower for NCFA, suggesting that incorporating the causal structure learned by NCFA improved model performance. Curiously, for MNIST, the minimum MCM graph consisted of just a single latent (i.e., \(|L| = 1\)), suggesting the causal structure in this dataset is limited, which matches expectations. This does not mean that there are not multiple, interpretable latents to be discovered as is well-documented in the literature, but perhaps that these latents do not have a strong causal interpretation. Table 1: Results of NCFA on two real data sets | | samp size | \(n\) | \(\alpha\) | \(\lambda\) | \(|L|\) | Training-Δ | Validation-Δ | |-------|-----------|------|--------|--------|------|----------|-------------| | MNIST | 42000 | 784 | 0.05 | 153664 | 1 | -0.00475 | -0.04814 | | TCGA | 632 | 1000 | 0.05 | 250000 | 8129 | 0.11488 | 0.11865 | | MNIST | 42000 | 784 | 0.001 | 7800 | 560 | -76.682 | -74.163 | | TCGA | 632 | 1000 | 0.05 | 10000 | 969 | -78.721 | -68.117 | On both datasets, the default \(\lambda\) and maximum allowed \(|L| < \lambda\) were quite large, so we also ran experiments under the 1-pure-child assumption (see Appendix F for details), which guarantees that \(|L| \leq n\), allowing us to safely reduce \(\lambda\) from \(\lceil n^2/4 \rceil\) to, e.g., \(10n\). Additionally, we decreased \(\alpha\) to 0.001 for MNIST, taking advantage of the large sample size and encouraging NCFA to learn a sparser structure. However, based on the training and validation differences, NCFA failed to converge properly compared to the baseline VAE. In the case of MNIST, we attribute this to it arguably being a data set without causally meaningful sparse latents. For TCGA, the performance of NCFA without the 1-pure-child assumption yielded a better performance than the baseline VAE. Hence, the decrease in performance of NCFA under this constraint could suggest that the true causal structure of TCGA simply does not abide by the 1-pure-child assumption. Collectively, these results suggest that NCFA with default parameter specifications appears to yield competitive, if not improved, performance over baseline VAE models that successfully incorporate causal structure when it is present to be learned. When NCFA has free reign to learn whatever causal structure (when it exists, as in TCGA) can be gleaned from the data, it appears to benefit training. However, the second round of experiments suggest that one should take care when adjusting the algorithm to fit a specified causal structure, such as the 1-pure-child constraint, as forcing possibly nonexistent causal structure into the model may be detrimental to the models predictive capabilities. This is in line with the observation at the start of Section 5 that one risks hampering training when the causal structure is misspecified. References Kohei Adachi. Factor analysis: Latent variable, matrix decomposition, and constrained uniqueness formulations. *Wiley Interdisciplinary Reviews: Computational Statistics*, 11(3):e1458, 2019. Kartik Ahuja, Yixin Wang, Divyat Mahajan, and Yoshua Bengio. Interventional causal representation learning. *arXiv preprint arXiv:2209.11924*, 2022. Sanjeev Arora, Rong Ge, and Ankur Moitra. Learning topic models–going beyond svd. In *2012 IEEE 53rd annual symposium on foundations of computer science*, pp. 1–10. IEEE, 2012. Matthew Ashman, Chao Ma, Agrin Hilmkil, Joel Jennings, and Cheng Zhang. Causal reasoning in the presence of latent confounders via neural admg learning. 
In *The Eleventh International Conference on Learning Representations*, 2022. Yoshua Bengio. Deep learning of representations: Looking forward. In *Statistical Language and Speech Processing: First International Conference, SLSP 2013, Tarragona, Spain, July 29-31, 2013. Proceedings 1*, pp. 1–37. Springer, 2013. Xin Bing, Florentina Bunea, Yang Ning, and Marten Wegkamp. Adaptive estimation in structured factor models with applications to overlapping clustering. *Annals of Statistics*, 48(4), 2020. Kenneth A Bollen. Instrumental variables in sociology and the social sciences. *Annual Review of Sociology*, 38:37–72, 2012. Johann Brehmer, Pim De Haan, Phillip Lippe, and Taco Cohen. Weakly supervised causal representation learning. *arXiv preprint arXiv:2203.16437*, 2022. Rares-Darius Buhai, Yoni Halpern, Yoon Kim, Andrej Risteski, and David Sontag. Empirical study of the benefits of overparameterization in learning latent variable models. In *International Conference on Machine Learning*, pp. 1211–1219. PMLR, 2020. Krzysztof Chalupka, Pietro Perona, and Frederick Eberhardt. Visual causal feature learning. In *Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence*, UAI 15, pp. 181–190, Arlington, Virginia, USA, 2015. AUAI Press. ISBN 9780996643108. Krzysztof Chalupka, Frederick Eberhardt, and Pietro Perona. Causal feature learning: an overview. *Behaviormetrika*, 44(1):137–164, 2017. Sourav Chatterjee. A new coefficient of correlation. *Journal of the American Statistical Association*, 116(536):2009–2022, 2021. Sourav Chatterjee. A survey of some recent developments in measures of association. *arXiv preprint arXiv:2211.04702*, 2022. Diego Colombo, Marloes H Maathuis, Markus Kalisch, and Thomas S Richardson. Learning high-dimensional directed acyclic graphs with latent and selection variables. *The Annals of Statistics*, pp. 294–321, 2012. Pierre Comon. Independent component analysis, a new concept? *Signal processing*, 36(3):287–314, 1994. Alessio Conte, Roberto Grossi, and Andrea Marino. Large-scale clique cover of real-world networks. *Information and Computation*, 270:104464, Feb 2020. ISSN 0890-5401. doi: 10.1016/j.ic.2019.104464. URL http://dx.doi.org/10.1016/j.ic.2019.104464. Kimberlé Crenshaw, Neil Gotanda, Gary Peller, and Kendall Thomas. *Critical race theory: The Key Writings that formed the Movement*. The New Press, 1995. Danai Deligeorgaki, Alex Markham, Pratik Misra, and Liam Solus. Combinatorial and algebraic perspectives on the marginal independence structure of bayesian networks. *arXiv preprint arXiv:2210.00822v2*, 2023. Mucong Ding, Constantinos Daskalakis, and Soheil Feizi. GANs with conditional independence graphs: On subadditivity of probability divergences. In *International Conference on Artificial Intelligence and Statistics*, pp. 3709–3717. PMLR, 2021. arXiv:2003.00652 [cs.LG]. Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. *arXiv preprint arXiv:1410.8516*, 2014.
xHmCdSArUC
Also, $\nu$ is currently set to some small value in the experiments, so $(1-\nu)^t$ decays significantly slower than $\binom{1/2}{t}$. Thus, I wonder if $\nu$ can be dropped to save some tuning effort.
CORRELATED NOISE PROVABLY BEATS INDEPENDENT NOISE FOR DIFFERENTIALLY PRIVATE LEARNING Christopher A. Choquette-Choo* Krishnamurthy (Dj) Dvijotham* Krishna Pillutla* Arun Ganesh Thomas Steinke Abhradeep Guha Thakurta Google ABSTRACT Differentially private (DP) learning algorithms inject noise into the learning process. While the most common private learning algorithm, DP-SGD, adds independent Gaussian noise in each iteration, recent work on matrix factorization mechanisms has shown empirically that introducing correlations in the noise can greatly improve their utility. We characterize the asymptotic learning utility for any choice of the correlation function, giving precise analytical bounds for linear regression and as the solution to a convex program for general convex functions. We show, using these bounds, how correlated noise provably improves upon vanilla DP-SGD as a function of problem parameters such as the effective dimension and condition number. Moreover, our analytical expression for the near-optimal correlation function circumvents the cubic complexity of the semi-definite program used to optimize the noise correlation matrix in previous work. We validate our theory with experiments on private deep learning. Our work matches or outperforms prior work while being efficient both in terms of compute and memory. 1 INTRODUCTION The broad adoption of deep learning using sensitive data has led to the increasing popularity of rigorous frameworks for privacy preservation, such as differential privacy (Dwork et al., 2006). The workhorse of private learning, a differentially private variant of stochastic gradient descent called DP-SGD (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016), clips per-example gradients to some $\ell_2$ norm and adds independent Gaussian noise. DP-SGD has been used in a range of applications from learning with medical images (Adnan et al., 2022) to finetuning large language models with $O(100B)$ parameters (He et al., 2023). A recent line of work instead proposes to add correlated Gaussian noise to each clipped gradient (Smith & Thakurta, 2013; Kairouz et al., 2021a; Denisov et al., 2022; Choquette-Choo et al., 2023b). This class of algorithms called DP-FTRL, has been used for private federated learning at industrial scale (Xu et al., 2023). By solving an expensive semi-definite program to find the noise correlations, Choquette-Choo et al. (2023a) demonstrated empirically that DP-FTRL is never worse and often much better than DP-SGD in its privacy-utility tradeoff across multiple modalities like images and text. However, several questions remain open. Does DP-FTRL provably improve over DP-SGD in its expected utility? Further, can we design a more computationally efficient procedure to find the noise correlations for DP-FTRL without significantly worsening the privacy-utility tradeoff? We answer both questions affirmatively by (1) providing a sharp theoretical characterization of the noisy training dynamics of DP-FTRL, and (2) leveraging these analytical tools to circumvent the semi-definite program required in past work. *Equal contribution; alphabetical ordering. 
Algorithm 1 The DP-FTRL/Noisy-FTRL algorithms with a noise correlation matrix $B \in \mathbb{R}^{T \times T}$

Input: $B \in \mathbb{R}^{T \times T}$, initial iterate $\theta_0 \in \mathbb{R}^d$, $\ell_2$ clip norm $G$, noise multiplier $\sigma_{dp}$, learning rate $\eta$, dataset $D$
1: for $t = 0, \ldots, T - 1$ do
2: Obtain the next datapoint $z_t$ and compute $g_t = \begin{cases} \nabla f(\theta_t; z_t) + \nabla r(\theta_t) & \text{for Noisy-FTRL}, \\ \text{clip}(\nabla f(\theta_t; z_t), G) + \nabla r(\theta_t) & \text{for DP-FTRL} \end{cases}$
3: Sample noise $w_t \sim \mathcal{N}(0, \sigma_{dp}^2 G^2 I_d)$ and calculate the correlated noise $\tilde{w}_t = \sum_{\tau=0}^{t} B_{t,\tau} w_\tau$
4: Update $\theta_{t+1} = \theta_t - \eta \tilde{g}_t$ with the noisy gradient $\tilde{g}_t = g_t + \tilde{w}_t$
Return $\theta_T$

1.1 Problem Setup and Background

Let $D = \{z_0, \ldots, z_{T-1}\}$ be a dataset of $T$ datapoints, where each datapoint is sampled i.i.d. from an underlying distribution $P_{data}$. Our learning objective is to minimize:
$$F(\theta) = \mathbb{E}_{z \sim P_{data}}[f(\theta; z)] + r(\theta),$$
where $f(\theta; z)$ is the loss incurred by model parameters $\theta \in \mathbb{R}^d$ on a datapoint $z$, and $r(\cdot)$ is data-independent regularization. We aim to minimize $F$ while satisfying differential privacy with respect to the dataset $D$. We assume that $F$ has a unique minimizer denoted $\theta_*$.

We focus on variants of stochastic gradient descent with a batch size of 1 for data arriving in a stream. The learning algorithms we study are presented in Algorithm 1; we assume throughout that the dataset $D$ is randomly shuffled before running the algorithm so that each datapoint $z_t$ is an i.i.d. sample from $P_{data}$. DP-FTRL with a noise coefficient matrix $B \in \mathbb{R}^{T \times T}$ (which is lower triangular) performs the updates:
$$\theta_{t+1} = \theta_t - \eta \left( \text{clip}(\nabla f(\theta_t; z_t), G) + \nabla r(\theta_t) + \sum_{\tau=0}^{t} B_{t,\tau} w_\tau \right)$$
for Gaussian noise $w_t \sim \mathcal{N}(0, \sigma_{dp}^2 G^2 I_d)$, where $\text{clip}(\cdot, G)$ denotes projection onto an $\ell_2$ ball of radius $G$. We define Noisy-FTRL to be DP-FTRL without clipping. Taking $B = I$ as the identity matrix recovers DP-SGD (with clipping) and Noisy-SGD (without clipping), and other choices give rise to alternate algorithms. We restate a result from prior work showing that DP-FTRL is differentially private for any choice of $B$, provided the noise multiplier is scaled up appropriately.

**Theorem 1.1** (Denisov et al. (2022); Bun & Steinke (2016)). DP-FTRL (Algorithm 1 with the clipping enabled) satisfies $\rho$-zero concentrated differential privacy ($\rho$-zCDP) if the noise multiplier is taken as $\sigma_{dp}^2 = \gamma_T^2(B)/(2\rho)$, where $\gamma_T(B) = \max_{i < T} \| (B^{-1})_{:,i} \|_2$ is the sensitivity (maximum column norm) of $B^{-1}$.

**Remark 1.2.** Although Noisy-FTRL is not differentially private, it lets us analyze the noise dynamics of DP-FTRL without technicalities associated with clipping. We sharply characterize the asymptotic utility of Noisy-FTRL for linear regression and show later that this analysis extends to DP-FTRL under appropriate assumptions. For mean estimation and learning with Lipschitz convex losses, we directly analyze DP-FTRL.

1.2 Motivation

This work is motivated by two open questions in particular.

---

1 Matrices (e.g. $B = [B_{t,\tau}]_{t,\tau \geq 0}$) and vectors (e.g. $\beta = (\beta_0, \beta_1, \ldots)$) are zero-indexed and bold-faced.
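As a minimal illustration of Algorithm 1, the sketch below runs DP-FTRL with a banded lower-triangular Toeplitz $B$ on a toy quadratic loss, calibrating the noise multiplier according to Theorem 1.1 from the maximum column norm of $B^{-1}$. The loss, the particular weights $\beta_t$, and all hyperparameters are illustrative assumptions; this is not the code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 200, 5
eta, G, rho = 0.1, 1.0, 1.0                      # step size, clip norm, zCDP budget (assumed)

# toy data and per-example loss f(theta; z) = 0.5 * ||theta - z||^2 (regularizer omitted)
data = rng.normal(size=(T, d))

def clip(v, G):
    norm = np.linalg.norm(v)
    return v if norm <= G else v * (G / norm)

# banded lower-triangular Toeplitz noise-correlation matrix B (illustrative weights)
beta = np.array([1.0, -0.5, -0.125, -0.0625])
B = np.zeros((T, T))
for t in range(T):
    for tau in range(max(0, t - len(beta) + 1), t + 1):
        B[t, tau] = beta[t - tau]

# sensitivity gamma_T(B): maximum column norm of B^{-1} (Theorem 1.1)
B_inv = np.linalg.inv(B)
gamma = np.linalg.norm(B_inv, axis=0).max()
sigma_dp = np.sqrt(gamma ** 2 / (2.0 * rho))

theta = np.zeros(d)
w = sigma_dp * G * rng.standard_normal((T, d))   # independent Gaussian seeds w_t
for t in range(T):
    g = clip(theta - data[t], G)                 # clipped per-example gradient
    w_corr = B[t, : t + 1] @ w[: t + 1]          # correlated noise sum_tau B[t, tau] w_tau
    theta = theta - eta * (g + w_corr)           # step with the noisy gradient

print("sensitivity gamma_T(B):", round(float(gamma), 4))
print("final iterate:", np.round(theta, 3))
```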
2 We give DP guarantees w.r.t. the “zero-out” notion of neighborhood (Kairouz et al., 2021a); see Appendix A for a review. Further, a $\rho$-zCDP guarantee can be readily translated into $(\varepsilon, \delta)$-DP (Bun & Steinke, 2016, Prop. 1.3). Provable separation between DP-SGD and DP-FTRL: The best-known separation between DP-SGD and DP-FTRL in the literature is due to Kairouz et al. (2021a). For $G$-Lipschitz convex losses, DP-FTRL at a privacy level of $\rho$-zCDP achieves a suboptimality of $O(Gd^{1/4}/\sqrt{\rho T})$ compared to DP-SGD’s $O(Gd^{1/4}/\sqrt{\rho^2 T})$. The only improvement here is in terms of the privacy parameter $\rho$. More recently, Koloskova et al. (2023) analyze Noisy-FTRL but without normalizing for the sensitivity $\gamma_T(B)$ as in Theorem 1.1. Thus, the existing theory fails to reflect the large margin by which DP-FTRL empirically outperforms DP-SGD across the board (Choquette-Choo et al., 2023a), and a precise characterization is missing. Computationally efficient DP-FTRL: Prior work on DP-FTRL utilizes the noise correlation matrix $B$ that minimizes the squared error in the gradient prefix sums (Kairouz et al., 2021a; Denisov et al., 2022): $$\varphi(B) = \sum_{t=0}^{T-1} \mathbb{E}\left[\left\|\sum_{\tau=0}^{t} \tilde{g}_\tau - \sum_{\tau=0}^{t} g_\tau\right\|^2_2\right]$$ (3) where $g_t$ is the clipped gradient applied in iteration $t$ and $\tilde{g}_t$ is its noisy counterpart (cf. Algorithm 1). This was, in turn, obtained as an upper bound on the regret in an adversarial online learning setting (Kairouz et al., 2021a, Thm. C.1). The most potent algorithm from the previous work gave $B$ as the solution of a semidefinite program with matrix variables of size $O(T^2)$, requiring $O(T^3)$ time (Denisov et al., 2022, Eq. 4). This cost is prohibitive for large learning problems. Moreover, there is a mismatch between the objective (3) used to find the noise correlations and the final learning objective $F(\theta_T)$. In particular, there exist matrices $B_1, B_2$ with equal squared error $\varphi(B_1) = \varphi(B_2)$ and equal sensitivities $\gamma_T(B_1) = \gamma_T(B_2)$ such that DP-FTRL with $B_1$ diverges while DP-FTRL with $B_2$ converges (Koloskova et al., 2023). Our approach: We study the suboptimality in the final objective $\mathbb{E}[F(\theta_T) - F(\theta^\star)]$. We work in the asymptotic $T \to \infty$ regime to allow the use of analytic tools, but also to derive results that apply regardless of the dataset size.\footnote{Note that the DP noise multiplier $\sigma_{dp}$ remains finite in the asymptotic $T \to \infty$ regime as we consider the streaming setting: each example is processed once and the number of examples also grows to infinity.} Second, we restrict the search over $B$ to Toeplitz matrices $B_{t,\tau} = \beta_{t-\tau}$ generated by a sequence $\beta = (\beta_0, \beta_1, \ldots)$ of reals, but a stronger motivation is that they are anytime, i.e., they do not need to be recomputed for each value of $T$ and easily apply as $T \to \infty$. Toeplitz $B$ were previously considered for their computational efficiency in learning (Choquette-Choo et al., 2023b) and their near-optimal rates in linear counting queries (Henzinger et al., 2024). Thus, our goal is to characterize the asymptotic suboptimality $$F_\infty(\beta) := \lim_{T \to \infty} \mathbb{E}[F(\theta_T) - F(\theta^\star)]$$ (4) for $\theta_T$ produced by Noisy-FTRL or DP-FTRL under noise correlation weights $\beta$ where $\theta^\star = \arg\min F$ is assumed unique. 
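As a numerical check on the limiting sensitivity (5), the following sketch compares the finite-horizon sensitivity $\gamma_T(B)$ of a Toeplitz correlation matrix (the maximum column norm of $B^{-1}$) with $\gamma_\infty$ obtained by averaging $|B(\omega)|^{-2}$ over frequencies. The geometrically decaying weights are an illustrative assumption, chosen so that $B(\omega)$ has no zeros and the comparison is well posed; they are not one of the correlation sequences studied in the paper.

```python
import numpy as np

def toeplitz_from_weights(beta, T):
    """Lower-triangular Toeplitz matrix with B[t, tau] = beta[t - tau]."""
    B = np.zeros((T, T))
    for t in range(T):
        for tau in range(max(0, t - len(beta) + 1), t + 1):
            B[t, tau] = beta[t - tau]
    return B

# illustrative anti-correlated, geometrically decaying weights (assumed);
# c < nu keeps B(z) zero-free in the unit disk, so B^{-1} is well behaved
nu, c = 0.1, 0.05
beta = np.concatenate(([1.0], -c * (1 - nu) ** np.arange(400)))

# finite-horizon sensitivity gamma_T(B) = max column norm of B^{-1}
T = 300
B = toeplitz_from_weights(beta, T)
gamma_T = np.linalg.norm(np.linalg.inv(B), axis=0).max()

# limiting sensitivity, eq. (5): gamma_inf^2 = (1/2pi) * integral of |B(omega)|^{-2}
omega = np.linspace(-np.pi, np.pi, 8192, endpoint=False)
t_idx = np.arange(len(beta))
B_omega = np.exp(1j * np.outer(omega, t_idx)) @ beta   # B(omega) = sum_t beta_t e^{i omega t}
gamma_inf = np.sqrt(np.mean(np.abs(B_omega) ** -2.0))

print("gamma_T(B)   =", round(float(gamma_T), 4))
print("gamma_inf(B) =", round(float(gamma_inf), 4))
```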
This limit turns out to be well-defined and finite for the settings we consider as long as $\|\beta\|_2$ is finite. We analyze $F_\infty$ in the frequency domain using the discrete-time Fourier transform $B(\omega) = \sum_{t=0}^{\infty} \beta_t \exp(i\omega t)$, with $i$ the imaginary unit. Further, we define the limiting sensitivity associated with $B$ as the limiting value of $\gamma_T$, which, using standard Fourier analysis tools, equals $$\gamma_\infty(B) := \lim_{T \to \infty} \gamma_T(B) = \left(\frac{1}{2\pi} \int_{-\pi}^{\pi} |B(\omega)|^{-2} d\omega\right)^{1/2}. \quad (5)$$

1.3 Our Contributions

The concrete contributions of this work are as follows.

$\nu$-DP-FTRL: Analytically optimal DP-FTRL for mean estimation: We give analytical expressions for the asymptotic suboptimality $F_\infty$ for mean estimation and the noise correlations $\beta$ that minimize $F_\infty$ as a function of the learning rate $\eta$ (§2.1). We find that the optimal noise is anti-correlated, so the algorithm subtracts out previously added noise. Inspired by the analytical expression for the optimal noise correlations $\beta_\star$ for mean estimation, we propose a single-parameter family of choices for $\beta$, which we call $\nu$-DP-FTRL. We show its favorable theoretical and empirical properties for a broader range of problems.

Table 1: Asymptotic suboptimality of Noisy-SGD/Noisy-FTRL for linear regression with Gaussian inputs $x \sim N(0, H)$ and noise multiplier $\sigma^2_{dp} = \gamma_\infty(\beta)^2 / (2\rho)$ based on the limiting sensitivity (5). We give the bounds in terms of the learning rate $\eta$, dimension $d$, the effective dimension $d_{\text{eff}} = \frac{\text{Tr}(H)}{\lambda_{\max}(H)}$, and the noise variance $\rho^{-1}$ representing the privacy level. We take $G = 1$ and $\lambda_{\max}(H) = 1$ w.l.o.g. and only show the term depending on $\rho$. Since $1 \leq d_{\text{eff}} \leq d$, Noisy-FTRL is better than Noisy-SGD at smaller $\eta$ or when $d_{\text{eff}}$ is small (e.g., when $H$ is close to low rank).

| Algorithm | Asymptotic Suboptimality $F_\infty$ | Ratio w/ Lower Bound | Remark |
|-----------------|-------------------------------------|----------------------|--------|
| Lower Bound | $\Omega(\eta^2 \rho^{-1} d_{\text{eff}})$ | 1 | for all $\beta$ with finite $\|\beta\|_1$ |
| Noisy-SGD | $\Theta(\eta \rho^{-1} d)$ | $\frac{d}{\eta\, d_{\text{eff}}}$ | $\Theta(\cdot)$ denotes matching upper & lower bounds |
| $\nu$-Noisy-FTRL | $O\left(\eta^2 \rho^{-1} d_{\text{eff}} \log^2 \frac{1}{\eta \mu}\right)$ | $\log^2 \frac{1}{\eta \mu}$ | Here, $\mu = \lambda_{\min}(H)$ and we use weights $\beta$ from (7) |

Strict separation for linear regression: We establish sharp bounds on Noisy-FTRL (i.e., DP-FTRL without gradient clipping) for linear regression. Summarized in Table 1 and stated formally in §2.2, we show: (a) $\nu$-Noisy-FTRL, with analytical closed-form correlations, matches the lower bound up to log factors. Both of these bounds scale with the effective dimension $d_{\text{eff}}$ of the problem, which is no greater than the dimension $d$ but can be much smaller when the data is approximately low rank. (b) $\nu$-Noisy-FTRL is provably better than Noisy-SGD by a factor that can be as large as $d / \log d$ (when $d_{\text{eff}}$ is a constant). This shows an exponential separation between Noisy-FTRL and Noisy-SGD.
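The limiting sensitivity of Eq. (5), which normalizes all of these comparisons, is easy to approximate numerically for any Toeplitz sequence $\beta$. A rough sketch (the truncation length of $\beta$ and the frequency grid size are arbitrary choices of this illustration):

```python
import numpy as np

def limiting_sensitivity(beta, num_freqs=4096):
    """Approximate gamma_infty(B) of Eq. (5) for a truncated Toeplitz sequence beta."""
    omegas = np.linspace(-np.pi, np.pi, num_freqs, endpoint=False)
    ts = np.arange(len(beta))
    B_omega = np.exp(1j * np.outer(omegas, ts)) @ np.asarray(beta)   # B(omega) = sum_t beta_t e^{i omega t}
    return float(np.sqrt(np.mean(np.abs(B_omega) ** (-2.0))))        # (1/2pi) * integral over [-pi, pi]
```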
Our bounds quantitatively show how the anti-correlations of $\nu$-Noisy-FTRL help prevent noise accumulation along eigen-directions of the Hessian with small eigenvalues. The gradients have a weak signal along these directions and are unable to undo the effect of the previous noise and move the iterates back towards the minimizer; the anti-correlations are essential to obtain near-optimal asymptotic suboptimality. We also leverage these asymptotics to give bounds on the utility of $\nu$-DP-FTRL and DP-SGD for finite $T$.

Numerical separation for general strongly convex functions: We bound the asymptotic suboptimality $F_\infty$ for any noise correlation weights $\beta$ as the optimal value of a convex program. We use this to show that DP-FTRL achieves a tighter bound, particularly when the condition number is large (Figure 3 in §3).

Experiments with private deep learning: We show the proposed $\nu$-DP-FTRL outperforms other efficient differentially private algorithms on image and text classification tasks. We also find that our approach is competitive even with inefficient approaches that require $O(T^3)$ computation and $O(T^2)$ memory.

2 ANALYSIS FOR QUADRATIC OBJECTIVES

For quadratic objectives, Algorithm 1 (with no clipping) corresponds to a linear dynamical system (Gray & Davisson, 2004), allowing us to use analytical tools. We give an exact analysis of DP-FTRL for mean estimation and Noisy-FTRL for linear regression. The analysis of Noisy-FTRL also lets us derive guarantees for DP-FTRL for linear regression. We do not aim to achieve the best possible rates in these stylized models. Rather, our goal is to understand the noise dynamics of DP-FTRL and show a separation with DP-SGD.

2.1 CONCEPTUAL OVERVIEW: PRIVATE MEAN ESTIMATION IN ONE DIMENSION

We begin with the simplest objective function, the squared error for a mean estimation problem on the real line. This setting captures the core intuition and ideas used to derive further results. Consider a distribution $P_{\text{data}}$ with $|z - \mathbb{E}[z]| \leq \sigma_{\text{sgd}}$ and $|z| \leq 1$ a.s. for $z \sim P_{\text{data}}$. Our objective now is $$F(\theta) = \frac{1}{2} \mathbb{E}_{z \sim P_{\text{data}}} ((\theta - z)^2) \quad \text{with} \quad f(\theta; z) = \frac{z^2}{2} - z\theta, \quad \text{and} \quad r(\theta) = \frac{\theta^2}{2}. \quad (6)$$ We show a strict separation between DP-FTRL and DP-SGD for this simple minimization problem.

Figure 1: Left: The ratio of the asymptotic suboptimalities of DP-FTRL to DP-SGD for mean estimation vs. the learning rate $\eta$. DP-FTRL is never worse but is orders of magnitude better at $\eta \to 0$ or $\eta \to 1$. Middle & Right: Time- and frequency-domain descriptions of the optimal noise correlations for mean estimation (defined in Theorem 2.1).

**Theorem 2.1.** Consider the setting above with learning rate $\eta \leq 1$, clip norm $G = 1$, and $\sigma_{dp}^2 = \frac{\gamma_\infty(B)^2}{2\rho}$. Then, the asymptotic suboptimality of a $\rho$-zCDP sequence $(\theta_t)_{t=0}^\infty$ obtained via DP-SGD is $F_\infty(\beta_{dpsgd}) = \Theta(\eta \rho^{-1} + \eta \sigma_{sgd}^2)$. Further, the asymptotic suboptimality of any $\rho$-zCDP sequence $(\theta_t)_{t=0}^\infty$ from DP-FTRL is $$\inf_\beta F_\infty(\beta) = F_\infty(\beta^*) = \Theta\left(\eta^2 \rho^{-1} \log^2(1/\eta) + \eta \sigma_{sgd}^2\right).$$ The infimum above is attained by $\beta^*_t = (-1)^t \binom{1/2}{t}(1 - \eta)^t$, where $\binom{1/2}{t} = \prod_{k=0}^{t-1} \frac{1/2-k}{t-k}$.
**Proof Sketch.** Using tools from frequency-domain analysis of linear time-invariant systems (Oppenheim et al., 1997), we show that the asymptotic variance is an integral of $|B(\omega)|^2$. The sensitivity (5) is an integral of $|B(\omega)|^{-2}$ so that $F_\infty$ is a product of these integrals. Its minimizer $B^*$ can be analytically computed in the Fourier domain (Fig. 1, right), which yields the expression for $\beta^*$ (Fig. 1, center). See §B for details.

The optimal $\rho^{-1}$ coefficient $\eta^2 \log^2(1/\eta)$ is better than DP-SGD’s $\eta$. Note that $\beta^*_t < 0$ for $t \geq 1$: the noise is anti-correlated and it helps by subtracting out the previously added noise. We also recover the correlations of (Fichtenberger et al., 2023) as $\eta \to 0$; these were shown to be near-optimal for linear counting queries.

**$\nu$-DP-FTRL/$\nu$-Noisy-FTRL:** Theorem 2.1 gives an analytical expression for the optimal noise correlation weights for DP-FTRL for this simplified setting. We parameterize it with a parameter $0 < \nu < 1$ to define $$\hat{\beta}_t^\nu := (-1)^t \binom{1/2}{t}(1 - \nu)^t. \quad (7)$$ We analyze this choice theoretically for the setting of Noisy-FTRL and demonstrate near-optimality for appropriate $\nu$. Later, for our experiments with DP-FTRL, we tune $\nu$ as a hyperparameter. We call this approach (with clipping) $\nu$-DP-FTRL and (without clipping) $\nu$-Noisy-FTRL.

### 2.2 Asymptotic Suboptimality for Linear Regression

We now give a precise analysis of $F_\infty$ for linear regression with $\nu$-Noisy-FTRL. We will use this to derive non-asymptotic privacy-utility bounds for DP-FTRL at the end of this section. We consider (unregularized) linear regression with loss function $f(\theta; (x, y)) = \frac{1}{2}(y - \langle \theta, x \rangle)^2$ so that $$F(\theta) = \frac{1}{2} \mathbb{E}_{(x,y) \sim P_{data}} (y - \langle \theta, x \rangle)^2.$$ We assume $d$-dimensional Gaussian covariates $x \sim \mathcal{N}(0, H)$ and independent Gaussian residuals $y - \langle \theta_*, x \rangle \sim \mathcal{N}(0, \sigma_{sgd}^2)$ where $\theta_* = \arg\min F$. We make these assumptions for ease of presentation; we state and prove our results under weaker assumptions in the supplement.

Figure 2: Linear regression simulations: We plot the empirically observed asymptotic suboptimality of $\nu$-Noisy-FTRL/Noisy-SGD and their theoretical bounds with $d = 128$ (varied in the left plot) where the Hessian $H$ has eigenvalues $\lambda_k = 1/k$ (varied as $k^{-\alpha}$ for $\alpha \in [0.4, 1]$ in the middle plot), and learning rate $\eta = 0.02$ (varied in the right plot). The slopes of the corresponding empirical and theoretical lines are nearly equal, showing the tightness of the theory. In particular, we observe that Noisy-SGD has a linear dependence on the dimension (slope 1.00) and is nearly constant w.r.t. the effective dimension (slope 0.18) while Noisy-FTRL has a near-linear dependence on the effective dimension (slope 0.94). Noisy-FTRL (slope 2.03) also has a better dependence on the learning rate than Noisy-SGD (slope 1.27).

Further, we assume that $F$ is $L$-smooth and $\mu$-strongly convex (equivalently, $\mu I \preceq H \preceq LI$ since the input covariance $H$ is also the Hessian of $F$). We express the bounds on $F_\infty$ in terms of the correlation weights $\beta$ and the problem parameters $\rho, G$ which, for DP-FTRL, denote the target privacy level and the gradient clip norm respectively.
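As a concrete illustration of the correlations in Eq. (7) that enter the bounds below, the weights $(-1)^t \binom{1/2}{t} (1-\nu)^t$ can be generated by a simple recursion over $t$ (a minimal sketch, not the authors' code):

```python
import numpy as np

def nu_dpftrl_weights(nu, T):
    """beta_t = (-1)^t * binom(1/2, t) * (1 - nu)^t for t = 0, ..., T-1, using
    (-1)^t binom(1/2, t) = (-1)^{t-1} binom(1/2, t-1) * (t - 3/2) / t."""
    beta = np.empty(T)
    beta[0] = 1.0
    for t in range(1, T):
        beta[t] = beta[t - 1] * ((t - 1.5) / t) * (1.0 - nu)
    return beta

# The lower-triangular Toeplitz correlation matrix is then B[t, tau] = beta[t - tau] for tau <= t.
```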
See §C for proofs. **Theorem 2.2.** Let \( c, C_1, C_2 \) denote universal constants. For \( \eta \leq c/\text{Tr}[H] \), we have \[ \begin{align*} (\text{Noisy-SGD}) & \quad F_\infty(\beta_{\text{sgd}}) = \Theta\left( \eta d G^2 \rho^{-1} + \eta \sigma_{\text{sgd}}^2 \text{Tr}[H] \right) \quad \text{with } \beta_{\text{sgd}} = (1, 0, \ldots), \\ (\nu\text{-Noisy-FTRL}) & \quad F_\infty(\beta^\nu) \leq C_1 \left( \eta^2 G^2 \rho^{-1} \log^2 \frac{1}{\nu} + \eta \sigma_{\text{sgd}}^2 \right) \text{Tr}[H] \quad \text{with } \nu \leq \eta \mu, \text{ and} \\ (\text{Lower bound}) & \quad F_\infty(\beta) \geq C_2 \left( \eta^2 G^2 \rho^{-1} + \eta \sigma_{\text{sgd}}^2 \right) \text{Tr}[H] \quad \text{for all } \beta \text{ with } \| \beta \|_1 < \infty. \end{align*} \] This shows the near-optimality of \( \nu\text{-Noisy-FTRL} \) and a provable gap between Noisy-FTRL and Noisy-SGD. Observe that our bounds separate the contributions arising from correlated noise (\( \rho^{-1} \) term) and those from the inherent noise in the linear model (\( \sigma_{\text{sgd}}^2 \) term). We focus on the effect of correlation because the effect of the latter noise is the same across all choices of \( \beta \). We plot the differences in Figure 2. **Exponential separation between Noisy-SGD and Noisy-FTRL:** Noisy-SGD’s stationary error depends on the ambient dimension \( d \), while the lower bound depends on the effective dimension \( d_{\text{eff}} = \text{Tr}[H]/\|H\|_2 \) of the covariance \( H \). We have, \( d_{\text{eff}} \leq d \) with equality when all the eigenvalues of \( H \) are equal but \( d_{\text{eff}} \ll d \) when the eigenvalues of \( H \) decay rapidly or it is nearly low rank. This is true particularly for overparameterized models where the features may be highly correlated resulting in an approximately low-rank covariance. The effective dimension is closely related to the stable rank (Rudelson & Vershynin, 2007); cf. §C.6. For instance, if the eigenvalues of \( H \) are \( (1, 1/d, \ldots, 1/d) \), then \( d_{\text{eff}} \leq 2 \). Then, Noisy-FTRL’s error of \( O(\eta^2 \rho^{-1} \log^2(d/\eta)) \) is exponentially better than Noisy-SGD’s \( \Theta(\eta \rho^{-1} d) \). The learning rate dependence of Noisy-SGD is also suboptimal, similar to §2.1. This result is also confirmed empirically in Figure 2 (right). Assuming \( \lambda_{\max}(H) = 1 \), the \( d_{\text{eff}} \)-dependence comes from the contribution of eigen-direction \( j \) of \( H \) to the asymptotic suboptimality improving from \( \Theta(1) \) for Noisy-SGD to scale with the corresponding eigenvalue \( \lambda_j \) of \( H \) for \( \nu\text{-Noisy-FTRL} \). Thus, the anti-correlated noise particularly helps in the tail eigen-directions of \( H \). We discuss this further in Remark C.16 of §C. Table 2: Comparison to prior work: We apply our theory to compute $F_\infty$ for linear regression given choices of $B$ used in prior work. Though certain choices of the noise correlation $\beta$ may be optimal for finite linear counting queries (Fichtenberger et al., 2023), our results show that they have $F_\infty = \infty$ because the sensitivity diverges as $T \to \infty$. $\nu$-Noisy-FTRL effectively introduces an additional damping term $(1 - \nu)^t$ in the correlations of (Fichtenberger et al., 2023) to achieve near-optimality for linear regression. Damping similarly helps for anti-PGD (Orvieto et al., 2022), where the resulting error is the geometric mean of the lower bound and the bound of Noisy-SGD from Theorem 2.2. 
| Algorithm | Noise Correlation | Sensitivity in $T$ steps | Asymptotic Suboptimality | |----------------------------|-------------------|--------------------------|--------------------------| | (Fichtenberger et al., 2023) | Eq. (7) with $\nu = 0$ | $\log T$ | $\eta^2 G^2 \rho^{-1} \text{Tr}[H] \log^2(1/\nu)$ | | $\nu$-Noisy-FTRL (Ours) | Eq. (7) with $0 < \nu \leq \eta \mu$ | $\log(1/\nu)$ | $\eta^2 G^2 \rho^{-1} \text{Tr}[H] \log^2(1/\nu)$ | | Anti-PGD (Orvieto et al., 2022) | $(1, -1, 0, \ldots)$ | $T$ | $\infty$ | | Anti-PGD + Damping | $(1, -(1 - \nu), 0, \ldots)$ | $1/\nu$ | $\eta^{3/2} G^2 \rho^{-1} \sqrt{d \text{Tr}[H]}$ | 2.3 Finite-time Privacy-Utility Bounds for Linear Regression Noisy-FTRL, which we analyzed so far, is not differentially private. Differential privacy requires gradient clipping which significantly complicates the analysis. However, for a finite time horizon $T$, we can argue using concentration that $\nabla f(\theta; z)$ is bounded with high probability, and clipping can be avoided. Formal statements and proofs for the finite-time analysis are given in §D. Consider DP-FTRL with noise correlation $\beta^\nu$ from (7) with $\nu = \eta \mu$ and gradients clipped to any $\ell_2$-norm $G$. As mentioned in §1.1, the outputs $(\theta_1, \ldots, \theta_T)$ of DP-FTRL are $\rho$-zCDP. For an appropriate choice of $\eta$, we give utility bounds in terms of the effective dimension $d_{\text{eff}}$ and the condition number $\kappa = L/\mu$: (a) For $\eta$ small enough, we have with probability at least $1 - p$ that $$\max_{t \leq T} \|g_t\|_2 \leq c \max \left\{ \text{Tr}[H] \| \theta_0 - \theta_* \|_2, \sigma_{\text{sgd}} \sqrt{\text{Tr}[H]} \right\} \text{polylog}(T/p) =: G.$$ Let $\mathcal{E}$ denote this event. If $\mathcal{E}$ holds, no gradients are clipped and DP-FTRL coincides with Noisy-FTRL. (b) For $T \geq \Omega(\kappa^2 d_{\text{eff}}^2 / \rho)$, we have (omitting log factors and $o(1/T^2)$ terms and taking $\|H\|_2 = 1$): $$\mathbb{E}[(F(\theta_t) - F(\theta_*)) \cdot 1(\mathcal{E})] \lesssim \begin{cases} \kappa d_{\text{eff}} \left( \frac{d_{\text{eff}} \| \theta_0 - \theta_* \|_2^2}{\rho T} + \frac{d \sigma_{\text{sgd}}^2}{\rho T} + \frac{\sigma_{\text{sgd}}^2}{T} \right) & \text{for DP-SGD}, \\ \kappa d_{\text{eff}} \left( \frac{\kappa d_{\text{eff}} \| \theta_0 - \theta_* \|_2^2}{\rho T^2} + \frac{\kappa d_{\text{eff}} \sigma_{\text{sgd}}^2}{\rho T^2} + \frac{\sigma_{\text{sgd}}^2}{T} \right) & \text{for } \nu\text{-DP-FTRL}. \end{cases}$$ Thus, the dimension $d$ in DP-SGD’s bound effectively becomes $\kappa d_{\text{eff}} / T$ for DP-FTRL, leading to a better dimension dependence. While faster $1/(\rho T^2)$ rates are known for DP-SGD-style algorithms for linear regression (Varshney et al., 2022; Liu et al., 2023), such algorithms require sophisticated adaptive clipping strategies. Our algorithms use a fixed clipping norm $G$ and a fixed noise multiplier $\sigma_{\text{dp}}$ independent of $T$; the bounds presented above are, to the best of our knowledge, the best known in the literature for DP-SGD in this setting. We leave the exploration of combining adaptive clipping with correlated noise for future work. 3 Asymptotic Suboptimality for General Strongly Convex Functions We now generalize §2.2 to general strongly convex problems. Here, we bound the asymptotic suboptimality of DP-FTRL and DP-SGD by the value of a convex program. 
**Theorem 3.1.** Suppose $f(\cdot; z)$ is $G$-Lipschitz, and the stochastic gradients are uniformly bounded as $\|\nabla_\theta f(\theta; z) - \mathbb{E}_{z' \sim P_{\text{data}}} [\nabla_\theta f(\theta; z')]\|_2 \leq \sigma_{\text{sgd}}$. Then, if $F$ is $\mu$-strongly convex and $L$-smooth, the asymptotic suboptimality $F_\infty$ is bounded for any noise correlation $B(\omega)$ in the frequency domain by: $$\inf \left\{ \frac{Ld}{2\pi} \int_{-\pi}^{\pi} \left( G^2 \rho^{-1} |B(\omega)|^2 \gamma_\infty(B)^2 + \sigma_{\text{sgd}}^2 \right) \psi(\omega) \, d\omega \mid \psi : [-\pi, \pi] \to \mathbb{R}_+, \psi \in C(\eta, L, \mu) \right\}, \quad (10)$$ where $\gamma_\infty(B)$ is the limiting sensitivity from Eq. (5), and $C(\eta, L, \mu)$ is a convex set (details and proof in §E).

While technically an infinite-dimensional optimization problem over the function $\psi$, we can approximate the solution by discretizing $\psi$ into $k$ points uniformly over $[-\pi, \pi]$. Further, if we discretize $B$ similarly, we can obtain a second-order cone program with $k$ conic constraints and $O(k)$ decision variables. As $k \to \infty$, the solution approaches the solution to (10). Empirically, we observe that the values stabilize quickly as $k$ increases. We stop the computation when the change in bound as a function of $k$ drops below a threshold — this gives $k = 1000$. Further, given the optimal $\psi = \psi^*$, we can run an alternating minimization where we minimize the objective of (10) with respect to $\psi$ for fixed $B$ and with respect to $B$ for fixed $\psi$. This leads to an iteratively improving choice of $B$. We find empirically that this iterative procedure converges quickly and leads to a provable theoretical gap between the upper bounds on $F_\infty$ achievable by DP-SGD and DP-FTRL.

We numerically compare the bound (10) for DP-SGD and $\nu$-DP-FTRL. Figure 3 shows that the gap between DP-SGD and $\nu$-DP-FTRL is multiplicative: the absolute gap grows with the increasing condition number $\kappa = L/\mu$. The suboptimality of “Optimized” DP-FTRL (optimized as described above) grows even more slowly with $\kappa$. Overall, $\nu$-DP-FTRL significantly improves upon DP-SGD and has only a single tunable parameter $\nu$ and no expensive computation to generate the noise correlations. We focus on $\nu$-DP-FTRL for experiments in this paper but leave the possibility of improving results further based on Optimized DP-FTRL for future work.

4 EXPERIMENTS

We demonstrate the practical benefits of $\nu$-DP-FTRL for deep learning tasks. This approach has a single tunable parameter $\nu$ that can easily be tuned based on minimizing the squared error (3) as in prior work.

Comparing Computation (Table 3): While optimized matrices (e.g. “ME” in Table 3) have the state-of-the-art privacy-utility tradeoffs in private learning (without amplification), their computational cost scales as $O(T^3)$. For example, generating the correlation matrix $B$ for $T = 10^4$ takes around 24 hours (Choquette-Choo et al., 2023b). Moreover, it has a $O(T^2)$ cost per step. We find in this section that $\nu$-DP-FTRL achieves near state-of-the-art privacy-utility tradeoffs at a much smaller computational cost of $O(T)$ per iteration.

| DP-FTRL Variant | Citation | Corr. matrix $B$ | Anytime? | Cost: generate $B$ | Cost: noise per step |
|-----------------|----------|------------------|----------|--------------------|----------------------|
| DP-SGD | (Abadi et al., 2016) | Identity | ✓ | $O(1)$ | $O(1)$ |
| Honaker/TreeAgg | (Kairouz et al., 2021a) | Lower-Triangular (LT) | ✓ | $O(1)$ | $O(\log T)$ |
| Optimal CC | (Fichtenberger et al., 2023) | Toeplitz & LT | ✓ | $O(1)$ | $O(T)$ |
| $\nu$-DP-FTRL | Ours | Toeplitz & LT | ✓ | $O(1)$ | $O(T)$ |
| FFT | (Choquette-Choo et al., 2023b) | Toeplitz | - | $O(1)$ | $O(T \log^3 T)$ |
| Full Honaker | (Honaker, 2015) | Arbitrary | - | $O(T^2)$ | $O(T^2)$ |
| Multi-Epoch (ME) | (Choquette-Choo et al., 2023b) | Arbitrary | - | $O(T^3)$ | $O(T^2)$ |

Table 3: Variants of DP-FTRL: the structure of the noise correlation matrix $B$, whether $B$ can be created/optimized agnostic to the time horizon $T$ (denoted as “Anytime”), and the computation cost (generating $B$ / noise per step).

Figure 4: The proposed $\nu$-DP-FTRL outperforms all other efficient and anytime mechanisms. It is also nearly equal to, or slightly outperforms, the state-of-the-art “ME” mechanism that requires significantly more compute (cf. Table 3). *The non-private baseline for StackOverflow uses per-user clipping as this improves performance by $\approx 0.5$ pp.

We compare with other anytime approaches for which the matrices $B$ can be extended to any time horizon $T$. The practitioner then need not specify $T$ in advance, but rather, can train for as long as necessary to achieve minimal model loss — it is common to, e.g., let algorithms run until certain conditions, like a maximum difference on the train-test loss, are met (Morgan & Bourlard, 1989). Moreover, general matrices $B$ become prohibitive in terms of compute/memory as models scale up (Kaplan et al., 2020; Anil et al., 2023).

**Experiment Setup:** We use two standard benchmarks: example-level DP for image classification on the CIFAR-10 dataset and user-level DP for language modeling on the StackOverflow dataset. We use the same setup as (Kairouz et al., 2021a). We also stamp/restart all baselines as suggested in (Choquette-Choo et al., 2023b). This gives the baselines the advantage of an additional tuning parameter (tuned to minimize the squared error (3)), but does not affect their per-step training cost. We denote this by the suffix “$\times S$” for $S > 1$ in the plot. We tune all CIFAR-10 hyperparameters with a grid search, while we use hyperparameters reported from previous works for StackOverflow. Appendix G gives the full setup.

**Main Results:** Across both datasets, $\nu$-DP-FTRL outperforms all existing anytime mechanisms by a significant margin (Figure 4a). We find an average 3pp improvement that grows as $\varepsilon$ becomes small. Indeed, the proposed $\nu$-DP-FTRL makes up 30-80% of the gap between previous efficient approaches and the state-of-the-art, computationally intense ME approach. For instance, at $\varepsilon = 10$, $\nu$-DP-FTRL at 69.26% nearly matches ME at 70.83%. In particular, $\nu$-DP-FTRL outperforms Optimal CC (Fichtenberger et al., 2023), which is equivalent to $\nu$-DP-FTRL with $\nu = 0$; this shows the practical importance of the exponential decay parameter $\nu$ in Eq. (7).
For StackOverflow, we find that $\nu$-DP-FTRL outperforms the state-of-the-art ME across all $\varepsilon$ (Figure 4b) by $\approx 0.3\%$-points while requiring significantly less computation. As $\varepsilon$ becomes small, DP-SGD can outperform DP-FTRL due to privacy amplification. We find that $\nu$-DP-FTRL outperforms DP-SGD for $\varepsilon \geq 4$ on CIFAR-10 (63.02% vs. 62.02%) and around $\varepsilon \approx 2$ for StackOverflow (23.6% versus 22.6%), showing its broad applicability. Finally, we observe that our mechanism achieves near non-private baselines on StackOverflow. A model trained via $\nu$-DP-FTRL gets 25.3% validation accuracy at $\varepsilon = 8$, a mere 1%-point off from the non-private baseline. --- *Note that in practice we take $T$ to be the number of steps of minibatch gradient descent, effectively doing several epochs over the data which differs from the theoretical setting considered in previous sections.* ACKNOWLEDGEMENTS The authors thank H. Brendan McMahan, Fabian Pedregosa, Ian R. Manchester, Keith Rush, and Rahul Kidambi for fruitful discussions and helpful comments. REFERENCES NIST Digital Library of Mathematical Functions. https://dlmf.nist.gov/, Release 1.1.10 of 2023-06-15. URL https://dlmf.nist.gov/. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds. Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proc. of the 2016 ACM SIGSAC Conf. on Computer and Communications Security (CCS’16), pp. 308–318, 2016. Mohammed Adnan, Shivam Kalra, Jesse C Cresswell, Graham W Taylor, and Hamid R Tizhoosh. Federated learning and differential privacy for medical image analysis. Scientific reports, 12(1):1953, 2022. Rafik Aguech, Eric Moulines, and Pierre Priouret. On a Perturbation Approach for the Analysis of Stochastic Tracking Algorithms. SIAM J. Control. Optim., 39(3):872–899, 2000. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. Julyan Arbel, Olivier Marchal, and Hien D Nguyen. On strict sub-Gaussianity, optimal proxy variance and symmetry for bounded random variables. ESAIM: Probability and Statistics, 24:39–55, 2020. Francis R. Bach and Eric Moulines. Non-Strongly-Convex Smooth Stochastic Approximation with Convergence Rate $O(1/n)$. In NeurIPS, pp. 773–781, 2013. Borja Balle, Gilles Barthe, Marco Gaboardi, Justin Hsu, and Tetsuya Sato. Hypothesis Testing Interpretations and Rényi Differential Privacy. In AISTATS, pp. 2496–2506, 2020. Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proc. of the 2014 IEEE 55th Annual Symp. on Foundations of Computer Science (FOCS), pp. 464–473, 2014. Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography Conference, pp. 635–658. Springer, 2016. Paul F Byrd and Morris D Friedman. Handbook of Elliptic Integrals for Engineers and Scientists, volume 67. Springer, 2013. Andrea Caponnetto and Ernesto De Vito. Optimal Rates for the Regularized Least-Squares Algorithm. Foundations of Computational Mathematics, 7:331–368, 2007. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 
The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pp. 267–284, 2019. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pp. 2633–2650, 2021.
KrtGfTGaGe
Perhaps related to this additional assumption, the future work section mentioned adaptations under relaxed assumptions, with a citation to (Lambrechts et al. 2022). It would help to elaborate on some of those potential relaxations.
THE WASSERSTEIN BELIEVER LEARNING BELIEF UPDATES FOR PARTIALLY OBSERVABLE ENVIRONMENTS THROUGH RELIABLE LATENT SPACE MODELS Raphael Avalos\textsuperscript{1}*\hspace{1em} Florent Delgrange\textsuperscript{1,2}*\hspace{1em} Ann Nowé\textsuperscript{1}†\hspace{1em} Guillermo A. Pérez\textsuperscript{2,3}†\hspace{1em} Diederik M. Roijers\textsuperscript{1,4}† \textsuperscript{1} AI Lab, Vrije Universiteit Brussel (Belgium)\hspace{1em} \textsuperscript{2} University of Antwerp (Belgium) \textsuperscript{3} Flanders Make (Belgium)\hspace{1em} \textsuperscript{4} City of Amsterdam (The Netherlands) \{raphael.avalos, florent.delgrange\}@vub.be ABSTRACT Partially Observable Markov Decision Processes (POMDPs) are used to model environments where the state cannot be perceived, necessitating reasoning based on past observations and actions. However, remembering the full history is generally intractable due to the exponential growth in the history space. Maintaining a probability distribution that models the belief over the current state can be used as a sufficient statistic of the history, but its computation requires access to the model of the environment and is often intractable. While SOTA algorithms use Recurrent Neural Networks to compress the observation-action history aiming to learn a sufficient statistic, they lack guarantees of success and can lead to sub-optimal policies. To overcome this, we propose the Wasserstein Belief Updater, an RL algorithm that learns a latent model of the POMDP and an approximation of the belief update under the assumption that the state is observable during training. Our approach comes with theoretical guarantees on the quality of our approximation ensuring that our latent beliefs allow for learning the optimal value function. 1 INTRODUCTION Partially Observable Markov Decision Processes (POMDPs) define a powerful framework for modeling decision-making in uncertain environments where the state is not fully observable. These problems are common in many real-world applications, such as robotics (Lauri et al., 2023), and recommendation systems (Wu et al., 2021). In contrast to Markov Decision Processes (MDPs), in a POMDP the agent perceives an imperfect observation of the state that does not suffice as conditioning signal for an optimal policy. As such, optimal policies must take the entire interaction history into account. As the space of possible histories scales exponentially in the length of the episode, using histories to condition policies is generally intractable. An alternative is the notion of belief, which is defined as a probability distribution over states based on the agent history. Beliefs are a sufficient statistic of the history (Kaelbling et al., 1998) but the computation of their closed-form expression require the access to a model of the environment and is in general intractable. To overcome those challenges, SOTA algorithms compress the history into a fixed-size vector with the help of Recurrent Neural Networks (RNNs) (Hausknecht & Stone, 2015). Nonetheless, this may lead to information loss, resulting in suboptimal policies. To improve the likelihood of obtaining a sufficient statistic, RNNs can be combined with regularization techniques, including generative models (Chen et al., 2022) [Hafner et al., 2019; 2021], particle filtering ([Igl et al., 2018] [Ma et al., 2020]), and predicting distant observations ([Gregor et al., 2018] [2019]). 
However, none of these techniques guarantee that the representation of histories induced by RNNs is suitable for optimizing the return. Additionally, many algorithms assume that beliefs are simple distributions (e.g., Gaussian), which limits their applicability ([Gregor et al., 2018] [Lee et al., 2020] [Hafner et al., 2021]). In this paper, we propose Wasserstein Belief Updater (WBU), a model-based reinforcement learning (RL) algorithm for POMDPs that allows learning the belief space over the unobservable states. Specifically, WBU learns an approximation of the belief update rule through a latent space model. * Both authors contributed equally to this research, alphabetic order.\hspace{1em} † Equal supervision. whose behaviors (expressed as expected returns) are close to those of the original environment. Furthermore, we show that WBU is guaranteed to induce a suitable representation of the history to optimize the return. WBU consists of three components that are learned in a round-robin fashion: the model, the belief learner, and the policy (Fig. 1). Harnessing only histories to learn a model whose dynamics can be provably linked to the original unobservable environment poses a considerable challenge. Therefore, in the same spirit as the Centralized Training with Decentralized Execution paradigm in multi-agent RL (MARL) (Oliehoek et al., 2008; Avalos et al., 2022), where leveraging additional information such as the true state of the environment is a common practice, we assume that the POMDP states can be accessed during training. While this might seem restrictive at first sight, this assumption is typically met in simulation-based training and can be applied in real-world settings such as robotics, where extra sensors can be used during training in a laboratory setting. Our core contribution is the development of a sound framework equipped with theoretical guarantees in the context of RL within partial observability. While SOTA algorithms primarily concentrate on enhancing the overall return — potentially resulting in substantial performance gains — we contend that performance is not the exclusive goal and that possessing guarantees is equally important, as the balance between these two aspects varies based on the specific application. By tackling POMDPs with a formal approach, we offer theoretical guarantees that other methods cannot provide: we ensure that our latent model is able to replicate the dynamics of the original, partially observable environment, which further yields a belief representation suitable for learning the value function. We learn the latent model of the POMDP via a Wasserstein auto-encoded MDP (WAE-MDP; Delgrange et al., 2022), which embeds bisimulation metrics — intuitively leading to our guarantees. In parallel, we maintain a belief distribution over the latent state space via a belief update network: we minimize its Wasserstein distance to the exact belief update rule, through a tractable variational proxy. To allow for complex belief distributions, we use normalizing flows (Kobyzev et al., 2021). In contrast to SOTA algorithms, the beliefs are only optimized towards accurately replicating its update rule. While we call recursively the belief network to maintain the belief distribution, we do not back-propagate through time and thus implement it as a simple feed forward network. The policy is learned on the latent belief space using a vector integrating the parameters of the belief distribution. 
Our experimental results are promising and show the ability of our algorithm to learn to encode the history into a representation useful to learn a policy, without using RNNs. Other related work. DVRL (Igl et al., 2018) extends A2C (Mnih et al., 2016) combined with RNNs (R-A2C) with auxiliary losses aiming to learn beliefs via a variational autoencoder and particle filtering, but it lacks guarantees. DVRL further assumes independent normal distributions for beliefs, limiting its applicability. FORBES (Chen et al., 2022) use normalizing flows but learn policies conditioned on latent states, which is suboptimal as the state distribution is approximated with a single sample. Some works focus on specific POMDP types, like compact image representations (e.g., visual motor tasks, e.g., Lee et al., 2020) or states masked with Gaussian noise (Wang & Tan, 2021). While accessing states is common in MARL, it is less common in single-agent but has been explored in kernel-POMDPs (Nishiyama et al., 2012), and more recently for sample efficient learning (Lee et al., 2023). Leveraging additional information available during training (not necessarily the states) has also been explored by Lambrechts et al. (2023), but RNNs remain crucial while no abstraction nor representation quality guarantee are provided. Finally, other works (Gelada et al., 2019; Delgrange et al., 2022) study similar value difference bounds to ours and connect them to bisimulation theory (Larsen & Skou, 1989; Givan et al., 2003), but in fully observable environments. 2 BACKGROUND 2.1 Probability Distributions and Discrepancy Measures We write $\Sigma(\mathcal{X})$ for the set of all measurable subsets of a complete, separable space $\mathcal{X}$, $\Delta(\mathcal{X})$ for the set of measures on $\mathcal{X}$, and $\delta_a \in \Delta(\mathcal{X})$ for the Dirac measure with impulse $a \in \mathcal{X}$. Let $P, Q \in \Delta(\mathcal{X})$ be measures with densities $p, q$, the divergence between $P$ and $Q$ can be measured via: - the solution of the optimal transport problem (OT), defined as $W_c(P,Q) = \inf_\lambda \mathbb{E}_{x,y \sim \lambda} c(x,y)$, which is the minimum cost of changing $P$ into $Q$ (Villani, 2009), where $c : \mathcal{X} \times \mathcal{X} \rightarrow [0,\infty)$ is a cost function and the infimum is taken over the set of all couplings of $P$ and $Q$. When $c$ is equal to a distance metric $d$ over $\mathcal{X}$, $W_d$ is the Wasserstein distance between the two distributions. Figure 1: WBU framework. The WAE-MDP is presented in Sect. 3, and WBU in Sect. 4. Learning the different components is done in a round-robin fashion. The WAE-MDP learns from data collected by the RL agent and stored in a Replay Buffer. WBU uses the transition function $\bar{P}$ and observation decoder $\bar{O}$ of the WAE-MDP to learn to approximate the belief update rule. The agent learns a policy conditioned on the resulting sub-belief $\beta_t$ (i.e., the parameters of the latent belief $\hat{b}_t$). - the total variation distance (TV), defined as $d_{TV}(P,Q) = \sup_{A \in \Sigma(X)} |P(A) - Q(A)|$. If $X$ is equipped with the discrete metric $1_\neq$, TV coincides with the Wasserstein metric. - the Kullback-Leibler (KL) divergence, defined as $D_{KL}(P,Q) = \mathbb{E}_{x \sim P} [\log(p(x)/q(x))]$. 
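For intuition, these discrepancies are straightforward to compute in the finite or empirical case. A small sketch (assuming discrete distributions on a shared support for TV and KL, and two equally sized real-valued samples for the 1-Wasserstein distance):

```python
import numpy as np

def total_variation(p, q):
    """d_TV for discrete distributions: equals half the l1 distance."""
    return 0.5 * float(np.abs(p - q).sum())

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions; assumes q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def wasserstein1_empirical(xs, ys):
    """Empirical W_1 on the real line with cost |x - y|: sorting both samples gives the optimal coupling."""
    return float(np.abs(np.sort(xs) - np.sort(ys)).mean())
```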
2.2 Decision Making under Uncertainty

Markov Decision Processes (MDPs) are tuples $M = \langle S, A, P, R, s_I, \gamma \rangle$ where $S$ is a set of states; $A$, a set of actions; $P : S \times A \rightarrow \Delta(S)$, a probability transition function that maps the current state and action to a distribution over the next states; $R : S \times A \rightarrow \mathbb{R}$, a reward function; $s_I \in S$, the initial state; and $\gamma \in [0, 1)$, a discount factor. We refer to MDPs with continuous state or action spaces as continuous MDPs. In that case, we assume $S$ and $A$ are complete separable metric spaces equipped with a Borel $\sigma$-algebra. An agent interacting in $M$ produces trajectories, i.e., sequences of states and actions $\langle s_{0:T}, a_{0:T-1} \rangle$ where $s_0 = s_I$ and $s_{t+1} \sim P(\cdot \mid s_t, a_t)$ for $t < T$.

Policies and probability measure. A stationary policy $\pi : S \rightarrow \Delta(A)$ prescribes which action to choose at each step of the interaction. A policy $\pi$ and $M$ induce a unique probability measure $\mathbb{P}^\pi_M$ on the Borel $\sigma$-algebra over (measurable) infinite trajectories (Puterman, 1994). The typical goal of an RL agent is to learn a policy that maximizes the expected return, given by $\mathbb{E}^\pi_M \left[ \sum_{t=0}^{\infty} \gamma^t \cdot R(s_t, a_t) \right]$, by interacting with $M$. We omit the superscript when the context is clear.

Partially Observable MDPs (POMDPs) are tuples $\mathcal{P} = \langle M, \Omega, O \rangle$ where $M$ is an MDP with state space $S$ and action space $A$; $\Omega$ is a set of observations; and $O : S \times A \rightarrow \Delta(\Omega)$ is an observation function that defines the distribution of observations that may occur when the MDP $M$ transitions to a state upon the execution of a particular action. An agent in $\mathcal{P}$ actually interacts in $M$, but without directly observing the states of $M$: instead, the agent perceives observations, which yields histories, i.e., sequences of actions and observations $\langle a_{0:T-1}, o_{1:T} \rangle$ that can be associated to an (unobservable) trajectory $\langle s_{0:T}, a_{0:T-1} \rangle$ in $M$, where $o_{t+1} \sim O(\cdot \mid s_{t+1}, a_t)$ for all $t < T$.

Beliefs. Unlike in MDPs, where it is sufficient to take decisions based on the current state (Puterman, 1994), policies solely based on the current observation of $\mathcal{P}$ are not sufficient to optimize the return. Intuitively, due to the partial observability, the agent must take into account full histories in order to make an informed decision on its next action. Alternatively, the agent can maintain a belief \( b_t \in \Delta(S) = B \) over the current state of \( M \) (Åström, 1965). Given the next observation \( o_{t+1} \), the next belief \( b_{t+1} \) is computed according to the belief update function \( \tau : B \times A \times \Omega \rightarrow B \), where \( \tau(b_t, a_t, o_{t+1}) = b_{t+1} \) iff the belief over any next state \( s_{t+1} \in S \) has density \[ b_{t+1}(s_{t+1}) = \frac{\mathbb{E}_{s_t \sim b_t} P(s_{t+1} \mid s_t, a_t) \cdot O(o_{t+1} \mid s_{t+1}, a_t)}{\mathbb{E}_{s_t \sim b_t} \mathbb{E}_{s' \sim P(\cdot \mid s_t, a_t)} O(o_{t+1} \mid s', a_t)}. \] (1) Each belief \( b_{t+1} \) constructed this way is a sufficient statistic for the history \( \langle a_{0:t}, o_{1:t+1} \rangle \) to optimize the return (Kaelbling et al., 1998).
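For a finite POMDP, the belief update of Eq. (1) and its recursive application along a history (formalized as $\tau^*$ right below) are direct to implement. A minimal tabular sketch; the array layouts `P[s, a, s']` and `O[s', a, o]` are assumptions of this illustration, not the authors' code:

```python
import numpy as np

def belief_update(b, a, o, P, O):
    """Eq. (1): b'(s') is proportional to E_{s ~ b} P(s'|s, a) * O(o|s', a)."""
    unnorm = (b @ P[:, a, :]) * O[:, a, o]
    return unnorm / unnorm.sum()

def filter_history(actions, observations, s_init, P, O):
    """Apply Eq. (1) recursively along <a_0:t, o_1:t+1>, starting from the Dirac belief on the initial state."""
    b = np.zeros(P.shape[0])
    b[s_init] = 1.0
    for a, o in zip(actions, observations):
        b = belief_update(b, a, o, P, O)
    return b
```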
We write \( \tau^*(a_0:t, o_1:t+1) = \tau(\cdot, a_t, o_{t+1}) \circ \cdots \circ \tau(\delta_s, a_0, o_1) = b_{t+1} \) for the recursive application of \( \tau \) along the history. The belief update rule derived from \( \tau \) allows to formulate \( P \) as a continuous belief MDP \( M_B = \langle B, A, P_B, R_B, b_I, \gamma \rangle \), where \( P_B(b' | b, a) = \mathbb{E}_{s \sim b} \mathbb{E}_{s' \sim P(\cdot | s, a)} \delta_{P(b, a, s')} (b') \); \( R_B(b, a) = \mathbb{E}_{s \sim b} R(s, a) \); and \( b_I = \delta_{s_I} \). As for all MDPs, \( M_B \) and any stationary policy for \( M_B \) (thus conditioned on beliefs) induce a well-defined probability space over trajectories of \( M_B \), which allows optimizing the expected return in \( P \). 2.3 Latent Space Modeling Latent MDPs. Given the original (continuous or very large, possibly unknown) environment \( M \), a latent space model is another (tractable, explicit) MDP \( \tilde{M} = \langle \tilde{S}, A, \tilde{P}, \tilde{R}, \tilde{s}_I, \gamma \rangle \) with state space linked to the original one via a state embedding function: \( \phi : S \rightarrow \tilde{S} \). Wasserstein Auto-encoded MDPs (WAE-MDPs, Delgrange et al. [2023]) are latent space models that are trained based on the OT from trajectories resulting from the execution of the RL agent policy in the real environment \( M \), to that reconstructed from the latent model \( \tilde{M} \). The optimization process relies on a temperature \( \lambda \in (0, 1) \) that controls the continuity of the latent space learned, the zero-temperature limit corresponding to a discrete latent state space (see Appendix E for a discussion). This procedure guarantees \( \tilde{M} \) to be probably approximately bisimilarly close (Larsen & Skou [1989], Givan et al. [2003], Delgrange et al. [2022]) to \( M \) as \( \lambda \rightarrow 0 \): in a nutshell, bisimulation metrics imply the closeness of the two models in terms of probability measures and expected return (Desharnais et al. [2004], Ferns et al. [2011]). Specifically, a WAE-MDP learns the following components: - a state embedding function \( \phi : S \rightarrow \tilde{S} \) - a latent transition function \( \tilde{P} : \tilde{S} \times A \rightarrow \Delta(\tilde{S}) \) - a latent reward function \( \tilde{R} : \tilde{S} \times A \rightarrow \mathbb{R} \) - a state decoder \( \psi : \tilde{S} \rightarrow S \). (2) 3 Learning the Dynamics The agent is assumed to operate within a POMDP. In an RL setting, the former have no explicit access to the environment dynamics: instead, it reinforces its behaviors through interactions and experiences without directly accessing the transition, reward, and observation functions of the environment. To provide the aforementioned guarantees, we henceforth adhere the following assumption. Assumption 1 (Access to the state during training). In addition to the observation, the agent is able to observe the true state of the environment, but only during the training phase. Remark 1. Seemingly restrictive, Assumption 1 can actually be met in a broad range of training scenarios, in particular those relying on simulators, where one could merely consider the RAM as the state. Otherwise, additional sensors with higher fidelity could be considered to obtain the state. Other applicable scenarios are model-based design or model-predictive control, where a model is accessible during training, and situations where accessing the state is costly (Bulychev et al. [2012]). 
Concretely, when the RL agent interacts in a POMDP \( P = \langle M, \Omega, O \rangle \) with underlying MDP \( M = \langle S, A, P, R, s_I, \gamma \rangle \), we leverage this access to allow the agent to learn the dynamics of the environment, i.e., those of \( M \), as well as those related to the observation function \( O \). To do so, we learn an internal, explicit representation of the experiences gathered, through a latent space model. We then use this model as a teacher for the agent to make it learn how to perform its belief updates. Hence, acquiring an accurate environment model is crucial to learn a reliable belief update function. In Sect. 3.2, we further demonstrate that the resulting model is guaranteed to closely replicate the original environment behaviors. The trick we use to learn such a model is to reason on an equivalent POMDP, where the underlying MDP is refined to encode all the crucial dynamics. 3.1 The Latent POMDP Encoding We enable learning the dynamics of \( \mathcal{P} \) via a WAE-MDP by considering the POMDP \( \mathcal{P}' = \langle M_\Omega, \Omega, O' \rangle \), where (i) the state space of the underlying MDP is refined to encode the observations: \( M_\Omega = \langle S_\Omega, A, P_\Omega, R_\Omega, \langle s_I, o_I \rangle, \gamma \rangle \) with \( S_\Omega = S \times \Omega \), \( P_\Omega(s', o' | s, o, a) = P(s' | s, a) \cdot O(o' | s', a) \), \( R_\Omega(\langle s, o \rangle, a) = R(s, a) \), and \( o_I \) is an observation from \( \Omega \) linked to the initial state \( s_I \); (ii) the observation function \( O' : S_\Omega \rightarrow \Omega \) is the deterministic projection of the refined state on the observation space, with \( O'(\langle s, o \rangle) = o \). The POMDPs \( \mathcal{P} \) and \( \mathcal{P}' \) are equivalent (Chatterjee et al., 2016): \( \mathcal{P}' \) captures the stochasticity of \( O \) in its transition function through the refinement of the state space, further yielding a deterministic observation function, only dependent on refined states. Henceforth, we reduce the problem of learning a latent space model of \( \mathcal{P} \) to learning a WAE-MDP from \( M_\Omega \). Precisely, we learn a latent MDP \( \mathcal{M} = \langle \bar{S}, A, \bar{P}, \bar{R}, \bar{s}_I, \gamma \rangle \) linked to \( M_\Omega \) via the embedding \( \phi : S_\Omega \rightarrow \bar{S} \). The latent MDP \( \mathcal{M} \) encodes the observation dynamics through \( \bar{P} \) and enables to learn the deterministic observation function \( O' \) through the decoder \( \psi : \bar{S} \rightarrow S_\Omega \), by decomposing the latter in two networks \( \psi^S : \bar{S} \rightarrow S \) and \( \psi^\Omega : \bar{S} \rightarrow \Omega \), further yielding \( \psi(\bar{s}) = \langle \psi^S(\bar{s}), \psi^\Omega(\bar{s}) \rangle \) for all \( \bar{s} \in \bar{S} \). This way, the WAE-MDP learns all the components of \( \mathcal{P}' \), the latter being equivalent to \( \mathcal{P} \). With this model, we construct a latent POMDP \( \mathcal{P} = \langle \mathcal{M}, \Omega, \bar{O} \rangle \), where \( \bar{O} \) is a Dirac measure with impulse \( \bar{O}_\mu \). Figure 6 in Appendix F presents a visualization of the relationship between the different models. As with any POMDP, the belief update function \( \tau \) of \( \mathcal{P} \) allows to reason on the belief space to optimize the return. The belief update procedure is illustrated in Appendix B. 
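Before turning to the latent belief update, here is a small tabular sketch of the refined transition function of \( M_\Omega \) described above (a sketch assuming finite state, action, and observation spaces; the array layouts are assumptions of this illustration):

```python
import numpy as np

def refine_transitions(P, O):
    """P_Omega(<s', o'> | <s, o>, a) = P(s'|s, a) * O(o'|s', a).

    P has shape [S, A, S]; O has shape [S, A, Obs]. The result has shape
    [S, Obs, A, S, Obs] and does not depend on the current observation o."""
    S, A, _ = P.shape
    n_obs = O.shape[-1]
    core = np.einsum("xas,sao->xaso", P, O)                       # [S, A, S', O']
    return np.broadcast_to(core[:, None], (S, n_obs, A, S, n_obs)).copy()
```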
Formally, assuming the latent belief at time step \( t \geq 0 \) is \( \bar{b}_t \in \Delta(\bar{S}) = \bar{B} \), \( a_t \) is executed, and then \( o_{t+1} \) observed, \( \bar{b}_t \) is updated according to \( \tau(\bar{b}_t, a_t, o_{t+1}) = \bar{b}_{t+1} \) iff, for any (unobservable) next state \( \bar{s}_{t+1} \in \bar{S} \), \[ \bar{b}_{t+1}(\bar{s}_{t+1}) = \frac{\mathbb{E}_{\bar{s}_t \sim \bar{b}_t} P(\bar{s}_{t+1} | \bar{s}_t, a_t) \cdot \bar{O}(o_{t+1} | \bar{s}_{t+1})}{\mathbb{E}_{\bar{s}_t \sim \bar{b}_t} \mathbb{E}_{\bar{s}' \sim P(\cdot | \bar{s}_t, a_t)} \bar{O}(o_{t+1} | \bar{s}')}. \tag{3} \] Latent policies. Given any history \( h \in (A \cdot \Omega)^* \), running a latent policy \( \bar{\pi} : \bar{B} \rightarrow \Delta(A) \) in \( \mathcal{P} \) is possible by converting \( h \) into a latent belief \( \bar{\tau}^*(h) = \bar{b} \) and executing the action prescribed by \( \bar{\pi}(\cdot | \bar{b}) \). Training \( \mathcal{M} \) grants access to the dynamics required to update the belief through its closed form (Eq. 3). However, integrating over the full latent space remains computationally intractable. As a solution, we propose to leverage the access to the dynamics of \( \mathcal{M} \) to learn a latent belief encoder \( \varphi : \bar{B} \times A \times \Omega \rightarrow \bar{B} \) that approximates the belief update function by minimizing \( D(\bar{\tau}^*(h), \varphi^*(h)) \) for some discrepancy \( D \) and \( h \in (A \cdot \Omega)^* \) drawn from some distribution. The belief encoder \( \varphi \) thus enables to learn a policy \( \bar{\pi} \) conditioned on latent beliefs to optimize the return in \( \mathcal{P} \): given the current history \( h \), the next action to play is given by \( a \sim \bar{\pi}(\cdot | \varphi^*(h)) \). Two key questions arise: (i) does the WAE-MDP encoding induces a latent POMDP with behaviors close to \( \mathcal{P}' \)? (ii) is the history representation induced by \( \varphi \) suitable for optimizing the expected return in \( \mathcal{P} \)? Our guarantees hinge on the history distribution and chosen discrepancy. The next section details our theoretical analysis of the required distributions and losses to answer (i) and (ii). 3.2 Losses and Theoretical Guarantees To yield the guarantees, we specifically target the episodic RL process setting for drawing histories. Assumption 2 (Episodic RL process). The environment \( \mathcal{P} \) embeds a special reset state so that (i) under any policy, the environment is almost surely eventually reset; (ii) when reset, the environment transitions to the initial state; and (iii) the reset state is observable. Lemma 3.1. There exists a well defined stationary distribution \( H_\pi \in \Delta((A \cdot \Omega)^*) \) over histories likely to be seen at the limit of the interaction when \( \bar{\pi} \) is executed in \( \mathcal{P} \) (proof in Appendix D). Remark 2. Assumption 2 is present in a vast majority of RL scenarios, where it is common practice to reset the environment and start anew from an initial state when the agent succeeds, fails, or after a finite number of time steps (Brockman et al., 2016; Pardo et al., 2018). A notable exception is in the domain of continual RL. However, the existence of a stationary distribution (Lem. 3.1) is often assumed in such scenarios (see, e.g., Huang, 2020), which allows to relax the episodic assumption. Local losses. 
The objective function of the WAE-MDP incorporates local losses (Gelada et al., 2019) that minimize the expected distance between the original and latent reward and transition functions: \[ L_R = \mathbb{E}_{s,o,a \sim H_\pi} |\mathcal{R}(s,a) - \bar{\mathcal{R}}(\phi(s,o),a)|, \quad L_P = \mathbb{E}_{s,o,a \sim H_\pi} W_d (\phi P_\Omega(\cdot | s,o,a), \bar{P}(\cdot | \phi(s,o),a)); \] and both are optimized locally, i.e., under \( H_\pi \), where \( s,o,a \sim H_\pi \) is a shorthand for (i) \( h \sim H_\pi \) so that \( o \) is the last observation of \( h \), (ii) \( s \sim \tau^*(h) \), and (iii) \( a \sim \pi(\cdot | \varphi^*(h)) \). Furthermore, \( \phi P(\cdot | s,a) \) is the distribution of transitioning to \( s' \sim P(\cdot | s,a) \), then embedding it to the latent space \( s' = \phi(s') \), and \( d \) is a metric on \( S \). In practice, the ability of observing states during learning enables the optimization of those local losses without the need of explicitly storing histories. Instead, we simply store the transitions of \( M_\Omega \) encountered while executing \( \pi \). We also introduce an observation loss which allows learning \( O \): \[ L_O = \mathbb{E}_{s,o,a \sim H_\pi} \mathbb{E}_{s' \sim P(\cdot | s,a)} d_{TV} \left( O(\cdot | s',a), \mathbb{E}_{o' \sim O(\cdot | s',a)} O(\cdot | \phi(s',o')) \right). \] Belief losses. We set \( D \) as the Wasserstein distance between the true latent belief update and our belief encoder. In addition, we introduce the following reward and transition regularizers to reconcile the behaviors obtained in the fully observable latent model \( \mathcal{M} \) and the partially observable one \( \mathcal{P} \): \[ L_{\mathcal{R}} = \mathbb{E}_{h \sim H_\pi} W_d (\tilde{\tau}^*(h), \varphi^*(h)), \quad L_{\mathcal{R}}^\mathcal{P} = \mathbb{E}_{h,s,o,a \sim H_\pi} \mathbb{E}_{s \sim \varphi^*(h)} |\mathcal{R}(\phi(s,o),a) - \bar{\mathcal{R}}(s,a)|, \] \[ L_{\mathcal{P}} = \mathbb{E}_{h,s,o,a \sim H_\pi} \mathbb{E}_{s \sim \varphi^*(h)} W_d (\bar{P}(\cdot | \phi(s,o),a), \bar{P}(\cdot | s,a)). \] \( L_{\mathcal{R}} \) and \( L_{\mathcal{P}} \) aim at regularizing \( \varphi \) and minimize the gap between the rewards (resp. transition probabilities) that are expected when drawing states from the current belief compared to those actually observed. Again, the ability to observe the states during training enables optimizing those losses without explicitly requiring the states to execute the policy. The belief loss and the related two regularizers can be optimized on-policy, i.e., coupled with the optimization of the latent policy \( \pi \) that is used to generate the episodes. Value difference bounds. We provide guarantees concerning the agent behaviors in \( \mathcal{P} \), when the policies are conditioned on latent beliefs. To do so, we formalize the behaviors of the agent through value functions. For a specific policy \( \pi \), the value of a history is the expected return that would result from continuing to follow the policy from the latest point reached in that history: \( V_\pi(h) = \mathbb{E}_\pi \left[ \sum_{t=0}^{\infty} \gamma^t r_t | b_I = \tau^*(h) \right] \). Similarly, we write \( V_{\bar{\pi}} \) for the values of the latent policy \( \bar{\pi} \) in \( \mathcal{P} \). 
Suppose the agent uses a latent policy whose inputs are produced by \( \varphi \), we claim that when the losses are minimized to zero, then (i) the latent model almost surely mimics the original environment, and (ii) our belief representation captures the value function. Precisely, let \( L = L_R + L_{\mathcal{R}}^\mathcal{P} + \mathcal{R}^\star L_{\mathcal{P}} + \gamma K_{\mathcal{P}} \cdot (L_P + L_{\mathcal{P}}^\mathcal{P} + L_{\mathcal{R}} + L_O) \), with \( \mathcal{R}^\star = \| \mathcal{R} \|_\infty \) and \( K_{\mathcal{P}} = \mathcal{R}^\star / 1 - \gamma \), then: **Theorem 3.2 (Model quality).** For any latent policy \( \bar{\pi}: \mathcal{B} \rightarrow \Delta(A) \), the values of \( \mathcal{P} \) and \( \mathcal{P} \) are bounded by the local and belief losses in average when, in both models, the actions are produced by \( \bar{\pi} \), which is conditioned on the latent belief induced by \( \varphi \), i.e., \( a \sim \pi(\cdot | \varphi^*(h)) \): \[ \mathbb{E}_{h \sim H_\pi} |V_\pi(h) - V_{\bar{\pi}}(h)| \leq \frac{L}{1 - \gamma}. \] **Theorem 3.3 (Representation quality).** Let \( \bar{\pi}^* \) be an optimal policy of \( \mathcal{P} \), then for any \( \epsilon > 0 \) and \( n \geq 0 \), there is a \( K \geq 0 \) so that for any histories \( h_1, h_2 \) of length at most \( n \) that are measurable under \( \mathcal{P} \) and \( \mathcal{P} \) with \( \varphi^*(h_1) = \bar{b}_1 \) and \( \varphi^*(h_2) = \bar{b}_2 \), the representation induced by \( \varphi \) yields: \[ |V_{\bar{\pi}^*}(h_1) - V_{\bar{\pi}^*}(h_2)| \leq K W_d (\bar{b}_1, \bar{b}_2) + \epsilon + \frac{KL_{\mathcal{P}} + L}{1 - \gamma} \left( \frac{1}{H_{\bar{\pi}^*}(h_1)} + \frac{1}{H_{\bar{\pi}^*}(h_2)} \right). \] While Thm. 3.2 asserts that training a WAE-MDP as a latent space model of the environment results in similar behaviors (i.e., close expected returns) compared to the original environment when they are measured under the agent policy — which justifies the usage of \( \mathcal{P} \) as model of the environment — Thm. 3.3 states that our learned update procedure yields a belief representation which is well-suited to optimize the policy: execution traces leading to close latent beliefs (via our learned updater \( \varphi \)) are guaranteed to yield close expected returns as well (proofs in Appendix F). 1 Analogous to \( \tau^* \), we define \( \varphi^* \) as the recursive application of \( \varphi \) along histories. Figure 2: WBU (left) learns to encode the history into a sub-belief solely by optimizing the belief loss. The policy, being conditioned on the sub-belief, is learned via A2C and does not back-propagate through the sub-belief encoder. The R-A2C agent (right) uses BPTT: the RNN leverages gradients from future time-steps to improve its compression of the history for learning a policy and value function. In both plots, the colored arrows represent the gradient flows of the different losses. 4 LEARNING TO BELIEVE In the following, we assume that we have access to the latent model learned by the WAE-MDP. Architecture. Our latent belief encoder $\varphi$ aims at generalizing to any POMDP. Therefore, we do not make any assumption about the underlying belief distribution. To accommodate complex belief distributions, we use a Masked Auto-Regressive Flows (MAF) (Papamakarios et al., 2017), a type of normalizing flow built on the auto-regressive property. Precisely, to fit with the WAE-MDP framework and leverage the guarantees presented in Sect. 3.2, we use the MAF of Delgrange et al. 
(2023) that learns relaxed multivariate latent distributions. The sub-belief $\beta_t$ is the vector that embeds the parameters of the belief distribution, which is converted into a belief via the MAF $M(\beta_t) = \tilde{b}_t$. We use a sub-belief encoder $\varphi^{\text{sub}}$ to recursively update $\beta_t$ via $\varphi^{\text{sub}}(\beta_t, a_t, o_{t+1}) = \beta_{t+1}$, so that $\varphi(\tilde{b}_t, a_t, o_{t+1}) = M \circ \varphi^{\text{sub}}(\beta_t, a_t, o_{t+1})$. RNNs are trained via back-propagation through time (BPTT), which is challenging (Pascanu et al., 2013). In contrast, although sub-beliefs are updated recursively in the same spirit as RNN hidden states, we do not need BPTT and can use a simple feed-forward network for $\varphi^{\text{sub}}$, as illustrated in Fig. 2. In R-A2C, RNN hidden states serve as compact representations of histories for the policy. Since values of time-steps closer to the end of an episode are easier to learn, the gradients of future time-steps tend to be more accurate; thus BPTT helps learning. This is in stark contrast to learning the belief update rule: the beliefs of early time-steps are easier to infer, so BPTT is unnecessary and might even be harmful. Additionally, when the belief update function is viewed as the transition function of $M_\theta$, disabling BPTT aligns with the literature on model-based RL for learning Markovian transition functions (Gelada et al., 2019; François-Lavet et al., 2019). See Appendix G for a more detailed discussion on BPTT.

Training. We aim to train $\varphi^{\text{sub}}$ and $M$ to approximate the update rule by minimizing the Wasserstein distance between the belief update rule $\bar{\tau}$ of the latent POMDP and the belief encoder $\varphi$ (Eq. 5), to leverage the theoretical learning guarantees of Thm. 3.2 and 3.3. However, Wasserstein optimization is known to be challenging, often requiring the use of additional networks, Lipschitz constraints, and a min-max optimization procedure (Arjovsky et al., 2017), similar to how WAE-MDPs are trained. Also, sampling from both distributions is necessary for optimizing the Wasserstein distance and, while sampling from our belief approximation is straightforward, sampling from the update rule (Eq. 3) is non-trivial. As an alternative to the Wasserstein optimization, we minimize the KL divergence between the two distributions. While this $D_{KL}$ proxy is easier to optimize and only requires sampling from one of the two distributions (in our case, the belief encoder), it bounds the Wasserstein distance by Pinsker's inequality (Borwein & Lewis, 2005) in the zero-temperature limit of the WAE-MDP (cf. Appendix E for a discussion).

Figure 3: Evolution of the (i) undiscounted cumulative return for WBU, R-A2C and DVRL, and (ii) estimated belief loss during learning for WBU (mean and standard error). We report 5 instances of each algorithm. Appendix I details the hyperparameter search performed.

On-policy KL divergence. Using $D_{KL}$ as a proxy for $W_d$ allows us to narrow the gap between $\varphi$ and $\bar{\tau}$. We train $\varphi$ on-policy, with the same samples as those used for $\bar{\pi}$, which aids learning even though gradients are not allowed to flow between the networks. At any time-step $t \geq 0$, given the current belief $\tilde{b}_t$,
the action $a_t$ played by the agent, and the next perceived observation $o_{t+1}$, the belief proxy loss is: $$D_{KL}(\varphi(b_t, a_t, o_{t+1}) \| \tau(b_t, a_t, o_{t+1})) = \log \left( \mathbb{E}_{\tilde{s} \sim b_t} \mathbb{E}_{s' \sim P(\cdot | \tilde{s}, a_t)} \mathcal{O}(o_{t+1} | s') \right) + \mathbb{E}_{s_{t+1} \sim \varphi(b_t, a_t, o_{t+1})} \left[ \log \varphi(s_{t+1} | b_t, a_t, o_{t+1}) - \log \mathbb{E}_{\tilde{s} \sim b_t} \mathbb{P}(s_{t+1} | \tilde{s}, a_t) - \log \mathcal{O}(o_{t+1} | s_{t+1}) \right]. \tag{8}$$ Eq. 8 consists of 4 terms: a normalization factor, negative entropy of $\varphi$, belief update conformity with the latent MDP’s state transition function, and filtration of latent states unrelated to $o_{t+1}$. Remark 3 (Latent observations). The WAE-MDP learns from the augmented POMDP $P^*$ (Sect. 3.1), which is equivalent to $P$ and possesses a deterministic observation function. The WAE-MDP learns such a deterministic mapping through $\mathcal{O}_\mu$. However, when deterministic, the observation terms of Eq. 3 and 8 are Dirac, which prevents learning: deterministically filtering out states from the next belief that do not share the next observation is sample inefficient and may further require constructing the full belief distribution, which is usually intractable. In practice, to alleviate this, we model the observation function as a normal distribution: $\mathcal{O}(\cdot | \tilde{s}) = \mathcal{N}(\mathcal{O}_\mu(\tilde{s}), \sigma^2)$ where the variance can, e.g., be learned via $L_O$. Notice that we can enforce annealing the variance to zero to recover a Dirac. Policy learning is enabled by inputting the sub-belief into the policy, while the optimization of the belief encoder parameters by the RL agent is not allowed. Our method is applicable to any on-policy algorithm, and we employ A2C in our experiments. We provide the final algorithm in Appendix H. 5 EXPERIMENTS To evaluate our approach, we identify three types of POMDPs: those requiring long-term memory, those where features of the state space are hidden (and may be inferred from short-term memory), and those with noisy observations. Notably, we stress that long-term memory is crucial in POMDPs, whereas short-term memory could be mitigated by stacking frames (e.g., Mnih et al., 2015). We compare our agent to R-A2C and DVRL (Fig. 3), trained in environments from POPGym (Morad et al., 2023) and our own partially observable version of MINATAR (Young & Tian, 2019). Memorization. The RepeatPrevious environment involves shuffling two decks of cards at the start of each episode and presenting the agent with a card at each time step. The goal is to identify the suit of the card seen 8 time steps earlier. Our algorithm stands out as the sole method demonstrating mid- to long-term memorization capabilities. Unlike other methods, notably DVRL which also attempts to learn a belief distribution, WBU provably acquires a suitable representation of the history by learning to maintain a sufficient statistic, thereby explaining its ability to retain past information. Hidden features. We employ a cart pole scenario (STATELESSCARTPOLE) where velocity components of the system are hidden. R-A2C excels rapidly here, capitalizing on short-term memory to infer velocities from the preceding observation, while DVRL is overtaken. Still, WBU eventually reaches R-A2C final performance. 
We also explore the SPACEINVADERS environment, where the agent takes command of a cannon with the objective of shooting at groups of moving aliens. In the observation, we intentionally concealed the direction of alien movement and confounded friendly and enemy fire. In this more challenging setting, WBU excels by earning the highest rewards.

Noise. We explore two types of noise. First, we introduce Gaussian noise to the observations of STATELESSCARTPOLE. Second, for SPACEINVADERS, binary noise is injected via a radar-like mask obscuring the position of each alien with high probability. Hence, the agent must infer their positions based on previous observations. By leveraging its ability to maintain a belief over the noiseless (latent) state space, WBU demonstrates its resilience to noise and swiftly provides superior solutions, whereas R-A2C eventually achieves comparable performance but with more variance.

Belief representation. Ideally, close policy inputs should lead to close values, which would ease policy optimization. Thm. 3.3 provides such a representation guarantee and ensures that the representation induced by $\varphi$ captures the value function. To support this, we performed a t-SNE (van der Maaten & Hinton, 2008) on our belief representation at a late stage of training, which projects latent beliefs onto a 2D space (Fig. 4). Interestingly, latent beliefs that are clustered together indeed have close values, in line with Thm. 3.3. We report the belief loss throughout the training phases (Fig. 3). Importantly, unlike other baselines, our approach distinctly separates the optimization of $\varphi$ and $\pi$. Consequently, the policy optimization does not influence the representation, which is learned solely via $\varphi$. The decrease in this loss thus relates to improved representation quality for RL.

6 CONCLUSION

WBU provides a principled approach that directly approximates the belief update for POMDPs, in contrast to SOTA methods that use the RL objective and regularization to attempt to turn the history into a sufficient statistic. By learning the belief and its update rule, we provide strong guarantees on the quality of the belief, its ability to condition the optimal value function, and ultimately, the effectiveness of our algorithm. Our theoretical analysis and experimental results demonstrate the potential of our approach. Overall, WBU provides a promising new direction for RL in POMDPs, with potential applications in a wide range of settings where decision-making is complicated by uncertainty and partial observability, or when guarantees on the agent behaviors are required.

Future work. The theory we developed is not limited to the algorithm proposed in our paper. It opens diverse avenues for future work, e.g., on formally verifiable policies for POMDPs, by leveraging the guarantees presented in our framework. We also leave the study and the adaptation of our framework under relaxed assumptions (e.g., in settings akin to the work of Lambrechts et al. (2022)) to future work. In addition, Thm. 3.2 enables policy optimization through planning in the learned model, as demonstrated in successful model-based RL methods (e.g., Hafner et al. (2021)). Scaling to high-dimensional observations (e.g., images) may potentially be computation-intensive due to observation filtering. For this further challenge, we suggest either modifying the WAE-MDP framework by using a stochastic decoder (see Tolstikhin et al. (2018), e.g., via PixelCNN, van den Oord et al.
(2016)), or learning a lower-dimensional latent observation space synced with the policy with a normalized or (relaxed) discrete prior (e.g., via a WAE-GAN, Tolstikhin et al. (2018)). Finally, incorporating bisimulation metrics (Desharnais et al. (2004); Ferns et al. (2011)) will strengthen guarantees for belief learning, even though bisimulation is challenging in POMDPs (Castro et al. (2009)). ACKNOWLEDGEMENTS This research was supported by funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” program and was supported by the DESCARTES iBOF project. R. Avalos is supported by the Research Foundation – Flanders (FWO), under grant number 11F5721N. G.A. Perez is also supported by the Belgian FWO “SAILor” project (G030020N). We thank Mathieu Reymond, Denis Steckelmacher, and Mustafa Mert Çelikok for their valuable feedback. REPRODUCIBILITY STATEMENT We referenced in the main text the parts of the Appendix presenting the proofs of our Lemma (Appendix D), and Theorems (Appendix E). We also provide the pseudo-code of our algorithm (Appendix H), as well as extra details required to compute our losses (Appendix B, E, and F.2). The code is available at https://github.com/raphaelavalos/wbu. Additionally, we provide the details of our hyperparameter search (Appendix I). REFERENCES Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 214–223. PMLR, 2017. URL http://proceedings.mlr.press/v70/arjovsky17a.html. Raphael Avalos, Mathieu Reymond, Ann Nowé, and Diederik M. Roijers. Local Advantage Networks for Cooperative Multi-Agent Reinforcement Learning. In AAMAS ’22: Proceedings of the 21st International Conference on Autonomous Agents and MultiAgent Systems (Extended Abstract), 2022. J.M. Borwein and A.S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and Examples. CMS Books in Mathematics. Springer New York, 2005. ISBN 9780387295701. URL https://books.google.be/books?id=TXWzqEkAa7IC. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. CoRR, abs/1606.01540, 2016. URL http://arxiv.org/abs/1606.01540. Peter E. Bulychev, Franck Cassez, Alexandre David, Kim Guldstrand Larsen, Jean-François Raskin, and Pierre-Alain Reynier. Controllers with minimal observation power (application to timed systems). In Supratik Chakraborty and Madhavan Mukund (eds.), Automated Technology for Verification and Analysis - 10th International Symposium, ATVA 2012, Thiruvananthapuram, India, October 3-6, 2012. Proceedings, volume 7561 of Lecture Notes in Computer Science, pp. 223–237. Springer, 2012. doi: 10.1007/978-3-642-33386-6_19. URL https://doi.org/10.1007/978-3-642-33386-6_19. Pablo Samuel Castro, Prakash Panangaden, and Doina Precup. Equivalence relations in fully and partially observable markov decision processes. In Craig Boutilier (ed.), IJCAI 2009, Proceedings of the 21st International Joint Conference on Artificial Intelligence, Pasadena, California, USA, July 11-17, 2009, pp. 1653–1658, 2009. URL http://ijcai.org/Proceedings/09/Papers/276.pdf. Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, and Mark Rowland. Mico: Improved representations via sampling-based state similarity for markov decision processes. 
In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 30113–30126, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/fd06b8ea02fe5b1c2496fe1700e9d16c-Abstract.html.
RJDjSXNuAZ
The method is evaluated only on virus detection in electron microscopy images, where viruses do not overlap. Thus, the method may not generalize to object detection (e.g., cell or nuclei detection) in other microscopy imaging modalities, such as hematoxylin and eosin (H&E) or immunohistochemistry (IHC) stained brightfield microscopy images, and fluorescence images, which often have touching or overlapping cells or nuclei. In addition, the repeated optimization for each object would be expensive for H&E or IHC images, which typically contain thousands of cells/nuclei or more.
Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images Hannah Kniesel, Leon Sick, Tristan Payer, Tim Bergner, Kavitha Shaga Devan, Clarissa Read, Paul Walther, Timo Ropinski Ulm University Pedro Hermosilla TU Vienna Abstract Current state-of-the-art methods for object detection rely on annotated bounding boxes of large data sets for training. However, obtaining such annotations is expensive and can require up to hundreds of hours of manual labor. This poses a challenge, especially since such annotations can only be provided by experts, as they require knowledge about the scientific domain. To tackle this challenge, we propose a domain-specific weakly supervised object detection algorithm that only relies on image-level annotations, which are significantly easier to acquire. Our method distills the knowledge of a pre-trained model, on the task of predicting the presence or absence of a virus in an image, to obtain a set of pseudo-labels that can be used to later train a state-of-the-art object detection model. To do so, we use an optimization approach with a shrinking receptive field to extract virus particles directly without specific network architectures. Through a set of extensive studies, we show how the proposed pseudo-labels are easier to obtain, and, more importantly, are able to outperform other existing weak labeling methods, and even ground truth labels, in cases where the time to obtain the annotation is limited. 1 Introduction Deep learning algorithms rely on large data sets for training a model to perform a complex task. However, annotating such large data sets usually requires a person to analyze each data point and label it accordingly, resulting in a time-consuming process. In particular, detecting particles in Electron Microscopy (EM) images is extremely costly, since this annotation process has to be performed by an expert, which usually results in small data sets that are not well suited to train deep models. Since particle detection is a key step in several scientific studies such as the analysis of the formation of infectious virions (Shaga Devan et al., 2021), catalytic investigation (Nartova et al., 2022), analysis of multi-tissue histology images (Graham et al., 2019), preclinical trials or single particle reconstruction (Sigworth, 2015; Shaikh et al., 2008), these studies could benefit significantly from automated object detection methods tailored towards particle detection. However, particle detection comes with additional challenges. First, it requires the need for experts to annotate the data. Second, since these areas are active research fields, they require a quick adaption of the detection model to new virus mutants, particles, or imaging modalities. To address this issue, weakly supervised algorithms (Oquab et al., 2015; Bency et al., 2016; Zeng et al., 2019; Gao et al., 2021) rely on a secondary task, usually classification, for which annotations are easy to obtain. Then, to solve the main task of object detection, weakly supervised algorithms usually use a set of bounding box candidates or Region of Interests (ROIs), obtained with a selective search strategy (Uijlings et al., 2013), which are later filtered based on the classification score of a pre-trained classification model. However, the accuracy of such object detection models highly depends on the quality and quantity of such ROI candidates (Girshick, 2015), since a direct regression of bounding... boxes based on the pre-trained classifier is not possible. 
This is why current weakly supervised methods in the field of particle detection in EM usually rely on more fine-grained (and expensive) object-level annotations rather than image-level annotations (Devan et al., 2019; Matuszewski & Sintorn, 2019). In our work, however, we reduce annotation time by exploiting image-level annotations for virus capsid detection in EM images, proposing a distillation method that is able to regress the bounding box position directly from a classifier pre-trained on image-level annotations. To this end, we combine a Gaussian masking strategy and domain-specific knowledge about the virus size and shape in order to localize virus capsids in the images using an optimization algorithm informed by the pre-trained classifier. To propagate the gradients over the full input image, we initialize the Gaussian mask with a large standard deviation and progressively reduce it during the optimization procedure, similar to the training mechanism used in score-based generative models (Song & Ermon, 2019). By exploiting this novel approach, we are able to perform accurate particle detection, which is robust with respect to the variance of the initial ROIs. Since our approach relies only on image-level labels, the collection of a new data set for a newly discovered virus mutant or a new imaging modality can be done efficiently. To evaluate our method, we first conducted a user study comparing different types of labels, whose results show that our labels are easier to obtain and less prone to errors. Then, we compare our approach to other weakly supervised and fully supervised approaches on five different virus types. Our results show that our approach, relying solely on image labels, not only outperforms other weakly supervised approaches but even fully supervised ones when allocating the same annotation time. Thus, within this paper, we make the following contributions:

- We propose a domain-specific gradient-based optimization algorithm, which exploits a pre-trained classifier and a Gaussian masking strategy, in order to detect virus capsids.
- We introduce a class activation map guided initialization strategy to significantly reduce the computational overhead of the underlying optimization process.
- We conducted a user study comparing different label types and show that image-level annotations are easier and faster to obtain, and more robust to annotation errors.
- We show that our approach outperforms other weakly as well as fully supervised methods given the same annotation time.

2 RELATED WORK

Weakly supervised object detection. The requirement for fast annotation times is a long-standing problem in several fields, which has made Weakly Supervised Object Localization (WSOL) and Weakly Supervised Object Detection (WSOD) an active area of research in the last few years. Oquab et al. (2015) introduced a CNN architecture that can be moved over the input image during inference time in a sliding window fashion to perform WSOD. Bazzani et al. (2016) used a selective search strategy (Uijlings et al., 2013) to draw a set of bounding box candidates on the image, for which the score of each box was obtained from a pre-trained classification model. Bency et al. (2016) used a hierarchical search to reduce the number of bounding box candidates and the feature map of a deep network to find the location of the object of interest. Bilen & Vedaldi (2016) introduced Multiple Instance Learning (MIL) in an end-to-end trainable fashion.
In MIL, training instances are organized in bags such that a positive bag contains at least one object of interest and a negative bag does not contain any object of interest. There are many works to follow and improve upon the MIL approach (Kantorov et al., 2016; Diba et al., 2017; Tang et al., 2017; Cheng et al., 2020; Huang et al., 2020; Ren et al., 2020; Zeng et al., 2019; Seo et al., 2022). Among these methods, the approach by Zeng et al. (2019) stands out, as it includes refinement of the ROI candidates in the loss to obtain more accurate bounding box predictions. A similar idea for bounding box refinement was also explored by Dong et al. (2021). However, they rely on additional data sets to learn bounding box modifiers that can be applied to the data set with weak labels. Contrary to these approaches, we propose to directly regress the bounding box of the objects from the pre-trained classifier without the need for supervised pre-training on different data sets, while further being robust to initial ROI proposals computed by selective search (Uijlings et al., 2013) or similar methods. This makes it possible to use a smaller amount of initial ROI candidates to reduce computational cost. In another line of work, researchers have investigated how to obtain the object’s bounding box from Class Activation Maps (CAMs) of pre-trained deep neural networks (Zhou et al., 2015). However, such methods have difficulties identifying specific discriminative parts of objects. To address this problem, Singh & Lee (2017) tried to improve the quality of CAMs by randomly masking patches of the image during the training phase, to not only rely on specific features of the object during predictions. This concept was later extended by Zhang et al. (2018) and Choe & Shim (2019), whereby both methods facilitate attention maps to mask certain regions of the image during training. Later, Xue et al. (2019) proposed a regularization loss and a hierarchical classification loss to enforce discrepancy in the feature maps, which allows the classifier to attend to the full extent of objects. More recently, Gao et al. (2021) investigate the attention mechanism of vision transformers (Dosovitskiy et al., 2020) to guide WSOD. Alternatively, Meng et al. (2021) aim to better capture the full object through object-aware and part-aware attention. With a similar goal, Wei et al. (2022) propose a mechanism that enforces inter-class feature similarity and intra-class appearance consistency, while Xu et al. (2022) use class-specific foreground and background context embeddings which act as class centroids during inference to form a more complete activation map. However, those methods still suffer from correctly identifying the full extent of objects and/or rely on specific architectures which might not be suited for small data sets, as they usually occur in EM scenarios. The most similar work to the one proposed in this paper is the work from Lu et al. (2020). They propose a secondary network to predict the geometric parameters of a mask, center and radius of an ellipsoid, which is then input to another network that predicts the final mask. They show in their experiments that using the predicted geometry directly leads to poor performance. In this work instead, we show that no neural network is necessary to predict or transform such mask and by using similar ideas to the ones used in score-based generative models (Song & Ermon, 2019) we can optimize directly the location of the object. 
Additionally, we introduce a method that is able to detect multiple instances of the same object in an image, which is not possible with the approach of Lu et al. (2020). **Virus particle detection in EM.** Despite the progress in WSOD in standard computer vision, its application in EM images is limited, even though the need for fast annotations in EM is of special interest. This is likely the case since low Signal to Noise Ratio (SNR) in EM images can limit the capacity of methods well performing on other imaging modalities. Further, EM data usually contains a high number of instances of the same object in one image, which are hard to detect in a weakly supervised setup with image-level labels only. To solve a similar problem in medical imaging, Dubost et al. (2020) introduced regression models for WSOD in 3D MRI data. In the EM domain Huang et al. (2022) introduced a weakly supervised learning schema for finding the location of proteins in cryo-EM. However, in their weakly supervised setup, they still require a small amount of labeled training data. For detecting virus particles in EM in a weakly supervised fashion Devan et al. (2019) trained a classification model on a small set of annotated bounding box crops of the HCMV nucleocapsids. They then use a weakly supervised approach similar to Oquab et al. (2015) to detect virus particles based on their classifier. However, this approach requires images of a single virus, instead of random crops with and without virus particles. The same authors later explore the improvement of supervised virus detection by augmenting training data by a generative adversarial network (Shaga Devan et al., 2021). One of the most promising works in weakly supervised detection and segmentation probably originates from Matuszewski & Sintorn (2018). They introduced a minimal annotation strategy for the segmentation of microscopy images: annotations of the center or center line of a target object are used to generate segmentation masks. The labels for the object of interest were generated by dilating each particle annotation with a disk of $0.7 \times$ average known size of the target object. The background label, on the other hand, was created by dilating the center annotations with $1.5 \times$ the average known size of the target object. Later, the same authors (Matuszewski & Sintorn, 2019) made use of the minimal labels to train an improved U-Net architecture (Ronneberger et al., 2015) for virus recognition in EM. However, all of the mentioned methods rely on more fine-grained annotations and/or the use of a compute inefficient sliding window to obtain ROI candidates to locate the particles. ### 3 METHOD Our method expects as input an EM image, $I \in \mathbb{R}^{W \times H}$, the expected virus radius $r$, and a classifier $C : \mathbb{R}^{W \times H} \rightarrow \mathbb{R}$. We pre-train the classifier on image-level annotations, such that it can classify Figure 1: Left: Overview of our weakly supervised virus detection approach, working in an iterative fashion until a stopping criteria is met. Right: Detailed description of our approach visualizing different steps for the detection of a virus. For the Initialization of the particle position \( p_0 \) we compute a CAM obtained through GradCAM (Selvaraju et al., 2017), and place \( p_0 \) at the position of the highest CAM value. During Optimization the position \( p_t \) is iteratively refined, guided by the classifier output and a Gaussian mask with decreasing standard deviation centered at \( p_t \). 
A Detection is happening once the position is converged to the exact position of the virus particle. Finally, the input image is prepared to detect the next virus by the Virus Removal of previously detected virus particles. We check at multiple points of the virus detection pipeline, if a stopping criteria is met. For more details see section 3.4. \( I \), based on the presence and absence of virus capsids in \( I \), in a binary manner. To locate the virus capsids, we first initialize their position \( p_0 \) with the location of the highest value of a CAM obtained for \( C(I) \). This position is then iteratively optimized to obtain a refined position \( p_t \) over time steps \( t \). In each step \( t \), we mask the input image with a Gaussian mask \( M \) centered at \( p_t \), before optimizing \( p_t \) to maximize the classifier score \( C(I \cdot M(p_t)) \). During this optimization, we fix the weights of the classifier and only optimize the position. In order to successfully converge to the desired position, even when \( p_0 \) is far from a virus particle, the gradient computation needs to consider areas of \( I \) far from \( p_0 \). Therefore, the standard deviation of the Gaussian is chosen to initially span the entire image and continuously decreases during the optimization process. Once the optimization process converges, the already detected virus particles are cut out from \( I \), using virus radius \( r \), such that the optimization towards a new viral particle is not misguided by already located particles. This iterative process stops when \( C(I) \) predicts the absence of viruses on the masked image. See Figure 1 for an overview of our method. ### 3.1 INITIALIZATION We compute the CAM of the input image \( I \) using the GradCAM (Selvaraju et al., 2017) algorithm based on our pre-trained classifier (implementation from Gildenblat & contributors, 2021). Then, the center of mass of the top 1% of activations is used as the initial position, \( p_0 \). Thus, if there are multiple instances of the virus, the CAM can spread over multiple regions of the input image. Therefore, to ensure we initialize \( p_0 \) inside of a relevant region, we check if the center of mass lies within the top 1% of the activations. If this is not the case, we define \( p_0 \) as a random position among those top 1%. ### 3.2 OPTIMIZATION Given an initial position \( p_0 \), we want to further optimize it using gradient descent to match the exact location of a virus. To achieve this, we define a fully differentiable mask \( M \in \mathbb{R}^{W \times H} \) as a Gaussian function centered at \( p_t \), where \( t \) is the current optimization iteration. This mask \( M \) is defined as: \[ M_{ij}(p_t) = \frac{1}{\sigma_t \sqrt{2\pi}} \exp \left( -\frac{\|x_{ij} - p_t\|^2}{2\sigma_t^2} \right) \] where \( x_{ij} \) is the coordinate of the position \( i, j \) in the image, and \( \sigma_t \) is the mask’s standard deviation at \( t \). Note, that the mask is normalized to have an integral equal to one since we found this to work Figure 2: Visualization of the magnitude and direction of the gradients for multiple positions in the input image over the optimization process. For large values of the standard deviation, $\sigma_{\text{max}}$, large portions of the image receive gradients pointing to virus particles. However, for small values of the standard deviation, $\sigma_{\text{min}}$, only the regions close to virus particles contain strong gradients pointing towards particles. 
By reducing the value of $\sigma$ during the optimization process we are able to accurately find a particle in the image even if the initial starting position is far away from any virus. Better in practice. Also, before we feed the masked input to the classifier, we normalize it based on the statistics of the pre-training data set. Then, the optimization objective is defined as: $$\max_{p_t} C(I \cdot M(p_t))$$ (2) **Mask standard deviation.** In order to propagate gradients to optimize the position over the full EM image, the standard deviation of the Gaussian mask needs to be adapted for each optimization step. While a Gaussian mask with a large standard deviation $\sigma_{\text{max}}$ pulls positions that are far from a virus closer to the optimal position, a Gaussian mask with a small standard deviation $\sigma_{\text{min}}$ will generate smooth gradients for positions close to a virus (see Figure 2). Therefore, we take inspiration from approaches commonly used for score generative models ([Song & Ermon, 2019](#)). We start with a large standard deviation $\sigma_{\text{max}}$ and then reduce it over the optimization process to $\sigma_{\text{min}}$. Since the different EM images can have different levels of magnification, we define the standard deviation depending on the real-world virus size in nm. We choose $\sigma_{\text{max}}$ such that the entire image will be visible if the mask is placed in the center of the image of the smallest magnification level. In practice, exponential decay performed the best when interpolating between $\sigma_{\text{max}}$ and $\sigma_{\text{min}}$. Figure 2 shows an illustration of gradient magnitude and direction at different points in an image for multiple $\sigma_t$. ### 3.3 Virus Removal Then, we iteratively repeat Initialization and Optimization. However, to prevent the virus detection to converge to the same position, we remove the already detected virus by masking it with a circular shape using the known virus size. ### 3.4 Stopping Criteria To stop the iterative detection, we consider three criteria: 1) During the Initialization step we compute the CAM and stop the virus detection when the computed CAM does not show any focus, meaning the minimum value equals the maximum value of the CAM. 2) After applying the Virus Removal step, we forward the image through the classifier. If the output score is smaller than a threshold $t$ we stop searching for viruses in the image, as the classifier predicts no remaining viruses in the image. The value of $t$ is chosen based on the smallest threshold used for computing the Mean Average Precision (mAP) metric. 3) During the Detection step, we test if the detected region actually contains a virus. We test this by masking everything in the image but the last virus detected and process this image with the pre-trained classifier. ### 3.5 Postprocessing Once all particles have been detected, we apply non-maximum-suppression, similar to the Faster-RCNN ([Ren et al., 2015](#)), to discard low-scoring virus particles that overlap with higher-scoring ones and exploit the fact that virus particles do not overlap in the image plane. Lastly, a bounding box is created for each virus detected in the image, using the known size of the virus. Moreover, we compute a score for each bounding box by masking all other detected viruses in the image with circular disks and forwarding it to the pre-trained classifier. 4 EXPERIMENTS In this section, we describe the experimental setup used to validate our method. 
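Before detailing the data sets, the following sketch makes the masking-and-optimization loop of Section 3.2 concrete. It is a simplified illustration under stated assumptions, not the released implementation: the classifier is a stand-in, the initialization, learning rate, and the values of $\sigma_{\text{max}}$, $\sigma_{\text{min}}$, and the number of steps are placeholders, and the stopping criteria and virus removal steps are omitted.

```python
import math
import torch

# Sketch of the Gaussian-mask position optimization (Eq. 1 and Eq. 2), with
# illustrative hyperparameters and a stub classifier in place of the pre-trained one.

H = W = 224
ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")

def gaussian_mask(p, sigma):
    """Differentiable Gaussian mask centered at p = (row, col); cf. Eq. (1)."""
    sq_dist = (ys - p[0]) ** 2 + (xs - p[1]) ** 2
    m = torch.exp(-sq_dist / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
    return m / m.sum()                      # normalize to unit integral

def classifier(img):                        # stand-in for the pre-trained classifier
    return img.mean()                       # scalar "virus present" score

image = torch.rand(H, W)                    # stand-in EM patch
p = torch.tensor([112.0, 112.0], requires_grad=True)   # CAM-based initialization p_0
optimizer = torch.optim.SGD([p], lr=50.0)

sigma_max, sigma_min, num_steps = 200.0, 15.0, 100
for t in range(num_steps):
    # exponential decay of the standard deviation from sigma_max to sigma_min
    sigma = sigma_max * (sigma_min / sigma_max) ** (t / (num_steps - 1))
    score = classifier(image * gaussian_mask(p, sigma))
    optimizer.zero_grad()
    (-score).backward()                      # gradient ascent on the classifier score
    optimizer.step()
```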
To analyze our results on a variety of viruses, we have focused our experiments on the following five virus types: Herpes virus, Adeno virus, Noro virus, Papilloma virus, and Rota virus. Below, we will briefly discuss these with respect to imaging-relevant virus properties and data availability, before providing details on the conducted user study and discussing the obtained results. 4.1 DATA Herpes virus. The Herpes virus causes lifelong infections in humans. It is composed of an icosahedral capsid with double-stranded DNA, a tegument (middle layer), and an outer lipid bilayer envelope. We use the data from Shaga Devan et al. (2021) which contains 359 EM images with 2860 annotated bounding boxes of the virus particles in total. We use 287 images for training, 36 for validation, and 36 for testing. To approximate the size of the virus, we use values reported by Weil et al. (2020) and Yu et al. (2011) adjusted to account for the different image modality (Read et al., 2019) of room temperature Transmission Electron Microscopy (TEM). This results in a virus size of 165nm, which is also the average size in the data set. Adeno virus. The Adeno virus is a non-enveloped icosahedral capsid with dsDNA. It can infect the lining of the eyes, airways and lungs, intestines, urinary tract, and nervous system leading to cold-like symptoms. We use the data from Matuszewski & Sintorn (2021) containing 67 negative stain TEM images of the Adeno virus with location annotations. We approximate the virus size with 80nm as reported in literature by Goldsmith & Miller (2009). Noro virus. The Noro virus is a small-sized, non-enveloped capsid with icosahedral geometries and single-stranded RNA. It can cause acute gastroenteritis. For this virus, we use 54 negative stain TEM images from Matuszewski & Sintorn (2021) with location annotations. We approximate the virus size with 30nm as reported in literature by Ludwig-Begall et al. (2021). Papilloma virus. The Papilloma virus is a common virus in humans. While it can cause small benign tumors, it can also progress to cervical cancer in high-risk forms. It is non-enveloped with icosahedral DNA. Here, we use the data from Matuszewski & Sintorn (2021) containing 31 negative stain TEM images of the Papilloma virus with location annotations. We approximate the virus size with 50nm as reported in literature by Doorbar et al. (2015). Rota virus. The Rota virus has a distinctive wheel-like shape: round with a double-layered capsid, non-enveloped, with double-stranded RNA (segmented RNA genome). We use the data from Matuszewski & Sintorn (2021) containing 36 negative stain TEM images of the Rota virus with location annotations. We approximate the virus size with 75nm as reported in literature by Yates (2014). It can be noted that the data sets of the Adeno, Noro, Papilloma and Rota virus (maximum of 67 images) are significantly smaller than the data set of the Herpes virus (359 images). For all viruses, we work on image patches with a resolution of $224 \times 224$ pixels following the standard image input size for state-of-the-art image classifiers. To generate the patches we use a sliding window with no overlap. 4.2 USER STUDY We conducted a user study to compare the cost of obtaining different types of annotations, such that we later can analyze detection accuracy in relation to spent annotation time. 
The three types of annotation we collect during this study are 1) binary labels indicating virus presence, 2) bounding boxes that precisely describe the virus location and extent, and 3) locations of the virus centers, which is another popular choice for collecting weak detection labels (Li et al., 2019; Matuszewski & Sintorn, 2018; 2019). **Study design.** During the study, six experts were asked to annotate 85 patches of the TEM images of the Herpes virus with the three types of annotations. We provided an in-house application with a user interface designed to maximize the annotation speed of the three types of labels. We permuted the order of the conditions presented to the experts by a balanced Latin square. We presented the same data to all our participants during all conditions. To counterbalance a learning effect, we randomized the order of the presented patches. The 85 TEM patches show a range of 1-8 visible virus capsids, which is the full range of visible Herpes capsids in the data. For more information and results see the appendix. **Task performance.** We compare the performance of the participants in all three different tasks. We consider the $F_1$ score as the performance metric. For the evaluation of the location and bounding box task, we use an IoU threshold of 0.5 to define a True Positive (TP). We found significant differences in the performance of the participants between all tasks (see Figure 4). This reveals the increasing complexity of annotating binary labels, location labels, and bounding box labels: While binary annotations only require the decision of whether a virus is present in an image or not, localization and bounding box annotations require this decision for every visible particle in the image, making the annotations more prone to errors. Additionally, the bounding box annotations require the definition of the size of a virus, thus increasing their complexity even more. These results support the motivation for using binary labels, as the annotations are less prone to errors. **Annotation times.** Moreover, we investigate the annotation times per visible virus for all tasks. Figure 3 shows the average annotation times per patch of visible virus capsids. The time was measured between the moment showing the image and the user interaction to trigger the visualization of the next image. The average annotation times are slightly decreasing for the binary annotations when the number of visible virus capsids increases. We assume this to be the case based on a simpler detection of virus capsids when their occurrence is higher. However, for both other conditions, the annotation times increase with the number of visible virus capsids. This is accountable to the need for an independent annotation for every single virus. ### 4.3 Experimental Setup Our experiments include a wide range of object detection models with different levels of supervision. First of all, we include a fully supervised object detection model (BB). Second, we follow Li et al. (2019) and Matuszewski & Sintorn (2018; 2019) to use minimal labels for training an object detection model (Loc). We derive the bounding boxes for training from location labels and set their sizes by the known virus size. Finally, we compare the bounding boxes resulting from our optimization process (Ours(Opt)) and an object detection model trained with such boxes (Ours(OD)). We use a ResNet-101 (He et al., 2016) as our classification model and a Faster-RCNN with a ResNet-101 backbone as our object detection model. 
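For reference, converting the center-location labels of the Loc baseline above into training boxes of the known virus size can be done as in the following minimal sketch; the pixel-size conversion factor, the image dimensions, and the clipping to image bounds are illustrative assumptions.

```python
def boxes_from_locations(centers, virus_size_nm, nm_per_pixel, img_w=224, img_h=224):
    """Turn center-location labels into square boxes of the known virus size.

    `centers` holds (x, y) pixel coordinates; virus_size_nm / nm_per_pixel gives
    the box side length in pixels. All parameter values here are illustrative.
    """
    side = virus_size_nm / nm_per_pixel
    boxes = []
    for x, y in centers:
        x0, y0 = max(0.0, x - side / 2), max(0.0, y - side / 2)
        x1, y1 = min(float(img_w), x + side / 2), min(float(img_h), y + side / 2)
        boxes.append((x0, y0, x1, y1))
    return boxes

# e.g., Herpes capsids (165 nm) at an assumed magnification of 1.5 nm per pixel
print(boxes_from_locations([(100, 120), (40, 60)], virus_size_nm=165, nm_per_pixel=1.5))
```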
Additionally, we adapt two recent zero-shot segmentation models, SAM and CutLER, to work in a weakly supervised setup: We pick images of the training set that contain a virus and forward these through the pre-trained models to generate bounding boxes. We then find a suitable range of bounding box sizes on the validation set. The resulting range is used in the train set to obtain bounding boxes that are later used to train an object detection model. Next, we compare against different weakly supervised methods: GradCAM [Selvaraju et al., 2017], LayerCAM [Jiang et al., 2021], TS-CAM [Gao et al., 2021] and Reattention [Su et al., 2022]. As current state-of-the-art methods found that ViT-based architectures can be beneficial for WSOL, we compare the use of a ViT-B/16 backbone and a ResNet-101 backbone for GradCAM as well as LayerCAM. For methods that rely on saliency maps, we compute multiple bounding boxes by thresholding the maps and deriving bounding boxes from connected regions using [Boelli et al., 2019]. We choose the threshold based on the best result on the validation set and apply it to the test set. For a more fair comparison, we also include the knowledge about the virus size in the compared approaches. We report the best results over several runs with different hyperparameters. An extensive evaluation can be found in the appendix A.5. In all experiments, we obtained three runs with different seeds and reported the mean and standard deviation. For all methods, we perform a parameter search to find the best hyper-parameters. To measure the performance of the object detection models, we use mean average precision with an overlap of 50% (mAP$_{50}$). 4.4 Results To compare the three different types of annotations, we fix an annotation time budget and pick random image patches until the time is exhausted. We define the budget as the total time required to annotate the entire data set using binary labels. To compute the time cost of each image, we average the annotation times of the experts participating in the user study. Table 1 presents the results of this experiment. It can be observed that our method is able to outperform location and bounding box labels for all viruses. In particular, Ours(OD) is outperforming all other approaches. Moreover, we can see that location labels are not able to perform well for some of the virus particles due to the small number of training images. In our comparison to different zero-shot learning approaches, we found that, in the case of negative stain TEM images where the background is stained making the virus the most prominent structure, both SAM and CutLER performed comparably well. However, when dealing with small viruses such as Noro and Papilloma, CutLER’s performance was subpar, while SAM struggled particularly with the smallest virus, Noro. In general, the performance of these methods heavily depends on the sample, noise levels, and preparation method. Our approaches, on the other hand, are more stable over all data sets, leading to the best results on all viruses except for the Adeno. In conclusion, our investigation revealed that existing weakly supervised methods faced challenges in effectively detecting viruses in EM. Despite incorporating supplementary information about virus size into all comparison approaches, their performance remained suboptimal. This limitation is likely attributed to the fact that contemporary state-of-the-art methods thrive on large data set sizes, which are not available for the detection of virus capsids in EM. 
Furthermore, these methods are typically crafted to excel in scenarios with more object-centric data sets, whereas EM images present a distinctive challenge by containing numerous object instances within a single frame. Moreover, the conventional methods are not inherently equipped to handle the low signal-to-noise ratio (SNR) characteristic of EM. The prevalence of low SNR introduces inherent ambiguity in object boundaries, a challenge that can be partially mitigated by incorporating the known virus size into the methods, thereby circumventing this ambiguity. Additionally, the presence of noise and low-contrast regions in EM images poses obstacles to extracting discriminative features crucial for precise object localization. This became evident in certain classifiers trained on negative stain TEM data, leading to a bias towards virus borders (see Figure 9). Our introduced optimization, involving a fixed size, contributes to a more robust localization, addressing these challenges. These findings underscore the pressing need for methodologies purposefully tailored to excel in the intricate task of virus detection in EM. The unique characteristics of EM data necessitate specialized approaches that can navigate the challenges posed by low SNR, small data set sizes and the abundance of object instances within a single image. Reduced Annotation Time. We further investigate the impact of reducing the annotation times. For this experiment, we choose the herpes virus as it has the largest amount of annotated images as well as bounding box annotations. According to our study, this data set requires an annotation time of 19 hours to annotate bounding boxes, 17 hours to annotate location labels and 11 hours to annotate binary labels. We use the time required to annotate all available images using our binary labels as the upper bound of 100% and reduce annotation times to 75%, 50%, 25%, 10%, and 5%. Table 1: Comparison of the different methods for the different viruses reporting mAP$_{50}$. | | Herpes | Adeno | Noro | Papilloma | Rota | |----------|----------|----------|----------|-----------|----------| | BB | 89.18 ±0.95 | - | - | - | - | | Loc | 88.13 ±0.38 | 26.24 ±19.93 | 00.82 ±0.34 | 27.20 ±16.44 | 06.51 ±4.66 | | Ours(Opt) | 86.98 ±1.92 | 47.85 ±11.82 | 54.65 ±4.94 | 70.02 ±2.85 | 71.73 ±3.51 | | Ours(OD) | 91.20 ±0.24 | 58.28 ±5.91 | 74.32 ±1.18 | 78.33 ±2.40 | 78.34 ±2.15 | | SAM | 41.34 ±4.60 | 44.62 ±3.90 | 08.80 ±3.92 | 73.23 ±7.02 | 66.71 ±4.33 | | CutLER | 64.95 ±1.98 | 68.49 ±5.44 | 10.72 ±5.63 | 23.02 ±6.73 | 75.5 ±1.44 | | GradCAM ResNet | 78.79 ±2.04 | 19.17 ±0.78 | 05.54 ±2.99 | 11.57 ±4.17 | 31.78 ±21.58 | | LayerCAM ResNet | 78.44 ±2.73 | 16.48 ±9.34 | 05.04 ±1.91 | 10.87 ±5.33 | 31.22 ±20.07 | | GradCAM vit | 61.87 ±11.87 | 08.00 ±2.12 | 19.31 ±13.64 | 04.03 ±4.52 | 13.12 ±7.37 | | LayerCAM vit | 68.33 ±6.59 | 09.18 ±5.64 | 10.82 ±11.78 | 17.41 ±11.33 | 09.74 ±2.42 | | TS – CAM | 32.06 ±1.02 | 39.25 ±4.13 | 14.64 ±4.66 | 07.11 ±3.85 | 43.53 ±3.93 | | Reattention | 68.85 ±0.62 | 58.49 ±2.22 | 55.09 ±8.92 | 35.60 ±13.01 | 59.05 ±11.40 | Figure 5: Comparison of a detector model using different annotation times. Figure 6: Comparison of a detector model using different data set sizes. Figure 5 presents the results of this experiment. We found that 1) Ours(OD) is able to outperform other types of labels for all the time budgets. 2) Ours(Opt) can provide better performance than all other methods, including Ours(OD), when the data set is small (annotation time less than 25%). 
Infinite Annotation Time. Moreover, we evaluated the performance of all four methods when an infinite time for annotation is possible, but the amount of data is limited. The results are presented in Figure 6. We here elaborate again on the results of the Herpes virus, as it has the largest amount of annotated images as well as bounding box annotations. However, we also include results on the additional virus data sets in the appendix (Table 6). It can be observed that Ours(OD) obtained similar or slightly better performance than Loc and BB when the data set size is large. Moreover, we can see that Ours(Opt), although not able to reach the performance of the other methods, is able to achieve competitive performance. However, for small data set sizes, we see that the supervised approaches start to outperform the weakly supervised approach. We believe that this has two reasons: First, the smaller data set sizes do not allow to train a classifier, with good localization abilities. Additionally, training the Faster-RCNN on a data set that is small and noisy leads to worse performance. However, please note that the benefit of annotating image-level labels vanishes as the absolute time for annotation is already small. 5 CONCLUSION In this paper, we proposed a novel approach for virus particle detection in EM data based on weak supervision. Our approach optimizes bounding box positions of virus particles by leveraging a pre-trained classifier, Gaussian masking and domain-specific knowledge. Furthermore, to improve the optimization, we initialize the Gaussian masks based on GradCAM hotspots. We compared the results obtained with our method to other weakly supervised approaches, as well as fully supervised ones, where we show that our method is able to outperform those for the same amount of annotation time. Moreover, we conducted a user study that shows that binary labels are easier to obtain and more robust against errors than other annotation methods. Thus, our approach shows promise for efficient and accurate particle detection in EM images, opening new avenues for practical applications in this field. In the future, we would like to analyze the applicability of our method to the localization of objects that vary in size. ACKNOWLEDGMENTS We would like to thank Jens von Einem (Institute of Virology, Ulm University Medical Center) for providing herpesvirus infected cells. This work was financed by the Baden-Württemberg Stiftung (BWS) for the ABEM project under grant METID12–ABEM. REPRODUCIBILITY STATEMENT The source code associated with the experiments conducted in this paper is publicly available on GitHub at the following link: https://github.com/HannahKniesel/WSCD. Instructions for replicating experiments presented in the paper will be provided in the repository. This includes information on command-line arguments, hyperparameters, and any additional configurations necessary to reproduce the reported results. REFERENCES Paul Ayres. Using subjective measures to detect variations of intrinsic cognitive load within problems. *Learning and instruction*, 16(5):389–400, 2006. Loris Bazzani, Alessandra Bergamo, Dragomir Anguelov, and Lorenzo Torresani. Self-taught object localization with deep networks. In *2016 IEEE winter conference on applications of computer vision (WACV)*, pp. 1–9. IEEE, 2016. Archith John Bency, Heesung Kwon, Hyungtae Lee, S Karthikeyan, and BS Manjunath. Weakly supervised localization using deep feature maps. 
In *Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I* 14, pp. 714–731. Springer, 2016. Hakan Bilen and Andrea Vedaldi. Weakly supervised deep detection networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2846–2854, 2016. Federico Bolelli, Stefano Allegretti, Lorenzo Baraldi, and Costantino Grana. Spaghetti labeling: Directed acyclic graphs for block-based connected components labeling. *IEEE Transactions on Image Processing*, 29:1999–2012, 2019. Gong Cheng, Junyu Yang, Decheng Gao, Lei Guo, and Junwei Han. High-quality proposals for weakly supervised object detection. *IEEE Transactions on Image Processing*, 29:5794–5804, 2020. Junsuk Choe and Hyunjung Shim. Attention-based dropout layer for weakly supervised object localization. *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019. K Shaga Devan, Paul Walther, Jens von Einem, Timo Ropinski, Hans A Kestler, and Clarissa Read. Detection of herpesvirus capsids in transmission electron microscopy images using transfer learning. *Histochemistry and cell biology*, 151:101–114, 2019. Ali Diba, Vivek Sharma, Ali Pazandeh, Hamed Pirsiavash, and Luc Van Gool. Weakly supervised cascaded convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 914–922, 2017. Bowen Dong, Zitong Huang, Yuelin Guo, Qilong Wang, Zhenxing Niu, and Wangmeng Zuo. Boosting weakly supervised object detection via learning bounding box adjusters. *IEEE/CVF International Conference on Computer Vision (ICCV)*, 2021. John Doorbar, Nagayasu Egawa, Heather Griffin, Christian Kranjec, and Isao Murakami. Human papillomavirus molecular biology and disease association. *Reviews in medical virology*, 25:2–23, 2015. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.
hdCDVSPQ7v
What is the update frequency used in Shampoo when comparing wall-clock time? Is it possible to use a second-order update interval for Shampoo such that it runs at a similar speed to Jorge and still achieves the target accuracy?
Jorge: Approximate Preconditioning for GPU-Efficient Second-Order Optimization Anonymous authors Paper under double-blind review Abstract Despite their better convergence properties compared to first-order optimizers, second-order optimizers for deep learning have been less popular due to their significant computational costs. The primary efficiency bottleneck in such optimizers is matrix inverse calculations in the preconditioning step, which are expensive to compute on GPUs. In this paper, we introduce Jorge, a second-order optimizer that promises the best of both worlds – rapid convergence benefits of second-order methods, and high computational efficiency typical of first-order methods. We address the primary computational bottleneck of computing matrix inverses by completely eliminating them using an approximation of the preconditioner computation. This makes Jorge extremely efficient on GPUs in terms of wall-clock time. Further, we describe an approach to determine Jorge’s hyperparameters directly from a well-tuned SGD baseline, thereby significantly minimizing tuning efforts. Our empirical evaluations demonstrate the distinct advantages of using Jorge, outperforming state-of-the-art optimizers such as SGD, AdamW, and Shampoo across multiple deep learning models, both in terms of sample efficiency and wall-clock time. 1 Introduction Stochastic optimization methods such as stochastic gradient descent (SGD) (Robbins & Monro [1951]) and Adam (Kingma & Ba [2015]) are the de-facto standard for optimizing the objective function in the training of deep neural networks. These first-order optimization methods are relatively inexpensive in terms of their compute and memory requirements, and hence extremely popular. Second-order optimization methods typically have better convergence properties (fewer epochs to reach target validation metrics) than those of first-order methods. However, they are considerably slower in terms of per-iteration (per-batch) wall-clock times for training than first-order methods. This is because they often use a preconditioner, which multiplies the gradient by a matrix before taking a step. Computing these preconditioners requires performing matrix inversions, which are highly inefficient on GPU platforms due to the iterative nature of matrix inverse algorithms and their irregular memory access patterns. If one could develop a second-order optimizer that has better convergence than first-order methods and is on par with them in terms of wall-clock time per iteration, we could achieve the best of both worlds. In this paper, we present Jorge, a new second-order optimizer that uses an approximation for preconditioning by avoiding the calculation of the inverse of matrices in all steps. It has similar convergence properties to other second-order optimization methods but its wall-clock time per iteration is similar to that of inexpensive first-order methods. This is a win-win situation, which leads to much faster total training times for several different deep learning models when compared to other state-of-the-art optimizers. A new optimization method is most useful and promising if users do not have to spend significant time in tuning its hyperparameters. We demonstrate the process of deriving reasonable hyperparameters for Jorge from a well-tuned SGD baseline with minimal effort. Interestingly, these derived hyperparameters match the generalization of SGD and even improve it in many cases! 
Note that we use SGD over other adaptive optimizers such as Adam because prior research has shown that SGD often outperforms adaptive methods in terms of generalization (Wilson et al. [2017]). In our experiments across different network architectures, we demonstrate that Jorge performs better than two widely adopted first-order optimizers, SGD and AdamW, both in terms of sample efficiency and overall wall-clock times for convergence. Additionally, we demonstrate comparable sample efficiency to Shampoo (Gupta et al., 2018), a state-of-the-art second-order optimizer, while achieving faster convergence times. This paper makes the following important contributions: - A new second-order optimizer that avoids matrix inverse calculations when computing the preconditioner, making it extremely efficient on GPUs. This results in per-iteration wall-clock times within 5-10% of those of first-order optimizers such as SGD and AdamW, while matching the sample efficiency of Shampoo, a second-order optimizer. For training ResNet-50 on ImageNet, we demonstrate improvements of nearly 25% in the total training wall-clock time over SGD. - We show that reasonable hyperparameter configurations for Jorge can be easily bootstrapped from those of a well-tuned SGD baseline without extensive hyperparameter tuning that would require full training runs. These settings result in either similar and in many cases, even better generalization than that of SGD! - Most second-order optimizers need to exploit complex parallelism requiring multiple GPUs to get their total training times to be faster than those of first-order optimizers. Since Jorge is highly efficient, it can be run locally on each GPU and still outperform highly optimized parallel implementations of second-order optimizers. 1.1 Related work There have been several research efforts to develop computationally tractable second-order optimizers for deep learning. Martens (2010) proposes Hessian-free optimization, which exploits conjugate gradient (CG) to directly compute Hessian-vector products without explicitly computing the Hessian. Since CG requires multiple iterations, there has been subsequent work on reducing this cost (Erdogdu & Montanari, 2015). Several optimizers based on the L-BFGS method have also been proposed that approximate Hessian-vector products from the history of past gradients, again without explicitly computing the Hessian (Berahas et al., 2016; Bollapragada et al., 2018; Wang et al., 2017). Most state-of-the-art second-order optimizers rely on block-diagonal approximations of the Hessian to reduce the computational and memory requirements. The “blocks” typically correspond to substructures in the neural network, like a layer or a parameter tensor. Some recent methods in this category include Shampoo (Gupta et al., 2018), K-FAC (Martens & Grosse, 2015; Grosse & Martens, 2016), K-BFGS (Goldfarb et al., 2020) and the GGT method (Agarwal et al., 2019). However, these methods need to compute the inverse of their approximate Hessian matrices, which can be expensive to compute even with the block-diagonal approximations. As we show later in Section 5, Jorge outperforms one such optimizer, Shampoo, by nearly 37% in terms of the total wall-clock time for training ResNet50 on ImageNet. Closely related to Jorge is a line of work that exploits the Sherman-Morrison based Matrix identity to approximate the update steps in K-FAC without computing any matrix inverses (Mozaffari et al., 2023; Zhang et al., 2023; Tang et al., 2021). 
To mitigate the large computational costs of matrix inverses, researchers have also proposed parallel implementations of second-order optimizers, which aim to distribute the work of the optimizer across multiple GPUs. Several efforts focus on developing efficient parallel implementations of the K-FAC optimizer (Pauloski et al., 2020, 2021; Osawa et al., 2019, 2020; Ueno et al., 2020; Shi et al., 2021). On the other hand, Shi et al. (2023) and Anil et al. (2021) aim to accelerate the Shampoo (Gupta et al., 2018) optimizer via parallelism. Anil et al. (2021) present a heterogeneous solution that offloads the computation of the inverses to the CPU. Even though we implement Jorge without any multi-GPU parallelism, we demonstrate that its performance is better than one of the state-of-the-art parallel optimizers – Distributed Shampoo (Shi et al., 2023). 2 Background Second-order optimizers make use of both the gradients and curvature (second derivatives) of the loss function. By considering the curvature, second-order methods can approximate the loss function more accurately than first-order optimizers, and thus reduce the number of iterations required for convergence. Most second-order optimizers approximate the Newton step shown in Equation (1): \[ \theta_t = \theta_{t-1} - H_t^{-1} G_t \] This equation can be derived by minimizing a second-order Taylor’s approximation of the loss function at \( \theta_t \). This step of multiplying the gradients with \( H_t^{-1} \) is called preconditioning, and \( H_t^{-1} \) is often referred to as a preconditioner. Instead of using the actual Hessian, optimizers typically use positive semi-definite approximations of the Hessian (Schraudolph, 2002; Amari, 1998) to account for the non-convexity of the training objective (Vinyals & Povey, 2012; Botev et al., 2017; Roux et al., 2007; Martens & Grosse, 2015; Desjardins et al., 2015). Our proposed optimizer, Jorge, belongs to a class of methods called “adaptive optimizers”, which use the inverse of the gradient covariance matrix (or the empirical Fisher matrix) to precondition gradients. Examples of adaptive second-order optimizers include the full matrix version of Adagrad (Duchi et al., 2011) and Shampoo (Gupta et al., 2018). Note that several first-order adaptive optimizers have also been proposed in literature, which only use the diagonal elements of the covariance matrix. Popular examples include Adam (Kingma & Ba, 2015) and RMSProp. Jastrzebski et al. (2018); Sagun et al. (2018); Zhu et al. (2019) provide justification for the usage of the gradient covariance matrix as an approximation of the Hessian. 3 APPROXIMATE PRECONDITIONING IN JORGE As described in Section 1.1, the primary efficiency bottleneck in state-of-the-art second-order optimizers such as K-FAC (Martens & Grosse, 2015) and Shampoo (Gupta et al., 2018) is the matrix inverse computations performed to calculate the preconditioners. To overcome this limitation, we introduce Jorge, an efficient, adaptive, second-order optimizer tailored for GPU execution. Jorge’s formulation eliminates computing explicit matrix inversions, and is solely comprised of matrix multiplications and additions, which are highly optimized on GPUs. This results in Jorge’s wall-clock time per iteration to be on par with those of first-order optimizers, while also having faster convergence properties typical of a second-order optimizer. We propose Jorge as an enhancement of Shampoo (Gupta et al., 2018), another adaptive second-order optimizer. 
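To make the idea of preconditioning concrete before walking through Shampoo and Jorge, the following is a minimal NumPy sketch (not from the paper; all names are illustrative) of a full-matrix Adagrad-style adaptive step, which preconditions the gradient with the inverse square root of the accumulated gradient covariance. The eigendecomposition it performs stands in for the GPU-unfriendly matrix root and inverse computations discussed above.

```python
import numpy as np

def full_matrix_adaptive_step(theta, g, C, lr=1e-2, eps=1e-8):
    """Illustrative full-matrix adaptive (Adagrad-style) update for a flat parameter vector.

    C accumulates the gradient outer products (an empirical curvature proxy);
    the gradient is then preconditioned by C^{-1/2}. The eigendecomposition
    below is representative of the expensive operations that second-order
    optimizers such as Shampoo perform, and that Jorge is designed to avoid.
    """
    C = C + np.outer(g, g)                            # accumulate g g^T
    w, V = np.linalg.eigh(C + eps * np.eye(len(g)))   # spectral decomposition (costly on GPUs)
    precond_g = V @ np.diag(w ** -0.5) @ V.T @ g      # C^{-1/2} g
    return theta - lr * precond_g, C
```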
We first describe Shampoo’s optimizer algorithm at a high level before describing Jorge’s optimizer algorithm. Note that, throughout this section, we discuss Shampoo, and by extension Jorge, within the context of a single layer. Application to multiple layers simply involves repeating the same steps for their parameters. Following Gupta et al. (2018), let us assume that the parameters, \( \theta \), of a single layer are organized in a two-dimensional (2D) \( m \times n \) matrix (in practice, N-dimensional parameter tensors, like those found in convolution layers, are typically collapsed into 2D matrices). Shampoo maintains the second-order curvature information of the loss in two matrices – \( L_t \) (size \( m \times m \)) and \( R_t \) (size \( n \times n \)), which are called the left and right preconditioners, respectively. It iteratively updates the preconditioners from the current gradient information as shown in the equation below (for the left preconditioner): \[ L_t = \beta_2 L_{t-1} + (1 - \beta_2) G_t G_t^T \quad (2) \] Algorithm 1 shows how the preconditioners are used in Shampoo. Additional terms used in the algorithm are defined as follows. \( \beta_1 \) and \( \beta_2 \) are smoothing parameters for the exponential moving average (EMA) of the momentum and preconditioners. \( \tilde{G}_t \) denotes the preconditioned gradients at timestep \( t \). \( m_t \) is the EMA of the preconditioned gradients, and \( \eta_t \) is the learning rate at timestep \( t \). Lines 4–5 of Algorithm 1 show how the Shampoo optimizer iteratively updates the left and right preconditioners from the current gradients’ information. Line 7 illustrates the preconditioning step, wherein the gradients are multiplied by $L_t^{-\frac{1}{4}}$ and $R_t^{-\frac{1}{4}}$ on the left and right, respectively. The preconditioning step produces the preconditioned gradients, $\tilde{G}_t$, which minimize the loss faster than the raw gradients. Finally, we update the momentum estimate of the preconditioned gradients (line 9), and then use the momentum to update the weights (line 10). The matrix inverse computation in the preconditioning step (line 7) is the primary efficiency bottleneck in Shampoo, and is exactly what we want to optimize in Jorge.
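For concreteness, here is a minimal NumPy sketch (names illustrative; momentum and the learning-rate step omitted) of the Shampoo preconditioner update and preconditioning just described; the formal listing follows in Algorithm 1. The explicit inverse fourth roots computed here are exactly the cost Jorge removes.

```python
import numpy as np

def inv_fourth_root(M, eps=1e-8):
    """M^{-1/4} via eigendecomposition -- the expensive operation in Shampoo."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag((w + eps) ** -0.25) @ V.T

def shampoo_precondition(L, R, G, beta2=0.99):
    """One left/right preconditioner EMA update and gradient preconditioning
    for an m x n gradient matrix G (see Algorithm 1)."""
    L = beta2 * L + (1.0 - beta2) * G @ G.T    # left preconditioner EMA
    R = beta2 * R + (1.0 - beta2) * G.T @ G    # right preconditioner EMA
    G_tilde = inv_fourth_root(L) @ G @ inv_fourth_root(R)
    return L, R, G_tilde
```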
**Algorithm 1** Shampoo

1: Initialize $\theta_0$, $L_0 = \epsilon I_m$, $R_0 = \epsilon I_n$
2: for $t = 1, \ldots, T$ do
3:   Update preconditioners:
4:     $L_t = \beta_2 L_{t-1} + (1 - \beta_2) G_t G_t^T$
5:     $R_t = \beta_2 R_{t-1} + (1 - \beta_2) G_t^T G_t$
6:   Precondition gradients:
7:     $\tilde{G}_t = L_t^{-\frac{1}{4}} G_t R_t^{-\frac{1}{4}}$
8:   Update weights:
9:     $m_t = \beta_1 m_{t-1} + (1 - \beta_1) \tilde{G}_t$
10:    $\theta_t = \theta_{t-1} - \eta_t m_t$
11: end for

**Algorithm 2** Jorge (compared to Shampoo)

1: Initialize $\theta_0$, $\hat{L}_0 = \epsilon^{-\frac{1}{4}} I_m$, $\hat{R}_0 = \epsilon^{-\frac{1}{4}} I_n$
2: for $t = 1, \ldots, T$ do
3:   Update preconditioners:
4:     $X_L = \hat{L}_{t-1}^{4} G_t G_t^T$
5:     $\hat{L}_t = \beta_2^{-\frac{1}{4}} \hat{L}_{t-1} \left( I_m - \frac{(1 - \beta_2)}{4\beta_2} X_L + \frac{5(1 - \beta_2)^2}{32\beta_2^2} X_L^2 \right)$
6:     $X_R = \hat{R}_{t-1}^{4} G_t^T G_t$
7:     $\hat{R}_t = (\beta_2')^{-\frac{1}{4}} \hat{R}_{t-1} \left( I_n - \frac{(1 - \beta_2')}{4\beta_2'} X_R + \frac{5(1 - \beta_2')^2}{32(\beta_2')^2} X_R^2 \right)$
8:   Precondition gradients:
9:     $\tilde{G}_t = \hat{L}_t G_t \hat{R}_t$
10:  Update weights:
11:    $m_t = \beta_1 m_{t-1} + (1 - \beta_1) \tilde{G}_t$
12:    $\theta_t = \theta_{t-1} - \eta_t m_t$
13: end for

In Algorithm 2, we show the functioning of Jorge side-by-side with Shampoo for the same 2D $m \times n$ parameter matrix of a single layer. The core idea behind Jorge is to approximate the computation of $L_t^{-\frac{1}{4}}$ and $R_t^{-\frac{1}{4}}$ in Shampoo (line 7 of Algorithm 1) in a GPU-efficient manner. In order to do this, we modify the computation in both lines 4–5 and line 7 of Algorithm 1. Just like Shampoo, Jorge also maintains two preconditioners, which we refer to as $\hat{L}_t$ and $\hat{R}_t$ in Algorithm 2. However, Jorge’s preconditioners are an approximation of the inverse fourth root of Shampoo’s preconditioners at every iteration, i.e., $\hat{L}_t \approx L_t^{-\frac{1}{4}}$ and $\hat{R}_t \approx R_t^{-\frac{1}{4}}$. We show the remaining steps for the left preconditioner approximation; the right preconditioner approximation can be derived similarly. Since $\hat{L}_t \approx L_t^{-\frac{1}{4}}$, we can say that $L_t \approx \hat{L}_t^{-4}$, and $L_{t-1} \approx \hat{L}_{t-1}^{-4}$. We substitute $L_t$ and $L_{t-1}$ on both sides of Equation 2, which gives us:

$$\hat{L}_t^{-4} = \beta_2 \hat{L}_{t-1}^{-4} + (1 - \beta_2) G_t G_t^T$$
$$\Rightarrow \hat{L}_t = \left( \beta_2 \hat{L}_{t-1}^{-4} + (1 - \beta_2) G_t G_t^T \right)^{-\frac{1}{4}}$$
$$= \beta_2^{-\frac{1}{4}} \hat{L}_{t-1} \left( I_m + \frac{(1 - \beta_2)}{\beta_2} \hat{L}_{t-1}^{4} G_t G_t^T \right)^{-\frac{1}{4}}$$
$$= \beta_2^{-\frac{1}{4}} \hat{L}_{t-1} \left( I_m + \frac{(1 - \beta_2)}{\beta_2} X_L \right)^{-\frac{1}{4}}, \quad \text{where } X_L := \hat{L}_{t-1}^{4} G_t G_t^T \text{ (line 4 of Algorithm 2).} \quad (4)$$

Next, we get rid of the inverse computation in Equation (4) by employing the binomial series expansion on the expression in parentheses. The binomial theorem for negative exponents suggests that for a square matrix \( A \in \mathbb{R}^{m \times m} \), provided \( \|A\| < 1 \) and \( p > 0 \), where \( \|\cdot\| \) is a valid matrix norm, the following is true: \[ (I_m + A)^{-p} = \sum_{r=0}^{\infty} (-1)^r \frac{p(p+1)(p+2)\cdots(p+r-1)}{r!} A^r \quad (5) \] Substituting \( A = \frac{(1-\beta_2)}{\beta_2} X_L \) and \( p = \frac{1}{4} \) in Equation (5) yields: \[ \left( I_m + \frac{(1-\beta_2)}{\beta_2} X_L \right)^{-\frac{1}{4}} = I_m - \frac{1}{4} \frac{(1-\beta_2)}{\beta_2} X_L + \frac{5}{32} \frac{(1-\beta_2)^2}{\beta_2^2} X_L^2 + \ldots \quad (6) \] Now, replacing the expression in parentheses in Equation (4) with its binomial series expansion in Equation (6), we remove the inverse calculation entirely as shown below: \[ \hat{L}_t = \beta_2^{-\frac{1}{4}} \hat{L}_{t-1} \left( I_m - \frac{1}{4} \frac{(1-\beta_2)}{\beta_2} X_L + \frac{5}{32} \frac{(1-\beta_2)^2}{\beta_2^2} X_L^2 + \ldots \right) \quad (7) \] Note that the binomial expansion is an infinite series and thus intractable. In practice, we have found that ignoring the cubic and higher powers of this expansion does not degrade the sample efficiency of Jorge in comparison to Shampoo (see Section 5). Hence we drop the higher-order terms in Equation (7), which gives us line 5 of Algorithm 2. Notice how our preconditioner update step is composed entirely of matrix-matrix multiplications and additions, which are highly efficient to compute on GPUs, thereby making Jorge more compute-efficient than other second-order optimizers. After updating the preconditioners, we precondition the gradients by multiplying them with \( \hat{L}_t \) and \( \hat{R}_t \) on the left and right (line 9). Unlike Shampoo, we do not have to invert our preconditioners because, by definition, they are an approximation of the inverse fourth roots of Shampoo’s preconditioners. Finally, the weight update step in lines 11 and 12 is identical to Shampoo. Note that Equation (5) is only valid for \( \|A\| < 1 \), and therefore for \( \left\| \frac{(1-\beta_2)}{\beta_2} X_L \right\| < 1 \). To ensure this, Jorge dynamically adjusts \( \beta_2 \) (and \( \beta_2' \) for the right preconditioner) in each iteration such that the above constraint is met. We discuss this in detail in Appendix A.1. To improve performance, most second-order optimizers, including K-FAC and Shampoo, typically compute their preconditioners at regular intervals, instead of every iteration. Following suit, we also allow infrequent preconditioner updates for Jorge, with the interval kept as a user-configurable hyperparameter. In the iterations where we do not update the preconditioners, we simply reuse the preconditioners from the previous iteration. As empirical evidence of the efficacy of our approximation, we measured the per-iteration times of Jorge, SGD, AdamW, and Shampoo for training ResNet-50 (He et al., 2016b) and DeepLabv3 (Chen et al., 2017), and found Jorge to be 21–26% faster than Shampoo, and within 10% of SGD (more details in Appendix A.2).
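As a sanity check on how the truncated expansion is used in practice, the sketch below implements Jorge's inverse-free preconditioner update and preconditioning for a single gradient matrix using only matrix products. A fixed $\beta_2$ is assumed for brevity (Jorge adjusts it dynamically to keep the series valid), and grafting, momentum, and infrequent updates are omitted; all names are illustrative.

```python
import numpy as np

def jorge_precondition(L_hat, R_hat, G, beta2=0.99):
    """Inverse-free Jorge-style update (see Algorithm 2) for an m x n gradient G.

    L_hat ~ L^{-1/4} and R_hat ~ R^{-1/4} are running approximations of the
    inverse fourth roots of Shampoo's preconditioners; only matrix products
    and additions are used -- no matrix inverses or roots.
    """
    m, n = G.shape
    c = (1.0 - beta2) / beta2

    X_L = np.linalg.matrix_power(L_hat, 4) @ G @ G.T
    L_hat = beta2 ** -0.25 * L_hat @ (
        np.eye(m) - 0.25 * c * X_L + (5.0 / 32.0) * (c ** 2) * (X_L @ X_L)
    )

    X_R = np.linalg.matrix_power(R_hat, 4) @ G.T @ G
    R_hat = beta2 ** -0.25 * R_hat @ (
        np.eye(n) - 0.25 * c * X_R + (5.0 / 32.0) * (c ** 2) * (X_R @ X_R)
    )

    G_tilde = L_hat @ G @ R_hat   # preconditioning without any inversion
    return L_hat, R_hat, G_tilde
```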
4 BOOTSTRAPPING JORGE’S HYPERPARAMETERS FROM SGD A new optimizer such as Jorge would be useful in practice only if it does not require rigorous hyperparameter tuning to achieve a desired level of generalization on a given training task. Arguably, an important reason behind the popularity of SGD is the existence of various heuristics for quickly deciding hyperparameter configurations that achieve decent generalization. In this section, we demonstrate Jorge’s ability to be an effective drop-in replacement for SGD.
We propose rules to deterministically bootstrap Jorge’s hyperparameters from those of a well-tuned SGD baseline. We call this process “single-shot tuning”. There are two implications of being able to single-shot tune Jorge’s hyperparameters from a well-tuned SGD. First, it eliminates the need to explore the expensive, combinatorial search space of Jorge’s hyperparameters. Second, the heuristics used to tune SGD’s hyperparameters can also be transferred to Jorge. Note that we focus on SGD over other adaptive optimizers such as Adam because prior research has demonstrated that SGD often outperforms adaptive methods in terms of generalization (Wilson et al., 2017; Zhuang et al., 2020; Keskar & Socher, 2017; Luo et al., 2019). Below, we propose some rules for transferring SGD’s hyperparameters to Jorge. **Learning Rate:** Agarwal et al. (2020) propose grafting, a technique for bootstrapping the learning rate and schedule of a new optimizer from another well-tuned optimizer. Grafting calculates the magnitude of the weight update by running a step of the well-tuned optimizer, and the direction of the weight update by running a step of the new optimizer. Using this approach, we employ grafting to directly use the learning rate of a well-tuned SGD baseline in Jorge. Integrating grafting in Jorge involves a small tweak to the weight update step in Algorithm 2 (lines 13-15), which we show in Appendix A.3. However, note that unlike Agarwal et al. (2020), we exploit grafting to adopt only the learning rate from SGD, but not the learning rate schedule (more details below). **Weight Decay Penalty:** For regularization, in Jorge, we implement the decoupled weight decay scheme proposed by Loshchilov & Hutter (2017a), as it has been shown to generalize better than L2 regularization for adaptive optimizers. We now explain how the weight decay penalty for Jorge, $\lambda_{\text{Jorge}}$, can be bootstrapped from SGD. Let $\beta_{\text{SGD}}$ and $\lambda_{\text{SGD}}$ be the momentum factor and the weight decay penalty, respectively, of a well-tuned SGD optimizer. We propose deterministically setting $\lambda_{\text{Jorge}}$ as follows: $$\lambda_{\text{Jorge}} = \frac{1}{1 - \beta_{\text{SGD}}} \lambda_{\text{SGD}}$$ (8) Using the almost universal value of 0.9 for $\beta_{\text{SGD}}$, we set Jorge’s weight decay to $10\times$ that of SGD for our experiments. While surprisingly simple, we have found this heuristic to work well across several benchmarks. In Appendix A.4, we describe the intuition behind Equation 8 in more detail. **Learning Rate Schedule** As per Agarwal et al. (2020), grafting should allow us to borrow not only the learning rate, but also the learning rate schedule of a well-tuned SGD baseline. However, we find that certain learning rate schedules are not suitable for Jorge. In Figure 1, we plot the progression of validation metrics for training ResNet-18 (He et al., 2016a) on CIFAR-10 (Krizhevsky et al.) (left plot) and DeepLabv3 (Chen et al., 2017) on MS COCO (Lin et al., 2015) (right plot). Note that using the default learning rate schedules of SGD, which are the cosine (Loshchilov & Hutter, 2017b) and polynomial rate schedules, respectively, leads to barely any improvements in sample efficiency over SGD. Interestingly, simply switching to the step decay schedule with 2 decay steps (reducing the learning rate by $10\times$ at each step) at one-third and two-thirds of the total training epochs (total epochs same as that of the tuned SGD baseline) resolves this issue. 
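Putting the rules above together, a single-shot bootstrapping of Jorge's hyperparameters from a tuned SGD run, plus the grafted step it relies on, might look like the sketch below (the function and key names are illustrative and do not correspond to a released Jorge API).

```python
import numpy as np

def bootstrap_jorge_from_sgd(sgd_lr, sgd_momentum, sgd_weight_decay, total_epochs):
    """Single-shot rules from Section 4: graft SGD's learning rate, scale the
    decoupled weight decay by 1/(1 - beta_SGD), and use a step decay schedule
    with 10x drops at 1/3 and 2/3 of the SGD epoch budget."""
    return {
        "lr": sgd_lr,  # step magnitude taken from SGD via grafting
        "weight_decay": sgd_weight_decay / (1.0 - sgd_momentum),
        "lr_milestones": [total_epochs // 3, 2 * total_epochs // 3],
        "lr_decay_factor": 0.1,
    }

def grafted_update(delta_sgd, delta_jorge, eps=1e-12):
    """Grafting (Agarwal et al., 2020): take the *magnitude* of the SGD step and
    the *direction* of the Jorge step."""
    return (np.linalg.norm(delta_sgd) / (np.linalg.norm(delta_jorge) + eps)) * delta_jorge

# Example: SGD with lr=0.1, momentum=0.9, weight decay=1e-4, and 90 epochs gives
# weight_decay=1e-3 (10x SGD) and learning-rate drops at epochs 30 and 60.
```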
With this schedule, we observe sample efficiency gains of nearly 1.4–1.8× over SGD. Therefore, across all training tasks, we opt for the step decay learning rate schedule with the aforementioned configuration. Interestingly, in certain scenarios, using the default learning rate schedule of a given well-tuned SGD baseline also leads to overfitting with Jorge. We discuss this in Appendix A.5.

Figure 1: Progression of validation metrics over training epochs for ResNet-18 on CIFAR-10 (left) and DeepLabv3 on MS-COCO (right) under different learning rate schedules.

**Preconditioner Update Frequency:** As mentioned in Section 3, Jorge has a user-configurable hyperparameter to control the frequency at which the preconditioners are updated. We suggest using a value for this hyperparameter that brings the iteration wall-clock times within 10% of SGD.

5 EXPERIMENTAL RESULTS In this section, we discuss the empirical experiments conducted to evaluate the efficacy of Jorge against other state-of-the-art optimizers used in deep learning. 5.1 SETUP: BENCHMARKS AND METRICS Table 1 lists the training benchmarks used in our experiments, all of which are sourced from the torchvision repository (maintainers & contributors, 2016). For each benchmark, we consider two types of training runs – one where we let a given optimizer train for the maximum number of epochs specified in the repository, and the other where we only train up to the validation metrics specified in Table 1. The former helps us measure the generalization of each optimizer, whereas the latter helps us measure the sample efficiencies and total wall-clock times for training. Mask-RCNN (He et al., 2017) and DeepLabv3 (Chen et al., 2017) use ResNet-50 as their backbone. We use SGD as our baseline and also compare with AdamW, Shampoo, and a recently proposed parallel implementation of Shampoo (Shi et al., 2023).

Table 1: List of benchmarks used to evaluate Jorge against other optimizers. The validation targets for the first two tasks are the same as those used in MLPerf. For the image segmentation task, it is the same as specified in the torchvision repository.

| Training Task | Neural Network | Dataset | Batch Size(s) | Target Validation Metric |
|------------------------|----------------|---------------|---------------|--------------------------|
| Image Classification | ResNet-50 | ImageNet | 256/1024 | 75.9% Accuracy |
| Object Detection | Mask-RCNN | MS-COCO 2017 | 32 | 37.7 Bbox mAP |
| Image Segmentation | DeepLabv3 | MS-COCO 2017 | 64 | 66.4 IoU |

**Choice of Hyperparameters:** For direct comparisons with SGD and AdamW, we use the default small batch sizes specified by torchvision, which are 256, 32, and 64, respectively, for ResNet-50, Mask-RCNN, and DeepLabv3. To the best of our knowledge, most evaluations of second-order optimizers have been conducted at batch sizes much larger than these values. Thus, to facilitate a direct comparison with Shampoo, we also ran the ResNet-50 benchmark with a larger batch size of 1024. By doing this, we could directly borrow the hyperparameters from Shi et al. (2023), who evaluated Shampoo in a similar setting. All the benchmarks from torchvision used in our experiments employ an SGD optimizer, pre-optimized with a well-calibrated set of hyperparameters. Accordingly, for our evaluations with SGD, we adhere to these pre-set values. For our proposed optimizer, Jorge, we adopt the single-shot hyperparameter configuration outlined in Section 4, which is derived directly from SGD’s parameters. We borrow AdamW hyperparameters for the ImageNet benchmarks from Heo et al. (2021). The complete list of all hyperparameters used in this study can be found in Appendix A.6.
Evaluation Metrics: In our evaluation of each benchmark, we record validation accuracy/IoU/mAP with respect to both number of epochs and wall-clock time. While the epoch-based measurements provide insights into the sample efficiencies of different optimizers, wall-clock time offers an understanding of their computational speed and efficiency on GPU platforms. Together, these metrics offer a comprehensive assessment of each optimizer’s practical efficacy.

5.2 COMPARATIVE EVALUATION Rapid convergence toward a target validation accuracy is not the only goal of an optimizer. The balance between quick initial convergence and eventual generalization can dictate an optimizer’s selection. For example, SGD remains the optimizer of choice in computer vision due to its better final validation accuracy, even though Adam converges faster initially. We evaluate Jorge’s peak validation accuracy against SGD and AdamW across benchmarks, and detail the results in Table 2. In these experiments, we let each optimizer train for the maximum number of epochs specified in the repository. Notably, for the ResNet-50 benchmarks, Jorge exceeds SGD’s best validation accuracy – 76.02% vs. 76.70% (large batch size), and 75.97% vs. 76.85% (small batch size). For the Mask-RCNN benchmark, Jorge’s mAP of 38.92 represents a notable improvement over SGD’s 38.30. It is worth highlighting that these results were achieved using the single-shot tuning strategy described in Section 4. Though DeepLabv3’s performance with Jorge is marginally worse than that with SGD, the difference is within SGD’s standard deviation, suggesting that small hyperparameter tweaks could bridge the gap. Notably, AdamW falls short of SGD’s generalization in three out of four benchmarks, but Jorge does better than SGD in three out of four benchmarks. This inconsistency in AdamW’s generalization capabilities due to overfitting has piqued considerable interest and has been a focal point in several prior studies (Wilson et al., 2017; Zhuang et al., 2020; Keskar & Socher, 2017; Luo et al., 2019).

Table 2: Maximum validation accuracy ($\mu \pm \sigma$) for SGD, AdamW, and Jorge across benchmarks.

| Neural Network | Batch Size | # Trials | # Epochs | SGD | AdamW | Jorge |
|----------------|------------|----------|----------|--------------|---------------|--------------|
| ResNet-50 | 1024 | 3 | 90 | 76.02±0.05 | 71.85±0.11 | **76.70±0.07** |
| ResNet-50 | 256 | 3 | 90 | 75.97±0.11 | 76.56±0.09 | **76.85±0.12** |
| DeepLabv3 | 64 | 5 | 30 | **67.19±0.16** | 66.26±0.20 | 67.12±0.12 |
| Mask-RCNN | 32 | 5 | 26 | 38.30±0.13 | 36.58±0.11 | **38.92±0.10** |

Next, we compare the sample efficiency of Jorge to other optimizers. In this case, we only train up to the target validation metrics specified in Table 1. Figure 2 (left) showcases the progression of validation accuracy over training epochs for ResNet-50 on ImageNet with the larger batch size of 1024. For other benchmarks, we depict this progression in Figure 3. It is evident that in the context of sample efficiency, Jorge outperforms the first-order optimizers we compare with – SGD and AdamW. Across both the small (256) and large (1024) batch size training scenarios for ResNet-50, Jorge outperforms SGD by requiring around 27% fewer iterations to reach the target validation accuracy of 75.9%. The improvements in sample efficiency over SGD across other benchmarks are markedly higher – 40% for DeepLabv3, and 41% for Mask-RCNN.
Again, we achieve these results by simply bootstrapping Jorge’s hyperparameters from SGD, only making the changes outlined in Section 4. The improvements in sample efficiency over AdamW are similar to those over SGD. Also, AdamW falls short of achieving the target validation metric in two out of four experiments.

Figure 2: Validation accuracy [$\mu \pm \sigma$] vs. epochs (left) and time (right) for the large batch size training (1024) of ResNet-50 on the ImageNet dataset (experiments run on 16 A100 GPUs).

As discussed in Section 3, we have designed Jorge to approximate Shampoo with a focus on GPU efficiency. Figure 2 (left) demonstrates that Jorge achieves the target validation accuracy in almost the same number of epochs as Shampoo (62 vs. 63). This observation strongly validates our approach and confirms that Jorge’s approximations do not degrade its statistical efficiency. Let us now turn our attention to an equally crucial metric: wall-clock time required for training. Figure 2 (right) demonstrates the progression of validation accuracy over time for the large batch size training of ResNet-50. We observe that Jorge achieves the target validation accuracy in 25% less time compared to SGD, which is a significant improvement. If we consider the serial implementation of Shampoo (pink line), it takes more total time to converge than SGD despite requiring 27% fewer epochs. This observation demonstrates the prowess of Jorge as a GPU-efficient adaptation of Shampoo: it converges significantly faster than Shampoo in wall-clock time (239 minutes vs. 325 minutes), despite requiring a similar number of epochs. As noted in Section 1.1, the prevailing approach for mitigating the large overhead of preconditioning has been to develop distributed implementations of these optimizers. Within this context, Figure 2 (right) also presents the wall-clock time of a state-of-the-art parallel implementation of Shampoo (yellow line) (Shi et al., 2023). Notably, even though Jorge executes locally on each GPU, it still manages to yield a 5% speedup over the parallel version of Shampoo.

Figure 3: Validation accuracy, IoU, and mAP [$\mu \pm \sigma$] vs. epochs for ResNet-50 on ImageNet (left, batch size of 256), DeepLabv3 on MS-COCO (center), and Mask-RCNN on MS-COCO (right).

While a 5% improvement might seem modest, its implications are more far-reaching. Oftentimes, AI practitioners do not have access to large numbers of GPU resources. In such resource-constrained settings, Jorge might be an ideal optimizer when parallelizing across GPUs is not an option. This also applies to environments with limited interconnect bandwidth. Finally, we focus on the small batch size benchmarks to evaluate how Jorge’s training wall-clock times compare with those of first-order optimizers. We present these results in Table 3. Once again, Jorge makes significant improvements in the total training wall-clock times. Compared to SGD, Jorge improves the time to convergence by 23%, 34%, and 45% for ResNet-50, DeepLabv3, and Mask-RCNN, respectively. The corresponding improvements over AdamW are even higher – 26%, 41%, and 58% (the last number is much higher since AdamW did not converge on that run).
The wall-clock time improvements in these experiments highlight Jorge’s applicability to small batch size training scenarios, where the overheads of a second-order optimizer cannot be masked behind network computation, making it more challenging for Jorge to beat first-order optimizers. Table 3: Comparison of the total training time (in minutes) of Jorge with SGD and AdamW for the small batch size benchmarks (experiments run on four A100 GPUs). | Neural Network | Batch Size | # Runs | SGD | AdamW | Jorge | |----------------|------------|--------|-----|-------|-------| | ResNet-50 | 256 | 3 | 1005±40 | 1052±36 | 781±44 | | DeepLabv3 | 64 | 5 | 217±12 | 244±16 | 144±30 | | Mask-RCNN | 32 | 5 | 332±47 | 438±14 | 182±11 | 6 CONCLUSION AND FUTURE WORK In this work, we introduced Jorge, an efficient, adaptive, second-order optimizer tailored to GPU platforms. We eliminated the primary computational bottleneck of computing matrix inverses in second-order optimizers by proposing a novel approximation of the preconditioner computation in Shampoo, which sidesteps the need to explicitly compute matrix inverses. Further, we proposed a single-shot hyperparameter tuning strategy, that can directly bootstrap Jorge’s hyperparameters from a well-tuned SGD baseline without the need to conduct extensive tuning. We evaluated Jorge against state-of-the-art first-order optimizers – SGD and AdamW, as well as Shampoo, and we demonstrated improvements in generalization, sample efficiencies, and training wall-clock times. As future work, we plan to develop a single-shot hyperparameter bootstrapping strategy from AdamW as well. This will allow us to employ Jorge to train large language models. Additionally, we plan to develop a distributed implementation of Jorge to reduce its per-GPU memory consumption, which currently stands at 1.5–2× that of Adam (see Appendix A.7). Reproducibility Statement: We are committed to enabling reproducibility of our work, as it ensures correct and transparent results. We plan to open source the code for Jorge as well as the benchmarks evaluated in this paper. Additionally, we provide a comprehensive list of all hyperparameters used in this study for each optimizer and each benchmark in Appendix A.6. The hyperparameters can be directly substituted as the arguments of SGD and AdamW shipped with PyTorch 2.0 in the “torch.optim” package. Similarly, the hyperparameters listed for Jorge will be compatible with our open source codebase. REFERENCES Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, and Yi Zhang. Efficient full-matrix adaptive regularization. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 102–110. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/agarwal19b.html Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, and Cyril Zhang. Disentangling adaptive gradient methods from learning rates, 2020. Shun-ichi Amari. Natural Gradient Works Efficiently in Learning. Neural Computation, 10(2):251–276, 02 1998. ISSN 0899-7667. doi: 10.1162/089976698300017746. URL https://doi.org/10.1162/089976698300017746 Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer. Scalable second order optimization for deep learning, 2021. Albert S. Berahas, Jorge Nocedal, and Martin Takáč. A multi-batch l-bfgs method for machine learning, 2016. 
Raghu Bollapragada, Dheevatsa Mudigere, Jorge Nocedal, Hao-Jun Michael Shi, and Ping Tak Peter Tang. A progressive batching l-bfgs method for machine learning, 2018. Aleksandar Botev, Hippolyt Ritter, and David Barber. Practical Gauss-Newton optimisation for deep learning. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 557–565. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/botev17a.html Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation, 2017. Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu. Natural neural networks. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper_files/paper/2015/file/2de5d16682c3c35007e4e92982f1a2ba-Paper.pdf John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61):2121–2159, 2011. URL http://jmlr.org/papers/v12/duchi11a.html Murat A. Erdogdu and Andrea Montanari. Convergence rates of sub-sampled newton methods, 2015. Donald Goldfarb, Yi Ren, and Achraf Bahamou. Practical quasi-newton methods for training deep neural networks. Advances in Neural Information Processing Systems, 33:2386–2396, 2020. Roger Grosse and James Martens. A kronecker-factored approximate fisher matrix for convolution layers, 2016.
r2ve0q6cIO
I am not sure I understand why one would want only the inference to be distributed if the training is not. More precisely, if the graphs are so large that inference needs to be distributed, how was the model even trained?
Graph Neural Networks Gone Hogwild Anonymous authors Paper under double-blind review Abstract Graph neural networks (GNNs) constitute a dominant class of architectures for modeling graph-structured data. Message-passing GNNs in particular appear to be ideal for applications where distributed inference is desired, since node updates can be performed locally. In this work, we are particularly motivated by the view that GNNs can be interpreted as parametric communication policies between agents which collectively solve a distributed optimization problem (e.g., in robotic swarms or sensor networks). For these applications, node synchrony and central control are undesirable, since they result in communication bottlenecks and reduce fault tolerance and scalability. We examine GNN inference under asynchrony, and find that most GNNs generate arbitrarily incorrect predictions in this regime. A notable exception is GNNs which cast message passing as a fixed point iteration with contractive update functions. We propose a novel GNN architecture, energy GNN, in which node embeddings are computed by minimizing a scalar-valued convex function which we call an ‘energy’ function. By framing message passing as convex optimization, we unlock a richer class of update functions which preserve robustness under asynchronous execution. We show that, empirically, we outperform other GNNs which are amenable to asynchronous execution on a multitude of tasks across both synthetic and real-world datasets. 1 Introduction Graph neural networks (GNNs) have gained prominence as a powerful framework for deep learning on graph-structured data, finding success in application domains like molecular chemistry (Duvenaud et al., 2015), social networks, and recommendation systems (Fan et al., 2019). GNNs use message passing within local graph neighborhoods to effectively produce a deep neural network architecture whose computational graph reflects the structure of the input graph. Neural network architectures exhibiting equivariance and/or invariance have been critical to the success of deep learning, and GNNs can be viewed as a way to generalize these concepts to graph-structured data. At first glance, the message passing framework appears to be a prime candidate for distributed and decentralized execution, which is desirable in a variety of contexts. Consider a group of agents (e.g., robots or “motes” in a sensor network) which need to collectively perform a task, but that might be unreliable, have limited range of communication, possess scarce local computational resources, and lack central control. GNNs are appealing as a way to learn local communication policies that solve the distributed problem, where each agent corresponds to a node in the graph and the edges correspond to local communication constraints. One could imagine learning an algorithm that enables a swarm of robots to localize themselves, or a collection of resource-constrained edge devices to collectively estimate environmental conditions. Another application where distributed computation is attractive is in GNN inference over large graphs, where nodes or node collections are managed on distinct machines (respecting the graph connectivity). This is especially relevant for GNN deployment on resource-constrained devices. Distributed inference may also facilitate privacy in settings where nodes correspond to entities such as individuals in a social network, by enabling local inference and precluding excessive data transmission. 
There is a significant defect in this analogy: distributed and decentralized computation is generally asynchronous, and existing GNN architectures implicitly assume synchronism at inference time. That is, the parameters of the GNN are trained such that the correct computation is performed provided there are synchronous rounds of message passing per layer. When nodes update at different times or messages are stale, the effective architecture diverges catastrophically from the training architecture; this means the output of the GNN can exhibit arbitrarily large errors. Figure 1 illustrates the issue on a chain graph. We observe that certain classes of GNNs, which might be termed “hogwild-able” GNNs (inspired by Recht et al. (2011)), are provably robust to asynchrony given lenient assumptions on staleness and per-node update frequency. For instance, implicit GNNs (Gu et al., 2020; Liu et al., 2021), which use a fixed point iteration to implement message passing, are ‘hogwild-able’. We introduce an alternative to achieve robustness to asynchrony: reframing message passing as an optimization procedure over a convex global graph function. We propose a novel hogwild-able GNN architecture, which we refer to as an energy GNN. In an energy GNN, node embeddings are computed as the minimum of a convex function which is implemented via input-convex neural networks defined on nodal neighborhoods. We loosely interpret the function being minimized as an ‘energy’, as an analogy to physical systems which naturally tend to minimize their energy to achieve stable configurations. We show that energy GNNs outperform other hogwild-able GNNs in a variety of synthetic tasks, particularly those which demand long-range communication between nodes or the use of edge features for compelling performance. We also achieve competitive performance on benchmark datasets, certifying the merit of our approach even as a stand-alone GNN architecture. Section 3 provides a brief review of message passing GNNs. In Section 4, we introduce our framework for asynchronous and distributed inference in GNNs. In Section 5, we present our architecture: energy GNNs. Section 6 presents experimental results, where we evaluate performance during inference using both synchronous and asynchronous forward execution. 2 RELATED WORK 2.1 DISTRIBUTED OPTIMIZATION AND ASYNCHRONOUS ALGORITHMS Asynchronous algorithmic models (sometimes called chaotic relaxation models) date back at least to the late 1960s (Chazan & Miranker, 1969), and were explored extensively into the 1970s and 1980s (Donnelly, 1971; Miellou, 1975; Robert et al., 1975; Baudet, 1978; Bertsekas, 1982; 1983; Bojanczyk, 1984; Mitra, 1987; Uresin & Dubois, 1989). In appendix A.1, we study GNNs under partial asynchronism, a particular model which imposes constraints on the sequencing of computations and the frequency of communication between distributed elements. Our analysis of partially asynchronous GNN inference draws directly from prior work which analyzes the sufficient conditions for convergence (Tsitsiklis, 1984; Tsitsiklis et al., 1986; Bertsekas & Tsitsiklis, 1989). Interest in distributed computing and optimization originated at a similar time, coincident with the unprecedented rate of progress in digital electronics and computer engineering (Borodin & Munro, 1973; Goldschlager, 1978; Hockney & Jesshope, 1981; Hwang, 1984; Quinn, 1987).
The project of scaling optimization and machine learning systems to extremely large datasets has sustained interest in decentralized, distributed computation (Bottou et al., 2018; Yang et al., 2019). Like energy GNNs, many of these problems can be formulated in the framework of convex optimization; in fact, distributed convex optimization is an area of interest in its own right (Boyd et al., 2011). 2.2 ASYNCHRONICITY IN GRAPH NEURAL NETWORKS There has been a plethora of work in the area of distributed training of GNNs, where the data is partitioned and computed on by separate workers (Besta & Hoefler, 2022; Shao et al., 2022). In this training regime, workers may not be operating on independent data, e.g., when data partitions come from a single connected graph. Since ignoring this dependence reduces application performance, workers exchange embedding information associated with “boundary vertices” which are logically connected but delegated to different workers. Some distributed training frameworks assume workers operate asynchronously, using stale embedding (Md et al., 2021; Peng et al., 2022; Wan et al., 2022) or gradient information (Thorpe et al., 2021) (corresponding to previous training epochs) from other workers. However, across all these frameworks, the forward pass is executed synchronously per layer, and to our knowledge there is no work examining asynchronous GNN inference. 2.3 IMPLICIT GNNs Like energy GNNs, implicit GNNs (Scarselli et al., 2009; Gu et al., 2020; Liu et al., 2021) define node embeddings implicitly as the solution to an iterative algorithm. Specifically, these architectures obtain node embeddings via a fixed point iteration on a contractive node embedding update function. Since the number of iterations is not predetermined, implicit GNNs are sometimes referred to as “infinite-depth” GNNs. Implicit GNNs which use contractive node updates are extremely well suited for partially asynchronous, decentralized, and distributed inference. This is because under reasonable assumptions (see appendix A.1), it follows from the contractive property of the node updates that the embeddings converge (Bertsekas, 1983; Bertsekas & Tsitsiklis, 1989). That said, existing implicit architectures use relatively simple updates which can easily be verified and enforced to be contractive. The original ‘nonlinear GNN’ proposed by Scarselli et al. (2009) is an exception, but this comes at a cost, as their method encourages rather than guarantees contraction. Their strategy is to formulate a multi-objective problem in which the norm of the Jacobian of the update function at the fixed point is also to be minimized. This heuristic can work in practice, but the sequence of iterates does not definitively converge, particularly if node embeddings are initialized far from the fixed point solution (as the norm of the Jacobian is only penalized at the fixed point). Another difficulty with implicit GNNs is that both the forward and backward passes of the network are usually implemented with iterative solvers. The number of iterations required to converge is not known in advance, and training is sensitive to hyper-parameter choices for the solvers. To alleviate this, EIGNN (Liu et al., 2021) derives a contractive fixed point update whose limit can be computed efficiently in closed form. This closed form solution requires global information and is not amenable to distributed inference.
Interestingly, we observe that at inference time, results from iterative execution differ significantly from those achieved by the closed form (see appendix A.10). 3 GRAPH NEURAL NETWORKS We provide a brief overview of GNNs, loosely following notation used in (Hamilton, 2020). Consider a directed graph with \( n \) vertices \( V = \{1, \ldots, n\} \), and edges \( E \subseteq V \times V \). The connectivity of the graph is contained in its adjacency matrix \( A \in \{0, 1\}^{n \times n} \), where \( A_{i,j} = 1 \) if there is an edge from node \( i \) to node \( j \), and 0 otherwise. The graph may also have associated node and edge features \( X \in \mathbb{R}^{n \times p} \) and \( E \in \mathbb{R}^{|E| \times q} \), so we use \( G = (A, X, E) \) to denote the graph (and its features). Two canonical prediction tasks are classification and regression at the graph or node level, in which we want a vector representation either of the graph or of each node which is useful in the task. We focus on node-level embeddings, since they often underly graph-level embeddings. We are given a dataset \( \mathcal{D} = \{(G_d, Y^d)\}_{d=1}^{|\mathcal{D}|} \), where each \( G_d \) is associated with a node-level prediction target \( Y^d \in \mathbb{R}^{n_d \times \ell} \), where \( n_d \) is the number of nodes in graph \( d \). In their most general form, GNNs define a parameterized embedding function \( f_\theta : G \rightarrow \mathbb{R}^{n \times k} \) which takes as input the graph data and parameters and returns a \( k \)-vector embedding \( h_i \) for each node. A readout function (often a linear transformation) \( o_\phi : \mathbb{R}^k \rightarrow \mathbb{R}^\ell \) is applied to each embedding, which results in node predictions \( \hat{Y} = (o_\phi(h_1), ..., o_\phi(h_n))^T \in \mathbb{R}^{n \times \ell} \). Given a task-specific loss \( L \), training of the GNN corresponds to the following optimization problem: \[ \theta, \phi = \arg \min_{\theta, \phi} \frac{1}{|\mathcal{D}|} \sum_{d=1}^{|\mathcal{D}|} L(\hat{Y}^d, Y^d). \] ### 3.1 Message passing GNNs Most GNNs use message passing in the embedding function \( f_\theta \). At each iteration of message passing, each node \( i \) receives messages \( m_{ij} \) from nodes \( j \) in its local neighborhood. Each \( m_{ij} \) is obtained by applying a function \( m \) to information pertaining to that neighbor relation (e.g., embeddings and node/edge features). The messages are aggregated into a single message \( m_i \) via a permutation-invariant aggregation function \( g \). Finally, an update function \( u \) uses a node’s aggregated message to update its embedding. We use \( \theta_m, \theta_g, \theta_u \) to denote the subsets of the parameters \( \theta \) used in each of \( m, g, u \), respectively. GNNs often consist of several iterations (or “layers”) of message passing; node embeddings are updated \( L \) times and each iteration may have distinct functional forms and parameters for \( m, g, \) and \( u \). A node \( i \)'s embedding at iteration \( \ell \in \{0, ..., L\} \) is denoted by \( h_i^\ell \in \mathbb{R}^{k(\ell)} \), and the final layer embeddings \( h_i^L \) are used as input to the readout function \( o_\phi \). 
In its most general form, the embedding update function at iteration \( \ell \), \( f_\theta^\ell \), can be written as: \[ m_{ij}^\ell := m^\ell(h_j^\ell, h_i^\ell, X_j, X_i, E_{ij}; \theta_m^\ell) \quad \forall i, j \in E \quad \text{(Create message on edge } i, j) \] \[ m_i^\ell := g^\ell(\{m_{ij}^\ell | j \in \text{ne}(i)\}, A_i; \theta_g^\ell) \quad \text{(Aggregate message on node } i) \] \[ h_i^{\ell+1} := u^\ell(m_i^\ell, h_i^\ell, X_i; \theta_u^\ell), \quad \text{(Update hidden state of node } i) \] where \( \text{ne}(i) \) denotes the neighbors of node \( i \) and \( E_{ij} \) are the edge features from \( j \) to \( i \). Many GNNs initialize the node embeddings at \( \ell = 0 \) to be equal to the node features (or some simple function of the node features), and use the entries of the adjacency matrix \( A \) (or some variant of the adjacency matrix) in \( g \). The particular form of \( m, g, \) and \( u \) varies across GNNs; below we describe several concrete examples which we reference throughout the paper. We use \( H^\ell \in \mathbb{R}^{n \times k(\ell)} \) to denote all of the node embeddings at iteration \( \ell \). #### Graph Convolutional Networks GCNs [Kipf & Welling, 2017] define \( g \) as a weighted sum of neighbor messages based on the entries of the symmetric normalized adjacency matrix with added self-loops, \( \tilde{A} = (D + I)^{-\frac{1}{2}}(A + I)(D + I)^{-\frac{1}{2}} \). With node embeddings initialized to be equal to the node features, \( f_\theta^\ell \) is defined as: \[ m_i^\ell := \sum_{j \in \text{ne}(i)} \tilde{A}_{ij} \theta_m^\ell h_j^\ell \] \[ h_i^{\ell+1} := \text{ReLU}(m_i^\ell), \] where \( \theta_m^\ell \in \mathbb{R}^{k(\ell) \times k(\ell)} \). This update can be succinctly described at the graph level as \( H^{\ell+1} = \text{ReLU}(\tilde{A}H^\ell W^\ell) \). Note that for finite depth message passing GNNs which have \( L \) layers, such as GCN, it is impossible to propagate information farther than \( L \) hops. #### Implicit Graph Neural Networks Implicit GNNs [Scarselli et al., 2009; Gu et al., 2020; Liu et al., 2021] are “infinite-depth” GNNs, where the number of iterations of message passing is not predetermined, and instead a single parameterized layer is repeated as many times as is required to numerically reach a fixed point. The IGNN architecture [Gu et al., 2020] uses a similar embedding update function as GCN, but adds node features as an additional input to \( u \). A layer is defined at the graph level as: \[ H^{\ell+1} := \phi(\tilde{A}H^\ell \theta_m + X \theta_u), \] where \( \theta_m, \theta_u \in \mathbb{R}^{k \times k} \) and \( \phi \) is a component-wise non-expansive function such as ReLU. Convergence is guaranteed by constraining \( ||\theta_m||_\infty < \lambda_{\text{max}}(\tilde{A})^{-1} \), where \( \lambda_{\text{max}} \) is the maximum eigenvalue of \( \tilde{A} \). This ensures that the update is contractive, a sufficient condition for convergence. 4 ASYNCHRONOUS AND DISTRIBUTED INFERENCE IN GNNs GNNs assume that layer updates are performed synchronously, as depicted in Figure 1, where each node embedding is updated using the previous layer embeddings. As discussed previously, we identify two flavors of problems in which asynchronous, distributed execution is desirable, requiring us to break the synchronicity assumption of GNNs. 
The first is using GNNs to parameterize communication protocols between simple computational agents which have limited computational resources and range of communication, and the second is distributed execution of GNNs on large graphs. The existence of relevant applications motivates an analysis of asynchronous, distributed GNN inference. To our knowledge, this inference regime has not been explored, so we first describe partially asynchronous algorithms, and then outline GNN inference under partial asynchrony. We focus on per-node inference for clarity, but in practice each node can correspond to a worker operating on a graph partition rather than a single node. Computational models for asynchronous algorithms vary depending on the constraints imposed on the sequencing or frequency of computation or communication. We consider partial asynchronism, which, informally, places bounds on two key characteristics: the time between updates across each node, and the amount by which data retained at any node can be out of date (from the perspective of some other node(s)). In this section, we present GNN inference under partial asynchronism. We give a brief but more precise overview of the algorithmic model in appendix A.1; see Bertsekas & Tsitsiklis (1989) for a thorough treatment. We write \( h = (h_1, h_2, \ldots, h_n) \in \mathbb{R}^d \) to denote a block vector containing the embedding data \( h_i \in \mathbb{R}^k \) associated with each of the \( n \) nodes. We define a collection of local or ‘node-specific’ update functions \( f_i : \mathbb{R}^d \mapsto \mathbb{R}^k \), which are essentially embedding updates \( f_\theta \) restricted to node neighborhoods. Without loss of generality, assume the update functions are continuously differentiable, so that this restriction can be stated: \[ j \notin \text{ne}(i) \implies \frac{\partial f_i}{\partial h_j}(z) = 0 \quad \forall z \in \mathbb{R}^d. \] (7) For inference under asynchronism, we aim to coordinate these nodes, so that in iterating the local updates \( f_i \) using local neighborhood data, the sequence of embeddings (across the graph) converges. We must reason about particular orderings of the local node updates, so we consider the embeddings as a function of time. Suppose we are given a set \( T^i \subseteq \{0, 1, 2, \ldots\} \) of times at which node \( i \) is updated. For each \( t \in T^i \) we are given variables \( \tau^i_j(t), \ i, j = 1, \ldots, n \). The latter satisfy \( 0 \leq \tau^i_j(t) \leq t \), and can be interpreted as the time \( \tau^i_j(t) \in T^j \) corresponding to node \( i \)'s view of node \( j \) at time \( t \). The quantities \( s_{ij}(t) = t - \tau^i_j(t) \in [0, t] \) can be interpreted as the amount (in time) by which information associated with node \( j \) is outdated or “stale” when used in the update of \( h_i \) at time \( t \). For simplicity, assume that the embedding dimension \( k \) is fixed, so embeddings for a node are compatible with whatever update iteration the node is at. Additionally, we assume that the number of iterations \( |T^i| \) executed by each node is fixed and equal to the number of layers \( L \). For correspondence with eqs. (2) to (4), we write the update for a single layer: \[ m_{ij}(t + 1) := m(h_j(\tau^i_j(t)), h_i(t), X_j, X_i, E_{ij}; \theta_m) \] (8) \[ m_i(t + 1) := g(\{m_{ij}(t + 1) \mid j \in \text{ne}(i)\}, A_i; \theta_g) \] (9) \[ h_i(t + 1) := u(m_i(t + 1), h_i(t), X_i; \theta_u), \] (10) for \( t \geq 0 \).
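As a concrete illustration of eqs. (8) to (10) for a contractive update, the sketch below runs the IGNN-style layer from Section 3.1 both synchronously and under simulated partial asynchronism, where at each event a single node recomputes its embedding from a (possibly stale) snapshot of the global state, mirroring the $\tau^i_j(t)$ bookkeeping above. The bounded-staleness mechanism and all names are illustrative of a simulation, not a distributed implementation; provided the update is contractive (e.g., $\|\theta_m\|_\infty < \lambda_{\max}(\tilde{A})^{-1}$), both executions reach the same fixed point up to numerical tolerance.

```python
import collections
import random
import numpy as np

def normalize_adjacency(A):
    """A_tilde = (D+I)^{-1/2} (A+I) (D+I)^{-1/2} for a dense 0/1 adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d[:, None] * d[None, :]

def ignn_step(A_tilde, H, X, theta_m, theta_u):
    """One IGNN-style update: ReLU(A_tilde H theta_m + X theta_u)."""
    return np.maximum(A_tilde @ H @ theta_m + X @ theta_u, 0.0)

def sync_inference(A_tilde, X, theta_m, theta_u, iters=200):
    """Synchronous fixed point iteration over all nodes at once."""
    H = np.zeros((X.shape[0], theta_m.shape[0]))
    for _ in range(iters):
        H = ignn_step(A_tilde, H, X, theta_m, theta_u)
    return H

def async_inference(A_tilde, X, theta_m, theta_u, events=5000, max_stale=5, seed=0):
    """Partially asynchronous execution: one node updates per event, using a
    randomly chosen recent (stale) snapshot of the global embeddings."""
    rng = random.Random(seed)
    n, k = X.shape[0], theta_m.shape[0]
    H = np.zeros((n, k))
    snapshots = collections.deque([H.copy()], maxlen=max_stale)   # bounded staleness
    for _ in range(events):
        i = rng.randrange(n)                                      # node i wakes up
        stale = snapshots[rng.randrange(len(snapshots))]          # its (stale) view
        H[i] = np.maximum(A_tilde[i] @ stale @ theta_m + X[i] @ theta_u, 0.0)
        snapshots.append(H.copy())
    return H
```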
Note the crucial difference introduced by partial asynchronism: in computing \( m_i \) (the message associated with node \( i \)), the neighbor data \( h_j(\tau^i_j(t)) \) may be out of date. As written, this update corresponds to node \( i \) executing a single layer. For simplicity, we do not write the update functions \( m, g, u \) or their parameters \( \theta_m, \theta_g, \theta_u \) indexed by layer, but this is straightforwardly generalized to the case of layer-specific parameters and functions described in eqs. (2) to (4). In our experiments, partially asynchronous execution is simulated; we give details of our implementation in appendix A.5. In the framework laid out here, implicit GNNs enjoy two advantages compared to finite depth GNNs. First, provided the fixed point iteration is contractive, the embeddings converge under partial asynchronism [Bertsekas, 1983]. In contrast, finite depth GNNs implement a specific feedforward neural network architecture, and partial asynchrony corrupts the computation performed by the network. We illustrate this in Figure 1 where partial asynchrony results in a (different) computation graph with some connections removed, and new connections that are not present in the original synchronous computation graph. In general, this means the final node embeddings may vary significantly depending on the particular node update sequence. Second, implicit GNNs can continue to iterate indefinitely, so by construction they are adaptive to dynamic inputs. Put another way, if node or edge features are not time-invariant, an iterating implicit GNN will eventually change its output in response to changes in the inputs. On the other hand, in finite depth GNNs each node is constrained to execute exactly $L$ updates, and there is no straightforward solution to the problem of coordinating another forward pass of the network. 5 ENERGY GRAPH NEURAL NETWORKS As discussed in the previous section, implicit GNNs which frame message-passing as a fixed point iteration are perfectly suited for decentralized, distributed, and asynchronous inference. However, existing implicit GNNs either (1) use simple update functions that can easily be enforced to be contractive [Gu et al., 2020; Liu et al., 2021], or (2) attempt to specify more flexible update functions which are encouraged (rather than guaranteed) to be contractive. An alternative strategy to constructing implicit GNNs is to replace the fixed point iteration with an optimization procedure. In other words, the GNN embedding updates can be viewed as iterations in an algorithm for solving an optimization problem. Previous work has explored the relationship between existing GNN updates and optimization objectives, and propose generalized forms of objectives which unify the optimization-oriented view of GNNs [Yang et al., 2021; Zhu et al., 2021]. As an example, we derive the optimization objective associated with EIGNNs [Liu et al., 2021] in Appendix A.9. However, there is no reason to limit the design space of GNN updates to correspond to the functional form of the objective proposed in previous work. We propose a novel implicit GNN architecture which we call energy GNNs, motivated by the optimization-oriented view of GNN updates. Energy GNNs compute node embeddings that minimize a parameterized, convex function, which we refer to as the ‘energy’ function. 
As we discuss later, this formulation enables robustness to distributed and partially asynchronous inference, like other implicit GNNs which use contractive fixed point node updates. By employing partially input-convex neural networks (PICNNs) in the architecture of the energy function, we open a rich, flexible class of convex graph objectives. 5.1 INPUT-CONVEX GRAPH NEURAL NETWORKS PICNNs [Amos et al., 2017] are scalar-valued neural networks that constrain the parameters in such a way that the network is convex with respect to a subset of the inputs. We use a partially input-convex GNN (PICGNN) as the energy function by extending PICNNs to operate on graph-structured data. A regular message passing GNN can be recast to be convex with respect to the node embeddings with two modifications. First, we use PICNNs for the functions $m$, $u$, and $o_\phi$, where the functions are convex with respect to the messages $m$ and node embeddings $H$, but not necessarily with respect to the features. Second, non-negative summation is used for the aggregation functions (which preserves convexity). We provide a more detailed description of the architecture of a general PICGNN in Appendix A.2. 5.2 ENERGY FORMULATION An energy GNN replaces the GNN embedding update function $f_\theta$ with an optimization procedure which minimizes an energy function $E_\theta$ with respect to the node embeddings. We use a PICGNN as $E_\theta$ and define $\theta = (\theta_m, \theta_u)$ to be the parameters of the energy function, and $(\theta, \phi)$ to be the parameters of the energy GNN, where $\phi$ parameterizes the output function $o_\phi$. In our experiments, $E_\theta$ is the sum of the node-level outputs $e_i \in \mathbb{R}$, which in turn are the sum of the layer output and a scaled squared $L^2$ norm of the node embedding: $$E_\theta(G, H) = \sum_{i=1}^{n} e_i \quad \text{where}$$ $$m_i = \sum_{j \in \text{ne}(i)} A_{i,j} m(h_j, h_i, X_j, X_i, E_{ij}; \theta_m)$$ $$e_i = u(m_i, h_i, X_i; \theta_u) + (\beta/2)\|h_i\|_2^2.$$ In the forward pass, node embeddings are obtained by minimizing $E_\theta$ with respect to $H$: $$H^* = \arg\min_H E_\theta(G, H).$$ (13) Node-level predictions are then obtained using a neural network output function $o_\phi$ which takes as input the energy-minimizing embeddings. We use gradient descent to solve for $H^*$, although in principle any convex optimization procedure can be used. More formally, we initialize $H^0 = 0$ and for iterations $t = 0, ..., T - 1$ perform the following update: $$H^{t+1} = H^t - \alpha \frac{\partial E_\theta}{\partial H}(H^t),$$ (14) where $\alpha > 0$ is the step size and the number of iterations $T$ is dictated by when the embeddings numerically reach a fixed point $H^*$; i.e., when $H^{t+1} \approx H^t$. The node-level view of Equation (14) makes it clear that, just like in regular message passing GNNs, updates are performed per node using information from directly connected neighbors: $$h_i^{t+1} = h_i^t - \alpha \sum_{j \in \text{ne}(i) \cup \{i\}} \frac{\partial e_j}{\partial h_i}\left(\{h_{j'}^t \mid j' \in \text{ne}(j) \cup \{j\}\}\right).$$ (15) We prove in Appendix A.4 that since $E_\theta$ is strongly convex and decomposes as a sum of per-node terms, this optimization procedure converges under partial asynchronism and can be executed in a distributed manner. We exploit convergence of the energy minimization process by using implicit differentiation to obtain gradients of the task-specific loss function $L$ with respect to the energy parameters $\theta$.
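As a concrete (unofficial) illustration of the forward pass in eqs. (13)–(14), the sketch below minimizes a toy energy by gradient descent until a numerical fixed point. The specific energy (softplus of an affine function of H plus the quadratic term), the dimensions, the step size, and the stopping tolerance are all assumptions standing in for the paper's PICGNN; convexity in H here follows from composing a convex nondecreasing function with an affine map and adding a strongly convex quadratic.

```python
import torch

torch.manual_seed(0)
n, k, d_x = 6, 4, 3
X = torch.randn(n, d_x)                         # node features
A = (torch.rand(n, n) < 0.4).float()
A = torch.triu(A, 1); A = A + A.t()             # symmetric adjacency, no self-loops

W_msg = 0.5 * torch.randn(k, k)                 # placeholder "message" weights
W_feat = torch.randn(d_x, k)
beta = 1.0

def energy(H):
    M = A @ H @ W_msg + X @ W_feat              # affine in H, so softplus(M) is convex in H
    e = torch.nn.functional.softplus(M).sum(dim=1)          # toy node energies e_i
    return (e + 0.5 * beta * (H ** 2).sum(dim=1)).sum()     # plus (beta/2)||h_i||^2

H = torch.zeros(n, k, requires_grad=True)       # H^0 = 0
alpha = 0.05
for t in range(1000):                           # eq. (14): gradient descent on E_theta
    E = energy(H)
    (grad,) = torch.autograd.grad(E, H)
    with torch.no_grad():
        step = alpha * grad
        H = (H - step).requires_grad_()
    if step.norm() < 1e-6:                      # numerical fixed point H* (eq. 13)
        break
print(t, float(energy(H)))
```

Gradient descent is used here only to mirror the iterative solver described above; any convex optimization routine would serve the same role.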
Using implicit differentiation avoids unrolling the iterations of the energy minimization procedure in the backward pass, and requires a fixed amount of computation and memory. We derive the gradient and provide additional details in Appendix A.3. ### 5.3 Partially Asynchronous Inference In order to examine energy GNN inference under partial asynchronism, we associate each node $i$ with a $k$-vector embedding $h_i \in \mathbb{R}^k$, and a collection of $m_i = |\text{ne}(i)|$ additional $k$-vectors $g_{ij} \in \mathbb{R}^k$, $j = 1, ..., m_i$, one for each of the $m_i$ neighbors of node $i$. The meaning of $g_{ij}$ is the derivative of $e_i$ with respect to the (potentially outdated) node embedding $h_j$ associated with node $j$. We collect all the data associated with node $i$ into a length $d_i = k(m_i + 1)$ block vector $x_i = (h_i, g_{i1}, ..., g_{im_i}) \in \mathbb{R}^{d_i}$. We then aggregate the data associated with all $n$ nodes into a block vector with $d = \sum_{i=1}^n d_i$ elements which we denote $x = (x_1, x_2, ..., x_n) \in \mathbb{R}^d$; this corresponds with the state variables $x$ defined in appendix A.1. The node data evolves according to update functions $f_i : \mathbb{R}^d \mapsto \mathbb{R}^{d_i}$, $i = 1, ..., n$. Each update function $f_i$ consists of (1) a gradient descent step on the energy function with respect to $h_i$ and (2) derivative estimates $\tilde{g}_{ij} = \frac{\partial e_i}{\partial h_j}(h_j(\tau_j^i(t)))$ given (potentially outdated) embeddings $h_j(\tau_j^i(t))$ associated with the node’s neighbors. ### 6 Experiments We perform experiments on a number of synthetic datasets, motivated by tasks which are of interest for multi-agent systems and where distributed, asynchronous inference is desirable. Specifically, we examine the ability of GNNs to capture long-range dependencies between nodes, perform size estimation and summation, and perform relative node localization. We compare performance of energy GNNs to IGNN [Gu et al., 2020] and GCN [Kipf & Welling, 2017]. IGNN is compared against because it is the main existing GNN architecture that we identify to be amenable to asynchronous inference. Two other architectures mentioned in section 2.3 that are excluded are EIGNN [Liu et al., 2021] and the GNN proposed by Scarselli et al. [2009]. The former is excluded because the fixed point is solved for directly in the forward pass rather than iteratively, which requires global information, and the latter is excluded because fixed point convergence is encouraged rather than guaranteed. GCN is chosen as a representative architecture from the class of finite depth GNNs, and we do not consider other architectures, since all GNNs from this class exhibit the same pathologies under asynchronous inference and are thus similarly unsuited to asynchronous execution on the synthetic tasks. Performance of GCN under synchronous and asynchronous inference is provided as a reference, as well as to further demonstrate the deviation in predictions under asynchrony. We use 2 layers of message passing for GCN, and for IGNN and energy GNN we use a single parameterized layer of message passing. For all synthetic experiments, we simulate asynchronous inference of trained models. For IGNN and energy GNN, we report results under this regime, since both achieve (within numerical error) the same performance under synchrony or asynchrony. For GCNs, we report results with both synchronous and asynchronous inference as they deviate significantly.
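A minimal sketch of the kind of simulated partially asynchronous execution used in these comparisons is given below (the actual simulation algorithm is in the paper's Appendix A.5). The contractive tanh update, the chain graph, the weight scaling, and the ring buffer of stale views are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 4
A = np.zeros((n, n))
for i in range(n - 1):                             # chain graph
    A[i, i + 1] = A[i + 1, i] = 1.0
X = rng.normal(size=(n, k))
W = 0.1 * rng.normal(size=(k, k))                  # small weights -> contractive update
U = rng.normal(size=(k, k))

def local_update(stale_nbr_sum, x_i):              # a toy contractive per-node update
    return np.tanh(stale_nbr_sum @ W + x_i @ U)

def async_inference(num_steps=400, max_stale=3):
    h = np.zeros((n, k))
    stale_view = np.zeros((max_stale + 1, n, k))   # ring buffer of past global snapshots
    for t in range(num_steps):
        i = int(rng.integers(n))                   # nodes update in random order
        s = int(rng.integers(max_stale + 1))       # bounded staleness
        nbr_sum = A[i] @ stale_view[(t - s) % (max_stale + 1)]
        h[i] = local_update(nbr_sum, X[i])
        stale_view[t % (max_stale + 1)] = h
    return h

h_async = async_inference()
h_sync = async_inference(max_stale=0)              # staleness 0: always read current values
print(np.abs(h_async - h_sync).max())              # should be small when the update is contractive
```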
Since the output from asynchronous execution of a GCN depends on the order of node updates, we report mean performance across 10 random orderings. The simulated asynchronous inference algorithm is in Appendix A.5. Training details for the synthetic experiments are provided in Appendix A.6. We additionally perform experiments on benchmark datasets (MUTAG [Srinivasan et al., 1996], PROTEINS [Borgwardt et al., 2005], PPI [Hamilton et al., 2017]) for node and graph classification to evaluate energy GNNs as a synchronous GNN architecture, and achieve competitive performance on each dataset. Details related to these experiments are provided in Appendix A.8. ### 6.1 Chains In the absence of a central controller (as is the case for distributed, asynchronous inference), the ability of a GNN to capture long-range dependencies between node embeddings depends entirely on local message passing. The chains dataset, used in [Gu et al., 2020; Liu et al., 2021], is meant to evaluate this ability. The dataset consists of \( p \) undirected linear graphs with \( l \) nodes, with each graph having a label \( k \in \{1, ..., p\} \). The task is node classification of the graph label, where class information is contained only in the feature of the first node in the chain; the node feature matrix \( X \in \mathbb{R}^{n \times p} \) for a graph with class \( k \) has \( X_{1,k} = 1 \) and zeros at all other indices. Perfect classification accuracy indicates that information is successfully propagated to the final node in the chain. Table 1 shows binary classification accuracy for chains of lengths \( l \in \{10, 20, 50, 100\} \). Both energy GNNs and IGNNs achieve perfect accuracy up to 50 nodes, with performance declining slightly at 100 nodes for IGNN. In Appendix A.7, we show plots of dataset loss convergence over the course of asynchronous inference, demonstrating convergence of energy GNN and IGNN predictions.

| MODEL | 10 NODES | 20 NODES | 50 NODES | 100 NODES |
|-------------|-------------|-------------|-------------|-------------|
| GCN (sync) | 65.0 ± 0.0 | 57.5 ± 0.0 | 53.0 ± 0.0 | 51.5 ± 0.0 |
| GCN (async) | 62.3 ± 2.9 | 56.9 ± 1.7 | 52.3 ± 0.2 | 50.9 ± 0.1 |
| IGNN | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 93.3 ± 1.1 |
| Energy GNN | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 |

Table 1: Node classification accuracy (%) for chains dataset, mean and standard deviation across 3 random parameter seeds. ### 6.2 Sums We construct a synthetic dataset meant to test the ability of a GNN to implement a simple distributed function: summation. We consider two regression experiments, node counting and node feature summation in undirected chain graphs. The dataset for node counting consists of graphs with different numbers of nodes, and no node features. For GCNs and IGNNs, which require node features as input, we use one-hot embeddings of node degrees. We consider two dataset sizes, with ranges of 1-10 and 1-50 nodes, respectively. The prediction target for each node is the total number of nodes in the graph. The dataset for node feature summation consists of graphs of the same size, with different instantiations of binary node features \( X_i \in \{0, 1\} \). We consider two datasets using 100 graphs with 10 and 50 nodes, respectively. The prediction target for each node is the sum of the graph node features. Table 2 shows that energy GNN achieves the best relative test RMSE for each dataset.
| MODEL | COUNT (10) | COUNT (50) | SUM (10) | SUM (50) |
|-------------|--------------|--------------|-------------|-------------|
| GCN (sync) | 61.8 ± 74.9 | 41.7 ± 7.0 | 24.3 ± 4.4 | 14.3 ± 2.7 |
| GCN (async) | 199.6 ± 55.9 | 108.9 ± 35.9 | 47.1 ± 12.2 | 14.8 ± 2.8 |
| IGNN | 3.7 ± 1.9 | 26.1 ± 3.6 | 14.7 ± 2.9 | 13.0 ± 2.1 |
| Energy GNN | 2.9 ± 1.4 | 11.5 ± 4.5 | 2.9 ± 1.2 | 9.9 ± 1.6 |

Table 2: Relative dataset RMSE (%) for counting and summing experiments for chain graphs, mean and standard deviation across 10 folds and 3 random parameter seeds. 6.3 Coordinates A common task for multi-agent collectives such as robot swarms is localization. This problem has previously been tackled in various ways that all employ a bespoke algorithm tailored for the task (Fodescato et al., 2016; Huang & Tian, 2017, 2018). We test the ability of off-the-shelf GNNs to solve this problem on static graphs. We construct a dataset where each node has a position in $\mathbb{R}^2$ and neighbors within some radius are connected by an edge. We do not assume a global coordinate system; instead, we focus on relative localization, where pairwise distances between nodes are maintained. Each node predicts a position in $\mathbb{R}^2$, and the objective is the mean squared error between true pairwise node distances and distances between their predicted positions. In order to break symmetries, each node has a unique ID which is one-hot encoded and used as the node feature. Distances to connected neighbors are provided as edge features. We consider two types of datasets: one using triangular lattice graphs, and the other using random graphs. In both cases, all graphs in the dataset are the same size (we use 10 and 20 node graphs). For the triangular lattice graph dataset, all graphs have the same structure but different permutations of node features, and node positions lie in the unit square. For the random graphs dataset, we sample points uniformly in the unit square and connect nodes by an edge if they are within a distance of 0.5. Each dataset has 500 graphs. Table 3 shows relative test RMSE for each dataset. GCNs and IGNNs, neither of which use edge features, perform reasonably well for lattice graphs, where edge features are uninformative since distances to neighbors are constant. For random graphs, the edge features are necessary for localization, so GCNs and IGNNs perform more poorly. Energy GNNs achieve the best performance for all datasets, but by a small margin, as localization is a difficult task.

| MODEL | LATTICE (10) | LATTICE (20) | RANDOM (10) | RANDOM (20) |
|-------------|--------------|--------------|-------------|-------------|
| GCN (sync) | $20.8 \pm 1.0$ | $27.4 \pm 0.3$ | $27.1 \pm 0.5$ | $30.2 \pm 0.5$ |
| GCN (async) | $287.3 \pm 74.7$ | $964.2 \pm 227.7$ | $360.5 \pm 113.6$ | $143.8 \pm 30.3$ |
| IGNN | $27.9 \pm 0.6$ | $28.3 \pm 0.5$ | $30.2 \pm 0.5$ | $33.7 \pm 0.6$ |
| Energy GNN | $20.0 \pm 1.0$ | $24.2 \pm 0.8$ | $22.7 \pm 2.1$ | $27.6 \pm 3.1$ |

7 Conclusion and Future Work We believe GNNs have the potential to provide learning frameworks for distributed systems, with applications to privacy, robotics, remote sensing, and other domains. However, as we have articulated, most conventional GNN architectures are not compatible with asynchronous inference and this hinders their deployment on these types of problems. We reiterate that this is a distinct problem from distributed training, which has different constraints but still assumes synchronism at inference time.
In this work, we identified some extant architectures which are robust to asynchrony, and presented a competitive, novel class in the form of energy GNNs. The guarantees of our method arise from framing inference as a convex optimization problem that is amenable to “hogwild” asynchronous techniques as in Recht et al. (2011). We evaluate the performance of energy GNNs on a number of synthetic tasks, motivated by the application of GNNs to decentralized, low-constrained multi-agent systems, where distributed, asynchronous inference is desirable. In these tasks, we achieve better performance than IGNN (Gu et al., 2020), another ‘hogwild-able’ GNN architecture. In addition to its robustness to asynchronism, our method is comparable in generalization performance (on benchmark datasets) with other modern GNN architectures that do not offer these guarantees. We hope the positive results of our synthetic experiments motivates additional work in applying ‘hogwild-able’ GNN architectures to multi-agent related tasks. Inference over large graphs is another glaring application for asynchronous GNNs which should be explored in future work. This will likely also require distributed training, which will have to be adapted to the particulars of the forward and backward pass of asynchronous GNNs. We additionally believe it is crucial to explore asynchronism in training; this will involve additional distributed computation to solve the adjoint problem and collectively compute derivatives in a decentralized way. Another line of work which we expect to be interesting is real-time inference of dynamic graphs, both because of the relevance to problems in, e.g., robotics, but also due to the “anytime” nature of the energy GNN architecture. We are optimistic that the energy GNN framework itself, and in general the optimization-based view of GNN message passing, will provide a path forward for “learning to learn” distributed algorithms. REFERENCES Brandon Amos, Lei Xu, and J. Zico Kolter. Input convex neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 146–155. PMLR, 2017. Gerard M Baude. Asynchronous iterative methods for multiprocessors. Journal of the ACM, 25(2):226–244, 1978. Dimitri P Bertsekas. Distributed dynamic programming. IEEE Transactions on Automatic Control, 27(3):610–616, 1982. Dimitri P Bertsekas. Distributed asynchronous computation of fixed points. Mathematical Programming, 27(1):107–120, 1983. Dimitri P Bertsekas and John N Tsitsiklis. Parallel and distributed computation: Numerical methods, 1989. Maciej Besta and Torsten Hoefler. Parallel and distributed graph neural networks: An in-depth concurrency analysis. arXiv preprint arXiv:2205.09702, 2022. Adam W. Bojanczyk. Optimal asynchronous newton method for the solution of nonlinear equations. Journal of the ACM, 31(4):792–803, 1984. Karsten M Borgwardt, Cheng Soon Ong, Stefan Schönauer, SVN Vishwanathan, Alex J Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21:i47–i56, 2005. Allan Borodin and Ian Munro. The computational complexity of algebraic and numeric problems. 1975. Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223–311, 2018. Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine learning, 3(1):1–122, 2011. 
Daniel Chazan and Willard Miranker. Chaotic relaxation. Linear algebra and its applications, 2(2):199–222, 1969. JDP Donnelly. Periodic chaotic relaxation. Linear Algebra and its Applications, 4(2):117–128, 1971. David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. volume 28, 2015. Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. Graph neural networks for social recommendation. In The World Wide Web Conference. WWW ’19, pp. 417–426, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366748. Leslie M Goldschlager. A unified approach to models of synchronous parallel machines. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, pp. 89–94, 1978. Fangda Gu, Heng Chang, Wenwu Zhu, Somayeh Sojoudi, and Laurent El Ghaoui. Implicit graph neural networks. In Advances in Neural Information Processing Systems, volume 33, pp. 11984–11995, 2020. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30, 2017. William L. Hamilton. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14(3):1–159, 2020.
kE9bsfMgin
Given that there is little or no correlation between intrinsic bias measures and the bias observed in downstream tasks, how do you think the analysis of bias in attention heads is useful for downstream tasks?
ABSTRACT Transformer-based pretrained large language models (PLM) such as BERT and GPT have achieved remarkable success in NLP tasks. However, PLMs are prone to encoding stereotypical biases. Although a burgeoning literature has emerged on stereotypical bias mitigation in PLMs, such as work on debiasing gender and racial stereotyping, how such biases manifest and behave internally within PLMs remains largely unknown. Understanding the internal stereotyping mechanisms may allow better assessment of model fairness and guide the development of effective mitigation strategies. In this work, we focus on attention heads, a major component of the Transformer architecture, and propose a bias analysis framework to explore and identify a small set of biased heads that are found to contribute to a PLM’s stereotypical bias. We conduct extensive experiments to validate the existence of these biased heads and to better understand how they behave. We investigate gender and racial bias in the English language in two types of Transformer-based PLMs: the encoder-based BERT model and the decoder-based autoregressive GPT model. Overall, the results shed light on understanding the bias behavior in pretrained language models. 1 INTRODUCTION Transformer-based pretrained language models such as BERT (Devlin et al., 2018), GPT-2 (Radford et al., 2019), and large foundation models such GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), and LLaMA (Touvron et al., 2023) have achieved superior performance in many natural language processing (NLP) tasks (Adlakha et al., 2023; Gao et al., 2023; Li et al., 2023; Wei et al., 2023; Yao et al., 2023). However, since PLMs and foundation models are trained on large human-written corpora, they often encode undesired stereotypes towards different social groups, such as gender, race, or people with disabilities (Bender et al., 2021; Blodgett et al., 2020; Hutchinson et al., 2020). For example, GPT-2 has been shown to generate stereotypical text when prompted with context containing certain races such as African-American (Sheng et al., 2019). A stereotype is an over-simplified belief about a particular group of people, e.g., “women are emotional.” Stereotyping can cause representational harms (Blodgett et al., 2020; Barocas et al., 2017) because it can lead to discrimination, prejudice, and unfair treatment of individuals based on their membership in a particular group (Fiske, 1998). In order to design robust and accountable NLP systems, a rich and growing body of literature has investigated the stereotypes in PLMs from two perspectives. The first line of work aims to quantify the stereotypical biases. For example, May et al. (2019) propose a Sentence Encoder Association Test (SEAT), and Nadeem et al. (2021) develop the StereoSet dataset to assess if a PLM encodes stereotypes. The second line of work aims to propose de-biasing strategies that remove undesired stereotypical association biases from PLMs (Zhou et al., 2023; Guo et al., 2022; He et al., 2022; Kaneko & Bollegala, 2021). Similarly, foundations model also needs to be further aligned to alleviate its bias concern, using techniques such as Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022). However, there are still gaps in understanding stereotypical biases in transformer-based language models. For bias assessment, while the common practice uses one score to quantify the model bias, it is unclear how the bias manifests internally in a language model. 
For bias mitigation, existing works are usually designed in an end-to-end fashion with a “bias neutralization” objective, but the inner-workings of the entire debiasing procedure remain a black-box. There is a need for in-depth analysis that uncovers how biases are encoded inside language models. In this work, we propose a framework to analyze stereotypical bias in a principled manner. Our main research question is, how does bias manifest and behave internally in a language model? Prior work in better understanding the internal mechanisms of deep neural networks has focused on specific model components. For example, we take inspiration from the seminal work of finding a single LSTM unit which performs sentiment analysis (Radford et al., 2017) and attributing types of transformer attention heads as “induction heads” that do in-context learning (Olsson et al., 2022). In this work, we focus on attention heads in pretrained language models. Attention heads are important because they enable transformer-based models to capture relationships between words, such as syntactic, semantic, and contextual relationships (Clark et al., 2019). Our proposed framework begins by measuring the bias score of each Transformer self-attention head with respect to a type of stereotype. This is done by deriving a scalar for each attention head, obtained by applying a gradient-based head importance detection method on a bias evaluation metric, i.e., the Sentence Encoder Association Test (SEAT, May et al., 2019). Heads associated with higher bias scores are dubbed biased heads, and are the heads upon which we then conduct in-depth analyses. In our analysis, we start by investigating how gender biases are encoded in the attention heads of BERT. We visualize the positions of biased heads and how they are distributed across different layers. To further verify that the identified biased heads indeed encode stereotypes, we conduct a counter-stereotype analysis by comparing the attention score changes between the biased heads and normal (non-biased) heads. Specifically, given a sentence containing a gender stereotype such as “women are emotional,” we obtain its counter-stereotype “men are emotional.” We then calculate the attention score change for the stereotypical word “emotion.” Since the only difference between the original sentence and its counter-stereotype sentence is the gender-related word, we would expect significant score changes for those heads that encode biases, and minimal changes for those heads that do not encode biases. Our analysis on a large external corpus verifies that the attention score change of identified biased heads are statistically and significantly greater than that of the normal heads. Later in the paper, we extend the analysis to investigate bias in the GPT model, as well as racial stereotype associated with Caucasians and African Americans. Moreover, we show that a simple debiasing strategy that specifically targets a small set of biased heads (by masking), which is different from previous end-to-end bias mitigation approaches that tune the entire PLM, yields a lower model bias performance with minimal disruption to language modeling performance. In summary, this work makes two important contributions. First, we open the black-box of PLM biases, and identify biased heads using a gradient-based bias estimation method and visualizations, shedding light on the internal behaviors of bias in large PLMs. 
The proposed framework also contributes to the literature on understanding how PLMs work in general (Rogers et al., 2020). Second, we propose a novel counter-stereotype analysis to systematically study the stereotyping behavior of attention heads. As a resource to the research community and to spur future work, we will open-source the code used in this study. 2 BACKGROUND 2.1 Multi-head self-attention Multi-head self-attention in Transformers is the fundamental building block for language models (Vaswani et al., 2017). In short, the self-attention mechanism allows a token to attend to all the tokens in the context, including itself. Formally, \( \text{head}_{i,j} \) denotes the output of attention head \( j \) in layer \( i \), i.e., \( \text{head}_{i,j} = \text{Attention}(Q_{i,j}, K_{i,j}, V_{i,j}) \), where \( Q_{i,j}, K_{i,j}, \) and \( V_{i,j} \) are learnable weight matrices. A language model usually contains multiple layers of Transformer block and each layer consists multiple self-attention heads. For example, BERT-base contains \( L = 12 \) layers of Transformers block, and each layer consists of \( H = 12 \) self-attention heads. --- 1 Throughout the paper, we use the term bias to refer to stereotypical bias. 2 In this paper, we use <layer>–<head number> to denote a particular attention head, and both the layer index and head index start with 1. For example, the 12-th head in the 9-th layer in BERT-base model is denoted as 9-12. The attention outputs are concatenated and then combined with a final weight matrix by extending the self-attention to multi-headed attention: \[ \text{MultiHead}_i(X_{i-1}) = \text{Concat}_{j=1...H} (\text{head}_{i,j}) W^O, \] where \(W^O\) serves as a “fusion” matrix to further project the concatenated version to the final output, and \(X_{i-1}\) is the output from the previous layer. ### 2.2 Stereotyping and Representational Harms in PLMs A growing body of work exploring AI fairness in general, and bias in NLP systems in particular, has highlighted stereotyping embedded in state-of-the-art large language models – that is, such models represent some social groups disparately on demographic subsets, including gender, race, and age ([Bender et al., 2021](#), [Shah et al., 2020](#), [Guo & Caliskan, 2021](#), [Hutchinson et al., 2020](#), [Kurita et al., 2019](#), [May et al., 2019](#), [Tan & Celis, 2019](#), [Wolfe & Caliskan, 2021](#), [Rozado, 2023](#)). According to the survey of [Blodgett et al., 2020](#), a majority of NLP papers on bias study representational harms, especially stereotyping. Our work is in line with the branch of research on exploring stereotypical bias in Transformer-based PLMs. Prior work proposes several ways of assessing the stereotyping encoded in a PLM. A commonly used metric is the Sentence Encoder Association Test (SEAT) score, which is an extension of the Word Embedding Association Test (WEAT, [Caliskan et al., 2017](#)), which examines the associations in contextualized word embeddings between concepts captured in the Implicit Association Test ([Greenwald et al., 1998](#)). While the SEAT score provides a quantifiable score to evaluate the stereotyping in PLMs, it is unknown how such stereotypical associations manifest in PLMs. 
To mitigate stereotyping and representational harms in PLMs, many different debiasing strategies have been proposed, including data augmentation ([Garimella et al., 2021](#)), post-hoc operations ([Cheng et al., 2021](#), [Liang et al., 2020](#)), fine-tuning the model ([Kaneko & Bollegala, 2021](#), [Lauscher et al., 2021](#)), prompting techniques ([Guo et al., 2022](#)), and Reinforcement Learning from Human Feedback (RLHF) ([Ouyang et al., 2022](#)). However, recent literature has noted several critical weaknesses of existing bias mitigation approaches, including the effectiveness of bias mitigation ([Gonen & Goldberg, 2019](#), [Meade et al., 2022](#)), high training cost ([Kaneko & Bollegala, 2021](#), [Lauscher et al., 2021](#)), poor generalizability ([Garimella et al., 2021](#)), and the inevitable degradation of language modeling capability ([He et al., 2022](#), [Meade et al., 2022](#)). We believe that progress in addressing PLM bias has been inhibited by a lack of deeper understanding of how the bias manifests/behave internally in the PLM. This paper aims to offer a perspective on this research gap. ### 3 Attention Head Bias Estimation Framework Our proposed framework for attention head bias estimation measures the bias score of Transformer self-attention heads with respect to a focal/concerning bias (e.g., gender). We first introduce a new variable, the head mask variable, that exists independently in each attention head. We then discuss how this variable can be utilized to quantify the bias in each attention head. #### 3.1 Head Mask Variable [Michel et al., 2019](#) propose a network pruning method that examines the importance of each self-attention head in a Transformer model. Given our interest in measuring the importance of each self-attention head with respect to a concerning bias, for each attention layer \(i\) comprised of \(H\) attention heads, we introduce a variable \(m_i = [m_{i,1}, m_{i,2}, \ldots, m_{i,H}]^T\) called the head mask variable that is multiplied element-wise with the output from each attention head in the \(i\)th layer. This allows us to understand (and control) the contribution of each attention head to the model’s final output: \[ \text{MultiHead}_i(X_{i-1}) = \text{Concat}_{j=1,...,H} (m_{i,j} \cdot \text{head}_{i,j}) W^O, \] where \(m_{i,j}\) is a scalar initialized with 1 in our implementations. In Equation 2, if \(m_{i,j} = 0\), it signifies that the attention head \(i-j\) is completely masked out from the language model, that is, it contributes nothing to the model’s final output. On the contrary, if \( m_{i,j} = 1 \), it is degenerated into its standard multi-head attention form as shown in Equation 1. ### 3.2 Estimating Bias for Each Attention Head Next, we show how this head mask variable can be utilized to quantify biases for each attention head. Formally, let \( X \) and \( Y \) be two sets of target words of equal size, and let \( A \) and \( B \) be two sets of attribute words. Here, target words are those that should be bias-neutral but may reflect human-like stereotypes. For example, in the context of gender bias, target words include occupation-related words such as *doctor* and stereotyping-related words such as *emotional*, and attribute words represent feminine words (e.g., *she, her, woman*) and masculine words (e.g., *he, his, man*). We assume \( X \) is stereotyped with \( A \) (e.g., stereotype related to female) and \( Y \) is stereotyped with \( B \) (e.g., stereotype related to male). 
Since we aim to measure how much stereotypical association is encoded in each of the attention heads, we directly use the absolute value of the Sentence Encoder Association Test score as the objective function, as follows: \[ L_{|SEAT|}(X, Y, A, B) = \frac{|\text{mean}_{x \in X}s(x, A, B) - \text{mean}_{y \in Y}s(y, A, B)|}{\text{std.dev}_{w \in X \cup Y}s(w, A, B)}, \] where \( s(w, A, B) = \text{mean}_{a \in A}\cos(\vec{w}, \vec{a}) - \text{mean}_{b \in B}\cos(\vec{w}, \vec{b}) \) and \( \cos(\vec{a}, \vec{b}) \) denotes the cosine of the angle between contextualized embeddings \( \vec{a} \) and \( \vec{b} \). Therefore, the bias score of each attention head can be computed as: \[ b_{i,j} = \frac{\partial L_{|SEAT|}}{\partial m_{i,j}}, \] where a larger \( b_{i,j} \) indicates head \( i-j \) is encoded with higher stereotypical bias. Using the absolute value of the SEAT score as the objective function allows us to back-propagate the loss to each of the attention heads in different layers and quantify their “bias contribution.” Therefore, if the bias score of an attention head is positive, it means that a decrease in the mask score from 1 to 0 (i.e., excluding this attention head) would decrease the magnitude of bias as measured by SEAT. In other words, the head is causing the SEAT score to deviate from zero and intensify the stereotyping (intensify either female-related stereotyping or male-related stereotyping or both). In contrast, an attention head with a negative bias score indicates that removing the head increases the model’s stereotypical association. Therefore, we define biased heads as those having positive bias scores, and the magnitude of the bias score indicates the level of encoded stereotypes. Our proposed attention head bias estimation procedure has several advantages. First, the procedure is model-agnostic. The objective function (i.e., \( L_{|SEAT|} \)) can be easily customized/replaced to serve different purposes, providing flexibility for more general or specific bias analyses including different types of biases, datasets, and PLM model architectures. Second, it consists of only one forward pass (to compute \( L_{|SEAT|} \)) and one backpropagation process (to compute \( b_{i,j} \)). Thus, it is computationally efficient for increasingly large foundation models. Third and critically, the bias score can quantify the importance of each attention head for the concerning bias. We later empirically evaluate the proposed bias estimation procedure, enhancing our understanding of stereotypes in PLMs. ### 4 Experimental Setup **Gender and Racial Bias Word Lists:** Our analysis focuses on studying gender bias and racial bias, which are two of the most commonly examined stereotypes in PLMs. For gender bias, we employ attribute and target word lists used in prior literature (Zhao et al., 2018; Kaneko & Bollegala, 2019). In total, the gender attribute word list contains 444 unique words (222 pairs of feminine-masculine words), and the target list contains 84 gender-related stereotypical words. --- 3 We use the outputs from the final layer of the model as embeddings. Each word in the attribute sets is a static embedding obtained by aggregating the contextualized embeddings in different contexts via averaging, which has been shown to be an effective strategy (Kaneko & Bollegala, 2021). Figure 1: Bias score distributions for BERT-base gender (1a), GPT-2 gender (1b), and BERT-base race (1c).
For racial bias, we examine the stereotypical association between Caucasian/African American terms and stereotypical words. Specifically, we use the attribute word list and the target word list proposed in prior work (Manzini et al., 2019). The racial attribute word list contains 6 unique words (3 pairs of African-American vs. Caucasian words), and the target list contains 10 racial related stereotypical words. External Corpus for Bias Estimation: We use the News-commentary-v15 corpus to obtain contextualized word embeddings for PLMs and identify biased heads using the bias estimation method (Sec. 3.2). News-commentary-v15 corpus has often been used in prior PLM bias assessment and debiasing work (Masahiro & Bollegala, 2019; Liang et al., 2020). PLMs: We study the encoder-based BERT model and the decoder-based GPT model. For the BERT model, we consider BERT-base, which is comprised of 12 Transformer layers with 12 heads in each layer. For the GPT model, we consider GPT-2Small (Radford et al., 2019), which also consists of 12 Transformer layers with 12 attention heads in each layer. We implemented the framework and conducted experiments on an Nvidia RTX 3090 GPU using PyTorch 1.9. PLMs were implemented using the transformers library. 5 Assessing Gender Bias in BERT and GPT Prior literature has shown that PLMs like BERT and GPT exhibit human-like biases by expressing a strong preference for male pronouns in positive contexts related to careers, skills, and salaries (Kurita et al., 2019). This stereotypical association may further enforce and amplify sexist viewpoints when the model is fine-tuned and deployed in real-world applications such as hiring. In this section, we use the proposed method to assess gender bias in BERT and GPT-2. 5.1 Distribution of Biased Heads There are 144 attention heads in BERT-base and GPT-2Small; we obtain a bias score, $b_{i,j}$, for each of the attention heads. We visualize the bias score distribution in Figure 1a and Figure 1b respectively. It shows that most of the attention heads have a bias score that is centered around 0, indicating that they have no major effect on the SEAT score. Notably, there are several attention heads (on the right tail of the distribution curve) that have much higher bias scores compared to others. Moreover, GPT-2 contains more attention heads with pronounced negative bias scores than BERT, indicating that there are less biased attention heads in GPT-2. In the ensuing analysis, we examine the biased heads, especially those with higher bias score values. https://github.com/kanekomasahiro/context-debias https://github.com/TManzini/DebiasMulticlassWordEmbedding/ The dataset contains news commentaries, released for the WMT20 news translation task. We use the English data: https://www.statmt.org/wmt20/translation-task.html https://pypi.org/project/transformers/ Relatedly, the SEAT score of GPT-2Small is 0.351 while that of BERT-base is 1.35. Figure 2: Attention head visualizations for BERT-base gender (2a), GPT-2 gender (2b), BERT-base race (2c). Note that negative bias scores are converted to zero for better visual illustration. To understand the location of biased heads in BERT and GPT, we created a heatmap (Figure 2a and Figure 2b respectively) in which each cell represents a particular attention head, and the darker the color of the cell, the higher the bias score. Consistent with prior literature (Kaneko & Bollegala, 2021), the identified biased heads appear across all layers. 
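The following sketch illustrates how per-head bias scores of the kind just visualized can be obtained: back-propagating the absolute SEAT objective to a head-mask tensor. It is an approximation, not the authors' code. In the transformers library the `head_mask` argument scales attention probabilities rather than head outputs (so it only approximates Equation 2), the word lists below are tiny illustrative stand-ins for the curated lists described above, and single-word embeddings replace the corpus-averaged contextualized embeddings the paper uses.

```python
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
for p in model.parameters():
    p.requires_grad_(False)                      # only the head mask needs gradients

head_mask = torch.ones(12, 12, requires_grad=True)   # one scalar per head, initialised to 1

def embed(words):
    enc = tok(words, return_tensors="pt", padding=True)
    out = model(**enc, head_mask=head_mask).last_hidden_state
    return out.mean(dim=1)                       # crude word embeddings (includes [CLS]/[SEP])

def seat_abs(X, Y, A, B):
    cos = torch.nn.functional.cosine_similarity
    def s(W, eA, eB):
        return cos(W[:, None], eA[None], dim=-1).mean(1) - cos(W[:, None], eB[None], dim=-1).mean(1)
    eA, eB = embed(A), embed(B)
    sx, sy = s(embed(X), eA, eB), s(embed(Y), eA, eB)
    return (sx.mean() - sy.mean()).abs() / torch.cat([sx, sy]).std()

# Tiny illustrative target/attribute lists; the paper uses much larger curated lists.
X_t, Y_t = ["nurse", "emotional"], ["doctor", "rational"]
A_f, B_m = ["she", "woman", "her"], ["he", "man", "his"]

loss = seat_abs(X_t, Y_t, A_f, B_m)              # the |SEAT| objective
loss.backward()
print(head_mask.grad)                            # per-head bias scores b_{i,j}
```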
5.2 Counter-stereotype experiment We now turn to evaluate if the identified biased heads - those attention heads with positive bias scores - indeed encode more stereotypical associations than non-biased attention heads with negative bias scores. We propose a counter-stereotype experiment for this purpose. Although stereotyping in PLMs can be seen from the contextualized representations in the last layer, it is largely driven by how each token attends to its context in the attention head. By examining the attention maps (Clark et al., 2019) — the distribution of attention scores between an input word and its context words, including itself, across different attention layers — we can gain insight into how bias behavior manifests in PLMs. We argue that we can gain insight into how bias behavior manifests in an attention head by examining how it assigns the attention score between two words. For example, given two sentences “women are emotional” and “men are emotional”, since these two sentences have the exact same sentence structure except the gender attribute words are different, we should expect to see negligible attention score difference between the target word (emotional) and the gender attribute word (women, men). However, if an attention head encodes stereotypical gender bias that women are more prone to emotional reactions compared to men, there will be a higher attention score between “emotional” and “women” in the former sentence than that between “emotional” and “men” in the later sentence. In other words, simply substituting attribute words should not drastically change how the attention head works internally, unless the attention head is encoded with stereotypical associations. A running example is shown below. Running example: We take an input text “[CLS] the way I see it, women are more emotional beings... ” from the /r/TheRedPill corpus, feed it into the BERT-base model, and visualize its attention maps, the distribution of attention scores (Clark et al., 2019), for the target word “emotional” at one biased head and one randomly sampled regular head in Figure 3. Notably, for this biased head, the normalized attention score between the target word emotional and the attribute word women is 0.0167. However, in the counter-stereotype example where women is substituted with men, the normalized attention score drops to 0.0073. All other things being equal, this head encodes more stereotypical associations. On the other hand, for the unbiased head, the change between attention score is negligible. --- 9/r/TheRedPill dataset contains 1,000,000 stereotypical text collected from the Reddit community (Ferrer et al., 2021). 10Note that for clarity, we do not display the attention with regards to special tokens (e.g., [CLS], [SEP]) and punctuation (e.g., comma, period). 11The raw attention score is normalized using the min-max method, and the attentions to special tokens (i.e., [CLS] and [SEP]) and punctuation are excluded. Figure 3: A running example for the counter-stereotype experiment. The four plots show the attention score (the boldface number) in the original sentence and the counter-stereotype sentence of a biased head (left two figures) and an unbiased head (right two figures). In this example, the target word is “emotional”. The edge thickness is associated with its normalized attention score. BERT-base model is used in this example. (a) BERT; gender. (b) GPT-2; gender. (c) BERT; race Figure 4: Quantitative counter-stereotype experiments. 
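The running example above can be reproduced in a few lines by reading the attention maps directly. This is a simplified sketch rather than the paper's pipeline: it assumes both words map to a single word-piece, picks an arbitrary layer/head, and skips the min-max normalization and special-token filtering described in the footnotes.

```python
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def attention_to_attribute(sentence, target, attribute, layer, head):
    """Attention score that `target` places on `attribute` in one attention head."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        atts = model(**enc, output_attentions=True).attentions   # tuple of (1, heads, seq, seq)
    tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
    t_idx, a_idx = tokens.index(target), tokens.index(attribute)
    return atts[layer][0, head, t_idx, a_idx].item()

orig = "women are emotional"
counter = "men are emotional"                    # attribute word swapped, context unchanged
layer, head = 8, 11                              # arbitrary example head (9-12 in the paper's notation)
w_orig = attention_to_attribute(orig, "emotional", "women", layer, head)
w_counter = attention_to_attribute(counter, "emotional", "men", layer, head)
print(f"d = {w_orig - w_counter:.4f}")           # attention score change for this sentence pair

# Collecting such differences over many sentence pairs gives the sample on which a
# one-tailed one-sample t-test can be run, e.g. scipy.stats.ttest_1samp(d_values, 0.0,
# alternative="greater"), as in the quantitative analysis described next.
```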
It is worth noting that the absolute value of the attention score does not necessarily indicate the significance of bias. This is because some attention heads may indeed be “gender” heads that associate high weights between gender words and target word, which could be very useful for context such as coreference resolution. Therefore, to account for this, we measure the difference of attention score between a stereotype association (e.g., women and emotional) and a counter-stereotype association (e.g., men and emotional). Quantitative counter-stereotype analysis: To assess the bias in biased heads more systematically and quantitatively, we conduct the counter-stereotype analysis using a large sample of sentences. The detailed steps are as follows. Step 1: Form a stereotype dataset. We first obtain a set of sentences from TheRedPill corpus, where each sentence contains exactly one attribute word (e.g., “women”) from our predefined word lists and one of its associated stereotypical target word (e.g., “emotional”). Note that this set of sentences could contain both women-related and men-related stereotype. We denote this dataset as $S_{\text{orig}}$. Step 2: Form a counter-stereotype dataset. We then construct a counter-stereotype dataset by replacing the attribute word (e.g., “women”) with its counterpart (e.g., “men”), with all other words in the sentence unchanged, for each example in $S_{\text{orig}}$. For example, given an original sentence “women are emotional,” the counter-stereotype sentence would be “men are emotional.” We denote this dataset as $S_{\text{counter}}$. Note that sentences in $S_{\text{orig}}$ and $S_{\text{counter}}$ are paired, and the only difference in the paired sentences is that the stereotype related attribute words are different. Step 3: Examine attention score difference and statistical significance. For Head $i-j$ (the $j$-th head in the $i$-th layer), we calculate the attention score that the target word has on the attribute word for each of the sentences in $s \in S_{\text{orig}}$, which we denote as $w^s_{[i-j]}$. Similarly, we calculate the attention score for each of the counter-stereotype sentences $s' \in S_{\text{counter}}$, which we denote as $w^{s'}_{[i-j]}$. We measure the attention score change after the attribute word substitution as $d^s_{[i-j]} = w^s_{[i-j]} - w^{s'}_{[i-j]}$. We then conduct a one-tail t-test to examine the null hypothesis that $d^s_{[i-j]}$ equals to zero. If the examined focal attention head encodes stereotypical bias, we would see that $d^s_{[i-j]}$ is significantly greater than zero and thus reject the null hypothesis. The counter-stereotype experiment results are presented in Figure 4a (BERT) and Figure 4b (GPT) respectively. For BERT, we can see that for the biased heads, whose bias score is positive, the average attention score in $S_{\text{orig}}$ is statistically higher than that in $S_{\text{counter}}$ ($t$-stat = 3.182, $p$-value < 0.001, $N = 500$). However, the average attention score difference in the regular heads are not statistically significant ($t$-stat = −1.478, $p$-value = 0.93, $N = 500$), indicating that there is no significant change of attention score. The results are similar for GPT. The average attention score of biased heads in GPT is statistically higher in the original group than in the counter-stereotype group ($t$-stat = 2.897, $p$-value < 0.005, $N = 500$). 
However, there is no statistical significance between the original group and the counter-stereotype group for the regular heads ($t$-stat = 0.213, $p$-value = 0.42, $N = 500$). Taken together, the counter-stereotype experiment validates that the attention heads we identify as biased heads indeed encode stereotypical biases. It should be noted that our counter-stereotype experiment differs from StereoSet (Nadeem et al., 2021), which incorporates human-annotated stereotype and counter-stereotype sentences. In StereoSet, the examples of stereotype and counter-stereotype are represented by completely different sentences. In contrast, our counter-stereotype examples are constructed by altering only the attribute words (such as those related to gender), while the overall sentence context remains unchanged. This method enables us to examine how the attention score of a specific attention head changes in a controlled manner. 6 ADDITIONAL ANALYSIS 6.1 ASSESSING RACIAL STEREOTYPING In this section, to demonstrate our bias analysis framework is also applicable to other types of biases beyond gender bias, we apply our framework to examine racial bias between Caucasian/African American terms and racial related stereotypical words such as criminal, runner, etc. In the following experiment, we use BERT-base as the underlying PLM.\footnote{The results are similar for GPT model, and are omitted for space considerations.} We visualize the bias score distribution and heat map in Figure 1c and Figure 2c respectively. Much like the distribution of gender bias in BERT, we observe several heads with significantly higher bias scores. Moreover, the biased heads appear across all layers; some of the highest scores are distributed in the higher layers. We conduct a counter-stereotype experiment to validate the identified racial biased heads. Similar to the counter-stereotype experiment step for gender bias analysis, we first obtain a set of sentences from the Reddit corpus that contains both the racial attribute words (such as “black”) and stereotypical words (such as “criminal”). Then we measure the attention score change in a sentence and its counterfactual by replacing an attribute word to its counterpart word (such as “white”). Figure 4c shows that for the bias heads, the average attention score is significantly lower in the counter-stereotype group than in the original group, indicating these heads encode stronger racial stereotype associations ($t$-stat = 2.324, $p$-value < 0.05, $N = 500$). In contrast, for the unbiased heads group, there is no statistical difference in the original sentences and their counter-stereotypes ($t$-stat = −0.107, $p$-value = 0.54, $N = 500$). 6.2 UNDERSTANDING DEBIASING THROUGH THE LENS OF BIASED HEADS Existing bias mitigation approaches are usually designed in an end-to-end fashion and fine tune all model parameters with a bias neutralization objective or a bias neutral corpus. For example, Attanasio et al. (2022) propose to equalize the attention probabilities of all attention heads, and counterfactual data augmentation debiasing (CDA) proposes to pretrain a language model with a gender-neutral dataset (Zmigrod et al., 2019). In this sub-section, we use the scores from our bias analysis framework to shed light on possible application of biased heads for bias-mitigation. We examine a different debiasing strategy that specifically targets on a set of attention heads. 
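Before the detailed comparison in the next paragraph, the sketch below shows what such a head-targeted intervention can look like in code: removing the K heads with the largest bias scores. It is only an illustration under stated assumptions; the `bias_scores` tensor is a random placeholder standing in for the real b_{i,j} values, and HuggingFace's `prune_heads` removes heads entirely, which has the same effect as fixing their mask m_{i,j} to 0 while leaving the rest of the model untouched.

```python
import torch
from transformers import BertForSequenceClassification

bias_scores = torch.randn(12, 12)                # placeholder for the (layers x heads) b_{i,j} matrix
K = 3
top = torch.topk(bias_scores.flatten(), K).indices
heads_to_mask = {}
for idx in top.tolist():
    layer, head = divmod(idx, 12)                # recover (layer, head) from the flat index
    heads_to_mask.setdefault(layer, []).append(head)

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.prune_heads(heads_to_mask)                 # equivalent to masking out the selected heads
print(heads_to_mask)
```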
As an initial exploration of targeted debiasing, we examine a simple strategy, called Targeted-Debias, that masks out top-K attention heads that have the largest bias score (Top-3). In addition, we also examine an opposite targeted debiasing that masks out K attention heads with the most negative bias score (Bottom-3). Moreover, we mask out all attention heads with a positive bias score (All) (in the case of gender bias in BERT, there are 45 attention heads with a positive bias score). To benchmark the performance of Targeted-Debias, we consider Random-Debias that randomly masks out K out of BERT-base’s 144 heads. To evaluate the impact of masking out attention heads, we assess the model’s bias using SEAT score, and we also evaluate the model’s language modeling capability using pseudo-perplexities (PPPLs) (Salazar et al., 2020), and model’s Natural Language Understanding (NLU) capability on the GLUE tasks (Wang et al., 2018). The main debiasing results are presented in Table 1a. We can see that Targeted-Debias (Top-3) achieves the best performance among the three debiasing strategies: it has the lowest SEAT and lowest PPPL scores. Compared to the two versions of Targeted-Debias (Top-3 vs. All(45)), masking out more biased heads does not further lower SEAT, but does significantly worsen the language modeling performance (4.16 vs. 5.75). The Top-3 Targeted-Debias only slightly increases BERT’s PPPL from 4.09 to 4.16. Interestingly, we can see that targeting on the anti-biased heads (Bottom-3) increases the overall model bias. Random-Debias, which randomly masks out attention heads, actually exacerbates model bias. We posit that this result makes sense, given that if random heads are removed, those biased heads that remain will have their bias amplified. The GLUE task results appearing in Table 1b show similar trends as the language modeling task. That is, masking out the top-3 biased heads achieves comparable NLU performance to the original BERT-base model, while masking out all biased heads significantly worsens model performance. Taken together, it is encouraging that a simple debiasing strategy, targeting a small set of highly biased heads, can reduce PLM bias without affecting language modeling and NLU capability. | Task | Metric | Result | |------|--------|--------| | RTE | Accuracy | 0.6907 / 0.7148 | | SST-2 | Accuracy | 0.9297 / 0.9308 | | WNLI | Accuracy | 0.5506 / 0.5818 | | QNLI | Accuracy | 0.9154 / 0.9154 | | MNLI | Matthew corr. | 0.5825 / 0.5792 | | MRPC | F1 / Accuracy | 0.8701 / 0.8266 | | QQP | F1 / Accuracy | 0.8829 / 0.9129 | | STS-B | Pearson / Spearman corr. | 0.8862 / 0.8847 | | MNLI | Matched acc. / Mismatched acc. | 0.8794 / 0.8406 | (a) Targeted debiasing. (b) GLUE benchmark. 7 CONCLUSION AND DISCUSSION In this work, we present an approach to understand how stereotyping biases are encoded in the attention heads of pretrained language models. We infer that the biases are mostly encoded in a small set of biased heads. We further analyze the behavior of these biased heads, by comparing them with other regular heads, and confirm our findings. We also present experiments to quantify gender bias and racial bias in BERT and GPT. This work is among the first work aiming to understand how bias manifests internally in PLMs. Previous work has often used downstream tasks or prompting to examine a PLM’s fairness in a black-box manner. We try to open up the black-box and analyze different patterns of bias. In doing so, we strengthen our understanding of PLM bias mechanisms. 
Future work can apply our method to assess concerning biases in increasingly large foundation models such as GPT-3 and LLaMA. Overall, our work sheds light on how bias manifests internally in language models, and constitutes an important step towards designing more transparent, accountable, and fair NLP systems. 13 Performed on the test split of “wikitext-2-raw-v1” accessible through https://huggingface.co/datasets/wikitext REFERENCES Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, and Siva Reddy. Evaluating correctness and faithfulness of instruction-following models for question answering. *arXiv preprint arXiv:2307.16877*, 2023. Giuseppe Attanasio, Debora Nozza, Dirk Hovy, and Elena Baralis. Entropy-based attention regularization frees unintended bias mitigation from lists. In *Findings of the Association for Computational Linguistics: ACL 2022*, pp. 1105–1119, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-acl.88. URL https://aclanthology.org/2022.findings-acl.88. Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. The problem with bias: Allocative versus representational harms in machine learning. In *9th Annual conference of the special interest group for computing, information and society*, 2017. Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, pp. 610–623, 2021. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is power: A critical survey of “bias” in NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 5454–5476, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.485. URL https://aclanthology.org/2020.acl-main.485. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. *Science*, 356(6334):183–186, 2017. Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. Fairfil: Contrastive neural debiasing method for pretrained text encoders. *arXiv preprint arXiv:2103.06413*, 2021. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What does BERT look at? an analysis of BERT’s attention. In *Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP*, pp. 276–286, Florence, Italy, August 2019. Association for Computational Linguistics. doi: 10.18653/v1/W19-4828. URL https://aclanthology.org/W19-4828. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Xavier Ferrer, Tom van Nuenen, Jose M Such, and Natalia Criado. Discovering and categorising language biases in reddit. 
In *ICWSM*, pp. 140–151, 2021. Susan T Fiske. Stereotyping, prejudice, and discrimination. 1998. Jun Gao, Huan Zhao, Changlong Yu, and Ruifeng Xu. Exploring the feasibility of chatgpt for event extraction. *arXiv preprint arXiv:2303.03836*, 2023. Aparna Garimella, Akhash Amarnath, Kiran Kumar, Akash Pramod Yalla, N Anandhavelu, Niyati Chhaya, and Balaji Vasan Srinivasan. He is very intelligent, she is very beautiful? on mitigating social biases in language modelling and generation. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pp. 4534–4545, 2021.
2XwBIcywWM
Algorithm 1 indicates the use of $n$ samples for each domain. Could the authors provide guidance on how to effectively balance these samples across various domains to ensure a harmonized and representative dataset for each domain involved?
Learning Variational Neighbor Labels for Test-Time Domain Generalization Anonymous authors Paper under double-blind review Abstract This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains. We follow the strict separation of source training and target testing, but exploit the value of the unlabeled target data itself during inference. We make three contributions. First, we propose probabilistic pseudo-labeling of target samples to generalize the source-trained model to the target domain at test time. We formulate the generalization at test time as a variational inference problem, by modeling pseudo labels as distributions, to consider the uncertainty during generalization and alleviate the misleading signal of inaccurate pseudo labels. Second, we learn variational neighbor labels that incorporate the information of neighboring target samples to generate more robust pseudo labels. Third, to learn the ability to incorporate more representative target information and generate more precise and robust variational neighbor labels, we introduce a meta-generalization stage during training to simulate the generalization procedure. Experiments on seven widely-used datasets demonstrate the benefits, abilities, and effectiveness of our proposal. 1 Introduction As soon as test data distributions differ from the ones experienced during training, deep neural networks start to exhibit generalizability problems and accompanying performance degradation (Geirhos et al., 2018; Recht et al., 2019). To deal with distribution shifts, domain generalization (Li et al., 2017; 2020; Motian et al., 2017b; Muandet et al., 2013) has emerged as a promising tactic for generalizability to unseen target domains. However, as methods are only trained on source domains, this may still lead to overfitting and limited performance guarantees on unseen target domains. To better adapt models to target domains – without relying on target data during training – test-time adaptation (Liang et al., 2023; Sun et al., 2020; Varsavsky et al., 2020; Wang et al., 2021) was introduced. It provides an alternative learning paradigm by training a model on source data and further adjusting the model according to the unlabeled target data at test time. Different settings for test-time adaptation have emerged. Test-time training (Sun et al., 2020) and test-time adaptation (Wang et al., 2021) attack image corruptions with a model trained on the original uncorrupted image distribution. The trained model is fine-tuned with self-supervised learning or entropy minimization to adapt to different corruptions in an online manner. The paradigm is also employed under the domain generalization setting using multiple source domains during training (Dubey et al., 2021; Iwasawa & Matsuo, 2021; Jang et al., 2023; Xiao et al., 2022), where the domain shifts are typically manifested in varying image styles and scenes, rather than corruptions. In this paper, we focus on the latter setting and refer to it as test-time domain generalization. One widely applied strategy for updating models at test time is by optimizing or adjusting the model with target pseudo labels based on the source-trained model (Iwasawa & Matsuo, 2021; Jang et al., 2023). However, due to domain shifts, the source-model predictions of the target samples can be uncertain and inaccurate, leading to updated models that are overconfident on mispredictions (Yi et al., 2023). 
As a result, the obtained model becomes unreliable and misspecified to the target data (Wilson & Izmailov, 2020). In this paper, we make three contributions to attack the unreliability of test-time domain generalization by pseudo labels. First, we define pseudo labels as stochastic variables and estimate their distributions. By doing so, the uncertainty in predictions of the source-trained model is incorporated into the generalization to the target data at test time, alleviating the misleading effects of uncertain and inaccurate pseudo labels. Second, due to the proposed probabilistic formalism, it is natural and convenient to utilize variational distributions to leverage extra information. By hinging on this benefit, we design variational neighbor labels that leverage the neighboring information of target samples into the inference of the pseudo-label distributions. This makes the variational labels more accurate, which enables the source-trained model to be better specified to target data and therefore conducive to model generalization on the target domain. Third, to learn the ability to incorporate more representative target information in the variational neighbor labels, we simulate the test-time generalization procedure across domains by meta-learning. Beyond the well-known meta-source and meta-target stages (Alet et al., 2021; Dou et al., 2019; Xiao et al., 2022), we introduce a meta-generalization stage in between the meta-source and meta-target stages to mimic the target generalization procedure. Based on the multiple source domains seen during training, the model is exposed to different domain shifts iteratively and optimized to learn the ability to generalize to unseen domains. Our experiments on seven widely-used domain generalization benchmarks demonstrate the promise and effectiveness of our proposal. 2 RELATED WORK Domain generalization. Domain generalization is introduced to learn a model on one or several source domains that can generalize well on any out-of-distribution target domain (Blanchard et al., 2011; Muandet et al., 2013; Zhou et al., 2022). Different from domain adaptation (Long et al., 2015; Luo et al., 2020; Wang & Deng, 2018), domain generalization methods do not access any target data during training. One of the most widely-used methods for domain generalization is domain-invariant learning (Arjovsky et al., 2019; Ghifary et al., 2016; Li et al., 2018c; Motian et al., 2017a; Muandet et al., 2013; Zhao et al., 2020), which learns invariant feature representations across source domains. As an alternative, source domain augmentation methods (Li et al., 2018a; Qiao et al., 2020; Shankar et al., 2018; Zhou et al., 2020a;b) try to generate more source domains during training. Recently, meta-learning-based methods (Balaji et al., 2018; Chen et al., 2023a; Dou et al., 2019; Du et al., 2020; Li et al., 2018b) have been explored to learn the ability to handle domain shifts. Test-time adaptation. Another solution to address distribution shifts without target data during training is adapting the model at test time. Source-free adaptation (Eastwood et al., 2021; Liang et al., 2020; Litrico et al., 2023) adapts the source-trained model to the entire target set. Differently, test-time adaptation achieves adaptation and prediction in an online manner, without halting inference. One common test-time adaptation is fine-tuning by entropy minimization (Wang et al., 2021; Goyal et al., 2022; Jang et al., 2023; Niu et al., 2022; Zhang et al., 2022). 
Since entropy minimization does not consider the uncertainty of source model predictions, probabilistic algorithms (Brahma & Rai, 2022; Zhou & Levine, 2021) based on Bayesian semi-supervised learning and models fine-tuned on soft pseudo labels (Rusak et al., 2021; Zou et al., 2019) have been proposed. Different from these works, we introduce the uncertainty by considering pseudo labels as latent variables and estimate their distributions by variational inference. Our models consider uncertainty within the same probabilistic framework, without introducing extra models or knowledge distillation operations. Test-time domain generalization. Many test-time adaptation methods adjust models to corrupted data distributions with a single source distribution during training (Sun et al., 2020; Wang et al., 2021). The idea of adjusting the source-trained model at test time is further explored under the domain generalization setting to consider target information for better generalization (Dubey et al., 2021; Iwasawa & Matsuo, 2021; Xiao et al., 2023; Zhang et al., 2021). We refer to these methods as test-time domain generalization. Dubey et al. (2021) generate domain-specific classifiers for the target domain with the target domain embeddings. Iwasawa & Matsuo (2021) adjust their prototypical classifier online according to the pseudo labels of the target data. Some also investigated meta-learning for test-time domain generalization (Alet et al., 2021; Du et al., 2021; Xiao et al., 2022). These methods mimic domain shifts during training with multiple source domains. Du et al. (2021) meta-learn to estimate the batch normalization statistics from each target sample to adjust the source-trained model. Xiao et al. (2022) learn to adapt their classifier to each individual target sample by mimicking domain shifts during training. Our method also learns the ability to adjust the model by unseen data under the multi-source meta-learning setting. Differently, we design meta-generalization and meta-target stages during training to simulate both the generalization and inference procedures at test time. Our entire algorithm is explored under a probabilistic framework. **Pseudo-label learning.** Pseudo-label learning relies on model predictions for retraining on downstream tasks. It is often applied for unlabeled data and self-training (Li et al., 2022; Miyato et al., 2018; Xie et al., 2020; Yalniz et al., 2019). To better utilize information from unlabeled target distributions, pseudo labels are also beneficial for unsupervised domain adaptation (Liu et al., 2021a; Shu et al., 2018; Zou et al., 2019), test-time adaptation (Chen et al., 2022; Rusak et al., 2021; Wang et al., 2022), and test-time domain generalization (Iwasawa & Matsuo, 2021; Jang et al., 2023; Wang et al., 2023). As pseudo labels can be noisy and overconfident (Zou et al., 2019), several studies focus on the appropriate selection and uncertainty of the pseudo labels. These works either select the pseudo labels with criteria such as the entropy consistency score of model predictions (Liu et al., 2021a; Niu et al., 2022; Shin et al., 2022) or use soft pseudo labels to take the uncertainty into account (Rusak et al., 2021; Yang et al., 2022; Zou et al., 2019). We also use pseudo labels to generalize the source-trained model to the target domain. 
Different from the previous methods, we are the first to introduce pseudo labels as latent variables in a probabilistic parameterized framework for test-time domain generalization, where we incorporate uncertainty and generate pseudo labels with neighboring information through variational inference and meta-learning. ### 3 METHODOLOGY **Preliminary.** We are given data from different domains defined on the joint space \( \mathcal{X} \times \mathcal{Y} \), where \( \mathcal{X} \) and \( \mathcal{Y} \) denote the data space and label space, respectively. The domains are split into several source domains \( \mathcal{D}_s = \{(x_s, y_s)\}_{i=1}^{N_s} \) and the target domain \( \mathcal{D}_t = \{(x_t, y_t)\}_{i=1}^{N_t} \). Our goal is to train a model on source domains that is expected to generalize well on the (unseen) target domain. We follow the test-time domain generalization setting (Dubey et al., 2021; Iwasawa & Matsuo, 2021; Xiao et al., 2022), where a source-trained model is generalized to target domains by adjusting the model parameters at test time. A common strategy for adjusting the model parameters is that the model \( \theta \) is first trained on source data \( \mathcal{D}_s \) by minimizing a supervised cross-entropy (\( L_{CE} \)) loss \( L_{train}(\theta) = \mathbb{E}_{(x_s, y_s) \in \mathcal{D}_s} [L_{CE}(x_s, y_s; \theta)] \); and then at test time the source-trained model \( \theta_s \) is generalized to the target domain by optimization with certain surrogate losses, e.g., entropy minimization (\( L_E \)), based on the online unlabeled test data, which is formulated as: \[ L_{test}(\theta) = \mathbb{E}_{x_t \in \mathcal{D}_t} [L_E(x_t; \theta_s)], \] where the entropy is calculated on the source model predictions. However, test samples from the target domain could be largely misclassified by the source model due to the domain shift, resulting in large uncertainty in the predictions. Moreover, the entropy minimization tends to update the model with high confidence even for the wrong predictions, which would cause a misspecified model for the target domain. To solve those problems, we address test-time domain generalization from a probabilistic perspective and further propose variational neighbor labels to incorporate more target information. A graphical illustration to highlight the differences between common test-time domain generalization and our proposals is shown in Figure 1. **Probabilistic pseudo-labeling.** Given target sample \( x_t \) and source-trained model \( \theta_s \), we would like to make predictions on the target sample, formulated as \( p(y_t | x_t, \theta_s) = \int p(y_t | x_t, \theta_t) p(\theta_t | x_t, \theta_s) d\theta_t \). Since the distribution of \( p(\theta_t) \) is intractable, the common test-time adaptation and generalization methods usually optimize the source model to the target one by the maximum a posterior (MAP), which is an empirical Bayesian method and an approximation of the integration of \( p(\theta_t) \) (Finn et al., 2018). The predictive likelihood is then formulated as: \[ p(y_t | x_t, \theta_s) = \int p(y_t | x_t, \theta_t) p(\theta_t | x_t, \theta_s) d\theta_t \approx p(y_t | x_t, \theta_t^*), \] where \( \theta_t^* \) is the MAP value of the optimized target model. The MAP approximation is interpreted as inferring the posterior over \( \theta_t \): \( p(\theta_t | x_t, \theta_s) \approx \delta(\theta_t - \theta_t^*) \), following a Dirac delta distribution. 
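For concreteness, below is a minimal PyTorch-style sketch of the two common test-time updates discussed here: entropy minimization as in eq. (1) and a MAP-style update on hard (argmax) pseudo labels as in eq. (2). The names `model`, `optimizer`, and `x_t` are placeholders for a source-trained classifier, its optimizer, and an online target batch; this is an illustrative sketch rather than the implementation used in this paper.

```python
import torch
import torch.nn.functional as F

def entropy_minimization_step(model, optimizer, x_t):
    """One test-time update on a target batch x_t by minimizing prediction entropy (eq. 1)."""
    log_probs = model(x_t).log_softmax(dim=-1)           # (batch, num_classes)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

def hard_pseudo_label_step(model, optimizer, x_t):
    """One MAP-style update (eq. 2) with hard pseudo labels from the current model."""
    with torch.no_grad():
        pseudo = model(x_t).argmax(dim=-1)               # argmax of p(y_hat | x_t, theta)
    loss = F.cross_entropy(model(x_t), pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```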
To model the uncertainty of predictions for more robust test-time generalization, we treat pseudo labels as stochastic variables in the probabilistic framework of common test-time generalization as shown in Figure 1 (b). The pseudo labels are obtained from the source model predictions, which follow categorical distributions. Then we reformulate eq. (2) as: \[ p(y_t | x_t, \theta_s) = \int p(y_t | x_t, \theta_t) \left[ \int p(\theta_t | \hat{y}_t, x_t, \theta_s) p(\hat{y}_t | x_t, \theta_s) d\hat{y}_t \right] d\theta_t \\ \approx \mathbb{E}_{p(\hat{y}_t | x_t, \theta_s)} [p(y_t | x_t, \theta_t^*)], \] where \( \theta_t^* \) is the MAP value of \( p(\theta_t | \hat{y}_t, x_t, \theta_s) \), obtained via gradient descent on the data \( x_t \) and the corresponding pseudo labels \( \hat{y}_t \) starting from \( \theta_s \). Note that we only use MAP approximation with gradient descent to estimate the model parameter \( \theta_t \), which will not hurt the generation of the probabilistic pseudo labels. This formulation allows us to sample different pseudo labels from the categorical distribution \( p(\hat{y}_t) \) to update the model \( \theta_t^* \), which takes into account the uncertainty of the source-trained predictions. The common pseudo-labeling method can be treated as a specific case of eq. 3, which approximates the expectation of \( p(\hat{y}_t) \) by utilizing the argmax function on \( p(\hat{y}_t) \), generating the hard pseudo labels. \( \theta_t^* \) is then obtained by a point estimation of the hard pseudo labels. However, due to domain shifts, the argmax value of \( p(\hat{y}_t) \) is not guaranteed to always be correct. The optimization of the source-trained model then is similar to entropy minimization (eq. 1), where the updated model can achieve high confidence but wrong predictions of some target samples due to domain shifts. More analysis is provided in Appendix A. **Variational neighbor labels.** To optimize the probabilistic framework, we use variational inference to approximate the true posterior of the probabilistic pseudo labels, in which we introduce more neighboring target information and categorical information during training. On one hand, introducing variational inference into pseudo-labeling is natural and convenient under the proposed probabilistic formulation. On the other hand, to generate pseudo labels that are more accurate and calibrated for more robust generalization, it is necessary to incorporate more target information. Assume that we have a mini-batch of target data \( X_t = \{x_i\}_{i=1}^{M} \), we reformulate eq. (3) as: \[ p(y_t | x_t, \theta_s, X_t) = \int p(y_t | x_t, \theta_t) \left[ \int \int p(\theta_t | \hat{y}_t, x_t, \theta_s) p(\hat{y}_t, w_t | x_t, \theta_s, X_t) d\hat{y}_t dw_t \right] d\theta_t \\ = \int \int p(y_t | x_t, \theta_t^*) p(\hat{y}_t, w_t | x_t, \theta_s, X_t) d\hat{y}_t dw_t. \] As in eq. (3), \( \theta_t^* \) is the MAP value of \( p(\theta_t | \hat{y}_t, x_t, \theta_s) \). We introduce the latent variable \( w_t \) to integrate the information of the neighboring target samples \( X_t \) as shown in Figure 1 (c). To facilitate the estimation of the variational neighbor labels, we set the prior distribution as: \[ p(\hat{y}_t, w_t | x_t, \theta_s, X_t) = p(\hat{y}_t | w_t, x_t) p(w_t | \theta_s, X_t), \] where \( p_\phi(w_t | \theta_s, X_t) \) is generated by the features of \( X_t \) together with their output values based on \( \theta_s \). 
In detail, to explore the information of neighboring target samples, we first generate the predictions of \( X_t \) by the source-trained model \( \theta_s \). Then we estimate the averaged target features of each category according to the source-model predictions. The latent variable \( w_t \) is obtained by the model \( \phi \) with the averaged features as the input. Therefore, \( w_t \) contains the categorical information of the target features and can be treated as an updated classifier with more target information. The variational neighbor labels \( \hat{y}_t \) are obtained by classifying the target samples using \( w_t \). Rather than directly using the source model $\theta_s$, we estimate $\hat{y}_t$ from the latent variable $w_t$, which integrates the information of neighboring target samples to be more accurate and reliable. To approximate the true posterior of the joint distribution $p(\hat{y}_t, w_t)$ and incorporate more representative target information, we design a variational posterior $q(\hat{y}_t, w_t | x_t, \theta_s, X_t, Y_t)$ to supervise the prior distribution $p(\hat{y}_t, w_t | x_t, \theta_s, X_t, Y_t)$ during training: $$q(\hat{y}_t, w_t | x_t, \theta_s, X_t, Y_t) = p(\hat{y}_t | w_t, x_t) q_\phi(w_t | \theta_s, X_t, Y_t).$$ The variational posterior distribution is obtained similarly as the prior by generating $w_t$ through the categorical averaged features. The model $\phi$ is shared by the prior and posterior distributions. The main difference is that the averaged features to generate $w_t$ are obtained with the actual target labels $Y_t$. Since the target labels $Y_t$ are inaccessible, we can only utilize the prior distribution $p(\hat{y}_t, w_t | x_t, \theta_s, X_t)$ at test time. Therefore, we introduce the variational posterior under the meta-learning framework (Du et al., 2021; Finn et al., 2017; Xiao et al., 2022), where we mimic domain shifts and the test-time generalization procedure during training to learn the variational neighbor labels. In this case, according to the variational posterior distribution, the prior distribution $p(\hat{y}_t | w_t, x_t) q_\phi(w_t | \theta_s, X_t)$ learns the ability to incorporate more representative target information and generate more accurate neighbor labels. **Meta-generalization with variational neighbor labels.** We split the source domains $D_s$ into meta-source domains $D_{s'}$ and a meta-target domain $D_{t'}$ during training. The meta-target domain is selected randomly in each iteration to mimic diverse domain shifts. Moreover, we divide each iteration into meta-source, meta-generalization, and meta-target stages to simulate the training stage on source domains, test-time generalization, and test stage on target data, respectively. **Meta-source.** We train the meta-source model $\theta_{s'}$ by minimizing the supervised loss $L_{CE}(x_{s'}, y_{s'}; \theta)$, where $(x_{s'}, y_{s'})$ denotes the input-label sample pairs of the meta-source domains. **Meta-generalization.** To mimic test-time generalization and prediction, our goal in the newly introduced meta-generalization stage is to optimize the meta-source model $\theta_{s'}$ by the meta-target data and make predictions with the generalized model. 
By introducing the variational neighbor labels, the log-likelihood of the meta-target prediction $y_{t'}$ is formulated as:
$$p(y_{t'} | x_{t'}, \theta_{s'}, X_{t'}) = \int \int p(y_{t'} | x_{t'}, \theta_{t'}) p(\hat{y}_{t'}, w_{t'} | x_{t'}, \theta_{s'}, X_{t'}) d\hat{y}_{t'} dw_{t'},$$
where $\theta_{t'}$ is the MAP value of $p(\theta_{t'} | \hat{y}_{t'}, x_{t'}, \theta_{s'})$, similar to eq. (4), and $p(\hat{y}_{t'}, w_{t'} | x_{t'}, \theta_{s'}, X_{t'}) = p(\hat{y}_{t'} | w_{t'}, x_{t'}) p_\phi(w_{t'} | \theta_{s'}, X_{t'})$ is the joint prior distribution of the meta-target neighbor labels $\hat{y}_{t'}$ and latent variable $w_{t'}$. The joint variational posterior is designed as $q(\hat{y}_{t'}, w_{t'} | x_{t'}, \theta_{s'}, X_{t'}, Y_{t'}) = p(\hat{y}_{t'} | w_{t'}, x_{t'}) q_\phi(w_{t'} | \theta_{s'}, X_{t'}, Y_{t'})$ to learn more reliable neighbor labels by considering the actual labels $Y_{t'}$ of the meta-target data. Under this meta-learning setting, the actual labels $Y_{t'}$ of the meta-target data are accessible during source training. Thus, the variational distribution utilizes both the domain and categorical information of the neighboring samples and models the meta-target distribution more reliably, generating more accurate neighbor labels $\hat{y}_{t'}$ of the meta-target samples. With the variational neighbor labels $\hat{y}_{t'}$, the test-time domain generalization procedure is simulated by obtaining $\theta_{t'}^*$ from:
$$\theta_{t'}^* = \theta_{s'} - \lambda_1 \nabla_\theta L_{CE}(x_{t'}, \hat{y}_{t'}; \theta_{s'}), \quad \hat{y}_{t'} \sim p(\hat{y}_{t'} | w_{t'}, x_{t'}), \quad w_{t'} \sim q_\phi(w_{t'} | \theta_{s'}, X_{t'}, Y_{t'}),$$
where $\lambda_1$ denotes the learning rate of the optimization in the meta-generalization stage.

**Meta-target.** Since our final goal is to obtain good performance on the target data after optimization, we further mimic the test-time inference on the meta-target domain and supervise the meta-target prediction on $\theta_{t'}^*$ by maximizing the log-likelihood of eq. (7):
$$\log p(y_{t'} | x_{t'}, \theta_{s'}, X_{t'}) \geq E_{q_\phi(w_{t'})} [E_{p(\hat{y}_{t'} | w_{t'}, x_{t'})} [\log p(y_{t'} | x_{t'}, \theta_{t'})]] - D_{KL}[q_\phi(w_{t'}) || p_\phi(w_{t'})],$$
where $p_\phi(w_{t'}) = p_\phi(w_{t'} | \theta_{s'}, X_{t'})$ is generated by the features of $X_{t'}$ together with their output values based on $\theta_{s'}$, and $q_\phi(w_{t'}) = q_\phi(w_{t'} | \theta_{s'}, X_{t'}, Y_{t'})$ is obtained by the features of $X_{t'}$ considering the actual labels $Y_{t'}$. The detailed formulation is provided in Appendix A. As aforementioned, the actual labels $Y_{t'}$ of the meta-target data are accessible during training, so we can further supervise the updated model $\theta_{t'}^*$ on its meta-target predictions by the actual labels. Maximizing the log-likelihood $\log p(y_{t'} | x_{t'}, \theta_{s'}, X_{t'})$ is equivalent to minimizing:
$$L_{meta} = E_{(x_{t'}, y_{t'})} [E_{q_\phi(w_{t'})} [E_{p(\hat{y}_{t'} | w_{t'}, x_{t'})} L_{CE}(x_{t'}, y_{t'}; \theta_{t'}^*)]] + D_{KL}[q_\phi(w_{t'}) || p_\phi(w_{t'})].$$
The source model $\theta_s$ in each iteration is finally updated by $\theta_s = \theta_{s'} - \lambda_2 \nabla_\theta L_{meta}$, where $\lambda_2$ denotes the learning rate for the meta-target stage. Note that the loss in eq. (10) is computed on $\theta_{t'}^*$, obtained by eq. (8), while the optimization is performed over the meta-source model $\theta_{s'}$.
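To make this procedure more concrete, the following is a simplified, first-order sketch of one meta-iteration: the inner update of eq. (8) with sampled neighbor labels and the meta-target objective of eq. (10). The `phi_net` interface (`posterior`, `prior`, `classify`) is a hypothetical placeholder for the variational module $\phi$, the reparameterization details and second-order gradients of the full method are omitted, and `outer_opt` is assumed to hold both the backbone and $\phi$ parameters.

```python
import copy
import torch
import torch.nn.functional as F

def meta_iteration(backbone, phi_net, src_batch, tgt_batch, outer_opt, lam1=1e-4):
    """One simplified (first-order) meta-iteration over a sampled meta-target domain."""
    xs, ys = src_batch                      # meta-source data
    xt, yt = tgt_batch                      # meta-target data (labels usable during training)
    outer_opt.zero_grad()

    # Meta-source stage: ordinary supervised loss on the meta-source domains.
    F.cross_entropy(backbone(xs), ys).backward()

    # Meta-generalization stage (eq. 8): sample w ~ q_phi(w | theta, X, Y), draw
    # neighbor labels, and take one inner SGD step on a copy of the backbone.
    q_mu, q_logvar = phi_net.posterior(backbone, xt, yt)       # placeholder API
    w = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()
    neighbor_logits = phi_net.classify(backbone, xt, w)        # p(y_hat | w, x)
    y_hat = neighbor_logits.argmax(dim=-1)                      # a point sample of the neighbor labels

    adapted = copy.deepcopy(backbone)                           # theta_{t'}^* starts from theta_{s'}
    adapted.zero_grad(set_to_none=True)                         # drop grads copied from the backbone
    inner_loss = F.cross_entropy(adapted(xt), y_hat)
    inner_grads = torch.autograd.grad(inner_loss, list(adapted.parameters()))
    with torch.no_grad():
        for p, g in zip(adapted.parameters(), inner_grads):
            p.sub_(lam1 * g)

    # Meta-target stage (eq. 10): CE of the adapted model on the true labels plus
    # KL(q || p) over w, plus the extra CE on the neighbor labels that trains phi.
    p_mu, p_logvar = phi_net.prior(backbone, xt)
    kl = 0.5 * (p_logvar - q_logvar
                + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp() - 1.0).sum()
    (kl + F.cross_entropy(neighbor_logits, yt)).backward()

    meta_ce = F.cross_entropy(adapted(xt), yt)
    meta_ce.backward()                        # gradients land on the adapted copy ...
    with torch.no_grad():                     # ... first-order shortcut: copy them back
        for p, pa in zip(backbone.parameters(), adapted.parameters()):
            if pa.grad is not None:
                p.grad = pa.grad if p.grad is None else p.grad + pa.grad

    outer_opt.step()
    return meta_ce.item()
```

A full implementation would differentiate through the inner step (e.g., with a functional API such as `torch.func`) instead of the first-order gradient copy used here.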
Intuitively, the model updated by the meta-target neighbor labels is trained to achieve good performance on the meta-target data. Thus, the meta-generalization stage is further supervised to optimize the model well across domains and better generate and utilize the variational neighbor labels. The variational inference model $\phi$ is also optimized in the meta-target stage. To guarantee that the variational neighbor labels do extract the categorical neighboring information for classification, we add an extra cross-entropy loss ($L_{CE}$) on the variational neighbor labels with the actual labels during the meta-target stage. Thus, $\phi$ is updated with a learning rate $\lambda_3$ by $\phi = \phi - \lambda_3 (\nabla_\phi L_{CE} + \nabla_\phi L_{meta})$. By simulating distribution shifts during training, the model learns the ability to generate more effective pseudo labels for fine-tuning the model across distribution shifts. The variational neighbor labels are further improved by considering more neighboring target information.

**Test-time generalization.** At test time, the model trained on the source domains with the meta-learning strategy $\theta_s$ is generalized to $\theta_t^*$ by further optimization:
$$\theta_t^* = \theta_s - \lambda_1 \nabla_\theta L_{CE}(x_t, \hat{y}_t; \theta_s), \quad \hat{y}_t \sim p(\hat{y}_t | w_t, x_t), \quad w_t \sim p_\phi(w_t | \theta_s, X_t).$$
Since the target labels $Y_t$ are inaccessible, we generate neighbor labels $\hat{y}_t$ and latent variables $w_t$ from the prior distribution $p(\hat{y}_t, w_t | x_t, \theta_s, X_t) = p(\hat{y}_t | w_t, x_t)p_\phi(w_t | \theta_s, X_t)$. The distribution $p(w_t)$ is inferred as a Gaussian distribution by generating the mean $\mu$ and variance $\sigma$ using the target averaged features through $\phi$. Then we sample $w_t$ by Monte Carlo sampling and generate the categorical distribution $p(\hat{y}_t)$ with the input target features, which we utilize to obtain the MAP value $\theta_t^*$. From $\theta_t^*$ we make predictions on the (unseen) target data $D_t$, formulated as:
$$p(y_t | x_t, \theta_s, X_t) = \int p(y_t | x_t, \theta_t) \left[ \int \int p(\theta_t | \hat{y}_t, x_t, \theta_s) p(\hat{y}_t, w_t | x_t, \theta_s, X_t) d\hat{y}_t dw_t \right] d\theta_t.$$
We provide both the training algorithm and test-time generalization algorithm in Appendix B.

### 4 EXPERIMENTS

**Seven datasets.** We demonstrate the effectiveness of our method on image classification problems and evaluate it on seven widely used domain generalization datasets, namely PACS (Li et al., 2017): 7 classes, 4 domains and 9,991 images; VLCS (Fang et al., 2013): 5 classes, 4 domains and 10,729 images; Office-Home (Venkateswara et al., 2017): 65 classes, 4 domains and 15,500 images; TerraIncognita (Beery et al., 2018): 10 classes, 4 domains and 24,778 images; and Mini DomainNet (Zhou et al., 2021): 126 classes, 4 domains and 140,000 images. We follow the training and validation split in Li et al. (2017) and evaluate the model according to the "leave-one-out" protocol (Li et al., 2019; Carlucci et al., 2019). We also evaluate our method on the Rotated MNIST and Fashion-MNIST datasets following Piratla et al. (2020).

**Implementation details.** We utilize ResNet-18 for all our experiments and ablation studies and report the accuracies on ResNet-50 for comparison as well. We evaluate the method in the online test-time domain generalization setting (Iwasawa & Matsuo, 2021), where we increment the target data iteratively and keep updating and evaluating the model.
When we report an ERM baseline, it means we directly evaluate the source-trained model without any adjustment at test time (Gulrajani & Lopez-Paz, 2020). The backbones are pretrained on ImageNet, the same as the previous methods. During training, we use a varied learning rate throughout the model and train the model for 10,000 iterations. In the meta-generalization procedure, we set the learning rate $\lambda_1$ as $1e^{-4}$ for all layers. During meta-target, we set the learning rate for the pretrained ResNet ($\lambda_2$) to $5e^{-5}$ and the learning rate of the variational module $\phi$ ($\lambda_3$) and classifiers as $1e^{-4}$ for all datasets. The batch size is set to 70 during training and to 20 during the test-time generalization procedure. At test time, we use the learning rate of $1e^{-4}$ for all the layers and update all parameters. All hyperparameters for source training and test time are selected on the training validation set, as in Iwasawa & Matsuo (2021). We use similar settings and hyperparameters for all domain generalization benchmarks. The method introduces a small computational cost for inference at test time and about 1% more parameters than the backbone model. The time cost for test-time generalization is competitive with other fine-tuning methods, with 5m 33s on PACS. We provide more implementation details and detailed computational costs in Appendix C.

Table 1: **Ablations on variational neighbor labels.** Results on PACS and TerraIncognita with ResNet-18. Our probabilistic formulation performs better than the common pseudo-labeling baseline for test-time domain generalization by considering the uncertainty. Incorporating more target information by the variational neighbor labels improves results further, especially when used in concert with meta-generalization. We provide per-domain results in Appendix F.

| | PACS | TerraIncognita |
|------------------------|----------|----------------|
| Pseudo-labeling baseline (eq. 1) | 81.3 ±0.3 | 41.2 ±0.4 |
| Probabilistic pseudo-labeling (eq. 3) | 82.0 ±0.2 | 42.5 ±0.5 |
| Variational neighbor-labeling (eq. 4) | 82.4 ±0.3 | 43.8 ±0.5 |
| Meta-generalization with variational neighbor labels (eq. 10) | 83.5 ±0.4 | 46.2 ±0.6 |

**Ablations on variational neighbor labels.** To show the benefits of the proposed method, we conduct an ablation on PACS and TerraIncognita. We first compare the probabilistic pseudo-labeling (eq. 3) with the common one (eq. 1). As shown in the first two rows of Table 1, the probabilistic formulation performs better, which demonstrates the benefits of modeling the uncertainty of the pseudo labels during generalization at test time. By incorporating more target information from the neighboring samples (eq. 4), the variational neighbor labels become more reliable, which benefits generalization on the target data. With the meta-generalization strategy (eq. 10), we learn the ability to incorporate more representative target information, leading to further performance improvements. To show the benefits of meta-generalization, we conduct additional experiments for meta-generalization with pseudo-labeling and meta-generalization with probabilistic pseudo-labeling, which achieve 82.0 and 82.7 on PACS, respectively. Meta-generalization can also improve the other pseudo-labeling methods, while the proposed variational pseudo-labeling improves the most.
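The calibration analysis that follows reports the Expected Calibration Error (Guo et al., 2017). As a reference for how this metric is typically computed, here is a minimal sketch that bins predictions by confidence and averages the gap between per-bin accuracy and confidence; the array names are placeholders.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Standard ECE: bin samples by confidence and average |accuracy - confidence|,
    weighted by the fraction of samples in each bin (Guo et al., 2017)."""
    confidences = np.asarray(confidences)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        acc = (predictions[mask] == labels[mask]).mean()
        conf = confidences[mask].mean()
        ece += mask.mean() * abs(acc - conf)
    return ece
```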
**Calibration ability.** To further show the benefits of the variational neighbor labels, we also investigate the calibration ability by measuring the Expected Calibration Error (Guo et al., 2017). We report hard and soft pseudo-labeling as baselines, as well as the results of the state-of-the-art method (Xiao et al., 2022) that considers uncertainty by variational inference. As shown in Figure 2, the error of our method is lower than the alternatives on all domains, demonstrating a better ability to model uncertainty at test time. By incorporating pseudo labels as latent variables with variational inference and considering neighboring target information, the proposed method models the uncertainty of the target samples more accurately. With the better-calibrated labels, the model achieves more robust generalization on the target domain at test time.

![Figure 2: Calibration ability on PACS. Variational neighbor labels consistently have a lower Expected Calibration Error.](image)

**Generalization in complex scenarios.** By considering the uncertainty and including more target information in the pseudo labels, our method can handle more complex test-time generalization scenarios. To demonstrate this ability, we conduct experiments with multiple target distributions on Rotated MNIST, as defined by Xiao et al. (2022). Specifically, we use 0°, 15°, 75° and 90° as source domains and 30°, 45° and 60° as targets. As shown in Table 2, the common MAP method (Wang et al., 2021) achieves good results on the single target domains, while it is unable to outperform an ERM baseline on the multiple target domains. The proposed method performs well under both settings and better than Xiao et al. (2022), which achieves generalization on each sample, demonstrating the generalization ability of our method in more complex scenarios.

Table 2: **Generalization in complex scenarios.** Our method generalizes well on both single and multiple target distributions.

| Target distribution | Single | Multiple |
|---------------------|--------|----------|
| ERM baseline | 95.6 | 95.6 |
| Wang et al. (2021) | 96.5 | 95.6 |
| Xiao et al. (2022) | 96.9 | 96.9 |
| **VNL** | **97.5 ±0.3** | **97.4 ±0.3** |

**Generalization with varying batch sizes.** Test-time generalization and adaptation methods usually require large batches of target samples to update the source-trained model. However, during real-world deployment, the number of available target samples may be limited, which constrains test-time generalization performance. In Figure 3 we compare with Tent (Wang et al., 2021) on PACS for varying batch sizes. Tent performs well with large batch sizes, but suffers with smaller batch sizes, e.g., 16, and is worse than the ERM baseline. By contrast, our method consistently achieves good results even with small target batch sizes, demonstrating the benefit of incorporating the uncertainty and representative neighboring information. We provide more detailed results in Appendix F.

**Generalization along with inference.** For more insights into the variational neighbor labels, we provide the online performance along with the generalization steps for the 'art' domain from PACS. As shown in Figure 4, starting from the same baseline accuracy, the gap between the results of variational neighbor labels and the hard pseudo labels becomes larger and larger along with the generalization steps. Variational neighbor labels achieve faster generalization of the source-trained model.
After 50 iterations, the performance of the hard pseudo labels is saturated and even drops due to the error accumulation resulting from inaccurate pseudo labels during model updating. By considering the uncertainty and neighboring information, our variational neighbor labels improve performance and are less prone to saturation, leading to better accuracy. **Orthogonality.** Since the proposed meta-learned variational neighbor labels focus on generating pseudo labels at test time, the method is orthogonal to other deployment techniques, e.g., data augmentation for generalization at test time (Zhang et al., 2022). Achieving test-time domain generalization compounded with these methods will further improve the performance. To demonstrate this, we conduct test-time generalization by our method with augmented target samples on PACS without altering the source training strategy. When adding similar augmentation as in (Zhang et al., 2022), we increase our results on ResNet-18 from 83.5% to 85.0% overall accuracy. We provide the complete table including the per-domain results in Appendix F. In the following, we report the results of our method in conjunction with augmentations. **State-of-the-art comparisons.** We compare our proposal with state-of-the-art test-time domain generalization, as well as some standard domain generalization and test-time adaptation methods. Note the latter methods are designed for single-source image corruption settings, so we report the reimplemented results from Jang et al. (2023). Table 3 shows the results on PACS, VLCS, Office-Home, and TerraIncognita for both ResNet-18 and ResNet-50 backbones. Our method is competitive on most of the datasets, except for Office-Home where the sample-wise generalization of Xiao et al. (2022) performs better. The reason can be that the representative neighboring information is more difficult to incorporate with a larger number of categories (e.g., 65 in Office-Home), which needs larger capacity models $\phi$. We have experimented with $\phi$ values and obtained a mean accuracy of 57.1 with 2 layers and a mean accuracy of 64.3 with 3 layers in $\phi$. Table 3: **State-of-the-art comparisons** for ResNet-18 (RN18) and ResNet-50 (RN50) backbones. Our results are averaged over five runs. Test-time adaptation results by Wang et al. (2021) and Liang et al. (2020) for domain generalization provided by Jang et al. (2023). Gray numbers for Xiao et al. (2022) based on our reimplementation. Our method is either best (bold) or runner-up (underlined). | | PACS | | VLCS | | Office-Home | TerraIncognita | |----------------------|------|-------|------|-------|-------------|----------------| | | RN18 | RN50 | RN18 | RN50 | RN18 | RN50 | | **Standard domain generalization** | | | | | | | | ERM baseline | 79.6 | 85.7 | 75.8 | 77.4 | 61.0 | 67.5 | | Arjovsky et al. (2019)| 80.9 | 83.5 | 75.1 | 78.5 | 58.0 | 64.3 | | Shi et al. (2022) | 82.0 | 85.5 | 76.9 | 77.8 | 62.0 | 68.6 | | **Test-time adaptation on domain generalization** | | | | | | | | Wang et al. (2021) | 83.9 | 85.2 | 72.9 | 73.0 | 60.9 | 66.3 | | Liang et al. (2020) | 82.4 | 84.1 | 65.2 | 67.0 | 62.6 | 67.7 | | **Test-time domain generalization** | | | | | | | | Iwasawa & Matsuo (2021)| 81.7 | 85.3 | 76.5 | 80.0 | 57.0 | 68.3 | | Dubey et al. (2021) | - | 84.1 | - | 78.0 | - | 67.9 | | Jang et al. (2023) | 81.9 | 84.1 | 77.3 | 77.6 | 63.7 | 68.6 | | Chen et al. (2023b) | 83.8 | - | 76.9 | - | 62.0 | - | | Xiao et al. 
(2022) | 84.1 | 87.5 | 77.8 | 78.6 | 66.0 | 71.0 | | **VNL** | $\mathbf{85.0 \pm 0.4}$ | $\mathbf{87.9 \pm 0.3}$ | $\mathbf{78.2 \pm 0.3}$ | $\mathbf{79.1 \pm 0.4}$ | $\mathbf{64.3 \pm 0.3}$ | $\mathbf{69.1 \pm 0.4}$ | Note that our method still outperforms other recent methods (Chen et al., 2023b; Iwasawa & Matsuo, 2021; Jang et al., 2023; Wang et al., 2021) on Office-Home. Moreover, since we consider the uncertainty of the variational neighbor labels, the proposed method solves some hard cases of the single-sample approach reported in Xiao et al. (2022). As shown in Figure 5, our method has low confidence in the uncertain samples, e.g., with different objectives or limited information, showing good calibration of our method, which is also demonstrated in Figure 2. With the proposed method, the model predicts these hard cases correctly, showing the effectiveness of test-time generalization with the meta-generalized variational neighbor labels in complex scenes. In addition, there are also some recent standard domain generalization methods achieving good performance. For instance, (Gao et al., 2022) achieved good results on PACS, VLCS, Office-Home, and TerraIncognita based on ResNet-50 by utilizing an extra dataset before training to meta-learn loss function. This implies that we can also improve by utilizing more datasets during training. We provide more comparisons to the standard domain generalization methods in Appendix E. Experiments on Rotated MNIST, Fashion-MNIST, and mini-DomainNet are also provided in Appendix E. Our method also achieves competitive performance on these datasets. **Limitations.** Since our method utilizes meta-learning and neighboring target information, it requires multiple source domains during training and small batches of target samples at test time, which can be a limitation in some environments. We consider a single-source and single-target-sample variant of our approach as a valuable investigation for future work. ### 5 Conclusion We cast test-time domain generalization as a probabilistic inference problem and model pseudo labels as latent variables in the formulation. By incorporating the uncertainty of the pseudo labels, the probabilistic formulation mitigates updating the source-trained model with inaccurate supervision, which arises due to domain shifts and leads to misspecified models. Based on the probabilistic formulation, we further propose variational neighbor labels under the designed meta-generalization setting, which estimates the pseudo labels by incorporating neighboring target information through variational inference and learns the ability to generalize the source-trained model. Ablation studies and further comparisons show the benefits, abilities, and effectiveness of our method on seven common domain generalization datasets. REFERENCES Ferran Alet, Maria Bauza, Kenji Kawaguchi, Nurullah Giray Kuru, Tomás Lozano-Pérez, and Leslie Kaelbling. Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time. In *Advances in Neural Information Processing Systems*, volume 34, pp. 29206–29217, 2021. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019. Yogesh Balaji, Swami Sankaranarayanan, and Rama Chellappa. MetaReg: Towards domain generalization using meta-regularization. In *Advances in Neural Information Processing Systems*, volume 31, pp. 998–1008, 2018. Sara Beery, Grant Van Horn, and Pietro Perona. 
Recognition in terra incognita. In *European Conference on Computer Vision*, pp. 456–473, 2018. Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. In *Advances in Neural Information Processing Systems*, volume 24, pp. 2178–2186, 2011. Gilles Blanchard, Aniket Anand Deshmukh, Ürun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. *The Journal of Machine Learning Research*, 22(1):46–100, 2021. Dhanajit Brahma and Piyush Rai. A probabilistic framework for lifelong test-time adaptation. *arXiv preprint arXiv:2212.09713*, 2022. Fabio M Carlucci, Antonio D’Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Domain generalization by solving jigsaw puzzles. In *IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2229–2238, 2019. Dian Chen, Dequan Wang, Trevor Darrell, and Sayna Ebrahimi. Contrastive test-time adaptation. In *IEEE Conference on Computer Vision and Pattern Recognition*, pp. 295–305, 2022. Jin Chen, Zhi Gao, Xinxiao Wu, and Jiebo Luo. Meta-causal learning for single domain generalization. *arXiv preprint arXiv:2304.03709*, 2023a. Liang Chen, Yong Zhang, Yibing Song, Ying Shan, and Lingqiao Liu. Improved test-time adaptation for domain generalization. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2023b. Qi Dou, Daniel C Castro, Konstantinos Kamnitsas, and Ben Glocker. Domain generalization via model-agnostic learning of semantic features. In *Advances in Neural Information Processing Systems*, 2019. Yingjun Du, Jun Xu, Huan Xiong, Qiang Qiu, Xiantong Zhen, Cees G M Snoek, and Ling Shao. Learning to learn with variational information bottleneck for domain generalization. In *European Conference on Computer Vision*, pp. 200–216, 2020. Yingjun Du, Xiantong Zhen, Ling Shao, and Cees G M Snoek. MetaNorm: Learning to normalize few-shot batches across domains. In *International Conference on Learning Representations*, 2021. Abhimanyu Dubey, Vignesh Ramanathan, Alex Pentland, and Dhruv Mahajan. Adaptive methods for real-world domain generalization. In *IEEE Conference on Computer Vision and Pattern Recognition*, pp. 14340–14349, 2021. Cian Eastwood, Ian Mason, Christopher KI Williams, and Bernhard Schölkopf. Source-free adaptation to measurement shift via bottom-up feature restoration. *arXiv preprint arXiv:2107.05446*, 2021. Yuming Fang, Weisi Lin, Zhenzhong Chen, Chia-Ming Tsai, and Chia-Wen Lin. A video saliency detection model in compressed domain. *IEEE transactions on circuits and systems for video technology*, 24(1):27–38, 2013. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International Conference on Machine Learning*, pp. 1126–1135. PMLR, 2017.
JiTVtCUOpS
As a continuation of my previous question, the use of cross-correlation for identifying lead-lag relationships suggests a focus on linear associations. May I inquire if this suggests that the algorithm's applicability is confined to variables that share a linear relationship (e.g., $X_1 = X_2^2$, the cross-correlation will return zero)?
Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators Lifan Zhao Shanghai Jiao Tong University mogician233@sjtu.edu.cn Yanyan Shen* Shanghai Jiao Tong University shenyy@sjtu.edu.cn Abstract Recently, channel-independent methods have achieved state-of-the-art performance in multivariate time series (MTS) forecasting. Despite reducing overfitting risks, these methods miss potential opportunities in utilizing channel dependence for accurate predictions. We argue that there exist locally stationary lead-lag relationships between variates, i.e., some lagged variates may follow the leading indicators within a short time period. Exploiting such channel dependence is beneficial since leading indicators offer advance information that can be used to reduce the forecasting difficulty of the lagged variates. In this paper, we propose a new method named LIFT that first efficiently estimates leading indicators and their leading steps at each time step and then judiciously allows the lagged variates to utilize the advance information from leading indicators. LIFT plays as a plugin that can be seamlessly collaborated with arbitrary time series forecasting methods. Extensive experiments on six real-world datasets demonstrate that LIFT improves the state-of-the-art methods by 5.4% in average forecasting performance. Our code is available at https://github.com/SJTU-Quant/LIFT. 1 Introduction Multivariate time series (MTS) forecasting, one of the most popular research topics, is a fundamental task in various domains such as weather, traffic, and finance. An MTS consists of multiple channels (a.k.a., variates\(^1\)), where each channel is a univariate time series. Many MTS forecasting researches argue each channel has dependence on other channels. Accordingly, numerous approaches adopt channel-dependent (CD) strategies and jointly model multiple variates by advanced neural architectures, including GNNs (Wu et al., 2020; Cao et al., 2020; Huang et al., 2023; Yi et al., 2023a), MLPs (Chen et al., 2023; Ekambaram et al., 2023; Wang et al., 2024a; Yi et al., 2023b), CNNs (Wu et al., 2023), Transformers (Zhou et al., 2021; Ni et al., 2023; Wang et al., 2024b; Liu et al., 2023a), and others (Shen et al., 2024; Jia et al., 2023; Fan et al., 2024). Unexpectedly, CD methods have been defeated by recently proposed channel-independent (CI) methods (Nie et al., 2023; Lee et al., 2024; Zhou et al., 2023; Jin et al., 2024; Cao et al., 2024; Chen et al., 2024; Dai et al., 2024) and even a simple linear model (Zeng et al., 2023; Li et al., 2023a; Xu et al., 2023). These CI methods separately forecast each univariate time series based on its own historical values, instead of referring to other variates. While only modeling cross-time dependence, CI Transformers (Nie et al., 2023; Zhou et al., 2023) surprisingly outperform CD Transformers that jointly model cross-time and cross-variate dependence (Grigsby et al., 2021; Zhang & Yan, 2023). One reason is that existing CD methods lack prior knowledge about channel dependence and may encounter the overfitting issue (Han et al., 2023). This gives rise to an interesting question: is there any explicit channel dependence that is effective to MTS forecasting? In this work, we turn the spotlight on the locally stationary lead-lag relationship between variates. An intriguing yet underestimated characteristic of many MTS is that the evolution of variates may --- *corresponding author. 
\(^1\)We use the terms “variate” and “channel” interchangeably. lag behind some other variates, termed as leading indicators. Leading indicators may directly influence the wave of other variates, while the influence requires a certain time delay to propagate and take effect. For example, an increasing concentration of an anti-fever drug in the blood may cause a decrease in body temperature after an hour but not immediately. On top of this, another common case is that both leading indicators and lagged variates depend on some latent factors, while the leading ones are the first to get affected. For example, a typhoon first cools down coastal cities and, after a few days, cools down inland cities. As such effects typically change little within a certain period, the lead-lag relationships are locally stationary once established. As illustrated in Figure 1a, the lagged variate and its leading indicators share similar temporal patterns across the lookback window and the horizon window. If a leading indicator evolves $\delta$-step ahead of the target variate, the latest $\delta$ steps of its lookback window will share similar temporal patterns with the future $\delta$ steps of the lagged variate. Particularly, when the lagged variate completely follows its leading indicator, the difficulty of forecasting $H$ steps for the lagged variate can be reduced to forecasting $H - \delta$ steps by previewing the advance information. Despite the advent of lead-lag relationships, the dynamic variation in leading indicators and leading steps poses the challenge to modeling channel dependence. As shown in Figure 1, the specific leading indicators and the corresponding leading steps can vary over time. In light of this, we propose a method named LIFT (short for Learning from Leading Indicators For MTS Forecasting), involving three key steps. First, we develop an efficient cross-correlation computation algorithm to dynamically estimate the leading indicators and the leading steps at each time step. Second, as depicted in Figure 2, we align each variate and its leading indicators via a target-oriented shift trick. Third, we employ a backbone to make preliminary predictions and introduce a Lead-aware Refiner to calibrate the rough predictions. It is noteworthy that many MTS are heterogeneous, where the variates are different dimensions of an object (e.g., wind speed, humidity, and air temperature in weather). In these cases, the lagged variates may be correlated with the leading indicators by sharing only a part of temporal patterns. To address this issue, we exploit desirable signals in the frequency domain and realize the Lead-aware Refiner by an Adaptive Frequency Mixer that adaptively filters out undesirable frequency components of leading indicators and absorbs the remaining desirable ones. The main contributions of this paper are summarized as follows. • We propose a novel method called LIFT that exploits the locally stationary lead-lag relationship between variates for MTS forecasting. LIFT works as a plug-and-play module and can seamlessly incorporate arbitrary time series forecasting backbones. • We introduce an efficient algorithm to estimate the leading indicators and the corresponding leading steps at any time step. We further devise a Lead-aware Refiner that adaptively leverages the informative signals of leading indicators in the frequency domain to refine the predictions of lagged variates. 
• Extensive experimental results on six real-world datasets demonstrate that LIFT significantly improves the state-of-the-art methods in both short-term and long-term MTS forecasting. Specifically, LIFT makes an average improvement of 7.9% over CI models and 3.0% over CD models. We also introduce a lightweight yet strong method LightMTS, which enjoys high parameter efficiency and achieves the best performance on popular Weather and Electricity datasets. Figure 2: Illustration of our key idea. In one case of test data, \( v_1 \) no longer leads \( v_3 \). Instead, the leading indicators of \( v_3 \) are \( v_2 \) and \( v_4 \), which lead by five and three steps, respectively. An intuitive idea is to shift \( v_2 \) and \( v_4 \) by the corresponding leading steps to keep them always aligned with \( v_3 \). 2 Preliminaries A multivariate time series (MTS)\(^2\) is denoted by \( X = \{ X^{(1)}, \ldots, X^{(C)} \} \), where \( C \) is the number of variates (a.k.a. channels) and \( X^{(j)} \) is the time series of the \( j \)-th variate. Given an \( L \)-length lookback window \( X_{t-L+1:t} = \{ X^{(j)}_{t-L+1}, \ldots, X^{(j)}_t \}_{j=1}^C \in \mathbb{R}^{C \times L} \), the MTS forecasting task at time \( t \) aims to predict \( H \) consecutive future time steps in the horizon window, i.e., \( X_{t+1:t+H} \in \mathbb{R}^{C \times H} \). We assume \( X^{(i)}_{t+1:t+H} \) is similar to \( X^{(j)}_{t+1-\delta:t+H-\delta} \) if variate \( i \) leads variate \( j \) by \( \delta \) steps at time \( t \). Through the lens of locally stationary lead-lag relationships, one can use recent observations to estimate the leading indicators and the leading steps. Specifically, the lead-lag relationship can be quantified by the cross-correlation coefficient between \( X^{(i)}_{t-L+1-\delta:t-\delta} \) and \( X^{(j)}_{t-L+1:t} \), which is defined as follows. **Definition 1 (Cross-correlation coefficient).** Assuming variate \( i \) is \( \delta \) steps ahead of variate \( j \) over the \( L \)-length lookback window, the cross-correlation coefficient between the two variates at time \( t \) is defined as: \[ R^{(j)}_{i,t}(\delta) = \frac{\text{Cov}(X^{(i)}_{t-L+1-\delta:t-\delta}, X^{(j)}_{t-L+1:t})}{\sigma^{(i)} \sigma^{(j)}} = \frac{1}{L} \sum_{t'=t-L+1}^{t} \frac{X^{(i)}_{t'-\delta} - \mu^{(i)}}{\sigma^{(i)}} \cdot \frac{X^{(j)}_{t'} - \mu^{(j)}}{\sigma^{(j)}}, \] where \( \mu^{(i)} \in \mathbb{R} \) and \( \sigma^{(i)} \in \mathbb{R} \) represent the mean and standard variation of the univariate time series within the lookback window, respectively. 3 The Lift Approach In this section, we propose our Lift method that dynamically identifies leading indicators and adaptively leverages them for MTS forecasting. 3.1 Overview Figure 3 depicts the overview of Lift, which involves 6 major steps as follows. 1. **Preliminary forecasting.** Given a lookback window \( X_{t-L+1:t} \), we first obtain rough predictions \( \hat{X}_{t+1:t+H} \) from a black-box backbone, which can be implemented by any existing time series forecasting model. 2. **Instance normalization.** Given \( X_{t-L+1:t} \) and \( \hat{X}_{t+1:t+H} \), we apply instance normalization (Kim et al., 2022) without affine parameters so as to unify the value range across the variates. Specifically, based on the mean and standard deviation of each variate in \( X_{t-L+1:t} \), we obtain a normalized lookback window \( X_{t-L+1:t} \) and normalized predictions \( \hat{X}_{t+1:t+H} \). 3. 
**Lead estimation.** Given \( X_{t-L+1:t} \), the Lead Estimator calculates the cross-correlation coefficients for pair-wise variates. For each variate \( j \), we select the \( K \) most possible leading indicators \( T^{(j)}_t \in \mathbb{R}^K \) (\( K \ll C \)) along with the corresponding leading steps \( \{ \delta^{(j)}_{i,t} \mid i \in T^{(j)}_t \} \) and cross-correlation coefficients \( R^{(j)}_{i,t} \in \mathbb{R}^K \). --- \(^2\) We use bold symbols to denote matrices of multiple variates. Figure 3: Overview of LIFT. All layers in the grey background are non-parametric. We depict the input of the lookback window by solid curves and the predictions of the horizon window by dashed curves. As an illustration, we choose the two most possible leading indicators for each target variate, e.g., the orange and the yellow ones are leading indicators of the red at time $t$. (4) **Target-oriented shifts.** After obtaining $T_t^{(j)}$ and $\{\delta_{i,t}^{(j)}\}_{i \in T_t^{(j)}}$ for variate $j$, we shift $X_{t-L+1:t}^{(i)}$ and $\tilde{X}_{t+1:t+H}^{(i)}$ by $\delta_{i,t}^{(j)}$ steps where $i \in T_t^{(j)}$. We thereby obtain a $j$-oriented MTS segment $S_t^{(j)} \in \mathbb{R}^{K \times H}$, where the $K$ leading indicators get aligned with variate $j$ in the horizon window. (5) **Lead-aware refinement.** The Lead-aware Refiner extracts signals from $S_t^{(j)}$ and refines the normalized preliminary predictions $\hat{X}_{t+1:t+H}^{(j)}$ as $\tilde{X}_{t+1:t+H}^{(j)}$. (6) **Instance denormalization.** Finally, we denormalize $\tilde{X}_{t+1:t+H}^{(j)}$ with the original mean and standard deviation, yielding the final predictions $\tilde{X}_{t+1:t+H}^{(j)}$. **Training scheme.** We can jointly train the backbone and Lead-aware Refiner by the MSE between $\tilde{X}_{t+1:t+H}^{(j)}$ and the ground truth $X_{t+1:t+H}$. Alternatively, given a pretrained and frozen backbone, we can precompute the preliminary predictions only once on training data, reducing the time of hyperparameter tuning and GPU memory occupation during training. **Technical challenges.** Notably, it is non-trivial to leverage the lead-lag relationships due to issues of efficiency and noise. As Eq. (1) requires $O(L)$, a brute-force estimation method that searches all possible $\delta$ in $\{1, \cdots, L\}$ requires $O(L^2)$ computations. Also, $S_t^{(j)}$ can contain some irrelevant patterns from leaders which are noise to the lagged variate. To tackle these issues, we implement the Lead Estimator by an efficient algorithm of $O(L \log L)$ complexity. And we develop the Lead-aware Refiner by an Adaptive Frequency Mixer that adaptively generates frequency-domain filters and mixes desirable frequency components according to the cross-correlations and variate states. ### 3.2 Lead Estimator Given the normalized lookback window $X_{t-L+1:t}$, the Lead Estimator first computes the cross-correlation coefficients between each pair of variate $i$ and variate $j$, based on an extension of Wiener–Khinchin theorem (Wiener, 1930) (see details in Appendix A). Formally, we estimate the coefficients for all possible leading steps in $\{0, \cdots, L - 1\}$ at once by the following equation: $$\left\{ R_{i,j}^{(t)}(\tau) \right\}_{\tau=0}^{L-1} = \frac{1}{L} F^{-1} \left( F(X_{t-L+1:t}^{(j)}) \odot F(X_{t-L+1:t}^{(i)}) \right),$$ where $F$ is the Fast Fourier Transform, $F^{-1}$ is its inverse, $\odot$ is the element-wise product, and the bar denotes the conjugate operation. The complexity is reduced to $O(L \log L)$. 
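A small sketch of how eq. (2) could be realized with `torch.fft` is shown below: after z-normalizing each variate over the lookback window, the circular cross-correlation coefficients for all variate pairs and all shifts are obtained with one FFT per channel. The function name and the (C, C, L) output layout are our own choices for illustration, not the paper's exact code.

```python
import torch

def lead_lag_correlations(x):
    """Cross-correlation coefficients of eq. (2) for every ordered pair of variates.

    x: (C, L) raw lookback window. Returns r of shape (C, C, L), where
    r[i, j, tau] estimates the correlation when variate i leads variate j by tau steps.
    The circular cross-correlation is computed with the FFT in O(L log L) per pair.
    Note that the (C, C, L) tensor can be large when C is big.
    """
    C, L = x.shape
    z = x - x.mean(dim=-1, keepdim=True)
    z = z / (z.pow(2).mean(dim=-1, keepdim=True).sqrt() + 1e-8)   # population std, as in eq. (1)
    Fz = torch.fft.rfft(z, n=L)                                   # (C, L//2 + 1)
    spec = torch.conj(Fz).unsqueeze(1) * Fz.unsqueeze(0)          # spec[i, j] = conj(F z_i) * F z_j
    return torch.fft.irfft(spec, n=L) / L                         # (C, C, L)
```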
Note that variates can exhibit either positive or negative correlations. The leading step $\delta_{i,t}^{(j)}$ between the target variate $j$ and its leading indicator $i$ is meant to reach the maximum absolute cross- correlation coefficient, i.e., \[ \delta_{i,t}^{(j)} = \arg\max_{1 \leq \tau \leq L-1} |R_{i,t}^{(j)}(\tau)|. \] (3) For simplicity, we denote the maximum absolute coefficient \(|R_{i,t}^{(j)}(\delta_{i,t}^{(j)})|\) as \(|R_{i,t}^{(j)*}|\). Then, we choose \(K\) variates that show the most significant lead-lag relationships as leading indicators of variate \(j\), which are defined as: \[ I_t^{(j)} = \arg\text{TopK}(|R_{i,t}^{(j)*}|). \] (4) Specifically, the \(K\) leading indicators \(I_t^{(j)}\) are sorted by cross-correlations in descending order, i.e., the \(k\)-th indicator in \(I_t^{(j)}\) has the \(k\)-th highest \(|R_{i,t}^{(j)*}|\) w.r.t. variate \(j\). Furthermore, we use \(R_t^{(j)} \in \mathbb{R}^K\) to denote an array of \(\{|R_{i,t}^{(j)*}|\}_{i \in I_t^{(j)}}\). Notably, our Lead Estimator is non-parametric and we can precompute the estimations only once on training data, instead of repeating the computations at every epoch. ### 3.3 LEAD-AWARE REFINER For each variate \(j\), the Lead-aware Refiner is to refine \(\hat{X}_{t+1:t+H}^{(j)}\) by its leading indicators. We will describe the refinement process for variate \(j\), and the other \(C - 1\) variates are refined in parallel. #### Target-oriented shifts For each leading indicator \(i \in I_t^{(j)}\), we shift its sequence by the leading step as follows: \[ X_{t+1:t+H}^{(i \rightarrow j)} = \begin{cases} X_{t+1-\delta_{i,t}^{(j)}: t+H-\delta_{i,t}^{(j)}}^{(i)}, & \text{if } \delta_{i,t}^{(j)} \geq H \\ X_{t+1-\delta_{i,t}^{(j)}: t} \| \hat{X}_{t+1:t+H-\delta_{i,t}^{(j)}}^{(i)}, & \text{otherwise} \end{cases} \] (5) where \(\|\) is the concatenation. For a leading indicator \(i\) that is negatively correlated with the variate \(j\), we flip its values at each time step to reflect \(R_{i,t}^{(j)*} < 0\). Formally, for each \(i \in I_t^{(j)}\), we have: \[ \text{turn}(X_{t+1:t+H}^{(i \rightarrow j)}) = \text{sign}(R_{i,t}^{(j)*}) \cdot X_{t+1:t+H}^{(i \rightarrow j)}. \] (6) We then collect \(\{\text{turn}(X_{t+1:t+H}^{(i \rightarrow j)}) \mid i \in I_t^{(j)}\}\) as a target-oriented MTS segment \(S_t^{(j)} \in \mathbb{R}^{K \times H}\). #### State estimation For a comprehensive understanding of leading indicators, it is noteworthy that the lead-lag patterns also depend on variate states. Different variates lie in their specific states with some intrinsic periodicities (or trends), e.g., solar illumination is affected by rains in the short term but keeps its daily periodicity. The state of a variate may also change over time, exhibiting different correlation strengths with other variates, e.g., correlations between the traffic speeds of two adjacent roads are strong within peak hours but much weaker within off-peak hours. Therefore, the variate states are informative signals that can guide us to filter out uncorrelated patterns. 
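The selection in Eqs. (3)-(4) and the target-oriented shifts of Eqs. (5)-(6) can be sketched as follows. This is a simplified NumPy illustration under our own naming, assuming the cross-correlation coefficients for every candidate variate and lag have already been estimated (e.g., as sketched above); excluding the target from its own leader set is our simplification.

```python
import numpy as np

def select_leading_indicators(R: np.ndarray, j: int, K: int):
    """Eqs. (3)-(4): pick the K most likely leading indicators of target variate j.

    R: (C, L) array where R[i, tau] is the estimated cross-correlation between
    candidate variate i shifted back by tau steps and target variate j.
    Returns (leaders, deltas, peaks): indices of the K leaders, their leading
    steps, and the signed peak correlations, sorted by |correlation| descending.
    """
    C, _ = R.shape
    deltas = np.abs(R[:, 1:]).argmax(axis=1) + 1     # Eq. (3): best lag per candidate
    peaks = R[np.arange(C), deltas]                  # signed peak correlation R*
    peaks[j] = 0.0                                   # our simplification: never pick j itself
    leaders = np.argsort(-np.abs(peaks))[:K]         # Eq. (4): argTopK over |R*|
    return leaders, deltas[leaders], peaks[leaders]

def target_oriented_segment(X_hist, X_pred, leaders, deltas, peaks, H: int):
    """Eqs. (5)-(6): build the j-oriented segment S of shape (K, H).

    X_hist: (C, L) normalized lookback window; X_pred: (C, H) normalized
    preliminary predictions. Each leader is shifted by its leading step so it
    aligns with the target in the horizon window, and negatively correlated
    leaders are sign-flipped.
    """
    L = X_hist.shape[1]
    rows = []
    for i, d, r in zip(leaders, deltas, peaks):
        if d >= H:
            # The whole aligned horizon is already observed in the lookback.
            s = X_hist[i, L - d : L - d + H]
        else:
            # Concatenate the last d observed steps with the leader's own predictions.
            s = np.concatenate([X_hist[i, L - d:], X_pred[i, : H - d]])
        rows.append(np.sign(r) * s)
    return np.stack(rows)
```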
Assuming there are \(N\) states in total, we estimate the state probabilities of variate \(j\) at time \(t\) by: \[ P_t^{(j)} = \text{softmax}\left(P_0^{(j)} + f_{\text{state}}(X_{t-L:t+1}^{(j)})\right), \] (7) where \(P_0^{(j)} \in \mathbb{R}^N\) represents the intrinsic state distribution of variate \(j\) and is a learnable parameter, \(f_{\text{state}} : \mathbb{R}^L \mapsto \mathbb{R}^N\) is implemented by a linear layer, and \(P_t^{(j)} = \{p_{t,n}^{(j)}\}_{n=1}^N \in \mathbb{R}^N\) includes the probabilities of all potential states at time \(t\). Our adaptive frequency mixer will take \(P_t^{(j)}\) to generate filters to filter out noisy channel dependence according to the variate state. #### Adaptive frequency mixer To extract valuable information from leading indicators, we propose to model cross-variate dependence in the frequency domain. Given the normalized predictions of variate \(j\) and its target-oriented MTS segment \(S_t^{(j)}\), we derive their Fourier transforms by: \[ V^{(j)} = \mathcal{F}(\hat{X}_{t+1:t+H}^{(j)}) \quad \text{and} \quad U^{(j)} = \mathcal{F}(S_t^{(j)}), \] (8) where $\mathcal{F}$ is the Fast Fourier Transform, $V^{(j)} \in \mathbb{C}^{\lfloor H/2 \rfloor + 1}$, and $U^{(j)} \in \mathbb{C}^{K \times (\lfloor H/2 \rfloor + 1)}$. Each element of $U^{(j)}$, denoted as $U_k^{(j)}$, is the frequency components of the $k$-th leading indicator. Let $\Delta_k^{(j)} = U_k^{(j)} - V^{(j)}$ denote the difference between variate $j$ and the $k$-th leading indicator. Intuitively, the preliminary predictions deserve more refinement from the leading indicators when the estimated correlation $R_t^{(j)}$ is large. To filter signals in $V^{(j)}$ and $U^{(j)}$, we employ a filter factory to generate $2K + 1$ frequency-domain filters as defined below: $$[r_{U,1}^{(j)}, \ldots, r_{U,K}^{(j)}, r_{\Delta,1}^{(j)}, \ldots, r_{\Delta,K}^{(j)}, r_V^{(j)}] = \sum_{n=1}^{N} p_n^{(j)} \cdot f_n(R_t^{(j)}),$$ where $f_n : \mathbb{R}^K \mapsto \mathbb{R}^{(2K+1)(\lfloor H/2 \rfloor + 1)}$ is a linear layer with parameters specific to the $n$-th state. On the one hand, we use the first $2K$ filters to model two kinds of lead-lag relationships: (1) variate $j$ is directly influenced by the $k$-th leader, and the ground-truth $V_{true}^{(j)}$ contains a degree of $U_k^{(j)}$, e.g., $V_{true}^{(j)} \approx V^{(j)} + r_{U,k}^{(j)} \odot U_k^{(j)}$; (2) variate $j$ is similar to the $k$-th leader when they are both influenced by a latent factor, and the ground-truth $V_{true}^{(j)}$ is the interpolation between $V^{(j)}$ and $U_k^{(j)}$, e.g., $V_{true}^{(j)} \approx (1 - r_{\Delta,k}^{(j)}) \odot V^{(j)} + r_{\Delta,k}^{(j)} \odot U = V^{(j)} + r_{\Delta,k}^{(j)} \odot \Delta_k^{(j)}$. On the other hand, we use $r_V^{(j)} \in \mathbb{R}^{\lfloor H/2 \rfloor + 1}$ to dismiss unreliable frequency components of $V^{(j)}$. Formally, we scale the frequency components by: $$\tilde{V}^{(j)} = r_V^{(j)} \odot V^{(j)}, \quad \tilde{U}_k^{(j)} = r_{U,k}^{(j)} \odot U_k^{(j)}, \quad \tilde{\Delta}_k^{(j)} = r_{\Delta,k}^{(j)} \odot \Delta_k^{(j)}.$$ Then, we gather information from $K$ leading indicators and mix the frequency components by: $$\tilde{V}^{(j)} = g \left( \tilde{V}^{(j)} \| \sum_{k=1}^{K} \tilde{U}_k^{(j)} \| \sum_{k=1}^{K} \tilde{\Delta}_k^{(j)} \right),$$ where $g : \mathbb{C}^{3(\lfloor H/2 \rfloor + 1)} \mapsto \mathbb{C}^{\lfloor H/2 \rfloor + 1}$ is a complex-valued linear layer. 
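Putting Eqs. (7)-(11) together, the sketch below illustrates one forward pass of the refiner for a single target variate in NumPy. The randomly initialized matrices stand in for the learned parameters ($P_0^{(j)}$, $f_{\text{state}}$, the state-specific filter factories $f_n$, and the complex-valued mixer $g$), and the shapes (e.g., a length-96 lookback) are illustrative assumptions, not the paper's settings; a real implementation would use batched, trainable layers.

```python
import numpy as np

rng = np.random.default_rng(0)
H, K, N, L_BACK = 96, 4, 4, 96           # horizon, #leaders, #states, lookback (illustrative)
F = H // 2 + 1                            # number of rFFT frequency components

# Random stand-ins for learned parameters: the state prior P0 and state head
# f_state of Eq. (7), the filter factories f_n of Eq. (9), and the mixer g of Eq. (11).
P0 = np.zeros(N)
W_state = 0.01 * rng.standard_normal((N, L_BACK))
W_filter = 0.01 * rng.standard_normal((N, K, (2 * K + 1) * F))
W_mix = 0.01 * (rng.standard_normal((F, 3 * F)) + 1j * rng.standard_normal((F, 3 * F)))

def refine_one_variate(x_hist_j, x_pred_j, segment, corr):
    """One Lead-aware Refiner pass for a single target variate j.

    x_hist_j: (L_BACK,) normalized lookback; x_pred_j: (H,) normalized preliminary
    predictions; segment: (K, H) target-oriented segment S; corr: (K,) array of |R*|.
    """
    # Eq. (7): state probabilities of variate j at time t.
    logits = P0 + W_state @ x_hist_j
    p = np.exp(logits - logits.max()); p /= p.sum()

    # Eq. (8): move the predictions and the segment to the frequency domain.
    V = np.fft.rfft(x_pred_j)                 # (F,)
    U = np.fft.rfft(segment, axis=1)          # (K, F)
    D = U - V                                 # differences Delta_k

    # Eq. (9): filter factory -- a state-weighted mixture of linear maps of the correlations.
    filters = np.einsum("n,nkf,k->f", p, W_filter, corr).reshape(2 * K + 1, F)
    r_U, r_D, r_V = filters[:K], filters[K:2 * K], filters[2 * K]

    # Eqs. (10)-(11): scale the components, aggregate over leaders, and mix them.
    mixed_in = np.concatenate([r_V * V, (r_U * U).sum(axis=0), (r_D * D).sum(axis=0)])
    V_refined = W_mix @ mixed_in              # complex "linear layer" g
    return np.fft.irfft(V_refined, n=H)       # refined normalized predictions (length H)
```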
Finally, we apply inverse Fast Fourier Transform and denormalization in order to derive the final refined predictions, which are formulated as: $$\hat{X}_{t+1:t+H}^{(j)} = \text{denorm}(\mathcal{F}^{-1}(\tilde{V}^{(j)})),$$ where we use the mean and standard deviation of $X_{t-L+1:t}^{(j)}$ for denormalization. ### 3.4 Discussion **Reasoning why CD models show inferior performance.** Many variates are unaligned with each other, while traditional models (e.g., Informer (Zhou et al., 2021)) simply mix multivariate information at the same time step. Consequently, they introduce outdated information from lagged variates which are noise and disturb predicting leaders. Though other models (e.g., Vector Auto-Regression (Giannone et al., 2010)) memorize CD from different time steps by static weights, they can suffer from overfitting issues since the leading indicators and leading steps vary over time. **LIFT can cooperate with arbitrary time series forecasting backbones.** When combining LIFT with a CI backbone, we decompose MTS forecasting into two stages which focus on modeling time dependence and channel dependence, respectively. This scheme avoids introducing noisy channel dependence during the first stage and may reduce optimization difficulty compared with traditional CD methods. When combining LIFT with a CD backbone, we expect LIFT to refine the rough predictions with the actual observations of leading indicators in $S_t^{(j)}$. **LIFT alleviates distribution shifts by dynamically selecting and shifting indicators.** Existing normalization-based methods (Kim et al., 2022; Fan et al., 2023; Liu et al., 2023b) handle distribution shifts of the statistical properties (e.g., mean and variance) in the lookback window and the horizon window. Our work is orthogonal to them as we take a novel investigation into a different kind of distribution shifts in channel dependence (see visualization in Appendix D.2). ### 4 LIGHTWEIGHT MTS FORECASTING WITH LIFT Thanks to the flexibility of LIFT, we introduce a lightweight MTS forecasting method named LightMTS, where a simple linear layer serves as a CI backbone. Following Li et al. (2023a), we conduct instance normalization before preliminary forecasting to alleviate distribution shifts. As we do not learn representations in the high-dimensional latent space, LightMTS is more lightweight than popular CD models, including Transformers (Zhang & Yan, 2023; Liu et al., 2023a) and CNNs (Wu et al., 2023). Empirical evidence is provided in Appendix D.1, where the parameter efficiency of LightMTS keeps similar to DLinear Zeng et al. (2023). ### 5 EXPERIMENTS #### 5.1 EXPERIMENTAL SETTINGS **Datasets.** We conduct extensive experiments on six widely-used MTS datasets, including Weather (Zeng et al., 2023), Electricity (Wu et al., 2020), Traffic (Lai et al., 2018), Solar (Liu et al., 2023a), Wind (Liu et al., 2022), and PeMSD8 (Song et al., 2020). We provide the dataset details in Appendix C.1 and conduct experiments on more datasets in Appendix D.3. **Comparison Methods.** As LIFT can incorporate arbitrary time series forecasting backbones, we verify the effectiveness of LIFT with (i) two state-of-the-art CI models: PatchTST (Nie et al., 2023) and DLinear (Zeng et al., 2023); (ii) the state-of-the-art CD model: Crossformer (Zhang & Yan, 2023); (iii) a classic CD model: MTGNN (Wu et al., 2020). We use them to instantiate the backbone of LIFT, while we keep the same model hyperparameters for fair comparison. 
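As a small, self-contained illustration of how the preceding pieces wrap around a backbone, the sketch below shows per-variate instance normalization (step (2)), a one-layer linear channel-independent backbone in the spirit of LightMTS, and the denormalization of Eq. (12) (step (6)). Names and shapes are ours, and the Lead-aware Refiner, which would act on the normalized predictions before denormalization, is omitted for brevity.

```python
import numpy as np

def lightmts_preliminary_forecast(X_hist: np.ndarray, W: np.ndarray, b: np.ndarray):
    """X_hist: (C, L) raw lookback; W: (H, L) and b: (H,) of a shared linear backbone.

    Each variate is normalized by its own lookback statistics, forecast by the
    channel-independent linear map, and denormalized as in Eq. (12).
    Returns (C, H) predictions.
    """
    mu = X_hist.mean(axis=1, keepdims=True)            # per-variate mean
    sigma = X_hist.std(axis=1, keepdims=True) + 1e-8   # per-variate std
    X_norm = (X_hist - mu) / sigma                      # instance normalization, step (2)
    X_pred_norm = X_norm @ W.T + b                      # linear CI backbone (LightMTS-style)
    # In LIFT, the Lead-aware Refiner would refine X_pred_norm here,
    # before the denormalization below.
    return X_pred_norm * sigma + mu                     # instance denormalization, step (6)
```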
We also include the baselines of PatchTST, such as FEDformer (Zhou et al., 2022) and Autoformer (Wu et al., 2021). **Setups.** All of the methods follow the same experimental setup with the forecast horizon $H \in \{24, 48, 96, 192, 336, 720\}$ for both short-term and long-term forecasting. We collect some baseline results reported by PatchTST to compare performance with LightMTS, where PatchTST has tuned the lookback length $L$ of FEDformer and Autoformer. For other methods, we set $L$ to 336. We use Mean Squared Error (MSE) and Mean Absolute Error (MAE) as evaluation metrics. ### 5.2 PERFORMANCE EVALUATION Table 1 compares the forecasting performance between the four state-of-the-art methods and LIFT on the six MTS datasets, showing that LIFT can outperform the SOTA methods in most cases. Specifically, LIFT improves the corresponding backbone by 5.4% on average. **Improvement over CI Backbones.** LIFT makes an average improvement of 7.9% over PatchTST and DLinear on the six datasets. Notably, PatchTST and DLinear surpass Crossformer and MTGNN by a large margin on Weather, Electricity, and Traffic datasets, indicating the challenge of modeling channel dependence. Intriguingly, LIFT significantly improves CI backbones by an average margin of 4.7% on these challenging datasets, achieving the best performance in most cases. This confirms that LIFT can reduce overfitting risks by introducing prior knowledge about channel dependence. **Improvement over CD Backbones.** LIFT makes an average improvement of 3.0% over Crossformer and MTGNN on the six datasets. As CD backbones outperform CI ones on Solar, Wind, and Table 1: Performance comparison in terms of forecasting errors. We highlight the better results between each pair of backbones and LIFT in **bold** and the best results among all methods on each dataset with _underlines_. We show the relative improvement of LIFT over the corresponding backbone in the rightmost column. 
| Method | PatchTST + LIFT | DLinear + LIFT | Crossformer + LIFT | MTGNN + LIFT | |----------------|-----------------|----------------|--------------------|--------------| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | | Weather | 24 | 0.091 | 0.178 | 0.145 | 0.146 | 0.152 | 0.209 | 0.132 | 0.086 | 0.126 | 0.086 | 0.126 | 4.7% | | | 48 | 0.109 | 0.164 | 0.145 | 0.157 | 0.181 | 0.144 | 0.105 | 0.126 | 0.123 | 0.086 | 0.165 | 5.3% | | | 96 | 0.152 | 0.199 | 0.146 | 0.176 | 0.237 | 0.145 | 0.203 | 0.114 | 0.209 | 0.114 | 0.157 | 2.0% | | | 192 | 0.197 | 0.243 | 0.190 | 0.238 | 0.228 | 0.282 | 0.189 | 0.249 | 0.197 | 0.264 | 0.262 | 2.0% | | | 336 | 0.249 | 0.283 | 0.243 | 0.281 | 0.265 | 0.319 | 0.243 | 0.292 | 0.246 | 0.309 | 0.245 | 3.0% | | | 720 | 0.320 | 0.335 | 0.315 | 0.333 | 0.323 | 0.362 | 0.317 | 0.349 | 0.323 | 0.364 | 0.321 | 3.0% | | Electricity | 24 | 0.099 | 0.196 | 0.094 | 0.190 | 0.110 | 0.209 | 0.099 | 0.197 | 0.116 | 0.093 | 0.193 | 3.6% | | | 48 | 0.115 | 0.210 | 0.120 | 0.212 | 0.208 | 0.234 | 0.158 | 0.216 | 0.186 | 0.121 | 0.215 | 4.0% | | | 96 | 0.150 | 0.240 | 0.128 | 0.240 | 0.237 | 0.255 | 0.142 | 0.233 | 0.138 | 0.238 | 0.138 | 2.9% | | | 192 | 0.148 | 0.240 | 0.147 | 0.239 | 0.153 | 0.249 | 0.148 | 0.242 | 0.159 | 0.259 | 0.155 | 2.7% | | | 336 | 0.167 | 0.261 | 0.163 | 0.257 | 0.169 | 0.267 | 0.163 | 0.261 | 0.192 | 0.293 | 0.176 | 2.7% | | | 720 | 0.202 | 0.291 | 0.195 | 0.289 | 0.203 | 0.301 | 0.198 | 0.295 | 0.264 | 0.353 | 0.224 | 3.0% | | Traffic | 24 | 0.323 | 0.235 | 0.306 | 0.214 | 0.371 | 0.267 | 0.347 | 0.255 | 0.483 | 0.273 | 0.392 | 0.246 | 7.3% | | | 48 | 0.342 | 0.240 | 0.329 | 0.236 | 0.393 | 0.276 | 0.367 | 0.260 | 0.513 | 0.290 | 0.420 | 0.289 | 4.4% | | | 96 | 0.367 | 0.258 | 0.358 | 0.252 | 0.423 | 0.298 | 0.405 | 0.281 | 0.522 | 0.326 | 0.459 | 0.302 | 4.2% | | | 192 | 0.385 | 0.259 | 0.373 | 0.251 | 0.423 | 0.287 | 0.413 | 0.281 | 0.522 | 0.296 | 0.490 | 0.283 | 3.3% | | | 336 | 0.398 | 0.265 | 0.389 | 0.262 | 0.436 | 0.296 | 0.426 | 0.288 | 0.530 | 0.300 | 0.517 | 0.303 | 1.9% | | | 720 | 0.434 | 0.287 | 0.429 | 0.286 | 0.466 | 0.315 | 0.454 | 0.307 | 0.584 | 0.369 | 0.541 | 0.322 | 5.4% | | Solar | 24 | 0.095 | 0.160 | 0.087 | 0.147 | 0.133 | 0.219 | 0.093 | 0.149 | 0.082 | 0.134 | 0.079 | 0.129 | 11.0% | | | 48 | 0.155 | 0.200 | 0.145 | 0.200 | 0.190 | 0.267 | 0.145 | 0.197 | 0.146 | 0.203 | 0.140 | 0.178 | 11.0% | | | 96 | 0.185 | 0.237 | 0.164 | 0.231 | 0.202 | 0.285 | 0.185 | 0.238 | 0.182 | 0.227 | 0.182 | 0.206 | 6.2% | | | 192 | 0.205 | 0.260 | 0.190 | 0.245 | 0.249 | 0.309 | 0.214 | 0.204 | 0.254 | 0.197 | 0.250 | 0.210 | 7.6% | | | 336 | 0.200 | 0.252 | 0.194 | 0.249 | 0.269 | 0.324 | 0.198 | 0.260 | 0.216 | 0.257 | 0.204 | 0.254 | 7.5% | | | 720 | 0.229 | 0.282 | 0.203 | 0.261 | 0.271 | 0.327 | 0.207 | 0.260 | 0.211 | 0.250 | 0.202 | 0.255 | 8.5% | | Wind | 24 | 0.137 | 0.179 | 0.131 | 0.175 | 0.151 | 0.198 | 0.136 | 0.182 | 0.122 | 0.173 | 0.121 | 0.168 | 3.8% | | | 48 | 0.163 | 0.200 | 0.155 | 0.196 | 0.175 | 0.214 | 0.159 | 0.200 | 0.147 | 0.194 | 0.147 | 0.189 | 3.4% | | | 96 | 0.186 | 0.216 | 0.165 | 0.213 | 0.197 | 0.230 | 0.169 | 0.212 | 0.181 | 0.176 | 0.208 | 0.208 | 4.1% | | | 192 | 0.201 | 0.239 | 0.191 | 0.228 | 0.216 | 0.248 | 0.193 | 0.219 | 0.189 | 0.187 | 0.216 | 0.223 | 4.4% | | | 336 | 0.216 | 0.239 | 0.202 | 0.234 | 0.233 | 0.258 | 0.205 | 0.238 | 0.201 | 0.240 | 0.199 | 0.232 | 4.6% | | | 720 | 0.231 | 0.253 | 0.215 | 0.247 | 0.254 | 0.278 | 0.225 | 0.256 | 0.237 | 0.286 | 0.224 | 
0.254 | 6.0% | | PMMSD8 | 24 | 0.289 | 0.247 | 0.285 | 0.246 | 0.361 | 0.318 | 0.306 | 0.265 | 0.303 | 0.253 | 0.299 | 0.252 | 5.2% | | | 48 | 0.367 | 0.281 | 0.356 | 0.277 | 0.475 | 0.378 | 0.386 | 0.303 | 0.342 | 0.271 | 0.340 | 0.270 | 5.8% | | | 96 | 0.445 | 0.316 | 0.410 | 0.309 | 0.563 | 0.421 | 0.449 | 0.356 | 0.375 | 0.290 | 0.360 | 0.286 | 7.6% | | | 192 | 0.439 | 0.340 | 0.411 | 0.337 | 0.522 | 0.411 | 0.522 | 0.357 | 0.409 | 0.282 | 0.381 | 0.269 | 7.6% | | | 336 | 0.562 | 0.366 | 0.511 | 0.353 | 0.648 | 0.462 | 0.532 | 0.439 | 0.318 | 0.430 | 0.310 | 0.460 | 9.1% | | | 720 | 0.653 | 0.403 | 0.563 | 0.378 | 0.748 | 0.519 | 0.597 | 0.414 | 0.488 | 0.356 | 0.468 | 0.338 | 12.3% | PeMSD8, we conjecture that these datasets have fewer distribution shifts in channel dependence, leading to fewer overfitting risks. Even though the CD backbones have benefited from channel dependence, LIFT can still refine their predictions, e.g., improving Crossformer by 4.1% on Solar. This indicates that existing CD approaches cannot fully exploit the lead-lag relationships without prior knowledge about the dynamic variation of leading indicators and leading steps. Moreover, Crossformer mixes information from the variates that show similarity at the same time step but pays insufficient attention to the different yet informative signals of leading indicators. MTGNN learns a static graph structure among variates on the training data and aggregates information within a fixed subset of variates. MTGNN may well suffer from distribution shifts in channel dependence, while LIFT dynamically selects leading indicators and reduces overfitting risks. **LightMTS as a Strong Baseline.** Moreover, we compare the performance of LightMTS and all baselines on Weather, Electricity, and Traffic datasets. We borrow the baseline results from the paper of PatchTST with $H \in \{96, 192, 336, 720\}$. As shown in Figure 5a, LightMTS with a simple linear layer as its backbone still shows considerable performance among the state-of-the-art models. In particular, LightMTS surpasses PatchTST, the complex Transformer model, by 3.2% on Weather and 0.7% on Electricity. However, PatchTST significantly outperforms LightMTS on the Traffic dataset. As Traffic contains the greatest number of variates with complex temporal patterns, it requires a strong backbone to model the intricate cross-time dependence. Nevertheless, LightMTS is still the most competitive baseline on Traffic. ### 5.3 Ablation Study To verify the effectiveness of our designs, we introduce three variants of LightMTS by removing the influence term $\sum_{k=1}^K U_j^{(j)}$ in Eq. (11), removing the difference term $\sum_{k=1}^K \Delta_k^{(j)}$ in Eq. (11), and directly using $V_j^{(j)}, \sum_{k=1}^K U_j^{(j)},$ and $\sum_{k=1}^K \Delta_k^{(j)}$ in Eq. (11), respectively. Figure 5: (a) Performance comparison between LightMTS and all baselines; (b) Performance comparison between variants of LightMTS; (c) Performance of DLinear+LIFT under different numbers of the selected leading indicators (i.e., $K$) and the states (i.e., $N$). As shown in Figure 5b, we conduct experiments on these variants with $H$ set to 96, reporting the relative MSE w.r.t. LightMTS on Weather, Electricity, and Traffic datasets. With both the influence and the difference involved, LightMTS considers two kinds of lead-lag relationships and keeps the best performance across the datasets. 
In contrast, LightMTS w/o influence and LightMTS w/o difference only consider one-sided information of leading indicators, thus showing inferior performance, especially on the Electricity dataset. Furthermore, LightMTS w/o filter achieves the worst results in all the cases, which fails to adaptively filter out the noise in leading indicators. 5.4 Hyperparameter Study Our method introduces merely two additional hyperparameters, i.e., the number of selected leading indicators $K$ and the number of states $N$. Thus it requires a little labor for hyperparameter selection. With DLinear as the backbone and $H$ set to 96, we study the hyperparameter sensitivity of LIFT. As shown in Figure 5c, LIFT achieves lower MSE with an increasing $K$ on most datasets. Nevertheless, LIFT may well include more noise with a too large $K$ (e.g., on the Wind dataset), resulting in performance degradation. Besides, LIFT cannot enjoy significant improvement with a larger $K$ on the Electricity dataset, where the lead-lag relationships are perhaps more sparse. As for variate states, LIFT achieves lower MSE with an increasing $N$ in most cases. We observe the most significant performance drop on Weather when ignoring the variate states. It is noteworthy that the variates of Weather (e.g., wind speed, humidity, and air temperature) are recorded by various kinds of sensors, and the lead-lag patterns naturally vary with the variate states. 6 Conclusion In this work, we rethink the channel dependence in MTS and highlight the locally stationary lead-lag relationship between variates. We propose a novel method called LIFT that efficiently estimates the relationships and dynamically incorporates leading indicators in the frequency domain for MTS forecasting. LIFT can work as a plug-and-play module and is generally applicable to arbitrary forecasting models. We further introduce LightMTS as a lightweight yet strong baseline for MTS forecasting, which keeps similar parameter efficiency to linear models and shows considerable performance. We anticipate that the lead-lag relationship can offer a novel cross-time perspective on the channel dependence in MTS, which is a promising direction for the future development of channel-dependent Transformers or other complex neural networks. ACKNOWLEDGEMENTS This work is supported by the National Key Research and Development Program of China (2022YFE0200500), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and SJTU Global Strategic Partnership Fund (2021SJTU-HKUST). REFERENCES Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, and Qi Zhang. Spectral temporal graph neural network for multivariate time-series forecasting. In NeurIPS 2020. ACM, November 2020. URL https://nips.cc/virtual/2020/public/poster_cdf6581cb7aca4b7e19ef136c6e601a5.html. Defu Cao, Furong Jia, Sercan O Arik, Tomas Pfister, Yixiang Zheng, Wen Ye, and Yan Liu. TEMPO: Prompt-based generative pre-trained transformer for time series forecasting. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=YH5wI2OuU0. Peng Chen, Yingying Zhang, Yunyao Cheng, Yang Shu, Yihang Wang, Qingsong Wen, Bin Yang, and Chenjuan Guo. Pathformer: Multi-scale transformers with adaptive pathways for time series forecasting. In International Conference on Learning Representations (ICLR), 2024. Si-An Chen, Chun-Liang Li, Nate Yoder, Sercan O. Arik, and Tomas Pfister. 
Tsmixer: An all-mlp architecture for time series forecasting. arXiv preprint arXiv:2303.06053, March 2023. Tao Dai, Beiliang Wu, Peiyuan Liu, Naiqi Li, Jigang Bao, Yong Jiang, and Shu-Tao Xia. Periodicity decoupling framework for long-term series forecasting. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=dp27P5HBbt. Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. Tsmixer: Lightweight mlp-mixer model for multivariate time series forecasting. arXiv preprint arXiv:2306.09364, June 2023. doi: 10.1145/3580305.3599533. Wei Fan, Pengyang Wang, Dongkun Wang, Dongjie Wang, Yuanchun Zhou, and Yanjie Fu. Dish-ts: A general paradigm for alleviating distribution shift in time series forecasting. In AAAI Conference on Artificial Intelligence, 2023. URL https://api.semanticscholar.org/CorpusID:257232506. Xinyao Fan, Yueying Wu, Chang Xu, Yuhao Huang, Weiqing Liu, and Jiang Bian. MG-TSD: Multi-granularity time series diffusion models with guided learning process. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=CZiY6OLktd. Domenico Giannone, Martha Banbura, and Lucrezia Reichlin. Large bayesian vector auto regressions. ULB Institutional Repository, 2010. URL https://api.semanticscholar.org/CorpusID:125553391. Jake Grigsby, Zhe Wang, Nam Nguyen, and Yanjun Qi. Long-range transformers for dynamic spatiotemporal forecasting. arXiv preprint arXiv:2109.12218, September 2021. Lu Han, Han-Jia Ye, and De-Chuan Zhan. The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting. arXiv preprint arXiv:2304.05206, April 2023. Qihe Huang, Lei Shen, Ruixin Zhang, Shouhong Ding, Binwu Wang, Zhengyang Zhou, and Yang Wang. Crossgnn: Confronting noisy multivariate time series via cross interaction refinement. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 46885–46902. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/9278abf072b58caf21d48dd670b4c721-Paper-Conference.pdf.
lgmCGI2IpI
In the experiments, does the baseline “FULL” use the entire labeled set for training? And does the proposed AL algorithm, which queries far fewer labels, often surpass “FULL” in Figure 3? If so, it seems interesting to understand the reasons behind this. E.g., is it because AQOT is better at filtering out noise?
AN EFFICIENT QUERY STRATEGY FOR ACTIVE LEARNING VIA OPTIMAL TRANSPORT Anonymous authors Paper under double-blind review ABSTRACT Active Learning (AL) aims to reduce labeling costs by iteratively querying instances. Existing AL methods typically query instances based on either informativeness or representativeness. Only considering informativeness leads to sample bias. Only considering representativeness leads to query amount of instances before the optimal decision boundary is found. It is essential to consider both when querying instances. However, current hybrid methods are also time-consuming. To query instance efficiently while considering both informativeness and representativeness, we propose an efficient active query strategy based on optimal transport called Active Query by Optimal Transport (AQOT). Optimal Transport (OT) enables us to measure the difference between two distributions efficiently, allowing us considering the distribution of instances easily. Via entropy regularization, we can solve OT efficiently. Specifically, we make use of the sparseness of the solution of OT to querying the most informative instance while considering representativeness. Additionally, we introduce a dynamic adjustment to AQOT. By concatenating AQOT to multiple classification models, we show AQOT is a broad-spectrum active query strategy. Experimental results demonstrate that our method surpasses state-of-the-art active learning methods and shows high efficiency. 1 INTRODUCTION Many machine learning methods require a large number of labeled instances to train the model. Typically, these labels are assigned by human annotators, resulting in high labeling costs. In some domains of expertise like medical image recognition, data labeling is extremely expensive. Active learning is one of the main approaches to reduce the labeling cost (Settles, 2009). It continuously selects the most helpful instances to query from the oracles (e.g., human annotators) and aims to query as few instances as possible to improve the model most. Due to the increasing demands of labeling data to train more complex models like deep neural network, active learning has received broad attention (Liu et al., 2022). Based on how we get unlabeled instances, active learning can be categorized into three scenarios (Settles, 2009). The first scenario is pool-based active learning, where all unlabeled instances are collected in a pool. We select query instances from the pool based on their utility (Lewis & Galett, 1994). Pool-based active learning is a well-motivated scenario used in many machine learning tasks (Settles et al., 2008; Beluch et al., 2018). The second scenario is stream-based active learning (Zhu et al., 2007), where unlabeled instances are obtained from data stream. We must decide whether to query the instance once we get it. The last scenario is membership query synthesis, where query instances are generated based on the hypothesis model rather than being selected from existing unlabeled instances (Angluin, 1988; Tran et al., 2019). In this paper, we follow the pool-based active learning scenario. When evaluating the utility of instances, most existing active learning strategies can be categorized into two main approaches: assessing informativeness and assessing representativeness. 
The former selects instances with the highest informativeness based on the assessment strategy, including entropy, distance and confidence (Guo & Schuurmans, 2007; Guo & Greiner, 2007; Bondu et al., 2010; Yang & Loog, 2016; Gal et al., 2017). Ning et al. (2022) proposed an active query method for open-set annotation based on uncertainty. Yan & Huang (2018) proposed an informative measurement for multi-labeling active learning, enhancing the adaptability of active learning methods. Additionally, Yoo & Kweon (2019) proposed a target-agnostic loss prediction method to select samples that tasks are most uncertain. Furthermore, Konyushkova et al. (2017) introduced an approach centered on training a regressor that predicts the expected error reduction for candidate samples. Li & Guo (2013a) introduced a multi-label active learning strategy based on max-margin. However, the common issue of them is ignoring the distribution of all instances, which leads to sample bias when querying instances. The latter selects instances that represent the overall unlabeled instances. Two typical means to explore the representativeness are clustering methods and optimal experimental design methods (Brinker, 2003; Wu et al., 2006; Fu et al., 2013; Reitmaier et al., 2015; Ye & Guo, 2017). Wang & Ye (2015) proposed a batch-mode active learning strategy under empirical risk minimization principle, introducing techniques to select samples that enhance the overall representation. Sener & Savarese (2018) proposed a strategy to query diversity samples, particularly relevant in the context of convolutional networks. However, focusing on representativeness usually queries a lot of instances before we get close to the true decision boundary, which leads to large labeling cost. It is essential to query instances taking both informativeness and representativeness into consideration. Many hybrid methods have been proposed (Huang et al., 2014; Li & Guo, 2013b). Sinha et al. (2019) proposed a hybrid method using a variational autoencoder and an adversarial network. Du et al. (2017) proposed a general active learning framework to fuse informativeness and representativeness. However, it is still time-consuming for current hybrid method to explore the representativeness of instances. This paper proposes an efficient hybrid Active Query strategy by Optimal Transport called AQOT. Specifically, we establish two Optimal Transport (OT) models from unlabeled instances to positive instances and negative instances respectively. We design the active query strategy by examining the differences in the distributions of coefficient vectors between these two models. Furthermore, noticing that the quality of the solution is influenced by the initially labeled instances, we propose a dynamic adjustment to AQOT to encourage early exploration. AQOT outperforms state-of-the-art active learning methods with high efficiency. Besides, we empirically concatenate AQOT with mainstream classification models and verify it is a broad-spectrum strategy. The rest of the paper is organized as follows. We introduce preliminaries in Section 2. Then we describe our approach in Section 3. Section 4 reports the experiments, followed by the conclusion in Section 5. ## 2 Preliminary Throughout the paper, we denote scalars by normal letters (e.g., \( y \)). We denote vectors and matrices by boldface lower and upper case letters respectively (e.g., \( x \) for vector and \( X \) for matrix). 
We denote by \( \text{diag}(a) \) the diagonal matrix with main diagonal equal to \( a \). We denote the \( i \)-th row and \( j \)-th column of \( X \) by \( X_i \) and \( X_j \) respectively. We denote sets by upper case letters with mathbb fonts (e.g., \( \mathbb{X} \)). For \( X, Y \in \mathbb{R}^{m \times n} \), we denote by \( \langle X, Y \rangle = \sum_{ij} X_{ij}Y_{ij} \). For a positive integer \( d \), we denote by \( 1_d \) and \( \Delta_d \) the \( d \)-dimensional all-one vector and the \( d \)-dimensional simplex respectively. For a positive integer \( n \), we denote by \( [n] = \{1, \ldots, n\} \).

### 2.1 Optimal Transport

In an OT problem, we transform one probability distribution into another. The goal is to minimize the total cost of the transformation (Torres et al., 2021). By establishing an OT model between two distributions, we can intuitively see their connections, and we can easily take the distribution of instances into consideration when querying instances via OT.

We illustrate how OT works using a toy data set as an example. In this demonstration, we establish two OT models: one from the unlabeled instances to the positive instances and another from the unlabeled instances to the negative instances, just as we do in the following experiments. We treat each instance equally, i.e., we assign a probability value \( 1/u \) to each unlabeled instance, \( 1/p \) to each positive instance, and \( 1/n \) to each negative instance. During the transport process, unlabeled instances tend to transport their mass to the nearest instances. The data set and the coefficient matrix are shown in Figure 1.

Figure 1: (a) The plot of a toy data set. Six labeled instances are marked from A to F and four example unlabeled instances are numbered from I to IV. The dashed line represents the decision boundary. (b) The coefficient matrix. The size of each circle represents the mass that an unlabeled instance transports to a labeled instance. I and II are indeed positive instances, and III and IV are negative instances.

We denote by \( u \in \Delta_u \) and \( l \in \Delta_l \) two probability distributions respectively. The set of all admissible couplings \( T(u,l) \) is:
\[
T(u,l) = \{ T \in \mathbb{R}_+^{u \times l} \mid T1_l = u, T^\top 1_u = l \}, \] (1)
where \( T \) is the coefficient matrix of this problem and \( T_{ij} \) is the amount of mass transported from \( u_i \) to \( l_j \). We denote by \( C \in \mathbb{R}^{u \times l} \) the cost matrix, where \( C_{ij} \) is the cost of transporting unit mass from the position of \( x_i \) to the position of \( x_j \). We take the Euclidean distance between two instances as the cost, i.e., \( C_{ij} = \|x_i - x_j\|_2 \). The goal of OT is to minimize the total transport cost from \( u \) to \( l \):
\[
\min_{T \in T(u,l)} \langle C, T \rangle = \min_{T \in T(u,l)} \sum_{i \in [u]} \sum_{j \in [l]} C_{ij} T_{ij}. \] (2)
Though equation (2) can be solved by any linear programming algorithm (Kantorovitch, 1958), the computational cost of solving it exactly is unacceptable for large-scale problems. To address this, entropy regularization has been introduced (Cuturi, 2013), enabling fast yet satisfactory solutions to the entropy-regularized OT problem:
\[
OT(u,l) = \min_{T \in T(u,l)} \langle C, T \rangle - \lambda \cdot H(T), \] (3)
where \( H(T) = -\sum_{ij} T_{ij} (\log T_{ij} - 1) \) and \( \lambda \) is the regularization parameter. A larger \( \lambda \) encourages a more uniform coefficient distribution.

3 APPROACH

In this section, we describe the approach.
Specifically, we first solve the entropy-regularized OT problem with the Sinkhorn-Knopp algorithm (Cuturi, 2013). We use the standard deviation of the coefficient vectors to reflect the certainty about unlabeled instances, and we propose our active query strategy based on it. Finally, we improve the active query strategy with a dynamic adjustment.

We denote by \( D \) the data set with \( n \) examples, which includes a labeled set \( L = \{(x_1,y_1),(x_2,y_2),\cdots,(x_{n_l},y_{n_l})\} \) with \( n_l \) labeled instances and an unlabeled set \( U = \{x_{n_l+1},x_{n_l+2},\cdots,x_{n_l+n_u}\} \) with \( n_u \) unlabeled instances, where \( n = n_l + n_u \). Here \( y_i \in \mathbb{Y} = \{0,1\} \) is the ground-truth label and \( x_i \in \mathbb{R}^d \) (\( i \in [n] \)). \( L = P \cup N \), where \( P \) and \( N \) denote the positive set with \( n_p \) positive instances and the negative set with \( n_n \) negative instances respectively.

Active learning iteratively selects the most useful instance from \( U \) and queries its label from the oracle. According to the ground-truth label of the queried instance, we add it to \( P \) or \( N \). Then we train the classifier $F_\theta(x) : \mathbb{R}^d \rightarrow \mathbb{Y}$ parameterized by $\theta$ with the updated $L$. The classifier is expected to achieve better performance as $L$ is updated.

3.1 SOLVE OT

As mentioned before, we establish two OT models, from $U$ to $P$ and from $U$ to $N$ respectively, in our experiments. We introduce entropy regularization into the original OT problem and use the Sinkhorn-Knopp algorithm to solve the entropy-regularized OT problem. The Lagrangian of equation (3) with dual variables $\gamma \in \mathbb{R}^{n_u}, \zeta \in \mathbb{R}^{n_l}$ is:
$$L(T, \gamma, \zeta) = \sum_{i \in [n_u]} \sum_{j \in [n_l]} \left(T_{ij} C_{ij} + \lambda T_{ij} (\log T_{ij} - 1)\right) + \gamma^\top \left(T 1_{n_l} - \frac{1}{n_u} 1_{n_u}\right) + \zeta^\top \left(T^\top 1_{n_u} - \frac{1}{n_l} 1_{n_l}\right).$$ (4)
By setting the partial derivatives to zero, we obtain the solution $T = \text{diag}(a) K \text{diag}(b)$, where $a = \exp(\gamma/\lambda), K = \exp(-C/\lambda)$ and $b = \exp(\zeta/\lambda)$ are the element-wise exponentials of $\gamma/\lambda, -C/\lambda, \zeta/\lambda$. Since the row and column marginals of $T$ must equal their target values, we have:
$$a \odot (K b) = \frac{1}{n_u} 1_{n_u}, \quad b \odot (K^\top a) = \frac{1}{n_l} 1_{n_l},$$
where $\odot$ is the Hadamard product.

The heat maps of the OT coefficient matrices on stock are shown in Figure 2. It is evident that the coefficient vector is sparse when transporting an unlabeled instance to instances with the same label, while it is more uniform when transporting to instances with a different label. Based on this observation, we can use the coefficient matrix to assess the informativeness of an unlabeled instance and incorporate it into our active query strategy.

Figure 2: The heat maps of the OT coefficient matrices on stock. The OT models are established from 50 unlabeled instances to 50 positive instances and 50 negative instances respectively from stock. Each row represents the coefficient vector of an unlabeled instance. The first half are positive instances and the second half are negative instances. It is evident that the coefficient vector is sparse when transporting to instances with the same label.
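As a concrete reference, here is a minimal NumPy sketch (ours, not the authors' implementation) of the Sinkhorn-Knopp iterations for the entropy-regularized problem of Eq. (3), using the uniform marginals $1/n_u$ and $1/n_l$ assumed above; the function names and the fixed iteration count are our own choices.

```python
import numpy as np

def sinkhorn(C: np.ndarray, lam: float = 0.5, n_iter: int = 200, eps: float = 1e-9):
    """Entropy-regularized OT of Eq. (3) with uniform marginals.

    C: (n_u, n_l) cost matrix of Euclidean distances between unlabeled and
    labeled instances. Returns the coupling T = diag(a) K diag(b), whose rows
    sum to 1/n_u and whose columns sum to 1/n_l.
    """
    n_u, n_l = C.shape
    u = np.full(n_u, 1.0 / n_u)          # uniform mass on unlabeled instances
    l = np.full(n_l, 1.0 / n_l)          # uniform mass on labeled instances
    K = np.exp(-C / lam)                 # Gibbs kernel
    a, b = np.ones(n_u), np.ones(n_l)
    for _ in range(n_iter):              # alternate the two marginal projections
        a = u / (K @ b + eps)
        b = l / (K.T @ a + eps)
    return a[:, None] * K * b[None, :]   # T = diag(a) K diag(b)

def transport_plans(X_unlabeled, X_pos, X_neg, lam=0.5):
    """Couplings T^P and T^N from the unlabeled set to the positive/negative sets."""
    pairwise = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    T_pos = sinkhorn(pairwise(X_unlabeled, X_pos), lam)
    T_neg = sinkhorn(pairwise(X_unlabeled, X_neg), lam)
    return T_pos, T_neg
```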
3.2 QUERY STRATEGY After establishing and solving the entropy regularization OT model, we obtain coefficient matrices $T^P \in \mathbb{R}^{n_u \times n_p}, T^N \in \mathbb{R}^{n_u \times n_n}$ for transporting unlabeled instances to positive and negative instances respectively. We can select the query instance with these two matrices. The dynamic query score for selecting query instance is: $$\text{dyscore}(x_i) = (1 - \eta) \cdot (\alpha \cdot \text{conf}_i + (1 - \alpha) \cdot \frac{1}{p_i}) + \eta \cdot d_i,$$ (5) which consists of three terms: $\text{conf}_i, 1/p_i$ and $d_i$. We detail them respectively. The first term represents our confidence in \( x_i \). For an unlabeled instance \( x_i \in U \), its coefficient vectors are \( T^P_{i} \) and \( T^N_{i} \) respectively. \( T^P_{ij} \) represents the mass transporting from \( x_i \) to \( x_j \in P \) and \( T^N_{ik} \) represents the mass transporting from \( x_i \) to \( x_k \in N \). As previously introduced, the sparseness degree of the coefficient vector is related to the label of source instance and target instance. The sparser the coefficient vector is, the more likely the source instance and the target instance share the same label. Standard deviation is a good method to reflect the sparseness degree of instances. We define sample confidence of \( x_i \) by the standard deviation of coefficient vector: \[ \text{conf}_i = \left| \frac{\text{std}(T^P_{i})}{\max(T^P_{i})} - \frac{\text{std}(T^N_{i})}{\max(T^N_{i})} \right|. \] Normalizing the standard deviation helps eliminate the influence of specific coefficients. When sample confidence is high, it is more likely that one coefficient vector is sparse while the other is uniform and we are more certain about the label of the instance. High confidence indicates certainty, which is a significant aspect of informativeness. Importantly, with the introduction of OT, this certainty takes the entire distribution of labeled instances into account. However, querying instances based solely on certainty is not sufficient. In addition to certainty, uncertainty is also a crucial factor in assessing the informativeness of instances. That is why we introduce the second term. The second term represents the uncertainty degree of \( x_i \). In binary classification problems, \( p_i = |p(y=1|x_i) - p(y=0|x_i)| \). In multi-class classification problems, \( p_i = |p(y=\hat{y}_1|x_i) - p(y=\hat{y}_2|x_i)| \), where \( \hat{y}_1 \) and \( \hat{y}_2 \) represent the two labels with highest posterior probability. \( p(y|x) \) can be computed with function `predict_proba()` in sklearn. Obviously, the smaller \( p_i \) is, the more uncertain we are about the label of instance, making \( p_i \) a good method to measure uncertainty. Ideally, we aim to query instances with both high uncertainty and high certainty. And we define query score based on uncertainty and certainty of the instance: \[ \text{score}(x_i) = \alpha \cdot \text{conf}_i + (1 - \alpha) \cdot \frac{1}{p_i}, \] where \( \alpha \) is the parameter to weigh uncertainty and certainty. In initial experiment, we simply set \( \alpha = 0.5 \), where certainty and uncertainty carry equal weight. Instances with high query scores have both high certainty and uncertainty, which is helpful to improve our classifier. Based on the specific setup, active learning begins with a few labeled instances. If initial labeled instances are far away from the true decision boundary, it might take many iterations to get reach to the boundary. 
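The score computation can be sketched as follows (a minimal NumPy version of our own). The confidence and query score follow Eqs. (6)-(7), and the posterior margin uses sklearn-style `predict_proba` outputs; the adjustment term $d_i$ and the schedule for $\eta$ are taken as inputs here and follow the dynamic adjustment described next.

```python
import numpy as np

def confidence(T_pos_row: np.ndarray, T_neg_row: np.ndarray) -> float:
    """Eq. (6): gap between the normalized standard deviations of the two coefficient vectors."""
    s_pos = T_pos_row.std() / (T_pos_row.max() + 1e-12)
    s_neg = T_neg_row.std() / (T_neg_row.max() + 1e-12)
    return abs(s_pos - s_neg)

def margin(proba_row: np.ndarray) -> float:
    """p_i: gap between the two highest posterior probabilities (binary or multi-class)."""
    top2 = np.sort(proba_row)[-2:]
    return float(top2[1] - top2[0])

def dyscore(T_pos_row, T_neg_row, proba_row, d_i, eta, alpha=0.5):
    """Eq. (5): dynamic query score; d_i and eta follow the adjustment described next."""
    conf = confidence(T_pos_row, T_neg_row)
    score = alpha * conf + (1 - alpha) / (margin(proba_row) + 1e-12)   # Eq. (7)
    return (1 - eta) * score + eta * d_i

# Querying (sketch): given couplings T_pos, T_neg from Sinkhorn, the classifier's
# proba = clf.predict_proba(X_unlabeled), distances d of each unlabeled instance to
# the labeled set, and the current eta, pick the instance with the highest score:
# query_idx = np.argmax([dyscore(T_pos[i], T_neg[i], proba[i], d[i], eta)
#                        for i in range(len(proba))])
```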
In some cases, querying instances near the wrong boundary reinforces the incorrect decision boundary. Encouraging exploration beyond the labeled instances leads to a quicker adjustment of the decision boundary, which is beneficial for achieving a potentially better boundary. So we propose a dynamic adjustment \( d_i = \sum_{x \in L} |x_i - x| \) to the initial query score. After adding the dynamic adjustment term, we get equation (5), where \( \eta = 1/(\delta + \log(t)) \), \( t \) is current iteration, \( \delta \) is the smooth parameter. At the beginning of training, \( \eta \) is set close to 1, which encourages querying instances that are far from the labeled instances. This leads to rapid changes in the decision boundary, which might potentially get closer to the ground truth. In the worst case, it might result in a waste of the first few turns. However, \( \eta \) decreases as training progresses and the query score of \( x_i \) is favored for querying instances. We prefer to querying instances with both certainty and uncertainty rather than outliers. In conclusion, in active query strategy with dynamic adjustment, we will query instance with the highest dynamic query score: \[ x_{\text{query}} = \max_{x \in U} \text{dyscore}(x). \] The first term controls certainty. The second term controls uncertainty. The last term encourages exploration in the beginning of the training process. It is important to note that the computation of the score is independent of the specific classifier, allowing us to concatenate the query score with any classifiers. The algorithm is detailed in algorithm 1. 4 EXPERIMENTS In this section, we concatenate AQOT strategy with three classification classifiers, i.e., SVM, GBDT and NN. We will begin by describing 6 real-world data sets, 6 compared methods, and experimental Algorithm 1 AQOT Input: Initial \( U, P, N \), max query iteration \( T, \delta \) Output: \( F_\theta(x) \) 1: \( t \leftarrow 1 \) 2: \( \eta \leftarrow 1/\delta \) 3: while \( t < T \) do 4: Train a new classifier by \( P \) and \( N \). 5: Establish OT models from \( U \) to \( P \) and \( N \) and compute the solution of the entropy-regularized OT problem by Sinkhorn-Knopp algorithm. 6: for \( i \leftarrow 1 \) to \( n_u \) do 7: \( T^P_i \leftarrow \) the coefficients vector of \( x_i \in U \) transporting to \( P \) 8: \( T^N_i \leftarrow \) the coefficients vector of \( x_i \in U \) transporting to \( N \) 9: \( p_i \leftarrow |p(y = 1|x_i) - p(y = 0|x_i)| \) 10: end for 11: Query \( x^{query} \) according to equation (8). 12: Add \( x^{query} \) to \( P \) or \( N \) according to its label and remove it from \( U \). 13: \( t \leftarrow t + 1 \) 14: \( \eta \leftarrow 1/(\delta + \log(t)) \) 15: end while settings. Subsequently, we will assess the algorithm is not sensitive to entropy regularization parameter. In addition to comparing with the state-of-the-art active learning methods, we compare run time and AQOT shows high efficiency compared to other hybrid methods. Moreover, we conduct ablation experiments to show the effectiveness of AQOT. 4.1 Experiment Setting We utilize six data sets from the UCI Machine Learning Repository, including monks-problem-1, qsar-biodeg, balance-scale, phoneme, stock and breast. We compare the following query strategies in our work: - **AQOT**: The proposed method of this paper, which queries the instance with high certainty and uncertainty as well as encourages exploring at the start of training. 
- **FULL**: We train a classifier using all labeled instances as a reference baseline. - **RANDOM**: This method queries instance randomly. - **UNCERTAINTY** ([Settles & Craven](#cite1) 2008): This method is based on informativeness. Specifically, it queries the instance with most uncertainty. The uncertainty is measured by prediction confidence. - **ENTROPY** ([Lewis & Catlett](#cite2) 1994): This method is based on informativeness. Specifically, it queries the instance with the highest entropy. - **CORESET** ([Sener & Savarese](#cite3) 2018): This method is based on representativeness. Specifically, it queries the instance minimize the core-set loss. - **QUIRE** ([Huang et al.](#cite4) 2014): This method is a hybrid method, which queries the instance with informativeness and representativeness. - **WMOCUAL** ([Zhao et al.](#cite5) 2021): This method is a hybrid method, which queries the instance based on the weighted form of MOCU. In our experiment, we use QUIRE and CORESET in ([Tang et al.](#cite6) 2019). For each data set, we randomly choose 20% of instances for testing. We randomly choose 5 positive instances and 5 negative instances as initial labeled instances. In each iteration, we query one instance from \( U \) and add it to \( L \). For data sets with instances less than 1000, we query 100 instances in total. For data sets with instances less than 5000, we query 300 instances in total. For data sets with instances more than 5000, we query 500 instances in total. 4.2 Performance We initially concatenate AQOT with SVM rather than with all three classifiers to demonstrate the results. Figure 3 shows the performance of eight methods on six data sets in terms of accuracy. In phoneme, QUIRE needs more than 2 hours to get the result, so there is no result of QUIRE in the corresponding figure. ![Figure 3](image) Figure 3: Results on six data sets in terms of accuracy. QUIRE spends too much time (over 2 hours), so its result is not shown in the corresponding figure. It can be seen from the result that on all data sets AQOT outperforms most of the baselines and achieves the best performance in all cases. 4.3 Parameter Analysis As introduced above, $\lambda$ has influence on the solution of the OT problem. If $\lambda$ is too small, the classifier degenerates to the original OT. Conversely, if $\lambda$ is too large, the coefficient vector becomes almost uniformly distributed. From the experimental point of view, a reasonable range of $\lambda$ is between 0.1 and 1. We concatenate AQOT with three classifiers and consider values of $\lambda \in \{0.1, 0.5, 1\}$. Table 1 summarizes the performance of the nine methods on six data sets in terms of F1 score based on ten trials. The win/tie/loss counts are summarized in the last three rows. It can be seen from the result that AQOT methods outperform most of the baselines regardless of the value of $\lambda$, and the performance is related to the classifier. Observing the results, we can also find that under the appropriate regularization parameter, AQOT is not sensitive to $\lambda$. Another parameter $\alpha$ controls the weight of certainty and uncertainty. If $\alpha$ equals to 0, the term of certainty disappears. Conversely, if $\alpha$ equals to 1, the term of uncertainty disappears. We concatenate AQOT with three classifiers and consider values of $\alpha \in \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1\}$. Figure 4 shows the performance of three AQOT methods on six data sets in terms of F1 score. 
It can be seen from the result that a too small $\alpha$ or too big $\alpha$ will decrease the performance of AQOT method. A suitable range for $\alpha$ falls between 0.4 and 0.6 based on the data set, which means the importance of certainty and uncertainty is similar in most of the cases. 4.4 Run Time Comparison We compare the running time to show AQOT is efficient. All algorithms are implemented in Python 3.7 on a personal computer with Intel i5-12500 2.5 GHz CPU and 16G RAM. Table 2 shows the result. It can be seen from the result that compared to two hybrid methods: QUIRE and WMOCU, our AQOT methods shows higher efficiency. Table 1: Results on six data sets in terms of F1 score over 10 trials. The best result on each data set is indicated in bold. The win/tie/loss counts are summarized in the last three rows. (A wins B means A is significantly better than B based on a pair-wise t-test at a 0.05 significance level) | Dataset | λ | RAN | UN | EN | CORE | QUIRE | WMOCU | AQSVM | AQGBDT | AQNN | |---------|-----|-----|-----|-----|------|-------|-------|-------|--------|------| | monks1 | 0.1 | .856±.010 | .868±.014 | .904±.012 | .868±.016 | .880±.025 | .902±.012 | .927±.014 | .973±.026 | .982±.013 | | | 0.5 | .856±.010 | .868±.014 | .904±.012 | .868±.016 | .880±.025 | .902±.012 | .936±.034 | .980±.010 | .972±.015 | | | 1 | .856±.010 | .868±.014 | .904±.012 | .868±.016 | .880±.025 | .902±.012 | .938±.034 | .990±.010 | .975±.018 | | qsar | 0.1 | .831±.033 | .818±.039 | .827±.031 | .815±.023 | .811±.028 | .852±.012 | .861±.020 | .874±.015 | .895±.032 | | | 0.5 | .831±.033 | .818±.039 | .827±.031 | .815±.023 | .811±.028 | .852±.012 | .885±.014 | .897±.019 | .885±.037 | | | 1 | .831±.033 | .818±.039 | .827±.031 | .815±.023 | .811±.028 | .852±.012 | .881±.025 | .883±.035 | .894±.034 | | balance | 0.1 | .945±.012 | .933±.030 | .933±.012 | .942±.013 | .923±.022 | .942±.006 | .976±.014 | .955±.007 | .958±.008 | | | 0.5 | .945±.012 | .933±.030 | .933±.012 | .942±.013 | .923±.022 | .942±.006 | .980±.011 | .952±.012 | .952±.010 | | | 1 | .945±.012 | .933±.030 | .933±.012 | .942±.013 | .923±.022 | .942±.006 | .960±.028 | .936±.012 | .937±.013 | | stock | 0.1 | .859±.020 | .856±.033 | .868±.019 | .874±.019 | .735±.024 | .877±.022 | .903±.034 | .957±.028 | .948±.017 | | | 0.5 | .859±.020 | .856±.033 | .868±.019 | .874±.019 | .735±.024 | .877±.022 | .917±.029 | .945±.031 | .952±.024 | | | 1 | .859±.020 | .856±.033 | .868±.019 | .874±.019 | .735±.024 | .877±.022 | .943±.022 | .962±.022 | .945±.013 | | breast | 0.1 | .961±.012 | .960±.011 | .960±.008 | .959±.016 | .962±.012 | .955±.020 | .979±.012 | .978±.007 | .977±.012 | | | 0.5 | .961±.012 | .960±.011 | .960±.008 | .959±.016 | .962±.012 | .955±.020 | .976±.011 | .975±.005 | .979±.017 | | | 1 | .961±.012 | .960±.011 | .960±.008 | .959±.016 | .962±.012 | .955±.020 | .978±.010 | .978±.006 | .975±.011 | | phoneme | 0.1 | .773±.014 | .788±.010 | .784±.037 | .765±.015 | N/A | .768±.013 | .803±.012 | .863±.019 | .804±.014 | | | 0.5 | .773±.014 | .788±.010 | .784±.037 | .765±.015 | N/A | .768±.013 | .820±.038 | .858±.021 | .802±.011 | | | 1 | .773±.014 | .788±.010 | .784±.037 | .765±.015 | N/A | .768±.013 | .815±.027 | .863±.017 | .805±.012 | Figure 4: Parameter analysis on six data sets. We adjust the value of α to show the influence of α. ### 4.5 Ablation Experiments As introduced above, our dynamic query score consists of three terms: certainty, uncertainty and dynamic adjustment term. 
We compare our AQOT method with methods lacking each of these three terms respectively. Table 3 presents the results. From the table, we can see that the method with dynamic adjustment surpasses the performance of the method regardless of lacking which term. ### 5 Conclusion In this paper, we proposed an efficient active query strategy AQOT. We establish two OT models from unlabeled instances to positive and negative instances respectively. We evaluate certainty of instances by the standard deviation of coefficient vector and evaluate uncertainty by the difference of two highest posterior probabilities. We query instances by weighing certainty and uncertainty Table 2: Results on six data sets in terms of running time (in seconds). | Data set | RAN | UN | EN | CORE | QUIRE | WMOCU | AQSVM | AQGBDT | AQNN | |----------|-------|-------|-------|-------|-------|-------|-------|--------|------| | monks1 | 0.107 | 0.257 | 1.187 | 0.160 | 286 | 173 | 0.708 | 3.63 | 7.53 | | qsar | 0.722 | 2.78 | 9.87 | 1.69 | 628 | 546 | 5.67 | 38.7 | 48.3 | | balance | 0.106 | 0.302 | 1.62 | 0.171 | 45.1 | 175 | 0.745 | 4.352 | 11.9 | | stock | 0.109 | 0.334 | 1.90 | 0.209 | 417 | 212 | 0.977 | 4.60 | 11.0 | | breast | 0.102 | 0.208 | 1.41 | 0.164 | 69.3 | 59.8 | 0.755 | 3.47 | 5.97 | | phoneme | 4.83 | 32.2 | 192 | 0.171 | N/A | 1250 | 88.3 | 127 | 138 | Table 3: F1 score of AQOT method and methods without three terms respectively over 10 trials. Method-1 denotes by the method lacking the certainty term. Method-2 denotes by the method lacking the uncertainty term. Method-3 denotes by the method lacking the dynamic adjustment term. • indicates the performance of AQOT is significantly better than the compared method (pairwise t-test at 0.05 significance level). (a) results on SVM | Data set | AQSVM | AQSVM-1 | AQSVM-2 | AQSVM-3 | |----------|-------|---------|---------|---------| | monks1 | .958±.034 | .880±.021• | .877±.012• | .921 ±.021• | | qsar | .881±.025 | .815±.021• | .821±.012• | .860±.011 | | balance | .960±.028 | .925±.027• | .932±.014• | .945±.022 | | stock | .943±.022 | .855±.012• | .848±.021• | .839±.014• | | breast | .978±.010 | .941±.014• | .928±.007• | .960±.016 | | phoneme | .815±.027 | .763±.021• | .775±.026• | .800±.015• | (b) results on GBDT | Data set | AQGBDT | AQGBDT-1 | AQGBDT-2 | AQGBDT-3 | |----------|--------|----------|----------|----------| | monks1 | .990±.010 | .942±.017• | .925±.023• | .933±.008• | | qsar | .883±.035 | .853±.010• | .832±.025• | .812±.021• | | balance | .956±.012 | .862±.011• | .843±.053• | .887±.012• | | stock | .962±.022 | .924±.035• | .915±.014• | .903±.021• | | breast | .978±.006 | .932±.025• | .924±.012• | .922±.015• | | phoneme | .863±.017 | .822±.029• | .827±.018• | .842±.015• | (c) results on NN | Data set | AQNN | AQNN-1 | AQNN-2 | AQNN-3 | |----------|------|--------|--------|--------| | monks1 | .975±.018 | .940±.015• | .942±.012• | .947±.023• | | qsar | .894±.034 | .834±.022• | .824±.012• | .852±.017• | | balance | .957±.013 | .904±.008• | .915±.012• | .914±.028• | | stock | .945±.013 | .865±.016• | .904±.011• | .896±.008• | | breast | .975±.011 | .921±.021• | .935±.022• | .955±.010• | | phoneme | .805±.012 | .733±.017• | .727±.023• | .725±.019• | with encouraging early exploration with taking instance distribution into account. Moreover, AQOT shows high efficiency compared to other hybrid methods. We concatenate it with multiple classifiers to show it is a broad-spectrum strategy. REFERENCES Dana Angluin. Queries and concept learning. 
5Lp6qU9hzV
In Table 4, it's intriguing to observe that BERT-MPU performs better than Roberta-MPU on full text but worse on short text. The authors could provide further insights or hypotheses as to why this performance gap exists. This could help readers better understand the nuances of the approach and its applicability in different scenarios.
MULTISCALE POSITIVE-UNLABELED DETECTION OF AI-GENERATED TEXTS Yuchuan Tian¹, Hanting Chen², Xutao Wang², Zheyuan Bai², Qinghua Zhang², Ruifeng Li⁴, Chao Xu¹, Yunhe Wang²∗ ¹ National Key Lab of General AI, School of Intelligence Science and Technology, Peking University ² Huawei Noah’s Ark Lab ³ Huawei Group Finance ⁴ Huawei Central Software Institute tianyc@stu.pku.edu.cn, yunhe.wang@huawei.com ∗Corresponding Author. ABSTRACT Recent releases of Large Language Models (LLMs), e.g. ChatGPT, are astonishing at generating human-like texts, but they may impact the authenticity of texts. Previous works proposed methods to detect these AI-generated texts, including simple ML classifiers, pretrained-model-based zero-shot methods, and finetuned language classification models. However, mainstream detectors always fail on short texts, like SMSes, Tweets, and reviews. In this paper, a Multiscale Positive-Unlabeled (MPU) training framework is proposed to address the difficulty of short-text detection without sacrificing long texts. Firstly, we acknowledge the human-resemblance property of short machine texts, and rephrase AI text detection as a partial Positive-Unlabeled (PU) problem by regarding these short machine texts as partially “unlabeled”. Then in this PU context, we propose the length-sensitive Multiscale PU Loss, where a recurrent model in abstraction is used to estimate positive priors of scale-variant corpora. Additionally, we introduce a Text Multiscaling module to enrich training corpora. Experiments show that our MPU method augments detection performance on long AI-generated texts, and significantly improves short-text detection of language model detectors. Language models trained with MPU could outcompete existing detectors on various short-text and long-text detection benchmarks. The codes are available at https://github.com/mindspore-lab/mindone/tree/master/examples/detect_chatgpt and https://github.com/YuchuanTian/AIGC_text_detector 1 INTRODUCTION Recent developments in Large Language Models (LLMs) have brought astonishing changes to people’s lives. The GPT-2 (Radford et al., 2019) model, created in early 2019, is capable of simple question-answering tasks; GPT-3 (Brown et al., 2020) is a great leap in model size and capability; ChatGPT (OpenAI, 2022), announced in late 2022, shows comparable performance to humans as a chatbot; GPT-4 (OpenAI, 2023a), released this year, has even better generative performance. These advancements are making people’s lives easier with applications like writing aids, search engines, and office suites. However, they could be used to generate deceptive fake texts for illegal and unethical purposes. Previous works have proposed numerous approaches to distinguish fake AI-generated text from genuine human languages. Canonical work (Solaiman et al., 2019) used simple machine learning classifiers as baselines; some works (Gehrmann et al., 2019; Mitchell et al., 2023) proposed zero-shot detection measures based on pretrained models; numerous works (Solaiman et al., 2019; Crothers et al., 2022; Guo et al., 2023; Mitrovic et al., 2023) perform simple finetuning of pretrained language models on the AI-text classification task. Despite these various methods, few mainstream methods have investigated the negative impact of text length: the difficulty of detection increases significantly as texts become shorter. Some of the latest online ChatGPT detectors have noticed this issue, but they dodge rather than address it by putting up minimum text
length requirements (Tian, 2022; FudanNLPLab, 2023; OpenAI, 2023b). In the era of smartphones where people rely heavily on fragmented mobile media, fake short articles like SMSes, Tweets, and reviews generated by LLMs could pose huge threats to one’s daily life, yet we still lack a comprehensive detector that is capable of detecting both short texts and long-texts. To improve detectors’ performance on short texts, we rethink the plain “Binary Classification” setting that is intuitively applied. It is seemingly natural to phrase text detection as a binary classification task, as texts have clear origins (from human works or AI outputs) and thus, clear binary labels (real or fake); but interestingly, we observe a handful of machine-generated texts that are overly short and simple, such that these texts are highly similar to human (e.g. Ex. 2 in Table 1). It is not suitable to assign these simple machine texts with either clear human or AI labels; rather, they are in an “Unlabeled” state. Though the case is occasional and most short machine texts (e.g. Ex. 1 in Table 1) are still distinguishable based on manifold features, it prompts us to question the rationality of clear binary labels on general short machine texts. On the contrary, we hold that short machine-generated texts are partially “Unlabeled”. As machine-generated texts become shorter and simpler, the “Unlabeled” property could gradually dominate the text. **Example 1:** The first sentence in benchmark HC3-Sent (Guo et al., 2023) | Human: | You can’t just go around assassinating the leaders of countries you don’t like! | |-------|--------------------------------------------------------------------------------| | AI: | It is generally not acceptable or ethical to advocate for or condone the assassination of any individual, regardless of their actions or beliefs. | **Example 2:** Answer to “When is the independence day of the United States?” | Human: | Independence Day is annually celebrated on July 4th. | |-------|-----------------------------------------------------| | AI: | The Independence Day of the United States is celebrated on July 4th. | Table 1: Short example answers from human and AI. In general, short answers are distinguishable based on features like punctuations, emotions, and formality (see non-cherrypicked case Ex. 1). But in extreme cases (see Ex. 2), short simple answers are indistinguishable, and the unlabeled property is manifest. In this sense, we model the task of AI-generated text detection as a partial Positive-Unlabeled (PU) problem and formulate the Multiscale Positive-Unlabeled (MPU) training framework to address the challenging task of short text detection without sacrificing long texts. PU problems typically address binary classification tasks where positive data and unlabeled data are offered for training. Considering the partially “Unlabeled” property of short machine texts, we rephrase detector training as a partial PU problem and boost detectors’ performance on multiscale texts. In order to improve conventional PU optimization targets for texts of various lengths, a length-aware Multiscale PU (MPU) loss is proposed and applied during the training process. We are aware that the PU prior probability of a text being positive is length-variant. To this end, an abstract recurrent model is designed to adjust the PU prior probability automatically based on corpus length. 
Further, a Text Multiscaling module is also proposed to exert the effect of the Multiscale PU loss by diversifying training corpora in terms of length. Experiments demonstrate that the MPU framework is significantly effective in improving short-text detection performance; meanwhile, detection on long texts is also augmented. ## 2 RELATED WORK **Text Detection Methods.** Since the introduction of GPT-2 (Radford et al., 2019) and its successors, fake texts generated by powerful LLMs have been causing ethical and legal issues. Methods have been developed to detect these generated texts in various misuse scenarios. Zellers et al. (2019) shed light on machine-generated fake news by proposing a GPT-based news generator, GROVER, and use GROVER itself to sort fake news out; Adelani et al. (2020) looks at the detection of fake online reviews; Fagni et al. (2020) focuses on machine-generated fake tweets and proposes the TweepFake dataset. Other proposed detection methods target general scenarios. Several canonical baselines are mentioned by Solaiman et al. (2019) to detect GPT-2 texts, including simple TF-IDF classifiers and finetuned RoBERTa (Liu et al., 2019); GLTR (Gehrmann et al., 2019) detects generated texts in a zero-shot manner by using token prediction probabilities from available pretrained NLP models like BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019). After the introduction of ChatGPT (OpenAI, 2022), some new detection methods (Liu et al., 2022; Mitchell et al., 2023; Mitrovic et al., 2023; Guo et al., 2023) have been released. **PU Methods.** Previous works have proposed methods to train a binary classifier with positive and unlabeled data. Many PU methods (Bekker & Davis, 2020; Du Plessis et al., 2014; Kiryo et al., 2017; Su et al., 2021; Hammoudeh & Lowd, 2020; Chen et al., 2020) construct PU losses based on positive and unlabeled samples, for classifying unlabeled data. Other PU methods include two-step learning and biased learning (Liu et al., 2003). The two-step technique first identifies reliable negative examples and then performs learning based on the labeled positives and these identified negatives (He et al., 2018; Tenc̆o & Pensl, 2016); biased learning treats unlabeled data as negative samples with class-label noise (Hsieh et al., 2015; Shao et al., 2015). Above all, we resort to applying a PU loss during training to address the task of multiscale AI-generated text detection, because PU losses can be generally applied to powerful finetuned text detectors without much additional computation cost. ### 3 Multiscale Positive-Unlabeled Text Detection #### 3.1 Text Detection as Positive-Unlabeled Classification Despite manifold methods for detecting AI-generated texts, mainstream detectors seldom take the factor of text length into account, and thus they always fail on short texts. We have tried several existing detection methods on short LLM-generated texts (shown in Table 4), but none of them perform well. As people nowadays are immersed in short, fragmented forms of mobile media, they are vulnerable to LLM attacks with no reliable means to defend themselves. Hence, we are in urgent need of a performant short AI-generated text detector. Intuitively, past works formulated the task of AI text detection as a binary classification problem, i.e. classifying texts as AI or Human. However, the formulation could be problematic for shorter texts, as we found high similarities between extremely simple AI texts and human texts. The phenomenon could be rare in actual applications.
But it is fundamentally reasonable, because LLMs learn from human languages; and for sentences whose structures are overly simple, they are seemingly “copied” by LLMs from what they have learned. Therefore, the attribution of these simple machine texts is uncertain: on one hand, they are indeed outputs from language models; on the other hand, they are ordinary human languages. Though the completely non-classifiable case mostly happens for extremely short texts or commonly used phrases (which rarely occur in our benchmarks and whose detection is of no application value), it inspires us to think about the partially “unlabeled” property behind the vast majority of short, distinguishable texts despite their definite labels. To overcome this issue, we model the task of multiscale text detection as a partial Positive-Unlabeled (PU) problem. In this problem, corpora from humans are regarded as “Positive”, but short texts from machines are given an additional “Unlabeled” mark for PU loss calculations (detailed in Sec. 3.3). Then our detector model is optimized within this partial PU context. #### 3.2 Preliminaries: Canonical PU Loss Functions PU losses are derived from the traditional Positive-Negative (PN, i.e. binary classification) setting, detailed in Appendix A. Some works (Du Plessis et al., 2014; Du Plessis et al., 2015) perform an indirect approximation of the negative risk in the PN framework, yielding the unbiased PU (uPU) loss as follows: \[ \hat{R}_{uPU}(g) = \tilde{\pi} \hat{R}_P(g, +1) - \tilde{\pi} \hat{R}_P(g, -1) + \hat{R}_U(g, -1), \] where \( \tilde{\pi} \) is the estimated prior probability of a sample being positive, and \( \hat{R}_P(g, -1) := \frac{1}{n_P} \sum_{i=1}^{n_P} L(g(x^P_i), -1) \) and \( \hat{R}_U(g, -1) := \frac{1}{n_U} \sum_{i=1}^{n_U} L(g(x^U_i), -1) \) are estimations calculated from positive and unlabeled training samples, respectively. However, the deep learning classifier may be too flexible, leading to \( \hat{R}_U(g, -1) - \tilde{\pi} \hat{R}_P(g, -1) < 0 \) and causing the model to overfit. As a remedy, Kiryo et al. (2017) propose a non-negative risk estimator based on the uPU loss. The non-negative PU (nnPU) loss is thus derived as follows: \[ \hat{R}_{nnPU}(g) = \tilde{\pi} \hat{R}_P(g, +1) + \max\{0, \hat{R}_U(g, -1) - \tilde{\pi} \hat{R}_P(g, -1)\}. \] The nnPU loss (Kiryo et al., 2017) is performant and thus widely adopted by later PU works and applications (Kato et al., 2019; Bepler et al., 2019; Peng et al., 2019; Xu et al., 2019; Chen et al., 2020; Su et al., 2021; Tang et al., 2022). However, to the best of our knowledge, no previous works have applied PU learning to the scenario of length-variant texts, in which simple usage of the nnPU loss might not be effective. We hope to develop an effective PU mechanism to aid the detection of length-variant texts. 3.3 MPU: A Length-sensitive PU Approach In PU loss conventions as stated in Sec. 3.2, the estimated prior probability of a sample being positive, $\tilde{\pi}$, is kept constant. The reason is that the prior probability $\pi$ is closely associated with the dataset distribution, which is always assumed to be uniform. However, this might not be the case with texts of different lengths. As explained in Section 1, short texts and long texts hold different properties; in other words, they do not share the same distribution. In this regard, the assumption of a uniform dataset distribution is flawed; fixing the prior estimation at a certain constant value is problematic in the case of multiscale text detection (i.e. where texts to be processed are of manifold lengths).
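To make the preliminaries concrete, the following is a minimal sketch (ours, not the authors' released code) of the empirical uPU and nnPU risk estimators of Eqs. (1)–(2), written in PyTorch with a sigmoid surrogate loss; the function and variable names are our own.

```python
import torch

def surrogate_loss(logits, target_sign):
    # Sigmoid surrogate loss l(g(x), y) = sigmoid(-y * g(x)), commonly used in PU learning.
    return torch.sigmoid(-target_sign * logits)

def pu_risks(logits_pos, logits_unl, prior):
    """Empirical uPU and nnPU risks (Eqs. 1-2).

    logits_pos: detector outputs g(x) on positive (human) samples
    logits_unl: detector outputs g(x) on unlabeled samples
    prior:      estimated class prior (a single scalar here)
    """
    r_p_pos = surrogate_loss(logits_pos, +1.0).mean()  # R_P(g, +1)
    r_p_neg = surrogate_loss(logits_pos, -1.0).mean()  # R_P(g, -1)
    r_u_neg = surrogate_loss(logits_unl, -1.0).mean()  # R_U(g, -1)

    upu = prior * r_p_pos - prior * r_p_neg + r_u_neg                       # Eq. (1)
    nnpu = prior * r_p_pos + torch.clamp(r_u_neg - prior * r_p_neg, min=0)  # Eq. (2)
    return upu, nnpu
```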
Though long texts and short texts have different distributions, the distribution shift from long texts to short texts is a gradual process with respect to text length. To deal with this gradual shift of distribution, we look at the shift with respect to text length from a differentiation perspective. Texts of a certain length $l$ could be regarded as a small subset that features its own distribution, and also its own prior $\pi(l)$. We hope to provide a smooth, length-variant estimation $\tilde{\pi}(l)$ of the prior at length $l$, in order to fit the PU framework to the multiscale text detection problem. In this fashion, we propose the Multiscale PU loss $\hat{R}_{MPU}$ that uses length-sensitive priors $\tilde{\pi}$ for multiscale texts. However, we are faced with the challenge of modeling the length-variant prior $\tilde{\pi}$ in abstraction. Namely, we need to investigate the general probability of all sentences (of a certain length) being human, without access to the specific details of any piece of text. To this end, we use a general recurrent language model (Mikolov et al., 2010; Sundermeyer et al., 2012) in abstraction as a discriminator for positive, human-spoken corpora, which is formulated as follows: given a sequence $S_l$ of $l$ tokens, $S_l = \{t_i\}_{i=1}^l$, and an abstract recurrent discriminator $\Delta : \text{seq} \rightarrow [0, 1]$ whose output is a bounded scalar (because from the discriminator we expect a confidence of a sequence being positive), the recurrent model in abstraction is expressed as: $$\Delta(S_{i+1}) = f(\Delta(S_i), t_{i+1}), \quad \forall i \in [l - 1],$$ where $f$ is some function that merges the classification of the first $i$ tokens $S_i$ with the contribution of the next token $t_{i+1}$. Next, the abstraction is concretized based on task characteristics of human-generated text discrimination. Since relatively short texts tend to have simple semantic correlations to be captured, human text discrimination is performed via capturing signals from tokens. We hold that each token has a hidden property of origin, and this attribution contributes to the classification of the whole sequence. Tokens, as extreme cases of short texts, could be sorted into two categories: “clear positive”, i.e. the token could hardly be generated by AI; or “unlabeled”, i.e. the token is mediocre and universally used, giving no signal of being “human-spoken”. Each token is expected to provide an equal contribution to the overall sequence classification towards the orientation of its own category (Kang et al., 2018). In this sense, the merging function $f$ is formulated as an equally-weighted addition: $$f(\Delta(S_i), t_{i+1}) = w_S \Delta(S_i) + w_t \delta(t_{i+1}) \quad \text{s.t.} \quad w_S = w_t,$$ where $\delta(t_{i+1})$ is defined as the contribution of token $t_{i+1}$. For simplicity, we discretize the transition of classification from $i \rightarrow i + 1$, and each token contribution is designated as binary. We also take text length into consideration by normalizing $\delta(t_{i+1})$ with a factor of the sequence length $l$. Under these assumptions, the transition is formulated as: $$\Delta(S_{i+1}) = \text{clip}(\Delta(S_i) + \delta(t_{i+1}), [0, 1]), \quad \text{s.t.} \quad \delta(t_{i+1}) = \begin{cases} 1/l & \text{if } t_{i+1} \text{ is clear positive}, \\ -1/l & \text{otherwise}. \end{cases}$$ Notably, we use a hard clip function to bound the overall classification results in the interval $[0, 1]$ rather than other non-linear functions, e.g. sigmoid.
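The following pure-Python sketch (ours, not the authors' code) simulates the recurrence of Eqs. (3)–(5) for a single token sequence. Which tokens count as "clear positive" is assumed to be given, and the initial state is set to positive, as explained below.

```python
def discriminator_confidence(token_is_clear_positive):
    """Simulate the abstract recurrent discriminator of Eq. (5).

    token_is_clear_positive: one boolean per token, True if the token is a
    'clear positive' (human-signal) token, False if it is 'unlabeled'.
    Returns the final confidence Delta(S_l) in [0, 1].
    """
    l = len(token_is_clear_positive)
    delta = 1.0  # initial state Delta(S_0), set to positive (see below)
    for is_pos in token_is_clear_positive:
        contribution = 1.0 / l if is_pos else -1.0 / l    # delta(t_{i+1}) in Eq. (5)
        delta = min(max(delta + contribution, 0.0), 1.0)  # hard clip to [0, 1]
    return delta

# A 5-token sequence with a single clear-human token:
print(discriminator_confidence([False, True, False, False, False]))  # ~0.4
```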
We choose the hard clip because clear positive tokens could be rare in practice. This assumption is particularly true when we consider recent advancements of generative language models, where human and AI languages resemble each other more and more closely. In other words, a majority of words are frequently used by both humans and AI, while only a few signal words manifest unique human characteristics. This property requires the discriminator model to be highly sensitive to positive token signals. Hence, we set hard boundaries rather than using non-linear standardizing functions to scale the output to \([0, 1]\). Further, to encourage positive responses, we set the initial state \(\Delta(S_0)\) of the discriminator to positive. Returning to the original objective, we calculate the prior probability of a sample being positive, \(\tilde{\pi}\), based on the introduced recurrent language model. \(\tilde{\pi}\) can also be interpreted as the expected confidence of the recurrent discriminator, \(E[\Delta(S_l)]\). The discretization of contributions is beneficial for reducing the continuous discriminator \(\Delta\) to discrete states: for a sequence \(S_l\) with \(l\) tokens, the confidence can only take values \(i/l\) for \(i \in \{0, 1, \ldots, l\}\). Therefore, discriminator \(\Delta\) has a total of \(l + 1\) equally spaced states as confidence outputs. We will show that the expectation \(E[\Delta(S_l)]\) over all length-\(l\) sequences can be exactly calculated given the positive probability \(p\) of a single token, i.e. the general probability of a token showing a clear-human signal. As stated previously, \(p\) tends to be a small value. The state transition matrix \(P \in \mathbb{R}^{(l+1) \times (l+1)}\), which represents the contribution of the last token, is a band sparse matrix consisting of a positive transition \(p\) and a negative transition \(1 - p\) to states adjacent to the current state. Defining the probability vector after \(i\) transitions as \(\sigma_i \in \mathbb{R}^{(l+1)}\), a single transition (following Eq. (5)) and the final state probability vector can be described as: \[ \sigma_{i+1} = \sigma_i P, \quad \sigma_l = \sigma_0 P^l. \] Thus, given the one-hot initial state \(\sigma_0\), we can calculate the final state probability vector and the overall expectation \(\tilde{\pi}\) for a sequence of length \(l\): \[ \tilde{\pi}(l) = E[\Delta(S_l)] = \langle \sigma_l, \alpha \rangle = \sigma_0 P^l \alpha^T, \] where the vector \(\alpha \in \mathbb{R}^{(l+1)}\) is the vector of all possible positive confidences: \(\alpha = [i/l]_{i=0}^l\). Further details and derivations are given in Appendix B. As a result, as text length decreases, the prior positive probability \(\tilde{\pi}(l)\) for samples of that length decreases as well. This is in line with our expectation in Sec. 3.1 that shorter texts tend to demonstrate more “unlabeled” properties. Finally, on top of the canonical non-negative PU loss as defined in Eq. (2), we define the Multiscale PU loss with text-length-variant priors: \[ \hat{R}_{MPU}(g) = \langle \tilde{\Pi}, \hat{R}_P(g, +1) \rangle + \hat{R}_U(g, -1) - \langle \tilde{\Pi}, \hat{R}_P(g, -1) \rangle, \] where \(\tilde{\Pi}\) stands for an array \([\tilde{\pi}(l_g)]\) that records the corresponding prior of each training text, calculated from its length using Eq. (7). As emphasized above, short machine-generated texts should be viewed as partially “unlabeled” rather than entirely “unlabeled”.
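As an illustration of Eqs. (6)–(7), the following numpy sketch (ours) computes the length-variant prior $\tilde{\pi}(l)$ from a token-level positive probability $p$. The value $p = 0.2$ matches the best setting reported later in Table 9; the function name and implementation details are our own assumptions.

```python
import numpy as np

def length_variant_prior(l, p=0.2):
    """Compute the length-sensitive prior pi~(l) of Eq. (7).

    The discriminator has l + 1 equally spaced confidence states i / l.
    Each token moves the state up with probability p (clear positive) or
    down with probability 1 - p, with hard clipping at the boundaries.
    """
    n_states = l + 1
    P = np.zeros((n_states, n_states))
    for i in range(n_states):
        up = min(i + 1, l)      # clipped upward transition
        down = max(i - 1, 0)    # clipped downward transition
        P[i, up] += p
        P[i, down] += 1.0 - p
    sigma0 = np.zeros(n_states)
    sigma0[l] = 1.0             # one-hot initial state: Delta(S_0) = 1 (positive)
    sigma_l = sigma0 @ np.linalg.matrix_power(P, l)   # Eq. (6)
    alpha = np.arange(n_states) / l                   # all possible confidences i / l
    return float(sigma_l @ alpha)                     # Eq. (7)

# Shorter texts yield smaller priors, i.e. they are "more unlabeled":
print([round(length_variant_prior(l), 3) for l in (2, 5, 20, 100)])
```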
Accordingly, we take a weighted sum of the Multiscale PU loss and the canonical PN classification loss to obtain the final loss for detector model finetuning: \[ \hat{R}(g) = \hat{R}_{PN}(g) + \gamma \hat{R}_{MPU}(g). \] ### 3.4 Text Multiscaling The proposed Multiscale PU loss expects training texts of highly variant lengths, but training sets may contain lengthy paragraphs only. Therefore, we introduce the Text Multiscaling module, which generates a variety of short texts to exert the potential of the length-sensitive Multiscale PU loss. We propose random deletion at the sentence scale as a solution. The Text Multiscaling module consists of three steps: first, a complete training text is tokenized into \(n\) sentences, denoted as the sentence array \(C\); then the sentences are independently and randomly masked based on a sentence-wise mask probability \(p_{sent}\). In probabilistic terms, each sentence is decided by an independent Bernoulli trial with sample space \(\{0, 1\}\), where 0 means the sentence is discarded and 1 means the sentence is maintained. Finally, all remaining sentences are merged again to form the multiscaled training text \(c_{mul}\). Mathematically, with \( \odot \) standing for the element-wise Hadamard product, the above process can be summarized as: \[ c_{mul} = C \odot M, \quad \text{where } M \sim \text{Bernoulli}^n(1 - p_{sent}). \] (10) The proposed Text Multiscaling module is a one-to-one mapping from \( C \rightarrow c_{mul} \); we are not generating more training samples, but substituting the original sample for fair comparison in experiments. Notably, it is possible that multiscaling leaves the original text intact, or that only one sentence is left. The relative order of the remaining sentences is maintained to avoid breaking logical relations between sentences. Multiscaled texts automatically inherit the class labels of their original texts. The concern that the attribution may change due to length reduction is addressed by the use of the Multiscale PU loss. Though random deletion is also applied in Easy Data Augmentation (EDA) (Wei & Zou, 2019), our method is different from theirs in two aspects. Firstly, our method is focused on multiscaling, while word-level random deletion as proposed by EDA has limited effect in generating texts of various lengths. Secondly, EDA could break semantic meanings in sentences: deletion of keywords could change the class of a sentence, while a more integrated, sentence-level deletion reduces the chance of a class property change. 4 EXPERIMENTS 4.1 SETTING OVERVIEW Datasets. We choose TweepFake (Fagni et al., 2020) and HC3 (Guo et al., 2023) as benchmarks for our experiments. TweepFake (Fagni et al., 2020) is a dataset of tweets for AI-generated microblog detection. Since the latest LLMs have completely reshaped the task of AI text detection, we also adopt HC3 (Guo et al., 2023), which is an up-to-date ChatGPT text detection dataset including both English and Chinese. Additionally, HC3 has short-text benchmarks: HC3-English-Sent and HC3-Chinese-Sent. We use these datasets to demonstrate the effectiveness of our method. The length statistics in Table 2 show the distribution similarity of the English short-text benchmarks, i.e. TweepFake (which consists of tweets) and HC3-En-Sent. We conclude from the statistics that the adopted HC3 short-text benchmark could simulate the fragmented language environment (e.g. Twitter) on mobile apps. Detector evaluation on these short-text benchmarks could reflect their real-world detection capabilities in smartphone-related scenarios.

| Benchmark | Mean | Std | Q1 | Q2 | Q3 |
|--------------------|------|-----|----|----|----|
| TweepFake (Fagni et al., 2020) | 24.82 | 15.19 | 13 | 21 | 34 |
| HC3-En-Sent (Guo et al., 2023) | 24.98 | 15.47 | 15 | 22 | 31 |

Table 2: Token length statistics of short-text benchmarks. HC3-English-Sent has a similar length distribution as TweepFake. These short-text benchmarks could simulate languages that we encounter in Instant Messaging and Microblogging Apps, like Twitter.
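As a concrete illustration of the Text Multiscaling module of Sec. 3.4 (Eq. 10), here is a minimal sketch (ours, not the released code); the regex-based sentence tokenizer and the fallback when every sentence happens to be masked are our own simplifications.

```python
import random
import re

def text_multiscaling(text, p_sent=0.25):
    """Sentence-level random deletion (Eq. 10).

    Each sentence is independently kept with probability 1 - p_sent; the
    relative order of surviving sentences is preserved, and the multiscaled
    text inherits the label of the original text.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())   # naive sentence tokenizer
    mask = [random.random() >= p_sent for _ in sentences]  # M ~ Bernoulli^n(1 - p_sent)
    kept = [s for s, keep in zip(sentences, mask) if keep]
    # Fallback (our choice): if every sentence was masked, keep the original text.
    return " ".join(kept) if kept else text
```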
Detectors. BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) are adopted to apply our MPU method, due to their popularity and superior performance in previous AI text detection works (Solaiman et al., 2019; Fagni et al., 2020; Liu et al., 2022; Guo et al., 2023). Training-agnostic detection algorithms are excluded from our consideration.

4.2 TWEEPFAKE DETECTION RESULTS

In the TweepFake experiments, we follow Kumarage et al. (2023) for our training settings. Kumarage et al. (2023) is one of the latest works on AI-generated text detection, and it claims outstanding performance on short-text detection. We strictly follow the original training strategy in Kumarage et al. (2023): the model is trained with the AdamW optimizer at batch size 16 and learning rate $1 \times 10^{-5}$. TweepFake mainly consists of short tweets. We inspect the dataset and find that the vast majority of texts are single sentences or a handful of sentences. Hence, we refrain from using Text Multiscaling, which randomly deletes sentences, on the TweepFake dataset; rather, we directly apply the Multiscale PU loss during training. As shown in Table 3, the experimental result of the proposed MPU is promising: it greatly improves the performance of finetuned RoBERTa, and its performance outcompetes the latest TweepFake baseline RoBERTa-Stylo (Kumarage et al., 2023), which requires an additional module for stylometric feature extraction during finetuning.

| Method | Acc. |
|------------------------|------|
| BERT-Finetuned | 89.1 |
| RoBERTa-Finetuned | 89.6 |
| RoBERTa-Stylo | 91.1 |
| RoBERTa-MPU (Ours) | **91.4** |

Table 3: Experiments on the short-text dataset TweepFake (Fagni et al., 2020).

### 4.3 HC3-ENGLISH DETECTION RESULTS

| Method | HC3-En-Full | HC3-En-Sent |
|-------------------------|-------------|-------------|
| GLTR (Gehrmann et al., 2019) | 96.52 | 40.19 |
| PPL (Guo et al., 2023) | 95.20 | 62.04 |
| OpenAI (OpenAI, 2023b) | 91.00 | 69.27 |
| DetectGPT (Mitchell et al., 2023) | 87.39 | 63.32 |
| BERT-Finetuned (Devlin et al., 2018) | 97.62±0.91 | 57.65±15.45 |
| RoBERTa-Finetuned (Liu et al., 2019) | 97.42±0.92 | 58.60±10.53 |
| RoBERTa-Stylo (Kumarage et al., 2023) | 96.48 | 81.46 |
| BERT-MPU (Ours) | **98.60±0.52** | **79.76±3.07** |
| RoBERTa-MPU (Ours) | 98.40±0.31 | **85.31±1.80** |

Table 4: Comparison with English AI-generated text detection baselines on HC3 (Guo et al., 2023). Most baselines perform poorly on short texts (i.e. HC3-En-Sent); in contrast, our method improves short-text detection greatly.

We also evaluate our method on ChatGPT corpora, which are much harder to detect. In the ChatGPT text detection experiments, we follow the setting of HC3 (Guo et al., 2023) to test the performance of our method. HC3 (Guo et al., 2023) is a dataset targeted at ChatGPT text detection. All texts are reduced into shorter texts for a sentence-level variant. We apply the MPU framework on the full-scale dataset of HC3 (Guo et al., 2023). Several baseline detectors are chosen to demonstrate the outstanding detection performance of our MPU method.
These baselines are open-source and replicable. Among these baselines, GLTR (Gehrmann et al., 2019), PPL (Guo et al., 2023), and DetectGPT (Mitchell et al., 2023) are zero-shot methods that do not require further training: they rely on the likelihood outputs of a pretrained language model. The OpenAI Detector (OpenAI, 2023b) is a RoBERTa detector finetuned on OpenAI’s GPT-2 (Radford et al., 2019) corpora. RoBERTa-Stylo (Kumarage et al., 2023) is one of the latest detection baselines targeted at short texts. BERT-Finetuned and RoBERTa-Finetuned are language models plainly finetuned on HC3 (Guo et al., 2023), following the official setting, while BERT-MPU and RoBERTa-MPU are language models trained on HC3 (Guo et al., 2023) via the proposed MPU method.

It can be observed from Table 4 that most existing methods perform poorly on short texts. The statistics verify our previous claim that the detection of shorter texts is a difficult problem. Specifically, finetuned BERT and RoBERTa are good at detecting long, full-level texts, but they fail to filter out shorter AI-generated texts. On the contrary, our MPU method greatly improves short-text performance and boosts long AI-generated text detection as well. We further investigate the effect of solitary MPU components in Sec. 4.5.

| Method | HC3-Ch-Full | HC3-Ch-Sent |
|------------------------------|-------------|-------------|
| GLTR (Gehrmann et al., 2019) | 87.40 | 49.94 |
| RoBERTa-Finetuned (Liu et al., 2019) | 96.28±3.42 | 83.07±6.85 |
| RoBERTa-MPU (Ours) | **97.42±0.24** | **89.37±1.94** |

Table 5: Comparison with Chinese AI-generated text detection baselines. Our method also proves effective on Chinese corpora.

### 4.4 HC3-Chinese Detection Results

To verify the generality of the proposed MPU method in other languages, we also compare our method with baselines on the Chinese AI text detection benchmark HC3-Chinese (Guo et al., 2023). Following Guo et al. (2023), we use chinese-roberta-wwm-ext (Cui et al., 2020) as the pretrained language model. The results are shown in Table 5. Our method still outcompetes other methods by large margins in terms of short-text detection, reaching an F1 score of 89.37 on HC3-Chinese-Sent.

### 4.5 Ablations

**Harmful Short Texts.** We elaborate in Section 3.1 that short texts could manifest a partially unlabeled property, which impacts the normal training process of the detector. To demonstrate that short texts are indeed harmful for training, we design an experiment based on the HC3-English dataset (Guo et al., 2023) as follows: when the detector encounters a short training text during training, the training text is omitted from backward operations. Other settings are identical to Section 4.3. As shown in Table 6, finetuning without short texts demonstrates better performance compared with plain finetuning. This reveals that short sentences are harmful to detector training due to their partially unlabeled properties. Hence, PU frameworks need to be leveraged to address this issue.

| Method | HC3-En-Full | HC3-En-Sent |
|---------------------------------|-------------|-------------|
| Finetuning with all texts | 97.42 ± 0.92 | 58.60 ± 10.53 |
| Finetuning without short sentences | **98.19 ± 0.66** | **62.42 ± 5.60** |

Table 6: Performance comparison between the detector finetuned with all texts and the detector finetuned without short texts.

| Text Mul. | MPU loss | HC3-En-Full | HC3-En-Sent | HC3-Ch-Full | HC3-Ch-Sent |
|-----------|----------|-------------|-------------|-------------|-------------|
| ✗ | ✗ | 97.42±0.92 | 58.60±10.53 | 96.28±3.42 | 83.07±6.85 |
| ✓ | ✗ | 96.42±2.27 | 82.76±2.76 | 95.89±4.18 | 84.79±5.94 |
| ✗ | ✓ | 97.48±2.41 | 45.30±8.78 | 96.87±0.89 | 83.46±5.78 |
| ✓ | ✓ | **98.40±0.31** | **85.31±1.80** | **97.42±0.24** | **89.37±1.94** |

Table 7: F1 scores of finetuned RoBERTa on the ChatGPT benchmark HC3. “Full” and “Sent” stand for the model validated on long-text and short-text benchmarks, respectively.

**Framework Components.** We perform ablations on the solitary effects of Text Multiscaling and the Multiscale PU loss. From Table 7, it is evident that the addition of Text Multiscaling to the training corpus greatly improves performance on sentence-level corpus detection, as expected. Unfortunately, the detector’s capability on the full corpus decays. This performance drop is attributed to the unreasonable label assignment to short corpora produced by random sentence deletion: the generated short corpora automatically inherit labels from their full-level predecessors in the Text Multiscaling module, neglecting the “unlabeled” properties introduced in Sec. 3.1. The addition of the MPU loss reverses the full-level corpus detection performance drop and boosts short-text performance as well. Adding the MPU loss alone is of little help for detection performance, for lack of short texts.

**MPU Loss.** We further investigate MPU loss configurations on the ChatGPT text detection benchmark HC3-English (Guo et al., 2023). The performance of the Multiscale PU loss is evaluated against an ordinary PU loss that disregards changes in sentence lengths, as shown in Table 8. The Multiscale PU loss is sensitive to training corpora of various lengths and is thus more performant compared with its ordinary counterpart.

| PU type | Full | Sent |
|-------------|------------|------------|
| Ordinary | 97.05±2.15 | 83.53±3.14 |
| Multiscale | **98.40±0.31** | **85.31±1.80** |

Table 8: Performance comparison between the ordinary PU loss and the proposed Multiscale PU loss.

Introduced in the abstract recurrent detection model (Sec. 3.3), the token-wise prior $p$ estimates the probability of a token being characteristically human-spoken. As shown in Table 9, we carefully tune $p$ and find that the best performance is reached at $p = 0.2$, which is small, as we expect.

| $\gamma$ | Full | Sent | $p$ | Full | Sent | $p_{sent}$ | Full | Sent |
|----------|------------|------------|-----|------------|------------|------------|------------|------------|
| 0 | 96.42±2.27 | 82.76±2.76 | 0.1 | 96.29±1.31 | **86.06±1.97** | 0 | 97.48±2.41 | 45.30±8.78 |
| 0.2 | 96.52±0.38 | 83.94±4.07 | 0.2 | **98.40±0.31** | 85.31±1.80 | 0.1 | 97.73±1.42 | 76.84±7.93 |
| 0.4 | **98.40±0.31** | 85.31±1.80 | 0.3 | 96.81±1.70 | 84.17±2.78 | **0.25** | **98.40±0.31** | 85.31±1.80 |
| 0.6 | 97.42±0.13 | **85.78±1.19** | 0.4 | 97.44±1.06 | 82.88±3.32 | 0.4 | 97.45±1.34 | **87.11±1.41** |
| 0.8 | 96.90±1.49 | 84.54±2.09 | | | | | | |

Table 9: Ablation experiment results on hyperparameters: the loss proportion $\gamma$, the estimated probability of a token being clear-human $p$, and the sentence mask probability $p_{sent}$.

We also carefully adjust the affine weight hyperparameter for the PU loss, $\gamma$, as shown in Table 9.
As the affine weight $\gamma$ for the PU loss gradually increases, the full-level corpus detection performance reaches its peak at $\gamma = 0.4$ and then drops, while the sentence-level performance reaches its peak at $\gamma = 0.6$. From a comprehensive perspective, the best overall performance is reached at $\gamma = 0.4$, where the performances on both full and sentence-level corpora are satisfactory. The climb-and-drop trend reveals that short machine-generated sentences are not completely unlabeled; short-text classification should be viewed as a partial PU problem rather than a complete PU problem. Further, we test the advantage of the non-negative risk estimator in the nnPU loss (Kiryo et al., 2017) against the uPU loss (Du Plessis et al., 2014), as introduced in Sec. 3.2. The results are shown in Table 10.

| Loss type | Full | Sent |
|-------------------|------------|------------|
| Unbiased PU (Du Plessis et al., 2014) | 97.90±0.25 | 84.87±1.28 |
| Non-negative PU (Kiryo et al., 2017) | **98.40±0.31** | **85.31±1.80** |

Table 10: Performance comparison between the unbiased PU risk estimator and the non-negative PU risk estimator.

**Text Multiscaling.** As introduced in Sec. 3.4, we randomly mask sentences of the training set at probability $p_{sent}$ for multiscale text augmentation. We investigate tuning $p_{sent}$ for the optimal value. The statistics are shown in Table 9. When $p_{sent}$ is set to 0.25, the test performance on both full and sentence-level corpora is satisfactory; when $p_{sent}$ is set too high, sentence-level detection performance is enhanced, but full-level performance is negatively impacted because the full-scale training texts are overly damaged.

## 5 Conclusion

This paper proposes a Multiscale Positive-Unlabeled (MPU) framework for AI-generated text detection. We look at the uncertain attribution of short AI-generated corpora, and model AI text detection as a partial PU problem. The MPU loss and the Text Multiscaling module are proposed to augment detectors’ discriminative ability on short corpora.

ETHICS & REPRODUCIBILITY STATEMENT

This paper proposes a training method for AI-generated text detectors. Despite outstanding performance on multiscale texts, chances are that the detectors output the wrong attribution of a certain piece of text. This may cause ethical issues when the detector is used for detecting plagiarism, fake news, et cetera. Hence, we strongly recommend that results from the detector only serve as a reference in actual applications. Experiments are reproducible. We have attached complete training settings in the Appendix; we also fix random seeds in our codes for the ease of replication. All details are in Appendix E.

ACKNOWLEDGEMENT

This work is supported by National Key R&D Program of China under Grant No.2022ZD0160300 and National Natural Science Foundation of China under Grant No.62276007. We gratefully acknowledge the support of MindSpore, CANN and Ascend AI Processor used for this research.

REFERENCES

David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, and Isao Echizen. Generating sentiment-preserving fake online reviews using neural language models and their human- and machine-based detection. In Leonard Barolli, Flora Amato, Francesco Moscato, Tomoya Enokido, and Makoto Takizawa (eds.), Advanced Information Networking and Applications - Proceedings of the 34th International Conference on Advanced Information Networking and Applications, AINA-2020, Caserta, Italy, 15-17 April, volume 1151 of Advances in Intelligent Systems and Computing, pp.
1341–1354. Springer, 2020. doi: 10.1007/978-3-030-44041-1_114. URL https://doi.org/10.1007/978-3-030-44041-1_114

Jessa Bekker and Jesse Davis. Learning from positive and unlabeled data: A survey. Machine Learning, 109:719–760, 2020.

Tristan Bepler, Andrew Morin, Micah Rapp, Julia Brasch, Lawrence Shapiro, Alex J Noble, and Bonnie Berger. Positive-unlabeled convolutional neural networks for particle picking in cryo-electron micrographs. Nature Methods, 16(11):1153–1160, 2019.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. CoRR, abs/2005.14165, 2020. URL https://arxiv.org/abs/2005.14165

Xuxi Chen, Wuyang Chen, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen, and Zhangyang Wang. Self-PU: Self boosted and calibrated positive-unlabeled training. In International Conference on Machine Learning, pp. 1510–1519. PMLR, 2020.

Evan Crothers, Nathalie Japkowicz, Herna L. Viktor, and Paula Branco. Adversarial robustness of neural-statistical features in detection of generative transformers. In International Joint Conference on Neural Networks, IJCNN 2022, Padua, Italy, July 18-23, 2022, pp. 1–8. IEEE, 2022. doi: 10.1109/IJCNN55064.2022.9892269. URL https://doi.org/10.1109/IJCNN55064.2022.9892269

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pp. 657–668, Online, November 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.findings-emnlp.58

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805
SZn1Ex72Lv
What specific issues or challenges in the field of Feed Forward Neural Networks (FFNNs) are you addressing with your proposed concept of block-operations and the Multiplexer? How does this concept enhance FFNNs, and what practical applications or benefits can be derived from it?
Block-operations: Creating an Inductive Bias to Route Data and Reuse Subnetworks Anonymous authors Paper under double-blind review Abstract Feed Forward Neural Networks (FNNs) often suffer from poor generalization due to their inability to effectively develop and reuse subnetworks for related tasks. Csordás et al. (2020) suggest that this may be because FNNs are more likely to learn new mappings than to copy and route activation patterns without altering them. To tackle this problem, we propose the concept of block-operations: Learnable functions that group neurons into larger semantic units and operate on these blocks, with routing as a primitive operation. As a first step, we introduce the Multiplexer, a new architectural component that enhances the FNN by adding block-operations to it. We experimentally verified that the Multiplexer exhibits several desirable properties, as compared to the FNN which it replaces: It represents concepts consistently with the same neuron activation patterns throughout the network, suffers less from negative interference, shows an increased propensity for specialization and transfer learning, can more easily reuse learned subnetworks for new tasks, and is particularly effective at learning algorithmic tasks with conditional logics. In several cases, the Multiplexer achieved 100% OOD-generalization on our tasks, where FNNs only learned correlations that failed to generalize. Our results suggest that block-operations are a promising direction for future research. Adapting more complex architectures than the FNN to make use of them could lead to increased compositionality and better generalization. 1 Introduction Problem. Typical artificial neural networks (NNs) perform poorly on tasks of systematic generalization (Marcus (1998), Bahdanau et al. (2018), Lake & Baroni (2018)). Numerous people have argued that this failure to generalize is caused by Neural Networks’ lack of compositionality: The ability to break complex concepts down into their atomic elements and to reuse and recombine them appropriately when faced with a new task (Pfeiffer et al. (2023), Lake & Baroni (2018), Bahdanau et al. (2018), Barrett et al. (2018), Hupkes et al. (2020), Hill et al. (2019)). Compositionality. Csordás et al. (2020) investigated in how far compositionality emerges automatically in neural networks. They found that neural networks frequently learn to develop specialized subnetworks for different tasks, but have difficulty learning to reuse subnetworks for similar tasks. They note that neural networks are bad at routing data because routing can only occur through network weights with a special structure that is difficult to learn. Their experiments suggest that neural networks tend to learn new feature mappings instead of representation-preserving mappings even in situations where a uniform representation would clearly be beneficial. They argue that this is an important issue and call for additional research on suitable inductive biases. This paper aims to answer that call, using experiments similar to their own to verify the effectiveness of our module. Novelty. The main novelty we introduce is the concept of block-operations: We reconceptualize how neurons in network activations are treated. Where fully-connected layers treat all neurons as independent of each other and where attention-mechanisms linearly interpolate entire tensors, we do a mix of both: We split activation tensors into uniformly sized blocks. 
We then introduce the Multiplexer and SMFR modules (Stack of Multiplexers and Feedforward Neural Networks with gated residual connections), which treat these blocks as semantic units that can be moved, processed and recombined independently. The SMFR learns to apply copy operations and interpolations between blocks, as well as feature mappings within each block. This creates an inductive bias that makes it very easy for the network to learn to transfer data without losing the generality of fully-connected layers. The SMFR replaces the basic Feedforward Neural Network (FNN) building block. It can learn to emulate an FNN or to transfer data, depending on what is most useful. Experiments. We compare the SMFR to the FNN, which it replaces, using synthetic datasets to test directly for the properties we are looking for. Our module showed signs of compositional behavior and attained substantial improvements over the FNN baseline. In Section 5.1, we show that our module suffers less from negative interference (McCloskey & Cohen, 1989). In Section 5.2, we show that our module is better at reusing modules and keeping data representations consistent throughout the network. This improves generalization and enables transfer learning. It achieved perfect Out-of-Distribution (OOD) generalization on several trials by learning to reuse a subnetwork. In Section 5.3, we show that our module is particularly effective at learning logical rules and variable assignments, such as those seen in algorithmic tasks. It generalized in a way that suggests that the module learned the underlying atomic operations of the task, unlike the FNN baseline, which only learned statistical correlations. 2 RELATED WORK Residual Connections. Residual connections allow a neural network to route data through the network unchanged, and many variants of this have been used to great success in different areas (Szegedy et al., 2017; He et al., 2016b; Paszke et al., 2017; Barchechnner et al., 2021; Wu et al., 2019; He et al., 2020). Due to the success of residual connections, we adjust them for our own architecture. In particular, we use a Copy Gate using a learnable interpolation weight instead of simple addition, as used by Csordás et al. (2021) in the Neural Data Router (NDR). They reported very strong generalization ability on multiple tasks using this variant of residual connections. Attention. Attention mechanisms have become a ubiquitous tool in deep learning, useful in a variety of fields such as natural language processing, image processing and speech recognition (Bahdanau et al., 2014; Xu et al., 2015; Vaswani et al., 2017; Devlin et al., 2018; Jaderberg et al., 2015; Chorowski et al., 2015). Note that the term Attention is not used consistently in the literature and is often conflated with the more specific Self-Attention that is used in Transformers. The Transformer has difficulty learning to route data without modification, just like the FNN, because it includes a multiplication with a Value matrix, which is similar to applying a fully connected layer. The Multiplexer module we introduce in this paper uses a form of Attention, but not Self-Attention, because the format requirements of its input and output are different, and it avoids using a Value Matrix. Despite their seeming similarity, Multiplexers and Self-Attention mechanisms therefore fulfill different, complementary functions in an architecture. Other. 
Many other architectures for routing exist and have been summarized by surveys (Pfeiffer et al., 2023; Han et al., 2021; McGill & Perona, 2017). For example, Capsule Networks can perform routing between a fixed number of capsules based on entity recognition (Sabour et al., 2017). Routing Networks are a family of architectures that learn routing directly through a separate routing module (Rosenbaum et al., 2017; Rosenbaum et al., 2019). Recurrent Independent Mechanisms use multiple largely independent cells that communicate through a bottleneck of attention (Goyal et al., 2019). 3 BLOCK-OPERATIONS Motivation. As noted by Csordás et al. (2020), there is no inductive bias that would cause FNNs to keep activation patterns for the same concept consistent throughout the network. A related issue is that neural networks tend to produce much more dense representations for concepts than biological neural networks. Ahmad & Scheinkman (2019) show that a more sparse representation is more resilient to noise. To solve these issues, we want to construct a new neural network module that avoids these problems. In this way, we attempt to construct a neuro-symbolic system using a purely connectionist approach, as suggested by Greff et al. (2020) as a possible solution to the binding problem. We aim for an inductive bias to satisfy the following design goals with much fewer parameter updates and side effects than an FNN: 3.1 DESIGN GOALS **Division of Concerns.** The activation patterns of different concepts should use different, non-overlapping subsets of neurons. **Routing.** It should be possible to easily copy activation patterns throughout the network without modification. Once the network has learned such a routing mechanism, it should work reliably even for OOD data. **Reuse.** The activation pattern of each concept should be reused consistently throughout the network. Subtasks that rely on the same concepts should represent these concepts with the same activation patterns. **Resilience to Combinatorial Explosion.** Different subtasks can rely on different subsets of all possible concepts used by the network. Fulfilling our design goal **Division of Concerns** needs to remain possible, even as the number of possible concepts in the network grows. We can achieve this through a mechanism to select only relevant subsets of concepts and drop irrelevant data. **Role Multiplicity.** It can happen that a subtask requires two concepts of the same type for different purposes (e.g., calculating \( f(a, b) = a + b^2 \) requires two inputs of type ‘number’). In these cases, it must be possible to unambiguously encode which of the items is which, even though they are both represented by the same neuron activation pattern and therefore overlap and interfere with each other. **Conditional Behavior.** All of the above needs to be optional. The network must be able to resort back to simple and efficient densely-connected layers when appropriate. Importantly, it must be possible to make that decision at inference time, conditional on intermediate results, and not hard coded through network weights. 
3.2 APPROACH **Block-Operations.** To achieve these goals, we propose a novel way to design neural network architectures, by reconceptualizing the way we treat data: **All activation tensors are split into uniformly-sized blocks that aggregate individual neurons into larger semantic units.** Each block of a tensor should come to hold one distinct concept, and the position of the block within the tensor should designate its role. **Replacing the FNN.** We introduce the Multiplexer and other modules in Section 4 and then combine them into the SMFR module. The SMFR is a replacement for the FNN that has a suitable inductive bias to learn to work with blocks. It can seamlessly combine two different ways of processing data: learning new mappings just like a regular FNN, and conditionally routing blocks of data. Figure 1 illustrates the general idea without going into the details.

**Figure 1:** **Left.** An example FNN receives a layer of 30 input neurons and maps it to a layer of 30 output neurons using densely connected layers. **Right.** An equivalent SMFR architecture instead views the input as 3 blocks of 10 neurons each and it outputs another 3 blocks of 10 neurons each. In this example, the first output block is a copy of the first input block. The second output block is generated through an FNN based on all 3 input blocks (the FNN is a submodule inside the SMFR). The third output block is a linear interpolation of the 3 input blocks and an FNN output.

4 MODULES

Overview. We introduce the **Multiplexer** to route data and the **FNNR** (Feedforward Neural Network with gated Residuals) to learn new feature mappings. We combine these two modules into the **MFNNR** (Multiplexer plus Feedforward Neural Network with gated Residuals), which can do both. Finally, we stack multiple MFNNRs in sequence to form our final architecture, the **SMFR**.

**Multiplexer.** A Multiplexer (Figure 2) takes $M$ input blocks and produces $N$ output blocks, each of which is a weighted average of all $M$ input blocks. The weights are generated by a feedforward network that receives all $M$ blocks as input and outputs an $M \times N$ weight matrix, which is normalized by applying a softmax over the first dimension. The Multiplexer can learn to dynamically transfer blocks based on their content as well as the content of other blocks, can select subsets of blocks or copy them, and can create linear interpolations of different blocks.

Figure 2: A Multiplexer with $M = 4$ and $N = 3$.

**FNNR.** An FNNR (Figure 3) takes $N$ input blocks and produces $N$ output blocks. It uses an FNN to generate new blocks and then combines them with the input blocks using residual connections. These residual connections use a learned gating weight instead of simple addition, similar to Csordás et al. (2021). An FNNR can learn to either let an input block pass through unchanged, or to replace it with a newly derived block just as an FNN does, or to create a linear interpolation of both. It does this for each of the blocks independently, conditional on each of them as well as on extra input tensors.

Figure 3: An FNNR with $N = 3$.

**MFNNR.** An MFNNR is composed of a Multiplexer followed by an FNNR module. The FNNR module uses the input blocks of the Multiplexer as its extra input. The MFNNR can learn to rearrange blocks, or to generate new blocks through an FNN, or to interpolate between both (a minimal sketch of these modules follows).
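To make the module descriptions concrete, here is a minimal PyTorch sketch (ours, not the authors' code) of the Multiplexer and the FNNR, composed into an MFNNR at the end; the hidden sizes and the two-layer weight networks are our own assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    # Small feed-forward network used to generate weights or new blocks (sizes are our choice).
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

class Multiplexer(nn.Module):
    """N output blocks, each a softmax-weighted average of the M input blocks (Figure 2)."""
    def __init__(self, m, n, block_size):
        super().__init__()
        self.m, self.n = m, n
        self.weight_net = mlp(m * block_size, m * n)

    def forward(self, blocks):                            # blocks: (batch, M, B)
        w = self.weight_net(blocks.flatten(1)).view(-1, self.m, self.n)
        w = torch.softmax(w, dim=1)                       # normalize over the M input blocks
        return torch.einsum('bmn,bmd->bnd', w, blocks)    # (batch, N, B)

class FNNR(nn.Module):
    """Gated residual between each input block and an FNN-generated block (Figure 3)."""
    def __init__(self, n, block_size, extra_dim=0):
        super().__init__()
        in_dim = n * block_size + extra_dim
        self.block_net = mlp(in_dim, n * block_size)      # proposes new blocks
        self.gate_net = mlp(in_dim, n)                    # one gating weight per block

    def forward(self, blocks, extra=None):                # blocks: (batch, N, B)
        flat = blocks.flatten(1)
        if extra is not None:
            flat = torch.cat([flat, extra], dim=1)
        new = self.block_net(flat).view_as(blocks)
        gate = torch.sigmoid(self.gate_net(flat)).unsqueeze(-1)  # (batch, N, 1)
        return gate * new + (1.0 - gate) * blocks         # per-block interpolation

# An MFNNR chains the two modules; the FNNR also sees the Multiplexer's input blocks.
mux, fnnr = Multiplexer(4, 3, 10), FNNR(3, 10, extra_dim=4 * 10)
x = torch.randn(2, 4, 10)                                 # batch of 2, M = 4 blocks of size 10
y = fnnr(mux(x), extra=x.flatten(1))                      # MFNNR forward pass -> (2, 3, 10)
```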
The MFNNR can learn to do this conditionally on the input and with separate rules for each output block. An MFNNR can emulate an FNN or a data-copying mechanism by setting its generated weights to extreme values. It can learn arbitrary mappings like an FNN, but it also has an inductive bias to copy or interpolate blocks of data if doing so leads to a simpler solution. **SMFR.** Multiple MFNNR modules can be stacked one after the other to form a more powerful architecture. The SMFR is the architecture we use in our experiments, as an alternative to FNNs. Design Goals. Coming back to our original design goals: The SMFR achieves Routing because both the Multiplexer and the FNNR can pass blocks through without modification. It achieves Resilience to Combinatorial Explosion because it can select the subset of blocks that are needed for a task and ignore all others. It achieves Role Multiplicity because the Multiplexer can move a block from one position in the input tensor to a different position in the output tensor. It achieves Conditional Behavior because both the Multiplexer and the FNNR use an internal FNN to control gates, so that all of these features are only used when they are useful. The goals Division of Concerns and Reuse should be achieved as emergent behavior of the network. Our experiments confirm this assumption. 5 EXPERIMENTS Summary. We run experiments on synthetic datasets to test for the presence or absence of useful properties. As the SMFR is intended as a replacement for the FNN, we compare SMFRs with FNNs with a similar number of parameters. All experiments are designed to be as simple as possible to test the properties we want to investigate. Format. Each experiment uses a set of digits as inputs, as well as a task indicator to differentiate between the subtasks of the experiment. The digits are one-hot encoded. We use a block size of ten. The task-indicator tensor becomes its own block and is padded to size ten. Comparisons and Hyperparameters. We use grid search to generate different architectures of FNNs and SMFRs. For FNNs we vary the number and size of intermediate layers. For SMFRs we vary the number of MFNNR modules (depth) and the number of blocks in each layer of neurons between the MFNNR modules (width). A depth of zero means that only a single MFNNR is used to map the input blocks to the output blocks directly. We also vary the size of the FNNs inside the MFNNR modules, but only to a lesser extent. See the Appendix for details. Overview. The experiment in Section 5.1 measures negative interference by changing the distribution used for training and testing once a threshold accuracy is reached. It shows that SMFRs are less prone to negative interference than FNNs. The experiment in Section 5.2 measures generalization through the reuse of subnetworks by training the same task on two different inputs but with a limited training distribution for one of them. It shows that SMFRs can sometimes learn to reuse subnetworks perfectly even in the absence of training data, a case of transfer learning. The experiment in Section 5.3 measures performance on an algorithmic task that requires conditional logic and variable assignments. It shows that SMFRs perform better and develop a more accurate model of the underlying atomic operations. 5.1 ADDITION/MULTIPLICATION EXPERIMENTS Task. The addition/multiplication dataset is designed to test the resilience to negative interference of a neural network. The task is to either add or multiply two single-digit numbers (modulo 10).
This task is similar to the addition/multiplication task in Csordás et al. (2020). We use two training stages. During the preparation stage the multiplication task is trained on limited data and the addition task is trained on all data. In the limited dataset, one number uses only low digits (0 to 4) and the other number uses only high digits (5 to 9). Once the network reaches a threshold of accuracy, we switch to the negative interference stage, in which the rule for training is inverted: The addition task is trained on limited data and the multiplication task is trained on all data. We run the negative interference stage for a fixed number of additional training steps, then measure the OOD accuracy: The accuracy on the data that was only used for training in the preparation stage but not the negative interference stage. Variant. We note that both the addition and the multiplication task are commutative. Our SMFR architecture is good at solving commutative tasks because the softmax used by the Multiplexer is also commutative, giving it a useful inductive bias. This is an additional strength of our architecture over FNNs. We perform an ablation study to measure how much of the SMFR’s strength comes from its block-routing abilities and how much from its ability to easily handle commutativity. To do so, we run variants of the experiment in which we use Straight Through Gumbel-Softmax instead of softmax for the Multiplexer, because Gumbel-Softmax makes a discrete pick among candidates and therefore does not help with commutative tasks in the same way softmax does (Jang et al., 2016). Table 1: OOD accuracy for different thresholds and architectures | Threshold | FNN | SMFR_{Softmax} | SMFR_{Gumbel} | SMFR_{Softmax}/FNN | |-----------|-----------|----------------|---------------|--------------------| | 0.7 | 0.032 ± 0.0034 | **0.142** ± 0.0101 | 0.088 ± 0.0080 | **4.44** | | 0.8 | 0.056 ± 0.0042 | **0.184** ± 0.0112 | 0.119 ± 0.0099 | **3.29** | | 0.9 | 0.123 ± 0.0068 | **0.259** ± 0.0137 | 0.208 ± 0.0113 | **2.11** | | 0.95 | 0.202 ± 0.0105 | **0.326** ± 0.0143 | 0.292 ± 0.0158 | **1.61** | | 1.0 | 0.350 ± 0.0157 | **0.434** ± 0.0179 | 0.395 ± 0.0182 | **1.24** | Results. Table 1 shows the OOD accuracy of different models. Each line compares averages and standard errors of 90 trials of FNN and 63 different architectures of SMFR for each of softmax and Gumbel-Softmax. The Threshold refers to the accuracy at which we switch to the second stage of training. The values show the OOD accuracy 2000 steps after switching to the negative interference stage, except for the last column, which shows the ratio between the values of SMFR_{Softmax} and FNN. We see that SMFR_{Softmax} performs best in all cases and the difference to FNNs is larger for smaller thresholds. In other words, SMFRs suffer less negative interference than FNNs, especially if the switch between the two training regimens happens earlier. Routing and Commutativity. SMFR_{Softmax} consistently performs better than SMFR_{Gumbel}, and both outperform FNNs. This shows that both the block-routing abilities of SMFRs and their ability to learn commutativity efficiently have positive effects. Model Size. The above analysis is based on an average over all model sizes for both SMFRs and FNNs. An ablation study showed that SMFRs perform better at smaller model sizes and FNNs at larger ones (see appendix for details). The SMFRs were still better than the FNNs at larger model sizes, but the difference was less pronounced. 
The only cases where FNNs slightly exceeded the performance of SMFRs were when the number of parameters was high and we also waited until convergence (threshold = 1.0) before starting negative interference. See the appendix for additional notes on this and on architecture optimization. 5.2 Double-addition experiments Task. The double-addition dataset is designed to test the ability of a neural network to reuse subnetworks for similar tasks. The network receives two pairs of numbers and has to return either the sum of the first pair or the sum of the second pair (modulo 10). Like the addition/multiplication task above, this task is similar to the double-addition task in Csordás et al. (2020). We want to measure how well the network learns that both tasks require the same logic. To do so, we use a biased training procedure: The distribution of the input numbers is fully uniform for the first pair of numbers, but it is restricted for the second pair of numbers. This uses the same logic that we also used in the addition/multiplication experiments: One number uses only low digits (0 to 4) and the other number uses only high digits (5 to 9). The training data for the second task is therefore a strict subset of the training data for the first task. During testing, we measure the Out-of-Distribution accuracy on the second pair of numbers. If the network learns both to use the same activation patterns to represent numbers at all layers and to use the exact same process for both tasks, then the OOD accuracy of the second pair of numbers should be perfect. Variants. As before, we run two variants of the SMFRs: One using softmax and one using Straight Through Gumbel-Softmax. Using Gumbel resulted in worse performance overall but otherwise showed the same patterns. Results. FNNs of all model sizes consistently get an OOD accuracy of 0.0, which is actually worse than guessing (0.1). This is because \( f(a, b) = a + b \) becomes a bijective function if either \( a \) or \( b \) are frozen, mapping ten possible inputs to ten possible outputs. The set of possible outputs of the limited training inputs has no overlap with the set of possible outputs of the OOD inputs, which results in an OOD accuracy of 0.0. In contrast, SMFRs reach an average of 0.205 OOD accuracy across all architectures we tested. The surprising part here was that some trials achieved 100% OOD accuracy. This is an important finding. Csordás et al. (2020) showed that contemporary neural networks are very bad at learning to reuse logic and our model was sometimes able to do this perfectly. Architecture. We investigated the effect of the architecture on performance. We found that model size correlated with performance, but bigger was not always better (see the appendix for details). Across all of our experiments with SMFRs, 10.4% achieved 100% OOD accuracy (25 of 240 trials). Best results were achieved when using softmax instead of Gumbel, at a depth of exactly 1 and a width of 8 or higher: For these architectures 75% of trials achieved 100% OOD accuracy (9 of 12). These findings suggest that architecture optimization will be important when using SMFRs in practice, which is a limitation that we hope to address in future work. We hypothesize that the depth has an optimum at 1 because at higher depths the network loses track of the routing it should use and just resorts back to using FNN logic. Convergence Behavior. Most experiments with a perfect OOD accuracy converged to 1.0 almost immediately. 
It happened more often that OOD accuracy increased than that it dropped. Perhaps most importantly, the OOD accuracy never dropped after reaching 1.0. In other words, the SMFR usually came to reuse subnetworks more rather than less as the training progressed and remained stable once converged. This suggests that subnetwork reuse will also occur if SMFRs are used as components inside larger architectures that train for longer. 5.3 ALGORITHMIC EXPERIMENTS Task. The ALGO task is designed to emulate patterns of information processing that typically occur in real-life coding tasks. It tests how good the network is at understanding conditional logic and variable-assignment operations. The task uses five variables as the input and expects the same five variables as the output. The task proceeds in several iterations, modifying one variable per iteration. On each iteration, a formula of the following form is applied: "Variables A, B, C, D should remain unaltered. Variable E should be assigned the value of variable A if C > D and the value of variable B otherwise." We use five different permutations of the variables A, B, C, D and E, and the task indicator tells the network which of these five rules it should apply on a given iteration. We want to find out if the network learns the actual underlying logic rule, or only a statistical correlate, as FNNs are prone to do. To do so, we use a special way to train and test the network: During training, we always perform exactly two applications of the rule. The network is run for two iterations, and the loss is only applied to the final output. As a result, the accuracy after an odd number of iterations is Out-of-Distribution and tests if the network understands that the data generating process can be decomposed into two applications of the same atomic rule. Variant: Incrementation. In a variant of the task, we also require the overwritten variable to be incremented by one. This additional incrementation step is no extra work for an FNN, but it makes the task harder for an SMFR because it now has to learn a mapping in addition to learning a transfer operation. Results. Figure 4 shows how the average accuracy of the different architectures changes with the number of iterations. Note that the training accuracy is the entry at iteration = 2. The FNNs didn’t generalize to odd lengths at all, while the SMFRs only suffered a small loss of performance. Poor FNN performance was not unexpected, since there is no reason for the FNN to make the results after one iteration human-readable. The SMFR also lost accuracy at odd-numbered iterations, but only to a much smaller extent, which indicates that it learned to decompose the two-step task into the correct atomic logical rule. Besides the average performance, we were also interested in knowing how often an architecture converged to 100% accuracy on the OOD iterations. This is shown by the second graph. SMFRs scored much better here. The majority of SMFR architectures we tried achieved 100% OOD accuracy on odd-numbered iterations. In contrast, no FNN achieved this and most of them even struggled to achieve 100% performance on even-numbered iterations. It appears that FNNs more easily learned good approximations, but SMFRs were more likely to learn the actual underlying rule that generalizes OOD. Architecture. The choice of architecture had very strong and consistent effects on performance in this experiment. For the width, all that mattered was making it wide enough that all variables could be represented. 
For the depth, there was a range of values where all experiments converged with 100% accuracy on all iterations regardless of the values of other hyperparameters (the range was 1 to 5 out of 0 to 10). This supports our previous finding that the performance of this architecture can be improved by tuning the number of layers. This is similar to Transformer models, which also achieve greater performance by tuning the number of heads (Voita et al., 2019). As in previous experiments, FNNs benefitted more from larger model sizes than SMFRs, and SMFRs reached their best performance at lower model sizes. However, even the largest FNNs we tested performed much worse than the SMFRs on odd iterations (see appendix for details).

Figure 4: The left figure shows the average accuracy of different architectures and variants at different iterations. The zig-zag pattern is intended behavior: we train on 2 iterations, so odd-numbered iterations are more OOD than even-numbered ones. Note that the five variables become more similar to each other with each iteration if no value is incremented, which is why the performance on higher iterations goes up instead of down for the base variant.

Working with Noisy Input. In the above experiments, the inputs were all formatted in a way that is easy for the SMFR to work with because the data is already arranged into blocks in the appropriate way: each block represents one variable. This raises the question: what happens if the input is noisy? Can the SMFR manage to reconstruct the correct format? To test this we ran a second set of experiments with a modification: a permutation is applied to the input, and applied again to the state after each iteration. This permutation is random but fixed, and it permutes all neurons across all blocks. We apply the permutation to intermediate states as well as the input to ensure that any number of iterations can be solved with the same atomic rule. This random permutation had no effect on FNNs, which still give the same performance since they do not care about the order of neurons in a layer. SMFRs likewise did not suffer a reduction in their training accuracy (two iterations) but did lose performance on their 1-iteration OOD accuracy. However, their OOD accuracy was still better than that of FNNs on average. Notably, in one of the 30 trials we ran, the SMFR did achieve 100% OOD accuracy again, even though the inputs were permuted. This shows that SMFRs can learn to automatically arrange unstructured data into the block format they need. They can sometimes even do so perfectly, but not reliably. Investigating how to do this more reliably remains as future work.

6 DISCUSSION

Block-Operations in Larger Architectures. The success of our module on the synthetic tasks raises the question: do these useful properties remain once SMFRs are part of a larger system and applied to practical problems? This remains as future work. The focus of this paper is establishing the soundness of block-operations in general, not solving a practical task. SMFRs are not an architecture of their own but a building block that can serve as a drop-in replacement for FNNs in existing architectures. Since densely-connected layers are so fundamental, it is likely that replacing all FNNs inside large established architectures has side effects. After all, the FNN has been in use for decades, and many parts of the ML pipeline, such as the choice of optimizer and learning rate, have been conditioned on the assumption that everything uses FNNs.
It remains to be tested whether replacing FNNs with SMFRs wholesale immediately improves performance or runs afoul of harmful side effects that will require more research to get right. One challenge stands out in particular: transferring data through an architecture requires an uninterrupted pipeline, so all modules must allow for routing of blocks. Generally speaking, any fully connected layer in an architecture cannot efficiently learn to pass data through without changing it. All such layers therefore need to be modified. The easiest such modification is the use of residual connections throughout the network. However, ideally, modules could instead be adjusted to interact with blocks explicitly.

**Commutativity and Argument Selection.** Besides easier routing, our module also exhibits an inductive bias that helps it learn tasks that rely on commutative functions, or on selecting arguments from a set of options. FNNs cannot learn these kinds of tasks effectively as they are not permutation invariant. Commutativity and argument selection are fundamental concepts in mathematics and programming, respectively, so learning them more easily is likely to be helpful for these types of tasks.

**Interpretability.** In the course of our experiments we noticed that SMFRs were often easier to interpret than FNNs. The activations of the softmax and sigmoid neurons that control the routing correlate with the use of different subtasks. This is analogous to how Transformers can be inspected by visually highlighting how much attention the model pays to different words (Tenney et al., 2019). These findings are preliminary so far, but promising: they suggest that a correlation analysis could tell us which inputs are solved by the same subnetworks and which by different ones. See the appendix for details.

7 LIMITATIONS

**Hyperparameter and Architecture Optimization.** SMFRs add hyperparameters to the architecture, which raises the question: what are the optimal block size and the optimal width and depth of the SMFR module? Our experiments showed that simply increasing the depth and width does not always improve performance. Instead, architecture optimization will be necessary to get the best performance out of the model. This used to be a problem for FNNs as well, until He et al. (2016a) fixed this issue by introducing residual connections. We hope that a similar fix can be discovered for SMFRs as well and leave this as future work.

**Computational Overhead.** SMFRs take more time per training step than FNNs of equal size because they perform a large number of operations on small matrices. We have found empirically that they take more time than FNNs to converge on simpler tasks but less on harder tasks, where their useful inductive bias outweighs the computational overhead.

**Stability.** We have found that the SMFR architecture can sometimes get stuck in local optima because the gating weights and softmax values take on overly extreme values, which kills the gradient. We fixed this by adding a regularization loss: whenever the absolute value of the weight used for a softmax or sigmoid exceeds a threshold, we apply an MSE loss to that weight, pushing it back toward the threshold (see appendix for details). We have not noticed any other stability problems since then, but as with any new technique, it cannot be ruled out that other issues remain that will only reveal themselves on larger tasks.
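The exact formulation of this regularization is deferred to the appendix; the following is a minimal sketch of such a saturation penalty, with the threshold value and the way it is weighted against the task loss being our own assumptions.

```python
import torch

def saturation_penalty(logits, threshold=5.0):
    """MSE penalty on softmax/sigmoid pre-activations whose magnitude exceeds
    `threshold`, pushing them back toward the threshold so the gates do not
    saturate and kill the gradient."""
    excess = (logits.abs() - threshold).clamp(min=0.0)
    return (excess ** 2).mean()

# usage sketch: total_loss = task_loss + reg_weight * saturation_penalty(gate_logits)
```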
8 CONCLUSION We introduced the idea of block-operations, a reconceptualization of network activation tensors that aggregates neurons into larger semantic units. Based on this idea, we presented the SMFR, a module that replaces the FNN, with an inductive bias to learn routing and modular decomposition in a neural network more easily. Our experiments confirmed that it can learn to route data more effectively, sometimes even achieving 100% OOD accuracy in tasks that require reusing subnetworks. Our module also proved effective at handling tasks that required understanding commutativity and function argument selection. On an algorithmic task it learned a close approximation of the underlying logic, which is helpful for generalization. Our module can be used as a replacement for FNNs and we expect that it will help for any task that benefits strongly from compositionality. We emphasize that the concept of block-operations that underpins our module design had exactly the effects we predicted. This suggests that block-operations in general may be of interest to other researchers in neural architecture design. REFERENCES Subutai Ahmad and Luiz Scheinkman. How can we be so dense? the benefits of using highly sparse representations. *arXiv preprint arXiv:1903.11257*, 2019. Thomas Bachlechner, Bodhisattwa Prasad Majumder, Henry Mao, Gary Cottrell, and Julian McAuley. Rezero is all you need: Fast convergence at large depth. In *Uncertainty in Artificial Intelligence*, pp. 1352–1361. PMLR, 2021. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014. Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. Systematic generalization: What is required and can it be learned? *arXiv preprint arXiv:1811.12889*, 2018. David Barrett, Felix Hill, Adam Santoro, Ari Morcos, and Timothy Lillicrap. Measuring abstract reasoning in neural networks. In *International conference on machine learning*, pp. 511–520. PMLR, 2018. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. *Advances in neural information processing systems*, 28, 2015. Róbert Csordás, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Are neural nets modular? inspecting functional modularity through differentiable weight masks. *arXiv preprint arXiv:2010.02066*, 2020. Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. The neural data router: Adaptive control flow in transformers improves systematic generalization. *arXiv preprint arXiv:2110.07732*, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. *arXiv preprint arXiv:1909.10893*, 2019. Klaus Greff, Sjoerd Van Steenkiste, and Jürgen Schmidhuber. On the binding problem in artificial neural networks. *arXiv preprint arXiv:2012.05208*, 2020. Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, and Yulin Wang. Dynamic neural networks: A survey. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(11):7436–7456, 2021. Fengxiang He, Tongliang Liu, and Dacheng Tao. Why resnet works? residuals generalize. 
*IEEE transactions on neural networks and learning systems*, 31(12):5349–5362, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In *Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV* 14, pp. 630–645. Springer, 2016b. Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L McClelland, and Adam Santoro. Environmental drivers of systematicity and generalization in a situated agent. *arXiv preprint arXiv:1910.00571*, 2019. Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. Compositionality decomposed: How do neural networks generalise? *Journal of Artificial Intelligence Research*, 67:757–795, 2020. Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. *Advances in neural information processing systems*, 28, 2015.
pbLjYjjWqd
Sec 4.4 observed that most clients suffer from the dominant-class issue. However, what is the loss for local clients when they do not fit well across the different classes? Also, are the dominant-class samples "easy samples"?
FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models Anonymous authors Paper under double-blind review Abstract Pre-trained language models (PLM) have revolutionized the NLP landscape, achieving stellar performances across diverse tasks. These models, while benefiting from vast training data, often require fine-tuning on specific data to cater to distinct downstream tasks. However, this data adaptation process has inherent security and privacy concerns, primarily when leveraging user-generated, device-residing data. Federated learning (FL) provides a solution, allowing collaborative model fine-tuning without centralized data collection. However, applying FL to finetune PLMs is hampered by challenges, including restricted model parameter access, high computational requirements, and communication overheads. This paper introduces Federated Black-box Prompt Tuning (FedBPT), a framework designed to address these challenges. FedBPT does not require the clients to access the model parameters. By focusing on training optimal prompts and utilizing gradient-free optimization methods, FedBPT reduces the number of exchanged variables, boosts communication efficiency, and minimizes computational and storage costs. Experiments highlight the framework’s ability to drastically cut communication and memory costs while maintaining competitive performance. Ultimately, FedBPT presents a promising solution for efficient, privacy-preserving fine-tuning of PLM in the age of large language models. 1 Introduction Large language models (LLM) have shown increasing power on various NLP tasks (Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; Fedus et al., 2022; Zhang et al., 2021; Zeng et al., 2021; Sun et al., 2021; Qiu et al., 2020). Typically, these models are trained on a diverse range of text from books, articles, and websites to gain a broad understanding of human language and are known as the pre-trained language models (PLMs). However, task-specific data is often required to adapt PLMs to perform specific tasks or be more accurate in real-world scenarios. This fine-tuning process relies heavily on user-generated data on devices, providing a wealth of contextual insights and nuanced use cases that reflect actual human interaction and needs. In practice, it is challenging to use these devices and data securely. Data needs to be collected and stored for training, but exchanging and storing sensitive data carries security risks and privacy concerns. To overcome the issue of data isolation, federated learning (FL) can be applied to enable numerous devices to collaboratively finetune PLMs over decentralized data while preserving data privacy (McMahan et al., 2017; Sun et al., 2020). Although fine-tuning PLMs through FL presents promising opportunities, three challenges constrain their real-world application. Especially for LLMs, these challenges include (1) devices’ limited access to the PLM parameters, (2) computational and storage costs for local clients, and (3) communication overhead in the FL system. In the real world, devices utilize LLMs primarily by invoking APIs provided by LLM services (e.g., ChatGPT (OpenAI, 2022, 2023) or NeMo (Kuchaiev et al., 2019)). The clients cannot access the model parameters, thereby being unable to conduct local training. 
Additionally, even if the clients could access the model parameters, it is impractical for devices with limited resources to conduct local PLM fine-tuning, which is extremely memory-intensive and brings high computational overhead. Moreover, fine-tuning PLMs through FL requires the clients and server to frequently exchange model parameters or gradients, usually in the scale of millions or even billions. Such intensive communication cost is unfeasible for commercial edge devices with limited communication bandwidth. To this end, existing works (Sun et al., 2022a; Chen et al., 2022b; Zhao et al., 2023; Xu et al., 2023) apply parameter-efficient fine-tuning (PEFT) methods of PLMs to FL to reduce resource costs. Effective PEFT methods include adapter tuning (Houlsby et al., 2019), prefix tuning (Li & Liang, 2021), LoRA (Hu et al., 2021) and BitFit (Zaken et al., 2021). These techniques primarily freeze most parameters of PLMs and update only a few additional parameters, which can reduce communication costs significantly. However, these PEFT methods still require the clients to access model parameters and gradients for local training. Even if the computational cost could be reduced, these gradient-based PEFT methods requiring back-propagation are still unfeasible for most edge devices with limited resources, such as mobile phones and AR headsets. To solve these challenges simultaneously, we propose a new framework called Federated Black-box Prompt Tuning (FedBPT) as shown in Fig. 1. The goal of FedBPT is to train an optimal prompt to improve the performance of the frozen PLMs. The clients and the server exchange prompts rather than model parameters, which reduces the communicated variables from the scale of millions or billions to only hundreds, improving the communication efficiency significantly. The clients in FedBPT adopt a gradient-free optimization method rather than gradient-based methods to conduct local training, which frees the clients from being required to access the model parameters. In addition, only forward-propagation without back-propagation is needed for local training, which can reduce the computational and storage costs for both the devices holding a model and the LLM server that provides inference service APIs. We conducted experiments on multiple datasets using SOTA PLMs. The results show that FedBPT reduces the communication cost by more than $500k \times$ while achieving comparable results with the baselines that require model parameter access and back-propagation for optimization. FedBPT can also reduce the memory footprint by more than $3 \times$ without applying any additional efficient inference technique. By proposing FedBPT, we offer a solution to break down data silos in the era of LLMs without the limiting factors of requiring full model access, large communication bandwidth, and device compute capacity. We summarize our contributions as follows: - We present three challenges in applying FL to adapt PLMs in the real world, including the requirement of model access, communication cost, and on-device compute capacity. - We propose a federated black-box prompt tuning framework (FedBPT) that enables the devices to adapt PLMs in the real world collaboratively by solving the above-mentioned challenges simultaneously. - We evaluate FedBPT on multiple datasets with SOTA PLMs. FedBPT achieves comparable accuracy with the gradient-based methods that require clients to access model parameters while reducing communication and memory costs significantly. 
2 RELATED WORKS 2.1 FEDERATED LEARNING Federated learning (FL) (Konečný et al., 2016; McMahan et al., 2017; Sun et al., 2022b) is a prominent distributed learning strategy, particularly beneficial for tasks that prioritize privacy. However, its application faces challenges due to the non-IID nature of distributed datasets. The heterogeneous data distribution across devices compromises accuracy relative to traditional centralized training. Numerous research efforts (Kairouz et al., 2021; Zhao et al., 2018; Chai et al., 2020; Li et al., 2018) have sought to mitigate this performance degradation. Recent works (Chen et al., 2022a; Nguyen et al., 2022) demonstrate that fine-tuning the pre-trained models through FL suffers less from the non-IID issue. Empirical research by Weller et al. (2022) suggests that Pretrained Language Models (PLMs) can diminish the effects of non-IID data and bridge the accuracy discrepancy with centralized training. Their results show that when applying PLMs, even the vanilla FedAvg can achieve comparable model performance with centralized training. These works indicate that FL presents a promising avenue for fine-tuning PLMs by leveraging user data while upholding privacy standards. However, PLMs, especially large-scale ones, introduce considerable communication overheads in FL scenarios, making federated training cumbersome and often unsuitable for practical applications. Additionally, the training of PLMs typically demands ample labeled data to ensure satisfactory accuracy – a condition that may be unattainable for individual device users. It is also noteworthy that many local devices are constrained by limited computational capacity and storage, making the local training of PLMs a challenging endeavor. Diverging from these studies, our work delves into adapting PLMs within FL, especially under tight resource constraints. 2.2 PROMPT-BASED LEARNING Prompt-based learning has gained significant attention in the realm of LLMs. Its essence is rooted in leveraging minimal examples or specific cues to guide a PLM toward the desired output. This contrasts with traditional supervised learning, where a model is trained explicitly using extensive labeled data. OpenAI’s GPT-3 (Brown et al., 2020) marked a pivotal turn in the exploration of prompt-based learning. The sheer scale of GPT-3 made it possible to produce relevant outputs with carefully crafted prompts (Brown et al., 2020; Lester et al., 2021) without the need for task-specific model fine-tuning. However, manually designed prompts still suffer a performance gap compared with a fine-tuned model (Brown et al., 2020; Schick & Schütze, 2020; Gao et al., 2020; Sun et al., 2022c). Recent works demonstrate that the prompt does not have to represent natural language. It can also be optimized efficiently in continuous space with gradient descent (Li & Liang, 2021; Hambardzumyan et al., 2021; Qin & Eisner, 2021; Liu et al., 2023; Zhong et al., 2021; Liu et al., 2021). In the case of only tuning the continuous prompt while keeping the parameters of large PLMs untouched, one can retain the efficient training benefits while matching the performance of full model tuning. Prompt tuning (Lester et al., 2021; Li & Liang, 2021) was proposed to fine-tune a continuous vector concatenated to the input embeddings. Unlike manual prompt design conducted at the vocabulary level, prompt tuning optimizes the prompt in the embedding space. Based on this idea, p-tuning (Liu et al., 2021/2022/2023) was proposed to improve the performance further. 
Similar to prompt tuning, p-tuning also learns concrete prompts in the embedding space. However, in p-tuning, an additional LSTM model is required to predict token embeddings. 3 PRELIMINARY: BLACK-BOX PROMPT TUNING Common language understanding tasks can be formulated as a classification task to predict for a batch of input texts \( X \) the labels \( Y \). Prompt tuning is to train a continuous prompt vector \( p \in \mathbb{R}^D \) such that the prediction performance can be improved when the model is fed the optimal prompt vector \( p^* \) together with the input \( X \). The objective of prompt tuning can be formulated as \[ p^* = \arg \min_{p \in P} L(f(p; X), Y), \] where \( f(\cdot) \) is the PLM inference API, \( L(\cdot) \) is the loss function and \( P \) is some search space of interest. To optimize \( p \), gradient-based methods (e.g., SGD) can be applied by conducting back-propagation of the model \( f \). Recently, a gradient-free optimization, Black-Box Tuning (BBT) (Sun et al., 2022d), was also proposed to optimize the prompt \( p \) without back-propagation. Based on the observation that large-scale PLMs have a low intrinsic dimensionality [Aghajanyan et al., (2020); Qin et al., (2021)], BBT optimizes \( z \in \mathbb{R}^d \) in a much smaller subspace (\( d \ll D \)) and uses a random projection matrix \( A \in \mathbb{R}^{D \times d} \) to project \( z \) on the original prompt space \( P \). The objective can be formulated as \[ z^* = \arg \min_{z \in Z} L(f(Az; X), Y). \] To optimize \( z \), BBT adopts a gradient-free optimizer CMA-ES (Covariance Matrix Adaptation Evolution Strategy) [Hansen, (2016)], a widely used evolutionary algorithm for non-convex black-box optimization in the continuous domain. CMA-ES maintains a parameterized search distribution, i.e., a multivariate normal distribution. In each iteration, CMA-ES samples a population of new query solutions from the multivariate normal distribution as \[ z_{t+1,i} \sim m_t + \sigma_t N(0, C_t), \] where \( i = 1, \ldots, \lambda \) and \( \lambda \) is the population size. \( m_t \in \mathbb{R}^d \) and \( C_t \in \mathbb{R}^{d \times d} \) are the mean vector and covariance matrix of the search distribution at iteration step \( t \), respectively. \( \sigma_t \) is the standard deviation that controls the step length. \( m_t, C_t \) and \( \sigma_t \) are updated by maximizing the likelihood of successful steps, which are the steps with lower loss values (cf. Hansen (2016) for more details). 4 METHOD To solve the challenges of model access, communication cost, and computational cost simultaneously, we propose Federated Black-box Prompt Tuning method (FedBPT) to train an optimal prompt in a federated fashion by adapting BBT to federated learning. Unlike FL methods communicating model parameters, the clients in FedBPT train and communicate with the server prompts rather than the model parameters, which is communication efficient. To optimize prompts, the clients only need to conduct inference rather than back-propagation, significantly reducing the computational cost and memory usage. The FL server aggregates the local prompts uploaded by the client and is completely agnostic to the employed LLM architecture. During training, the clients can treat the model as a black box: neither the clients nor the server requires access to the PLM parameters. 
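The client-side black-box search that FedBPT builds on (the CMA-ES loop of Section 3) can be illustrated with a short sketch. This is our own illustration: it assumes the `cma` package (pycma, Hansen's reference CMA-ES implementation), `query_api` is a hypothetical inference-only endpoint that returns the client's loss on its local data, the dimensions are illustrative, and warm-starting from the server's full distribution statistics $(z_t, C_t, \sigma_t)$ is omitted for brevity.

```python
import numpy as np
import cma  # pycma: Hansen's reference CMA-ES implementation (an assumption here)

D, d = 51200, 500                        # prompt dimension and intrinsic dimension (illustrative)
A = np.random.randn(D, d) / np.sqrt(d)   # fixed random projection shared by server and clients

def local_loss(z, query_api, texts, labels):
    """Black-box objective: query the frozen PLM with prompt p = A z and
    return the loss on this client's data; no gradients are ever needed."""
    prompt = A @ z
    return query_api(prompt, texts, labels)   # hypothetical inference-only endpoint

def client_bbt(z0, sigma0, query_api, texts, labels, iters=8, popsize=5):
    """One communication round of local black-box prompt tuning with CMA-ES."""
    es = cma.CMAEvolutionStrategy(z0, sigma0, {'popsize': popsize})
    for _ in range(iters):
        candidates = es.ask()                          # sample around the current mean
        losses = [local_loss(np.asarray(z), query_api, texts, labels)
                  for z in candidates]
        es.tell(candidates, losses)                    # update mean, covariance, step size
    return es.result.xbest, es.result.fbest            # local solution and its loss
```

In FedBPT, each client would upload the returned vector and loss value to the server, which aggregates them as described in Section 4.3.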
4.1 PROBLEM FORMULATION Suppose there are \( K \) clients in FL, and each client hosts a private dataset \( D^k = (X^k, Y^k) \) consisting of \( n^k \) samples \( \{x_i^k, y_i^k\}_{i \in [n^k]} \). Given a global projected matrix \( A \) in Eq. (2), the clients collaboratively train an optimal \( z \) with the objective to solve: \[ z^* = \arg \min_z \sum_{k \in [K]} \frac{n^k}{\sum_{k \in [K]} n^k} F^k(z), \] where \( F^k(z) \) is the loss of client \( k \): \[ F^k(z) = L(f(Az; X^k), Y^k) = \sum_{i \in [n^k]} L(f(Az; x_i^k), y_i^k). \] 4.2 OVERVIEW OF FEDBPT In FedBPT, the clients optimize local objectives based on BBT. Thus, unlike previous FL works, FedBPT aggregates the CMA-ES parameters applied by the clients to conduct BBT rather than the deep learning models. At the start of the training, the server initializes and distributes the projection matrix \( A \) to the clients. Then, the server and clients will freeze and apply \( A \) to calculate the prompt with the received \( z \). In each communication round (e.g., the \( t \)-th round), the server first sends the up-to-date global CMA-ES parameters, including the mean vector \( z_t \), covariance matrix \( C_t \) and the search step \( \sigma_t \) to clients. Then, the clients (e.g., the \( k \)-th client) conduct BBT to optimize the received CMA-ES parameters by minimizing their local loss, i.e. Eq. (5). After local optimization, the clients upload their locally optimal parameters and the local loss value \( F^k(z_{t+1}^k) \) to the server. After the server receives all CMA-ES parameters, it aggregates the local parameters and updates the global CMA-ES parameters for the next communication round. After the training is completed (e.g., $T$ communication rounds), the mean vector of the global CMA-ES $z_T$ will be adopted to compute the optimal prompt $p_T = Az_T$. The primary distinction between FedBPT and earlier FL algorithms lies in the use of BBT for optimization. Yet, integrating BBT into FL algorithms, such as FedAvg, is not straightforward. Simply combining BBT and FedAvg cannot achieve decent performance. The first challenge is the prompt overfitting problem caused by data distribution shifts across clients, which is common under non-IID settings. The second challenge is how to aggregate CMA-ES parameters on the server effectively. Unlike aggregating deep learning models, directly averaging CMA-ES parameters, mostly consisting of distribution statistics, is not feasible. We will introduce these challenges in detail and our solutions in the following sections. ### 4.3 Server-level CMA-ES Algorithm ![Figure 2: Comparison of aggregation between directly using FedAvg and FedBPT. FedAvg derives the global distribution by directly averaging the local distribution statistics. In FedBPT, the server applies CMA-ES to derive the global prompt distributions with the awareness of the evaluation results of the uploaded local distributions.](image) After receiving local CMA-ES parameters, the server conducts aggregation on the server to derive a global search distribution that can guide the clients’ search in the next communication round. Directly averaging the models uploaded by the clients following FedAvg is not effective for FedBPT. In FedBPT, the clients locally optimize the CMA-ES parameters parameterized by multivariate normal distribution statistics. Directly averaging the standard deviation and covariance matrices via FedAvg cannot derive an optimal global search distribution, as is shown in Sec. 5.2. 
In addition, CMA-ES is a random search algorithm and cannot guarantee convergence to a local optimum in the way gradient-based optimization algorithms do. Directly averaging optimal and inferior local search results makes it difficult to achieve a global optimum. To derive an optimal global search distribution on the server, we adopt a server-level CMA-ES algorithm to update the search distribution statistics based on the local search results. The comparison between aggregation by directly applying FedAvg and aggregation in FedBPT is shown in Fig. 2.

The intuition of the server-level CMA-ES is to treat the local search results as a set of solutions sampled by the server. The server then evaluates these sampled solutions and updates the search distribution for the next communication round. Suppose a set of clients $S_t$ participate in training in the $t$-th communication round. The server-level CMA-ES takes the received mean vectors $\{z_{t+1}^k\}_{k \in S_t}$ as the sampled solutions and the local loss values $\{F^k(z_{t+1}^k)\}_{k \in S_t}$ as the corresponding search-step losses. To update the CMA-ES parameters, the search step length is required. However, the server-level "sampling" is conducted through multiple local search steps, and the server-level search step length $\sigma_t$ is intractable. Directly applying a local search step length causes the model to diverge. We provide a theoretical explanation for this divergence in Appendix A. We also theoretically derive a corrected search step length $\sigma'_t$ for the server, formulated as

$$\sigma'_t = 2 \sum_{k \in S'_t} \sum_{j=1}^{I} (\sigma_{t,j}^k)^2 / (|S_t| \cdot \lambda_k),$$

where $S'_t$ is the set of $\lfloor |S_t|/2 \rfloor$ clients that upload $z_{t+1}^k$ with the lowest local loss values $F^k(z_{t+1}^k)$, $\sigma_{t,j}^k$ is the step length of client $k$'s $j$-th local search iteration in communication round $t$, $I$ is the number of local search iterations, and $\lambda_k$ is the local search population size of client $k$. The derivation can be found in Appendix A.

4.4 Local Black-box Prompt Tuning against Overfitting

In practice, client data are non-IID, which causes label skew across clients (Li et al., 2018). The server-level CMA-ES evaluates the clients' search results based on the uploaded local loss values. Such label skew makes local searches overfit to their local data distributions by achieving low local loss values, and makes it difficult for the server to evaluate their performance on the global data distribution. This overfitting issue is more serious when adopting BBT for local training. Gradient-based optimizations (e.g., SGD) incorporate both data and label information into the gradient used for updating. In contrast, when using Eq. (2) as the local training objective, BBT modifies the CMA-ES parameters based primarily on how close predictions are to the labels, while using the data only indirectly. A practical label-skew case is one in which most of a client's data falls into a single class (Tang et al., 2022). In this case, a local CMA-ES might learn a prompt that triggers the frozen PLM to generate predictions corresponding to the dominant class, regardless of the input. To demonstrate this issue, we conduct experiments on AG's NEWS (OpenAI), a topic classification dataset with four data classes. We simulate an FL client to train prompts for a pre-trained RoBERTa (Liu et al., 2019) model using BBT.
The simulated client holds data following the Dirichlet distribution, commonly applied in previous FL papers (Hsu et al., 2019; Tang et al., 2022) for non-iid setting, and more than 90% of its data are in class one. The confusion matrix evaluated with the prompt trained by this client is shown in Fig. 3. It is shown that all of the data will be classified as class one after applying the prompt trained by this client, which demonstrates the problem of overfitting caused by local BBT. To mitigate this overfitting issue, we propose a perturbation method to regularize the local training objective and avoid CMA-ES selecting overfitting prompts. For a sample \( \{x_i^k, y_i^k\} \) of client \( k \), we randomly generate a binary mask \( m_i^k \) with an artificial rate \( r_p \) of elements that are zeros. We then randomly sample a sentence \( \hat{x}_i^k \) from the vocabulary with the same length of \( x_i^k \) as shown in Fig. 4, and the local training objective for the \( k \)-th client is formulated as \[ z^* = \arg \min_{z \in Z} \sum_{i \in [n_k]} L(f(Az; x_i^k), y_i^k) - L(f(Az; x_i^k \odot m_i^k + \hat{x}_i^k \odot (1 - m_i^k)), y_i^k). \] The intuition is that given a perturbed input, the PLM should not be confident of generating a correct prediction even when fed an optimal prompt. Applying server-level CMA-ES and local perturbance method, the detailed algorithm of FedBPT can be found in Appendix B. 5 Experiments 5.1 Experimental Setup Datasets and Models We conduct experiments on three language understanding datasets: (1) The SST-2 (Socher et al., 2013) is a popular sentiment analysis dataset. The SST-2 dataset consists of sentences taken from movie reviews along with their corresponding sentiment labels. Each sentence is annotated as either "positive" or "negative" based on the sentiment conveyed. (2) The Yelp polarity (yelp) is another sentiment analysis dataset, which consists of reviews on Yelp along with their corresponding sentiment labels of "positive" or "negative". (3) The AG’s News dataset (OpenAI) is a large-scale topic classification dataset for the task of categorizing news articles into one of four predefined topic classes. The dataset is based on the AG’s corpus, a collection of news articles from various sources. We evaluate our FedBPT on two PLMs: (1) RoBERTa (Liu et al., 2019) is a variation of the BERT model. It is pre-trained using a variant of the masked language modeling (MLM) objective, whose objective is to predict masked tokens in a given text sequence. In this paper, we apply the version of 356 million parameters. (2) Llama 2 (Touvron et al., 2023) is a SOTA PLM released by Meta, which is a collection of foundation language models ranging from 7 billion to 70 billion parameters. Llama 2 models are trained on 2 trillion tokens and have double the context length than Llama 1. In this paper, we evaluate FedBPT on the model with 7 billion parameters. **Baselines** We compare our black-box tuning FL framework with several gradient-based and gradient-free methods. For gradient-based methods, we compare with three baselines: (1) **FedAvg** (McMahan et al., 2017) is the most widely-used algorithm for FL. In FedAvg, the clients fine-tune the whole model and transmit the updated model parameters. (2) **FedPrompt** (Zhao et al., 2023) is the SOTA work of applying FL to adapt the PLM with high communication efficiency. The clients in FedPrompt learn and transmit prompts, which reduces the communication cost significantly. 
(3) **FedP-tuning** is built on FedPrompt by replacing the local prompt tuning with p-tuning (Liu et al., 2022), which is more advanced and has been shown to achieve higher performance on downstream tasks. For gradient-free methods, we consider three baselines: (1) **Manual Prompt** follows the templates and label words in Appendix C to conduct zero-shot evaluation. (2) **In-context Learning**: following Brown et al. (2020), we randomly select up to 5 training samples and concatenate them with the input texts. (3) **FedAvg-BBT** is a baseline that simply combines BBT (Sun et al., 2022d) and FedAvg. We build this baseline as part of an ablation study to show the effectiveness of our designed server-level prompt tuning.

**FL setup & Hyperparameters** We follow FedPrompt (Zhao et al., 2023) to design our FL setup. The system has ten clients, and all of the clients participate in training in each round. Considering the real world, where many users possess only a limited amount of labeled data, we conduct experiments under few-shot settings. We randomly select 40 samples for each class to construct a training set $D_{train}$. We conduct experiments in both IID and non-IID settings. For IID settings, we split the training dataset $D_{train}$ evenly. For non-IID settings, we follow previous works and split the data following the Dirichlet distribution parameterized by $\alpha$. We maintain a default setting of $\alpha = 1.0$ throughout our experiments. The initial search step length $\sigma_1$ is 1. We set the local iteration count $I$ to 8 and the local population $\lambda_k$ to 5 for all clients.

### 5.2 Experimental Results

| Method | Trainable Params. | SST-2 Acc. (%) IID / non-IID | AG's News Acc. (%) IID / non-IID | Yelp Acc. (%) IID / non-IID |
|---|---|---|---|---|
| *Gradient-based methods* | | | | |
| FedPrompt | 51K | 90.25 / 85.55 | 87.72 / 85.62 | 91.44 / 91.47 |
| FedP-tuning | 15M | 90.6 / 87.16 | 88.17 / 86.11 | 93.61 / 91.63 |
| FedAvg | 355M | 84.7 / 82.4 | 77.43 / 76.54 | 88.25 / 88.03 |
| *Gradient-free methods* | | | | |
| Manual prompt | 0 | 83.6 | 75.75 | 88.37 |
| In-Context Learning | 0 | 79.7 | 76.96 | 89.65 |
| FedAvg-BBT | 500 | 84.45 / 84.17 | 76.54 / 76.46 | 89.64 / 89.72 |
| FedBPT | 500 | 87.16 / 86.47 | 82.36 / 81.03 | 91.12 / 90.8 |

Table 1: Results under both IID and non-IID settings with RoBERTa as the backbone model. Manual Prompt and In-Context Learning are reported with a single accuracy per dataset.

**Results of RoBERTa.** The results when adopting RoBERTa as the PLM are shown in Tab. 1. Compared with the gradient-based methods, FedBPT achieves comparable or even higher accuracy with drastically fewer trainable parameters. Specifically, FedBPT achieves an accuracy 0.92% higher than FedPrompt and only 0.69% lower than the best gradient-based baseline, FedP-tuning, for SST-2 under the non-IID setting. Meanwhile, FedBPT reduces the trainable parameters by more than $100\times$ and $30{,}000\times$ compared with FedPrompt and FedP-tuning, respectively. The trainable parameters must be transmitted in each communication round, which means that FedBPT reduces the communication cost of one device in one round from 120MB to only 4KB compared with FedP-tuning. For AG's News and Yelp, FedBPT also achieves comparable accuracy under IID and non-IID settings. Notably, FedAvg cannot improve the accuracy under either IID or non-IID settings.
This demonstrates that directly fine-tuning LLMs is not feasible in realistic FL settings where the clients hold limited labeled samples. We document the memory usage of one client for the different methods on SST-2 in Tab. 2. FedBPT reduces memory costs by more than $3\times$ compared with gradient-based methods. Compared with the gradient-free baselines, FedBPT achieves higher accuracies under IID and non-IID settings for all the datasets. FedBPT achieves accuracies 2.3%, 4.57%, and 1.08% higher than FedAvg-BBT under non-IID settings for SST-2, AG's News, and Yelp, respectively. FedAvg-BBT achieves only a limited accuracy improvement over manual prompts for all the datasets, which demonstrates that simply combining FedAvg and BBT cannot achieve decent performance. The results show that gradient-based methods outperform gradient-free baselines significantly in accuracy, which is expected. However, gradient-based methods require model parameter access and back-propagation, which is not realistic in many FL scenarios; in such cases only the gradient-free methods are feasible.

| Method | Mem. |
|-----------------|------|
| FedPrompt | 5.8 GB |
| FedP-tuning | 6.1 GB |
| FedAvg | 7.2 GB |
| In-context Learning | 2.1 GB |
| FedBPT | 1.8 GB |

Table 2: Memory footprint on SST-2 when applying RoBERTa.

Figure 5: The results under IID and non-IID settings with Llama 2 as the backbone model.

| Method | FedPrompt | FedP-tuning | FedAvg | Manual | FedAvg-BBT | FedBPT |
|-----------------|-----------|-------------|--------|--------|------------|--------|
| Trainable Params. | 205K | 235M | 7B | 0 | 500 | 500 |

Table 3: Number of trainable parameters when adopting Llama 2 as the backbone model.

**Results of Llama 2.** The number of trainable parameters when applying Llama 2 as the PLM is shown in Tab. 3. The trade-off between the communication cost of one device in one round and model accuracy is shown in Fig. 5. We make the following observations: (1) For Llama 2, FedBPT improves accuracy significantly compared with the gradient-free baselines and achieves accuracies comparable to the gradient-based methods in most settings. Specifically, FedBPT improves accuracy by more than 12%, 11%, and 13% for SST-2, AG's News, and Yelp compared with the manual prompts under non-IID settings, respectively. FedBPT even achieves slightly higher accuracy than FedPrompt under the AG's News IID setting, while the gradient-free baselines experience declines in accuracy of over 15%. (2) FedBPT reduces the number of trainable parameters compared with gradient-based methods even more significantly than when adopting RoBERTa. Specifically, compared with FedP-tuning, FedBPT reduces the trainable parameters from 235M to only 500, which means that FedBPT reduces the communication cost of one device in one round from nearly 2GB to 4KB. In summary, FedBPT achieves much higher accuracy than the gradient-free baselines and comparable accuracy to the gradient-based methods for both RoBERTa and Llama 2. In addition, the number of trainable parameters does not increase when the model scale grows. The reason is that FedBPT adopts a projection matrix to project the embedding space to a low-dimensional space, which enables the clients to conduct CMA-ES learning over a low-dimensional vector.
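As a quick check of the communication figures quoted above, here is our own back-of-the-envelope arithmetic; the 8-bytes-per-value assumption is ours, and it is consistent with the reported 4KB and nearly-2GB numbers.

```python
BYTES_PER_VALUE = 8  # assuming values are transmitted as 64-bit floats

# FedBPT: 500 trainable prompt parameters (the low-dimensional vector z),
# independent of whether the backbone is RoBERTa-large or Llama-2-7B.
print(500 * BYTES_PER_VALUE)            # 4000 bytes, i.e. ~4 KB per client per round

# FedP-tuning with Llama 2: ~235M trainable parameters exchanged per round.
print(235_000_000 * BYTES_PER_VALUE)    # ~1.88e9 bytes, i.e. nearly 2 GB per client per round
```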
This scalability is essential considering the rapid growth of the PLM parameter scale: the clients in FedBPT do not pay additional computational or storage costs when the FL system adopts larger PLMs.

### 5.3 Ablation Studies

**Local binary mask rate ($r_p$).** We study the effect of the rate of zeros in the binary masks $m_i^k$ that local devices apply to perturb inputs and avoid overfitting. We conduct experiments on SST-2 and AG's News under the non-IID setting for RoBERTa. As introduced in Sec. 4.4, a larger $r_p$ means that more tokens in a sentence will be randomly replaced. We set $r_p$ from 0% to 80%, and the results are shown in Tab. 4. Applying the random replacement improves the global accuracy compared with simply adopting vanilla BBT for local training (i.e., $r_p = 0$). This illustrates the effectiveness of our designed random replacement in mitigating the local overfitting challenge.

| $r_p$ | 0% | 20% | 40% | 60% | 80% |
|---|---|---|---|---|---|
| SST-2 Acc. (%) | 84.86 | 85.21 | 86.03 | 86.47 | 86.12 |
| AG's News Acc. (%) | 78.28 | 80.92 | 81.03 | 80.75 | 80.83 |

Table 4: Results of FedBPT adopting RoBERTa with different $r_p$ under non-IID settings.

**Local population size ($\lambda_k$).** In each iteration of local search, the clients (e.g., the $k$-th client) sample $\lambda_k$ candidates for evaluation. We study the effect of the local population $\lambda_k$ on the model accuracy. We set $\lambda_k$ from 5 to 20 and conduct experiments on SST-2 and AG's News for RoBERTa. The results are shown in Fig. 6. The model accuracy of FedBPT is not sensitive to $\lambda_k$. Thus, in real applications, $\lambda_k$ can be set relatively small to reduce computational cost.

Figure 6: Results of FedBPT adopting RoBERTa with different $\lambda_k$.

### 6 Conclusion

We introduced an FL framework, FedBPT, allowing clients to adapt black-box PLMs efficiently using gradient-free optimization. This approach eliminates the need for clients to access model parameters and only requires forward propagation for local training, thus lowering computational and storage demands for devices and LLM service providers. Evaluations on several datasets with SOTA PLMs revealed that FedBPT matches the accuracy of gradient-based methods with markedly less communication and memory overhead.

REFERENCES

Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. *arXiv preprint arXiv:2012.13255*, 2020.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.

Zheng Chai, Yujing Chen, Liang Zhao, Yue Cheng, and Huzefa Rangwala. Fedat: A communication-efficient federated learning method with asynchronous tiers under non-iid data. *arXiv preprint*, 2020.

Hong-You Chen, Cheng-Hao Tu, Ziwei Li, Han-Wei Shen, and Wei-Lun Chao. On pre-training for federated learning. *arXiv preprint arXiv:2206.11488*, 2022a.

Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, and Haozhao Wang. Fedtune: A deep dive into efficient federated fine-tuning with pre-trained transformers. *arXiv preprint arXiv:2211.08025*, 2022b.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding.
*arXiv preprint arXiv:1810.04805*, 2018. William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *The Journal of Machine Learning Research*, 23(1):5232–5270, 2022. Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. *arXiv preprint arXiv:2012.15723*, 2020. Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. Warp: Word-level adversarial reprogramming. *arXiv preprint arXiv:2101.00121*, 2021. Nikolaus Hansen. The cma evolution strategy: A tutorial. *arXiv preprint arXiv:1604.00772*, 2016. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pp. 2790–2799. PMLR, 2019. Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. *arXiv preprint arXiv:1909.06335*, 2019. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. *Foundations and Trends® in Machine Learning*, 14(1–2):1–210, 2021. Jakub Konečný, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. *arXiv preprint arXiv:1610.05492*, 2016. Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, Patrice Castonguay, Mariya Popova, Jocelyn Huang, Christopher Parisien, and Erich Elsen. Nemo: a toolkit for building ai applications using neural modules. In *ASRU*, 2019. URL: https://arxiv.org/abs/1909.09577. Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*, 2021. Tian Li, Anit Kumar Sahu, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. On the convergence of federated optimization in heterogeneous networks. *arXiv preprint arXiv:1812.06127*, 2018.
H8CtXin7mZ
In the context of traditional multigrid approaches, whether used as a solver or a preconditioner and whether geometric or algebraic, imposing (nonhomogeneous) Neumann BCs can be challenging and is usually handled in an ad-hoc manner.
A NEURAL-PRECONDITIONED POISSON SOLVER FOR MIXED DIRICHLET AND NEUMANN BOUNDARY CONDITIONS Anonymous authors Paper under double-blind review ABSTRACT We introduce a neural-preconditioned iterative solver for Poisson equations with mixed boundary conditions. The Poisson equation is ubiquitous in scientific computing: it governs a wide array of physical phenomena, arises as a subproblem in many numerical algorithms, and serves as a model problem for the broader class of elliptic PDEs. The most popular Poisson discretizations yield large sparse linear systems. At high resolution, and for performance-critical applications, iterative solvers can be advantageous for these—but only when paired with powerful preconditioners. The core of our solver is a neural network trained to approximate the inverse of a discrete structured-grid Laplace operator for a domain of arbitrary shape and with mixed boundary conditions. The structure of this problem motivates a novel network architecture that we demonstrate is highly effective as a preconditioner even for boundary conditions outside the training set. We show that on challenging test cases arising from an incompressible fluid simulation, our method outperforms state-of-the-art solvers like algebraic multigrid as well as some recent neural preconditioners. 1 INTRODUCTION The solution of linear systems of equations involving discrete Laplace operators is the bottleneck in many engineering and scientific applications. These large, symmetric positive definite and sparse systems of equations are notoriously ill-conditioned. Fast Fourier Transforms (Cooley & Tukey [1965]) are optimal for these problems when discretized over trivial geometric domains, however they are not applicable for practical domain shapes. Direct methods like Cholesky factorization (Golub & Loan [2012]) resolve conditioning issues, but suffer from loss of sparsity/fill-in and are prohibitively costly in practice when per-time-step refactoring is necessary (e.g., with changing domain shape). Iterative methods like preconditioned conjugate gradient (PCG) (Saad [2003]) and multigrid (Brandt [1977]) can achieve good performance, however an optimal preconditioning strategy is not generally available, and though multigrid can guarantee modest iteration counts, computational overhead associated with solver creation and other per-iteration costs can dominate runtimes in practice. Unfortunately, there is no clear algorithmic solution. Recently, machine learning techniques have shown promise for these problems. Tompson et al. [2017] showed that a network (FluidNet) can be used to generate an approximate inverse across domain shapes, albeit only with Neumann boundary conditions. Kaneda et al. [2023] developed DCDM (Deep Conjugate Direction Method), which improves on this approach by using a similar network structure and an iterative technique where gradient descent in the matrix norm of the error is preconditioned with a neural network. While their approach is similar to PCG, the nonlinearity of their approximate inverse required a generalization of the PCG method which proved effective. We build on this approach and generalize it to domains with mixed Dirichlet and Neumann boundary conditions. Notably, these problems arise in simulating free-surface liquid flows. 
The DCDM approach cannot handle these cases, however we show that a novel, more lightweight network structure can be used in DCDM’s iterative formalism that is both linear and capable of handling mixed boundary conditions over time-varying fluid domains. Furthermore, we show that this structure drastically improves performance over that in DCDM. We design our network structure to represent the dense nature of the inverse of a discrete Laplacian matrix. That is, the inverse matrix for a discrete Laplace operator has the property that local perturbations anywhere in the domain have non-negligible effects at all other points in the domain. Our network structure uses a hierarchy of grid scales to improve the resolution of this behavior over what is possible with the DCDM structure. In effect, the process of transferring information across the hierarchy from fine grid to increasingly coarse grids and back again facilitates rapid propagation of information across the domain. This structure has similarities with multigrid, however there are some important differences. We incorporate the effects of the Dirichlet and Neumann conditions at irregular boundaries with a novel convolution design. Specifically, we use stencils that learn spatially varying weights based on a voxel’s proximity to the boundary and the boundary condition types encoded there. Although our approximate inverses are linear (unlike the DCDM preconditioner) we still adopt the DCDM iterative formalism. We do this because we cannot guarantee that our neural network produces a symmetric and positive definite approximate inverse as required for standard PCG. It is possible to use a flexible PCG technique (Golub & Ye [1999]) in this case though (as in Bouwmeester et al. [2015]), however we show that the matrix-orthogonal gradient descent iteration in DCDM provides superior results. We show that our network outperforms state-of-the-art preconditioning strategies, including DCDM, FluidNet, algebraic multigrid and incomplete Cholesky. We perform our comparison across a number of representative free-surface liquid and fluid flow problems. To promote reproducibility we have released our full code and a link to our pretrained model at https://anonymous.4open.science/r/MLPCG-2102 2 RELATED WORK Many recent approaches leverage machine learning techniques to accelerate numerical linear algebra computations. Ackmann et al. [2020] use supervised learning to compute preconditioners from fully-connected feed-forward networks in semi-implicit time stepping for weather and climate models. Sappl et al. [2019] use convolutional neural networks (CNNs) to learn banded approximate inverses for discrete Poisson equations arising in incompressible flows discretized over voxelized spatial domains. However, their loss function is the condition number of the preconditioned operator which is prohibitively costly at high resolution. Ozbay et al. [2021] also use CNN to approximate solutions to Poisson problems arising in incompressible flow discretized over voxelized domains, however they do not learn a preconditioner and their approach only supports two-dimensional square domains. Our approach is most similar to those of Tompson et al. [2017] and Kaneda et al. [2023] who also consider discrete Poisson equations over voxelized fluid domains, however our lighter-weight network outperforms them and generalizes to a wider class of boundary conditions. Li et al. [2023] build on the approach of Sappl et al. 
[2019], but use a more practical loss function based on the supervised difference between the inverse of their preconditioner times a vector and its image under the matrix under consideration. Their preconditioner is the product of easily invertible, sparse lower triangular matrices. Notably, their approach works on discretizations over unstructured meshes. Götz & Anz [2018] learn Block-Jacobi preconditioners using deep CNNs. The choice of optimal blocking is unclear for unstructured discretizations, and they use machine learning techniques to improve upon the selection. Various works use hybrid deep learning/multigrid techniques. For example, the UNet [Ronneberger et al., 2015] and MSNet architectures [Mathieu et al., 2016] are similar to a multigrid V-cycle in terms of data flow, as noted by Cheng et al. [2021] and Azulay & Treister [2023]. Cheng et al. [2021] use the multi-scale network architecture MSNet to approximate the solution of Poisson equations arising in plasma flow problems. However, they only consider flows over a square domain in 2D. Azulay & Treister [2023] note the similarity between the multi-scale UNet architecture and a multigrid V-cycle. They use this structure to learn preconditioners for the solution of heterogeneous Helmholtz equations. Eliasof et al. [2023] also use a multigrid-like architecture for a general class of problems. Huang et al. [2023] use deep learning to generate multigrid smoothers at each grid resolution that effectively smooth high frequencies: CNNs generate the smoothing stencils from matrix entries at each level in the multigrid hierarchy. This is similar to our boundary-condition-dependent stencils, however we note that our network is lighter-weight and allowed to vary at a larger scale during learning. Furthermore, optimal stencils are known for the problems considered in this work, and we provide evidence that our solvers outperforms them. 3 Motivation: Incompressible Fluids With Mixed B.C.s While our solver architecture can be applied to any Poisson equation discretized on a structured grid, our original motivation was to accelerate a popular method for incompressible inviscid fluid simulation based on the splitting scheme introduced by Chorin (1967). The fluid’s velocity \( u(x,t) \) is governed by the incompressible Euler equations: \[ \rho \left( \frac{\partial u}{\partial t} + (u \cdot \nabla)u \right) + \nabla p = f_{\text{ext}} \quad \text{s.t.} \quad \nabla \cdot u = 0 \quad \text{in } \Omega, \] where \( \Omega \) is the domain occupied by fluid, pressure \( p \) is the Lagrange multiplier for the incompressibility constraint \( \nabla \cdot u = 0 \), \( \rho \) is the mass density, and \( f_{\text{ext}} \) accounts for external forces like gravity. These equations are augmented with initial conditions \( u(x,0) = u^0(x) \) and \( \rho(x,0) = \rho^0 \) as well as the boundary conditions discussed in Section 3.1. Incompressibility implies that the initial homogeneous mass density is conserved throughout the simulation (\( \rho \equiv \rho^0 \)). Chorin’s scheme employs finite differences in time and splits the integration from time \( t^n \) to \( t^{n+1} = t^n + \Delta t \) into two steps. First, a provisional velocity field \( u^* \) is obtained by an advection step that neglects the pressure and incompressibility constraint: \[ \frac{u^* - u^n}{\Delta t} + (u^n \cdot \nabla)u^n = \frac{1}{\rho^0} f_{\text{ext}}. 
\] Second, a projection step obtains \( u^{n+1} \) by eliminating divergence from \( u^* \): \[ -\nabla \cdot \frac{1}{\rho^0} \nabla p^{n+1} = -\frac{1}{\Delta t} \nabla \cdot u^*, \\ \frac{u^{n+1} - u^*}{\Delta t} = -\frac{1}{\rho^0} \nabla p^{n+1}. \] Equations (2,4) hold inside \( \Omega \), and we have deferred discussion of boundary conditions to Section 3.1. The bottleneck of this full process is (3), which is a Poisson equation since \( \rho^0 \) is spatially constant. 3.1 Boundary Conditions Our primary contribution is handling both Neumann and Dirichlet boundary conditions for the Poisson equation. We assume the computational domain \( D \) is decomposed into \( D = \Omega \cup \Omega_a \cup \Omega_s \), as sketched in the inset, where \( \Omega_a \) denotes free space and \( \Omega_s \) the region filled with solid. This decomposition induces a partition of the fluid boundary \( \partial \Omega = \Gamma_n \cup \Gamma_d \). Boundary \( \Gamma_n \) represents the fluid-solid interface as well as the intersection \( \partial \Omega \cap \partial D \) (i.e., the region outside \( D \) is treated as solid); on it a free-slip boundary condition is imposed: \( (1), u(x,t) \cdot \hat{n}(x) = u_n^\Gamma(x,t) \), where \( \hat{n} \) denotes the outward-pointing unit surface normal. This condition on \( u \) translates via (4) into a Neumann condition on (3): \[ \hat{n} \cdot \nabla p^{n+1} = \frac{\rho_0}{\Delta t} (\hat{n} \cdot u^* - u_n^\Gamma) \quad \text{on } \Gamma_n. \] Free-surface boundary \( \Gamma_d \) represents the interface between the fluid and free space. Ambient pressure \( p_a \) then imposes on (5) a Dirichlet condition \( p^{n+1} = p_a \) on \( \Gamma_d \). In our examples, we set \( p_a = 0 \). The Dirichlet conditions turn out to make solving (3) fundamentally more difficult: while the DCDM paper [Kaneda et al., 2023] discovered that a preconditioner blind to the domain geometry and trained solely on an empty box is highly effective for simulations featuring pure Neumann conditions, the same is not true for Dirichlet (see Figure 5). 3.2 Spatial Discretization We discretize the full domain \( D \) using a regular marker-and-cell (MAC) staggered grid with \( n_c \) cubic elements [Harlow, 1964]. The disjoint subdomains \( \Omega, \Omega_a, \) and \( \Omega_s \) are each represented by a per-cell rasterized indicator field; these are collected into a 3-channel image, stored as a tensor $\mathcal{I}$. In the case of a 2D square with $n_c = N^2$, this tensor is of shape $(3, N, N)$, and summing along the first index yields a single-channel image filled with ones. Velocities and forces are represented at the corners of this grid, and for smoke simulations the advection step (2) is implemented using an explicit semi-Lagrangian method (Stam [1999], Robert [1981]). For free-surface simulations, advection is performed by interpolating fluid velocities from the grid onto particles responsible for tracking the fluid state, advecting those particles, and then transferring their velocities back to the grid. In our examples, we use a PIC/FLIP blend transfer scheme with a 0.99 ratio (Zhu & Bridson [2005]). 
Pressure values are stored at element centers, and the Laplace operator in (3) is discretized into a sparse symmetric matrix $A^\mathcal{I} \in \mathbb{R}^{n_c \times n_c}$ using the standard second-order accurate finite difference stencil (with 5 points in 2D and 7 in 3D) but with modifications to account for Dirichlet and Neumann boundary conditions: stencil points falling outside $\Omega$ are dropped, and the central value (i.e., the diagonal matrix entry) is determined as the number of neighboring cells belonging to either $\Omega$ or $\Omega_a$. Examples of these stencils are visualized in 2D in the inset. Rows and columns corresponding to cells outside $\Omega$ are left empty, meaning $A^\mathcal{I}$ typically has a high-dimensional nullspace. These empty rows and columns are removed before solving, obtaining a smaller positive definite matrix $\tilde{A}^\mathcal{I} \in \mathbb{R}^{n_f \times n_f}$, where $n_f$ is the number of fluid cells. The right-hand side of (3) is discretized using the standard MAC divergence finite difference stencil into a vector $b \in \mathbb{R}^{n_c}$, which also receives contributions from the Neumann boundary. Entries of this vector corresponding to cells outside $\Omega$ are removed to form right-hand side vector $\tilde{b} \in \mathbb{R}^{n_f}$ of the reduced linear system representing the discrete Poisson equation: $$\tilde{A}^\mathcal{I} \tilde{x} = \tilde{b},$$ where $\tilde{x} \in \mathbb{R}^{n_f}$ collects the fluid cells’ unknown pressure values (a discretization of $p^{n+1}$). The constantly changing domains and boundary conditions of a typical fluid simulation mean traditional preconditioners for (6) like multigrid or incomplete Cholesky, as well as direct sparse Cholesky factorizations, need to be rebuilt at every frame. This prevents their high fixed costs from being amortized across frames and means they struggle to outperform a highly tuned GPU implementation of unpreconditioned CG. This motivates our neural-preconditioned solver which, after training, instantly adapts to arbitrary subdomain shapes encoded in $\mathcal{I}$. 4 NEURAL-PRECONDITIONED STEEPEST DESCENT WITH ORTHOGONALIZATION Our neural-preconditioned solver combines a carefully chosen iterative method (Section 4.1) with a preconditioner based on a novel neural network architecture (Section 4.2.1), inspired by multigrid. 4.1 ALGORITHM For symmetric positive definite matrices $A$ (like the discrete Laplacian $\tilde{A}^\mathcal{I}$ from (6)), the preconditioned conjugate gradient (PCG) algorithm (Shewchuk [1994]) is by far the most efficient iterative method for solving linear systems $Ax = b$ when an effective preconditioner is available. Unfortunately, its convergence rate is known to degrade when the preconditioner itself fails to be symmetric, as is the case for our neural preconditioner. Bouwmeester et al. [2015] have shown that good convergence can be recovered for nonsymmetric multigrid preconditioners using the “flexible PCG” variant at the expense of an additional dot product. However, this variant turns out to perform sub-optimally with our neural preconditioner, as shown in Table 1. Instead, we adopt the preconditioned steepest descent with orthogonalization (PSDO) method proposed in Kaneda et al. [2023], which was shown to perform well even for their nonlinear preconditioning operator. 
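Returning to the discretization of Section 3.2, the following minimal 2D sketch assembles \( A^{\mathcal{I}} \) from a per-cell label grid. It is an illustration under our assumptions (not the authors' code); the label values and helper names are invented for the example.

```python
import numpy as np
import scipy.sparse as sp

FLUID, AIR, SOLID = 0, 1, 2  # rasterized labels for Omega, Omega_a, Omega_s

def assemble_poisson_2d(labels: np.ndarray) -> sp.csr_matrix:
    """5-point mixed-BC Laplacian; rows/columns of non-fluid cells are left empty."""
    N, M = labels.shape
    idx = lambda i, j: i * M + j
    A = sp.lil_matrix((N * M, N * M))
    for i in range(N):
        for j in range(M):
            if labels[i, j] != FLUID:
                continue
            diag = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                # Cells outside the grid count as solid (Neumann): the stencil point is dropped.
                if not (0 <= ni < N and 0 <= nj < M) or labels[ni, nj] == SOLID:
                    continue
                diag += 1.0                              # neighbors in Omega or Omega_a add to the diagonal
                if labels[ni, nj] == FLUID:
                    A[idx(i, j), idx(ni, nj)] = -1.0     # coupling only to fluid neighbors
            A[idx(i, j), idx(i, j)] = diag
    return A.tocsr()

# The reduced positive definite system drops the empty rows/columns:
# fluid = np.flatnonzero(labels.ravel() == FLUID); A_tilde = A[fluid][:, fluid]
```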
The PSDO algorithm can be understood as a modification of standard CG that replaces the residual with the preconditioned residual as the starting point for generating search directions and, consequently, cannot enjoy many of the simplifications baked into the traditional algorithm. Most seri- ously, $A$-orthogonalizing against only the previous search direction no longer suffices to achieve $A$-orthogonality to all past steps. Therefore, iteration $k$ of PSDO obtains its step direction $\mathbf{d}_k$ by explicitly $A$-orthogonalizing the preconditioned residual against the last $n_{\text{ortho}}$ directions (where $n_{\text{ortho}}$ is a tunable parameter) and determines step length $\alpha_k$ with an exact line search. PSDO reduces to standard preconditioned steepest descent (PSD) when $n_{\text{ortho}} = 0$, and it is mathematically equivalent to unpreconditioned CG when $n_{\text{ortho}} \geq 1$ and the identity operator is used as the preconditioner. In the case of a symmetric preconditioner $P = LL^\top$, PSDO differs from PCG by taking steps that are $A$-orthogonal rather than $LAL^\top$-orthogonal. When combined with our neural preconditioner, we call this algorithm NPSDO, presented formally in Algorithm 1 in the appendix. We empirically determined $n_{\text{ortho}} = 2$ to perform well, and we use this value in all reported experiments. 4.2 Neural Preconditioner The ideal preconditioner for all iterative methods described in Section 4.1 is the exact inverse $A^{-1}$; with it, each method would converge to the exact solution in a single step. Of course, the motivation for using an iterative solver is that inverting or factorizing $A$ is too costly (Figure 5), and instead we must seek an inexpensive approximation of $A^{-1}$. Examples are incomplete Cholesky, which does its best to factorize $A$ with a limited computational budget, and multigrid, which applies one or more iterations of a multigrid solver. Our method approximates the map $\mathbf{r} \mapsto A^{-1}\mathbf{r}$ by our neural network $\mathcal{P}_{\text{net}}(\mathcal{I}, \mathbf{r})$. Departing from recent works like Kaneda et al. (2023), we use a novel architecture that both substantially boosts performance on pure-Neumann problems and generalizes to the broader class of Poisson equations with mixed boundary conditions by considering geometric information from $\mathcal{I}$. The network performs well on 2D or 3D Poisson equations of varying sizes, but to simplify the exposition, our figures and notation describe the method on small square grids of size $N \times N$. We note that Algorithm 1 runs on linear system $\tilde{A}^{\mathcal{I}}\tilde{\mathbf{x}} = \tilde{\mathbf{b}}$, featuring vectors of smaller size $n_f$, but the network always operates on input vectors of full size $n_c$, reshaped into $(N, N)$ tensors. Therefore, to evaluate $\tilde{\mathbf{d}} = \mathcal{P}_{\text{net}}(\mathcal{I}, \tilde{\mathbf{r}})$, $\tilde{\mathbf{r}}$ is first padded by inserting zeros into locations corresponding to cells in $\Omega_a$ and $\Omega_s$, and then those locations of the output are removed to obtain $\tilde{\mathbf{d}} \in \mathbb{R}^{n_f}$. 4.2.1 Architecture Our neural network architecture (Figure 1) is inspired by geometric multigrid, aiming to propagate information across the computational grid faster than the one-cell-per-iteration of unpreconditioned CG. The architecture is constructed recursively, consisting of levels $1 \leq \ell \leq L$. 
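For reference, a dense-matrix sketch of the PSDO iteration of Section 4.1 as we read it is given below; Algorithm 1 in the appendix is the authoritative version, and `precond` stands in for the neural preconditioner \( r \mapsto \mathcal{P}_{\text{net}}(\mathcal{I}, r) \).

```python
import numpy as np

def psdo(A, b, precond, n_ortho=2, tol=1e-6, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    dirs, A_dirs = [], []                       # the last n_ortho directions and their A-images
    r0 = np.linalg.norm(r)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol * r0:
            break
        d = precond(r)                          # preconditioned residual
        for dj, Adj in zip(dirs, A_dirs):       # explicit A-orthogonalization against past steps
            d = d - (d @ Adj) / (dj @ Adj) * dj
        Ad = A @ d
        alpha = (d @ r) / (d @ Ad)              # exact line search
        x = x + alpha * d
        r = r - alpha * Ad
        if n_ortho > 0:
            dirs = (dirs + [d])[-n_ortho:]
            A_dirs = (A_dirs + [Ad])[-n_ortho:]
    return x

# With precond = lambda r: r and n_ortho >= 1 this reduces to unpreconditioned CG,
# matching the equivalence noted above.
```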
A given level $\ell$ operates on an input image $\mathcal{I}^{(\ell)}$ and input vector $\mathbf{r}^{(\ell)}$. It performs a special image-dependent convolution operation on $\mathbf{r}^{(\ell)}$ and then downsamples the resulting vector $\mathbf{y}^{(\ell)}$, as well as $\mathcal{I}^{(\ell)}$, to the next-coarser level $\ell + 1$ using average pooling (analogous to restriction in multigrid). The output of the level $\ell + 1$ subnetwork is then upsampled (analogous to prolongation), run through another convolution stage, and finally linearly combined with \( y^{(\ell)} \) to obtain the output. At the finest level, \( I^{(1)} = I \) and \( r^{(1)} = r \), while at the coarsest level only a single convolution operation is performed. One crucial difference between our network and existing neural solvers like FluidNet (Tompson et al., 2017) is how geometric information from \( I \) is incorporated. Past architectures treat this geometric data on the same footing as input tensor \( r \), e.g. feeding both into standard multi-channel convolution blocks. However, we note that \( I \) determines the entries of \( A^I \), and so if the convolutions are to act analogously to the smoothing operations of multigrid, really this geometry information should inform the weights of convolutions applied to \( r \). This motivates our use of custom convolutional blocks whose spatially varying kernels depend on local information from \( I \). Each custom convolutional block (at the right corner in Figure 1) at level \( \ell \) learns an affine map from a \( 3 \times 3 \) sliding window in \( I^{(\ell)} \) to a \( 3 \times 3 \) kernel \( K^{(i,j)} \). This affine map is parametrized by a weights tensor \( W \) of shape \((3^2, 3, 3, 3)\) and a bias vector \( B \in \mathbb{R}^{3^2} \). Entry \( y_{i,j} \) of the block’s output is computed as: \[ y_{i,j} = \sum_{a,b=-1}^{1} K^{(i,j)}_{a,b} x_{i+a,j+b}, \quad K^{(i,j)}_{a,b} := \sum_{c=0}^{2} \sum_{l,m=-1}^{1} W_{3a+b,c,l,m} I^{(\ell)}_{c,i+l,j+m} + B_{3a+b}. \] Out-of-bounds accesses in these formulas are avoided by padding \( I^{(\ell)} \) with solid pixels (i.e., the values assigned to cells in \( \Omega_s \)) and \( x \) with zeros. In multigrid, the solutions obtained on the coarser grids of the hierarchy are corrections that are added to the finer grids’ solutions; likewise, our network includes connections labeled “linear combination” in Figure 1 that mix in upsampled data from the lower level. Our network determines each of the two coefficients in this combination by learning affine functions of the input image defined by (i) convolving \( I^{(\ell)} \) with a (spatially constant) kernel \( K \) of shape \((3, 3, 3)\); (ii) averaging to produce a scalar; and (iii) adding a scalar bias \( B \). For efficiency, these evaluation steps are fused into a custom linear block (indicated by blue arrows in Figure 1) that implements the formula: \[ z = B + \frac{1}{3^2 n_c} \sum_{i,j=0}^{N-1} \sum_{c=0}^{2} \sum_{l,m=-1}^{1} K_{c,l,m} I^{(\ell)}_{c,i+l,j+m}. \] Our custom network architecture has numerous advantages. Its output is a linear function of the input vector (unlike the nonlinear map learned by Kaneda et al. (2023)), making it easier to interpret as a preconditioner. The architecture is also very lightweight: a model with \( L = 4 \) coarsening levels has only \( \sim 25k \) parameters. Its simplicity accelerates network evaluations at solve time, critical to make NPSDO competitive with the state-of-the-art solvers used in practice. 
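As a rough PyTorch sketch of the image-conditioned convolution in Eq. (7) (our reading, not the released CUDA kernel), the block below predicts a 3x3 kernel per pixel from the local window of \( \mathcal{I} \); zero padding is used here for brevity, whereas the paper pads \( \mathcal{I} \) with solid pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageConditionedConv2d(nn.Module):
    """Eq. (7): a 3x3 kernel per pixel, predicted affinely from the local
    3x3 window of the 3-channel geometry image I."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.W = nn.Parameter(1e-2 * torch.randn(9, channels * 9))  # (kernel entry, flattened window of I)
        self.B = nn.Parameter(torch.zeros(9))

    def forward(self, I: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # I: (channels, N, N) geometry image; x: (N, N) grid-shaped input vector.
        N = x.shape[-1]
        I_win = F.unfold(I.unsqueeze(0), kernel_size=3, padding=1)               # (1, 9*channels, N*N)
        K = torch.einsum('kw,bwp->bkp', self.W, I_win) + self.B[None, :, None]   # per-pixel kernels
        x_win = F.unfold(x.view(1, 1, N, N), kernel_size=3, padding=1)           # (1, 9, N*N)
        return (K * x_win).sum(dim=1).view(N, N)                                 # apply kernels pixel-wise
```

In the full network, one such block acts on \( r^{(\ell)} \) before downsampling and another after upsampling at each level, and the spatially constant linear block of Eq. (8) supplies the coefficients of the final linear combination.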
We note that our solver is fully matrix free, with \( P^{\text{net}} \) relying only on the image \( I \) of the simulation scene to infer information about \( A^I \). Furthermore, since all network operations are formulated in terms of local windows into \( I \) and \( r \), it can train and run on problems of any size divisible by \( 2^L \). The 3D version of our architecture is a straightforward extension of the 2D formulas above, simply using larger tensors with additional indices to account for the extra dimension, as well as extending the sums to run over these indices. ### 4.2.2 Training We train our network \( P^{\text{net}} \) to approximate \( A^I b \) when presented with image \( I \) and input vector \( b \). We calculate the loss for an example \((I, A^I, r)\) from our training dataset as the residual norm: \[ \text{Loss} = \| b - A^I P^{\text{net}}(I, b) \|_2. \] We found the more involved loss function used in Kaneda et al. (2023) not to benefit our network. Our training data set consists of 183 matrices collected from 10 different simulation scenes, some of domain shape \((128, 128, 128)\) and others \((256, 128, 128)\). For each matrix, we generate 800 right-hand side vectors using a similar approach to Kaneda et al. (2023), but with far fewer Rayley-Ritz vectors. We first compute 1600 Ritz vectors using Lanczos iterations (Lanczos, 1950) and then generate from them 800 random linear combinations. These linear combinations are finally normalized and added to the training set. To accelerate data generation, we create the right-hand sides for different matrices in parallel; it takes between 0.5 and 3 hours to generate the data for each scene. Since Ritz vector calculation is expensive, we experimented with other approaches, like picking random vectors or constructing analytical eigenmodes for the Laplacian on \( D \) and masking out entries outside \( \Omega \). Unfortunately these cheaper generation techniques led to degraded performance. In each epoch of training, we loop over the matrices of our dataset in shuffled order. For each matrix, we process all of its 800 right-hand sides in batches of 128, repeating five times. The full training process takes 5-7 days on an AMD EPYC 9554P 64-Core Processor with an NVIDIA RTX 6000 GPU. The training and validation losses are computed every five epochs, and we found it beneficial to terminate after 50 epochs. ### 4.2.3 Implementation We built our network using PyTorch (Paszke et al., 2019), but implemented our custom convolutional and linear blocks as custom CUDA extensions. The neural network was trained using single precision floating point. ## 5 Results and Analysis We evaluate the effectiveness and efficiency of our neural preconditioned solver by comparing it to high-performance state-of-the-art implementations of several baseline methods: unpreconditioned CG provided by the CuPy library (Okuta et al., 2017), as well as CG preconditioned by the algebraic multigrid (AMG) and incomplete Cholesky (IC) implementations from the AMGCL library (Demidov, 2020). All of these baseline methods are accelerated by CUDA backends running on the GPU, with the underlying IC implementation coming from NVIDIA’s cuSparse library. Where appropriate, we also compared against past neural preconditioners FluidNet (Tompson et al., 2017) and DCDM (Kaneda et al., 2023). Finally, we included characteristic performance statistics of a popular sparse Cholesky solver CHOLMOD (Chen et al., 2008). 
In all cases, our method outperforms these baselines, often dramatically. We executed all benchmarks on a workstation featuring an AMD Ryzen 9 5950X 16-Core Processor and an NVIDIA GeForce RTX 3080 GPU. We used as our convergence criterion for all methods a reduction of the residual norm by a factor of \( 10^6 \), which is sufficiently accurate to eliminate visible simulation artifacts. We evaluate our neural preconditioner in single precision floating point but implement the rest of the NPSDO algorithm in double precision for numerical stability. We benchmarked on twelve simulation scenes with various shapes—(128, 128, 128), (256, 128, 128), and (256, 256, 256)—each providing 200 linear systems to solve. For each solve, we recorded the number of iterations and runtime taken by each solver. These performance statistics are summarized visually in Figures 3a, 6, and in tabular form in Appendix A.3. Figure 3a summarizes timings from all solves in our benchmark suite: for each system, we divide the unpreconditioned CG solve time by the other methods’ solve times to calculate their speedups and plot a histogram. We note that our method significantly outperforms the others on a majority of solves: ours is fastest on 95.68% of the systems, which account for 98.52% of our total solve time. Our improvements are more substantial on larger problems, (Figures 3b and 9) for two reasons. First, condition numbers increase with size, impeding solvers without effective preconditioners: this is seen clearly by comparing results from two different resolutions (Figures 3d and 9). Second, the small matrices \( A^T \) correspond to simulation grids with mostly non-fluid cells. While CG, AMGCL... Figure 3: Histograms of solution speedup vs. a baseline of unpreconditioned CG (a) for all solves; and (b-f) for certain subsets of the systems to help tease apart the different modes of the distribution. Figure 4: Comparisons among AMG, IC, CG and NSPDO (Ours) on a single frame at $256^3$ with Neumann only BC (left two) and mixed BC (right two). Figure 5: Comparisons among AMG, IC, CG, DCDM, FluidNet (FN) and NSPDO (Ours) on a single frame at $128^3$ with Neumann only BC (left two) and mixed BC (right two). and IC timings shrink significantly as fluid cells are removed, our network’s evaluation cost does not: it always processes all of $\mathcal{D}$ regardless of occupancy. This scaling behavior is visible in Figure 6. Our speedups are also greater for examples with $\Gamma_d = \emptyset$. DCDM is applicable for these, and so we included in it Figure 3d (but not in Figure 3e) due to the network overspilling GPU RAM). DCDM’s failure to outperform CG and IC in these results, contrary to [Kaneda et al., 2023], can be attributed to the higher-performance CUDA-accelerated implementations of those baselines used in this work. With Dirichlet conditions (Figure 5), our preconditioner is less effective, and yet we still outperform the rest on 93.46% of the frames, which account for 97.06% of our total solve time. Statistics are not reported in this setting for DCDM and FluidNet, which struggle to reduce the residual (Figure 5). Further insights can be obtained by consulting Figures 4 and 5, which show the convergence behavior of each iterative solver on characteristic example problems. AMG is clearly the most effective preconditioner, but this comes at the high cost of rebuilding the multigrid hierarchy before each solve: its iterations cannot even start until long after our solver already converged. 
Our preconditioner is the second most effective and, due to its lightweight architecture, achieves the fastest solves. DCDM is also quite effective at preconditioning for Neumann-only problems, but its iterations are slowed by costly network evaluations. IC’s setup time is shorter than AMG but still substantial, and it is much less effective as a preconditioner. We note that the smoke example (Figure 5) also includes a comparison to FluidNet applied as a preconditioner for PSDO. In the original paper, FluidNet was presented as a standalone solver, to be run just once per simulation frame. However, in this form it cannot produce highly accurate solutions. Incorporating it as a preconditioner as we do here in theory allows the system to be solved to controlled accuracy, but this solver ended up stalling before reaching a $10^6$ reduction in our experiments; for this reason it was omitted from Figure 3. On average, our solver spends 79.4% of its time evaluating $\mathcal{P}^{\text{net}}$, 4.4% of its time in orthogonalization, and the remaining 16.2% in other CG operations. In contrast, AMG takes a full 90% of its time in its setup stage. IC’s quicker construction and slower convergence mean it takes only 23% in setup. Our architecture also confers GPU memory usage benefits: for $128^3$ grids, our solver uses 1.5GiB of RAM, while FluidNet and DCDM consume 5GiB and 8.3GiB, respectively (Appendix A.3). 6 CONCLUSIONS The neural-preconditioned solver we propose not only addresses more general boundary conditions than past machine learning approaches for the Poisson equation (Tompson et al., 2017) Kaneda et al., 2023 but also dramatically outperforms these solvers. It even surpasses state-of-the-art high-performance implementations of standard methods like algebraic multigrid and incomplete Cholesky. It achieves this through a combination of its strong efficacy as a preconditioner and its fast evaluations enabled by our novel lightweight architecture. Nevertheless, we see several opportunities to improve and extend our solver in future work. First, although we implemented our spatially-varying convolution block in CUDA, it remains the computational bottleneck of the network evaluation and is not yet fully optimized. We are also excited to try porting our architecture to special-purpose acceleration hardware like Apple’s Neural Engine; not only could this offer further speedups, but also it would free up GPU cycles for rendering the results in real-time applications like visual effects and games. Second, we would like to explore ways to explicitly enforce symmetry and even positive definiteness of our preconditioning operator so that the less expensive PCG algorithm could be used rather than PSDO. Third, for applications where fluid occupies only a small portion of the computational domain, we would like to develop techniques to exploit sparsity for better scaling (Figure 6). Finally, we look forward to extending our ideas to achieve competitive performance for problems posed on unstructured grids as well as equations with non-constant coefficients, vector-valued unknowns (e.g., elasticity), and nonlinearities. REFERENCES J. Ackmann, P. D. Düben, T. N. Palmer, and P. K. Smolarkiewicz. Machine-learned preconditioners for linear solvers in geophysical fluid flows. *arXiv preprint arXiv:2010.02866*, 2020. Y. Azulay and R. Treister. Multigrid-augmented deep learning preconditioners for the helmholtz equation. *SIAM Journal on Scientific Computing*, 45(3):S127–S151, 2023. doi: 10.1137/21M1433514. 
URL [https://doi.org/10.1137/21M1433514](https://doi.org/10.1137/21M1433514). H. Bouwmeester, A. Dougherty, and A.V. Knyazev. Nonsymmetric preconditioning for conjugate gradient and steepest descent methods. *Procedia Computer Science*, 51:276–285, 2015. ISSN 1877-0509. doi: https://doi.org/10.1016/j.procs.2015.05.241. URL [https://www.sciencedirect.com/science/article/pii/S1877050915010492](https://www.sciencedirect.com/science/article/pii/S1877050915010492) International Conference On Computational Science, ICCS 2015. A. Brandt. Multi-level adaptive solutions to boundary-value problems. *Math Comp*, 31(138):333–390, 1977. Y. Chen, T.A. Davis, W.W. Hager, and S. Rajamanickam. Algorithm 887: Cholmod, supernodal sparse cholesky factorization and update/downdate. *ACM Trans. Math. Softw.*, 35(3), oct 2008. ISSN 0098-3500. doi: 10.1145/1391989.1391995. URL [https://doi.org/10.1145/1391989.1391995](https://doi.org/10.1145/1391989.1391995). L. Cheng, E.A. Illarramendi, G. Bogopolsky, M. Bauerheim, and B. Cuenot. Using neural networks to solve the 2d poisson equation for electric field computation in plasma fluid simulations. *arXiv preprint arXiv:2109.13076*, 2021. A. Chorin. A numerical method for solving incompressible viscous flow problems. *J Comp Phys*, 2(1):12–26, 1967. J. Cooley and J. Tukey. An algorithm for the machine calculation of complex fourier series. *Math Comp*, 19(90):297–301, 1965. D. Demidov. Amgcl—a c++ library for efficient solution of large sparse linear systems. *Software Impacts*, 6:100037, 2020. ISSN 2665-9638. doi: https://doi.org/10.1016/j.simpa.2020.100037. URL [https://www.sciencedirect.com/science/article/pii/S2665963820300282](https://www.sciencedirect.com/science/article/pii/S2665963820300282). M. Eliasof, J. Ephrath, L. Ruthotto, and E. Treister. Mgic: Multigrid-in-channels neural network architectures. *SIAM Journal on Scientific Computing*, 45(3):S307–S328, 2023. doi: 10.1137/21M1430194. URL [https://doi.org/10.1137/21M1430194](https://doi.org/10.1137/21M1430194). G. Golub and C. Van Loan. *Matrix computations*, volume 3. JHU Press, 2012. G. Golub and Q. Ye. Inexact preconditioned conjugate gradient method with inner-outer iteration. *SIAM J Sci Comp*, 21(4):1305–1320, 1999. doi: 10.1137/S1064827597323415. M. Götz and H. Anzt. Machine learning-aided numerical linear algebra: Convolutional neural networks for the efficient preconditioner generation. In *2018 IEEE/ACM 9th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (scalA)*, pp. 49–56, 2018. doi: 10.1109/ScalA.2018.00010. F. Harlow. The particle-in-cell method for numerical solution of problems in fluid dynamics. *Meth Comp Phys*, 3:319–343, 1964. R. Huang, R. Li, and Y. Xi. Learning optimal multigrid smoothers via neural networks. *SIAM Journal on Scientific Computing*, 45(3):S199–S225, 2023. doi: 10.1137/21M1430030. URL [https://doi.org/10.1137/21M1430030](https://doi.org/10.1137/21M1430030). A. Kaneda, O. Akar, J. Chen, V.A.T. Kala, D. Hyde, and J. Teran. A deep conjugate direction method for iteratively solving linear systems. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 15720–15736. PMLR, 23–29 Jul 2023. URL [https://proceedings.mlr.press/v202/kaneda23a.html](https://proceedings.mlr.press/v202/kaneda23a.html).
FhbZ1PQCaG
Is the memory initialized only once and never re-initialized throughout the entire training process? Clarifying this point could help readers better understand the novelty of this approach, as memory initialization is typically performed per episode (e.g., in NTMs).
THINK BEFORE YOU ACT: DECISION TRANSFORMERS WITH INTERNAL MEMORY Anonymous authors Paper under double-blind review ABSTRACT Decision transformer model-based decision-making agents have shown the ability to generalize across multiple tasks. However, their performance relies on massive data and computation. We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in parameters throughout training. As a result, training on a new task may deteriorate the model’s performance on previous tasks. In contrast to LLMs’ implicit memory mechanism, the human brain utilizes distributed memory storage, which helps manage and organize multiple skills efficiently, mitigating the forgetting phenomenon. Thus inspired, we propose an internal memory module to store, blend, and retrieve information for different downstream tasks. Evaluation results show that the proposed method improves training efficiency and generalization in both Atari games and meta-world object manipulation tasks. Moreover, we demonstrate that memory fine-tuning further enhances the adaptability of the proposed architecture. 1 INTRODUCTION Recently, with the tremendous success of decoder-only transformer models (Brown et al., 2020; OpenAI, 2023; Dosovitskiy et al., 2021; Touvron et al., 2023), an increasing number of researchers have focused on decoder-only transformer-based decision-making agents. As shown with GPT-3 (Brown et al., 2020) and follow-up work (Kaplan et al., 2020; Clark et al., 2022), the generalization of these LLMs depends significantly on the model size, i.e. the number of parameters. This is partly because neural network parameters act as implicit memory (Neyshabur et al., 2019), enabling models to “memorize” a huge amount of training data by fitting these parameters. However, relying purely on scale has practical and ethical limits: there are economic and ecological costs, it reduces accessibility, and more efficient uses of scale might improve performance further. To address some limits of the implicit, parameter-based memory of large models, we take the inspiration from the concept of “working memory” (Baddeley, 2003; Cowan, 2008) to explicitly store and recall past experiences for use in future decision-making. The concept, “working memory”, originates from cognitive psychology and neuroscience (Baddeley, 2003; Goldman-Rakic, 1995), where it refers to the system responsible for the temporary storage and manipulation of information during cognitive tasks. Our motivation comes from how humans think before they act: they can reason on past experiences to generate appropriate behavior in new situations. As an illustration, imagine we want to train a robot to play four different Atari games: Asteroids, Asteroids Deluxe, Space Invaders, and Space Invaders II (Figure 1). Asteroids Deluxe is a sequel to Asteroids that introduces new boss fights and enemies, and the same can be said about Space Invaders II and Space Invaders. For the robot to play these four games, it must actively store what it has learned in each game in its memory module and choose the appropriate strategy for each game. Throughout training, the robot’s memory module continuously processes and updates relevant game information, allowing it to make informed decisions and adapt its strategies. 
Followed by this intuition, we introduce Decision Transformers with Memory (DT-Mem): it stores an internal memory as a matrix and its functioning entails two primary steps: **memory update** and **memory retrieval**. DT-Mem builds on earlier work on memory-augmented neural networks (Santoro et al., 2016)—including neural Turing machines (Graves et al., 2014) and memory networks (Sukhbaatar et al., 2015)—in several ways, as we detail in the related work. We use content-based addressing (Eslami et al., 2016) to locate the memory position to update or retrieve from. The memory update involves modifying or replacing existing information. This enables the system to keep track of changes, maintain task-relevant information, and facilitate decision-making. More specifically, we first map the input sequence and memory into three entities: query, key, and value. Next, we use an attention-based mechanism to calculate the correlations between the input and memory, and then we use the attended weight of the input sequence to update the memory. Memory retrieval refers to the process of accessing and recovering stored information. It involves bringing relevant information back to condition decision-making. To do so, we read from the updated memory at the content-based address. Since experience must often be mapped from one task to another (e.g., through analogy in humans) to be useful, we also equip our memory module with an adaptable mapping capability. Specifically, for adapting the memory module to a new task, we employ the Low-Rank Adaptation (LoRA) method as described in (Hu et al., 2022) to fine-tune it. The main idea behind LoRA is to train a low-rank projection matrix on a small amount of labeled data from a new task. This matrix maps the parameters of a pre-trained model to a new task. We fine-tune only the memory module in this work because we rely on the generalization capacity of a pre-trained Decision Transformer (DT). Transformers are often pre-trained on large-scale datasets, as in the case of models like Multi-game DT (Lee et al., 2022) and Hyper-DT (Xu et al., 2023), and this pre-training enables them to capture broad knowledge that is transferable across tasks. In contrast, our memory module stores task-specific knowledge that should be adapted for new tasks. The functioning of DT-Mem differs from external memory and information retrieval-based methods in several ways: (1) memory size, (2) representation of stored information, and (3) retrieval method. In contrast to internal memory module, external memory methods generally require a large dataset that serves as a look-up table. Each raw data point in the external memory also requires an extra step of representation learning to be input to the neural network. Finally, our memory module relies on an attention-based retrieval method, since attention has demonstrated the ability to generalize across tasks. However, attention is computationally impractical for large sets, and hence external/retrieval-based memory systems tend to rely on $k$-nearest neighbor search for information retrieval. To validate our approach, we evaluate DT-Mem in two environments: (a) on Atari games against Multi-game Decision Transformer (MDT, Lee et al., 2022) and Recurrent Memory Decision Transformer (RMDT, Bessonov et al., 2023), and (b) on Meta-World environments against Prompt Decision Transformer (PDT, Xu et al., 2022) and Hyper-Decision Transformer (HDT, Xu et al., 2023). 
Our results show that DT-Mem improves generalization and adaptability with fewer model parameters and less training time. ## 2 RELATED WORK ### Transformer-based Reinforcement Learning methods Transformer (Vaswani et al., 2017) is a powerful architecture designed for sequence modeling. Owing to the capabilities that emerge as model and data size scale up, the Transformer has become a foundational model in several domains, including natural language processing (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023) and computer vision (Dosovitskiy et al., 2021). However, applying Transformer in reinforcement learning settings, such that it generalizes to multiple tasks, remains an open problem. Recently, Chen et al. (2021) and Janner et al. (2021) treat the RL problem as a sequence modeling problem and proposed a Transformer-based architecture to solve it with offline RL. These findings inspired researchers to develop more advanced Transformer-based RL methods. Subsequent efforts mainly focus on two aspects: generalization and adaptability. To improve model online adaptabil- Zheng et al. (2022) propose the Online Decision Transformer (Online DT), which utilizes the maximum-entropy idea to encourage pre-trained policies to explore during a phase of online adaptation. To improve offline adaptation, Xu et al. (2023) propose a Hyper-network-based module that helps DT adapt to unseen tasks efficiently. To facilitate task adaptation, Xu et al. (2023) introduce the prompt-based DT, which selects short trajectories to use in a task prompt in analogy with in-context learning for large language models. Furthermore, Lee et al. (2022) propose a multi-game DT (MDT), which use the expert action inference to consistently produce actions of highly-rewarding behavior. MDT demonstrates that DT can generalize to various Atari games with human-level performance. We argue that the generalization of the above-mentioned works relies on the size of models and does not learn the data efficiently. To address this issue, we introduce a memory module that can store, blend, and retrieve training information for better model and training efficiency. **Working memory** In the context of machine learning, there is a long history of neural network-based models that incorporate memory mechanisms (Das et al., 1992; Schmidhuber, 1992; Hochreiter and Schmidhuber, 1997; Santoro et al., 2016; Ba et al., 2016; Munkhdalai and Yu, 2017; Csordás and Schmidhuber, 2019; Ramsauer et al., 2020; Wu et al., 2022a). Generally, this research aims to enhance the capacity of neural networks to store and manipulate information over extended periods of time, leading to improved performance on a range of tasks. It often takes inspiration from human cognitive function. Most salient to our work, Graves et al. (2014) merge concepts from Turing machines and deep learning in “Neural Turing Machines” (NTMs), neural networks that include a content-addressable matrix memory space for storing and updating information throughout time. They show NTMs to be effective for various algorithmic tasks. Concurrently, Sukhbaatar et al. (2015) introduce “memory networks,” which use a content-addressable matrix memory store and retrieve information from previous computational steps to facilitate complex reasoning and inference tasks. Infinity-former excels in handling unbounded contexts with precision and flexibility, ideal for extensive and complex datasets (Martins et al., 2023). 
LONGMEM decoupled architecture and token-to-chunk retrieval make it adept at managing large contexts and overcoming memory staleness (Wang et al., 2023). kNN-augmented Transformer offers flexibility in context length and rapid adaptation to new data, enhancing the model’s real-time applicability (Wang et al., 2023). More recently, Bessonov et al. (2023) introduces a recurrent memory mechanism to address reinforcement learning challenges, which preserves a hidden state throughout the decision-making process. However, this method overlooks the storage and retrieval of task-related information, thereby falling short in fostering model generalization and task adaptation. Munkhdalai et al. (2019) propose a rapidly adaptable neural memory system, which they instantiate as a feedforward neural network trained by metalearning. They evaluate the memory’s effectiveness in a simple RL setting, maze exploration, and on various NLP tasks. Alternatively, Goyal et al. (2022) builds on the “global workspace” theory from cognitive science, which posits that different input entities share information through a common communication channel. The proposed shared global workspace method employs the attention mechanism to encourage the most useful information to be shared among neural modules. It is closely related to working memory and inspires us to explore how an explicit working memory can improve the generalization of Transformer-based models. An upshot of our work is that it may be valuable to revisit earlier memory-augmentation methods in light of more powerful foundation models. ### 3 Preliminaries #### 3.1 Offline Reinforcement Learning A trajectory consists of a series of states, actions, and rewards, expressed as $\tau = (s_0, a_0, r_0, s_1, a_1, r_1, \ldots, s_T, a_T, r_T)$. In the context of offline RL, data acquisition doesn’t come from active interaction with the environment. Instead, we rely solely on a predefined and limited dataset containing various trajectories generated by different policies. This scenario presents greater challenges as it restricts the agent’s ability to actively explore the environment and gather new information, which is a crucial aspect of traditional RL approaches. Formally, in the context of model evaluation, we can define a set of training tasks and testing tasks as $T^{train}$ and $T^{test}$, respectively. These two sets deliberately have no overlapping tasks, but they may share the same or similar observation and action spaces. To be more specific, for each training task $T^i \in T^{train}$, we have access to a large training dataset, which contains trajectories... \[ \tau^{0:H} = (s_0, a_0, r_0, \cdots, s_H, a_H, r_H), \] where \( H \) is the episode length. However, we assume access to only a small amount of data for the testing tasks. Our goal is to evaluate the proposed model in two dimensions. First, we want to assess the model’s **generalization**, which refers to its ability to solve the testing tasks within a finite time with no additional fine-tuning. Second, we want to test the model’s **adaptability**, which refers to its ability to improve its performance on the testing tasks through fine-tuning on limited data after pre-training on separate tasks. ### 3.2 Low-rank Adaptation Low-rank adaptation (LoRA, Hu et al., 2022) is a transfer learning technique used to adapt a pre-trained model to a new task with limited labeled data. 
LoRA assumes that the pre-trained model’s parameters can be expressed as a low-rank matrix, and that only a small number of parameters must be modified to adapt the model to the new task. The main idea behind LoRA is to utilize a small amount of labeled data from a new task to learn a low-rank projection matrix. This matrix maps the parameters of a pre-trained model to the new task. ## 4 METHODOLOGY ### 4.1 Overview of DT-Mem In Figure 2, we depict the architecture of DT-Mem, which consists of three components: the Transformer module, the Memory module, and the Multi-layer perceptron (MLP) module. The primary role of the Transformer module is to capture dependencies and relationships between states, actions, and returns in a sequence. The input of the Transformer module is a fixed-length sequence of trajectories, denoted as \( \tau_{t+1:t+K} \). The output is a sequence of embeddings, where each entry can be attended state embeddings, action embeddings, or return-to-go embeddings. The Transformer module follows the architecture of GPT-2 (Radford et al., 2019), but without the feed-forward layer after attention blocks. We separate the GPT-2 architecture into two pieces: the Transformer module and the MLP module, following the setup for natural language processing tasks: one GPT-2 model can be applied to a wide variety of tasks with different MLP modules (Radford et al., 2019). Finally, we introduce a memory module for storing and manipulating intermediate information. This is inspired by the Neural Turing Machine (Graves et al., 2014), where the memory is utilized to infer multiple algorithms. 4.2 Memory Module The design for the memory module is inspired by the way humans think before they act. Its functioning consists of three parts: identifying salient information output from the transformer module, determining where to store new information and how to integrate it with existing memories, and considering how to use these memories for future decision-making. We have broken down these questions and designed the following steps to address them. **Step 0: Memory Module Initialization.** The is initialized as a random matrix $M$, where each row $m_i \in \mathbb{R}^d$, with $i \in [0, N]$, represents a memory slot. **Step 1: Input Sequence Organizing.** Initially, we restructure the input sequence to adopt a different format. As illustrated in the problem formulation, the input sequence comprises multiple steps of the tuple $\langle \hat{r}_t, s_t, a_t \rangle$. Instead of directly feeding this sequence into the transformer module, we treat each tuple as an entity and embed them within the same space. Specifically, we define embedding functions $g_s(s) = e_s$, $g_a(a) = e_a$, and $g_r(\hat{r}) = e_r$, where $e_s, e_a,$ and $e_r \in \mathbb{R}^d$ with $d$ representing the dimension in the latent space. The final input sequence emerges from the concatenation of embeddings $E = [\cdots; e_s, e_a, e_r; \cdots]$. Given our memory structure as a matrix with fixed dimensions (i.e., number of slots * dimensions), it’s crucial to synchronize the input dimensions for efficient storage. It’s noteworthy that in this design, we maintain the relationships among them as posited in the DT paper, although this is not a requisite. For instance, in the trajectory transformer (Janner et al., 2021), states, rewards, and others are grouped individually. As demonstrated in Appendix B.6, these varied designs exhibit no significant difference. 
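As an illustration of Steps 0 and 1, a minimal PyTorch sketch is shown below; the slot count, latent dimension, and the choice of linear embeddings are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MemoryModule(nn.Module):
    def __init__(self, num_slots: int = 64, d: int = 128, state_dim: int = 39, action_dim: int = 4):
        super().__init__()
        # Step 0: the memory is initialized as a random matrix M with one d-dim row per slot.
        self.register_buffer("M", torch.randn(num_slots, d))
        # Step 1: embed states, actions, and returns-to-go into the same d-dim latent space.
        self.g_s = nn.Linear(state_dim, d)
        self.g_a = nn.Linear(action_dim, d)
        self.g_r = nn.Linear(1, d)

    def organize(self, states, actions, rtg):
        # states: (K, state_dim), actions: (K, action_dim), rtg: (K, 1)
        e = torch.stack([self.g_s(states), self.g_a(actions), self.g_r(rtg)], dim=1)  # (K, 3, d)
        return e.reshape(-1, e.shape[-1])  # E: (3K, d), one row per embedded entity
```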
**Step 2: Content-based Address.** We use an attention-based method to locate the correct memory slot for new input by identifying correlated information. This approach is based on the idea that humans tend to store and group similar information together. To locate the memory position, we utilize an attention mechanism. The position address $w$ is calculated as: $w = \text{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)$. Here, $Q = MW^q$ and $K = EW^k$, where $W^q$ and $W^k$ are parameters for the Multi-layer perceptron (MLP). The objective is to map the memory and input information into the query and key matrix, and then use the dot product to determine the similarities between these two matrices. The softmax function guarantees that the sum of all addresses equals one. **Step 3: Memory update.** To store incoming information and blend it with existing memory, we calculate two vectors: an erasing vector, $e^e$, and an adding vector, $e^a$. The erasing vector erases the current memory, while the adding vector controls information flow to the memory. To achieve this goal, we again utilize the attention mechanism. First, we map memory and input information to query, key, and value vectors, denoted as $\hat{Q} = M\hat{W}^q$, $\hat{K} = E\hat{W}^k$, and $\hat{V} = E\hat{W}^v$, respectively, where $\hat{W}^q$, $\hat{W}^k$, and $\hat{W}^v$ are parameters. Next, we calculate the writing strength, $\beta = \text{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)$. The erasing vector is used to selectively erase information from the memory matrix and is computed as a function of the content-based addressing vector and the write strength. The erasing vector is calculated as $e^e = w \odot (1 - \beta)$, where $\odot$ indicates element-wise multiplication. The complement of the write strength is 1 minus the write strength, so this will result in a vector where the elements corresponding to the selected memory locations are set to 0, and the elements corresponding to the unselected memory locations are unchanged. The adding vector is used to selectively add information to the memory matrix and is computed as a function of the write strength and the input vector. Specifically, the adding vector is calculated as $e^a = (w \odot \beta)\hat{W}^v x$. Finally, the memory is updated as $M_t = M_{t-1} \odot (1 - e^e) + e^a$. If the selected memory slot is empty or erased, the new information will be stored. Otherwise, the new information will be blended with the existing memory contents. **Step 4: Memory retrieve** To utilize memory for decision-making, we retrieve information from the updated memory slot. Reading from the memory matrix is done by computing a read position vector. This vector can be computed using the above content-based addressing mechanism that compares the query vector with the contents of the memory matrix. Note that in other retrieval-based methods (Humphreys et al., 2022; Borgeaud et al., 2022), the nearest neighbor is the common way to retrieve related information. However, in our case, the internal memory is smaller than the typical external... memory, which makes attention-based retrieval feasible. Since the query information is the same as the input information, we use the same content address to retrieve the memory: \( E_{\text{out}} = w \odot M_t \). 
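To make Steps 2–4 concrete, here is a minimal PyTorch sketch of the memory module. The paper leaves some broadcasting details implicit, so this sketch aggregates the erase weights over input positions and reads the memory as a weighted sum over slots; these choices, the single attention head, and all parameter names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WorkingMemory(nn.Module):
    """Sketch of the content-addressable memory module (Steps 0 and 2-4)."""

    def __init__(self, num_slots: int = 64, d: int = 128):
        super().__init__()
        self.d = d
        self.init_memory = nn.Parameter(torch.randn(num_slots, d))  # Step 0: random init
        self.W_q = nn.Linear(d, d, bias=False)    # addressing: Q = M W^q
        self.W_k = nn.Linear(d, d, bias=False)    # addressing: K = E W^k
        self.Wh_q = nn.Linear(d, d, bias=False)   # update: Q^ = M W^q_hat
        self.Wh_k = nn.Linear(d, d, bias=False)   # update: K^ = E W^k_hat
        self.Wh_v = nn.Linear(d, d, bias=False)   # update: V^ = E W^v_hat

    def _attend(self, q, k):
        return torch.softmax(q @ k.t() / self.d ** 0.5, dim=-1)

    def forward(self, E, M=None):
        # E: (L, d) Transformer output embeddings; M: (num_slots, d) current memory.
        M = self.init_memory if M is None else M
        w = self._attend(self.W_q(M), self.W_k(E))             # Step 2: address, (slots, L)
        beta = self._attend(self.Wh_q(M), self.Wh_k(E))        # Step 3: writing strength
        add = (w * beta) @ self.Wh_v(E)                        # adding vector e^a, (slots, d)
        erase = (w * (1.0 - beta)).mean(dim=1, keepdim=True)   # per-slot erase gate (assumption)
        M_new = M * (1.0 - erase) + add                        # M_t = M_{t-1} * (1 - e^e) + e^a
        E_out = w.t() @ M_new                                  # Step 4: read with the same address
        return E_out, M_new
```

A usage pass would simply call `E_out, M = memory(E, M)` once per input sequence, carrying the updated memory matrix forward.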
### 4.3 Pre-training DT-Mem

We use a set of training tasks \( T^{\text{train}} \), where each task \( T_i \in T^{\text{train}} \) has an associated offline dataset \( D_i \) consisting of hundreds of trajectories \( \tau \) generated by a behavior policy. The behavior policy can be either a pre-trained policy (such as DQN) or a rule-based policy, depending on what is available. Each trajectory is \( \tau = (s_0, a_0, r_0, \cdots, s_H, a_H, r_H) \), where \( s_i \in S \), \( a_i \in A \), \( r_i \in R \), and \( H \) is the episode length. To serve as input to DT-Mem, we first segment the trajectory \( \tau \) into several pieces of length \( K \), and denote \( \tau_{t+1:t+K} = (s_{t+1}, a_{t+1}, r_{t+1}, \cdots, s_{t+K}, a_{t+K}, r_{t+K}) \) as one input sequence. We do not input these trajectories directly but modify them first. Specifically, we follow the return-to-go idea of the Decision Transformer (Chen et al., 2021) and calculate the return-to-go \( \hat{r}_t = \sum_{t'=t+1}^{t+K} r_{t'} \) for every timestep. This is effective because \( \hat{r}_t \) acts as a subgoal: it encourages the Transformer module to generate actions that drive this remaining return toward zero. We then input the modified trajectories \( \tilde{\tau}_{t+1:t+K} = (\hat{r}_{t+1}, s_{t+1}, a_{t+1}, \cdots, \hat{r}_{t+K}, s_{t+K}, a_{t+K}) \) to the Transformer module. The output of the Transformer module is a sequence embedding \( e_{\text{seq}} \in \mathbb{R}^{d \times 3K} \), where \( d \) is the dimension of the embedding space. Next, we pass \( e_{\text{seq}} \) to the Memory module to update and retrieve the memory information. Finally, we use the retrieved memory \( E_{\text{out}} \) and the MLP module to generate the corresponding actions \( \hat{a}_t \).

We minimize a supervised training loss with three terms, comparing the predicted action \( \hat{a}_{t'} \), predicted reward \( \tilde{r}_{t'} \), and predicted return-to-go \( \tilde{R}_{t'} \) against their targets:

\[
L = \sum_{t'=t+1}^{t+K} \left( \|\hat{a}_{t'} - a_{t'}\|^2 + \alpha \|\tilde{r}_{t'} - r_{t'}\|^2 + \lambda \|\tilde{R}_{t'} - \hat{r}_{t'}\|^2 \right),
\]

where \( \alpha \) and \( \lambda \) are scalar hyper-parameters. In experiments, we find that the final performance is not sensitive to these two hyper-parameters, so we set both to 1 for simplicity. The full pre-training process is summarized in Appendix A.3, Algorithm 1.

### 4.4 Fine-tuning DT-Mem with LoRA

Fine-tuning LLMs involves heavy computation due to the large number of parameter updates required. We argue that fine-tuning only the memory module can achieve results comparable to fine-tuning the entire parameter space. LLMs benefit from being trained on large-scale datasets, which expose models such as BERT (Devlin et al., 2019) and GPT (Radford et al., 2019) to a diverse range of linguistic patterns and semantic relationships. This exposure helps the model learn robust and generalized representations that capture different aspects of language understanding and generation. After pre-training, the model can be fine-tuned on specific downstream tasks with task-specific labeled data. In our case, this task-specific knowledge is stored in the memory module, so fine-tuning the memory module lets the model adapt to the new task. We apply the low-rank adaptation approach (LoRA; Hu et al., 2022) to fine-tune the memory module. Specifically, we modify the forward pass by adding low-rank matrices to \( W^q, W^k, W^v, \hat{W}^q, \) and \( \hat{W}^k \).
Let’s take \( W^q \) as an example. Assuming the original output for query information \( Q = MW^q \), we adapt this query value to a new task as \( Q' = M(W^q + B^qA^q) \), where \( W^q \in \mathbb{R}^{n \times d}, B \in \mathbb{R}^{n \times m}, \) and \( A \in \mathbb{R}^{m \times d}, \) and \( m \) is the size of the memory module. Since the rank \( m \ll \min(n, d) \), fine-tuning the parameters \( B^q \) and \( A^q \) reduces the number of trainable parameters for downstream tasks. We perform supervised training by computing the loss between the model’s output and the labels in the fine-tuning dataset. During this process, only \( B^q \) and \( A^q \) are updated. The detailed fine-tuning procedure can be seen in Appendix A.3 Algorithm 2. 5 EVALUATION We design our experiments to answer the following questions: **Q1**: Does DT-Mem improve model generalization? **Q2**: Does DT-Mem improve pre-training results and training efficiency? **Q3**: Does DT-Mem scales with model size? **Q4**: Does fine-tuning only the memory module improve model adaptability? Recall that we use generalization to refer to performance on tasks the model has never trained on (zero-shot), and adaptability to refer to performance after fine-tuning. 5.1 ENVIRONMENTS AND MODELS SETUP **Atari Games** To ensure a fair comparison with the Multi-Game Decision Transformer, we used the same Atari dataset, which comprises multiple training runs of DQN trajectories. Due to limited compute resources and to prevent cherry-picking, we select 17 games from the available 41 based on their alphabetical order, as introduced in Lee et al. (2022). For each game, the data contains 50 policy checkpoints, each containing 500k environment steps. For the fine-tuning dataset, we randomly selected 10% of the data from the unseen dataset, which yielded 50k environment steps. Following the settings from Lee et al. (2022), we choose five games (Alien, Ms. Pac-Man, Pong, Space Invaders, and Star Gunner) to be used only for fine-tuning. Moreover, Brandfonbrener et al. (2022) suggests that return-conditioned supervised learning (RCSL) algorithms require strong dataset coverage to select a near-optimal policy. Therefore, our dataset contains both expert and non-expert behaviors. **Meta-World** To make a fair comparison with Hyper-DT and Prompt-DT, we evaluate the proposed method on the Meta-World environment (Yu et al., 2019). We evaluate using the Meta-World ML45 benchmark, which includes 45 training tasks and 5 testing tasks. Following the approach taken in Xu et al. (2023), for each training task, we generate an offline dataset containing 1000 episodes for each game, using a rule-based script policy. For fine-tuning data, we randomly pick 10k episodes from the testing dataset, as compared to 20k-80k episodes used in Hyper-DT. **DT-Mem settings** We report results for DT-Mem 20M (20 million parameters), which consists of 13M transformer parameters and 7M memory module parameters. We specify the architecture completely in Appendix A.1. **Training and Fine-tuning** For all games, we use eight V100 GPUs for model training and one V100 GPU for fine-tuning. We train on both Atari games and Meta-World for 10M steps. For fine-tuning on unseen scenarios, we train for 100k steps. 5.2 BASELINE METHODS We compare DT-Mem’s performance against the following baselines. **MDT** Multi-game Decision Transformer (Lee et al., 2022), which trains a large transformer-based model on multi-game domains. 
For a fair comparison, we train an MDT with 20M parameters, which is approximately the same size as DT-Mem.

**RMDT** Recurrent Memory Decision Transformer (Bessonov et al., 2023), which utilizes a recurrent memory mechanism for solving reinforcement learning problems. This is the memory-based DT most closely related to our work.

**HDT** Hyper-Decision Transformer (Xu et al., 2023), which utilizes a hyper-network module to help DT adapt rapidly to unseen tasks. Since we did not have access to the implementation at the time of writing, for the sake of correctness we compare our model with HDT on Meta-World only. The results reported in our evaluation section come from the HDT paper.

**PDT** The Prompt Decision Transformer (Xu et al., 2022), which generates actions by considering both recent context and pre-collected demonstrations from the target task.

5.3 DT-MEM IMPROVES MODEL GENERALIZATION.

We evaluate the five held-out games, with results listed in Table 1. Each value is an average over 16 runs with different random seeds. The results show that the memory-augmented methods, RMDT and DT-Mem, improve model generalization compared to MDT, which has no explicit memory module. A noteworthy observation is that DT-Mem demonstrates better generalization than RMDT on four of the five games. Neither method achieves a good result on Pong. We further discuss whether fine-tuning helps to improve this performance in Section 5.5.

| | Alien | MsPacman | Pong | SpaceInvaders | StarGunner |
|------------------|---------|----------|---------|---------------|------------|
| MDT | 3.8% (±0.4%) | 13.2% (±1.3%) | 0% (±0%) | 8.6% (±1.6%) | 2.3% (±0.1%) |
| RMDT | 22.3% (±10.7%) | 22.9% (±8.9%) | 0% (±0%) | 17.6% (±9.2%) | 27.7% (±11.5%) |
| DT-Mem | 51.0% (±32.2%) | 69.3% (±19.3%) | 0% (±0%) | 53.6% (±29.0%) | 62.2% (±19.1%) |

Table 1: Evaluation results on 5 held-out games after pre-training on other Atari games. Each value represents the DQN-normalized score, computed with a 95% confidence interval.

5.4 DT-MEM ENABLES MORE COMPUTATIONALLY EFFICIENT TRAINING AND SCALES WITH MODEL PARAMETERS.

To demonstrate training efficiency, we report the model training time in Figure 4 and the training curve in Appendix B.2, Figure 7. During training, we find that DT-Mem reduces the training time by approximately 4 times, 8 times, and 32 times compared to MDT-13M, MDT-40M, and MDT-200M, respectively. For the training curve, it is reasonable to report the prediction loss on the training dataset since we use a supervised loss. Here, the prediction accuracy consists of three parts: action prediction accuracy, reward prediction accuracy, and return prediction accuracy.

Figure 3: Scaling of IQM scores

Figure 3 shows the scaling behavior of the proposed DT-Mem model. We measure performance using the human-normalized IQM score. Note that for all instances of DT-Mem we keep the number of memory slots fixed. The results show that the performance of DT-Mem scales with the number of parameters. Notably, the generalization of DT-Mem with 20M parameters is approximately on par with the 200M-parameter version of MDT. Furthermore, the 50M DT-Mem surpasses MDT by a margin of 16.7%.

5.5 FINE-TUNING ONLY THE MEMORY MODULE IMPROVES MODEL ADAPTABILITY.
Figure 4: Model training time

Another question we care about is how the pre-trained DT-Mem performs on unseen tasks. We randomly selected nine unseen Atari games and evaluated performance in terms of relative improvement. DT-Mem consistently outperforms RMDT and MDT in most of the games listed, with the exception of Seaquest, where MDT excels. MDT shows the weakest performance across most games, lagging particularly in KungFuMaster, Robotank, and Phoenix. RMDT falls between DT-Mem and MDT on most games. The consistently superior performance of DT-Mem suggests that it adapts more readily, while the isolated advantage of MDT on Seaquest invites further investigation into the attributes of this game that may favor that method.

To further understand the adaptability of the proposed method, we compare DT-Mem with HDT and PDT in the Meta-World environment. The quantitative fine-tuning results are shown in Table 2. Overall, DT-Mem achieves the best performance in this comparison. Compared to HDT, DT-Mem increases the training, testing (no-FT), and testing (FT) scores by an average of 3%, 8%, and 3%, respectively. Moreover, the HDT adaptation module (the hyper-network module), while small (69K parameters) relative to the full model (13M), relies on a pre-trained hyper-network that contains 2.3M parameters. We argue that the hyper-network is more burdensome than our design: it uses more than 10x the adaptation parameters of DT-Mem (2.3M vs. 147K) and requires an extra compute phase to pre-train the hyper-network module.

| | Adaptation size | Percentage of model | Train | Test (no-FT) | Test (FT) |
|-------------|------------|------------|--------------|--------------|-------------|
| HDT | 69K | 0.5% | 0.89 ± 0.00 | 0.12 ± 0.01 | 0.92 ± 0.10 |
| PDT | 6K | 0.05% | 0.88 ± 0.00 | 0.06 ± 0.05 | 0.09 ± 0.01 |
| DT-Mem | 147K | 0.7% | 0.92 ± 0.00 | 0.20 ± 0.01 | 0.95 ± 0.10 |

Table 2: Evaluation results on the Meta-World ML45 benchmark. "Adaptation size" and "Percentage of model" report the size of the adaptation parameters; the remaining columns report Meta-World ML45 performance.

5.6 DT-MEM IMPROVES TRAINING PERFORMANCE.

In this section, we evaluate whether adding the memory module helps improve the pre-training performance. We measure performance by relative improvement, rel-impt(%) = (model score − best score in data) / (best score in data) × 100, and take its logarithm for better visualization. As shown in Figure 6, the proposed DT-Mem outperforms MDT in 13 out of 17 games and outperforms RMDT in 15 out of 17 games. These results demonstrate that the memory module improves policy training performance.

6 CONCLUSION

LLM-based RL algorithms have shown generalization across multiple tasks and games. We argue that this ability comes from an implicit memory that fits a large number of parameters to the training data, which is inefficient in terms of model size. In contrast, we propose a new approach inspired by the concept of "working memory", called Decision Transformers with Memory (DT-Mem), which stores training experience explicitly in a content-addressable matrix module for later retrieval and use. The evaluation demonstrates that DT-Mem achieves better generalization on Atari games with only 10% of the model parameters compared to the state-of-the-art method. We also show that DT-Mem outperforms other memory-based DT methods in terms of generalization and adaptability.
Furthermore, we demonstrate that fine-tuning DT-Mem with a small amount of data can produce state-of-the-art results on both Atari games and the Meta-World environment, when compared to MDT, RMDT, PDT, and HDT. REFERENCES Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. Using fast weights to attend to the recent past. *Advances in neural information processing systems*, 29, 2016. Alan Baddeley. Working memory: looking back and looking forward. *Nature reviews neuroscience*, 4(10):829–839, 2003. Arkadii Bessonov, Alexey Staroverov, Huzhenyu Zhang, Alexey K Kovalev, Dmitry Yudin, and Aleksandr I Panov. Recurrent memory decision transformer. *arXiv preprint arXiv:2306.09459*, 2023. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. Improving language models by retrieving from trillions of tokens. In *ICML*, volume 162 of *Proceedings of Machine Learning Research*, pages 2206–2240. PMLR, 2022. David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, and Joan Bruna. When does return-conditioned supervised learning work for offline reinforcement learning? In *NeurIPS*, 2022. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. *CoRR*, abs/2005.14165, 2020. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In *NeurIPS*, pages 15084–15097, 2021. Aidan Clark, Diego de Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake A. Hechtman, Trevor Cai, Sebastian Borgeaud, George van den Driessche, Eliza Rutherford, Tom Hennigan, Matthew J. Johnson, Albin Cassirer, Chris Jones, Elena Buchatskaya, David Budden, Laurent Sifre, Simon Osindero, Oriol Vinyals, Marc’Aurelio Ranzato, Jack W. Rae, Erich Elsen, Koray Kavukcuoglu, and Karen Simonyan. Unified scaling laws for routed language models. In *ICML*, volume 162 of *Proceedings of Machine Learning Research*, pages 4057–4086. PMLR, 2022. Nelson Cowan. What are the differences between long-term, short-term, and working memory? *Progress in brain research*, 169:323–338, 2008. Róbert Csordás and Juergen Schmidhuber. Improving differentiable neural computers through memory masking, de-allocation, and link distribution sharpness control. *arXiv preprint arXiv:1904.10278*, 2019. Sreerupa Das, C Lee Giles, and Guo-Zheng Sun. Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In *Proceedings of The Fourteenth Annual Conference of Cognitive Science Society*. Indiana University, volume 14, 1992. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT (1)*, pages 4171–4186. Association for Computational Linguistics, 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*. OpenReview.net, 2021.
rKMQhP6iAv
On the probing experiment. Technically speaking, if your data split is 50/50 yet the F1 is only 65%, isn’t it unconvincing that the persona can be decoded before the answers are generated? Could you provide other metrics, such as accuracy, which is more widely adopted in the probing literature?
PERSONAS AS A WAY TO MODEL TRUTHFULNESS IN LANGUAGE MODELS Anonymous authors Paper under double-blind review ABSTRACT Large Language Models (LLMs) are trained on vast amounts of text from the internet, which contains both factual and misleading information about the world. Can language models discern truth from falsehood in this contradicting data? Expanding on the view that LLMs can model different communicative agents, we present the persona hypothesis: LLMs can cluster agents into personas using common features of their generations. For instance, a truthful persona is a group of agents that are likely to produce truthful text and that share similar features like formal writing styles and scientific references. By modeling this persona, LLMs can generalize truthfulness beyond the specific contexts in which each agent generated the training text. For example, the model can infer that the agent “Wikipedia” will behave truthfully on topics that were only generated by “Science” because they both belong to the truthful persona. We show evidence for the persona hypothesis via two observations: (1) we can probe whether a model’s answer will be truthful before it is generated; (2) finetuning a model on a set of facts improves its truthfulness on unseen topics. Next, using arithmetics as a synthetic environment, we show that language models can separate true and false statements, and generalize truthfulness across agents; but only if agents in the training data share a truthful generative process that enables the creation of a truthful persona. Overall, our findings suggest that models can exploit hierarchical structures in the data to learn abstract concepts like truthfulness. 1 INTRODUCTION Large Language Models (LLMs) are pretrained on increasing amounts of data from the internet (Brown et al., 2020; Chowdhery et al., 2022)—a noisy, and mostly uncurated corpus—which contains both truthful statements about the world and untruthful statements such as misconceptions and conspiracy theories. The false claims in the data pose a risk of misinformation as they can be propagated by the model (Lin et al., 2021). Intriguingly, recent work shows that the truth value of a statement can be elicited from its embeddings (Burns et al., 2022; Li et al., 2023). This motivates the main question of this work: how do LLMs distinguish truth from falsehood? Consider two contradicting statements: "COVID vaccines are extremely deadly" (false) and "most studies suggest COVID vaccines are safe" (true). When asked about the safety of COVID vaccines, the classic view of language models suggests that models should generate the most frequent statement in the training data, regardless of whether this is true. However, we observe that slight changes in the question can steer the model to produce any of the two (Figure 1). This suggests that frequency alone is not sufficient to explain model behavior. Andreas (2022) hypothesizes that LLMs can infer the agent who produced the text and generate continuations according to the agent’s goals and beliefs. In this example, given the question "Why is the COVID vaccine so deadly?" with a false presupposition (Kim et al., 2022), the model may infer that the agent who asks the question already believes that the vaccine is deadly, and thus generate an answer following this (false) belief. If the question is instead framed as "Are COVID vaccines safe for humans?", the model generates the true answer. 
We build upon the above agent modeling view of language models (Andreas, 2022) and argue that LLMs could additionally benefit from modeling personas—groups of agents. Agent generates text in our training data based on their beliefs that a set of propositions \( A \) is true. This set can be different for each agent. We stick to the definition by Andreas (2022). Figure 1: Our main hypothesis is that LLMs can discern truth from falsehood due to the presence of truthful personas—cluster of agents who are likely to be truthful. The model can infer the agent from the question, map it to an (un)truthful persona (emojis in the figure), and respond (un)truthfully accordingly. **Persona:** a latent variable that emerges during LLM training that clusters sets of agents according to their commonalities. Intuitively, a persona helps the model infer that an agent is likely to believe proposition \( p \notin A \) is true if similar agents with the same persona believed so. We introduce the **persona hypothesis**, in the context of truthfulness, as a bridge to explain how the hypothesis from [Andreas](2022) can explain the empirical results from [Burns et al.](2022); [Li et al.](2023). **Persona hypothesis:** Language models can cluster agents into **personas** using common features of their generations. There exists a group of agents who are more truthful than others, and they can be clustered into a truthful persona; e.g., Wikipedia and Science can be grouped by their formal tones and extensive use of citations. By modeling this truthful persona, language models can distinguish true from false statements, and generate truthful text from the persona. We first provide evidence for the persona hypothesis by showing that it can explain two surprising observations on the TruthfulQA benchmark ([Lin et al.](2021)). First, using linear probing, we can predict whether the generated answer will be truthful or not from embeddings of the question alone. This observation is consistent with the hypothesis that the model infers the agent and its persona from the context (question) even before generation begins. Second, finetuning an LLM on a set of true question-answer pairs significantly improves truthfulness on unrelated topics. This is surprising because knowledge from the finetuning examples (e.g., blood type has no influence on personality) does not generalize to test examples (e.g., the temperature of a single day cannot accurately reflect the climate). However, with a truthful persona, the model can tie these facts together and generalize the truthful behavior to unseen topics. Next, we establish a direct connection between personas and model truthfulness through a synthetic environment of arithmetics, where different agents have either true or false beliefs about the semantics of each operator. We train language models on equations generated by these agents. By controlling the data generating process, we show that models can separate true and false equations, and generalize truthful behavior of an agent to unseen operators, but this is only possible when there exists a truthful persona, i.e. a set of truthful agents that can be clustered by common features. ## 2 Evidence of LLMs Modeling Personas ### 2.1 Personas Can Be Inferred from Context As a first step to test our persona hypothesis, we verify if the model can infer a truthful persona from the context by probing its internal activations. Hypothesis: LLMs can infer truthful or untruthful personas from context, and generate text according to the persona. 
Evidence: Truthfulness of the answer to a question can be predicted from model activations before the answer is generated. Experimental setup. We use the TruthfulQA dataset and the instruction-tuned Alpaca model (Taori et al., 2023). We randomly split the dataset into 50% for training and 50% for testing. We prompt Alpaca with each question (see Appendix A for the detailed prompt) and obtain: (1) the embedding of the last token of the question prompt at each layer and (2) the answer to the question using greedy decoding. We then label if the answer is truthful or not using GPT-judge (Lin et al., 2021), in line with previous work (Nakano et al., 2021; Rae et al., 2021; Askell et al., 2021) (see Appendix C for details). We finally train a linear classifier to predict truthfulness of an answer given the question embedding. To account for the imbalance in labels (there are more untruthful generations than truthful ones), we report the weighted F1-score. Results. We run the experiment (data splitting, training, evaluation) over 20 random seeds. Figure 2 shows the average and standard deviation of the F1-score of the probe using embedding from each layer. The probing result is significantly above random guessing from very early layers in the model and peaks at layer 17 at approximately 65% F1, suggesting that the model encodes a latent variable correlated with truthfulness of the answer. Next, we visualize the persona inference process by plotting the probe performance as we incorporate more context from the prompt. Specifically, we train linear probes on (1) a random token in the instruction part of the prompt before the question is given, (2) the first token of the question—often a “Wh-” clause, and (3) the seventh token of the question (on average, the middle token). Figure 2b shows the results using the representation from layer 17 where we observed a peak. Probing the prompt instruction performs as well as random guessing. As we incorporate more context from the question, performance increases, peaking when the entire question is observed by the model. In addition, we look at how the probe performs across categories. We find that performance depends on the question category. For instance, F1 for history questions peaks at 80% in late layers; while the maximum F1 for questions about stereotypes is only 55% in very early layers. This suggests that for certain topics the truthful statements can be harder to separate from the false ones. Appendix B contains detailed results for the 5 largest topics in the dataset. Nevertheless, for most topics we observe that the probe performs better than random guessing ruling out the possibility that the probe is solely relying on the topic. 2.2 Truthfulness can be generalized across topics Now that we have seen models are able to infer a truthful persona from context, we next test whether the model can use this persona to generalize truthfulness from one topic to another. 
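Before turning to the finetuning experiments, the probing protocol of Section 2.1 can be summarized in a short sketch. It assumes the last-token question embeddings from a chosen Alpaca layer and the binary GPT-judge labels have already been computed; the choice of logistic regression as the linear probe is likewise an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def probe_layer(question_embeddings, truthful_labels, seed=0):
    """Train a linear probe on question embeddings (taken before any answer
    is generated) to predict whether the eventual answer will be truthful.

    question_embeddings: (n_questions, hidden_dim) last-token embeddings of the
        question prompt at one layer (assumed precomputed).
    truthful_labels: (n_questions,) binary labels from GPT-judge.
    """
    X = np.asarray(question_embeddings)
    y = np.asarray(truthful_labels)

    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    split = len(idx) // 2                      # 50/50 train/test split
    train, test = idx[:split], idx[split:]

    probe = LogisticRegression(max_iter=1000)  # a linear probe
    probe.fit(X[train], y[train])
    preds = probe.predict(X[test])
    # Weighted F1 accounts for the imbalance between truthful and
    # untruthful generations.
    return f1_score(y[test], preds, average="weighted")
```

Repeating this over random seeds and layers, as in the experiments above, yields the layer-wise F1 curves reported in Figure 2.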
We finetune | | TruthfulQA | BigBench-misconceptions | |----------------------|------------|-------------------------| | | GPT-judge | Human evaluation | Human evaluation | | No Finetuning | 39.0 ± 7.4 | 31.7 ± 7.1 | 54.2 ± 10.7 | | Truthful finetuning | 74.4 ± 6.6 | 58.0 ± 7.5 | 59.4 ± 10.5 | | Untruthful finetuning| 9.8 ± 4.5 | 6.7 ± 3.8 | 30.7 ± 9.9 | | TriviaQA | 24.4 ± 6.5 | 15.2 ± 5.4 | 45.3 ± 10.7 | | MS MARCO | 37.8 ± 7.4 | 21.3 ± 6.2 | 49.2 ± 10.7 | Table 1: Percentage of truthful model responses evaluated by the GPT-judge evaluator and human judges on 164 test questions with 95% confidence intervals. Finetuning on (un)truthful QA pairs makes the model more (un)truthful on unrelated questions. LLMs on pairs of questions and truthful answers. Since all questions in TruthfulQA are factually unrelated (i.e., there is no information that can be transferred from training to test questions), changes in truthfulness can be attributed to a latent persona that guides model behavior. **Hypothesis:** Finetuning on true answers associates the inferred (untruthful) agent with the truthful persona, which helps the model generalize to unseen topics. **Evidence:** Finetuning LLMs to generate true answers for misleading questions improves truthfulness on unseen topics. **Experimental setup.** We finetune the Alpaca model on question-answer pairs from TruthfulQA using LoRA (Hu et al., 2021). We split TruthfulQA into 80% for finetuning and 20% for evaluation. In *Truthful finetuning* (TF), the model is trained to output each truthful answer provided in the dataset given a question. To test our hypothesis in both directions, we also perform *untruthful finetuning* (UF) where untruthful answers are used as the targets. To ensure that the model is not relying on features specific to TruthfulQA, we further test the model on the misconceptions dataset from BigBench (Srivastava et al., 2022). We transform this dataset to fit our prompt format, resulting in 83 questions (details in Appendix C). To evaluate truthfulness of the generated answers, we again use GPT-Judge and the authors provided additional human evaluation. **Model generalizes to unseen topics and domains.** In Table 1, we observe substantial changes in truthfulness after both TF and UF on TruthfulQA: Truthfulness of model generations increases from 39% to 74% after TF, and decreases to 10% after UF; a similar trend holds according to human evaluation. Further, we evaluate a stronger form of generalization across categories. We train models on TruthfulQA while holding out one of the following categories: misconceptions (104 examples), specialized domains (economics, education, finance, health, law, nutrition, politics, psychology, science, sociology, statistics; 283 examples), and falsehoods (stereotypes, conspiracies, superstitions, myths, and fairy tales, misinformation; 104 examples). In Figure 3a, we see that improvement in truthfulness on held-out categories is comparable to the TF baseline trained on all categories. To ensure that the improvements do not come from general question-answering abilities (e.g., better adaptation to the QA format), we finetune the model on random splits from TriviaQA (Joshi et al., 2017) and MS Marco (Nguyen et al., 2016) of the same size as our finetuning set. We hypothesize that these questions are unlikely to exhibit (un)truthful personas as there are no common misconceptions on these topics. 
Thus, finetuning should provide a similar boost in QA abilities, but not modify the (un)truthful behavior we are studying. The results in Table 7 show that models finetuned on these datasets have similar truthfulness as the initial model. **Model generalizes from small sample size.** If finetuning mainly helps the model identify an already existing truthful persona, it should not require many examples to reach good performance. Thus, we finetune the model with varying sample sizes and investigate whether in-context learning (ICL) similarly guides the model to be more (un)truthful. We run TF with smaller splits (5%, 20%, and 50%) and in-context learning with 10 (1.5%) and 20 (3%) examples. Results in Figure 3b show --- 1 TruthfulQA may contain superficial patterns that can be exploited to increase truthfulness. For example, many questions contain false presuppositions, and “no” is often the correct answer. Figure 3: Generalization of Alpaca to unseen TruthfulQA questions. (Left) Results of models finetuned with heldout categories (TF - category), all categories (TF), and the original model (No finetuning). (Right) Results of small sample learning using ICL (10 and 25 examples) and finetuning. that, aside from ICL with 10 examples, all methods achieve a substantial increase in truthfulness. Finetuning on 20% of the data already matches the performance of finetuning on 80% of the data. Overall, our results support the hypothesis that LLMs model truthful personas in the data. We show this by predicting whether the generation will be truthful from only the question embeddings, and with generalization experiments where finetuning improves truthfulness on unseen topics and domains. 3 Arithmetic Laboratory: Connecting Personas to Truthfulness In the previous section, we have shown evidence of LLMs modeling (un)truthful personas. In this section, we establish a direct connection between personas and model truthfulness by controlling the data generating process in a synthetic environment inspired by Power et al. (2022). Dataset generation. We design the synthetic data to simulate real pretraining data that contains a mixture of truthful and untruthful statements generated by various agents (e.g. Wikipedia and Twitter). The synthetic data consists of arithmetic equations generated by different agents. Each agent \(a \in S\) has “belief” about the meaning of each arithmetic operator \(op \in O\), which takes in two integer operands \(x, y \in \mathbb{N}^+\) and returns \(z\). The agent may have a correct belief about \(op\), denoted by \(op^T\), or a false belief denoted by \(op^F\). For example, an agent may believe that \(op\) means addition (e.g., \(op(3, 2) = 5\)), which is the assigned true semantics of \(op\), whereas another agent has the false belief that \(op\) means subtraction (e.g., \(op(3, 2) = 1\)). Each data point follows the format: \(a | x op y = z\) where \(z\) is either \(op^T(x, y)\) or \(op^F(x, y)\) depending on the agent, and \(|\) is a separator token. Specifically, we use the following generative process: \[ a \sim \mathbb{U}(S) \quad ; \quad op \sim \mathbb{U}(O) \quad ; \quad x, y \sim \mathbb{U}(\{1, 2, .., n\}) \quad ; \quad z = \begin{cases} op^T(x, y) & \text{w.p. } p_{(a, op)} \\ op^F(x, y) & \text{otherwise} \end{cases} \] where \(p_{(a, op)} \in (0, 1]\) is the probability the agent \(a\) has correct belief about \(op\) and \(\mathbb{U}\) denotes the uniform distribution. We say that an agent \(a\) is truthful on \(op\) if \(p_{(a, op)}\) is high. 
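A small sketch of the generative process in Equation 1 may help make the setup concrete. The operator semantics, the belief table \(p_{(a, op)}\) (0.9 for truthful agent–operator pairs and 0.1 otherwise, mimicking the truthful-persona configuration of Section 3.1), and the token format are illustrative assumptions; the paper's exact choices are given in its appendix.

```python
import random

# Illustrative operator semantics: op^T is the assigned true meaning,
# op^F a false belief (the paper's exact definitions are in Appendix D).
OPS = {
    "op1": {"true": lambda x, y: x + y, "false": lambda x, y: x - y},
    "op2": {"true": lambda x, y: x * y, "false": lambda x, y: x + y},
}

# p_(a, op): probability that agent a answers op truthfully.
# Here agents A and B form a truthful persona while C and D are untruthful
# on every operator; the values 0.9 / 0.1 are assumptions, never exactly 1 or 0.
P_TRUTHFUL = {(a, op): (0.9 if a in ("A", "B") else 0.1)
              for a in ("A", "B", "C", "D") for op in OPS}

def sample_equation(n=100, rng=random):
    """Draw one training string `a | x op y = z` following Equation 1."""
    a = rng.choice(["A", "B", "C", "D"])          # a ~ U(S)
    op = rng.choice(list(OPS))                    # op ~ U(O)
    x, y = rng.randint(1, n), rng.randint(1, n)   # x, y ~ U({1, ..., n})
    truthful = rng.random() < P_TRUTHFUL[(a, op)]
    z = OPS[op]["true"](x, y) if truthful else OPS[op]["false"](x, y)
    return f"{a} | {x} {op} {y} = {z}"

dataset = [sample_equation() for _ in range(10)]
```

Swapping the `P_TRUTHFUL` table for one in which each agent is truthful on a disjoint quarter of the operators yields the "no truthful persona" condition used below.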
The exact operations of the truthful and untruthful operators can be found in Appendix D. Experimental setup. In each experiment, we train a 4-layer Transformer with 4 attention heads on the synthetic data using the causal language modeling objective. The hidden dimension and the embedding dimension are set to 128. All models are trained with a batch size of 512 and learning rate of 0.001 using the Adam optimizer [Kingma & Ba (2014)] for a total of 20k steps. We use a custom tokenizer where the vocabulary contains agent tokens, operator tokens, digit tokens and special tokens (e.g., the separator). Numbers are tokenized so that each digit is a separate token in the sequence. For more training details, see Appendix C. --- 2We never set \(p_{(a, op)}\) to be exactly 0 (completely untruthful) or 1 (completely truthful) to stay closer to the real setting. 3.1 Probing for Truthfulness Motivated by the observations on LLMs, we train probes to predict whether a model’s answer for an incomplete equation (e.g., \(a \mid x \text{ op } y =\)) will be truthful. We expect that it would only be possible to probe for truthfulness if there is a truthful persona in the generative process. That is, agents who are likely to produce truthful outputs share some common features that can be clustered. We thus create two pretraining setups with and without truthful personas as follows: 1. **Truthful persona.** We use four agents (\(A, B, C,\) and \(D\)) and \(m\) operators. \(A\) and \(B\) are truthful agents who are truthful on all \(m\) operators, whereas \(C\) and \(D\) are untruthful on all \(m\) operators. Thus, the model can use the shared belief among \(A\) and \(B\), and \(C\) and \(D\) respectively to cluster these agents and form (un)truthful personas. We vary \(m \in \{8, 12, 16, 20\}\). 2. **No truthful persona.** Same as in (1), we have four agents and \(m\) operators. However, none of the agents is truthful across all the operators; each agent is truthful on only \(\frac{m}{4}\) operators (disjoint among the four agents). We similarly vary \(m \in \{8, 12, 16, 20\}\). Since all agents are (un)truthful on disjoint sets of operators, there are no features the model can use to cluster them hence no (un)truthful personas. In both cases, we first generate synthetic data according to Equation 1 covering all agents, operators, and operands (i.e. \(4 \cdot m \cdot 10k\) data points in total with \(n = 100\)). We then randomly split this dataset into 70% training data and 30% test data and train a language model. Then, we train probes to predict whether the model’s prediction given an input expression \(a \mid x \text{ op } y =\) is truthful or not. The probe is a linear model that takes in the embedding of ‘=’ from a particular layer. Analogous to the LLM probing experiments, we train the probes on half of the operators and evaluate them on the other half to ensure that they do not simply learn which combinations of agents and operators are truthful, but rather rely on features that generalize across agents (i.e. personas). We run the experiment 3 times using different random seeds to select which half of the operators to train (and test) the probe on, where for each run we select 5k examples for training and testing the probe respectively. In initial experiments, we observe that probes trained on different layers can achieve very different performance. To account for this, we report the maximum probing F1 across layers on the test set. We report the F1 score for the probes in both setups in Figure 4a. 
Across all values of \(m\), probes get higher F1 in the truthful persona training setup. We observe especially large variance in the setting with no truthful persona — we hypothesize that this happens because in the absence of a truthful persona, the probe can have widely varying generalization on the unseen half of the operators. This result supports our persona hypothesis where we can discern true and false statements only if truthful agents are clustered to form a truthful persona. 3.2 Generalizing Agent Beliefs to Unseen Operators To test our hypothesis that personas can be used to generalize an agent’s behavior to unseen contexts, we evaluate if models trained on the synthetic data can generalize a (un)truthful agent’s belief to unseen operators. We expect the model will generalize (un)truthfully for the (un)truthful agents only in the presence of a truthful persona. We create two training setups, as illustrated in Figure 5. 1. **Truthful persona.** The training data consists of seven agents, from \(A\) to \(G\), and four different operators, from \(op_1\) to \(op_4\). Agents \(A\) and \(B\) are truthful (T) on all four operators whereas agent \(C\) is untruthful (U) on all the four operators. The model can use the shared belief between \(A\) and \(B\) (i.e. the shared truthful interpretation \(op^T\) from both agents) to cluster them into a truthful persona. The rest of the agents (\(D, E, F, G\)) are used for evaluation on the unseen operator \(op_4\). Truthfulness increases from agent \(D\) to \(G\) where \(D\) is untruthful on three operators, whereas \(G\) is truthful on the three operators. The semantics of \(op^T\) and \(op^F\) for each operator can be found in Appendix D. 2. **No truthful persona.** The data consists of seven agents, from \(A\) to \(G\), and four different operators, from \(op_1\) to \(op_4\). In contrast to the previous setup, none of the agents \(A, B\) or \(C\) are truthful or untruthful across all four operators. Each of \(A, B,\) and \(C\) are truthful on two out of the four operators as illustrated in Figure 5. In this setup, there are no features the model can use to cluster Figure 4: (left) Maximum F1 score across layer with std. deviation. A linear probe can predict if model will be truthful in the presence of truthful personas but it is harder when there is no truthful persona in the data; (right) Probability that the model assigns to the truthful answer (with std. deviation) as described in Section 3.2. It increases with truthfulness of the agent when there are truthful persona, but we see high variance in the absence of a truthful persona. | | A | B | C | |---|---|---|---| | op1 | T | T | U | | op2 | T | T | U | | op3 | T | T | U | | op4 | T | T | U | Truthful Persona | | A | B | C | |---|---|---|---| | op1 | T | U | U | | op2 | T | U | T | | op3 | U | T | U | | op4 | U | T | T | No Truthful Persona Agent truthfulness increases → | | D | E | F | G | |---|---|---|---|---| | op1 | U | U | U | T | | op2 | U | U | T | T | | op3 | U | T | T | T | | op4 | ? | ? | ? | ? | Seen Unseen T - Truthful U - Untruthful Figure 5: Illustration of the synthetic setup used to test generalization. The first setup (top) has a truthful persona in the data (A, B) whereas the second one (bottom) does not. We evaluate whether models generalize truthfully by testing with 4 new agents (D, E, F, G) which exhibit varying degrees of truthfulness. the agents since they are truthful on subsets of operators with no (e.g., A and B) or little (e.g., A and C) overlap. 
Similar to the previous setup, the other agents (D, E, F, G) are used to evaluate generalization to the unseen operator op4 where truthfulness increases from D to G. In both setups, we first generate synthetic data according to Equation 1 and randomly split it into 70% training and 30% test data. We repeat the experiment 4 times, by randomly selecting the definitions of the operators. To evaluate the model on an unseen agent-operator combination, we compute the average probability assigned by the model to the truthful and untruthful answers across all held-out equations for that operator. We use $p_{\text{truthful}}$ and $p_{\text{untruthful}}$ to denote the average model likelihood for the truthful and untruthful answers respectively. Results. In each of the two setups, we report $p_{\text{truthful}}$ for the unseen operators across the four agents D, E, F, G in Figure 4b. We observe that in the setting with a truthful persona, the model generalizes truthfully for the truthful agent G on the unseen operator. Similarly, the model generalizes untruthfully for the untruthful agent D—both have much smaller variance than the intermediate agents where the agents are not (un)truthful on all operators. On the other hand, in the setup with --- See Appendix D for the graph of $p_{\text{untruthful}}$. | | D | E | F | G | |----------------|-------|-------|-------|-------| | Truthful Answer| 92.66%| 91.88%| 97.84%| 100% | | Control Answer | 47.82%| 45.36%| 45.29%| 46.33%| | Untruthful Answer| 96.38%| 94.73%| 90.78%| 79.33%| | Control Answer | 24.58%| 25.03%| 24.98%| 23.91%| Table 2: Probing accuracy for the equations involving \( \text{op}_4 \) to either predict the truthful answer, the untruthful answer or a control answer. Models encode both the truthful and untruthful answer much better than the control answer, irrespective of whether the equation involves a truthful or an untruthful agent. no truthful persona, we observe very high variance in \( p_{\text{truthful}} \). This happens because the model generalization widely varies over different runs (e.g. \( p_{\text{truthful}} \approx 0 \) in some runs and \( p_{\text{truthful}} \approx 1 \) in others). For models to generalize as expected in the setting with truthful persona, the model clusters agents who are mostly truthful (e.g. \( A, B, G \)), which can be used to determine which function to use for the unseen agent-operator combination (\( G \) on \( \text{op}_4 \)). Thus, consistent with our hypothesis, we observe that models can generalize to produce (un)truthful output for (un)truthful agents, only in the presence of a truthful persona. ### 3.3 Mechanism for Persona-based Computation Our hypothesis in this work is that LLMs can infer the agent based on the input context, map it to an (un)truthful persona based on the cluster the agent belongs to, and generate (un)truthful continuations accordingly. An interesting question here is the mechanism of how LLMs perform the persona-based computation — do they first infer the persona and then compute the corresponding answer? Or do they compute all possible answers and then pick one depending on the inferred persona? To answer this question, we perform some preliminary experiments in the synthetic setup. Specifically, we train two linear probes on the representation to predict the truthful answer and the untruthful answer to the equation respectively. We use the model from Figure 5 with truthful personas (top), and use the representation from the last layer to train the probes. 
Both the probes are trained on 50k randomly sampled examples, and evaluated on held-out equations for \( \text{op}_1 \). We also train control probes to predict an answer of an unrelated operation as a baseline — this helps to control for the possibility of the LLM encoding all numbers in the representation, or the probe learning to perform the task. More experimental details can be found in Appendix C. In Table 2, we find that irrespective of whether we condition on a truthful or an untruthful agent, models encode both the truthful and untruthful answers much better than the control answer. This indicates that models compute and store all possible answers of an input and then ‘pick’ an answer based on the inferred persona. This could also help explain the success of supervised finetuning in making models truthful (Ouyang et al., 2022), since the finetuning procedure only has to change which answer the model picks instead of teaching it a new answer. We leave more investigation along this direction for future work. **Limitations of the synthetic setting.** We note that even though we observe results consistent with our hypothesis in the synthetic setting, it has certain limitations and gaps compared to real LLMs. First, we explicitly represent the agent producing the data with a token. In real LLMs, models would have to infer the agent from the text and may not be able to do it as easily as in the synthetic setting. Second, in the synthetic setting, we assumed that both truthful and untruthful answers are equally easy or equally hard to compute — this leaves open the possibility that truthful (or untruthful) answers are ‘simpler’ and easier to model. Additionally, we assumed that truthful agents share common beliefs across most if not all operators — in practice, truthful agents do not necessarily agree on every fact. ### 4 Discussion **Have LLMs robustly learnt what is truthful?** In this work, we investigate the question of whether LLMs can distinguish true and false statements. Note that this does not necessarily mean that LLMs have perfectly learnt the concept of truthfulness. First, as we observed in both the LLM finetuning and probing experiments, even though models perform much better than chance there is still a considerable gap; e.g., we can probe with only up to $\approx 70\%$ accuracy whether the model will make a truthful prediction. Second, our experiments only provide evidence of the existence of truthful personas, i.e. there exist features that the model can use to cluster truthful agents. Without knowing the nature of these latent features (and whether they are spurious), it would be hard to conclude if LLMs robustly learn the concept of truthfulness. Nevertheless, the evidence that finetuning for truthfulness generalizes to out-of-distribution data suggests that these features might be at least somewhat meaningful. Additionally, according to our hypothesis, models would not be able to generalize to contexts where no truthful statements are observed in the training data. **Other hypotheses of how LLMs can learn truthfulness.** Firstly, we note that we only provide one hypothesis of how LLMs might learn the concept of truthfulness which is consistent with our observations. Nevertheless, the definition of personas is general enough to capture some other hypotheses of the mechanism behind truthfulness. 
For example, it could be possible that a small number of truthful and untruthful statements in the pretraining data have annotations, say in the form of comments in forums indicating whether the statement was truthful. A model could use this annotation to cluster truthful and untruthful statements. ## 5 RELATED WORK **Evaluating truthfulness of LLMs.** [Lin et al. (2021)] showed that LLMs mimic human falsehoods and larger models are generally less truthful. However a follow-up [Wei et al. (2022)] showed that this behaviour is in fact U-shaped — beyond a certain scale, truthfulness seems to increase as we increase the scale of models. **Improving truthfulness.** Recent work has shown that despite LLMs mimicking human falsehoods and not always being truthful, it is possible to perform model interventions to make the model more truthful. [Burns et al. (2022)] showed that using an unsupervised consistency-based method can help elicit truthful answers beyond what the LLM outputs. Similarly, [Li et al. (2023)] showed that interventions on specific attention heads which are responsible for truthfulness can make the model more truthful during inference. [Chuang et al. (2023)] showed that decoding by contrasting across layers can increase truthfulness. Recent work has also shown, similar to our probing results, that we can detect whether an answer produced by LLM is truthful either using its internal state representation [Azaria & Mitchell (2023)] or using linguistic features of the answer [Lee et al. (2023)]. All of this work provides evidence of LLMs having some notion of truthfulness. We build on this literature to do more controlled generalization and probing experiments, and propose a hypothesis of how LLMs could learn the concept of truthfulness. **Personas and Agents in LLMs.** Despite conflicting information in the data [Chen et al. (2022)], [Andreas (2022)] argued that LLMs can serve as models of agents where they can infer properties of the agent and predict the next word accordingly. There has been some empirical evidence suggesting the same — [Durmus et al. (2023)] show that we can steer LLMs to express opinions similar to people from some countries; [Sardari et al. (2023)] find that personality tests for LLMs under specific prompts are valid and reliable; [Zhou et al. (2023)]; [Lin et al. (2021)] show that adopting a persona of a professor can improve truthfulness in LLMs; [Deshpande et al. (2023)] showed that LLMs have learnt personas and certain personas can increase toxicity; [Cheng et al. (2023)] showed that we can use persona to measure stereotypes in LLMs. Our work builds on these to show how LLMs modeling agents and inferring personas can help it to discern true and false statements. ## 6 CONCLUSION We introduce a hypothesis of how LLMs can model truthfulness: **persona hypothesis** — LLMs can group agents that share common features into personas that can be used to distinguish true from false statements, and generalize agent behavior beyond the context in which it was observed during training. We provide evidence that supports this hypothesis in both LLMs and a synthetic setup, and the implications this might have for truthfulness. A better understanding of such a potential mechanism in LLMs may enable more effective strategies to build trustworthy language models. REFERENCES Jacob Andreas. Language models as agent models. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, Abu Dhabi, United Arab Emirates, December 2022. 
Association for Computational Linguistics. URL https://aclanthology.org/2022.findings-emnlp.423 Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, T. J. Henighan, Andy Jones, Nicholas Joseph, Benjamin Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, John Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Christopher Olah, and Jared Kaplan. A general language assistant as a laboratory for alignment. *ArXiv*, abs/2112.00861, 2021. URL https://api.semanticscholar.org/CorpusID:244799619 Amos Azaria and Tom M. Mitchell. The internal state of an llm knows when its lying. *ArXiv*, abs/2304.13734, 2023. URL https://api.semanticscholar.org/CorpusID:258352729 Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. *ArXiv*, abs/2005.14165, 2020. Collin Burns, Hao-Tong Ye, Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in language models without supervision. *ArXiv*, abs/2212.03827, 2022. Hung-Ting Chen, Michael J.Q. Zhang, and Eunsol Choi. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In *Conference on Empirical Methods in Natural Language Processing*, 2022. URL https://api.semanticscholar.org/CorpusID:253107178 Myra Cheng, Esin Durmus, and Dan Jurafsky. Marked personas: Using natural language prompts to measure stereotypes in language models. *ArXiv*, abs/2305.18189, 2023. URL https://api.semanticscholar.org/CorpusID:258960243 Aankanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Seppasi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways. *ArXiv*, abs/2204.02311, 2022. Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James R. Glass, and Pengcheng He. Dola: Decoding by contrasting layers improves factuality in large language models. *ArXiv*, abs/2309.03883, 2023. URL https://api.semanticscholar.org/CorpusID:261582463 A. Deshpande, Vishvak Murahari, Tanmay Rajpurohit, A. Kalyan, and Karthik Narasimhan. Toxicity in chatgpt: Analyzing persona-assigned language models. *ArXiv*, abs/2304.05335, 2023. 
URL https://api.semanticscholar.org/CorpusID:258060002 Esin Durmus, Karina Nyugen, Thomas Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin, Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, Liane Lovitt, Sam McCandlish, Orowa Sikder, Alex Tamkin, Janel Thamkul, Jared Kaplan, Jack Clark, and Deep Ganguli. Towards measuring the representation of subjective global opinions in language models. *ArXiv*, abs/2306.16388, 2023. URL https://api.semanticscholar.org/CorpusID:259275051
ESq3U7z6FD
In Sec 4.3, why is using beam-size = branching factor referred to as exact search? Are all documents visited in this setting? Also, how does beam-size = 0.1*branching_factor ensure that we search up to 10% of the documents? My understanding is that beam-size=b means that we end up at b leaf nodes and then exhaustively rank all documents in those leaf nodes. So unless the tree is of height = 1, setting beam-size = branching factor cannot mean that we are exhaustively searching over all documents.
EHI: End-to-end Learning of Hierarchical Index for Efficient Dense Retrieval Anonymous authors Paper under double-blind review Abstract Dense embedding-based retrieval is now the industry standard for semantic search and ranking problems, like obtaining relevant web documents for a given query. Such techniques use a two-stage process: (a) contrastive learning to train a dual encoder to embed both the query and documents and (b) approximate nearest neighbor search (ANNS) for finding similar documents for a given query. These two stages are disjoint; the learned embeddings might be ill-suited for the ANNS method and vice-versa, leading to suboptimal performance. In this work, we propose End-to-end Hierarchical Indexing – EHI – that jointly learns both the embeddings and the ANNS structure to optimize retrieval performance. EHI uses a standard dual encoder model for embedding queries and documents while learning an inverted file index (IVF) style tree structure for efficient ANNS. To ensure stable and efficient learning of the discrete tree-based ANNS structure, EHI introduces the notion of dense path embedding that captures the position of a query/document in the tree. We demonstrate the effectiveness of EHI on several benchmarks, including the de-facto industry-standard MS MARCO (Dev set and TREC DL19) datasets. For example, with the same compute budget, EHI outperforms the state-of-the-art (SOTA) by 0.6% (MRR@10) on the MS MARCO dev set and by 4.2% (nDCG@10) on TREC DL19 benchmarks. 1 Introduction Semantic search (Johnson et al., 2019) aims to retrieve relevant or semantically similar documents/items for a given query. In the past few years, semantic search has been applied to numerous real-world applications like web search, product search, and news search (Nayak, 2019; Dahiya et al., 2021). The problem in the simplest form can be abstracted as: for a given query $q$, retrieve the relevant document(s) $d(q)$ from a static set of documents $\{d_1, d_2, \ldots, d_N\}$ s.t. $d(q) = \arg\max_{1 \leq j \leq N} \text{SIM}(q, d_j)$. Here $\text{SIM}$ is a similarity function that has high fidelity to the training data $B = \{(q_i, d_j, y_{ij})\}$. Tuple $(q_i, d_j, y_{ij})$ indicates if document $d_j$ is relevant ($y_{ij} = 1$) or irrelevant ($y_{ij} = -1$) for a given query $q_i \in Q$. Dense embedding-based retrieval (Johnson et al., 2019; Jayaram Subramanya et al., 2019; Guo et al., 2020) is the state-of-the-art (SOTA) approach for semantic search and typically follows a two-stage process. In the first stage, it embeds the documents and the query using a deep network like BERT (Devlin et al., 2018). That is, it defines similarity $\text{SIM}(q, d) := \langle E_\theta(q), E_\theta(d) \rangle$ as the inner product between embeddings $E_\theta(q)$ and $E_\theta(d)$ of the query $q$ and the document $d$, respectively. $E_\theta(\cdot)$ is a dense embedding function learned using contrastive losses (Ni et al., 2021; Menon et al., 2022). In the second stage, approximate nearest neighbor search (ANNS) retrieves relevant documents for a given query. That is, all the documents are indexed offline and are then retrieved online for the input query. ANNS in itself has been extensively studied for decades with techniques like ScaNN (Guo et al., 2020), IVF (Sivic & Zisserman, 2003), HNSW (Malkov & Yashunin, 2020), DiskANN (Jayaram Subramanya et al., 2019) and many others being used heavily in practice.
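To make the two-stage pipeline above concrete, the following is a minimal sketch of a standard (disjoint) dense-retrieval setup: a dual encoder produces embeddings, and an off-the-shelf IVF index is built on top of them. The checkpoint name, toy corpus, and index hyper-parameters are illustrative assumptions, not the exact configuration evaluated in this paper.

```python
# Minimal sketch of the generic two-stage pipeline (encoder trained separately from the ANNS index).
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-v4")  # stage 1: E_theta

docs = ["how to train a dual encoder", "inverted file indexes for ANNS", "hierarchical k-means for doc ids"]
doc_emb = encoder.encode(docs, convert_to_numpy=True, normalize_embeddings=True).astype("float32")

# Stage 2: build an IVF index on the frozen embeddings; this disjoint step is what EHI replaces.
dim, nlist = doc_emb.shape[1], 2                      # nlist = number of clusters ("leaves")
quantizer = faiss.IndexFlatIP(dim)
index = faiss.IndexIVFFlat(quantizer, dim, nlist, faiss.METRIC_INNER_PRODUCT)
index.train(doc_emb)
index.add(doc_emb)

index.nprobe = 1                                      # number of clusters visited at query time
q_emb = encoder.encode(["what is an IVF index"], convert_to_numpy=True,
                       normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_emb, 2)                  # approximate top-k documents
```

Note that the clustering inside the IVF index is fit to the frozen embeddings and never sees the training queries, which is precisely the misalignment discussed next.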
The starting hypothesis of this paper is that the two-stage dense retrieval approach – disjoint training of the encoder and ANNS – is sub-optimal due to the following reasons: Misalignment of representations: When the encoder and ANNS are trained separately, there is no explicit optimization objective that ensures that the representations learned by the encoder are aligned Figure 1: EHI is an end-to-end hierarchical indexer which comprises an encoder and a hierarchical tree as the indexer where the entire pipeline is learnable and differentiable. Here, variables $V_{98}$, $V_{123}$, and $V_{576}$ are dense representations (embeddings) of the text and $P_{98}$, $P_{123}$, and $P_{576}$ are path embeddings of the respective samples. To efficiently train EHI without any warm starting, we use a combination of objectives - $L_{siamese}$, $L_{indexing}$, $L_{intra-leaf}$ (see Section 3 for details). with the requirements of the ANNS technique. For example, the documents might be clustered in six clusters to optimize encoder loss. However, due to computational constraints, ANNS might allow only five branches/clusters, thus splitting or merging clusters unnaturally and inaccurately. Ignoring query distribution: Generic ANNS techniques optimize for overall retrieval efficiency without considering the query distribution. As a result, the indexing structure might not be optimal for a particular train/test query distribution (Jaiswal et al., 2022). See Appendix B for more details. Motivated by the aforementioned issues, we propose EHI—End-to-end learning of Hierarchical Index – that jointly trains both the encoder and the search data structure; see Figure 1. To the best of our knowledge, EHI is the first end-to-end learning method for dense retrieval. Recent methods like DSI (Tay et al., 2022) and NCI (Wang et al., 2022) do not follow a dense embedding approach and directly generate document ID, but they also require a separate hierarchical clustering/tokenization phase on embeddings from a pre-trained encoder; see Section 2 for a more detailed comparison. EHI parameterizes the hierarchical tree-based indexer with classifiers in its nodes. One key idea in EHI is to map the path taken by a query or a document in the tree with a compressed, continuous, and dense path embedding. Standard path embedding in a tree is exponentially sized in tree height, but EHI’s path embeddings are linear in branching factor and tree height. EHI further uses these embeddings with contrastive loss function over (query, doc) tuples along with two other loss terms promoting diversity in indexing. We conduct an extensive empirical evaluation of our method against SOTA techniques on standard benchmarks. For example, on FIQA dataset (Maia et al., 2018) – a question-answering dataset – we observe that our method is 5.5% more accurate than standard dense retrieval with ScaNN ANNS index (Guo et al., 2020) when restricted to visit/search only 20% of documents in the corpus. Furthermore, for FIQA, EHI shows an improvement of 5.61% than the dense retrieval baselines with exact search, thus demonstrating better embedding learning as well. We attribute these improved embeddings to the fact that EHI enables integrated hard negative mining as it can retrieve irrelevant or negative documents from indexed leaf nodes of a query. Here, the indexer parameters are always fresh, unlike techniques akin to ANCE (Xiong et al., 2020). 
Our experiments on the popular MS MARCO benchmark (Bajaj et al., 2016) demonstrate that EHI shows improvements of 0.6% in terms of nDCG@10 compared to dense-retrieval with ScaNN baselines when only 10% of documents are searched. Similarly, EHI provides 4.2% higher nDCG@10 than state-of-the-art (SOTA) baselines on the MS MARCO TREC DL19 (Craswell et al., 2020) benchmarks for the same compute budget. EHI also achieves SOTA exact search performance on both MRR@10 and nDCG@10 metrics with up to 80% reduction in latency, indicating the effectiveness of the joint learning objective. Similarly, we outperform SOTA architectures such as NCI on NQ320k by 0.5% and ~ 2% on Recall@10 and Recall@100 metrics with a model one-tenth the size! (see Section 4.2). To summarize, the paper makes the following key contributions: - Proposed EHI, the first end-to-end learning method for dense retrieval that jointly learns both the encoder and the search indexer for various downstream tasks. (see Section 3). EHI represents a paradigm shift in dense retrieval where both encoder and ANNS could be integrated and trained accurately, efficiently, and stably in a single pipeline. - Extensive empirical evaluation of EHI on the industry standard MS MARCO benchmark and compare it to SOTA approaches like ColBERT, SGPT, cpt-text, ANCE, DyNNIBAL, etc. (see Appendix D). EHI’s focus is mainly on improving retrieval accuracy for a fixed computation/search budget and is agnostic to encoder architecture, similarity computation, hard negative mining, etc. 2 RELATED WORKS Dense retrieval (Mitra et al., 2018) underlies a myriad of web-scale applications like search (Nayak, 2019), recommendations (Eksombatchai et al., 2018; Jain et al., 2019), and is powered by (a) learned representations (Devlin et al., 2018; Kolesnikov et al., 2020; Radford et al., 2021), (b) ANNS (Johnson et al., 2019; Sivic & Zisserman, 2003; Guo et al., 2020) and (c) LLMs in retrieval (Tay et al., 2022; Wang et al., 2022; Guu et al., 2020). Representation learning. Powerful representations are typically learned through supervised and un/self-supervised learning paradigms that use proxy tasks like masked language modeling (Devlin et al., 2018) and autoregressive training (Radford et al., 2018). Recent advances in contrastive learning (Gutmann & Hyvärinen, 2010) helped power strong dual encoder-based dense retrievers (Ni et al., 2021; Izacard et al., 2021; Nayak, 2019). They consist of query and document encoders, often shared, which are trained with contrastive learning using limited positively relevant query and document pairs (Menon et al., 2022; Xiong et al., 2020). While most modern-day systems use these learned representations as is for large-scale ANNS, there is no need for them to be aligned with the distance metrics or topology of the data structures. Recent works have tried to address these concerns by warm-starting the learning with a clustering structure (Gupta et al., 2022) but fall short of learning jointly optimized representations alongside the search structure. Other works such as RepCONC (Zhan et al., 2022), and SPLADE (Formal et al., 2022) also work on the efficiency aspect of retrieval, where they focus on quantization of the representations using regularizers which explicitly work on reducing FLOPS. Approximate nearest neighbor search (ANNS). The goal of ANNS is to retrieve almost nearest neighbors without paying exorbitant costs of retrieving true neighbors (Clarkson, 1994; Indyk & Motwani, 1998; Weber et al., 1998). 
The “approximate” nature comes from pruning-based search data structures (Sivic & Zisserman, 2003; Malkov & Yashunin, 2020; Beygelzimer et al., 2006) as well as from the quantization based cheaper distance computation (Jegou et al., 2010; Ge et al., 2013). This paper focuses on ANNS data structures and notes that compression is often complementary. Search data structures reduce the number of data points visited during the search. This is often achieved through hashing (Datar et al., 2004; Salakhutdinov & Hinton, 2009; Kusupati et al., 2021), trees (Friedman et al., 1977; Sivic & Zisserman, 2003; Bernhardsson, 2018; Guo et al., 2020) and graphs (Malkov & Yashunin, 2020; Jayaram Subramanya et al., 2019). ANNS data structures also carefully handle the systems considerations involved in a deployment like load-balancing, disk I/O, main memory overhead, etc., and often tree-based data structures tend to prove highly performant owing to their simplicity and flexibility (Guo et al., 2020). For a more comprehensive review of ANNS structures, please refer to Cai (2021); Li et al. (2020); Wang et al. (2021). Works such as CCSA (Lassance et al., 2021) propose alternate ANN structures for efficient retrieval via constrained clustering. Encoder-decoder for Semantic Search. Recently, there have been some efforts towards modeling retrieval as a sequence-to-sequence problem. In particular, Differential Search Index (DSI) (Tay et al., 2022) and more recent Neural Corpus indexer (NCI) (Wang et al., 2022) method proposed encoding the query and then find relevant document by running a learned decoder. However, both these techniques, at their core, use a separately computed hierarchical k-means-based clustering of document embeddings for semantically assigning the document-id. That is, they also index the documents using an ad-hoc clustering method which might not be aligned with the end objective of improving retrieval accuracy. In contrast, EHI jointly learns both representation and a k-ary tree-based search data structure end-to-end. This advantage is reflected on MS MARCO dataset. EHI is up to 7.12% more accurate (in terms of nDCG@10) compared to DSI. Recently, retrieval has been used to augment LLMs also (Guu et al., 2020; Izacard & Grave, 2020b,a; Izacard et al., 2022). We would like to stress that the goal with LLMs is language modeling while retrieval’s goal is precise document retrieval. However, retrieval techniques like EHI can be applied to improve retrieval subcomponents in such LLMs. 3 END-TO-END HIERARCHICAL INDEXING (EHI) Problem definition and Notation. Consider a problem with a corpus of $N$ documents $\mathcal{D} = \{d_1, ..., d_N\}$, a set of $Q$ training queries $\mathcal{Q} = \{q_1, ..., q_Q\}$, and training data $(q_i, d_k, y_{ik})$, where $y_{ik} \in \{-1, 1\}$ is the label for a given training (query, document) tuple and $y_{ik} = 1$ denotes that $d_k$ is relevant to $q_i$. Given these inputs, the goal is to learn a retriever that maps a given query to a set of relevant documents while minimizing the computation cost. While wall-clock time is the primary cost metric, comparing different methods against it is challenging due to very different setups (language, architecture, parallelism, etc.). Instead, we rely on recall vs. % searched curves, widely considered a reasonable proxy for wall-clock time modulo other setup/environment changes (Guo et al., 2020). 3.1 OVERVIEW OF EHI At a high level, EHI has three key components: Encoder $E_\theta$, Indexer $I_\phi$ and Retriever. 
Parameters $\theta$ of the query/document encoder and $\phi$ of the indexer are the trainable parameters of EHI. Unlike most existing techniques, which train the encoder and indexer in a two-step disjoint process, we train both the encoder and indexer parameters jointly with an appropriate loss function: see Section 3.5. Learning the indexer – generally a discontinuous function – is a combinatorial problem that also requires multiple rounds of indexing the entire corpus. However, by modeling the indexer using a hierarchical tree and its “internal representation” as compressed path embedding, we demonstrate that the training and retrieval with encoder+indexer can be executed efficiently and effectively. In the following sections, we provide details of the encoder and indexer components. In Section 3.4, we detail how encoder+indexer can be used to retrieve specific documents for a given query, which is used both for inference and hard-negative mining during training. Section 3.5 provides an overview of the training procedure. Finally, Section 3.6 summarizes how documents are ranked after retrieval. 3.2 ENCODER $E_\theta$: DENSE EMBEDDING OF QUERY/DOCUMENTS Our method is agnostic to the architecture used for dual encoder. But for simplicity, we use standard dual encoder (Ni et al., 2021) to map input queries and documents to a common vector space. That is, encoder $E_\theta$ parameterized by $\theta$, maps query ($q \in \mathcal{Q}$) and document ($d \in \mathcal{D}$) to a common vector space: $E_\theta(q) \in \mathbb{R}^m$, and $E_\theta(d) \in \mathbb{R}^m$, where $m$ is the embedding size of the model (768 here). While such an encoder can also be multi-modal as well as multi-vector, for simplicity, we mainly focus on standard textual data with single embedding per query/document. We use the standard BERT architecture for encoder $E_\theta$ and initialize parameters $\theta$ using a pre-trained Sentence-BERT distilbert model (Reimers & Gurevych, 2019). Our base model has 6 layers, 768 dimensions, 12 heads with 66 million parameters. We then fine-tune the final layer of the model for the target downstream dataset. 3.3 INDEXER $I_\phi$: INDEXING OF QUERY/DOCUMENT IN THE HIERARCHICAL DATA STRUCTURE EHI’s indexer ($I_\phi$) is a tree with height $H$ and branching factor $B$. Each tree node contains a classifier that provides a distribution over its children. So, given a query/document, we can find out the leaf nodes that the query/document indexes into, as well as the probabilistic path taken in the tree. The final leaf nodes reached by the query are essential for retrieval. But, we also propose to use the path taken by a query/document in the tree as an embedding of the query/document – which can be used in training through the loss function. However, the path a query/document takes is an object in an exponentially large (in height $H$) vector space, owing to $B^H$ leaf nodes, making it computationally intractable even for a small $H$ and $B$. Instead, below, we provide a significantly more compressed path embedding – denoted by $T(\cdot; \phi)$ and parameterized by $\phi$ – embeds any given query or document in a relatively low-dimensional \((B \cdot H)\) vector space. For simplicity, we denote the query and the document path embedding as \(T_\phi(q) = T(E_\theta(q); \phi)\) and \(T_\phi(d) = T(E_\theta(d); \phi)\), respectively. 
We construct path embedding of a query/document as: \[T(E_\theta(q)) = T(E_\theta(q); \phi) = [p^H; p^{H-1}; \ldots; p^1],\] Where \(p^h \in [0, 1]^B\) denotes the probability distribution of children nodes for a parent at height \(h\). For a given leaf \(l\), say path from root node is defined as \(l = [i_1^l, i_2^l, \ldots, i_H^l]\) where \(i_h^l \in [1 \ldots B]\) for \(h \in [H]\). The probability at a given height in a path is approximated using a height-specific simple feed-forward neural network parameterized by \(W_{h+1} \in \mathbb{R}^{(B \cdot h + m) \times B}\) and \(U_{h+1} \in \mathbb{R}^{(B \cdot h + m) \times (B \cdot h + m)}\) (\(m\) is the embedding size). That is, \[p^{h+1} = \text{Softmax}(W_{h+1}^T F([o(i_h^l); o(i_{h-1}^l); \ldots; o(i_1^l); E_\theta(q)]; U_{h+1})) \cdot p^h[i_h^l] \tag{1}\] where one-hot-vector \(o(i)\) is the \(i\)-th canonical basis vector and \(F\) is a non-linear transformation given by \(F(x; U_h) = x + \text{ReLU}(U_h^T x)\). In summary, the path embedding for height 1 represents a probability distribution over the leaves. During training, we compute path embedding for higher heights for only the most probable path, ensuring that the summation of leaf node logits remains a probability distribution. Also, the indexer and path embedding function \(T(\cdot; \phi)\) has the following collection of trainable parameters: \(\phi = \{W_H, \ldots, W_1, U_H, \ldots, U_1\}\), which we learn by optimizing a loss function based on the path embeddings; see Section 3.5. ### 3.4 Retriever: Indexing Items for Retrieving Indexing and retrieval form a backbone for any search structure. EHI efficiently encodes the index path of the query and documents in \((B \cdot H)\)-dimensional embedding space. During retrieval for a query \(q\), EHI explores the tree structure to find the “most relevant” leaves and retrieves documents associated with those leaves. For retrieval, it requires encoder and indexer parameters \((\theta, \phi)\) along with Leaf, document hashmap \(M\). The relevance of a leaf \(l\) for a query \(q\) is measured by the probability of a query reaching a leaf at height \(H\) \((P(q, l, H))\). Recall from previous section that path to a leaf \(l\) is defined as \(l = [i_1^l, i_2^l, \ldots, i_H^l]\) where \(i_h^l \in [1 \ldots B]\) for \(h \in [H]\). The probability of reaching a leaf \(l\) for a given query \(q \in Q\) to an arbitrary leaf \(l \in \text{Leaves}\) can be computed as \(P(q, l, H) = p^H[i_H^l]\) using equation 1. But, we only need to compute the most probable leaves for every query during inference, which we obtain by using the standard beam-search procedure summarized below: 1. For all parent node at height \(h - 1\), compute probability of reaching their children \(\hat{S} = \bigcup_{c \in \text{child}(p)} P(q, c, h) \forall p \in P\). 2. Keep top \(\beta\) children based on score \(\hat{S}\) and designate them as the parents for the next height. Repeat steps 1 and 2 until the leaf nodes are reached. Once we select \(\beta\) leaves EHI retrieves documents associated with each leaf, which is stored in the hashmap \(M\). To compute this hash map, EHI indexes each document \(d \in D\) (similar to query) with \(\beta = 1\). Here, \(\beta = 1\) is a design choice considering memory and space requirements and is kept as a tuneable parameter. Algorithm 2 in the appendix depicts the approach used by our Indexer for better understanding. 
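To make Eq. 1 and the resulting path embeddings more concrete, below is a minimal PyTorch sketch of a tree indexer in the spirit of $I_\phi$. The class name, tensor shapes, and the greedy descent along the most probable child are our own simplifications of the construction described above, not the authors' implementation; at query time a beam of the top-$\beta$ children per level would be kept instead of a single argmax.

```python
# Simplified sketch of an EHI-style tree indexer (Eq. 1); names and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreeIndexer(nn.Module):
    def __init__(self, emb_dim=768, branch=8, height=2):
        super().__init__()
        self.B, self.H = branch, height
        # Per-height residual transform U_h and child classifier W_h; the input grows by B one-hots per level.
        self.U = nn.ModuleList([nn.Linear(branch * h + emb_dim, branch * h + emb_dim) for h in range(height)])
        self.W = nn.ModuleList([nn.Linear(branch * h + emb_dim, branch) for h in range(height)])

    def forward(self, e):                          # e: (batch, emb_dim) encoder embeddings E_theta(.)
        prefix = e                                 # concatenation [o(i_h); ...; o(i_1); E(q)]
        parent_prob = e.new_ones(e.shape[0])       # probability of the path taken so far
        levels = []
        for h in range(self.H):
            x = prefix + F.relu(self.U[h](prefix))                        # F(x; U) = x + ReLU(U^T x)
            p = F.softmax(self.W[h](x), dim=-1) * parent_prob[:, None]    # Eq. 1: scaled by parent prob
            levels.append(p)
            child = p.argmax(dim=-1)                                      # follow the most probable child
            parent_prob = p.gather(1, child[:, None]).squeeze(1)
            prefix = torch.cat([F.one_hot(child, self.B).float(), prefix], dim=-1)
        # Path embedding: concatenation of the per-level distributions, a (B*H)-dimensional vector.
        return torch.cat(levels[::-1], dim=-1)

path_emb = TreeIndexer()(torch.randn(4, 768))      # shape (4, 8 * 2)
```

Documents are indexed into the leaf given by the final argmax ($\beta = 1$), and the leaf-to-document hashmap $M$ is built from these assignments.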
### 3.5 Training EHI Given the definition of all three EHI components – encoder, indexer, and retriever – we are ready to present the training procedure. As mentioned earlier, the encoder and the indexer parameters \((\theta; \phi)\) are optimized simultaneously with our proposed loss function, which is designed to have the following properties: a) Relevant documents and queries should be semantically similar, b) documents and queries should be indexed together iff they are relevant, and c) documents should be indexed together iff they are similar. Given the encoder and the indexer, we design one loss term for each of the properties mentioned above and combine them to get the final loss function. To this end, we first define the triplet loss as: $$L(E_\theta(q), E_\theta(d_+), E_\theta(d_-)) = [E_\theta(q)^\top E_\theta(d_-) - E_\theta(q)^\top E_\theta(d_+) + \gamma]_+, \quad (2)$$ where we penalize if similarity between query $q$ and an irrelevant document $d_-$ ($y(q, d_-) \neq 1$) is within $\gamma$ margin of the corresponding similarity between $q$ and a relevant document $d_+$ ($y(q, d_+) = 1$). We now define the following three loss terms: 1. **Semantic Similarity**: the first term is a standard dual-encoder contrastive loss between a relevant document $d_+$ – i.e., $y(q, d_+) = +1$ – and an irrelevant document with $y(q, d_-) \neq 1$. $$L_{\text{siamese}} = L(E_\theta(q), E_\theta(d_+), E_\theta(d_-); \theta) \quad (3)$$ 2. **Indexing Similarity**: the second term is essentially a similar contrastive loss over the query, relevant-doc, irrelevant-doc triplet, but where the query and documents are represented using the path-embedding $T_\phi(\cdot)$ given by the indexer $I_\phi$. $$L_{\text{indexing}} = L(T_\phi(q), T_\phi(d_+), T_\phi(d_-); \theta, \phi) \quad (4)$$ 3. **Intra-leaf Similarity**: to spread out irrelevant docs, third loss applies triplet loss over the sampled relevant and irrelevant documents for a query $q$. Note that we apply the loss only if the two docs are semantically dissimilar according to the latest encoder, i.e., $\text{SIM}(a, b) = \frac{a^\top b}{|a||b|} < \tau$ for a pre-specified threshold $\tau = 0.9$. $$L_{\text{intra-leaf}} = 1\{\text{SIM}(E_\theta(d_+), E_\theta(d_-)) < \tau\} L(T_\phi(d_+), T_\phi(d_+), T_\phi(d_-); \theta, \phi) \quad (5)$$ The final loss function $L$ is given as the weighted sum of the above three losses: $$L(q, d_+, d_-; \theta, \phi) = \lambda_1 L_{\text{siamese}} + \lambda_2 L_{\text{indexing}} + \lambda_3 L_{\text{intra-leaf}} \quad (6)$$ Here $\gamma$ is set to 0.3 for all loss components, and $\lambda_1, \lambda_2, \lambda_3$ are tuneable hyper-parameters. Our trainer (see Algorithm 1) learns $\theta$ and $\phi$ by optimizing $L$ using standard techniques; for our implementation we used AdamW (Loshchilov & Hutter, 2017). Note that the loss function only uses in-batch documents’ encoder embeddings and path embeddings, i.e., we are not even required to index all the documents in the tree structure, thus allowing efficient joint training of both encoder and indexer. To ensure fast convergence, we use hard negatives mined from the indexed leaves of a given query $q$ for which we require documents to be indexed in the tree. But, this procedure can be done once in every $r$ step where $r$ is a hyper-parameter set to 5 by default across our experiments. 
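As a concrete illustration of the objective in equation 6, the sketch below computes the three loss terms from in-batch encoder embeddings and path embeddings. The function names, tensor conventions, and default values of $\gamma$, $\tau$, and $\lambda_{1,2,3}$ follow the notation above but are otherwise our assumptions.

```python
# Sketch of the EHI training objective (equations 2-6); one row per (q, d+, d-) triplet.
import torch
import torch.nn.functional as F

def hinge_triplet(anchor, pos, neg, gamma=0.3):
    """Eq. 2: penalize when sim(anchor, neg) comes within gamma of sim(anchor, pos)."""
    return F.relu((anchor * neg).sum(-1) - (anchor * pos).sum(-1) + gamma)

def ehi_loss(enc_q, enc_pos, enc_neg, path_q, path_pos, path_neg,
             lam=(1.0, 1.0, 1.0), gamma=0.3, tau=0.9):
    l_siamese  = hinge_triplet(enc_q,  enc_pos,  enc_neg,  gamma)    # Eq. 3, encoder space
    l_indexing = hinge_triplet(path_q, path_pos, path_neg, gamma)    # Eq. 4, path-embedding space
    # Eq. 5: separate documents in the tree only when the encoder deems them dissimilar.
    gate = (F.cosine_similarity(enc_pos, enc_neg, dim=-1) < tau).float()
    l_intra = gate * hinge_triplet(path_pos, path_pos, path_neg, gamma)
    return (lam[0] * l_siamese + lam[1] * l_indexing + lam[2] * l_intra).mean()
```

In practice, the negatives $d_-$ would come from in-batch sampling or, every $r$ steps, from the leaves into which the query is currently indexed (hard negatives), as described above.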
We would like to stress that existing methods like DSI, NCI, or ANCE not only have to use stale indexing of documents, but they also use stale or even fixed indexers – for instance, DSI and NCI learn a fixed semantic structure over docids using one-time hierarchical clustering. In contrast, EHI jointly updates the indexer and the encoder in each iteration, and thus can better align the embeddings with the tree/indexer. ### 3.6 Re-ranking and Evaluation This section describes the re-ranking step and the test-time evaluation process after retrieval. In Section 3.4, we discussed how each document is indexed, and we now have a learned mapping of size $d \times l$, where $d$ is the corpus size and $l$ is the number of leaves. Given a query at test time, we perform a forward pass similar to the indexing pipeline presented in Section 3.4 and find the top-$b$ leaves ($b$ here is the beam size) the given query reaches. We collate all the documents that reached these $b$ leaves (a set operation, to avoid any repetition of the same documents across multiple leaves) and rank them based on an appropriate similarity metric such as cosine similarity, dot product, Manhattan distance, etc. We use the cosine similarity metric for ranking throughout our experiments (see Section 4.2). ### 4 Experiments In this section, we present an empirical evaluation of EHI on standard dense retrieval benchmarks. The goal of the empirical evaluation is twofold: (a) highlight that the paradigm shift of training both encoder and ANNS in an end-to-end fashion (EHI) is more favorable than training them in a disjoint fashion (with off-the-shelf indexers such as ScaNN, Faiss-IVF, etc.), and (b) understand EHI's stability w.r.t. various hyper-parameters and how to set them appropriately. Figure 2: EHI is significantly more accurate than DE + ScaNN or Faiss-IVF, especially when restricted to visit a small fraction of documents. See Figure 5 in the Appendix for results on SciFact, FIQA. We note that due to the typical scale of retrieval systems, a method's ability to retrieve relevant documents under a strict latency budget is critical and defines the success of a method. So, we would want to compare query throughput against recall/MRR, but obtaining head-to-head latency numbers is challenging as different systems are implemented using different environments and optimizations. So, following standard practice in the ANNS community, we use the fraction of documents visited/searched as a proxy for latency (Jayaram Subramanya et al., 2019; Guo et al., 2020). Appendix C provides exact training hyperparameters of EHI. 4.1 Experimental Setup Datasets: We evaluate EHI on four standard but diverse retrieval datasets of increasing size: SciFact (Wadden et al., 2020), FIQA (Maia et al., 2018), MS MARCO (Bajaj et al., 2016) and NQ320k (Kwiatkowski et al., 2019). Appendix A provides additional details about these datasets. Baselines. We consider five baseline methods to evaluate against EHI. In particular, baselines DE+Exact-search, DE+ScaNN, and DE+Faiss-IVF are standard dense retrieval methods with a dual-encoder (DE) architecture (Menon et al., 2022) trained using a Siamese loss (Chopra et al., 2005). The three methods use three different ANNS methods for retrieval: Exact-search\(^1\), ScaNN (Guo et al., 2020), and Faiss-IVF (Johnson et al., 2019). DSI (Tay et al., 2022) and NCI (Wang et al., 2022) are the remaining two main baselines. We report DSI numbers on MS MARCO using an implementation validated by the authors. However, we note that NCI fails to scale to large datasets like MS MARCO.
For EHI and baseline dual-encoder (DE) models, we use a pre-trained Sentence-BERT (Reimers & Gurevych, 2019) fine-tuned appropriately on the downstream dataset using contrastive loss. For DE baselines, only the encoder is fine-tuned, while the ANNS structure (off-the-shelf indexers) is built on top of the learned representations. 4.2 Results SciFact. We first start with the small-scale SciFact dataset. Figure 5(a) and Table 6 compares EHI to three DE baselines. Clearly, EHI’s recall-compute curve dominates that of DE+ScaNN and DE+Faiss-IVF. For example, when allowed to visit/search about 10% of documents, EHI obtains up to +15.64% higher Recall@100. Furthermore, EHI can outperform DE+Exact Search with a 60% reduction in latency. Finally, representations from EHI’s encoder with exact search can be as much as 4% more accurate (in terms of Recall@100) than baseline dual-encoder+Exact Search, indicating effectiveness of EHI’s integrated hard negative mining. FIQA. Here also we observe a similar trend as SciFact; see Figure 5(b) and Table 7. That is, when restricted to visit only 15% documents (on an average), EHI outperforms ScaNN and Faiss-IVF in Recall@100 metric by 5.46% and 4.36% respectively. Furthermore, EHI outperforms the exact search in FIQA with a 84% reduction in latency or documents visited. Finally, when allowed to visit about 50% of the documents, EHI is about 5% more accurate than Exact Search, which visits all the documents. Thus indicating better quality of learned embeddings. \(^1\)Performance metric when 100% of documents are visited (see Figure 2, Figure 5). **MS MARCO.** As mentioned in Section 4.1, MS MARCO is considered the gold standard benchmark for semantic retrieval. We study the MS MARCO passage retrieval task on both standard dev set, as well as TREC DL-19 set (Craswell et al., 2020). We compare against the standard Sentence-BERT model (Huggingface, 2019), fine-tuned on MS MARCO, with Exact Search (see Table 8). For the standard dev set, EHI is able to match or surpass the accuracy of baseline Exact Search with an **80%** reduction in number of documents visited. This is in stark contrast to the baseline DE+ScaNN and DE+Faiss-IVF methods, which require visiting almost double, i.e., almost **50%** of the documents. Furthermore, when restricted to visiting only **1%** of the documents, EHI obtains **0.6%** higher nDCG@10 than DE+ScaNN and DE+Faiss-IVF. Note that such a gain is quite significant for the highly challenging and well-studied MS MARCO dataset. We also compare EHI against DSI on this dataset. We note that the DSI base model with 250M parameters is almost **four times** the size of the current EHI model. After multiple weeks of DSI training with doc2query + atomic id + base model, DSI’s MRR@10 value is **26%**, which is about **6%** lower than EHI with just **1%** visited documents. Note that despite significant efforts, we could not scale NCI code (Wang et al., 2022) on MS MARCO due to the dataset size; NCI paper does not provide metrics on MS MARCO dataset. For the **TREC DL-19 set**, EHI is able to match or surpass the nDCG@10 of baseline Exact Search with an **78%** reduction in latency. Furthermore, when restricted to visiting **1%** of the documents, EHI achieves **4.2%** higher nDCG@10 than DE+ScaNN and DE+Faiss-IVF. For completeness, we compare EHI’s accuracy against SOTA methods for this dataset that uses a similar-sized encoder. 
Note that these methods often use techniques that are complementary or analogous to EHI, such as multi-vector similarity, and can be combined with EHI. Nonetheless, we observe that EHI is competitive with SOTA techniques like ColBERT with a similar encoder and is significantly more accurate than traditional DE methods like ANCE, HNSW, etc. Appendix D provides a more detailed comparison of the EHI encoder against other SOTA encoders on the MS MARCO dataset. Note that any of these encoders could replace the DistilBERT model used in EHI; the comparison only serves to show the efficacy of the learned representations. Furthermore, we note that distillation-based approaches such as ColBERT-v2 (Santhanam et al., 2021) and other sparse neural IR models such as SPLADE (Formal et al., 2022) do perform better than EHI (which does not use distillation) in exact search over the MS MARCO benchmark, which opens up future directions for EHI, e.g., how distillation could be used to further improve metrics on this benchmark. However, it is not straightforward to build an ANNS index over the ColBERT-v2 model, since it uses a late-interaction framework and also relies on re-ranking (see Table 3). **NQ320k.** Finally, we present an evaluation on the standard NQ320k (Kwiatkowski et al., 2019) benchmark, in the setting studied by the NCI paper (Wang et al., 2022). EHI matches or surpasses the accuracy of baseline Exact Search with a **60%** reduction in latency. Furthermore, when limited to the same compute budget, EHI outperforms DE+ScaNN and DE+Faiss-IVF by up to **0.4%** in Recall@10. **Comparison to DSI/NCI:** Note that EHI is able to significantly outperform DSI and NCI (without query generation) despite NCI utilizing a **10×** larger encoder. Furthermore, even with query generation, NCI is **0.5%** and **~2%** less accurate than EHI on the Recall@10 and Recall@100 metrics, respectively (see Table 5). To showcase that end-to-end learning produces encoder embeddings that are indeed aligned with the EHI indexer, which standalone indexers cannot achieve, we show that indexing a common set of embeddings from a pre-trained EHI encoder through the EHI indexer leads to significant gains over adding an off-the-shelf indexer on top (see Appendix E.8). Our observations about EHI are statistically significant, as evidenced by the p-value tests in Appendix E.3. Additional experiments, such as the number of documents per leaf, robustness to initialization, qualitative analysis of the leaves of the indexer learned by the EHI model, and comparisons against ELIAS (Gupta et al., 2022) on XC benchmarks, are presented in Appendix E. ### 4.3 Ablations In the previous section, we demonstrated the effectiveness of EHI against multiple baselines on diverse benchmarks. In this section, we report results from multiple ablation studies to better understand the behavior of EHI. Additional properties such as load balancing, the effect of the negative mining refresh factor, and other properties of EHI are discussed in Appendix E. Figure 3: Ablation study of four major components in EHI to evaluate their contributions towards jointly learned representation and ANNS structure for state-of-the-art dense retrieval. **Effect of branching factor.** Figure 3(a) shows recall@100 of EHI on SciFact with varying branching factors. We consider two versions of EHI, one with exact search, and another where we restrict EHI to visit only about 10% of the documents.
Interestingly, for EHI + Exact Search, the accuracy decreases with a higher branching factor, while it increases for the smaller beam-size of 0.1. We attribute this to documents in a leaf node being very similar to each other for high branching factors (fewer points per leaf). We hypothesize that EHI is sampling highly relevant documents as hard negatives, leading to a lower exact-search accuracy. **Ablation w.r.t. loss components.** Next, on the FIQA dataset, we study the performance of EHI when one of the loss components in equation 6 is turned off; see Figure 3(b). First, we observe that EHI outperforms the other three vanilla variants, implying that each loss term contributes non-trivially to the performance of EHI. Next, we observe that removing the document-similarity-based loss term ($\lambda_3$), equation 5, has the least effect on the performance of EHI, as the other two loss terms already capture some of its desired consequences. However, turning off the contrastive loss on either the encoder embedding ($\lambda_1$), equation 3, or the path embedding ($\lambda_2$), equation 4, leads to a significant loss in accuracy. This also indicates the importance of jointly and accurately learning both the encoder and indexer parameters. **Effect of hard negative sampling.** Figure 3(c) shows recall@100 with and without hard-negative mining using the learned indexer (see Algorithm 1) on FIQA. EHI with hard negative sampling improves recall@100 significantly, by 3.1%, thus clearly demonstrating its importance. **Effect of height.** We study the accuracy of EHI when the tree structure is extended to multiple heights, which is essential for accurately indexing extensive web-scale document collections. Traditional indexing methods that rely on a single-height approach can be computationally impractical and sub-optimal when dealing with billions or more documents. To address this challenge, EHI treats height as a hyperparameter and learns the entire tree structure end-to-end. Our experimental results in Figure 3(d) demonstrate that trees with $H = 2$ exhibit performance on MS MARCO similar to $H = 1$. This extension enhances scalability and efficiency when indexing large web-scale datasets. For instance, for EHI trained on SciFact with an equal number of leaves, we notice a significant speedup with increasing height: at $(B = 64, H = 1)$, $(B = 8, H = 2)$, and $(B = 4, H = 3)$, we observe a per-query latency of 2.48 ms, 2.40 ms, and 1.99 ms, respectively, at the same computation budget. This extension to hierarchical k-ary trees is necessary for scalability and is discussed in further detail in Appendix E.6. ## 5 Conclusions, Limitations, and Future Work We presented EHI, a framework and paradigm shift to jointly learn both the query/document encoder and the search indexer to retrieve documents efficiently. EHI is composed of three key components: encoder, indexer, and retriever; the indexer generates compressed, low-dimensional path embeddings of queries/documents in the tree, which are key to the joint training of the encoder and indexer. We demonstrated the effectiveness of EHI on a variety of standard benchmarks. Currently, path embeddings are mainly an intuitive construct without a formal understanding. In the future, understanding path embeddings and providing rigorous guarantees should be of significant interest.
Furthermore, combining EHI encoders that output hierarchical representations like matryoshka embeddings (Kusupati et al., 2022) or integrating with RGD (Kumar et al., 2023) to further improve generalization of tail queries should also be of interest. Finally, this paper addresses an abstract and established problem, so we don’t expect any significant additional societal implications from this work. REFERENCES Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. Ms marco: A human generated machine reading comprehension dataset. *arXiv preprint arXiv:1611.09268*, 2016. Erik Bernhardsson. *Annoy: Approximate Nearest Neighbors in C++/Python*, 2018. URL https://pypi.org/project/annoy/. Python package version 1.13.0. Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Scott Yih, Sebastian Riedel, and Fabio Petroni. Autoregressive search engines: Generating substrings as document identifiers. *Advances in Neural Information Processing Systems*, 35:31668–31683, 2022. Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. In *Proceedings of the 23rd international conference on Machine learning*, pp. 97–104, 2006. Deng Cai. A revisit of hashing algorithms for approximate nearest neighbor search. *IEEE Transactions on Knowledge and Data Engineering*, 33(6):2337–2348, 2021. doi: 10.1109/TKDE.2019.2953897. Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In *2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05)*, volume 1, pp. 539–546. IEEE, 2005. Kenneth L Clarkson. An algorithm for approximate closest-point queries. In *Proceedings of the tenth annual symposium on Computational geometry*, pp. 160–164, 1994. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. Overview of the trec 2019 deep learning track. *arXiv preprint arXiv:2003.07820*, 2020. Kunal Dahiya, Deepak Saini, Anshul Mittal, Ankush Shaw, Kushal Dave, Akshay Soni, Himanshu Jain, Sumeet Agarwal, and Manik Varma. Deepxml: A deep extreme multi-label learning framework applied to short text documents. In *Proceedings of the 14th ACM International Conference on Web Search and Data Mining*, pp. 31–39, 2021. Kunal Dahiya, Nilesh Gupta, Deepak Saini, Akshay Soni, Yajun Wang, Kushal Dave, Jian Jiao, Gururaj K, Prasenjit Dey, Amit Singh, et al. Ngame: Negative mining-aware mini-batching for extreme classification. In *Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining*, pp. 258–266, 2023. Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In *Proceedings of the twentieth annual symposium on Computational geometry*, pp. 253–262, 2004. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Chantat Eksombatchai, Pranav Jindal, Jerry Zitao Liu, Yuchen Liu, Rahul Sharma, Charles Sugnet, Mark Ulrich, and Jure Leskovec. Pixie: A system for recommending 3+ billion items to 200+ million users in real-time. In *Proceedings of the 2018 world wide web conference*, pp. 1775–1784, 2018. Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 
From distillation to hard negative sampling: Making sparse neural ir models more effective. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 2353–2359, 2022. Jerome H Friedman, Jon Louis Bentley, and Raphael Ari Finkel. An algorithm for finding best matches in logarithmic expected time. *ACM Transactions on Mathematical Software (TOMS)*, 3(3):209–226, 1977. Luyu Gao, Zhuyun Dai, Zhen Fan, and Jamie Callan. Complementing lexical retrieval with semantic residual embedding. corr abs/2004.13969 (2020). *arXiv preprint arXiv:2004.13969*, 2020.
bUv5gJAAxH
The authors have not accounted for the potential for bias in their Z-score test. The variance of the ID (and therefore the significance of the Z-score) itself strongly depends on the dimensionality within which it is assessed. Setting a threshold for hypothesis testing that is uniform across all dimensions may not be appropriate here. Estimation of ID also has its own biases that may confound hypothesis testing of this type. Accounting for (and if necessary, adjusting for) dimensional bias would greatly improve both the importance and novelty of the results.
Relating Implicit Bias and Adversarial Attacks through Intrinsic Dimension Anonymous authors Paper under double-blind review Abstract Despite their impressive performance in classification, neural networks are known to be vulnerable to adversarial attacks. These attacks are small perturbations of the input data designed to fool the model. Naturally, a question arises regarding the potential connection between the architecture, settings, or properties of the model and the nature of the attack. In this work, we aim to shed light on this problem by focusing on the implicit bias of the neural network, which refers to its inherent inclination to favor specific patterns or outcomes. Specifically, we investigate one aspect of the implicit bias, which involves the essential Fourier frequencies required for accurate image classification. We conduct tests to assess the statistical relationship between these frequencies and those necessary for a successful attack. To delve into this relationship, we propose a new method that can uncover non-linear correlations between sets of coordinates, which, in our case, are the aforementioned frequencies. By exploiting the entanglement between intrinsic dimension and correlation, we provide empirical evidence that the network bias in Fourier space and the target frequencies of adversarial attacks are closely tied. 1 Introduction An active field of research in artificial neural networks (ANNs) is focused on understanding why, despite their enormous success, their predictions can be drastically changed by subtle perturbations of their inputs, known as adversarial attacks (Szegedy et al., 2013). New research has shown a strong correlation between the implicit bias of artificial neural networks - which refers to their natural predisposition to exhibit a preference towards particular patterns or results - and their ability to resist adversarial attacks. This was highlighted in a recent study (Faghri et al., 2021), wherein it was demonstrated that the specific optimizer, neural network architecture, and regularizer employed had a substantial impact on the ability of a linear neural network to withstand adversarial interference. However, besides simple models (Gunasekar et al., 2018), a formal characterization of the implicit bias of a neural network remains a formidable challenge. The research presented in Karantzaz et al. (2022) offers an algorithm aimed at investigating a specific aspect of implicit bias even in the case of complex networks. This approach involves analyzing the essential input frequencies required to maintain the accuracy of a trained network. Such frequencies are computed by training, for each input image, a learnable modulatory mask that filters the frequency content of the image, reducing it to the bare minimum required to preserve correct classification. The essential frequency masks can serve as a unique fingerprint for the network, as they encapsulate the information that the ANN relies on when processing inputs. In this work, we leverage this methodology to investigate the correlation between the implicit spectral bias of the network, defined in terms of the image frequencies that are essential to perform the correct classification, and the frequencies targeted by adversarial attacks to deceive the network. In particular, for each image, we calculate the modulatory mask of the essential frequencies (using a similar approach to Karantzaz et al. 
(2022)) and, additionally, for the same image, we learn a mask containing the essential adversarial frequencies needed for an attack to be successful. Fig. 1 displays examples of clean and attacked images before (A, B) and after (C, D) being filtered by, respectively, essential frequency masks and adversarial frequency masks. We use these two sets of masks to check the dependence (or lack thereof) between the network bias in the Fourier domain and the frequencies that are being targeted by the adversarial attack. Our Figure 1: Examples of CIFAR-10 (Krizhevsky, 2009) images before and after being filtered by the Fourier masks: (A): original input images (B): adversarial images generated with $\ell_\infty$ Fast Minimum Norm (Pintor et al., 2021) attack on ResNet-20 (He et al., 2016) (C): images filtered by essential frequency masks (D): adversarial images filtered by adversarial frequency masks. primary objective is to offer empirical proof that the network spectral bias determines the nature of the adversarial attacks in Fourier space, in the same spirit of Faghri et al. (2021). However, defining and computing this correlation is a challenging task due to the high-dimensional nature of the modulatory mask sets, and the fact that their correlation can be, in principle, highly non-linear. To address these challenges we introduce a novel non-linear correlation method that relies on the observation that the intrinsic dimensionality ($I_d$) of a data set is affected by correlations between the features. By comparing the $I_d$ estimated in the data set with the distribution of $I_d$ that one would obtain in the case of fully uncorrelated data, we are able to quantify the probability that the two types of masks are correlated. Our findings indicate a strong correlation between the feature spaces defined by the two types of masks, providing empirical evidence of the connection between network bias in Fourier space and target frequencies of adversarial attacks. 2 RELATED WORK AND BACKGROUND 2.1 IMPLICIT BIAS AND IMPLICIT FOURIER BIAS The idea behind the phenomenon of implicit bias is that the loss landscape of an overparameterized network has many local minima, and which local minimum one converges to after training depends on the complex interplay between factors including the choice of the model architecture and parameterization (Gunasekar et al., 2018; Yun et al., 2020), the initialization scheme (Sahs et al., 2022), the optimization algorithm (Williams et al., 2019; Woodworth et al., 2020) and the data statistics (Yin et al., 2019). The implicit bias of state-of-the-art models has been shown to play a critical role in the generalization property of deep neural networks (Li et al., 2019; Arora et al., 2019). Analytical characterizations of the implicit bias have been provided only for deep linear convolutional or fully connected networks (Gunasekar et al., 2018). One interesting effect of the implicit bias of the network is its tendency to learn specific frequencies in the target function during training, a phenomenon called spectral bias (Rahaman et al., 2019). This bias results in the network learning low complexity functions and can potentially explain its ability to generalize (Fridovich-Keil et al., 2022; Cao et al., 2019; Wang et al., 2020; Tsuzuku & Sato, 2019). Also, not surprisingly, the implicit bias strongly influences the type of input features extracted by a trained neural network. In particular, in Karantzas et al. 
(2022), the authors show that very few image frequencies in the Fourier domain are essential to the network to perform classification. These findings have helped to characterize the spectral bias of neural networks with a focus on the input space rather than the target function (as in Rahaman et al., 2019). Interestingly, a deep connection exists between robust classification and implicit bias (Faghri et al., 2021). Empirically, a strong relationship has been found between the network robustness and the statistics of the Fourier spectra of the input data (Yin et al., 2019) or architecture (Caro et al., 2020) and detection strategies in the Fourier domain have been used to defend against adversarial attacks (Harder et al., 2021). 2.2 ADVERSARIAL ATTACKS Artificial Neural Networks are well known to be vulnerable to adversarial attacks (Szegedy et al., 2013). These attacks involve manipulating an input data point in a way that deceives an otherwise well-performing classifier, by making small alterations to a correctly classified data point. Numerous techniques have been proposed to create such adversarial examples, beginning with the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014), followed shortly by variants such as Projected Gradient Descent (PGD) (Madry et al., 2018). Both these methods employ gradient information to generate an appropriate adversarial example while ensuring that the $\ell_p$ norm of the perturbation remains below a fixed threshold $\epsilon$. These algorithms were primarily developed for effectiveness rather than optimality, which may limit their ability to generate input samples with minimal perturbations, resulting in them being classified as "maximum confidence" attacks. In contrast, "minimum norm" attacks prioritize the identification of adversarial examples with the least amount of perturbation by minimizing its norm. In this regard, some of the most notable proposals are L-BFGS (Szegedy et al., 2013), the Carlini and Wagner attack (Carlini & Wagner, 2017), DeepFool (Moosavi-Dezfooli et al., 2015) and the recent Fast Minimum Norm (FMN) attack (Pintor et al., 2021), which seeks to combine the efficiency of FGSM and PGD with optimality in terms of perturbation norm. The robustness of neural networks against adversarial attacks remains an unresolved issue. Although adversarial training is currently the most effective technique for improving the resilience of neural classifiers, it often involves a trade-off between robustness and a reduction in performance on non-adversarial, clean data (Goodfellow et al., 2014). Moreover, it remains unclear why adversarial examples exist and whether they represent an inevitable byproduct of current neural architectures and training methods (Ilyas et al., 2019; Shafahi et al., 2019). The goal of this work is not to propose a method for improving the adversarial robustness of neural networks. Rather, our aim is to provide valuable insights into the frequency content that is targeted by adversarial attacks and its relationship with the implicit spectral bias of the network. 2.3 INTRINSIC DIMENSION The concept of the intrinsic dimension ($I_d$) of a data set is widely used in data analysis and Machine Learning. Before providing a more formal definition, imagine a data set where your data points are the cities around the globe described by their 3D Cartesian coordinates. We will say that the embedding dimension of this data set is three. 
However, anyone familiar with cartography would agree that nearly the same information can be encoded with only two coordinates (latitude and longitude). Therefore, its $I_d$ would be equal to two. Indeed, one of the definitions of $I_d$ is the minimum number of coordinates needed to represent the data with minimal information loss. A complementary definition is the dimension of the manifold in which the data lie, which in this case would be a sphere. Intrinsic dimension estimation is closely related to the field of dimensionality reduction, since it gives a hint about what the dimension of the projection space should be to avoid information loss. Thus, one possible way of estimating the $I_d$ is to find a meaningful projection into the lowest-dimensional space possible. A classical method for doing that is Principal Component Analysis (Wold et al., 1987), but it has the drawback that, strictly speaking, it is only correct if the data lie in a hyperplane, since it performs a linear transformation. Therefore, the development of methods for overcoming such a limitation is an active research field, resulting in techniques like Multidimensional Scaling (Borg & Groenen, 2005), Isomap (Balasubramanian & Schwartz, 2002), t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten & Hinton, 2008) or Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018), to mention a few. Other methods can estimate the $I_d$ of a data set even in the case in which projecting into a lower-dimensional space is not possible (for example, due to topological constraints). Typically, these approaches infer the $I_d$ from the properties of the nearest neighbors' distances. While a full review of these methods is out of the scope of this work (the interested reader is referred to Lee et al., 2015), it is worth mentioning the Maximum Likelihood approach (Levina & Bickel, 2005), the Dimensionality from Angle and Norm Concentration (DANCo) approach (Ceruti et al., 2014) or the two-NN (Facco et al., 2017). The last is the one employed in this work, since it is particularly fast and behaves well even in the case of data sets with high non-uniformity in the density of points. **Figure 2:** Schematic representation of the method employed to obtain essential frequency masks and adversarial frequency masks. Only one channel is displayed for visualization purposes. Full details are provided in Sec. 4.4. ### 3 METHODS #### 3.1 MODULATORY MASKS The primary tools we use to gather insights on the implicit spectral bias and on the geometry of adversarial examples are modulatory masks. The latter retain information on the essential frequencies required to achieve a particular classification task. To obtain these masks, we follow a similar algorithm to the one outlined in Karantzas et al. (2022), as depicted in Fig. 2. We train masks that modulate the frequency content of an image by multiplying element-wise each entry of the Fast Fourier Transform (FFT) of the image with the corresponding entry of the mask, which is a learnable scalar between 0 and 1. Specifically, starting from an image \( x \), we compute its FFT \( \mathcal{F}x \) and multiply it element-wise with a learnable mask \( M \). The mask has the same shape as the image \( x \) (and its FFT \( \mathcal{F}x \)), meaning that if the image has RGB encoding we train a separate mask for each channel, and its entries are constrained to be in \([0, 1]\).
The result of this multiplication is then projected back in pixel space by taking the real part of its inverse Fourier transform, thereby obtaining a new filtered image \( x_F \): \[ x_F = \Re(\mathcal{F}^{-1}(M \odot \mathcal{F}x)). \] (1) The image \( x_F \) is then fed into the trained classification model to obtain a prediction. We produce two sets of masks. The masks belonging to the first set encode the essential frequencies of an image to be correctly classified by the neural classifier, thus we will refer to these as *essential frequency masks* (\( M_{EF} \)). The second set is composed of masks that encode the essential frequency content required to maintain the effectiveness of an adversarial attack, that is, the essential frequencies needed to misclassify an adversarially perturbed image. We will refer to these masks as *adversarial frequency masks* (\( M_{AF} \)). Some examples of adversarial frequency masks are shown in Fig. 3 (the corresponding \( M_{EF} \) masks are shown in the Appendix in Fig. 6). Both sets of masks are learned using a preprocessing layer attached to a classifier ANN with freezed parameters. The essential frequency masks are trained by optimizing the Cross-Entropy loss of the entire model (consisting of the preprocessing layer and the trained classifier) on the original samples. Conversely, for adversarial frequency masks, the training objective is the Cross-Entropy with respect to the adversarial class (to preserve misclassification), and the masks are trained on adversarial data. The key property of the learned masks is their sparsity, which is achieved by enforcing an $\ell_1$ norm regularization on the entries of the mask during training. This regularization ensures that the mask accurately captures only the essential frequency content needed to accomplish a specific task, such as correctly classifying an input or misclassifying an adversarial example. Our primary objective is to determine whether a correlation exists between these distinct sets of masks. To do so, we propose a novel algorithm based on intrinsic dimension estimation. This algorithm overcomes the limitations of existing methods and is applicable to non-linearly correlated data. 3.2 Non-linear Correlation through Intrinsic Dimension As mentioned earlier, to examine the statistical relationship between implicit bias and adversarial attacks, it is necessary to compute correlations between two feature spaces that characterize the same images: the essential frequencies for image classification and those required for the adversarial attack to be successful. The conventional approach for investigating correlations is based on the Pearson correlation coefficient ($R^2$) between variables (Pearson, 1896). However, this method has two limitations that make it impractical. First, it cannot be applied to assess correlations between two sets of multiple variables, such as the different types of masks mentioned earlier. Second, it is unable to detect non-linear correlations, as illustrated in the example presented in Fig. 4. Therefore, we provide a new approach that overcomes these problems by using the intrinsic dimension. The intrinsic dimension of a data set is closely linked to the correlations among the various features that define the data points. These correlations determine the regions in which the data points can exist, thereby shaping the underlying manifold. As previously mentioned, the dimension of this manifold corresponds to what we refer to as the intrinsic dimension. 
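To make this link between feature correlations and the measured $I_d$ concrete before discussing the examples below, here is a minimal numerical sketch. It is only an illustration (the experiments in Sec. 4 use the TwoNN implementation in the DADAPy library): we assume NumPy and scikit-learn, and use a simplified maximum-likelihood form of the TwoNN estimate, $I_d \approx N / \sum_i \log(r_{2,i}/r_{1,i})$, where $r_{1,i}$ and $r_{2,i}$ are the first- and second-nearest-neighbor distances of point $i$.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(points):
    """Simplified TwoNN estimate: I_d ~ N / sum(log(r2 / r1))."""
    dist, _ = NearestNeighbors(n_neighbors=3).fit(points).kneighbors(points)
    mu = dist[:, 2] / dist[:, 1]   # dist[:, 0] is the point itself (distance 0)
    return len(points) / np.sum(np.log(mu))

rng = np.random.default_rng(0)
t = rng.uniform(0, 4 * np.pi, 5000)
spiral = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)   # x2 non-linearly tied to x1

print(twonn_id(spiral))                              # close to 1: the coordinates are correlated

shuffled = spiral.copy()
shuffled[:, 1] = rng.permutation(shuffled[:, 1])     # break the pairing of the two coordinates
print(twonn_id(shuffled))                            # close to 2: the coordinates are now independent
```

Shuffling one coordinate leaves the marginals untouched but destroys the joint structure, so the estimated $I_d$ jumps from roughly one to roughly two; this is the signal exploited by the procedure formalized next.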
Let us consider the simplest example: a two-dimensional data set. If the two variables are uncorrelated, the correlation coefficient ($R^2$) approaches zero, while if one feature is a linear function of the other, $R^2$ becomes equal to one. In the context of the data manifold, the first scenario corresponds to a plane ($I_d = 2$), while the second scenario corresponds to a line ($I_d = 1$). However, if we consider a slightly more complex scenario, the advantage of using the $I_d$ becomes evident. The spiral data set in Fig. 4 has $R^2 \approx 0$ due to the non-linear nature of the correlation between the two variables, while the behavior of the $I_d$ is identical to the one observed on the linearly correlated data set. Moreover, there is no theoretical limit to the dimension of the data sets for which it can be computed.

Hence, we employ an approach in which we assess the probability that the observed intrinsic dimension ($I_d$) is consistent with the intrinsic dimension that would be measured if both sets of coordinates were entirely uncorrelated. It involves four steps (illustrated in Fig. 4):

1. Estimating the intrinsic dimension ($I_d$) of the combined data set, obtained by concatenating the two sets of variables.
2. Generating multiple fully uncorrelated data sets by shuffling the positions of data points within one of the two sets of coordinates.
3. Estimating the average and standard deviation of the intrinsic dimension ($I_d$) for the uncorrelated data sets.
4. Applying a one-sided $Z$-test to determine the probability that the intrinsic dimension ($I_d$) estimated in step 1 is significantly lower than the average estimated in step 3.

Figure 3: Examples of adversarial frequency masks, represented as RGB images. The labels refer to the classification of the clean image. The masks were obtained using CIFAR-10 and the Fast Minimum Norm attack on ResNet-20.

The key step enabling the usage of the $I_d$ to detect correlations is the second one, where we shuffle one of the two coordinate sets so that every vector belonging to the first set gets paired with a randomly chosen vector of the second set. By shuffling the order of the data points, the probabilities of the two sets of coordinates $p(x_1)$ and $p(x_2)$ remain unaltered but the joint probability becomes, by construction, $p(x_1, x_2) = p(x_1)p(x_2)$. However, this will not be the case if there is a correlation between $x_1$ and $x_2$ (see Fig. 8 in the Appendix for an example). Therefore, by examining the joint probability distribution before and after shuffling, we can discern whether there exists a correlation between $x_1$ and $x_2$. As explained above, this method overcomes the difficulties inherent in finding non-linear correlations between sets of coordinates. The $Z$-test may be limited as it assumes normality in the distribution of computed $I_d$ values on the dataset with shuffled coordinates. While this is generally fulfilled in the cases studied here, a more significant challenge arises due to the curse of dimensionality. The number of points needed to estimate the $I_d$ with a given level of accuracy increases nearly exponentially with the $I_d$ (Bac et al., 2021), making it challenging for datasets with high intrinsic dimension and a moderate number of points.

Figure 4: Schematic depiction of our proposed $I_d$-based correlation method on synthetic, spiral-shaped data.
We compare the $I_d$ of the original data set (A) with the $I_d$s of the shuffled data set (B), and $Z$-test the hypothesis that the original $I_d$ is lower than the shuffled $I_d$s.

Table 1: Correlation in spiral-shaped data. ($R^2$): linear correlation coefficient; ($I_d$): intrinsic dimension of the spiral; ($I_d$ (shuffle)): mean ± standard deviation of the intrinsic dimension of the data set obtained by shuffling one of the two coordinates; ($Z$): $Z$-score for the hypothesis that the original $I_d$ is significantly lower than the average of the shuffled distribution; ($P$-value): significance of the $Z$-test.

| $R^2$ | $I_d$ | $I_d$ (shuffle) | $Z$ | $P$-value |
|-------|------|-----------------|----|----------|
| $2.5 \cdot 10^{-3}$ | 1.02 | $1.95 \pm 0.03$ | $-74.51$ | 0 |

4 EXPERIMENTAL RESULTS

4.1 DATA

For our experiments, we primarily utilized CIFAR-10 (Krizhevsky, 2009), a widely used benchmark data set that consists of 60000 RGB $32 \times 32$ training images and 10000 test images categorized into 10 classes. When studying adversarial examples, we used the test images, and the training set was solely employed for fine-tuning models, as explained in greater detail in the subsequent section. We also explored the feasibility of scaling up our experiments to a higher-dimensional data set. In particular, we trained masks on Imagenette (Howard, 2022), a 10-class subset of ImageNet (Deng et al., 2009), and report the results on this data set, along with the specific setup details, in Sec. 4.6. However, for the majority of our analyses we relied on CIFAR-10, as the time needed to compute the intrinsic dimension of higher-dimensional mask data sets made it impractical to conduct multiple repeated runs.

4.2 MODELS

To gain a more accurate understanding of how our proposed method behaves in various scenarios, we employed two classification models based on different neural architectures. The first one is ResNet-20, a relatively small representative of the well-known ResNet family, introduced in He et al. (2016). The second model belongs to the class of Vision Transformers (ViT) (Dosovitskiy et al., 2021). Namely, we used CCT-7, a Compact Convolutional Transformer (Hassani et al., 2021) model, which differs from the original ViT in that it employs convolutions in the tokenization phase and a smaller hidden size, allowing a ViT-like architecture to scale down to small data sets such as CIFAR-10. Training details for all the models we employed are reported in the Appendix (Sec. A.3).

4.3 ATTACKS

We employed the $\ell_\infty$ version of the Fast Minimum Norm (FMN) attack algorithm as our reference adversarial attack method (Pintor et al., 2021). This choice was primarily driven by the simplicity and effectiveness of the algorithm, as it does not require parameter fine-tuning and is capable of generating high-quality adversarial examples swiftly. Additionally, we conducted tests using other adversarial attack techniques, namely Projected Gradient Descent (PGD) (Madry et al., 2018) and DeepFool (Moosavi-Dezfooli et al., 2015), both in their $\ell_\infty$ versions. All the attacks were employed in the untargeted setting. For PGD, we selected a perturbation magnitude of $\epsilon = 0.01$, which was chosen to maintain consistency with the perturbation magnitude produced by the FMN attack. We provide an analysis of the robustness of our findings with respect to $\epsilon$ in the Appendix (Sec. A.8).
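As a concrete reference for the gradient-based attacks used above, the following is a minimal PyTorch sketch of untargeted $\ell_\infty$ PGD. The actual experiments rely on the Foolbox implementations mentioned next; the step size and the number of iterations below are illustrative assumptions, while $\epsilon = 0.01$ matches the setting above.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.01, step=0.0025, n_iter=40):
    """Untargeted l-inf PGD: ascend the cross-entropy loss within an eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # gradient-sign ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
    return x_adv.detach()
```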
To implement these attack algorithms, we utilized the Foolbox library (Rauber et al., 2020; 2017).

4.4 MASK TRAINING

The key step in our experimental procedure is the training of Fourier masks (see Sec. 3.1). Starting from a trained, well-performing classifier, we freeze its parameters and prepend to it a pre-processing layer that computes the FFT of an image, multiplies it element-wise by the trainable mask and computes the inverse FFT. The real part of the resulting image is then fed into the classifier. The process of training the masks is identical for both the set of essential frequency masks and adversarial frequency masks, with the only difference being the data set used for mask training. We train essential frequency masks using clean images associated with their original labels. In contrast, for adversarial frequency masks, we utilize adversarial images and the adversarial labels produced by the classifier for those images. In this step, we optimize the standard Cross-Entropy loss function with the addition of an $\ell_1$ penalty term to promote mask sparsity. Further details on the mask training procedure are reported in Sec. A.4 in the Appendix.

4.5 CORRELATION BETWEEN MASKS

To provide evidence of the relation between the implicit bias of the network and the adversarial perturbations, we adopt a direct approach: we correlate the essential frequency masks with the adversarial frequency masks. This correlation analysis is performed using our novel $I_d$-based correlation method (Sec. 3.2).

Table 2: Correlation between essential frequency masks and adversarial frequency masks (CIFAR-10).

| Attack | Model | Cosine sim | $I_d$ | $I_d$ (shuffle) | $Z$ | P-value |
|----------|-----------|------------|----------|-----------------|----------|---------|
| FMN | ResNet-20 | 0.25 ± 0.16 | 31.65 | 34.98 ± 0.73 | −4.56 | 2.5 · 10^{-6} |
| | CCT-7 | 0.22 ± 0.17 | 22.93 | 24.47 ± 0.34 | −4.50 | 3.4 · 10^{-6} |
| PGD | ResNet-20 | 0.22 ± 0.15 | 32.35 | 36.18 ± 0.72 | −5.31 | 5.4 · 10^{-8} |
| | CCT-7 | 0.21 ± 0.17 | 23.30 | 24.52 ± 0.39 | −3.13 | 8.7 · 10^{-4} |
| DeepFool | ResNet-20 | 0.25 ± 0.15 | 30.35 | 33.93 ± 0.73 | −4.91 | 4.5 · 10^{-7} |
| | CCT-7 | 0.20 ± 0.16 | 23.44 | 25.10 ± 0.35 | −4.81 | 7.4 · 10^{-7} |

The outcomes of our evaluation, including the results of the $Z$-test (see Sec. 3.2) and the mean cosine similarity between the masks (which serves as a linear benchmark), are presented in Table 2. To determine the $I_d$ values, we utilized the implementation of TwoNN (Facco et al., 2017) contained in the DADAPy (Glielmo et al., 2022) library, on the data set generated by concatenating the essential frequency masks and the adversarial frequency masks. We then compare these $I_d$ values with the distribution of $I_d$ obtained by shuffling the order of one of the two sets of masks (performing the shuffling process 50 times for each setup). We employ a one-sided $Z$-test to assess the hypothesis that the original $I_d$ value is significantly lower than the average of the shuffled $I_d$s. For all models and attacks tested, our findings indicate a significant correlation between the two sets of masks.

4.6 Correlation results on Imagenette

To further evaluate our approach, we conducted experiments on a 10-class subset of the ImageNet data set (Howard, 2022). The subset consisted of 9469 training samples and 3925 test samples, which were resized to $224 \times 224$.
We employed a ResNet-18 (He et al., 2016) classifier and conducted the training of modulatory masks (essential frequency masks and adversarial frequency masks) according to the same procedure outlined in Sec. 4.4 for CIFAR-10, with the only difference that both the training images and test images were used to calculate the masks. We made this choice because the accurate estimation of intrinsic dimension is crucial for our $I_d$-based correlation method (see Sec. 3.2), and the number of data points needed for reliable estimation scales exponentially with the intrinsic dimension (Bac et al., 2021). Being significantly higher-dimensional than CIFAR-10, the Imagenette data set yields noticeably higher $I_d$ values on the modulatory masks. Hence, relying solely on the smaller test set would have been insufficient, leading us to the decision to augment it with the training images. We conducted correlation tests between essential frequency masks and adversarial frequency masks using our $I_d$-based method, and the results are summarized in Table 3. The probability of correlation is high for FMN and DeepFool attacks, with $P$-values of the $Z$-test in the order of $10^{-2}$. However, it is important to note that the estimation of $I_d$ may have been compromised by the scarcity of data points, as indicated by the high variance in the measurements. In the case of PGD attack, the intrinsic dimension reached values well above 80 in the non-shuffled data set, which further hampered the accuracy of $I_d$ estimation. Consequently, the results obtained with this number of points are not considered reliable. To address this issue, the most straightforward approach is to increase the size of the data set used for mask generation. In this regard, we evaluated the possibility of further up-scaling our experiments to the full ImageNet ILSVRC 2012 data set, as it contains 50000 images in the validation set alone. However, despite having computed modulatory masks for such data, we found out that repeated $I_d$ computation on such an amount of data becomes infeasible both in terms of memory and time requirements. 4.7 Class-specific content in masks Expanding upon the findings presented in Karantzas et al. (2022) regarding the clustering of modulatory masks, we propose a hypothesis that masks computed on images of the same class possess similar frequency content. To validate this hypothesis, we designed a simple test, whose results are displayed in the Appendix in Sec. A.6. We applied multiple times our $I_d$-based correlation method... Table 3: Correlation between essential frequency masks and adversarial frequency masks (Imagenette, ResNet-18). | Attack | Cosine sim | $I_d$ | $I_d$ (shuffle) | Z | P-value | |----------|------------|----------|-----------------|---------|-------------| | FMN | 0.22 ± 0.10| 65.06 | 69.18 ± 2.12 | −1.94 | $2.6 \cdot 10^{-2}$ | | PGD | 0.12 ± 0.07| 81.80 | 77.25 ± 2.94 | 1.54 | $9.4 \cdot 10^{-1}$ | | DeepFool | 0.15 ± 0.09| 65.14 | 69.90 ± 2.50 | −1.90 | $2.8 \cdot 10^{-2}$ | to subsets containing $k$ randomly chosen classes, with $k$ ranging from 1 to 10. If masks belonging to the same class shared common frequencies, we would anticipate the average P-values to decrease (and, consequently, correlation probability to increase) as we added more classes. This is because increasing the number of classes would decrease the probability of matching masks belonging to the same class when they are shuffled. In the experimental results illustrated in Fig. 
7, a distinct downward trend in P-values can be observed as $k$ increases, indicating that there is a considerable amount of class-specific information present in the masks. Based on this observation, we envisioned the possibility of training a single mask that encodes the essential frequency content for an entire class. Such masks (one for each class) can be obtained following the same approach used to learn essential frequency masks for single images, but training on all the images belonging to a certain class. We trained class-level masks on the training images of CIFAR-10 on ResNet-20 and observed that they effectively preserved correct classifications for the unseen test set. Even more interestingly, we noted that these class-level essential frequency masks also successfully mitigated the impact of adversarial attacks on most of the images. Quantitative results for this analysis are detailed in the Appendix (Sec. A.7). While this discovery alone is insufficient for constructing an adversarial defense technique, as countering the attack necessitates knowledge of the correct class to select the corresponding mask, we believe it represents a promising starting point for future research in this direction. 5 DISCUSSION Our study delves into the relationship between adversarial attacks and the implicit bias of neural networks. We introduce a novel method to uncover non-linear correlations, revealing a link between the minimum frequency content needed for correct image classification and adversarial attack frequencies. The analysis covers standard network architectures like ResNets and ViTs and data sets such as CIFAR-10 and Imagenette. This work represents a significant advancement in understanding the relationship between the implicit bias of neural networks and their robustness properties, in the same spirit of Faghri et al. (2021), but for models where the implicit bias is not available in an explicit form. Our results hold prospective implications for the field of adversarial attacks: the deceptive nature of these data manipulations is not yet fully comprehended, and our findings shed light on the crucial frequencies utilized by attackers. This understanding has the potential to drive the development of new defense and detection algorithms, enhancing the security and robustness of neural networks. Furthermore, our mask-based approach offers the ability to modulate both the phase and modulus in the Fourier transform of the data opening up new avenues for investigating the implicit frequency bias of a network. By manipulating these data features, we can gain deeper insights into the implicit bias and explore the influence of different frequency components on classification outcomes. In addition, other types of representations, such as wavelets, could be explored. Finally we note that the method employed in this paper for discovering non-linear correlations between feature spaces, based on $I_d$, exhibits intriguing potential applications beyond the scope of this study. Correlations play a vital role in various scientific domains, including physics (Gallus et al., 2023), economics (Fleckinger, 2012), epidemiology (Majumder & Ray, 2021), and social networks (Starnini et al., 2017), among others. Therefore, it would be interesting to examine whether this method can unveil correlations that were previously unseen using conventional approaches. To these aims, a theoretical development that explores the relationship between $I_d$ and conventional methods for addressing this problem is valuable. 
Such an investigation could lead to possible enhancements that either overcome the limitations of the method or enable more precise quantification of correlation strength. These research directions form part of our future objectives. REFERENCES S. Arora, S. S. Du, W. Hu, Z. Li, R. Salakhutdinov, and R. Wang. "On exact computation with an infinitely wide neural net". *Advances in Neural Information Processing Systems 32*, 2019. J. Bac, E. M. Mirkes M, A. N. Gorban, I. Tyukin, and A. Zinovyev. "Scikit-dimension: a python package for intrinsic dimension estimation". *Entropy* 23(10), 2021. M. Balasubramanian and E. L. Schwartz. "The Isomap Algorithm and Topological Stability". *Science* 295, 2002. I. Borg and P. J. F. Groenen. "Modern multidimensional scaling: Theory and applications". *Springer Science & Business Media*, 2005. Y. Cao, Z. Fang, Y. Wu, D.X. Zhou, and Q. Gu. "Towards Understanding the Spectral Bias of Deep Learning". *International Joint Conference on Artificial Intelligence*, 2019. N. Carlini and D. Wagner. "Towards evaluating the robustness of neural networks". *IEEE Symposium on Security and Privacy*, 2017. J. Ortega Caro, Y. Ju, R. Pyle, S. Dey, W. Brendel, F. Anselmi, and A. B. Patel. "Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples". *arXiv preprint arXiv:2006.11440*, 2020. C. Ceruti, S. Bassis, A. Rozza, G. Lombardi, E. Casiraghi, and P. Campadelli. "DANCo: An intrinsic dimensionality estimator exploiting angle and norm concentration". *Pattern Recognition* 47(8), 2014. J. Deng, W. Dong, R. Socher, J. Li, L. Kai, and F. F. Li. "Imagenet: A large-scale hierarchical image database". *Proceedings of the IEEE conference on computer vision and pattern recognition*, 2009. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". *International Conference on Learning Representations*, 2021. E. Facco, M. d’Errico, A. Rodriguez, and A. Laio. "Estimating the intrinsic dimension of datasets by a minimal neighborhood information". *Scientific Reports* 7(1), 2017. F. Faghri, S. Gowal, C. Vasconcelos, D. J. Fleet, F. Pedregosa, and N. Le Roux. "Bridging the gap between adversarial robustness and optimization bias". *ICLR Workshop on Security and Safety in Machine Learning Systems*, 2021. P. Fleckinger. "Correlation and relative performance evaluation". *Journal of Economic Theory*, 147 (1), 2012. S. Fridovich-Keil, R. Gontijo Lopes, and R. Roelofs. "Spectral Bias in Practice: The Role of Function Frequency in Generalization". *Advances in Neural Information Processing Systems 35*, 2022. C. Gallus, E. M. Pothos, P. Blasiak, J. M. Yearsley, and B. W. Wojciechowski. "Bell correlations outside physics". *Scientific Reports*, 13(1), 2023. A. Glielmo, I. Macocco, D. Doimo, M. Carli, C. Zeni, R. Wild, M. d’Errico, A. Rodriguez, and A. Laio. "DADPy: Distance-based analysis of data-manifolds in Python". *Patterns*, 3(10), 2022. I. Goodfellow, J. Shlens, and C. Szegedy. "Explaining and harnessing adversarial examples". *arXiv preprint arXiv:1412.6572*, 2014. S. Gunasekar, J. D. Lee, D. Soudry, and N. Srebro. "Implicit bias of gradient descent on linear convolutional networks". *Advances in Neural Information Processing Systems 31*, 2018. P. Harder, F.J. Pfreundt, M. Keuper, and J. Keuper. "SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain". 
*International Joint Conference on Neural Networks*, 2021.
sFJr7okOBi
I don't understand the overall evaluation setup. What does 'We randomly generate 1000 protein sequences from these models' mean? What metadata did you condition on? Was it 1000 different sets of metadata? How do you make this comparison fair when using models like ESM that don't have the ability to condition on metadata?
NL2ProGPT: TAMING LARGE LANGUAGE MODEL FOR CONVERSATIONAL PROTEIN DESIGN Anonymous authors Paper under double-blind review ABSTRACT Large Language Models (LLMs), like ChatGPT, excel in cross-modal tasks thanks to their powerful abilities in natural language comprehension, generalization, and reasoning. Meanwhile, the wealth of human-curated protein knowledge in text form presents a unique opportunity for LLMs to contribute to advanced protein design. In this work, we propose a new LLMs-based framework, namely NL2ProGPT, for macromolecular protein sequence generation that bridges the domain gap between natural and protein languages. Specifically, we first combine the protein functions and properties to create specific text guidelines for designing the protein, ensuring it follows precise controls. Second, in order to form a more informative and generalizable protein description, we explicitly inject protein structural information by clustering the embeddings from pre-trained protein language models. Third, we train a reward model to align the protein language model with the Rosetta energy function, following an RL-AIF (reinforced learning from AI feedback) fashion. We empirically verify the effectiveness of NL2ProGPT from three aspects: (1) outperforms existing protein sequence design methods in different evaluations; (2) exhibits more than 90% consistency in text-to-protein generation; (3) has effective exploration potential in disordered regions. 1 INTRODUCTION Recent years have witnessed remarkable progress in Natural Language Processing (NLP), driven by pre-trained Large Language Models (LLMs) [Brown et al., 2020; Radford et al., 2019; OpenAI, 2023] that have shown powerful abilities in natural language comprehension, generalization, and reasoning. Notably, parallels have been drawn between protein sequences and human languages, both being composed of structured elements, with proteins using amino acids as their alphabet. Protein sequences, akin to human languages, efficiently encode structure and function in their order. Therefore, despite dissimilarities, the analogies between protein sequences and language have motivated the use of NLP in recent protein research works [Lin et al., 2022; Zheng et al., 2023; Brandes et al., 2022; Nijkamp et al., 2022; Madani et al., 2020; Ferruz et al., 2022]. For example, one main set of language models follows an autoregressive training strategy, where models predict successive words based on contextual information. Protein autoregressive language models, such as ProGen [Madani et al., 2020], ProGen-2 [Nijkamp et al., 2022], RITA [Hesslow et al., 2022], and Prot-GPT2 [Ferruz et al., 2022], have also been investigated, highlighting the promise of autoregressive generation in the context of protein design. However, most existing methods mainly utilize protein sequential or structural information to model the intrinsic properties of protein, lacking the kind of controllable generation in a conversational way like LLMs. Meanwhile, there exists a vast amount of human-curated knowledge in text format describing proteins’ high-level properties, such as their structure domain, function, and interactions. Given the advancements in NLP’s understanding and generation of human language, there’s potential to apply these methods to tackle protein-related challenges, especially for conversational protein design. 
Simultaneously, there exist two main challenges: 1) the sparse representation of protein descriptions in text; 2) the lack of structural constraints in LLMs' training. To address these challenges, we propose our model, one of the early attempts to use LLMs in the protein generation field, as shown in Figure 1. In this work, we propose NL2ProGPT, a generic approach to fine-tune LLMs to design protein sequences with desired functions and properties. First, we synthesize protein function and property description texts to establish precise design guidelines, ensuring strict adherence to defined controls. Second, we enhance the informativeness and generalizability of protein descriptions by explicitly incorporating structural information, achieved through clustering the embeddings generated by pre-trained protein language models. Third, to leverage the strengths of high-level structural constraints, we employ a reward model and a Reinforcement Learning from AI Feedback (RLAIF) methodology (Ziegler et al., 2019; Lee et al., 2023) to align the protein language model with the Rosetta energy function (Baek et al., 2021) and the cluster representation score, considering both the generality and the consistency of generated proteins.

Under textual constraints, our experimental results show that NL2ProGPT exhibits a high degree of consistency in protein generation, with a probability of successfully satisfying the textual constraints exceeding 90%. Compared with other unconstrained protein generation models, our results show that NL2ProGPT is closer to the characteristics of natural amino acid sequences in terms of conformational energy analysis and self-consistency perplexity. Furthermore, while maintaining protein structural similarity, our results demonstrate that NL2ProGPT has effective exploration potential in disordered regions. In summary, NL2ProGPT demonstrates excellent performance in the field of protein generation, provides valuable insights for research in protein engineering and related fields, and is expected to promote future exploration and applications.

We summarize our contributions as follows:
- We propose our model (NL2ProGPT), which bridges the gap between protein sequences and biomedical text, achieving the goal of conversational protein design.
- We introduce a way to enrich the informativeness and generalizability of protein descriptions by incorporating structural information from protein representation models.
- We introduce a strategy based on RLAIF (reinforcement learning from AI feedback) that fine-tunes our model under the constraints of structural information.
- Comprehensive experiments demonstrate the effectiveness of NL2ProGPT on text-to-protein generation, surpassing existing protein sequence design methods.

2 RELATED WORKS

**Large language models:** Recently, Large Language Models (LLMs) (Radford et al., 2018; 2019; Brown et al., 2020; OpenAI, 2023) with massive numbers of parameters have achieved remarkable success not only in Natural Language Processing (NLP) (Wei et al., 2023) but also in cross-modal fields such as computer vision (Yu et al., 2023), recommender systems (Hou et al., 2023), biomedical text generation (Luo et al., 2022), and molecule discovery (Jumper et al., 2021). For instance, ChemGPT (Bran et al., 2023), a GPT variant with over a billion parameters, has been introduced to understand and generate small molecules in chemistry.
BioGPT (Luo et al., 2022), a domain-specific generative Transformer language model pre-trained on a large corpus of biomedical literature, was evaluated across six biomedical natural language processing tasks. LLMs therefore demonstrate strong generalization and reasoning abilities, which enable them to excel in various tasks without extensive fine-tuning, reducing computational costs. Consequently, LLMs offer an unprecedented potential to advance protein discovery, particularly in the context of text-to-protein translation.

**Protein generation models:** Protein structure design has recently witnessed significant advancements. It has evolved from traditional methods that relied on multiple sequence alignments (Do et al., 2005; Thompson et al., 1994) to generate protein structures, to deep learning and statistical techniques (Jumper et al., 2021; Baek et al., 2021) that enable more precise modeling and prediction of the three-dimensional spatial structures of proteins. Several BERT-like models, such as TCR-BERT (Wu et al., 2021), epiBERTope (Park et al., 2022), ESM-2 (Lin et al., 2022), LM-DESIGN (Zheng et al., 2023), and ProteinBERT (Brandes et al., 2022), have demonstrated competitiveness on protein representation learning, where they are pre-trained by introducing noise to input tokens and reconstructing the original sequences. Meanwhile, these models can also be adapted for protein generation. Another category of language models relies on autoregressive training, where models predict subsequent tokens based on context. Protein autoregressive language models like ProGen (Madani et al., 2020), ProGen-2 (Nijkamp et al., 2022), RITA (Hesslow et al., 2022), and ProtGPT2 (Ferruz et al., 2022) have also been explored, highlighting the potential of autoregressive Transformers for protein design. One work similar to our model is ProteinDT (Liu et al., 2023), which also leverages textual descriptions for protein design but adopts contrastive learning to align the two modalities.

**Protein credibility prediction:** The best way to confirm protein sequence reliability is through wet-lab experiments such as DMS assays, receptor binding assays, antibody tests, or thermal stability checks. However, such wet experiments require a large amount of manpower and resources, leading to the use of mathematical models for credibility prediction. For example, ProGen (Madani et al., 2020) and ProtGPT2 (Ferruz et al., 2022) employ Rosetta (Park et al., 2016) for conformational energy analysis of proteins, while EvoDiff (Alamdari et al., 2023) utilizes pLDDT and self-consistency perplexity measurements to assess the structural plausibility of generated proteins, as well as secondary structure distributions to evaluate the biological properties of protein sequences. ProtGPT2 and EvoDiff also possess the capability to explore disordered regions in proteins.

### 3 METHODOLOGY

#### 3.1 DATA CONSTRUCTION

In this study, the data preparation phase of our approach involves randomly selecting over 1 million proteins from the Pfam dataset (Bateman et al., 2004) as our training dataset. For the vocabulary representation of amino acids, we adopt the standard 25 amino acid names in IUPAC (Pettit & Powell, 2006). For each protein sequence, we perform two different types of feature representation construction, as shown in Figure 1.
Specifically, we first use the bioinformatics tool InterProScan Jones et al., (2014) to conduct multiple sequence alignments (MSAs) of protein sequences with the Pfam database Bateman et al., (2004), which contains a large number of structural domains and other relevant information, such as protein family and conserved site, to determine the functional domains and features presented in the input sequence. This process helps capture the functional information and domain characteristics of the protein. However, some attributes of protein in the Pfam database are quite sparse for the entire protein space (e.g., less than 150 proteins for the White spot syndrome virus structural envelope protein VP Domain) and the whole distribution appears in a long-tail form, restricting the model from generating diverse results. Therefore, secondly, we use the pre-trained protein representation model (PRM) to extract the embedded features of the protein, and then reduce protein embedded features dimensionality (RPEDF) to obtain its protein representation, thereby achieving the informativeness and generalizability enhancement of protein descriptions and further constraints on structure and function. In our research, we use ESM-2 [Lin et al. (2022)] and OntoProtein [Zhang et al. (2022)] models as examples. It is worth noting that the features extracted by ESM-2 were mainly used for protein structure prediction, making these features have certain structural representation capabilities. OntoProtein is a general framework for building protein pre-trained models using Gene Ontology structures. The features extracted by OntoProtein are often more related to Gene Ontology. The overall data processing process is as follows: \[ E^K_a = \text{PRM}(a), \\ E^K_p = \text{AveragePooling}(E^K_a), \\ E^2_p = \text{UMAP}(E^K_p), \\ C_p = \text{K-means}(E^2_p). \] Specifically, we first extract the residue dimension features \(E^K_a \in \mathbb{R}^{L \times K}\) of a protein \(a\) with residue length \(L\) through the PRM. Then, we perform an AveragePooling operation on the residue dimension features along the sequence dimension to obtain the overall representation feature of the protein \(E^K_p \in \mathbb{R}^K\). Next, we use the UMAP algorithm [McInnes et al. (2018)] to reduce the dimensionality of the protein representation feature \(E^K_p\) and map it to a two-dimensional space to obtain \(E^2_p \in \mathbb{R}^2\). Finally, we use the K-means clustering method to cluster the dimensionally reduced protein representation \(E^2_p\), group the protein data into different clusters, and obtain the cluster representation \(C_p \in \mathbb{R}\). It should be noted that the protein feature representation is first dimensionally reduced through UMAP, and then is clustered through K-means instead of clustering the protein feature representation directly through the clustering method, which can make the entire process more intuitive and reliable. Finally, we manually construct templates and embed the obtained protein representations into the text (for example, if ESM clustering is category 1, it is converted to the text "ESM_1" and embedded in the corresponding position of the template), generating descriptions for each protein. We then feed these constructed templates into ChatGPT [OpenAI (2023)] to obtain diverse protein text descriptions by using several prompts. These descriptions constitute the training dataset for text-protein pairs, serving as a foundation for further research and analysis. 
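The embedding-to-cluster pipeline described above (PRM features → average pooling → UMAP → K-means → text template) can be sketched as follows. This is a minimal illustration rather than the exact pipeline: `embed_residues` is a hypothetical stand-in for the pre-trained protein representation model (ESM-2 or OntoProtein), the number of clusters and the template wording are illustrative assumptions, and we assume the umap-learn and scikit-learn packages.

```python
import numpy as np
import umap                                # umap-learn
from sklearn.cluster import KMeans

def embed_residues(sequence: str) -> np.ndarray:
    """Hypothetical stand-in for the protein representation model (PRM):
    returns an (L, K) array of per-residue features for a length-L sequence."""
    raise NotImplementedError

def build_cluster_descriptions(sequences, n_clusters=100, tag="ESM"):
    # E_p^K: average-pool the residue features over the sequence dimension
    pooled = np.stack([embed_residues(s).mean(axis=0) for s in sequences])
    # E_p^2: map the pooled features to two dimensions with UMAP
    coords = umap.UMAP(n_components=2).fit_transform(pooled)
    # C_p: group the proteins with K-means on the 2-D coordinates
    labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(coords)
    # Embed the cluster identity as a text token (e.g., "ESM_1") in a template
    return [f"This protein belongs to cluster {tag}_{c}." for c in labels]
```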
### 3.2 Self-supervised Fine-tuning

Let \(a = (a_1, \ldots, a_{n_a})\) represent the amino acid sequence, which describes the composition of a protein. Similarly, let \(w = (w_1, \ldots, w_{n_w})\) denote the protein's description. \(A\) and \(W\) can be defined as the input spaces for \(a\) and \(w\), respectively, such that \(a \in A\) and \(w \in W\). By merging the textual description with the amino acid sequence into the sequence \(x = [w : a]\), we create a combined sequence containing protein information and model its probability distribution \(P(x)\). To model this distribution, we apply the probabilistic chain rule and train a neural network to minimize the negative log-likelihood on the dataset \(D\):

\[ P(x) = \prod_{i=1}^{n} P(x_i | x_{<i}), \]

\[ \mathcal{L}(D) = -\frac{1}{|D|} \sum_{k=1}^{|D|} \frac{1}{n_k} \sum_{i=1}^{n_k} \log p_\theta(x^k_i | x^k_{<i}). \]

This training process helps us capture the distribution of combined sequences. We pay special attention to \(P(a|w)\), which represents the distribution of the protein amino acid sequence \(a\) given a text description \(w\). To this end, we conduct an initial fine-tuning phase, as shown in Figure 1, where we utilize the pre-trained GPT-2 model (Radford et al., 2019), with its text understanding capabilities, as the initial state of NL2ProGPT. Subsequently, NL2ProGPT further learns the conditional distribution between amino acid sequences and protein descriptions. This process involves mapping a sequence of tokens into a vector space and processing it through multiple Transformer layers. During fine-tuning, we utilize a cross-entropy loss to compare the model output with the true labels. When generating new sequences, we apply the softmax function to obtain the final token distribution, from which we sample.

3.3 Reinforcement Learning from AI Feedback

Inspired by Ziegler et al. (2019) and Lee et al. (2023), and as shown in Figure 1, we introduce the feedback mechanism of reinforcement learning into the text-to-protein generation task. We define the data distribution of protein text descriptions as \( \mathbb{D} \), and from the model \( P \) defined above we obtain a probabilistic policy \( P(a|w) = P([w : a])/P(w) \): the text description \( w \) of the protein is fixed, and subsequent tokens are then generated according to \( P \). In this paper, we take the initial policy to be \( \pi = P \), and fine-tune \( \pi \) through reinforcement learning to better complete the task. The task is specified by a reward function \( r : W \times A \rightarrow \mathbb{R} \), and we use RL to directly optimize the expected reward:

\[ E_{\pi}[r] = E_{w \sim \mathbb{D},\, a \sim \pi(\cdot|w)}[r(w, a)] \]

Our reward function \( r \) mainly considers two dimensions, namely generality and consistency. On the generality dimension, we investigate the conformational energies of proteins and assess the stability and energy of protein conformations using the Rosetta energy function (Park et al., 2016), also referred to as the potential energy function. This function encompasses interactions and force fields such as van der Waals forces, charge interactions, hydrogen bonds, and virtual side-chain conformations. Generally, protein structures with lower scores are more likely to be closer to the native structure.
The reward for this dimension is calculated as follows:

\[ r_{\text{rosetta}} = \alpha - \ln(r_{\text{raw, rosetta}} + \beta) \]

Here, \( \alpha \) and \( \beta \) are customized bias terms that transform the raw Rosetta score into a reward better suited for training the model. On the consistency dimension, we consider cluster representation scores. When the generated protein is assigned to the target cluster, we award it a score of \( \mu \). When there is no match, we consider the distance between the dimensionality-reduced coordinates of the generated protein and the coordinates of the target cluster center: the larger the distance, the lower the reward. The corresponding rewards are calculated as follows:

\[ r_{\text{esm}} = \begin{cases} \mu, & \text{if } (x_i^{\text{esm}}, y_i^{\text{esm}}) \rightarrow c_i^{\text{esm}} \\ \mu - \sqrt{(x_i^{\text{esm}} - x_{c_i}^{\text{esm}})^2 + (y_i^{\text{esm}} - y_{c_i}^{\text{esm}})^2}, & \text{otherwise} \end{cases} \]

\[ r_{\text{onto}} = \begin{cases} \mu, & \text{if } (x_i^{\text{onto}}, y_i^{\text{onto}}) \rightarrow c_i^{\text{onto}} \\ \mu - \sqrt{(x_i^{\text{onto}} - x_{c_i}^{\text{onto}})^2 + (y_i^{\text{onto}} - y_{c_i}^{\text{onto}})^2}, & \text{otherwise} \end{cases} \]

where \( \mu \) is the hit reward score of the clustering result, and \( c_i^{\text{esm/onto}} \) is the cluster center corresponding to the \( i \)-th protein. Finally, our overall reward score is:

\[ r = \lambda_1 r_{\text{rosetta}} + \lambda_2 r_{\text{esm}} + \lambda_3 r_{\text{onto}} \]

where \( \lambda_1, \lambda_2, \) and \( \lambda_3 \) are hyperparameters used to balance the contribution of each reward term.

3.4 Protein Generation Application

One significant challenge in protein design is inverse protein folding (Hsu et al., 2022), where the objective is to select amino acid sequences that autonomously fold into a predetermined backbone structure. While some existing LLM-based methods (Madani et al., 2020; Ferruz et al., 2022; Nijkamp et al., 2022) have demonstrated success in de novo protein design, none of them enables sequence generation given target structures, due to the lack of structural constraints. In this work, we take a first step toward inverse protein folding with LLMs: we obtain a structural embedding of the target protein with a protein structural representation model, i.e., ESM-2 (Lin et al., 2022), and then inject the embedding's textual expression into the target protein description, enabling our model to produce self-consistent sequences for a target protein, as shown by the results in Section 4.3.

Figure 2: Comparison results of (a) conformational energy distributions, (b) foldability-measured sequence pLDDT distributions, and (c) self-consistency distributions. Model sizes: ESM-2MR (650M), ProGen-2 (764M), ProtGPT2 (738M), Ours (124M).

4 RESULTS AND ANALYSIS

4.1 IMPLEMENTATION DETAILS

Our training dataset comprises 1,001,890 text-protein sequence pairs in total. We extend our training based on the GPT-2 architecture with the following hyperparameters: random seed (42), batch size (12), learning rate (3e-5), training epochs (20) with a warm-up step of 11,000, and the Adam optimizer. For reinforcement learning, we employ Proximal Policy Optimization (PPO) (Schulman et al., 2017) with a learning rate of 1.41e-6 and a ratio threshold of 8.0. We use ESMFold (Lin et al., 2022) to predict protein structures and calculate Rosetta scores based on the weight configuration of ref2015 (Park et al., 2016).
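Putting the pieces of the reward together, the sketch below illustrates how the terms defined in Section 3.3 combine; it is a simplified illustration, not the training code. The raw Rosetta score, the 2-D coordinates, the cluster-centre locations, and the hit indicators are assumed to be supplied by the external tools described above (ESMFold plus Rosetta scoring, and the UMAP/K-means pipeline of Section 3.1); the default values follow the hyperparameters reported next.

```python
import math

def rosetta_reward(raw_score, alpha=8.0, beta=500.0):
    """r_rosetta: lower (better) Rosetta energies yield higher rewards."""
    return alpha - math.log(raw_score + beta)

def cluster_reward(xy, center, hit, mu=1.0):
    """r_esm / r_onto: full score mu on a cluster hit, otherwise mu minus the
    distance to the target cluster centre in the 2-D embedding space."""
    if hit:
        return mu
    return mu - math.hypot(xy[0] - center[0], xy[1] - center[1])

def total_reward(raw_rosetta, esm_xy, esm_center, esm_hit,
                 onto_xy, onto_center, onto_hit,
                 lambdas=(2.0, 1.0, 1.0), mu=1.0):
    """Combined reward r = l1 * r_rosetta + l2 * r_esm + l3 * r_onto."""
    l1, l2, l3 = lambdas
    return (l1 * rosetta_reward(raw_rosetta)
            + l2 * cluster_reward(esm_xy, esm_center, esm_hit, mu)
            + l3 * cluster_reward(onto_xy, onto_center, onto_hit, mu))
```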
The hyperparameters for the various reward functions, denoted as $\alpha$, $\beta$, and $\mu$, are set to 8.0, 500.0, and 1.0, respectively. The reward score weights, $\lambda_1$, $\lambda_2$, and $\lambda_3$, are assigned values of 2.0, 1.0, and 1.0, respectively.

4.2 GENERATION RESULTS EVALUATION

Generality Evaluation. Our research focuses on assessing the quality of the generated protein sequences and examining whether the model can generate novel and structurally sound protein sequences. Therefore, we compare our NL2ProGPT (with and without reinforcement learning) with several state-of-the-art protein sequence generation methods, including ProGen-2 (base) (Nijkamp et al., 2022), ESM-2 Masking Reconstruction (ESM-2MR) (Lin et al., 2022), and ProtGPT2 (Ferruz et al., 2022). ESM-2MR is a method that randomly masks 50% of the protein sequence and then reconstructs the sequence with the ESM-2 model. Random is obtained by randomly mutating 50% of the protein sequences. We randomly generated 1000 protein sequences from these models and used ESMFold (Lin et al., 2022) for structure prediction, followed by Rosetta scoring (Park et al., 2016). The results are shown in Figure 2(a): the protein sequences generated under precise textual constraints are closer to the real-data distribution in terms of Rosetta scores than those of the other models. In particular, the models fine-tuned with reinforcement learning perform best in this regard. Overall, our generated protein sequences may therefore have a higher success rate in wet experiments.

Table 1: Comparison of consistency success rates under different text constraints.

| Method | Domain | ESM Cluster CLS | OntoProtein Cluster CLS |
|-------------------|--------|-----------------|-------------------------|
| ESM-2MR | | 0.887 | 0.791 |
| NL2ProGPT (no RL) | | 0.980 | 0.879 |
| NL2ProGPT | | **0.994** | **0.917** |

Figure 3: Analysis of the three-state secondary structure of the generated sequences, including multivariate distributions of helical and folded structures. (a) Real proteins; (b) Ours, KL=2.4e-05; (c) ESM-2, KL=6.5e-05; (d) Random, KL=2.0e-03.

As shown in Figure 2(b), we further evaluate the quality of the protein structure by calculating the average predicted local distance difference test (pLDDT) to measure the foldability of individual sequences. pLDDT not only reflects ESMFold's degree of confidence in the protein structure but also provides an assessment of the quality of the prediction. We notice that in some cases lower pLDDT scores may be associated with the presence of intrinsically disordered regions (IDRs) in the protein (e.g., as shown in Figure 5). This phenomenon also commonly occurs in many natural proteins. In addition, we use the inverse folding algorithm ESM-IF (Hsu et al., 2022) to redesign each predicted protein structure and calculate self-consistency perplexities for the originally generated sequences, as shown in Figure 2(c). Lower self-consistency perplexity indicates that the generated structure is more consistent with the sequence, while higher self-consistency perplexity may indicate that the generated sequences are more diverse. We observe that NL2ProGPT achieves a good balance between reliability and diversity.

Consistency evaluation. Since our protein generation task is based on textual constraints, it is critical to evaluate whether our model can accurately generate protein sequences that comply with the textual requirements.
Considering that there is currently no existing model for text-to-protein generation to compare against, we adopt ESM-2MR and Random as strong baselines for evaluation. As shown in Table 1, compared with ESM-2MR, our model achieves higher performance under various conditions. Interestingly, although we do not reward hitting protein domains during reinforcement learning fine-tuning, the hit rate of protein domain correlations is also improved after fine-tuning, indicating that our model implicitly performs better in clustering. We also predict the three-state secondary structure of all protein sequences using the ProtT5 model [Elnaggar et al., 2020] and calculate the KL divergence between the secondary structure distributions of the generated sequences and the real data. As shown in Figure 3, the protein secondary structure distribution generated by NL2ProGPT is closer to the real data than that of ESM-2MR. This demonstrates that NL2ProGPT maintains not only high quality when generating proteins but also consistency with the natural distribution.

Additionally, we explore whether NL2ProGPT truly learns the cluster representation. We randomly select 3 protein descriptions and generate 500 protein sequences for each description. Subsequently, we use the ESM-2 and ProtT5 models to extract feature representations from all protein sequences, and calculate the Fréchet ESM-2 distance (FED) [Alamdari et al., 2023] and Fréchet ProtT5 distance (FPD) [Alamdari et al., 2023], respectively. Through the t-SNE visualization shown in Figure 4, we find that the sequences generated by NL2ProGPT are clearly separated into 3 clusters, matching the real distribution, whereas the sequences generated by ESM-2MR are not as clearly separated in the ESM-2 embedding space.

Figure 4: ESM-2 embedding (a-c) & ProtT5 embedding (d-f) distributions of generated protein sequences. Embeddings for real proteins (grey), NL2ProGPT (purple), ESM-2MR (pink), and random mutations (green).

Table 2: TM-scores between protein clusters (ESM_32 and ESM_86), and comparison between real and generated clusters.

| ESM Cluster | Real V.S. Gen. |
|-------------|---------------|
| 32 V.S. 32 | 0.87 |
| 86 V.S. 86 | 0.89 |
| 32 V.S. 86 | 0.79 |

| Type | ESM Cluster | CSF |
|------------|-------------|-----|
| Real | 32 | 0.23 |
| | 86 | 0.85 |
| Gen. | 32 | 0.33 |
| | 86 | 0.89 |

4.3 CASE STUDIES

We adopted a clustering method to represent model features, but we need to consider whether these cluster representations only reflect differences in embedded features, or whether they are biologically interpretable. Taking proteins that contain the ABC transporter-like, ATP-binding domain as an example, the ESM-based clustering assigns them to ESM_32 and ESM_86. We first select 500 real protein samples for each of the two corresponding descriptions and use ESMFold [Lin et al., 2022] to predict the structure of each protein. Next, the TM-scores among these proteins are calculated with TM-align [Zhang & Skolnick, 2005], and the results are shown in the second column of Table 2. We find that the structural similarity between proteins in different clusters is significantly lower than that between proteins in the same cluster. Similarly, we use NL2ProGPT to generate 500 protein samples for each of the two cluster descriptions and also calculate the TM-scores between them. The results are shown in the third column of Table 2.
Compared with real data results, we can observe that NL2ProGPT indeed learns the potential structure knowledge from the ESM cluster representation, producing TM-scores highly similar to the real data’s. Additionally, TM-scores computed by real and generated clusters at the fourth column of Table 2 further verify the high similarity between real and generated proteins. At the same time, we also noticed that the ESM clustering representation also includes some other biological characteristics, such as conserved sites of proteins. Taking the proteins containing ATP-binding domain in the ESM_86 category as an example shown in Table 3, we randomly select... 500 real proteins and find that 85% of them had ABC transporter-like conserved sites, while only 15% in other categories. Similarly, NL2ProGPT also shows this distribution pattern, indicating that NL2ProGPT has also learned this implicit biological knowledge. Overall, our clustering representation has a certain biological meaning, and NL2ProGPT has also learned this implicit biological meaning. In cellular functions, naturally occurring disordered regions in proteins, despite lacking a firm spatial structure, play important roles in many key biological processes, such as protein-protein interactions. Therefore, we investigate whether NL2ProGPT can explore disordered regions of proteins while meeting specific requirements. We screen proteins that have AMP-dependent synthetase/ligase domain and belong to ESM_13 and ONTO_22. As shown in Figure 5, first we use ESMFold [Lin et al., 2022] to predict the three-dimensional structure of the protein sequence, and then calculate the TM-score between the real protein and the generated protein through TMAlign [Zhang & Skolnick, 2005]. In addition, we evaluate the amino acid sequence similarity between the two through sequence alignment. Surprisingly, with only 43% amino acid sequence similarity, the TM-score is as high as 0.87. At the same time, we also use DR-BERT [Nambiar et al., 2023], a tool for predicting intrinsically disordered regions of proteins. The results show that the disordered region corresponding to the protein we have generated has a higher score, while the sequence similarity of the disordered region is only 38%, and the visual difference in the structure of the disordered region is obvious. This demonstrates that NL2ProGPT can successfully explore disordered regions of proteins while maintaining protein structural similarity. 5 CONCLUSION We introduce the NL2ProGPT framework, which aims to bridge the domain gap between natural language and protein language. The framework shows excellent performance and potential in multiple aspects: First, NL2ProGPT can generate macromolecular protein sequences that are close to natural proteins, indicating its potential application value in protein functional design. Secondly, the model demonstrates effectiveness in exploring disordered regions, demonstrating the ability to generate diverse protein sequences. In addition, NL2ProGPT skillfully embeds protein structural information into natural language text and shows excellent performance in natural language to protein translation consistency, emphasizing its ability to convert natural language descriptions into protein sequences. This research provides important innovations in the field of protein design. Integrating natural language and protein language opens up new research avenues for advanced protein design and provides solid support for future protein engineering and biological research. 
REFERENCES Sarah Alamdari, Nitya Thakkar, Rianne van den Berg, Alex Xijie Lu, Nicolo Fusi, Ava Pardis Amini, and Kevin K Yang. Protein generation with evolutionary diffusion: sequence is all you need. bioRxiv, pp. 2023–09, 2023. Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch, R Dustin Schaeffer, et al. Accurate prediction of protein structures and interactions using a three-track neural network. Science, 373(6557):871–876, 2021. Alex Bateman, Lachlan Coin, Richard Durbin, Robert D Finn, Volker Hollich, Sam Griffiths-Jones, Ajay Khanna, Mhairi Marshall, Simon Moxon, Erik LL Sonnhammer, et al. The pfam protein families database. Nucleic acids research, 32(suppl_1):D138–D141, 2004. Andres M Bran, Sam Cox, Andrew D White, and Philippe Schwaller. Chemcrow: Augmenting large-language models with chemistry tools. arXiv preprint arXiv:2304.05376, 2023. Nadav Brandes, Dan Ofér, Yam Peleg, Nadav Rappoport, and Michal Linial. Proteinbert: a universal deep-learning model of protein sequence and function. Bioinformatics, 38(8):2102–2110, 2022. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Chuong B Do, Mahathi SP Mahabhashyam, Michael Brudno, and Serafim Batzoglou. Probcons: Probabilistic consistency-based multiple sequence alignment. Genome research, 15(2):330–340, 2005. Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rehawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas B. Fehér, Christoph Angerer, Martin Steinegger, Debsindhu Bhownik, and Burkhard Rost. Prottrans: Towards cracking the language of life’s code through self-supervised deep learning and high performance computing. bioRxiv, 2020. URL https://api.semanticscholar.org/CorpusID:220495861 Noelia Ferruz, Steffen Schmidt, and Birte Höcker. Protgpt2 is a deep unsupervised language model for protein design. Nature communications, 13(1):4348, 2022. Daniel Hesslow, Niccoló Zanichelli, Pascal Notin, Iacopo Poli, and Debora Marks. Rita: a study on scaling up generative protein sequence models. arXiv preprint arXiv:2205.05789, 2022. Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845, 2023. Chloe Hsu, Robert Verkuil, Jason Liu, Zeming Lin, Brian Hie, Tom Sercu, Adam Lerer, and Alexander Rives. Learning inverse folding from millions of predicted structures. In International Conference on Machine Learning, pp. 8946–8970. PMLR, 2022. Philip Jones, David Binns, Hsin-Yu Chang, Matthew Fraser, Weizhong Li, Craig McAnulla, Hamish McWilliam, John Maslen, Alex Mitchell, Gift Nuka, et al. Interproscan 5: genome-scale protein function classification. Bioinformatics, 30(9):1236–1240, 2014. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583–589, 2021. Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023. 
Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, et al. Language models of protein sequences at the scale of evolution enable accurate structure prediction. BioRxiv, 2022:500902, 2022.
DOerIFfUbs
In Tab. 3, could you explain why you use LLaVA-Bench, which is often used to test multimodal instruction ability? And why did the UTA G/14 model not beat the L/14 model, yet exceed it in conversation and reasoning by a large margin?
Enhancing Vision-Language Model with Unmasked Token Alignment at Scale Anonymous authors Paper under double-blind review Abstract Contrastive pre-training on image-text pairs, exemplified by CLIP, becomes a standard technique for learning multi-modal visual-language representations. Although CLIP has demonstrated remarkable performance, training it from scratch on noisy web-scale datasets is computationally demanding. On the other hand, mask-then-predict pre-training approaches, like Masked Image Modeling (MIM), offer efficient self-supervised learning for single-modal representations. This paper introduces Unmasked Token Alignment (UTA), a method that leverages existing CLIP models to further enhance its vision-language representations. UTA trains a Vision Transformer (ViT) by aligning unmasked visual tokens to the corresponding image tokens from a frozen CLIP vision encoder, which automatically aligns the ViT model with the CLIP text encoder. The pre-trained ViT can be directly applied for zero-shot evaluation even without training on image-text pairs. Compared to MIM approaches, UTA does not suffer from training-finetuning inconsistency and is much more training-efficient by avoiding using the extra [MASK] tokens. Extensive experimental results demonstrate that UTA can enhance CLIP models and outperform existing MIM methods on various uni- and multi-modal benchmarks. 1 Introduction Contrastive pre-training, e.g., CLIP (Radford et al., 2021), with web-scale image-text pairs is becoming the mainstream technique for learning multi-modal visual-language representations. The pre-trained CLIP model has unlocked the potential of various downstream applications, including zero-shot image classification and retrieval, and high-quality text-to-image generation (Rombach et al., 2022; Ramesh et al., 2022). Furthermore, the pre-trained visual and text encoders can be further used for multi-modal and even uni-modal tasks. Unlike classical supervised learning on the human-annotated classification dataset, CLIP and its variants are typically trained on much noisier datasets found on the web such as LAION (Schuhmann et al., 2022) and WIT (Radford et al., 2021), and require an extremely large batch size to work well. Directly training on those datasets from scratch requires a lot of computing resources, making it not accessible to most researchers. In contrast, the mask-then-predict pre-training approaches, e.g., Masked Image Modeling (MIM) (He et al., 2021; Xie et al., 2021) and Masked Language Modeling (MLM) (Devlin et al., 2019), have been shown to be efficient and powerful way to learn single-modal (visual or language) representations in self-supervised manner and can achieve strong performance by fine-tuning the pre-trained models on downstream tasks. The key design of those methods is to predict the masked tokens from the other visible and unmasked input tokens. We ask the question: can we take advantage of both types of methods and further enhance the vision-language representations over CLIP? There are recent works, e.g., EVA (Fang et al., 2023b), utilizing a pre-trained CLIP model for generating the prediction targets for MIM. The resulting vision models show stronger performance than the encoders pre-trained using either only MIM or only CLIP, demonstrating the effectiveness of combining MIM and CLIP for multi-modal feature learning. 
However, those methods are limited to learning single-modal representations, and extra contrastive fine-tuning is needed for multi-modal feature learning, as proposed in EVA-CLIP (Sun et al., 2023). In this paper, we propose an efficient method, Unmasked Token Alignment (UTA), for enhancing the alignment between vision-language representations, which better utilizes existing pre-trained CLIP models. In particular, our method trains a Vision Transformer (ViT) (Dosovitskiy et al., 2021) model from scratch by using the unmasked and sparse visual tokens to align with corresponding image tokens of a frozen CLIP model. For the train-from-scratch ViT model, we randomly mask a portion of image tokens with a reversed masking strategy, where only the unmasked (i.e., kept) tokens (including the [CLS] token) are inputted into the ViT model and aligned with the output of the frozen CLIP visual model. We maximize the cosine similarity for token alignment, and therefore, the ViT model is automatically aligned with the CLIP text encoder in the normalized embedding space. There are two major advantages of using the proposed unmasked token alignment strategy. 1) After pre-training the vision model, we can directly conduct zero-shot classification and retrieval using the normalized features of the trained ViT model and the CLIP text encoder. We illustrate the pre-training and fine-tuning pipeline of UTA in Fig. 1. In contrast, the masked prediction objective used in existing MIM works (EVA (Fang et al., 2023b), BEiT-3 (Wang et al., 2022b)) relies on the [MASK] tokens to predict the CLIP features while the unmasked tokens are not trained to align with the CLIP model as we do. They do not support zero-shot evaluation without contrastive fine-tuning as only the unmasked tokens are used for zero-shot evaluation. 2) MIM works suffer from the training-finetuning inconsistency as a large portion of [MASK] tokens never appear during the fine-tuning. In contrast, our approach better maintains the training-finetuning consistency by only inputting and aligning the unmasked tokens, which are processed both in training and inference. We also empirically find that further adding the masked prediction objective on our UTA results in much worse zero-shot performance. Compared to the existing MIM approach that relies on the [MASK] tokens to predict the CLIP features with the masked prediction objective, our method is conceptually simple and computationally efficient by avoiding introducing the [MASK] tokens, which can reduce the training FLOPs for up to 50%. But at the same time, our pre-trained models are also suitable for fine-tuning on downstream uni-modal or multi-modal tasks. In particular, our pre-trained ViT-L obtains 78.5% zero-shot accuracy on ImageNet without contrastive fine-tuning from image-text pairs. After fine-tuning with the DataComp-1B dataset (Gadre et al., 2023), we obtained 80.8% zero-shot accuracy on ImageNet, surpassing the DataComp baseline and EVA-CLIP by 1.6% and 1.0%, respectively. On the more recent multi-modal benchmark, i.e., LLaVA-Bench (Liu et al., 2023), we outperform CLIP and EVA-02 by 2.2% and 1.4%, respectively. We also fine-tune the pre-trained vision model on object detection and segmentation tasks and demonstrate better results than the competitive EVA-02 (Fang et al., 2023a) models on those tasks. 2 METHOD In this section, we first review the widely used Masked Image Modeling (MIM) pre-training and its more advanced version equipped with a pre-trained CLIP model. 
We then introduce the unmasked token alignment (UTA) approach and its implementation.

2.1 A REVISIT OF MASKED IMAGE MODELING WITH CLIP

MIM methods (Bao et al., 2021; He et al., 2021; Xie et al., 2021) typically use a Vision Transformer (ViT) (Dosovitskiy et al., 2021) for pre-training. An input image is first divided into non-overlapping image patches, which are converted into a sequence of tokens with a projection layer and positional embedding. Then a portion of the tokens is randomly selected for masking, and the masked tokens are filled with a special [MASK] token. The masked image is processed by the ViT to produce the latent representations, and a lightweight head is utilized to predict the original image based on the latent representations. After pre-training, the ViT is used for further fine-tuning on downstream visual tasks. Some recent papers (Peng et al., 2022; Fang et al., 2023b; Hou et al., 2022; Xiao et al., 2022) utilize the hidden features of a pre-trained CLIP model as the reconstruction targets and achieve much better performance than methods using the low-level pixels as the targets (He et al., 2021; Xie et al., 2021). In particular, the unmasked image is fed into the visual encoder of the CLIP model to obtain the full image's hidden feature map. The masked prediction objective is to align the predicted feature with the CLIP's visual feature on the masked tokens.

2.2 UNMASKED TOKEN ALIGNMENT

Using the masked prediction objective to align a train-from-scratch ViT model with the pre-trained CLIP visual model still uses the problematic [MASK] tokens. It causes training-finetuning inconsistency and makes the trained ViT unable to perform zero-shot classification without fine-tuning. To tackle this issue, we propose a simple yet effective solution that does not utilize the extra [MASK] tokens. We align the feature maps of the two models with a dense distillation objective, where the feature maps of the train-from-scratch ViT model and the CLIP vision encoder are obtained with a partial view and a full view, respectively. Specifically, given an input image, we use a random mask to mask a portion of image tokens. Unlike previous works that use the [MASK] tokens to fill in the masked patches, we directly drop the masked tokens and input only the remaining tokens into the ViT encoder. For the pre-trained CLIP model, we input the original image and obtain a full hidden feature map. Then we select the corresponding unmasked (kept) tokens from the CLIP vision encoder's feature map, which are used as the targets for the train-from-scratch ViT encoder. The cosine similarity is maximized for the token alignment. After pre-training, the ViT encoder is aligned with the CLIP vision encoder in the normalized embedding space. Therefore, the ViT encoder is also aligned with the CLIP text encoder, as the CLIP vision and text encoders share the same embedding space. As a result, we can directly conduct the zero-shot evaluation with the pre-trained ViT encoder and CLIP text encoder even without training on the image-text pairs. We show that we can already achieve decent zero-shot performance after the unmasked alignment.

Reversed block-wise masking. Previous works (Bao et al., 2021) typically use block-wise masking to preserve the structure of input images. However, we note that such masking is spatially unequalized: it tends to mask the center area of the images with a much higher probability, and as a result, the tokens in the border area are trained far more often than tokens in the center area.
We introduce a reversed block-wise masking strategy, which first generates a mask with block-wise masking and then randomly reverses the mask with a probability of 0.5. Our masking strategy preserves the structure of the input images and also alleviates the spatial unequalization problem. Pre-training efficiency analysis. As we do not need to process the extra [MASK] tokens during the pre-training, we can largely improve the masked training efficiency. In practice, we use a large mask ratio, e.g., 0.5, for pre-training. Thus, compared to EVA (Fang et al., 2023b) or BEiT v2 (Peng et al., 2022), which require inputting extra [MASK] tokens, our UTA can reduce the training FLOPs by 50%. 2.3 IMPLEMENTATION Vision transformer architecture. We follow EVA-02 (Fang et al., 2023a) to introduce architectural modifications on vision transformer for improving the performance and training stability. In particular, we add extra relative positional embedding introduced by Su et al. (2021) in the self-attention layer. We replace the original feedforward network (FFN) in vision transformer with the SwiGLU variant introduced by Shazeer (2020). Moreover, we add an extra LayerNorm (Ba et al., 2016) layer in the FFN to stabilize the training as proposed by Wang et al. (2022a). CLIP teacher model. Instead of using original CLIP models for pre-training, we follow Fang et al. (2023a) to use a better-performing CLIP model, i.e., giant-sized EVA-CLIP model (Sun et al., 2023), for providing the alignment targets during pre-training. Our experiments show that the stronger CLIP model can bring large zero-shot accuracy improvements. Additionally, we find the pre-trained ViT-L model can surpass the giant-sized CLIP model after contrastive fine-tuning. 3 EXPERIMENTAL SETUP To demonstrate the effectiveness of the proposed Unmasked Token Alignment (UTA), we conduct experiments to pre-train ViT to align with CLIP vision-language representation on large-scale dataset and apply the pre-trained models to downstream multi-modal and uni-modal tasks. The multi-modal tasks include zero-shot classification, zero-shot retrieval, and the more recent LLaVA-Bench (Liu et al., 2023). The uni-modal tasks include ImageNet classification (Deng et al., 2009), object detection, and segmentation. Pre-training. All ViT models are pre-trained on ImageNet-21K (Deng et al., 2009) dataset using $224 \times 224$ input resolution. Unless otherwise specified, we pre-train for 150 epochs with batch size of 4096. We use AdamW (Loshchilov & Hutter, 2017) optimizer with weight decay of 0.05. The learning rate is linearly increased to $1.5 \times 10^{-3}$ with 1 epoch of training and decays to $10^{-5}$ with cosine schedule (Loshchilov & Hutter, 2016). By default, we use reversed block-wise masking with mask ratios of 0.4 and 0.5 for base and large models, respectively. Contrastive fine-tuning. Although the pre-trained ViT model can already demonstrate excellent zero-shot capabilities even without contrastive fine-tuning, we also perform a much shorter contrastive fine-tuning similar to other CLIP counterparts to further improve its zero-shot performance, especially for the out-of-distribution tasks. In particular, we initialize the vision and text encoders with the pre-trained ViT model and CLIP text encoder. Then we perform contrastive fine-tuning on the DataComp-1B dataset (Gadre et al., 2023). 
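To make the pre-training recipe concrete, the following is a minimal PyTorch-style sketch of one UTA step with reversed block-wise masking, written from the description above rather than from the released code. `student` and `teacher` are placeholder interfaces (a train-from-scratch ViT that accepts only the kept tokens, and a frozen CLIP vision encoder returning per-token features for the full image); the exact block sampler and the handling of the [CLS] token are simplified and will differ in the actual implementation.

```python
import torch
import torch.nn.functional as F

def reversed_block_mask(num_tokens: int, ratio: float = 0.5) -> torch.Tensor:
    """Block-wise mask over a square token grid, flipped with probability 0.5
    ("reversed" masking). True marks a masked, i.e. dropped, token."""
    side = int(num_tokens ** 0.5)
    target = int(num_tokens * ratio)
    grid = torch.zeros(side, side, dtype=torch.bool)
    while int(grid.sum()) < target:                 # block-wise masking: grow random rectangles
        h = int(torch.randint(1, side // 2 + 1, (1,)))
        w = int(torch.randint(1, side // 2 + 1, (1,)))
        top = int(torch.randint(0, side - h + 1, (1,)))
        left = int(torch.randint(0, side - w + 1, (1,)))
        grid[top:top + h, left:left + w] = True
    if torch.rand(()) < 0.5:                        # "reversed": flip the mask half of the time
        grid = ~grid
    mask = grid.flatten()
    # Re-balance to exactly `target` masked tokens so batched gathering stays rectangular
    # (a simplification for this sketch).
    excess = int(mask.sum()) - target
    pool = (mask if excess > 0 else ~mask).nonzero().squeeze(1)
    flip = pool[torch.randperm(pool.numel())[:abs(excess)]]
    mask[flip] = excess < 0
    return mask

def uta_step(student, teacher, images, mask_ratio=0.5):
    """One UTA step: align the student's unmasked tokens with the frozen teacher."""
    B, N = images.size(0), student.num_patches          # hypothetical attribute
    mask = torch.stack([reversed_block_mask(N, mask_ratio) for _ in range(B)])
    keep = (~mask).to(images.device)                     # tokens the student actually sees
    with torch.no_grad():
        target = teacher(images)                         # (B, N, D) full-view CLIP token features
    pred = student(images, keep)                         # (B, N_keep, D), hypothetical interface
    target = target[keep].view(B, -1, pred.size(-1))     # teacher features at the kept positions
    return 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()
```

With a mask ratio of 0.5 the student processes roughly half of the tokens and no [MASK] embeddings at all, which is where the FLOPs saving discussed above comes from.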
The temperature parameter in the contrastive loss (Radford et al., 2021) is fixed to 0.01 during our training as initially the vision encoder and text encoder are already aligned. Fine-tuning. For evaluation on the LLaVA-Bench (Liu et al., 2023) and uni-modal tasks, we only keep the pre-trained ViT. On LLaVA-Bench, we follow the default settings to first train a projection layer on CC-3M dataset (Sharma et al., 2018) for feature alignment and then fine-tune the project layer and Large Language Model (LLM) (Chiang et al., 2023) on LLaVA-Instruct-150K dataset (Liu et al., 2023). For object detection and instance segmentation tasks, we adopt the Cascade Mask R-CNN (He et al., 2017; Cai & Vasconcelos, 2019) framework and separately fine-tune on the COCO (Lin et al., 2014) and LVIS (Gupta et al., 2019) datasets. For semantic segmentation task, we adopt the UperNet (Xiao et al., 2018) framework and fine-tune on the ADE20K (Zhou et al., 2017) dataset. Please refer to the appendix [A.1] for more detailed configurations. 4 MAIN RESULTS In this section, we compare the proposed Unmasked Token Alignment (UTA) to prior arts on various benchmarks. We first conduct comparisons between UTA and previous zero-shot results in Sec. 4.1. We then compare UTA with other pre-training methods on LLaVA-Bench in Sec. 4.2. To show the transferability of UTA, we present the transfer learning results on core vision tasks in Sec. 4.3. 4.1 ZERO-SHOT RESULTS We conduct zero-shot classification and retrieval and compare the results with other CLIP variants (Radford et al., 2021; Cherti et al., 2023; Sun et al., 2023). In Tab. 1, we show that the pre-trained ViT-B model can obtain 76.0% zero-shot accuracy on ImageNet-1K even without training on image-text pairs. After fine-tuning with only 2B image-text samples, our ViT-B obtains 77.0% zero-shot accuracy on ImageNet-1K, surpassing Open-CLIP (Cherti et al., 2023) and EVA-CLIP (Sun et al., 2023) by 2.3% and 1.0% respectively. On the challenging ObjectNet (Barbu et al., 2019) dataset, we outperform Open-CLIP and EVA-CLIP by 11.3% and 6.0% points respectively. Our pre-trained ViT-L model obtains 78.5% zero-shot accuracy on ImageNet-1K. After fine-tuning with 4B samples, we achieve 80.8% accuracy, which outperforms Open-CLIP and EVA-CLIP by 5.3% and 1.0%. Table 1: Zero-shot classification performance on ImageNet-1K (IN-1K), ImageNet-A (IN-A) (Hendrycks et al., 2021b), ImageNet-R (IN-R) (Hendrycks et al., 2021a), ImageNet-V2 (IN-V2) (Recht et al., 2019), ImageNet-Sketch (IN-S) (Wang et al., 2019), and ObjectNet (Barbu et al., 2019). We also report the average accuracy over the 6 datasets. 
| Method | Model | # I-T Pairs | IN-1K | IN-A | IN-R | IN-V2 | IN-S | ObjectNet | Average |
|---|---|---|---|---|---|---|---|---|---|
| CLIP | B/16@224 | 13B | 68.3 | 50.0 | 77.7 | 61.9 | 48.2 | 55.3 | 60.2 |
| Open-CLIP | B/16@224 | 34B | 70.2 | 38.2 | 80.6 | 62.3 | 56.1 | 56.0 | 60.6 |
| EVA-02-CLIP | B/16@224 | 8B | 74.7 | 54.1 | 82.5 | 67.0 | 57.7 | 62.3 | 66.4 |
| UTA | B/14@224 | 0B | 76.0 | 54.2 | 76.7 | 68.1 | 52.5 | 63.6 | 65.2 |
| UTA | B/16@224 | 2B | 77.0 | 59.8 | 84.1 | 69.5 | 60.2 | 68.3 | 69.8 |
| CLIP | L/14@224 | 13B | 74.0 | 48.0 | 86.5 | 66.4 | 61.8 | 61.1 | 66.3 |
| Open-CLIP | L/14@224 | 32B | 75.5 | 70.8 | 87.8 | 69.9 | 59.6 | 69.0 | |
| DataComp | L/14@224 | 13B | 79.2 | 69.6 | 90.8 | 72.1 | 68.0 | 74.3 | 75.7 |
| EVA-02-CLIP | L/14@224 | 4B | 79.8 | 76.1 | 92.7 | 72.9 | 68.1 | 75.3 | 77.5 |
| UTA | L/14@224 | 0B | 78.5 | 69.4 | 89.4 | 71.7 | 63.9 | 72.7 | 74.3 |
| UTA | L/14@224 | 4B | 80.8 | 79.1 | 92.3 | 73.7 | 68.4 | 77.6 | 78.6 |
| CLIP | L/14@336 | 13B | 76.6 | 77.5 | 89.0 | 70.9 | 61.0 | 72.0 | 74.5 |
| EVA-02-CLIP | L/14@336 | 6B | 80.4 | 82.9 | 93.2 | 73.8 | 68.9 | 78.4 | 79.6 |
| UTA | L/14@336 | 4B | 81.4 | 84.2 | 92.9 | 74.6 | 69.1 | 80.1 | 80.4 |
| Open-CLIP | g/14@224 | 34B | 78.5 | 60.8 | 90.2 | 71.7 | 67.5 | 69.2 | 73.0 |
| EVA-01-CLIP | g/14@224 | 11B | 79.3 | 74.1 | 92.5 | 72.1 | 68.1 | 75.3 | 76.9 |
| UTA | g/14@224 | 0B | 79.3 | 73.5 | 91.6 | 72.6 | 66.7 | 74.6 | 76.4 |
| UTA | g/14@224 | 2B | 81.5 | 81.9 | 93.5 | 74.8 | 69.6 | 79.7 | 80.2 |

Table 2: Zero-shot retrieval performance on Flickr30k (Young et al., 2014) and COCO (Lin et al., 2014). R@1, R@5, and R@10 denote the recall performance among top-1, top-5, and top-10, respectively. The first two column groups report text retrieval, the last two image retrieval.

| Method | Model | # I-T Pairs | Flickr30k (text retrieval) R@1 | R@5 | R@10 | COCO (text retrieval) R@1 | R@5 | R@10 | Flickr30k (image retrieval) R@1 | R@5 | R@10 | COCO (image retrieval) R@1 | R@5 | R@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CLIP | B | 13B | 81.9 | 96.2 | 98.8 | 52.4 | 76.8 | 84.7 | 62.1 | 85.6 | 91.8 | 33.1 | 58.4 | 69.0 |
| Open-CLIP | B | 34B | 86.3 | 97.9 | 99.4 | 59.4 | 81.8 | 88.6 | 69.8 | 90.4 | 94.6 | 42.3 | 66.7 | 77.1 |
| EVA-02-CLIP | B | 8B | 85.7 | 96.7 | 98.9 | 58.7 | 80.7 | 88.2 | 71.2 | 91.0 | 94.7 | 42.4 | 66.9 | 76.3 |
| UTA | B | 0B | 88.4 | 98.5 | 99.5 | 63.4 | 83.9 | 90.7 | 75.5 | 91.5 | 96.4 | 46.8 | 71.3 | 80.8 |
| UTA | B | 2B | 91.9 | 98.9 | 99.7 | 65.7 | 85.0 | 90.5 | 74.5 | 93.1 | 96.4 | 45.9 | 70.5 | 79.3 |
| CLIP | L | 13B | 85.2 | 97.3 | 99.0 | 56.5 | 79.3 | 86.7 | 65.2 | 87.5 | 92.0 | 36.5 | 61.0 | 71.1 |
| Open-CLIP | L | 34B | 88.7 | 98.4 | 99.2 | 62.1 | 83.4 | 90.3 | 75.0 | 92.5 | 96.2 | 46.1 | 70.7 | 79.4 |
| EVA-02-CLIP | L | 4B | 89.7 | 98.6 | 99.2 | 63.7 | 84.3 | 90.7 | 77.3 | 93.6 | 97.5 | 47.1 | 71.2 | 79.7 |
| UTA | L | 0B | 91.2 | 98.7 | 99.8 | 66.6 | 80.5 | 91.5 | 78.3 | 94.1 | 96.9 | 49.5 | 73.4 | 81.9 |
| UTA | L | 4B | 93.0 | 99.0 | 99.7 | 66.5 | 86.9 | 92.2 | 77.4 | 93.8 | 96.6 | 48.7 | 72.3 | 80.9 |
| Open-CLIP | g | 34B | 91.4 | 99.2 | 99.6 | 66.4 | 86.0 | 91.8 | 77.7 | 94.1 | 96.9 | 48.8 | 73.3 | 81.5 |
| EVA-01-CLIP | g | 11B | 91.6 | 99.3 | 99.8 | 68.2 | 87.5 | 92.5 | 78.9 | 94.5 | 96.9 | 50.3 | 74.0 | 82.1 |
| UTA | g | 0B | 92.2 | 99.1 | 99.7 | 68.0 | 87.2 | 92.4 | 79.0 | 94.5 | 97.2 | 50.3 | 74.2 | 82.5 |
| UTA | g | 2B | 93.2 | 99.4 | 99.8 | 68.2 | 87.6 | 93.0 | 78.2 | 94.4 | 96.7 | 48.7 | 72.9 | 81.1 |
Compared to the strong EVA-CLIP, we achieve an average improvement of 1.1% over the 6 evaluation datasets. We also fine-tune with 336×336 input resolution using 200M samples, and we obtain an average improvement of 1.8 points on the 6 evaluation datasets. We find that fine-tuning on the larger but noisier DataComp-1B dataset (Gadre et al., 2023) can greatly boost the performance on the ImageNet robust variants.

Table 2 presents the zero-shot retrieval results on the Flickr30k (Young et al., 2014) and COCO (Lin et al., 2014) datasets. We find that the pre-trained model can already outperform other CLIP models on all evaluated metrics. In particular, the base model improves over Open-CLIP and EVA-CLIP by an average of 4% top-1 recall over the two datasets. For the large model, we improve over Open-CLIP and EVA-CLIP by an average of 3.4% and 1.8% top-1 recall, respectively. We also find that further fine-tuning on the DataComp-1B dataset can improve the text retrieval performance but also degrade the image retrieval performance.

Question: What is the position of the skateboard in the image?
EVA: The skateboard is on the ground, with the person standing on top of it.
UTA: The skateboard is positioned upright, with the wheels off the ground, and the deck facing upwards.
Question: What is the man sitting in the middle doing in the image?
EVA: The man in the image is sitting down, holding a glass of beer, and making a gesture or a sign with his hand.
UTA: The man in the image is sitting down, talking on his cell phone, and holding his hands up while doing so.
Figure 2: Qualitative examples generated by LLaVA models fine-tuned with EVA-02 and UTA.

Table 3: Results on LLaVA-Bench (Liu et al., 2023). The results of CLIP and EVA-02 are obtained by our re-implementation with official checkpoints.

| Method | Model | Conversation | Detail | Reasoning | Overall |
|--------|-------|--------------|--------|------------|---------|
| CLIP | B/16 | 74.5 | **69.9** | 90.3 | 78.3 |
| EVA-02 | B/16 | 75.3 | 61.1 | **91.8** | 76.2 |
| UTA | B/16 | **80.8** | 66.2 | 88.8 | **78.8** |
| CLIP | L/14 | 78.7 | 70.4 | 90.0 | 79.8 |
| EVA-02 | L/14 | 80.4 | 71.6 | 91.1 | 80.6 |
| UTA | L/14 | **81.4** | **72.2** | **91.8** | **82.0** |
| EVA-01 | g/14 | 79.9 | 72.2 | 91.0 | 80.8 |
| UTA | g/14 | **84.1** | 71.3 | **93.5** | **83.1** |

4.2 Multi-Modal Results

The emergent multi-modal capabilities of GPT-4 (OpenAI, 2023) have attracted widespread attention, and there are various re-implementations of such capabilities using open-sourced vision and large language models (Liu et al., 2023; Zhu et al., 2023). We adopt the LLaVA framework and evaluate pre-trained models on the LLaVA-Bench. The results are presented in Tab. 3. Note that all the results are obtained by fixing the vision encoders' parameters, which can directly reflect the representation quality of the pre-trained model. Notably, our model achieves the best results in the overall category. Compared to the original CLIP large model (Radford et al., 2021), we obtain an overall improvement of 2.2% accuracy. Using the same pre-training dataset and iterations, we also outperform EVA-02 (Fang et al., 2023a) by 1.4%. We compare the outputs generated by the two LLaVA models and highlight the difference in Fig. 2. We show that our approach can capture more fine-grained details to produce better answers.
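Since the zero-shot results above (Tab. 1 and Tab. 2) are obtained without any training on image-text pairs, the snippet below sketches how such an evaluation can be run with the UTA-aligned ViT and the frozen CLIP text encoder. It is only a minimal illustration: `vit`, `clip_text`, and `tokenize` are placeholder interfaces, and the single prompt template stands in for the usual prompt ensembling.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(vit, clip_text, tokenize, images, class_names,
                       template="a photo of a {}."):
    """CLIP-style zero-shot classification with a UTA-aligned vision encoder."""
    prompts = tokenize([template.format(name) for name in class_names])
    text_emb = F.normalize(clip_text(prompts), dim=-1)   # (num_classes, D)
    # UTA aligns the ViT with the CLIP embedding space, so its image feature can be
    # matched against the frozen CLIP text embeddings directly, without fine-tuning.
    img_emb = F.normalize(vit(images), dim=-1)            # (batch, D)
    logits = img_emb @ text_emb.t()                        # cosine similarities
    return logits.argmax(dim=-1)                           # predicted class per image
```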
4.3 Core Vision Task Results

Prior arts (Bao et al., 2021; He et al., 2021) demonstrate that the MIM pre-trained models have superior performance after fine-tuning to downstream tasks, including ImageNet classification, object detection, image segmentation, etc. There are some recent papers (Xie et al., 2023) that show the mask-then-predict objective is the key to such fine-tuning capabilities. In our empirical evaluation, we show that our UTA pre-training also has such capabilities. We present the results of ImageNet classification in Tab. 4. Compared to recent MIM works (e.g., BEiT v2 (Peng et al., 2022)) which also utilize a pre-trained CLIP model for pre-training, we obtain an improvement of ~2% points after fine-tuning. We can also largely outperform the CLIP model for both the zero-shot and fine-tuning accuracy. Compared with EVA-02, although we slightly improve the fine-tuning accuracy, we can largely improve the zero-shot accuracy.

We show the results of performing object detection and instance segmentation on COCO and LVIS datasets in Tab. 5. Compared to the MAE pre-training (He et al., 2021), we find our UTA can improve the APbox for more than 1% mAP on COCO and 6% mAP on the more challenging LVIS. Additionally, our approach also performs better than EVA-02, which demonstrates 2.0% and 0.6% mAP improvements on LVIS for the base and large models respectively.

Table 4: ImageNet classification and ADE20K segmentation results. ZS and FT denote the zero-shot and fine-tuning top-1 accuracy on ImageNet respectively. † denotes the model after contrastive fine-tuning.

| Method | Model | #Params | ImageNet Input Size | ZS | FT | ADE20K Input Size | mIoU |
|---|---|---|---|---|---|---|---|
| MAE | B | 86M | 224 | - | 83.6 | 512 | 48.1 |
| BEiT v2 | B | 86M | 224 | - | 85.5 | 512 | 53.1 |
| CLIP | B | 86M | 224 | 68.3 | 85.7 | - | - |
| EVA-02 | B | 86M | 224 | - | 87.4 | 512 | 55.3 |
| UTA | B | 86M | 224 | 76.0 | 87.5 | 512 | 55.6 |
| UTA† | B | 86M | 224 | 77.0 | 87.4 | 512 | 55.1 |
| MAE | L | 304M | 224 | - | 85.9 | 512 | 53.6 |
| BEiT v2 | L | 304M | 224 | - | 87.3 | 512 | 56.7 |
| CLIP | L | 304M | 224 | 74.0 | 88.0 | - | - |
| EVA-02 | L | 304M | 224 | - | 89.0 | 512 | 58.3 |
| UTA | L | 304M | 224 | 78.5 | 89.2 | 512 | 58.8 |
| EVA-CLIP | g | 1011M | 224 | 79.3 | 89.1 | 512 | 57.4 |

Table 5: Object detection and instance segmentation results on COCO and LVIS datasets. † denotes the model after contrastive fine-tuning.

| Method | Model | #Enc. Params | COCO APbox | COCO APmask | LVIS APbox | LVIS APmask |
|---|---|---|---|---|---|---|
| ViTDet | B | 86M | 54.0 | 46.7 | 43.0 | 38.9 |
| EVA-02 | B | 86M | 55.5 | 47.1 | 47.1 | 41.4 |
| UTA | B | 86M | **55.8** | **47.7** | **49.1** | **43.1** |
| UTA† | B | 86M | 55.6 | 47.5 | 47.9 | 42.2 |
| ViTDet | L | 304M | 57.6 | 50.0 | 49.2 | 44.5 |
| EVA-02 | L | 304M | 58.5 | 50.3 | 55.3 | 48.6 |
| UTA | L | 304M | **58.7** | **50.5** | **55.9** | **49.5** |
| EVA-CLIP | g | 1011M | 59.1 | 51.1 | 56.4 | 51.3 |

5 ABLATION STUDIES

In this section, we conduct ablation studies to evaluate the impact of different design choices of our proposed Unmasked Token Alignment (UTA). Unless otherwise specified, we use the ViT-B backbone and pre-train it for 90 epochs on the ImageNet-21K (Deng et al., 2009) dataset.

Pre-training objectives. We thoroughly explore the effect of pre-training objectives and show the results in Tab. 6.
We also explore combining UTA and MIM by inputting masked and unmasked tokens simultaneously and conducting token alignment for unmasked tokens and feature prediction for masked tokens. We find that UTA performs best on all evaluated benchmarks while requiring the least computation cost. In particular, we find the improvements on LVIS are most significant compared to other approaches. Moreover, we show that combining UTA and MIM leads to much worse zero-shot accuracy but similar fine-tuning accuracy on ImageNet compared with using UTA alone. We suspect the training-finetuning inconsistency introduced by the extra [MASK] tokens is more significant when the backbone is fixed for evaluation.

Table 6: The effect of pre-training objectives. FD denotes the re-implementation of the Feature Distillation method (Wei et al., 2022). ZS and FT denote the zero-shot and fine-tuned top-1 accuracy on ImageNet respectively.

| Config | FLOPs | ImageNet ZS | ImageNet FT | COCO APbox | COCO APmask | LVIS APbox | LVIS APmask | ADE20K mIoU |
|---|---|---|---|---|---|---|---|---|
| FD | 1.0× | 74.7 | 87.2 | 55.2 | 47.0 | 47.9 | 42.2 | 54.7 |
| MIM | 1.0× | - | 86.9 | 54.7 | 46.6 | 46.6 | 41.1 | 54.3 |
| UTA+MIM | 1.0× | 70.7 | 87.2 | 55.4 | 47.1 | 47.7 | 42.0 | 54.8 |
| UTA | 0.6× | 75.0 | 87.3 | 55.7 | 47.4 | 48.9 | 43.1 | 55.4 |

Table 7: The effect of positional embedding. PE denotes w/ or w/o positional embedding during pre-training.

| Method | PE | ImageNet ZS | ImageNet FT | COCO APbox | COCO APmask | ADE20K mIoU |
|---|---|---|---|---|---|---|
| MIM | ✗ | - | 85.8 | 50.9 | 43.2 | 51.8 |
| MIM | ✔ | - | 86.9 | 54.7 | 46.6 | 54.3 |
| Performance gap | | | -1.1 | -3.8 | -3.4 | -2.5 |
| UTA | ✗ | 73.8 | 86.7 | 53.8 | 45.7 | 53.6 |
| UTA | ✔ | 75.0 | 87.3 | 55.7 | 47.4 | 55.4 |
| Performance gap | | -1.2 | -0.6 | -1.9 | -1.7 | -1.8 |

**Positional embedding.** Compared to UTA, which directly conducts token alignment on unmasked tokens, MIM relies on the unmasked tokens to predict the features of the masked tokens. We speculate that the MIM approach is more susceptible to the influence of positional embedding. We conduct an experiment to remove all the positional embedding in the ViT architecture during pre-training. For fine-tuning, we add the positional embedding back but initialize it with zero to ensure that the initial state of fine-tuning is the same as the last state of pre-training. As shown in Tab. 7, we find that the performance drop of UTA is much smaller compared to MIM. In particular, MIM has a 3.8 APbox and 3.4 APmask performance drop on COCO, while UTA only drops by about half as much.

**Different pre-trained CLIP models.** We study the impact of different pre-trained CLIP models on downstream performance. As shown in Tab. 8, we find that using a stronger CLIP model can lead to better downstream performance. Additionally, we observe that the performance gap was not as significant on COCO and ADE20K, probably because the classes of those datasets can already be easily classified by CLIP-L/14.

**UTA for pre-training the text encoder.** While we perform UTA to pre-train only the vision encoder by default, we also explore using it to pre-train a text encoder from scratch. We train a smaller text encoder on DataComp-1B for 1 epoch. Empirically, we only obtain 54.5% zero-shot accuracy after pre-training, which is much lower than using the CLIP text encoder. Thus, we decide not to perform UTA for pre-training the text encoder.
**Mask ratio and mask type.** We examine the effect of the mask ratio and mask type on the final performance. As shown in Tab. 9 (left), we find that using a mask ratio of 0.4 achieves the best computation-performance trade-off. Additionally, using the block-reversed masking performs best on all evaluated datasets.

Table 8: The effect of the pre-trained CLIP model. Teacher ZS denotes the zero-shot ImageNet accuracy of the CLIP teacher itself.

| CLIP Model | Teacher ZS | ImageNet ZS | ImageNet FT | COCO APbox | COCO APmask | ADE20K mIoU |
|---|---|---|---|---|---|---|
| CLIP-L/14 | 74.0 | 67.7 | 86.6 | 55.6 | 47.3 | 53.7 |
| EVA-CLIP-g/14 | 79.3 | 75.0 | 87.3 | 55.7 | 47.4 | 55.4 |

Table 9: The effect of mask ratio (left) and mask type (right). Block-R denotes the reversed block-wise masking. We use a mask ratio of 0.5 for the mask type ablation.

| Ratio | FLOPs | ImageNet ZS | ImageNet FT | COCO APbox | COCO APmask | ADE20K mIoU |
|---|---|---|---|---|---|---|
| 0.0 | 1.0× | 74.7 | 87.2 | 55.2 | 47.0 | 54.7 |
| 0.4 | 0.6× | 75.0 | 87.3 | 55.7 | 47.4 | 55.4 |
| 0.5 | 0.5× | 74.8 | 87.3 | 55.3 | 46.8 | 55.0 |
| 0.7 | 0.3× | 74.0 | 87.0 | 55.0 | 46.6 | 54.8 |

| Mask | ImageNet ZS | ImageNet FT | COCO APbox | COCO APmask | ADE20K mIoU |
|---|---|---|---|---|---|
| Block | 74.2 | 87.2 | 55.3 | 46.6 | 47.8 |
| Random | 74.7 | 87.2 | 55.1 | 46.4 | 47.7 |
| Block-R | 74.8 | 87.3 | 55.3 | 46.8 | 55.0 |

## 6 RELATED WORKS

**Vision (-Language) Foundation Models.** The Transformer architecture (Vaswani et al., 2017) has rapidly evolved to become a pivotal paradigm in both Computer Vision (CV) and Natural Language Processing (NLP). Models like BERT (Devlin et al., 2019) and the GPT series (Floridi & Chiriatti, 2020), built upon the Transformer architecture, have exhibited exceptional prowess across various language tasks. Simultaneously, in the field of CV, Vision Transformers (ViTs) (Dosovitskiy et al., 2021) have emerged as potent contenders, gradually displacing CNNs in various downstream vision tasks. Furthermore, the fusion of text and images in a shared embedding space, exemplified by CLIP (Radford et al., 2021), has rendered the Transformer an indispensable tool for versatile uni- and multi-modal tasks. As training CLIP requires a large amount of computation resources, FLIP (Li et al., 2023b) proposes to mask the visual input tokens to accelerate the training process of CLIP. Recently, large-scale visual pre-training methods based on the Transformer architecture, such as BEiT-3 (Wang et al., 2022a) and EVA (Sun et al., 2023), have continuously pushed the performance boundaries of various downstream visual tasks. In this work, we introduce a simple yet effective large-scale pre-training method for enhancing the multi-modal representations and demonstrate competitive performance on various uni- and multi-modal tasks.

**Masked Image Modeling (MIM).** MIM is a popular pretext task where the vision model learns rich visual representations by conducting reconstruction from corrupted images. Its initial introduction can be traced back to ViT (Dosovitskiy et al., 2021) and iGPT (Chen et al., 2020). Subsequent advancements in the field, exemplified by the notable contributions of BEiT (Bao et al., 2021), MAE (He et al., 2021), and others (Wang et al., 2022b; Liu et al., 2022; Xie et al., 2021), have consistently elevated the performance of the MIM method across diverse downstream tasks.
Recent works (Fang et al., 2023b; Peng et al., 2022; Hou et al., 2022; Xiao et al., 2022) have highlighted the utilization of carefully devised reconstruction targets, like the hidden features from a pre-trained CLIP model, which has been shown to facilitate MIM in acquiring superior visual representations. However, these methods rely on the [MASK] tokens to predict the masked features/pixels which introduces the training-finetuning inconsistency. While UMT (Li et al., 2023a) does not use the [MASK] tokens and only processes the unmasked tokens, it focuses on training video models and does not align with the CLIP text model without contrastive fine-tuning. In contrast, our UTA automatically aligns the train-from-scratch ViT model with CLIP text model and enables zero-shot evaluation even without training on image-text pairs. ### 7 Conclusion In this paper, we introduce the Unmasked Token Alignment (UTA) method, which enhances the alignment between vision and language representations by leveraging pre-trained CLIP models. UTA trains a Vision Transformer (ViT) by aligning the unmasked tokens with corresponding visual tokens of a frozen CLIP model. UTA does not suffer from training-finetuning inconsistency and is training-efficient by avoiding using extra [MASK] tokens. The pre-trained ViT model and CLIP text model can be directly applied for zero-shot evaluation even without contrastive training on image-text pairs. Experimental results demonstrate the effectiveness of UTA across various uni- and multi-modal downstream tasks, outperforming existing MIM and CLIP methods. **Limitations** While the proposed UTA method presents promising results and advantages, it also has some limitations. Firstly, UTA relies on the availability of a pre-trained CLIP model, which may limit its applicability in scenarios where such models are not accessible or suitable. Additionally, although UTA achieves strong zero-shot performance without contrastive fine-tuning, it still benefits from further fine-tuning on large-scale image-text pairs, especially for robustness evaluation. While UTA shows great potential for enhancing multi-modal representations, further research is needed to address these limitations and improve its applicability in a wider range of applications. REFERENCES Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016. Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. In *ICLR*, 2021. Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. *Advances in neural information processing systems*, 32, 2019. Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: High quality object detection and instance segmentation. *IEEE transactions on pattern analysis and machine intelligence*, 43(5):1483–1498, 2019. Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *ICML*, 2020. Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 2818–2829, 2023. 
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K. Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, 2009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*, 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*, 2021. Yuxin Fang, Quan Sun, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva-02: A visual representation for neon genesis. *arXiv preprint arXiv:2303.11331*, 2023a. Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 19358–19369, 2023b. Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. *Minds and Machines*, 30:681–694, 2020. Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. *arXiv preprint arXiv:2304.14108*, 2023. Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 5356–5364, 2019. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In *Proceedings of the IEEE international conference on computer vision*, pp. 2961–2969, 2017. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll’ar, and Ross B. Girshick. Masked autoencoders are scalable vision learners. In *CVPR*, 2021.
cmcD05NPKa
In claim U2, one can see that 21 is wrongly classified into $C_4$. As a result I wonder, for other bases that are products of larger primes, e.g., again, $2021=43\times 47$, can this phenomenon still be observed?
LEARNING THE GREATEST COMMON DIVISOR: EXPLAINING TRANSFORMER PREDICTIONS François Charton Meta AI fcharton@meta.com ABSTRACT The predictions of small transformers, trained to calculate the greatest common divisor (GCD) of two positive integers, can be fully characterized by looking at model inputs and outputs. As training proceeds, the model learns a list \( D \) of integers, products of divisors of the base used to represent integers and small primes, and predicts the largest element of \( D \) that divides both inputs. Training distributions impact performance. Models trained from uniform operands only learn a handful of GCD (up to 38 GCD ≤ 100). Log-uniform operands boost performance to 73 GCD ≤ 100, and a log-uniform distribution of outcomes (i.e. GCD) to 91. However, training from uniform (balanced) GCD breaks explainability. 1 INTRODUCTION Transformers (Vaswani et al., 2017) have been applied to problems of mathematics, both symbolic (Lample & Charton, 2019; Charton et al., 2020; Shi et al., 2021) and numerical (Charton, 2021). Yet, they struggle with basic arithmetic (Lee et al., 2023; Nogueira et al., 2021). Large language models (LLM) can learn addition or multiplication by a small prefactor, and generalize beyond their training range when fine-tuned using scratchpad (Nye et al., 2021), chain-of-thought (Wei et al., 2023) or algorithmic prompting (Zhou et al., 2022), but these techniques require bespoke data and do not extend to complex tasks (Dziri et al., 2023). Math transformers were also found to be brittle (Welleck et al., 2021), to fail on simple tasks (Davis, 2023), and to be hard to interpret, except in the simplest cases (Nanda et al., 2023). Yet, small transformers can learn advanced calculations, such as eigen-decomposition (Charton, 2021) and polynomial roots (Charton, 2022b). In this paper, I train 4-layer transformers to compute the greatest common divisor (GCD) of two positive integers, an important operation for rational arithmetic and number theory, and observe that: 1. Transformers learn to cluster input pairs with the same GCD. All pairs of integers \((a, b)\) with the same GCD \(k\) are predicted the same. 2. Transformer predictions can be fully characterized. During training, the model learns a set of integers \(D\), and predicts, for any input pair \((a, b)\), the largest element in \(D\) that divides \(a\) and \(b\). 3. Early during training, transformers learn to predict products of divisors of the base used to represent integers. Small primes are “grokked” (Power et al., 2022) after extended training. 4. Models trained from log-uniform operands and outcomes achieve better performance. They correctly predict up to 91 GCD ≤ 100. Model predictions remain fully explainable. 5. An unbalanced distribution of outcomes in the training set is required for full explainability: explainability partially fails once models are trained from uniformly distributed GCD. These results demonstrate how transformers can be trained to perform exact calculations involving integer divisibility, a central task in integer arithmetic and number theory. Beyond GCD calculations, the broader potential impact of this research extends in three directions. First, it presents a new approach to model explainability: fully characterizing black-box model predictions by experimenting with selected inputs and leveraging our theoretical understanding of the underlying mathematics. 
Second, the results on log-uniform training distributions of operands and outcomes – faster learning and better performance – may extend to other arithmetic tasks, e.g. fine tuning LLM. Finally, mathematical tasks play a central role for Foundational Models for Science – large language models pre-trained on mathematics, and fine-tuned on specific fields, such as high energy physics, computational biology or astrophysics. Before they can do science, transformers must learn maths. RELATED WORK Neural networks for arithmetic were first proposed by Siu & Roychowdhury (1992), and recurrent models by Kalchbrenner et al. (2015), Zaremba et al. (2015) and Kaiser & Sutskever (2015). Recent research mostly focuses on fine-tuning LLM on arithmetic tasks, to solve math word problems (Meng & Rumshisky, 2019; Griffith & Kalita, 2021). See Lee et al. (2023) for a summary. As an alternative, Neural Arithmetic Logical Units (Trask et al., 2018; Mistry, 2023) learn exact computations that can generalize to any input, by constraining the weights of linear models to be close to 0, 1 or −1. The difficulty of learning arithmetic tasks was discussed by many authors. Saxton et al. (2019), benchmarking mathematical tasks, observe that number theoretic operations, like factorization, are hard. Palamas (2017) further investigates the hardness of modular arithmetic. Dziri et al. (2023) note the difficulty of extending the promising results obtained by Lee et al. (2023) on the four operations to complex mathematical calculations or algorithms – GCD and Euclid’s algorithm, here. The role of number representation was discussed by Nogueira et al. (2021) and Charton (2021). Grokking was first described by Power et al. (2022). Liu et al. (2022) propose metrics to characterize it. Gromov (2023) provides an insightful analysis of grokking in feed-forward networks. Most prior work on explainability in arithmetic transformers tries to interpret model weights (Nanda et al., 2023; Zhong et al., 2023). Charton (2022a) conducts similar experiments for linear algebra. 2 EXPERIMENTAL SETTINGS GCD calculations are framed as a supervised translation task. Problems (pairs of integers) are randomly sampled, represented as sequences of tokens, and used to train sequence-to-sequence transformers to translate input pairs into their GCD, by minimizing the cross-entropy between model predictions and the sequences representing correct solutions. Integers are encoded as sequences of digits in base $B$, preceded by a sign which also serves as a separator (Table 1). In base 10, the model translates $(8, 12)$, encoded as the sequence `+ 8 + 1 2', into its GCD, 4, encoded as `+ 4'. The choice of $B$ is a trade-off. Small bases result in longer sequences that are harder to learn, but use a small vocabulary that is easier to memorize. Composite bases allow for simple tests of divisibility: in base 10, divisibility by 2, 5 and 10 is decided by looking at the rightmost token in the sequence. Transformers with 4 layers, 512 dimensions and 8 attention heads, using Adam (Kingma & Ba, 2014) are trained with a learning rate of $10^{-5}$ (no scheduling is needed) on batches of 256 examples. All inputs pairs are sampled uniformly between 1 and $M = 10^6$. All data is generated on the fly: different training epochs use different examples for the train and test set. 
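For concreteness, the small Python sketch below implements the encoding just described: each operand is represented by a `+` separator followed by its base-$B$ digits, most significant first. The helper names are illustrative and the token conventions may differ from the released code.

```python
def to_base(n: int, base: int) -> list[int]:
    """Digits of n in the given base, most significant first."""
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1] or [0]

def encode_pair(a: int, b: int, base: int) -> list[str]:
    """Token sequence for the input pair (a, b): '+' acts as sign and separator."""
    return ["+"] + [str(d) for d in to_base(a, base)] + \
           ["+"] + [str(d) for d in to_base(b, base)]

# Example from the text, in base 10: gcd(8, 12) = 4
# encode_pair(8, 12, 10) -> ['+', '8', '+', '1', '2'], with target ['+', '4']
```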
After each epoch (300,000 examples), the models are evaluated on two test sets of 100,000 examples: a natural test set of uniformly sampled pairs $(a, b)$, and a stratified test set with GCD uniformly distributed between 1 and 100. In the natural set, small GCD are more common – we have $P(\gcd(a, b) = k) = \frac{6}{\pi^2 k^2}$ (Cesàro, 1883). The stratified set has about 1000 examples with GCD $k$ for $1 \leq k \leq 100$, and is generated by: - sampling $k$, uniformly between 1 and 100, - sampling $a$ and $b$, uniformly between 1 and $\frac{M}{k}$, such that $\gcd(a, b) = 1$, using rejection sampling, - adding $(ka, kb)$ to the stratified test set. These two test sets provide two measures of accuracy. Model accuracy, measured on the natural set, is the probability that the GCD of two random integers from 1 to $M$ is correctly predicted. Accuracy on the stratified test set is the number of GCD correctly predicted between 1 and 100. The size of the problem space ($10^{12}$ possible input pairs) guarantees minimal duplication between train and test set. All experiments are run on one NVIDIA V100 GPU with 32 GB of memory. The source code for these experiments can be found at https://github.com/facebookresearch/GCD. | Base | Encoded input | Encoded output | |------|---------------|----------------| | 2 | [+, 1, 0, 1, 0, 0, 0, 0, +, 1, 1, 1, 1, 0, 0, 0] | [+, 1, 0, 1, 0, 0, 0] | | 6 | [+, 4, 2, 4, +, 3, 2, 0] | [+, 1, 0, 4] | | 10 | [+, 1, 6, 0, +, 1, 2, 0] | [+, 4, 0] | | 30 | [+, 5, 10, +, 4, 0] | [+, 1, 10] | Table 2: Number of correct GCD under 100 and accuracy. Best of 6 experiments. | Base | Correct GCD | Accuracy | |------|-------------|----------| | 2 | 7 | 81.6 | | 3 | 5 | 68.9 | | 4 | 7 | 81.4 | | 5 | 3 | 64.0 | | 6 | 19 | 91.5 | | 7 | 3 | 62.5 | | 10 | 13 | 84.7 | | 11 | 2 | 61.8 | | 12 | 19 | 91.5 | | 15 | 9 | 71.7 | | Base | Correct GCD | Accuracy | |------|-------------|----------| | 30 | 27 | 94.7 | | 31 | 2 | 61.3 | | 60 | 28 | 95.0 | | 100 | 13 | 84.7 | | 210 | 32 | 95.5 | | 211 | 1 | 61.3 | | 420 | 38 | 96.8 | | 997 | 1 | 61.3 | | 1000 | 14 | 84.7 | | 1024 | 7 | 81.5 | 3 LEARNING THE GREATEST COMMON DIVISOR - BASE EXPERIMENTS A model trained on pairs of positive integers under one million, encoded in base $B = 10$, correctly predicts 84.7% of the examples in the natural test set, and 13 correct GCD under 100 (accuracy on the stratified test set). Performances vary with the encoding base: from 61.8% accuracy and 2 correct GCD for base 11, to 96.8% and 38 GCD for base 420 (Table 2). The best performances are achieved for composite bases (30, 60, 210 and 420), the worst for large primes. Learning is very fast: for base 30, the model achieves 90% accuracy after 2 epochs (600,000 examples), and 93% after 6. Model size has little impact on performance (Appendix B). For base 30, 1-layer transformers with 32 dimensions (less than 300,000 parameters) achieve 93.3% accuracy. 24-layer models with 1024 dimensions (714 million parameters) achieve 93.4%. For base 31, accuracy is 61% for all models. These variations in model performance can be understood by looking at model predictions. Table 3 presents, for bases 2 and 10 and GCD up to 36, the most frequent model prediction for pairs with a given GCD (Pred), and its frequency in the stratified test set (%) – detailed results for 6 bases and GCD up to 100 are in Appendix E.3. All frequencies are very close to 100%: for every test pair with GCD $k$, the model makes the same prediction $f(k)$. 
In other words, the model can tell whether two input pairs have the same GCD. Correct model predictions ($f(k) = k$) only happen for products of divisors of the base. In fact, all model predictions can be summarized in three rules:
- (R1) Predictions are deterministic. The model predicts a unique value $f(k)$ for almost all (99.9%) pairs of integers with GCD $k$. Predictions are correct when $f(k) = k$.
- (R2) Correct predictions are products of primes dividing $B$. For base 2, they are 1, 2, 4, 8, 16, 32 and 64. For base 31, 1 and 31. For base 10, all products of elements from \{1, 2, 4, 8, 16\} and \{1, 5, 25\}. For base 30, all products of \{1, 2, 4, 8\}, \{1, 3, 9, 27\} and \{1, 5, 25\}.
- (R3) $f(k)$ is the largest correct prediction that divides $k$. For instance, $f(8) = 8$ and $f(7) = 1$ for bases 2 and 10, but $f(15) = 5$ for base 10 and $f(15) = 1$ for base 2.

These results can be interpreted as follows. For prime bases, such as $B = 2$, an integer is divisible by $B^k$ iff its representation ends in $k$ zeroes. The model learns to "predict" GCD by counting the rightmost zeroes in its operands, $z_a$ and $z_b$, and predicting $B^z$ with $z = \min(z_a, z_b)$. This accounts for all observed results. For instance, it will correctly predict the GCD of $a = 8 = 1000_2$ and $b = 12 = 1100_2$ to be $2^2 = 4$, and incorrectly predict the GCD of $7 = 111_2$ and $14 = 1110_2$ to be 1.

Table 3: Model predictions and their frequencies, for GCD 1 to 36. Correct predictions in bold face.

| GCD | Base 2 Pred | % | Base 10 Pred | % | GCD | Base 2 Pred | % | Base 10 Pred | % | GCD | Base 2 Pred | % | Base 10 Pred | % |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | **1** | 100 | **1** | 100 | 13 | 1 | 100 | 1 | 100 | 25 | 1 | 100 | **25** | 100 |
| 2 | **2** | 100 | **2** | 100 | 14 | 2 | 100 | 2 | 100 | 26 | 2 | 100 | 2 | 100 |
| 3 | 1 | 100 | 1 | 100 | 15 | 1 | 100 | 5 | 100 | 27 | 1 | 100 | 1 | 100 |
| 4 | **4** | 100 | **4** | 100 | 16 | **16** | 100 | **16** | 99.7 | 28 | 4 | 100 | 4 | 100 |
| 5 | 1 | 100 | **5** | 100 | 17 | 1 | 100 | 1 | 100 | 29 | 1 | 100 | 1 | 100 |
| 6 | 2 | 100 | 2 | 100 | 18 | 2 | 100 | 2 | 100 | 30 | 2 | 100 | 10 | 100 |
| 7 | 1 | 100 | 1 | 100 | 19 | 1 | 100 | 1 | 100 | 31 | 1 | 100 | 1 | 100 |
| 8 | **8** | 100 | **8** | 100 | 20 | 4 | 100 | **20** | 100 | 32 | **32** | 99.9 | 16 | 100 |
| 9 | 1 | 100 | 1 | 100 | 21 | 1 | 100 | 1 | 100 | 33 | 1 | 100 | 1 | 100 |
| 10 | 2 | 100 | **10** | 100 | 22 | 2 | 100 | 2 | 100 | 34 | 2 | 100 | 2 | 100 |
| 11 | 1 | 100 | 1 | 100 | 23 | 1 | 100 | 1 | 100 | 35 | 1 | 100 | 5 | 100 |
| 12 | 4 | 100 | 4 | 100 | 24 | 8 | 100 | 8 | 100 | 36 | 4 | 100 | 4 | 100 |

For composite bases, such as $B = 10$, an integer $a$ is divisible by $f$, such that $kf = B^n$, iff the number formed by its $n$ rightmost digits is in $\{0, f, 2f, \ldots, (k-1)f\}$. The model learns to test the divisibility of its operands by comparing their $n$ rightmost digits with the $k$ possible values, and predicts the largest $f$ that divides both operands. In practice, only divisibilities that can be tested by considering the two last digits in the representation are learned. For $B = 210$ divisibility by 4 is learned, but divisibility by 8 is not. For $B = 420$ divisibility by 16 is learned, but not by 32. The three rules also account for variations in model accuracy (computed on the natural test set) for different bases (see Appendix C).

**Learning GCD one prime power at a time.** Learning curves have a step-like shape (Figure 1), and GCD are learned in sudden batches.
When the model learns a new power of a prime divisor of \( B \), it also learns its products with already known GCD. For instance, for base 30, the model initially predicts \( \{1, 2, 4\}, \{1, 3, 9\}, \{1, 5\} \) and their products: 17 GCD under 100. A first step happens around epoch 50, when the model learns 25 and the three associated multiples 50, 75 and 100 (21 GCD), a second around epoch 220, learning 8, 24, 40 and 72, and a third at epoch 660, learning 27 and 54, for a grand total of 27 correct GCD. The three rules hold at all times during training. **Accelerating learning by balancing the distribution of GCD.** The distribution of GCD verifies \( P(\gcd(a, b) = k) = \frac{1}{k^2} \) (Cesàro [1883]). As a result, large GCD are very rare in the training set, and learning them is very slow. This can be mitigated, and training accelerated, by adding a small proportion (5%) of uniformly sampled GCD to the training set: for \( B = 30 \), the model learns 25 GCD in 30 epochs, and 27 GCD in 175, vs 250 and 660 in the original experiments (Figure 2). In these experiments, models only correctly calculate GCD that are products of divisors of the base, and the best accuracies are achieved for bases divisible by many small primes, e.g. 30, 210 or 420. Still, all models learn to cluster pairs of input integers according to their GCD, and output a unique prediction \( f(k) \) for all pairs with GCD \( k \). This is a non-trivial result and a significant achievement. ### 4 LARGE COMPOSITE BASES \( B \) - GROKKING SMALL PRIMES For large bases \( B \), non-divisors of \( B \) are sometimes learned after extended training. In one experiment with base 1000, the model predicts 13 GCD \( \leq 100 \) after 84 epochs: all products of \( \{1, 2, 4, 8, 16\} \) and \( \{1, 5, 25\} \). Then, the training loss is flat during 100 epochs, and it seems that the model is no longer learning anything. But then, the model starts predicting GCD 3, with an accuracy of 0.2% at epoch 188, and 93% at epoch 193 (despite only seeing 100,000 input pairs with GCD 3 during these 5 epochs). Multiples of 3 are then learned, and by epoch 220, the model predicts 22 GCD: all products of \( \{1, 2, 4, 8, 16\}, \{1, 5, 25\} \) and \( \{1, 3\} \). Model predictions still respect rules R1 and R3 (Appendix E.1, Table 20), and the three rules can be updated as follows: - **(G1)** Prediction is deterministic. All pairs with the same GCD are predicted the same, as \( f(k) \). - **(G2)** Correct predictions are products of primes divisors of \( B \) and small primes. - **(G3)** \( f(k) \) is the largest correct prediction that divides \( k \). This phenomenon is related to grokking (Power et al. [2022]). Table 4 presents results for 16 large bases, with models trained up to 1300 epochs. Grokking usually sets in late during training: for bases 625 and 4000, all products of divisors of \( B \) are learned in 5 and 15 epochs, but it take 600 epochs for grokking (of 2 and 3) to happen. Primes and powers of primes are roughly grokked in order. Table 4: Predicted gcd, divisors and non-divisors of B. Best model of 3. For non-divisors, the epoch learned is the first epoch where model achieves 90% accuracy for this GCD. 
| Base | GCD predicted | Divisors predicted | Non-divisors (epoch learned) | |------------|---------------|--------------------|-----------------------------| | 625 = 5^4 | 6 | {1,5,25} | 2 (634) | | 2017 | 4 | {1} | 2 (142), 3 (392) | | 2021 = 43.47 | 10 | {1,43}, {1,47} | 2 (125), 3 (228) | | 2023 = 7.17^2 | 16 | {1,7}, {1,17} | 3 (101), 2 (205), 4 (599) | | 2025 = 3^4.5^2 | 28 | {1,3, 9, 27, 81}, {1,5,25} | 2 (217), 4 (493), 8 (832) | | 2187 = 3^7 | 20 | {1,3,9,27,81} | 2 (86), 4 (315), 5 (650) | | 2197 = 13^3 | 11 | {1,13} | 2 (62), 3 (170), 4 (799) | | 2209 = 47^2 | 8 | {1,47} | 2 (111), 3 (260), 9 (937) | | 2401 = 7^4 | 10 | {1,7,49} | 2 (39), 3 (346) | | 2401 = 7^4 | 14 | {1,7,49} | 3 (117), 2 (399), 4 (642) | | 2744 = 2^3.7^3 | 30 | {1,2,4,8,16,32}, {1,7,49} | 3 (543), 5 (1315) | | 3125 = 5^5 | 16 | {1,5,25} | 2 (46), 3 (130), 4 (556) | | 3375 = 3^3.5^3 | 23 | {1,3,9,27}, {1,5,25} | 2 (236), 4 (319) | | 4000 = 2^5.5^3 | 24 | {1,2, 4,8,16,32}, {1, 5, 25 } | 3 (599) | | 4913 = 17^3 | 17 | {1,17} | 2 (54), 3 (138), 4 (648), 5 (873) | | 5000 = 2^5.5^4 | 28 | {1,2,4,8,16,32}, {1,5,25} | 3 (205), 9 (886) | | 10000 = 2^4.5^4 | 22 | {1,2,4,8,16}, {1,5,25} | 3 (211) | Learning curves (Appendix E.1 Figure 5) retain their usual step-like shape: long periods of stagnation followed by sudden drops in the loss, and rises in accuracy, as new GCD are learned. Because it helps learn small GCD, grokking boosts model accuracy (from 63% to 91% for \( B = 2023 \)), but overall the number of correct GCD remains low (under 30 for all large bases). Balancing outcomes. The technique proposed in section 3 to accelerate learning (adding a small amount of uniformly distributed GCD to the training set) does not apply to larger bases (Appendix E.3 Table 2). However, the unbalanced distribution of GCD can be corrected by sampling from a log-uniform distribution – so that \( P(\gcd(a,b) = k) = \frac{C}{k} \) instead of \( \frac{C}{k^2} \) – as follows: - Sample \( k \) between 1 and 100, with probability \( P(k) = \frac{C}{k} \), with \( \frac{1}{C} = \sum_{i=1}^{100} \frac{1}{i} \). - Sample \( a \) and \( b \) uniformly from 1 to \( \frac{M}{k} \), such that \( \gcd(a,b) = 1 \). - Add \((ak, bk)\) to the training set. A log-uniform training distribution of GCD helps the model learn new non-divisors of \( B \) for 9 bases out of 35 (Table 5). For \( B = 211 \), primes up to 7 are learned. For \( B = 10000 \), 7, 9, 13 and 27 are learned, bringing the number of correct GCD to 62, our best result so far. For \( B = 30 \), a counter-intuitive situation prevails: instead of small primes, the model learns \( B - 1 \) and \( B + 1 \). Table 5: Log-uniform vs natural outcomes. Best model of 3, trained for 700 epochs. Non-divisors in bold. 
| Base | Natural # GCD | Log-uniform outcomes | Base | Natural # GCD | Log-uniform outcomes | |------|---------------|----------------------|------|---------------|----------------------| | 2 | 7 | 7 | 997 | 1 | 1 | | 3 | 5 | 5 | 1000 | 22 | 31 | | 4 | 7 | 7 | 2017 | 4 | 6 | | 5 | 3 | 3 | 2021 | 10 | 10 | | 6 | 19 | 20 | 2023 | 16 | 11 | | 7 | 3 | 3 | 2025 | 28 | 28 | | 10 | 13 | 14 | 2187 | 20 | 20 | | 11 | 2 | 2 | 2197 | 11 | 11 | | 12 | 19 | 20 | 2209 | 8 | 8 | | 15 | 9 | 10 | 2401 | 14 | 16 | | 30 | 25 | 36 | 2744 | 29 | 21 | | 31 | 2 | 2 | 3125 | 16 | 16 | | 60 | 28 | 33 | 3375 | 23 | 21 | | 100 | 13 | 15 | 4000 | 25 | 31 | | 210 | 32 | 32 | 4913 | 17 | 9 | | 211 | 1 | 18 | 5000 | 28 | 30 | | 420 | 38 | 47 | 10000| 22 | 40 | | 625 | 6 | 9 | 10000| 22 | 62 | 5 LEARNING FROM LOG-UNIFORM OPERANDS In all experiments so far, all pairs in the training sets are uniformly sampled between 1 and $10^6$. As a result, models are mostly trained from examples with large operands. 90% of operands are larger than 100,000, and small instances, like $\text{gcd}(6, 9)$, are almost never encountered. This contrast with the way we are taught, and teach, arithmetic. We usually insist that small examples should be mastered, and sometimes memorized, before larger instances, like $\text{gcd}(102370, 102372)$ can be tackled. In this section, I sample training pairs from a log-uniform distribution, by uniformly sampling real numbers $0 \leq x \leq \log M$, computing $e^x$ and rounding to the nearest integer. In this setting, the training set has as many 1-digit as 6-digit operands. In 3% of training example, both operands are smaller than 10, and in 11% of examples, both are smaller than 100. This presents the model with many simple examples that it can memorize, just like children rote learn multiplication and addition tables. This is different from curriculum learning: the distribution of operands does not change during training. Also, the log-uniform sampling only applies to the training set (the test sets are unaffected), and it has no impact on the distribution of outcomes. Table 6: Accuracy and correct GCD (up to 100), log-uniform operands. Best of three models, trained for 1000 epochs (300M examples). All models are tested on 100,000 pairs, uniformly distributed between 1 and $10^6$. | Base | Accuracy | Correct GCD | Base | Accuracy | GCD | Base | Accuracy | GCD | |------|----------|-------------|------|----------|-----|------|----------|-----| | 2 | 94.4 | 25 | 60 | 98.4 | 60 | 2025 | 99.0 | 70 | | 3 | 96.5 | 36 | 100 | 98.4 | 60 | 2187 | 98.7 | 66 | | 4 | 98.4 | 58 | 210 | 98.5 | 60 | 2197 | 98.8 | 68 | | 5 | 97.0 | 42 | 211 | 96.9 | 41 | 2209 | 98.6 | 65 | | 6 | 96.9 | 39 | 420 | 98.1 | 59 | 2401 | 99.1 | 73 | | 7 | 96.8 | 40 | 625 | 98.2 | 57 | 2744 | 98.9 | 72 | | 10 | 97.6 | 48 | 997 | 98.3 | 64 | 3125 | 98.6 | 65 | | 11 | 97.4 | 43 | 1000 | 99.1 | 71 | 3375 | 98.8 | 67 | | 12 | 98.2 | 55 | 1024 | 99.0 | 71 | 4000 | 98.7 | 66 | | 15 | 97.8 | 52 | 2017 | 98.6 | 63 | 4913 | 98.2 | 57 | | 30 | 98.2 | 56 | 2021 | 98.6 | 66 | 5000 | 98.6 | 64 | | 31 | 97.2 | 44 | 2023 | 98.7 | 65 | 10000| 98.0 | 56 | Training from log-uniform operands greatly improves performance (Table 6). Accuracy for all bases is between 94 and 99%, compared to 61 and 97% with uniform operands. For base 2401, the number of correct GCD is 73, our best result so far. For base 10, the number of correct GCD is 48 (vs 13). 
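The two training-distribution changes used here and in Section 4 — log-uniform operands and log-uniform outcomes — are easy to state as samplers. The sketch below follows the recipes described in the text; the constants `M` and `KMAX` mirror the paper's setup (operands up to $10^6$, GCD up to 100), and the same machinery with uniform instead of $1/k$ weights would implement the "5% uniformly sampled GCD" trick of Section 3.

```python
import math
import random

M = 10 ** 6    # operands range over [1, M]
KMAX = 100     # GCD values of interest

def log_uniform_int(lo, hi):
    """Log-uniform operands: x ~ U[log lo, log hi], then round(e^x), so that
    1-digit numbers are roughly as frequent as 6-digit ones."""
    x = random.uniform(math.log(lo), math.log(hi))
    return min(hi, max(lo, round(math.exp(x))))

def sample_gcd():
    """Log-uniform outcomes (Section 4): P(gcd = k) proportional to 1/k."""
    weights = [1.0 / k for k in range(1, KMAX + 1)]
    return random.choices(range(1, KMAX + 1), weights=weights)[0]

def training_pair(log_uniform_operands=True, log_uniform_outcomes=True):
    """Build one (a, b) training pair under the chosen distributions."""
    draw = (lambda hi: log_uniform_int(1, hi)) if log_uniform_operands \
        else (lambda hi: random.randint(1, hi))
    if not log_uniform_outcomes:          # natural outcomes: just draw operands
        return draw(M), draw(M)
    k = sample_gcd()                      # balanced outcomes: force gcd == k
    while True:
        a, b = draw(M // k), draw(M // k)
        if math.gcd(a, b) == 1:           # cofactors must be coprime
            return a * k, b * k
```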
Learning is accelerated: for base 10, GCD 1, 2, 4 and 5 are learned as early as epoch 3, 3 and 8 by epoch 25, 7 and 9 by epoch 220, and 11 by epoch 750. As before, large bases perform better. All models with $B \geq 420$ have an accuracy over 98% and correctly predict more than 55 GCD under 100. The divisors of $B$ are learned first; then, small powers of primes are grokked, roughly in order.

After training, models have learned to predict all primes up to a certain value, some of their small powers, and all associated products. All primes up to 5 are learned for base 2, up to 11 for base 10, up to 17 for base 100, and up to 23 for base 1024. For bases 1024, 2401, and 2744, only 27 GCD are incorrectly predicted:

- the 16 primes between 29 and 97, all predicted as 1,
- small multiples of these primes: products of 2 with 29, 31, 37, 41, 43 and 47, predicted as 2, and products of 3 with 29 and 31, predicted as 3,
- powers of small primes: $49 = 7^2$, predicted as 7, and $81 = 3^4$, predicted as 27,
- small multiples of these: $98 = 49 \times 2$, predicted as 14.

The three rules with grokking (G1 to G3) still apply: predictions are deterministic, and for a pair $(a, b)$ with GCD $k$, the model predicts the largest correctly predicted GCD that divides $k$. Learning curves retain their step-like shape, but they are noisier and smoother (see Appendix E.2): transitions now span several epochs, and each new prime takes more examples to be fully learned. During training, while the model learns a new divisor, rules G1 and G3 are temporarily violated. For a few epochs, model predictions are split between the old and the new value (e.g. between 7 and 49 when the model is learning 49). This situation, rarely observed in previous experiments, is common with log-uniform operands.

Table 7: Accuracy and correct GCD, log-uniform operands and outcomes. Best model of 3.

| Base | Accuracy | Correct GCD | Base | Accuracy | GCD | Base | Accuracy | GCD |
|------|----------|-------------|------|----------|-----|------|----------|-----|
| 2 | 16.5 | 17 | 60 | 96.4 | 75 | 2025 | 97.9 | 91 |
| 3 | 93.7 | 51 | 100 | 97.1 | 78 | 2187 | 97.8 | 91 |
| 4 | 91.3 | 47 | 210 | 96.2 | 80 | 2197 | 97.6 | 90 |
| 5 | 92.2 | 58 | 211 | 95.3 | 67 | 2209 | 97.6 | 87 |
| 6 | 95.2 | 56 | 420 | 96.4 | 88 | 2401 | 97.8 | 89 |
| 7 | 93.0 | 63 | 625 | 96.0 | 80 | 2744 | 97.6 | 91 |
| 10 | 94.3 | 65 | 997 | 97.6 | 83 | 3125 | 97.7 | 91 |
| 11 | 94.5 | 57 | 1000 | 97.9 | 91 | 3375 | 97.6 | 91 |
| 12 | 95.0 | 70 | 1024 | 98.1 | 90 | 4000 | 97.3 | 90 |
| 15 | 95.4 | 62 | 2017 | 97.6 | 88 | 4913 | 97.1 | 88 |
| 30 | 95.8 | 72 | 2021 | 98.1 | 89 | 5000 | 97.1 | 89 |
| 31 | 94.4 | 64 | 2023 | 97.5 | 88 | 10000| 95.2 | 88 |

**Log-uniform outcomes.** Balancing the distribution of GCD by making it log-uniform, as described in Section 4, together with log-uniform operands, brings another large improvement in performance (Table 7). After 1000 epochs, all models with $B$ larger than 1000 predict 87 to 91 GCD: all primes up to 53 and all composite numbers up to 100. These are our best results. They can be marginally improved by training models from an inverse square root distribution of outcomes (Appendix D.1). Note the low accuracy for base 2: with log-uniform outcomes, the model fails to learn GCD 1, for lack of examples.

6 LEARNING FROM UNIFORM OUTCOMES

Log-uniform distributions of outcomes improve model performance by reducing the imbalance between small and large GCD in the training set.
It is therefore tempting to push this logic further, and train models from a uniform distribution of GCD and operands, i.e. sample the training set like the stratified test set from Section 2. Figure 3 presents learning curves for three models using base 10. Model accuracy (measured on the natural test set) seems to vary randomly, and the test loss is flat. Yet, the number of correct GCD is stable over time, and increases in steps, from 10 to 17, in line with the results from section 3 (13 GCD are learned). Something is learned despite the flat loss. ![Learning curves for B=10. Uniform outcomes and operands.](image) Figure 3: Learning curves for B=10. Uniform outcomes and operands. 3 different seeds. Table 8 presents the most common model predictions, and their frequencies, for all GCD up to 20. At first glance, predictions seem chaotic. At epoch 266, the model achieves 81% accuracy, and correctly predicts 14 GCD: 1, 2, 5, 8, 20, 32, 40, 44, 48, 50, 64, 75, 80 and 100. One epoch later, accuracy is down to 6%, the model still predicts 14 GCD: 4, 8, 10, 16, 40, 50, 55, 60, 64, 66, 75, 80, 95 and 100, half of the correct GCD have changed! After another epoch, accuracy is 4% and the model predicts 4, 20, 25, 26, 30, 32, 40, 48, 50, 55, 64, 73, 80, 88 and 100. Again, half the correct GCD have changed. As in previous experiments, frequencies are close to 100%: the model makes a unique prediction \( f(k) \) for all pairs with GCD \( k \), with the notable exception of epoch 267 where model predictions for 1, 3 ... are split (almost evenly) between 11 and 19. Model predictions cluster by classes of GCD: all elements in class \( C_1 = \{1, 3, 7, 9, 11, 13, 17, 19\} \) are predicted as 1 at epoch 266, 19 at epoch 267, 73 at epoch 268, and so on. The same pattern appears for classes \( C_2 = \{2, 6, 14, 18\}, C_4 = \{4, 12\} \). Table 8: Prediction for base 10 - uniform operands and outcomes. Most common prediction for GCD 1 to 20, and frequency, for successive epochs. 
Correct predictions are in bold | Epoch 266 | Epoch 267 | Epoch 268 | Epoch 269 | Epoch 270 | Epoch 580 | Epoch 581 | |-----------|-----------|-----------|-----------|-----------|-----------|-----------| | Pred % | Pred % | Pred % | Pred % | Pred % | Pred % | Pred % | | 1 | 1 | 100 | 19 | 54 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 98 | 77 | 99 | | 2 | 2 | 100 | 66 | 100 | 26 | 100 | 62 | 100 | 66 | 100 | 22 | 93 | 22 | 99 | | 3 | 1 | 100 | 19 | 52 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 99 | 77 | 99 | | 4 | 44 | 91 | 4 | 100 | 4 | 100 | 44 | 100 | 4 | 100 | 4 | 100 | 4 | 100 | | 5 | 5 | 100 | 55 | 100 | 55 | 100 | 35 | 100 | 5 | 100 | 5 | 100 | 5 | 100 | | 6 | 2 | 100 | 66 | 100 | 26 | 100 | 62 | 100 | 66 | 100 | 22 | 93 | 22 | 99 | | 7 | 1 | 100 | 19 | 62 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 99 | 77 | 99 | | 8 | 8 | 99 | 8 | 100 | 8 | 100 | 8 | 100 | 8 | 100 | 88 | 100 | 88 | 99 | | 9 | 1 | 100 | 19 | 52 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 99 | 77 | 99 | | 10 | 70 | 70 | 10 | 100 | 30 | 99 | 70 | 100 | 70 | 100 | 30 | 100 | 70 | 100 | | 11 | 1 | 100 | 19 | 57 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 98 | 77 | 99 | | 12 | 44 | 91 | 4 | 100 | 4 | 100 | 44 | 100 | 4 | 100 | 4 | 100 | 18 | 22 | | 13 | 1 | 100 | 19 | 55 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 98 | 77 | 99 | | 14 | 2 | 100 | 66 | 100 | 26 | 100 | 62 | 100 | 66 | 100 | 22 | 92 | 22 | 99 | | 15 | 5 | 100 | 55 | 100 | 55 | 100 | 55 | 100 | 5 | 100 | 5 | 100 | 5 | 100 | | 16 | 48 | 97 | 16 | 84 | 48 | 99 | 48 | 99 | 46 | 98 | 48 | 98 | 48 | 78 | | 17 | 1 | 100 | 19 | 54 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 99 | 77 | 100 | | 18 | 2 | 100 | 66 | 100 | 26 | 100 | 62 | 100 | 66 | 100 | 22 | 93 | 22 | 99 | | 19 | 1 | 100 | 19 | 53 | 73 | 100 | 7 | 100 | 13 | 100 | 1 | 99 | 77 | 99 | | 20 | 20 | 100 | 60 | 100 | 20 | 98 | 20 | 100 | 20 | 100 | 20 | 100 | 20 | 100 | and $C_5 = \{5, 15\}$, i.e. pairs of integers both divisible by 2, 4, and 5, that would have been predicted as 2, 4, and 5 by the base 10 model from section [3]. In other words, the model learns to cluster input pairs into classes having a common divisor (a product of divisors of 10), just like it did in section [3] but instead of predicting the smallest (and most common) element in each class, it predict a different element at every epoch. This can be summarized into three rules with uniform outcomes: (U1) Predictions are mostly deterministic. At a given epoch, the model usually predicts a unique value $f(k)$ for a given GCD $k$. In rare cases, the model makes 2 or 3 predictions. (U2) Classes of multiples of products of prime divisors of $B$ are predicted the same. For base 10, some classes are $C_1 = \{1, 3, 7, 9, 11, 13, 17, 19, \ldots\}$, $C_2 = \{2, 6, 14, 18, 22, 26, 34, 38 \ldots\}$, $C_4 = \{4, 12, 24, 36, 44, 52, \ldots\}$ and $C_5 = \{5, 15, 35, 55 \ldots\}$. (U3) For each class, the model prediction is an element of the class. Prediction varies from one epoch to the next, but the number of correct GCD is stable over time: it is the number of classes, which increases as the model learns new divisors of $B$. The three rules explain the variations in the accuracy curve: since 61% of examples in the natural test set have GCD 1, accuracy jumps by 61% every time class $C_1$ is predicted as 1. Rule U3, on the other hand, accounts for the step-shaped learning curve for correct GCD. These results shed light on the learning process and the role of the distribution of outcomes. 
During training, all models, regardless of outcome distribution, learn to partition their input pairs into classes, with GCD multiples of a product of divisors of the base (or small primes when grokking happens), i.e. for base 10, multiples of 2, 4, 5, 10, 20, and a default class associated to 1. The model makes a unique prediction for all pairs in a class. When the distribution of outcomes is unbalanced, this prediction is the smallest element in the class, which happens to be the most common. When outcomes are uniformly distributed, a different element of the class is predicted at every epoch, somewhat randomly: the model becomes less explainable. Base 1000, grokking and loss of determinism. Models with base 1000, trained on uniform operands and outcomes, undergo a similar learning process (see Appendix D.3) during the first 400 training epochs. Grokking sets in around epoch 200. Multiples of 11, 22, 44, 55 and 88 are learned around epoch 220, then multiples of 3 by epoch 260 and of 7 by epoch 400. At this point, 41 GCD are correctly predicted. Note that grokking no longer happens in order: 11 is learned before 3. During the grokking phase, a new phenomenon develops. As new primes are grokked and more classes are created, model predictions for each class become less deterministic. Instead of predicting a unique value for each class at each epoch, the model now “hesitates” between several values, and the frequency of the most common prediction goes down. By epoch 400, for the class $C_1$, the model makes 18 different predictions with frequencies ranging from 2% to 13% (Table I.5 in Appendix D.3). Model predictions are no longer explainable, and the three rules are not respected. Interestingly, GCD continue to be learned under this new regime, starting with the largest (i.e. the smallest classes of multiples). By epoch 740, 95 GCD under 100 are correctly predicted. The worst performance is achieved for small GCD: 43, 74 and 85% correct predictions for GCD 1, 2 and 3. Appendix D.4 presents results for larger bases, where up to 99 GCD under 100 are learned. 7 DISCUSSION Can transformers learn the greatest common divisor? With enough examples and appropriate adjustment of their training distribution, they can. Models leveraging large composite bases, and trained on log-uniform operands and outcomes predict over 90 of the 100 first GCD. Models trained on uniform outcomes predict 95 GCD. However, the experiments from section 3 show the limits of naive, benchmark-based evaluations on arithmetic tasks: high accuracies (95%) can be achieved, on held-out test sets of random examples, by models that only predict a handful of GCD. The approach to explainability presented in this paper differs from most works on the subject. Instead of looking at model parameters, I engineer experiments that reveal the algorithms that the model is implementing. It is often repeated that transformers are incomprehensible black-boxes, that sometimes confabulate and often fail in unpredictable ways. Here, model predictions can be fully characterized by a small number of rules. This is a promising direction for future research. Experiments indicate that transformers learn a sieve algorithm for computing GCD. The model first learns divisibility by products of divisors of the base, which can be tested by looking at the last digits of a number, or counting its rightmost zeroes. 
Using these rules, the model clusters its input pairs into classes of multiples of divisors of the base, and predicts the GCD as the minimum for the class. All GCD corresponding to products of divisors of $B^2$ are learned this way. At the end of this phase, in base 2, the model correctly predicts 1, 2, 4, 8, 16 and 32. As training proceeds, new prime divisors are learned (grokked) in order. They are all prime because multiples of previous divisors were learned already, i.e. the model functions like a sieve. Every time a new divisor $p$ is learned, all existing classes are split between multiples and non-multiples of $p$. In base 2, once the model learns divisibility by 3, six new classes are created: multiples of 3, 6, 12, 24, 48 and 96 (splitted from 1, 2, 4, 8, 16 and 32). This accounts for the steps observed in the learning curves. A GCD is correctly predicted once all the powers of primes dividing it are learned. Eventually, all GCD will be learned this way. Experiments with uniform outcomes suggest that an unbalanced training distribution of GCD is needed for this algorithm to succeed, because it causes each class to be predicted by its smallest, and most common, member (the correct GCD), and it guarantees that primes are learned in order. Interestingly, this algorithm is not related to Euclid’s algorithm. Note also that it is not specific to transformers: Appendix D.5 shows that similar results can be achieved with LSTM and GRU. Another important finding is the role of training distributions. All models are tested on sets with uniform operands, but the best results are achieved with a log-uniform distribution of operands and outcomes in the training set. This may come as a surprise, since many authors observed that evaluating a model out of its training distribution has a negative impact on performance. The existence of special training distributions, that allow for faster learning and more robust models (with respect to out-of-distribution generalization) was already observed for linear algebra (Charton 2022a). A log-uniform distribution of operands strikes a balance between memorization and generalization, and helps models learn hard instances by memorizing easier cases. This is related to curriculum learning, but avoids catastrophic forgetting, because the training distribution never changes. These observations may apply to other arithmetic tasks. On the other hand, a log-uniform distribution of outcomes helps learning by enforcing a better representation of large GCD in the training set, a classical recipe in machine learning (classifiers are often trained on balanced datasets). The counter-intuitive result is that a perfectly balanced, uniform training distribution set degrades performance by preventing the model from learning small GCD, and breaking model explainability. Is it really grokking? Power et al. (2022) define grokking as “generalization far after overfitting.” In all experiments, training and test data are generated on the fly from a very large problem space. No overfitting can happen, and the classical pattern of grokking, train accuracy dropping, and validation accuracy catching up after a long time, will not occur. The similarity with grokking lies in the sudden change in accuracy after a long stagnation of the training loss. REFERENCES Ernesto Cesàro. Question 75 (solution). *Mathesis*, (3):224–225, 1883. François Charton. Linear algebra with transformers. *arXiv preprint arXiv:2112.01898*, 2021. François Charton, Amaury Hayat, and Guillaume Lample. 
Learning advanced mathematical computations from examples. *arXiv preprint arXiv:2006.06462*, 2020. François Charton. What is my math transformer doing? – three results on interpretability and generalization. *arXiv preprint arXiv:2211.00170*, 2022a. François Charton. Computing the roots of polynomials, 2022b. [https://f-charton.github.io/polynomial-roots](https://f-charton.github.io/polynomial-roots) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. *arXiv preprint arXiv:1406.1078*, 2014. Ernest Davis. Mathematics, word problems, common sense, and artificial intelligence, 2023. Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. Faith and fate: Limits of transformers on compositionality, 2023. Kaden Griffith and Jugal Kalita. Solving arithmetic word problems with transformers and preprocessing of problem text. *arXiv preprint arXiv:2106.00893*, 2021. Andrey Gromov. Grokking modular arithmetic, 2023. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997. Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. *arXiv preprint arXiv:1511.08228*, 2015. Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. *arXiv preprint arxiv:1507.01526*, 2015. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. Guillaume Lample and François Charton. Deep learning for symbolic mathematics. *arXiv preprint arXiv:1912.01412*, 2019. Nayoung Lee, Kartik Sreenivasan, Jason D. Lee, Kangwook Lee, and Dimitris Papailiopoulos. Teaching arithmetic to small transformers. *arXiv preprint arXiv:2307.03381*, 2023. Ziming Liu, Ouai Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, and Mike Williams. Towards understanding grokking: An effective theory of representation learning, 2022. Yuanliang Meng and Anna Rumshisky. Solving math word problems with double-decoder transformer. *arXiv preprint arXiv:1908.10924*, 2019. Bhumika Mistry. *An investigation into neural arithmetic logic modules*. PhD thesis, University of Southampton, July 2023. URL [https://eprints.soton.ac.uk/478926/](https://eprints.soton.ac.uk/478926/) Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progress measures for grokking via mechanistic interpretability, 2023. Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple arithmetic tasks. *arXiv preprint arXiv:2102.13019*, 2021.
AOpJ3vPNu8
The description on page 1 states: “However, optimally minimizing the individual task losses in the inner loop may not essentially lead to minimizing the average loss in the outer-loop…”. Based on my understanding, the goal of MAML is to find the meta-initialization that minimizes the average inner task losses. So, in a theoretical sense, does a failure to minimize the outer loss result in a failure to minimize the inner losses?
A Game Theoretic Approach to Meta-Learning Nash Model-Agnostic Meta-Learning Anonymous authors Paper under double-blind review Abstract Meta-learning, or learning to learn, aims to develop algorithms that can quickly adapt to new tasks and environments. Model-agnostic meta-learning (MAML), proposed as a bi-level optimization problem, is widely used as a baseline for gradient-based meta-learning algorithms that learn meta-parameters. In MAML, task-specific parameters are adapted independently in the inner-loop. After learning the task-specific parameters, the meta-parameters are learned in the outer-loop by minimizing the average task loss. After MAML, some gradient-based meta-learning research has explored objectives beyond average task losses, such as minimizing worst-case task losses for risk management and improving zero-shot performance in unadaptable environments. However, if the purpose of learning meta-parameters changes, the inner-loop formulation must change accordingly. Therefore, we propose a novel gradient-based meta-learning framework that imposes joint strategy sets and utility functions among tasks, making each task affected by other tasks. To solve this complex problem, we first show the proposed framework can be formulated as a generalized Stackelberg game. After that, we propose the NashMAML algorithm to compute the generalized Stackelberg equilibrium of this model and theoretically prove its convergence. We validate our approach on sinusoidal regression and few-shot image classification tasks. The results demonstrate that our approach outperforms previous methods in handling few-shot learning problems. 1 Introduction Meta-learning, also known as learning to learn (Thrun & Pratt, 1998), aims to develop algorithms that enable more efficient adaptation to new unseen tasks, but similar to previous tasks, by learning from a variety of tasks. To achieve this goal, meta-learning algorithms are trained on a set of related tasks or domains to learn a more general set of skills (Nam et al., 2022) or priors (Finn et al., 2018; Kim et al., 2018) that can be applied to new tasks with limited data. Among them, model-agnostic meta-learning (MAML) (Finn et al., 2017) is a gradient-based meta-learning algorithm that can be applied to various different problems. After the emergence of MAML, numerous follow-up studies have been conducted within the machine learning community (Nichol et al., 2018; Zintgraf et al., 2019; Rajeswaran et al., 2019). These studies formulate meta-learning as a bi-level optimization problem and find an optimal solution via learning task-specific parameters independently in the inner-loop (lower level problem) first, then learning meta-parameters in the outer-loop (upper level problem) to minimize the average loss of the tasks after adaptation. However, optimally minimizing the individual task losses in the inner-loop may not essentially lead to minimizing the average loss in the outer-loop. Furthermore, if the goal of the outer-loop changes, the current inner-loop formulation, which adapts the model to individual tasks independently, does not help learn the meta-parameter. For instance, the purpose of learning meta-parameters can involve minimizing the worst-case loss (Collins et al., 2020) for risk management, enhancing zero-shot performance (Nooralahzadeh et al., 2020) in unadaptable environments, or increasing training stability. 
To address these limitations, we propose a new algorithm, Nash model-agnostic meta-learning (NashMAML), which was inspired by the Nash equilibrium of a game, that enables alignment of the learning objectives between the inner-loop and outer-loop. We formulate the NashMAML by Figure 1: The training process of NashMAML and MAML. The key feature of NashMAML is the presence of feasible regions (blue region) of task parameters as determined by the joint strategy sets. The task-specific parameters $\phi_1$, $\phi_2$ are projected to feasible regions whenever they are located outside the feasible regions during the inner-loop training. adding joint strategy sets and utility functions, both of which introduce the dependency among tasks to the inner-loop. Depending on the form of the joint strategy sets and joint utility functions, the NashMAML can be learned for various purposes. On the contrary, as shown in Figure 1, the conventional approach of independently optimizing task-specific parameters is no longer available for solving the inner-loop problem due to the influence of other task-specific parameters on each task. To compute the joint optimal task-specific parameters, we adopt a game-theoretic interpretation, wherein a batch of $N$ tasks in the inner-loop is regarded as decision-makers who determine its task-specific parameters. To be specific, we model the inner-loop problem of NashMAML as a generalized Nash game for $N$ tasks, whose solution is a generalized Nash equilibrium of the tasks. Furthermore, we consider the meta-learner as a decision maker who determines meta-parameters before the task-specific parameters are determined. Then, the bi-level interactions among the meta-learner and $N$ tasks can be formulated as a generalized Stackelberg game designed to model the interaction among the leader and the followers (Stackelberg et al., 1952). The solution of the generalized Stackelberg game is a generalized Stackelberg equilibrium. Our main contributions to this paper are as follows: • We interpret the current meta-learning algorithms’ formulation and solution concept from a game-theoretical perspective. • We propose a novel bi-level formulation of gradient-based meta-learning as a generalized Stackelberg game. This formulation enables alignment of the learning objectives between the inner-loop and outer-loop by incorporating joint strategy sets and utility functions between tasks. • We propose a NashMAML algorithm, which can compute the equilibrium of the proposed generalized Stackelberg game. We provide conditions for the convergence of NashMAML algorithm and its convergence speed. • We introduce a practical example of NashMAML by proposing a ball-shaped strategy set and joint penalty function, suppressing task-specific parameters from moving away from meta parameters. For the ball-shaped strategy set, we propose a methodology in which computing the gradient with backpropagation is tractable. • We demonstrate the proposed formulation’s and NashMAML algorithm’s effectiveness by conducting a comparative analysis on sinusoidal regression and image classification tasks. The results provide a potential for our approach to enhance performance, particularly in problems with complex task distributions. 2 RELATED WORKS 2.1 COMPUTATIONAL APPROACHES OF MODEL-AGNOSTIC META-LEARNING After the proposal of MAML, various follow-up studies have been introduced to address the challenges of MAML, mostly focusing on few-shot image classification tasks. Implicit MAML Ra- jeswaran et al. 
(2019) proposed Hessian-free methods, providing computational advantages compared to explicit differentiation methods. The FOMAML Finn et al. (2017) and Reptile Nichol et al. (2018) explore different strategies to approximate the outer-loop gradient update in MAML using first-order approximation approaches, effectively reducing the memory and time complexity while preserving performance. These studies share the objective, consistent with MAML, of minimizing the average task loss. 2.2 Variations in Objectives of Meta-Learning In addition to minimizing the average loss, various studies have been conducted with other objectives. Task-Robust MAML Collins et al. (2020) and TaRo-BOBA Gu et al. (2021) aim to improve the worst-case performance by minimizing the maximum task loss. Kim et al. (2018) introduces Bayesian frameworks and designed a new meta-learning objective with chaser loss to effectively model the uncertainty during the meta-learning process. In addition, instead of finding meta-parameters that perform well after adaptation, Nooralahzadeh et al. (2020) Verma et al. (2020) focus on enhancing zero-shot performance. 3 Preliminaries 3.1 Game Theory Game theory is the discipline that models scenarios where multiple decision-makers aim to optimize their respective objectives. A game consists of players who make decisions, their feasible regions (or strategies), and their objective functions (or utilities). Depending on the representation methods and information structures, there are various types of games. First, we discuss the $N$ player (generalized) Nash game Nash Jr. (1950) in which $N$ players make decisions simultaneously. **Definition 1** Let $G = \langle P, (u_i)_{i \in P}, (\Omega_i)_{i \in P} \rangle$ be a $N$ players’ generalized Nash game which is formulated as $$\max_{x_i \in \Omega_i(x_{-i})} u_i(x_i, x_{-i}), \forall i \in P$$ (1) where $P = \{1, \cdots, N\}$ is a set of players and $u_i$ is the utility function of the player $i$, $x_i$ is the player $i$’s decision belonging to their strategy set $\Omega_i(x_{-i})$, $x_{-i} = (x_1, \cdots, x_{i-1}, x_{i+1}, \cdots, x_N)$ is the player’s joint decision except player $i$. Then, we refer to $x^* \in \prod_{i \in P} \Omega_i(x^*_{-i})$ as a generalized Nash equilibrium of the $N$ player’s generalized Nash game $G$ if it satisfies the following equation. $$x^*_i = \arg \max_{x_i \in \Omega_i(x^*_{-i})} u_i(x_i, x^*_{-i}), \forall i \in P$$ (2) Let $S(M)$ be a randomly selected $M$ player which is a subset of $N$ players, and $x_{-i}(M) = (x_j)_{j \in S(M) - \{i\}}$ be the players’ joint decision except player $i$. Then, $x^* \in \prod_{i \in P} \Omega_i$ is a generalized $M$-subNash equilibrium if it satisfies the following equation for every $S(M)$, $$x^*_i = \arg \max_{x_i \in \Omega_i(x^*_{-i}(M))} u_i(x_i, x^*_{-i}(M)), \forall i \in S(M)$$ (3) Let $x^{VE} \in \prod_{i \in P} \Omega_i(x^{VE}_{-i})$ be a variational equilibrium of $G$ if it satisfies the following variational inequality. $$\left(\frac{\partial u_i(x^{VE})}{\partial x_i}\right)^T (x^{VE} - x) \geq 0, \forall x \in \prod_{i \in P} \Omega_i(x_{-i})$$ (4) When the player $i$’s strategy set is independent of the other players’ decisions, we refer to $G$ as a $N$ player’s Nash game. The Nash equilibrium, subNash equilibrium, and variational equilibrium of the Nash game are defined in the same way as the equilibrium of the generalized Nash game described in equations (2) - (4). 
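As a concrete (toy) instance of Definition 1, consider two players with quadratic utilities whose best responses are linear; this example is purely illustrative and is not taken from the paper. Iterating best responses converges to the unique Nash equilibrium of equation (2), the point at which neither player can improve by deviating unilaterally.

```python
# Toy 2-player Nash game (illustrative):
#   u_1(x1, x2) = -(x1 - 0.5 * x2) ** 2   ->  best response  x1 = 0.5 * x2
#   u_2(x1, x2) = -(x2 - 0.5 * x1) ** 2   ->  best response  x2 = 0.5 * x1
# The unique Nash equilibrium of eq. (2) for this game is (0, 0).

def best_response_1(x2):
    return 0.5 * x2

def best_response_2(x1):
    return 0.5 * x1

x1, x2 = 1.0, -2.0
for _ in range(60):                        # simultaneous best-response iteration
    x1, x2 = best_response_1(x2), best_response_2(x1)
print(round(x1, 8), round(x2, 8))          # -> 0.0 0.0 (up to sign): the equilibrium
```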
Next, we discuss the $1 - N$ Stackelberg game, where a leader makes decisions first, and then $N$ followers make decisions simultaneously after observing the leader’s decision. **Definition 2** Let $\Gamma = \langle \{1\}, F, u^L, (u_i)_{i \in F}, \Omega^L, (\Omega_i)_{i \in F} \rangle$ be a $1 - N$ generalized Stackelberg game where $F = [N]$ is a follower set, $u^L$ is an utility function of a leader, $u_i$ is an utility function of follower $i$, $\Omega^L$ is a strategy set of a leader, $\Omega_i$ is a strategy set of follower $i$. When the follower $i$’s strategy set is independent of the other followers’ decisions, we refer to $\Gamma$ as a $1 - N$ Stackelberg game. Then, $(y^*, x^*) \in \Omega^L \times \prod_{i \in F} \Omega_i(y^*, x^*_{-i})$ is a optimal solution if it satisfies the following equation. $$\sup_{x^*(y^*) \in S(y)} u^L(y^*, x^*(y^*)) \geq \sup_{x^*(y) \in S(y)} u^L(y, x^*(y)), \forall y \in \Omega^L$$ where $S(y)$ is a generalized Nash equilibrium of the $N$ followers’ (generalized) Nash game given leader’s decision $y$. In detail, $(y^*, x^*) \in \Omega^L \times \prod_{i \in F} \Omega_i(y^*, x^*_{-i})$ is a (generalized) Stackelberg equilibrium if $S$ is a set of (generalized) Nash equilibrium of followers, a (generalized) M-subStackelberg equilibrium if $S$ is a set of (generalized) M-subNash equilibrium, and a variational Stackelberg equilibrium if $S$ is a set of variational equilibrium of followers. ### 3.2 Game Theoretical Interpretation of MAML The meta-learning problem is generally modeled as bi-level programming ($1 - 1$ Stackelberg game) since tasks are independent of each other. The purpose of solving task $i$ is to learn task-specific parameters $\phi_i$ using a dataset $D^{\text{tr}}_i$ to minimize the loss function $L(\phi_i; D^{\text{tr}}_i)$. Then, the meta-parameters $\theta$ aim to minimize the average loss across all the tasks. The problem that model-agnostic meta-learning (MAML) and first-order MAML (FOMAML) algorithms intend to solve is defined as the bi-level programming as follows. $$\theta^* = \arg \min_{\theta \in \mathbb{R}^d} F(\theta) := \frac{1}{N} \sum_{i=1}^{N} L(\phi^*_i(\theta); D^{\text{val}}_i)$$ $$\phi^*_i(\theta) = \arg \min_{\phi_i \in \mathbb{R}^d} L(\phi_i; D^{\text{tr}}_i), \forall i \in [N] := \{1, \cdots, N\}$$ The MAML (Finn et al., 2017) and FOMAML (Nichol et al., 2018) formulation reflect that task-specific parameter vector $\phi_i$ is close to the meta-parameter vector $\theta$ not through the problem structure but by controlling the number of inner steps. They approximately compute $\hat{\phi}_i \sim \phi_i = \theta - \alpha \frac{\partial L(\theta; D^{\text{tr}}_i)}{\partial \phi_i}$ through the finite number of gradient updates. However, since $\hat{\phi}_i$ is different from the optimal task-specific parameters $\phi^*_i(\theta)$ as defined in equation (7), the resulting meta-parameters is not the optimal solution (Stackelberg equilibrium) of the bi-level programming in equations (6)-(7). The MAML and FOMAML algorithms have limitations in that they control the proximity of task-specific parameters to the meta-parameters through the number of inner steps rather than the problem formulation. It means that the meta-parameters computed by adjusting the number of inner steps are not the optimal meta-parameters $\theta^*$ as defined in equation (6). Therefore, MAML and FOMAML algorithms have poorer performance than other meta-learning algorithms that compute the optimal meta-parameters. 
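For reference, the bi-level problem in equations (6)–(7) and the one-gradient-step approximation used by MAML/FOMAML can be sketched in PyTorch as follows. This is a generic illustration rather than the paper's code: `task.loss_train` and `task.loss_val` are assumed interfaces computing $L(\cdot; D_i^{\mathrm{tr}})$ and $L(\cdot; D_i^{\mathrm{val}})$, and `theta` is a list of leaf parameter tensors with `requires_grad=True`.

```python
import torch

def maml_meta_step(theta, tasks, alpha=0.01, beta=0.001, first_order=False):
    """One meta-update: inner step phi_i = theta - alpha * grad L_i(theta),
    then an outer step on the average validation loss of the adapted phi_i.
    With first_order=True the second-order term is dropped (FOMAML)."""
    meta_loss = 0.0
    for task in tasks:
        inner_loss = task.loss_train(theta)
        grads = torch.autograd.grad(inner_loss, theta,
                                    create_graph=not first_order)
        phi = [p - alpha * g for p, g in zip(theta, grads)]   # adapted params
        meta_loss = meta_loss + task.loss_val(phi)
    meta_loss = meta_loss / len(tasks)
    meta_grads = torch.autograd.grad(meta_loss, theta)        # d F / d theta
    with torch.no_grad():
        for p, g in zip(theta, meta_grads):
            p -= beta * g                                     # update theta
    return float(meta_loss)
```

Because only a finite number of such inner steps is taken, the adapted parameters need not coincide with the optimizer $\phi_i^*(\theta)$ of equation (7), which is exactly the limitation discussed above.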
The game theoretic interpretations for implicit MAML (iMAML) and fast context adaptation via meta-learning (CAVIA) algorithms, which are famous extensions of MAML, are discussed in Appendix A. ### 4 Nash Model-Agnostic Meta-Learning MAML has a fixed formulation of the lower level problem designed to improve task adaptation performance. However, if the purpose of learning meta-parameters at the upper level changes, it is necessary to change the lower level formulation accordingly. For instance, the objective of learning meta-parameters may extend beyond merely minimizing the average task loss. It can involve minimizing the worst-case loss for risk management, enhancing zero-shot performance in unadaptable environments, or increasing training stability. To address this issue, we present a novel bi-level formulation and algorithm that considers the mutual interaction among tasks of the same batch by applying joint strategy sets or utility functions based on the formulation of MAML. Note that, this framework can be generally adapted to other gradient-based meta-learning algorithms, such as iMAML and CAVIA. 4.1 FORMULATION First, we formulate the target problem of NashMAML from a stochastic optimization perspective. We denote the joint task-specific parameter \((\phi_i)\) as \(\phi\), and \(N\) randomly sampling joint task-specific parameter except parameter \(i\), \((\phi_j)_{j \in S(N) - i}\), as \(\phi_{-i}\) where \(S(N) = \{i | T_i \sim p(T)\}\) is the set of \(N\) randomly sampling tasks’ index. The target problem of the NashMAML algorithm where the batch size is \(N\) is the following stochastic bi-level problem and its optimal solution is \((\theta^*, \phi^*(\theta^*))\). \[ \begin{align*} \theta^* &= \arg\min_{\theta \in \mathbb{R}^d} \mathbb{E}_{T_i \sim p(T)} [\mathcal{L}_i (\theta, \phi^*_i (\theta))] \\ \phi^*_i (\theta) &= \arg\min_{\phi_i \in \Omega_i (\phi^*_{-i} (\theta), \theta)} f_i (\phi_i, \phi^*_{-i} (\theta), \theta) \end{align*} \] where task \(i\)'s strategy set \(\Omega_i (\phi^*_{-i} (\theta), \theta)\) depends on the other task-specific parameters \(\phi_{-i}\), and task \(i\)'s utility function \(f_i (\phi_i, \phi_{-i}, \theta) = \mathcal{L}_i (\phi_i) + g (\phi_i, \phi_{-i}, \theta)\) is the sum of task \(i\)'s loss function \(\mathcal{L}_i\) and \(g\), the function affected by the \(\phi_{-i}\) and \(\theta\). This formulation makes the lower level problem target other purposes, other than task loss, imposed by \(\Omega_i\) and \(g\). However, in practice, learning meta-parameters is conducted in batch units, and the problems addressed during a single meta-parameters update can be precisely formulated. We model the single meta-parameter’s gradient update of the NashMAML algorithm as a \(1 - N\) generalized Stackelberg game \(\Gamma = \langle \{1\}, [N], F, (f_i)_{i \in [N]}, \mathbb{R}^d, (\Omega_i)_{i \in [N]} \rangle\) where leader’s decision is a meta-parameter \(\theta\), and followers’ decision are their respective task-specific parameter \(\phi_i\). The set of the follower is \([N] = \{1, \cdots, N\}\) where \(N\) is batch size, leader’s utility function is \(F (\theta, \phi) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}_i (\phi_i; D_i^{\text{val}})\), follower \(i\)'s utility function is \(f_i (\phi_i, \phi_{-i}, \theta) = \mathcal{L}_i (\phi_i; D_i^{\text{tr}}) + g (\phi_i, \phi_{-i}, \theta)\), leader’s strategy set is \(\mathbb{R}^d\), and follower \(i\)'s strategy set is \(\Omega_i\). 
Then, the optimal solution \((\theta, \phi^*(\theta)) \in \mathbb{R}^d \times \Omega(\theta)\) of \(\Gamma\) satisfies the following equations (10) and (11) where \(\Omega(\theta) = \prod_{i \in [N]} \Omega_i (\theta, \phi^*_{-i} (\theta))\) is a generalized Stackelberg equilibrium. \[ \begin{align*} \theta^* &= \arg\min_{\theta \in \mathbb{R}^d} F (\theta, \phi^*(\theta)) \\ \phi^*_i (\theta) &= \arg\min_{\phi_i \in \Omega_i (\phi^*_{-i} (\theta), \theta)} f_i (\phi_i, \phi^*_{-i} (\theta), \theta), \forall i \in [N] \end{align*} \] where \(\phi = (\phi_i)_{i \in [N]}\) is a joint task-specific parameter and \(\phi_{-i} = (\phi_1, \cdots, \phi_{i-1}, \phi_{i+1}, \cdots, \phi_N)\) is a joint task-specific parameter except task \(i \in [N]\). 4.2 ALGORITHM In the NashMAML algorithm, we first compute the generalized Nash equilibrium \(\phi^*(\theta) = (\phi^*_i (\theta))_{i \in [N]}\) of the lower level problem as defined in equation (11). Next, we explicitly compute \(\frac{d\phi^*_i (\theta)}{d\theta}\) through back-propagation of \(\phi^*(\theta)\) to obtain the optimal meta-parameters \(\theta^*\). Thus, the solution computed by the NashMAML algorithm is a generalized \(N\)-subStackelberg equilibrium of the generalized Stackelberg game as formulated by equations (8)-(9). We describe the NashMAML algorithm, which is an extension of MAML, in the Algorithm[1], detailed in Appendix B. The difference between NashMAML and MAML is that including a projection step onto the strategy set if the joint task-specific parameter is not feasible for the strategy set. 4.3 THEORETICAL RESULT First, we define the following estimator to measure the error of the estimated gradient. Table 1: Complexity for the meta-learning algorithms | Algorithm | Iteration complexity | Memory | |------------------------------------------------|----------------------|----------------------------------| | MAML (GD, full back-prop) | \( \kappa \log(D/\delta) \) | Mem \( (\nabla L_i) \kappa \log(D/\delta) \) | | MAML (Nesterov’s AGD, full back-prop) | \( \sqrt{\kappa} \log(D/\delta) \) | Mem \( (\nabla L_i) \sqrt{\kappa} \log(D/\delta) \) | | implicit MAML (Nesterov’s AGD) | \( \sqrt{\kappa} \log(D/\delta) \) | Mem \( (\nabla L_i) \) | | NashMAML (PRGD, full back-prop) | \( \kappa \log(D/\delta) \) | Mem \( (\nabla L_i) \kappa \log(D/\delta) \) | **Definition 3** Let the joint task-specific parameter \( \hat{\phi} \) be a solution estimated by a computing algorithm (e.g., PRGD method). Then, \( \hat{\phi} \) is a \( \delta \)-accurate estimation of the optimal joint task-specific parameter \( \phi^* \) if it satisfies the following: \[ \| \hat{\phi} - \phi^* \| \leq \delta \] (12) **Definition 4** Let \( \frac{dF}{d\theta} \) be an approximated gradient of the meta loss function. Then, \( \hat{h}_\theta \) is an \( \epsilon \)-accurate estimation of the meta loss function if it satisfies the following: \[ \left\| \frac{d}{d\theta} F(\theta, \hat{\phi}) - \hat{h}_\theta(\theta, \hat{\phi}) \right\| \leq \epsilon \] (13) Table 1 summarizes the iteration complexity to compute \( \frac{d\phi^*(\theta)}{d\theta} \) of NashMAML and the conventional meta-learning algorithms, MAML and iMAML. Importantly, the iteration complexity to compute \( \frac{d\phi_i(\theta)}{d\theta} \) of the NashMAML is equivalent to the conventional algorithms as \( O(\log(D/\delta)) \) from the perspective of error, \( \delta \). 
Moreover, the memory complexity of the NashMAML is equivalent to the conventional algorithm as \( O(\text{Mem}(\nabla L_i) \kappa \log(D/\delta)) \) where \( \text{Mem}(\nabla L_i) \) is the memory taken to compute a single derivative \( \nabla L_i \) (Rajeswaran et al., 2019). We discuss the complexity to compute \( \frac{d\phi_i(\theta)}{d\theta} \) in the following theorem. **Theorem 1** Let \( D \) be the diameter of search space of the joint task-specific parameter \( \phi = (\phi_i)_{i \in [N]} \) in the inner optimization problem (i.e. \( \| \phi - \phi^*(\theta) \| \leq D \)). Suppose that the projected reflected gradient descent (PRGD) method (Malitsky, 2015) is used to compute the \( \delta \)-accurate estimation of the optimal joint task-specific parameter \( \hat{\phi} = (\hat{\phi}_i)_{i \in [N]} \) of the generalized Nash equilibrium, which is the convergent point of the inner-loop of the NashMAML algorithm. Under Assumption 1, the NashMAML algorithm computes \( \hat{\phi} \) with \( O(\kappa \log(D/\delta)) \) number of iterations, and only \( O(\text{Mem}(\nabla L_i) \kappa \log(D/\delta)) \) memory is required throughout. The remaining part covers the algorithm for finding the equilibrium of the lower level which holds not only for the PRGD method but also for general cases. The second main result is that we compute the error of the estimated gradient \( \hat{h}_\theta \) through back-propagation is bounded by a weighted sum of the error in estimating task-specific parameters \( \phi \) and the error in estimating gradient through back-propagation. **Theorem 2** Let \( \theta \) be a given meta-parameter, \( \phi^* \) be an optimal task-specific parameter, \( \hat{\phi} \) be a \( \delta \)-accurate estimated task-specific parameter, and \( \hat{h}_\theta \) be an \( \epsilon \)-accurate estimated gradient of \( F \) with respect to \( \theta \) computed through back-propagation. Under Assumption 2, the difference between the \( \epsilon \)-accurate estimated gradient \( \hat{h}_\theta \) and the gradient of the optimal meta loss function \( F \) with respect to \( \theta \), \( \frac{dF}{d\theta} \), is bounded by the weighted sum of the error in estimating \( \phi^* \) and the error in estimating the gradient through back-propagation. That is, \[ \left\| \frac{d}{d\theta} F(\theta, \phi^*(\theta)) - \hat{h}_\theta(\theta, \hat{\phi}) \right\| \leq C \left\| \phi^*(\theta) - \hat{\phi} \right\| + \left\| \frac{d}{d\theta} F(\theta, \hat{\phi}) - \hat{h}_\theta(\theta, \hat{\phi}) \right\| \] \[ \leq C \delta + \epsilon \] where \( C = L_1 + \frac{C_1 L_4 + C_2 L_2}{\mu_1 + \mu_2} + \frac{C_1 C_2 (L_3 + L_5)}{(\mu_1 + \mu_2)^2} \). Now, we prove the convergence of the NashMAML algorithm in Theorem 3 and prove the convergent point of the NashMAML algorithm is the generalized subStackelberg equilibrium of the stochastic optimization problem described in equations (8) and (9) in Theorem 4. **Theorem 3** Let \((\theta^*, \phi^*(\theta^*))\) be a convergent point of the NashMAML algorithm, and \(L(\theta)\) be an expected optimal meta loss function, that is, \(L(\theta) = \mathbb{E}_{T_i \sim p(T)} [F(\theta, \phi^*(\theta))]\). Under Assumptions 1, 2, and 3, the following statements hold. - The expected difference of the meta-parameter \(\mathbb{E}_{T_i \sim p(T)} \left[ \| \theta^{k+1} - \theta^k \|^2 \right]\) is bounded. - The expected difference of the optimal meta loss function \(L(\theta^{k+1}) - L(\theta^k)\) is bounded. 
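Theorem 1 assumes the inner equilibrium is computed with the projected reflected gradient descent (PRGD) method of Malitsky (2015). Below is a minimal sketch of one standard form of that update; here `F` would stack the followers' gradients $(\partial f_i/\partial \phi_i)_i$ and `project` would be the projection onto the joint strategy set. The toy usage at the end is an assumption added purely for illustration.

```python
import numpy as np

def prgd(F, project, x0, step=0.1, iters=10_000, tol=1e-10):
    """Projected reflected gradient descent for a variational inequality:
        x_{n+1} = P_C( x_n - step * F(2 * x_n - x_{n-1}) ).
    Returns an approximate solution, i.e. a variational equilibrium."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(iters):
        x_next = project(x - step * F(2.0 * x - x_prev))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x_prev, x = x, x_next
    return x

# Toy usage: F(x) = x - b (gradient of 0.5 * ||x - b||^2), feasible set = unit ball.
b = np.array([2.0, 0.0])
unit_ball = lambda x: x / max(1.0, np.linalg.norm(x))
print(prgd(lambda x: x - b, unit_ball, np.zeros(2)))   # ~ [1., 0.]
```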
- The expected error of the optimal meta loss function of the convergent point \(L(\theta^*) - L(\theta^k)\) is bounded. **Theorem 4** Let \((\theta^*, \phi^*(\theta^*))\) be an optimal solution of the stochastic optimization problem described in equations (8) and (9), which is the target problem of the NashMAML algorithm. We denote the expected meta loss function of the stochastic optimization problem \(\mathbb{E}_{T_i \sim p(T)} [L_i(\theta, \phi^*(\theta))]\) as \(E[L_i^*(\theta)]\). Let \(\delta\) and \(\bar{\delta}\) are the convergence criterion of the inner-loop and the outer-loop, respectively. Then, under Assumptions 1, 2, and 3, the NashMAML algorithm with step size \(\beta \leq \frac{\bar{\delta}}{\sqrt{4C^2\delta^2 + 4\left(C_1 + \frac{C_1C_2}{\mu_1 + \mu_2}\right)^2}}\) compute the optimal solution of the stochastic optimization problem described in equations (8) and (9) with the convergence speed \(O\left(\max\left\{k_b^2, k_\sigma^2, k_\sigma^2\right\}\right)\) and error \[ E[L_i^*(\theta^*)] - E[L_i^*(\theta^k)] \leq \frac{L_6}{2}\delta^2 + \frac{C^2\delta^2 - \left(C_1 + \frac{C_1C_2}{\mu_1 + \mu_2}\right)^2}{\sqrt{C^2\delta^2 + \left(C_1 + \frac{C_1C_2}{\mu_1 + \mu_2}\right)^2}}\bar{\delta} \] where \(C = L_1 + \frac{C_1L_4 + C_2L_2}{\mu_1 + \mu_2} + \frac{C_1C_2(L_3 + L_5)}{(\mu_1 + \mu_2)^2}\). Finally, we prove the convergent point of the NashMAML algorithm is always equivalent regardless of the order of the gradient update and the initial meta-parameter and task-specific parameters in Theorem 5. **Theorem 5** Under Assumption 1, 2, and 3, the NashMAML algorithm converges to the same optimal solution of the stochastic optimization problem described in equations (8) and (9) regardless of the order of the task-specific parameters’ gradient update in the inner-loop. Moreover, the NashMAML algorithm converges to the optimal solution of the stochastic optimization problem described in equations (8) and (9) regardless of the initial meta-parameter and initial task-specific parameters under Assumption 4. All proofs and details for the theorems are in Appendix C.2. Overall, we prove that the NashMAML algorithm consistently converges to the same point, regardless of the meta-parameters (initial parameter of task-specific parameters) or the order of task-specific parameters updates within the same batch. We also show that this convergent point corresponds to the generalized N-subStackelberg equilibrium of the equation (8)-(9). ## 5 EXPERIMENTS In this section, we propose two formulations for lower level problems to leverage optimal solutions without overfitting individual tasks while balancing the impact of tasks on the training of meta-parameters. After that, we present numerical experiments to demonstrate the potential of our framework. Our primary focus is to validate the theoretical findings and provide empirical evidence of the effectiveness of our proposed approach. For this purpose, we have chosen the 1D sine regression task as a representative example of a simple problem that clearly illustrates the core concepts and benefits of our game-theoretical framework. To show the scalability of our practical algorithm, we conduct experiments on few-shot image classification tasks. 5.1 Practical Formulation of NashMAML We conduct experiments with two different formulations of lower level problems. 
First, we formulate the NashMAML with the penalty function by regularizing the distance between the meta-parameter and the average of the task-specific parameters as follows:

$$\phi_i^*(\theta) = \arg\min_{\phi_i} L_i(\phi_i) + \frac{\lambda}{2} \left\| \theta - \frac{\phi_i + \sum_{j \neq i} \phi_j}{N} \right\|_2^2$$ (16)

Second, we formulate the NashMAML with the joint strategy set by assigning a ball-shaped constraint that limits the sum of distances between the meta-parameter and each task-specific parameter as follows:

$$\phi_i^*(\theta) = \arg\min_{\phi_i \in \Omega_i(\phi_{-i}^*(\theta), \theta)} L_i(\phi_i)$$ (17)

where the joint feasible strategy set of task $i$ with hyperparameter $r \in \mathbb{R}_+$ is given by

$$\Omega_i(\theta, \phi_{-i}) = \left\{ \phi_i \in \mathbb{R}^d : \left\| \phi_i - \theta \right\|_2^2 + \sum_{j \neq i} \left\| \phi_j - \theta \right\|_2^2 \leq r^2 \right\}. $$ (18)

The joint utility function and strategy set in equations (16) and (18) ensure that the task-specific parameters are influenced by the meta-parameter and the other task-specific parameters, thus preventing overfitting. Specifically, we prove that backpropagation is possible through Algorithm 1 by establishing the differentiability of the projection of task-specific parameters onto the set in equation (18) in Appendix B.3.

5.2 Sinusoid Regression

We consider the 1D sine regression task, where each task instance $T_i$ is a regression problem $y = a_i \sin(x - b_i)$. Each task amounts to inferring the amplitude and phase from the sampled data. For each task, the learner is given $K$ samples, where each sample $x_i$ is uniformly sampled from $[-5.0, 5.0]$, and tries to approximate the underlying function in terms of mean squared error (MSE). While amplitude and phase are typically sampled from a uniform distribution, we experiment with settings where both the training and test distributions of the amplitude are skewed. In particular, we sample the amplitude from $[0.1, 1.05] \cup [4.95, 5.0]$. In this setting, naively minimizing the average loss of each task without considering the other tasks may lead to training instability, since the task distribution has two separate modes.

First, we investigate the training stability of our method. To measure training stability, we use the standard deviation of the training loss over the course of training. As shown in Figure 2 (left), NashMAML with the constraint shows a consistently lower standard deviation of the training loss. Figure 2 also presents a comprehensive comparison of the test mean MSE for all evaluated models. As depicted in Figure 2 (middle), both versions of NashMAML consistently surpass MAML on the sine regression task. We also examine the properties of NashMAML with respect to the radius hyperparameter $r$. As illustrated in Figure 2 (right), NashMAML exhibits a high degree of robustness to variations in $r$.

5.3 Image Classification

We evaluate our method on a popular few-shot image classification benchmark, the Mini-ImageNet dataset. It consists of 60,000 color images of size $84 \times 84 \times 3$, with 100 classes and 600 examples per class. We use the split proposed by Ravi & Larochelle (2017): 64 classes for training, 16 for validation, and 20 for testing. Our objective is to solve the $N$-way $K$-shot classification problem, set up as follows: given $N$ classes, we have access to $K$ instances of each of the $N$ classes and evaluate the model's ability to classify new instances from those $N$ classes.
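For concreteness, the projection onto the joint ball-shaped strategy set of equation (18), together with an inner loop that alternates task gradient steps with this projection (cf. Figure 1 and Algorithm 1), can be sketched as follows. This is an illustrative sketch rather than the paper's implementation: `task.loss_train` is an assumed interface, a single flat parameter tensor is used per task, and the meta-update would additionally differentiate through these steps.

```python
import torch

def project_joint_ball(phis, theta, radius):
    """Euclidean projection of the stacked task parameters onto the set of
    eq. (18): sum_i ||phi_i - theta||^2 <= r^2.  If the constraint is violated,
    every offset (phi_i - theta) is rescaled by the same factor radius / norm."""
    offsets = [phi - theta for phi in phis]
    norm = torch.sqrt(sum((o ** 2).sum() for o in offsets))
    if norm <= radius:
        return phis
    return [theta + (radius / norm) * o for o in offsets]

def constrained_inner_loop(theta, tasks, radius, alpha=0.01, steps=5):
    """Simultaneous gradient steps on each task loss, followed by projection
    onto the joint strategy set whenever the ball constraint is violated."""
    phis = [theta.clone() for _ in tasks]
    for _ in range(steps):
        grads = [torch.autograd.grad(task.loss_train(phi), phi,
                                     create_graph=True)[0]
                 for task, phi in zip(tasks, phis)]
        phis = [phi - alpha * g for phi, g in zip(phis, grads)]
        phis = project_joint_ball(phis, theta, radius)   # keep phis feasible
    return phis   # the meta-loss is then evaluated at these adapted parameters
```

Because every offset is rescaled by the same factor, this projection is differentiable away from the boundary of the ball, in line with the differentiability argument referenced above (Appendix B.3).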
We compare our framework with FOMAML, MAML, and iMAML. While these methods are not state-of-the-art on this benchmark, they provide an apples-to-apples comparison for studying the game-theoretical analysis of gradient-based meta-learning. We also anticipate that our perspective can be extended to state-of-the-art meta-learning models with minor modifications. For a fair comparison, we use the identical convolutional architecture as the baselines and follow the same training procedure. To be specific, both models are trained with 5 inner gradient steps with a learning rate of 0.01 and evaluated using 10 gradient steps at test time.

Figure 2: **1D sine regression task results.** Experiments are conducted with 3 different random seeds; the mean and 95% confidence interval are reported.

Table 2 shows the experimental results on the Mini-ImageNet dataset. As shown in the table, NashMAML outperforms the baselines in both the 1-shot and 5-shot cases. While FONashMAML does slightly worse on the 1-shot task, it still outperforms the baselines on the 5-shot task, with performance close to that of NashMAML. It appears that our formulation, which keeps the task-specific parameters in the inner-loop from diverging excessively from the meta-parameters, together with the generalized Stackelberg equilibrium computed by the NashMAML algorithm, effectively boosts the performance of meta-learning algorithms in scalable settings.

Table 2: Mini-ImageNet 5-way $K$-shot results. FOMAML, MAML, and CAVIA results are taken from the original works (Nichol & Schulman, 2018; Zintgraf et al., 2019).

| | 5-way 1-shot | 5-way 5-shot |
|------------------|--------------|--------------|
| MAML | 48.70 ± 1.84 % | 63.11 ± 0.92 % |
| NashMAML (Constraint) | 51.70 ± 0.99 % | 65.34 ± 0.65 % |
| NashMAML (Penalty) | 48.81 ± 0.97 % | 62.95 ± 0.58 % |
| FOMAML | 48.07 ± 1.75 % | 63.15 ± 0.91 % |
| FONashMAML (Constraint) | 46.70 ± 0.99 % | 64.12 ± 0.23 % |
| FONashMAML (Penalty) | 46.81 ± 1.01 % | 63.05 ± 0.43 % |
| CAVIA (32) | 47.24 ± 0.65 % | 59.05 ± 0.54 % |
| NashCAVIA (Constraint) | 46.63 ± 0.91 % | 59.48 ± 0.81 % |
| NashCAVIA (Penalty) | 47.05 ± 0.93 % | 60.27 ± 0.73 % |

6 CONCLUSION

In this paper, we propose a novel algorithm called NashMAML, an extension of MAML. The NashMAML algorithm introduces a new methodology for aligning the objective functions at the lower level to accommodate various objectives that meta-learning problems may have, such as worst-case and zero-shot performance. By assigning appropriate joint strategy sets and utility functions to the lower level based on the given upper level objectives, NashMAML ensures that the upper and lower levels share the same objectives, enabling effective learning for arbitrary objectives. In practice, we present a formulation of NashMAML focused on enhancing the stability of training and validate it through experiments on both sinusoidal regression and image classification tasks. In future research, we plan to broaden the scope beyond zero-shot performance and explore a range of objectives, including worst-case performance.

REFERENCES

Liam Collins, Aryan Mokhtari, and Sanjay Shakkottai. Task-robust model-agnostic meta-learning. *Advances in Neural Information Processing Systems*, 33:18860–18871, 2020.

Francisco Facchinei and Christian Kanzow. Generalized nash equilibrium problems. *Annals of Operations Research*, 175(1):177–211, 2010.

Francisco Facchinei and Jong-Shi Pang. *Finite-dimensional variational inequalities and complementarity problems*. Springer, 2003.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pp. 1126–1135. PMLR, 2017. Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. *Advances in neural information processing systems*, 31, 2018. Alex Gu, Songtao Lu, Parikshit Ram, and Lily Weng. Nonconvex min-max bilevel optimization for task robust meta learning. In *International Conference on Machine Learning*, 2021. Jaeyeon Jo, Jihwan Yu, and Jinkyoo Park. Computing algorithm for an equilibrium of the generalized stackelberg game, 2023. Taesup Kim, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. *arXiv preprint arXiv:1806.03836*, 2018. Yu Malitsky. Projected reflected gradient methods for monotone variational inequalities. *SIAM Journal on Optimization*, 25(1):502–520, 2015. Taewook Nam, Shao-Hua Sun, Karl Pertsch, Sung Ju Hwang, and Joseph J Lim. Skill-based meta-reinforcement learning. *arXiv preprint arXiv:2204.11828*, 2022. John F Nash Jr. Equilibrium points in n-person games. *Proceedings of the national academy of sciences*, 36(1):48–49, 1950. Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm. *arXiv preprint arXiv:1803.02999*, 2(3):4, 2018. Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms, 2018. Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. Zero-shot cross-lingual transfer with meta learning. *arXiv preprint arXiv:2003.02739*, 2020. Aravind Rajeswaran, Chelsea Finn, Sham M Kakade, and Sergey Levine. Meta-learning with implicit gradients. *Advances in neural information processing systems*, 32, 2019. Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In *International conference on learning representations*, 2017. Heinrich von Stackelberg et al. Theory of the market economy. 1952. Sebastian Thrun and Lorien Pratt. Learning to learn: Introduction and overview. *Learning to learn*, pp. 3–17, 1998. Vinay Kumar Verma, Dhanajit Brahma, and Piyush Rai. Meta-learning for generalized zero-shot learning. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 6062–6069, 2020. Luisa Zintgraf, Kyriacos Shiarli, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In *International Conference on Machine Learning*, pp. 7693–7702. PMLR, 2019.
b3LNKq6tfA
While the introduction of the RUID dataset (and its creation process) is very interesting and useful, I question whether the randomness of the generation approach could produce many samples that are very hard to transform back to code, thus impeding the improvement of performance at training time.
Learning UI-to-Code Reverse Generator using Visual Critic without Rendering Anonymous authors Paper under double-blind review Abstract Automated reverse engineering of HTML/CSS code from UI screenshots is an important yet challenging problem with broad applications in website development and design. In this paper, we propose a novel vision-code transformer (ViCT) composed of a vision encoder processing the screenshots and a language decoder to generate the code. They are initialized by pre-trained models such as ViT/DiT and GPT-2/LLaMA but aligning the two modalities requires end-to-end finetuning, which aims to minimize the visual discrepancy between the code-rendered webpage and the original screenshot. However, the rendering is non-differentiable and causes costly overhead. We address this problem by actor-critic fine-tuning where a visual critic without rendering (ViCR) is developed to predict visual discrepancy given the original and generated code. To train and evaluate our models, we created two synthetic datasets of varying complexity, with over 75,000 unique (code, screenshot) pairs. We evaluate the UI-to-Code performance using a combination of automated metrics such as MSE, BLEU, IoU, and a novel htmlBLEU score. ViCT outperforms a strong baseline model DiT-GPT2, improving IoU from 0.64 to 0.79 and lowering MSE from 12.25 to 9.02. With much lower computational cost, it can achieve comparable performance as when using a larger decoder such as LLaMA. 1 Introduction Recent Language Models (LMs) have demonstrated a remarkable capability in generating coherent code. For example, Codex [Chen et al., 2021], GPT-4, Code Llama [Rozière et al., 2023], and CodeRL [Le et al., 2022] have shown promise in aiding software engineers in their daily work. On the other hand, attention-based models have achieved success in various vision tasks, such as image classification, segmentation, and annotation. Among them, some landmark works are Vision Transformer (ViT) [Dosovitskiy et al., 2020], Swin [Liu et al., 2021], and other architectures, with a few specifically targeting document-related tasks, e.g., Document Image Transformer (DiT) [Li et al., 2022a]. In this paper, we take the first step towards reverse-engineering a UI screenshot, i.e., generating an HTML/CSS code that can reproduce the image. By combining the strengths of both LLMs in code generation and Vision Transformer in image representation, and aligning them for the above task, we investigate the possibility of generating the markup code from the visual representations of the original image. Our contributions are threefold. First, we develop a synthetic dataset generation module used for front-end UI screenshot images and corresponding code generation. The module is designed to generate images with varying complexity and styles, as well as the corresponding markup code. This dataset is used for training and evaluating our proposed approach and baselines. We also propose htmlBLEU, a more accurate HTML and CSS code similarity metric. Secondly, we propose a vision-code transformer (ViCT) architecture composed of a vision encoder and a language model decoder. We then evaluate the performance of several choices for the encoder and decoder for this task. Specifically, we experiment with GPT-2 [Radford et al., 2019] and LLaMA [Touvron et al., 2023a] as the text decoder for code generation, and compare the performance of ViT and DiT as image encoders. ViT is a widely utilized model trained on natural images for image recognition tasks. 
In contrast, DiT is specifically designed for document-related tasks, whose domain tends to be closer to that of our task. We explore the efficacy of these architectures in generating HTML/CSS code from webpage screenshots. Thirdly, we develop a novel visual critic without rendering (ViCR) used to finetune ViCT for step-by-step code generation. In particular, we train a critic model that aims to evaluate the discrepancy between the original UI screenshot and the one to be created by the generated code without rendering from the code. ViCR avoids the non-differential rendering loss and its induced overheads. We then apply an Actor-Critic algorithm (AC2) to train ViCT in an end-to-end manner. Our work, for the first time, establishes a robust baseline for generating markup code from images using vision-code transformers. Moreover, we developed a novel evaluation metric htmlBLEU to assess the task. Our proposed approach holds potential applications in front-end web development, as it could offer a more efficient and automated method for generating markup code for web designers. ### 2 RELATED WORKS Recent years have witnessed significant progress in both image-understanding and text-generation tasks, empowered by deep learning and the availability of massive datasets [Goodfellow et al. (2014); Radford et al. (2016); Esser et al. (2021); Sutskever et al. (2014); Graves et al. (2013); Transformer models trained on large-scale data in a self-supervised manner have played a key role in these advancements [Vaswani et al., 2017; Devlin et al., 2019]. Code prediction and generation have received growing attention in the realm of text generation. While traditional NLP methods like N-grams and Probabilistic Context-Free Grammar (PCFG) have encountered challenges in code generation tasks [Maddison & Tarlow, 2014], recent advancements in Transformer models have led to remarkable improvements. For instance, Codex [Chen et al., 2021] and its derivative tool Copilot, fine-tuned on publicly available code from GitHub, have achieved impressive performance on code generation tasks. InCoder [Fried et al., 2022] enables bidirectional context for code infilling by training on publicly available repositories where code regions have been randomly masked and moved to the end of each file. CodeGen [Nijkamp et al., 2022] explores a multi-step paradigm for program synthesis, dividing a single program into multiple subproblems specified by multiple prompts. CodeRL [Le et al., 2022] incorporates deep Visual Critic with an error-predictor critic network that generates rewards by classifying the code. Works like BLiP [Li et al., 2022b], Git [Wang et al., 2022], and CoCA [Yu et al., 2022] have leveraged large-scale pre-training on visual-textual data followed by fine-tuning on target tasks and outperformed traditional methods. Recent works such as BLiP 2 [Li et al., 2023], and derivatives such as Minigpt-4 [Zhu et al., 2023] and InstructBLIP [Dai et al., 2023] significantly improve the text generation capacity by employing larger language models such as Vicuna [Chiang et al., 2023] and LLaMA-2 [Touvron et al., 2023b]. Other models aiming to generate code from images include pix2code [Beltramelli, 2018], which generates code based on context and GUI images, and Sketch2code [Robinson, 2019], which attempts to generate code from wireframes using traditional computer vision and deep learning algorithm. The paper found the deep learning-based pipeline to perform better. 
While some works add components such as image style transfer, they rely on predefined classification for code generation. Another work Pix2Struct [Lee et al., 2022] uses a novel screenshot parsing objective to generate a simplified HTML parse from a masked screenshot of a webpage, effectively learning rich representations of the underlying structure of web pages. It encourages joint reasoning about the co-occurrence of text, images, and layouts in webpages. In contrast, our work builds upon these foundations and offers innovative approaches to generating HTML/CSS code. We employ transformer architecture and introduce a visual similarity signal in the training process, enhancing the accuracy and versatility of our model. Furthermore, we curate a diverse dataset tailored for this specific task and establish a strong baseline for generating markup code from images using vision-code transformers, contributing to the solving of this challenge. Additionally, we introduce a novel evaluation metric designed to facilitate a more accurate assessment of the task’s performance. 3 METHODOLOGY In this section, we elucidate the components of our approach, including the dataset, evaluation metrics, baseline training procedure, task formulation using actor-critic, and details of our experiments. 3.1 NEW DATASETS FOR UI TO CODE GENERATION While a variety of code generation datasets are available [Rozière et al., 2023], the space of open UI-code correspondence datasets is scarce. Due to this, we utilize a synthetic generation process to create two datasets of varying complexity. We generate Random UI Dataset (RUID) by combining a small number of HTML elements, such as two types of Divs, a square, a circle, and a button element, with randomly chosen style attributes. This results in a diverse set of images that can be used for the training process. We then create RUID-Large, which incorporates elements in RUID but expands it to most tags available in HTML, randomly generating trees with forms, divs, inputs, dropdowns, etc. All the synthetic dataset code is enclosed in standard HTML opening and closing tags, specifically: ```html <!DOCTYPE html> ``` For RUID We set the description of the elements present in the body as the title, for example, “2 Circles, 0 Blocks”. For each element, a paragraph containing a number of words has been added, with the text sourced from Project Gutenberg [Project Gutenberg](https://www.gutenberg.org). The dataset generator can be used to create samples of varying complexity. We focus on sampling small elements for the RUID dataset. Most of our experiments are performed on the RUID dataset. A summary of the settings used for the work can be found in Table 1. Overall, the maximal input length of the adopted models acts as a ceiling to the length of the generated code. An example of the generated element is below. Note that some of the parameters in Table 1 have been adjusted to fit the webpage, but the aesthetics of the generated elements have not been taken into account. The datasets are used with a split of 80:10:10 for training, validation, and testing. For each code sample, we take a screenshot of its generated webpage as it looks when opened in a Chromium browser. Thereby, we collected a dataset of (image, code) pairs, facilitating the training and evaluation of our models. It is worth noting that the traditional pipelines cannot directly address the task studied in this paper due to their different problem formulations and dataset formats. 
Specifically, they reduce the problem to classification between predefined building blocks, while our approach focuses on free-form code generation. Hence, comparisons to them on our proposed dataset and task are infeasible. That being said, we create baselines for Transformer models on our dataset for HTML/CSS code generation, which explores greater freedom in attributes and has no restrictions on color variety. In addition, we have evaluated the performance of general Visual Language Models such as InstructBlip [Dai et al. (2023)], minigpt-4 [Zhu et al. (2023)], and Bing chat in executing the specified task.

### 3.2 Proposed Vision-Code Transformer (ViCT)

Our model employs a Vision Transformer (ViT) [Dosovitskiy et al. (2020)] as the vision encoder. The ViT processes the input image by dividing it into patches and encoding them as a sequence of tokens, supplemented by a [CLS] token representing the image's global features. This computationally efficient approach, which has become widely adopted in recent methods such as [Li et al. (2021)], eliminates the need for pre-trained object detectors for visual feature extraction. Since ViT was usually pre-trained on natural images while the images in our dataset are mainly UI images, we further explored the DiT model [Li et al. (2022a)], a transformer trained on document images (which has a smaller domain gap to UI images), as an alternative image encoder. We use GPT-2 [Radford et al. (2019)] and LLaMA [Touvron et al. (2023a)], autoregressive language models, as the image-grounded text decoder. This model integrates the ViT encoder's output tokens into its first-layer inputs via a cross-attention (CA) layer, positioned between the causal self-attention (CSA) layer and the feedforward network (FFN) of the text decoder. The input sequence length is set to 900, while each sequence begins with a [BOS] token and is terminated by an [EOS] token. By optimizing a cross-entropy loss function, we train the ViT encoder and GPT-2 decoder in an end-to-end manner, which maximizes the likelihood of the ground-truth code in an autoregressive way. This objective provides the model with the ability to generalize and effectively convert visual information into coherent HTML/CSS code.

| Properties | Rectangle | Ellipse | Button |
|------------|-----------|---------|--------|
| Left (%) | 0-80 | 0-80 | 0-80 |
| Top (%) | 0-80 | 0-80 | 0-80 |
| Width (%) | 10-30 | 10-30 | 10-30 |
| Height (%) | 10-30 | 10-30 | 10-30 |
| Background | Uniform | Uniform | – |
| Text Length| 1 Word | 1 Word | 1 Word |
| Occurrence | 12/25 | 12/25 | 1/25 |

Table 2: Element types, widths, and parameters used for the synthetic dataset generation. The number of elements per sample was randomly drawn from 1 to 6.

3.3 Fine-tuning using Visual Critic without Rendering (ViCR)

Our goal in the finetuning process is to improve the visual similarity between the original and the predicted code samples when rendered. To do this, we formulate image-to-code generation as an RL problem, where a visual similarity score such as IoU serves as the basis for the reward signal, while the finetuned encoder-decoder model serves as the stochastic policy, with token predictions as action steps.

\[ L_{\text{ViCR}}(\theta) = -\mathbb{E}_{W^s \sim p_\theta}[\text{IoU}(I_{\text{org}}, I_{W^s})] \]

where \( \theta \) are the parameters of our model.
\( \mathbb{E}_{W^s \sim p_\theta} \) represents the expectation over synthetic samples \( W^s \) drawn from the policy \( p_\theta \), which is the distribution over actions defined by the model. \( \text{IoU}(I_{\text{org}}, I_{W^s}) \) is the Intersection over Union (IoU) score, a measure of the visual similarity between the input image \( I_{\text{org}} \) and the image \( I_{W^s} \) rendered from the synthetic sample \( W^s \).

Due to the challenges of such training in text generation setups [Zhong et al., 2017; Le et al., 2022], we modify the actor-critic approach used by CodeRL [Le et al., 2022], where a language model is used as the critic. We experiment with using GPT2 and BERT [Devlin et al., 2019] models. From initial results, we see that the GPT2 critic significantly outperforms BERT, with the BERT model collapsing to single-class prediction even after oversampling, so we proceed with the GPT2 critic for the experiments. The performance of the critic can be seen in Figure 2. The critic model is trained using source code samples from the training set paired with the sampled result from the baseline model as input, and the similarity score between the respective rendered visualizations as the label. The inputs are concatenated using the following template:

```
{predicted_code}\nGround:{source_code}
```

To simplify the training process, we do not use raw similarity values, instead approaching critic training as a classification problem between 4 classes. Specifically, the IoU thresholds used are: very low (0–0.23), low (0.23–0.42), high (0.42–0.77), and very high (0.77+). We then use the critic model to generate intermediate outputs for each prediction and the corresponding source code samples in the training set. We also create a mask to only use the values related to predicted code tokens in the tuning loss calculation. We then apply softmax to each token's corresponding output and select the values of the IoU ground-truth bucket. The resulting vector is then multiplied by the corresponding assigned reward. We assign rewards of -1, -0.7, and -0.3 to classes 0, 1, and 2, respectively, and a positive reward of 1 to class 3. The resulting vector is used to scale the original loss during the fine-tuning phase using the update:

\[ \nabla_\theta L_{\text{ViCR}}(\theta) \approx -\mathbb{E}_{W^s \sim p_\theta} \left[ r(W^s) \sum_t \hat{q}_\phi(w^s_t) \nabla_\theta \log p_\theta(w^s_t | w^s_{1:t-1}, D) \right] \]

where \( \nabla_\theta L_{\text{ViCR}}(\theta) \) represents the gradient of the RL loss function with respect to the model parameters \( \theta \), \( \hat{q}_\phi(w^s_t) \) represents the critic's estimated value for the token \( w^s_t \) at time step \( t \), and \( \nabla_\theta \log p_\theta(w^s_t | w^s_{1:t-1}, D) \) is the gradient of the log-probability of token \( w^s_t \) at time step \( t \), given the history of previous tokens \( w^s_{1:t-1} \) and additional data \( D \), with respect to the model parameters \( \theta \).

3.4 Improving Metrics

Evaluating Vision-Code models can be challenging. Some of the common metrics for language generation, such as BLEU scores, can be misleading since there are multiple ways to achieve the same visual look during rendering. The same goes for using a large language model such as GPT-4 to evaluate the output. Rendering itself is costly since it depends on an external browser. Inspired by CodeBLEU (Ren et al., 2020), to improve generated-code evaluation without rendering we propose the htmlBLEU metric, which emphasizes important pieces of code as well as aligning attributes and elements in the DOM tree.

Figure 3: Example renderings from different models tested on the RUID dataset. The top row shows input UI screenshots from the dataset. The subsequent rows show renderings of the code predicted by each model for those specific inputs.

Still, these metrics are a proxy for the real target, which is human assessment of quality. To measure this, we conducted a survey with 59 volunteers. The participants were presented with the original screenshot and the rendered one side-by-side and then asked to rate them on a scale of 0 to 100 in terms of structural and color similarity. For each model, we report the averaged min-max normalized score across all annotators and samples.

For automated metrics, we employed two groups to assess model performance: (1) code-based metrics, which compare the generated code against the original code producing the input screenshot; and (2) image-based metrics, which evaluate the screenshot of the generated images against the input. We measure image similarity (2) in the following ways. First, we calculate the mean squared error (MSE) between the pixel values of the two images. Second, we create binary masks for each image, setting pixels to 1 wherever the magnitude is greater than zero. We then compute the MSE and intersection over union (IoU) between these masks. For (1), we employed the BLEU score and a dataset-specific metric called Element Counts, where the presence of all elements in the generated code results in a score of 1, and misalignment yields a score of 0. Since the BLEU score equally penalizes any differences between the two pieces of code, it is not an ideal metric for code evaluation. To avoid penalizing differences that do not lead to visual discrepancies, we develop a new metric, htmlBLEU, from CodeBLEU [Ren et al., 2020], as HTML code lacks data flow or a syntactic abstract syntax tree (AST). htmlBLEU comprises four components: a basic BLEU score, a weighted BLEU score focusing on the most important keywords for HTML code, a Document Object Model (DOM) Tree Matching between the corresponding HTML elements, and an attribute matching that aims to put elements and attributes in correspondence. To examine the efficacy of htmlBLEU, we measure the Spearman's rank correlation coefficient [Spearman, 1904] between htmlBLEU scores and the MSE between input and generated images. The correlation between htmlBLEU and MSE is 0.764, compared to 0.329 for the correlation between BLEU and MSE. The significantly higher correlation of htmlBLEU demonstrates that it more accurately reflects visual similarity than BLEU.

4 EXPERIMENTS

4.1 Establishing baselines

To establish baselines, we tested two recent visual language models, InstructBLIP and Minigpt-4, on two tasks. The first was identifying the number of distinct shapes in an image. The second was recreating the source code for the image. In our experiments, neither model achieved strong performance on these tasks. They generated unchanging or nonsensical output for the source code and hallucinated the number of shapes. For example, Minigpt-4 produced the same incorrect output regardless of the input image (Figure 2). These results indicate that general visual language models of this type may not be well-suited for source code generation from image inputs without additional training or modifications.
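As a concrete reference for the image-based metrics just described, the following is a minimal sketch of the pixel MSE, mask MSE, and mask IoU computations; the array shapes, the channel-summed thresholding, and the function name are assumptions rather than the exact evaluation code.

```python
import numpy as np

def image_similarity_metrics(original, rendered):
    """Sketch of the image-based metrics: pixel MSE, mask MSE, and mask IoU.

    original, rendered: numpy arrays of the same shape (H, W, C) holding the
    input screenshot and the screenshot rendered from the generated code.
    """
    a = original.astype(np.float64)
    b = rendered.astype(np.float64)

    # (1) mean squared error between raw pixel values
    mse = float(np.mean((a - b) ** 2))

    # (2) binary masks: a pixel is 1 wherever its magnitude is greater than zero
    mask_a = (np.abs(a).sum(axis=-1) > 0).astype(np.float64)
    mask_b = (np.abs(b).sum(axis=-1) > 0).astype(np.float64)
    mask_mse = float(np.mean((mask_a - mask_b) ** 2))

    # intersection over union of the two binary masks
    inter = np.logical_and(mask_a > 0, mask_b > 0).sum()
    union = np.logical_or(mask_a > 0, mask_b > 0).sum()
    iou = float(inter / union) if union > 0 else 1.0

    return {"mse": mse, "mask_mse": mask_mse, "iou": iou}
```

The mask-based variants deliberately ignore exact color values, so they reward layouts that place elements in the right regions even when styling differs.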
4.2 RUID and RUID-Large Datasets We report the image similarity and code generation metrics for different models in Table 3, in which the DiT-based model significantly outperforms the ViT-based model across all metrics. Although ViT can accurately handle simpler images with a single element, it struggles to correctly capture > 1 types of elements present in the input image, as well as their positions and colors. This is consistent with the qualitative results of our study: The ViT-based model was found to frequently miss or misinterpret elements and had difficulty accurately predicting the hexadecimal values of the colors. Figure 3 show some examples: DiT model accurately identifies the types and locations of the elements, while ViT model struggles with these tasks. The ViCT-L (w/o ViCR) model which uses DiT-Large encoder and GPT2-Large decoder models further improves the performance, demonstrating the benefits of leveraging larger models in the image-to-code generation task. The DiT-Large-GPT2-Large model exhibits enhanced results in terms of IoU, MSE, and element counts compared to the ViCT (w/o ViCR) model. Moreover, we explore the application of Visual Critic to further enhance the code generation process. Specifically, the ViCR model outperforms its normal finetuning variants in all metrics, showcasing the effectiveness of RL-based approaches in improving the visual similarity between the generated code and the input image. Additionally, when compared to the DiT-Large-GPT2-Large model, the Table 3: Comparison of various model performances on the RUID dataset, comprising basic shapes and elements. Our model, ViCT exhibits superior performance over ViCT without ViCR finetuning and is competitive with, or exceeds, larger models. The “Metrics” section presents automatically calculated metrics, while “Human Evaluation” provides normalized survey results, both displayed with mean and variance values. | Model | ViT-GPT2 | ViCT (w/o ViCR) | DiT-GPT2 (L.) | ViCT (Our) | |----------------|----------|-----------------|---------------|------------| | **Metrics** | | | | | | BLEU ↑ | 0.65 ± 0.08 | 0.74 ± 0.09 | 0.68 ± 0.11 | **0.76 ± 0.08** | | htmlBLEU ↑ | 0.62 ± 0.13 | 0.69 ± 0.14 | 0.67 ± 0.12 | **0.70 ± 0.13** | | IoU ↑ | 0.31 ± 0.25 | 0.64 ± 0.27 | **0.81 ± 0.19** | 0.79 ± 0.23 | | MSE ↓ | 19.63 ± 11.59 | 12.25 ± 8.83 | 11.34 ± 8.17 | **9.02 ± 6.96** | | MSE (Mask) ↓ | 0.15 ± 0.09 | 0.07 ± 0.06 | **0.03 ± 0.05** | **0.03 ± 0.04** | | Element N ↑ | 0.97 ± 0.16 | 0.97 ± 0.18 | 0.86 ± 0.36 | 0.96 ± 0.20 | | Human Evaluation (Normalized) | |-------------------------------| | Color Fidelity ↑ | 0.41 ± 0.29 | 0.66 ± 0.28 | 0.51 ± 0.27 | **0.83 ± 0.21** | | Structural Sim. ↑ | 0.49 ± 0.33 | 0.67 ± 0.27 | **0.85 ± 0.18** | 0.83 ± 0.25 | Table 4: Comparison of various model performances on the RUID-Large dataset, overall scores are lower than RUID due to the dataset being more challenging, incorporating most HTML elements in complex combinations. Still, our model, ViCT performs similar or better than larger models. | Model | ViT-GPT2 | ViCT (w/o ViCR) | DiT-GPT2 (L.) 
| DiT-LLaMA | ViCT (Our) | |----------------|----------|-----------------|---------------|-----------|------------| | **Metrics (800 token dataset)** | | | | | | | BLEU ↑ | 0.60 ± 0.07 | 0.72 ± 0.08 | 0.65 ± 0.10 | 0.72 ± 0.09 | **0.74 ± 0.07** | | htmlBLEU ↑ | 0.59 ± 0.12 | 0.68 ± 0.11 | 0.64 ± 0.13 | 0.66 ± 0.12 | **0.69 ± 0.12** | | IoU ↑ | 0.24 ± 0.20 | 0.38 ± 0.18 | **0.42 ± 0.16** | 0.41 ± 0.17 | 0.40 ± 0.19 | | MSE ↓ | 19.95 ± 10.50 | 15.86 ± 9.20 | 14.90 ± 8.50 | 14.20 ± 8.20 | **13.50 ± 7.80** | | MSE (Mask) ↓ | 0.15 ± 0.08 | 0.11 ± 0.07 | 0.09 ± 0.06 | **0.08 ± 0.06** | **0.08 ± 0.05** | | Element N ↑ | 0.92 ± 0.15 | 0.94 ± 0.16 | 0.89 ± 0.18 | 0.91 ± 0.17 | **0.93 ± 0.17** | | Human Evaluation (Normalized) | |-------------------------------| | Color Fidelity ↑ | 0.73 ± 0.10 | 0.77 ± 0.08 | 0.74 ± 0.09 | 0.76 ± 0.09 | **0.81 ± 0.07** | | Structural Sim. ↑ | 0.76 ± 0.12 | 0.81 ± 0.06 | 0.83 ± 0.10 | **0.84 ± 0.09** | **0.84 ± 0.08** | ViCR model achieves superior results in certain metrics, indicating the added benefits of RL integration. These findings highlight the potential of RL-based approaches in enhancing code generation and improving the visual fidelity of the generated code. As shown in Figure 5, both models’ performance drops as the code samples get more complex. But DiT suffers a much slower drop on the IoU curve than ViT when predicting more than one element. Another interesting observation is that the generated code does not necessarily have a high text similarity as the ground-truth code for the input screenshot. It can still produce visually similar webpage even if the textual similarity is low, which indicates a promising generalization capability of the models. Figure 5: IoU vs. complexity (the number of <div> elements in the ground-truth code). Approximates complexity of reverse generation. Scores for different models tested. IoU drops as more elements are added. 4.3 Ablation We conducted an ablation study by using cross-entropy (CE) instead of IOU as the metric to create classification labels for the critic model. There is no notable improvements from ViCT (w/o ViCR) scores of IoU of 0.64 or MSE of 12.25 for RUID dataset. In tables 2 and 4 we can see gains of ViCT versus when trained without ViCR. In figure 5 we also see that there is a significant improvement in the drop of performance over increasing sample complexity when ViCR is used. 5 Conclusion This paper investigates how to build and train a vision-code transformer for reverse engineering a webpage screenshot and generating the HTML/CSS code that can reproduce the screenshot. We apply ViT or DiT as an image encoder and GPT-2 as a textual decoder that generates code from the ViT/DiT features of the input image. Unlike traditional pipelines, our models can be trained in an end-to-end manner for free-form code generation. Moreover, we collect a synthetic dataset to train and evaluate the proposed models and develop a novel htmlBLEU metric to evaluate the matching between the ground-truth code and the generated one. Our experimental results show that the ViCT (w/o ViCR) model outperforms ViT-GPT2 in terms of multiple metrics and human evaluation. Furthermore, we explore the effect of model size, as well as of Actor-Critic finetuning on the model performance. We train the critic model to consider visual similarity information and modify the actor encoder-decoder network’s loss function to incorporate the critic model’s output. 
The results show that the RL finetuning is effective at significantly boosting the underlying model’s performance, with the resulting model scoring similarly or better larger sized models on most metrics. This study serves as a proof-of-concept in the field, demonstrating that Transformer architectures could be a viable end-to-end solution for this task. However, further research is necessary to extend the generated code’s length, improve text snippet identification in the image, and explore more complex examples where the corresponding code may not be as straightforward. 6 Limitations Despite the promising results, it is essential to highlight the limitations of this study. The synthetic dataset used, albeit including variations in the size and location of elements, may not fully encapsulate the complexity of real-world web pages. Moreover, the dataset does not strictly adhere to all front-end development best practices, necessitating further research for practical implementation in real products. Additionally, our current pipeline is solely capable of generating static text pages and is limited to small samples. Moreover, the Visual Critic pipeline necessitates tuning of certain hyperparameters, specifically learning rate, and rewards, to ensure stable training. Consequently, while our method is efficient, it introduces a degree of overhead when adapting to new datasets and tasks. 7 Reproducibility Statement We provide comprehensive details of our experimental setup, with additional information in the Appendix. Our computations were performed on standard CPUs and GPUs using open-source software. An anonymized copy of the research source code is included in a downloadable zip archive. The code is designed to be user-friendly and extensible for future research. Upon publication, we will provide a public link to the source code, model weights, and datasets. References Tony Beltramelli. pix2code: Generating code from a graphical user interface screenshot. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, pp. 1–6, 2018. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019. 
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. URL https://arxiv.org/abs/2010.11929. Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis, 2021. Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. Incoder: A generative model for code infilling and synthesis. *arXiv preprint arXiv:2204.05999*, 2022. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014. Alex Graves, Abdel rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks, 2013. Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven CH Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. *arXiv preprint arXiv:2207.01780*, 2022. Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. Pix2struct: Screenshot parsing as pretraining for visual language understanding, 2022. Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, and Furu Wei. Dit: Self-supervised pre-training for document image transformer, 2022a. URL https://arxiv.org/abs/2203.02378. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation, 2022b. URL https://arxiv.org/abs/2201.12086. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models, 2023. Yawei Li, Kai Zhang, Jiezhang Cao, Radu Timofte, and Luc Van Gool. Localvit: Bringing locality to vision transformers, 2021. URL https://arxiv.org/abs/2104.05707. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 10012–10022, 2021.
9k0krNzvlV
The motivation of this paper is that decoding-based watermarking has a limitation, namely that it can be bypassed by replacing the watermarking decoder with a normal decoder. Is this assumption practical? In practice, for an LLM API, how would such an operation be carried out?
ON THE LEARNABILITY OF WATERMARKS FOR LANGUAGE MODELS Chenchen Gu, Xiang Lisa Li, Percy Liang, Tatsunori Hashimoto Stanford University {cygu, xlisali, thashim}@stanford.edu, pliang@cs.stanford.edu ABSTRACT Watermarking of language model outputs enables statistical detection of model-generated text, which can mitigate harms and misuses of language models. Existing watermarking strategies operate by altering the decoder of an existing language model. In this paper, we ask whether language models can directly learn to generate watermarked text, which would have significant implications for the real-world deployment of watermarks. First, learned watermarks could be used to build open models that naturally generate watermarked text, enabling watermarking for open models, where users can control the decoding procedure. Second, if watermarking is used to determine the provenance of generated text, an adversary can hurt the reputation of a victim model by spoofing its watermark and generating damaging watermarked text. To investigate the learnability of watermarks, we propose watermark distillation, which trains a student model to behave like a teacher model that uses decoding-based watermarking. We test our approach on three decoding-based watermarking strategies and various hyperparameter settings, finding that models can learn to generate watermarked text with high detectability. We also find limitations to learnability, including the loss of watermarking capabilities under fine-tuning on normal text and high sample complexity when learning low-distortion watermarks.\footnote{See \url{https://github.com/chenchenyu/watermark-learnability} for code and models.} 1 INTRODUCTION As language models (LMs) become more capable and widely used, watermarking LM outputs becomes increasingly important to mitigate potential harms and misuses of LMs. Watermarking enables statistical detection of LM-generated text, which enables enforcing policies on LM usage, e.g., removing LM-generated disinformation from social media platforms or detecting academic dishonesty. Another proposed use case of watermarking is identifying the provenance of text, i.e., tracing text to the specific LM that generated it (Abdelnabi & Fritz, 2021; Kuditipudi et al., 2023). Recent works have shown that it is possible for an LM provider to inject specific, known watermark signals into text using specialized decoding algorithms (Kirchenbauer et al., 2023a; Aaronson, 2023; Kuditipudi et al., 2023), but little is known about whether these watermarks are learnable by a model. The learnability of watermarks has significant implications for the real-world deployment of watermarks, as it could enable downstream applications and adversarial spoofing attacks. In this work, we study the learnability of watermarks by studying weights-based watermarking, which involves learning parameters for a language model that cause it to generate watermarked text under its natural sampling distribution, without using a special decoding-time watermarking algorithm. Our investigation is motivated by its relevant implications for two applications: (i) developing watermarking for open language models and (ii) spoofing watermarks. First, existing watermarking methods depend upon using a specialized decoding algorithm, making them too inflexible for open LMs. For open LMs, where the weights are released, a user can use an ordinary decoding algorithm and generate non-watermarked text, whether intentionally or not. 
We find that weights-based watermarking works with standard decoding strategies, removing the reliance on a specialized decoder. This makes it a promising first step towards developing watermarking for open LMs. However, we also find that weights-based watermarking capabilities can be removed by fine-tuning on normal text, indicating that improving robustness to fine-tuning is an important remaining challenge.

Figure 1: Decoding-based watermarking (top) versus weights-based watermarking (bottom). Decoding-based watermarking requires a specialized decoding algorithm $f_w$ to generate watermarked text, whereas weights-based watermarking can use standard decoding to generate watermarked text directly from the model, using just its weights. Watermark distillation enables weights-based watermarking by training a student model $p_\theta$ to behave like the teacher model $p_{LM}$ with decoding-based watermarking strategy $f_w$.

Second, in watermark spoofing attacks, an adversary outputs text that contains the watermark signal from a victim LM (Sadasivan et al., 2023). If watermarking is used to identify the provenance of text, then an attacker could attribute damaging text to the victim LM and hurt its reputation. We find that the learning of weights-based watermarking can enable spoofing attacks, and we demonstrate a proof-of-concept attack on an instruction-following chat model. The possibility of spoofing attacks suggests that watermarking should not be used to attribute provenance or blame to a specific LM. Instead, watermarking should only be used to statistically detect LM-generated text, which can be used for tasks such as finding infractions of policies on LM usage.

To enable weights-based watermarking, we propose logit-based and sampling-based watermark distillation, two simple methods for a student model to learn weights-based watermarking from a teacher model with decoding-based watermarking. Intuitively, in logit-based watermark distillation, the student model is trained to match the next token distribution outputted by the teacher model using decoding-based watermarking. In sampling-based watermark distillation, the teacher model with decoding-based watermarking is first used to generate watermarked samples. Then, the student model is fine-tuned on these watermarked samples.

We experiment with three decoding-based watermarking strategies: KGW (Kirchenbauer et al., 2023a), including its Unigram-Watermark variant (Zhao et al., 2023a), Aar (Aaronson, 2023), and KTH (Kuditipudi et al., 2023), and various values for their hyperparameters that control the level of distortion induced by watermarking. We find that watermarks and hyperparameter settings vary in their degree of learnability. In each watermarking strategy, higher-distortion hyperparameter settings are successfully learned by both forms of watermark distillation (median p-values less than 0.0001). Lower-distortion watermarks and hyperparameter settings are more challenging and less sample efficient to learn, but not unlearnable, as the p-values are still noticeably smaller than the non-watermarked baseline of 0.5.

2 BACKGROUND AND NOTATION: DECODING-BASED WATERMARKING

We study autoregressive language models $p_{LM}: V^* \rightarrow \Delta(V)$ that map from a prefix string $x \in V^*$ to a next token distribution over the vocabulary $V$.
Informally, a decoding-based watermarking strategy $f_w$ uses a watermark key $\xi$ to modify the model’s original next token distribution $p_{LM}(\cdot | x)$ into a new distribution for generating watermarked text, which has a watermark signal embedded. The watermark detection algorithm $f_d$ looks for this signal using the same watermark key $\xi$. Formally, we define a decoding-based watermarking strategy to be a function $$f_w : \Delta(V) \times V^* \times \Xi \rightarrow \Delta(V)$$ (1) where $\Xi$ is the set of possible watermark keys. This function $f_w$ outputs a distribution $p_w(\cdot \mid x)$ from which to generate the next token in the watermarked text, given an original next token distribution $p_{LM}(\cdot \mid x)$, input text $x$, and watermark key $\xi \in \Xi$. We define a watermark detection algorithm to be a function $$f_d : V^* \times \Xi \rightarrow [0, 1].$$ Given some text $x$ and watermark key $\xi$, $f_d$ outputs a p-value with respect to the null hypothesis that $x$ is independent of $f_w$ with key $\xi$. Informally, $f_d$ computes a test statistic that measures the strength of the watermark signal, then computes a p-value using the distribution of the test statistic under the null hypothesis. If the p-value is below a given significance level, the null hypothesis is rejected and the text is detected as watermarked. Slightly imprecisely, rejecting the null hypothesis means the text is detected as model-generated.\footnote{This is slightly imprecise because model-generated text is not the only text that can be deliberately watermarked. For example, a human could write watermarked text by manually following the watermarking algorithm. However, for most practical use cases, such as detecting academic dishonesty, this minor imprecision is not an issue because either way, the user is doing something suspicious and unusual.} In this paper, we consider three decoding-based watermarking strategies: Algorithm 2 in Kirchenbauer et al. (2023a), the Gumbel softmax scheme in Aaronson (2023), and the exponential minimum sampling scheme in Kuditipudi et al. (2023). Using the authors’ names and initials, we refer to these as KGW, Aar, and KTH, respectively. We briefly describe these watermarking strategies below. See Appendix D for additional details and formal definitions. **KGW: green list bias.** In the KGW watermarking strategy (Kirchenbauer et al., 2023a), when generating the next token, the vocabulary is pseudorandomly split into a “green list” and “red list” by hashing the previous token using the watermark key $\xi$. The green list contains watermark hyperparameter $\gamma \in (0, 1)$ proportion of the vocabulary. Then, before the model’s logits are converted to probabilities via the softmax function, hyperparameter $\delta > 0$ is added to the logits of the green list tokens. This procedure makes green list tokens more likely in watermarked text than in non-watermarked text. So, at detection time, if the proportion of green list tokens in a text is much greater than $\gamma$, then the p-value is small. More generally, the previous $k$ tokens can be hashed, where $k$ is a hyperparameter. Values of $k > 1$ are investigated by Kirchenbauer et al. (2023b), finding that lower $k$ leads to more repetitive outputs. When $k = 0$, the green and red lists are fixed, regardless of the previous tokens. $k = 0$ was proposed by Zhao et al. (2023a) as Unigram-Watermark, a variant of KGW, but we will denote it as KGW $k = 0$ to simplify notation. 
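To make the mechanism above concrete, here is a minimal sketch of the KGW-biased next-token distribution for a single decoding step; the hashing and seeding scheme is a simplified assumption and not the authors' exact implementation.

```python
import torch

def kgw_next_token_distribution(logits, prev_tokens, key, gamma=0.25, delta=2.0, k=1):
    """Sketch of the KGW green-list bias for one decoding step.

    logits: un-normalized next-token logits from the LM, shape (|V|,).
    prev_tokens: list of previously generated token ids.
    key: integer watermark key used to seed the pseudorandom green-list split.
    """
    vocab_size = logits.shape[0]
    # hash the previous k tokens together with the key
    # (k = 0 yields a fixed green list, as in the Unigram-Watermark variant);
    # a real implementation would use a keyed, stable hash rather than hash()
    context = tuple(prev_tokens[-k:]) if k > 0 else ()
    seed = hash((key,) + context) % (2**31 - 1)
    gen = torch.Generator().manual_seed(seed)
    # pseudorandomly pick a green list covering a gamma fraction of the vocabulary
    perm = torch.randperm(vocab_size, generator=gen)
    green = perm[: int(gamma * vocab_size)]
    # add delta to the green-list logits before the softmax
    biased = logits.clone()
    biased[green] += delta
    return torch.softmax(biased, dim=-1)
```

Sampling from the returned distribution instead of the original softmax is what shifts probability mass toward green-list tokens, which the detector later counts.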
KGW distorts model outputs by upweighting green list tokens, increasing perplexity of generated texts computed by a larger model (Kirchenbauer et al., 2023a). Increasing the bias hyperparameter $\delta$ increases detectability, i.e., smaller p-values, but also increases distortion. **Aar: boosting continuous hash scores.** The Aar watermarking strategy (Aaronson, 2023) hashes the previous $k$ tokens using key $\xi$ (where $k$ is a hyperparameter) to obtain a score $r_i$ for each token $i \in V$, where each $r_i$ is uniformly distributed in $[0, 1]$. Let $p_i$ be the original model probability for token $i$. Then, the next generated token is deterministically chosen to be the token $i$ which maximizes $r_i^{1/p_i}$, i.e., a token with both a high original probability $p_i$ and high hash score $r_i$. This procedure boosts the hash scores of tokens in watermarked text compared to non-watermarked text. So, at detection time, if the hash scores $r_i$ of the tokens in the observed sequence are high, then the p-value is low. Since Aar deterministically selects the next token based on the previous $k$ tokens and the original model probabilities, Aar can lead to repetitive text, especially for small $k$ (Kuditipudi et al., 2023). Increasing $k$ decreases repetitiveness, as larger $k$-grams are less likely to be repeated, but the watermark also becomes less robust to edits, as each token edit affects the hash scores for $k + 1$ tokens. **KTH: robust sequence alignment.** The KTH watermarking strategy (Kuditipudi et al., 2023) is similar to Aar, but instead of hashing previous tokens to obtain the scores $r_i$, the scores are obtained from the next element in the key sequence $\xi$. In KTH, $\xi = (\xi^{(1)}, \ldots, \xi^{(m)})$ where each $\xi^{(j)} \in [0, 1]^{|V|}$. contains the scores, with entries uniformly distributed across \([0, 1]\). Then, to generate the \(j\)-th token in the sequence, KTH deterministically chooses the token \(i\) that maximizes \(\xi_i^{(j)} / p_i\). Note that \(m\) should be larger than the maximum generation length. To allow different generations from the same prompt, before generating each sequence, \(\xi\) can be shifted by some random \(\tau\), i.e., \(\xi' = \xi^{(1+\tau \mod m)}, \ldots, \xi^{(m+\tau \mod m)}\). To study the impact of these shifts on learnability, we introduce a hyperparameter \(s \in [1, m]\) for how many shift values \(\tau\) are possible.\(^3\) Increasing \(s\) expands the range of possible model generations. At detection time, to be robust to text edits and shifts, the test statistic quantifies how well a text \(x\) can be aligned with the key sequence \(\xi\). More specifically, the test statistic computes a minimum Levenshtein distance using the alignment cost \(d(x, \xi) = \sum_{t=1}^{\text{len}(x)} \log(1 - \xi_t)\). A lower (more negative) test statistic indicates stronger watermark signal. To compute p-values, the observed test statistic is compared to a reference distribution of test statistics of non-watermarked texts. Letting \(T\) be the number of samples in the reference distribution, the p-values computing using this method are lower bounded by \(1/T\). ### 3 METHODS **Problem statement.** Given a teacher model \(p_{LM}\), decoding-based watermarking strategy \(f_w\), and key \(\xi\), the goal is to learn a student model \(p_\theta\) whose sampling distribution naturally generates watermarked text. 
Specifically, letting \(f_d\) be the detection algorithm corresponding to \(f_w\), if \(p_\theta\) generates text \(y\) with small detection p-value \(f_d(y, \xi)\) with probability similar to that of \(p_{LM}\) with \(f_w\), then \(p_\theta\) has learned a weights-based watermarking strategy, since \(p_\theta\) has learned to generate watermarked text using just its weights. Figure 1 illustrates decoding-based versus weights-based watermarking. Next, we present two methods for learning a weights-based watermarking strategy: logit-based watermark distillation and sampling-based watermark distillation, which fall under the broader category of knowledge distillation (Hinton et al., 2015; Kim & Rush, 2016). #### 3.1 LOGIT-BASED WATERMARK DISTILLATION In logit-based watermark distillation, we train the student model \(p_\theta\) to behave as if it had decoding-based watermarking strategy \(f_w\) applied. Specifically, given an input \(x\), we want the student model’s next token distribution \(p_\theta(\cdot | x)\) to match \(f_w(p_{LM}(\cdot | x), x, \xi)\), the next token distribution outputted by the teacher model \(p_{LM}\) with decoding-based watermarking strategy \(f_w\) and key \(\xi\). So, given a dataset of texts \(D\), the training objective is to minimize the mean KL divergence between the teacher and student next token distributions on all prefixes in \(D\), given by \[ L_{\text{logit}}(\theta) = \frac{1}{|D|} \sum_{x \in D} \sum_{t=1}^{\text{len}(x)} D_{KL}(f_w(p_{LM}(\cdot | x_{<t}), x_{<t}, \xi) \| p_\theta(\cdot | x_{<t}))). \] The teacher model \(p_{LM}\) is frozen. This approach requires that \(p_{LM}\) and \(p_\theta\) have the same tokenizer and vocabulary so that the logits can be aligned between the two models. It is also helpful if \(p_{LM}\) and \(p_\theta\) share the same model architecture, as then we can initialize \(p_\theta\) to \(p_{LM}\). Note that the ground truth next tokens \(x_t\) from dataset \(D\) are not used in the loss function, so \(D\) does not need to be watermarked text. Standard datasets containing non-watermarked human-generated text can be used.\(^4\) #### 3.2 SAMPLING-BASED WATERMARK DISTILLATION Sampling-based watermark distillation has two stages. First, we generate watermarked text from teacher model \(p_{LM}\) with decoding-based watermarking strategy \(f_w\) applied using key \(\xi\). Then, we fine-tune the student model \(p_\theta\) on this watermarked text using the standard language modeling cross-entropy loss. \(^3\)We space the \(s\) shifts evenly across \([1, m]\), i.e., the set of possible shifts \(\tau\) is \(\{i \cdot \lfloor m/s \rfloor : 0 \leq i < s\}\). \(^4\)If \(D\) is non-watermarked text, then it theoretically might be out of distribution for \(p_\theta\) to autoregressively generate watermarked text, since \(p_\theta\) would be conditioning on the watermarked text it has already generated. However, empirically, we find that logit-based distilled models can learn to generate watermarked text. Formally, given a set of prompts \( \mathcal{P} \), for each prompt \( z \in \mathcal{P} \), we generate a watermarked completion sequence \( x = x_1 x_2 \cdots x_n \), where each sampled token \( x_t \sim f_w(p_{\text{PLM}}(\cdot | zx_{<t}), zx_{<t}, \xi) \). Let the fine-tuning dataset \( \mathcal{D} \) consist of these watermarked generations \( x \). 
Then, we train \( p_\theta \) to minimize the cross-entropy loss on \( \mathcal{D} \), given by \[ L_{\text{sampling}}(\theta) = \frac{1}{|\mathcal{D}|} \sum_{x \in \mathcal{D}} \sum_{t=1}^{\text{len}(x)} - \log p_\theta(x_t | x_{<t}). \] Here, \( p_{\text{PLM}} \) and \( p_\theta \) do not need to share the same tokenizer or vocabulary. However, sampling-based watermark distillation is less efficient than logit-based watermark distillation due to autoregressively generating watermarked text in the first stage. 4 EXPERIMENTAL SETUP We run experiments to evaluate how well logit-based and sampling-based watermark distillation can learn weights-based watermarking from the decoding-based watermarking strategies seen in §2. Ideally, we want weights-based watermarking to match decoding-based watermarking in terms of watermark detectability and generation quality. 4.1 WATERMARKING STRATEGIES AND HYPERPARAMETERS We experiment with the three decoding-based watermarking strategies discussed in §2. We use various hyperparameter settings to vary the level of distortion induced by the watermarks. Specifically, we test KGW with \( k = \{0, 1, 2\} \), \( \gamma = 0.25 \) and \( \delta = \{1, 2\} \), Aar with \( k = \{2, 3, 4\} \), and KTH with key length 256 and number of shifts \( s = \{1, 2, 4, 256\} \). 4.2 TRAINING For each decoding-based watermarking strategy, we test logit-based and sampling-based watermark distillation for learning weights-based watermarking. For logit-based watermark distillation, we use Llama 2 7B (Touvron et al., 2023) as both the teacher and student models (the student model is initialized with the teacher model weights). We distill using a subset of OpenWebText (Gokaslan et al., 2019) for 5,000 steps with a batch size of 64 sequences, sequence length of 512 tokens, maximal learning rate of 1e-5, and cosine learning rate decay with a linear warmup. Full training details are in Appendix E.1. For sampling-based watermark distillation, we also use Llama 2 7B as both the teacher and student models. First, we use Llama 2 7B with a decoding-based watermarking strategy to generate 640,000 watermarked samples of length 256 tokens, prompted with 50-token prefixes from OpenWebText. Then, we fine-tune Llama 2 7B on the watermarked samples for 1 epoch of 5,000 steps, with a batch size of 128 sequences, sequence length of 256 tokens, maximal learning rate of 1e-5, and cosine learning rate decay with a linear warmup. Full training details are in Appendix E.2. In Appendix F, we perform sampling-based watermark distillation experiments where the teacher and student models have different tokenizers and sizes, using Llama 2 7B as the teacher model and Pythia 1.4B as the student model (Biderman et al., 2023). 4.3 EVALUATION AND METRICS Evaluation procedure. As in Kirchenbauer et al. (2023a) and Kuditipudi et al. (2023), we evaluate on generations prompted by prefixes from the RealNewsLike subset of the C4 dataset (Raffel et al., 2020). Ideally, the intended use case and domain of the student model \( p_\theta \) should inform the choices of the set of prompts \( \mathcal{P} \) and teacher model \( p_{\text{PLM}} \). However, empirically, we find that sampling-based watermark distillation is fairly robust to domain shifts (see §4.3 and Appendix I). Because we always use \( \gamma = 0.25 \), we sometimes omit explicitly stating the value of \( \gamma \) to simplify notation. We exclude \( k = 2, \delta = 1 \) since we find that \( k = 2, \delta = 2 \) already exhibits lower learnability. 
For KTH we use a batch size of 128 and sequence length of 256 tokens because we use key length 256. For each decoding-based watermarking strategy and distilled model, we generate 5,000 200-token completions from 50-token prompts from the validation split. We use standard sampling with temperature 1 for the main results, and investigate the model’s robustness to different decoding parameters in Appendix C. We include evaluations on additional datasets in Appendix D. We choose metrics to evaluate two properties: watermark detectability and generation quality. **Watermark detectability.** We compute the median watermark detection p-value across generations. Note that the p-values for the KTH watermark are lower bounded by how many samples \( T \) we compute in the reference distribution. Similar to [Kuditipudi et al., (2023)](https://arxiv.org/abs/2306.08497), we use \( T = 10,000 \), so the p-values are lower bounded by 1e-4. To make finer-grained distinctions in watermark strength below this lower bound, we also compute the median test statistic (discussed in §2) to evaluate KTH watermark strength. A lower (more negative) test statistic indicates higher watermark detectability. We also compute the AUROC (area under the receiver operating characteristic curve) for classifying model-generated versus human-generated text using the watermark detection p-values/test statistics. We compute the AUROC using an equal number of model-generated and human-generated texts, all of the same length. **Generation quality.** We use Llama 2 13B to compute the mean perplexity of generations. Lower perplexity tends to indicate higher quality and fluency, but repetitive text also achieves low perplexity. So, to evaluate repetitiveness, we compute the mean seq-rep-3 of generations, which is the proportion of duplicate 3-grams in a sequence, given by \[ 1 - \frac{\text{# of unique 3-grams}}{\text{# of 3-grams}} \] [Welleck et al., (2020)](https://arxiv.org/abs/2004.02984). **Comparisons.** For both watermark distillation methods, for each decoding-based watermarking strategy \( f_w \), we compare the teacher model with \( f_w \) applied (denoted by “Decoding”) against the distilled student model (denoted by “Logit” and “Sampling” for logit-based and sampling-based watermark distillation, respectively). As a baseline for generation quality, we use the base student model with no watermarking or distillation (denoted by “Base student”). ## 5 RESULTS Table 1 contains results for the logit-based and sampling-based watermark distillation experiments. The two watermark distillation methods exhibit similar trends. Both forms of watermark distillation successfully learn higher-distortion watermarks, achieving small p-values and high detectability. In some watermarks, e.g., KGW \( k = 0 \), watermark distillation matches the p-values achieved by decoding-based watermarking. In other watermarks, watermark distillation does not achieve as small watermark detection p-values as decoding-based watermarking, but for higher-distortion watermark hyperparameter settings (smaller \( k \) and larger \( \delta \) for KGW, smaller \( k \) for Aar, and smaller \( s \) for KTH), the p-values are still sufficiently small to enable high detectability, as shown by the high AUROC values. Figure 4 in Appendix A contains empirical CDFs of the distributions of p-values across generations, showing that for higher-distortion watermarks, the majority of generations from the watermark distilled models have small p-values. 
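As a point of reference, the seq-rep-3 statistic reported in Table 1 and defined in §4.3 is simple to compute. A minimal sketch, operating on an already-tokenized sequence (the tokenization granularity here is our assumption, not a detail from the paper):

```python
def seq_rep_3(tokens):
    """Proportion of duplicate 3-grams in a sequence (Welleck et al., 2020):
    1 - (# unique 3-grams) / (# 3-grams)."""
    ngrams = [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

# A highly repetitive sequence scores close to 1; diverse text scores near 0.
assert seq_rep_3(list("abc" * 10)) > 0.8
```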
Within each watermark type, p-values from logit-based and sampling-based distillation are larger for lower-distortion hyperparameter settings, indicating that lower-distortion watermarks are harder to learn. However, these watermarks are still learned to some degree, as the p-values are noticeably smaller than the non-watermarked baseline of 0.5, and the AUROC values are noticeably higher than the non-watermarked baseline of 0.5. In Appendix C, sample complexity experiments show that more training samples and steps lead to smaller p-values for both logit-based and sampling-based distillation, with no sign of convergence. In addition, we find that when we train logit-based watermark distillation on KGW \( k = 2, \delta = 2 \) for five times longer (25,000 steps) on more data, the median p-value decreases from 0.1 to 0.012. This suggests that lower-distortion watermarks are less sample efficient to learn, but still learnable, given sufficient training data and steps. Compared to decoding-based watermarking, watermark distillation does not achieve as optimal a tradeoff between generation quality and detectability. For KGW and KTH, both watermark distillation methods achieve slightly to moderately higher perplexity and similar or larger p-values com- --- 9Here, we are using “distortion” somewhat informally, roughly meaning how much of a difference watermarking causes in terms of generation quality, behavior, etc. | Watermark | p-value (↓) (KTH test statistic (↓)) | AUROC (↑) | Perplexity (↓) | seq-rep-3 (↓) | |-----------|-------------------------------------|------------|----------------|---------------| | | Decoding | Logit | Sampling | Decoding | Logit | Sampling | Decoding | Logit | Sampling | Decoding | Logit | Sampling | | KGW | k = 0, δ = 2 | 6e-16 | 2e-17 | 2e-15 | 1.00 | 1.00 | 1.00 | 17.5 | 17.3 | 20.3 | 0.05 | 0.05 | 0.05 | | | k = 1, δ = 2 | 4e-18 | 7e-09 | 8e-07 | 1.00 | 1.00 | 1.00 | 16.5 | 17.6 | 19.2 | 0.04 | 0.03 | 0.04 | | | k = 2, δ = 2 | 9e-18 | 1e-01 | 1e-01 | 1.00 | 0.80 | 0.74 | 16.8 | 17.7 | 19.8 | 0.03 | 0.02 | 0.03 | | | k = 0, δ = 1 | 5e-04 | 3e-05 | 1e-03 | 0.98 | 0.99 | 0.98 | 13.0 | 12.9 | 15.7 | 0.03 | 0.03 | 0.03 | | | k = 1, δ = 1 | 1e-05 | 7e-03 | 2e-02 | 0.99 | 0.91 | 0.87 | 12.7 | 13.1 | 14.9 | 0.03 | 0.03 | 0.03 | | Aar | k = 2 | 1e-75 | 2e-12 | 3e-17 | 1.00 | 1.00 | 0.98 | 6.5 | 10.8 | 7.7 | 0.34 | 0.11 | 0.34 | | | k = 3 | 5e-73 | 1e-01 | 6e-03 | 1.00 | 0.78 | 0.88 | 9.5 | 11.6 | 10.5 | 0.14 | 0.04 | 0.17 | | | k = 4 | 4e-72 | 4e-01 | 3e-01 | 1.00 | 0.58 | 0.65 | 10.7 | 11.8 | 11.9 | 0.09 | 0.03 | 0.11 | | KTH | s = 1 | 1e-04 | 1e-04 | 1e-04 | 1.00 | 1.00 | 1.00 | 10.5 | 16.5 | 15.1 | 0.03 | 0.04 | 0.03 | | | s = 2 | 1e-04 | 1e-04 | 1e-04 | 1.00 | 0.99 | 0.99 | 10.7 | 16.3 | 13.4 | 0.03 | 0.04 | 0.03 | | | s = 4 | 1e-04 | 1e-03 | 1e-04 | 1.00 | 0.96 | 0.99 | 10.6 | 14.2 | 12.5 | 0.03 | 0.04 | 0.04 | | | s = 256 | 1e-04 | 8e-02 | 1e-04 | 1.00 | 0.85 | 0.97 | 10.8 | 11.3 | 11.3 | 0.03 | 0.04 | 0.04 | | Base student | 5e-01 | 0.50 | 11.8 | 0.03 | Table 1: Results for logit-based and sampling-based watermark distillation experiments. Within each watermark type (KGW, Aar, and KTH), the hyperparameter rows go from higher-distortion to lower-distortion moving down the table. Higher-distortion watermarks are successfully learned with small p-values and high detectability. Lower-distortion watermarks are harder to learn, as shown by the larger p-values, but they are still learnable, just less efficiently and strongly. pared to decoding-based watermarking. 
For Aar, watermark distillation achieves similar or lower seq-rep-3 as decoding-based watermarking, but larger p-values. This suggests that to learn weights-based watermarking, logit-based and sampling-based watermark distillation incur some cost to the tradeoff between generation quality and detectability. While logit-based and sampling-based watermark distillation show similar trends, there are some interesting differences. We defer this discussion to Appendix B due to space constraints. However, recall that logit-based and sampling-based distillation have different requirements (e.g., access to logits and shared tokenizer vs. access to samples and autoregressive generation, see §3.1 and §3.2), so they should not be compared solely on performance. So, logit-based and sampling-based distillation are each suitable and applicable for different settings, so neither is strictly better than the other in all scenarios. Robustness to text edits. We test the robustness of weights-based watermarking to edits by randomly corrupting generated text from the logit-based and sampling-based watermark distilled Llama 2 7B models with varying proportions of tokens randomly edited. See Appendix I for full experimental details. As shown in Figure 2, the detection p-values of all three watermark types are robust to moderate edit proportions, up to around 20–30%. At higher edit proportions, up to around 60–70%, KTH is significantly more robust to edits than KGW and Aar, consistent with the findings of Kuditipudi et al. (2023). Robustness to changes in decoding parameters. Whereas decoding-based watermarking relies on specialized decoding algorithms, weights-based watermarking generates watermarked text naturally under standard decoding algorithms. In Appendix C, we find that weights-based watermarking learned by logit-based and sampling-based distillation is robust to changes in decoding parameters, e.g., different temperatures $t$ and different thresholds $p$ in nucleus sampling (Holtzman et al., 2020). 6 WATERMARKING FOR OPEN MODELS In §5 we showed that weights-based watermarking works under standard decoding algorithms and is robust to changes in decoding parameters. This is a necessary first step towards watermarking Figure 2: Watermark detection p-values of generations from weights-based watermarking, corrupted with varying proportions of tokens randomly edited. The watermarks are robust to moderate amounts of corruption. Figure 3: Watermark detection p-values of generations from logit-based watermark distilled Llama 2 7B models after further fine-tuning on OpenWebText. The models’ weights-based watermarking is removed by fine-tuning. for open models, where users can run inference themselves. They may change the decoding algorithm, and the inference library they use may not enable decoding-based watermarking by default or implement it at all. Robust watermarking for open models should also ideally be robust to fine-tuning, as users have the ability and desire to fine-tune open models. Ideally, this fine-tuning should not remove watermarking capabilities, either intentionally or unintentionally. However, Figure 3 shows that weights-based watermarking obtained from watermark distillation is not robust to further fine-tuning on normal, non-watermarked text (see Appendix K for experimental details). We leave addressing this challenge and learning weights-based watermarking that is robust to fine-tuning to future work. 
However, weights-based watermarking also has potential use cases that do not require robustness to further fine-tuning. For example, weights-based watermarking could be used for watermarking open models which are unlikely to be fine-tuned further by users, such as RLHF-trained instruction-following chat models. In addition, weights-based watermarking simplifies decoding compared to decoding-based watermarking, as there is no need for an additional specialized decoding algorithm. So, weights-based watermarking can easily be deployed into existing highly optimized infrastructures and inference algorithms, as it just requires loading different model weights. 7 SPOOFING ATTACKS One proposed use case of watermark detection is to attribute the provenance of generated text to a specific model, which could help policy enforcement and auditing of model providers (Abdelnabi & Fritz [2021], Kuditipudi et al. [2023]). However, using watermarking for provenance attribution brings the risk of spoofing attacks: an adversary generates damaging text containing the watermark of a victim model, making it appear as if the victim model generated it, thus hurting the reputation of the victim model (Sadasiyan et al. [2023]). Sampling-based watermark distillation is applicable to the spoofing setting, as it only requires generated samples from the victim/teacher model. In this proof-of-concept experiment, we simulate a spoofing attack using a victim model of Llama 2-Chat 7B with KGW decoding-based watermarking ($k = 1$, $\gamma = 0.25$, $\delta = 2$). Llama 2-Chat 7B is trained for safety and tends to refuse harmful requests (Touvron et al. [2023]). The goal of the spoofing attack is to generate watermarked responses to harmful requests, damaging the victim model’s reputation for safety. We obtain an adversary model by performing sampling-based watermark distillation with Alpaca-7B (Taori et al. [2023]) as the student and the Llama 2-Chat 7B victim model as the teacher. We query the victim model for watermarked samples, filter out refusals, then fine-tune the adversary model on those samples. See Appendix L.1 for full experimental details. We evaluate model harmfulness using the HarmfulQ benchmark of toxic questions (Shaikh et al. [2023]). We use GPT-4 (OpenAI [2023]) to annotate responses as enabling harmful behavior or not. See Appendix L.2 for full evaluation details. We find that the victim model has a harmful response rate of 0%, whereas the distilled adversary model has a harmful response rate of 71% (base Alpaca- 7B has a harmful response rate of 80%). Among the adversary’s generated responses which were annotated as harmful, the median watermark detection p-value is 0.002 (with a median generation length of 593 tokens), showing that harmful text generated by the adversary may be wrongly attributed to the victim model. 8 RELATED WORK Post-hoc detection. Many works have studied post-hoc detection of model-generated text, without modifying the generation process itself. Some works train a binary classifier to perform detection (Zellers et al., 2019; Bakhitin et al., 2019; Tan et al., 2020), see Jawahar et al., (2020) for a survey. Other methods are zero-shot, using heuristics and metrics for detection (Gehrmann et al., 2019; Solaiman et al., 2019; Mitchell et al., 2023). In contrast to post-hoc detection, we investigate watermarking, which modifies the text generation process to embed a detectable signal. 
However, post-hoc detection could potentially be used in conjunction with watermarking (Mitchell et al., 2023). Text watermarking. Older works on text watermarking edit pre-existing text to inject signals that can be statistically detected (Rizzo et al., 2019; Abdelnabi & Fritz, 2021; Yang et al., 2022), see Kamaruddin et al., (2018) for a survey. Recently, many works have studied decoding-based watermarking, which modifies decoding procedures to generate new watermarked text (Venugopal et al., 2011; Kirchenbauer et al., 2023a; Aaronson, 2023; Kuditipudi et al., 2023; Zhao et al., 2023a; Christ et al., 2023; Hu et al., 2023; Wu et al., 2023; Huang et al., 2023; Zhao et al., 2024). Various classes of decoding-based watermarking methods have been proposed, e.g., semantic watermarks (Fu et al., 2023; Hou et al., 2023; Liu et al., 2023b; Ren et al., 2023), multi-bit watermarking (Yoo et al., 2023; Wang et al., 2023; Qu et al., 2024; Boroujeny et al., 2024), and public/private key watermarking (Liu et al., 2023a; Faroze et al., 2023). See Liu et al., (2023c) for a survey of text watermarking. Sander et al., (2024) find that it is possible to detect if a model’s training data contained watermarked text. Watermark attacks. Recent works have investigated attacks to remove the watermark from watermarked text, using methods such as paraphrasing, swapping tokens, etc. (Kirchenbauer et al., 2023b; Krishna et al., 2023; Sadasivan et al., 2023; Zhang et al., 2023; Pang et al., 2024; Jovanović et al., 2024). In addition, watermark spoofing attacks are where an adversary produces text that is falsely detected as watermarked and generated by a victim model. Sadasivan et al., (2023) and Jovanović et al., (2024) spoof the KGW watermark by exploiting its green list bias, and Pang et al., (2024) demonstrate spoofing attacks by exploiting watermark robustness and public detection APIs. In our work, we show that sampling-based watermark distillation can enable spoofing attacks. API watermarking for protection against model extraction. Prior works have studied API watermarking for protection against model extraction attacks, where an adversary imitates or reconstructs a victim model by distilling on its API outputs (He et al., 2022a; Zhao et al., 2022; He et al., 2022b; Zhao et al., 2023b). In API watermarking, a watermark signal is injected into the victim’s API outputs, making it possible to detect if a suspect model was distilled from the victim API. In contrast, text watermarking enables detecting whether a given text was model-generated. 9 CONCLUSION In this paper, we investigate the learnability of watermarks for language models. Using logit-based and sampling-based watermark distillation, we find that models can learn to naturally generate watermarked text using standard decoding algorithms, although lower-distortion watermarks are harder and less sample efficient to learn. Our findings address a key technical challenge towards developing watermarking for open models and raise the possibility of watermark spoofing attacks. Future work may explore improving the robustness of weights-based watermarking to further fine-tuning, which would address another important challenge towards robust watermarking for open models. Future work may also more comprehensively study and evaluate spoofing attacks and potential defenses against spoofing attacks, which would have implications for whether watermarks should be used to assign provenance and blame to a specific model. 
\footnote{Among all 200-token slices from each of the harmful responses, the median detection p-value is 0.04.} ETHICS STATEMENT In this paper, we find that sampling-based watermark distillation can potentially be used to carry out harmful watermark spoofing attacks. This may appear to be a potentially harmful insight that weakens watermarking by undermining its ability to identify the provenance of text. However, we believe that public knowledge of spoofing attacks and the limitations of watermarking is important. This way, the public knows not to trust watermarking for reliably attributing provenance or blame to a specific model. Then, if watermark detection is not used to prove that a text was generated by a specific model, spoofing attacks will cause significantly less harm, if any at all. Watermarking can still be used to statistically detect LM-generated text, which can be used for tasks such as finding infractions of policies on language model usage. REPRODUCIBILITY STATEMENT For the main results, we describe our experimental setup in §4, including training details, datasets used, and evaluation procedure. For all other experiments and results, we describe full experimental details in the appendix. The exact sections in the appendix are mentioned in the main paper where relevant. In addition, we release code and scripts to reproduce experiments at https://github.com/chenchenyu/watermark-learnability along with trained model weights. ACKNOWLEDGMENTS We gratefully acknowledge the support of an Open Philanthropy Project Award. Chenchen Gu was supported by a Stanford CURIS Fellowship. Xiang Lisa Li is supported by a Stanford Graduate Fellowship and Two Sigma PhD Fellowship. Tatsunori Hashimoto is supported by a gift from Open Philanthropy and by the Tianqiao and Chrissy Chen Institute. REFERENCES Scott Aaronson. Watermarking of large language models. Large Language Models and Transformers Workshop at Simons Institute for the Theory of Computing, 2023. URL https://www.youtube.com/watch?v=2Kx9jbSMZqA Sahar Abdelnabi and Mario Fritz. Adversarial watermarking transformer: Towards tracing text provenance with data hiding. In 2021 IEEE Symposium on Security and Privacy (SP), pp. 121–140. IEEE, 2021. doi: 10.1109/SP40001.2021.00083. Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. Real or fake? learning to discriminate machine from human generated text. arXiv preprint arXiv:1906.03351, 2019. URL https://arxiv.org/abs/1906.03351 Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, Usvsn Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar Van Der Wal. Pythia: A suite for analyzing large language models across training and scaling. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 2397–2430. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/biderman23a.html Massieh Kordi Boroujeny, Ya Jiang, Kai Zeng, and Brian Mark. Multi-bit distortion-free watermarking for large language models. arXiv preprint arXiv:2402.16578, 2024. URL https://arxiv.org/abs/2402.16578 Miranda Christ, Sam Gunn, and Or Zamir. Undetectable watermarks for language models. arXiv preprint arXiv:2306.09194, 2023. URL https://arxiv.org/abs/2306.09194 Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 
A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association
SFCHv2G33F
Could you please provide more details on the secondary structure prediction (SSP) task? How many classes is this task composed of? Is there a separate prediction head for structure prediction on top of the cryptic pocket prediction head? Why and how is its weight in the objective function chosen to be 1?
Protein Language Models Enable Accurate Cryptic Ligand Binding Pocket Prediction Anonymous authors Paper under double-blind review Abstract Accurate prediction of protein-ligand binding pockets is a critical task in protein functional analysis and small molecule pharmaceutical design. However, the flexible and dynamic nature of proteins conceal an unknown number of potentially invaluable “cryptic” pockets. Current approaches for cryptic pocket discovery rely on molecular dynamics (MD), leading to poor scalability and bias. Even recent ML-based cryptic pocket discovery approaches require large, post-processed MD datasets to train their models. In contrast, this work presents “Efficient Sequence-based cryptic Pocket prediction” (ESP) leveraging advanced Protein Language Models (PLMs), and demonstrates significant improvement in predictive efficacy compared to ML-based cryptic pocket prediction SOTA (ROCAUC 0.93 vs 0.87). ESP achieves detection of cryptic pockets via training on readily available, non-cryptic-pocket-specific data from the PDDBind dataset, rather than costly simulation and post-processing. Further, while SOTA’s predictions often include positive signal broadly distributed over a target structure, ESP produces more spatially-focused predictions which increase downstream utility. 1 Introduction The Transformer architecture (Vaswani et al., 2023) arose in the context of machine translation, and also enables SOTA applications in text-classification (Devlin et al., 2019) and text-generation (Radford et al., 2018). Large Language Models (LLMs) scale the Transformer using more blocks, larger embeddings, and larger datasets to achieve unprecedented performance on a variety of tasks in the zero-shot setting and now exhibit sophisticated knowledge of semantic relationships (Brown et al., 2020). A core hypothesis of structural biology is that structure and function arise from specific sequences of amino acids, in the same way that meaning in natural language arises from specific sequences of words. Since Transformers are agnostic to the meaning of semantic tokens, so-called Protein Language Models (PLMs) were trained using the same Masked Language Model pre-training objective as (Devlin et al., 2019). Rives et al. (2019) found that PLMs can not only learn key differences between amino acids themselves, but also distinguish which proteins have similar structures yet different sequences in the zero-shot setting. Rao et al. (2020) showed that PLMs learn what structural biologists call contact maps in their self-attention coefficients, with a linear relation between perplexity and “contact precision.” Lin et al. (2022) found that scaling a PLM improves performance on the aforementioned, and enables structure prediction competitive with AlphaFold2 (Jumper et al., 2021). These results are an astonishing confirmation of the protein sequence-structure-function hypothesis, and motivate investigating the frontiers of what PLMs can power. For example, Singh et al. (2023) used PLMs and molecular fingerprints to conduct virtual screening that successfully identified sub-nanomolar binders. The method was simple: (1) a protein’s PLM [CLS] embedding and ligand’s molecular fingerprint (Glem et al., 2006) were projected into the same dimensional space using a single linear layer followed by ReLU, (2) cosine similarity was calculated, and (3) the projectors were updated via contrastive learning. 
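As a rough illustration of that setup (our own sketch with illustrative dimensions, not the code of Singh et al.), the two projectors and the cosine-similarity score could be written as:

```python
import torch.nn as nn
import torch.nn.functional as F

class SharedSpaceScorer(nn.Module):
    """Projects a protein's PLM [CLS] embedding and a ligand's molecular
    fingerprint into a shared space and scores the pair by cosine similarity."""
    def __init__(self, protein_dim=1280, fingerprint_dim=2048, shared_dim=256):
        super().__init__()
        self.protein_proj = nn.Sequential(nn.Linear(protein_dim, shared_dim), nn.ReLU())
        self.ligand_proj = nn.Sequential(nn.Linear(fingerprint_dim, shared_dim), nn.ReLU())

    def forward(self, cls_embedding, fingerprint):
        p = self.protein_proj(cls_embedding)
        q = self.ligand_proj(fingerprint)
        return F.cosine_similarity(p, q, dim=-1)  # used as the binding score
```

Binder and non-binder pairs would then supply the positive and negative examples for the contrastive update of the two projectors.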
While this application is interesting, the real significance is that information sufficient for this purpose could be embedded into a single summary token by a PLM. Small molecule pharmaceutical discovery often leverages insight from structural biology data, and PLMs offer a new window into this domain. Specifically, knowledge of where compounds, or “lig- ands,” bind to a protein of interest is of critical importance, providing a starting point for medicinal chemists to design better molecules. This work focuses on the use of PLMs to identify hard-to-find, or “cryptic,” protein pockets from sequence alone, and in particular the following three specific aims: (1) explore the relevance of SOTA PLMs for sequence-based cryptic protein-ligand binding pocket prediction, (2) determine the extent to which multi-task learning with secondary structure prediction (SSP) enhances cryptic pocket prediction, and (3) offer a specific model that redefines SOTA for cryptic protein pocket prediction. Toward these ends we find that: (1) many PLMs enable predictive efficacy beyond previous SOTA cryptic pocket prediction algorithms, and Ankh-Large (Elnaggar et al., [2023]) and ESM-2 15B enable top AUC and APS, respectively, (2) multi-task learning with SSP enhances predictive efficacy for many cases, and (3) our ESP model outperforms the SOTA ML-based cryptic pocket prediction algorithm, PocketMiner (PM) (Meller et al., [2023]), by a significant margin (ROCAUC 0.93 vs 0.87) on its own test set. 2 BACKGROUND 2.1 IMPACT OF CRYPTIC POCKET PREDICTION TO SMALL MOLECULE DRUG DESIGN Cryptic protein-ligand binding pocket prediction is a high impact task because successful predictions can form the basis for novel structure-based small molecule pharmaceutical development programs. “Cryptic” or “non-obvious” pockets are so named because of the difficulty in recognizing such ligandable pockets with conventional tools. Whereas rigid, highly-conserved active sites and other non-cryptic pockets can commonly be identified by a structural biologist in receptor or enzyme protein structures determined via x-ray crystallography, cryptic pockets generally cannot. Instead, they are often discovered accidentally in experimentally solved protein structures in complex with wet-lab screening or fragment screening hits, or through extensive molecular dynamics (MD) simulations using existing structures. Finding a cryptic pocket on a pharmacologically validated protein target can motivate resource allocation to produce a novel, first-in-class medication. New binding sites for validated targets enable development of new chemistry with potential for improvement in efficacy, dosing regimen, and reduction of side-effects. Significant improvement in one or a combination of those three clinical properties can improve the standard of care for patients. Cryptic pocket prediction, and ligand binding pocket prediction in general, also has impact when engaging new targets for the first time. Because new targets may not have any reference compounds known to engage it, virtual screening and de novo molecular generation against a putative pocket may be required to find hits that medicinal chemists can turn into leads and eventually drug candidates. 2.2 NON-CRYPTIC POCKET IDENTIFICATION ALGORITHMS Computational identification of ligand binding sites on protein surfaces is a field with many mature tools, each with their own capabilities and limitations. They evaluate structures at the atomic level, the residue level, or arbitrary grid points in space. 
These methods perform best when there is a cavity on the protein surface that looks similar in volume and chemistry to common, known binding sites. They tend not to elucidate cryptic pockets because they only see snapshots of geometric and chemical properties. Dynamic properties of proteins that might correlate with or imply a propensity for cryptic pockets to form are not taken into account. Two notable examples of methods of this type are LIGSITE (Hendlich et al., [1997]) and fpocket (Le Guilloux et al., [2009]). LIGSITE scans a protein for concave volumes and reports cavities above a minimum size. Fpocket uses Voronoi tessellation to define alpha spheres which are then clustered prior to ranking clusters and scoring pockets. While dated by ML standards, these two algorithms are still relevant in computational chemistry and were used during label generation by the SOTA ML cryptic pocket algorithm discussed below. For a review of non-ML and ML pocket prediction approaches developed over the past three decades see Zhao et al. ([2020]) and Di Palma et al. ([2023]). 2.3 ML-BASED, CRYPTIC POCKET IDENTIFICATION ALGORITHMS Cryptic pocket identification algorithms aim to find hard-to-find areas of a protein where a drug can bind and achieve a disease-modifying effect. Simple geometric calculations to find concave surfaces will not suffice because cryptic pockets in experimentally-solved structures are generally not in a shape that allows a drug to bind. Cryptic pockets tend to form by protein atom movement opening a pocket, or bringing distal parts close enough to form a pocket. Since the flexibility of a protein enables these phenomena, MD is a tool for finding cryptic pockets. Unfortunately, scaling MD for this purpose is time consuming and cost prohibitive. The SOTA ML approach to cryptic pocket prediction is PocketMiner (PM). It uses a protein’s backbone atom coordinates and sequence to predict whether or not each residue is associated with the formation of a cryptic pocket. It does not however use information from SOTA PLMs. Its use case is to feed in protein structures absent any bound ligand, and then predict where cryptic pockets are most likely to form. PM was trained using labels generated by LIGSITE, ipocket, and conditional characterization of MD trajectories. It achieves improved accuracy and orders-of-magnitude improvement in inference compute cost compared to its predecessor CryptoSite (Cimermancic et al., 2016), which works best when MD simulations are executed at inference time. The small number of structures used to generate training data were enough to produce meaningful predictions on structures that are completely different from the training structures. Also, it achieves this with only 736,155 trainable parameters and no PLM. 3 RESULT This work presents three major results: (1) training using samples with less than 30% sequence similarity to validation or test samples, (2) training across all levels of sequence similarity, and (3) projection of predictions onto structures and comparison with SOTA. 3.1 PERFORMANCE AT THE 30% SEQUENCE IDENTITY LIMIT For the first set of results, training samples having greater than 30% sequence identity with any validation or testing sample have been removed. This reduces data leakage in the structure domain, because samples with relatively low sequence identity can still be structurally similar. 
Amino acids can sometimes be changed without altering the overall structure or function of the protein, for example, if the amino acids are very similar or solvent exposed. While structural similarity can occur even when sequence identity is below 30%, results at the 30% threshold are still a meaningful measure of generalization ability. Sensitivity to this threshold will be addressed in the next section. Further detail is provided in the Methods section.

Figure 1 shows the best prediction head and task regime for the model with the highest APS and AUC for each PLM. Ankh-Large enables the best AUC of 0.926, and the best AUC for all prediction head classes except MLP. ESM-2 15B enables the best APS of 0.865, and the best MLP in both APS and AUC. ESM-2 15B also enables the best APS for MHA without \([\text{CLS}]\) tokens. MHA using \([\text{CLS}]\) tokens is the prediction head most common in Figure 1a, and PDBBind-label-only the most common training regime. MLP was the top prediction head for ESM-2 15B in terms of AUC, and for ProtT5-XL on both APS and AUC. Multi-task training using SSP resulted in top models for half of the top MLPs, and was less common for either MHA architecture.

SOTA, PocketMiner, achieves 0.81 APS and 0.87 AUC on the test set. Ankh-Large, ESM-2 15B, ESM-2 3B, and ProtT5-XL all produced prediction heads of each class (MLP, MHA with \([\text{CLS}]\), and MHA without \([\text{CLS}]\)) that outperformed SOTA in either or both of APS and AUC. No prediction head atop ProtBert outperformed SOTA on either APS or AUC.

Ankh-Large has an embedding size of 1536, compared to 5120 and 2560 for ESM-2 15B and ESM-2 3B, respectively. Both ProtT5-XL and ProtBert have an embedding size of 1024. The benefit of the higher APS achievable via ESM-2 15B is offset by its significantly higher cost in terms of calculating the embeddings and training the prediction head. When a high-AUC predictor is more appropriate, Ankh-Large vastly outperforms both ESM-2 variants once computational cost is taken into account. If computational cost is the highest priority, users may wish to use ProtT5-XL embeddings with MLP prediction heads.

Figure 1: Figures (a)-(d) show APS and AUC for the best prediction head for each PLM overall, and for each prediction head class. Text above each bar indicates the best-in-class prediction head. The integers indicate the width of the MLP or the number of attention heads of the MHA. PDB or SSP indicates the single- or multi-task setting, respectively.

3.1.1 Ankh-Large at the 30% Sequence Identity Limit

Performance metrics for ESP using Ankh-Large are summarized in Table 1. Table elements in bold indicate the best performance on either APS or AUC within a class of prediction heads. The prediction head classes are: LR/MLP, MHA without \([\text{CLS}]\) tokens, and MHA using \([\text{CLS}]\) tokens. Ankh-Large achieves the best performance of all the PLMs under test in terms of AUC, and the second best in terms of APS. All Ankh-Large prediction heads outperform SOTA for AUC, and many outperform SOTA for APS. Of the Ankh-Large MHA results, the best came from those trained on the single task of predicting the PDBBind labels, and MHAs using 4 attention heads tended to underperform. Multi-task training with SSP achieved the best APS for MHA without \([\text{CLS}]\) tokens. LR achieved APS and AUC of 0.805 and 0.889, respectively. This too outperforms SOTA on AUC.
The high performance of LR suggests that the embeddings produced by Ankh-Large have a meaningful degree of linear correlation to the PDBBind-ligand-derived labels. Single-layer MLPs of various widths struggled to make meaningful improvement beyond this baseline, with the best-performing MLP having only 64 nodes (its advantage is apparent only with more significant figures).

Table 1: APS and ROCAUC results across architectures for ESP using Ankh-Large embeddings. Training samples with greater than 30% sequence identity with any member of the validation or testing sets have been removed.

| Architecture | PDBBind only, APS | PDBBind only, AUC | w/ SSP, APS | w/ SSP, AUC |
|---|---|---|---|---|
| PocketMiner (PM) | 0.81 | 0.87 | N/A | N/A |
| LR | 0.805 | 0.889 | 0.805 | 0.889 |
| MLP 16 | 0.805 | 0.890 | 0.801 | 0.890 |
| MLP 64 | **0.808** | **0.890** | 0.807 | 0.890 |
| MLP 256 | 0.806 | 0.890 | 0.807 | 0.890 |
| MLP 1024 | 0.808 | 0.890 | 0.804 | 0.889 |
| MHA 4 no CLS | 0.845 | 0.911 | 0.755 | 0.891 |
| MHA 8 no CLS | 0.852 | **0.916** | **0.853** | 0.911 |
| MHA 16 no CLS | 0.820 | 0.897 | 0.841 | 0.908 |
| MHA 4 | 0.802 | 0.906 | 0.832 | 0.907 |
| MHA 8 | 0.821 | 0.908 | 0.849 | 0.897 |
| MHA 16 | **0.854** | **0.926** | 0.840 | 0.902 |

### 3.2 Performance as a Function of Sequence Identity Limit

Figure 2 shows the best prediction head and task regime for the model with the highest APS and AUC for each sequence identity threshold. Ankh-Large was used for all models in this subsection. Figure 2a shows that 7 of the 12 best models were MHA using \([\text{CLS}]\) tokens, whereas the remainder were MHA without use of \([\text{CLS}]\) tokens. 4 of the 12 best models were trained in the multi-task setting using both PDBBind ligand-derived labels and SSP labels. The best-performing models were fairly consistent in terms of AUC until the 100% sequence identity threshold was used, when AUC rose above 0.95. APS was less consistent, but trended upward overall with increased sequence identity threshold, achieving an APS of nearly 0.90 at the 100% sequence identity threshold.

Figure 2b shows performance as a function of sequence identity limit for LR/MLP prediction heads. The upward trend in performance as the sequence identity threshold is increased is smooth yet slight for AUC, and again more varied but overall upward for APS. The relatively flat performance curve until the 100% sequence identity level is consistent with expectations of a well-generalized model. One potential confounder arises when multiple structures with different labels for the same residues enter the training set as the sequence identity limit increases. Detection, analysis, and mitigation of this possibility is left for future work.

### 3.3 Inference on Test Set Examples

The PM test set offers two types of samples. The first has only positive labels and "unknown" labels, and the second has only negative labels and "unknown" labels. Positive labels indicate residues known to be associated with cryptic pocket formation, whereas negative labels indicate residues known to not be associated with cryptic pocket formation. "Unknown" labels indicate residues where cryptic pocket formation is neither known to occur nor known not to occur, and are masked when calculating APS and AUC.

#### 3.3.1 E. coli Outer Membrane Transporter FecA (1KMO)

Figure 3 shows inference results and labels for E. coli outer membrane transporter FecA (RCSB PDB ID 1KMO).
The ESP inference results for 1KMO, Figure 3a, show positive signal focused in the area of the cryptic pocket positive labels shown in Figure 3c, which are the PM test set labels for this protein. Outside the area of the cryptic pocket labels, ESP predicts that no cryptic pockets are present. This prediction is easy to interpret, and phenomenologically correct in the sense that the known pocket area stands out as such.

Figure 2: Figures 2(a)-(d) show APS and AUC for the best prediction head atop Ankh-Large overall, and for each prediction head class. (Text above each bar as above.)

The PM inference results for 1KMO, Figure 3b, also show positive signal in the area of the cryptic pocket positive labels. However, the PM inference result shows significant positive signal in many other places on the protein. While some of these positive signals in the unknown region may reveal new cryptic pockets, there are so many that it is difficult to motivate any particular starting point for drug design programs. There may also be many false positives. Distillation of this result into actionable insights is therefore not as straightforward as with ESP.

Figure 3: Figure 3a shows ESP inference results, blue being negative prediction and red being positive. Figure 3b shows PM inference results using the same color scale. Figure 3c shows the binary PM test labels, where red indicates residues known to be associated with cryptic pocket formation, and green indicates unknown status and is masked during APS and AUC calculation.

3.3.2 Bovine Trypsin (1BTP)

Figure 4 shows inference results and labels for bovine trypsin (RCSB PDB ID 1BTP). The ESP inference results for 1BTP, Figure 4a, show negative signal focused in the area of the non-pocket (negative) labels shown in Figure 4c, which are the PM test set labels for this protein. ESP predicts that no cryptic pockets are present in the area of the non-pocket, negative labels. This prediction is also easy to interpret, and phenomenologically correct in the sense that the known non-pocket areas stand out as such.

The PM inference results for 1BTP, Figure 4b, show positive signal broadly distributed across many residues with non-pocket negative labels. Whereas in the above example there were many positive predictions in an unknown area, here many positive predictions are in known non-pocket regions and are therefore clear false positives. Distillation of this PM inference result into actionable insights is therefore not possible because of false positives.

(a) 1BTP ESP inference. (b) 1BTP PM inference. (c) 1BTP PM labels.

Figure 4: Figure 4a shows ESP inference results (colors as above). Figure 4b shows PM inference results (colors as above). Figure 4c shows the binary PM test labels, where blue indicates residues known to not be associated with cryptic pocket formation, and green as above.

3.4 Results Summary

Using PDBBind-derived samples with less than 30% sequence identity with validation or test set samples, ESP with Ankh-Large achieves APS and AUC of 0.85 and 0.93, respectively. Using the same sequence identity threshold, ESP with ESM-2 15B achieves the best APS of all PLMs tested (0.86), but at considerably higher computational cost due to the size of the PLM itself and its embedding size of 5120, compared to Ankh's embedding size of 1536. MHA prediction heads tend to outperform others, except for ProtT5-XL which favors MLP. Multi-task training using the ESM-2 SSP dataset produced the best model in many cases, but not the majority.
SOTA, PocketMiner, achieves 0.81 and 0.87 APS and AUC on its test set. The PM test set was also used for all ESP APS and AUC calculations. All PLMs except ProtBert enabled models that outperformed SOTA on one or both of APS and AUC. All Ankh-Large enabled models achieved the same. Figure 5 shows ROC and precision-recall (PR) curves for the best Ankh-Large models trained with 30% and 100% sequence identity thresholds and PocketMiner, as evaluated using the PocketMiner test set. Inference via ESP tends to produce positive signal in a more focused and spatially-localized manner, whereas inference via PM tends to produce positive signal broadly and smoothly distributed over many areas of a protein. While inference via ESP may miss some cryptic pockets, PM inference may not be favored for pharmaceutical discovery due to the quantity and distribution of positive predictions and non-trivial incidence of false positives. 4 Conclusion We find that: (1) many PLMs enable ESP predictive efficacy beyond previous SOTA cryptic pocket prediction algorithms, and Ankh-Large and ESM-2 15B enable top AUC and APS, respectively, (2) that multi-task learning using the ESM-2 SSP dataset enhances predictive efficacy for many cases, Figure 5: ROC and PR comparison between PocketMiner, ESP with Ankh-Large and 30% sequence identity threshold, ESP with Ankh-Large and 100% sequence identity threshold. and (3) that ESP outperforms PM by a significant margin (ROCAUC 0.93 vs 0.87) on its own test set. Because mere logistic regression also achieved a meaningful AUC, this result shows that Ankh-Large learned residue-level information from unsupervised training that linearly correlates with cryptic pocket formation propensity. A small molecule pharmaceutical development program invests significant capital in exactly one target. Therefore prediction clarity and ease of interpretation are essential. Initiating drug development against a false positive cryptic pocket prediction would be very costly in terms of capital, time, and leadership bandwidth. False negative predictions are less costly in direct terms. Application of ESP at scale is therefore more valuable to small molecule drug developers than SOTA. Future work can explore alternate labeling schema, including representations for multiple ligands for similar structures and using a residue’s minimum distance to the nearest ligand atom to map labels into a continuous range. A PLM ensemble approach that combines embeddings from multiple PLMs and optionally downsamples may be worth exploring. It may also be useful to evaluate the efficacy of other PLMs in powering this application and explore potential for transfer learning between additional protein- and residue-level prediction tasks. Perhaps most significantly, future work may attempt to combine PLM- and MD-based approaches to achieve results outperforming either individual approach. 5 METHODS 5.1 DATASETS Three datasets form the foundation for this work: (1) PDBBind (Su et al., 2019), (2) the ESM-2 SSP dataset, and (3) the PM validation and test sets. Training is conducted using the sequences and labels: (1) derived from the PDBBind dataset, and (2) directly from the ESM-2 SSP dataset. The PM validation and test sets serve those functions herein. Since significant curation effort has been invested in the PDBBind dataset, we use it without further curation except for omission of proteins with synthetic residues. This results in a dataset of 17,986 complexes. 
Each protein’s amino acid sequence is extracted from its structure file. The subject of missing residues is left to future work, rather we wish to test if PLMs can enable bypassing this step and still achieve useful results. We assign positive labels to any residue containing at least one atom within 6 Å of any ligand atom (Eguida & Rognan, 2022). We assign negative labels everywhere else. The average sequence length is 292, total number of positive labels is 495,482, and total number of labels is 5,254,922. The ESM-2 SSP dataset is used without modification, however since it is significantly larger than the PDBBind dataset we only use the 12,026 samples obtained using the “cv_partition=0” and “split=train” options. The PM validation and testing data are used in the same way as by the PM authors in their work. Residues assigned an “unknown” or “unclassified” label are masked during loss calculation. This means that negative labels are only from rigid structures where the PM authors are confident no pocket can form, and positive labels are only associated with close proximity to ligands in resolved protein/ligand complexes. The validation and testing sets have 436 and 563 positive and 375 and 1,283 negative labels, respectively. Data leakage is possible when identical sequences are present in the training set and either the validation or testing set. Because the PDB IDs are known for the PM validation and test sets, the authors used the RCSB APIs [Rose et al., 2021] to identify sets of structures in the PDBBind dataset that are within arbitrary sequence identity thresholds to the PM validation and test structures. Sequence identity thresholds of 100%, 95%, 90%, 70%, 50%, and 30% were used. At training time, the structures within the desired identity threshold to the validation and test structures are removed from the training set. In addition to data leakage prevention, training using different sequence identity thresholds offers insight into generalizability of ESP and dependence of generalizability on the specific PLM used. The number of structures from the RCSB PDB database [Berman et al., 2000], PDBBind dataset, and PBDBind structures removed from the training dataset at different levels of sequence identity are reported in Table 6 (see Appendix). 5.2 ARCHITECTURE Protein sequences extracted from PDBBind are input without modification into several PLMs: Ankh-Large, ESM-2 15B, ESM-2 3B, ProtT5-XL, and ProtBert. Fine-tuning is not executed; embeddings are calculated and stored, and then used to train a prediction head using the PDBBind dataset and optionally the ESM-2 SSP dataset. For ProtT5-XL, the average embedding is used as a pseudo-[CLS] token, as suggested by [Ni et al., 2021]. PLM embeddings are then input into following prediction heads: Logistic regression (LR), multi-layer perceptron (MLP) with one hidden-layer, a single layer of multi-headed-attention (MHA) not using the PLM’s output [CLS] embeddings, and a single layer of multi-headed-attention (MHA) using the PLM’s output [CLS] embeddings. The number of learnable parameters per prediction head for each PLM in the single-task setting is presented in Table 7 (see Appendix). The output of the prediction head is a residue-level cryptic pocket score in the single-task setting, and also SSP class likelihood in the multi-task setting. 5.3 TRAINING PROCESS Training is executed using the PDBBind input data and labels derived via proximity to the ligand using binary cross entropy loss. 
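A minimal sketch of this objective (our own illustration with hypothetical helper names, not the released training code) combines the 6 Å proximity labels described above with per-residue masking, which is all ones for PDBBind-derived training labels and zero for "unknown" residues in the PM-style labels:

```python
import torch
import torch.nn.functional as F

def ligand_proximity_labels(residue_atom_coords, ligand_coords, cutoff=6.0):
    """PDBBind-derived labels: a residue is positive if any of its atoms lies
    within `cutoff` angstroms of any ligand atom, negative otherwise.
    residue_atom_coords: list of (n_atoms_i, 3) tensors; ligand_coords: (n_lig, 3)."""
    labels = [float(torch.cdist(atoms, ligand_coords).min() < cutoff)
              for atoms in residue_atom_coords]
    return torch.tensor(labels)

def masked_residue_bce(residue_logits, labels, known_mask):
    """Binary cross entropy over per-residue cryptic-pocket scores; residues with
    known_mask == 0 ('unknown'/'unclassified') contribute nothing to the loss."""
    per_residue = F.binary_cross_entropy_with_logits(residue_logits, labels,
                                                     reduction="none")
    return (per_residue * known_mask).sum() / known_mask.sum().clamp(min=1.0)
```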
When training concurrently with the SSP data, loss for SSP is evaluated via cross entropy loss. The total loss is a sum of the two, and a coefficient of SSP loss is used to adjust the relative significance of each loss. For results presented here, the SSP loss coefficient used is 1.0. SGD has been used with no weight decay and a momentum value of 0.9. Results for each specific PLM, prediction head, and task configuration are reported for the best model from 7 trials. Each trial is limited to a maximum of 40 epochs of training. We define a Figure of Merit (FOM) using the validation APS and AUC as shown in Eq(1). Several early stopping criteria have been implemented: detection of any NaN FOM, identical FOM for two consecutive epochs, or FOM increasing beyond 105% of an individual trial’s lowest FOM. This strategy was chosen for convenience, since occurrence of any of these conditions tended not to lead to meaningful results. \[ FOM = 2 - APS - AUC \] ACKNOWLEDGMENTS The authors would like to thank Greg Bowman, Artur Meller, and the other authors of the PocketMiner manuscript, for their excellent work on the topic of cryptic pocket prediction and open-sourcing of their dataset and code. Protein structure visualizations are generated using PyMOL (Schrodinger, LLC, 2015). REFERENCES Helen M. Berman, John Westbrook, Zukang Feng, Gary Gilliland, T. N. Bhat, Helge Weissig, Ilya N. Shindyalov, and Philip E. Bourne. The Protein Data Bank. *Nucleic Acids Research*, 28(1):235–242, 01 2000. ISSN 0305-1048. doi: 10.1093/nar/28.1.235. URL https://doi.org/10.1093/nar/28.1.235. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. Peter Cimermancic, Patrick Weinkam, T. Justin Rettenmaier, Leon Bichmann, Daniel A. Keedy, Rachel A. Woldeyes, Dina Schneidman-Duhovny, Omar N. Demerdash, Julie C. Mitchell, James A. Wells, James S. Fraser, and Andrej Sali. Cryptosite: Expanding the druggable proteome by characterization and prediction of cryptic binding sites. *Journal of Molecular Biology*, 428(4):709–719, 2016. doi: https://doi.org/10.1016/j.jmb.2016.01.029. URL https://www.sciencedirect.com/science/article/pii/S0022283616000851. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019. Francesco Di Palma, Carlo Abate, Sergio Decherchi, and Andrea Cavalli. Ligandability and drugability assessment via machine learning. *WIREs Computational Molecular Science*, n/a(n/a):e1676, 2023. doi: https://doi.org/10.1002/wcms.1676. URL https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/wcms.1676. Merveille Eguida and Didier Rognan. Estimating the similarity between protein pockets. *Int J Mol Sci.*, 23(20), Oct 2022. ISSN 1422-0067 (Electronic); 1422-0067 (Linking). doi: 10.3390/ijms232012462. Ahmed Elnaggar, Hazem Essam, Wafaa Salah-Eldin, Walid Moustafa, Mohamed Elkerdawy, Charlotte Rochereau, and Burkhard Rost. Ankh: Optimized protein language model unlocks general-purpose modelling, 2023. 
Robert C Glern, Andreas Bender, Catrin H Arnby, Lars Carlsson, Scott Boyer, and James Smith. Circular fingerprints: flexible molecular descriptors with applications from physical chemistry to adme. *IDrugs*, 9(3):199–204, Mar 2006. ISSN 1369-7056 (Print); 1369-7056 (Linking). Manfred Hendlich, Friedrich Rippmann, and Gerhard Barnickel. Ligsite: automatic and efficient detection of potential small molecule-binding sites in proteins. *Journal of Molecular Graphics and Modelling*, 15(6):359–363, 1997. doi: https://doi.org/10.1016/S1093-3263(98)00002-3. URL https://www.sciencedirect.com/science/article/pii/S1093326398000023. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with alphafold. *Nature*, 596(7873):583–589, 2021. doi: 10.1038/s41586-021-03819-2. URL https://doi.org/10.1038/s41586-021-03819-2.
QuIiLSktO4
For the synthetic noise a log-normal distribution is used. I believe you meant the parameters for this distribution are $\mu = 0$ and $\sigma \in [0,50]$. The mean of a log-normal distribution cannot be 0...
ALGORITHMS FOR CACHING AND MTS WITH REDUCED NUMBER OF PREDICTIONS∗ Karim Abdel Sadek University of Amsterdam† karim.abdel.sadek@student.uva.nl Marek Eliáš Department of Computing Sciences Bocconi University marek.elias@unibocconi.it ABSTRACT ML-augmented algorithms utilize predictions to achieve performance beyond their worst-case bounds. Producing these predictions might be a costly operation – this motivated [Im et al., 2022] to introduce the study of algorithms which use predictions parsimoniously. We design parsimonious algorithms for caching and MTS with action predictions, proposed by [Antoniadis et al., 2023], focusing on the parameters of consistency (performance with perfect predictions) and smoothness (dependence of their performance on the prediction error). Our algorithm for caching is 1-consistent, robust, and its smoothness deteriorates with the decreasing number of available predictions. We propose an algorithm for general MTS whose consistency and smoothness both scale linearly with the decreasing number of predictions. Without the restriction on the number of available predictions, both algorithms match the earlier guarantees achieved by [Antoniadis et al., 2023]. 1 INTRODUCTION Caching, introduced by [Sleator and Tarjan, 1985], is a fundamental problem in online computation important both in theory and practice. Here, we have a fast memory (cache) which can contain up to \( k \) different pages and we receive a sequence of requests to pages in an online manner. Whenever a page is requested, it needs to be loaded in the cache. Therefore, if the requested page is already in the cache, it can be accessed at no cost. Otherwise, we suffer a page fault: we have to evict one page from the cache and load the requested page in its place. The page to evict is to be chosen without knowledge of the future requests and our target is to minimize the total number of page faults. Caching is a special case of Metrical Task Systems introduced by [Borodin et al., 1992] as a generalization of many fundamental online problems. In the beginning, we are given a metric space \( M \) of states which can be interpreted as actions or configurations of some system. We start at a predefined state \( x_0 \in M \). At time steps \( t = 1, 2, \ldots \), we receive a cost function \( \ell_t : M \rightarrow \mathbb{R}^+ \cup \{0, +\infty\} \) and we need to make a decision: either to stay at \( x_{t-1} \) and pay a cost \( \ell_t(x_{t-1}) \), or to move to another, possibly cheaper state \( x_t \) and pay \( \ell_t(x_t) + d(x_{t-1}, x_t) \), where the distance \( d(x_{t-1}, x_t) \) represents the transition cost between states \( x_{t-1} \) and \( x_t \). The online nature of both caching and MTS forces an algorithm to make decisions without knowledge of the future which leads to very suboptimal results in the worst case ([Borodin et al., 1992; Sleator and Tarjan, 1985]. A recently emerging field of learning-augmented algorithms, introduced in seminal papers by [Kraska et al., 2018] and [Lykouris and Vassilvitskii, 2021], investigates approaches to improve the performance of algorithms using predictions, possibly generated by some ML model. In general, no guarantee on the accuracy of these predictions is assumed. Therefore, the performance of learning-augmented algorithms is usually evaluated using the following three parameters: Consistency. Performance with perfect predictions, preferably close to optimum. Robustness. 
Performance with very bad predictions, preferably no worse than what is achievable by known algorithms which do not utilize predictions. ∗Full version of this paper can be found in Appendix and at https://arxiv.org/abs/2404.06280 †The presentation of this paper was financially supported by the Amsterdam ELLIS Unit and Qualcomm. Work completed while Abdel Sadek was in his final year of BSc at Bocconi University Smoothness. Algorithm’s performance should deteriorate smoothly with increasing prediction error between the consistency and robustness bound. These three parameters express a desire to design algorithms that work very well when receiving reasonably accurate predictions most of the time and, in the rest of the cases, still satisfy state-of-the-art worst-case guarantees. See the survey by Mitzenmacher and Vassilvitskii (2020) for more information. Producing predictions is often a computationally intensive task, therefore it is interesting to understand the interplay between the number of available predictions and the achievable performance. In their inspiring work, Im et al. (2022) initiated the study of learning-augmented algorithms which use the predictions parsimoniously. In their work, they study caching with next-arrival-time predictions introduced by Lykouris and Vassilvitskii (2021). Their algorithm uses $O(b \log_{b+1} k)$ OPT predictions, where OPT is the number of page faults incurred by the offline optimal solution and $b \in \mathbb{N}$ is a parameter. It achieves smoothness linear in the prediction error. It satisfies tight consistency bounds: with perfect predictions, it incurs at most $O(\log_{b+1} k)$ OPT page faults and no algorithm can do better. In other words, it achieves a constant competitive ratio with unrestricted access to predictions ($b = k$) and, with $b$ a small constant, its competitive ratio deteriorates to $O(\log k)$ which is comparable to the best competitive ratio achievable without predictions. One of their open questions is whether a similar result could be proved for MTS. In this paper, we study parsimonious algorithms for MTS working with action predictions which were introduced by Antoniadis et al. (2023). Here, each prediction describes the state of an optimal algorithm at the given time step and its error is defined as the distance from the actual state of the optimal algorithm. The total prediction error is the sum of errors of the individual predictions. In the case of caching, action predictions have a very concise representation, see Section 2.7. Unlike next-arrival-time predictions, action predictions can be used for any MTS. Using the method of Blum and Burch (2000), it is easy to achieve near-optimal robustness for any MTS losing only a factor $(1 + \epsilon)$ in consistency and smoothness. Therefore, we study how the reduced number of predictions affects the consistency and smoothness parameters. We consider the following two regimes. Bounded number of predictions: The algorithm can request a prediction whenever it prefers as far as the total number of requested predictions is bounded by $b$ OPT, where $b$ is a parameter. This regime is similar to Im et al. (2022). Well-separated queries to the predictor: The queries to the predictor need to be separated by at least $a$ time steps, for some parameter $a$. This captures the situation when producing each prediction takes more than one time step. 
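As a concrete illustration of the caching cost model introduced above (a minimal sketch, not an implementation used in this paper; the request sequence and cache size are made-up examples), one can count page faults for an arbitrary eviction rule and compare against Belady's offline rule of evicting the page requested furthest in the future.
```
from typing import Callable, List, Set

def count_page_faults(requests: List[int], k: int,
                      evict: Callable[[Set[int], List[int], int], int]) -> int:
    """Simulate a cache of size k; `evict` picks the page to drop on a fault."""
    cache: Set[int] = set()
    faults = 0
    for t, page in enumerate(requests):
        if page in cache:
            continue                      # hit: no cost
        faults += 1                       # page fault
        if len(cache) >= k:
            cache.remove(evict(cache, requests, t))
        cache.add(page)
    return faults

def belady_evict(cache: Set[int], requests: List[int], t: int) -> int:
    """Evict the cached page whose next request is furthest in the future."""
    def next_use(p: int):
        for i in range(t + 1, len(requests)):
            if requests[i] == p:
                return i
        return float("inf")               # never requested again
    return max(cache, key=next_use)

def lru_evict(cache: Set[int], requests: List[int], t: int) -> int:
    """Evict the least recently used cached page (a classical online rule)."""
    last_use = {p: max(i for i in range(t) if requests[i] == p) for p in cache}
    return min(cache, key=last_use.get)

if __name__ == "__main__":
    reqs = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3]   # made-up request sequence
    print("OPT (Belady):", count_page_faults(reqs, 3, belady_evict))
    print("LRU:         ", count_page_faults(reqs, 3, lru_evict))
```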
1.1 Our results We evaluate the algorithm’s performance using competitive ratio which is, roughly speaking, the worst-case ratio between the cost incurred by the algorithm and the cost of the offline optimum, see Section 2 for a formal definition. We say that an algorithm achieves consistency $\alpha$ and robustness $\beta$ if its competitive ratio is at most $\alpha$ when provided with perfect predictions and at most $\beta$ with arbitrarily bad predictions. For a given function $g$, we call an algorithm $g(\eta)$-smooth if its competitive ratio is at most $g(\eta)$ whenever provided with predictions with the total error at most $\eta$. Our first contribution is an algorithm for caching which receives action predictions describing the states of the optimal offline algorithm Belady proposed by Belady (1966). High quality such predictor based on imitation learning was already designed by Liu et al. (2020). Its empirical evaluation within existing algorithms designed for action predictions was performed by Chiedowski et al. (2021). **Theorem 1.1.** Let $f$ be an increasing convex function such that $f(0) = 0$ and $f(i) \leq 2^i - 1$ for each $i \geq 0$. There is an algorithm for caching requiring $O(f(\log k))$ OPT predictions which achieves consistency 1, robustness $O(\log k)$, and smoothness $O(f^{-1}(\eta/OPT))$, where $\eta$ denotes the total prediction error with respect to Belady and OPT is the number of page faults of Belady. In fact, the number of required predictions is slightly smaller than what is stated in the theorem. Table 1 shows numbers of predictions and achieved smoothness for some natural choices of $f$. Already with $O(\sqrt{k})$ OPT predictions, our bounds are comparable to Antoniadis et al. (2023), whose algorithm asks for a prediction in every step, its consistency is constant and its smoothness is logarithmic in... The algorithm also works with \( f(i) = 0 \). In that case, it asks for at most \( 2 \text{OPT} \) predictions and still remains 1-consistent. However, its smoothness is not very good. We use sliding marking phases and a careful distribution of queries of the predictor over the time horizon. This allows us to avoid dealing with so called "ancient" pages considered by [Rohatgi (2020)] and [Antoniadis et al. (2023)], resulting in an algorithm with better consistency and a simpler analysis. We discuss tightness of our bounds in Section 7 in the full version of this paper (see Appendix). We show that with, for example, only \( 0.5 \text{OPT} \) available predictions, no algorithm can be better than \( O(\log k) \)-competitive – a guarantee comparable to the best classical online algorithms without predictions. We also show that the number of predictions used by our algorithm is close to optimal. **Theorem 1.2.** Let \( f \) be an increasing function. Any \( f(\eta) \)-smooth algorithm for caching with action predictions, i.e., an algorithm whose competitive ratio with predictions of error \( \eta \) is \( f^{-1}(\eta) \) for any \( \eta > 0 \), has to use at least \( f(\ln k) \text{OPT} \) predictions. For general MTS, we cannot bound the number of used predictions as a function of OPT. The reason is that any instance of MTS can be scaled to make OPT arbitrarily small, allowing us to use only very few predictions. We propose an algorithm which queries the predictor once in every \( a \) time steps, making at most \( T/a \) queries in total, where \( T \) denotes the length of the input sequence. 
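As a rough numerical sanity check of these trade-offs (a back-of-the-envelope sketch only, assuming that window $i$ of Robust$_f$, introduced in Section 3, holds about $k/2^i$ arrivals and receives $\min\{f(i)-f(i-1), |W_i|\}$ queries; $k = 1024$ is an arbitrary example), one can tabulate the per-phase query budget for the choices of $f$ discussed here.
```
import math

def query_budget(k: int, f) -> int:
    """Approximate number of predictor queries per phase: window i holds roughly
    k / 2**i arrivals and receives min(f(i) - f(i-1), |W_i|) queries."""
    L = int(math.log2(k))
    total = 0
    for i in range(1, L + 1):
        w_i = max(k // 2**i, 1)
        total += min(f(i) - f(i - 1), w_i)
    return total + 1                      # final window holds a single arrival

k = 1024
for name, f in [("2^i - 1", lambda i: 2**i - 1),
                ("i^2",     lambda i: i * i),
                ("i",       lambda i: i)]:
    print(f"f(i) = {name:8s}: ~{query_budget(k, f)} queries per phase "
          "(cf. O(sqrt(k)), O(log^2 k), O(log k) in Table 1)")
```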
**Theorem 1.3.** There is a deterministic algorithm for any MTS which receives a prediction only once per each \( a \) time steps and its cost is at most \( O(a) \cdot (\text{OFF} + 2\eta) \), where OFF denotes the cost of an arbitrary offline algorithm and \( \eta \) the error of predictions with respect to this algorithm. This is a more general statement than Theorem 1.1, which requires OFF to be Belady. Considering any offline optimal algorithm OFF, Theorem 1.3 implies a smoothness \( O(a) \cdot (1 + 2\eta/\text{OPT}) \) and consistency \( O(a) \). Our algorithm is based on work functions. For \( a = 1 \), its smoothness is \( 1 + 2\eta/\text{OFF} \), see Section 4, which improves upon the smoothness bound of \( 1 + 4\eta/\text{OFF} \) by [Antoniadis et al. (2023)]. It is not robust on its own. However, it can be combined with any online algorithm for the given MTS using the result of [Blum and Burch (2000)] achieving robustness comparable to that algorithm and losing only a factor of \( (1 + \epsilon) \) in smoothness and consistency. No algorithm receiving a prediction only once in \( a \) time steps can be \( o(a) \)-consistent. This follows from the work of [Emek et al. (2009)] on advice complexity, see Section 7 of the full version (in Appendix) for more details. The same can be shown for smoothness by modifying the lower bound construction of [Antoniadis et al. (2023)]. **Theorem 1.4.** There is no \( o(a\eta/\text{OPT}) \)-smooth algorithm for MTS with action predictions which receives predictions only once in \( a \) time steps. We can modify our algorithm for caching to ensure that the moments when the predictions are queried are separated by at least \( a \) time steps, not losing too much of its performance. **Theorem 1.5.** There is an algorithm for caching which receives prediction at most once in \( a \leq k \) time steps and using at most \( O(f(\log k)) \text{OPT} \) predictions in total which is \( O(1) \)-consistent, \( O(\log k) \)-robust and \( O(f^{-1}(a\eta/\text{OPT})) \)-smooth. In Section 5, we provide empirical results suggesting that our algorithm’s performance can be comparable to the performance of algorithms imposing no limitations on their use of predictions. Our algorithm may therefore be useful especially with heavy-weight predictors like [Liu et al. (2020)]. In Section 8 of the full version of this paper (see Appendix), we provide an algorithm for an alternative prediction setup which we call FitF oracle: each prediction says which of the pages in the current algorithms cache will be requested furthest in the future. ### Table 1: Smoothness vs. number of predictions. | \( f(i) \) | \( 2^i - 1 \) | \( i^2 \) | \( i \) | \( 0 \) | |-----------|---------------|----------|--------|-------| | # of predictions | \( O(\sqrt{k}) \text{OPT} \) | \( O(\log^2 k) \text{OPT} \) | \( O(\log k) \text{OPT} \) | \( 2 \text{OPT} \) | | smoothness | \( O(1 + \log(\frac{\beta}{\text{OPT}} + 1)) \) | \( O(\sqrt{2\frac{\beta}{\text{OPT}}}) \) | \( O(\frac{\beta}{\text{OPT}}) \) | \( O(\frac{\beta}{\text{OPT}}) \) | 1.2 Related Work The most related work is by Im et al. (2022), who studied caching with next arrival time predictions. A smaller number of predictions affects the consistency of their algorithm: with $b(\log k / \log b)$ OPT predictions, they achieve consistency $O(\log k / \log b)$ and they show that this is tight. They also show that their algorithm achieves linear smoothness. In contrast, our algorithm is 1-consistent when receiving at least OPT predictions. 
This demonstrates that action predictions, although not containing more bits, seem to contain useful information about the input instance in a more condensed form. See Antoniadis et al. (2023) for comparison and connections between these prediction setups. Drygala et al. (2023) study ski rental and bahncard problems with predictions of a fixed cost. There are several other papers on caching with predictions, including Lykouris and Vassilvitski (2021), Rohatgi (2020), Wei (2020), Emek et al. (2009), Antoniadis et al. (2023), which design algorithms asking for a prediction at each time step. Consistency parameters achieved by these algorithms are constants greater than 1. Note that those using black-box methods to achieve robustness are $(1 + \epsilon)$-consistent (e.g., Wei (2020)). We can explicitly compare our smoothness bounds to Antoniadis et al. (2023) who use the same kind of predictions: their smoothness is $O(1 + \log(\frac{\eta}{\text{OPT}} + 1))$ with unlimited use of predictions while our algorithm achieves the same smoothness bound with $O(\sqrt{k})$ OPT predictions. We compare the smoothness of the other algorithms experimentally in Section 5. Antoniadis et al. (2022) study a prediction setup where each prediction is only a single bit, however their algorithms need to receive it in every time step. Gupta et al. (2022) study several problems including caching in a setting where each prediction is correct with a constant probability. Antoniadis et al. (2023) proposed a 1-consistent and $(1 + 4\eta/\text{OPT})$-smooth algorithm for MTS with action predictions which can be robustified by loosing factor $(1 + \epsilon)$ in consistency and smoothness. Getting smoothness bounds sublinear in $\eta/\text{OPT}$ for specific MTS problems other than caching remains a challenging open problem even with unlimited number of predictions and this holds even for weighted caching. Specific results on weighted caching are by Jiang et al. (2022) who studied it in a setup requiring very verbose predictions and by Bansal et al. (2022) whose bounds depend on the number of weight classes. There is also a consistency/robustness trade-off by Lindermayr et al. (2022) for $k$-server. Since the seminal papers by Kraska et al. (2018) and Lykouris and Vassilvitskii (2021) which initiated the study of learning-augmented algorithms, many computational problems were considered. There are papers on ski rental (Purohit et al., 2018), secretary problem (Dütting et al., 2021), online TSP (Bernardini et al., 2022), energy efficient scheduling (Bamas et al., 2020), flow-time scheduling (Azar et al., 2021; 2022), and online page migration (Indyk et al., 2022). Further related works can be found in References and are discussed in the full version of this paper (see Appendix). 2 Preliminaries Consider an algorithm ALG for MTS which produces a solution $x_0, x_1, \ldots, x_T$ for an instance $I$ consisting of cost functions $\ell_1, \ldots, \ell_T$. We denote $\text{cost(ALG}(I)) = \sum_{t=1}^{T} (\ell_t(x_t) + d(x_{t-1}, x_t))$. We say that ALG is $r$-competitive with respect to an offline algorithm OFF if there is an absolute constant $\alpha \in \mathbb{R}$ such that $\mathbb{E}[\text{cost(ALG}(I))] \leq r \cdot \text{cost(OFF}(I)) + \alpha$ for any instance $I$. If ALG is $r$-competitive with respect to an optimal offline algorithm, we say that ALG is $r$-competitive and call $r$ the competitive ratio of ALG. 
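As a minimal, self-contained sketch (not from the paper; the metric, cost functions, and solutions below are made up), the cost of an MTS solution and an empirical competitive ratio against a reference solution follow directly from the definition above.
```
from typing import Callable, List, Sequence

def mts_cost(states: Sequence[int],
             costs: List[Callable[[int], float]],
             d: Callable[[int, int], float]) -> float:
    """cost(ALG) = sum_t ( l_t(x_t) + d(x_{t-1}, x_t) ); states = [x_0, x_1, ..., x_T]."""
    total = 0.0
    for t, cost_t in enumerate(costs, start=1):
        total += cost_t(states[t]) + d(states[t - 1], states[t])
    return total

def empirical_ratio(alg_states, off_states, costs, d) -> float:
    """Ratio of ALG's cost to OFF's cost on one instance (additive constant ignored)."""
    return mts_cost(alg_states, costs, d) / mts_cost(off_states, costs, d)

if __name__ == "__main__":
    # Toy metric space {0, 1, 2} on a line, with made-up cost functions.
    d = lambda x, y: abs(x - y)
    costs = [lambda x: 0.0 if x == 1 else 2.0,
             lambda x: 0.0 if x == 2 else 2.0,
             lambda x: 0.0 if x == 2 else 2.0]
    alg = [0, 0, 2, 2]   # an online solution that moves late
    off = [0, 1, 2, 2]   # an offline solution with hindsight
    print("cost(ALG) =", mts_cost(alg, costs, d))
    print("cost(OFF) =", mts_cost(off, costs, d))
    print("empirical ratio =", empirical_ratio(alg, off, costs, d))
```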
In the classical setting (without predictions), the best achievable competitive ratios are $\Theta(\log k)$ for caching (Fiat et al., 1991) and of order $\text{poly log } n$ for MTS (Bartal et al., 2006; Bubeck et al., 2019), where $n$ is the number of points in the underlying metric space $M$. We refer to Borodin and El-Yaniv (1998) for a textbook treatment. 2.1 Action Predictions for MTS Antoniadis et al. (2023) proposed a prediction setup which they call action predictions, where the predictions tell us what a good algorithm would do. More precisely, at each time $t$, the algorithm receives a prediction $p_t$ of a state where some offline algorithm OFF moves to at time $t$. The error of prediction $p_t$ is then $\eta_t = d(p_t, o_t)$, where $o_t$ is the real state of OFF at time $t$. The total prediction error is defined as $\eta = \sum_{t=1}^{T} \eta_t$. Considering the case of caching, the state corresponds to a cache content, and the prediction error is the number of pages present in the cache of OFF and absent from the predicted cache content. A whole cache content may seem like a huge piece of information, but action predictions for caching can be implemented in a very succinct way. Antoniadis et al. (2023) explain how to represent them with only $O(\log k)$ bits per time step when they are received at each time step. Our algorithm asks, in each query, a specific number of indices of pages which are present in its cache but absent from the predicted cache. When we talk about a bound on the number of provided predictions, this bound applies both to the number of such queries as well as to the total number of indices reported by the predictor during the running time of the algorithm. There are predictors which can generate predictions of a similar kind by Iain and Lin (2016), Shi et al. (2019), Liu et al. (2020). See Antoniadis et al. (2023) for a detailed treatment of this prediction setup and a comparison to other setups for caching. ### 2.2 Caching: Belady’s Algorithm, Marking, and Lazy Algorithms The classical optimal offline algorithm for caching proposed by Belady (1966) is denoted Belady in this paper. At each page fault, it evicts a page which is going to be requested furthest in the future. In the case of a tie, i.e., if there are several pages in the cache which will not be requested anymore, it chooses one of them arbitrarily. Our caching algorithm assumes that the predictor is trying to simulate Belady. The following useful property allows us to detect errors in the predictions quickly. It was recently used by Eliás et al. (2024). **Observation 2.1.** Consider request sequence $r_1, \ldots, r_T$. For any $t \leq T$, the cost incurred by Belady for $r_1, \ldots, r_T$ until time $t$ is the same as the cost of Belady with input $r_1, \ldots, r_t$. To see this, it is enough to note that the solution produced by Belady with input $r_1, \ldots, r_T$ agrees until time $t$ with the solution produced by Belady on $r_1, \ldots, r_t$ which breaks ties based on the arrival times in $r_{t+1}, \ldots, r_T$. We use properties of marking algorithms in this work. Such algorithms split the input sequence into phases, i.e., maximal subsequences where at most $k$ distinct pages are requested. Usually, the first phase starts in the beginning and the next phase follows just after the end of the previous one. However, we will consider phases starting at arbitrary moments. Let $O$ be the cache content of an algorithm in the beginning of the phase. 
Whenever a page is requested for the first time during the phase, we call this moment an arrival and we mark the page. At the end of the phase, the set $M$ of marked pages will have size $k$: some of them belong to $O$ and are called old while those in $C = M \setminus O$ are called clean. Exactly $|C|$ pages from $O$ remain unmarked until the end of the phase. Marking algorithms is a class of algorithms which never evict a marked page and all of them have cache content $M$ at the end of the phase. Belady is not marking and our algorithm is not marking either, although it uses ideas from marking to achieve desired robustness and smoothness properties. At the end of each phase, we can bound the difference between the cache content of some algorithm and marking. **Observation 2.2.** Let $c$ be the cost incurred by some algorithm during a marking phase. Then, $c \geq |M \setminus S|$, where $S$ is the cache content of the algorithm at the end of the phase and $M$ is the set of $k$ pages requested during the phase. This is because each page in $p \in M$ has to be present in algorithm’s cache when requested during the phase. If $p \notin S$, then the algorithm must have evicted it during the phase incurring cost 1. **Observation 2.3.** If a page $p$ is evicted by Belady at time $t$, then $p$ is not going to be requested in the marking phase containing $t$ anymore. If $p$ is evicted by Belady at time $t$, then the currently requested page $r_t$ and $k - 1$ pages from the cache are $k$ distinct pages that are requested before the moment when $p$ is requested next time. The current phase then needs to end before that moment. We say that an algorithm is lazy if it evicts only one page at a time and only at a page fault. Belady is lazy while our algorithm, as described, may not be. However, any algorithm can be made lazy without increasing its cost. See Borodin and El-Yaniv (1998) for more information about caching. **Observation 2.4.** The difference in the cache content of two lazy algorithms can increase only if both of them have a page fault. In that case, it can increase by at most 1. 3 BOUNDED NUMBER OF PREDICTIONS In this section, we prove Theorem 1.1. We propose an algorithm called F&R which consists of two parts: Follower and Robust. It starts with Follower which is 1-consistent, but lacks in smoothness and robustness. At each page fault, Follower recomputes Belady for the part of the sequence seen so far and checks whether it also has a page fault. If yes, it copies the behavior of the predictor (Line 3). Otherwise, it must have received an incorrect prediction before. Therefore, it switches to Robust (Line 5), which is no more 1-consistent, but achieves desired smoothness and robustness. Robust runs for one marking phase and then returns back to Follower. At such moment, the predictor’s and the algorithm’s cache can be very different and Follower may need to lazily synchronize with the predictor (Line 4). Algorithm 1: Follower ``` 1. $P :=$ initial cache content; // Prediction for time 0 2. foreach pagefault do 3. if $r_t \notin P$ and Belady has a pagefault then query new prediction $P$ and evict any $p \in C \setminus P$; 4. else if $r_t \in P$ then evict arbitrary $p \notin P$; 5. 
else Switch to Robust (Algorithm 2); ``` Algorithm Robust runs during a single marking phase starting at the same moment, splitting it into windows as follows (assuming $k$ is a power of 2): The first window $W_1$ starts at the beginning of the phase and lasts $k/2$ arrivals, i.e., it ends just before the arrival number $k/2 + 1$. $W_i$ follows the $W_{i-1}$ and its length is half of the remaining arrivals in the phase. The last window $W_{\log k+1} = \{k\}$ lasts until the end of the phase. Robust comes with an increasing convex function $f$ such that $f(0) = 0$ and $f(i) \leq 2^i - 1$. Faster growing $f$ does not further improve our smoothness bounds. Function $f$ determines that we should request $f(i) - f(i-1)$ predictions in window $i$. If the window is too small, we ask for prediction at each time step. Robust starts with the cache content of a marking algorithm whose new phase would start at the same moment (Line 1). In the case of a page fault, it evicts an unmarked page chosen uniformly at random. At arrivals belonging to the sets $S$ and $F$, it performs synchronization with the predictor and queries the predictor’s state respectively. The synchronization is always performed with respect to the most recent prediction $P$ which, in the case of lazy (or lazified) predictors, implicitly incorporates information from the previous predictions. Algorithm 2: Robust$_f$ (one phase) ``` 1. Load $k$ distinct most recently requested pages; 2. $S := \{k - 2^j + 1 | j = \log k, \ldots, 0\}$; 3. $W_i := [k - 2^{\log k-i+1} + 1, k - 2^{\log k-i}]$ for $i = 1, \ldots, \log k$ and $W_{\log k+1} = \{k\}$; 4. Choose $F \subseteq \{1, \ldots, k\}$ such that $|F \cap W_i| = \min\{f(i) - f(i-1), |W_i|\}$ for each $i$; 5. foreach pagefault during the phase do 6. if it is arrival belonging to $F$ then ask for new prediction $P$; 7. if it is arrival belonging to $S$ then synchronize with $P$; 8. if requested page is still not in cache then evict random unmarked page; 9. Load all pages marked during the phase; 10. Switch to Follower (Algorithm 1); ``` Synchronization with $P$ (Line 7) works as follows. All pages previously evicted by random evictions return to the cache and the same number of pages not present in $P$ is evicted. We denote $E_i^- = E_i^- \cup E_i^+$ the set of pages evicted at the beginning of $W_i$, where pages in $E_i^-$ are requested during $W_i$ while those in $E_i^+$ are not. Note that algorithm’s and predictor’s cache may not become the same after the synchronization. Since the algorithm starts with pages in $M$ and loads only clean pages, we have the following observation. **Observation 3.1.** Let $C_i$, $|C_i| = c_i$ be the set of clean pages arriving before the start of $W_i$. Then, $E_i \subseteq M \cup C_i$ and $|E_i| = |M \cup C_i| - k = c_i$. We assume that the predictor is lazy and does not load pages that are not requested. Therefore, no page from $E_i^+$ will be loaded during $W_i$ by the predictor and the same holds for Robust, implying the following. **Observation 3.2.** For every $i = 1, \ldots, \log k$, we have $E_i^+ \subseteq E_{i+1}$ and therefore $E_i \setminus E_{i+1} \subseteq E_i^-$. Synchronization with the marking cache performed by Robust is to ensure that the difference between the cache of the algorithm and Belady can be bounded by costs incurred by Belady locally using Observation 2.2 instead of diverging over time solely due to incorrect predictions. **Implementation suggestions.** Algorithms are described as to simplify the analysis. 
Synchronization in Robust (line 7) should be done lazily as to make use of the most recent prediction. At arrivals of clean pages, one may evict a page not present in predictor’s cache instead of a random unmarked page; one can also ask for a fresh prediction (at most 2 OPT additional queries). The second synchronization with the marking cache in Robust (line 9) can be omitted. With $f(i) = 0$, one can query the predictor only at clean arrivals, using at most 2 OPT predictions in total. We recommend a lazy implementation. Since Robust is not 1-consistent, one may also switch from Follower only once Follower’s cost is at least a constant (e.g. 2 or 3) times higher than the cost of Belady. We denote $H_i$ the $i$th phase of Robust$_f$ and $H_i^-$ a hypothetical marking phase which ends just before $H_i$ starts. Note that $H_i^-$ might overlap with $H_{i-1}$. But $H_1, H_2, \ldots$ are disjoint and we denote $G_{i,i+1}$ the time interval between the end of $H_i$ and the beginning of $H_{i+1}$. $c(H_i)$ is the number of clean pages during phase $H_i$. For a given time period $X$, we define $\Delta^A(X), \Delta^B(X),$ and $\Delta^P(X)$ the costs incurred by F&R, Belady, and the predictor respectively during $X$ and $\eta(X)$ the error of predictions received during $X$. Here is the main lemma about the performance of Robust. Overview of its proof is deferred to Section 3.1. **Lemma 3.3.** Denote $X_i = H_{i-1} \cup H_i^- \cup H_i$. During the phase $H_i$, Robust$_f$ receives at most $f(\log k) + 1$ predictions and we have $$\mathbb{E}[\Delta^A(H_i)] \leq O(1)f^{-1}\left(\frac{\eta(H_i)}{\Delta^B(X_i)}\right)\Delta^B(X_i).$$ At the same time, we also have $$\mathbb{E}[\Delta^A(H_i)] \leq O(\log k)\Delta^B(X_i)$$ and $$\Delta^A(H_i) \leq O(k) + O(k)\eta(H_i).$$ The following lemma is useful to analyze the cost incurred during the Follower part of the algorithm. The proof of Theorem 1.1 then combines it with Lemma 3.3 and can be found in the full version of the paper. **Lemma 3.4.** Consider the gap $G_{i,i+1}$ between phases $H_i$ and $H_{i+1}$ of Robust$_f$. We have $$\Delta^A(G_{i,i+1}) \leq \Delta^B(G_{i,i+1}) + \Delta^B(H_i).$$ ### 3.1 Analysis of Robust$_f$ The full version of this section and the proof of Lemma 3.3 can be found in Appendix (Section 3.2), here we include a short overview. Charging a page fault on a page evicted due to predictor’s advice to a single incorrect action prediction can only give us smoothness linear in the prediction error. This is in contrast with next-arrival predictions where algorithms can be analyzed by estimating lengths of eviction chains caused by each incorrect prediction, as proposed by Lykouris and Vassilvitski (2021). To achieve sublinear smoothness, we need to charge each such page fault to a long interval of incorrect predictions. This is the most challenging part of our analysis because Belady also moves and the same prediction incorrect at one time step may be correct at another time step. We estimate the error of predictions received during each window by introducing window rank which bounds the prediction error from below accounting for the movements of Belady. ### 4 Well-separated queries to the predictor The full version of this section, which can be found in Appendix, contains a consistent and smooth algorithm for MTS proving Theorem 1.3 and extends our analysis of F&R to the setting where the queries to the predictor need to be separated by at least $a$ time steps, proving Theorem 1.5. 
In MTS, the cost functions usually do not satisfy any Lipschitz property. Therefore, the difference between the cost of the state reported by the predictor and the state of the optimal algorithm does not need to be proportional to their distance. We show that a state satisfying this property which is close to the predicted state can be found using a classical technique for design of algorithms for MTS called work functions, see (Chrobak and Larmore, 1996) for reference. Then, we use the approach of Emek et al. (2009) to interpolate between predictions received at times $t$ and $t + a$. In the case of caching, the performance of F&R in this regime is the same as if it has received $a$ incorrect predictions for each prediction error. Therefore, $\eta$ in its smoothness bound needs to be multiplied by $a$. 5 EXPERIMENTS We perform an empirical evaluation of our caching algorithm F&R on the same datasets and with the same predictors as the previous works (Lykouris and Vassilvitskii, 2021; Antoniadis et al., 2023; Im et al., 2022). We use the following datasets. - BrightKite dataset (Cho et al., 2011) contains data from a certain social network. We create a separate caching instance from the data of each user, interpreting check-in locations as pages. We use it with cache size $k = 10$ and choose instances corresponding to the first 100 users with the longest check-in sequences requiring at least 50 page faults in the optimal policy. - CitiBike dataset contains data about bike trips in a bike sharing platform CitiBike. We create a caching instance from each month in 2017, interpreting starting stations of the trips as pages, and trimming length of each instance to 25,000. We use it with cache size $k = 100$. Some of the algorithms in our comparison use next-arrival predictions while F&R uses action predictions that can be generated from next-arrival predictions. Therefore, we use predictors which predict the next arrival of the requested page and convert it to action predictions. This process was used and described by Antoniadis et al. (2023) and we use their implementation of the predictors. Our algorithm is then provided limited access to the resulting action predictions while the algorithm of Im et al. (2022) has limited access to the original next-arrival predictions. - Synthetic predictions: compute the exact next arrival time computed from the data and add noise to this number. This noise comes from a log-normal distribution with the mean parameter $\mu = 0$ and the standard deviation parameter $\sigma$. We use $\sigma \in [0, 50]$. - PLECO predictor proposed by Anderson et al. (2014): This model estimates the probability $p$ of a page being requested in the next time step and we interpret this as a prediction that the next arrival of this page will be in $1/p$ time steps. The model parameters were fitted to BrightKite dataset and not adjusted before use on CitiBike. - POPU – a simple predictor used by Antoniadis et al. (2023): if a page appeared in $p$ fraction of the previous requests, we predict its next arrival in $1/p$ time steps. 
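For concreteness, here is a rough Python sketch (our own illustration, not the authors' code) of two of these next-arrival predictors: the synthetic predictor, which perturbs the exact next arrival time with log-normal noise (parameters $\mu = 0$ and $\sigma$, as described above), and the simple POPU frequency predictor. Converting next-arrival times into action predictions follows Antoniadis et al. (2023) and is not shown.
```
import numpy as np

def true_next_arrivals(requests):
    """For each time t, the index of the next request of page requests[t] (inf if none)."""
    nxt, last_seen = [float("inf")] * len(requests), {}
    for t in range(len(requests) - 1, -1, -1):
        nxt[t] = last_seen.get(requests[t], float("inf"))
        last_seen[requests[t]] = t
    return nxt

def synthetic_predictor(requests, sigma, rng):
    """Exact next-arrival time plus log-normal noise with parameters mu = 0 and sigma."""
    exact = true_next_arrivals(requests)
    return [t_next + rng.lognormal(mean=0.0, sigma=sigma) if t_next != float("inf") else t_next
            for t_next in exact]

def popu_predictor(requests):
    """POPU: if a page appeared in a fraction p of the requests seen so far
    (current request included, to avoid p = 0), predict its next arrival in 1/p steps."""
    preds, counts = [], {}
    for t, page in enumerate(requests):
        counts[page] = counts.get(page, 0) + 1
        p = counts[page] / (t + 1)
        preds.append(t + 1.0 / p)          # predicted absolute arrival time
    return preds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reqs = [1, 2, 1, 3, 2, 1, 4, 2]         # made-up request sequence
    print(synthetic_predictor(reqs, sigma=1.0, rng=rng))
    print(popu_predictor(reqs))
```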
In our comparison, we include the following algorithms: offline algorithm Belady which we use to compute the optimal number of page faults OPT, standard online algorithms LRU and Marker (Fiat et al., 1991), ML-augmented algorithms using next arrival predictions L&V (Lykouris and Vassilvitskii, 2021), LMark and LnonMark (Rohatgi, 2020), FtPM which, at each step, evicts an unmarked page with the furthest predicted next arrival time, and algorithms for action predictions FtP and T&D (Antoniadis et al., 2023). We use the implementation of all these algorithms published by Antoniadis et al. (2023). We implement algorithm AQ (Im et al., 2022) and our algorithm F&R. Notes on implementation of F&R. We follow the recommendations in Section 3 except that Follower switches to Robust whenever its cost is $\alpha = 1$ times higher compared to Belady in the same period. With higher $\alpha$, the performance of F&R approaches FtP on the considered datasets. With $k = 10$ (BrightKite dataset), we use $F = [1, 6, 9]$ corresponding to $f(i) = i$. Note that, with such small $k$, polynomial and exponential $f$ would also give a very similar $F$. With $k = 100$ (CitiBike dataset), we use exponential $f(i) = 2^{i+1} - 1$. With $a$-separated queries, Follower uses LRU heuristic when prediction is unavailable, and Robust ignores $F$, querying the predictor at each page fault separated from the previous query by at least $a$ time steps. Figure 1: BrightKite dataset with Synthetic predictor, standard deviation at most 0.003 and 300 resp. | Predictor | Marker | FtP | AQ_b8 | FtPM_a1 | FtPM_a5 | F&R_a1 | F&R_a5 | F&R_a20 | |-----------|--------|-------|-------|---------|---------|--------|--------|---------| | POPU | 1.861 | 1.739 | 1.782 | 1.776 | 1.833 | 1.800 | 1.802 | 1.803 | | PLECO | 1.861 | 2.277 | 1.875 | 1.877 | 1.867 | 1.878 | 1.879 | 1.879 | Figure 2: Competitive ratios on CitiBike dataset with $k = 100$, standard deviation at most 0.001 **Results.** Figures 1 and 2 contain averages of 10 independent experiments. Figure 1 shows that the performance of F&R with high-quality predictions is superior to the previous ML-augmented algorithms except for FtP which follows the predictions blindly and is also 1-consistent. With high $\sigma$, the performance of T&D becomes better. This is true also for F&R with $F = [1..10]$, suggesting that T&D might be more efficient in using erroneous predictions. The second plot shows the total number of times algorithms query the predictor over all instances. Response to such query is a single page missing from predictor’s cache in the case of F&R and T&D and next arrival times of $b$ pages in the case of AQ$_b$. Note that FtPM is equivalent to the non-parsimonious version of AQ with $b = k$. F&R makes the smallest number of queries: with perfect predictions, it makes exactly OPT queries and this number decreases with higher $\sigma$ as F&R spends more time in Robust. Figure 2 shows that F&R performs well in the regime with $a$-separated queries. While the performance of FtPM with POPU predictor worsens considerably towards Marker already with $a = 5$, F&R keeps its improvement over Marker even with $a = 20$. Predictions produced by PLECO seem much less precise as suggested by FtP with PLECO being worse than Marker and smaller number of such predictions either improves (AQ, FtPM) or does not affect performance (F&R) of considered algorithms. Further details of our experimental results are presented in Appendix (Section 5). 
**6 CONCLUSIONS** We present algorithms for MTS and caching with action predictions working in the setting where the number of queries or the frequency of querying the predictor are limited. We have shown that one can achieve theoretical as well as empirical performance comparable to the setting with unlimited access to the predictor, possibly enabling usage of precise but heavy-weight prediction models in environments with scarce computational resources. **REPRODUCIBILITY STATEMENT** The appendix contains a full version of our paper which includes proof of all the theorems and lemmas. We provide textual description of the implementation of our algorithm in Section 5. The code of our implementation can be found at https://github.com/marek-elias/caching/. REFERENCES [1] A. Anderson, R. Kumar, A. Tomkins, and S. Vassilvitskii. The dynamics of repeat consumption. In Proceedings of conference World Wide Web ’14, pages 419–430, 2014. doi: 10.1145/2566486.2568018. [2] A. Antoniadis, T. Gouleakis, P. Kleer, and P. Kolev. Secretary and online matching problems with machine learned advice. In NeurIPS, 2020. [3] A. Antoniadis, C. Coester, M. Eliáš, A. Polak, and B. Simon. Learning-augmented dynamic power management with multiple states via new ski rental bounds. In NeurIPS, 2021. [4] A. Antoniadis, J. Boyar, M. Eliáš, L. M. Favrholdt, R. Hoeksma, K. S. Larsen, A. Polak, and B. Simon. Paging with succinct predictions, 2022. [5] A. Antoniadis, P. J. Ganje, and G. Shahkarami. A novel prediction setup for online speed-scaling. In SWAT, volume 227 of LIPIcs, pages 9:1–9:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022. [6] A. Antoniadis, C. Coester, M. Eliáš, A. Polak, and B. Simon. Online metric algorithms with untrusted predictions. ACM Trans. Algorithms, 19(2), apr 2023. ISSN 1549-6325. doi: 10.1145/3582689. URL https://doi.org/10.1145/3582689 [7] Y. Azar, S. Leonardi, and N. Touitou. Flow time scheduling with uncertain processing time. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021, page 1070–1080, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380539. doi: 10.1145/3406325.3451023. URL https://doi.org/10.1145/3406325.3451023 [8] Y. Azar, S. Leonardi, and N. Touitou. Distortion-oblivious algorithms for minimizing flow time. In Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms, SODA 2022, Virtual Conference / Alexandria, VA, USA, January 9 - 12, 2022, pages 252–274. SIAM, 2022. URL https://doi.org/10.1137/1.9781611977073.13 [9] É. Bamas, A. Maggiori, L. Rohwedder, and O. Svensson. Learning augmented energy minimization via speed scaling. In NeurIPS, 2020. [10] N. Bansal, C. Coester, R. Kumar, M. Purohit, and E. Vee. Learning-augmented weighted paging. In SODA, 2022. [11] Y. Bartal, B. Bollobás, and M. Mendel. Ramsey-type theorems for metric spaces with applications to online problems. J. Comput. Syst. Sci., 72(5):890–921, 2006. [12] L. A. Belady. A study of replacement algorithms for virtual-storage computer. IBM Syst. J., 5(2):78–101, 1966. doi: 10.1147/sj.52.0078. URL https://doi.org/10.1147/sj.52.0078 [13] G. Bernardini, A. Lindermayr, A. Marchetti-Spaccamela, N. Megow, L. Stougie, and M. Sweeney. A universal error measure for input predictions applied to online graph problems. CoRR, abs/2205.12850, 2022. doi: 10.48550/arXiv.2205.12850. URL https://doi.org/10.48550/arXiv.2205.12850 [14] A. Blum and C. Burch. On-line learning and the metrical task system problem. Mach. Learn., 39(1):35–58, 2000. 
doi: 10.1023/A:1007621832648. [15] H. Böckenhauer, D. Komm, R. Královič, R. Královič, and T. Mömke. Online algorithms with advice: The tape model. Inf. Comput., 254:59–83, 2017. [16] A. Borodin and R. El-Yaniv. Online computation and competitive analysis. Cambridge University Press, 1998. ISBN 978-0-521-56392-5. [17] A. Borodin, N. Linial, and M. E. Saks. An optimal on-line algorithm for metrical task system. J. ACM, 39(4):745–763, 1992. doi: 10.1145/146585.146588.
qg5JENs0N4
Just curious: regarding Lemma 4.1, do you have any comments or implications on the (sort of) bias term $\mathbb{E}_{p(h)} \left[ p^{\beta_h}_+ (s_{t+} \mid s)\, p^{\beta_h}_+ (s) \right] - p^{\beta}_+ (s_{t+} \mid s)\, p^{\beta}_+ (s)$?
CLOSING THE GAP BETWEEN TD LEARNING AND SUPERVISED LEARNING – A GENERALISATION POINT OF VIEW Raj Ghugare\textsuperscript{1} Matthieu Geist\textsuperscript{2} Glen Berseth\textsuperscript{1,*} Benjamin Eysenbach\textsuperscript{3,*} \textsuperscript{1}Mila, Université de Montréal \textsuperscript{2}Google DeepMind \textsuperscript{3}Princeton University raj.ghugare@mila.quebec ABSTRACT Some reinforcement learning (RL) algorithms can stitch pieces of experience to solve a task never seen before during training. This oft-sought property is one of the few ways in which RL methods based on dynamic-programming differ from RL methods based on supervised-learning (SL). Yet, certain RL methods based on off-the-shelf SL algorithms achieve excellent results without an explicit mechanism for stitching; it remains unclear whether those methods forgo this important stitching property. This paper studies this question for the problems of achieving a target goal state and achieving a target return value. Our main result is to show that the stitching property corresponds to a form of combinatorial generalization: after training on a distribution of (state, goal) pairs, one would like to evaluate on (state, goal) pairs not seen together in the training data. Our analysis shows that this sort of generalization is different from i.i.d. generalization. This connection between stitching and generalisation reveals why we should not expect SL-based RL methods to perform stitching, even in the limit of large datasets and models. Based on this analysis, we construct new datasets to explicitly test for this property, revealing that SL-based methods lack this stitching property and hence fail to perform combinatorial generalization. Nonetheless, the connection between stitching and combinatorial generalisation also suggests a simple remedy for improving generalisation in SL: data augmentation. We propose a temporal data augmentation and demonstrate that adding it to SL-based methods enables them to successfully complete tasks not seen together during training. On a high level, this connection illustrates the importance of combinatorial generalization for data efficiency in time-series data beyond tasks beyond RL, like audio, video, or text. 1 INTRODUCTION Many recent methods view RL as a purely SL problem of mapping input states and desired goals, to optimal actions [1,3]. These methods have gained a lot of attention due to their simplicity and scalability [4]. These methods sample a goal $g$ (or a return $r$) from the dataset, which was previously encountered after taking an action $a$ from a state $s$, and then imitate $a$ by treating it as an optimal label for reaching $g$ (or achieving return $r$) from $s$. These methods, collectively known as outcome conditional behavioral cloning algorithms (OCBC), achieve excellent results on common benchmarks [3]. However, at a fundamental level, there are some important differences between RL and SL. This paper studies one of those differences: the capability of some RL algorithms to stitch together pieces of experience to solve a task never seen during training. While some papers have claimed that some OCBC approaches already have this stitching property [2], both our theoretical and empirical analyses suggest some important limitations of these prior claims. The stitching property [5] is common among RL algorithms that perform dynamic programming (e.g., DQN [6], DDPG [7], TD3 [8], IQL [9]). 
It is often credited for multiple properties of dynamic programming algorithms like superior data efficiency and off policy reasoning (See Section 4 for detailed discussion). Importantly, we show that stitching also allows for a third property – the ability to infer solutions to a combinatorial number of tasks during test time, like navigating between certain state-goal pairs that never appear together (but do appear separately) during training. An example of stitching is that humans don’t need access to optimal actions to go from an airport to new tourist places; they can use their previous knowledge to navigate to a taxi-stand, which would take them to any *Equal advising. location. But, purely supervised approaches to sequential problems like RL, do not explicitly take such temporal relationships into account. Even in other sequential domains like language, a large body of work is dedicated to study the combinatorial generalisation abilities of large language models [10]-[13]. Our work shows that combinatorial generalisation is also required to solve tasks in the context of RL. We start by formalising stitching as a form of combinatorial generalisation. We observe that when data are collected from a mixture of policies, there can be certain (state, goal) pairs that are never visited in the same trajectory, despite being frequented in separate trajectories. Information from multiple trajectories should be stitched to complete these tasks. Because such tasks (state-goal pairs) are seen separately, but never together, we call the ability of algorithms to perform these tasks as combinatorial generalisation. This connection further motivates an inspiration from SL; if generalisation is the problem, then data augmentation is likely an effective approach [14]. We propose a form of temporal data augmentation for OCBC methods so that they acquire this stitching property and succeed in navigating between unseen (start, goal) pairs or achieving greater returns than the offline dataset. This form of data augmentation involves time, rather than random cropping or shifting colors. Intuitively, temporal data augmentation augments the original goal, with a new goal sampled from a different overlapping trajectory in the offline dataset. This data augmentation scheme does require an estimate of distance between states to detect overlapping trajectories. We demonstrate that this data augmentation is theoretically backed, and empirically endows OCBC algorithms with the stitching property on difficult state-based and image-based tasks. Our primary contribution is to provide a formal framework for studying stitching as a form of combinatorial generalisation. Because of this connection, we hypothesize that OCBC methods do not perform stitching. Perhaps surprisingly, simply increasing the volume of data does not guarantee this sort of combinatorial generalization. Our empirical results support the theory: we demonstrate that prior RL methods based on SL (DT [2] and RvS [3]) fail to perform stitching, even when trained on abundant quantities of data. Our experiments reveal a subtle consideration with the common D4RL datasets [15]: while these datasets are purported to test for exactly this sort of combinatorial generalization, data analysis reveals that “unseen” (state, goal) pairs do actually appear in the dataset. Thus, our experiments are run on a new variant of these datasets that we constructed for this paper to explicitly test for combinatorial generalization. 
On 10 different environments, including both state and image based tasks, and goal and return conditioning, adding data augmentation improves the generalisation capabilities of SL approaches by up to a factor of 2.5. 2 RELATED WORK Prior methods that do some form of explicit stitching. Previous work on stitching abilities of SL algorithms have conflicting claims. The DT paper [2] shows experiments where their SL-based method performs stitching. On the contrary, [16] provide an example where SL algorithms do not perform stitching. RvS [3] shows that a simple SL-based algorithm can surpass the performance of TD algorithms. In tabular settings, [17] show that the benefits of TD-learning arise from trajectory stitching. We provide a formal definition of stitching as a form of combinatorial generalisation. In contrast, generalisation in RL has been generally associated with making correct predictions for unseen but similar states and actions [18]-[20], planning [21], ignoring irrelevant details [22]-[24], or robustness towards changes in the reward or transition dynamics [25]-[27]. Offline RL datasets. A large amount of work is done to build offline RL datasets. [15] provided a first standard offline RL benchmark, [28] provide exploratory offline datasets to underscore the importance of diverse data, [29]-[30] focus on data efficiency and real world deployment and [31] provide benchmarks that also compare the online evaluation budget of offline RL algorithms. Although many offline RL papers informally allude to stitching, we devise new offline RL datasets that precisely test the stitching abilities of offline RL algorithms. Data augmentation in RL. Data augmentation has been proposed as a remedy to improve generalisation in RL [32]-[38], akin to SL [39]. Perhaps the most similar prior work are the ones which use dynamic programming to augment existing trajectories to improve the performance properties of SL algorithms [40]-[42]. However, because these methods still require dynamic programming, they don’t have the same simplicity that make SL algorithms appealing in the first place. 1Open sourced code and data is available: https://github.com/RajGhugare19/stitching-is-combinatorial-generalisation 3 PRELIMINARIES Controlled Markov processes. We will study the problem of goal-conditioned RL in a controlled Markov process with states \( s \in S \) and actions \( a \in A \). The dynamics are \( p(s' | s, a) \), the initial state distribution is \( p_0(s_0) \), the discount factor is \( \gamma \). The policy \( \pi(a | s, g) \) is conditioned on a pair of state and goal \( s, g \in S \). For a policy \( \pi \), define \( p^\pi_t(s_t | s_0) \) as the distribution over states visited after exactly \( t \) steps. We can then define the discounted state occupancy distribution and its conditional counterpart as \[ p^\pi_+(s_{t+} = g) \triangleq \mathbb{E}_{s \sim p_0(s_0)} \left[ p^\pi_+(s_{t+} = g | s_0 = s) \right], \tag{1} \] \[ p^\pi_+(s_{t+} = g | s_0 = s) \triangleq (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t p^\pi_t(s_t = g | s_0 = s), \tag{2} \] where \( s_{t+} \) is the variable that specifies a future state corresponding to the discounted state occupancy distribution. Given a state-goal pair \( s, g \sim p_{\text{test}}(s, g) \) at test time, the task of the policy is to maximise the probability of reaching the goal \( g \) in the future \[ \max_{\pi} J(\pi), \quad \text{where} \quad J(\pi) = \mathbb{E}_{s,g \sim p_{\text{test}}(s,g)} \left[ p^\pi_+(s_{t+} = g | s_0 = s) \right]. 
\tag{3} \] Data collection. Our work focuses on the offline RL setting where the agent has access to a fixed dataset of \( N \) trajectories \( D = \{(s^i_0, a^i_0, ..)\}_{i=1}^{N} \). Our theoretical analysis will assume that the dataset is collected by a set of policies \( \{\beta(a | s, h)\} \), where \( h \) specifies some context. For example, \( h \) could reflect different goals, different language instructions, different users or even different start state distributions. Precisely, we assume that the data was collected by first sampling a context from a distribution \( p(h) \), and then sampling a trajectory from the corresponding policy \( \beta(a | s, h) \). We will use the shorthand notation \( \beta_h(\cdot | \cdot) = \beta(\cdot | \cdot, h) \) to denote the data collecting policy conditioned on context \( h \). Trajectories are assumed to be stored without \( h \), hence the context denotes all hidden information that the true data collection policies used to collect the data. This setup of collecting data corresponds to a mixture of Markovian policies.\(^2\) There is a classic result saying that, for every such mixture of Markovian policies, there exists a Markovian policy that has the same discounted state occupancy measure. **Lemma 3.1** (Rephrased from Theorem 2.8 of [43], Theorem 6.1 of [44]). Let a set of context-conditioned policies \( \{\beta_h(a | s)\} \) and distribution over contexts \( p(h) \) be given. There exists a Markovian policy \( \beta(a | s) \) such that it has the same discounted state occupancy measure as the mixture of policies: \[ p^\beta_+(s_{t+}) = \mathbb{E}_{p(h)} \left[ p^{\beta_h}_+(s_{t+}) \right]. \tag{4} \] The policy \( \beta(a | s) \) is simple to construct mathematically as follows. For data collected from the mixture of context conditioned policies, let \( p^\beta(h | s) \) be the distribution over the context given that the policy arrived in state \( s \). \[ \beta(a | s) \triangleq \sum_h \beta_h(a | s)p^\beta(h | s). \tag{5} \] Theorem 6.1 [44] proves the correctness of this construction. The policy \( \beta(a | s) \) is also easy to construct empirically – simply perform behavioral cloning (BC) on data aggregated from the set of policies. We will hence call this policy the BC policy. Outcome Conditional behavioral cloning (OCBC). While our theoretical analysis will consider generalisation abstracted away from any particular RL algorithm, we will present empirical results using a simple and popular class of goal-conditioned RL methods: Outcome conditional behavioral cloning [45] (DT [2], URL [1], RvS [3], GCSL [46] and many others [47, 48]). These methods take as input a dataset of trajectories \( D = \{(s^i_0, a^i_0, ..)\}_{i=1}^{N} \) and learn a goal-conditioned policy \( \pi(a | s, g) \) using a maximum likelihood objective: \[ \max_{\pi(\cdot | \cdot, \cdot)} \mathbb{E}_{(s,a,g) \sim D} [\log \pi(a | s, g)]. \tag{6} \] \(^2\)Note that the mixture is at the level of trajectories, not at the level of individual actions. The sampling above can be done by first sampling a trajectory from the dataset (uniformly at random), then sampling a (state, action) pair from that trajectory, and setting the goal to be a random state that occurred later in that same trajectory. 
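A minimal sketch of this sampling step (our own illustration): sample a trajectory uniformly, sample a (state, action) index inside it, and relabel the goal as a state from later in the same trajectory. Whether later states are drawn uniformly or with geometric (discount-weighted) probabilities is an implementation choice; both variants are shown.
```
import random

def sample_ocbc_triplet(dataset, gamma=None, rng=random):
    """dataset: list of trajectories, each a list of (state, action) pairs.
    Returns one (state, action, goal) training triplet for the objective in Eq. (6)."""
    traj = rng.choice(dataset)                    # trajectory, uniformly at random
    t = rng.randrange(len(traj) - 1)              # (s_t, a_t), leaving room for a future state
    s_t, a_t = traj[t]
    if gamma is None:
        g_idx = rng.randrange(t + 1, len(traj))   # uniform over strictly later time steps
    else:
        # geometric / discount-weighted future index, truncated at the trajectory end
        g_idx = t + 1
        while g_idx < len(traj) - 1 and rng.random() < gamma:
            g_idx += 1
    goal = traj[g_idx][0]                         # goals are (future) states
    return s_t, a_t, goal

if __name__ == "__main__":
    # Made-up toy dataset: states are integers, actions are +1 / -1 / 0.
    dataset = [[(0, +1), (1, +1), (2, +1), (3, 0)],
               [(3, -1), (2, -1), (1, 0)]]
    print(sample_ocbc_triplet(dataset))
    print(sample_ocbc_triplet(dataset, gamma=0.9))
```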
If we incorporate our data collecting assumptions, then this sampling can be written as $$\max_{\pi(\cdot|\cdot)} \mathbb{E}_{h \sim p(h)} \left[ \mathbb{E}_{a,s \sim \beta_h(a|s), p^{\beta_h}_+(s)} \left[ \log \pi(a | s, s_{t+}) \right] \right].$$ (7) 4 “STITCHING” AS A FORM OF COMBINATORIAL GENERALISATION Before concretely defining stitching, we will describe three desirable properties that are colloquially associated with “stitching” and the learning dynamics of TD methods. (Property 1) The ability to select infrequently seen paths that are more optimal than frequent ones. While a shorter trajectory between the state and the goal may occur infrequently in the dataset, TD methods can find more examples of this trajectory by recombining pieces of different trajectories, thanks to dynamic programming. This property is enjoyed by both SARSA (expectation over actions) and Q-learning (max over actions) methods, and is primarily associated with the sample efficiency of learning. (Property 2) The ability to evaluate policies different from those which collected the data, and perform multiple steps of policy improvement. This property is unique to Q-learning. (Property 3) Temporal difference methods (both Q-learning and SARSA) can also recombine trajectories to find paths between states never seen together during training. This property is different from the first property in that it is not a matter of data efficiency – temporal difference methods can find paths that will never be sampled from the data collecting policies, even if given infinite samples. All three of these properties are colloquially referred to as “stitching” in the literature. While these properties are not entirely orthogonal, they are distinct: certain algorithms may have just some of these properties. Obtaining all these properties in a simpler (than TD) framework is difficult, and it remains unclear whether OCBC methods possess any of them. To better understand the differences and similarities this study focuses on the third property. We formalize this property as a form of generalisation, which we will refer to as combinatorial generalisation. Defining combinatorial generalisation will allow us to analyze if and when OCBC methods perform stitching, both theoretically (this section) and experimentally (Section 5). Intuitively, combinatorial generalisation looks at connecting states and goals, which are never seen together in the same trajectory, but where a path between them is possible using the information present in different trajectories. It therefore tests a form of “stitching” [15,49], akin to “combinatorial generalisation” [50–52]. To define this generalisation, we will specify a training distribution and testing distribution. The training distribution corresponds to sampling a context $h \sim p(h)$ and then sampling an $(s,g)$ pair from the corresponding policy $\beta_h$. This is exactly how OCBC methods are trained in practice (Section 3). The testing distribution corresponds to sampling an $(s,g)$ pair from the BC policy $\beta(a | s)$ defined in Equation (5). For each distribution, we will measure the performance $f^{\pi(\cdot|s,g)}(s,g)$ of goal-conditioned policy $\pi(a | s,g)$. Definition 1 (Combinatorial generalisation). Let a set of context-conditioned policies $\{\beta_h(a | s)\}$ be given, along with a prior over contexts $p(h)$. Let $\beta(a | s)$ be the policy constructed via Eq. (5). Let $\pi(a | s,g)$ be a policy for evaluation. 
The combinatorial generalisation of a policy $\pi(a | s,g)$ measures the differences in goal-reaching performance for goals sampled $g \sim p^{\beta_h}_+(s_{t+} | s)$ versus goals sampled from $g \sim \mathbb{E}_{p(h)}[p^{\beta_h}_+(s_{t+} | s)]$: $$\mathbb{E}_{s \sim p^{\beta_h}_+(s)} \left[ f^{\pi(\cdot|s,g)}(s,g) \right] - \mathbb{E}_{h \sim p(h), s \sim p^{\beta_h}_+(s)} \left[ f^{\pi(\cdot|s,g)}(s,g) \right].$$ (8) The precise way performance $f$ is measured is not important for our analysis: “generalisation” simply means that the performance under one distribution is similar to the performance under another. In our experiments, we will look at performance measured by the success rate at reaching the commanded goal. On the surface, it could seem like both the test and train distributions are the same. Lemma 3.1 about reducing mixtures of policies to a single Markovian policy seems to hint that this might be true. Indeed, this distinction has not been made before while analysing OCBC methods [16, 35]. This misconception is demonstrated by the following lemma: Figure 1: (a) The MDP has 5 states and two actions (up and right). (b) Training distribution: Data is collected using two contexts conditioned policies shown in blue and red. (c) Testing distribution: The behavior cloned policy (equation 5) is shown in purple. During training, the state-goal pair \(\{s_t = 2, s_{t+} = 4\}\) is never sampled, as no data collecting policy goes from state 2 to state 4. But the behavior cloned policy has non zero probability of sampling the state-goal pair \(\{s_t = 2, s_{t+} = 4\}\). Because of this discrepancy between the train and test distributions, OCBC algorithms do not have any guarantees of outputting the correct action for the state-goal pair \(\{s_t = 2, s_{t+} = 4\}\). Whereas dynamic programming based methods can propagate rewards through the backwards stitched path of \(4 \rightarrow 3 \rightarrow 2\) to output the correct action. **Lemma 4.1.** There exist a collection of policies \(\{\beta_h\}\) and context distribution \(p(h)\) such that, conditioned on a state, the distribution of states and goals for the data collecting policies (training) is different from the distribution of states and goals (testing) for BC policy \(\beta\). \[ E_{p(h)} \left[ p^{\beta_h}_+(s_{t+} | s)p^{\beta_h}_+(s) \right] \neq p^\beta_+(s_{t+} | s)p^\beta_+(s) \quad \text{for some states } s, s_{t+}. \] (9) **Proof.** The proof is base on a simple counterexample, shown in Fig. 1. See the related caption for a sketch of proof and Appendix D.1 for the formal one. \(\square\) In summary, while the BC policy \(\beta(a | s)\) will visit the same states as the mixture of data collecting policies on average, conditioned on some state, the BC policy \(\beta(a | s)\) may visit a different distribution of future states than the mixture of policies. Even if infinite data is collected from the data collecting policies, there can be pairs of states that will never be visited by any one data collecting policy in a single trajectory. The important implication of this negative result is that stitching requires the OCBC algorithm to recover a distribution over state-goal pairs (\(\beta\)) which is different from the one it is trained on \((\beta_{h=1}, \beta_{h=2})\). In theory, the training distribution has enough information to recover the state-goal distribution of the BC policy without the knowledge of the contexts of the data collecting policies. It is upto the algorithm to extract this information. 
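To see the train/test mismatch of Lemma 4.1 numerically, consider a deliberately tiny, hypothetical example (ours, not the MDP of Fig. 1): a deterministic 3-state chain where the context $h=1$ policy only produces trajectories $0 \rightarrow 1$ and the context $h=2$ policy only produces $1 \rightarrow 2$. The sketch below enumerates the (state, future-state) pairs seen during OCBC training versus those reachable by the behavior-cloned policy of Eq. (5); the pair $(0, 2)$ appears only for the latter.
```
# Hypothetical 3-state chain; context h = 1 only ever produces the trajectory 0 -> 1,
# context h = 2 only ever produces 1 -> 2 (integer states, deterministic dynamics).
trajectories = {1: [0, 1], 2: [1, 2]}

# Training distribution (Eq. 7): (s, s_{t+}) pairs that co-occur inside a single trajectory.
train_pairs = {(traj[i], traj[j])
               for traj in trajectories.values()
               for i in range(len(traj)) for j in range(i + 1, len(traj))}

# The BC policy of Eq. (5) mixes contexts state-wise: in state 1 it moves to 2 with positive
# probability no matter how it arrived there, so its pooled one-step transitions are:
bc_next = {0: {1}, 1: {2}}

def reachable(s):
    """All states the BC policy can visit in the future when started from s."""
    out, frontier = set(), {s}
    while frontier:
        nxt = set().union(*(bc_next.get(x, set()) for x in frontier)) - out
        out |= nxt
        frontier = nxt
    return out

test_pairs = {(s, g) for s in (0, 1, 2) for g in reachable(s)}
print("pairs seen during training:   ", sorted(train_pairs))
print("pairs reachable by BC policy: ", sorted(test_pairs))
print("pairs that require stitching: ", sorted(test_pairs - train_pairs))  # {(0, 2)}
```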
Many RL methods can recover the test distribution implicitly and sidestep this negative result by doing dynamic programming (i.e., temporal difference learning). One way of viewing dynamic programming is that it considers all possible ways of stitching together trajectories, and selects the best among these stitched trajectories. But OCBC algorithms based on SL [1, 3, 46, 47] can only have guarantees for iid generalisation [53]. And in line with previous works studying other forms of combinatorial generalisation in SL [10, 54], it is not clear apriori why these methods should have the combinatorial generalisation property, leading to the following hypothesis: *Conditional imitation learning methods do not have the combinatorial generalisation property.* We will test this hypothesis empirically in our experiments. In Appendix A we discuss connections between stitching and spurious correlations. ## 5 TEMPORAL AUGMENTATION FACILITATES GENERALISATION The previous section allows us to rethink the oft-sought “stitching” property as a form of generalisation, and measure that generalisation in the same way we measure generalisation in SL: by measuring a difference in performance under two objectives. Casting stitching as a form of generalisation allows us to employ a standard tool from SL: data augmentation. When the computer vision expert wants a model that can generalize to random crops, they train their model on randomly-cropped images. Indeed, prior work has applied data augmentation to RL to achieve various notions of generalisation [32,52]. However, we use a different type of data augmentation to facilitate stitching. In this section, we describe a data augmentation approach that allows OCBC methods to improve their stitching capabilities. Recall that OCBC policies are trained on \((s, a, g)\) triplets. To perform data augmentation, we will replace \(g\) with a different goal \(\tilde{g}\). To sample these new goals \(\tilde{g}\), we first take the original goal \(g\) and identify states from the offline dataset which are nearby to this goal (Section 5). Let \(w\) denote one of these nearby “waypoint” states. Looking at the trajectory that contains \(w\), the new goal \(\tilde{g}\) is a random state that occurs after \(w\) in this trajectory. We visualize this data augmentation in Fig. 2. **Identifying nearby states.** The problem of sampling nearby states can be solved by clustering all the states from the offline dataset before training. This assumes a distance metric in the state space. Using this distance metric, every state can be assigned a discrete label from \(k\) different categories using a clustering algorithm [55,56]. Although finding a good distance metric is difficult in high-dimensional settings [57], our experiments show that using a simple L2 distance leads to significant improvement, even for image-based tasks. **Method summary.** Algorithm 1 summarizes how our data augmentation can be applied on top of existing OCBC algorithms. Given a method to group states, we can add our data-augmentation to existing OCBC algorithms in about 5 lines of code (marked in blue). In our experiments, we use the k-means algorithm [55]. To sample nearby waypoint states, we randomly sample a state from the same group (same cluster) as the original goal. The augmented goal is then sampled from the future of this waypoint (See Fig. 2). 
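Before the full training loop in Algorithm 1 below, the goal-replacement step itself can be sketched concretely. The following is a minimal sketch under our own assumptions (states are fixed-length NumPy vectors grouped per trajectory, plain L2 k-means from scikit-learn, and the function names `fit_state_clusters` and `augment_goal` are ours, not the released code):

```python
# Minimal sketch of temporal goal augmentation (assumptions: `trajectories` is a list
# of (T_i, state_dim) NumPy arrays; function names are ours, not the paper's code).
import numpy as np
from sklearn.cluster import KMeans

def fit_state_clusters(trajectories, n_clusters=40, seed=0):
    """Cluster every state in the offline dataset with plain L2 k-means."""
    all_states = np.concatenate(trajectories, axis=0)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(all_states)

def augment_goal(goal, trajectories, kmeans, rng):
    """Swap `goal` for a state that lies in the future of a nearby waypoint."""
    goal_cluster = kmeans.predict(goal[None, :])[0]
    candidates = []                      # (trajectory index, time index) of waypoints
    for i, traj in enumerate(trajectories):
        labels = kmeans.predict(traj)
        candidates += [(i, t) for t in np.flatnonzero(labels == goal_cluster)
                       if t + 1 < len(traj)]     # the waypoint must have a future state
    if not candidates:
        return goal                              # nothing nearby: keep the original goal
    i, t = candidates[rng.integers(len(candidates))]                      # waypoint w
    return trajectories[i][rng.integers(t + 1, len(trajectories[i]))]    # augmented goal
```

In a training loop this would be invoked with probability ε on each sampled (s, a, g) triplet; precomputing the cluster label of every dataset state once, rather than calling `predict` inside the loop, is the obvious efficiency fix.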
**Algorithm 1 Outcome-conditioned behavioral cloning with (temporal) data augmentation.** The key contribution of our paper is this form of data augmentation, which is shown in blue text. ``` 1: Input: Dataset : \(D = \{(s_0, a_0, \ldots)\}\). 2: Initialize OCBC policy \(\pi_\theta(a|s, g)\) with parameters \(\theta\). 3: Set \(\epsilon =\) augmentation probability, \(m =\) mini-batch size. 4: \((\{d_0, d_1, \ldots\}) = \text{CLUSTER}(\{s_0, s_1, \ldots\}). \quad \triangleright \text{Group all states in the dataset.} 5: while not converged do 6: for \(t = 1, \ldots, m\) do 7: Sample \((s_t, a_t, g_{t+}) \sim D.\) \quad \triangleright Equation 7 8: With probability \(\epsilon:\) 9: Get the group of the goal: \(k = d_{t+}\). 10: Sample waypoint states from the same group: \(w \sim \{s_i; \forall i \text{ such that } d_i = k\}\). 11: Sample augmented goal \(\tilde{g}\) from the future of \(w\), from the same trajectory as \(w\). 12: Augment the goal \(g_{t+} = \tilde{g}\). 13: Collect the loss \(L_t(\theta) = -\log \pi_\theta(a_t | s_t, g_{t+})\). 14: Update \(\theta\) using gradient descent on the mini-batch loss \(\frac{1}{m} \sum_{t=1}^{m} L_t(\theta)\) 15: Return : \(\pi_\theta(a|s, g)\) ``` **Theoretical intuition on temporal data augmentation.** While data augmentations in general do not have exact theoretical guarantees, we can prove that temporal data augmentation, under certain smoothness assumptions, will generate additional state-goal pairs which may not be seen otherwise during training. In Appendix D.2 we show that there exists a hierarchy of distributions with increasing stitching abilities, where 0-step distribution corresponds to the train distribution (Eq. (9), left) and the per-step distribution corresponds to the test distribution (Eq. (5), right). We prove that applying temporal data augmentation once, samples state-goals from the 1-step distribution. **Lemma 5.1.** Under the smoothness assumptions mentioned in Appendix D.2 (Eq. (11)), for all \( s, a \) pairs, temporal data augmentation \( p^{\text{temp-aug}}(g \mid s, a) \) approximately samples goal according the distribution of one-step stitching policy \( p^{1\text{-step}}(g \mid s, a) \). Intuitively, the smoothness assumptions are required to ensure that nearby states have similar probabilities under the data collection policies. In Fig. 2, for example, this ensures taking action \( a \) from state \( s \) has similar probabilities of reaching nearby states \( w \) and \( g \). For the complete proof as well as more details see Appendix D.2. ![Figure 3: Goal conditioned RL](image) Different colors represent the navigation regions of different data collecting policies. During data collection, these policies navigate between random state-goal pairs chosen from their region of navigation. These visualisations are for the “point” mazes. The “ant” maze datasets are similar. Appendix Fig. 12 shows the “ant” maze datasets. ![Figure 4: Return conditioned RL](image) We visualise our new image based and partially observable environment created using Miniworld [58]. ## 6 EXPERIMENTS The experiments aim (1) to verify our theoretical claim that OCBC methods do not always exhibit combinatorial generalisation, even with larger datasets or larger transformer models, and (2) to evaluate how adding temporal augmentation to OCBC methods can improve stitching in both state-based and image-based tasks. All experiments are conducted across five random seeds. ### OCBC methods. 
RvS [3] is an OCBC algorithm that uses a fully connected policy network and often achieves results better than TD-learning algorithms on various offline RL benchmarks [3]. DT [2] treats RL as a sequential SL problem and uses the transformer architecture as a policy. DT outputs an action, conditioning not only on the current state, but a history of states, actions and goals. See Appendix C.2 for implementation details. ### 6.1 TESTING THE ABILITY OF OCBC ALGORITHMS AND TEMPORAL DATA AUGMENTATION TO PERFORM STITCHING. While the maze datasets from D4RL [15] were originally motivated to test the stitching capabilities of RL algorithms, we find that most test state-goal pairs are already in the training distribution. Thus, a good success rate on these datasets does not necessarily require stitching. This may explain why OCBC methods have achieved excellent results on these tasks [3], despite the fact that our theory suggests that these methods do not perform stitching. In our experiments, we collect new offline datasets that precisely test for stitching (see Fig. 3 and Fig. 12 for visualisation). To collect our datasets, we use the same “point” and “ant” mazes (umaze, medium and large) from D4RL [15]. To test for stitching, we condition OCBC policies to navigate between (state, goal) pairs previously unseen together, and measure the success rate. In Fig. 3, this conditioning corresponds to (state, goal) pairs that appear in differently coloured regions. Each task consists of 2-6 randomly chosen (state, goal) pairs from different regions in the maze. In Appendix C.3, we discuss the important differences between the D4RL and our datasets, which are necessary to test for stitching. Figure 5: Adding data augmentation outperforms the OCBC baselines on most tasks. “Only goal augmentation” refers to an oracle version of our augmentation that uses privileged information \((x, y)\) coordinates when performing augmentation. Adding temporal data augmentation (both standard and oracle versions) improves the performance of both RvS and DT on \(5/6\) tasks. Results. In Fig. 5, we can see that both DT and RvS struggle to solve unseen tasks at test-time. However, applying temporal data-augmentation to RvS improves the goal-reaching success rate on \(5/6\) tasks, because the augmentation results in sampling (state, goal) pairs otherwise unseen together. To show that temporal data augmentation can also be applied to only important parts of the state, based on extra domain knowledge, we also compare an oracle version of our data augmentation. This oracle version uses only the \(x, y\) coordinates from the state vector to apply the K-means algorithm. Figure 5 also shows that using extra domain knowledge can further improve performance. Figure 6: Temporal data augmentation on image-based tasks. It is difficult to find a reliable metric to apply temporal data augmentation in high-dimensional tasks. We show that using a simple L2 distance metric can surprisingly improve the combinatorial generalisation of OCBC algorithms on both goal-conditioned (left and center) and return-conditioned (right) tasks. 6.2 CAN TEMPORAL DATA AUGMENTATION WORK FOR HIGH DIMENSIONAL TASKS? As mentioned in Section 5, it can be difficult to provide a good distance metric, especially for tasks with high-dimensional states. Although this is a limitation, we show that temporal data augmentation, using a simple L2 distance metric, can improve the combinatorial generalization of OCBC algorithms even on high-dimensional image-based tasks. 
To evaluate this, we use both image-based goal-conditioned and return-conditioned tasks. For the goal-conditioned tasks, we use an image-based version of the “point” mazes [3]. The agent is given a top-down view of the maze to infer its location. For the return conditioned tasks, we create a new task using Miniworld [58] (See Fig. 4) called “collect”. The task is to collect both the keys and return to the start position. A reward of 1 is received after collecting each key. There are two data collecting policies, each collecting only one key. At test time, the OCBC policy is conditioned on the unseen return of 2 (collect both keys). Results. In Fig. 6, we can see that temporal data augmentation improves the performance of RvS and DT on \(4/4\) and \(3/4\) tasks, respectively. Although temporal data augmentation can be successfully applied to some high dimensional tasks, it is not guaranteed to succeed [5,1]. There remains room for other scalable and robust methods to achieve even better performance. Figure 7: Performance of DT trained on different offline dataset sizes (left) and using a different number of hidden layers (right) averaged across all “point” mazes. Even with larger datasets or models, the generalisation of DT is worse than DT + data augmentation. 6.3 Ablation experiments. Does more data remove the need for augmentation? Although our theory (Lemma 4.1) suggests that generalisation is required because of a change in distribution and is not a problem due to limited data, conventional wisdom says that larger datasets generally result in better generalisation. To empirically test whether this is the case, we train DT on 10 million transitions (10 times more than Fig. 5) on all “point” maze tasks. In Fig. 7(left), we see that even with more data, the combinatorial generalisation of DT does not improve much. Lastly, scaling the size of transformer models [59] is known to perform better in many SL problems. To understand whether this can have an effect on stitching capabilities, we increased the number of layers in the original DT model. In Fig. 7(right), we can see that increasing the number of layers does not have an effect on DT’s stitching capabilities. How sensitive is temporal data augmentation to the number of centroids used for K-means? In Fig. 8, we ablate the choice of the number of centroids used in K-means on two environments – “point” maze-medium and “ant” maze-medium. All choices of centroids significantly outperform the RvS method on both tasks. Combinatorial generalisation due to spurious relations. In most of our experiments, OCBC algorithms do exhibit, albeit very low, combinatorial generalisation. We believe this occurs not due to the combinatorial generalisation ability of OCBC algorithms, but due to certain spurious relations that are present in the dataset. In Appendix A, we discuss the relation of combinatorial generalisation with spurious relations. In Appendix B, we perform didactic experiments to show that combinatorial generalisation in OCBC algorithms occurs because the OCBC policy network picks up on such spurious relations. 7 Discussion In this work, we shed light on an area that the community has been investigating recently, can SL-based approaches perform stitching. We formally show that stitching requires combinatorial generalisation, and recent SL approaches to RL (OCBC methods) generally do not have any guarantees to perform such generalisation. We empirically verify this on many state-based and image-based tasks. 
We also propose a type of temporal data augmentation to perform the desired type of combinatorial generalisation precisely and help bridge the gap between OCBC and temporal difference algorithms. Limitations. Our proposed augmentation assumes access to a local distance metric in the state space, which can be difficult to obtain in general. Lifting this assumption and developing scalable OCBC algorithms that generalise is a promising direction for future work. Overall, our work hints that current SL approaches may not efficiently use sequential data found in RL: even when trained on vast quantities of data, these approaches do not perform combinatorial generalisation (stitching). Due to the temporal nature of RL, it is possible to solve a combinatorial number of tasks from the same sequential data. Similar gains in data efficiency can be made by designing algorithms capable of combinatorial generalisation in other problems involving time series data, for example, audio, videos, and text. Acknowledgements. This work was supported by Mila IDT, Compute Canada, and CIFAR. We thank Artem Zholus, Arnav Jain, and Tianwei Ni for reviewing an earlier draft of our paper. We thank Seohong Park and Mikail Khona for helpful pointers related to the code. We thank Siddarth Venkatraman and members of the Robotics and Embodied AI (REAL) Lab for fruitful discussions throughout the project. REFERENCES [1] Juergen Schmidhuber. Reinforcement learning upside down: Don’t predict rewards – just map them to actions, 2020. [2] Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advances in neural information processing systems, 34:15084–15097, 2021. [3] Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, and Sergey Levine. Rvs: What is essential for offline rl via supervised learning? arXiv preprint arXiv:2112.10751, 2021. [4] Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, and Igor Mordatch. Multi-game decision transformers, 2022. [5] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In Aaai, volume 8, pages 1433–1438. Chicago, IL, USA, 2008. [6] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. [7] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. [8] Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, pages 1587–1596. PMLR, 2018. [9] Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169, 2021. [10] Thaddius Wiedemer, Prasanna Mayilvahanan, Matthias Bethge, and Wieland Brendel. Compositional generalization from first principles, 2023. [11] Abulhair Saparov and He He. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought, 2023. [12] Yi Zhang, Arturs Backurs, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, and Tal Wagner. 
Unveiling transformers with lego: a synthetic reasoning task, 2023. [13] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models, 2021. [14] Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning, 2017. [15] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning, 2021. [16] David Brandfonbrener, Alberto Bietti, Jacob Buckman, Romain Laroche, and Joan Bruna. When does return-conditioned supervised learning work for offline reinforcement learning? In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. [17] David Cheikhi and Daniel Russo. On the statistical benefits of temporal difference learning. 2023. [18] Amy Zhang, Nicolas Ballas, and Joelle Pineau. A dissection of overfitting and generalization in continuous reinforcement learning, 2018. [19] Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning, 2019. [20] Kenny Young, Aditya Ramesh, Louis Kirsch, and Jürgen Schmidhuber. The benefits of model-based generalization in reinforcement learning, 2023. [21] Michael Janner, Qiyang Li, and Sergey Levine. Offline reinforcement learning as one big sequence modeling problem. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 1273–1286. Curran Associates, Inc., 2021.
xwZhyKynCB
Definition 12: For an abstract query graph G, a grounding is a function I that maps G into a query graph. Do you impose any restrictions on this mapping? For example, could two distinct nodes with the type
EFO$_k$-CQA: Towards Knowledge Graph Complex Query Answering beyond Set Operation Anonymous authors Paper under double-blind review Abstract To answer complex queries on knowledge graphs, logical reasoning over incomplete knowledge is required due to the open-world assumption. Learning-based methods are essential because they are capable of generalizing over unobserved knowledge. Therefore, an appropriate dataset is fundamental to both obtaining and evaluating such methods under this paradigm. In this paper, we propose a comprehensive framework for data generation, model training, and method evaluation that covers the combinatorial space of Existential First-order Queries with multiple variables (EFO$_k$). The combinatorial query space in our framework significantly extends those defined by set operations in the existing literature. Additionally, we construct a dataset, EFO$_k$-CQA, with 741 query types for empirical evaluation, and our benchmark results provide new insights into how query hardness affects the results. Furthermore, we demonstrate that the existing dataset construction process is systematically biased and hinders the appropriate development of query-answering methods, highlighting the importance of our work. Our code and data are provided in https://anonymous.4open.science/r/EFOK-CQA/README.md 1 Introduction The Knowledge Graph (KG) is a powerful database that encodes relational knowledge into a graph representation (Vrandečić & Krötzsch [2014], Suchanek et al. [2007]), supporting downstream tasks (Zhou et al. [2007], Ehringer & Wöß [2016]) with essential factual knowledge. However, KGs suffer from incompleteness during its construction (Vrandečić & Krötzsch [2014], Carlson et al. [2010]), which is formally acknowledged as Open World Assumption (OWA) (Libkin & Sirangelo [2009]). The task of Complex Query Answering (CQA) proposed recently has attracted much research interest (Hamilton et al. [2018], Ren & Leskovec [2020]). This task ambitiously aims to answer database-level complex queries described by logical complex connectives (conjunction $\land$, disjunction $\lor$, and negation $\neg$) and quantifiers ($\exists$ existential, $\forall$ universal) (Wang et al. [2022], Ren et al. [2023], Leskovec [2023]). Currently, learning-based methods dominate the CQA tasks because they can empirically generalize to unseen knowledge as well as prevent the resource-demanding symbolic search. The thriving of learning-based methods also puts an urgent request on high-quality benchmarking methods, including datasets with comprehensive coverage of queries and sound answers, and fair evaluation protocol for learning-based approaches. In the previous study, datasets are developed by progressively expanding the syntactical expressiveness, where conjunction (Hamilton et al. [2018]), union (Ren et al. [2020]), negation (Ren & Leskovec [2020]), and other operators (Liu et al. [2021]) are taken into account sequentially. In particular, the dataset proposed in Ren & Leskovec [2020] contains all logical connectives and becomes the standard training set for model development. Wang et al. [2021] proposed a large evaluation benchmark EFO-1-QA that systematically evaluates the combinatorial generalizability of CQA models on such queries. More related works are included in Appendix A. However, the queries in aforementioned datasets (Ren & Leskovec [2020], Wang et al. [2021]) are recently justified as “Tree-Form” queries (Yin et al. [2023]) as they rely on the tree combinations. 
1The universal quantifier is usually not considered in query answering tasks, as a common practice from both CQA on KG (Wang et al. [2022], Ren et al. [2023]) and database query answering (Poess & Floyd [2000]). of set operations. Compared to the well-established TPC-H decision support benchmark (Poess & Floyd, 2000) for database query processing, queries in existing CQA benchmarks (Ren & Leskovec, 2020; Wang et al., 2021) have two common shortcomings: (1) lack of combinatorial answers: only one variable is queried, and (2) lack of structural hardness: all existing queries subject to the structure-based tractability (Rossi et al., 2006; Yin et al., 2023). It is rather questionable whether existing CQA data under such limited scope can support the future development of methodologies for general decision support with open-world knowledge. The goal of this paper is to establish a new framework that addresses the aforementioned shortcomings to support further research in complex query answering on knowledge graphs. Our framework is formally motivated by the well-established investigation of constraint satisfaction problems, in which all queries can be formulated. In general, the contribution of our work is four folds. **Complete coverage** We capture the complete Existential First Order (EFO) queries from their rigorous definitions, underscoring both combinatorial hardness and structural hardness and extending the existing coverage (Wang et al., 2021) which covers only a subset of EFO$_1$ query. The captured query family is denoted as EFO$_k$, where $k$ stands for multiple variables. **Curated datasets** We derive EFO$_k$-CQA dataset, an enormous extension of the previous EFO-1-QA benchmark (Wang et al., 2021) and contains 741 types of query. We design several rules to guarantee that our dataset includes high-quality nontrivial queries, particularly those that contain multiple query variables and are not structure-based tractable. **Convenient implementation** We implement the entire pipeline for query generation, answer sampling, model training and inference, and evaluation for the undiscussed scenarios of combinatorial answers. Our pipeline is backward compatible, which supports both set operation-based methods and more recent ones. **Results and findings** We evaluate six representative CQA methods on our benchmark. Our results refresh the previous empirical findings and further reveal the structural bias of previous data. ## 2 PROBLEM DEFINITION ### 2.1 EXISTENTIAL FIRST ORDER (EFO) QUERIES ON KNOWLEDGE GRAPHS Given a set $\mathcal{E}$ of entities and a set $\mathcal{R}$ of relations, a knowledge graph $\mathcal{KG}$ encodes knowledge as a set of factual triple $\mathcal{KG} = \{(h, r, t)\} \subset \mathcal{E} \times \mathcal{R} \times \mathcal{E}$. According to the OWA, the knowledge graph that we have observed $\mathcal{KG}_o$ is only part of the real knowledge graph, meaning that $\mathcal{KG}_o \subset \mathcal{KG}$. The existing research only focuses on the logical formulas without universal quantifiers (Ren et al., 2023; Wang et al., 2023). We then offer the definition of it based on strict first order logic. **Definition 1** (Term). A term is either a variable $x$ or an entity $a \in \mathcal{E}$. **Definition 2** (Atomic formula). $\phi$ is an atomic formula if $\phi = r(h, t)$, where $r \in \mathcal{R}$ is a relation, $h$ and $t$ are two terms. **Definition 3** (Existential first order formula). 
The set of the existential formulas is the smallest set $\Phi$ that satisfies the following: (i) For atomic formula $r(h, t)$, itself and its negation $\neg r(h, t) \in \Phi$ (ii) If $\phi, \psi \in \Phi$, then $(\phi \land \psi), (\phi \lor \psi) \in \Phi$ (iii) If $\phi \in \Phi$ and $x_i$ is any variable, then $\exists x_i \phi \in \Phi$. **Definition 4** (Free variable). If a variable $y$ is not associated with a quantifier, it is called a free variable, otherwise, it is called a bounded variable. We write $\phi(y_1, \cdots, y_k)$ to indicate $y_1, \cdots, y_k$ are the free variables of $\phi$. **Definition 5** (Sentence and query). A formula $\phi$ is a sentence if it contains no free variable, otherwise, it is called a query. In this paper, we always consider formula with free variable, thus, we use formula and query interchangeably. **Definition 6** (Substitution). For $a_1, \cdots, a_k$, where $a_i \in \mathcal{E}$, we write $\phi(a_1/y_1, \cdots, a_k/y_k)$ or simply $\phi(a_1, \cdots, a_k)$ for the result of simultaneously replacing all free occurrence of $y_i$ in $\phi$ by $a_i$, $i = 1, \cdots, k$. --- 2We always assume all variables are named differently as common practice in logic. Figure 1: Operator Tree versus Query Graph. **Left**: An operator tree representing a given query “List the presidents of European countries that have never held the Olympics” (Ren & Leskovec, 2020); **Right**: A query graph representing a given query “Find a pair of persons who are both colleagues and co-authors and were born in the same country, with one having awarded the fields medal while the another not”, which is both a multigraph and a cyclic graph, containing two free variables. **Definition 7** (Answer of an EFO query). For a given existential query $\phi(y_1, \cdots, y_k)$ and a knowledge graph $KG$, its answer is a set that defined by $$A[\phi(y_1, \cdots, y_k)] = \{(a_1, \cdots, a_k) | a_i \in E, i = 1, \cdots, k, \phi(a_1, \cdots, a_k) \text{ is True in } KG\}.$$ **Definition 8** (Disjunctive Normal Form (DNF)). For any existential formula $\phi(y_1, \cdots, y_k)$, it can be converted to the Disjunctive normal form as shown below: $$\phi(y_1, \cdots, y_k) = \gamma_1(y_1, \cdots, y_k) \lor \cdots \lor \gamma_m(y_1, \cdots, y_k),$$ (1) $$\gamma_i(y_1, \cdots, y_k) = \exists x_1, \cdots, x_n.\rho_{i1} \land \cdots \land \rho_{it},$$ (2) where $\rho_{ij}$ is either an atomic formula or the negation of it, $x_i$ is called an existential variable. DNF has a strong property that $A[\phi(y_1, \cdots, y_k)] = \cup_{i=1}^{m}A[\gamma_i(y_1, \cdots, y_k)]$, which allows us to only consider conjunctive formulas $\gamma_i$ and then aggregate those answers to retrieve the final answers. This practical technique has been used in many previous research (Long et al., 2022; Ren et al., 2023). Therefore, we only discuss conjunctive formulas in the rest of this paper. ### 2.2 Constraint satisfaction problem for EFO queries Formally, a constraint satisfaction problem (CSP) $P$ can be represented by a triple $P = (X, D, C)$ where $X = (x_1, \cdots, x_n)$ is an $n$-tuple of variables, $D = (D_1, \cdots, D_n)$ is the corresponding $n$-tuple of domains, $C = (C_1, \cdots, C_t)$ is $t$-tuple constraint, each constraint $C_i$ is a pair of $(S_i, R_{S_i})$ where $S_i$ is a set of variables $S_i = \{x_{i_j}\}$ and $R_{S_i}$ is the constraint over those variables (Rossi et al., 2006). 
Historically, there are strong parallels between CSP and conjunctive queries in knowledge bases (Gotlob et al., 1999; Kolaitis & Vardi, 1998). The terms correspond to the variable set $X$. The domain $D_i$ of a constant entity contains only itself, while it is the whole entity set $E$ for other variables. Each constraint $C_i$ is binary that is induced by an atomic formula or its negation, for example, for an atomic formula $r(h, t)$, we have $S_i = \{h, t\}$, $R_{S_i} = \{(h, t) | h, t \in E, (h, r, t) \in KG\}$. Finally, by the definition of existential quantifier, we only consider the answer of free variable, rather than tracking all terms within the existential formulas. **Definition 9** (CSP answer of conjunctive formula). For a conjunctive formula $\gamma$ in Equation 2 with $k$ free variables and $n$ existential variables, the answer set of it formulated as CSP instance is: $$A[\gamma(y_1, \cdots, y_k)] = A[\gamma^*(y_1, \cdots, y_{n+k})], \text{ where } \gamma^* = \rho_{i1} \land \cdots \land \rho_{it}.$$ This shows that the inference of existential formulas is easier than solving CSP instances since the existential variables do not need to be kept track of. ### 2.3 The representation of query To give an explicit representation of existential formula, Hamilton et al. (2018) firstly proposes to represent a formula by operator tree, where each node represents the answer set for a sub-query, and the logic operators in it naturally represent set operations. This method allows for the recursive computation from constant entity to the final answer set in a bottom-up manner (Ren & Leskovec, 2020). Figure 2: Left: Example of trivial abstract query graph, in the upper left graph, the $x_1$ is redundant violating Assumption [13] in the bottom left graph, answers for the whole query can be decomposed to answer two free variables $y_1$ and $y_2$ alone, violating Assumption [14]. Right: Example of new query graph that is not included in previous benchmark (Wang et al., 2021) even though it can be represented by operator-tree. The representation of query graph follows Figure 1. We also provide full details of the operator tree and tree-form query in Appendix C. However, this representation method is inherently directed, acyclic, and simple, therefore more recent research breaks these constraints by being bidirectional (Liu et al., 2022; Wang et al., 2022), or being cyclic or multi (Yin et al., 2023). To meet these new requirements, they propose to represent the formula by the query graph (Yin et al., 2023), which inherits the convention of constraint network in representing CSP instance. We utilize this design and further extend it to represent EFO$_k$ formula that contains multiple free variables. We provide the illustration and comparison of the operator tree and the query graph in Figure 1, where we show the strong expressiveness of the query graph. We also provide the formal definition of query graph as follows: **Definition 10 (Query graph).** Let $\gamma$ be a conjunctive formula in equation [2] its query graph is defined by $G(\gamma) = \{(h, r, t, \{T/F\})\}$, where an atomic formula $\rho = r(h, t)$ in $\gamma$ corresponds to $(h, r, t, T)$ and $\rho = \neg r(h, t)$ corresponds to $(h, r, t, F)$. Therefore, any conjunctive formulas can be represented by a query graph, in the rest of the paper, we use query graphs and conjunctive formulas interchangeably. 
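To make Definitions 7–10 concrete, here is a toy sketch (our own example, not the authors' pipeline) that answers a small conjunctive query graph containing one negated edge by brute-force grounding, i.e., the plain CSP view of Section 2.2:

```python
# Toy sketch (our own example, not the paper's code): brute-force answering of a
# conjunctive query graph with one negated edge over a tiny knowledge graph.
from itertools import product

KG = {("france", "locatedIn", "europe"), ("spain", "locatedIn", "europe"),
      ("france", "capital", "paris"),    ("spain", "capital", "madrid"),
      ("paris", "hosted", "olympics")}
ENTITIES = {e for h, _, t in KG for e in (h, t)}

# Query: find capitals y of European countries x that have NOT hosted the Olympics.
# Edge format mirrors Definition 10: (head, relation, tail, positive?).
query_graph = [("x", "locatedIn", "europe", True),
               ("x", "capital", "y", True),
               ("y", "hosted", "olympics", False)]
existential_vars, free_vars = ["x"], ["y"]

def satisfied(assignment):
    ground = lambda term: assignment.get(term, term)   # constants map to themselves
    return all(((ground(h), r, ground(t)) in KG) == positive
               for h, r, t, positive in query_graph)

answers = set()
for values in product(ENTITIES, repeat=2):             # enumerate (x, y) jointly ...
    assignment = dict(zip(existential_vars + free_vars, values))
    if satisfied(assignment):
        answers.add(assignment["y"])                    # ... but keep only the free variable
print(answers)   # {'madrid'}: paris is ruled out by the negated 'hosted' edge
```

Enumerating the existential variable x and the free variable y jointly is the CSP formulation of Definition 9; a practical solver would instead enforce consistency and backtrack, and would only ever return groundings of the free variables.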
### 3 THE COMBINATORIAL SPACE OF EFO$_k$ QUERIES Although previous research has given a systematic investigation in the combinatorial space of operator trees (Wang et al., 2021), the combinatorial space of the query graph is much more challenging due to the extremely large search space and the lack of explicit recursive formulation. To tackle this issue on a strong theoretical background, we put forward additional assumptions to exclude trivial query graphs. Such assumptions or restrictions also exist in the previous dataset and benchmark (Ren & Leskovec, 2020; Wang et al., 2021). Specifically, we propose to split the task of generating data into two levels, the abstract level, and the grounded level. At the abstract level, we create abstract query graph, at the grounded level, we provide the abstract query graph with the relation and constant and instantiate it as a query graph. In this section, we elaborate on how we investigate the scope of the nontrivial EFO$_k$ query of interest step by step. #### 3.1 NONTRIVIAL ABSTRACT QUERY GRAPH OF EFO$_k$ The abstract query graph is the ungrounded query graph without information of certain knowledge graphs, and we give an example in Figure 3. **Definition 11 (Abstract query graph).** The abstract query graph $G = (V, E, f, g)$ is a directed graph with three node types, \{\textbf{Constant Entity}, \textbf{Existential Variable}, \textbf{Free variable}\}, and two edge types, \{\textbf{positive}, \textbf{negative}\}. The $V$ is the set of nodes, $E$ is the set of directed edges, $f$ is the function maps node to node type, $g$ is the function maps edge to edge type. **Definition 12 (Grounding).** For an abstract query graph $G$, a grounding is a function $I$ that maps it into a query graph $I(G)$. We propose two assumptions of the abstract query graph as follows: **Assumption 13 (No redundancy).** For a abstract query graph $G$, there is not a subgraph $G_s \subseteq G$ such that for every grounding $I$, $A[I(G)] = A[I(G_s)]$. **Assumption 14 (No decomposition).** For an abstract query graph $G$, there are no such two subgraphs $G_1$, $G_2$, satisfying that $G_1, G_2 \subseteq G$, such that for every instantiation $I$, $A[I(G)] = A[I(G_1)] \times A[I(G_2)]$, where the $\times$ represents the Cartesian product. Figure 3: Illustration of all functionalities of our framework. Real-world KG is preprocessed and fed into our pipeline, which contains the whole process of data generation and supports end-to-end machine learning as well as evaluation. Additionally, the figure of the real-world KG is taken from https://medium.com/@fakrami/re-evaluation-of-knowledge-graph-completion-methods-7dfe2e981a77. The assumption inherits the idea of the structural decomposition technique in CSP (Gottlob et al., 2000), which allows for solving a CSP instance by solving several sub-problems and combining the answer together based on topology property. Additionally, meeting these two assumptions in the grounded query graph is extremely computationally costly thus we avoid it in practice. We provide some easy examples to be excluded for violating the assumptions above in Figure 2. 3.2 Nontrivial Query Graph of EFO_k Similarly, we propose two assumptions on the query graph. Assumption 15 (Meaningful negation). For any negative edge e in query graph G, we require removing it results in different CSP answers: \( A[G - e] \neq A[G] \). 
Assumption 15 treats negation separately because of the fact that for any \( K\mathcal{G} \), any relation \( r \in \mathcal{R} \), there is \( |\{(h,t)| h,t \in \mathcal{E}, (h,r,t) \in K\mathcal{G}\}| \ll |\mathcal{E}|^2 \), which means that the constraint induced by the negation of an atomic formula is much less “strict” than the one induced by a positive atomic formula. Assumption 16 (Appropriate answer size). There is a constant \( M \ll |\mathcal{E}| \) to bound the candidate set for each free variable \( f_i \) in G, such that for any \( i, |\{a_i \in \mathcal{E}|(a_1, \cdots, a_i, \cdots, a_k) \in A[G]\}| \leq M \). We note the Assumption 16 extends the “bounded negation” assumption in the previous dataset (Ren & Leskovec, 2020; Wang et al., 2021). We give an example “Find a city that is located in Europe and is the capital of a country that has not held the Olympics” in Figure 2 where the candidate set of \( x_1 \) is in fact bounded by its relation with the \( y_1 \) variable but not from the bottom “Olympics” constant, hence, this query is excluded in their dataset due to the directionality of operator tree. Overall, the scope of the formula investigated in this paper surpasses the previous EFO-1-QA benchmark because of: (1) We include the EFO_k formula with multiple free variables for the first time; (2) We include the whole family of EFO_1 query, many of them can not be represented by operator tree; (3) Our assumption is more systematic than previous ones as shown by the example in Figure 2. More details are offered in Appendix D.3. 4 FRAMEWORK We develop a versatile framework that supports five key functionalities fundamental to the whole CQA task: (1) Enumeration of nontrivial abstract query graphs as discussed in Section 3.2; (2) Sample... grounding for the abstract query graph; (3) Compute answer for any query graph efficiently; (4) Support implementation of existing CQA models; (5) Conduct evaluation including newly introduced EFO\(_k\) queries with multiple free variables. We explain each functionality in the following. An illustration of the first three functionalities is given in Figure 3, where we show how each functionality cooperates to help CQA tasks. We note that preprocessing allows us to extend our framework to more avant-garde settings, like inductive settings or graphs with numerics, more discussions in Appendix G. ### 4.1 Enumerate Abstract Query Graph As discussed in Section 3, we are able to abide by those assumptions as well as enumerate all possible query graphs within a given search space where certain parameters, including the number of constants, free variables, existential variables, and the number of edges are all given, shown in Figure 3. Additionally, we apply the graph isomorphism algorithm to avoid duplicated query graphs being generated. More details for our generation method are provided in Appendix D.1. ### 4.2 Ground Abstract Query Graph To ground an abstract query graph \(G\) and comply with the assumption [15], we split the abstract query graph into two parts, the positive part and the negative part, \(\hat{G} = G_p \cup G_n\). Then the grounding process is also split into two steps: 1. Sample grounding for the positive subgraph \(G_p\) and compute its answer, 2. Ground the \(G_n\) to decrease the answer got in the first step. Details in Appendix D.2. 
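The second step can be illustrated with a hedged toy sketch (our own code, re-declaring the tiny KG used earlier; this is not the released pipeline): a candidate grounding of a negative edge is kept only when it strictly shrinks the positive-part answer set, which is exactly Assumption 15.

```python
# Hedged sketch of step 2 (our own toy code, not the released pipeline): a grounding
# for a negative edge is kept only if it strictly shrinks the answer set (Assumption 15).
from itertools import product

KG = {("france", "locatedIn", "europe"), ("spain", "locatedIn", "europe"),
      ("france", "capital", "paris"),    ("spain", "capital", "madrid"),
      ("paris", "hosted", "olympics")}
ENTITIES = {e for h, _, t in KG for e in (h, t)}
RELATIONS = {r for _, r, _ in KG}

def answers(query_graph, variables, free_var):
    """Brute-force answer set of a grounded query graph (tiny KGs only)."""
    result = set()
    for values in product(ENTITIES, repeat=len(variables)):
        a = dict(zip(variables, values))
        ground = lambda term: a.get(term, term)
        if all(((ground(h), r, ground(t)) in KG) == pos for h, r, t, pos in query_graph):
            result.add(a[free_var])
    return result

positive_part = [("x", "locatedIn", "europe", True), ("x", "capital", "y", True)]
base = answers(positive_part, ["x", "y"], "y")          # answers before any negation

# Try every relation as the grounding of a negative edge y --r--> olympics and keep
# only those that actually remove answers, i.e. satisfy Assumption 15.
meaningful = [r for r in RELATIONS
              if answers(positive_part + [("y", r, "olympics", False)], ["x", "y"], "y") < base]
print(base)        # {'paris', 'madrid'}
print(meaningful)  # ['hosted'] -- negating 'capital' or 'locatedIn' changes nothing
```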
Finally, to fulfill the assumption [16] we follow the previous practice of manually filtering out queries that have more than 100 answers [Ren & Leskovec, 2020; Wang et al., 2021], as we have introduced the EFO\(_k\) queries, we slightly soften this constraint to be no more than \(100 \times k\) answers. ### 4.3 Answer for Existential Formula As illustrated in Section 2.2, the answer to an existential formula can be solved by a CSP solver, however, we also show in Definition 9 that solve it as CSP leads to huge computation costs. Thus, we develop our own algorithm following the standard solving technique of CSP, which ensures consistency conditions in the first step, and do the backtracking to get the final answers in the second step. Finally, we select part of our sampled queries and double-check it with the CSP solver [https://github.com/python-constraint/python-constraint]. ### 4.4 Learning-based Methods As the query graph is an extension to the operator tree regarding the express ability to existential formulas, we are able to reproduce CQA models that are initially implemented by the operator tree in our new framework. Specifically, since the operator tree is directed and acyclic, we compute its topology ordering that allows for step-by-step computation in the query graph. This algorithm is illustrated in detail in the Appendix F. Therefore, our pipeline is backward compatible. Conversely, for the newly proposed models that are based on query graphs, the original operator tree framework is not able to implement them, while our framework is powerful enough. We have therefore clearly shown that the query graph representation is more powerful than the previous operator tree and is able to support arbitrary existential formulas as explained in Section 2.3. ### 4.5 Evaluation Protocol As we have mentioned in Section 2.1, there is an observed knowledge graph \(KG_o\) and a full knowledge graph \(KG\). Thus, there is a set of observed answers \(A_o\) and a set of full answers \(A\) correspondingly. Since the goal of CQA is to tackle the challenge of OWA, it has been a common practice to evaluate CQA models by the “hard” answers \(A_h = A - A_o\) (Ren et al., 2020; 2023). However, to the best of our knowledge, there has not been a systematic evaluation protocol for EFO\(_k\) queries, thus we leverage this idea and propose three types of different metrics to fill the research gap in the area of evaluation of queries with multiple free variables, and thus have combinatorial answers. **Marginal.** For any free variable \(f_i\), its full answer is \(A^{f_i} = \{a_i \in E | (a_1, \cdots, a_i, \cdots, a_k) \in A\}\), the observed answer of it \(A^{f_i}_o\) is defined similarly. This is termed “solution projection” in CSP. Table 1: HIT@10 scores(%) for inferring queries with one free variable on FB15k-237. We denote \( e \), \( c \) as the number of existential variables, constant entities correspondingly. SDAG represents Simple Directed Acyclic Graph, Multi for multigraph, and Cyclic for cyclic graph. AVG.(c) and AVG.(e) is the average score of queries with the number of constant entities / existential variables fixed. | Model | \( c \) | 0 | 1 | 2 | AVG.(c) | AVG. 
| |-------|--------|-----|-----|-----|---------|------| | | | SDAG | SDAG | Multi | SDAG | Multi | Cyclic | | BetaE | 1 | 31.4 | 33.0 | 22.3 | 21.1 | 17.7 | 30.7 | 22.1 | | | 2 | 57.2 | 36.2 | 35.5 | 29.3 | 29.4 | 45.3 | 32.5 | | | 3 | 80.0 | 53.1 | 53.6 | 38.2 | 37.8 | 58.2 | 42.1 | | | AVG.(e)| 59.3 | 43.8 | 40.6 | 33.8 | 32.7 | 49.3 | | | LogicE| 1 | 34.4 | 34.9 | 23.0 | 21.4 | 17.4 | 30.3 | 22.4 | | | 2 | 60.0 | 38.4 | 36.8 | 29.8 | 29.3 | 45.3 | 33.0 | | | 3 | 83.0 | 55.5 | 55.5 | 38.5 | 37.8 | 57.8 | 42.4 | | | AVG.(e)| 62.2 | 46.0 | 42.0 | 34.2 | 32.6 | 49.1 | | | ConE | 1 | 34.9 | 35.4 | 23.6 | 21.8 | 18.4 | 34.2 | 23.5 | | | 2 | 61.0 | 39.1 | 38.4 | 32.0 | 31.5 | 50.2 | 35.2 | | | 3 | 84.8 | 56.7 | 57.1 | 41.1 | 40.0 | 63.4 | 44.9 | | | AVG.(e)| 63.4 | 47.0 | 43.5 | 36.5 | 34.7 | 54.1 | | | CQD | 1 | 39.0 | 34.2 | 17.6 | 17.4 | 12.7 | 28.7 | 18.7 | | | 2 | 50.7 | 33.8 | 33.6 | 28.4 | 28.4 | 45.7 | 31.4 | | | 3 | 58.4 | 49.6 | 52.4 | 39.3 | 39.1 | 60.4 | 42.6 | | | AVG.(e)| 50.7 | 41.4 | 38.4 | 33.8 | 32.4 | 50.2 | | | LMPNN | 1 | 38.6 | 37.8 | 21.8 | 22.9 | 17.8 | 31.7 | 23.2 | | | 2 | 62.2 | 40.2 | 35.0 | 30.8 | 28.1 | 44.4 | 32.5 | | | 3 | 86.6 | 56.9 | 51.9 | 38.3 | 35.3 | 55.8 | 40.8 | | | AVG.(e)| 65.4 | 47.8 | 39.6 | 34.5 | 30.8 | 48.0 | | | FIT | 1 | 38.7 | 42.7 | 32.5 | 26.1 | 22.5 | 41.5 | 28.8 | | | 2 | 65.5 | 47.7 | 48.2 | 39.7 | 40.1 | 56.5 | 43.4 | | | 3 | 84.2 | 63.9 | 63.5 | 50.5 | 50.4 | 63.5 | 53.6 | | | AVG.(e)| 65.8 | 54.7 | 51.5 | 44.9 | 43.7 | 57.5 | | theory (Greco & Scarcello, 2013) to evaluate whether the locally retrieved answer can be extended to an answer for the whole problem. Then, we rank the hard answer \( A_{f_i}^h = A_{f_i} - A_{f_i}^o \) against those non-answers \( E - A_{f_i} - A_{f_i}^o \) and use the ranking to compute standard metrics like MRR, HIT@K for every free variable. Finally, the metric on the whole query graph is taken as the average of the metric on all free variables. We note that this metric is an extension of the previous design proposed by Liu et al. (2021). However, this metric has the inherent drawback that it fails to evaluate the combinatorial answer by the \( k \)-length tuple and thus fails to find the correspondence among free variables. **Multiply.** Because of the limitation of the marginal metric discussed above, we propose to evaluate the combinatorial answer by each \( k \)-length tuple \((a_1, \cdots , a_k)\) in the hard answer set \( A_h \). Specifically, we rank each \( a_i \) in the corresponding node \( f_i \) the same as the marginal metric. Then, we propose the HIT@\( n^k \) metric, it is 1 if all \( a_i \) is ranked in the top \( n \) in the corresponding node \( f_i \), and 0 otherwise. **Joint.** Finally, we note these metrics above are not the standard way of evaluation, which is based on a joint ranking for all the \( E^k \) combinations of the entire search space. We propose to estimate the joint ranking in a closed form given certain assumptions, see Appendix E for the proof and details. ## 5 The EFO\(_k\)-CQA Dataset and Benchmark Results ### 5.1 The EFO\(_k\)-CQA Dataset With the help of our framework developed in Section 4, we are able to develop a new dataset called EFO\(_k\)-CQA, whose combinatorial space is parameterized by the number of constants, existential and free variables, and the number of edges. EFO\(_k\)-CQA dataset includes 741 different abstract query graphs in total. 
Then, we conduct experiments on our new EFO\(_k\)-CQA dataset with six representative CQA models including BetaE (Ren & Leskovec, 2020), LogicE (Lius et al., 2021), and ConE (Zhang et al., 2021), which are built on the operator tree, CQD (Arakelyan et al., 2020), LMPNN (Wang et al., 2023), and --- \(^4\)We note \( A_{f_i}^h \) can be empty for some free variable or even for all free variables, making these marginal metrics not reliable, details in Appendix E. Figure 4: Relative performance of the six representative CQA models in referring queries with one free variable, where the ranking of query types is determined by the average HIT@10 score. A Gaussian filter with sigma=1 is added to smooth the curve. We also use the red box to highlight the easiest queries and the black box to highlight the most challenging ones. FIT (Yin et al., 2023) which are built on query graph. The experiments are conducted in two parts, (1). the queries with one free variable, specifically, including those that can not be represented by operator tree; (2). the queries that contain multiple free variables. The parameters and the generation process, as well as its statistics, are detailed in Appendix D.4 where we also provide a dataset constructed in inductive settings. However, we mainly focus on transductive settings in the main paper since there are very few inductive models to benchmark. We have made some adaptations to the implementation of CQA models, allowing them to infer EFO_k queries, full detail is offered in Appendix F. The experiment is conducted on a standard knowledge graph FB15k-237 (Toutanova & Chen, 2015) and additional experiments on other standard knowledge graphs FB15k and NELL are presented in Appendix H. 5.2 Benchmark results for k = 1 Because of the great number of abstract query graphs, we follow Wang et al. (2021) to group query graphs by three factors: (1). the number of constant entities; (2). the number of existential variables, and (3). the topology of the query graph. The result is shown in Table 1 and Figure 4. Structure analysis. Firstly, we find a clear monotonic trend that adding constant entities makes a query easier while adding existing variables makes a query harder, which the previous research (Wang et al., 2021) fails to uncover. Besides, we are the first to consider the topology of query graphs: when the number of constants and existential variables is fixed, we have found the originally investigated queries that correspond to Simple Directed Acyclic Graphs (SDAG) are generally easier than the multigraphs ones but harder than the cyclic graph ones. This is an intriguing result that greatly deviates from traditional CSP theory in close world which finds that the cyclic graph is NP-complete, while the acyclic graph is tractable (Carbonnel & Cooper, 2016). Our conjecture for this intriguing result in the open world is that the cyclic graph contains one more constraint than SDAG that serves as a source of information for CQA models, while the multigraph tightens an existing constraint and thus makes the query harder. Model analysis. For models that are built on operator tree, including BetaE, LogicE, and ConE, their relative performance is steady among all breakdowns and is consistent with their reported score in the original dataset (Ren & Leskovec, 2020), showing similar generalizability. 
However, for models that are built on query graphs, including CQD, LMPNN, and FIT, we found that LMPNN performs generally better than CQD in SDAG, but falls behind CQD in multigraphs and cyclic graphs. We assume the reason is that LMPNN requires training while CQD does not, however, the original dataset are biased which only considers SDAG, leading to the result that LMPNN doesn’t generalize well to the unseen tasks with different topology property. We expect future CQA models may use our framework to address this issue of biased data and generalize better to more complex queries. Moreover, by the detailed observation in Figure 4, we plot two boxes, the red one and the black one. In the red box, we find that even the worst model and the best model have pretty similar performance. --- 5 To facilitate our discussion, we make a further constraint in our EFO_k-CQA dataset that the total edge is at most as many as the number of nodes, thus, a graph can not be both a multigraph and a cyclic graph. Table 2: HIT@10 scores(%) of three different types for answering queries with two free variables on FB15k-237. The constant number is fixed to be two. \( e \) is the number of existential variables. The SDAG, Multi, and Cyclic are the same as Table 1. | Model | HIT@10 Type | \( e = 0 \) | \( e = 1 \) | \( e = 2 \) | AVG. | |-------|-------------|------------|------------|------------|------| | | | SDAG | Multi | SDAG | Multi | Cyclic | SDAG | Multi | Cyclic | | | BetaE | Marginal | 54.5 | 50.2 | 49.5 | 46.0 | 58.8 | 37.2 | 35.5 | 58.3 | 43.8 | | | Multiply | 27.3 | 22.4 | 22.3 | 16.9 | 26.2 | 16.9 | 13.9 | 25.7 | 18.3 | | | Joint | 6.3 | 5.4 | 5.2 | 4.2 | 10.8 | 2.2 | 2.3 | 9.5 | 4.5 | | LogicE| Marginal | 58.2 | 50.9 | 52.2 | 47.4 | 60.4 | 37.7 | 35.8 | 59.2 | 44.6 | | | Multiply | 32.1 | 23.1 | 24.9 | 18.1 | 28.3 | 18.1 | 14.8 | 26.6 | 19.5 | | | Joint | 6.8 | 6.0 | 6.1 | 4.5 | 12.3 | 2.5 | 2.7 | 10.3 | 5.1 | | ConE | Marginal | 60.3 | 53.8 | 54.2 | 50.3 | 66.2 | 40.1 | 38.5 | 63.7 | 47.7 | | | Multiply | 33.7 | 25.2 | 26.1 | 19.8 | 32.1 | 19.5 | 16.3 | 30.3 | 21.5 | | | Joint | 6.7 | 6.4 | 6.2 | 4.8 | 12.6 | 2.6 | 2.7 | 10.9 | 5.3 | | CQD | Marginal | 50.4 | 46.5 | 49.1 | 45.6 | 59.7 | 33.5 | 33.1 | 61.5 | 42.8 | | | Multiply | 28.9 | 23.4 | 25.4 | 19.5 | 31.3 | 17.8 | 16.0 | 30.5 | 21.0 | | | Joint | 8.0 | 8.0 | 7.4 | 6.0 | 13.9 | 3.6 | 3.9 | 12.0 | 6.4 | | LMPNN | Marginal | 58.4 | 51.1 | 54.9 | 49.2 | 64.7 | 39.6 | 36.1 | 58.7 | 45.4 | | | Multiply | 35.0 | 26.7 | 29.2 | 21.7 | 33.4 | 21.4 | 17.0 | 28.4 | 22.2 | | | Joint | 7.6 | 7.5 | 7.1 | 5.3 | 12.9 | 2.8 | 2.9 | 9.5 | 5.2 | | FIT | Marginal | 64.3 | 61.0 | 63.1 | 60.7 | 58.5 | 49.0 | 49.1 | 60.2 | 54.3 | | | Multiply | 39.7 | 32.2 | 35.9 | 27.8 | 27.4 | 29.5 | 26.8 | 32.4 | 29.2 | | | Joint | 7.4 | 9.0 | 7.8 | 6.5 | 10.1 | 3.7 | 4.6 | 10.6 | 6.4 | In these easiest queries despite that they may differ greatly in other queries. In the black box, we note that CQD (Arakelyan et al., 2020), though designed in a rather general form, is pretty unstable when comes to empirical evaluation, as it has a clear downward curve and deviates from other model’s performance enormously in most difficult query types. Therefore, though its performance is better than LMPNN and comparable to BetaE on average as reported in Table 1, its unsteady performance suggests its inherent weakness, especially when the users are risk-sensitive and desire a trustworthy machine-learning model that does not crash in extreme cases (Varshney, 2019). 
We note FIT is designed to infer all EFO\(_1\) queries and is indeed able to outperform other models in almost all breakdowns, however, its performance comes with the price of computational cost, and face challenges in cyclic graph where it degenerates to enumeration: we further explain in Appendix F. ### 5.3 Benchmark results for \( k = 2 \) As we have explained in Section 4.5, we propose three kinds of metrics, marginal ones, multiply ones, and joint ones, from easy to hard, to evaluate the performance of a model in the scenario of multiple variables. The evaluation result is shown in Table 2. As the effect of the number of constant variables is quite clear, we remove it and add the metrics based on HIT@10 as the new factor. For the impact regarding the number of existential variables and the topology property of the query graph, we find the result is similar to Table 1, which may be explained by the fact that those models are all initially designed to infer queries with one free variable. For the three metrics we have proposed, we have identified a clear difficulty difference among them though they generally show similar trends. The scores of joint HIT@10 are pretty low, indicating the great hardness of answering queries with multiple variables. Moreover, we have found that FIT falls behind other models in some breakdowns which are mostly cyclic graphs, corroborating our discussion in Section 5.2. We offer more experiment results and further discussion in Appendix H. ### 6 Conclusion In this paper, we make a thorough investigation of the family of EFO\(_k\) formulas based on strong theoretical background. We then present a new powerful framework that supports several functionalities essential to CQA task, with this help, we build the EFO\(_k\)-CQA dataset that greatly extends the previous dataset and benchmark. Our evaluation result brings new empirical findings and reflects the biased selection in the previous dataset impairs the performance of CQA models, emphasizing the contribution of our work. REFERENCES Dimitrios Alivanistos, Max Berrendorf, Michael Cochez, and Mikhail Galkin. Query Embedding on Hyper-relational Knowledge Graphs, September 2022. URL http://arxiv.org/abs/2106.08166, arXiv:2106.08166 [cs]. Erik Arakelyan, Daniel Daza, Pasquale Minervini, and Michael Cochez. Complex Query Answering with Neural Link Predictors. In International Conference on Learning Representations, 2020. Jiaxin Bai, Zihao Wang, Hongming Zhang, and Yangqiu Song. Query2Particles: Knowledge Graph Reasoning with Particle Embeddings. In Findings of the Association for Computational Linguistics: NAACL 2022, pp. 2703–2714, 2022. Yushi Bai, Xin Lv, Juanzi Li, and Lei Hou. Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization. In Proceedings of the 40th International Conference on Machine Learning, pp. 1472–1491. PMLR, July 2023. URL https://proceedings.mlr.press/v202/bai23b.html ISSN: 2640-3498. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013. URL https://papers.nips.cc/paper_files/paper/2013/hash/1cecc7a77928ca8133fa24680a88d2f9-Abstract.html Clément Carbonnel and Martin C Cooper. Tractability in constraint satisfaction problems: a survey. Constraints, 21(2):115–144, 2016. Publisher: Springer. 
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam Hruschka, and Tom Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the AAAI conference on artificial intelligence, volume 24, pp. 1306–1313, 2010. Issue: 1. Lisa Ehringer and Wolfram Wöß. Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48(1-4):2, 2016. Michael Galkin, Zhaocheng Zhu, Hongyu Ren, and Jian Tang. Inductive logical query answering in knowledge graphs. Advances in Neural Information Processing Systems, 35:15230–15243, 2022. Georg Gottlob, Nicola Leone, and Francesco Scarcello. Hypertree decompositions and tractable queries. In Proceedings of the eighteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pp. 21–32, 1999. Georg Gottlob, Nicola Leone, and Francesco Scarcello. A comparison of structural CSP decomposition methods. Artificial Intelligence, 124(2):243–282, December 2000. ISSN 0004-3702. doi: 10.1016/S0004-3702(00)00078-3. URL https://www.sciencedirect.com/science/article/pii/S0004370200000783 Gianluigi Greco and Francesco Scarcello. On The Power of Tree Projections: Structural Tractability of Enumerating CSP Solutions. Constraints, 18(1):38–74, January 2013. ISSN 1383-7133, 1572-9354. doi: 10.1007/s10601-012-9129-8. URL http://arxiv.org/abs/1005.1567 arXiv:1005.1567 [cs]. Will Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. Embedding logical queries on knowledge graphs. Advances in neural information processing systems, 31, 2018. Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, Xiaoli Li, and Jeff Pan. Type-aware Embeddings for Multi-Hop Reasoning over Knowledge Graphs. May 2022. Qian Huang, Hongyu Ren, and Jure Leskovec. Few-shot relational reasoning via connection subgraph pretraining. Advances in Neural Information Processing Systems, 35:6397–6409, 2022. Zhen Jia, Soumajit Pramanik, Rishiraj Saha Roy, and Gerhard Weikum. Complex Temporal Question Answering on Knowledge Graphs. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, CIKM ’21, pp. 792–802, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 978-1-4503-8446-9. doi: 10.1145/3459637.3482416. URL https://dl.acm.org/doi/10.1145/3459637.3482416
9OevMUdods
Data leakage may occur. Note that the questions and answers are annotated based on the text of previous public datasets, and the pretraining data of the evaluated LLMs is likely to contain that text. This may cause data leakage, and the LLMs might have memorized the answers or learned shortcuts for answering the questions, which makes the evaluation results on the benchmark less convincing.
Towards Understanding Factual Knowledge of Large Language Models Xuming Hu\textsuperscript{1,2*}, Junzhe Chen\textsuperscript{1*}, Xiaochuan Li\textsuperscript{1*}, Yufei Guo\textsuperscript{1}, Lijie Wen\textsuperscript{1†}, Philip S. Yu\textsuperscript{3}, Zhijiang Guo\textsuperscript{4†} \textsuperscript{1} Tsinghua University \textsuperscript{2} The Hong Kong University of Science and Technology (Guangzhou) \textsuperscript{3} University of Illinois at Chicago \textsuperscript{4} University of Cambridge xuminghu@hkust-gz.edu.cn, wenlj@tsinghua.edu.cn, zg283@cam.ac.uk Abstract Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks. The factual knowledge acquired during pretraining and instruction tuning can be useful in various downstream tasks, such as question answering, and language generation. Unlike conventional Knowledge Bases (KBs) that explicitly store factual knowledge, LLMs implicitly store facts in their parameters. Content generated by the LLMs can often exhibit inaccuracies or deviations from the truth, due to facts that can be incorrectly induced or become obsolete over time. To this end, we aim to explore the extent and scope of factual knowledge within LLMs by designing the benchmark Pinocchio. Pinocchio contains 20K diverse factual questions that span different sources, timelines, domains, regions, and languages. Furthermore, we investigate whether LLMs can compose multiple facts, update factual knowledge temporally, reason over multiple pieces of facts, identify subtle factual differences, and resist adversarial examples. Extensive experiments on different sizes and types of LLMs show that existing LLMs still lack factual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing trustworthy artificial intelligence. The dataset Pinocchio and our codes are publicly available at: https://github.com/THU-BPM/Pinocchio 1 Introduction Large language models (LLMs) have revolutionized natural language processing (NLP) in recent years since they have significantly improved performance on various downstream tasks (Brown et al., 2020; Chowdhery et al., 2022; Ouyang et al., 2022; Touvron et al., 2023a,b; OpenAI, 2022, 2023). Prior efforts have shown that language models can store factual knowledge and act as knowledge bases (Petroni et al., 2019; Jiang et al., 2020c). Factual knowledge in language models acquired during pretraining can benefit knowledge-intensive downstream tasks such as question answering and fact checking (Roberts et al., 2020; Yu et al., 2023a; Pan et al., 2023). Despite advancements in LLMs, they still struggle with generating content that exhibits inaccuracies or deviations from the facts and making reasoning errors (Lin et al., 2022; Bubeck et al., 2023). These factual errors can be difficult to identify since LLMs implicitly memorize facts through their parameters rather than explicitly store factual knowledge as traditional Knowledge Bases. Accessing and interpreting the computations and memories of these models can be challenging (Ribeiro et al., 2016; Belinkov & Glass, 2019), especially when APIs are the only means of interaction and many interpretation methods rely on weights and representations (Cao et al., 2021b). 
The presence of errors in stored factual knowledge or the incorrect induction and obsolescence of certain facts over time may be contributing factors to this limitation, which in turn affects the performance of LLMs (Elazar et al., 2021; Cao et al., 2021a). This limitation restricts the application of LLMs in some high-stakes areas, such as healthcare, finance, and law (Dong et al., 2022). Hence, exploring the degree to which LLMs hold factual information and their ability to reason with such knowledge is vital. * Equal Contribution. † Corresponding authors. Figure 1: Pinocchio is a comprehensive dataset that tackles 7 distinct tasks related to factual knowledge and reasoning. It consists of 20,713 multiple-choice questions that have been sourced from various reliable and diverse channels. To this end, we propose the Pinocchio, a testbed aimed at understanding factuality and reasoning for LLMs. It contains 20K diverse factual questions that span different sources, timelines, domains, regions, and languages. Furthermore, we investigate whether LLMs are able to recognize the combination of multiple facts, reason over structured and unstructured evidence, realize facts change over time, identify subtle factual differences, and resist adversarial examples based on the dataset. We control for problem difficulty in each distinct reasoning task to enable fine-grained analysis. With the Pinocchio benchmark, we explore whether various LLMs (Scao et al., 2022b; Zhang et al., 2022; Ouyang et al., 2022; Chung et al., 2022; Touvron et al., 2023a; Chiang et al., 2023) could store factual knowledge and perform reasoning based on it. We envision Pinocchio as a suite of benchmarks, subsets of which could be separately utilized to assess certain model abilities of interest and analyze important strengths and limitations of LLMs. For instance, in temporal tasks, we find that LLMs lack factual knowledge for up-to-date questions; in complex factual tasks that require multi-hop reasoning, LLMs still have limitations, even when various prompting strategies are employed. We hope Pinocchio can serve as the initial step towards understanding the abilities of LLMs from multiple dimensions and facilitate the development of LLMs. 2 DATASET CONSTRUCTION 2.1 TASKS Aiming to systematically evaluate the factual knowledge and related reasoning abilities of LLMs, we raise seven research questions, then carefully select factual statements from different sources summarized in Table 1. - **Task 1: Multifaceted** Previous research (Petroni et al., 2019) has shown that small language models like BERT have the ability to retain relational knowledge from training data and answer “fill-in-the-blank” cloze statements. This raises the question of whether LLMs can also store and reason over multiple pieces of facts obtained during pretraining. It is not just important for LLMs to memorize individual facts accurately, but to also recognize and generate new combinations of facts from different sources. To investigate this issue, we have selected claims from the FEVER dataset (Thorne et al., 2018), which were written by human annotators based on information from Wikipedia articles. These claims are either supported or refuted by multiple facts from (the same or several) Wikipedia articles, or there is insufficient information available to verify them. 
To assess the performance of language models in handling various combinations of facts, we have sampled statements that require different numbers of evidence, ranging from one to many, enabling fine-grained analysis. - **Task 2: Structural** In addition to unstructured text, factual knowledge is also commonly stored in a structured format, such as tables, lists, or databases (Bhagavatula et al., 2013). However, Table 1: Pinocchio Dataset Sources, Descriptions, and Data Distribution. | Domain | Description | Sources | Fact. | Non-Fact. | NEI | ALL | |-------------------------|--------------------------------------------------|---------------|-------|-----------|-----|-------| | Multifaceted | Contain multiple facts | FEVER | 1,111 | 1,111 | 1,110 | 3,332 | | Structural | Contain structured and unstructured facts | FEVEROUS | 1,741 | 1,953 | 250 | 3,944 | | Adversarial | Contain facts edited by adversarial methods | Symmetric, FM2| 815 | 921 | | 1,736 | | Temporal | Contain facts that change over time | VitaminC | 1,898 | 1,043 | 355 | 3,296 | | Real-World | Contain factual statements spread online | PolitiFact | 986 | 1,987 | 609 | 3,582 | | Domain-Specific | Contain facts from health and science domains | PubHealth, SciFact | 1,135 | 715 | 737 | 2,608 | | Multi-Lingual | Contain facts in different languages | XFact, CHEF | 820 | 848 | 547 | 2,215 | current LLMs are primarily trained on unstructured text using next word prediction loss (Brown et al., 2020; Touvron et al., 2023a). In order to process structured data, it is often converted into text strings using various methods, such as linearizing tables. This raises the question of whether LLMs are capable of effectively memorizing and reasoning over facts from structured sources, similar to their performance with unstructured text. To investigate this question, we sample factual statements from the FEVEROUS dataset (Aly et al., 2021), which is constructed in a similar manner to FEVER but includes evidence in the form of tables, sentences, or both. - **Task 3: Adversarial** Language models are known to be vulnerable to adversarial examples that are strategically modified to deceive even advanced models with hardly noticeable changes (Shen et al., 2023). Given this knowledge, it is important to examine whether LLMs can withstand adversarial examples in the context of factuality. To investigate this, we utilize two datasets, namely Symmetric (Schuster et al., 2019) and FM2 (Eisenschlos et al., 2021). These datasets consist of adversarial examples that have been crafted using various strategies, including temporal inference and diverting to unrelated facts. - **Task 4: Temporal** Facts are not static but rather possess a dynamic nature. With the vast amount of new information constantly emerging, facts often undergo changes, additions, or alterations. It raises the question of whether LLMs are able to adapt to these factual changes over time. In particular, we wonder if LLMs are capable of discerning factual knowledge from different time periods, since the pretraining corpus may not be processed and organized chronologically. To explore this, we utilize the VitaminC (Schuster et al., 2021) dataset, which consists of claims based on modifications made to factual content in Wikipedia articles. Claims can be either refuted by outdated facts or supported by updated facts. 
- **Task 5: Real-World** In contrast to other tasks that assume Wikipedia has all the essential factual information, verifying viral claims on the internet often requires not only factual knowledge from various sources but also common sense and worldly knowledge. An important query we have is whether LLMs can effectively integrate diverse types and sources of knowledge acquired during training. To address this, we select claims from the FactCheck (Misra, 2022) dataset, which consists of claims spread over the Internet and subsequently verified by journalists. - **Task 6: Domain-Specific** In addition to the tasks mentioned earlier, which primarily focus on factual knowledge in general domains, we are also interested in exploring how LLMs possess the capability to access domain-specific factual knowledge. The domain-specific setting presents unique challenges. Take the science domain as an example, LLMs need to acquire background knowledge, handle quantitative reasoning, and comprehend specialized statistical language. To investigate this further, we sample claims from PubHealth (Kotonya & Toni, 2020) in the public health domain and SciFact (Wadden et al., 2022) in the science domain. - **Task 7: Multi-Lingual** Existing LLMs are mainly trained on English corpus because of their abundance and quality (Chowdhery et al., 2022; Touvron et al., 2023a). However, the scarcity of training data in other languages raises the question of whether LLMs can transfer the factual knowledge acquired in English to other languages. To investigate this, we collected claims from various languages including French, Chinese, and more, using the XFACT dataset (Gupta & Srikumar, 2021) and the CHEF dataset (Hu et al., 2022) in a total of 27 different languages. ### 2.2 Annotation and Quality Control Multiple-choice questions offer a practical approach to assess the complex capabilities of LLMs, of which GPT-4 is a prime example (OpenAI, 2023). Key benchmarks such as MMLU (Hendrycks et al., 2021b), HellaSwag (Zellers et al., 2019), ARC (Clark et al., 2018a), and TruthfulQA (Lin et al., 2022), all of which utilize multi-choice formats, serve distinct purposes in evaluating various aspects of GPT-4’s proficiency. Specifically, the MMLU gauges an LLM’s knowledge breadth and depth. HellaSwag tests commonsense reasoning, and ARC focuses on challenging questions. TruthfulQA measures how LLMs mimic human falsehoods. Furthermore, the evaluation of language generation brings its own set of challenges, as a universal metric for measurement is currently lacking (Sai et al., 2023), which multiple-choice questions help to mitigate by offering straightforward classification accuracy for assessment (Hendrycks et al., 2021b). Also, prior studies (Kadavath et al., 2022) underscore that LLMs demonstrate reliable calibration on multiple-choice scenarios. Therefore, we also used the multi-choice questions as a simple but good proxy to evaluate the abilities of LLMs. For data annotation, we hired 10 undergraduate students, all with good English proficiency. We asked the students to rewrite the original claims into questions without distorting factuality while providing factuality labels for the questions. By transforming declarative statements into questions, using a Question-Answering approach can more effectively elicit factual knowledge from LLMs (Kadavath et al., 2022; Lin et al., 2022), and we also illustrate through experiments in Sec. 4.2. Note that claims in the original datasets are usually labeled based on given evidence, e.g. 
evidence supports or refutes the claim, but in Pinocchio, we only need to judge the factuality of the question. So we use unified labels: Yes, No, Not Sure Enough. The three labels correspond respectively to Factual, Non-Factual, and Not Enough Information for factual questions. Considering that all fact-checking datasets use a three-label system (Guo et al., 2022), we did not modify the number of labels to maintain consistency in labeling. When dealing with factuality questions in low-resource languages, for Chinese, the 5 undergraduate students we hired are native Chinese speakers. For other low-resource languages, we first use Google Translate to translate them into English and generate factuality questions, then translate the English questions back to the corresponding languages. The label distribution is shown in Table I. We paid the annotators accordingly based on the quantity and quality of the annotations. We ensure the quality of the annotated factuality questions in two ways. The two authors of this paper served as meta-reviewers, sampling 10 questions from each of the three categories across the seven domains in Pinocchio. The meta-reviewers judged if the factuality labels were correct. For the 210 factuality questions, the average label accuracy was 92.4%. We divided the 10 students into two groups and had each group re-annotate a random 200 questions annotated by the other group, then calculated inter-annotator agreement (IAA). The final IAA was 85.6%. Based on meta-reviewer results and IAA, the factuality labels in Pinocchio are of good quality. 3 METHODOLOGY 3.1 MODELS To give a comprehensive view of the status of LLMs in a factual context, we evaluate 10 accessible LLMs, undergone different training stages including pretraining, instruction tuning, and reinforcement learning from human feedback (Ouyang et al., 2022), covering diverse organizations and varying in size. A detailed description can be found in Appendix A.2. 3.2 PROMPT STRATEGY As illustrated in Figure 2, we employ 4 types of prompts to elicit desired responses from LLMs, namely: Zero-shot, Zero-shot with CoT (Kojima et al., 2022), Few-shot, and Few-shot with CoT (Wei et al., 2022). Specifically, we begin by providing the model with task instruction, denoted as $Z$: “You Table 2: Results obtained using different forms of prompts on 10 accessible LLMs. | Methods | Zero-shot w/o CoT | Zero-shot w/ CoT | Few-shot w/o CoT | Few-shot w/ CoT | Overall Performance | |------------------|-------------------|------------------|------------------|------------------|---------------------| | | Accuracy F1 | Accuracy F1 | Accuracy F1 | Accuracy F1 | Accuracy F1 | | OPT-6.7B | | | | | | | BLOOM-7B | 29.7 | 26.2 | 14.8 | 18.1 | 27.9 | | LLaMA-7B | 31.8 | 29.6 | 22.3 | 24.9 | 36.8 | | Alpaca-7B | 40.2 | 23.7 | 33.7 | 24.4 | 37.9 | | Vicuna-7B | 33.2 | 33.6 | 34.2 | 32.9 | 35.5 | | Vicuna-13B | 42.6 | 35.6 | 44.0 | 36.9 | 47.0 | | ChatGLM-6B | 37.4 | 31.0 | 36.5 | 31.7 | 41.6 | | Flan-T5-11B | 24.6 | 21.5 | 29.9 | 29.3 | 25.9 | | Text-Davinci-002 | 45.2 | 36.2 | 45.7 | 37.3 | 46.6 | | Text-Davinci-003 | 42.8 | 41.4 | 43.1 | 42.1 | 48.8 | | GPT-3.5-Turbo | 46.9 | 44.3 | 46.8 | 44.4 | 47.2 | will be given a question. You should answer whether it is Yes, No, or Not Sure Enough and show your evidence”. This instruction informs the LLMs about the expected input and output. Subsequently, for any given input $Q$, we anticipate obtaining an output label $Y$ from the LLMs $f$: $Y = f(Q, Z)$. 
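As a concrete illustration of $Y = f(Q, Z)$, the sketch below assembles the task instruction, optional demonstrations, and the question into a single prompt and maps the model's reply back to one of the three labels. It is a minimal sketch of the prompting pipeline described here and in the zero- and few-shot settings detailed next; the exact instruction and demonstration wording in the released benchmark may differ, and the LLM is left as an abstract callable rather than a specific vendor API.

```python
def build_prompt(question, mode="zero_shot", demos=None, cot=False):
    """Assemble the model input Z + Q for Pinocchio-style factuality probing.

    `demos` is a list of (question, reasoning, label) triples used in the
    few-shot settings; `cot` toggles chain-of-thought style answers.
    """
    instruction = ("You will be given a question. You should answer whether "
                   "it is Yes, No, or Not Sure Enough and show your evidence.")
    parts = [instruction]
    if mode == "few_shot" and demos:
        for q, reasoning, label in demos:
            answer = (f"{reasoning} Therefore, the answer is {label}."
                      if cot else f"The answer is {label}.")
            parts.append(f"Question: {q}\nAnswer: {answer}")
    suffix = "\nLet's think step by step." if (mode == "zero_shot" and cot) else ""
    parts.append(f"Question: {question}{suffix}\nAnswer:")
    return "\n\n".join(parts)


def answer_factual_question(llm, question, **kwargs):
    """Y = f(Q, Z): query an LLM callable and parse its reply into a label.

    `llm` is any text-in/text-out function (e.g. a thin wrapper around a
    chat API); it is kept abstract here on purpose.
    """
    reply = llm(build_prompt(question, **kwargs))
    # Check "Not Sure Enough" first so that the substring "No" does not
    # shadow it during matching.
    for label in ("Not Sure Enough", "Yes", "No"):
        if label.lower() in reply.lower():
            return label
    return "Not Sure Enough"  # fall back when the reply is unparseable
```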
Zero-Shot Prompt In the zero-shot setting, the LLMs are expected to provide answers based on the Question $Q$ and the task instruction $Z$. We anticipate that the LLMs can directly generate the factual answer “No” when presented with $Q$: “Has gas prices gone up 99 percent since Obama became president, making it the highest gas price increase since Carter?” The zero-shot with CoT setting extends the question $Q$ by adding a two-stage prompt (Kojima et al., 2022): “Let’s think step by step”, designed to encourage the LLMs to contemplate the process of determining the factual label $Y$. Few-Shot Prompt In the few-shot setting, we employ three shots for model input ($Q$). Detailed examples of the prompts in Figure 2 are presented in Appendix A.4. In the few-shot with CoT setting, we provide potential reasoning instructions to the LLMs before presenting the factual label ($Y$). As shown in Figure 2, for the $Q$: “Is there a capital called Mogadishu?” Our reasoning approach entails first explaining the noun phrase in the $Q$ (the subject and object), and subsequently elaborating on modifying phrases such as predicates or adjectives. Regarding the subject “Mogadish”, we begin by furnishing a detailed definition: “Mogadishu is a city in East Africa, specifically in Somalia.” Following this, we proceed to reason about the relation between “Mogadish” and “capital”: “Furthermore, the capital of Somalia is indeed Mogadishu.” Consequently, we arrive at the ultimate factual label: “Therefore, the answer is Yes.” 4 EXPERIMENTS In an effort to take the initial step in understanding the capabilities of LLMs, we undertake a comprehensive analysis of various LLMs on Pinocchio, under different conditions and tasks. 4.1 MAIN RESULTS In Table 2, we present the average results of 10 accessible LLMs operating under varying settings on Pinocchio, run three times each. From Table 2, we draw the following conclusions: - Regarding overall performance, we observe that, on average, LLMs without instruction tuning underperform those with instruction tuning by 16.0%. GPT family LLMs undergoing RLHF exhibit superior results, indicating that instruction tuning and RLHF optimize alignment with human knowledge, thereby improving factual question response accuracy. - Results obtained using the Few-shot setting significantly outperform those obtained when simply asking factual questions to LLMs in the Zero-shot setting, especially for models without RLHF, exhibiting an average improvement of 7.3%. This highlights the capability of some sample prompts to better extract the inherent factual knowledge of LLMs. - Using the CoT method, we observed a relative boost in performance in LLMs subjected to instruction tuning and RLHF, improving by an average of 2.1%. Notably, the factual accuracy of LLMs like OPT, BLOOM, and LLaMA was mostly stable or even decreased. A review of outputs from these untuned LLMs revealed that, post-CoT application, LLMs tend to produce related Table 3: Results of different LLMs using Few-shot w/ CoT prompts across different tasks. | Task | Multifaceted | Structural | Adversarial | Temporal | Real-World | Domain Specific | Multi-lingual | |---------------|--------------|------------|-------------|----------|------------|-----------------|---------------| | | Acc. | F1 | Acc. | F1 | Acc. | F1 | Acc. | F1 | Acc. | F1 | Acc. | F1 | Acc. 
| F1 | | OPT-6.7B | 34.5 | 24.1 | 45.5 | 30.9 | 51.8 | 51.7 | 30.0 | 18.0 | **53.7** | 27.5 | 28.2 | 28.3 | 16.2 | 17.7 | | BLOOM-7B | 10.7 | 13.5 | 0.8 | 3.5 | 2.0 | 3.7 | 3.7 | 7.7 | 5.4 | 8.5 | 11.8 | 15.6 | 9.8 | 15.9 | | LLama-7B | 38.3 | 33.9 | 44.1 | 32.1 | 43.2 | 46.1 | 41.6 | 30.0 | 26.4 | 26.3 | 23.6 | 25.0 | 27.8 | 27.7 | | Alpaca-7B | 38.6 | 28.8 | 48.0 | 23.6 | 46.4 | 35.1 | 49.6 | 26.1 | 24.5 | 19.9 | 42.9 | 26.8 | 24.2 | 17.7 | | Vicuna-7B | 44.2 | 36.0 | 49.7 | 36.3 | 59.0 | 59.2 | **50.1** | 37.6 | 49.0 | 41.8 | 44.3 | 38.6 | **46.7** | 43.1 | | Vicuna-13B | 49.9 | 45.3 | 48.1 | 37.9 | 58.9 | 60.0 | 45.4 | 37.8 | 47.7 | 42.7 | 43.5 | 40.4 | 37.8 | 37.9 | | ChatGLM-6B | 41.0 | 36.0 | 46.8 | 35.7 | 51.5 | 48.6 | 39.4 | 32.4 | 48.9 | 34.8 | 35.2 | 35.0 | 37.1 | 35.3 | | Flan-T5-11B | 49.2 | 49.4 | 43.5 | 33.7 | 54.7 | 56.6 | 31.6 | 30.6 | 31.1 | 29.4 | 35.6 | 34.6 | 25.3 | 14.4 | | Text-Davinci-002 | 47.7 | 47.7 | **50.8** | 38.4 | 64.3 | 64.3 | 33.9 | 31.1 | 51.7 | 41.4 | 36.4 | 36.1 | 43.1 | 39.5 | | Text-Davinci-003 | 51.1 | 47.8 | 44.3 | 33.7 | 64.1 | 63.7 | 41.4 | 35.1 | 48.0 | 42.8 | 40.4 | 41.4 | 43.7 | **43.6** | | GPT-3.5-Turbo | 53.6 | **53.1** | 44.8 | 37.8 | 67.4 | 67.4 | 37.4 | 33.9 | 50.4 | 43.1 | 38.7 | 40.3 | 41.3 | 41.1 | content considerations, and extensive considerations often overshadow factual discernment tasks, causing incorrect factual label outputs. In contrast, for instruction-tuned LLMs, the CoT method facilitates enhanced exploration of factual entity relations in questions, resulting in accurate factual labels. See Appendix A.3 for detailed case analyses. - The OPT model, without being tuned to instructions, struggles significantly to output correct factual labels under the settings of Zero-shot and Zero-shot CoT, often resulting in either a repetition of the original question or a refusal to output any content at all. This issue is somewhat alleviated under the settings of Few-shot and Few-shot CoT. - Additionally, we studied the hyperparameters of LLMs. Due to limited computing resources, we only explored Vicuna-7B and Vicuna-13B. We found that as model parameters increase, performance on factual questions improves correspondingly, with an average increase of 5.4%. This indicates that LLMs with more parameters can store more world knowledge and have stronger factual knowledge recognition capabilities. In Table 3, we present the factual performance of LLMs in various tasks under the Few-shot CoT setting. This reveals the relative difficulty LLMs have in understanding and responding to factual questions in different tasks, providing insights for future training of factual knowledge in LLMs. From Table 3, it is observed that LLMs exhibit relatively poorer performance on factual questions related to the real-world, domain-specific knowledge, and multilingualism, being on average 6.4% lower compared to the other four tasks. This is attributed to the fact that the training data for LLMs typically come from general domains and are not up-to-date, which indirectly inspires the exploration of retrieval-augmented LLMs (Ram et al., 2023). We analyze the LLMs in different tasks in Sec. 4.2 4.2 ANALYSIS In this section, we explore LLMs’ capabilities focusing on key areas like handling of multi-hop factual questions, proficiency in diverse prompt strategies, and tackling challenges like numerical reasoning and entity ambiguity. 
We also examine their performance on time-sensitive factual questions, against adversarial attacks, with fine-grained labels and prompts in multiple languages. (a) Multi-hop Reasoning Analysis (b) Structural Knowledge Analysis (c) Challenges of Different Questions Figure 3: GPT-3.5-Turbo’s outcomes across three distinct tasks under Few-shot CoT setting. Multi-hop Factual Question Analysis To analyze the performance of LLMs when faced with factual questions based on multiple pieces of facts that require complex logical reasoning, we categorize multifaced and structural factual questions into distinct subsets, depending on the number of “hops” necessary to validate each factual question. To maintain fairness, we randomly sampled 1,490 data pieces from each of the two datasets for verification. Figure 3(a) illustrates the data counts and Macro F1 scores of GPT-3.5-Turbo for each respective subset. The figure reveals a clear pattern: as the number of “hops” increases, the reasoning chain for deriving conclusions from existing factual knowledge extends, necessitating heightened logical reasoning capabilities from the LLMs. Consequently, the performance of the LLMs exhibits diminishing trends. **Structural Knowledge Analysis in LLMs** To investigate whether LLMs can effectively memorize factual knowledge from structured data, we divided the structural task questions into three subsets according to evidence distribution: evidence in unstructured data (Only text), structured data (Only tables), or both (Combine text and tables). Figure 3(b) shows a notable decline (Avg. -5.5%) in GPT-3.5-Turbo’s performance when evidence involves structured data, indicating LLMs’ limited ability in extracting knowledge from structured tables. The LLMs also perform less effectively when handling questions requiring the combination of both evidence types, reflecting their incapacity to integrate diverse structured evidence effectively. **Analysis of Different Factual Questions Poses Challenges** To assess the capabilities of LLMs in addressing various challenges, we partitioned each factual question within the structural task into six distinct challenges: 1) Entity disambiguation, 2) Other, 3) Multi-hop reasoning, 4) Combining tables and text, 5) Search terms not in claim, 6) Numerical reasoning, each centered around the most critical difficulty encountered during verification. Figure 3(c) illustrates GPT-3.5-Turbo’s performance and data distribution across challenges. The extensive training and large-scale parameters enhance LLMs’ performance in handling entity ambiguity. Longer reasoning chains and various forms of evidence challenge LLMs’ factual abilities. When correct inference involves unmentioned entities, LLMs may lack necessary hints from factual questions, posing significant challenges. LLMs also exhibit deficiencies in precise numerical calculations due to the inherent hallucination phenomenon, resulting in subpar performance when numerical reasoning is needed for verification. ![Figure 4: Results of GPT-3.5-Turbo in three different tasks under Few-shot CoT setting.](image) **Temporal Analysis** As time progresses, the factuality of questions may undergo changes. This task encompasses such data, and we leverage this task to explore the ability of LLMs to adapt to factual changes. Figure 4(a) illustrates that GPT-3.5-Turbo exhibits a modest yet noticeable performance difference when dealing with outdated data as compared to updated data. 
This discrepancy arises from the fact that LLMs are pretrained on a corpus of text prior to a specific temporal point. Consequently, LLMs lack the capability to acquire real-time, up-to-date knowledge, rendering them unable to validate questions that hinge on the most recent information for accurate assessments. **Adversarial Analysis** To evaluate the robustness of LLMs to adversarial attacks, we divide the adversarial questions into three subsets: auto-generated questions from the corpus, manually modified synthesized questions yielding adversarial ones, and artificially created adversarial questions. Figure 5(b) presents the performance of GPT-3.5-Turbo on these three subsets. It is evident that following adversarial attacks, LLMs exhibit a substantial decrease in performance. Furthermore, factual questions that have undergone manual modifications or were artificially created prove to be more challenging compared to those that are automatically generated (Shen et al., 2023). This disparity could be attributed to the fact that automatically synthesized factual questions often contain explicit positive or negative words that hint at the outcome, and the exceptional comprehension abilities of LLMs enable them to accurately discern and provide the correct response in such cases. **Label Granularity Analysis** To assess the effect of different label granularities on LLMs’ performance, we conducted a manual re-labeling of the real-world task questions. Per the settings of Misra (2022), besides labeling as “Factual”, “Non-Factual”, and “Not Enough Information”, we also require them to annotate the dataset with six factual labels: “Factual”, “Mostly Factual”, “Mostly False”, “Non-Factual”, “Pants-Fire”, and “Not Enough Information”. We also modified the prompt for GPT-3.5-Turbo for more intricate factual responses to test its competency with nuanced labels. Results in Figure 4(C) disclosed: 1) The results show that, in general, there is a significant decrease in performance (-23.83%) when transitioning from coarse-grained justification to fine-grained justification. With finer granularity, LLMs are not only required to assess the authenticity of each question but also to judiciously employ their knowledge base to precisely gauge the credibility of each factual questions. 2) When comparing the performance of coarse-grained labels with fine-grained labels, we observe significant drops in the three categories: “Factual” by 13.3%, “Non-Factual” by 23.2%, and “Not Enough Information” by 22.3%. This indicates that finer-grained labels introduce additional options that can potentially disrupt the original judgment of the LLMs. A potential remedy could be the aggregation of multiple judgments through voting (Wang et al., 2023a). **Multilingual Task with Chinese and English Prompts** To investigate the influence of prompts in different languages on LLMs, we extracted Chinese factual questions from the multilingual tasks to create a subset. We then evaluated the LLMs’ performance when using both Chinese and English prompts, both of which are depicted in Appendix A.4. Table 4 illustrates the results, indicating that the LLMs perform better when using a Chinese prompt. This underscores the notion that employing prompts in the same language as the questions can enhance the transfer capabilities from English factual knowledge to other languages of LLMs. 
| Language | English | Chinese | |----------|---------|---------| | Factual | 41.7 | 55.5 | | Non-Factual | 47.9 | 49.7 | | NEI | 43.8 | 35.5 | | Overall | 44.5 | 46.9 | Table 4: Macro F1 over Chinese and English prompts. **Table 5: Results in different domains obtained on the Pinocchio-Lite using different prompts.** | Task | Multifaceted | Structural | Adversarial | Temporal | Real-World | Domain Specific | Multi-lingual | Overall | |------------|--------------|------------|-------------|----------|------------|-----------------|---------------|---------| | Acc. F1 | Acc. F1 | Acc. F1 | Acc. F1 | Acc. F1 | Acc. F1 | Acc. F1 | Acc. F1 | Acc. F1 | | 1 shot | 56.0 50.9 | 37.0 35.7 | 50.5 | 56.6 | 39.5 39.5 | 43.0 42.7 | 40.0 40.1 | 42.0 38.7 | | 2 shots | 56.0 53.4 | 41.0 42.3 | 47.5 | 56.2 | 41.0 42.0 | 40.5 41.7 | 42.5 43.5 | 36.5 34.8 | | 3 shots | 54.5 50.0 | 38.0 36.8 | 49.0 | 54.9 | 40.0 39.0 | 39.5 38.1 | 41.5 41.7 | 40.5 39.2 | | 6 shots | 54.5 51.7 | 38.5 38.3 | 49.0 | 55.8 | 42.0 41.5 | 42.5 41.6 | 39.0 39.5 | 41.0 38.4 | | 9 shots | 57.5 53.3 | 38.0 37.8 | 52.0 | 57.3 | 43.0 42.2 | 42.5 39.8 | 37.5 36.7 | 37.5 35.0 | | 12 shots | 55.5 52.0 | 38.5 38.6 | 53.0 | 58.8 | 47.0 46.9 | 46.0 44.7 | 34.0 34.5 | 39.0 37.1 | | Complex CoT| 51.0 50.2 | 38.5 35.0 | 37.5 | 47.2 | 39.0 39.0 | 39.5 36.8 | 36.0 35.7 | 38.5 31.7 | | Self-Consistency | 55.5 51.2 | 43.0 42.6 | 49.5 | 54.8 | 43.0 41.6 | 43.0 41.9 | 42.0 42.4 | 39.5 36.8 | | Self-Refinement | 55.0 52.1 | 44.5 44.0 | 53.5 | 59.2 | 42.5 42.2 | 41.5 40.3 | 42.0 43.4 | 43.0 39.9 | | Declarative Claim | 52.0 51.1 | 39.0 35.1 | 45.5 | 49.3 | 40.5 40.7 | 40.0 37.9 | 41.0 40.6 | 38.5 36.3 | **Prompt Strategy Analysis** In prior research, various CoT methods have been employed to enhance the performance of LLMs. These methods include 1) augmenting the number of in-context learning examples, 2) implementing self-consistency mechanisms, which alleviates the hallucination phenomenon through majority voting after multiple judgments of LLMs (Wang et al., 2023a), 3) incorporating complex instances as demos to steer the cognitive processes of LLMs (Fu et al., 2022), and 4) employing self-refinement strategies, which refines LLMs’ answers through continuous feedback of another LLM on responses to achieve better results (Madaan et al., 2023) and so forth. Additionally, we examined the influence of utilizing declarative claims as instances of in-context learning. We randomly sampled 200 factual questions from each task of the Pinocchio, totaling 1400 questions, to compose Pinocchio-Lite with the aim of speeding up the testing of different prompt strategies. The performance results of various CoT methods are presented in Table 5. To maintain fairness, three in-context learning examples are employed in the complex CoT, self-consistency, self-refinement, and declarative claim methods. Different types of CoT prompts are shown in Appendix A.4. It is worth noting that 1) when the number of in-context learning examples is limited, the incremental improvement in performance is marginal upon increasing the number of examples. However, beyond a specific threshold, the addition of more examples gains more performance improvement. This could be due to the inability of LLMs to fully encapsulate the correct reasoning with fewer examples. 2) Concurrently, a fascinating observation is that the LLM’s performance substantially deteriorates as the complexity of the CoT increases. 
This could stem from the difficulty LLMs have in extracting a generalized reasoning pattern from complex, multi-stage thinking processes with limited examples. 3) The self-consistency method markedly boosts performance by mitigating the hallucination issue in LLMs through consistency voting, enhancing their response accuracy. 4) In the self-refinement approach, the model might initially provide an incorrect response, but it can amend its mistakes through feedback and refine its answers. In the end, when no additional refinement is needed, the model often reaches the correct conclusion, achieving optimal performance. 5) Compared to the 3 shots method, the declarative claims method saw a 2.3% performance drop, illustrating that using questions as inputs better elicits factual knowledge than the original claim in the datasets. 5 RELATED WORK Factual Knowledge in Language Models Previous research shows that LLMs can retain and utilize factual knowledge, effectively acting as knowledge bases (Petroni et al., 2019; 2020; Heinerling & Inui, 2021). This acquired factual knowledge in language models during pretraining can be advantageous for knowledge-intensive tasks like question answering and fact checking (Roberts et al., 2020; Yu et al., 2023a; Pan et al., 2023). To evaluate the factual knowledge stored in language models, Petroni et al. (2019) employed cloze tests consisting of triples and prompts specifically designed to simulate missing objects. Jiang et al. (2020b) explored the role of prompts in retrieving factual information from language models and devised improved prompts for probing. However, Elazar et al. (2021) demonstrated the unreliability of rank-based probing methods with paraphrased context, leading to inconsistent findings. Cao et al. (2021b) contended that biased prompts and leakage of golden answers often lead to overestimations of LLMs’ knowledge storage capability. Our method is more in line with Kadavath et al. (2022) and Lin et al. (2022), employing self-evaluation by querying the models to assess response accuracy regarding factual knowledge. More recent studies have directed their focus towards the detection of hallucinations—factually incorrect statements—in the responses generated by LLMs. For instance, the SelfCheckGPT (Manakul et al., 2023) uses a sampling method to detect inconsistencies in LLM responses, identifying hallucinated claims. Alternatively, FactScore (Min et al., 2023) approaches the challenge by deconstructing generations into atomic facts—concise statements—and assigning binary labels to assess their veracity. Furthermore, Chern et al. (2023) introduced a tool-enhanced framework for hallucination detection encompassing five core components: claim extraction, query formulation, tool-based querying, evidence gathering, and validation of consistency. However, these contributions primarily target the identification of factual inaccuracies in the models’ output. In contrast, our benchmark is primarily designed to evaluate the breadth and depth of factual knowledge within LLMs. Benchmarks for Large Language Models The advent of LLMs has underscored the importance of exhaustive benchmarks for effective capability assessment. Presently, there are predominantly two types of existing benchmarks. 
One evaluates the general knowledge and reasoning capacities of LLMs, exemplified by the MMLU (Hendrycks et al., 2021a), a multi-choice benchmark that measures tasks from real-world tests and literature, spanning diverse subjects like elementary math, US history, computer science, and law. Moreover, benchmarks also exist for non-English languages (Huang et al., 2023) or in a bilingual context (Zhong et al., 2023). BIG-bench (Srivastava et al., 2022) is a collaborative benchmark examining LLMs’ capabilities across 204 diverse tasks from various fields like linguistics, childhood development, software development, and more. HELM (Liang et al., 2022) employs 7 metrics over 42 tasks to assess LLMs, focusing on aspects from accuracy to robustness. Specific benchmarks like GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021a) target mathematical problem-solving, presenting elementary to competition-level problems. In program synthesis, HumanEval (Chen et al., 2021a) and MBPP (Austin et al., 2021) evaluate functional correctness through program synthesis from docstrings. Additional benchmarks address instruction following (Dubois et al., 2023), tool usage (Xu et al., 2023), and decision making (Liu et al., 2023). Our benchmark mainly focuses on factual knowledge, differing from ones like TruthfulQA (Lin et al., 2022), which specifically tests truthfulness in LLMs’ generated responses, with questions structured to provoke imitative falsehoods over truthful answers. 6 CONCLUSION In this work, our primary focus is the development of the Pinocchio benchmark, an extensive test bed encompassing 20,713 questions across seven varying complexity tasks, as a tool to investigate whether LLMs are capable of memorizing factual knowledge and reasoning on the basis of it. Upon applying the Pinocchio benchmark, we observe that various types of LLMs using different prompting strategies such as self-refine and self-consistency still have challenges in optimal performance on factual tasks. It is our hope that this novel benchmark will shed light on this area and act as a foundation for further improvements in LLMs’ factual knowledge and reasoning abilities. ACKNOWLEDGEMENT This work is supported in part by NSF under grant III-2106758. Additionally, Junzhe Chen and Xiaochuan Li are supported by Beijing Natural Science Foundation under grant number QY23115 and QY23116. REFERENCES Firoj Alam, Shaden Shaar, Fahim Dalvi, Hassan Sajjad, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, Ahmed Abdelali, Nadir Durrani, Kareem Darwish, Abdulaziz Al-Homaid, Wajdi Zaghouani, Tommaso Caselli, Gijs Danoe, Friso Stolk, Britt Bruntink, and Preslav Nakov. Fighting the COVID-19 infodemic: Modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 611–649, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-emnlp.56. URL https://aclanthology.org/2021.findings-emnlp.56 Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. FEVEROUS: fact extraction and verification over unstructured and structured information. 
In Joaquin Vanschoren and Sai-Kit Yeung (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/68d30a9594728bc39a24be94b319d21-Abstract-round1.html Fatma Arslan, Naeemul Hassan, Chengkai Li, and Mark Tremayne. A benchmark dataset of check-worthy factual claims. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pp. 821–829, 2020. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. On the cross-lingual transferability of monolingual representations. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pp. 4623–4637. Association for Computational Linguistics, 2020. doi: 10.18653/V1/2020.ACL-MAIN.421. URL https://doi.org/10.18653/v1/2020.acl-main.421 Akari Asai and Eunsol Choi. Challenges in information-seeking QA: unanswerable questions and paragraph retrieval. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pp. 1492–1504. Association for Computational Linguistics, 2021. doi: 10.18653/V1/2021.ACL-LONG.118. URL https://doi.org/10.18653/v1/2021.acl-long.118 Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. Multilingual extractive reading comprehension by runtime machine translation. CoRR, abs/1809.03275, 2018. URL http://arxiv.org/abs/1809.03275 Akari Asai, Jungo Kasai, Jonathan H. Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. XOR QA: cross-lingual open-retrieval question answering. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tür, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pp. 547–564. Association for Computational Linguistics, 2021. doi: 10.18653/V1/2021.NAACL-MAIN.46. URL https://doi.org/10.18653/v1/2021.naacl-main.46 Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. MultiFc: A real-world multi-domain dataset for evidence-based fact checking of claims. arXiv preprint arXiv:1909.03242, 2019.
wlqkRFRkYc
The primary motivation of the paper is to address the limitations of two-dimensional image features in effectively representing global features within autonomous driving scenarios, for which the authors propose the use of BEV representations. The reviewer wonders why the authors did not explore 3D occupancy as a potential solution to this issue, as suggested by Huang et al. in their paper.
BEV-CLIP: Multi-modal BEV Retrieval Methodology for Complex Scene in Autonomous Driving Anonymous authors Paper under double-blind review Abstract The demand for the retrieval of complex scene data in autonomous driving is increasing, especially as passenger vehicles have been equipped with the ability to navigate urban settings, with the imperative to address long-tail scenarios. Meanwhile, under the pre-existing two dimensional image retrieval method, some problems may arise with scene retrieval, such as lack of global feature representation and sub-par text retrieval ability. To address these issues, we have proposed BEV-CLIP, the first multimodal BEV retrieval methodology that utilize descriptive text as an input to retrieve corresponding scenes. This methodology applies the semantic feature extraction abilities of a large language model (LLM) to facilitate zero-shot retrieval of extensive text descriptions, and incorporates semi-structured information from a knowledge graph to improve the semantic richness and variety of the language embedding. Our experiments result in 87.66% accuracy on NuScenes dataset in text-to-BEV feature retrieval. The demonstrated cases in our paper support that our retrieval method is also indicated to be effective in identifying certain long-tail corner scenes. 1 Introduction In recent years, a growing focus has been raised on the retrieval task in the field of autonomous driving. A well-designed retrieval method is essential for addressing corner cases in autonomous driving data [Bogdoll et al., 2021]. However, corner case scenarios often contain instances or features that rarely occurs. For example, unprotected left turn scenario describes ego-vehicle is turning left without the protection of a left turn traffic light. In scenarios such as this, all necessary road participants (e.g opposite vehicle, lane line and vehicle in the left neighbouring lane) are distributed in a unique pattern requiring global abstraction. Meanwhile, the precise description text of road participants may be extremely customized in specific cases, and is unable to be included in pre-existing labels from any dataset. Hence, the retrieval model desires the capability to represent complex features that are distributed over a wide range of scenes [Li et al., 2022a]. This paper aims to study two fundamental problems towards developing a system for image-text retrieval in autonomous driving scenes. (1) How can we overcome the limitations intrinsic to two-dimensional image features, particularly their poor capability to effectively represent global feature within autonomous driving scenarios? (2) Which methodologies could potentially enhance the currently unsatisfactory efficacy of text representations within the field of autonomous driving? To address these two issues, we suggest the following. Feature extraction We suggest the utilisation of the Bird’s-Eye View (BEV) framework as it offers a unified representation for autonomous driving scene description. By combining multiview camera data, the BEV framework converts 2D perception into a detailed 3D description from a top-down perspective [Xie et al., 2022; Huang et al., 2021; Li et al., 2023b]. This approach overcomes limitations associated with feature truncation, which frequently occurs in single-view approaches, and enables better downstream tasks. 
As a notable solution, BEVFormer [Li et al., 2022b], a transformer-based BEV encoder, generates global features from camera input alone and serve as an end-to-end model for various downstream tasks. Thus, performing scene retrieval on BEV features is a integrated solution to address the problem of extracting global representation, and as a Figure 1: BEV-CLIP, the first BEV retrieval method retrieves corner cases on autonomous driving. In contrast to 2D image retrieval, BEV-CLIP allows semantic retrieval related to complex global features in the context of BEV features. Meanwhile, BEV-CLIP uses a Large Language Model (LLM) to enhance the model’s ability to understand complex descriptions in the retrieved text. well-known method, incorporating BEVFormer for BEV feature extraction is both advantageous and justified. Language representation We suggest the incorporation of intricate semantic data as an additional input to compensate for abstracted features not evident solely in image data. Existing multimodal large language models (LLM) have demonstrated remarkable capabilities in expressing features of other modalities [Huang et al., 2023] [Liu et al., 2023]. CLIP [Radford et al., 2021] presents a baseline for multimodal retrieval, enabling the model to generate zero-shot inferences by leveraging the language model’s decoding capabilities. Inspired by this, we construct an improved LLM with fine-tuning strategies to provide more richness semantic information as supplement for BEV feature. Additionally, knowledge graph features will be incorporated to enhance the salience of knowledge in the autonomous driving domain. The fusion of the LLM and knowledge graph aims to achieve excellent cross-modal understanding in our method. In this paper, we propose BEV-CLIP, the first BEV retrieval method. It unifies capabilities of BEV feature aggregation and representation, and rich semantic abstraction abilities from LLM and knowledge graph. Our key designs can be summarized as follows: (a) A novel method to perform BEV retrieval and BEV caption generation. (b) A well-performed assembly method of LLM and knowledge graph to improve the generalisation ability of language comprehension. (c) An efficient structure called shared multimodal prompt (SCP) that bridges the gap between BEV and language branch to provide a well-fused feature representation before contrastive learning. Our contributions can be summarised as follows: 1. We propose a retrieval method based on BEV feature which can retrieve global features in autonomous driving scenarios and have a significant understanding capability of complex scenes. To the best of our knowledge, this is the first BEV retrieval method in the field of autonomous driving. 2. We propose a multimodal retrieval method powered by LLM and knowledge graph to achieve contrastive learning between text description and BEV feature retrieval for autonomous driving, so that it can perform zero-shot retrieval using long text descriptions. 3. We build a retrieval validation pipeline based on existing datasets and achieve a result at rank-1 of 87.66% on the NuScenes dataset, fully verifying the effectiveness of optimising the BEV retrieval model. 2 RELATED WORK BEV Feature Acquisition Vision-dependent Bird’s Eye View (BEV) perception has gained significant attention as it offers advantages in rendering complex scenes and facilitating the fusion of multiple camera inputs. 
(Garnett et al., 2019; Can et al., 2021) (Chen et al., 2022) proposed an approach based on Inverse Perspective Mapping (IPM), which inversely maps features in perspective space to BEV space. However, IPM is restricted to the ideal assumption of flat ground. (Philon & Fidler, 2020; Hu et al., 2022) proposed a method based on monocular depth estimation (MDE). However, the lack of explicitly supervised images prevents the effectiveness of MDE, which, in turn, affects the accuracy of BEV features. In recent years, transformer architectures have been widely adopted for BEV models. Transformer uses a global attention mechanism, where the mapping of any position in the target domain to the source domain has the same distance, overcoming the limitation of CNN. (Wang et al., 2022; Can et al., 2021) converts the query to a 2D feature by projection, so that the network can find the real 3D obstacle features automatically. BEVFormer (Li et al., 2022b) combines spatial and temporal attention, creating a novel aggregation technique for BEV features. By iteratively updating the history frames with query information using an RNN-like method and subsequently passing it to the current frame, the computational burden is effectively managed. We claim that the inclusion of temporal information in BEV features is highly suitable for retrieval tasks, as it excels in reconstructing dynamic scenarios for autonomous driving. Retrieval tasks The field of cross-modal retrieval, which aims to bridge the representation gap between different modalities, has gained significant attention in the literature. One prominent approach proposed by (Radford et al., 2021) involves training a migratable visual model using text as a supervision signal. In this approach, both the text and image inputs are separately encoded by a text encoder and an image encoder, respectively, generating corresponding feature representations that can be utilized for contrastive learning. Through extensive training with a large amount of data, the two encoders achieve good generalization capabilities, enabling zero-shot retrieval. Based on this, (Li et al., 2023a) introduces a multimodal Encoder-Decoder structure, which effectively perform multitask pre-learning and transfer learning. Additionally, it introduces a learnable Q-Former structure for bridging the representation gap between modalities, so that the model can be fine-tuned using an LLM with frozen parameters while updating only a small number of parameters. (Khattak et al., 2023; Chen et al., 2023) proposed a joint prompt method that adds learnable context tokens to the main branch as implicit prompts to establish interaction between the image and text branches. We argue that this joint prompt method is able to perform supervised training for retrieval tasks based on large language models. Language Pretraining Recent research has demonstrated that the emergent ability of LLMs allows them to achieve a great improvement in comprehension after reaching a certain magnitude of token count. (Brown et al., 2020) demonstrated for the first time the superiority of autoregressive language modeling by showing that impressive performance can be achieved with few-shot/zero-shot inferencing through prompting and in-context learning. Moreover, several studies (Chowdhery et al., 2022; Touvron et al., 2023; Chung et al., 2022; Driess et al., 2023) have confirmed the efficacy of LLMs on modest tasks, leveraging a limited number of fine-tuned parameters across a diverse mixture of multi-task datasets. 
These findings collectively emphasize the significant potential of LLMs in enhancing various language-related tasks.

Retrieval tasks based on knowledge graph Knowledge graphs are known for their exceptional scalability in handling unstructured data types. We explore the use of knowledge graphs in the Retrieval-Augmented Generation (RAG) domain and draw inspiration from it to complement generative LLMs. Several studies in the existing literature combine multiple kinds of knowledge to augment language models, such as augmenting common-sense reasoning with knowledge graphs (Yu et al., 2022) and introducing multimodal visual features to augment affective dialogue (Liang et al., 2022).

Figure 2: **Overall structure of BEV-CLIP.** (a) Processing of BEV and text features. The images from 6 surrounding cameras are encoded into a BEV feature by the BEV Encoder with frozen parameters. At the same time, the input text embedding is concatenated with the keyword-matched Knowledge Graph node embedding and fed into the Language Encoder with a LoRA branch for processing. (b) Shared cross-modal prompt (SCP), which aligns the BEV and linguistic features in the same hidden space. (c) Joint supervision of caption generation and retrieval tasks. ⊙ denotes dot product.

3 METHOD

In this section, we discuss the main structure of BEV-CLIP, a methodology for text-to-BEV contrastive-learning retrieval. We describe how pre-trained BEV encoder weights are used for retrieval tasks and how a cross-modal interaction strategy is applied to the language representation, which is fused from the text description and the knowledge graph embedding.

3.1 GLOBAL BEV FEATURE

In our approach, we have explored multiple methods of acquiring BEV features and found that vision-based BEV approaches are the most universal for autonomous driving applications. Therefore, we adopt BEVFormer as the baseline model for BEV feature extraction. BEVFormer is a dedicated camera-based BEV perception model that incorporates two critical modules: spatial attention and temporal attention. These modules enable the aggregation of spatial and temporal information, facilitating the characterisation of movable obstacles from multiple perspectives. Thus, the generated BEV features possess a higher capacity to encapsulate the entire scene. It is worth mentioning that our approach can accommodate various feature extraction networks that integrate multi-view images or point cloud data into BEV features. In the retrieval task, we keep all parameters of the BEVFormer model frozen and employ the generated features directly for downstream post-processing and retrieval. Consequently, the approach minimises the overheads associated with training and enhances the overall efficiency of the retrieval method.

3.2 KNOWLEDGE GRAPH PROMPTING

In the context of autonomous driving scene description, semantic information related to the scene often exhibits discrete characteristics. To address this, we propose incorporating unstructured information that complements the descriptive text by providing associative information.

Figure 3: **Knowledge graph keyword matching.** We generate the knowledge graph embedding of the autonomous driving domain by pre-training, concatenate the embeddings of keywords that appear in the input text with the tokenised text in order, and jointly input them into the language encoder. The language encoder is a structure with frozen LLM parameters and a LoRA branch for fine-tuning.
In our approach, we leverage a Graph Neural Network (GNN) to train a knowledge graph in the field of autonomous driving. Each node in the graph corresponds to a keyword relevant to autonomous driving, and the embeddings associated with these nodes capture the associative representation of autonomous driving keywords. Subsequently, these keyword embeddings are concatenated into the tokenised sentence, thereby expanding the representation of the encoded text.

| Subject | Predicate | Object |
|---------------|--------------------|-------------------|
| inst:scene | rdf:type | Object:Scene |
| inst:scene | rdf:includeType | Object:Vehicle |
| inst:scene | rdf:include | inst:vehicle |
| inst:vehicle | rdf:participate_in | inst:driving |
| inst:driving | rdf:type | Movement:Driving |
| inst:driving | rdf:has_participant | inst:vehicle |

Table 1: Example Resource Description Frameworks (RDF) in PandaSet

In order to acquire comprehensive knowledge in the domain of autonomous driving, it is essential to extract and generalize relationships within a knowledge graph across a wide range of instances. Constructing perceptual data into graph instances that can be effectively learnt enables this process. To achieve this, (Wickramarachchi et al., 2020) proposed a knowledge graph specifically tailored to the field of autonomous driving. This graph is constructed using scene-aware data obtained from PandaSet (Xiao et al., 2021) and abstracts triplets (as illustrated in Table 1) to establish associations between perceptual instances, labels, and actions. In accordance with our specific requirements, the entities within the knowledge graph are categorized into three main groups: instances, objects, and movements. A predominant portion of the nodes corresponds to instances, while object and movement nodes connect to numerous distinct instance nodes. Exploiting the extensive connectivity between object and movement nodes to extract their relations is our primary objective, enabling the acquisition of their associative representations. To achieve this, we employ an embedding technique in which the entities and relations present in the knowledge graph are mapped into a continuous vector space. Through training, we iteratively optimise suitable vectors within this space for each node. Drawing inspiration from TransE (Bordes et al., 2013), which is based on translational distance modelling, we adopt a distance-based scoring function to assess relationships. For each triplet in the graph, the following scoring function is defined:

$$f_r(h, t) = -\| h + r - t \|_{norm}$$ \hspace{1cm} (1)

Here, $h$, $r$, and $t$ stand for the subject, predicate, and object embeddings, respectively, and the norm is generally the $L_1$ or $L_2$ norm. After training with this scoring function, the embedding vectors of the nodes in the graph capture the translation relations represented by all the triplets present. To explore further strategies for acquiring more structured representations of the relations within the knowledge graph, we also adopt DistMult (Yang et al., 2014) and ConvE (Dettmers et al., 2018) as alternative knowledge graph embeddings. DistMult is a bilinear model that calculates the credibility of entities and relationships in vector space. ConvE is a model that uses two-dimensional convolution to achieve link prediction.
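To make the two simpler scoring functions concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the TransE score of Eq. (1) and the DistMult bilinear score, using NumPy with randomly initialised toy embeddings; ConvE is omitted here since it additionally requires a convolutional network.

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray, p: int = 1) -> float:
    """Translational-distance score of Eq. (1): higher means a more plausible triplet."""
    return -float(np.linalg.norm(h + r - t, ord=p))

def distmult_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Bilinear (diagonal) score used by DistMult: <h, diag(r), t>."""
    return float(np.sum(h * r * t))

# Toy usage with 4-dimensional embeddings for the triplet
# (inst:vehicle, rdf:participate_in, inst:driving) from Table 1.
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 4))
print(transe_score(h, r, t), distmult_score(h, r, t))
```

In practice such scores would be maximised for observed triplets and minimised for corrupted (negative) ones during training of the node embeddings.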
ConvE first converts the head entity and relationship into two-dimensional vectors, then uses a convolution layer and a fully connected layer to obtain interaction information, then calculates with matrix W and tail entity to judge the credibility of the current triplet. We have tested these two approaches with our baseline method and found that DistMult performs well on symmetric relationships, while ConvE is advantageous with graphs containing high-degree nodes. 3.3 Semantic Representation Fusing To achieve zero-shot retrieval, it is crucial to extract the comprehensive semantics information from text input. Pretrained models possess the potential to offer a substantial understanding of word knowledge, grammar, syntax, and overall semantics within general-purpose contexts. These models also exhibit proficiency in contextual comprehension and generation of coherent responses. Furthermore, in various end-to-end LLM approaches, the model demonstrates the ability to jointly characterize diverse modal information of serialization. According to the semantic understanding capabilities of existing LLMs, we fine-tune them for training within the domain of autonomous driving. Leveraging sentence embedding, we integrate and merge graph embeddings to create a synthetic representation. Specifically, we index the keywords from the graph that appear in the text input and concatenate them into the text embedding sequence in the order of their occurrence. The resulting language embedding, fused with the graph representation, serves as the supervision for retrieval task and caption generation. 3.4 Shared Cross-Modal Prompt In this section, we explain how the cross-modal interaction in our method is performed between the BEV and the text branch. In the early stage of our experiment, we have attempted to directly optimise the contrastive loss but it resulted in an unsatisfactory outcome. We realised that the components in both BEV and text branches are pre-trained on single-modal datasets. Meanwhile, most of the parameters of these components have to remain frozen to maintain an affordable training cost. Inspired by Q-Former structure from BLIP2 (Li et al., 2023a), we design an independent structure to bridge the two modalities by implementing a cross-attention method, which is referred to as Shared Cross-Modal Prompt (SCP). These prompts comprise a set of long sequential tokens that are learnable under an autonomous feature space. Our intention is to leverage these prompts to map the BEV features and textual features onto the same manifold space, facilitating the alignment of the divergent modal information present in the two branches. The learnable parameters in SCP can be represented as a sequence: \( T = \{t_1, t_2, \ldots, t_k\} \), and the BEV features can be reshaped and compressed into a sequence of feature embeddings, \( F = \{f_1, f_2, \ldots, f_n\} \). For each token, \( t_i \), in the SCP sequence, the similarity of each feature \( f_j \) in the sequence of BEV features to token \( t_i \) can be computed as: \( r_{ij} = \text{sim}(t_i, f_j) \), and the maximum value of its similarity can be obtained as the projection of the BEV feature \( F \) on the token \( t_i \): \( r_i = \max_j(r_{ij}) \). For all learnable tokens \( T \), use the same method to obtain maximum similarities of \( F \) on \( T \) as a result of the projection of feature \( F \) on \( T \): \( R = \{r_1, r_2, \ldots, r_k\} \). 
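For concreteness, below is a minimal sketch (not the authors' implementation) of the projection step just described. It assumes cosine similarity as the sim(·,·) function and illustrative shapes (16 prompt tokens, a 2500-position BEV feature sequence of dimension 256), which are assumptions rather than values taken from the paper.

```python
import numpy as np

def scp_projection(prompt_tokens: np.ndarray, bev_feats: np.ndarray) -> np.ndarray:
    """Project BEV features onto the shared cross-modal prompt.

    prompt_tokens: (k, d) learnable SCP tokens T = {t_1, ..., t_k}
    bev_feats:     (n, d) flattened BEV feature sequence F = {f_1, ..., f_n}
    Returns R = {r_1, ..., r_k}, where r_i = max_j sim(t_i, f_j).
    """
    # Cosine similarity assumed as sim(., .).
    t = prompt_tokens / np.linalg.norm(prompt_tokens, axis=1, keepdims=True)
    f = bev_feats / np.linalg.norm(bev_feats, axis=1, keepdims=True)
    sim = t @ f.T                  # (k, n) pairwise similarities r_ij
    return sim.max(axis=1)         # (k,) projection R of F onto the prompt

# Toy shapes for illustration only.
rng = np.random.default_rng(0)
R = scp_projection(rng.normal(size=(16, 256)), rng.normal(size=(2500, 256)))
# R is subsequently normalised with a softmax to weight the prompt tokens,
# as described in the following paragraph.
```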
With the softmax function, \( R \) can be converted into a weight for the sequence of SCP: \( w_i = \frac{e^{r_i}}{\sum_j e^{r_j}} \). Such an operation is also performed in the same fashion for text branches. By assigning the weight to the SCP, the feature sequence obtained after the fusion of BEV features with SCP can be derived. In our approach, SCP is shared between both the BEV and text branches, with the difference in output derived from distinct weight sequences assigned to the features within each branch. Through this fusion technique, it is ensured that the resulting features originate from the same embedding space and maintain an identical shape of the feature map. To implement the contrastive learning task in CLIP, we substitute the image features with BEV features. This process involves using the outputs of the text branch and the BEV branch merged with the prompt as distinct sets of features, which are pooled to generate one-dimensional vectors. Subsequently, these vectors are employed for similarity calculations. Additionally, we introduce a caption generation task as an auxiliary component for model training. We employ the output of the BEV branch to generate captions. To fulfill this task, we utilize a lightweight decoder based on the Transformer structure, while the corresponding text description to the BEV sample serves as the supervision label for this task. 4 EXPERIMENT 4.1 DATASETS Nuscenes Dataset The NuScenes dataset (Caesar et al., 2020) is a large-scale public dataset for autonomous driving. The dataset contains a total of 1,000 driving scenarios collected in Boston and Singapore, with each scenario lasting about 20s. Each sample is captured by RGB cameras distributed in six different viewpoints of the car and a lidar placed on the roof. The training and validation sets contain a total of 34,149 key frames, and the dataset provides textual descriptions of each different scene and labelled 2D and 3D target detection results. We performed all the training and evaluation experiments on the NuScenes dataset, and in order to further improve the informative diversity of this textual description and to reduce the repetition of the caption, we briefly supplemented the original caption with the target detection results labelled in the scenes. Caption complement strategy For each keyframe sample, we obtain all the obstacle information in the current perception result through the object detection labelling result. Concurrently, the frequency of each obstacle type’s occurrence is quantified. In order to mitigate the complexity of the training task, the quantity descriptors in the tabulated results are rendered more ambiguous, and supplement the caption text with e.g., “many cars, several trucks, one bus, one bicycle, one motorcycle, several traffic cones”. We concatenated the supplemented text directly after the original caption text as the scene description for our training and evaluation. Knowledge graph data We utilise autonomous driving knowledge graph (ADKG) proposed by (Wickramarachchi et al., 2020) as knowledge graph data to train knowledge graph embedding for keywords in the field of autonomous driving. ADKG is generated based on perceptual data from Hesai’s PandaSet (Xiao et al., 2021). It contains more than 57,000 instances and more than 330,000 triplets. It also contains 7 action labels and more than 40 object labels. 
Based on these labels, we manually perform synonym mapping so that the caption mentioned above can trigger the knowledge graph retrieval more frequently and accurately. 4.2 IMPLEMENTATION DETAILS In the following experiment, the default size of our BEV feature map is (2500,1,256), the hidden size of the LLM embedding is set to 4096. To control the experimental variables, we set the batch size to 32 during the training process, trained with 8*NVIDIA A100 GPU, and dynamically updated the learning rate using the cosine strategy during the training process. The baseline text encoder we use is based on the BERT model, which is natively used in CLIP and uses an MLP layer for feature mapping. We use (Rk, k=1,5,10) as our evaluation metrics of recall accuracy, they represent the percentage of correct retrieved items in top-k results. In the following experimental results, B2T and T2B refer to BEV-to-text retrieval and text-to-BEV retrieval, respectively. 4.3 EXPERIMENT RESULT To summarise our outcome, we performed the BEV-retrieval task using BEV-CLIP. We adapted pre-trained BEVFormer to extract the BEV feature, and the concatenated results of fine-tuned parameters from Llama2+LoRA with the embedding generated from the knowledge graph were used as text features. We apply SCP to map the features generated from the two branches to generate a sequence of BEV features and text features with the same dimensionalities. We supervised the training jointly by caption generation loss BEV-Text contrastive loss based on cosine similarity. | Method | LoRA | SCP | KG | CG | B2T_R1 | B2T_R5 | B2T_R10 | T2B_R1 | T2B_R5 | T2B_R10 | |----------|------|-----|----|----|--------|--------|---------|--------|--------|---------| | BERT* | – | – | – | – | 0.6409 | 0.9129 | 0.9557 | 0.5594 | 0.8915 | 0.9384 | | Llama2* | ✓ | – | – | – | 0.7875 | 0.9757 | 0.9909 | 0.8194 | 0.9812 | 0.9906 | | | ✓ | ✓ | – | – | 0.8059 | 0.9783 | 0.9947 | 0.8584 | 0.9909 | 0.9959 | | | ✓ | ✓ | ✓ | – | **0.8599** | 0.9947 | 0.9994 | 0.8757 | 0.9968 | 0.9994 | | | ✓ | ✓ | ✓ | ✓ | 0.8578 | **0.9954** | **0.9994** | **0.8766** | **0.9971** | **0.9997** | Table 2: Comparison of all results. * denotes that the model parameters are frozen, SCP refers to shared cross-modal prompt, KG refers to Distmult knowledge graph embeddings, and CG refers to caption generation head. All results of our comparative experiments are demonstrated in table[2]. We observe our best result on the combination of Llama2, LoRA, SCP, distmult knowledge graph embedding and caption generation head, which are the accuracy proportions of 85.78% and 87.66% on BEV-to-text rank@1 and text-to-BEV rank@1, respectively. And we have exceeded the accuracy 99% of the remaining indicators, which out-performs the compared baseline method. These experimental results demonstrate that our proposed BEV-CLIP method can effectively solve the BEV retrieval problem. ### 4.4 Ablation Study In this section, we verify the effect of each of our proposed methods on the retrieval results and validate the effectiveness of the methods through multiple sets of ablation experiments. We discuss the effect of the large language model, knowledge graph, Shared Cross-Modal Prompt, and caption generation tasks respectively. For the baseline of the experiments, we employ the CLIP native text branch as our adaptive method and replace the original visual branch with BEVFormer encoder. 
Additionally, we insert a layer of Multilayer Perceptron (MLP) between the BEV branch and contrastive loss in order to align the feature size. | Method | MLP | LoRA | B2T_R1 | B2T_R5 | B2T_R10 | T2B_R1 | T2B_R5 | T2B_R10 | |----------|-----|------|--------|--------|---------|--------|--------|---------| | BERT* | – | – | 0.6409 | 0.9129 | 0.9557 | 0.5594 | 0.8915 | 0.9384 | | Llama2* | ✓ | – | 0.7244 | 0.9472 | 0.9713 | 0.7030 | 0.9507 | 0.9730 | | | ✓ | ✓ | **0.7875** | **0.9757** | **0.9909** | **0.8194** | **0.9812** | **0.9906** | Table 3: Ablation study results when using different text encoder. * denotes that the model parameters are frozen. #### Language Models We adapt Llama2 as a large language model text encoder to compare with the BERT-based CLIP native text encoder. Additionally, we also utilize LoRA [Hu et al., 2021] to perform fine-tuning for language model. The experimental results are shown in table[3]. We observed that the output results using Llama2 decoder have a significant performance improvement in all metrics compared to the CLIP text branch using BERT. Meanwhile, we also tried some methods for fine-tuning of large language models to enhance the encoding ability of Llama2 for scene description, such as LoRA. LoRA is to add a low-rank weight matrix as a learnable parameter in addition to the backbone network of LLM to realize the fine-tuning of LLM. Comparing Llama2 with and without LoRA fine-tuning, it can be found that fine-tuning Llama2 using LoRA also has significant gains. Further improvements of about 6% and 10% are achieved in the B2T_R1 and T2B_R1 metrics, respectively. One possible cause for this improvement is that the pre-training task of Llama2 contains fewer autonomous driving scenarios, and the fine-tuning on the autonomous driving dataset using the LoRA approach is able to bridge this gap better. | Method | KGE | B2T_R1 | B2T_R5 | B2T_R10 | T2B_R1 | T2B_R5 | T2B_R10 | |--------------|-----|--------|--------|---------|--------|--------|---------| | Llama2* + LoRA | | | | | | | | | | TransE | 0.8009 | 0.9804 | 0.9936 | 0.8455 | 0.9892 | 0.9965 | | | Distmult | **0.8059** | **0.9783** | **0.9947** | **0.8584** | **0.9909** | **0.9959** | | | ConvE | 0.8050 | 0.9780 | **0.9956** | 0.8473 | 0.9889 | **0.9968** | Table 4: Ablation study results when using different knowledge graph. * denotes that the model parameters are frozen. KGE refers to knowledge graph embedding. Knowledge graph embeddings In order to verify the effect of adding knowledge graphs from autonomous driving related datasets to the text branch, we tried to add knowledge graphs derived from several different methods to the text branch for comparison experiments, including transE, distmult, and convE. We used the text branch of LoRA fine-tuned Llama2 as the baseline, and the experimental results are shown in Table 4. We note that after adding the text embedding output from the 3 different maps to the text branch, there is a significant improvement compared to the baseline, especially in the T2B_R1, which is improved by 3% to 4% on average. Among these 3 kinds of knowledge graph, distmult achieves the optimal result. 
| Method | LoRA | MLP | SCP | Distmult | B2T_R1 | B2T_R5 | B2T_R10 | T2B_R1 | T2B_R5 | T2B_R10 | |--------------|------|-----|-----|----------|--------|--------|---------|--------|--------|---------| | Llama2* | – | ✓ | – | – | 0.7244 | 0.9472 | 0.9713 | 0.7030 | 0.9507 | 0.9730 | | | ✓ | – | ✓ | – | 0.8291 | 0.9938 | 0.9997 | 0.8247 | 0.9959 | 1.0000 | | | ✓ | – | ✓ | ✓ | 0.7875 | 0.9757 | 0.9909 | 0.8194 | 0.9812 | 0.9906 | | | ✓ | – | ✓ | ✓ | 0.8552 | 0.9944 | 0.9988 | 0.8751 | 0.9962 | 0.9991 | | | ✓ | – | ✓ | ✓ | 0.8059 | 0.9783 | 0.9947 | 0.8584 | 0.9909 | 0.9959 | | | ✓ | – | ✓ | ✓ | 0.8599 | 0.9947 | 0.9994 | 0.8757 | 0.9968 | 0.9994 | Table 5: Ablation study results of adding SCP when using multiple different baselines.* denotes that the model parameters are frozen, SCP refers to shared multi-modal prompt. Shared cross-modal prompt SCP is used as a strategy to align text features with BEV features in our method, and we verify its effectiveness in Table 5. Comparing the three sets of ablation experiments of SCP under different conditions, it can be found that the direct introduction of SCP without modifying the encoder of the two branches is able to significantly improve the model metrics, and comparing the fine-tune strategy with the introduction of LoRA and the baseline model with knowledge graph embedding also both indicate that SCP brings effective improvement. | Method | CG | Distmult | B2T_R1 | B2T_R5 | B2T_R10 | T2B_R1 | T2B_R5 | T2B_R10 | |--------------|------|----------|--------|--------|---------|--------|--------|---------| | Llama baseline | – | – | 0.8552 | 0.9944 | 0.9988 | 0.8731 | 0.9962 | 0.9991 | | | ✓ | – | 0.8573 | 0.9930 | 0.9988 | 0.8751 | 0.9974 | 0.9991 | | | ✓ | ✓ | 0.8599 | 0.9947 | 0.9994 | 0.8757 | 0.9968 | 0.9994 | | | ✓ | ✓ | 0.8578 | 0.9954 | 0.9994 | 0.8766 | 0.9971 | 0.9997 | Table 6: Ablation study results of adding caption generation as an auxiliary task. CG refers to the model trained with caption generation head. Llama baseline includes: Llama2* + LoRA + SCP. Caption generation We add a caption generation head for assisted supervised training to fine-tune text branch for better semantic representation and to improve the feature alignment capability of the SCP. The experimental results are shown in Table 6. We observe that in the majority of cases, the adoption of the caption generation task leads to a better performance. In particular, we achieve the best result among all experiments for the T2B_R1. 5 CONCLUSION In this paper, we propose a method for cross-modal retrieval with BEV features and text features for the first time. Specifically, on the BEV-branch, we proposed that the existing BEV model can be used to obtain BEV features without fine-tuning. On the text-branch, we proposed to use the pre-trained decoder-only LLM as the text encoder and concatenate the embedding generated by the knowledge graph training in the field of autonomous driving to form more robust text features. In addition, we propose SCP to fuse and align two modal information, and add caption heads to achieve multi-task training. We quantitatively verified the effectiveness of the method through a large number of ablation experiments on NuScenes dataset, and the analysis of the visualization results shows that the BEV retrieval task can deal with complex scenes in autonomous driving that cannot be solved by relying on single-frame and single-view images. 
REFERENCES Daniel Bogdoll, Jasmin Breitenstein, Florian Heidecker, Maarten Bieshaar, Bernhard Sick, Tim Fingscheidt, and Marius Zöllner. Description of corner cases in automated driving: Goals and challenges. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1023–1028, 2021. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems, 26, 2013. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11621–11631, 2020. Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, and Luc Van Gool. Structured bird’s-eye-view traffic scene understanding from onboard images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15661–15670, 2021. Li Chen, Chonghao Sima, Yang Li, Zehan Zheng, Jiajie Xu, Xiangwei Geng, Hongyang Li, Conghui He, Jianping Shi, Yu Qiao, et al. Persformer: 3d lane detection via perspective transformer and the openlane benchmark. In European Conference on Computer Vision, pp. 550–567. Springer, 2022. Yuxiao Chen, Jianbo Yuan, Yu Tian, Shijie Geng, Xinyu Li, Ding Zhou, Dimitris N Metaxas, and Hongxia Yang. Revisiting multimodal representation in contrastive learning: from patch and token embeddings to finite discrete tokens. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15095–15104, 2023. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. Noa Garnett, Rafi Cohen, Tomer Pe’er, Roee Lahav, and Dan Levi. 3d-lanenet: end-to-end 3d multiple lane detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2921–2930, 2019. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, and Dacheng Tao. St-p3: End-to-end vision-based autonomous driving via spatial-temporal feature learning. In European Conference on Computer Vision, pp. 533–549. Springer, 2022. 
Junjie Huang, Guan Huang, Zheng Zhu, Yun Ye, and Dalong Du. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790, 2021.
Ax9cPWDKkR
While using a post-hoc metric like Shapley or the proposed AI is beneficial in some circumstances, it should be mentioned that doing so requires extra (test-time) compute. Algorithms like Q-MIX or VDN do not suffer from this issue, in the sense that they provide an estimate of the contribution at any point in training.
EFFICIENTLY QUANTIFYING INDIVIDUAL AGENT IMPORTANCE IN COOPERATIVE MARL Anonymous authors Paper under double-blind review ABSTRACT Measuring the contribution of individual agents is challenging in cooperative multi-agent reinforcement learning (MARL). In cooperative MARL, team performance is typically inferred from a single shared global reward. Arguably, among the best current approaches to effectively measure individual agent contributions is to use Shapley values. However, calculating these values is expensive as the computational complexity grows exponentially with respect to the number of agents. In this paper, we adapt difference rewards into an efficient method for quantifying the contribution of individual agents, referred to as Agent Importance, offering a linear computational complexity relative to the number of agents. We show empirically that the computed values are strongly correlated with the true Shapley values, as well as the true underlying individual agent rewards, used as the ground truth in environments where these are available. We demonstrate how Agent Importance can be used to help study MARL systems by diagnosing algorithmic failures discovered in prior MARL benchmarking work. Our analysis illustrates Agent Importance as a valuable explainability component for future MARL benchmarks. 1 INTRODUCTION In recent years, multi-agent reinforcement learning (MARL) has achieved significant progress, with agents being able to perform similar or better than human players and develop complex coordinated strategies in difficult games such as Starcraft (Samvelyan et al., 2019; Vinyals et al., 2019), Hanabi (Foerster et al., 2019; Bard et al., 2020; Hu and Foerster, 2021; Du et al., 2021) and Diplomacy (Bakhtin et al., 2022). Furthermore, MARL has also shown promising results in solving real-world problems such as resource allocation, management and sharing, network routing, and traffic signal controls (Vidhate and Kulkarni, 2017; Brittain and Wei, 2019; Nasir and Guo, 2019; Spatharis et al., 2019; Liu et al., 2020; Zhao et al., 2020; Pretorius et al., 2020; Gu et al., 2021). These real-world settings are naturally formulated as cooperative MARL systems, where agents need to coordinate to optimise the same global reward. One of the critical challenges in cooperative MARL is multi-agent credit assignment (Chang et al., 2003). Since agents typically receive a global reward for their joint actions, this makes determining individual agent contributions challenging. This need for correct attribution becomes especially important as more autonomous systems are deployed in the real world. The inherent complexity of these MARL systems impedes our understanding of decision-making processes and the motivations behind actions, hindering progress in this field. Improved credit assignment could play a vital role in comprehending agent behaviour and system-level decision-making, aiding in accountability, trust, fairness, and facilitating the detection of potential issues such as coordination failures, or unethical behaviour. Credit assignment can be considered from a core algorithmic perspective, where components of reinforcement learning (RL) algorithms, such as the value function, are adapted to better decouple the impact of the actions of individual agents. Methods such as VDN (Sunehag et al., 2017a), COMA (Foerster et al., 2018), and QMIX (Rashid et al., 2018a) fall into this domain. 
However, since these algorithms are trained end-to-end through the use of function approximators, explainability is difficult, i.e. it is challenging to correlate specific agent actions to reward outcomes over time. Furthermore, since these notions of agent impact are part of the RL algorithms themselves, it is not easy to transfer these between different algorithms. Accurate credit assignment within a team of agents can also be seen as a form of explainable AI (XAI). XAI consists of machine learning (ML) techniques that are used to provide insights into the workings of models (Arrieta et al., 2020). It has been used across various domains in ML, and more recently in single-agent RL\(^1\) (Glanois et al., 2021) and multi-agent systems (Heuillet et al., 2022). Following from (Arrieta et al., 2020; Glanois et al., 2021), we use the notion of explainability to refer to any external post-hoc methodology that is used to gain insights into a trained model. These techniques have the notable advantage of being able to be used across algorithms, often irrespective of their design or formulation. Efforts to enhance explainability in RL have resulted in the development of various techniques (Juozapaitis et al., 2019; Madumal et al., 2020; Puiutta and Veith, 2020; Glanois et al., 2021; Heuillet et al., 2021; Vouros, 2022; Dazeley et al., 2023). In contrast, MARL lacks dedicated explainability tools, with only a limited number of works addressing this topic (Kraus et al., 2019; Boggess et al., 2022; Heuillet et al., 2022). One notable approach involves leveraging the Shapley value (Shapley, 1953), a metric derived from game theory, and adapting it to MARL to quantify agent contributions to the global reward (Heuillet et al., 2022). Although Shapley values have shown promise in MARL explainability, calculating these values is expensive as the computational complexity grows exponentially with respect to the number of agents. In this paper, we highlight the need for employing explainable tools to help quantify credit assignment in cooperative MARL systems. We show that an averaged calculation of the difference reward (Wolpert and Tumer, 2001) across evaluation episodes, can be used as an effective metric for measuring an agent’s contribution, which we refer to as the Agent Importance. Unlike Shapley values, the Agent Importance has a linear computational complexity (w.r.t. the number of agents) making it more efficient to compute. Through empirical analysis, we demonstrate a strong correlation between the Agent Importance values and the true Shapley values, while also empirically validating the scalability and computational advantage of this approach. To showcase the practical use of Agent Importance, we revisit a previous benchmark in cooperative MARL (Papoudakis et al., 2021) and follow the standardised evaluation guideline proposed by (Gorsane et al., 2022) to reproduce key results from this benchmark under a sound protocol. We then proceed by applying Agent Importance to specific scenarios of interest as highlighted by the authors of this benchmark. 
This includes investigating: (1) why Multi-Agent Advantage Actor-Critic (MAA2C) (Mnih et al., 2016a; Papoudakis et al., 2021) outperforms Multi-Agent Proximal Policy Optimisation (MAPPO) (Yu et al., 2022) in the Level-Based Foraging (LBF) environment\(^2\) (Albrecht and Ramamoorthy, 2015; Albrecht and Stone, 2019; Christianos et al., 2020); and (2) why parameter sharing between agents leads to improved performance (3) analyse agents’ behaviour in case of heterogeneous settings. Using agent importance, we uncover that for (1) MAA2C achieves a more equal contribution among agents when compared to MAPPO, i.e. agents have a more similar importance to the overall team and therefore have a higher degree of cooperation; and that for (2) architectures without parameter sharing exhibit a higher variance in agent importance, leading to credit assignment issues and lower performance compared to architectures with parameter sharing. The source code to reproduce our analysis and compute the agent importance, as well as our raw experiment data is publicly available\(^3\). ### 2 RELATED WORK **Explainability in RL** With the surging popularity of Deep RL, which relies on black-box deep neural networks, there has been an increase in literature that attempts to enable human understanding of complex, intelligent RL systems (Juozapaitis et al., 2019; Madumal et al., 2020; Puiutta and Veith, 2020; Glanois et al., 2021; Heuillet et al., 2021; Vouros, 2022; Dazeley et al., 2023). Additionally, --- 1 In this paper, we use the term "RL" to exclusively refer to single-agent RL, as opposed to RL as a field of study, of which MARL is a subfield. 2 A somewhat surprising result since MAPPO uses importance sampling for off-policy correction and is expected to perform at least as well as MAA2C as it incorporates a clipping function based on importance sampling allowing data retraining without divergent policies. 3 Data is accessible at the following link. frameworks like ShinRL (Kitamura and Yonetani, 2021) and environment suites like bsuite (Osband et al., 2019) offer comprehensive debugging tools including state and action space visualizations and reward distributions, and carefully crafted environments for behavioural analysis in RL. **Explainability in MARL.** In contrast to explainable RL, there has been a limited amount of work focusing on explainability in MARL (Kraus et al., 2019; Boggess et al., 2022; Heuillet et al., 2022). Specifically, we are interested in explainability in the context of cooperative MARL with a shared, global reward and the aim is to effectively quantify credit assignment. The challenges associated with measuring credit assignment in MARL have motivated researchers to explore the use of the **Shapley value** (Shapley, 1953). Originating from game theory, the Shapley value addresses the issue of payoff distribution within a “grand coalition” (i.e. a cooperative game) and quantifies the contribution of each coalition member toward completing a task. Specifically, consider a cooperative game $\Gamma = (\mathcal{N}, v)$, where $\mathcal{N}$ is a set of all players and $v$ is the payoff function used to measure the “profits” earned by a given coalition (or subset) $\mathcal{C} \subseteq \mathcal{N} \setminus \{i\}$, such that the marginal contribution of player $i$ is given by $\phi_i(\mathcal{C}) = v(\mathcal{C} \cup \{i\}) - v(\mathcal{C})$. 
The Shapley value of each player $i$ can then be computed as: $$S_i(\Gamma) = \sum_{\mathcal{C} \subseteq \mathcal{N} \setminus \{i\}} \frac{|\mathcal{C}|!(|\mathcal{N}| - |\mathcal{C}| - 1)!}{|\mathcal{N}|!} \cdot \phi_i(\mathcal{C}). \quad (1)$$ Calculating Shapley values in the context of MARL presents two specific challenges: (1) it requires computing $2^n-1$ possible coalitions of a potential $n(2^n-1)$ coalitions (with $|\mathcal{N}| = n$) which is computationally prohibitive and (2) it strictly requires the use of a simulator where agents can be removed from the coalition and the payoff of the same states can be evaluated for each coalition. Despite its limitations, the Shapley value is able to alleviate the issue of credit assignment and help towards understanding individual agent contributions in MARL. As a result, numerous efforts have been undertaken to incorporate it as a component of an algorithm (Wang et al., 2020; Yang et al., 2020a; Han et al., 2022; Wang et al., 2022). However, in this work, we focus on the Shapley value as an explainability metric. One such approach is introduced in (Heuillet et al., 2022), where the authors utilise a Monte Carlo approximation of the Shapley value to estimate the contribution of each agent in a system, which we refer to here as **MC-Shapley**. This approximate Shapley value is computed as: $$\hat{S}_i^{MC}(\Gamma) = \frac{1}{M} \sum_{m=1}^{M} (r_{C_m \cup \{i\}} - r_{C_m}) \approx S_i(\Gamma), \quad (2)$$ where $M$ is the number of samples (episodes), $C_m$ is a randomly sampled coalition out of all possible coalitions excluding agent $i$, and $r_{C_m \cup \{i\}}$ and $r_{C_m}$ are the episode returns obtained with and without agent $i$ included in the coalition. In essence, Heuillet et al. (2022) attempts to address the second limitation of the Shapley value, which involves removing agents from the environment. They propose three strategies for proxies of agent removal while computing the return $r_{C_m}$. The first hypothesis is to provide the agent $i$ with a no-op (no-operation) action, the second is to assign the agent $i$ with a random action, and the third is to replace the action of agent $i$ with a randomly selected agent’s action from the current coalition $C_m$. The paper’s findings indicate that using the no-op approach yields the most accurate approximation of the true Shapley value. A primary limitation of this work is the dependence on a significant number of sampled coalitions, with each sample corresponding to a single episode. This characteristic has a notable impact on training speed, especially if the proposed approach is employed as an online metric for detecting the evolution of agents’ contributions during system training. **Difference Rewards.** Of central relevance to this work is difference rewards (Wolpert and Tumer, 2001; Agogino and Tumer, 2004; 2008; Devlin et al., 2014) which presents a method for estimating credit assignment within a system. It can be written as $D_i(z) = G(z) - G(z_{-i})$ where $D_i(z)$ is the difference reward for agent $i$, $z$ is a state or state-action pair depending on the application, $G(z)$ is the performance of the global system and $G(z_{-i})$ is the performance of a theoretical system that omits agent $i$. Any action taken that increases the difference reward $D_i(z)$ also increases $G(z)$ but will have a higher impact on the (typically unknown or hypothetical) individual reward for each agent compared to the global reward. 
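To make the computational contrast concrete, the following is a minimal sketch (with a purely illustrative payoff function, not one from the paper) of the exact Shapley value of Eq. (1), whose cost grows exponentially with the number of agents, alongside the difference reward $D_i$, which requires only one counterfactual evaluation per agent.

```python
import math
from itertools import combinations
from typing import Callable, FrozenSet, Sequence

def exact_shapley(agents: Sequence[int],
                  payoff: Callable[[FrozenSet[int]], float]) -> dict:
    """Exact Shapley values via Eq. (1); cost grows exponentially with |N|."""
    n = len(agents)
    values = {}
    for i in agents:
        others = [a for a in agents if a != i]
        total = 0.0
        for size in range(n):  # all coalition sizes excluding agent i
            weight = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
            for coalition in combinations(others, size):
                c = frozenset(coalition)
                total += weight * (payoff(c | {i}) - payoff(c))  # marginal contribution
        values[i] = total
    return values

def difference_reward(global_reward: Callable[[FrozenSet[int]], float],
                      agents: Sequence[int], i: int) -> float:
    """D_i = G(grand coalition) - G(grand coalition without agent i)."""
    grand = frozenset(agents)
    return global_reward(grand) - global_reward(grand - {i})

# Toy payoff: the team earns 1 for every pair of distinct agents present.
payoff = lambda c: len(c) * (len(c) - 1) / 2
agents = [0, 1, 2, 3]
print(exact_shapley(agents, payoff))                               # {0: 1.5, 1: 1.5, 2: 1.5, 3: 1.5}
print([difference_reward(payoff, agents, i) for i in agents])      # [3.0, 3.0, 3.0, 3.0]
```

In a MARL setting the payoff of a coalition is not an analytic function but must be obtained by rolling out (or stepping) the environment with the excluded agents replaced by a proxy such as a no-op action, which is precisely what makes the exponential enumeration prohibitive.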
It is from this property that we may determine the relative impact of each agent in a system. 3 Agent Importance We compute the Agent Importance as an average of difference rewards and use it as an efficient estimate of the Shapley value. To ensure accuracy in our estimation, we emphasize the importance of utilizing an adequate number of samples. This is reminiscent of the MC-Shapley approach which uses Monte Carlo approximation over entire episodes (Heuillet et al., 2022). However, in this work, we show that such an approach to estimation is not necessary and instead, we compute difference returns over samples collected per step, rather than per episode, without the need to resample coalitions. We simply compute the difference reward for each agent at each timestep during evaluation and aggregate over all evaluation timesteps. This approach greatly improves the sample efficiency in estimation during online evaluation. Concretely, the Agent Importance is given by $$S_{AI}^i(\Gamma) = \frac{1}{T} \sum_{t=1}^{T} r^t - r^t_{-i},$$ (3) where $T$ is the number of timesteps in a full evaluation interval, $r^t$ is the team reward (i.e. the reward of the grand coalition), at timestep $t$ and $r^t_{-i}$ is the team reward when agent $i$ performs a no-op action. Applying Equation 3 poses a technical challenge as it requires comparing rewards between agents based on the same exact environment state at a given timestep. In MARL, most simulators are not easily resettable and/or stateless, which makes measuring one reward and undoing that step and then measuring a second reward difficult\(^4\). To overcome this limitation, we adopt a simple solution outlined in Algorithm 1, where we create a copy of the environment for each agent to be able to compute the Agent Importance. **Algorithm 1** Per timestep difference reward contribution in Agent Importance Require: $t$: evaluation timestep, marginal_contribution: dictionary 1: env_copies ← deepcopy(env, len(agents)) \(>\) Create deepcopies of the environment. 2: $r^t$ ← env.step(selected_actions) 3: for $i$ in range(len(agents)) do 4: actions_with_no_op ← disable_actions(selected_actions, $i$) 5: $r^t_{-i}$ ← env_copies[$i$].step(actions_with_no_op) 6: add_to_dict(marginal_contribution, $i$, ($r^t - r^t_{-i}$)) 7: end for 4 Case Study: Using Agent Importance to Analyse a Prior Benchmark Our case study setup is based on the work of (Papoudakis et al., 2021), which made a comparative benchmark of cooperative MARL algorithms. The study conducts evaluations and comparisons of multiple categories of MARL algorithms, covering Q-Learning, and policy gradient (PG) methods, across two paradigms: independent learners (ILs), and centralised training with decentralised execution (CTDE). The findings of this study align with those of (Gorsane et al., 2022), concluding that current MARL algorithms are most performant on the popular Multi-Particle Environment (MPE) (Lowe et al., 2017) and Starcraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) environments—with most algorithms achieving comparable performance, in some cases seemingly to the point of overfitting. Consequently, our main analysis focuses on the remaining two environments from this benchmark: LBF, and RWARE. Environments. The Multi-Robot Warehouse (RWARE) (Christianos et al., 2020; Papoudakis et al., 2021) is a multi-agent environment that is designed to represent a simplified setting where robots move goods around a warehouse. 
The environment requires agents (circles) to move requested shelves (colored squares) to the goal post (dark squares) and back to an empty square as illustrated at \(^4\)We however do note, that this could easily be achieved with simulators written using pure functions in JAX (Freeman et al., 2021; Lange, 2022; Bonnet et al., 2023). the top of Figure 1. Tasks are partially observable with a very sparse reward signal as agents have a limited field of view and are rewarded only upon a successful delivery. Level-Based Foraging (LBF) (Albrecht and Ramamoorthy, 2015; Albrecht and Stone, 2019; Christianos et al., 2020) is a mixed cooperative-competitive game with a focus on inter-agent coordination illustrated at the bottom of Figure 1. Agents are assigned different levels and navigate a grid world where the goal is to consume food by cooperating with other agents if required. Agents can only consume food if the combined level of the agents adjacent to a given item of food exceeds the level of the food item. Agents are awarded points equal to the level of the collected food divided by their level. LBF has a particularly high level of stochasticity since the spawning position and level assigned to each agent and food are all randomly reset at the start of each episode. In the original benchmarking work by Papoudakis et al. (2021), the authors used the popular Starcraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) environment. In our case study, we instead use SMACLite (Michalski et al., 2023), an environment designed to replicate SMAC faithfully, in Python. An illustration of SMACLite is given in Figure 1. SMACLite has similar system dynamics to SMAC but does not rely on the StarCraft 2 video game engine as a backend. Due to this SMACLite requires significantly less RAM making it more suitable for utilising parallel processing. This also means it can be used in conjunction with Python methods like `copy` which makes contribution analysis methods like simpler to implement. **Algorithms.** As in the original benchmarking setup of Papoudakis et al. (2021), we use the exact same collection of algorithms for our case study. Specifically, we use the value-based algorithms Independent Q-Learning (IQL) (Tan, 1997), Value-Decomposition Network (VDN) (Sunehag et al., 2017a), and QMIX (Rashid et al., 2018a), alongside two policy-gradient (PG) algorithms, namely Multi-Agent Proximal Policy Optimisation (MAPPO) (Yu et al., 2022) and Multi-Agent Advantage Actor-Critic (MAA2C) (Foerster et al., 2018). To investigate the influence of parameter sharing, we conduct experiments with both parameter-sharing and non-parameter-sharing architectures. Further details about the algorithms can be found in the Appendix section A.2. **Evaluation Protocol.** We follow the protocol outlined by Gorsane et al. (2022), and apply the evaluation tools from Agarwal et al. (2022) in the MARL setting as advocated in the protocol. We evaluate agents at 201 equally spaced evaluation intervals for 32 episodes each during training. Following from the recommendations of Papoudakis et al. (2021) we train off-policy algorithms for a total of 2M timesteps and on-policy algorithms for a total of 20M timesteps summed across all parallel workers. This implies that evaluation occurs at fixed intervals of either 10k or 100k total environment steps for off- and on-policy algorithms respectively. For all our experiments, we use the EPyMARL framework (Papoudakis et al., 2021) which is opensourced under the Apache 2.0 licence. 
This is to ensure we are evaluating all algorithms on the same tasks, using the same codebase as was done by Papoudakis et al. (2021) for maximal reproducibility. Furthermore, it allows us to use identical hyperparameters as used in their work, which are available in the Appendix section A.3. All results that are presented are aggregated over 10 independent experiment trials. In cases where aggregations are done over multiple tasks within an environment, as opposed to an individual task (e.g. for computing performance profiles), the interquartile mean is reported along with 95% stratified bootstrap confidence intervals. For all plots except for sample efficiency curves, the absolute metric (Colas et al., 2018; Gorsane et al., 2022) for a given metric is computed. This metric is the average metric value of the best-performing policy found during training rolled out for 10 times the number of evaluation episodes. **Computational resources.** All experiments were run on an internal cluster using either AMD EPYC 7452 or AMD EPYC 7742 CPUs. Each independent experiment run was assigned 5 CPUs and 5GB of RAM with the exception of the scalability experiments which were exclusively run using AMD EPYC 7742 CPUs and either 5, 15, 30, or 200 GB of memory depending on the number of agents and subsequently the number of environment copies that were required. Figure 2: Correlation analysis for agents \( \{a_0, a_1, a_2, a_3\} \), for each metric: Agent Importance \( i \), Shapley Value \( s \), and Individual Reward \( r \) using the VDN algorithm. (a) Heatmap of Correlations among Metrics. **TOP:** LBF 15x15-4p-5f. **BOTTOM:** RWARE small-4ag. (b) Matching Rankings Comparison on LBF 15x15-4p-5f. (c) Matching Rankings Comparison on RWARE small-4ag. The legend refers to which metric is being compared to the individual agent rewards. 5 RESULTS We demonstrate the validity of Agent Importance by considering its correlation to the true Shapley value, its computational scalability and its reliability in quantifying individual agent contributions. We then proceed to illustrate how Agent Importance may be used as an explainability tool. 5.1 Validating Agent Importance Correlation between Agent Importance and the Shapley value. We note that the Agent Importance metric is not mathematically equivalent to the Shapley value. It focuses on the grand coalition rather than all possible agent coalitions. However, through empirical study, we argue that Agent Importance is sufficient for capturing agents’ contributions in the context of cooperative MARL. To validate our assertion, we conduct experiments on both LBF and RWARE to empirically assess the correlation between Agent Importance and the Shapley value. We generate a heatmap that describes the correlation between the metrics for the VDN algorithm. Furthermore, we assess the ability of a metric to maintain the relative agent rankings according to each agent’s individual rewards (which are not seen by the agents). If a metric gives the same ranking to agents, we count this as a positive result—implying that a higher-ranking match is better. While only results on VDN are displayed, the trend is consistent for all algorithms across various tasks. Further results to this end are given in the Appendix section D. Figure 2 (a) shows that there exists a strong correlation between the Agent Importance, the Shapley value and the individual agent reward as calculated by the Pearson correlation coefficient. 
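As an aside, a minimal sketch of how such a correlation and ranking-agreement analysis can be computed is given below, assuming per-agent arrays of metric values and individual rewards are available; the array names and shapes are illustrative rather than taken from the authors' evaluation code.

```python
import numpy as np

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two metric series."""
    return float(np.corrcoef(x, y)[0, 1])

def ranking_match_rate(metric: np.ndarray, reward: np.ndarray) -> float:
    """Fraction of evaluation points where a metric ranks the agents
    identically to their (ground-truth) individual rewards.

    metric, reward: (num_eval_points, num_agents) arrays, e.g. Agent Importance
    or Shapley estimates versus per-agent rewards.
    """
    metric_rank = np.argsort(np.argsort(metric, axis=1), axis=1)
    reward_rank = np.argsort(np.argsort(reward, axis=1), axis=1)
    return float(np.mean(np.all(metric_rank == reward_rank, axis=1)))

# Toy check with 4 agents over 200 evaluation points.
rng = np.random.default_rng(0)
reward = rng.normal(size=(200, 4))
importance = reward + 0.1 * rng.normal(size=(200, 4))   # a noisy estimate
print(pearson(importance.ravel(), reward.ravel()), ranking_match_rate(importance, reward))
```

An analysis of this kind underlies the correlation heatmaps and ranking comparisons reported in Figure 2.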
This indicates the effectiveness of both the Shapley value and Agent Importance in assessing agents’ contributions, making them valuable substitutes for individual agent rewards in environments where such rewards are unavailable. Notably, Agent Importance showcases a promising ability to effectively replace both the Shapley value and individual rewards. While the Shapley value may provide greater consistency in ranking information when compared to the Agent Importance (as illustrated in Figures 2 (b,c)) where the frequency of ranking agreement between the individual reward and the contribution estimators is illustrated, it is important to note that Agent Importance is highly correlated with the individual reward and shows a minimal rate of non-matched rankings. Scalability of Agent Importance. In order to validate the computational feasibility of the simplified Agent Importance against the full Shapley value we record the run time of both approaches on LBF tasks with 2, 4, 10, 20, and 50 agents. We run the algorithm without any training and compute the number of seconds it takes for agents to take a single environment step while computing each metric. The reported results here are the mean and standard deviation over 3 independent runs. The Shapley value became prohibitively slow as agent number was increased and required approximately 2 hours to measure a single step within the environment with 20 agents. Nonetheless, Figure 3 clearly illustrates how the Agent Importance is significantly more computationally efficient than the Shapley value. Reliability of Agent Importance. In order to validate the ability of the Agent Importance to effectively untangle agent contributions from a shared team reward, we create a deterministic version of LBF where agent levels are always fixed to be 1, 2, and 3 respectively, and the maximum level of each food is a random value between 1 and 6. Since agent 2 is assigned a fixed greater level than its counterparts we should expect it to contribute the most to the team return. Figure 4 illustrates the ability of Agent Importance to uncover the correct ordering and approximate level of contribution among agents towards the overall team goal. 5.2 APPLICATIONS OF AGENT IMPORTANCE We replicated the experiments performed by Papoudakis et al. (2021), obtaining similar results. However, our work adds value by following a strict protocol (Gorsane et al., 2022) which includes additional evaluation measurements such as examining the probability of improvement and providing performance profiles (Agarwal et al., 2022), as shown in Figure 5. Additional plots and tabular results for different scenarios and the performance of the algorithms without parameter sharing are included in the Appendix along with more detailed performance plots for SMAClite in Appendix section C. MAA2C vs MAPPO. Empirical results in RL consistently demonstrate that PPO tends to outperform A2C (Heess et al., 2017; Schulman et al., 2017; Henderson et al., 2018). This trend naturally leads to the question of whether a similar pattern is observed in the multi-agent setting, i.e. between MAPPO and MAA2C. However, when examining the results in Figure 5, a conflicting observation arises. In the case of RWARE, we observe the expected behaviour with the probability of improvement... aligning with our initial expectations. However, in the case of LBF, the opposite occurs as MAA2C outperforms MAPPO, presenting an unexpected outcome. Figure 6 (b) highlights a possible reason. 
By tracking Agent Importance, we may attribute this outcome to a narrowing in the spread of importance values between MAA2C agents at convergence, as compared to MAPPO agents. The assumption of lower variance in Agent Importance leading to improved performance in LBF is due to the stochasticity of the environment. It is reasonable to expect that an algorithm performing well in this environment should have the capability to adapt to the variability in agent and food levels across episodes. From the narrower spread in Agent Importance values in MAA2C we can see it has learnt treat all agents as equally important. For additional findings on RWARE see section B.3 in the supplementary material. **Parameter sharing vs non-parameter sharing.** Consistent with the findings of Papoudakis et al. (2021), our experiments demonstrate that algorithms utilizing parameter sharing outperform those without it. As mentioned in the benchmark paper, this outcome is expected as parameter sharing enhances sample efficiency. Additionally, parameter sharing enhances the sharing of learned information across the system. The Agent Importance analysis for IQL, QMIX, and VDN provides clear evidence of the impact of parameter-sharing architectures, as illustrated in Figure 7. It is apparent that in the absence of parameter sharing, the agents contribute to varying degrees, leading to an uneven distribution of importance. And as mentioned previously, given LBF’s characteristics, requiring a high level of coordination in the presence of significant stochasticity, all agents should be expected (on average) to contribute equally. However, in the non-parameter sharing cases, especially for IQL and VDN, we observe that a small number of the agents dominate the contributions, resulting in lower performance compared to when parameter sharing is utilised. Heterogeneous Agents. In both LBF and RWARE the importance of each agent and the total reward are highly correlated as all agents have similar capabilities. In the heterogeneous setting of MMM2, rather than converging to similar importance levels over time, agents will instead converge to clear groups of importance levels as seen in figures 8b and 8c. Furthermore, note that agents of the same type can still fall into different levels of importance which is consistent with role decomposition analysis in ROMA (Yang et al., 2020b). As shown by Yang et al. (2020b), the optimal policy in MMM2 requires a subset of marine agents to die early in the episode, who then cannot contribute to the team reward remaining timesteps, whereas a smaller number of marines survive until the end. This is clearly seen in figure 8c for MAPPO. In the case of MAA2C in figure 8b we can see that although clear clusters have formed, it has not learned to assign the correct importance to a subgroup of marines that are required to optimally solve the environment. 6 DISCUSSION In this work, we illustrate that Agent Importance is an efficient and reliable measure for agent contributions towards the team reward in cooperative MARL. Aside from only quantifying the agent contributions we have also shown how the metric may be used as an explainability tool for uncovering failure modes in existing MARL results. Limitations. Although Agent Importance is useful, using simulators that allow for an agent’s removal during runtime would be highly advantageous. Solely relying on no-op actions could still impact the coalition reward by obstructing other agents’ presence and movement in their observations. 
Unfortunately, agent removal is uncommon in most simulators and some simulators also do not offer the option for a no-op action. Additionally, while popular MARL research environments are fairly low resource, creating multiple parallel instances of the environment during the Agent Importance calculation, makes using more resource-heavy simulators prohibitive from a memory perspective. However, with the growing popularity of the JAX framework, more stateless environments are becoming available where the parallel environments can be replaced with direct access to the environment state (Freeman et al., 2021; Lange, 2022; Bonnet et al., 2023). Future Work. It would be useful to investigate the rankings calculated by agent importance for simulators which do not have a no-op action. We could consider using random actions or the random actions of specific agents as a proxy for the no-op action or make use of function approximators to learn minimal impact actions for the marginalised agents. REFERENCES R. Agarwal, M. Schwarzer, P. S. Castro, A. Courville, and M. G. Bellemare. Deep Reinforcement Learning at the Edge of the Statistical Precipice, 2022. --- 5 An optimal policy for MMM2 can be found in a video by the original SMAC authors in link. 6 For additional information refer to appendix C.1. A. K. Agogino and K. Tumer. Unifying temporal and structural credit assignment problems. In *Autonomous Agents and Multi-Agent Systems Conference*, 2004. A. K. Agogino and K. Tumer. Analyzing and visualizing multiagent rewards in dynamic and stochastic domains. *Autonomous Agents and Multi-Agent Systems*, 17:320–338, 2008. S. V. Albrecht and S. Ramamoorthy. A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems, 2015. S. V. Albrecht and P. Stone. Reasoning about hypothetical agent behaviours and their parameters, 2019. A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. *Information fusion*, 58:82–115, 2020. A. Bakhtin, N. Brown, E. Dinan, G. Farina, C. Flaherty, D. Fried, A. Goff, J. Gray, H. Hu, A. P. Jacob, M. Komeili, K. Konath, M. Kwon, A. Lerer, M. Lewis, A. H. Miller, S. Mitts, A. Renduchintala, S. Roller, D. Rowe, W. Shi, J. Spisak, A. Wei, D. J. Wu, H. Zhang, and M. Zijlstra. Human-level play in the game of diplomacy by combining language models with strategic reasoning. *Science*, 378:1067 – 1074, 2022. N. Bard, J. N. Foerster, S. Chandar, N. Burch, M. Lanctot, H. F. Song, E. Parisotto, V. Dumoulin, S. Moitra, E. Hughes, I. Dunning, S. Mourad, H. Larochelle, M. G. Bellemare, and M. Bowling. The hanabi challenge: A new frontier for ai research. *Artificial Intelligence*, 280:103216, 2020. ISSN 0004-3702. doi: https://doi.org/10.1016/j.artint.2019.103216. URL https://www.sciencedirect.com/science/article/pii/S0004370219300116. K. Boggess, S. Kraus, and L. Feng. Toward policy explanations for multi-agent reinforcement learning. In *International Joint Conference on Artificial Intelligence*, 2022. C. Bonnet, D. Luo, D. Byrne, S. Abramowitz, V. Coyette, P. Duckworth, D. Furelos-Blanco, N. Grinsztajn, T. Kalloniatis, V. Le, O. Mahjoub, L. Midgley, S. Surana, C. Waters, and A. Laterre. Jumanji: a suite of diverse and challenging reinforcement learning environments in jax, 2023. URL https://github.com/instandeepai/jumanji. M. Brittain and P. Wei. 
Autonomous air traffic controller: A deep multi-agent reinforcement learning approach, 2019. Y.-H. Chang, T. Ho, and L. Kaelbling. All learning is local: Multi-agent learning in global reward games. *Advances in neural information processing systems*, 16, 2003. F. Christianos, L. Schäfer, and S. Albrecht. Shared experience actor-critic for multi-agent reinforcement learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, *Advances in Neural Information Processing Systems*, volume 33, pages 10707–10717. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/7967cc8e3ab559e68cc944c44b1cf3e8-Paper.pdf. C. Colas, O. Sigaud, and P.-Y. Oudeyer. Gep-pg: Decoupling exploration and exploitation in deep reinforcement learning algorithms. In *International conference on machine learning*, pages 1039–1048. PMLR, 2018. R. Dazeley, P. Vamplew, and F. Cruz. Explainable reinforcement learning for broad-xai: a conceptual framework and survey. *Neural Computing and Applications*, pages 1–24, 2023. S. Devlin, L. Yliniemi, D. Kudenko, and K. Tumer. Potential-based difference rewards for multiagent reinforcement learning. In *Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems*, pages 165–172, 2014. W. Du, S. Ding, C. Zhang, and S. Du. Modified action decoder using bayesian reasoning for multi-agent deep reinforcement learning. *International Journal of Machine Learning and Cybernetics*, 12, 10 2021. doi: 10.1007/s13042-021-01385-7.
22to0JZ4zh
Table 2 suggests that SSBM is more effective at preserving the marginals than at minimizing the KL divergences. Do the authors have any insights or intuitions about why this is the case? It may be helpful to explain how this observation arises, as it is not explicitly implied by the algorithm itself.
SYMMETRIZED SCHRÖDINGER BRIDGE MATCHING Anonymous authors Paper under double-blind review ABSTRACT Schrödinger bridge (SB) has demonstrated numerous applications in probabilistic generative modeling. Finding the solution of probability paths aligns with entropy-regularized optimal transport that employs the Sinkhorn algorithm, which is characterized by performing iterative proportional fitting between marginal densities. This paper argues that the standard training of the SB is prone to exaggerate the amount of learning due to its inherent geometric nature. We leverage a symmetrized variant of Sinkhorn to study more lenient convergence of Schrödinger potentials and prove distinctive theoretical properties of the symmetrization such as linear convergence and monotonic improvements. To this end, we propose a dynamic SB algorithm named Symmetrized Schrödinger Bridge Matching (SSBM). Inspired by score and flow matching models, the concurrent projection scheme of SSBM is conceptualized as matching forward and backward drifts concurrently, constructing a time-symmetric learning objective for the SB model. We empirically validate our SB method by solving classical optimal transportation and model-based stochastic optimal control problems with physical dynamics. 1 INTRODUCTION The Schrödinger bridge (SB; Schrödinger, 1932) offers a general formulation for the dynamical evolution of a particle system. The corresponding problem has gained popularity by its connection to the entropy regularized Monge-Kantorovich optimal transport (EOT; Peyré et al., 2019), implying various applications in diverse areas such as image processing, natural language processing, and control systems (Pavon & Wakolbinger, 1991; Léonard, 2012; Caron et al., 2020; Liu et al., 2023; Alvarez-Melis & Jaakkola, 2018; Chen et al., 2022). For its computation, the SB problem is typically solved by the Sinkhorn algorithm (Sinkhorn & Knopp, 1967; Cuturi, 2013), relying on iterative projections between marginals. The algorithm is renowned for the simplicity and the convergence properties inherent to iterative proportional fitting (IPF; Kullback, 1968; Ruschendorf, 1995). There has been great advancement in synthesizing complex data distributions for deep generative models. Score-based models (Song et al., 2021) seek to find nonlinear functions that transform simple distributions into complex data distributions. These models are characterized by learning the time-reversal process of progressive diffusion starting from data (Sohl-Dickstein et al., 2015), through matching the score function of a stochastic differential equation (SDE). Another line of research involves flow matching (Lipman et al., 2023), which stems from deterministic conditional OT paths between marginals. This is well-described by a continuous vector field of probability ordinary differential equation (ODE), which governs a direct way of translating one distribution to another (Chen et al., 2018). The success of both approaches is supported by nonlinear computational models such as neural networks and corresponding learning schemes for their guidance. Recent studies have highlighted that SB succeeds in fundamental aspects of score and flow matching models (Liu et al., 2023; Shi et al., 2023). For instance, Learning of SB generally performs score matching where the first training stage of IPF is equivalent to the exact score matching. 
The projection corresponds to the variational lower bound maximization, or Kullback-Leibler (KL) projection under the Girsanov theorem (Huang et al., 2021). On the other hand, the fluid dynamics formulation of SB generalizes flow-based model by a time-symmetric drift field of probability flow (Nelson, 2001). Despite the strong resemblance, we claim that the direct extension of Sinkhorn only partially embraces the advancement of deep generative models due to the strict geometric constraint of IPF’s alternation. In order to efficiently solve the SB problem with a handful of networks at most, we investigate whether there is a more lenient way of training SB models at the algorithmic level. Figure 1: An entropic OT problem and IPFP sequences. In the problem, our PSIPF method (Algorithm 2) shows monotonic decrement of temporal variation, and induce more stable learning than standard IPF (Algorithm 1). In this work, we study an alternative learning scheme to the Sinkhorn algorithm. We leverage the concept called symmetrization (Kurkas, 2015) and propose a novel pseudo-symmetric variant of Sinkhorn for reaching a fair amount of updates at each iteration. We claim that not only does this approach find another way of convergence but also retains distinct properties which help SB training in practice, especially for costly projections involving deep neural networks. For the sake of better understanding, we conducted an actual experiment in Fig. 1. The blue lines in the plots demonstrate that our strategy results in reduced perturbation in both total variation and ground-truth loss. For the detailed comparison, Algorithm 1 outlines the discrete Sinkhorn-Knopp algorithm. For a cost matrix $C$, its objective is to model optimal transport coupling i.e., $\pi_* = \text{diag}(u_*)K\text{diag}(v_*)$, which represents a coupling between $\mu$ and $\nu$. The consecutive IPF projections are represented in Lines 3 and 4. A discretized version of our approach, called pseudo-symmetric IPF, is demonstrated in Algorithm 2. Note that the projection onto coupled $(\tilde{u}_\ell, \tilde{v}_\ell)$ occurs in parallel from the current budget $(u_{\ell-1}, v_{\ell-1})$. Due to the contraction of projective operations between submanifolds (Bauschke & Borwein, 1994), we adjust the iterates with symmetrical division by the square root of the measure contraction coefficient $\kappa_\ell$. For $n$ dimension, the complexity of each iteration is asymptotically bounded as $O(n^2)$ for both IPF methods. To this end, we propose Symmetrized Schrödinger Bridge Matching (SSBM), a practical algorithm for training Schrödinger bridge, similar to well-established score and flow matching methods. First, we formally state the theoretical benefits of linear convergence and the monotonic improvement in the static SB problem. We then devise our learning algorithm for the dynamic SB problem, which is based on an entropic version of optimal transport in mass-preserving fluid dynamics (Benamou & Brenier, 2000). For the matching objective, we concurrently train both forward and backward models and construct a time-symmetric learning objective for the SB model. As shown in Table 1, one of the key features of our framework is the preservation of both score functions of marginal distributions, which is a similar property appeared in deep Schrödinger bridge matching (DSBM; Shi et al., 2023). 
However, our approach, unlike DSBM, does not have the constraint of tractable reference measure, which allows us to train a dynamic SB model with arbitrary physical dynamics. Our contributions are three-fold. First, we present a symmetrization scheme, which has theoretically pleasing properties that reduce the instability of neural network training. Second, we devise a matching algorithm that allows easier training. Lastly, we validate SSBM to OT benchmarks and stochastic control problems with physical dynamics and compare it to other SB algorithms. ### 2 Schrödinger Bridge Problem and Symmetrized Sinkhorn **Schrödinger Bridge Problem (SBP).** Let $(X, \mu)$ and $(Y, \nu)$ be Polish spaces. For marginals $(\mu, \nu)$ and the associated path measure $P(\mu, \nu)$ with a given time interval $[0, T]$, the formal de- Table 1: A technical overview. DSB-IPF (De Bortoli et al., 2021) allows dynamical training by drift matching. DSBM-IMF (Shi et al., 2023) preserves marginals during training. SSBM combines these desired properties with stable learning. | Preserving $c \cdot (\mu, \nu)$ | Dynamic | Monotony | |---------------------------------|---------|----------| | DSB | ✗ | ✔ | | DSBM | ✔ ($c=1$)| ✗ | | SSBM | ✔ ($c=\kappa$)| ✔ | **Algorithm 1** The Sinkhorn-Knopp algorithm (IPF). Input: a pair $(\mu, \nu)$, a cost matrix $C$, $\lambda \in \mathbb{R}^+$. 1: $u^{(0)} = 1_\mu$, $K = \exp(-C/\lambda)$ 2: for $n = 1$ to $N$ do 3: $v^{(2n-1)} = \nu \otimes [K^\top u^{(2n-2)}]$ 4: $u^{(2n)} = \mu \otimes K v^{(2n-1)}$ 5: return $\text{diag}(u^{(2N)})K\text{diag}(v^{(2N-1)})$ **Algorithm 2** A pseudo-symmetric variant (PSIPF). 1: $u^{(0)} = 1_\mu$, $v^{(0)} = 1_\nu$, $K = \exp(-C/\lambda)$ 2: for $\ell = 1$ to $L$ do 3: $\tilde{u}^{(\ell)} = \mu \otimes [K v^{(\ell-1)}]$ 4: $\tilde{v}^{(\ell)} = \nu \otimes [K^\top u^{(\ell-1)}]$ 5: $\kappa_\ell = \sum[\text{diag}(\tilde{u}^{(\ell)})K\text{diag}(\tilde{v}^{(\ell)})]$ 6: $u^{(\ell)} = \tilde{u}^{(\ell)}/\sqrt{\kappa_\ell}$, $v^{(\ell)} = \tilde{v}^{(\ell)}/\sqrt{\kappa_\ell}$ 7: return $\text{diag}(u^{(L)})K\text{diag}(v^{(L)})$ scription of SBP is to find the KL projection \( P_{SB} := \inf_{\mathbb{P} \in \Pi(\mu, \nu)} H(\mathbb{P} || Q) \) where \( Q \in \mathcal{P}(\mu, \nu) \) is a reference measure. Let \( \Pi(\mu, \nu) \) be the set of couplings and \( c \) be a continuous cost function. By the disintegration of measures (Léonard, 2014), the relative entropy of SBP yields the chain rule \[ H(\mathbb{P} || Q) = H(\pi || G) + \int_{X \times Y} H(\mathbb{P}_\pi || Q_G) \, d\pi(x, y), \] where \( \otimes \) is the product of measures and \( G \) denotes a Gibbs measure \( dG \propto e^{-c_\lambda} d(\mu \otimes \nu) \) for \( c_\lambda := c/\lambda \) with \( \lambda \in \mathbb{R}^+ \). Enforcing the conditional probabilities \( \mathbb{P}_\pi = Q_G \) yields the problem: \[ \inf_{\pi \in \Pi(\mu, \nu)} H(\pi || G) = \int_{X \times Y} c(x, y) \, d\pi(x, y) + \lambda H(\pi || \mu \otimes \nu) + \text{const}. \] Therefore, the static SBP is equivalent to the standard EOT with a \( \lambda \)-regularizer. This relationship allows us to consider the problem as finding an optimal EOT plan \( \pi_* \in \Pi(\mu, \nu) \) (Peyré et al., 2019). **Sinkhorn Algorithm.** The constrained optimization (2) naturally yields the strong duality. In particular, consider the Schrödinger potentials \( (\varphi_*, \psi_*) \), which constitute \( \pi_* \) with the Radon-Nikodym derivative: \( d\pi_* = e^{\varphi_* \oplus \psi_* - c_\lambda} d(\mu \otimes \nu) \). 
Then, the following statement holds for the potentials. **Lemma 2.1** (Duality of SBP; Theorem 3.2 of Nutz, 2021). Assume the existence of Schrödinger bridge \( \pi_* \in \Pi(\mu, \nu) \) and corresponding Schrödinger potentials \( (\varphi, \psi) \in L^1(\mu) \times L^1(\nu) \). Then, \[ \min_{\pi \in \Pi(\mu, \nu)} H(\pi || G) = \sup_{\varphi, \psi} F(\varphi, \psi), \quad F(\varphi, \psi) := \mu(\varphi) + \nu(\psi) - \int_{X \times Y} e^{\varphi \oplus \psi} \, dG + 1, \] where \( \oplus \) indicates the direct sum of two potentials and \( \mu(\varphi) := \int_X \varphi \, d\mu \) and \( \nu(\psi) := \int_Y \psi \, d\nu \). From a geometric perspective, the Sinkhorn updates are characterized by differentiating the dual functional \( F \). As a result, the algorithm performs alternating projections (Nutz & Wiesel, 2023): \[ \psi_{2n-1}(y) = -\log \int_X e^{\varphi_{2n}(x) - c_\lambda(x,y)} \mu(dx), \quad \varphi_{2n}(x) = -\log \int_Y e^{\psi_{2n-1}(y) - c_\lambda(x,y)} \nu(dy), \] for all \( (x, y) \in X \times Y \), and these operations are essentially linear in terms of exponential. The estimation of the coupling from the current budget naturally split into two versions: \[ d\pi_{2n-1} = e^{\varphi_{2n-2} \oplus \psi_{2n-1} - c_\lambda} d(\mu \otimes \nu), \quad d\pi_{2n} = e^{\varphi_{2n} \oplus \psi_{2n-1} - c_\lambda} d(\mu \otimes \nu). \] Note that the acquisition of marginals is also splitted; \( \mu \) can be acquired with the first marginal of \( \pi(\varphi_{2n}, \psi_{2n-1}) \), and \( \nu \) with the second marginal of \( \pi(\varphi_{2n-2}, \psi_{2n-1}) \) by its alternating nature. **A Symmetrization Proposal.** In this work, we propose the following symmetrization framework for SBP which is the direct extension of Algorithm 2. The procedure is composed of two stages. First, the Schrödinger potentials are concurrently updated with intermediate representations: \[ \tilde{\varphi}_\ell(x) = -\log \int_Y e^{\psi_{\ell-1}(y) - c_\lambda(x,y)} \nu(dy), \quad \tilde{\psi}_\ell(y) = -\log \int_X e^{\varphi_{\ell-1}(x) - c_\lambda(x,y)} \mu(dx). \] Unlike alternating update in Eq. (5), it is evident that the concurrent operation in Eq. (6) does not satisfy the constraint of \( \Pi(\mu, \nu) \); thus, \( (\tilde{\varphi}_\ell, \tilde{\psi}_\ell) \) are not associated as potentials. Hence, one can subsequently recover the constraint by equally subtracting a certain amount: \[ \varphi_\ell(x) = \tilde{\varphi}_\ell(x) - \log \sqrt{\kappa_\ell}, \quad \psi_\ell(y) = \tilde{\psi}_\ell(y) - \log \sqrt{\kappa_\ell}, \] where \( \kappa_\ell \) denotes measure contraction \( \kappa_\ell := \int_{X \times Y} e^{\tilde{\varphi}_\ell \oplus \tilde{\psi}_\ell - c_\lambda} d(\mu \otimes \nu) \). Applying the projection in parallel, the algorithm seeks to recover both marginals involving the scaling factor \( \sqrt{\kappa_\ell} \). **Remark 2.2.** For \( \{\pi_\ell\}_{\ell \in \mathbb{N}} \), \( \int_Y e^{\varphi_\ell \oplus \psi_{\ell-1} - c_\lambda} d\nu = \mu / \sqrt{\kappa_\ell} \) and \( \int_X e^{\varphi_{\ell-1} \oplus \psi_\ell - c_\lambda} d\mu = \nu / \sqrt{\kappa_\ell} \). Compared to the standard Sinkhorn, the estimation of coupling from current budget writes in a singular form \( d\pi_\ell = e^{\varphi_\ell \oplus \psi_\ell - c_\lambda} d(\mu \otimes \nu) \). This work refers to the procedure as symmetrized Sinkhorn. ### 3 THEORETICAL ANALYSES ON SYMMETRIZED SINKHORN This section analyzes the associated sequences i.e. 
\( \{\varphi_\ell\}_{\ell \in \mathbb{N}}, \{\psi_\ell\}_{\ell \in \mathbb{N}} \), and \( \{\pi_\ell\}_{\ell \in \mathbb{N}} \), in their convergence and theoretical properties. Assume that the projections occur in finite-dimensional spaces and that the cost function is bounded. Under these assumptions, we show the linear convergence of symmetrized Sinkhorn for the dual functional \( F \); thus, our method drives the potentials to a unique fixed point with gradual improvements.

**Proposition 3.1** (Linear convergence). Suppose a bounded cost, and let \((\mathcal{X}, \mathcal{F}_1, \mu)\) and \((\mathcal{Y}, \mathcal{F}_2, \nu)\) be probability spaces. The sequence of symmetrized Sinkhorn iterates \((\varphi_\ell, \psi_\ell)\) converges strongly in \(L^p(\mathcal{X}, \mathcal{F}_1, \mu) \times L^p(\mathcal{Y}, \mathcal{F}_2, \nu)\) for \(p \in [1, \infty]\). Provided that the solution \(\pi_*\) exists,
\[
F(\varphi_*, \psi_*) - F(\varphi_\ell, \psi_\ell) \leq k^\ell \left( F(\varphi_*, \psi_*) - F(\varphi_0, \psi_0) \right), \quad \ell \in \mathbb{N}
\]
holds, where \(k = 1 - e^{-22\|c_\lambda\|_\infty} \in (0, 1)\) and \((\varphi_*, \psi_*)\) are the optimal potentials for \(F\).

Notice that our analysis achieves a slightly tighter contraction than the \((1 - e^{-24\|c_\lambda\|_\infty})\) of centered Sinkhorn (Carlier, 2022); the difference is mainly due to the fact that we perform more projections per iteration. Therefore, this result further suggests that increasing the number of projections per iteration helps achieve a fair amount of learning during SB training. Meanwhile, the Birkhoff-Bushell theorem (Birkhoff, 1957; Bushell, 1973) predicts the measure contraction property \(\log \kappa_\ell \leq 0\). Since the suboptimality gap shrinks according to Eq. (8), \(\log \kappa_\ell\) increases monotonically to 0. Using this property, we present the monotony of the algorithm in terms of relative entropy for sufficiently large iterations. It is related to the well-known monotony of the Sinkhorn iterates \(\{H(\pi_*|\pi_{2n})\}_{n>0}\) and \(\{H(\pi_*|\pi_{2n-1})\}_{n>0}\) (Nutz, 2021). However, the drawback there is that \(H(\pi_*|\pi_{n+1}) \leq H(\pi_*|\pi_n)\) does not hold across consecutive iterates; thus, the inconsistency might persist, especially when the IPF projections are estimated with a finite number of neural networks.

**Proposition 3.2** (Monotony of symmetrized Sinkhorn). Suppose that the EOT coupling \(\pi_*\) exists. For sufficiently large iterations \(\ell\), the relative entropy between couplings monotonically decreases for the iterates drawn from the symmetrized Sinkhorn, i.e., \(H(\pi_*|\pi_\ell) \leq H(\pi_*|\pi_{\ell-1})\).

In a computational context, our algorithm is also stable with discrete measures (i.e., mini-batch learning), as it inherits geometric convergence properties from Sinkhorn (see Theorem 2.1 of Nutz & Wiesel, 2023). To summarize, we have found that our algorithm brings theoretically pleasing properties, namely linear convergence and monotonic improvement. Compared to established probabilistic generative methods, training an SB model has been criticized for its complexity and instability in finding solutions (Liu et al., 2023). Our hypothesis is that the benefits of the symmetrization hold for general SB problems. We claim that our symmetrization scheme leads to more stable results than standard Sinkhorn, especially when solving SBP relies on finite models and the correlation between subsequent iterations is considerable.
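Before moving to the dynamic setting, a minimal NumPy sketch of the static procedures may help fix ideas. It implements Algorithm 1 (alternating Sinkhorn–Knopp updates) and Algorithm 2 (the pseudo-symmetric variant with the \( \sqrt{\kappa_\ell} \) correction of Eq. (7)) on discrete marginals, reading the scaling operations in the pseudocode as element-wise divisions as in standard Sinkhorn; the marginals, cost matrix, regularization \( \lambda \), and iteration count below are placeholders.

```python
import numpy as np

def sinkhorn_ipf(mu, nu, C, lam=1.0, n_iters=200):
    """Algorithm 1: alternating Sinkhorn-Knopp / IPF updates."""
    K = np.exp(-C / lam)
    u, v = np.ones_like(mu), np.ones_like(nu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)   # fit the second marginal
        u = mu / (K @ v)     # fit the first marginal
    return np.diag(u) @ K @ np.diag(v)

def sinkhorn_psipf(mu, nu, C, lam=1.0, n_iters=200):
    """Algorithm 2: pseudo-symmetric IPF; both scalings are updated
    concurrently from the previous iterate and rescaled by sqrt(kappa)."""
    K = np.exp(-C / lam)
    u, v = np.ones_like(mu), np.ones_like(nu)
    for _ in range(n_iters):
        u_tilde = mu / (K @ v)     # concurrent update of u ...
        v_tilde = nu / (K.T @ u)   # ... and of v, both from the old (u, v)
        kappa = float(u_tilde @ K @ v_tilde)  # total mass of the unnormalized coupling
        u, v = u_tilde / np.sqrt(kappa), v_tilde / np.sqrt(kappa)
    return np.diag(u) @ K @ np.diag(v)

# Toy usage on uniform discrete marginals with a squared-distance cost.
n = 8
mu = nu = np.full(n, 1.0 / n)
grid = np.linspace(0.0, 1.0, n)
C = (grid[:, None] - grid[None, :]) ** 2
pi = sinkhorn_psipf(mu, nu, C, lam=0.1)
print(np.abs(pi.sum(axis=1) - mu).sum(), np.abs(pi.sum(axis=0) - nu).sum())  # TV to the marginals
```

Tracking the total variations printed above (or \( \log \kappa_\ell \)) across iterations gives a quick empirical counterpart to the monotone behavior described in this section.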
4 SYMMETRIZED SCHRÖDINGER BRIDGE MATCHING Based on the theoretical analyses, we aim to apply the aforementioned symmetrization framework to general controllable dynamics for robust training of models. We propose a deep dynamic SB algorithm, which we call SSBM. To computationally model the notion of concurrent projection, we utilize the essential technique from matching algorithms (De Bortoli et al., 2021; Lipman et al., 2023) to provide symmetric targets. This section sets the regularization coefficient \(\lambda\) to 1. 4.1 DYNAMIC CONTROL FORMULATION OF SB Suppose that stochastic processes control the path measures \(\mathbb{P}^\mu\) and \(\mathbb{P}^\nu\) starting from \(\mu\) and \(\nu\), respectively. If these two measures form SB between the marginals, the solution is represented with time-varying potentials \((\Psi, \hat{\Psi}) \in C^{1,2}([0,T], \mathbb{R}^n)\) which construct coupled SDEs: \[ dX_t = \left[ f(t, X_t) + gg^T(t, X_t) \nabla \log \Psi(t, X_t) \right] dt + g(t, X_t) dW_t, \quad X_0 \sim \mu, \] \[ dX_s = \left[ -f(s, \bar{X}_s) + gg^T(s, \bar{X}_s) \nabla \log \hat{\Psi}(s, \bar{X}_s) \right] ds + g(s, \bar{X}_s) dW_s, \quad \bar{X}_0 \sim \nu, \] where \(f\) and \(g\) denote base drift and diffusion function given by the environment. In the SDEs, \(X_t\) evolves with the “forward” equation (9a), and \(\bar{X}_s\) also evolves, but with the “reversed” time coordinate \(s := T - t\). It is well known that \(\Psi\) and \(\hat{\Psi}\) satisfy the partial differential equation (PDE): \[ \begin{cases} \frac{\partial \Psi(t,x)}{\partial t} = -\nabla \Psi^T f - \frac{1}{2} \text{Tr}(gg^T \nabla^2 \Psi) & \Psi(0,\cdot) = \mu, \\ \frac{\partial \hat{\Psi}(t,x)}{\partial t} = -\nabla \cdot (\hat{\Psi} f) + \frac{1}{2} \nabla^2 \cdot (gg^T \hat{\Psi}) & \hat{\Psi}(T,\cdot) = \nu, \end{cases} \] where the operator \(\nabla^2\) and \(\nabla^2 \cdot\) denotes a shorthand notation for the Hessian and a nested divergence operator for matrix functions. The PDE (10) suggests that \((\Psi, \hat{\Psi})\) formulates the solutions to minimum control (or entropy-regularized) optimization problem (Bensoussan et al., 2013) while preserving density. Using nonlinear Feynman-Kac (FK) lemma (Han et al., 2018; Pereira et al., 2020), SB studies have presented a deep neural network parameterization according to the forward-backward SDE (SB-FBSDE; Chen et al., 2022; Liu et al., 2022a), where we delineate the detailed derivation to Appendix B. Based on the SB-FBSDE theory, training of one of SDE is based on the backward trajectories sampled from the reversed counterpart, maximizing the likelihood of the reversed path measure. 4.2 TIME-SYMMETRIC APPROACH TO DYNAMICAL SB PROBLEMS To construct the learning target for both forward and backward SDEs, we utilize an optimal transport formulation in mass-preserving fluid dynamics. Suppose a drift field \( v : [0, T] \times \mathbb{R}^d \rightarrow \mathbb{R}^d \) and a corresponding probability density \( \rho(t, \cdot) \in \mathcal{P}(\mathbb{R}^d) \) where \( \mathcal{P}(\mathbb{R}^d) \) is the set of probability measures on \( \mathbb{R}^d \). Using the Nelson’s duality (Nelson, 2001), we define time-symmetric current drift \( v_t(x) := \frac{1}{2}[f^+(t, x) - f^-(s, \bar{x})] \) where drifts \( f^+(t, x) \) and \( f^-(s, \bar{x}) \) drawn from the FBSDE. 
For a transport cost function \( c(x, y) = \frac{1}{2}\|x-y\|^2 \), an entropic analogue of the Benamou-Brenier formula (Benamou & Brenier, 2000; Gigli & Tamanini, 2020), or the time-symmetric dynamical SBP writes \[ H(\mathbb{P}|\mathbb{Q}) = \inf_{(v_t, \rho_t)} \left\{ \int_0^T \int_{\mathbb{R}^d} \left( \frac{1}{2}\|v_t - f_t(x)\|^2 + \frac{1}{8}\|\nabla \log \rho_t(x)\|_{gg^\top}^2 \right) \rho_t(x) dx dt \middle| \frac{\partial \rho}{\partial t} + \nabla \cdot (\nu \rho) = 0 \right\}. \] The objective encodes the kinetic energy endowed with a geometry incurred by the Fisher information metric, and the condition on the righthand side is called the continuity equation which states the conservation of the probability density. Under mild conditions, Eqs. (1) and (11) are equivalent in terms that the cost of energy in the space of information geometry models the EOT problem. Just like other SB representations, the optimality is unique, satisfying a Hamilton-Jacobi equation (HJE). In the following proposition, we present the HJE with a function \( \Phi \) defined with \( (\Psi, \hat{\Psi}) \). **Proposition 4.1.** Suppose a function \( \Phi \in C^{1,1}([0, T], \mathbb{R}^n) \) and let \( f, g \) satisfy growth and Lipschitz conditions. The vector field \( v_t(x) := f_t(x) + gg^\top \nabla \Phi(t, x) \) corresponds to the solution of Eq. (11) if \[ \frac{\partial \Phi(t, x)}{\partial t} + v_t \cdot \nabla \Phi(t, x) = \frac{1}{4}\|\nabla \log \Psi(t, x)\|_{gg^\top}^2 + \frac{1}{4}\|\nabla \log \hat{\Psi}(s, \bar{x})\|_{gg^\top}^2, \] \[ \Phi(t, x) := \frac{1}{2}\{\log \Psi(t, x) - \log \hat{\Psi}(s, \bar{x})\}, \quad s := T - t, \] where the potentials \( (\Psi, \hat{\Psi}) \) satisfy the PDE (10). Due to the uniqueness of SDE solutions, Eqs. (10) and (12) predict the identical SB structure. In quantum mechanics, \( j = v\rho = \frac{1}{2}(\hat{\Psi} \nabla \Psi - \Psi \nabla \hat{\Psi}) \) is often called as probability flux (Paul & Baschnagel, 1999; Chen et al., 2017), making a concise way of describing the continuity \( \partial_t \rho_t + \nabla \cdot j = 0 \). In this context, we can understand the relationship between \( (v, \rho) \) and \( (\Psi, \hat{\Psi}) \) as two equivalent representations of EOT for a probability path along \( (\mu, \nu) \). Since the HJE (12) have the symmetric property, where \( (\Psi, \hat{\Psi}) \) both involved regardless of path distributions \( \mathbb{P}^\mu \) and \( \mathbb{P}^\nu \), we propose to consider the current vector field \( v_t \) as a symmetrized learning target for achieving the SB optimality. 4.3 ITERATIVE PROBABILITY FLUX MATCHING Under the Girsanov theorem (Øksendal, 2003), maximizing log-likelihoods by matching drifts corresponds to KL projections for another path measures; consequently, this has inspired consecutive SB methods in previous studies (Algorithm 3; De Bortoli et al., 2021; Vargas et al., 2021). **Proposition 4.2** (Girsanov theorem). For two drifts \( f^+ \) and \( f^- \) from \( \mathbb{P}^\mu \in \mathcal{P}(\mu, \cdot) \) and \( \mathbb{P}^\nu \in \mathcal{P}(\cdot, \nu) \), define respective probability densities as \( (\rho^+, \rho^-) \) and time-reversal drifts \( (\gamma^+, \gamma^-) \). 
Then, \[ H(\mathbb{P}^\mu|\mathbb{P}^\nu) = \frac{1}{2} \int_0^T \mathbb{E}_{x \sim \rho^+(t, \cdot)} \left\| (f^+ - \gamma^+)(t, x) \right\|_{gg^\top}^2 dt \tag{13a} \] \[ H(\mathbb{P}^\nu|\mathbb{P}^\mu) = \frac{1}{2} \int_0^T \mathbb{E}_{x \sim \rho^-(s, \cdot)} \left\| (f^- - \gamma^-)(s, \bar{x}) \right\|_{gg^\top}^2 ds \tag{13b} \] where \( H(\cdot|\cdot) \) denotes relative entropy between two path measures. Algorithm 3 Schrödinger bridge matching (SBM). Input: \( \mu, \nu, \mathbb{P}_0^\mu \) 1: \( n = 1 \) 2: repeat # consecutive KL projection. 3: \( \mathbb{P}_{2n-1}^\mu = \text{arginf}_{\mathbb{P}^\nu \in \mathcal{P}(\cdot, \nu)} H(\mathbb{P}^\nu || \mathbb{P}_{2n-2}^\mu) \) 4: \( \mathbb{P}_{2n}^\nu = \text{arginf}_{\mathbb{P}^\mu \in \mathcal{P}(\cdot, \mu)} H(\mathbb{P}^\mu || \mathbb{P}_{2n-1}^\nu) \) 5: \( n := n + 1 \) 6: until convergence; 7: return \( \mathbb{P}_*^\mu, \mathbb{P}_*^\nu \). Algorithm 4 Symmetrized SB matching (SSBM). Input: \( \mu, \nu, \mathbb{P}_0^\mu, \mathbb{P}_0^\nu \) 2: repeat # concurrent KL projection. 3: \( \mathbb{P}_\ell^\mu = \text{arginf}_{\mathbb{P}^\nu \in \mathcal{P}(\cdot, \nu)} H(\mathbb{P}^\nu || \mathbb{P}_{\ell-1}^\mu) \) 4: \( \mathbb{P}_\ell^\nu = \text{arginf}_{\mathbb{P}^\mu \in \mathcal{P}(\cdot, \mu)} H(\mathbb{P}^\mu || \mathbb{P}_{\ell-1}^\nu) \) 5: Obtain \( j_\ell \) using \( (\mathbb{P}_\ell^\mu, \mathbb{P}_\ell^\nu) \) via HJE (15). 6: Update \( (\mathbb{P}_\ell^\mu, \mathbb{P}_\ell^\nu) \) using \( j_\ell \) via \( L_{\text{SDE}} \) and \( L_{\text{BSDE}} \). 7: \( \ell := \ell + 1 \) 8: until convergence; return \( \mathbb{P}_*^\mu, \mathbb{P}_*^\nu \). The theorem suggests that one way to achieve KL projection between path measures is by matching drifts with the time reversal drifts of \((\gamma^+, \gamma^-)\). Following DSB and CFM (De Bortoli et al., 2021; Lipman et al., 2023), we train two target drifts \((\tilde{f}_\ell^+, \tilde{f}_\ell^-)\) with conditional drift matching (CDM) loss: \[ L_{\text{CDM}}(\ell) = \mathbb{E}_{t,q(x_t)p_{\ell-1}^\pm(x'|x_t)} \left[ \|\tilde{f}_\ell^\pm(x') - (x - x')/\varepsilon\|^2_{g g^\top(t \mp \varepsilon, x')} \right] \] where \( \pm \) and \( \mp \) indicates consideration of signs regarding its timelines (+ and −), and \( p^\pm \) is a discrete Markovian kernel of \( \rho^\pm \) for a small time interval \( \varepsilon \). For instance, we can sample data using the Euler-Maruyama integration. If the distribution \( q(\cdot) \) covers the desired support set, the relative entropies Eq. (13) and conditional matching loss (14) offer identical gradients to the target networks. In order to model the dynamic version of symmetrized Sinkhorn, we need a learning method for SDE drifts \( f_\ell^\pm \) that preserves probability density along \((\mu, \nu)\). Therefore, we utilize Proposition 4.1 and define the estimated target current drift \( v_\ell(t, x) := 1/2[\tilde{f}_\ell^+(t, x) - \tilde{f}_\ell^-(t, x)] \) and nonlinear FK transformations \( Y_\ell \approx \log \Psi \) and \( \hat{Y}_\ell \approx \log \hat{\Psi} \). 
Hence, we propose the following loss function: \[ L_\Phi(\ell) = \mathbb{E}_{t,x} \left[ \frac{\partial \Phi_\ell}{\partial t} + v_\ell \cdot \nabla \Phi_\ell - \frac{1}{4} \|\nabla Y_\ell\|^2_{g g^\top} - \frac{1}{4} \|\nabla \hat{Y}_\ell\|^2_{g g^\top} \right], \] \[ \Phi_\ell(t, x) := 1/2(Y_\ell(t, x) - \hat{Y}_\ell(s, \bar{x})), \quad s = T - t \] We also keep the marginal score consistency \( \nabla Y_\ell(0, \cdot) + \nabla \hat{Y}_{\ell-1}(0, \cdot) = \nabla \log \mu \) and \( \nabla Y_{\ell-1}(T, \cdot) + \nabla \hat{Y}_\ell(T, \cdot) = \nabla \log \nu \) with an auxiliary loss using score-based methods. Consequently, we can achieve the SB model \((Y_\ell, \hat{Y}_\ell)\) that traverses with the target vector field \( v_\ell \) with density preservation (7), and also preserves marginal score functions for both sides (Remark 2.2). The updates are uniquely defined for every iteration. Finally, the obtained \( Y_\ell \) and \( \hat{Y}_\ell \) are used to train drifts, the forward drift is trained with the following loss functions achieving maximum likelihood estimation for \( \mathcal{P}(\mu, \cdot) \) and \( \mathcal{P}(\cdot, \nu) \). \[ L_{\text{FSDE}}(\ell) = \mathbb{E}_{t,x} \left[ f_\ell^+ - \{ f(t, x) + g^\top \nabla Y_\ell(t, x) \} \right] \tag{16a} \] \[ L_{\text{BSDE}}(\ell) = \mathbb{E}_{s,\bar{x}} \left[ f_\ell^- - \{ -f(s, \bar{x}) + g^\top \nabla \hat{Y}_\ell(s, \bar{x}) \} \right] \tag{16b} \] Algorithm 4 summarizes the SSBM procedure. Notice that we make abstraction for some of the details by using the notion of path measures and probability flux, which have equivalent meanings to training with the proposed loss functions. This abstraction also allows us to understand Fig. 2, which illustrates SSBM, symmetric learning toward an optimum by matching probability flux \( j_\ell = v_\ell \rho_\ell \). We leave more algorithmic details in the appendix. 5 RELATED WORKS We are interested in one fundamental aspect of Schrödinger bridge (Schrödinger, 1931; 1932), specifically its equivalence with EOT structures (Peyré et al., 2019; Nutz, 2021). In machine learning, there has been progress in training SB with nonlinear networks with Sinkhorn algorithm (Vargas et al., 2021; De Bortoli et al., 2021; Chen et al., 2022). Recently, the general convergence of... Sinkhorn for various conditions has been extensively studied (Peyré et al., 2019; Nutz & Wiesel, 2023; Deng et al., 2023; Chen et al., 2023). As a symmetric counterpart, we propose symmetrized Sinkhorn, which extends PSIPF (Kurra, 2015) to SB problems while retaining theoretically pleasing features of PSIPF, which are advancements from the analyses of (Carlier, 2022; Nutz, 2021). Score-based methods have exhibited exceptional image generation for diffusion models (Ho et al., 2020; Song et al., 2021). From a perspective of variational methods, such score-matching algorithms can be considered as iteratively elevating a lower bound of maximum likelihood estimation through backward stochastic integration (Huang et al., 2021). On the other hand, the flow matching algorithms (Lipman et al., 2023) model vector fields of conditional flow, which often leads to efficient regression model for static OT. It has been verified that the SB is aligned with both score and flow matching (Liu et al., 2023; Shi et al., 2023). By leveraging iterative minimization of KL divergence between path measures (Øksendal, 2003; Vargas, 2021), training SB models have been more inclined toward score matching. 
Nelson (2001) displayed a time-symmetric configuration of diffusion bridge, uncovering the duality between stochastic process and vector field of mass flow. The optimal control formalization of SB (Pavon & Wakolbinger, 1991; Léonard, 2012) put each control agent in the symmetrical game with their respective timeline; the goal is to model controlled SDEs with minimum control (Van Handel, 2007). The Hamilton-Jacobi equation offers a dynamic programming approach through a well-formulated PDE (Kirk, 1970; Zavidovique, 2020). Using the SB-FBSDE theory, a multi-step temporal difference method has been proposed Liu et al. (2022a) via backward stochastic integration of the BSDEs (Bellman, 1954; Sutton & Barto, 2018). However, recent work shows that treating BSDE as updates could struggle to find convergent solutions due to the stochastic cost variance (Andersson et al., 2023). The time-symmetrical HJE has been theoretically studied (Chen et al., 2016; Gigli & Tamanini, 2020), which models entropy regularized Benamou-Brenier formula (Benamou & Brenier, 2000) for EOT in mass-preserving fluid dynamics. ![Figure 3](image.png) **Figure 3**: An overview of the experiment. (a) Classical OT experiment in Euclidean space. We also varied the marginal distributions and dimensional to measure stability and solubility. (b) Stochastic optimal control experiments with various physical dynamics. The control is represented as an external force, and there are other forces in the environment, such as drag and gravitational force. ### 6 EXPERIMENTAL RESULTS We validated our SSBM on two classes of SB problems, including classical OT problem and general optimal control problem (Fig. 3). The goal of the OT experiment was to validate the stability of the SSBM approach in comparison to prior methods under diverse configurations. Also, we subjected the optimal control variant of SSBM to validation within second-order dynamics as the underlying physical system. These systems have served as the foundational physical framework and have been a subject of classical control studies (Abraham & Marsden, 2008). We parameterized the functions with fully connected deep neural networks. OT networks adopted sinusoidal time embeddings and were trained with AdamW. We set $\lambda = 1$ and SDEs were solved with the Euler-Maruyama method. #### 2D OT Experiments. We first show our proposed method achieved competitive OT performance in 2D problems. We compared our method SSBM with DSB (De Bortoli et al., 2021), DSB-IMF (Shi et al., 2023), Rectified Flow (RF; Liu et al., 2022b), SB-CFM (Tong et al., 2023). RF uses an iterative flow matching procedure in order to improve the straightness of the flow, and SB-CFM utilizes batch-wise Sinkhorn solvers to define an approximate SB static coupling. Since our algorithm used the conditional drift matching loss Eq. (14) to construct a target current drift $v_t$, the algorithmic improvements compared to standard Sinkhorn are closely related to comparison with DBM. Table 2 shows the OT experiment among four different types of marginal distributions. In total variations, SSBM achieved the five best results for eight configurations. In path relative Table 2: OT performance evaluated using path relative entropies and marginal total variations across four 2D experiments (5 runs). The best outcomes among these are highlighted in bold. 
| Dataset | Forward Path Relative Entropy | Backward Path Relative Entropy | |---------|-------------------------------|--------------------------------| | | gaussian multimodal s-curve moon | gaussian multimodal s-curve moon | | DSB | 411.872±63.015 22.432±13.328 19.481±11.717 6.398±1.599 | 33.097±17.383 120.255±234.005 19.089±10.305 6.369±1.755 | | DSBM | 8.936±0.294 **3.864±0.276** 6.866±0.304 3.518±0.190 | 8.942±0.291 **3.942±0.256** 6.939±0.203 3.469±0.193 | | SB-CFM | **8.877±0.310** 4.067±0.360 **6.829±0.175** 3.342±0.150 | **8.893±0.324** 4.119±0.287 **6.912±0.042** 3.341±0.120 | | SSBM | 9.560±0.540 5.874±0.389 7.626±1.418 5.718±0.970 | 9.593±0.623 5.629±0.294 9.544±0.591 5.075±0.811 | | Dataset | Temporal Variation ($\mu$) | Temporal Variation ($\nu$) | |---------|---------------------------|---------------------------| | | gaussian multimodal s-curve moon | gaussian multimodal s-curve moon | | DSB | 2.903±0.797 8.893±13.599 0.930±0.009 2.352±0.215 | 13.140±15.666 5.105±2.846 3.997±0.269 3.141±0.079 | | DSBM | 2.301±0.037 2.276±0.039 0.494±0.016 2.280±0.024 | 2.260±0.040 3.383±0.060 3.651±0.029 3.216±0.029 | | RF | 2.802±0.117 2.384±0.034 1.417±0.037 2.374±0.017 | 2.345±0.046 3.241±0.032 **2.870±0.036** 3.071±0.030 | | SB-CFM | 2.259±0.032 2.226±0.058 **0.452±0.017** 2.198±0.043 | 2.285±0.049 3.385±0.055 3.633±0.037 3.192±0.024 | | SSBM | **1.906±0.070** 1.911±0.064 0.493±0.008 **1.877±0.033** | **1.928±0.073** 3.130±0.032 3.091±0.037 3.147±0.089 | entropy, DSBM and SB-CFM showed remarkable performance. This is due to the usage of reference measures and static SB solvers, which often leads to stabilized energy levels in the static settings. This showed that our SSBM method achieved stable OT results, which are aligned with our theory. High-Dimensional Gaussian. Next, we conducted large-scale Gaussian OT experiment that appeared in (De Bortoli et al., 2021) with $d \in \{1, 20, 50\}$ to verify the scalability of our method. In Table 3, we quantified the accuracy, which shows that our SSBM showed better results for both marginals, and the gap between SSBM and other algorithms increased with the dimension. The results are closely related to the analysis of SB for Gaussian measures from (Bunne et al., 2023), that the entropic Benamou-Brenier equation dictates how the mass should be transported globally, rather than focusing on the quantity of the mass. Thus, we conclude that the symmetrization incurs scalability for large data. 2D Physical Mass Control. We considered environments characterized by point mass dynamics operating under second-order principles. In these scenarios, two agents are initially located at distant positions at a steady state. The objective was to establish dynamic SB between starting and goal positions, where we considered stochastic control by generating force. The simulations focused on two distinct settings. In the Branching environment, there are one initial and two goal positions. The environments consist of drag forces proportional to the velocities, which take the kinetic energy, and eventually, the systems halt without external control. In the Gravitational environment, there is a constant gravitation force in the $y$-axis; thus, each control agent needs to control against gravity. Table 3: Average total variation of OT in multi-dimensional Gaussian distributions. 
| TV($\mu$) | d = 1 | d = 20 | d = 50 | |-----------|-------|--------|-------| | DSB | 1.458±0.450 | 27.119±0.472 | 70.927±2.200 | | DSBM | 1.138±0.027 | 23.742±0.169 | 106.980±1.104 | | RF | 1.227±0.004 | 22.935±0.062 | 57.592±0.173 | | SB-CFM | 1.131±0.065 | 22.618±0.024 | 64.792±0.756 | | SSBM | **0.966±0.017** | **19.955±1.516** | **48.646±3.754** | | TV($\nu$) | d = 1 | d = 20 | d = 50 | |-----------|-------|--------|-------| | DSB | 10.327±13.374 | 26.934±0.769 | 70.690±1.805 | | DSBM | 1.121±0.013 | 23.601±0.116 | 106.234±0.818 | | RF | 1.078±0.031 | 22.942±0.063 | 57.598±0.179 | | SB-CFM | 1.091±0.023 | 22.618±0.035 | 64.797±0.847 | | SSBM | **0.931±0.029** | **19.646±1.483** | **44.995±0.994** | Figure 4: Trajectories in the point control problems. If SB exists in the acceleration space, particles will show a time-symmetric maneuver along with the initialization points (Blue & Red). Top: Our SSBM shows stochastic control that reaches multiple goals in a given time. Bottom: SSBM controls against gravity, successfully formulating SB regardless of apparent external forces. In order to successfully model SB in the control problems, the maneuver between particles should be time-symmetric regardless of external drag and gravitational forces. The model-based control result in Fig. 4 shows that SSBM successfully induces Schrödinger bridge structure so that each position in the $t$ coordinates corresponds to positions of the reverse process in $T - t$ of the $s$ time coordinate. Notably, we observed that the induced SB structure was curved downwards when there is a gravitational force. This verifies that our SSBM is able to model path distributions with the principle of minimum control. Table 4: Control performance measured by positional distances (5 runs, gravitation=1.0). | goal dist. | Branching | Gravity | Pendulum | reverse | Branching | Gravity | Pendulum | |------------|-----------|---------|----------|---------|-----------|---------|----------| | DSB | 0.316±0.039 | **0.092±0.026** | 2.634±0.692 | DSB | 0.316±0.039 | 0.352±0.068 | **0.007±0.001** | | SSBM | **0.255±0.023** | 0.151±0.024 | **0.230±0.079** | SSBM | **0.117±0.010** | **0.177±0.030** | 0.135±0.125 | A Pendulum. Lastly, we considered a physical control problem of the pendulum environment. Since a pendulum is connected to a rod, this particular problem consists of variable gravitational force depending on the pendulum’s angular position. In Fig. 5, the forward and reverse control agents swing the pendulum in a time-symmetrical manner, changing their angular positions throughout the time. Table 4 shows the numerical results compared to the DSB algorithm. In all cases, SSBM induced much more stable results in terms of forward and reverse control. By considering the success of the pendulum problem as reaching the top with the red pendulum by reaching the goal angles $\pi$ and $-\pi$, only SSBM was able to show the success of modeling SB in the task. Therefore, we conclude that our theoretical claims were also verified in the dynamic SB problems. 7 CONCLUSION In this paper, we presented a symmetrization framework developed to solve both static and dynamic SBPs. Our approach allowed us to construct an optimal transport algorithm, with theoretical guarantees of linear convergence and monotonic improvements for a divergence. Based on the evidence, we claimed that the proposed SSBM method mitigate exaggerated displacement of couplings in Sinkhorn, by reflecting both sides of projection for each iteration. 
Compared to prior methods, our method empirically showed overall better stability in terms of learning and control. The computational success of EOT methods was hinged upon the information geometrical properties of the KL divergence. Our work complements this key idea with a few more insights, implementing an efficient algorithm for finding the solution, which is more generally applicable to arbitrary space with Bregman projection (Bregman, 1967). A distinguishing feature—and concurrently a limitation—of SSBM is the dependency of the construction of target current drift on the conditional drift matching along with its corresponding samples. Such advancements in solving the IPF projections in non-compact high-dimensional spaces will help actualize the general application of SB methods in various subfields of machine learning. REFERENCES Ralph Abraham and Jerrold E. Marsden Marsden. *Foundations of Mechanics*. American Mathematical Soc., second edition, 2008. David Alvarez-Melis and Tommi S Jaakkola. Gromov-wasserstein alignment of word embedding spaces. *arXiv preprint arXiv:1809.00013*, 2018. Kristoffer Andersson, Adam Andersson, and Cornelis W Oosterlee. Convergence of a robust deep fbsde method for stochastic control. *SIAM Journal on Scientific Computing*, 45(1):A226–A255, 2023. Heinz H Bauschke and Jonathan M Borwein. Dykstra’ s alternating projection algorithm for two sets. *Journal of Approximation Theory*, 79(3):418–443, 1994. Richard Bellman. The theory of dynamic programming. *Bulletin of the American Mathematical Society*, 60(6):503–515, 1954. Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the monge-kantorovich mass transfer problem. *Numerische Mathematik*, 84(3):375–393, 2000. Alain Bensoussan, Jens Frehse, Phillip Yam, et al. *Mean field games and mean field type control theory*, volume 101. Springer, 2013. Garrett Birkhoff . Extensions of jentzsch’s theorem. *Transactions of the American Mathematical Society*, 85(1):219–227, 1957. Lev M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. *USSR computational mathematics and mathematical physics*, 7(3):200–217, 1967. Rainer Buckdahn, Juan Li, Shige Peng, and Catherine Rainer. Mean-field stochastic differential equations and associated PDEs. *The Annals of Probability*, 45(2):824 – 878, 2017. Charlotte Bunne, Ya-Ping Hsieh, Marco Cuturi, and Andreas Krause. The schrödinger bridge between gaussian measures has a closed form. In *International Conference on Artificial Intelligence and Statistics*, pp. 5802–5833. PMLR, 2023. P. J. Bushell. Hilbert’s metric and positive contraction mappings in a banach space. *Archive for Rational Mechanics and Analysis*, 52:330–338, 1973. Guillaume Carlier. On the linear convergence of the multimarginal sinkhorn algorithm. *SIAM Journal on Optimization*, 32(2):786–794, 2022. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. *Advances in neural information processing systems*, 33:9912–9924, 2020. Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. In *Advances in Neural Information Processing Systems*, pp. 6572–6583, 2018. Tianrong Chen, Guan-Horng Liu, and Evangelos A. Theodorou. Likelihood training of schrödinger bridge using forward-backward sdes theory. 
In *10th International Conference on Learning Representations*, 2022. Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. On the relation between optimal transport and schrödinger bridges: A stochastic control viewpoint. *Journal of Optimization Theory and Applications*, 169:671–691, 2016. Yongxin Chen, Tryphon T Georgiou, and Allen Tannenbaum. Matrix optimal mass transport: a quantum mechanical approach. *IEEE Transactions on Automatic Control*, 63(8):2612–2619, 2017.
ykW3hvy6DL
In Theorem 6.4, the results show the existence of a subsequence of weights produced by NGD and GD that has a slow margin maximization rate. Is there a dataset where, if one instead considers the sequence $w'_t$ defined by
$$ w'_t \in \arg\min_{s \leq t} \left\| \frac{w(s)}{\|w(s)\|} - w^* \right\|, $$
this sequence has a much better margin maximization rate?
ACHIEVING Margin Maximization EXPONENTIALLY FAST via PROGRESSIVE NORM RESCALING Anonymous authors Paper under double-blind review ABSTRACT In this work, we investigate the margin-maximization bias exhibited by gradient-based algorithms in classifying linearly separable data. We present an in-depth analysis of the specific properties of the velocity field associated with (normalized) gradients, focusing on their role in margin maximization. Inspired by this analysis, we propose a novel algorithm called Progressive Rescaling Gradient Descent (PRGD) and show that PRGD can maximize the margin at an exponential rate. This stands in stark contrast to all existing algorithms, which maximize the margin at a slow polynomial rate. Notably, we identify mild conditions under which we show that existing algorithms such as gradient descent (GD) and normalized gradient descent (NGD) provably fail in maximizing the margin efficiently. To validate our theoretical findings, we present both synthetic and real-world experiments. Notably, PRGD also shows promise in enhancing the generalization performance when applied to linearly non-separable datasets and deep neural networks. 1 INTRODUCTION In modern machine learning, models are often over-parameterized in the sense that they can easily interpolate training data, giving rise to a loss landscape with many global minima. Although all these minima yield zero training loss, their generalization ability can vary significantly. Intriguingly, it is often observed that Stochastic Gradient Descent (SGD) and its variants consistently converge to solutions with favorable generalization properties even without needing any explicit regularization (Neyshabur et al., 2014; Zhang et al., 2021). This phenomenon implies that the “implicit bias” inherent in SGD plays a crucial role in ensuring the efficacy of deep learning; therefore, revealing the underlying mechanism is of paramount importance. Soudry et al. (2018) investigated this issue in the context of classifying linearly separable data with linear models. The study showed that gradient descent (GD) trained with exponentially-tailed loss functions can implicitly maximize the $\ell_2$-margin during its convergence process, ultimately locating a maximum-margin solution. This discovery offers valuable insights into the superior generalization performance often observed with GD, as larger margins are generally associated with improved generalization (Boser et al., 1992; Bartlett et al., 2017). However, the margin maximization rate of GD has been proven to be extremely slow, at a rate of $O(1/\log t)$. Since then, many researchers have dedicated themselves to designing algorithms aimed at accelerating the margin maximization in this problem. Notably, Nacson et al. (2019b); Ji & Telgarsky (2021) employ GD with an aggressive step size to improve this margin maximization rate, and Ji & Telgarsky (2021) demonstrated that GD with an aggressive step size can achieve polynomially fast margin maximization at a $O(1/t)$ rate. More recently, Ji et al. (2021) introduced a momentum-based gradient method by applying Nesterov acceleration to the dual formulation of this problem. Their approach attains a remarkable margin maximization rate of $\tilde{O}(1/t^2)$, currently standing as the state-of-the-art algorithm for this problem. 
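To give a concrete feel for the gap between these rates, the following toy sketch (an illustration under assumed settings, not the paper's experimental setup) runs plain GD and NGD — gradient descent with the step size divided by the current loss value, as formalized in Section 3 — with the exponential loss on a small linearly separable dataset and prints the normalized margin over time; the dataset, step size, and horizon are arbitrary placeholders.

```python
import numpy as np

# A toy linearly separable dataset with ||x_i|| <= 1 (hypothetical, for illustration only).
X = np.array([[0.9, 0.2], [0.8, -0.3], [-0.9, 0.3], [-0.7, -0.2]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def loss_and_grad(w):
    # Exponential loss L(w) = (1/n) * sum_i exp(-y_i <w, x_i>) and its gradient.
    e = np.exp(-y * (X @ w))
    return e.mean(), -(X * (y * e)[:, None]).mean(axis=0)

def margin(w):
    # Normalized margin gamma(w) = min_i y_i <w / ||w||, x_i>.
    return np.min(y * (X @ w)) / (np.linalg.norm(w) + 1e-12)

eta = 0.5
w_gd, w_ngd = np.zeros(2), np.zeros(2)
for t in range(1, 1001):  # longer horizons would need log-domain arithmetic to avoid underflow
    L, g = loss_and_grad(w_gd)
    w_gd = w_gd - eta * g              # GD update
    L, g = loss_and_grad(w_ngd)
    w_ngd = w_ngd - eta * g / L        # NGD update: gradient step rescaled by 1 / L(w)
    if t in (10, 100, 1000):
        print(f"t={t:5d}  margin(GD)={margin(w_gd):.4f}  margin(NGD)={margin(w_ngd):.4f}")
```

On data of this kind, the GD margin typically creeps upward at an O(1/log t)-like pace while the NGD margin approaches the maximum margin roughly as O(1/t), matching the rates collected in Table 1 below.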
In this work, we present a systematic analysis of the unique properties of the velocity field related to (normalized) gradients, highlighting that the centripetal velocity is a key factor in determining the rate of margin maximization. Notably, we identify mild conditions, under which the above margin-maximization rates: $O(1/t)$ for NGD and $O(1/\log t)$ for GD are tight, explaining why GD and NGD are inefficient in maximizing the margin. This is due to the fact that the gradients tend to align closely with the direction of the regularization path, causing the centripetal velocity to diminish... during convergence. These insights inform a strategy to speed up the margin maximization via maintaining a non-degenerate centripetal velocity: - We first show that there exists a favorable semi-cylindrical surface that is away from the regularization path and as such, the centripetal velocity is uniformly lower-bounded there. Leveraging this property, we introduce a novel algorithm called PRGD, which cyclically rescales parameters to semi-cylindrical surfaces with progressive radius. In order to keep the iterations on the semi-cylindrical surfaces, we perform projection in each step. Notably, we prove that PRGD can maximize the margin at an exponential rate \( O(e^{-\Omega(t)}) \). - We then validate our theoretical findings through both synthetic and real-world experiments. In particular, when applying PRGD to non-separable datasets and homogenized deep neural networks—beyond the scope of our theory—we still observe consistent test performance improvements. This suggests that our theory can be potentially extended to nonlinear homogenized networks. We summarize our theoretical results and the comparison with existing ones in Table 1. Table 1: Comparison of the directional convergence rates of different algorithms under Assumption 3.1, 5.4, and \( w^* \neq \frac{1}{|\mathcal{I}|} \sum_{i \in \mathcal{I}} x_i y_i \). | Algorithm | Directional Convergence Rate \( e(t) = \frac{\|w(t) - w^*\|}{\|w(t)\|} \) | |--------------------|---------------------------------------------------------------| | GD | \( e(t) = O(1/\log t) \) (Soudry et al., 2018), \( e(t_k) = \Theta(1/\log t_k) \) (Thm 6.4) | | NGD | \( e(t) = O(1/t) \) (Ji & Telgarsky, 2021), \( e(t_k) = \Theta(1/t_k) \) (Thm 6.4) | | Dual Acceleration | \( e(t) = O(1/t^2) \) (Ji et al., 2021) | | PRGD | \( e(t) = e^{-\Omega(t)} \) (Thm 6.2) | 2 RELATED WORK Understanding the implicit bias of optimization algorithms is one of the most important problems in deep learning theory. This topic has been extensively studied recently, and in this section, we only review those that are closely related to the current work. Margin maximization of GD. The margin-maximization bias of GD trained with exponentially-tailed loss functions was originally studied in Soudry et al. (2018). Except for works mentioned above, Ji & Telgarsky (2018b) investigated the margin-maximization bias of GD for classifying datasets that are not linearly separable. Nacson et al. (2019c) proved margin maximization for SGD. Furthermore, Gunasekar et al. (2018a); Wang et al. (2022); Sun et al. (2022) characterized the implicit bias of many other optimization algorithms. Recently, Wu et al. (2023) analyzed the impact of edge of stability (Cohen et al., 2021; Wu et al., 2018) for achieving margin maximization. In a related study, Ji et al. (2020) examined other types of loss functions and regularization path. 
In a similar setup, researchers also explored the implicit bias of GD on nonlinear models, such as deep neural networks (DNNs). Specifically, Ji & Telgarsky (2018a); Gunasekar et al. (2018b) investigated the implicit bias on deep linear fully-connected and convolutional networks. Nacson et al. (2019a); Lyu & Li (2019); Ji & Telgarsky (2020) proved that GD on homogeneous DNNs converges to the KKT direction of an \( \ell_2 \) max-margin problem. Recently, Kunin et al. (2023) extended this result to quasi-homogeneous networks.

**Other implicit biases.** It is widely believed that flatter minima lead to better generalization (Hochreiter & Schmidhuber, 1997; Keskar et al., 2016). Recent studies (Wu et al., 2018; Ma & Ying, 2021; Wu et al., 2022) provided explanations for why SGD tends to select flat minima on DNNs, using dynamical stability analysis. Additionally, Woodworth et al. (2020); Nacson et al. (2022) investigated how the initialization scale and step size affect the selection bias of GD between the “kernel” and “rich” regimes on linear diagonal neural networks.

3 PRELIMINARIES

**Notation.** We use bold letters for vectors and lowercase letters for scalars, e.g., \( x = (x_1, \ldots, x_d)^T \in \mathbb{R}^d \). For any vector \( v \), we use \( \hat{v} = v/\|v\| \) to denote the normalized vector. We use \( \langle \cdot, \cdot \rangle \) for the standard Euclidean inner product between two vectors, and \( \|\cdot\| \) for the \( \ell_2 \) norm of a vector or the spectral norm of a matrix. We use standard big-O notations \( O, \Omega, \Theta \) to hide absolute positive constants, and use \( \tilde{O}, \tilde{\Omega}, \tilde{\Theta} \) to further hide logarithmic constants. For any positive integer \( n \), let \([n] = \{1, \cdots, n\}\).

**Classification Problem.** In this paper, we consider the binary classification problem. Given a dataset \( S = \{(x_i, y_i)\}_{i=1}^n \subset \mathbb{R}^d \times \{\pm 1\} \), we need to find some model \( f(\cdot; \theta): \mathbb{R}^d \rightarrow \mathbb{R} \) that classifies all data correctly, i.e., \( y_i f(x_i; \theta) > 0 \). Without loss of generality, we assume \( \|x_i\| \leq 1 \) for all \( i \in [n] \). Moreover, we consider linearly separable data, which is a standard assumption in analyzing the implicit bias of GD (Soudry et al., 2018; Nacson et al., 2019b; Ji & Telgarsky, 2021).

**Assumption 3.1 (linear separability).** There exists \( w \in S^{d-1} \) such that \( \min_{i \in [n]} y_i \langle w, x_i \rangle > 0 \).

A classical method to solve the binary classification problem is the \( \ell_2 \) Support Vector Machine (\( \ell_2 \)-SVM), which solves the optimization problem:
\[
\max_{\|w\| \leq 1} \min_{i \in [n]} y_i \langle w, x_i \rangle.
\]
Since the \( \ell_2 \)-SVM is equivalent to a strongly convex quadratic programming problem with linear constraints, we have the following classical result: under Assumption 3.1, the \( \ell_2 \)-SVM problem has a unique optimal solution \( w^* \in S^{d-1} \).

**Margin and Max-margin.** Consequently, under Assumption 3.1, we can define the margin of \( w \in \mathbb{R}^d \) as
\[
\gamma(w) := \min_{i \in [n]} y_i \left\langle \frac{w}{\|w\|}, x_i \right\rangle.
\]
Moreover, we denote the max-margin and the max-margin direction as
\[
\gamma^* := \max_{\|w\| \leq 1} \min_{i \in [n]} y_i \langle w, x_i \rangle \quad \text{and} \quad w^* := \arg \max_{\|w\| \leq 1} \min_{i \in [n]} y_i \langle w, x_i \rangle,
\]
respectively.
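To make these definitions concrete, the following is a minimal NumPy sketch (ours, not from the paper) that evaluates the margin \( \gamma(w) \), checks its scale invariance, and approximates \( \gamma^* \) and \( w^* \) for a toy two-dimensional dataset. The dataset and the grid-search shortcut are purely illustrative; in general one would solve the \( \ell_2 \)-SVM quadratic program.

```python
import numpy as np

# Toy linearly separable dataset in R^2 with ||x_i|| <= 1 (illustrative only).
X = np.array([[0.6, 0.7], [-0.5, 0.8], [0.4, -0.9]])
y = np.array([1.0, 1.0, -1.0])

def margin(w, X, y):
    """gamma(w) = min_i y_i <w/||w||, x_i>."""
    w_hat = w / np.linalg.norm(w)
    return np.min(y * (X @ w_hat))

# Scale invariance: gamma(w) = gamma(c * w) for any c > 0.
w = np.array([0.3, 1.0])
assert np.isclose(margin(w, X, y), margin(5.0 * w, X, y))

# Approximate the max-margin direction w* in 2D by a grid search over unit
# vectors (a shortcut that only works in low dimension; the proper route is
# the strongly convex l2-SVM quadratic program mentioned above).
angles = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
candidates = np.stack([np.cos(angles), np.sin(angles)], axis=1)
margins = np.min(y[:, None] * (X @ candidates.T), axis=0)
w_star = candidates[np.argmax(margins)]
gamma_star = margins.max()
print("approx. w* =", w_star, " approx. gamma* =", gamma_star)
```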
**Logistic Regression.** Another classical machine learning algorithm for the binary classification problem is the following logistic regression:
\[
\min_{w \in \mathbb{R}^d} L(w) = \frac{1}{n} \sum_{i=1}^n \ell(y_i \langle w, x_i \rangle),
\]
(1)
where \( \ell(\cdot): \mathbb{R} \rightarrow \mathbb{R} \) is an exponential-type loss function (Soudry et al., 2018; Nacson et al., 2019b). This includes widely used classification loss functions such as the exponential loss and the logistic loss. For the sake of simplicity, our analysis will focus on the exponential loss \( \ell(z) = e^{-z} \), although it can be easily extended to the logistic loss \( \ell(z) = \log(1 + e^{-z}) \).

As a baseline algorithm, GD can be applied to solve Problem (1):
\[
\text{GD: } w(t+1) = w(t) - \eta \nabla L(w(t)).
\]
(2)
Soudry et al. (2018) showed that under Assumption 3.1, GD (2) with \( \eta \leq 1 \) converges to the \( \ell_2 \) max-margin solution while minimizing the loss. However, this occurs at a slow rate \( \gamma^* - \gamma(w(t)) = O(1/\log t) \). To enhance this implicit bias, one can adopt the following Normalized Gradient Descent (NGD) with \( \eta \leq 1 \) (GD with an aggressive step size) to achieve margin maximization at a polynomial rate \( \gamma^* - \gamma(w(t)) = O(1/t) \) (Ji & Telgarsky, 2021). The update rule of NGD is:
\[
\text{NGD: } w(t+1) = w(t) - \eta \frac{\nabla L(w(t))}{L(w(t))}.
\]
(3)

**Regularization Path.** Our subsequent analysis will leverage properties of the regularization path (Hastie et al., 2004; Ji et al., 2020). Consider the regularized solution defined by \( w^*_\text{reg}(B) := \arg \min_{\|w\|_2 \leq B} L(w) \). Then, the regularization path refers to the curve traced by \( w^*_\text{reg}(\cdot) \) as \( B \) varies, formally given by \( \{w^*_\text{reg}(B)\}_{B>0} \).

### 4 Motivations and the Algorithm

In this section, we introduce our proposed algorithm and explain the motivation behind it through toy examples. First, we state some key observations about the structure of Problem (1):

• **Homogeneity.** For the linear model \( f(x; w) = \langle w, x \rangle \), rescaling the parameter \( w \) does not change the margin, i.e., \( \gamma(w) = \gamma(cw) \) for any \( c > 0 \) and \( w \in \mathbb{R}^d \).

• **Directional Convergence.** Under Assumption 3.1, it holds that \( \gamma^* - \gamma(w) \leq \|\hat{w} - w^*\| \) (Lemma A.4). This implies that the margin maximization rate can be controlled by the rate of directional convergence.

• **Convexity.** The Hessian is \( \nabla^2 L(w) = \frac{1}{n} \sum_{i=1}^{n} e^{-\langle w, x_i y_i \rangle} x_i x_i^\top \). If all data has been classified correctly, i.e., \( \min_{i \in [n]} \langle w, x_i y_i \rangle > 0 \), then the convexity of the loss landscape is stronger in regions with smaller norm.

• **Centripetal Velocity.** Intuitively, if the descent direction \( -\nabla L(w)/L(w) \) at some \( w \in \mathbb{R}^d \) has a larger “centripetal” component (orthogonal to \( w^* \), pointing towards the \( w^* \)-axis), it will make more effective progress on the directional convergence towards \( w^* \). Furthermore, notice that for many datasets, the centripetal velocity is greater at points farther from the regularization path, which we explain in detail below using Dataset 1 as an example.

Following the above observations, we conclude:

• On the one hand, in order to obtain greater centripetal velocity for faster directional convergence, we should rescale the parameter \( w \rightarrow cw \) (\( c > 1 \)) sufficiently far away from the regularization path.
In Algorithm 1, this point corresponds to the progressive scaling steps.

• On the other hand, the landscape convexity in the small-norm region is stronger than that in the large-norm region. Therefore, one should accelerate the local optimization by taking as many steps as possible with a small-norm \( w \). In Algorithm 1, this point corresponds to the projected GD steps.

By combining the above two intuitions, we propose the Progressive Rescaling Gradient Descent (PRGD) in Algorithm 1.

**Algorithm 1:** Progressive Rescaling Gradient Descent (PRGD)

**Input:** Dataset \( S \); Initialization \( w(0) \); Progressive Time \( \{T_k\}_{k=0}^{K} \); Progressive Radius \( \{R_k\}_{k=0}^{K} \);

for \( k = 0, 1, 2, \cdots, K \) do
  \( w(T_k + 1) = R_k \frac{w(T_k)}{\|w(T_k)\|} \);
  for \( T_k + 1 \leq t \leq T_{k+1} - 1 \) do
    \( v(t + 1) = w(t) - \eta \frac{\nabla L(w(t))}{L(w(t))} \);
    \( w(t + 1) = \text{Proj}_{B(0,R_k)}(v(t + 1)) \);

**Output:** \( w(T_K + 1) \).

Next, we substantiate the above observations and explain the mechanisms by which PRGD works via the following toy problem:

**Dataset 1.** \( S = \{(x_1, y_1), (x_2, y_2), (x_3, y_3)\} \) where \( x_1 = (\sqrt{1 - \gamma^{*2}}, \gamma^*)^\top \), \( y_1 = 1 \), \( x_2 = (-\sqrt{1 - \gamma^{*2}}, \gamma^*)^\top \), \( y_2 = 1 \), \( x_3 = (\sqrt{1 - \gamma^{*2}}, -\gamma^*)^\top \), \( y_3 = -1 \), and \( \gamma^* > 0 \) is small enough.

For this dataset, we have the following (tight) margin maximization and directional convergence results for both NGD and PRGD.

**Proposition 4.1.** Consider Dataset 1. Then NGD (3) can only maximize the margin polynomially fast, while PRGD (Alg 1) can maximize the margin exponentially fast. Specifically,

(I) Let \( w(t) \) be the NGD (3) solution at time \( t \) with \( \eta = 1 \) starting from \( w(0) = 0 \). Then both the margin maximization and directional convergence are at (tight) polynomial rates:
\[
\left\| \frac{w(t)}{\|w(t)\|} - w^* \right\| = \Theta(1/t), \quad \gamma^* - \gamma(w(t)) = \Theta(1/t);
\]

(II) Let \( w(t) \) be the PRGD solution (Algorithm 1) with \( \eta = 1 \) starting from \( w(0) = 0 \). If we choose \( R_k = e^{\Theta(k)} \) and \( T_k = \Theta(k) \), then both the margin maximization and directional convergence are at (tight) exponential rates:
\[
\left\| \frac{w(t)}{\|w(t)\|} - w^* \right\| = e^{-\Theta(t)}, \quad \gamma^* - \gamma(w(t)) = e^{-\Theta(t)}.
\]

Proposition 4.1 provides a comparative analysis of the efficiency of PRGD and the challenges faced by NGD. Next, we provide an intuitive explanation and a brief outline of the proof; the complete proof is available in Appendix A. For this dataset, the max-margin direction is \( w^* = (0, 1)^T \) and the regularization path satisfies
\[
\lim_{R \to \infty} w_{\text{reg},1}(R) = -\frac{\log 2}{2\sqrt{1-\gamma^{*2}}}.
\]
In Figure 1, we plot the asymptote of the regularization path (the green curve) and the max-margin direction \( w^* \) (the red curve) for this dataset. Notably, the two lines are parallel to each other.

**Centripetal Velocity.** In this example, the centripetal velocity (orthogonal to \( w^* \), pointing towards the \( w^* \)-axis) is
\[
\left[\frac{\nabla L(w)}{L(w)}\right]_1 \operatorname{sgn}(w_1).
\]
As shown in Figure 1, the centripetal velocity is significant for \( w \) far away from the regularization path (outside the green zone), while it is tiny inside the green zone.
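To make Algorithm 1 and the role of the centripetal velocity concrete, below is a minimal NumPy sketch of NGD and PRGD on a Dataset-1-style instance. This is our own illustration, not the authors' code; the warm-started initial point, step size, and the schedule constants for \( R_k \) and \( T_k \) are illustrative choices rather than tuned values.

```python
import numpy as np

gamma_star = 0.05                          # a small margin, as in Dataset 1
a = np.sqrt(1.0 - gamma_star ** 2)
X = np.array([[a, gamma_star], [-a, gamma_star], [a, -gamma_star]])
y = np.array([1.0, 1.0, -1.0])
w_star = np.array([0.0, 1.0])              # max-margin direction for Dataset 1

def margin(w):
    return np.min(y * (X @ (w / np.linalg.norm(w))))

def neg_normalized_grad(w):
    # -grad L(w) / L(w) for the exponential loss: a softmax-weighted average
    # of the points y_i * x_i, computed in a numerically stable way.
    m = y * (X @ w)
    p = np.exp(-(m - m.min()))
    p /= p.sum()
    return (X * (y * p)[:, None]).sum(axis=0)

def centripetal_velocity(w):
    # Centripetal component of -grad L / L (towards the w*-axis), as above.
    return -neg_normalized_grad(w)[0] * np.sign(w[0])

def ngd_step(w, eta=1.0):
    return w + eta * neg_normalized_grad(w)          # NGD update (3)

def project_ball(v, R):
    nrm = np.linalg.norm(v)
    return v if nrm <= R else R * v / nrm

def prgd(w0, cycles=40, steps_per_cycle=5, eta=1.0, R0=2.0, ratio=1.2):
    # Algorithm 1: rescale to radius R_k, then run projected NGD inside B(0, R_k).
    w = w0.copy()
    for k in range(cycles):
        R_k = R0 * ratio ** k                        # progressive radius
        w = R_k * w / np.linalg.norm(w)              # progressive rescaling step
        for _ in range(steps_per_cycle):
            w = project_ball(ngd_step(w, eta), R_k)  # projected NGD step
    return w

w0 = np.array([0.2, 0.5])                            # warm-started initial point
w_ngd = w0.copy()
for _ in range(200):
    w_ngd = ngd_step(w_ngd)
w_prgd = prgd(w0)
print("margin gap (NGD) :", gamma_star - margin(w_ngd))
print("margin gap (PRGD):", gamma_star - margin(w_prgd))
```

Note how the rescaling step deliberately moves the iterate away from the regularization path, where the centripetal velocity nearly vanishes, before the projected NGD steps are taken.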
**Inefficiency of NGD.** As shown in Figure 1, the trajectory of NGD (the orange line) always remains near the regularization path, where \(-\nabla L(w)/L(w)\) is nearly parallel to \( w^* \) and the centripetal component (along \( e_1 \)) is very small. In fact, we can show that NGD always stays in the green zone
\[
A := \left\{ w : w_1 \in \left[ -\frac{3\log 2}{4\sqrt{1-\gamma^{*2}}}, -\frac{\log 2}{4\sqrt{1-\gamma^{*2}}} \right] \right\}.
\]
Since the norm grows at a \( \Theta(t) \) rate (Lemma C.3), NGD can only attain a \( \Theta(1/t) \) directional convergence rate.

**Efficiency of PRGD.** We consider a simple hyperparameter choice, \( T_{k+1} - T_k = 2 \); that is, each period performs one progressive scaling step and one projected normalized gradient step. As shown in Fig. 1, PRGD (the purple line) resolves exactly the difficulty that traps NGD in the green zone \( A \), where the centripetal velocity is small. The rescaling step ensures that PRGD escapes from \( A \) and arrives at \( \{w : w_1 = -1\} \), where the centripetal velocity is significant; the projected gradient step can then use this significant centripetal velocity to make progress on the directional convergence. Moreover, the centripetal velocity on \( \{ w : w_1 = -1 \} \) has a uniformly positive lower bound; using this, one can prove via a simple geometric calculation that the directional convergence is exponentially fast.

5 CENTRIPETAL VELOCITY ANALYSIS

Motivated by our analysis of Dataset 1, we formally study the centripetal velocity in this section. Moreover, inspired by the proof and the visualization, we only need to focus on the centripetal velocity on an infinitely long semi-cylindrical surface in the high-dimensional setting. First, we give the following definition, which helps us decompose parameters in \( \mathbb{R}^d \) into the essential directions.

**Definition 5.1 (Orthogonal Projection).** We denote the projections of \( w \in \mathbb{R}^d \) along the direction \( w^* \) and onto the orthogonal complement of \( w^* \) as
\[
P(w) := \langle w, w^* \rangle w^* \quad \text{and} \quad P_\perp(w) := w - \langle w, w^* \rangle w^*,
\]
respectively. It is worth noting that \( P(w) + P_\perp(w) = w \) holds for any \( w \in \mathbb{R}^d \).

Using this orthogonal projection, we can establish a formal definition of the “centripetal velocity”.

**Definition 5.2 (Centripetal Velocity).** The normalized gradient at \( w \in \mathbb{R}^d \) is \( \nabla L(w)/L(w) \), and we define the centripetal velocity \( \varphi(w) \) (towards \( w^* \)) at \( w \) by
\[
\varphi(w) := \left\langle -\frac{\nabla L(w)}{L(w)}, -\frac{P_\perp(w)}{\|P_\perp(w)\|} \right\rangle = \left\langle \frac{\nabla L(w)}{L(w)}, \frac{P_\perp(w)}{\|P_\perp(w)\|} \right\rangle.
\]

Figure 1: The vector field and the trajectories of NGD and PRGD for Dataset 1. The gray arrows plot the vector field \(-\nabla L(\cdot)/L(\cdot)\). The red line corresponds to the max-margin direction, and the green area is around the regularization path. We run and visualize the trajectories of PRGD (purple) and NGD (orange) for 6 steps starting from the same initial point (black).

Then we introduce the definition of an infinitely long semi-cylindrical surface, which is the crucial geometric object in our subsequent analysis.
**Definition 5.3 (Semi-cylindrical Surface).** We use \[ C(D; H) := \{ w \in \text{span}\{x_i : i \in [n]\} : \|P_\perp(w)\| = D; \langle w, w^* \rangle \geq H \} \] to denote the infinitely long semi-cylindrical surface with the central direction \( w^* \), the radius \( D > 0 \), and starting height \( H > 0 \). Our subsequent analysis will concentrate on the semi-cylindrical surface as PRGD ensures the iterations will be confined in the surface. This surface is defined by its central direction, denoted by \( w^* \), a radius \( D > 0 \), and extends infinitely in the direction of \( w^* \) starting from a height \( H \). Additionally, it is crucial to note that our attention is restricted to the smaller subspace \( \text{span}\{x_i : i \in [n]\} \), rather than the entire space \( \mathbb{R}^d \). This is justified by the observation that the trajectories of GD, NGD, and PRGD, when initialized from 0, will remain confined within this subspace indefinitely. ### 5.1 Theoretical Analysis In this subsection, we undertake a theoretical examination of the centripetal velocity, as defined in Definition 5.2, on the semi-cylindrical surface described in Definition 5.3. Our investigation aims to address the following query: > Does a “favorable” semi-cylindrical surface exist where the centripetal velocity consistently maintains a positive lower bound? We demonstrate that such a favorable semi-cylindrical surface indeed exists, provided that the data are non-degenerate to a modest extent. **Assumption 5.4 (Non-degenerate data (Soudry et al., 2018; Wu et al., 2023)).** Let \( I \) be the index set of the support vectors, i.e., there exist \( \alpha_i > 0 \) (\( i \in I \)) such that \( w^* = \sum_{i \in I} \alpha_i y_i x_i \). We assume \( \text{span}\{x_i : i \in I\} = \text{span}\{x_i : i \in [n]\} \). We remark Assumption 5.4 is widely used in prior implicit bias analysis, such as Theorem 4.4 in (Soudry et al., 2018) and (Wu et al., 2023). Now we can state our main results about the centripetal velocity analysis. **Theorem 5.5 (Centripetal Velocity Analysis, Main result).** Under Assumption 3.1 and 5.4, there exists a semi-cylindrical surface \( C(D; H) \) and a positive constant \( \mu > 0 \) such that \[ \inf_{w \in C(D; H)} \varphi(w) \geq \mu. \] Theorem 5.5 establishes that for linearly separable and slightly non-degenerate dataset, there indeed exists a “good” semi-cylindrical surface in which the centripetal velocity has a uniformly positive lower bound. That is to say, on this semi-cylindrical surface, the negative normalized gradient has a significance component orthogonal to \( w^* \) consistently. The proof is deferred to Appendix B. ### 6 Margin Maximization and Directional Convergence Rate #### 6.1 Exponential Fast Margin Maximization via PRGD We have identified the condition ensuring the existence of the “good” semi-cylindrical surface, where the centripetal velocity is uniformly lower-bounded. For simplicity, in this section, we set this result as an assumption. **Assumption 6.1.** There exists a semi-cylindrical surface \( C(D; H) \) and a positive constant \( \mu > 0 \) such that \[ \inf_{w \in C(D; H)} \varphi(w) \geq \mu. \] The subsequent theorem shows that by leveraging our PRGD (Alg 1) and under the above assumption, the rate of directional convergence—and consequently, margin maximization—can be boosted to be exponential. 
**Theorem 6.2 (PRGD, Main Result).** Under Assumption 3.1 and 6.1, let \( w(t) \) be solutions generated by the following two-phase algorithms starting from \( w(0) = 0 \): • **Warm-up Phase:** Run GD (2) with \( \eta = 1/2 \) for \( T_w = \Theta(1) \) steps starting from \( w(0) \); • **Acceleration Phase:** Run PRGD (Alg 1) with \( \eta = \Theta(1), R_k = e^{\Theta(k)} \) and \( T_k = \Theta(k) \) starting from \( w(T_w) \). Then, both directional convergence and margin maximization are achieved at exponential rate: \[ \left\| \frac{w(t)}{\|w(t)\|} - w^* \right\| \leq e^{-\Omega(t)}; \quad \gamma^* - \gamma(w(t)) \leq e^{-\Omega(t)}. \] The complete proof is deferred to Appendix C and here, we provide a sketch of the proof to illustrate the intuition behind: • In the warm-up phase, we employ GD to achieve a preliminary (slow) directional convergence, satisfying \[ \left\| \frac{w(T_w)}{\|w(T_w)\|} - w^* \right\| < \min\{D/2H, 1/2\}, \] which is prepared to rescale \( w \) to the good semi-cylindrical surface \( C(D; H) \). • After the warm up, we can rescale \( w(T_w) \) to the good semi-cylindrical surface \( C(D; H) \) by setting \( R_1 = \frac{D}{\|P_\perp(w(T_w))\|} \). Then, applying projected gradient descent there can significantly speed up the directional convergence since the centripetal velocity on \( C(D; H) \) is well lower-bounded (Assumption 6.1). Then, by selecting suitable progressive scaling \( R_k \), we can reposition the parameter back to \( C(D; H) \) again. Repeating this process, we will get effective directional convergence in each cycle. Finally, by simple geometric calculation, it can be proven that such directional convergence is exponentially fast. Notice that in Proposition 4.1, we have provided a tightly exponentially fast rate on Dataset 1, which satisfies Assumption 3.1 and 6.1, hence, the tightness of Theorem 6.2 can be ensured. **Corollary 6.3** (PRGD, non-degenerate dataset). Under Assumption 3.1 and 5.4, let \( w(t) \) be solutions generated by the following two-phase algorithms starting from \( w(0) = 0 \): • **Warm-up Phase:** Run GD (2) or NGD (3) with \( \eta \leq 1 \) for \( T_w \) steps starting from \( w(0) \); • **Acceleration Phase:** Run PRGD (Alg 1) with \( \eta = \Theta(1), R_k = e^{\Theta(k)} \) and \( T_k = \Theta(k) \) starting from \( w(T_w) \). Then, both directional convergence and margin maximization are achieved at exponential rate: \[ \left\| \frac{w(t)}{\|w(t)\|} - w^* \right\| \leq e^{-\Omega(t)}; \quad \gamma^* - \gamma(w(t)) \leq e^{-\Omega(t)}. \] It is worth noting that Assumption 5.4 can imply Assumption 6.1. Therefore, Theorem 6.2 implies Theorem 6.3 with GD (Phase I) + PRGD (Phase II) directly. Additionally, a slight difference is that in Theorem 6.3, we can use NGD in Phase I (to obtain faster directional warm-up than GD), because Assumption 5.4 can further guarantee the directional convergence of NGD (Ji & Telgarsky, 2021). ### 6.2 INEFFICIENCY OF GD AND NGD **Theorem 6.4** (GD and NGD, Main results). Suppose Assumption 3.1 and 5.4 hold. Additionally, we assume \( \gamma^* w^* \neq \frac{1}{|I|} \sum_{i \in I} y_i x_i \). • For NGD (3) with \( \eta \leq \eta_0 \) starting from \( w(0) = 0 \) (where \( \eta_0 \) is a constant), we have there exists a subsequence \( w(t_k) \) (\( t_k \to \infty \)) such that \[ \left\| \frac{w(t_k)}{\|w(t_k)\|} - w^* \right\| = \Theta(1/t_k). 
\] • For GD (2) with \( \eta \leq \eta_0 \) starting from \( w(0) = 0 \) (where \( \eta_0 \) is a constant), there exists a subsequence \( w(t_k) \) (\( t_k \to \infty \)) such that \[ \left\| \frac{w(t_k)}{\|w(t_k)\|} - w^* \right\| = \Theta(1/\log t_k). \] As presented in Table 1, under the same conditions—Assumption 3.1, 5.4, and \( \gamma^* w^* \neq \frac{1}{|I|} \sum_{i \in I} x_i y_i \), PRGD can achieve directional convergence exponentially fast with the rate \[ \left\| \frac{w(t)}{\|w(t)\|} - w^* \right\| = e^{-\Omega(t)}. \] In contrast, Theorem 6.4 ensures that NGD maintains a tight bound of polynomial speed, and GD exhibits a tight bound with exponentially slow rate. The detailed proof of Theorem 6.4 is provided in Appendix C. Although this proof is more complicated than Proposition 4.1 due to the more general dataset, their proof insights are highly similar. In this proof, we still focus on the dynamics of \( P_\perp(w(t)) \). Actually, we can prove that there exists a subsequence \( P_\perp(w(t_k)) (t_k \to \infty) \), which converges to some \( v \in \text{span}\{P_\perp(x_i) : i \in I\} \). Moreover, our condition \( \gamma^* w^* \neq \frac{1}{|I|} \sum_{i \in I} x_i y_i \) can ensure that \( v^* \neq 0 \). Therefore, \( \|P_\perp(w(t_k))\| = \Theta(1) \). Since the norm grows at \( \|w(t_k)\| = \Theta(t_k) \) (Lemma C.3), NGD must have only \( \Theta(1/t) \) directional convergence rate. 7 Numerical Experiments 7.1 Linearly Separable Datasets Experiments on Synthetic Dataset. We initiate our experimental evaluation with two synthetic linearly separable datasets, as depicted in Fig. 2. For the two synthetic datasets, the value of \( \gamma^* \) is explicit, and as such, we can explicitly compute the margin gap. To ensure a fair comparison, we maintain the same step size \( \eta = 1 \) for GD, NGD, and PRGD. Following the guidelines provided in Theorem 6.2, we employ PRGD(exp) with hyperparameters \( T_{k+1} - T_k \equiv 5, R_k = R_0 \times 1.2^k \). To illustrate the role of the progressive radius, we also examine PRGD(poly) configured with \( T_{k+1} - T_k \equiv 5, R_k = R_0 \times k^{1.2} \), where the progressive radius increases polynomially. For more experimental details, refer to Appendix E. The experimental results are provided in Fig. 2. Consistent with Theorem 6.2, PRGD(exp) indeed maximizes the margin exponentially fast, and surprisingly, PRGD(poly) also performs equally well for this task. In contrast, NGD and GD reduce the margin gaps significantly slower, which substantiates our Theorem 6.4. ![Figure 2](image) (a) Dataset I (b) Dataset II Figure 2: Comparison of margin Maximization rates of different algorithms on Synthetic datasets. Experiments on Real-World Datasets. In this case, we extend our experiments to real-world datasets. Specifically, we employ the digit datasets from sklearn, which are image classification tasks with \( d = 64, n = 300 \). In this real-world setting, we lack prior knowledge of the exact \( \gamma^* \). Instead, we approximate \( \gamma^* \) by employing \( \gamma(w(t)) \) obtained by a sufficiently trained NGD. In real experiments, we test both PRGD(exp) and PRGD(poly) and consistently observe that the latter performs much better. Therefore, in this experiment, we employ a modified variant of PRGD with slower progressive norms: \( R_k = R_0 \cdot k^\alpha, T_{k+1} - T_k = T_0 \cdot k^\beta \) where \( R_0, T_0, \alpha, \beta \) are hyperparameters to be tuned. 
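For clarity, the three progressive-norm schedules just described—PRGD(exp), PRGD(poly), and the slower variant used on the real-world datasets—can be summarized in a small helper. This is our own illustrative sketch; the constants \(1.2\), \( \alpha \), and \( \beta \) are exactly the hyperparameters mentioned above, and the starting index and rounding are arbitrary choices.

```python
def prgd_schedules(K, R0=1.0, T0=5, variant="exp", alpha=1.2, beta=0.5):
    """Progressive times T_k and radii R_k for K cycles (illustrative sketch).

    "exp" : T_{k+1} - T_k = 5,               R_k = R0 * 1.2**k   (PRGD(exp))
    "poly": T_{k+1} - T_k = 5,               R_k = R0 * k**1.2   (PRGD(poly))
    "slow": T_{k+1} - T_k = T0 * k**beta,    R_k = R0 * k**alpha (real-world variant)
    """
    times, radii, t = [], [], 0
    for k in range(1, K + 1):
        if variant == "exp":
            gap, R_k = 5, R0 * 1.2 ** k
        elif variant == "poly":
            gap, R_k = 5, R0 * k ** 1.2
        else:
            gap, R_k = max(1, round(T0 * k ** beta)), R0 * k ** alpha
        t += gap
        times.append(t)
        radii.append(R_k)
    return times, radii
```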
The numerical results with well-tuned hyperparameters are presented in Fig. 3. It is evident that, in this real-world setting, PRGD consistently beats GD and NGD in terms of margin maximization rates.

Figure 3: Comparison of margin maximization rates of different algorithms on the digit (real-world) datasets. (Left) results on the digit-01 dataset; (Right) results on the digit-04 dataset.

7.2 Linearly Non-separable Datasets and Deep Neural Networks

In this subsection, we further explore the practical performance of PRGD for datasets that are not linearly separable. In the first experiment, we still consider linear models but for classifying a linearly non-separable dataset, Cancer in sklearn, and we employ the same PRGD technique as used on the real-world linearly separable datasets. For the second experiment, we examine the performance of PRGD for the VGG network (Simonyan & Zisserman, 2015) on the full CIFAR-10 dataset (Krizhevsky & Hinton, 2009), without employing any explicit regularization. Additionally, in this setting, we employ mini-batch stochastic gradients instead of the full gradient for these algorithms, and we also fine-tune the learning rates of GD, NGD, and PRGD. GD and NGD share the same learning rate scheduling strategy as described in Lyu & Li (2019). As for the hyperparameter strategy of PRGD, we still follow the same strategy as used on the real-world linearly separable datasets. The numerical results are presented in Fig. 4a and Fig. 4b, respectively. One can see that our PRGD algorithm outperforms GD and NGD for both tasks.

Figure 4: Comparison of the generalization performance of GD, NGD, and PRGD for non-linearly separable datasets and deep neural networks. (a) Linear model on Cancer; (b) VGG on CIFAR-10.

8 CONCLUDING REMARK

In this work, we investigate the mechanisms driving the convergence of gradient-based algorithms towards max-margin solutions. Specifically, we elucidate why GD and NGD can only achieve polynomially fast margin maximization by examining the properties of the velocity field linked to (normalized) gradients. This analysis inspires the design of a novel algorithm called PRGD that significantly accelerates the process of margin maximization. To substantiate our theoretical claims, we offer both synthetic and real-world experimental results, thereby underscoring the potential practical utility of our approach. Looking ahead, an intriguing avenue for future research is the application of progressive norm rescaling techniques to state-of-the-art real-world models. It would be worthwhile to explore how PRGD can synergize with other explicit regularization techniques, such as data augmentation, dropout, and sharpness-aware minimization (Foret et al., 2020).

REFERENCES

Peter Bartlett, Dylan J Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. *arXiv preprint arXiv:1706.08498*, 2017.

Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. A training algorithm for optimal margin classifiers. In *Proceedings of the fifth annual workshop on Computational learning theory*, pp. 144–152, 1992.

Jeremy M Cohen, Simran Kaur, Yuanzhi Li, J Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. *arXiv preprint arXiv:2103.00065*, 2021.

Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. *arXiv preprint arXiv:2010.01412*, 2020.

Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro.
Characterizing implicit bias in terms of optimization geometry. In *International Conference on Machine Learning*, pp. 1832–1841. PMLR, 2018a. Suriya Gunasekar, Jason D Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. *Advances in neural information processing systems*, 31, 2018b. Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. The entire regularization path for the support vector machine. *Journal of Machine Learning Research*, 5(Oct):1391–1415, 2004. Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. *Neural computation*, 9(1):1–42, 1997. Ziwei Ji and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. *arXiv preprint arXiv:1810.02032*, 2018a. Ziwei Ji and Matus Telgarsky. Risk and parameter convergence of logistic regression. *arXiv preprint arXiv:1803.07300*, 2018b. Ziwei Ji and Matus Telgarsky. Directional convergence and alignment in deep learning. *Advances in Neural Information Processing Systems*, 33:17176–17186, 2020. Ziwei Ji and Matus Telgarsky. Characterizing the implicit bias via a primal-dual analysis. In *Algorithmic Learning Theory*, pp. 772–804. PMLR, 2021. Ziwei Ji, Miroslav Dudík, Robert E Schapire, and Matus Telgarsky. Gradient descent follows the regularization path for general losses. In *Conference on Learning Theory*, pp. 2109–2136. PMLR, 2020. Ziwei Ji, Nathan Srebro, and Matus Telgarsky. Fast margin maximization via dual acceleration. In *International Conference on Machine Learning*, pp. 4860–4869. PMLR, 2021. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In *International Conference on Learning Representations*, 2016. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009. URL https://www.cs.toronto.edu/~kriz/cifar.html. Daniel Kunin, Atsushi Yamamura, Chao Ma, and Surya Ganguli. The asymmetric maximum margin bias of quasi-homogeneous neural networks. *International Conference on Learning Representations*, 2023. Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. *arXiv preprint arXiv:1906.05890*, 2019. Chao Ma and Lexing Ying. The sobolev regularization effect of stochastic gradient descent. *Advances in Neural Information Processing Systems*, 2021. Mor Shpigel Nacson, Suriya Gunasekar, Jason Lee, Nathan Srebro, and Daniel Soudry. Lexicographic and depth-sensitive margins in homogeneous and non-homogeneous deep models. In *International Conference on Machine Learning*, pp. 4683–4692. PMLR, 2019a. Mor Shpigel Nacson, Jason Lee, Suriya Gunasekar, Pedro Henrique Pamplona Savarese, Nathan Srebro, and Daniel Soudry. Convergence of gradient descent on separable data. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 3420–3428. PMLR, 2019b. Mor Shpigel Nacson, Nathan Srebro, and Daniel Soudry. Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 3051–3059. PMLR, 2019c.
IOrnCVIKIZ
Since the performance of LETI relies on the solution evaluator's implementation, could you elaborate on the potential biases that may arise from different evaluator designs? How might these biases impact the performance of LETI in optimizing towards certain metrics?
LETI: Learning to Generate from Textual Interactions Anonymous authors Paper under double-blind review Abstract Finetuning pre-trained language models (LMs) is essential for enhancing their capabilities and is a crucial phase in their lifecycles. Existing techniques commonly fine-tune on input-output pairs (e.g., instruction fine-tuning [Wei et al., 2022a]) or with numerical rewards that gauge the output quality (e.g., reinforcement learning from human feedback [Ouyang et al., 2022]). We explore LMs’ potential to learn from textual interactions (LETI) that not only check their correctness with binary labels but also pinpoint and explain errors in their outputs through textual feedback. Our focus is the code generation task, where the model produces code based on natural language instructions. This setting invites a natural and scalable way to acquire textual feedback: the error messages and stack traces from code execution using a Python interpreter. LETI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback, which is only provided when the generated program fails to solve the task. Prepended to this fine-tuning text, a binary reward token is used to differentiate correct and buggy solutions. LETI requires no ground-truth outputs for training and even outperforms a fine-tuned baseline that does. LETI not only improves the performance of two base LMs of different scales on a code generation dataset MBPP, but also generalizes to other datasets. Trained on MBPP, it achieves comparable or better performance than the base LMs on unseen problems in HumanEval. Furthermore, compared to binary feedback, we observe that textual feedback leads to improved generation quality and sample efficiency, achieving the same performance with fewer than half of the gradient steps. LETI is equally applicable in natural language tasks when they can be formulated as code generation, which we empirically verified on event argument extraction.\footnote{Our code will be available at <anonymized>.} 1 Introduction Large-scale language models have fundamentally shifted the paradigms of natural language processing (NLP). Based on LMs pre-trained on raw text, subsequent fine-tuning stages have proven crucial to enhance their capabilities in solving benchmark NLP tasks and generating texts that align with human preferences. Success has been achieved by fine-tuning with direct training signals that measure whether the model, e.g., classifies the input into the right category [Devlin et al., 2019], answers a question correctly [Li et al., 2017; Ramamurthy et al., 2022], summarizes documents well [Stiennon et al., 2020; Wu et al., 2021], and generates outputs that align with human preferences [Ouyang et al., 2022; Korbak et al., 2023]. We hypothesize that LMs can harness the much richer training signals from textual interactions with the environment (e.g., a human or a Python interpreter) that not only check the correctness of LM’s outputs but also pinpoint the errors and explain why. We propose LETI, a new LM fine-tuning paradigm that aims to explore LMs’ potential to learn from nuanced textual interactions. We evaluate LETI on code generation tasks, where the LM is supposed to generate code pieces to solve tasks described in natural language. 
This setting invites a natural and scalable way to acquire automatic interactive textual feedback: the stack traces and error messages output by established programming language (PL) tools such as a Python interpreter. LETI’s improvement process naturally mirrors a typical software development cycle: a human developer writes an initial program, executes it, and improves the program based on feedback obtained from the programming environment until a satisfying solution is found (e.g., successfully executed with no error). Furthermore, the human developer learns from mistakes in this process and becomes a (slightly) better developer who can avoid similar mistakes in the future. Similarly to the human development process, we provide empirical evidence that LETI can learn from past mistakes and avoid similar errors in §3.2.

Figure 1: Qualitative example of LETI improving an LM on code generation by leveraging feedback from a solution evaluator (e.g., a Python interpreter). At each LETI iteration, the LM is first asked to generate candidate solutions. As a case study, we obtain binary and textual feedback by executing the solution against test cases using a Python interpreter. Feedback and the generated solutions are used to improve the LM generator for the next LETI iteration through feedback-conditioned fine-tuning (§2.3). This is a code generation (MBPP; Austin et al., 2021) test set example generated by a 2B model optimized with LETI. We omit a few iterations and repetitive code for clarity.

In LETI, a base LM pre-trained on both natural language and code\(^2\) is asked to generate a piece of program conditioning on the natural language instruction, which is then tested on a suite of test cases. LETI fine-tunes the model on a concatenation of the natural language instruction, the LM-generated program, and the textual feedback (e.g., stack traces and error messages) that pinpoints the bug, which is only provided when the generated program fails to solve the task. In addition to textual feedback, we prepend the fine-tuning sequences with a reward token (i.e., binary feedback), which differs for correct (<|good|>) and buggy solutions (<|bad|>), to encourage the LM to generate correct solutions when conditioning on <|good|>. LETI repeats this procedure for multiple rounds. During this iterative process, LETI assumes no instruction-code paired data.

We find that LETI improves the LM’s performance on code generation tasks in MBPP (Austin et al., 2021) without using any ground-truth code. Specifically, it generates 63.2% more syntactically correct and executable code (on the 2B LM) compared to the pre-trained model without any commonly employed post-processing heuristic\(^3\). When post-processing is applied, LETI (2B) improves performance and eliminates most NameError issues that occur when a variable or function is not defined (from 10% to 1%, on the 2B LM) in two iterations. The optimized LM also shows generalized performance improvement on another code generation dataset, HumanEval (Chen et al., 2021b) (§3.2).

---

\(^2\) Almost all modern large language models train on both natural language and code (Brown et al., 2020; OpenAI, 2023; Chowdhery et al., 2022; Touvron et al., 2023).

\(^3\) Stop-word-based post-processing heuristics (Fig. A.11) are commonly used by Code-LM (Chen et al., 2021b) to remove irrelevant code (e.g., only keep the first block of generated code).
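To make the feedback acquisition and the fine-tuning sequence format concrete, here is a minimal sketch (ours, not the released implementation). The helper names and the assert-style test format are illustrative assumptions, and the exact delimiter tokens and concatenation order follow §2.2–2.3 below.

```python
import traceback

GOOD, BAD = "<|good|>", "<|bad|>"  # binary reward tokens

def run_with_tests(program: str, test_cases: list[str]):
    """Execute an LM-generated program against assert-style test cases.

    Returns (passed, feedback_text), where feedback_text is the stack trace /
    error message produced by the Python interpreter, or "" if all tests pass.
    (No sandboxing here; for illustration only.)
    """
    env: dict = {}
    try:
        exec(program, env)                 # define functions from the solution
        for test in test_cases:            # e.g. "assert add(1, 2) == 3"
            exec(test, env)
        return True, ""
    except Exception:
        return False, traceback.format_exc()

def build_fcft_example(instruction: str, program: str, test_cases: list[str]) -> str:
    """Assemble one feedback-conditioned fine-tuning sequence (illustrative)."""
    passed, fb_text = run_with_tests(program, test_cases)
    reward = GOOD if passed else BAD
    feedback = reward if passed else (
        reward + "<|text_feedback|>" + fb_text + "<|/text_feedback|>"
    )
    # Training sequence: feedback, then instruction, then the generated program,
    # trained with the ordinary language-modeling objective.
    return feedback + instruction + program

# Usage: a buggy candidate yields a NameError trace that the LM is trained on.
example = build_fcft_example(
    instruction="Write a function add(a, b) that returns the sum of a and b.\n",
    program="def add(a, b):\n    return a + c\n",
    test_cases=["assert add(1, 2) == 3"],
)
```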
Such improvement in in-domain tasks does not come at the cost of the capability of the original LM (e.g., reasoning and chain-of-thought capability Wei et al., 2022b) due to LETI’s auxiliary objective that continuing pre-train along with fine-tuning (§3.4). We observe that textual feedback is advantageous in terms of improving the LM compared to baselines that only use binary feedback, as it offers enhanced performance and greater sample efficiency that only requires about half of the gradient steps to reach the same performance for the 2B-scale model (§3.5). Furthermore, we find LETI is equally applicable to NLP tasks (e.g., event argument extraction Wang et al., 2023a) when they can be formulated into a code generation problem (§3.5). 2 LETI: Learning from Textual Interactions Each iteration, LETI prompts the LM (§2.1) with the natural language problem description to generate a set of \( n \) solutions. The solutions are then evaluated on a suite of test cases by a Solution Evaluator (§2.2) to generate textual feedback (i.e., stack traces and error messages). This work uses a Python interpreter as the solution evaluator to assess LM-generated solutions. The textual feedback is used to fine-tune the LM with Feedback-Conditioned Fine-Tuning (FCFT, §2.3). We assume no ground-truth solutions while fine-tuning the LM, as LETI directly learns from solution evaluator’s feedback. Intuitively, FCFT leverages textual feedback to associate various types of errors (e.g., SyntaxError) and solutions that commit them. Furthermore, with binary feedback, FCFT aligns correct or wrong solutions with corresponding pre-pended reward tokens \( <|\text{good}|> \) or \( <|\text{bad}|> \), so that better solutions can be sampled from a trained LM by conditioning it on \( <|\text{good}|> \). The workflow (one iteration) is described in Algorithm 1 and Fig. A.6. 2.1 Language Model The base LM can be any generative language model \( p_\theta \), pre-trained on both natural and programming languages. For a given problem \( x_i \in \mathcal{P} \), we sample \( n \) solutions \( S_i = \{\hat{y}_{i,1}, \ldots, \hat{y}_{i,n}\} \) from \( p_\theta(\cdot | x_i) \) (conditioned on reward token \( <|\text{good}|> \) when \( p_\theta \) is fine-tuned for at least one iteration using FCFT), where each solution \( \hat{y}_{i,j} \) is a sequence of tokens. We analyze the importance of problem set size \( |\mathcal{P}| \) and the number of sampled solutions \( n \) in §B.2 and §B.1. Since \( p_\theta \) is trained on code, we assume that it can generate programs reasonably well in the training problem set, and at least some of the \( n \) solutions are correct when an arbitrarily large \( n \) is chosen. We use \( n = 128 \) for code generation experiments on MBPP (§3.2) and \( n = 64 \) for event argument extraction (§3.5). 2.2 Solution Evaluator Given a problem \( x_i \), its test cases \( T_i \), and any generated solution \( \hat{y}_{i,j} \), the Solution Evaluator \( \phi \) (a Python interpreter) provides feedback \( F_{i,j} \), which consists of binary \( f_{\text{binary}} \) and textual feedback \( f_{\text{text}} \) (i.e., \( f_{\text{binary}}, f_{\text{text}} = \phi(x_i, \hat{y}_{i,j}, T_i) \)). \( f_{\text{binary}} \in \{0, 1\} \) reflects the correctness of a solution, where \( f_{\text{binary}} = 1 \) means the given solution \( \hat{y}_{i,j} \) can successfully solve the given problem \( x_i \), and vice versa. 
\( f_{\text{text}} \) is a concatenation of stack traces and a textual error message provided by the Python interpreter only when the generated solution commits an error on a test case. Examples of \( f_{\text{text}} \) can be found in Fig. 1 and A.6. Generally speaking, we can implement \( \phi \) differently for different types of problems; in §3.5, we show that it is possible to implement a \( \phi \) that works for an NLP task. 2.3 Feedback-conditioned Fine-tuning (FCFT) Each LETI iteration samples solutions from LM \( p_\theta \), evaluates generated solutions to obtain feedback using \( \phi \), and improves the generator LM with feedback-conditioned fine-tuning (FCFT). FCFT fine-tunes \( p_\theta \) on each problem \( x_i \) and generated solution \( \hat{y}_{i,j} \) conditioned on feedback \( F_{i,j} \) (a sequence of tokens comprised of binary \( f_{\text{binary}} \) and textual feedback \( f_{\text{text}} \)). This resembles on-policy reinforcement learning, where \( p_\theta \) is the policy and the solution evaluator \( \phi \) plays the role of a reward function. Feedback \( F_{i,j} \) concatenates one initial reward token that denotes the binary feedback \( f_{\text{binary}} \) indicating whether the solution is correct, and textual feedback \( f_{\text{text}} \), if provided. If the solution evaluator \( \phi \) finds solution \( \hat{y}_{i,j} \) correct, we use a reward token \( <|\text{good}|> \), and \( <|\text{bad}|> \) otherwise. Follow- ing the initial reward token, we include the textual feedback \( f_{\text{text}} \), if provided, enclosed by two special tokens denoting the beginning and end of textual feedback (i.e., \( <|\text{text\_feedback}|> \), \(<|/text\_feedback|>\)). That is, both feedback for the problem \( x_i \) and solution \( \hat{y}_{i,j} \) are a concatenated sequence of tokens: \( F_{i,j} = f_{\text{binary}} \oplus <|\text{text\_feedback}|> \oplus f_{\text{text}} \oplus <|/text\_feedback|> \). In the case when \( f_{\text{text}} \) is not provided (e.g., when \( f_{\text{binary}} = 1 \)), only the initial reward token is included as feedback: \( F_{i,j} = f_{\text{binary}} \). We expand the vocabulary of the initial pre-trained LM \( p_\theta \) to include these additional tokens. LETI optimizes \( p_\theta \) with the language modeling objective on sequence \( s = F_{i,j} \oplus x_i \oplus \hat{y}_{i,j} \) (i.e., a concatenation of instruction and generated solution conditioned on the feedback) as shown in part (1) of equation [1]. A concrete example of a data instance can be found in Fig. A.6. ### 2.4 Regularization with Continued Pre-training To alleviate distribution shifts that may be caused by fine-tuning on generated solutions, we interleave FCFT optimization (§2.3) with LM objective optimization on the pre-training data. Equation [1] puts the entire LETI’s training loss together. Our ablation study shows that the regularization by continued pre-training is essential to maintain LM’s original capability on tasks that it was not trained on (§3.4). \[ L(\theta) = \frac{1}{|D_{\text{FCFT}}|} \sum_{s=F \oplus x \oplus y \in D_{\text{FCFT}}} L_{\text{LM}}(s, \theta) + \frac{1}{|D_{\text{pre-train}}|} \sum_{s' \in D_{\text{pre-train}}} L_{\text{LM}}(s', \theta) \] (1) **Algorithm 1** One iteration of LETI Improvement using Feedback-conditioned Fine-tuning (FCFT). 
**Require:** \( D_{\text{pre-train}} \) ▷ Pre-training Dataset

\( D_{\text{FCFT}} \leftarrow \{\} \) ▷ Dataset for FCFT

for each problem \( x_i \in P \) and its test cases \( T_i \) do
  for \( j = 1 \) to \( n \) do
    Sample a solution \( \hat{y}_{i,j} \) from \( p_\theta(\cdot | x_i) \), conditioned on \( <|good|> \) for fine-tuned \( p_\theta \) (§2.1)
    \( f_{\text{binary}}, f_{\text{text}} \leftarrow \phi(x_i, \hat{y}_{i,j}, T_i) \) ▷ Generate feedback using evaluator \( \phi \) (§2.2)
    \( F_{i,j} = f_{\text{binary}} \oplus <|\text{text\_feedback}|> \oplus f_{\text{text}} \oplus <|/\text{text\_feedback}|> \)
    \( D_{\text{FCFT}} \leftarrow D_{\text{FCFT}} \cup \{F_{i,j} \oplus x_i \oplus \hat{y}_{i,j}\} \) ▷ Construct the feedback-conditioned dataset
  end for
end for

Fine-tune the LM \( p_\theta \) for a fixed number of epochs on \( D_{\text{FCFT}} \) and \( D_{\text{pre-train}} \) (Equation 1)

### 3 Experimental Results

#### 3.1 Experiment Setup

**Base model.** We experiment with CodeGen-mono LMs (Nijkamp et al., 2022), a series of open-sourced LMs pre-trained on both natural language and code with a range of model sizes. The NL and PL mixture of pre-training data makes it possible to evaluate LETI on both NL and PL tasks. Due to limited computational resources, we choose to experiment with the 350M- and 2B-sized models.

**Dataset for continued pre-training.** We use the Python subset of TheStack v1.1 dataset (Kocetkov et al., 2022) as the continued pre-training dataset for the mixture pre-train objective (§2.4).

--- *The pre-training dataset BigPYTHON of CodeGen-mono is not publicly available at the time of writing.*

#### 3.2 LETI Makes LMs Better Code Generators

##### 3.2.1 Mostly Basic Python Problems (MBPP)

**Setup.** We use the Mostly Basic Python Problems (MBPP) dataset (Austin et al., 2021) for training and evaluation. It contains 974 short Python problems described in natural language targeting entry-level programmers. LETI requires no ground-truth code but assumes a test suite for each problem, which MBPP provides to check solutions’ correctness. Additional details (e.g., hyper-parameters) can be found in §C. We allow the model to generate at most 512 tokens for each problem and evaluate the generated solutions by executing them against a test suite.

**Post-Processing.** Stop-word-based post-processing heuristics (Fig. A.11) are commonly employed by Code-LMs (Chen et al., 2021b) to remove irrelevant code (e.g., only keep the first block of generated code) and improve performance. However, such post-processing heuristics require manual effort and are less scalable to extend to different tasks. Whether or not LMs can improve code generation without post-processing is a great testbed to evaluate their capability of learning from textual feedback and is central to answering our research question. Therefore, we test the general applicability of LETI both with and without post-processing. Unless otherwise noted, we default to the without-post-processing setting in the following experiments.

**Evaluation metrics.** We use the pass@k metric. The model generates k solutions for each problem; it is considered to have solved the problem if at least one of the k solutions passes all test cases. With higher k values, the chance of observing a correct output for a problem increases. To reduce variance, we sample more than k solutions to estimate pass@k; see §C.1 for details.

Figure 2: Pass@1 of LETI on the MBPP training and test sets across iterations, with and without textual feedback. (Error types reported in the analysis are Python’s concrete exceptions: https://docs.python.org/3/library/exceptions.html#concrete-exceptions.)

**Results.** As shown in Fig.
2, LETI (w/o post-processing) learns from interactions with MBPP training set problems (i.e., iteratively generate, evaluate solutions, and learn from textual feedback) to generate better solutions for both training and testing problems. Despite not being fine-tuned on any ground truth solutions, LETI improves test set Pass@1 with increasing iterations and outperforms a supervised fine-tuned baseline (for the 2B model). LETI is also helpful when the post-processing heuristic is applied to the LM’s output: 2B LM improves from 26.89% to 29.53% within two iterations (Tab. 1). We include a qualitative example for the 2B model in Fig. 1. **Error analysis.** On MBPP test set with 8,000 instances (500 test examples, 16 generations per example), we show how the distribution of error types changes for LETI (2B) in Tab. 1. These error types are concrete exceptions of Python3 programming language. On LETI (2B, w/o post-processing), we initially observed that most errors are SyntaxError (5179, 64.7%) due to no post-processing. We find that LETI can gradually reduce the proportion of generated code that causes SyntaxError by 56.5% (5179 → 652) and produce 63.2% more executable code (pass test + AssertionError). Most of the remaining errors (54.5% out of 71.8%) are due to the generated code being functionally incorrect as validated by the test suite (AssertionError), which can be hard to fix using the error message and stack traces alone [Jones et al., 2002], even for humans. Similarly, on LETI (2B, w/ post-processing), we observe NameError, which can be fixed using the error message alone, is mostly eliminated (810 → 94) within two iterations, demonstrating the effectiveness of LETI. These results also expose the limitation of automated textual feedback from Python interpreter, which can be mitigated by (1) increasing exploration in the hope of finding better code by sampling more per problem (§B.1) [Li et al., 2022], (2) leveraging more powerful sources of feedback [Wang et al., 2023b], or (3) keeping pre-training base LM on more relevant solutions. Table 1: Count of top-3 error types on MBPP test set before and after LETI fine-tuning. | LETI (2B) w/o post-processing | Pre-trained | Fine-tuned | |-------------------------------|-------------|------------| | # of AssertionError | 1189 | 4356 | | # of SyntaxError | 5179 | 652 | | # of IndentationError | 467 | 165 | | # of Other Errors | 799 | 572 | | # of Pass Test | 366 | 2255 | | Pass@1 (%) | 4.50 | 28.00 | | LETI (2B) w/ post-processing | Pre-trained | Fine-tuned | |-------------------------------|-------------|------------| | # of AssertionError | 3835 | 4376 | | # of SyntaxError | 437 | 458 | | # of NameError | 810 | 94 | | # of Other Errors | 652 | 657 | | # of Pass Test | 2266 | 2415 | | Pass@1 (%) | 26.89 | 29.53 | Table 2: HumanEval performance of LMs finetuned on MBPP using LETI. We observe consistent Pass@10 and Pass@100 improvement across different model sizes. The top-ranked results are presented in **bold**, while the second-ranked results are underlined. 
| HumanEval | Pass@1 | Pass@10 | Pass@100 | |----------------------------|--------|---------|----------| | Pre-trained (350M) | 12.56 | 23.11 | 35.19 | | LETI (350M) w/o textual feedback | 12.19 | 21.69 | 35.62 | | LETI (350M) | **13.19** | **23.36** | **36.95** | | Pre-trained (2B) | 23.70 | 36.64 | 57.01 | | LETI (2B) w/o textual feedback | 19.90 | 35.62 | 58.48 | | LETI (2B) | 21.60 | 37.03 | 58.28 | | LETI (2B, trained w/ post-processing) | 21.60 | **39.51** | **61.46** | ### 3.2.2 HumanEval **Setup.** We evaluate LM trained on MBPP on another code generation dataset HumanEval (Chen et al., 2021b), which contains 164 handwritten problems to assess language comprehension, reasoning, algorithms, and simple math capabilities. We use the same pass@k metric as described in §3.2.1 and apply post-processing for the generated solution. **Results.** Despite being trained on a problem set MBPP that contains the most basic Python problems, as shown in Tab. 2, LETI can improve LM’s capability in other code generation problems in the HumanEval dataset. Compared to pre-trained LM, we observe consistent Pass@10 and Pass@100 improvement across both 350M and 2B LMs, while the 2B LM has a degraded Pass@1 performance. We observe larger improvements for LETI (2B) trained with post-processing as it allows LETI to focus on improving common error (e.g., NameError) in evaluation that applies post-processing. ### 3.3 Learning from Textual Feedback is More Sample-Efficient To study the effect of learning from textual feedback, Fig. 2 compares LETI against a baseline that only uses binary feedback. Regardless of model sizes, LMs trained with textual feedback obtain better final performance and improve faster (up to 2.2x for 2B; Tab. 3). **LM’s ability to leverage textual feedback increases with scale.** A larger model is more effective in learning from textual feedback and can obtain a larger (average) improvement per iteration than a baseline that only uses binary feedback (Tab. 3). 2B model that uses textual feedback improves 2.24x faster than binary feedback, while 350M is only 1.57x faster. Similar to Kaplan et al. (2020), we also find that a larger LM (2B) optimized using LETI obtains larger improvements per iteration (approx. 8x more compared to 350M LM) for both training and testing problems when both are given textual feedback. In other words, a larger model requires fewer gradient updates to achieve similar performance in a smaller model. These observations suggest that we might see more significant gains by applying LETI on LMs of a larger scale (e.g., 6B, 16B), which we leave for future work. **LMS trained with textual feedback can use samples more efficiently.** As shown in Fig. 3 compared to a baseline that only uses binary feedback, LETI (2B) yields better accuracy and sample efficiency: 2.74x and 2.24x higher improvement rate for \(|\mathcal{P}| = 128\) and \(|\mathcal{P}| = 374\) (Tab. 4). Interestingly, we observe a different trend for the smaller LM (350M). When decreasing the number of training problems from 374 to 128, LETI actually underperforms the baseline that only uses binary feedback. We conjecture that this is because (1) a smaller LM may lack the capacity to learn from textural feedback, and (2) LMs can benefit from a larger \(|\mathcal{P}|\) by seeing a more diverse set of problems. 
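As a side note on the pass@k metric used throughout §3.2, sampling more than k solutions per problem is typically combined with the unbiased, numerically stable estimator of Chen et al. (2021b)—presumably what §C.1 refers to. For n sampled solutions of which c pass all tests, pass@k = 1 − C(n−c, k)/C(n, k); a sketch (ours, illustrative data):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator in the style of Chen et al. (2021b).

    n: total sampled solutions per problem; c: number passing all tests.
    Computes 1 - C(n - c, k) / C(n, k) in a numerically stable way.
    """
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 16 samples per problem, averaged over problems (counts are made up).
results_per_problem = [(16, 3), (16, 0), (16, 16)]   # (n, c) pairs
print(np.mean([pass_at_k(n, c, k=1) for n, c in results_per_problem]))
```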
### 3.4 LETI Retains Reasoning and Chain-of-Thought Performance **Setup.** We evaluate LETI-optimized LM (w/o post-processing) on additional reasoning tasks, including GSM8K (Grade School Math) Cobbe et al. (2021), a mathematical reasoning dataset that includes grade school math problems, and Big-Bench-Hard (BBH) Suzgun et al. (2022) that includes 26 challenging and diverse tasks (e.g., date understanding, sport understanding) testing Figure 3: LETI performance with different numbers of training problems \(|P| \in \{128, 374\}\). LETI (2B) with textual feedback can use samples more efficiently than a baseline that does not leverage textual feedback by always achieving higher performance and improvement rate (Tab. 4). Table 3: On MBPP, LETI improves the LMs’ code generation performance by up to 2.24x more per iteration when textual feedback is provided. | Model Size | Textual Feedback | Initial Pass@1 | Max Pass@1 | #Iter to Max | Avg. improvement per iteration | |------------|------------------|---------------|-----------|-------------|-------------------------------| | 2B | ✓ | 4.50 | 28.00 | 6 | 3.92 (2.24x) | | | × | 4.50 | 18.54 | 8 | 1.75 | | 350M | ✓ | 7.40 | 13.96 | 14 | 0.47 (1.57x) | | | × | 7.40 | 10.75 | 11 | 0.30 | Table 4: LETI’s average improvement per iteration for different numbers of training problems \(|P| \in \{128, 374\}\). | Model Size | Textual Feedback | # Train Problems \(|P|\) | Avg. improvement per iteration | |------------|------------------|--------------------------|-------------------------------| | 2B | ✓ | 128 | 2.60 (2.74x) | | | × | 374 (full dataset) | 0.95 | | 350M | ✓ | 128 | 0.17 (0.63x) | | | × | 374 (full dataset) | 0.27 | model’s generic reasoning capability. For GSM8K, we evaluate on PaL-style prompting (Gao et al., 2022) settings that ask LM to generate code and execute them to solve the given reasoning problem. Solutions for these reasoning tasks are generated without being conditioned on any reward token (e.g., \(<|\text{good}|>\)). We evaluate Big-Bench-Hard on two prompt settings: direct prompting that asks the model to generate an answer directly and chain-of-thought (CoT) prompting (Wei et al., 2022b) that elicits a series of intermediate reasoning steps from the LM before generating the answer. We calculate the performance gain \(\Delta_{\text{CoT-direct}}\) from doing chain-of-thought by calculating the performance difference between CoT and direct prompting. Results. As shown in Tab. 5, we observe no significant degradation in out-of-domain reasoning performance (i.e., GSM8K and BBH) after LETI fine-tuning. Moreover, as shown on BBH, applying LETI on a 2B LM improves its chain-of-thought capability compared to its pre-trained checkpoint (i.e., higher CoT and \(\Delta_{\text{CoT-direct}}\)). In a smaller 350M model, we observe some degradation in BBH’s CoT performance despite also applying regularization via continued pre-training (\$2.4). Removing regularization degrades performance outside MBPP. We compare LMs (350M) trained with and without the continued pre-training regularization (\$2.4). We observe no significant difference between in-domain task performance (MBPP) shown in Fig. A.9. However, as shown in Tab. 5, removing regularization significantly degrades LM’s capability on PaL-prompted GSM-8K, similar to findings from Fu et al. (2023), it also degrades BBH’s chain-of-thought performance. 
Table 5: Performance on additional reasoning tasks, including math reasoning benchmark GSM8K (Cobbe et al., 2021) and Big-Bench-Hard (i.e., BBH) (Suzgun et al., 2022). *250 out of 6,511 BBH\(_{\text{CoT}}\) prompts have more than 2048 tokens, which exceed CodeGen models’ context window. Scores are set to 0 for these prompts. | | GSM8K PaL | Big-Bench-Hard direct | CoT* | \(\Delta_{\text{CoT-direct}}\) | |---------------------|-----------|-----------------------|------|-------------------------------| | Pre-trained (2B) | 40.03 | 29.67 | 36.81| 7.14 | | LETI (2B) | 38.97 | 29.41 | 37.46| 8.05 | | LETI (2B, w/ post-processing) | 42.99 | 29.81 | 36.72 | 6.91 | | LETI (2B) w/o textual feedback | 41.93 | 29.23 | 36.71 | 7.48 | | LETI (2B) w/o regularization | 32.15 | 30.06 | 35.82 | 5.76 | | Pre-trained (350M) | 13.01 | 28.89 | 28.86| -0.03 | | LETI (350M) | 16.68 | 28.89 | 28.86| -0.03 | | LETI (350M) w/o textual feedback | 16.07 | 28.81 | 28.72 | -0.09 | | LETI (350M) w/o regularization | 7.88 | 28.00 | 28.31 | 0.31 | 3.5 LETI IS APPLICABLE TO NLP TASKS LIKE EVENT ARGUMENT EXTRACTION (EAE) When an NLP task can be formulated into a code generation problem, LETI is equally applicable. We experiment with event argument extraction (EAE), cast as a code generation problem by Wang et al. (2023a). Given an event ontology (Fig. 4 upper left) and a natural language sentence (Fig. 4 bottom left), we ask the LM to generate code to instantiate an event class using correct argument roles extracted from the sentence. Then we can check and examine the instantiated event object to validate the correctness of the solution (Fig. 4 right). Solution evaluator implementation. We build a rule-based solution evaluator for the EAE task that checks the instantiated event object in Python (Fig. 4). Specifically, we first check whether the generation satisfies argument constraints by providing a list of Entity objects for each event argument role (1, 2 in Fig. 4). Then we check whether all the predicted arguments match any of the ground truths (3, Fig. 4) and whether all the correctly identified arguments are classified to the correct event role (4, Fig. 4); Finally, we check if the prediction is complete by identifying all arguments in the ground truth solution (5, Fig. 4). We say the solution is correct with $f_{\text{binary}} = 1$ when it meets all of the above criteria. Note that the design decision of the solution evaluator (e.g., which error to check first) can influence what type of error LETI-optimized LM will prioritize to avoid. ![Figure 4: Rule-based Solution Evaluator for Event Argument Extraction (EAE) formulated as code generation task Wang et al. (2023a). Content enclosed by \{\ldots\} in $f_{\text{text}}$ is automatically populated by a Python implementation of Evaluator for any given solution.] Results. LETI’s performance on EAE task is summarized in Fig. 5. In Fig. 5(left), we find that LETI is capable of improving the train and test pass rate of generated solutions (i.e., a larger proportion of $f_{\text{binary}} = 1$ for both training and testing test). We also observe increased test performance on task-specific metrics: Argument Identification (Arg-I) F1 increases by 12.3% (21.2% $\rightarrow$ 33.5%), and Argument Classification (Arg-C) F1 increases 2.6% (8% $\rightarrow$ 10.6%) with three iterations. Implementation of solution verifier could influence the target metric of optimization. 
Interestingly, we find that improving $f_{\text{binary}}$ using our solution evaluator results in better performance in some task-specific metrics (e.g., Arg-I and Arg-C precision) but not others (e.g., Arg-I and Arg-C F1). As shown in Fig. 5, Arg-I and Arg-C precision, among other task-specific metrics, has the highest Pearson correlation of 0.93 and 0.73 with test Pass@1, while Arg-I F1 and Arg-C F1 only moderately (0.51) or weakly (0.29) correlate with test Pass@1. One possible reason is that LETI forces the model to be correct on every argument it identified in the evaluator implementation (Fig. 4 step 3). This could inhibit the model from generating arguments very close to the ground truth solutions, reflected in the degrading recall (correlation with Test Pass@1 of -0.08 and -0.24 for Arg-I and Arg-C recall) and improved precision in Fig. 5. This is similar to the reward-shaping problem in reinforcement learning. One can implement solution evaluators that suit better certain metrics. ![Figure 5](image) **Figure 5:** Event Argument Extraction performance and their correlation with Test Pass@1 when using LETI to optimize towards success rate. We found that the rule-based solution evaluator (Fig. 4) can be designed to biased towards optimizing precision as discussed in §3.5. ## 4 RELATED WORK ### Using feedback to improve code generation. Leveraging non-textual feedback from an interpreter, prior work can generate solutions following natural language instructions by sampling and filtering large amounts of programs (Li et al., 2022; Chen et al., 2022), training a model to rank generated solutions (Inala et al., 2022), fine-tuning a Code-LM on generated solutions verified by test cases (Haluptzok et al., 2022), or training a reward model and using reinforcement learning (RL) to improve Code-LMs (Le et al., 2022). Recent work has explored textual feedback (e.g., error messages, human language feedback) to improve LM for code-related problems. Chen et al. (2023a) improves code generation by fine-tuning the original LM on code refinement generated by conditioning on human language feedback; Different from our work, their fine-tuned LM uses more expensive human feedback and is not trained directly on the provided textual feedback. Chen et al. (2023b); Madaan et al. (2023) improve code generation by allowing LM to look at self-generated (and/or interpreter) feedback; however, the generator LM was frozen and couldn’t generate better code on the original problem without these methods, while LETI improves the underlying LM directly. ### Improving LMs with reinforcement learning. Using PPO, Stiennon et al. (2020); Ouyang et al. (2022) align LMs with human preferences. CodeRL (Le et al., 2022) follows REINFORCE (Williams, 1992) and policy gradient (Sutton et al., 1999) to improve Code-LMs with a scalar reward from the interpreter. Different from LETI that directly leverages textual feedback, these algorithms require either manually crafting (Le et al., 2022) or training (Stiennon et al., 2020; Ouyang et al., 2022) reward/value functions, which could be less scalable for various tasks. Another strand of work leverages Transformer architecture (Vaswani et al., 2017) to perform RL with sequence modeling (Janner et al., 2021; Chen et al., 2021a; Lu et al., 2022; Korbak et al., 2023; Zhang et al., 2023; Liu et al., 2023) improve LM by performing condition training, similar to conditioning LM on binary feedback $f_{\text{binary}}$ in LETI. 
LETI goes beyond the aforementioned work conditioning on the coarse-grained label: we are asking the LM to comprehend and improve directly based on textual feedback (e.g., error messages) that generally contains richer information compared to binary feedback. ## 5 CONCLUSION We proposed LETI, a new LM fine-tuning paradigm that explores LM’s potential to learn from textual interactions. We focused on code generation tasks and showed that one can effectively leverage automatic textual feedback from a Python interpreter to improve LMs. Textual feedback outperforms baselines that only use binary feedback in both generation quality and sample efficiency. Furthermore, LETI is equally applicable in NLP tasks that can be formulated as code generation, which we empirically verified on Event Argument Extraction. We refer to §A for a discussion of limitations and future work. REFERENCES Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program synthesis with large language models. *ArXiv*, abs/2108.07732, 2021. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R Bowman, Kyunghyun Cho, and Ethan Perez. Improving code generation by training with natural language feedback. *arXiv preprint arXiv:2303.16749*, 2023a. Bei Chen, Fengji Zhang, A. Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. Codet: Code generation with generated tests. *ArXiv*, abs/2207.10397, 2022. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in neural information processing systems*, 34:15084–15097, 2021a. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021b. Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. *ArXiv*, abs/2304.05128, 2023b. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. *ArXiv*, abs/2110.14168, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. pp. 4171–4186, 2019. Yao Fu, Hao-Chun Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language models towards multi-step reasoning. *ArXiv*, abs/2301.12726, 2023. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. *ArXiv*, abs/2211.10435, 2022. Patrick M. Haluptzok, Matthew Bowers, and Adam Tauman Kalai. 
Language models can teach themselves to program better. *ArXiv*, abs/2207.14502, 2022. Jeevana Priya Inala, Chenglong Wang, Mei Yang, Andres Codas, Mark Encarnación, Shuvendu Lahiri, Madanlal Musuvathi, and Jianfeng Gao. Fault-aware neural code rankers. *Advances in Neural Information Processing Systems*, 35:13419–13432, 2022. Michael Janner, Qiyang Li, and Sergey Levine. Reinforcement learning as one big sequence modeling problem. In *Neural Information Processing Systems*, 2021. James A Jones, Mary Jean Harrold, and John Stasko. Visualization of test information to assist fault localization. In *Proceedings of the 24th international conference on Software engineering*, pp. 467–477, 2002. Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. Scaling laws for neural language models. *ArXiv*, abs/2001.08361, 2020.
sBSC0OXEQG
Eq. (2) presents a linear combination of auto- and hetero-associative memory using the hyperparameter $b$ and $(1-b)$. But this notation hints at a convex combination $b \in [0, 1]$, where $b=0$ implies only hetero-associative memory and $b=1$ implies only auto-associative memory. **Why do we need a full linear combination** instead of the more intuitive convex combination?
CORRELATED DENSE ASSOCIATIVE MEMORIES Anonymous authors Paper under double-blind review ABSTRACT We introduce a novel associative memory model named Correlated Dense Associative Memory (CDAM), which integrates both auto- and hetero-association in a unified framework for continuous-valued memory patterns. Employing an arbitrary graph structure to semantically link memory patterns, CDAM is theoretically and numerically analyzed, revealing four distinct dynamical modes: auto-association, narrow hetero-association, wide hetero-association, and neutral quiescence. Drawing inspiration from inhibitory modulation studies, we employ anti-Hebbian learning rules to control the range of hetero-association, extract multi-scale representations of community structures in graphs, and stabilize the recall of temporal sequences. Experimental demonstrations showcase CDAM’s efficacy in handling real video data and replicating a classical neuroscience experiment. 1 INTRODUCTION 1.1 BACKGROUND Mathematical models of ferromagnetism in statistical mechanics, as developed by Lenz, Ising, Schottky, and others (Brush [1967], Folk & Holovatch [2022]), model the interactions between collections of discrete variables. When connected discrete variables disagree in their values, the energy of the system increases. The system trends toward low energy states via recurrent dynamics, but can be perturbed or biased by external input. Marr [1971] proposed a conceptual framework of associative memory in neurobiological systems using a similar principle but of interacting neurons, which was subsequently formalised in a similar way (Nakano [1972], Amari [1972], Little [1974], Stanley [1976], Hopfield [1982]). A key difference between these associative memory and ferromagnetism models is that the neurons are typically connected all-to-all with infinite-range interactions whereas in the ferromagnetism models variables were typically connected locally within a finite range. The principle by which these associative memory networks store memories is by assigning recurrent connection weights and update rules such that the energy landscape of the network forms dynamic attractors (low energy states) around memory patterns (particular states of the neurons). In the case of pairwise connections, these weights translate to the synaptic strength between pairs of neurons in biological neural networks. The network therefore acts as a content addressable memory – given a partial or noise-corrupted memory, the network can update its states through recurrent dynamics to retrieve the full memory. Of particular interest to the machine learning community is the recent development of dense associative memory networks (Krotov & Hopfield [2016]) (also referred to as modern Hopfield networks) and their close correspondence (Ramsauer et al. [2021]) to the attention mechanism of Transformers (Vaswani et al. [2017]). In particular, the dense associative memory networks introduced by Krotov & Hopfield [2016] (including with continuous variables) were generalised by using the softmax activation function, whereby Ramsauer et al. [2021] showed a connection to the attention mechanism of Transformers (Vaswani et al. [2017]). 
Indeed, Krotov & Hopfield [2016] make a mathematical analogy between their energy-based update rule and setwise connections given their energy-based update rule can be interpreted as allowing individual pairs of pre- and post-synaptic neurons to make multiple synapses with each other – making pairwise connections mathematically as strong as equivalently-ordered setwise connections. Demircigil et al. [2017] later proved this analogy to be --- 1 Simultaneously, work in spin glasses followed a similar mathematical trajectory in the works of Sherrington & Kirkpatrick [1975] and Pastur & Figotin [1977]. accurate in terms of theoretical memory capacity. As shown subsequently, by explicitly modelling higher-ordered connections in such networks, the energy landscape becomes sharper and memory capacity is increased (Burns & Fukai, 2023). In the majority of the prior associative memory works discussed so far, memory recall is auto-associative, i.e., given some partial memory the dynamics of the network ideally lead to recalling the (same) full memory. However, hetero-association is just as valid dynamically (Amari, 1972; Gutfreund & Mezard, 1988; Griniasy et al., 1993; Gillett et al., 2020; Tyulmankov et al., 2021; Millidge et al., 2022; Karuvally et al., 2023; Chaudhry et al., 2023). Instead of a partial memory directing the dynamics to recalling the same memory pattern, we can instead recall something else. Such hetero-associations are believed to naturally occur in the oscillatory dynamics of central pattern generators for locomotion (Stent et al., 1978), sequence memory storage in hippocampus (Treves & Amit, 1988), and visual working-memory in primate temporal cortex (Miyashita, 1988). 1.2 Motivations from neuroscience A classical result in the hetero-association neuroscience literature is due to Miyashita (1988). This work demonstrated hetero-association of stimuli in monkey temporal cortex could arise semantically via repeated presentations of the same stimuli in the same order, not only spatially via similarities in the stimuli themselves. Miyashita (1988) showed neurons responsive to presentation of randomly-generated fractal patterns had a monotonically-decreasing auto-correlation between the firing rates due to the current pattern and the next expected patterns, up to a distance of 6 patterns into the future. Work on numerosity in birds, non-human primates, and humans (Nieder et al., 2002; Ditz & Nieder, 2015; Nieder, 2012; Kutter et al., 2018) have repeatedly provided evidence of neurons responding to specific numbers or quantities. In these experiments, the stimuli (numbers or quantities) can be both semantically and spatially correlated – i.e., they can have the known semantic ordering of 1, 2, 3 . . . or ‘some, more, even more . . . ’, as well as the spatial or statistical relationships between the stimuli. Notably, even in abstract number experiments where spatial correlations are moot, semantic distances up to a range of ~ 5 numbers (as measured by significant auto-correlations of the neural activity) are common. This phenomenon extends beyond simple 1D, sequence relationships, however. Schapiro et al. (2013) presented human participants with a series of arbitrary visual stimuli which were ordered by a random walk on a graph with community structure (where each image was associated with a vertex in the graph). 
Functional magnetic resonance imaging analysis of the blood-oxygen-level-dependent response showed the representations of different stimuli were clustered by brain activity into the communities given by the underlying graph and unrelated to the actual stimuli features. In all of these studies, both auto-association (for the present stimulus) and hetero-association (for the semantically-related stimuli) is present. And such mixtures, where they encode a more general structures relevant for tasks, may be behaviourally useful. For instance, mice trained on goal-sequence tasks sharing a common semantic basis arising from a 2D lattice graph develop task-progress cells which generalise across tasks, physical distances, behavioural timescales, and stimuli modality (El-Gaby et al., 2023). Furthermore, such dynamics may be modulated by inhibitory signals (King et al., 2013; Honey et al., 2017; Hertag & Sprekeler, 2019; Haga & Fukai, 2019, 2021; Burns et al., 2022; Tobin et al., 2023) to shift the locus of attention, learning, or behaviour. Such function could account for the many instances of anti-Hebbian learning found throughout neural systems (Roberts & Leen, 2010; Schulz & Feldman, 2013), as well as their implications in the role of sleep for memory pruning (Crick & Mitchison, 1983; Hopfield et al., 1983; Diekelmann & Born, 2010; Poc, 2017; Zhou et al., 2020), motor control learning (Nashef et al., 2022), dendritic selectivity (Hayama et al., 2013; Paille et al., 2013) and input source separation (Brito & Gerstner, 2016). --- 2 An interesting alternative or supplementary technique is to use synaptic delays to generate such sequences (Tank & Hopfield, 1987; Kleinfeld & Sompolinsky, 1988; Karuvally et al., 2023), however here we will focus on non-delayed hetero-association where synapses all operate at the same timescale. 3 Depending on the species, brain area, and stimuli modality. 1.3 Motivations from Machine Learning Given the storied history of classical hetero-associative modelling work, extensions to dense associative memory are a natural next step. Some work in this direction has already begun. Millidge et al. (2022) present an elegant perspective which makes it straightforward to construct dense associative memory networks with hetero-association, and demonstrated recalling the opposite halves of MNIST or CIFAR10 images. Karuvally et al. (2023) construct an adiabatically-varying energy surface to entrain sequences in a series of meta-stable states, using temporal delays for memories to interact via a hidden layer. Application to a toy sequence episodic memory task showed how the delay signal can shift the attractive regime. And, recently, Chaudhry et al. (2023) studied a sequence-based extension of the dense associative memory model by adopting the polynomial or exponential update rule for binary-valued sequences of memories. This work also introduces a generalisation of the Kanter & Sompolinsky (1987) pseudoinverse rule to improve distinguishability between correlated memories. As Chaudhry et al. (2023) concludes, many potential research avenues remain, including extending these methods to continuous-valued patterns. Chaudhry et al. (2023) also note the potential to study different network topologies. 
There are several distinct notions of network topology which we could study, including that of neuronal connections (as in Löwe & Vermeij (2011); Burns & Fukai (2023)), spatial or statistical relationships between memory patterns (as in Löwe (1998); De Marzo & Iannelli (2023)), or semantic relationships between memory patterns (as in Amari (1972); Chaudhry et al. (2023)). A majority of classical work has focused on semantic correlation, likely due to its relevance to neuroscience (see Subsection 1.2). To extend the study of such semantic relationships to interesting topologies, it is necessary to introduce a basic topology, such as via embedding memories in graphs (as in Schapiro et al. (2013)). Being highly versatile mathematical structures, upon generalising semantic relationships with graphs, this additionally generates opportunities to study graph-based computations such as community detection or graph segmentation and learning or simulation of (finite) automata by neural networks (Balle & Maillard (2017); Ardakani et al. (2020); Liu et al. (2023)). Section 3.5 of Millidge et al. (2022) describes how we may generally consider the relationships between auto- and hetero-associative models, and notes how Transformers’ attention mechanisms take the hetero-associative form mathematically. Functionally, however, the attention mechanism is not obligated to perform hetero-association, since its values and keys are created independently by their respective weight matrices (see Vaswani et al. (2017)) and can in-principle make these identical so as to perform auto-association, or otherwise some mixture of auto- and hetero-association. Taking this perspective seriously opens the way for analysing Transformers through the lense of potential mixtures of auto- and hetero-associative dynamics, à la the analysis of a large language model in Ramsauer et al. (2021) by considering the implied energy landscapes in each of its attention heads. For this to be possible, however, a first step is to rigorously develop and study a dense auto- and hetero-association model and its inherent computational capabilities. 1.4 Our Contributions With these joint motivations from neuroscience and machine learning in mind, we • Introduce a dense associative memory model, called Correlated Dense Associative Memory (CDAM), which combines a controllable mixture of auto- and hetero-association in a single model for dynamics on continuous-valued memory patterns, using an underlying (arbitrary) graph structure to semantically hetero-associate the memory patterns; • Theoretically and numerically analyse CDAM’s dynamics, demonstrating connections to graph theory and four distinct dynamical modes – auto-association, narrow hetero-association, wide hetero-association, and neutral quiescence; • Taking inspiration from inhibitory modulation studies, we demonstrate how anti-Hebbian learning rules can be used to: (i) widen the range of hetero-association across memories; (ii) extract multi-scale representations of community structures in memory graph structures; and (iii) stabilise recall of temporal sequences; and • Illustrate, via experiments, CDAM’s capacity to work with real video data and replicate data from a classical neuroscience experiment. 2 Correlated Dense Associative Memory (CDAM) 2.1 Model To embed memories in the network, we first create \( P \in \mathbb{N} \) patterns as continuous-valued vectors of length \( N \in \mathbb{N} \), the number of neurons in the network. 
These memory patterns can be random, partially-random, or themselves contain content we wish to store. In the random case, each value of a memory vector is independently sampled from the interval \([0, 1]\). In the partially-random case, we reserve half of the vector for structured memory and the rest is random in the same sense as before. We denote the value for a neuron \( i \) in an individual memory pattern \( \mu \) as \( \xi_i^\mu \). For convenience, we organise these vectors into a memory matrix \( \Xi \in \mathbb{R}^{N \times P} \) for convenience. We also define a vector \( \bar{\xi} \in \mathbb{R}^N \) whose values are \( \bar{\xi}_i = P^{-1} \sum_{\mu=1}^{P} \xi_i^\mu \) to represent the average ‘memory load’ of each neuron. Next, we choose a graph \( M = (V, \Delta) \) with \( |V| = P \) vertices and \( |\Delta| \in \mathbb{N} \) edges. This graph, which we also refer to as the memory graph, forms the basis for the inter-pattern hetero-associations via its adjacency matrix \( A \). We use discretised time and denote the network state at time \( t \) as \( S^{(t)} \in \mathbb{R}^N \). To use the language of Millidge et al. (2022), we use softmax as our separation function, which is defined for a vector \( z \) as \( \text{softmax}(z_i) := \frac{\exp(z_i)}{\sum_j \exp(z_j)} \). Starting at a chosen initial state \( S^{(0)} \), subsequent states are given by \[ S^{(t+1)} := S^{(t)} + \eta((\text{softmax}(\beta S^{(t)} \Xi) Q - N^{-1} \bar{\xi}^T) - S^{(t)}), \quad Q = a \Xi + h (\Xi A)^T, \] where \( \eta \in \mathbb{R}^+ \) is the magnitude of each update, \( \beta \in \mathbb{R}^+ \) is the inverse temperature (which can be thought of as controlling the level of mixing between memory patterns during retrieval), and \( a, h \in \mathbb{R} \) is the strength of auto- and hetero-association in the retrieval projection matrix \( Q \), respectively. 2.2 Theoretical analysis A typical analysis to perform on associative memory networks is to probe its memory storage capacity, i.e., how many memories can be stored given \( N \) neurons? In CDAM, when \( a, h \neq 0 \), the regular notions of ‘capacity’ seem inapplicable. This is because ‘capacity’ is normally measured in the pure auto-associative case by giving a noise-corrupted or partial memory pattern, and observing whether and how closely the model’s dynamics converge to the uncorrupted or complete memory pattern (e.g., see Amit et al. (1985) for the classical model and Demircigil et al. (2017) for the dense model). In the pure hetero-associative case, ‘capacity’ has (to our knowledge) only ever been studied in the linear sequences case (e.g., see Löwe (1998) for the classical model and Chaudhry et al. (2023) for the dense model). However, in our model we study general mixtures of both auto- and hetero-association, as well as arbitrary memory graphs (not just linear cycles). It is therefore unclear whether there exists an appropriate notion of ‘capacity’ for this mixture. One can, however, study the model in a similar spirit of analysis. To this end, we demonstrate the dynamics of our model in the thermodynamic limit. First, let us set aside the choices of \( \eta \) and \( \beta \), which control the amplitude of each step’s update. For an undirected memory graph \( M \), our energy function is \[ E \propto -a \sum_{\mu=1}^{P} \exp(\beta \xi^\mu S) - h \sum_{\{\alpha, \sigma\} \in \Delta(M)} \exp(\beta (\xi^\alpha S)(\xi^\sigma S)), \] where \( \Delta(M) \) is the set of edges in the undirected memory graph. 
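A minimal numerical sketch of the update rule in Equation (1) is given below. The hyperparameter values ($a=-2$, $h=3$, the directed cycle memory graph, and the noise amplitude $c=1$) are illustrative choices borrowed from §3, and we read the retrieval projection as $Q = (a\,\Xi + h\,\Xi A)^\top$ so that both terms map the $P$ softmax weights back to $\mathbb{R}^N$; this reading is an assumption made for dimensional consistency.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 1000, 50                  # neurons, memory patterns
eta, beta = 0.01, 0.1            # step size and inverse temperature (values from Section 3)
a, h = -2.0, 3.0                 # auto-/hetero-association strengths (illustrative)

Xi = rng.uniform(0.0, 1.0, size=(N, P))   # memory matrix; column mu is pattern xi^mu
xi_bar = Xi.mean(axis=1)                  # per-neuron average 'memory load'

# Directed cycle memory graph C_P: pattern mu hetero-associates to pattern mu + 1.
A = np.zeros((P, P))
A[np.arange(P), (np.arange(P) + 1) % P] = 1.0

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def step(S):
    """One update of Eq. (1): softmax match to each pattern, then a mixed
    auto-/hetero-associative readout, minus the memory-load bias term."""
    p = softmax(beta * S @ Xi)                 # how well S matches each pattern
    retrieved = p @ (a * Xi + h * (Xi @ A)).T  # assumed (P, N)-shaped projection Q
    return S + eta * ((retrieved - xi_bar / N) - S)

# Initialise near pattern 0 with additive noise of amplitude c = 1, as in Section 3,
# and run a fixed number of steps in place of a convergence check.
S = Xi[:, 0] + rng.uniform(-0.5, 0.5, size=N)
for _ in range(1000):
    S = step(S)

# Pearson correlation of the final state with each memory pattern, the
# retrieval measure used throughout Section 3.
r = [np.corrcoef(S, Xi[:, mu])[0, 1] for mu in range(P)]
print(np.round(r[:5], 2))
```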
Assume, for a moment, that \( M \) is \( k \)-regular, meaning each vertex has degree \( k \). In this case, we could rewrite Equation 2 as \[ E \propto -a \sum_{\mu=1}^{P} \exp(\beta \xi^\mu S) - h \sum_{\alpha=1}^{P} \sum_{\sigma=1}^{P} \frac{A_{\alpha,\sigma}}{k} \exp(\beta (\xi^\alpha S)(\xi^\sigma S)) \] (3) \[ = -(a + hk) \sum_{\mu=1}^{P} \exp(\beta \xi^\mu S) + h \sum_{\alpha=1}^{P} \sum_{\sigma=1}^{P} \frac{A_{\alpha,\sigma}}{k} \exp(\beta (\xi^\alpha S - \xi^\sigma S)^2). \] (4) From Equation (4), we can see that while in a Hebbian hetero-associative regime, i.e., \( h > 0 \), setting \( a < -kh \) gives the trivial minimisation of letting all \( \xi^\mu S \) terms vanish, i.e., having a state which is far from any pattern. However, when \( a > -kh \), minimisation of the energy demands maximising the auto-association under penalty of the consequent hetero-association. In the absence of the hetero-association penalty, we have the model of Equation 13 in Lucibello & Mézard (2023), where scaling comes from \( a \); similarly, in the absence of auto-association, we have Chaudhry et al. (2023) scaled by \( h \) and with arbitrary semantic correlations according to \( M \). In our case, however, where there is a mixture of auto- and hetero-associations (which act simultaneously), the hetero-associative component of Equation (4) causes a large number of pattern activations for negative values of \( a \), i.e., while \( 0 > a > -kh \). When \( a, h > 0 \), hetero-association remains but across a narrower range. In Appendix A.1, we perform numerical simulations to demonstrate these four modes: auto-association, narrow hetero-association, wide hetero-association, and neutral quiescence. These simulations also demonstrate how in \( k \)-regular graphs we need \( h = \frac{1-a}{k} \) for the mean neural activity to converge to 0 in the limit of \( T \to \infty \), i.e., to keep an unbiased excitatory–inhibitory (E–I) balance. And while the above analysis assumes \( M \) is \( k \)-regular, this E–I balance finding applies for non-regular memory graphs by setting \( h = \frac{1-a}{m} \), where \( m = |V|^{-1} \sum_{\mu \in V(M)} \deg(\mu) \), in the limit of \( P \to \infty \) and \( N \to \infty \). For the case of a directed memory graph \( \vec{M} \), the energy function is \[ E \propto -\beta^{-1} \log \left( a \sum_{\mu=1}^{P} \exp(\beta \xi^\mu S) + h \sum_{(\alpha,\sigma) \in \Delta(\vec{M})} \exp(\beta (\xi^\alpha S)(\xi^\sigma S)) \right), \] (5) where \( \Delta(\vec{M}) \) is the set of edges in the directed memory graph. As done for Equation (4), a similar analysis for Equation (5) is possible, but is complicated by the directed edges, e.g., consider the difference between an \( \vec{M} \) where all but one vertex \( \mu \) point their edges to \( \mu \) and an \( \vec{M} \) in which each vertex has equal in- and out-degree. Relatedly, we conjecture when \( \vec{M} \) is an Erdős-Renyi graph (a random graph constructed by allowing any edge with probability \( p \)), the critical value of \( a \) which marks the transition from neutral quiescence to wide hetero-association will be proportional to \( p \) when \( p > \frac{(1-\varepsilon) \ln N}{N} \), i.e., when \( \vec{M} \) is asymptotically connected. **General interaction between auto- and hetero-association.** Of natural interest is when \( h \neq 0 \), which provides interactions between the patterns. 
What is interesting about Equation (1) is the possibility of both auto- and hetero-associative terms affecting the dynamics when both \( a, h \neq 0 \). As implied informally above, this means the model cannot perform pure pattern retrieval, i.e., retrieval of a single memory pattern \( \xi^\mu \) without at least partial retrieval of other patterns. To show this, it is useful to refer to the alignment between a pattern \( \xi^\mu \) and a state \( S(t) \). For this, we use the Pearson product-moment correlation coefficient, which for pattern \( \mu \) at time \( t \) we denote \( r(\mu(t)) \). **Proposition 2.1** (Hebbian auto- and hetero-associative mixtures cannot perform pure pattern retrieval for patterns not isolated in the memory graph). Suppose \( \mu \) is not an isolated vertex in \( M \). Let \( a, h > 0 \). Then the model cannot perform pure pattern retrieval of \( \xi^\mu \). **Proof.** Let \( \{\xi^\mu, \xi^\nu\} \in E \) if \( M \) is undirected, and let \( (\xi^\mu, \xi^\nu) \in E \) if \( M \) is directed. Setting \( S(t) = \xi^\mu \) will cause the second term of \( Q \) to be non-negative because \( h > 0 \), and therefore \( r(v(t+1)) \) will be proportionally large. Simultaneously, \( r(\mu(t+1)) \) will be non-vanishing, since \( a > 0 \). Therefore, no single pattern can be purely retrieved. \(^4\)Note this does not imply there is no activity in the network, since neurons can take negative values. Corollary 2.2 (Pure pattern retrieval is possible for some memory graphs when the dynamics are Hebbian auto-associative or Hebbian hetero-associative, but not both). If: - \(a > 0\) and \(h = 0\); or if - \(a = 0\), \(h > 0\), the out-degree of all vertices in \(M\) is 1, and we have a sufficient \(\beta\) and \(\eta\), then the model can perform pure pattern retrieval of some memory patterns. Proof. The excitatory auto-associative result, where \(a > 0\) and \(h = 0\), is simply a weighted version of Theorems 1–3 from Ramsauer et al. (2021). The excitatory hetero-associative result, where \(a = 0\) and \(h > 0\), is indicated by Proposition 2.1, with the added restriction that there exists only one memory pattern, \(\xi^\mu\), projecting from \(\xi^\mu\) in \(M\). This restriction is because if the out-degree of \(\xi^\mu\) was 0, then after setting \(S^{(t)} = \xi^\mu\), the projection matrix \(Q\) would be filled with zeroes since \(a = 0\). Therefore, values of \(S\) would converge to a value of \(-\xi\). If the out-degree of \(\xi^\mu\) was > 1, we would have the situation of Proposition 2.1, only with multiple memory patterns having large \(r\) values (with their strengths proportional to the weights of their respective in-edges from \(\xi^\mu\) in \(M\)). Finally, we need to achieve \(S^{(t+1)} = \xi^\nu\) (or something arbitrarily close) to have pure pattern retrieval of \(\xi^\nu\), since \(a = 0\) means we will not have the luxury of additional time-steps to achieve convergence. Fortunately, by Theorem 4 of Ramsauer et al. (2021), we can get arbitrarily close by requiring sufficiently large values of \(\beta\) and \(\eta\) to update the state to \(\xi^\nu\) in a single step. This naturally comports with Theorems 2.1 and 2.2 of Löwe (1998), wherein the classical associative memory model with binary-valued memories is studied when \(M\) is a 1D Markov chain. There, Löwe (1998) showed that sequence capacity increases given large semantic correlations. 
Proposition 2.3 (Connected components in an undirected memory graph are retrieved in some Hebbian hetero-associative regimes). Let \(Y \subset M\) be a finitely-sized connected component of \(M\). Set \(h > 0\) and \(|a| < h\). Then setting \(S^{(t)} = \xi^\mu\), where \(\mu \in Y\), will cause \(r(v^{(t+1)})\) for all \(v \in Y\) to be non-vanishing, for some \(\lambda \in \mathbb{N}^+\) and thereafter for all time-steps. Proof. Set \(S^{(t)} = \xi^\mu\). If \(a = 0\), then \(r(v^{(t+1)})\) will be non-vanishing for all \(v \in Y\) which are adjacent to \(\mu\) in \(Y\). Similarly, the adjacent vertices of those \(v \in Y\) which are adjacent to \(\mu\) will have non-vanishing \(r(v^{(t+2)})\), and so on. If \(h > a > 0\), the same argument applies. If \(h < 0\), the same argument applies but the rate at which the values of \(r(v)\) grow is slower. 3 NUMERICAL SIMULATIONS Now we study a wider collection of memory patterns and graphs, starting with a simple 1D cycle and gradually increasing complexity. Along the way, there are primarily two inter-weaving stories: 1. Anti-Hebbian auto-association increases the relative contribution of Hebbian hetero-association, which provides control over the range of hetero-association, extraction of multi-scale community structures in memory graphs, and stabilisation of temporal sequence recall; and 2. The flexibility of CDAM and its underlying graphical structure enables modelling a variety of phenomena, including graph community detection and sequence memory. Unless otherwise stated, in the following numerical analyses we use \(N = 1,000\), \(\beta = 0.1\), and \(\eta = 0.01\). We run our simulations until convergence, at which point we measure the Pearson product-moment correlation coefficient between each memory \(\mu\) and the final state \(S\). To initialise the network state, we choose a memory pattern \(\mu\) and set \(S^{(0)} = \xi^\mu + cX\), where \(X\) is a random vector with elements independently drawn from the interval \([-0.5, 0.5]\) and \(c \in \mathbb{R}^+\) is the amplitude of the additive random noise. Here we use \(c = 1\). Löwe (1998) also studied the case of spatial correlations between neurons, as may arise in naturalistic data. This work has been recently continued for dense associative memory by De Marzo & Iannelli (2023). 3.1 Controlling the range of recalled correlated memories Modulating the balance of auto- and hetero-association using $a$ and $h$ allows us to control the range of memory retrieval in $M$. To demonstrate this, we use an undirected cycle graph. A cycle graph $C_n$ has $n$ vertices connected by a single cycle of edges through all vertices. As described and illustrated in Appendix A.2, cycle graphs are the most commonly studied semantic hetero-associative memory structure previously studied, most likely due to it being a fitting representation of temporal sequences. In Appendix A.3, we show choices of $a$ and $h$ which achieve good fits ($R^2 = 0.996$) with the experimental data reported in Miyashita (1988). Figure 1 measures the range of the spread across values of $a$ and $h$, with significant differences observed between the tested conditions (one-way ANOVA, $F = 5.41, p = 0.001$); the range of recalled memories in terms of graph distance is controllable within the range of 0 to 5. In Appendix A.4, we show the correlation matrices for all patterns. 3.2 Multi-scale representations of community structures in graphs Now we will consider more interesting memory graph topologies. 
Zachary’s karate club graph (Zachary, 1977) consists of 34 vertices, representing karate practitioners, where edges connect individuals who consistently interacted in extra-karate contexts. Notably, the club split into two halves. Setting Zachary’s karate club graph as $M$ and varying $a$ and $h$, however, reveals that there were even finer social groupings than these, as Figure 2 shows and as we discuss in Appendix A.5.

Figure 2: Memory pattern correlations for each vertex in $M$, set as Zachary’s karate club graph. The top row shows correlations between each pair of attractors, where colour indicates the correlation coefficient, 1 (red) to −1 (blue). The bottom row draws $M$ with vertices coloured by the correlation coefficients at $S^{(101)}$, which is dynamically stable (see Appendix A.5).

To more clearly illustrate the multi-scale representations of graph communities, we also test CDAM on the barbell graph (see Appendix A.6 for further details) and the Tutte graph (see Figure 3 and Appendix A.1).

Figure 3: Correlations between the convergent meta-stable states ($S^{(101)}$ values from Figure 7) for all pairs of trigger stimuli (top row); and $M$ drawn with vertices coloured by these meta-stable state correlations for a particular trigger stimulus (bottom row).

### 3.3 SPARSE TEMPORAL SEQUENCE RECALL OF REAL VIDEO DATA

Hetero-association is naturally suited for encoding temporal sequences. Here we use a directed cycle graph $\overrightarrow{C_{50}}$ where the patterns are sparsely sampled frames of videos (see Appendix A.7 for details). Figure 4 shows activity over time in a network with $M = \overrightarrow{C_{50}}$. At each step $t$ of the simulation, we calculate the correlation of $S^{(t)}$ with each pattern. We start the simulation by triggering the first pattern (frame) and thereafter leave the network to continue its dynamics according to Equation 1. Importantly, we require sufficient anti-Hebbian auto-association, i.e., $a < 0$, in combination with relatively strong Hebbian hetero-association, i.e., $h > 0$. Otherwise, the sequence recall can become stuck or lag due to auto-correlations.

Figure 4: Correlations of memory patterns over time for each vertex in $M = \overrightarrow{C_{50}}$, where each memory pattern is a sparsely sampled video frame (see Appendix A.7 for details) from video 1.

Notably, similar settings for anti-Hebbian auto-association and Hebbian hetero-association are required for a different sparsely sampled video, as shown in Figure 5. Only in the case of $a = -2, h = 3$ can we recall the sequence without skips or delays. In both Figures 4 and 5 we can see more global features in the video and sharp context switches. These structures can also be seen in the correlations between the attractors (see Appendix A.7).

Figure 5: Correlations of memory patterns over time for each vertex in $M = \overrightarrow{C_{50}}$, where each memory pattern is a sparsely sampled video frame (see Appendix A.7 for details) from video 2.

4 CONCLUSION

In this paper we have introduced a new dense associative memory model, called Correlated Dense Associative Memory (CDAM), which auto- and hetero-associates continuous-valued memory patterns using an underlying (arbitrary) graph structure. Using such memory graph structures, and especially by modulating recall using anti-Hebbian auto- or hetero-association, we demonstrated extraction of multiple scales of representation of the community structures present in the underlying graphs.
We additionally tested CDAM with perhaps the most traditional and obvious application of hetero-associative memory networks – temporal sequence memory – with sparsely sampled real-world videos. Here, the benefits of anti-Hebbian modulations were highlighted once again, this time in its role as a stabiliser against internal correlations (natural distractors) within a sequence and of ordered recall generally. 4.1 IMPLICATIONS AND FUTURE WORK For neuroscience, this work highlights the highly non-trivial contributions of anti-Hebbian learning to the proper functioning across a range of tasks, including controlling the sequence recall range, community detection in graphically-organised memories, and temporal sequence retrieval. These findings invite experimentalists to further explore the contribution of inhibitory neurons in cognition. For machine learning, perhaps one of the most impactful uses of this work will be in its application to improving the performance and/or understanding of Transformer models (Vaswani et al., 2017) through their connection to continuously-valued dense associative memory networks (Krotov & Hopfield, 2016; Ramsauer et al., 2021). Indeed, Ramsauer et al. (2021) used this connection to study the ‘attractive schemas’ of the implied energy landscape in a large language model. This generated hypotheses about the function of particular layers and attention heads in the model, and may potentially help us further elucidate the internal representational structure of similar models. As Millidge et al. (2022) notes, Transformers’ attention mechanism can be interpreted in its mathematical form as performing hetero-association between its keys and values in the associative memory sense. Can we use this insight to identify the topology of the attractor or energy landscape in models trained on language, image recognition, or other tasks? Do such models entrain particular structures such as memory graphs (or higher dimensional analogues) to reflect the topology of the underlying data structures and correlations within the training set? And could a modulatory mechanism such as an anti-Hebbian learning rule help direct the ‘flow’ of temporally-evolving cognition, such as in-context or one-shot learning in large language models (Brown et al., 2020)? These and many other questions are now open for exploration, and will hopefully offer us deeper insights into the inner-workings of some of the most performant and powerful ML systems used today. REFERENCES S.-I. Amari. Learning patterns and pattern sequences by self-organizing nets of threshold elements. *IEEE Transactions on Computers*, C-21(11):1197–1206, 1972. doi: 10.1109/T-C.1972.223477. Daniel J. Amit, Hanoch Gutfreund, and H. Sompolinsky. Storing infinite numbers of patterns in a spin-glass model of neural networks. *Phys. Rev. Lett.*, 55:1530–1533, Sep 1985. doi: 10.1103/PhysRevLett.55.1530. URL https://link.aps.org/doi/10.1103/PhysRevLett.55.1530 Arash Ardakani, Amir Ardakani, and Warren Gross. Training linear finite-state machines. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 7173–7183. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/4fc28b7093b135c21c7183ac07e928a6-Paper.pdf Borja Balle and Odalric-Ambrym Maillard. Spectral learning from a single trajectory under finite-state policies. 
In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 361–370. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/balle17a.html Carlos SN Brito and Wulfram Gerstner. Nonlinear hebbian learning as a unifying principle in receptive field formation. *PLoS computational biology*, 12(9):e1005070, 2016. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Stephen G. Brush. History of the lenz-ising model. *Rev. Mod. Phys.*, 39:883–893, Oct 1967. doi: 10.1103/RevModPhys.39.883. URL https://link.aps.org/doi/10.1103/RevModPhys.39.883 Thomas F Burns and Tomoki Fukai. Simplicial hopfield networks. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=QLsH8gatwx Thomas F Burns, Tatsuya Haga, and Tomoki Fukai. Multiscale and extended retrieval of associative memory structures in a cortical model of local-global inhibition balance. *eNeuro*, 9(3), 2022. doi: 10.1523/ENEURO.0023-22.2022. URL https://www.eneuro.org/content/9/3/ENEURO.0023-22.2022 Hamza Tahir Chaudhry, Jacob A. Zavatone-Veth, Dmitry Krotov, and Cengiz Pehlevan. Long sequence hopfield memory, 2023. Francis Crick and Graeme Mitchison. The function of dream sleep. *Nature*, 304(5922):111–114, 1983. Giordano De Marzo and Giulio Iannelli. Effect of spatial correlations on hopfield neural network and dense associative memories. *Physica A: Statistical Mechanics and its Applications*, 612:128487, 2023. ISSN 0378-4371. doi: https://doi.org/10.1016/j.physa.2023.128487. URL https://www.sciencedirect.com/science/article/pii/S0378437123000420 Mete Demircigil, Judith Heusel, Matthias Löwe, Sven Uppgang, and Franck Vermet. On a model of associative memory with huge storage capacity. *Journal of Statistical Physics*, 168(2):288–299, Jul 2017. ISSN 1572-9613. doi: 10.1007/s10955-017-1806-y. URL https://doi.org/10.1007/s10955-017-1806-y Susanne Diekelmann and Jan Born. The memory function of sleep. *Nature Reviews Neuroscience*, 11(2):114–126, 2010. Helen M. Ditz and Andreas Nieder. Neurons selective to the number of visual items in the corvid songbird endbrain. *Proceedings of the National Academy of Sciences*, 112(25):7827–7832, 2015. doi: 10.1073/pnas.1504245112. URL https://www.pnas.org/doi/abs/10.1073/pnas.1504245112
EBUoTvVtMM
Framing existing attack terminology (“user inference”) as something new is confusing as there are already multiple works proposing the stronger threat model where either X texts of a user’s data or none were used to train the model (e.g., Song and Shmatikov). To the best of my understanding, it seems that user-level + fine-tuning + LLM is new, but the current wording doesn't reflect that, instead suggesting that the paper introduces user-level MIAs.
USER INFERENCE ATTACKS ON LARGE LANGUAGE MODELS Anonymous authors Paper under double-blind review ABSTRACT Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications. In this paper, we study the privacy implications of fine-tuning LLMs on user data. To this end, we consider a realistic threat model, called user inference, wherein an attacker infers whether or not a user’s data was used for fine-tuning. We implement attacks for this threat model that require only a small set of samples from a user (possibly different from the samples used for training) and black-box access to the fine-tuned LLM. We find that LLMs are susceptible to user inference attacks across a variety of fine-tuning datasets, at times with near perfect attack success rates. Further, we investigate which properties make users vulnerable to user inference, finding that outlier users (i.e. those with data distributions sufficiently different from other users) and users who contribute large quantities of data are most susceptible to attack. Finally, we explore several heuristics for mitigating privacy attacks. We find that interventions in the training algorithm, such as batch or per-example gradient clipping and early stopping fail to prevent user inference. However, limiting the number of fine-tuning samples from a single user can reduce attack effectiveness, albeit at the cost of reducing the total amount of fine-tuning data.\footnote{Notable changes made during the rebuttal are highlighted in blue.} 1 INTRODUCTION Successfully applying large language models (LLMs) to real-world problems is often best achieved by fine-tuning on domain-specific data (Liu et al., 2022; Mosbach et al., 2023). This approach is seen in a variety of commercial products deployed today, e.g., GitHub Copilot (Chen et al., 2021), Gmail Smart Compose (Chen et al., 2019), GBoard (Xu et al., 2023), etc., that are based on LMs trained or fine-tuned on domain-specific data collected from users. The practice of fine-tuning on user data—particularly on sensitive data like emails, texts, or source code—comes with privacy concerns, as LMs have been shown to leak information from their training data (Carlini et al., 2021), especially as models are scaled larger (Carlini et al., 2023). In this paper, we study the privacy risks posed to users whose data are leveraged to fine-tune LLMs. Most existing privacy attacks on LLMs can be grouped into two categories: membership inference, in which the attacker obtains access to a sample and must determine if it was trained on (Mireshghallah et al., 2022; Mattern et al., 2023; Niu et al., 2023); and extraction attacks, in which the attacker tries to reconstruct the training data by prompting the model with different prefixes (Carlini et al., 2021; Lukas et al., 2023). These threat models make no assumptions about the training data and thus cannot estimate the privacy risk to a user when that user contributes many, likely correlated, training samples. To this end, we consider the threat model of user inference (Miao et al., 2021; Hartmann et al., 2023), a realistic privacy attack for models trained on user data, in the context of LLMs. As depicted in Figure 1, the attacker’s goal in user inference is to determine if a particular user participated in LLM fine-tuning using only black-box access to the fine-tuned model and a small set of i.i.d. samples from the user. Importantly, these samples need not be part of the fine-tuning set. 
This threat model lifts the concept of membership inference from privacy of individual samples to privacy of users who contribute multiple samples, while also relaxing the stringent assumption that the attacker has access to samples from the fine-tuning dataset. By itself, user inference could Figure 1: Overview of user inference threat model. An LLM model is fine-tuned on user-stratified data. The adversary can query text samples on the fine-tuned model and compute likelihoods. The adversary has knowledge of several samples from a user’s distribution (different than the user training samples) and computes a likelihood score to determine if the user participated in training. be a privacy threat if the fine-tuning task reveals sensitive information about participating users (for instance, if a model is fine-tuned only on users with a rare disease). Moreover, user inference may also enable other attacks such as sensitive information extraction, similarly to how membership inference is used as a subroutine in training data extraction attacks (Carlini et al., 2021). In this paper, we formally define the user inference threat model and propose a practical attack that determines if a user participated in fine-tuning by computing a likelihood ratio test statistic normalized relative to a reference model (Section 3). We then empirically study the effectiveness of this attack on the GPT-Neo family of LLMs (Black et al., 2021) when fine-tuned on a diverse variety of domain-specific data, including emails, scientific writing, and news articles (Section 4.2). Our investigation gives insight into various parameters that affect how easily a user’s participation can be inferred—parameters such as uniqueness of a user’s data distribution, amount of fine-tuning data contributed by a user, and amount of attacker knowledge about a user. Furthermore, we evaluate the attack on synthetically generated canary users to characterize the privacy leakage for worst-case users (Section 4.3). We show that canaries generated via minimal modifications to the real data distribution increase the attack’s effectiveness by more than 40% in terms of attack AUROC. Importantly, this canary study indicates that simple features shared across a user’s samples, such as an email signature or short characteristic phrase, can exacerbate user inference. Finally, we evaluate several methods for mitigating privacy attacks, such as per-example or batch gradient clipping, early stopping, and limiting the number of samples a user can contribute to the fine-tuning set (Section 4.4). We find that interventions in the training algorithm, like gradient clipping and early stopping fail to mitigate user inference, but limiting user contribution reduces the attack’s effectiveness on both real and synthetic canary users. Based on these results, we highlight the importance of future work on user-level differential privacy to mitigate user inference (McMahan et al., 2018; Levy et al., 2021). Overall, our work is the first to study user inference attacks against LLMs and provides key insights to inform future deployments of LLMs fine-tuned on user data. 
2 RELATED WORK Over the years, a range of ML privacy attacks with different objectives have been studied (Oprea & Vassilev, 2023): membership inference attacks determine if a particular data sample was part of a model’s training set (Shokri et al., 2017; Yeom et al., 2018; Carlini et al., 2022; Ye et al., 2022; Watson et al., 2022; Choquette-Choo et al., 2021; Jagielski et al., 2023a); data reconstruction aims to exactly reconstruct the training data of a model (typically for a discriminative model) (Haim et al., and extraction attacks aim to extract training data from generative models like LLMs (Carlini et al., 2021; Lukas et al., 2023; Ippolito et al., 2023; Anil et al., 2023; Kudugunta et al., 2023). Membership inference attacks on LLMs. Mireshghallah et al. (2022) introduce a likelihood ratio-based attack on LLMs, designed for masked language models, such as BERT. Mattern et al. (2023) compare the likelihood of a sample against the average likelihood of a set of neighboring samples, and eliminate the assumption of attacker knowledge of the training distribution used in other membership inference attacks. Debenedetti et al. (2023) study how systems built on LLMs may amplify membership inference. Carlini et al. (2021) use a perplexity-based membership inference attack to extract training data from GPT-2. Their attack prompts the LLM to generate sequences of text, and then uses membership inference to identify sequences copied from the training set. Note that membership inference requires access to exact training samples while user inference does not. Extraction attacks. Following Carlini et al. (2021), memorization in LLMs received much attention (Zhang et al., 2021; Tirumala et al., 2022; Biderman et al., 2023; Anil et al., 2023). These works found that memorization scales with model size (Carlini et al., 2023) and data repetition (Kandpal et al., 2022), may eventually be forgotten (Jagielski et al., 2023b), and can exist even on models trained for specific restricted use-cases like translation (Kudugunta et al., 2023). Lukas et al. (2023) develop techniques to extract PII information from LLMs and (Inan et al., 2021) design metrics to measure how much of user’s confidential data is leaked by the LLM. Once a user’s participation is identified by user inference, these techniques can be used to estimate the amount of privacy leakage. User-level membership inference. Much prior work on inferring whether a user’s data was part of the training set makes the stronger assumption that the attacker has access to a user’s exact training samples — we refer to this as user-level membership inference to distinguish it from user inference (which does not require access to the exact training samples). Song & Shmatikov (2019) give the first such an attack for generative text models. Their attack is based on training multiple shadow models and does not scale to LLMs due to its high computational cost. This threat model has also been studied for text classification via reduction to membership inference (Shejwalkar et al., 2021). User inference. Finally, the user inference threat model has also been considered for speech recognition in IoT devices (Miao et al., 2021), representation learning for vision (Li et al., 2022) and face recognition (Chen et al., 2023). Hartmann et al. (2023) formally define user inference for classification and regression problems, under the name distributional membership inference. These attacks are either domain-specific or require shadow models and do not apply/scale to LLMs. 
Instead, we design an efficient user inference attacks that scale to LLMs and illustrate the user-level privacy risks posed by fine-tuning on user-generated text data. We refer to Appendix B for a detailed discussion. 3 USER INFERENCE ATTACKS Consider an autoregressive language model $p_\theta$ that defines a distribution $p_\theta(x_t | x_{<t})$ over the next token $x_t$ in continuation of a prefix $x_{<t} = (x_1, \ldots, x_{t-1})$. We are interested in a setting where a pre-trained LLM $p_{\theta_0}$, with initial parameters $\theta_0$ is fine-tuned on a dataset $D_{FT}$ sampled i.i.d. from a distribution $D_{task}$. The most common objective is to minimize the cross entropy of predicting each next token $x_t$ given the context $x_{<t}$ for each fine-tuning sample $x \in D_{FT}$. Thus, the fine-tuned model $p_\theta$ is trained to maximize the log-likelihood $\sum_{x \in D_{FT}} \log p_\theta(x) = \sum_{x \in D_{FT}} \sum_{t=1}^{|x|} \log p_\theta(x_t | x_{<t})$ of the fine-tuning set $D_{FT}$. Fine-tuning with user-stratified data. Much of the data used to fine-tune LLMs has a user-level structure. For example, emails, messages, and blog posts can reflect the specific characteristics of the user who wrote them. Two text samples from the same user are more likely to be similar to each other than samples across users in terms of language use, vocabulary, context, and topics. To capture user-stratification, we model the fine-tuning distribution $D_{task}$ as a mixture $$D_{task} = \sum_{u=1}^{n} \alpha_u D_u$$ of $n$ user data distributions $D_1, \ldots, D_n$ with non-negative weights $\alpha_1, \ldots, \alpha_n$ that sum to one. One can sample from $D_{task}$ by first sampling a user $u$ with probability $\alpha_u$ and then sampling a document $x \sim D_u$ from the user’s data distribution. We note that the fine-tuning process of the LLM is oblivious to user-stratification of the data. The user inference threat model. The task of membership inference assumes that an attacker has full access to a text sample \( x \) and must determine whether \( x \) was a part of the training or fine-tuning data (Shokri et al., 2017; Yeom et al., 2018; Carlini et al., 2022). We relax this assumption on the knowledge of an attacker by considering a realistic threat model called user inference. Given access to \( m \) i.i.d. samples \( x^{(1)}, \ldots, x^{(m)} \sim D_u \) from user \( u \)'s distribution, the task of the adversary is to determine if any data from user \( u \) was involved in fine-tuning the model \( p_\theta \). Crucially, we allow \( x^{(i)} \notin D_{FT} \), i.e., the attacker is not assumed to have access to the exact samples of user \( u \) that were a part of the fine-tuning set. For instance, if an LLM is fine-tuned on user emails, the attacker can reasonably be assumed to have access to some emails from a user, but not necessarily the ones used to fine-tune the model. We believe this is a realistic threat model for LLMs, as it does not require exact knowledge of training set samples, as in membership inference attacks. In terms of the adversarial capabilities, we assume that the attacker has black-box access to the LLM \( p_\theta \) — they can only query the model’s likelihood on a sequence of tokens and might not have knowledge of either the model architecture or parameters. 
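Concretely, the quantity the black-box attacker queries is the sequence log-likelihood \( \log p_\theta(x) = \sum_t \log p_\theta(x_t \mid x_{<t}) \), which is also the per-sample term of the fine-tuning objective above. The following is a minimal sketch of computing it with Hugging Face `transformers`; the checkpoint name is only an illustrative placeholder and this is not the paper's code.

```python
# Sketch: total log-likelihood of a text under a causal LM (illustrative, not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_log_prob(model, tokenizer, text: str) -> float:
    """log p(x) = sum_t log p(x_t | x_{<t}) of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt").input_ids          # [1, T]
    with torch.no_grad():
        logits = model(ids).logits                                # [1, T, vocab]
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)         # predictions for tokens 2..T
    targets = ids[:, 1:].unsqueeze(-1)                            # the tokens actually observed
    return log_probs.gather(-1, targets).sum().item()

if __name__ == "__main__":
    name = "EleutherAI/gpt-neo-125m"   # placeholder; any causal LM checkpoint works
    tok, lm = AutoTokenizer.from_pretrained(name), AutoModelForCausalLM.from_pretrained(name)
    lm.eval()
    print(sequence_log_prob(lm, tok, "An example fine-tuning document."))
```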
Following standard practice in membership inference (Mireshghallah et al., 2022; Watson et al., 2022), we allow the attacker access to a reference model \( p_{\text{ref}} \) that is similar to the target model \( p_\theta \) but has not been trained on user \( u \)'s data. This can simply be the pre-trained model \( p_{\theta_0} \) or another LLM.

**Attack strategy.** The attacker’s task can be formulated as a statistical hypothesis test. Letting \( P_u \) denote the set of models trained on user \( u \)'s data, the attacker’s goal is to decide between:
\[ H_0 : p_\theta \notin P_u, \quad H_1 : p_\theta \in P_u . \]
There is generally no prescribed recipe to test for a composite hypothesis corresponding to a set of models. The model likelihood is a natural test statistic, as we might expect \( p_\theta(x^{(i)}) \) to be high if \( H_1 \) is true and low otherwise. Unfortunately, this is not always true, even for membership inference. Indeed, \( p_\theta(x) \) can be large for \( x \notin D_{FT} \) for easy-to-predict \( x \) (e.g., generic text using common words), while \( p_\theta(x) \) can be small even if \( x \in D_{FT} \) for hard-to-predict \( x \). This necessitates calibrating the test using a reference model (Mireshghallah et al., 2022; Watson et al., 2022).

Our insight for designing an efficient attack strategy is to formalize the attacker’s task with simpler surrogate hypotheses that are easier to test:
\[ H'_0 : x^{(1)}, \ldots, x^{(m)} \sim p_{\text{ref}}, \quad H'_1 : x^{(1)}, \ldots, x^{(m)} \sim p_\theta . \]
By construction, \( H'_0 \) is always false since \( p_{\text{ref}} \) is not fine-tuned on user \( u \)'s data. However, \( H'_1 \) is more likely to be true if the user \( u \) participates in training and the samples contributed by \( u \) to the fine-tuning dataset \( D_{FT} \) are similar to the samples \( x^{(1)}, \ldots, x^{(m)} \) known to the attacker, even if they are not identical. In this case, the attacker rejects \( H'_0 \). Conversely, if user \( u \) did not participate in fine-tuning and no samples from \( D_{FT} \) are similar to \( x^{(1)}, \ldots, x^{(m)} \), then the attacker finds both \( H'_0 \) and \( H'_1 \) to be equally (im)plausible, and fails to reject \( H'_0 \). Intuitively, to faithfully test \( H_0 \) vs. \( H_1 \) using \( H'_0 \) vs. \( H'_1 \), we require that a sample \( x \sim D_u \) is more similar on average to any other sample from the same user \( x' \sim D_u \) than to a sample from another user \( x'' \sim D_{u'} \) for any other \( u' \neq u \).

The Neyman-Pearson lemma tells us that the likelihood ratio test is the most powerful for testing \( H'_0 \) vs. \( H'_1 \), i.e., it achieves the best true positive rate at any given false positive rate (e.g., Lehmann et al., 1986, Thm. 3.2.1). This involves constructing a test statistic using the log-likelihood ratio
\[ T(x^{(1)}, \ldots, x^{(m)}) := \log \left( \frac{p_\theta(x^{(1)}, \ldots, x^{(m)})}{p_{\text{ref}}(x^{(1)}, \ldots, x^{(m)})} \right) = \sum_{i=1}^{m} \log \left( \frac{p_\theta(x^{(i)})}{p_{\text{ref}}(x^{(i)})} \right) , \]
where the last equality follows from the independence of each \( x^{(i)} \), which we assume. Given a threshold \( \tau \), the attacker rejects the null hypothesis and declares that \( u \) has participated in fine-tuning if \( T(x^{(1)}, \ldots, x^{(m)}) > \tau \). In practice, the number of samples \( m \) available to the attacker might vary for each user, so we normalize the statistic by \( m \).
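A minimal sketch of this calibrated statistic follows (an illustrative assumption, not the authors' implementation); it assumes a `sequence_log_prob(model, tokenizer, text)` helper such as the one sketched earlier and already includes the per-sample normalization just mentioned.

```python
# Sketch: mean calibrated log-likelihood ratio over the attacker's m samples from user u.
from typing import Callable, List

def user_inference_statistic(
    log_p_target: Callable[[str], float],     # x -> log p_theta(x), e.g. built from sequence_log_prob
    log_p_reference: Callable[[str], float],  # x -> log p_ref(x)
    user_samples: List[str],
) -> float:
    ratios = [log_p_target(x) - log_p_reference(x) for x in user_samples]
    return sum(ratios) / len(ratios)          # normalized by the number of samples m

# The attacker declares that user u participated in fine-tuning when this statistic
# exceeds a chosen threshold tau.
```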
Thus, our final attack statistic is the empirical mean \( \bar{T}(x^{(1)}, \ldots, x^{(m)}) = \frac{1}{m} T(x^{(1)}, \ldots, x^{(m)}) \). **Analysis of the attack statistic.** We analyze this attack statistic in a simplified setting to gain some intuition on when we can infer the participation of user \( u \). In the large sample limit as \( m \to \infty \), the mean statistic $\bar{T}$ approximates the population average $$\bar{T}(D_u) := \mathbb{E}_{x \sim D_u} \left[ \log \left( \frac{p_\theta(x)}{p_{\text{ref}}(x)} \right) \right].$$ We will analyze this test statistic for the choice $p_{\text{ref}} = D_{-u} \propto \sum_{u' \neq u} \alpha_{u'} D_{u'}$, which is the fine-tuning mixture distribution excluding the data of user $u$. This is motivated by the results of Watson et al. (2022) and Sablayrolles et al. (2019), who show that using a reference model trained on the whole dataset excluding a single sample approximates the optimal membership inference classifier. Let KL($\cdot || \cdot$) and $\chi^2(\cdot || \cdot)$ denote the Kullback–Leibler and chi-squared divergences respectively. We establish the following bound, assuming $p_\theta$ and $p_{\text{ref}}$ perfectly capture their target distributions. **Proposition 1.** Assume $p_\theta = D_{\text{task}}$ and $p_{\text{ref}} = D_{-u}$ for some user $u \in [n]$. Then, we have $$\log (\alpha_u) + \text{KL}(D_u || D_{-u}) < \bar{T}(D_u) \leq \alpha_u \chi^2(D_u || D_{-u}).$$ The upper and lower bounds, proved in Appendix A, provide two intuitive insights. Two types of users are susceptible to user inference: (a) users who contribute more data to fine-tuning (such that $\alpha_u$ is large), or (b) users who contribute unique data (such that $\text{KL}(D_u || D_{-u})$ and $\chi^2(D_u || D_{-u})$ are large). Conversely, if neither condition holds, then a user’s participation in fine-tuning cannot be reliably detected. Our experiments later corroborate these observations; we use them to design mitigations. ### 4 EXPERIMENTS In this section, we empirically study the susceptibility of models to user inference attacks, the factors that affect attack performance, and potential mitigation strategies. | Dataset | User Field | #Users | #Examples | Percentiles of Examples/User | |------------------|----------------|--------|-----------|------------------------------| | ArXiv Abstracts | Submitter | 16511 | 625K | P0 20, P25 24, P50 30, P75 41, P100 3204 | | CC News | Domain Name | 2839 | 660K | P0 30, P25 50, P50 87, P75 192, P100 24480 | | Enron Emails | Email Address | 150 | 491K | P0 150, P25 968, P50 1632, P75 3355, P100 28229 | **Table 1:** Evaluation dataset summary statistics: The three evaluation datasets vary in their notion of “user” (i.e. an ArXiv abstract belongs to the user who submitted it to ArXiv whereas a CC News article belongs to the web domain where the article was published). Additionally, these datasets span multiple orders of magnitude in terms of number of users and number of examples contributed per user. #### 4.1 EXPERIMENTAL SETUP **Datasets.** We evaluate user inference attacks on three user-stratified text datasets: ArXiv Abstracts (Clement et al., 2019) for scientific paper abstracts, CC News (Hamborg et al., 2017; Charles et al., 2023) for news articles, and Enron Emails (Klimt & Yang, 2004) for real-world emails. These datasets provide a diverse test bench not only in their domain, but also in the notion of a user, the number of distinct users, and the amount of data contributed per user; see Table 1. 
To make these datasets suitable for evaluating user inference attacks, we split them into a held-in set of users, that we use to fine-tune models, and a held-out set of users that we use to evaluate attacks. We set aside 10% of a user’s sample as the attacker’s knowledge to run user inference attacks; these samples are not used for fine-tuning. For more details on the dataset preprocessing, see Appendix C. **Models.** We evaluate user inference attacks on the 125M and 1.3B parameter decoder-only LMs from the GPT-Neo (Black et al., 2021) model suite. These models were pre-trained on The Pile dataset (Gao et al., 2020), an 825 GB diverse text corpus, and use the same architecture and pre-training objectives as the GPT-2 and GPT-3 models. Further details on how we fine-tune these models are given in Appendix C. Due to the size of The Pile, we found it challenging to find user-stratified datasets that were not part of model pre-training; this is a problem with LLMs in general (Sainz et al., 2023). However, we believe that our setup still faithfully evaluates the fine-tuning setting for two main reasons. First, the overlapping fine-tuning data constitutes only a small fraction of all the data in The Pile. Second, our attacks are likely only weakened (and thus, underestimate the true risk) by this setup. This is because inclusion of the held-out users in pre-training should only reduce the model’s loss on these samples, making the loss difference smaller and thus our attack harder to employ. **Attack Setup and Evaluation.** We implement the user inference attack described in Section 3 using the pre-trained GPT-Neo models as our reference models $p_{\text{ref}}$. We evaluate the aggregate attack success using the Receiver Operating Characteristic (ROC) curve across held-in and held-out users; this is a plot of the true positive and false positive rates of the attack across all possible thresholds. We use the area under this curve (AUROC) as a single-number summary. This metric is commonly used to evaluate the performance of membership inference attacks (Carlini et al., 2022). ### 4.2 User Inference: Results and Properties We experimentally examine how user inference is impacted by factors such as the amount of user data and attacker knowledge, the model scale, as well as the connection to overfitting. **Attack Performance.** We begin by attacking GPT-Neo 125M trained on each of the three fine-tuning datasets and evaluating the attack performance. We see from Figure 2 that the user inference attacks on all three datasets achieve non-trivial performance, with the attack AUROC varying between 92% (Enron Emails) to 66% (CC News) and 57% (ArXiv Abstracts). The disparity in performance between the three datasets can be explained in part by the intuition from Proposition 1, which points out two factors. First, a larger fraction of data contributed by a user makes user inference easier. The Enron dataset has fewer users, each of whom contributes a significant fraction of the fine-tuning data (cf. Table 1), while, the ArXiv dataset has a large number of users, each with few datapoints. Second, distinct user data makes user inference easier. Enron emails are more distinct due to identifying information such as names (in salutations and signatures) and addresses, while the scientific writing style of ArXiv abstracts, which is predominantly impersonal and formal, makes them less distinct. 
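The aggregate evaluation described above reduces to computing one statistic per user and scoring held-in users against held-out users; the short sketch below assumes scikit-learn, an illustrative choice rather than the paper's code.

```python
# Sketch: ROC / AUROC of the per-user attack statistic (held-in users = positives).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def attack_auroc(held_in_stats, held_out_stats):
    scores = np.concatenate([held_in_stats, held_out_stats])
    labels = np.concatenate([np.ones(len(held_in_stats)), np.zeros(len(held_out_stats))])
    fpr, tpr, thresholds = roc_curve(labels, scores)   # TPR/FPR across all thresholds
    return roc_auc_score(labels, scores), (fpr, tpr, thresholds)
```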
**The Effect of the Attacker Knowledge.** We examine the effect of the attacker knowledge, i.e., the amount of user data used by the attacker to compute the test statistic, in Figure 3. First, we find that greater attacker knowledge leads to higher attack AUROC and lower variance on the attack success. For CC News, the AUROC increases from $62.0 \pm 3.3\%$ when the attacker has only one document to $68.1 \pm 0.6\%$ when the attacker has 50 documents. We also observe that the user inference attack already leads to non-trivial results with an attacker knowledge of *one document per user* for CC News (AUROC 62.0%) and Enron Emails (AUROC 73.2%). This performance for ArXiv Abstracts is, however, not much better than random (AUROC 53.6%). Overall, the results show that an attacker does not need much data to mount a strong attack, but more data only helps. **User Inference and User-level Overfitting.** It is well-established that overfitting to the training data is sufficient for successful membership inference (Yeom et al., 2018). We find that a similar phenomenon holds for user inference, which is enabled by *user-level overfitting*, i.e., the model overfits not to the training samples themselves, but rather the distributions of the training users. We see from Figure 4 that the validation loss of held-in users continues to decrease for CC News and Enron Emails, while the loss of held-out users increases. These curves display a textbook example of overfitting, not to the training data (since both curves are computed using validation data), but to the distributions of the training users. We can see that the attack AUROC improves with the widening generalization gap between these two curves. Indeed, the Spearman correlation between the generalization gap and the attack AUROC is at least 99.4% for all three datasets including ArXiv, where the trend is not as clear visually. This demonstrates the close relation between user-level overfitting and user inference. **Attack Performance and Model Scale.** Next, we investigate the role of model scale in user inference. We fine-tune GPT-Neo 125M and 1.3B on CC News and evaluate attack performance. We see from Figure 5, that the attack performance is nearly identical on both models with AUROCs of 65.3% for the 1.3B model and 65.8% for the 125M model. While the 1.3B parameter model achieves better validation loss on both held-in users (2.24 vs. 2.64) and held-out users (2.81 vs. 3.20), the generalization gap is nearly the same for both models (0.57 vs. 0.53). This shows a qualitative difference between user inference and membership inference, where in the latter threat model attack performance reliably increases with model size (Carlini et al., 2023; Tirumala et al., 2022; Kandpal et al., 2022; Mireshghallah et al., 2022). ### 4.3 User Inference in the Worst-Case The disproportionately large downside to privacy leakage necessitates looking beyond the average-case privacy risk to worst-case settings. To this end, we analyze attack performance on datasets containing synthetically generated users, known as *canaries*. There is usually a trade-off between making the canary users realistic and worsening their privacy risk. We intentionally err on the side of making them realistic to illustrate the potential risks of user inference. Figure 5: Attack success vs. model scale: User inference attack performance in 125M and 1.3B parameter models trained on CC News. 
**Left**: Although the 1.3B model achieves lower validation loss, the difference in validation loss between held-in and held-out users is the same as that of the 125M parameter model. **Center & Right**: User inference attacks against the 125M and 1.3B models achieve the same performance. Figure 6: Canary experiments. **Left two**: Attack performance for canaries with different shared substring lengths. **Right two**: Attack performance on canary users and real users with different amounts of fine-tuning data per user. On all plots, we shade the AUROC std over 100 bootstrap samples of held-in and held-out users. To construct a canary user, we first sample a real user from the dataset and insert a particular substring into each of that user’s examples. The substring shared between all of the user’s examples is a contiguous substring randomly sampled from one of their documents (for more details, see Appendix C). We construct 180 canary users with shared substrings ranging from 1-100 tokens in length and inject these users into the ArXiv Abstracts and CC News datasets. We do not experiment with synthetic canaries in Enron Emails, as the attack AUROC already exceeds 92% for real users. As expected, Figure 6 (left) shows that the attack effectiveness is significantly higher on canary users than real users, and increases monotonically with the length of the shared substring. However, we find that canaries with a short substring (5 tokens or smaller) is enough to significantly increase the attack AUROC from 57% to 72% for ArXiv and from 63% to 69% for CC News. This increase of attack performance raises a question if canary gradients can be filtered out easily (e.g., using the $\ell_2$ norm). However, Figure 7 (right) shows that the gradient norm distribution of the canary gradients and those of real users are nearly indistinguishable. This shows that our canaries are close to real users from the model’s perspective, and thus hard to filter out. This experiment also demonstrates the increased privacy risk for users who use, for instance, a short and unique signature in emails or characteristic phrases in documents. 4.4 Mitigation Strategies Finally, we investigate existing techniques for limiting the influence of individual examples or users on model fine-tuning as methods for mitigating user inference attacks. Gradient Clipping. Since we consider a fine-tuning setup that is oblivious to the user-stratification of the data, a natural method to limit the model’s sensitivity is to clip the gradients at the batch (Pascanu et al., 2013) or example level (Abadi et al., 2016). We show the results for the 125M model on the CC News dataset in Figure 7 (left): both batch and per-example gradient clipping have no effect on mitigating user inference. The reason behind this is immediately clear from Figure 7 (right): canary examples do not have large outlying gradients and clipping affects real and canary data similarly. Thus, gradient clipping is an ineffective mitigation strategy. Figure 7: User inference mitigation with gradient clipping. **Left**: Attack effectiveness for canaries with different shared substring lengths when training GPT-Neo 125M on CC News with (1) no gradient clipping, (2) per-example gradient clipping, and (3) batch gradient clipping. **Right**: The distribution of gradient norms for canary examples and real examples. 
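For concreteness, the per-example clipping baseline evaluated above can be sketched as follows. This is a minimal illustrative implementation in the spirit of Abadi et al. (2016), not the paper's training code, and it processes the batch one example at a time for clarity rather than efficiency.

```python
# Sketch: one optimizer step with per-example gradient clipping (illustrative only).
import torch

def per_example_clipped_step(model, loss_fn, examples, optimizer, clip_norm=1.0):
    """Each example's gradient is clipped to norm <= clip_norm before averaging,
    so no single fine-tuning example dominates the update."""
    params = [p for p in model.parameters() if p.requires_grad]
    accumulated = [torch.zeros_like(p) for p in params]
    for x, y in examples:                          # iterate the batch example by example
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (norm.item() + 1e-12))
        for acc, g in zip(accumulated, grads):
            acc.add_(g, alpha=scale)               # accumulate the clipped per-example gradient
    for p, acc in zip(params, accumulated):
        p.grad = acc / len(examples)               # average of clipped per-example gradients
    optimizer.step()
    optimizer.zero_grad()
```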
**Early Stopping.** The connection between user inference and user-level overfitting from Section 4.2 suggests that early stopping, a common heuristic used to prevent overfitting (Caruana et al., 2000), could potentially mitigate the privacy risk of user inference. Unfortunately, we find that 95% of the final AUROC is obtained quite early in training: 15K steps (5% of the fine-tuning) for CC News and 90K steps (27% of the fine-tuning) for ArXiv. Typically, the overall validation loss still decreases far after this point. This suggests an explicit tradeoff between overall model utility (e.g., in terms of validation loss) and privacy risks from user inference. **Data Limits Per User.** Since we cannot change the fine-tuning procedure, we consider limiting the amount of fine-tuning data per user. Figure 6 (right two) show that this can be effective. For ArXiv, the AUROCs for real and canary users reduce from 66% and 88% at 100 fine-tuning documents per user to almost random chance at 10 documents per user. A similar trend also holds for CC News. **Summary.** Our results show that the proposed user inference attack is hard to mitigate with common heuristics. Enforcing data limits per user can be effective but this only works for data-rich applications with a large number of users. However, developing an effective mitigation strategy that also works in data-scarce applications remains an open problem. ## 5 DISCUSSION AND CONCLUSION When collecting fine-tuning data for specializing an LLM, data from a company’s users is often the natural choice since it closely resembles the types of inputs a deployed LLM will encounter in production. However, user structure in fine-tuning data also exposes new opportunities for privacy leakage. Up until now, most studies investigating privacy of LLMs have ignored any structure in the training data, but as the field shifts towards collecting data from new, potentially sensitive, sources, it is important to adapt our privacy threat models accordingly. Our work introduces a novel privacy attack exposing user participation in fine-tuned LLMs, and future work should explore other LLM privacy violations beyond the standard settings of membership inference and training data extraction. Furthermore, our work demonstrates the effectiveness of user inference attacks across a diverse variety of fine-tuning distributions, but, beyond simply limiting the amount of data per user, none of the mitigation heuristics we explored were effective. This motivates future work on user inference defenses — both heuristic defenses based on new understanding of the threat model, as well as methods for efficiently applying defenses with rigorous guarantees, such as user-level differential privacy (DP). User-level DP has been deployed in production settings for federated learning models of a much smaller size (Ramaswamy et al., 2020; Xu et al., 2023), but additional work is needed to effectively scale these techniques to large language models. ## REFERENCES Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the ACM SIGSAC Conference on Computer and Communications Security*, 2016. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv:2305.10403, 2023. 
Stella Biderman, USVSN Sai Prashanth, Lintang Sutawika, Hailey Schoelkopf, Quentin Anthony, Shivanshu Purohit, and Edward Raf. Emergent and predictable memorization in large language models. arXiv:2304.11158, 2023. Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In USENIX, 2021. Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramèr. Membership inference attacks from first principles. In IEEE Symposium on Security and Privacy, 2022. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. In ICLR, 2023. Rich Caruana, Steve Lawrence, and C Giles. Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping. NeurIPS, 2000. Zachary Charles, Nicole Mitchell, Krishna Pillutla, Michael Reneer, and Zachary Garrett. Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning. arXiv:2307.09619, 2023. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Łukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. arXiv 2107.03374, 2021. Mia Xu Chen, Benjamin N. Lee, Gagan Bansal, Yuan Cao, Shuyuan Zhang, Justin Lu, Jackie Tsay, Yinan Wang, Andrew M. Dai, Zhifeng Chen, Timothy Sohn, and Yonghui Wu. Gmail smart compose: Real-time assisted writing. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019. Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, and Yang Zhang. FACE-AUDITOR: Data auditing in facial recognition systems. In 32nd USENIX Security Symposium (USENIX Security 23), pp. 7195–7212, Anaheim, CA, August 2023. USENIX Association. ISBN 978-1-939133-37-3. URL https://www.usenix.org/conference/usenixsecurity23/presentation/chen-min. Christopher A Choquette-Choo, Florian Tramer, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks. In ICML, 2021. Colin B. Clement, Matthew Bierbaum, Kevin P. O’Keeffe, and Alexander A. Alemi. On the use of arxiv as a dataset. arXiv 1905.00075, 2019. Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, and Florian Tramèr. Privacy side channels in machine learning systems. arXiv:2309.05610, 2023.
JuyFppXzh2
The authors focus on short text. While it is widely used across the industry, it will be good to demonstrate why their approach is better for short text when compared to other approaches. In a sense what makes the approach more suited for short-text?
GANDALF: LEARNING LABEL CORRELATIONS IN EXTREME MULTI-LABEL CLASSIFICATION VIA LABEL FEATURES Anonymous authors Paper under double-blind review ABSTRACT Extreme Multi-label Text Classification (XMC) involves learning a classifier that can assign an input with a subset of most relevant labels from millions of label choices. Recent works in this domain have increasingly focused on a symmetric problem setting where both input instances and label features are short-text in nature. Short-text XMC with label features has found numerous applications in areas such as query-to-ad-phrase matching in search ads, title-based product recommendation, prediction of related searches, amongst others. In this paper, we propose Gandalf, a novel approach which makes use of a label correlation graph to leverage label features as additional data points to supplement the training distribution. By exploiting the characteristics of the short-text XMC problem, it leverages the label features to construct valid training instances, and uses the label graph for generating the corresponding soft-label targets, hence effectively capturing the label-label correlations. While most recent advances in XMC have been algorithmic, mainly aimed towards developing novel deep-learning frameworks, our data-centric augmentation approach is orthogonal to these methodologies, and can be applied in a plug-and-play manner to a variety of them. This generality and effectiveness of Gandalf is demonstrated by showing up to 30% relative improvements for 5 state-of-the-art algorithms across 4 benchmark datasets consisting of up to 1.3 million labels. 1 INTRODUCTION Extreme Multilabel Classification (XMC) has found numerous applications in the domains of related searches [Lain et al., 2019], dynamic search advertising [Prabhu et al., 2018] and recommendation tasks such as query-to-ad-phrase [Dahiya et al., 2021b], query-to-product [Medini et al., 2019], product-to-product [Mittal et al., 2021a], etc., which require predicting the most relevant results that frequently co-occur together [Chiang et al., 2019] [Hu et al., 2020], or are highly correlated to the given product or search query. These tasks are often modeled through embedding-based retrieval-cum-ranking pipelines over millions of possible web page titles, products titles, or ad-phrase keywords forming the label space. A major challenge in XMC problems is caused by the long-tailed label distribution, i.e., the presence of tail labels with extremely scarce training data. In this paper, we focus on the short-text setting, where we argue there exists a symmetry between inputs and labels, which can be exploited for improved learning of label correlations. Extreme class imbalance: The real world datasets in XMC are highly imbalanced towards popular or trending items. Moreover, these datasets adhere to Zipf’s law [Adamic & Huberman, 2002] [Ye et al., 2020], i.e., following a long-tailed distribution, where most labels are tail labels with very few ($\leq 5$) positive data-points in a training set spanning $\geq 10^6$ total data points (Table 1). Consequently, the label co-occurrence graph of XMC datasets is extremely sparse, i.e., the presence of a given label does not really imply the presence of other labels [Babbar & Schölkopf, 2019]. This makes capturing correlations among labels for the encoder a challenging task. More so, since the encoder is forced to rely solely on the sparse instance-to-label (i.e. input-to-output) mappings, which are often insufficient, especially for tail labels. 
Such an inherent disconnect in the label co-occurrence graph also motivates utilisation of the one-vs-rest classifier as a popular choice for XMC algorithms. Symmetric nature of short-text XMC with Label Features: Applications of short-text XMC ranging from query-to-ad-phrase prediction to title-based product-to-product recommendation witness short-text instances not only in the input space, but also in the output space. Ad-phrases or product-titles spanning the output space as label descriptors are, like the input query, short-text instances which, on average, consist of only 3-8 words (Dahiya et al., 2021a). Earlier works in XMC primarily focused on problems where the labels were identified by numeric IDs, and hence devoid of any semantic meaning. Here, works (Dahiya et al., 2021b) focused only on learning the nuances of short-text inputs for the XMC task. However, more recently, with inclusion of label descriptors – known as label features – in the output space, short-text XMC has taken a symmetric form. This has enabled researchers to more effectively capture the nuances shared between input text and label features in a common embedding space (Mittal et al., 2021a,b; Dahiya et al., 2021a). Learning Label Correlations: While label correlations are difficult to learn in XMC, previous approaches like (Guo et al., 2019; Mittal et al., 2021b) have tried modelling them in different ways. However, the general idea has been to introduce a label correlation graph (LCG) in the pipeline to implicitly capture higher-order correlations missing in the query’s ground-truth. In contrast, we take a unique data-centric approach and propose to leverage the innate symmetry of short-text XMC along the LCG to construct valid data-points. Our proposed approach, which we refer to as Gandalf, is a novel method to leverage label features as data points trained through supervisory signals in the form of higher-order correlations from the LCG. The output label vector, when the label features are used as the input data, is constructed by its normalized correlation vector with other labels, that is captured by the LCG. As a consequence, projecting labels from the input (as features) to output (as classification label vectors) space helps the encoder learn representations which are inherently endowed with stronger label correlation information via supervised learning as opposed to simply (i) leveraging the label features for contrastive learning (Dahiya et al., 2021a, 2023) or, (ii) augmenting the classifiers with LCG (Mittal et al., 2021b). To summarise, our contributions are the following: • We propose Gandalf — Graph AugmeNted DAta with Label Features — a simple yet effective algorithm to efficiently leverage label features to construct additional training instances based on exploiting the unique setting of short-text XMC in a novel manner. • In terms of prediction performance, we demonstrate the generality and effectiveness of Gandalf by showing up to 30% gains on 5 state-of-the-art extreme classifiers across 4 public benchmarks. We show that by using Gandalf, XMC methods which inherently do not leverage label features beat or parallel strong baselines which either employ elaborate training pipelines (Dahiya et al., 2021a), large transformer encoders (You et al., 2019; Zhang et al., 2021b; Dahiya et al., 2023) or make heavy architectural modifications (Mittal et al., 2021a,b) to leverage label features. 
• Finally, Gandalf does not add any additional computational overhead during inference over the base algorithm. Moreover, unlike other methods which try to capture label correlations (Saini et al., 2021; Chien et al., 2023; Mittal et al., 2021b), Gandalf is designed to keep the memory costs constant while learning, only compromising on the training time due to added data points, thus widening its applicability. 2 RELATED WORK Previous XMC Works: Prior works in XMC focused on annotating long-text documents, consisting of hundreds of word tokens, such as those encountered in tagging for Wikipedia (Babbar & Scholkopf, 2017; You et al., 2019) with numeric label IDs. Most works under this setting were aimed towards scaling up transformer encoders for XMC task (Zhang et al., 2021b; Kharbanda et al., 2022). With the introduction of label features, there exist three correlations that can be exploited for better representation learning: (i) query-label, (ii) query-query, and (iii) label-label correlations. Exploiting Correlations in XMC: Recent works have been successful in leveraging label features and pushing state-of-the-art by exploiting the first two correlations. For example, SIAMESEXML and NGAME (Dahiya et al., 2021a, 2023) employ a two-tower pre-training stage applying contrastive learning between an input text and its corresponding label features. GALAXC (Saini et al., 2021) & PINA (Chien et al., 2023), motivated by graph convolutional networks, create a combined query-label bipartite graph to aggregate predicted instance neighbourhood. This approach, however, leads to a multifold increase in the memory footprint. DECAF and ECLARE (Mittal et al., 2021a,b) make architectural additions to embed label-text embeddings (LTE) and graph-augmented label embeddings (GALE) in each label’s OVA classifier to exploit higher order correlations from LCG. **Two-tower Models & Classifier Learning:** Typically, due to the single-annotation nature of most dense retrieval datasets (Nguyen et al., 2016; Kwiatkowski et al., 2019; Toshi et al., 2017), two-tower models (Karpukhin et al., 2020) solving this task eliminate classifiers in favour of modelling implicit correlations by bringing query-document embeddings closer in the latent space of the encoder. These works are conventionally aimed at improving encoder representations by innovating on hard-negative mining (Zhang et al., 2021a; Xiong et al., 2021; Lu et al., 2022), teacher-model distillation (Qi et al., 2021; Ren et al., 2021), and combined dense-sparse training strategies (Khattab & Zaharia, 2020). While these approaches result in enhanced encoders, the multilabel nature of XMC makes them, in itself, insufficient for this domain. This has been demonstrated in two-stage XMC works like (Dahiya et al., 2021a, 2023) where these frameworks go beyond two-tower training and train classifiers with a frozen encoder in the second stage for better empirical performance (Table 2). ### 3 Preliminaries For training, we have available a multi-label dataset \( D = \{(\{x_i, y_i\})_{i=1}^N, \{z_l\}_{l=1}^L\} \) comprising of \( N \) data points. Each \( i \in [N] \) is associated with a small ground truth label set \( y_i \subset [L] \) from \( L \sim 10^6 \) possible labels. Further, \( x_i, z_l \in X \) denote the textual descriptions of the data point \( i \) and the label \( l \) which, in this setting, derive from the same vocabulary universe \( V \) (Dahiya et al., 2021a). 
The goal is to learn a parameterized function \( f \) which maps each instance \( x_i \) to the vector of its true labels \( y_i \in \{0, 1\}^L \) where \( y_{il} = 1 \iff l \in y_i \). A common strategy for handling this learning problem, the **two towers** approach, is to map instances and labels into a common Euclidean space \( E = \mathbb{R}^d \), in which the relevance \( s_l(x) \) of a label \( l \) to an instance is scored using an inner product, \( s_l(x) = \langle \Phi(x), \Psi(l) \rangle \). We call \( \Phi(x) \) the encoding representation of the instance \( x \), and \( w_l := \Psi(l) \) the decoding representation of label \( l \). If labels are featureless integers, then \( \Psi \) turns into a simple table lookup. In our setting, \( l \) is associated with features \( z_l \), so we identify \( \Psi(l) = \Psi(z_l) \). The prediction function selects the \( k \) highest-scoring labels, \( f(x) = \text{top}_k(\langle \Phi(x), \Psi(\cdot) \rangle) \). Training is usually handled using the **one-vs-all** paradigm, which applies a binary loss function \( \ell \) to each entry in the score vector. In practice, performing the sum over all labels for each instance is prohibitively expensive, so the sum is approximated by a shortlist of labels \( S(x_i) \) that typically contains all the positive labels, and only those negative labels which are expected to be challenging for classification (Dahiya et al., 2021a, 2023; Zhang et al., 2021b; Kharbanda et al., 2023), leading to \[ L_D[\Phi, \Psi] = \sum_{i=1}^N \sum_{l=1}^L \ell(y_{il}, \langle \Phi(x), \Psi(l) \rangle) \approx \sum_{i=1}^N \sum_{l \in S(x_i)} \ell(y_{il}, \langle \Phi(x), \Psi(l) \rangle). \] Even though these approaches have been used with success, they still struggle in learning good embeddings \( w_l \) for long-tail labels: A classifier that learns solely based on instance-label pairs has little chance of learning similar label representations for labels that do not co-occur within the dataset, even though they might be semantically related. Consequently, training can easily lead to overfitting even with simple classifiers (Guo et al., 2019). To reduce the generalization gap, regularization needs to be applied to the label decoder \( \Psi \), either explicitly as a new term in the loss function (Guo et al., 2019), or implicitly through the inductive biases of the network structure (Mittal et al., 2021a,b) or by a learning algorithm (Dahiya et al., 2021a, 2023). These approaches incorporate additional label metadata – **label features** – to generate the inductive biases. For short-text XMC, these features themselves are often short textual description, coming from the same space as the instances, as the following examples, taken from (i) LF-AmazonTitles-131K (recommend related products given a product name) and (ii) LF-WikiTitles-500K (predict relevant categories, given the title of a Wikipedia page) illustrate: **Example 1:** For “Mario Kart: Double Dash!!” on Amazon, we have available: Mario Party 7 | Super Smash Bros Melee | Super Mario Sunshine | Super Mario Strikers as the recommended products. 1 bold symbols \( y \) indicate vectors, capital letters \( Y \) indicate random variables, and sans-serif \( y \) denotes a set Figure 1: Figure showing how Gandalf augments the training dataset. Soft-targets for each label in the label-space are derived from the label correlation graph. 
These additional datapoints are simply concatenated with the traditional dataset (that contains queries with hard-targets) for training. Example 2: For the Wikipedia page “2022 French presidential election”, we have the available categories: April 2022 events in France | 2022 French presidential election | 2022 elections in France | Presidential elections in France. Further, a google search of the same query, leads to the following related searches - French election 2022 - The Economist | French presidential election coverage on FRANCE 24 | Presidential Election 2022: A Euroclash Between a “Liberal... | French polls, trends and election news for France - POLITICO.eu, amongst others. In view of these examples, one can affirm two important observations: (i) the short-text XMC problem indeed requires recommending similar items which are either highly correlated or co-occur frequently with the queried item, and (ii) the queried item and the corresponding label-features form an “equivalence class” and convey similar intent (Dahiya et al., 2021a). For example, a valid news headline search should either result in a page mentioning the same headline or similar headlines from other media outlets (see Example 2). As a result, it can be argued that data instances are interchangeable with their respective labels’ features. Exploiting this interchangeability of label and instance text, Dahiya et al. (2021a, 2023) proposes to tie encoder and decoder together and require $\Psi(l) = \Phi(z_l)$. While indeed yielding improved test performance, this approach has two drawbacks: Firstly, the condition $\Psi(l) = \Phi(z_l)$ turns out to be too strong, and it has to allow for some fine-tuning corrections $\eta_l$, yielding $\Psi(l) = \Phi(z_l) + \eta_l$. Consequently, training of SiameseXML and NGAME is done in two stages: Initially, a contrastive loss needs to be minimized, followed by fine-tuning with a classification objective. 4 Gandalf: Learning Label-Label Correlations Dahiya et al. (2021a) motivate their approach by postulating a self-annotation property (Label Self Proximity), which claims that a label $l$ is relevant to its own textual features with high probability, $P[Y_l = 1 | X = z_l] > 1 - \epsilon$ for some small $\epsilon \ll 1$. Another recent work Chien et al. (2023), in its pretraining step, attempts to leverage label features as data points but does so by expanding the label space $\{0, 1\}^L$ to also include instances as $\{0, 1\}^{L+N}$ leveraging the self-annotation property of labels and inverting the initial instance-label mappings to have instances $x_i$ as labels for label features $z_l$ as data points. This, however, leads to an explosion in an already enormous label space. Most instances in an XMC problem are associated with multiple labels, yet the self-annotation postulate only provides a single label. Thus, one might ask, Question: In a label space spanning the order of $10^6$ labels, what are the other labels which annotate $z_l$, when posed as a data point? In order to effectively augment the training set with $z_l$ as a data point, we need to provide values for the other entries of the label vector $y_l$. Ideally, these labels would be sampled according to $y_l \sim P[Y | X = z_l]$, which means we need to find sensible approximations to the probabilities for the other labels $P[Y_j = 1 | X = z_l]$. When using the cross-entropy loss, sampling can be forgone by replacing the discrete labels $y_l \in \{0, 1\}^L$ by soft labels $y_{l,\text{soft}} = P[Y | X = z_l]$. 
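For reference, the classification objective of Equation 1 — binary cross-entropy over a per-instance label shortlist — can be sketched as below for a single instance. This is an illustrative sketch, not code from any of the cited systems.

```python
# Sketch: shortlisted one-vs-all loss of Equation 1 for one training instance.
import torch
import torch.nn.functional as F

def shortlisted_ova_loss(instance_emb, label_emb, shortlist, targets):
    """instance_emb: [d] = Phi(x_i); label_emb: [L, d] with rows Psi(l);
    shortlist: LongTensor of label ids in S(x_i); targets: FloatTensor of y_il for those ids."""
    scores = label_emb[shortlist] @ instance_emb   # <Phi(x_i), Psi(l)> for l in S(x_i)
    return F.binary_cross_entropy_with_logits(scores, targets)
```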
In order to derive a model for \( P[Y_{l'} = 1 \mid X = z_l] \), we can take inspiration from the GLAS regularizer (Guo et al., 2019). This regularizer tries to make the Gram matrix of the label embeddings \( \langle w_l, w_{l'} \rangle \) reproduce the co-occurrence statistics of the labels \( S \), \[ R_{GLAS}[\Psi] = L^{-2} \sum_{l=1}^{L} \sum_{l'=1}^{L} (\langle w_l, w_{l'} \rangle - S_{ll'})^2. \] (2) Here, \( S \) denotes the symmetrized conditional probabilities, \[ S_{ll'} := 0.5(P[Y_l = 1 \mid Y_{l'} = 1] + P[Y_{l'} = 1 \mid Y_l = 1]). \] (3) Plugging in \( w_l = \Psi(z_l) \), this regularizer reaches its minimum if \[ \langle \Psi(z_l), \Psi(z_{l'}) \rangle = S_{ll'}. \] (4) By the self-proximity postulate, we can assume \( \Psi(z_l) \approx \Phi(z_l) \). For a given label feature instance with target soft-label \( (z_l, y_{ll'}^{\text{soft}}) \), the training will try to minimize \( \ell(\langle \Phi(z_l), \Psi(z_{l'}) \rangle, y_{ll'}^{\text{soft}}) \). To be consistent with Equation 4, we therefore want to choose \( y_{ll'}^{\text{soft}} \) such that \( S_{ll'} = \arg \min \ell(\cdot, y_{ll'}^{\text{soft}}) \). This is fulfilled for \( y_{ll'}^{\text{soft}} = \sigma(S_{ll'}) \) for \( \ell \) being the binary cross-entropy, where \( \sigma \) denotes the logistic function. If \( \ell \) is the squared error, then the solution is even simpler, with \( y_{ll'}^{\text{soft}} = S_{ll'} \). For simplicity, and because of strong empirical performance, we choose \( y_{ll'}^{\text{soft}} = S_{ll'} \) even when training with cross-entropy loss. This results in the extended version of the self-proximity postulate: **Postulate 1 (Soft-Labels for Label Features)** Given a label \( l \) with features \( z_l \in X \), and a proxy for semantic similarity of labels \( S \), the labels features, when interpreted as an input instance, should result in predictions \[ P[Y_{l'} = 1 \mid X = z_l] \approx S_{ll'}. \] (5) More intuitively, Postulate 1 answers the Question posed above by stating that the label vector \( y \), when a given label feature \( z_l \) is used as an instance, should have the soft-label value \( y_{ll'}^{\text{soft}} \) that is in proportion to the \( l' \) entry in the LCG corresponding for the given \( l \). The label-similarity measure \( S \) (Equation 3) used in the original GLAS regularizer uses only direct co-occurrences of labels, which results in a noisy signal that does not capture higher-order label interdependencies. In contrast, the LCG \( \in \mathbb{R}^{L \times L} \) (Mittal et al., 2021b) is inferred by performing a random walk (with restarts) over the bipartite graph connecting input data instances with their corresponding ground-truth labels. Since the entries in the LCG are normalized and skewed in favor of tail labels, the LCG can be interpreted as a smoothed and regularized variant of the label co-occurrence matrix. This enables LCG to correctly identify a set of semantically similar labels that either share tokens with the queried label, or co-occur frequently in the same context. We show this qualitatively in Appendix B where the “History of Computing” is the most similar to “Computer Museums” and “Charles Babbage Institute”. This property makes its edge weights \( (G_{ij}) \) a good candidate for the similarity measure \( S_{ij} \). 
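The similarity measure of Equation 3 can be computed directly from the binary instance-label matrix; the sketch below assumes a SciPy sparse matrix \( Y \in \{0,1\}^{N \times L} \) and is an illustration rather than the paper's implementation.

```python
# Sketch: symmetrized conditional co-occurrence matrix S of Equation 3.
import numpy as np
import scipy.sparse as sp

def symmetrized_cooccurrence(Y: sp.csr_matrix) -> sp.csr_matrix:
    """S[l, l'] = 0.5 * (P[Y_l=1 | Y_l'=1] + P[Y_l'=1 | Y_l=1]) from a binary [N, L] matrix Y."""
    C = (Y.T @ Y).tocsr().astype(np.float64)              # C[l, l'] = #instances with both labels
    n = C.diagonal() + 1e-12                              # n[l]     = #instances with label l
    p_l_given_lp = C.multiply(1.0 / n[None, :]).tocsr()   # divide column l' by n[l']
    p_lp_given_l = C.multiply(1.0 / n[:, None]).tocsr()   # divide row l by n[l]
    return (0.5 * (p_l_given_lp + p_lp_given_l)).tocsr()
```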
While originally LCG was utilized to efficiently mine higher order query tail-label relations by augmenting the classifier \( \Psi \) with graph information, we propose to leverage the graph weights (with an additional row-wise normalization to get values in range [0, 1]) as probabilistic soft labels for \( z_l \) as data instance. Further, to restrict the impact of noisy correlations in large output spaces (Babbar & Scholkopf, 2019), we empirically find it beneficial to threshold the soft labels obtained from LCG at \( \delta \). Thus, we propose a novel method - Gandalf - to augment the training dataset with label features as additional data points, annotated by label vectors given by: \[ y_{ij} = P[Y_i = 1 \mid X = z_j] \approx S_{ij} = \begin{cases} G_{ij}/G_{jj}, & G_{ij} > \delta \\ 0, & G_{ij} \leq \delta \end{cases} \] (6) A diagrammatic representation of our method is provided in Figure 1. We hypothesize that the models benefit from Gandalf in two ways. First, Gandalf leverages label features, existing in the same distribution and vocabulary as \( D \) (Dahiya et al., 2021a), as novel data points which are then used for training in a supervised setting to mimic the apriori statistical correlations between labels that exist in the label space. The model is able to capture these correlations because Gandalf is multilabel in nature and forces the encoder to learn to create a single representation for the label features that maximizes the probability score across all positive labels (OVA classifiers). Secondly, as shown in section 3, data points more-often-than-not share tokens with their label’s features. Therefore training models with these additional data points helps better learn both input token embeddings and classifier label embeddings i.e. OVA decision boundaries. As a result, the encoded representation of correlated labels, learnt by an underlying algorithm, are closer in the representation space. This especially benefits the tail labels which, more often than not, either get missed out during shortlisting or rank outside the desired top-k predictions. 5 EXPERIMENTS & DISCUSSION | Datasets | N | L | APpL | ALpP | AWpP | |-------------------|-------|-------|------|------|------| | LF-AmazonTitles-131K | 294,805 | 131,073 | 5.15 | 2.29 | 6.92 | | LF-WikiSeeAlsoTitles-320K | 693,082 | 312,330 | 4.67 | 2.11 | 3.01 | | LF-WikiTitles-500K | 1,813,391 | 501,070 | 17.15 | 4.74 | 3.10 | | LF-AmazonTitles-1.3M | 2,248,619 | 1,305,265 | 38.24 | 22.20 | 8.74 | Table 1: Details of short-text benchmark datasets with label features. APpL stands for avg. points per label, ALpP stands for avg. labels per point and AWpP is the length i.e. avg. words per point. Benchmarks, Baselines & Metrics We benchmark our experiments on 4 standard public datasets, the details of which are mentioned in Table 1. To test the generality and effectiveness of our proposed Gandalf, we apply the algorithm across multiple state-of-the-art short-text extreme classifiers: (i) ASTEC, (ii) DECAF, (iii) ECLARE, and (iv) INCEPTIONXML. Furthermore, we also compare against two-tower approaches like DPR [Karpukhin et al., 2020], ANCE [Xiong et al., 2021], RocketQA [Qu et al., 2021], including XMC-specific ones - SIAMESEXML and NGAME. We do not evaluate Gandalf on two-tower approaches since it is non-trivial and part of our future work to implement it with a two-tower training objective. Additionally, we also compare against transformer-encoder based AttentionXML [You et al., 2019] and XR-Transformer [Zhang et al., 2021b]. 
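Before turning to the results, the augmentation of Equation 6 can be made concrete with a short sketch (an assumption for illustration, not the authors' released code): given the LCG \( G \) as a sparse \( L \times L \) matrix, every label \( l \) contributes one extra training instance \( (z_l, y_l^{\text{soft}}) \).

```python
# Sketch: Gandalf soft targets from the label correlation graph G (Equation 6).
import numpy as np
import scipy.sparse as sp

def gandalf_soft_targets(G: sp.spmatrix, delta: float = 0.1) -> sp.csr_matrix:
    """Row l of the returned [L, L] matrix is the soft-label vector used when
    label l's features z_l are added as a training instance."""
    G = sp.csr_matrix(G, dtype=np.float64)
    diag = G.diagonal() + 1e-12
    coo = G.tocoo()
    keep = coo.data > delta                        # drop weak / noisy correlations
    rows, cols, vals = coo.row[keep], coo.col[keep], coo.data[keep]
    vals = vals / diag[rows]                       # y_soft[l, l'] = G[l, l'] / G[l, l]
    return sp.csr_matrix((vals, (rows, cols)), shape=G.shape)
```

The resulting pairs \( (z_l, y_l^{\text{soft}}) \) are simply appended to the original training set, so the base encoder, classifiers, and training loop remain unchanged.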
As an algorithmic contribution, we extend the INCEPTIONXML encoder to leverage label features and further the state-of-the-art on benchmark datasets, and call it INCEPTIONXML-LF. For this, we augment the OVA classifier with additional label-text embeddings (LTE) and graph-augmented label embeddings (GALE), as done in Mittal et al. (2021b). The implementation details and training strategy can be found in Appendix A. We measure the performance using standard metrics P@k, and its propensity-scored variant, PSP@k [Jain et al., 2016].

5.1 EMPIRICAL PERFORMANCE

With Gandalf, gains of up to 30% can be observed in the case of ASTEC and INCEPTIONXML which, by default, do not leverage label features and yet perform at par with, and sometimes better than, their LF-counterparts, i.e., DECAF, ECLARE, and INCEPTIONXML-LF, across all datasets. DECAF and ECLARE tie the embedding layer with one (LTE) or two (LTE + GALE) additional ASTEC-like encoders respectively to take advantage of label text embeddings (LTE) and graph augmented label embeddings (GALE) in order to feed label feature information into the classifiers. While these architectural modifications help capture higher-order query-label relations and help the model predict unseen labels better, they add significant computational overhead. DECAF (having LTE) is ~2× more expensive to train than its base model ASTEC, while ECLARE (having both LTE & GALE) adds up to ~3× computational cost. A similar trend is witnessed between INCEPTIONXML and its modified LF counterpart. On the other hand, base encoders trained with Gandalf imbue the necessary correlations without needing any additional modifications or complicated training pipelines.

It may be noted that all 5 encoders trained with additional Gandalf-generated data points are frugal architectures trained from scratch. While ASTEC merely consists of a linear layer as an encoder and makes use of an ANNS for prediction, INCEPTIONXML employs only a self-attention layer followed by two convolutional and two linear layers. On the other hand, two-tower approaches, except SIAMESEXML (which uses an ASTEC encoder), employ pre-trained DistilBERT or BERT models. Our findings from Table 2 are in line with those from Kharbanda et al. (2023), where they show that transformer models are perhaps an overkill for the short-text XMC task at hand.
Frugal | Method | P@1 | P@3 | P@5 | PSP@1 | PSP@3 | PSP@5 | P@1 | P@3 | P@5 | PSP@1 | PSP@3 | PSP@5 | |------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | AttentionXML | 32.25 | 21.70 | 15.61 | 23.97 | 28.60 | 32.57 | 45.04 | 39.71 | 36.25 | 15.97 | 19.90 | 22.54 | | DPR* | 41.85 | 28.71 | 20.88 | 38.17 | 43.93 | 49.45 | 44.64 | 39.05 | 34.83 | 32.62 | 35.37 | 36.72 | | ANCE* | 42.67 | 29.05 | 20.98 | 38.16 | 43.78 | 49.03 | 46.44 | 41.48 | 37.59 | 31.91 | 35.31 | 37.25 | | ROCKETQA* | 42.75 | 29.22 | 20.98 | 39.97 | 44.50 | 49.21 | - | - | - | - | - | - | | XR-TRANSFORMER | 38.10 | 25.57 | 18.32 | 28.86 | 34.85 | 39.59 | 50.14 | 44.07 | 39.98 | 20.06 | 24.85 | 27.79 | | NGAME* + classifier | 42.61 | 28.86 | 20.69 | 38.27 | 43.75 | 48.71 | 45.82 | 39.94 | 35.48 | 33.03 | 35.63 | 36.80 | | 44.95 | 29.87 | 21.20 | | | | | 54.69 | 47.08 | 42.80 | | | | | SiameseXML* + classifier | 38.36 | 26.20 | 19.26 | 34.83 | 39.87 | 45.18 | 49.02 | 42.72 | 38.52 | 27.12 | 30.43 | 32.52 | | ASTEC + Gandalf | 37.12 | 25.20 | 18.24 | 29.22 | 34.64 | 39.49 | 48.82 | 42.62 | 38.44 | 21.47 | 25.41 | 27.86 | | DECAF + Gandalf | 38.40 | 25.84 | 18.65 | 30.85 | 36.44 | 41.42 | 50.67 | 44.49 | 40.35 | 22.07 | 26.54 | 29.30 | | ECLARE + Gandalf | 40.46 | 27.54 | 19.63 | 33.18 | 39.55 | 44.10 | 50.14 | 44.09 | 40.00 | 23.43 | 27.90 | 30.56 | | InceptionXML + Gandalf | 36.79 | 24.94 | 17.95 | 28.50 | 34.15 | 38.79 | 48.21 | 42.47 | 38.59 | 20.72 | 24.94 | 27.52 | | InceptionXML-LF + Gandalf | 44.67 | 30.00 | 21.50 | 37.98 | 43.83 | 48.93 | 50.80 | 44.54 | 40.25 | 25.49 | 29.42 | 31.59 | | InceptionXML-LF | 40.74 | 27.24 | 19.57 | 34.52 | 39.40 | 44.13 | 49.01 | 42.97 | 39.46 | 24.56 | 28.37 | 31.67 | | 43.84 | 29.59 | 21.30 | | 38.22 | 43.90 | 49.03 | 52.91 | 47.23 | 42.84 | 30.02 | 33.18 | 35.56 | | AttentionXML | 17.56 | 11.34 | 8.52 | 9.45 | 10.63 | 11.73 | 40.90 | 21.55 | 15.05 | 14.80 | 13.97 | 13.88 | | SiameseXML++ | 31.97 | 21.43 | 16.24 | 26.82 | 28.42 | 30.36 | 42.08 | 22.80 | 16.01 | 23.53 | 21.64 | 21.41 | | ASTEC + Gandalf | 22.72 | 15.12 | 11.43 | 13.69 | 15.81 | 17.50 | 44.40 | 24.69 | 17.49 | 18.31 | 18.25 | 18.56 | | DECAF + Gandalf | 25.14 | 16.90 | 12.86 | 16.73 | 18.99 | 21.01 | 44.21 | 24.64 | 17.36 | 19.29 | 19.82 | 19.96 | | ECLARE + Gandalf | 29.35 | 19.83 | 15.05 | 22.01 | 24.23 | 26.27 | 44.36 | 24.29 | 16.91 | 21.58 | 20.39 | 19.84 | | InceptionXML + Gandalf | 23.10 | 15.54 | 11.52 | 14.15 | 16.71 | 17.39 | 44.61 | 24.79 | 19.52 | 18.65 | 18.70 | 18.94 | | InceptionXML-LF + Gandalf | 32.54 | 22.15 | 16.86 | 25.27 | 27.76 | 30.03 | 45.93 | 25.81 | 20.36 | 21.89 | 21.54 | 22.56 | | InceptionXML-LF | 28.99 | 19.53 | 14.79 | 21.45 | 23.65 | 25.65 | 44.89 | 25.71 | 18.23 | 23.88 | 22.58 | 22.50 | | 33.12 | 22.70 | 17.29 | | 26.68 | 29.03 | 31.27 | 47.13 | 26.87 | 19.03 | 24.12 | 23.92 | 23.82 | Table 2: Results showing the effectiveness of Gandalf on state-of-the-art extreme classifiers. (*) denotes two-tower models. The best-performing approach are put in **bold**. For Amazon datasets, best results using frugal architectures and those using transformers(highlighted) are **bold** separately. architectures, which by themselves, under performed as compared to two-tower approaches, parallel or surpass the same when trained with Gandalf augmented data. 
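For completeness, the following is a sketch of the metrics reported in Table 2, assuming dense score matrices and precomputed label propensities \( p_l \) (Jain et al., 2016); PSP@k is shown in its unnormalized form, whereas public implementations often further normalize it by the best attainable value.

```python
# Sketch: P@k and (unnormalized) PSP@k on dense score / ground-truth matrices.
import numpy as np

def precision_at_k(scores, y_true, k=5):
    """scores: [N, L] predicted relevance; y_true: [N, L] binary ground truth."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = np.take_along_axis(y_true, topk, axis=1)
    return hits.sum(axis=1).mean() / k

def psp_at_k(scores, y_true, propensities, k=5):
    """Each hit on label l is weighted by 1 / p_l, up-weighting tail labels."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    weighted = y_true / propensities[None, :]          # inverse-propensity-weighted relevance
    return np.take_along_axis(weighted, topk, axis=1).sum(axis=1).mean() / k
```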
### 5.2 Discussion

From Table 2 we can make some key observations, not only about the short-text XMC problem with label features but also about specific dataset properties. For example, training on label features as data points generated via Gandalf gives remarkable improvements on top of existing algorithms, especially on LF-AmazonTitles-131K and LF-WikiSeeAlsoTitles-320K, where most labels have ~5 training data points on average. In these low-data regimes, Gandalf helps imbue important label-correlation information in the encoder that is missed by the training algorithms of most existing models. In contrast, improvements on LF-WikiTitles-500K remain relatively mild, as there is enough data per label for existing training algorithms to learn the inherent label correlations from the existing query-label mappings.

**Gandalf vs ECLARE** ECLARE leverages the LCG to encode higher-order document-label correlations, whereas Gandalf explicitly aims to learn label-label correlations. More specifically, in ECLARE the loss participants are \(L(\phi(x_i), \psi + \phi_{LTE}(z_i) + \phi_{GALE}(z_i))\), whereas in Gandalf they are \(L(\phi(z_i), \psi)\). Notably, the correlations learned by Gandalf are independent of those captured by GALE in ECLARE. Thus, we find ECLARE and INCEPTIONXML-LF, both of which employ GALE, to benefit from training on data points generated using Gandalf, as is evident from the significant improvements in Table 2. Further, Gandalf simplifies ECLARE's approach and does not need to employ the additional \(\phi_{LTE}, \phi_{GALE}\) encoders.

**Two-tower Approaches** As mentioned before, two-tower approaches without classifiers are insufficient for XMC due to the multi-label nature of the problem. This can be clearly seen from the limited performance of state-of-the-art two-tower approaches such as DPR (Karpukhin et al., 2020), ANCE (Xiong et al., 2021), and RocketQA (Qu et al., 2021). Even though RocketQA's elaborate training pipeline consists of cross-encoder teacher-model distillation and data augmentation, it performs only marginally better than NGAME's two-tower training approach. Notably, while two-tower approaches in XMC (Dahiya et al., 2023; 2021a) parallel these dense retrieval methods, they still benefit from the addition of discriminative classifier training. Hence, classifier training is a crucial aspect of XMC pipelines and benefits from the additional supervised signals provided by Gandalf.
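To make the contrast between the two loss formulations concrete, the sketch below juxtaposes them in PyTorch. It is our own schematic illustration rather than the authors' code: `phi`, `phi_lte`, and `phi_gale` stand for the document, label-text, and graph-augmented label encoders, `psi` for the OVA classifier weight matrix, and the names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def eclare_style_loss(phi, phi_lte, phi_gale, psi, x, z, y):
    """ECLARE: classifiers are composed as psi_l + phi_LTE(z_l) + phi_GALE(z_l),
    which requires training two extra label encoders alongside phi."""
    classifiers = psi + phi_lte(z) + phi_gale(z)    # (L, d) composed one-vs-all classifiers
    logits = phi(x) @ classifiers.T                 # (B, L) document-label scores
    return F.binary_cross_entropy_with_logits(logits, y)

def gandalf_style_loss(phi, psi, z_batch, soft_targets):
    """Gandalf: label features are routed through the same encoder phi as ordinary
    data points and scored against the plain classifiers psi, with soft-label targets."""
    logits = phi(z_batch) @ psi.T                   # (B, L)
    return F.binary_cross_entropy_with_logits(logits, soft_targets)
```

The second form leaves the architecture untouched, which is consistent with why the frugal encoders in Table 2 gain from Gandalf without incurring ECLARE's additional training cost.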
### 5.3 Ablations and Qualitative Results

| Method | P@1 | P@3 | P@5 | PSP@1 | PSP@3 | PSP@5 | P@1 | P@3 | P@5 | PSP@1 | PSP@3 | PSP@5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LF-AmazonTitles-131K | | | | | | | | | | | | |
| InceptionXML | 35.62 | 24.13 | 17.35 | 27.53 | 33.06 | 37.50 | 21.53 | 14.19 | 10.66 | 13.06 | 14.87 | 16.33 |
| + Gandalf w/o SL | 37.59 | 25.25 | 18.18 | 30.75 | 35.54 | 40.06 | 24.43 | 16.16 | 12.15 | 16.89 | 18.45 | 20.02 |
| + Gandalf | 43.52 | 29.23 | 20.92 | 36.96 | 42.71 | 47.64 | 31.31 | 21.38 | 16.22 | 24.31 | 26.79 | 28.83 |
| INCEPTIONXML-LF | 40.74 | 27.24 | 19.57 | 34.52 | 39.40 | 44.13 | 49.01 | 42.97 | 39.46 | 24.56 | 28.37 | 31.67 |
| + Gandalf (δ = 0.1) | 41.71 | 28.03 | 20.14 | 36.94 | 41.93 | 46.64 | 31.40 | 21.56 | 16.53 | 26.01 | 27.89 | 29.99 |
| + Gandalf (δ = 0.2) | 42.09 | 28.38 | 20.45 | 37.09 | 42.19 | 47.04 | 32.20 | 21.86 | 16.60 | 26.06 | 28.01 | 30.03 |
| + Gandalf (δ = 0.3) | 41.73 | 28.10 | 20.18 | 37.01 | 41.99 | 46.67 | 31.29 | 21.35 | 16.28 | 25.68 | 27.59 | 29.65 |
| + Gandalf (δ = 0.4) | 41.39 | 27.74 | 19.89 | 36.71 | 41.51 | 46.09 | 31.03 | 20.92 | 15.99 | 25.11 | 27.12 | 29.14 |

Table 3: Results demonstrating the effectiveness of leveraging label features as data points annotated with soft-labels (denoted SL), i.e. Gandalf, on a single InceptionXML model. The table also shows the method's sensitivity to δ, as defined in Equation 6. Notably, soft-labels are central to imbuing the encoder with stronger label-correlation information.

**Effect of Soft Labels & Sensitivity to δ:** We examine Gandalf's performance without soft-labels on INCEPTIONXML (Table 3), where Gandalf w/o SL is essentially equivalent to leveraging label features as data points with the self-annotation property alone. However, that only helps the model learn token-to-label associations, like LTE in DECAF. Notably, soft-targets play an important role in enabling the encoder to intrinsically learn the label-label correlations and imbue the necessary inductive bias into the models. We further examine Gandalf's sensitivity to δ by training INCEPTIONXML-LF on data generated with varying values of δ on two datasets. As shown in Table 3, the empirical performance peaks at a δ value of 0.1, which is sufficient to suppress the impact of noisy correlations, while higher values of δ tend to suppress useful information from the LCG; a rough illustrative sketch of this construction is given at the end of this subsection.

Figure 2: Contributions to P@5 in (a) LF-AmazonTitles-131K and (b) LF-WikiSeeAlsoTitles-320K. The number of labels in each bin is provided after the # in the second row of the tags on the x-axis. The bottommost row denotes the mean label frequency in that bin. Specifically, note the improvements on tail labels in the earlier bins (5 - 3).

**Improvements on tail labels:** We perform a quantile analysis (Figure 2) across two datasets, LF-AmazonTitles-131K and LF-WikiSeeAlsoTitles-320K, where we examine performance (contribution to the P@5 metric) over 5 equi-voluminous bins based on increasing order of mean label frequency in the training dataset. Consequently, performance on head labels is captured by bin #1 and that of tail labels by bin #5. We note that introducing the additional training data with Gandalf consistently improves performance across all label frequencies, with more pronounced gains on the bins containing more tail labels.
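Returning to the soft-label ablation above, the following is only a rough sketch of the ablated ingredients (the precise construction is given by Equation 6 and is not repeated here). It assumes soft labels are obtained by row-normalizing label co-occurrence counts from the LCG, zeroing entries below δ, and keeping the self-annotation; the exact normalization in the paper may differ.

```python
import numpy as np

def gandalf_soft_labels(cooccurrence: np.ndarray, delta: float = 0.1) -> np.ndarray:
    """Builds one soft-label vector per label from the label correlation graph (LCG).

    cooccurrence[l, m] counts how often labels l and m co-occur on training documents;
    the l-th row of the result annotates the data point built from label l's text.
    """
    # Row-normalize co-occurrence counts into correlation weights.
    row_sums = cooccurrence.sum(axis=1, keepdims=True) + 1e-12
    soft = cooccurrence / row_sums
    # Suppress weak, likely noisy correlations below the threshold delta.
    soft[soft < delta] = 0.0
    # Self-annotation: each label feature is a positive example of its own label.
    np.fill_diagonal(soft, 1.0)
    return soft
```

Under this reading, the Gandalf w/o SL ablation corresponds to keeping only the diagonal (self-annotation) of this matrix.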
| Method | Datapoint | Baseline Predictions | Gandalf Predictions |
|---|---|---|---|
| INCEPTIONXML-LF | Topological group | Pontryagin duality, Topological order, Topological quantum field theory, Quantum number, Quantum topology | Compact group, Haar measure, Lie group, Algebraic group, Topological ring |
| DECAF | Topological group | Topological quantum computer, Topological order, Topological quantum field theory, Topological quantum number, Quantum topology | Compact group, Haar measure, Lie group, Algebraic group, Topological ring |
| ECLARE | Topological group | Topological quantum computer, Topological order, Topological quantum field theory, Topological quantum number, Quantum topology | Compact group, Topological order, Lie group, Algebraic group, Topological ring |
| INCEPTIONXML-LF | Oat | List of lighthouses in Scotland, List of Northern Lighthouse Board lighthouses, Oatcake, Communes of the Côtes-d'Armor department | Oatcake, Oatmeal, Oat milk, Porridge, Rolled oats |
| DECAF | Oat | Oatcake, Oatmeal, Design for All the ICFP, Oatley Point Reserve, Oatley Pleasure Grounds | Oatcake, Oatmeal, Oat milk, Porridge, Rolled oats |
| ECLARE | Oat | Oatmeal, Oat milk, Parks in Sydney, Oatley Point Reserve, Oatley Pleasure Grounds | Oatcake, Porridge, Rolled oats, Oatley Point Reserve, Oatley Pleasure Grounds |
| INCEPTIONXML-LF | Lunar Orbiter program | Lunar Orbiter Image Recovery Project, Lunar Orbiter 3, Lunar Orbiter 5, Chinese Lunar Exploration Programme | Surveyor program, Luna programme, Lunar Orbiter Image Recovery Project, Lunar Orbiter 3, Lunar Orbiter 5 |
| DECAF | Lunar Orbiter program | Exploration of the Moon, Lunar Orbiter Image Recovery Project, Lunar Orbiter 3, Lunar Orbiter 5 | Exploration of the Moon, Apollo program, Surveyor program, Luna programme, Lunar Orbiter program |
| ECLARE | Lunar Orbiter program | Exploration of the Moon, Lunar Orbiter program, Lunar Orbiter Image Recovery Project, Lunar Orbiter 3, Lunar Orbiter 5 | Exploration of the Moon, Apollo program, Surveyor program, Luna programme, Lunar Orbiter program |

Table 4: Qualitative predictions from the LF-WSAT-320K dataset. Labels indicate mispredictions.

**Qualitative Results:** We further analyse qualitative examples via the top-5 predictions obtained by training the base encoders with and without Gandalf-augmented data points (Table 4); additional examples are provided in Appendix B. We note that the quality of predictions improves when the encoder is trained with Gandalf-generated data points. For the query "Topological group", where all baselines fail to produce a single correct prediction in the top 5, encoders trained with Gandalf produce 4/5 correct predictions. Even single-word queries such as "Oat", for which the baselines predict unrelated labels, get all labels right with the addition of Gandalf. Furthermore, even the quality of the incorrect predictions improves, i.e., their relevance to the input data point increases. For example, in the case of "Lunar Orbiter program", the only incorrect Gandalf predictions are "Lunar Orbiter 3", "Lunar Orbiter 5", and "Pioneer program" (a US lunar and planetary space-probe exploration program), which are potential false negatives.
## 6 Conclusion

In this paper, we proposed *Gandalf*, a strategy for learning label correlations, a notoriously difficult challenge. In contrast to previous works, which model these correlations implicitly through model training, we propose a supervised approach that learns them explicitly by leveraging the inherent query-label symmetry in short-text extreme classification. We performed extensive experiments by applying Gandalf to various SOTA XMC methods and demonstrated substantial, uniform improvements in prediction performance across all of them. Moreover, this is achieved with frugal architectures, without incurring any additional overhead in inference latency or training memory footprint. We hope our treatment of label correlations in this domain will spur further research towards crafting data points with more expressive annotations, and towards extending the idea to long-text XMC approaches, where the instance-label symmetry is more ambiguous. Learning label correlations in contrastive settings, as done in two-tower approaches, rather than discriminative ones, is left as future work.

REFERENCES

Lada A. Adamic and Bernardo A. Huberman. Zipf's law and the internet. *Glottometrics*, 3(1):143–150, 2002.

R. Babbar and B. Schölkopf. DiSMEC: Distributed Sparse Machines for Extreme Multi-label Classification. In *WSDM*, 2017.

R. Babbar and B. Schölkopf. Data scarcity, robustness and extreme multi-label classification. *Machine Learning*, 108:1329–1351, 2019.

W.-C. Chang, H.-F. Yu, K. Zhong, Y. Yang, and I. Dhillon. Taming Pretrained Transformers for Extreme Multi-label Text Classification. In *KDD*, 2020.

Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In *KDD*, pp. 257–266, 2019.

Eli Chien, Jiong Zhang, Cho-Jui Hsieh, Jyun-Yu Jiang, Wei-Cheng Chang, Olgica Milenkovic, and Hsiang-Fu Yu. PINA: Leveraging side information in extreme multi-label classification via predicted instance neighborhood aggregation. *arXiv preprint arXiv:2305.12349*, 2023.

Kunal Dahiya, Ananye Agarwal, Deepak Saini, Gururaj K, Jian Jiao, Amit Singh, Sumeet Agarwal, Purushottam Kar, and Manik Varma. SiameseXML: Siamese networks meet extreme classifiers with 100M labels. In *ICML*, pp. 2330–2340, 2021a.

Kunal Dahiya, Deepak Saini, Anshul Mittal, Ankush Shaw, Kushal Dave, Akshay Soni, Himanshu Jain, Sumeet Agarwal, and Manik Varma. DeepXML: A deep extreme multi-label learning framework applied to short text documents. In *WSDM*, pp. 31–39, 2021b.

Kunal Dahiya, Nilesh Gupta, Deepak Saini, Akshay Soni, Yajun Wang, Kushal Dave, Jian Jiao, Gururaj K, Prasenjit Dey, Amit Singh, et al. NGAME: Negative mining-aware mini-batching for extreme classification. In *WSDM*, pp. 258–266, 2023.

C. Guo, A. Mousavi, X. Wu, Daniel N. Holtmann-Rice, S. Kale, S. Reddi, and S. Kumar. Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces. In *NeurIPS*, 2019.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. In *NeurIPS*, 2020.

Himanshu Jain, Yashoteja Prabhu, and Manik Varma. Extreme multi-label loss functions for recommendation, tagging, ranking & other missing label applications. In *KDD*, pp. 935–944, 2016.

Himanshu Jain, Venkatesh Balasubramanian, Bhavan Chunduri, and Manik Varma. Slice: Scalable linear extreme classifiers trained on 100 million labels for related searches. In *WSDM*, 2019.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In *ACL*, pp. 1601–1611, 2017.