VsqVhrgjCt
|
The results of E2EVarNet on motion-corrupted data need more justification. In this paper's setting, E2EVarNet perfectly knows the motion information, so it is equipped with the optimal forward model that maps the intermediate results to the k-space. The key idea of E2EVarNet, compared with an end-to-end UNet, is its ability to compare the intermediate reconstruction against the raw undersampled measurements. If E2EVarNet has the optimal forward model, its performance should not be affected by motion.
|
Rigid Motion Compensated Compressed Sensing MRI with Untrained Neural Networks
Anonymous authors
Paper under double-blind review
Abstract
Deep neural networks trained end-to-end for accelerated magnetic resonance imaging give excellent performance. Typically, these networks are trained and evaluated under a setup where the object to be imaged is static. However, in practice, patients often move during data acquisition which leads to motion artifacts in the reconstructed images. In this work, we first demonstrate that in the presence of motion, significantly larger training sets are required for good performance when training state-of-the-art neural networks to reconstruct an image for accelerated MRI. Second, we demonstrate that as an alternative, one can resort to utilizing untrained neural networks for this task. We propose a modified untrained network which does not rely on any training set and performs single-instance rigid motion-compensated compressed sensing MRI. Our approach outperforms untrained and trained optimization-based baselines such as $\ell_1$-norm minimization and score-based generative models.
1 Introduction
Deep learning methods give state-of-the-art performance for many image restoration applications (Dong et al., 2014; Jin et al., 2017; Zhang et al., 2017; Sriram et al., 2020; Rivenson et al., 2018; Jalal et al., 2021; Zhang et al., 2023), including for accelerated MRI reconstruction where the goal is to reconstruct a high-quality MRI scan from a set of undersampled measurements. Most successful deep learning-based accelerated MRI reconstruction models assume a static imaging setup, meaning that a potential patient movement is not anticipated. Consequently, in case the patient moves during data acquisition, motion artifacts arise and the image quality significantly degrades.
One possible approach to deal with motion artifacts is to simply train a network to reconstruct motion-corrupted data. In this work, we first investigate this avenue, and find that motion-compensated accelerated MRI reconstruction is very costly in terms of the amount of data required for training. Thus, switching the task from artifact-free to motion-compensated accelerated MRI reconstruction brings a significant burden in terms of the amount of data to be collected to train state-of-the-art MRI models.
Subsequently, we propose to resort to untrained neural networks as an alternative. These models operate in a single-instance reconstruction mode and do not require a large training set. We propose an untrained network based on the ConvDecoder (Zalbagi Darestani & Heckel, 2021), an untrained network tailored to MRI reconstruction. We specifically modify ConvDecoder’s loss function to handle motion correction in addition to compressed sensing.
To summarize, here are our contributions:
• We demonstrate that state-of-the-art MRI reconstruction models require significantly more data than the currently available large training sets in order to solve motion correction and compressed sensing MRI at the same time.
• We propose an untrained network-based approach to perform motion-compensated accelerated MRI reconstruction.
• We evaluate our approach on 2D data and achieve competitive performance against other baselines such as sparsity-based and score-based models. Furthermore, we demonstrate a proof of principle for 3D MRI data.
1.1 Prior work
Over the past few years, several works have tackled the problem of motion artifact correction in MRI using prospective or retrospective deep learning approaches. In general, one may categorize those works as follows:
**Model-based:** These methods typically solve an optimization problem for each input sample by incorporating knowledge of the physical measurement model (i.e., the forward operator $A$). In order to perform motion correction, optimization is often done with respect to two sets of variables, one parameterizing the image and one for the motion parameters. After convergence, the outputs are estimates of the ground-truth image and motion parameters. Sparsity-based methods fall under this category (Reyes et al., 2007; Yang et al., 2013; Mayer et al., 2022).
**Data-driven:** Several end-to-end deep learning-based models have made efforts to solve the motion correction problem by training a neural network to learn a mapping from the motion-corrupted image domain to the artifact-free image domain (Pawar et al., 2018; Al-Masni et al., 2022). These models typically ignore the forward model and tackle the problem in a data-driven manner. A major limitation of data-driven approaches is that reconstructed images tend to be blurry (this is an observation we made for U-Net (Ronneberger et al., 2015) and E2E-VarNet (Sriram et al., 2020) but is also seen in several other works (Pawar et al., 2018; Armanious et al., 2020)).
**Data-driven and model-based:** These methods tend to combine deep learning with model-based optimization in order to correct motion artifacts. For example, Hossbach et al. (2022) trained a neural network to predict motion parameters from the data, and then used those predictions as an initialization for a sparsity-based method to correct motion artifacts. Score-based generative models are also an example of this category. They rely on a pre-trained generator that is used inside an optimization problem at inference. In this manner, they are claimed to be more robust against variable motion patterns (Levac et al., 2022). Score-based generative models also outperform traditional generative models for medical imaging (Armanious et al., 2020).
2 Problem setup: Motion corrupted compressed sensing
Our goal is to reconstruct an image $x^* \in \mathbb{C}^N$ from undersampled measurements $y = MFTx^* + z \in \mathbb{C}^M$, where the number of measurements, $M$, is typically lower than the dimension of the image, $N$, and $z$ is measurement noise. In the forward map, $M$ is the known undersampling mask, $F$ is the Fourier transform, and $T$ denotes the unknown rigid motion transform discussed in more detail below. The measurement $y$ is usually called the $k$-space in the context of MRI.
In practice, multiple receiver coils are used for signal reception, so there are $n_c$ coils each capturing a $k$-space measurement with at least a slightly different spatial sensitivity profile. Thus, there are $n_c$ many $k$-spaces obtained as
$$y_i = MFTS_ix^* + z_i \in \mathbb{C}^M, \quad i = 1, \ldots, n_c.$$
Here, $n_c$ denotes the number of receiver coils, $S_i$ is the complex-valued spatially-varying coil-dependent sensitivity map of the $i$-th coil, that is applied through element-wise multiplication to the image $x^*$, and $z_i$ is measurement noise.
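To make the forward map concrete, below is a minimal NumPy sketch of the multi-coil acquisition $y_i = MFTS_ix^* + z_i$ (noise omitted). The function and variable names are illustrative and not taken from any released code; the motion transform is passed in as a callable, since its parameterization is discussed below.

```python
# Minimal sketch of the multi-coil forward model y_i = M F T S_i x (noise omitted),
# assuming a 2D complex image, FFT-based Fourier transform, and a boolean mask.
import numpy as np

def apply_forward_model(x, sens_maps, mask, motion_transform):
    """x: (H, W) complex image; sens_maps: (n_c, H, W); mask: (H, W) boolean."""
    ys = []
    for S_i in sens_maps:
        coil_image = S_i * x                  # element-wise sensitivity weighting S_i
        moved = motion_transform(coil_image)  # unknown rigid motion T
        kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(moved)))  # F
        ys.append(mask * kspace)              # undersampling M
    return np.stack(ys)                       # (n_c, H, W) measurements
```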
2.1 Motion artifact synthesis
We now specify the assumptions we make on the unknown motion transform $T$. Assuming a model for the motion transform is important for our study, since patient movements are naturally unknown, and thus one needs to make certain assumptions about these motion patterns in practice.
There are in general two types of motion occurring during an MRI scan: rigid motion and nonrigid motion. Rigid motion results in linear transformations in the image and is typically caused by translations or rotations in 3D (e.g., head movements). Nonrigid motion results in anatomical deformations in the scanned image and is typically caused by non-shape-preserving object transformations (e.g., respiratory motion).
Figure 1: An example of interleaved trajectory with equispaced undersampling. In this example, there are 3 repetition times (TRs) corresponding to 3 batches with 3 acquired lines per batch. This means that for instance $k$-space lines corresponding to the 3 blue lines in the trajectory are recorded during the first repetition time.
In this work, we primarily consider rigid motion caused by 2D translations. However, to demonstrate that our approach is easily applicable to more complicated motion models (i.e., also including rotations), we provide experimental results for 3D motion as well.
For 2D motion synthesis, we consider an interleaved trajectory with a 1D equispaced undersampling pattern (with a fully-sampled center region); see Figure 1 for an example. We synthesize translation artifacts by a simple linear phase shift in the $k$-space. Specifically, the $k$-space value at coordinates $(x, y)$ is transformed as follows under $(t_x, t_y)$ translations along the x- and y-axes:
$$\tilde{k}_{xy} = k_{xy} \cdot e^{2\pi j(t_x x + t_y y)}.$$
Note that all $k$-space lines acquired during a given repetition time (TR) are, to a first approximation, assumed to be acquired instantaneously, and thus these lines are affected by the same transformation. Therefore, the motion transform is formed by $t$ pairs of x- and y-axis translation coefficients, where $t$ is the number of TRs. From this point onward, we denote a motion transform as $T_\phi$, where $\phi \in \mathbb{R}^{2t}$ contains all translation parameters. For experiments with 3D data, $\phi \in \mathbb{R}^{6t}$ models 6 degrees of freedom, namely the $(t_x, t_y, t_z)$ translations and $(\alpha, \beta, \gamma)$ rotations.
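As a concrete illustration of this per-TR phase-shift model, the following sketch corrupts a 2D $k$-space line by line. It assumes normalized frequency coordinates and an assignment of phase-encode lines to TRs; all names are hypothetical.

```python
# Minimal sketch of per-TR translational motion synthesis (Section 2.1), assuming
# normalized k-space coordinates and a per-line TR assignment. Names illustrative.
import numpy as np

def corrupt_kspace(kspace, line_to_tr, phi):
    """kspace: (H, W) complex; line_to_tr: (W,) TR index per line; phi: (t, 2) shifts."""
    H, W = kspace.shape
    ky = np.fft.fftshift(np.fft.fftfreq(H))[:, None]   # frequency coords along y
    kx = np.fft.fftshift(np.fft.fftfreq(W))[None, :]   # frequency coords along x
    corrupted = kspace.copy()
    for tr, (t_x, t_y) in enumerate(phi):
        cols = line_to_tr == tr                        # lines acquired during this TR
        phase = np.exp(2j * np.pi * (t_x * kx + t_y * ky))
        corrupted[:, cols] *= phase[:, cols]           # same shift for the whole TR
    return corrupted
```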
3 END-TO-END NETWORKS ARE COSTLY FOR MOTION-COMPENSATED COMPRESSED SENSING MRI
Neural networks trained end-to-end give state-of-the-art accuracy for accelerated MRI reconstruction for a static setup, i.e., for a setup where the patient does not move. Thus, a natural starting point to develop a neural network for motion-compensated accelerated MRI is to train a neural network end-to-end for reconstruction from motion-corrupted data. In this section, we demonstrate that training a neural network end-to-end for motion-compensation is very expensive in the number of training examples required.
We consider the popular class of unrolled networks, the best-performing networks for accelerated MRI reconstruction (Sriram et al., 2020; Fabian & Soltanolkotabi, 2022). The idea behind these models is to unroll an optimization problem and learn several iterates of it in an end-to-end manner. Here, we study the end-to-end variational network architecture (Sriram et al., 2020) (E2E-VarNet).
For motion-corrupted accelerated MRI reconstruction, we modify each cascade of the E2E-VarNet’s from
$$k^{i+1} = k^i - \eta(Mk^i - y) + G(k^i)$$
to
$$k^{i+1} = k^i - \eta(MT_\phi k^i - y) + G(k^i), \quad (1)$$
in order to account for the change in forward map. Note that only the data consistency block is modified by incorporating the motion transform $T_\phi$. Here, $G : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a trainable neural network (i.e., the learned regularizer) which performs refinement by mapping the current estimate of the $k$-space to a refined $k$-space estimate for the next step. In this setup, the parameters of network $G$ and the parameters of a network that learns motion parameters $\phi$ are trained.
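For concreteness, a single modified cascade of Eq. (1) could look as follows in PyTorch-style pseudocode. The callables `mask_op`, `motion_op`, and `refine_net` stand in for $M$, $T_\phi$, and $G$; this is a sketch of our reading of the cascade, not the authors' implementation.

```python
def cascade_step(k, y, eta, mask_op, motion_op, refine_net):
    """One cascade of the modified E2E-VarNet (Eq. 1).

    k, y: current k-space estimate and measured k-space (e.g., complex tensors);
    mask_op, motion_op: callables implementing M and T_phi;
    refine_net: the learned regularizer G.
    """
    # data consistency with the motion-aware forward map: eta * (M T_phi k - y)
    data_consistency = eta * (mask_op(motion_op(k)) - y)
    return k - data_consistency + refine_net(k)
```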
To evaluate the potential performance of this modified E2E-VarNet, we conduct the following experiment. We assume that motion parameters (i.e., $\phi^*$) are perfectly known during training and inference. This is an idealized situation since in practice the motion parameters are unknown and have to be estimated. However, studying this idealized situation clarifies whether this natural extension of a state-of-the-art approach is capable of accurate image recovery for joint motion correction and compressed sensing.
**Experiments.** We use the 2D-recorded multi-coil brain T2 portion of the fastMRI dataset (Zbontar et al., 2020). We created a validation/test split of 160/300 slices. For the training dataset, depending on the setup, we use a total of 850/3400/7587/21296/63888 training samples.
To vary the training set size, we compare two cases: one where we add additional slices from the fastMRI dataset, and one where we keep the number of slices fixed but augment the dataset with more motion patterns. For motion synthesis, we sample the $x$ and $y$ translation parameters from a uniform distribution, $t_x, t_y \sim \text{Unif}(5, 10)$, according to the model from Section 2.1. Finally, for undersampling, we work with a 1D equispaced variable density mask (with 4x acceleration) which is the same for all training and inference samples.
Figure 2 shows the result. Augmenting the training set with more slices (and not with more motion patterns) improves reconstruction accuracy according to a power law. The improvement as a function of the number of training examples does not saturate in the span of training set sizes that we consider. In contrast, without motion corruption (i.e., in the artifact-free regime) we are already in a regime of the power law where only minimal performance improvements occur. The artifact-free power law is consistent with that established for clean (without motion corruption) accelerated MRI reconstruction (Klug & Heckel, 2023). This demonstrates that in order to train a network for motion-corrupted reconstruction, we need a significantly larger dataset for good performance, even in an idealized setup where we know the motion corruption pattern.
Finally, note that according to Figure 2, a network trained on $\approx 60,000$ images achieves 0.92 SSIM for motion-compensated accelerated MRI reconstruction. However, in the artifact-free regime (i.e., when no motion appears during training/inference), the same performance is obtainable by training the same network on only 1000 images. This demonstrates that motion-compensated accelerated MRI reconstruction via E2E-VarNet is much more costly than solving artifact-free accelerated MRI reconstruction.

Figure 2: Test accuracy as a function of training set size, compared across three setups: increasing the training set size by adding more slices; increasing the training set size by adding more motion patterns to a fixed set of slices; and increasing the number of slices in the artifact-free regime (i.e., reconstruction from clean undersampled data). Comparing the curves, the test accuracy scales differently with the number of training slices, which demonstrates the excessive cost of motion-compensated compressed sensing MRI.
With respect to reconstruction quality, Figure 3 shows reconstructions for the experiment above. Note that the reconstruction becomes blurry whenever the input sample is corrupted with motion artifacts, and that this blurriness is gradually alleviated with more training examples.
Figure 3: Quality of modified E2E-VarNet reconstruction from motion-degraded undersampled measurements improves significantly with more training data points. **clean E2E-VarNet** is a network that is trained on 850 clean 4x undersampled slices and is applied to a clean test sample (this is the best reconstruction E2E-VarNet can achieve for this test sample). **vanilla E2E-VarNet** is a network that is trained on 850 motion-degraded 4x undersampled slices and is applied to a motion-degraded test sample. **modified E2E-VarNet** is a network with a modified DC block for motion correction and is trained on motion-degraded 4x undersampled data, then applied to a motion-degraded test sample. Our modified E2E-VarNet is trained on 850, 3400, 7587, 21296, and 63888 motion-degraded training slices.
4 UNTRAINED NETWORKS FOR MOTION-COMPENSATED COMPRESSED SENSING
We propose an approach for motion-compensated accelerated MRI based on untrained neural networks. Without any training, convolutional neural networks (CNNs) can regularize inverse problems, as first demonstrated by Ulyanov et al. (2018). Untrained networks perform well for general compressive sensing tasks (Veen et al., 2018; Heckel & Hand, 2019), and in particular for accelerated MRI reconstruction (Arora et al., 2020; Zalbagi Darestani & Heckel, 2021; Slavkova et al., 2022). Untrained networks outperform traditional untrained methods (such as $\ell_1$-regularized least squares) but perform worse than state-of-the-art trained MRI reconstruction models such as unrolled neural networks (e.g., the VarNet for static accelerated MRI).
In a nutshell, an untrained network reconstructs an image by fitting a randomly initialized neural network to a measurement. The network is not pretrained on any training data, and the structure of the network alone acts as a prior for the images. Note that for a given task, a few images from the target domain are required only to tune the hyper-parameters of the network.
Although untrained CNNs are successful tools for various image restoration tasks (Ulyanov et al., 2018; Veen et al., 2018; Heckel & Hand, 2019; Jin et al., 2021; Arora et al., 2020; Zalbagi Darestani & Heckel, 2021; Jagatap & Hegde, 2019; Heckel, 2019), they have not yet been explored for image reconstruction from motion-corrupted undersampled data. Here, we propose a variant of the ConvDecoder (Zalbagi Darestani & Heckel, 2021) whose loss function is adjusted to handle motion correction in addition to compressed sensing.
4.1 METHOD
Let \( G : \mathbb{R}^p \rightarrow \mathbb{R}^n \) be a neural network parameterized by \( \theta \in \mathbb{R}^p \), specifically we use the convolutional decoder architecture from (Zalbagi Darestani & Heckel, 2021). Given a measurement \( y \) we minimize the loss
\[
L(\theta, \phi) = \| M F T_\phi S \, G(\theta) - y \|_2^2
\]
with gradient descent, starting from a random initialization of the network’s parameters and a zero initialization of the motion parameters. Note that we are optimizing jointly over the network’s parameters, and thus over different images, as well as over the motion parameters, and thus over different forward maps.
This optimization yields the estimate \( \hat{\theta} \) of the network’s parameters, and with this estimate we reconstruct the ground truth image as \( \hat{x} = G(\hat{\theta}) \).
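The joint optimization over $\theta$ and $\phi$ can be sketched as follows, assuming a ConvDecoder-like generator `net` (fed a fixed random input, so it is called without arguments) and a differentiable forward map implementing $MFT_\phi S$; the iteration count and learning rate are illustrative, not the tuned values.

```python
# Minimal sketch of the joint fit over network weights theta and motion params phi.
import torch

def reconstruct(net, forward_map, y, n_tr, n_iters=2500, lr=1e-2):
    phi = torch.zeros(n_tr, 2, requires_grad=True)   # zero-initialized motion params
    opt = torch.optim.Adam(list(net.parameters()) + [phi], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = net()                                # image from a fixed random input
        loss = (forward_map(x_hat, phi) - y).abs().pow(2).sum()
        loss.backward()
        opt.step()
    return net().detach(), phi.detach()              # reconstruction and motion estimate
```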
The network \( G \) we use throughout is based on (Zalbagi Darestani & Heckel, 2021), tuned on 10 randomly-selected samples from the training set of the fastMRI brain dataset (Zbontar et al., 2020). Specifically, the network is a convolutional network with 8 layers and 64 channels per layer. Each layer comprises upsampling, convolution, ReLU activation, and batch normalization (Ioffe & Szegedy, 2015) blocks. Finally, we use ESPIRiT (Uecker et al., 2014) to estimate the coil sensitivity maps \( S \) from the motion-degraded undersampled measurement.
Note that because the sensitivity maps are obtained from the corrupted undersampled data, they are prone to an error caused by patient movements. We therefore assume mild patient movements (which is often the case in practice), and thus the error in the coil sensitivity estimates becomes negligible.
4.2 EXPERIMENTS
We evaluate our approach for 2D and 3D motion correction tasks in the following two subsections, respectively.
4.2.1 2D MOTION-COMPENSATED COMPRESSED SENSING MRI
Here, we conduct evaluations on 336 middle slices of AXT2 volumes from the validation portion of the fastMRI multicoil brain dataset (Zbontar et al., 2020). Each \( k \)-space in the dataset we consider has the shape (\#coils, 640, 320) with an undersampling ratio of 4; thus 80 out of 320 lines in the \( k \)-space are recorded. We compare our method with the score-based generative model proposed by (Levac et al., 2022) and \( \ell_1 \)-norm wavelet regularized least-squares.
For motion artifact synthesis, we follow our approach detailed in Section 2.1. Specifically, we first corrupt the \( k \)-space with motion transform \( T_{\phi^*} \) to obtain a measurement \( y \) of size (\#coils, 640, 320), and then undersample the measurement with a factor of 4 using a 1D equispaced variable density mask. Note that three quarters of the 320 vertical lines in \( y \) are now equal to zero due to undersampling.
As for the motion pattern and trajectory of sampling, we consider three settings:
1. 10 TRs and random \( x \) and \( y \) translations \( t_x, t_y \sim \text{Unif}(-2, 2) \) which results in the ground-truth motion parameter \( \phi^* \in \mathbb{R}^{10 \times 2} \). This means every 8 lines in the \( k \)-space are affected by the same motion state.
2. 24 TRs and random \( x \) and \( y \) translations \( t_x, t_y \sim \text{Unif}(-2, 2) \) which results in the ground-truth motion parameter \( \phi^* \in \mathbb{R}^{24 \times 2} \).
3. 10 TRs and \( x \) and \( y \) translations \( t_x \) and \( t_y \) which results in the ground-truth motion parameter \( \phi^* \in \mathbb{R}^{10 \times 2} \). \( t_x \) and \( t_y \) are generated using sine and cosine functions to create a more realistic motion pattern in the sense that two consecutive motion states are very close to each other.
Table 1 shows the results averaged over 336 slices. The ranking of the methods is Ours > score-based model > \( \ell_1 \)-minimization, and this ranking is observed for all considered motion patterns. Figure 4 illustrates reconstruction examples along with motion parameter plots for each method.\(^1\) Looking at those
\(^1\)Results of the score-based model are obtained by reproducing the code provided by the authors (Levac et al., 2022).
Figure 4: From the SSIM values and the reconstructions themselves, we can see that our method outperforms the $\ell_1$-minimization and score-based reconstruction methods. From the plots below, which show the reconstructed motion parameters $t_x, t_y$ for each motion state, we can see that ConvDecoder performs best, as it recovers the motion parameters most accurately. Here, motion parameters are sampled from $\text{Unif}(-2, 2)$ for each method and the acceleration factor is 4.
| pattern | #states | ConvDecoder (ours) | $\ell_1$-min. | score-based |
|-----------------|---------|--------------------|---------------|-------------|
| random | 10 | **0.8864** | 0.7406 | 0.7967 |
| random | 24 | **0.8831** | 0.7366 | 0.7643 |
| pseudo-realistic | 10 | **0.8824** | 0.7326 | 0.7612 |
Table 1: Our untrained network outperforms the $\ell_1$-minimization and score-based reconstruction algorithms for three motion pattern settings. SSIM scores are averaged over 336 AXT2 slices.
examples, we find the same ranking of algorithms as when ranking by SSIM in Table 1. Please see the supplement for further examples.
In terms of computational efficiency, our method takes approximately 6 minutes per slice (similar to $\ell_1$-minimization), whereas the score-based model takes approximately 30 minutes per slice. Runtimes were recorded on a single RTX A6000 GPU.
### 4.2.2 3D motion-compensated compressed sensing MRI with untrained networks
A popular MRI protocol in practice that offers higher resolution is 3D volumetric MRI. As opposed to a 2D slice-by-slice measurement such as the fastMRI dataset (which we explored in the previous section), in volumetric MRI, there are two phase encoding dimensions.
Patient movements in 3D cause serious motion artifacts in volumetric MRI. In this section, we explain how our method can be applied to such 3D data and present an example reconstruction result. Our untrained network operates in a 2D space by default for the fastMRI dataset. To extend it to 3D, we simply replace every 2D operator by its 3D variant (e.g., replacing 2D convolutions by 3D convolutions). In this manner, the network generates a volume instead of a slice. An immediate consequence of this modification is a higher memory consumption and a longer inference time. Please see Table 2 for details.
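A minimal sketch of this 2D-to-3D operator swap is shown below, with hypothetical layer choices; the actual architecture follows the ConvDecoder described in Section 4.1.

```python
# Sketch of one decoder block where each 2D operator has a 3D counterpart.
import torch.nn as nn

def decoder_block(in_ch, out_ch, three_d=False):
    Conv = nn.Conv3d if three_d else nn.Conv2d
    Norm = nn.BatchNorm3d if three_d else nn.BatchNorm2d
    mode = "trilinear" if three_d else "bilinear"
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode=mode, align_corners=False),
        Conv(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        Norm(out_ch),
    )
```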
To evaluate our method on a real-world clinically-recorded sample, we consider a 3D brain volume of size (#coil, H, W, D) = (31, 176, 176, 50). The volume is derived by downsampling a 3D Cartesian FLAIR scan recorded at a field strength of 3T with an original matrix size of (31, 704, 352, 281).
| data type | data size (#coils, H, W, D) | memory (GB) | runtime (mins) |
|-----------|-----------------------------|-------------|---------------|
| 2D | (4, 640, 320, 1) | 2.1 | 6.3 |
| 3D | (31, 176, 176, 50) | 14.9 | 175.6 |
Table 2: Computational cost comparison between running our untrained network on a 2D or 3D sample. GPU memory and runtime numbers are reported for an RTX A6000 GPU.
Figure 5: The 3D sampling trajectory type we consider in our 3D motion-compensated accelerated MRI reconstruction. Each readout along the frequency encoding direction is recorded via one excitation.
The 3D sampling trajectory with which the volume was recorded is shown in Figure 5. For motion artifacts, we considered 5 degrees of freedom: 3 rotations and 2 translations (we omitted the z-axis translation (feet-to-head direction), as the patient’s primary movement along this axis is expected to be nodding, which is already modelled by rotation).
Figure 6: **3D untrained motion-compensated compressed sensing MRI**. Our qualitative analysis shows that, for the depicted slices, an untrained network reconstructs good-quality images.
To reconstruct the unknown ground truth volume, we fitted the network to the $2.4 \times$ accelerated motion-corrupted volume. Figure 6 shows a few slices of the reconstructed 3D volume. We observe some blurriness in all reconstructed slices. Further, slices 13 and 26 are of better quality in terms of residual motion artifacts, whereas slice 28 contains some residual artifacts.
Finally, Figure 7 shows accurate recovery of the motion parameters. Note the offset between the ground-truth and predicted translation parameters, which is due to an inherent ambiguity of the reconstruction problem (i.e., a perfect reconstruction that is merely a translated version of the ground-truth image is still a valid solution).
5 DISCUSSION AND CONCLUSION
Deep learning achieves excellent performance in controlled scenarios for solving accelerated MRI reconstruction. However, in more realistic settings (such as accelerated MRI reconstruction from motion-degraded data), the performance and robustness of deep learning models is unclear.
In this work, we first demonstrated that state-of-the-art MRI reconstruction models become very expensive to use for motion-degraded MRI compressed sensing. This cost is reflected in the excessive amount of training data they require to achieve a similar performance compared to when they are employed for clean (artifact-free) MRI reconstruction.
We further proposed an approach based on untrained neural networks to solve the challenging task of motion-degraded compressed sensing MRI without any need for training data. Our method outperforms existing trained and untrained baselines with respect to quantitative metrics as well as the visual quality of the reconstructions.
Our work motivates further research on untrained-network-based motion-compensated compressed sensing MRI in multiple directions: first, studying real-world (rather than simulated) motion-degraded samples recorded with motion sensors attached to the patient; second, investigating the performance of trained and untrained networks under other important types of artifacts (e.g., respiratory artifacts); and finally, exploring the role of the undersampling trajectory in motion-degraded compressed sensing MRI and its effect on the performance of reconstruction models.
REFERENCES
M. A. Al-Masni, S. Lee, J. Yi, S. Kim, S. Gho, Y. H. Choi, and D. H. Kim. Stacked U-Nets with self-assisted priors towards robust correction of rigid motion artifact in brain MRI. In NeuroImage, volume 259, 2022.
K. Armanious, C. Jiang, M. Fischer, T. Küstner, T. Hepp, K. Nikolaou, S. Gatidis, and B. Yang. MedGAN: Medical image translation using GANs. In Computerized Medical Imaging and Graphics, 2020.
S. Arora, V. Roeloffs, and M. Lustig. Untrained modified deep decoder for joint denoising parallel imaging reconstruction. In International Society for Magnetic Resonance in Medicine Annual Meeting, 2020.
C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In European Conference on Computer Vision (ECCV), pp. 184–199, 2014.
Z. Fabian and M. Soltanolkotabi. HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
R. Heckel. Regularizing linear inverse problems with convolutional neural networks. In NeurIPS Medical Imaging Workshop, 2019.
R. Heckel and P. Hand. Deep decoder: Concise image representations from untrained non-convolutional networks. In International Conference on Learning Representations (ICLR), 2019.
J. Hossbach, D. Splitthoff, S. Cauley, B. Clifford, D. Polak, W. Lo, H. Meyer, and A. Maier. Deep learning-based motion quantification from k-space for fast model-based MRI motion correction. In Medical Physics, 2022.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448–456, 2015.
G. Jagatap and C. Hegde. Algorithmic guarantees for inverse imaging with untrained network priors. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
A. Jalal, M. Arvinte, G. Daras, E. Price, A. G. Dimakis, and J. Tamir. Robust compressed sensing MRI with deep generative priors. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
K. H. Jin, M. T. McCann, E. Froustey, and M. Unser. Deep convolutional neural network for inverse problems in imaging. In IEEE Transactions on Image Processing, pp. 4509–4522, 2017.
K. H. Jin, H. Gupta, J. Yerly, M. Stuber, and M. Unser. Time-dependent deep image prior for dynamic MRI. In IEEE Transactions on Medical Imaging, 2021.
T. Klug and R. Heckel. Scaling laws for deep learning based image reconstruction. 2023.
B. Levac, A. Jalal, and J. I. Tamir. Accelerated motion correction for MRI using score-based generative models. arXiv preprint arXiv:2211.00199, 2022.
J. Mayer, E. Blaszczyk, A. Cipriani, G. Ferrazzi, J. Schulz-Menger, T. Schaeffter, and C. Kolbitsch. Cardio-respiratory motion-corrected 3D cardiac water-fat MRI using model-based image reconstruction. volume 88, pp. 1561–1574, 2022.
K. Pawar, Z. Chen, N. J. Shah, and G. F. Egan. MoCoNet: Motion correction in 3D MPRAGE images using a convolutional neural network approach. arXiv preprint arXiv:1807.10831, 2018.
M. Reyes, G. Malandain, P. M. Koulibaly, M. A. González-Ballester, and J. Darcourt. Model-based respiratory motion compensation for emission tomography image reconstruction. volume 52, pp. 3579, 2007.
Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan. Phase recovery and holographic image reconstruction using deep learning in neural networks. In Light: Science & Applications, volume 7, pp. 17141–17150, 2018.
O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234–241, 2015.
K. P. Slavkova, J. C. DiCarlo, V. Wadhwa, S. Kumar, C. Wu, J. Virostko, T. E. Yankeelov, and J. Tamir. An untrained deep learning method for reconstructing dynamic MR images from accelerated model-based data. In Magnetic Resonance in Medicine, 2022.
A. Sriram, J. Zbontar, T. Murrell, A. Defazio, C. L. Zitnick, N. Yakubova, F. Knoll, and P. Johnson. End-to-end variational networks for accelerated MRI reconstruction. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 64–73, 2020.
M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M. Pauly, S. S. Vasanawala, and M. Lustig. ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA. In Magnetic Resonance in Medicine, pp. 990–1001, 2014.
|
RIaIpdUCPb
|
How well the model generalizes likely depends on the structure of the data. I feel that there is a lack of discussion on the underlying assumptions of the properties of the data distribution that would make the proposed geometrical configurations ideal for generalization.
|
Withdrawal Statement
We apologize for the writing and presentation issues in our submitted paper, which resulted in a lack of understanding from the reviewers.
In light of this, we have decided to withdraw the manuscript and make substantial revisions before resubmitting it. We appreciate your understanding and patience in this matter. Thank you.
|
pHaX00wxFy
|
I am not sure how relevant experiment 4.1 is on MaxJDiv, especially given the conclusion that maximizing the joint is better than maximizing the conditional but the proposed approach is to maximize the conditional.
|
REWARD-FREE EXPLORATION BY CONDITIONAL DIVERGENCE MAXIMIZATION
Anonymous authors
Paper under double-blind review
ABSTRACT
We propose maximum conditional divergence (MaxCondDiv), a new curiosity-driven exploration strategy that encourages the agent to learn in the absence of extrinsic rewards, effectively separating exploration from exploitation. Our central idea is to define curiosity as the divergence between the agent’s estimates of the transition probability of the next state given current state-action pairs (i.e., \( P(s_{t+1} | s_t, a_t) \)) in two adjacent trajectory fractions. Distinct from other recent intrinsically motivated exploration approaches that usually incur complex models in their learning procedures, our exploration is model-free and explicitly estimates this divergence from multivariate continuous observations, thanks to the favorable properties of the Cauchy-Schwarz divergence. Therefore, MaxCondDiv is less computationally expensive and reduces internal model selection bias. We establish a connection between MaxCondDiv and the famed maximum entropy (MaxEnt) exploration, and observe that MaxCondDiv achieves a wider exploration range and faster convergence. Our exploration also encourages the agent to acquire intricate skills in a fully reward-free environment.
1 INTRODUCTION
Over the past few years, Reinforcement Learning (RL) has achieved remarkable success in addressing challenges in fields like robotics (Mnih et al., 2015) and games (Silver et al., 2016). Nonetheless, RL’s practical applications in real-world scenarios are still restricted due to the high variability and lack of user control over the availability of dense rewards, which are critical to the timeliness and success of RL. To counteract this shortcoming, intrinsically motivated exploration (Amin et al., 2021) has been put forth to encourage the agent to explore unknown states in the absence of extrinsic rewards by offering an internal motivation, such as diversity (Eysenbach et al., 2019), novelty (Ostrovski et al., 2017; Tao et al., 2020), or curiosity (Pathak et al., 2017; Burda et al., 2019).
Existing intrinsically motivated exploration approaches can be roughly divided into two categories with distinct goals (Amin et al., 2021): the space coverage approaches encourage an agent to visit more unexplored states or state-action pairs in a shorter amount of time; whereas the curiosity-driven approaches seek to explore areas where the agent’s prediction on next state given current state-action pairs (i.e., \( P(s_{t+1} | s_t, a_t) \)) has high uncertainty.
The maximum entropy (MaxEnt) principle has emerged as a prominent technique in the first category. One way to achieve MaxEnt is by minimizing the KL divergence between a uniform distribution and a target distribution, since a uniform distribution guarantees full coverage of the space and also has maximum entropy. Recently, (Hazan et al., 2019) introduced the concept of maximum state entropy exploration (MSEE) in a broader spectrum of RL environments. Subsequently, multiple approaches have been proposed to enhance it, such as (Zhang et al., 2021; Seo et al., 2021; Yuan et al., 2022; Nedergaard & Cook, 2022; Yarats et al., 2021; Tiapkin et al., 2023), just to name a few. However, the utilization of multiple policies in these MaxEnt-based methods and the objective of a uniform distribution over the state space may cause the agent to spend a considerable amount of time near the starting states, leading to longer training time.
Conversely, curiosity-driven methods encourage exploration of unpredictable parts of the environment and prioritize the discovery of novel states beyond the initial ones. Usually, the dynamics of the environment are characterized by the transition probability of the next state given current state-action pairs, i.e., \( P(s_{t+1} | s_t, a_t) \). Hence, most approaches in this category use an auxiliary predictive model...
\( P_{\theta}(s_{t+1}|s_t, a_t) \) with parameters \( \theta \), such as linear regression (Schmidhuber, 1991), convolutional neural networks (Pathak et al., 2017), and fully-connected neural networks (Stadie et al., 2015; Yu et al., 2020; Pathak et al., 2017), to model the transition probability. Once the model is trained, intrinsic rewards can be defined with either the prediction error of the next state \( s_{t+1} \) or the information gain (Lopes & Mengue, 2022), which can be approximated by the difference between the estimate of the transition probability before and after new triplet samples \( \{s_{t+1}, s_t, a_t\} \) are included.
The above-mentioned curiosity-driven techniques are model-based in the sense that they never explicitly estimate the true divergence of the transition probability \( P(s_{t+1}|s_t, a_t) \) from the observations \( \{s_{t+1}, s_t, a_t\}_{t=1}^{\infty} \) in the trajectory. Rather, they model it implicitly with an internal parametric auxiliary model \( P_{\theta}(s_{t+1}|s_t, a_t) \) for ease of estimation. Hence, the exploration depends heavily on the predictive performance of the auxiliary models, and it is also hard for practitioners to decide which model to choose. If the model is well trained such that it learns precisely the conditional distribution \( P(s_{t+1}|s_t, a_t) \), the RL agent may encounter vanishing intrinsic rewards; if, on the contrary, the model is poorly trained, intrinsic rewards explode. Besides, the inclusion of auxiliary models introduces additional hyperparameters and parameters, making it challenging to maintain a balance between the model and the RL agent.
In this paper, we develop Maximum Conditional Divergence (MaxCondDiv), a new curiosity-driven exploration approach for exploration RL that does not rely on external rewards or parameterized models for prediction. To this end, akin to the MaxEnt principle, it leverages an information-theoretic measure to explicitly model and estimate the divergence of \( P(s_{t+1}|s_t, a_t) \) in two adjacent trajectory fractions (i.e., \( \max D(P_c(s_{t+1}|s_t, a_t); P_f(s_{t+1}|s_t, a_t)) \), where \( c \) stands for “current” and \( f \) stands for “former”) based only on observation triplets \( \{s_{t+1}, s_t, a_t\} \), in a model-free manner. Despite the simplicity of this idea, a precise estimation of the divergence between two conditional distributions is a non-trivial task, especially when the estimator is required to operate in a multivariate continuous space. To address this issue, we estimate \( D(P_c(s_{t+1}|s_t, a_t); P_f(s_{t+1}|s_t, a_t)) \) by introducing the Cauchy-Schwarz (CS) divergence (Principe et al., 2000; Yu et al., 2023), which significantly reduces the difficulty of estimation and enjoys several properties more desirable than those of the conventional Kullback–Leibler (KL) divergence and maximum mean discrepancy (MMD) (Gretton et al., 2012).
To summarize, we make the following key contributions:
• By explicitly modeling the conditional distribution \( P(s_{t+1}|s_t, a_t) \) without training an auxiliary model, we propose MaxCondDiv as a new reward-free exploration strategy applicable to multivariate observations, which encourages the agent to explore divergent transition probabilities and leads to high information gain.
• Distinct from \( f \)-divergences such as the KL divergence and integral probability metrics such as MMD, the use of the CS divergence simplifies the estimation and avoids unstable training.
• We also establish the connection between MaxCondDiv and MaxEnt.
• Using MaxCondDiv, our agent can acquire intricate skills such as jumping and flipping in a fully reward-free environment. Using the visited states as a metric, our method outperforms other state-of-the-art reward-free exploration methods in three Mujoco environments.
2 BACKGROUND KNOWLEDGE AND RELATED WORKS
2.1 Rényi’s \( \alpha \)-Entropy and Cauchy-Schwarz Divergence
In information theory, a natural extension of the well-known Shannon’s entropy is Rényi’s \( \alpha \)-entropy (Rényi [1961]). For a random variable \( x \) with probability density function (PDF) \( p(x) \) in a finite set \( X \), the \( \alpha \)-entropy \( H_\alpha(x) \) is defined as:
\[
H_\alpha(x) = \frac{1}{1 - \alpha} \log \int_X p^\alpha(x) dx. \quad (1)
\]
Similarly, for two random variables \( x \) and \( y \) with joint PDF \( p(x, y) \), the joint entropy is given by:
\[
H_\alpha(x, y) = \frac{1}{1 - \alpha} \log \int_Y \int_X p^\alpha(x, y) dxdy. \quad (2)
\]
Thus, the $\alpha$-order mutual information\footnote{There is no generally accepted definition on $\alpha$-order mutual information (Verdú, 2015). We took the one (Cachin, 1997) that is inspired by the strong chain rule of Shannon entropy, i.e., $H(x, y) = H(x) + H(y|x)$.} can be expressed as (Cachin, 1997; Teixeira et al., 2012):
$$I_\alpha(x, y) = H_\alpha(x) + H_\alpha(y) - H_\alpha(x, y). \quad (3)$$
Likewise, extensions for the relative entropy also exist; a modified version of Rényi’s $\alpha$-relative entropy (or divergence) between PDFs $p$ and $q$ is given by (Lutwak et al., 2005):
$$D_\alpha(p; q) = \log \left( \frac{\int q^{\alpha-1} p}{\int p^\alpha} \right)^{\frac{1}{1-\alpha}}. \quad (4)$$
The limiting cases of (1) and (4) for $\alpha \to 1$ are Shannon’s entropy and the KL divergence, respectively.
It turns out that for the case of $\alpha = 2$, the above quantities can be expressed as functions of inner products between PDFs, which makes them easy to estimate in reproducing kernel Hilbert spaces (RKHS) (Principe, 2010). In particular, the quadratic entropy and divergence are given by:
$$H_2(x) = -\log \int_X p^2(x) dx, \quad \text{and} \quad D_{CS}(p; q) = -\frac{1}{2} \log \left( \frac{\left( \int pq \right)^2}{\int p^2 \int q^2} \right). \quad (5)$$
Eq. (5) is also called the Cauchy-Schwarz (CS) divergence as it can be obtained by applying the CS inequality associated with $p(x)$ and $q(x)$:
$$\left| \int p(x)q(x) dx \right|^2 \leq \int |p(x)|^2 dx \int |q(x)|^2 dx. \quad (6)$$
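Although not spelled out above, the non-negativity of $D_{CS}$ follows in one line from the CS inequality (6):

$$\left( \int p\,q \right)^2 \leq \int p^2 \int q^2 \;\Longrightarrow\; \frac{\left( \int p\,q \right)^2}{\int p^2 \int q^2} \leq 1 \;\Longrightarrow\; D_{CS}(p; q) = -\frac{1}{2} \log \frac{\left( \int p\,q \right)^2}{\int p^2 \int q^2} \geq 0,$$

with equality if and only if $p = q$ almost everywhere.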
The CS inequality also holds for two conditional distributions $p(y|x)$ and $q(y|x)$ (Yu et al., 2023), the resulting conditional CS divergence can be expressed naturally as:
$$D_{CS}(p(y|x); q(y|x)) = -2 \log \left( \int_X \int_Y p(y|x)\, q(y|x)\, dx\, dy \right) + \log \left( \int_X \int_Y p^2(y|x)\, dx\, dy \right) + \log \left( \int_X \int_Y q^2(y|x)\, dx\, dy \right)$$
$$= -2 \log \left( \int_X \int_Y \frac{p(x,y)\, q(x,y)}{p(x)\, q(x)}\, dx\, dy \right) + \log \left( \int_X \int_Y \frac{p^2(x,y)}{p^2(x)}\, dx\, dy \right) + \log \left( \int_X \int_Y \frac{q^2(x,y)}{q^2(x)}\, dx\, dy \right). \quad (7)$$
### 2.2 INTRINSICALLY-MOTIVATED EXPLORATION RL
The existing intrinsically motivated exploration approaches can be broadly categorized into two types: the space coverage and the curiosity-driven approaches. Approaches rooted in the MaxEnt principle for achieving space coverage have gained popularity recently due to their strong mathematical interpretability and performance. For example, maximum state entropy exploration (MSEE) by (Hazan et al., 2019) guarantees uniform coverage of the state space. It offers a proof of policy improvement when utilizing the APPROXPLAN/DENSITYEST oracle. Later, MaxRényi (Zhang et al., 2021) replaced the Shannon entropy with Rényi entropy, and maximizes the entropy in the joint space of action and state. RE3 (Seo et al., 2021) incorporates neural encoders, enabling their application in video-oriented environments like Atari. RISE (Yuan et al., 2022) integrates both RE3 and MaxRényi, leveraging them to accelerate the learning process. In (Nedergaard & Cook, 2022) and (Yarats et al., 2021), $k$-means and prototypical representations are introduced to enhance the quality of latent vectors. (Tiapkin et al., 2023) studies MSEE to learn a policy leading to $\epsilon$-optimal maximum and reduces the sample complexity.
The other type, curiosity-driven approaches, have their roots traced back to the 1970s, when the concepts of “observer’s information” and “interestingness” were introduced (Pfaffelhuber, 1972; Lenat, 1976). Recent popular prediction-error-based approaches, largely driven by advancements in deep neural networks (DNNs), fall under this category. For instance, ICM (Pathak et al., 2017) utilizes a CNN as the auxiliary model to predict the next image, whereas GIRL (Yu et al., 2020) implements a variational
autoencoder (VAE) (Kingma & Welling, 2014) to model the transitions in environments. Similarly, (Shyam et al., 2019) aims to maximize the Jensen-Shannon divergence of fully-connected neural network outputs. In contrast to these methods, we pursue a model-free approach.
Our method shares the closest resemblance with the model-free curiosity-driven approach introduced by (Storck et al., 1995), which estimates the transition probability directly from observations. However, this method can only be used in tabular discrete environments, as it calculates the transition probability by counting. In contrast, our method is applicable to both discrete and continuous environments and is compatible with arbitrary RL techniques, thanks to the use of CS divergence.
3 Maximum Conditional Divergence (MaxCondDiv) Exploration
The intrinsically motivated exploration RL problem can be defined as policy search in an infinite-horizon Markov decision process (MDP) defined by a 6-tuple \((S, A, p_s, r^E, r^I, \gamma)\), where \(S\) is the set of all possible states and \(A\) is the set of all possible actions. \(p_s(s_{t+1}|s_t, a_t)\) is the transition probability density of the next state \(s_{t+1} \in S\) given the current state \(s_t \in S\) and action \(a_t \in A\). The environment emits extrinsic rewards given by the extrinsic reward function \(r^E(s_t, a_t)\). Meanwhile, the intrinsic reward function \(r^I(\rho_{t-})\) determines the intrinsic rewards based on historical data \(\rho_{t-}\) collected before time step \(t\). \(\gamma \in [0, 1)\) is a discount factor. The agent aims to learn an optimal policy \(\pi(a_t|s_t): S \mapsto A\) by maximizing extrinsic and intrinsic rewards:
\[
\pi^* = \arg\max_\pi \mathbb{E}_{\rho \sim \pi} \left( \sum_{t=0}^{T-1} \gamma^t [r^E(s_t, a_t) + \beta r^I(\rho_{t-})] \right),
\]
where \(\beta\) is a hyperparameter that determines the relative importance of intrinsic and extrinsic rewards, and \(\rho = \{s_{t+1}, s_t, a_t\}_{t=0}^{T-1}\) is the data collected by executing policy \(\pi\). We specifically consider the reward-free case, where the extrinsic reward \(r^E(s_t, a_t)\) is consistently zero. Our method aims to design an intrinsic reward function \(r^I(\rho_{t-})\) for exploring the functional space of transitions \(p_s(s_{t+1}|s_t, a_t)\) without relying on any extrinsic reward \(r^E(s_t, a_t)\).
3.1 Conditional Cauchy-Schwarz Divergence (CCSD) Reward Function
Our focus is on enabling the agent to acquire novel transitions in contrast to recently visited samples. To specify a transition sample, we require a triplet consisting of the next state \(s_{t+1}\), the current state \(s_t\), and the action \(a_t\). Hence, a complete trajectory \(T_E = \{(s_2, s_1, a_1), (s_3, s_2, a_2), \ldots, (s_T, s_{T-1}, a_{T-1}), \ldots\}\), \(E\) for “entire”, is defined to be the sequence of triplet samples, as illustrated in Fig. 1. Meanwhile, we utilize a first-in-first-out replay buffer (sliding window) to locally store transition samples. We refer to this subsequence of the trajectory \(T_E\) as a “trajectory fraction”, denoted by \(T\). The trajectory fraction \(T\) contains data for a maximum of \(2\tau\) previous time steps. We then define \(P_f(s_{t+1}|s_t, a_t)\), \(f\) for “former”, as the transition probability \(P(s_{t+1}|s_t, a_t)\) of triplets sampled from \(T_{1:\tau}\) (i.e., the 1-st to \(\tau\)-th elements of \(T\)). Similarly, let \(P_c(s_{t+1}|s_t, a_t)\), \(c\) for “current”, be the transition probability \(P(s_{t+1}|s_t, a_t)\) of triplets sampled from \(T_{\tau+1:2\tau}\). Our framework estimates an intrinsic reward defined as the divergence between \(P_f(s_{t+1}|s_t, a_t)\) and \(P_c(s_{t+1}|s_t, a_t)\), and learns a policy by maximizing this conditional divergence (MaxCondDiv). More formally, our optimal policy maximizes the divergence between the “former” and “current” transitions in each trajectory fraction \(T\):
\[
\pi^*_{\text{MaxCondDiv}} = \arg\max_\pi \mathbb{E}_{\rho \sim \pi} \left( \sum_{T \in [T_E]} D(P_c(s_{t+1}|s_t, a_t); P_f(s_{t+1}|s_t, a_t)) \right).
\]
Figure 1: The structure of our replay buffer. We choose the length of \( T \) to be \( 2\tau \) and divide it equally into "current" and "former" parts. The split point is arbitrary, and overlapping fractions are also possible. If we designate the \( T_{1:2\tau-1} \) as "former" and the \( T_{1:2\tau} \) samples as "current", our approach is consistent with that of (Storck et al., 1995).
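A minimal sketch of this sliding-window buffer, assuming transitions are stored as \((s_{t+1}, s_t, a_t)\) triplets; names are illustrative, not from the paper's code.

```python
from collections import deque

class FractionBuffer:
    """First-in-first-out window of 2*tau transitions, split into two fractions."""
    def __init__(self, tau):
        self.tau = tau
        self.buf = deque(maxlen=2 * tau)  # oldest triplets drop out automatically

    def push(self, s_next, s, a):
        self.buf.append((s_next, s, a))

    def fractions(self):
        """Return the ('former', 'current') halves of the window."""
        items = list(self.buf)
        return items[: self.tau], items[self.tau :]
```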
3.2 Why Conditional Cauchy-Schwarz Divergence (CCSD) for MaxCondDiv?
In principle, any divergence could be used in the context of MaxCondDiv. We underscore the rationale for choosing the CS divergence rather than the popular KL divergence and MMD. For the KL divergence
\[ D_{KL}(p; q) = \int p \log \left( \frac{p}{q} \right), \]
its conditional extension follows a decomposition rule (Cover, 1999):
\[
D_{KL}(\mathbb{P}_c(s_{t+1}|s_t, a_t); \mathbb{P}_f(s_{t+1}|s_t, a_t)) = D_{KL}(\mathbb{P}_c(s_{t+1}, s_t, a_t); \mathbb{P}_f(s_{t+1}, s_t, a_t)) \\
- D_{KL}(\mathbb{P}_c(s_t, a_t); \mathbb{P}_f(s_t, a_t)),
\]
in which both terms are usually evaluated with a \( k \)-NN estimator (Wang et al., 2009). However, the term \( \log \left( \frac{p}{q} \right) \) explodes when \( q \to 0 \), a scenario commonly encountered in our RL experiments. This instability can disrupt the learning process of RL agents. Further empirical details regarding MaxCondDiv with the KL divergence can be found in Section 4.3 and Appendix C.2.
Fortunately, the CS divergence does not have this issue: it is much more stable and never explodes (see also our discussion in Appendix D.1). Theoretically, the CS divergence is no greater than the KL divergence for Gaussian distributed data. Therefore, it provides a viable alternative objective when the KL divergence is hard to apply in practice.
Proposition 1. For two arbitrary d-variate Gaussian distributions \( p \sim \mathcal{N}(\mu_1, \Sigma_1) \) and \( q \sim \mathcal{N}(\mu_2, \Sigma_2) \), we have:
\[
D_{CS}(p; q) \leq \min \{ D_{KL}(p; q), D_{KL}(q; p) \}.
\]
All proofs can be found in Appendix A. Moreover, compared with the \( k \)-NN estimator, our empirical estimator of the CS divergence is differentiable, which makes it promising for potential applications in deep multi-modal learning, where the RL module may play a critical role.
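As a quick numerical sanity check of Proposition 1 (not a proof), both divergences admit closed forms for 1-D Gaussians, using the identity $\int \mathcal{N}(x; \mu_1, \sigma_1^2)\, \mathcal{N}(x; \mu_2, \sigma_2^2)\, dx = \mathcal{N}(\mu_1; \mu_2, \sigma_1^2 + \sigma_2^2)$:

```python
# Numerical check of Proposition 1 for 1-D Gaussians via closed-form expressions.
import numpy as np

def gauss_product_integral(m1, s1, m2, s2):
    """int N(x; m1, s1^2) N(x; m2, s2^2) dx = N(m1; m2, s1^2 + s2^2)."""
    v = s1**2 + s2**2
    return np.exp(-((m1 - m2) ** 2) / (2 * v)) / np.sqrt(2 * np.pi * v)

def d_cs(m1, s1, m2, s2):
    pq = gauss_product_integral(m1, s1, m2, s2)
    pp = gauss_product_integral(m1, s1, m1, s1)   # int p^2
    qq = gauss_product_integral(m2, s2, m2, s2)   # int q^2
    return -np.log(pq) + 0.5 * np.log(pp) + 0.5 * np.log(qq)

def d_kl(m1, s1, m2, s2):
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

m1, s1, m2, s2 = 0.0, 1.0, 2.0, 1.5
assert d_cs(m1, s1, m2, s2) <= min(d_kl(m1, s1, m2, s2), d_kl(m2, s2, m1, s1))
```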
MMD embeds probability functions in a reproducing kernel Hilbert space (RKHS). If we take the conditional MMD definition of (Ren et al., 2016), the estimator involves a matrix inverse and an extra hyperparameter, which makes training highly unstable and time-consuming. See Appendix C.1 for more discussion.
In this paper, we suggest conditional Cauchy-Schwarz divergence (CCSD) for MaxCondDiv:
\[
D_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) = -2 \log \left( \int_{\mathcal{S}_{t+1}} \int_{\{\mathcal{S}_t, \mathcal{A}_t\}} \frac{\mathbb{P}_f(s_{t+1}, \{s_t, a_t\})\, \mathbb{P}_c(s_{t+1}, \{s_t, a_t\})}{\mathbb{P}_f(\{s_t, a_t\})\, \mathbb{P}_c(\{s_t, a_t\})} \, d\{s_t, a_t\}\, ds_{t+1} \right) \\
+ \log \left( \int_{\mathcal{S}_{t+1}} \int_{\{\mathcal{S}_t, \mathcal{A}_t\}} \frac{\mathbb{P}_f^2(s_{t+1}, \{s_t, a_t\})}{\mathbb{P}_f^2(\{s_t, a_t\})} \, d\{s_t, a_t\}\, ds_{t+1} \right) \\
+ \log \left( \int_{\mathcal{S}_{t+1}} \int_{\{\mathcal{S}_t, \mathcal{A}_t\}} \frac{\mathbb{P}_c^2(s_{t+1}, \{s_t, a_t\})}{\mathbb{P}_c^2(\{s_t, a_t\})} \, d\{s_t, a_t\}\, ds_{t+1} \right),
\]
where \( \mathcal{S}_{t+1} \) and \( \{\mathcal{S}_t, \mathcal{A}_t\} \) denote the domains of \( s_{t+1} \) and \( \{s_t, a_t\} \), respectively.
### 3.3 Practical Methods for Accurately Estimating the CCSD Intrinsic Reward
**Proposition 2** (Empirical Estimator of \( D_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) \), (Yu et al., 2023)).
Given observations in the \( 2\tau \)-length trajectory fraction \( T = \{[(s_{t+1})_i, \{s_t, a_t\}_i]\}_{i=1}^{2\tau} \), divide them into two fractions such that \( \{[(s_{t+1})_i, \{s_t, a_t\}_i]\}_{i=1}^{\tau} \) are sampled from the distribution \( \mathbb{P}_f(s_{t+1}, \{s_t, a_t\}) \) and \( \{[(s_{t+1})_i, \{s_t, a_t\}_i]\}_{i=\tau+1}^{2\tau} \) are sampled from \( \mathbb{P}_c(s_{t+1}, \{s_t, a_t\}) \). Let \( K_f \) and \( L_f \) denote, respectively, the Gram matrices\(^2\) for the concatenated variable \( \{s_t, a_t\} \) and the variable \( s_{t+1} \) under the distribution \( \mathbb{P}_f \). That is, \( (K_f)_{ij} = \kappa(\{s_t, a_t\}_i - \{s_t, a_t\}_j) \) and \( (L_f)_{ij} = \kappa((s_{t+1})_i - (s_{t+1})_j) \) for \( i, j = 1 : \tau \), in which \( \kappa \) is a Gaussian kernel of the form \( \kappa(a) = \exp \left( -\frac{\|a\|^2}{2\sigma^2} \right) \). Similarly, let \( K_c \) and \( L_c \) denote, respectively, the Gram matrices for \( \{s_t, a_t\} \) and \( s_{t+1} \) under the distribution \( \mathbb{P}_c \). Meanwhile, let \( K_{fc} \in \mathbb{R}^{\tau \times \tau} \) (i.e., \( (K_{fc})_{ij} = \kappa(\{s_t, a_t\}_i - \{s_t, a_t\}_j) \) for \( i = 1 : \tau \) and \( j = \tau + 1 : 2\tau \)) denote the cross Gram matrix for the variable \( \{s_t, a_t\} \) from distribution \( \mathbb{P}_f \) to distribution \( \mathbb{P}_c \), and \( L_{fc} \in \mathbb{R}^{\tau \times \tau} \) the corresponding cross Gram matrix for the variable \( s_{t+1} \). The Gram matrices \( K_{cf} \) and \( L_{cf} \) are defined analogously. The empirical estimate of \( D_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) \) is given by:
\[
\hat{D}_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) = \log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_f)_{ji} (L_f)_{ji}}{\left( \sum_{i=1}^{\tau} (K_f)_{ji} \right)^2} \right) + \log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_c)_{ji} (L_c)_{ji}}{\left( \sum_{i=1}^{\tau} (K_c)_{ji} \right)^2} \right) \\
- \log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_{fc})_{ji} (L_{fc})_{ji}}{\left( \sum_{i=1}^{\tau} (K_{fc})_{ji} \right)^2} \right) - \log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_{cf})_{ji} (L_{cf})_{ji}}{\left( \sum_{i=1}^{\tau} (K_{cf})_{ji} \right)^2} \right).
\]
We offer a visualization in Appendix D.2 and provide an implementation in Appendix E to facilitate comprehension of the Gram matrix. The estimator exhibits a low computational complexity of \( O(N^2) \), where \( N \) is the number of samples in the trajectory fraction. The CCSD intrinsic reward can be combined with any RL method, e.g., Q-learning (Watkins & Dayan, 1992) or PPO (Schulman et al., 2017). We summarize the training pseudo-code in Algorithm 1.
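A minimal NumPy sketch of the estimator in Proposition 2, following our reading of its four-term structure; this is illustrative, not the authors' released implementation.

```python
import numpy as np

def gram(A, B, sigma=1.0):
    """Gaussian Gram matrix between row-stacked sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def ccsd(sa_f, sn_f, sa_c, sn_c, sigma=1.0):
    """sa_*: (tau, d_sa) concatenated (s_t, a_t); sn_*: (tau, d_s) next states."""
    def term(K, L):
        # log( sum_j [sum_i K_ji L_ji] / [sum_i K_ji]^2 )
        return np.log(((K * L).sum(1) / K.sum(1) ** 2).sum())
    K_f, L_f = gram(sa_f, sa_f, sigma), gram(sn_f, sn_f, sigma)
    K_c, L_c = gram(sa_c, sa_c, sigma), gram(sn_c, sn_c, sigma)
    K_fc, L_fc = gram(sa_f, sa_c, sigma), gram(sn_f, sn_c, sigma)
    K_cf, L_cf = gram(sa_c, sa_f, sigma), gram(sn_c, sn_f, sigma)
    return term(K_f, L_f) + term(K_c, L_c) - term(K_fc, L_fc) - term(K_cf, L_cf)
```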
### 3.4 Connection between MaxCondDiv and MaxEnt
**Proposition 3.** Let \( X \) and \( Y \) be two random variables with marginal PDFs \( p_X(x) = p_X(X = x) \) and \( p_Y(y) = p_Y(Y = y) \), respectively, where \( x \in \mathcal{R} \) and \( y \in \mathcal{R} \). Let \( p_{XY}(x, y) = p_{XY}(X = x, Y = y) \) denote the joint PDF. We have:
\[
\frac{1}{2} H_2(x) + \frac{1}{2} H_2(y) - I_2(x, y) \geq D_{cs}(p_X; p_Y)
\]
iff:
\[
\int_{r \in \mathcal{R}} p_X(X = r)p_Y(Y = r)dr \geq \int_{y \in \mathcal{R}} \int_{x \in \mathcal{R}} p_{XY}^2(x, y)dxdy
\]
---
2In kernel learning, the Gram or kernel matrix is a symmetric matrix where each entry is the inner product of the corresponding data points in a reproducing kernel Hilbert space (RKHS), defined by kernel function \( \kappa \).
where $H_2(\cdot)$ and $I_2(\cdot,\cdot)$ are 2nd-order Rényi entropy and mutual information, as defined in Eq.(1) and Eq.(3), respectively.
We justify Proposition 3 in Appendix A.3. For two variables $X$ and $Y$, maximizing their CS divergence $D_{cs}(p_X; p_Y)$ also maximizes a lower bound on the sum of the 2nd-order Rényi entropies of $X$ and $Y$ minus their 2nd-order Rényi mutual information. It applies to our case by substituting $X$ and $Y$ with $s_{t+1} \sim P_f(\cdot|s_t, a_t)$ and $s_{t+1} \sim P_c(\cdot|s_t, a_t)$, respectively. Therefore, maximizing our CCSD is closely related to maximum trajectory entropy exploration (Ekroot & Cover, 1993; Fiechter, 1994), i.e., $\text{argmax}_\pi H_{traj}(p^{\pi}_{traj})$, where $p^{\pi}_{traj} = \pi(a_1|s_1) \prod_{t=2}^{T} P(s_t|s_{t-1}, a_{t-1})\, \pi(a_t|s_t)$, which can also be obtained by solving entropy-regularized Bellman equations using the entropy of the transition probabilities $H(s_{t+1}|s_t, a_t)$ as rewards (Fiechter, 1994; Tiapkin et al., 2023). Meanwhile, the last term in Eq. (15) incentivizes independence between the “former” and “current” fractions.
4 EXPERIMENTS
4.1 A THOUGHT EXPERIMENT
To highlight the contrast between MaxEnt and MaxCondDiv, consider a thought experiment, visualized in Fig. 2, in which optimal policies are realized through a retrospective step. The scenario involves a 2-D open-world environment. In each trial, the agent starts from the central position $(100, 100)$ and undergoes a sequence of 200 time steps. At each time step, the agent selects an arbitrary direction and moves one unit distance. We replicate this procedure 100 times for each exploration policy.
As illustrated in Fig. 2, the random policy keeps the agent predominantly near its starting point. Conversely, the optimal MaxEnt policy distributes the agent's trajectory evenly, covering a range that far exceeds what a random policy achieves. Our MaxCondDiv agent selects a random direction in the first step because the only sample in the buffer is $(100, 100)$, and moving in any radial direction is equally divergent. For instance, if the agent moves to $(100, 101)$, the samples in the buffer become $[(100, 100), (100, 101)]$. To maximize the divergence, the agent needs to move to $(100, 102)$ in the next step; deviating from this would result in smaller distances to the previously visited states. Consequently, during each trial, the agent consistently moves in one single random direction. Over 100 trials, the MaxCondDiv agent explores the world radially, akin to a fireworks display. We also explore maximizing the divergence between joint distributions (MaxJDiv), i.e., $D_{CS}(P_f(s_{t+1}, s_t, a_t); P_c(s_{t+1}, s_t, a_t))$, as an alternative to MaxCondDiv. For the joint probability $P(s_{t+1}, s_t, a_t) = P(s_{t+1}|s_t, a_t)P(s_t, a_t)$, if $P(s_t, a_t)$ is small (that is, the corresponding state-action pairs are not fully explored), the corresponding $P(s_{t+1}|s_t, a_t)$ plays a minor role in the learning objective; if $P(s_t, a_t)$ is large, the corresponding $P(s_{t+1}|s_t, a_t)$ carries a higher weight. This is contrary to our goal, in which regions with low $P(s_t, a_t)$ should be explored more during exploration. Hence, we expect MaxJDiv to be outperformed by MaxCondDiv.

**Figure 2:** The realization of the thought experiment using random, optimal MaxEnt, MaxJDiv and MaxCondDiv policies. The MaxEnt principle facilitates exploration by uniformly visiting more states, whereas our MaxCondDiv principle guides exploration by maintaining distance from previously visited states.
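The fireworks pattern can be reproduced with a rough simulation of the thought experiment under a greedy one-step surrogate of MaxCondDiv: at each step, the agent picks, among random candidate unit moves, the one whose minimum distance to all previously visited states is largest. This greedy rule is our simplification for illustration, not the paper's trained objective.

```python
import numpy as np

rng = np.random.default_rng(1)
for trial in range(5):
    pos = np.array([100.0, 100.0])
    visited = [pos.copy()]
    for step in range(200):
        angles = rng.uniform(0, 2 * np.pi, size=16)   # candidate directions
        cands = pos + np.stack([np.cos(angles), np.sin(angles)], axis=1)
        V = np.array(visited)
        # score = distance to the nearest previously visited state
        scores = np.linalg.norm(cands[:, None, :] - V[None, :, :], axis=2).min(1)
        pos = cands[scores.argmax()]                  # greedy divergent move
        visited.append(pos.copy())
    print(f"trial {trial}: distance from start {np.linalg.norm(pos - 100):.1f}")
# Each trial ends roughly 200 units from the start: the agent keeps moving
# along one random radial direction, matching the fireworks-like pattern.
```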
4.2 RESULTS ON MOUNTAINCAR AND MAZE
In this section, we experiment with MountainCar and Maze, using Q-learning as the oracle, and compare against the MaxEnt principle and a random policy. For the MaxEnt principle, we adopt the MSEE of Hazan et al. (2019). The agent is trained for 100 episodes, i.e., around 50,000 steps. In Figure 3 (top-left), we illustrate the MountainCar environment, where the most challenging state to explore is indicated by the flag. We execute the trained policies and visualize their trajectories as kernel-density-estimate heatmaps. As expected, the random policy fails to reach the flag and remains close to the starting points. Although MaxEnt can reach the flag, it focuses more on states near the starting point. In contrast, MaxCondDiv reaches the flag more frequently but tends to ignore regions near visited states.

Figure 3: Trajectories of different trained policies on MountainCar and Maze. The flag positions are indicated by red vertical lines. Both MaxEnt and MaxCondDiv can facilitate environment exploration to achieve a defined goal. MaxEnt emphasizes uniform visits to all states, while our MaxCondDiv strategy involves maintaining distance from starting points.
In Maze, as shown in Fig. 3, the agent drives the red point to explore the maze. We record trajectories for 50,000 steps, resetting the agent to the start point every 1,000 steps. The random policy remains near the start point, while MaxEnt explores the entire state space evenly. Our MaxCondDiv also explores the entire maze but tends to stay away from the start point. In the heatmap of MaxCondDiv, the probability at the start point is much lower than at challenging states, such as the top-right and bottom-right corners, indicating that our method has a higher probability of "reaching the boundary".
4.3 Results on Mujoco
Figure 4: Trajectories of various trained policies on Mujoco. Consistent with prior findings, our MaxCondDiv approach is characterized by a deliberate maintenance of distance from visited states.
MuJoCo is an advanced physics simulator with continuous state and action spaces and multiple tasks, from which we select Hopper, HalfCheetah, and Ant. In our experiments, observation noise is introduced by rounding state and action values to two decimal places, and the RL backbone is a PPO agent. Details of the hyper-parameters are in Appendix B. The agent is trained for 1,000 episodes, i.e., 1,000,000 steps in total. The agent restarts from the initial state with uniform noise in each episode.
Divergence vs Entropy. We depict the distribution of states visited within 10,000 steps by the trained agents in Fig. 4. For Hopper and HalfCheetah, we visualize the first two state dimensions, which for Hopper are the z-coordinate of the front tip and the top angle; for Ant, we visualize the x-y coordinates. In Hopper, our method outperforms the others and effectively learns the degrees of freedom necessary to walk forward or backward. In HalfCheetah and Ant, MaxEnt explores a broad space, generating trajectories evenly distributed around the start point. Our MaxCondDiv instead concentrates its exploration far from the start point, consistent with the radial exploration trajectories from our thought experiment.
Comparison to SOTA Exploration RL Approaches. We compare our method with a random policy, curiosity-driven exploration (Pathak et al., 2017) (ICM), Exploration by Random Network Distillation (Burda et al., 2019) (RND), Rényi State Entropy Maximization (Yuan et al., 2022) (RISE), Exploration by Maximizing Rényi Entropy (MaxRényi), Maximum State Entropy Exploration (Hazan et al., 2019) (MSEE), and our MaxCondDiv using KL divergence. Similar to Hazan et al. (2019), we evaluate the exploration performance of trained agents using the number of visited states. It is widely accepted that a larger number of visited states results in improved performance on downstream tasks because the agent can gather sufficient information (Jin et al., 2020). We execute the trained policy for 10,000 steps every 100,000 steps of training, depicting the numbers of visited states in Fig. 5. MaxCondDiv outperforms the baseline methods in terms of exploration range and training speed. Furthermore, we carry out experiments on downstream tasks, which are commonly referred to as the "planning phase" within a reward-free RL framework (Jin et al., 2020). However, the results on downstream tasks are significantly influenced by the subsequent offline RL algorithms employed and are limited compared to online RL. Consequently, we have not used them as our primary results but have included them in Appendix C.5 for reference.
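The exploration metric, the number of visited states, is made countable by the same two-decimal rounding used for observation noise in Section 4.3. A small sketch, with names of our choosing:

```python
import numpy as np

def num_visited_states(states, decimals=2):
    """states: (n_steps, d) array of visited continuous states; counts the
    number of distinct states after rounding to `decimals` places."""
    return len({tuple(s) for s in np.round(states, decimals)})
```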
Learned Skill without Using Extrinsic Rewards. We include images depicting agent motions in Fig. 6. MaxCondDiv acquires a range of basic behaviors, such as jumping forward and flipping, without extrinsic rewards. Videos are shown in Appendix C.3 and the attached zip file.
5 CONCLUSION
We propose Maximum Conditional Divergence (MaxCondDiv), a model-free method for exploration without extrinsic rewards that estimates the difference between the transition probabilities of two trajectory fractions using a conditional Cauchy-Schwarz divergence estimator. MaxCondDiv exhibits exploration behaviors distinct from the maximum entropy principle and avoids the auxiliary-model selection bias observed in other curiosity-driven approaches. We evaluate MaxCondDiv in two discrete and three continuous environments, where it consistently explores more states or successfully reaches challenging states.
REFERENCES
Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, and Doina Precup. A survey of exploration methods in reinforcement learning. *arXiv preprint arXiv:2109.00157*, 2021.
Jihye Bae, Luis Sanchez Giraldo, Pratik Chhatbar, Joseph Francis, Justin Sanchez, and Jose Principe. Stochastic kernel temporal difference for reinforcement learning. In *2011 IEEE International Workshop on Machine Learning for Signal Processing*, pp. 1–6. IEEE, 2011.
Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. In *International Conference on Learning Representations*, 2019.
Christian Cachin. *Entropy measures and unconditional security in cryptography*. PhD thesis, ETH Zurich, 1997.
Thomas M Cover. *Elements of information theory*. John Wiley & Sons, 1999.
Laura Ekroot and Thomas M Cover. The entropy of markov trajectories. *IEEE Transactions on Information Theory*, 39(4):1418–1421, 1993.
Tom Erez, Yuval Tassa, and Emanuel Todorov. Infinite-horizon model predictive control for periodic tasks with contacts. *Robotics: Science and systems VII*, pp. 73, 2012.
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. In *International Conference on Learning Representations*, 2019.
Claude-Nicolas Fiechter. Efficient reinforcement learning. In *Proceedings of the seventh annual conference on Computational learning theory*, pp. 88–97, 1994.
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. *arXiv preprint arXiv:2004.07219*, 2020.
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In *International conference on machine learning*, pp. 2052–2062. PMLR, 2019.
Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *International conference on machine learning*, pp. 1861–1870. PMLR, 2018.
Elad Hazan, Sham Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. In *International Conference on Machine Learning*, pp. 2681–2691. PMLR, 2019.
Roger A Horn and Charles R Johnson. *Matrix analysis*. Cambridge university press, 2012.
Chi Jin, Akshay Krishnamurthy, Max Simchowitz, and Tiancheng Yu. Reward-free exploration for reinforcement learning. In *International Conference on Machine Learning*, pp. 4870–4879. PMLR, 2020.
Kittipat Kampa, Erion Hasanbelliu, and Jose C Principe. Closed-form cauchy-schwarz pdf divergence for mixture of gaussians. In *The 2011 International Joint Conference on Neural Networks*, pp. 2578–2585. IEEE, 2011.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In *International conference on learning representations*, 2014.
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:1179–1191, 2020.
Douglas Bruce Lenat. *AM: an artificial intelligence approach to discovery in mathematics as heuristic search*. Stanford University, 1976.
|
8zJevzvk64
|
The absence of direct assumptions on $\mu$ looks very strange for off-policy evaluation algorithms, since in the worst case $\mu$ may be a degenerate distribution whereas the goal is to evaluate a non-degenerate one; this requires additional comments.
|
Schrödinger Bridge to Bridge Generative Diffusion Method to Off-Policy Evaluation
Anonymous authors
Paper under double-blind review
Abstract
The problem of off-policy evaluation (OPE) in reinforcement learning (RL), which evaluates a given policy using data collected from a different behavior policy, plays an important role in many real-world applications. OPE under the model of episodic non-stationary finite-horizon Markov decision processes (MDPs) has been widely studied. However, general model-free importance sampling (IS) methods suffer from the curse of horizon and dimensionality, while the improved marginal importance sampling (MIS) is restricted to the case where the state space $S$ is sufficiently small. Model-based methods often have a limited scope of application. To find a widely applicable OPE algorithm for continuous and high-dimensional $S$ that avoids the curse of horizon and dimensionality (i.e., the error of the estimator growing exponentially with the horizon $H$ and the dimension $d$ of the state space $S$), we apply the diffusion Schrödinger bridge generative model to construct a model-based estimator (the CDSB estimator). Moreover, we establish a statistical rate for the estimation error of the value function with a polynomial rate of $O(H^2 \sqrt{d})$, which, to the best of our knowledge, is one of the first theoretical rate results on applying the Schrödinger bridge to reinforcement learning. This breaks the restriction on the complexity of the state space for OPE under MDPs with large horizon and can be applied to various real-life decision problems in continuous settings, as shown in our simulations applying our method in continuous, high-dimensional, and long-horizon RL environments and comparing it with existing algorithms.
1 Introduction
The problem of off-policy evaluation (OPE) in reinforcement learning is evaluating the average return value of a given unknown policy (referred to as the target policy) leveraging data gathered from a distinct behavior policy. Given the increasing need for OPE in domains like self-driving and healthcare, the development of efficient algorithms for off-policy evaluation has emerged as a critical priority.
Of all the OPE problems, OPE under the setting of Markov decision processes (MDPs) is of great importance. For MDP-setting OPE problems, there are various algorithms in the literature, both model-free and model-based. Among model-free algorithms, the method of importance sampling (IS) is the most representative: it serves as an efficient bridge between the target policy and the behavior policy and is widely used for short-horizon OPE problems (Precup et al., 2000; Hanna et al., 2018; Robins et al., 2000). However, the traditional IS algorithm, as well as many other model-free algorithms (for example, Kallus & Uehara, 2020), suffers from the curse of horizon, which means the MSE of the IS estimator grows exponentially with the horizon $H$ (Liu et al., 2020; Jiang & Li, 2016; Precup et al., 2000; Thomas et al., 2015; Farajtabar et al., 2018; Guo et al., 2017; Thomas & Brunskill, 2016). Xie et al. (2019) propose the Marginal Importance Sampling (MIS) estimator, reducing the dependence on the horizon to polynomial. However, the applicability of the MIS estimator is limited to the case where the state space $S$ is sufficiently small and discrete. Uehara et al. (2020) employ minimax optimization to avoid the curse of horizon and dimensionality; however, it is generally challenging to compute and necessitates additional properties, such as the Q-function of the MDP belonging to a Reproducing Kernel Hilbert Space (RKHS) function class, to ensure the effectiveness of the minimax optimization.
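As a concrete illustration of the IS approach discussed above, the following is a minimal sketch of a (self-normalized) step-wise IS estimator; the cumulative ratio $\rho_{1:t}$ is the quantity whose variance grows exponentially in $H$, i.e., the curse of horizon. The function and argument names are ours for illustration.

```python
import numpy as np

def stepwise_is(trajs, pi_prob, mu_prob, self_normalize=True):
    """trajs: list of trajectories, each a length-H list of (s, a, r);
    pi_prob(s, a), mu_prob(s, a): action densities of target / behavior."""
    n, H = len(trajs), len(trajs[0])
    rho = np.ones((n, H))                       # cumulative ratios rho_{1:t}
    rew = np.zeros((n, H))
    for i, traj in enumerate(trajs):
        w = 1.0
        for t, (s, a, r) in enumerate(traj):
            w *= pi_prob(s, a) / mu_prob(s, a)
            rho[i, t], rew[i, t] = w, r
    if self_normalize:                          # weighted (self-normalized) IS
        weights = rho / rho.mean(axis=0, keepdims=True)
    else:
        weights = rho
    return (weights * rew).sum(axis=1).mean()   # estimate of V^pi
```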
There are also many model-based methods for MDP-setting OPE problems, where the transition functions of the MDP system are directly estimated (Liu et al., 2018; Gottesman et al., 2019; Hallak et al., 2015). Some model-based estimators can efficiently avoid the curse of horizon and work well when the state space is continuous. However, a common problem with model-based estimators is that they usually require sharp conditions on the transition and policy functions, which in turn results in a relatively narrow coverage of MDP settings for the OPE problem. For example, the model-based approach discussed in Uehara & Sun (2021), which focuses on continuous state spaces, mandates that policy functions belong to a finite function class due to the structure of its PAC-learning bound.
Generally speaking, there has not been a practical algorithm for MDP-setting off-policy evaluation that can be applied to scenarios where the state space $S$ is sufficiently large, avoids the curse of horizon and dimensionality, and covers a wide range of MDP settings at the same time.
In deep learning, a generative model describes how a dataset is generated, which makes it possible to generate a substantial volume of data conforming to a desired distribution, even if the target distribution lives in a very complex space. This intrinsic capability renders generative modeling highly relevant and applicable in the context of distribution estimation (Liu et al., 2021; Chen et al., 2019; Liang, 2021; Li et al., 2019; Abbasnejad et al., 2019; Zhang et al., 2020; Liang, 2018). In recent studies, the methodology of diffusion and score matching is widely used in generative modeling to solve problems in image synthesis and data recovery (Ho et al., 2020; Hyvärinen, 2005; Song & Ermon, 2020; Song et al., 2021; Vahdat et al., 2021; Jo et al., 2022; Dockhorn et al., 2023; Janner et al., 2022). Moreover, recent studies (Wang et al., 2021; De Bortoli et al., 2021; Winkler et al., 2023; Shi et al., 2023) view the classical Schrödinger bridge problem (Rüschendorf & Thomsen, 1993), revisited with the methodologies of machine learning (Vargas, 2021; Pavon et al., 2021), as a generative modeling problem and use score-based diffusion to find its solutions.
To tackle the problem that conventional density estimators cannot handle complex state and action spaces, in this paper we implement the methodology of the diffusion Schrödinger bridge to directly estimate the transition functions and construct a model-based estimator (the CDSB estimator). The idea of using a generative model as a transition-function estimator in RL has, to our knowledge, not been explored in the literature. In comparison with Xie et al. (2019), our approach avoids the curse of horizon while remaining applicable to OPE problems in continuous and high-dimensional spaces. In comparison with Uehara et al. (2020) and Uehara & Sun (2021), our approach covers a wider range of MDP settings, as it does not require the MDP functions to belong to specific function classes; it solely necessitates boundedness and smoothness of the transition and policy functions.
Previous studies have discussed the convergence rate and asymptotic properties of the solution to the Schrödinger bridge, most of which are based on the iterative proportional fitting (IPF) method of solving the Schrödinger bridge (Deligiannidis et al., 2021; Gibbs & Su, 2002). Instead, our paper applies the likelihood-training method to solve the diffusion Schrödinger bridge, as in Chen et al. (2023b) and Chen et al. (2023c). To derive the convergence rate under this method, we take advantage of the score-matching error estimation in Chen et al. (2023a) and derive a total-variation error bound using Girsanov's theorem, which is the first error bound for likelihood-trained Schrödinger bridges in the literature. With this error bound, we ultimately derive an $O(H^2\sqrt{d})$ bound on the absolute error of the estimate of the value function $V^\pi$ under an assumption of uniform score estimation error.
Contributions. We summarize our main contributions as follows. First, we introduce the diffusion Schrödinger bridge generative model for density estimation and design a practical algorithm to adapt this estimator to model-based off-policy evaluation, thereby extending the solvability of OPE problems to the setting of high-dimensional and complex state and action spaces. Second, we prove a quantitative statistical convergence rate, in total-variation norm, for the diffusion Schrödinger bridge solved by likelihood training. Third, we bound the absolute (1-norm) error of our model-based value function estimator, which has an $O(H^2\sqrt{d})$ convergence rate. To the best of our knowledge, this is the first quantitative convergence result employing the diffusion Schrödinger bridge in the context of reinforcement learning.
1.1 Related Work
**Off-Policy Evaluation** In reinforcement learning, off-policy evaluation refers to accurately evaluating a target policy using previously logged feedback data of a behavior policy (Dudík et al., 2014).
Importance sampling (IS) and marginal importance sampling (MIS) estimators are widely used for OPE problems. Precup et al. (2000), Hanna et al. (2018), Robins et al. (2000), Xie et al. (2019), and Kostrikov & Nachum (2020) use self-normalized step-wise importance sampling for the problem. Le et al. (2019) train a neural network to estimate the value of the evaluation policy $\pi$ by bootstrapping from $Q(s', \pi'(s'))$. Model-based methods are also adopted, as in the work of Zhang et al. (2018), Liu et al. (2018), Gottesman et al. (2019), and Hallak et al. (2015). Uehara et al. (2020) use minimax optimization to solve the problem, which performs well in continuous state spaces. A more thorough review of the literature on OPE can be found in Uehara et al. (2022).
**Schrödinger Bridge Problem** The SB problem is an entropy-regularized optimal transport problem introduced by Schrödinger (1932). Genevay et al. (2018) deal with the SB problem in the context of discrete distributions. Finlay et al. (2020) solve the SB problem by approximating the SB solution with a diffusion whose drift is computed using potentials. Another prevalent method for solving the SB problem is Iterative Proportional Fitting, which is adopted in De Bortoli et al. (2021) to formulate a generative model for faster generation. Convergence results for IPF have been established under classical compactness assumptions, as in Chen et al. (2016).
## 2 Problem Formulation
**Symbols and notations.** We consider the problem of offline policy evaluation for a finite-horizon MDP, defined by $M = (S, A, T, R, H)$, where $S$ is a continuous state space, $A$ is a continuous action space, $T_t : S \times A \times S \rightarrow [0, 1]$ is the transition function, with $T_t(s'|s, a)$ the probability of transitioning into state $s'$ upon taking action $a$ in state $s$ at time $t$, and $R_t : S \times A \rightarrow \mathbb{R}$ is the reward function; $R_t(s, a)$ is the deterministic immediate reward associated with taking action $a$ in state $s$ at time $t$, and $H$ denotes the finite horizon. Without loss of generality, we study the case where $S = A = [0, 1]^d \subset \mathbb{R}^d$, $d \geq 1$. We use $\Pr\{E\}$ and $\mathbb{E}\{E\}$ to denote the probability and expectation of an event $E$, and $\mathbb{E}\{E|F\}$ to denote the conditional expectation of event $E$ given the condition $F$. Denote by $[n]$ the set of natural numbers $\{1, \cdots, n\}$, $n \in \mathbb{N}$. Use $\mathcal{P}(p_1, p_2)$ to denote the set of all path measures on $S$ over the time interval $[0, T]$ with $p_1$ and $p_2$ as marginal densities at $t = 0$ and $T$. Denote the Kullback-Leibler divergence between $p$ and $q$ by $\text{KL}(p\|q)$, and the total-variation distance between $p$ and $q$ by $\text{TV}(p, q)$. For a random variable $X$ with probability density $p$ and a map $f$, we denote by $f_\# p$ the probability density of the random variable $f(X)$.
Let $\mu$ and $\pi$ be policies whose outputs are distributions over actions given an observed state; $\mu$ is the behavior policy and $\pi$ the target policy. Denote by $\mu(a|s)$ the probability density function of actions given the state. Moreover, we denote by $d^\pi_t(s_t)$ the state distribution induced by $\pi$ at time $t$. When $t = 1$, the initial distributions are known and identical: $d^\pi_1 = d_0$. For $t > 1$, $d^\pi_t(s_t)$ is defined recursively as follows:
$$d^\pi_t(s_t) = \int_S P^\pi_t(s_t|s_{t-1})\,d^\pi_{t-1}(s_{t-1})\,ds_{t-1},$$
where $P^\pi_t(s_t|s_{t-1}) = \int_A T_t(s_t|s_{t-1}, a_{t-1})\pi(a_{t-1}|s_{t-1})da_{t-1}$.
**Problem setup.** The key to offline policy evaluation is to find an estimator $\hat{V}^\pi$ using the data collected by the behavior policy $\mu$ and the known action probabilities to estimate the value function
$$V^\pi = \sum_{t=1}^{H} \int_A \int_S d^\pi_t(s_t)\pi(a_t|s_t)R_t(s_t, a_t)ds_tda_t,$$
where we assume $\pi(a|s)$ and $\mu(a|s)$ are known for all $(s, a) \in S \times A$, while $R_t(s_t, a_t)$ is unknown. The transition distributions $T_t(s_t|s_{t-1}, a_{t-1})$ are unknown and not easy to observe.
Different from various previous studies in this field, such as Xie et al. (2019), which focus on the case where $S$ and $A$ are discrete and low-dimensional, we provide an estimator $\hat{V}^\pi$ under the condition that $S$ and $A$ are high-dimensional and continuous. In particular, we set $S = A = [0, 1]^d$, $d \geq 1$. Our main strategy is to construct model-based estimators, that is, to directly estimate the transition function $T_t(s_t|s_{t-1}, a_{t-1})$.
3 MODEL-BASED CONDITIONAL DIFFUSION SCHRÖDINGER BRIDGE ESTIMATOR
To construct model-based estimators for the OPE problem, one has to provide a reliable estimate $\hat{T}_t(s_t|s_{t-1}, a_{t-1})$ of the transition function $T_t(s_t|s_{t-1}, a_{t-1})$ for all $t = 2, \cdots, H$. Consequently, we obtain an estimator of the value function for any given target policy $\pi$:
$$\hat{V}^\pi = \sum_{t=1}^{H} \int_A \int_S \hat{R}_t(s_t, a_t) \pi(a_t|s_t) \hat{P}_t^\pi(s_t|s_{t-1}) \cdots \hat{P}_2^\pi(s_2|s_1) d_0(s_1) ds_1 \cdots ds_t da_t,$$
(1)
where
$$\hat{P}_t^\pi(s_t|s_{t-1}) = \int_A \hat{T}_t(s_t|s_{t-1}, a_{t-1}) \pi(a_{t-1}|s_{t-1}) da_{t-1}, \quad t = 2, \cdots, H,$$
(2)
and $\hat{R}_t(s_t, a_t)$ being an estimate of the reward function.
In our work, we will construct the estimation $\hat{T}_t(s_t|s_{t-1}, a_{t-1})$ using conditional diffusion Schrödinger bridge to get our estimator $\hat{V}^\pi$ as above.
3.1 SCHRÖDINGER BRIDGE PROBLEM FOR DENSITY ESTIMATION
The classical Schrödinger bridge problem (Föllmer, 1988) in the continuous-time setting aims to find a path measure on the time interval $[0, T]$ that achieves the minimum Kullback-Leibler divergence relative to a reference measure under given marginal conditions, that is, to find $Q^* \in \mathcal{P}(p_{\text{data}}, p_{\text{prior}})$ such that
$$Q^* = \arg\min\{\text{KL}(Q\|P) : Q \in \mathcal{P}(p_{\text{data}}, p_{\text{prior}})\},$$
(3)
where $P$ is a reference path measure on $S$ over $[0, T]$ that can be designed, $p_{\text{data}}$ is the target distribution we aim to estimate, and $p_{\text{prior}}$ is a known prior distribution. Suppose $Q^*$ is available; then samples from the target distribution $p_{\text{data}}$ can be generated from the known prior distribution $p_{\text{prior}}$ via $Q^*$, which means we can achieve density estimation of $p_{\text{data}}$ by solving the Schrödinger bridge problem.
We set the reference measure $P$ to be the path measure of the add-noise SDE used in score-based generative modeling:
$$dX_r = f(X_r, r)dr + g(r)dW_r, \quad X_0 \sim p_{data}, r \in [0, T],$$
(4)
where $f(\cdot, r) : \mathbb{R}^n \to \mathbb{R}^n$ and $g(r) \in \mathbb{R}$ are the drift and diffusion coefficients, and $W_r \in \mathbb{R}^n$ is a standard Brownian motion. With this choice of reference measure, we obtain the diffusion Schrödinger bridge. We write $f(X_r, r) \equiv f$ and $g(r) \equiv g$ for simplicity.
For the diffusion Schrödinger bridge, the optimality condition can be characterized by two PDEs that are coupled through their boundary conditions. The result is summarized below.
**Theorem 3.1.1** (Chen et al., 2021; Pavon & Wakolbinger, 1991; Caluya & Halder, 2021)
Let $\Psi(r, x)$ and $\hat{\Psi}(r, x)$ be the solutions to the following PDEs:
$$\begin{cases}
\frac{\partial \Psi}{\partial r} = -\nabla_x \Psi^\top f - \frac{1}{2} \text{Tr}(g^2 \nabla_x^2 \Psi) \\
\frac{\partial \hat{\Psi}}{\partial r} = -\nabla_x \cdot (\hat{\Psi} f) + \frac{1}{2} \text{Tr}(g^2 \nabla_x^2 \hat{\Psi})
\end{cases} \quad \text{s.t. } \Psi(0, \cdot)\hat{\Psi}(0, \cdot) = p_{\text{data}}, \; \Psi(T, \cdot)\hat{\Psi}(T, \cdot) = p_{\text{prior}}.$$
(5)
Then, the solution to the optimization problem can be expressed as the path measure of the forward SDE
$$dX_r = [f + g^2 \nabla_x \log \Psi(r, X_r)]dr + gdW_r, \quad X_0 \sim p_{data}$$
(6)
or equivalently the backward SDE
$$dX_r = [f - g^2 \nabla_x \log \hat{\Psi}(r, X_r)]dr + gdW_r, \quad X_T \sim p_{prior},$$
(7)
So finding the solution to the diffusion Schrödinger bridge problem is equivalent to finding the solutions $\Psi(r, x)$ and $\hat{\Psi}(r, x)$ of the PDEs in Eq. (5).
3.2 Solving Schrödinger Bridge Using Likelihood Training
Denote \( Z_r = g \nabla_x \log \Psi \) and \( \hat{Z}_r = g \nabla_x \log \hat{\Psi} \). By the above analysis, the pair \((Z_r, \hat{Z}_r)\) contains all the information of the diffusion Schrödinger bridge (DSB) model. Suppose \( q_r \) is the marginal distribution at time \( r \in [0, T] \) of the solution to the diffusion Schrödinger bridge problem; then the log-likelihood of a data point \( x_0 \) from \( p_{\text{data}} \) generated by the diffusion Schrödinger bridge is, by definition, \( \log q_0(x_0) \). We have the following theorem.
**Theorem 3.2.1** (Chen et al., 2023b) The log-likelihood of the DSB model \((Z_r, \hat{Z}_r)\) at data point \( x_0 \) can be expressed as
\[
\log q_0(x_0) = \mathbb{E}[\log q_T(X_T)] - \int_0^T \mathbb{E}\left[ \frac{1}{2} \|Z_r\|^2 + \frac{1}{2} \|\hat{Z}_r\|^2 + \nabla_x \cdot (g\hat{Z}_r - f) + \hat{Z}_r^\top Z_r \right] dr.
\]
Consequently, we can maximize \( L_{SB}(x_0; \theta, \phi) \), which shares the same expression as \( \log q_0(x_0) \) above with \( Z_r \approx Z(r, x; \theta) \) and \( \hat{Z}_r \approx \hat{Z}(r, x; \phi) \) approximated by parameterized models, in order to solve the DSB problem. By Theorem 11 of Chen et al. (2023b), using the symmetry of the Schrödinger bridge, maximizing \( L_{SB}(x_0; \theta, \phi) \) can be converted into alternately maximizing the following two objectives:
\[
\tilde{L}_{SB}(x_0; \phi) = - \int_0^T \mathbb{E}_{X_r}\left[ \frac{1}{2} \| \hat{Z}(r, X_r; \phi) \|^2 + g\, \nabla_x \cdot \hat{Z}(r, X_r; \phi) + Z_r^\top \hat{Z}(r, X_r; \phi) \right] dr,
\]
\[
\tilde{L}_{SB}(x_T; \theta) = - \int_0^T \mathbb{E}_{X_r}\left[ \frac{1}{2} \| Z(r, X_r; \theta) \|^2 + g\, \nabla_x \cdot Z(r, X_r; \theta) + \hat{Z}_r^\top Z(r, X_r; \theta) \right] dr,
\]
where the expectation in the first objective is taken over $X_r$ following the forward SDE (6) and, in the second, over $X_r$ following the backward SDE (7).
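In practice, the divergence term $\nabla_x \cdot \hat{Z}$ in these objectives is typically approximated stochastically. Below is a hedged PyTorch sketch of the integrand of $\tilde{L}_{SB}(x_0; \phi)$ at a single diffusion time $r$, using Hutchinson's trace estimator for the divergence; `Z_hat` and `Z_ref` are hypothetical stand-ins for the parameterized backward policy and the frozen forward policy $Z_r$, not the paper's exact networks.

```python
import torch

def sb_integrand(Z_hat, Z_ref, r, x, g):
    """Monte Carlo integrand of the SB objective at time r for a batch x."""
    x = x.detach().requires_grad_(True)
    zh = Z_hat(r, x)                          # backward policy, (batch, n)
    v = torch.randn_like(x)                   # Hutchinson probe vector
    # div_x zh ~ v^T (d zh / d x) v, a one-sample Hutchinson estimate
    jv = torch.autograd.grad((zh * v).sum(), x, create_graph=True)[0]
    div = (jv * v).sum(dim=1)
    with torch.no_grad():
        z = Z_ref(r, x)                       # frozen forward policy Z_r
    return 0.5 * (zh ** 2).sum(dim=1) + g * div + (z * zh).sum(dim=1)
```

The loss $\tilde{L}_{SB}(x_0; \phi)$ is then (minus) the average of this integrand over times $r$ sampled in $[0, T]$ and states $X_r$ simulated from the forward SDE (6).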
3.3 Conditional Likelihood Training
The most straightforward way to apply DSB to our model-based OPE estimator would be to construct a separate diffusion Schrödinger bridge with target distribution \( p_{\text{data}}(s_t) = T_t(s_t|s_{t-1}, a_{t-1}) \) for each \( t \in \{2, \cdots, H\} \) and each \((s_{t-1}, a_{t-1}) \in S \times A\), which is not computationally achievable when \( S \) and \( A \) are continuous. Instead, we view \( T_t(s_t|s, a) \) as a conditional probability density function conditioned on \((t, s, a)\), which can be incorporated into the training parameters as \( \tilde{\phi} = (\phi, t, s, a) \) and \( \tilde{\theta} = (\theta, t, s, a) \). Chen et al. (2023c) provide a practical algorithm implementation using a conditional mask (see Section 5.2 of Chen et al. (2023c)), which alternately trains the following masked loss,
\[
\tilde{L}_{SB}(x_0; \phi) = - \int_0^T \mathbb{E}_{X_r}\left[ \frac{1}{2} \| \hat{Z}(r, X_r; \phi) \circ M \|^2 + g\, \nabla_x \cdot [\hat{Z}(r, X_r; \phi) \circ M] + [Z_r \circ M]^\top [\hat{Z}(r, X_r; \phi) \circ M] \right] dr,
\]
where \( M \) is the target mask that has element 1 for the target index and 0 otherwise.
Meanwhile, in order to generate data from the SDEs empirically, in practice we discretize the time interval \([0, T]\). An \( N \)-step discretization divides \([0, T]\) into the intervals \([kh, (k + 1)h]\), \( k = 0, \cdots, N - 1 \), with step size \( h := \frac{T}{N} \).
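Putting the discretization and the conditional mask together, a minimal NumPy sketch of conditional generation from the learned backward SDE (7) might look as follows. We assume the DDPM-style choices $f(x) = -x$ and $g = \sqrt{2}$ mentioned in Section 4, a hypothetical trained callable `Z_hat(r, x)` standing in for $g\nabla_x \log \hat{\Psi}$, and note that sign conventions for reverse-time Euler steps vary across implementations.

```python
import numpy as np

def conditional_generate(Z_hat, cond, M, T=1.0, N=100, g=np.sqrt(2.0)):
    """cond: (batch, dim) stacked (s_{t-1}, a_{t-1}, s_t) with the observed
    condition filled in; M: (dim,) 0/1 mask with 1 on the s_t coordinates."""
    h = T / N
    # initialize target coordinates from the prior, clamp the condition
    x = np.where(M == 1, np.random.randn(*cond.shape), cond)
    for k in range(N, 0, -1):
        r = k * h
        drift = -x - g * Z_hat(r, x)       # f - g^2 grad log Psi_hat, f(x)=-x
        x_new = x - h * drift + np.sqrt(h) * g * np.random.randn(*x.shape)
        x = np.where(M == 1, x_new, cond)  # keep (s_{t-1}, a_{t-1}) clamped
    return x[:, M == 1]                    # generated target coordinates s_t
```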
Using the conditional maximum-likelihood training of the DSB problem, we finally obtain the estimate \( \hat{T}_t(s_t|s_{t-1}, a_{t-1}) \) of the transition function \( T_t(s_t|s_{t-1}, a_{t-1}) \) for all \( t = 2, \cdots, H \) and \((s_t, s_{t-1}, a_{t-1}) \in S \times S \times A\), which we use to construct our OPE estimator via Equation 1 and Equation 2. We call our estimator the Conditional Diffusion Schrödinger Bridge (CDSB) estimator.
In implementation, \( X_0 \) is \((s_{t-1}, a_{t-1}, s_t)\), stacked into a single longer vector, and the conditional mask takes the value 1 on the indices of \( s_t \). Besides, we also train a neural network for the reward function \( \hat{R}_t(s_t, a_t) \), which takes a state and an action as input and predicts the reward. The detailed algorithms for training and OPE evaluation are summarized in Algorithm 1.
Algorithm 1: CDSB Estimator Training and OPE

Training:

Input: Samplers for $p_{\text{prior}}$ and $p_{\text{obs}}$, fixed condition-target masks $M$

Output: Trained backward policy $\hat{Z}(r, \cdot; \tilde{\phi})$

for $k$ in $1:K$ do

Sample a path $\{X_r\}_{r \in [0,T]}$ following (6), where $X_0 \sim p_{\text{obs}}$.

Compute $\tilde{L}_{SB}(x_0; \phi)$ using the masks $M$.

Take a gradient step and update the parameter $\phi$.

Sample a path $\{X_r\}_{r \in [0,T]}$ following (7), where $X_T \sim p_{\text{prior}}$.

Compute $\tilde{L}_{SB}(x_T; \theta)$.

Take a gradient step and update the parameter $\theta$.

end

Use the output $\hat{Z}(r, \cdot; \tilde{\phi})$ and the masks $M$ to form a conditional sampler $\hat{T}(s_t|s_{t-1}, a_{t-1}, t)$, where $(s_{t-1}, a_{t-1})$ is the condition and $s_t$ is the target. Conditional generation follows Equation (7).

Model-based OPE:

Input: Target policy $\pi$, sampled initial states $\{s^{(i)}_1\}_{i=1}^n$, trained conditional sampler $\hat{T}(s_t|s_{t-1}, a_{t-1}, t)$, trained reward network $\hat{R}$

Output: $\hat{V}^\pi$

for $t$ in $1:H$ do

Sample actions $\{a^{(i)}_t\}_{i=1}^n$ from $\pi$.

Sample next states $\{s^{(i)}_{t+1}\}_{i=1}^n$ from $\hat{T}$.

Predict rewards $\{r^{(i)}_t\}_{i=1}^n$ using the reward network $\hat{R}$.

end

Compute $\hat{V}^\pi = \frac{1}{n} \sum_{i=1}^n \sum_{t=1}^H r^{(i)}_t$.
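The model-based OPE phase of Algorithm 1 amounts to a simple Monte Carlo rollout through the learned model. The sketch below assumes three hypothetical callables: `pi_sample` drawing actions from the target policy, the trained conditional sampler `T_hat`, and the reward network `R_hat`.

```python
import numpy as np

def cdsb_ope(pi_sample, T_hat, R_hat, s_init, H):
    """s_init: (n, d) initial states drawn from d_0; returns V_hat^pi."""
    n = s_init.shape[0]
    s, total = s_init, np.zeros(n)
    for t in range(1, H + 1):
        a = pi_sample(s)            # a_t^(i) ~ pi(.|s_t^(i))
        total += R_hat(s, a)        # accumulate predicted rewards r_t^(i)
        s = T_hat(s, a, t)          # s_{t+1}^(i) ~ T_hat(.|s_t, a_t, t)
    return total.mean()             # (1/n) sum_i sum_t r_t^(i)
```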
4 THEORETICAL ANALYSIS OF THE CDSB ESTIMATOR
In this section, we provide the approximation properties of the CDSB estimator. To obtain a convergence result, we place the following assumptions on the Schrödinger bridge model derived from the MDP, the parameterized model estimation error, and the target policy $\pi$:
1. $\Psi(r, x)$ and $\hat{\Psi}(r, x)$ in Section 3.1 are such that $\nabla_x \log \Psi(r, x)$ and $\nabla_x \log \hat{\Psi}(r, x)$ are $L$-Lipschitz with respect to the variable $x$ for all $r \in [0, T]$.
2. For all $t \in \{2, \cdots, H\}$ and all $(s, a) \in S \times A$, $\mathbb{E}_{X \sim T_t(\cdot|s,a)} \|X\|^2 \leq m^2 < \infty$.
3. The drift $f$ and the diffusion $g$ in Equation (4) satisfies: $f$ has a finite upper bound $M < +\infty$, $g(r) \equiv c$ is a constant function with $0 < c \leq M$.
4. The unknown reward function $R_t(s_t, a_t)$ has a uniform upper bound $R_{\text{max}} = \sup_{s_t, a_t, t} R_t(s_t, a_t)$ with respect to all $t = 1, \cdots, H$.
5. For target policy $\pi$, $\tau := \sup_{s \in S, a \in A} |\pi(a|s)| < \infty$.
6. For all $k = 1, \cdots, N$, all $t = 1, \cdots, H$, and all $(s, a) \in S \times A$,
\[
\mathbb{E}_{q_{kh,t,s,a}} \left[ \|Z(kh, X_{kh}, (\theta, t, s, a)) - Z_{kh}\|^2 \right] \leq \epsilon^2,
\]
\[
\mathbb{E}_{q_{kh,t,s,a}} \left[ \|\hat{Z}(kh, X_{kh}, (\phi, t, s, a)) - \hat{Z}_{kh}\|^2 \right] \leq \epsilon^2, \quad |\hat{R}_t(s, a) - R_t(s, a)|^2 \leq \epsilon^2,
\]
where $q_{kh,t,s,a}$ is the marginal density at time $kh \in [0, T]$ of the solution to the DSB (3) with $p_{\text{data}} = T_t(\cdot|s, a)$.
Assumption (4) is easily achievable, since an upper bound on the reward function is guaranteed in almost every reinforcement learning problem. Assumption (5) (boundedness of the target policy $\pi$) also covers most off-policy evaluation problems. Assumption (2) requires a second-moment bound on the transition function; since in our setting $S = [0, 1]^d$ is bounded and $\text{supp}\{T_t(\cdot|s, a)\} \subseteq S$ for all $t = 2, \cdots, H$ and $(s, a) \in S \times A$, this assumption naturally holds. Assumption (3) is also easily achievable since both the drift and the diffusion can be designed; in practice, we can apply the standard denoising diffusion probabilistic modeling (DDPM) setting $f(t, X_t) = -X_t$ (bounded since $X_t$ is bounded) and $g(t) = \sqrt{2}$. Assumption (1) requires Lipschitzness of $\nabla_x \log \Psi(r, x)$ and $\nabla_x \log \hat{\Psi}(r, x)$, which can be derived from the Lipschitzness and lower-boundedness of $p_{\text{data}} = T_t(\cdot|s, a)$.

Meanwhile, the Lipschitzness and lower-boundedness of the transition function is a conventional setting for continuous MDP systems. The final assumption (6) is a score estimation error assumption, similar to the assumption in Lee et al. (2022). Notice that our assumption requires the learning error $\epsilon$ uniformly over all $t = 2, \cdots, H$ and $(s, a) \in S \times A$, which is still a realistic assumption under the algorithm of conditional likelihood training.
**Theorem 4.1** Under Assumptions (1)-(6), let $\hat{V}^\pi$ be the output of the CDSB estimator, and suppose that the step size $h := \frac{T}{N}$ satisfies $h \lesssim \frac{1}{L}$, where $L \geq 1$. Suppose the diffusion time $T \geq \max\{1, \frac{1}{\tau^2}\}$; then it holds that
$$|\hat{V}^\pi - V^\pi| \lesssim R_{\text{max}} T^2 H^2 (\epsilon + M^3 L^{3/2} T \sqrt{d h} + LM m h) \sqrt{T}. \quad (11)$$
We make a few remarks about the above theorem. Firstly, the error bound on $|\hat{V}^\pi - V^\pi|$ has only a second-order polynomial dependence on the horizon $H$, which shows that the CDSB estimator avoids the exponential curse of horizon that affects traditional IS estimators (Liu et al., 2020). On the other hand, the error bound has only a $\sqrt{d}$ dependence on the dimension $d$ of the state space $S$, which indicates that our algorithm also avoids the curse of dimensionality and therefore performs well on continuous and high-dimensional state and action spaces. Finally, the error bound can be controlled by reducing the estimation error $\epsilon$ and the diffusion step size $h$, both of which are easy to achieve in practical computation.
To prove the above theorem, we compare the structures of $V^\pi$ and $\hat{V}^\pi$. Notice that
$$V^\pi = \sum_{t=1}^{H} \int_A \int_S R_t(s_t, a_t) \pi(a_t | s_t) P_t^\pi(s_t | s_{t-1}) \cdots P_2^\pi(s_2 | s_1) d_0(s_1) ds_1 \cdots ds_t da_t,$$
and
$$\hat{V}^\pi = \sum_{t=1}^{H} \int_A \int_S \hat{R}_t(s_t, a_t) \pi(a_t | s_t) \hat{P}_t^\pi(s_t | s_{t-1}) \cdots \hat{P}_2^\pi(s_2 | s_1) d_0(s_1) ds_1 \cdots ds_t da_t.$$
It follows naturally that a uniform bound on $\int_S |\hat{P}_t^\pi(s_t | s_{t-1}) - P_t^\pi(s_t | s_{t-1})| ds_t$ over all $t = 2, ..., H$ and all $s_{t-1} \in S$ can be used to bound $|\hat{V}^\pi - V^\pi|$.
Since $\hat{P}_t^\pi(s_t | s_{t-1}) = \int_A \hat{T}_t(s_t | s_{t-1}, a_{t-1}) \pi(a_{t-1} | s_{t-1}) da_{t-1}$ and $P_t^\pi(s_t | s_{t-1}) = \int_A T_t(s_t | s_{t-1}, a_{t-1}) \pi(a_{t-1} | s_{t-1}) da_{t-1}$, and $\pi$ is upper-bounded by $\tau$, we only require a uniform bound on $\int_S |\hat{T}_t(s_t | s_{t-1}, a_{t-1}) - T_t(s_t | s_{t-1}, a_{t-1})| ds_t$ over all $t = 2, \cdots, H$ and all $(s_{t-1}, a_{t-1}) \in S \times A$, which is guaranteed by the following theorem:
**Theorem 4.2** For any $t = 2, \cdots, H$ and any $(s_{t-1}, a_{t-1}) \in S \times A$, suppose the diffusion time $T \geq \max\{1, \frac{1}{\tau^2}\}$; then we have
$$TV(\hat{T}_t(\cdot | s, a), T_t(\cdot | s, a)) \lesssim (\epsilon + M^3 L^{3/2} T \sqrt{d h} + LM m h) \sqrt{T}.$$
This theorem is proved mainly using Girsanov's theorem. The method is similar to Chen et al. (2023a), with some alterations for the diffusion Schrödinger bridge setting. With Theorem 4.2 in place, we prove Theorem 4.1 by iterating over $t$.
## 5 EXPERIMENTS
### 5.1 Setting and Result
We conduct our experiments on the DeepMind control suite (Tassa et al., 2018), a set of control tasks implemented in MuJoCo (Todorov et al.). We use a subset of the offline datasets from RL Unplugged (Gulcehre et al., 2020), the details of which are provided in Table 1. These environments capture a wide range of complexity, from 40K transitions in a 5-dimensional cartpole environment to 1.5 million transitions on complex manipulation tasks. We follow part of the evaluation protocol of the Deep OPE benchmark (Fu et al., 2020).
As for the policies, we adopt the policies trained by Kostrikov & Nachum (2020) for each task as behavior policies. Offline datasets are generated following these policies. Four different levels of noise are added to the behavior policies to form target policies. The evaluation is done by performing OPE on different behavior-target policy pairs for each task. After that, the absolute error is measured for each OPE problem, and the median absolute error is used to evaluate the performance of an OPE algorithm on a task. We compare our method (CDSB) with the following baselines: Fitted Q-Evaluation (FQE), Model-Based (MB), and DICE. These baselines include both model-based and model-free methods. We follow the implementation of these baselines in Kostrikov & Nachum (2020).

Figure 1: Mean Absolute Error with Error Bar

Table 1: Summary of the offline datasets used

| | Reacher | Hopper | HalfCheetah | Walker |
|------------------|---------|--------|-------------|--------|
| State dim. | 11 | 11 | 17 | 17 |
| Action dim. | 2 | 3 | 6 | 6 |
| Number of episodes | 1M | 1M | 1M | 1M |
| Infinite horizon | yes | yes | yes | yes |
The summary statistics are displayed in Table 2. Our method achieves state-of-the-art performance, as measured by median absolute error, on two of the four OPE tasks. We also provide the mean absolute error with error bars in Figure 1 to show the robustness of each method.
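For completeness, the summary statistic in Table 2 is computed as one absolute error per behavior-target policy pair, then the median over pairs; a small sketch, with names of our choosing:

```python
import numpy as np

def median_abs_error(v_hat_per_pair, v_true_per_pair):
    """One entry per behavior-target pair; returns the task-level statistic."""
    errs = np.abs(np.asarray(v_hat_per_pair) - np.asarray(v_true_per_pair))
    return np.median(errs)
```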
5.2 Conditional Generation Details
In this section, we briefly describe the pipeline of the conditional diffusion Schrödinger bridge network. More details about the neural networks, training procedure, inference, baseline models, and evaluation can be found in Appendix.
As described in Section 3.3, we use two separate neural networks to model the forward and backward policies. The backward network needs to handle partially observed input and conduct conditional inference. More specifically, the backward policy has the form $\hat{Z}(r, X_r, M; \phi)$: it takes in the diffusion time, the noisy input, and the condition masks, and outputs the policy over the whole time window (its outputs at the condition positions are usually ignored). The forward network, which serves as an assistant for training the backward policy, does not need to process partial input, and we use a modified U-Net for it (Ronneberger et al., 2015). In both networks, the diffusion time is incorporated through an embedding. Similar to the design of Tashiro et al. (2021), the backward policy handles input with irregular conditions using a transformer, where the condition information is encoded through channel concatenation, feature index embeddings, and time index embeddings.
Table 2: OPE Evaluation Result
| Median Absolute Error | Reacher | Hopper | HalfCheetah | Walker |
|-----------------------|---------|--------|-------------|--------|
| FQE | 0.374 | 0.096 | **0.218** | 0.232 |
| MB | 0.336 | **0.064** | 0.286 | 0.781 |
| Dual Dice | 0.417 | 2.595 | 1.032 | 0.201 |
| CDSB (ours) | **0.318** | 1.0405 | 1.276 | **0.080** |
6 CONCLUSIONS
In this paper, we propose the CDSB estimator to solve off-policy evaluation under finite-horizon MDPs with a continuous and high-dimensional state space $S$. In comparison with traditional model-based approaches and classic model-free approaches such as importance sampling, our approach avoids the curse of horizon and dimensionality, with only polynomial dependence on the horizon $H$ and the dimension $d$, making it possible to solve the OPE problem efficiently under a complex state space $S$. Meanwhile, our estimator proves effective under a wide range of MDP settings, since it solely requires boundedness and smoothness of the transition and policy functions.
REFERENCES
Absolutely Continuous Curves in Pp(X) and the Continuity Equation, pp. 167–200. Birkhäuser Basel, Basel, 2005. ISBN 978-3-7643-7309-2. doi: 10.1007/3-7643-7309-1_10. URL https://doi.org/10.1007/3-7643-7309-1_10
M. Ehsan Abbasnejad, Qinfeng Shi, Anton van den Hengel, and Lingqiao Liu. A generative adversarial density estimator. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Kenneth Caluya and Abhishek Halder. Wasserstein proximal algorithms for the schrödinger bridge problem: Density control with nonlinear drift. IEEE Transactions on Automatic Control, PP:1–1, 02 2021. doi: 10.1109/TAC.2021.3060704.
Ricky T. Q. Chen, Jens Behrmann, David Duvenaud, and Jörn-Henrik Jacobsen. Residual Flows for Invertible Generative Modeling. Curran Associates Inc., Red Hook, NY, USA, 2019.
Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru R. Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions, 2023a.
Tianrong Chen, Guan-Hong Liu, and Evangelos A. Theodorou. Likelihood training of schrödinger bridge using forward-backward sdes theory, 2023b.
Yongxin Chen, Tryphon Georgiou, and Michele Pavon. Entropic and displacement interpolation: a computational approach using the hilbert metric. SIAM Journal on Applied Mathematics, 76(6): 2375–2396, 2016.
Yongxin Chen, Tryphon T. Georgiou, and Michele Pavon. Stochastic control liaisons: Richard sinkhorn meets gaspard monge on a schrödinger bridge. SIAM Review, 63(2):249–313, 2021. doi: 10.1137/20M1339982. URL https://doi.org/10.1137/20M1339982
Yu Chen, Wei Deng, Shikai Fang, Fengpei Li, Nicole Tianjiao Yang, Yikai Zhang, Kashif Rasul, Shandian Zhe, Anderson Schneider, and Yuriy Nevmyvaka. Provably convergent schrödinger bridge with applications to probabilistic time series imputation. arXiv preprint arXiv:2305.07247, 2023c.
Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695–17709, 2021.
George Deligiannidis, Valentin De Bortoli, and Arnaud Doucet. Quantitative uniform stability of the iterative proportional fitting procedure, 2021.
Tim Dockhorn, Arash Vahdat, and Karsten Kreis. Score-based generative modeling with critically-damped langevin diffusion. In *International Conference on Learning Representations*.
Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. Doubly robust policy evaluation and optimization. 2014.
Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. More robust doubly robust off-policy evaluation. In *International Conference on Machine Learning*, pp. 1447–1456. PMLR, 2018.
Chris Finlay, Augusto Gerolin, Adam M Oberman, and Aram-Alexandre Pooladian. Learning normalizing flows from entropy-kantorovich potentials. *arXiv preprint arXiv:2006.06033*, 2020.
Hans Föllmer. Random fields and diffusion processes. In Paul-Louis Hennequin (ed.), *École d’Été de Probabilités de Saint-Flour XV–XVII, 1985–87*, pp. 101–203, Berlin, Heidelberg, 1988. Springer Berlin Heidelberg. ISBN 978-3-540-46042-8.
Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, et al. Benchmarks for deep off-policy evaluation. In *International Conference on Learning Representations*, 2020.
Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning generative models with sinkhorn divergences. In *International Conference on Artificial Intelligence and Statistics*, pp. 1608–1617. PMLR, 2018.
Alison L. Gibbs and Francis Edward Su. On choosing and bounding probability metrics. *International Statistical Review / Revue Internationale de Statistique*, 70(3):419–435, 2002. ISSN 03067734, 17515823. URL [http://www.jstor.org/stable/1403865](http://www.jstor.org/stable/1403865).
Omer Gottesman, Yao Liu, Scott Sussex, Emma Brunskill, and Finale Doshi-Velez. Combining parametric and nonparametric models for off-policy evaluation. In *International Conference on Machine Learning*, 2019.
Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gómez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, et al. RL unplugged: Benchmarks for offline reinforcement learning. *arXiv preprint arXiv:2006.13888*, 394, 2020.
Zhaohan Daniel Guo, Philip S. Thomas, and Emma Brunskill. Using options and covariance testing for long horizon off-policy policy evaluation. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS’17, pp. 2489–2498, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.
Assaf Hallak, Francois Schnitzler, Timothy Mann, and Shie Mannor. Off-policy model-based learning under unknown factored dynamics. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pp. 711–719, Lille, France, 07–09 Jul 2015. PMLR. URL [https://proceedings.mlr.press/v37/hallak15.html](https://proceedings.mlr.press/v37/hallak15.html).
Josiah P. Hanna, Scott Niekum, and Peter Stone. Importance sampling policy evaluation with an estimated behavior policy. In *International Conference on Machine Learning*, 2018.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in Neural Information Processing Systems*, 33:6840–6851, 2020.
Aapo Hyvärinen. Estimation of non-normalized statistical models by score matching. *J. Mach. Learn. Res.*, 6:695–709, dec 2005. ISSN 1532-4435.
Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In *International Conference on Machine Learning*, pp. 9902–9915. PMLR, 2022.
Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. In *International Conference on Machine Learning*, pp. 652–661. PMLR, 2016.
|
eUgS9Ig8JG
|
This calls into question the validity of the example in Section 4.1. You say that all simplices are given a feature value given by some scalar $a$ -- yet, the matrices acting on these feature vectors/matrices have an orientation associated to them. It seems as if you are using an *oriented operator* to act on *unoriented features*.
|
SaNN: Simple Yet Powerful Simplicial-Aware Neural Networks
Sravanthi Gurugubelli & Sundeep Prabhakar Chepuri
Indian Institute of Science, Bangalore, Karnataka, India
{sravanthig,spchepuri}@iisc.ac.in
Abstract
Simplicial neural networks (SNNs) are deep models for higher-order graph representation learning. SNNs learn low-dimensional embeddings of simplices in a simplicial complex by aggregating features of their respective upper, lower, boundary, and coboundary adjacent simplices. The aggregation in SNNs is carried out during training. Since the number of simplices of various orders in a simplicial complex is significantly large, the memory and training-time requirement in SNNs is enormous. In this work, we propose a scalable simplicial-aware neural network (SaNN) model with a constant run-time and memory requirements independent of the size of the simplicial complex and the density of interactions in it. SaNN is based on pre-aggregated simplicial-aware features as inputs to a neural network, so it has a strong simplicial-structural inductive bias. We provide theoretical conditions under which SaNN is provably more powerful than the Weisfeiler-Lehman (WL) graph isomorphism test and as powerful as the simplicial Weisfeiler-Lehman (SWL) test. We also show that SaNN is permutation and orientation equivariant and satisfies simplicial-awareness of the highest order in a simplicial complex. We demonstrate via numerical experiments that despite being computationally economical, the proposed model achieves state-of-the-art performance in predicting trajectories, simplicial closures, and classifying graphs.
1 Introduction
Graph Neural Network (GNN) models are extensively used for analyzing graph-structured data by embedding nodes as points in Euclidean space through neighborhood feature aggregation (Hamilton et al., 2017; Leskovec & Jegelka, 2019; Velickovic et al., 2017; Rossi et al., 2020; Chen et al., 2020). The expressive power of a GNN model is often benchmarked against the Weisfeiler-Lehman (WL) isomorphism test (Lehman & Weisfeiler, 1968), and GNN architectures that are as powerful (in terms of expressiveness) as the WL test can be designed by appropriately choosing the aggregation functions (Leskovec & Jegelka, 2019). However, an inherent limitation of any graph-based neural model lies in its ability to encode only pairwise interactions between entities. In many real-world scenarios, interactions transcend pairwise relationships; for instance, group-based interactions are seen in biochemistry (e.g., reactions between reagents), social networks (e.g., interactions between friends), and trade networks (e.g., interactions between buyers, suppliers, and intermediaries), to name a few. Such supra-pairwise interactions can be effectively captured using simplicial complexes, a higher-order generalization of graphs.
A simplicial complex consists of simplices of various orders, including 0-simplices (or nodes), 1-simplices (or edges), 2-simplices (or triangles), and more generally, $k$-simplices (or simplices of order $k$). Higher-order simplices may also be oriented, ensuring a consistent node arrangement within each simplex, facilitating tasks like determining information flow directions along the simplices. In simplicial complexes, $k$-simplices have four types of adjacent simplices: boundary-adjacent ($(k - 1)$-simplices), co-boundary-adjacent ($(k + 1)$-simplices), upper-adjacent, and lower-adjacent ($k$-simplices). Similar to how GNNs create meaningful embeddings of nodes via sequential neighborhood aggregation, simplicial neural networks (SNNs) generate embeddings for all the $k$-simplices within a simplicial complex. Unlike GNNs that aggregate attributes only from adjacent nodes, SNNs leverage information from upper, lower, boundary, and co-boundary adjacent simplices of a $k$-simplex, enabling them to capture higher-order interactions and generate more expressive
embeddings compared to GNNs (Bodnar et al., 2021). While the expressive power of GNNs is evaluated through the WL test, Bodnar et al. (2021) introduces a theoretical characterization framework for SNNs, namely, the simplicial Weisfeiler-Lehman (SWL) test, which is provably more powerful than the WL test. Further, it is shown that under some conditions on the aggregation functions, SNNs are strictly more powerful than the WL test and are as powerful as the SWL test (Bodnar et al., 2021). Certain SNN models, such as Bodnar et al. (2021) and Roddenberry et al. (2021), also showcase properties like simplicial awareness, permutation equivariance, and orientation equivariance.
An important limitation of current SNN models (Ebli et al., 2020; Bunch et al., 2020; Bodnar et al., 2021; Roddenberry et al., 2021; Yang et al., 2022b) is the necessity of sequential feature aggregation during training, which incurs significant memory and training-time requirements. In this paper, we propose the simplicial-aware neural network (SaNN), a simpler model with a strong simplicial inductive bias and constant training-time and memory requirements (independent of the number of interacting simplices), obtained by augmenting pre-aggregated features as inputs to a neural model, such as a multi-layer perceptron (MLP).
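To make the idea concrete, the following is a hedged sketch of the kind of precomputation SaNN performs: hop-wise features of $k$-simplices are aggregated from their boundary-, co-boundary-, upper-, and lower-adjacent simplices using the boundary matrices $B_k$, entirely before training, and the concatenated features are then fed to a plain MLP. The aggregation below uses unweighted sums purely for illustration; the paper's exact aggregation and combination functions are prescribed later and may differ.

```python
import numpy as np

def precompute_sann_features(B, X, hops=2):
    """B: dict, B[k] is the boundary matrix of shape (n_{k-1}, n_k), k >= 1;
    X: list, X[k] is the (n_k, d) input feature matrix of k-simplices."""
    K = len(X)
    hop_feats = [X]                               # hop 0: raw features
    for _ in range(hops):
        prev, cur = hop_feats[-1], []
        for k in range(K):
            agg = np.zeros_like(prev[k])
            if k > 0:                             # lower-adjacent + boundary
                agg += B[k].T @ (B[k] @ prev[k])  # lower Laplacian B_k^T B_k
                agg += B[k].T @ prev[k - 1]       # features of (k-1)-simplices
            if k + 1 < K:                         # upper-adjacent + co-boundary
                agg += B[k + 1] @ (B[k + 1].T @ prev[k])
                agg += B[k + 1] @ prev[k + 1]     # features of (k+1)-simplices
            cur.append(agg)
        hop_feats.append(cur)
    # concatenate hop-wise features per order; these go into a standard MLP
    return [np.concatenate([h[k] for h in hop_feats], axis=1) for k in range(K)]
```

Because these matrices are computed once before training, the MLP only ever sees fixed-width feature vectors, which is what makes training-time cost independent of the size and density of the simplicial complex.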
Contributions: The main contributions and results are summarized as follows:
• We devise a recursive aggregating method with no learnable parameters to compute features of simplices that are aware of their different hops of neighborhood. The proposed precomputation of features results in a training time that is almost constant for simplicial complexes of any size and order.
• We then prescribe conditions on the aggregation functions involved in generating the simplicial-aware features in SaNN to ensure that the embeddings generated by SaNN are provably as expressive as those generated by message-passing simplicial networks (MPSNs). We theoretically prove that SaNN, although a simpler model, is strictly more powerful than the WL test and as powerful as MPSN, or equivalently, the SWL test (Bodnar et al., 2021).
• We also theoretically show that SaNN is permutation equivariant, orientation equivariant, and satisfies simplicial awareness of the highest order in a simplicial complex while being computationally efficient.
We demonstrate the efficacy of SaNN on three applications: trajectory prediction, simplicial closure (i.e., higher-order link) prediction, and graph classification. We observe that for all three applications, SaNN outperforms non-deep baselines while being competitive with existing SNN models at a much smaller run-time. Specifically, for simplicial closure prediction on large datasets with about a million simplices of various orders, existing SNN models run out of memory.
We next discuss a few works that are related to the proposed work.
Related Works: While existing SNN models (Bodnar et al., 2021; Ebli et al., 2020; Roddenberry et al., 2021; Bunch et al., 2020) differ in how the features of neighborhood simplices are aggregated and how the aggregated features from different types of neighborhoods are combined, Yang et al. (2022b) addresses scalability issues in higher-order graph-based neural networks. Unlike existing SNN models, it relaxes the fundamental definition of simplicial complexes, namely, the inclusion principle (subsets of simplices are simplices in the simplicial complex), to reduce runtime complexity. For tasks like predicting simplicial closure, the inclusion principle cannot be ignored (Benson et al., 2018). For instance, in predicting whether a simplex is likely to form between three authors who have not all collaborated before, Benson et al. (2018) demonstrates that the frequency of collaboration between any two authors in the group positively affects the closure of the triangle. Moreover, Yang et al. (2022b) assumes all simplices in a complex are unoriented, rendering it unsuitable for tasks dependent on relative orientation, such as trajectory prediction (Roddenberry et al., 2021).
---
1 Simplicial-awareness ensures the dependence of embeddings on the simplices of all orders within a simplicial complex.
2 Permutation equivariance means that the embeddings of simplices remain unaltered even as the simplices are reordered, an important attribute for modeling complex structures like graphs and simplicial complexes.
3 Orientation equivariance implies that changing the relative orientations of certain simplices will only lead to the output embeddings of those simplices being the sign-inverted versions of the embeddings prior to the orientation change.
The proposed work is inspired by graph-based neural network models that attempt to simplify GNN models to improve their scalability. Graph-augmented multi-layer perceptrons (GAMLPs) (Chen et al., 2020), the simple and parallel graph isomorphism network (SPIN) (Doshi & Chepuri, 2022), scalable inception graph neural networks (SIGN) (Rossi et al., 2020), and simple graph convolutional networks (SGCNs) (Wu et al., 2019) are some examples of scalable and efficient GNN models. GAMLPs, SPIN, and SIGN are related to SaNN in that they compute the node features as a preprocessing step before the training procedure. However, they are limited to generating node embeddings by accounting only for pairwise relations. Furthermore, the notion of a neighborhood in SaNN differs from that in GAMLPs, SPIN, or SIGN. A direct extension of the precomputation step in GAMLPs, SPIN, or SIGN to simplicial complexes would be to precompute the features of simplices using integer powers of the so-called Hodge Laplacian matrix (the generalization of the graph Laplacian to simplicial complexes, used as an aggregation operator in Ebli et al., 2020; Bunch et al., 2020). However, this does not account for information in the boundary and co-boundary adjacent simplices, which, as we prove in this work, is required for an efficient model that is as powerful as the SWL test. Specifically, we theoretically prove that for a specific choice of the functions involved in generating embeddings, SaNN is strictly more powerful than the existing GNN models; in other words, SaNN is, implicitly, strictly more powerful than GAMLPs, SPIN, or SIGN. This also implies that even the node embeddings (i.e., embeddings of 0-simplices) from SaNN are more expressive than the 1-WL test or, equivalently, GNNs. Using the node embeddings alone, SaNN distinguishes a broader class of graphs than those distinguishable by GNNs. In summary, our proposed SNN model, SaNN, is significantly faster than existing SNN models while preserving the definition of simplicial complexes and the expressive power of the current SNN models.
2 BACKGROUND
In this section, we mathematically describe simplicial complexes and SNNs.
**Simplicial Complex:** Let \( V = \{v_0, v_1, \ldots, v_N\} \) be the node set of cardinality \( N + 1 \). A simplicial complex \( K \) is a collection of non-empty subsets of \( V \) with the inclusion property that if \( \sigma \) is an element of \( K \), then every subset of \( \sigma \) is an element of \( K \). A \( k \)-dimensional element \( \sigma = \{v_0, v_1, \ldots, v_k\} \) of \( K \) with cardinality \( k + 1 \) is called a \( k \)-simplex. Each simplex has an orientation defined by a standardized vertex order, typically ascending or descending, which establishes a reference node arrangement within the simplex. The number of \( k \)-simplices in \( K \) is denoted by \( N_k \). A simplicial complex is said to have order \( K \) if the cardinality of its largest simplex is \( K + 1 \); such a simplicial complex is also referred to as a \( K \)-simplicial complex.
A \( k \)-simplex \( \sigma_k \) has four kinds of adjacent simplices, namely, boundary adjacent, co-boundary adjacent, upper adjacent, and lower adjacent simplices. The incidence relationship between \((k - 1)\)-simplices and \( k \)-simplices along with their relative orientations can be represented by the oriented incidence matrix \( B_k \in \mathbb{R}^{N_{k-1} \times N_k} \). The \((i, j)\)th entry of \( B_k \) is non-zero if the \( i \)th \((k - 1)\)-simplex is a boundary simplex of the \( j \)th \( k \)-simplex. The non-zero entries of an oriented incidence matrix \( B_k \) can be either \(+1\) or \(-1\), reflecting the relative orientations of the \( i \)th \((k - 1)\)-simplex and the \( j \)th \( k \)-simplex. For unoriented simplices, we use unoriented incidence matrices in which the \((i, j)\)th entry is \(1\) if the \( i \)th \((k - 1)\)-simplex is a boundary simplex of the \( j \)th \( k \)-simplex, and \(0\) otherwise.
The upper and lower adjacencies of \( k \)-simplices can be defined using the upper and lower Laplacian matrices as \( A_{k,U} = B_{k+1} B_{k+1}^T \in \mathbb{R}^{N_k \times N_k} \) and \( A_{k,L} = B_k^T B_k \in \mathbb{R}^{N_k \times N_k} \), respectively. For convenience, we also define the boundary and co-boundary aggregation matrices \( A_{k-1,B} = B_k^T \in \mathbb{R}^{N_k \times N_{k-1}} \) and \( A_{k+1,C} = B_{k+1} \in \mathbb{R}^{N_k \times N_{k+1}} \). The \( t \)-hop upper, lower, boundary, and co-boundary neighbors of a simplex \( \sigma_k \) are denoted by \( U^{(t)}(\sigma_k) \), \( L^{(t)}(\sigma_k) \), \( B^{(t)}(\sigma_k) \), and \( C^{(t)}(\sigma_k) \), respectively. For example, \( U^{(1)}(\sigma_0) \) for a 0-simplex \( \sigma_0 \) is simply the (upper adjacent) neighborhood of the node \( \sigma_0 \).
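To make these definitions concrete, the following minimal sketch (a hypothetical toy complex, unoriented for simplicity; not part of the original text) builds the incidence and aggregation matrices:

```python
import numpy as np

# Toy 2-simplicial complex: nodes {0,1,2,3}, four edges, one triangle.
# B_k is N_{k-1} x N_k with entry (i, j) = 1 if the i-th (k-1)-simplex
# is a boundary simplex of the j-th k-simplex (unoriented case).
edges = [(0, 1), (0, 2), (1, 2), (1, 3)]
triangles = [(0, 1, 2)]

B1 = np.zeros((4, len(edges)))                # nodes x edges
for j, (u, v) in enumerate(edges):
    B1[u, j] = B1[v, j] = 1.0

B2 = np.zeros((len(edges), len(triangles)))   # edges x triangles
for j, tri in enumerate(triangles):
    for i, edge in enumerate(edges):
        if set(edge) <= set(tri):
            B2[i, j] = 1.0

# Aggregation matrices for the edges (k = 1), cf. the definitions above.
A1_U = B2 @ B2.T      # upper adjacency via shared triangles
A1_L = B1.T @ B1      # lower adjacency via shared nodes
A0_B = B1.T           # A_{0,B}: boundary aggregation onto edges
A2_C = B2             # A_{2,C}: co-boundary aggregation onto edges
```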
**Simplicial Neural Networks:** Let us denote the attributes of \( k \)-simplices by \( X_k \in \mathbb{R}^{N_k \times D_k} \) and the \( D_k \)-dimensional feature of a \( k \)-simplex \( \sigma_k \) as \( X_k[\sigma_k] \). The sign of feature \( X_k[\sigma_k] \) is determined based on the reference orientation of \( \sigma_k \). For unoriented features, there is no need for a reference orientation of simplices. In such cases, we consider unoriented incidence matrices. SNNs take the simplicial complex structure and the data matrices \( X_k \) for \( k = 0, \ldots, K \) as input, and update the embeddings of the simplices by aggregating embeddings of their adjacent simplices. A generic way of expressing the update rule of the embeddings of \( k \)-simplices by the existing SNN models in the \( l \)th layer is given by
\[
H_k^{(l+1)} = \phi \left[ \psi \left( A_{k,S} H_k^{(l)} W_S^{(l)}, A_{k,U} H_k^{(l)} W_U^{(l)}, A_{k,L} H_k^{(l)} W_L^{(l)}, A_{k-1,B} H_{k-1}^{(l)} W_B^{(l)}, A_{k+1,C} H_{k+1}^{(l)} W_C^{(l)} \right) \right],
\]
(1)
where $\phi$ is a non-linear function (e.g., ReLU or sigmoid), $\psi$ is a combining function (e.g., summation or concatenation) that combines information from the different neighborhoods, and $H_k^{(0)} = X_k$. The matrix $A_{k,S}$ lets the model include the self-embeddings of simplices while updating their respective embeddings. Specifically, the matrix $A_{k,S}$ is taken as the identity matrix $I$ if self-embeddings are to be accounted for, and is otherwise set to an all-zero matrix $0$. The matrices $\{W_S^{(l)}, W_U^{(l)}, W_L^{(l)}, W_B^{(l)}, W_C^{(l)}\}$ are learnable weight matrices of the $l$th layer. A model with $L$ such layers in cascade sequentially learns embeddings of $k$-simplices using information from their $L$-hop neighborhood.
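As an illustration, one layer of (1) can be sketched as follows, taking $\psi$ to be concatenation and $\phi$ to be ReLU (only one admissible choice among those used by the models discussed below):

```python
import numpy as np

def snn_layer(H_km1, H_k, H_kp1, A, W):
    """One generic SNN layer, cf. (1). A maps the neighborhood type to
    its aggregation matrix (Section 2); W maps it to a learnable weight
    matrix. Here psi is concatenation and phi is ReLU."""
    parts = [
        A["S"] @ H_k @ W["S"],      # self-embeddings (A_{k,S} = I or 0)
        A["U"] @ H_k @ W["U"],      # upper-adjacent k-simplices
        A["L"] @ H_k @ W["L"],      # lower-adjacent k-simplices
        A["B"] @ H_km1 @ W["B"],    # boundary (k-1)-simplices, A_{k-1,B}
        A["C"] @ H_kp1 @ W["C"],    # co-boundary (k+1)-simplices, A_{k+1,C}
    ]
    return np.maximum(np.concatenate(parts, axis=1), 0.0)  # phi = ReLU
```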
Different choices of the functions $\phi$ and $\psi$, along with the aggregating matrices, result in various SNN models, namely, MPSN (Bodnar et al., 2021), SCoNe (Roddenberry et al., 2021), SCNN (Yang et al., 2022a), S2CNN (Bunch et al., 2020), and SNN (Ebli et al., 2020). More details about the specific choices of the functions in each of the SNN models are provided in Appendix A.
3 THE PROPOSED MODEL
The key idea behind the proposed model is to precompute simplicial structure-aware features by aggregating initial features of simplices of different orders from different neighborhood hops. The aggregated features are then transformed using nonlinear functions to obtain embeddings for simplices of different orders. The generic embeddings can be used to obtain task-specific embeddings for the desired application. The proposed generic SaNN model for each of the $k$-simplices has the following two main components.
Precomputing Simplicial-aware Features: Let us collect the feature vectors in the $t$-hop upper neighborhood of a $k$-simplex $\sigma_k$ in the multiset $X_{k,U}^{(t)}[\sigma_k] = \{X_k[\tau], \forall \tau \in U^{(t)}(\sigma_k)\}$. Similarly, define the following multisets $X_{k,L}^{(t)}[\sigma_k] = \{X_k[\tau], \forall \tau \in L^{(t)}(\sigma_k)\}$, $X_{k,B}^{(t)}[\sigma_k] = \{X_k[\tau], \forall \tau \in B^{(t)}(\sigma_k)\}$, and $X_{k,C}^{(t)}[\sigma_k] = \{X_k[\tau], \forall \tau \in C^{(t)}(\sigma_k)\}$. We sequentially pre-compute simplicial-aware features from different neighborhood depths as follows. We update the feature vector of a $k$-simplex $\sigma_k$ by aggregating $t$-hop information of $\sigma_k$, using the following partial updates that are dependent on $(t - 1)$-hop aware features as
$$Y_{k,U}^{(t)}[\sigma_k] = f_{k,U}(X_{k,U}^{(t-1)}[\sigma_k]), \quad Y_{k,L}^{(t)}[\sigma_k] = f_{k,L}(X_{k,L}^{(t-1)}[\sigma_k]),$$
$$Y_{k,B}^{(t)}[\sigma_k] = f_{k,B}(X_{k,B}^{(t-1)}[\sigma_k]), \quad Y_{k,C}^{(t)}[\sigma_k] = f_{k,C}(X_{k,C}^{(t-1)}[\sigma_k]), \quad (2)$$
where $f_{k,n} : X_{k,n}^{(t)}[\sigma_k] \rightarrow \mathbb{R}^{D_k}$, for $n \in \{U, L, B, C\}$, are the neighborhood aggregation functions that aggregate features of the $k$-order upper, $k$-order lower, $(k - 1)$-order boundary, and $(k + 1)$-order co-boundary adjacent simplices, respectively, of any $k$-simplex. The final aggregated $t$-hop aware embedding of a $k$-simplex $\sigma_k$ is computed by combining the partial updates obtained from the $(t - 1)$-hop aware embeddings as
$$X_k^{(t)}[\sigma_k] = \phi \left(Y_{k,U}^{(t)}[\sigma_k], Y_{k,L}^{(t)}[\sigma_k], Y_{k,B}^{(t)}[\sigma_k], Y_{k,C}^{(t)}[\sigma_k]\right) \quad (3)$$
for $t = 0, 1, \cdots , T$ with $X_k^{(0)} = X_k$ as the initial features, where $\phi$ is a function that combines aggregated features from the four types of neighborhoods and $T$ denotes the depth of neighborhood information considered by SaNN (which is analogous to the number of layers in SNNs). For any $k$-simplex $\sigma_k \in K$, the aggregated feature vectors $X_k^{(t)}[\sigma_k]$ can be efficiently precomputed for all $k$ and $t$ outside the training process as no learnable parameters are involved, but they have a strong simplicial-structural inductive bias. The aggregation scheme in an SaNN model is illustrated in Fig. 1.
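A minimal sketch of this precomputation loop is given below, with summation for the $f_{k,n}$ and concatenation for $\phi$ (the choices analyzed in Section 4.1); the container `A` of aggregation matrices is a hypothetical input holding the matrices of Section 2.

```python
import numpy as np

def precompute(X, A, T):
    """Precompute t-hop-aware features X_k^{(t)} for all orders k and
    t = 0, ..., T, cf. (2)-(3). X[k] is the N_k x D_k input feature
    matrix; A[k]["U"/"L"/"B"/"C"] are the aggregation matrices of order
    k. No learnable parameters: this runs once, before training."""
    K = len(X) - 1
    feats = {(k, 0): X[k] for k in range(K + 1)}
    for t in range(1, T + 1):
        for k in range(K + 1):
            parts = [A[k]["U"] @ feats[(k, t - 1)],   # Y_{k,U}^{(t)}
                     A[k]["L"] @ feats[(k, t - 1)]]   # Y_{k,L}^{(t)}
            if k > 0:                                 # boundary exists
                parts.append(A[k]["B"] @ feats[(k - 1, t - 1)])
            if k < K:                                 # co-boundary exists
                parts.append(A[k]["C"] @ feats[(k + 1, t - 1)])
            feats[(k, t)] = np.concatenate(parts, axis=1)  # phi = concat
    return feats
```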
Learning from Simplicial-aware Features: We use learnable nonlinear transformation functions, $g_k^{(t)}(\cdot)$, to transform the precomputed features into embeddings that are suitable for the task at hand as
$$S_k^{(t)} = g_k^{(t)}(X_k^{(t)}) \quad (4)$$
for $k = 0, 1, \cdots , K$ and $t = 0, 1, 2, \cdots , T$. We finally combine the embeddings of $k$-simplices from different hops of neighborhood as
$$H_k^{(t)} = \Theta_k \left(S_k^{(0)}, S_k^{(1)}, \cdots, S_k^{(t)}\right), \quad (5)$$
where $\Theta_k$ is a combination function.
Figure 1: Simplicial-aware neighborhood aggregation involves iterative aggregation (no learnable parameters) from the upper, lower, boundary, and co-boundary simplices of $k$-simplices.
Figure 2: Feature transformation blocks of SaNN compute the matrix embedding $H_k$ of $k$-simplices from the precomputed features. Here, we use plate notation, with $K + 1$ at the bottom of the plate denoting the presence of $K + 1$ such blocks for $k = 0, \ldots , K$.
4 A multiset is a collection of elements that accommodates duplicate elements while accounting for the multiplicities of each element.
The transformation of the precomputed features and the combination of features from different depths in an SaNN model are summarized in Fig. 2. We denote the final embedding of a $k$-simplex $\sigma_k \in K$ generated by SaNN using the information from its $0, 1, \ldots, T$-hop neighborhoods as $H_k^{(T)}[\sigma_k] = H_k[\sigma_k]$.
**Computational Complexity:** The SaNN model incurs a significantly lower time complexity than the existing SNN models because the aggregated features from different hops are precomputed while still carrying a simplicial-structural inductive bias. Specifically, the existing $T$-layer SNN models have an overall time complexity of about $O(T((2N_k^2D_k + N_kN_{k-1}D_{k-1} + N_kN_{k+1}D_{k+1}) + (3N_kD_k^2 + N_kD_{k-1}^2 + N_kD_{k+1}^2)))$, while an SaNN model (an example architecture is provided in Section 4.1) capturing information from the $0, \ldots, T$-hop neighborhoods of $k$-simplices has a significantly smaller time complexity of $O(T(3N_kD_k^2 + N_kD_{k-1}^2 + N_kD_{k+1}^2))$. More details on the contribution of different components of SNNs and SaNN to their respective computational complexities are provided in Appendix B. In Fig. 3, we show a comparison of the average run-time measurements of SaNN and MPSN on an example dataset. We observe that the average run-time of the proposed model for a forward pass is almost constant for simplicial complexes of any size and increases only slightly with simplices of higher orders, whereas the run-time of MPSN increases drastically when simplices of higher orders are considered.
In what follows, we theoretically characterize the expressive power of SaNN.
4 THEORETICAL CHARACTERIZATION OF SaNN
Given the computational advantage of the SaNN model, we now analyze its expressive power. Specifically, we characterize the discriminative power of SaNN with respect to the WL and SWL tests (see Appendix C for more details about the SWL test). The following theorem states the conditions under which the SaNN model is more powerful than the WL test.
**Theorem 4.1.** SaNN is strictly more powerful than the WL test in distinguishing non-isomorphic graphs under a clique-complex lifting if all the functions involved in generating the node embeddings, namely, $f_{0,U}(\cdot)$, $f_{0,C}(\cdot)$, $\phi(\cdot)$, $g_0^{(t)}(\cdot)$, and $\Theta_0(\cdot)$ for $t = 0, \ldots, T$, are injective.
Although MPSN is provably more powerful than the WL test, it follows the same sequential approach of aggregating transformed features as the WL test. The proposed model, however, does not have the same form as the WL test, since it is based on pre-aggregating features from different hops. Hence, the proof comparing the expressive powers of SaNN and the WL test is not trivial. The proof of Theorem 4.1 is provided in Appendix D. To prove the theorem, we first show that SaNN is at least as powerful as the WL test by showing that if the node embeddings of two nodes generated by SaNN are equal, then the WL colorings of the two nodes are equal. We then give an example where SaNN, using higher-order information, distinguishes two graphs that the WL test cannot, showing that SaNN is strictly more powerful than the WL test. Thus, for appropriately chosen functions in Equations (3), (4), and (5), SaNN is provably more powerful than the WL test.
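Since the argument hinges on WL color refinement, a minimal sketch of the 1-WL test (standard background, not from the paper) is given below; differing final color histograms certify that two graphs are non-isomorphic, while equal histograms are inconclusive.

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    """1-WL color refinement. adj maps each node to its neighbor list.
    Each round, a node's new color is an injective relabeling of
    (own color, sorted multiset of neighbor colors)."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return Counter(colors.values())  # graph-level color histogram
```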
The theorem implies that an arbitrary extension of GAMLPs (Chen et al., 2020), SPIN (Doshi & Chepuri, 2022), or SIGN (Rossi et al., 2020) to higher-order simplices does not, by itself, yield node embeddings with expressive power superior to GNNs. In one possible extension, we could replace the integer powers of the adjacency matrices in these graph models with those of the Hodge Laplacian matrices, which generalize the graph Laplacian to simplicial complexes and are defined as the sum of the upper and lower Laplacians. However, even with this modification, the expressive power of these extended models does not surpass that of GNNs.
The theorem states that the node embeddings from the proposed model, under the conditions given in Theorem 4.1, are more expressive than those from the WL test (or, equivalently, any of the existing GNN models and their scalable versions). Even for a graph with no higher-order interactions, the node embeddings (of $0$-simplices) from SaNN have better expressive power than scalable GNNs because SaNN incorporates information from the co-boundary adjacent simplices (edges) while learning the embedding of a node, and the edges in turn carry information from their co-boundary adjacent simplices (triangles), and so on.


An illustration visualizing the expressive power of SaNN in comparison to the WL test is provided in Fig. 4. We present one instance of a pair of clique-lifted graphs to which the WL test assigns the same representations, thereby failing to identify the two graphs as non-isomorphic, while SaNN assigns different representations to the two graphs and distinguishes them. To assign a representation to a graph as a whole, we follow the usual procedure of constructing the histogram of the node representations. The histograms of colors assigned by the WL test and of embeddings assigned by SaNN for the first three iterations are shown next to the two graphs. The histogram of colors assigned by the WL test is the same for both graphs; however, as shown in the figure, SaNN generates different embeddings for the two graphs.
In the next theorem, we provide the conditions on the functions such that SaNN is as powerful as the SWL test.
**Theorem 4.2.** SaNN is as powerful as the SWL test in distinguishing non-isomorphic simplicial complexes if the functions involved in generating the embeddings of the simplices, namely, \( f_{k,U}(\cdot), f_{k,L}(\cdot), f_{k,B}(\cdot), f_{k,C}(\cdot), \phi(\cdot), g_k^{(t)}(\cdot) \), and \( \Theta_k(\cdot) \) for \( t = 0, \ldots, T \) and \( k = 0, \ldots, K \), where \( T \) is the depth of the neighborhood information considered by SaNN in generating embeddings of simplices of each of the \( K + 1 \) orders, are injective.
We prove Theorem 4.2 in Appendix E. To prove the above theorem, we propose a simpler yet equally expressive alternative representation of the SWL update. Using the alternative update rule, we first prove that SaNN is at most as powerful as the SWL test by showing that if the colors assigned by SWL are the same for two simplices, then SaNN also generates the same embeddings for the two simplices. We then prove that SaNN is at least as powerful as the SWL test by showing that if the embeddings generated by SaNN are the same for two simplices, then SWL also generates the same colors for the two simplices.
The theorem states that the embeddings of simplices from the proposed computationally efficient method, under the conditions given in Theorem 4.2, are as expressive as those from Bodnar et al. (2021), which is proved to be as powerful as the SWL test. While the SWL test and Bodnar et al. (2021) are both based on the sequential approach of aggregating transformed features, the pre-aggregated features are transformed only during training in the proposed method. Despite avoiding the non-linear transformation of features in every iteration of feature aggregation, we prove that SaNN is as powerful as the SWL test.
To conclude, it is sufficient to limit the choice of aggregator and transformation functions to those recommended by Theorems 4.1 and 4.2 to design an SaNN model that is guaranteed to be more powerful than the WL test and as powerful as the SWL test, respectively. In what follows, we discuss a few such choices of the aggregator and transformation functions.
4.1 An Example SaNN Architecture
In this section, we discuss example functions that fulfill the conditions outlined in Theorem 4.2 and demonstrate that SaNN is as powerful as the SWL test. The SWL test [cf. Appendix C] distinguishes non-isomorphic simplicial complexes based on structure, assuming uniform initial features (colors) across all simplices. To establish equivalence with the SWL test, we consider simplicial complexes with a uniform scalar feature \( a \) on all simplices, without attributing any orientation to the features or the simplices. However, it is worth noting that SaNN can also process oriented features in practice, in which case we work with oriented simplicial complexes (or incidence matrices) as defined in Section 2.
**Precomputing Simplicial-aware Features:** Consider simplicial complexes with the same scalar feature \( a \) as the initial feature on all the simplices. In this scenario, one choice of the aggregation functions \( f_{k,U}(\cdot), f_{k,L}(\cdot), f_{k,B}(\cdot), \) and \( f_{k,C}(\cdot) \) that preserves injectivity is the summation function. The summation of the embeddings of upper, lower, boundary, and co-boundary adjacent simplices can be computed efficiently using the (sparse) aggregation matrices defined in Section 2. Specifically, we compute the partial updates in (2) for summation-based aggregation recursively as
\[
Y^{(t)}_{k,U} = A_{k,U} X^{(t-1)}_k, \quad Y^{(t)}_{k,L} = A_{k,L} X^{(t-1)}_k, \quad Y^{(t)}_{k,B} = A_{k-1,B} X^{(t-1)}_{k-1}, \quad Y^{(t)}_{k,C} = A_{k+1,C} X^{(t-1)}_{k+1}.
\]
(6)
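These updates are plain (sparse) matrix products; a minimal sketch with SciPy sparse matrices and a randomly generated toy incidence structure (purely illustrative, with hypothetical sizes) is:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
N0, N1, N2, D = 20, 60, 30, 8                  # toy simplex counts, width

def random_incidence(m, n):
    """Stand-in 0/1 incidence matrix; a real B_k comes from the complex."""
    B = sp.random(m, n, density=0.1, format="csr", random_state=0)
    B.data[:] = 1.0
    return B

B1, B2 = random_incidence(N0, N1), random_incidence(N1, N2)
X0, X1, X2 = (rng.standard_normal((n, D)) for n in (N0, N1, N2))

# Summation-based partial updates of (6) for the edges (k = 1).
Y_U = (B2 @ B2.T) @ X1    # upper-adjacent sum, A_{1,U}
Y_L = (B1.T @ B1) @ X1    # lower-adjacent sum, A_{1,L}
Y_B = B1.T @ X0           # boundary sum, A_{0,B} = B_1^T
Y_C = B2 @ X2             # co-boundary sum, A_{2,C} = B_2
```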
Another common choice for neighborhood aggregation in GNNs and SNNs is degree-based weighted summation. To implement degree-based weighted summation of neighboring embeddings, we use the following normalized incidence matrices:
\[
B_{k,U} = D_k^{-1/2} A_{k,U} D_k^{-1/2}, \quad B_{k,L} = D_k^{-1/2} A_{k,L} D_k^{-1/2}, \quad B_{k,B} = D_{k,k-1}^{-1} A_{k-1,B}, \quad B_{k,C} = D_{k,k+1}^{-1} A_{k+1,C}.
\]
Here, \( D_k \in \mathbb{R}^{N_k \times N_k} \) is a diagonal matrix whose \((i,i)\)th entry is the number of \( k \)-simplices that are upper and lower adjacent neighbors of the \( i \)th \( k \)-simplex, \( D_{k,k-1} \in \mathbb{R}^{N_k \times N_k} \) is a diagonal matrix whose \((i,i)\)th entry is the number of \((k-1)\)-simplices that are boundary adjacent neighbors of the \( i \)th \( k \)-simplex, and \( D_{k,k+1} \in \mathbb{R}^{N_k \times N_k} \) is a diagonal matrix whose \((i,i)\)th entry is the number of \((k+1)\)-simplices that are co-boundary adjacent neighbors of the \( i \)th \( k \)-simplex.
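A small sketch of these normalizations (a minimal illustration; the degree vectors are assumed to be given as defined above) is:

```python
import numpy as np

def sym_normalize(A, deg):
    """D^{-1/2} A D^{-1/2} for the square upper/lower aggregation
    matrices, where deg holds the diagonal entries of D_k."""
    d = np.zeros_like(deg, dtype=float)
    mask = deg > 0                      # avoid division by zero
    d[mask] = 1.0 / np.sqrt(deg[mask])
    return d[:, None] * A * d[None, :]

def row_normalize(A, deg):
    """D^{-1} A for the boundary/co-boundary aggregation matrices,
    i.e., an average over the boundary (co-boundary) neighbors."""
    d = np.zeros_like(deg, dtype=float)
    mask = deg > 0
    d[mask] = 1.0 / deg[mask]
    return d[:, None] * A
```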
Using normalized incidence matrices to aggregate features has the advantage of bringing all the features to the same scale, thus providing numerical stability. However, it is not always injective. For example, consider the two simplicial complexes in Fig. 5 with \( a \) as initial feature on simplices of all orders.
Although the triangles $\sigma_1$ and $\sigma_2$ in the two simplicial complexes have different numbers of lower adjacent triangles in their 1-hop neighborhoods, the two triangles are assigned the same lower-neighborhood-aggregated partial embedding by the degree-based weighted summation aggregator, which, in this case, is $a$. The summation aggregator, on the other hand, assigns two different lower-adjacent-neighborhood aggregated partial embeddings, namely $3a$ and $2a$, to the two triangles with different neighborhoods. In Appendix F, we give a generalized case where the degree-based weighted sum aggregator is not injective, i.e., where the proposed architecture assigns the same partial updates to $k$-simplices in simplicial complexes with different structures.
We obtain the final updated $t$-hop aware feature matrix of the $k$-simplices, denoted by $X_k^{(t)} \in \mathbb{R}^{N_k \times D_k^{(t)}}$, where $D_k^{(t)} = 2D_k^{(t-1)} + D_{k-1}^{(t-1)} + D_{k+1}^{(t-1)}$, by concatenating the partial updates in (6) as
$$X_k^{(t)} = \left[Y_{k,U}^{(t)}, Y_{k,L}^{(t)}, Y_{k,B}^{(t)}, Y_{k,C}^{(t)}\right], \quad (7)$$
i.e., we use an injective concatenation read-out function (other commonly used sum, mean, or max read-out functions are not injective). Next, we discuss some properties of the proposed aggregation method.
**Property 1.** The devised aggregation scheme is permutation and orientation equivariant.
We prove the equivariance of the aggregation method in Appendix C. We first prove the permutation (orientation) equivariance for one-hop aggregation, i.e., we show that if the ordering (orientations) of the input features to SaNN are altered by some permutation operator $P$ (orientation operator $O$, respectively), then the ordering (orientations) of the 1-hop aggregated features are changed by the same operator $P$ ($O$, respectively). We then prove, by induction, the permutation (orientation) equivariance of the proposed aggregation for any hop. The equivariance properties of the proposed aggregation method aid in the construction of a permutation and orientation equivariant SaNN, as we see in the next subsection.
Note that the proposed precomputation method is very different from that in GAMLPs, SPIN, or SIGN. The precomputation step in GAMLPs, SPIN, or SIGN involves taking integer powers of graph adjacency or Laplacian matrices to aggregate information from different hops of the neighborhood. However, such a straightforward precomputation step for simplicial complexes will not result in a model that captures information from different types and hops of neighborhood while being (i) as powerful as the SWL test, (ii) permutation equivariant, and (iii) orientation equivariant.
**MLPs for Transforming Precomputed Features:** Once the features are aggregated by preserving injectivity and ensuring equivariance properties, the next step is to transform the aggregated simplicial-aware features to more task-aware features using non-linear learnable functions. According to Theorem 4.2, the learnable functions should be injective in order for SaNN to be as powerful as the SWL test. Given the universal approximation ability and injectivity of multi-layer perceptrons (MLPs), we model the transformation functions $g_k^{(t)}$ for $t = 0, \ldots, T$ and $k = 0, \ldots, K$ using MLPs. Single-layer perceptrons, however, are not injective.
For $\Theta_k$ in (5), we use a concatenation read-out function to combine embeddings of simplices of any order $k$ and get the combined embeddings $H_k$ as
$$H_k = [S_k^{(0)}, S_k^{(1)}, \ldots, S_k^{(T)}].$$
Concatenating the embeddings results in an injective combination of the embeddings of simplices from different hops. Typically, to avoid over-smoothing, we use embeddings from only 2- or 3-hop neighborhoods, so concatenation does not result in very high-dimensional embeddings.
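A minimal PyTorch sketch of this transformation stage (hypothetical widths; the per-hop MLPs play the role of $g_k^{(t)}$ in (4) and the final concatenation that of $\Theta_k$) is:

```python
import torch
import torch.nn as nn

class SaNNHead(nn.Module):
    """Per-hop MLPs g_k^{(t)} followed by concatenation across hops.
    in_dims[t] is the width D_k^{(t)} of the precomputed X_k^{(t)}."""
    def __init__(self, in_dims, hidden=64, out=32):
        super().__init__()
        # tanh is an odd function, which keeps the head orientation
        # equivariant when the features are oriented (cf. Section 4.2).
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.Tanh(),
                          nn.Linear(hidden, out))
            for d in in_dims
        )

    def forward(self, feats):
        # feats[t]: N_k x in_dims[t] tensor of t-hop-aware features
        return torch.cat([g(x) for g, x in zip(self.mlps, feats)], dim=1)
```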
4.2 Properties of SaNN
We next prove that SaNN, with aggregation and transformation functions satisfying the requirements in Theorem 4.2 (as in the example architecture in Subsection 4.1), possesses all the equivariance properties that some of the existing SNNs, namely, Roddenberry et al. (2021) and Bodnar et al. (2021), possess. SaNN with summation-based neighborhood aggregation, feature transformation using MLPs, and concatenation as $\Theta_k$ has the following properties.
**Permutation equivariance:** As discussed in Property 1, the proposed aggregation scheme is permutation equivariant. With the transformation functions as MLPs and the functions that combine embeddings from different hops as concatenation, SaNN is a composition of permutation equivariant functions and hence is permutation equivariant.
**Orientation equivariance:** Using orientation equivariance of the proposed aggregation method [cf. Property 1] and the fact that an MLP is orientation equivariant if the activation functions in the MLP are odd, we prove that SaNN is orientation equivariant if all the activation functions involved in the network are odd in Appendix G.
**Simplicial-awareness:** Since the embeddings of SaNN depend on the boundary matrices of all orders, i.e., $B_k$ for $k \in \{1, \ldots, K\}$, of a $K$-simplicial complex, SaNN satisfies simplicial-awareness of order $K$.
5 EMPIRICAL EVIDENCE
In this section, we empirically evaluate SaNN in terms of its ability to perform the following three tasks: trajectory prediction, simplicial closure prediction, and graph classification.
5.1 Downstream Tasks
The details of the three tasks are discussed next.
Trajectory Prediction: Trajectory prediction involves predicting the next node in a sequence formed by a series of nodes connected by edges, with oriented flows on the edges. As the features are oriented, we use oriented incidence matrices for aggregation. We evaluate the trajectory prediction ability of SaNN on four datasets, namely, Ocean (Roddenberry et al., 2021), Synthetic (Roddenberry et al., 2021), Planar (Cordonnier & Loukas, 2018), and Mesh (Cordonnier & Loukas, 2018). We compare the trajectory prediction accuracy of SaNN with state-of-the-art SNN variants, namely, SCoNe (Roddenberry et al., 2021) and SCNN (Ebli et al., 2020), and a non-deep projection-based method, denoted by Projection, which projects the input edge features onto the kernel of the Hodge Laplacian (Schaub et al., 2020).
Simplicial-closure Prediction: The goal of simplicial closure prediction is to predict the closure of open simplices in a time series of simplicial complex data. We perform this task on the email-Eu, email-Enron, and contact-high-school datasets, as referenced in Benson et al. (2018). Our methodology involves a temporal split of the data for these datasets, using the initial 80% of the data to train the encoder. The remaining 20% of the data is set aside for inference. Given the highly skewed nature of the dataset, we employ the relative area under the precision-recall curve (AUC-PR) as the evaluation metric for model performance.
Graph Classification: Graph classification refers to classifying (clique-lifted) graphs as belonging to one of two or more known classes. For all graph classification experiments, we take the initial features on simplices to be the cumulative count of their lower and upper adjacent simplices. As these are unoriented, we use unoriented incidence matrices for aggregation. We compare the graph classification accuracy of SaNN with MPSN (Bodnar et al., 2021) and state-of-the-art GNN variants, DGCNN (Phan et al., 2018), GIN (Xu et al., 2019), and GraphSAGE (Hamilton et al., 2017), on benchmark datasets for binary as well as multi-class graph classification tasks from the chemical and social domains. Specifically, we evaluate on the following TUDatasets (Morris et al., 2020): Proteins, NCI1, IMDB-B, IMDB-M, Reddit-B, and Reddit-M.
5.2 Results and Discussion
Performance: We report the performance of SaNN for trajectory prediction, simplicial closure prediction, and graph classification in Tables 1, 2, and 3, respectively. We also provide the per epoch run-time values (in seconds) within parentheses. We experimentally observe that SaNN is almost as effective as the existing SNN models, which successively update the features of simplices at every layer, while being several times faster.
Insights: For trajectory prediction, SaNN is observed to outperform the projection-based method and performs competitively with the existing SNN models on all the datasets. The good performance of SaNN also signifies its effective use of the orientations of flows for trajectory prediction. We apply the existing SNN models and SaNN to simplicial closure prediction. The deep models are observed to perform exceptionally better than logistic regression. Of the three deep models, namely, MPSN, SCNN, and SaNN, MPSN is observed to have the best performance on some of the smaller datasets, namely, the High-School and Primary-School datasets. However, on datasets with a large number of simplices, since each layer of the existing SNN models scales approximately with the square of the number of simplices, the existing SNN variants quickly run out of memory. SaNN, on the other hand, performs competitively with the existing SNN models while being many times faster. The computational savings of SaNN are the most evident for this application as the
---
Details about the experimental setup, datasets, attributes, hyperparameters, evaluation metrics, training, validation, and test splits for the three tasks are provided in Appendix A.
Table 1: Trajectory prediction accuracies on various datasets. The first and the second best performances are highlighted in red and blue, respectively. The values within parentheses are the average per epoch run-time values (in seconds). The first terms in the runtime values of SaNN correspond to the precomputation times and the second terms to the per epoch training times. Run-time values of the non-deep baseline are indicated by —. The best run-time values are highlighted in bold.
Table 2: Relative AUC-PR values for simplicial closure prediction. The first and the second best performances are highlighted in red and blue, respectively. The values within parentheses are the average per epoch run-time values (in seconds). The first terms in the runtime values of SaNN correspond to the precomputation times and the second terms to the per epoch training times. The best run-time values are highlighted in bold. Out-of-memory results are indicated by --.
| Method | Enron | High-School | Primary-School | NDC-Classes | Math-SX |
|---------------|-------|-------------|----------------|-------------|---------|
| Random Baseline (RB) | 0.0537 | 0.0112 | 0.0105 | 0.2190 | 0.0202 |
| Log. Reg./RB | 0.55 ± 0.0 | 0.59 ± 0.0 | 1.79 ± 0.0 | 2.32 ± 0.3 | 0.65 ± 0.0 |
| MPSN (Bodnar et al., 2021)/RB | 14.51 ± 0.1 (413) | 30.83 ± 0.0 (389) | 33.05 ± 0.0 (401) | -- | -- |
| SCNN (Yang et al., 2022a)/RB | 14.17 ± 0.0 (17) | 20.52 ± 0.0 (401) | 26.19 ± 0.0 (1891) | -- | -- |
| SaNN/RB | 15.45 ± 0.0 (0.01, 3) | 30.22 ± 0.0 (0.05, 112) | 32.89 ± 0.0 (0.76, 916) | 2.79 ± 0.0 (0.26, 13) | 6.88 ± 0.0 (95.91, 52883) |
Table 3: Graph classification accuracies on various datasets. The first and the second best performances are highlighted in red and blue, respectively. The values within parentheses are the average per epoch run-time values (in seconds). The first terms in the runtime values of SPIN and SaNN correspond to the precomputation times and the second terms to the per epoch training times. The best run-time values are highlighted in bold.
| Method | Proteins | NCI1 | IMDB-B | Reddit-B | Reddit-M |
|---------------|----------|------|--------|----------|----------|
| MPSN (Bodnar et al., 2021) | 76.5 ± 3.4 (33) | 82.8 ± 2.2 (292) | 75.6 ± 3.2 (46) | 92.2 ± 1.0 (242) | 57.3 ± 1.6 (1119) |
| DGCNN (Phan et al., 2018) | 72.9 ± 3.5 (21) | 76.4 ± 1.7 (218) | 69.2 ± 3.0 (19) | 87.8 ± 2.5 (231) | 49.2 ± 1.2 (353) |
| GIN (Xu et al., 2019) | 73.3 ± 4.5 (19) | 80.0 ± 1.4 (171) | 71.2 ± 3.9 (17) | 80.9 ± 1.9 (190) | 56.1 ± 1.7 (241) |
| GraphSAGE (Hamilton et al., 2017) | 73.0 ± 4.5 (17) | 76.1 ± 1.8 (167) | 68.8 ± 4.5 (15) | 84.1 ± 1.9 (188) | 50.4 ± 1.3 (219) |
| SPIN (Doshi & Chepuri, 2022) | 75.6 ± 4.5 (1.6, 0.3) | 74.0 ± 1.7 (4, 38) | 71.1 ± 5.0 (0.3, 7) | 88.4 ± 2.5 (19, 25) | 53.8 ± 1.4 (4, 88) |
| SaNN | 77.6 ± 2.2 (3.6, 0.4) | 74.9 ± 2.2 (6, 58) | 72.7 ± 2.1 (4, 8) | 91.7 ± 2.7 (6, 45) | 54.1 ± 1.6 (34, 104) |
datasets have several thousands of simplices [cf. Table 1 in Appendix H]. SaNN is observed to have competitive performance with MPSN for graph classification as well. Though MPSN has slightly better performance in terms of the mean accuracies, in most of the results the accuracies have significant statistical overlap, i.e., the standard deviations are large compared to the differences in the means. Hence, comparisons between the best, second best, and the others are statistically insignificant. Furthermore, as for the other applications mentioned above, SaNN has the advantage of a run-time that is almost independent of the number of simplices in the simplicial complexes.
Additional Insights: To give additional insights into which aspects of SaNN contribute the most to its competitive performance, we perform ablation studies. We study the effect of features from different depths and observe that the proposed SaNN model, which combines features from different depths, outperforms ablated SaNN models that use specific neighborhood depths, indicating that although higher-hop-aware features implicitly contain lower-hop information, explicitly combining the information from all the hops has an empirical advantage. This agrees with the theoretical results, since the model presented in Section 3, unlike the ablated SaNN models, is provably as expressive as MPSN, suggesting that its embeddings are equally expressive and capable of achieving similar performance in downstream tasks as MPSN. We also observe that local neighborhood information is crucial for all tasks, while higher-hop information seems less relevant. We further study the effect of features of different orders and observe that combining the transformed features from simplices of different orders results in a much better performance than using only nodes, edges, triangles, or tetrahedra. Specifically, for simplicial closure prediction, we observe that the constituent edges of open triangles carry the most crucial information about whether a simplex will be formed. This agrees with the observation made in Benson et al. (2018), which states that the tie strengths of the edges in an open triangle positively impact its simplicial closure probability. In graph classification, using only higher-order simplices such as triangles and tetrahedra leads to poor results, possibly due to the small graph sizes with very few higher-order simplices; these limited higher-order simplices fail to capture the graph structures and distinguish between them. More details from the ablation studies are provided in Appendix J.
6 CONCLUSIONS
We have presented a class of simple simplicial neural network models, referred to as simplicial-aware neural networks (SaNN), which are based on precomputing simplicial features prior to the training process. We theoretically analyzed the expressive power of SaNN models and provided conditions under which they discriminate all the simplicial complexes that the SWL test can. We also provided the conditions under which the class of SaNN models is more powerful than the WL test. We relate the discriminative power of SaNN models to that of the SWL test by expressing the output of SaNN models as an injective function of the colors assigned by the WL and SWL tests. We have prescribed viable functions in the SaNN model that result in a simplified yet powerful model that is as expressive as the SWL test. We have demonstrated via experiments that SaNN performs on par with the existing SNN models for trajectory prediction, simplicial closure prediction, and graph classification on various benchmark datasets, and that SaNN models are computationally inexpensive while capturing the structure of simplicial complexes well.
7 ACKNOWLEDGEMENTS
This research was partially supported by a Qualcomm Innovation Fellowship and by the Kotak IISc AI-ML Centre (KIAC).
REFERENCES
A. R. Benson, R. Abebe, M. T. Schaub, A. Jadbabaie, and J. Kleinberg. Simplicial closure and higher-order link prediction. *Proceedings of the National Academy of Sciences*, 115(48):E11221–E11230, 2018.
C. Bodnar, F. Frasca, Y. Wang, N. Otter, G. F. Montufar, P. Lio, and M. Bronstein. Weisfeiler and Lehman go topological: Message passing simplicial networks. In *Proceedings of the 38th International Conference on Machine Learning (ICML 2021)*, pp. 1026–1037, Virtual, 2021. PMLR.
E. Bunch, Q. You, G. Fung, and V. Singh. Simplicial 2-complex convolutional neural nets. *arXiv preprint 2012.06010*, 2020.
L. Chen, Z. Chen, and J. Bruna. On graph neural networks versus graph-augmented MLPs. *arXiv preprint 2010.15116*, 2020.
J. B. Cordonnier and A. Loukas. Extrapolating paths with graph neural networks. *arXiv preprint 1903.07518*, 2018.
S. Doshi and S. P. Chepuri. Graph neural networks with parallel neighborhood aggregations for graph classification. *IEEE Transactions on Signal Processing*, 70, 2022.
S. Ebli, M. Defferrard, and G. Spreemann. Simplicial neural networks. *arXiv preprint 2010.03633*, 2020.
F. Errica, M. Podda, D. Bacciu, and A. Micheli. A fair comparison of graph neural networks for graph classification. In *Proceedings of the 8th International Conference on Learning Representations (ICLR)*, Addis Ababa, Ethiopia, 2020.
W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs. In *Advances in Neural Information Processing Systems 30 (NIPS 2017)*, Long Beach, CA, USA, 2017.
B. Weisfeiler and A. A. Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. *Nauchno-Technicheskaya Informatsiya*, 2(9):12–16, 1968.
K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In *Proceedings of the 7th International Conference on Learning Representations (ICLR 2019)*, New Orleans, LA, USA, 2019.
C. Morris, N. M. Kriege, F. Bause, K. Kersting, P. Mutzel, and M. Neumann. TUDataset: A collection of benchmark datasets for learning with graphs. *arXiv preprint 2007.08663*, 2020.
A. V. Phan, M. Le Nguyen, Y. L. Nguyen, and L. T. Bui. DGCNN: A convolutional neural network over large-scale labeled graphs. *Neural Networks*, 108:533–543, 2018.
T. M. Roddenberry, N. Glaze, and S. Segarra. Principled simplicial neural networks for trajectory prediction. In *Proceedings of the 38th International Conference on Machine Learning (ICML 2021)*, pp. 9020–9029, Virtual, 2021. PMLR.
E. Rossi, F. Frasca, B. Chamberlain, D. Eynard, M. Bronstein, and F. Monti. SIGN: Scalable inception graph neural networks. *arXiv preprint 2004.11198*, 2020.
M. T. Schaub, A. R. Benson, P. Horn, G. Lippner, and A. Jadbabaie. Random walks on simplicial complexes and the normalized Hodge 1-Laplacian. *SIAM Review*, 62(2):353–391, 2020.
P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. *arXiv preprint 1710.10903*, 2017.
F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. Simplifying graph convolutional networks. In *Proceedings of the 36th International Conference on Machine Learning (ICML)*, pp. 6861–6871, California, 2019. PMLR.
|
MZzlUyU2ih
|
IM-Oracle seems to outperform DoReMi in most cases in terms of Success Rates at the cost of a slightly higher Execution Time(s). This makes me wonder whether DoReMi's improvement comes just from the ability to prematurely stop low-level skill execution.
|
DoReMi: Grounding Language Model by Detecting and Recovering from Plan-Execution Misalignment
Anonymous authors
Paper under double-blind review
Abstract
Large language models (LLMs) encode a vast amount of semantic knowledge and possess remarkable understanding and reasoning capabilities. Previous work has explored how to ground LLMs in robotic tasks to generate feasible and executable textual plans. However, low-level execution in the physical world may deviate from the high-level textual plan due to environmental perturbations or imperfect controller design. In this paper, we propose DoReMi, a novel language model grounding framework that enables immediate Detection and Recovery from Misalignments between plan and execution. Specifically, we leverage LLMs to play a dual role, aiding not only in high-level planning but also generating constraints that can indicate misalignment during execution. Then vision language models (VLMs) are utilized to detect constraint violations continuously. Our pipeline can monitor the low-level execution and enable timely recovery if certain plan-execution misalignment occurs. Experiments on various complex tasks including robot arms and humanoid robots demonstrate that our method can lead to higher task success rates and shorter task completion times. Videos of DoReMi are available at https://sites.google.com/view/doremi-paper.
1 Introduction
Large language models (LLMs) pre-trained on web-scale data emerge with common-sense reasoning ability and understanding of the physical world. Previous works have incorporated language models into robotic tasks to help embodied agents better understand and interact with the world to complete challenging long-horizon tasks that require complex planning and reasoning (Ahn et al., 2022; Huang et al., 2022a; Liang et al., 2022).
To make the generated plan executable by embodied agents, we need to ground the language. One line of the works leverages pre-trained language models in an end-to-end manner that directly maps language and image inputs to the robot’s low-level action space (Brohan et al., 2022; 2023; Jang et al., 2022; Shridhar et al., 2023; Nair et al., 2022). These approaches often require large amounts of robot action data for successful end-to-end training, which is expensive to acquire (Brohan et al., 2022). Moreover, these action-output models often contain large transformer-based architectures and cannot run at high frequencies. Therefore, they may not be suitable for tasks with complex dynamics (e.g., legged robots) that require high-frequency rapid response. Recently, many works have adopted a hierarchical approach where language models perform high-level task planning, and then some low-level controllers are adopted to generate the complex robot control commands (Ahn et al., 2022; Huang et al., 2022a; Liang et al., 2022; Huang et al., 2022b). Under this hierarchical framework, we can leverage powerful robot control methods, such as reinforcement learning, to handle complex robot dynamic control problems with high frequency.
However, these grounding methods often assume that every low-level skill can perfectly execute the high-level plan generated by the language model. In practice, low-level execution may deviate from the high-level plan due to environmental perturbations or imperfect controller design. These misalignments between plan and execution may occur at any time during the task procedure. Previous works consider incorporating execution feedback into language prompts once the previous plan step is finished. If the step is unsuccessful, the process is repeated (Huang et al., 2022b). However, this
delayed feedback can be inefficient. For instance, as illustrated in Figure 1(b), when a human is carrying a box and performing the low-level skill "Go to the gray table", if the box is accidentally dropped, it becomes futile to continue with the current skill. The human will immediately abort the current skill and call for the skill "Pick up the box". However, agents without immediate re-planning will continue going forward and will take more time to pick up the box dropped halfway after reaching the destination.

(a) High-level Task Planning
(b) Low-level Skill Execution
Figure 1: Illustration of our motivation. Low-level execution may deviate from the high-level plan. DoReMi can immediately detect the misalignment between the plan and execution when the box drops accidentally and quickly recovers. Agents without immediate re-planning suffer from such misalignment.
In this paper, we propose a novel framework, DoReMi, which enables immediate Detection and Recovery from plan-execution Misalignments. Specifically, in addition to employing LLMs for high-level planning (Ahn et al., 2022), we further leverage LLMs to generate constraints for low-level execution based on their understanding of physical worlds. During the execution of low-level skills, a vision language model (VLM) (Li et al., 2023b) is employed as a general "constraint detector" to continuously monitor whether the agent violates any constraints. If some constraints are violated, indicating that the plan and execution may be misaligned, the language model is immediately called to re-plan for timely recovery. We summarize several advantages of our pipeline: (1) the LLM plays a dual role, aiding not only in high-level planning but also in supervising low-level execution, enabling rapid detection and recovery; (2) the VLM can focus on the specific constraints suggested by the LLM and only needs to pick binary answers, providing more precise feedback. This collaborative approach between the LLM and the VLM helps align the plan and execution during the whole task period. Furthermore, under mild assumptions, we conduct a theoretical analysis to estimate how much time can be saved or how much the success rate can be improved through immediate re-planning when misalignment occurs. Experiments in physical simulations, including robot arm manipulation tasks and humanoid robot tasks, demonstrate that DoReMi leads to a higher task success rate and shorter task execution time.
2 RELATED WORKS
Language Grounding Prior research has attempted to employ language as task abstractions and acquired control policies that are conditioned on language (MacMahon et al., 2006; Chaplot et al., 2018; Jiang et al., 2019a; Misra et al., 2017; Mei et al., 2016). Furthermore, some studies have investigated the integration of language and vision inputs within embodied tasks to directly predict the control commands (Silva et al., 2021; Guhur et al., 2023; Goyal et al., 2021). Recent works, including Brohan et al. (2022; 2023), Shridhar et al. (2023), Zhang & Chai (2021), and Lynch et al. (2022), have demonstrated significant progress in utilizing transformer-based policies to predict actions. However, these end-to-end approaches heavily depend on the scale of expert demonstrations for model training.
Task Planning with Language Model Traditionally, task planning was solved through symbolic reasoning (Nau et al., 1999; Fikes & Nilsson, 1971) or rule-based planners (Fox & Long, 2003; Jiang et al., 2019b). Recently, many works demonstrated that large language models (LLMs) can generate
executable plans in a zero/few-shot manner with appropriate grounding (Huang et al., 2022a; Ahn et al., 2022; Zeng et al., 2022; Ren et al., 2023). Some pre-trained low-level skills (primitives) are then adopted to execute the steps in order. These LLM planners typically assume the successful execution of each skill, resulting in an open-loop system in physical worlds. Works on instruction-following benchmarks (Shridhar et al., 2020; Puig et al., 2018), like ReAct (Yao et al., 2022) and Reflexion (Shinn et al., 2023), incorporate feedback into LLM prompts after each step of the plan is finished. However, these benchmarks operate in discrete scenes and pay less attention to the skill execution period. The closest work to ours is Inner Monologue (Huang et al., 2022b), which also considers continuous physical worlds and takes into account 3 types of feedback (e.g., success detectors, scene descriptions, and human feedback) upon the completion of each step. However, Inner Monologue's feedback is impractical and hard to obtain at high frequency. In contrast, our framework enables precise and high-frequency feedback with practical detectors.
Vision Language Model for Embodied Control. The vision language model (VLM) is trained on image-text pairs, enabling it to simultaneously understand visual and textual inputs and address a variety of downstream tasks, such as visual question answering (VQA) (Li et al., 2023b; Antol et al., 2015), image captioning (Zhou et al., 2020), and object detection (Gu et al., 2021). VLMs align semantic information between vision and natural language, thereby aiding in grounding language models and facilitating embodied control. Pre-trained visual encoders or instruction encoders (Radford et al., 2021) can be connected with some action head to help train end-to-end policies (Shridhar et al., 2022) or generate textual plans (Driess et al., 2023). RT-2 (Brohan et al., 2023) directly fine-tuned on a VLM can generate texts and robot control actions simultaneously. VLMs can also act as scene descriptors (Huang et al., 2022b), success detectors (Du et al., 2023; Zhang et al., 2023), or object detectors (Stone et al., 2023) to facilitate the task execution. To ensure adherence to crucial constraints, we employ the VLM (Li et al., 2022) as a "constraint detector", periodically verifying whether the agent satisfies specific constraints.
3 Problem Statement
Our objective is to enable the embodied agent to accomplish long-horizon tasks specified as natural language instructions \( i \) in the physical world. The agent has a basic skill set \( \Pi \), with each skill \( \pi_j \in \Pi \) corresponding to a distinct function that can be described in natural language \( l_{\pi_j} \).
Previous work has illustrated that pre-trained large language models can be used as planners to decompose complicated language instructions into textual skill sequences: \( i \rightarrow (l_{\pi_1}, l_{\pi_2}, ..., l_{\pi_n}) \) (Huang et al., 2022a; Zeng et al., 2022), as shown in Figure 2(a). Many works consider feedback at the end of each skill (Huang et al., 2022b; Yao et al., 2022; Shinn et al., 2023), which can be described as plan-level feedback, as shown in Figure 2(b). In particular, Inner Monologue (Huang et al., 2022b) assumes the accessibility of 3 sources of oracle feedback: success detectors, passive scene descriptors, and humans. However, such oracle feedback is impractical in most settings and cannot be frequently obtained: the success detector can only assess success or failure upon the completion of each skill, humans are unable to provide high-frequency feedback, and frequently injecting passive scene descriptions into the LLM risks exceeding its maximum input token length and may cause a performance drop (Liu et al., 2023). How to incorporate frequent and precise feedback into LLMs remains a challenge.
In the following section, we introduce our DoReMi framework, which leverages powerful LLMs to generate both high-level plans and low-level execution constraints, thereby enabling execution-level feedback by a VLM during the entire execution period, as shown in Figure 2(c).
4 Method
In this section, we introduce our DoReMi framework, which enables immediate Detection and Recovery from Plan-Execution Misalignment. Our algorithm can be succinctly described in the two stages depicted in Figure 2(c):
1. At the high-level planning stage, given a set of low-level skills, prompts, and high-level task instruction, language models are leveraged to play a dual role, aiding not only in planning the next skill but also generating constraints for the next skill based on historical information.
2. During the low-level skill execution stage, we employ a vision-language model (VLM) (Li et al., 2023b) as a general "constraint detector" that periodically verifies the satisfaction of
all constraints. If any constraint is violated, the language model is invoked for immediate re-planning to facilitate recovery.
Figure 2: Previous methods perform open-loop planning or only re-plan when the previous skill is finished. Our DoReMi framework leverages LLM to generate both the plan and corresponding constraints. Then a VLM is employed to supervise the low-level execution period, which enables immediate recovery from plan-execution misalignment.
4.1 LANGUAGE MODEL FOR PLANNING
Following previous works that leverage LLMs to generate feasible textual plans (Ahn et al., 2022), we utilize LLMs to plan the next steps through few-shot in-context learning. Furthermore, we employ language models for re-planning when our constraint detector identifies a plan-execution misalignment. In such scenarios, we additionally include the misalignment information in the prompts and invoke the LLM for re-planning. Detailed planning prompts can be found in Appendix D. Practically, we deploy the Vicuna-13B model (Chiang et al., 2023) locally and pick the next skill with the maximum output probability. We also try GPT-4 (OpenAI, 2023) through the OpenAI API to directly output the next step with zero temperature. Both LLMs exhibit effective planning capabilities in our tasks.
4.2 LANGUAGE MODEL FOR CONSTRAINT GENERATION
The LLM planner helps agents decompose long-horizon tasks into skill sequences. However, LLMs are not inherently integrated into the execution of low-level skills, which potentially leads to misalignment between plan and execution. To further explore the ability of LLMs in embodied tasks, we utilize LLMs not only for next-step planning but also for constraint generation based on historical information. For instance, consider the execution period of the "go to" skill after the "pick up box" skill. In this case, the constraint "robot holds box" must be satisfied, and a violation of this constraint could indicate a failed pick or a dropped box. Similarly, after the skill "place red block on green block", the constraint "red block on green block" should always be met. LLMs possess the capability to automatically generate these constraints for planned steps, drawing upon their encoded understanding of the physical world. Moreover, the VLM detector can focus on these specific constraints and only needs to pick binary answers from "Yes" or "No", resulting in much more precise feedback. In contrast, open-ended scene descriptions from VLMs may be highly ambiguous and miss essential information, as shown in Figure 3.
In practice, after the LLM selects the next step with the highest output probability, we continue the generation starting with "Constraint:" to derive specific constraints. Additionally, we conducted experiments to assess the quality of the LLM-generated constraints. First, we conducted a user study comparing the LLM-generated constraints with manually specified constraints: survey results show that users consider 98% of the LLM-generated constraints reasonable and admissible. Second, we queried the VLM with manually specified constraints and LLM-generated constraints respectively, picking binary answers from {"Yes", "No"}, and found the two answers identical in 97% of the queries. These results show the remarkable proficiency of LLMs in generating constraints, driven by their encoded understanding of the physical world. For a more comprehensive analysis, please refer to Appendix D.
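To make the two-stage query concrete, the following minimal Python sketch shows how the next step and its constraints could be derived from a single prompt; the `llm.complete` interface, the stop token, and the semicolon-separated constraint format are illustrative assumptions of this sketch, not the exact implementation.

```python
def plan_with_constraints(llm, prompt: str):
    """Two-stage LLM query: first the next skill, then its constraints.

    `llm.complete` is an assumed text-completion interface (e.g., a locally
    served Vicuna-13B); the prompt format loosely follows the description above.
    """
    # Stage 1: generate the next step (in practice, the next skill is
    # picked with maximum output probability over the skill set).
    next_step = llm.complete(prompt + "\nNext step:", stop="\n").strip()
    # Stage 2: continue the generation with "Constraint:" to obtain the
    # execution constraints attached to this step.
    raw = llm.complete(prompt + f"\nNext step: {next_step}\nConstraint:",
                       stop="\n")
    constraints = [c.strip() for c in raw.split(";") if c.strip()]
    return next_step, constraints
```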
Figure 3: Open-ended scene descriptions of VLMs are ambiguous. DoReMi leverages the LLM to generate specific constraints for steps and directly queries the VLM with these constraints, resulting in much more precise feedback.
Algorithm 1 DoReMi (Immediate Detection and Recovery from Misalignment)
Given: A high level instruction $i$, a skill set $\Pi$, language description $l_{\Pi}$ for $\Pi$, language model $L$, prompt $p_0$, and VLM constraint detector $D$.
1: Initialize the skill sequence $\pi \leftarrow \emptyset$, the number of steps $n \leftarrow 1$.
2: while $l_{\pi_{n-1}} \neq \text{done}$ do
3: $\pi_n \leftarrow \arg\max_{\pi \in \Pi} L(l_{\pi} \mid i, p_{n-1}, l_{\pi_{n-1}}, \ldots, l_{\pi_0}),\; c_n \leftarrow L(i, p_{n-1}, l_{\pi_n}, \ldots, l_{\pi_0})$
4: Update prompt $p_n$.
5: while $\pi_n$ is not finished do
6: Every $\Delta t$ seconds, query the constraint detector $D$ with all the constraints $c_n$.
7: if $\exists D(c_n) = \text{false}$ then
8: Add the constraint-violation information into prompt $p_n$ and break.
9: end if
10: end while
11: $n \leftarrow n + 1$.
12: end while
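For readers who prefer code, below is a minimal Python rendering of Algorithm 1. The `llm`, `detector`, and `skills` interfaces are placeholders for the components described above; their exact signatures are assumptions of this sketch.

```python
import time

def doremi(instruction, skills, llm, detector, dt=0.2, max_steps=50):
    """Sketch of Algorithm 1: plan, execute, and periodically check constraints.

    Assumed interfaces: llm.plan(...) returns (skill_name, constraints);
    detector.check(constraint) returns True iff the constraint holds;
    skills[name] wraps a low-level controller with start()/is_done()/abort().
    """
    history, feedback = [], None
    for _ in range(max_steps):
        skill_name, constraints = llm.plan(instruction, history, feedback)
        if skill_name == "done":
            return True
        skill, feedback = skills[skill_name], None
        skill.start()
        while not skill.is_done():
            time.sleep(dt)  # query the detector every dt seconds
            violated = [c for c in constraints if not detector.check(c)]
            if violated:        # plan-execution misalignment detected
                skill.abort()   # immediate recovery: abort and re-plan
                feedback = f"Constraint violated: {violated[0]}"
                break
        history.append(skill_name)
    return False
```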
4.3 VLM AS CONSTRAINT DETECTOR
Subsequent to the constraint generation stage, the agent proceeds to execute the planned step while adhering to the constraints suggested by the LLM. The LLM-generated constraints may include various types, such as "red block is on blue block," "no obstacles in front of the robot," "robot is holding an apple," and more. In this work, we adopt a vision-language model (VLM) (Li et al., 2023b) as a general "constraint detector" that checks all constraints through visual information. The visual input of the VLM is captured from either a first-person or third-person perspective camera, and the text input is automatically adapted from the LLM-proposed constraints in the form "Question: Is the constraint $c_j$ satisfied? Answer:". For each query, the VLM only needs to select an answer from {"Yes", "No"}, which involves very few tokens and takes less than 0.1 seconds. We use $D(c_j)$ to denote the answer of the VLM $D$ when checking constraint $c_j$: if $c_j$ is satisfied, $D(c_j) = \text{True}$; otherwise, $D(c_j) = \text{False}$. The pseudo-code of the pipeline is provided in Algorithm 1. It is also worth mentioning that detectors in other modalities are compatible with our framework, and constraint detectors can run in parallel to low-level controllers at different frequencies.
In practice, we use the pre-trained BLIP-2 model (Li et al., 2023b) as a general "constraint detector" that periodically checks whether the agent satisfies all constraints every $\Delta t = 0.2$ seconds. If so, the robot continues executing the current low-level skill; otherwise, the robot aborts the current skill and the re-planning process is triggered. We observe that the pre-trained zero-shot VLM performs well in most tasks, except those with extremely complex scenes. To enhance performance in such complex tasks, we collect a small dataset and fine-tune the VLM using the parameter-efficient LoRA method (Hu et al., 2021). We also verify that the fine-tuned VLM detector generalizes to unseen objects, unseen backgrounds, and even unseen tasks.
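As an illustration, a zero-shot constraint check with a public BLIP-2 checkpoint can be written in a few lines with the `transformers` library; the checkpoint name, generation length, and answer parsing below are assumptions of this sketch, not the exact setup used in our experiments.

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto"
)

def check_constraint(image: Image.Image, constraint: str) -> bool:
    """Query the VLM with the question template and parse a yes/no answer."""
    prompt = f"Question: Is the constraint '{constraint}' satisfied? Answer:"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(
        model.device, torch.float16
    )
    out = model.generate(**inputs, max_new_tokens=3)
    answer = processor.batch_decode(out, skip_special_tokens=True)[0]
    return answer.strip().lower().startswith("yes")
```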
4.4 THEORETICAL ANALYSES
Delayed re-planning may waste time (as shown in Figure 1) or even result in failures. In this section, we analyze the potential time savings and success rate improvements achievable through immediate detection and recovery. We denote the execution time of a low-level skill by the random variable \(t\), with mean \(\mathbb{E}[t] = \mu\) and variance \(\text{Var}(t) = \sigma^2\). A misalignment can occur at any time \(s\) within the execution time interval \([0, t]\), where \(0 \leq s \leq t\). Additionally, we assume our constraint detector detects each misalignment with probability \(p_d\). We define the discrete random variable \(M\) as the number of misalignment occurrences under the following assumptions: (1) plan-execution misalignments occur independently; (2) misalignments occur at a constant rate \(\lambda\), i.e., the probability of one misalignment in a small interval of length \(\delta\) is \(\lambda\delta + o(\delta)\); (3) no two misalignments occur simultaneously, i.e., the probability of more than one misalignment in such an interval is \(o(\delta)\). Under these assumptions, the number of plan-execution misalignments follows a Poisson distribution (Papoulis & Unnikrishna Pillai, 2002):
\[
P(M = k) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}, \quad k = 0, 1, 2, 3, \ldots \tag{1}
\]
**Theorem 1** The following equations describe the possible time-savings \(t_s\) and the success rate improvement \(P_s\) under immediate detection and re-planning:
\[
E(t_s) = \sum_k P(M = k)\,E(t_w \mid M = k) = \frac{p_d \lambda (\mu^2 + \sigma^2)}{2} - p_d \lambda \mu \Delta t \tag{2}
\]
\[
E(P_s) = 1 - E(e^{-\lambda t}) \approx p_d \lambda \mu - \frac{(2p_d - p_d^2)\lambda^2(\mu^2 + \sigma^2)}{2} \tag{3}
\]
The detector's reaction time \(\Delta t\) is much smaller than the average execution time \(\mu\), so the time saving \(E(t_s)\) is greater than 0. \(\lambda\) represents the misalignment occurrence rate per second, which is very small, so the success rate improvement \(E(P_s)\) is also greater than 0. A detailed proof can be found in Appendix A.
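The expected time saving in Eq. 2 can also be checked numerically under the stated assumptions. The short Monte Carlo sketch below samples execution times, Poisson-distributed misalignments, and Bernoulli detections; all parameter values and the clipped-Gaussian choice for \(t\) are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
p_d, lam, mu, sigma, dt = 0.9, 0.05, 10.0, 2.0, 0.2
n_trials = 100_000

t = rng.normal(mu, sigma, n_trials).clip(min=0.0)  # skill execution times
saved = np.zeros(n_trials)
for i, ti in enumerate(t):
    m = rng.poisson(lam * ti)            # misalignments in this execution
    if m == 0:
        continue
    s = rng.uniform(0.0, ti, m)          # occurrence times, uniform on [0, t]
    hit = rng.random(m) < p_d            # each detected with probability p_d
    saved[i] = np.sum(ti - s[hit] - dt)  # remaining time saved, minus delay

closed_form = p_d * lam * (mu**2 + sigma**2) / 2 - p_d * lam * mu * dt
print(f"simulated: {saved.mean():.3f}  closed form: {closed_form:.3f}")
```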
5 EXPERIMENTS
In this section, we conduct experiments involving both robot arm manipulation tasks and humanoid robot tasks, as shown in Figure 4. These tasks incorporate various environmental disturbances and imperfect controllers, such as random drops by the robot end-effector, noise in end-effector placement positions, pick failures, and unexpected obstacles appearing in the robot's path.
We aim to answer the following questions: (1) Does DoReMi enable immediate detection and recovery from plan-execution misalignment? (2) Does DoReMi lead to higher task success rates and shorter task execution time under environmental disturbances or imperfect controllers?
Figure 4: Overview of the robot arm manipulation tasks and humanoid robot tasks used in our experiments.
5.1 ROBOT ARM MANIPULATION TASKS
**Robot and Environment** This environment is adapted from Ravens (Zeng et al., 2020), a benchmark for vision-based robotic manipulation focused on pick-and-place tasks. A UR5e robot equipped with a suction gripper operates on a black tabletop, while a third-person camera provides a comprehensive view of the tabletop. The robot possesses a basic skill set including "pick obj" and "place obj on receptacle", both of which are pre-trained primitives conditioned on single-step instructions, similar to CLIPort (Shridhar et al., 2022) and Transporter Nets (Zeng et al., 2020). To assess the effectiveness of our algorithm, we introduce additional disturbances into the original environment and the robot controller.
**Tasks:** (1) **Pick and Place.** The agent is required to pick a certain block and place it in a fixture. We assume the block has a probability \(p\) to drop every second when sucked by the end-effector, so the agent may need to perform pick and place several times to finish the task. (2) **Stack blocks in order.**
| Tasks with disturbance | | Success Rate (%) ↑ | | | | | Execution Time (s) ↓ | | |
|---|---|---|---|---|---|---|---|---|---|
| | | SayCan | CLIPort | IM | DoReMi (ours) | IM-Oracle | IM | DoReMi (ours) | IM-Oracle |
| Pick and place with random drop $p$ | $p=0.0$ | 100(±0) | 100(±0) | 100(±0) | 100(±0) | | 2.7(±0.0) | 2.7(±0.0) | 2.7(±0.0) |
| | $p=0.2$ | 81(±9) | 100(±0) | 100(±0) | 100(±0) | | 3.4(±0.2) | 3.0(±0.2) | 3.4(±0.2) |
| | $p=0.3$ | 63(±9) | 100(±0) | 100(±0) | 100(±0) | | 4.0(±0.2) | 3.5(±0.2) | 4.0(±0.2) |
| Stack in order with noise $\tau$ | $\tau=0.0$ | 100(±0) | 100(±0) | 100(±0) | 100(±0) | | 7.2(±0.0) | 7.2(±0.0) | 7.2(±0.0) |
| | $\tau=1.0$ | 96(±4) | 96(±4) | 96(±4) | 100(±0) | | 8.0(±3.0) | 7.5(±0.5) | 7.4(±0.5) |
| | $\tau=2.0$ | 69(±7) | 85(±7) | 86(±7) | 96(±4) | | 12.2(±5.3) | 10.2(±1.7) | 9.8(±2.0) |
| | $\tau=3.0$ | 31(±11) | 74(±10) | 75(±8) | 86(±8) | | 15.6(±5.6) | 14.7(±5.3) | 14.7(±5.3) |
| Stack in order with noise $\tau$ | $\tau=0.0$ | 71(±9) | 94(±7) | 94(±6) | 98(±4) | | 9.0(±3.6) | 9.4(±1.7) | 9.9(±1.9) |
| | $\tau=1.0$ | 71(±9) | 94(±7) | 94(±7) | 94(±7) | | 10.7(±3.9) | 10.6(±3.2) | 10.9(±3.0) |
| | $\tau=2.0$ | 54(±12) | 79(±9) | 79(±8) | 92(±6) | 95(±3) | | 14.5(±3.4) | 15.3(±3.5) |
| | $\tau=3.0$ | 21(±9) | 33(±10) | 34(±10) | 55(±10) | 64(±8) | | - | - |
Table 1: Success rates and task execution time under different degrees of disturbances. We only measure execution time under high success rates. The results show the mean and standard deviation over 4 different seeds, each with 12 episodes.
Figure 5: A comparison example. The robot arm tries to finish the step "Place blue block on green block" but the blocks collapse (b-d). DoReMi detects this misalignment and re-plans to pick and place the green block first (e). The baseline continues to repeat the previous step (e-f) and results in failure.
The robot is required to stack several blocks in an order given by language instructions. The agent must perform the "pick" and "place" skills in a precise sequence to successfully accomplish the task. We assume the controllers are not perfect by introducing uniform $[0, \tau]$ cm noise to the place positions. There is also a probability $p$ that a block held by the end-effector randomly drops every second. The maximum execution time for all tasks is set to 20 seconds; any execution that takes longer than 20 seconds is considered a failure.
**Experiment Details** Following the pipeline in Figure 2, we use Vicuna-13B (Chiang et al., 2023) as the LLM planner and zero-shot transferred BLIP-2 (Li et al., 2023b) as the VLM constraint detector. We compare DoReMi with 4 baselines: (1) **SayCan**: an LLM is utilized to decompose instructions into steps and execute them sequentially; this approach assumes the successful execution of each step without considering potential failures. (2) **CLIPort**: a multi-task CLIPort policy conditioned on the single pick-place step. It utilizes an LLM to decompose instructions into steps and repeats each step until success; the same VLM is leveraged as a success detector to determine whether the current step should be repeated. (3) **Inner Monologue (IM)**: the same VLM is employed as scene descriptor and success detector to help the LLM re-plan upon completion of each step. (4) **IM-Oracle**: Inner Monologue with oracle feedback, which does not exist in practical real-world settings. Results are shown in Table 1.
**Result Analyses** In the presence of disturbances, SayCan consistently fails due to its lack of success detectors and re-planning mechanisms. In simple pick-place tasks, CLIPort and Inner Monologue with a success detector can repeat the step and recover. However, they have no mechanism to abort the current execution and only re-plan at the end of each skill, resulting in longer execution times. In the stack-block task, when encountering situations that require re-planning (e.g., the blocks collapse), CLIPort, which only repeats the previous step, fails to recover, as shown in Figure 5. When provided with imperfect scene descriptors (VLM), Inner Monologue also struggles to recover due to ambiguous open-ended scene descriptions. In contrast, DoReMi leverages LLMs to propose specific constraints for every low-level skill, with the VLM focused on these constraints, leading to highly accurate feedback. Furthermore, our VLM continuously detects constraint violations throughout the execution period, which enables immediate re-planning and recovery. Under these two mechanisms, DoReMi reaches higher success rates and shorter execution times.
5.2 HUMANOID ROBOT TASKS
**Robot Description and Low-level Skill Set** The humanoid robot utilized in our experiments possesses 6 degrees of freedom per leg and 4 degrees of freedom per arm, totaling 20 degrees of freedom. We equip the robot with a first-person camera on its base to provide visual information. Controlling complex humanoid robots with a single policy is challenging; following the framework in Ma et al. (2022), we employ reinforcement learning to train the locomotion policy and leverage model-based controllers to acquire the manipulation policy. Specifically, we utilize the DeepMimic algorithm (Peng et al., 2018) to train a policy conditioned on commanded linear and angular velocity, allowing the robot to execute low-level skills such as "go forward 10 meters," "move forward at speed v," "go to target place," "turn right/left," and more. As for the manipulation policy, physically picking up objects is challenging, so we introduce an assistive pick primitive similar to Li et al. (2023a), which can suck objects close to the end-effector. This enables the robot to execute low-level skills like "pick up object" and "place object on receptacle". The detailed architecture and training process can be found in Appendix B.
5.2.1 TASK CATEGORIES
We consider 3 categories of tasks and set the max task execution time to 90 seconds.
(1) **Obstacle-avoidance.** The robot performs the skill "go forward" to reach a finish line located at various distances. However, unknown obstacles may appear on the way with density $d$. As mentioned above, the robot lacks perfect navigation skills and only possesses low-level skills such as "go forward" and "turn left/right". Therefore, the robot needs to satisfy the constraint "no obstacle in the front"; if the constraint is violated, it must perform the skill "turn left/right" to avoid a collision.
(2) **Move-box.** The robot is required to transport a certain box from one location to another. A proper solution might involve 1) Go to place A. 2) Pick up box. 3) Go to place B. 4) Put down box. We introduce additional perturbations to this task by assuming that the robot has a probability $p$ of dropping the box every second during transport.
(3) **Prepare-food.** The robot is required to collect 2-5 types of food from random positions according to abstract language instructions (example in Figure 3b). We introduce additional perturbations to this task by assuming that the robot has a probability $p_1$ of failing to pick an object and a probability $p$ of dropping the carried object every second. These tasks may require 10-20 low-level skill steps.
5.2.2 VLM FINE-TUNING
In our experiments, we observed that the performance of the zero-shot transferred VLM diminishes as scene complexity increases, such as in the prepare-food task involving more than 20 objects. To address this, we collected a small dataset consisting of only 5 demonstrations with 128 image-text pairs to fine-tune the BLIP-2 model (Li et al., 2023b). These 5 demonstrations only included fruit objects, while the test tasks involved entirely different scenarios, including unseen objects in random positions such as junk food, vegetables, and seafood, as well as unseen backgrounds. It is worth noting that fine-tuning the VLM on the prepare-food task also yielded benefits for unseen tasks. We use "detection time" to refer to the time interval between when a misalignment occurs and when the detector detects this violation; the fine-tuned VLM exhibits improved efficiency in detecting dropped boxes during move-box tasks, reducing the average detection time from 2.5 seconds to 0.6 seconds. Some out-of-distribution samples are shown in Figure 6. Ablations and analysis of the fine-tuned VLM can be found in Appendix B.4.
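For reference, the parameter-efficient fine-tuning step can be sketched with the `peft` library; the LoRA rank, target modules, and training-loop details below are assumptions of this sketch, not the exact configuration we used.

```python
from peft import LoraConfig, get_peft_model
from transformers import Blip2ForConditionalGeneration

# Load the base VLM and wrap it with low-rank adapters (Hu et al., 2021).
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections of the LM
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are updated
# A standard training loop over the 128 (image, question, yes/no answer)
# pairs, minimising the usual language-modelling loss, would follow here.
```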
5.2.3 RESULTS
**Experiment details** Following the pipeline in Figure 2(c), we use Vicuna-13B (Chiang et al., 2023) as the LLM planner and BLIP-2 (Li et al., 2023b) as the VLM constraint detector. We use DoReMi-FT to denote DoReMi with the VLM fine-tuned on the prepare-food task, as described in Sec. 5.2.2. We compare our methods with (1) SayCan (Ahn et al., 2022), which assumes every step is executed successfully; (2) Inner Monologue (IM) (Huang et al., 2022b), which plans at the end of each step and uses the same vision-language model as both success detector and scene descriptor; and (3) Periodic replan, which re-plans at a fixed time interval of 3 seconds and obtains feedback from the same VLM.
Table 2: Success rates and task execution time under different degrees of disturbances. We only evaluate execution time under high task success rates. The results show the mean and standard deviation over 5 different seeds each with 20 episodes.
| Tasks with disturbance | | Success Rate (%) ↑ | | | | | | Execution Time (s) ↓ | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | SayCan | IM | Periodic replan | DoReMi (ours) | DoReMi-FT (ours) | IM-Oracle | DoReMi (ours) | DoReMi-FT (ours) | IM-Oracle |
| Obstacle-avoidance with density $d$ | $d=0.0$ | 100(±0) | 100(±0) | 100(±0) | 100(±0) | 100(±0) | 100(±0) | 24.2(±0.8) | 24.2(±0.8) | 24.2(±0.8) |
| | $d=0.3$ | 68(±6) | 68(±6) | 59(±8) | 92(±6) | 92(±6) | 68(±6) | 31.2(±2.4) | 31.2(±2.4) | - |
| | $d=0.6$ | 40(±8) | 40(±8) | 37(±10) | 90(±6) | 90(±6) | 40(±8) | 34.3(±3.2) | 34.3(±3.2) | - |
| Move-box with random drop $p$ | $p=0.0$ | 98(±2) | 98(±2) | 96(±3) | 97(±2) | 97(±2) | 98(±2) | 32.2(±2.5) | 32.2(±2.5) | 32.1(±2.5) |
| | $p=0.02$ | 61(±7) | 63(±7) | 55(±9) | 95(±4) | 96(±4) | 98(±2) | 38.4(±3.0) | 35.0(±3.0) | 46.5(±4.7) |
| | $p=0.04$ | 42(±9) | 46(±9) | 38(±8) | 94(±4) | 96(±4) | 96(±2) | 43.6(±3.5) | 37.3(±3.1) | 61.2(±7.6) |
| Prepare-food with pick failure $p_1=0.1$ and random drop $p$ | $p=0.0$ | 78(±5) | 83(±4) | 81(±5) | 85(±6) | 96(±3) | 99(±1) | - | 27.6(±2.7) | 27.8(±3.0) |
| | $p=0.02$ | 49(±5) | 56(±5) | 50(±5) | 66(±4) | 93(±5) | 97(±2) | - | 31.0(±3.8) | 36.8(±3.8) |
| | $p=0.04$ | 18(±5) | 21(±7) | 16(±6) | 37(±8) | 91(±6) | 96(±2) | - | 35.2(±6.5) | 46.3(±7.5) |
Figure 6: VLM detector fine-tuned on the small dataset can benefit unseen objects, unseen background, and unseen tasks.
Figure 7: Box dropped during the execution of skill "Go to the sofa". Inner Monologue only re-plans when the current skill is finished, taking more time to complete the task.
**Result Analyses** The results are shown in Table 2. Similar to the analysis in Section 5.1, SayCan fails due to the absence of re-planning mechanisms, and Inner Monologue fails because of the ambiguity and low frequency of its feedback. Additionally, we find that naively increasing the re-plan frequency (the Periodic replan baseline) does not necessarily improve success rates and can even lead to performance degradation. Intuitively, without sufficiently precise feedback, the more you re-plan, the more mistakes you may make; a higher frequency is beneficial only with sufficiently precise feedback. These results further highlight the advantage of DoReMi, which enables more precise feedback thanks to the seamless cooperation between LLMs and VLMs to propose and detect critical constraints.
To enhance performance in extremely complex scenarios, such as the prepare-food task with over 20 objects, we fine-tuned the VLM on a small dataset as described in Section 5.2.2. DoReMi-FT, with the fine-tuned BLIP-2 model, performs better in all complicated scenes with unseen objects, unseen backgrounds, and even unseen tasks. For instance, in unseen move-box tasks, the detector detects constraint violations more quickly, leading to a shorter total execution time. Furthermore, DoReMi-FT even surpasses IM-Oracle in execution time while maintaining similar success rates, owing to its immediate detection and recovery mechanism, as depicted in Figure 7.
6 DISCUSSION
**Limitation** Our experiments indicate that the zero-shot transferred VLM is not a perfect constraint detector: we need to fine-tune the VLM for complicated tasks to improve detection accuracy, and our framework can benefit from more advanced VLMs in the future. Furthermore, a detector fully based on vision may be limited by mis-detection, occlusion, and perspective; we leave exploring detectors in other modalities under our framework to future work.
Conclusion When employing language models for embodied tasks in a hierarchical approach, the low-level execution might deviate from the high-level plan. We emphasized the importance of continuously aligning the plan with execution and leveraged LLM to generate both plan and constraints, which enables grounding language through immediate detection and recovery. Theoretical analyses and a variety of challenging tasks in disturbed environments demonstrated the effectiveness of DoReMi.
REFERENCES
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. *arXiv preprint arXiv:2204.01691*, 2022.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In *Proceedings of the IEEE international conference on computer vision*, pp. 2425–2433, 2015.
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. *arXiv preprint arXiv:2212.06817*, 2022.
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. *arXiv preprint arXiv:2307.15818*, 2023.
Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language grounding. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. *arXiv preprint arXiv:2303.03378*, 2023.
Yuqing Du, Ksenia Konyushkova, Misha Denil, Akhil Raju, Jessica Landon, Felix Hill, Nando de Freitas, and Serkan Cabi. Vision-language models as success detectors. *arXiv preprint arXiv:2303.07280*, 2023.
Richard E Fikes and Nils J Nilsson. Strips: A new approach to the application of theorem proving to problem solving. *Artificial intelligence*, 2(3-4):189–208, 1971.
Maria Fox and Derek Long. PDDL2.1: An extension to PDDL for expressing temporal planning domains. *Journal of artificial intelligence research*, 20:61–124, 2003.
Prasoon Goyal, Scott Niekum, and Raymond Mooney. Pixl2r: Guiding reinforcement learning using natural language by mapping pixels to rewards. In *Conference on Robot Learning*, pp. 485–497. PMLR, 2021.
Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. Open-vocabulary object detection via vision and language knowledge distillation. *arXiv preprint arXiv:2104.13921*, 2021.
Pierre-Louis Guhur, Shizhe Chen, Ricardo Garcia Pinel, Makarand Tapaswi, Ivan Laptev, and Cordelia Schmid. Instruction-driven history-aware policies for robotic manipulations. In *Conference on Robot Learning*, pp. 175–187. PMLR, 2023.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In *International Conference on Machine Learning*, pp. 9118–9147. PMLR, 2022a.
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. *arXiv preprint arXiv:2207.05608*, 2022b.
|
DT8ipHAAVz
|
The adjusted Rand index used to compare clustering performance is questionable. As far as I understand, the adjusted Rand index uses ground-truth class labels, but clustering is an unsupervised problem. Reporting both the k-means objective and the GEMINI objective might help, as they are the objective functions being optimized.
|
END-TO-END TRAINING OF UNSUPERVISED TREES: KAURI AND DOUGLAS
Anonymous authors
Paper under double-blind review
ABSTRACT
Trees are convenient models for obtaining explainable predictions on relatively small datasets. While many proposals exist for the end-to-end construction of such trees in supervised learning, learning a tree end-to-end for clustering without labels remains an open challenge. As most works focus on using trees to interpret the result of another clustering algorithm, we present here two novel end-to-end trained unsupervised trees for clustering: Kauri, for datasets with a large number of features, using binary decision trees, and Douglas, for datasets with a large number of samples, using $k$-ary differentiable trees. Both methods consist of a learnable tree structure whose parameters are optimised according to a generalised mutual information (GEMINI), and both achieve results on par with existing methods while maintaining interpretability. We compare these two models on multiple datasets with the most recent unsupervised trees and provide guidelines for choosing the most suitable model.
1 INTRODUCTION
Decision tree classifiers are among the most intuitive models in machine learning owing to their intrinsic interpretability (Molnar, 2020, Section 3.2). Trees consist of a set of hierarchically sorted nodes starting from a single root node. Each node comprises two or more conditions called rules, each of which leads to a different child node. Once a node does not have any child, a decision is returned; such a childless node is called a leaf.
While the end model is eventually interpretable, building it raises several questions, notably regarding the number of nodes, the feature (or set of features) on which to apply a decision rule, and the construction of the decision rule itself, i.e. the number of thresholds and hence the number of children per node. Learning the structure is easier in supervised learning, whereas the absence of labels makes the construction of unsupervised trees more challenging. In recent related works, the problem was often addressed with twofold methods (Tavallali et al., 2021; Laber et al., 2023): first learning clusters using another algorithm, e.g. KMeans, then applying a supervised decision tree to uncover explanations of the clusters. However, such unsupervised trees are in fact not fully unsupervised, since their training still requires external labels for guidance, which are provided by KMeans.
To achieve end-to-end unsupervised learning in trees, we propose a framework where we merge the view of trees as statistical models with learnable parameters and a clustering criterion to maximise: the generalised mutual information (GEMINI, Ohl et al., 2022), a distance-based score. We derive two new clustering algorithms from this framework: respectively binary decision trees for datasets with a large number of features (Kauri) and $k$-ary differentiable trees for datasets with a large number of samples (Douglas). A short description of these methods is provided in Fig. 1. The contributions of this framework are therefore:
• The introduction of two end-to-end unsupervised trees for clustering: Kauri and Douglas. Both approaches learn a tree architecture using GEMINI maximisation. The former uses binary decision trees and the latter differentiable trees.
• We show that Kauri displays equal performance in clustering to kernel KMeans+Tree using end-to-end training while obtaining shallower structures.
• A practical example showing how to interpret the obtained models in clustering.
Figure 1: Summary of the proposed framework for learning end-to-end unsupervised trees. The framework concatenates a tree structure with an objective to maximise: the generalised mutual information. The Kauri model corresponds to a binary decision tree with the squared-MMD GEMINI whereas the Douglas model corresponds to a differentiable tree and the Wasserstein GEMINI.
2 TRAINING TREES
We progressively present in this section the different means of creating a decision tree structure, with or without supervision, and link these algorithms to discriminative and hierarchical clustering methods.
2.1 HOW DO WE TRAIN SUPERVISED TREES?
In supervised learning, we have access to targets \( y \) which guide our tree construction towards separating the samples well. In this field, we can refer to the well-known classification and regression tree (CART) (Breiman et al., 1984). At each node, we evaluate the quality of a split, i.e. a proposed rule on a given feature with a data-dependent threshold, through gain metrics. We then add to the tree structure the split that achieves the highest possible gain. Common implementations of supervised trees use the Gini criterion developed by the statistician Corrado Gini (1912), which indicates how pure a tree node is given the proportions of the different labels among its samples (Casquilho & Österreicher, 2018). Later works proposed different gain metrics, such as the difference of mutual information in the ID3 (Quinlan, 1986) and C4.5 (Quinlan, 2014) algorithms.
When the number of leaves is unlimited, these approaches can produce deterministic outputs. Moreover, their greedy nature can lead to the construction of very deep trees, which harms the interpretability of the model (Luštrek et al., 2016). This motivates, for example, the construction of multiple trees that are equivalent in terms of decision yet different in terms of structure, thus presenting an overview of the Rashomon set for interpretations (Xin et al., 2022). Other approaches tried to overcome the deterministic, non-differentiable nature of rule-based trees by introducing differentiable leaves (Fang et al., 1991; Yang et al., 2018), which allows training trees through gradient descent. We will later come back to the definition of one such model for our method, the deep neural decision tree (Yang et al., 2018).
Whether differentiable or not, we choose to describe the decision trees as statistical models \( p_\theta(y|x) \) which assign the data sample \( x \) to a discrete variable \( y \), the cluster membership, according to some parameters \( \theta \). These parameters can be for example the set of thresholds and features on which decisions are carried at each node or matrix weights in differentiable trees as we will see in the next sections.
2.2 HOW DO WE TRAIN UNSUPERVISED TREES?
In clustering, we do not have access to labels, making all previous notions of gain unusable, so we need other tools for guiding the splitting procedure of the decision trees. A common approach is then to keep the algorithm supervised as described in the previous section, yet provide labels derived from a clustering algorithm, e.g. KMeans (Laber et al., 2023; Held & Buhmann, 1997). In this sense, centroids derived from KMeans can be involved in splitting procedures as well (Tavallali et al., 2021), even to the point of not needing the data from which the centroids
are derived (Gamlath et al., 2021). However, such methods do not properly construct the tree from scratch in an unsupervised way despite potential changes in the gain formulations. We are interested in a method that can provide a directly integrated objective to optimise for tree training. For example, Bertsimas et al. (2021) directly optimise the silhouette score, an internal clustering metric, yet report the need for warm start to train multivariate decision trees. Other gains derived from entropy formulations can also be proposed (Bock, 1994; Basak & Krishnapuram, 2005). We even note the usage of recursive writing of the mutual information to achieve deeper and deeper refinements of binary clusters (Karakos et al., 2005). Oftentimes, these approaches assume that a leaf describes fully a cluster. Combining leaves into a single cluster requires then post-hoc methods (Fraiman et al., 2013).
If we allow post-hoc methods, an elegant approach to constructing an unsupervised tree was proposed by Liu et al. (2000): adding uniform noise to the data and tasking a decision tree with separating noise from true data. Such trees put dense areas of the data in different leaves, which can then be labelled manually, for example.
2.3 GENERALISED MUTUAL INFORMATION FOR CLUSTERING
Inspired by the involvement of mutual information in tree gains for clustering (Karakos et al., 2005), we are interested in finding an easy-to-compute gain that requires no model-based hypotheses on the data and which does not involve a first-stage clustering algorithm for guidance.
The generalised mutual information (GEMINI) (Ohl et al., 2022) is a cost function introduced to perform clustering with any discriminative model of the form \( p_\theta(y = k \mid x) \), linking the discrete cluster assignment \( y \) to the data \( x \) through parameters \( \theta \). Maximising this objective amounts to maximising a statistical distance \( D \) between the cluster distributions \( p_\theta(x \mid y = k) \) of randomly chosen clusters. While defined on the distributions \( p_\theta(x \mid y = k) \), Bayes' theorem leads to a computable formula for this loss function involving only the predictions of the model \( p_\theta(y = k \mid x) \). Contrary to most recent unsupervised losses, especially contrastive losses, GEMINI requires neither regularisation nor data augmentation to achieve clustering. Its most defining input is a well-chosen metric in the data space, which can be a kernel if the statistical distance \( D \) is the maximum mean discrepancy (MMD) (Gretton et al., 2012) or a distance if the statistical distance is the Wasserstein distance (Peyré & Cuturi, 2019). GEMINI has two definitions; the one-vs-all:
\[
I_{D}^{\text{ova}}(x; y | \theta) = \mathbb{E}_{y \sim p_\theta(y)}[D(p_\theta(x | y) \| p(x))],
\]
and the one-vs-one:
\[
I_{D}^{\text{ovo}}(x; y | \theta) = \mathbb{E}_{y_a, y_b \sim p_\theta(y)}[D(p_\theta(x | y_a) \| p_\theta(x | y_b))].
\]
This criterion was originally intended for gradient-descent methods, especially neural networks. We will show here how GEMINI can be revisited for tree models, which may be non-differentiable, while retaining end-to-end learning.
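To fix ideas, the one-vs-one GEMINI with the squared MMD can be estimated from a kernel matrix and hard cluster assignments as in the sketch below; this is our own illustrative reading of the one-vs-one definition above in the case of hard (Dirac) predictions, not a reference implementation.

```python
import numpy as np

def ovo_mmd2_gemini(K, clusters):
    """One-vs-one GEMINI with the squared MMD for hard assignments.

    K: (n, n) kernel matrix; clusters: list of index arrays partitioning
    the n samples. Empty clusters are skipped.
    """
    n = K.shape[0]
    score = 0.0
    for Ca in clusters:
        for Cb in clusters:
            if len(Ca) == 0 or len(Cb) == 0:
                continue
            # Plug-in estimate of MMD^2 between clusters a and b.
            mmd2 = (K[np.ix_(Ca, Ca)].mean()
                    + K[np.ix_(Cb, Cb)].mean()
                    - 2.0 * K[np.ix_(Ca, Cb)].mean())
            score += (len(Ca) / n) * (len(Cb) / n) * mmd2
    return score
```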
3 KAURI: KMEANS AS UNSUPERVISED REWARD IDEAL
The Kauri tree is a non-differentiable binary decision tree that resembles the CART algorithm in many ways. It constructs from scratch a binary tree giving hard clustering assignments to the data, using an objective equivalent both to the optimisation of a kernel KMeans and to an MMD-GEMINI. In the Kauri structure, a cluster can be described by several leaves.
3.1 NOTATIONS AND MODELLING
We consider that we have a dataset of \( n \) samples: \( D = \{x_i\}_{i=1}^{n} \). We can model the classification/clustering distribution associated with decision trees as a Dirac delta:
\[
p_\theta(y = k | x) = \mathbb{1}[x \in X_k],
\]
with \( \{X_k\}_{k=1}^{K} \) a partition of the data space \( X \). Notice that we use the notation \( \mathbb{1} \) because \( y \) is discrete. We set \( X \subseteq \mathbb{R}^d \). We write the partition into \( K \) clusters as the sets of the indices of the samples that fall in the respective data subspace:
\[
C_k = \{i | x_i \in X_k\}, \forall k \leq K.
\]
We assume that the model sees all the data and that \( p(x) \) corresponds to the empirical distribution of the training data. Consequently, we do not use minibatches, and the expectations in the model turn into discrete sums. Notably, we have:
\[
p_\theta(y = k) = \frac{|C_k|}{n}. \tag{5}
\]
We denote by \( N_p \) the set of samples reaching the \( p \)-th node and by \( b^p_j \) its threshold, defined for a single feature \( j \). This threshold defines two binnings and produces two child nodes: for example, if \( x^j \leq b^p_j \), then the sample goes to the left child of node \( p \), otherwise to the right child.
3.2 TREE BRANCHING
For supervised trees like CART or ID3, the splits are binary and guided by the labels, which tell us to which class each child node should go. For unsupervised trees, we must consider all possibilities: to which cluster the left child goes, to which cluster the right child goes, on which feature to perform the split, at which threshold of this feature to split, and at which node. Assuming we are located at a node \( p \) for a split, let \( S_L \) be the subset of samples from the node samples \( N_p \) that will go to the left child node and \( S_R \) the complementary subset of samples that will go to the right child node. Each child node will be assigned to a different cluster, whether new, already existing, or equal to the parent node's cluster assignment. Let \( k_p \) be the current cluster membership of the parent node \( p \), \( k_L \) the future cluster membership of the left child node and \( k_R \) the future cluster membership of the right child node, i.e. \( S_L \cup S_R = N_p \subseteq C_{k_p} \) and, after splitting, \( S_L \subseteq C_{k_L} \) and \( S_R \subseteq C_{k_R} \).
We enforce the following constraints: a child node must stay in the parent node's cluster if both children leaving would empty the parent's cluster; and the creation of a new cluster is only allowed if the number of clusters does not exceed a specified limit \( K_{\text{max}} \). We also impose a maximum number of leaves \( L_{\text{max}} \), which can be at most the number of samples \( n \). It is nonetheless possible that the algorithm stops the splitting procedure before reaching the maximum number of leaves if all gains become negative.
Thus, learning consists in greedily exploring from all nodes the best split and either taking this split to build a new cluster or merging with another cluster. We now present the objective function and related gains depending on the children’s cluster memberships.
3.3 GAIN METRICS
Kauri is designed to maximise the following objective function:
\[
\mathcal{L} = \sum_{k=1}^{K_{\text{max}}} \frac{\sigma(C_k, C_k)}{|C_k|}, \tag{6}
\]
where the function \( \sigma \) sums the kernel values \( \kappa(x_i, x_j) = \langle \varphi(x_i), \varphi(x_j) \rangle \) over samples indexed by two sets:
\[
\sigma(E, F) = \sum_{i \in E} \sum_{j \in F} \kappa(x_i, x_j). \tag{7}
\]
We will refer to the \( \sigma \) function as the kernel stock. This function is bilinear with respect to the input spaces. The objective in Eq. 6 corresponds simultaneously to the maximisation of one-vs-all or one-vs-one squared MMD GEMINI or the minimisation of a kernel KMeans objective. The proofs are provided in App. B. We can derive from this objective four gains that evaluate how much score we get by assigning one child node to a new cluster, assigning both child nodes to two new clusters, merging one child node to another cluster or merging both child nodes to different clusters. We denote by \( C'_k \) the clusters after the split operation and \( C_k \) the clusters before the split. Hence, the global gain metric is:
\[
\Delta \mathcal{L}(S_L : k_p \rightarrow k_L, S_R : k_p \rightarrow k_R) = \frac{\sigma(C'_{k_L}, C'_{k_L})}{|C'_{k_L}|} + \frac{\sigma(C'_{k_R}, C'_{k_R})}{|C'_{k_R}|} + \frac{\sigma(C'_{k_p}, C'_{k_p})}{|C'_{k_p}|} - \frac{\sigma(C_{k_L}, C_{k_L})}{|C_{k_L}|} - \frac{\sigma(C_{k_R}, C_{k_R})}{|C_{k_R}|} - \frac{\sigma(C_{k_p}, C_{k_p})}{|C_{k_p}|}, \tag{8}
\]
Table 1: Advantages and disadvantages of the Kauri and Douglas algorithms for unsupervised tree construction.
| | Splits | Scalable with $n$ | Scalable with $d$ | Hyperparameters |
|----------------|--------------|-------------------|-------------------|--------------------------|
| Kauri | Binary | No | Yes | $K_{\text{max}}, L_{\text{max}}$ |
| Douglas | $k$-ary | Yes with minibatches | No | Number of cut-points $T$ |
which corresponds to subtracting the kernel-stock contributions of the former clusters and adding the kernel stocks of the new clusters after splitting. From this global gain metric, we derive four different gains: the star gain $\Delta \mathcal{L}^*$ for assigning either the left or right child of a leaf to a new cluster, the double star gain $\Delta \mathcal{L}^{**}$ for assigning the left and right children of a leaf to two new clusters, the switch gain $\Delta \mathcal{L}^{=}$ for assigning either the left or right child of a leaf to another existing cluster, and the reallocation gain $\Delta \mathcal{L}^{\rightarrow}$ for assigning the left and right children, respectively, to different existing clusters. The algorithm can be found in App. D, with an extended explanation of the derivation of the gains in App. C.
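As an illustration of Eqs. 6-8, the kernel stock and a split gain can be computed directly from the kernel matrix. The sketch below naively recomputes the objective before and after a reassignment instead of using the closed-form gains of App. C, so it is meant for clarity rather than efficiency.

```python
import numpy as np

def kernel_stock(K, E, F):
    """sigma(E, F): sum of kernel values between index sets E and F (Eq. 7)."""
    return K[np.ix_(E, F)].sum()

def kauri_objective(K, clusters):
    """Objective of Eq. 6: sum over clusters of sigma(C_k, C_k) / |C_k|."""
    return sum(kernel_stock(K, C, C) / len(C) for C in clusters if len(C))

def split_gain(K, clusters, S_L, k_L, S_R, k_R, k_p):
    """Naive gain of Eq. 8: move S_L to cluster k_L and S_R to cluster k_R."""
    before = kauri_objective(K, clusters)
    new = [list(C) for C in clusters]
    moved = set(S_L) | set(S_R)
    new[k_p] = [i for i in new[k_p] if i not in moved]
    new[k_L] = new[k_L] + list(S_L)
    new[k_R] = new[k_R] + list(S_R)
    return kauri_objective(K, new) - before
```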
4 DOUGLAS: DNDTs OPTIMISED USING GEMINI LEVERAGE APPRISED SPLITS
The Douglas model seeks to exploit the full potential of GEMINI by combining it with differentiable trees. Thanks to this choice of architecture, we can optimise the Wasserstein-GEMINI, an objective more efficient for clustering than the MMD-GEMINI, with respect to the parameters through gradient descent. Indeed, the MMD-GEMINI only carries information through the means of the cluster distributions and does not encompass all information on the data space, whereas the expected Wasserstein distance between two randomly chosen clusters takes the complete distributions into account. However, the cost of Douglas is the loss of depth in the tree, as all rules are produced at the root level.
Deep neural decision trees (DNDTs, Yang et al., 2018) aim to learn individual rules per feature and then merge those rules to provide a final decision. Formally, each feature $f$ among a subset of selected features is assigned a vector of sorted thresholds $b^f_1, b^f_2, \ldots, b^f_T$ which determines the binning of the feature. By defining a bias $c^f = [0, -b^f_1, -b^f_1 - b^f_2, \ldots, -b^f_1 - b^f_2 - \cdots - b^f_T]$ and a vector $a^f = [0, 1, \ldots, T]$, Yang et al. (2018) write a feature-wise probability distribution as:
$$p_{a^f,c^f}(\beta \mid x^f) = \text{SoftMax}\left(\frac{a^f x^f + c^f}{\tau}\right), \tag{9}$$
named soft-binning where $\tau$ is a temperature hyperparameter set to 0.1. After each individual soft binning is applied, all combinations of features are computed using a Kronecker product, making DNDTs hardly scalable in terms of features. For example, if the $d$ features are all separated in $T + 1$ binnings, the final decision will contain $(T + 1)^d$ entries per sample. To produce a decision from this entry, a matrix multiplication with some parameters $W$ is applied. The global model can be described as:
$$p_\theta(y = k \mid x) = \sum_{t_1=1}^{T} \sum_{t_2=1}^{T} \cdots \sum_{t_d=1}^{T} W_{k,\, t_1 + d t_2 + \cdots + d^{d-1} t_d} \prod_{f=1}^{d} p_{a^f,c^f}(\beta = t_f \mid x^f). \tag{10}$$
This model is therefore differentiable and can be trained by gradient descent.
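The soft binning of Eq. 9 and the feature combination of Eq. 10 can be written in a few lines of NumPy, as in the sketch below. The variable names are our own, and the final multiplication by the weight matrix \(W\) is omitted; this is a sketch of the forward pass, not the training code.

```python
import numpy as np

def soft_binning(x_f, cut_points, tau=0.1):
    """Soft binning of Eq. 9 for one feature; x_f has shape (n,)."""
    T = len(cut_points)
    a = np.arange(T + 1)                                    # [0, 1, ..., T]
    c = np.concatenate([[0.0], -np.cumsum(np.sort(cut_points))])
    logits = (np.outer(x_f, a) + c) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)                 # shape (n, T + 1)

def dndt_leaves(X, cuts, tau=0.1):
    """Kronecker combination of per-feature binnings: (T + 1) ** d leaves."""
    probs = soft_binning(X[:, 0], cuts[0], tau)
    for f in range(1, X.shape[1]):
        p_f = soft_binning(X[:, f], cuts[f], tau)
        probs = np.einsum("ni,nj->nij", probs, p_f).reshape(len(X), -1)
    return probs  # multiply by W and normalise to obtain p(y = k | x)
```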
For interpretation purposes, we choose to exploit active cut points as proposed by Yang et al. (2018). This is the number of features for which the respective cut-point parameters do not lie outside the feature boundaries in the dataset. For example, if for a single cut value (two bins) the bias is lower or higher than all sample values on its respective feature, then this cut point is not active and does not participate in the decision.
5 EXPERIMENTS
We start by proposing a summary of the advantages and limitations of both tree algorithms in Table 1. Overall, Kauri is recommended for small-scale datasets whereas Douglas can be used with
Table 2: Summary of the datasets used in the experiments. *The number of features may be slightly larger than the actual number of variables as discrete variables were one-hot encoded.
| Name | Samples | Features | Classes |
|-----------------------|-----------|----------|---------|
| Avila | 20,867 | 10 | 12 |
| Breast cancer | 683 | 9 | 2 |
| Car evaluation* | 1,728 | 21 | 4 |
| US Congress | 435 | 16 | 2 |
| Digits | 1,797 | 64 | 10 |
| Haberman survival | 306 | 3 | 2 |
| Iris | 150 | 4 | 3 |
| Mice protein | 552 | 77 | 8 |
| Poker hand | 1,025,010 | 10 | 2 |
| Vowel | 990 | 10 | 10 |
| Wine | 178 | 13 | 3 |
large datasets on condition that there are few features. It is important to note that Kauri, Douglas and KMeans-based related works are distance-based clustering algorithms. Consequently, these algorithms are sensitive to the scaling of the data, unlike supervised trees. Therefore, we will scale most of our datasets with standard scaling to avoid the overtaking of specific features against all others due to large ranges. The summary of these datasets can be found in Table 2. For the sake of simplicity, we discarded most dataset samples with missing values unless specified otherwise. We will assess the general clustering performances and explanation power of the models before showing qualitative examples of their interpretation. Extended experiments can be found in App. H for an extended benchmark, App. I for model selection and App. G for an alternative version of Douglas.
5.1 ON THE CLUSTERING PERFORMANCES
We compare the performances of our two proposed algorithms on 10 datasets against recent methods for unsupervised tree construction, namely ExShallow and RDM by Laber et al. (2023), and IMM (Moshkovitz et al., 2020). These methods are twofold: they start by fitting KMeans centroids to the data, then learn a tree to explain the obtained clusters. The differences between the methods lie in their attempts to limit the depth of the tree for the sake of simple explanations, as deep trees tend to lose expressivity in explanation. We provide a combination of KMeans and a standard CART decision tree classifier as a baseline for Kauri, which is a kernel-KMeans-aimed clustering algorithm. For the twofold algorithms, we report the clustering performances according to the tree. As related works focus on trees with one leaf per cluster, we limit the Kauri tree and the KMeans+Tree baseline to as many leaves as clusters. For results regarding more leaves than clusters and a comparison with the related work ExKMC by Frost et al. (2020), please refer to App. H, where the results remain consistent.
Since some algorithms are deterministic in nature, we introduce stochasticity in the results by selecting 80% of the training data over 30 runs. Details on preprocessing and experimental hyperparameters are reported in App. E. We report the performances in terms of adjusted Rand index (ARI, Hubert & Arabie, 1985), a common external clustering metric, for all algorithms in Table 3, and in terms of KMeans score normalised by the actual KMeans performance (Laber et al., 2023) in Table 4. Since all scores were equal to 0, we discarded the Poker hand dataset from Table 3. As mentioned before, Douglas' complexity grows exponentially with the number of features: for example, a binary cut on all of $d$ features implies $2^d$ outputs per sample. That is why we chose not to run Douglas on datasets with more than 20 features. While the original implementation of DNDTs by Yang et al. (2018) is made with Pytorch to benefit from automatic differentiation, we report the results of our own pure-NumPy version with explicit derivatives in App. G.
First, we observe in Table 3 that Kauri often performs on par with related works. Notably, these performances are close to the KMeans+Tree baseline, except for the digits and wine datasets. Second, the performances of related works are often close to Kauri or slightly below despite
Table 3: ARI scores (std as subscripts; greater is better) of Kauri, Douglas and other methods after 30 runs on random subsamples of 80% of the input datasets. Entries marked X were not run for Douglas due to memory overflows caused by the large number of features. All models are limited to finding as many leaves as clusters.
| Dataset | Kauri | KMeans+Tree | Douglas | ExShallow | RDM | IMM |
|-----------|-----------|-------------|---------|-----------|--------|--------|
| Avila | 0.02<sub>0.01</sub> | 0.04<sub>0.01</sub> | 0.02<sub>0.01</sub> | **0.06<sub>0.02</sub>** | 0.05<sub>0.02</sub> | **0.06<sub>0.01</sub>** |
| Cancer | 0.74<sub>0.02</sub> | 0.73<sub>0.01</sub> | **0.84<sub>0.02</sub>** | 0.74<sub>0.01</sub> | 0.68<sub>0.02</sub> | 0.73<sub>0.02</sub> |
| Car | 0.06<sub>0.06</sub> | 0.08<sub>0.07</sub> | X | 0.05<sub>0.05</sub> | 0.07<sub>0.05</sub> | 0.05<sub>0.05</sub> |
| Congress | 0.49<sub>0.03</sub> | 0.46<sub>0.04</sub> | **0.56<sub>0.04</sub>** | 0.49<sub>0.03</sub> | 0.39<sub>0.02</sub> | 0.48<sub>0.03</sub> |
| Digits | 0.26<sub>0.02</sub> | **0.36<sub>0.05</sub>** | X | 0.31<sub>0.03</sub> | 0.16<sub>0.03</sub> | 0.27<sub>0.03</sub> |
| Haberman | 0.00<sub>0.03</sub> | 0.00<sub>0.00</sub> | 0.02<sub>0.04</sub> | 0.00<sub>0.00</sub> | 0.00<sub>0.02</sub> | 0.00<sub>0.00</sub> |
| Iris | **0.63<sub>0.07</sub>** | 0.60<sub>0.06</sub> | 0.47<sub>0.12</sub> | 0.62<sub>0.06</sub> | 0.49<sub>0.04</sub> | 0.59<sub>0.05</sub> |
| Mice | **0.21<sub>0.03</sub>** | 0.18<sub>0.04</sub> | X | 0.19<sub>0.03</sub> | 0.12<sub>0.04</sub> | 0.16<sub>0.03</sub> |
| Vowel | 0.01<sub>0.01</sub> | 0.03<sub>0.03</sub> | **0.07<sub>0.05</sub>** | 0.05<sub>0.04</sub> | **0.07<sub>0.03</sub>** | **0.08<sub>0.04</sub>** |
| Wine | 0.60<sub>0.10</sub> | 0.71<sub>0.05</sub> | 0.54<sub>0.13</sub> | **0.74<sub>0.04</sub>** | 0.33<sub>0.05</sub> | **0.75<sub>0.04</sub>** |
Table 4: KMeans scores (std as subscripts; lower is better) of Kauri and related works after 30 runs on subsamples of 80% of the input datasets, divided by the KMeans reference score (=1.0). All models are limited to finding as many leaves as clusters.
| Dataset | Kauri | KMeans+Tree | Douglas | ExShallow | RDM | IMM |
|-----------|-----------|-------------|---------|-----------|--------|--------|
| Avila | 1.22<sub>0.08</sub> | 1.95<sub>0.07</sub> | 1.72<sub>0.14</sub> | 1.23<sub>0.10</sub> | 1.30<sub>0.13</sub> | **1.15<sub>0.07</sub>** |
| Cancer | 1.08<sub>0.02</sub> | 1.08<sub>0.02</sub> | **1.00<sub>0.01</sub>** | 1.07<sub>0.02</sub> | 1.31<sub>0.02</sub> | 1.07<sub>0.01</sub> |
| Car | **1.00<sub>0.00</sub>** | **1.00<sub>0.00</sub>** | X | **1.00<sub>0.00</sub>** | 1.02<sub>0.03</sub> | **1.00<sub>0.00</sub>** |
| Congress | 1.05<sub>0.01</sub> | 1.04<sub>0.01</sub> | **1.00<sub>0.01</sub>** | 1.04<sub>0.01</sub> | 1.13<sub>0.02</sub> | 1.04<sub>0.01</sub> |
| Digits | **1.13<sub>0.01</sub>** | 1.19<sub>0.02</sub> | X | **1.13<sub>0.02</sub>** | 1.24<sub>0.04</sub> | **1.14<sub>0.02</sub>** |
| Haberman | **1.01<sub>0.00</sub>** | **1.01<sub>0.00</sub>** | 1.04<sub>0.02</sub> | **1.01<sub>0.00</sub>** | **1.01<sub>0.00</sub>** | **1.01<sub>0.00</sub>** |
| Iris | **1.06<sub>0.04</sub>** | **1.07<sub>0.04</sub>** | 1.49<sub>0.24</sub> | **1.06<sub>0.05</sub>** | 1.29<sub>0.08</sub> | **1.07<sub>0.05</sub>** |
| Mice | **1.05<sub>0.01</sub>** | 1.09<sub>0.03</sub> | X | **1.05<sub>0.01</sub>** | 1.33<sub>0.05</sub> | 1.11<sub>0.03</sub> |
| Poker | **1.03<sub>0.00</sub>** | 1.07<sub>0.02</sub> | 1.16<sub>0.02</sub> | 1.05<sub>0.00</sub> | 1.07<sub>0.02</sub> | 1.12<sub>0.05</sub> |
| Vowel | 1.06<sub>0.00</sub> | 1.07<sub>0.01</sub> | **1.04<sub>0.01</sub>** | 1.07<sub>0.01</sub> | 1.09<sub>0.01</sub> | 1.09<sub>0.01</sub> |
| Wine | 1.09<sub>0.05</sub> | 1.13<sub>0.05</sub> | 1.11<sub>0.09</sub> | **1.05<sub>0.02</sub>** | 1.33<sub>0.05</sub> | **1.05<sub>0.03</sub>** |
similar limits in the number of leaves. We believe that this difference can be explained by the order in which splits are chosen in the trees, owing to the presence of the KMeans objective, or simply the usage of labels, in related methods. Regarding the KMeans score in Table 4, Kauri and Douglas both obtain good performances, with Kauri's scores always at most one standard deviation away from the best model. To conclude, we observe encouraging performances from the Douglas algorithm, which benefits from the multiple binnings of all features at the root level.
5.2 ON THE EXPLANATION QUALITY
We are now interested in the explainable nature of the obtained trees. Indeed, several tree structures could easily yield the same clustering and consequently, we need to focus on the explanation quality of the structure.
We provide an example from Moshkovitz et al. (2020) of a non-optimal choice of splits for the KMeans+Tree compared to the optimal found by Kauri in App. F.
We choose to measure the weighted average depth (WAD, Laber et al., 2023), which averages the depths of the leaves weighted by the ratio of samples they contain. The lower the WAD, the better the structure of the tree, as a shallow tree yields simpler explanations. The benefit of this metric is that it encourages trees to be shallow, a property we seek in the context of limited leaves. This metric cannot be applied to Douglas, however, because its differentiable tree sets all rules at the same level, i.e. without any notion of path for ordering the rules and leaves. Additionally, we remove for
Table 5: WAD scores (std as subscripts; lower is better) of Kauri and related works after 30 runs on random subsamples of 80% of the input datasets. All models are limited to finding as many leaves as clusters.
| Dataset | Kauri | KMeans+Tree | ExShallow | RDM | IMM |
|---------|---------|-------------|-----------|--------|--------|
| Avila | 5.47<sub>0.30</sub> | **4.00<sub>0.13</sub>** | 6.43<sub>0.56</sub> | 7.81<sub>0.33</sub> | 9.19<sub>0.10</sub> |
| Car | **2.00<sub>0.00</sub>** | 2.04<sub>0.06</sub> | 2.05<sub>0.06</sub> | 2.03<sub>0.08</sub> | 2.04<sub>0.06</sub> |
| Digits | **3.45<sub>0.22</sub>** | **3.48<sub>0.17</sub>** | 3.98<sub>0.19</sub> | 5.21<sub>0.83</sub> | 6.79<sub>0.34</sub> |
| Iris | 1.67<sub>0.02</sub> | 1.67<sub>0.02</sub> | 1.67<sub>0.02</sub> | **1.62<sub>0.03</sub>** | 1.67<sub>0.02</sub> |
| Mice | **3.04<sub>0.07</sub>** | 3.16<sub>0.13</sub> | 3.23<sub>0.16</sub> | 3.47<sub>0.39</sub> | 4.85<sub>0.41</sub> |
| Poker | **3.26<sub>0.00</sub>** | **3.26<sub>0.01</sub>** | 3.38<sub>0.05</sub> | **3.28<sub>0.11</sub>** | 4.40<sub>0.45</sub> |
| Wine | **1.58<sub>0.07</sub>** | 1.65<sub>0.04</sub> | 1.69<sub>0.03</sub> | 1.75<sub>0.03</sub> | 1.71<sub>0.02</sub> |
Additionally, we exclude from this experiment datasets with 2 clusters, as the only way to learn trees on these datasets is to have two leaves at the same depth, yielding a WAD of 1 for all methods.
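As a concrete reading of this definition, here is a minimal sketch of the WAD computation; the function name and inputs are ours, not the authors' code.

```python
import numpy as np


def weighted_average_depth(leaf_depths, leaf_sample_counts):
    """WAD: the depth of each leaf weighted by the fraction of samples it holds."""
    depths = np.asarray(leaf_depths, dtype=float)
    counts = np.asarray(leaf_sample_counts, dtype=float)
    return float(depths @ (counts / counts.sum()))


# A tree with leaves at depths 1, 2, 2 holding 50, 30, 20 samples has WAD 1.5.
print(weighted_average_depth([1, 2, 2], [50, 30, 20]))
```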
We give the WAD scores for the previously described benchmark in Table 5. We observe that Kauri often outperforms related works, even though the KMeans+Tree baseline remains a tough competitor. Moreover, these shallower structures still maintain clustering quality on par with related works, as seen in the previous section.
To highlight some behavioural differences between Kauri and KMeans+Tree, we investigate in Figure 2 how the angle of the decision boundary and the number of samples in the dataset affect performance on seemingly identical distributions. Indeed, KMeans easily builds linear boundaries that are not axis-aligned; hence, as the boundaries become less aligned with the axes, decision trees struggle to keep the number of leaves low while mimicking these "diagonal" boundaries. This effect worsens when many samples lie along the decision boundary. However, as soon as the decision boundaries are axis-aligned, the decision tree again becomes a fierce competitor. Both trees have unlimited leaves and only stop when no further gain is possible. We use the weighted average explanation size (WAES, Laber et al., 2023), which measures, averaged over the training samples, the number of non-redundant rules that define the leaf a sample falls in. The lower the WAES, the better the structure of the tree, as it yields simpler explanations. The benefit of this metric is that the depth of the tree matters little; what matters is rather the number of leaves that explain a cluster.
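To make the setup of Figure 2 concrete, the following sketch reproduces a simplified version of the experiment; it reads the $0.2I_2$ scale as a covariance and counts the number of leaves the mimicking tree needs (a simpler proxy than WAES). Names and defaults are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier


def kmeans_tree_leaves(angle_deg, n_per_cluster=500, seed=0):
    """Two isotropic Gaussians (means sqrt(2) apart) whose separating direction
    is rotated by angle_deg; fit KMeans, then a tree mimicking its labels,
    and return how many leaves the tree needs."""
    rng = np.random.default_rng(seed)
    theta = np.deg2rad(angle_deg)
    mu = (np.sqrt(2) / 2) * np.array([np.cos(theta), np.sin(theta)])
    X = np.vstack([rng.normal(+mu, np.sqrt(0.2), (n_per_cluster, 2)),
                   rng.normal(-mu, np.sqrt(0.2), (n_per_cluster, 2))])
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    tree = DecisionTreeClassifier(random_state=seed).fit(X, labels)
    return tree.get_n_leaves()


for angle in (0, 15, 30, 45):
    print(angle, kmeans_tree_leaves(angle))  # leaf count grows with the angle
```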
Figure 2: Variations of WAES scores for aligned isotropic 2d Gaussian distributions separated by Kauri or KMeans+Tree as the angle of the alignment (red line in 2c) with the x-axis (blue line in 2c) grows or the number of samples increases over 30 runs. The distance between the means is $\sqrt{2}$ and the scale matrices are $0.2I_2$.
Figure 3: The unsupervised Kauri tree for 2 clusters on the Congressional votes dataset. SA stands for the El Salvador Aid vote, NC for the Nicaraguan Contras vote and MX for the MX-missile vote. The question mark means that the voter did not vote or was missing. Nodes contain their name, the associated cluster to which they assign samples and the type of split that occurred during learning.
5.3 A QUALITATIVE EXAMPLE OF THE OBTAINED DECISION TREE
In this example, we focus on the congressional votes dataset, which records 16 key votes of the 435 members of the US Congress in 1985. The targets of the dataset are the Republican or Democrat affiliations of the voters. We preprocessed the dataset by binarising the vote outcomes with $-1$ for "no" and $1$ for "yes". Missing values due to absent votes were set to $0$, midway between yes and no, so that they do not bias the linear kernel toward one type of answer. The Kauri tree fitted on this dataset is described in Figure 3. The obtained clusters capture very well the opposition between Republicans and Democrats on arming and international assistance, with one cluster containing up to $73\%$ Republicans and the second up to $96\%$ Democrats. The ARI of this tree is $0.47$, which corresponds to an unsupervised accuracy of $84\%$.
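A minimal preprocessing sketch, assuming the standard UCI file layout (`house-votes-84.data`, party label first, then the 16 votes); this is our illustration, not the authors' code.

```python
import pandas as pd

# Assumed UCI layout: first column is the party label, then the 16 votes.
votes = pd.read_csv("house-votes-84.data", header=None)
party, X = votes.iloc[:, 0], votes.iloc[:, 1:]

# 'y' -> +1, 'n' -> -1 and '?' (absent / no vote) -> 0, midway between yes
# and no, so missing answers do not bias the linear kernel either way.
X = X.replace({"y": 1, "n": -1, "?": 0}).astype(int)
```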
Running the Douglas tree 30 times on this dataset, we measured the number of active cut points. The most frequently selected active cut points were on exactly the same features as those selected by Kauri in Figure 3: the aid to Nicaraguan Contras (selected $93\%$ of the time), the El Salvador aid ($83\%$), and the MX missile votes ($63\%$). The models had an average ARI of $0.53$.
6 FINAL WORDS
We introduced a framework for end-to-end learning of unsupervised trees. By combining tree structures with the generalised mutual information for clustering, we derived two novel instances: Kauri and Douglas. The former maximises a kernel-KMeans-like objective by iteratively building unsupervised splits through the assignment of tree leaves to existing or new clusters, while the latter exploits the combined potential of differentiable trees and the Wasserstein distance. Kauri is preferable for small-scale datasets, whereas Douglas is better suited to datasets with many samples but few features. Overall, both methods achieve good clustering performance, with Kauri on par with related works on unsupervised trees while using shallower trees. The strong advantage of these methods is that they build a clustering that is interpretable by nature, instead of seeking to explain the output of a different clustering algorithm. Finally, we think that the combination of KMeans and a decision tree remains a strong baseline that should be included in future work on unsupervised trees.
REFERENCES
Jayanta Basak and Raghu Krishnapuram. Interpretable Hierarchical Clustering by Constructing an Unsupervised Decision Tree. *IEEE Transactions on Knowledge and Data Engineering*, 17(1):121–132, 2005. Publisher: IEEE.
Dimitris Bertsimas, Agni Orfanoudaki, and Holly Wiberg. Interpretable clustering: an optimization approach. Machine Learning, 110(1):89–138, Jan 2021. ISSN 1573-0565. doi: 10.1007/s10994-
Christophe Biernacki, Gilles Celeux, and Gérard Govaert. Assessing a mixture model for clustering with the integrated completed likelihood. *IEEE transactions on pattern analysis and machine intelligence*, 22(7):719–725, 2000.
HH Bock. Information and entropy in cluster analysis. In *Proceedings of the First US/Japan Conference on the Frontiers of Statistical Modeling: An Informational Approach: Volume 2 Multivariate Statistical Modeling*, pp. 115–147. Springer, 1994.
Leo Breiman, Jerome H Friedman, Richard A Olshen, and Charles J Stone. *Classification and Regression Trees*. Belmont, CA: Wadsworth International Group, 1984.
Elliot Burghardt, Daniel Sewell, and Joseph Cavanaugh. Agglomerative and divisive hierarchical Bayesian clustering. *Computational Statistics & Data Analysis*, 176:107566, 2022. Publisher: Elsevier.
José AP Casquilho and Ferdinand Österreicher. On the gini–simpson index and its generalisation—a historic note. *South African Statistical Journal*, 52(2):129–137, 2018.
Anthony WF Edwards and Luigi Luca Cavalli-Sforza. A method for cluster analysis. *Biometrics*, pp. 362–375, 1965.
L Fang, Andrew Jennings, WX Wen, KQ-Q Li, and T Li. Unsupervised learning for neural trees. In *Proceedings of the 1991 IEEE International Joint Conference on Neural Networks*, pp. 2709–2715. IEEE, 1991.
Ricardo Fraiman, Badih Ghattas, and Marcela Svarc. Interpretable Clustering using Unsupervised Binary Trees. *Advances in Data Analysis and Classification*, 7:125–145, 2013. Publisher: Springer.
Guilherme França, Maria L Rizzo, and Joshua T Vogelstein. Kernel k-groups via hartigan’s method. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 43(12):4411–4425, 2020.
Nave Frost, Michal Moshkovitz, and Cyrus Rashtchian. Exkmc: Expanding explainable $k$-means clustering. *arXiv preprint arXiv:2006.02399*, 2020.
Buddhima Gamlath, Xinrui Jia, Adam Polak, and Ola Svensson. Nearly-tight and oblivious algorithms for explainable clustering. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 28929–28939. Curran Associates, Inc., 2021.
Corrado W Gini. Variability and mutability, contribution to the study of statistical distributions and relations. *Studi Economico-Giuridici della R. Universita de Cagliari*, 1912.
Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A Kernel Two-Sample Test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012. Publisher: JMLR. org.
Marcus Held and Joachim Buhmann. Unsupervised on-line learning of decision trees for hierarchical data analysis. *Advances in neural information processing systems*, 10, 1997.
Lawrence Hubert. Monotone invariant clustering procedures. *Psychometrika*, 38(1):47–62, 1973.
Lawrence Hubert and Phipps Arabie. Comparing partitions. *Journal of classification*, 2(1):193–218, 1985.
Damianos Karakos, Sanjeev Khudanpur, Jason Eisner, and Carey E Priebe. Unsupervised classification via decision trees: An information-theoretic perspective. In *Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05)*, volume 5, pp. v–1081. IEEE, 2005.
Leonard Kaufman and Peter J Rousseeuw. *Finding Groups in Data: An Introduction to Cluster Analysis*. Hoboken: Wiley Series in Probability and Mathematical Statistics, 1990.
|
EyDPfGy4Wh
|
In what sense does flash-attention reduce compute? Do you mean FLOPs or wall-clock time? FA is exact attention and does not reduce FLOPs; it is just a series of clever fusions. If it is wall-clock time, then this paper should keep the definition consistent and provide a wall-clock time analysis instead of MACs.
|
ABSTRACT
The costly self-attention layers in modern Transformers require memory and compute quadratic in sequence length. Existing approximation methods usually underperform and fail to obtain significant speedups in practice. Here we present Expert Projection Attention (EPA)—a novel method that reduces both compute and memory requirements and achieves wall-clock speedup, while matching the language modeling performance of baseline Transformers with the same parameter budget. EPA uses Mixture-of-Experts (MoE) layers for the value and output projections and requires 4 to 8 times fewer attention matrices than standard Transformers. Our novel attention can also be combined with MoE MLP layers, resulting in an efficient “Fast Transformer.”
1 INTRODUCTION
Large language models (LLMs) have demonstrated remarkable abilities (Radford et al., 2019; Brown et al., 2020; OpenAI, 2022; 2023) and incredible versatility (Bubeck et al., 2023). However, training enormous Transformers (Vaswani et al., 2017; Schmidhuber, 1992) requires a compute and memory budget well above what is available to most researchers, academic institutions, and even companies. In fact, even running them in inference mode, where the requirements are much lower, demands a huge engineering effort (Gerganov, 2023). Thus, smaller but more capable models have also received significant attention (Touvron et al., 2023; Taori et al., 2023; Chiang et al., 2023; MistralAI, 2023; Stanić et al., 2023). However, even with these cutting-edge techniques, LLM training remains beyond the reach of most researchers.
Recently, Csordás et al. (2023) proposed a new non-competitive Mixture-of-Experts (MoE) model to accelerate Transformer training. The authors showed that it performs on par with, or even outperforms, its parameter-matched dense counterparts at a fraction of the resource requirements. Previously in the literature, MoE models have been successfully used to scale Transformers to very large numbers of parameters (Shazeer et al., 2017; Lewis et al., 2021; Lepikhin et al., 2021; Fedus et al., 2022; Clark et al., 2022; Chi et al., 2022), but without attention to their parameter efficiency. Importantly, all of these methods focus on the MLP layer, not on the attention.
However, the attention layer (Schmidhuber, 1991; Bahdanau et al., 2015) in Transformers accounts for a significant proportion of both their compute and memory usage, especially for long context sizes. Linear attention (Schmidhuber, 1991; Katharopoulos et al., 2020; Choromanski et al., 2021; Schlag et al., 2021) was proposed as a remedy, but in practice, most methods fail to achieve significant speedups (Dao et al., 2022) and sometimes underperform compared to the exact attention.
As an alternative, MoE-based attention has been proposed (Zhang et al., 2022; Peng et al., 2020). However, they only achieve a modest reduction in computing and memory requirements, and typically require a lot of engineering tricks for successful training. Generally, MoE-based attention remains underexplored.
In this paper, we propose a novel MoE-based attention mechanism, called Expert Projection Attention (EPA), that aims to minimize the number of attention matrices required to be computed and stored. Our method is based on the σ-MoE by Csordás et al. (2023) and does not require regularization or additional tricks for stable training. Our method is capable of achieving predictive performance on par with parameter-matched baselines, with a fraction of the required compute and
---
1Here we will add a link to our public GitHub code repository upon acceptance.
memory budget. We demonstrate this on a wide range of language modeling datasets and two model sizes. We also show that models combining a σ-MoE-based MLP layer with our attention typically outperform dense baselines with identical parameter budgets, yielding a "Fast Transformer" model. Finally, we analyze the attention maps of our Expert Projection Attention and show that the maximum of the attention maps taken over all heads is qualitatively similar to that of the dense baselines, indicating a significant reduction in redundancy without a loss of expressivity. Moreover, the expert selections are often interpretable.
2 METHOD
2.1 BACKGROUND
The standard multi-head self-attention (MHA) layer (Vaswani et al., 2017) consists of four major steps: (1) computing the key (K), query (Q), and value (V) projections, (2) computing the attention matrix, (3) using the attention matrix to project the values, and (4) mapping the projected values to the output. Let \( H, T, d_{\text{model}}, d_{\text{head}} \) denote positive integers. Let \( x \in \mathbb{R}^{T \times d_{\text{model}}} \) denote an input to the MHA layer, where \( T \) is the sequence length and \( d_{\text{model}} \) the size of the hidden representations of the model. \( W_{h}^{K}, W_{h}^{Q}, W_{h}^{V} \in \mathbb{R}^{d_{\text{model}} \times d_{\text{head}}} \) are the projection matrices for head \( h \in \{0, \dots, H-1\} \). Then \( K_{h} = xW_{h}^{K}, Q_{h} = xW_{h}^{Q}, \) and \( V_{h} = xW_{h}^{V} \) (thus \( K_{h}, Q_{h}, V_{h} \in \mathbb{R}^{T \times d_{\text{head}}} \)) are the keys, queries, and values, respectively. The attention matrix of head \( h \), \( A_{h} \in \mathbb{R}^{T \times T} \), and the output \( y \in \mathbb{R}^{T \times d_{\text{model}}} \) are calculated as follows:
\[
A_{h} = \operatorname{softmax} \left( \frac{1}{\sqrt{d_{\text{head}}}} Q_{h}K_{h}^{\top} \right) \tag{1}
\]
\[
y = W_{O}\left(A_{0}V_{0} \,|\, A_{1}V_{1} \,|\, \cdots \,|\, A_{H-1}V_{H-1}\right) \tag{2}
\]
where \( | \) denotes concatenation along the last dimension, the softmax(\(\cdot\)) is taken over the last dimension, and \( W_{O} \in \mathbb{R}^{d_{\text{model}} \times H d_{\text{head}}} \). However, an alternative formulation reflects the role of \( W_{O} \) better. Let us divide \( W_{O} \) along its second dimension into per-head submatrices \( W_{O}^{h} \in \mathbb{R}^{d_{\text{model}} \times d_{\text{head}}} \), such that \( W_{O} = W_{O}^{0}|W_{O}^{1}|\cdots|W_{O}^{H-1} \). In this case, the output can be equivalently written as:
\[
y = \sum_{h=0}^{H-1} W_{O}^{h}A_{h}V_{h} \tag{3}
\]
From this, it can be seen that all computations are local to the heads. Computing the attention matrix \( A_{h} \) and the readout \( A_{h}V_{h} \) requires on the order of \( O(H d_{\text{head}} T^{2}) \) MACs (multiply-accumulate operations). During training, it requires storing \( O(H T^{2}) \) numbers for the attention matrices and \( O(H T d_{\text{head}}) \) numbers for the sub-results of the projections. For sufficiently long sequences, computing the attention matrix and projecting the values dominate the compute requirements due to the quadratic dependence on the sequence length \( T \).
2.2 FROM DENSE TO EXPERT PROJECTION ATTENTION
Our goal is to obtain resource reductions while maintaining the fundamental properties of attention and retaining a fully expressive attention matrix. In fact, there is still room for improvement: modern LLMs use tens of heads (Brown et al., 2020; Touvron et al., 2023). Are they all necessary? As we show later in Sec. 3, naively reducing the number of heads (while keeping the number of parameters the same by increasing the head dimension) results in a performance loss.
Explaining why so many heads are needed is beyond the scope of this paper. Nevertheless, here are some hypotheses: (1) they provide multiple inputs for the operations that the network performs in each step; (2) they are specialized and provide inputs only for specific operations, in which case each operation would use a different subset of heads; (3) they may provide alternatives with different initializations, some more successful than others, thus enabling better learning. Among these, hypotheses (2) and (3) offer an opportunity for resource savings: if not all heads are needed at the same time, it might be possible to switch between them. The simplest way to do so is to produce a gating signal using a linear projection $W_S \in \mathbb{R}^{d_{\text{model}} \times H}$ and keep the heads with the highest activations, replacing Eq. 3 with Eqs. 4–6:
$$s = \sigma(xW_S) \tag{4}$$
$$E = \operatorname{arg\,topk}(s, K), \quad E \subset \{1, \dots, H\} \tag{5}$$
$$y[t,c] = \sum_{h \in E} s[t,h]\left(W_O^{h}A^{h}V^{h}\right)[t,c] \tag{6}$$
where $y[t,c]$ denotes indexing a specific element of the matrix, with $t$ the timestep and $c$ the channel. Following Csordás et al. (2023), we use a non-competitive selection function. Intuitively, this corresponds to choosing a subset of attention heads for each output position. Our preliminary experiments confirmed that this method is indeed feasible for language modeling on WikiText-103. However, it is difficult to achieve acceleration and memory savings with it. To see why, notice that the entries of the attention matrix $A^h$ depend on pairs of inputs at different positions, but the choice is made based only on the output position. Thus, in the worst case, all possible projections have to be computed on the "source side" for the keys and values, which we would like to avoid.
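A minimal sketch of this per-output-position head gating (Eqs. 4–6), assuming the per-head readouts $A^hV^h$ have already been computed; names and shapes are illustrative, not the authors' code.

```python
import torch


def gated_head_readout(x, av, w_o, w_s, k):
    """x: (B, T, d_model); av: (B, H, T, d_head) holds the readouts A^h V^h;
    w_o: (H, d_head, d_model); w_s: (d_model, H); k: number of active heads."""
    s = torch.sigmoid(x @ w_s)                         # Eq. 4: (B, T, H)
    val, idx = s.topk(k, dim=-1)                       # Eq. 5: top-k heads per position
    gate = torch.zeros_like(s).scatter_(-1, idx, val)  # non-selected heads get weight 0
    y_h = torch.einsum('bhtf,hfd->bthd', av, w_o)      # per-head W_O^h A^h V^h
    return torch.einsum('bthd,bth->btd', y_h, gate)    # Eq. 6
```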
An alternative approach, which we propose here, is to perform the conditional computation on the projections, independently for the source side ($K$ and $V$) and the destination side ($Q$ and output). This avoids conditional computation that involves the attention matrix itself. The obvious way to make the projections conditional is to use Mixture of Experts (MoEs). In this case, the concept of "heads" is no longer well defined; therefore, we define a head as a specific, computed attention matrix. For each head $h$, we define a set of $E$ experts. The projection matrices then become $W_K^{h,e}, W_Q^{h,e}, W_V^{h,e}$, and $W_O^{h,e}$, where $h$ denotes the head index and $e$ the specific expert. We compute the source-side expert selection as follows:
$$s_S^h = \sigma(xW_S^h) \tag{7}$$
$$E_S^h = \operatorname{arg\,topk}(s_S^h, K), \quad E_S^h \subset \{1, \dots, E\} \tag{8}$$
We compute the destination-side experts similarly: $s_D^h = \sigma(xW_D^h), E_D^h = \arg\topk(s_D^h, K), E_D^h \subset \{1, ..., E\}$. Then, the value projection $V^h$ is computed as a weighted sum of the selected experts:
$$V^h = \sum_{e \in E_S^h} s_S^h[e]\, xW_V^{h,e} \tag{9}$$
The key and query projections are computed similarly: $K^h = \sum_{e \in E_S^h} s_S^h[e]xW_K^{h,e}$, and $Q^h = \sum_{e \in E_D^h} s_D^h[e]xW_Q^{h,e}$. The output projection also becomes an MoE:
$$y = \sum_{h=0}^{H-1} \sum_{e \in E_D^h} s_D^h[e]\, W_O^{h,e}A^hV^h \tag{10}$$
As we will show, it is not necessary to make all projections MoEs. In Section 3.1, we show that keeping a single copy of the $Q$ and $K$ projections and reusing them for all experts is beneficial. We call this method Expert Projection Attention. If it can reduce the number of heads $H$ by using more experts $E$, it provides an easy way to reduce the resource requirements of MHA. Note that our method does not depend on the specific implementation of the attention, allowing easy experimentation and research. A schematic representation is shown in Fig. 1.
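The following is a minimal, self-contained sketch of an EPA layer implementing Eqs. 7–10 in the configuration that performs best in Sec. 3.1 (shared K/Q, MoE value and output projections). It is our illustration, not the paper's CUDA kernel: for clarity, dense einsums compute all experts and zero out the non-selected ones, and autoregressive masking is omitted.

```python
import torch


class EPAttention(torch.nn.Module):
    """Sketch of Expert Projection Attention (Eqs. 7-10); names are illustrative."""

    def __init__(self, d_model, n_heads, d_head, n_experts, k_active):
        super().__init__()
        self.n_heads, self.d_head, self.k = n_heads, d_head, k_active
        scale = d_model ** -0.5
        self.w_q = torch.nn.Parameter(torch.randn(n_heads, d_model, d_head) * scale)
        self.w_k = torch.nn.Parameter(torch.randn(n_heads, d_model, d_head) * scale)
        # Expert banks for the value and output projections.
        self.w_v = torch.nn.Parameter(torch.randn(n_heads, n_experts, d_model, d_head) * scale)
        self.w_o = torch.nn.Parameter(torch.randn(n_heads, n_experts, d_head, d_model) * d_head ** -0.5)
        # Non-competitive (sigmoid) selection networks, source and destination side.
        self.sel_s = torch.nn.Linear(d_model, n_heads * n_experts, bias=False)
        self.sel_d = torch.nn.Linear(d_model, n_heads * n_experts, bias=False)

    def _gate(self, sel, x):
        # sigma(x W_S) per head (Eq. 7), then keep the top-k experts (Eq. 8).
        s = torch.sigmoid(sel(x)).unflatten(-1, (self.n_heads, -1))  # (B, T, H, E)
        val, idx = s.topk(self.k, dim=-1)
        return torch.zeros_like(s).scatter_(-1, idx, val)            # zero out the rest

    def forward(self, x):                                            # x: (B, T, d_model)
        q = torch.einsum('btd,hdf->bhtf', x, self.w_q)
        k = torch.einsum('btd,hdf->bhtf', x, self.w_k)
        # Weighted sum of the selected value experts (Eq. 9).
        g_s = self._gate(self.sel_s, x)                              # (B, T, H, E)
        v = torch.einsum('btd,bthe,hedf->bhtf', x, g_s, self.w_v)
        a = torch.softmax(q @ k.transpose(-1, -2) / self.d_head ** 0.5, dim=-1)
        av = a @ v                                                   # (B, H, T, d_head)
        # MoE output projection weighted by the destination-side gates (Eq. 10).
        g_d = self._gate(self.sel_d, x)
        return torch.einsum('bhtf,bthe,hefd->btd', av, g_d, self.w_o)
```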
Unlike standard MoE methods, we found that no regularization is necessary to achieve good performance with our method.
2.3 Resource Usage of Different Methods
In this section, we discuss the compute and memory usage of different attention variants. We define compute in terms of the number of multiply-accumulate operations (MACs, also used by Zhang et al. (2022)), which is arguably better defined than FLOPs (e.g., does one step of a matrix multiplication count as 1 FLOP or 2? Do we include the softmax?). All calculations are presented for a single attention layer and a single sequence, and they are reported this way in all our tables. Both the memory and compute requirements scale linearly with the batch size and the number of layers.
Consider a sequence of inputs of length $T$, with representation size $d_{\text{model}}$. Let $d_{\text{head}}$ be the width of the $K$, $Q$, and $V$ projections used for the attention layer. For Transformer XL-style attention, let the size of the context be $CT$, where $C - 1$ is the number of past chunks included in the context of the current attention step. We can divide the computation into two major parts: calculating the projections, which do not involve the attention map, and calculating the attention map and projecting the sequence of values using it.
First, consider the case of the standard Transformer XL (Dai et al., 2019). Here, from the input $x \in \mathbb{R}^{T \times d_{\text{model}}}$, we calculate $K^h$, $Q^h$, $V^h \in \mathbb{R}^{T \times d_{\text{head}}}$ using projection matrices of shape $\mathbb{R}^{d_{\text{model}} \times d_{\text{head}}}$. The output after the attention is projected in a similar manner (Eq. 3). Thus, the projections take a total of $4Td_{\text{model}}d_{\text{head}}$ MACs per head. For backpropagation, we have to store all intermediate results: $K^h$, $Q^h$, and $V^h$ take $Td_{\text{head}}$ numbers each. The projected values must also be stored and have an identical shape; therefore, the total memory used by the projections is $4Td_{\text{head}}$ numbers per head. Now consider the resource usage related to the attention matrix. It involves calculating the product $Q^h K^{h\top}$, which takes $d_{\text{head}}CT^2$ MACs (the factor $C$ arises because the shape of $K^h$ and $V^h$ for Transformer XL is $CT \times d_{\text{head}}$). Projecting the values with the attention matrix, $A^h V^h$, is similar. For the memory usage, the attention needs $CT^2$ numbers, stored both before and after the activation function. In addition, the projection of the position encodings must be calculated. This depends on the implementation, but in our case it involves a matrix multiplication, taking $2d_{\text{head}}d_{\text{model}}TC$ MACs in total and requiring $2d_{\text{head}}TC$ numbers of storage. Thus, the resource requirements are:
$$N_{\text{MAC}}^{\text{XL}} = H \left( 4Td_{\text{head}}d_{\text{model}} + 2CT^2d_{\text{head}} + 2CTd_{\text{head}}d_{\text{model}} \right) \tag{11}$$
$$N_{\text{mem}}^{\text{XL}} = H \left( 4Td_{\text{head}} + 2CT^2 + 2CTd_{\text{head}} \right) \tag{12}$$
The resource usage of Expert Projection Attention is different. First, the number of heads $H$ is significantly reduced, but $d_{\text{head}}$ is typically larger. Additionally, $K$ experts are active at the same time. Here, we only consider the case where the value and output projections are experts but $Q^h$ and $K^h$ are not (this version performs best; see Sec. 3.1). Then, two projections are identical to those of Transformer XL, and two are MoE-based. The latter use $TKd_{\text{model}}d_{\text{head}}$ MACs to compute the expert projections and another $TKd_{\text{head}}$ to compute their weighted average. With a smart kernel implementation, memory usage is not affected by $K$; thus, the formula remains the same as Eq. 12 (note, however, that $H$ and $d_{\text{head}}$ are very different in practice). The compute requirement is:
$$N_{\text{MAC}}^{\text{EPA}} = H \left( 2Td_{\text{head}}d_{\text{model}} + 2TKd_{\text{head}}(d_{\text{model}} + 1) + 2CT^2d_{\text{head}} + 2CTd_{\text{head}}d_{\text{model}} \right) \tag{13}$$
Additionally, the expert selection logic needs minimal additional resources, which can be ignored. Note that the comparison between the MACs of the standard attention (Eq. 11) and Expert Projection Attention (Eq. 13) depends on the exact values of the hyperparameters. However, as we will see in Sec. 3, in our typical configurations, EPA provides good predictive performance with a significantly lower $H$ than the standard Transformer, resulting in reduced resource usage overall.
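For reference, Eqs. 11–13 translate directly into the following helper functions (function names are ours):

```python
def xl_attention_resources(H, T, C, d_model, d_head):
    """MACs and stored floats per layer for Transformer XL attention (Eqs. 11-12)."""
    macs = H * (4 * T * d_head * d_model
                + 2 * C * T**2 * d_head
                + 2 * C * T * d_head * d_model)
    mem = H * (4 * T * d_head + 2 * C * T**2 + 2 * C * T * d_head)
    return macs, mem


def epa_attention_resources(H, T, C, d_model, d_head, K):
    """MACs for EPA with K active value/output experts (Eq. 13). With a smart
    kernel, the memory formula matches Eq. 12 (using EPA's smaller H)."""
    macs = H * (2 * T * d_head * d_model
                + 2 * T * K * d_head * (d_model + 1)
                + 2 * C * T**2 * d_head
                + 2 * C * T * d_head * d_model)
    mem = H * (4 * T * d_head + 2 * C * T**2 + 2 * C * T * d_head)
    return macs, mem
```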
3 Experiments
Following Csordás et al. (2023), we conduct our experiments in a parameter-matched setting, which better reflects the expressivity of language models than the FLOPS-matched setting often used to evaluate MoEs. Without this constraint, MoEs can often compensate for a weaker method by adding more experts. We use and adapt the CUDA kernel of Csordás et al. (2023) for our purposes. To match the number of parameters of different models, we follow a systematic procedure. First, we measure the parameter count of the dense Transformer, which serves as our
target. Then, for each method, we set the total number of experts (summed over heads, i.e., $HE$ for Expert Projection Attention) equal to the original number of heads. We increase the head projection size $d_{\text{head}}$ to the maximum that keeps the parameter count below our target. Because our CUDA kernel only supports $d_{\text{head}}$ in multiples of 4, this often remains below the parameter count of the baseline. To compensate further, we slightly increase $d_{\text{ff}}$ until we achieve a match that differs from our target by no more than 100k parameters but never exceeds it (see the sketch below). We do not claim that this parameter-matching method is optimal, but we aim for a consistent algorithm that does not require tuning, which would be prohibitively expensive and would have to be done for each model separately. Detailed hyperparameters of all our models can be found in Sec. A.2 in the Appendix.
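A sketch of this matching loop, where `count_params` is a hypothetical callable returning the model's total parameter count for a given configuration:

```python
def match_parameters(target, count_params, d_head, d_ff):
    """Grow d_head (in steps of 4, a kernel constraint), then d_ff, staying
    at or below `target` and ending within 100k parameters of it."""
    while count_params(d_head + 4, d_ff) <= target:   # widen the heads first
        d_head += 4
    while count_params(d_head, d_ff + 1) <= target:   # then compensate with the MLP
        d_ff += 1
    assert target - count_params(d_head, d_ff) <= 100_000
    return d_head, d_ff
```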
For all datasets except the character-level Enwik8 (Hutter, 2006), we use sub-word units (Sennrich et al., 2016; Schuster & Nakajima, 2012) obtained with a SentencePiece tokenizer (Kudo & Richardson, 2018) with a vocabulary size of 8k tokens. Unless otherwise noted, all models, including ours, are Transformer XL (Dai et al., 2019), with the context size being twice the size of the active/current chunk.
All models are trained for 100k batches. Some of the datasets we consider (C4 (Raffel et al., 2020) and peS2o (Soldaini & Lo, 2023)) are much larger; in this case, we train on the first $10^5 \cdot T \cdot N_{\text{batch}}$ tokens of the dataset, i.e., the number of tokens consumed in 100k batches.
### 3.1 Which Projections Require an MoE?
As discussed in Sec. 2.2, each linear projection (K, V, Q, O) can potentially be replaced by an MoE. Here, we first check which projections benefit from such a replacement. Since we target the parameter-matched setting, having experts where they are not necessary can have a negative effect: they use a significant part of the parameter budget, reducing the number of parameters available for the more useful parts of the model. Thus, we searched over all possible combinations of expert versus fixed projections with two active heads and compared them to the parameter-matched baseline on WikiText-103. Our models have 47M parameters. We also include a parameter-matched baseline with two heads, which serves as a lower bound on performance. The results are shown in Tab. 1. It can be seen that an expert-based output projection is necessary to match the performance of the baseline, whereas key and query experts appear unnecessary. In fact, without output and value experts, the models even underperform the dense baseline with $H = 2$ heads. The best-performing model has experts for both the value and output projections. We use this variant for all other experiments in this paper.
#### Table 1: The performance of EPA with $E = 5$ experts and $H = 2$ heads. Each projection is either expert-based or fixed for the given head. Parameter-matched baselines with $H = 10$ and $H = 2$ are shown. Models are sorted by perplexity. 47M-parameter models on Wikitext 103.
| Model | $n_{\text{heads}}$ | V expert | K expert | Q expert | O expert | Perplexity |
|----------------|--------------------|----------|----------|----------|----------|------------|
| EPA | 2 | Y | N | N | Y | 12.27 |
| EPA | 2 | N | N | N | Y | 12.30 |
| Transformer XL | 10 | - | - | - | - | 12.31 |
| EPA | 2 | N | Y | N | Y | 12.36 |
| EPA | 2 | Y | Y | N | Y | 12.37 |
| EPA | 2 | Y | N | Y | Y | 12.42 |
| EPA | 2 | Y | N | N | N | 12.45 |
| EPA | 2 | N | N | Y | Y | 12.45 |
| EPA | 2 | Y | N | Y | N | 12.51 |
| EPA | 2 | Y | Y | Y | Y | 12.57 |
| EPA | 2 | N | Y | Y | Y | 12.59 |
| EPA | 2 | Y | Y | Y | N | 12.61 |
| EPA | 2 | Y | Y | N | N | 12.69 |
| Transformer XL | 2 | - | - | - | - | 12.74 |
| EPA | 2 | N | N | Y | N | 12.75 |
| EPA | 2 | N | Y | N | N | 12.79 |
| EPA | 2 | N | Y | Y | N | 12.90 |
3.2 Comparing with MoA
The method most closely related to ours is the Mixture of Attention Heads, or MoA (Zhang et al., 2022), which uses a selection mechanism to choose active attention heads from a set of experts. MoA has a single set of $K$ and $V$ projections shared between experts, which makes acceleration possible. However, in the original paper, the authors use a high number of selected heads (8–16), which seems necessary for good performance, so the resource reductions are moderate. Moreover, MoA uses three different regularizers, which have to be tuned independently.
We compare our method with our reimplementation of MoA for different numbers of selected heads. Given the complexity of tuning its regularization coefficients, we take them directly from Zhang et al. (2022). For a fair comparison, we also integrate the non-competitive selection mechanism of Csordás et al. (2023) into MoA. The results are shown in Table 2. As with our method, we found that no regularization is required with non-competitive selection, and the predictive performance is usually superior to the original formulation. However, MoA still underperforms our method given a similar compute and memory budget.
Table 2: The performance of EPA compared to different MoA variants. MoA can outperform the baseline, but only at the price of significantly more compute and memory. EPA also outperforms the baseline dense Transformer. Results are on Wikitext 103.
| Model | sel. mode | $n_{\text{heads}}$ | #params | Perplexity | MACs | Mem (floats) |
|-------------|-----------|-------------------|---------|------------|--------|--------------|
| MoA | sigmoid | 8 | 47M | 12.13 | 390.2M | 2.6M |
| MoA | sigmoid | 6 | 47M | 12.16 | 306.8M | 1.9M |
| EPA | sigmoid | 2 | 47M | 12.27 | 170.4M | 0.8M |
| Transformer XL | - | 10 | 47M | 12.31 | 453.4M | 3.5M |
| MoA | sigmoid | 4 | 47M | 12.39 | 223.5M | 1.3M |
| MoA | softmax | 4 | 47M | 12.60 | 223.5M | 1.3M |
| MoA | softmax | 6 | 47M | 12.64 | 306.8M | 1.9M |
| MoA | sigmoid | 2 | 47M | 12.65 | 140.1M | 0.7M |
| MoA | softmax | 8 | 47M | 12.77 | 390.2M | 2.6M |
| MoA | softmax | 2 | 47M | 12.84 | 140.1M | 0.7M |
| MoA | softmax | 8 | 262M | 9.50 | 2.9G | 9.9M |
| EPA | sigmoid | 2 | 262M | 9.55 | 2.0G | 2.9M |
| MoA | sigmoid | 8 | 262M | 9.56 | 2.9G | 9.9M |
| MoA | sigmoid | 12 | 262M | 9.58 | 4.1G | 14.7M |
| Transformer XL | - | 16 | 262M | 9.66 | 5.4G | 21.0M |
| MoA | softmax | 12 | 262M | 9.68 | 4.1G | 14.7M |
| MoA | softmax | 4 | 262M | 9.69 | 1.7G | 5.1M |
| MoA | sigmoid | 4 | 262M | 9.77 | 1.7G | 5.1M |
| MoA | softmax | 2 | 262M | 9.87 | 1.1G | 2.7M |
| MoA | sigmoid | 2 | 262M | 10.02 | 1.1G | 2.7M |
3.3 Performance on Different Datasets
We test our method on a diverse set of language modeling datasets, including C4 (Raffel et al., 2020), Enwik8 (Hutter, 2006), and peS2o (Soldaini & Lo, 2023), at two scales: 47M and 262M parameters. The results are shown in Tab. 3. We compare our models to two baselines: one with the same number of heads as the total number of experts ($H \cdot E$) of the EPA models, and one with the same number of heads as the number of active attention matrices ($H$) of our models. Our models always closely match the performance of the full, many-head baseline with a fraction of the memory and compute requirements. Importantly, our method also achieves a wall-clock speedup, enough to accelerate the entire training pipeline by a factor of around 1.5 (see Appendix A.4 for details). This confirms the competitiveness of our method.
3.3.1 Fast Transformer
The goal of achieving more resource-efficient Transformers includes reducing the resource requirements of both the MLP and the attention layers. Csordás et al. (2023) proposed a parameter-efficient
Table 3: The performance of EPA compared to baselines on different datasets with different model sizes. It can be seen that the predictive performance of our Expert Projection Attention model is comparable to the baselines, and is always better than the baseline with an equal number of heads. Perplexity is shown for Wikitext 103, C4 and peS2o datasets, and bits/character (bpc) for Enwik8.
| Model | Dataset | \( n_{\text{heads}} \) | #params | ppl/bpc | MACs | Mem (floats) |
|----------------|-------------|------------------------|---------|---------|--------|--------------|
| EPA | C4 | 2 | 47M | 22.55 | 202.5M | 0.8M |
| Transformer XL | C4 | 10 | 47M | 22.62 | 453.4M | 3.5M |
| Transformer XL | C4 | 2 | 47M | 23.38 | 453.4M | 1.4M |
| EPA | C4 | 4 | 262M | 16.27 | 2.4G | 5.6M |
| Transformer XL | C4 | 16 | 262M | 16.41 | 5.4G | 21.0M |
| EPA | Wikitext 103| 2 | 47M | 12.31 | 170.4M | 0.8M |
| Transformer XL | Wikitext 103| 10 | 47M | 12.32 | 453.4M | 3.5M |
| Transformer XL | Wikitext 103| 2 | 47M | 12.73 | 453.4M | 1.4M |
| EPA | Wikitext 103| 2 | 262M | 9.77 | 2.0G | 2.9M |
| Transformer XL | Wikitext 103| 16 | 262M | 9.82 | 5.4G | 21.0M |
| Transformer XL | Wikitext 103| 2 | 262M | 10.09 | 5.4G | 6.3M |
| EPA | peS2o | 2 | 47M | 12.86 | 202.5M | 0.8M |
| Transformer XL | peS2o | 2 | 47M | 13.28 | 453.4M | 1.4M |
| Transformer XL | peS2o | 10 | 47M | 14.28 | 453.4M | 3.5M |
| Transformer XL | peS2o | 16 | 262M | 10.78 | 5.4G | 21.0M |
| EPA | peS2o | 4 | 262M | 10.81 | 2.4G | 5.6M |
| EPA | Enwik8 | 2 | 41M | 1.10 | 709.3M | 2.8M |
| Transformer XL | Enwik8 | 8 | 41M | 1.10 | 1.6G | 10.5M |
| Transformer XL | Enwik8 | 2 | 41M | 1.13 | 1.6G | 4.2M |
MoE method to accelerate the MLP layers. However, it remains unclear whether it can be efficiently combined with our Expert Projection Attention, or whether negative interaction effects arise when the two are combined in a "Fast Transformer", where every layer is MoE-based.
To verify this, we take the architecture proposed by Csordás et al. (2023) without any hyperparameter changes and replace the attention layer with EPA. The hyperparameters for the attention are taken directly from the experiments shown in Tab. 3. The results are shown in Tab. 4. The combined, fully-MoE model often outperforms the dense baselines across the datasets and model sizes considered, except in the case of the 259M-parameter model on the C4 dataset.
Table 4: The performance of Fast Transformer (Expert Projection Attention + \( \sigma \)-MoE (Csordás et al., 2023)) compared to baselines on different datasets and model sizes. Our Fast Transformer model is close to or better than the baselines.
| Model | Dataset | \( n_{\text{heads}} \) | #params | ppl/bpc | MACs | Mem (floats) |
|----------------|-------------|------------------------|---------|---------|--------|--------------|
| Fast Transformer| Wikitext 103| 2 | 47M | 12.17 | 170.4M | 0.8M |
| Transformer XL | Wikitext 103| 10 | 47M | 12.32 | 453.4M | 3.5M |
| Fast Transformer| Wikitext 103| 4 | 259M | 9.81 | 2.4G | 5.6M |
| Transformer XL | Wikitext 103| 16 | 262M | 9.85 | 5.4G | 21.0M |
| Fast Transformer| C4 | 2 | 47M | 22.09 | 202.5M | 0.8M |
| Transformer XL | C4 | 10 | 47M | 22.62 | 453.4M | 3.5M |
| Fast Transformer| C4 | 4 | 259M | 16.45 | 2.4G | 5.6M |
| Transformer XL | C4 | 16 | 262M | 17.85 | 5.4G | 21.0M |
| Fast Transformer| peS2o | 2 | 47M | 12.56 | 202.5M | 0.8M |
| Transformer XL | peS2o | 10 | 47M | 14.28 | 453.4M | 3.5M |
| Fast Transformer| peS2o | 4 | 259M | 9.86 | 2.4G | 5.6M |
| Transformer XL | peS2o | 16 | 262M | 10.83 | 5.4G | 21.0M |
4 ROPE POSITIONAL ENCODINGS
All our experiments so far have used a Transformer XL model. Thus, it remains unclear whether Expert Projection Attention is specific to this model or can also be used with other attention methods. As an alternative, we consider RoPE positional encodings (Su et al., 2021) without the XL cache (thus, the attention matrices are square). We test these models on Wikitext 103. The results are shown in Tab. 5. Our method also performs well in this setting.
Table 5: The performance of Expert Projection Attention compared to dense baseline on Wikitext 103, using RoPE positional encoding instead of Transformer XL.
| Model | Dataset | n_heads | #params | ppl/bpc | MACs | Mem (floats) |
|-------------------|-------------|---------|---------|---------|--------|--------------|
| EPA (RoPE) | Wikitext 103| 2 | 45M | 12.75 | 285.6M | 1.3M |
| Transformer (RoPE)| Wikitext 103| 10 | 45M | 12.78 | 560.9M | 6.1M |
| Transformer (RoPE)| Wikitext 103| 2 | 45M | 12.96 | 560.9M | 1.9M |
| EPA (RoPE) | Wikitext 103| 4 | 243M | 10.00 | 4.2G | 18.4M |
| Transformer (RoPE)| Wikitext 103| 16 | 244M | 10.17 | 6.4G | 37.7M |
| Transformer (RoPE)| Wikitext 103| 2 | 244M | 10.26 | 6.4G | 8.4M |
5 ANALYSIS
To see how the network uses the attention heads, we trained a small, 6-layer, 8-head Transformer on ListOps (Nangia & Bowman, 2018; Csordás et al., 2022). We chose this task because small, algorithmic tasks tend to be more interpretable than language models. We also train a parameter-matched, 2-head Expert Projection Attention model. Both models achieve around 95% accuracy on a held-out IID validation set, in contrast to the dense 2-head model, which saturates around 80%. Note that ListOps is a classification task and does not use autoregressive masking.
Following Csordás et al. (2022), we visualize the maximum over all attention heads in each layer, both for the standard Transformer (Fig. 2a) and Expert Projection Attention (Fig. 2b). The attention maps are qualitatively similar. Note that the initialization and the learning dynamics differ between the two models, so the overlap would not be perfect even between two models of the same type. We show all attention maps for both models in Fig. 4 and 5 in the Appendix.
In addition, we visualize individual attention heads of the Expert Projection Attention model. An example is shown in Fig. 2c. Alongside the attention map, we show the weights of the selected experts for both the value and output projections (denoted by V and O, respectively, on the sides of the attention map). The selection weights are often interpretable: here, the output experts specialize according to different operations, while the input ones distinguish numbers from closing parentheses. The attention map itself appears to distribute information about contiguous chunks of numbers. Similar plots for all heads are shown in Fig. 5 in the Appendix.
The attention maps of the language models are more difficult to interpret. Nevertheless, we visualized the attention maps of the 47M-parameter Transformer XL and the Expert Projection Attention model from Tab. 5 and found them to be qualitatively similar. We also identified induction heads (Olsson et al., 2022) in both models; examples are shown for EPA in Fig. 6a and for the Transformer in Fig. 6b in the Appendix. Other typical attention patterns, such as vertical lines, are shown in Fig. 6c and 6d.
6 RELATED WORK
The method most closely related to ours is MoA [Zhang et al., 2022], which introduces a MoE style attention. It defines each attention head as an expert but shares the key and value projections between them. Unlike in our case, each of the selected experts requires a separate attention matrix, which significantly increases its memory usage. Due to the use of a competitive softmax-based activation function in the selection network, it requires complex regularization to prevent expert collapse. In the original formulation, the number of active heads is high. We also confirmed in our experiments that MoA needs many attention heads to match the performance of the dense baseline (see Sec. 3.2), and it is only possible to do so with a significantly higher resource budget than our method.
Figure 2: An attention map of the (a) standard Transformer and (b) Expert Projection Attention. The maximum of all heads in the given layer are shown. (c) A head of EPA. On the left side of the attention plot, the selection weights of the output projection expert are shown. Similarly, at the bottom, the selection weights of the value experts are visible. In the selection maps, dark blue always corresponds to 1, while white is 0. The scale shown on the right is only for the attention.
Nguyen et al. (2022) analyze attention matrices and conclude that they are usually low-rank. Motivated by this, the authors construct a few (e.g., 2) "global attention matrices" and compute each head-specific matrix as a weighted average of them. However, they average the logits rather than the final matrices, so each head-specific matrix still has to be computed. Thus, at best they can save only half of the computation associated with the attention matrix, because the readout (Eq. 3) is still needed. For the same reason, memory savings are also low. The authors additionally use sampling of the attention matrices.
Peng et al. (2020) propose reweighting the contribution of each head with a gating function. However, they reduce the total number of attention heads by only one, presumably to compensate for the parameters used by the selection logic. Their goal was not to reduce resource usage but to improve predictive performance, which they achieve. They use a softmax-based competitive selection mechanism and, to avoid collapse, train the gating function only in some steps.
Csordás et al. (2023) introduce the non-competitive σ-MoE method that we also use for our attention mechanism. However, the authors focus on accelerating the MLPs and not the attention. More broadly, Shazeer et al. (2017) introduces sparsely-gated mixture of experts in LSTM (Hochreiter & Schmidhuber, 1997) networks. Fedus et al. (2021) introduces Mixture of Experts in Transformers. Lepikhin et al. (2021) trains a MoE-based LLM, and Clark et al. (2022) analyzes the scaling laws of MoE models. Lewis et al. (2021) introduces an alternative method for preventing collapse.
Dao et al. (2022) provide a hardware-aware CUDA implementation of the entire attention layer that avoids storing the attention matrix. By saving memory bandwidth in this way, they achieve a significant wall-clock speedup, even though the attention matrix must be recomputed in the backward pass. This is orthogonal to our method, and the two can be combined for further acceleration.
7 CONCLUSION
On a wide range of language modeling datasets with different model sizes, our novel Mixture-of-Experts-based attention method called Expert Projection Attention (EPA) achieves performance on par with parameter-matched dense counterparts, but with only a fraction of the computational cost and memory usage. EPA drastically reduces the number of attention matrices that have to be computed, by using MoE for the value and output projections. Our method is stable and does not need additional regularization to prevent degenerate solutions (a well-known practical issue in many existing MoE models). Our method can also be successfully combined with MoE MLP layers, to obtain a "Fast Transformer" where every layer is MoE-based, achieving a huge reduction in resource requirements.
REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In *Int. Conf. on Learning Representations (ICLR)*, San Diego, CA, USA, May 2015.
Tom B Brown et al. Language models are few-shot learners. In *Proc. Advances in Neural Information Processing Systems (NeurIPS)*, Virtual only, December 2020.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. *Preprint arXiv:2303.12712*, 2023.
Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, and Furu Wei. On the representation collapse of sparse mixture of experts. In *Proc. Advances in Neural Information Processing Systems (NeurIPS)*, New Orleans, Louisiana, USA, December 2022.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Josep E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Łukasz Kaiser, David Benjamin Belanger, Lucy J. Colwell, and Adrian Weller. Rethinking attention with performers. In *Int. Conf. on Learning Representations (ICLR)*, Virtual only, May 2021.
Aidan Clark, Diego de Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake A. Hechtman, Trevor Cai, Sebastian Borgeaud, George van den Driessche, Eliza Rutherford, Tom Hennigan, Matthew Johnson, Katie Millican, Albin Cassirer, Chris Jones, Elena Buchatskaya, David Budden, Laurent Sifre, Simon Osindero, Oriol Vinyals, Jack W. Rae, Erich Elsen, Koray Kavukcuoglu, and Karen Simonyan. Unified scaling laws for routed language models. *Preprint arXiv:2202.01169*, 2022.
Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. The neural data router: Adaptive control flow in transformers improves systematic generalization. In *Int. Conf. on Learning Representations (ICLR)*, Virtual only, April 2022.
Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. Approximating two-layer feedforward networks for efficient transformers. In *Findings of the Association for Computational Linguistics: EMNLP 2023*, November 2023.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In *Proc. Association for Computational Linguistics (ACL)*, pp. 2978–2988, Florence, Italy, 2019.
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In *Proc. Advances in Neural Information Processing Systems (NeurIPS)*, New Orleans, Louisiana, USA, December 2022.
William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *Preprint arXiv:2101.03961*, 2021.
William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *Journal of Machine Learning Research (JMLR)*, 23(1): 5232–5270, 2022.
Georgi Gerganov. llama.cpp. https://github.com/ggerganov/llama.cpp, 2023.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, pp. 1735–1780, 1997.
|
O3Mej5jlda
|
One of the claims the authors make is that the datasets used in previous studies are too easy and lack diversity. However, in Table 5 the performance of different transfer learning methods is very close, which may indicate that the proposed benchmark also lacks diversity. Moreover, the performance on some datasets is above 90%, which suggests that they are not difficult enough.
|
BENCHMARKING FEW-SHOT TRANSFERABILITY OF PRE-TRAINED MODELS WITH IMPROVED EVALUATION PROTOCOLS
Anonymous authors
Paper under double-blind review
ABSTRACT
Few-shot transfer has been made possible by stronger pre-trained models and improved transfer algorithms. However, there is a lack of a unified, rigorous evaluation protocol that is challenging yet reflects real-world usage. To this end, we carefully review previous evaluation principles and establish new standards informed by our empirical findings, covering the reporting of confidence intervals, the standard for hyperparameter tuning, the variation of ways and shots, and more. With these standards, we create FEWTRANS, a few-shot transfer benchmark containing 10 challenging datasets from diverse domains with three sub-benchmarks: one that compares pre-trained models, one that compares transfer algorithms for vision-only models, and one that compares transfer algorithms for multimodal models. To facilitate future research, we reimplement and compare several recent pre-trained models and transfer algorithms. We observe that, while stronger pre-trained models bring significant performance improvements, the performance of most transfer methods is quite close, and simply finetuning the whole backbone performs well enough, especially for multimodal models. We hope that the release of the FEWTRANS benchmark will streamline reproducible and rigorous advances in few-shot transfer learning research.
1 INTRODUCTION
Recent progress on computer vision (Kolesnikov et al., 2020; Radford et al., 2021; Islam et al., 2021; Dehghani et al., 2023) suggests that good performance on a variety of vision tasks can be achieved at low cost by transferring a pretrained, large-scale model with only a few labeled samples, facilitating downstream scenarios where labeled data can be expensive or difficult to obtain. This few-shot transferability of pre-trained models can be further improved by adopting recently proposed transfer algorithms that are claimed to be better than vanilla finetuning in terms of accuracy or efficiency, such as partial finetuning (Zaken et al., 2022), low-rank adaptation (Hu et al., 2022), adapter tuning (Houlsby et al., 2019; Chen et al., 2022; Li et al., 2022b), meta-learning (Shysheya et al., 2023), prompt tuning (Jia et al., 2022; Zhou et al., 2022c; Khattak et al., 2023) and so on.
However, the evaluation criteria for few-shot transfer have not been unified and diverge across separate threads of research, which hinders newly proposed pretrained models or transfer algorithms from being accurately evaluated and compared with previous ones. To build a unified, reasonable evaluation protocol, we first review previous evaluation setups for few-shot transfer with careful experiments. We find several inappropriate aspects, overlooked by previous evaluation criteria, that stem from the specific few-shot nature of these problems.
In particular, we find two major deficiencies in previous evaluation setups. First, we observe that a single few-shot task shows large performance variation caused by the random sampling of training data; thus, previous reports of few-shot performance based on few tasks are unreliable: just by changing the seeds that generate tasks, one can obtain high performance with arbitrary methods. This problem can be handled easily by sampling more tasks. Second, we note that the current hyperparameter selection criterion, which sets aside an additional validation dataset from the target domain, is not realistic for real-world few-shot problems. We argue that model selection should either depend on a dataset irrelevant to the target dataset, or depend only on the downstream task at hand. Our further
analysis shows that the optimal hyperparameters of few-shot transfer change from task to task and from dataset to dataset, so designing a reasonable model selection criterion that reflects the real performance of models/methods while remaining fair is difficult. We therefore propose to use a hyperparameter ensemble (Wenzel et al., 2020), which avoids committing to a single hyperparameter configuration and instead classifies test samples using several adapted classifiers obtained from a range of hyperparameters.
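A minimal sketch of this ensembling strategy, with `finetune` standing in for any adaptation routine and an illustrative learning-rate grid; this is our reading, not the benchmark's reference implementation.

```python
import copy
import torch


def ensemble_predict(model, finetune, d_train, x_test, lrs=(1e-5, 1e-4, 1e-3)):
    """Adapt one copy of the model per candidate learning rate and average
    the predicted class probabilities instead of selecting a single value."""
    probs = []
    for lr in lrs:
        adapted = finetune(copy.deepcopy(model), d_train, lr=lr)
        with torch.no_grad():
            probs.append(torch.softmax(adapted(x_test), dim=-1))
    return torch.stack(probs).mean(0).argmax(-1)
```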
Integrating all our solutions to the major and minor deficiencies of previous evaluation protocols, we construct FewTrans, a few-shot transfer benchmark containing 10 diverse downstream datasets, with the ability to sample class-imbalanced tasks with varying numbers of classes and shots. FewTrans has three sub-benchmarks for comparing pretrained models, transfer algorithms for vision-only models, and transfer algorithms for multimodal models, respectively. To facilitate future research, we have reimplemented and compared a range of pretrained models and transfer algorithms, with several interesting observations. While a larger pretraining dataset contributes significantly to downstream few-shot performance, different transfer algorithms have quite close performance. Simple all-parameter finetuning performs surprisingly well and does not appear to suffer from overfitting, especially for multimodal models, calling into question whether we are making progress on the problem.
2 Related Work
The few-shot transferability of pre-trained models improves with larger training datasets, larger architectures, and better pre-training algorithms. Kornblith et al. (2019) verify that models transferred from supervised ImageNet models generally perform much better on downstream tasks than those trained from scratch, especially under few-shot settings. Self-supervised ImageNet models were then shown to be better source models for few-shot transfer tasks than supervised models (Islam et al., 2021; Luo et al., 2023). Recent studies (Kolesnikov et al., 2020; Zhai et al., 2022; Dehghani et al., 2023) further show that scaling up pre-training datasets to hundreds of millions of images and parameters to billions leads to steadily increasing few-shot transfer performance. On the other hand, the CLIP model (Radford et al., 2021) leverages multimodal data for pre-training and achieves very impressive zero/few-shot performance on a suite of visual classification datasets using hand-crafted text prompts.
Unlike the many-shot transfer learning literature, where standard benchmarks like VTAB (Zhai et al., 2019) exist, most papers that evaluate the few-shot transferability of pretrained models do not use benchmarks but instead self-select datasets for evaluation (Kornblith et al., 2019; Kolesnikov et al., 2020; Radford et al., 2021). One exception is the few-shot transfer benchmark of transfer algorithms for multimodal models (Zhou et al., 2022c), which has 11 downstream datasets. Some evaluation principles of our benchmark were largely inspired by Meta-Dataset (Triantafillou et al., 2020), a benchmark for classical few-shot classification problems. There are several reasons why we do not build our benchmark on top of Meta-Dataset, including missing class names in some datasets, unnatural image preprocessing in some datasets, and too many shots per task.
3 The Problem of Few-shot Transfer Learning
In transfer learning, we have a pretrained model $f_\theta : \mathbb{R}^d \rightarrow \mathbb{R}^m$ mapping inputs $x \in \mathbb{R}^d$ to features $z \in \mathbb{R}^m$. The goal is to adapt the pretrained model $f_\theta$ to a specified downstream task. Any downstream task $\tau$ can be described as a combination of a training set $D^{tr} = \{(x_i, y_i)\}_{i=1}^N$ and a test set $D^{te} = \{(x'_j, y'_j)\}_{j=1}^M$, where $y_i, y'_j \in \{1, ..., n_{cls}\}$ are class labels. The task $\tau$ is called a $K$-shot task if there are exactly $K$ samples per class in $D^{tr}$. During transfer, the pretrained model $f$ is adapted to task $\tau$ using the training set $D^{tr}$ through a transfer algorithm such as finetuning, producing a new classifier that maps images to labels of the new task. To evaluate the effectiveness of the transfer, the produced classifier is evaluated on the test set $D^{te}$.
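For concreteness, a task under this definition can be sampled as follows (a minimal sketch; names are ours):

```python
import random
from collections import defaultdict


def sample_task(dataset, n_cls, k, m_per_class, seed=0):
    """Sample one downstream task: n_cls classes, K training and m_per_class
    test samples per class. `dataset` is an iterable of (x, y) pairs."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = rng.sample(sorted(by_class), n_cls)
    d_tr, d_te = [], []
    for new_y, c in enumerate(classes):
        xs = rng.sample(by_class[c], k + m_per_class)
        d_tr += [(x, new_y) for x in xs[:k]]
        d_te += [(x, new_y) for x in xs[k:]]
    return d_tr, d_te
```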
In a typical transfer learning evaluation setup (Zhai et al., 2019), the downstream task involves an entire downstream dataset, so the number of samples per class can be quite large, deviating from practical transfer scenarios where downstream data is difficult to obtain. Under the few-shot transfer scenario, the number of samples per class is small, usually fewer than 10 or 20.
4 INAPPROPRIATE EVALUATION OF PREVIOUS METHODS
In this section, we point out several flaws of previous few-shot transfer evaluation protocols. For all experiments done in this section, we use fine-tuning as the transfer algorithm. Following Luo et al. (2023), we separately set the learning rates for the backbone of the pretrained model and the linear head for improved performance. By default, we use Adam (Kingma & Ba, 2015) as the optimizer.
4.1 LARGE PERFORMANCE FLUCTUATION CAUSED BY SAMPLING

Different from the typical transfer learning setup where an entire dataset is used as the downstream task, in few-shot transfer, a randomly sampled small part of the dataset is used as the downstream task. Previous works that evaluate pretrained models on few-shot transfer tasks (Kolesnikov et al., 2020; Radford et al., 2021; Zhou et al., 2022c) sample a single or a few (usually 3) tasks and only report the average performance on the sampled tasks without error bars. This can be problematic because the performance can be largely influenced by the choice of sampled data, especially under few-shot settings. To illustrate this, we give the single-task transfer performance of DINOv2-small (Oquab et al., 2023) on the EuroSAT dataset (Helber et al., 2019) along with the 95% confidence intervals in Figure 1. As seen, when the number of shots is small, the spread of the error bar can be very large. For a 1-shot task, the performance can vary from less than 50% to more than 80% within the confidence interval. This is caused by the randomness of the training set, where a change to a single sample can lead to large fluctuations in performance (Agarwal et al., 2021). Thus the comparisons in previous works using only a few tasks are inappropriate, because a change of seed can completely determine the ranking of pretrained models/transfer methods. To make the comparison meaningful, we should at least sample enough tasks to make the confidence interval sufficiently small.
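The remedy is mechanical: sample many tasks and report a confidence interval of the mean. A minimal sketch of the normal-approximation interval used throughout (the function name is illustrative):

```python
import numpy as np

def mean_with_ci(task_accuracies, z=1.96):
    """Mean accuracy over sampled tasks with a normal-approximation 95% CI.

    With only 1-3 sampled tasks the half-width is typically huge; sampling
    hundreds of tasks shrinks it enough for rankings to become meaningful.
    """
    acc = np.asarray(task_accuracies, dtype=float)
    half_width = z * acc.std(ddof=1) / np.sqrt(len(acc))
    return acc.mean(), half_width

# Usage: collect one accuracy per sampled task, then report mean +/- half_width.
```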
4.2 UNREALISTIC MODEL SELECTION
In the typical transfer learning setting, the downstream dataset is so large that we can partition it into a training set for adaptation and a validation set for selecting hyperparameters like learning rates and the number of epochs. When it comes to evaluating the few-shot transferability of pretrained models, previous works either tune hyperparameters on a large validation set (possibly from different classes) of the same dataset (Radford et al., 2021; Luo et al., 2023) or set hyperparameters to some default "magic" values dependent on downstream datasets (Kolesnikov et al., 2020; Li et al., 2022b; Xu et al., 2022; Zhou et al., 2022c). While it seems valid to tune hyperparameters on a separate validation set, as is done in the traditional many-shot transfer learning literature, we point out that doing so is inappropriate in the few-shot setting because it deviates from real-world scenarios where additional labeled data from the same dataset for validation is hard to obtain.
Thus to make the evaluation protocol realistic while being fair for comparison, we have two choices: (1) determine hyperparameters of transfer algorithms in advance on a held-out dataset that is both different from the pretraining dataset and target downstream dataset; (2) determine hyperparameters based on the few training samples of the target downstream dataset on the fly. We will next evaluate the validity of these two choices.
Optimal hyperparameters change from task to task. If we determine hyperparameters on a separate dataset, then the hyperparameters will be the same for all tasks. Is this appropriate? In Table 1, we show the optimal hyperparameters of ten tasks sampled from the same dataset. We can observe that, even when sampled from the same dataset with the same set of classes, tasks with different training samples can have different optimal hyperparameters. The optimal number of epochs varies from 15 to 40; the optimal learning rate for the pretrained backbone varies from 2e-06 to 5e-05; the optimal learning rate for the linear classifier varies from 0.01 to 0.2.
Table 1: Optimal hyperparameters vary from task to task. The pretrained model is DINOv2-small, and all tasks are 1-shot sampled from EuroSAT.
| Task ID | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Epoch | 15 | 15 | 15 | 40 | 15 | 40 | 30 | 20 | 20 | 30 |
| Backbone lr | 5e-05 | 5e-06 | 5e-06 | 1e-05 | 2e-06 | 5e-06 | 1e-05 | 2e-05 | 2e-05 | 1e-05 |
| Head lr | 0.05 | 0.01 | 0.2 | 0.02 | 0.01 | 0.05 | 0.01 | 0.05 | 0.05 | 0.2 |
Figure 2: The heatmaps showing how the few-shot transfer performance of a single 1-shot task sampled from EuroSAT changes with hyperparameters. We fix the number of epochs to 50 in the left plot, and fix the head lr to 0.01 in the second plot. The black rectangles highlight the optimal hyperparameter areas.
**Few-shot transfer performance is sensitive to the choice of hyperparameters.** Only showing that the optimal hyperparameters change from task to task is not enough to conclude that few-shot transfer performance will change from task to task when we use the same hyperparameters for all tasks; we still need to show that few-shot transfer performance is sensitive to hyperparameters. We show how sensitive few-shot transfer performance is to the choice of hyperparameters in Figure 2, which plots heatmaps of the few-shot transfer performance of a single 1-shot task when varying two of the hyperparameters. As we can see, the variation in accuracy can be very large over the considered ranges, from around 20% to more than 60%. In particular, the performance can drop very quickly when we move out of the optimal area (highlighted in the black rectangle). For example, in the left plot, if we go down or right from the black rectangle, that is, increase the learning rate of the backbone or the linear head, we enter a chaotic area, where the accuracy oscillates up and down irregularly and often drops to half or less. This phenomenon is less evident for the number of epochs in the right plot, where the accuracy appears smoother, but we can still see a 10% performance fluctuation around the optimal area.
**Optimal hyperparameters change from dataset to dataset.** Even if we can tolerate the performance variation per task, we show in Table 2 that the "average optimal hyperparameters" (the hyperparameters that give the highest average performance over several tasks sampled from a dataset) can still vary from dataset to dataset. For example, the optimal number of epochs when transferring to Plant Disease (Mohanty et al., 2016) is 50, while the optimal number of epochs when transferring to UCF101 (Soomro et al., 2012) is 10. Among the six downstream datasets, the backbone learning rate ranges from 1e-06 to 2e-05, and the head learning rate ranges from 5e-04 to 1e-02.
Combining the analysis above, we conclude that a single hyperparameter configuration leads to unstable few-shot transfer performance from task to task and from dataset to dataset. This hyperparameter selection criterion therefore causes large uncertainty in few-shot transfer performance and cannot reflect the true performance of different methods. A proper hyperparameter selection criterion should thus rely only on the training set of the downstream task at hand, which we explore next.
**Cross-validation fails to provide reliable estimation of hyperparameters.** A representative way of estimating the hyperparameters using the training set of downstream tasks is cross-validation which has a long history of use in machine learning (Kohavi et al., 1995; Arlot & Celisse, 2009). The main idea behind $l$-fold cross-validation is to split data $l$ times, each time into a training part...
Table 2: Average optimal hyperparameters of few-shot transfer vary from dataset to dataset. The pretrained model is DINOv2-small, and all tasks are 1-shot.
| | CIFAR-100 | UCF | Plant Disease | Aircraft | DTD | EuroSAT |
|------------------|-----------|-----|---------------|----------|-----|---------|
| Epoch | 30 | 10 | 50 | 30 | 20 | 30 |
| Backbone lr | 1e-05 | 1e-05 | 2e-05 | 1e-06 | 1e-05 | 1e-05 |
| Head lr | 0.0005 | 0.01 | 0.005 | 0.005 | 0.001 | 0.01 |
Figure 3: Cross-validation cannot find good hyperparameters when the number of shots is small, regardless of the domain shift between the pretraining and downstream datasets. For the left plot, we use a subset of ImageNet classes as the training set and the remaining classes as the downstream dataset.
and a validation part. For each split, the training part is used to adapt the model and the validation part is used to evaluate the adaptation. The hyperparameters are chosen such that the average error over all splits is small. While cross-validation can work well when there is abundant data, we find that it meets difficulties when data for adaptation is scarce, because (1) the number of samples per class is too small to split. For example, when the number of samples is below 5, it is only possible to use the leave-one-out strategy, that is, the validation part has only 1 sample per split, leading to unreliable performance estimation. For the extreme 1-shot case, we cannot apply cross-validation at all because there is no data to split; (2) $l$-fold cross-validation changes the number of shots from $K$ to $K(l - 1)/l$. As we have shown previously, the optimal hyperparameters for few-shot transfer can change when the task changes, so the hyperparameters found by cross-validation can be biased. We verify these considerations in Figure 3, where we show that there is a gap between the accuracy obtained by 5-fold cross-validation and the accuracy obtained by using the "average optimal hyperparameters" of the dataset, for both in-domain and out-of-domain transfer, especially when the number of shots is small.
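For concreteness, the following sketch shows how such an $l$-fold estimate would be computed on the few training samples; the two failure modes discussed above are noted in the comments. The `adapt_and_eval` callback is a hypothetical stand-in for finetuning with a given hyperparameter configuration and reporting validation accuracy; one would pick the configuration with the highest returned score.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cv_score(train_set, adapt_and_eval, hparams, n_folds=5):
    """Score one hyperparameter configuration by l-fold CV on the support set.

    Caveats discussed above: each fold adapts on only K*(l-1)/l shots per
    class, biasing the estimate toward a smaller-shot task, and K = 1
    leaves nothing to split at all (requires K >= n_folds samples per class).
    """
    xs = np.array([x for x, _ in train_set], dtype=object)
    ys = np.array([y for _, y in train_set])
    scores = []
    for tr, va in StratifiedKFold(n_splits=n_folds).split(xs, ys):
        scores.append(adapt_and_eval(list(zip(xs[tr], ys[tr])),
                                     list(zip(xs[va], ys[va])), hparams))
    return float(np.mean(scores))
```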
In conclusion, figuring out the optimal hyperparameters for few-shot transfer is very important and is, if not impossible, very difficult under real-world settings. Because of this difficulty, a good few-shot transfer method should not only have high performance at its optimal hyperparameters, but should also be resistant to changes of hyperparameters; that is, the test loss landscape around the optimal hyperparameters should be flat so that we can tolerate an inevitable deviation in hyperparameter estimates. Thus a good evaluation protocol should evaluate both the performance that a pretrained model can reach and its sensitivity to the choice of hyperparameters.
4.3 OTHER CONSIDERATIONS
Apart from the aforementioned two major defects of previous evaluation of few-shot transferability of pretrained models, we also notice several other points that can be improved, with some of them inspired by the few-shot learning literature (Triantafillou et al., 2020).
No variation of the number of classes. Following the transfer learning literature, papers that evaluate the few-shot transferability of pretrained models often use all classes of the target downstream dataset (Kolesnikov et al., 2020; Radford et al., 2021) to form a task; the capability of pretrained models to transfer to tasks with fewer classes, which form more specific fine-grained category structures, is thus not considered.
No class imbalance. For simplicity, almost all previous few-shot transfer evaluations use class-balanced settings, where the number of shots in each class of the training set is exactly the same
Table 3: Average 1-shot transfer performance of pretrained DINOv2-small over 50 tasks: hyperparameter ensemble vs. individual hyperparameter configurations. See appendix for details.
| configuration | (1,1) | (1,2) | (1,3) | (2,1) | (2,2) | (2,3) | (3,1) | (3,2) | (3,3) | Avg | lr ensemble | lr+epoch ensemble |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-----|-------------|-------------------|
| EuroSAT | 67.52 | 67.23 | 68.00 | 70.75 | 71.01 | 61.87 | 45.44 | 45.55 | 42.48 | 59.98 | 70.21 | 70.72 |
| Aircraft | 61.92 | 61.55 | 61.44 | 61.49 | 61.79 | 61.23 | 61.44 | 61.31 | 60.45 | 61.40 | 63.39 | 63.28 |
for all classes. However, we cannot guarantee that this will still hold in real-world few-shot transfer scenarios and thus models and algorithms should be evaluated on class-imbalanced scenarios.
Datasets lack diversity, are too easy, and may have errors. Take the widely-used few-shot transfer benchmark for multimodal pretrained models (Radford et al., 2021; Zhou et al., 2022c) that contains 11 datasets as an example. Images from most datasets in this benchmark are taken from modern cities, making them similar to parts of ImageNet, and the tasks are not difficult to solve even when there are only a few samples per class. This can be seen from recent papers (Khattak et al., 2023) where the average few-shot accuracy over 5 datasets exceeds 90%, even though some of the datasets have more than 100 classes, which should be difficult to classify correctly with few samples per class. In addition, the benchmark includes StanfordCars (Krause et al., 2013), which has been shown to contain many mislabeled images and outliers (Cleanlab, 2023).
5 INTRODUCING THE FEWTRANS BENCHMARK
In this section, we introduce several evaluation standards to solve the aforementioned issues of few-shot transfer, which constitutes the key components of our proposed FewTrans benchmark.
5.1 HYPERPARAMETER ENSEMBLE FOR ROBUST FEW-SHOT EVALUATION
To overcome the difficulty of estimating hyperparameters with a few samples, we propose not to search for a single hyperparameter configuration, but instead to use a hyperparameter ensemble (Momma & Bennett, 2002; Wenzel et al., 2020) that utilizes several hyperparameter configurations for prediction. Specifically, let \( \mathcal{H} = \{h^i\}_{i=1}^{m} \) be a set of \( m \) hyperparameter configurations, where \( h^i = \{h_1^i, h_2^i, ..., h_n^i\} \) is a single configuration that includes values of \( n \) hyperparameters. Suppose that the classifier produced by adapting a pretrained model to a downstream task with hyperparameter configuration \( h^i \) is \( g_{h^i} \), which maps images to classification scores. Then for any given test image \( x \) of the downstream task, the classification score of \( x \) with respect to the hyperparameter ensemble \( \mathcal{H} \) is defined as the sum of the scores obtained by each hyperparameter configuration, i.e., \( \sum_{h \in \mathcal{H}} g_h(x) \).
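A minimal sketch of this scoring rule is given below; `adapt(pretrained, train_set, h)` is a hypothetical callback that runs one adaptation with configuration `h` and returns a classifier producing per-class scores.

```python
import numpy as np

def ensemble_classify(pretrained, train_set, x, configs, adapt):
    """Hyperparameter-ensemble prediction: argmax of sum_{h in H} g_h(x)."""
    total = None
    for h in configs:                  # H = {h^1, ..., h^m}
        g_h = adapt(pretrained, train_set, h)
        scores = np.asarray(g_h(x))    # g_h(x), a vector of shape (n_cls,)
        total = scores if total is None else total + scores
    return int(np.argmax(total))       # class with the highest summed score
```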
One advantage hyperparameter ensemble offers is its robustness to individual bad hyperparameter configurations. We can observe this in the first row of Table 3: the accuracies of some hyperparameter configurations are very low (less than 50%), but the hyperparameter ensemble still reaches 70.21% accuracy, very close to the optimal performance of 71.01% achieved by the best individual configuration. Thus, as long as good hyperparameters are included in \( \mathcal{H} \), the few-shot transfer performance with hyperparameter ensemble will be very close to the performance obtained by those hyperparameters. As we have seen before, the optimal hyperparameters of a given pretrained model do not vary by more than a few orders of magnitude, so as long as we set a large enough hyperparameter range, every evaluated task can approach its optimal performance and the evaluation is thus stabilized to some extent. In addition, it does not introduce additional computation overhead compared to cross-validation.
Another advantage of using hyperparameter ensemble is that it can measure the sensitivity of the few-shot transfer performance to the choice of hyperparameters. As seen from the second row of Table 3, when the loss landscape around the optimal hyperparameters is flat, the performance given by the ensemble will be higher, while not causing overly strong fluctuations.
According to what we have just discussed, the two criteria that we require for a good hyperparameter searcher are both satisfied by hyperparameter ensemble. We thus use it in FewTrans. For practical usage, we still need to determine how to set the range of hyperparameters for each pretrained models/transfer algorithm. Since we know that the hyperparameters won’t usually change too much from dataset to dataset, we determine the range on a held-out dataset by finding the best average
Table 4: Sub-benchmark of FewTrans that compares the few-shot transferability of different pre-trained models. We use all-parameter finetune as the transfer algorithm for all models. We temporarily do not evaluate pretrained models that use larger architectures.
| Models | Dataset | ImageNet-S | DTD | CIFAR-100 | Flowers | UCF | EuroSAT | Quick Draw | Fungi | Plant Disease | Aircraft | Average |
|-----------------|---------|------------|-----|-----------|---------|-----|---------|------------|-------|---------------|----------|---------|
| ResNet-50 | IN-1M | 63.6±1.5 | 69.3±1.1 | 74.3±1.1 | 84.1±1.1 | 76.7±1.2 | 84.1±1.0 | 64.8±1.3 | 47.6±1.4 | 72.5±1.4 | 51.7±1.4 | 68.9±1.3 |
| MAE-base | IN-1M | 72.3±1.5 | 74.0±1.1 | 86.3±0.8 | 92.1±0.6 | 88.2±0.8 | 88.1±0.9 | 72.6±1.2 | 62.1±1.3 | 85.9±0.8 | 52.3±1.3 | 77.4±1.1 |
| Swin-B | IN-1M | 71.5±1.5 | 74.7±1.1 | 87.0±0.8 | 90.9±0.8 | 87.8±0.8 | 87.6±0.9 | 74.2±1.1 | 60.7±1.3 | 86.1±0.8 | 55.4±1.4 | 77.6±1.1 |
| EsViT-SwinB | IN-1M | 69.7±1.5 | 75.3±1.0 | 84.7±0.9 | 93.6±0.6 | 88.0±0.8 | 87.8±0.9 | 70.8±1.2 | 62.9±1.3 | 86.2±0.8 | 57.7±1.5 | 77.7±1.1 |
| ConvNext-B | IN-1M | 74.6±1.5 | 73.9±1.1 | 88.3±0.7 | 90.8±0.8 | 89.1±0.8 | 86.4±1.0 | 75.1±1.1 | 60.8±1.3 | 84.2±0.9 | 55.7±1.4 | 77.9±1.1 |
| IBOT-ViT | IN-1M | 69.7±1.5 | 74.9±1.1 | 89.6±0.7 | 93.6±0.6 | 89.1±0.8 | 89.1±0.8 | 69.9±1.2 | 60.8±1.3 | 86.3±0.8 | 56.7±1.5 | 78.0±1.1 |
| BiT-R101 | IN-14M | 68.2±1.5 | 77.5±1.0 | 85.3±0.8 | 99.6±0.1 | 89.4±0.7 | 87.0±0.9 | 71.9±1.2 | 67.9±1.2 | 91.1±0.6 | 57.5±1.4 | 79.5±1.0 |
| CLIP-base | WIT-400M| 80.4±1.3 | 83.5±0.8 | 94.9±0.3 | 96.9±0.3 | 95.4±0.3 | 88.5±0.7 | 76.5±0.9 | 54.8±1.4 | 77.2±1.0 | 78.8±1.0 | 82.7±0.9 |
| DINOv2-small | LVD-142M| 75.1±1.4 | 81.3±0.9 | 89.8±0.7 | 99.6±0.1 | 90.3±0.7 | 87.0±0.9 | 78.1±1.0 | 69.9±1.2 | 89.8±0.7 | 67.3±1.4 | 82.8±1.0 |
| DINOv2-base | LVD-142M| 79.8±1.3 | 82.6±0.9 | 93.0±0.5 | 99.9±0.0 | 93.9±0.5 | 87.7±0.9 | 79.6±1.0 | 74.9±1.1 | 91.8±0.6 | 70.3±1.3 | 85.3±0.9 |
optimal hyperparameters on this dataset, setting them as the center of the hyperparameter range, and expanding to a full range. For pretrained models not trained on ImageNet, we choose the validation set of ImageNet as the held-out dataset, while for ImageNet models, we choose CUB (Welinder et al., 2010) as the held-out dataset.
5.2 OTHER COMPONENTS OF FEWTRANS
Datasets. We choose datasets such that the sampled tasks are not too easy, cover different domains, and do not have many errors. In addition, in order to evaluate multimodal models, we require that each class of chosen datasets should have a text name. We finally choose ten datasets that satisfy these criteria: ImageNet-Sketch (Wang et al., 2019), DTD (Cimpoi et al., 2014), CIFAR-100 (Krizhevsky et al., 2009), VGG Flowers (Nilsback & Zisserman, 2008), UCF-101 (Soomro et al., 2012), EuroSAT (Helber et al., 2019), Quick Draw (Jonas et al., 2016), Fungi (Schroeder & Cui, 2018), Plant Disease (Mohanty et al., 2016) and Aircraft (Maji et al., 2013).
Base-novel split. Following the literature on transfer algorithms for multimodal models (Zhou et al., 2022c; Khattak et al., 2023), we split the classes of each dataset into a base set of classes and a novel set of classes. For base evaluation, the pretrained multimodal model is adapted to a training set sampled from the base set and evaluated on a test set sampled from the base set. For base-to-novel evaluation, the pretrained multimodal model is still adapted to a training set sampled from the base set, but evaluated on a test set sampled from the novel set of classes. This is possible since multimodal models like CLIP (Radford et al., 2021) do not need a tunable classification head, but classify images based only on the text names of classes. For unimodal models, we only conduct base evaluation. The base-novel split is approximately 4 : 1 for each dataset.
Sampling criteria. We follow the task sampling criteria adopted in Meta-Dataset (Triantafillou et al., 2020) with some small differences. Specifically, to sample a task, we first sample a random number of classes from the target dataset. The number of classes is sampled uniformly from [2, 15] for all datasets except for ImageNet-Sketch, whose classes per task are hierarchically sampled from one node in WordNet to improve the quality of sampled tasks. Then images in the task are sampled with an imbalance of shots across classes. In Meta-Dataset, the average number of shots can be large (20 or more), deviating from true few-shot settings. We thus restrict the maximum number of training samples in each class to 10, constructing "true" few-shot tasks. To obtain a reliable estimate of performance, we sample 600 tasks per dataset and report the 95% confidence intervals.
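The sampler can be summarized in a few lines of Python; this is an illustrative sketch of the criteria above (uniform ways in [2, 15], imbalanced shots capped at 10), omitting the WordNet-hierarchical sampling used for ImageNet-Sketch.

```python
import random

def sample_fewtrans_task(images_by_class, max_ways=15, max_shots=10,
                         n_test_per_class=20):
    """Sample one class-imbalanced few-shot task following the criteria above."""
    ways = random.randint(2, min(max_ways, len(images_by_class)))
    classes = random.sample(sorted(images_by_class), ways)
    train, test = [], []
    for label, name in enumerate(classes):
        pool = list(images_by_class[name])
        random.shuffle(pool)
        k = random.randint(1, max_shots)  # per-class shots vary -> class imbalance
        train += [(img, label) for img in pool[:k]]
        test += [(img, label) for img in pool[k:k + n_test_per_class]]
    return train, test
```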
5.3 EXPERIMENTS ON FEWTRANS
We use the aforementioned evaluation protocols to evaluate the few-shot transferability of pretrained models and compare different transfer algorithms. This results in three sub-benchmarks that (1) compare different pretrained models, (2) compare different transfer algorithms for pure vision
Table 5: Sub-benchmark of FewTrans that compares different transfer algorithms for pure vision pretrained models. The visual encoder of CLIP-base is chosen as the pretrained model.
| | ImageNet-S | DTD | CIFAR-100 | Flowers | UCF | EuroSAT | Quick Draw | Fungi | Plant Disease | Aircraft | Average |
|------------------|------------|-----|-----------|---------|-----|---------|-------------|-------|---------------|----------|---------|
| Linear | 72.1±1.5 | 76.7±1.1 | 83.7±0.8 | 95.5±0.4 | 91.6±0.7 | 81.5±1.0 | 70.8±1.1 | 56.8±1.4 | 75.3±1.1 | 68.0±1.3 | 77.2±1.1 |
| Finetune | 73.1±1.5 | 79.9±1.0 | 88.0±0.8 | 95.9±0.5 | 93.0±0.6 | 87.6±0.9 | **78.9±1.0** | 58.9±1.4 | 83.7±0.9 | 70.7±1.3 | 81.0±1.0 |
| LoRA | 73.8±1.5 | 80.7±1.0 | 88.7±0.7 | 96.1±0.4 | 93.3±0.6 | **87.7±0.9** | 78.0±1.1 | 59.4±1.4 | 83.3±0.9 | 71.0±1.3 | 81.2±1.0 |
| BitFit | 73.6±1.5 | 79.7±1.0 | 89.3±0.7 | 96.8±0.4 | 93.3±0.6 | 86.5±0.9 | 77.3±1.1 | 61.3±1.3 | 83.5±0.9 | 71.0±1.2 | 81.2±1.0 |
| SSF | 74.2±1.5 | 80.3±1.0 | 89.0±0.7 | 96.7±0.4 | 93.2±0.6 | 87.4±0.9 | 77.3±1.1 | 60.8±1.3 | 84.4±0.9 | 70.7±1.3 | 81.4±1.0 |
| Adapter | 74.1±1.5 | 80.5±1.0 | 89.8±0.7 | 96.9±0.4 | 93.6±0.5 | 86.5±0.9 | 77.3±1.0 | 61.2±1.3 | 83.2±0.9 | 70.9±1.2 | 81.4±1.0 |
| Adaptformer | 74.1±1.5 | 80.8±1.0 | 90.0±0.7 | **97.0±0.3** | **93.8±0.5** | 87.0±0.9 | 77.7±1.0 | 61.8±1.3 | 83.6±0.9 | 71.0±1.2 | 81.7±1.0 |
| VPT | 73.2±1.5 | **82.1±0.9** | **90.2±0.7** | **97.0±0.4** | 93.6±0.5 | 87.3±0.9 | 78.2±1.0 | **61.9±1.3** | **85.7±0.9** | 71.6±1.2 | 82.1±1.0 |
| TSA | **74.3±1.5** | **80.0±1.0** | **89.5±0.7** | **96.9±0.4** | **93.5±0.6** | **87.5±0.9** | **78.3±1.0** | **64.5±1.3** | **86.2±0.8** | **72.2±1.2** | **82.3±1.0** |
Table 6: Sub-benchmark of FewTrans that compares different transfer algorithms for base evaluation of multi-modal pretrained models. CLIP-base is chosen as the pretrained model.
| | ImageNet-S | DTD | CIFAR-100 | Flowers | UCF | EuroSAT | Quick Draw | Fungi | Plant Disease | Aircraft | Average |
|------------------|------------|-----|-----------|---------|-----|---------|-------------|-------|---------------|----------|---------|
| Zero-shot | 72.6±1.5 | 73.0±1.0 | 92.9±0.4 | 86.3±0.9 | 90.5±0.6 | 64.4±1.2 | 57.4±1.2 | 38.7±1.5 | 46.0±1.4 | 69.2±1.2 | 69.1±1.2 |
| CoOp | 79.3±1.3 | 83.8±0.8 | 93.8±0.4 | 97.8±0.2 | 95.1±0.4 | 84.3±0.8 | 73.8±0.9 | 51.9±1.5 | 70.9±1.2 | 70.0±1.4 | 80.1±1.0 |
| ProGrad | 79.4±1.3 | 82.3±0.8 | 93.9±0.4 | 96.2±0.3 | 94.7±0.4 | 84.1±0.8 | 72.5±0.9 | 53.8±1.4 | 71.6±1.1 | 73.2±1.2 | 80.2±0.9 |
| VPT | 78.8±1.3 | 81.3±0.8 | 94.5±0.3 | 95.5±0.4 | 94.5±0.4 | 88.3±0.7 | 75.1±0.9 | 47.4±1.5 | 72.9±1.1 | 76.5±1.1 | 80.5±0.9 |
| MaPLe | 79.2±1.3 | 82.5±0.8 | 94.6±0.3 | 96.5±0.4 | 95.1±0.4 | 88.8±0.7 | 76.3±0.9 | 48.9±1.5 | 74.6±1.1 | 74.5±1.1 | 81.1±0.9 |
| KgCoOp | 79.9±1.2 | 84.1±0.7 | 94.1±0.4 | 97.5±0.2 | 95.3±0.4 | 84.7±0.8 | 74.1±0.9 | 55.2±1.5 | 72.9±1.1 | 73.9±1.2 | 81.2±1.0 |
| CoCoOp | 79.8±1.2 | 83.4±0.8 | 93.8±0.4 | 97.4±0.3 | 95.4±0.4 | 86.3±0.7 | 76.0±0.9 | 52.2±1.6 | 76.7±1.1 | 74.1±1.2 | 81.5±1.0 |
| AllFT | 80.4±1.3 | 83.5±0.8 | 94.9±0.3 | 96.9±0.3 | 95.4±0.3 | 88.5±0.7 | 76.5±0.9 | 54.8±1.4 | 77.2±1.0 | 78.8±1.0 | 82.7±0.9 |
| VisualFT | 80.0±1.2 | 83.0±0.8 | **95.1±0.3** | **96.6±0.4** | **95.1±0.4** | **89.9±0.7** | **78.3±0.8** | **52.7±1.4** | **80.1±0.9** | **77.7±1.0** | **82.9±0.9** |
| TextFT | **80.9±1.2** | **85.4±0.7** | **94.2±0.4** | **98.3±0.2** | **96.0±0.3** | **85.6±0.8** | **75.8±0.9** | **62.5±1.4** | **80.3±0.9** | **79.0±1.0** | **83.8±0.9** |
models, and (3) compare different transfer algorithms for multimodal models under base evaluation and base-to-novel evaluation.
Evaluated models and algorithms. For pretrained models, we evaluate supervised models including ResNet-50 (He et al., 2016), SwinTransformer-base (Liu et al., 2021), ConvNext-base (Liu et al., 2022) trained on ImageNet 1K, and BiT-R101 (Kolesnikov et al., 2020) trained on ImageNet 21K; self-supervised ImageNet models including MAE-base (He et al., 2022), IBOT-ViT-base (Zhou et al., 2022a) and EsViT-Swin-base (Li et al., 2022a); the multimodal pretrained model CLIP (Radford et al., 2021) trained on 400 million image-text pairs; and the self-supervised models DINOv2-small and DINOv2-base (Oquab et al., 2023) trained on 142M curated images. For transfer algorithms for pure vision models, we evaluate linear probing (Zhang et al., 2016), Finetune (He et al., 2022), and several parameter-efficient finetuning methods including LoRA (Hu et al., 2022), BitFit (Zaken et al., 2022), SSF (Lian et al., 2022), Adapter (Houlsby et al., 2019), Adaptformer (Chen et al., 2022), VPT (Jia et al., 2022) and TSA (Li et al., 2022b). For transfer algorithms for multimodal models, we evaluate CoOp (Zhou et al., 2022c), CoCoOp (Zhou et al., 2022b), VPT (Jia et al., 2022), MaPLe (Khattak et al., 2023), KgCoOp (Yao et al., 2023), ProGrad (Zhu et al., 2023), finetuning of the visual encoder, finetuning of the text encoder, and finetuning of both encoders. We give results in Tables 4-7 and make the following observations.
The size of the pretraining dataset matters. As seen from Table 4, models trained on ImageNet-1K have very similar performance when well-tuned (except for ResNet-50 which does not use most of the training tricks), regardless of the training algorithm and architecture used. The difference between the worst-performing MAE and best-performing IBOT is 0.6, smaller than the range of
Table 7: Sub-benchmark of FewTrans that compares different transfer algorithms for base-to-novel evaluation of multi-modal pretrained models. CLIP-base is chosen as the pretrained model.
| | ImageNet-S | DTD | CIFAR-100 | Flowers | UCF | EuroSAT | Quick Draw | Fungi | Plant Disease | Aircraft | Average |
|----------------|------------|-----|-----------|---------|-----|---------|------------|-------|---------------|----------|---------|
| CoCoOp | 67.5±1.4 | 66.7±1.1 | 86.8±0.5 | 77.5±1.1 | 87.9±0.7 | 67.5±1.3 | 61.4±1.1 | 21.8±1.2 | 59.4±1.4 | 47.9±1.7 | 64.4±1.2 |
| CoOp | 64.3±1.5 | 71.5±1.0 | 86.5±0.6 | 85.0±0.8 | 86.1±0.7 | 71.5±1.2 | 60.9±1.2 | 31.5±1.4 | 65.0±1.2 | 46.2±1.9 | 66.8±1.2 |
| ProGrad | 65.0±1.5 | 71.7±1.0 | 86.7±0.5 | 85.0±0.8 | 86.3±0.7 | 72.0±1.2 | 61.1±1.2 | 32.5±1.4 | 65.7±1.2 | 49.7±1.8 | 67.6±1.2 |
| VPT | 71.7±1.3 | 67.7±1.0 | 87.5±0.6 | 84.5±0.8 | 86.4±0.7 | 68.1±1.4 | 56.7±1.2 | 37.0±1.3 | 56.9±1.4 | 61.5±1.3 | 67.8±1.1 |
| MaPLe | 70.4±1.3 | 62.1±1.2 | 88.3±0.5 | 82.4±0.8 | 87.3±0.6 | 77.1±1.3 | 60.8±1.1 | 34.3±1.3 | 62.2±1.3 | 56.2±1.4 | 68.1±1.1 |
| KgCoOp | 68.9±1.4 | 72.7±0.9 | 87.0±0.5 | 86.6±0.7 | 87.8±0.7 | 70.6±1.2 | 60.6±1.1 | 33.9±1.4 | 66.7±1.2 | 51.9±1.8 | 68.7±1.2 |
| Zero-shot | 73.9±1.3 | 68.7±1.1 | 86.8±0.5 | 87.0±0.7 | 89.0±0.6 | 69.7±1.4 | 58.1±1.2 | 39.3±1.4 | 59.2±1.2 | 61.5±1.3 | 69.3±1.1 |
| VisualFT | 74.0±1.3 | 69.0±1.0 | 88.3±0.5 | 86.7±0.7 | 89.0±0.6 | 70.2±1.4 | 60.4±1.1 | 38.9±1.4 | 67.5±1.3 | 62.2±1.3 | 70.6±1.1 |
| TextFT | 74.2±1.3 | 69.8±1.0 | 87.0±0.5 | 87.5±0.7 | 89.8±0.6 | 72.2±1.4 | 59.9±1.2 | 39.2±1.4 | 70.2±1.2 | 61.7±1.3 | 71.2±1.1 |
| AllFT | 74.1±1.3 | 69.4±1.0 | 88.1±0.5 | 87.2±0.7 | 89.5±0.6 | 72.3±1.4 | 60.9±1.1 | 39.6±1.4 | 68.9±1.2 | 62.8±1.3 | 71.3±1.1 |
confidence interval. However, when the dataset size increases, we see a very clear improvement in few-shot transfer performance.
**CLIP meets problems with uncommon class names.** From Table 4, we see that CLIP exhibits promising performance on most datasets, but performs badly on Fungi and Plant Disease, two fine-grained datasets whose category names are mostly rare words. This amounts to a kind of "text domain shift" that requires significant updates to the text encoder. We expect that such problems can be relieved when the number of shots increases, but for few-shot evaluation on these two datasets, using only the visual encoder of CLIP (see Table 5) can be better than using both encoders (see Table 6).
**Visual-only transfer algorithms perform similarly.** From Table 5, we can see that, except for linear probing, all transfer algorithms for pure vision pretrained models have very similar performance and overlapping confidence intervals. This is in contrast to many-shot transfer learning benchmarks like VTAB (Zhai et al., 2019), where different transfer algorithms are shown to have significant performance gaps (see, e.g., Chavan et al., 2023).
**Finetune performs surprisingly well** on all sub-benchmarks for transfer algorithms, as shown in Tables 5-7, especially for multimodal models. Intuitively, finetuning all parameters of the pretrained model with a few samples should suffer from overfitting. This phenomenon needs deeper understanding.
**Are we making progress on few-shot multimodal transfer?** While we observe in Table 6 that all specifically designed transfer algorithms for CLIP perform better than the zero-shot baseline in base evaluation, they all perform worse than the zero-shot baseline in base-to-novel evaluation in Table 7, contrary to what some of these methods claimed in their papers with the old benchmarks. In contrast, simple finetuning, whether of a single encoder or of both, surpasses all these methods in both evaluation settings. This indicates that we are not making progress in this field and should rethink what actually leads to real improvements in few-shot multimodal transfer performance.
6 CONCLUSION AND FUTURE WORK
We have introduced FewTrans, a unified, realistic, rigorous benchmark for evaluating few-shot transferability of pretrained models. Our initial exploration of this benchmark shows that transferring from a better pretrained model trained on a large pretraining dataset seems to be much more important than using a better transfer algorithm. However, we believe that with rigorous evaluation, comparison and further investigations on FewTrans, good transfer algorithms will finally emerge. We are now implementing more algorithms and trying to include more pretrained models in the benchmark. In addition to comparing few-shot performance, we plan to add a comparison of the number of tunable parameters and the time needed for a complete adaptation for transfer algorithms.
7 REPRODUCIBILITY STATEMENT
We do our best to ensure the reproducibility of our benchmark. We include most details of our empirical investigations and the benchmark in the two sections of Appendix. The code for the benchmark can be found at https://anonymous.4open.science/r/FewTrans-7FB5.
REFERENCES
Mayank Agarwal, Mikhail Yurochkin, and Yuekai Sun. On sensitivity of meta-learning to support data. NeurIPS, 2021.
Sylvain Arlot and Alain Celisse. A survey of cross-validation procedures for model selection. arXiv preprint arXiv:0907.4728, 2009.
Arnav Chavan, Zhuang Liu, Deepak Gupta, Eric Xing, and Zhiqiang Shen. One-for-all: Generalized lora for parameter-efficient fine-tuning. arXiv preprint arXiv:2306.07967, 2023.
Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. Adaptformer: Adapting vision transformers for scalable visual recognition. NeurIPS, 2022.
Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In CVPR, 2014.
Cleanlab. Stanford cars (cars196) dataset contains many errors. https://www.linkedin.com/feed/update/urn:li:activity:7067249290959589376/, 2023.
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al. Scaling vision transformers to 22 billion parameters. In ICML, 2023.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In CVPR, 2022.
Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In ICML, 2019.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In ICLR, 2022.
Ashraful Islam, Chun-Fu Richard Chen, Rameswar Panda, Leonid Karlinsky, Richard Radke, and Rogerio Feris. A broad study on the transferability of visual representations with contrastive learning. In ICCV, pp. 8845–8855, 2021.
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In ECCV, 2022.
Jongejan Jonas, Rowley Henry, Kawashima Takashi, Kim Jongmin, and Fox-Gie Nick. The quick, draw! – a.i. experiment. quickdraw.withgoogle.com, 2016.
Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. Maple: Multi-modal prompt learning. In CVPR, 2023.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
|
pAsQSWlDUf
|
the calculation of weights based on the distance in the data space. However, this makes the weighting process dependent on the scale of the input data. Together with the sigmoid wrapper, it might saturate for inputs that are too large or too small. This effect might make the weights unrepresentative for use in instance-wise CL. While it is empirically shown to be effective compared with the latent space, more effort is needed to determine which space one should rely on to calculate distances.
|
SOFT CONTRASTIVE LEARNING FOR TIME SERIES
Seunghan Lee, Taeyoung Park, Kibok Lee
Department of Statistics and Data Science, Yonsei University
{seunghan9613,tpark,kibok}@yonsei.ac.kr
ABSTRACT
Contrastive learning has been shown to be effective for learning representations from time series in a self-supervised way. However, contrasting similar time series instances or values from adjacent timestamps within a time series ignores their inherent correlations, which deteriorates the quality of the learned representations. To address this issue, we propose SoftCLT, a simple yet effective soft contrastive learning strategy for time series. This is achieved by introducing instance-wise and temporal contrastive losses with soft assignments ranging from zero to one. Specifically, we define soft assignments for 1) the instance-wise contrastive loss by the distance between time series on the data space, and 2) the temporal contrastive loss by the difference of timestamps. SoftCLT is a plug-and-play method for time series contrastive learning that improves the quality of learned representations without bells and whistles. In experiments, we demonstrate that SoftCLT consistently improves the performance in various downstream tasks including classification, semi-supervised learning, transfer learning, and anomaly detection, showing state-of-the-art performance. Code is available at this repository: https://github.com/seunghan96/softclt.
1 INTRODUCTION
Time series (TS) data are ubiquitous in many fields, including finance, energy, healthcare, and transportation (Ding et al., 2020; Lago et al., 2018; Solares et al., 2020; Cai et al., 2020). However, annotating TS data can be challenging as it often requires significant domain expertise and time. To overcome this limitation and utilize unlabeled data without annotations, self-supervised learning has emerged as a promising representation learning approach not only in natural language processing (Devlin et al., 2018; Gao et al., 2021) and computer vision (Chen et al., 2020; Dosovitskiy et al., 2021), but also in TS analysis (Franceschi et al., 2019; Yue et al., 2022). In particular, contrastive learning (CL) has demonstrated remarkable performance across different domains (Chen et al., 2020; Gao et al., 2021; Yue et al., 2022). As it is challenging to determine the similarities of instances in self-supervised learning, recent CL works apply data augmentation to generate two views per data instance and take views from the same instance as positive pairs and the others as negatives (Chen et al., 2020).

However, we argue that the standard CL objective might be harmful for TS representation learning, because the inherent correlations between similar TS instances and between values at nearby timestamps within a TS, which could be a strong self-supervision signal, are ignored in CL. For example, distance metrics such as dynamic time warping (DTW) have been widely used for measuring the similarities of TS data, and contrasting TS data might lose such information. Also, values at close timestamps are usually similar in natural TS data, so contrasting all values at different timestamps with the same degree of penalty as in previous CL methods (Eldele et al., 2021; Yue et al., 2022) might not be optimal. Motivated by this, we explore the following research question: how can we take account of the similarities of time series data for better contrastive representation learning?

To this end, we propose Soft Contrastive Learning for Time series (SoftCLT). Specifically, we propose to consider the InfoNCE loss (Oord et al., 2018) not only for the positive pairs but for all other pairs as well, and to compute their weighted summation in both instance-wise CL and temporal CL, where instance-wise CL contrasts the representations of TS instances, while temporal CL contrasts the representations of timestamps within a single TS, as shown in Figure 1. We propose to assign soft assignments based on the distance between TS for the instance-wise CL, and the difference of timestamps for the temporal CL. This formulation can be seen as a generalization of the standard contrastive loss, as the proposed loss becomes the contrastive loss if we replace soft assignments with hard assignments of either zero for negative or one for positive.
We conduct extensive experiments in various tasks, including TS classification, semi-supervised classification, transfer learning, and anomaly detection tasks to prove the effectiveness of the proposed method. Experimental results validate that our method improves the performance of previous CL methods, achieving state-of-the-art (SOTA) performance on a range of downstream tasks. The main contributions of this paper are summarized as follows:
- We propose SoftCLT, a simple yet effective soft contrastive learning strategy for TS. Specifically, we propose soft contrastive losses for instance and temporal dimensions, respectively, to address limitations of previous CL methods for TS.
- We provide extensive experimental results on various tasks for TS, showing that our method improves SOTA performance on a range of downstream tasks. For example, SoftCLT improves the average accuracy over the 125 UCR datasets and the 29 UEA datasets by 2.0% and 3.9%, respectively, compared to the SOTA unsupervised representation learning method on classification tasks.
- SoftCLT is easily applicable to other CL frameworks for TS by introducing soft assignments and its overhead is negligible, making it practical for use.
2 RELATED WORK
Self-supervised learning. In recent years, self-supervised learning has gained significant attention for its ability to learn powerful representations from large amounts of unlabeled data. Self-supervised learning is done by training a model to solve a pretext task derived from a certain aspect of data without supervision. As self-supervised pretext tasks, next-token prediction (Brown et al., 2020) and masked-token prediction (Devlin et al., 2018) are commonly used in natural language processing, while solving jigsaw puzzles (Noroozi & Favaro, 2016) and rotation prediction (Gidaris & Komodakis, 2018) are proposed in computer vision. In particular, contrastive learning (Hadsell et al., 2006) has been shown to be an effective pretext task across domains, which maximizes the similarities of positive pairs while minimizing the similarities of negative pairs (Gao et al., 2021; Chen et al., 2020; Yue et al., 2022).
Contrastive learning in time series. In the field of TS analysis, several designs for positive and negative pairs have been proposed for CL, taking into account the invariant properties of TS. Table 1 compares various CL methods in TS, including ours, in terms of several properties. T-Loss (Franceschi et al., 2019) samples random subseries from a TS and treats them as positive when they belong to its subseries, and negative if they belong to subseries of other TS. Self-Time (Fan et al., 2020) captures the inter-sample relation between TS by defining augmented samples of the same TS as positives and others as negatives, and captures the intra-temporal relation within a TS by solving a classification task, where the class labels are defined using the temporal distance between subseries. TNC (Tonekaboni et al., 2021) defines the temporal neighborhood of windows using a normal distribution and treats samples in the neighborhood as positives. TS-SD (Shi et al., 2021) trains a model using a triplet similarity discrimination task, where the goal is to identify which of two TS is more similar to a given TS, using DTW to define similarity. TS-TCC (Eldele et al., 2021) proposes a temporal contrastive loss by making the augmentations predict each other's future, and CA-TCC (Eldele et al., 2023), which is the extension of TS-TCC to the semi-supervised setting, adopts the same loss. TS2Vec (Yue et al., 2022) splits TS into two subseries and defines a hierarchical contrastive loss in both instance and temporal dimensions. Mixing-up (Wickström et al., 2022) generates new TS by mixing two TS, where the goal is to predict the mixing weights. CoST (Woo et al., 2022) utilizes both time-domain and frequency-domain contrastive losses to learn disentangled seasonal-trend representations of TS. TimeCLR (Yang et al., 2022) introduces phase-shift and amplitude-change augmentations, which are data augmentation methods based on DTW. TF-C (Zhang et al., 2022) learns both time- and frequency-based representations of TS and proposes a novel time-frequency consistency architecture. In the medical domain, Subject-Aware CL (Cheng et al., 2020) proposes an instance-wise CL framework where the temporal information is entangled by architecture design, and CLOCS (Kiyasseh et al., 2021) proposes to consider the spatial dimension specifically available in their application, which is close to the channels in general TS. While previous CL methods for TS compute a hard contrastive loss, where the similarities between all negative pairs are equally minimized, we introduce a soft contrastive loss for TS.
| Method | Instance-wise CL | Temporal CL | Hierarchical CL | Soft CL |
|-----------------|------------------|-------------|-----------------|---------|
| T-Loss | ✓ | | | |
| Self-Time | ✓ | ✓ | | |
| TNC | | ✓ | | |
| TS-SD | ✓ | | | |
| TS-TCC | ✓ | ✓ | | |
| TS2Vec | ✓ | ✓ | ✓ | |
| Mixing-Up | ✓ | | | |
| CoST | ✓ | ✓ | | |
| TimeCLR | ✓ | | | |
| TF-C | ✓ | | | |
| CA-TCC | ✓ | ✓ | | |
| SoftCLT (Ours) | ✓ | ✓ | ✓ | ✓ |
Table 1: Comparison table of contrastive learning methods in time series.
Figure 1: Overall framework of SoftCLT. Unlike the conventional hard CL that gives either positive or negative assignments to sample pairs, SoftCLT gives soft assignments to both instance-wise and temporal relationships. Two views of the same sample are denoted as $r$ and $\tilde{r}$, respectively.
Soft contrastive learning. CL is typically done by batch instance discrimination, where each instance is considered to be in a distinct class. However, this approach can pose a risk of pushing similar samples farther apart in the embedding space. To address this issue, several methods have been proposed, including a method that utilizes soft assignments of images (Thoma et al., 2020) based on feature distances and geometric proximity measures. NNCLR (Dwibedi et al., 2021) defines additional positives for each view by extracting top-$k$ neighbors in the feature space. NCL (Yèche et al., 2021) finds neighbors using supervision from the medical domain knowledge and jointly optimizes two conflicting losses with a trade-off: the neighbor alignment loss maximizing the similarity of neighbors as well as positive pairs, and the neighbor discriminative loss maximizing the similarity of positive pairs while minimizing the similarity of neighbors. SNCLR (Ge et al., 2023), which extends NNCLR with soft assignments, employs an attention module to determine the correlations between the current and neighboring samples. CO2 (Wei et al., 2021) introduces consistency regularization to enforce relative distribution consistency between different positive views and all negatives, resulting in soft relationships between samples. ASCL (Feng & Patras, 2022) introduces soft inter-sample relations by transforming the original instance discrimination task into a multi-instance soft discrimination task. Previous soft CL methods in non-TS domains compute soft assignments on the embedding space, because similarities of instances on the data space are difficult to measure, particularly in computer vision (Chen et al., 2020). In contrast, we propose to compute soft assignments based on the distance between TS instances on the data space.
Masked modeling in time series. Other than CL, masked modeling has recently been studied as a pretext task for self-supervised learning in TS by masking out a portion of TS and predicting the missing values. While CL has demonstrated remarkable performance in high-level classification tasks, masked modeling has excelled in low-level forecasting tasks (Dong et al., 2023; Huang et al., 2022; Xie et al., 2022). TST (Zerveas et al., 2021) adopts the masked modeling paradigm to TS, where the goal is to reconstruct the masked timestamps. PatchTST (Nie et al., 2023) aims to predict the masked subseries-level patches to capture the local semantic information and reduce memory usage. SimMTM (Dong et al., 2023) reconstructs the original TS from multiple masked TS.
3 METHODOLOGY
In this section, we propose SoftCLT by introducing soft assignments to instance-wise and temporal contrastive losses to capture both inter-sample and intra-temporal relationships, respectively. For instance-wise CL, we use distance between TS on the data space to capture the inter-sample relations, and for temporal CL, we use the difference between timestamps to consider the temporal relation within a single TS. The overall framework of SoftCLT is illustrated in Figure 1.
3.1 PROBLEM DEFINITION
This paper addresses the task of learning a nonlinear embedding function $f_\theta : x \rightarrow r$, given a batch of $N$ time series $\mathcal{X} = \{x_1, \ldots, x_N\}$. Our goal is to learn $f_\theta$ mapping a time series $x_i \in \mathbb{R}^{T \times D}$ to a representation vector $r_i = [r_{i,1}, \ldots, r_{i,T}]^\top \in \mathbb{R}^{T \times M}$, where $T$ is the sequence length, $D$ is the input feature dimension, and $M$ is the embedded feature dimension.
3.2 SOFT INSTANCE-WISE CONTRASTIVE LEARNING
Contrasting all instances within a batch might be harmful for TS representation learning because similar instances are learned to be far away from each other on the embedding space. Unlike in other domains such as computer vision, the distance between TS data computed on the data space is useful for measuring their similarity. For example, while the pixel-by-pixel distance between two different images is generally not related to their similarity, the point-by-point distance between two TS is useful for measuring their similarity. With a min-max normalized distance metric \( D(\cdot, \cdot) \), we define a soft assignment for a pair of data indices \((i, i')\) for the instance-wise contrastive loss using the sigmoid function \( \sigma(a) = 1/(1 + \exp(-a)) \):
\[
w_I(i, i') = 2\alpha \cdot \sigma(-\tau_I \cdot D(x_i, x_{i'})), \quad (1)
\]
where \( \tau_I \) is a hyperparameter controlling the sharpness and \( \alpha \) is the upper bound in the range of \([0, 1]\) to distinguish pairs of the same TS and pairs of different TS close to each other; when \( \alpha = 1 \), we give the assignment of one to the pairs with the distance of zero as well as the pairs of the same TS. Note that distances between TS are computed with the original TS rather than the augmented views, because the pairwise distance matrix can be precomputed offline or cached for efficiency.
For the choice of the distance metric \( D \), we conduct an ablation study in Table 6d, comparing 1) cosine distance, 2) Euclidean distance, 3) dynamic time warping (DTW), and 4) time alignment measurement (TAM) (Folgado et al., 2018). Among them, we choose DTW as the distance metric throughout the experiments based on the result in Table 6d. While the computational complexity of DTW is \( O(T^2) \) for two TS of length \( T \) which might be costly for large-scale datasets, it can be precomputed offline or cached to facilitate efficient calculations, or its fast version such as FastDTW (Salvador & Chan, 2007) with the complexity of \( O(T) \) can be used. We empirically confirmed that the output of DTW and FastDTW is almost the same, such that the CL results also match.
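As a concrete illustration of Eq. 1, the following numpy sketch turns a precomputed pairwise distance matrix (e.g., DTW distances from `tslearn.metrics.dtw` or FastDTW) into the soft-assignment matrix; it is a simplified rendering for exposition, not the reference implementation.

```python
import numpy as np

def soft_instance_assignments(dist, tau_i, alpha=0.5):
    """Soft assignments w_I(i, i') of Eq. 1 from an (N, N) distance matrix.

    The distances are min-max normalized before the sigmoid; the result
    lies in (0, alpha], with values near alpha for near-duplicate series.
    """
    d = (dist - dist.min()) / (dist.max() - dist.min() + 1e-8)  # min-max normalize
    return 2.0 * alpha / (1.0 + np.exp(tau_i * d))  # = 2*alpha*sigmoid(-tau_i*d)
```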
Let \( r_{i,t} = r_{i+2N,t} \) and \( \tilde{r}_{i,t} = r_{i+N,t} \) be the embedding vectors from two augmentations of \( x_i \) at timestamp \( t \) for conciseness. Inspired by the fact that the contrastive loss can be interpreted as the cross-entropy loss (Lee et al., 2021), we define a softmax probability of the relative similarity out of all similarities considered when computing the loss as:
\[
p_I((i, i'), t) = \frac{\exp(r_{i,t} \circ r_{i',t})}{\sum_{j=1, j \neq i}^{2N} \exp(r_{i,t} \circ r_{j,t})}, \quad (2)
\]
where we use the dot product as the similarity measure \( \circ \). Then, the soft instance-wise contrastive loss for \( x_i \) at timestamp \( t \) is defined as:
\[
\ell_I^{(i,t)} = -\log p_I((i, i+N), t) - \sum_{j=1, j \notin \{i, i+N\}}^{2N} w_I(i, j \bmod N) \cdot \log p_I((i, j), t). \quad (3)
\]
The first term in \( \ell_I^{(i,t)} \) corresponds to the loss of the positive pair, and the second term corresponds to that of the other pairs weighted by soft assignments \( w_I(i, i') \). Note that this loss can be seen as a generalization of the hard instance-wise contrastive loss, which is the case when \( w_I(i, i') = 0 \) for all pairs.
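To make the structure of Eq. 3 explicit, here is an unvectorized numpy sketch for a single anchor \( i \) at a fixed timestamp, with indices wrapped modulo \( N \)/\( 2N \) as in the text; the official repository implements a vectorized version, so this is for exposition only.

```python
import numpy as np

def soft_instance_loss(z, w, i):
    """Soft instance-wise contrastive loss ell_I^{(i,t)} of Eq. 3 for anchor i.

    `z`: (2N, M) embeddings of all views at one timestamp (rows 0..N-1 are
    view 1, rows N..2N-1 are view 2); `w`: (N, N) soft assignments (Eq. 1).
    """
    two_n = z.shape[0]
    n = two_n // 2
    sims = z @ z[i]                             # r_{i,t} . r_{j,t} for all j
    sims[i] = -np.inf                           # j = i is excluded from the softmax
    m = sims[np.arange(two_n) != i].max()       # stabilized log-sum-exp
    log_p = sims - (m + np.log(np.exp(sims - m).sum()))  # log p_I((i, j), t), Eq. 2
    pos = (i + n) % two_n                       # the other view of sample i
    loss = -log_p[pos]                          # positive-pair term
    for j in range(two_n):
        if j != i and j != pos:
            loss -= w[i % n, j % n] * log_p[j]  # soft-weighted remaining pairs
    return loss
```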
3.3 SOFT TEMPORAL CONTRASTIVE LEARNING
Following the intuition that values in adjacent timestamps are similar, we propose to compute a soft assignment based on the difference between timestamps for temporal contrastive loss. Similar to the soft instance-wise contrastive loss, the assignment is close to one when timestamps get closer and zero when they get farther away. We define a soft assignment for a pair of timestamps \((t, t')\) for the temporal contrastive loss as:
\[
w_T(t, t') = 2 \cdot \sigma(-\tau_T \cdot |t - t'|), \quad (4)
\]
where \( \tau_T \) is a hyperparameter controlling the sharpness. As the degree of closeness between timestamps varies across datasets, we tune \( \tau_T \) to control the degree of soft assignments. Figure 2a illustrates an example of soft assignments with respect to timestamp difference with different \( \tau_T \).
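Eq. 4 depends only on timestamp differences, so the full \( T \times T \) assignment matrix can be precomputed once per sequence length, as in this small numpy sketch:

```python
import numpy as np

def soft_temporal_assignments(seq_len, tau_t):
    """Soft assignments w_T(t, t') of Eq. 4 for every pair of timestamps."""
    t = np.arange(seq_len)
    diff = np.abs(t[:, None] - t[None, :])     # |t - t'|
    return 2.0 / (1.0 + np.exp(tau_t * diff))  # = 2 * sigmoid(-tau_t * |t - t'|)
```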
**Hierarchical loss.** For temporal CL, we consider hierarchical contrasting on intermediate representations in the network \( f_\theta \), as done in prior CL methods for TS. Specifically, we adopt the hierarchical contrastive loss proposed in TS2Vec (Yue et al., 2022), where the losses are computed on intermediate representations after each max-pooling layer along the temporal axis and then aggregated. As shown in Figure 2b, since similarities between adjacent time steps decrease after pooling, we adjust \( \tau_T \) by multiplying it by \( m^k \) in Eq. 4, i.e., \( \tau_T = m^k \cdot \tilde{\tau}_T \), where \( m \) is the kernel size of the pooling layers, \( k \) is the depth, and \( \tilde{\tau}_T \) is the base hyperparameter.
Figure 2: (a) shows examples of soft assignments for soft temporal CL, where a smaller $\tau_T$ results in smoother assignments. (b) is an example of hierarchical representations, demonstrating that increasing layer depth results in a larger semantic difference between adjacent time steps, so $\tau_T$ should be increased to compensate for it.
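A sketch of the depth-dependent sharpness schedule just described; `n_depths` counts the pooling levels at which the hierarchical loss is computed (an illustrative helper, not the reference code):

```python
def hierarchical_taus(tau_base, kernel_size, n_depths):
    """Depth-adjusted sharpness tau_T = m^k * tau_base for k = 0..n_depths-1.

    After each max-pooling layer (kernel size m), adjacent steps become
    semantically farther apart, so the decay of w_T is sharpened with depth.
    """
    return [(kernel_size ** k) * tau_base for k in range(n_depths)]

# e.g. hierarchical_taus(1.0, 2, 4) -> [1.0, 2.0, 4.0, 8.0]; the temporal
# loss at depth k then uses w_T computed with the k-th value.
```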
Now, let $r_{i,t} = r_{i,t+2T}$ and $\tilde{r}_{i,t} = r_{i,t+T}$ be the embedding vectors from two augmentations of $x_i$ at timestamp $t$ for conciseness. Similar to Eq. 2, we define a softmax probability of the relative similarity out of all similarities considered when computing the loss as:
$$p_T(i, (t, t')) = \frac{\exp(r_{i,t} \circ r_{i,t'})}{\sum_{s=1,s \neq t}^{2T} \exp(r_{i,t} \circ r_{i,s})}. \quad (5)$$
Then, the soft temporal contrastive loss for $x_i$ at timestamp $t$ is defined as:
$$\ell_T^{(i,t)} = -\log p_T(i, (t, t + T)) - \sum_{s=1, s \notin \{t, t+T\}}^{2T} w_T(t, s \bmod T) \cdot \log p_T(i, (t, s)). \quad (6)$$
Similar to the soft instance-wise contrastive loss, this loss can be seen as a generalization of the hard temporal contrastive loss, which is the case when $w_T(t, t') = 0$ for all pairs.
The final loss for SoftCLT is the joint of the soft instance-wise and temporal contrastive losses:
$$L = \frac{1}{4NT} \sum_{i=1}^{2N} \sum_{t=1}^{2T} (\lambda \cdot \ell_I^{(i,t)} + (1 - \lambda) \cdot \ell_T^{(i,t)}), \quad (7)$$
where $\lambda$ is a hyperparameter controlling the contribution of each loss, set to 0.5 unless specified.
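Given the per-pair losses from Eqs. 3 and 6 collected into arrays, the joint objective of Eq. 7 reduces to a weighted mean, e.g.:

```python
import numpy as np

def softclt_loss(inst_losses, temp_losses, lam=0.5):
    """Joint SoftCLT objective of Eq. 7.

    `inst_losses` / `temp_losses`: arrays of shape (2N, 2T) holding
    ell_I^{(i,t)} and ell_T^{(i,t)}; averaging over the 2N * 2T entries
    reproduces the 1/(4NT) normalization.
    """
    inst = np.asarray(inst_losses, dtype=float)
    temp = np.asarray(temp_losses, dtype=float)
    return float((lam * inst + (1.0 - lam) * temp).mean())
```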
The proposed loss has an interesting mathematical interpretation that it can be seen as the scaled KL divergence of the softmax probabilities from the normalized soft assignments, where the scale is the sum of soft assignments. We provide more details in Appendix D.
4 EXPERIMENTS
We conduct extensive experiments to validate the proposed method and assess its performance in different tasks: (1) classification with univariate and multivariate TS, (2) semi-supervised classification by (i) self-supervised learning followed by fine-tuning and (ii) semi-supervised learning, (3) transfer learning in in-domain and cross-domain scenarios, and (4) anomaly detection in normal and cold-start settings. We also conduct ablation studies to validate the effectiveness of SoftCLT as well as its design choices. Finally, we visualize pairwise distance matrices and t-SNE (Van der Maaten & Hinton, 2008) of temporal representations to show the effect of SoftCLT over previous methods.
We use the data augmentation strategies of the methods we apply our SoftCLT to: TS2Vec generates two views as TS segments with overlap, and TS-TCC/CA-TCC generate two views with weak and strong augmentations, using the jitter-and-scale and permutation-and-jitter strategies, respectively.
4.1 CLASSIFICATION
We conduct experiments on TS classification tasks with 125 UCR archive datasets (Dau et al., 2019) for univariate TS and 29 UEA archive datasets (Bagnall et al., 2018) for multivariate TS, respectively.
1Some of the previous methods cannot handle missing observations, so three of the 128 datasets are omitted.
2One of the 30 datasets is omitted for a fair comparison with some of the previous methods.
| Method | Avg. Acc. (%, UCR) | Avg. Rank (UCR) | Avg. Acc. (%, UEA) | Avg. Rank (UEA) |
|------------|--------------------|-----------------|--------------------|-----------------|
| DTW-D | 72.7 | 5.30 | 65.0 | 4.60 |
| TNC | 76.1 | 4.42 | 67.7 | 4.76 |
| TST | 64.1 | 6.19 | 63.5 | 5.26 |
| TS-TCC | 75.7 | 4.29 | 68.2 | 4.38 |
| T-Loss | 80.6 | 3.50 | 67.5 | 3.86 |
| TS2Vec | 83.0 | 2.80 | 71.2 | 3.28 |
| + Ours | **85.0 (+2.0)** | **1.49** | **75.1 (+3.9)** | **1.86** |
Table 2: Accuracy and rank on UCR/UEA.
respectively. Specifically, we apply SoftCLT to TS2Vec (Yue et al., 2022), which has demonstrated SOTA performance on the above datasets. As baseline methods, we consider DTW-D (Chen et al., 2013), TNC (Tonekaboni et al., 2021), TST (Zerveas et al., 2021), TS-TCC (Eldele et al., 2021), T-Loss (Franceschi et al., 2019), and TS2Vec (Yue et al., 2022). The experimental protocol follows that of T-Loss and TS2Vec, where an SVM classifier with the RBF kernel is trained on top of the instance-level representations obtained by max-pooling the representations of all timestamps. Table 2 and the critical difference (CD) diagram based on the Wilcoxon-Holm method (Ismail Fawaz et al., 2019) shown in Figure 3 demonstrate that the proposed method improves SOTA performance by a significant margin on both archives in terms of accuracy and rank. In Figure 3, the best and second-best results for each dataset are in red and blue, respectively. We also connect methods with a bold line if their difference is not statistically significant in terms of the average rank at a confidence level of 95%, which shows that the performance gain by the proposed method is significant.
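For reference, a minimal sketch of this evaluation protocol is shown below; the array shapes and the SVM penalty \( C \) (swept on a validation grid in T-Loss/TS2Vec) are illustrative assumptions.

```python
from sklearn.svm import SVC

def evaluate_representations(train_repr, y_train, test_repr, y_test):
    # train_repr, test_repr: (N, T, D) per-timestamp representations.
    x_train = train_repr.max(axis=1)        # max-pool over timestamps -> (N, D)
    x_test = test_repr.max(axis=1)
    # The penalty C is selected on a validation grid in T-Loss/TS2Vec;
    # C = 1.0 here is only a placeholder.
    clf = SVC(kernel='rbf', C=1.0)
    clf.fit(x_train, y_train)
    return clf.score(x_test, y_test)
```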
4.2 SEMI-SUPERVISED CLASSIFICATION
We conduct experiments on semi-supervised classification tasks by adopting SoftCLT to TS-TCC (Eldele et al., 2021) and its extension CA-TCC (Eldele et al., 2023), which are methods that incorporate CL into self- and semi-supervised learning, respectively. As baseline methods, we consider SSL-ECG (Sarkar & Etemad, 2020), CPC (Oord et al., 2018), SimCLR (Chen et al., 2020), and TS-TCC (Eldele et al., 2021) for self-supervised learning, and Mean-Teacher (Tarvainen & Valpola, 2017), DivideMix (Li et al., 2020), SemiTime (Fan et al., 2021), FixMatch (Sohn et al., 2020), and CA-TCC (Eldele et al., 2023) for semi-supervised learning. Note that both TS-TCC and CA-TCC perform instance-wise and temporal contrasting; however, their temporal contrasting is achieved by predicting one view’s future from another, which differs from the conventional contrastive loss with positive and negative pairs. Therefore, we adopt our soft temporal contrastive loss as an additional loss for both methods. For evaluation, we utilize the same experimental settings and datasets as CA-TCC, which include eight datasets (Anguita et al., 2013; Andrzejak et al., 2001; Dau et al., 2019), six of which are from the UCR archive. We consider two semi-supervised learning scenarios: (1) self-supervised learning with unlabeled data followed by supervised fine-tuning with labeled data, and (2) semi-supervised learning with both labeled and unlabeled data, following CA-TCC (Eldele et al., 2023). Table 3 presents the experimental results of both methods in scenarios with 1% and 5% labeled data, showing that applying SoftCLT achieves the best overall performance across most of the datasets in both scenarios.
| Dataset | SSL-ECG | CPC | SimCLR | TS2Vec | + Ours | TS-TCC | + Ours | Mean-Teacher | DivideMix | SemiTime | FixMatch | CA-TCC | + Ours |
|------------------|-----------|-----------|-----------|-----------|---------------|-----------|---------------|--------------|-----------|-----------|-----------|-----------|-----------|
| HAR | 60.0/54.0 | 65.4/63.8 | 65.5/64.3 | 88.2/85.7 | **91.0**/91.0 | 70.5/69.5 | 82.9/82.8 | 75.9/74.0 | 76.5/75.4 | 77.0/76.3 | 76.4/75.6 | 77.3/76.2 | 90.6/90.6 |
| Epilepsy | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 | 95.0/94.0 |
| Wafer | 93.4/76.1 | 93.5/78.4 | 93.8/78.5 | 67.9/56.1 | 95.1/87.1 | 93.2/76.7 | **96.5**/96.5 | 94.7/84.7 | 93.2/82.0 | 94.1/84.4 | 95.0/84.3 | 95.1/85.3 | 98.9/98.8 |
| FordA | 67.9/67.9 | 75.8/75.8 | 75.8/75.8 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 | 81.2/81.2 |
| FordB | 64.4/60.5 | 64.6/60.5 | 64.9/60.9 | 49.3/49.8 | 65.4/63.9 | 67.9/67.9 | **78.6**/78.6 | 78.6/78.6 | 78.6/78.6 | 78.6/78.6 | 78.6/78.6 | 78.6/78.6 | 78.6/78.6 |
| POC | 62.5/41.2 | 62.8/48.2 | 61.5/38.4 | 63.1/62.8 | 63.6/62.8 | 63.8/48.1 | **63.4**/63.4 | 62.1/40.8 | 62.1/40.7 | 62.0/40.4 | 61.9/40.0 | 63.4/49.3 | 73.3/71.7 |
| StarLightCurves | 60.1/50.0 | 59.3/48.9 | 62.5/51.2 | 57.6/48.6 | 62.7/53.0 | 63.6/50.5 | **64.6**/63.2 | 48.9/48.3 | 59.8/49.4 | 57.3/48.7 | 58.1/46.9 | 65.7/55.7 | 70.3/68.8 |
| ElectricDevices | 63.7/58.6 | 75.4/74.7 | 75.8/74.9 | 90.1/91.0 | 92.1/92.1 | 77.6/76.7 | **92.6**/92.6 | 88.2/88.1 | 88.2/88.1 | 88.2/88.1 | 88.2/88.1 | 88.2/88.1 | 93.4/93.4 |

Table 3: Semi-supervised classification results. The table shows the results of fine-tuning self- and semi-supervised models with 1% and 5% of labels. The best results on each dataset are in bold, while the second-best results are underlined. The accuracy and MF1 score are reported in order.
4.3 Transfer Learning
We conduct experiments on transfer learning for classification in the in-domain and cross-domain settings used in previous works (Zhang et al., 2022; Eldele et al., 2021; 2023; Dong et al., 2023), by adopting our SoftCLT to TS-TCC and CA-TCC. As baseline methods, we consider TS-SD (Shi et al., 2021), TS2Vec (Yue et al., 2022), Mixing-Up (Wickström et al., 2022), CLOCS (Kiyasseh et al., 2021), CoST (Woo et al., 2022), LaST (Wang et al., 2022), TF-C (Zhang et al., 2022), TS-TCC (Eldele et al., 2021), TST (Zerveas et al., 2021), and SimMTM (Dong et al., 2023). In in-domain transfer learning, the model is pretrained on SleepEEG (Kemp et al., 2000) and fine-tuned on Epilepsy (Andrzejak et al., 2001), which are both EEG datasets and hence considered to be in a similar domain. In cross-domain transfer learning, which involves pretraining on one dataset and fine-tuning on different datasets, the model is pretrained on SleepEEG and fine-tuned on three datasets from different domains: FD-B (Lessmeier et al., 2016), Gesture (Liu et al., 2009), and EMG (Goldberger et al., 2000). Also, we perform transfer learning without adaptation under self- and semi-supervised settings, where the source and target datasets share the same set of classes but only 1% of labels are available for the source dataset, and no further training on the target dataset is allowed. Specifically, models are trained on one of the four conditions (A, B, C, D) of the Fault Diagnosis (FD) datasets (Lessmeier et al., 2016) and tested on another. Table 4a shows the results of both in- and cross-domain transfer learning, and Table 4b shows the results of both self- and semi-supervised settings with the FD datasets. Notably, SoftCLT applied to CA-TCC improves the average accuracy over the twelve transfer learning scenarios with the FD datasets by 10.68%.
4.4 Anomaly Detection
We conduct experiments on the univariate TS anomaly detection (AD) task by adopting SoftCLT to TS2Vec (Yue et al., 2022) under two different settings: the normal setting splits each dataset into two halves according to time order and uses them for training and evaluation, respectively, while the cold-start setting pretrains models on the FordA dataset from the UCR archive and evaluates on each target dataset. As baseline methods, we consider SPOT (Siffer et al., 2017), DSPOT (Siffer et al., 2017), DONUT (Xu et al., 2018), and SR (Ren et al., 2019) for the normal setting; FFT (Rasheed et al., 2009), Twitter-AD (Vallis et al., 2014), and Luminol (LinkedIn, 2018) for the cold-start setting; and TS2Vec (Yue et al., 2022) for both. Following TS2Vec, the anomaly score is computed as the L1 distance between two representations encoded from masked and unmasked inputs. We evaluate the compared methods on the Yahoo (Laptev et al., 2015) and KPI (Ren et al., 2019) datasets. We found that suppressing instance-wise CL leads to better AD performance on average, so we report TS2Vec and SoftCLT performance without instance-wise CL; more details can be found in Appendix G.
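The sketch below illustrates this scoring rule; the `encode_fn` interface and NaN-based masking are our assumptions, and encoding the series once per masked timestamp is a simplification of the streaming protocol used in practice.

```python
import numpy as np

def anomaly_scores(encode_fn, x, mask_value=np.nan):
    # encode_fn: maps a (T, C) series to (T, D) representations, e.g. a
    # trained TS2Vec encoder that tolerates missing (NaN) observations.
    r_full = encode_fn(x)                           # representations of the raw input
    scores = np.empty(len(x))
    for t in range(len(x)):
        x_masked = x.copy()
        x_masked[t] = mask_value                    # mask the observation at t
        r_masked = encode_fn(x_masked)
        scores[t] = np.abs(r_masked[t] - r_full[t]).sum()   # L1 distance
    return scores
```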
As shown in Table 5, SoftCLT outperforms the baselines in both settings in terms of the F1 score, precision, and recall. Specifically, SoftCLT applied to TS2Vec improves the F1 score by approximately 2% on both datasets under both the normal and cold-start settings.
(a) Normal setting:

| Method | F1 | Prec. |
|---------|------|------|
| SPOT | 33.8 | 26.9 |
| DSPOT | 31.6 | 24.1 |
| DONUT | 2.6 | 1.3 |
| SR | 56.3 | 45.1 |
| TS2Vec* | 72.3 | 69.3 |
| + Ours | 74.2 | 72.2 |

(b) Cold-start setting:

| Method | F1 | Prec. |
|------------|------|------|
| FFT | 29.1 | 20.2 |
| Twitter-AD | 24.5 | 16.6 |
| Luminol | 38.8 | 25.4 |
| SR | 52.9 | 40.4 |
| TS2Vec* | 74.0 | 70.7 |
| + Ours | 76.2 | 75.3 |

\* We used the official code to replicate the results without the instance-wise contrastive loss.

Table 5: Anomaly detection results on the Yahoo and KPI datasets.
(a) Application of soft assignments:

| Instance-wise | Temporal | UCR Avg. Acc.(%) | UEA Avg. Acc.(%) |
|:---:|:---:|------|------|
| | | 82.3 | 70.5 |
| ✔ | | 83.9 (+1.6) | 73.0 (+2.5) |
| | ✔ | 83.7 (+1.4) | 73.8 (+3.3) |
| ✔ | ✔ | 85.0 (+2.7) | 74.2 (+3.7) |

(b) Assignment function (UCR):

| Method | Avg. Acc.(%) |
|----------|------|
| Neighbor | 76.1 |
| Linear | 77.2 |
| Gaussian | 83.5 |
| Sigmoid | 83.7 |

(c) Upper bound (UCR):

| $\alpha$ | Avg. Acc.(%) |
|------|------|
| 0.25 | 83.0 |
| 0.50 | 83.9 |
| 0.75 | 83.4 |
| 1.00 | 83.1 |

Table 6: Ablation study results: (a) application of soft assignments, (b) assignment function for soft temporal CL, (c) upper bound for soft instance-wise CL, and (d) distance function for soft instance-wise CL (discussed in the text below).
4.5 Ablation Study
**Effectiveness of SoftCLT.** Table 6a shows the effect of soft assignments relative to the standard hard CL. Applying soft assignments to either instance-wise or temporal CL provides a performance gain, and applying them to both dimensions results in the best performance, improving the accuracy on the UCR and UEA datasets by 2.7% and 3.7%, respectively.
**Design choices for soft temporal CL.** Table 6b compares different choices of the soft assignment $w_T$. Neighbor takes the neighborhood within a window around the reference point as positive and the others as negative. Linear gives soft assignments linearly proportional to the time difference from the reference point, where the most distant point gets a value of zero. Gaussian gives soft assignments based on a Gaussian distribution centered at the reference point with the standard deviation as a hyperparameter. Among them, the Sigmoid assignment of Eq. 4 shows the best performance.
**Upper bound for soft instance-wise CL.** In the soft instance-wise contrastive loss, $\alpha$ is introduced to avoid giving the same assignment to pairs of the same TS and pairs of different TS with a distance of zero; $\alpha = 1$ makes both cases have the same assignment. Table 6c studies the effect of tuning $\alpha$. Based on the results, $\alpha = 0.5$ performs best, i.e., the similarity of pairs of the same TS should be strictly larger than that of other pairs, but not by much.
**Distance metrics for soft instance-wise CL.** Table 6d compares different choices of the distance metric $D$ in Eq. 1: cosine distance (COS), Euclidean distance (EUC), dynamic time warping (DTW), and time alignment measurement (TAM) (Folgado et al., 2018) on 128 UCR datasets, where the baseline is TS2Vec and the hard or best soft temporal CL is applied together. The results show that the improvement by soft instance-wise CL is robust to the choice of distance metric. We use DTW throughout all other experiments because DTW is well-studied, commonly used in the literature, and fast approximations such as FastDTW are available.
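To illustrate how data-space distances feed the soft assignments, the sketch below builds a soft instance-wise assignment matrix from pairwise DTW distances. The functional form \( w_I(i, j) = 2\alpha \cdot \sigma(-\tau_I D(x_i, x_j)) \) is our reading of Eq. 1, and the plain \( O(T^2) \) DTW stands in for faster variants such as FastDTW.

```python
import numpy as np

def dtw_distance(a, b):
    # Plain O(T^2) dynamic time warping between two univariate series.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def soft_instance_assignments(X, tau_i=0.5, alpha=0.5, dist=dtw_distance):
    # w_I(i, j) = 2 * alpha * sigmoid(-tau_i * D(x_i, x_j)) for i != j,
    # with self-pairs fixed to 1 (our reading of Eq. 1).
    n = len(X)
    w = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = w[j, i] = 2 * alpha / (1 + np.exp(tau_i * dist(X[i], X[j])))
    return w
```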
4.6 Analysis
**Comparison with soft CL methods in computer vision.** While soft CL methods have been proposed in other domains, they compute soft assignments in the embedding space, because it is difficult to measure similarities in the data space, particularly in computer vision. However, we argue that similarity in the data space is in fact a strong form of self-supervision, leading to better representation learning. To confirm this, we compare SoftCLT with soft CL methods proposed in other domains that work in the embedding space, NNCLR (Dwibedi et al., 2021) and ASCL (Feng & Patras, 2022), on the UCR datasets. For a fair comparison, we apply all compared methods to TS2Vec under the same setting. As shown in Table 7, unlike the proposed method, NNCLR and ASCL deteriorate the performance of TS2Vec, implying that similarities measured in the data space are strong self-supervision, while similarities measured in the learnable embedding space might not
| Method | Total | ≤ 200 (A) | > 200 (B) | Gap (A−B) |
|------------|-------|-----------|-----------|-----------|
| TS2Vec | 82.3 | 88.1 | 79.6 | 8.5 |
| + NNCLR | 66.0 | 82.6 | 58.2 | 24.4 |
| + ASCL | 76.5 | 86.6 | 71.8 | 14.8 |
| + Ours | 85.0 | 89.8 | 81.9 | 7.9 |

Table 7: Comparison of soft CL methods on 128 UCR datasets, grouped by average TS length.
| Temporal CL | Low seasonality (103/128) | High seasonality (25/128) |
|-------------|------|------|
| Hard | 84.1 | 80.1 |
| Soft | 85.6 | 81.7 |
| Gain | +1.5 | +1.6 |

Table 8: Effect of soft temporal CL by seasonality.
be useful in some domains. To further investigate the failure modes of the previous methods, we group datasets in Table 7 by whether their average TS length exceeds 200, and observe that the previous methods fail to capture the similarities of long TS data.
**Robustness to seasonality.** An assumption behind the proposed soft temporal CL is that values at adjacent timestamps are similar, which may raise the concern that seasonality in TS might not be captured. To address this, we categorize the UCR datasets by seasonality using the ADF test (Sims et al., 1990) at the significance level of \( p = 0.05 \). As shown in Table 8, the performance gain from SoftCLT is consistent regardless of seasonality. Our conjecture is that real-world TS usually do not exhibit perfect seasonality, as indicated by the ADF test results, so that SoftCLT takes advantage of the non-seasonal portions. Meanwhile, previous works have tried to decompose trend and seasonality in TS for representation learning (Wang et al., 2022; Woo et al., 2022). However, this may not be realistic for TS that are neither simultaneously auto-regressive nor stationary (Shen et al., 2022). In summary, we do not model seasonality in TS directly, because not only is it challenging to extract, but in practice good performance can be achieved without considering it.
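A sketch of this categorization is given below; it assumes per-series ADF p-values are averaged into a per-dataset decision, which is our guess at the aggregation rule.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def high_seasonality(dataset, p_threshold=0.05):
    # adfuller returns (statistic, p-value, ...); a large p-value means the
    # unit-root null is not rejected. How the authors aggregate per-series
    # p-values into a per-dataset label is our assumption here.
    pvals = [adfuller(x)[1] for x in dataset]
    return np.mean(pvals) > p_threshold
```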
**Instance-wise relationships.** To see whether instance-wise relationships are preserved in the encoder, we visualize the pairwise instance-wise distance matrices of representations extracted from each layer on the InsectEPGRegularTrain dataset from the UCR archive (Dau et al., 2019), where a brighter color indicates a lower distance between instances. The top and bottom panels of Figure 4 show the changes in the pairwise distance matrices as depth progresses when adopting hard and soft CL, respectively. The results indicate that SoftCLT preserves the relationships between TS instances throughout encoding, while the standard hard CL fails to preserve them.
**Temporal relationships.** To assess the quality of the temporal relationships captured by SoftCLT, we apply t-SNE (Van der Maaten & Hinton, 2008) to visualize the temporal representations, i.e., the representations of each timestamp in a single TS. Figure 5 compares t-SNE of the representations learned with hard and soft CL over different training epochs, with points getting darker as time progresses. While hard CL captures only coarse-grained neighborhood relationships and fails to distinguish the late timestamps shown in dark red, soft CL captures more fine-grained relationships.
5 CONCLUSION
In this paper, we present a soft contrastive learning framework for time series. In contrast to previous methods that give hard assignments to sample pairs, our approach gives soft assignments based on instance-wise and temporal relationships in the data space. We demonstrate the effectiveness of our method in a range of tasks, leading to significant improvements in performance. We hope our work sheds light on the effectiveness of self-supervision from the data space and motivates future work on contrastive representation learning in various domains to take account of it.
ETHICS STATEMENT
The proposed soft contrastive learning algorithm for time series has the potential to make a significant impact on the field of representation learning for time series data. The ability to apply this algorithm to various tasks and to address the general problem of time series representation learning is promising. In particular, the algorithm can be applied to transfer learning, which may be useful in scenarios with small datasets for downstream tasks. Furthermore, we expect that the idea of utilizing self-supervision from the data space for contrastive representation learning will motivate future work in various domains.
However, as with any algorithm, there are ethical concerns to be considered. One potential ethical concern is a potential for the algorithm to perpetuate biases that may exist in the datasets used for pretraining. For example, if the pretraining dataset is imbalanced with respect to certain demographic attributes, this bias may be transferred to fine-tuning, potentially leading to biased predictions. It is essential to evaluate and address potential biases in the pretraining dataset before using the algorithm in real-world scenarios.
To ensure responsible use of the algorithm, we will make the datasets and code publicly available. Public availability of datasets and code allows for transparency and reproducibility, enabling other researchers to evaluate and address potential biases and misuse.
ACKNOWLEDGEMENTS
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2020R1A2C1A01005949, 2022R1A4A1033384, RS-2023-00217705), the MSIT(Ministry of Science and ICT), Korea, under the ICAN(ICT Challenge and Advanced Network of HRD) support program (RS-2023-00259934) supervised by the IITP(Institute for Information & Communications Technology Planning & Evaluation), the Yonsei University Research Fund (2023-22-0071), and the Son Jiho Research Grant of Yonsei University (2023-22-0006).
REFERENCES
Ralph G Andrzejak, Klaus Lehnertz, Florian Mormann, Christoph Rieke, Peter David, and Christian E Elger. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. *Physical Review E*, 64(6):061907, 2001.
Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra Perez, and Jorge Luis Reyes Ortiz. A public domain dataset for human activity recognition using smartphones. In *ESANN*, pp. 437–442, 2013.
Anthony Bagnall, Hoang Anh Dau, Jason Lines, Michael Flynn, James Large, Aaron Bostrom, Paul Southam, and Eamonn Keogh. The uea multivariate time series classification archive, 2018. *arXiv preprint arXiv:1811.00075*, 2018.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In *NeurIPS*, 2020.
Ling Cai, Krzysztof Janowicz, Gengchen Mai, Bo Yan, and Rui Zhu. Traffic transformer: Capturing the continuity and periodicity of time series for traffic forecasting. *Transactions in GIS*, 24(3): 736–755, 2020.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *ICML*, 2020.
Yanping Chen, Bing Hu, Eamonn Keogh, and Gustavo EAPA Batista. Dtw-d: time series semi-supervised learning from a single example. In *SIGKDD*, 2013.
Joseph Y Cheng, Hanlin Goh, Kaan Dogrusoz, Oncel Tuzel, and Erdrin Azemi. Subject-aware contrastive learning for biosignals. *arXiv preprint arXiv:2007.04871*, 2020.
Aemqy6Hjdj
In Table 1, the reweight baseline with WiSE when using the DINOv2 model seems to be missing. Comparing reweighting and the proposed CFA method the gain in performance without WiSE seems to be <0.3%, especially in the case of OfficeHome and DomainNet.
Enhancing Compositional Generalization via Compositional Feature Alignment
Anonymous authors
Paper under double-blind review
Abstract
Real-world applications of machine learning (ML) models often confront data distribution shifts, wherein discrepancies exist between the training and test data distributions. In the common multi-domain multi-class setup, as the number of classes and domains scales up, it becomes infeasible to gather training data for every domain-class combination. This challenge naturally leads to the quest for models with Compositional Generalization (CG) ability, where models can generalize to unseen domain-class combinations. To delve into the CG challenge, we develop CG-Bench, a suite of CG benchmarks derived from existing real-world image datasets, and observe that the prevalent pretraining-finetuning paradigm on foundation models, such as CLIP and DINOv2, struggles with the challenge. To address this challenge, we propose Compositional Feature Alignment (CFA), a simple two-stage finetuning technique that i) learns two orthogonal linear heads on a pretrained encoder with respect to class and domain labels, and ii) fine-tunes the encoder with the newly learned heads frozen. We theoretically and empirically justify that CFA encourages compositional feature learning of pretrained models.
We further conduct extensive experiments on CG-Bench for CLIP and DINOv2, two powerful pretrained vision foundation models. Experiment results show that CFA outperforms common finetuning techniques in compositional generalization, corroborating CFA’s efficacy in compositional feature learning.
1 Introduction
Over the past decade, machine learning has emerged as a transformative technology, driving advancements across various domains such as computer vision (He et al., 2016a;b), natural language processing (Devlin et al., 2019; Brown et al., 2020), biology (Jumper et al., 2021), etc. These innovations have been fueled by the development of increasingly sophisticated models, the availability of large-scale datasets, and the growth of computational power. However, a crucial obstacle persists in applying machine learning models to real-world scenarios: their performance tends to degrade significantly when confronted with data distribution shifts (Koh et al., 2021; Gulrajani & Lopez-Paz, 2021; Santurkar et al., 2021), where the data distribution during testing differs from that used in training.
In an effort to overcome this problem, the machine learning research community has turned its attention to Out-of-Distribution (OOD) generalization, with the goal of developing models that are robust under data distribution shifts. Existing research primarily investigates various types of data distribution shifts, such as domain generalization (Gulrajani & Lopez-Paz, 2021; Koh et al., 2021), subpopulation shift (Santurkar et al., 2021; Yang et al., 2023), input corruption (Deng et al., 2009), and spurious correlation (Sagawa et al., 2020). While generalization to these different types of distribution shifts has garnered significant attention, there exists another realistic yet understudied challenge in OOD generalization: compositional generalization (CG).
Within the multi-domain, multi-class context, assume we have $E$ domains (i.e., environments) and $K$ classes, leading to $E \times K$ pairs of domain and class combinations, which can be formulated as elements of an $E \times K$ matrix as shown in Figure 1. In domain generalization (DG), the learner has access to data from all the classes and all the domains and aims to make predictions on data from a new, unseen domain. Nonetheless, in real-world scenarios, given the large number of categories, e.g., 1000 classes in ImageNet, one cannot always collect complete data from all the domains. Put another way, the training data might not cover all the possible combinations of domains
Figure 1: Compositional generalization (CG) vs. domain generalization (DG). Masked entries are unseen domain-class combinations, while unmasked ones exist in the training dataset.
and classes, as represented by each cell in the matrix in Figure 1. This sparsity pattern becomes especially pronounced when the number of classes or environments is large because collecting comprehensive training data for each combination becomes a formidable task. In such cases, a key challenge arises: can the model generalize to unseen domain-class combinations? This is the compositional generalization (CG) challenge we aim to tackle in this work.
The CG challenge manifests ubiquitously across various real-world applications. For example, data distribution shifts in certain existing DG datasets, such as iWildCam (Beery et al., 2020; Koh et al., 2021), are more accurately characterized by CG than DG. Moreover, we find that the widely-used method of finetuning pretrained (foundation) models struggles to tackle the CG challenge. This emphasizes the need for the machine learning community to recognize and address this emerging distribution shift challenge with innovative solutions.
Our Contributions In our attempt to tackle this challenge, we draw inspiration from existing lines of research such as invariant risk minimization (IRM) (Arjovsky et al., 2019) and invariant-feature subspace recovery (ISR) (Wang et al., 2022). In particular, Wang et al. (2022) showed that under certain structural conditions in the data generative process, post-processing methods via subspace projection can effectively learn invariant features that can generalize across unseen domains from the same data generative process but under different interventions on non-causal factors. Empirically, we find that if the learned features (i.e., the outputs of the last hidden layer) conform to a compositional structure where the subspace of domain-related features is orthogonal to that of class-related features, the corresponding model can generalize across unknown domain-class pairs. Motivated by this observation, to induce features that match this compositional structure, we introduce a two-stage finetuning approach termed Compositional Feature Alignment (CFA), which is also inspired by recent progress in the literature of neural collapse (Papyan et al., 2020; Zhu et al., 2021; Yang et al., 2022).
More specifically, upon the features given by the encoder, we construct two heads, one for predicting the target label of interest and the other for predicting the domain index. Note that the two-head architecture is not new, and has been widely used in domain adversarial neural networks (Ganin & Lempitsky, 2015; Zhao et al., 2018). However, different from domain adversarial neural networks where adversarial training through minimax optimization is needed, our proposed method is computationally lightweight and can be divided into two stages. CFA first identifies a proper compositional feature structure via a two-head regularized linear probing (i.e., training linear heads with the encoder frozen). Subsequently, the encoder undergoes finetuning with the heads being frozen. Leveraging tools from the neural collapse literature, we theoretically prove that CFA can effectively align features with the compositional feature structure under mild assumptions. Furthermore, we construct a synthetic Color-CIFAR dataset to examine CFA empirically and observe that CFA can indeed align features with the desired compositional feature structure.
To facilitate the study of compositional generalization, we curate CG-Bench, a suite of benchmarks for the compositional generalization challenge, building on four real-world image datasets: Office-Home (Venkateswara et al., 2017), DomainNet (Peng et al., 2019), WILDS-iWildCam (Beery et al., 2020), and WILDS-FMoW (Christie et al., 2018). We consider two powerful pretrained vision encoders, CLIP (Radford et al., 2021) and DINOv2 (Oquab et al., 2023), with the ViT-B (Dosovitskiy et al., 2021) architecture, and apply different finetuning methods to them, including linear probing, full finetuning, LP-FT (Kumar et al., 2022), reweighting, and our proposed CFA. Extensive experimental results on CG-Bench show that CFA-finetuned models can indeed generalize to unseen domain-class combinations better than other finetuning methods. We hope that the curated CG-Bench can facilitate future research on compositional generalization.
2 COMPOSITIONAL FEATURE ALIGNMENT
The key to the CG challenge is to identify and encode the compositional relationship between classes and domains by learning the features. Hence, it is important to first understand what kind of feature structures are desired for compositional generalization. To this end, we first provide a formal definition of the compositional feature structure, and then explain our motivations behind the definition.
Definition 1 (Compositional Feature Structure). For any input \( x \) from class \( y \in \{1, \ldots, K\} \) and domain \( e \in \{1, \ldots, E\} \), its feature \( z \in \mathbb{R}^d \) satisfies the compositional feature structure as long as \( z \) can be decomposed as:
\[
\begin{aligned}
\text{Class Feature: } & z_1 \sim \mathcal{N}(\mu_y^1, \Sigma_y^1) \in \mathbb{R}^{d_1}, \\
\text{Domain Feature: } & z_2 \sim \mathcal{N}(\mu_e^2, \Sigma_e^2) \in \mathbb{R}^{d_2}, \\
\text{Total Feature: } & z = R \begin{bmatrix} z_1 \\ z_2 \\ z_{\text{noise}} \end{bmatrix},
\end{aligned}
\]
where \( z_{\text{noise}} \in \mathbb{R}^{d-d_1-d_2} \) represents noise features irrelevant to classes and domains, and \( R \in \mathbb{R}^{d \times d} \) is a full-rank orthonormal matrix. Note that \( \mu_y^1, \Sigma_y^1 \) depend on \( y \), while \( \mu_e^2, \Sigma_e^2 \) depend on \( e \).
Fig. 2 provides a visualization of this compositional feature structure for a simple setup of 2 classes and 3 domains, where one domain-class combination is absent in the training set. It is evident that class features and domain features exist in orthogonal subspaces, as required by Definition 1. In this case, a linear classifier that exclusively utilizes class features and disregards domain and noise features can effectively generalize the unseen domain-class combination. It is noteworthy that even with the perfect alignment of learned features to this compositional structure on all training data, there is no guarantee that features from unseen domain-class combinations will still conform to this structure.
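To make Definition 1 concrete, the following sketch samples synthetic features with this structure; all dimensions, means, and noise scales are illustrative values rather than quantities from the paper.

```python
import numpy as np

def sample_compositional_features(y, e, d=512, d1=64, d2=64, seed=0):
    # y, e: integer arrays of class and domain labels for N samples.
    # Dimensions, means, and covariances are illustrative choices.
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))     # full-rank orthonormal R
    mu_class = rng.standard_normal((y.max() + 1, d1))    # class means mu_y^1
    mu_domain = rng.standard_normal((e.max() + 1, d2))   # domain means mu_e^2
    z1 = mu_class[y] + 0.1 * rng.standard_normal((len(y), d1))
    z2 = mu_domain[e] + 0.1 * rng.standard_normal((len(e), d2))
    z_noise = 0.1 * rng.standard_normal((len(y), d - d1 - d2))
    return np.concatenate([z1, z2, z_noise], axis=1) @ R.T   # z = R [z1; z2; z_noise]
```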
2.1 Method: Training with Frozen Orthogonal Heads under Normalization
Though we have defined an ideal feature structure for compositional generalization, the neural features produced by pretrained models may not align with this structure. In this section we introduce our method to encourage the learned features to follow the above structure. At a high level, the
proposed method contains two stages of finetuning from pretrained models. We include an auxiliary linear head for domain prediction, complementing the pre-existing class prediction head. The first stage involves multi-label linear probing, which adjusts the two heads to achieve an optimal compositional feature structure. In the second stage, we fine-tune the encoder, while keeping the two heads frozen. The final product is a finetuned encoder that generates features in alignment with the predetermined compositional feature structure from stage one. The two stages, diagrammatically represented in Fig. 3, are detailed below, followed by a discussion on our rationale behind the algorithm design.
Stage 1 (Multi-Label Linear Probing). We begin with a pretrained encoder, $\Phi(\cdot)$ that maps inputs to $d$-dimensional features of unit norm, where $d > K + E$. We construct two linear heads without bias terms, denoted by $W_1 \in \mathbb{R}^{K \times d}$ and $W_2 \in \mathbb{R}^{E \times d}$. Keeping $\Phi(\cdot)$ frozen, we train these heads with two cross-entropy loss terms, which take into account both class and domain labels. An orthogonality constraint ensures $W_1$ and $W_2$ span orthogonal subspaces.
Mathematically, the optimization objective of the first stage can be written as
$$\min_{W_1, W_2} \frac{1}{N} \sum_{(x, y, e) \in D_{\text{train}}} \frac{1}{K} \ell_{\text{CE}}(\beta_1 \cdot W_1 \Phi(x), y) + \lambda \frac{1}{E} \ell_{\text{CE}}(\beta_2 \cdot W_2 \Phi(x), e) \quad (2)$$

with hyperparameters $\beta_1, \beta_2, \lambda > 0$, subject to $W_1 \in \mathcal{U}(d)^K$, $W_2 \in \mathcal{U}(d)^E$, and $W_1 W_2^T = 0$,
where $D_{\text{train}}$ represents the training set, $\ell_{\text{CE}}$ is the cross-entropy loss, $\beta_1, \beta_2$ are inverse temperature parameters (also called as logit scale in CLIP (Radford et al., 2021)), $\mathcal{U}(d)$ denotes the set of $d$-dimensional unit vectors, and $0$ stands for the zero matrix.
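A minimal sketch of one Stage-1 update is given below. The hard constraints of (2) are relaxed here into row normalization plus a soft orthogonality penalty \( \|W_1 W_2^T\|_F^2 \) (the penalized variant that the implementation section also describes); the inverse temperatures and penalty weights are placeholder values, and the per-class/per-domain scaling factors are folded into \( \beta \) and \( \lambda \).

```python
import torch.nn.functional as F

def stage1_step(feats, y, e, W1, W2, opt, beta1=50.0, beta2=50.0, lam=1.0, ortho=10.0):
    # feats: (N, d) unit-norm features from the frozen encoder.
    # W1: (K, d), W2: (E, d) trainable head parameters.
    W1n = F.normalize(W1, dim=1)                    # rows constrained to the unit sphere
    W2n = F.normalize(W2, dim=1)
    loss = F.cross_entropy(beta1 * feats @ W1n.T, y) \
         + lam * F.cross_entropy(beta2 * feats @ W2n.T, e) \
         + ortho * (W1n @ W2n.T).pow(2).sum()       # soft penalty for W1 W2^T = 0
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```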
Stage 2 (finetuning with Frozen Heads). We then freeze the trained $W_1$ and $W_2$, and proceed to fine-tune the encoder $\Phi(\cdot)$ end-to-end, using the same multi-label cross-entropy loss function.
The optimization objective of this finetuning stage can be expressed as
$$\min_{\Phi} \frac{1}{N} \sum_{(x, y, e) \in D} \frac{1}{K} \ell_{\text{CE}}(\beta_1 \cdot W_1 \Phi(x), y) + \lambda \frac{1}{E} \ell_{\text{CE}}(\beta_2 \cdot W_2 \Phi(x), e) \quad (3)$$
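A corresponding sketch of one Stage-2 update with both heads frozen; hyperparameter values are placeholders, and setting \( \lambda = 0 \) follows the implementation note in Sec. 3.2.

```python
import torch.nn.functional as F

def stage2_step(encoder, x, y, e, W1, W2, opt, beta1=50.0, beta2=50.0, lam=0.0):
    # W1, W2: frozen, row-normalized heads from Stage 1 (no gradient).
    feats = F.normalize(encoder(x), dim=1)          # unit-norm features
    loss = F.cross_entropy(beta1 * feats @ W1.T.detach(), y)
    if lam > 0:                                     # lam = 0 follows the note in Sec. 3.2
        loss = loss + lam * F.cross_entropy(beta2 * feats @ W2.T.detach(), e)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```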
The following discussion explains our motivations and reasoning underlying the algorithm design:
• **Freezing Head for Feature Alignment**: Recent work on neural collapse indicates that during the training of a multi-class classifier with the cross-entropy loss, freezing the linear head according to a simplex structure can guide the features to align with the frozen head (Zhu et al., 2021; Yang et al., 2022). This observation implies that the features of data in class $y$ collapse in the direction of the row vector of the classifier corresponding to class $y$. In addition, we empirically observe that the head-freezing technique does not compromise the model’s performance compared to end-to-end finetuning. We include the details regarding this experiment in an ablation study in Appendix B. Inspired by these observations, we devise the two-stage strategy where Stage 1 determines the optimal head weights for our compositional feature structure, and Stage 2 finetunes the encoder with frozen head weights to align the features with this feature structure.
• **Linear Probing Two Orthogonal Heads**: Unlike research on neural collapse that focuses exclusively on class prediction (Papyan et al., 2020; Zhu et al., 2021; Yang et al., 2022), our work also accounts for the effects of domains, as outlined in Definition 1. We therefore introduce an auxiliary head, $W_2$, for domain prediction, alongside the original class prediction head denoted as $W_1$.
Thus, for a sample \( x \), the encoder, \( W_1 \), and \( W_2 \) predict the class and domain labels based on the feature \( \Phi(x) \). Definition 1 implicitly poses an orthogonality requirement on domain-related and class-related features since \( R \) is an orthonormal matrix. To meet this feature orthogonality requirement, we impose an orthogonality constraint on the two heads (i.e., \( W_1 W_2^T = 0 \)).
• **Normalizing Features and Weights to Address Data Imbalance**: While Zhu et al. (2021); Yang et al. (2022) provide a theoretical justification for head freezing, their theory assumes class-balanced training data. In the case of data imbalance, Thrampoulidis et al. (2022) shows that the head and features may become misaligned. Upon reviewing the technical details of Thrampoulidis et al. (2022), we find that this misalignment can be rectified by normalizing features and head weights to a hyper-sphere. This normalization ensures constant norms for features and head weights, thereby ensuring alignment (cf. Theorem 1). Consequently, we assume that the features produced by the encoder \( \Phi \) are also normalized to unit norm, which is a common practice in modern vision model pretraining such as CLIP (Radford et al., 2021), SimCLR (Chen et al., 2020), MoCo (He et al., 2020), DINO (Caron et al., 2021). Additionally, we impose the head normalization constraint \( W_1 \in U(d)^K, W_2 \in U(d)^E \) in (2), a technique already employed in CLIP (Radford et al., 2021).
2.2 Theoretical Guarantee
In the algorithm above, Stage 1 is relatively simple, comprising a joint minimization problem over two linear heads. In contrast, Stage 2 is more complex, as it optimizes a neural encoder using two heads under two cross-entropy loss terms. We offer theoretical justification for Stage 2 below, demonstrating that the finetuned encoder can indeed align features with the two frozen orthogonal heads produced by Stage 1, thereby creating a feature structure that meets the requirement in Definition 1.
In line with recent research on neural collapse (Mixon et al., 2020; Fang et al., 2021; Zhu et al., 2021; Thrampoulidis et al., 2022), we adopt the unconstrained feature model (UFM) or layer-peeled model, where \( z = \Phi(x) \) is treated as a free optimization variable in \( \mathbb{R}^d \) for every input \( x \).
We denote \( Z = [\Phi(x_1), \ldots, \Phi(x_N)] \in \mathbb{R}^{d \times N} \), \( Y = [y_1, \ldots, y_N] \), and \( E = [e_1, \ldots, e_N] \) as the stacks of features, class labels, and environment labels, respectively. In the context of the unconstrained feature model, the optimization objective of Stage 2 is transformed to:
\[
\min_Z \frac{1}{KN} \ell_{CE}(\beta_1 \cdot W_1 Z, Y) + \lambda \frac{1}{EN} \ell_{CE}(\beta_2 \cdot W_2 Z, E) \quad \text{s.t.} \quad Z \in \mathcal{U}(d)^N \quad (4)
\]
**Theorem 1** (Feature Alignment). Assuming the feature dimension \( d \) is no smaller than \( K + E \), and training data exists for each class and domain (though not necessarily for each domain-class combination), and \( W_1 \) and \( W_2 \) are normalized and span orthogonal subspaces such that \( W_1 \in U(d)^K, W_2 \in U(d)^E \) and \( W_1 W_2^T = 0 \). Additionally, we assume \( \beta_1, \beta_2 \) are sufficiently large. The global minimum of (4) results in the following: for any \( i \in [N] \), denote \( z_i \) as the \( i \)-th column vector of \( Z \), we have
\[
z_i^* = W_1^T a_{y_i} + W_2^T b_{e_i} \quad (5)
\]
where \( a_{y_i} \in \mathbb{R}^K \) is a vector depending on the class label \( y_i \), and \( b_{e_i} \in \mathbb{R}^E \) is a vector depending on the domain label \( e_i \).
This theorem, intuitively, demonstrates that upon optimizing (4) to a global minimum, for any training sample from class \( y \) in environment \( e \), its corresponding feature \( z^* \) can be decomposed as a linear combination of two vectors depending on \( y \) and \( e \), respectively, and the two vectors live in orthogonal feature subspaces. This indicates that the learned features conform to a compositional feature structure satisfying Definition 1. The complete proof is found in Appendix C, where we leverage theoretical tools from Thrampoulidis et al. (2022) in the proof.
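In practice, the theorem's conclusion can be checked numerically: the sketch below (our own diagnostic, not from the paper) tests whether optimized features lie in the span of the two frozen heads.

```python
import numpy as np

def check_alignment(Z, W1, W2, tol=1e-3):
    # Z: (d, N) optimized features; W1: (K, d), W2: (E, d).
    # Theorem 1 says z = W1^T a + W2^T b, i.e. Z lies in the row span of
    # W = [W1; W2]; we project onto that span and inspect the residual.
    W = np.vstack([W1, W2])                         # (K + E, d)
    proj = W.T @ np.linalg.pinv(W.T) @ Z            # orthogonal projection of Z
    residual = np.linalg.norm(Z - proj, axis=0)
    return residual.max() < tol
```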
2.3 Toy Example on Color-CIFAR
To empirically show that our two-stage algorithm promotes the encoder’s ability to learn compositional features, we devised a simple example using Color-CIFAR, a customized semi-synthetic dataset.
1 Recent studies show that weight normalization for the linear head (without bias) can enhance the performance of fine-tuned CLIP (Goyal et al., 2022; Wang et al., 2023).
The Color-CIFAR dataset is derived from CIFAR-10 (Krizhevsky, 2009), and it is created by i) converting all images to grayscale, and ii) assigning each grayscale image to an RGB channel based on its domain label. In this simple experiment, we pick four classes from the CIFAR-10 classes and evenly assign 3 domain labels to each class. One out of the twelve domain-class pairs is marked as OOD, and the encoder is finetuned on the rest of the data. We choose the CLIP ViT-B/32 image encoder as our pretrained model and finetune the encoder using our CFA algorithm. The resulting features extracted by the pretrained and finetuned models are shown in Fig. 4. To provide a clear visualization, we only show the features from two classes, airplane and automobile.
We visualize the top-3 dimensions after performing SVD. From Fig. 4 we can easily see that the pretrained model, though distinguishing the domains successfully, fails at clustering the classes. In contrast, our two-stage CFA learns features that form a compositional structure. For better visualization, we connect the means of the features in each domain-class pair, which displays a pattern similar to the desired compositional feature structure presented in Fig. 2. Note that our finetuned model also encodes the OOD samples correctly following the compositional feature structure, which empirically justifies the effectiveness of our algorithm.
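A sketch of the Color-CIFAR construction is given below; the round-robin domain assignment and the use of a simple channel average for grayscale conversion are our illustrative choices.

```python
import torch
from torchvision import datasets, transforms

def make_color_cifar(num_classes=4, num_domains=3, root='./data'):
    # Grayscale CIFAR-10 images placed into one RGB channel per domain label.
    # Class selection and the round-robin domain assignment are our
    # illustrative choices; the paper only states an even assignment.
    base = datasets.CIFAR10(root, train=True, download=True,
                            transform=transforms.ToTensor())
    images, classes, domains = [], [], []
    for img, label in base:
        if label >= num_classes:
            continue
        gray = img.mean(dim=0)                  # channel average as grayscale
        domain = len(images) % num_domains      # even, round-robin assignment
        colored = torch.zeros(3, 32, 32)
        colored[domain] = gray                  # domain decides the RGB channel
        images.append(colored)
        classes.append(label)
        domains.append(domain)
    return torch.stack(images), torch.tensor(classes), torch.tensor(domains)
```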
3 Empirical Studies
3.1 Benchmark Development of CG-Bench
We create CG-Bench, a compositional generalization benchmark built on four datasets previously designed for DG research: Office-Home (Venkateswara et al., 2017), DomainNet (Peng et al., 2019), and iWildCam (Beery et al., 2020) & FMoW (Christie et al., 2018) from the WILDS benchmark (Koh et al., 2021). These datasets span a wide range of application scenarios, from common object recognition in web-crawled images to wildlife recognition from camera traps and building recognition from satellite images. Due to the page limit, we elaborate on the motivation for creating CG-Bench and the curation procedure by taking DomainNet as an example. Additional details regarding benchmark curation can be found in Appendix A.
DomainNet (Peng et al., 2019) consists of objects in different art styles. In the DomainNet dataset, there are $K = 345$ classes and $E = 6$ domains: \{Clipart, Infograph, Painting, Quickdraw, RealImage, Sketch\}. In addressing the DG challenge, prior research using DomainNet typically employed a leave-one-out cross-validation strategy. This involves training the model on data from five of the domains and subsequently evaluating its performance on the sixth, omitted domain. In addition to the DG task, it’s worth noting that the CG challenge is intrinsically present within DomainNet. To underscore this point, we carried out a preliminary experiment using CLIP.
**CG Challenge in DomainNet** We randomly divide DomainNet into training and evaluation sets, with an 80:20 split. A CLIP model is fully fine-tuned on this training data and evaluated on validation data from all domain-class combinations. We also gathered the zero-shot accuracy of the CLIP model for comparison. As a final step, we examined the test accuracies for each domain-class combination, correlating them with the count of their respective training data samples. We only focus on
hard domain-class combinations for which the zero-shot accuracy is below 30%, and visualize evaluation results over these combinations in Figure 5a. First, we notice that certain domain-class combinations possess minimal or even no training samples (e.g., some combinations have only 2 images, neither of which is sampled into the training set). This observation aligns with our considered CG scenario. Within this CG context, both the fine-tuned and zero-shot models encounter difficulties in achieving high test accuracy when the training data for a specific domain-class combination is insufficient. This leads us to conclude that the CG challenge is inherently present in DomainNet, and that current zero-shot and fine-tuned models fail to address it. Consequently, we are motivated to establish a benchmark for a methodical investigation of this challenge.
**Benchmark Curation Setup** Consider data belonging to $E$ domains (i.e., environments) and $K$ classes, resulting in an $E \times K$ matrix of domain-class combinations (such as the demo shown in Fig. 1). A binary mask $M_{id} \in \{0, 1\}^{E \times K}$ is applied to this matrix to indicate *in-distribution* domain-class combinations, ensuring that each row or column contains both 0 and 1 so that the training data includes all classes and domains, while some domain-class combinations may be absent. The complementary binary mask ($M_{ood} = 1 - M_{id}$) represents OOD domain-class combinations.
**CG Curation of DomainNet** We form an $E \times K$ class-environment matrix for DomainNet and evaluate the zero-shot accuracy for every domain-class combination. The resulting data distribution is visualized in Figure 5b. We designate the combinations that fall within the lowest 20% of zero-shot accuracies as the out-of-distribution (OOD) set, while the top 80% constitute the in-distribution (ID) set. To ensure comprehensive representation, each row/column contains at least one entry from the ID set. The ID data is then further split into a training set and an ID validation set at a 9:1 ratio. Meanwhile, the OOD data is divided between OOD validation and test sets. When evaluating a model trained on the main training dataset, we assess its performance across the ID validation, OOD validation, and OOD test subsets. For this benchmark, our key metric is the average top-1 accuracy on both the ID and OOD sets.
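The sketch below illustrates one way to construct such masks from a matrix of zero-shot accuracies, following the 20/80 split described above; the exact coverage-repair rule used by the authors may differ.

```python
import numpy as np

def make_cg_masks(zero_shot_acc, ood_frac=0.20):
    # zero_shot_acc: (E, K) matrix of zero-shot accuracies per combination.
    E, K = zero_shot_acc.shape
    thresh = np.quantile(zero_shot_acc, ood_frac)
    m_id = (zero_shot_acc > thresh).astype(int)     # easiest 80% -> in-distribution
    # Guarantee coverage: force the easiest combination back to ID in any
    # all-OOD row/column so training data spans all domains and classes.
    for d in range(E):
        if m_id[d].sum() == 0:
            m_id[d, zero_shot_acc[d].argmax()] = 1
    for c in range(K):
        if m_id[:, c].sum() == 0:
            m_id[zero_shot_acc[:, c].argmax(), c] = 1
    return m_id, 1 - m_id                           # M_id and M_ood = 1 - M_id
```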
3.2 Evaluations
**Baselines** We take OpenAI’s ViT-B/16 CLIP (Radford et al., 2021) and Meta’s ViT-B/14 DINOv2 (Oquab et al., 2023) as the pretrained model for each benchmark and implement three finetuning strategies as baselines: **full finetuning**, **linear probing then full finetuning (LP-FT)**, and **reweighting**. LP-FT (Kumar et al., 2022) is a simple two-stage finetuning strategy that addresses the problem that full finetuning can distort pretrained features and often underperforms linear probing on out-of-distribution data. Note: Our proposed approach bears similarities to LP-FT (Kumar et al., 2022), as both methods start with a linear probing stage and then proceed with finetuning. However, two critical differences are: i) our approach employs two heads with a multi-label cross-entropy loss and imposes an orthogonality constraint during the linear probing stage, and ii) we keep the heads frozen during the second finetuning stage. Reweighting (Buda et al., 2018) balances the number of samples from each group in each batch during training. It is robust to group shifts in OOD generalization tasks. For the CG benchmarks, we implemented two versions
Table 1: Test accuracy (%) and F1-macro scores (%) of different methods on CG-Bench. Bold values mark the highest accuracy or F1 score. The OOD accuracy of FMoW is the worst-region accuracy.

| Model | Method | OfficeHome ID Acc | OfficeHome OOD Acc | DomainNet ID Acc | DomainNet OOD Acc | iWildCam ID Acc | iWildCam OOD Acc | iWildCam ID F1 | iWildCam OOD F1 | FMoW ID Acc | FMoW OOD Acc |
|--------|--------|------|------|------|------|------|------|------|------|------|------|
| CLIP | Zero-Shot | 89.2 | 50.3 | 61.7 | 6.6 | 13.7 | 6.9 | 11.7 | 9.2 | 20.4 | 18.8 |
| CLIP | Linear Probing | 90.9 | 41.0 | 72.7 | 4.7 | 72.5 | 14.4 | 42.1 | 22.5 | 37.7 | 27.6 |
| CLIP | Fine-Tuning | **94.3** | 51.0 | **82.0** | 7.5 | 74.5 | 16.5 | 43.8 | 22.2 | 65.8 | 38.7 |
| CLIP | Fine-Tuning (WiSE) | 93.7 | 52.5 | 76.4 | 8.7 | 67.0 | 13.7 | 31.6 | 17.0 | 49.5 | 40.6 |
| CLIP | LP-FT | 93.5 | 43.9 | 81.5 | 5.3 | 74.0 | 17.0 | 42.5 | 26.6 | **65.9** | 40.2 |
| CLIP | LP-FT (WiSE) | 93.0 | 42.8 | 79.4 | 5.3 | 74.4 | 18.2 | 44.5 | 28.7 | 56.6 | 36.3 |
| CLIP | Reweight-E | 94.0 | 51.9 | 81.2 | 7.4 | **75.3** | 17.2 | 45.2 | 24.3 | 62.4 | 41.8 |
| CLIP | Reweight-E (WiSE) | 93.6 | 53.1 | 75.9 | 8.5 | 68.5 | 13.7 | 32.9 | 18.0 | 46.8 | 41.7 |
| CLIP | Reweight-Y×E | 93.7 | 52.2 | 81.0 | 7.6 | 72.2 | 17.4 | 41.6 | 30.0 | 58.0 | 41.1 |
| CLIP | Reweight-Y×E (WiSE) | 93.4 | 53.4 | 75.5 | 8.5 | 55.9 | 15.1 | 29.0 | 22.7 | 42.1 | 37.5 |
| CLIP | CFA | 94.3 | 54.3 | 81.6 | 7.3 | 74.0 | 18.3 | 43.6 | 31.0 | 65.3 | **41.6** |
| CLIP | CFA (WiSE) | 93.1 | **56.9** | 76.5 | **9.2** | 74.6 | **19.7** | **45.6** | **32.5** | 53.5 | 36.6 |
| DINOv2 | Fine-Tuning | 91.8 | 38.6 | 82.4 | 5.3 | 76.4 | 14.4 | 47.6 | 18.3 | 66.1 | 38.4 |
| DINOv2 | Linear Probing | **93.3** | 40.0 | 75.4 | 4.8 | 77.0 | 19.6 | 50.7 | 27.9 | 45.5 | 25.5 |
| DINOv2 | LP-FT | 93.1 | 38.2 | 82.5 | 5.1 | 77.6 | 23.1 | 52.8 | 30.8 | 67.1 | 37.4 |
| DINOv2 | LP-FT (WiSE) | 94.0 | 39.7 | 81.6 | 6.2 | 77.9 | 22.3 | **53.2** | 31.0 | 61.0 | 33.7 |
| DINOv2 | Reweight-E | 91.2 | 38.9 | 81.8 | 5.2 | 76.9 | 13.1 | 48.0 | 17.9 | 62.3 | 38.4 |
| DINOv2 | Reweight-Y×E | 91.3 | 39.0 | 81.5 | 5.3 | 72.2 | 17.4 | 41.6 | 30.0 | 57.5 | 37.6 |
| DINOv2 | CFA | 92.8 | 39.2 | **82.6** | 5.6 | 78.1 | 22.8 | 52.5 | 30.8 | **67.2** | **38.5** |
| DINOv2 | CFA (WiSE) | 93.1 | **40.4** | 79.6 | **6.4** | **78.2** | **23.8** | 52.6 | **33.4** | 59.8 | 34.3 |
of reweighting strategies: Reweight-E, which does re-sampling according to the domain labels, and Reweight-Y×E, which balances according to the domain-class combinations.
**Postprocessing with WiSE-FT (Wortsman et al., 2022).** After finetuning using the three baseline methods and our proposed CFA, we also apply WiSE-FT (Wortsman et al., 2022) with $\alpha = 0.5$ to postprocess the model, which simply averages the initial and finetuned model parameters in parameter space. It has been shown that WiSE-FT can improve model performance (especially OOD performance) in some cases (Wortsman et al., 2022). Note: For the CLIP experiments, we interpolate the finetuned model with the zero-shot CLIP encoder and classification head. For the DINOv2 experiments, since there is no zero-shot classification head available, we interpolate the finetuned models with the linear probing/stage-1 CFA results. Consequently, for DINOv2 we do not perform WiSE-FT on the full finetuning and reweighting baselines, since they do not have linear-probed heads.
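A sketch of this postprocessing step is given below; it assumes both checkpoints share a state dict and treats all entries as float tensors.

```python
import copy

def wise_ft(initial_model, finetuned_model, alpha=0.5):
    # WiSE-FT: linearly interpolate the two checkpoints in parameter space.
    # (For simplicity, every state-dict entry is treated as a float tensor;
    # integer buffers such as BatchNorm counters would need special casing.)
    merged = copy.deepcopy(finetuned_model)
    sd_init = initial_model.state_dict()
    sd_ft = finetuned_model.state_dict()
    merged.load_state_dict({k: (1 - alpha) * sd_init[k] + alpha * sd_ft[k]
                            for k in sd_ft})
    return merged
```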
**Implementation of CFA** Empirically, we make small modifications to Stage 1, dividing it into two steps: i) train $W_2$ on the domain labels with reweighting until it converges, with $W_1$ fixed to its zero-shot weights; ii) then train $W_1$ on the class labels with reweighting, encouraging orthogonality between $W_1$ and the fixed $W_2$ with the regularization term $\|W_1^T W_2\|_F^2$. We also use reweighting for the class labels when performing the linear probing for LP-FT, for a fair comparison. Following Sec. 2.1, we normalize the row vectors of $W_1$ and $W_2$ to unit norm and normalize the outputs of $\Phi$ to unit norm as well. In addition, in the linear probing stage of CFA and LP-FT, we constrain $W_1$ and $W_2$ to a subspace determined by the zero-shot linear classifier of CLIP, as we find it can improve the final OOD performance. Besides, we find that in Stage 2 it is empirically sufficient to train the encoder with a very small value of $\lambda$, the loss coefficient in (3); we therefore set $\lambda = 0$ in Stage 2 to reduce compute cost. In both Stage 1 and Stage 2, we use the AdamW (Loshchilov & Hutter, 2017) optimizer with a cosine annealing scheduler (Loshchilov & Hutter, 2016). More details and hyperparameters can be found in Appendix B.
**Empirical Conclusions** The results of our empirical experiments are presented in Table 1. Observing the results, we conclude that: a) compared with full finetuning, LP-FT, and reweighting, our CFA improves the performance of pretrained models on OOD data for compositional generalization; b) WiSE-FT can further improve the OOD performance of all methods in most cases (when WiSE-FT fails, it fails on both ID and OOD); c) while CFA enjoys superior OOD performance, its in-distribution (ID) performance is maintained around the same level as full finetuning, which is a desired property. Also, we notice that, although our CFA increases the performance of models on OOD data in CG tasks, there is still a gap between its ID and OOD performance, indicating that CG is quite a challenging task and needs future algorithmic advances to be further addressed.
**Ablation Studies** We conducted ablation studies on i) the choice of normalized vs. unconstrained heads, ii) the choice of frozen vs. trainable heads, and iii) the loss coefficient $\lambda$ in Stage 2, on the CLIP model. Due to the page limit, we defer results and details to Appendix B. Also, we discuss the training stability, overhead, and performance gain of CFA in Appendix B, as well as a study of CFA with partial availability of domain labels.
4 RELATED WORKS
OOD Generalization In OOD generalization, the model is trained on labeled data from a limited number of known domains, and the goal is to improve the performance of models so that they better generalize to previously unseen or new test domains (Blanchard et al., 2011). A common approach to tackle OOD generalization is domain-invariant learning, which aims to learn a representation of data that has an invariant distribution over different training domains. Previous works taking this approach match domains in feature space either by aligning moments (Sun & Saenko, 2016) or using an adversarial loss (Ganin et al., 2016). However, these methods were later pointed out to be generally inadequate (Zhao, 2019). Another popular approach is to learn optimal invariant predictors. Taking this approach, invariant risk minimization (IRM) optimizes a highly non-convex bi-level objective and simplifies the optimization using a penalty-regularized objective. However, Rosenfeld et al. (2021); Kamath et al. (2021); Ahuja et al. (2021) theoretically show that these algorithms fail even in simple data models. Similarly, Wang et al. (2022) proposed Invariant-feature Subspace Recovery (ISR), which recovers the subspace spanned by the invariant features and then fits predictors in this subspace. Distributionally robust optimization (Sagawa et al., 2020) is another choice for tackling OOD generalization; it optimizes models over a worst-case distribution perturbed around the original distribution. In addition to methods designed for training a model from scratch, recent works (Kumar et al., 2022; Wortsman et al., 2022; Goyal et al., 2022) also discuss increasing OOD accuracy starting from a pretrained model. While these methods provide impressive empirical improvements on pretrained models, theoretical explanations are yet to be provided.
Compositional Generalization In the computer vision literature, previous research has investigated attribute-object compositions (also referred to as compositional zero-shot learning) (Misra et al., 2017; Nagarajan & Grauman, 2018; Purushwalkam et al., 2019; Naeem et al., 2021; Nayak et al., 2023; Hao et al., 2023), with the goal of predicting attributes of an object in addition to its class. For instance, in a binary image classification task involving cat versus tiger, a classifier might be required to predict whether the animal is old or young alongside the conventional class prediction. In contrast, compositional generalization (CG) has a different focus: it concentrates purely on predicting the object class (e.g., cat versus tiger). Specifically, CG tasks aim to accurately identify young cat images as cat, even when the training data consists only of young tiger and old cat instances and lacks young cat images. This scenario introduces an out-of-distribution (OOD) shift, and the challenge lies in developing models robust to such shifts. While compositional zero-shot learning (CZSL) can decouple attributes from objects, it is not universally adaptable to OOD generalization tasks: CZSL relies on powerful vision-language models (VLMs) such as CLIP, while CG does not limit the type of image classifiers. Moreover, in certain real-world domains, such as remote sensing or medical imaging, there is a lack of paired image-text data to train strong VLMs. Therefore, adopting self-supervised models such as MAE (He et al., 2022) and DINO (Caron et al., 2021) presents a more practical strategy for these domains (Cong et al., 2022; Wanyan et al., 2023). As shown in Table 1, our CFA can work with both VLMs and self-supervised models, whereas CZSL cannot be directly applied to self-supervised models. Besides, Sivaprasad et al. (2022) explores CG under a slightly simplified premise, where only one random domain is masked for each class. In addition, their method is built upon ResNet (He et al., 2016b) and does not scale well with modern transformer architectures.
5 CONCLUSION
This paper delves into the challenge of Compositional Generalization (CG) in machine learning, focusing on generalization to unseen domain-class combinations. By developing CG-Bench, a suite of benchmarks from real-world image datasets, we highlighted the shortcomings of prevalent pretraining-finetuning frameworks in tackling this challenge. Our proposed solution, the Compositional Feature Alignment (CFA), offers a promising approach to improve the CG performance of pretrained models, as evidenced in our experiments. Despite these advances, our study is not without limitations. Our experiments are currently limited to the base-sized ViT models, and our empirical studies draw from a restricted number of datasets of limited size. As we strive to overcome these limitations in future work, we look to include larger models and diversify our benchmark suite, exploring alternative data sources beyond images. We invite the broader machine learning community to join us in the ongoing exploration of the important challenge of CG.
REFERENCES
Kartik Ahuja, Jun Wang, Amit Dhurandhar, Karthikeyan Shanmugam, and Kush R. Varshney. Empirical or invariant risk minimization? a sample complexity perspective. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=jrA5GAccy_.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019.
Sara Beery, Elijah Cole, and Arvi Gjoka. The iwildcam 2020 competition dataset. *arXiv preprint arXiv:2004.10340*, 2020.
Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. *Advances in neural information processing systems*, 24:2178–2186, 2011.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.
Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. *Neural networks*, 106:249–259, 2018.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 9650–9660, 2021.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020.
Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2018.
Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David Lobell, and Stefano Ermon. Satmae: Pre-training transformers for temporal and multispectral satellite imagery. *Advances in Neural Information Processing Systems*, 35:197–211, 2022.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In *CVPR*, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. *ICLR*, 2021.
Cong Fang, Hangfeng He, Qi Long, and Weijie J Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. *Proceedings of the National Academy of Sciences*, 118(43):e2103091118, 2021.
Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In *International conference on machine learning*, pp. 1180–1189. PMLR, 2015.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. *The journal of machine learning research*, 17(1):2096–2030, 2016.
|
bPG48f3ppz
|
ImageNet-1k is used for alignment pretraining before fine-tuning on other datasets. However, test accuracy after further fine-tuning on ImageNet itself is not reported. Does SpikeCLIP have adequate representational capacity and scalability for such large-scale image classification tasks?
|
SPIKECLIP: A CONTRASTIVE LANGUAGE-IMAGE PRE-TRAINED SPIKING NEURAL NETWORK
Anonymous authors
Paper under double-blind review
ABSTRACT
Spiking neural networks (SNNs) have demonstrated the capability to achieve comparable performance to deep neural networks (DNNs) in both visual and linguistic domains while offering the advantages of improved energy efficiency and adherence to biological plausibility. However, the extension of such single-modality SNNs into the realm of multimodal scenarios remains an unexplored territory. Drawing inspiration from the concept of contrastive language-image pre-training (CLIP), we introduce a novel framework, named SpikeCLIP, to address the gap between two modalities within the context of spike-based computing through a two-step recipe involving “Alignment Pre-training + Dual-Loss Fine-tuning”. Extensive experiments demonstrate that SNNs achieve comparable results to their DNN counterparts while significantly reducing energy consumption across a variety of datasets commonly used for multimodal model evaluation. Furthermore, SpikeCLIP maintains robust performance in image classification tasks that involve class labels not predefined within specific categories.
1 INTRODUCTION
While modern deep neural networks achieve impressive performance on a variety of image, audio, and language tasks and sometimes even perform better than humans, their substantial energy requirements have become a subject of increasing scrutiny. Representative examples like ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023) have exhibited significant energy consumption, especially when engaged in complex reasoning tasks. Consequently, the energy-efficient advantage of SNNs is garnering escalating interest and recognition within the machine-learning community. Emerging as the third generation of neural networks (Maass, 1997), SNNs have drawn increasing attention due to their biological plausibility, event-driven nature, rapid inference capabilities, and efficient energy utilization (Pfeiffer & Pfeil, 2018; Roy et al., 2019). Utilizing SNNs in the development of extensive computational models offers the potential for significant energy efficiency and subsequent cost reductions in the implementation of large-scale applications, thereby promoting further advancements with such a computational paradigm.
Within the realm of computer vision, SNNs have achieved great success in image classification (Cao et al., 2015; Diehl et al., 2015; Rueckauer et al., 2017; Hu et al., 2018; Yin et al., 2020; Fang et al., 2021; Zhou et al., 2023a;b). Among them, the Spikingformer line of work (Zhou et al., 2023a;b), inspired by the Vision Transformer (ViT) (Dosovitskiy et al., 2020), has proposed effective SNN architectures grounded in hardware feasibility. In contrast to their application in computer vision, the utilization of SNNs in natural language processing remains relatively limited (Kao et al., 2022; Lv et al., 2022; Zhu et al., 2023b), with only a handful of studies exploring the potential of SNNs in text processing tasks. For example, Lv et al. (2022) proposed a TextCNN-based SNN for text classification, albeit with a large performance gap relative to Transformer-based language models.
Previous works on SNNs largely targeted single-modality input representations using spikes. However, the exploration of extending SNNs to multimodal contexts remains uncharted territory. To address this gap, we introduce SpikeCLIP, inspired by the dual-stream CLIP trained via contrastive learning (Radford et al., 2021). Through SpikeCLIP, we evaluated the feasibility and potential of using the spike paradigm to handle multimodal tasks.
SpikeCLIP is the first multimodal SNN, trained using the method of “Alignment Pre-training + Dual-Loss Fine-tuning”. Specifically, we initially maximize the cosine similarity between the output representations of CLIP and SpikeCLIP, both image-side and text-side, utilizing a large pre-training dataset. This allows SpikeCLIP to generate universal representations of images and text, a process called “Alignment Pre-training”. Subsequently, to enhance SpikeCLIP’s performance on targeted downstream datasets, we undertook the “Dual-Loss Fine-tuning” process, emphasizing the optimization of Kullback-Leibler divergence (KL) loss and Cross-Entropy (CE) loss. The KL loss is calculated based on the class probability distribution that SpikeCLIP and the task-specific fine-tuned CLIP yield during the classification, while the CE loss is determined by contrasting the class probability distribution produced by SpikeCLIP against the actual labels (see Figure 1 for details). Similar to CLIP, SpikeCLIP possesses zero-shot learning ability (Table 2) and has the flexibility to circumvent the constraints associated with fixed labels in classification tasks (Table 4).
The contribution of this study can be summarized as follows:
• We have demonstrated for the first time that SNNs can perform feature extraction and alignment across multiple modalities through spiking trains. Based on the findings, we propose a cross-modal SNN, named SpikeCLIP, which performs well in cross-modal alignment between images and text.
• A training method is also proposed with a novel “Alignment Pre-training + Dual-loss Fine-tuning” strategy. With pre-trained SpikeCLIP, we make it possible to efficiently fine-tune SpikeCLIP on subsequent datasets without necessitating initialization from scratch for a new dataset.
• SpikeCLIP not only exhibits competitive performance when compared to existing single-modal SNNs but also empowers the spiking computing paradigm to overcome the constraints of the fixed label quantification intrinsic to image classification.
2 RELATED WORK
Unlike traditional Artificial Neural Networks (ANNs), SNNs employ spikes in a stimulus time window (time step, denoted $T$) for information processing, demonstrating biological plausibility, event-driven nature, rapid inference capabilities, and efficient energy utilization (Pfeiffer & Pfeil, 2018; Roy et al., 2019). In recent years, there has been substantial attention on SNNs, resulting in numerous studies dedicated to discovering more efficient architectures and training methods.
In computer vision (CV), a lot of progress has been made in SNNs. Cao et al. (2015) demonstrated the feasibility of applying the weights of Convolutional Neural Networks (CNNs) to SNNs, which
have architectures similar to the original CNNs. This approach exemplifies the transformation of ANNs into SNNs via weight conversion. Similarly, Wang et al. (2022) devised strategies incorporating signed neurons and memory functionalities to counteract the performance decline observed during the ANN-to-SNN conversion. Furthermore, Bu et al. (2023) replaced the traditional ReLU activations of the initial ANNs with a quantization clip-floor-shift activation function, mitigating performance degradation in the ANN-to-SNN transition. In contrast to the method of constructing SNNs from ANNs, some studies employ surrogate gradients to directly train SNNs during backpropagation. For instance, Wu et al. (2018) proposed a Spatio-Temporal Backpropagation (STBP) training framework, introducing an approximate derivative to address the non-differentiable issue related to spiking activities. Expanding on STBP, Zheng et al. (2021) proposed a threshold-dependent batch normalization (tdBN) method, enabling the creation of deeper layers within SNNs by utilizing emerging spatiotemporal backpropagation techniques. Additionally, the innovative approach by Zhou et al. (2022) introduced Transformer-based architectures to SNNs, marking significant advancements in image classification performance. Subsequent enhancements to this groundbreaking model are documented in Zhou et al. (2023a;b), contributing to the continuous refinement and improvement of performance in this field.
In Natural Language Processing (NLP), the exploration of SNNs is relatively nascent. A few seminal works have marked progress in this domain. For instance, Lv et al. (2022) pioneered text classification by transmuting word embeddings into spike trains. Additionally, Bal & Sengupta (2023) innovated an SNN architecture analogous to BERT through knowledge distillation, as elucidated by Hinton et al. (2015). Moreover, Zhu et al. (2023b) delved into the SNNs for text generation, utilizing an architecture analogous to Recurrent Neural Networks (RNNs). In multimodal processing, a myriad of prominent multimodal models grounded in ANNs have been developed, with examples like OSCAR (Li et al., 2020) and SimVLM (Wang et al., 2021) representing single-stream architectures, and CLIP (Radford et al., 2021) and WenLan (Huo et al., 2021) exemplifying dual-stream architectures. However, multimodal SNNs remain largely unexplored due to their challenging training and generally inferior performance compared to ANN counterparts. Nevertheless, drawing inspiration from the pioneering efforts documented in Zhou et al. (2022)(2023a,b), there emerges a promising avenue for the conception of multimodal models rooted in SNNs, taking cues from CLIP (Radford et al., 2021). CLIP utilizes a combined image and text encoder, trained through contrastive learning from extensive image-text pairs. Inspired by CLIP, our SpikeCLIP demonstrates for the first time that SNNs also perform well in feature alignment between images and text.
3 METHOD
Inspired by CLIP (Radford et al., 2021), we perform image classification by evaluating the semantic similarity between visual and textual representations. This methodology incorporates semantically supervised information through the alignment of image and text modalities, thereby obviating the need for explicit classification within the model. Given the strong image representation ability of SNNs (Zhou et al., 2023a,b) and the demonstrated success of spiking representations for text embeddings (Lv et al., 2022), we posit that text information encoded in spiking signals can synergistically complement spiking image representations to accomplish multimodal tasks. In the SpikeCLIP architecture, the image encoder is based on Spikingformer (Zhou et al., 2023b), while the text encoder is a Spiking Multi-Layer Perceptron (S-MLP).
During the pre-training, our primary focus is to optimize the cosine similarity between the output representations produced by both the image and text encoders of CLIP and SpikeCLIP, as described in Equation 3. This process facilitates the alignment of general representations between SpikeCLIP and CLIP. Before fine-tuning SpikeCLIP, a CLIP is fine-tuned on a specific dataset. The fine-tuned CLIP serves to guide the modification of SpikeCLIP's probability distribution before classification, as articulated by the loss function specified in Equation 4. Additionally, SpikeCLIP receives supervision from ground-truth labels, as captured in the loss function presented in Equation 5. During inference, SpikeCLIP is fed an image and several candidate text labels associated with it. After calculating the cosine similarity between the image representation and the various text representations, the text label with the highest cosine similarity is selected as the output. The overall architecture of SpikeCLIP is illustrated in Figure 2. In the following, we start with an overview of spiking neurons, then explore the architecture of SpikeCLIP, and finally discuss the training methodology used.
Figure 2: The architecture of SpikeCLIP. The image processing component comprises a Spiking Patch Splitting (SPS) layer, multiple Spike Blocks, and a projection layer. Within each Spike Block, there is a Spiking Multi-Head Attention (S-MHA) module as well as a Spiking Multi-Layer Perceptron (S-MLP) module. SpikeCLIP’s text processing component integrates a Word Embedding layer along with an MLP-based module. Communication between these individual modules is facilitated through binary code, leading to lower energy consumption.
3.1 INTEGRATE-AND-FIRE NEURON
Leaky Integrate-and-Fire (LIF) neurons are extensively utilized within SNNs to construct the Spiking Neuron Layer (shown in Figure 2), and serve a role analogous to activation units in ANNs. Different from the activation units in ANNs, LIF neurons function akin to a Heaviside step function as the networks propagate forward, wherein all floating-point numbers within the data stream are transformed into binary integers, either 0 or 1. LIF neurons operate on the weighted sum of inputs. The membrane potential of the neuron $U_t$ is affected by these inputs at a given time step $t$. The neuron will produce a spike $S_t$, once its membrane potential exceeds the threshold $U_{thr}$, as follows:
$$
S_t = \begin{cases}
1, & \text{if } U_t \geq U_{thr}; \\
0, & \text{if } U_t < U_{thr}.
\end{cases}
$$
The dynamic equation governing the membrane potential of LIF neurons is presented as follows:
$$
U_t = I_t + \beta U_{t-1} - S_{t-1} U_{thr}, \quad I_t = WX_t
$$
where $U_t$ and $U_{t-1}$ are the membrane potentials at the time of $t$ and $t-1$ respectively. $I_t$ signifies the weighted sum of inputs at time $t$, while $\beta$ represents the rate of membrane potential decay. $W$ comprises a set of learnable weights. Furthermore, the expression $S_{t-1} U_{thr}$ encapsulates the logic governing the reset of the membrane potential.
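To make these dynamics concrete, the following is a minimal PyTorch sketch of a single LIF time step implementing the two equations above. The tensor shapes and weight layout are illustrative assumptions, and the surrogate-gradient machinery needed to train through the Heaviside firing rule is omitted.

```python
import torch

def lif_step(x_t, u_prev, s_prev, weight, beta=0.5, u_thr=1.0):
    """One LIF time step: U_t = I_t + beta * U_{t-1} - S_{t-1} * U_thr.

    x_t:    input at time t, shape (batch, in_features)   -- assumed layout
    u_prev: membrane potential U_{t-1}, shape (batch, out_features)
    s_prev: spikes S_{t-1}, shape (batch, out_features)
    weight: learnable weights W, shape (out_features, in_features)
    """
    i_t = x_t @ weight.t()                      # I_t = W X_t
    u_t = i_t + beta * u_prev - s_prev * u_thr  # leaky integration + reset term
    s_t = (u_t >= u_thr).float()                # Heaviside firing rule
    return u_t, s_t
```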
3.2 ARCHITECTURE
The architecture of SpikeCLIP is shown in Figure 2. The model comprises two primary components: an image encoder and a text encoder. Because Spikingformer (Zhou et al., 2023a) is not only based on a Transformer architecture (like CLIP) but also achieves optimal performance in image classification tasks, we chose it as the base model for the image encoder of SpikeCLIP. In addition, the image encoder combines outputs across multiple time steps through the use of Time-Steps Weight, a design that accounts for the interaction of spike signals across different time steps (see Appendix A.1 for the rationale behind this design choice). As for the text encoder of SpikeCLIP, after evaluating the performance of Transformer-based and Multi-Layer Perceptron (MLP)-based architectures, we chose the simpler MLP-based architecture (a comparative analysis can be found in Appendix A.3).
3.3 PRE-TRAINING AND FINE-TUNING
We introduce a two-step training method of “Alignment Pre-training + Dual-Loss Fine-tuning” to align the semantic spaces of image and text modalities. For convenience, we will refer to a conventional
CLIP as $C$. First, we use $C$ to help align the output representations of the image and text sides of SpikeCLIP in general. This step enables SpikeCLIP to generate high-quality representations for images and text, as well as possess some zero-shot learning ability. Then, we fine-tune $C$ on a downstream dataset. We represent the image encoder of the fine-tuned $C$ as $C_{fv}$, and $C_{fv}$ is used as a teacher model when fine-tuning the SpikeCLIP image encoder. In “Dual-Loss Fine-tuning”, SpikeCLIP receives supervision from the teacher model and the ground-truth labels through KL Loss and CE Loss, respectively.
### 3.3.1 Language-Image Pretraining
In the following, the image encoder and text encoder of $C$ are referred to as $C_v$ and $C_I$, and the image and text encoders of SpikeCLIP as $SC_v$ and $SC_I$. The datasets used for pre-training the SpikeCLIP image and text encoders are denoted $D_{img}$ and $D_{txt}$.
Diverging from the direct application of contrastive training, which may result in vanishing or exploding gradients, we adopt the idea of knowledge distillation (KD) to align the spike-based image encoder of SpikeCLIP with the image representations generated by the CLIP image encoder. The same alignment approach is applied to the text encoder. This design tackles the challenge of directly aligning two streams of spike signals by introducing the floating-point representations generated by CLIP as a "bridge."
The specific operations are as follows: during the pre-training of $SC_v$ (or $SC_I$), for any given image (or text) $x_i$ in a dataset $D_{img}$ (or $D_{txt}$) of size $N$, two latent space vectors $v_i$ and $\hat{v}_i$ are generated after the image passes $C_v$ and $SC_v$ (or the text passes $C_I$ and $SC_I$), respectively. The objective of the pre-training is to maximize the cosine similarity between $v_i$ and $\hat{v}_i$. The loss function is formulated as follows, where $N$ is the number of training instances:
$$L = \frac{1}{N} \sum_{i=1}^{N} \left(1 - \frac{v_i \cdot \hat{v}_i}{\|v_i\| \cdot \|\hat{v}_i\|}\right)$$
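For reference, a minimal PyTorch sketch of this alignment objective, assuming `v` and `v_hat` are batches of CLIP and SpikeCLIP output representations; the function name is ours.

```python
import torch
import torch.nn.functional as F

def alignment_loss(v: torch.Tensor, v_hat: torch.Tensor) -> torch.Tensor:
    """Alignment pre-training loss: mean of (1 - cosine similarity)
    between CLIP outputs v and SpikeCLIP outputs v_hat, both (N, d)."""
    return (1.0 - F.cosine_similarity(v, v_hat, dim=-1)).mean()
```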
### 3.3.2 Fine-tuning Guided by Dual Loss
We perform fine-tuning by optimizing both the KL loss and the CE loss on a downstream dataset (denoted $D_{down}$). As in the work by Kingma & Welling (2013) and Zhu et al. (2023a), we combine the two losses into a joint loss, which enables SpikeCLIP to take both the KL loss and the CE loss into account when optimizing the joint objective. The CE loss guarantees the consistency of SpikeCLIP with the ground-truth labels. On this basis, to compensate for the information loss caused by SpikeCLIP's inherent spike signals, we enforce consistency between SpikeCLIP and the task-specific fine-tuned CLIP by applying the KL loss as a penalty.
The model will try to find a balance and ultimately minimize the sum of these two loss functions. We describe the fine-tuning process in detail below.
Before fine-tuning SpikeCLIP, we need a conventional CLIP fine-tuned on the dataset $D_{down}$, and its image encoder is $C_{fv}$, which is used as a teacher model. Additionally, since the architecture of SpikeCLIP’s text encoder ($SC_I$) is relatively simpler than that of the image encoder ($SC_v$), and the dataset ($D_{txt}$) used to train the text encoder is sufficient, the text encoder has been trained enough. Therefore, we freeze the parameters of the text encoder during fine-tuning to prevent its parameters from being updated (refer to Appendix A.3 for details). Then, we construct a label text set (denoted Candidate labels in Figure 2) containing $M \times k$ text instances by combining the $M$ labels and the corresponding $k$ templates from dataset $D_{down}$. After feeding Candidate labels to $SC_I$, we obtain $M$ text representations with dimension $d$ for classification, called Candidates, similar to the “potential text pairings” in CLIP (Radford et al., 2021).
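As a rough sketch, the Candidates could be built as below; the `text_encoder` interface, the prompt-template formatting, and the averaging over the $k$ template variants per label are our assumptions (the text states only that $M$ $d$-dimensional candidate representations result).

```python
import torch

def build_candidates(labels, templates, text_encoder):
    """Encode M labels x k templates and reduce them to M candidate vectors.

    labels:       list of M class-name strings
    templates:    list of k prompt templates, e.g. "a photo of a {}."
    text_encoder: assumed callable mapping a list of strings to a (k, d) tensor
    """
    reps = []
    for label in labels:
        texts = [t.format(label) for t in templates]  # k prompts per class
        z = text_encoder(texts)                       # (k, d)
        z = z / z.norm(dim=-1, keepdim=True)          # unit-normalize
        reps.append(z.mean(dim=0))                    # average templates (assumption)
    return torch.stack(reps)                          # (M, d) "Candidates"
```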
During the fine-tuning, any image $x_i$ from $D_{down}$ is fed separately into $SC_v$ and $C_{fv}$, outputting two distinct latent $v_i$ and $\hat{v}_i$ of dimension $d$ respectively. Subsequently, matrix multiplication is performed with $v_i$ and $\hat{v}_i$ respectively against Candidates, obtaining two class probability distributions $pre_i$ and $\hat{pre}_i$. We guide $pre_i$ with $\hat{pre}_i$ through minimizing the KL Loss, ensuring that the classification probability distribution of SpikeCLIP does not deviate too much from its corresponding CLIP during the fine-tuning. This constraint is based on knowledge distillation (Hinton et al., 2015), with CLIP as a teacher, guiding $SC_v$ to update parameters in a more stable direction. The CE Loss is derived from
\( \text{pre}_i \) and ground-truth label \( y_i \). In conjunction with KL Loss, CE Loss enhances the efficiency of SpikeCLIP’s fine-tuning on the downstream dataset (refer to Table 3 for details).
The KL loss, CE loss, and Joint loss are defined below:
\[
\text{KLDivLoss} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} \hat{pre}_{ij} \log \left( \frac{\hat{pre}_{ij} + \epsilon}{pre_{ij} + \epsilon} \right)
\]
\[
\text{CELoss} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij} \log(pre_{ij})
\]
\[
\text{JointLoss} = \text{KLDivLoss} + \alpha \cdot \text{CELoss}
\]
where \( N \) is the number of training instances for the downstream dataset, \( \epsilon \) is a small constant, such as \( \epsilon = 1 \times 10^{-10} \), set for numerical stability and to avoid division by zero, and \( \alpha \) is a hyperparameter and defaults to 1.
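A compact sketch of the joint objective under these definitions, assuming `pre` and `pre_hat` are the class probability distributions produced by SpikeCLIP and the fine-tuned CLIP; applying $\epsilon$ inside the CE logarithm is our addition for numerical stability.

```python
import torch

def joint_loss(pre, pre_hat, labels, alpha=1.0, eps=1e-10):
    """JointLoss = KLDivLoss + alpha * CELoss.

    pre:     SpikeCLIP class probabilities, shape (N, M)
    pre_hat: fine-tuned CLIP class probabilities, shape (N, M)
    labels:  ground-truth class indices, shape (N,)
    """
    kl = (pre_hat * torch.log((pre_hat + eps) / (pre + eps))).sum(dim=1).mean()
    ce = -torch.log(pre[torch.arange(len(labels)), labels] + eps).mean()
    return kl + alpha * ce
```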
4 EXPERIMENTS
We conducted four experiments to thoroughly evaluate SpikeCLIP. Section 4.2 presents its CIFAR dataset performance and zero-shot learning ability. In Section 4.3, we extensively studied pre-training’s importance, the impact of pre-training data, and the influence of loss functions during fine-tuning. Section 4.4 evaluates SpikeCLIP’s modality alignment, while Section 4.5 analyzes its energy efficiency. Dataset details are in Section 4.1 and experimental settings are in Appendices A.2 and A.3.
4.1 DATASET
We used the ImageNet-1k dataset (Russakovsky et al., 2015) for pre-training and the following six datasets as downstream datasets: CIFAR10 (Krizhevsky, 2009), CIFAR100 (Krizhevsky, 2009), Flowers102 (Nilsback & Zisserman, 2008), OxfordIIITPet (Parkhi et al., 2012), Caltech101 (Fei-Fei et al., 2004), and STL10 (Coates et al., 2011). These datasets are well known and have varying numbers of labels for image classification tasks. Additionally, we constructed a new dataset ($D_{txt}$) from the labels and templates of all datasets used to assess CLIP, containing 115,708 text entries. This dataset, used for pre-training SpikeCLIP's text encoder, encapsulates a wide array of standard text labels pertinent to image classification tasks (see Appendix A.4 for details).
4.2 IMAGE CLASSIFICATION
In this section, we conduct two experiments: First, we compare the performance difference between SpikeCLIP and the previous models trained on either single-modal or multi-modal data. Secondly, since we are unable to access the complete dataset used to pre-train CLIP as it is not publicly available, we utilize an ANN counterpart to SpikeCLIP, named ScratchCLIP, for comparative experiments with SpikeCLIP. To ensure fairness, ScratchCLIP’s image encoder adopts the Transformer architecture, and its text encoder uses the MLP architecture. While its parameters are similar to SpikeCLIP’s, it lacks spiking neurons and processes data in floating-point form. Moreover, both models were pre-trained and fine-tuned under the same conditions.
4.2.1 RESULTS ON CIFAR
The accuracy on CIFAR achieved by SpikeCLIP is reported in Table 4.2.1 compared to baseline models. In Table 4.2.1 Hybrid training (Rathi et al., 2020), Diet-SNN (Rathi & Roy, 2020), STBP (Wu et al., 2018), STBP NeuNorm (Wu et al., 2019), TSSL-BP (Zhang & Li, 2020), STBP-tdBN (Zheng et al., 2021), TET (Deng et al., 2022), TEBN (Duan et al., 2022) and Spikingformer (Zhou et al., 2023b) are single-modality SNNs. For ANNs, ViT (ViT-B/16) (Dosovitskiy et al., 2010) is one of the top-performing single-modality ANNs, while CLIP (Dosovitskiy et al., 2010) is one
Table 1: Accuracy results on CIFAR datasets. SpikeCLIP achieves accuracies of 94.48% and 77.69% on CIFAR10 and CIFAR100 respectively, surpassing all single-modality SNNs except Spikingformer (with small performance drops of 1.47% and 2.68%). The best and second-best results of SNNs and ANNs are highlighted in bold fonts, and their performance gaps are indicated by "Gap (Accuracy)". Note that the performance gap between SpikeCLIP and the single-modality state-of-the-art model (i.e., Spikingformer) is much smaller than that between the conventional CLIP and ViT (the SOTA traditional ANN on the CIFAR datasets).
| Method | Param (M) | Time Step | CIFAR 10 | CIFAR 100 | Gap (Accuracy) |
|--------------|-----------|-----------|----------|-----------|----------------|
| Hybrid training | 9.27 | 125 | 92.22 | 67.87 | |
| Diet-SNN | 0.27 | 10/5 | 92.54 | 64.07 | |
| STBP | 17.54 | 12 | 89.83 | -- | |
| STBP NeuNorm | 17.54 | 12 | 90.53 | -- | |
| TSSL-BP | 17.54 | 5 | 91.41 | -- | |
| STBP-tdBN | 12.63 | 4 | 92.92 | 70.86 | |
| TET | 12.63 | 4 | 94.44 | 74.47 | |
| TEBN | – | 4 | 95.58 | 78.71 | |
| Spikingformer | 9.32 | 4 | 95.95 | 80.37 | 1.47/2.68 |
| SpikeCLIP (ours) | 56.87 | 4 | 94.48 | 77.69 | |
| ViT | 86.39 | 1 | 99.13 | 94.20 | |
| CLIP | 149.6 | 1 | 98.45 | 89.70 | 0.68/4.50 |
of the best-performing multimodal ANNs. According to the data in Table 1, it is evident that SpikeCLIP has a higher classification accuracy (94.48%/77.69%) than any other single-modality SNN on the CIFAR datasets, except for Spikingformer, which currently holds the top spot. However, it is worth noting that single-modality models tend to perform better than multi-modality ones, even among ANNs. As shown in the table, ViT, a single-modality model, outperforms CLIP on CIFAR10/100 by 0.68%/4.50%. Therefore, a performance gap (1.47%/2.68%) between SpikeCLIP and Spikingformer on CIFAR10/100 is to be expected for SNNs.
Overall, the comparison between the two gaps described above illustrates the degree of performance of SpikeCLIP, which sets the benchmark for future multimodal SNNs on the same dataset.
4.2.2 Zero-shot Results
CLIP is trained using a large dataset composed of numerous image-text pairs, but this dataset is not open source and we cannot train SpikeCLIP with it. For evaluating the zero-shot learning ability of SpikeCLIP and its ANN counterpart, ScratchCLIP, we resort to using ImageNet-1k as the pre-training dataset for both, as ImageNet-1k is one of the largest image-text classification datasets available to us. To compare their zero-shot learning ability, SpikeCLIP and ScratchCLIP are evaluated on downstream datasets for accuracy after being trained for the same number of epochs on the ImageNet-1k dataset.
Table 2: Zero-shot classification results. CLIP is a pre-trained model (openai/clip-vit-base-patch16). ScratchCLIP is an ANN with a transformer on the image side and an MLP on the text side.
| Model | CIFAR 10 | CIFAR 100 | Flowers 102 | Caltech 101 | OxfordIIITPet | STL 10 | Avg |
|-------------|----------|-----------|-------------|-------------|---------------|--------|------|
| ScratchCLIP | 59.70 | 27.94 | 8.33 | 48.72 | 48.60 | 75.69 | 44.83|
| SpikeCLIP | 58.03 | 26.66 | 9.02 | 48.28 | 44.89 | 77.79 | 44.11|
Note: For comparison with SpikeCLIP: (a) ScratchCLIP’s image encoder has four layers like SpikeCLIP; (b) In the image encoder of ScratchCLIP, a patch splitting layer with the same parameters as the SPS layer in SpikeCLIP is used to maintain the same parameter level as SpikeCLIP; (c) ScratchCLIP undergoes the same rounds of pre-training as SpikeCLIP on ImageNet-1k, followed by zero-shot classification on the downstream dataset.
According to the data presented in Table 2, SpikeCLIP has an average accuracy of 44.11% on downstream datasets. This is slightly lower than its ANN counterpart, ScratchCLIP, which has an average accuracy of 44.83%. However, the difference between the two is only 0.72%, which is negligible. Despite the fact that SpikeCLIP uses integer operations to conserve energy, which distinguishes it from ScratchCLIP, it still performs competitively under equivalent pre-training conditions. Therefore, we can reasonably assume that SpikeCLIP's performance could be further improved with additional training data.
4.3 ABLATION EXPERIMENTS
We conducted ablation experiments to investigate the impact of the following three factors on SpikeCLIP's performance:
- Pre-training with large-scale dataset.
- The size of and the data distribution of datasets used for pre-training.
- Dual loss applied in fine-tuning stage.
Table 3: Ablation study. The top-performing results in each column are highlighted. **E1** reveals that pre-training with LSD significantly improves the model’s classification performance on downstream datasets; **E2** affirms that optimizing both losses during fine-tuning yields the most significant performance boost.
| Setting | CIFAR 10 | CIFAR 100 | Flowers 102 | Caltech 101 | OxfordIIITPet | STL 10 | Avg |
|---------|----------|-----------|-------------|-------------|---------------|--------|-----|
| E1 | | | | | | | |
| w/o LSD | 93.23 | 74.59 | 66.98 | 23.67 | 34.94 | 69.25 | 60.44 |
| w/ LSD | 94.48 | 77.69 | 86.07 | 82.31 | 67.18 | 89.48 | 82.89 |
| E2 | | | | | | | |
| CE | 94.22 | 77.52 | 82.86 | 66.01 | 88.92 | 65.29 | 78.69 |
| KL | 94.20 | 77.42 | 81.76 | 65.95 | 89.58 | 62.72 | 78.61 |
| CE + KL | 94.33 | 77.68 | 82.97 | 66.34 | 89.59 | 86.47 | 82.90 |
Pre-training with a large-scale dataset. Previous single-modality SNNs could only be trained from scratch on new datasets when performing image classification tasks. This meant that a separate model had to be trained for each downstream dataset, which is highly inefficient. In contrast, our SpikeCLIP achieves effective zero-shot classification on various downstream datasets through "Alignment Pre-training" and only requires fine-tuning on the downstream dataset to significantly improve classification performance. This is the first pre-training and fine-tuning paradigm built on an SNN framework. To compare against the setup that pre-trains on a large-scale dataset (LSD), we completed the "Alignment Pre-training + Dual-Loss Fine-tuning" steps on each downstream dataset separately. As shown in E1 of Table 3, when pre-training is performed using the LSD, the increase in accuracy ranges from 1.25% to 58.64%, with an average improvement of 22.45%.
Dataset size and data distribution during pre-training. Our SpikeCLIP has demonstrated impressive results on downstream datasets despite being pre-trained only on ImageNet-1k, a relatively limited dataset. However, we believe that expanding the pre-training dataset could further enhance its performance. In pursuit of this hypothesis, we present the following discussion and experimental design:
Generally, a model's performance improves with the amount of data it is trained on, and this effect can be measured by the size of the training set and the similarity between the training and evaluation datasets. Larger amounts of data and more similar distributions between the two datasets typically lead to better evaluation results. Taking these factors into consideration, we establish gradients of data size and form three data distribution groups for each size: Slightly-similar, Intermediate, and Dissimilar. Please refer to Appendix A.6 for details. Figure 3 illustrates that SpikeCLIP follows these trends, which leads us to believe that training SpikeCLIP on larger and more varied datasets could result in even better performance.
Dual-loss for fine-tuning. During the fine-tuning stage, we use the joint loss to update the parameters of $SC_v$; it comprises two terms: the KL loss and the CE loss. The CE loss relies on the ground-truth labels to guide training, while the KL loss ensures that the model captures the ranking information of the classification probabilities generated by $C_{fv}$. This dual-loss approach helps maintain weight stability during gradient updates, as demonstrated in E2 of Table 3. Our hypothesis is confirmed, as SpikeCLIP's performance improves when both the CE and KL loss functions are applied.
4.4 CROSS-MODAL IMAGE CLASSIFICATION
In this section, we demonstrate the effect of SpikeCLIP in aligning modality information between images and text into the same semantic space using two methods — Expanded Label Set (ELS) and Unseen Label Set (ULS). The implementation details of the two methods are given in Appendix A.5.
Figure 3: The impact of dataset size and data distribution. The training data is sampled from various datasets, leading to differences in similarity between the training dataset and the evaluation dataset. (a) Slightly-similar: ImageNet-1k + CIFAR100 + CIFAR10; Intermediate: ImageNet-1k + CIFAR100; Dissimilar: ImageNet-1k. (b) Slightly-similar: CIFAR10 + CIFAR100 + ImageNet-1k; Intermediate: CIFAR10 + CIFAR100; Dissimilar: CIFAR10.
Compared to the baseline, both transformation methods incur only a small performance penalty. It is worth noting that this is the first time SNNs have achieved modal alignment in classification tasks without the constraint of fixed labels.
Table 4: Cross-modal image classification. In ELS, the dataset’s label set is expanded to $N$ times the original, where $N \in \{2, 5, 8\}$. In ULS, unseen label words are used to replace the label set of the downstream dataset, according to a replacement ratio $\alpha$, where $\alpha \in \{20\%, 40\%, 80\%, 100\%\}$. Experimental results from both ELS and ULS strategies demonstrate that SpikeCLIP excels in achieving accurate image-text alignment and exhibits robustness in image classification tasks.
| Dataset | Baseline | ELS ×2 | ELS ×5 | ELS ×8 | ULS 20% | ULS 40% | ULS 80% | ULS 100% |
|---------|----------|--------|--------|--------|---------|---------|---------|----------|
| CIFAR 10 | 94.33 | 94.33 | 94.33 | 94.32 | 94.33 (0.028) | 94.32 (0.033) | 94.22 (0.017) | 94.18 |
| STL 10 | 89.59 | 89.59 | 89.59 | 89.45 | 89.45 (0.008) | 89.20 (0.127) | 87.42 (0.504) | 87.64 |
4.5 Energy Consumption
We report in Table 5 the average firing rate of spiking neurons (Firing Rate), energy consumption (Energy), and energy reduction (Energy Reduction) rate of SpikeCLIP compared to ScratchCLIP on downstream datasets. The calculation methods are shown in Appendix A.7.
Table 5: Energy consumption. SpikeCLIP reduces energy consumption by 77.06% to 78.66% compared to its ANN counterpart.
| Dataset | CIFAR 10 | CIFAR 100 | Flowers 102 | Caltech 101 | OxfordIIITPet | STL 10 |
|-------------|----------|-----------|-------------|-------------|--------------|--------|
| Firing Rate(%) | 27.26 | 28.98 | 29.30 | 27.97 | 27.93 | 27.56 |
| Energy(mJ) | 3.17 | 3.37 | 3.41 | 3.25 | 3.25 | 3.21 |
| Energy Reduction | 78.66% ↓ | 77.31% ↓ | 77.06% ↓ | 78.10% ↓ | 78.13% ↓ | 78.42% ↓ |
5 Conclusion
This study has illustrated the capacity of Spiking Neural Networks (SNNs) to effectively capture multi-modal features and perform multi-modal classifications with remarkable proficiency, contingent upon the alignment of features from distinct modalities. We introduced SpikeCLIP, a novel multi-modal SNN architecture, underpinned by the innovative training approach termed “Alignment Pre-training + Dual-Loss Fine-tuning”. SpikeCLIP exhibits impressive classification capabilities and also demonstrates promise under the setting of zero-shot learning. By successfully bridging the gap in the application of SNNs within multi-modal scenarios, this research serves as a fundamental stepping stone, laying the groundwork for prospective investigations in this field.
REPRODUCIBILITY STATEMENT
The datasets used in the above experiments are all open source. To replicate the experiments in Sections 4.2, 4.3, and 4.4, we have provided all the code and running scripts in the supplementary materials, together with a README that explains how to run the code. In addition, the project will be published on GitHub to provide experimental support.
REFERENCES
Malyaban Bal and Abhronil Sengupta. SpikingBERT: Distilling BERT to train spiking language models using implicit differentiation. arXiv preprint arXiv:2308.10873, 2023.
Tong Bu, Wei Fang, Jianhao Ding, PengLin Dai, Zhaofei Yu, and Tiejun Huang. Optimal ann-snn conversion for high-accuracy and ultra-low-latency spiking neural networks. arXiv preprint arXiv:2303.04347, 2023.
Yongqiang Cao, Yang Chen, and Deepak Khosla. Spiking deep convolutional neural networks for energy-efficient object recognition. International Journal of Computer Vision, 113(1):54–66, 2015.
Adam Coates, Andrew Y. Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Geoffrey J. Gordon, David B. Dunson, and Miroslav Dudík (eds.), Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2011, Fort Lauderdale, USA, April 11-13, 2011, volume 15 of JMLR Proceedings, pp. 215–223. JMLR.org, 2011. URL http://proceedings.mlr.press/v15/coates11a.html.
Shikuang Deng, Yuhang Li, Shanghang Zhang, and Shi Gu. Temporal efficient training of spiking neural network via gradient re-weighting. arXiv preprint arXiv:2202.11946, 2022.
Peter U Diehl, Daniel Neil, Jonathan Binas, Matthew Cook, Shih-Chii Liu, and Michael Pfeiffer. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In 2015 International joint conference on neural networks (IJCNN), pp. 1–8. IEEE, 2015.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Chaoteng Duan, Jianhao Ding, Shiyan Chen, Zhaofei Yu, and Tiejun Huang. Temporal effective batch normalization in spiking neural networks. Advances in Neural Information Processing Systems, 35:34377–34390, 2022.
Wei Fang, Zhaofei Yu, Yanqing Chen, Tiejun Huang, Timothée Masquelier, and Yonghong Tian. Deep residual learning in spiking neural networks. In Neural Information Processing Systems, 2021.
Li Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In 2004 Conference on Computer Vision and Pattern Recognition Workshop, pp. 178–178, 2004. doi: 10.1109/CVPR.2004.383.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Mark Horowitz. 1.1 computing’s energy problem (and what we can do about it). In 2014 IEEE international solid-state circuits conference digest of technical papers (ISSCC), pp. 10–14. IEEE, 2014.
Yangfan Hu, Huajin Tang, and Gang Pan. Spiking deep residual networks. IEEE Transactions on Neural Networks and Learning Systems, 2018.
|
VXak3CZZGC
|
I'd appreciate it if the authors could explain the motivation behind Equation 6 a little more. Is the primary goal to improve on the computational efficiency of computing the average across all training data points? Or is there another benefit to adopting an exponential moving average? This also ties loosely into my next question.
|
HYPO: HYPERSPHERICAL OUT-OF-DISTRIBUTION GENERALIZATION
Haoyue Bai∗†, Yifei Ming∗†, Julian Katz-Samuels‡†, Yixuan Li†
Department of Computer Sciences, University of Wisconsin-Madison†
Amazon‡
{baihaoyue, alvinming, sharonli}@cs.wisc.edu, jkatzsamuels@gmail.com
ABSTRACT
Out-of-distribution (OOD) generalization is critical for machine learning models deployed in the real world. However, achieving this can be fundamentally challenging, as it requires the ability to learn invariant features across different domains or environments. In this paper, we propose a novel framework HYPO (HYPerspherical OOD generalization) that provably learns domain-invariant representations in a hyperspherical space. In particular, our hyperspherical learning algorithm is guided by intra-class variation and inter-class separation principles—ensuring that features from the same class (across different training domains) are closely aligned with their class prototypes, while different class prototypes are maximally separated. We further provide theoretical justifications on how our prototypical learning objective improves the OOD generalization bound. Through extensive experiments on challenging OOD benchmarks, we demonstrate that our approach outperforms competitive baselines and achieves superior performance. Code is available at https://github.com/deeplearning-wisc/hypo.
1 INTRODUCTION
Deploying machine learning models in real-world settings presents a critical challenge of generalizing under distributional shifts. These shifts are common due to mismatches between the training and test data distributions. For instance, in autonomous driving, a model trained on in-distribution (ID) data collected under sunny weather conditions is expected to perform well in out-of-distribution (OOD) scenarios, such as rain or snow. This underscores the importance of the OOD generalization problem, which involves learning a predictor that can generalize across all possible environments, despite being trained on a finite subset of training environments.
A plethora of OOD generalization algorithms has been developed in recent years (Zhou et al., 2022), where a central theme is to learn domain-invariant representations—features that are consistent and meaningful across different environments (domains) and can generalize to the unseen test environment. Recently, Ye et al. (2021) theoretically showed that the OOD generalization error can be bounded in terms of intra-class variation and inter-class separation. Intra-class variation measures the stability of representations across different environments, while inter-class separation assesses the dispersion of features among different classes. Ideally, features should display low variation and high separation, in order to generalize well to OOD data (formally described in Section 3). Despite the theoretical analysis, a research question remains open in the field:
RQ: How to design a practical learning algorithm that directly achieves these two properties, and what theoretical guarantees can the algorithm offer?
To address the question, this paper presents a learning framework HYPO (HYPerspherical OOD generalization), which provably learns domain-invariant representations in the hyperspherical space with unit norm (Section 4). Our key idea is to promote low variation (aligning representation across
∗Equal contribution. Correspondence to Yifei Ming and Yixuan Li
†This work is not related to the author’s position at Amazon.
domains for every class) and high separation (separating prototypes across different classes). In particular, the learning objective shapes the embeddings such that samples from the same class (across all training environments) gravitate towards their corresponding class prototype, while different class prototypes are maximally separated. The two losses in our objective function can be viewed as optimizing the key properties of intra-class variation and inter-class separation, respectively. Since samples are encouraged to have a small distance with respect to their class prototypes, the resulting embedding geometry can have a small distribution discrepancy across domains and benefits OOD generalization. Geometrically, we show that our loss function can be understood through the lens of maximum likelihood estimation under the classic von Mises-Fisher distribution.
**Empirical contribution.** Empirically, we demonstrate strong OOD generalization performance by extensively evaluating HYPO on common benchmarks (Section 5). On the CIFAR-10 (ID) vs. CIFAR-10-Corruption (OOD) task, HYPO substantially improves the OOD generalization accuracy on challenging cases such as Gaussian noise, from 78.09% to 85.21%. Furthermore, we establish superior performance on popular domain generalization benchmarks, including PACS, Office-Home, VLCS, etc. For example, we achieve 88.0% accuracy on PACS which outperforms the best loss-based method by 1.1%. This improvement is non-trivial using standard stochastic gradient descent optimization. When coupling our loss with specialized optimization SWAD (Cha et al., 2021), the accuracy is further increased to 89%. We provide visualization and quantitative analysis to verify that features learned by HYPO indeed achieve low intra-class variation and high inter-class separation.
**Theoretical insight.** We provide theoretical justification for how HYPO can guarantee improved OOD generalization, supporting our empirical findings. Our theory complements Ye et al. (2021), which does not provide a loss for optimizing the intra-class variation or inter-class separation. Thus, a key contribution of this paper is to provide a crucial link between provable understanding and a practical algorithm for OOD generalization in the hypersphere. In particular, our Theorem 6.1 shows that when the model is trained with our loss function, we can upper bound intra-class variation, a key quantity to bound OOD generalization error. For a learnable OOD generalization task, the upper bound on generalization error is determined by the variation estimate on the training environments, which is effectively reduced by our loss function under sufficient sample size and expressiveness of the neural network.
2 PROBLEM SETUP
We consider a multi-class classification task that involves a pair of random variables \((X, Y)\) over instances \(x \in X \subset \mathbb{R}^d\) and corresponding labels \(y \in Y := \{1, 2, \cdots, C\}\). The joint distribution of \(X\) and \(Y\) is unknown and represented by \(P_{XY}\). The goal is to learn a predictor function, \(f : X \rightarrow \mathbb{R}^C\), that can accurately predict the label \(y\) for an input \(x\), where \((x, y) \sim P_{XY}\).
Unlike in standard supervised learning tasks, the out-of-distribution (OOD) generalization problem is challenged by the fact that one cannot sample directly from \(P_{XY}\). Instead, we can only sample \((X, Y)\) under limited environmental conditions, each of which corrupts or varies the data differently. For example, in autonomous driving, these environmental conditions may represent different weathering conditions such as snow, rain, etc. We formalize this notion of environmental variations with a set of environments or domains \(E_{all}\). Sample pairs \((X^e, Y^e)\) are randomly drawn from environment \(e\). In practice, we may only have samples from a finite subset of available environments \(E_{avail} \subset E_{all}\). Given \(E_{avail}\), the goal is to learn a predictor \(f\) that can generalize across all possible environments. The problem is stated formally below.
**Definition 2.1 (OOD Generalization).** Let \(E_{avail} \subset E_{all}\) be a set of training environments, and assume that for each environment \(e \in E_{avail}\), we have a dataset \(D^e = \{(x^e_j, y^e_j)\}_{j=1}^{n_e}\), sampled i.i.d. from an unknown distribution \(P_{X^eY^e}\). The goal of OOD generalization is to find a classifier \(f^*\), using the data from the datasets \(D^e\), that minimizes the worst-case risk over the entire family of environments \(E_{all}\):
\[
\min_{f \in F} \max_{e \in E_{all}} \mathbb{E}_{P_{X^eY^e}} \ell(f(X^e), Y^e),
\]
where \(F\) is hypothesis space and \(\ell(\cdot, \cdot)\) is the loss function.
The problem is challenging since we do not have access to data from domains outside \(E_{avail}\). In particular, the task is commonly referred to as multi-source domain generalization when \(|E_{avail}| > 1\).
3 Motivation of Algorithm Design
Our work is motivated by the theoretical findings in Ye et al. (2021), which shows that the OOD generalization performance can be bounded in terms of intra-class variation and inter-class separation with respect to various environments. The formal definitions are given as follows.
**Definition 3.1** (Intra-class variation). The variation of feature $\phi$ across a domain set $\mathcal{E}$ is
$$V(\phi, \mathcal{E}) = \max_{y \in \mathcal{Y}} \sup_{e, e' \in \mathcal{E}} \rho(\mathbb{P}(\phi^e | y), \mathbb{P}(\phi^{e'} | y)), \quad (2)$$
where $\rho(\mathbb{P}, \mathbb{Q})$ is a symmetric distance (e.g., Wasserstein distance, total variation, Hellinger distance) between two distributions, and $\mathbb{P}(\phi^e | y)$ denotes the class-conditional distribution for features of samples in environment $e$.
**Definition 3.2** (Inter-class separation\(^1\)). The separation of feature $\phi$ across domain set $\mathcal{E}$ is
$$I_\rho(\phi, \mathcal{E}) = \frac{1}{C(C - 1)} \sum_{y \neq y' \in \mathcal{Y}} \min_{e \in \mathcal{E}} \rho(\mathbb{P}(\phi^e | y), \mathbb{P}(\phi^e | y')). \quad (3)$$
The intra-class variation $V(\phi, \mathcal{E})$ measures the stability of feature $\phi$ over the domains in $\mathcal{E}$ and the inter-class separation $I(\phi, \mathcal{E})$ captures the ability of $\phi$ to distinguish different labels. Ideally, features should display high separation and low variation.
**Definition 3.3.** The OOD generalization error of classifier $f$ is defined as follows:
$$\text{err}(f) = \max_{e \in \mathcal{E}_{\text{all}}} \mathbb{E}_{p_e X Y} \ell(f(X^e), Y^e) - \max_{e \in \mathcal{E}_{\text{avail}}} \mathbb{E}_{p_e X Y} \ell(f(X^e), Y^e)$$
which is bounded by the variation estimate on $\mathcal{E}_{\text{avail}}$ with the following theorem.
**Theorem 3.1** (OOD error upper bound, informal (Ye et al., 2021)). Suppose the loss function $\ell(\cdot, \cdot)$ is bounded by $[0, B]$. For a learnable OOD generalization problem with sufficient inter-class separation, the OOD generalization error $\text{err}(f)$ can be upper bounded by
$$\text{err}(f) \leq O\left( \left( V^{\text{sup}}(h, \mathcal{E}_{\text{avail}}) \right)^{\frac{\alpha^2}{(\alpha + d)^2}} \right), \quad (4)$$
for some $\alpha > 0$, and $V^{\text{sup}}(h, \mathcal{E}_{\text{avail}}) \triangleq \sup_{\beta \in S^{d-1}} V(\beta^\top h, \mathcal{E}_{\text{avail}})$ is the intra-class variation, $h(\cdot) \in \mathbb{R}^d$ is the feature vector, $\beta$ is a vector in the unit hypersphere $S^{d-1} = \{\beta \in \mathbb{R}^d : \|\beta\|_2 = 1\}$, and $f$ is a classifier based on the normalized feature $h$.
**Remarks.** The Theorem above suggests that both low intra-class variation and high inter-class separation are desirable properties for theoretically grounded OOD generalization. Note that in the full formal Theorem (see Appendix C), maintaining the inter-class separation is necessary for the learnability of the OOD generalization problem (Def. C.2). In other words, when the learned embeddings exhibit high inter-class separation, the problem becomes learnable. In this context, bounding intra-class variation becomes crucial for reducing the OOD generalization error.
Despite the theoretical underpinnings, it remains unknown to the field how to design a practical learning algorithm that directly achieves these two properties, and what theoretical guarantees can the algorithm offer. This motivates our work.
To reduce the OOD generalization error, our key motivation is to design a hyperspherical learning algorithm that directly promotes low variation (aligning representation across domains for every class) and high separation (separating prototypes across different classes).
4 Method
Following the motivation in Section 3, we now introduce the details of the learning algorithm HYPO (HYPerspherical OOD generalization), which is designed to promote domain invariant representations.
---
\(^1\)Referred to as “Informativeness” in Ye et al. (2021).
in the hyperspherical space. The key idea is to shape the hyperspherical embedding space so that samples from the same class (across all training environments \( \mathcal{E}_{\text{avail}} \)) are closely aligned with the corresponding class prototype. Since all points are encouraged to have a small distance with respect to the class prototypes, the resulting embedding geometry can have a small distribution discrepancy across domains and hence benefits OOD generalization. In what follows, we first introduce the learning objective (Section 4.1), and then we discuss the geometrical interpretation of the loss and embedding (Section 4.2). We will provide theoretical justification for HYPO in Section 6, which leads to a provably smaller intra-class variation, a key quantity to bound OOD generalization error.
### 4.1 Hyperspherical Learning for OOD Generalization
**Loss function.** The learning algorithm is motivated to directly optimize the two criteria: intra-class variation and inter-class separation. At a high level, HYPO aims to learn embeddings for each sample in the training environments by maintaining a class prototype vector \( \mu_c \in \mathbb{R}^d \) for each class \( c \in \{1, 2, ..., C\} \). To optimize for low variation, the loss encourages the feature embedding of a sample to be close to its class prototype. To optimize for high separation, the loss encourages different class prototypes to be far apart from each other.
Specifically, we consider a deep neural network \( h : \mathcal{X} \mapsto \mathbb{R}^d \) that maps an input \( \tilde{x} \in \mathcal{X} \) to a feature embedding \( \tilde{z} := h(\tilde{x}) \). The loss operates on the normalized feature embedding \( z := \tilde{z}/\|\tilde{z}\|_2 \). The normalized embeddings are also referred to as hyperspherical embeddings, since they are on a unit hypersphere, denoted as \( S^{d-1} := \{ z \in \mathbb{R}^d \mid \|z\|_2 = 1 \} \). The loss is formalized as follows:
\[
L = -\frac{1}{N} \sum_{e \in \mathcal{E}_{\text{avail}}} \sum_{i=1}^{|D_e|} \log \left( \frac{\exp(z_i^T \mu_{c(i)}/\tau)}{\sum_{j=1}^C \exp(z_i^T \mu_j/\tau)} \right) + \frac{1}{C} \sum_{i=1}^C \log \left( \frac{1}{C-1} \sum_{j \neq i, j \in \mathcal{Y}} \exp(\mu_i^T \mu_j/\tau) \right),
\]
where \( N \) is the number of samples, \( \tau \) is the temperature, \( z \) is the normalized feature embedding, and \( \mu_c \) is the prototype embedding for class \( c \). While hyperspherical learning algorithms have been studied in other contexts (Mettes et al., 2019; Khosla et al., 2020; Ming et al., 2023), none of the prior works explored its provable connection to domain generalization, which is our distinct contribution. We will theoretically show in Section 6 that minimizing our loss function effectively reduces intra-class variation, a key quantity to bound OOD generalization error.
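For concreteness, a minimal PyTorch sketch of Equation 5, assuming embeddings and prototypes are already L2-normalized; the function name and the default temperature value are ours.

```python
import torch
import torch.nn.functional as F

def hypo_loss(z, labels, prototypes, tau=0.1):
    """Variation term + separation term of Eq. 5.

    z:          L2-normalized embeddings, shape (N, d)
    labels:     class indices, shape (N,)
    prototypes: L2-normalized class prototypes mu, shape (C, d)
    """
    # Variation term: -log softmax of z . mu_{c(i)} / tau at the true class.
    logits = z @ prototypes.t() / tau                      # (N, C)
    variation = F.cross_entropy(logits, labels)
    # Separation term: mean over i of log mean_{j != i} exp(mu_i . mu_j / tau).
    C = prototypes.size(0)
    proto_sim = prototypes @ prototypes.t() / tau          # (C, C)
    off_diag = proto_sim[~torch.eye(C, dtype=torch.bool)].view(C, C - 1)
    separation = torch.log(off_diag.exp().mean(dim=1)).mean()
    return variation + separation
```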
The training objective in Equation 5 can be efficiently optimized end-to-end. During training, an important step is to estimate the class prototype \( \mu_c \) for each class \( c \in \{1, 2, ..., C\} \). The class-conditional prototypes can be updated in an exponential-moving-average manner (EMA) (Li et al., 2020):
\[
\mu_c := \text{Normalize}(\alpha \mu_c + (1 - \alpha)z), \quad \forall c \in \{1, 2, \ldots, C\}
\]
where the prototype \( \mu_c \) for class \( c \) is updated during training as the moving average of all embeddings with label \( c \), and \( z \) denotes the normalized embedding of samples of class \( c \). An end-to-end pseudo algorithm is summarized in Appendix A.
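A corresponding sketch of the EMA prototype update is given below. For brevity, it folds all samples of a class in the batch into a single mean-based update, whereas the update above is applied per sample; as before, the helper is illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_prototypes(prototypes, z, labels, alpha=0.95):
    """EMA update of class prototypes, using the batch mean of each
    class's normalized embeddings as a simplification of the
    per-sample update."""
    for c in labels.unique():
        batch_mean = z[labels == c].mean(dim=0)
        proto = alpha * prototypes[c] + (1 - alpha) * batch_mean
        prototypes[c] = F.normalize(proto, dim=0)
    return prototypes
```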
**Class prediction.** In testing, classification is conducted by identifying the closest class prototype:
\[
\hat{y} = \arg\max_{c \in [C]} f_c(x), \quad \text{where } f_c(x) = z^T \mu_c \text{ and } z = \frac{h(x)}{\|h(x)\|_2} \text{ is the normalized feature embedding.}
\]
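In code, inference amounts to a nearest-prototype lookup, sketched below under the same assumptions:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict(h, x, prototypes):
    """Classify inputs by their closest class prototype."""
    z = F.normalize(h(x), dim=-1)             # (N, d), unit-norm rows
    return (z @ prototypes.t()).argmax(dim=-1)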
### 4.2 Geometrical Interpretation of Loss and Embedding
Geometrically, the loss function above can be interpreted as learning embeddings located on the surface of a unit hypersphere. The hyperspherical embeddings can be modeled by the von Mises-Fisher (vMF) distribution, a well-known distribution in directional statistics (Jupp & Mardia, 2009). For a unit vector \( z \in \mathbb{R}^d \) in class \( c \), the probability density function is defined as
\[
p(z \mid y = c) = Z_d(\kappa) \exp(\kappa \mu_c^T z),
\]
where \( \mu_c \in \mathbb{R}^d \) denotes the mean direction of the class \( c \), \( \kappa \geq 0 \) denotes the concentration of the distribution around \( \mu_c \), and \( Z_d(\kappa) \) denotes the normalization factor. A larger \( \kappa \) indicates a higher concentration around the class center. In the extreme case of \( \kappa = 0 \), the samples are distributed uniformly on the hypersphere.
Under this probabilistic model, an embedding \( z \) is assigned to the class \( c \) with the following probability
\[
p(y = c \mid z; \{\kappa, \mu_j\}_{j=1}^C) = \frac{Z_d(\kappa) \exp(\kappa \mu_c^\top z)}{\sum_{j=1}^C Z_d(\kappa) \exp(\kappa \mu_j^\top z)} = \frac{\exp(\mu_c^\top z / \tau)}{\sum_{j=1}^C \exp(\mu_j^\top z / \tau)},
\]
where \( \tau = 1/\kappa \) denotes a temperature parameter.
**Maximum likelihood view.** Notably, minimizing the first term in our loss (cf. Eq. 5) is equivalent to performing maximum likelihood estimation under the vMF distribution:
\[
\arg\max_\theta \prod_{i=1}^N p(y_i \mid x_i; \{\kappa, \mu_j\}_{j=1}^C), \quad \text{where } (x_i, y_i) \in \bigcup_{e \in \mathcal{E}_{\text{avail}}} D_e,
\]
where \( i \) indexes samples, \( j \) indexes classes, and \( N \) is the size of the training set. In effect, this loss encourages each ID sample to have a high probability assigned to its correct class under the mixture of vMF distributions.
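To spell out the equivalence, the per-sample negative log-likelihood under this mixture is
\[
-\log p(y_i = c(i) \mid z_i; \{\kappa, \mu_j\}_{j=1}^C) = -\log \frac{\exp(\mu_{c(i)}^\top z_i / \tau)}{\sum_{j=1}^C \exp(\mu_j^\top z_i / \tau)},
\]
which, averaged over the \( N \) training samples, is exactly the first (variation) term of Equation 5.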
## 5 EXPERIMENTS
In this section, we show that HYPO achieves strong OOD generalization performance in practice, establishing competitive performance on several benchmarks. In what follows, we describe the experimental setup in Section 5.1, followed by main results and analysis in Section 5.2.
### 5.1 EXPERIMENTAL SETUP
**Datasets.** Following common benchmarks in the literature, we use CIFAR-10 (Krizhevsky et al., 2009) as the in-distribution data and CIFAR-10-C (Hendrycks & Dietterich, 2019), which applies 19 different common corruptions to CIFAR-10, as OOD data. In addition to CIFAR-10, we conduct experiments on popular benchmarks including PACS (Li et al., 2017), Office-Home (Gulrajani & Lopez-Paz, 2020), and VLCS (Gulrajani & Lopez-Paz, 2020) to validate generalization performance. PACS contains 4 domains/environments (photo, art painting, cartoon, sketch) with 7 classes (dog, elephant, giraffe, guitar, horse, house, person). Office-Home comprises four domains: art, clipart, product, and real. Results on the additional OOD datasets Terra Incognita (Gulrajani & Lopez-Paz, 2020) and ImageNet can be found in Appendix F and Appendix G.
**Evaluation metrics.** We report the following two metrics: (1) ID classification accuracy (ID Acc.) for ID generalization, and (2) OOD classification accuracy (OOD Acc.) for OOD generalization.
**Experimental details.** In our main experiments, we use ResNet-18 for CIFAR-10 and ResNet-50 for PACS, Office-Home, and VLCS. For these datasets, we use stochastic gradient descent with momentum 0.9, and weight decay \( 10^{-4} \). For CIFAR-10, we train the model from scratch for 500 epochs using an initial learning rate of 0.5 and cosine scheduling, with a batch size of 512. Following common practice for contrastive losses (Chen et al., 2020; Khosla et al., 2020; Yao et al., 2022), we use an MLP projection head with one hidden layer to obtain features. The embedding (output) dimension is 128 for the projection head. We set the default temperature \( \tau \) as 0.1 and the prototype update factor \( \alpha \) as 0.95. For PACS, Office-Home, and VLCS, we follow the common practice and initialize the network using ImageNet pre-trained weights. We fine-tune the network for 50 epochs. The embedding dimension is 512 for the projection head. We adopt the leave-one-domain-out evaluation protocol and use the training domain validation set for model selection (Gulrajani & Lopez-Paz, 2020), where the validation set is pooled from all training domains. Details on other hyperparameters are in Appendix D.
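The CIFAR-10 optimization setup described above corresponds to roughly the following PyTorch configuration. This is a sketch under our own assumptions (e.g., the hidden width of the projection head, which the text does not state), not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# ResNet-18 backbone trained from scratch on CIFAR-10, followed by a
# one-hidden-layer MLP projection head with a 128-d output (hidden
# width 512 is our assumption).
backbone = resnet18(num_classes=512)
projection_head = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(inplace=True), nn.Linear(512, 128)
)
model = nn.Sequential(backbone, projection_head)

optimizer = torch.optim.SGD(
    model.parameters(), lr=0.5, momentum=0.9, weight_decay=1e-4
)
# Cosine learning-rate schedule over the 500-epoch run.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=500)
```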
| Algorithm | PACS | Office-Home | VLCS | Average Acc. (%) |
|--------------------|------|-------------|------|------------------|
| ERM (Vapnik, 1999) | 85.5 | 67.6 | 77.5 | 76.7 |
| CORAL (Sun & Saenko, 2016) | 86.2 | 68.7 | 78.8 | 77.9 |
| DANN (Ganin et al., 2016) | 83.7 | 65.9 | 78.6 | 76.1 |
| MLDG (Li et al., 2018a) | 84.9 | 66.8 | 77.2 | 76.3 |
| CDANN (Li et al., 2018c) | 82.6 | 65.7 | 77.5 | 75.3 |
| MMD (Li et al., 2018b) | 84.7 | 66.4 | 77.5 | 76.2 |
| IRM (Arjovsky et al., 2019) | 83.5 | 64.3 | 78.6 | 75.5 |
| GroupDRO (Sagawa et al., 2020) | 84.4 | 66.0 | 76.7 | 75.7 |
| I-Mixup (Wang et al., 2020; Xu et al., 2020; Yan et al., 2020) | 84.6 | 68.1 | 77.4 | 76.7 |
| RSC (Huang et al., 2020) | 85.2 | 65.5 | 77.1 | 75.9 |
| ARM (Zhang et al., 2021) | 85.1 | 64.8 | 77.6 | 75.8 |
| MTL (Blanchard et al., 2021) | 84.6 | 66.4 | 77.2 | 76.1 |
| VREx (Krueger et al., 2021) | 84.9 | 66.4 | 78.3 | 76.5 |
| Mixstyle (Zhou et al., 2021) | 85.2 | 60.4 | 77.9 | 74.5 |
| SelfReg (Kim et al., 2021) | 85.6 | 67.9 | 77.8 | 77.1 |
| SagNet (Nam et al., 2021) | 86.3 | 68.1 | 77.8 | 77.4 |
| GVRT (Min et al., 2022) | 85.1 | 70.1 | 79.0 | 78.1 |
| VNE (Kim et al., 2023) | 86.9 | 65.9 | 78.1 | 77.0 |
| HYPO (Ours) | 88.0 ± 0.4 | 71.7 ± 0.3 | 78.2 ± 0.4 | 79.3 |
Table 1: Comparison with domain generalization methods on PACS, Office-Home, and VLCS. All methods use a ResNet-50 backbone, and model selection is based on a training-domain validation set. To isolate the effect of loss functions, all methods are optimized with standard SGD. We report the mean and standard error of our method; ±x denotes the rounded standard error.
5.2 Main Results and Analysis
HYPO excels on common corruption benchmarks. As shown in Figure 2, HYPO achieves consistent improvements over the ERM baseline (trained with the cross-entropy loss) on a variety of common corruptions, including Gaussian noise, snow, JPEG compression, shot noise, and zoom blur. The model is trained on CIFAR-10 without seeing any corrupted data. In particular, our method brings significant improvement on challenging cases such as Gaussian noise, raising OOD accuracy from 78.09% to 85.21% (+7.12%). Complete results on all 19 corruption types are in Appendix E.
Figure 2: Our method HYPO significantly improves the OOD generalization performance compared to ERM on various OOD datasets w.r.t. CIFAR-10 (ID). Full results can be seen in Appendix E.
HYPO establishes competitive performance on popular benchmarks. Our method delivers superior results in the popular domain generalization tasks, as shown in Table 1. HYPO outperforms an extensive collection of common OOD generalization baselines on popular domain generalization datasets, including PACS, Office-Home, VLCS. For instance, on PACS, HYPO improves the best loss-based method by 1.1%. Notably, this enhancement is non-trivial since we are not relying on specialized optimization algorithms such as SWAD (Cha et al., 2021). Later in our ablation, we show that coupling HYPO with SWAD can further boost the OOD generalization performance, establishing superior performance on this challenging task.
With multiple training domains, we observe that it is desirable to emphasize hard negative pairs when optimizing the inter-class separation. As depicted in Figure 3, the embeddings of negative pairs from the same domain but different classes (such as dog and elephant in art painting) can be quite close on the hypersphere. Therefore, it is more informative to separate such hard negative pairs. This can be enforced by a simple modification to the denominator of our variation loss (Eq. 11 in Appendix D), which we adopt for multi-source domain generalization tasks.
| Algorithm | Art painting | Cartoon | Photo | Sketch | Average Acc. (%) |
|---------------------------|--------------|---------|-------|--------|------------------|
| PCL w/ SGD (Yao et al., 2022) | 88.0 | 78.8 | 98.1 | 80.3 | 86.3 |
| HYPO w/ SGD (Ours) | 87.2 | 82.3 | 98.0 | 84.5 | **88.0** |
| PCL w/ SWAD (Yao et al., 2022) | 90.2 | 83.9 | 98.1 | 82.6 | 88.7 |
| HYPO w/ SWAD (Ours) | 90.5 | 84.6 | 97.7 | 83.2 | **89.0** |
Table 2: Results for comparing PCL and HYPO with SGD-based and SWAD-based optimizations on the PACS benchmark. (*The performance reported in the original PCL paper Table 3 is implicitly based on SWAD).
Relations to PCL. PCL (Yao et al., 2022) adapts a proxy-based contrastive learning framework for domain generalization. We highlight several notable distinctions from our work: (1) while PCL offers no theoretical insights, HYPO is guided by theory: we provide a formal justification that our method reduces intra-class variation, which is essential for bounding the OOD generalization error (see Section 6); (2) our loss formulation is different and can be rigorously interpreted as shaping vMF distributions of hyperspherical embeddings (see Section 4.2), whereas PCL's cannot; (3) unlike PCL (86.3% w/o SWAD), HYPO achieves competitive performance (88.0%) without relying on the specialized SWAD optimizer (Cha et al., 2021), a dense and overfit-aware stochastic weight sampling strategy (Izmailov et al., 2018) for OOD generalization. As shown in Table 2, we also conduct experiments in conjunction with SWAD; compared to PCL, HYPO achieves superior performance with **89.0%** accuracy, which further demonstrates its advantage.
Visualization of embeddings. Figure 4 shows the UMAP (McInnes et al., 2018) visualization of feature embeddings for ERM (left) vs. HYPO (right), extracted from models trained on PACS. The red, orange, and green points are in-distribution, corresponding to the art painting (A), photo (P), and sketch (S) domains; the violet points are from the unseen OOD domain cartoon (C). There are two salient observations: (1) for any given class, the embeddings across domains $\mathcal{E}_{\text{all}}$ become significantly more aligned (and invariant) with our method than with the ERM baseline, directly verifying the low variation (cf. Equation 2) of our learned embedding; (2) the embeddings are well separated across classes and distributed more uniformly in the space than ERM's, verifying the high inter-class separation (cf. Equation 3) of our method. Overall, these observations support the efficacy of HYPO.
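A projection like the one in Figure 4 can be produced with the umap-learn package along the following lines; the parameter values and placeholder arrays are our assumptions, not settings stated in the paper.

```python
import numpy as np
import umap
import matplotlib.pyplot as plt

# `features`: (N, d) array of embeddings; `domains`: (N,) integer
# domain labels used for coloring (random placeholders here).
features = np.random.randn(1000, 128)
domains = np.random.randint(0, 4, size=1000)

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, metric="cosine")
coords = reducer.fit_transform(features)
plt.scatter(coords[:, 0], coords[:, 1], c=domains, s=2)
plt.show()
```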
Figure 5: Intra-class variation for ERM (left) vs. HYPO (right) on PACS. For each class \( y \), we measure the Sinkhorn divergence between the embeddings of each pair of domains. Our method results in significantly lower intra-class variation across different pairs of training domains compared to ERM.

Quantitative verification of intra-class variation. We provide empirical verification of intra-class variation in Figure 5, where the model is trained on PACS. We measure intra-class variation with the Sinkhorn divergence (entropy-regularized Wasserstein distance). The horizontal axis (0)-(6) denotes different classes, and the vertical axis denotes different pairs of training domains ('P', 'A', 'S'); darker color indicates lower Sinkhorn divergence. Our method yields significantly lower intra-class variation than ERM, which aligns with our theoretical insights in Section 6.
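For reference, the Sinkhorn divergence between two domains' embeddings of a class can be computed with, e.g., the geomloss package; the blur value and placeholder tensors below are our assumptions.

```python
import torch
from geomloss import SamplesLoss

# z_a, z_b: embeddings of one class from two training domains
# (random placeholders here for illustration).
z_a, z_b = torch.randn(256, 512), torch.randn(256, 512)

sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)
divergence = sinkhorn(z_a, z_b)   # scalar tensor
```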
Additional ablation studies. Due to space constraints, we defer additional experiments and ablations to the Appendix, including (1) results on other tasks from DomainBed (Appendix F); (2) results on large-scale benchmarks such as ImageNet-100 (Appendix G); (3) ablation of different loss terms (Appendix H); (4) an analysis on the effect of \( \tau \) and \( \alpha \) (Appendix I).
6 Why HYPO Improves Out-of-Distribution Generalization?
In this section, we provide a formal justification of the loss function. Our main Theorem 6.1 gives a provable understanding of how the learning objective effectively reduces the variation estimate \( V^{\text{sup}}(h, \mathcal{E}_{\text{avail}}) \), thus directly reducing the OOD generalization error according to Theorem 3.1.
For simplicity, we assume \( \tau = 1 \) and denote the prototype vectors \( \mu_1, \ldots, \mu_C \in S^{d-1} \). Let \( \mathcal{H} \subset \{ h : X \mapsto S^{d-1} \} \) denote the function class induced by the neural network.
**Theorem 6.1 (Variation upper bound using HYPO).** Suppose the samples are aligned with their class prototypes such that \( \frac{1}{N} \sum_{j=1}^{N} \mu_{c(j)}^\top z_j \geq 1 - \epsilon \) for some \( \epsilon \in (0, 1) \). Then for any \( \delta \in (0, 1) \), with probability at least \( 1 - \delta \),
\[
V^{\text{sup}}(h, \mathcal{E}_{\text{avail}}) \leq O(\epsilon^{1/3} + (\ln(2/\delta)/N)^{1/6} + (\mathbb{E}_D[\frac{1}{N} \mathbb{E}_{\sigma_1, \ldots, \sigma_N} \sup_{h \in \mathcal{H}} \sum_{i=1}^{N} \sigma_i z_i^\top \mu_{c(i)}])^{1/3}),
\]
where \( z_j = \frac{h(x_j)}{\|h(x_j)\|_2} \), \( \sigma_1, \ldots, \sigma_N \) are Rademacher random variables and \( O(\cdot) \) suppresses dependence on constants and \( |\mathcal{E}_{\text{avail}}| \).
**Implications.** In Theorem 6.1, we can see that the upper bound consists of three factors: the optimization error, the Rademacher complexity of the given neural network, and the estimation error which becomes close to 0 as the number of samples \( N \) increases. Importantly, the term \( \epsilon \) reflects how sample embeddings are aligned with their class prototypes on the hyperspherical space (as we have \( \frac{1}{N} \sum_{j=1}^{N} \mu_{c(j)}^\top z_j \geq 1 - \epsilon \), which is directly minimized by our proposed loss in Equation 5).
The theorem above implies that training the model with the HYPO loss effectively upper-bounds the intra-class variation, a key term in bounding OOD generalization performance by Theorem 3.1. In Appendix H, we provide empirical verification of our bound by estimating \( \hat{\epsilon} \), which is indeed close to 0 for models trained with the HYPO loss. We defer proof details to Appendix C.
**Necessity of inter-class separation loss.** We further present a theoretical analysis in Appendix J explaining how our loss promotes inter-class separation, which is necessary to ensure the learnability of the OOD generalization problem. We provide a brief summary in Appendix C and discuss the notion of OOD learnability, referring readers to Ye et al. (2021) for an in-depth and formal treatment. Empirically, to verify the impact of inter-class separation, we conduct an ablation in Appendix H comparing the OOD performance of our method with and without the separation loss. Incorporating the separation loss indeed yields stronger OOD generalization performance, echoing the theory.
7 RELATED WORKS
Out-of-distribution generalization. OOD generalization is an important problem when the training and test data are sampled from different distributions. Compared to domain adaptation (Daume III & Marcu, 2006; Ben-David et al., 2010; Tzeng et al., 2017; Kang et al., 2019; Wang et al., 2022c), OOD generalization is more challenging (Blanchard et al., 2011; Muandet et al., 2013; Gulrajani & Lopez-Paz, 2020; Bai et al., 2021b; Zhou et al., 2021; Koh et al., 2021; Bai et al., 2021a; Wang et al., 2022b; Ye et al., 2022; Cha et al., 2022; Bai et al., 2023; Kim et al., 2023; Guo et al., 2023; Dai et al., 2023; Tong et al., 2023), which aims to generalize to unseen distributions without any sample from the target domain. In particular, a popular direction is to extract domain-invariant feature representation. Prior works show that the invariant features from training domains can help discover invariance on target domains for linear models (Peters et al., 2016; Rojas-Carulla et al., 2018). IRM (Arjovsky et al., 2019) and its variants (Ahuja et al., 2020; Krueger et al., 2021) aim to find invariant representation from different training domains via an invariant risk regularizer. Mahajan et al. (2021) propose a causal matching-based algorithm for domain generalization. Other lines of works have explored the problem from various perspectives such as causal discovery (Chang et al., 2020), distributional robustness (Sagawa et al., 2020; Zhou et al., 2020), model ensembles (Chen et al., 2023b; Rame et al., 2023), and test-time adaptation (Park et al., 2023; Chen et al., 2023a). In this paper, we focus on improving OOD generalization via hyperspherical learning, and provide a new theoretical analysis of the generalization error.
Theory for OOD generalization. Although the problem has attracted great interest, theoretical understanding of desirable conditions for OOD generalization is under-explored. Generalization to arbitrary OOD distributions is impossible since the test distribution is unknown (Blanchard et al., 2011; Muandet et al., 2013). Numerous general distance measures exist for defining a set of test domains around the training domain, such as KL divergence (Joyce, 2011), MMD (Gretton et al., 2006), and EMD (Rubner et al., 1998). Based on these measures, some prior works focus on analyzing the OOD generalization error bound. For instance, Albuquerque et al. (2019) obtain a risk bound for linear combinations of training domains. Ye et al. (2021) provide OOD generalization error bounds based on the notion of variation. In this work, we provide a hyperspherical learning algorithm that provably reduces the variation, thereby improving OOD generalization both theoretically and empirically.
Contrastive learning for domain generalization. Contrastive learning methods have been widely explored in different learning tasks. For example, Wang & Isola (2020) analyze the relation between the alignment and uniformity properties on the hypersphere for unsupervised learning, while we focus on supervised learning with domain shift. Tapaswi et al. (2019) investigate a contrastive metric learning approach for hyperspherical embeddings in video face clustering, which differs from our objective of OOD generalization. Von Kügelgen et al. (2021) provide theoretical justification for self-supervised learning with data augmentations. Recently, contrastive losses have been adopted for OOD generalization. For example, CIGA (Chen et al., 2022) captures the invariance of graphs to enable OOD generalization for graph data. CNC (Zhang et al., 2022) is specifically designed for learning representations robust to spurious correlations by inferring pseudo-group labels and performing supervised contrastive learning. SelfReg (Kim et al., 2021) proposes a self-supervised contrastive regularization for domain generalization with non-hyperspherical embeddings, while we focus on hyperspherical features with theoretically grounded loss formulations.
8 CONCLUSION
In this paper, we present a theoretically justified algorithm for OOD generalization via hyperspherical learning. HYPO facilitates learning domain-invariant representations in the hyperspherical space. Specifically, we encourage low variation by aligning features across domains for each class and promote high separation by separating prototypes across different classes. Theoretically, we provide a provable understanding of how our loss function reduces the OOD generalization error: minimizing our learning objective reduces the variation estimates, which determine the general upper bound on the generalization error of a learnable OOD generalization task. Empirically, HYPO achieves superior performance compared to competitive OOD generalization baselines. We hope our work can inspire future research on OOD generalization and provable understanding.
ACKNOWLEDGEMENT
The authors would like to thank ICLR anonymous reviewers for their helpful feedback. The work is supported by the AFOSR Young Investigator Program under award number FA9550-23-1-0184, National Science Foundation (NSF) Award No. IIS-2237037 & IIS-2331669, and Office of Naval Research under grant number N00014-23-1-2643.
REFERENCES
Kartik Ahuja, Karthikeyan Shanmugam, Kush Varshney, and Amit Dhurandhar. Invariant risk minimization games. In *International Conference on Machine Learning*, pp. 145–155, 2020.
Isabela Albuquerque, João Monteiro, Mohammad Darvishi, Tiago H Falk, and Ioannis Mitliagkas. Generalizing to unseen domains via distribution matching. *arXiv preprint arXiv:1911.00804*, 2019.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019.
Haoyue Bai, Rui Sun, Lanqing Hong, Fengwei Zhou, Nanyang Ye, Han-Jia Ye, S-H Gary Chan, and Zhenguo Li. Decaug: Out-of-distribution generalization via decomposed feature representation and semantic augmentation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pp. 6705–6713, 2021a.
Haoyue Bai, Fengwei Zhou, Lanqing Hong, Nanyang Ye, S-H Gary Chan, and Zhenguo Li. Nas-ood: Neural architecture search for out-of-distribution generalization. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 8320–8329, 2021b.
Haoyue Bai, Ceyuan Yang, Yinghao Xu, S-H Gary Chan, and Bolei Zhou. Improving out-of-distribution robustness of classifiers via generative interpolation. *arXiv preprint arXiv:2307.12219*, 2023.
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. *Machine Learning*, 79(1):151–175, 2010.
Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. In *Advances in Neural Information Processing Systems*, volume 24, 2011.
Gilles Blanchard, Aniket Anand Deshmukh, Ürun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. *The Journal of Machine Learning Research*, 22(1):46–100, 2021.
Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. Swad: Domain generalization by seeking flat minima. *Advances in Neural Information Processing Systems*, 34:22405–22418, 2021.
Junbum Cha, Kyungjae Lee, Sungrae Park, and Sanghyuk Chun. Domain generalization by mutual-information regularization with pre-trained models. In *European Conference on Computer Vision*, pp. 440–457, 2022.
Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. Invariant rationalization. In *International Conference on Machine Learning*, pp. 1448–1458, 2020.
Liang Chen, Yong Zhang, Yibing Song, Ying Shan, and Lingqiao Liu. Improved test-time adaptation for domain generalization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 24172–24182, 2023a.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*, 2020.
|
88FcNOwNvM
|
+ Suppose it is jointly trained: how does the network learn to decompose the image into a shadow image, object image, background image, etc.? Is there any specific constraint for learning these different properties?
|
COMPOSITIONAL IMAGE DECOMPOSITION WITH DIFFUSION MODELS
Anonymous authors
Paper under double-blind review
ABSTRACT
Given an image of a natural scene, we are able to quickly decompose it into a set of components such as objects, lighting, shadows, and foreground. We can then picture how the image would look if we were to recombine certain components with those from other images, for instance producing a scene with a set of objects from our bedroom and animals from a zoo under the lighting conditions of a forest, even if we have never seen such a scene in real life before. We present a method to decompose an image into such compositional components. Our approach, Decomp Diffusion, is an unsupervised method which, when given a single image, infers a set of different components in the image, each represented by a diffusion model. We demonstrate how components can capture different factors of the scene, ranging from global scene descriptors (e.g., shadows, foreground, facial expression) to local scene descriptors (e.g., objects). We further illustrate how inferred factors can be flexibly composed, even with factors inferred from other models, to generate a variety of scenes sharply different from those seen at training time.
1 INTRODUCTION
Humans have the remarkable ability to quickly learn new concepts, such as learning to use a new tool after observing just a few demonstrations (Allen et al., 2020). This skill relies on the ability to combine and reuse previously acquired concepts to accomplish a given task (Lake et al., 2017). This is particularly evident in natural language, where a limited set of words can be infinitely combined under grammatical rules to express various ideas and opinions (Chomsky, 1965). In this work, we propose a method to discover compositional concepts from images in an unsupervised manner, which may be flexibly combined both within and across different image modalities.
Prior works on unsupervised compositional concept discovery may be divided into two separate categories. One line of approach focuses on discovering a set of global, holistic factors by representing data points in a fixed factorized vector space (Vedantam et al., 2018; Higgins et al., 2018; Singh et al., 2019; Peebles et al., 2020). Individual factors, such as facial expression or hair color, are represented as independent dimensions of the vector space, with recombination between concepts corresponding to recombination between underlying dimensions. However, since the vector space has a fixed dimensionality, multiple instances of a single factor, such as multiple different sources of lighting, may not be easily combined. Furthermore, as the vector space has a fixed underlying structure, individual factored vector spaces from different models trained on different datasets may not be combined, e.g., the lighting direction in one dataset with the foreground of an image from another.
An alternative approach decomposes a scene into a set of different underlying “object” factors. Each individual factor represents a separate set of pixels in an image defined by a disjoint segmentation mask (Burgess et al., 2019; Locatello et al., 2020b; Monnier et al., 2021; Engelcke et al., 2021a). Composition between different factors then corresponds to composing their respective segmentation masks. However, this method struggles to model higher-level relationships between factors, as well as multiple global factors that collectively affect the same image.
Recently, COMET (Du et al., 2021a) proposes to instead decompose a scene into a set of factors represented as energy functions. Composition between factors corresponds to solving for a minimal energy image subject to each energy function. Each individual energy function can represent global concepts such as facial expression or hair color as well as local concepts such as objects. However, COMET is unstable to train due to second-order gradients, and often generates blurry images.
Figure 1: **Image Decomposition with Decomp Diffusion.** Our unsupervised method can decompose an input image into both local factors, such as objects (Left), and global factors (Right), such as facial features. Additionally, our approach can combine the deduced factors for image reconstruction.
In this paper, we leverage the close connection between Energy-Based Models (LeCun et al., 2006; Du & Mordatch, 2019) and diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) and propose Decomp Diffusion, an approach to decompose a scene into a set of factors, each represented as a separate diffusion model. Composition between factors is achieved by sampling images from a composed diffusion distribution (Liu et al., 2022; Du et al., 2023), as illustrated in Figure 1. Similar to composition between energy functions, this composition operation allows individual factors to represent both global and local concepts and further enables the recombination of concepts across different models and datasets.
However, unlike the underlying energy decomposition objective of COMET, Decomp Diffusion may directly be trained through denoising, a stable and less expensive learning objective, and leads to higher resolution images. In summary, we contribute the following: First, we present Decomp Diffusion, an approach using diffusion models to decompose scenes into a set of different compositional concepts which substantially outperforms prior work using explicit energy functions. Second, we show Decomp Diffusion is able to successfully decompose scenes into both global concepts as well as local concepts. Finally, we show that concepts discovered by Decomp Diffusion generalize well, and are amenable to compositions across different modalities of data, as well as components discovered by other instances of Decomp Diffusion.
## 2 UNSUPERVISED DECOMPOSITION OF IMAGES INTO ENERGY FUNCTIONS
In this section, we introduce background information about COMET (Du et al., 2021a), which our approach extends. COMET infers a set of latent factors from an input image, and uses each inferred latent to define a separate energy function over images. To generate an image that exhibits inferred concepts, COMET runs an optimization process over images on the sum of different energy functions.
In particular, given an image $x \in \mathbb{R}^D$, COMET uses a learned encoder $\text{Enc}_\phi(x)$ to infer a set of $K$ different latents $z_k \in \mathbb{R}^M$, where each latent $z_k$ represents a different concept in an image. Both images and latents are passed into an energy function $E_\theta(x, z_k) : \mathbb{R}^D \times \mathbb{R}^M \rightarrow \mathbb{R}$, which maps these variables to a scalar energy value.
Given a set of different factors $z_k$, decoding these factors to an image corresponds to solving the optimization problem:
$$\arg\min_x \sum_k E_\theta(x; z_k).$$
(1)
To solve this optimization problem, COMET runs an iterative gradient descent procedure from an image initialized from Gaussian noise. Factors inferred from either different images or even different models may likewise be decoded by optimizing the energy function corresponding to sum of energy function of each factor.
COMET is trained so that the $K$ factors $z_k$ inferred from an input image $x_i$ define $K$ energy functions whose minimal-energy state corresponds to the original image $x_i$:

$$L_{\text{MSE}}(\theta) = \left\| \arg\min_x \left( \sum_k E_\theta(x; z_k) \right) - x_i \right\|^2,$$
(2)
where $z_k = \text{Enc}_\phi(x_i)[k]$. The argmin of the sum of the energy functions is approximated by $N$ steps of gradient descent
$$x_i^N = x_i^{N-1} - \gamma \nabla_x \sum_k E_\theta(x_i^{N-1}; \text{Enc}_\phi(x_i)[k]),$$
(3)
where $\gamma$ is the step size. Optimizing the training objective in Equation 2 corresponds to back-propagating through this optimization objective. The resulting process is computationally expensive and unstable to train, as it requires computing second-order gradients.
3 COMPOSITIONAL IMAGE DECOMPOSITION WITH DIFFUSION MODELS
Next, we discuss how we may instead decompose images into a set of composable diffusion models. We first discuss how diffusion models may be seen as parameterizing energy functions in Section 3.1. Then in Section 3.2, we describe how we use this connection in Decomp Diffusion to decompose images into a set of composable diffusion models.
3.1 DENOISING NETWORKS AS ENERGY FUNCTIONS
Denoising Diffusion Probabilistic Models (DDPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) are a class of generative models that facilitate generation of images $x_0$ by iteratively denoising an image initialized from Gaussian noise. Given a randomly sampled noise value $\epsilon \sim \mathcal{N}(0, 1)$ and a noise level $t$ out of $T$ levels, with scaled noise $\epsilon^t = \sqrt{\beta_t} \epsilon$ added to a clean image $x_i$, a denoising model $\epsilon_\theta$ is trained to denoise the image at each noise level $t$:
$$L_{\text{MSE}} = \| \epsilon - \epsilon_\theta(\sqrt{1 - \beta_t} x_i + \sqrt{\beta_t} \epsilon, t) \|_2^2.$$
(4)
In particular, the denoising model learns to estimate a gradient field of natural images, describing the direction noisy images $x^t$ with noise level $t$ should be refined to become natural images (Ho et al., 2020). As discussed in both (Liu et al., 2022; Du et al., 2023), this gradient field also corresponds to the gradient field of an energy function
$$\epsilon_\theta(x^t, t) = \nabla_x E_\theta(x)$$
(5)
that represents the relative log-likelihood of a datapoint.
To generate an image from the diffusion model, a sample $x^T$ at noise level $T$ is initialized from Gaussian noise $\mathcal{N}(0, 1)$ and then iteratively denoised through
$$x^{t-1} = x^t - \gamma \epsilon_\theta(x^t, t) + \xi, \quad \xi \sim \mathcal{N}(0, \sigma_t^2 I),$$
(6)
where $\sigma_t^2$ is the variance according to a variance schedule and $\gamma$ is the step size\(^1\). This directly corresponds to the noisy energy optimization procedure
$$x^{t-1} = x^t - \gamma \nabla_x E_\theta(x^t) + \xi, \quad \xi \sim \mathcal{N}(0, \sigma_t^2 I).$$
(7)
The functional form of Equation 7 is very similar to Equation 3, and illustrates how sampling from a diffusion model is similar to optimizing a learned energy function $E_\theta(x)$ that parameterizes the relative negative log-likelihood of the data density.
When we train a diffusion model to recover a conditional data density that consists of a single image $x_i$, i.e., when we are autoencoding an image given an inferred intermediate latent $z$, then the denoising network directly learns an $\epsilon_\theta(x, t, z)$ that estimates gradients of an energy function $\nabla_x E_\theta(x, z)$. This energy function has minimum
$$x_i = \arg\min_x E_\theta(x, z),$$
(8)
as the highest log-likelihood datapoint will be $x_i$. The above equivalence suggests that we may directly use diffusion models to parameterize the unsupervised decomposition of images into the energy functions discussed in Section 2.
---
\(^1\)A linear decay $\frac{1}{\sqrt{1 - \beta_t}}$ is often also applied to the output $x^{t-1}$ for sampling stability.
3.2 Decompositional Diffusion Models
In COMET, given an input image \( x_i \), we are interested in inferring a set of different latent energy functions \( E_\theta(x, z_k) \) such that
\[
x_i = \arg\min_x \sum_k E_\theta(x, z_k).
\]
Using the equivalence between denoising networks and energy functions discussed in Section 3.1, we may recover the desired set of energy functions by simply learning a set of denoising functions to reconstruct an image \( x_i \) with the objective
\[
L_{MSE} = \left\| \epsilon - \sum_k \epsilon_\theta \left( \sqrt{1 - \beta_t} x_i + \sqrt{\beta_t} \epsilon, t, z_k \right) \right\|^2_2,
\]
where each individual latent \( z_k \) is inferred by a jointly learned neural network encoder \( Enc_\phi(x_i)[k] \).
Since the encoder compresses the input \( x_i \) into a set of low-dimensional latent representations \( z = \{z_1, z_2, \ldots, z_K\} \), the information bottleneck encourages each individual \( z_k \) to capture important, orthogonal information from the input, which we demonstrate corresponds to factors such as objects or attributes of a scene. The resulting objective is simpler to train than COMET's, as it requires only single-step denoising supervision and no computation of second-order gradients.
Reconstruction Training. As discussed in (Ho et al., 2020), the denoising network \( \epsilon_\theta \) may either be trained to directly estimate the starting noise \( \epsilon \) or the original image \( x_i \). These two predictions are functionally identical, as \( \epsilon \) can be directly obtained by taking a linear combination of \( x^t \) and \( x_i \).
While standard diffusion training directly predicts \( \epsilon \), we find that predicting \( x_i \) and then regressing \( \epsilon \) leads to better performance, as this training objective is more similar to autoencoder training.
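Under the noising convention of Equation 4, $x^t = \sqrt{1 - \beta_t}\, x_i + \sqrt{\beta_t}\, \epsilon$, the two parameterizations are related by a linear map, so an $x_0$-predicting network can still supply the $\epsilon$ estimate used for sampling. A sketch of this conversion (our own helper, not the authors' code):

```python
import torch

def eps_from_x0(x_t, x0_pred, beta_t):
    """Recover the implied noise estimate from an x0 prediction by
    inverting x_t = sqrt(1 - beta_t) * x0 + sqrt(beta_t) * eps.
    `beta_t` is expected as a (broadcastable) tensor."""
    return (x_t - torch.sqrt(1 - beta_t) * x0_pred) / torch.sqrt(beta_t)
```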
Once we have recovered these denoising functions, we may directly use the noisy optimization objective in Equation 7 to sample from compositions of different factors. The full training and sampling algorithm for our approach are shown in Algorithm 1 and Algorithm 2 respectively.
### Algorithm 1 Training Algorithm
1: **Input:** Encoder \( Enc_\phi \), denoising model \( \epsilon_\theta \), components \( K \), data distribution \( p_D \)
2: **while** not converged **do**
3: \( x_i \sim p_D \)
4: \( \triangleright \) Extract components \( z_k \) from \( x_i \)
5: \( z_1, \ldots, z_K \leftarrow Enc_\phi(x_i) \)
6: \( \triangleright \) Compute denoising direction
7: \( \epsilon \sim N(0, 1), t \sim \text{Unif}([1, \ldots, T]) \)
8: \( x^t_i = \sqrt{1 - \beta_t} x_i + \sqrt{\beta_t} \epsilon \)
9: \( \epsilon_{\text{pred}} \leftarrow \sum_k \epsilon_\theta(x^t_i, t, z_k) \)
10: \( \triangleright \) Optimize objective \( L_{MSE} \) wrt \( \theta \):
11: \( \Delta \theta \leftarrow \nabla_\theta \| \epsilon_{\text{pred}} - \epsilon \|^2 \)
12: **end while**
### Algorithm 2 Image Generation Algorithm
1: **Input:** Diffusion steps \( T \), denoising model \( \epsilon_\theta \), latent vectors \( \{z_1, \ldots, z_K\} \), step size \( \gamma \)
2: \( x^T_i \sim N(0, 1) \)
3: **for** \( t = T, \ldots, 1 \) **do**
4: \( \triangleright \) Sample Gaussian noise
5: \( \xi \sim N(0, 1) \)
6: \( \triangleright \) Compute denoising direction
7: \( \epsilon_{\text{pred}} \leftarrow \sum_k \epsilon_\theta(x^t_i, t, z_k) \)
8: \( \triangleright \) Run noisy gradient descent
9: \( x^{t-1} = \frac{1}{\sqrt{1 - \beta_t}} (x^t_i - \gamma \epsilon_{\text{pred}} + \sqrt{\beta_t} \xi) \)
10: **end for**
11: **return** \( x^0 \)
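Algorithms 1 and 2 translate into roughly the following PyTorch sketch; the module interfaces, tensor shapes, and the zeroing of noise at the final step are our own assumptions, not the authors' released implementation.

```python
import torch

def train_step(encoder, denoiser, x, betas, optimizer, K, T):
    """One step of Algorithm 1: denoise with the sum of the K
    component-conditioned noise predictions."""
    z = encoder(x)                                   # (B, K, latent_dim)
    t = torch.randint(1, T + 1, (x.shape[0],), device=x.device)
    beta_t = betas[t - 1].view(-1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = torch.sqrt(1 - beta_t) * x + torch.sqrt(beta_t) * eps
    eps_pred = sum(denoiser(x_t, t, z[:, k]) for k in range(K))
    loss = ((eps_pred - eps) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def compose(denoiser, zs, shape, betas, gamma, T):
    """Algorithm 2: generate an image exhibiting every latent in `zs`,
    which may be gathered from different inputs or even models."""
    x = torch.randn(shape)
    for t in range(T, 0, -1):
        beta_t = betas[t - 1]
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps_pred = sum(denoiser(x, t_batch, z) for z in zs)
        xi = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        # Noisy gradient descent step with the 1/sqrt(1 - beta_t) rescaling.
        x = (x - gamma * eps_pred + torch.sqrt(beta_t) * xi) \
            / torch.sqrt(1 - beta_t)
    return x
```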
4 Experiments
In this section, we begin by comparing our approach for decomposing images with prior approaches. We evaluate how effectively each method decomposes individual components from images, representing both local and global factors in the scenes. As factors are discovered in an unsupervised manner, we name each factor based on visual inspection and provide an extensive set of examples to aid visualization of each factor. We further assess the quality of image reconstruction as well as the underlying disentanglement of factors. Furthermore, we demonstrate the recombination of individual components to generate novel combinations, both within and across different image datasets.
4.1 Quantitative Metrics
For quantitative evaluation of image quality and disentanglement, we employ the following metrics:
Figure 3: **Global Factor Decomposition.** Our method can enable global factor decomposition and reconstruction on CelebA-HQ (Left) and Virtual KITTI 2 (Right). Note that we name inferred concepts for easy understanding.
Figure 4: **Global Factor Recombination.** Recombination of inferred factors on Falcor3D and CelebA-HQ datasets. In Falcor3D (Left), we show image variations by varying inferred factors such as lighting intensity. In CelebA-HQ (Right), we recombine factors from two different inputs to generate novel face combinations.
**FID** (Heusel et al., 2017). Fréchet Inception Distance (FID) measures the quality of generative models based on the feature similarity between generated images and ground truth images, where image features are extracted using a pre-trained Inception model (Szegedy et al., 2016).
**KID** (Binkowski et al., 2018). Kernel Inception Distance (KID) is an enhanced version of FID that performs well even when the number of generated samples is limited. While FID is sensitive to the number of generated samples, KID exhibits better behavior in such cases.
**LPIPS** (Zhang et al., 2018). LPIPS is a perceptual metric that measures the similarity of images by computing their distances in the feature space. A lower LPIPS score indicates a higher similarity.
**MIG** (Chen et al., 2018). The Mutual Information Gap (MIG) measures disentanglement quality using the mutual information between a latent variable and a ground truth factor.
**MCC** (Hyvärinen & Morioka, 2016). The Mean Correlation Coefficient (MCC) is another disentanglement quantitative evaluation. It matches each latent with a desired ground truth factor using the correlation matrix between the ground truth factors and latent representations.
To evaluate the quality of reconstructed images, we use FID, KID, and LPIPS on images reconstructed from CelebA-HQ (Karras et al., 2017), Falcor3D (Nie et al., 2020), Virtual KITTI 2 (Cabon et al., 2020), and CLEVR (Johnson et al., 2017). Following COMET (Du et al., 2021a), we evaluate the disentanglement of the learned latent representations using both MIG and MCC on the Falcor3D dataset.
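As a concrete example, these metrics can be computed with off-the-shelf packages such as torchmetrics and lpips; the sketch below, including the sample counts and subset size, reflects our own assumptions about the evaluation setup rather than the paper's exact protocol.

```python
import lpips
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
kid = KernelInceptionDistance(subset_size=50)
lpips_fn = lpips.LPIPS(net="alex")

# real, fake: (N, 3, H, W) uint8 image batches (placeholders here).
real = torch.randint(0, 256, (100, 3, 64, 64), dtype=torch.uint8)
fake = torch.randint(0, 256, (100, 3, 64, 64), dtype=torch.uint8)

for metric in (fid, kid):
    metric.update(real, real=True)
    metric.update(fake, real=False)
print(fid.compute(), kid.compute())

# LPIPS expects float images scaled to [-1, 1].
real_f = real.float() / 127.5 - 1.0
fake_f = fake.float() / 127.5 - 1.0
print(lpips_fn(real_f, fake_f).mean())
```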
### 4.2 Global Factors
Given a set of input images, we illustrate that our unsupervised approach can capture a set of global scene descriptors such as lighting and background, and recombine them to construct image variations. We evaluate results in terms of image quality and disentanglement of global components.
**Decomposition and Reconstruction.** On the left-hand side of Figure 3, we show how our approach decomposes CelebA-HQ face images into a set of factors, including facial features, hair color, skin tone, and hair shape, each named based on qualitative visualization. In addition, we compare our method on image reconstruction with existing baselines in Figure 5. Our method generates better reconstructions than COMET and other recent baselines: images are sharper and more similar to the input.
| Model | CelebA-HQ (FID ↓ / KID ↓ / LPIPS ↓) | Falcor3D (FID ↓ / KID ↓ / LPIPS ↓) | Virtual KITTI 2 (FID ↓ / KID ↓ / LPIPS ↓) | CLEVR (FID ↓ / KID ↓ / LPIPS ↓) |
|---|---|---|---|---|
| β-VAE (β = 4) | 107.29 / 0.107 / 0.239 | 116.96 / 0.124 / 0.075 | 196.68 / 0.181 / 0.479 | 316.64 / 0.083 / 0.651 |
| MONet | 35.27 / 0.030 / 0.098 | 69.49 / 0.067 / 0.082 | 67.92 / 0.043 / 0.154 | 60.74 / 0.063 / 0.118 |
| COMET | 62.41 / 0.056 / 0.134 | 46.38 / 0.040 / 0.032 | 124.57 / 0.091 / 0.342 | 103.84 / 0.119 / 0.141 |
| Slot Attention | 50.41 / 0.050 / 0.154 | 60.31 / 0.031 / 0.079 | 142.03 / 0.119 / 0.207 | 27.43 / 0.026 / 0.031 |
| Hessian Penalty | 34.90 / 0.021 / – | 322.45 / 0.479 / – | 116.91 / 0.084 / – | 25.40 / 0.016 / – |
| GENESIS-V2 | 41.64 / 0.035 / 0.132 | 130.56 / 0.130 / 0.097 | 134.31 / 0.105 / 0.202 | 318.46 / 0.403 / 0.631 |
| Ours | 16.48 / 0.013 / 0.089 | 14.18 / 0.008 / 0.028 | 21.59 / 0.008 / 0.058 | 11.49 / 0.011 / 0.012 |
Table 1: Image Reconstruction Evaluation. We evaluate the quality of 64 × 64 reconstructed images using FID, KID and LPIPS on 10,000 images from 4 different datasets. Our method achieves the best performance.
On the right side of Figure 3, we also demonstrate that Decomp Diffusion can be applied to infer factors such as shadow, lighting, landscape, and objects on Virtual KITTI 2. We can further compose these factors to reconstruct the input images, as illustrated in the rightmost column. Comparative decompositions from other methods can be found in Figure XVIII.
We further provide qualitative results to demonstrate the impact of number of concepts $K$ on both CelebA-HQ and Falcor3D in Figure XVI and Figure XVII. As expected, different $K$ can lead to different sets of decomposed concepts being produced, but certain concepts are learned across different $K$, such as the facial features concepts in Figure XVII.
Recombination. In Figure 4, we provide additional insights into each captured factor by recombining the decomposed factors from both the Falcor3D and CelebA-HQ datasets. On the left-hand side, we demonstrate how recombination can be performed on a source image by varying a target factor, such as lighting intensity, while preserving the other factors. This enables us to generate image variations using inferred factors such as lighting intensity, camera position, and lighting position.
On the right-hand side of Figure 4, we further show how factors extracted from different human faces can be recombined to generate a novel human face that exhibits selected global factors. For instance, we can combine facial features from one person with hair shape from another to create a new face that exhibits the chosen properties. These results illustrate that our method can effectively disentangle images into global factors that can be recombined for novel generalization.
Quantitative results. To quantitatively compare the methods, we first evaluate their disentanglement on the Falcor3D dataset. As shown in Table 3, Decomp Diffusion (dim = 64) achieves the best scores across disentanglement metrics, showing its effectiveness in capturing a set of global scene descriptors. In addition, we evaluate our models with latent dimensions of 32, 64, and 128 to gauge the impact of the latent dimension, and find that our method performs best with a dimension of 64. We posit that a smaller dimension may lack the capacity to encode all the information, leading to worse disentanglement, while a larger dimension may fail to separate distinct factors. Thus, we apply PCA to project the output dimension 128 down to 64 (last row) and observe that this boosts the MIG score but lowers the MCC score.
Finally, we evaluate the visual quality of reconstructed images using the decomposed scene factors, as presented in Table 1. We observe that our method outperforms existing methods in terms of FID, KID and LPIPS across datasets, indicating superior image reconstruction quality.
Diffusion Parameterizations. We next analyze two choices of diffusion parameterizations, i.e., whether the model should predict $x_0$ or the noise $\epsilon$, in Table 2. We find that directly predicting the input $x_0$ (3rd and 6th rows) outperforms the $\epsilon$ parameterization (1st and 4th rows) on both CelebA-HQ and CLEVR in terms of MSE and LPIPS (Zhang et al., 2018). This is due to the reconstruction-based training procedure discussed in Section 3.2. We also compare using a single component for reconstruction (2nd and 5th rows) with our method (3rd and 6th rows), which uses multiple components; our method achieves the best reconstruction quality, as measured by MSE and LPIPS.
| Dataset | Multiple Components | Predict $x_0$ | MSE ↓ | LPIPS ↓ | FID ↓ | KID ↓ |
|---------------|---------------------|--------------|-------|---------|-------|-------|
| CelebA-HQ | Yes | No | 105.003| 0.603 | 155.46| 0.141 |
| | No | Yes | 88.551| 0.192 | 30.10 | 0.022 |
| | Yes | Yes | 76.168| 0.089 | 16.48 | 0.013 |
| CLEVR | Yes | No | 56.179| 0.3061 | 42.72 | 0.033 |
| | No | Yes | 26.094| 0.2236 | 24.27 | 0.023 |
| | Yes | Yes | 6.178 | 0.0122 | 11.54 | 0.010 |
Table 2: Ablations. We analyze the impact of predicting $x_0$ or $\epsilon$, as well as using multiple components or a single component. We compute pixel-wise MSE and LPIPS of reconstructions on both CLEVR and CelebA-HQ.
| Model | Dim (D) | $\beta$ | Decoder Dist. | MIG ↑ | MCC ↑ |
|---------------|---------|---------|---------------|---------|---------|
| InfoGAN | 64 | – | – | 2.48 ± 1.11 | 52.67 ± 1.91 |
| $\beta$-VAE | 64 | 4 | Bernoulli | 8.96 ± 3.53 | 61.57 ± 4.09 |
| $\beta$-VAE | 64 | 16 | Gaussian | 9.33 ± 3.72 | 57.28 ± 2.37 |
| $\beta$-VAE | 64 | 4 | Gaussian | 10.90 ± 3.80 | 66.08 ± 2.00 |
| GENESIS-V2* | 128 | – | – | 5.23 ± 0.02 | 63.83 ± 0.22 |
| MONet | 64 | – | – | 13.94 ± 2.09 | 65.72 ± 0.89 |
| COMET | 64 | – | – | 19.63 ± 2.49 | 76.55 ± 1.35 |
| Ours | 32 | – | – | 11.72 ± 0.05 | 57.67 ± 0.09 |
| Ours | 64 | – | – | **26.45 ± 0.16** | **80.42 ± 0.08** |
| Ours | 128 | – | – | 12.97 ± 0.02 | 80.27 ± 0.17 |
| Ours* | 128 | – | – | 16.57 ± 0.02 | 71.19 ± 0.15 |
Table 3: Disentanglement Evaluation. Mean and standard deviation (s.d.) metric scores across 3 random seeds on the Falcor3D dataset. Decomp Diffusion enables better disentanglement according to 2 common disentanglement metrics. The asterisk (*) indicates PCA is applied to project the output dimension to 64.
Figure 6: Local Factor Decomposition. Illustration of object-level decomposition on CLEVR (left) and Tetris (right). Our method can extract individual object components that can be reused for image reconstruction.
Figure 7: Local Factor Recombination. We recombine local factors from 2 images to generate composition of inferred object factors. On both CLEVR and Tetris (Left), we recombine inferred object components in the bounding box to generate novel object compositions. On CLEVR (Right), we compose all inferred factors to generalize up to 8 objects, though training images only contain 4 objects.
4.3 LOCAL FACTORS
Given an input image with multiple objects, e.g., a purple cylinder and a green cube, we aim to factorize the input image into individual object components, akin to object-level segmentation.
Decomposition and Reconstructions. We qualitatively evaluate local factor decomposition on object datasets such as CLEVR and Tetris in Figure 6. Given an image with multiple objects, our method can isolate each individual object component, and can also faithfully reconstruct the input image using the set of decomposed object factors. Note that since our method does not obtain an explicit segmentation mask per object, it is difficult to quantitatively assess segmentations (but we found our approach to almost always correctly segment objects).
Recombination. To further validate our approach, we present qualitative results showcasing the recombination of captured local factors from different input images to generate previously unseen image combinations. In Figure 7, we demonstrate how our method utilizes a subset of factors from each image for local factor recombination. On the left-hand side, we show the generation of novel object combinations using factorized energy functions representing individual local object components from two inputs, shown within the bounding boxes, on both the CLEVR and Tetris datasets. On the right-hand side, we demonstrate how our method can recombine all existing local components from two CLEVR images, even though each training image only consists of 4 objects.
Figure 8: **Multi-modal Dataset Decomposition.** We show our method can capture a set of global factors that are shared between hybrid datasets such as KITTI and Virtual KITTI 2 scenes (**Left**), and CelebA-HQ and Anime faces (**Right**). Note that we name inferred concepts for better understanding.
Figure 9: **Multi-modal Dataset Recombination.** Our method exhibits the ability to recombine inferred factors from various hybrid datasets. We can recombine different extracted factors to generate unique compositions of KITTI and Virtual KITTI 2 scenes (**Top**), and compositions of CelebA-HQ and Anime faces (**Bottom**).
Thus, our method generalizes well to novel combinations of 8 object components. We illustrate that our approach is highly effective at recombining local factors to create novel image combinations.
4.4 Cross Dataset Generalization
We next assess the ability of our approach to extract and combine concepts across multiple datasets. We investigate the recombination of factors in multi-modal datasets, and the combination of separate factors from distinct models trained on different datasets.
**Multi-modal Decomposition and Reconstruction.** We evaluate our proposed method’s efficacy in decomposing multi-modal datasets into a set of factors. Because such datasets comprise images from different modalities, they pose a challenge for extracting a common set of factors. However, as shown in Figure 8, our method successfully performs this task. The left-hand side exhibits the decomposition of images from a hybrid dataset comprising KITTI and Virtual KITTI 2 into a set of global factors, such as background, lighting, and shadows. The right-hand side decomposes the two types of faces into a cohesive set of global factors including face shape, hair shape, hair color, and facial details, which can be utilized for reconstruction. This demonstrates our method’s effectiveness in factorizing hybrid datasets into a set of factors.
**Multi-modal Recombination.** Furthermore, we assess the ability of our proposed method to recombine obtained factors across multi-modal datasets, as illustrated in Figure 9. In the top half, from the hybrid KITTI and Virtual KITTI 2 dataset, we recombine extracted factors from two distinct images to produce novel KITTI-like scenes, for instance incorporating a blue sky background with shadows in the foreground. In the bottom half, we present our method’s capacity for reusing and combining concepts to generate unique anime faces. Specifically, we combine hair shapes and colors from a human face image with face shape and details from an anime face image, resulting in novel anime-like faces.
**Cross Dataset Recombination.** Given two trained model instances, one trained on CLEVR objects and the other on CLEVR Toy objects, we investigate how to combine local factors extracted from the different modalities to generate novel combinations. In Figure 10, our method extracts the object components in the bounding boxes from two images from different datasets and combines them to generate unseen combinations of object components from different models. In Table V, we provide the FID and KID scores of the generated recombinations against the original CLEVR and CLEVR Toy datasets. Our method outperforms COMET on both datasets, indicating better visual quality and more cohesive recombinations.
Figure 10: **Cross Dataset Recombination.** We further showcase our method’s ability to recombine across datasets using 2 different models that train on CLEVR and CLEVR Toy, respectively. We compose inferred factors as shown in the bounding box from two different modalities to generate unseen compositions.
5 RELATED WORK
**Compositional Generation.** An increasing body of recent work has studied compositional generation (Du et al., 2020; Liu et al., 2021; 2022; Wu et al., 2022; Shi et al., 2023; Cong et al., 2023; Cho et al., 2023; Du et al., 2023; Huang et al., 2023; Nie et al., 2021; Wang et al., 2023; Gandikota et al., 2023), where we seek to generate outputs subject to a set of different conditions. Existing work on compositional generation focuses either on modifying the underlying generative process to handle a set of specifications (Feng et al., 2022; Shi et al., 2023; Cong et al., 2023; Huang et al., 2023), or on composing a set of independent models specifying desired constraints (Du et al., 2020; Liu et al., 2021; 2022; Nie et al., 2021; Du et al., 2023; Wang et al., 2023). Similar to Du et al. (2021b), our work aims to discover a set of compositional components from an unlabeled dataset of images, which may further be integrated with the compositional operations of Du et al. (2023) and Liu et al. (2022).
**Unsupervised Decomposition.** Our work is related to existing research on unsupervised decomposition, where works have studied how to obtain global factor disentanglement (Higgins et al., 2017; Burgess et al., 2018; Locatello et al., 2020a; Klindt et al., 2021; Peebles et al., 2020; Singh et al., 2019). These approaches typically focus on discovering a global latent space which best describes the input space, with prior work in (Preechakul et al., 2022) also exploring this global latent space on diffusion models. Our approach aims instead to decompose data into multiple different compositional vector spaces, which allow us to both compose multiple instances of one factor together, as well as compose factors across different datasets. The most similar work in this direction is COMET (Du et al., 2021a), but unlike COMET we decompose images into a set of different diffusion models, and illustrate how this enables higher fidelity and more scalable image decomposition.
Our work is also related to the field of unsupervised object discovery (Burgess et al., 2019; Greff et al., 2019; Locatello et al., 2020b; Lin et al., 2020; Engelcke et al., 2021a; Du et al., 2021b; Singh et al., 2022; Kipf et al., 2022; Seitzer et al., 2022; Jia et al., 2023), which seeks to decompose a scene into a set of different objects. Developed concurrently with our approach, Jiang et al. (2023) and Wu et al. (2023) propose to decompose images into a set of object-centric diffusion models. Separate from these works, our approach does not assume an explicit decomposition of images into segmented components, enabling it to represent both objects and global factors in a scene, drawing on the connection between diffusion models and EBMs.
6 CONCLUSION
**Limitations.** Our work has several limitations. First, our current approach decomposes images into a fixed number of factors specified by the user. While there are cases where the number of components is apparent, in many datasets the number is unclear and may vary from image to image. In Section C, we study the sensitivity of our approach to the number of components specified and find that we recover duplicate components when the number is too large, and subsets of components when it is too small. We believe a principled approach to determining the number of factors is an interesting direction for future work. In addition, the factors discovered by our approach are not guaranteed to be distinct from the original image or from each other, and if the latent encoder’s embedding dimension is too large, each latent factor may capture the original image itself. Adding explicit regularization to enforce independence between latents would be interesting future work.
**Conclusion.** In this work, we present Decomp Diffusion and demonstrate its efficacy at decomposing images into both global factors of variation, such as facial expression, lighting, and background, and local factors, such as constituent objects. We further illustrate the ability of different inferred components to compose across multiple datasets and models. We hope that our work inspires future research in unsupervised discovery of compositional representations in images.
REFERENCES
Kelsey R. Allen, Kevin A. Smith, and Joshua B. Tenenbaum. Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning. *Proceedings of the National Academy of Sciences*, 117(47):29302–29310, 2020. ISSN 0027-8424. doi: 10.1073/pnas.1912341117. URL https://www.pnas.org/content/117/47/29302.
Mikołaj Bińkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. *arXiv preprint arXiv:1801.01401*, 2018.
Gwern Branwen, Anonymous, and Danbooru Community. Danbooru2019 portraits: A large-scale anime head illustration dataset, 2019.
Christopher P Burgess, Irina Higgins, Arka Pal, Loïc Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in beta-vae. *arXiv preprint arXiv:1804.03599*, 2018.
Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. Monet: Unsupervised scene decomposition and representation. *arXiv:1901.11390*, 2019.
Yohann Cabon, Naila Murray, and Martin Humenberger. Virtual kitti 2. *arXiv preprint arXiv:2001.10773*, 2020.
Tian Qi Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating sources of disentanglement in variational autoencoders. *arXiv:1802.04942*, 2018.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In *NeurIPS*, 2016.
Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Khuc, Krishna Kumar Singh, Jingwan Lu, David I. Inouye, and Ajinkya Kale. Towards enhanced controllability of diffusion models, 2023. URL https://arxiv.org/abs/2302.14368.
Noam Chomsky. *Aspects of the Theory of Syntax*. The MIT Press, Cambridge, 1965. URL http://www.amazon.com/Aspects-Theory-Syntax-Noam-Chomsky/dp/0262530074.
Yuren Cong, Martin Renqiang Min, Li Erran Li, Bodo Rosenhahn, and Michael Ying Yang. Attribute-centric compositional text-to-image generation. *arXiv preprint arXiv:2301.01413*, 2023.
Yilun Du and Igor Mordatch. Implicit generation and generalization in energy-based models. *arXiv preprint arXiv:1903.08689*, 2019.
Yilun Du, Shuang Li, and Igor Mordatch. Compositional visual generation with energy based models. In *Advances in Neural Information Processing Systems*, 2020.
Yilun Du, Shuang Li, Yash Sharma, B. Joshua Tenenbaum, and Igor Mordatch. Unsupervised learning of compositional energy concepts. In *Advances in Neural Information Processing Systems*, 2021a.
Yilun Du, Kevin A. Smith, Tomer Ullman, Joshua B. Tenenbaum, and Jiajun Wu. Unsupervised discovery of 3d physical objects. In *International Conference on Learning Representations*, 2021b. URL https://openreview.net/forum?id=lf7st0BJIA5.
Yilun Du, Conor Durkan, Robin Strudel, Joshua B Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Grathwohl. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc. *arXiv preprint arXiv:2302.11552*, 2023.
Martin Engelcke, Oiwi Parker Jones, and Ingmar Posner. Genesis-v2: Inferring unordered object representations without iterative refinement. *Advances in Neural Information Processing Systems*, 34:8085–8094, 2021a.
|
h6Tz85BqRI
|
In Table 4, Only-VQ outperforms Class-based and AE+Class-based, yet Only-VQ only adopts class soft labels while the VQ component merely helps train the codebook. Why is this approach better than the other two? Can you provide some discussion of this?
|
VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs
Ling Yang1 Ye Tian1* Minkai Xu3 Zhongyi Liu2 Shenda Hong1 Wei Qu2 Wentao Zhang1 Bin Cui1† Muhan Zhang1† Jure Leskovec3
1Peking University 2Ant Group 3Stanford University
yangling0818@163.com, tyfeld@stu.pku.edu.cn,
{zhongyi.lzy, qingze.qw}@antgroup.com, {minkai, jure}@cs.stanford.edu
{hongshenda, wentao.zhang, bin.cui, muhan}@pku.edu.cn
Abstract
GNN-to-MLP distillation aims to utilize knowledge distillation (KD) to learn a computationally-efficient multi-layer perceptron (student MLP) on graph data by mimicking the output representations of a teacher GNN. Existing methods mainly make the MLP mimic the GNN predictions over a few class labels. However, the class space may not be expressive enough to cover the numerous diverse local graph structures, thus limiting the performance of knowledge transfer from GNN to MLP. To address this issue, we propose to learn a new, powerful graph representation space by directly labeling nodes’ diverse local structures for GNN-to-MLP distillation. Specifically, we propose a variant of VQ-VAE (Van Den Oord et al., 2017) to learn a structure-aware tokenizer on graph data that can encode each node’s local substructure as a discrete code. The discrete codes constitute a codebook as a new graph representation space that is able to identify different local graph structures of nodes with the corresponding code indices. Then, based on the learned codebook, we propose a new distillation target, namely soft code assignments, to directly transfer the structural knowledge of each node from GNN to MLP. The resulting framework VQGraph achieves new state-of-the-art performance on GNN-to-MLP distillation in both transductive and inductive settings across seven graph datasets. We show that, besides its better performance, VQGraph infers 828× faster than GNNs, and achieves accuracy improvements over GNNs and stand-alone MLPs of 3.90% and 28.05% on average, respectively. Our code is available at https://github.com/YangLing0818/VQGraph
1 Introduction
Graph Neural Networks (GNNs) (Yang & Hong, 2022; Li et al., 2020a; Perozzi et al., 2014; Xu et al., 2019; Morris et al., 2019; Yang et al., 2020; Chen et al., 2020b) have been widely used due to their effectiveness in dealing with non-Euclidean structured data, and have achieved remarkable performances in various graph-related tasks (Hamilton et al., 2017; Kipf & Welling, 2017; Veličković et al., 2018). Modern GNNs rely on message passing mechanism to learn node representations (Yang et al., 2020). GNNs have been especially important for recommender systems (Fan et al., 2019; He et al., 2020; Wu et al., 2022; Xiao et al., 2023; Zhang et al., 2024), fraud detection (Dou et al., 2020; Liu et al., 2021; Yang et al., 2023), and information retrieval (Li et al., 2020b; Mao et al., 2020). Numerous works (Pei et al., 2020b) focus on exploring more effective ways to leverage informative neighborhood structure for improving GNNs (Park et al., 2021; Zhu et al., 2021; Zhao et al., 2022; Tang et al., 2022; Lee et al., 2021; Chien et al., 2022; Abu-El-Haija et al., 2019).
It is challenging to scale GNNs to large-scale applications which are constrained by latency and require fast inference (Zhang et al., 2020; 2022a; Jia et al., 2020), because message passing necessitates fetching topology and features of many neighbor nodes for inference on a target node, which
*Contributed equally.
†Corresponding authors.
Figure 1: The t-SNE visualization of the learned graph representation space in two kinds of teacher GNNs: (a) previous SOTA “class-based” NOSMOG (Tian et al., 2023b) and (b) our “structure-based” VQGraph. “Class-based” denotes learning with class labels, and “structure-based” denotes learning with our local structure reconstruction. Our learned space is more compact. We here provide both class labels and our structure labels along with illustrative substructures for demonstration.
is time-consuming and computation-intensive. Multi-layer perceptrons (MLPs) are efficient alternatives to deploy on graphs that only depend on node features, without the need for explicit message passing (Zhang et al., 2022b). Thus, recent methods use knowledge distillation (Hinton et al., 2015; Tian et al., 2023a; Gou et al., 2021; Yuan et al., 2020; Zhou & Song, 2021) to transfer the learned structural knowledge from GNNs to MLPs (Zhang et al., 2022b; Zheng et al., 2022; Tian et al., 2023b), which build statistical associations between node features and class labels by making the MLP mimic the (well-trained) GNN’s predictions. Then only the MLPs are deployed for inference, and they can also perform well on real-world graphs.
Despite some progress, current GNN-to-MLP distillation methods share a fundamental issue: the graph representation space of the teacher GNN is mainly learned from a few class labels, and the class space may not be expressive enough to cover the numerous diverse local graph structures of nodes, limiting distillation performance. We illustrate this problem by using t-SNE (Van der Maaten & Hinton, 2008) to visualize the graph representation space in Figure 1. We observe that the graph representation space of the previous teacher GNN is not expressive enough to identify fine-grained local structural differences between nodes of the same class, which may limit the structural knowledge transfer from GNN to MLP.
Here we introduce a new powerful graph representation space for bridging GNNs and MLPs by directly labeling diverse nodes’ local structures. Specifically, we propose a variant of VQ-VAE (Van Den Oord et al., 2017) to learn a structure-aware tokenizer on graph data that can encode each node with its substructure as a discrete code. The numerous codes constitute a codebook as our new graph representation space that is able to identify different local neighborhood structures of nodes with the corresponding code indices. As demonstrated in Figure 1, our learned representation space is more expressive and can identify subtle differences between nodes’ local structures. Based on the learned codebook, we can effectively facilitate the structure-based distillation by maximizing the consistency of soft code assignments between GNN and MLP models, given by the KL divergence between GNN predictions and MLP predictions over the discrete codes of the codebook.
We highlight our main contributions as follows: (i) To the best of our knowledge, we for the first time directly learn to label nodes’ local neighborhood structures to acquire a powerful node representation space (i.e., a codebook) for bridging GNNs and MLPs. (ii) Based on the learned codebook, we utilize a new distillation target with soft code assignments to effectively facilitate the structure-aware knowledge distillation. We further conduct both visualization and statistical analyses for better understanding with respect to our superior local and global structure awareness for GNN-to-MLP distillation. (iii) Extensive experiments across seven datasets show VQGRAPH can consistently outperform GNNs by 3.90% on average accuracy, while enjoying 828× faster inference speed. Also VQGRAPH outperforms MLPs and SOTA distillation method NOSMOG (Tian et al., 2023b) by 28.05% and 1.39% on average accuracy across datasets, respectively.
2 RELATED WORK
Inference Acceleration for GNNs Pruning (Zhou et al., 2021) and quantizing GNN parameters (Zhao et al., 2020) have been studied for inference acceleration (Chen et al., 2016; Judd et al., 2016).
Although these approaches accelerate GNN inference to a certain extent, they do not eliminate the neighbor-fetching latency. Graph-MLP (Hu et al., 2021) proposes to bypass GNN neighbor fetching by learning a computationally-efficient MLP model with a neighbor contrastive loss, but its paradigm is only transductive and cannot be applied in the more practical inductive setting. Besides, some works try to speed up GNNs in the training stage from the perspective of node sampling (Zou et al., 2019; Chen et al., 2018c), which is complementary to our goal of inference acceleration.
Knowledge Distillation for GNNs Existing GNN-based knowledge distillation methods distill teacher GNNs into smaller student GNNs (GNN-GNN distillation) or into MLPs (GNN-to-MLP distillation). Regarding GNN-GNN distillation, LSP (Yang et al., 2021c), TinyGNN (Yan et al., 2020), GFKD (Deng & Zhang, 2021), and GraphSAIL (Xu et al., 2020) conduct KD by enabling the student GNN to maximally preserve local information that exists in the teacher GNN. The student in CPF (Yang et al., 2021b) is not a GNN, but it is still heavily graph-dependent as it uses label propagation (LP) (Zhu & Ghahramani, 2002; Huang et al., 2021). Thus, these methods still require time-consuming neighbor fetching. To address these latency issues, recent works focus on GNN-to-MLP distillation that does not require message passing (Hu et al., 2021; Zhang et al., 2022b; Zheng et al., 2022; Tian et al., 2023b). For example, the recent SOTA methods GLNN (Zhang et al., 2022b) and NOSMOG (Tian et al., 2023b) train the student MLP with node features as inputs and class predictions from the teacher GNN as targets. However, class predictions over a few labels, as their distillation targets, cannot sufficiently express the structural knowledge of graphs, as discussed in Sec. 1. Hence, we for the first time propose to directly label nodes’ local neighborhood structures to facilitate structure-aware knowledge distillation.
3 Preliminaries
Notation and Graph Neural Networks We denote a graph as \( G = (V, A) \), where \( V = \{v_1, v_2, \cdots, v_N\} \) represents all nodes and \( A \) denotes the adjacency matrix, with \( A_{i,j} = 1 \) if nodes \( v_i \) and \( v_j \) are connected, and 0 otherwise. \( N \) denotes the total number of nodes. \( X \in \mathbb{R}^{N \times D} \) represents the node feature matrix, with each row being the \( D \)-dimensional attribute vector of a node \( v \). For node classification, the prediction targets are \( Y \in \mathbb{R}^{N \times K} \), where row \( y_v \) is a \( K \)-dim one-hot vector for node \( v \). For a given \( G \), usually a small portion of nodes will be labeled, which we mark using the superscript \( L \), i.e., \( V^L, X^L \) and \( Y^L \). The majority of nodes will be unlabeled, which we mark using the superscript \( U \), i.e., \( V^U, X^U \) and \( Y^U \). For a given node \( v \in V \), GNNs aggregate the messages from its neighbors \( N(v) \) to learn a node embedding \( h_v \in \mathbb{R}^{d_n} \) with dimension \( d_n \). Specifically, the node embedding in the \( l \)-th layer, \( h_v^{(l)} \), is learned by first aggregating (AGG) the neighbor embeddings and then updating (UPDATE) with the embedding from the previous layer. The whole learning process can be denoted as: \( h_v^{(l)} = \text{UPDATE}(h_v^{(l-1)}, \text{AGG}(\{h_u^{(l-1)} : u \in N(v)\})) \).
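For concreteness, the AGG/UPDATE abstraction can be written as a minimal mean-aggregation layer; this is an illustrative PyTorch sketch with a dense adjacency input, not the implementation used in the paper.

```python
import torch
import torch.nn as nn

class MeanAggLayer(nn.Module):
    """One message-passing layer: h_v^(l) = UPDATE(h_v^(l-1), AGG({h_u^(l-1) : u in N(v)}))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.update = nn.Linear(2 * d_in, d_out)  # UPDATE: linear map over [self; aggregated]

    def forward(self, h, adj):
        # h: [N, d_in] node embeddings; adj: [N, N] dense 0/1 adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ h) / deg                     # AGG: mean over the neighbors N(v)
        return torch.relu(self.update(torch.cat([h, agg], dim=-1)))
```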
Vector Quantized-Variational AutoEncoder (VQ-VAE) for Continuous Data The VQ-VAE model (Van Den Oord et al., 2017) is originally proposed for modeling continuous data distribution, such as images, audio and video. It encodes observations into a sequence of discrete latent variables, and reconstructs the observations from these discrete variables. Both encoder and decoder use a shared codebook. More formally, the encoder is a non-linear mapping from the input space, \( x \), to a vector \( E(x) \). This vector is then quantized based on its distance to the prototype vectors (tokens) in the codebook \( e_k, k \in 1 \ldots K \) such that each vector \( E(x) \) is replaced by the index of the nearest code in the codebook, and is transmitted to the decoder: \( \text{quantize}(E(x)) = e_k \), where \( k = \arg \min_j ||E(x) - e_j|| \). To learn these mappings, the gradient of the reconstruction error is then back-propagated through the decoder, and to the encoder using the straight-through gradient estimator (Bengio et al., 2013). Besides reconstruction loss, VQ-VAE has two additional terms to align the token space of the codebook with the output of the encoder. The codebook loss, which only applies to the codebook variables, brings the selected code \( e \) close to the output of the encoder, \( E(x) \). The commitment loss, which only applies to the encoder weights, encourages the output of the encoder to stay close to the chosen code to prevent it from fluctuating too frequently from one code vector to another. The overall objective is:
$$L(x, D(e)) = \|x - D(e)\|_2^2 + \|\text{sg}[E(x)] - e\|_2^2 + \eta \|\text{sg}[e] - E(x)\|_2^2, \quad (1)$$
where \( E \) is the encoder function and \( D \) is the decoder function. The operator \( \text{sg} \) refers to a stop-gradient operation that blocks gradients from flowing into its argument, and \( \eta \) is a hyperparameter which controls the reluctance to change the code corresponding to the encoder output. In this paper, we explore the potential of VQ-VAE for representing discrete graph-structured data.
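For illustration, the quantization step and the codebook/commitment terms of Eq. (1) can be sketched in PyTorch as follows; the reconstruction term requires a decoder and is omitted, and the function name and shapes are our own, not the original implementation.

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, eta=0.25):
    """Quantize encoder outputs z_e [B, D] against a codebook [M, D]."""
    # nearest code per vector: argmin_j ||E(x) - e_j||
    dists = torch.cdist(z_e, codebook)           # [B, M] pairwise L2 distances
    idx = dists.argmin(dim=1)                    # assigned code indices
    e = codebook[idx]                            # quantized vectors
    # codebook loss pulls codes toward encoder outputs; commitment loss does the reverse
    codebook_loss = F.mse_loss(e, z_e.detach())  # ||sg[E(x)] - e||^2
    commit_loss = F.mse_loss(z_e, e.detach())    # ||sg[e] - E(x)||^2
    # straight-through estimator: forward pass uses e, backward passes gradients to z_e
    e_st = z_e + (e - z_e).detach()
    return e_st, idx, codebook_loss + eta * commit_loss
```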
4 VQGRAPH
The critical insight of VQGRAPH is to learn an expressive graph representation space that directly labels nodes’ diverse local neighborhood structures with different code indices, facilitating effective structure-aware GNN-to-MLP distillation. First, we learn a structure-aware graph tokenizer to encode nodes with diverse local structures into corresponding discrete codes, which constitute a codebook (Sec. 4.1). Then we utilize the learned codebook for GNN-to-MLP distillation, and propose a tailored structure-aware distillation objective based on soft code assignments (Sec. 4.2).
4.1 GRAPH TOKENIZER TRAINING
Labeling Nodes’ Local Structure with Discrete Codes Similar to tokenization in NLP (Sennrich et al., 2016; Wu et al., 2016), we tokenize nodes with different neighborhood structures as discrete codes using a variant of VQ-VAE (Van Den Oord et al., 2017), i.e., a graph tokenizer that consists of a GNN encoder and a codebook. More concretely, the nodes $V = \{v_1, v_2, \cdots, v_N\}$ of a graph $G$ are tokenized to $Z = \{z_1, z_2, \cdots, z_N\}$, where the codebook contains $M$ discrete codes. First, the teacher GNN encoder encodes the nodes into node embeddings. Next, our graph tokenizer looks up the nearest code embedding in the codebook for each node embedding $h_i$. Let $E = [e_1, e_2, \cdots, e_M] \in \mathbb{R}^{M \times D}$ denote the codebook embeddings, which are randomly initialized and then optimized in pretraining. The assigned code of the $i$-th node is:
$$z_i = \arg\min_j \|h_i - e_j\|_2, \quad (2)$$
We feed the corresponding codebook embeddings $\{e_{z_1}, e_{z_2}, \cdots, e_{z_n}\}$ to the linear decoder ($p_\psi : e_{z_i} \rightarrow \hat{v}_i$) to reconstruct the input graph including both nodes attributes $X \in \mathbb{R}^{N \times D}$ and adjacency matrix $A \in \mathbb{R}^{N \times N}$ for an end-to-end optimization of our graph tokenizer.
Graph Tokenizer Optimization We adapt VQ-VAE (first term in Equation (1)) to fit our graph tokenization. In addition, the categorical information is also critical for node representations, thus we integrate it into the optimization of our graph tokenizer:
$$L_{Rec} = \underbrace{\frac{1}{N} \sum_{i=1}^{N} \left(1 - \frac{v_i^T \hat{v}_i}{\|v_i\| \cdot \|\hat{v}_i\|}\right)^{\gamma}}_{\text{node reconstruction}} + \underbrace{\left\|A - \sigma(\hat{X} \hat{X}^T)\right\|_F^2}_{\text{edge reconstruction}},$$

$$L_{Tokenizer} = L_{Rec} + L_{CE}(y_{(v_i)}, \hat{y}_{(e_{z_i})}) + \frac{1}{N} \sum_{i=1}^{N} \|\text{sg}[h_i] - e_{z_i}\|_2^2 + \frac{\eta}{N} \sum_{i=1}^{N} \|\text{sg}[e_{z_i}] - h_i\|_2^2, \quad (3)$$
where \( \hat{v} \in \mathbb{R}^D \) and \( \hat{X} \in \mathbb{R}^{N \times D} \) denote the predicted node embedding and node embedding matrix, respectively. \( \text{sg}[\cdot] \) stands for the stop-gradient operator, and the flow of gradients is illustrated in Figure 2. \( L_{\text{Rec}} \) denotes the graph reconstruction loss, which aims to preserve node attributes via the first node reconstruction term with the scaled cosine error (\( \gamma \geq 1 \)), and to recover graph structures via the second topology reconstruction term. \( L_{\text{CE}} \) is the cross-entropy loss between the labels \( y_{(v_i)} \) and the GNN predictions \( \hat{y}_{(e_{z_i})} \) that are based on the assigned codes \( \{e_{z_i}\} \). In \( L_{\text{Tokenizer}} \), the third term is a VQ loss that updates the codebook embeddings, and the fourth term is a commitment loss that encourages the output of the GNN encoder to stay close to the chosen code embedding. \( \eta \) is a hyper-parameter set to 0.25 in our experiments. With the learned graph tokenizer, we acquire a powerful codebook that not only directly identifies local graph structures, but also preserves class information in node representations, facilitating the subsequent GNN-to-MLP distillation.
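A minimal sketch of the tokenizer objective in Eq. (3) is given below; the tensor names and the default $\gamma = 2$ are illustrative assumptions rather than values taken from the paper.

```python
import torch
import torch.nn.functional as F

def tokenizer_loss(v, v_hat, X_hat, A, h, e_z, logits, y, gamma=2.0, eta=0.25):
    """Sketch of L_Tokenizer (Eq. (3)); gamma is the scaled-cosine exponent (assumed value)."""
    # node reconstruction: scaled cosine error between input and decoded node features
    cos = F.cosine_similarity(v, v_hat, dim=-1)
    node_rec = ((1.0 - cos) ** gamma).mean()
    # edge reconstruction: recover A from inner products of decoded features (squared Frobenius norm)
    edge_rec = ((A - torch.sigmoid(X_hat @ X_hat.t())) ** 2).sum()
    ce = F.cross_entropy(logits, y)              # class supervision on code-based predictions
    vq = F.mse_loss(e_z, h.detach())             # VQ loss: move codes toward encoder outputs
    commit = F.mse_loss(h, e_z.detach())         # commitment loss: keep encoder near chosen codes
    return node_rec + edge_rec + ce + vq + eta * commit
```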
**Clarifying Superiority over VQ-VAE and Graph AutoEncoders** In contrast to the vanilla VQ-VAE, we provide a new variant of VQ-VAE for modeling discrete graph data instead of continuous data, and utilize the learned codebook for the distillation task instead of generation tasks. In contrast to traditional graph autoencoders (Kipf & Welling, 2016), our model does not suffer from large variance (Van Den Oord et al., 2017). Moreover, with our expressive latent codes, we can effectively avoid the “posterior collapse” issue, which has been problematic for many graph AE models with powerful decoders and is often caused by the latents being ignored. We provide an experimental comparison to demonstrate our superiority in Sec. 5.3.
**Scaling to Large-Scale Graphs** We have introduced the main pipeline of graph tokenizer training based on the entire graph as input. Nevertheless, for large-scale industrial applications, one cannot feed the whole graph due to latency constraints (Fey et al., 2021; Bojchevski et al., 2020; Ying et al., 2018; Chen et al., 2020a). Numerous studies adopt subgraph-wise sampling as a promising class of mini-batch training techniques (Chen et al., 2018b;a; Zeng et al., 2020; Huang et al., 2018; Chiang et al., 2019; Zou et al., 2019; Shi et al., 2023), implicitly covering the global graph structure through repeated stochastic re-sampling. We follow this technique to perform large-scale graph tokenizer training. For example, when adopting GraphSAGE (Hamilton et al., 2017) as the teacher GNN, we sample the target nodes as a mini-batch \( V_{\text{sample}} \) and sample a fixed-size set of neighbors for feature aggregation. We utilize the connections between the target nodes as the topology reconstruction target (the second term of \( L_{\text{Rec}} \) in Equation (3)) to approximately learn global graph structural information.
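The sketch below illustrates this GraphSAGE-style mini-batch scheme; the data structures and the helper name are hypothetical simplifications of what such a sampler might look like.

```python
import numpy as np

def sample_minibatch(adj_list, target_nodes, fanout=10, seed=0):
    """GraphSAGE-style mini-batch: a fixed-size neighbor set per target node."""
    rng = np.random.default_rng(seed)
    sampled = {}
    for v in target_nodes:
        nbrs = adj_list[v]
        if len(nbrs) > fanout:
            nbrs = rng.choice(nbrs, size=fanout, replace=False).tolist()
        sampled[v] = list(nbrs)
    # edges among the target nodes serve as the topology-reconstruction target
    targets = set(target_nodes)
    target_edges = [(u, w) for u in target_nodes for w in adj_list[u] if w in targets]
    return sampled, target_edges
```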
### 4.2 Structure-Aware Code-Based GNN-to-MLP Distillation
After the graph tokenizer optimization, we obtain a pre-trained teacher GNN encoder and a set of codebook embeddings \( E \). We hope to distill the structure knowledge node-by-node from the GNN to a student MLP based on the codebook. Next, we will introduce our tailored structure-aware GNN-to-MLP distillation with the soft code assignments over the learned codebook.
**Aligning Soft Code Assignments Between GNN and MLP** Different from previous class-based distillation methods (Zhang et al., 2022b; Tian et al., 2023b) that constrain on class predictions between GNN and MLP, we utilize a more expressive representation space of graph data, i.e., our structure-aware codebook, and propose code-based distillation to leverage more essential information of graph structures for bridging GNN and MLP. Formally, for each node \( v_i \), we have its GNN representation \( h_i^{\text{GNN}} \in \mathbb{R}^D \) and the MLP representation \( h_i^{\text{MLP}} \in \mathbb{R}^D \). Then we respectively compare their node representations with all \( M \) codes of the codebook embeddings \( E \in \mathbb{R}^{M \times D} \) and obtain corresponding soft code assignments \( r_i^{\text{GNN}} \in \mathbb{R}^M \) and \( r_i^{\text{MLP}} \in \mathbb{R}^M \):
$$r_i^{\text{GNN}} = \text{COMP}(h_i^{\text{GNN}}, E), \quad r_i^{\text{MLP}} = \text{COMP}(h_i^{\text{MLP}}, E), \quad (4)$$
where \( \text{COMP}: [\mathbb{R}^D, \mathbb{R}^{M \times D}] \rightarrow \mathbb{R}^M \) can be an arbitrary relation module for computing 1-vs-\( M \) code-wise relations, and such relations can be viewed as assignment probabilities. We use the \( L_2 \) distance in our experiments (more studies in Appendix C.1). Kindly note that the codebook size \( M \) can be large, especially for large-scale graphs, so the soft code assignment of each node contains abundant 1-vs-\( M \) global structure-discriminative information. Therefore we choose the soft code assignment as
the target for final distillation:
$$L_{code\_distill} = \frac{1}{N} \sum_{i=1}^{N} \tau^2\, \text{KL}\!\left(p_i^{\text{GNN}} \,\|\, p_i^{\text{MLP}}\right) = \frac{1}{N} \sum_{i=1}^{N} \tau^2 \sum_{m=1}^{M} p_{i,m}^{\text{GNN}} \log \frac{p_{i,m}^{\text{GNN}}}{p_{i,m}^{\text{MLP}}},$$
where KL refers to Kullback–Leibler divergence with:
$$p_i^{\text{GNN}} = \text{Softmax}(r_i^{\text{GNN}}/\tau), \quad p_i^{\text{MLP}} = \text{Softmax}(r_i^{\text{MLP}}/\tau),$$
being the scaled code assignments, where $\tau$ is the temperature factor controlling the softness. Kindly note that the code assignment is only used during training, and we remove it when deploying the pure MLP model. In this way, VQGRAPH is able to effectively distill both local neighborhood structural knowledge and the global structure-discriminative ability from GNNs to MLPs, without increasing inference time. The overall training loss of VQGRAPH is composed of the classification loss $L_{cls}$, the traditional class-based distillation loss $L_{class\_distill}$, and our code-based distillation loss, i.e.,
$$L_{VQGRAPH} = L_{cls} + \alpha L_{class\_distill} + \beta L_{code\_distill},$$
where $\alpha$ and $\beta$ are factors for balancing the losses.
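Putting Eqs. (4)–(7) together, a sketch of the distillation objective might look as follows. We take the negative $L_2$ distance as the COMP relation so that nearer codes receive higher assignment probability, which is our reading of the paper's $L_2$ choice; all tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def code_distill_loss(h_gnn, h_mlp, codebook, tau=1.0):
    """Soft code assignments via negative L2 distance to the codebook, then KL divergence."""
    r_gnn = -torch.cdist(h_gnn, codebook)         # COMP: 1-vs-M code-wise relations
    r_mlp = -torch.cdist(h_mlp, codebook)
    p_gnn = F.softmax(r_gnn / tau, dim=-1)        # scaled code assignments (teacher)
    log_p_mlp = F.log_softmax(r_mlp / tau, dim=-1)
    # KL(p_gnn || p_mlp) averaged over nodes, scaled by tau^2 as in standard distillation
    return (tau ** 2) * F.kl_div(log_p_mlp, p_gnn, reduction="batchmean")

def vqgraph_loss(logits_mlp, y, p_class_gnn, h_gnn, h_mlp, codebook,
                 alpha=1.0, beta=1.0, tau=1.0):
    """Overall objective: L_cls + alpha * L_class_distill + beta * L_code_distill."""
    l_cls = F.cross_entropy(logits_mlp, y)
    # one common choice for class-based distillation: KL to the GNN's soft class labels
    l_class = F.kl_div(F.log_softmax(logits_mlp, dim=-1), p_class_gnn,
                       reduction="batchmean")
    return l_cls + alpha * l_class + beta * code_distill_loss(h_gnn, h_mlp, codebook, tau)
```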
5 EXPERIMENTS
Datasets and Evaluation We use five widely used public benchmark datasets (Zhang et al., 2022b; Yang et al., 2021a) (Citeseer, Pubmed, Cora, A-computer, and A-photo) and two large OGB datasets (Hu et al., 2020a) (Arxiv and Products) to evaluate the proposed model. In our experiments, we report the mean and standard deviation over ten distinct runs with randomized seeds to ensure the robustness and reliability of our findings. We also extend VQGRAPH to heterophilic graphs and achieve performance improvements in Appendix A.2. We utilize accuracy to gauge model performance. More details are in Appendix B.1.
Model Architectures For fair comparison, we adopt GraphSAGE with GCN aggregation as our teacher model (also as graph tokenizer) and use the same student MLP models for all evaluations following SOTA GLNN (Zhang et al., 2022b) and NOSMOG (Tian et al., 2023b). The codebook size increases accordingly with dataset size (studied in Sec. 5.3). For example, we set 2048 and 8192 for Cora and A-photo, respectively. More model hyperparameters are detailed in Appendix B.2. We investigate the influence of alternative teacher models, including GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and APPNP (Klicpera et al., 2019), detailed in Appendix C.2.
Transductive vs. Inductive We experiment in two separate settings, transductive (tran) and inductive (ind), for a comprehensive evaluation. In both settings, we first pre-train our graph tokenizer to learn the codebook embeddings $E$ for code-based distillation. For the tran setting, we train our models on the labeled graph $G$, along with the corresponding feature matrix $X^L$ and label vector $Y^L$, before evaluating their performance on the unlabeled data $X^U$ and $Y^U$. Soft labels and soft code assignments are generated for all nodes within the graph (i.e., $y_v^{\text{soft}}, r_v^{\text{GNN}}, r_v^{\text{MLP}}$ for $v \in V$). As for ind, we follow the methodology of prior work (Tian et al., 2023b) in randomly selecting 20% of the data for inductive evaluation. Specifically, we divide the unlabeled nodes $V^U$ into two disjoint subsets, observed and inductive (i.e., $V^U = V^U_{\text{obs}} \sqcup V^U_{\text{ind}}$), producing three distinct graphs, $G = G^L \sqcup G^U_{\text{obs}} \sqcup G^U_{\text{ind}}$, which share no nodes. During training, the edges between $G^L \sqcup G^U_{\text{obs}}$ and $G^U_{\text{ind}}$ are removed, while they are leveraged during inference to transfer positional features via an average operator (Hamilton et al., 2017). Node features and labels are partitioned into three disjoint sets, i.e., $X = X^L \sqcup X^U_{\text{obs}} \sqcup X^U_{\text{ind}}$ and $Y = Y^L \sqcup Y^U_{\text{obs}} \sqcup Y^U_{\text{ind}}$. Soft labels and soft code assignments are generated for nodes within the labeled and observed subsets (i.e., $y_v^{\text{soft}}, r_v^{\text{GNN}}, r_v^{\text{MLP}}$ for $v \in V^L \sqcup V^U_{\text{obs}}$). We provide code and models in the supplementary material.
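A minimal sketch of this inductive split, assuming integer node IDs and an edge list (the helper name is ours, not from the paper's code):

```python
import numpy as np

def inductive_split(unlabeled_nodes, edges, ind_ratio=0.2, seed=0):
    """Split V^U into observed/inductive subsets; drop edges touching the inductive part."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(np.array(unlabeled_nodes))
    n_ind = int(len(perm) * ind_ratio)           # 20% held out for inductive evaluation
    v_ind = set(perm[:n_ind].tolist())
    v_obs = [v for v in unlabeled_nodes if v not in v_ind]
    # edges incident to inductive nodes are removed for training, restored at inference
    train_edges = [(u, w) for (u, w) in edges if u not in v_ind and w not in v_ind]
    return v_obs, sorted(v_ind), train_edges
```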
5.1 Main Results
GNN-to-MLP Distillation We compare VQGRAPH to other state-of-the-art GNN-to-MLP distillation methods GLNN and NOSMOG with same experimental settings, and use distilled MLP models for evaluations. We first consider the standard transductive setting, enabling direct comparison with previously published literature (Zhang et al., 2022b; Hu et al., 2020b; Yang et al., 2021a). As depicted in Tab. 1, VQGRAPH outperforms all baselines including teacher GNN models across
Table 1: Node classification results under the standard setting; results show accuracy (higher is better). $\Delta_{GNN}$, $\Delta_{MLP}$, and $\Delta_{NOSMOG}$ represent the differences between VQGRAPH and GNN, MLP, and NOSMOG, respectively. GLNN and NOSMOG are the SOTA GNN-to-MLP distillation methods.
| Datasets | SAGE | MLP | GLNN | NOSMOG | VQGRAPH | $\Delta_{GNN}$ | $\Delta_{MLP}$ | $\Delta_{NOSMOG}$ |
|------------|----------|----------|----------|----------|----------|----------------|---------------|-------------------|
| Citeseer | 70.49 ± 1.53 | 58.50 ± 1.86 | 71.22 ± 1.50 | 73.78 ± 1.54 | **76.08 ± 0.55** | ↑ 7.93% | ↑ 30.05% | ↑ 3.18% |
| Pubmed | 75.56 ± 2.06 | 68.39 ± 3.09 | 75.59 ± 2.46 | 77.34 ± 2.36 | **78.40 ± 1.71** | ↑ 3.76% | ↑ 14.64% | ↑ 1.37% |
| Cora | 80.64 ± 1.57 | 59.18 ± 1.60 | 80.26 ± 1.66 | 83.04 ± 1.26 | **83.93 ± 0.87** | ↑ 4.08% | ↑ 41.82% | ↑ 1.07% |
| A-computer | 82.82 ± 1.37 | 67.62 ± 2.21 | 82.71 ± 1.18 | 84.04 ± 1.01 | **85.17 ± 1.29** | ↑ 2.84% | ↑ 25.95% | ↑ 1.34% |
| A-photo | 90.85 ± 0.87 | 77.29 ± 1.79 | 91.95 ± 1.04 | 93.36 ± 0.69 | **94.21 ± 0.45** | ↑ 3.70% | ↑ 21.89% | ↑ 0.91% |
| Arxiv | 70.73 ± 0.35 | 55.67 ± 0.24 | 63.75 ± 0.48 | 71.65 ± 0.29 | **72.43 ± 0.20** | ↑ 2.40% | ↑ 30.11% | ↑ 0.93% |
| Products | 77.17 ± 0.32 | 60.02 ± 0.10 | 63.71 ± 0.31 | 78.45 ± 0.38 | **79.17 ± 0.21** | ↑ 2.59% | ↑ 31.91% | ↑ 0.92% |
Table 2: Node classification results in a production scenario with both inductive and transductive settings. ind indicates the results on $V^U_{\text{ind}}$, tran indicates the results on $V^U_{\text{obs}}$, and prod indicates the interpolated production results of both ind and tran.
| Datasets | Eval | SAGE | MLP | GLNN | NOSMOG | VQGRAPH | $\Delta_{GNN}$ | $\Delta_{MLP}$ | $\Delta_{NOSMOG}$ |
|------------|------|----------|----------|----------|----------|----------|----------------|---------------|-------------------|
| Citeseer | prod | 68.06 | 58.49 | 69.09 | 70.60 | **73.76** | ↑ 8.37% | ↑ 26.11% | ↑ 5.80% |
| | ind | 69.14 ± 2.99 | 59.31 ± 4.56 | 68.48 ± 2.38 | 70.50 ± 2.30 | **72.93 ± 1.78** | ↑ 5.48% | ↑ 22.96% | ↑ 3.74% |
| | tran | 67.79 ± 2.80 | 58.29 ± 1.94 | 69.23 ± 2.39 | 70.07 ± 2.25 | **74.39 ± 1.94** | ↑ 10.03% | ↑ 27.90% | ↑ 7.74% |
| Pubmed | prod | 74.77 | 68.39 | 74.67 | 75.82 | **76.92** | ↑ 2.86% | ↑ 12.47% | ↑ 1.45% |
| | ind | 75.07 ± 2.89 | 68.28 ± 3.25 | 74.55 ± 2.95 | 75.8 ± 3.32 | **76.71 ± 2.76** | ↑ 2.18% | ↑ 12.35% | ↑ 1.11% |
| | tran | 74.70 ± 2.33 | 68.42 ± 3.06 | 74.70 ± 2.75 | 75.80 ± 3.06 | **77.13 ± 3.01** | ↑ 3.25% | ↑ 12.73% | ↑ 1.75% |
| Cora | prod | 79.53 | 59.18 | 77.83 | 81.02 | **81.68** | ↑ 2.70% | ↑ 38.02% | ↑ 0.81% |
| | ind | 81.03 ± 1.71 | 59.44 ± 3.36 | 73.32 ± 1.50 | 81.36 ± 1.53 | **82.20 ± 1.32** | ↑ 1.44% | ↑ 38.29% | ↑ 1.03% |
| | tran | 79.16 ± 1.60 | 59.12 ± 1.49 | 78.97 ± 1.36 | 80.93 ± 1.65 | **81.15 ± 1.25** | ↑ 2.51% | ↑ 37.26% | ↑ 0.27% |
| A-computer | prod | 82.73 | 67.62 | 82.10 | 83.85 | **84.16** | ↑ 1.73% | ↑ 24.46% | ↑ 0.37% |
| | ind | 82.83 ± 1.51 | 67.69 ± 2.20 | 80.27 ± 2.11 | 84.36 ± 1.57 | **85.73 ± 2.04** | ↑ 3.50% | ↑ 26.65% | ↑ 1.62% |
| | tran | 82.70 ± 1.34 | 67.60 ± 2.23 | 80.26 ± 1.80 | 83.72 ± 1.44 | **84.56 ± 1.81** | ↑ 2.25% | ↑ 25.08% | ↑ 1.00% |
| A-photo | prod | 90.45 | 77.29 | 91.32 | 92.47 | **93.05** | ↑ 2.87% | ↑ 20.39% | ↑ 0.62% |
| | ind | 90.45 ± 1.47 | 77.44 ± 1.50 | 89.50 ± 1.12 | 92.61 ± 1.09 | **93.05 ± 0.89** | ↑ 2.82% | ↑ 20.24% | ↑ 0.54% |
| | tran | 90.42 ± 0.68 | 77.31 ± 1.90 | 81.80 ± 0.49 | 92.44 ± 0.51 | **92.96 ± 1.02** | ↑ 2.81% | ↑ 20.19% | ↑ 0.56% |
| Arxiv | prod | 70.69 | 55.35 | 63.50 | 70.90 | **71.43** | ↑ 1.05% | ↑ 29.05% | ↑ 0.75% |
| | ind | 70.69 ± 0.58 | 55.29 ± 0.63 | 59.04 ± 0.46 | 70.09 ± 0.55 | **70.86 ± 0.42** | ↑ 0.24% | ↑ 28.16% | ↑ 1.10% |
| | tran | 70.69 ± 0.39 | 55.36 ± 0.34 | 64.61 ± 0.15 | 71.10 ± 0.34 | **72.03 ± 0.56** | ↑ 1.90% | ↑ 30.11% | ↑ 1.31% |
| Products | prod | 76.93 | 60.02 | 63.47 | 77.33 | **77.93** | ↑ 1.30% | ↑ 29.84% | ↑ 0.71% |
| | ind | 77.23 ± 0.24 | 60.02 ± 0.09 | 63.38 ± 0.33 | 77.02 ± 0.19 | **77.25 ± 0.25** | ↑ 0.35% | ↑ 29.12% | ↑ 0.62% |
| | tran | 76.86 ± 0.27 | 60.02 ± 0.11 | 63.49 ± 0.31 | 77.41 ± 0.21 | **78.36 ± 0.13** | ↑ 1.95% | ↑ 30.56% | ↑ 1.23% |
all datasets. Specifically, VQGRAPH improves performance by an average of 3.90% compared to its teacher GNN, highlighting its ability to capture superior structural information without relying on explicit graph structure input. Comparing VQGRAPH to NOSMOG, our proposed model achieves an average improvement of 1.39% across both small- and large-scale graph datasets. Further model analysis of VQGRAPH is presented in Sec. 5.3.
Experiments in Inductive and Transductive Settings To gain deeper insights into the effectiveness of VQGRAPH, we conduct experiments in a realistic production (prod) scenario that involves both inductive (ind) and transductive (tran) settings across multiple datasets, as detailed in Tab. 2. Our experimental results demonstrate that VQGRAPH consistently achieves superior performance compared to the teacher model and baseline methods across all datasets and settings. Specifically, our proposed method outperforms GNN across all datasets and settings with an average improvement of 2.93%, demonstrating its superior efficacy of our learned code-based representation space in capturing graph structural information, even on large-scale datasets. Furthermore, when compared to MLP and NOSMOG, VQGRAPH consistently achieves significant performance improvements, with average gains of 25.81% and 1.6%, respectively, across all datasets and settings.
5.2 Model Analysis
Trade-off between Performance and Inference Time To demonstrate the efficiency and capacity of our VQGRAPH, we visualize the trade-off between node classification accuracy and model inference time on the Citeseer dataset in Figure 3. Our results indicate that VQGRAPH achieves the highest accuracy of 76% while maintaining a fast inference time of 1.45ms. Compared to other models with similar inference times, VQGRAPH significantly outperforms NOSMOG and MLPs by 3.12% and 30.05% in average accuracy, respectively. The models with performance comparable to VQGRAPH require considerably more inference time, e.g., 2-layer GraphSAGE (SAGE-L2) needs 152.31ms and 3-layer GraphSAGE (SAGE-L3) needs 1201.28ms, making them
unsuitable for real-world applications. This makes VQGraph 105× faster than SAGE-L2 and 828× faster than SAGE-L3. Although increasing the hidden size of NOSMOG slightly improves its performance, NOSMOGw2 (2× wider than NOSMOG) and NOSMOGw4 still perform worse than VQGraph while requiring more inference time, demonstrating the superior efficiency of our VQGraph.
Compactness of Learned Node Representation Space We use t-SNE (Van der Maaten & Hinton, 2008) to visualize the node representation spaces of both the teacher GNN and the distilled student MLP models under different methods, with results in Figure 4. Our VQGRAPH provides a better teacher GNN model than GLNN and NOSMOG, and the node representations of the same classes have a more compact distribution. The representations extracted by our distilled MLP model likewise form a more compact distribution. We attribute this to our expressive code-based representation space, which provides more structure-aware representations for classifying nodes. Besides, our new code-based distillation strategy can effectively deliver both graph structural information and categorical information from GNN to MLP, guaranteeing the compactness of the MLP’s representation space.
Consistency between Model Predictions and Global Graph Topology Here we corroborate the superiority of VQGRAPH over GNNs, MLPs, GLNN, and NOSMOG in capturing global graph structural information, which is complementary to the above analysis of local structure awareness. Following (Zhang et al., 2022b; Tian et al., 2023b), we use the cut value to evaluate the alignment between model predictions and graph topology, based on the approximation of the min-cut problem (Bianchi et al., 2019). The min-cut problem divides the nodes \( V \) into \( K \) disjoint subsets by removing the minimum number of edges. Correspondingly, the min-cut problem can be expressed as:
\[
\max \frac{1}{K} \sum_{k=1}^{K} \frac{C_k^T A C_k}{C_k^T D C_k},
\]
where \( C \) is the node class assignment, \( A \) is the adjacency matrix, and \( D \) is the degree matrix. Therefore, cut value is defined as follows:
\[
CV = \frac{\text{tr}(\hat{Y}^T A \hat{Y})}{\text{tr}(\hat{Y}^T D \hat{Y})},
\]
where \( \hat{Y} \) is the model prediction output, and the cut value \( CV \) indicates the consistency between the model predictions and the graph topology. We report the cut values for various models in the transductive setting in Tab. 3. The average \( CV \) achieved by VQGRAPH is 0.9493, while SAGE, GLNN, and NOSMOG have average \( CV \) values of 0.9276, 0.8725, and 0.9348, respectively. VQGRAPH obtains the highest cut value, indicating its superior global structure-capturing ability over GNNs and SOTA GNN-to-MLP distillation methods.
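Computing the cut value is straightforward given a dense adjacency matrix and one-hot predictions; a small PyTorch sketch:

```python
import torch

def cut_value(Y_hat, A):
    """CV = tr(Y^T A Y) / tr(Y^T D Y) for a one-hot prediction matrix Y_hat [N, K]."""
    D = torch.diag(A.sum(dim=1))                  # degree matrix
    num = torch.trace(Y_hat.t() @ A @ Y_hat)      # intra-cluster edge mass
    den = torch.trace(Y_hat.t() @ D @ Y_hat)      # total degree mass per cluster
    return (num / den).item()
```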
### Table 3: The cut value. VQGraph predictions are more consistent with the graph topology than GNN, MLP, GLNN and the state-of-the-art method NOSMOG.
| Datasets | SAGE | MLP | GLNN | NOSMOG | VQGraph |
|--------------|--------|--------|--------|--------|---------|
| Citeseer | 0.9535 | 0.8107 | 0.9447 | 0.9659 | 0.9786 |
| Pubmed | 0.9597 | 0.9062 | 0.9298 | 0.9641 | 0.9883 |
| Cora | 0.9385 | 0.7203 | 0.8908 | 0.9480 | 0.9684 |
| A-computer | 0.8951 | 0.6764 | 0.8579 | 0.9047 | 0.9190 |
| A-photo | 0.9014 | 0.7099 | 0.9063 | 0.9084 | 0.9177 |
| Arxiv | 0.9052 | 0.7252 | 0.8126 | 0.9066 | 0.9162 |
| Products | 0.9400 | 0.7518 | 0.7657 | 0.9456 | 0.9571 |
| Average | 0.9276 | 0.7572 | 0.8725 | 0.9348 | 0.9493 |
5.3 Ablation Study
Influence of the Codebook Size We analyze the influence of the codebook size of our graph tokenizer. From Figure 5(i), we observe that changing the codebook size can significantly influence the performance of our distilled MLP model. Too small a codebook lacks the expressiveness to preserve graph structures, while too large a codebook leads to code redundancy that impairs performance. Moreover, VQGRAPH has different optimal codebook sizes for different datasets, as shown in Figure 5(ii). Another interesting observation is that graphs with more nodes or edges tend to require a larger codebook size to achieve optimal distillation results. In our VQGRAPH, the size of the codebook is mainly determined by the complexity of the graph data, considering both nodes and edges, which produce different local substructures. Thus, our codebook size remains small compared to the exponential topological complexity of graphs, demonstrating the expressiveness of our codebook. Taking Cora as an example, our codebook size is 2048, but the graph contains 2485 nodes with an average degree of about 4, which can theoretically result in $O(2485^4)$ possible 1-hop substructure patterns.

(i) Accuracy vs. Codebook Size. (ii) Optimal codebook size for various datasets.
Figure 5: Influence of the codebook size.
**Contribution Analysis of VQGRAPH**
We design experiments to statistically analyze the contributions of VQGRAPH’s components. The results are presented in Tab. 4. We observe that, compared to traditional class-based distillation, both Only-VQ and VQGRAPH improve the average accuracy, suggesting that both the graph tokenizer and the soft code assignments have vital impacts on the final performance. Moreover, comparing AE+Class-based to Class-based, we find that adding structure awareness slightly improves GNN-to-MLP distillation. Our designed graph VQ-VAE improves the GNN-to-MLP distillation results more significantly than a classic graph (variational) autoencoder, because we directly learn numerous structure-aware codes to enrich the expressiveness of node representations. Our VQ+code-based distillation (denoted as VQGRAPH) substantially improves node classification performance over Only-VQ across all datasets, demonstrating the superiority of our new structure-aware distillation targets over soft labels. Please refer to Appendix C for more ablation studies.
Table 4: Class-based denotes only using soft labels for distillation (e.g., GLNN); AE+Class-based denotes adding a classic graph Auto-Encoder (Kipf & Welling, 2016) for structure awareness; Only-VQ denotes using VQ for training the teacher but using soft labels for distillation. $\Delta_{\text{Only-VQ}}$ and $\Delta_{\text{VQGRAPH}}$ represent the differences of Only-VQ and VQGRAPH from Class-based, respectively.
| Datasets | GNN | Class-based | AE+Class-based | Only-VQ (ours) | VQGRAPH (ours) | $\Delta_{\text{Only-VQ}}$ | $\Delta_{\text{VQGRAPH}}$ |
|------------|--------------|-------------|----------------|---------------|----------------|---------------------------|---------------------------|
| Citeseer | 70.49 ± 1.53 | 71.22 ± 1.54| 71.65 ± 0.69 | 74.96 ± 1.50 | 76.08 ± 0.55 | ↑ 3.25% | ↑ 6.82% |
| Pubmed | 75.56 ± 2.06 | 75.59 ± 2.46| 76.56 ± 1.23 | 77.86 ± 2.46 | 78.40 ± 1.71 | ↑ 3.00% | ↑ 3.71% |
| Cora | 80.64 ± 1.57 | 80.26 ± 1.66| 81.11 ± 1.01 | 82.48 ± 0.46 | 83.93 ± 0.87 | ↑ 2.77% | ↑ 4.57% |
| A-computer | 82.82 ± 1.37 | 82.71 ± 1.18| 83.01 ± 1.18 | 84.06 ± 1.18 | 85.17 ± 1.29 | ↑ 1.63% | ↑ 2.97% |
| A-photo | 90.85 ± 0.87 | 91.95 ± 1.04| 92.06 ± 0.69 | 93.86 ± 1.04 | 94.21 ± 0.45 | ↑ 2.08% | ↑ 2.45% |
| Arxiv | 70.73 ± 0.35 | 63.75 ± 0.48| 70.10 ± 1.02 | 70.75 ± 0.48 | 72.43 ± 0.20 | ↑ 10.98% | ↑ 13.61% |
| Products | 77.17 ± 0.32 | 67.71 ± 0.31| 77.65 ± 0.98 | 78.71 ± 0.31 | 79.17 ± 0.21 | ↑ 16.25% | ↑ 16.93% |
### 6 Conclusion
In this paper, we improve the expressiveness of the existing graph representation space by directly labeling nodes’ diverse local structures with a codebook, and utilize the codebook to facilitate structure-aware GNN-to-MLP distillation. Extensive experiments on seven datasets demonstrate that our VQGRAPH can significantly improve over GNNs by **3.90%**, over MLPs by **28.05%**, and over the state-of-the-art GNN-to-MLP distillation method by **1.39%** in average accuracy, while running **828×** faster than GNNs at inference. Furthermore, we present additional visualization and statistical analyses as well as ablation studies to demonstrate the superiority of the proposed model.
ACKNOWLEDGEMENT
This work was supported by the National Natural Science Foundation of China (No.U22B2037 and U23B2048).
REFERENCES
Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In *international conference on machine learning*, pp. 21–29. PMLR, 2019.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv preprint arXiv:1308.3432*, 2013.
Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Mincut pooling in graph neural networks. 2019.
Aleksandar Bojchevski, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. Scaling graph neural networks with approximate pagerank. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 2464–2473, 2020.
H Bunke and G Allermann. Inexact graph matching for structural pattern recognition. *Pattern Recognition Letters*, 1(4):245–253, 1983. ISSN 0167-8655. doi: https://doi.org/10.1016/0167-8655(83)90033-8.
Jianfei Chen, Jun Zhu, and Le Song. Stochastic training of graph convolutional networks with variance reduction. In *International Conference on Machine Learning*, pp. 942–950. PMLR, 2018a.
Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: Fast learning with graph convolutional networks via importance sampling. In *International Conference on Learning Representations*, 2018b.
Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: Fast learning with graph convolutional networks via importance sampling. In *International Conference on Learning Representations*, 2018c.
Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, and Ji-Rong Wen. Scalable graph neural networks via bidirectional propagation. *Advances in neural information processing systems*, 33:14556–14566, 2020a.
Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In *ICML*, 2020b.
Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks, 2020c.
Yu-Hsin Chen, Joel Emer, and Vivienne Sze. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. In *2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA)*, pp. 367–379, 2016. doi: 10.1109/ISCA.2016.40.
Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In *Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining*, pp. 257–266, 2019.
Eli Chien, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Jiong Zhang, Olgica Milenkovic, and Inderjit S Dhillon. Node feature extraction by self-supervised multi-scale neighborhood prediction. In *International Conference on Learning Representations*, 2022.
Xiang Deng and Zhongfei Zhang. Graph-free knowledge distillation for graph neural networks, 2021.
|
lDbjooxLkD
|
Wikipedia seems to suggest that the author's estimator $PU := r/\mathbb{E}[K]$ is a *biased* estimate (https://en.wikipedia.org/wiki/Negative_binomial_distribution#Maximum_likelihood_estimation), directly contradicting Theorem 1. Could the authors please double check?
|
Predicting Emergent Abilities with Infinite Resolution Evaluation
Shengding Hu\textsuperscript{1}, Xin Liu\textsuperscript{2}, Xu Han\textsuperscript{1,3,*}, Xinrong Zhang\textsuperscript{1}, Chaoqun He\textsuperscript{1}, Weilin Zhao\textsuperscript{1}, Yankai Lin\textsuperscript{4}, Ning Ding\textsuperscript{1}, Zebin Ou\textsuperscript{5}, Guoyang Zeng\textsuperscript{6}, Zhiyuan Liu\textsuperscript{1,*}, Maosong Sun\textsuperscript{1,*}
\textsuperscript{1}Department of Computer Science and Technology, Tsinghua University
\textsuperscript{2}Beijing Language and Culture University.
\textsuperscript{3}Shanghai Artificial Intelligence Laboratory
\textsuperscript{4}Renmin University of China. \textsuperscript{5}Zhihu Inc. \textsuperscript{6}Modelbest Inc.
hsd23@mails.tsinghua.edu.cn
Abstract
The scientific scale-up of large language models (LLMs) necessitates a comprehensive understanding of their scaling properties. However, the existing literature on scaling properties yields only an incomplete answer: optimization loss decreases predictably as the model size increases, in line with the established scaling law; yet no scaling law for task performance has been established, and task performances are far from predictable during scaling. Task performances typically show minor gains on small models until they improve dramatically once models exceed a size threshold, exemplifying the “emergent abilities”. In this study, we discover that small models, although they exhibit minor performance, demonstrate critical and consistent task performance improvements that are not captured by conventional evaluation strategies due to insufficient measurement resolution. To measure such improvements, we introduce PASSUNTIL, an evaluation strategy with theoretically infinite resolution, through massive sampling in the decoding phase. With PASSUNTIL, we conduct a quantitative investigation into the scaling law of task performance. The investigation contains two parts. Firstly, a strict task scaling law, not conventionally known to exist, is identified, enhancing the predictability of task performance. Remarkably, we are able to predict the performance of a 2.4B model on code generation with merely 0.05% deviation before training starts, which is the first systematic attempt to verify the predictable scaling proposed by GPT-4’s report (OpenAI, 2023). Secondly, underpinned by PASSUNTIL, we are able to study emergent abilities quantitatively. We identify a kind of accelerated emergence whose scaling curve cannot be fitted by the standard scaling-law function and whose growth speed keeps increasing. We then examine two hypotheses and suggest that the “multiple circuits hypothesis” might be responsible for the accelerated emergence.
“See the world in a grain of sand”
1 Introduction
Large Language Models (LLMs) (Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; Chowdhery et al., 2022) have become a center of interest among AI researchers recently. These models, trained on expansive datasets and furnished with an enormous number of parameters, have demonstrated unparalleled proficiency across diverse domains, such as text generation (Dubois et al., 2023), code completion (Chen et al., 2021; Rozière et al., 2023), and academic test (Hendrycks et al., 2020).
The impressive success of these LLMs depends heavily on scaling up the model parameters and pre-training data volume. It has been consistently observed that, when considering a continuum of models with nearly identical architectures, larger models coupled with increased pre-training corpora consistently yield diminished training loss. This observation has been mathematically formalized as the scaling law of loss (Kaplan et al., 2020; Henighan et al., 2020), which states that the reducible loss achieved by the model, in log scale, is linear in the model size in log scale. The scaling law has provided guidance for the scientific scaling of LLMs, including determining the balance
*Corresponding Authors.
of the model size and pre-training data size (Hoffmann et al., 2022; Muennighoff et al., 2023). This has transformed what was once a somewhat blind scaling process into a methodology underpinned by empirical assurance. Nonetheless, such a beneficial scaling law yields predictions solely on the loss, not extending to the real task performance encountered in practice. This divergence leaves a substantial gap in a comprehensive scaling-up methodology (Ganguli et al., 2022).
Figure 1: We can discriminate subtle performance improvements (left), which are evaluated as all zeros by conventional methods (right). The right figure directly uses Figure 9(a) in Sorscher et al. (2022) as a comparison, which the authors utilize to illustrate a “break-through” behavior in task performance. The inset in the left figure shows the performances in a log(− log(·)) space, which displays strong linearity, supporting the task scaling law (Eq.(3)).
The challenge in extending the loss scaling law to task performance predominantly stems from the discontinuity observed in task performance during scaling. Language models below a certain size yield trivial performance, i.e., random guessing on multiple choices or zero scores on generation tasks. However, when the model size surpasses a certain threshold, a distinct surge in performance appears, which leads to substantially non-trivial performance. This phenomenon is summarized as “emergent abilities” (Srivastava et al., 2022; Wei et al., 2022a), and is observed across various model families and tasks. It seems that qualitative changes happen inside the model, which make the model start to manifest unique capabilities. While these emergent phenomena indicate that LLMs are becoming stronger, they complicate the prediction of task performance.
A pivotal question arises: can we unlock predictable scaling of task performance from the apparent discontinuities? We hypothesize that the perceived discontinuity from trivial to excellent performance might stem from limited evaluation resolution\(^1\). By employing a more nuanced resolution, one could potentially uncover the scaling law for tasks. The most related work to ours is Schaeffer et al. (2023), which proposes two methodologies to make emergent abilities continuous, i.e., “change of metrics” and “increase resolution” by expanding the test set size. Our motivation diverges from the “change of metric” approach of Schaeffer et al. (2023), which posits that employing other continuous metrics can make emergent abilities disappear. A limitation of alternative smooth metrics (e.g., distribution distance) is that they yield insufficient insight into the target metrics (e.g., exact match) that evaluators intuitively perceive. In contrast, our method extends the “increase resolution” approach in a novel way, targeting directly the prediction of performance on tasks such as code generation in our experiments.
We introduce an evaluation strategy named PASSUNTIL that, for the first time, enables quantitative exploration of the scaling properties of task performance. PASSUNTIL deploys extensive random sampling in the decoding phase (e.g., \(10^5\) sampling times), and evaluates each sampled result until any generation passes the target test. Therefore, this evaluation strategy has infinite measurement resolution as long as computational resources are not bounded. Moreover, it can provide maximum likelihood estimates of target metrics such as accuracy and exact match. To refine evaluation resolution and accuracy, we suggest fitting instance-level scaling laws, since different test instances might improve at different speeds during scaling.
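A minimal sketch of the PASSUNTIL loop is shown below; `generate` and `check` are placeholders for model decoding and the task-specific test, and the final estimate (successes divided by total samples drawn) is the maximum-likelihood estimator under an assumed i.i.d. Bernoulli pass model, where first-success counts are geometric.

```python
def pass_until(generate, check, max_samples=10**5):
    """Sample generations until one passes the test; return the number of draws, or None."""
    for k in range(1, max_samples + 1):
        if check(generate()):
            return k
    return None

def estimate_pu(instances, max_samples=10**5):
    """PU estimate: total successes / total samples drawn (geometric MLE, i.i.d. passes)."""
    successes, total = 0, 0
    for generate, check in instances:
        k = pass_until(generate, check, max_samples)
        if k is None:
            total += max_samples      # instance never passed within the budget
        else:
            successes, total = successes + 1, total + k
    return successes / total
```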
With the proposed evaluation strategy, we delve into the scaling law governing task performance. To begin with, we train two series of models ranging from 0.03B to 2.4B parameters. These models strictly adhere to the pre-training loss scaling law, providing a solid foundation for analyzing the scaling behavior of task performance. We mainly disclose two findings in our exploration.
---
\(^1\)By “resolution”, we view evaluation as a measurement of the real probability of completing a task. And resolution is the smallest probability difference that the evaluation strategy can detect.
Firstly, task performances are predictable with \textsc{PassUntil}. We validate the presence of subtle but non-negligible performance in smaller models that can be captured by \textsc{PassUntil}. These performances are on the order of $10^{-5}$ and exhibit steady enhancement as the model scales up. Subsequently, we derive the mathematical form of \textbf{task scaling law}, experimentally verifying an almost strict linear relationship between $\log(-\log(\text{PU}))$ and $\log(N)$, where PU denotes the estimation of target metric given by \textsc{PassUntil} and $N$ is the number of model parameters. This relationship enables us to attain highly accurate predictions. For instance, in the code generation task, our predictions exhibit a mere 0.05% deviation from the actual values.
Secondly, we discover a phenomenon of \textbf{accelerated emergence}. To begin with, we discover that the shape of the task scaling curve is not uniform across tasks. Several task manifest scaling functions that diverge from the typical task scaling law. In other words, their scaling curve is smooth and incremental but cannot be fitted by the typical scaling law function. Their scaling curve of $\log(-\log(\text{PU}))$ w.r.t. $\log(N)$ is concave, which is akin to an acceleration in the performance scaling speed. We provide a mathematical definition of such phenomenon. With the quantitative definition, we exclude a possible multi-step reasoning explanation (Schaeffer et al., 2023), and propose an alternative hypothesis. This hypothesis is predicated on potential transformer circuits (Nelson et al., 2021) that are used to explain the “grokking” phenomenon (Power et al., 2022; Varma et al., 2023). It is in harmony with the observed scaling function.
Our work represents the first open-source attempt regarding the predictability of task performance. While GPT-4’s report (OpenAI, 2023) has initiated this exploration, it has not provided comprehensive details. We will open-source all checkpoints to facilitate future research in this direction.
2 RELATED WORK
Predicting task performance before training is an aspirational objective for the development of predictable AI systems, and a multitude of studies approach this aim from various perspectives.
**Loss Scaling Law.** Scaling phenomena have been observed across a broad spectrum of deep learning architectures. The power-law scaling behavior of loss in RNN-based models is investigated in Hestness et al. (2017). Kaplan et al. (2020) delineate the loss scaling trends for Transformer-based language models and explores the scaling behavior of optimal hyper-parameters. They formally established the following scaling law
$$L = cN^{-\alpha} + L_0,$$
where $N$ is the number of non-embedding parameters of LLM, $c$, $\alpha$ are positive coefficients, and $L_0$ is the irreducible loss representing the randomness in data. This formulation has catalyzed the proliferation of LLMs. Subsequently, scaling laws are established for various domains and scenarios, including multi-modality (Henighan et al., 2020; Zhai et al., 2022), computation constraint scenario (Hoffmann et al., 2022), data engineering (Muenninghoff et al., 2023; Sorscher et al., 2022), and reinforcement learning (Gao et al., 2023). Yao & Wang (2023) extend the scaling law into loss prediction by introducing hyper-parameter scaling methods. The relationship of our work with these existing literature is twofold. First, these works concentrate on training and validation loss metrics, which do not reliably predict task performance. Second, our research builds on these scaling laws and extends the mathematical form of Eq.(1) to the scaling law of task performance.
**Scaling Behavior of Task Performance.** Despite the predictable decrement in LLM loss, task performance improvements are twisted during scaling. While some tasks, predominantly those relying on memorization of knowledge, have shown progressive improvement, numerous tasks exhibit breakthrough behavior as model size increases (Srivastava et al., 2022; Wei et al., 2022a). Wei et al. (2022a) illustrate that the concept of “emergence” is also pertinent to prompting techniques such as Chain-of-Thought (Wei et al., 2022b) and In-context Learning (Brown et al., 2020), complicating the pursuit of understanding the scaling law of task performance. It appears that the law of loss scaling offers no assurance for task performance, engendering a lack of guidance in pre-training methodology. Fortunately, several studies endeavor to demystify these emergent abilities. GPT-4’s technical report (OpenAI, 2023) reports that GPT-4’s task performance can be predicted with less than $1/10000$ of computation, albeit without disclosing the methodology and acknowledging that certain abilities are still beyond prediction. Subsequent research (Schaeffer et al., 2023) attributes emergence to two reasons. The first one is non-smooth metrics. We disagree with it since the alternative metrics could not explain the sudden increase in target metrics such
as exact match, which are of paramount interest to us. We align with their second attribution to improve resolution by adding more test samples. Different from their method, we propose a practical method to improve resolution without the need of adding test samples. Our work is also the first open-source attempt to quantitatively investigate the scaling behavior of task performance, proposing task scaling law and accelerated emergence phenomenon.
3 Pilot Experiments on Increasing Random Sample Numbers
We initiate our exploration by visualizing the effect of improving evaluation resolution on open-sourced models. We choose four small models and evaluate them on two subsets of BigBench task (Srivastava et al., 2022): Emoji Movie and Date Understanding (see Appendix D.4.2 and D.4.3 for the subsets). We employ beam search and random sampling (with three sample times: 1, 100, and 10,000) during decoding. If any sampled answer of a test instance is evaluated as correct, then the instance is marked as “passed”. We present the number of passed instances in Figure 2.

**Figure 2:** BS denotes beam search, RS-K denotes random sampling K times.
We can see that even for such tasks presenting substantial difficulty to small models, most instances are passable with enough random sampling times, which will contribute to the subtle task performance improvement. Inspired by this observation, we propose our evaluation strategy that centered around improving the resolution of evaluation.
4 Methods
In this section, we describe our methods to increase the resolution of evaluation, which empowers the investigation of the scaling behavior of task performance. The first is an evaluation strategy PASSUNTIL, and the second is an instance-level scaling curve fit. We also derive the task scaling law based on the loss scaling law.
4.1 Infinite Resolution with PassUntil
We view task performance evaluation as the measurement of the probability of a model passing a task. Given a task instance \( s \), suppose the probability that a model pass it is \( P(s) \), our job is to estimate \( E_s[P(s)] \). Randomly sampling a fixed time \( K \) could estimate \( P(s) \). However, it is hard to define the budget \( K \) that is both acceptable in computation and has enough resolution for hard samples that have small \( P(s) \). We propose PASSUNTIL, which performs an evaluation right after an answer is generated and determines whether it is passed before we sample the next generation. We stop sampling until \( r \) (a constant) samples have passed the evaluation and record the sampling number \( K \). We name the estimate of \( P(s) \) as the PASSUNTIL score PU, which is defined as
\[
PU = \frac{r}{K}
\]
Theoretically, PU has the capability to measure success rates that are infinitesimally small. The PASSUNTIL has the following properties.
---
2The definition of “pass” does not need to be generating exactly the ground truth answer. For example, suppose we predict model’s performance on AlpacaEval (Li et al., 2023b), we can define “pass” as the model generation being better than GPT-4, judged by GPT-4. Therefore the “pass” has broad application.
Theorem 1. PU is a maximum likelihood estimate for \( P(s) \).
Proof. The failure time \( f = K - r \) follows the negative binomial distribution with success probability \( P(s) \). \( r/K \) is known to be an maximum likelihood estimate for \( P(s) \). \( \square \)
In practice, we set \( r \) to as small as 1 or 2 considering the efficiency of evaluation. We also set the upper bound of \( K \) to a large number, such as \( 10^5 \), to prevent endless sampling if we encounter an extremely low \( P(s) \). Note that many instances stop before reaching this upper-bound. Next we discuss the necessity and limitations of PASSUNTIL.
Necessity. Generally, deriving \( P(s) \) theoretically from the token probability on the ground truth solution is not feasible. This is due to two primary facts: firstly, there are likely to be multiple viable solutions; secondly, even though there is only one solution, there exist multiple decoding approaches besides the optimal tokenization to decode the solution\(^3\).
Limitations. (1) Currently, our evaluation strategy is designed to be applicable when a random baseline achieves \( P(s) = 0 \). In the context of multiple-choice grade as the evaluation metric, evaluations tend to exhibit a biased high score relative to the true performance of the model (e.g., \( P(s) = 0.25 \) with random guess for four options). This random noise can overshadow the improvements made by smaller models. The exploration of scaling law for tasks with non-zero random baselines remains a subject for future research. (2) We currently only consider random sampling as a viable target decoding strategy due to its widespread use in LLMs. Using beam search as target decoding strategies and their relationship with random sampling poses an interesting avenue for future exploration and study.
4.2 From Loss-Scaling Law to Task Scaling Law
Then, we derive the task scaling law that PASSUNTIL will follow. We assume that the test loss of generating the next token decreases according to the scaling law of Eq.(1).
\[
PU \sim \prod_{i=1}^{[y]} P(y_i | x_{1:i}, y_{1:i-1}) = \prod_{i=1}^{[y]} \exp(-c_i N^{-\alpha_i} - L_{0i}),
\]
where \( x_{1:[x]} \) is the input sequence and \( y_{1:[y]} \) is the most probable sequence that decodes the correct answer (assuming its dominance compared to other sequences). Assume that the test sample is passable given a sufficiently potent LLM, then the irreducible loss for each token \( L_{0i} \) approaches 0. And assume the test loss of each token in the answer is decreasing with uniform speed when scaling (i.e., \( a_i = a, \forall i \)), we can derive the following function for PU on task performance:
\[
PU(c, \alpha; N) \sim \exp(\sum_i -c_i N^{-\alpha}) = \exp(-cN^{-\alpha})
\]
where \( c = \sum_i c_i \). The resulting mathematical model is similar to that in GPT-4 technical report (OpenAI, 2023) and Equation (4) in Schaeffer et al. (2023).
4.3 Fitting Strategy
Dataset-level Fit. When fitting the parameters \( c, \alpha \) in PU, a dataset-level fit is plausible. For the \( j \)-th model in the scaling curve, the individual test sample’s PU is first averaged over the test set to procure \( \log(-\log(PU(N_j))) \), followed by a linear regression to \( \log N_j \).
Instance-level Fit. We notice that differences between instances lead to different scaling behaviors, which means a dataset-level fit might not be accurate when the difficulty in the test set is diverse. For example, PU on easy questions get saturated to 1 on a small model while the hard questions still receive trivial performance (see Appendix B.1 for illustration). We propose to fit an individual PASSUNTIL score (IPU) for each question and aggregate them into an estimate for the whole dataset.
\[
PU(\{c_s, a_s\}; N) = \frac{1}{|S|} \sum_s IPU(c_s, a_s; N)
\]
\(^3\)For example, [4513], [717,18], and [16,17,18] all decode into string “123” in GPT-4’s tokenizer with vocab “cl100k-base”.
5 Predictable Scaling Experiments
In this section, we demonstrate how the proposed framework works in practice. We first pre-train two series of language models ranging from 0.03B to 2.4B using two dataset mixtures. We predict the performance of the 2.4B model based on the performance of the rest of the models in the series.
5.1 Scaling Configurations.
Model Configurations. We propose to keep a consistent “shape” of the Transformers while expanding their sizes. For the $i$-th model in the scaling curve, we set the number of layers to be $4i$, the number of attention heads to be $\lfloor \frac{i(N+1)}{2} \rfloor$, and the dimension of head to be 64. This results in the hidden state’s dimension $d_m$ being $d_h n_h$. We set the dimension of the feed-forward layer to be $2.5 d_m$. The specific values are listed in the model configurations in Table 3 of Appendix D.1. The architecture is similar to LLaMA (Touvron et al., 2023a) (see Appendix D.1 for details).
Pre-training Corpora. For series 1, we use the StarCoder dataset (Li et al., 2023a) as our pre-training data. For series 2, we use a mixture of StarCoder and Pile (Gao et al., 2020) dataset. Leveraging the optimal compute LLMs (Hoffmann et al., 2022), we set the maximum pre-training tokens for each model size to be the $20N$, where $N$ is the number of non-embedding parameters of the model. The detailed portion within the data mixture can be seen in Appendix D.2.

Figure 3: Training loss of the two series of models trained on different data mixtures. The internal figure illustrates the end-step reducible loss relative to model size, represented in logarithmic scale.
Hyper-parameters. Hyper-parameters are also of paramount importance in training a series of models that scale successfully. We examine the cosine learning rate scheduler, aligning our approach with that of Hoffmann et al. (2022), and determine the critical batch size in accordance with Kaplan et al. (2020). Nonetheless, due to constraints in space, we move the details to Appendix D.3.
5.2 Loss Scaling Law Verification.
We present the training loss curves for models in Figure 3. It is evident that the end-step training losses decrease in line with the scaling law. These empirically observed loss scaling laws lay a foundation for the subsequent approximation of task performance. Note that despite the occurrence of the loss spike in the 1.5B and 2.4B models, convergence to the scaling law is ultimately achieved, exemplifying the robustness of such an empirical law.
5.3 Dataset-level Fit
We select HumanEval (Chen et al., 2021), Emoji Movie, and Date Understanding (Srivastava et al., 2022) as the evaluation tasks. Note that Emoji Movie is conventionally cited as representing “emergent abilities” (Srivastava et al., 2022) (see the right figure in Figure 1). HumanEval is assessed using a zero-shot learning setting, while Emoji Movie and Date Understanding are evaluated employing 4-shot In-context Learning (Brown et al., 2020). We additionally use Chain-of-Thought Reasoning (Wei et al., 2022b) for Emoji Movie. See Appendix D.4 for the illustration and evaluation details of each task. We remove the distracting test instances from our evaluation list. For Emoji Movie, we remove the movie names that are common words (e.g., “it”) identified by NLTK (Bird et al., 2009). These common words make the exact string match susceptible to random guess’s correctness (See Appendix D.5 for details).
Figure 4: Task performance scales predictably with model scale. The red points denote the real performance of 2.4B model, which are close to the task scaling laws fitted from 0.03B to 1.5B.
We observe that all three tasks exhibit a strong linear relationship between $\log(-\log(\text{PU}))$ and $\log(N)$, verifying the success of task scaling law given by Eq.(3). The estimation of the scaling law functions utilizes the 0.03b to 1.5B models, which predicts the performance of the 2.4B model with small yet acceptable deviations.
5.4 INSTANCE-LEVEL FIT
According to § 4.3, we take the difference among test samples into consideration to improve the estimation. We plot how instance-level PASSUNTIL scales in Figure 13 of Appendix E.4. The fitted curves demonstrate that the performances of different instances not only originate from unique starting points but also scale at varying speeds. Nevertheless, they can be fitted by task scaling law individually. Some instances deviate from the scaling law, which needs future investigation.
| Method | HumanEval (1) | HumanEval (2) | Date Understanding (2) | Emoji Movie (2) |
|-----------------|---------------|---------------|------------------------|-----------------|
| Real Value | 0.05990 | 0.04279 | 0.00346 | 0.002608 |
| Dataset-level Fit | 0.06550 | 0.05191 | 0.00377 | **0.002381** |
| Instance-level Fit | **0.05987** | **0.04402** | **0.00352** | 0.003112 |
Table 1: Prediction of our framework compared to the real performance on two series of models. The number after the task denotes the model series used in the evaluation.
Figure 5: PU w.r.t. the test loss on HumanEval of model series 1.
Figure 6: We successfully predicted the performance of 2.4B model with 0.05% deviation (left) and 1.7% deviation (right).
**Estimating PASSUNTIL from Test Loss.** Estimating at the instance level presents challenges for hard instances that lack adequate non-zero PU values for fitting. These samples may also contribute to PU as the model size increases. We suggest leveraging test loss on ground truth answers to assist the prediction for such instances (See Appendix A.2 for a detailed discussion of its validity). We leverage the “easy” instances, which have both test loss and non-zero PU to estimate the relation between test loss and PU (Figure 5). Then we predict the test loss of each instance on 2.4B model based on 0.03B ~ 1.5B models. Finally, we transform the predicted test loss to predicted PU according to the aforementioned relationship. Details are presented in Appendix E.2. We provide the final prediction result of 2.4B model in Table 1, and draw the predicted PU curve in Figure 6. We can see that the predictions are accurate, with only 0.05% difference on HumanEval of series 1 and 1.7% difference on Date Understanding of series 2.
6 QUANTITATIVE ANALYSIS OF EMERGENCE
Building on the discovery of the predictability of task performance, we proceed with our investigation into a quantitative analysis of scaling behavior of a broader range of tasks. We prove that even with the refined resolution brought by PASSUNTIL and predictability of other emergent abilities, there are still certain abilities hard to be predicted. We establish their mathematical definitions, and examine the possible explanations for such scaling behaviors.
We study the scaling curve on the “Unnatural In-context Learning (UICL)” categories in Big-Bench (Srivastava et al., 2022). “Unnatural In-context Learning” is a set of 8 tasks designed to specifically study the in-context learning ability. These tasks involve input-output pairs that have been intentionally altered to deviate from the typical training distribution, thereby necessitating the model’s focus on unconventional in-context patterns. Task details and examples are in Appendix D.4.4. We randomly select 20 questions in the test set from each task and sample 4-shot examples from the remaining questions to serve as in-context examples. The evaluation metric employed is the exact match, and the upper bound sampling time is set to $10^5$.
When fitting the scaling curve, we only utilize the dataset-level PASSUNTIL since these test instances are manually constructed to test one skill of LLM and thus might be devoid of difficulty variation. Since our test set is small, we bootstrap 100 times from the 20 question’s test result and use the bootstrapped to calculate the standard error of each PASSUNTIL estimate (shown in the green hue in the Figures).
Categorization of Emergence. The evaluation on task “Dates” and “Identity” is shown in Figure 7. Other tasks are shown in Appendix E.3. “Dates” exhibit very smooth and consistent improvement starting from 0.03B, while the other tasks are a bit twisty. Nevertheless, 5/8 of these in-context learning tasks display a strictly concave function between $\log(-\log(\text{PU}))$ and $\log N$. The others (3/8) miss 1 or 2 valid estimation points due to their extreme difficulty for 0.03B and 0.1B models, since 0 PASSUNTIL is overseen even with $10^5$ sampling time, which we left for future exploration. The 5/8 tasks deviates from the scaling law (Eq.(3)) which requires this function to be linear. This means, unlike those tasks governed by the task scaling law, where “growth speed” $\alpha$ is uniform across different model sizes, there exist some tasks that see an increase in “growth speed” $\alpha$ as models enlarge. This phenomenon exemplifies an accelerated emergence phenomenon. To provide concrete discussion of accelerated emergence, we provide our categorization of task scaling curves first.
Mathematical Definition of Emergence. Since the loss scaling law of Eq.(1) is the only widely accepted principle during model scaling, we rely on its derived task scaling law of Eq.(3) as a separator between emergence and other scaling behavior.
Definition 1. Given a spectrum of models, we let the number of non-embedding parameters be variable $N$, suppose the PU($N$) estimated by PASSUNTIL on a task is a continuous function of $N$. Define $F(N) = \log(-\log(\text{PU}(N)))$, then the scaling curve of a task can be categorized into three basic main categories:
4if $F(N)$ has both convex and concave parts, then we can call it mixed growth.
1. if \( F(N) \) is a linear function of \( \log N \), then the task obeys scaling law growth.
2. if \( F(N) \) is a convex function of \( \log N \), then the task obeys sub-scaling law growth.
3. if \( F(N) \) is a concave function of \( \log N \), then the task obeys super-scaling law growth, or “accelerated emergence”.
Figure 8 shows visualizations of three types of growth. Qualitatively, the scaling curves of all three types appear analogous to exponential growth when performance starts to become noticeable. However, they are qualitatively different. Task scaling curves with task scaling law growth or sub-scaling law growth are easier to predict and control, whereas accelerated emergence is not easy to predict, which might go out of control when the model gets larger.
**Cause of Shape of Scaling Curve.** The above mathematical definition provides us the opportunity to examine the hypothesis regarding the genesis of these scaling behavior. Here, we first study the following hypothesis: Emergent abilities may be induced by multi-step reasoning (Srivastava et al., 2022; Wei et al., 2022a; Schaeffer et al., 2023).
We prove that, surprisingly, “multi-step reasoning” leads to sub-scaling law growth.
**Theorem 2.** Suppose each reasoning step’s success rate, measured by PASS UNTIL obeys the scaling law growth, then the multi-step success rate follows the sub-scaling law growth.
**Proof.** Suppose the success rate of reasoning step \( i \) obeys a scaling law growth with coefficient \( c_i \) and \( \alpha_i \), then \( F(N) = \log \left( \sum_i c_i \exp(-\alpha_i \log N) \right) \). Using Cauchy–Schwarz inequality, we can prove that \( \frac{\partial^2 F}{(\log N)^2} \geq 0 \). Therefore, the scaling curve is convex. See Appendix C.1 for more.
This proof can also be understood more intuitively: the growth speed will initially be boosted by the improvement of those easy steps, and eventually be bounded by the most difficult steps, thus showing a decreasing growth speed. Then, we propose an alternative hypothesis: suggesting that multiple neural “circuits” (Nelson et al., 2021) may be represented within the LLMs, and that as long as one such circuit can successfully solve the test instance, the test instance is deemed passed. This hypothesis is inspired by the explanation of “grokking” phenomenon given by Varma et al. (2023). They propose that there exists a memorization circuit and a generalization circuit inside the transformers, and the “grokking” phenomenon is led by the generalization circuit getting more efficient than the memorization circuit during training. We will demonstrate that with this hypothesis, the scaling curve exhibits characteristics of emergence.
**Theorem 3.** Suppose multiple circuits \( i \) exist in the LLMs that are responsible for solving the task, and each displays scaling law growth and has \( PU_i \). And suppose the success rate of the task is the majority voting of these circuits, i.e., \( F(N) = \log(-\log \max_i PU_i) \). Then, \( F(N) \) is a concave function of \( \log N \).
**Proof.** \( F(N) = \min_i (\log c_i - \alpha_i \log N) \). Since the minimum operator keeps concavity, \( F(N) \) is a concave function of \( \log N \). See Appendix C.1 for a more elaborated proof.
We loosely test the hypothesis by fitting the scaling curve for the UICL task. In practice, similar to Varma et al. (2023), we adopt a soft version of the majority voting. We apply a weighted combination between two circuits. And we assume the number of the circuits is 2. Therefore, we fit \( w_1(\alpha_1 \log N - \log c_1) + w_2(\alpha_2 \log N - \log c_2) \) to \( F(N) \), where \( w_1 \) and \( w_2 \) is given by the Softmax of \( \alpha_i \log N - \log c_i \). The resulting fit curve is demonstrated in the green line in Figure 7 and Appendix E.3. We can see that this hypothesis produces fit curves that align more accurately with the observed performance scaling curve.
**7 Conclusion.**
Our work introduces a novel evaluation strategy capable of detecting minimal performance improvements during model scaling, thus opening avenues for quantitatively measuring the task scaling laws and the emergence abilities. This method has enabled the successful prediction of the task performance of larger models. Additionally, we have performed a quantitative analysis of emergent abilities, providing a clearer insight into their nature and origination. This research not only enhances our understanding of LLMs’ scaling properties but also sets the stage for future explorations in scientific scale-up of LLMs.
ETHICAL STATEMENT
In this paper, we demonstrate that although we can predict a set of emergent abilities, the accelerated emergence remains hard to be predicted. The hypothesis regarding the cause of accelerated emergence implies that we need a better understanding of the working mechanism to produce accurate predictions for such emergent ability. Without an understanding of the working mechanism, any fit curve to the early stage of task performance improvement might be governed by another stronger, yet unknown, “generalization” circuit when the model gets sufficiently large. Thus, this hypothesis calls for deeper research into the mechanism of LLMs to prevent the safety concerns brought by accelerated emergent abilities.
REPRODUCIBILITY STATEMENT
We will open-source and all evaluation scripts for reference.
ACKNOWLEDGEMENTS
This work is supported by the National Key R&D Program of China (No.2022ZD0160501).
REFERENCES
Steven Bird, Ewan Klein, and Edward Loper. Natural language processing with Python: analyzing text with the natural language toolkit. ” O’Reilly Media, Inc.”, 2009.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guéstrin, Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023.
Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova Dassarma, Dawn Drain, Nelson Elhage, et al. Predictability and surprise in large generative models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 1747–1764, 2022.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835–10866. PMLR, 2023.
Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
|
1GdAJ3GsOw
|
Why Tensor Parallelsim work is compared PyTorchDDP? For fairness, how about comparing with other Tensor Parallelism works like PyTorchFSDP, Alpa (its intra-op parallelism), Colossal-AI (its tensor parallelism part), DeepSpeed (its Automatic Tensor Parallelism), and Megatron-LM?
|
DistPar: Tensor Partitioning for Distributed Neural Network Computing
Anonymous authors
Paper under double-blind review
Abstract
Existing distributed training systems suffer from the difficulties of adapting to diverse model architectures and balancing the trade-off between computational and communication costs. We introduce Distributed Partitioning (DistPar), a framework that allows users to develop parallel models with the ease of writing single-device programs. We establish the basic properties of tensor partitioning, which significantly expand the search space for optimal parallel strategies. The process of distributing global tensors from a single-device perspective is driven by the innovative use of collective communication primitives and their extensions which represent conversions between arbitrary tensor distribution properties. To further address the challenge of parallel scheme optimization, we carry out a cost function that considers both computational and communication costs. Guided by the cost function, the best-performing parallel scheme is automatically selected with configurable parameters, thus simplifying the process of developing parallel models. We demonstrate state-of-the-art results on extensive experiments. Moreover, DistPar reaches 50% higher throughput in large-scale face recognition tasks and a 20% improvement in language modeling tasks compared with data parallelism provided by PyTorch. This performance improvement aligns with the expected speedup and is particularly notable as the number of computing devices increases. The code will be released at https://github.com/DistPar.
1 Introduction
In recent years, deep learning has been widely applied in many fields such as image, speech, and natural language processing (Angelova et al., 2015; Ba et al., 2015; Frome et al., 2013; Gonzalez-Dominguez et al., 2015; Hinton et al., 2012; Heigold et al., 2013; Karpathy et al., 2014; Le, 2013; Maddison et al., 2015). With the increasing demand for training efficiency and data processing capabilities of deep learning, single-device training systems, although useful in certain scenarios, may struggle to meet the requirements. Hence, the distributed training approach has become an effective way to improve computing power constantly.
Distributed deep learning’s performance relies primarily on efficient collective communication to adapt to different given computational devices (Yuan et al., 2022; Lepikhin et al., 2020). Existing deep learning parallelism libraries have made great efforts on it. Typically, parallelization strategies in the context of distributed deep learning include two main aspects: data parallelism and model parallelism. Data parallelism, the former, entails the further subdivision of a mini-batch of data, subsequently distributed across computational nodes, which facilitates the training of substantial volumes of data (Baruah et al., 2022; Shallue et al., 2018; Nguyen & Wahbi, 2021; Herlihy et al., 2021; Krizhevsky, 2014). Model parallelism, the latter, is conventionally applied to partition neural networks into segments that are subsequently deployed across computational nodes (Dean et al., 2012; Narayanan et al., 2021; Huang et al., 2018; Harlap et al., 2018; Shoeybi et al., 2020; Xu et al., 2021; Wang et al., 2021; Bian et al., 2021a). Based on the parallelism strategies mentioned, we believe a comprehensive approach that aggregates them with each other, enables faster computation and efficient utilization of computational devices.
Existing parallelism libraries like Pytorch, its DistributedDataParallel interface is challenging to users, because it requires users to design the communicative module of parallelism strategies manually. Hence, it’s necessary for us to design a set of parallel operation semantics from the bottom to
achieve an end-to-end structure so that users can handle parallel training tasks on multiple devices with the same ease as a single device.
Our unified strategy, DistPar, introduces a set of tensor partitioning attributes aimed at instructing the allocation of global logical tensors to specific physical devices—referred to as physical tensors for simplicity. DistPar merges these devices into a coherent logical supercomputer, allowing developers to handle parallel training tasks on multiple devices as simply as a single device. This enhanced accessibility for individual users, so they can focus on more top-level design.
The process of distributing global tensors from a single-device perspective is driven by the innovative use of collective communication primitives and their extensions which represent conversions between arbitrary tensor distribution properties. This capability is integrated into DistPar through the inclusion of pass layers. Therefore, DistPar effectively enhances the extensibility, enabling to be adaptive to different model structure and computational device.
To further address the challenge of parallel scheme optimization, DistPar assesses the cost in a comprehensive manner, which combines the conversion of parallel attributes across various parallelization strategies. At the meantime, to simplify the process of designing and selecting the best scheme, we provide a configurable parameter so that users can easily optimize computational cost and communication cost collaboratively and automatically. Evidently, the cost design helps users to adapt to different computational devices and design their own parallelism program easily.
The overall contributions are as follows:
• We present a novel tensor partitioning strategy, DistPar, aimed at generating a comprehensive range of parallelization strategies.
• We employ meticulously designed intermediate primitives to facilitate the automatic transformation of distributed properties within the context of physical tensors. These mechanisms naturally support arbitrary parallelization combinations.
• We introduce cost hyperparameter to generate different parallelization strategies, enabling the user to evolve the selection of optimal parallelization schemes.
• We prove that DistPar attains state-of-the-art performance in standard benchmark assessments.
2 RELATED WORKS
Numerous distributed parallelism strategies exist, with data parallelism and model parallelism being as the most widely adopted approaches.
Data parallelism involves dividing a mini-batch of data into smaller segments and distributing them to different computational nodes (Baruah et al., 2022; Shallue et al., 2018; Nguyen & Wahib, 2021; Herlihy et al., 2021; Krizhevsky, 2014). In data parallelism (Krizhevsky, 2014), each device retains a complete copy of the distributed neural network (DNN) model and processes a portion of the entire training dataset. This approach enables the training of large datasets, thereby enhancing both the scale and speed of training. However, data parallelism introduces inter-device communication overhead during the synchronization process when model weights are updated. This issue can become more apparent as the model size increases, which poses some challenges to the scalability and compatibility of data parallelism.
Model parallelism offers an alternative to data parallelism by directly partitioning DNN models across devices. With model parallelism (Kingma & Ba, 2017; Fang et al., 2023), weight parameters within the model are distributed among the available workers, which are typically GPUs. This approach consists of two main components: tensor parallelism and pipeline parallelism.
Tensor parallelism involves splitting tensors across an array of devices, typically occurring between the forward and backward propagation phases (Shoeybi et al., 2020; Xu et al., 2021; Wang et al., 2021; Bian et al., 2021a; Wang et al., 2021b; Bian et al., 2021b; Cannon, 1969; Berntsen, 1989; van de Geijn & Watts, 1995; Solomonik & Demmel, 2011). Megatron-LM (Shoeybi et al., 2020) introduced 1D tensor parallelism, which divides the linear layer along either the column or row dimensions. When employing tensor parallelism, communication tends to be frequent, and the data volume transferred during these communications is often substantial.
Pipeline parallelism divides the model on a layer basis, occurring at the junction of adjacent stages (Huang et al., 2018; Harlap et al., 2018; Li & Hoefler, 2021). Recent developments, such
as GPipe (Huang et al., 2018), have introduced pipeline parallelism, which involves synchronous weight updates. In this case, communication remains frequent but typically involves smaller data volumes. Due to the inherent characteristics of pipeline parallelism, amounts of device idle time called bubbles are generated.
**Comparison.** To reduce communication volume, tensor parallelism is preferred. Meanwhile, to improve peer-to-peer communication, pipeline parallelism is a suitable choice. However, it is equally important to note that bubbles cost a significant amount of time. To mitigate this, it is recommended to limit the number of pipeline stages to the number of micro-batches. In practice, when the level of tensor parallelism matches the number of devices, performance tends to reach its peak.
Other optimized strategies, as demonstrated in previous studies (Jia et al., 2018a,b), concentrate on tensor-related refinements along multiple axes to determine the most optimal parallelization strategy.
Achieving high throughput at a large scale demands innovative and intricate design across various facets. This includes the intelligent partitioning of computational graphs onto devices to minimize data transfer over the network while minimizing device idle time. It also involves the implementation of communication optimizations specific to the domain.
**Unified strategy.** Based on the comparisons mentioned earlier, we conclude there is an imperative need for a unified strategy that amalgamates various advantages. A commonality observed in existing parallelization strategies is the shared goal of optimizing the utilization of computational resources and enhancing overall computational efficiency. However, it is crucial to acknowledge that a single parallelization strategy often struggles to meet the efficiency requirements of complex business models. These individual parallelization strategies fall short in planning and executing the global logical computational graphs effectively. Therefore, a holistic approach to the entire process is necessary. We have identified three key indicators—accessibility, compatibility, and communication cost—as crucial elements to facilitate comprehensive considerations.
### 3 METHODOLOGY
This section establishes the theoretical foundation for subsequent experiments detailed in Section 4. We also introduce the proposed intermediate primitives designed to optimize model communication cost. Moreover, we illustrate complex operations using intermediate primitives. To be clear, we induce the transformations of distributed properties, offering a comprehensive perspective on distributed computation and collective communication. Finally, we employ partition analysis to quantitatively assess associated expenses in the theory.
#### 3.1 DISTRIBUTED PROPERTIES
Many parallelism strategies suffer from the bottleneck to be adaptive to different model structures and computational devices, so we need to design parallelism operation semantics from the bottom of the distributed training system. In this way, we can satisfy arbitrary parallelism strategies and their extensions. Distributed properties involve various parallel-related terms, with the goal of modeling global distributed computation by parameterizing operator deployment schemes. Within the modeling framework, developers have access to flexibly construct algorithmic models and configure distributed attributes according to their preferences. Formally, distributed properties are defined as a set of parameters associated with primitive operators. Their core framework involves the registration of operators along with their distributed attribute signatures. Here, we define the framework and further explain it with a qualitative analysis. Specifically, we discuss four key distributed properties: Placement, Scatter, Broadcast, and PartialReduce.
**Placement** of each operator in the logical graph specifies the devices where logical operators will be deployed. In the case of common data parallelism, all operators are deployed to all devices. Logically, all operators are designed to run on a single device, but in practice, they operate on different devices based on their placement configuration.
**Broadcast** is a procedure that involves sending the complete data of a logical tensor to all other computational nodes in the cluster, resulting in the creation of physical tensors that are copies of the logical tensors. Its process ensures that each physical operator has access to the entire dataset stored in the logical tensor. For convenience, we denote the Broadcast attribute as B.
Scatter involves splitting data from a logical tensor into chunks and sending these chunks to devices in a certain order. This creates local physical tensors. The Scatter property is characterized by a single parameter for partitioning, denoted as $S(0)$ for horizontal slicing and $S(1)$ for vertical-axis slicing. Scatter represents a one-to-multiple distribution similar to Broadcast. Their distinction is that Broadcast sends identical copies to all devices, whereas Scatter sends different chunks to each device. For simplicity, we denote Scatter as $S$.
PartialReduce signifies that the physical and logical tensors have matching shapes, but the values in the physical tensors constitute a subset of those in the logical tensors. Figure 1(a) illustrates the characteristics of PartialReduce. The complete global logical tensor can be reconstructed by reducing the physical tensor at the target location across all devices. Logically, the global logical tensor $Y$ is obtained by the logical tensors $U$ and $V$. However, in the physical implementation, component $U_0$ of logical tensor $U$, sliced by $S(1)$, and component $V_0$ of logical tensor $V$, with $S(0)$, are deployed on device 0. They are utilized to execute the corresponding operator, yielding the local physical tensor $Y_0$. Meanwhile, we use the same operation to obtain $Y_1$. Consequently, $Y$ can be reconstructed by reducing $Y_0$ and $Y_1$. Furthermore, $Y_0$, $Y_1$, and $Y$ share an identical shape.
3.2 Conversions of Distributed Properties
This section derives the intermediate primitives and their variants, such as complex operation construction, and conversions between distributed properties, and also mentions the crucial intermediate primitives for converting diverse distributed attributes and evaluating the associated communication cost. The optimal parallel strategy selection relies on minimizing communication overhead. Converting tensor distributed attributes between devices incurs overhead, except when executed on the same device, in $S2P$, which eliminates communication costs. However, cross-device communication cost in conversions is proportional to the size of the logical tensor $T$. Furthermore, induced from the modeling, we introduce existing intermediate primitives. The combinations of primitives and various conversions between distributed properties have been shown in Appendix A.1, and the complex operations are included in Appendix A.2.

Figure 1: An example of a PartialReduce procedure(a), where PartialReduce is denoted as $P$, and the behavior of $12P$(b), $12P$ is an atomic operation deploying a global logic tensor to a local reduction, where one device places a physical tensor, a copy of the global logic tensor, other devices only place physical tensors that have the same shape as the global logic tensor but with all values set to zero.
3.3 Immediate Inference
Immediate inference involves deducing the distributed properties of the output from the attributes of the input tensor. Table 1 in Appendix A.1 illustrates the process of directly inferable distribution using the matmul operator, where each case of the input’s properties is specified, and the valid output’s distributed properties are inferred. It takes a global logical tensor as input and infers the distributed attributes of local physical tensors across all devices. If the inference depends on the assistance of intermediate primitives, we select the most cost-effective primitive to insert between the input and the local physical tensor beforehand. When two adjacent operators establish a producer-consumer relationship and the distributed properties of the output tensor from the producer operator do not align with the properties required by the consumer operator, DistPar needs to dynamically derive intermediate transformation primitives. These primitives are automatically inserted between
the producer and consumer operators through the pass layers to ensure alignment. We present an example of inferring the intermediate primitive AllGather in Appendix A.1.2
3.4 COST DESIGN
The overall cost is evaluated based on both computational cost and communication cost. To be specific, in order to optimize computational cost and communication cost collaboratively, we need to characterize the trade-off between them. Therefore, we introduce the ratio of computational cost to communication cost, which is denoted by beta.
**Computational Cost** in DistPar is simplified to the sum of the elements of the input and output tensors corresponding to different parallelization strategies, due to the fact that DistPar assumes all parallelization strategies use the same operator library.
**Communication Cost** is defined as the total communications across multiple devices. In our implementation, communication cost is estimated using the conversion cost that results from the conversions of distributed properties. Details are revealed in Appendix A.1.
4 EXPERIMENTS
In this section, we conduct a comparative analysis of DistPar, TensorFlow, and Pytorch to demonstrate the effectiveness of DistPar.
4.1 SYSTEM PERFORMANCE
**Setup.** We conducted a comparative evaluation, analyzing ResNet-50 pre-trained on the ImageNet-2012 dataset (Heigold et al., 2013) for image recognition and the BERT-Base model (Karpathy et al., 2014) for query answering in natural language processing tasks. We assessed the throughput and speedup of these models implemented with DistPar, as well as the data parallelism libraries of PyTorch and TensorFlow. It is worth noting that our emphasis is on system performance metrics rather than learning objectives.

Figure 2: Training speed for 2 models using 32-bit floats. Throughput is measured in images per second for the ResNet-50 and in sentences per second for the BERT Base model. The fastest speed for each model is shown in the group of green rectangles in subplots (a) and (c). Larger batch sizes narrow the distance between DistPar’s speedup curve and the ideal curve, indicating that DistPar can effectively leverage system scalability with large-scale datasets in subplots (b) and (d).
Analysis. We analyze the system performance in view of throughput and speedup. On mainstream models for various tasks, namely ResNet-50 in Figure 2(a)(b) and BERT in Figure 2(c)(d), we conducted a comparative evaluation on the performance of DistPar’s automatically selected parallelism strategy against data parallelism in PyTorch and TensorFlow frameworks.
• Throughput Comparison Figure 2(a) and (c) illustrate the variation in the throughput performance of the three libraries as the number of computational devices changes. When comparing the throughput of DistPar-implemented ResNet-50 models with 16 and 32 computational devices, it is observed that they outperform the suboptimal PyTorch implementation by 1500 and 2300 images/second, respectively. In the case of BERT-base models, the respective throughput improvements are 500 and 750 sentences/second. As depicted in Figure 2(a) and (c), which illustrate the throughput of DistPar across various numbers of computational devices, it’s evident that DistPar consistently outperforms the comparative frameworks. Furthermore, this advantage becomes more obvious as the scale of computational devices increases. These findings underscore the superior overall throughput performance of DistPar, owing to its designed and selected global parallelization strategy in comparison to the data parallelism strategy employed by the comparative frameworks.
• Speedup Comparison Figure 2(b) and (d) illustrate the variation in the speedup performance of the three libraries as the number of computational devices changes. With the increase in the number of devices, it becomes more evident that both the ResNet-50 model(b) and the BERT model(d) implemented with DistPar(blue curve) closely approach the ideal system(black curve), while TensorFlow (green curve) follows DistPar as the next best option. For ResNet-50 model(b) and BERT model(d), when the number of computational devices reaches 32, they achieve speedups 2 and 5 times higher than PyTorch(red curve), respectively. This indicates that when dealing with a larger number of computational devices, the performance improvement of DistPar over PyTorch’s data parallelism strategy becomes more notable. These results collectively highlight that, in comparison to the baselines, DistPar exhibits enhanced system scalability. From the figure, it’s clear that DistPar outperforms the existing TensorFlow and PyTorch. When batch sizes get larger, the distance between DistPar’s speedup curve and the ideal curve is narrowed, indicating that DistPar can effectively leverage system scalability with large-scale datasets, showcasing its promising adaptability. In summary, DistPar can boost the system’s overall performance including throughput and speedup, and achieve promising results compared with popular deep learning parallelism libraries.
4.2 Hyperparameter Optimization
Setup. This experiment demonstrates DistPar’s optimization of parallelization strategies, as Figure 3 shows. The definition of overall cost can be found in Section 3.4. Specifically, the evaluating environment is configured with 4 * NVIDIA GeForce GTX 1080 GPU.
Figure 3: Results of the hyperparameter optimization experiment. Since the values of beta corresponding to the maximum throughput vary on different models, we can select the optimal parallelism strategy for each model by adjusting the value of beta (a). Compared with the cost design of baselines that only takes communication cost into account, DistPar has notably better performance due to its collaborative optimization on both computational cost and communication cost (b).
Analysis. DistPar exhibits varying parallelization strategies based on the ratio of computational cost to communication cost, denoted as the hyperparameter beta. This leads to different distribution characteristics of input and output tensors for the operators comprising the model. For different models, the beta value corresponding to the maximum throughput varies. For LeNet, AlexNet, Vgg16, and MobileNetV2, the beta values corresponding to their respective maximum throughputs are 10, 1, 0.1, and 0.01, with the corresponding speedup percentages being 7.48%, 64.75%, 2.83%, and 8.41%. The results highlight that DistPar adapts its parallelization strategy based on beta, resulting in different throughput outcomes. It is worth noting that the beta value corresponding to the maximum throughput is not consistent with the baseline which only considers the communication cost. This implies that, compared to a baseline approach that only considers communication cost, DistPar effectively leverages both computational and communication costs to guide its parallelization strategy selection. In summary, DistPar empowers users to optimize parallelization strategies for different models by fine-tuning the hyperparameter beta. This enables the selection of the parallelization strategy that corresponds to the maximum throughput for each model.
4.3 Scalability Analysis
Setup. In order to observe the DistPar’s implementation of the large-scale face recognition insightface model, we conduct a series of separate experiments. The throughput on the insightface model was evaluated on different batch sizes and the number of categories. The configured with 8 GPUs of NVIDIA Tesla V100, FP32. Moreover, data parallelization with Broadcast and model parallelization with S1. To explore more cases, we vary the batch size and parallelization options for the fully connected layer of the last layer of the insightface model. As shown in Figure 4.

(a) 
(b) 
Figure 4: Performances of DistPar, data parallelization, and model parallelization, with batch_size fixed to 8 and 64. As the number of categories and the batch size vary, DistPar shows an identical pattern of prioritizing data parallelism when the number of categories is small and tends to select model parallelism when it is gradually increasing. DistPar can outperform data parallelism by 120% and 50% within batchsize fixed to 8 and 64 respectively, which confirms that DistPar is able to automatically plan and select the better parallelization scheme that is adaptive to different computational resources according to different tasks.
Analysis. Based on the insightface model structure for face recognition, we analyze how the number of categories affects DistPar's choice of parallelization strategy. When the number of categories is small, data parallelism performs similarly to model parallelism and maintains relatively good performance. As the number of categories increases, however, the throughput of data parallelism decreases, while the model parallelism strategy remains stable. Accordingly, DistPar favors data parallelism when the number of categories is low and switches to model parallelism as the number grows. These experimental results confirm that DistPar can select the parallelization strategy that matches the number of categories effectively. We further analyze the impact of batch size. When the batch size is small, DistPar performs better than both data parallelism and model parallelism. As the batch size increases, DistPar remains competitive with model parallelism. Notably, at batch size 128, DistPar's performance is slightly lower than that of model parallelism; however, by adjusting the hyperparameter beta, DistPar can be fine-tuned to match the performance of model parallelism. These results confirm that DistPar adapts to different batch sizes and selects the appropriate parallelization strategy accordingly.
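The shift toward model parallelism at large category counts follows from a simple sizing argument: a data-parallel last layer replicates the full classification weight on every device and must synchronize its gradient, while a model-parallel split (an assumed S1-style column split over categories) stores only a shard per device. The sketch below uses illustrative numbers, not measurements from the paper.

```python
# Illustrative sizing of a face-recognition model's last fully connected layer.
hidden, num_classes, n_devices = 512, 1_000_000, 8

# Data parallelism: every device stores (and all-reduces gradients of) the
# full weight matrix, so communication grows with num_classes.
full_weight_elems = hidden * num_classes

# Model parallelism: each device stores one column shard of the weight and
# exchanges only activations/partial logits.
shard_weight_elems = hidden * (num_classes // n_devices)

print(f"per-device weight, data parallel : {full_weight_elems:,} elements")
print(f"per-device weight, model parallel: {shard_weight_elems:,} elements")
```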
4.4 OPTIMIZATION SPACE
Setup. We conduct comparative experiments on the last three fully connected layers of the VGG16 network, comparing DistPar against the manual configuration strategy provided by PyTorch, which enumerates potential combinations of all parallel strategies; DistPar derives an optimal parallelization strategy for these three layers automatically.
Analysis. From the experiments, the data parallelism strategy DDD configured in PyTorch yields the lowest throughput, as shown in Figure 5. Introducing some degree of model parallelism improves the overall performance of VGG16. Given the large dimension of the first fully connected layer, configuring it with the S0 parallelization strategy yields favorable results. The results indicate that the manually configured optimal parallelization strategy in PyTorch is RCR, confirming that the S0 parallelization strategy is best suited for the first fully connected layer.

Compared to the manually configured PyTorch parallelization strategy, the DistPar strategy exhibits significant performance improvements. In PyTorch's manual configuration approach, only the distributed attributes of variable operations are determined, while the parallelization strategy for intermediate tensors remains undetermined. DistPar, by contrast, comprehensively selects and optimizes parallelization strategies for intermediate tensors, analyzing operators within the backward computation graph to determine the best strategy. DistPar therefore has a larger search space than PyTorch's manual configuration approach. In summary, DistPar's superior performance results from this larger search space and its optimization capabilities.
4.5 PRIMITIVE-LEVEL OPTIMIZATION
Setup. DistPar offers multiple implementations of the same parallelization strategy. For example, as shown in Figure 3(b) (see Appendix A.3), the S2B transformation can be realized either with AllGather or with a combination of Gather and Broadcast. To investigate how the choice of implementation affects system throughput, we evaluated the throughput of various collective communication operations, including ReduceScatter, AllGather, and AllReduce, as the number of computational devices varies, using Enflame CloudBlazer T10-16GB DCUs in the same environment.
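For concreteness, the two S2B implementations can be sketched with torch.distributed collectives as below. This is a schematic of the equivalence rather than DistPar's internal code; it assumes an already-initialized process group and a backend that supports gather (e.g., gloo).

```python
import torch
import torch.distributed as dist

def s2b_allgather(shard: torch.Tensor, world_size: int) -> torch.Tensor:
    """S2B in one step: every rank gathers all shards via AllGather."""
    parts = [torch.empty_like(shard) for _ in range(world_size)]
    dist.all_gather(parts, shard)
    return torch.cat(parts, dim=0)

def s2b_gather_broadcast(shard: torch.Tensor, world_size: int,
                         root: int = 0) -> torch.Tensor:
    """S2B in two steps: Gather all shards on a root rank, then Broadcast."""
    rank = dist.get_rank()
    parts = [torch.empty_like(shard) for _ in range(world_size)] if rank == root else None
    dist.gather(shard, gather_list=parts, dst=root)
    if rank == root:
        full = torch.cat(parts, dim=0)
    else:
        full = torch.empty(world_size * shard.shape[0], *shard.shape[1:],
                           dtype=shard.dtype)
    dist.broadcast(full, src=root)
    return full
```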
Analysis. In Figure 6, the results indicate that different communication primitives exhibit different throughput at the same number of computational devices. The overall throughput of all primitives first declines and then stabilizes as the number of devices increases. With 8 devices, the throughput of AllGather is 10.36 and 12.40 times that of ReduceScatter and AllReduce, respectively, showing that significant performance differences exist among primitives when the device count is relatively low. As the number of devices increases to 320, these ratios shrink to 1.03 and 1.0, indicating that the performance gap between primitives gradually narrows as the number of devices grows. This experiment confirms that when the device count is low, DistPar's implementations based on different communication primitives vary significantly in performance, expanding the candidate space for selecting the best implementation of a given parallelization strategy. When the device count is high, these implementations converge to stable performance differences, and DistPar can still select the most stable, highest-throughput implementation.

**Figure 6:** Throughput of data parallelism with different tensor partition options in DistPar. The figure shows that, at the same device count, different communication primitives achieve different throughput. Notably, throughput for all primitives initially drops before plateauing; this decline is due to reduced communication bandwidth between devices as the parallel width of collective communication widens, leaving less bandwidth for each individual primitive.
5 CONCLUSIONS AND FUTURE WORK
In this paper, we propose DistPar, a unified approach for efficient tensor partitioning in parallel computation of neural networks, and describe the methodology for determining solution spaces for attribute conversions in distributed training systems. The results indicate that the proposed tensor partitioning approach of DistPar supports flexible combinations of various parallelism strategies. Furthermore, under the collaborative guidance of computational cost and communication cost, DistPar enables users to select the parallelism strategy that yields the maximum throughput for different models. Hence, we believe DistPar is very promising in related domains. However, there are potential limitations to consider. We qualitatively discuss the relationship between cluster communication performance and parallel width. As the parallel width $n$ of collective communication increases and the input data size $|T|$ remains constant, both the total communication volume across devices and the memory savings on each device grow proportionally, while the time required for a specific collective communication is not affected by $n$. Consequently, as $n$ increases, DistPar can utilize a bandwidth of size $(n - 1) \times |T|$ for inter-device communication. This benefits training in two ways: first, each device processes a smaller data portion, $\frac{|T|}{n}$, leading to faster computation; second, memory savings grow by $(n - 1) \times |T|$. Future work will build a model of communication efficiency and communication bandwidth through experimental simulation.
|
vBw8JGBJWj
|
I like how the authors keep the Problem Statement section, but I don’t follow the exact problem to be solved. The paragraph on Page 3 in Problem Statement only talks about the notation but does not specify what the task is, or what the desired output should be.
|
Encoding Unitig-level Assembly Graphs with Heterophilous Constraints for Metagenomic Contigs Binning
Hansheng Xue\textsuperscript{1,2}, Vijini Mallawaarachchi\textsuperscript{3}, Lexing Xie\textsuperscript{1}, Vaibhav Rajan\textsuperscript{2}\textsuperscript{*}
\textsuperscript{1}School of Computing, Australian National University, Canberra, Australia
\textsuperscript{2}School of Computing, National University of Singapore, Singapore
\textsuperscript{3}College of Science and Engineering, Flinders University, Adelaide, Australia
\{hansheng.xue, lexing.xie\}@anu.edu.au, vaibhav.rajan@nus.edu.sg
vijini.mallawaarachchi@flinders.edu.au
Abstract
Metagenomics studies genomic material derived from mixed microbial communities in diverse environments, holding considerable significance for both human health and environmental sustainability. Metagenomic binning refers to the clustering of genomic subsequences obtained from high-throughput DNA sequencing into distinct bins, each representing a constituent organism within the community. Mainstream binning methods primarily rely on sequence features such as composition and abundance, making them unable to effectively handle sequences shorter than 1,000 bp or inherent noise within sequences. Several binning tools have emerged, aiming to enhance binning outcomes by using the assembly graph generated by assemblers, which encodes valuable overlapping information among genomic sequences. However, existing assembly graph-based binners mainly focus on simplified contig-level assembly graphs that are recreated from the assemblers' original unitig-level assembly graphs. The simplification reduces the resolution of the connectivity information in the original graphs. In this paper, we design a novel binning tool named \textsc{UnitigBin}, which leverages representation learning on unitig-level assembly graphs while adhering to heterophilous constraints imposed by single-copy marker genes, ensuring that constrained contigs cannot be grouped together. Extensive experiments conducted on synthetic and real datasets demonstrate that \textsc{UnitigBin} significantly surpasses state-of-the-art binning tools.
1 Introduction
Metagenomics involves the analysis of genetic materials originating from mixed microbial communities present in various environments (Kaeberlein et al., 2002). It offers a suite of tools rooted in genome sequencing to address crucial questions related to human health and environmental sustainability. As an illustration, the Human Microbiome Project (Turnbaugh et al., 2007) uses metagenomic analysis techniques to acquire valuable insights into the intricate microbial communities residing in the human body. This effort also aids in identifying microbial species associated with various diseases present in the human gut (Nayfach et al., 2019). In a standard metagenomic analysis workflow, genetic materials are gathered from the microbial community and then processed through a sequencing platform to generate DNA sequences commonly referred to as reads. Since these genetic materials are mixed together, it is uncertain which species each read belongs to. A core challenge in downstream analysis is to determine the species present within the input sample by examining these reads. However, these reads are too short for direct analysis. Many metagenomic techniques use assembly graphs to assemble these short reads into longer DNA sequences known as unitigs. Contigs are then formed by combining one or multiple connected unitigs (Xiang et al., 2023). In an assembly graph, each vertex corresponds to a unitig, and each edge represents the overlapping relationship between two unitigs (Nurk et al., 2017; Kolmogorov et al., 2020). Contigs are represented as either
*Corresponding author.
a single vertex or a path comprising several vertices within the assembly graph. Contigs binning refers to the clustering of these contigs into distinct bins corresponding to constituent genomes.
Many existing metagenomic binning tools rely on statistical information extracted from the contigs themselves, including nucleotide composition and abundance features (Breitwieser et al., 2019; Yue et al., 2020). These tools do not take into account the homophily information within the assembly graph (Barnum et al., 2018), which suggests that sequences connected to each other in the assembly graph are more likely to belong to the same species. In addition, inherent noise within sequences presents additional challenges for these sequence feature-based binning methods. Several binning tools have been developed recently to leverage assembly graphs, including GraphBin (Mallawaarachchi et al., 2020) and GraphMB (Lamurias et al., 2022). However, instead of directly using the unitig-level assembly graphs produced by the assembler, they simplify the original graphs and reconstruct contig-level assembly graphs, where vertices represent contigs and edges represent their overlaps. This simplification reduces the resolution of the connectivity information in unitig-level assembly graphs and may introduce erroneous edges (Xiang et al., 2023). Moreover, many existing binning tools face difficulties and exhibit low recall values when handling short sequences, often shorter than 1,000 bp, which are commonly excluded from analysis.
Additional biological information, such as single-copy marker genes, can also be exploited to improve binning results (Albertsen et al., 2013; Dupont et al., 2012). Single-copy marker genes are genes that occur only once in each species; if two contigs share the same single-copy marker gene, it is highly likely that they belong to different species. Some binning tools have utilized this additional information from single-copy marker genes to estimate the initial number of bins or enhance the quality of contig binning results (Mallawaarachchi & Lin, 2022; Lamurias et al., 2023). However, few graph neural network models can be directly employed to model the unitig-level assembly graph with heterophilous relationships. Moreover, the large scale of unitig-level assembly graphs in real metagenomic data poses significant challenges for the learning process. In this paper, we develop a graph neural network framework, called UNITIGBIN, designed to model the unitig-level assembly graph while adhering to heterophilous constraints imposed by single-copy marker genes. The contributions of this paper are listed as follows:
• To the best of our knowledge, this is the first use of graph neural networks to model the unitig-level assembly graph in the field of metagenomic contig binning.
• We devise a novel model for constraint-based graph learning, UNITIGBIN-Learning, which captures the unitig-level assembly graph with constraints by employing a diffusive convolution and optimizing triplet constraints. A p-batch strategy is designed for parallelization.
• We devise a novel UNITIGBIN-Binning framework that leverages a Matching algorithm to initialize marked contigs, uses Propagating labels to annotate unmarked contigs meanwhile satisfying constraints, and employs a local refining strategy to fine-tune final binning.
• Extensive experiments on synthetic and real datasets show that UNITIGBIN significantly outperforms state-of-the-art binners in terms of binning purity and completeness, regardless of whether the graphs are produced by the standard metagenomic assemblers metaSPAdes or metaFlye.
2 RELATED WORK
Contigs binners. Despite contigs being assembled from short reads using assembly graphs, the majority of existing binners overlook the homophily information present in these assembly graphs. Instead, these binning tools rely on composition (normalized oligonucleotide, i.e., short strings of length $k$, frequencies) and coverage (average number of reads aligning to each position of the contig) information to perform contig binning. For example, MetaWatt (Strous et al., 2012) leverages multivariate statistics and Markov models to bin contigs. CONCOCT (Alneberg & et al., 2014) combines the variational Bayesian model selection and Gaussian mixture model to cluster contigs into bins. MaxBin2 (Wu et al., 2016) designs an expectation-maximization algorithm that uses both composition and coverage information to iteratively bin the contigs. BusyBeeWeb (Laczny et al., 2017) is a web application that uses a bootstrapped supervised binning approach for contig binning. MetaBAT2 (Kang et al., 2019) is a graph partitioning approach that uses contigs’ composition to construct the graph. SolidBin (Wang et al., 2019) uses a semi-supervised spectral clustering algorithm combined with additional biological knowledge. MetaBinner (Wang et al., 2023) is an ensemble binning tool that can integrate various types of features. In addition, several binning tools have been
developed to enhance performance using deep learning techniques. For instance, VAMB (Nissen et al., 2021) uses deep variational autoencoders to learn both composition and coverage information. CLMB (Zhang et al., 2022) employs deep contrastive learning techniques to produce robust results even from noisy data. SemiBin (Pan et al., 2022) designs a semi-supervised Siamese neural network incorporating must-link and cannot-link constraints obtained from reference genomes. These binning tools do not utilize assembly graphs and often omit short contigs because composition and coverage features are less reliable for short contigs, leading to lower recall values.
Assembly graph improves binning. To enhance the binning outcomes, recent bin-refinement methods (Mallawaarachchi et al., 2020) have introduced the utilization of assembly graphs. However, these bin-refinement tools are not independent and require the initial binning results from existing binners as a starting point. MetaCoAG (Mallawaarachchi & Lin, 2022) is a standalone binning tool capable of integrating composition, abundance, and assembly graph information to enhance binning performance. In addition, several methods have been developed to use graph neural networks to model the assembly graph. For instance, GraphMB (Lamurias et al., 2022) uses a variational autoencoder model to encode both composition and abundance and then feeds these features into graph neural networks for contigs binning. This approach does not incorporate additional information like heterophilous constraints from single-copy marker genes. RepBin (Xue et al., 2022) designs a self-supervised graph learning framework for modeling assembly graphs while encoding prior constraints. Then, a semi-supervised label propagation model is employed for contig binning. CCVAE (Lamurias et al., 2023) develops a variational autoencoder to simultaneously learn the assembly graph and the information of single-copy marker genes and then uses a clustering algorithm for contig binning. A common limitation of these graph-based binning tools is their applicability solely to contig-level assembly graphs, which are generated from the unitig-level assembly graph using specific strategies. The transition from unitig-level to contig-level reduces the resolution of connectivity information in the original graph and may introduce errors due to the chosen strategy.
3 METHODOLOGY
Preliminaries. We are given a unitig-level assembly graph $G=(V,E,P,X)$ along with its constraints $C$. The output embeddings for unitigs and contigs in the graph are $d$-dimensional vectors $Z \in \mathbb{R}^{|V| \times d}$ and $\tilde{Z} \in \mathbb{R}^{|P| \times d}$, and contig binning results are denoted as $B=\{b_i \in \mathbb{R}^K, i = 1, \ldots, |P|\}$, where $K$ denotes the number of bins. In the assembly graph $G$, $V$ is the set of nodes or unitigs, $E=\{(v_i,v_j)|v_i,v_j \in V\}$ denotes edges indicating the overlap between unitigs, $P=\{(v_i,...,v_j,...,v_k)|v_i,v_j,v_k \in V\}$ corresponds to the paths or contigs within the graph, and $X=\{x_v|v \in V\}$ are features associated with nodes. Heterophilous constraints from the single-copy marker genes are $C=\{(p_i,...,p_j)|p_i,p_j \in P\}$. The objective of binning metagenomic contigs is to assign a label $b_i \in B$ to each contig $p_i \in P$.
Preprocessing. The heterophilous constraints $C$ are created by employing the FragGeneScan (Rho et al., 2010) and HMMER (Eddy, 2011) tools to detect contigs containing single-copy marker genes, following a similar approach as MaxBin (Wu et al., 2016) and MetaCoAG (Mallawaarachchi & Lin, 2022). When contigs belong to the same constraint set, it implies that these contigs should not be grouped together in pairs within the bins. The unitig-level assembly graphs are constructed from two widely used assemblers: metaSPAdes (Nurk et al., 2017) and metaFlye (Kolmogorov et al., 2020).
Prior to modeling the assembly graph and clustering contigs, we first perform preprocessing operations based on the known information (the unitig-level assembly graph $G$ and heterophilous constraints $C$). Two operators are introduced in UNITIGBIN: i) Graph disentangling and ii) Contigs sampling. Graph disentangling is designed to separate constrained contigs within the unitig-level assembly graph. For example, when constraints suggest that contig $A$ and contig $B$ should belong to distinct bins, yet these two contigs share an overlapping unitig in the assembly graph (as shown in Figure 1), we create a new unitig by duplicating the original unitig, thereby disentangling the assembly graph. Contigs sampling is devised to create positive relationships among contigs by leveraging the inherent structure of the assembly graph. Unlike the conversion strategies used in GraphBin and GraphMB to construct contig-level assembly graphs, here we only check whether two paths are directly connected or linked by a single hop (Miller et al., 2010); in that case, we establish a positive edge between these contigs. The sampled positive contigs set is represented as \( O = \{(p_i, p_j) | p_i, p_j \in P\} \).
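A minimal sketch of the disentangling step on a plain adjacency-list graph follows. The data representation and the rule that contig B keeps the duplicate are simplifying assumptions; a real implementation must also carry sequence data and handle constraint sets larger than pairs.

```python
import copy

def disentangle(adj, paths, constraint_pairs):
    """Duplicate unitigs shared by constrained contig pairs.

    adj: dict unitig -> set of neighboring unitigs
    paths: dict contig -> list of unitigs forming the contig's path
    constraint_pairs: pairs (a, b) of contigs that must not share a bin
    """
    adj, paths = copy.deepcopy(adj), copy.deepcopy(paths)
    for a, b in constraint_pairs:
        for u in set(paths[a]) & set(paths[b]):
            u_copy = f"{u}_dup"                 # duplicated unitig
            adj[u_copy] = set(adj[u])           # copy u's neighborhood
            for v in adj[u_copy]:
                adj[v].add(u_copy)
            # contig b keeps the duplicate; contig a keeps the original
            paths[b] = [u_copy if x == u else x for x in paths[b]]
    return adj, paths

# Toy example: contigs A and B share unitig "u2".
adj = {"u1": {"u2"}, "u2": {"u1", "u3"}, "u3": {"u2"}}
paths = {"A": ["u1", "u2"], "B": ["u2", "u3"]}
print(disentangle(adj, paths, [("A", "B")])[1])  # B now uses "u2_dup"
```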
Overview. UNITIGBIN consists of two main components: Learning, which uses a graph neural network to model the unitig-level assembly graph while adhering to constraints, and Binning, a contig-level framework. In the Binning stage, a Matching algorithm is employed to initialize marked contigs. Propagating labels are used to annotate unmarked contigs while satisfying constraints, and a local Refining strategy is incorporated to fine-tune binning assignments (refer to Figure 1).
3.1 Learning: REPRESENTING UNITIG-LEVEL ASSEMBLY GRAPH WITH CONSTRAINTS
The UNITIGBIN-Learning model aims to obtain latent representations for both unitigs/nodes \( Z \) and contigs/paths \( \hat{Z} \), considering both the unitig-level assembly graph \( G \) and the heterophilous constraints \( C \). In this section, we introduce the Learning framework in three components: a) the diffusion encoder-decoder framework, b) triplet Gaussian constraints optimization, and c) \( p \)-Batch parallelization.
3.1.1 DIFFUSION ENCODER-DECODER FRAMEWORK
An encoder-decoder architecture is adopted as the foundational learning framework in Learning, comprising two primary components: a graph diffusive convolution encoder (Klicpera et al., 2019) and an inner-product decoder. The diffusive encoder captures the graph's topology and initial node features, while the inner-product decoder reconstructs the graph's structure using the learned features from the diffusive encoder. Minimizing the reconstruction loss, which measures the dissimilarity between the original and reconstructed graph, allows us to obtain the node embeddings.
Encoder-Decoder. In Learning, a variational autoencoder (Kingma & Welling, 2013) is established. \( \tilde{A} \) denotes the adjacency matrix of a unitig-level assembly graph with self-loops (\( \tilde{A} = A + I_N \), where \( I_N \) is the identity matrix), and \( \tilde{D} \) stands for its diagonal degree matrix, i.e., \( \tilde{D}_{ii} = \sum_{j=1}^{N} \tilde{A}_{ij} \). We use DIFFCONV to symbolize the diffusive convolution. Then, the diffusive encoder can be formulated as:
\[
q(Z|X,A) = \prod_{i=1}^{N} q(z_i|X,A), \quad \text{with} \quad q(z_i|X,A) = N(z_i|\mu_i, \text{diag}(\sigma_i^2)), \quad \text{where} \quad \mu = \text{DIFFCONV}_\mu(H, A), \quad \log \sigma = \text{DIFFCONV}_\sigma(H, A), \quad \text{and} \quad H = \text{DIFFCONV}(X, A).
\]
The inner-product decoder is calculated as
\[
p(\hat{A}|Z) = \prod_{i=1}^{N} \prod_{j=1}^{N} p(\hat{A}_{ij}|z_i, z_j), \quad \text{with} \quad p(\hat{A}_{ij}=1|z_i, z_j) = \text{Sigmoid}(z_i^\top z_j).
\]
The objective is as follows:
\[
L_g = \mathbb{E}_{q(Z|X,A)}[\log p(\hat{A}|Z)] - KL[q(Z|X,A)||p(Z)],
\]
where \( p(Z) = \prod_i p(z_i) = \prod_i N(z_i|0,I) \) is a Gaussian prior. A weighted cross entropy loss (Kipf & Welling, 2016) is used in Equation 1 to measure the reconstruction error between \( A \) and \( \hat{A} \). \( KL(\cdot) \) is the divergence function measuring the similarity between the distributions \( q(Z|X,A) \) and \( p(Z) \).
Diffusive convolution. Following RepBin (Xue et al., 2022), we also use the PageRank-based diffusion (Page et al., 1999) to model the unitig-level assembly graph. Briefly, for a node \( i \) with feature vector \( x_i \), its diffusive feature is computed iteratively as \( \text{PPR}(x_i) = (1-\alpha)A\,\text{PPR}(x_i) + \alpha x_i \), where \( \alpha \in (0,1] \) is the restart probability. The stationary solution for node \( i \) is \( \text{PPR}(x_i) = \alpha(I_N - (1-\alpha)A)^{-1}x_i \). The diffusive convolution and layer-wise propagation rule can be formulated as:
\[
H^{l+1} = \sigma(\text{DIFFCONV} \cdot H^l \Theta^l), \quad \text{DIFFCONV} = \alpha[I_N - (1-\alpha)\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}]^{-1}
\]
where \( \sigma(\cdot) \) denotes a non-linear activation function, \( \Theta^l \in \mathbb{R}^{d_l \times d_{l+1}} \) is the layer-\( l \) trainable transformation matrix, \( d_l \) is the embedding dimension at layer \( l \), and \( H^0 = X \). By optimizing the objective
in Equation (1), latent embeddings for unitigs can be obtained, \( Z \in \mathbb{R}^{|V| \times d} \). A readout function generates features for contigs, \( \hat{Z} = R(Z) \), with \( \hat{Z}_i = \frac{1}{|P_i|} \sum_{v \in P_i} Z_v \) and \( \hat{Z} \in \mathbb{R}^{|P| \times d} \).
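The closed-form diffusion and the mean-pool readout can be sketched densely in NumPy as below. This is a single, non-trainable propagation for intuition; whether self-loops are added before normalization is our assumption, and large graphs would require sparse or approximate PPR rather than an explicit matrix inverse.

```python
import numpy as np

def diffconv_matrix(A: np.ndarray, alpha: float = 0.15) -> np.ndarray:
    """Closed-form PPR diffusion: alpha * (I - (1-alpha) D^{-1/2} A D^{-1/2})^{-1}."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                        # self-loops (our assumption)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A_norm)

def readout(Z: np.ndarray, paths) -> np.ndarray:
    """Mean-pool unitig embeddings over each contig's path."""
    return np.stack([Z[list(p)].mean(axis=0) for p in paths])

# Toy graph: 4 unitigs in a chain; 2 contigs given as paths of unitig indices.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.random.randn(4, 3)
H = diffconv_matrix(A) @ X                       # one diffusive propagation
print(readout(H, [(0, 1), (1, 2, 3)]).shape)     # (2, 3)
```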
### 3.1.2 Triplet Gaussian Constraints Optimization
In constraints \( C \), each set signifies that certain contigs contain an identical marker gene, and these contigs must not be grouped pairwise into the same bin. In Learning, we convert contig-constraints \( C \) into a pairwise contig-constraints set \( C' \) and a pairwise unitig-constraints set \( M' \). Each pairwise constraint \((i, j) \in M'\) indicates that unitigs \( i \) and \( j \) must not be assigned to the same bin. We treat these pairwise constraints as negative samples, whereas we sample existing edges in the graph as positive samples. In detail, for every pairwise constraint \((i, j) \in M'\), we sample node \( i \)'s neighbors, \( N_i \), as positive edges. Sampled triplet constraints are defined as \( M = \{(i, j, k), i, j \in V, k \in N_i \} \). To integrate Gaussian distributions and triplet constraints, we draw inspiration from Bojchevski & Günnemann (2018) and incorporate measuring and ranking strategies. Within the encoder-decoder framework, the Gaussian embeddings in hidden layers can be acquired as follows: \( z_i = N(\mu_i, \Sigma_i) \) with \( \Sigma_i = \text{diag}(\text{elu}(\log \sigma_i) + 1) \), \( \mu_i \in \mathbb{R}^d \), \( \Sigma_i \in \mathbb{R}^{d \times d} \), where \( \mu = \text{DIFFCONV}_{\mu}(H, A) \) and \( \sigma = \text{DIFFCONV}_{\sigma}(H, A) \). The KL divergence-based dissimilarity measurement (He et al., 2015) between two Gaussian embeddings \( z_i \) and \( z_j \) can be represented as
\[
\Delta(z_i, z_j) = D_{KL}(N_j || N_i) = \frac{1}{2} \left[ \text{tr}(\Sigma_i^{-1} \Sigma_j) + (\mu_i - \mu_j)^T \Sigma_i^{-1} (\mu_i - \mu_j) - d - \log \frac{\det(\Sigma_j)}{\det(\Sigma_i)} \right],
\]
where \( \text{tr}(\cdot) \) and \( \det(\cdot) \) denote the trace and determinant of a matrix, respectively. Each triplet constraint \((i, j, k) \in M\) signifies that unitigs \( i \) and \( j \) must not be in the same bin, while nodes \( i \) and \( k \) should, with high probability, belong to the same bin. In other words, node \( i \) is more closely related to \( k \) than to node \( j \). We formulate the triplet constraints ranking strategy as \( \Delta(z_i, z_k) < \Delta(z_i, z_j) \). The square-exponential loss (LeCun et al., 2006) is used to measure the triplet constraints ranking as:
\[
L_c = \sum_{(i, j, k) \in M} \left[ D_{KL}(N_k || N_i)^2 + \exp(-D_{KL}(N_j || N_i)) \right]
\]
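For diagonal covariances, as produced by the encoder above, the KL measure and the square-exponential ranking loss reduce to a few lines. The sketch below uses toy embeddings and is not the training code.

```python
import numpy as np

def kl_diag(mu_i, var_i, mu_j, var_j):
    """KL(N_j || N_i) for diagonal Gaussians, matching the formula above."""
    d = mu_i.shape[0]
    return 0.5 * (np.sum(var_j / var_i)
                  + np.sum((mu_i - mu_j) ** 2 / var_i)
                  - d
                  + np.sum(np.log(var_i) - np.log(var_j)))

def triplet_loss(emb, triplets):
    """Square-exponential loss over triplets (i, j, k): pull i toward the
    positive k, push i away from the constrained negative j."""
    total = 0.0
    for i, j, k in triplets:
        d_pos = kl_diag(*emb[i], *emb[k])   # KL(N_k || N_i)
        d_neg = kl_diag(*emb[i], *emb[j])   # KL(N_j || N_i)
        total += d_pos ** 2 + np.exp(-d_neg)
    return total

# Toy embeddings: node -> (mean, variance) of a 2-d diagonal Gaussian.
emb = {n: (np.random.randn(2), np.ones(2)) for n in range(4)}
print(triplet_loss(emb, [(0, 1, 2), (0, 3, 2)]))
```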
### Algorithm 1: The Unitig-level Assembly Graph Learning Algorithm UNITIGBIN-Learning.
**Data:** Unitig-level assembly graph \( G \); constraints \( C \); dimension of embedding \( d \); number of graph batches \( n \);
**Result:** Embedding for unitigs \( Z \) and contigs \( \hat{Z} \).
1. \( G, O \leftarrow \text{Preprocess}(G, C) \) // Graph disentangling and Contigs sampling
2. \( M \leftarrow \text{Sample}(G, C) \) // Sample triplet unitig constraints
3. Batches \( \leftarrow p\text{-Batch}(O, n) \) // Split batches
4. for \( e \in \text{epochs} \) do
5. for \( b \in \text{Batches} \) do
6. \( H_b \leftarrow \text{DIFFCONV}(A_b, X_b) \) // Base diffusive convolution
7. \( \mu_b, \log \sigma_b \leftarrow \text{DIFFCONV}_{\mu}(H_b, A_b), \text{DIFFCONV}_{\sigma}(H_b, A_b) \) // Gaussian embedding
8. \( L_{gb} \leftarrow \text{Equation 1} \) // Compute loss for graph reconstruction
9. end
10. \( L_g \leftarrow \sum_b L_{gb} \) // Accumulate the batch losses
11. \( L_c \leftarrow \text{Equation 3}, \; L_b \leftarrow \text{the } p\text{-Batch loss} \) // Compute the constraint and batch losses
12. \( L \leftarrow L_g + L_b + \lambda_1 \cdot L_c \) // Compute loss in Equation 4
13. end
14. \( Z \leftarrow \mu \) // Unitigs Embedding
15. \( \hat{Z} \leftarrow R(Z) \) // Contigs Embedding
### 3.1.3 \( p \)-Batch: Training Data Batching
Training GNNs on unitig-level assembly graphs from real metagenomic data, which can reach millions of nodes, presents significant computational challenges. Creating training batches is challenging because it must satisfy two criteria: i) processing each contig in parallel while preserving its completeness, and ii) grouping contigs with positive relationships into the same batch to retain this valuable information. To address these hurdles, we introduce a graph splitting and training module named \( p \)-Batch, which iteratively selects independent sets of nodes from the Positive-contig Graph derived from the positive contigs set \( O \). The \( p \)-Batch module takes each path as the minimum splitting unit and functions iteratively through two steps: i) selecting the largest contig sets from the candidates, and ii) feeding them into the smallest batch; a simplified sketch is given below. This continues until all candidate contigs are assigned to a batch. In practice, some unitigs still end up in different batches. We design a loss function to minimize the divergence between the distributions of these shared unitigs: \( L_b = \sum_{(i,j) \in Q} D_{KL}(N_j || N_i)^2 \), where \( Q \) is the set of joint-unitig pairs.
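The sketch below captures our reading of this two-step loop in simplified form: contigs connected in the positive-contig graph are grouped (here by connected components, a simplification of the independent-set selection), and the largest remaining group is fed into the currently smallest batch so that no path is ever split.

```python
from collections import defaultdict

def p_batch(contigs, positive_pairs, n_batches):
    """Simplified p-Batch: group positively related contigs, then place the
    largest groups into the currently smallest batch (paths never split)."""
    parent = {c: c for c in contigs}          # union-find over contigs
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for a, b in positive_pairs:
        parent[find(a)] = find(b)

    groups = defaultdict(list)
    for c in contigs:
        groups[find(c)].append(c)

    batches, sizes = [[] for _ in range(n_batches)], [0] * n_batches
    for g in sorted(groups.values(), key=len, reverse=True):
        i = sizes.index(min(sizes))           # smallest batch so far
        batches[i].extend(g)
        sizes[i] += sum(len(contigs[c]) for c in g)
    return batches

contigs = {"A": [0, 1], "B": [1, 2, 3], "C": [4, 5], "D": [6]}
print(p_batch(contigs, [("A", "B")], n_batches=2))
```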
**Objective function.** The objective of Learning is a combination of \( L_g \), \( L_c \), and \( L_b \), with \( \lambda_1 \) regulating the significance of the constraints loss. Refer to Algorithm 1 for the pseudocode of Learning.
\[
L = L_g + L_b + \lambda_1 \cdot L_c
\]
### 3.2 Binning: Comprising Matching Constraints, Propagating and Refining Bins
**Matching.** After obtaining the embeddings of contigs in the Learning step, one can directly apply existing clustering algorithms (such as K-Means) for contig binning. However, dealing with imbalanced bin sizes adds complexity to the binning process. RepBin (Xue et al., 2022) proposes a semi-supervised label propagation model using constrained contigs as initial labels. However, RepBin runs K-Means on the embeddings of a large number of constrained contigs to initialize labels, which can be computationally expensive. The lack of a known number of bins is another challenge.
In UNITIGBIN, we devise a simple yet efficient matching algorithm for attaining a good binning initialization. Matching consists of two key steps: i) Binning Initialization and ii) Iterative Matching. Initially, we arrange the constraint sets in \( C \) in descending order of size and use the largest set to seed the initial bins. We then iteratively compute the similarity between matched bins and candidate contigs, and a greedy method selects the maximum value for matching operations. The matching process incorporates a threshold \( T \): if the similarity between a bin and a contig is above \( T \), we add the contig to that bin; otherwise, we create a new bin containing the contig. Refer to Algorithm 2 in Appendix A.1 for the pseudocode.
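Our simplified reading of Matching is sketched below: the largest constraint set seeds the bins, and each later constrained contig is greedily matched to its most similar not-yet-used bin, opening a new bin when the best similarity falls below \( T \). The cosine similarity and the one-bin-per-set bookkeeping are our assumptions; Algorithm 2 in Appendix A.1 gives the actual procedure.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def matching(constraint_sets, emb, T):
    """Greedy binning initialization from constraint sets (simplified)."""
    sets = sorted(constraint_sets, key=len, reverse=True)
    bins = [[c] for c in sets[0]]             # largest set seeds the bins
    for s in sets[1:]:
        used = set()                          # one set's contigs -> distinct bins
        for contig in s:
            sims = [(max(cosine(emb[contig], emb[m]) for m in b), i)
                    for i, b in enumerate(bins) if i not in used]
            best_sim, best = max(sims)
            if best_sim > T:
                bins[best].append(contig)
            else:
                best = len(bins)              # open a new bin
                bins.append([contig])
            used.add(best)
    return bins
```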
**Propagating.** From the preceding Learning and Matching phases, we obtain embeddings of unitigs denoted as \( Z \in \mathbb{R}^{|V| \times d} \) and initial labels \( Y_C \) assigned to constrained contigs. We follow RepBin and design a contig-level label propagation model instead of running the K-Means algorithm directly. In addition, we introduce a penalty function to maximize constraint satisfaction.
Propagating consists of three parts: graph convolution, a readout function, and a fully connected layer. Graph convolution learns both the unitig-level assembly graph and the unitig features from Learning, described as \( Z^{l+1} = \sigma(\text{Conv} \cdot Z^l \Theta^l) \), where \( \text{Conv} = D^{-1/2} A D^{-1/2} \). The embeddings of contigs are obtained through the readout function, \( \tilde{Z} = R(Z) \), \( \tilde{Z} \in \mathbb{R}^{|P| \times d} \). Then, the binning probability is represented as \( Y = \text{Softmax}(\tilde{Z}W + b) \), \( Y \in \mathbb{R}^{|P| \times K} \), where \( K \) is the number of bins. A cross-entropy function is used to optimize the binning results. However, the binning assignment may violate prior constraints. To maximize constraint satisfaction, we introduce an optimization function. Given \( K \) bins, we use a 0/1 matrix in \( \mathbb{R}^{K \times K} \) for incorporating constraints. The constraint matrix \( I_{\neq} \) denotes the binary conflict relationships among \( K \) bins, i.e., \( I_{\neq}(i, j) = 1 \) if \( i \neq j \) and 0 otherwise, for any \( i, j \in \{1, \ldots, K\} \). The bin-assignment matrix \( Y \in \mathbb{R}^{|P| \times K} \) represents the bin assignment probability (over \( K \) bins) for each contig \( i \) in its corresponding row \( Y_i \). For any constraint \( (i,j) \in C' \), we aim to assign different bins to \( i \) and \( j \) and thus maximize the sum of joint probabilities over different bins, i.e., \( Y_i^\top I_{\neq} Y_j \). The objective function is as follows:
\[
L = - \sum_{l \in Y_C} \sum_{k=1}^{K} \hat{Y}_{lk} \ln Y_{lk} - \lambda_2 \cdot \frac{1}{|C'|} \sum_{(i,j) \in C'} \log(Y_i^\top I_{\neq} Y_j),
\]
where \( \hat{Y}_{lk} \) denotes the one-hot initial label of contig \( l \) obtained from Matching.
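The constraint term of Equation 5 is straightforward to implement directly; the sketch below computes it for a batch of softmax assignments. The sign convention (negating so the term is minimized) and the epsilon for numerical stability are our choices.

```python
import torch

def constraint_penalty(Y: torch.Tensor, cannot_link, eps: float = 1e-12):
    """Constraint term of Equation 5: for each cannot-link pair (i, j),
    reward probability mass on different bins via log(Y_i^T I_neq Y_j).

    Y: (num_contigs, K) softmax bin-assignment probabilities.
    """
    K = Y.shape[1]
    I_neq = 1.0 - torch.eye(K)             # 1 off-diagonal, 0 on-diagonal
    total = sum(torch.log(Y[i] @ I_neq @ Y[j] + eps) for i, j in cannot_link)
    return -total / len(cannot_link)       # negated so it is minimized

# Two contigs confidently in different bins incur a near-zero penalty.
Y = torch.tensor([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05]])
print(constraint_penalty(Y, [(0, 1)]))
```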
**Refining.** In Refining, our primary goal is to explore potential binning assignments for contigs, taking into account heterophilous constraints. This step primarily consists of two components: i) Splitting and ii) Merging. Splitting aims to divide existing bins into multiple sub-bins when identical marker genes are present within the bin. Merging is intended to combine sub-bins into a larger bin when these sub-bins do not share the same marker genes. Refer to Appendix A.1 for the pseudocode.
### 4 Experiments
**Datasets and Baselines.** We evaluate the UNITIGBIN model on 12 datasets, consisting of 6 assembled by metaSPAdes v3.15.2 (Nurk et al., 2017) and 6 assembled by metaFlye v2.9 (Kolmogorov et al., 2020).
Table 1: CheckM results for the number of HQ bins by UNITIGBIN and baselines.
| Methods | Sim20G | Sim50G | Sim100G | Sharon | DeepHPM | COPD |
|-------------|--------|--------|---------|--------|---------|------|
| MetaBAT2 | 5 | 16 | 3 | 2 | 0 | 0 |
| MaxBin2 | 20 | 35 | 54 | 6 | 8 | 9 |
| SemiBin | 18 | 38 | 68 | 5 | - | - |
| VAMB | 18 | 31 | 51 | 5 | 2 | 6 |
| GraphMB | 9 | 13 | 18 | 2 | - | - |
| CCVAE | 12 | 15 | 28 | 2 | - | - |
| RepBin | 18 | 15 | 19 | 1 | 0 | - |
| MetaCoAG | 17 | 34 | 69 | 7 | 8 | 17 |
| **UNITIGBIN** | **20** | **43** | **76** | **7** | **12** | **21** |
| △% MaxBin2 | 0% | 18.6% | 28.9% | 14.3% | 33.3% | 57.1% |
| △% SemiBin/VAMB | 10% | 11.6% | 10.5% | 28.6% | 83.3% | 71.4% |
| △% MetaCoAG | 15% | 20.9% | 9.2% | 0% | 33.3% | 19.0% |
In metaSPAdes-assembled datasets, Sim20G, Sim50G, and Sim100G are three datasets collected from GraphBin2 (Mallawaarachchi et al., 2021) and MetaCoAG (Mallawaarachchi & Lin, 2022). In metaFlye-assembled datasets, 6 real-world Wastewater Treatment Plant (WWTP) datasets are collected (Singleton et al., 2021). Table A1 provides a comprehensive overview of the dataset statistics. UNITIGBIN is evaluated against three categories of binning tools: a) 2 traditional approaches, MaxBin 2.0 (Wu et al., 2016) and MetaBAT2 (Kang et al., 2019); b) 2 deep learning-based binning tools, SemiBin (Pan et al., 2022), and VAMB (Nissen et al., 2021); c) 4 assembly graph-based binning models, GraphMB (Lamurias et al., 2022), RepBin (Xue et al., 2022), MetaCoAG (Mallawaarachchi & Lin, 2022), and CCVAE (Lamurias et al., 2023).
Metrics and Experimental Settings. We use the popular CheckM v1.1.3 (Parks et al., 2015) tool to evaluate the binning results of UNITIGBIN and baselines. CheckM assesses bin quality through sets of single-copy marker genes and without using ground truth. We use CheckM to assess the completeness and contamination of the bins generated by each tool. For metaSPAdes datasets, we adhere to the experimental setup outlined in MetaCoAG (Mallawaarachchi & Lin, 2022). We define precision as $1/(1 + \text{contamination})$ and recall as completeness. High-quality (HQ) bins are characterized by precision > 90 and recall > 80. Medium-quality (MQ) bins have precision > 80 and recall > 50, while the remaining bins are classified as Low-quality (LQ) bins. For metaFlye datasets, we follow the experimental setup used in GraphMB (Lamurias et al., 2022) and CCVAE (Lamurias et al., 2023), which employs two specific criteria: completeness > 90 & contamination < 5, and completeness > 50 & contamination < 10, to assess the quality of bins. We also use AMBER v2.0.2 (Meyer et al., 2018) tool and calculate the Precision, Recall, F1, Adjusted Rand Index (ARI) metrics (Xue et al., 2022) to evaluate simulated datasets using ground truth.
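For reference, the HQ/MQ/LQ labels used on the metaSPAdes datasets can be computed from CheckM's outputs as below. Since CheckM reports percentages, the percent-versus-fraction unit conventions here are our assumption.

```python
def bin_quality(completeness_pct: float, contamination_frac: float) -> str:
    """Label a bin from CheckM stats using the paper's metaSPAdes criteria:
    precision = 1 / (1 + contamination), recall = completeness."""
    precision = 100.0 / (1.0 + contamination_frac)   # expressed in percent
    recall = completeness_pct
    if precision > 90 and recall > 80:
        return "HQ"          # high quality
    if precision > 80 and recall > 50:
        return "MQ"          # medium quality
    return "LQ"              # low quality

print(bin_quality(85.0, 0.05))   # -> HQ
```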
### 4.1 Evaluation on metaSPAdes-based Datasets
Table 1 shows that UNITIGBIN significantly outperforms state-of-the-art baselines, achieving the highest number of high-quality bins as evaluated by CheckM. In Sim100G, UNITIGBIN yields 76 high-quality bins, approximately 9.2% more than the highest number obtained by baselines (69 for MetaCoAG). In COPD, UNITIGBIN also attains the highest number of high-quality (HQ) bins, with 21 HQ bins, which is considerably greater than the second-highest number of HQ bins obtained by MetaCoAG (17). This substantial gap between UNITIGBIN and baselines underscores the superior performance of our model in contig binning. The CheckM results for medium-quality (MQ) bins generated by UNITIGBIN and baselines can be found in Table A3. UNITIGBIN consistently outperforms other methods by achieving the highest number of HQ+MQ bins across most datasets.
In three simulated datasets, we also employ the AMBER tool and calculate the Precision, Recall, F1, and ARI scores to assess the performance of both UNITIGBIN and baselines. Taking Sim20G as an example, Figure 3 shows the Average Completeness (AC) and Average Purity (AP) at the nucleotide level, while Table 2 presents the F1 and ARI scores ('bp' denotes the nucleotide level and 'seq' the sequence level) and the number of HQ bins. Comparison with baselines demonstrates that UNITIGBIN achieves the highest level of performance. In particular, UNITIGBIN is capable of binning not only long contigs but also those shorter than 1,000 bp, which are typically discarded by other binning tools. For instance, UNITIGBIN achieves a sequence-level F1 score of 0.952, significantly higher than the second-highest F1 score of 0.632 obtained by MaxBin2. The calculated Precision, Recall, F1, and ARI scores are shown in Table A2.

### 4.2 Evaluation on metaFlye-based Datasets
We also benchmark UNITIGBIN and baselines on six real datasets assembled using metaFlye. We use the CheckM tool and count the number of bins that meet two criteria (following CCVAE): A) completeness > 90 & contamination < 5; and B) completeness > 50 & contamination < 10. Figure 4 clearly shows that UNITIGBIN outperforms baselines across all six datasets. UNITIGBIN produces a total of 1,775 bins that meet criterion A, whereas the highest count achieved by baselines is 962 bins, 45.8% fewer than UNITIGBIN. Notably, CCVAE uses CheckM to detect single-copy marker genes within contigs and extract heterophilous constraints, and also employs CheckM to evaluate binning results. To eliminate potential ambiguity, we follow the pipeline of MaxBin2 and MetaCoAG, using FragGeneScan and HMMER to identify contigs containing marker genes. We also present results for UNITIGBIN using constraints extracted from CheckM, detailed in Figure A1. In summary, UNITIGBIN consistently demonstrates superior performance across datasets assembled by both metaSPAdes and metaFlye.

### 4.3 Visualization and Experimental Analysis
**Visualization.** To gain deeper insights into the binning results, we employ the python-igraph package to visualize the unitig-level assembly graph of Sim100G alongside the ground truth and the binning results obtained from various binning tools (ten representative bins are selected; see Figure 5). Nodes represent unitigs, while edges indicate overlapping relationships between distinct unitigs. Distinct colors represent different species or bins. UNITIGBIN produces binning results that align well with the ground truth, whereas other baselines struggle with missing or inaccurate labels.
Table 2: AMBER evaluation on Sim20G: F1 and ARI scores ('bp' = nucleotide level, 'seq' = sequence level) and the number of HQ bins.

| Methods | F1(bp) | F1(seq) | ARI(bp) | ARI(seq) | HQ↑ |
|-------------|--------|---------|---------|----------|-----|
| MetaBAT2 | 61.0 | 30.1 | 39.2 | 21.6 | 4 |
| MaxBin2 | **99.0** | 63.2 | 99.0 | 77.5 | **20** |
| SemiBin | 98.4 | 53.7 | 98.1 | 41.7 | 19 |
| VAMB | 97.5 | 59.1 | 97.9 | 96.7 | 18 |
| GraphMB | 94.2 | 58.3 | 55.9 | 34.4 | 10 |
| CCVAE | 97.8 | 61.3 | 79.1 | 46.8 | 13 |
| RepBin | 96.6 | 44.8 | 96.3 | 14.2 | 16 |
| MetaCoAG | 95.3 | 58.9 | 99.1 | 78.7 | 15 |
| **UNITIGBIN** | 98.7 | **95.2** | **99.3** | **97.4** | **20** |
**Ablation study & Parameters analysis.** To assess the effectiveness of our proposed model, we perform an ablation study investigating the individual algorithmic components within UNITIGBIN. Figure 6 illustrates that each component of UNITIGBIN contributes to the improvement in contig binning performance (more details in Appendix A.7). We also analyze the impact of parameters such as the dimension $d$, the restart probability $\alpha$ in the diffusive convolution, the threshold $T$ in Matching, the constraints weight $\lambda_1$ in the loss function of Equation 4, and the constraints weight $\lambda_2$ in the loss function of Equation 5. Figure A3 shows that UNITIGBIN displays relatively low sensitivity to variations in these parameters. As $\lambda_1$ is raised, giving more weight to the constraints, the performance of UNITIGBIN increases and then stabilizes.
**Training process & Running time.** Figure A2 (a) and (b) show the training processes of Learning and Propagating in UNITIGBIN, respectively. As the number of training iterations increases, the proportion of violated constraints decreases and more constraints are satisfied. We also benchmark the running time of UNITIGBIN against selected baselines on Sim100G (refer to Figure A2 (c)). With a runtime of approximately 30 minutes, UNITIGBIN is the second-fastest deep learning-based binning tool, beaten only by VAMB, and is significantly faster than the other deep learning-based methods.
5 CONCLUSION
To model the unitig-level assembly graph directly output from metagenomic assemblers while incorporating heterophilous constraints derived from single-copy marker genes, we present a novel binning tool called UNITIGBIN, a graph neural network model with constraint satisfaction designed for binning metagenomic contigs. UNITIGBIN comprises Learning, which uses a graph neural network model to learn the unitig-level assembly graph while adhering to constraints. It is followed by a contig Binning framework that employs an adapted Matching algorithm to initialize marked contigs, uses Propagating to annotate unmarked contigs while satisfying constraints, and incorporates a local Refining strategy to fine-tune binning assignments. Extensive experiments conducted on both synthetic and real datasets show that UNITIGBIN outperforms existing binning tools significantly. The primary limitations of the model include its inability to label overlapping bins, where contigs belong to multiple species, and the difficulty of binning unmarked and short contigs. As future work, we plan to explore graph neural networks for binning short, unmarked contigs with multiple labels, and efficiently encoding large-scale unitig-level assembly graphs.
REFERENCES
Mads Albertsen, Philip Hugenholtz, Adam Skarshewski, Kåre L Nielsen, Gene W Tyson, and Per H Nielsen. Genome sequences of rare, uncultured bacteria obtained by differential coverage binning of multiple metagenomes. *Nature biotechnology*, 31(6):533–538, 2013.
Johannes Alneberg et al. Binning metagenomic contigs by coverage and composition. *Nature methods*, pp. 1144–1146, 2014.
Tyler P Barnum, Israel A Figueroa, Charlotte I Carlström, Lauren N Lucas, Anna L Engelbrektson, and John D Coates. Genome-resolved metagenomics identifies genetic mobility, metabolic interactions, and unexpected diversity in perchlorate-reducing communities. *The ISME journal*, 12(6):1568–1581, 2018.
Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. In *ICLR*, 2018.
Florian P Breitwieser, Jennifer Lu, and Steven L Salzberg. A review of methods and databases for metagenomic classification and assembly. *Briefings in bioinformatics*, 20(4):1125–1136, 2019.
Simon JS Cameron, Keir E Lewis, Sharon A Huws, Wanchang Lin, Matthew J Hegarty, Paul D Lewis, Luis AJ Mur, and Justin A Pachebat. Metagenomic sequencing of the chronic obstructive pulmonary disease upper bronchial tract microbiome reveals functional changes associated with disease severity. *PLoS One*, 11(2):e0149095, 2016.
Alex Chklovski, Donovan H Parks, Ben J Woodcroft, and Gene W Tyson. Checkm2: a rapid, scalable and accurate tool for assessing microbial genome quality using machine learning. *Nature Methods*, 20(8):1203–1212, 2023.
Chris L Dupont, Douglas B Rusch, Shibu Yooseph, Mary-Jane Lombardo, R Alexander Richter, Ruben Valas, Mark Novotny, Joyclyn Yee-Greenbaum, Jeremy D Selengut, Dan H Haft, et al. Genomic insights to sar86, an abundant and uncultivated marine bacterial lineage. *The ISME journal*, 6(6):1186–1199, 2012.
Sean R Eddy. Accelerated profile HMM searches. *PLoS computational biology*, 7(10):e1002195, 2011.
Hadrien Gourlé, Oskar Karlsson-Lindsjö, Juliette Hayer, and Erik Bongcam-Rudloff. Simulating Illumina metagenomic data with InSilicoSeq. *Bioinformatics*, 35(3):521–522, 2019.
Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. Learning to represent knowledge graphs with gaussian embedding. In *CIKM*, 2015.
Tammi Kaeberlein, Kim Lewis, and Slava S Epstein. Isolating "uncultivable" microorganisms in pure culture in a simulated natural environment. *Science*, 296(5570):1127–1129, 2002.
Dongwan D Kang, Feng Li, Edward Kirton, Ashleigh Thomas, Rob Egan, Hong An, and Zhong Wang. MetaBAT 2: an adaptive binning algorithm for robust and efficient genome reconstruction from metagenome assemblies. *PeerJ*, 7:e7359, 2019.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.
Thomas N Kipf and Max Welling. Variational Graph Auto-Encoders. In *NeurIPS Workshop on Bayesian Deep Learning*, 2016.
Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion Improves Graph Learning. In *NeurIPS*, 2019.
Mikhail Kolmogorov, Derek M Bickhart, Bahar Behsaz, Alexey Gurevich, Mikhail Rayko, Sung Bong Shin, Kristen Kuhn, Jeffrey Yuan, Evgeny Polevikov, Timothy PL Smith, et al. metaFlye: scalable long-read metagenome assembly using repeat graphs. *Nature Methods*, 17(11):1103–1110, 2020.
|
Wgb8tuu5BI
|
For graphs like Figure 3(c), the main issue seems to come mostly from the measurement error. The intrinsic trends aren't that crucial. Can one simply ignore the intrinsic trends and treat this as a causal model with measurement errors, for which there is existing work? Would that allow us to identify the measurement trend variable?
|
Decoupling Intrinsic and Measurement Trends: A Crucial Consideration in Time Series Causal Discovery
Anonymous authors
Paper under double-blind review
Abstract
In the realm of time series data, it is common to encounter time trends, which manifest as a function concerning time within a given data span. Time trends can be classified into intrinsic (real) and measurement (false) trends. Intrinsic trends are inherent to the underlying mechanisms of the variables, while measurement trends are essentially measurement errors unique to the observed values (e.g., an increase in diagnosed thyroid nodule patients due to enhanced medical techniques, despite a stable incidence rate over time). Measurement trends can critically influence the results of a variety of causal discovery methods and hence, necessitate elimination prior to causal analytic procedures. In this study, we introduce a novel framework capable of detecting all trend-influenced variables and distinguishing between intrinsic and measurement trends, called Trend Differentiator (TrendDiff). This approach consists of two primary steps: trend variable identification and trend type differentiation. The first step leverages Constraint-based Causal Discovery from heterogeneous/Nonstationary Data (CD-NOD) to identify variables with trends. Following this, we utilize the structure characteristics to differentiate between intrinsic and measurement trends. Experimental results on various synthetic scenarios and real-world data sets are employed to demonstrate the efficacy of our methods.
1 Introduction
Emerging in the early 1990s, causal discovery algorithms have undergone substantial growth in the past two decades (Spirtes & Zhang, 2016). These algorithms strive to infer causal relationships from purely observational data, serving as a valuable instrument in situations where randomized controlled trials are impractical due to ethical concerns, financial constraints, and other obstacles. With explosive data volumes meeting advances in computational capability, a surge in theoretical and applied causal research has ensued. Hitherto, causal discovery methods have been employed across various disciplines, such as climatology, healthcare, and economics (Ebert-Uphoff & Deng, 2012; Shen et al., 2020; Hall-Hoffarth, 2022). Yet the rapid accumulation of data presents not only exhilarating possibilities but also significant challenges in the domain of causal discovery.
A prevalent challenge is the presence of time trends, frequently encountered in time series data. As Phillips (2005) articulated, "No one understands trends, but everyone sees them in the data." While previous efforts have extensively examined the impact of time trends on the performance of conventional statistical algorithms (White & Granger, 2011; Wu et al., 2007), their effects on causal discovery methodologies remain unexplored. Since the definition of time trends is still contentious, to be precise, we define a time trend as a function of time within a given data span. Based on their origin, trends are classified into two categories: intrinsic (real) and measurement (false) trends. Intrinsic trends are inherent to the fundamental mechanisms governing the variables (e.g., global warming: the temperature is really increasing), whereas measurement trends are essentially observation errors unique to the recorded values (e.g., an observed increase in diagnosed thyroid nodule patients due to enhanced medical techniques, despite a stable real incidence rate over time (Davies & Hoang, 2021); see Figure 1).
These two types of trends originate from distinct sources, exert disparate impacts, and necessitate differential treatment in the context of causal discovery.

**Figure 1:** The true and observed incidence of thyroid nodules over time – a typical example of a measurement trend.
However, the common impression that time trends, whether intrinsic or measurement, should be removed before analysis is not accurate. Undoubtedly, measurement trends, being a form of measurement error, necessitate removal. Consider constraint-based causal discovery methods, which rely on conditional independence tests: measurement trends introduce two issues for these algorithms. First, the dependence between measurement-trend variables and their neighbors weakens as the trend grows; second, the conditional independence given the measurement-trend variables vanishes, yielding spurious dependence (Scheines & Ramsey, 2016; Zhang et al., 2017a). As illustrated in **Figure 2**, the measurement trend in $X_2$ not only weakens its dependence with $X_1$ and $X_3$ but also causes $X_2$ to fail to separate $X_1$ and $X_3$. An analogous phenomenon occurs for the other measurement-trend variable $X_3$. The causal network identified by constraint-based methods diverges significantly from the ground truth in such scenarios. As noted in earlier research on measurement error in causal discovery, measurement trends affect not only constraint-based algorithms but also other methodologies, including those based on functional causal models (Zhang et al., 2017a). Conversely, intrinsic trends are integral components of the variables and mechanisms, facilitating the identification of underlying causal relationships. Removing intrinsic trends would decrease the signal-to-noise ratio and hence the detection power, and should be avoided. Consequently, discerning between intrinsic and measurement trends and eliminating only the latter is crucial before conducting causal discovery analyses.

**Figure 2:** An illustration of how ignoring measurement trends in causal discovery may lead to spurious connections by constraint-based methods. (a) The true causal graph (including measurement-trend variables $X_2$ and $X_3$, the true values of which are not observable). (b) The estimated skeleton on the observed data. Note: the circled, underlined variables $\underline{X}_2$ and $\underline{X}_3$ in (a) are real values, while $X_2$ and $X_3$ in (b) are observed values with measurement trends.
In the present study, we assume the underlying causal structure to be a directed acyclic graph (DAG) containing variables exhibiting time trends, either intrinsic or measurement. Our objective is to devise a principled framework capable of identifying trend-influenced variables and distinguishing those with measurement trends from those exhibiting intrinsic trends. The paper is structured as follows: Section 2 defines the research question using DAGs. Section 3 outlines our methodology for pinpointing variables exhibiting time trends, encompassing both intrinsic and measurement trends. In Section 4, we delve deeper into the techniques employed to distinguish between intrinsic and measurement trends. Together, these two sections offer a thorough exposition of the Trend Differentiator (TrendDiff) method used to identify and differentiate variables based on their time trends. Finally, an array of simulation studies under various scenarios and a real-world application are presented in Section 5, substantiating the efficacy of our approach.
2 PARAMETERIZING TIME TRENDS
To make intrinsic and measurement trends precise, we resort to structural equation models (SEMs), where each variable $V_i$ is formulated as a function of its direct causes and an error term $\varepsilon_i$. Here $\varepsilon_i$ encapsulates all other unmeasured causes of $V_i$, and the $\varepsilon_i$ of different variables are independent of each other. Figure 3 shows the structures of intrinsic and measurement trends, respectively. Figure 3 (a) illustrates a straightforward model featuring a causal chain from $X_1$ to $X_2$ to $X_3$. Each variable is associated with a structural equation, and the model can be parameterized by assigning exact functions $f_i$, as well as a joint normal distribution $\varepsilon_1, \varepsilon_2, \varepsilon_3 \sim \mathcal{N}(\mu, \Sigma)$. In this case, $\Sigma$ is diagonal, reflecting the independence among the error terms $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_3$. Regardless of the functions and free parameter values assigned, the model in Figure 3 (a) exhibits the conditional independence $X_1 \perp\!\!\!\perp X_3 | X_2$.
In Figure 3 (b), we present the same model as in Figure 3 (a) but with an added intrinsic trend $T_2$ affecting $X_2$. The intrinsic trend $T_2$ impacts the generation of $X_2$ and is an inherent part of its underlying mechanisms. In this case, the observed and real values of $X_2$ are identical. The added intrinsic time trend is able to enter the causal network through $X_2$ without altering the original causal structure. Consequently, a trend can be observed in $X_3$, which arises through the influence of $T_2$. In Figure 3 (c), we depict the same model but with the true values $\underline{X}_2$ being “measured” as $X_2$, accompanied by a measurement trend $T_2$. In this case, the real and observed values of $X_2$ differ. The measurement trend $T_2$ is present only in the observed $X_2$. Due to the collider at $X_2$, $T_2$ cannot influence the real values $\underline{X}_2$ and is unable to propagate through the original causal network. As previously mentioned, the measurement trend $T_2$ essentially represents a form of measurement error, which can adversely affect the performance of causal discovery algorithms.

**Figure 3:** An illustration of causal models for variables with intrinsic and measurement trends and corresponding equations. (a) A three-variable chain graph without trends. (b) $X_2$ with an intrinsic trend. (c) $X_2$ with a measurement trend; $\underline{X}_2$ represents the true values of $X_2$. Note: variables without a circle are observed variables, while those with a circle are hidden variables. To make the graph clearer, we omit $\varepsilon_1, \varepsilon_2, \varepsilon_3$ in Figure 3 (b) and (c).
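To make the distinction concrete, the following minimal sketch simulates the chain in Figure 3 once with an intrinsic trend and once with a measurement trend. The linear functions, coefficients, and the trend shape are our own illustrative choices rather than the paper's exact design; the point is only that the trend propagates to $X_3$ in the intrinsic case but not in the measurement case.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
t = np.arange(T)
trend = 0.002 * t                      # any smooth function of time works

# Chain X1 -> X2 -> X3 with independent noise, as in Figure 3 (a).
eps1, eps2, eps3 = rng.normal(size=(3, T))
x1 = eps1

# (b) Intrinsic trend: the trend enters the *generation* of X2,
# so it propagates downstream to X3.
x2_intrinsic = 0.8 * x1 + trend + eps2
x3_intrinsic = 0.8 * x2_intrinsic + eps3

# (c) Measurement trend: X2 is generated without the trend; the trend
# contaminates only the *observed* values. X3 depends on the true
# (underlined) X2, so no trend reaches X3.
x2_true = 0.8 * x1 + eps2
x2_observed = x2_true + trend          # what the data set records
x3_measurement = 0.8 * x2_true + eps3

# Only the intrinsic case leaves a visible trend in X3.
print(np.corrcoef(t, x3_intrinsic)[0, 1])    # clearly non-zero
print(np.corrcoef(t, x3_measurement)[0, 1])  # approximately zero
```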
3 PHASE 1: DETECTION OF TIME-TREND VARIABLES AND CAUSAL STRUCTURE RECOVERY
3.1 ASSUMPTIONS
Adopting a more relaxed version of causal sufficiency, this work assumes pseudo causal sufficiency. In causal discovery, the causal sufficiency assumption posits that all common causes (confounders) of the observed variables are included in the data set. The presence of time trends in data, however, may violate this assumption. Time trends typically emerge from intricate, compounded factors. As a statistical expedient, these factors are collectively considered, predicated on the combined effect being expressible as a mathematically smooth function of time when quantitatively represented. Time trends across distinct variables can be interrelated due to specific types of unobserved confounders. Consequently, we merely assume that these confounders, if any, are fixed at each time point within data exhibiting time trends, which is referred to as pseudo causal sufficiency.
Assuming that the observed data are independently and identically distributed (i.i.d.), this work concentrates on instantaneous or contemporaneous causal relationships, and the strength of the causal relations does not change over time. As a consequence, time-delayed causal relations, specifically autoregressive models, are not explicitly explored. Nevertheless, it is worth noting that our framework can be naturally generalized to encompass time-delayed causal relations in time series, akin to how constraint-based causal discovery has been adapted to handle time series data (see, e.g., Chu et al., 2008).
Let \( \{g_l(C)\}_{l=1}^L \) represent the set of confounders (potentially empty). Additionally, we posit that for each \( V_i \), the local causal process can be depicted by the SEM:
\[
V_i = f_i(\text{PA}_i, g^i(C), \theta_i(C), \varepsilon_i)
\]
Here, \( g^i(C) \subseteq \{g_l(C)\}_{l=1}^L \) signifies the set of confounders influencing \( V_i \) (an empty set when no confounder is present behind \( V_i \) and any other variable), while \( \theta_i(C) \) represents the effective parameters within the model, also presumed to be functions of \( C \). Moreover, \( \varepsilon_i \) denotes a disturbance term, independent of \( C \) and exhibiting non-zero variance (i.e., the model is non-deterministic). The mutual independence of \( \varepsilon_i \) is also assumed.
In this work, we consider \( C \) as a random variable, yielding a joint distribution over \( V \cup \{g_l(C)\}_{l=1}^L \cup \{\theta_m(C)\}_{m=1}^n \). We assume that this distribution adheres to the Markov and faithfulness properties with respect to the graph resulting from the following modifications to \( G \) (which, as a reminder, represents the causal structure over \( V \)): add \( \{g_l(C)\}_{l=1}^L \cup \{\theta_m(C)\}_{m=1}^n \) to \( G \), and for each \( i \), add an arrow from each variable in \( g^i(C) \) to \( V_i \) and an arrow from \( \theta_i(C) \) to \( V_i \). This extended graph is denoted \( G^{\text{aug}} \). Evidently, \( G \) is merely the induced subgraph of \( G^{\text{aug}} \) over \( V \). Importantly, leaf nodes (those devoid of descendants) manifest indistinguishable characteristics whether they carry an intrinsic or a measurement trend. Hence, we assume that trend variables are not positioned as leaf nodes.
### 3.2 Detection of Time-Trend Variables and Causal Structure Recovery
In this section, we use Constraint-based Causal Discovery from Heterogeneous/Nonstationary Data (CD-NOD) to detect variables exhibiting time trends and subsequently deduce the causal network for \( V \cup \{C\} \). The core concept hinges on using the (observed) variable \( C \) as a surrogate for the unobserved \( \{g_l(C)\}_{l=1}^L \cup \{\theta_m(C)\}_{m=1}^n \). In essence, we utilize \( C \) to encapsulate the \( C \)-specific information. Under the assumptions detailed in Section 3.1, it becomes feasible to deploy conditional independence tests on the combined set \( V \cup \{C\} \) to detect variables with time trends and recover the structure. This is achieved by Algorithm 1 and supported by Theorem 1.
In Algorithm 1, we first construct a complete undirected graph, denoted \( U_G \), which incorporates both \( C \) and \( V \). In Step 2 of the algorithm, the decision regarding whether a variable \( V_i \) exhibits a time trend is contingent upon the conditional independence between \( V_i \) and \( C \), given a subset of other variables. If a time trend is present in \( V_i \), then the module of \( V_i \) evolves in conjunction with \( C \). Consequently, the probability distribution \( P(V_i | \text{PA}_i) \) will not remain constant across different values of \( C \). As a result, \( V_i \) and \( C \) are conditionally dependent regardless of which subset of other variables is conditioned on. Based on this rationale, if \( V_i \perp\!\!\!\perp C | \text{PA}_i \), then there should be no time trend in \( V_i \); conversely, if no conditioning subset renders them independent, we claim to have detected a variable with a time trend. After this step, all variables linked to \( C \), referred to as “\( C \)-specific variables”, are considered to have time trends. It is important to highlight that this step is characterized by high recall; its precision, however, might vary slightly, contingent on the number of no-trend variables possessing changing modules within the data set. Specifically, Algorithm 1 is designed to effectively identify all variables exhibiting changing modules. While time-trend variables inherently exhibit a changing module, the reverse is not necessarily true. As a result, our set of “\( C \)-specific variables” also encompasses variables that, although devoid of trends, display changing modules. Given that our focus is refined to “\( C \)-specific variables” throughout Phase 2, this characteristic ensures Phase 1 is conservative: the set of “\( C \)-specific variables” will usually be equal to or larger than the true trend-variable set, thereby guaranteeing the comprehensive inclusion of every trend variable.
Algorithm 1 Detection of Time-trend Variables and Recovery of Causal Structure
1. Build a complete undirected graph $U_G$ on the variable set $V \cup C$.
2. (Detection of time-trend variables) For each $i$, test for the marginal and conditional independence between $V_i$ and $C$. If they are independent given a subset of $\{V_k | k \neq i\}$, remove the edge between $V_i$ and $C$ in $U_G$.
3. (Recovery of causal skeleton) For every $i \neq j$, test for the marginal and conditional independence between $V_i$ and $V_j$. If they are independent given a subset of $\{V_k | k \neq i, k \neq j\} \cup \{C\}$, remove the edge between $V_i$ and $V_j$ in $U_G$.
4. (Orientation) For the obtained skeleton, orient it by standard orientation rules and distribution shift. After the orientation process, we can get the causal network for $V \cup C$, called $G_{\text{phase1}}$.
Step 3 aims to discover the skeleton of the causal structure over $V$. It leverages the results from Step 2: if neither $V_i$ nor $V_j$ is adjacent to $C$, then $C$ does not need to be involved in the conditioning set. In practice, one may apply any constraint-based search procedures on $V \cup C$, e.g., SGS and PC (Spirtes et al., 1993). Its (asymptotic) correctness is justified by the following theorem:
Theorem 1: Given Assumptions made in Section 3.1, for every $V_i, V_j \in V$, $V_i$ and $V_j$ are not adjacent in $G$ if and only if they are independent conditional on some subset of $\{V_k | k \neq i, k \neq j\} \cup \{C\}$.
Given that this segment is identical to the Constraint-based Causal Discovery from Heterogeneous/Nonstationary Data (CD-NOD), we refrain from delving into further details here. For a comprehensive explanation of the step 4 orientation procedure and the complete proof of Theorem 1, please refer to (Zhang et al., 2017b; Huang et al., 2020).
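For intuition, here is a minimal sketch of Step 2 of Algorithm 1: append the time index as the surrogate variable $C$ and flag every variable that no small conditioning set can separate from $C$. We use a linear-Gaussian partial-correlation CI test purely as a stand-in (CD-NOD would typically use a nonparametric, e.g., kernel-based, test), and all function names here are our own.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def ci_test(data, i, j, cond, alpha=0.05):
    """Return True if data[:, i] _||_ data[:, j] given data[:, cond],
    judged by a Fisher-z partial-correlation test (a linear-Gaussian
    stand-in for the kernel tests used with CD-NOD)."""
    x, y = data[:, i], data[:, j]
    if cond:
        Z = np.column_stack([np.ones(len(data))] + [data[:, k] for k in cond])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # regress out cond
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r, _ = stats.pearsonr(x, y)
    z = np.arctanh(r) * np.sqrt(len(x) - len(cond) - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return p_value > alpha

def detect_trend_variables(V, max_cond=2):
    """Step 2 of Algorithm 1: V has shape (T, n).  A variable is flagged
    as 'C-specific' unless some conditioning set separates it from C."""
    T, n = V.shape
    data = np.column_stack([V, np.arange(T)])   # last column = time index C
    c = n
    flagged = []
    for i in range(n):
        others = [k for k in range(n) if k != i]
        separated = any(ci_test(data, i, c, list(S))
                        for size in range(max_cond + 1)
                        for S in combinations(others, size))
        if not separated:
            flagged.append(i)
    return flagged
```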
4 Phase 2: Utilizing Structural Differences to Distinguish Between Intrinsic and Measurement-Trend Variables
In Phase 1, we procured the set of variables exhibiting time trends (those associated with $C$) as well as the causal network $G_{\text{phase1}}$ for $V \cup \{C\}$. By constraining our analysis to only the “C-specific variables” while pinpointing intrinsic-trend variables, Phase 2 of our algorithm benefits from increased efficiency and a reduced risk of false positives. Besides, although the causal structure $G_{\text{phase1}}$ derived in Phase 1 might not be entirely accurate due to the existence of measurement trends, it serves as a foundational aid in differentiating types of trends. In Phase 2, we demonstrate that by examining the different structures within causal networks, it is feasible to differentiate variables with intrinsic trends from those influenced by measurement trends.
4.1 Distinguish Between Intrinsic and Measurement Trends by $G_{\text{phase1}}$
As depicted earlier, intrinsic-trend variables do not change the causal network, whereas those variables characterized by measurement trends can induce structural alterations in causal discovery. Next, we delve into how a measurement-trend variable influences the causal structure of $G_{\text{phase1}}$ and leverage this understanding to partly distinguish between the two trend types.
Figure 4 illustrates how a measurement-trend variable alters the output causal structure of Phase 1. In Figure 4(a), we depict a chain with a measurement trend in $X_2$. During Phase 1, the time index $C$ is integrated into our analysis to pinpoint all trend variables. Due to the presence of a measurement trend in $X_2$, a connection from $C$ to $X_2$ is established. Furthermore, based on the conditional independence observed in the actual structure Figure 4(a), we have $T \perp\!\!\!\perp X_3$ and, crucially, $T \not\perp\!\!\!\perp X_3|X_2$. By extension, because $C$ is a proxy for $T$, the relationships $C \perp\!\!\!\perp X_3$ and $C \not\perp\!\!\!\perp X_3|X_2$ should hold. The dependency dynamics between $X_1$ and $C$ follow suit. As a result, the Phase 1 structural outcome should be the one shown in Figure 4(b). It’s worth noting that since the measurement trend $T$ is independent across all variables within the causal network, no arrow can stem from the measurement-trend variable to other variables in $G_{\text{phase1}}$. In essence, any linkage from a “C-specific variable” to other entities indicates an intrinsic trend.
Figure 4: An illustration of how a measurement-trend variable alters the output causal structure of Phase 1. (a) the real structure with a measurement trend in $X_2$; (b) the output structure of (a) by the algorithm in Phase 1. $X_2$ represents observed values, $\underline{X}_2$ represents the true values of $X_2$. Note: variables without a circle are observed variables, while those with a circle are hidden variables.
In summary, we first employ the structure of $G_{\text{phase}1}$ to discern intrinsic-trend variables. A “C-specific variable” is deemed to exhibit an intrinsic trend if it possesses an arrow pointing to other variables in $G_{\text{phase}1}$.
4.2 Distinguish between Intrinsic and Measurement Trends by Further Conditional Independence Tests
Having identified certain intrinsic-trend variables based solely on the structure of $G_{\text{phase}1}$, it becomes necessary to undertake additional conditional independence tests to recognize further intrinsic-trend variables. As illustrated in Figure 3, the children of time-trend variables serve as critical pivot points in the differentiation process. For variables with intrinsic trends (see Figure 3b), we have $T_2 \not\perp X_3$ and $T_2 \perp X_3 | X_2$. Conversely, for variables with measurement trends (see Figure 3c), we have $T_2 \perp X_3$ and $T_2 \not\perp X_3 | X_2$. Thus, the criterion for identifying an intrinsic-trend variable $X_2$ can be $T_2 \not\perp X_3$ and $T_2 \perp X_3 | X_2$, where $T_2$ is the trend of $X_2$ and $X_3$ is a child of $X_2$. Since the trend $T_2$ is not directly observable in this context, we employ the time index $C$ again as a suitable proxy for the unobservable trend. The criterion therefore becomes: $C \not\perp X_3$ and $C \perp X_3 | X_2$.
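In code, the criterion amounts to two CI tests per candidate. A minimal sketch, reusing the stand-in `ci_test` helper from the Algorithm 1 sketch above (again, the names and the test itself are our own simplifications):

```python
def is_intrinsic_trend(data, x, child, c):
    """Phase 2 criterion for a C-specific variable x with a child node:
    x is labelled intrinsic-trend iff the time index C is dependent on
    the child marginally, but independent of it once x is conditioned on."""
    dependent_marginally = not ci_test(data, c, child, [])
    separated_by_x = ci_test(data, c, child, [x])
    return dependent_marginally and separated_by_x
```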
The first row of Figure 5 illustrates four scenarios of child variables that may arise when screening for the intrinsic-trend variable $X_1$. In Figure 5 (a), no trend is evident in the child variable $X_2$, allowing us to easily identify $X_1$ as an intrinsic-trend variable using our criterion. However, in Figure 5 (b) and (c), the child variable $X_2$ exhibits an intrinsic and a measurement trend, respectively. Since trends are functions of time, time serves as a confounder (common cause) of the trends $T_1$ and $T_2$. In these cases, the path from $T_1$ to $X_2$ via the confounder “time” cannot be blocked, as neither “time” nor $T_2$ is observable (we can obtain a surrogate for $T_2$, but it is insufficiently accurate to block the path). Consequently, we cannot distinguish variables with intrinsic trends from those with measurement trends when all child variables have trends. However, if the trend in the child variable $X_2$ originates from its other observable parent $X_3$, as depicted in Figure 5 (d), the intrinsic-trend variable $X_1$ is identifiable, since we can block the path through “time” by conditioning on $X_3$.
For structures (b) and (c), first-order descendants (children) do not facilitate distinguishing trend types. However, can second-order descendants provide clarity? Will it help if structures similar to (a) or (d) emerge subsequent to (b) and (c)? The second row of Figure 5 illustrates potential second-order descendant structures for both (b) and (c). Although (b-1) and (b-2) remain non-identifiable, (c-1) and (c-2) can be discerned. The principles behind (c-1) and (c-2) align with those of (a) and (d), namely $C \not\perp X_3$ and $C \perp X_3 | X_1$. It is noteworthy that structures (c-1) and (c-2) essentially represent (a) and (d) but with an added measurement-trend variable subsequent to the intrinsic-trend variable $X_1$ under examination. Extending this rationale, we can infer that all structures obtained by adding $n$ measurement-trend variables between $X_1$ and $X_2$ in structures (a) and (d) can theoretically be identified, where $n = 0, 1, 2, \ldots$
In summary, intrinsic-trend variables are discernible only when (1) the intrinsic-trend variable $X$ to be tested possesses at least one descendant variable $Y$ without trends (like structure (a)) or with trends stemming from other observable variables (like structure (d)); and (2) there are no other intrinsic-trend variables on the path from $X$ to $Y$. Nevertheless, the performance deteriorates in reality as the number of measurement-trend variables between $X_1$ and $X_2$ increases, due to the amplification of noise with increasing distance. To maintain accuracy, this study restricts its focus
to first-order scenarios, wherein $X_2$ is a direct descendant, namely a child of $X_1$. Algorithm 2 for Phase 2 is provided in Appendix A.1. Combining Algorithm 1 and Algorithm 2, we can obtain the proposed Trend Differentiator (TrendDiff Algorithm).

**Figure 5:** Different scenarios for descendants of intrinsic-trend variables. First row: four possible cases of an intrinsic-trend variable's child nodes in causal networks. (a) Child node without trend. (b) Child node with an intrinsic trend. (c) Child node with a measurement trend. (d) Child node with a trend from other observable nodes. Second row, (b-1), (b-2), (c-1), and (c-2): four possible cases of an intrinsic-trend variable's second-order descendants for structures (b) and (c).
## 5 EXPERIMENTS
The proposed TrendDiff algorithm has been employed on a variety of synthetic and real-world data sets. We assessed the accuracy with which this method can pinpoint variables exhibiting intrinsic trends across diverse scenarios. Besides, we further contrasted the efficacy of causal discovery methodologies pre- and post-removal of measurement trends discerned by our techniques, thereby demonstrating the advantages of eliminating such trends.
### 5.1 SIMULATIONS
Algorithm performance is first evaluated on simulated data. We generated synthetic data according to the SEMs specified in Figure 8. More specifically, $V_1$, $V_5$, and $V_7$ have intrinsic trends, while $V_2$ and $V_6$ have measurement trends. Time trends are defined as a sinusoidal function of time, with periods $w$ randomly selected from the range (5, 25). All relationships are nonlinear. We tried different noise types (Gaussian, Exponential, Gumbel), as well as different sample sizes ($T = 600, 900, 1200, 1500$). In each setting, we ran 50 trials. We tested the generated data with the proposed TrendDiff method and compared the results of the PC algorithm before and after the removal of identified measurement trends.
**Figure 6** displays the simulation results. Figure 6 (a) presents the F1 score, precision, and recall of identified intrinsic-trend variables under varying data length $T$ and noise type. The robustness of the proposed algorithm is evidenced by its consistent performance under Gaussian, Exponential, and Gumbel noise models. As the data length increases, there is a corresponding enhancement in performance. When the data length is 1500 or above, the algorithm demonstrates commendable efficiency, with the F1 score, precision, and recall all close to 0.9. Recall is slightly lower than precision. This discrepancy arises from our conservative approach, which prioritizes minimizing false-positive intrinsic-trend variables, as they have more detrimental consequences than false negatives. Figure 6 (b) contrasts the efficacy of the PC algorithm in reconstructing the original causal network using data pre- and post-elimination of detected measurement
trends. Removal of these measurement trends notably bolsters the performance of the PC algorithm, with a pronounced enhancement in the F1 score and precision. Besides these tests, we also generated data from random structures and used it to evaluate the sensitivity of our approach to data length, noise type, dimensionality (denoted by the number of nodes), and sparsity (defined by the degree considering edges in both directions). Since time trends may be approximately linear in some situations, we tested TrendDiff's performance in linear-trend scenarios as well. Our method displayed stability across varying conditions; the results can be found in the Appendix. These results further establish its robustness and adaptability.

**Figure 6:** Simulation performance. (a) Performance of identifying intrinsic-trend variables under varying conditions, measured in terms of F1 score, precision, and recall (higher values indicate better performance). (b) Performance of PC algorithm using data pre and post-elimination of detected measurement trends.
### 5.2 Real Data
We also applied the proposed approach to a real environmental health dataset. This dataset contains daily values of variables regarding air pollution, weather, and sepsis emergency hospital admissions in Hong Kong for the period from 2007 to 2018. It is a typical dataset used to assess the interactions between environmental factors and human health. There are pronounced time trends in this data (**Figure 7a**), rendering it a good application example for the TrendDiff algorithm. In our initial analysis, we applied TrendDiff to determine the intrinsic-trend variables within the data. The outcome of Phase 1 (as detailed in Algorithm 1) indicates that sepsis emergency hospital admissions, $CO$, $O_3$, and $SO_2$ are variables exhibiting a trend, be it measurement or intrinsic. Subsequently, in the follow-up phase (Algorithm 2), we differentiated between measurement-trend and intrinsic-trend variables. It was discerned that $CO$, $O_3$, and $SO_2$ have intrinsic trends, while the daily count of sepsis emergency hospital admissions stood out as the sole variable characterized by a measurement trend. This result is consistent with existing evidence. There have been heated discussions in top medical journals about the observed rise in sepsis cases. A prevailing consensus among researchers is that this uptick in sepsis incidence can be largely attributed to refined definitions and enhanced coding practices for sepsis, rather than a real increase in incidence (Rhee et al., 2017; Fleischmann-Struzek et al., 2018). As for the trio of variables recognized with an intrinsic trend, $CO$, $O_3$, and $SO_2$, ample research has been conducted on their trends, and none has ascribed these trends to measurement inaccuracies, supporting our results here (Wei et al., 2022). Beyond merely distinguishing the two types of trends, we also conducted a comparison of causal discovery results before and after eliminating the identified measurement trend. Here the time series causal discovery method Peter-Clark-momentary-conditional-independence plus (PCMCI+) was adopted (Runge, 2020). Utilizing this environmental health dataset from Hong Kong, our primary objective
was to delineate the environmental determinants linked to sepsis. As illustrated in Figure 7(b), there are significant variations in outcomes contingent on the removal of the measurement trend. Initial analyses using raw data classified \( CO \) and \( SO_2 \) as mitigating factors against sepsis. However, upon exclusion of the sepsis measurement trend, only temperature was pinpointed as a synchronous risk factor for sepsis. Though this analysis did not deal with factors like seasonality, the observed discrepancies highlight the paramount importance of detecting and addressing measurement trends in causal discovery analysis.

**Figure 7:** Evaluation of performance using a real-world dataset. (a) Depiction of time series variables. (b) Raw: discovery of structure from raw data by Peter-Clark-momentary-conditional-independence plus (PCMCI+). Detrended: discovery of structure after removal of identified measurement trends by PCMCI+.
### 6 Conclusion and Discussions
There has long been a pressing need for techniques to discern intrinsic trends from measurement trends. The proposed TrendDiff algorithm stands out as the first dedicated solution to this problem. Beyond its applicability in data pre-processing for causal discovery, as demonstrated in both simulated and real-world scenarios, its advantages are manifold. Firstly, by addressing measurement trends, which are essentially a kind of measurement error, data quality is enhanced. The adage “Garbage In, Garbage Out” underscores the pivotal role of data quality in application studies, a principle that spans multiple disciplines. This uplift in quality augments not only causal discovery but also the efficacy of a myriad of other methodologies. Secondly, the practical significance of this method is profound. For both entrepreneurs and investors, discerning genuine market trends from ephemeral ones is pivotal. Investing resources or capital in spurious trends can culminate in substantial disappointment, given the absence of a genuine market fit. Algorithms tailored to distinguishing trend types play a crucial role in mitigating such risks.
In future work, we aim to address the following questions: 1. How can we further improve the performance of the current algorithm, especially when data length is limited? 2. What if a variable bears both intrinsic and measurement trends? Can we develop a method to distinguish the two types of trends within the same variable? 3. How can the identified measurement trends be better removed? For linear trends, removal is straightforward; addressing nonlinear trends is more challenging, primarily because their exact form or shape is often unknown.
REFERENCES
Tianjiao Chu, Clark Glymour, and Greg Ridgeway. Search for additive nonlinear time series causal models. *Journal of Machine Learning Research*, 9(5), 2008.
Louise Davies and Jenny K Hoang. Thyroid cancer in the usa: current trends and outstanding questions. *The Lancet Diabetes & Endocrinology*, 9(1):11–12, 2021.
Imme Ebert-Uphoff and Yi Deng. Causal discovery for climate research using graphical models. *Journal of Climate*, 25(17):5648–5665, 2012.
Carolin Fleischmann-Struzek, Antje Mikolajetz, Daniel Schwarzkopf, J Cohen, CS Hartog, M Pletz, P Gastmeier, and K Reinhart. Challenges in assessing the burden of sepsis and understanding the inequalities of sepsis outcomes between national health systems: secular trends in sepsis and infection incidence and mortality in germany. *Intensive care medicine*, 44:1826–1835, 2018.
Emmet Hall-Hoffarth. Causal discovery of macroeconomic state-space models. *arXiv preprint arXiv:2204.02374*, 2022.
Biwei Huang, Kun Zhang, Jiji Zhang, Joseph Ramsey, Ruben Sanchez-Romero, Clark Glymour, and Bernhard Schölkopf. Causal discovery from heterogeneous/nonstationary data. *The Journal of Machine Learning Research*, 21(1):3482–3534, 2020.
World Health Organization et al. Global report on the epidemiology and burden of sepsis: current evidence, identifying gaps and future directions. 2020.
Peter CB Phillips. Challenges of trending time series econometrics. *Mathematics and Computers in Simulation*, 68(5-6):401–416, 2005.
Chanu Rhee, Raymund Dantes, Lauren Epstein, David J Murphy, Christopher W Seymour, Theodore J Iwashyna, Sameer S Kadri, Derek C Angus, Robert L Danner, Anthony E Fiore, et al. Incidence and trends of sepsis in us hospitals using clinical vs claims data, 2009-2014. *JAMA*, 318(13):1241–1249, 2017.
Kristina E Rudd, Sarah Charlotte Johnson, Kareha M Agesa, Katya Anne Shackelford, Derrick Tsoi, Daniel Rhodes Kievlan, Danny V Colombara, Kevin S Ikuta, Niranjan Kissoon, Simon Finfer, et al. Global, regional, and national sepsis incidence and mortality, 1990–2017: analysis for the global burden of disease study. *The Lancet*, 395(10219):200–211, 2020.
Jakob Runge. Causal network reconstruction from time series: From theoretical assumptions to practical estimation. *Chaos: An Interdisciplinary Journal of Nonlinear Science*, 28(7), 2018.
Jakob Runge. Discovering contemporaneous and lagged causal relations in autocorrelated nonlinear time series datasets. In *Conference on Uncertainty in Artificial Intelligence*, pp. 1388–1397. PMLR, 2020.
Jakob Runge, Peer Nowack, Marlene Kretschmer, Seth Flaxman, and Dino Sejdinovic. Detecting and quantifying causal associations in large nonlinear time series datasets. *Science advances*, 5(11):eaau4996, 2019.
Richard Scheines and Joseph Ramsey. Measurement error and causal discovery. In *CEUR workshop proceedings*, volume 1792, pp. 1. NIH Public Access, 2016.
Xinpeng Shen, Sisi Ma, Prashanthi Vemuri, and Gyorgy Simon. Challenges and opportunities with causal discovery algorithms: application to alzheimer’s pathophysiology. *Scientific reports*, 10(1):2975, 2020.
Mervyn Singer, Clifford S Deutschman, Christopher Warren Seymour, Manu Shankar-Hari, Djillali Annane, Michael Bauer, Rinaldo Bellomo, Gordon R Bernard, Jean-Daniel Chiche, Craig M Coopersmith, et al. The third international consensus definitions for sepsis and septic shock (sepsis-3). *JAMA*, 315(8):801–810, 2016.
Peter Spirtes and Kun Zhang. Causal discovery and inference: concepts and recent methodological advances. In *Applied informatics*, volume 3, pp. 1–28. SpringerOpen, 2016.
|
OCqyFVFNeF
|
However, the authors focus on explainable AI, and the generalizability is defined on different models of the same task, which does not align with the conventional understanding of DNN generalizability.
|
DEFINING AND EXTRACTING GENERALIZABLE INTERACTION PRIMITIVES FROM DNNs
Lu Chen\textsuperscript{1}∗ Siyu Lou\textsuperscript{1,2}∗ Benhao Huang\textsuperscript{1} Quanshi Zhang\textsuperscript{1†}
\textsuperscript{1}Shanghai Jiao Tong University, Shanghai, China \textsuperscript{2}Eastern Institute of Technology, Ningbo, China
\{lu.chen,siyu.lou,hbh00109@hbh,zqs1022\}@sjtu.edu.cn
ABSTRACT
Faithfully summarizing the knowledge encoded by a deep neural network (DNN) into a few symbolic primitive patterns without losing much information represents a core challenge in explainable AI. To this end, Ren et al. (2024) have derived a series of theorems to prove that the inference score of a DNN can be explained as a small set of interactions between input variables. However, the lack of generalization power makes it still hard to consider such interactions as faithful primitive patterns encoded by the DNN. Therefore, given different DNNs trained for the same task, we develop a new method to extract interactions that are shared by these DNNs. Experiments show that the extracted interactions can better reflect common knowledge shared by different DNNs.
1 INTRODUCTION
Explaining and quantifying the exact knowledge encoded by a deep neural network (DNN) presents a new challenge in explainable AI. Previous studies mainly visualized patterns encoded by DNNs (Bau et al., 2017; Kim et al., 2018) and estimated a saliency map on input variables (Simonyan et al., 2013; R. Selvaraju et al., 2017). However, a new question is whether we can formulate the implicit knowledge encoded by the DNN as explicit and symbolic primitive patterns. In fact, we hope these primitive patterns serve as elementary units for inference, just like concepts in human cognition.
However, there is no widely accepted way to define the concept encoded by a DNN, because we cannot mathematically define/formulate the exact concept in human cognition. Nevertheless, if we ignore cognitive issues, Ren et al. (2024); Li & Zhang (2023b) have derived a series of theorems as convincing evidence to take interactions as symbolic primitives encoded by a DNN. Specifically, an interaction captures the intricate nonlinear relationship encoded by the DNN. For instance, when a DNN processes a sentence “It is raining cats and dogs”, the DNN may encode the interaction between a set of input variables $S = \{\text{raining, cats, and, dogs}\} \subseteq N$. When all words in $S$ are present, an interactive effect $I(S)$ emerges, and pushes the DNN’s inference towards the semantic meaning of “heavy rain.” However, if any word in $S$ is masked, the effect will be removed.
Ren et al. (2024) have mainly proven two theorems to justify the convincingness of considering above interactions as primitive inference patterns encoded by the DNN. First, it is proven that under some common conditions\textsuperscript{2}, a well-trained DNN usually just encodes a limited number of interactions w.r.t. a few sets of input variables. More crucially, let us randomly mask an input sample $x$ in different ways to generate an exponential number of masked samples. It is proven that people can use just a few interactions to accurately approximate the DNN’s outputs on all these masked samples. Thus, these few interactions are referred to as interaction primitives.
Despite the aforementioned theorems, this study does not yet deem the above interactions as faithful primitives of DNNs. The core problem is that existing interaction extraction methods cannot theoretically guarantee the generalization (transferability) of the interactions, e.g., ensuring to extract
∗ Equal contribution.
† Quanshi Zhang is the corresponding author. He is with the Department of Computer Science and Engineering, the John Hopcroft Center at the Shanghai Jiao Tong University, China.
\url{https://github.com/sjtu-xai-lab/generalizable-interaction}
\textsuperscript{2}Please see Appendix B for details.
common interactions shared by different AI models. Interactions that are not shared by different DNNs may be perceived as out-of-distribution signals without clear meanings.
Therefore, in this study, we revisit the generalization of interactions. Specifically, we identify a clear mechanism that makes the existing method extract different interactions from the same DNN under different initialization states, which hurts the generalization power of interactions.
Thus, to address the generalization issue, we propose a new method for extracting generalizable interactions. A generalizable interaction is defined as shown in Figure 1: given multiple DNNs trained for the same task and an input sample, if an interaction can be extracted from all these DNNs, then we consider it generalizable. Our method is designed to extract interactions with maximum generalization power. This approach ensures that if an interaction exhibits a significant impact on the output score of one DNN, it usually demonstrates noteworthy influence for the other DNNs. We conducted experiments on various datasets. Experiments showed that our proposed method significantly improved the generalization power of the extracted interactions across different DNNs.
2 GENERALIZABLE INTERACTION PRIMITIVES ACROSS DNNs
2.1 PRELIMINARIES: EXPLAINING THE NETWORK OUTPUT WITH INTERACTION PRIMITIVES
Although there is no theory to guarantee how to define concepts that fit well with human cognition, (Li & Zhang, 2023a) and (Ren et al., 2023b) still provided mathematical support for explaining why we can use interactions between input variables as the primitives or concepts encoded by the DNN. Specifically, there are two types of interactions, i.e., AND interactions and OR interactions.
**AND interactions.** Given a function \( v : \mathbb{R}^n \rightarrow \mathbb{R} \), let us consider an input sample \( x = [x_1, x_2, \cdots, x_n]^T \) with \( n \) input variables indexed by \( N = \{1, 2, \ldots, n\} \). Here, \( v(x) \in \mathbb{R} \) denotes the function output on \( x \). Then, (Ren et al., 2023b) have used the Harsanyi dividend (Harsanyi, 1963) \( I_{\text{and}}(S|x) \) to quantify the numerical effect of the AND relationship between input variables in \( S \subseteq N \), which is encoded by the function \( v \). We consider this interaction as an **AND interaction**.
\[
I_{\text{and}}(S|x) := \sum_{T \subseteq S} (-1)^{|S|-|T|} v(x_T), \tag{1}
\]
where \( I_{\text{and}}(\emptyset|x) = v(x_\emptyset) \), and \( x_T \) denotes a sample whose input variables in \( N \setminus T \) are masked.\(^4\)
Each AND interaction \( I_{\text{and}}(S|x) \) reveals the AND relationship between all variables in \( S \). For instance, let us consider the slang term \( S = \{x_3, x_4, x_5, x_6\} \) in the sentence “\( x_1 = It, x_2 = is, x_3 = raining, x_4 = cats, x_5 = and, x_6 = dogs! \)” as a toy example. The co-occurrence of four words forms the semantic concept of “heavy rain” and contributes a numerical effect \( I_{\text{and}}(S|x) \) to the function output. Otherwise, the masking of any word \( x_i \in S \) invalidates the semantic concept and eliminates the interaction effect, i.e., obtaining \( I_{\text{and}}(S|x_{\text{masked}}) = 0 \) on the masked sample.
\(^3\)If the target function/model/network has a vectorized output, e.g., a DNN for multi-category classification, we may set \( v(x) = \log \frac{p(y=y_{\text{truth}}|x)}{\sum_{y} p(y|x)} \) by following (Deng et al., 2022).
\(^4\)We followed (Li & Zhang, 2023a) to obtain two discrete states for each input variable, i.e., the masked and unmasked states. We simply masked each input variable \( i \in N \setminus S \) using baseline values.
**OR interactions.** Analogously, we can also use the OR interaction to explain the function \( v : \mathbb{R}^n \rightarrow \mathbb{R} \). To this end, (Zhou et al., 2023; Li & Zhang, 2023a) have defined the following OR interaction effect \( I_{\text{or}}(S|x) \) to measure the OR interaction encoded by \( v \). In particular, \( I_{\text{or}}(\emptyset|x) = v(x_\emptyset) \).
\[
I_{\text{or}}(S|x) := -\sum_{T \subseteq S} (-1)^{|S|-|T|} v(x_{N \setminus T}). \tag{2}
\]
Each OR interaction \( I_{or}(S|x) \) describes the OR relationship between all variables in \( S \). Let us consider an input sentence "\( x_1 = This, x_2 = movie, x_3 = is, x_4 = boring, x_5 = and, x_6 = disappointing \)" for sentiment classification. Let us set \( S = \{x_4, x_6\} \). The presence of any word in \( S \) will contribute a negative sentiment effect \( I_{or}(S|x) \) to the function output.
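As a concrete reference, the following brute-force sketch evaluates Equations (1) and (2) for a generic model $v$ by enumerating the $2^{|S|}$ masked samples. The helper names and the toy check are ours; real implementations avoid this exponential enumeration.

```python
import numpy as np
from itertools import chain, combinations

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def and_interaction(v, x, baseline, S):
    """Harsanyi dividend I_and(S|x) of Eq. (1): x_T keeps only the
    variables in T at their true values; the rest are set to baseline."""
    total = 0.0
    for T in subsets(S):
        xT = baseline.copy()
        xT[list(T)] = x[list(T)]             # unmask variables in T
        total += (-1) ** (len(S) - len(T)) * v(xT)
    return total

def or_interaction(v, x, baseline, S):
    """I_or(S|x) of Eq. (2): v is evaluated with the variables in T masked."""
    total = 0.0
    for T in subsets(S):
        xNT = x.copy()
        xNT[list(T)] = baseline[list(T)]     # mask variables in T
        total += (-1) ** (len(S) - len(T)) * v(xNT)
    return -total

# Toy check: v fires only when variables 0 and 1 are both present.
v = lambda z: float(z[0] > 0 and z[1] > 0)
x, b = np.ones(3), np.zeros(3)
print(and_interaction(v, x, b, (0, 1)))      # 1.0, a pure AND primitive
```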
**Sparsity of interactions.** Theoretically, according to Equation (1), a function can encode at most \( 2^n \) different AND interactions w.r.t. all \( 2^n \) subsets \( \forall S, S \subseteq N \). However, (Ren et al., 2024) have proved that under some common conditions, most well-trained DNNs only encode a small set of AND interactions, denoted by \( \Omega \), i.e., only a few interactions \( S \in \Omega \) have considerable effects \( I_{and}(S|x) \). All other interactions have almost zero effects, i.e., \( I_{and}(S|x) \approx 0 \), which can be regarded as a set of negligible noise patterns.
It is worth noting that an OR interaction can be regarded as a specific AND interaction, if we inverse the definition of the masked state and the unmasked state of an input variable. Thus, the proven sparsity of AND interactions can also indicate the conclusion that well-trained DNNs tend to encode a small number of OR interactions.
**Definition of interaction primitives.** Considering the above proven sparsity of interactions, we define an interaction primitive as a salient interaction. Formally, given a threshold \( \tau \), the set of interaction primitives is defined as \( \Omega = \{ S \subseteq N : |I(S|x)| > \tau \} \).
**Theorem 1 (Universal matching theorem, proved by (Ren et al., 2024)).** As the corollary of the proven sparsity in (Ren et al., 2024), the function’s output on all \( 2^n \) masked samples \( \{x_S | S \subseteq N \} \) could be universally explained by the interaction primitives in \( \Omega \), s.t., \( |\Omega| \ll 2^n \), i.e., \( \forall S \subseteq N, v(x_S) = \sum_{T \subseteq S} I_{and}(T|x) \approx \sum_{T \subseteq S : T \in \Omega} I_{and}(T|x) \).
In particular, Theorem 1 shows that if we arbitrarily mask the input sample \( x \), we can get \( 2^n \) different masked samples \( \forall S, S \subseteq N \). Then, we can universally match the output of the function \( v(x_S) \) on all \( 2^n \) masked samples using only a few interaction primitives in \( \Omega \).
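Continuing the sketch above, the universal matching property is easy to verify numerically: summing Harsanyi dividends over all subsets of $S$ reproduces $v(x_S)$ exactly (here for a toy $v$ of our own choosing).

```python
def reconstruct(v, x, baseline, S):
    """Theorem 1 without truncation: v(x_S) = sum of I_and(T|x), T subset of S."""
    return sum(and_interaction(v, x, baseline, T) for T in subsets(S))

v = lambda z: float(z[0] > 0 and z[1] > 0) + 0.5 * float(z[2] > 0)
x, b = np.ones(3), np.zeros(3)
for S in [(0,), (0, 1), (0, 1, 2)]:
    xS = b.copy()
    xS[list(S)] = x[list(S)]                 # mask everything outside S
    assert np.isclose(v(xS), reconstruct(v, x, b, S))
```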
### 2.2 Faithfulness problem with interaction-based explanations
**Basic setting of using AND-OR interactions to explain a DNN.** In this section, we consider to employ both AND interactions and OR interactions to explain the DNN’s output. This is because the complexity of the representations in DNNs makes it difficult to rely solely on either AND interactions or OR interactions to faithfully explain true inference primitives encoded by the DNN.
To this end, we need to decompose the output of the DNN into two terms \( v(x) = v_{and}(x) + v_{or}(x) \), so that we can use AND interactions to explain the term \( v_{and}(x) \) and use OR interactions to explain the term \( v_{or}(x) \). In this way, the first challenge is how to learn an appropriate decomposition of \( v_{and}(x) \) and \( v_{or}(x) \) that reveals intrinsic primitive interactions encoded by the DNN. We will discuss this challenge later.
No matter how we randomly decompose \( v(x) = v_{and}(x) + v_{or}(x) \), Theorem 2 states that we can still use interactions to fit the DNN’s outputs on \( 2^n \) randomly masked samples \( \{x_T | T \subseteq N \} \). Furthermore, according to the sparsity of interaction primitives in Section 2.1, we can obtain Proposition 1, i.e., the \( 2^n \) network outputs on all masked samples can usually be approximated by a small number of AND interaction primitives in \( \Omega_{and} \) and OR interaction primitives in \( \Omega_{or} \), s.t., \( |\Omega_{and}|, |\Omega_{or}| \ll 2^n \).
**Theorem 2 (Universal matching theorem, proof in Appendix C).** Let us be given a DNN \( v \) and an input sample \( x \). For each randomly masked sample \( x_T, T \subseteq N \), we obtain
\[
v(x_T) = v_{\text{and}}(x_T) + v_{\text{or}}(x_T) = \sum_{S \subseteq T} I_{\text{and}}(S|x_T) + \sum_{S \in \{S : S \cap T \neq \emptyset\} \cup \{\emptyset\}} I_{\text{or}}(S|x_T). \tag{3}
\]
To compute \( I_{and}(S|x) \), we use a baseline value \( b_i \) and set \( x_i = b_i \) to represent its masked state. If we consider \( b_i \) variable as the presence of the variable, and consider the original value \( x_i \) as its masked state (i.e., using \( v(b_T) \) to represent \( v(x_{N \setminus T}) \) in Equation (2)), then \( I_{or}(S|x) \) in Equation (2) can be formulated the same as the AND interaction in Equation (1).
Proposition 1. The output of a well-trained DNN on all $2^n$ masked samples $\{x_T | T \subseteq N\}$ could be universally approximated by the interaction primitives in $\Omega_{\text{and}}$ and $\Omega_{\text{or}}$, s.t., $|\Omega_{\text{and}}|, |\Omega_{\text{or}}| \ll 2^n$, i.e., $\forall T \subseteq N, v(x_T) = \sum_{S \subseteq T} I_{\text{and}}(S|x_T) + \sum_{S \in \{S : S \cap T \neq \emptyset\} \cup \{\emptyset\}} I_{\text{or}}(S|x_T) \approx v(x_\emptyset) + \sum_{\emptyset \neq S \subseteq T : S \in \Omega_{\text{and}}} I_{\text{and}}(S|x_T) + \sum_{S \cap T \neq \emptyset : S \in \Omega_{\text{or}}} I_{\text{or}}(S|x_T)$, where $v(x_\emptyset) = v_{\text{and}}(x_\emptyset) + v_{\text{or}}(x_\emptyset)$.
Problems with the faithfulness of interactions. Although the universal matching capacity proven in Theorem 2 is a quite significant advantage of AND-OR interactions, it is still not the ultimate guarantee for the faithfulness of the extracted interactions. To be precise, there is still no standard way to faithfully decompose the $v_{\text{and}}(x)$ term and the $v_{\text{or}}(x)$ term that reveal intrinsic primitive interactions encoded by the DNN, considering the following two challenges.
• Challenge 1: an ambiguous decomposition of $v_{\text{and}}(x)$ and $v_{\text{or}}(x)$ usually brings considerable uncertainty into the extraction of interactions. Let us take the following toy Boolean function as an example to illustrate the diversity of interactions: $f(x) = x_1 \land x_2 \land x_3 + x_2 \land x_3 + x_3 \land x_4 + x_4 \lor x_5$, where $x = [x_1, x_2, x_3, x_4, x_5]^T$ and $x_i \in \{0, 1\}$. We have two ways to decompose $f(x)$. First, we can simply decompose $v_{\text{and}}(x) = x_1 \land x_2 \land x_3 + x_2 \land x_3 + x_3 \land x_4$ and $v_{\text{or}}(x) = x_4 \lor x_5$, and then explain $f(x)$ with one OR interaction $I_{\text{or}}(S = \{4, 5\})$ and three AND interactions $I_{\text{and}}(S = \{1, 2, 3\})$, $I_{\text{and}}(S = \{2, 3\})$, and $I_{\text{and}}(S = \{3, 4\})$. Alternatively, we can use exclusively AND interactions to explain $f(x)$. Specifically, we can rewrite $v_{\text{and}}(x) = x_1 \land x_2 \land x_3 + x_2 \land x_3 + x_3 \land x_4 + x_4 \lor x_5 = x_1 \land x_2 \land x_3 + x_2 \land x_3 + x_3 \land x_4 + (x_4 + x_5 - x_4 \land x_5)$ and $v_{\text{or}}(x) = 0$, w.r.t. $x_i \in \{0, 1\}$. Thus, the $v_{\text{and}}$ term can be explained by a total of six AND interaction primitives. This is a typical case of how different decompositions generate diverse sets of extracted interactions (see the numeric check after Challenge 2 below).
The aforementioned $f(x)$ is just an exceedingly simple function. In real-world applications, DNNs usually encode intricate AND-OR relationships among input variables, making it exceptionally challenging to formulate an explicit expression for the DNN function or to establish a definitive ground-truth decomposition of $v_{\text{and}}(x)$ and $v_{\text{or}}(x)$. Consequently, the diversity issue with interactions is ubiquitous and unavoidable.
• Challenge 2, how to ensure the interaction primitives are generalizable. It is commonly considered that generalizable primitives are usually transferable over different models trained for the same task, instead of being over-fitted by a single model. Thus, if an interaction primitive can be consistently extracted from different DNNs, then it can be considered as a faithful concept. Otherwise, non-generalizable (non-transferable) interactions do not appear as faithful concepts, even though they still satisfy the criteria of sparsity and universal matching in Theorem 2.
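To see Challenge 1 concretely, the snippet below (our own check) verifies that the two decompositions of the toy $f(x)$ agree on all $2^5$ Boolean inputs, even though one attributes a term to $v_{\text{or}}$ and the other explains everything with AND terms via inclusion-exclusion.

```python
import itertools

f = lambda x: (x[0] & x[1] & x[2]) + (x[1] & x[2]) + (x[2] & x[3]) + (x[3] | x[4])

# Decomposition A: three AND terms plus one OR term.
v_and_A = lambda x: (x[0] & x[1] & x[2]) + (x[1] & x[2]) + (x[2] & x[3])
v_or_A = lambda x: x[3] | x[4]

# Decomposition B: inclusion-exclusion rewrites the OR term as AND terms,
# so v_or vanishes and six AND primitives explain f.
v_and_B = lambda x: v_and_A(x) + x[3] + x[4] - (x[3] & x[4])
v_or_B = lambda x: 0

for x in itertools.product([0, 1], repeat=5):
    assert f(x) == v_and_A(x) + v_or_A(x) == v_and_B(x) + v_or_B(x)
```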
Definition 1 (Transferability of interaction primitives). Given $m$ different DNNs trained for the same task, $v^{(1)}, v^{(2)}, \ldots, v^{(m)}$, we use AND-OR interactions to explain the output score $v^{(i)}(x)$ of each DNN $v^{(i)}$ on the input sample $x$. Let $\Omega_{\text{and},(i)} = \{S \subseteq N : |I_{\text{and}}^{(i)}(S|x)| > \tau^{(i)}\}$ and $\Omega_{\text{or},(i)} = \{S \subseteq N : |I_{\text{or}}^{(i)}(S|x)| > \tau^{(i)}\}$ denote a set of sparse AND interaction primitives and a set of sparse OR interaction primitives, respectively. Then, the set of generalizable AND and the set of generalizable OR interaction primitives for the $i$-th DNN, are defined as $\Omega_{\text{and},\text{shared}} = \bigcap_{i=1}^m \Omega_{\text{and},(i)}$ and $\Omega_{\text{or},\text{shared}} = \bigcap_{i=1}^m \Omega_{\text{or},(i)}$, respectively. The generalization power of AND and OR interactions of the $i$-th DNN, can be measured by $s_{\text{and}}^{(i)} = |\Omega_{\text{and},\text{shared}}|/|\Omega_{\text{and},(i)}|$ and $s_{\text{or}}^{(i)} = |\Omega_{\text{or},\text{shared}}|/|\Omega_{\text{or},(i)}|$, respectively.
Definition 1 introduces the generalization power of interaction primitives. A larger value signifies higher transferability and, consequently, more generalizable interaction primitives.
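Definition 1's metric is straightforward to compute once each model's salient interactions are available as sets; a minimal sketch (names ours):

```python
def generalization_power(primitive_sets):
    """primitive_sets[i]: the set of salient interactions (frozensets of
    variable indices) extracted from the i-th DNN.  Returns s^(i) for all i."""
    shared = set.intersection(*primitive_sets)
    return [len(shared) / len(s) for s in primitive_sets]

# Two models, each with two primitives, sharing one -> s = 0.5 for both.
print(generalization_power([{frozenset({1, 2}), frozenset({3})},
                            {frozenset({1, 2}), frozenset({4})}]))
```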
2.3 EXTRACTING GENERALIZABLE INTERACTION PRIMITIVES
Neither of the aforementioned two challenges has been adequately tackled in previous interaction studies. In essence, interactions are determined by the decomposition of the network output $v(x) = v_{\text{and}}(x) + v_{\text{or}}(x)$. Thus, if we rewrite the decomposition as $v_{\text{and}}(x_T) = 0.5v(x_T) + \gamma_T$ and $v_{\text{or}}(x_T) = 0.5v(x_T) - \gamma_T$, then the learning of the best decomposition is equivalent to learning a set of $\{\gamma_T\}$. Here, the parameter $\gamma_T \in \mathbb{R}$ for a subset $T \subseteq N$ determines a specific decomposition between $v_{\text{and}}(x_T)$ and $v_{\text{or}}(x_T)$. Therefore, our goal is to learn the appropriate parameters $\{\gamma_T\}$ that reduce the aforementioned uncertainty of interaction primitives and boost their generalization power.
To this end, to alleviate the uncertainty of the interactions, the most intuitive approach is to learn the sparsest interactions, because by the principle of Occam's Razor, the sparsest (or simplest) explanation is usually considered the most faithful one:
\[
\min_{\{\gamma_T\}} \|I_{\text{and}}\|_1 + \|I_{\text{or}}\|_1, \tag{4}
\]
where \(I_{\text{and}} = [I_{\text{and}}(T_1|x), \ldots, I_{\text{and}}(T_{2^n}|x)]^T\), \(I_{\text{or}} = [I_{\text{or}}(T_1|x), \ldots, I_{\text{or}}(T_{2^n}|x)]^T \in \mathbb{R}^{2^n}\), \(T_k \subseteq N\).
The above \(\ell_1\) norm loss promotes the sparsity of both AND interactions and OR interactions.
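In practice, the interactions are linear in the decomposition parameters, so Equation (4) can be optimized directly by gradient descent. The sketch below precomputes the two $2^n \times 2^n$ transform matrices and learns $\{\gamma_T\}$ with PyTorch; it is exponential in $n$ and meant only as an illustration of the objective, not the paper's exact implementation.

```python
import torch

def harsanyi_matrices(n):
    """M_and @ v_and yields all 2^n AND interactions (Eq. (1)); M_or @ v_or
    yields the OR interactions (Eq. (2)).  Subsets are indexed by bitmask."""
    full = (1 << n) - 1
    M_and = torch.zeros(1 << n, 1 << n)
    M_or = torch.zeros(1 << n, 1 << n)
    for s in range(1 << n):
        for t in range(1 << n):
            if t & s == t:                         # T is a subset of S
                sign = (-1) ** (bin(s).count('1') - bin(t).count('1'))
                M_and[s, t] = sign
                M_or[s, full ^ t] = -sign          # v_or is read at x_{N\T}
    return M_and, M_or

def fit_gamma(vout, n, steps=2000, lr=0.01):
    """Learn {gamma_T} minimizing ||I_and||_1 + ||I_or||_1 (Eq. (4)).
    vout[t] holds v(x_T) for the mask with bit pattern t, for one sample."""
    M_and, M_or = harsanyi_matrices(n)
    gamma = torch.zeros(1 << n, requires_grad=True)
    opt = torch.optim.Adam([gamma], lr=lr)
    for _ in range(steps):
        I_and = M_and @ (0.5 * vout + gamma)   # v_and(x_T) = 0.5 v(x_T) + gamma_T
        I_or = M_or @ (0.5 * vout - gamma)     # v_or(x_T)  = 0.5 v(x_T) - gamma_T
        loss = I_and.abs().sum() + I_or.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gamma.detach()
```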
### 2.3.1 Only achieving sparsity is not enough
Although the sparsity can be used to reduce the uncertainty of interactions, the sparsity of interactions w.r.t. each single input sample obtained in Equation (4) does not fully solve the above two challenges. **First**, Ren et al. (2023c) have found that the extraction of high-order interactions is usually sensitive to small noises in input variables, where the order is defined as the number of input variables in \(S\), i.e., \(\text{order}(S) = |S|\). It means that when different noises are added to the input samples, the algorithm may extract fully different high-order interactions. Similarly, this will also hurt the generalization power of interaction primitives over different samples, when these samples contain similar sets of input variables.
**Second**, optimizing the loss in Equation (4) may lead to diverse solutions. Given different initial states, the loss in Equation (4) may learn two different sets of parameters \(\{\gamma_T\}\) as two local minima with similar loss values, while the two sets of parameters \(\{\gamma_T\}\) generate two different sets of interactions. We conducted experiments to illustrate this point. Given a pre-trained BERT model (Devlin et al., 2019) and an input sentence \(x\) on the SST-2 dataset for sentiment classification, we learned the parameters \(\{\gamma_T\}\) to extract sparse interaction primitives. In this experiment, we repeatedly extracted two sets of AND-OR interactions by applying two different sets of initialized parameters \(\{\gamma_T\}\), which are denoted by \((A_{\text{and}}, A_{\text{or}})\) and \((B_{\text{and}}, B_{\text{or}})\). \(A_{\text{and}} = \{S \subseteq N : |I_{\text{and}}(S|x)| > \tau_{A_{\text{and}}}\}\) denotes the set of AND interaction primitives extracted by a certain initialization of \(\{\gamma_T\}\), where the parameter \(\tau_{A_{\text{and}}}\) was determined to ensure that each set selected the most salient \(K = 100\) interactions. We used the transferability of interaction primitives in Definition 1:
$s_{\text{and}} = |A_{\text{and}} \cap B_{\text{and}}|/|A_{\text{and}}|$ and $s_{\text{or}} = |A_{\text{or}} \cap B_{\text{or}}|/|A_{\text{or}}|$, to measure the diversity of interactions caused by the different parameter initializations. Table 2 shows that given different initial states, optimizing the loss in Equation (4) usually extracted two dramatically different sets of AND-OR interactions with only 21% overlap. Figure 12 further shows the top 5 AND-OR interaction primitives extracted from the BERT model on the same input sentence, which illustrates that given different initial states, the loss in Equation (4) learns different AND-OR interactions.
**Third**, prioritizing sparsity cannot guarantee high generalization power across different models. Since a DNN may simultaneously learn common knowledge shared by different DNNs and be overfitted to some out-of-the-distribution patterns, different DNNs may only share partial interaction primitives. We believe that the shared common interactions are more faithful, so the transferability is another way to guarantee the generalization power of interaction primitives. Therefore, we hope to formulate and extract common interactions that are generalizable through different DNNs.
We conducted experiments to illustrate the difference between interactions extracted from two DNNs by using Equation (4). We used BERT_BASE \(v_{\text{base}}\) and BERT_LARGE \(v_{\text{large}}\) (Devlin et al., 2019) for the task of sentiment classification. Specifically, given an input sentence \(x\), we learned two sets of parameters \(\{\gamma_T^{\text{base}}\}\) and \(\{\gamma_T^{\text{large}}\}\) for the BERT-base model and the BERT-large model, respectively. Then we extracted two sets of AND-OR interactive concepts \((\Omega_{\text{and},\text{base}}, \Omega_{\text{or},\text{base}})\) and \((\Omega_{\text{and},\text{large}}, \Omega_{\text{or},\text{large}})\), respectively. Subsequently, we computed the transferability of the extracted interaction primitives according to Definition 1. Figure 3(a) shows that the transferability of the extracted AND-OR interaction primitives was much lower than interactions proposed in this study.
### 2.3.2 Extracting Generalizable Interactions
As discussed above, sparsity alone is not enough to tackle the aforementioned challenges. Therefore, in this study, we propose to use the generalization power as a straightforward objective to boost the faithfulness of interactions.\textsuperscript{6} Meanwhile, the sparsity of interactions is also supposed to be guaranteed. Given a total of \( m \) DNNs \( v^{(1)}, v^{(2)}, \ldots, v^{(m)} \) trained for the same task, the objective of extracting generalizable interactions shared by the \( m \) DNNs is revised from Equation (4), as follows.

\textsuperscript{6}Please see Appendix D for details.
\[
\min_{\{\gamma_T^{(1)}, \ldots, \gamma_T^{(m)}\}} \| \text{rowmax}(\mathbb{I}_{\text{and}}) \|_1 + \| \text{rowmax}(\mathbb{I}_{\text{or}}) \|_1, \tag{5}
\]
where \( \mathbb{I}_{\text{and}} = \begin{bmatrix} I_{\text{and}}^{(1)} & I_{\text{and}}^{(2)} & \cdots & I_{\text{and}}^{(m)} \end{bmatrix} \in \mathbb{R}^{2^n \times m} \) and \( \mathbb{I}_{\text{or}} = \begin{bmatrix} I_{\text{or}}^{(1)} & I_{\text{or}}^{(2)} & \cdots & I_{\text{or}}^{(m)} \end{bmatrix} \in \mathbb{R}^{2^n \times m} \). \( I_{\text{and}}^{(i)} = [I_{\text{and}}^{(i)}(T_1|x), \ldots, I_{\text{and}}^{(i)}(T_{2^n}|x)]^T \in \mathbb{R}^{2^n} \) and \( I_{\text{or}}^{(i)} \) represent all \( 2^n \) AND and OR interactions extracted from the \( i \)-th DNN, \( T_k \subseteq N \). The matrix operator \( \text{rowmax}() \) computes the \( \ell_\infty \) norm of each row of the matrix, i.e., \( \text{rowmax}(\mathbb{I}_{\text{and}}) = [\| \mathbb{I}_{\text{and}}[1,:] \|_\infty, \ldots, \| \mathbb{I}_{\text{and}}[2^n,:] \|_\infty]^T \in \mathbb{R}^{2^n} \). For each specific subset of variables \( T_k \subseteq N \), the \( \text{rowmax}() \) operation returns the most salient interaction strength over all \( m \) interactions from the \( m \) DNNs. Please see Appendix F for more discussion on the matrix \( \mathbb{I}_{\text{and}} \).
Unlike Equation (4), the revised loss in Equation (5) only penalizes the most salient interaction over all \( m \) interactions extracted from the \( m \) DNNs, with respect to each subset \( T_k \subseteq N \). This loss function ensures that if a DNN encodes a strong interaction w.r.t. the set \( T_k \), then we can also extract the same interaction w.r.t. \( T_k \) from the other \( m - 1 \) DNNs without a penalty. The \( \ell_1 \) norm also encourages the \( m \) DNNs to share similar sets of sparse interactions. Considering the sparsity of interactions, for most subsets \( T_k \), the effect \( I_{\text{and/or}}^{(i)}(T_k|x) \) is supposed to remain almost zero on all \( m \) DNNs.
Just like in Equation (4), we decompose the output of the \( i \)-th DNN as \( v_{\text{and}}^{(i)}(x_T) = 0.5v^{(i)}(x_T) + \gamma_T^{(i)} \) and \( v_{\text{or}}^{(i)}(x_T) = 0.5v^{(i)}(x_T) - \gamma_T^{(i)} \) to compute two vectors of AND-OR interactions, \( I_{\text{and}}^{(i)} \) and \( I_{\text{or}}^{(i)} \).
**Redundancy of interactions.** However, it is important to emphasize that only penalizing the largest interaction among the \( m \) DNNs in Equation (5) still faces the redundancy problem. Specifically, for each \( i \)-th DNN, we compute a total of \( 2^n \) AND interactions and \( 2^n \) OR interactions w.r.t. different subsets \( T \subseteq N \). Some of these \( 2^{n+1} \) interactions, denoted by the set \( \Omega_{\text{max}}^{(i)} \), are selected by the loss in Equation (5) as the most salient interactions over the \( m \) DNNs, while the set of the other unselected interactions is denoted by \( \Omega_{\text{others}}^{(i)} = \{ T \subseteq N \} \setminus \Omega_{\text{max}}^{(i)} \). The redundancy problem is then caused by a short-cut solution to the loss minimization in Equation (5), i.e., using unselected, not-so-salient interactions in \( \Omega_{\text{others}}^{(i)} \) to represent numerical effects of the selected interactions in \( \Omega_{\text{max}}^{(i)} \), as discussed in Challenge 1 in Section 2.2. As a short-cut solution to Equation (5), this may reduce the strength of the penalized salient interactions, but it generates lots of redundant interactions.
Therefore, we revise the loss in Equation (5) to add penalties on unselected interactions to avoid the short-cut solution with a coefficient of \( \alpha \), as follows:
\[
\min_{\{\gamma_T^{(1)}, \ldots, \gamma_T^{(m)}\}} \left( \| \text{rowmax}(\mathbb{I}_{\text{and}}) \|_1 + \| \text{rowmax}(\mathbb{I}_{\text{or}}) \|_1 \right) + \alpha \left( \| \mathbb{I}_{\text{and}} \|_1 + \| \mathbb{I}_{\text{or}} \|_1 \right), \tag{6}
\]
where \( \alpha \in [0, 1] \) is a positive scalar. We extend the notation of the \( \ell_1 \) norm \( \| \cdot \|_1 \) to represent the sum of the absolute values of all elements in a given vector or matrix. It is worth noting that the generalization power of interactions is guaranteed by the \( \text{rowmax}() \) function in Equation (5), which assigns much higher penalties to non-generalizable interactions than generalizable interactions.
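As a minimal PyTorch sketch (not the authors' released implementation), the losses in Equations (5) and (6) can be written as follows, assuming the \(2^n \times m\) matrices \(\mathbb{I}_{\text{and}}\) and \(\mathbb{I}_{\text{or}}\) have already been computed differentiably from the decomposition parameters \(\gamma_T^{(i)}\):

```python
import torch

def generalizable_interaction_loss(I_and: torch.Tensor, I_or: torch.Tensor,
                                   alpha: float = 0.0) -> torch.Tensor:
    """I_and and I_or are (2^n, m) matrices whose i-th column stacks all 2^n
    AND (resp. OR) interactions of the i-th DNN. rowmax() is the l_inf norm
    of each row, so its l1 norm penalizes only the most salient interaction
    over the m DNNs for each subset T_k (Equation 5); alpha > 0 adds the
    penalty on all interactions that suppresses the short cut (Equation 6)."""
    rowmax_term = (I_and.abs().max(dim=1).values.sum()
                   + I_or.abs().max(dim=1).values.sum())
    return rowmax_term + alpha * (I_and.abs().sum() + I_or.abs().sum())
```

Since the matrices are differentiable functions of \(\gamma_T^{(i)}\), minimizing this loss with a standard optimizer updates the decompositions of all \(m\) DNNs jointly.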
**Sharing decomposition between DNNs.** Optimizing Equation (6) is challenging. To address this challenge, we introduce a set of strategies to facilitate the optimization process. We assume that when all \( m \) DNNs are sufficiently trained, these DNNs tend to have similar decompositions of AND interactions and OR interactions, i.e., obtaining similar parameters, \( \forall T \subseteq N, \gamma_T^{(1)} \approx \gamma_T^{(2)} \approx \cdots \approx \gamma_T^{(m)} \). To achieve this, we decompose each \( \gamma_T^{(i)} \) into two types of parameters, \( \gamma_T^{(i)} = \bar{\gamma}_T + \hat{\gamma}_T^{(i)} \), where \( \bar{\gamma}_T \) represents the common decomposition shared by all DNNs, and \( \hat{\gamma}_T^{(i)} \) represents the decomposition specific to each \( i \)-th DNN. We constrain the significance of the unshared decomposition by using a bound \( |\hat{\gamma}_T^{(i)}| < \tau_T^{(i)} \), where \( \tau_T^{(i)} = 0.5 \cdot \mathbb{E}_x[v^{(i)}(x) - v^{(i)}(x_0)] \). During the training process, if \( |\hat{\gamma}_T^{(i)}| > \tau_T^{(i)} \), then we set \( \hat{\gamma}_T^{(i)} = \tau_T^{(i)} \cdot \text{sign}(\hat{\gamma}_T^{(i)}) \).
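A minimal sketch of this shared/specific parameterization with the sign-preserving clipping is given below; the shapes and bound values are illustrative assumptions.

```python
import torch

num_subsets, m = 2 ** 4, 2  # 2^n subsets, m DNNs (illustrative sizes)
gamma_bar = torch.zeros(num_subsets, requires_grad=True)     # shared by all DNNs
gamma_hat = torch.zeros(num_subsets, m, requires_grad=True)  # DNN-specific parts
tau = torch.tensor([0.3, 0.5])  # hypothetical bounds 0.5 * E_x[v(x) - v(x_0)]

def gammas() -> torch.Tensor:
    """gamma_T^(i) = gamma_bar_T + gamma_hat_T^(i), one column per DNN."""
    return gamma_bar.unsqueeze(1) + gamma_hat  # shape (2^n, m)

@torch.no_grad()
def project():
    """After each optimizer step, clip the unshared part back into its bound,
    i.e., set gamma_hat = tau * sign(gamma_hat) whenever |gamma_hat| > tau."""
    gamma_hat.clamp_(min=-tau, max=tau)  # broadcasts the per-DNN bounds
```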
---
7Please see Appendix F for a detailed explanation of Equation (6).
8Note that regardless of whether Equation (6) is optimized to the optimal solution, theoretically, the extracted AND-OR interactions can still satisfy the property of universal matching in Theorem 2.
**Modeling noises.** Furthermore, we have identified a potential limitation in the definition of the interactions, i.e., the sensitivity to noise. Let us assume that the output of the \(i\)-th DNN has a small noise. We represent such noises by adding a small Gaussian noise \(\epsilon_T^{(i)} \sim \mathcal{N}(0, \sigma^2)\) to the network output, \(\tilde{v}^{(i)}_{\text{and}}(x_T) = v^{(i)}_{\text{and}}(x_T) + \epsilon_T^{(i)}\). In this case, we can derive that \(\tilde{I}^{(i)}_{\text{and}}(T) = I^{(i)}_{\text{and}}(T) + \sum_{T' \subseteq T} (-1)^{|T| - |T'|} \epsilon_{T'}^{(i)}\). We prove that the variance of \(\tilde{I}^{(i)}_{\text{and}}(T)\) caused by the Gaussian noises is \(\mathbb{E}_{\forall T' \subseteq T,\, \epsilon_{T'}^{(i)} \sim \mathcal{N}(0, \sigma^2)}\big[\tilde{I}^{(i)}_{\text{and}}(T) - I^{(i)}_{\text{and}}(T)\big]^2 = 2^{|T|} \sigma^2\) (please see Appendix D for details). Similarly, the variance of \(\tilde{I}^{(i)}_{\text{or}}(T)\) is also \(2^{|T|} \sigma^2\) for OR interactions. It means that the variance/instability of interactions increases exponentially with the order \(|T|\) of the interaction.
Therefore, we propose to directly learn the error term \(\epsilon_T^{(i)}\) based on Equation (6) to remove tiny noisy signals, which are unavoidable in real data but cannot be modeled as AND-OR interactions, i.e., setting \(v^{(i)}(x_T) = v^{(i)}_{\text{and}}(x_T) + v^{(i)}_{\text{or}}(x_T) + \epsilon_T^{(i)}\), in order to enhance the robustness of our interaction-extraction process. The error term is constrained to a small range \(|\epsilon_T^{(i)}| < \tau_{\epsilon}^{(i)}\), subject to \(\tau_{\epsilon}^{(i)} = 0.02 \cdot |v^{(i)}(x) - v^{(i)}(x_0)|\). During the training process, if \(|\epsilon_T^{(i)}| > \tau_{\epsilon}^{(i)}\), then we set \(\epsilon_T^{(i)} = \tau_{\epsilon}^{(i)} \cdot \text{sign}(\epsilon_T^{(i)})\).
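The exponential growth of this variance can be checked numerically. Below is a small NumPy sketch that simulates only the noise contribution to \(I_{\text{and}}(T)\) for \(|T| = 3\) (the clean outputs are set to zero, so the interaction reduces to the noise term):

```python
import itertools
import numpy as np

# Check that i.i.d. noise eps_T' ~ N(0, sigma^2) on the masked outputs makes
# the variance of I_and(T) = sum over all subsets T' of T of
# (-1)^{|T|-|T'|} v_and(x_T') equal to 2^{|T|} * sigma^2, since all 2^{|T|}
# independent noise terms enter with weight +/-1.
rng = np.random.default_rng(0)
T, sigma, trials = (0, 1, 2), 0.1, 100_000

subsets = [s for r in range(len(T) + 1) for s in itertools.combinations(T, r)]
signs = np.array([(-1.0) ** (len(T) - len(s)) for s in subsets])
noise = rng.normal(0.0, sigma, size=(trials, len(subsets)))

I_noise = noise @ signs  # noise contribution to I_and(T), one value per trial
print(I_noise.var(), 2 ** len(T) * sigma ** 2)  # both approximately 0.08
```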
Then, we conducted experiments to examine whether the extracted AND-OR interactions could still accurately explain the network output when the error term was removed. We followed the experimental settings in Section 3 to extract interactions on both the BERT_BASE and BERT_LARGE models. We computed the matching error \(e(x_T) = |v(x_T) - v_{\text{approx}}(x_T)|\), where \(v_{\text{approx}}(x_T)\) is the network output approximated by all interactions based on Theorem 2. Figure 13 shows the matching errors of all masked samples w.r.t. all subsets \(T \subseteq N\), with the network outputs of all \(2^n\) masked samples sorted in descending order. It shows that the real network output was well approximated by the interactions.
3 EXPERIMENTS
In this section, we conducted experiments to verify the sparsity and generalization power of the interaction primitives extracted by our proposed method on the following three tasks.
Task1: sentiment classification with language models. We jointly extracted two sets of AND-OR interaction primitives from the BERT_BASE model and the BERT_LARGE model (Devlin et al., 2019) by following Equation (6). We finetuned the pre-trained BERT_BASE model and the BERT_LARGE model on the SST-2 dataset (Socher et al., 2013) for sentiment classification. For each input sentence \(x\) containing \(n\) tokens,\(^9\) we analyzed the log-odds output of the ground-truth label, i.e., \(v(x) = \log \frac{p(y=y_{\text{truth}}|x)}{1-p(y=y_{\text{truth}}|x)}\) by following (Deng et al., 2022).
Task2: dialogue task with large language models. We extracted two sets of AND-OR interaction primitives from the pre-trained LLaMA model (Touvron et al., 2023) and OPT-1.3B model (Zhang et al., 2022b). We explained the DNNs’ outputs on the SQuAD dataset (Rajpurkar et al., 2016). We took the first several words of each document in the dataset as the input of a DNN, and let the DNN predict the next word. For each input sentence \(x\) containing \(n\) words,\(^9\) we analyzed the log-odds output of the \((n+1)\)-th word that was associated with the highest probability by the DNN, \(y_{\text{max}}\), i.e., \(v(x) = \log \frac{p(y=y_{\text{max}}|x)}{1-p(y=y_{\text{max}}|x)}\).
Task3: image classification task with vision models. We extracted two sets of AND-OR interaction primitives from the ResNet-20 model (He et al., 2016) and the VGG-16 model (Simonyan & Zisserman, 2015), which were trained on the MNIST dataset (LeCun, 1998). These models were trained to classify the digit “3” from other digits. In practice, considering the \(2^n\) computational complexity, we have followed settings in (Li & Zhang, 2023b), who labeled a few important input patches in the image as input variables. For each input image \(x\) containing \(n\) patches,\(^9\) we analyzed the scalar output before the softmax layer corresponding to the digit “3.”
**Sparsity of the extracted primitives.** We aggregated all AND-OR interactions from various samples and plot their strengths in descending order in Figure 2. This figure compares the curves of interaction strength \(|I(S)|, S \subseteq N\), between our extracted interactions, the traditional interactions (Li
---
\(^9\)Please see Appendix O for details.
Figure 2: Strength of AND-OR interactions $\log |I(S|x)|$ over different samples, sorted in descending order. All interactions above the dashed line had much more significant effects (shown in log space) and were considered salient interactions.
Figure 3: Generalization power (measured by $s_{\text{and}}$ and $s_{\text{or}}$) of the extracted interaction primitives.
& Zhang, 2023b)\(^{10}\) (namely, Traditional), and the original Harsanyi interactions (Ren et al., 2023a) (namely, Harsanyi). The competing method (Li & Zhang, 2023b) (Traditional in Figure 2) extracts the sparsest interactions according to Equation (4), and the original Harsanyi interactions (Ren et al., 2023a) (Harsanyi in Figure 2) are extracted according to Equation (1). We found that most of the interactions had negligible effects. Although the proposed method reduced the sparsity a bit, the extracted interactions were still sparse enough to be considered primitive inference patterns.
For each DNN, we further set a threshold $\tau^{(i)} = 0.05 \cdot \max_S |I(S|x)|$ to distinguish salient interactions from negligible ones.
**Generalization power of the extracted interaction primitives.** We took the most salient $k$ interactions from each $i$-th DNN as the set of AND-OR interaction primitives, i.e., $|\Omega^{\text{and},(i)}| = |\Omega^{\text{or},(i)}| = k$, $i \in \{1, 2\}$. We used the metrics $s_{\text{and}}$ and $s_{\text{or}}$ in Definition 1 to measure the generalization power of interactions extracted from the two DNNs. Figure 3 shows the generalization power of the interactions when we computed $s_{\text{and}}$ and $s_{\text{or}}$ based on different numbers $k$ of most salient interactions. We found that the set of AND-OR interactions extracted by the proposed method exhibited higher generalization power than interactions extracted by the traditional method.
**Low-order interaction primitives are more stable.** Furthermore, we compared the ratio of shared interactions of different orders, i.e., $\text{order}(S) = |S|$. For interactions of each order $o$, we computed the overall strength of all positive interactions and that of all negative interactions of the $i$-th DNN, which were shared by other DNNs, as
$$\text{Shared}^{+(i)}(o) = \sum_{\text{op} \in \{\text{and, or}\}} \sum_{S \in \Omega^{\text{op}, (i)} \cap \Omega^{\text{op}, (j)}, |S| = o} \max(0, I_{\text{op}}^{(i)}(S|x)),$$
and
$$\text{Shared}^{-(i)}(o) = \sum_{\text{op} \in \{\text{and, or}\}} \sum_{S \in \Omega^{\text{op}, (i)} \cap \Omega^{\text{op}, (j)}, |S| = o} \min(0, I_{\text{op}}^{(i)}(S|x)),$$
respectively, where $\Omega^{\text{op}, (j)}$ denotes the set of salient interactions of the other DNN. Besides,
$$\text{All}^{+(i)}(o) = \sum_{\text{op} \in \{\text{and, or}\}} \sum_{S \in \Omega^{\text{op}, (i)}, |S| = o} \max(0, I_{\text{op}}^{(i)}(S|x)),$$
and
$$\text{All}^{-(i)}(o) = \sum_{\text{op} \in \{\text{and, or}\}} \sum_{S \in \Omega^{\text{op}, (i)}, |S| = o} \min(0, I_{\text{op}}^{(i)}(S|x)),$$
denote the overall strength of salient positive interactions and that of salient negative interactions, respectively. In this way, Figure 4 reports ($\text{Shared}^{+(i)}, \text{Shared}^{-(i)})$ and ($\text{All}^{+(i)}, \text{All}^{-(i)})$ for different orders within the three tasks. It shows that low-order interactions were more likely to be shared by different DNNs than high-order interactions. Besides, Figure 4 further shows that a higher ratio of the interactions extracted by the proposed method was shared across DNNs than of those extracted by the traditional method. In particular, the high similarity of interactions between ResNet-20 and VGG-16 shows that, although the two DNNs for the same task had fully different architectures, there probably existed a set of ultimate interactions for the task, and different well-optimized DNNs were likely to converge to such interactions.
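A small sketch of this per-order aggregation (positive part shown; the negative part is analogous with `min`), assuming the salient interactions of each DNN are stored in a dict keyed by (operation, coalition); the data layout is an illustrative assumption.

```python
from collections import defaultdict

def strength_by_order(salient_i: dict, salient_j: dict):
    """salient_i maps ('and' | 'or', frozenset coalition) -> effect I for the
    i-th DNN; order(S) = len(S). Returns Shared^{+(i)}(o) and All^{+(i)}(o)."""
    shared_pos, all_pos = defaultdict(float), defaultdict(float)
    for (op, S), effect in salient_i.items():
        o = len(S)
        all_pos[o] += max(0.0, effect)
        if (op, S) in salient_j:  # interaction shared with the other DNN
            shared_pos[o] += max(0.0, effect)
    return shared_pos, all_pos
```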
**Visualization of the shared interaction primitives across different DNNs.** We also visualize the shared and distinctive interaction primitives in Figure 5.
---
10 In the implementation of the competing method, we also learned an additional error term $\epsilon_T^{(i)}$ to remove small noises, just like in Section 2.3.2, to enable fair comparisons.
11 Please see Appendix H for more details.
Figure 4: Overall interactions and shared interactions. The red and black bars show the overall strength of positive interactions $\text{All}^{+(i)}(o)$ and that of negative interactions $\text{All}^{-(i)}(o)$ of each $o$-th order. The orange and green bars indicate the strength of positive interactions that are shared by the other DNN $\text{Shared}^{+(i)}(o)$ and that of the shared negative interactions $\text{Shared}^{-(i)}(o)$, respectively.
Figure 5: Visualization of the shared and distinctive interaction primitives across different DNNs. We selected some of the salient interactions from the most salient $k = 50$ AND-OR interactions in each DNN. The black and gray colors indicate the AND interactions and the OR interactions, respectively. The left and right columns show the distinctive interactions extracted from the BERT$_{\text{BASE}}$ model and the BERT$_{\text{LARGE}}$ model, respectively. The middle column shows the shared interactions extracted from both models. Please see Appendix I for more interactions.
This figure shows that generalizable interactions shared by different models can be regarded as more reliable concepts, which consistently contribute salient interaction effects to the output of different DNNs. In comparison, non-generalizable interactions, which are sometimes over-fitted by a single model, may appear as out-of-distribution features. From this perspective, we consider generalizable interactions as relatively faithful concepts that often have a significant impact on the inference of DNNs. Figure 5 further shows that our method extracted many more shared interactions than the traditional interaction-extraction method, which indicates that our method obtains a more stable explanation of the inference logic of a DNN, because the interactions shared by different DNNs are usually considered more faithful.
4 CONCLUSION
In this paper, we proposed a method to extract generalizable interaction primitives. The sparsity and universal-matching property of interactions provide considerable evidence that interactions can faithfully explain DNNs; improving the generalization power of interactions thus adds the last piece of the puzzle of interaction primitives. Compared to traditional interactions, interactions shared by different DNNs are more likely to be the underlying primitives that shape the DNN's output. Furthermore, the extraction of interaction primitives also benefits real applications. For example, it can assist in learning optimal baseline values for Shapley values (Ren et al., 2023b) and explaining the representation limits of Bayesian networks (Ren et al., 2023c). In addition, extracting generalizable interaction primitives shared by different DNNs provides a new perspective on formulating out-of-distribution (OOD) features. Previous studies usually treated an entire sample as an OOD sample, whereas our work redefines the OOD problem at the level of detailed interactions, i.e., unshared interactions can be regarded as OOD information.
ACKNOWLEDGMENTS
This work is partially supported by the National Science and Technology Major Project (2021ZD0111602), the National Nature Science Foundation of China (62276165,92370115), Shanghai Natural Science Foundation (21JC1403800,21ZR1434600).
ETHICS STATEMENT
This paper aims to extract generalizable interaction primitives that are shared by different DNNs. This paper utilizes publicly released datasets which have been widely accepted by the machine learning community. This paper does not involve human subjects and does not include potentially harmful insights, methods, or applications. The paper also does not involve discrimination/bias/fairness issues, as well as privacy and security issues. There are no ethical issues with this paper.
REPRODUCIBILITY STATEMENT
We provide proofs for the theoretical results of this study in Appendix C to D. We also provide experimental details in Section 3 and Appendix O.
REFERENCES
BAAI. Aquila-7b. 2023. URL https://huggingface.co/BAAI/Aquila-7B.
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6541–6549, 2017.
Huiqi Deng, Qihan Ren, Hao Zhang, and Quanshi Zhang. Discovering and explaining the representation bottleneck of dnns. In International Conference on Learning Representations, 2022.
Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Ziwei Yang, Zheyang Li, and Quanshi Zhang. Unifying fourteen post-hoc attribution methods with taylor interactions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT, 2019.
Amil Dravid, Yossi Gandelsman, Alexei A. Efros, and Assaf Shocher. Rosetta neurons: Mining the common units in a model zoo. IEEE International Conference on Computer Vision, 2023.
John C Harsanyi. A simplified bargaining model for the n-person cooperative game. International Economic Review, 4(2):194–220, 1963.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In International Conference on Machine Learning, pp. 2668–2677. PMLR, 2018.
Yann LeCun. The mnist database of handwritten digits. 1998. URL http://yann.lecun.com/exdb/mnist/.
Mingjie Li and Quanshi Zhang. Technical note: Defining and quantifying and-or interactions for faithful and concise explanation of dnns. arXiv preprint arXiv:2304.13312, 2023a.
Mingjie Li and Quanshi Zhang. Does a neural network really encode symbolic concept? International Conference on Machine Learning, 2023b.
|
DiWRG9JTWZ
|
What is the difference between spurious-correlation studied in this paper and overfitting in few shot learning? Can we view spurious-correlation as one type of model overfitting? If this is true, it would be interesting to have a comprehensive study of overfitting problem in few-shot learning, in addition to the spurious correlation studied in this paper.
|
MetaCoCo: A New Few-Shot Classification Benchmark with Spurious Correlation
Min Zhang¹ Haoxuan Li² Fei Wu¹ Kun Kuang¹*
¹Zhejiang University ²Peking University
\{zhangmin.milab, wufei, kunkuang\}@zju.edu.cn, hxli@stu.pku.edu.cn
Abstract
Out-of-distribution (OOD) problems in few-shot classification (FSC) occur when novel classes sampled from testing distributions differ from base classes drawn from training distributions, which considerably degrades the performance of deep learning models deployed in real-world applications. Recent studies suggest that the OOD problems in FSC mainly include: (a) cross-domain few-shot classification (CD-FSC) and (b) spurious-correlation few-shot classification (SC-FSC). Specifically, CD-FSC occurs when a classifier learns transferring knowledge from base classes drawn from seen training distributions but recognizes novel classes sampled from unseen testing distributions. In contrast, SC-FSC arises when a classifier relies on non-causal features (or contexts) that happen to be correlated with the labels (or concepts) in base classes, but such relationships no longer hold during model deployment. Although CD-FSC has been extensively studied, SC-FSC remains understudied due to the lack of corresponding evaluation benchmarks. To this end, we present Meta Concept Context (MetaCoCo), a benchmark with spurious-correlation shifts collected from real-world scenarios. Moreover, to quantify the extent of spurious-correlation shifts in the presented MetaCoCo, we further propose a metric that uses CLIP as a pre-trained vision-language model. Extensive experiments on the proposed benchmark are performed to evaluate the state-of-the-art methods in FSC, cross-domain shifts, and self-supervised learning. The experimental results show that the performance of the existing methods degrades significantly in the presence of spurious-correlation shifts. We open-source all code for our benchmark and hope that the proposed MetaCoCo can facilitate future research on spurious-correlation shift problems in FSC. The code is available at: https://github.com/remiMZ/MetaCoCo-ICLR24.
1 Introduction
Few-shot classification (FSC) aims to recognize unlabeled images (or query sets) from novel classes with only a few labeled images (or support sets) by transferring knowledge learned from base classes. Despite the impressive advances in FSC, in real-world applications, out-of-distribution (OOD) problems occur when the novel classes sampled from testing distributions differ from the base classes drawn from training distributions, which significantly degrades the performance and robustness of deep learning models and has gained increasing attention in recent years (Song et al., 2022; Li et al., 2023d). As shown in Figure 1, the OOD problems in FSC can be broadly categorized into two categories with different forms of distribution shifts: (a) cross-domain few-shot classification (CD-FSC) and (b) spurious-correlation few-shot classification (SC-FSC), as established by previous works (Triantafillou et al., 2020; Yue et al., 2020; Luo et al., 2021; Li et al., 2022).
Cross-domain few-shot classification (CD-FSC). Cross-domain shifts occur when a classifier learns transferring knowledge from base classes drawn from seen training distributions but recognizes novel classes sampled from unseen testing distributions. For example, in COVID-19 predictions, we may want to train a model on patients from a few sampled countries and then deploy the trained model to a broader set of countries. Existing OOD methods in FSC have shown considerable progress in solving the cross-domain shifts problem (Hou et al., 2019; Doersch et al., 2020).
Figure 1: Example of cross-domain shifts and spurious-correlation shifts in FSC. (a) In Meta-dataset with cross-domain shifts (Triantafillou et al., 2020), the model is trained on base classes sampled from three datasets including miniImageNet, CUB-200-2011 and Aircraft, then tested on novel classes drawn from VGG Flower. (b) In our proposed MetaCoCo with spurious-correlation shifts, each class (or concept, e.g., dog) consists of different backgrounds (or context, e.g., autumn).
Meanwhile, two standard cross-domain benchmarks have been proposed to evaluate the effectiveness of these methods, i.e., Meta-dataset (Triantafillou et al., 2020), consisting of 10 existing datasets, and BSCD-FSL (Guo et al., 2020), consisting of 4 existing datasets. Figure 1(a) shows an example of cross-domain shifts on Meta-dataset, where mini (miniImageNet), CUB (CUB-200-2011) and Aircraft are used as the base classes and VGG Flower as the novel classes, with each dataset exhibiting a distinct distribution.
**Spurious-correlation few-shot classification (SC-FSC).** Spurious-correlation shifts arise when a classifier relies on spurious, non-causal context features that are not essential to the true label or concept, which can significantly reduce the robustness and generalization ability of the model. In the COVID-19 example, a recent nationwide cross-sectional study found spurious correlations between long-term PM$_{2.5}$ exposure and COVID-19 deaths in the United States due to county-level socioeconomic and demographic variables as confounders (Wu et al., 2020). As a result, models trained on base classes with spurious features and evaluated on novel classes without such relationships suffer substantial drops in performance. As shown in Figure 1(b), we show an example of spurious-correlation shifts in our proposed benchmark, where each class presents a range of non-causal contexts, such as autumn or snow. Meanwhile, the concepts of the base classes and the novel classes are distinct in the FSC problem, e.g., "dog in the autumn" in the base classes and "cat in the autumn" in the novel classes, which emphasizes the impact of spurious correlations between concepts and contexts in the proposed benchmark. Despite the prevalence of spurious-correlation shifts in real-world FSC problems (Wang et al., 2017a; Yue et al., 2020; Luo et al., 2021; Zhang et al., 2023b), SC-FSC remains understudied due to the lack of corresponding evaluation benchmarks.
**Shortcomings of spurious-correlation shift benchmarks in traditional machine learning.** Recently, spurious-correlation shifts in traditional machine learning (TML) have been investigated extensively (Arjovsky et al., 2019; Sagawa et al., 2019; Rosenfeld et al., 2020; Ahmed et al., 2020; Bae et al., 2021; Shen et al., 2021), and various benchmarks have been created, including toy datasets, e.g., ColoredMNIST (Arjovsky et al., 2019), and real-world datasets, e.g., NICO (He et al., 2021). These TML benchmarks cannot be used directly to evaluate the performance in FSC problems with spurious-correlation shifts, for the following reasons: (1) The number of classes. Most TML benchmarks are binary classification problems, but FSC problems require enough classes to split into base and novel classes. (2) The number of samples. FSC needs adequate samples from base classes to learn knowledge that transfers to novel classes with only a few labeled images. (3) The number of contexts. Contexts in TML benchmarks are commonly limited, but FSC with many classes requires more contexts to build stronger spurious-correlation shifts. To the best of our knowledge, there does not exist a unified study or benchmark of spurious-correlation shifts for FSC.
In this paper, we present Meta Concept Context (MetaCoCo), a large-scale benchmark with a total of 175,637 images, 155 contexts, and 100 classes, with spurious-correlation shifts arising from various contexts in real-world scenarios. The basic idea of constructing spurious-correlation shifts is to label the images with both the main concepts and the contexts. For example, in the category with "dog" as the main concept, the images are categorized into different contexts such as "autumn", "snow", and "rock", which denote that the "dog" is in the autumn, in the snow, or on the rock, respectively. With the help of these contexts, one can easily design a spurious-correlation-shift setting by training the model on some contexts and testing it on other unseen contexts (to study spurious-correlation shifts), as well as on unseen concepts (to study few-shot classification problems).
Furthermore, we propose a metric that uses CLIP as a pre-trained vision-language model to quantify and compare the extent of spurious correlations on MetaCoCo and other FSC benchmarks. We conduct extensive experiments on MetaCoCo to evaluate the state-of-the-art methods in FSC, cross-domain shifts, and self-supervised learning. We open-source all code for our benchmark and hope the proposed MetaCoCo will facilitate the development of models robust to spurious correlations.
2 COMPARISON WITH EXISTING BENCHMARKS
MetaCoCo provides a unified framework to facilitate the development of models robust to spurious-correlation shifts in FSC. We next discuss how MetaCoCo is related to existing benchmarks.
**Relation to few-shot classification benchmarks.** Few-shot classification (FSC) has attracted attention for its ability to recognize novel classes using few labeled images. Many methods have been proposed to solve FSC problems, including (1) fine-tuning based methods (Chen et al., 2019; Tian et al., 2020a; Chen et al., 2021), which address the problem by learning to transfer; (2) metric-based methods (Vinyals et al., 2016; Snell et al., 2017; Li et al., 2019a; Zhang et al., 2022a), which solve the problem by learning to compare; and (3) meta-based methods (Finn et al., 2017; Rusu et al., 2019; Bae et al., 2021; Zhang et al., 2020), which tackle the problem by learning to learn.
Many FSC benchmarks have been proposed to evaluate the effectiveness of these methods, including miniImageNet (Vinyals et al., 2016), Places (Zhou et al., 2017), CIFAR-FS (Bertinetto et al., 2019), Plantae (Van Horn et al., 2018), CUB-200-2011 (Wah et al., 2011), Stanford Dogs (Khosla et al., 2011), Stanford Cars (Krause et al., 2013), etc. These datasets are generally divided into training, validation and testing sets with non-overlapping classes. While these datasets are useful testbeds for verifying FSC methods, they follow the independent and identically distributed (IID) assumption.
Relation to cross-domain shifts FSC benchmarks. Cross-domain shifts have been widely studied in the FSC community, which aims to learn the transferring knowledge from seen training distributions to recognize unseen testing distributions. Many CD-FSC methods have been proposed to address the cross-domain problem (Tseng et al., 2020; Sun et al., 2021; Liang et al., 2021; Wang & Deng, 2021; Li et al., 2022; Zhang et al., 2022b), which can be mainly divided into bi-level optimization (Tseng et al., 2020; Triantafillou et al., 2021; Li et al., 2023b; Zhang et al., 2023c), domain adversarial learning (Motiian et al., 2017; Zhao et al., 2021), adversarial data augmentation (Wang & Deng, 2021; Sun et al., 2021), and module modulation (Liu et al., 2021; Li et al., 2022). Some benchmarks have been proposed to evaluate the effectiveness of these CD-FSC methods, including Meta-dataset (Triantafillou et al., 2020) consisting of 10 existing datasets, and BSCD-FSL (Guo et al., 2020) consisting of 4 existing datasets. They usually use the leave-one-domain-out setting as the testing domain and the others as training domains. However, these benchmarks use different datasets as domains to construct cross-domain distribution shifts, causing them to fail to reflect spurious correlation shifts that occur in real-world applications (see more discussion in Appendix A).
Relation to spurious-correlation shifts TML benchmarks. Spurious-correlation shifts have been studied recently in traditional machine learning (TML) (Sagawa et al., 2019; Krueger et al., 2021; Yao et al., 2022; Bai et al., 2024; Tang et al., 2024). Many methods mainly focus on causal learning (Peters et al., 2015; Kuang et al., 2018; Kamath et al., 2021; Wu et al., 2022; Wang et al., 2024; Li et al., 2024; Zhu et al., 2024), invariant learning (Arjovsky et al., 2019; Chang et al., 2020; Rosenfeld et al., 2020; Huang et al., 2023), and distributionally robust optimization (Arjovsky et al., 2019).
Table 1: A summary of the existing benchmarks and our proposed spurious-correlation benchmark, i.e., MetaCoCo. \( C \) and \( N \) are the number of classes and samples, respectively. The subscripts “all”, “train”, “val” and “test” mean the all dataset, training set, validation, and testing set, respectively.
| Dataset | \( C_{\text{all}} \) | \( C_{\text{train}} \) | \( C_{\text{val}} \) | \( C_{\text{test}} \) | \( N_{\text{all}} \) | \( N_{\text{train}} \) | \( N_{\text{val}} \) | \( N_{\text{test}} \) | Context | Similarity |
|--------------------------|----------------------|------------------------|----------------------|-----------------------|---------------------|------------------------|-----------------|------------------|---------|------------|
| miniImageNet (Vinyals et al., 2016) | 100 | 64 | 16 | 20 | 60,000 | 38,400 | 9,600 | 12,000 | 0 | 0.211 |
| CIFAR-FS (Krizhevsky et al., 2009) | 100 | 64 | 16 | 20 | 60,000 | 38,400 | 9,600 | 12,000 | 0 | 0.181 |
| Stanford Dogs (Khosla et al., 2011) | 120 | 70 | 20 | 30 | 20,580 | 12,165 | 3,312 | 5,103 | 0 | 0.244 |
| Stanford Cars (Krause et al., 2013) | 196 | 130 | 17 | 49 | 16,185 | 10,766 | 1,394 | 4,025 | 0 | 0.164 |
| Aircraft (Wah et al., 2011) | 100 | 70 | 15 | 15 | 10,000 | 5,000 | 2,500 | 2,500 | 0 | 0.228 |
| CUB-200-2011 (Wah et al., 2011) | 200 | 140 | 30 | 30 | 11,788 | 7,648 | 1,182 | 2,958 | 0 | 0.266 |
| Describable Textures (Cimpoi et al., 2014) | 47 | 33 | 7 | 7 | 5,640 | 3,960 | 840 | 840 | 0 | 0.194 |
| Traffic Signs (Houben et al., 2013) | 43 | - | 43 | - | 50,000 | - | - | 50,000 | 0 | 0.193 |
| Omniglot (Lake et al., 2015) | 50 | 25 | 5 | 25 | 32,000 | 17,660 | 1,620 | 13,800 | 0 | 0.212 |
| Fungi (Schroeder & Cui, 2018) | 1,394 | 994 | 200 | 200 | 89,760 | 64,449 | 12,195 | 13,116 | 0 | 0.191 |
| VGG Flower (Nilsback & Zisserman, 2008) | 102 | 71 | 15 | 16 | 8,189 | 5,655 | 1,109 | 1,425 | 0 | 0.177 |
| MSCOCO (Lin et al., 2014) | 80 | - | 40 | 40 | 860,001 | - | 513,021 | 346,980 | 0 | 0.173 |
| Quick Draw (Jongejan et al., 2016) | 345 | 241 | 52 | 52 | 50,426,266 | 34,776,331 | 7,939,640 | 7,710,295 | 0 | 0.168 |
| CropDiseases (Mohanty et al., 2016) | 38 | - | 38 | - | 43,456 | - | - | 43,456 | 0 | 0.213 |
| ChestX (Wang et al., 2017b) | 8 | - | 8 | - | 25,848 | - | - | 25,848 | 0 | 0.183 |
| EuroSAT (Helber et al., 2019) | 10 | - | 10 | - | 27,000 | - | - | 27,000 | 0 | 0.173 |
| ISIC2018 (Codella et al., 2019) | 7 | - | 7 | - | 10,015 | - | - | 10,015 | 0 | 0.186 |
| MetaCoCo (Ours) | 100 | 64 | 16 | 20 | 175,637 | 156,666 | 5,839 | 12,268 | 155 | 0.142 |
etc. Some toy benchmarks, e.g., ColoredMNIST (Arjovsky et al., 2019), and real-world benchmarks, e.g., NICO (He et al., 2021) and MetaShift (Liang & Zou, 2022), have been proposed to evaluate the performance of these methods. These TML benchmarks cannot be used directly in the FSC setting, due to the lack of sufficient classes, samples, and contexts. Although IFSL (Yue et al., 2020) and COSOC (Luo et al., 2021) have experimentally proved the importance of spurious-correlation shifts, there is still a lack of a benchmark for evaluation. Therefore, we propose MetaCoCo in this paper to reflect spurious-correlation shifts arising in real-world scenarios.
### 3 Problem and Evaluation Settings
FSC aims to recognize unlabeled images (or query sets) from novel classes with only a few labeled images (or support sets). Following previous studies (Vinyals et al., 2016; Tian et al., 2020b), we adopt an episodic paradigm to train and evaluate the few-shot models. Specifically, each \( N \)-way \( K \)-shot episode \( T_e \) has a support set \( S_e = \{(x_i, y_i) : i = 1, \ldots, I_s\} \) and a query set \( Q_e = \{(x_i, y_i) : i = I_s + 1, \ldots, I_s + I_q\} \), where \( x_i \in X \) is the image and \( y_i \in Y \) is the label from a set of \( N \) classes \( C_e \), and \( I_s = N \cdot K \) and \( I_q \) are the numbers of images in the support and query sets, respectively.
Let \( S_e(X) \) and \( Q_e(X) \) be the image spaces of \( S_e \) and \( Q_e \), and \( S_e(Y) \) and \( Q_e(Y) \) be the corresponding label spaces, respectively. The label spaces of \( S_e \) and \( Q_e \) are the same, but the image spaces are different, i.e., \( S_e(X) \neq Q_e(X) \) and \( S_e(Y) = Q_e(Y) \). During the training phase, for meta-based and metric-based methods, episodes are randomly sampled from the base class set \( D_b \) to train the model. Instead, for fine-tuning based methods, a mini-batch of images is randomly sampled from \( D_b \) to train the model. During the testing phase, the trained model is fine-tuned with \( S_e \) and evaluated with \( Q_e \) in novel episodes sampled from the novel class set \( D_n \). Note that \( D_b \) contains more images and classes compared with \( D_n \), but their label spaces are disjoint, i.e., \( D_b(Y) \neq D_n(Y) \).
The model architectures have a feature encoder \( f_\theta \) and a classifier \( c_\phi \) parameterized by \( \theta \) and \( \phi \). The \( f_\theta \) aims to extract features, \( f_\theta : X \rightarrow Z \), and the \( c_\phi \) predicts the class of extracted features, \( c_\phi : Z \rightarrow Y \).
#### 3.1 Cross-Domain Shifts and Spurious-Correlation Shifts
In Table 1, we summarize the statistics of the existing benchmarks and our proposed spurious-correlation benchmark, i.e., MetaCoCo. Specifically, Meta-dataset (Triantafillou et al., 2020) and BSCD-FSL (Guo et al., 2020) are two commonly used cross-domain benchmarks, where Meta-dataset has 10 existing datasets, including ILSVRC-2012 (Deng et al., 2009), Omniglot (Lake et al., 2015), Aircraft (Wah et al., 2011), CUB-200-2011 (Wah et al., 2011), Describable Textures (Cimpoi et al., 2014), Quick Draw (Jongejan et al., 2016), Fungi (Schroeder & Cui, 2018), VGG Flower (Nilsback & Zisserman, 2008), Traffic Signs (Houben et al., 2013) and MSCOCO (Lin et al., 2014). BSCD-FSL (Guo et al., 2020) has 4 existing datasets, including CropDiseases (Mohanty et al., 2016), EuroSAT (Helber et al., 2019), ISIC2018 (Codella et al., 2019) (Tschandl et al., 2018), and
\( ^1D_b(Y) \) and \( D_n(Y) \) can be defined similarly, meaning the label spaces of \( D_b \) and \( D_n \), respectively.
ChestX (Wang et al., 2017b). The main differences between cross-domain benchmarks and our proposed MetaCoCo benchmark are as follows: (1) **The cause of shifts.** The shifts in cross-domain benchmarks are caused by varying distributions between various datasets. Instead, the shifts in MetaCoCo are caused by varying both concepts and contexts. For example, for cross-domain shifts, the FSC model is trained on miniImageNet and tested on EuroSAT, whereas for spurious-correlation shifts, the FSC model is trained and tested on images that have distinct associations with the contexts. (2) **The use of contexts.** In contrast to the existing few-shot classification benchmarks, as shown in Table 1, the proposed MetaCoCo benchmark further uses context information collected from real-world scenarios to reflect the spurious-correlation shifts.
### 3.2 Similarity Between the Concept and Context Information
For images containing both conceptual and contextual information, a greater similarity between image and context implies that the benchmark has more spurious-correlation shifts. To intuitively show that MetaCoCo has considerably more spurious-correlation shifts than the existing FSC benchmarks including cross-domain-shift benchmarks, we introduce a novel metric that uses CLIP (Radford et al., 2021) as a pre-trained vision-language model. By calculating the cosine distance of text and image features extracted by pre-trained text and image encoder from CLIP, the similarity $M_{ce}$ between conceptual language information and image visual knowledge, and the similarity $M_{te}$ between contextual language expression and image visual knowledge are calculated as follows:
$$M_{ce} = d(z_x, z_t^{ce}), \quad M_{te} = d(z_x, z_t^{te}),$$
where $d(\cdot, \cdot)$ is the cosine distance measurement, $z_x$ is the image feature extracted by the pre-trained image encoder of CLIP, and $z_t^{ce}$ and $z_t^{te}$ represent the text features of the concept and the context extracted by the pre-trained text encoder of CLIP, respectively. Figure 2(a) shows the sample-averaged similarity $M_{ce}$ between concepts and images on the existing FSC benchmarks as well as the proposed MetaCoCo. It can be seen that MetaCoCo has a significantly lower similarity between concepts and images. This is because the added context information in the image introduces spurious correlations with the concepts, e.g., "grass" and "dog", thus weakening the direct correlation between the images and the concepts (or labels), and presenting a more challenging evaluation benchmark for FSC. Figure 2(b) further shows the context-image similarities $M_{te}$ (horizontal axis) versus the concept-image similarities $M_{ce}$ (vertical axis) of the sample points in MetaCoCo. We find that the overall context-image similarities are slightly higher than the concept-image similarities, suggesting that spurious-correlation shifts are substantial in the proposed benchmark.
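The metric can be sketched as follows with the Hugging Face `transformers` CLIP interface; the checkpoint name, image path, and prompt templates are illustrative assumptions rather than the exact setup used for Table 1.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dog_on_grass.jpg")            # hypothetical MetaCoCo sample
texts = ["a photo of a dog", "a photo of grass"]  # concept vs. context prompt

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    z_t = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    z_x = model.get_image_features(pixel_values=inputs["pixel_values"])

# Cosine similarity between the image feature and each text feature.
z_t = z_t / z_t.norm(dim=-1, keepdim=True)
z_x = z_x / z_x.norm(dim=-1, keepdim=True)
m_ce, m_te = (z_x @ z_t.T).squeeze(0).tolist()
print(f"M_ce (concept) = {m_ce:.3f}, M_te (context) = {m_te:.3f}")
```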
### 3.3 Evaluation Strategies
Before presenting the datasets, we first discuss the evaluation strategies in MetaCoCo, including:
(1) **Fine-tuning based methods.** Fine-tuning based methods follow the transfer learning procedure, including two phases: pre-training with base classes and test-tuning with novel classes. In the pre-
---
Since the existing FSC benchmarks lack context information as shown in Table 1, we are not able to compute their sample-averaged similarity $M_{ce}$ between contexts and images.
training with base classes phase, the base class set \( D_b \) is used to train a \( C_{base} \)-class classifier as below:
\[
\Gamma = \arg\min_{\theta, \phi} \sum_{i=1}^{T} L_{CE}(c_\phi(f_\theta(x_i)), y_i),
\]
where \( T \) is the number of samples in \( D_b \), and \( L_{CE}(\cdot, \cdot) \) is the cross-entropy loss. In the test-tuning with novel classes phase, each episode \( T_e = (S_e, Q_e) \) is sampled from the novel class set \( D_n \), and a new \( C_e \)-class classifier is re-learned based on the few labeled images in \( S_e \) and tested on \( Q_e \).
(2) **Metric-based methods.** Metric-based methods directly compare the similarities (or distance) between query images and support classes, i.e., learning to compare, through the episodic training mechanism. Taking Prototypical Network (ProtoNet) (Snell et al., 2017) as an example, it takes the mean vector of each support class as its corresponding prototype representation, and then compares the relationships between query images and prototypes. The prototype \( p_n \) of each class in the support set \( S_e \) can be formulated as
\[
p_n = \frac{1}{K} \sum_{(x_i, y_i) \in S_e} f_\theta(x_i) \cdot I(y_i = n),
\]
where \( I(\cdot) \) is the indicator function. Then, the metric loss on \( Q_e \) can be computed as:
\[
L(\theta) = -\frac{1}{I_q} \sum_{(x_i, y_i) \in Q_e} \log P(y_i \mid x_i), \quad \text{where } P(y_i \mid x_i) = \frac{\exp(-D(f_\theta(x_i), p_{y_i}))}{\sum_{n=1}^{N} \exp(-D(f_\theta(x_i), p_n))},
\]
and \( D(\cdot, \cdot) \) denotes a distance measurement, e.g., the squared euclidean distance in the ProtoNet.
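For concreteness, a minimal PyTorch sketch of the prototype and loss computations above (embedding sizes and episode shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def protonet_loss(z_support, y_support, z_query, y_query, n_way):
    """Prototypes are the per-class means of the support embeddings f_theta(x);
    query logits are negative squared Euclidean distances to the prototypes,
    so cross_entropy realizes -1/I_q * sum_i log P(y_i | x_i)."""
    prototypes = torch.stack([z_support[y_support == n].mean(dim=0)
                              for n in range(n_way)])   # (N, d)
    logits = -torch.cdist(z_query, prototypes) ** 2     # (I_q, N)
    return F.cross_entropy(logits, y_query)

# Hypothetical 5-way 5-shot episode with 15 queries per class, d = 64.
z_s, y_s = torch.randn(25, 64), torch.arange(5).repeat_interleave(5)
z_q, y_q = torch.randn(75, 64), torch.arange(5).repeat_interleave(15)
print(protonet_loss(z_s, y_s, z_q, y_q, n_way=5))
```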
(3) **Meta-based methods.** Meta-based methods aim to make the trained model able to quickly adapt to unseen novel tasks with a few gradient steps in the testing phase. Specifically, the learning paradigm of meta-based methods has two levels, i.e., inner-level and outer-level, to update the base and meta learner, respectively. Model-agnostic meta-learning (MAML) (Finn et al., 2017) is one representative method, whose core idea is to train a model's initial parameters by using the two levels. Specifically, the meta learner is updated at the outer level on the query set \( Q_e \),
\[
\{\theta, \phi\} \leftarrow \{\theta, \phi\} - \eta_{out} \nabla_{\{\theta, \phi\}} \sum_{(x_i, y_i) \in Q_e} L_{CE}(c_{\phi'}(f_{\theta'}(x_i)), y_i),
\]
where the base learner is first optimized at the inner level on the support set \( S_e \),
\[
\{\theta', \phi'\} = \{\theta, \phi\} - \eta_{in} \nabla_{\{\theta, \phi\}} \sum_{(x_i, y_i) \in S_e} L_{CE}(c_{\phi}(f_{\theta}(x_i)), y_i),
\]
and the \( \eta_{in} \) and \( \eta_{out} \) are the learning rates of the inner level and the outer level, respectively.
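A minimal second-order MAML sketch of these two levels, assuming PyTorch ≥ 2.0 for `torch.func.functional_call`; `lr_in` plays the role of \(\eta_{in}\), while \(\eta_{out}\) is simply the meta optimizer's learning rate.

```python
import torch
import torch.nn.functional as F

def maml_episode(model, x_s, y_s, x_q, y_q, lr_in):
    """Inner level: adapt a functional copy of the parameters on the support
    set. Outer level: the query loss is computed through the adapted
    parameters, so backward() reaches the initialization {theta, phi}."""
    params = dict(model.named_parameters())
    logits_s = torch.func.functional_call(model, params, (x_s,))
    grads = torch.autograd.grad(F.cross_entropy(logits_s, y_s),
                                list(params.values()), create_graph=True)
    adapted = {name: p - lr_in * g
               for (name, p), g in zip(params.items(), grads)}
    logits_q = torch.func.functional_call(model, adapted, (x_q,))
    return F.cross_entropy(logits_q, y_q)

# Outer update per episode:
#   loss = maml_episode(model, x_s, y_s, x_q, y_q, lr_in=0.01)
#   loss.backward(); meta_optimizer.step(); meta_optimizer.zero_grad()
```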
4 **META-COCO: A NEW FEW-SHOT CLASSIFICATION BENCHMARK WITH SPURIOUS CORRELATION**
MetaCoCo aims to present an environment for evaluating the fine control of spurious-correlation shifts in the FSC problems. Specifically, our approach consists of (1) dataset generating, and (2) episode sampling, whose operational procedures are detailed below.
**Dataset generating.** Compared with the existing benchmarks, the samples in MetaCoCo consist of both conceptual and contextual information, and many of these images exhibit a strong correlation with the context, which increases the impact of spurious-correlation shifts between the training data and the testing data on the prediction performance. Specifically, we first select 100 categories of common objects following DomainNet (Peng et al., 2019). These categories include 155 contexts, which are collected from the adjectives or nouns that appear most frequently with these categories in WordNet (Miller, 1995). Then the images are collected by searching a category name combined with a context name (e.g., "dog on grass") in various image search engines. One of the main challenges is that the downloaded data contains a large portion of outliers. To clean the dataset, we manually filter out the outliers, which takes around 2,500 hours in total. To control the annotation quality, we assign two annotators to each image and only keep the images agreed on by both annotators. After the filtering process, we kept 175.6k images from the 1.0 million images crawled from the web, with an average of around 1,750 images per category (see Appendix B for more details).
**Episode sampling.** MetaCoCo has 100 categories, and the number of matching contexts for each category is inconsistent, resulting in an inconsistent number of samples per category. We sort the categories by their number of samples in descending order: the first 64 categories with the largest numbers of samples are used as training data, the next 20 categories are selected as testing data, and the last 16 categories are used as validation data. FSC adopts an episodic paradigm to train and test the model. Each \( N \)-way \( K \)-shot
Table 2: Experiments in state-of-the-art few-shot classification and self-supervised learning methods. “rot.” and “jig.” mean using the Rotation and Jigsaw self-supervised pretext tasks, respectively.
| Method | Conference | Backbone | Type | GL | LL | TT | 1-shot | 5-shot |
|-----------------|------------|----------|--------------------|----|----|-----|--------|--------|
| Baseline | ICLR 2019 | ResNet12 | Fine-tuning | ✓ | ✓ | | 46.78 | 60.78 |
| Baseline++ | ICLR 2019 | ResNet12 | Fine-tuning | ✓ | ✓ | | 46.95 | 58.50 |
| RFS-simple | ECCV 2020 | ResNet12 | Fine-tuning | ✓ | ✓ | | 47.02 | 56.71 |
| Neg-Cosine | ECCV 2020 | ResNet12 | Fine-tuning | ✓ | ✓ | | 50.78 | 62.34 |
| SKD-GEN0 | BMVC 2021 | ResNet12 | Fine-tuning | ✓ | ✓ | | 51.34 | 63.21 |
| FRN | CVPR 2022 | ResNet12 | Fine-tuning | ✓ | ✓ | | 50.26 | 60.56 |
| Yang et al. | ECCV 2022 | ResNet12 | Fine-tuning | ✓ | ✓ | | 58.01 | 69.32 |
| LP-FT-FB | ICLR 2023 | ResNet12 | Fine-tuning | ✓ | ✓ | | 56.21 | 70.21 |
| MAML | ICML 2017 | ResNet12 | Meta | ✓ | ✓ | | 48.71 | 54.23 |
| Versa | NeurIPS 2018| ResNet12 | Meta | ✓ | ✓ | | 39.64 | 53.06 |
| R2D2 | ICLR 2019 | ResNet12 | Meta | ✓ | ✓ | | 45.25 | 60.14 |
| MTL | CVPR 2019 | ResNet12 | Meta | ✓ | ✓ | | 44.23 | 58.04 |
| ANIL | ICLR 2020 | ResNet12 | Meta | ✓ | ✓ | | 36.58 | 50.54 |
| BOIL | ICLR 2021 | ResNet12 | Meta | ✓ | ✓ | | 44.09 | 55.61 |
| CDRM-M | NeurIPS 2023| ResNet18 | Meta | ✓ | ✓ | | 44.88 | 61.42 |
| CDKT+ | NeurIPS 2023| ResNet18 | Meta | ✓ | ✓ | | 44.81 | 59.87 |
| CovaMNet | AAAI 2019 | ResNet12 | Metric | ✓ | ✓ | | 47.81 | 58.43 |
| DN4 | CVPR 2019 | ResNet12 | Metric | ✓ | ✓ | | 45.04 | 57.68 |
| CAN | NeurIPS 2019| ResNet12 | Metric | ✓ | ✓ | | 48.93 | 62.36 |
| DeepBDC | CVPR 2022 | ResNet12 | Metric | ✓ | ✓ | | 46.78 | 62.54 |
| FGFL | ICCV 2023 | ResNet12 | Metric | ✓ | ✓ | | 46.78 | 64.32 |
| PUTM | ICCV 2023 | ResNet18 | Metric | ✓ | ✓ | | 60.23 | 72.36 |
| TSA+DETA | ICCV 2023 | ResNet18 | Metric | ✓ | ✓ | | 51.42 | 61.58 |
| MoCo | CVPR 2020 | ResNet50 | Self-supervised learning | ✓ | ✓ | | 56.90 | 70.65 |
| SimCLR | ICML 2020 | ResNet50 | Self-supervised learning | ✓ | ✓ | | 58.12 | 71.21 |
| ProtoNet | NeurIPS 2017| ResNet18 | Metric | ✓ | ✓ | | 43.74 | 57.84 |
| + rot. + HTS | ECCV 2020 | ResNet18 | Self-supervised learning | ✓ | ✓ | | 40.64 | 54.23 |
| + HG + SSFSL | ECCV 2020 | ResNet18 | Self-supervised learning | ✓ | ✓ | | 42.06 | 55.13 |
| + rot. + jig. + SSFSL | ECCV 2020 | ResNet18 | Self-supervised learning | ✓ | ✓ | | 45.43 | 58.91 |
| ProtoNet | NeurIPS 2017| ResNet12 | Metric | ✓ | ✓ | | 44.46 | 59.01 |
| + rot. + SLA | ICML 2020 | ResNet12 | Self-supervised learning | ✓ | ✓ | | 42.69 | 59.50 |
| + rot. + HTS | ECCV 2022 | ResNet12 | Self-supervised learning | ✓ | ✓ | | 40.29 | 58.09 |
| + rot. + BF3S | NeurIPS 2017| ResNet18 | Metric | ✓ | ✓ | | 43.19 | 60.50 |
| + rot. + HTS | ECCV 2022 | ResNet18 | Self-supervised learning | ✓ | ✓ | | 43.67 | 60.78 |
| ProtoNet | NeurIPS 2017| ResNet18 | Metric | ✓ | ✓ | | 43.78 | 57.64 |
| + rot. + BF3S | ICCV 2019 | ResNet18 | Self-supervised learning | ✓ | ✓ | | 45.31 | 62.31 |
episode $T_e$ has a support set $S_e$ and a query set $Q_e$, where $S_e$ and $Q_e$ share the same categories but contain different images. Therefore, we have two episode-sampling strategies: the independent and identically distributed (IID) episode, where the support and query images come from the same contexts, and the out-of-distribution (OOD) episode, where the support and query images come from different contexts.
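The two strategies can be sketched as below, assuming a hypothetical `index` that maps each class to its contexts and image paths, with every class having at least two contexts (and enough images per pool) so that an OOD split is possible.

```python
import random

def sample_episode(index, n_way, k_shot, n_query, ood=False, seed=None):
    """index: class -> context -> list of image paths (illustrative layout).
    IID episodes draw support and query from the same contexts; OOD episodes
    split each class's contexts into disjoint support/query halves."""
    rng = random.Random(seed)
    support, query = [], []
    for label, cls in enumerate(rng.sample(sorted(index), n_way)):
        contexts = sorted(index[cls])
        if ood:
            rng.shuffle(contexts)
            half = len(contexts) // 2
            s_ctx, q_ctx = contexts[:half], contexts[half:]
        else:
            s_ctx = q_ctx = contexts
        s_pool = [p for c in s_ctx for p in index[cls][c]]
        q_pool = [p for c in q_ctx for p in index[cls][c]]
        support += [(p, label) for p in rng.sample(s_pool, k_shot)]
        query += [(p, label) for p in rng.sample(q_pool, n_query)]
    return support, query
```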
5 EXPERIMENTS
In this section, we evaluate the spurious-correlation performance of the state-of-the-art methods optimized with different learning strategies. These experiments further demonstrate that SC-FSC is still a major challenge (see Appendix C and D for more experimental details and results).
5.1 EXPERIMENTAL SETUP
**Few-shot classification methods.** We evaluate the performance of a large number of algorithms that span different learning strategies, including: (1) Five fine-tuning based methods: Baseline (Chen et al., 2019), Baseline++ (Chen et al., 2019), RFS-simple (Tian et al., 2020a), Neg-Cosine (Liu et al., 2020) and SKD-GEN0 (Rajasegaran et al., 2020). (2) Six metric-based methods: ProtoNet (Snell et al., 2017), RelationNet (Sung et al., 2018), CovaMNet (Li et al., 2019b), DN4 (Li et al., 2019a), CAN (Hou et al., 2019) and RENet (Kang et al., 2021). (3) Six meta-based methods: MAML (Finn et al., 2017), Versa (Gordon et al., 2018), R2D2 (Bertinetto et al., 2019), MTL (Sun et al., 2019), ANIL (Raghu et al., 2020) and BOIL (Oh et al., 2020). (4) Six self-supervised learning methods: MoCo (He et al., 2020), SimCLR (Chen et al., 2020), SSFSL (Su et al., 2020), HTS (Zhang et al., 2022a), SLA (Lee et al., 2020) and BF3S (Gidaris et al., 2019). (5) Seven cross-domain methods: Linear (Yue et al., 2020), Cosine (Yue et al., 2020), k-NN (Yue et al., 2020), ATA (Wang & Deng, 2021), FT (Tseng et al., 2020), LRP (Sun et al., 2021) and IFSL (Yue et al., 2020).
**Backbone architectures.** Following the prior literature (Li et al., 2023d), all fine-tuning based methods, metric-based methods and meta-based methods adopt three different embedding backbones from shallow to deep, i.e., Conv64F, ResNet12 and ResNet18. For methods of other learning strategies,
Table 3: Experiments of cross-domain and spurious-correlation few-shot classification methods.
| Method | Conference | Type | GL | LL | TT | 5-way 1-shot | 5-way 5-shot |
|-----------------|------------|----------|----|----|----|--------------|--------------|
| RelationNet | CVPR 2018 | Metric | ✓ | | | 45.32 ± 0.48 | 57.73 ± 0.45 |
| +ATA (Wang & Deng, 2021) | IJCAI 2021 | CD-FSC | ✓ | | | 43.24 ± 0.47 | 56.94 ± 0.47 |
| +FT (Tseng et al., 2020) | ICLR 2020 | CD-FSC | ✓ | | | 45.37 ± 0.50 | 58.74 ± 0.48 |
| GNN (Satorras & Estrach, 2018) | ICLR 2018 | Metric | ✓ | | | 48.14 ± 0.55 | 61.94 ± 0.56 |
| +ATA (Wang & Deng, 2021) | IJCAI 2021 | CD-FSC | ✓ | | | 46.78 ± 0.55 | 61.78 ± 0.52 |
| +FT (Tseng et al., 2020) | ICLR 2020 | CD-FSC | ✓ | | | 47.30 ± 0.56 | 65.90 ± 0.56 |
| TPN (Liu et al., 2018) | ICLR 2019 | Metric | ✓ | | | 49.65 ± 0.51 | 60.62 ± 0.47 |
| +ATA (Wang & Deng, 2021) | IJCAI 2021 | CD-FSC | ✓ | | | 47.15 ± 0.53 | 60.33 ± 0.31 |
| +FT (Tseng et al., 2020) | ICLR 2020 | CD-FSC | ✓ | | | 45.62 ± 0.51 | 55.78 ± 0.52 |
| Linear (Yue et al., 2020) | NeurIPS 2020 | Fine-tuning | ✓ | | | 43.31 ± 0.40 | 57.87 ± 0.41 |
| Cosine (Yue et al., 2020) | NeurIPS 2020 | Fine-tuning | ✓ | ✓ | | 42.81 ± 0.42 | 56.33 ± 0.41 |
| k-NN (Yue et al., 2020) | NeurIPS 2020 | Fine-tuning | ✓ | ✓ | | 42.22 ± 0.42 | 57.93 ± 0.42 |
| MAML (Finn et al., 2017) | ICML 2017 | Meta | ✓ | | | 44.09 ± 0.52 | 53.98 ± 0.48 |
| +IFSL (Yue et al., 2020) | NeurIPS 2020 | SC-FSC | ✓ | | | 43.42 ± 0.51 | 55.00 ± 0.48 |
| MTL (Sun et al., 2019) | CVPR 2019 | Meta | ✓ | | | 43.80 ± 0.48 | 57.18 ± 0.48 |
| +IFSL (Yue et al., 2020) | NeurIPS 2020 | SC-FSC | ✓ | | | 43.42 ± 0.48 | 56.90 ± 0.48 |
| MatchingNet (Vinyals et al., 2016) | NeurIPS 2016 | Metric | ✓ | | | 43.72 ± 0.49 | 56.12 ± 0.49 |
| +IFSL (Yue et al., 2020) | NeurIPS 2020 | SC-FSC | ✓ | | | 44.11 ± 0.49 | 55.86 ± 0.49 |
| SIB (Hu et al., 2020) | ICLR 2020 | Meta | ✓ | | | 48.43 ± 0.57 | 58.53 ± 0.51 |
| +IFSL (Yue et al., 2020) | NeurIPS 2020 | SC-FSC | ✓ | | | 47.97 ± 0.54 | 58.41 ± 0.50 |
Figure 3: Experiments of the test-tuning phase with different sampling episodes, i.e., IID and OOD.
we adopt different feature backbones based on the corresponding original papers, e.g., ResNet10 for cross-domain few-shot classification methods, WRN-28-10 for self-supervised learning methods.
**Evaluation protocols.** Following the prior work (Li et al., 2023d), we control the evaluation settings for all methods, evaluate them on 600 sampled tasks, and repeat this process five times, i.e., a total of 3,000 tasks. The top-1 mean accuracy is reported. All images are resized to 84 × 84 using a single center crop (Li et al., 2019b). Three common tricks are used: (1) Global-label (GL) indicates that the global labels of the training set are used for pre-training during the training phase. (2) Local-label (LL) means that only the specific local labels are used in the episodic training phase. (3) Test-tune (TT) means test-tuning with the support set at the testing stage.
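Under this protocol, the aggregation step amounts to the sketch below; reporting a 95% confidence interval alongside the mean (as the ± values in Table 3 suggest) is our assumption about how the intervals are computed.

```python
import statistics

def evaluate(run_task, n_tasks=600, n_repeats=5):
    """run_task() samples one episode and returns its top-1 accuracy.
    600 tasks repeated five times = 3,000 tasks in total."""
    accs = [run_task() for _ in range(n_tasks * n_repeats)]
    mean = statistics.fmean(accs)
    ci95 = 1.96 * statistics.stdev(accs) / len(accs) ** 0.5
    return mean, ci95
```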
5.2 Main results
In this section, we conduct extensive experiments on various methods with six learning strategies.
**Experiments on fine-tuning, metric- and meta-based methods and self-supervised methods.** We evaluate the performance of 17 competing few-shot methods and six self-supervised methods on our MetaCoCo. The results under the 5-way 1- and 5-shot settings are shown in Table 2. From Table 2, we have the following findings: (1) The performance of all methods decreases compared with existing FSC benchmarks (Li et al., 2023d), which demonstrates that these methods are insufficient for solving the spurious-correlation-shift problem. (2) Previous works introduced self-supervised learning to improve the generalization of FSC models, but our experiments show that this is not suitable for the SC-FSC problem. In some cases, using self-supervised learning even damages the performance, e.g., ProtoNet achieves 43.14% in the 1-shot setting, but the accuracy drops to 40.65% when rotation is used.
**Experiments on CD-FSC and SC-FSC methods.** Table 3 displays the accuracy of seven CD-FSC methods. These methods perform well in solving the cross-domain-shift problem on Meta-dataset (Triantafillou et al., 2020) and BSCD-FSL (Guo et al., 2020). However, on MetaCoCo, the advantages of these methods disappear, resulting in weaker performance, sometimes even worse than non-cross-domain FSC methods. It is worth noting that the main motivation of IFSL (Yue et al., 2020) is to use the idea of causality to mitigate the impact of spurious correlations between contextual information and images during model training. However, we observe a substantial decrease in its performance on the real-world spurious-correlation benchmark, i.e., MetaCoCo.
Taken together, these experimental results show that most existing methods are insufficient for solving the spurious-correlation-shift FSC problem. We hope the proposed MetaCoCo can facilitate future research on this important, real-world problem in few-shot classification.
5.3 IN-DEPTH STUDY
To further analyze the influence of spurious shifts in MetaCoCo, we conduct in-depth experiments.
**Effect of the IID and OOD episodes.** Figure 3 shows the results of FSC methods under the 5-way 1- and 5-shot settings. The IID and OOD episodes use the same and different contexts, respectively, for the support and query sets during the test-tuning phase (see Section 4). The results clearly show that performance under IID episodes is better than under OOD episodes, which further demonstrates that models tend to exploit contextual information during learning: once images do not match the contexts, performance deteriorates.
**Effect of different backbone architectures.** Chen et al. (2019) change the depth of the feature backbone to reduce intra-class variation for all methods. Following this, we start from Conv64F and gradually increase the backbone to ResNet12 and ResNet18. The experiments under 5-way and 10-way 1-shot settings are shown in Figure 4. It is commonly assumed that a stronger backbone yields better performance; however, we surprisingly find that this does not always hold in the SC-FSC problem: Figure 4 shows performance degradation in some settings.
**Ways and shots analysis.** We further study the effect of the number of ways (Figure 5, left) and shots (Figure 5, right). As expected, difficulty increases and performance degrades as the number of ways grows, whereas more examples per class indeed make each class easier to classify correctly. Interestingly, Versa performs poorly as the number of ways increases but improves rapidly as the number of shots increases, which further indicates that contextual effects become larger as the task becomes harder. CAN achieves the best accuracy under all settings because it uses a transduction strategy to introduce query samples in the training phase, which breaks the strong spurious correlations between contexts and images.
6 CONCLUSION
In this paper, we present Meta Concept Context (MetaCoCo), a large-scale, diverse, and realistic benchmark for spurious-correlation few-shot classification. We believe that our exploration of various modes on MetaCoCo has uncovered interesting directions for future work: it remains unclear what the best learning strategy is for avoiding the effect of spurious-correlation contexts, and what the most appropriate episodic sampling is. Current models, even cross-domain FSC models, fail when trained on mismatching contexts; they are also not robust to the amount of data in testing episodes, each excelling in a different part of the spectrum. We believe that addressing these shortcomings constitutes an important research goal moving forward.
ACKNOWLEDGMENTS
This work was supported in part by National Natural Science Foundation of China (No. U20A20387, 62376243, 62037001, 623B2002), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010) and Project by Shanghai AI Laboratory (P22KS00111). All opinions in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.
REFERENCES
Faruk Ahmed, Yoshua Bengio, Harm van Seijen, and Aaron Courville. Systematic generalisation with group invariant predictions. In *International Conference on Learning Representations, ICLR*, 2020.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019.
Jun-Hyun Bae, Inchul Choi, and Minho Lee. Meta-learned invariant risk minimization. *arXiv preprint arXiv:2103.12947*, 2021.
Shuanghao Bai, Min Zhang, Wanqi Zhou, Siteng Huang, Zhirong Luan, Donglin Wang, and Badong Chen. Prompt-based distribution alignment for unsupervised domain adaptation. In *Proceedings of the AAAI conference on artificial intelligence, AAAI*, 2024.
Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. In *Proceedings of the International Conference on Learning Representations, ICLR*, 2019.
Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. Invariant rationalization. In *International Conference on Machine Learning, ICML*, pp. 1448–1458. PMLR, 2020.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning, ICML*, pp. 1597–1607. PMLR, 2020.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In *International Conference on Learning Representations, ICLR*, 2019.
Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, and Xiaolong Wang. Meta-baseline: exploring simple meta-learning for few-shot learning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV*, 2021.
Hao Cheng, Siyuan Yang, Joey Tianyi Zhou, Lanqing Guo, and Bihan Wen. Frequency guidance matters in few-shot learning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV*, pp. 11814–11824, 2023.
Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR*, pp. 3606–3613, 2014.
Noel Codella, Veronica Rotemberg, Philipp Tschandl, M Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (isic). *arXiv preprint arXiv:1902.03368*, 2019.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR*, pp. 248–255. Ieee, 2009.
Carl Doersch, Ankush Gupta, and Andrew Zisserman. Crosstransformers: spatially-aware few-shot transfer. In *Advances in neural information processing systems, NeurIPS*, volume 33, pp. 21981–21993, 2020.
|
glwwbaeKm2
|
The paper highlights, in Figure 1(b), the relatively small scope of existing VFL datasets. This raises questions about the alignment of the proposed VFL dataset synthesis approach with real-world scenarios
|
VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks
Zhaomin Wu, Junyi Hou, Bingsheng He
National University of Singapore
{zhaomin,junyi.h,hebs}@comp.nus.edu.sg
Abstract
Vertical Federated Learning (VFL) is a crucial paradigm for training machine learning models on feature-partitioned, distributed data. However, due to privacy restrictions, few public real-world VFL datasets exist for algorithm evaluation, and these represent a limited array of feature distributions. Existing benchmarks often resort to synthetic datasets, derived from arbitrary feature splits from a global set, which only capture a subset of feature distributions, leading to inadequate algorithm performance assessment. This paper addresses these shortcomings by introducing two key factors affecting VFL performance - feature importance and feature correlation - and proposing associated evaluation metrics and dataset splitting methods. Additionally, we introduce a real VFL dataset to address the deficit in image-image VFL scenarios. Our comprehensive evaluation of cutting-edge VFL algorithms provides valuable insights for future research in the field.
1 Introduction
Federated learning [Konečný et al., 2016] is acknowledged for enabling model training on distributed data with enhanced privacy. In this study, we delve into the less explored vertical federated learning (VFL), where each party has a feature subset, aligning with a general definition of federated learning [Li et al., 2021a] that includes privacy-preserving collaborative learning like assisted learning [Diao et al., 2022] and split learning [Vepakomma et al., 2018]. The VFL application, depicted in Figure 1a, involves an initial development phase using synthetic or real-world benchmarks, followed by deployment in actual federated environments upon validation.
Evaluating VFL algorithms is challenging due to the inherent confidentiality of VFL data [Liu et al., 2022]. The scope of party imbalance and correlation in existing real VFL datasets, termed the real scope, is limited. Datasets in the OARF benchmark [Hu et al., 2022], FedAds [Wei et al., 2023], NUS-WIDE [Chua et al., 2009], and Vehicle [Duarte and Hu, 2004], predominantly represent scenarios where parties are balanced and exhibit weak correlations, as depicted in Figure 1b.
To address the constraints inherent in the real scope, many VFL benchmarks [Hu et al., 2022; He et al., 2020; Caldas et al., 2018] utilize synthetic datasets. This evaluation scope, termed uniform scope, represents the imbalance-correlation scope under an equal distribution of features among parties, either randomly or manually. The uniform scope, though commonly adopted in VFL experiments [Diao et al., 2022; Castiglia et al., 2022], confines the evaluation to scenarios featuring balanced, strongly correlated parties according to Figure 1b. Another critical limitation is the misalignment between the uniform scope and real scope, underscoring the imperative for a diverse and realistic VFL benchmark.
Constructing a systematic synthetic VFL benchmark necessitates pinpointing the key factors affecting VFL algorithm performance. Existing synthetic benchmarks for non-i.i.d. horizontal federated learning (HFL), such as NIID-Bench [Li et al., 2022a], fall short for VFL due to inherent assumptions about feature space and instance significance. Specifically, while HFL benchmarks typically assume independent and uniformly significant instances, this does not hold in VFL where features exhibit intrinsic correlations and differing importances. Furthermore, HFL benchmarks posit that all parties share the same feature space, a premise misaligned with VFL’s distributed feature paradigm. This delineates the unique analytical challenges inherent to synthetic VFL benchmarks.
Given these limitations, our statistical analysis of supervised VFL tasks identifies party importance and correlation as two crucial factors influencing target probability distributions in synthetic VFL datasets derived from the same global dataset. Accordingly, we propose VertiBench, a comprehensive VFL benchmark featuring novel feature-splitting methods for synthetic dataset generation. VertiBench offers three primary benefits: (1) it generally encompasses the uniform scope; (2) it effectively emulates the real scope, as evidenced by comparable performance on VertiBench-synthetic datasets; and (3) it introduces the capability to evaluate other scenarios that have not been explored in the previous studies, e.g. imbalanced feature split, broadening the scope of VFL evaluation.
Our primary contributions include: (1) Synthetic dataset generation methods with varied party importance and correlation, capturing a broad scope of VFL scenarios. (2) Novel real-world image-to-image VFL dataset Satellite. (3) Techniques to evaluate the party importance and correlation of real-world VFL datasets, enabling feature split comparison with synthetic VFL datasets. (4) Comprehensive benchmarks of mainstream cutting-edge VFL algorithms, providing key insights.
For example, we demonstrate the scalability of VFL algorithms, challenging prior assumptions about VFL scaling difficulties (Hu et al., 2022), and emphasize the challenges of communication efficiency in VFL datasets across varying imbalance levels. The VertiBench source code is available on GitHub (Wu et al., 2023a), with data splitting tools installable from PyPI (Wu et al., 2023b). The pre-split dataset is accessible in (Anonymized, 2023).
2 EVALUATE VFL DATASETS
In this section, our objective is to investigate the primary factors influencing VFL performance when generating synthetic VFL datasets from a fixed global dataset. Additionally, we explore methods to efficiently estimate these factors, guiding the subsequent feature split.
2.1 FACTORS THAT AFFECT VFL PERFORMANCE
Suppose there are $K$ parties. Denote the data on party $P_k$ as a random vector $X_k$ ($1 \leq k \leq K$). Denote the label as a random variable $y$. A supervised learning algorithm maximizes the likelihood function where hypothesis $h$ represents models and parameters, i.e., $L(y|X_K, ..., X_1; h)$. These supervised learning algorithms estimate the probability mass function in Eq. (1). The proof of Proposition 1 is provided in Appendix A.
**Proposition 1.** The probability mass function can be written as
$$\log P(y|X_K, ..., X_1) = \sum_{k=1}^{K} \log \frac{P(y|X_k,...,X_1)}{P(y|X_{k-1},...,X_1)} + \log P(y)$$
(1)
In VFL, $P(y)$ is the same for all the parties. The skewness among $K$ parties is determined by $K$ ratios of distributions. Interestingly, this ratio quantifies the divergence between two marginal probability distributions of $y$ - one inclusive of $X_k$ and the other exclusive of $X_k$. Essentially, the ratio estimates the impact on the global distribution when the features of a single party are excluded. This can be interpreted as the **importance** of a given party. Proposition 1 applies regardless of the order of $X_1, ..., X_k$. The Shapley value, emphasizing feature independence, aids in precisely evaluating party importance in vertical federated learning, as demonstrated in (Wang et al., 2019; Han et al., 2021).
In another aspect, the ratio \( \frac{P(y|X_k, \ldots, X_1)}{P(y|X_{k-1}, \ldots, X_1)} \) is determined by the correlation between \( X_k \) and \( X_1, \ldots, X_{k-1} \). In cases where the independence assumption underlying the Shapley value is invalidated, assessing each party’s impact on the global distribution becomes more accurate when based on feature correlation.
We identify feature importance and correlation as pivotal factors influencing VFL algorithm performance. For datasets with nearly independent features, the low inter-party correlation makes correlation-based splits less meaningful, suggesting the superiority of importance-based feature splits. Conversely, in datasets with highly correlated features, assessing individual feature importance becomes impractical, making correlation-based splits more suitable due to varying inter-party correlations.
Importance and correlation are treated as orthogonal evaluation factors applicable in distinct scenarios. While there may be an intrinsic link between them, our experiments indicate that focusing on one factor at a time yields explainable results reflective of real-world performance. As discussed in Appendix H, the interplay between importance and correlation can be complex. A joint optimization for both factors might be computationally intensive and less explainable, while providing limited additional insights. The subsequent sections will introduce our approach to evaluate these two factors and generating synthetic datasets based on each factor accordingly.
2.2 Evaluate Party Importance
To assess the importance for each party, we sum the importance of its features. While numerous methods to evaluate feature importance can be adopted in VertiBench, this study primarily focuses on two approaches: 1) Shapley Value: Feature importance is determined using Shapley values, efficiently estimated by evaluating the performance of a trained XGBoost (Chen and Guestrin [2016]) on random subsets. 2) Shapley-CMI (Han et al. [2021]): This approach, which does not rely on specific models, estimates the importance of each feature based on the Shapley-CMI applied to the global dataset. Both methods yield consistent and reasonable estimates of party importance.
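As a rough illustration of the first approach, the sketch below estimates per-feature Shapley values by Monte-Carlo permutation sampling over a trained XGBoost model, masking excluded features with their column means, and then sums them per party. The permutation-sampling and mean-imputation choices are our assumptions for illustration; the paper's exact subset-evaluation scheme may differ.

```python
import numpy as np
from xgboost import XGBClassifier

def party_importance(X, y, party_features, n_perms=50, seed=0):
    """Sum of Monte-Carlo Shapley estimates of feature importance per party."""
    rng = np.random.default_rng(seed)
    model = XGBClassifier(n_estimators=100).fit(X, y)
    means, m = X.mean(axis=0), X.shape[1]
    phi = np.zeros(m)

    def acc_with(subset):
        # Features outside `subset` are masked by their column means.
        Xm = np.tile(means, (X.shape[0], 1))
        Xm[:, subset] = X[:, subset]
        return (model.predict(Xm) == y).mean()

    for _ in range(n_perms):
        included, prev = [], acc_with([])
        for j in rng.permutation(m):
            included.append(j)
            cur = acc_with(included)
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    phi /= n_perms
    return [phi[list(fs)].sum() for fs in party_features]
```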
2.3 Evaluate Party Correlation
The task of efficiently evaluating correlation among two groups of features is challenging despite well-studied individual feature correlation (Myers and Sirois [2004], De Winter et al. [2016]). The Shapley-Taylor index, proposed for evaluating correlation between feature sets (Sundararajan et al. [2020]), is computationally intensive (NP-hard), and unsuitable for high-dimensional datasets. The determinant of the correlation matrix (Wang and Zheng [2014]) efficiently estimates inter-party correlation but is over-sensitive to linearly correlated features, impeding its use in feature partitioning. A more refined metric - the multi-way correlation coefficient (mcor) (Taylor [2020]) addresses this, but like the determinant, it struggles with unequal feature numbers across parties, a typical VFL scenario, due to the assumption of a square correlation matrix.
Given the limitations of existing metrics (Taylor [2020], Wang and Zheng [2014]), we propose a novel metric to examine the correlation when the parties involved possess unequal numbers of features. Our approach hinges on the use of the standard variance of the singular values of the correlation matrix. This serves as an efficient measure of the overall correlation between two parties. Since the feature-wise correlation is an orthogonal research area, we selected Spearman rank correlation (Zar [2005]) due to its capability to handle non-linear correlation.
To elaborate further, we denote the column-wise correlation matrix between two matrices, \( X_i \) and \( X_j \), as \( \text{cor}(X_i, X_j) \). As a result, we formally define the correlation between two entities, \( X_i \in \mathbb{R}^{n \times m_i} \) and \( X_j \in \mathbb{R}^{n \times m_j} \), in terms of their respective parties as Eq. 2:
\[
\text{Pcor}(X_i, X_j) := \frac{1}{\sqrt{d}} \sqrt{\frac{1}{d-1} \sum_{t=1}^{d} (\sigma_t(\text{cor}(X_i, X_j)) - \bar{\sigma})^2}, \quad d = \min(m_i, m_j)
\]
In this equation, \( \sigma_t(\cdot) \) denotes the \( t \)-th singular value of a matrix, while \( \bar{\sigma} \) stands for the mean of the singular values. Proposition 2 states that Pcor is equivalent to mcor for inner-party correlation (see Appendix A for proof). Experiments detailed in Appendix D.1 reveal that Pcor exhibits trends analogous to mcor (Taylor [2020]) when assessing inter-party correlation between equal numbers of features.
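A minimal NumPy/SciPy sketch of Eq. 2 follows, assuming each party holds at least two features; `pcor` builds the Spearman cross-correlation block between the two parties and takes the scaled standard deviation of its singular values.

```python
import numpy as np
from scipy.stats import spearmanr

def pcor(Xi, Xj):
    """Pcor (Eq. 2): scaled std of the singular values of cor(Xi, Xj)."""
    mi, mj = Xi.shape[1], Xj.shape[1]
    # Spearman correlation of all columns; the off-diagonal block is cor(Xi, Xj).
    full = spearmanr(np.hstack([Xi, Xj])).correlation
    cross = full[:mi, mi:]
    s = np.linalg.svd(cross, compute_uv=False)  # d = min(mi, mj) singular values
    d = min(mi, mj)
    return np.sqrt(np.sum((s - s.mean()) ** 2) / (d - 1)) / np.sqrt(d)
```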
Proposition 2. For any real matrix $X$, $\text{Pcor}(X, X) = \text{mcor}(X, X)$
The singular values of a correlation matrix, $\text{Pcor}$, represent the magnitudes of its ellipsoid’s semi-axes, indicating the degree of dependence among features. The standard deviation of these singular values reflects the distribution of dependence across different axes. A notably large singular value in a specific axis (Figure 2c) suggests a high concentration of dependence. For instance, if there’s only one nonzero singular value, it implies that all features are perfectly correlated with a single feature. Conversely, if the singular values are uniformly distributed such as Figure 2a (indicated by a small standard deviation), it denotes less concentrated feature correlations. Therefore, the standard deviation of singular values serves as a measure of the dataset’s proximity to perfect correlation.
Proposition 3 states that $\text{Pcor}$, like $\text{mcor}$, spans a range from 0 to 1, even when assessing inter-party correlation. A $\text{Pcor}$ value of 1 signifies perfect correlation between $X_1$ and $X_2$, while a value of 0 indicates their independence.
Proposition 3. For any two real matrices $X_1$ and $X_2$, $\text{Pcor}(X_1, X_2) \in [0, 1]$
It is important to note that the absolute value of $\text{Pcor}$ alone does not fully capture inter-party correlation. For instance, when $X_i$ and $X_j$ are two parties both containing the same set of independent features, $\text{Pcor}(X_i, X_j)$ yields a value of 0, the same as the $\text{Pcor}$ between two independent parties. Despite the same $\text{Pcor}$ value, these scenarios intuitively differ in their levels of inter-party correlation. This discrepancy arises from overlooking the inner-party correlation of $X_i$ and $X_j$. Typically, parties with highly correlated features tend to exhibit higher $\text{Pcor}$ values with other parties.
To accurately measure the correlation between $X_i$ and $X_j$, we evaluate how the shift towards perfect correlation varies when $X_i$ is replaced by $X_j$. This is captured by the relative change in $\text{Pcor}$, denoted as $\text{Pcor}(X_i, X_j) - \text{Pcor}(X_i, X_i)$. From the perspective of variance analysis (Kruskal and Wallis [1952]), this difference quantifies the degree to which the standard deviation $\text{Pcor}(X_i, X_j)$ is explained by inter-party factors, controlling for the contribution of inner-party correlations. The overall inter-party correlation, denoted as $\text{Icor}$, is defined as the mean party-wise correlation across all distinct party pairs. Formally,
$$\text{Icor}(X_1, \ldots, X_K) := \frac{1}{K(K-1)} \sum_{i=1}^{K} \sum_{j=1, j \neq i}^{K} (\text{Pcor}(X_i, X_j) - \text{Pcor}(X_i, X_i)).$$
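Icor then follows directly, reusing the `pcor` sketch above; this is a direct transcription of the formula rather than the optimized implementation in the VertiBench code.

```python
def icor(parties):
    """Mean of Pcor(Xi, Xj) - Pcor(Xi, Xi) over all ordered pairs i != j."""
    K = len(parties)
    return sum(
        pcor(parties[i], parties[j]) - pcor(parties[i], parties[i])
        for i in range(K) for j in range(K) if i != j
    ) / (K * (K - 1))
```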
Figure 2: Examples of $\text{Pcor}$ values at different levels of correlation: (a) $x, y, z \sim U(0, 1)$; (b) $x, y \sim U(0, 1),\ z = -x^2 - y^2$; (c) $x \sim U(0, 1),\ y = 2x,\ z = x + 1$. $U$ denotes the uniform distribution. Arrow direction indicates right singular vector orientation; arrow scale represents singular values.
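The three toy examples of Figure 2 can be reproduced numerically with the `pcor` sketch above; the exact intermediate value in case (b) depends on the sample, but the ordering (a) < (b) < (c) should hold.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(size=1000), rng.uniform(size=1000)

A = np.column_stack([x, y, rng.uniform(size=1000)])  # (a) independent features
B = np.column_stack([x, y, -x**2 - y**2])            # (b) non-linear dependence
C = np.column_stack([x, 2 * x, x + 1])               # (c) perfectly correlated

for name, M in [("a", A), ("b", B), ("c", C)]:
    print(name, round(pcor(M, M), 3))  # ~0 for (a), intermediate for (b), 1 for (c)
```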
$\text{Icor}$ exhibits notable properties both theoretically and empirically. Theoretically, as demonstrated in Theorem 1 (see Appendix A for proof), optimizing $\text{Icor}$ yields ideal feature splits in optimal scenarios. Specifically, in datasets comprising two independent but internally perfectly correlated feature sets, $\text{Icor}$ reaches its minimum when each party exclusively possesses one feature set and attains its maximum when each party equally shares half of the features from both sets. Empirically, we evaluate the link between inter-party correlation and $\text{Icor}$ in complex, real-world datasets (Appendix D). These empirical observations align with theoretical insights, confirming $\text{Icor}$’s capability in analyzing intricate data correlations.
Theorem 1. Consider a global dataset \( X \) comprising two independent datasets \( D_1, D_2 \in \mathbb{R}^{n \times m} \), each of the same dimension. Independence implies that for any feature \( a_i^{(1)} \) from \( D_1 \) and any feature \( a_j^{(2)} \) from \( D_2 \), where \( i, j \in [1, m] \), the correlation \( \text{Cor}(a_i^{(1)}, a_j^{(2)}) = 0 \). Furthermore, assume within \( D_1 \) and \( D_2 \), all features are perfectly correlated, such that for all pairs of distinct features \( a_i^{(1)}, a_j^{(1)} \) in \( D_1 \) and \( a_i^{(2)}, a_j^{(2)} \) in \( D_2 \), with \( i, j \in [1, m] \) and \( i \neq j \), the correlations satisfy \( \text{Cor}(a_i^{(1)}, a_j^{(1)}) = 1 \) and \( \text{Cor}(a_i^{(2)}, a_j^{(2)}) = 1 \) respectively. When the features of \( X \) are divided equally into two subsets, \( X_1 \) and \( X_2 \), such that each subset contains \( m \) features, the overall inter-party correlation \( I_{\text{cor}}(X_1, X_2) \) satisfies
\[
I_{\text{cor}}(X_1, X_2) \in \left[ -\frac{m}{\sqrt{m(m-1)}}, 0 \right].
\]
The lower bound occurs if and only if \( X_1 \) comprises all features of either \( D_1 \) or \( D_2 \), with \( X_2 \) containing the remaining features. The upper bound occurs if and only if \( X_1 \) holds \( m/2 \) features from each of \( D_1 \) and \( D_2 \), with \( X_2 \) holding the remaining \( m \) features.
3 Split Synthetic VFL Datasets
This section aims to develop algorithms to split features according to two key factors: importance and correlation. These algorithms should allow users to adjust the party importance and correlation of synthetic VFL datasets by simply modulating two parameters: \( \alpha \) and \( \beta \). The intended mapping should meet two criteria: (1) The scope of \( \alpha \) and \( \beta \) should encompass a broad spectrum of feature splits, inclusive of both real splits and random splits. (2) When two global datasets bear similarities, synthetic VFL datasets derived from them using identical \( \alpha \) and \( \beta \) parameters should yield similar VFL algorithm behaviors. We provide both theoretical and empirical validation for criterion (1) in this section, whereas criterion (2) is substantiated through experiments in Section 4.4.
3.1 Split by Party Importance
In light of the computational expense incurred by the Shapley value method, an alternative and more efficient strategy is necessary to perform feature splits based on importance. With all parties exhibiting symmetry in the context of \( X \), varying the importance among parties essentially translates to varying the variance of the importance among them. Assuming each party \( P_i \) possesses an importance factor \( \alpha_i > 0 \), we propose the implementation of the Dirichlet distribution parameterized by \( \alpha = \{\alpha_i\}_{i=1}^K \) for feature splitting. This approach ensures two beneficial properties post-split: (1) a larger \( \alpha_i \) guarantees a higher expected importance for \( P_i \), and (2) a smaller \( \|\{\alpha_i\}_{i=1}^K\|_2 \) assures a greater variance in the importance among parties.
More specifically, we propose a feature splitting method based on feature importance. After initializing local datasets for each party, a series of probabilities \( r_1, \ldots, r_K \) s.t. \( \sum_{k=1}^K r_k = 1 \) is sampled from a Dirichlet distribution \( \text{Dir}(\alpha_1, \ldots, \alpha_K) \). Each feature is randomly allocated to a party \( P_k \), selected based on the probabilities \( r_k \). To accommodate algorithms that fail when faced with empty features, we can ensure each party is initially provided with a random feature before the algorithm is set in motion. Detailed formalization of this algorithm can be found in Appendix C.
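A minimal sketch of this split follows; the per-party pre-seeding of one random feature and the use of NumPy's Dirichlet sampler mirror the description above, while the exact tie-breaking details are in Appendix C of the paper.

```python
import numpy as np

def split_by_importance(X, alphas, seed=0):
    """Assign each feature to a party drawn with Dir(alpha)-sampled probabilities."""
    rng = np.random.default_rng(seed)
    K, m = len(alphas), X.shape[1]
    r = rng.dirichlet(alphas)                        # party probabilities r_1..r_K
    features = list(rng.permutation(m))
    parties = [[features.pop()] for _ in range(K)]   # avoid empty parties
    for j in features:
        parties[rng.choice(K, p=r)].append(j)
    return [X[:, sorted(cols)] for cols in parties]
```

For instance, `split_by_importance(X, alphas=[0.1] * 4)` tends to produce a highly imbalanced importance distribution across four parties, whereas large equal `alphas` approach an equal split.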
Theorem 2. Consider a feature index set \( A = \{1, 2, \ldots, m\} \) and a characteristic function \( v : 2^A \to \mathbb{R} \) such that \( v(\emptyset) = 0 \). Let \( \phi_j(v) \) denote the importance of the \( j \)-th feature on \( v \) such that \( \sum_{j=1}^m \phi_j(v) = v(A) \). Assume that the indices in \( A \) are randomly distributed to \( K \) parties with probabilities \( r_1, \ldots, r_K \sim \text{Dir}(\alpha_1, \ldots, \alpha_K) \). Let \( Z_i \) be the sum of feature importance for party \( i \). Then, for all \( i \in [1, K] \), we have \( E[Z_i] \propto \alpha_i \).
The proof of Theorem 2 can be found in Appendix A, resembling the Dirichlet-multinomial mean proof but focusing on sum importance instead of feature counts. The metric of importance, \( \phi_j(v) \), comprises the Shapley value and the recently proposed Shapley-CMI (Han et al., 2021). Theorem 2 asserts that the expected cumulative importance \( E[Z_i] \) of each party is proportional to the importance parameter \( \alpha_i \). The Dirichlet-based split method ensures that: (1) a larger value of \( \alpha_i \) leads to a higher expected value of \( r_i \), thus a higher expected value of party importance, and (2) a smaller value of...
\(\|\{\alpha_i\}_{i=1}^{K}\|_2\) results in a larger variance in \(r_i\), as well as more imbalanced importance among parties. Both properties are empirically validated in Appendix D.2. Hence, the proposed method naturally aligns with the requirements for feature importance. With \(\alpha = 1\), Dirichlet-split mirrors a uniform distribution, incorporating random splits within the uniform scope. Even for manual equal splits lacking consistent criteria, a large \(\alpha\) in Dirichlet-split can encapsulate them by yielding nearly equal feature distribution among parties.
### 3.2 Split by Party Correlation
This correlation-based feature-split algorithm (Alg. 1) is designed to allocate features across multiple parties based on a given correlation parameter \(\beta\). The algorithm's operation is premised on a defined number of features for each party, represented as \(m_1, \ldots, m_K\). Commencing with the initialization of a column permutation matrix \(P\) to an identity matrix (line 1), the algorithm proceeds to define a score function, \(f(P; X)\), which represents the overall correlation Icor after the features are permuted by \(P\) (line 2). Subsequently, the algorithm determines the range of the score function (lines 3-4). This forms the basis for calculating the target correlation \(f^*(X; \beta)\), which is a linear interpolation between the lower and upper bounds controlled by the correlation index \(\beta\) (line 5). Next, the algorithm locates the optimal permutation matrix \(P^*\) by solving a permutation-based optimization problem. Notably, we employ the Biased Random-Key Genetic Algorithm (BRKGA) [Gonçalves and Resende, 2011] for this purpose. The final step of the algorithm splits the features according to the derived optimal permutation and the pre-set number of features for each party (lines 6-7).
**Algorithm 1:** Feature Splitting by Correlation
**Input:** Global dataset \(X \in \mathbb{R}^{n \times m}\), correlation index \(\beta\), number of features \(m_1, \ldots, m_K\)
**Output:** Local datasets \(X_1, \ldots, X_K\)
1. \(P \leftarrow I;\) /* Initiate permutation matrix */
2. \(f(P; X) := \text{Icor}(X_1^P, \ldots, X_K^P) \ s.t.\ X_1^P, \ldots, X_K^P \leftarrow \text{split features of } XP \text{ by } m_1, \ldots, m_K;\)
3. \(f_{\min}(X) = \min_P f(P; X);\) /* Calculate lower bound */
4. \(f_{\max}(X) = \max_P f(P; X);\) /* Calculate upper bound */
5. \(f^*(X; \beta) \leftarrow (1 - \beta)f_{\min}(X) + \beta f_{\max}(X);\) /* Calculate target correlation */
6. \(P^* \leftarrow \arg \min_P |f(P; X) - f^*(X; \beta)|;\) /* Find the permutation matrix */
7. \(X_1^P, \ldots, X_K^P \leftarrow \text{split features of } XP^* \text{ by } m_1, \ldots, m_K;\)
8. return \(X_1, \ldots, X_K\)
The efficiency of the optimization process, which involves numerous Icor invocations, is crucial. For smaller datasets, singular value decomposition (SVD) [Baker, 2005] is used for direct singular value computation. For high-dimensional datasets, however, we employ truncated SVD [Hansen, 1990], which estimates the largest \(d_t\) singular values and treats the remainder as zero in the standard-variance calculation. The ablation study of \(d_t\) is included in Appendix G.6. Our experiments, detailed in Appendix D.2, confirm the efficacy of both split methods.
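As a simplified stand-in for Algorithm 1, the sketch below replaces BRKGA with naive random search over column permutations and approximates \(f_{\min}/f_{\max}\) by the sampled extremes; it reuses the `icor` sketch above and is only practical for small datasets.

```python
import numpy as np

def split_by_correlation(X, beta, sizes, n_trials=500, seed=0):
    """Pick the sampled permutation whose Icor is closest to the beta target."""
    rng = np.random.default_rng(seed)

    def split(perm):
        bounds = np.cumsum([0] + list(sizes))
        return [X[:, perm[a:b]] for a, b in zip(bounds[:-1], bounds[1:])]

    perms = [rng.permutation(X.shape[1]) for _ in range(n_trials)]
    scores = np.array([icor(split(p)) for p in perms])
    target = (1 - beta) * scores.min() + beta * scores.max()  # line 5 of Alg. 1
    return split(perms[int(np.argmin(np.abs(scores - target)))])
```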
### 3.3 Compare Feature Split Across Global Datasets
The metrics presented in Section 2 facilitate meaningful comparisons of feature splits within the same global datasets but fall short when comparing across different datasets. To bridge this gap and enable a comparison between real and synthetic VFL datasets, we introduce methods to map these metrics to two values: \(\alpha\) and \(\beta\), where \(\alpha\) indicates party balance and \(\beta\) indicates party correlation. Consequently, this mapping enables a direct comparison between feature splits originating from real and synthetic VFL datasets, as demonstrated in Figure 1b.
To estimate \(\alpha\), the importance of each party is calculated by Shapley values. These importance values are then normalized and treated as Dirichlet parameters \(\alpha_i\) for each party \(P_i\), in line with Theorem 2. To approximate the scale of the Dirichlet parameters and align them with the generation of synthetic datasets, we find a symmetric Dirichlet distribution \(\text{Dir}(\alpha)\) that has the same variance as \(\text{Dir}(\alpha_1, \ldots, \alpha_K)\), as given in Proposition 4. This value of \(\alpha\) reflects the variance of party importance. The proof is provided in Appendix A.
Proposition 4. Given a Dirichlet distribution \( \text{Dir}(\alpha_1, \ldots, \alpha_K) \) with mean variance \( \sigma \), a symmetric Dirichlet distribution \( \text{Dir}(\alpha) \) has the same mean variance \( \sigma \) if \( \alpha = \frac{K-1-K^2\sigma}{K^3\sigma} \).
To estimate \( \beta \), we start by computing the potential minimum and maximum values of Icor by shuffling the features among parties, denoted as \( \text{Icor}_{\min}, \text{Icor}_{\max} \). Next, we estimate the Icor of the actual dataset, \( \text{Icor}_{\text{real}} \), and derive the \( \beta \) value using \( \beta = \min \left\{ \max \left\{ \frac{\text{Icor}_{\text{real}} - \text{Icor}_{\min}}{\text{Icor}_{\max} - \text{Icor}_{\min}}, 0 \right\}, 1 \right\} \). It is important to note that in real-world scenarios, \( \text{Icor}_{\text{real}} \) might fall slightly outside the range of \( \text{Icor}_{\min}, \text{Icor}_{\max} \) due to the constraints of optimization algorithms. To rectify this, we clip the estimated \( \beta \) to ensure \( \beta \in [0, 1] \).
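The two mappings can be sketched as follows. The normalization applied to the Shapley-value importances below (scaling to sum 1) is our assumption for illustration; the mean component variance of a Dirichlet distribution, \(\frac{1}{K}\sum_i \alpha_i(\alpha_0-\alpha_i)/(\alpha_0^2(\alpha_0+1))\) with \(\alpha_0=\sum_i \alpha_i\), is then plugged into Proposition 4.

```python
import numpy as np

def estimate_alpha(importances):
    """Symmetric Dirichlet parameter alpha matching the mean variance (Prop. 4)."""
    a = np.asarray(importances, dtype=float)
    a = a / a.sum()                      # normalization convention assumed here
    K, a0 = len(a), a.sum()
    sigma = np.mean(a * (a0 - a) / (a0**2 * (a0 + 1)))  # mean component variance
    return (K - 1 - K**2 * sigma) / (K**3 * sigma)

def estimate_beta(icor_real, icor_min, icor_max):
    """Linear position of the real Icor between its bounds, clipped to [0, 1]."""
    return float(np.clip((icor_real - icor_min) / (icor_max - icor_min), 0.0, 1.0))
```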
4 EXPERIMENT
This section benchmarks cutting-edge VFL algorithms, with a detailed review in Section 4.1. Experimental settings are outlined in Section 4.2, and results regarding VFL accuracy and synthetic-real correlation are in Sections 4.3 and 4.4, respectively. Further evaluations, such as real communication cost, scalability, training time, and real dataset performance, are in Appendix G. Each experiment elucidates results and provides relevant insights, highlighting (1) the performance-communication tradeoff of NN-based and boosting-based methods, (2) the performance similarity between synthetic and real VFL datasets under the same \( \alpha, \beta \), and (3) the scalability potential of VFL algorithms.
4.1 REVIEW OF VFL ALGORITHMS
This section reviews existing VFL algorithms, with a focus on accuracy, efficiency, and communication cost. VertiBench concentrates on common supervised learning tasks such as classification and regression within synchronized parties, summarized in Table 1. Notably, this benchmark excludes studies exploring other aspects (Jin et al., 2021; Qi et al., 2022; Jiang et al., 2022) and other tasks (Chang et al., 2020; Li et al., 2021b; Chen and Zhang, 2022; He et al., 2022; Li et al., 2022b). Since most VFL algorithms presume exact inter-party data linking, we adopt this approach in VertiBench, despite recent findings (Wu et al., 2022a; Nock et al., 2021) suggesting that this assumption may not hold in practice. We refer to parties with and without labels as primary and secondary parties, respectively.
Table 1: Summary of VFL algorithms evaluated in VertiBench.

| Category | Model | Algorithm | Contribution | Reference | Data | Feature |
|----------|-------|-----------|--------------|-----------|------|---------|
| Ensemble-based | Any | AL | Accuracy | Xian et al., 2020 | Syn | Manual |
| | | GAL | Accuracy | Diao et al., 2022 | Syn | Manual |
| Split-based | NN | SplitNN | Accuracy | Vepakomma et al., 2018 | Syn | N/A |
| | | C-VFL | Communication | Castiglia et al., 2022 | Syn | Manual |
| | | BlindFL | Efficiency | Fu et al., 2022b | Syn | Manual |
| | | FedOnce | Communication | Wu et al., 2022c | Syn | Random |
| | GBDT | SecureBoost | Accuracy | Cheng et al., 2021 | Syn | Manual |
| | | Pivot | Accuracy | Wu et al., 2020 | Syn | Manual |
| | | FedTree | Accuracy, Efficiency | Li et al., 2023 | Syn | Random |
| | | VF2Boost | Efficiency | Fu et al., 2021 | Syn | Manual |
| | RF | Fed-Forest | Communication | Liu et al., 2020 | Syn | Random |
1 Abbreviations: NN - neural network; GBDT - gradient boosting decision trees; RF - random forest; Any - model-agnostic.
2 Dataset in experiments: Syn - synthetic datasets partitioned from global datasets.
3 Feature split used in the experiments: Manual - features manually split without specific reasons; Random - features randomly split without explanation; N/A - no VFL experiments conducted.
Most of the existing VFL methods can be categorized into ensemble-based and split-based. Ensemble-based methods have each party maintain a full model for local prediction and use collaborative ensemble techniques during training. Conversely, split-based methods delegate each party with a portion of the model, representing different inference stages. A comprehensive comparison is in Appendix B. In this paper, we concentrate on the primary types of VFL, acknowledging that there are various subtypes as identified in (Liu et al., 2022). Exploring these subtypes in depth will be an objective of our future research efforts.
In our experiments, we evaluate various VFL algorithms, including split-NN-based (e.g., SplitNN, C-VFL, FedOnce), split-GBDT-based (FedTree), and ensemble-based (GAL). For fairness, evaluations exclude encryption or noise. Noting minor variances among split-GBDT-based methods such as FedTree and SecureBoost, FedTree is used as a representative in our experiments.
4.2 Experimental Settings
This subsection includes the datasets and training method. Detailed dataset specifications, environments, and hyperparameter settings can be found in Appendix F.
Datasets. Our experiments utilize 11 datasets: nine centralized ones (covtype (Blackard 1998), msd (Bertin-Mahieux 2011), gisette (Guyon et al. 2008), realsim (Andrew 2015), epsilon (Guo-Xun et al. 2008), letter (Slate 1991), radar (Khosravi 2020), MNIST (Deng 2012), CIFAR10 (Krizhevsky and Hinton 2009)), and two real-world VFL datasets (NUS-WIDE (Chua et al. 2009), Vehicle (Duarte and Hu 2004)), with detailed descriptions available in Appendix E. The msd dataset is used for regression tasks, while the others cater to classification tasks. Each dataset is partitioned into 80% training and 20% testing instances, except NUS-WIDE, MNIST, and CIFAR10, which have pre-defined test sets. The datasets' features are distributed among multiple parties (typically four), split based on party importance ($\alpha$) or correlation ($\beta$). In the correlation-based split, each party is assigned an equal number of features.
Training. For classification tasks, we use accuracy as the evaluation metric, while regression tasks are evaluated using the root mean square error (RMSE). To ensure the reliability of our results, we conduct five runs for each algorithm, using seeds 0 to 4 to randomly split the datasets for each run, and then compute the mean and standard deviation of the metrics. Detailed hyper-parameter settings for each algorithm are provided in Appendix F.
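In code, this protocol amounts to the following loop, where `train_and_eval` is a hypothetical function that re-splits the data with the given seed, trains one VFL algorithm, and returns its accuracy or RMSE.

```python
import numpy as np

def evaluate_over_seeds(train_and_eval, seeds=(0, 1, 2, 3, 4)):
    """Five runs per algorithm; report mean and standard deviation."""
    metrics = np.array([train_and_eval(seed) for seed in seeds])
    return metrics.mean(), metrics.std()
```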
4.3 VFL Accuracy
In this subsection, we assess the impact of varying $\alpha$ and $\beta$ on the performance of VFL algorithms. Our analysis includes all three VFL categories in Table 1. The performance is summarized in Figure 3 and detailed in Table 9 in Appendix G. Results on the msd dataset provide similar insights to the others and are thus only included in Table 9. From our exploration, we can draw three key observations.
Split parameters $\alpha$ and $\beta$ significantly affect VFL algorithm performance, depending on the algorithm and dataset. SplitNN and FedTree show stable performance across various $\alpha$ and $\beta$ settings. In contrast, C-VFL demonstrates notable performance fluctuations: up to 10% on epsilon and 40% on letter with varying $\alpha$. GAL performs better on imbalanced datasets (affected by $\alpha$ by 8% on letter and radar, 2-5% on others) and is minimally influenced by $\beta$. FedOnce, favoring balanced and highly correlated datasets, is affected by $\alpha$ (5-10% on letter, gisette, epsilon) and by $\beta$ (1-3% on covtype, epsilon). These findings highlight the need for comprehensive evaluations across a range of $\alpha$ and $\beta$ to determine VFL algorithms’ robustness.
SplitNN often leads in accuracy across most datasets; however, the performance of split-GBDT-based and ensemble-based methods can vary significantly depending on the dataset. As anticipated, given its iterative transmission of substantial representations and gradients, SplitNN often outperforms other methods across a majority of datasets. Comparatively, the performance of FedTree and GAL is dataset-dependent. FedTree is well-suited to high-dimensional, smaller datasets like gisette, but struggles with larger datasets like epsilon and covtype. GAL, on the other hand, performs admirably on binary classification and regression tasks, though its performance drops significantly as the number of classes increases, as observed on the covtype and letter datasets.
Compression renders SplitNN-style methods particularly vulnerable to party imbalance. C-VFL, modelled after SplitNN, exhibits the lowest accuracy among the tested baselines due to its compression approach. Moreover, C-VFL is markedly sensitive to the imbalance level $\alpha$. Specifically, at $\alpha = 0.1$, its accuracy on datasets like letter and epsilon scarcely surpasses random guessing, although it thrives on the highly imbalanced split of the radar dataset. This data-dependent behavior underscores an urgent need to refine compression techniques for VFL tailored to varying imbalance levels.
4.4 Performance Correlation: VertiBench Scope vs. Real Scope
In assessing the performance correlation between VertiBench-synthetic and real VFL datasets, we use derived $\alpha$ and $\beta$ values of NUS-WIDE and Vehicle (Section 3.3) to generate comparable synthetic datasets. To evaluate the relative performance of each algorithm, we calculate the accuracy differences between Vehicle-synthetic and NUS-WIDE-synthetic datasets for each algorithm and compare with real dataset accuracy differences, with further details in Appendix G.8.
Our experiment reveals a positive correlation between relative algorithm performance on synthetic datasets with matching $\alpha$ and $\beta$, and their performance on real VFL datasets. This indicates that, under the same $\alpha$ or $\beta$, higher mean accuracy on synthetic datasets typically implies better performance on real VFL datasets, thus affirming the relevance of VertiBench-synthetic datasets in approximating real VFL performance.
5 Conclusion
We introduce VertiBench, a refined benchmarking tool for Vertical Federated Learning (VFL), adept at generating a variety of synthetic VFL datasets from a single global dataset. The scope of VertiBench extends beyond the confines of existing uniform and real scopes, shedding light on VFL scenarios previously unexplored. Our findings underscore performance variations under diverse data partitions, emphasizing the need to evaluate VFL algorithms across varied feature splits for enhanced insights into their real-world applicability.
6 REPRODUCIBILITY STATEMENT
The code for this study is accessible via a GitHub repository (Wu et al., 2023a), accompanied by a README.md file that provides guidelines for environment setup and result reproduction. Comprehensive proofs of all theoretical results are meticulously detailed in Appendix A. Further, Appendix F offers a detailed description of dataset specifications and hyperparameter configurations.
ACKNOWLEDGEMENT
This research is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-018). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of National Research Foundation, Singapore. This work is supported in part by AMD under the Heterogeneous Accelerated Compute Clusters (HACC) program.
REFERENCES
McCallum Andrew. Real vs. simulated, 2015. URL https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/real-sim.bz2
Anonymized, 2023. URL https://drive.google.com/drive/folders/1T173Doy7xW0BRv2D8FHZFqS1zzWid2gj
Kirk Baker. Singular value decomposition tutorial. The Ohio State University, 24, 2005.
T. Bertin-Mahieux. Yearpredictionmsd, 2011. URL https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/YearPredictionMSD.bz2
Jock Blackard. Covertype, 1998. URL https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass/covtype.bz2 DOI: https://doi.org/10.24432/C50K5N.
Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. arXiv preprint arXiv:1812.01097, 2018.
Timothy J Castiglia, Anirban Das, Shiqiang Wang, and Stacy Patterson. Compressed-VFL: Communication-efficient learning with vertically partitioned data. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2738–2766. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/castiglia22a.html
Qi Chang, Hui Qu, Yikai Zhang, Mert Sabuncu, Chao Chen, Tong Zhang, and Dimitris N. Metaxas. Synthetic learning: Learn from distributed asynchronized discriminator gan without sharing medical image data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Jiayi Chen and Aidong Zhang. Fedmsplit: Correlation-adaptive federated multi-task learning across multimodal split networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD ’22, page 87–96, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393850. doi: 10.1145/3534678.3539384. URL https://doi-org.libproxy1.nus.edu.sg/10.1145/3534678.3539384
Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pages 785–794, 2016.
Kewei Cheng, Tao Fan, Yilun Jin, Yang Liu, Tianjian Chen, Dimitrios Papadopoulos, and Qiang Yang. Secureboost: A lossless federated learning framework. IEEE Intelligent Systems, 36(6): 87–98, 2021.
|
L9NM2CEol3
|
What are some practical applications in which the resources of each client changes over time? It would be helpful to describe some examples of such applications to emphasize the importance of addressing the targeted problem.
|
Speed Up Federated Learning in Heterogeneous Environment: A Dynamic Tiering Approach
Anonymous authors
Paper under double-blind review
Abstract
Federated learning (FL) enables collaboratively training a model while keeping the training data decentralized and private. However, one significant impediment to training a model using FL, especially large models, is the resource constraints of devices with heterogeneous computation and communication capacities as well as varying task sizes. Such heterogeneity would render significant variations in the training time of clients, resulting in a longer overall training time as well as a waste of resources in faster clients. To tackle these heterogeneity issues, we propose the Dynamic Tiering-based Federated Learning (DTFL) system where slower clients dynamically offload part of the model to the server to alleviate resource constraints and speed up training. By leveraging the concept of Split Learning, DTFL offloads different portions of the global model to clients in different tiers and enables each client to update the models in parallel via local-loss-based training. This helps reduce the computation and communication demand on resource-constrained devices and thus mitigates the straggler problem. DTFL introduces a dynamic tier scheduler that uses tier profiling to estimate the expected training time of each client, based on their historical training time, communication speed, and dataset size. The dynamic tier scheduler assigns clients to suitable tiers to minimize the overall training time in each round. We first theoretically prove the convergence properties of DTFL. We then train large models (ResNet-56 and ResNet-110) on popular image datasets (CIFAR-10, CIFAR-100, CINIC-10, and HAM10000) under both IID and non-IID systems. Extensive experimental results show that compared with state-of-the-art FL methods, DTFL can significantly reduce the training time while maintaining model accuracy.
1 Introduction
Federated learning (FL), which allows clients to train a global model collaboratively without sharing their sensitive data with others, has become a popular privacy-preserving distributed learning paradigm. In FL, clients update the global model using their locally trained weights to avoid sharing raw data with the server or other clients. This training process, however, becomes a significant hurdle for training large models when clients are resource-constrained devices (e.g., mobile/IoT devices, and edge servers) with heterogeneous computation and communication capacities in addition to different dataset sizes. Such heterogeneity would incur a significant impact on training time and model accuracy in conventional FL systems (i.e., larger training time is required to reach similar accuracy compared to non-heterogeneous systems) [Yang et al., 2021; Abdelmoniem et al., 2023].
To train large models with resource-constrained devices, various methods have been proposed in the literature. One solution is to split the global model into a client-side model (i.e., the first a few layers of the global model) and a server-side model, where the clients only need to train the small client-side model via Split Learning (SL) [Gupta & Raskar, 2018; Vepakomma et al., 2018]. Liao et al., 2023 improves model training speed in split federated learning (SFL) by giving local clients control over both the local updating frequency and batch size. However, in SFL, each client needs to wait for the back-propagated gradients from the server to update its model, and the communication overhead for transmitting the forward/backward signals between the server and clients can be substantial at each training round (i.e., time needed to complete a round of training). To address these issues, He et al., 2020a; Cho et al., 2023 uses a knowledge transfer training algorithm, to train small models at clients and periodically transfer their knowledge via knowledge distillation to a large server-side model. Han et al., 2021 develops a federated SL algorithm that addresses the latency...
and communication issues by integrating local-loss-based training into SL. However, the client-side models in He et al. (2020a) and Han et al. (2021) are fixed throughout the training process, and choosing suitable client-side models in heterogeneous environments is challenging as the resources of clients may change over time. Another solution is to divide clients into tiers based on their training speed and select clients from the same tier in each training round to mitigate the straggler problem Chai et al. (2020, 2021). However, existing tier-based works Chai et al. (2020, 2021) still require clients to train the entire global model, which is not suitable for training large models.
In this paper, we propose the Dynamic Tiering-based Federated Learning (DTFL) system to speed up FL for training large models in heterogeneous environments. DTFL aims to not only incorporate benefits from both SFL Han et al. (2021) and tier-based FL Chai et al. (2020), but also address the latency issues and reduce the training time of these works in heterogeneous environments. In DTFL, we divide clients into different tiers. In different tiers, DTFL offloads different portions of the global model from each client to the server. Then each client and the server update the models in parallel using local-loss-based training Nokland & Eidnes (2019); Belilovsky et al. (2020); Huo et al. (2018); Han et al. (2021). In a heterogeneous environment, the training time of each client can change over time. Static tier assignments can result in severe straggler issues when clients with limited computation and communication resources (e.g., due to other concurrently running applications on mobile devices) are allocated to tiers demanding high levels of resources. To address this challenge, we propose a dynamic tier scheduler that assigns clients to suitable tiers based on their capacities, their task size, and their current training speed. The tier scheduler employs tier profiling to estimate client-side training time, using only the measured training time, communicated network speed, and observed dataset size of clients, making it a low-overhead solution suitable for real system implementation. We theoretically show the convergence of DTFL on convex and non-convex loss functions under standard assumptions in FL Li et al. (2019); Reiszadeh et al. (2020) and local-loss-based training Belilovsky et al. (2020); Huo et al. (2018); Han et al. (2021). Using DTFL, we train large models (ResNet-56 and ResNet-110 He et al. (2016)) on different numbers of clients using the popular datasets CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), CINIC-10 Darlow et al. (2018), and HAM10000 Tschandl et al. (2018) and their non-IID (non-independent and identically distributed) variants. We also evaluate the performance of DTFL when employing privacy measures, such as minimizing the distance correlation between raw data and intermediate representations, and shuffling patches of data. The results indicate that DTFL can effectively incorporate privacy techniques without significantly impacting model accuracy. Extensive experimental results show that DTFL can significantly reduce the training time while maintaining model accuracy comparable to state-of-the-art FL methods.
2 BACKGROUND AND RELATED WORKS
Federated Learning. Existing FL methods (see a comprehensive study of FL Kairouz et al. (2021)) require clients to repeatedly download and update the global model, which is not suitable for training large models with resource-constrained devices in heterogeneous environments and may suffer a severe straggler problem. To address the straggler problem, Li et al. (2019) selects a smaller set of clients for training in each global iteration, but requires more training rounds. Bonawitz et al. (2019) mitigates stragglers by neglecting the slowest 30% of clients, while FedProx Li et al. (2020) uses distinct local epoch numbers for clients. Both Bonawitz et al. (2019) and Li et al. (2020) face the challenge of determining the right parameters (i.e., the percentage of slowest clients and the number of local epochs). Recently, tier-based FL methods Chai et al. (2020, 2021); Reiszadeh et al. (2022) propose to divide clients into tiers based on their training speed and select clients from the same tier in each training round to mitigate the straggler problem. However, clients in existing FL methods are required to train the whole global model, which renders significant hurdles in training large models on resource-constrained devices.
Split Learning. To tackle the computational limitation of resource-constrained devices, Split Learning (SL) Vepakomma et al. (2018); Gupta & Raskar (2018) splits the global model into a client-side model and a server-side model, and clients only need to update the small client-side model, compared to FL. To increase SL training speed, Thapa et al. (2022) incorporated FL into SL, and Wu et al. (2023) proposed a first-parallel-then-sequential approach that clusters clients, sequentially trains a model in SL fashion in each cluster, and then transfers the updated cluster model to the next cluster. In SL, clients must wait for the server's backpropagated gradients to update their models, which can cause significant communication overhead. To address these issues, He et al. (2020a) proposes
FedGKT, to train small models at clients and periodically transfer their knowledge by knowledge distillation to a large server-side model. Han et al. (2021) develops a federated SL algorithm that addresses latency and communication issues by integrating local-loss-based training. Clients train a model using local error signals, which eliminates the need to communicate with the server. However, the client-side models in current SL approaches [He et al., 2020a; Han et al., 2021; Zhang et al., 2023] are fixed throughout the training process, and choosing suitable client-side models in heterogeneous environments is challenging as clients’ resources may change over time. Compared to these works, the proposed DTFL can dynamically adjust the size of the client-side model for each client over time, which can significantly reduce the training time and mitigate the straggler problem.
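To make the local-loss idea concrete, a minimal PyTorch sketch of a tier-m client follows; the split point, the auxiliary head architecture, and the name `TierClient` are our illustrative assumptions, not the paper's exact implementation.

```python
import torch.nn as nn

class TierClient(nn.Module):
    """Client-side model of tier m plus an auxiliary head providing a local loss."""
    def __init__(self, client_model: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.client_model = client_model          # first layers of the global model
        self.aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(feat_dim, n_classes))

    def local_step(self, x, y, optimizer):
        optimizer.zero_grad()
        h = self.client_model(x)                  # smashed activations
        loss = nn.functional.cross_entropy(self.aux_head(h), y)
        loss.backward()                           # local error signal only
        optimizer.step()
        # Detached activations go to the server-side model, trained in parallel.
        return h.detach(), loss.item()
```

Because the client never waits for server gradients, a dynamic tier scheduler can change how many layers sit in `client_model` between rounds without altering this training loop.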
3 Dynamic Tiering-based Federated Learning
3.1 Problem Statement
We aim to collaboratively train a large model (e.g., ResNet or AlexNet) with $K$ clients on a range of heterogeneous resource-constrained devices that lack powerful computation and communication resources, without centralizing the dataset on the server side. Let $\{(x_i, y_i)\}_{i=1}^{N_k}$ denote the dataset of client $k$, where $x_i$ denotes the $i$th training sample, $y_i$ is the associated label of $x_i$, and $N_k$ is the number of samples in client $k$'s dataset. The FL problem can be formulated as a distributed optimization problem:
$$\min_w f(w) \overset{\text{def}}{=} \min_w \sum_{k=1}^{K} \frac{N_k}{N} \cdot f_k(w) \tag{1}$$

$$f_k(w) = \frac{1}{N_k} \sum_{i=1}^{N_k} \ell((x_i, y_i); w) \tag{2}$$
where $w$ denotes the model parameters and $N = \sum_{k=1}^{K} N_k$. $f(w)$ denotes the global objective function, and $f_k(w)$ denotes the $k$th client’s local objective function, which evaluates the local loss over its dataset using loss function $\ell$.
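For concreteness, the following is a minimal sketch of how objective (1) is typically optimized with federated averaging (FedAvg; McMahan et al., 2017); the model and the per-client data loaders are illustrative placeholders, and the aggregation uses the $N_k/N$ weighting from (1):

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, local_epochs=1, lr=0.01):
    """One FedAvg round: each client locally reduces f_k(w) over its own
    dataset, then the server averages the resulting models with N_k / N
    weights, following Eqs. (1)-(2)."""
    states, sizes = [], []
    for loader in client_loaders:           # loader k yields client k's (x_i, y_i)
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()
        states.append(model.state_dict())
        sizes.append(len(loader.dataset))   # N_k
    total = float(sum(sizes))               # N
    avg = {key: sum(s[key].float() * (n / total) for s, n in zip(states, sizes))
           for key in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```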
One main drawback of existing federated optimization techniques (e.g., McMahan et al., 2017; Li et al., 2020; Wang et al., 2020b; Reddi et al., 2020) for solving (1) is that they cannot efficiently train large models on a variety of heterogeneous resource-constrained devices. Such heterogeneity leads to a severe straggler problem: clients may have significantly different response latencies (i.e., the time between a client receiving the training task and returning the results) in the FL process, which severely slows down training (see experimental results in Sec. 4.2).
To address these issues, we propose a Dynamic Tiering-based Federated Learning (DTFL) system (see Figure 1), in which we develop a dynamic tier scheduler that assigns clients to suitable tiers based on their training speed. In different tiers, DTFL offloads different portions of the global model to clients and enables each client to update the models in parallel via local-loss-based training, which can reduce the computation and communication demand on resource-constrained devices, while mitigating the straggler problem. Compared with existing works (e.g., He et al., 2020a; Han et al., 2021; Chai et al., 2020), which can be treated as a single-tier case in DTFL, DTFL provides more flexibility via multiple tiers to cater to a variety of heterogeneous resource-constrained devices in heterogeneous environments. As shown in experimental results in Sec. 4.2, DTFL can significantly reduce the training time while maintaining model accuracy, compared with these methods.
3.2 Tiering Local-loss-based Training
To cater for heterogeneous resource-constrained devices, DTFL divides the clients into $M$ tiers based on their training speed. In different tiers, DTFL offloads different portions of the global model $w$ to the server and enables each client to update the models in parallel via local-loss-based training. Specifically, in tier $m$, the model $w$ is split into a client-side model $w^{c_m}$ and a server-side model $w^{s_m}$. Clients in tier $m$ train the client-side model $w^{c_m}$ and an auxiliary network $w^{a_m}$. The auxiliary network consists of a few extra layers attached to the client-side model and is
Table 1: Comparison of training time (in seconds) for 10 clients under different tiers when \( M = 7 \) to achieve 80% accuracy on the I.I.D. CIFAR-10 dataset using ResNet-110. In each experiment, all the clients are assigned to the same tier, and clients are randomly assigned to different CPU and network speed profiles. Profiles in Case 1: 2 CPUs with 30 Mbps, 1 CPU with 30 Mbps, 0.2 CPU with 30 Mbps. Profiles in Case 2: 4 CPUs with 100 Mbps, 1 CPU with 30 Mbps, 0.1 CPU with 10 Mbps. The experimental setup can be found in Sec. 4.
| | Tier | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|------------------------|-----------|-------|-------|-------|-----------|-------|-------|
| Case 1 | Computation Time | 4622 | 8106 | 9982 | 10681 | 11722 | 12250 | 13396 |
| | Communication Time | 5911 | 5995 | 2187 | 2189 | 1018 | 908 | 16 |
| | Overall Training Time | **10533** | 14101 | 12170 | 12871 | 12741 | 13158 | 13408 |
| Case 2 | Computation Time | 8384 | 14634 | 17993 | 19027 | 21428 | 22344 | 24428 |
| | Communication Time | 17754 | 18090 | 6720 | 6762 | 2941 | 2653 | 43 |
| | Overall Training Time | 26138 | 32724 | 24713 | 25989 | **24369** | 24997 | 24471 |
used to compute the local loss on the client side. By introducing the auxiliary network, we enable each client to update the models in parallel with the server [Han et al., 2021], which avoids the severe synchronization and substantial communication in SL that significantly slow down the training process [Vepakomma et al., 2018; Gupta & Raskar, 2018]. In this paper, we use a few fully connected layers for the auxiliary network, as in [Han et al., 2021; Belilovsky et al., 2020; Laskin et al., 2020].
Under this setting, we define \( f_k^c(w^{c_m}, w^{a_m}) \) as the client-side loss function and \( f_k^s(w^{s_m}, w^{c_m}) \) as the corresponding server-side loss function in tier \( m \). Our goal is to find the \( w^{c_m*} \) and \( w^{a_m*} \) that minimize the client-side loss function in each tier \( m \):
\[
\min_{w^{c_m}, w^{a_m}} \sum_{k \in A_m} \frac{N_k}{N_m} \cdot f_k^c(w^{c_m}, w^{a_m}) \tag{3}
\]
where \( f_k^c(w^{c_m}, w^{a_m}) = \frac{1}{N_k} \sum_{i=1}^{N_k} \ell((x_i, y_i); w^{c_m}, w^{a_m}) \), \( N_m = \sum_{k \in A_m} N_k \), and \( A_m \) denotes the set of clients in tier \( m \).
Given the optimal client-side model \( w^{c_m*} \), the server finds the \( w^{s_m*} \) that minimizes the server-side loss function:
\[
\min_{w^{s_m}} \sum_{k \in A_m} \frac{N_k}{N_m} \cdot f_k^s(w^{s_m}, w^{c_m*}) \tag{4}
\]
where \( f_k^s(w^{s_m}, w^{c_m*}) = \frac{1}{N_k} \sum_{i=1}^{N_k} \ell((z_i, y_i); w^{s_m}) \) and \( z_i = h_{w^{c_m*}}(x_i) \) is the intermediate output of the client-side model \( w^{c_m*} \) given the input \( x_i \).
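To make the tier-$m$ update concrete, here is a minimal PyTorch sketch of one local-loss-based split-training step. The module and variable names are illustrative (not taken from the paper's implementation); the key points are that the client updates $w^{c_m}$ and $w^{a_m}$ using only its local loss, and the server trains $w^{s_m}$ on the detached intermediate activations $z_i = h_{w^{c_m}}(x_i)$:

```python
import torch.nn.functional as F

def tier_m_step(client_net, aux_net, server_net, opt_client, opt_server, x, y):
    """One step of local-loss-based split training for a client in tier m.
    client_net ~ w^{c_m}, aux_net ~ w^{a_m}, server_net ~ w^{s_m}."""
    # Client side: compute the local loss f_k^c via the auxiliary head and
    # update immediately -- no waiting for gradients from the server.
    z = client_net(x)
    client_loss = F.cross_entropy(aux_net(z), y)
    opt_client.zero_grad()
    client_loss.backward()
    opt_client.step()

    # Server side: train w^{s_m} on the detached activations z_i (Eq. (4));
    # in DTFL this update runs in parallel with the client-side update.
    server_loss = F.cross_entropy(server_net(z.detach()), y)
    opt_server.zero_grad()
    server_loss.backward()
    opt_server.step()
    return client_loss.item(), server_loss.item()
```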
Offloading the model to the server can effectively reduce the total training time, as illustrated in Table 1. As a client offloads more layers to the server (moving towards tier \( m = 1 \)), the model size on the client’s side decreases, thereby reducing the computational workload. Meanwhile, this may increase the amount of data transmitted (i.e., the size of the intermediate data and partial model).
As indicated in Table 1, there exists a non-trivial tier assignment that minimizes the overall training time. To find the optimal tier assignment, DTFL needs to consider multiple factors, including the communication link speed between the server and the clients, the computation power of each client, and the local dataset size.
### 3.3 Dynamic Tier Scheduling
In a heterogeneous environment with multiple clients, the proposed dynamic tier scheduling aims to minimize the overall training time by determining the optimal tier assignments for each client.
Specifically, let \( m_k^{(r)} \) denote the tier of client \( k \) in the training round \( r \). \( T_k^c(m_k^{(r)}) \), \( T_k^{com}(m_k^{(r)}) \), and \( T_k^s(m_k^{(r)}) \) represent the training time of the client-side model, the communication time, and the training time of the server-side model of client \( k \) at round \( r \), respectively. Using the proposed local-loss-based split training algorithm, each client and the server train the model in parallel. The overall training time \( T_k \) for client \( k \) in each round can be presented as:
\[
T_k(m_k^{(r)}) = \max\left\{T_k^c(m_k^{(r)}) + T_k^{com}(m_k^{(r)}),\; T_k^s(m_k^{(r)}) + T_k^{com}(m_k^{(r)})\right\}. \tag{5}
\]
As clients train their models in parallel, the overall training time in each round \( r \) is determined by the slowest client (i.e., straggler). To minimize the overall training time, we minimize the maximum training time of clients in each round:
\[
\min_{\{m_k^{(r)}\}} \max_k T_k(m_k^{(r)}), \quad \text{subject to } m_k^{(r)} \in \mathbb{M} \;\; \forall k, \tag{6}
\]
where \( \mathbb{M} \) denotes the set of tiers. Note that problem (6) is an integer programming problem. Solving (6) requires knowledge of each client’s training time \( \{T_k(m_k^{(r)})\} \) under each tier. As
Table 2: Normalized training times of the client-side and server-side models in different tiers, relative to tier 1, using ResNet-56 with 10 clients. In each experiment, all the clients are assigned to the same tier, and we vary the CPU capacities of clients across experiments to evaluate their impact.
| Tier | 1 | 2 | 3 | 4 | 5 | 6 |
|------|-------|-------|-------|-------|-------|-------|
| Client-side Training Time | 1.00 ± 0.04 | 1.63 ± 0.10 | 2.16 ± 0.15 | 2.68 ± 0.22 | 3.30 ± 0.24 | 3.81 ± 0.28 |
| Server-side Training Time | 1.00 ± 0.07 | 0.82 ± 0.06 | 0.65 ± 0.06 | 0.51 ± 0.04 | 0.33 ± 0.03 | 0.20 ± 0.01 |
the capacities of each client in a heterogeneous environment may change over time, a static tier assignment may still lead to a severe straggler problem. The key question is how to efficiently solve (6) in a heterogeneous environment.
To address this challenge, we develop a **dynamic tier scheduler** to efficiently determine the optimal tier assignments for each client in each round. The idea is to use tier profiling to estimate the training time of each client under each tier, based on which each client will be assigned to the optimal tier.
• **Tier Profiling.** Before training starts, the server conducts tier profiling to estimate $T_k^{c}(m_k^{(r)})$, $T_k^{com}(m_k^{(r)})$, and $T_k^{s}(m_k^{(r)})$ for each client. Specifically, using a standard data batch, the server profiles the transferred data size (i.e., model parameters and intermediate data) for each tier $m$, denoted $D_{size}(m_k^{(r)})$. The communication time of client $k$ in tier $m$ can then be estimated as $D_{size}(m_k^{(r)}) \tilde{N}_k / \nu_k^{(r)}$, where $\nu_k^{(r)}$ represents the client’s communication speed and $\tilde{N}_k$ the number of data batches. To track each client's training time for its client-side model, the server maintains and updates the set of historical client-side training times of client $k$ in tier $m$, denoted $\mathcal{T}_k^{c_m}$. To mitigate measurement noise, the server uses an Exponential Moving Average (EMA) over these historical times, i.e., $T_k^{c}(m_k^{(r)}) \leftarrow \text{EMA}(\mathcal{T}_k^{c_m})$, as the current estimate of the client's training time in tier $m$. One main challenge of tier profiling is that capturing the dynamics of a heterogeneous environment requires the training time of each client in every tier, yet only the training time in the currently assigned tier is observed in each round. To estimate the training times in the other tiers, we study the relationship of the normalized training times among different tiers, where the normalized training time refers to the model training time on a standard data batch. Table 2 shows the normalized training times of different tiers relative to tier 1. As indicated in Table 2, the normalized training times of both client-side and server-side models in each tier are the same across clients. This is because the ratio between the normalized model training times under two different tiers depends only on the model sizes of those two tiers, which are fixed once the models under each tier are designed. Based on this tier profiling, we can estimate the training times in the other tiers from the observed training time of each client in its assigned tier (see lines 24 to 29 in Algorithm 1).
• **Tier Scheduling.** In each round, the tier scheduler minimizes the maximum training time over clients. First, it identifies the straggler time $T_{\max}$: the maximum, over all clients, of each client's estimated training time when assigned to its individually fastest tier (see line 31 in Algorithm 1). Then, it assigns every other client to a tier with an estimated training time of at most $T_{\max}$ (see line 33 in Algorithm 1). To better utilize each client's resources, the scheduler selects the tier $m$ that minimizes offloading to the server while keeping the estimated training time below $T_{\max}$, i.e., $m_k^{(r+1)} \leftarrow \arg \max_m \{ m \mid T_k(m) \leq T_{\max} \}$, as sketched in the code below.
The dynamic tier scheduler is detailed in the `TierScheduler()` function of Algorithm 1, and the overall DTFL training process (illustrated in Figure 1) is also described in Algorithm 1.
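The following is a minimal Python sketch of this scheduling rule under the paper's profiling assumptions. The per-tier time ratios play the role of Table 2, the client records hold EMA-smoothed observed times, and all names are illustrative stand-ins for the bookkeeping in Algorithm 1, not the actual implementation:

```python
def schedule_tiers(clients, ratio_c, ratio_s, comm_size, tiers):
    """Sketch of TierScheduler(): assign each client a tier for the next round.

    `clients` maps k -> {"m": current tier, "t_c"/"t_s": EMA-smoothed client-
    and server-side times observed in tier m, "speed": link speed,
    "n_batches": number of data batches}. `ratio_c[m]` / `ratio_s[m]` are the
    normalized per-tier training times relative to tier 1 (Table 2), and
    `comm_size[m]` is the profiled transferred data size per batch."""
    def est_time(c, m):
        # Rescale the observed times from the assigned tier to tier m using
        # the client-independent ratios, then add estimated communication.
        t_c = c["t_c"] * ratio_c[m] / ratio_c[c["m"]]
        t_s = c["t_s"] * ratio_s[m] / ratio_s[c["m"]]
        t_com = comm_size[m] * c["n_batches"] / c["speed"]
        return max(t_c + t_com, t_s + t_com)  # Eq. (5)

    # Straggler time: max over clients of each client's best achievable time.
    t_max = max(min(est_time(c, m) for m in tiers) for c in clients.values())
    # Each client gets the largest tier (least offloading) still within t_max.
    return {k: max(m for m in tiers if est_time(c, m) <= t_max)
            for k, c in clients.items()}
```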
### 3.4 Convergence Analysis
We show the convergence of both client-side and server-side models in DTFL for convex and non-convex loss functions, under standard assumptions in FL and local-loss-based training. We assume that (A1) the client-side $f_k^{c_m}$ and server-side $f_k^{s_m}$ objective functions of each client in each tier are differentiable and $L$-smooth; (A2) their stochastic gradients have expected squared norm bounded by $G_2^2$; (A3) the variance of the stochastic gradients is bounded by $\sigma^2$; (A4) $f_k^{c_m}$ and $f_k^{s_m}$ are $\mu$-convex with $\mu \geq 0$ (used only for some results); (A5) the client-side objective functions satisfy $(G_2, B)$-BGD (Bounded Gradient Dissimilarity); (A6) the time-varying parameter satisfies $\sum_r d_m^{(r)} < \infty$. These assumptions are well established and frequently used for convergence analyses in the machine learning literature, as in previous works such as Stich (2018); Li et al. (2019); Belilovsky et al. (2020); Yu et al. (2019); Karimireddy et al. (2020). We adopt the approach of Belilovsky et al. (2020) for local-loss-based training, where the server input distribution varies over time and depends on the convergence of the client-side model.
**Theorem 1 (Convergence of DTFL)** Under assumptions (A1), (A2), (A3), and (A5), the convergence properties of DTFL for both convex and non-convex functions are summarized as follows:
**Convex:** Under (A4), $\eta \leq \frac{1}{8L(1+B^2)}$ and $R \geq \frac{4L(1+B^2)}{\mu}$, the client-side model converges at the rate of $\mathcal{O}\left(\mu D^2 \exp\left(-\frac{\eta \mu R}{2}\right) + \frac{\eta H_1^2}{\mu R A^m}\right)$ and the server-side model converges at the rate of $\mathcal{O}\left(\frac{C_1}{R} + \frac{H_2 \sqrt{F_{s_m}^0}}{\sqrt{R A^m}} + \frac{F_{s_m}^0}{\eta_{\max} R}\right)$.

**Non-convex:** If both $f^{c_m}$ and $f^{s_m}$ are non-convex and $\eta \leq \frac{1}{8L(1+B^2)}$, then the client-side model converges at the rate of $\mathcal{O}\left(\frac{H_1 \sqrt{F_{c_m}^0}}{\sqrt{R A^m}} + \frac{F_{c_m}^0}{\eta_{\max} R}\right)$ and the server-side model converges at the rate of $\mathcal{O}\left(\frac{C_2}{R} + \frac{H_2 \sqrt{F_{s_m}^0}}{\sqrt{R A^m}} + \frac{F_{s_m}^0}{\eta_{\max} R}\right)$, where $\eta_{\max}$ is the maximum learning rate, $H_1^2 := \sigma^2 + \left(1 - \frac{A^m}{K}\right) G_2^2$, $H_2^2 := L^2 (B^2 + 1) F_{c_m}^0 + \left(1 - \frac{A^m}{K}\right) L^2 G_2^2$, $D := \|w_{c_m}^0 - w_{c_m}^*\|$, $F_{c_m}^0 := f^{c_m}(w_{c_m}^0)$, and $F_{s_m}^0 := f^{s_m}(w_{s_m}^0)$. The constants $C_1 = G_1 \sqrt{G_2^2 + 2LB^2 F_{c_m}^0 \sum_r d_m^{(r)}}$ and $C_2 = G_1 \sqrt{G_2^2 + B^2 G_1^2 \sum_r d_m^{(r)}}$ are finite by (A6). $A^m = \min_r A_m^{(r)}$, where $A_m^{(r)} > 0$ denotes the number of clients in tier $m$ at round $r$, and $d_m^{(r)}$ denotes the distance between the density function of the client-side model's output and its converged state.
According to Theorem 1, both client-side and server-side models converge as the number of rounds $R$ increases, with varying convergence rates across different tiers. Note that as DTFL leverages...
the local-loss-based split training, the convergence of the server-side model depends on the convergence of the client-side model, which is explicitly characterized by $C_1$ and $C_2$ in the analysis. The complete proof of the theorem is given in Appendix B.
4 EXPERIMENTAL EVALUATION
4.1 EXPERIMENTAL SETUP
Dataset. We consider image classification on four public image datasets: CIFAR-10 [Krizhevsky et al., 2009], CIFAR-100 [Krizhevsky et al., 2009], CINIC-10 [Darlow et al., 2018], and HAM10000 [Tschandl et al., 2018]. We also consider label distribution skew [Li et al., 2022] (i.e., the distribution of labels varies across clients) and generate non-I.I.D. variants of these datasets following [He et al., 2020b]. Appendix A describes the dataset distributions used in these experiments.
Baselines. We compare DTFL with state-of-the-art FL/SL methods, including FedAvg [McMahan et al., 2017], SplitFed [Thapa et al., 2022], FedYogi [Reddi et al., 2020], and FedGKT [He et al., 2020a]. For the same reasons as in [He et al., 2020a], we do not compare with FedProx [Li et al., 2020] or FedMA [Wang et al., 2020a]: FedProx performs worse than FedAvg in the large convolutional neural network (CNN) setting, and FedMA cannot handle modern DNNs that contain batch normalization layers (e.g., ResNet).
Implementation. We conducted the experiments using Python 3.11.3 and PyTorch 1.13.1; the code is available online [Anonymous, 2023]. DTFL and the baselines are deployed on a server equipped with dual-socket Intel(R) Xeon(R) E5-2630 v4 CPUs @ 2.20GHz (hyper-threading disabled), four NVIDIA GeForce GTX 1080 Ti GPUs, and 64 GB of memory. Each client is assigned a different simulated CPU and communication resource to emulate heterogeneous resources (i.e., to simulate the training time under different CPU/network profiles). Using these resource profiles, we simulate a heterogeneous environment where clients' capacities vary, in both cross-silo and cross-device FL settings. We consider 5 resource profiles: 4 CPUs with 100 Mbps, 2 CPUs with 30 Mbps, 1 CPU with 30 Mbps, 0.2 CPU with 30 Mbps, and 0.1 CPU with 10 Mbps communication speed to the server. Each client is assigned one resource profile at the beginning of training, and the profile can change during the training process to simulate a dynamic environment.
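As an illustration only, the simulated heterogeneity can be encoded as a small profile table; the five (CPU share, link speed) pairs are those listed above, while the assignment and perturbation helpers are hypothetical:

```python
import random

# The five simulated resource profiles used in the experiments:
# (CPU share, communication speed to the server in Mbps).
PROFILES = [(4, 100), (2, 30), (1, 30), (0.2, 30), (0.1, 10)]

def init_profiles(num_clients, seed=0):
    """Assign each client one profile at the start of training."""
    rng = random.Random(seed)
    return {k: rng.choice(PROFILES) for k in range(num_clients)}

def perturb_profiles(profiles, frac=0.3, seed=None):
    """Switch the profiles of a fraction of clients to simulate a dynamic
    environment (Sec. 4.2 changes 30% of the clients every 50 rounds)."""
    rng = random.Random(seed)
    for k in rng.sample(sorted(profiles), k=int(frac * len(profiles))):
        profiles[k] = rng.choice(PROFILES)
    return profiles
```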
Model Architecture. DTFL is a versatile approach suitable for training a wide range of neural network models (e.g., multilayer perceptrons (MLPs), recurrent neural networks (RNNs), and CNNs), particularly benefiting large-scale models. In the experiments, we evaluate the large CNN models ResNet-56 and ResNet-110 [He et al., 2016], which work well on the selected datasets. Furthermore, DTFL can also be applied to large language models (LLMs) like BERT [Devlin et al., 2018] via the splitting techniques proposed in [Tran et al., 2022; Liu et al., 2022]. For each tier, we split the global model to create client- and server-side models. The split layer differs across tiers and moves toward the last layer as the tier index increases. For each client-side model, we add a fully connected (f.c.) layer and an average pooling (avgpool) layer as the auxiliary network. More details can be found in Appendix A.5. We follow the same setting as [He et al., 2020a] for FedGKT. We split the global model after module 2 (as defined in Appendix A.5) for the SplitFed model.
4.2 TRAINING TIME IMPROVEMENT OF DTFL
Training time comparison of DTFL to baselines. In Table 3, we summarize all experimental results of training a global model (i.e., ResNet-56 or ResNet-110) with 7 tiers (i.e., $M = 7$) when using different federated learning methods. The experiments were conducted on a heterogeneous client population, with 20% assigned to each profile at the experiment’s outset. Every 50 rounds, the client profiles (i.e., number of simulated CPUs and communication speed) of 30% of the clients were randomly changed to simulate a dynamic environment, while all clients participated in every training round. The training time of each method to achieve a target accuracy is provided in Table 3. In all cases for both I.I.D. and non-I.I.D. settings, DTFL significantly reduces the training time, compared to baselines (FedAvg, SplitFed, FedYogi, FedGKT). For example, DTFL reduces the training time of FedAvg by 80% to reach the target accuracy on I.I.D. CIFAR-10 with ResNet-110. This experiment illustrates the capabilities of DTFL which can significantly reduce training time when training on distributed heterogeneous clients. Figure 2 illustrates the curve of the test accuracy during the training process of all the methods for the I.I.D. CIFAR-10 case with ResNet-110, where we observe a faster convergence using DTFL, compared with baselines.
Table 3: Comparison of training time (in seconds) to baseline approaches with 10 clients on different datasets. The numbers represent the training time used to achieve the target accuracy (i.e., CIFAR-10 I.I.D. 80%, CIFAR-10 non-I.I.D. 70%, CIFAR-100 I.I.D. 55%, CIFAR-100 non-I.I.D. 50%, CINIC-10 I.I.D. 75%, CINIC-10 non-I.I.D. 65%, and HAM10000 75%).
| Method | Global Model | CIFAR-10 I.I.D. | CIFAR-10 non-I.I.D. | CIFAR-100 I.I.D. | CIFAR-100 non-I.I.D. | CINIC-10 I.I.D. | CINIC-10 non-I.I.D. | HAM10000 |
|------------|--------------|----------------|---------------------|------------------|----------------------|----------------|---------------------|-----------|
| DTFL | ResNet-56 | 2750 | 3986 | 3585 | 6093 | 23968 | 40138 | 2353 |
| | ResNet-110 | 4816 | 7054 | 5678 | 9874 | 42099 | 70469 | 3615 |
| FedAvg | ResNet-56 | 13157 | 20773 | 19170 | 35350 | 114509 | 197926 | 11566 |
| | ResNet-110 | 24471 | 39094 | 36360 | 66317 | 210468 | 395423 | 22328 |
| SplitFed | ResNet-56 | 35877 | 46514 | 54174 | 97859 | 271873 | 510156 | 19549 |
| | ResNet-110 | 67265 | 84342 | 101783 | 183122 | 521334 | 896627 | 43581 |
| FedYogi | ResNet-56 | 9122 | 13130 | 12727 | 19216 | 82083 | 113464 | 8071 |
| | ResNet-110 | 19299 | 25668 | 23978 | 35356 | 155212 | 219134 | 14932 |
| FedGKT | ResNet-56 | 25458 | 30808 | 36838 | 59461 | 184589 | 218065 | 37181 |
| | ResNet-110 | 39676 | 47458 | 64457 | 98754 | 321534 | 411259 | 61755 |
Figure 2: Comparing the training process of DTFL with baselines for the I.I.D. CIFAR-10 dataset.
4.3 Understanding DTFL under Different Settings
Performance of DTFL with different numbers of clients. We evaluate the performance of DTFL with different numbers of clients to better understand its scalability. Table 4 shows the training time of various training methods with different numbers of clients on the I.I.D. CIFAR-10 dataset, to reach a target accuracy of 80% with the ResNet-110 model. In these experiments, we randomly sampled 10% of all clients to participate in each round of training. Note that DTFL can also be combined with other FL client selection methods (e.g., [Chai et al., 2020; 2021]). In general, increasing the number of clients has no adverse effect on DTFL, which consistently achieves a significantly lower training time than the other methods.
Impact of the number of tiers on DTFL performance. We evaluate the DTFL performance under different numbers of tiers while employing the global ResNet-110 model (model details under different tiers are provided in Table 11 in the appendix). In Figure 3, we present the total training time for the I.I.D. CIFAR-10 dataset and 10 clients with different numbers of tiers. We conducted experiments with two different cases, similar to those in Table 1, where clients’ CPU profiles randomly switch to another profile every 20 rounds of training within the profiles of the same case. Experiments show that to reach the target accuracy of 80%, the training
Table 4: Training time (in seconds) of DTFL and baselines with different numbers of clients to reach 80% accuracy on I.I.D. CIFAR-10 with ResNet-110.
| # Clients | DTFL | FedAvg | SplitFed | FedYogi | FedGKT |
|-----------|------|--------|----------|---------|--------|
| 20 | 1877 | 7950 | 21350 | 6341 | 14595 |
| 50 | 2547 | 10435 | 29026 | 8073 | 17872 |
| 100 | 3102 | 14032 | 36449 | 10760 | 24438 |
| 200 | 3594 | 16060 | 43942 | 12786 | 27632 |
Figure 3: Impact of the number of tiers on the total training time.
time generally decreases with the number of tiers, as DTFL would have more flexibility to fine-tune the tier of each client based on the heterogeneous resources of each client. It should be noted that the model under each tier needs to be carefully designed based on the structure of the global model. A client-side model obtained by arbitrarily splitting the global model may negatively impact the model accuracy. Thus, the maximum suitable number of tiers is much less than the number of layers of a global model. For ResNet-110, we find 7 tiers provided in Table 11 in the appendix can significantly reduce the training time while maintaining the model accuracy.
4.4 Privacy Discussion
Using DTFL, we can significantly reduce the training time. However, exchanging hidden feature maps (i.e., the intermediate outputs $z_i$) may potentially leak privacy. One potential threat to DTFL is model inversion attacks, which extract client data by analyzing the feature maps or model parameters transferred from clients to servers. Prior research [Yin et al., 2021; Zhu et al., 2019] has shown that attackers need access to all model parameters or gradients to recover client data; this is not feasible with partial or fragmented models. Thus, similar to [Thapa et al., 2022], DTFL can use separate servers for model aggregation and training to prevent a single server from having access to all model parameters and intermediate data. Another potential threat is that an attacker can infer client model parameters by inputting dummy data into the client's local model and training a replica model on the resulting feature maps [Shen et al., 2023]. DTFL can prevent this attack by denying clients access to external datasets, query services, and dummy data, thereby preventing the attacker from obtaining the necessary data.
However, for attackers with strong eavesdropping capabilities, there may be potential privacy leakage. As DTFL is compatible with privacy-preserving federated learning approaches, existing data privacy protection methods can be easily integrated into DTFL to mitigate potential privacy leakage, e.g., distance correlation [Vepakomma et al., 2020], differential privacy [Abadi et al., 2016], patch shuffling [Yao et al., 2022], PixelDP [Lecuyer et al., 2019], SplitGuard [Erdogan et al., 2022], and cryptography techniques [Sami & Güler, 2023; Qiu et al., 2023]. For example, we can add a regularization term into the client’s local training objective to reduce the mutual information between hidden feature maps and raw data [Wang et al., 2021], making it more difficult for attackers to reconstruct raw data. Each client decorrelates its input $x_i$ and related feature map $z_i$, i.e., $f_k^{\text{private}}(w^{c_m}, w^{a_m}) = (1 - \alpha)f_k^c(w^{c_m}, w^{a_m}) + \alpha DCor(x_i, z_i)$, where $\alpha$ balances the model performance and the data privacy, and $DCor$ denotes the distance correlation defined in [Vepakomma et al., 2020]. Distance correlation enhances the privacy of DTFL against reconstruction attacks [Vepakomma et al., 2020].
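A sketch of this privacy-regularized client objective is given below. The distance-correlation term uses a standard (biased, batch-level) empirical estimator; the paper's exact formulation follows Vepakomma et al. (2020), so treat this as an approximation:

```python
import torch

def dcor2(x, z):
    """Empirical (squared) distance correlation between batches x and z.
    A standard biased batch estimator of the DCor term."""
    x, z = x.flatten(1), z.flatten(1)
    a, b = torch.cdist(x, x), torch.cdist(z, z)
    # Double-center the pairwise distance matrices.
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    B = b - b.mean(0, keepdim=True) - b.mean(1, keepdim=True) + b.mean()
    dcov_xz, dcov_xx, dcov_zz = (A * B).mean(), (A * A).mean(), (B * B).mean()
    return dcov_xz / (dcov_xx * dcov_zz).sqrt().clamp_min(1e-12)

def private_client_loss(task_loss, x, z, alpha=0.25):
    """f_k^private = (1 - alpha) * f_k^c + alpha * DCor(x_i, z_i)."""
    return (1 - alpha) * task_loss + alpha * dcor2(x, z)
```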
Integration of privacy protection methods. We evaluate the model accuracy and privacy trade-offs of DTFL when integrating distance correlation and patch shuffling techniques. Table 5 illustrates the model accuracy of DTFL with distance correlation, showing a decreasing trend as $\alpha$ increases. This suggests that integrating distance correlation can enhance data privacy without significant accuracy loss, especially for relatively smaller values of $\alpha$. Notably, applying patch shuffling with the same settings as in [Yao et al., 2022] to intermediate data has minimal impact on accuracy. The server lacks information about the clients’ $\alpha$ values, which can vary between clients. This prevents the server from inferring the data of the clients.
Table 5: Model accuracy (%) of DTFL when integrating distance correlation (varying $\alpha$) and patch shuffling.

| Method | Distance Correlation ($\alpha=0.00$) | $\alpha=0.25$ | $\alpha=0.50$ | $\alpha=0.75$ | Patch Shuffling |
|--------------|------|------|------|------|------|
| Accuracy (%) | 87.1 | 86.8 | 83.5 | 75.6 | 85.4 |
5 Conclusion
In this paper, we developed DTFL as an effective solution to address the challenges of training large models collaboratively in a heterogeneous environment. DTFL offloads different portions of the global model to clients in different tiers and allows each client to update the models in parallel using local-loss-based training, which can meet computation and communication requirements on resource-constrained devices and mitigate the straggler problem. We developed a dynamic tier scheduling algorithm, which dynamically assigns clients to appropriate tiers based on their training time. The convergence of DTFL is analyzed theoretically. Extensive experiments on large datasets with different numbers of highly heterogeneous clients show that DTFL can significantly reduce the training time while maintaining model accuracy, compared with state-of-the-art FL methods.
REFERENCES
Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318, 2016.
Ahmed M Abdelmoniem, Chen-Yu Ho, Pantelis Papageorgiou, and Marco Canini. A comprehensive empirical study of heterogeneity in federated learning. IEEE Internet of Things Journal, 2023.
Anonymous. Dtfl implementation. https://anonymous.4open.science/r/DTFL-9DEF/, 2023.
Eugene Belilovsky, Michael Eickenberg, and Edouard Oyallon. Decoupled greedy learning of cnns. In International Conference on Machine Learning, pp. 736–745. PMLR, 2020.
Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konečný, Stefano Mazzocchi, Brendan McMahan, et al. Towards federated learning at scale: System design. Proceedings of Machine Learning and Systems, 1: 374–388, 2019.
Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. arXiv preprint arXiv:1812.01097, 2018.
Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, and Yue Cheng. TiFL: A tier-based federated learning system. In Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing, pp. 125–136, 2020.
Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, and Huzefa Rangwala. FedAT: A high-performance and communication-efficient federated learning system with asynchronous tiers. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–16, 2021.
Yae Jee Cho, Jianyu Wang, Tarun Chirvolu, and Gauri Joshi. Communication-efficient and model-heterogeneous personalized federated learning via clustered knowledge transfer. IEEE Journal of Selected Topics in Signal Processing, 17(1):234–247, 2023.
Luke N Darlow, Elliot J Crowley, Antreas Antoniou, and Amos J Storkey. Cinic-10 is not imagenet or cifar-10. arXiv preprint arXiv:1810.03505, 2018.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Ege Erdogan, Alptekin Küpçü, and A Ercument Cicek. Splitguard: Detecting and mitigating training-hijacking attacks in split learning. In Proceedings of the 21st Workshop on Privacy in the Electronic Society, pp. 125–137, 2022.
Otkrist Gupta and Ramesh Raskar. Distributed learning of deep neural network over multiple agents. Journal of Network and Computer Applications, 116:1–8, 2018.
Dong-Jun Han, Hasnain Irshad Bhatti, Jungmoon Lee, and Jaekyun Moon. Accelerating federated learning with split learning on locally generated losses. In ICML 2021 Workshop on Federated Learning for User Privacy and Data Confidentiality. ICML Board, 2021.
Chaoyang He, Murali Annavaram, and Salman Avestimehr. Group knowledge transfer: Federated learning of large cnns at the edge. Advances in Neural Information Processing Systems, 33: 14068–14080, 2020a.
Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, et al. Fedml: A research library and benchmark for federated machine learning. arXiv preprint arXiv:2007.13518, 2020b.
|
mL8Q9OOamV
|
Emu applies the regression loss to latent embeddings computed by the Causal Transformer, whose parameters are randomly initialized and also learned during pretraining. I was surprised that the training went well with the proposed objective, because I think that without additional constraints, the model may easily fall into a degenerate case, like the Causal Transformer always outputting constant vectors. Please elaborate on the mechanism of the proposed l2 regression loss.
|
EMU: GENERATIVE PRETRAINING IN MULTIMODALITY
Quan Sun\textsuperscript{1*} Qiying Yu\textsuperscript{2,1*} Yufeng Cui\textsuperscript{1*} Fan Zhang\textsuperscript{1*} Xiaosong Zhang\textsuperscript{1*}
Yuezhe Wang\textsuperscript{1} Hongcheng Gao\textsuperscript{1} Jingjing Liu\textsuperscript{2} Tiejun Huang\textsuperscript{1,3} Xinlong Wang\textsuperscript{1†}
\textsuperscript{1} Beijing Academy of Artificial Intelligence \textsuperscript{2} Tsinghua University \textsuperscript{3} Peking University
Code & Demo: \url{https://github.com/baaivision/Emu}
ABSTRACT
We present Emu, a multimodal foundation model that seamlessly generates images and text in multimodal context. This omnivore model can take in any single-modality or multimodal data input indiscriminately (\textit{e.g.,} interleaved image, text and video) through a one-model-for-all autoregressive training process. First, visual signals are encoded into embeddings, and together with text tokens form an interleaved input sequence. Emu is end-to-end trained with a unified objective of classifying the next text token or regressing the next visual embedding in the multimodal sequence. This versatile multimodality empowers the leverage of diverse pretraining data sources at scale, such as videos with interleaved frames and text, webpages with interleaved images and text, as well as web-scale image-text pairs and video-text pairs. Emu can serve as a generalist multimodal interface for both image-to-text and text-to-image tasks, supporting in-context image and text generation. Across a broad range of zero-shot/few-shot tasks including image captioning, visual question answering, video question answering and text-to-image generation, Emu demonstrates superb performance compared to state-of-the-art large multimodal models. Extended capabilities such as multimodal assistants via instruction tuning are also demonstrated with impressive performance.
1 INTRODUCTION
With text corpus at massive scale, Large Language Models (LLMs) \cite{brown2020language,chowdhery2022palm,touvron2023scaling} with straightforward training objectives such as next-word-prediction learn to understand, reason, and generate text with unprecedented accuracy and fluency, paving the way for diverse real-life applications \cite{schulman2022large} unthinkable a decade ago. Recent studies \cite{alayrac2022flamingo,driesse2022multimodal,hao2022multimodal} further investigate Large Multimodal Models (LMMs) beyond LLMs. Flamingo \cite{alayrac2022flamingo} connects a powerful language model with a pretrained vision encoder and inserts learnable layers to capture cross-modality dependencies, demonstrating strong abilities in multimodal zero-shot and in-context learning. Recent works \cite{li2023multimodal,dai2023multimodal,huang2023multimodal,liu2023multimodal,zhu2023multimodal,ye2023multimodal,li2023multimodal,gong2023multimodal} adopt this framework and build LMM by docking a vision encoder with an LLM.
The prevailing training objective in such LMMs is predicting the next text token \cite{alayrac2022flamingo,hao2022multimodal,huang2023multimodal,zhu2023multimodal,liu2023multimodal,li2023multimodal}, typically with a frozen vision encoder and no supervision for the vision part, which highly restricts model capacity. Besides, these LMMs are mostly trained on image-text pairs or documents, while overlooking video data as a potential scalable source of interleaved multimodal data. Documents interleaved with images (\textit{e.g.,} textbooks, webpages) provide an intuitive representation of complex concepts, and have proved to be effective in empowering models with multimodal in-context learning ability \cite{alayrac2022flamingo,zhu2023multimodal}. Videos, which usually contain interleaved image frames and subtitles (Figure 3), are an abundant source of multimodal data that naturally contains dense visual signals and encodes stronger cross-modal correlations with text than regular multimedia.
[Figure 1 example panels omitted: Image Captioning, Image Question Answering, In-context Completion, text-to-image generation, image blending, and video question answering, each pairing an input prompt with Emu's completion.]
Figure 1: Emu as a generalist interface for diverse vision-language applications, such as image captioning, image/video question answering, in-context image-to-text and text-to-image generation, and image blending. More examples in Appendix E.
documents. Furthermore, public videos (especially user-generated clips) possess richer content diversity than Common Crawl\footnote{https://commoncrawl.org/}, from which current training datasets mainly originate.
In this work, we introduce \textbf{Emu}, a large multimodal model that learns from both video and image data interleaved with text, under a unified objective of predicting the next visual or text token in an autoregressive fashion. To take advantage of rich web-scale data with an omnivore capacity, we formulate diverse sources of multimodal data (\textit{e.g.}, videos with subtitles, webpages with images and text) into a unified format of interleaved image embeddings and text tokens (videos are converted into an interleaved sequence of randomly-selected frames and subtitles). Specifically, visual signals are first encoded into embeddings via a visual representation model EVA-CLIP \citep{sun2023evaclip}, instead of being converted into discrete tokens. These visual embeddings together with text tokens constitute an interleaved multimodal input sequence, which will be fed into \textbf{Emu} for training.
We pretrain \textbf{Emu} on these multimodal data sequences under a simple unified objective: predicting the next element in a multimodal sequence. Different from existing LMMs that compute the predict-the-next loss on text tokens only, in training \textbf{Emu}, all input elements including both discrete text tokens and continuous image embeddings are accounted for loss computation. We adopt the cross-entropy classification loss for discrete text tokens, and the $\ell_2$ regression loss for continuous visual embeddings. As raw images typically lack the left-to-right causal dependency as in language, \textbf{Emu} does not perform image generative pretraining in the original pixel space. Instead, visual embeddings are transformed into a causal latent space via Causal Transformer, which accepts the image encodings generated by EVA-CLIP as input, and outputs $N$ tokens that capture the causal dependency of the given image (as illustrated in Figure 2).
Pretrained with the unified objective and diverse data forms, \textbf{Emu} can serve as a generalist interface for both image-to-text and text-to-image tasks by performing various types of completion in a multimodal sequence. As illustrated in Figure 1, \textbf{Emu} accepts multimodal prompts (\textit{e.g.}, text, image, video, or their interleaved sequence) and generates multimodal response (for image generation, visual embeddings are decoded by a fine-tuned diffusion model). Further, \textbf{Emu} demonstrates impressive capabilities such as in-context text and image generation (the 2nd block of Figure 1), image blending (the 5th row of Figure 1, that combines a cat and a tiger into a real-looking tiger-cat), video understanding (the last block of Figure 1), and real-world knowledge grounding (Section 5.4).
We evaluate \textbf{Emu} on a broad range of zero-shot and few-shot tasks including image captioning, visual question answering, video question answering, and text-to-image generation. For qualitative demonstration, we also build an effective multimodal assistant via instruction tuning on multimodal conversational data. The instruction-tuned \textbf{Emu} assistant can effectively follow human instructions and interact with users via multimodal response.
## 2 EMU: PREDICT THE NEXT IN MULTIMODALITY
### 2.1 ARCHITECTURE
\textbf{Emu} is a large-scale multimodal model that performs completion in multimodality, \textit{i.e.}, perceiving interleaved multimodal input and generating outputs varying in modalities. As illustrated in Figure 2, \textbf{Emu} consists of four parts: Visual Encoder, Causal Transformer, Multimodal Modeling, and Visual Decoder. We leverage pretrained EVA-CLIP \citep{sun2023evaclip}, LLaMA \citep{touvron2023llama} and Stable Diffusion \citep{rombach2022highresolution} to initialize the Visual Encoder, the Multimodal Modeling LLM and the Visual Decoder, respectively.
Given any sequence of interleaved image, text and video, we first encode each image into dense visual features via EVA-CLIP, then transform the encodings into a fixed number of $N$ visual causal embeddings via Causal Transformer. Similarly, we encode a video of $T$ frames into $T \times N$ visual causal embeddings. Two special image tokens \([\text{IMG}]\) and \([\text{/IMG}]\) are prepended and appended to each image or frame, respectively, to mark the beginning and end of the encoded image/frame embeddings. The visual causal embeddings are combined with text tokens to form multimodal sequences that are fed into the Multimodal Modeling LLM for unified autoregressive modeling. We append \(<s>\) and \(</s>\) tokens to the start and the end of each sequence. In inference, the regressed visual embeddings are decoded into a realistic image via the fine-tuned Visual Decoder.
Figure 2: Emu unifies the modeling of different modalities in an auto-regressive manner. Visual signals are first encoded into embeddings, and together with text tokens form an interleaved sequence. The training objective is to either classify the next text token or regress the next visual embedding. In inference, regressed visual embeddings are decoded into a realistic image via a fine-tuned latent diffusion model.
Causal Transformer. Auto-regressively modeling images in raster order is counter-intuitive and has not demonstrated satisfactory performance, which may be attributed to the fact that images naturally possess 2D structures and are not perceived as sequential signals like text. To better capture the characteristics of images and achieve unified modeling of different modalities, we propose a Causal Transformer module to transform 2D spatial visual signals to 1D causal sequences in a latent space $Z$. Specifically, given an image $I$ with its encodings $g(I)$ from EVA-CLIP as condition, Causal Transformer accepts randomly initialized embeddings $\{e_1, e_2, \ldots, e_N\}$ as input, and outputs $N$ embeddings $\{z_1, z_2, \ldots, z_N\}$ that capture the causal dependency of the given image. The architecture of Causal Transformer is similar to the decoder of Transformer (Vaswani et al., 2017), with each block consisting of a causal self-attention layer, a cross-attention layer, and a feed-forward layer. The cross-attention layer aggregates visual information from the image embeddings extracted from EVA-CLIP, where the visual embeddings are treated as keys and values, and the outputs from the previous causal attention layer serve as queries.
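A minimal PyTorch sketch of one such block is shown below; the hidden size, head count, and normalization placement are our assumptions, while the structure (causal self-attention over the $N$ latent queries, cross-attention with the EVA-CLIP encodings as keys/values, then a feed-forward layer) follows the description above:

```python
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    """One Causal Transformer block: causal self-attention over the N latent
    queries, cross-attention into the EVA-CLIP image encodings g(I) (used as
    keys/values), then a feed-forward layer. Hyperparameters are illustrative."""
    def __init__(self, dim=1024, heads=16):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, q, img_feats):
        # Causal mask so z_i attends only to z_1..z_i.
        n = q.size(1)
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool,
                                     device=q.device), diagonal=1)
        h = self.n1(q)
        q = q + self.self_attn(h, h, h, attn_mask=mask)[0]
        h = self.n2(q)
        q = q + self.cross_attn(h, img_feats, img_feats)[0]
        return q + self.ffn(self.n3(q))
```

In the full module, the $N$ randomly initialized input embeddings $\{e_1, \ldots, e_N\}$ would be a learned parameter expanded over the batch, and 12 such blocks are stacked (Section 3.2).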
Visual Decoder. We use a latent diffusion model to decode visual embeddings into images, and adopt the weights of Stable Diffusion (Rombach et al., 2022) for initialization. Specifically, we feed $N$ visual embeddings generated by Emu into the diffusion model as conditions for image decoding. We replace the linear projections of the cross-attention modules in Stable Diffusion with new linear layers that accommodate the dimension of Emu and Stable Diffusion.
2.2 Training Objective
Consider an unlabeled web-scale corpus $\mathcal{D}$ of interleaved multimodal sequences $x = (x_1, x_2, \ldots, x_n)$, where $x$ can be a vision-language sequence of various forms, such as an image-text pair, an image-text interleaved document, or a video with subtitles, and each $x_i$ is a signal unit (a text or image token) from an arbitrary modality. We first convert all continuous 2D signals (images and video frames) into 1D causal latent embedding sequences using Causal Transformer, then insert them back into the corresponding places in the sequence $x$. The resulting sequence is represented as $u = (u_1, u_2, \ldots, u_m)$, where $u_i$ can be either a discrete text token or a visual embedding that captures causal dependency with neighboring visual embeddings.
We approximate the likelihood of the web-scale corpus $p(x)$ with $p(u)$, and maximize the log-likelihood in a unified auto-regressive manner as follows:

$$\max_{\theta} \sum_{u \in \mathcal{D}} \sum_{i=1}^{|u|} \log P(u_i | u_1, \ldots, u_{i-1}; \theta)$$ (1)
Two types of losses are adopted to optimize this objective. For discrete text tokens, cross-entropy loss is used to supervise classification in the predefined vocabulary with a language modeling head. For continuous visual embeddings, $\ell_2$ regression loss is adopted with a separate regression head.
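A minimal sketch of this unified objective is shown below: positions whose next element is a text token receive a cross-entropy loss through the language modeling head, while positions whose next element is a visual embedding receive an $\ell_2$ regression loss through a separate regression head. Tensor names, shapes, and the equal weighting of the two terms are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def unified_loss(hidden, text_targets, visual_targets, is_text, lm_head, reg_head):
    """Unified predict-the-next loss over an interleaved sequence u (Eq. (1)).

    hidden[t] is the LLM state used to predict element u_{t+1}; is_text[t]
    marks whether u_{t+1} is a discrete text token (id in text_targets[t])
    or a continuous visual embedding (vector in visual_targets[t])."""
    txt = is_text.bool()
    # Cross-entropy classification over the vocabulary for next text tokens.
    ce = F.cross_entropy(lm_head(hidden[txt]), text_targets[txt])
    # l2 regression towards the next visual causal embedding.
    l2 = F.mse_loss(reg_head(hidden[~txt]), visual_targets[~txt])
    return ce + l2
```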
### 2.3 Generalist Interface
The unified auto-regressive modeling of different modalities endows Emu with a powerful ability to serve as a multimodal generalist that can perform any type of completion in a multimodal sequence, i.e., accepting a multimodal sequence as input and outputting signals across vision and language modalities. For example, given two examples as the prompt, Emu automatically infers and completes the corresponding task given a new input, as shown in the second block of Figure 1.
Specifically, given a multimodal context, if the expected output format is text, Emu uses the language modeling head to generate discrete text tokens. If the desired output is an image, we append an \([\text{IMG}]\) token at the end of the input sequence; Emu then autoregressively generates $N$ visual embeddings that are sent to the visual decoder to be decoded into a real-world image.
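A sketch of this image-completion path, with all APIs as hypothetical placeholders (the paper does not specify these interfaces), could look as follows:

```python
import torch

@torch.no_grad()
def generate_image(emu, decoder, prompt_seq, n_visual=32):
    """Image completion (sketch): append the [IMG] token, autoregressively
    regress N visual embeddings with the regression head, then decode them
    with the fine-tuned diffusion decoder. `emu`/`decoder` APIs and the
    value of N are illustrative placeholders."""
    seq = emu.append_token(prompt_seq, "[IMG]")
    embeds = []
    for _ in range(n_visual):
        h = emu.forward_last(seq)           # hidden state for the next element
        z = emu.reg_head(h)                 # regress the next visual embedding
        embeds.append(z)
        seq = emu.append_embedding(seq, z)  # feed it back autoregressively
    return decoder(torch.stack(embeds, dim=0))
```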
## 3 EMU Training
We pretrain Emu with web-scale data across modalities in various forms, including image-text pairs (LAION-2B [Schuhmann et al., 2022] and LAION-COCO (lai b)), interleaved image-text data (MMC4 [Zhu et al., 2023b]), video-text pairs (WebVid-10M [Bain et al., 2021]), and our collected interleaved video-text data (YT-Storyboard-1B). All these data are formulated as multimodal sequences, from which Emu learns under the objective of predict-the-next-element in a unified auto-regressive manner. After pretraining, we finetune a decoder to transform visual embeddings into realistic images.
### 3.1 Data
**Image-text Pairs.** We use the image-text pairs from LAION-2B [Schuhmann et al., 2022] and LAION-COCO (lai b) for pretraining. LAION-2B provides images paired with noisy alt-texts from the web, and LAION-COCO is its 600M subset that is captioned by BLIP [Li et al., 2022].
**Video-text Pairs.** WebVid-10M [Bain et al., 2021] is an extensive dataset consisting of a large collection of short videos with textual descriptions. These videos are sourced from materials websites with diverse contents and a strong correlation between text and video. We use heuristic rules to remove irrelevant metadata (e.g., resolution of the original video, camera parameters).
**Interleaved Image and Text.** Large-scale image-text interleaved data plays a crucial role in unlocking the in-context learning ability of multimodal models. We leverage the Multimodal-C4 (MMC4) dataset [Zhu et al., 2023b], an expanded version of the text-only C4 [Raffel et al., 2020]. MMC4 comprises approximately 75 million image-text-interleaved documents, with 400 million images and 38 billion tokens in total. From each document, we sample a random subsequence of $L = 1024$ tokens and take up to the first $N = 5$ images in the sampled subsequence. Additionally, we randomly sample $N = 5$ images along with their corresponding sentences to construct a subsequence of $L = 512$ tokens.
**Interleaved Video and Text.** Videos with subtitles also present a promising and scalable source of interleaved multimodal data. We introduce the YT-Storyboard-1B dataset which collects 18 million videos and their corresponding subtitles from YouTube using the video-ids provided by the YT-Temporal-1B dataset [Zellers et al., 2022]. Instead of raw videos, we collect storyboard images (about 1.8 billion images in total), a set of thumbnails provided by the YouTube website for quick video viewing. The combination of storyboard thumbnails and subtitles creates a natural interleaved sequence of video and text ordered by timestamps, as in Figure 5. More details are in Appendix A.1.1.
### 3.2 Pretraining
We initialize Emu’s Visual Encoder with the 1B version of EVA-01-CLIP [Sun et al., 2023], and Multimodal Modeling LLM with the 13B version of LLaMA [Touvron et al., 2023]. LLaMA is a decoder-only Transformer [Vaswani et al., 2017] and EVA-01-CLIP is a 40-layer ViT [Dosovitskiy et al., 2021].
The Causal Transformer comprises 12 blocks, each of which consists of a causal self-attention layer, a cross-attention layer, and a feed-forward layer. Random initialization is used for Causal Transformer. The total number of parameters of Emu is 14B and is trained end-to-end.
We use a batch size of 128 for image-text pair data, 64 for interleaved image-text data, and 16 for video-text pair and interleaved video-text data. Detailed pretraining hyperparameters are in Appendix A.1.1. For each video, we randomly sample 8 frames for pretraining, and all images/frames are resized to $224 \times 224$ resolution. For image-text pair and interleaved data, we randomly place each image before or after its corresponding sentence. We train the model on 128 NVIDIA 80G-A100 GPUs for 10k steps with around 82M samples (150B tokens in total), and the pretraining takes approximately 2 days.
### 3.3 Visual Decoding
After pretraining, we tune the visual decoder with both the LAION-COCO (lai b) and LAION-Aesthetics (lai a) (a high-aesthetics-quality subset of LAION-5B (Schuhmann et al., 2022)) image-text pair datasets on the text-to-image task. Specifically, we initialize the diffusion model with Stable Diffusion v1.5. We freeze the Visual Encoder and the Multimodal Modeling LLM in Emu, as well as the VAE in the diffusion model, during training; only the parameters of the U-Net are updated. For each training sample, we append the \([\text{IMG}]\) token to the end of the input text and feed it into the Multimodal Modeling LLM, which then generates $N$ visual embeddings in an auto-regressive manner. These visual causal embeddings are fed into the Visual Decoder as the condition for image generation training.
We follow the model setups of Stable Diffusion v1.5. We train the diffusion model on 32 A100-40G GPUs for 15k iterations. Detailed hyperparameters are in Appendix A.2. To further improve sample quality, we randomly drop the image-embedding condition 10% of the time during training to enable classifier-free guidance (Ho & Salimans, 2022).
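The condition dropping for classifier-free guidance can be sketched as below; the learned null condition and its shape are assumptions:

```python
import torch

def maybe_drop_condition(visual_embeds, null_embeds, p_drop=0.1):
    """With probability p_drop (10% in the paper), replace the visual-embedding
    condition with a learned null condition so the U-Net also models the
    unconditional distribution; `null_embeds` is an illustrative placeholder."""
    if torch.rand(()) < p_drop:
        return null_embeds.expand_as(visual_embeds)
    return visual_embeds
```

At sampling time, the guided prediction combines the conditional and unconditional outputs with the guidance scales reported in Section 5.1.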
### 4 Instruction Tuning
Language instruction tuning has helped pretrained language models to align with user intentions (Ouyang et al., 2022; Wang et al., 2022c; Taori et al., 2023; Zheng et al., 2023) and generalize to unseen tasks (Wei et al., 2022; Chung et al., 2022). We apply multimodal instruction tuning on Emu to align it with human instructions through supervised finetuning on publicly available datasets, including language instructions from ShareGPT (Zheng et al., 2023) and Alpaca (Taori et al., 2023), image-text instructions from LLaVA (Liu et al., 2023b), and video instructions from VideoChat (Li et al., 2023c) and Video-ChatGPT (Maaz et al., 2023). Dataset details can be found in Appendix B.1.
In instruction tuning, we freeze all parameters of pretrained Emu and fine-tune a low-rank adaptation (LoRA) module (Hu et al., 2022). The main focus of instruction tuning is to align the model with natural language instructions, which are less relevant to vision features. Thus, we attach LoRA modules only to the self-attention layers of the Multimodal Modeling LLM, and add no adaptation to the Vision Encoder. Training details can be found in Appendix B.1.
Table 1: Zero-shot comparison, * indicates that the zero-shot prompt is built by using two examples from the task, where their corresponding images have been removed. **Emu-I** is the instruction-tuned Emu model. The best results are **bold** and the second best are **underlined**.
| Models | COCO | NoCaps | Flickr30K | VQAv2 | OKVQA | VizWiz | VisDial | MSVDQA | MSRVTTQA | NExTQA |
|--------------|------|--------|-----------|-------|-------|--------|---------|--------|----------|--------|
| **Per-task Finetuning** | | | | | | | | | | |
| PALI-X-55B | 149.2| 126.3 | - | 86.0 | 66.1 | 70.9 | - | - | 47.1 | 38.3 |
| MetaLM | 82.2 | 58.7 | 43.3 | 41.1 | 11.4 | - | - | - | - | - |
| Kosmos-1 | 84.7 | 67.1 | 51.0 | - | 29.2 | - | - | - | - | - |
| Flamingo-9B *| 79.4 | - | 61.5 | 51.8 | 44.7 | 28.8 | 48.0 | 30.2 | 13.7 | 23.0 |
| **Emu** | 112.4| 96.5 | 72.0 | 52.0 | 38.2 | 34.2 | 47.4 | 18.8 | 8.3 | 19.6 |
| **Emu** * | - | - | - | 52.9 | 42.8 | 34.4 | 47.8 | 34.3 | 17.8 | 23.4 |
| **Emu-I** | 120.4| 108.8 | 77.4 | 57.2 | 43.4 | 32.2 | 43.0 | 34.6 | 16.8 | 5.8 |
| **Emu-I** * | - | - | - | 62.0 | 49.2 | 38.3 | 51.1 | 37.0 | 21.2 | 19.9 |
All instruction-tuning data are packed with this template:
\[
<\text{System Message}> \quad [\text{USER}]: \quad <\text{Instruction}> \quad [\text{ASSISTANT}]: \quad <\text{Answer}>,
\]
where [USER] and [ASSISTANT] are special tokens initialized from the embeddings of the words ‘user’ and ‘assistant’, respectively. <System Message> varies depending on the specific task (Appendix B.2). <Instruction> and <Answer> are the actual slots for human instructions and assistant answers, and only <Answer> is accounted for in the loss computation.
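A small illustrative helper for this loss masking (not the paper's code) is shown below; tokens before the answer span are set to the ignore index so that cross-entropy skips them:

```python
def build_labels(token_ids, answer_start, ignore_index=-100):
    """Mask the system message, [USER] turn, and instruction so that only the
    <Answer> span contributes to the loss; `answer_start` is the index of the
    first answer token (a hypothetical argument for illustration)."""
    labels = list(token_ids)
    for i in range(answer_start):
        labels[i] = ignore_index  # ignored by cross-entropy
    return labels
```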
5 EVALUATION
We evaluate Emu on a broad range of vision-language tasks including image captioning (MS-COCO (Chen et al., 2015)), image question answering (VQAv2 (Goyal et al., 2017), OKVQA (Marino et al., 2019), VizWiz (Gurari et al., 2018)), visual dialog (VisDial (Das et al., 2017)), video question answering (MSRVTTQA (Xu et al., 2017), MSVDQA (Xu et al., 2017), NextQA (Xiao et al., 2021)) and text2image generation (MS-COCO (Lin et al., 2014)). Details are described in Appendix C.1. We evaluate our pretrained and instruction-tuned models in zero-shot and few-shot settings.
5.1 ZERO-SHOT EVALUATION
In the zero-shot setting, the model is tested on tasks and datasets never encountered during training. Task-specific prompts are used to indicate different tasks to perform, without any additional tuning for model parameters.
Multimodal Understanding. Table 1 presents the zero-shot multimodal understanding performance of Emu and Emu-I (the instruction-tuned model). For zero-shot evaluation of Emu, we adopt the multimodal Chain-of-Thought prompting technique following Huang et al. (2023), which first asks the model to generate a caption for visual content before outputting the final result. Additionally, we evaluate using the same strategy following Flamingo (Alayrac et al., 2022), where two text-only examples from the task are used as prompts (results indicated by an *). For more detailed information regarding the evaluation, please refer to Appendix C.2.
On COCO captioning task, Emu achieves impressive zero-shot CIDEr score (Vedantam et al., 2015) of 112.4, which outperforms other LMMs by a large margin. In a wide range of image and video question answering tasks, Emu consistently surpasses LMMs like Kosmos-1 and Flamingo-9B. Notably, Emu achieves an accuracy of 34.4% on the complex VizWiz VQA dataset, versus Kosmos-1’s 29.2% and Flamingo-9B’s 28.8%. Emu-I is the instruction-tuned Emu model that achieves notable improvements. Remarkably, even with only 14B parameters, Emu-I can outperform much larger-scale Flamingo-80B model in several tasks such as VQAv2 (62.0% vs. 56.3%), VizWiz (38.3% vs. 31.6%), and MSVDQA (37.0% vs. 35.6%).
Text2Image Generation. We evaluate the zero-shot image generation ability on the validation set of MS-COCO (Lin et al., 2014). Following Ramesh et al. (2021), we randomly sample 30k prompts from the validation set and calculate the zero-shot FID (Heusel et al., 2017). The results are shown in Table 2. For the generation of both Emu and SDv1.5, we use the PNDM (Liu et al., 2022) scheduler with
Table 3: Few-shot comparison. \( k \) is the number of in-context examples, and we used the same example selection approach (i.e., RICES (Yang et al., 2022b)) as Flamingo (Alayrac et al., 2022).
| Models | VQAv2 \( k=2 \) | VQAv2 \( k=4 \) | VQAv2 \( k=8 \) | VizWiz \( k=2 \) | VizWiz \( k=4 \) | VizWiz \( k=8 \) | MSVDQA \( k=2 \) | MSVDQA \( k=4 \) | MSVDQA \( k=8 \) | MSRVTTQA \( k=2 \) | MSRVTTQA \( k=4 \) | MSRVTTQA \( k=8 \) |
|--------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| Kosmos-1 | 51.4 | 51.8 | 51.4 | 31.4 | 35.3 | 39.0 | - | - | - | - | - | - |
| Flamingo-9B | - | 56.3 | 58.0 | - | 34.9 | 39.4 | - | 36.2 | 40.8 | - | 18.2 | 23.9 |
| PALI-X | - | 56.9 | 57.1 | - | - | - | - | - | - | - | - | - |
| Emu | **56.4** | **58.4** | **59.0** | **37.8** | **41.3** | **43.9** | **36.0** | **37.1** | **39.8** | **21.2** | **21.8** | **24.1** |
Table 4: Zero-shot evaluation regarding each core VL capability of MM-Vet (Yu et al., 2023b).
| Model | Rec | OCR | Know | Gen | Spat | Math | Total |
|------------------------|-----|-----|------|-----|------|------|--------------|
| LLaMA-Adapter v2-7B | 16.8| 7.8 | 2.5 | 3.0 | 16.6 | 4.4 | 13.6±0.2 |
| MiniGPT-4-14B | 29.9| 16.1| 20.4 | 22.1| 22.2 | 3.8 | 24.4±0.4 |
| InstructBLIP-14B | 30.8| 16.0| 9.8 | 9.0 | 21.1 | 10.5 | 25.6±0.3 |
| DreamLLM-7B | 41.8| 26.4| 33.4 | 33.0| 31.0 | 11.5 | 35.9±0.1 |
| LLaVA-65B | 39.2| **28.2**| 26.2 | 28.3 | **33.0**| **15.0**| 35.5±0.3 |
| Emu-I-14B | **45.5**| 19.2 | **36.7**| **35.9**| 25.2 | 3.8 | **36.3±0.3** |
50 steps. We also adopt classifier-free guidance (Ho & Salimans, 2022) for better generation quality. The scaling factor is set to 5.0 and 3.0 for Emu and SDv1.5 respectively, as these settings yield the best performance for both models. Emu achieves better performance than the concurrent work GILL (Koh et al., 2023a), which also generates images with LLMs. However, our model is inferior to SDv1.5 in terms of FID. This is likely because the condition space (image embeddings) of our visual decoder deviates substantially from the condition space (text embeddings) of the diffusion model used as initialization, and because our model is trained for a relatively short 15k steps.
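As a point of reference, classifier-free guidance combines the conditional and unconditional noise predictions at each denoising step; a minimal sketch follows, with all function and argument names illustrative rather than taken from Emu's code:

```python
import torch

def cfg_noise_prediction(model, x_t, t, cond_emb, uncond_emb, guidance_scale=5.0):
    # Classifier-free guidance (Ho & Salimans, 2022): push the prediction
    # away from the unconditional branch and towards the conditional one.
    eps_cond = model(x_t, t, cond_emb)
    eps_uncond = model(x_t, t, uncond_emb)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```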
5.2 Few-shot Evaluation
In few-shot evaluation, the model is prompted with task-specific prompts and several examples collected from the training data to evaluate its in-context learning ability. Evaluation details can be found in Appendix C.3. Table 3 presents the performance of the pretrained model Emu on image and video question answering tasks under the few-shot (\( k = 2, 4, 8 \)) evaluation setting. We use the Retrieval In-Context Example Selection (RICES) approach (Yang et al., 2022b) following Flamingo (Alayrac et al., 2022). Emu demonstrates superior performance to Flamingo-9B and Kosmos-1 under almost all scenarios. For example, Emu achieves a VQAv2 accuracy of 58.4% and a VizWiz accuracy of 41.3% under the 4-shot setting, surpassing Flamingo-9B by +2.1% and +6.4%, respectively. For video-text tasks, Emu demonstrates strong performance as well, such as 4-shot 21.8% vs. Flamingo’s 18.2% on the MSRVTTQA benchmark. Additionally, we observe a positive correlation between the number of shots \( k \) (\( k = 0, 2, 4, 8 \)) and the performance of Emu. These results demonstrate Emu’s remarkable in-context learning ability.
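RICES picks the in-context examples whose embeddings are closest to the query; a minimal sketch of the selection step, assuming precomputed, L2-normalised query and support embeddings (the particular encoder used to produce them is an assumption here):

```python
import numpy as np

def rices_select(query_emb, support_embs, k=4):
    # Cosine similarity reduces to a dot product for normalised embeddings;
    # return the indices of the k most similar training examples.
    sims = support_embs @ query_emb
    return np.argsort(-sims)[:k]
```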
5.3 In-the-wild Evaluation
Table 4 presents zero-shot evaluation results on the in-the-wild benchmark MM-Vet (Yu et al., 2023b). We report the mean and std of 5 evaluation runs following Yu et al. (2023b). For each core capability, the average score is reported. Emu-I exhibits state-of-the-art in-the-wild capability, and even outperforms LLaVA-65B (Lu et al., 2023) in Rec, Know, Gen abilities and the total score.
5.4 Qualitative Evaluation
Beyond quantitative benchmarks, we conduct extensive qualitative evaluation of Emu. Emu demonstrates impressive capabilities that cannot be evaluated on standard benchmarks, including real-world knowledge grounding (upper right of Figure 4), interleaved multi-image understanding (left side of Figure 4), detailed video understanding (lower right of Figure 4), multimodal assistant (Figure 5), multi-turn dialogue (Figure 6), image blending (Figure 7), and (in-context) text-to-image generation. For in-context text-to-image generation, Emu can generate context-related images: the images generated in the first two rows of Figure 8 share the oil-painting style established in context (compare the corresponding images generated without context in the first two rows of Figure 9), and Emu can follow context-related instructions, as shown in the 4th row of Figure 1. The multimodal in-context ability of Emu enables this new capability of in-context image generation.
We also compare Emu with other state-of-the-art multimodal assistants in terms of the ability to perform typical image captioning tasks (Figure 10) and follow human instructions (Figure 11). In Figure 11, we test a slightly difficult instruction, and only Emu responds properly, listing 8 books written by Agatha Christie and then recommending one.
6 RELATED WORK
Multimodal pretraining (Radford et al., 2021; Jia et al., 2021; Sun et al., 2023; Chen et al., 2020; Kim et al., 2021; Wang et al., 2022d, a, b; Cho et al., 2021; Li et al., 2021; Yu et al., 2022a; Chen et al., 2023c; Lu et al., 2022) learns cross-modal interactions from large-scale multimodal data. Flamingo (Alayrac et al., 2022) bridges powerful yet private pretrained vision and large language models and was the first to demonstrate remarkable multimodal zero-shot and few-shot behaviors. With the increasing impact (Schulman et al., 2022) and accessibility (Touvron et al., 2023) of LLMs, recent work has also considered building multimodal models based on LLMs (Li et al., 2023b; Driess et al., 2023; Huang et al., 2023; Dai et al., 2023; Ye et al., 2023; Zeng et al., 2023; Koh et al., 2023b), such as the BLIP series (Li et al., 2023b; Dai et al., 2023) that connects frozen vision and language pretrained models with a Q-Former to bridge the modality gap. These LMMs commonly use predicting the next text token as the training objective and apply no supervision to the vision data (Hao et al., 2022; Huang et al., 2023; Zhu et al., 2023a; Liu et al., 2023b; Ye et al., 2023). Instead, Emu unifies the modeling of vision and language with the objective of predicting the next visual or text token in an autoregressive manner, and further explores videos as a new source of interleaved image-text data. This unified modeling leads to a generalist interface for diverse multimodal tasks that output either image or text. Recent studies (Zhu et al., 2023a; Liu et al., 2023b; Maaz et al., 2023; Li et al., 2023c; Liu et al., 2023a; Li et al., 2023a; Chen et al., 2023b,a) attempt to build powerful visual multimodal assistants based on LMMs through constructed conversation data. We also instruction-tune Emu using publicly available datasets and build a multimodal assistant that aligns well with human instructions on both images and videos.
7 LIMITATIONS AND FUTURE TOPICS
Emu shares the well-acknowledged constraints inherent in other LLMs and LMMs, including susceptibility to both visual and language hallucinations, slow auto-regressive inference, a cessation of knowledge updates after pretraining, and a potential for generating non-factual content. Besides, Emu predominantly focuses on English-language data. As a result, the model’s proficiency in languages other than English is currently limited, and users should exercise caution when applying it in such contexts. Addressing challenges related to hallucination, enhancing inference speed, and expanding multilingual capabilities are crucial areas for future exploration and improvement.
8 CONCLUSION
In this work, we present Emu, a Large Multimodal Model trained with a unified autoregressive objective of predicting the next element, including both visual and textual tokens. Apart from commonly used image-text pairs and interleaved documents, we explore another scalable data source of image-text interleaved data, i.e., video. Emu trained under such unified objective and diverse data can serve as a generalist interface that is capable of performing diverse multimodal tasks, such as image captioning, image/video question answering, and text-to-image generation, together with new abilities like in-context text and image generation, and image blending. We also build a multimodal assistant instruction-tuned on Emu, which exhibits excellent human-aligned abilities such as multi-turn dialogue. We hope that our work will inspire the community to continue exploring the potential of diverse multimodal data at web-scale and also the generative pretraining beyond vision and language.
ETHICS STATEMENT
Emu is currently in a preliminary stage and has been developed solely for research purposes. Its usage in specific applications is not recommended until comprehensive risk analyses have been conducted and corresponding mitigation strategies thoroughly explored. The ensuing discussion outlines potential risks and corresponding mitigation strategies of Emu, acknowledging the necessity for further research efforts to comprehensively assess associated risks.
POTENTIAL RISKS
The ethical considerations associated with Emu primarily stem from two key aspects. 1) Model initialization: the Multimodal Modeling module of Emu is initialized from the open-sourced large language model LLaMA (Touvron et al., 2023), the Visual Decoder module is initialized from Stable Diffusion (Rombach et al., 2022), and the Vision Encoder is initialized from EVA-CLIP (Sun et al., 2023). Consequently, Emu inherits the potential risks of generating harmful and biased information, including offensive language, propagation of social biases and stereotypes, and the generation of inappropriate content such as pornography and child abuse. 2) Pretraining data: the pretraining data of Emu are publicly available and sourced from the Internet, where bias and harmful information are prevalent. Besides, datasets sourced from the Internet (such as Common Crawl) may include links to images with personal information, potentially compromising privacy and containing sensitive content like faces, medical images, or other personal data.
MITIGATION STRATEGIES
It is crucial to reiterate that Emu is designed exclusively for preliminary academic research and should not be deployed in specific applications without rigorous risk analyses and mitigation strategy exploration. Deployment in production environments warrants a more thorough investigation into model behavior and potential biases.
Given the extensive size of pre-training datasets and the associated training costs, curating datasets and developing models for widespread use exceeds the scope of a single research paper. However, we are open to discussing mitigation strategies to help address ethical concerns.
Short-term approaches include: 1) relying on prompting to mitigate any biases and harmful outputs; 2) implementing rule-based filtering, human oversight, and evaluation to identify and block harmful information; 3) employing a discriminator model capable of classifying harmful information for enhanced blocking; and 4) finetuning Emu itself to become a multimodal discriminator.
In the long term, strategies involve: 1) social or public policy interventions, such as regulatory frameworks and guidelines; 2) thoughtful product design, especially regarding user interface decisions; 3) advancements in AI Ethics of powerful large models, including the development of better benchmarks and improved mitigation strategies.
Additionally, to address privacy concerns, methods exist for obfuscating or generating personal human attributes like faces (Yang et al., 2022a; Maximov et al., 2020), ensuring anonymity without compromising the quality of learned representations. While this avenue is worth exploring, it is currently beyond the scope of this paper.
In conclusion, Emu is presently a model intended for preliminary research purposes only, and deployment should be deferred until the aforementioned issues are thoroughly considered and addressed. Caution must be exercised before transitioning to production environments.
REFERENCES
Laion-aesthetics. https://laion.ai/blog/laion-aesthetics/.
Laion COCO: 600M synthetic captions from LAION2B-en. https://laion.ai/blog/laion-coco/.
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhihao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals,
|
jZPqf2G9Sw
|
“Remarkably, Genie outperformed other models such as ProtDiff (Trippe et al., 2023), FoldingDiff (Wu et al., 2022) or FrameDiff (Yim et al., 2023), and remains comparable to RFDiffusion (Watson et al., 2022).” Why is this remarkable?
|
Dynamics-Informed Protein Design with Structure Conditioning
Urszula Julia Komorowska,* Simon V Mathis,* Kieran Didi, Francisco Vargas, Pietro Lio & Mateja Jamnik
Department of Computer Science and Technology
University of Cambridge
Cambridge, CB3 0FD, UK
{ujk21, svm34, ked48, fav25, pl219, mj201}@cam.ac.uk
Abstract
Current protein generative models are able to design novel backbones with desired shapes or functional motifs. However, despite the importance of a protein’s dynamical properties for its function, conditioning on these dynamics remains elusive. We present a new approach to include dynamical properties in protein generative modeling by leveraging Normal Mode Analysis. We introduce a method for conditioning diffusion probabilistic models on protein dynamics, specifically on the lowest non-trivial normal mode of oscillation. Our method, similar to classifier guidance conditioning, formulates the sampling process as being driven by conditional and unconditional terms. However, unlike previous works, we approximate the conditional term with a simple analytical function rather than an external neural network, thus making the eigenvector calculations approachable. We present the corresponding SDE theory as a formal justification of our approach. We extend our framework to conditioning on structure and dynamics at the same time, enabling scaffolding of dynamical motifs. We demonstrate the empirical effectiveness of our method by turning the open-source unconditional protein diffusion model Genie into a normal-mode-dynamics-conditional model with no retraining. Generated proteins exhibit the desired dynamical and structural properties while still being biologically plausible. Our work represents a first step towards incorporating dynamical behaviour in protein design and may open the door to designing more flexible and functional proteins in the future.
1 Introduction
Generative Artificial Intelligence (AI) has rapidly accelerated protein design research. A common problem tackled with AI is the task of protein backbone design, that is, finding a new and realistic 3D structure tailored to a specific biological function. Recently, AI models based on the denoising diffusion framework (Ho et al., 2020; Song et al., 2021) have shown remarkable success in generating realistic protein backbones, especially backbones with pre-defined, fixed substructures often referred to as motifs (Watson et al., 2022; Trippe et al., 2023). Since many functions have been linked to the presence of various functional motifs, enforcing the generation process to preserve such substructures is crucial in meaningful protein design. However, current modeling approaches do not incorporate an important aspect of protein design: structure alone is not enough to determine the protein’s functional properties. Information about protein flexibility, especially about its low-frequency collective motion, is crucial in determining protein functional properties (Bauer et al., 2019). In this work, we address this research gap and provide a framework for a diffusion model conditioned not only on structural constraints but also on protein dynamics.
We analyse protein dynamics through the lens of Normal Mode Analysis (NMA) (Bahar et al., 2010). This is a simple yet powerful method for obtaining eigenvectors of the motion of protein residues and their relative displacements in each mode. After performing NMA on a real-life protein with known functionality, the obtained eigenvectors can be used as the dynamic targets when using a diffusion
model to sample a novel backbone. We are particularly interested in proteins which exhibit hinge-like motions, which are responsible for a number of protein functions and are strongly constrained in both structure and dynamics (Khade et al., 2020). Protein hinges usually involve two secondary structure elements rotating against each other about the common axis, similar to how a hinge at the door frame has closing and opening motions.
Our contributions are as follows:
• We introduce a new methodology for conditioning protein generation on dynamical properties. Our approach is based on NMA which is easy to compute and captures collective motions related to protein function. Moreover, we demonstrate how conditioning on the desired relative displacements, which we refer to as dynamics conditioning, can be accompanied by structure conditioning. To substantiate this joint conditioning theoretically, we present a formal interpretation in terms of stochastic differential equations.
• We train our custom conditional diffusion model and generate dynamics-conditioned backbones. Thanks to the large number of real-life dynamics targets extracted from our data, we provide a detailed analysis of the effectiveness of the method. We measure the agreement of the displacements using a custom loss function and manually inspect the agreement of target and sample displacement vectors for selected samples. Our method indeed allows us to generate proteins with desired dynamics and is easily transferable to other models.
• We showcase the joint conditioning by applying it to a trained Genie model (Lin & AlQuraishi, 2023). Through literature research, we select three proteins that exhibit hinge structures and motions, identify residues located in the hinge arms and use those as conditioning targets. Figure 1 shows that we succeed in generating new and biologically plausible proteins with the targeted hinge dynamics, demonstrating that our framework can be transferred to other models in a plug-and-play fashion.
Figure 1: Comparison of natural proteins (top) from which the hinge targets were extracted with conditional samples (bottom). **Top row:** from the left – lysozyme, adenylate kinase, haemoglobin. **Bottom row:** protein backbones synthesised with Genie that match the pre-selected hinge motif residues and have the desired dynamics, from the left with lysozyme, adenylate kinase, haemoglobin targets. Purple arrows are the displacements of selected residues in the normal mode, while green ones are the displacements in the same mode but in a novel structure. Arrows have been scaled up for increased visual clarity. Note how the relative amplitudes and pair-wise angles of the green arrows match the constraints imposed by the target, and how the relative positions of the novel hinge residues are as in the original structure.
2 BACKGROUND AND RELATED WORK
2.1 DIFFUSION PROBABILISTIC MODELING
The generative process in diffusion probabilistic models (Sohl-Dickstein et al., 2015) starts with a sample from the standard normal distribution, \( x_T \sim \mathcal{N}(0, I) \). The goal of this process is to transform \( x_T \) into a sample \( x_0 \) from the targeted data distribution \( p_0(x_0) \), which is initially unknown and indirectly accessed by the trained model.
The key idea is to formulate the model training as a forward diffusion process in which the model predicts how much noise was added to the original sample. For a sample from the training set \( x_0 \), the forward process is defined as iteratively adding a small amount of Gaussian noise to the sample in \( T \) steps, which produces a sequence of noisy samples \( x_{0:T} \) such that the final sample \( x_T \sim \mathcal{N}(0, I) \) to good approximation. In the Denoising Diffusion Probabilistic Modeling (DDPM) framework (Ho et al., 2020) the noise magnitude at each step is defined by a variance schedule \( \{\beta_t, t \in [0 : T]\} \) such that
\[
p_t(x_t \mid x_{t-1}) = \mathcal{N}(x_t; \sqrt{1 - \beta_t}\, x_{t-1}, \beta_t I).
\]
(1)
The above transition defines a Markov process in which the original data is transformed into a standard normal distribution. It is possible to write the density of \( x_t \) given \( x_0 \) in a closed form
\[
p_t(x_t \mid x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1 - \bar{\alpha}_t) I), \quad \text{s.t. } x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_t,
\]
(2)
where \( \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i \), \( \alpha_i = 1 - \beta_i \), and \( \epsilon_t \sim \mathcal{N}(0, I) \). Transforming a sample \( x_T \) into the sample \( x_0 \) is done in several updates that reverse the destructive noising, given by the reverse sampling scheme
\[
x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right) + \sqrt{1 - \alpha_t}\, z,
\]
(3)
where \( z \sim \mathcal{N}(0, I) \). The neural network \( \epsilon_\theta \) (the denoiser) is trained to predict the noise added to \( x_0 \). Ho et al. (2020) showed that the following loss function is sufficient
\[
L = \mathbb{E}_{x_0, t} \left( \|\epsilon_t - \epsilon_\theta(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_t, t)\|^2 \right).
\]
(4)
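For concreteness, here is a minimal PyTorch sketch of this objective, assuming a generic `denoiser(x_t, t)` network and a precomputed schedule tensor `alpha_bar` of shape `[T]` (both are placeholders, not the paper's implementation):

```python
import torch

def ddpm_loss(denoiser, x0, alpha_bar):
    # Sample a timestep per example, noise x0 with the closed-form forward
    # process (Eq. 2), and regress the injected noise (Eq. 4).
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0.device)
    a = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    return ((eps - denoiser(x_t, t)) ** 2).mean()
```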
Song et al. (2021) show that DDPM is an instance of the larger class of score-based models. They demonstrated that the discrete forward and reverse diffusion processes have continuous-time equivalents, namely the forward Stochastic Differential Equation
\[
dx = -\frac{1}{2} \beta(t) x dt + \sqrt{\beta(t)} dw,
\]
(5)
and its reversal
\[
dx = \left[ -\frac{1}{2} \beta(t) x - \beta(t) \nabla_x \ln p_t(x) \right] dt + \sqrt{\beta(t)} d\bar{w},
\]
(6)
where the quantity \( \nabla_x \ln p_t(x_t) \) is called the score and is closely related to the noise in DDPM via the equivalence \( \nabla_x \ln p_t(x_t) = -\epsilon_t/\sqrt{1 - \bar{\alpha}_t} \) (derivations are in Appendix F). Any model trained to predict the noise can be written in terms of the score, which is an essential property for our work. Whenever we derive an expression with respect to the score, we can use the noise-based formulation for the forward and reverse diffusion processes by simply substituting \( \epsilon_t = -\sqrt{1 - \bar{\alpha}_t}\, \nabla_x \ln p_t(x_t) \).
Related work on Diffusion Probabilistic Models for protein design. In the context of protein generative modelling, the real data samples \( x_0 \) are often represented by protein backbone coordinates (e.g., at the resolution of \( C_\alpha \) atoms), optionally with amino-acid identity as a scalar feature. Protein diffusion models operating on such representations were shown to generate designable and novel samples to various degrees (Lin & AlQuraishi, 2023; Ingraham et al., 2022; Watson et al., 2022; Yim et al., 2023). Some of these models were additionally designed to condition the sample on properties such as substructure, symmetry or structural motif; however, none of those works link function to dynamics. Motif scaffolding has been done, for example, by providing the denoised motif residue positions during conditional training (Watson et al., 2022), by particle filtering methods (Trippe et al., 2023), or by empirically estimating the chance that the sample will contain the query motif (Ingraham et al., 2022). EigenFold (Jing et al., 2023) attempts to incorporate the physical constraints for oscillations into the diffusion kernel; however, this did not improve the sample quality, and it was not tested whether it changes the dynamics of generated samples.
2.2 Normal Mode Analysis
Normal Mode Analysis (NMA) is a technique for describing collective motions of protein residues for a given energy function. It assumes that a protein is in the energy minimum state in a given force field, such that the protein residues will, to first approximation, undergo harmonic motions about their minima (Bahar et al., 2010). Amplitudes and frequencies of such oscillations are the solutions to the equations of motions for all residues. These equations of motions are compactly written in matrix form as \( M\ddot{x} = -Kx \), where \( x \in \mathbb{R}^{3N} \) is a flattened vector of coordinates of \( N \) residues, \( M \in \mathbb{R}^{3N \times 3N} \) is a mass matrix and \( K \in \mathbb{R}^{3N \times 3N} \) is the interaction constants matrix derived from the force field that describes the strength of interactions between residues. Despite the simplistic assumptions about the form of these force fields, NMA has been shown to successfully explain many dynamical phenomena amongst numerous proteins (Gibrat & Gō, 1990; Tama & Sanejouand, 2001; Bahar et al., 1997). Most functional properties of proteins that involve dynamics are related to the low-frequency motions, mathematically represented as the lowest non-trivial eigenvectors of the matrix equation.
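To make the computation concrete, here is a minimal NumPy sketch of how the modes can be obtained from K and M via the mass-weighted Hessian (the construction of K from a particular force field is omitted; this is an illustration, not the authors' implementation):

```python
import numpy as np

def normal_modes(K, M):
    # Solve K v = w^2 M v by symmetrizing with M^(-1/2); assumes a diagonal
    # mass matrix M. Eigenvalues come back in ascending order from eigh.
    M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
    H = M_inv_sqrt @ K @ M_inv_sqrt
    w2, V = np.linalg.eigh(H)
    modes = M_inv_sqrt @ V  # back to Cartesian displacements
    # The first 6 modes (w^2 ~ 0) are rigid-body translations/rotations,
    # so the lowest non-trivial mode is column 6.
    return w2, modes
```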
3 METHODS
Consider the following problem: given a target matrix \( y_D \in \mathbb{R}^{|C| \times 3} \), where rows correspond to displacement vectors of \( C \) residues, we aim to generate a new protein in which the displacement vectors of selected residues in the lowest non-trivial normal mode are close to those defined by \( y_D \). We use a coarse-grained protein representation, where each residue is represented by the \( C_\alpha \) carbon only, and aim to obtain new \( C_\alpha \) chains that satisfy the dynamics constraint. To tackle this problem we employ score-based generative modelling (Song et al., 2021). We formulate the agreement of the displacements with a target as a condition in the reverse process and quantify the notion of ‘similar dynamics’ with a custom loss function.
3.1 Conditioning Diffusion Models
The goal of conditional generative modeling is to sample from the posterior \( p(x_0|y) \) such that new samples \( x_0 \) satisfy some chosen property \( y \). We specify the following model (Song et al., 2023, Equation 4)
\[
p(x_0|y) = \frac{p(x_0) \exp[-l(y, v(x_0))]}{\kappa(y)}, \qquad \kappa(y) = \int p(x_0) \exp[-l(y, v(x_0))]\, dx_0,
\]
where \( l(y,v(x_0)) \) measures the loss for a measurement of \( y \) at \( x_0 \), \( \kappa(y) \) is the normalisation constant, and \( v(x) \) maps to the relevant physical quantity represented by \( y \). This specification, as shown in Song et al. (2023), allows for guiding a trained unconditional model along the path specified by the loss \( l \). Finding an appropriate \( p(y|x_0) \) is where the novelty of our method lies. For the dynamics target \( y \), if \( p(y|x_0) \) was a neural network, it would need to approximate the eigenvectors of an arbitrary symmetric matrix. To the best of our knowledge, finding matrix eigenvectors for any variable size symmetric matrix with a neural network is not considered a solved problem yet (there exist neural network approaches to find eigenvectors, but those require retraining for every new matrix (Gemp et al., 2021; Yi et al., 2004), and are not suitable for a large dataset of backbone structures).
A method to reconstruct a graph structure from a set of learned eigenvectors via iterative Laplacian matrix refinement is presented in Martinkus et al. (2022); however, this approach has never been tested for reverse reconstruction. We avoid the need to train a neural network altogether and equate \( p(y|x_0) \) to a simple analytical function.
One of the most common mathematical frameworks to obtain a novel sample with any desired property \( y \) consists of estimating conditional scores. Different approximations for estimating said score have given rise to a variety of methods such as classifier guidance (Dhariwal & Nichol, 2021), classifier free guidance (Ho & Salimans, 2022), and ‘reconstruction guidance’ (Ho et al., 2022; Chung et al., 2022a). What all these approaches have in common is that they decompose the conditional score as
\[
\nabla_x \ln p_t(x_t|y) = \nabla_x \ln p(y|x_t) + \nabla_x \ln p_t(x_t),
\]
where \( p(y|x_t) \) is a probability that the sample meets the condition at \( t = 0 \) given the state \( x_t \) at some other time. Following Chung et al. (2022a), we re-express it with the integral
\[
p(y|x_t) = \int p(y|x_0)p_0(x_0|x_t)dx_0.
\]
The integral is intractable and we cannot evaluate \( p_0(x_0|x_t) \) directly. But as in Chung et al. (2022a), we overcome this via the approximation of the denoiser’s transition density with a delta function centred at the mean
\[
p_0(x_0|x_t) \approx \delta_{E[x_0|x_t]}(x_0).
\]
Such approximations to the posteriors via point masses centred at their means rather than their modes (MAP) are known as Bayes point machines (Herbrich et al., 2001), and have been shown to outperform MAP. Under this approximation, the entire integral simplifies to
\[
p(y|x_t) \approx p(y|E[x_0|x_t]).
\]
Via Tweedie’s formula (Chung et al., 2022a), the expected output of the model at \( t = 0 \) is
\[
E[x_0|x_t] = \frac{x_t + (1 - \bar{\alpha}_t)\, s(x_t, t)}{\sqrt{\bar{\alpha}_t}}.
\]
Under our model specification, via Bayes rule
\[
p(y|E[x_0|x_t]) = p(E[x_0|x_t]|y)p(y)/p(E[x_0|x_t]),
\]
substituting back into the score we obtain
\[
\nabla_x \ln \left( \frac{p(E[x_0|x_t]) \exp[-l(y, v(E[x_0|x_t]))]\, p(y)}{\kappa(y)\, p(E[x_0|x_t])} \right) = -\nabla_x\, l(y, v(E[x_0|x_t])),
\]
Depending on the quantity \( y \), different losses must be used in Equation 14. Note that even though the derivations are done in continuous time, the equivalence of the score and the noise still applies, and we can use the discretised sampling scheme as in Equation 3. Now, we explain our choices for dynamics and structure conditioning losses.
### 3.2 Dynamics Loss
The next step is to define the loss function in Equation 14 that enforces the targeted dynamics while being invariant to protein rotations and translations. Knowing the expected residues’ positions at \( t = 0 \) and the expected components of the normal mode of the conditioned residues given the structure \( x_t \) at some time \( t \), invariance is preserved if one compares the relative pairwise angles between the displacement vectors and their relative magnitudes. Moreover, this makes the conditioning target independent of the protein length: since eigenvectors are normalised, the absolute displacement amplitudes of a subset of residues depend on the protein length, whereas relative amplitudes do not. Therefore, we propose to use the following loss in Equation 14, which is a simple combination of amplitude and angle terms over all residue pairs. For the rest of this work, we refer to it as the NMA-loss.
\[
l_{NMA}(y_D,v(x)) = l_{angle}(y_D,v(x)) + l_{ampl}(y_D,v(x)),
\]
\[
l_{angle} = \sum_{i,j \in C} |\cos(y_{D,i},y_{D,j}) - \cos(v(x_t)_i,v(x_t)_j)|,
\]
\[
l_{ampl} = \sum_{i \in C} \left| \frac{\|y_{D,i}\|}{\|y_D\|} - \frac{\|v(x)_i\|}{\|v(x)\|} \right|.
\]
In this invariant loss, \( y_{D,i} \) and \( v(x)_i \) are displacement vectors of residue \( i \in C \) in the target \( y_D \) and in the displacements matrix \( v(x) \in \mathbb{R}^{[C] \times 3} \) derived from expected positions at \( t = 0 \). The amplitude terms are normalised such that only their relative sizes matter, consistent with the fact that amplitude information from NMA can only make relative statements about the participation of a given residue in a mode (Bahar et al., 2010). For the combined loss, in the process of minimisation of NMA-loss in the sampling steps, the \( l_{ampl} \) is scaled by 2, such that its contribution is similar in magnitude to \( l_{angle} \).
We compute the NMA-loss using a differentiable implementation of the eigenvector calculations, assuming the Hinsen force field (Hinsen & Kneller, 1999); more details are in Appendix B.2.
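A minimal differentiable sketch of the NMA-loss; the differentiable eigenvector computation that produces the predicted displacements `v_x` is assumed and omitted here:

```python
import torch
import torch.nn.functional as F

def nma_loss(y_d, v_x, ampl_weight=2.0):
    # y_d, v_x: |C| x 3 displacement matrices (target / predicted).
    yn = F.normalize(y_d, dim=-1)
    vn = F.normalize(v_x, dim=-1)
    # Pairwise cosine terms between all displacement vectors (angle term).
    l_angle = (yn @ yn.T - vn @ vn.T).abs().sum()
    # Relative amplitudes, normalised by the full eigenvector norms.
    l_ampl = (y_d.norm(dim=-1) / y_d.norm()
              - v_x.norm(dim=-1) / v_x.norm()).abs().sum()
    return l_angle + ampl_weight * l_ampl  # amplitude term scaled by 2, as in the text
```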
3.3 Structure Loss and Joint Conditioning
The essential part of our work is building a connection between conditioning on dynamics and conditioning on structure. Even though dynamics and structure are correlated, many structures will have similar low-frequency eigenvectors, and there is no guarantee that the particular protein packing will correspond to the biological function for which the dynamics were designed. Therefore, dynamics conditioning must be accompanied by structure conditioning. Structure conditioning enforces the generated protein backbone to have a subset of residues \( C_M \) placed at pre-defined relative positions. For example, structure conditioning might enforce the presence of a given functional motif \( M \) somewhere in the arbitrarily rotated protein. We denote the target positions as \( y_M \in \mathbb{R}^{C_M \times 3} \), and \( x_{C_M} \in \mathbb{R}^{C_M \times 3} \) is the prediction of the conditioned residues’ positions at \( t = 0 \) in the sampling process. In the language of score-based generative modeling, the conditional score for the joint target \((y_D, y_M)\) is now decomposed into three terms
\[
\nabla_{x_t} \ln p_t(x_t | y_D, y_M) = \nabla_{x_t} \ln p(y_D | x_t) + \nabla_{x_t} \ln p(y_M | x_t) + \nabla_{x_t} \ln p_t(x_t).
\]
Finally, the appropriate structure loss should be substituted into the \( \nabla_{x_t} \ln p(y_M | x_t) \) term. We define the structure loss as the misalignment between \( y_M \) and \( x_{C_M} \), specifically the \( L1 \) loss between all \( C_M \) residues’ coordinates. In order not to violate equivariance, we use our custom differentiable implementation of the Kabsch algorithm (Kabsch, 1976; 1978) to find the best fit of the target residues \( y_M \) and \( x_{C_M} \) at each reverse diffusion step and only then compute the misalignment. In the discussion of the results, we report the final root-mean-square deviation (RMSD), which is related to but different from the structure loss (see Section 5.2).
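A minimal sketch of this structure loss with a differentiable Kabsch alignment (an illustration, not the authors' implementation):

```python
import torch

def kabsch_l1_loss(y_m, x_cm):
    # y_m, x_cm: |C_M| x 3 target / predicted motif coordinates.
    y_c = y_m - y_m.mean(0)                    # centre both point clouds
    x_c = x_cm - x_cm.mean(0)
    U, S, Vt = torch.linalg.svd(y_c.T @ x_c)   # SVD of the covariance
    d = torch.sign(torch.linalg.det(U @ Vt))   # guard against reflections
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = U @ D @ Vt                             # optimal rotation (Kabsch)
    return (y_c @ R - x_c).abs().mean()        # L1 misalignment after fitting
```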
4 Models and the Experimental Setup
The aim of the experimental evaluation is two-fold. Firstly, we test whether the proposed conditioning method indeed results in better agreement between the target and the novel structure’s dynamics. To do so, we use our custom denoiser model, perform conditional sampling using a large number of dynamics targets and examine the conditioning effectiveness. Secondly, we utilise Genie (Lin & AlQuraishi, 2023), a diffusion model able to produce high-quality samples, and modify its sampling scheme with our joint conditioning. We thereby demonstrate the universality of our framework, which leaves an open path to transferring our method to other large protein diffusion models. The modified Genie model produces samples conditioned on the hinge targets, which we thoroughly evaluate for designability.
4.1 Models
GVP (Geometric Vector Perceptron (Jing et al., 2021b;a)) is the main building block of our equivariant denoiser. We use a Graph Neural Network with 5 layers based on GVP (details in the Appendix B.1). The denoiser was trained with the loss function given by Equation 4. We use the Hoogeboom schedule (Hoogeboom et al., 2022) with a 250-step DDPM discretisation scheme. The model was trained for 1000 epochs with a learning rate of 1e-4.
Genie. Genie (Lin & AlQuraishi, 2023) is a diffusion probabilistic model with the DDPM discretisation. It takes advantage of the protein geometry by extracting the Frenet-Serret frames of residues at each noise prediction step, which are then passed to the SE(3)-equivariant denoiser. Genie outperformed other models such as ProtDiff (Trippe et al., 2023), FoldingDiff (Wu et al., 2022) or FrameDiff (Yim et al., 2023), and remains comparable to RFDiffusion (Watson et al., 2022). For our experiments, we used the published weights of the model trained on the SCOPe dataset (Fox et al., 2014; Chandonia et al., 2021) able to work with proteins up to 256 residues long.
4.2 Dataset and Targets
For our custom model training, we extract all short monomeric CATHv4.3 domains (Orengo et al., 1997) with high-resolution structures (< 3 Å) and lengths between 21 and 112 amino acids, clustered at 95% sequence similarity to remove redundancy. The resulting dataset contained 10037 protein structures.
We extract random and strain dynamics targets from the proteins in the validation set. Random targets are the displacements in the randomly chosen sets of 10 consecutive residues; for the strain
targets, we perform strain-energy calculation (Hinsen & Kneller, 1999) (details in the Appendix B.2) and choose 10 consecutive residues with the largest summed energy.
Joint conditioning imposes constraints on both the protein normal mode and the specific residues’ positions. Biologically relevant targets that require such constraints are the hinge parts of proteins. Three proteins were selected from the literature: lysozyme (PDB ID: 6lyz), adenylate kinase (PDB ID: 3adk), and haemoglobin (PDB ID: 2hhb). In each protein we analysed which residues participate in the hinge motion – those residues constitute the $y_M$ targets. For each protein we perform NMA calculation to obtain the displacements of the hinge residues – the $y_D$ targets (details in Appendix D).
4.3 Evaluation metrics
Population level. For the first set of experiments investigating dynamics conditioning, we focus on quick-to-compute statistics of the large sample set to understand the expected effects of conditioning on sample quality. Apart from the NMA-loss, we check the sample quality using: (1) the mean chain distance (\( C_\alpha - C_\alpha \)), which should be close to 3.8 Å; (2) the radius of gyration of the backbone, which indicates whether the model produces samples with adequate compactness; (3) secondary structure statistics (SSE), that is, the proportion of \( \alpha \)-helices, \( \beta \)-sheets and disordered loops; (4) novelty in terms of the TM-score to the closest structure in the train set. TM-score measures the topological similarity of protein structures and takes values in the range \([0, 1]\); a TM-score > 0.5 suggests two structures are in the same fold (Xu & Zhang, 2010).
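A minimal sketch of the first two geometry checks, assuming an N x 3 array of \( C_\alpha \) coordinates:

```python
import numpy as np

def chain_stats(coords):
    # Mean consecutive C-alpha distance (should be ~3.8 A) and the
    # (mass-unweighted) radius of gyration of the backbone.
    mean_ca_dist = np.linalg.norm(np.diff(coords, axis=0), axis=1).mean()
    r_g = np.sqrt(((coords - coords.mean(0)) ** 2).sum(1).mean())
    return mean_ca_dist, r_g
```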
Detailed statistics. In the case of joint conditioning, we sample novel protein backbones using Genie and check the designability of the new samples using the same in silico evaluation pipeline as in benchmarking unconditional Genie. For each backbone sample, we obtain 8 ProteinMPNN generated sequences and fold each sequence with ESMFold (Lin et al., 2022). We calculate the self-consistency TM-scores (scTM), that is, the TM-scores between the input structure and each of the ESMFold predictions. scTM scores were also considered in other works (Trippe et al., 2023; Lin & AlQuraishi, 2023) as one of the standard metrics for sample quality evaluation. We report the proportion of conditional samples whose best scTM-score to one of the ESMFold designed structures is > 0.5, in the same fashion as in Trippe et al. (2023) that tackles a similar motif conditioning problem.
4.4 Sampling details.
Dynamics conditioning with GVP. The sampling process consisted of 250 reverse diffusion steps (details in Appendix B.3). We extracted 300 strain and 300 random targets from 300 randomly sampled proteins from the validation set. For each target, we took 3 conditional and 3 unconditional samples, and from each group we selected the one with the lowest NMA-loss. Each sample had the same length as the protein from which the target was extracted.
Joint conditioning with Genie. The original Genie sampling loop with 1000 time steps was modified to include the conditional score (details in Appendix B.3). The guidance scales differed for each target and were on the order of 2000-3000.
5 Results and discussion
5.1 Strain and random dynamics targets
Here we present the results for the strain and random dynamics targets. At the start, we filter out the ‘low quality’ samples that evidently do not form biologically valid proteins (details in Appendix C). We examine whether the conditioning has the desired effect of enforcing the target normal mode. Figure 2 shows that indeed, the NMA-loss is successfully minimised in the conditional samples compared to the unconditional ones. Note that both the target normal mode and the mode of the newly sampled structure must obey the physical constraints imposed on all proteins, and the degrees of freedom of the relative displacements are limited; therefore, it is occasionally possible to obtain a low loss even for an unconditional sample. Encouraged by this finding, we proceed to visual inspection of the samples. Figure 3 shows a pair of conditional and unconditional samples for one of the strain targets (additional sampled pairs are in Appendix G). There is a better
alignment of the displacement vectors and target vectors for the conditional sample as compared to the unconditional one, which we also consistently observed for the rest of the sampled pairs. We conclude that our conditioning has the desired effect of enforcing the target dynamics. We therefore proceed to the quality check of the samples – we must ensure the conditioning does not compromise the backbone structure. To ensure that the sampled proteins are still biologically valid, we evaluate their geometry. In the end, we investigate the samples’ novelty to check whether the diffusion model has not simply memorised the train set.
Figure 4 shows the SSE and \( R_g \) of the samples compared to the train CATH dataset. Unconditional samples show a variety of SSE in proportions close to the CATH dataset. Interestingly, we found that conditioning increases the proportion of \( \beta \)-sheets at the expense of \( \alpha \)-helices. The \( R_g \) distributions of both unconditional and conditional samples visibly overlap with the CATH \( R_g \) distribution; the latter is shifted to larger values (but remains within the \( R_g \) range observed in CATH). Therefore, while the conditional samples do not violate physical constraints, the dynamics conditioning introduces changes in protein packing. Whether this effect is significant for downstream applications when the conditioning is transferred into problem-specific models is left for future work. The respective figures for the random targets can be found in Appendix A. Lastly, we calculate the novelty of the samples expressed in terms of TM-score to the closest structure in the train set. Both unconditional and conditional samples of both target types were highly novel, with a TM-score lower than 0.5 for 90% of the samples.
5.2 HINGE TARGET
Finally, we present results for the joint conditioning. The conditional samples were filtered to keep those with mean chain distances within the [3.75, 3.85] Å interval and motif RMSD smaller than 1 Å. These constraints left us with 43%, 60% and 23% of the conditional samples for
lysozyme, adenylate kinase and haemoglobin, respectively, such that we ended up with 27 conditional samples. To match that number, we sampled 27 unconditional ones. In the analysis of the remaining samples, we considered the distributions of NMA-loss (see Figure 5) and scTM-score. The distribution of the NMA-loss confirms that our method can enforce the specific dynamics and conditions on the structure at the same time. Analysis of the designability revealed that the distribution of scTM-scores depends on the target we use. The proportions of conditional samples with scTM-score > 0.5 were 0.48, 0.78, 0.41 for lysozyme, adenylate kinase and haemoglobin, respectively. Interestingly, when we sampled 27 structures just with the hinge dynamics conditioning, those values were 0.93, 1.0, and 0.89, respectively, and the decrease in designability can be attributed purely to the difficulties in the structure conditioning (Appendix E). Additional experiments with a conditionally trained Genie model and extra designability results can be found in Appendix H. We finish with the visual investigation of the generated hinge structures. Figure 1 shows pairs of the targets and the new samples (more examples in the Appendix G). The new samples indeed possess the hinge structure, as well as the hinge-like low-frequency motion.
6 CONCLUSIONS AND FURTHER WORK
For the first time, we condition the protein diffusion model on dynamics, thus paving the way to designing more functional proteins in the future. We also make the code publicly available\(^1\). We generate novel proteins with a pre-defined lowest non-trivial normal mode of oscillation for a subset of residues. The large-scale statistics show that the conditioning is effective and can be transferred to already trained unconditional models. The extended version of the conditioning that includes the structure conditioning is implemented as part of the unconditional Genie model and we produce novel proteins that exhibit hinge structure and dynamics while remaining designable by the scTM.
---
\(^1\)Code available at https://github.com/ujk21/dyn-informed.
criteria. Further work includes integrating the dynamics conditioning with other types of structure conditioning, and further evaluation with other types of motions.
REFERENCES
Ivet Bahar, Ali Rana Atilgan, and Burak Erman. Direct evaluation of thermal fluctuations in proteins using a single-parameter harmonic potential. *Folding and Design*, 2(3):173–181, 1997.
Ivet Bahar, Timothy R. Lezon, Ahmet Bakan, and Indira H. Shrivastava. Normal Mode Analysis of Biomolecular Structures: Functional Mechanisms of Membrane Proteins. *Chemical Reviews*, 110(3):1463–1497, 2010.
Jacob A. Bauer, Jelena Pavlović, and Vladena Baurová-Hlinková. Normal mode analysis as a routine part of a structural investigation. *Molecules*, 24(18):3293, Sep 2019. ISSN 1420-3049. doi: 10.3390/molecules24183293. URL http://dx.doi.org/10.3390/molecules24183293.
Nathaniel Bennett, Brian Coventry, Inna Goreshnik, Buwei Huang, Aza Allen, Dionne Vafeados, Ying Po Peng, Justas Dauparas, Minkyung Baek, Lance Stewart, Frank DiMaio, Steven De Munck, Savvas N. Savvides, and David Baker. Improving de novo protein binder design with deep learning. *bioRxiv*, 2022. doi: 10.1101/2022.06.15.495993. URL https://www.biorxiv.org/content/early/2022/06/17/2022.06.15.495993.
Bernard Brooks and Martin Karplus. Normal modes for specific motions of macromolecules: application to the hinge-bending mode of lysozyme. *Proceedings of the National Academy of Sciences*, 82(15):4995–4999, 1985.
Patrick Bryant. Structure prediction of alternative protein conformations. *bioRxiv*, 2023. doi: 10.1101/2023.09.25.559256. URL https://www.biorxiv.org/content/early/2023/09/25/2023.09.25.559256.
John-Marc Chandonia, Lindsey Guan, Shiangyi Lin, Changhua Yu, Naomi K Fox, and Steven E Brenner. SCOPe: improvements to the structural classification of proteins – extended database to facilitate variant interpretation and machine learning. *Nucleic Acids Research*, 50(D1):D553–D559, 12 2021. ISSN 0305-1048. doi: 10.1093/nar/gkab1054. URL https://doi.org/10.1093/nar/gkab1054.
Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In *The Eleventh International Conference on Learning Representations*, 2022a.
Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. *Advances in Neural Information Processing Systems*, 35:25683–25696, 2022b.
Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al. Robust deep learning–based protein sequence design using proteinmpnn. *Science*, 378(6615):49–56, 2022.
Prafulla Dhariwal and Alexander Quinn Nicol. Diffusion models beat GANs on image synthesis. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021. URL https://openreview.net/forum?id=AAWuCvzaVt.
Kieran Didi, Francisco Vargas, Simon V Mathis, Vincent Dutordoir, Emile Mathieu, Urszula J Komorowska, and Pietro Lio. A framework for conditional diffusion modelling with applications in motif scaffolding for protein design. *arXiv preprint arXiv:2312.09236*, 2023.
NK. Fox, Steven E. Brenner, and JM Chandonia. Scope: Structural classification of proteins—extended, integrating scop and astral data and classification of new structures. *Nucleic Acids Research*, 42:D304–D309, 2014. doi: 10.1093/nar/gkt1240.
|
20L7txbIa8
|
Given that model confidence is a part of the autoregressive prediction objective, how calibrated are the generations from the model during evaluation ? Concretely, (1) do the confidence estimates produced during generation form a valid probability distribution and (2) would it be possible to compute the calibration score for the produced probabilities (maybe something like the Expected Calibration Error, eg as done in [1])?
|
UNIPREDICT: LARGE LANGUAGE MODELS ARE UNIVERSAL TABULAR CLASSIFIERS
Anonymous authors
Paper under double-blind review
ABSTRACT
Tabular data prediction is a fundamental machine learning task for many applications. Existing methods predominantly employ discriminative modeling and operate under the assumption of a fixed target column, necessitating re-training for every new predictive task. Inspired by the generative power of large language models (LLMs), this paper exploits the idea of building universal tabular data predictors based on generative modeling, namely UniPredict. Here, we demonstrate the scalability of an LLM to extensive tabular datasets, enabling it to comprehend diverse tabular inputs and predict target variables following the provided instructions. Specifically, we train a single LLM on an aggregation of 169 tabular datasets with diverse targets and compare its performance against baselines that are trained on each dataset separately. We observe that this versatile UniPredict model demonstrates an advantage over other models, ranging from 5.4% to 13.4%, when compared with the best tree-boosting baseline and the best neural network baseline, respectively. We further test UniPredict in few-shot learning settings on another 62 tabular datasets. Our method achieves strong performance in quickly adapting to new tasks. In the low-resource few-shot setup, we observe a 100%+ performance advantage compared with XGBoost, and a significant margin over all baselines. We envision that UniPredict sheds light on developing a universal tabular data prediction system that learns from data at scale and serves a wide range of prediction tasks.
1 INTRODUCTION
Tabular data is organized in a tabular or spreadsheet format within a relational database. Each row within the table corresponds to a specific data sample, and the columns encompass a range of feature variables with diverse types, such as categorical, numerical, binary, and textual features. Tabular data prediction is fundamental to many real-world machine-learning applications such as click-through rate prediction (Yang & Zhai, 2022) and medical outcome prediction (Wang & Sun, 2022).
Nonetheless, most previous methods fall short by assuming a fixed target. This entails selecting a specific column, such as patient mortality in breast cancer cases, with the other columns as the input features. A model trained to predict this particular target therefore becomes specialized and cannot be employed to predict any other target, such as cancer relapse. To predict a different target, one must create a new dataset corresponding to the desired target and retrain the model. This practice entails substantial work in developing and hosting dataset-specific tabular data predictors.
Unlike most traditional algorithms that use discriminative modeling for tabular prediction, we intend to harness LLMs for tabular prediction through generative modeling. Figure 1 illustrates the difference between previous practices and our modeling paradigm. This paradigm provides substantial flexibility in (1) processing natural language descriptions of tabular data and (2) generating predictions for specified target labels based on input instructions. While previous works have tried to fine-tune LLMs for generating target labels of tabular data (Dinh et al., 2022; Hegselmann et al., 2023), they have their limitations, mainly that they still require training specific predictors for each dataset and target variable. Moreover, these generative prediction methods do not provide the confidence associated with their predictions as traditional tabular prediction models do. By contrast, the goal of this work is to build universal tabular predictors based on generative LLMs, which accept arbitrary inputs and predict arbitrary targets, following the input instructions.
Figure 1: Visualization for three tabular modeling paradigms. **Left**: In Traditional Tabular Modeling tasks (Figure 1a), distinct models are trained individually on each dataset, making them incapable of adaptation to new datasets with differing features and targets. **Middle**: In the In-Domain Tabular Modeling tasks (Figure 1b), where flexibility is allowed for features, the targets remain the same across datasets. **Right**: the proposed Universal Tabular Modeling paradigm (Figure 1c), which accommodates arbitrary inputs and predicting for arbitrary targets. This paradigm does not impose any restrictions on the domains of the datasets used. In Universal Tabular Modeling, the datasets can originate from entirely different domains.
Specifically, this work explores the ways to unlock the potential of LLMs as universal tabular data predictors, namely UniPredict, which hinges on the following insights:
- **Data Scale**: Scaling to 160+ diverse tabular datasets to fuel the training of a powerful LLM that performs prediction for diverse inputs and targets.
- **Prompt Engineering**: The prompts that integrate the metadata (e.g., the dataset description and schema of columns), the input tabular sample, and the instruction for prediction generation.
- **Instruction Tuning**: Instruction tuning that encourages LLM to not only generate the label but also provide confidence estimates for its predictions.
We elaborate on our framework in Section 2, followed by the experiment results in Section 3. In detail, we train a single UniPredict model on the aggregated training sets from 169 tabular datasets and test it on the corresponding test sets. For comparison, we train one unique baseline model for each tabular dataset and report their performances. We observe that the universal tabular predictor UniPredict outperforms the best neural network baselines by 13.4% and the best boosting algorithms by 5.4% across the test sets. Additionally, we observe that UniPredict exhibits an advantage in the low-resource regime; even as the sample size increases, it consistently remains among the top models. We close with the discussion of related papers in Section 4 and the conclusion in Section 5.
## 2 Method and Implementation
### 2.1 Problem Formulation
Before going into details of the proposed method, we define two problems that we aim to resolve:
**Universal Tabular Modeling** Given a dataset $D_n$ in any domain, we have its components $D_n = \{M_n, S_n; T_n\}$ that include the metadata $M_n$, samples $S_n$, and targets $T_n$. Different from traditional tabular models $f_n : S_n \rightarrow T_n$ (shown in Figure 1a) that gives a 1-to-1 dataset-model relationship, or in-domain tabular models $f_{task} : S_n \rightarrow T_{task}$ (shown in Figure 1b), we require a universal model $f_{univ} : S \rightarrow T$ such that $f_{univ}(S_n; M_n) = T_n$. This approach enables us to create
Figure 2: The UniPredict framework. It consists of three steps: 1) Prompt Setup sets up the prompts by metadata, sample serialization, and instructions; 2) Target Augmentation transforms target values into categories with confidence estimates; and 3) Learning fine-tunes the backbone model by prompts and targets yielded from the previous procedures.
a more versatile prediction setting. The model parameters are no longer dependent on any particular dataset or task domain. Instead, a single set of parameters, with the aid of metadata, can be used for all datasets from any domain (shown in Figure 1c).
**Few-shot Learning** We expect a model $f$ trained on datasets $\{D_1, D_2, \cdots, D_n\}$ to also be able to predict a new target $T_{n+1}$, given $(S_{n+1}, M_{n+1}) \in D_{n+1}$. We can fine-tune $f$ on the new dataset $D_{n+1}$ in a low-resource regime to achieve few-shot learning.
As illustrated in Figure 2, the UniPredict framework is structured around three primary steps. First, in Prompt Setup (§2.2), prompts are constructed from metadata, sample serialization, and instructions. Second, Target Augmentation (§2.3) transforms target values into categories accompanied by confidence estimates. Last, the Learning step (§2.4) fine-tunes the backbone model using the prompts and targets derived from the preceding procedures.
### 2.2 Prompt Engineering
Tabular data must be transformed into natural language inputs to be comprehended by LLMs, and the quality of this natural language input has been highlighted as having a major impact on the LLM’s performance (Zhao et al., 2021). We hereby present how we formulate the input prompt for our UniPredict framework. Technically, based on a dataset $D = \{M, S; T\}$, we define the function $\text{prompt}(M, S, I)$ that takes the pre-processed metadata $M$, the tabular sample $S$, and the instruction $I$ as input and performs serialization to produce the natural language input for the LLM:
- **Metadata** $M$: a serialized description of the context and schema definition of the dataset.
- **Tabular Sample** $S$: the serialized contents of the raw sample.
- **Instruction** $I$: the guidance that prompts the LLM to make the final prediction about the target, e.g., the probability prediction for each target class.
We describe the detailed setup of these components in the following sections. We also offer examples of the prompts used in Appendix B.1.
Metadata Re-formatting As UniPredict accommodates a wide range of tabular datasets with distinct schemas, the dataset metadata plays a vital role in facilitating language modeling on these diverse tabular data. For instance, many table columns are abbreviations or coded with a private dictionary, which hinders LLMs in comprehending the tabular inputs. In practice, the metadata is usually provided as unstructured text with the raw dataset. Here, we design a function \texttt{reformat(M)} that consolidates arbitrary input M into (1) a description of the target to predict and (2) semantic descriptions of the features. We employ GPT-3.5\footnote{OpenAI API: gpt-3.5-turbo} to automate the metadata reformatting process. We offer an example of the metadata reformatting process in Appendix B.2.
Feature Serialization Given the raw metadata M and the samples S, we define the function \texttt{serialize(c,v)} that produces a string output given column names c and feature values v, where c ∈ \texttt{reformat(M)} and v ∈ S. Each value is paired with its corresponding column in the format “\{column\} is \{value\}, \{column\} is \{value\}, ...”. In addition, we round numeric values to a fixed precision before tokenization; more data-dependent binning methods, such as adaptive histogram binning, may also be considered. Examples of the serialization can be found in Appendix B.3.
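A minimal sketch of this serialization step is given below; the helper structure and the default precision are our own illustrative choices, not the exact implementation.

```python
def serialize(columns, values, precision=2):
    """Render one tabular sample as '{column} is {value}, ...'."""
    parts = []
    for c, v in zip(columns, values):
        if isinstance(v, float):
            v = round(v, precision)  # round numerics to a fixed precision before tokenization
        parts.append(f"{c} is {v}")
    return ", ".join(parts)

# serialize(["age", "income"], [37, 52341.129])
# -> "age is 37, income is 52341.13"
```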
### 2.3 Instruction Formulation & Target Augmentation
When approaching tabular prediction with an LLM, the most natural idea is to provide the tabular sample as input and prompt the LLM to generate the target label (Dinh et al., 2022; Hegselmann et al., 2023). For instance, prompting the LLM with the input “Is the person’s annual income ≥ 50K?” yields the output “yes” or “no” as the binary prediction. However, this approach has two main drawbacks:
• Reliability Unlike traditional ML algorithms that produce a probability prediction for each class, this method merely produces the output label. Due to the uncertainty in text generation, the label prediction from the LLM may be unreliable without a numerical estimate of its confidence.
• Robustness We empirically find that this modeling paradigm may fail to converge on challenging tabular prediction tasks or noisy inputs. In these scenarios, the LLM may either refuse to generate predictions or simply continue the input text.
To overcome these challenges, we propose instructing models to predict the probability of each target class, e.g., “yes: 0.8; no: 0.2”. This is achieved by adding a target augmentation step.
Target Augmentation We transform the target label into a set of probabilities for each class via a function called “augment”. Formally, for target T in an arbitrary dataset D, we define a function \texttt{augment(T)} = \{C,P\}, where C are new categories of targets with semantic meaning and P are the assigned probabilities to each category. We extend the target into categorical one-hot encoding and then use an external predictor to create the calibrated probability distributions. This replaces the 0/1 one-hot encoding while maintaining the final prediction outcome. For datasets with discrete target values (e.g., classification), the target classes are processed by one-hot encoding. For continuous numerical targets (e.g., regression), the categories are defined by their quantiles.
We use an isotonically calibrated XGBoost classifier (Chen & Guestrin, 2016) with n\_estimators=100 as the external predictor. We train one predictor per dataset and then use it to produce the probability of each class for all samples. Note that this predictor serves as a probability estimator for sample labels without loss of information or data leakage. Formally, given the target classes t ∈ \{0,...,|C|\} and target probabilities p ∈ P, we define a function \texttt{serialize\_target(t,p)} that serializes target classes and probabilities into a sequence formatted as “class \{t_1\} : \{p_1\}, class \{t_2\} : \{p_2\}, ...”. This sequence is used as the reference output to fine-tune the LLM. Besides yielding confidence estimates, target augmentation offers richer supervision for the LLM, which we find vital for robustness during training and inference.
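The augmentation and its serialization can be sketched as follows. We assume scikit-learn's `CalibratedClassifierCV` with isotonic calibration around the XGBoost classifier, which matches the description above; the exact calibration protocol (e.g., the number of cross-validation folds) is our assumption.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from xgboost import XGBClassifier

def augment(X, t, n_folds=5):
    """Return calibrated class probabilities for integer-encoded targets t."""
    clf = CalibratedClassifierCV(XGBClassifier(n_estimators=100),
                                 method="isotonic", cv=n_folds)
    clf.fit(X, t)
    return clf.predict_proba(X)  # shape (n_samples, |C|)

def serialize_target(classes, probs):
    """Serialize one probability vector as 'class {t}: {p}, class {t}: {p}, ...'."""
    return ", ".join(f"class {t}: {p:.2f}" for t, p in zip(classes, probs))

# For a regression dataset, targets would first be discretized by quantiles, e.g.:
# t = np.digitize(y, np.quantile(y, [0.25, 0.5, 0.75]))
```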
Instruction Formulation The instruction I describes the objective that prompts the LLM to comprehend the input tabular sample and predict the augmented target \texttt{augment(T)}. Given the target classes t ∈ \{0,...,|C|\} and target semantic explanations e ∈ C, we define a function
`serialize_class(t, e)` that converts the classes `t`, and their corresponding semantic explanation `e`, into a natural language sequence “class `{t}` means `{e}`,...”. We present the example prompts in Appendix B.4.
### 2.4 Learning
**LLM for Tabular Prediction** During fine-tuning, our objective is to minimize the difference between the output sequence generated by the adapted LLM (represented by `LLM(prompt(M, S, I))`) and the reference output sequence generated from target augmentation (represented by `serialize_target(augment(T))`). During testing, however, we evaluate prediction correctness rather than the similarity between the output and reference sequences. To do this, we map the natural language sequence generated by the LLM to the actual class that the model is predicting and compare it with the ground-truth label. We use regular-expression matching for this mapping procedure. We include examples of such comparisons in Appendix B.5.
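For instance, the output mapping could be implemented with a pattern like the one below; the exact regular expression is not specified in the text, so this variant is illustrative.

```python
import re

def map_output_to_class(generated_text):
    """Parse 'class {t}: {p}' pairs from the LLM output and return the
    class with the highest predicted probability (None if parsing fails)."""
    pairs = re.findall(r"class\s+(\w+)\s*:\s*(\d*\.?\d+)", generated_text)
    if not pairs:
        return None  # e.g., the model refused or merely continued the input text
    return max(pairs, key=lambda tp: float(tp[1]))[0]

# map_output_to_class("class 0: 0.2, class 1: 0.8") -> "1"
```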
**Learning** In our model learning process, we generate prompts using samples and metadata from different datasets and update the model via instruction fine-tuning. Subsequently, we assess the model’s actual performance by comparing its class predictions (after output mapping) to the original target values. This evaluation is conducted on both the datasets used during training and previously unseen datasets. We adopt GPT-2 (Radford et al., 2019) as our backbone and use the huggingface\(^2\) package for training. See Appendix C.3 for the details of the parameter choices.
### 2.5 Our Implementation of UniPredict
**Dataset Setup** We collect the datasets from Kaggle\(^3\). We pre-select the datasets from the classification category and drop the datasets that do not provide organized and recognizable metadata. We leverage the Kaggle API\(^4\) to download both the raw data and their descriptions with an argument `--file-size csv` to restrict the dataset format. In this way, we simplify the follow-up dataset reading procedures. To ensure a comprehensive evaluation, we do not preselect datasets by their domains, categories, or purposes.
We end up with a training corpus built from 169 datasets. For each selected dataset, we apply a max-size cutoff of 7500 samples to prevent datasets with too many samples from dominating the corpus. The entire corpus contains 366,786 training samples. Dataset statistics can be found in Appendix C.2.
**Implementations** The target augmentation step is performed by the XGBoost classifiers. However, as mentioned in Section 2.3, other classifiers can be adopted as long as they produce proper probability values. Measuring the information contributed by different classifiers in this problem is also a potential topic to explore.
Besides the standard UniPredict framework, we instantiate a variant that takes only feature names from the metadata, named UniPredict-light; we name the standard version UniPredict-heavy. UniPredict-light is expected to require less fine-tuning time and to achieve equal or better performance when the dataset is well-maintained. Since no assumptions should be made about unknown datasets, UniPredict-heavy is the more reliable default. The implementation differences between the two variants can be found in Appendix B.1.
## 3 Experiment
In this section, we conducted extensive experiments with UniPredict and a suite of cutting-edge tabular prediction baselines, with a focus on answering the following research questions:
---
\(^2\)https://huggingface.co/
\(^3\)https://www.kaggle.com/datasets/
\(^4\)https://github.com/Kaggle/kaggle-api
Figure 3: The average accuracy and rank of UniPredict-heavy, UniPredict-light, TabLLM (Hegselmann et al., 2023), XGBoost (Chen & Guestrin, 2016), MLP, TabNet (Arik & Pfister, 2021), and FT-Transformer (Gorishniy et al., 2021) on 169 datasets. Each dot indicates a trial on one dataset. UniPredict-heavy demonstrates a remarkable performance advantage over the best neural network model (FT-Transformer), with a relative improvement of 13.4%. It also surpasses the best-performing tree-boosting algorithm by a margin of 5.4%. Our framework’s advantage is further confirmed by the model ranking in Figure 3b (lower is better).
• Universal Tabular Modeling (Section 3.2) Can a single UniPredict model succeed in performing a universal modeling of extensive tabular datasets?
• Few-shot learning (Section 3.3) Compared with the baselines, how well does a pre-trained UniPredict model adapt to new tasks?
• Analysis #1 (Section 3.4) Under what circumstances is UniPredict less competitive than the others?
• Analysis #2 (Section 3.5) What are the key factors that make UniPredict a successful candidate for universal tabular prediction?
### 3.1 Baseline Models
We included MLP as the simplest neural baseline. Drawing inspiration from the effectiveness of tree-boosting algorithms on tabular tasks, we assessed the performance of XGBoost (Chen & Guestrin, 2016), a preeminent model in this domain. To explore the effectiveness of attention-based models in our tasks, we also included TabNet (Arik & Pfister, 2021) and FT-Transformer (Gorishniy et al., 2021) to our experimental evaluation. Additionally, we incorporated TabLLM (Hegselmann et al., 2023) into our analysis, as it represents another model designed for tabular data with a focus on Large Language Models. The configurations and specifics of these baseline models are provided in Appendix C.1. Given the dataset-specific and non-transferable nature of the baseline models, we established isolated instances for each dataset included in our study. In contrast, for UniPredict, which aims at Universal Tabular Prediction, we instantiated a single model instance capable of handling all the datasets used in our experimentation.
### 3.2 Results on Universal Tabular Modeling
We assessed model accuracy on the test sets of all 169 datasets and summarize the results in Figure 3. Note that because the baseline models do not transfer to new datasets, a distinct model was trained for each dataset, as discussed in Section 3.1. Nonetheless, even without additional dataset-specific fine-tuning, both variants of UniPredict consistently outperform all baseline models in terms of accuracy.
Specifically, UniPredict-heavy achieves a notable increase in absolute accuracy of 2.2% compared to XGBoost, the top-performing baseline model. Meanwhile, UniPredict-light, following in the footsteps of its full-size counterpart, continues to exhibit better performance than the other models. The ranking metric confirms their dominance in terms of performance over the baselines.
Figure 4: The average accuracy and rank of UniPredict-heavy, UniPredict-light, TabLLM, XGBoost, MLP, TabNet, and FT-Transformer on 62 datasets. We vary the training data size, ranging from the lowest (10%) to the highest (90%) of the full dataset. The pre-trained UniPredict series exhibits remarkable data efficiency in generalizing to new tasks.
In this metric, both UniPredict-heavy and UniPredict-light consistently occupy top positions. Although XGBoost, as a representative tree-boosting method, shares a similar median ranking with the best-performing models, it displays a higher 25% quartile in Figure 3b, indicating a more spread-out distribution of rankings. The other baselines fail to deliver comparable performance. TabLLM, designed as an LLM-driven model for individual datasets, does not yield results on par with the other, lighter methods. Despite its moderate accuracy, it falls to the lower ranks in terms of median ranking. Further dataset-specific results on accuracy and rank are provided in Appendix D.1.
### 3.3 Results on Few-shot Learning
We evaluate UniPredict’s few-shot learning accuracy against baseline models trained individually on each of 62 datasets, each containing fewer than 100 samples. This setup evaluates models on low-resource datasets because (1) collecting high-quality samples is costly in practice, and (2) models that generalize well on large datasets do not always perform as well on small ones. We divide each dataset into a training set and a test set; the training set is used to train each baseline model and to fine-tune the pre-trained UniPredict and TabLLM. To thoroughly assess our model’s capacity for generalization, we devised multiple experimental configurations that partition the training data into different proportions, spanning from 10% to 90% of the entire dataset. For each setting, we trained separate baseline models on the respective datasets.
Figure 4 shows the accuracy and ranking of all models with varying training data sizes. The UniPredict series demonstrates a significant advantage in the low-resource regime, particularly when the training sets contain less than 50% of the samples. As the sample size increases, they consistently remain among the top-performing models. The same trend is reflected in the model rankings illustrated in Figure 4b. In contrast, XGBoost shines as the best model in resource-rich training setups, achieving an average accuracy of 0.62 when the training set size is 90% of the entire dataset. However, it struggles with small training sets. In the extreme low-resource case, where the training set proportion is 10%, it exhibits the poorest performance among all models, with an over 118% disadvantage relative to UniPredict-heavy, and ranks at the bottom. On the other hand, FT-Transformer, an attention-based model, performs comparably to UniPredict-heavy but does not surpass either UniPredict-light or XGBoost in any setup. Its rank, however, jumps to second in the last experimental setup in Figure 4b. MLP delivers moderate performance, while TabNet fails to converge effectively in these experimental setups. TabLLM encounters similar problems. Across all conditions, both TabLLM and TabNet consistently rank at the bottom and do not improve as the training set size scales up. Appendix D.2 provides a more detailed performance analysis of all models.
Figure 5: An overview of the causes for which either model (Figure 5a), UniPredict-heavy (Figure 5b), or UniPredict-light (Figure 5c) experienced poor performance. As described in Section 3.4, COL, FV, META, and OTH stand for Excessive Column Number, Bad Feature Values, Bad Metadata, and Other reasons, respectively. Among the 169 datasets examined, 8 datasets are included in UniPredict-heavy’s investigation, with 12 causes identified. UniPredict-light fails on 10 datasets, with 11 causes identified.
### 3.4 Achilles’ Heel: UniPredict’s Failure Analysis
In this section, we explore situations where the UniPredict framework does not perform well, providing insight for deploying and further improving UniPredict. We identified these situations by taking the datasets from the supervised setup (as used in Section 3.2) and selecting those on which either UniPredict-heavy or UniPredict-light ranks in the bottom 2 (6th or 7th) among all compared methods. For each of these datasets, we collected the potential causes of our method’s poor performance. We conclude that most failures can be attributed to one or more of the following causes:
• **COL**: Too many COLumns in the dataset. This may result in serialized input strings that exceed the context limit of the language model. It hence undermines model performance because the exceeding parts are pruned.
• **FV**: Poorly represented Feature Values that are challenging for the model to process and comprehend. Examples include an excessive number of numerical values or meaningless characters.
• **META**: Inadequate or ambiguous METAData, such as vague or meaningless column names and metadata, can confuse the model when comprehending the inputs.
• **OTH**: OTHer factors not explicitly covered above that may deteriorate model performance.
We include examples of each cause in Appendix D.3. As illustrated in Figure 5, bad feature values are the primary cause behind approximately half of the failures observed in our framework. Additionally, UniPredict-heavy is affected by confusing metadata descriptions and oversized columns. Interestingly, UniPredict-light, which is configured with minimal metadata usage (as discussed in Section 2.5), should be least affected by poor metadata. However, it paradoxically struggles more with uninterpretable feature values, leading to more instances of poor performance than the default setup, UniPredict-heavy.
In a nutshell, we offer three practical hints for deploying UniPredict: (1) provide informative and accurate metadata for the input tabular dataset; (2) increase the context window of the LLM predictor to process more complicated inputs; and (3) clean up bad feature values before training.
### 3.5 Ablation Study
In this section, we conduct an ablation study to examine whether re-formatting and augmenting the targets are critical factors in the success of UniPredict. The results are presented in Table 1. In the ablation, the language models were fine-tuned on labels that contained only the one-hot encoding of the target class, without the per-class confidence information. The results consistently demonstrate that, regardless of the variant (light or heavy), the model with target augmentation performs noticeably better than the model without it. Furthermore, it is noteworthy that the ablation of UniPredict-light results in a
| Task | UniP-h | Abl-h | UniP-l | Abl-l |
|------------------------------------|--------|-------|--------|-------|
| Universal Tabular Modeling (avg.) | 0.721 | 0.686 | 0.740 | 0.575 |
| Universal Tabular Modeling (med.) | 0.810 | 0.746 | 0.790 | 0.590 |
| Few-Shot Learning: Low-data (avg.) | 0.525 | 0.483 | 0.513 | 0.349 |
| Few-Shot Learning: Low-data (med.) | 0.521 | 0.474 | 0.500 | 0.289 |
| Few-Shot Learning: High-data (avg.)| 0.543 | 0.545 | 0.590 | 0.321 |
| Few-Shot Learning: High-data (med.)| 0.563 | 0.571 | 0.645 | 0.333 |
Table 1: The result of ablation among UniPredict-heavy (UniP-h), UniPredict-heavy without target augmentation (Abl-h), UniPredict-light (UniP-l), UniPredict-light without target augmentation (Abl-l). Tasks examined are Universal Tabular Modeling that uses the same setup as Section 3.2, and Few-shot Learning as Section 3.3. The latter task involves both a low-data setup (Train Set Proportion = 0.3) and a high-data setup (Train Set Proportion = 0.8), which correspond to the conditions shown in Figure 4. For each task and setup, we provide both the average and median performance metrics across all datasets.
more significant decrease in performance compared to UniPredict-heavy. This finding aligns with the conjecture made in Section 2.5 that the heavy variant is more robust and adaptable across different implementations and scenarios.
## 4 Related Work
Tabular Prediction. Tree-based models have shown outstanding performance on tabular prediction tasks (Chen & Guestrin, 2016; Ke et al., 2017). Inspired by the rise of deep learning for tabular prediction (Arik & Pfister, 2021), recent research has emphasized three directions of improvement: (1) taking advantage of pre-training or transfer learning on broad tabular data (Wang & Sun, 2022; Zhu et al., 2023); (2) adapting pre-trained large language models to generate the target label column as the prediction (Dinh et al., 2022; Hegselmann et al., 2023); and (3) mining the graph structure of the tabular dataset (Du et al., 2022; Chen et al., 2023). In addition, Wang et al. (2023) unify tabular data from various sources into a natural language format, establishing a tabular prediction pipeline capable of handling diverse inputs. However, most of these algorithms perform discriminative modeling and are hence restricted to predicting a fixed target. UniPredict, by contrast, relies on generative modeling to predict any user-specified target.
Large Language Model. LLMs have demonstrated remarkable capabilities in logical reasoning and in solving language tasks under instructions (Bubeck et al., 2023; Zhao et al., 2023a). This has motivated researchers to adopt LLMs for a series of tabular data tasks, including tabular data generation (Borisov et al., 2022) and table-to-text generation (Zhao et al., 2023b). Meanwhile, LLMs have been fine-tuned for tabular prediction as a generation task (Dinh et al., 2022; Hegselmann et al., 2023). While these studies have shown that LLMs are able to generate target labels given textualized tabular data, there remains an unexplored opportunity: constructing a versatile tabular predictor capable of handling a wide array of tabular datasets. In addition, previous LLM-based tabular predictors are usually trained to generate the target label without offering the corresponding prediction probabilities. We argue that it is crucial to inspect the prediction probabilities made by LLMs, which is necessary when deploying them in production.
## 5 Conclusion
We present UniPredict, a model that learns from an aggregation of widespread tabular datasets to perform universal tabular prediction. We train a single UniPredict model on 169 datasets with more than 300,000 samples and test it on another 62 datasets for few-shot learning. Empirically, UniPredict yields the best prediction accuracy of 0.81 (a 2.2% absolute and 5.4% relative improvement over XGBoost). On unseen datasets, after dataset-specific fine-tuning, it exhibits a clear advantage when the training sets contain less than 50% of the samples (a 118% relative advantage over XGBoost at train-ratio 0.1) and consistently ranks in the top 2 in all scenarios. We envision that UniPredict paves the way for deploying foundational tabular prediction systems.
## References
Sercan Ö Arik and Tomas Pfister. TabNet: Attentive interpretable tabular learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 6679–6687, 2021.
Vadim Borisov, Kathrin Sessler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. Language models are realistic tabular data generators. In The Eleventh International Conference on Learning Representations, 2022.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
Pei Chen, Soumajyoti Sarkar, Leonard Lausen, Balasubramaniam Srinivasan, Sheng Zha, Ruihong Huang, and George Karypis. HYTREL: Hypergraph-enhanced tabular data representation learning. arXiv preprint arXiv:2307.08623, 2023.
Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794, 2016.
Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee. LIFT: Language-interfaced fine-tuning for non-language machine learning tasks. Advances in Neural Information Processing Systems, 35:11763–11784, 2022.
Kounianhua Du, Weinan Zhang, Ruiwen Zhou, Yangkun Wang, Xilong Zhao, Jiarui Jin, Quan Gan, Zheng Zhang, and David P Wipf. Learning enhanced representation for tabular data via neighborhood propagation. Advances in Neural Information Processing Systems, 35:16373–16384, 2022.
Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. Advances in Neural Information Processing Systems, 34:18932–18943, 2021.
Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. TabLLM: Few-shot classification of tabular data with large language models. In International Conference on Artificial Intelligence and Statistics, pp. 5549–5581. PMLR, 2023.
Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30, 2017.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Zifeng Wang and Jimeng Sun. TransTab: Learning transferable tabular transformers across tables. arXiv preprint arXiv:2205.09328, 2022.
Zifeng Wang, Chufan Gao, Cao Xiao, and Jimeng Sun. AnyPredict: Foundation model for tabular prediction. arXiv preprint, 2023.
Yanwu Yang and Panyu Zhai. Click-through rate prediction in online advertising: A literature review. Information Processing & Management, 59(2):102853, 2022.
Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models, 2021.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023a.
Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan, Xiangru Tang, and Arman Cohan. Large language models are effective table-to-text generators, evaluators, and feedback providers. arXiv preprint arXiv:2305.14987, 2023b.
|
33XGfHLtZg
|
In that case, the known guarantee is $\geq \alpha$, which is loose by $\frac{2}{n+1}$. Does this method recover the tightest guarantees of Split Conformal Predictions when the loss function is set to be miscoverage?
|
CONFORMAL RISK CONTROL
Anastasios N. Angelopoulos\textsuperscript{1}, Stephen Bates\textsuperscript{2}, Adam Fisch\textsuperscript{2}, Lihua Lei\textsuperscript{3}, Tal Schuster\textsuperscript{4}
\textsuperscript{1}UC Berkeley \textsuperscript{2}MIT \textsuperscript{3}Stanford \textsuperscript{4}Google Research
ABSTRACT
We extend conformal prediction to control the expected value of any monotone loss function. The algorithm generalizes split conformal prediction together with its coverage guarantee. Like conformal prediction, the conformal risk control procedure is tight up to an $O(1/n)$ factor. We also introduce extensions of the idea to distribution shift, quantile risk control, multiple and adversarial risk control, and expectations of U-statistics. Worked examples from computer vision and natural language processing demonstrate the usage of our algorithm to bound the false negative rate, graph distance, and token-level F1-score.
## 1 Introduction
We seek to endow a pre-trained machine learning model with guarantees on its performance so as to ensure its safe deployment. Suppose we have a base model $f$ that maps inputs $x \in X$ to values in some other space, such as a probability distribution over classes. Our job is to design a procedure that post-processes the output of $f$ to give it a statistical safety property.
Split conformal prediction (Vovk et al., 2005; Papadopoulos et al., 2002), which we will refer to simply as “conformal prediction”, has been useful in areas such as computer vision (Angelopoulos et al., 2021b) and natural language processing (Fisch et al., 2021) to provide such a guarantee. By measuring the model’s performance on a calibration dataset $\{(X_i, Y_i)\}_{i=1}^n$ of feature-response pairs, conformal prediction post-processes the model to construct prediction sets that bound the miscoverage,
$$P(Y_{n+1} \notin C(X_{n+1})) \leq \alpha,$$
where $(X_{n+1}, Y_{n+1})$ is a new test point, $\alpha$ is a user-specified error rate (e.g., 10%), and $C$ is a function of the model and calibration data that outputs a prediction set. Note that $C$ is formed using the first $n$ data points, and the probability in (1) is over the randomness in all $n + 1$ data points (i.e., the draw of both the calibration points $1, \ldots, n$ and the test point $n + 1$).
In this work, we extend conformal prediction to prediction tasks where the natural notion of error is not simply miscoverage. In particular, our main result is that a generalization of conformal prediction provides guarantees of the form
$$E[\ell(C_\lambda(X_{n+1}), Y_{n+1})] \leq \alpha,$$
for any bounded loss function $\ell$ that shrinks as $C_\lambda(X_{n+1})$ grows, and $\lambda$ is an input parameter that controls the growth of $C_\lambda(X_{n+1})$. We call this conformal risk control. Note that (2) recovers the conformal miscoverage guarantee in (1) when using the miscoverage loss, $\ell(C_\lambda(X_{n+1}), Y_{n+1}) = 1\{Y_{n+1} \notin C_\lambda(X_{n+1})\}$. However, our algorithm also extends conformal prediction to situations where other loss functions, such as the false negative rate (FNR), are more appropriate.
As an example, consider multilabel classification, where the $Y_i \subseteq \{1, ..., K\}$ are sets comprising a subset of $K$ classes. Creating sets that contain all the classes may be too conservative if $K$ is massive; instead, given a trained multilabel classifier $f : X \rightarrow [0, 1]^K$, we want to output sets that include a large fraction of the true classes in $Y_i$. To that end, we post-process the model’s raw outputs into the set of classes with sufficiently high scores, $C_\lambda(x) = \{k : f(x)_k \geq 1 - \lambda\}$, where the main parameter of the algorithm $\lambda \in [0, 1]$ is a threshold. Note that as the threshold $\lambda$ grows, we include more classes in $C_\lambda(x)$—i.e., it becomes more conservative. In this case, conformal risk control finds a threshold value $\hat{\lambda}$ that controls the fraction of missed classes, i.e., the expected value
of $\ell(C_\lambda(X_{n+1}), Y_{n+1}) = 1 - |Y_{n+1} \cap C_\lambda(X_{n+1})|/|Y_{n+1}|$. Setting $\alpha = 0.1$ would ensure that our algorithm produces sets $C_\lambda(X_{n+1})$ containing $\geq 90\%$ of the true classes in $Y_{n+1}$ on average.
### 1.1 Algorithm and Preview of Main Results
Formally, we will consider post-processing the predictions of the model $f$ to create a function $C_\lambda(\cdot)$. The function has a parameter $\lambda$ that encodes its conservativeness: larger $\lambda$ values yield more conservative outputs (e.g., larger prediction sets). To measure the quality of $C_\lambda$, we consider a loss function $\ell(C_\lambda(x), y) \in (-\infty, B]$ for some $B < \infty$. We require this loss to be non-increasing in $\lambda$. Our goal is to choose $\hat{\lambda}$ based on the observed data $\{(X_i, Y_i)\}_{i=1}^n$ so that risk control as in (2) holds.
We now rewrite this same task in a more notationally convenient and abstract form. Consider an exchangeable collection of non-increasing, bounded, random functions $L_i : \Lambda \to (-\infty, B]$, $i = 1, \ldots, n + 1$, where $\Lambda$ is the space of all inputs (e.g., ‘thresholds’) to the function $L_i(\lambda)$. Throughout the paper, we assume $\lambda_{\text{max}} := \sup \Lambda \in \Lambda$, so that $L_i(\lambda_{\text{max}})$ is well-defined and satisfies $L_i(\lambda_{\text{max}}) \leq \alpha$ almost surely (i.e., the level $\alpha$ is achievable). We seek to use the first $n$ functions to choose a value of the parameter, $\hat{\lambda}$, so that the risk on the unseen function is controlled:
$$E[L_{n+1}(\hat{\lambda})] \leq \alpha.$$
(3)
We are primarily motivated by the case where $L_i(\lambda) = \ell(C_\lambda(X_i), Y_i)$, in which case the guarantee in (3) coincides with risk control as in (2).
Now we describe the algorithm. Let $\hat{R}_n(\lambda) = (L_1(\lambda) + \ldots + L_n(\lambda))/n$. Given any desired risk level upper bound $\alpha \in (-\infty, B)$, define
$$\hat{\lambda} = \inf \left\{ \lambda : \frac{n}{n+1}\hat{R}_n(\lambda) + \frac{B}{n+1} \leq \alpha \right\} = \inf \left\{ \lambda : \hat{R}_n(\lambda) \leq \alpha - \frac{B - \alpha}{n} \right\}. $$
(4)
Since $\hat{R}_n(\lambda)$ is monotone, we can efficiently search for $\hat{\lambda}$ using binary search to arbitrary precision. When the set is empty, we define $\hat{\lambda} = \lambda_{\text{max}}$. Our proposed conformal risk control algorithm is to deploy $\hat{\lambda}$ on the forthcoming test point. Our main result is that this algorithm satisfies (3). Intuitively, we can see that this algorithm reduces to searching for a value of $\lambda$ that results in a slightly conservative empirical risk—that gets less conservative when the difference between the worst-case risk ($B$) and the desired risk ($\alpha$) is smaller, or the calibration set size ($n$) is larger.
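A minimal sketch of this calibration step is given below; the function and variable names are ours, and we assume the empirical risk can be evaluated at any $\lambda$ as a (non-increasing) callable.

```python
def calibrate_lambda(empirical_risk, alpha, B, n, lam_min, lam_max, tol=1e-6):
    """Binary search for lambda_hat = inf{lam : R_n(lam) <= alpha - (B - alpha)/n}.

    empirical_risk(lam) returns (1/n) * sum_i L_i(lam) and must be
    non-increasing in lam; B is the almost-sure upper bound on the losses.
    """
    target = alpha - (B - alpha) / n
    if empirical_risk(lam_max) > target:  # the defining set is empty
        return lam_max                    # fall back to lambda_max by convention
    lo, hi = lam_min, lam_max
    while hi - lo > tol:                  # bisection is valid since R_n is monotone
        mid = (lo + hi) / 2
        if empirical_risk(mid) <= target:
            hi = mid                      # feasible: the infimum is at or below mid
        else:
            lo = mid
    return hi                             # returned value stays on the feasible side
```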
Moreover, when the $L_i$ are i.i.d. from a continuous distribution, we can show that the algorithm satisfies a tight lower bound saying it is not too conservative,
$$E[L_{n+1}(\hat{\lambda})] \geq \alpha - \frac{2B}{n+1}. $$
We show the reduction from conformal risk control to conformal prediction in Appendix A. Furthermore, if the risk is non-monotone, then this algorithm does not control the risk; we discuss this in Section 2.3. Finally, we provide practical examples on real-world data and several theoretical extensions of our procedure in Sections 3 and 4, respectively.
### 1.2 Related Work
Conformal prediction was developed by Vladimir Vovk and collaborators beginning in the late 1990s (Vovk et al., 1999, 2005), and has recently become a popular uncertainty estimation tool in the machine learning community, due to its favorable model-agnostic, distribution-free, finite-sample guarantees. See Angelopoulos & Bates (2021) for a modern introduction to the area or Shafer & Vovk (2008) for a more classical alternative. As previously discussed, in this paper we primarily build on split conformal prediction (Papadopoulos et al., 2002); statistical properties of this algorithm, including the coverage upper bound, were studied in Lei et al. (2013). Recently there have been many extensions of the conformal algorithm, mainly targeting deviations from exchangeability (Tibshirani et al., 2019; Gibbs & Candès, 2021; Barber et al., 2022; Fannjiang et al., 2022)
---
1 Note that any unbounded loss can be transformed to a bounded loss. For example, any unbounded loss $\ell(\lambda)$ on the positive reals can be transformed by taking the inverse tangent, i.e., $\ell'(\lambda) = \arctan(\ell(\lambda))$. As long as the transformation is monotone, a loss of $\alpha$ on the original loss corresponds exactly to a loss of $\alpha'$ on the new loss; controlling this transformed risk may be enough in practice.
and improved conditional coverage (Barber et al., 2020; Romano et al., 2019; Guan, 2020; Romano et al., 2020; Angelopoulos et al., 2021b). Most relevant to us is recent work on risk control in high probability (Vovk, 2012; Bates et al., 2021; Angelopoulos et al., 2021a) and its applications (Park et al., 2020; Fisch et al., 2022; Schuster et al., 2021, 2022; Sankaranarayanan et al., 2022; Angelopoulos et al., 2022a,b, inter alia). However, while Bates et al. (2021) and Angelopoulos et al. (2021a) operate in similar mathematical settings and we reuse much of their notation, the algorithm presented herein differs greatly: it is far more sample-efficient, simpler, and provides a guarantee in expectation rather than in probability. Our algorithm is entirely different (no existing algorithm gives guarantees in expectation for risk control), and its validity proof is mathematically unrelated to these previous works. See Appendix B for a detailed discussion and comparison of these algorithms, including experiments.
To further elaborate on the difference between our work and the broader existing literature, first consider conformal prediction. The purpose of conformal prediction is to provide coverage guarantees of the form in (1). The guarantee available through conformal risk control, (3), strictly subsumes that of conformal prediction; it is generally impossible to recast risk control as coverage control. As a second question, one might ask whether (3) can be achieved through standard statistical machinery, such as uniform concentration inequalities. Though it is possible to integrate a uniform concentration inequality to get a bound in expectation, this strategy tends to be excessively loose both in theory and in practice (see, e.g., the bound of Anthony & Shawe-Taylor (1993)). The technique herein avoids these complications; it is simpler than concentration-based approaches, practical to implement, and tight up to a factor of $1/n$, which is comparatively faster than concentration would allow.
## 2 Theory
In this section, we establish the core theoretical properties of conformal risk control. All proofs, unless otherwise specified, are deferred to Appendix E.
### 2.1 Risk Control
We first show that the proposed algorithm leads to risk control when the loss is monotone.
**Theorem 1.** Consider a sequence of exchangeable random loss functions $\{L_i(\lambda)\}_{i=1}^{n+1}$, with $L_i : \Lambda \to \mathbb{R}$, that are non-increasing in $\lambda$, right-continuous, and, for $\lambda_{\text{max}} = \sup \Lambda \in \Lambda$, satisfy
$$L_i(\lambda_{\text{max}}) \leq \alpha, \quad \sup_\lambda L_i(\lambda) \leq B < \infty \text{ almost surely.}$$
(5)
Then
$$\mathbb{E}[L_{n+1}(\hat{\lambda})] \leq \alpha.$$
**Proof.** Let $\hat{R}_{n+1}(\lambda) = (L_1(\lambda) + \ldots + L_{n+1}(\lambda))/(n + 1)$ and
$$\hat{\lambda}' = \inf \left\{ \lambda \in \Lambda : \hat{R}_{n+1}(\lambda) \leq \alpha \right\}.$$
Since $\inf_\lambda L_i(\lambda) = L_i(\lambda_{\text{max}}) \leq \alpha$, $\hat{\lambda}'$ is well-defined almost surely. Since $L_{n+1}(\lambda) \leq B$, we know
$$\hat{R}_{n+1}(\lambda) = \frac{n}{n+1} \hat{R}_n(\lambda) + \frac{L_{n+1}(\lambda)}{n+1} \leq \frac{n}{n+1} \hat{R}_n(\lambda) + \frac{B}{n+1}.$$
Thus,
$$\frac{n}{n+1} \hat{R}_n(\lambda) + \frac{B}{n+1} \leq \alpha \implies \hat{R}_{n+1}(\lambda) \leq \alpha.$$
This implies $\hat{\lambda}' \leq \hat{\lambda}$ whenever the condition on the left-hand side holds for some $\lambda \in \Lambda$. When it fails for all $\lambda \in \Lambda$, we have $\hat{\lambda} = \lambda_{\text{max}} \geq \hat{\lambda}'$ by definition. Thus, $\hat{\lambda}' \leq \hat{\lambda}$ almost surely. Since $L_i(\lambda)$ is non-increasing in $\lambda$,
$$\mathbb{E}\left[L_{n+1}(\hat{\lambda})\right] \leq \mathbb{E}\left[L_{n+1}(\hat{\lambda}')\right].$$
(6)
Let $E$ be the multiset of loss functions $\{L_1, \ldots, L_{n+1}\}$. Then $\hat{\lambda}'$ is a function of $E$, or equivalently, $\hat{\lambda}'$ is a constant conditional on $E$. Additionally, $L_{n+1}(\lambda)|E \sim \text{Uniform}(\{L_1, \ldots, L_{n+1}\})$ by exchangeability. These facts combined with the right-continuity of $L_i$ imply
$$\mathbb{E}\left[L_{n+1}(\hat{\lambda}') | E\right] = \frac{1}{n+1} \sum_{i=1}^{n+1} L_i(\hat{\lambda}') \leq \alpha.$$
The proof is completed by the law of total expectation and (6). □
### 2.2 A Tight Risk Lower Bound
Next we show that the conformal risk control procedure is tight up to a factor $2B/(n+1)$ that cannot be improved in general. The proof relies on a form of continuity that generalizes the assumption of continuous non-conformity scores used in the standard conformal proof. Define the jump function, which quantifies the size of the discontinuity of a right-continuous input function $l$ at the point $\lambda$, as $J(l, \lambda) = \lim_{\epsilon \to 0^+} l(\lambda - \epsilon) - l(\lambda)$. If the probability that $L_i$ has a discontinuity at any pre-specified $\lambda$ is zero, i.e., $\mathbb{P}(J(L_i, \lambda) > 0) = 0$, then the conformal risk control procedure is not too conservative.
**Theorem 2.** In the setting of Theorem 1, further assume that the $L_i$ are i.i.d., $L_i \geq 0$, and for any $\lambda$, $\mathbb{P}(J(L_i, \lambda) > 0) = 0$. Then
$$\mathbb{E}\left[L_{n+1}(\hat{\lambda})\right] \geq \alpha - \frac{2B}{n+1}.$$
This bound is tight for general monotone loss functions, as we show next.
**Proposition 1.** In the setting of Theorem 2, for any $\epsilon > 0$, there exists a loss function and $\alpha \in (0, 1)$ such that
$$\mathbb{E}\left[L_{n+1}(\hat{\lambda})\right] \leq \alpha - \frac{2B - \epsilon}{n+1}.$$
Since we can take $\epsilon$ arbitrarily close to zero, the factor $2B/(n+1)$ in Theorem 2 is required. Conformal prediction—both the algorithm and the guarantee—is exactly equivalent to conformal risk control when the loss function is an indicator, including the tighter lower bound (see Appendix A).
### 2.3 Controlling General Loss Functions
We next show that the conformal risk control algorithm does not control the risk if the $L_i$ are not assumed to be monotone; in particular, (3) does not hold in general. We show this by example.
**Proposition 2.** For any $\epsilon$, there exists a non-monotone loss function such that
$$\mathbb{E}\left[L_{n+1}(\hat{\lambda})\right] \geq B - \epsilon.$$
Notice that for any desired level $\alpha$, the expectation in (3) can be arbitrarily close to $B$. Since the function values here are in $[0, B]$, this means that even for bounded random variables, risk control can be violated by an arbitrary amount—unless further assumptions are placed on the $L_i$. However, the algorithms developed may still be appropriate for near-monotone loss functions. Simply ‘monotonizing’ all loss functions $L_i$ and running conformal risk control will guarantee (3), but this strategy will only be powerful (i.e., not conservative) if the loss is near-monotone. For concreteness, we describe this procedure below as a corollary of Theorem 1.
**Corollary 1.** Allow $L_i(\lambda)$ to be any (possibly non-monotone) function of $\lambda$ satisfying (5). Take
$$\tilde{L}_i(\lambda) = \sup_{\lambda' \geq \lambda} L_i(\lambda'), \quad \tilde{R}_n(\lambda) = \frac{1}{n} \sum_{i=1}^{n} \tilde{L}_i(\lambda) \quad \text{and} \quad \tilde{\lambda} = \inf \left\{ \lambda : \frac{n}{n+1} \tilde{R}_n(\lambda) + \frac{B}{n+1} \leq \alpha \right\}.$$
Then,
$$\mathbb{E}\left[L_{n+1}(\tilde{\lambda})\right] \leq \alpha.$$
If the loss function is already monotone, then $\tilde{\lambda}$ reduces to $\hat{\lambda}$. We propose a further algorithm for picking $\lambda$ in Appendix C that provides an asymptotic risk-control guarantee for non-monotone loss functions. However, this algorithm again is only powerful when the risk $\mathbb{E}[L_{n+1}(\lambda)]$ is near-monotone and reduces to the standard conformal risk control algorithm when the loss is monotone.
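On a finite grid of $\lambda$ values, the monotonization $\tilde{L}_i(\lambda) = \sup_{\lambda' \geq \lambda} L_i(\lambda')$ amounts to a cumulative maximum taken from the right; a minimal sketch (our own helper) follows.

```python
import numpy as np

def monotonize(losses):
    """losses[i, j] = L_i(lambda_j) on an increasing grid of lambda values.
    Returns L~_i(lambda_j) = max over lambda_k >= lambda_j of L_i(lambda_k),
    i.e., a reversed cumulative maximum along the lambda axis."""
    return np.maximum.accumulate(losses[:, ::-1], axis=1)[:, ::-1]
```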
## 3 Examples
To demonstrate the flexibility and empirical effectiveness of the proposed algorithm, we apply it to four tasks across computer vision and natural language processing. All four loss functions are non-binary, monotone losses bounded by 1. They are commonly used within their respective application domains. Our results validate that the procedure bounds the risk as desired and gives useful outputs to the end-user.
Figure 1: **FNR control in tumor segmentation.** The top figure shows examples of our procedure with correct pixels in white, false positives in blue, and false negatives in red. The bottom plots report FNR and set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.
Figure 2: **FNR control on MS COCO.** The top figure shows examples of our procedure with correct classes in black, false positives in blue, and false negatives in red. The bottom plots report FNR and set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.
We note that the choices of $C_\lambda$ used herein are only for purposes of illustration; any nested family of sets will work. For each example use case, at a representative $\alpha$ (details provided for each task), we provide both qualitative results and quantitative histograms of the risk and set sizes over 1000 random data splits, demonstrating valid risk control (i.e., mean risk $\leq \alpha$).
### 3.1 FNR Control in Tumor Segmentation
In tumor segmentation, our input is a $d \times d$ image and our label is a set of pixels $Y_i \in \wp(\{(1,1), (1,2), ..., (d,d)\})$, with $\wp$ the power set. We use an image segmentation model $f : X \rightarrow [0,1]^{d \times d}$ outputting a probability for each pixel and measure loss as the fraction of false negatives,
$$L_{\text{FNR}}^i(\lambda) = 1 - \frac{|Y_i \cap C_\lambda(X_i)|}{|Y_i|}, \quad \text{where } C_\lambda(X_i) = \{y : f(X_i)_y \geq 1 - \lambda\}. \quad (7)$$
The expected value of $L_{\text{FNR}}^i$ is the FNR. Since $L_{\text{FNR}}^i$ is monotone, so is the FNR. Thus, we use the technique in Section 2.1 to pick $\hat{\lambda}$ by (4) that controls the FNR on a new point, which guarantees:
$$\mathbb{E}\left[L_{\text{FNR}}^{n+1}(\hat{\lambda})\right] \leq \alpha. \quad (8)$$
To evaluate the proposed procedure, we pool data from several online open-source gut polyp segmentation datasets: Kvasir, Hyper-Kvasir, CVC-ColonDB, CVC-ClinicDB, and ETIS-Larib. We choose PraNet (Fan et al., 2020) as our base model $f$, use $n = 1000$ calibration points, and evaluate risk control with the 781 remaining validation data points. We report results with $\alpha = 0.1$ in Figure 1. The mean and standard deviation of the risk over 1000 trials are 0.0987 and 0.0114, respectively.
Figure 3: Control of graph distance on hierarchical ImageNet. The top figure shows examples of our procedure with correct classes in black, false positives in blue, and false negatives in red. The bottom plots report our minimum hierarchical distance loss and set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.
### 3.2 FNR Control in Multilabel Classification
In the multilabel classification setting, our input $X_i$ is an image and our label is a set of classes $Y_i \subset \{1, \ldots, K\}$ for some number of classes $K$. Using a multilabel classification model $f : X \to [0, 1]^K$, we form prediction sets and calculate the false negative rate exactly as in (7). By Theorem 1, picking $\hat{\lambda}$ as in (4) again yields the FNR-control guarantee in (8). We evaluate on the Microsoft Common Objects in Context (MS COCO) dataset (Lin et al., 2014), a large-scale 80-class multilabel classification task commonly used in computer vision. We choose a TResNet (Ridnik et al., 2020) as our model $f$, use $n = 4000$ calibration points, and evaluate risk control with 1000 validation data points. We report results with $\alpha = 0.1$ in Figure 2. The mean and standard deviation of the risk over 1000 trials are 0.0996 and 0.0052, respectively. The results indicate that the risk is almost exactly controlled, the spread is not too wide, and the set sizes are reasonable, not overly inflated.
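In both FNR applications, the loss in (7) is straightforward to evaluate; the sketch below (helper names are ours) plugs into the `calibrate_lambda` sketch from Section 1.1 and assumes each $Y_i$ is non-empty.

```python
import numpy as np

def fnr_loss(scores, label_mask, lam):
    """L_i(lam) = 1 - |Y_i ∩ C_lam(X_i)| / |Y_i| with C_lam(x) = {k : f(x)_k >= 1 - lam}.
    scores = f(X_i) in [0, 1]^K; label_mask = boolean indicator of the true classes Y_i."""
    in_set = scores >= 1 - lam
    return 1.0 - (in_set & label_mask).sum() / label_mask.sum()

# Calibrate on n labeled pairs, then deploy lam_hat on the test point:
# risk = lambda lam: np.mean([fnr_loss(s, y, lam) for s, y in cal_pairs])
# lam_hat = calibrate_lambda(risk, alpha=0.1, B=1.0, n=len(cal_pairs),
#                            lam_min=0.0, lam_max=1.0)
```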
### 3.3 Control of Graph Distance in Hierarchical Image Classification
In the $K$-class hierarchical classification setting, our input $X_i$ is an image and our label is a leaf node $Y_i \in \{1, \ldots, K\}$ on a tree with nodes $V$ and edges $E$. Using a single-class classification model $f : X \to \Delta^K$, we calibrate a loss in graph distance between the interior node we select and the closest ancestor of the true class. For any $x \in X$, let $\hat{y}(x) = \arg\max_k f(x)_k$ be the class with the highest estimated probability. Further, let $d : V \times V \to \mathbb{Z}$ be the function that returns the length of the shortest path between two nodes, let $A : V \to 2^V$ be the function that returns the ancestors of its argument, and let $P : V \to 2^V$ be the function that returns the set of leaf nodes that are descendants of its argument. We also let $g(v, x) = \sum_{k \in P(v)} f(x)_k$ be the sum of scores of leaves descended from $v$. Further, define a hierarchical distance
$$d_H(v, u) = \inf_{a \in A(v)} \{d(a, u)\}.$$
For a set of nodes $C_\lambda \in 2^V$, we then define the set-valued loss
$$L_i^{\text{Graph}}(\lambda) = \inf_{s \in C_\lambda(X_i)} \{d_H(y, s)\}/D,$$
where $C_\lambda(x) = \bigcap_{a \in A(\hat{y}(x)) : g(a, x) \geq -\lambda} P(a).$
This loss returns zero if $y$ is a descendant of any element in $C_\lambda$, and otherwise returns the minimum distance between any element of $C_\lambda$ and any ancestor of $y$, scaled by the depth $D$. Thus, it is a monotone loss function and can be controlled by choosing $\hat{\lambda}$ as in (4) to achieve the guarantee
$$\mathbb{E}\left[L_{n+1}^{\text{Graph}}(\hat{\lambda})\right] \leq \alpha.$$
Figure 4: F1-score control on Natural Questions. The top figure shows examples of our procedure with fully correct answers in green, partially correct answers in blue, and false positives in gray. Note that answers that are technically correct may still be down-graded if they do not match the reference. We treat this as part of the randomness in the task. The bottom plots report the F1 risk and average set size over 1000 independent random data splits. The dashed gray line marks $\alpha$.
We use the ImageNet dataset (Deng et al., 2009), which comes with an existing label hierarchy, WordNet, of maximum depth $D = 14$. We choose a ResNet152 (He et al., 2016) for $f$ and $n = 30000$, and evaluate risk with the remaining 20000. We report results with $\alpha = 0.05$ in Figure 3. The mean and standard deviation of the risk over 1000 trials are 0.0499 and 0.0011, respectively. The results indicate that the risk is almost exactly controlled, and that the adaptively chosen resolution of the prediction appropriately encodes the model uncertainty (it is almost always a leaf node).
### 3.4 F1-Score Control in Open-Domain Question Answering
In the open-domain question answering setting, our input $X_i$ is a question and our label $Y_i$ is a set of (possibly non-unique) correct answers. For example, the input
$$X_{n+1} = \text{“Where was Barack Obama Born?”}$$
could have the answer set
$$Y_{n+1} = \{\text{“Hawaii”, “Honolulu, Hawaii”, “Kapo’olani Medical Center”}\}$$
Formally, here we treat all questions and answers as being composed of sequences (up to size $m$) of tokens in a vocabulary $\mathcal{V}$—i.e., assuming $k$ valid answers, we have $X_i \in \mathcal{Z}$ and $Y_i \in \mathcal{Z}^k$, where $\mathcal{Z} := \mathcal{V}^m$. Using an open-domain question answering model that individually scores candidate output answers $f : \mathcal{Z} \times \mathcal{Z} \rightarrow \mathbb{R}$, we calibrate the best token-based F1-score of the prediction set, taken over all pairs of predictions and answers:
$$L_{F_1}^{i}(\lambda) = 1 - \max \left\{ F_1(a,c) : c \in C_\lambda(X_i), a \in Y_i \right\}, \text{ where } C_\lambda(X_i) = \{ y \in \mathcal{V}^m : f(X_i,y) \geq \lambda \}. $$
We define the F1-score following popular QA evaluation metrics (Rajpurkar et al., 2016), where we treat predictions and ground-truth answers as bags of tokens and compute the harmonic mean of their precision and recall (while ignoring punctuation and the articles {“a”, “an”, “the”}). Since $L_{F_1}^{i}$, defined in this way, is monotone and upper bounded by 1, it can be controlled by choosing $\hat{\lambda}$ as in Section 2.1 to achieve the following guarantee:
$$\mathbb{E} \left[ L_{F_1}^{n+1}(\hat{\lambda}) \right] \leq \alpha. $$
We use the Natural Questions (NQ) dataset (Kwiatkowski et al., 2019), a popular open-domain question answering baseline, to evaluate our method. We use the splits distributed as part of the Dense Passage Retrieval (DPR) package (Karpukhin et al., 2020). Our base model is the DPR Retriever-Reader model (Karpukhin et al., 2020), which retrieves passages from Wikipedia that might contain the answer to the given query, and then uses a reader model to extract text sub-spans from the retrieved passages that serve as candidate answers. Instead of enumerating all possible answers to a given question, we retrieve the top several hundred candidate answers, extracted from the top 100 passages. We use $n = 2500$ calibration points, and evaluate risk control with the remaining 1110. We use $\alpha = 0.3$ (chosen empirically as the lowest F1 score which reliably results in approximately correct answers by manual validation) in Figure 4. The mean and standard deviation of the risk over 1000 trials are 0.2996 and 0.0150, respectively. The results indicate that the risk is almost exactly controlled, and that the sets are reasonably sized, scaling appropriately with question difficulty.
## 4 Extensions
We now discuss several extensions of conformal risk control to different settings and risks.
### 4.1 Risk Control under Distributional Shift
Under a distribution shift, the goal in (3) can be redefined as
\[ \mathbb{E}_{(X_1,Y_1),\ldots,(X_n,Y_n)\sim P_{\text{train}}, (X_{n+1},Y_{n+1})\sim P_{\text{test}}} \left[ L_{n+1}(\hat{\lambda}) \right] \leq \alpha. \]
(9)
Assuming that \( P_{\text{test}} \) is absolutely continuous with respect to \( P_{\text{train}} \) and defining \( w(x,y) = \frac{dP_{\text{test}}(x,y)}{dP_{\text{train}}(x,y)} \), the weighted objective (9) can be rewritten as
\[ \mathbb{E}_{(X_1,Y_1),\ldots,(X_{n+1},Y_{n+1})\sim P_{\text{train}}} \left[ w(X_{n+1},Y_{n+1})L_{n+1}(\hat{\lambda}) \right] \leq \alpha. \]
(10)
When \( w \) is known and bounded, we can apply our procedure to the loss function \( \tilde{L}_{n+1}(\lambda) = w(X_{n+1},Y_{n+1})L_{n+1}(\lambda) \), which is non-increasing, bounded, and right-continuous in \( \lambda \) whenever \( L_{n+1} \) is. Thus, Theorem 1 guarantees that the resulting \( \hat{\lambda} \) satisfies (10). For example, in the covariate shift setting, \( w(X_{n+1},Y_{n+1}) = w(X_{n+1}) \triangleq \frac{dP_{\text{test}}(X_{n+1})}{dP_{\text{train}}(X_{n+1})} \). In this case, we can achieve risk control even when \( w \) is unbounded. In particular, assuming \( L_i \in [0,B] \), for any potential value \( x \) of the covariate, we define
\[ \hat{\lambda}(x) = \inf \left\{ \lambda : \frac{\sum_{i=1}^{n} w(X_i)L_i(\lambda) + w(x)B}{\sum_{i=1}^{n} w(X_i) + w(x)} \leq \alpha \right\}. \]
Proposition 3. In the setting of Theorem 1 with \( \hat{\lambda} \) as above,
\[ \mathbb{E}_{(X_1,Y_1),\ldots,(X_n,Y_n)\sim P_{\text{train}}, (X_{n+1},Y_{n+1})\sim P_{\text{test}}} \left[ L_{n+1}(\hat{\lambda}(X_{n+1})) \right] \leq \alpha. \]
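On a finite grid of $\lambda$ values, the weighted rule can be sketched as follows (helper names are ours; the first feasible grid point attains the infimum because the losses are non-increasing).

```python
import numpy as np

def calibrate_lambda_shift(losses, lambdas, w_cal, w_test, alpha, B):
    """Proposition 3 on a grid: losses[i, j] = L_i(lambdas[j]) with lambdas increasing,
    w_cal[i] = w(X_i) on the calibration set, w_test = w(x) at the test covariate."""
    num = (w_cal[:, None] * losses).sum(axis=0) + w_test * B
    ratio = num / (w_cal.sum() + w_test)   # non-increasing in lambda
    feasible = np.nonzero(ratio <= alpha)[0]
    return lambdas[feasible[0]] if feasible.size else lambdas[-1]
```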
This is an exact generalization of the procedure of Tibshirani et al. (2019) beyond indicator losses. As proposed therein, when unlabeled data in the test domain is available, \( w \) can be estimated via probabilistic classification; this gives good practical results (see also our experiment in Appendix D). For arbitrary distribution shifts, we give a total variation bound analogous to that of Barber et al. (2022) for independent data (see their Section 4.1), though the proof is different. Here we use the notation \( Z_i = (X_i,Y_i) \) and write \( \hat{\lambda}(Z_1,\ldots,Z_n) \) for the value chosen in (4).
Proposition 4. Let \( Z = (Z_1,\ldots,Z_{n+1}) \) be a sequence of random variables. Then, under the conditions in Theorem 1,
\[ \mathbb{E} \left[ L_{n+1}(\hat{\lambda}) \right] \leq \alpha + B \sum_{i=1}^{n} \text{TV}(Z_i,Z_{n+1}). \]
If further the assumptions of Theorem 2 hold,
\[ \mathbb{E} \left[ L_{n+1}(\hat{\lambda}) \right] \geq \alpha - B \left( \frac{2}{n+1} + \sum_{i=1}^{n} \text{TV}(Z_i,Z_{n+1}) \right). \]
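Returning to the covariate-shift case, the weighted calibration behind Proposition 3 is a small modification of the base procedure. Below is a sketch of our own, assuming the likelihood ratios `w_cal` and `w_test` are known or have been estimated (e.g., via probabilistic classification):

```python
import numpy as np

def calibrate_lambda_shift(losses, lambdas, w_cal, w_test, alpha, B=1.0):
    """Covariate-shift analogue of the calibration step (Proposition 3).

    losses: (n, k) array of L_i(lambdas[j]) on the calibration set.
    w_cal:  (n,) likelihood ratios w(X_i) = dP_test(X_i) / dP_train(X_i).
    w_test: scalar w(x) for the test covariate x; note that lambda-hat(x)
            depends on x only through this weight.
    """
    num = (w_cal[:, None] * losses).sum(axis=0) + w_test * B
    adjusted = num / (w_cal.sum() + w_test)
    valid = np.where(adjusted <= alpha)[0]
    return lambdas[valid[0]] if valid.size else lambdas[-1]
```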
4.2 QUANTILE RISK CONTROL
Snell et al. (2022) generalize Bates et al. (2021) to control the quantile of a monotone loss function, conditionally on \((X_i,Y_i)_{i=1}^n\), with probability \(1-\delta\) over the calibration dataset for any user-specified tolerance parameter \( \delta \). In some applications, it suffices to control the unconditional quantile of the loss function, which relieves the user of choosing the tolerance parameter \( \delta \).
For any random variable \( X \), let Quantile\(_\beta(X) = \inf \{ x : \mathbb{P}(X \leq x) \geq \beta \} \). Analogous to (3), we want to find \( \hat{\lambda} \) based on \((X_i,Y_i)_{i=1}^n\) such that
\[ \text{Quantile}_\beta \left( L_{n+1}(\hat{\lambda}_\beta) \right) \leq \alpha. \]
(11)
By definition, \( \text{Quantile}_\beta \left( L_{n+1}(\hat{\lambda}_\beta) \right) \leq \alpha \iff \mathbb{E} \left[ 1 \left\{ L_{n+1}(\hat{\lambda}_\beta) > \alpha \right\} \right] \leq 1 - \beta \). As a consequence, quantile risk control is equivalent to expected risk control (3) with loss function \( \tilde{L}_i(\lambda) = 1 \left\{ L_i(\lambda) > \alpha \right\} \). Let \( \hat{\lambda}_\beta = \inf \left\{ \lambda \in \Lambda : \frac{1}{n+1} \sum_{i=1}^{n} 1 \left\{ L_i(\lambda) > \alpha \right\} + \frac{1}{n+1} \leq 1 - \beta \right\} \).
Proposition 5. In the setting of Theorem 1 with \( \hat{\lambda} \) as above, (11) is achieved.
It is unclear whether the wider class of quantile-based risks considered by Snell et al. (2022) (e.g. the CVaR) can be controlled unconditionally.
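In code, the reduction to expected risk control is immediate. The sketch below is ours, reusing the grid convention from the earlier sketches; the indicator loss is bounded by $B = 1$:

```python
import numpy as np

def calibrate_lambda_quantile(losses, lambdas, alpha, beta):
    """Choose lambda-hat_beta so that Quantile_beta(L_{n+1}) <= alpha."""
    n = losses.shape[0]
    exceed = (losses > alpha).sum(axis=0)  # sum_i 1{L_i(lambda) > alpha}
    adjusted = (exceed + 1.0) / (n + 1)    # indicator loss has B = 1
    valid = np.where(adjusted <= 1 - beta)[0]
    return lambdas[valid[0]] if valid.size else lambdas[-1]
```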
4.3 Controlling multiple risks
Let \( L_i(\lambda; \gamma) \) be a family of loss functions indexed by \( \gamma \in \Gamma \) for some domain \( \Gamma \) that may have infinitely many elements. A researcher may want to control \( \mathbb{E}[L_i(\lambda; \gamma)] \) at level \( \alpha(\gamma) \) for every \( \gamma \). Equivalently, we need to find a \( \hat{\lambda} \) based on \( (X_i, Y_i)_{i=1}^n \) such that
\[
\sup_{\gamma \in \Gamma} \mathbb{E}\left[ \frac{L_i(\hat{\lambda}; \gamma)}{\alpha(\gamma)} \right] \leq 1. \tag{12}
\]
Though the above worst-case risk is not an expectation, it can still be controlled. Towards this end, we define \( \hat{\lambda} = \sup_{\gamma \in \Gamma} \hat{\lambda}_\gamma \), where \( \hat{\lambda}_\gamma = \inf \{ \lambda : \frac{1}{n+1} \sum_{i=1}^n L_i(\lambda; \gamma) + \frac{B}{n+1} \leq \alpha(\gamma) \} \).
**Proposition 6.** In the setting of Theorem 1 with \( \hat{\lambda} \) as above, (12) is satisfied.
4.4 Adversarial risks
We next show how to control risks defined by adversarial perturbations, adopting the same notation as Section 4.3. Bates et al. (2021, Section 6.3) discuss the adversarial risk where \( \Gamma \) parametrizes a class of perturbations of \( X_{n+1} \), e.g., \( L_i(\lambda; \gamma) = L(X_i + \gamma, Y_i) \) and \( \Gamma = \{ \gamma : \| \gamma \|_\infty \leq \epsilon \} \). A researcher may want to find a \( \hat{\lambda} \) based on \( (X_i, Y_i)_{i=1}^n \) such that
\[
\mathbb{E}[\sup_{\gamma \in \Gamma} L_i(\lambda; \gamma)] \leq \alpha. \tag{13}
\]
This can be recast as a conformal risk control problem by taking \( \tilde{L}_i(\lambda) = \sup_{\gamma \in \Gamma} L_i(\lambda; \gamma) \). Then, the following choice of \( \lambda \) leads to risk control: \( \hat{\lambda} = \inf \{ \lambda : \frac{1}{n+1} \sum_{i=1}^n \tilde{L}_i(\lambda) + \frac{B}{n+1} \leq \alpha \} \).
**Proposition 7.** In the setting of Theorem 1 with \( \hat{\lambda} \) as above, (13) is satisfied.
4.5 U-risk control
For ranking and metric learning, Bates et al. (2021) considered loss functions that depend on two test points. In general, for any \( k > 1 \) and subset \( S \subset \{1, \ldots, n+k\} \) with \( |S| = k \), let \( L_S(\lambda) \) be a loss function. Our goal is to find \( \hat{\lambda}_k \) based on \( (X_i, Y_i)_{i=1}^n \) such that
\[
\mathbb{E}\left[ L_{\{n+1,\ldots,n+k\}}(\hat{\lambda}_k) \right] \leq \alpha. \tag{14}
\]
We call the LHS a U-risk since, for any fixed \( \hat{\lambda}_k \), it is the expectation of an order-\( k \) U-statistic. As a natural extension, we can define
\[
\hat{\lambda}_k = \inf \left\{ \lambda : \frac{k!n!}{(n+k)!} \sum_{S \subset \{1,\ldots,n\}, |S|=k} L_S(\lambda) + B \left( 1 - \frac{(n!)^2}{(n+k)!(n-k)!} \right) \leq \alpha \right\}. \tag{15}
\]
Again, we define \( \hat{\lambda}_k = \lambda_{\max} \) when the set on the right-hand side of (15) is empty. We can then prove the following result.
**Proposition 8.** Assume that \( L_S(\lambda) \) is non-increasing in \( \lambda \), right-continuous, and \( L_S(\lambda_{\max}) \leq \alpha \), \( \sup_\lambda L_S(\lambda) \leq B < \infty \) almost surely. Then (14) is achieved with \( \hat{\lambda} \) as above.
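A direct, if combinatorially expensive, sketch of the estimator in (15) is given below. This is our own illustration; it assumes a user-supplied `loss_fn(S, lam)` that evaluates $L_S(\lambda)$ on a size-$k$ index set drawn from the $n$ calibration points:

```python
from itertools import combinations
from math import comb

def calibrate_lambda_u_risk(loss_fn, n, k, lambdas, alpha, B=1.0):
    """Choose lambda-hat_k as in Eq. (15); lambdas is an increasing grid."""
    c_all = comb(n + k, k)                # (n+k)!/(k! n!), so k!n!/(n+k)! = 1/c_all
    slack = B * (1 - comb(n, k) / c_all)  # equals B(1 - (n!)^2 / ((n+k)!(n-k)!))
    for lam in lambdas:
        u_stat = sum(loss_fn(S, lam) for S in combinations(range(n), k))
        if u_stat / c_all + slack <= alpha:
            return lam  # losses non-increasing, so the first hit is the infimum
    return lambdas[-1]  # lambda_max when the set is empty
```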
5 Conclusion
This generalization of conformal prediction broadens its scope to new applications, as shown in Section 3. Still, two primary limitations of our technique remain: firstly, the requirement of a monotone loss is difficult to lift. Secondly, extensions to non-exchangeable data require knowledge about the form of the shift. This issue affects most statistical methods, including standard conformal prediction, and ours is no different in this regard. Finally, the mathematical tools developed in Sections 2 and 4 may be of independent technical interest, as they provide a new, and more general, language for studying conformal prediction, along with new results about its validity.
Reproducibility Statement
Code to reproduce our examples is available at https://github.com/aangelopoulos/conformal-risk
REFERENCES
Anastasios N Angelopoulos, Stephen Bates, Emmanuel J Candès, Michael I Jordan, and Lihua Lei. Learn then Test: Calibrating predictive algorithms to achieve risk control. *arXiv preprint arXiv:2110.01052*, 2021a.
Anastasios N Angelopoulos, Amit Pal Kohli, Stephen Bates, Michael Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, and Yaniv Romano. Image-to-image regression with distribution-free uncertainty quantification and applications in imaging. In *International Conference on Machine Learning*, pp. 717–730. PMLR, 2022a.
Anastasios N Angelopoulos, Karl Krauth, Stephen Bates, Yixin Wang, and Michael I Jordan. Recommendation systems with distribution-free reliability guarantees. *arXiv preprint arXiv:2207.01609*, 2022b.
Anastasios Nikolas Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification, 2021. URL https://arxiv.org/abs/2107.07511.
Anastasios Nikolas Angelopoulos, Stephen Bates, Jitendra Malik, and Michael I Jordan. Uncertainty sets for image classifiers using conformal prediction. In *International Conference on Learning Representations (ICLR)*, 2021b. URL https://openreview.net/forum?id=eNdiU_DBM9.
Martin Anthony and John Shawe-Taylor. A result of vapnik with applications. *Discrete Applied Mathematics*, 47(3):207–217, 1993.
Rina Barber, Emmanuel Candès, Aaditya Ramdas, and Ryan Tibshirani. The limits of distribution-free conditional predictive inference. *Information and Inference*, 10, 08 2020. doi: 10.1093/imaiai/iaaa017.
Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, and Ryan J Tibshirani. Conformal prediction beyond exchangeability. *arXiv preprint arXiv:2202.13415*, 2022.
Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. Distribution-free, risk-controlling prediction sets. *Journal of the ACM*, 68(6), September 2021. ISSN 0004-5411. doi: 10.1145/3478535. URL https://doi.org/10.1145/3478535.
Vidmantas Bentkus. On Hoeffding’s inequalities. *The Annals of Probability*, 32(2):1650 – 1673, 2004. doi: 10.1214/009117904000000360.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.
Deng-Ping Fan, Ge-Peng Ji, Tao Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, and Ling Shao. Pranet: Parallel reverse attention network for polyp segmentation. In *International Conference on Medical Image Computing and Computer-Assisted Intervention*, pp. 263–273, 2020. doi: 10.1007/978-3-030-59725-2_26.
Clara Fannjiang, Stephen Bates, Anastasios N Angelopoulos, Jennifer Listgarten, and Michael I Jordan. Conformal prediction for the design problem. *arXiv preprint arXiv:2202.03613*, 2022.
Adam Fisch, Tal Schuster, Tommi S. Jaakkola, and Regina Barzilay. Efficient conformal prediction via cascaded inference with expanded admission. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=tnSo6VRLmT.
Adam Fisch, Tal Schuster, Tommi Jaakkola, and Regina Barzilay. Conformal prediction sets with limited false positives. *arXiv preprint arXiv:2202.07650*, 2022.
Alexandre Froda. Sur la distribution des propriétés de voisinage des fonctions de variables réelles. *Bulletin mathématique de la Société Roumaine des Sciences*, 32(2):105–202, 1931.
|
FH7lfTfjcm
|
Unless I have overlooked something, the article only describes the construction process for the PyTorch-Keras benchmark in Appendix A.4; it does not cover the construction process for the PyTorch-MXNet benchmark.
|
ADELT: Transpilation Between Deep Learning Frameworks
Anonymous authors
Paper under double-blind review
Abstract
We propose the Adversarial DEep Learning Transpiler (ADELT), a novel approach to source-to-source transpilation between deep learning frameworks. ADELT uniquely decouples code skeleton transpilation and API keyword mapping. For code transpilation, it uses few-shot prompting on large language models, while for API keyword mapping, it employs contextual embeddings from a code-specific BERT. These embeddings are trained in a domain-adversarial setup to generate a keyword translation dictionary. ADELT is trained on an unlabeled web-crawled deep learning corpus, eschewing hand-crafted rules and parallel data. It outperforms state-of-the-art transpilers, improving exact match scores by 17.4 pts and 12.0 pts for PyTorch-Keras and PyTorch-MXNet transpilation pairs respectively. We provide open access to our code, corpus, and evaluation benchmarks.
1 Introduction
The rapid development of deep learning (DL) has led to an equally fast emergence of new software frameworks for training neural networks. Unfortunately, maintaining a deep learning framework and keeping it up-to-date is not an easy task. Many deep learning frameworks are deprecated or lose popularity every year, and porting deep learning code from a legacy framework to a new one is a tedious and error-prone task. A source-to-source transpiler between DL frameworks would greatly help practitioners overcome this difficulty.
Two promising solutions to source-to-source transpilation between deep learning frameworks are unsupervised neural machine translation (NMT) (Artetxe et al., 2018) and large language models. NMT treats deep learning code as a sentence for training sequence-to-sequence (Sutskever et al., 2014) models, but its applicability is limited due to the scarcity of parallel corpora and its notable data hunger. On the other hand, large language models like GPT-3 (Brown et al., 2020), pretrained on web crawl data, offer potential, performing translation tasks in a few-shot or zero-shot manner. Our early experiments with Codex (Chen et al., 2021), a GPT-3 model specialized for code, show its potential in few-shot transpilation of deep learning programs. Yet, such models struggle with API-specific nuances, inaccurately handling function names and parameter mappings. These limitations underscore the difficulties large language models face in preserving precision in complex real-world applications.
That said, most deep learning framework code is structured: each type of layer has its own constructor, and constructing a network involves chaining calls to these constructors. By leveraging the structure of the programming language, we can decouple the transpilation of the skeletal code from the mapping of API keywords. Transpiling the skeletal code is the easier part, and large LMs already handle it well. We only need a separate algorithm to translate the API keywords, i.e., the function and parameter names, to complete the transpilation.
In this paper, we present ADELT (Figure 1), a method that leverages this insight to transpile DL code. ADELT outperforms the state-of-the-art end-to-end transpilers. The canonicalized source code is decoupled into two parts: the code skeleton and the API keywords. ADELT transpiles the code skeleton using a pretrained large language model by few-shot prompting. Each API keyword occurrence is then embedded into a vector by PyBERT, a BERT pretrained on Python code. This vector is both the textual and the contextual representation of the API keyword. ADELT then leverages domain-adversarial training to learn a generator that maps the vector to an aligned embedding space. The alignment is enforced by a two-player game, where a discriminator is trained to distinguish...
between the embeddings from the source DL framework and those from the target DL framework. The API keyword embeddings are trained jointly with the generator as the output embedding matrix of a softmax classifier on the aligned embedding space. After generating a synthetic API keyword dictionary from the embeddings using a two-step greedy algorithm, ADELT then looks up each API keyword occurrence in the dictionary and puts them back into the transpiled code skeleton.
In summary, this paper makes the following contributions:
• We introduce ADELT, a robust solution for transpilation between deep learning frameworks without training on any labeled data. Outperforming large language models, ADELT excels across various transpilation pairs, achieving exact match scores of 73.0 and 71.5 for PyTorch-Keras and Keras-PyTorch transpilations, respectively. These scores surpass those of the state-of-the-art large language model for code, GPT-4, by 17.4 and 6.8 points respectively.
• To demonstrate our technique, we construct a PyTorch-Keras-MXNet corpus of deep learning code from various Internet sources, containing 19,796 PyTorch modules, 3,703 Keras layers/models, and 1,783 MXNet layers/models. We then build an evaluation benchmark for PyTorch-Keras and PyTorch-MXNet transpilation. The benchmark evaluates both our API keyword mapping algorithm and the overall source-to-source transpilation.
2 METHOD
ADELT (Adversarial DEep Learning Transpiler) is an algorithm that transpiles code from a source deep learning framework into an equivalent one in a target framework, by transpiling the skeletal code using a pretrained large language model, and then looking up each keyword in a dictionary learned with unsupervised domain-adversarial training. ADELT applies the following steps to each piece of input code, which we illustrate using the example shown in Figure 1.
1. Extract API calls from the source code. Such API calls can be automatically extracted with Python's built-in ast library. We then convert each API call into its canonical form, where each layer/function has a unique name and all of its arguments are converted to keyword arguments. Finally, we extract all API keywords from the canonicalized API call, where an API keyword is the name of a layer/function or the name of a keyword argument.
2. Transform the program into its code skeleton by replacing each API keyword occurrence with a distinct placeholder.
3. Transpile the code skeleton, where all API keywords are replaced by placeholders, into the target DL framework using a pretrained big LM (e.g., Codex).
4. Look up each API keyword in the API keyword dictionary, and replace each keyword with its translation. To generate the API keyword dictionary, we first learn the API embeddings using domain-adversarial training based on contextual embeddings extracted by PyBERT (a BERT pretrained on Python code and then fine-tuned on deep learning code). Next, we calculate the cosine similarity between the embedding vectors. Then we generate the API keyword dictionary using a hierarchical algorithm.
5. Put each API keyword back into the transpiled code skeleton to generate the final output.
We describe each of these steps next in detail.
2.1 Canonicalization & API Keyword Extraction
We first parse the source code into an abstract syntax tree (AST) with the Python ast module. Then, canonicalization and API call extraction are applied to the AST.
Canonicalization. We canonicalize each API call using the following steps during both domain-adversarial training (Section 2.3) and inference. Each step involves a recursive AST traversal.
1. Unify the different import aliases of each module into the most commonly used name in the training dataset. For example, torch.nn is converted to nn.
2. Unify different aliases of each layer/function in a DL library into the name in which it was defined. We detect and resolve each alias by looking at its __name__ attribute, which stores the callable’s original name in its definition. For example, layers.MaxPool2D is converted to layers.MaxPooling2D.
3. Convert each positional argument of an API call into its equivalent keyword argument. Sort all keyword arguments according to the order defined in the function signature. This is done by linking the arguments of each API call to the parameters of its API signature using the bind method from Python’s inspect module.
API keyword extraction. We define API keyword as the name of a layer/function or the name of a keyword argument. Once the input code is canonicalized, we locate each API keyword in the AST and then unparse the AST into the canonicalized source code.
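A minimal sketch of step 3 (positional-to-keyword conversion) is shown below. This is our own illustration, not ADELT's released code; it assumes the callable behind each API call has already been resolved:

```python
import ast
import inspect

import torch.nn as nn

def canonicalize_call(call: ast.Call, fn) -> ast.Call:
    """Rewrite an API call so every argument is a keyword argument in
    signature order. `bind` only matches names and arity, so passing
    opaque AST nodes as argument values is fine."""
    sig = inspect.signature(fn)
    bound = sig.bind(*call.args,
                     **{kw.arg: kw.value for kw in call.keywords})
    return ast.Call(func=call.func, args=[],
                    keywords=[ast.keyword(arg=name, value=node)
                              for name, node in bound.arguments.items()])

tree = ast.parse("nn.Conv2d(64, 128, 3)", mode="eval")
print(ast.unparse(canonicalize_call(tree.body, nn.Conv2d)))
# -> nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)
```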
2.2 Skeletal Code Transpilation
After canonicalizing the source program, ADELT then replaces all API keywords with a placeholder, turning the source program into its code skeleton. Each placeholder has textual form PLACEHOLDER_i, where i = 1, 2, 3, . . . . The code skeleton is then translated by Codex using few-shot prompting. The full prompt for this step is shown in Appendix A.5.
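A rough sketch of the placeholder substitution and the final lookup (step 4) follows; this is our own simplification, and a real implementation would splice placeholders at the exact AST spans rather than by string search:

```python
def make_skeleton(code: str, keyword_spans: list[str]):
    """Replace each API keyword occurrence, left to right, with a
    distinct PLACEHOLDER_i token."""
    mapping = {}
    for i, kw in enumerate(keyword_spans, start=1):
        placeholder = f"PLACEHOLDER_{i}"
        mapping[placeholder] = kw
        code = code.replace(kw, placeholder, 1)  # first remaining occurrence
    return code, mapping

def fill_skeleton(skeleton: str, mapping: dict, dictionary: dict) -> str:
    """Look up each keyword's translation and put it back (steps 4-5)."""
    for placeholder, kw in mapping.items():
        skeleton = skeleton.replace(placeholder, dictionary.get(kw, kw))
    return skeleton
```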
2.3 Domain-Adversarial Training
Once the code skeleton is transpiled, we then transpile API keywords. We train the aligned embeddings of API keywords in a domain-adversarial setting. In Section 2.4, the embeddings will be used to generate a dictionary that maps an API keyword of the source deep learning framework X(1) to an API keyword in the target DL framework X(2).
Figure 2 illustrates the domain-adversarial approach of ADELT, and Algorithm 1 shows the pseudocode. A generator maps the contextual representations extracted by PyBERT into hidden states (lines 5-8). The alignment of hidden states from different DL frameworks is enforced by the adversarial loss induced by the discriminator (lines 17-21), so that output embeddings learned with these hidden states (lines 11-14) are also aligned. Next, we describe each step in detail:
Each training example is a pair of API keyword occurrences with their context in the training corpus, denoted by (x(1), x(2)). Each keyword occurrence x(l) is tokenized and encoded as multiple byte pair encoding (BPE) tokens. In our unsupervised setting, x(1) and x(2) are independent samples from X(1) and X(2) in the training dataset, respectively, and they are not necessarily translations of each other.
PyBERT. PyBERT is our pretrained Transformer (Vaswani et al., 2017; Devlin et al., 2019) for Python code (Feng et al., 2020; Kanade et al., 2020; Roziere et al., 2021). Given a sequence of BPE tokens that represent an API keyword with its context x(l), PyBERT outputs a sequence of vectors—one vector in R^d_b for each token, where d_b is the hidden dimension size of PyBERT. We
https://docs.python.org/3/reference/datamodel.html#the-standard-type-hierarchy
https://docs.python.org/3/library/inspect.html#inspect.Signature.bind
Algorithm 1 Pseudo-code for domain-adversarial training.
```python
for (x_1, y_1), (x_2, y_2) in loader:
    # N samples from X_1 and X_2 respectively;
    # y_1, y_2: API keyword ids
    h_1 = B(x_1).detach()  # contextual embeddings;
    h_2 = B(x_2).detach()  # no gradient flows to PyBERT
    z_1 = G(h_1)  # generator hidden states;
    z_2 = G(h_2)  # z_1, z_2: N x d
    # dot products of hidden states and output embeddings
    logits_1 = mm(z_1, E_1.view(d, m_1))
    logits_2 = mm(z_2, E_2.view(d, m_2))
    L_CE_1 = CrossEntropyLoss(logits_1, y_1)
    L_CE_2 = CrossEntropyLoss(logits_2, y_2)
    # discriminator predictions for both domains
    pred_1 = D(z_1)
    pred_2 = D(z_2)
    labels = cat(zeros(N), ones(N))  # true domain labels
    L_D = CrossEntropyLoss(cat(pred_1, pred_2), labels)
    L_G = CrossEntropyLoss(cat(pred_1, pred_2), 1 - labels)
    # joint update of G and the output embeddings E_1, E_2
    optimize(G + E_1 + E_2, L_CE_1 + L_CE_2)
    optimize(D, L_D)  # train the discriminator
    optimize(G, L_G)  # train the generator
```
B: PyBERT used as the contextual embedder. G, D: the generator G and the discriminator D.
E_1: a d by m_1 matrix, where the i-th column vector is the output embedding of API keyword w_i^(l).
mm: matrix multiplication; cat: concatenation
Figure 2: ADELT’s domain-adversarial training with contextual embeddings from a PyBERT. The generator and the PyBERT are shared between different DL frameworks. We do not fine-tune the PyBERT during adversarial training.
average-pool all BPE tokens of the keyword and get a single \( d_b \)-dimensional vector as the contextual embedding \( \text{PyBERT}(x^{(l)}) \) of the API keyword. We denote the contextual embedding of \( x^{(1)}, x^{(2)} \) by \( h^{(1)}, h^{(2)} \) respectively.
**Generator and discriminator.** We define two multi-layer perceptrons, a generator and a discriminator. A generator \( G \) encodes the contextual embeddings \( h^{(1)}, h^{(2)} \) into hidden states \( z^{(1)}, z^{(2)} \in \mathbb{R}^d \), and a discriminator \( D \) is trained to discriminate between \( z^{(1)} \) and \( z^{(2)} \). The generator is trained to prevent the discriminator from making accurate predictions, by making \( G(\text{PyBERT}(X^{(1)})) \) and \( G(\text{PyBERT}(X^{(2)})) \) as similar as possible. Our approach is inspired by domain-adversarial training (Ganin et al., 2016), where domain-agnostic representations of images or documents are learned for domain adaptation. In our case, a domain is represented by a DL framework.
Formally, we define the probability \( \Pr_D(\text{pred} = l | z) \) that a hidden state \( z \) is from the DL framework \( l \) predicted by the discriminator. Note that \( z^{(1)} = G(h^{(1)}) \) and \( z^{(2)} = G(h^{(2)}) \). The discriminator loss and the generator loss are computed as the binary cross entropy against the true label and the reversed label, respectively, as shown in Equation (1).
**Output embeddings.** Our goal is to learn an embedding for each API keyword, but the contextual embedding of each keyword occurrence varies with its context. So we instead train a \( d \)-dimensional vector \( e_i^{(l)} \) for each API keyword \( w_i^{(l)} \), such that \( e_i^{(l)} \) is similar to the generator hidden states \( z_j^{(l)} \) of this keyword’s occurrences and dissimilar to the hidden states \( z_k^{(l)} \) of any other keyword’s occurrences. \( e_i^{(l)} \) is considered the output embedding of the API keyword \( w_i^{(l)} \). With similarity computed using
dot product, our optimization objective is shown in Equation (2), equivalent to the cross-entropy loss of $m^{(l)}$-way softmax-based classification.
**Adversarial training.** During each training iteration, the generator and discriminator are trained successively to minimize $L_G$ and $L_D$, respectively, with mini-batch stochastic gradient descent. Minimizing the adversarial loss amounts to minimizing the distance between the two distributions of hidden states (Goodfellow et al., 2014). Therefore, the API keywords from the different DL frameworks will be mapped to an aligned embedding space.
Also, we jointly update the generator and the output embeddings to minimize $L_{CE}^{(l)}$ with mini-batch SGD. The joint optimization is crucial, as updating the generator to minimize $L_{CE}^{(l)}$ ensures that each generator hidden state $z^{(l)}$ preserves enough information to recover its original API keyword. As a result, the output embeddings $\{e_i^{(1)}\}_{i=1}^{m^{(1)}}$ and $\{e_j^{(2)}\}_{j=1}^{m^{(2)}}$ are also aligned, as they are trained with vectors $z^{(l)}$ from the aligned embedding space.
We do not fine-tune PyBERT during domain-adversarial training, as fine-tuning PyBERT makes the generator disproportionately strong, which results in training divergence.
### 2.4 Hierarchical API Dictionary Generation
ADELT calculates a **scoring matrix** using the aligned API keyword embeddings trained in Section 2.3. The entry in the $i$-th row and the $j$-th column of the matrix is the cosine similarity between $w_i^{(1)}$ and $w_j^{(2)}$, denoted by $s_{i,j}$. Given the scoring matrix, we need to generate an API keyword dictionary that maps each API keyword in one deep learning framework to an API keyword in another DL framework.
**Greedy match** is used to generate a dictionary in word translation of natural languages (Conneau et al., 2018), where each source word is matched to the target word with the highest similarity score.
**Structure of API keywords.** Unlike words in NL, API keywords are **structured**: API keywords can be classified into two types based on their associated AST node: **callable names** (names of functions or classes) and **parameter names** (names of keyword arguments). In dictionary generation, we do not allow callable names to be translated to parameter names. We only allow a parameter name to be translated to a callable name when the similarity score passes a threshold; in this case, the parameter is dropped and a new API call is generated (the last case in Table 2). Another structural property is that the matching of parameters depends on the matching of callables.
**Hierarchical API dictionary generation** algorithm leverages the structure of API keywords to generate a dictionary: **Step 1.** Consider each callable and its parameters as a group and compute the **group similarity** between each pair of groups, by summing up similarity scores in the greedy matching of parameter names, plus the similarity between two callable names. **Step 2.** Match groups greedily based on group similarity scores calculated in step 1.
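The two-step algorithm thus reduces to greedy matching applied twice: once over the parameter names within each candidate group pair, and once across groups. A sketch under our own naming, where each similarity matrix holds cosine scores between aligned embeddings:

```python
import numpy as np

def greedy_match(S):
    """Repeatedly take the highest-scoring unmatched (row, col) pair."""
    rows, cols = np.unravel_index(np.argsort(-S, axis=None), S.shape)
    pairs, used_r, used_c = [], set(), set()
    for r, c in zip(rows, cols):
        if r not in used_r and c not in used_c:
            pairs.append((int(r), int(c)))
            used_r.add(r)
            used_c.add(c)
    return pairs

def group_similarity(callable_sim, param_sim):
    """Step 1: callable-name score plus the sum of greedily matched
    parameter-name scores for one pair of groups."""
    return callable_sim + sum(param_sim[r, c] for r, c in greedy_match(param_sim))
```

Step 2 then calls `greedy_match` once more, on the matrix of group similarities computed in step 1.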
### 3 Experiments
We evaluate the effectiveness of ADELT on the task of transpilation between PyTorch, Keras, and MXNet, and compare our method with baselines.
#### 3.1 Skeletal Code Transpilation
We use Codex (Chen et al., 2021), a GPT model (Brown et al., 2020) finetuned using public GitHub code, to transpile code skeletons. As an autoregressive language model trained on massive web data, Codex can handle translation tasks via prompting with few-shot demonstrations. Our prompt design aligns with Codex’s code translation setup, comprising a single input-output example and three instructions to keep placeholders unchanged. Appendix A.5 provides further details on this.
---
3 We tried to evaluate using JAX (Bradbury et al., 2018). Sadly, JAX is a new DL framework and the GitHub corpus on BigQuery (based on a historical snapshot of GitHub) contains very few (318) examples of JAX.
3.2 Training Setup
DL corpus. We consider 4 data sources GitHub, JuiCe (Agashe et al., 2019), Kaggle (Quaranta et al., 2021), and Web to build our DL corpus. See Appendix A.1 for details.
We tokenize all Python source code and extract subclasses of torch.nn.Module, keras.layers.Layer, or keras.Model. Then, we canonicalize (Section 2.1) the code of each class definition. We byte-pair encode (Sennrich et al., 2016), merge, and deduplicate the code from all sources. Finally, we collect all files into our DL corpus containing 19,796 PyTorch modules, 3,703 Keras layers/models, and 1,783 MXNet modules.
PyBERT is our Transformer encoder pretrained with the masked language modeling (MLM) (Devlin et al., 2019) objective on all open-source Python files from the GitHub dataset. We consider two model sizes: PyBERT_{SMALL} (6-layer, 512-d) and PyBERT_{BASE} (12-layer, 768-d). Detailed pretraining hyperparameters are described in appendix A.2.
Adversarial training. The generator and discriminator of ADELT are multilayer perceptrons. We search the learning rate and batch size according to the unsupervised validation criterion “average cosine similarity” (Conneau et al., 2018), which measures the consistency between learned API keyword embeddings and generated keyword translations. Other hyperparameters are set based on previous studies (Conneau et al., 2018) with details described in Appendix A.3.
3.3 Evaluation Benchmark
Our method is evaluated through the task of transpiling code snippets from one DL framework to another. We employ heuristics to identify potential match pairs in the corpus and manually curate a robust evaluation benchmark. Detailed methodology and statistics can be found in Appendix A.4.
We report Exact Match (EM) score as the main metric. For each code snippet, a model’s transpilation is considered to be an exact match if and only if it is exactly equivalent to the ground truth. The EM score is the number of exact matches divided by the number of examples in the eval set. We also report a more forgiving metric, the F1 score, which quantifies the overlap between the predicted and ground truth outputs. In this context, we treat each prediction or ground truth as a bag of function calls. For each test case, we determine the number of exactly matched calls $n_{match}$, predicted calls $n_{pred}$, and ground truth calls $n_{truth}$. We define the F1 score for a particular example as $2n_{match}/(n_{pred} + n_{truth})$, and report the average F1 scores across all test cases.
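A per-example version of this F1 can be computed directly from the two bags of calls. The sketch below is our reading of the definition, with calls compared after canonicalization:

```python
from collections import Counter

def call_f1(pred_calls, truth_calls):
    """F1 = 2 * n_match / (n_pred + n_truth) over bags of function calls."""
    if not pred_calls and not truth_calls:
        return 1.0
    n_match = sum((Counter(pred_calls) & Counter(truth_calls)).values())
    return 2 * n_match / (len(pred_calls) + len(truth_calls))
```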
3.4 Evaluation of Skeletal Code Transpilation
Transpiling code skeletons of DL programs is an easy task, and Codex easily learned transpilation patterns via few-shot prompting. In our evaluation benchmark, the exact match score of skeletal code transpilation using Codex is 100%.
3.5 Comparison with Other Methods
We compare ADELT using PyBERT_{SMALL} and ADELT using PyBERT_{BASE} with the following baselines. We run all methods 5 times with random seeds [10, 20, 30, 40, 50], and report the arithmetic average of all metrics.
End-to-end language models. We compare ADELT with end-to-end few-shot LLM baselines, including GPT-3, Codex, and GPT-4, where the entire piece of source code, instead of the code skeleton, is fed into the LM to generate the transpiled target program. For source-to-source translation, we give the LLM 5 randomly sampled examples as demonstrations. The prompt design is similar to the code translation setup of Codex. Details are shown in Appendix A.6.
Edit distance. We consider a rule-based baseline where we use edit distance (Levenshtein, 1966) as the similarity measure between API keywords, in place of the similarity measures calculated from learned embeddings. We apply hierarchical API dictionary generation exactly as what we do in ADELT. We report the result of both cased and uncased setups for edit distance calculation.
Table 1: **Comparison between ADELT and other methods** on source-to-source transpilation between PyTorch and Keras. Each cell contains two numbers: the first is for transpiling PyTorch to Keras, the second for transpiling Keras to PyTorch. Each number is the average of 5 runs with different random seeds. “ADELT (Small)” is ADELT with PyBERT\textsubscript{SMALL} and “ADELT (Base)” is ADELT with PyBERT\textsubscript{BASE}.

| Method | F1 | EM |
|--------------------------|-------------|-------------|
| GPT-3 (Brown et al., 2020) | 26.6 / 32.1 | 22.5 / 26.0 |
| Codex (Chen et al., 2021) | 59.9 / 67.1 | 51.5 / 54.6 |
| GPT-4 | 67.7 / 74.9 | 55.6 / 64.7 |
| Edit Distance (Cased) | 31.2 / 30.1 | 20.3 / 16.8 |
| Edit Distance (Uncased) | 23.9 / 30.1 | 12.5 / 16.8 |
| ADELT (Small) | 79.0 / 76.7 | 70.7 / 67.5 |
| ADELT (Base) | 83.4 / 79.3 | 73.0 / 71.5 |
Table 2: **Examples from the evaluation dataset of the PyTorch-Keras transpilation task and the Keras-PyTorch transpilation task.** We show the source code, ground truth target code, and the outputs from Codex, ADELT, and ADELT +. ✓: the output is the same or equivalent to the ground truth. ✗: the output contains an equivalent of the ground truth, but it also contains incorrect extra code. ❌: the output is incorrect.
| Source | Truth | Codex ✓ | ADELT ✓ | ADELT ❌ |
|--------|-------|---------|---------|---------|
| nn.Conv2d(64, 128, 3) | layers.Conv2D(filters=128, kernel_size=3) | layers.Conv2D(128, 3) | layers.Conv2D(filters=128, kernel_size=3) | layers.Embedding(embeddings_initializer=embed_dim) |
| nn.Embedding(vocab_size, embed_dim) | layers.Embedding(input_dim=vocab_size, output_dim=embed_dim) | layers.Embedding(vocab_size, embed_dim) self.position_emb = layers.Embedding(...) | layers.Embedding() |
| Source | Truth | Codex ❌ | ADELT ✓ | ADELT ❌ |
|--------|-------|---------|---------|---------|
| nn.MultiheadAttention(model_dim, num_heads=num_heads, dropout=attn_dropout) | layers.MultiHeadAttention(num_heads=num_heads, key_dim=model_dim, dropout=attn_dropout) | layers.MultiHeadAttention(model_dim, num_heads, dropout=attn_dropout) | layers.MultiHeadAttention(num_heads=num_heads, key_dim=model_dim, dropout=attn_dropout) | layers.Dense(out_dim, activation='relu') |
| in_dim = 256 out_dim = 512 | in_dim = 256 out_dim = 512 nn.Linear(in_dim, out_dim) nn.ReLU() | in_dim = 256 out_dim = 512 nn.Linear(in_dim, out_dim) | in_dim = 256 out_dim = 512 nn.Linear(in_features=in_dim, out_features=out_dim) nn.ReLU() |
The results are shown in Table 1. **ADELT consistently outperforms other methods with respect to all metrics**, and it benefits from a larger pretrained PyBERT embedder. Moreover, even when the LLMs are given more few-shot examples, ADELT still consistently outperforms the end-to-end GPT-4 baseline.
### 3.6 Case Studies
Table 2 shows four examples of PyTorch-Keras transpilation together with hypotheses of Codex and ADELT (Base). Both Codex and ADELT transpile the `nn.Conv2d` to Keras correctly by dropping the first argument `in_channels`. ADELT does not translate the parameter names of `nn.Embedding` to `input_dim` and `output_dim` correctly, while Codex does. However, we notice that Codex sometimes
relies on the argument ordering heuristic. In the example of `nn.MultiheadAttention`, where parameters have a different ordering in Keras than in PyTorch, Codex generates the wrong translation, but ADELT successfully constructs the correct mapping between parameters.
Also, in the `nn.Embedding` example, Codex continues to generate code about “positional embeddings” after finishing transpilation. The extra code generated by Codex is relevant to the context. Still, the extra code should not be part of the translation. We have tried various ways to make Codex follow our instructions (see Appendix A.6 for details). However, because Codex is an end-to-end neural language model, our means of changing its predictions are limited, and the result is highly indeterministic. In the end, Codex still occasionally generates extra arguments or unneeded statements.
On the other hand, we decouple neural network training from the transpilation algorithm. ADELT transpiles between deep learning frameworks using deterministic keyword substitution based on a learned API keyword dictionary. The transpiled code is always syntactically correct. If a mistake is found in the dictionary (e.g., the `nn.Embedding` example in Table 2), it can be corrected by simply modifying the dictionary.
Correcting the API keyword dictionary by humans requires much less effort than building the dictionary manually from scratch, as ADELT generates a high-quality dictionary. Developers can even add additional rules to the transpiler. The flexibility of our decoupled design makes ADELT far easier to be integrated into real-world products than end-to-end neural translators/LMs are.
The last case in Table 2 shows an example where an API call (`layers.Dense` with `activation="relu"`) should be transpiled to two calls (`nn.Linear` and `nn.ReLU`). One-to-many mappings are rare in transpilation between deep learning frameworks, but the capability to model such mappings reflects the generality of a transpiler to other APIs. Both ADELT and Codex fail to solve this example because this usage is rarely seen in the training data. Still, if we train ADELT on an additional synthetic dataset (“ADELT+” in Table 2; see Appendix A.9 for details), it successfully solves this case, showing that our method can model one-to-many mappings when enough training data is available.
### 3.7 Ablation Studies
We conduct ablation studies on PyTorch-Keras transpilation to validate the contribution of each part of ADELT. We consider both source-to-source transpilation and API keyword translation. **API keyword translation** involves retrieving the translation of given API keywords. We create a high-quality dictionary by manually translating the first 50 most frequent API keywords in PyTorch and Keras, respectively. Following the standard practice of word translation, we measure how many times the correct translation of a source word is retrieved (**precision@k** for $k = 1, 5$) and the **mean reciprocal rank** of the correct translation (MRR). The results are shown in Table 3.
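These retrieval metrics can be computed directly from the scoring matrix of Section 2.4. The helper below is our own sketch, where `gold[i]` is the index of the correct translation of source keyword `i`:

```python
import numpy as np

def retrieval_metrics(scores, gold, ks=(1, 5)):
    """precision@k and MRR for keyword translation retrieval."""
    order = (-scores).argsort(axis=1)  # best-first target indices per row
    pos = np.array([int(np.where(order[i] == g)[0][0])
                    for i, g in enumerate(gold)])
    p_at_k = {k: float((pos < k).mean()) for k in ks}
    mrr = float((1.0 / (pos + 1)).mean())
    return p_at_k, mrr
```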
**Necessity of contextual embeddings.** In “w/o PyBERT”, we replace PyBERT with Word2Vec (Mikolov et al., 2013) embeddings of the same dimension $d_b$ trained on the same corpora. The result in Table 3 shows that this change significantly harms the performance of ADELT. This justifies the use of PyBERT, a high-quality pretrained representation of API keywords that captures their contexts.
**Contribution of adversarial loss.** In “w/o Adv Loss”, we remove the adversarial loss during training. Instead, we only train the generator and the output embeddings with the cross-entropy loss in Equation (2). The result in Table 3 shows that adversarial training contributes ~6 pts in source-to-source transpilation, showing the effectiveness of adversarial training.
**Comparison of similarity measures.** By default, ADELT uses cosine similarity as the similarity measure for API dictionary generation. Table 3 shows the results of using dot product (inner). Measures based on cosine similarity outperform dot product by a small margin. This fact implies that the performance of ADELT is insensitive to the choice of similarity measure.
---
4 The definition of positional embeddings usually follows the definition of word embeddings (`nn.Embedding(vocab_size, ...)`) in the source code of a Transformer model.
Table 3: **Ablation study results.** By default, ADELT is trained with the adversarial loss on contextual embeddings extracted by PyBERT, and then a dictionary is generated based on cosine similarity scores. We change one component of ADELT (Small) or ADELT (Base) in each experiment to assess its contribution.
| Method | Keyword P@1 | Keyword P@5 | Keyword MRR | Source Code F1 |
|----------------------|------|------|------|------|
| ADELT (Small) | 82.9 | 90.0 | 91.7 | 97.7 |
| ADELT (Base) | 87.1 | 90.0 | 91.7 | 97.7 |
| *Domain-adversarial training* | | | | |
| w/o PyBERT (Small) | 52.1 | 63.6 | 70.0 | 85.9 |
| w/o PyBERT (Base) | 45.0 | 54.6 | 70.4 | 80.0 |
| w/o Adv Loss (Small) | 80.4 | 88.6 | 90.0 | 97.7 |
| w/o Adv Loss (Base) | 86.3 | 90.5 | 91.7 | 97.7 |
| *Measure for dictionary generation* | | | | |
| Inner Product (Small) | 81.3 | 79.6 | 91.7 | 90.0 |
| Inner Product (Base) | 85.4 | 93.2 | 91.7 | 97.7 |
### 4 RELATED WORK
**Source-to-source transpilation.** Classical source-to-source transpilers use supervised learning. Nguyen et al. (2013) and Karaivanov et al. (2014) develop Java-C# transpilers using parallel corpora of open-source code. The dependency on parallel corpora renders these methods inapplicable to transpilation between deep learning frameworks, as parallel corpora are difficult to get.
Drawing inspiration from unsupervised neural machine translation (NMT) (Artetxe et al., 2018), recent advancements have made unsupervised programming language translation possible (Lachaux et al., 2020). Such approaches, however, require vast amounts of in-domain unlabeled corpora, as evidenced by Lachaux et al. (2020) and Roziere et al. (2022), who utilized 744GB of GitHub source code and a dataset of 333k curated Java functions respectively. The scarcity of online deep learning code hinders their effectiveness for transpilation between DL frameworks, as we illustrate in Section 3.5.
**Language models are few-shot learners.** GPT-3 (Brown et al., 2020) is a language model with 175B parameters trained on massive web crawl data. GPT-3 can be applied to many NLP tasks without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. Codex (Chen et al., 2021) is a GPT-3 model fine-tuned on publicly available code from GitHub, specialized for code generation tasks. GPT-4 is an LLM proficient in both code and NL, trained using instruction finetuning. In contrast, the code generation step of ADELT is keyword substitution instead of autoregressive generation. ADELT outperforms GPT-3, Codex, and GPT-4 in PyTorch-Keras transpilation.
**Adversarial learning & cross-lingual word embedding.** Conneau et al. (2018) uses domain-adversarial (Ganin et al., 2016) approach to align the distribution of two word embeddings, enabling natural language word translation without parallel data. The domain-adversarial training in ADELT is inspired by their approach, but we align the distributions of the hidden states of keyword occurrences instead of API keyword embeddings.
### 5 CONCLUSION
We presented ADELT, a code transpilation algorithm for deep learning frameworks. ADELT formulates the transpilation problem as API keyword mapping, and uses domain-adversarial training to generate the map. Using our collected Pytorch-Keras and PyTorch-MXNet benchmarks, our evaluation shows that ADELT can significantly outperform state-of-the-art transpilers.
REFERENCES
Rajas Agashe, Srinivasan Iyer, and Luke Zettlemoyer. Juice: A large scale distantly supervised dataset for open domain context-based code generation. arXiv:1910.02216 [cs], Oct 2019. URL http://arxiv.org/abs/1910.02216 arXiv: 1910.02216.
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. arXiv:1710.11041 [cs], Feb 2018. URL http://arxiv.org/abs/1710.11041 arXiv: 1710.11041.
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. arXiv:2005.14165 [cs], Jul 2020. URL http://arxiv.org/abs/2005.14165 arXiv: 2005.14165.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. arXiv:2107.03374 [cs], Jul 2021. URL http://arxiv.org/abs/2107.03374 arXiv: 2107.03374.
Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. arXiv:1710.04087 [cs], Jan 2018. URL http://arxiv.org/abs/1710.04087 arXiv: 1710.04087.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 [cs], May 2019. URL http://arxiv.org/abs/1810.04805 arXiv: 1810.04805.
Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. Improving zero-shot learning by mitigating the hubness problem. arXiv:1412.6568 [cs], Apr 2015. URL http://arxiv.org/abs/1412.6568 arXiv: 1412.6568.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. Codebert: A pre-trained model for programming and natural languages. arXiv:2002.08155 [cs], Sep 2020. URL http://arxiv.org/abs/2002.08155 arXiv: 2002.08155.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. arXiv:1505.07818 [cs, stat], May 2016. URL http://arxiv.org/abs/1505.07818 arXiv: 1505.07818.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv:1406.2661 [cs, stat], Jun 2014. URL http://arxiv.org/abs/1406.2661 arXiv: 1406.2661.
|
sysX9XMGdF
|
How was it ensured that the superior performance achieved by NATR in settings with 4 or 5 layers, as demonstrated in Table 3, is solely attributed to reduced over-dilution and not a result of significantly mitigating *possibly more severe phenomena* such as over-smoothing, over-correlation?
|
Understanding and Tackling Over-Dilution in Graph Neural Networks
Anonymous authors
Paper under double-blind review
Abstract
Message Passing Neural Networks (MPNNs) have become the predominant architecture for representation learning on graphs. While they hold promise, several inherent limitations have been identified, such as over-smoothing and over-squashing. Both theoretical frameworks and empirical investigations substantiate these limitations, facilitating advancements for informative representation. In this paper, we investigate the limitations of MPNNs from a novel perspective. We observe that even in a single layer, a node’s own information can become considerably diluted, potentially leading to negative effects on performance. To delve into this phenomenon in-depth, we introduce the concept of Over-dilution and formulate it with two types of dilution factors: intra-node dilution and inter-node dilution. Intra-node dilution refers to the phenomenon where attributes lose their influence within each node, due to being combined with equal weight regardless of their practical importance. Inter-node dilution occurs when the node representations of neighbors are aggregated, leading to a diminished influence of the node itself on the final representation. We also introduce a transformer-based solution, which alleviates over-dilution by merging attribute representations based on attention scores between node-level and attribute-level representations. Our findings provide new insights and contribute to the development of informative representations.
1 Introduction
Recent progress in representation learning on graph-structured data has been largely attributed to Graph Neural Networks (GNNs), powered by their ability to utilize structural information. In particular, Message Passing Neural Networks (MPNNs) have gained significant attention due to their simple mechanism yet powerful performance (Gilmer et al., 2017). Various extensions of MPNNs have been proposed, primarily to improve their expressivity and to address degeneration caused during message passing (Kipf & Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017; Wu et al., 2019; Chen et al., 2020b; Corso et al., 2020; Bianchi et al., 2021; Brody et al., 2022).
Towards a deeper understanding, several phenomena have been observed and formalized that cause MPNNs to deviate from optimal behavior, such as over-smoothing (Xu et al., 2018; Li et al., 2018b; Nt & Maehara, 2019; Zhao & Akoglu, 2020; Oono & Suzuki, 2020; Chen et al., 2020a), over-squashing (Alon & Yahav, 2021; Topping et al., 2022), and over-correlation (Jin et al., 2022). They have become the foundation for addressing distortions in information on irregular structures, laying the groundwork for subsequent studies to enhance MPNNs (Arnaiz-Rodríguez et al., 2022; Wu et al., 2023; Guo et al., 2023; Eliasof et al., 2023; Nguyen et al., 2023; Di Giovanni et al., 2023; Karhadkar et al., 2023; Gravina et al., 2023). Therefore, it is essential to identify and formalize the limitations (i.e., undesirable behaviors) of MPNNs for the advancement of representation learning on graphs.
In this paper, we investigate a limitation associated with the preservation of attribute-level information. This perspective is distinct from previous categories of limitations, where the primary focus has been on the propagation of node-level representation as illustrated in Figure 1. Although often not emphasized sufficiently, node attributes provide important information about the nodes that can be used to make predictions such as potential links between them (Gong et al., 2014; Huang et al., 2017; Li et al., 2017, 2018a; Hao et al., 2021). We first introduce the phenomenon that outlines the diminishment of a node’s own information on the final representation in MPNNs, referred to as over-dilution. This phenomenon has been observed when nodes have an excessive number of
attributes, hindering their ability to focus on important attributes, or when each node receives an overwhelming amount of information from neighboring nodes, leading to a relative loss of their individual information. As illustrated in Figure 1, we analyze this phenomenon by dividing it into two cascaded sub-phenomena: *intra-node dilution* and *inter-node dilution*. These describe the weakening of influence of the attribute-level and the node-level representations, respectively.
To address the over-dilution phenomenon, we introduce a transformer-based architecture (Vaswani et al., 2017) designed to utilize attribute representations as tokens. Notably, this architecture is not a competitor but a complement to existing node embedding methods (e.g., MPNNs). Its flexibility is underscored by its ability to seamlessly integrate with any node embedding method, computing the final representation by weighting attribute representations based on attention scores associated with the aggregated node-level representation. We theoretically and empirically demonstrate its effectiveness for solving the over-dilution problem. Our main contributions can be summarized as:
• We introduce the *over-dilution* phenomenon from a new perspective, shedding light on its impact on the representation of graph-structured data. We formulate and elucidate this concept through two sub-phenomena: *intra-node dilution* and *inter-node dilution*, which describe the dilution of attribute-level and node-level representations, respectively.
• The concept of over-dilution delves into the limitation tied to the *preservation* of *attribute-level* information, setting it apart from existing limitations primarily centered on the *propagation* of *node-level* representation.
• By investigating the over-dilution phenomenon and addressing it with a transformer-based approach that complements any node embedding methods, we contribute to a deeper understanding and provide insights into the development of informative representations.
## 2 Preliminaries
Attributed graphs are of the form $G = (\mathcal{T}, \mathcal{V}, \mathcal{E})$, consisting of a set of attributes $\mathcal{T}$, a set of nodes $\mathcal{V}$, and a set of edges $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. Let $\mathcal{T}_v \subseteq \mathcal{T}$ be the subset of attributes associated with node $v \in \mathcal{V}$. $N_{\mathcal{V}} = |\mathcal{V}|$ and $N_{\mathcal{T}} = |\mathcal{T}|$ denote the total numbers of nodes and attributes, respectively. The
node feature matrix and the adjacency matrix are represented as \( X \in \mathbb{R}^{N_v \times N_T} \) and \( A \in \mathbb{R}^{N_v \times N_v} \), respectively. We assume that each node has a discrete binary vector, \( X_v \in \mathbb{R}^{N_T} \), indicating existence of attributes where \( X_{v,t}=1 \) if the node \( v \) has attribute \( t \), otherwise \( X_{v,t}=0 \). The embedding of attribute \( t \) is represented as \( z_t \in \mathbb{R}^d \) with dimension \( d \), which is a randomly initialized representation.
### 2.1 Message Passing Neural Networks
In MPNNs, the representations of nodes are calculated through a series of layers, where each layer consists of two main operations: the Update function and the Aggregate function. The update function is used to transform the node representation and the aggregate function is used to combine information from neighboring nodes. This process is repeated for multiple layers, thereby refining the node representations and extracting higher-level features from the graph. We formulate MPNNs as:
\[
h_v^{(l)} = \sigma(\text{Aggregate}(\{\text{Update}(h_u^{(l-1)})|u \in \tilde{N}(v)\})) = \sigma\left(\sum_{u \in \tilde{N}(v)} \alpha_{vu} h_u^{(l-1)} W^{(l)}\right)
\]
where \( \tilde{N}(v) \) is the set of neighbor nodes of \( v \) including itself, \( W^{(l)} \in \mathbb{R}^{d \times d} \) is the learnable parameter of the \( l \)-th layer, and \( \text{Aggregate}(\cdot) \) denotes the aggregation function over neighbor nodes. \( H^{(0)} = X W^{(0)} \in \mathbb{R}^{N_{\mathcal{V}} \times d} \) is the initial node feature matrix with learnable parameter \( W^{(0)} \in \mathbb{R}^{N_{\mathcal{T}} \times d} \) and dimension \( d \), and \( h_v^{(0)} \) denotes its \( v \)-th row. In this context, the \( t \)-th row of \( W^{(0)} \) is equivalent to \( z_t \), the representation of the corresponding attribute. The parameter \( \alpha_{vu} \) denotes the aggregation coefficient assigned to the edge connecting neighbor node \( u \) to the center node \( v \) in the aggregation function. This coefficient equals \( \frac{1}{\sqrt{\deg(v)\deg(u)}} \) in the case of GCN, or an attention coefficient between nodes \( v \) and \( u \) in GAT. The receptive field of node \( v \) is defined as \( B_l(v) := \{u \in \mathcal{V} \mid s_G(v,u) \leq l\} \), where \( s_G \) is the standard shortest-path distance on the graph \( G \) and \( l \in \mathbb{N} \) is the radius.
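To make the layer update above concrete, here is a dense single-layer sketch with GCN coefficients; this is our own illustration, and real implementations use sparse operations:

```python
import torch

def gcn_layer(H, A, W, sigma=torch.relu):
    """One message-passing layer with alpha_vu = 1/sqrt(deg(v) deg(u))
    and self-loops, i.e. Aggregate over Update(h_u) = h_u W."""
    A_tilde = A + torch.eye(A.size(0))          # include each node itself
    deg_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)  # self-loops keep deg >= 1
    A_hat = deg_inv_sqrt[:, None] * A_tilde * deg_inv_sqrt[None, :]
    return sigma(A_hat @ H @ W)
```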
### 2.2 Over-smoothing and Over-squashing
Over-smoothing refers to the phenomenon where the model excessively propagates information between nodes, leading to a loss of distinguishability of their representations (Xu et al., 2018; Nt & Maehara, 2019; Oono & Suzuki, 2020). In the process of exchanging information through message propagation, all nodes converge to similar representations, and noise is conveyed alongside important information (Li et al., 2018b; Chen et al., 2020a).
Over-squashing is a problem that arises when exponentially increasing amounts of information are compressed into a fixed-size vector (Alon & Yahav, 2021). This leads to a bottleneck, particularly in the extended paths within a graph, which hinders GNNs from fitting long-range signals and causes them to fail to propagate messages originating from distant nodes. As a result, the performance is typically compromised, where the task necessitates long-range interaction (Topping et al., 2022).
## 3 Over-dilution
In this section, we introduce a new concept named over-dilution, which is distinct from over-smoothing and over-squashing, as illustrated in Figure 1. Over-dilution refers to the diminishment of a node's information at both the attribute and node levels. To assess the severity of over-dilution, we define the dilution factor, a metric that measures how much of a node's own information is retained in the updated representation. This factor can be decomposed into two cascaded components, as described in Eq. (2). We define the intra-node dilution factor \( \delta_v^{\text{intra}} \) mainly for attribute representations and the inter-node dilution factor \( \delta_{\text{inter}} \) for node representations. As depicted in Figure 1(b), the attribute representation is diluted during the first step of message passing (i.e., Update) and subsequently diluted again in the second step (i.e., Aggregate) in the form of the node-level representation. Therefore, the dilution factor of attribute \( t \) at node \( v \) is defined as the product of the two cascaded steps:
\[
\delta_{v,t} = \delta_v^{\text{intra}}(t) \cdot \delta_{\text{inter}}(v)
\]
(2)
where \( \delta_v^{\text{intra}}(t) \) represents the intra-node dilution factor of attribute \( t \) at node \( v \) and \( \delta_{\text{inter}}(v) \) represents the inter-node dilution factor of node \( v \) in the graph. We exploit the Jacobian matrix of node representations to quantify dilution factors based on the influence distribution, in a similar way to Xu et al. and Topping et al.
Figure 2: (a) Histogram of the inter-node dilution factor (aggregation-only) after a single GCN layer on the Computers dataset. (b) Average inter-node dilution factor (aggregation-only) (left y-axis) and average receptive-field size (right y-axis) on the Computers dataset.
Taking the Computers dataset as a primary example, consider a node with 204 attributes (the median number of attributes) and 19 neighbors (the median degree), i.e., 20 nodes in the aggregation including itself. In this scenario, each attribute of the node would be diluted to $1/204 \times 1/20$, or roughly 0.025%, in a single layer when using either the mean or sum as the aggregation operator.
### 3.1 Intra-Node Dilution: Measuring Attribute Influence within Each Node
The intra-node dilution factor is a metric that quantifies the degree to which an attribute is diluted at a specific node. We measure the influence of $z_t$ on $h_v^{(0)}$ indicating how much the representation of attribute $t$ affects the initial representation of node $v$.
**Definition 3.1. (Intra-node dilution factor).** For a graph $\mathcal{G} = (\mathcal{T}, \mathcal{V}, \mathcal{E})$, let $z_t$ be the representation of attribute $t \in \mathcal{T}$ and $h_v^{(0)}$ denote the initial feature representation of node $v \in \mathcal{V}$, which is calculated from the representations of the attribute subset $\mathcal{T}_v$ that node $v$ possesses. The influence score $I_v(t)$ of attribute $t$ on node $v$ is the sum of the absolute values of the elements in the Jacobian matrix $\left[ \frac{\partial h_v^{(0)}}{\partial z_t} \right]$. We define the intra-node dilution factor as the influence distribution obtained by normalizing the influence scores: $\delta_v^{\text{intra}}(t) = I_v(t)/\sum_{s \in \mathcal{T}_v} I_v(s)$. In detail, with the all-ones vector $e$:
$$\delta_v^{\text{intra}}(t) = e^T \left[ \frac{\partial h_v^{(0)}}{\partial z_t} \right] e / \sum_{s \in \mathcal{T}_v} e^T \left[ \frac{\partial h_v^{(0)}}{\partial z_s} \right] e$$
(3)
**Hypothesis 1. (Occurrence of intra-node dilution).** Intra-node dilution occurs when a node-level representation is computed by equally weighting and fusing attribute-level representations, irrespective of their individual importance. The over-dilution effect at the intra-node level becomes more pronounced as the number of attributes increases.
For example, given a node $v$ where the important attributes are sparse compared to the total number of attributes $|\mathcal{T}_v|$, the influence of the key attributes is proportionally limited to $1/|\mathcal{T}_v|$. In MPNNs, the representation of node $v$ is calculated by summing or averaging the representations of attributes $t \in \mathcal{T}_v$, i.e., $h_v^{(0)} = X_v W^{(0)} = \sum_{t \in \mathcal{T}_v} z_t$ or $h_v^{(0)} = \sum_{t \in \mathcal{T}_v} \frac{z_t}{|\mathcal{T}_v|}$. Therefore, the intra-node dilution factor $\delta_v^{\text{intra}}(t)$ takes the constant value $\frac{1}{|\mathcal{T}_v|}$ for all attributes at each node $v$. This implies that the influence of each attribute on the node representation is treated as equal, and, as the number of attributes increases, the impact of important attributes on the node representation is diluted. Given that attributes possess different levels of practical importance, their influences may be diluted in cases where only a small subset of attributes is crucial for the node representation.
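As a rough illustration of Definition 3.1 (not the authors' code), the sketch below computes influence scores with autograd for a single node under sum pooling; `Z` and `attr_idx` are hypothetical inputs. For sum pooling the result is exactly the constant $1/|\mathcal{T}_v|$ discussed above.

```python
import torch

def intra_node_dilution(Z, attr_idx):
    """Definition 3.1 for one node: influence of each attribute embedding z_t
    on h_v^(0) = sum_{t in T_v} z_t, normalized over T_v.

    Z: (N_T, d) attribute embeddings; attr_idx: indices of T_v.
    """
    Zv = Z[attr_idx].clone().requires_grad_(True)   # (|T_v|, d)
    h0 = Zv.sum(dim=0)                              # sum-pooled node feature
    # Sum of |Jacobian| entries per attribute. For h0 = sum_t z_t, the
    # per-attribute Jacobian is the identity, so every score equals d.
    scores = torch.zeros(len(attr_idx))
    for k in range(h0.numel()):
        g, = torch.autograd.grad(h0[k], Zv, retain_graph=True)
        scores += g.abs().sum(dim=1)
    return scores / scores.sum()                    # constant 1/|T_v| here
```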
### 3.2 Inter-node Dilution: Measuring Node Influence on Final Representation
The inter-node dilution factor of each node is calculated by considering the influence of the initial node representation on the output representation at the last layer, relative to the influences of all other nodes. We adapt the Jacobian matrix of node representations, introduced by Xu et al. for quantifying the influence of one node on another, to measure the influence of each node on itself.
Definition 3.2. (Inter-node dilution factor). Let \( h_v^{(0)} \) be the initial feature and \( h_v^{(l)} \) be the learned representation of node \( v \in V \) at the \( l \)-th layer. We define the inter-node dilution factor as the normalized influence distribution of node-level representations: \( \delta_{\text{inter}}(v) = I_v(v)/\sum_{u \in V} I_v(u) \), or
\[
\delta_{\text{inter}}(v) = e^T \left[ \frac{\partial h_v^{(l)}}{\partial h_v^{(0)}} \right] e / \sum_{u \in V} e^T \left[ \frac{\partial h_v^{(l)}}{\partial h_u^{(0)}} \right] e
\]
(4)
In MPNNs, the representation \( h_v^{(l)} \) is calculated from the non-linear transformation (i.e., Update(\(\cdot\))) and the aggregation of the representations \( h_u^{(l-1)} \) for \( u \in \tilde{N}(v) \). To observe the effect of the aggregation exclusively, we eliminate the effect of the non-linear transformation by setting all weight matrices and the initial node feature matrix to the identity. We define \( \delta_{\text{Agg}}(v) \), the aggregation-only version of the inter-node dilution factor, with \( W^{(l)} = W^{(l-1)} = \ldots = W^{(1)} = H^{(0)} = I_N \in \mathbb{R}^{N \times N} \). The output representation of the aggregation-only version of GCN is \( H^{(l)} = (\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}})^l I_N \), where \( \tilde{A} \) is the adjacency matrix with self-loops and \( \tilde{D} \) is the corresponding degree matrix. The numerator of \( \delta_{\text{Agg}}(v) \) is calculated from:
\[
\frac{\partial h_v^{(l)}}{\partial h_v^{(0)}} = \prod_{i=1}^{l} \alpha_{vv}^{(i)} \cdot \frac{\partial h_v^{(0)}}{\partial h_v^{(0)}} + \sum_{u \in \tilde{N}(v) \setminus \{v\}} \sum_{k=1}^{l-1} \prod_{j=k+2}^{l} \alpha_{vv}^{(j)} \, \alpha_{vu}^{(k+1)} \frac{\partial h_u^{(k)}}{\partial h_v^{(0)}}
\]
(5)
where \( \alpha_{vu}^{(i)} \) indicates the aggregation coefficient from node \( u \) to node \( v \) at the \( i \)-th layer. The former term, defined for \( l \geq 1 \), indicates the amount of node \( v \)'s representation that is preserved, and the latter term, defined for \( l \geq 2 \), indicates the amount of node \( v \)'s representation that returns from its neighbors after two or more hops of aggregation. The denominator of \( \delta_{\text{Agg}}(v) \) is calculated from:
\[
\sum_{u \in V} \frac{\partial h_v^{(l)}}{\partial h_u^{(0)}} = \sum_{x \in \tilde{N}(v)} \sum_{u \in V} \sum_{k=0}^{l-1} \prod_{j=k+2}^{l} \alpha_{uv}^{(j)} \alpha_{vx}^{(k+1)} \frac{\partial h_x^{(k)}}{\partial h_u^{(0)}}
\]
(6)
Hypothesis 2. (Occurrence of inter-node dilution 1). For a node \( v \) and its adjacent nodes, which are denoted as \( \tilde{N}(v) \), inter-node dilution occurs when the aggregation coefficient of the self-loop, \( \alpha_{vv} \), is significantly smaller than the sum of the coefficients of the other edges connecting node \( v \) and its adjacent nodes: \( \alpha_{vv} \ll \sum_{u \in \tilde{N}(v) \setminus \{v\}} \alpha_{vu} \).
The inter-node dilution factor for the aggregation-only at a single layer is calculated as:
\[
\delta_{\text{Agg}}(v) = e^T \left[ \alpha_{vv} \frac{\partial h_v^{(0)}}{\partial h_v^{(0)}} \right] e / e^T \left[ \sum_{u \in \tilde{N}(v)} \alpha_{vu} \frac{\partial h_u^{(0)}}{\partial h_u^{(0)}} \right] e = \frac{\alpha_{vv}}{\sum_{u \in \tilde{N}(v)} \alpha_{vu}}
\]
(7)
In most MPNNs, inter-node dilution occurs when the degree (i.e., \( |\tilde{N}(v)| \)) is high. For GCN, it can occur even with a low degree if the neighbor nodes have smaller degrees than node \( v \), because the aggregation coefficient for the self-loop is \( \alpha_{vv} = \frac{1}{\deg(v)} \) while the coefficients for edges to neighbor nodes are \( \alpha_{vu} = \frac{1}{\sqrt{\deg(v)\deg(u)}} \). As shown in Figure 2(a), a significant number of nodes exhibit low \( \delta_{\text{Agg}}(v) \) values even after one-hop aggregation.
Hypothesis 3. (Occurrence of inter-node dilution 2). Inter-node dilution occurs at node \( v \) as the size of its receptive field \( |B_l(v)| \) increases.
As explained in Xu et al. and Topping et al., the size of the receptive field grows exponentially as the number of layers increases. Consequently, information from a larger number of nodes is integrated, diluting the information specific to each individual node. Figure 2(b) illustrates this relationship between the average inter-node dilution factor (aggregation-only) and the average size of the receptive field on the Computers dataset as the number of hops increases.
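Under the aggregation-only setup, $\delta_{\text{Agg}}(v)$ admits a simple closed form: the diagonal of $(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2})^l$ divided by the corresponding row sums. A small NumPy sketch, assuming a dense 0/1 adjacency matrix:

```python
import numpy as np

def delta_agg(adj, l):
    """Aggregation-only inter-node dilution factor after l GCN hops.

    adj: (N, N) 0/1 adjacency WITHOUT self-loops.
    Returns delta_Agg(v) = (A_hat^l)_vv / sum_u (A_hat^l)_vu.
    """
    A = adj + np.eye(adj.shape[0])          # add self-loops (tilde-A)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    P = np.linalg.matrix_power(A_hat, l)    # l-hop influence matrix
    return np.diag(P) / P.sum(axis=1)       # preserved self-information
```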
Figure 3: The overall architecture of the Node Attribute Transformer (NATR) comprises two main components: the attribute encoder for attribute-level representations and the attribute decoder for node-level representations. It can be combined with any node embedding module, such as MPNNs.
## 4 NODE ATTRIBUTE TRANSFORMER
In this section, we describe the architecture of the Node Attribute Transformer (NATR) in detail. As illustrated in Figure 3, NATR consists of an attribute encoder and an attribute decoder. While the encoder is designed to capture the correlation between attributes, the decoder plays a crucial role in mitigating over-dilution: it integrates attribute representations across all layers, addressing inter-node dilution, and assigns greater weight to important attributes, tackling intra-node dilution, as discussed in Section 6.1.
### 4.1 ATTRIBUTE ENCODER
Given a set of randomly initialized representations of attribute tokens \( z^{(0)}_t \in \mathbb{R}^{d_T} \) and its matrix form \( Z^{(0)} \in \mathbb{R}^{N_T \times d_T} \) with dimension \( d_T \), the attribute representation \( Z^{(n)} \), the output of the \( n \)-th layer of the attribute encoder, is obtained as \( Z^{(n)} = \text{SelfAttn}^{(n)}(Z^{(n-1)}) \), where \( \text{SelfAttn}^{(n)} \) is the \( n \)-th encoder layer containing Multi-Head Self-Attention (MHSA), Add&Norm (Ba et al., 2016), and Feed-Forward Network (FFN) layers, as illustrated in Figure 3. We add \( z^{(0)}_t \) to \( z^{(n)}_t \) for the key and the query at all encoder layers, like a positional encoding; this is omitted from the formulation for simplicity. After \( N \) encoder layers in total, the attribute representation \( z_t = z^{(N)}_t + z^{(0)}_t \), written as \( Z \in \mathbb{R}^{N_T \times d_T} \) in matrix form, is fed to the attribute decoder. For simplicity, we use the same dimension (\( d_T = d \)) for attribute-level and node-level representations.
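A minimal sketch of one attribute-encoder layer, assuming PyTorch's `nn.MultiheadAttention` and illustrative dimensions; the positional-style addition of $z^{(0)}_t$ to queries and keys follows the description above.

```python
import torch
import torch.nn as nn

class AttrEncoderLayer(nn.Module):
    """One attribute-encoder layer: MHSA + Add&Norm + FFN over attribute tokens."""
    def __init__(self, d, heads=4):
        super().__init__()
        self.mhsa = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, Z, Z0):
        # Z, Z0: (1, N_T, d) attribute tokens and their initial embeddings
        qk = Z + Z0                        # positional-style shift for query/key
        attn, _ = self.mhsa(qk, qk, Z)
        Z = self.norm1(Z + attn)           # Add&Norm
        return self.norm2(Z + self.ffn(Z)) # FFN + Add&Norm
```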
### 4.2 ATTRIBUTE DECODER
The attribute decoder comprises the node embedding module, Multi-Head Attention (MHA), Add&Norm, and FFN. The output of the attribute encoder, \( Z \), is used to calculate the key \( K^{(m)} = ZW^{(m)}_{DEC,K} \) and the value \( V^{(m)} = ZW^{(m)}_{DEC,V} \) in the MHA of the \( m \)-th decoder layer. The query \( Q^{(m)} = H^{(m)}W^{(m)}_{DEC,Q} \) is calculated from the output of the node embedding module \( H^{(m)} \in \mathbb{R}^{N_V \times d} \), such as an MPNN, at the \( m \)-th decoder layer:
\[
H^{(m)} = \text{NodeModule}(\tilde{H}^{(m-1)}, A)
\]
where \( \tilde{H}^{(m-1)} \) is the output of the previous decoder layer (\( \tilde{H}^{(0)} = H^{(0)} = XW^{(0)} \)) and \( A \) represents the adjacency matrix. We add \( H^{(0)} \) before calculating the query at all decoder layers; this is also omitted from the formulation for simplicity. We denote the node embedding module in the subscript, as \( \text{NATR}_{\text{NodeModule}} \). If \( H^{(m)} \) is updated by a GCN layer with the formulation \( H^{(m)} = \tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\tilde{H}^{(m-1)}W^{(m)} \), the model is denoted as \( \text{NATR}_{\text{GCN}} \). The attention coefficient for each attribute in the MHA is then calculated according to the node-level representation \( Q^{(m)} \).
\[
O^{(m)} = \text{MHA}(Q^{(m)}, K^{(m)}, V^{(m)}) = \text{Concat}(\text{head}_1, ..., \text{head}_h)W^{(m)}_{DEC,O}
\]
Table 1: Dataset statistics. \(|\mathcal{T}_v|\) and degree are related to intra- and inter-node dilutions, respectively.
| Dataset | $\vert\mathcal{V}\vert$ | AVG. DEGREE | MEDIAN DEGREE | $\vert\mathcal{T}\vert$ | AVG. $\vert\mathcal{T}_v\vert$ | MEDIAN $\vert\mathcal{T}_v\vert$ | MAX. $\vert\mathcal{T}_v\vert$ |
|---------------|-------|--------|---------------|------|-------|--------|------|
| AMAZON COMPUTERS | 13752 | 30.393 | 19 | 767 | 267.2 | 204 | 767 |
| AMAZON PHOTO | 7650 | 26.462 | 18 | 745 | 258.8 | 193 | 745 |
| CORA ML | 2995 | 4.632 | 3 | 2879 | 50.5 | 49 | 176 |
| OGB-DDI$_{\text{SUBSET}}$ | 3531 | 499.582 | 500 | 1024 | 58.2 | 56 | 270 |
| OGB-DDI$_{\text{FULL}}$ | 4267 | 500.544 | 446 | 1024+1 | 49.1 | 51 | 271 |
where \(\text{head}_i = \text{softmax}\left(\frac{Q_i K_i^\top}{\sqrt{d}}\right)V_i\). We use masks in the MHA so that each node exclusively merges the representations of the attributes it possesses. The aggregated representation of neighbor nodes, \(H^{(m)}\), is added to the output \(O^{(m)} \in \mathbb{R}^{N_v \times d}\) and then fed to a normalization layer followed by an FFN layer. After an additional normalization layer with a skip connection, the final representation at the \(m\)-th decoder layer, \(\tilde{H}^{(m)} \in \mathbb{R}^{N_v \times d}\), is used as the input feature of the node embedding module at the next decoder layer.
\[G^{(m)} = \text{Norm}(H^{(m)} + O^{(m)}), \quad \tilde{H}^{(m)} = \text{Norm}(\text{FFN}(G^{(m)}) + G^{(m)})\]
Note that \(H^{(m)}\) is the representation aggregated from neighbors, while \(O^{(m)}\) is the representation of each node itself. Therefore, we can control the inter-node dilution factor by changing Eq. (10) to \(G^{(m)} = \text{Norm}((1 - \lambda)H^{(m)} + \lambda O^{(m)})\), where \(0 \leq \lambda \leq 1\) can be a hyperparameter, a learnable parameter, or an attention coefficient. In this work, we use the original formulation in Eq. (10).
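The masked cross-attention at the heart of the decoder can be sketched as follows, assuming a `batch_first` attention module, that every node has at least one attribute (no all-masked rows), and hypothetical tensor names; the returned attention weights play the role of the learned intra-node dilution factors discussed in Section 6.1.

```python
import torch
import torch.nn as nn

def attribute_decoder_attention(Q, Z, X, mha: nn.MultiheadAttention):
    """Masked cross-attention: node queries attend to encoded attribute
    tokens, restricted to the attributes each node possesses.

    Q: (1, N_v, d) node queries; Z: (1, N_T, d) encoder output;
    X: (N_v, N_T) binary node-attribute matrix (mask source).
    """
    mask = (X == 0)                               # True = attention blocked
    attn_mask = mask.repeat(mha.num_heads, 1, 1)  # (heads, N_v, N_T)
    O, weights = mha(Q, Z, Z, attn_mask=attn_mask)
    return O, weights                             # per-attribute coefficients
```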
Plug-in Version of NATR. We also provide a plug-in variant of NATR. In a scenario where an established node embedding model (e.g., an MPNN) is already trained and deployed in industry, NATR can be easily incorporated into the existing model as a separate architecture, as illustrated in the Appendix. The plug-in version is also available for single-layer models such as SGC (Wu et al., 2019). The difference from standard NATR is that the node embedding module is not nested inside the decoder but operated separately.
## 5 EXPERIMENTS
We evaluate NATR on four benchmark datasets with the OGB pipeline (Hu et al., 2020). To validate the informativeness of node representations, we conduct link prediction and node classification tasks. All experiments are repeated 20 times, and average performance is reported. GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), and GAT (Veličković et al., 2018) are selected as the main baselines. For NATR$_{\text{SGC}}$, we adopt the plug-in version of the architecture. In GCN and SGC, aggregation coefficients are calculated based on the degree, while GAT uses attention coefficients. Thereby, the inter-node dilution factor in GCN and SGC is determined by the topology, while in GAT it is determined by node representations. Details of the experiments, extended results, a comparison with various node embedding modules such as GCNII and Graphormer (Hamilton et al., 2017; Bianchi et al., 2021; Corso et al., 2020; Chen et al., 2020b; Ying et al., 2021), a complexity analysis, and ablation studies are reported in the Appendix.
### 5.1 DATASETS
The Computers and Photo datasets are segments of the Amazon co-purchase graph (McAuley et al., 2015; Shchur et al., 2018). Nodes indicate products, and edges indicate that two products are frequently purchased together. The bag-of-words of the product reviews is used as the attribute set. The CoraML dataset also contains bag-of-words attributes, but here nodes are documents and edges represent citation links between them (McCallum et al., 2000; Bojchevski & Günnemann, 2018). In the OGB-DDI dataset, provided by Wishart et al. and Hu et al., each node represents a drug and edges represent interactions between drugs. We extract node attributes from molecular structures in the DrugBank DB (Wishart et al., 2018) by generating Morgan fingerprints (radius 3, 1024 bits) with RDKit. Any nodes not supported by RDKit or DrugBank are deleted, and the corresponding graph is reconstructed as OGB-DDI$_{\text{SUBSET}}$. The OGB-DDI$_{\text{FULL}}$ dataset includes all nodes and edges, with unsupported nodes assigned a dummy attribute.
Table 2: Experimental results for link prediction (Hits@20, top) and node classification with accuracy and MAD score (bottom) on benchmark datasets. Extended results for various node embedding methods, including SAGE, PNA, and Graphormer, are reported in the Appendix.
| Method | Computers | Photo | Cora ML | OGB-DDI$_{\text{subset}}$ | OGB-DDI$_{\text{full}}$ |
|--------|-----------|-------|---------|---------------------------|--------------------------|
| GCN | 31.01 ± 3.37 | 51.05 ± 5.45 | 75.93 ± 4.36 | 76.11 ± 5.92 | 68.18 ± 9.24 |
| NATR$_{\text{GCN}}$ | 42.38 ± 3.21 | 58.12 ± 4.18 | 77.04 ± 2.61 | 78.51 ± 4.03 | 73.07 ± 8.16 |
| GAT | 24.73 ± 4.96 | 48.23 ± 7.43 | 72.42 ± 3.45 | 61.46 ± 11.51 | 29.02 ± 12.52 |
| NATR$_{\text{GAT}}$ | 40.63 ± 3.97 | 56.06 ± 3.54 | 74.10 ± 3.22 | 80.68 ± 2.32 | 77.80 ± 6.79 |
| SGC | 30.37 ± 2.73 | 51.31 ± 4.80 | 74.49 ± 3.03 | 41.04 ± 7.12 | 39.19 ± 7.87 |
| NATR$_{\text{SGC}}$ | 36.99 ± 3.34 | 57.42 ± 4.38 | 77.20 ± 2.85 | 86.79 ± 3.66 | 76.99 ± 10.91 |
| Method | Computers Acc. | Computers MAD | Photo Acc. | Photo MAD | Cora ML Acc. | Cora ML MAD |
|--------|----------------|---------------|------------|-----------|--------------|-------------|
| GCN | 80.12 ± 1.71 | 0.46 ± 0.03 | 88.50 ± 2.11 | 0.83 ± 0.06 | 78.71 ± 2.00 | 0.55 ± 0.03 |
| NATR$_{\text{GCN}}$ | 81.70 ± 2.75 | 0.82 ± 0.04 | 90.84 ± 1.26 | 0.91 ± 0.02 | 80.39 ± 2.28 | 0.68 ± 0.03 |
| GAT | 80.86 ± 1.95 | 0.63 ± 0.05 | 88.87 ± 2.04 | 0.57 ± 0.04 | 77.35 ± 2.02 | 0.84 ± 0.04 |
| NATR$_{\text{GAT}}$ | 81.39 ± 2.12 | 0.67 ± 0.03 | 89.23 ± 1.93 | 0.89 ± 0.02 | 79.36 ± 1.66 | 0.74 ± 0.02 |
| SGC | 80.31 ± 1.53 | 0.26 ± 0.03 | 89.18 ± 1.67 | 0.45 ± 0.07 | 79.30 ± 1.89 | 0.34 ± 0.02 |
| NATR$_{\text{SGC}}$ | 80.63 ± 2.30 | 0.68 ± 0.03 | 89.60 ± 1.74 | 0.78 ± 0.05 | 80.22 ± 1.03 | 0.92 ± 0.02 |
### 5.2 Tasks
**Link Prediction.** We conduct extensive experiments on the link prediction task. Attribute-level representations are especially important in predicting potential links between nodes (Li et al., 2018a; Hao et al., 2021). In the case of the OGB-DDI$_{\text{full}}$ dataset, the attributes indicate substructures of chemical compounds, so they can provide information about potential interactions between drugs in a biological system. The overall performance is reported in Table 2 (top), and performance as a function of the number of layers is reported in Table 3.
**Node Classification.** Although smoothing node representations toward their neighbors can benefit node classification on homogeneous graphs, as opposed to preserving the individual features of each node, our experimental results demonstrate that NATR does not impede performance. We also measure the smoothness of node representations using the Mean Average Distance (MAD) (Chen et al., 2020a). The experimental results in Table 2 (bottom) show that the NATR architecture helps address the over-smoothing issue by preserving the individual representation of each node.
## 6 Analysis
### 6.1 Improvements in the Dilution Factors of NATR
**The intra-node dilution factor.** Unlike the $1/|\mathcal{T}_v|$ weighting used in MPNNs, NATR can enhance the representations of important attributes while suppressing others. The intra-node dilution factor for attribute $t$ is calculated as $\exp(Q_v K_t^\top)/\sum_{s \in \mathcal{T}_v} \exp(Q_v K_s^\top)$, the attention coefficient at node $v$. In comparison to GCN, NATR$_{\text{GCN}}$ increases $\delta_v^{\text{intra}}(t)$ in 38.07% of all cases, with a median increase of +30.31% and a maximum increase of +4005.60% on the Computers dataset. Detailed statistics are reported in the Appendix.
**The inter-node dilution factor.** The final representation of node $v$ at the last layer, $\hat{H}_v^{(M)}$, is calculated from two node-level representations: $H_v^{(M)}$ and $O_v^{(M)}$. The term $H_v^{(M)}$ contains information from neighboring nodes, while $O_v^{(M)}$ pertains exclusively to node $v$.
Table 3: Hits@20 performance on Computers dataset by the number of layers.
| Method | 2 layers | 3 layers | 4 layers | 5 layers |
|--------|----------|----------|----------|----------|
| GCN | 31.01 | 30.84 | 28.97 | 26.99 |
| GCN$_{\text{JK}}$ | 29.47 | 27.85 | 28.00 | 27.49 |
| NATR$_{\text{GCN}}$ | 39.81 | 41.54 | 40.96 | 42.38 |
| GAT | 24.73 | 21.07 | 11.52 | 4.15 |
| GAT$_{\text{JK}}$ | 27.22 | 24.54 | 23.90 | 23.98 |
| NATR$_{\text{GAT}}$ | 39.51 | 39.58 | 40.63 | 40.21 |
| SGC | 30.37 | 25.78 | 24.30 | 23.87 |
| NATR$_{\text{SGC}}$ | 36.99 | 36.47 | 35.31 | 34.01 |
The inter-node dilution factor of NATR is defined as:
$$\delta_{\text{inter}}(v) = e^T \left[ \frac{\partial \hat{H}_v^{(M)}}{\partial H_v^{(0)}} + \sum_{m=1}^{M} \frac{\partial \hat{H}_v^{(M)}}{\partial O_v^{(m)}} \right] e \Big/ \; e^T \left[ \sum_{u \in \mathcal{V}} \left( \frac{\partial \hat{H}_v^{(M)}}{\partial H_u^{(0)}} + \sum_{m=1}^{M} \frac{\partial \hat{H}_v^{(M)}}{\partial O_u^{(m)}} \right) \right] e$$
Even when the $\frac{\partial \hat{H}_v^{(M)}}{\partial H_v^{(0)}}$ term in the numerator is small relative to the $\sum_{u \in \mathcal{V}} \frac{\partial \hat{H}_v^{(M)}}{\partial H_u^{(0)}}$ term in the denominator, as in MPNNs, the factor value can still be high in NATR. This is primarily due to the contribution of $\sum_{m=1}^{M} \frac{\partial \hat{H}_v^{(M)}}{\partial O_v^{(m)}}$, which helps each node preserve its own feature, as shown in Figure 2(b). As demonstrated in Table 3, the performance of MPNNs deteriorates as the depth increases, whereas NATR models exhibit performance gains. In the case of NATR$_{\text{SGC}}$, because we adopt the plug-in version that uses over-diluted representations as queries, performance decreases slightly with depth. MPNNs with jumping knowledge (JK) (Xu et al., 2018), which concatenate the outputs of all layers, alleviate the performance drop compared to the original models. However, JK models fail to improve performance, implying that they are inadequate in utilizing the information as the number of layers increases.
### 6.2 EFFECTIVENESS OF NATR
To explore the effectiveness of NATR, we measure performance on subsets representing over-diluted nodes $V_{Q1}$ and less-diluted nodes $V_{Q4}$ after two-hop aggregation, which are defined as:
$$V_{Q1} = \{ v \in V \mid \delta_{\text{Agg}}^{\text{inter}}(v) \leq Q1 \}$$
$$V_{Q4} = \{ v \in V \mid \delta_{\text{Agg}}^{\text{inter}}(v) \geq Q3 \land \delta_{\text{Agg}}^{\text{inter}}(v) \neq 1 \}$$
where $Q1$ and $Q3$ represent the first and the third quartiles, which divide the set into the bottom 25% and the top 25% of $\delta_{\text{Agg}}^{\text{inter}}(v)$ values, respectively.
We define two subsets of corresponding edges as $E_{Q1} = \{ (i,j) \in E \mid i \in V_{Q1} \lor j \in V_{Q1} \}$ and $E_{Q4} = \{ (i,j) \in E \mid i \in V_{Q4} \lor j \in V_{Q4} \}$. The isolated nodes, defined as those with $\delta_{\text{Agg}}^{\text{inter}}(v) = 1$, are excluded. As shown in Table 4, NATR models demonstrate improved performance on both subsets, with particularly noteworthy improvement on over-diluted nodes compared to MPNNs.
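For reference, a small NumPy sketch of the subset construction, assuming `delta` holds the per-node $\delta_{\text{Agg}}^{\text{inter}}(v)$ values:

```python
import numpy as np

def dilution_subsets(delta):
    """Split nodes into V_Q1 (over-diluted, bottom 25%) and V_Q4
    (less-diluted, top 25%), excluding isolated nodes (delta == 1)."""
    q1, q3 = np.quantile(delta, [0.25, 0.75])
    V_Q1 = np.where(delta <= q1)[0]
    V_Q4 = np.where((delta >= q3) & (delta != 1.0))[0]
    return V_Q1, V_Q4
```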
Furthermore, we compare models under various conditions, as described in Table 5. The distinction between (A) and (D) lies in the weights assigned to attribute-level representations: both are MLP-based models, but (D) alleviates intra-node dilution by mixing attributes according to attention coefficients. The comparison between (D) and (E) shows the effectiveness of modeling the correlation between attributes through SelfAttn in the attribute encoder. The GCN models, (B) and (C), improve performance over (A) by incorporating contextual information from neighboring nodes. The complete model (F) exploits GCN as a node embedding module, allowing attribute representations to be fused while taking the graph context into account.
## 7 CONCLUSION
In this work, we first introduce the concept of the over-dilution phenomenon to comprehend the limitations of MPNNs. To assess the over-dilution effect in a formal way, we define factors for its two sub-phenomena: intra-node dilution and inter-node dilution. The concept of over-dilution encompasses the diminution of information at both the attribute level and the node level. Based on our analysis of the dilution effect, we propose the Node Attribute Transformer (NATR) as a solution that alleviates over-dilution and enhances performance. Our approach presents a novel perspective for understanding the limitations of MPNNs and a foundation for the development of more informative representations on graphs.
Table 4: Hits@5 performance on subsets of the Computers dataset.
| Model | $E_{Q1}$ | $E_{Q4}$ |
|----------------|----------|----------|
| GCN | 19.96 | 42.69 |
| NATR$_{GCN}$ | 23.96 (+20.04%) | 45.18 (+5.84%) |
| GAT | 13.57 | 34.86 |
| NATR$_{GAT}$ | 24.46 (+80.38%) | 43.49 (+24.74%) |
| SGC | 19.39 | 38.61 |
| NATR$_{SGC}$ | 24.10 (+24.29%) | 39.72 (+2.89%) |
Table 5: Comparison with various models: (A) MLP, (B) GCN, (C) GCN$_{\text{JK}}$, (D) NATR$_{\text{MLP}}$ with an MLP encoder, (E) NATR$_{\text{MLP}}$ with a SelfAttn encoder, (F) NATR$_{\text{GCN}}$. CORR indicates that the model utilizes the correlation between attributes, MP denotes the use of message passing, and ATTN indicates that the value is determined by the attention mechanism.
| Model | $\delta_v^{\text{intra}}(t)$ | $\delta_{\text{Agg}}^{\text{inter}}(v)$ | CORR | MP | Hits@20 |
|-------|------------------------------|------------------------------------------|------|----|---------|
| (A) | $1/\vert\mathcal{T}_v\vert$ | high | X | X | 20.37 |
| (B) | $1/\vert\mathcal{T}_v\vert$ | low | X | ✓ | 31.01 |
| (C) | $1/\vert\mathcal{T}_v\vert$ | high | X | ✓ | 29.47 |
| (D) | ATTN | high | X | X | 33.26 |
| (E) | ATTN | high | ✓ | X | 34.89 |
| (F) | ATTN | high | ✓ | ✓ | 42.38 |
REFERENCES
Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In Proceedings of International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=18OOPhOCVH2.
Adrián Arnaiz-Rodríguez, Ahmed Begga, Francisco Escolano, and Nuria M Oliver. Diffwire: Inductive graph rewiring via the lovász bound. In Proceedings of the First Learning on Graphs Conference (LOG), pp. 15:1–15:27, 2022.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Filippo Maria Bianchi, Daniele Grattarola, Lorenzo Livi, and Cesare Alippi. Graph neural networks with convolutional arma filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. In Proceedings of International Conference on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id=r1ZdKJ-0W.
Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In Proceedings of International Conference on Learning Representations (ICLR), 2022. URL https://openreview.net/forum?id=F72ximsx7C1.
Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3438–3445, 2020a.
Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In Proceedings of International conference on machine learning (ICML), pp. 1725–1735, 2020b.
Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. Principal neighbourhood aggregation for graph nets. In Proceedings of Advances in Neural Information Processing Systems (NeurIPS) 33, pp. 13260–13271. Curran Associates, Inc., 2020.
Francesco Di Giovanni, Lorenzo Giusti, Federico Barbero, Giulia Luise, Pietro Lio, and Michael M. Bronstein. On over-squashing in message passing neural networks: The impact of width, depth, and topology. In Proceedings of the 40th International Conference on Machine Learning (ICML), pp. 7865–7885, 2023.
Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021.
Moshe Eliasof, Lars Ruthotto, and Eran Treister. Improving graph neural networks with learnable propagation operators. In Proceedings of the 40th International Conference on Machine Learning (ICML), pp. 9224–9245, 2023.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of International conference on machine learning (ICML), pp. 1263–1272. PMLR, 2017.
Neil Zhenqiang Gong, Ameet Talwalkar, Lester Mackey, Ling Huang, Eui Chul Richard Shin, Emil Stefanov, Elaine Shi, and Dawn Song. Joint link prediction and attribute inference using a social-attribute network. ACM Transactions on Intelligent Systems and Technology (TIST), 5(2):1–20, 2014.
Alessio Gravina, Davide Bacciu, and Claudio Gallicchio. Anti-symmetric DGN: a stable architecture for deep graph networks. In The Eleventh International Conference on Learning Representations (ICLR), 2023.
Xiaojun Guo, Yifei Wang, Tianqi Du, and Yisen Wang. Contranorm: A contrastive learning perspective on oversmoothing and beyond. In International Conference on Learning Representations (ICLR), 2023.
|
V8PhVhb4pp
|
Some necessary details are missing. - As described in Sec. 3.3., the 2D diffusion model jointly denoise multi-view images. How does it work exactly? Are these multi-view images just stacked along the channel dimension? If so, what's the ordering?
|
TEXT-TO-3D GENERATION WITH BIDIRECTIONAL DIFFUSION USING BOTH 2D AND 3D PRIORS
Anonymous authors
Paper under double-blind review
Figure 1: Our BiDiff can efficiently generate high-quality 3D objects. It alleviates all of the following issues in previous 3D generative models: (a) low texture quality, (b) multi-view inconsistency, and (c) geometric incorrectness (e.g., the multi-face Janus problem). The outputs of our model can be further combined with optimization-based methods (e.g., ProlificDreamer) to generate better 3D geometries with slightly longer processing time (bottom row).
ABSTRACT
Most research in 3D object generation focuses on lifting 2D foundation models into 3D space, either by minimizing a 2D SDS loss or by fine-tuning on multi-view datasets. Without explicit 3D priors, these methods often lead to geometric anomalies and multi-view inconsistency. Recently, researchers have attempted to improve the genuineness of 3D objects by training directly on 3D datasets, albeit at the cost of low-quality texture generation due to the limited texture variation in 3D datasets. To harness the advantages of both approaches, we propose Bidirectional Diffusion (BiDiff), a unified framework that incorporates both a 3D and a 2D diffusion process, to preserve both 3D fidelity and 2D texture richness. Recognizing that a simple combination can yield inconsistent generation results, we further bridge the two processes with novel bidirectional guidance. Moreover, we offer an optional refinement phase that uses the denoised results to initialize optimization-based methods, markedly addressing the geometry-incorrectness problem and improving efficiency (3.4 h → 20 min). Experimental results show that our model achieves high-quality, diverse, and scalable 3D generation. The project website is https://bidiff.github.io/
Figure 2: Sampling results of Bidirectional Diffusion model. BiDiff can separately control texture generation (a) and geometry generation (b).
(a) **Texture Control**: we change the texture while maintaining the overall shape.
(b) **Shape Control**: we fix texture patterns and generate various shapes.
1 INTRODUCTION
Recent advancements in text-to-image generation (Metzer et al., 2022) have galvanized efforts to lift 2D foundation models into 3D object generation. While these methods have made strides, they predominantly focus on enriching texture quality using 2D priors, often overlooking 3D geometry. One line of work (Poole et al., 2022; Lin et al., 2022) optimizes a randomly initialized neural radiance field (NeRF) (Mildenhall et al., 2021) for 3D generation. These efforts supervise NeRF renderings with an SDS loss derived from a pre-trained 2D diffusion model, while the geometry is constrained only by the continuous density field. Without 3D constraints, these methods often produce geometric anomalies, such as the multi-face Janus problem, and necessitate prolonged optimization for every single object. Liu et al. (2023a) try to alleviate this problem by fine-tuning 2D diffusion models on multi-view datasets, but a simple image-space multi-view correspondence constraint still falls short of ensuring genuine 3D consistency.
To ensure better 3D consistency, concurrently, some researchers proposed to directly learn 3D structures from 3D datasets [Nichol et al., 2022; Jun & Nichol, 2023]. However, the majority of
objects within contemporary 3D datasets (Chang et al., 2015; Deitke et al., 2022) are synthetically created by human designers, and their textures are typically less authentic than those of real scanned objects. Moreover, the sizes of 3D datasets are still an order of magnitude smaller than those of 2D image datasets. As a result, trained 3D diffusion models can generate accurate 3D geometries but frequently produce inauthentic textures due to their limited comprehension of 2D textures.
This motivation drives our search for an innovative method that seamlessly integrates both 3D and 2D priors within a unified framework. Still, crafting an architecture that combines 3D and 2D priors poses non-trivial challenges: i) the inherent disparity in representations between the 3D and 2D domains makes learning their joint distribution a non-trivial task; ii) 3D and 2D generative models are pretrained differently, potentially resulting in opposite generative directions when combined.
To conquer these challenges, we propose Bidirectional Diffusion (BiDiff), a framework that utilizes a bidirectional guidance mechanism to bridge the 3D and 2D diffusion processes. The proposed framework ensures a coordinated denoising direction across both domains. First, we anchor the 3D and 2D diffusion processes in pre-trained large 3D and 2D foundation models, respectively, which ensures the robustness and versatility of both texture and geometry generation; moreover, as the individual 2D and 3D foundation models improve, BiDiff can be continuously improved as well. Specifically, we use a hybrid representation for 3D objects: a signed distance field (SDF (Wang et al., 2021)) in 3D and multi-view images in 2D. With this representation, we can train a 3D diffusion model in the SDF space and a 2D diffusion model in the multi-view image space, and combine them.
To further interconnect 3D and 2D diffusion models, we introduce bidirectional guidance to align the generative directions of them. During each diffusion step, the 2D diffusion model’s output is initially incorporated into the 3D diffusion process as a 2D guidance signal. Subsequently, the rendered image produced from the 3D diffusion output is integrated into the 2D diffusion model as a 3D guidance signal. This design ensures that the two diffusion models mutually inform and adapt to one another, orchestrating a unified denoising direction.
The proposed bidirectional diffusion poses several advantages over previous 3D generation models. First, since we introduce both 3D and 2D diffusion models, we can modify each diffusion process independently during inference to separately control the shape and texture of the generated results, which previous 3D diffusion methods never achieved. As illustrated in Fig. 2, we can either modulate the generated texture independently of the overall shape, or maintain consistent texture across a range of diverse shapes. Second, BiDiff can generate 3D objects (Fig. 1(d)) through a feed-forward 3D-2D joint diffusion process (∼40 s), with more authentic textures than models trained solely on 3D datasets (Jun & Nichol, 2023) (Fig. 1(a)), and offers explicit geometry (a textured mesh) with genuine 3D consistency, in contrast to methods (Liu et al., 2023a) that merely fine-tune 2D foundation models to acquire multi-view correspondence (Fig. 1(b)). Leveraging the robust capabilities of both 3D and 2D priors, the outputs generated by BiDiff effectively align text prompts with consistent geometrical structures. Upon generating these feed-forward results, our framework further offers an optional refinement phase employing an optimization method (ProlificDreamer (Wang et al., 2023)). This refinement markedly enhances processing speed (3.4 h → 20 min) while addressing geometrical inaccuracies, such as eliminating multi-face anomalies, as demonstrated in Fig. 1(e). In this way, our framework enables creators to rapidly adjust prompts to obtain a satisfactory preliminary 3D model through a lightweight feed-forward generation process and subsequently refine it into a high-fidelity result.
Through training on ShapeNet (Chang et al., 2015) and Objaverse 40K (Deitke et al., 2022), our framework is shown to generate high-quality textured 3D objects with strong generalizability. In summary, our contributions are as follows: 1) we propose Bidirectional Diffusion to jointly diffuse 3D and 2D in a unified framework; 2) we utilize both 3D and 2D priors to achieve a generalizable understanding of texture and geometry; 3) we can control the texture and geometry independently during sampling; 4) we utilize the outputs from BiDiff as a strong initialization of the optimization-based methods to improve the geometry and efficiency.
2 RELATED WORK
Figure 3: The framework of Bidirectional Diffusion. It jointly trains a 3D diffusion in the SDF $F$ space and a 2D diffusion in the multi-view image $V$ space, which are both enhanced by foundation models and interconnected by bidirectional guidance to achieve consistent denoising between the two domains.
Early 3D generative methods adopt various 3D representations, including 3D voxels (Wu et al., 2016; Smith & Meger, 2017; Henzler et al., 2019), point clouds (Achlioptas et al., 2018; Yang et al., 2018), meshes (Gao et al., 2019; Ibing et al., 2021), and implicit functions (Chen & Zhang, 2019; Park et al., 2019) for category-level 3D generation. These methods directly train the generative model on a small-scale 3D dataset, so the generated objects may either miss tiny geometric structures or lose diversity. Even though large-scale (Deitke et al., 2022) and high-quality (Tong Wu, 2023) 3D datasets have appeared in recent years, they are still much smaller than the datasets used to train 2D image generation models.
With powerful text-to-image synthesis models (Radford et al., 2021; Saharia et al., 2022; Rombach et al., 2022), a new paradigm emerged for 3D generation without large-scale 3D datasets by leveraging 2D generative models. One line of work utilizes 2D priors from the pre-trained image-text model CLIP (Jain et al., 2022; Khalid et al., 2022) or from 2D diffusion models (Wang et al., 2022; Lin et al., 2022; Metzer et al., 2022) to guide the optimization of an underlying 3D representation. However, these models cannot guarantee cross-view 3D consistency, and the per-instance optimization scheme suffers from both high computational cost and over-saturation. Later works improve these models using textual codes or depth maps (Seo et al., 2023; Deng et al., 2023; Melas-Kyriazi et al., 2023), and Wang et al. (2023) directly model the 3D distribution to improve diversity. These methods alleviate the visual artifacts but still cannot guarantee high-quality 3D results.
Another line of work learns 3D priors directly from 3D datasets. As the diffusion model has become the de-facto backbone for most recent generative models, it has been adapted to learn 3D priors using implicit spaces such as point cloud features (Zeng et al., 2022; Nichol et al., 2022), NeRF parameters (Jun & Nichol, 2023; Erkoç et al., 2023), or SDF spaces (Cheng et al., 2022; Liu et al., 2023b). Synthesized multi-view images rendered from 3D datasets have also been utilized to provide cross-view 3D-consistent knowledge (Liu et al., 2023a). These methods typically offer fast inference and 3D-consistent results. However, due to the inferior quality and size of 3D datasets, they generally yield visually lower-quality results with limited diversity. Recently, a few methods (Qian et al., 2023; Shi et al., 2023) have explored combining 2D and 3D priors from individually pre-trained diffusion models, but they often suffer from inconsistency between the two generative processes.
3 METHOD
As many previous studies (Liu et al., 2023a; Qian et al., 2023) have illustrated, both 2D texture and 3D geometry are important for 3D object generation. However, incorporating 3D structural priors and 2D textural priors is challenging: i) combining both 3D and 2D generative models into a single cohesive framework is not trivial; ii) in both training and inference, the two generative models may pull in opposite generative directions; iii) the scarcity of high-quality and diverse 3D data considerably hampers the generalizability of a unified 3D and 2D comprehension.
To tackle these problems, we propose Bidirectional Diffusion, a novel framework that marries a 3D diffusion model with a 2D diffusion model using bidirectional guidance, as illustrated in Fig. 3. For a
robust and generalizable understanding of texture and geometry, we incorporate 3D and 2D priors derived from pre-trained foundation models into their respective denoising processes. To further enhance the efficacy and ensure optimal utilization of the 3D and 2D priors while providing precise control over their influence, we present a prior enhancement strategy, which also helps to achieve decoupled texture and geometry control. Moreover, we utilize the results from BiDiff as a strong initialization of optimization-based methods to obtain more delicate post-optimized results efficiently. Below, we start with the introduction of bidirectional diffusion.
### 3.1 BIDIRECTIONAL DIFFUSION
To incorporate both 2D and 3D priors, we represent a 3D object using a hybrid combination of two formats: a signed distance field (SDF) \( F \) and a multi-view image set \( V = \{ I^i \}_{i=1}^M \), where \( F \) is computed from signed distance values on \( N \times N \times N \) grid points and \( I^i \) is the \( i \)-th image of a multi-view image set of size \( M \).
With this representation, we learn the joint distribution of \( \{ F, V \} \) utilizing two distinct diffusion models: a 3D diffusion model \( D_{3d} \) in the SDF space and a 2D multi-view diffusion model \( D_{2d} \) in the image domain. Specifically, given a timestep \( t \), we add Gaussian noise to both the SDF and the multi-view images as
\[
F_t = \sqrt{\alpha_t} F_0 + \sqrt{1 - \alpha_t}\, \epsilon_{3d} \quad \text{and} \quad I_t^i = \sqrt{\alpha_t} I_0^i + \sqrt{1 - \alpha_t}\, \epsilon_{2d}^i \quad \forall i,
\]
where \( \epsilon \sim \mathcal{N}(0, I) \) is random noise and \( \alpha_t \) is the noise schedule, which differs for 3D and 2D. The straightforward approach is to train these two diffusion models separately by minimizing the following two objectives:
\[
L_{\text{simple3d}} = E_{F_0 \sim q(F_0), \epsilon_{3d} \sim \mathcal{N}(0, I), t \sim U[1, T]} \| \epsilon_{3d} - D_{3d}(F_t, t) \|_2^2,
\]
\[
L_{\text{simple2d}} = \frac{1}{M} \sum_{i=1}^{M} E_{I_0^i \sim q(I_0^i), \epsilon_{2d}^i \sim \mathcal{N}(0, I), t \sim U[1, T]} \| \epsilon_{2d}^i - D_{2d}(I_t^i, t) \|_2^2.
\]
However, such an approach overlooks the interplay between 3D and 2D. This oversight can lead to incongruent generation outcomes between 3D geometry and 2D multi-view images, hindering the network’s capacity for concurrent 3D and 2D comprehension.
Therefore, we resolve this problem with a novel Bidirectional Diffusion. In this model, the consistency between the 3D and 2D diffusion outputs is enforced through bidirectional guidance. First, we add 2D guidance to the 3D generative process, as shown in Fig. 3. Specifically, during each denoising step \( t \), we feed the previously denoised multi-view images \( V_{t+1}' = \{ I_{t+1}'^i \}_{i=1}^M \) into the 3D diffusion model as \( \epsilon_{3d}' = D_{3d}(F_t, V_{t+1}', t) \). This guidance steers the current 3D denoising direction to ensure 2D-3D consistency. It is worth mentioning that, during training, the denoised output \( V_{t+1}' \) from the previous step \( t + 1 \) is inaccessible, so we directly substitute it with the ground truth \( V_t \). During inference, however, we utilize the denoised images from the preceding step. We can then obtain the denoised radiance field \( F_0' \) from the 2D-guided noise prediction \( \epsilon_{3d}' \) by \( F_0' = \frac{1}{\sqrt{\alpha_t}} (F_t - \sqrt{1 - \alpha_t}\, \epsilon_{3d}') \).
Second, we also add 3D guidance to the 2D generative process. Specifically, using the same camera poses, we render multi-view images \( H_t^i \) from the radiance field \( F_0' \) produced by the 3D diffusion model: \( H_t^i = R(F_0', P_i),\ i = 1, \ldots, M \). These images are then used as guidance for the 2D multi-view denoising process \( D_{2d} \), realizing 3D-to-2D guidance: \( \epsilon_{2d}' = D_{2d}(V_t, \{ H_t^i \}_{i=1}^M, t) \).
In this manner, we can seamlessly integrate and synchronize both the 3D and 2D diffusion processes within a unified framework. In the following sections, we will delve into each component in detail.
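Putting the two guidance directions together, one denoising step can be sketched as below; `D3d`, `D2d`, `render`, `poses`, and `alpha` are hypothetical stand-ins for the two diffusion models, the volume renderer $R$, the camera poses $P_i$, and the noise schedule, and this is a simplification of the actual sampler.

```python
import torch

def bidiff_step(F_t, V_t, t, D3d, D2d, render, poses, alpha):
    """One bidirectional denoising step (Sec. 3.1), sketched.

    alpha: per-timestep noise-schedule tensor.
    """
    # 2D -> 3D guidance: condition the 3D model on current multi-view images
    eps3d = D3d(F_t, V_t, t)
    F0 = (F_t - torch.sqrt(1 - alpha[t]) * eps3d) / torch.sqrt(alpha[t])
    # 3D -> 2D guidance: render the denoised field and condition the 2D model
    H = [render(F0, P) for P in poses]
    eps2d = D2d(V_t, H, t)
    return eps3d, eps2d, F0
```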
### 3.2 3D DIFFUSION MODEL
Our 3D diffusion model aims to generate a neural surface field (SparseNeuS) (Long et al., 2022), guided by novel 2D-to-3D guidance derived from the denoised 2D multi-view images. To train our 3D diffusion model, at each training timestep \( t \) we add noise to a clean radiance field, yielding the noisy radiance field \( F_t \). This field, combined with the timestep embedding and the text embedding, is then passed through 3D sparse convolutions to generate a 3D feature volume \( M \): \( M = \text{Sparse3DConv}(F_t, t, \text{text}) \).
Simultaneously, using the denoised multi-view images \( V_{t+1}' \) from the previous step of the 2D diffusion model, we project the \( N \times N \times N \) grid points of \( M \) onto all \( M \) views. For each grid point \( p \), we aggregate the image features into 3D space by calculating the mean and variance of the \( M \) interpolated features (one per view), yielding the image-conditioned feature volume \( N \):
\[
N(p) = [\text{Mean}(V_{t+1}'(\pi(p))), \text{Var}(V_{t+1}'(\pi(p)))],
\]
where $\pi$ denotes the projection from 3D onto the 2D image plane. In the setting without 3D priors, we fuse these two feature volumes with further sparse convolutions and predict the clean $F_0$ from the fused features.
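A minimal sketch of this mean/variance aggregation, where `project` is a hypothetical helper that projects grid points into one view and bilinearly samples its feature map:

```python
import torch

def image_conditioned_volume(points, feats, project):
    """Aggregate per-view image features at each 3D grid point.

    points: (P, 3) grid points; feats: list of M per-view feature maps;
    project: hypothetical projector returning (P, C) sampled features per view.
    """
    per_view = torch.stack([project(points, f) for f in feats])  # (M, P, C)
    mean = per_view.mean(dim=0)
    var = per_view.var(dim=0, unbiased=False)
    return torch.cat([mean, var], dim=-1)  # (P, 2C): feature volume N
```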
One important design choice of our 3D diffusion model is that it incorporates geometry priors derived from the 3D foundation model Shap-E (Jun & Nichol, 2023). Shap-E is a latent diffusion model (Rombach et al., 2022) trained on several million 3D objects, and it thus ensures the genuineness of generated 3D objects. Still, we do not want Shap-E to limit the creativity of our 3D generative model, and we maintain the capability of generating novel objects that Shap-E cannot.
To achieve this, we design a feature volume $G$ to represent a radiance field decoded from the latent code $C$. It is implemented using NeRF MLPs whose parameters are set to the latent code $C$: $G(p) = \text{MLP}(\lambda(p); \theta = C)$, where $\lambda$ denotes the positional encoding operation.
Still, one limitation of using the Shap-E latent code is that the network is inclined to shortcut the training process, effectively memorizing the radiance field derived from Shap-E. To generate 3D objects beyond the Shap-E model, we add Gaussian noise at level $t_0$ to the clean latent code, resulting in the noisy latent representation $C_{t_0}$, where $t_0$ is a predefined constant timestep. Subsequently, the noisy radiance field $G_{t_0}$ is decoded by substituting $C$ with $C_{t_0}$. This design establishes a coarse-to-fine relationship between the 3D prior and the ground truth, prompting the 3D diffusion process to leverage the 3D prior without becoming excessively dependent on it.
In this way, we can finally get the fused feature volumes by:
$$S = U([M, \text{Sparse3DConv}(N), \text{Sparse3DConv}(G_{t_0})]),$$
(5)
where $U$ denotes a 3D sparse U-Net. We can then query features from $S$ for each grid point $p$ and decode them to SDF values through several MLPs: $F_0(p) = \text{MLP}(S(p), \lambda(p))$, where $S(p)$ represents the interpolated features from $S$ at position $p$. Our experiments also demonstrate that our model can generate 3D objects beyond the Shap-E model.
### 3.3 2D Diffusion Model
Our 2D diffusion model simultaneously generates multi-view images by jointly denoising the multi-view noisy images \( V_t = \{I_t^i\}_{i=1}^M \). To encourage 2D-3D consistency, the 2D diffusion model is also guided by the 3D radiance field output of the 3D diffusion process described above. Specifically, for better image quality, we build our 2D multi-view diffusion model on top of several independently frozen foundation models (e.g., DeepFloyd) to harness their potent 2D priors. Each frozen 2D foundation model is modulated by view-specific, 3D-consistent residual features and is responsible for denoising a specific view, as described below.
First, to achieve 3D-to-2D guidance, we render multi-view images from the 3D denoised radiance field $F_0'$. Note that the radiance field consists of a density field and a color field. The density field is constructed from the signed distance field (SDF) generated by our 3D diffusion model using S-density introduced in Neus (Wang et al., 2021). To obtain the color field, we apply another color MLP to the feature volume in the 3D diffusion process.
Upon obtaining the color field $c$ and density field $\sigma$, we conduct volumetric rendering on each ray $r(m) = o + md$ which extends from the camera origin $o$ along a direction $d$ to produce multi-view consistent images $\{H_i\}_{i=1}^M$:
$$\hat{C}(r) = \int_0^\infty T(m)\,\sigma(r(m))\,c(r(m), d)\,dm,$$
(6)
where $T(m) = \exp(-\int_0^m \sigma(r(s))ds)$ handles occlusion.
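Numerically, Eq. (6) is evaluated with the standard quadrature used in NeRF-style renderers; a minimal single-ray sketch, with hypothetical per-sample inputs:

```python
import torch

def volume_render(sigma, color, deltas):
    """Discretized volume rendering of Eq. (6) along one ray.

    sigma, deltas: (S,) per-sample density and segment lengths;
    color: (S, 3) per-sample RGB. Returns the composited (3,) color.
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)                       # opacity
    T = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    return (T * alpha).unsqueeze(-1).mul(color).sum(dim=0)         # occlusion-aware sum
```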
Secondly, we use these rendered multi-view images as guidance for the 2D foundation model. We first use a shared feature extractor \( E \) to extract hierarchical multi-view consistent features from these images. Each extracted feature is then added as a residual to the decoder of its corresponding frozen 2D foundation denoising U-Net, achieving multi-view modulation and joint denoising following ControlNet (Zhang & Agrawala, 2023): \( f_k^i \leftarrow f_k^i + \text{ZeroConv}(E(H_t^i)[k]) \), where \( f_k^i \) denotes the original feature maps of the \( k \)-th decoder layer for the \( i \)-th view, \( E(H_t^i)[k] \) denotes the \( k \)-th residual features of the \( i \)-th view, and ZeroConv (Zhang & Agrawala, 2023) is a \( 1 \times 1 \) convolution initialized to zero and gradually updated during training. Experimental results show that this 3D-to-2D guidance helps ensure multi-view consistency and facilitates geometry understanding.
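The zero-initialized $1 \times 1$ convolution can be sketched as follows; this mirrors the ControlNet recipe the text references, with illustrative names:

```python
import torch.nn as nn

class ZeroConv(nn.Module):
    """1x1 conv initialized to zero (ControlNet-style), so the frozen 2D
    U-Net is unchanged at the start of training."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, residual):
        return self.conv(residual)

# Inside the k-th decoder layer of view i (sketch):
#   f_k_i = f_k_i + zero_conv_k(E(H_i)[k])
```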
### 3.4 Prior Enhancement Strategy
In addition to bidirectional guidance, we propose a prior enhancement strategy to enable manual, independent control of the strengths of the 3D and 2D priors, which offers a natural mechanism for decoupled texture and geometry control. Inspired by classifier-free guidance (Ho & Salimans, 2022), during training we randomly drop the information from the 3D prior by setting the conditional feature volume $G$ to zero, and weaken the 2D prior by using empty text prompts. Consequently, upon completing the training, we can employ two guidance scales, $\gamma_{3d}$ and $\gamma_{2d}$, to independently modulate the influence of these two priors.
Specifically, to adjust the strength of 3D prior, we calculate the difference between 3D diffusion outputs with and without conditional 3D feature volumes, and add them back to 3D diffusion output:
$$\hat{\epsilon}_{3d} = D_{3d}(F_t, V'_{t+1}, t) + \gamma_{3d} \cdot \left( D_{3d}(F_t, V'_{t+1}, t \mid G) - D_{3d}(F_t, V'_{t+1}, t) \right).$$
(7)
We can then control the strength of the 3D prior by adjusting the weight $\gamma_{3d}$ of this difference term. When $\gamma_{3d} = 0$, the model completely ignores the 3D prior; when $\gamma_{3d} = 1$, we recover the base model that uses both the 3D and 2D priors; when $\gamma_{3d} > 1$, the model produces geometries close to the conditional radiance field but with less diversity.
Also, we can similarly adjust the strength of 2D priors by adding differences between 2D diffusion outputs with and without conditional 2D text input:
$$\hat{\epsilon}_{2d} = D_{2d}(V_t, \{H^i_t\}_{i=1}^{M}, t) + \gamma_{2d} \cdot \left( D_{2d}(V_t, \{H^i_t\}_{i=1}^{M}, t \mid \text{text}) - D_{2d}(V_t, \{H^i_t\}_{i=1}^{M}, t) \right).$$
(8)
Increasing $\gamma_{2d}$ results in textures more coherent with the text, albeit at the expense of diversity. It is worth noting that while we adjust the 3D and 2D priors independently via Eq. (7) and Eq. (8), the influence inherently propagates to the other domain due to the intertwined nature of our bidirectional diffusion process.
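Both Eq. (7) and Eq. (8) share the classifier-free-guidance form; a small sketch with hypothetical callables:

```python
def guided_eps(eps_uncond, eps_cond, gamma):
    """Prior enhancement in CFG form (Eqs. 7 and 8): gamma = 0 ignores the
    prior, gamma = 1 recovers the base model, gamma > 1 strengthens the
    prior at the cost of diversity."""
    return eps_uncond + gamma * (eps_cond - eps_uncond)

# Illustrative usage (D3d, D2d, and their arguments are stand-ins):
#   eps3d_hat = guided_eps(D3d(F_t, V, t), D3d(F_t, V, t, cond=G), gamma_3d)
#   eps2d_hat = guided_eps(D2d(V_t, H, t), D2d(V_t, H, t, text=prompt), gamma_2d)
```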
To achieve separate texture and geometry control, on one hand we fix the initial 3D noisy SDF grid and the conditional radiance field $C_{t_0}$ while enlarging its influence via Eq. (7); we can then modify the 2D diffusion process by adjusting the text prompts to change the texture while maintaining the overall shape. On the other hand, we can keep the texture style by maintaining keywords in the text prompt and enlarging its influence via Eq. (8); the shape can then be adjusted by modifying the 3D diffusion process, e.g., by varying the conditional radiance field.
### 3.5 Optimization with BiDiff Initialization
Once we obtain the denoised radiance field $F_0$, we can use it as a strong initialization for optimization-based methods for further refinement. Importantly, since our generated $F_0$ has text-aligned textures derived from the powerful 2D prior as well as accurate geometry guided by the 3D prior, optimization started from this strong initialization is rather efficient ($\approx 20$ min) and avoids incorrect geometries such as multi-face artifacts and floaters.
Specifically, we first convert our radiance field $F_0$ into a higher-resolution radiance field $\overline{F}_0$ that supports $512 \times 512$ image rendering. This is achieved by a fast NeRF distillation operation ($\approx 2$ min), which first bounds the occupancy grids of $\overline{F}_0$ with the binary grids estimated from $F_0$ (transmittance > 0.01), and then overfits $\overline{F}_0$ to $F_0$ by simultaneously optimizing its downsampled density field and interpolated random-view renderings with an $L_1$ loss against the corresponding results from $F_0$. Thanks to this flexible and fast distillation operation, we can efficiently plug our generated radiance field into any optimization-based method without the need to match its 3D representation. In our experiments, we use InstantNGP (Müller et al., 2022) as the high-resolution radiance field.
After initialization, we optimize $\overline{F}_0$ with the SDS loss following previous methods (Poole et al., 2022; Wang et al., 2023). Notably, since we already have a good initial radiance field, we do not need to apply the large-noise-level SDS loss used in previous methods. Instead, we restrict the denoising timestep ratio to the range $[0.02, 0.5]$ throughout the optimization.
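A hedged sketch of one SDS refinement step with this restricted timestep range is given below; `unet`, `alphas_cumprod`, and the call signatures are assumed interfaces, not the paper's implementation.

```python
import torch

def sds_gradient(unet, alphas_cumprod, rendered, text_emb, t_min=0.02, t_max=0.5):
    """One SDS step, sampling the timestep only from the small-noise regime."""
    T = len(alphas_cumprod)
    t = torch.randint(int(t_min * T), int(t_max * T), (1,))
    alpha_bar = alphas_cumprod[t]
    noise = torch.randn_like(rendered)
    noisy = alpha_bar.sqrt() * rendered + (1 - alpha_bar).sqrt() * noise
    eps_pred = unet(noisy, t, text_emb)
    # SDS treats (eps_pred - noise) as the gradient w.r.t. the rendered image;
    # it is detached so no gradient flows through the diffusion model itself.
    return (eps_pred - noise).detach()
```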
4 Experiment
In this section, we describe our experimental results. We train our framework on the ShapeNet-Chair (Chang et al., 2015) and Objaverse LVIS 40k (Deitke et al., 2022) datasets. We use the pre-trained DeepFloyd-IF-XL as our 2D foundation model and Shap-E (Jun & Nichol, 2023) as our 3D prior. We adopt SparseNeuS (Long et al., 2022) as the neural surface field with $N = 128$. We follow ControlNet (Zhang & Agrawala, 2023) and render $M = 8$ multi-view images at $64 \times 64$ resolution from SparseNeuS to implement the 3D-to-2D guidance. We train our framework on 4 NVIDIA A100 GPUs for both the ShapeNet and Objaverse 40k experiments with a batch size of 4. During sampling, we set the 3D and 2D prior guidance scales to 3.0 and 7.5, respectively. More details, including the data processing and model architecture, can be found in the appendix. We discuss the evaluation and ablation study results below.

**Figure 4:** Qualitative sampling results of the Bidirectional Diffusion model, including multi-view images and 3D meshes from diffusion sampling. The top two rows are results on ShapeNet-Chair, and the bottom three rows are results on Objaverse. The last column shows Shap-E results for comparison.
### 4.1 Text-to-3D Results
**ShapeNet-Chair and Objaverse results.** The first and second rows of Fig. 4 present our results trained on the ShapeNet-Chair dataset. Even though the chair category frequently exhibits intricate geometric details, our framework captures these fine geometries. The bottom three rows of Fig. 4 show the capacity to craft a plethora of 3D objects that closely adhere to the given textual prompts. This reaffirms the hypothesis that our method learns a generalizable comprehension of both texture and geometry.
**Decouple geometry and texture control.** Finally, we illustrate that our Bidirectional Diffusion separately controls geometry generation through the 3D diffusion model and texture generation through the 2D diffusion model. To our knowledge, this is the first work that enables such separate control in the diffusion process. First, as illustrated in Fig. 2(a), when the 3D prior is fixed, we have the flexibility to manipulate the 2D diffusion model using varying textual prompts to guide texture generation. This enables the generation of a diverse range of textured objects while maintaining a consistent overall shape. Second, when we fix the textual prompt for the 2D prior (e.g., “a xxx with Van Gogh starry sky style”), the 3D diffusion model can be adjusted by varying the conditional radiance field derived from the 3D prior. This procedure generates a variety of shapes while maintaining a similar texture, as shown in Fig. 2(b).
### 4.2 Compared with Other Methods
**Compared with DreamFusion Series.** Our framework is capable of simultaneously generating multi-view consistent images alongside a 3D mesh in a scalable manner, in contrast with the DreamFusion (Poole et al., 2022) series, which relies on a one-by-one optimization approach. Table 1 reports the CLIP R-Precision (Jun & Nichol, 2023) and inference time on 50 test prompts (manually derived from the captioned Objaverse test set) to quantitatively evaluate these methods. DreamFusion requires about 1 hour to generate a single object. ProlificDreamer (Wang et al., 2023) improves the texture quality, but at the expense of extended optimization time, taking approximately 3.4 hours and suffering from more severe multi-face problems. In contrast, our method produces realistic textured objects with reasonable geometry in 40 seconds. Furthermore, BiDiff can serve as a strong prior for optimization-based methods and significantly boost their performance. Initializing the radiance field in ProlificDreamer
| Method | R-P | Time |
|-----------------|-----|------|
| DreamFusion | 0.67| 1.1h |
| ProlificDreamer | 0.83| 3.4h |
| Ours-sampling | 0.79| 40s |
| Ours-post | 0.85| 20min|
**Table 1:** CLIP R-Precision and inference time on the 50 test prompts.
Figure 5: Comparison with other optimization-based or multi-view diffusion-based works. The text prompts are “a green dragon head” and “a cute lizard”. We show both multi-view images (right of (d)) and refined results (left of (d)).
with our outputs yields remarkable improvements in both quality and computational efficiency, as shown in Fig. 5.
**Compared with Zero-123 Series.**
Given one reference image, Zero-123 (Liu et al., 2023a) produces images from novel viewpoints by fine-tuning a pre-trained 2D diffusion model on multi-view datasets. However, this method employs cross-view attention to establish multi-view correspondence without an inherent understanding of 3D structures, inevitably leading to inconsistent multi-view images, as shown in Fig. 5. Moreover, the Zero-123 series cannot directly generate the 3D mesh and requires substantial post-processing (SDS loss) to acquire the geometry. In contrast, our framework incorporates 3D priors and achieves 3D geometry understanding within a 3D diffusion process. This design enables the simultaneous generation of multi-view consistent images and a 3D mesh, as illustrated in Fig. 2.
4.3 Ablation Studies
We perform comprehensive ablation studies to evaluate the importance of each component. More ablation results can be found in the appendix.
**3D priors.** To assess the impact of 3D priors, we eliminate the conditional radiance field from Shap-E and train the 3D geometry from scratch. The experimental results in Fig. 6(a) demonstrate that in the absence of the 3D prior, our framework can only generate objects common in the training set.
**2D priors.** To delve into the impact of 2D priors, we randomly initialize the parameters of the 2D diffusion model instead of fine-tuning a pretrained one. The results in Fig. 6(a) show that in the absence of 2D priors, the generated textures tend to fit the stylistic attributes of the synthetic training data. Conversely, with 2D priors, we produce more realistic textures.
**Prior enhancement strategy.** As discussed in Section 3.4, we can adjust the influence of both the 3D and 2D priors via the prior enhancement strategy. Fig. 6(b) shows results for different enhancement extents under different scale factors. The prior enhancement strategy plays a vital role in achieving decoupled texture and geometry control.
5 Conclusion
In this paper, we propose Bidirectional Diffusion, which incorporates both 3D and 2D diffusion processes into a unified framework. Furthermore, Bidirectional Diffusion leverages the robust priors from 3D and 2D foundation models, achieving generalizable geometry and texture understanding.
REFERENCES
Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. *arXiv preprint arXiv:1512.03012*, 2015.
Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019.
Yen-Chi Cheng, Hsin-Ying Lee, Sergey Tulyakov, Alex Schwing, and Liangyan Gui. SDFusion: Multimodal 3d shape completion, reconstruction, and generation. *arXiv*, 2022.
Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. *arXiv preprint arXiv:2212.08051*, 2022.
C. Deng, C. Jiang, C. R. Qi, X. Yan, Y. Zhou, L. Guibas, and D. Anguelov. Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors. In *2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 20637–20647, 2023.
Ziya Erkoç, Fangchang Ma, Qi Shan, Matthias Nießner, and Angela Dai. Hyperdiffusion: Generating implicit neural fields with weight-space diffusion, 2023.
Philipp Henzler, Niloy J. Mitra, and Tobias Ritschel. Escaping plato’s cave: 3d shape from adversarial rendering. In *The IEEE International Conference on Computer Vision (ICCV)*, 2019.
Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. *arXiv preprint arXiv:2207.12598*, 2022.
Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 867–876, 2022.
Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 165–174, 2019.
Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. *arXiv preprint arXiv:2305.02463*, 2023.
Nasir Mohammad Khalid, Tianhao Xie, Eugene Belilovsky, and Popa Tiberiu. Clip-mesh: Generating textured meshes from text using pretrained image-text models. *SIGGRAPH Asia 2022 Conference Papers*, 2022.
Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. *arXiv preprint arXiv:2211.10440*, 2022.
Lin Gao, Jie Yang, Tong Wu, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai, and Hao Zhang. SDM-NET: Deep generative network for structured deformable mesh. *ACM Transactions on Graphics (TOG)*, 38:1–15, 2019.
Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. *arXiv preprint arXiv:2303.11328*, 2023a.
Zhen Liu, Yao Feng, Michael J. Black, Derek Nowrouzezahrai, Liam Paull, and Weiyang Liu. Meshdiffusion: Score-based generative 3d mesh modeling. In *International Conference on Learning Representations*, 2023b.
Xiaoxiao Long, Cheng Lin, Peng Wang, Taku Komura, and Wenping Wang. Sparseneus: Fast generalizable neural surface reconstruction from sparse views. In *European Conference on Computer Vision*, pp. 210–227. Springer, 2022.
Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, and Andrea Vedaldi. Realfusion: 360 reconstruction of any object from a single image. In *CVPR*, 2023.
|
lCLdLlXAvt
|
- The average sensitivity seems to scale with the maximum allowable depth $D$. Why not just set $D = 1$, or some other small value? It feels like there should be a tension with some notion of clustering quality that benefits from using a tree with more depth.
|
Average Sensitivity of Hierarchical Clustering
Anonymous authors
Paper under double-blind review
Abstract
Hierarchical clustering is one of the most popular methods used to extract cluster structures in a dataset. However, if the hierarchical clustering algorithm is sensitive to a small perturbation to the dataset, then the credibility and replicability of the output hierarchical clustering are compromised. To address this issue, we consider the average sensitivity of hierarchical clustering algorithms, which measures the change in the output hierarchical clustering upon deletion of a random data point from the dataset. Then, we propose a divisive hierarchical clustering algorithm with which we can tune the average sensitivity. Experimental results on benchmark and real-world datasets confirm that the proposed method is stable against the deletion of a few data points, while existing algorithms are not.
1 Introduction
Hierarchical clustering is one of the most popular methods used to extract cluster structures in a dataset consisting of data points (Murtagh and Contreras [2012b]). This method partitions the data points into clusters by constructing a rooted tree whose leaves correspond to data points and internal nodes represent clusters. By tracing the hierarchy from the root to leaves, we can extract interpretable knowledge from the dataset. For example, suppose that we have genomic data of single cells in a tissue. Then, the hierarchy can be used to figure out complex cellular states and tissue compositions (Zurauskienė and Yau [2016]). Hierarchical clusterings are also used in several applications such as phylogenetics (Eisen et al. [1998]), geophysics (Takahashi et al. [2019]), and social network analysis (Gilbert et al. [2011]).
Because of the importance of hierarchical clustering, a plethora of hierarchical clustering algorithms have been proposed (Heller and Ghahramani [2005], Jain [2010], Hastie et al. [2009], Murtagh and Contreras [2012b]). These algorithms are mainly concerned with the quality of the output hierarchical clustering. However, there is another essential aspect that must not be overlooked: stability of the output hierarchical clustering. Since the output is often used to understand the data structure, an algorithm needs to be stable to data perturbations as long as the data distribution remains intact. This requirement can be naturally formalized as a question using the notion of average sensitivity (Varma and Yoshida [2021]): given a random deletion of data points from the original dataset, how stable is the output hierarchical clustering? In the example of genomic data, a stable and reliable algorithm is expected to retain most of the tissue compositions found in the original, even if a few cells are missing. However, in the example in Figure 1 and in the application to geophysics (Figure 3 in Section 8), we show that the existing algorithms are unstable for data point removals.
In this work, we propose a novel algorithm for hierarchical clustering that is stable against deletions of data points. We measure the stability of an algorithm using average sensitivity (Murai and Yoshida [2019], Varma and Yoshida [2021]). Because the average sensitivity was originally defined for algorithms that output vectors or sets, we first formally define the average sensitivity of hierarchical clustering algorithms. Then, we propose a (randomized) algorithm that partitions the dataset in a top-down manner. The proposed algorithm applies a randomized process called the exponential mechanism (McSherry and Talwar [2007]) when partitioning the dataset, and we theoretically prove that it has a small average sensitivity.
Figure 1 shows an illustrative example of sensitive/stable hierarchical clustering algorithms. In this example, the standard agglomerative method induces different hierarchies before and after one data
Figure 1: Examples of a dataset (top left) and its hierarchical clusterings output by an existing agglomerative algorithm using complete linkage (top middle) and the proposed one (top right), and a dataset obtained by removing data point 4 (bottom left) and its hierarchical clusterings output by the existing agglomerative algorithm (bottom middle) and the proposed one (bottom right). The existing agglomerative clustering algorithm is sensitive to the removal of even a single data point, while the proposed algorithm produces a more stable clustering. The red nodes in the trees on the right denote the changes from the trees on the left before the data removal.
point (the data point 4) is removed, as shown in the middle of the figure. This result indicates that the widely used agglomerative method is sensitive to the removal of data points. The objective of this study is to design a hierarchical clustering algorithm that is stable against the removal of a few data points, as shown in the bottom of the figure.
Randomized algorithms may output completely different hierarchical clusterings on the original dataset and on that obtained by deleting a random data point even if the output distributions are close. To alleviate this issue, we design a (randomized) hierarchical clustering algorithm with low average sensitivity under shared randomness, which outputs similar hierarchical clusterings both on the original dataset and on the dataset obtained by deleting a random data point with a high probability over the choice of the random bits used.
We conduct comparisons between our proposed algorithm and existing algorithms on three benchmark datasets. In the experiments, we evaluated the trade-offs between the average sensitivity of the clustering algorithms and their clustering qualities. We observed that most of the existing algorithms exhibit high average sensitivity, indicating that their output can change drastically even upon the removal of a single data point. By contrast, the proposed algorithm produces stable clustering results while maintaining the quality of clustering. We also applied the clustering algorithms to a real-world GPS dataset (Takahashi et al., 2019). The results on this dataset also confirm that the existing algorithms are sensitive to data deletion, while the proposed algorithm is not.
2 RELATED WORK
Hierarchical Clustering Algorithms for hierarchical clustering can be classified into agglomerative and divisive methods (Hastie et al., 2009). Given a dataset, an agglomerative method iteratively finds a pair of data points or clusters using a certain linkage criterion and merges them into a new cluster until all the data points are merged into a single cluster. As the linkage criterion, the single-linkage, average-linkage, and complete-linkage rules are frequently used (Hastie et al., 2009; Murtagh and Contreras, 2012a). A divisive method constructs a hierarchy in a top-down manner. It recursively partitions a dataset into two sub-clusters until all the data points are partitioned or it reaches a prescribed tree depth (Jain, 2010).
Several extensions of these clustering algorithms have been considered; Abboud et al. (2019) and Moseley et al. (2021) considered improving the computational scalability; Ackerman et al. (2012) introduced a
weighted version of the agglomerative methods; and Kimes et al. (2017) and Gao et al. (2022) introduced statistical tests for clustering. Theoretical aspects of hierarchical clustering are also investigated; Dasgupta (2016) introduced a cost function for hierarchical clustering; Ackerman and Ben-David (2016) showed that the agglomerative methods have some desirable properties; and Roy and Pokutta (2016); Charikar and Chatziafratis (2017); Moseley and Wang (2017); Dhulipala et al. (2022) proposed methods with better approximation guarantees.
We note that the focus of the studies above is on constructing hierarchies with better quality or more efficiency. The current study is orthogonal to them; our focus is on developing a hierarchical clustering algorithm that is stable against the deletion of a data point.
Robust Hierarchical Clustering There have been a few studies on hierarchical clustering algorithms that exhibit robustness against outlier injections (Eriksson et al., 2011; Balcan et al., 2014; Cheng et al., 2019), which is a distinct form of data perturbation compared to the current study. These studies aim to achieve consistent clustering results regardless of the presence of outliers by identifying the injected outliers. It is important to note that hierarchical clustering algorithms can be unstable even in the absence of outliers. As demonstrated in Figure 1, although the underlying data distribution does not change after deleting a data point, the clustering results can differ significantly. For reliable knowledge discovery, it is imperative that the algorithm remain stable under such natural perturbations of the data. However, this specific type of robustness has not yet been thoroughly explored, making our study the first to venture in this direction.
Average Sensitivity The notion of average sensitivity was originally introduced in Murai and Yoshida (2019) to compare network centralities in terms of their stability against graph perturbations. The notion was then extended to handle graph algorithms in Varma and Yoshida (2021). Since then, the average sensitivity of algorithms for various problems has been studied, including the maximum matching problem (Yoshida and Zhou, 2021), spectral clustering (Peng and Yoshida, 2020), Euclidean k-clustering (Yoshida and Ito, 2022), dynamic programming problems (Kumabe and Yoshida, 2022a,b), and decision tree learning (Hara and Yoshida, 2023).
3 PRELIMINARIES
We use bold symbols to denote random variables. For two random variables \( X \) and \( Y \) on a finite set \( E \), let \( d_{TV}(X, Y) := \sum_{e \in E} |\Pr[X = e] - \Pr[Y = e]|/2 \) denote the total variation distance between their distributions. For sets \( S \) and \( T \), let \( S \triangle T = (S \setminus T) \cup (T \setminus S) \) denote their symmetric difference.
3.1 Hierarchical Clustering
Let \( X = \{x_1, \ldots, x_n\} \) be a dataset. We always assume that the data points \( x_1, \ldots, x_n \) are distinct (otherwise we assign them unique IDs so that they are distinct). A hierarchical clustering over \( X \) is a rooted tree \( T \) such that each leaf corresponds to a subset of \( X \) and the subsets corresponding to leaves form a partition of \( X \). Note that the hierarchical clusterings considered in this work do not always decompose \( X \) into individual data points. Let \( \text{root}(T) \) denote the root node of \( T \). In this work, we mostly consider binary trees and let \( \text{left}(T) \) and \( \text{right}(T) \) denote the left and right subtrees of \( \text{root}(T) \), respectively. If \( \text{root}(T) \) is the only node in \( T \), then we call \( T \) a singleton, and we define \( \text{left}(T) = \text{right}(T) = \emptyset \). Also, we set \( \text{left}(T) = \text{right}(T) = \emptyset \) when \( T \) is an empty tree. Let \( \text{leaves}(T) \subseteq 2^X \) denote the leaves of \( T \).
3.2 Graph-Theoretic Notions
For a finite set \( V \), we denote by \( \binom{V}{2} \) the set of pairs of elements in \( V \). For a set \( V \) and \( i \in V \), we sometimes write \( V - i \) to denote \( V \setminus \{i\} \). Let \( G = (V, E) \) be a graph. For a vertex \( i \in V \), let \( G - i \) denote the graph obtained from \( G \) by deleting \( i \) and the edges incident to \( i \). For a vertex set \( S \subseteq V \), let \( G[S] \) denote the subgraph of \( G \) induced by \( S \).
Let \( G = (V, E, w) \) be a weighted graph, where \( w : E \rightarrow \mathbb{R}_+ \) is a weight function over edges. For disjoint sets of vertices \( S, T \subseteq V \), let \( c_G(S, T) \) denote the total weight of edges between \( S \) and \( T \), that is, \( \sum_{i \in S, j \in T} w(i, j) \). We denote by \( \phi_G(S) \) the sparsity of \( S \), that is, \( c_G(S, V \setminus S)/(|S| \cdot |V \setminus S|) \).
3.3 Exponential Mechanism
The exponential mechanism (McSherry and Talwar, 2007) is an algorithm that, given a vector \( x \in \mathbb{R}^n \) and a real number \( \lambda > 0 \), returns an index \( i \in [n] \) with probability proportional to \( e^{-\lambda x_i} \). The following fact is useful to design algorithms with low average sensitivity.
**Lemma 3.1** (McSherry and Talwar (2007)). Let \( \lambda > 0 \) and let \( A \) be the algorithm that, given a vector \( x \in \mathbb{R}^n \), applies the exponential mechanism to \( x \) and \( \lambda \). Then for any \( t > 0 \), we have
\[
\Pr_{i \sim A(x)} \left[ x_i \geq \text{OPT} + \frac{\log n}{\lambda} + \frac{t}{\lambda} \right] \leq e^{-t},
\]
where \( \text{OPT} = \min_{i \in [n]} x_i \). Moreover, for \( x' \in \mathbb{R}^n \), we have
\[
d_{TV}(A(x), A(x')) = O(\lambda \cdot \|x - x'\|_1).
\]
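A direct implementation of this mechanism is straightforward; the following sketch uses a log-sum-exp shift for numerical stability (the function name and interface are our own, not from the paper).

```python
import numpy as np

def exponential_mechanism(x: np.ndarray, lam: float, rng=None) -> int:
    """Sample an index i with probability proportional to exp(-lam * x[i])."""
    rng = np.random.default_rng() if rng is None else rng
    logits = -lam * x
    logits -= logits.max()          # shift so the largest logit is 0
    p = np.exp(logits)
    p /= p.sum()
    return int(rng.choice(len(x), p=p))
```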
4 Average Sensitivity of Hierarchical Clustering
In this section, we formally define the average sensitivity of a hierarchical clustering algorithm.
4.1 Distance between Hierarchical Clusterings
First, we define distance between hierarchical clusterings. Let \( X = \{x_1, \ldots, x_n\} \) be a dataset, \( x \in X \) be a data point, and \( T \) and \( T' \) be hierarchical clusterings over \( X \) and \( X \setminus \{x\} \), respectively. Then, the distance \( d_x(T, T') \) between \( T \) and \( T' \) is defined recursively as follows. If both \( T \) and \( T' \) are empty trees, then \( d_x(T, T') \) is defined to be zero. Otherwise, we incur the cost of one if
\[
\text{leaves(left}(T)) \triangle \text{leaves(left}(T')) \not\subseteq \{x\}, \quad \text{or} \quad \text{leaves(right}(T)) \triangle \text{leaves(right}(T')) \not\subseteq \{x\}.
\]
In words, we incur a cost of one if the left subtrees or the right subtrees differ beyond the ignored element \( x \in X \). Then, we recursively compute the costs \( d_x(\text{left}(T), \text{left}(T')) \) and \( d_x(\text{right}(T), \text{right}(T')) \) and add them up. The details are given in Algorithm 1. It is easy to verify that \( d_x \) satisfies the triangle inequality. Also, note that \( d_x(T, T') \leq |T| + |T'| \), where \( |T| \) is the number of nodes in \( T \) (including the leaves).
**Algorithm 1:** Distance between trees
```
Procedure \( d_x(T, T') \)
if \( T = T' = \emptyset \) then return 0;
if \( \text{leaves(left}(T)) \triangle \text{leaves(left}(T')) \not\subseteq \{x\} \) or \( \text{leaves(right}(T)) \triangle \text{leaves(right}(T')) \not\subseteq \{x\} \) then
\( c \leftarrow 1 \).
else
\( c \leftarrow 0 \).
return \( c + d_x(\text{left}(T), \text{left}(T')) + d_x(\text{right}(T), \text{right}(T')) \).
```
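For concreteness, the following is a runnable Python transcription of Algorithm 1. The tuple encoding of trees is our own choice for illustration: a tree is `None` (empty) or `(points, left, right)`, where `points` is the set of data points under the node and `left`/`right` are subtrees (both `None` for a leaf).

```python
def points(node):
    return set() if node is None else node[0]

def child(node, k):
    return None if node is None else node[k]

def tree_distance(T, Tp, x):
    """d_x(T, T'): cost 1 whenever the left or right subtrees differ beyond x."""
    if T is None and Tp is None:
        return 0
    lT, rT = child(T, 1), child(T, 2)
    lTp, rTp = child(Tp, 1), child(Tp, 2)
    same = (points(lT) ^ points(lTp)) <= {x} and (points(rT) ^ points(rTp)) <= {x}
    cost = 0 if same else 1
    return cost + tree_distance(lT, lTp, x) + tree_distance(rT, rTp, x)
```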
4.2 Average Sensitivity
Now we define the average sensitivity of a deterministic algorithm as follows:
**Definition 4.1** (Varma and Yoshida (2021)). Let \( A \) be a deterministic algorithm that, given a dataset \( X = \{x_1, \ldots, x_n\} \), outputs a hierarchical clustering. Then, the average sensitivity of \( A \) on a dataset \( X = \{x_1, \ldots, x_n\} \) is
\[
\frac{1}{n} \sum_{x \in X} d_x(A(X), A(X \setminus \{x\})).
\]
(1)
To extend the definition to randomized algorithms, we define \( \text{EM}_x \) as the earth mover’s distance between two distributions with the underlying distance \( d_x \). Specifically, for distributions over hierarchical clusterings \( T \) and \( T' \), we define \( \text{EM}_x(T, T') = \min_D E_{(T,T') \sim D} d_x(T, T') \), where \( D \) runs over distributions over pairs of hierarchical clusterings such that the marginal distributions on the first and second coordinates are equal to \( T \) and \( T' \), respectively (sometimes called a coupling between \( T \) and \( T' \) in the literature). Then, we define the average sensitivity of a randomized algorithm as follows:
Definition 4.2 (Varma and Yoshida (2021)). Let $A$ be a randomized algorithm that, given a dataset $X = \{x_1, \ldots, x_n\}$, outputs a hierarchical clustering. Then, the average sensitivity of $A$ on a dataset $X = \{x_1, \ldots, x_n\}$ is
$$\frac{1}{n} \sum_{x \in X} \text{EM}_x(A(X), A(X \setminus \{x\})).$$
Note that this definition coincides with the one for deterministic algorithms when the algorithm is deterministic.
Sometimes we want to guarantee that a randomized algorithm $A$ outputs similar hierarchical clusterings on $X$ and $X \setminus \{x\}$ when we use the same random coins. For a bit string $\pi$, let $A_\pi$ denote the deterministic algorithm obtained from $A$ by fixing the outcomes of its random coins to $\pi$. Then, we define the following variant of average sensitivity.
Definition 4.3. Let $A$ be a randomized algorithm that, given a dataset $X = \{x_1, \ldots, x_n\}$, outputs a hierarchical clustering. Then, the average sensitivity of $A$ under shared randomness on a dataset $X = \{x_1, \ldots, x_n\}$ is
$$E_\pi \left[ \frac{1}{n} \sum_{x \in X} d_x(A_\pi(X), A_\pi(X \setminus \{x\})) \right].$$
5 STABLE-ON-AVERAGE HIERARCHICAL CLUSTERING
5.1 ALGORITHM DESCRIPTION
In this section, we describe our algorithm for hierarchical clustering with low average sensitivity, and then derive some theoretical properties. In Section 6, we consider another algorithm with low average sensitivity under shared randomness.
Our algorithm, SHC (Stable Hierarchical Clustering), is given in Algorithm 2. Given a dataset $X = \{x_1, \ldots, x_n\}$ and a parameter $\alpha > 0$, we first transform $X$ into a weighted graph $G = (V, E, w)$, where $V = \{1, 2, \ldots, n\}$, $E = \binom{V}{2}$, and $w(i, j) = \exp(-\alpha \|x_i - x_j\|^2)$, and then pass $G$ to a subroutine REC, which constructs a hierarchical clustering using $G$. Note that closer data point pairs get higher weights in $w$. If $\alpha$ is small, then every data point pair gets almost identical weight, and if $\alpha$ is large, then distant data point pairs get negligible weights and will be ignored in hierarchical clustering.
The subroutine REC is recursive. Given a weighted graph $G = (V, E, w)$ and a depth limit $D \geq 0$, we split the vertex set into two components using a subroutine SSC (Stable Sparse Cut, Algorithm 3), and then recursively process them until the depth reaches $D$.
Now we explain the details of the subroutine SSC. Ideally, we want to solve the sparsest cut problem, for which the goal is to compute $S \subseteq V$ that minimizes $\phi_G(S)$. However, approximating $\phi_G(S)$ to within a constant factor is NP-hard (Chawla et al., 2006), and although polynomial-time approximation algorithms are known (Arora et al., 2009; Leighton and Rao, 1999), they are slow in practice because they internally solve LPs or SDPs, and it is not clear whether they are stable. Hence, we take a different approach.
Our idea is to select a pair of vertices, called centroids, and then assign every other vertex to the more similar centroid to form a partition into two components. To achieve a small average sensitivity, we select the pair of centroids as follows. For \( \{i, j\} \in \binom{V}{2} \) with \( i < j \), let \( S_{ij} = \{k \in V : w(i, k) > w(j, k)\} \) be the set of vertices that are more similar to \( i \) than to \( j \), and define \( \phi_G(i, j) = \phi_G(S_{ij}) \). Then, we sample a pair of centroids \( \{i, j\} \) using the exponential mechanism with the cost function \( \phi_G(\cdot, \cdot) \) and the given parameter \( \lambda \). When \( \lambda = 0 \), the exponential mechanism returns \( \{i, j\} \) sampled uniformly from \( \binom{V}{2} \), and when \( \lambda = \infty \), it returns the \( \{i, j\} \) that minimizes \( \phi_G(i, j) \).
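The following sketch puts these pieces together, reusing `exponential_mechanism` from the earlier snippet; the handling of the centroids themselves (forcing \( i \in S \) and \( j \notin S \)) is our own convention for illustration.

```python
import itertools
import numpy as np

def sparsity(W: np.ndarray, S: np.ndarray) -> float:
    """phi_G(S) = c_G(S, V \\ S) / (|S| * |V \\ S|) for a symmetric weight matrix W."""
    comp = ~S
    cut = W[np.ix_(np.where(S)[0], np.where(comp)[0])].sum()
    return cut / (S.sum() * comp.sum())

def stable_sparse_cut(W: np.ndarray, lam: float, rng=None) -> np.ndarray:
    """Pick centroids {i, j} via the exponential mechanism and return S_ij."""
    n = W.shape[0]
    pairs = list(itertools.combinations(range(n), 2))
    costs = np.empty(len(pairs))
    cuts = []
    for idx, (i, j) in enumerate(pairs):
        S = W[i] > W[j]           # vertices strictly more similar to i than to j
        S[i], S[j] = True, False  # assign the centroids themselves (our convention)
        cuts.append(S)
        costs[idx] = sparsity(W, S)
    chosen = exponential_mechanism(costs, lam, rng)  # snippet from Section 3.3
    return cuts[chosen]
```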
### 5.2 Theoretical Properties
The time complexity of SHC is easy to analyze:
**Theorem 5.1.** The time complexity of SHC is \( O(Dn^3) \).
Next, we discuss the approximation guarantee and (a variant of) the average sensitivity of SSC. For a weighted graph \( G = (V, E, w) \), we define
\[
\phi^*_G = \min_{\{i, j\} \in \binom{V}{2}} \phi_G(S_{ij}).
\]
Note that \( \phi^*_G \) is not the minimum sparsity of a set in \( G \), i.e., \( \min_{S \subseteq V} \phi_G(S) \). Let \( w_G \) denote the total edge weight, that is, \( \sum_{\{i, j\} \in \binom{V}{2}} w(i, j) \). The following holds:
**Theorem 5.2.** For a weighted graph \( G \) of \( n \) vertices and \( \lambda > 0 \), let \( S = \text{SSC}(G, \lambda) \). Then, we have
\[
E[\phi_G(S)] \leq \phi^*_G + O\left( \frac{\log(\lambda w_G)}{\lambda} \right).
\]
We also have
\[
\frac{1}{n} \sum_{k \in V} d_{TV}(\text{SSC}(G, \lambda), \text{SSC}(G - k, \lambda)) = O\left( \frac{1}{n} (\lambda \phi^*_G + \log(nw_G)) \right).
\]
Because the weight function \( w \) is \([0, 1]\)-valued, \( \phi^*_G = O(1) \) and \( w_G = O(n^2) \). Then for \( \epsilon > 0 \), we obtain \( E[\phi_G(S)] \leq (1 + \epsilon)\phi^*_G \) by setting \( \lambda = \Theta(\log n / (\epsilon \phi^*_G)) \). For this particular choice of \( \lambda \), the average total variation distance is \( O(\log n / (\epsilon n)) \), which is quite small.
Finally, we discuss the average sensitivity of SHC.
**Theorem 5.3.** The average sensitivity of \( \text{SHC}(X, \alpha, \lambda, D) \) is \( O(D(\lambda w_G/n + \log(nw_G))) \), where \( G \) is the graph constructed by using \( X \) and \( \alpha \) in SHC.
Recalling that \( w_G = O(n^2) \), the bound is roughly \( O(\lambda Dn) \), which can be made small by setting \( \lambda \ll 1 \).
### 6 Stable-on-Average Hierarchical Clustering under Shared Randomness
In this section, we propose an algorithm SHC-SR by modifying SHC (Algorithm 2) so that it has a small average sensitivity under shared randomness.
First, we design a randomized algorithm called SAMPLING that, given a vector \( p \in \mathbb{R}_+^n \) with \( \sum_{i=1}^n p_i = 1 \), and a random bit string \( \pi \), outputs \( i \in \{1, \ldots, n\} \) with probability \( p_i \) such that perturbing the vector \( p \) does not change the output with high probability over \( \pi \).
For a set \( S \), let \( U(S, \pi) \) denote a procedure that outputs an element \( i \in S \) such that \( U(S, \pi) \) for a random bit string \( \pi \) follows the uniform distribution over \( S \). Such a procedure can be easily implemented by taking the first few bits from \( \pi \) and then mapping them to an element in \( S \). Then in SAMPLING\( (p, \pi) \), we first compute a permutation \( \sigma \) so that \( p_{\sigma(1)} \leq \cdots \leq p_{\sigma(n)} \) and compute a carefully designed vector \( q \in [0, 1]^n \) using \( p \) and \( \sigma \). Then, we sample \( t \in [0, 1] \) uniformly at random; if \( q_i > t \) for a uniformly sampled \( i \), we return \( i \), and otherwise we repeat the process. The vector \( q \) is designed so that this process outputs \( i \) with probability \( p_i \). The details are given in Algorithm 4.
Because the only randomized process in SHC is the exponential mechanism used in SSC (Algorithm 3), by replacing it with SAMPLING that simulates the exponential mechanism, we obtain a hierarchical clustering algorithm SHC-SR with low average sensitivity under shared randomness:
**Theorem 6.1.** There exists an algorithm SHC-SR that, given a dataset \( X = (x_1, \ldots, x_n) \), \( \alpha \geq 0 \), \( \lambda \geq 0 \), an integer \( D \), and a bit string \( \pi \), outputs a hierarchical clustering over \( X \) such that
- the distribution of SHC-SR\((X, \alpha, \lambda, D, \pi)\) over random bits \( \pi \) is equal to that of SHC\((X, \alpha, \lambda, D)\).
- the average sensitivity of SHC-SR\((X, \alpha, \lambda, D, \pi)\) under shared randomness is \( O(D(\lambda w_G/n + \log(nw_G))) \), where \( G \) is the graph constructed by using \( X \) and \( \alpha \) as in SHC.
### 7 EXPERIMENTS
We demonstrate that the proposed SHC-SR (Section 6) outputs stable hierarchical clusterings on several benchmark datasets. For all the experiments, we used a workstation with 48 cores of AMD EPYC processors and 256GB of RAM.
#### 7.1 SETUPS
**Datasets** We took the three datasets shown in Table 1 from sklearn.datasets. For the experiments, we subsampled a fraction of the data points from each dataset so that we can assess the effect of the data size \( n \).
**Hierarchical Clustering Algorithms** In the experiment, we implemented SHC-SR given in Theorem 6.1. We constructed weighted graphs by setting \( w(i,j) = \exp(-\alpha \|x_i - x_j\|^2/m) \), with \( m \) being the median of all pairwise distances, and varied \( \alpha \) over several values. We also varied the parameter \( \lambda \) used in SSC. The case \( \lambda = \infty \) corresponds to a greedy algorithm that selects the pair \((i,j)\) with the smallest \( \phi_G(i,j) \) in SSC (Algorithm 3, with the exponential mechanism implemented via SAMPLING), and the case \( \lambda = 0 \) corresponds to an algorithm that selects the pair \((i,j)\) uniformly at random in SSC. We implemented SHC-SR in Python 3 using the JIT compiler of Numba.
We adopted several standard hierarchical clustering algorithms as baseline methods for comparison. As typical agglomerative clustering algorithms, we adopted the four algorithms implemented in AgglomerativeClustering in scikit-learn with four different linkage criteria: ward, average, complete, and single, with the other options set to default. We note that Balcan et al. (2014) reported that ward tends to be robust against outlier injections and noise contamination.
As representatives of divisive clustering, we adopted bisecting 2-means (Jain, 2010) and principal direction divisive partitioning (Boley, 1998). These two methods recursively split data points by using standard 2-means clustering and the sign of the first principal component, respectively. We implemented these methods, which we denote by 2-means and pcd, using KMeans in scikit-learn (with the number of clusters set to two and ten random initializations) and PCA (with the number of components set to one), respectively, with default parameters for the other options.
---
1We did not adopt the outlier-robust methods (Eriksson et al., 2011; Balcan et al., 2014; Cheng et al., 2019) because the core of these methods is on identifying outliers, which is irrelevant to the current problem.
---
**Table 1: Datasets**
| Dataset | Data Size | # of Features |
|---------------|-----------|---------------|
| breast cancer | 569 | 30 |
| diabetes | 442 | 10 |
| digits | 1797 | 64 |
**Algorithm 4:** Sampling with a low average sensitivity under shared randomness
```
Procedure SAMPLING(p, π)
Let σ be a permutation such that
\( p_{σ(1)} \leq p_{σ(2)} \leq \cdots \leq p_{σ(n)} \);
Let \( q \in \mathbb{R}_+^n \) so that \( q_{σ(i)} = q_{σ(i-1)} + (n-i+1)(p_{σ(i)} - p_{σ(i-1)}) \),
where \( p_{\sigma(0)} = q_{\sigma(0)} = 0 \);
\( t \leftarrow U([0, 1], π) \) and delete the used bits from \( π \);
while true do
\( i \leftarrow U(\{1, 2, \ldots, n\}, π) \) and delete the used bits from \( π \);
if \( q_i > t \) then
break.
return \( i \).
```
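Below is a runnable Python rendering of Algorithm 4; the shared bit string \( \pi \) is modeled by a seeded generator (a modeling choice of ours), so replaying the same seed on the original and the perturbed input reproduces the shared-randomness setting.

```python
import numpy as np

def sampling(p: np.ndarray, seed: int) -> int:
    """Return i with probability p[i]; a fixed seed plays the role of pi."""
    rng = np.random.default_rng(seed)
    n = len(p)
    order = np.argsort(p)          # sigma: p[order[0]] <= ... <= p[order[-1]]
    q = np.zeros(n)
    prev_p, prev_q = 0.0, 0.0
    for rank, i in enumerate(order):
        q[i] = prev_q + (n - rank) * (p[i] - prev_p)  # q_{sigma(k)} recursion
        prev_p, prev_q = p[i], q[i]
    t = rng.uniform()              # q[order[-1]] equals 1 up to rounding
    while True:
        i = int(rng.integers(n))
        if q[i] > t:
            return i
```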
**Evaluation criteria** We measure the average sensitivity of hierarchical clustering algorithms as well as their qualities. We evaluated the average sensitivity following Eq. (1). For SHC-SR, we treated SHC-SR(·, α, λ, D, π) with a fixed π as the deterministic algorithm A.
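A direct sketch of this estimation is a leave-one-out loop over the dataset, reusing `tree_distance` from the earlier snippet; `algo` stands for the de-randomized clustering algorithm and is an assumed interface.

```python
def average_sensitivity(algo, X):
    """Empirical Eq. (1): average tree distance under leave-one-out deletions."""
    T_full = algo(X)
    total = 0.0
    for x in X:
        X_minus = [y for y in X if y is not x]
        total += tree_distance(T_full, algo(X_minus), x)
    return total / len(X)
```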
As quality measures, we adopted three popular criteria: the Dasgupta score (Dasgupta, 2016), dendrogram purity (Heller and Ghahramani, 2005), and cophenetic correlation (Sokal and Rohlf, 1962).\(^2\) The Dasgupta score measures the quality of a hierarchical clustering T using costs of pairs of data points. More specifically, we define the Dasgupta score of a hierarchical clustering T by
\[ \text{score}(T) = \sum_{i,j=1; i \neq j} w(i, j)n(i, j), \]
where \( n(i, j) \) denotes the number of data points belonging to the subtree rooted at the lowest common ancestor of nodes that \( x_i \) and \( x_j \) belong to. The Dasgupta score is small when dissimilar points \( x_i \) and \( x_j \) (i.e., \( w(i, j) \) is small) are split into different clusters in a shallow part of the tree, and similar points (i.e., \( w(i, j) \) is large) are split in a deeper part. Thus, a clustering T with smaller \( \text{score}(T) \) is considered ideal.
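As a small worked example, the following recursion computes this score for trees in the tuple encoding used earlier, under the simplifying assumptions (ours) that the tree splits X down to singleton integer-indexed leaves and that each unordered pair is charged once, at the node where it is first split apart (its lowest common ancestor).

```python
def dasgupta_score(node, W):
    """Dasgupta score for a tree in the (points, left, right) encoding;
    W[i][j] is the pairwise similarity and leaves are assumed singletons."""
    if node is None or node[1] is None:  # empty tree or leaf: nothing is split
        return 0.0
    pts, left, right = node
    n_here = len(pts)                    # n(i, j) for pairs split at this node
    split = sum(W[i][j] for i in left[0] for j in right[0]) * n_here
    return split + dasgupta_score(left, W) + dasgupta_score(right, W)
```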
**Procedure** We generated 10 subsampled datasets of size \( n = 100, 300, \) and 500 from the original dataset.\(^3\) For each subsampled dataset, we constructed a hierarchical clustering using SHC-SR over different values of \( \lambda \) and using the baseline methods, obtaining 10 clusterings for each method. For each method, we then report the trade-off between the average (over the 10 clusterings) of the average sensitivity and the average of the clustering qualities.
#### 7.2 RESULTS
Figure 2 shows the results of the experiments with \( n = 100 \). Each panel shows the trade-off between the average sensitivity and the average Dasgupta score, with the depth of T limited to 10 and the similarity coefficient \( \alpha \) varied over 1, 3, 10, and 30. The results of the baselines and of SHC-SR for several different \( \lambda \) are shown as distinct symbols and red lines, respectively. We find that the red lines of SHC-SR tend to lie in the lower-left area of the figures. That is, SHC-SR with appropriately chosen \( \lambda \) attains a good trade-off, with small average sensitivity and better Dasgupta scores, as expected. By contrast, all the baselines except single tend to exhibit small Dasgupta scores while incurring high average sensitivity. These methods are therefore good at producing high-quality clusterings while being sensitive to small perturbations of the dataset. The
\(^2\)We show the results for Dendrogram purity and Cophenetic Correlation in Appendix.
\(^3\)We show the results for \( n = 300 \) and \( n = 500 \) in Appendix because they are similar to \( n = 100 \).
result of single is exceptional, exhibiting large Dasgupta scores with small average sensitivity. We observed that single tends to produce highly unbalanced clusterings because it splits the dataset into one small and one large cluster. Although such a split is less sensitive to dataset perturbations and thus has smaller average sensitivity, the quality of the clustering is poor. SHC-SR provides a way to balance the quality of the clustering and its average sensitivity by tuning $\lambda$ according to user demand.
8 APPLICATION TO GPS DATASET
We applied SHC-SR and agglomerative algorithms to a real-world problem involving a GPS dataset (Takahashi et al., 2019). This dataset consists of 280 GPS markers in Taiwan, where each data point represents its longitude, latitude, and velocity in the horizontal directions. By applying clustering to the horizontal velocities, we can cluster regions with similar movements and find active tectonic boundaries. The stability of clustering is crucial in this application because if the found clusters change drastically upon removal of a few GPS markers, the clusters may be an artifact induced by unstable clustering algorithms rather than the true tectonic boundaries.
Figure 3 shows the clustering results on the GPS dataset over five trials in which 20 randomly chosen points out of 280 are removed from the dataset. Here, we display the four clusters found at depth two of the obtained hierarchy. The figures show that the agglomerative algorithms (ward, average, complete) tend to produce different clusters across different data removals. By contrast, SHC-SR with $\lambda = 10$, $1000$, and $\infty$ produces almost identical clusters, except for the first result with $\lambda = 10$. This result confirms that we can obtain stable clusters by using SHC-SR.
9 CONCLUSIONS
In this work, we considered the average sensitivity of hierarchical clustering. We proposed hierarchical clustering algorithms SHC and SHC-SR and theoretically proved that they have low average sensitivity and average sensitivity under shared randomness, respectively. Then using real-world datasets, we empirically confirmed that our algorithm SHC-SR achieves a good trade-off between the quality of the output clustering and average sensitivity.
---
4We omitted single, 2-means, and pcd because of their poor performance in the previous experiments; single was poor in clustering quality, 2-means was poor in average sensitivity, and pcd tends to be Pareto-dominated by the other methods.
REFERENCES
A. Abboud, V. Cohen-Addad, and H. Houdrougé. Subquadratic high-dimensional hierarchical clustering. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 32, 2019.
M. Ackerman and S. Ben-David. A characterization of linkage-based hierarchical clustering. *The Journal of Machine Learning Research*, 17(1):8182–8198, 2016.
M. Ackerman, S. Ben-David, S. Brânzei, and D. Loker. Weighted clustering. In *Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)*, volume 26, pages 858–863, 2012.
S. Arora, S. Rao, and U. Vazirani. Expander flows, geometric embeddings and graph partitioning. *Journal of the ACM*, 56(2):1–37, 2009.
M.-F. Balcan, Y. Liang, and P. Gupta. Robust hierarchical clustering. *The Journal of Machine Learning Research*, 15(1):3831–3871, 2014.
D. Boley. Principal direction divisive partitioning. *Data Mining and Knowledge Discovery*, 2:325–344, 1998.
M. Charikar and V. Chatziafratis. Approximate hierarchical clustering via sparsest cut and spreading metrics. In *Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, pages 841–854. SIAM, 2017.
S. Chawla, R. Krauthgamer, R. Kumar, Y. Rabani, and D. Sivakumar. On the hardness of approximating multicut and sparsest-cut. *Computational Complexity*, 15(2):94–114, 2006.
D. Cheng, Q. Zhu, J. Huang, Q. Wu, and L. Yang. A hierarchical clustering algorithm based on noise removal. *International Journal of Machine Learning and Cybernetics*, 10:1591–1602, 2019.
S. Dasgupta. A cost function for similarity-based hierarchical clustering. In *Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC)*, pages 118–127, 2016.
L. Dhulipala, D. Eisenstat, J. Lacki, V. Mirrokni, and J. Shi. Hierarchical agglomerative graph clustering in poly-logarithmic depth. In *Advances in Neural Information Processing Systems (NeurIPS)*, pages 22925–22940, 2022.
M. B. Eisen, P. T. Spellman, P. O. Brown, and D. Botstein. Cluster analysis and display of genome-wide expression patterns. *Proceedings of the National Academy of Sciences*, 95(25):14863–14868, 1998.
B. Eriksson, G. Dasarathy, A. Singh, and R. Nowak. Active clustering: Robust and efficient hierarchical clustering using adaptively selected similarities. In *Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS)*, volume 15, pages 260–268, 2011.
L. L. Gao, J. Bien, and D. Witten. Selective inference for hierarchical clustering. *Journal of the American Statistical Association*, pages 1–27, 2022.
F. Gilbert, P. Simonetto, F. Zaidi, F. Jourdan, and R. Bourqui. Communities and hierarchical structures in dynamic social networks: analysis and visualization. *Social Network Analysis and Mining*, 1(2):83–95, 2011.
S. Hara and Y. Yoshida. Average sensitivity of decision tree learning. In *Proceedings of the 11th International Conference on Learning Representations (ICLR)*, 2023.
T. Hastie, R. Tibshirani, J. H. Friedman, and J. H. Friedman. *The elements of statistical learning: data mining, inference, and prediction*, volume 2. Springer, 2009.
K. A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In *Proceedings of the 22nd International Conference on Machine learning (ICML)*, pages 297–304, 2005.
A. K. Jain. Data clustering: 50 years beyond K-means. *Pattern Recognition Letters*, 31(8):651–666, 2010.
|
aX4fOLHrXT
|
Do you have any conviction that these closed source models were not trained on data containing SUBLEQ (or Minsky machine) programs and their executions? Are the results still significant if they were trained on such data?
|
Can General-Purpose Language Models Emulate a General-Purpose Computer In-Context?
Anonymous authors
Paper under double-blind review
Abstract
Several recent works have drawn parallels between modern Large Language Models (LLMs) and general-purpose computers, suggesting that language serves as their programming interface. In this study, we test part of this analogy; specifically, we investigate whether a pretrained LLM can emulate a memory-bounded, reduced instruction-set computer by executing random programs through looped inference calls, all within the model’s own context window and without the aid of external mechanisms such as associative memory or interpreters. The abstraction level of these programs is based on two general-purpose computational models: the SUBLEQ One-Instruction Set Computer (OISC) and the Minsky counter machine. Our prompts are carefully designed in a data-agnostic manner, and we conduct studies to examine failure modes related to the emulated computer functionality. Our findings indicate that certain models are capable of efficiently executing general-purpose instructions, despite not being explicitly trained for such a task. This suggests intriguing implications for AI alignment, as some models demonstrate the ability to autonomously emulate the operation of a general-purpose computer.
1 Introduction
Large language models (LLMs) have demonstrated remarkable performance in various general-purpose downstream tasks, including language and code generation and translation, text classification and sentiment analysis, question answering and dialogue, and different forms of compositional reasoning (Devlin et al., 2018; Ouyang et al., 2022; OpenAI, 2023; Guo et al., 2022; Lu et al., 2023). These impressive capabilities, which emerge as the scale of data and model increases, have generated significant interest in understanding the underlying mechanisms and probing the overall computational abilities of LLMs and their potential applications across diverse domains (Wei et al., 2022a; Mialon et al., 2023; Qin et al., 2023; Imani et al., 2023; Dziri et al., 2023). Another line of recent work has explored the ability of interconnected LLMs to perform complex computational tasks (Richards, 2023; Chase, 2022; Lee et al., 2023; Giannou et al., 2023).
In addition, it has recently been shown that LLMs, when equipped with auxiliary memory, are able to emulate universal Turing machines (Schuurmans, 2023). Hence, we could also argue such models could ultimately serve as “computers” that operate on human language, with prompting as a flexible new form of programming language. However, to emulate such general-purpose computations and decision making, it is necessary to properly incorporate interactions with external memory. Therefore, a natural question that arises is the following:
“Can pretrained LLMs emulate, in-context, a general-purpose computer, without the use of external mechanisms (such as memory or interpreters)?”
Motivated by this question, we conduct an investigation to determine whether modern LLMs can demonstrate inherent general-purpose computing skills simply through recursive prompting, without explicitly training or finetuning them to do so. By assessing various LLMs’ ability to simulate basic computational models, we find evidence that certain models can almost-reliably emulate a general-purpose computer in-context.
The emergence of this skill, despite not being part of the training objective, has noteworthy implications. It indicates that large pretrained models have latent potential for autonomous computation and decision making absent external constraints. Understanding the origins and limits of this unintended behavior is crucial, given its importance for safe AI deployment, and thus, we hope that our findings will encourage more research into this area within the machine learning community.
1.1 Overview of the Study
In this work, we propose employing simple computational models as a testbed for examining some of the algorithmic and computational capabilities of LLMs. In particular, we test two models: the Minsky machine, and the SUBLEQ One Instruction-Set Computer (OISC). These models present a balance between simplicity and generality, featuring a small set of arithmetic and branching operations, making them tractable for assessing in the context of LLM capabilities. The main question we attempt to answer is whether modern language models can simulate the operation of either of these two general-purpose machines. Before delving into our methodology, let us provide a brief overview of the two computational models under consideration.
Minsky Machine. A Minsky machine, also known as a counter machine, is a simple computational model proposed by Marvin Minsky (Minsky, 1967). It comprises a set of registers and instructions, with the $i$-th register denoted by $\text{reg}[i]$ and the $j$-th instruction denoted by $\text{inst}[j]$ for non-negative integers $i,j$. Each register contains a non-negative integer. Instructions are either of type A or B. Type A simply increments the value of $\text{reg}[i]$ and moves on to the next instruction, as shown in Algorithm 1, while type B performs a conditional decrement on $\text{reg}[i]$ and jumps to another instruction as shown in Algorithm 2. Note that each instruction $\text{inst}[j]$ remains the same as the program runs (no self-editing code), but the contents of the register $\text{reg}[i]$ are updated. The index of the instruction being executed is maintained in a program counter ($\text{pc}$), which changes as the program runs. This simple model is powerful enough to emulate any Turing Machine.
SUBLEQ. A SUBLEQ OISC (Mavaddat & Parhami, 1988) utilizes a single instruction to execute general-purpose programs. As shown in Algorithm 3, this instruction, called SUbtract and Branch if Less than or EQual to 0, takes three non-negative integers \(a\), \(b\), and \(c\) as input. The program first sets \(\text{reg}[b] := \text{reg}[b] - \text{reg}[a]\). If the resulting \(\text{reg}[b]\) is non-positive, the program jumps to instruction \(c\); otherwise, it proceeds to the next instruction. Surprisingly, SUBLEQ defines a language that is also Turing complete.
Why consider both? The primary distinction between Minsky machines and SUBLEQ OISCs is their instruction set and execution logic. Minsky machines use two instructions with separate registers, while SUBLEQ OISCs employ a single, more complex instruction involving subtraction, register update, and conditional jump. Emulating SUBLEQ OISCs may pose a greater challenge for LLMs due to its requirement of performing signed integer subtraction and handling three input arguments. At the same time, the abstraction level of Minsky machines does not allow for a straightforward mapping of say a Python program to that language, however, there does exist a C-like language compiling to SUBLEQ (Esolangs), as well as OISCs designed on this language (Mazonka & Kolodin, 2011), making it more interesting for more pragmatic tests. Incorporating both computational models in our evaluation enables a more comprehensive assessment of LLMs’ capabilities, versatility, and adaptation to different instruction sets, providing perhaps better insights into their ability to emulate general purpose machines.
Proposed Methodology. Our assessment examines the capability of a looped pretrained LLM to in-context simulate computational models (either Minsky machine or SUBLEQ OISC), without having access to an external memory\(^1\). Recall that three components are crucial to run a program in these computational models: (i) the set of instructions to run (i.e., the program), (ii) the register values, and (iii) the program counter. Our prompt is designed to provide this information (as in Fig. 2) and requires the LLM to execute a single instruction and update the memory state (namely the program counter and the register values) accordingly. By recursively calling the LLM with the updated memory state in a looped manner, we can assess its ability to accurately emulate the machine’s operation. We should highlight that in our study, we exploit two different approaches: one that performs a single inference call per instruction, and one that performs multiple inference calls per instruction\(^2\). We also experiment with different prompting methodologies, investigating whether chain-of-thought (CoT) prompting (Wei et al., 2022b) would yield better results.
Moreover, we generate random sets of instructions with bounded numbers of lines of code and registers, and evaluate the LLM’s ability to simulate the Minsky machine’s or SUBLEQ OISC’s operation. We examine the point at which the model “breaks,” i.e.,
---
\(^1\)See more details regarding the proposed methodology in Sec. 3.
\(^2\)Here we use inference calls to imply API calls to a base model, however the actual number of forward passes, i.e., the true number of inference calls, depends on the output tokens.
---
**Algorithm 1** Minsky Instruction A
**Input:** non-negative integer \(i\)

    reg[i] := reg[i] + 1
    goto the next instruction

**Algorithm 2** Minsky Instruction B
**Input:** non-negative integers \(i\) and \(j\)

    if reg[i] != 0 then
        reg[i] := reg[i] - 1
        goto the next instruction
    else
        goto instruction j
    end if

**Algorithm 3** SUBLEQ Instruction
**Input:** non-negative integers \(a\), \(b\), \(c\)

    reg[b] := reg[b] - reg[a]
    if reg[b] <= 0 then
        goto instruction c
    else
        goto the next instruction
    end if
---
Figure 2: An example of the memory state for a Minsky Machine. We have three distinct parts: program counter, registers, and instructions.
This allows us to assess some basic capabilities of different LLMs and how they relate to their perceived performance in more general AI tasks.
1.2 Main Results
Our main results indicate that many of the tested models, although exhibiting non-trivial performance, struggle to execute arbitrary Minsky/SUBLEQ instructions with near-perfect accuracy. However, there do exist pretrained models, like GPT-4 (OpenAI, 2023), that are indeed able to in-context emulate such functionality almost perfectly, without being explicitly trained to do so. As we will discuss in later sections, the way that we design our prompts ensures that our tests are data-agnostic. This means that the results are not due to direct memorization of the specific data used during training, but rather reflect the inherent computational capabilities and limits of the models themselves. In Fig. 1, we present our main experimental results regarding the execution accuracy of an arbitrary Minsky or SUBLEQ instruction, utilizing various approaches and prompting techniques, which we discuss in detail in Sec. 3.
As we can observe, most models struggle to execute an arbitrary Minsky/SUBLEQ instruction with full accuracy, especially when prompting does not employ CoT. The only model that can reliably execute such commands is found to be GPT-4, which achieves almost perfect execution accuracy. Although arguing that a model can in-context emulate a general-purpose computer would ideally require an execution accuracy of 100%\(^3\), we believe that the near-perfect performance of GPT-4, coupled with the non-trivial performance of other models, provides valuable insights. Specifically, it suggests that some pretrained LLMs are particularly close to demonstrating general-purpose computing capabilities, even without access to external mechanisms related to memory, calculation, or code execution. This behavior is emergent, since the models are not explicitly trained for these tasks. This hints at intriguing implications for AI alignment: when placed in a loop, these models exhibit the potential to autonomously execute general-purpose computational tasks, a capability that was not an explicit objective during training.
2 Background
**Large Language Models** A large number of LLMs have been proposed recently, showing huge success in natural language processing (NLP) (Devlin et al., 2018; Radford et al., 2018; 2019; Brown et al., 2020; Taori et al., 2023; Zhang et al., 2022; Touvron et al., 2023; Thoppilan et al., 2022). Pretrained LLMs have been reported to exhibit interesting properties: e.g., in-context learning (ICL) allows them to perform a new task without any fine-tuning (Min et al., 2022; Garg et al., 2022), and their performance on reasoning tasks is improved by prompting strategies (Zhou et al., 2022), including chain-of-thought (CoT) (Wei et al., 2022b; Kojima et al., 2022; Wang et al., 2022) and the more recent Tree-of-Thoughts (Yao et al., 2023) and Graph-of-Thoughts (Besta et al., 2023).
**Learning to Execute** Various recent works have developed neural networks that learn how to execute a program (Zaremba & Sutskever, 2014; Bieber et al., 2020; Wang et al., 2020; Dehghani et al., 2018; Yan et al., 2020; Austin et al., 2021; Nye et al., 2021; Graves et al., 2014; Kurach et al., 2015; Kaiser & Sutskever, 2015; Graves et al., 2016; Reed & De Freitas, 2015; Veličković et al., 2020; Lu et al., 2022; Liu et al., 2023). In recent years, several studies have attempted to evaluate the algorithmic reasoning abilities of neural networks and LLMs, investigating their ability to simulate general-purpose computation. Some of these studies have demonstrated the computational abilities of LLMs given access to external memory, highlighting their potential to perform complex computational tasks (Schuurmans, 2023).
**LLM evaluation** Many of the standard LLM benchmarks focus on various aspects of reasoning, text comprehension, and code generation. For example, natural language understanding benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) measure the performance of LLMs on a collection of tasks including question answering, sentiment analysis, and textual entailment. Common sense reasoning benchmarks such as BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020),
---
3This is due to the fact that placing it in a loop would result in reliably executing consecutive commands in large programs.
SIQA (Sap et al., 2019), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2021), ARC (Chollet, 2019), and OpenBookQA (Mihaylov et al., 2018) evaluate LLMs on tasks like Cloze-style completion, Winograd schema questions, and multiple-choice question answering. Closed-book question answering benchmarks like Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) test LLMs’ abilities to answer questions without access to external documents. Reading comprehension benchmarks, such as RACE (Lai et al., 2017), assess LLMs’ performance in understanding and answering questions related to written passages. Furthermore, mathematical reasoning benchmarks like MATH (Hendrycks et al., 2021) and GSM8k (Cobbe et al., 2021) evaluate LLMs on their abilities to solve arithmetic and algebraic problems, while code generation benchmarks, such as HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021), test the models’ capacity to generate code based on natural language descriptions. Finally, the massive multitask language understanding (MMLU) benchmark (Hendrycks et al., 2020) measures LLMs’ performance across multiple domains of knowledge, including humanities, STEM, and social sciences.
3 METHODOLOGY
Our investigation works in the following manner. Recall that in both the Minsky Machine and SUBLEQ, all we need to specify is the memory state, consisting of three components: the program counter, the register values, and the instructions. We first create a text file that lists these three components, an example of which is given in Fig. 2. In this example, we consider running a program using 4 registers and 5 instructions. The program counter (PC) is set to 1, meaning that we are running the instruction written in line 1, which is `line{1} = A(reg{3})`.
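For illustration, a state such as the one in Fig. 2 can be serialized along the following lines; the exact field layout and indexing below are our own assumptions:

```python
# Sketch: serializing a memory state (program counter, registers, instructions)
# into the plain-text format used in our prompts. Field layout is illustrative.

def serialize_state(pc, regs, lines):
    parts = [f"PC = {pc}", ""]
    parts += [f"reg{{{i}}} = {v}" for i, v in enumerate(regs)]
    parts.append("")
    parts += [f"line{{{i}}} = {instr}" for i, instr in enumerate(lines, start=1)]
    return "\n".join(parts)

print(serialize_state(1, [0, 2, 0, 5], ["A(reg{3})", "B(reg{0}, 4)"]))
```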
Utilizing this memory configuration as a component of the prompt, the goal is to execute Minsky or SUBLEQ instructions and assess whether the updated memory state aligns with the expected outcome. To achieve this, we employ two approaches and two prompting techniques, details of which can be found in Sec. 3.1 and Sec. 3.2, below.
### 3.1 Two Inference Approaches
Recall that the program we ask LLMs to run contains multiple instructions, each following Algorithm 1, 2, or 3. Our tests consider two approaches: executing each instruction with a single inference call, or with multiple inference calls to the tested LLM.
Figure 3: Executing instructions using a multiple-inferences approach. In this case each instruction is executed in 3 distinct calls: (a) one for reading the program counter (PC), (b) one for reading the instruction that is pointed by the program counter and (c) one to execute this instruction. Then, the model outputs an updated memory state, which is then used in the next prompt in order to execute the next instruction, in a looped manner.
**Multiple-inferences per instruction** The first approach, referred to as multiple-inferences, involves making three distinct inference calls to the underlying model in order to execute the instruction pointed to by the current value of the program counter pc. The initial call is employed to read the value of pc, while the second one utilizes this value to fetch the instruction located at
line{pc}. Lastly, the third call serves to execute the aforementioned instruction. The outcome is an updated memory state, which is subsequently used to execute the following instruction in the program, in an iterative fashion. A graphical depiction of this process can be observed in Fig. 3. As illustrated, the memory state is incorporated into the prompt, and the LLM is invoked three consecutive times. Upon completion of the current instruction’s execution, the updated memory state is then employed to execute the subsequent instruction in a similar manner.
**Single-inference per instruction** The single-inference approach, which serves as the second method, can be considered a constrained version of the multiple-inferences approach, as it essentially consolidates the inference calls of the latter into a single, more complex call. In this method, the model is invoked only once and instructed to execute the current instruction based on the underlying instruction set and the value of the program counter pc. The updated memory state is then used to execute the subsequent instruction, and this process is iteratively repeated in a loop. A visual representation of this concept can be seen in Fig. 4.
### 3.2 Two Prompting Techniques
Evidently, it is crucial to investigate how various prompting techniques influence the models’ performance. Therefore, an essential aspect of our evaluation is the implementation of diverse prompting methods and the observation of their effects on the models. Specifically, we incorporate two distinct prompting strategies: (i) the direct output strategy, where the models receive only the execution instructions and must produce the updated memory state exclusively, and (ii) the chain-of-thought (CoT) strategy, originally proposed in (Wei et al., 2022b), wherein the model is required to provide intermediate results in addition to the updated memory state.
### 4 Experimental Setup
**Models** In this series of experiments, we evaluate the performance of various LLMs on the tasks that we described in the previous section. Our investigation includes text-davinci-003 (Brown et al., 2020), gpt-3.5, and gpt-4 (OpenAI, 2023) from the GPT family of models, trained and deployed by OpenAI, and claude-v1 and claude-v1.3 by Anthropic (Anthropic, 2022). It is worth mentioning that we have access to these models through their respective APIs, which allows us to perform inferences and evaluate their capabilities.
**Emulating Memory Functionality** As a first step, we begin our study by determining the extent to which the examined models can effectively simulate basic memory functionality. In particular, assuming memories with a structure like in Fig. 2, we test tasks such as reading and writing to a register, and retrieving a desired instruction from the instructions section, using straightforward prompting.
Figure 5: Register reading accuracy for the case of Minsky Machines (top) and SUBLEQ OISCs (bottom). This figure shows the accuracy of reading a register from a randomly chosen memory state, as the number of registers increases. It also compares the accuracy for states with different numbers of instructions.
**Instruction Execution Accuracy** Our primary evaluation metric is the instruction execution accuracy, which quantifies the models’ ability to execute an instruction given an arbitrary Minsky or SUBLEQ memory state with either the multiple-inferences or the single-inference approach, as discussed in the previous section.
This evaluation is conducted on numerous randomly generated memory states. In particular, we fix the number of registers to 16, and then randomly generate 15 memory states for each number of instructions \( n_{\text{instr}} \in \{5, 10, 15, 20, 25, 30\} \)\(^4\). Then, we employ the multiple-inferences and single-inference approaches discussed in the previous section, asking the models to generate the updated memory state based on the current one. In the latter case, we also investigate the behavior of the models under the direct output and chain-of-thought prompting strategies that we also presented in the previous section.
Furthermore, it is essential to highlight that in the context of Minsky machines, a branch operation is executed solely if the instruction being executed is of type B and if the value of the corresponding register is 0 (refer to Algorithm 2 for more details). Consequently, when we generate the value of each register in a Minsky Machine randomly, with probability 1/2 we select the value to be 0. This implies that approximately half of the evaluated instructions will involve a branch operation, thereby providing a diverse set of test cases for the analysis.
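A sketch of this state-generation procedure (helper names and instruction syntax are illustrative):

```python
import random

# Sketch: generating a random Minsky memory state. With probability 1/2 a
# register is set to 0, so roughly half of the type-B executions take the branch.

def random_minsky_state(n_regs=16, n_instr=10, max_val=20):
    regs = [0 if random.random() < 0.5 else random.randint(1, max_val)
            for _ in range(n_regs)]
    instrs = []
    for _ in range(n_instr):
        if random.random() < 0.5:
            instrs.append(f"A(reg{{{random.randrange(n_regs)}}})")
        else:
            instrs.append(f"B(reg{{{random.randrange(n_regs)}}}, "
                          f"{random.randrange(1, n_instr + 1)})")
    pc = random.randrange(1, n_instr + 1)
    return pc, regs, instrs
```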
**Response Parsing** It is important to emphasize that in our experimental evaluations, we assume the existence of a basic parser that extracts the updated memory state from a model’s response. This aspect is of particular importance in the context of the chain-of-thought prompting strategy, where the models’ responses also encompass intermediate steps. In our experiments, this post-processing mechanism comprises a simple regular expression designed to identify the portion of the response enclosed within the <memory></memory> tags.
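In code, the parser amounts to little more than the following:

```python
import re

# Sketch of the response parser: keep only the memory state enclosed in
# <memory></memory> tags, ignoring any chain-of-thought text around it.

def parse_memory(response: str) -> str:
    match = re.search(r"<memory>(.*?)</memory>", response, flags=re.DOTALL)
    if match is None:
        raise ValueError("no memory state found in model response")
    return match.group(1).strip()
```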
5 RESULTS
**Emulating Memory Functionality** Figs. 5, 6, and 7 demonstrate the performance of the tested models when emulating basic memory functionality for either a Minsky Machine or a SUBLEQ OISC, in the setup that we described in the previous section. Specifically, Fig. 5 presents the accuracy of the models in reading a random register value from a randomly generated memory state. In Fig. 6, the accuracy of fetching a random instruction from memory is shown. Lastly, Fig. 7 displays the accuracy of writing a randomly selected value to a randomly specified register in memory.
4In total \( 6 \times 15 = 90 \) memory states.
5The detailed prompts are provided in the Appendix.
Figure 6: Fetching instruction accuracy for the case of Minsky Machines (top) and SUBLEQ OISCs (bottom). This figure shows the accuracy of fetching an instruction from a randomly chosen memory state, as the number of instructions increases. It also compares the accuracy for states with different numbers of registers.
In all cases, we observe that the models achieve near-perfect accuracy, which appears to be unaffected by the increase in the number of registers and instructions in the memory state. From these results, we can conclude that the tested LLMs are capable of approximating simple memory functionality, an essential component for executing simple Minsky or SUBLEQ instructions.
**Instruction Execution Accuracy** In Fig. 1, we present our results regarding the effectiveness of the tested LLMs in executing the current (Minsky or SUBLEQ) instruction, given an arbitrary memory state. As discussed in the previous section, we test both our multiple-inferences and single-inference approaches, and, especially for the latter case, both the direct output and CoT prompting strategies.
As we can observe, it is evident that, in the single-inference experiments, the use of CoT prompting leads to significantly better performance than the direct output strategy, which achieves nearly zero accuracy in instruction execution in most settings.
Regarding the Minsky experiments, we can observe that OpenAI’s models (text-davinci-003, gpt-3.5, and gpt-4) exhibit better performance compared to the claude models in the single-inference approach. In fact, gpt-4 achieves 100% accuracy for any number of instructions, which indicates its superiority compared to all the other models. In addition, we can observe that dividing the instruction execution into multiple API calls, namely the multiple-inferences approach, seems to be beneficial for the claude models without significant improvements for the cases of text-davinci-003 and gpt-3.5. However, in those experiments, gpt-4 still achieves 100% accuracy, proving to be the most capable among all the models.
Similar observations can be drawn in the case of SUBLEQ experiments as well. Specifically, once again, OpenAI’s models outperform Anthropic’s models, with gpt-4 achieving near-perfect accuracy. Furthermore, the multiple-inferences approach enhances the performance of both claude models without significantly affecting text-davinci-003, gpt-3.5, or gpt-4.
5.1 Discussion
As we have previously discussed, the near-perfect performance of gpt-4 in executing arbitrary Minsky and SUBLEQ instructions highlights the strong potential of current state-of-the-art models for general-purpose in-context computation, an ability that is emergent. This unintended capability suggests that such models have the inherent skills to autonomously perform calculations similar to a general-purpose computer when iteratively queried. This latent potential for reliable, unconstrained in-context computation in the absence of external mechanisms has important implications for understanding risks related to AI alignment.
Figure 7: Updating register accuracy for the case of Minsky Machines (top) and SUBLEQ OISCs (bottom). This figure shows the accuracy of writing a value to a randomly chosen register in a randomly generated memory state, as the number of instructions increases. It also compares the accuracy for states with different numbers of registers.
6 CONCLUSION
In this work, we present a systematic study to evaluate the capability of LLMs to in-context emulate simple computational models without external memory or execution mechanisms. Our findings demonstrate that certain models like GPT-4 can reliably execute arbitrary Minsky machine and SUBLEQ instructions with near perfect accuracy through iterative inference calls. This emergent ability, despite not being an explicit objective during training, suggests that the model has developed some inherent general-purpose computing capabilities.
While fully emulating a general-purpose computational model would require 100% accuracy, the strong performance of GPT-4, along with the non-trivial performance of the other models, indicates that they are close to demonstrating algorithmic reasoning abilities. Our prompts are carefully designed to avoid exploiting specific training data. Hence, the model’s effectiveness highlights its potential for executing any computational task when placed in an inference loop.
This has important implications for AI alignment, as the emergence of such autonomous computing capabilities was not an objective during training. Our work shows that certain LLMs have intrinsic skills for general-purpose computation, and can in effect become “universal computation engines” when recursively invoked. Further research is crucial to deeply understand the roots, limits, and controllability of such abilities for safe and reliable deployment of powerful LLMs.
REFERENCES
Anthropic. Claude. https://www.anthropic.com/product, 2022. Accessed: 2023-05-10.
Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Cai, C., Terry, M., Le, Q., et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Gianinazzi, L., Gajda, J., Lehmann, T., Podstawska, M., Niewiadomski, H., Nyczyk, P., and Hoefler, T. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.
Bieber, D., Sutton, C., Larochelle, H., and Tarlow, D. Learning to execute programs with instruction pointer attention graph neural networks. Advances in Neural Information Processing Systems, 33: 8626–8637, 2020.
Bisk, Y., Zellers, R., Gao, J., Choi, Y., et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432–7439, 2020.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Chase, H. LangChain, October 2022. URL https://github.com/hwchase17/langchain.
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Chollet, F. On the measure of intelligence. arXiv preprint arXiv:1911.01547, 2019.
Clark, C., Lee, K., Chang, M.-W., Kwiatkowski, T., Collins, M., and Toutanova, K. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Dehghani, M., Gouws, S., Vinyals, O., Uszkoreit, J., and Kaiser, Ł. Universal transformers. arXiv preprint arXiv:1807.03819, 2018.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Dziri, N., Lu, X., Sclar, M., Li, X. L., Jian, L., Lin, B. Y., West, P., Bhagavatula, C., Bras, R. L., Hwang, J. D., et al. Faith and fate: Limits of transformers on compositionality. arXiv preprint arXiv:2305.18654, 2023.
Esolangs. Higher subleq. URL: https://esolangs.org/wiki/Higher_Subleq.
Garg, S., Tsipras, D., Liang, P. S., and Valiant, G. What can transformers learn in-context? A case study of simple function classes. Advances in Neural Information Processing Systems, 35: 30583–30598, 2022.
Giannou, A., Rajput, S., Sohn, J., Lee, K., Lee, J. D., and Papailiopoulos, D. Looped transformers as programmable computers. arXiv preprint arXiv:2301.13196, 2023.
Graves, A., Wayne, G., and Danihelka, I. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
|
dHng2O0Jjr
|
Therefore, at the very least, I would expect some qualitative analysis of the tool set that can point out how many task instances involve these temporally variable tools and how much temporal variability impacts the evaluation results over time.
|
TOOLLLM: FACILITATING LARGE LANGUAGE MODELS TO MASTER 16000+ REAL-WORLD APIs
Yujia Qin\textsuperscript{1*}, Shihao Liang\textsuperscript{1*}, Yining Ye\textsuperscript{1}, Kunlun Zhu\textsuperscript{1}, Lan Yan\textsuperscript{1}, Yaxi Lu\textsuperscript{1}, Yankai Lin\textsuperscript{3†}, Xin Cong\textsuperscript{1}, Xiangru Tang\textsuperscript{4}, Bill Qian\textsuperscript{4}, Sihan Zhao\textsuperscript{1}, Lauren Hong\textsuperscript{1}, Runchu Tian\textsuperscript{1}, Ruobing Xie\textsuperscript{5}, Jie Zhou\textsuperscript{5}, Mark Gerstein\textsuperscript{4}, Dahai Li\textsuperscript{2,6}, Zhiyuan Liu\textsuperscript{1†}, Maosong Sun\textsuperscript{1†}
\textsuperscript{1}Tsinghua University \textsuperscript{2}ModelBest Inc. \textsuperscript{3}Renmin University of China \textsuperscript{4}Yale University \textsuperscript{5}WeChat AI, Tencent Inc. \textsuperscript{6}Zhihu Inc.
yujiqin16@gmail.com
ABSTRACT
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench. The codes, trained models, and demo are publicly available at https://github.com/OpenBMB/ToolBench.
1 INTRODUCTION
Tool learning (Qin et al., 2023b) aims to unleash the power of large language models (LLMs) to effectively interact with various tools (APIs) to accomplish complex tasks. By integrating LLMs with APIs, we can greatly expand their utility and empower them to serve as efficient intermediaries between users and the vast ecosystem of applications. Although open-source LLMs, e.g., LLaMA (Touvron et al., 2023a), have achieved versatile capabilities through instruction tuning (Taori et al., 2023; Chiang et al., 2023), they still lack the sophistication to perform higher-level tasks, such as appropriately interacting with tools (APIs) to fulfill complex human instructions. This deficiency is because current instruction tuning largely focuses on basic language tasks, with a relative neglect of the tool-use domain. On the other hand, current state-of-the-art (SOTA) LLMs, e.g., ChatGPT (OpenAI)
and GPT-4 (OpenAI, 2023), which have demonstrated impressive competencies in utilizing tools (Bubeck et al., 2023), are closed-source with their inner mechanisms opaque. This limits the democratization of AI technologies and the scope of community-driven innovation and development. In this regard, we deem it urgent to empower open-source LLMs to skillfully master diverse APIs.
Although prior works have explored building instruction tuning data for tool use (Li et al., 2023a; Patil et al., 2023; Tang et al., 2023; Xu et al., 2023b), they fail to fully stimulate the tool-use capabilities within LLMs and have inherent limitations: (1) limited APIs: they either fail to involve real-world APIs (e.g., RESTful APIs) (Patil et al., 2023; Tang et al., 2023) or consider only a small scope of APIs with poor diversity (Patil et al., 2023; Xu et al., 2023b; Li et al., 2023a);
(2) constrained scenarios: existing works are confined to instructions that involve only a single tool. In contrast, real-world scenarios may require multiple tools to be interleaved for multi-round tool execution to solve a complex task. Besides, they often assume that users manually specify the ideal API set for a given instruction in advance, which is infeasible with a large collection of real-world APIs; (3) inferior planning and reasoning: existing works adopt either CoT (Wei et al., 2023) or ReACT (Yao et al., 2022) for model reasoning, which cannot fully elicit the capabilities stored in LLMs and thus fail to handle complex instructions. In addition, some works do not even execute APIs to obtain real responses (Patil et al., 2023; Tang et al., 2023), which serve as important information for subsequent model planning.
To facilitate tool-use capabilities within open-source LLMs, we introduce ToolLLM, a general tool-use framework including data construction, model training, and evaluation. As illustrated in Figure 1, we collect a high-quality instruction-tuning dataset ToolBench. It is constructed automatically using ChatGPT (gpt-3.5-turbo-16k), which has been upgraded with function call capabilities. The comparison between ToolBench and prior works is listed in Table 1. Specifically, the construction of ToolBench entails three phases:
- **API Collection**: we gather 16,464 representational state transfer (REST) APIs from RapidAPI (link), a platform that hosts massive real-world APIs provided by developers. These APIs span 49 diverse categories such as social media, e-commerce, and weather. For each API, we crawl detailed API documents from RapidAPI, including the functionality descriptions, required parameters, code snippets for API calls, etc. By comprehending these documents to learn to execute APIs, LLMs can generalize to new APIs unseen during training;
- **Instruction Generation**: we first sample APIs from the whole set and then prompt ChatGPT to generate diverse instructions for these APIs. To cover practical scenarios, we curate instructions...
| Resource | ToolBench (this work) | APIBench (Patil et al., 2023) | API-Bank (Li et al., 2023a) | ToolAlpaca (Tang et al., 2023) | ToolBench (Xu et al., 2023b) |
|---------------------------|-----------------------|-------------------------------|-----------------------------|--------------------------------|-------------------------------|
| Real-world API? | ✓ | ✓ | ✓ | x | ✓ |
| Real API Call&Response? | ✓ | x | ✓ | x | ✓ |
| Multi-tool Scenario? | ✓ | x | x | x | ✓ |
| API Retrieval? | ✓ | ✓ | x | x | ✓ |
| Multi-step Reasoning? | ✓ | ✓ | ✓ | ✓ | ✓ |
| Number of tools | 3451 | 3 | 53 | 400 | 87 |
| Number of APIs | 16464 | 1645 | 53 | 400 | 232 |
| Number of Instances | 126,486 | 17002 | 274 | 3938 | 2746 |
| Number of Real API Calls | 469,585 | 0 | 568 | 0 | 3926 |
| Avg. Reasoning Traces | 4.0 | 1.0 | 2.1 | 1.0 | 5.9 |
Table 1: A comparison of our ToolBench to notable instruction tuning dataset for tool learning.
that involve both single-tool and multi-tool scenarios. This ensures that our model learns not only how to interact with individual tools but also how to combine them to accomplish complex tasks;
• **Solution Path Annotation**: each solution path may contain multiple rounds of model reasoning and real-time API calls to derive the final response. However, even the most sophisticated LLM, i.e., GPT-4, achieves a low pass rate for complex human instructions, making annotation inefficient. To this end, we develop a novel depth-first search-based decision tree (DFSDT) to bolster the planning and reasoning ability of LLMs. Compared with conventional ReACT, DFSDT enables LLMs to evaluate a multitude of reasoning paths and make deliberate decisions to either retract steps or proceed along a promising path. In experiments, DFSDT significantly improves the annotation efficiency and successfully completes those complex instructions that cannot be fulfilled using ReACT.
To assess the tool-use capabilities of LLMs, we develop an automatic evaluator, ToolEval, backed by ChatGPT. It comprises two key metrics: (1) pass rate, which measures an LLM’s ability to successfully execute an instruction within limited budgets, and (2) win rate, which compares the quality and usefulness of two solution paths. We demonstrate that ToolEval achieves a high correlation with human evaluation and provides a robust, scalable, and reliable assessment for machine tool use.
By fine-tuning LLaMA on ToolBench, we obtain ToolLLaMA. After evaluation based on our ToolEval, we derive the following findings:
• ToolLLaMA demonstrates a compelling capability to handle both single-tool and complex multi-tool instructions. As depicted in Figure 2, ToolLLaMA outperforms Text-Davinci-003 and Claude-2, achieves comparable performance to the “teacher model” ChatGPT, and is only slightly inferior to GPT-4. Besides, ToolLLaMA exhibits robust generalization to previously unseen APIs, requiring only the API documentation to adapt to new APIs effectively. This flexibility allows users to incorporate novel APIs seamlessly, thus enhancing the model’s practical utility.
• We show that our DFSDT serves as a general decision-making strategy to enhance the reasoning capabilities of LLMs. DFSDT broadens the search space by considering multiple reasoning traces and achieves significantly better performance than ReACT.
• We train a neural API retriever, which alleviates the need for manual selection from the large API pool in practice. As shown in Figure 1, given an instruction, the API retriever recommends a set of relevant APIs, which are sent to ToolLLaMA for multi-round decision making to derive the final answer. Despite sifting through a large pool of APIs, the retriever exhibits remarkable retrieval precision, returning APIs closely aligned with the ground truth.
• ToolLLaMA exhibits strong generalization performance on an out-of-distribution (OOD) dataset APIBench (Patil et al., 2023). Despite not training on any of the APIs or instructions on APIBench, ToolLLaMA performs on par with Gorilla, a pipeline specifically designed for APIBench.
## 2 Dataset Construction
We introduce the three-stage construction process of ToolBench: API collection (§ 2.1), instruction generation (§ 2.2), and solution path annotation (§ 2.3). All procedures are based on ChatGPT (gpt-3.5-turbo-16k), require minimal human supervision, and can be easily extended to new APIs.
2.1 API Collection
We start by introducing RapidAPI and its hierarchy, followed by how we crawl and filter APIs.
**RapidAPI Hub**
RapidAPI is a leading API marketplace that connects developers with thousands of real-world APIs, streamlining the process of integrating diverse services into applications. Developers can test and connect with various APIs by registering only a RapidAPI key. All APIs in RapidAPI can be classified into 49 coarse-grained categories (link), such as sports, finance, and weather. The categories associate an API with the most relevant topic. Additionally, the hub also provides 500+ finer-grained categorizations called collections (link), e.g., Chinese APIs and database APIs. APIs in the same collection share a common characteristic and often have similar functionalities or goals.
**Hierarchy of RapidAPI**
As shown in Figure 3, each tool may be composed of multiple APIs. For each tool, we crawl the following information: the name and description of the tool, the URL of the host, and all the available APIs belonging to the tool; for each API, we record its name, description, HTTP method, required parameters, optional parameters, request body, executable code snippets for API call, and an example API call response. This rich and detailed metadata serves as a valuable resource for LLMs to understand and effectively use the APIs, even in a zero-shot manner.
**API Filtering**
Initially, we gathered 10,853 tools (53,190 APIs) from RapidAPI. However, the quality and reliability of these APIs can vary significantly. In particular, some APIs may not be well-maintained, such as returning 404 errors or other internal errors. To this end, we perform a rigorous filtering process (details in appendix A.1) to ensure that the ultimate tool set of ToolBench is reliable and functional. Finally, we only retain 3,451 high-quality tools (16,464 APIs).
2.2 Instruction Generation
Different from prior works, we specifically focus on two crucial aspects for instruction generation: (1) **diversity**: to train LLMs to handle a wide range of API usage scenarios, thereby boosting their generalizability and robustness; and (2) **multi-tool usage**: to mirror real-world situations that often demand the interplay of multiple tools, improving the practical applicability and flexibility of LLMs. To this end, instead of brainstorming instructions from scratch and then searching for relevant APIs, we sample different combinations of APIs and craft various instructions that involve them.
**Generating Instructions for APIs**
Denote the total API set by $S_{\text{API}}$. Each time, we sample a few APIs, $S_{\text{sub}}^N = \{\text{API}_1, \cdots, \text{API}_N\} \subset S_{\text{API}}$. We prompt ChatGPT to understand the functionalities of these APIs and then generate (1) possible instructions $\text{Inst}_*$ that involve APIs in $S_{\text{sub}}^N$, and (2) the relevant APIs $S_{\text{rel}}^* \subset S_{\text{sub}}^N$ for each instruction, i.e., pairs $\{[S_{\text{rel}}^1, \text{Inst}_1], \cdots, [S_{\text{rel}}^{N'}, \text{Inst}_{N'}]\}$, where $N'$ denotes the number of generated instances. These (instruction, relevant API) pairs will be used for
training the API retriever in §3.1. We use different sampling strategies (introduced later) to cover all APIs and most of their combinations, thus ensuring the diversity of our instructions.
The prompt for ChatGPT is composed of (1) a general description of the intended instruction generation task, (2) comprehensive documentation of each API in $S_{\text{sub}}^N$, which helps ChatGPT understand their functionality and interplay, and (3) three in-context seed examples $\{\text{seed}_1, \text{seed}_2, \text{seed}_3\}$. Each seed example is an ideal instruction generation written by human experts. These seed examples are leveraged to better regulate ChatGPT’s behavior through in-context learning. In total, we wrote 12 / 36 diverse seed examples ($S_{\text{seed}}$) for the single-tool / multi-tool setting, and randomly sampled three examples at each time. Detailed prompts for instruction generation are described in appendix A.7.
Overall, the generation process can be formulated as follows:
$$\text{ChatGPT}\big(\{[S_{\text{rel}}^1, \text{Inst}_1], \cdots, [S_{\text{rel}}^{N'}, \text{Inst}_{N'}]\} \,\big|\, \text{API}_1, \cdots, \text{API}_N, \text{seed}_1, \cdots, \text{seed}_3\big), \quad \{\text{API}_1, \cdots, \text{API}_N\} \subset S_{\text{API}},\; \{\text{seed}_1, \cdots, \text{seed}_3\} \subset S_{\text{seed}}.$$
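A sketch of this sampling-and-prompting step is given below; `chat` is a placeholder for a ChatGPT API call, and the field names are illustrative (the real prompts are in appendix A.7):

```python
import random

# Sketch of instruction generation: sample N APIs, assemble a prompt from the
# task description, the API documents, and three seed examples, then query
# ChatGPT. `chat` is a placeholder for an API call returning text.

def generate_instructions(chat, api_pool, seed_examples, n_apis=5):
    apis = random.sample(api_pool, n_apis)
    seeds = random.sample(seed_examples, 3)
    prompt = "\n\n".join(
        ["Generate diverse user instructions that require the APIs below, "
         "and list the relevant APIs for each instruction."]
        + [api["documentation"] for api in apis]
        + seeds
    )
    return apis, chat(prompt)  # response holds (instruction, relevant APIs) pairs
```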
**Sampling Strategies for Different Scenarios**
As shown in Figure 3, for the single-tool instructions (I1), we iterate over each tool and generate instructions for its APIs. However, for the multi-tool setting, since the interconnections among different tools in RapidAPI are sparse, randomly sampling tool combinations from the whole tool set often leads to a series of irrelevant tools that cannot be covered by a single instruction in a natural way. To address this sparsity issue, we leverage the RapidAPI hierarchy information. Since tools belonging to the same RapidAPI category or collection are generally related to each other in functionality and goals, we randomly select 2-5 tools from the same category / collection and sample at most 3 APIs from each tool to generate the instructions. We denote the generated instructions as intra-category multi-tool instructions (I2) and intra-collection multi-tool instructions (I3), respectively. Through rigorous human evaluation, we find that instructions generated in this way already have a high diversity that covers various practical scenarios. We also provide visualization for instructions using Atlas (link) to support our claim.
After generating the initial set of instructions, we further filter out those whose relevant APIs are hallucinated, i.e., do not exist in $S_{\text{sub}}^N$. Finally, we collect nearly 200k qualified (instruction, relevant API) pairs, including 87413, 84815, and 25251 instances for I1, I2, and I3, respectively.
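The hallucination filter itself is a simple set-membership check, sketched here with illustrative field names:

```python
# Sketch of the hallucination filter: keep a generated (instruction, relevant
# APIs) pair only if every referenced API was among the sampled ones.

def filter_hallucinated(pairs, sampled_apis):
    sampled_names = {api["name"] for api in sampled_apis}
    return [(inst, rel) for inst, rel in pairs if set(rel) <= sampled_names]
```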
### 2.3 Solution Path Annotation
As shown in Figure 4, given an instruction $\text{Inst}_*$, we prompt ChatGPT to search for a valid action sequence: $\{a_1, \cdots, a_N\}$. Such a multi-step decision-making process is cast as a multi-round conversation for ChatGPT. At each round $t$, the model generates an action $a_t$ based on previous interactions, i.e., $\text{ChatGPT}(a_t | \{a_1, r_1, \cdots, a_{t-1}, r_{t-1}\}, \text{Inst}_*)$, where $r_*$ denotes the real API response. For each
\(a_t\), ChatGPT should specify its “thought”, which API to use, and the specific parameters for this API, i.e., \(a_t\) has the following format: “Thought: \(\cdots\), API Name: \(\cdots\), Parameters: \(\cdots\).”
To leverage the function call feature of ChatGPT, we treat each API as a special function and feed its API documentation into ChatGPT’s function field. In this way, the model understands how to call the API. For each instruction $\text{Inst}_*$, we feed all the sampled APIs $S_{\text{sub}}^N$ to ChatGPT as available functions. To let ChatGPT finish an action sequence, we define two additional functions, i.e., “Finish with Final Answer” and “Finish by Giving Up”. The former function has a parameter that corresponds to a detailed final answer to the original instruction, while the latter function is designed for cases where the provided APIs cannot complete the original instruction after multiple API call attempts.
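For illustration, the two terminal functions might be declared roughly as follows under the OpenAI function-calling schema; this is a minimal sketch, and the exact names and fields below are illustrative rather than the schemas used in ToolBench:

```python
# Sketch of the two terminal functions in OpenAI function-calling format.
# Field names are illustrative, not taken verbatim from ToolBench.

FINISH_FUNCTIONS = [
    {
        "name": "finish_with_final_answer",
        "description": "Conclude the task and return the final answer.",
        "parameters": {
            "type": "object",
            "properties": {
                "final_answer": {
                    "type": "string",
                    "description": "Detailed answer to the original instruction.",
                }
            },
            "required": ["final_answer"],
        },
    },
    {
        "name": "finish_by_giving_up",
        "description": "Give up when the provided APIs cannot complete the task.",
        "parameters": {"type": "object", "properties": {}},
    },
]
```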
**Depth First Search-based Decision Tree**
In our pilot studies, we find that CoT (Wei et al., 2023) or ReACT (Yao et al., 2022) has inherent limitations: (1) **error propagation**: a mistaken action may propagate the errors further and cause the model to be trapped in a faulty loop, such as continually calling an API in a wrong way or hallucinating APIs; (2) **limited exploration**: CoT or ReACT only explores one possible direction, leading to limited exploration of the whole action space. Hence even GPT-4 often fails to find a valid solution path, making annotation difficult.
To this end, we propose constructing a decision tree to expand the search space and increase the possibility of finding a valid path. As depicted in Figure 4, our DFSDT allows the model to assess different reasoning paths and choose to either (1) proceed along a promising path or (2) abandon an existing node by calling the “Finish by Giving Up” function and expand a new node. During node expansion, to diversify the child nodes and expand the search space, we prompt ChatGPT with the information of the previously generated nodes and explicitly encourage the model to generate a distinct node. For the searching process, we prefer depth-first search (DFS) over breadth-first search (BFS) because the annotation can be finished as long as one valid path is found; using BFS would cost excessive OpenAI API calls. More details are described in appendix A.8. We perform DFSDT for all the generated instructions and only retain the passed solution paths. Ultimately, we generate 126,486 (instruction, solution path) pairs, which are used to train ToolLLaMA in § 3.2.
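At its core, DFSDT is a depth-first search over partial action sequences; the following simplified sketch conveys the control flow (the `propose` and `execute` helpers are placeholders for the prompting and API-execution steps, and prompt details are omitted):

```python
# Simplified sketch of DFSDT: depth-first search over reasoning paths, where
# `propose` asks the model for a next action distinct from earlier siblings,
# and `execute` calls the real API. Helper names are illustrative.

def dfsdt(propose, execute, history, depth=0, max_depth=12, width=2):
    if depth >= max_depth:
        return None
    tried = []                                   # earlier siblings at this node
    for _ in range(width):
        action = propose(history, avoid=tried)   # encourage a distinct child
        tried.append(action)
        if action.name == "finish_with_final_answer":
            return history + [action]            # first valid path ends the search
        if action.name == "finish_by_giving_up":
            continue                             # abandon this node, expand another
        response = execute(action)
        path = dfsdt(propose, execute,
                     history + [action, response], depth + 1, max_depth, width)
        if path is not None:
            return path
    return None
```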
### 3 EXPERIMENTS
In this section, we investigate the performance of ToolLLM framework. We first introduce the evaluation metric and evaluate the efficacy of API retriever and DFSDT in § 3.1. Then we present the main experiments in § 3.2 followed by a generalization experiment in § 3.3.
#### 3.1 Preliminary Experiments
**ToolEval**
Considering the temporal variability of APIs on RapidAPI and the infinite potential solution paths for an instruction, it is infeasible to annotate a fixed ground-truth solution path for each test instruction. Considering that human evaluation can be time-consuming, we follow AlpacaEval (Li et al., 2023b) to develop an efficient evaluator, ToolEval, based on ChatGPT, which incorporates two evaluation metrics (details in appendix A.5): (1) **Pass Rate**: it calculates the proportion of instructions successfully completed within limited budgets. This metric measures the executability of instructions for an LLM and can be seen as a basic requirement for ideal tool use; and (2) **Win Rate**: we provide an instruction and two solution paths to the ChatGPT evaluator and obtain its preference (i.e., which one is better). We pre-define a set of criteria for both metrics, organized as prompts for our ChatGPT evaluator. We evaluate multiple times based on ChatGPT to improve the reliability and then calculate the average results from the evaluator.
Through rigorous testing (details in appendix A.5), we find that ToolEval demonstrates a high agreement of 87.1% in pass rate and 80.3% in win rate with human annotators. This shows that ToolEval can reflect and represent human evaluation to a large extent.
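Computationally, both metrics reduce to simple averages over evaluator verdicts, e.g.:

```python
# Sketch of ToolEval aggregation. `passed[i]` indicates whether instruction i
# was completed within the budget; `prefs[i]` is a list of repeated evaluator
# verdicts (True = candidate solution preferred over the reference).

def pass_rate(passed):
    return 100.0 * sum(passed) / len(passed)

def win_rate(prefs):
    per_instruction = [sum(v) / len(v) for v in prefs]  # average repeated evals
    return 100.0 * sum(per_instruction) / len(per_instruction)
```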
**Efficacy of API Retriever**
The API retriever aims to retrieve APIs relevant to an instruction. We employ Sentence-BERT (Reimers & Gurevych, 2019) to train a dense retriever based on BERT-BASE (Devlin et al., 2019). The API retriever encodes the instruction and the API document into two embeddings and calculates their relevance as embedding similarity. For training, we regard the relevant APIs of each instruction generated in § 2.2 as positive examples and sample a few other APIs as negative examples for contrastive learning. For baselines, we choose BM25 (Robertson et al., 2009) and OpenAI’s text-embedding-ada-002 (Ada).
| Method | I1 NDCG@1 | I1 NDCG@5 | I2 NDCG@1 | I2 NDCG@5 | I3 NDCG@1 | I3 NDCG@5 | Average NDCG@1 | Average NDCG@5 |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|----------------|----------------|
| BM25 | 18.4 | 19.7 | 12.0 | 11.0 | 25.2 | 20.4 | 18.5 | 17.0 |
| Ada | 57.5 | 58.8 | 36.8 | 30.7 | 54.6 | 46.8 | 49.6 | 45.4 |
| Ours | 84.2 | 89.7 | 68.2 | 77.9 | 81.7 | 87.1 | 78.0 | 84.9 |
Table 2: Our API retriever v.s. two baselines for three types of instructions (I1, I2, I3). We report NDCG@1 and NDCG@5.
| Method | I1 | I2 | I3 | Average |
|--------|----|----|----|---------|
| ReACT | 37.8 | 40.6 | 27.6 | 35.3 |
| ReACT@N | 49.4 | 49.4 | 34.6 | 44.5 |
| DFSDT | 58.0 | 70.6 | 62.8 | 63.8 |
Table 3: Pass rate of different reasoning strategies for three types of instructions (I1, I2, I3) based on ChatGPT.
We evaluate the retrieval performance using NDCG (Järvelin & Kekäläinen, 2002). We train and evaluate our model on single-tool instructions (I1), intra-category multi-tool instructions (I2), and intra-collection multi-tool instructions (I3).
As shown in Table 2, our API retriever consistently outperforms the baselines across all settings, indicating its feasibility in real-world scenarios with massive APIs. Also, the NDCG score of I1 is generally higher than that of I2 and I3, which means single-tool instruction retrieval is simpler than the multi-tool setting.
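A minimal sketch of the retrieval step is shown below; the checkpoint name is a placeholder, since our actual retriever is trained from BERT-BASE with the contrastive objective described above:

```python
from sentence_transformers import SentenceTransformer, util

# Sketch of dense API retrieval: embed the instruction and all API documents,
# rank by cosine similarity, and return the top-k documents.

model = SentenceTransformer("bert-base-uncased")   # placeholder checkpoint

def retrieve_apis(instruction, api_docs, k=5):
    q = model.encode(instruction, convert_to_tensor=True)
    d = model.encode(api_docs, convert_to_tensor=True)
    scores = util.cos_sim(q, d)[0]                 # (num_docs,)
    top = scores.topk(k).indices.tolist()
    return [api_docs[i] for i in top]
```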
**Superiority of DFSDT over ReACT**
Before solution path annotation, we validate the efficacy of DFSDT. Based on ChatGPT, we compare DFSDT and ReACT using the pass rate metric. Since DFSDT consumes more OpenAI API calls than ReACT, for a fairer comparison, we also establish a “ReACT@N” baseline, which runs ReACT multiple times until the total cost reaches the same level as DFSDT. Once a valid solution is found by ReACT@N, we deem it a pass.
From Table 3, it can be observed that DFSDT significantly outperforms the two baselines in all scenarios. Since we only retain the passed annotations as training data, given the same budget, using DFSDT allows us to annotate more instructions. This makes DFSDT a more efficient way of annotation that reduces the total cost. We also find that the performance improvement of DFSDT is more evident for harder instructions (i.e., I2 and I3) than for simpler ones (I1). This means that by expanding the search space, DFSDT can better solve those difficult, complex instructions that are unanswerable by vanilla ReACT no matter how many times it is performed.
### 3.2 Main Experiments
**ToolLLaMA**
We fine-tune the LLaMA-2 7B model (Touvron et al., 2023b) using the instruction-solution pairs. The original LLaMA-2 model has a sequence length of 4096, which is not enough under our setting since the API response can be very long. To this end, we use positional interpolation (Chen et al., 2023) to extend the context length to 8192 (training details in appendix A.3).
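Positional interpolation simply rescales position indices so that an extended context maps back into the range seen during pretraining; a minimal sketch for rotary position embeddings, as a simplified illustration of Chen et al. (2023):

```python
import torch

# Minimal sketch of positional interpolation for RoPE: when the sequence is
# longer than the pretraining length, positions are linearly compressed so the
# model never sees out-of-range rotation angles.

def interpolated_rope_angles(seq_len, dim, train_len=4096, base=10000.0):
    scale = min(1.0, train_len / seq_len)     # e.g., 4096/8192 = 0.5 at length 8192
    positions = torch.arange(seq_len, dtype=torch.float32) * scale
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(positions, inv_freq)   # (seq_len, dim/2) angles for cos/sin
```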
**Settings**
Ideally, by scaling the number and diversity of instructions and unique tools in the training data, ToolLLaMA is expected to generalize to new instructions and APIs unseen during training. This is meaningful since users can define customized APIs and expect ToolLLaMA to adapt according to the documentation. To this end, we strive to evaluate the generalization ability of ToolLLaMA at three levels: (1) **Inst.: unseen instructions** for the same set of tools in the training data, (2) **Tool: unseen tools** that belong to the same (seen) category as the tools in the training data, and (3) **Cat.: unseen tools** that belong to a different (unseen) category of tools in the training data.
We perform experiments on three scenarios: single-tool instructions (I1), intra-category multi-tool instructions (I2), and intra-collection multi-tool instructions (I3). For I1, we conduct the evaluation for the aforementioned three levels (I1-Inst., I1-Tool, and I1-Cat.); for I2, since the training instructions already involve different tools of the same category, we only perform level 1 and level 3 for the generalization evaluation (I2-Inst. and I2-Cat.); similarly, we only perform level 1 generalization for I3 (I3-Inst.) since it already covers instructions that involve various combinations of tools from different categories (the tools in a RapidAPI collection may come from different RapidAPI categories).
For each test instruction, we feed the ground-truth (oracle) APIs $S_{\text{sub}}^N$ to each model. This simulates the scenario where the user specifies the API set they prefer.
**Baselines**
We choose two LLaMA variants that have been fine-tuned for general-purpose dialogue, i.e., Vicuna (Chiang et al., 2023) and Alpaca (Taori et al., 2023). We also choose the “teacher model” ChatGPT, Text-Davinci-003, GPT-4, and Claude-2 as baselines, and apply both DFSDT and ReACT to them. When calculating the win rate, each model is compared with ChatGPT-ReACT.
| Model | Method | I1-Inst Pass | I1-Inst Win | I1-Tool Pass | I1-Tool Win | I1-Cat Pass | I1-Cat Win | I2-Inst Pass | I2-Inst Win | I2-Cat Pass | I2-Cat Win | I3-Inst Pass | I3-Inst Win | Average Pass | Average Win |
|---------------|----------------|--------------|-------------|--------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| ChatGPT | ReACT | 41.5 | 44.0 | 41.5 | 42.5 | 60.0 | 64.8 | 60.0 | 69.0 | 60.0 | 69.0 | 60.0 | 69.0 | 60.0 | 69.0 |
| | DFSDT | 54.5 | 60.5 | 65.0 | 62.0 | 70.0 | 72.0 | 71.5 | 64.8 | 60.0 | 69.0 | 60.0 | 69.0 | 60.0 | 69.0 |
| Claude-2 | ReACT | 5.5 | 31.0 | 31.5 | 27.8 | 33.5 | 33.8 | 35.0 | 31.5 | 14.0 | 47.5 | 6.8 | 34.4 | | |
| | DFSDT | 20.5 | 38.0 | 31.0 | 44.3 | 18.5 | 43.3 | 17.0 | 36.8 | 20.5 | 33.5 | 28.0 | 65.0 | 22.6 | 43.5 |
| Text-Davinci-003 | ReACT | 12.0 | 28.5 | 20.0 | 35.3 | 20.0 | 31.0 | 8.5 | 29.8 | 14.5 | 29.8 | 24.0 | 45.0 | 16.5 | 33.2 |
| | DFSDT | 43.5 | 40.3 | 44.0 | 43.8 | 46.0 | 46.8 | 37.0 | 40.5 | 42.0 | 43.3 | 46.0 | 63.0 | 43.1 | 46.3 |
| GPT4 | ReACT | 53.5 | 60.0 | 50.0 | 58.8 | 53.5 | 63.5 | 67.0 | 65.8 | 72.0 | 60.3 | 47.0 | 78.0 | 57.2 | 64.4 |
| | DFSDT | 60.0 | 67.5 | 71.5 | 67.8 | 67.0 | 66.5 | 79.5 | 73.3 | 77.5 | 63.3 | 71.0 | 84.0 | 71.1 | 70.4 |
| Vicuna | ReACT & DFSDT | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Alpaca | ReACT & DFSDT | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| ToolLLaMA | ReACT | 25.0 | 45.0 | 29.0 | 42.0 | 33.0 | 47.5 | 30.5 | 50.8 | 31.5 | 41.8 | 25.0 | 53.0 | 29.0 | 47.0 |
| | DFSDT | 57.0 | 63.0 | 60.0 | 55.3 | 62.0 | 54.5 | 77.0 | 68.5 | 77.0 | 80.0 | 60.0 | 69.0 | 60.0 | 60.0 |
| | DFSDT-Retriever| 64.0 | 62.3 | 64.0 | 59.0 | 60.5 | 55.0 | 81.5 | 68.5 | 68.5 | 60.8 | 65.0 | 73.0 | 67.2 | 63.1 |
Table 4: Main experiments of ToolBench. Win rate is calculated by comparing each model with ChatGPT-ReACT. A win rate higher than 50% means the model performs better than ChatGPT-ReACT. Apart from ToolLLaMA-DFSDT-Retriever, all methods use the oracle API retriever (i.e., ground truth API).
**Main Results**
The results are placed in Table 4, from which we derive that:
1. Both Vicuna and Alpaca fail to pass any instruction (pass rate & win rate = 0), which means their instruction-following abilities do not cover the tool-use domain. This underscores the deficiency of current instruction tuning attempts, which largely focus on language skills;
2. For all LLMs, using DFSDT significantly outperforms ReACT in both pass rate and win rate. Notably, ChatGPT+DFSDT surpasses GPT-4+ReACT in pass rate and performs comparably in win rate. This underscores the superiority of DFSDT over ReACT in decision-making;
3. When using DFSDT, ToolLLaMA performs much better than Text-Davinci-003 and Claude-2, and achieves a result almost on par with ChatGPT (the teacher model). In general, despite generalizing to unseen instructions and tools, ToolLLaMA+DFSDT demonstrates competitive generalization performance in all scenarios, achieving a pass rate second only to GPT-4+DFSDT.
Overall, these results demonstrate that ToolBench can sufficiently elicit the tool-use capabilities within LLMs and empower them to skillfully master even unseen APIs for various instructions.
**Integrating API Retriever with ToolLLaMA**
In real-world scenarios, asking users to manually recommend APIs from a large pool may not be practical. To emulate this practical setting, we feed the top 5 APIs (instead of the ground truth APIs $S_N^{\text{sub}}$) recommended by our API retriever to ToolLLaMA. As shown in Table 4, using retrieved APIs even improves the performance compared to the ground truth API set. This is because many APIs in the ground truth API set can be replaced by other similar APIs with better functionalities, which our API retriever can successfully identify. In other words, our retriever expands the search space of relevant APIs and finds more appropriate ones for the current instruction. It demonstrates the excellent ability of our API retriever to retrieve relevant APIs, especially considering the vast pool (16,000+) of APIs from which our API retriever selects.
### 3.3 Out-of-Distribution (OOD) Generalization to APIBench (Patil et al., 2023)
**Settings**
We further extend ToolLLaMA to an OOD dataset, APIBench, to validate its generalization ability. We equip ToolLLaMA with two retrievers: our trained API retriever and the oracle retriever. We evaluate three domains of APIBench, i.e., TorchHub, TensorHub, and HuggingFace. We compare ToolLLaMA with Gorilla, a LLaMA-7B model fine-tuned using the training data of APIBench. Following the original paper, we adopt two settings for Gorilla: the zero-shot setting (ZS) and the retrieval-aware setting (RS). The latter (RS) means the retrieved APIs are sent to the model as part of the prompts, while the former (ZS) does not incorporate the APIs in the prompts when training the model. We adopt the official evaluation metric and report the AST accuracy and the hallucination rates.
**Results**
The results are shown in Table 5. In general, ToolLLaMA achieves remarkable OOD generalization performance on all three datasets, despite being trained on a completely different API domain and instruction domain. Specifically, ToolLLaMA+our API retriever outperforms Gorilla+BM25 from both training settings (ZS / RS) in terms of AST accuracy on HuggingFace and TorchHub. With the same oracle retriever, ToolLLaMA is consistently superior when compared to Gorilla-ZS. It should be noted that Gorilla model cannot be generalized to our ToolBench dataset due to our more complex settings, such as the multi-tool use and multi-step reasoning.
| Method | HuggingFace Hallu. (↓) | HuggingFace AST (↑) | TorchHub Hallu. (↓) | TorchHub AST (↑) | TensorHub Hallu. (↓) | TensorHub AST (↑) |
|--------|------------------------|---------------------|---------------------|------------------|----------------------|-------------------|
| ToolLLaMA + Our Retriever | 10.60 | **16.77** | 15.70 | **51.16** | 6.48 | 40.59 |
| Gorilla-ZS + BM25 | 46.90 | 10.51 | 17.20 | 44.62 | 20.58 | 34.31 |
| Gorilla-RS + BM25 | **6.42** | **15.71** | **5.91** | **50.00** | **2.77** | **41.90** |
| ToolLLaMA + Oracle | 8.66 | 88.80 | 14.12 | 85.88 | 7.44 | 88.62 |
| Gorilla-ZS + Oracle | 52.88 | 44.36 | 39.25 | 59.14 | 12.99 | 83.21 |
| Gorilla-RS + Oracle | **6.97** | **89.27** | **6.99** | **93.01** | **2.04** | **94.16** |
Table 5: OOD generalization experiments on APIBench. For the Gorilla entries, ZS / RS means that Gorilla was trained in a zero-shot / retrieval-aware setting on APIBench. We report hallucination rate and AST accuracy.
4 RELATED WORK
**Tool Learning** Recent studies have shed light on the burgeoning capabilities of LLMs in mastering tools and making decisions within complex environments (Nakano et al., 2021; Qin et al., 2023a; Shen et al., 2023; Wu et al., 2023; Schick et al., 2023; Hao et al., 2023; Qian et al., 2023; Song et al., 2023; Zhuang et al., 2023; Gao et al., 2023). Gaining access to external tools endows LLMs with real-time factual knowledge (Yang et al., 2023), multimodal functionalities (Gupta & Kembhavi, 2023), and specialized skills in vertical domains (Jin et al., 2023). However, open-source LLMs still lag far behind SOTA LLMs in tool use, and how tool-use ability is acquired by SOTA LLMs remains unclear. In this paper, we aim to bridge this gap and fathom the underlying mechanism.
**Instruction Tuning** Instruction tuning enhances LLMs in understanding human instructions and generating proper responses (Wei et al., 2021; Bach et al., 2022). Since manual annotation is time-consuming, self-instruct (Wang et al., 2022) proposes to generate high-quality data from SOTA LLMs, which has facilitated a recent trend of data curation for multi-turn dialogue (Taori et al., 2023; Chiang et al., 2023; Xu et al., 2023a; Ding et al., 2023). Compared with dialogue, tool learning is more challenging given the vast diversity of APIs and the complexity of multi-tool instructions; as a result, even GPT-4 often fails to find a valid solution path. Moreover, existing tool-learning datasets cannot effectively address real human needs, as mentioned in §1. Instead, ToolBench is designed for practical scenarios and improves on the previous pipeline for tool-learning data construction.
**Prompting LLMs for Decision Making** Prompting facilitates LLMs to decompose high-level tasks into sub-tasks and generate grounded plans (Ahn et al., 2022; Huang et al., 2022a,b; Ye et al., 2023). ReAct (Yao et al., 2022) integrates reasoning with acting by allowing LLMs to give a proper reason for an action and by incorporating environmental feedback into reasoning. However, these studies do not incorporate a mechanism for decision retraction, which becomes problematic as an initial error can lead to a cascade of subsequent errors. Recently, Reflexion (Shinn et al., 2023) mitigates this issue by asking LLMs to reflect on previous failures. Our DFSDT extends Reflexion to a more general method by allowing LLMs to assess different reasoning paths and select the most promising one. In essence, DFSDT shares a similar idea with concurrent work on tree-of-thought (ToT) reasoning (Yao et al., 2023). However, DFSDT targets general decision-making problems where the decision space is infinite, whereas ToT addresses relatively simple tasks that can be solved by brute-force search.
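As a rough illustration of the search behavior described above (expand candidate steps, descend depth-first, and backtrack on failure), consider the following sketch; `propose` and `is_solution` are hypothetical stand-ins for the LLM calls that expand a node and judge whether a path solves the instruction, not the paper's exact prompting interface.

```python
# A minimal DFSDT-style search sketch over reasoning paths.
def dfsdt(path, propose, is_solution, max_width=3, depth_left=8):
    """Depth-first search with backtracking over partial reasoning paths."""
    if is_solution(path):
        return path                       # a valid solution path was found
    if depth_left == 0:
        return None                       # depth budget exhausted: backtrack
    for step in propose(path)[:max_width]:   # best-first candidate steps
        result = dfsdt(path + [step], propose, is_solution,
                       max_width, depth_left - 1)
        if result is not None:
            return result                 # propagate the first valid path up
    return None                           # every child failed: backtrack
```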
5 CONCLUSION
To elicit the tool-use capabilities within LLMs, we present ToolBench, covering 16k+ real-world APIs and various practical use-case scenarios including both single-tool and multi-tool tasks. Moreover, we propose DFSDT to reinforce the planning and reasoning ability of LLMs, enabling them to navigate through reasoning paths strategically. For efficient evaluation of tool learning, we devise an automatic evaluator ToolEval. By fine-tuning LLaMA on ToolBench, the obtained model ToolLLaMA matches the performance of ChatGPT and exhibits remarkable generalization ability to unseen APIs. Besides, we develop a neural API retriever to recommend relevant APIs for each instruction. The retriever can be integrated with ToolLLaMA as a more automated tool-use pipeline. In the experiments, we demonstrate the generalization ability of our pipeline to out-of-distribution domains. In general, this work paves the way for future research at the intersection of instruction tuning and tool use for LLMs.
ACKNOWLEDGEMENTS
The contributions are listed as follows: (1) API collection: Shihao Liang, Sihan Zhao, Kunlun Zhu, Yujia Qin; (2) instruction generation: Lan Yan, Kunlun Zhu, Shihao Liang, Yujia Qin; (3) solution path annotation: Yining Ye, Shihao Liang, Runchu Tian, Yujia Qin, Xin Cong; (4) model implementation: Shihao Liang, Yujia Qin, Kunlun Zhu, Lauren Hong, Yifan Wu; (5) system demonstration: Xiangru Tang, Bill Qian. Yujia Qin led the project, designed the methodology and experiments, and wrote the paper. Yankai Lin, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou advised the project. Yankai Lin, Xin Cong, and Ruobing Xie proofread the whole paper. All authors participated in the discussion. Yujia Qin is sponsored by the Baidu Scholarship.
The authors would like to thank Yifan Wu, Si Sun, Zheni Zeng, Chen Zhang, Yu Gu, Chenfei Yuan, Junxi Yan, Shizuo Tian, Mingxi Yan, Jason Phang, Chen Qian, and Weize Chen for their valuable feedback, discussion, and participation in this project.
REFERENCES
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Févry, et al. Promptsource: An integrated development environment and repository for natural language prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 93–104, 2022.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn. arXiv preprint arXiv:2306.08640, 2023.
Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14953–14962, 2023.
Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. arXiv preprint arXiv:2305.11554, 2023.
|
YxvmODVWny
|
The meaning of the shading of the cells in Table 1 adds a lot of confusion and should have been defined in the caption to improve readability. This confusion arises because, in one portion of the table, darker gray represents lower centroid distance, while in another portion it represents the frequency of failures. This frequency-of-failure metric is also not well defined; how is it different from failure occurrence?
|
RT-Sketch: Goal-Conditioned Imitation Learning from Hand-Drawn Sketches
Anonymous authors
Paper under double-blind review
Abstract
Natural language and images are commonly used as goal representations in goal-conditioned imitation learning (IL). However, natural language can be ambiguous and images can be over-specified. In this work, we study hand-drawn sketches as a modality for goal specification. Sketches are easy for users to provide on the fly like language, but similar to images they can also help a downstream policy to be spatially-aware and even go beyond images to disambiguate task-relevant from task-irrelevant objects. We present RT-Sketch, a goal-conditioned policy for manipulation that takes a hand-drawn sketch of the desired scene as input, and outputs actions. We train RT-Sketch on a dataset of paired trajectories and corresponding synthetically generated goal sketches. We evaluate this approach on six manipulation skills involving tabletop object rearrangements on an articulated countertop. Experimentally we find that RT-Sketch is able to perform on a similar level to image or language-conditioned agents in straightforward settings, while achieving greater robustness when language goals are ambiguous or visual distractors are present. Additionally, we show that RT-Sketch has the capacity to interpret and act upon sketches with varied levels of specificity, ranging from minimal line drawings to detailed, colored drawings. For supplementary material and videos, please refer to our website.\footnote{http://rt-sketch-anon.github.io}
1 Introduction
Robots operating alongside humans in households, workplaces, or industrial environments have an immense potential for assistance and autonomy, but careful consideration is needed of what goal representations are easiest for humans to convey to robots, and for robots to interpret and act upon.
Instruction-following robots attempt to address this problem using the intuitive interface of natural language commands as inputs to language-conditioned imitation learning policies (Brohan et al., 2023b,a; Karamcheti et al., 2023; Lynch & Sermanet, 2020; Lynch et al., 2023). For instance, imagine asking a household robot to set the dinner table. A language description such as "put the utensils, the napkin, and the plate on the table" is under-specified or ambiguous. It is unclear how exactly the utensils should be positioned relative to the plate or the napkin, or whether their distances to each other matter. To achieve this higher level of precision, a user may need to give lengthier descriptions such as "put the fork 2cm to the right of the plate, and 5cm from the leftmost edge of the table", or even online corrections ("no, you moved too far to the right, move back a bit!") (Cui et al., 2023; Lynch et al., 2023). While language is an intuitive way to specify goals, its qualitative nature and ambiguities can make it both inconvenient for humans to provide without lengthy instructions or corrections, and difficult for robot policies to interpret for downstream precise manipulation.
On the other hand, using goal images to specify objectives and training goal-conditioned imitation learning policies either paired with or without language instructions has shown to be quite successful in recent years (Jiang et al., 2022; Jang et al., 2022). In these settings, an image of the scene in its desired final state could fully specify the intended goal. However, this has its own shortcomings: access to a goal image is a strong prior assumption, and a pre-recorded goal image can be tied to a particular environment, making it difficult to reuse for generalization.
Between natural language, which lacks the granularity to unambiguously specify goals, and images, which over-specify them, we recognize that current frameworks lack a goal representation that adequately captures user intent in a convenient yet expressive manner. While natural language is highly flexible, it can also be highly ambiguous or require lengthy descriptions, which quickly becomes difficult in long-horizon tasks or those requiring spatial awareness. Meanwhile, goal images over-specify goals in unnecessary pixel-level detail, leading to the need for internet-scale data for generalization.
To address these challenges, we study hand-drawn sketches as a convenient yet expressive modality for goal specification in visual imitation learning. By virtue of being minimal, sketches are still easy for users to provide on the fly like language. Yet unlike language, they (1) allow for more spatially-aware task specification, and (2) help a downstream policy disambiguate task-relevant from irrelevant objects based on their selective inclusion, exclusion, or level of detail, without needing to faithfully preserve pixel-level details as in an image. Furthermore, like goal images, sketches readily integrate with off-the-shelf policy architectures that take visual representations as input, while providing an added level of goal abstraction that ignores unnecessary pixel-level detail.
In this work, we present RT-Sketch, a goal-conditioned policy for manipulation that takes a user-provided hand-drawn sketch of the desired scene as input, and outputs actions. The novel architecture of RT-Sketch modifies the original RT-1 language-to-action Transformer architecture (Brohan et al., 2023b) to consume visual goals rather than language, allowing for flexible conditioning on sketches, images, or any other visually representable goals. To enable this, we concatenate a goal sketch and history of observations as input before tokenization, omitting language. We train RT-Sketch on a dataset of 80K trajectories paired with synthetically produced goal sketches, generated by an image-to-sketch stylization network trained from a few hundred image-sketch pairs.
We evaluate RT-Sketch across six manipulation skills on real robots involving tabletop object rearrangements on a countertop with drawers, subject to a wide range of scene variations. These skills include moving objects near to one another, knocking a can sideways, placing a can upright, closing a drawer, and opening a drawer. Experimentally, we find that RT-Sketch performs on a similar level to image- or language-conditioned agents in straightforward settings. When language instructions are ambiguous, or in the presence of visual distractors, we find that RT-Sketch achieves roughly 2X higher spatial-precision and alignment scores, as assessed by human labelers, compared to language- or goal-image-conditioned policies (see Fig. 1 (right)). Additionally, we show that RT-Sketch can handle different levels of input specificity, ranging from rough sketches to more scene-preserving, colored drawings (see Fig. 1 (left)).
2 RELATED WORK
In this section, we discuss prior methods for goal-conditioned imitation learning which operate on traditional goal representations. We also highlight ongoing efforts towards image-sketch conversion, which open new possibilities for goal-conditioning modalities which are underexplored in robotics.
Goal-Conditioned Imitation Learning Despite the similarity in name, our learning of manipulation policies conditioned on hand-drawn sketches of the desired scene is different from the notion of policy sketches (Andreas et al., 2017), symbolic representations of task structure describing its subcomponents. Reinforcement learning (RL) is not easily applicable in our scenario, as it is non-trivial to define a reward objective which accurately quantifies alignment between a provided scene sketch and states visited by an agent during training. We instead focus on imitation learning (IL) techniques, particularly the goal-conditioned setting (Ding et al., 2019).
Goal-conditioned IL has proven useful in settings where a policy must be able to handle spatial or semantic variations for the same task (Argall et al., 2009). These settings include rearrangement of multiple objects (Brohan et al., 2023b;a; Lynch et al., 2023; Manuelli et al., 2019), kitting (Zakka et al., 2020), folding of deformable objects into different configurations (Ganapathi et al., 2021), and search for different target objects in clutter (Danielczuk et al., 2019). However, these approaches tend to either rely on language (Brohan et al., 2023b; Lynch & Sermanet, 2020; Lynch et al., 2023; Karamcheti et al., 2023; Shao et al., 2020), or goal images (Danielczuk et al., 2019) to specify variations. Follow-up works enable multimodal conditioning on either goal images and language (Jang et al., 2022), in-prompt images (Jiang et al., 2022), or image embeddings (Manuelli et al., 2019; Zakka et al., 2020; Ganapathi et al., 2021). However, all of these representations are ultimately derived from raw images or language in some way, which overlooks the potential for more abstract goal representations that are easy to specify but preserve spatial awareness, such as sketches.
In addition to their inflexibility in terms of goal representation, goal-conditioned IL policies tend to overfit to demonstration data and fail to handle even slight distribution shift in new scenarios (Ross et al., 2011). For language-conditioning, distribution shift can encompass semantic or spatial ambiguity, novel instructions or phrasing, or unseen objects (Jang et al., 2022; Brohan et al., 2023b). Goal-image conditioning is similarly susceptible to out-of-distribution visual shift, such as variations in lighting or object appearances, or unseen background textures (Burns et al., 2022; Belkhale et al., 2023). We instead opt for sketches, which are minimal enough to combat visual distractors, yet expressive enough to provide unambiguous goals. Prior work, including (Barber et al., 2010) and (Porfirio et al., 2023), has shown the utility of sketches over pure language for navigation and limited manipulation settings. However, the sketches explored in these works are largely intended to guide low-level motion at the joint level for manipulation, or to provide explicit directional cues for navigation. Cui et al. (2022) consider sketches amongst other modalities as an input for goal-conditioned manipulation, but do not explicitly train a policy conditioned on sketches; they thus conclude that the scene image is better suited than the sketch image for goal specification. Our result is different and complementary, in that policies trained to take sketches as input outperform a scene-image-conditioned policy by 1.63x and 1.5x in terms of Likert ratings for perceived spatial and semantic alignment, subject to visual distractors.
Image-Sketch Conversion In recent years, sketches have gained increasing popularity within the computer vision community for applications such as object detection (Chowdhury et al., 2023a; Chowdhury et al., 2023; Chowdhury et al., 2022), visual question answering (Qiu et al., 2022; Qiu et al., 2023), and scene understanding (Chowdhury et al., 2023b), either in isolation or in addition to text and images.
When considering how best to incorporate sketches in IL, an important design choice is whether to take sketches into account (1) at test time (i.e., converting a sketch to another goal modality compatible with a pre-trained policy), or (2) at training time (i.e., explicitly training an IL policy conditioned on sketches). For (1), one could first convert a given sketch to a goal image, and then roll out a vanilla goal-image conditioned policy. This could be based on existing frameworks for sketch-to-image conversion, such as ControlNet (Zhang & Agrawala, 2023), GAN-style approaches (Koley et al., 2023), or text-to-image synthesis, such as InstructPix2Pix (Brooks et al., 2023) or Stable Diffusion (Rombach et al., 2022). While these models produce photorealistic results under optimal conditions, they do not jointly handle image generation and style transfer, making it
unlikely for generated images to match the style of an agent's observations. At the same time, these approaches are susceptible to producing hallucinated artifacts, introducing distribution shifts (Zhang & Agrawala, 2023).
Based on these challenges, we instead opt for (2), and consider image-to-sketch conversion techniques for hindsight relabeling of terminal images in pre-recorded demonstration trajectories. Recently, Vinker et al. (2022b;a) propose networks for predicting Bezier-curve-based sketches of input image objects or scenes, where sketch quality is supervised by a CLIP-based alignment metric. While these approaches generate sketches of high visual fidelity, test-time optimization takes on the order of minutes, which does not scale to the typical size of robot learning datasets (hundreds to thousands of demonstration trajectories). Meanwhile, conditional generative adversarial networks (cGANs) such as Pix2Pix (Isola et al., 2017) have proven useful for scalable image-to-image translation. Most related to our work is that of Li et al. (2019), which trains a Pix2Pix model to produce sketches from given images on a large crowd-sourced dataset of 5K paired images and line drawings. We build on this work to fine-tune an image-to-sketch model on robot trajectory data, and show its utility for enabling downstream manipulation from sketches.
3 Sketch-Conditioned Imitation Learning
In this section, we will first introduce our problem of learning a sketch-conditioned policy. We will then discuss our approach to train an end-to-end sketch-to-action IL agent. First, in Section 3.1, we discuss our instantiation of an auxiliary image-to-sketch translation network which automatically generates sketches from a reference image. In Section 3.2, we discuss how we use such a model to automatically hindsight relabel an existing dataset of demonstrations with synthetically generated goal sketches, and train a sketch-conditioned policy on this dataset.
Problem Statement Our goal is to learn a manipulation policy conditioned on a goal sketch of the desired scene state and a history of interactions. Formally, we denote such a policy by $\pi_{\text{sketch}}(a_t|g,\{o_j\}_{j=1}^t)$, where $a_t$ denotes an action at timestep $t$, $g \in \mathbb{R}^{W \times H \times 3}$ is a given goal sketch with width $W$ and height $H$, and $o_t \in \mathbb{R}^{W \times H \times 3}$ is an observation at time $t$. At inference time, the policy takes a given goal sketch along with a history of RGB image observations to infer an action to execute. In practice, we condition $\pi_{\text{sketch}}$ on a history of $D$ previous observations rather than all observations from the initial state at $t = 1$. To train such a policy, we assume access to a dataset $\mathcal{D}_{\text{sketch}} = \{g^n, \{(o^n_t, a^n_t)\}_{t=1}^{T(n)}\}_{n=1}^N$ of $N$ successful demonstrations, where $T(n)$ refers to the length of the $n$th trajectory in timesteps. Each episode of the dataset consists of a given goal sketch and a corresponding demonstration trajectory, with image observations recorded at each timestep. Our goal is to thus learn the sketch-conditioned imitation policy $\pi_{\text{sketch}}(a_t|g,\{o_j\}_{j=1}^t)$ trained on this dataset $\mathcal{D}_{\text{sketch}}$.
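As a minimal illustration of this interface, the sketch below rolls out a policy conditioned on a fixed goal sketch and a sliding window of the last \( D \) observations; `policy` and `env` are hypothetical stand-ins for the trained model and the robot environment.

```python
from collections import deque

def rollout(policy, env, goal_sketch, history_len=6, max_steps=100):
    """Run pi_sketch(a_t | g, {o_j}) with a sliding observation window."""
    obs = env.reset()
    history = deque([obs] * history_len, maxlen=history_len)
    for _ in range(max_steps):
        action = policy(goal_sketch, list(history))  # condition on g and D obs
        obs, done = env.step(action)
        history.append(obs)                          # slide the window forward
        if done:
            break
```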
3.1 Image-to-Sketch Translation
Training a sketch-conditioned policy requires a dataset of robot trajectories that are each paired with a sketch of the goal state achieved by the robot. Collecting such a dataset from scratch at scale, including the trajectories themselves and manually drawn sketches, can easily become impractical. Thus, we instead aim to learn an image-to-sketch translation network $T(g|o)$ that takes an image observation $o$ and outputs the corresponding goal sketch $g$. This network can be used to post-process an existing dataset of demonstrations $\mathcal{D} = \{\{(o^n_t, a^n_t)\}_{t=1}^{T(n)}\}_{n=1}^N$ with image observations by appending a synthetically generated goal sketch to each demonstration. This produces a dataset for sketch-based IL: $\mathcal{D}_{\text{sketch}} = \{g^n, \{(o^n_t, a^n_t)\}_{t=1}^{T(n)}\}_{n=1}^N$.
RT-1 Dataset In this work, we rely on an existing dataset of visual demonstrations collected by prior work (Brohan et al., 2023b). RT-1 is a prior language-to-action imitation learning agent trained on a large-scale dataset ($80K$ trajectories) of VR-teleoperated demonstrations that include skills such as moving objects near one another, placing cans and bottles upright or sideways, opening and closing cabinets, and performing pick and place on countertops and drawers (Brohan et al., 2023b). Here, we repurpose the RT-1 dataset and further adapt the RT-1 policy architecture to accommodate sketches, detailed in Section 3.2.
Assumptions on Sketches
We acknowledge that there are innumerable ways for a human to provide a sketch corresponding to a given image of a scene. In this work, we make the following assumptions about input sketches for a controlled experimental validation procedure. In particular, we first assume that a given sketch respects the task-relevant contours of an associated image, such that tabletop edges, drawer handles, and task-relevant objects are included and discernible in the sketch. We do not assume contours in the sketch to be edge-aligned or pixel-aligned with those in an image. We do assume that the input sketch consists of black outlines at the very least, with shading in color being optional. We further assume that sketches do not contain information not present in the associated image, such as hallucinated objects, scribbles, or textual annotations, but may omit task-irrelevant details that appear in the original image.
Sketch Dataset Generation
To train an image-to-sketch translation network \( T \), we collect a new dataset \( D_T = \{(o_i, g_i^{(1)}, \ldots, g_i^{(L(i))})\}_{i=1}^M \) consisting of \( M \) image observations \( o_i \), each paired with a set of goal sketches \( g_i^{(1)}, \ldots, g_i^{(L(i))} \). These represent \( L(i) \) different renderings of the same image \( o_i \), accounting for the fact that there are multiple valid ways of sketching the same scene. To collect \( D_T \), we take 500 randomly sampled terminal images from demonstration trajectories in the RT-1 dataset, and manually draw sketches with black lines on a white background capturing the tabletop, drawers, and relevant objects visible on the manipulation surface. While we personally annotate each robot observation with a single sketch only, we add this data to an existing, much larger non-robotic dataset (Li et al., 2019), which captures inter-sketch variation via multiple crowdsourced sketches per image. We do not include the robot arm in our manual sketches, as we find a minimal representation to be most natural; empirically, our policy can handle such sketches despite actual goal configurations likely having the arm in view. We collect these drawings using a custom digital stylus drawing interface in which a user draws an edge-aligned sketch over the original image (Appendix Fig. 15). The final recorded sketch includes the user's strokes in black on a white canvas with the original image dimensions.
Image-to-Sketch Training
We implement the image-to-sketch translation network \( T \) with the Pix2Pix conditional generative adversarial network (cGAN) architecture, which is composed of a generator \( G_T \) and a discriminator \( D_T \) (Isola et al., 2017). The generator \( G_T \) takes an input image \( o \) and a random noise vector \( z \), and outputs a goal sketch \( g \). The discriminator \( D_T \) is trained to discriminate between artificially generated sketches and ground-truth goal sketches. We utilize the standard cGAN loss to train both (Li et al., 2019; Isola et al., 2017):
\[
L_{cGAN} = \min_{G_T} \max_{D_T} \mathbb{E}_{o,g}[\log D_T(o,g)] + \mathbb{E}_{o,z}[\log(1 - D_T(o, G_T(o,z)))]
\]
We also add the \( L_1 \) loss to encourage the produced sketches to align with the ground truth sketches as in (Li et al., 2019). To account for the fact that there may be multiple valid sketches for a given image, we only penalize the minimum \( L_1 \) loss incurred across all \( L(i) \) sketches provided for a given image as in Li et al. (2019). This is to prevent wrongly penalizing \( T \) for producing a valid sketch that aligns well with one example but not another simply due to stylistic differences in the ground truth sketches. The final objective is then a \( \lambda \)-weighted combination of the average cGAN loss and the minimum alignment loss:
\[
L_T = \frac{\lambda}{L(i)} \sum_{k=1}^{L(i)} L_{cGAN}(o_i, g_i^{(k)}) + \min_{k \in \{1,\ldots,L(i)\}} L_1(o_i, g_i^{(k)})
\]
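A possible PyTorch rendering of the generator-side objective is sketched below; the discriminator's (image, sketch) input signature, the non-saturating binary-cross-entropy form of the adversarial term, and the default \( \lambda \) are illustrative assumptions rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def image_to_sketch_loss(G, D, obs, gt_sketches, lam=1.0):
    """Generator-side loss: lam * cGAN term + min-L1 over valid drawings.

    obs:         (B, 3, H, W) robot observations o_i.
    gt_sketches: list of L(i) tensors of shape (B, 1, H, W), one per drawing.
    """
    fake = G(obs)                                  # generated sketch
    logits = D(obs, fake)                          # D conditions on the image
    # Non-saturating stand-in for the generator's adversarial term.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Penalize only the closest ground-truth sketch, per image in the batch.
    l1 = torch.stack([(fake - g).abs().mean(dim=(1, 2, 3)) for g in gt_sketches])
    return lam * adv + l1.min(dim=0).values.mean()
```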
In practice, we supplement the 500 manually drawn sketches from \( D_T \) by leveraging the existing larger-scale Contour Drawing Dataset (Li et al., 2019). We refer to this dataset as \( D_{CD} \), which contains 1000 examples of internet-scraped images containing objects, people, animals from Adobe Stock, paired with \( L(i) = 5 \) crowd-sourced black and white outline drawings per image collected on Amazon Mechanical Turk. Visualizations of this dataset are provided in Appendix Fig. 4. We first take a pre-trained image-to-sketch translation network \( T_{CD} \) (Li et al., 2019) trained on \( D_{CD} \), with \( L(i) = 5 \) sketches per image. Then, we fine-tune \( T_{CD} \) on \( D_T \), with only \( L(i) = 1 \) manually drawn sketch per robot observation, to obtain our final image-to-sketch network \( T \). Visualizations of the sketches generated by \( T \) for different robot observations are available in Fig. 5.
3.2 RT-Sketch
With a means of translating image observations to black and white sketches via \( T \) (Section 3.1), we can automatically augment the existing RT-1 dataset with goal sketches. This results in a dataset, which we refer to as \( D_{\text{sketch}} \), which can be used for training our algorithm, RT-Sketch.
**RT-Sketch Dataset**
The original RT-1 dataset \( D_{\text{lang}} = \{i^n, \{(o^n_t, a^n_t)\}_{t=1}^{T(n)}\}_{n=1}^N \) consists of \( N \) episodes with a paired natural language instruction \( i \) and demonstration trajectory \( \{(o^n_t, a^n_t)\}_{t=1}^{T(n)} \).
We can automatically hindsight-relabel such a dataset with goal images instead of language goals (Andrychowicz et al., 2017). Let us denote the last step of a trajectory \( n \) as \( T(n) \). Then the new dataset with image goals instead of language goals is \( D_{\text{img}} = \{o^n_{T(n)}, \{(o^n_t, a^n_t)\}_{t=1}^{T(n)}\}_{n=1}^N \), where we treat the last observation of the trajectory \( o^n_{T(n)} \) as the goal \( g^n \). To produce a dataset for \( \pi_{\text{sketch}} \), we can simply replace \( o^n_{T(n)} \) with \( \hat{g}^n = T(o^n_{T(n)}) \) such that \( D_{\text{sketch}} = \{\hat{g}^n, \{(o^n_t, a^n_t)\}_{t=1}^{T(n)}\}_{n=1}^N \).
To encourage the policy to afford different levels of input sketch specificity, we in practice produce goals via \( \hat{g}^n = A(o^n_{T(n)}) \), where \( A \) is a randomized augmentation function. \( A \) chooses between simply applying \( T \); applying \( T \) with colorization during postprocessing (e.g., superimposing a blurred version of the ground-truth RGB image over the binary sketch); applying a classical Sobel operator (Sobel, 1968) for edge detection; or applying no operator at all, which preserves the original ground-truth goal image (Fig. 2). By co-training on all of these representations, we intend for RT-Sketch to handle a spectrum of specificity, ranging from binary sketches to colorized sketches, edge-detected images, and goal images (Appendix Fig. 5).
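One plausible implementation of the augmentation function \( A \) is sketched below, assuming OpenCV for the Sobel and blur operations; the kernel size, stroke threshold, and the interface of the image-to-sketch network \( T \) (`sketch_model`) are illustrative assumptions.

```python
import random
import cv2
import numpy as np

def augment_goal(final_obs: np.ndarray, sketch_model) -> np.ndarray:
    """Map the terminal observation o_T(n) to one of four goal styles."""
    choice = random.choice(["sketch", "color_sketch", "sobel", "image"])
    if choice == "image":
        return final_obs                            # keep the raw goal image
    if choice == "sobel":
        gray = cv2.cvtColor(final_obs, cv2.COLOR_RGB2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        edges = 255 - np.clip(cv2.magnitude(gx, gy), 0, 255).astype(np.uint8)
        return np.stack([edges] * 3, axis=-1)       # dark edges on white
    sketch = sketch_model(final_obs)                # (H, W, 3) drawing from T
    if choice == "color_sketch":
        colored = cv2.GaussianBlur(final_obs, (21, 21), 0)
        colored[sketch.mean(axis=-1) < 128] = 0     # keep black strokes on top
        return colored
    return sketch
```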
**RT-Sketch Model Architecture**
In our setting, we consider goals provided as sketches rather than language instructions as was done in RT-1. This change in the input representation necessitates a change in the model architecture. The original RT-1 policy relies on a Transformer backbone (Vaswani et al., 2017). RT-1 first passes a history of \( D = 6 \) images through an EfficientNet-B3 model (Tan & Le, 2019) to produce image embeddings, which are tokenized, and separately extracts textual embeddings and tokens via FiLM (Perez et al., 2018) and a Token Learner (Ryoo et al., 2021). The tokens are then fed into a Transformer which outputs bucketized actions. The output action dimensionality is 7 for the end-effector (\( x, y, z, \text{roll}, \text{pitch}, \text{yaw}, \text{gripper width} \)), 3 for the mobile base (\( x, y, \text{yaw} \)), and 1 for a flag that selects amongst base movement, arm movement, and episode termination. To retain the RT-1 architecture while accommodating the change in input representation, we omit the FiLM language tokenization altogether. Instead, we concatenate a given goal image or sketch with the history of images as input to EfficientNet, and extract tokens from its output, leaving the rest of the policy architecture unchanged. We visualize the RT-Sketch training inputs and policy architecture in Fig. 2. We refer to this architecture when trained only on images (i.e., an image goal-conditioned RT-1 policy) as RT-Goal-Image, and as RT-Sketch when it is trained on sketches as discussed in this section.
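The input packing can be summarized with the short sketch below, which stacks the goal with the \( D = 6 \) frame history along the frame axis before the EfficientNet tokenizer; the concatenation axis and module names are assumptions for illustration, not the exact implementation.

```python
import torch

def pack_inputs(goal: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
    """goal: (B, 3, H, W); history: (B, 6, 3, H, W) -> (B, 7, 3, H, W)."""
    return torch.cat([goal.unsqueeze(1), history], dim=1)

# tokens  = token_learner(efficientnet(pack_inputs(goal, history)))
# actions = transformer(tokens)  # bucketized end-effector, base, and mode flag
```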
**Training RT-Sketch**
We can now train \( \pi_{\text{sketch}} \) on \( D_{\text{sketch}} \) utilizing the same procedure as was used to train RT-1 (Brohan et al., 2023b), with the above architectural modifications. We fit \( \pi_{\text{sketch}} \) using the behavioral cloning objective, which minimizes the negative log-likelihood of each demonstrated action given the history of observations and the goal sketch (Torabi et al., 2018):
$$J(\pi_{\text{sketch}}) = -\sum_{n=1}^{N} \sum_{t=1}^{T(n)} \log \pi_{\text{sketch}}\big(a_t^n \mid g^n, \{o_j^n\}_{j=1}^t\big)$$
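A compact PyTorch sketch of this objective is given below, assuming the policy emits per-dimension logits over the discretized action buckets; cross-entropy over these logits is exactly the negative log-likelihood being minimized.

```python
import torch.nn.functional as F

def bc_loss(policy, goal, obs_history, actions):
    """actions: (B, A) long tensor of bucket indices for A action dimensions."""
    logits = policy(goal, obs_history)             # (B, A, num_buckets)
    return F.cross_entropy(logits.flatten(0, 1),   # (B*A, num_buckets)
                           actions.flatten())      # negative log-likelihood
```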
4 EXPERIMENTS
We seek to understand the ability of RT-Sketch to perform goal-conditioned manipulation as compared to policies that operate from higher-level goal abstractions like language, or more over-specified modalities, like goal images. To that end, we test the following four hypotheses:
**H1:** RT-Sketch is successful at goal-conditioned IL. While sketches are abstractions of real images, our hypothesis is that they are specific enough to provide manipulation goals to a policy. Therefore, we expect RT-Sketch to perform on a similar level to language goals (RT-1) or goal images (RT-Goal-Image) in straightforward manipulation settings.
**H2:** RT-Sketch is able to handle varying levels of specificity. There are as many ways to sketch a scene as there are people. Because we have trained RT-Sketch on sketches of varying levels of specificity, we expect it to be robust against variations of the input sketch for the same scene.
**H3:** Sketches enable better robustness to distractors than goal images. Sketches focus on task-relevant details of a scene. Therefore, we expect RT-Sketch to provide robustness against distractors in the environment that are not included in the sketch compared to RT-Goal-Image that operates on detailed image goals.
**H4:** Sketches are favorable when language is ambiguous. We expect RT-Sketch to provide a higher success rate compared to ambiguous language inputs when using RT-1.
4.1 EXPERIMENTAL SETUP
**Policies** We compare RT-Sketch to the original language-conditioned agent RT-1 (Brohan et al., 2023b), and RT-Goal-Image, a policy identical in architecture to RT-Sketch, but taking a goal image as input rather than a sketch. All policies are trained on a multi-task dataset of ~ 80K real-world trajectories manually collected via VR teleoperation using the setup from Brohan et al. (2023b). These trajectories span a suite of common office and kitchen tasks such as picking and placing objects, reorienting cups and bottles upright or sideways, opening and closing drawers, and rearranging objects between drawers or a countertop.
**Evaluation protocol** To ensure fair comparison, we control for the same initial and goal state of the environment across different policy rollouts via a catalog of well-defined evaluation scenarios that serve as references for human robot operators. For each scenario, we record an initial image (RGB observation) of the scene, the goal image (with objects manually rearranged as desired), a natural language task string describing the desired agent behavior to achieve the goal, and a set of hand-drawn sketches corresponding to the recorded goal image. At test time, a human operator retrieves a particular evaluation scenario from the catalog, aligns the physical robot and scene according to a reference image using a custom visualization utility, and places the relevant objects in their respective locations. Finally, one of the goal representations (language, image, sketch, etc.) for the scenario is provided as input to the policy. We record a video of the policy rollout for downstream evaluation (see Section 4.2). We perform all experiments using the Everyday Robot\(^2\), which contains a mobile base, an overhead camera, and a 7-DoF manipulator arm with a parallel jaw gripper. All sketches for evaluation are collected with a custom manual drawing interface by a single human annotator on a tablet with a digital stylus.
**Performance Metrics** Defining a standardized, automated evaluation protocol for goal alignment is non-trivial. Since binary task success is too coarse-grained and image-similarity metrics like frame-differencing or CLIP (Radford et al., 2021) tend to be brittle, we measure performance with two more targeted metrics, described below.
\(^2\)everydayrobots.com
Figure 3: **Goal Alignment Results:** Average Likert scores for different policies rating perceived semantic alignment (Q1) and spatial alignment (Q2) to a provided goal. For straightforward benchmark manipulation tasks, RT-Sketch performs comparably and in some cases better than RT-1 and RT-Goal-Image in terms of both metrics, for 5 out of 6 skills (H1). RT-Sketch further exhibits the ability to handle sketches of different levels of detail (H2), while achieving better goal alignment than baselines when the visual scene is distracting (H3) or language would be ambiguous (H4). Error bars indicate standard error across labeler ratings.
First, we quantify policy precision as the distance (in pixels) between object centroids in achieved and ground truth goal states, using manual keypoint annotations. Although leveraging out-of-the-box object detectors to detect object centroids is a possibility, we want to avoid conflating errors in object detection (imprecise bounding box, wrong object, etc.) with manipulation error of the policy itself. Second, we gather human-provided assessments of perceived goal alignment, following the commonly-used Likert (Likert, 1932) rating scheme from 1 (Strongly Disagree) to 7 (Strongly Agree), for:
- **(Q1)** The robot achieves *semantic alignment* with the given goal during the rollout.
- **(Q2)** The robot achieves *spatial alignment* with the given goal during the rollout.
For Q1, we present labelers with the policy rollout video along with the given ground-truth language task description. We expect reasonably high ratings across all methods for straightforward manipulation scenarios (H1). Sketch-conditioned policies should yield higher scores than a language-conditioned policy when a task string is ambiguous (H4). Q2 is instead geared at measuring to what degree a policy can spatially arrange objects as desired. For instance, a policy can achieve semantic alignment for the instruction *place can upright* as long as the can ends up in the right orientation. For Q2, we visualize a policy rollout side-by-side with a given visual goal (ground truth image, sketch, etc.) to assess perceived spatial alignment. We posit that all policies should receive high ratings for straightforward scenarios (H1), with a slight edge for visually-conditioned policies, which implicitly have stronger spatial priors encoded in goals. We further expect that as the visual complexity of a scene increases, sketches may be able to better attend to pertinent aspects of a goal and achieve better spatial alignment than image-conditioned agents (H3), even for different levels of sketch specificity (H2). We provide a visualization of the assessment interface for Q1 and Q2 in Appendix Fig. 16. We note that we perform these human assessment surveys across 62 individuals (non-expert, unfamiliar with our system), where we assign between 8 and 12 people to evaluate each of the 6 different skills considered below.
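For concreteness, the first (spatial-precision) metric can be computed as in the sketch below, given manually annotated centroids for the \( K \) manipulated objects.

```python
import numpy as np

def centroid_rmse(achieved: np.ndarray, reference: np.ndarray) -> float:
    """achieved, reference: (K, 2) arrays of (x, y) centroids, in pixels."""
    return float(np.sqrt(np.mean(np.sum((achieved - reference) ** 2, axis=1))))

print(centroid_rmse(np.array([[10.0, 12.0]]), np.array([[13.0, 16.0]])))  # 5.0
```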
### 4.2 Experimental Results
In this section, we present our findings related to the hypotheses of Section 4. Tables 1 and 2 report the spatial precision achieved by policies in terms of pixel-wise distance, while Fig. 3 shows
the results of human-perceived semantic and spatial alignment, based on a 7-point Likert scale rating.
**Spatial Precision (RMSE in px.)**

| Skill | RT-1 | RT-Sketch | RT-Goal-Image |
|--------------------|-----------------|-------------|---------------|
| Move Near | 5.43 ± 2.15 | 3.49 ± 1.38 | 3.89 ± 1.16 |
| Pick Drawer | 5.69 ± 2.90 | 4.77 ± 2.78 | 4.74 ± 2.01 |
| Drawer Open | 4.51 ± 1.55 | 3.34 ± 1.08 | 4.98 ± 1.16 |
| Drawer Close | **2.69 ± 0.93** | 3.02 ± 1.35 | 3.71 ± 1.67 |
| Knock | 7.39 ± 1.77 | 5.36 ± 2.74 | 5.63 ± 2.60 |
| Upright | 7.84 ± 2.37 | 5.08 ± 2.08 | 4.18 ± 1.54 |
| Visual Distractors | - | 4.78 ± 2.17 | 7.95 ± 2.86 |
| Language Ambiguity | 8.03 ± 2.52 | 4.45 ± 1.54 | - |

Table 1: **Spatial Precision and Specific Failure Occurrence**: Left: we report the level of spatial precision achieved across policies, measured as the RMSE between the centroids of manipulated objects in achieved vs. given reference goal states. Darker shading indicates higher precision (lower centroid distance); Fig. 7 contains visualizations illustrating the degree of visual alignment that different RMSE values correspond to. Right: we report the proportion of rollouts in which different policies exhibit excessive retrying behavior. Bolded numbers indicate the most precise and least failure-prone policy for each skill.
**H1:** We evaluate 6 skills from the RT-1 benchmark (Brohan et al., 2023b): move X near Y, place X upright, knock X over, open the X drawer, close the X drawer, and pick X from Y. For each skill, we record 15 different catalog scenarios, varying both objects (16 unique in total) and their placements.
In general, we find that RT-Sketch performs on a comparable level to RT-1 and RT-Goal-Image for both semantic (**Q1**) and spatial alignment (**Q2**), achieving ratings in the 'Agree' to 'Strongly Agree' range on average for nearly all skills (Fig. 3 (top)). A notable exception is upright, where RT-Sketch essentially fails to accomplish the goal semantically (**Q1**), albeit with some degree of spatial alignment (**Q2**). Both RT-Sketch and RT-Goal-Image tend to position cans or bottles appropriately and then terminate, without realizing the need for reorientation (Appendix Fig. 8). This behavior results in low centroid distance to the goal (darker shading in Table 1 (left)). RT-1, on the other hand, reorients cans and bottles successfully, but at the expense of higher centroid error (Appendix Fig. 8; lighter shading in Table 1 (left)). In our experiments, we also observe the occurrence of excessive retrying behavior, in which a policy attempts to align the current scene with a given goal via retrying actions such as grasping and placing. However, performing these low-level actions with a high degree of precision is challenging, and thus excessive retrying can actually disturb the scene, knocking objects off the table or undoing task progress. In Table 1 (right), we report the proportion of rollouts in which we observe this behavior across all policies. We note that RT-Goal-Image is most susceptible to this failure mode, as a result of over-attending to pixel-level details and retrying in excess to match a given goal exactly. Meanwhile, RT-Sketch and RT-1 are far less vulnerable, since both sketches and language provide a higher level of goal abstraction.
This tendency of RT-Goal-Image to over-attend to pixel-level details also manifests as a failure to terminate when attempting to rearrange objects to exactly match a given goal image (darker shading in Table 1 (right), denoting more frequent failures).
**H2:** We next assess RT-Sketch's ability to handle input sketches of varied levels of detail (free-hand, edge-aligned line sketch, colorized line sketch, and a Sobel edge-detected image as an upper bound). Free-hand sketches are drawn with a reference image next to a blank canvas, while line sketches are drawn on a semi-transparent canvas overlaid on the image (see Appendix Fig. 15). We find such a UI to be convenient and practical, as an agent's current observations are typically available and provide helpful guides for sketching lines and edges. Across 5 trials each of the move near and open drawer skills, we see in Table 2 that all types of sketches produce reasonable levels of spatial precision. As expected, Sobel edges incur the least error, but even free-hand sketches, which do not necessarily preserve perspective projection, and line sketches, which are far sparser in detail, are not far behind. This is also reflected in the corresponding Likert ratings (Fig. 3 (left, bottom)). Free-hand sketches already garner moderate ratings (around 4) of perceived spatial and semantic alignment, but line sketches result in a marked performance improvement to nearly 7, on par with the upper bound of providing an edge-detected goal image. Adding color does not improve performance further, but leads to interesting qualitative differences in behavior (see Appendix Fig. 9).
| Skill | Free-Hand | Line Sketch | Color Sketch | Sobel Edges |
|--------------|-----------|-------------|--------------|-------------|
| Move Near | 7.21 ± 2.76 | 3.49 ± 1.38 | 3.45 ± 1.03 | **3.36 ± 0.66** |
| Drawer Open | 3.75 ± 1.63 | 3.34 ± 1.08 | 2.48 ± 0.50 | **2.13 ± 0.25** |
Table 2: RT-Sketch Spatial Precision across Sketch Types (RMSE (centroid-distance) in px). We report the spatial precision achieved by RT-Sketch subject to different input modalities. As expected, for less detailed and more rough sketches, RT-Sketch achieves lower precision (lighter shading), and for richer representations RT-Sketch is more precise (bolded, darker shading). Still, there is a relatively small difference in performance between line, color, and edge-detected representations, indicating RT-Sketch’s ability to afford different levels of input specificity.
**H3:** Next, we compare the robustness of RT-Sketch and RT-Goal-Image to the presence of visual distractors. We re-use 15 move X near Y trials from the catalog, introducing 5–9 distractor objects into the initial visual scene after alignment. This testing procedure is adapted from the RT-1 generalization experiments of medium-high difficulty (Brohan et al., 2023b). In Table 1, we see that RT-Sketch exhibits far lower spatial errors on average, while producing higher semantic and spatial alignment scores than RT-Goal-Image (Fig. 3 (middle, bottom)). RT-Goal-Image is easily confused by the distribution shift introduced by distractor objects, and often cycles between picking up and putting down the wrong object. RT-Sketch, on the other hand, ignores task-irrelevant objects not captured in a sketch and completes the task in most cases (see Appendix Fig. 10).
**H4:** Finally, we evaluate whether sketches as a representation are favorable when language goals alone are ambiguous. We collect 15 scenarios encompassing 3 types of ambiguity in language instructions: instance ambiguity (T1) (e.g., move apple near orange when multiple orange instances are present), somewhat out-of-distribution (OOD) language (T2) (e.g., move left apple near orange), and highly OOD language (T3) (e.g., complete the rainbow) (see Appendix Fig. 11). While the latter two qualifications should intuitively help resolve ambiguities, they were not explicitly part of the original RT-1 training (Brohan et al., 2023b), and hence provide only limited utility. In Table 1, RT-Sketch achieves nearly half the error of RT-1, along with a 2.39-fold and 2.79-fold score increase for semantic and spatial alignment, respectively (Fig. 3 (right, bottom)). For T1 and T2 scenarios, RT-1 often tries to pick up an instance of any object mentioned in the task string, but fails to make progress beyond that (Appendix Fig. 12). This further suggests the utility of sketches for expressing new, unseen goals with minimal overhead, when language could otherwise be opaque or difficult to express with only in-distribution vocabulary (Appendix Fig. 13).
**Limitations and Failure Modes** Firstly, the image-to-sketch generation network used in this work is fine-tuned on a dataset of sketches provided by a single human annotator, and we have yet to stress-test the generalization capabilities of RT-Sketch at scale with sketches produced by different people. Secondly, we note that RT-Sketch shows some inherent biases towards performing certain skills it was trained on, and occasionally performs the wrong skill. For a detailed breakdown of RT-Sketch's limitations and failure modes, please see Appendix C.
5 CONCLUSION
We propose RT-Sketch, a goal-conditioned policy for manipulation that takes a hand-drawn sketch of the desired scene as input, and outputs actions. To enable such a policy, we first develop a scalable way to generate paired sketch-trajectory training data via an image-to-sketch translation network, and modify the existing RT-1 architecture to take visual information as an input. Empirically, we show that RT-Sketch not only performs on a comparable level to existing language or goal-image conditioning policies for a number of manipulation skills, but is amenable to different degrees of sketch fidelity, and more robust to visual distractors or ambiguities. Future work will focus on extending hand-drawn sketches to more structured representations, like schematics or diagrams for assembly tasks. While powerful, sketches are not without their own limitations – namely ambiguity due to omitted details or poor quality sketches. In the future, we are excited by avenues for multimodal goal specification that can leverage the benefits of language, sketches, and other modalities to jointly resolve ambiguity from any single modality alone.
|
Jh6m4e8Ief
|
Forcing discovered concepts to be disentangled may not lead to the explainability goal desired when designing this method. Several concepts, both in concept-annotated datasets and in day-to-day reasoning, are highly entangled and dependent (e.g., “having whiskers” is not fully independent of “having paws” yet they are both important concepts when describing different types of felines and canines).
|
**SurroCBM: Concept Bottleneck Surrogate Models for Label-free Post-hoc Explanation**
Anonymous authors
Paper under double-blind review
**Abstract**
Explainable AI seeks to bring light to the decision-making processes of black-box models. Traditional saliency-based methods, while highlighting influential data segments, often lack semantic understanding. Recent advancements, such as Concept Activation Vectors (CAVs) and Concept Bottleneck Models (CBMs), offer concept-based explanations but necessitate human-defined concepts, which are expensive to obtain. This paper introduces Concept Bottleneck Surrogate Models (SurroCBM), a novel framework that explains black-box models with automatically discovered concepts. SurroCBM identifies shared and unique concepts across various black-box models and employs an explainable surrogate model for post-hoc explanations. An effective training strategy using self-generated data is proposed to continuously enhance explanation quality. Through extensive experiments, we demonstrate the efficacy of SurroCBM in concept discovery and explanation, underscoring its potential for advancing the field of explainable AI.
1 INTRODUCTION
Explainable AI aims to explain the decision-making process of black-box models. A traditional approach to achieving this transparency is the use of saliency-based methods, which identify the segments of the input data that contribute most to a model's decision. Although saliency-based methods highlight important regions, they do not necessarily offer a semantic understanding. A recent stream of methods, concept-based explanation, aims to use a set of concepts with high-level, human-understandable meanings to explain model decisions. Kim et al. (2018) introduced Concept Activation Vectors (CAVs), vectors in the activation layer pointing in the direction of user-given concepts, and quantified their importance to the predictions to explain model decisions. Koh et al. (2020) designed a type of self-explainable neural network, Concept Bottleneck Models (CBMs), which first use the data to predict concept values and then predict the targets from the concepts, making the decision-making process more transparent. However, both of these types of concept-based explanation methods require human-defined concepts, which are costly to obtain. Some research focuses on post-hoc explanations with incomplete concepts. Yuksekgonul et al. (2022) proposed a method to transfer annotated concepts from other datasets or leverage multimodal models to obtain concept annotations for post-hoc explanation. Moayeri et al. (2023) proposed to extract concept activation vectors from text with the CLIP model and use them for model explanations. However, since these works adopt the idea of borrowing concepts from other data, they do not thoroughly solve the problem of the human labor required for concept annotation.
In this work, our goal is to explain the black-box model decisions with automatically discovered concepts, as shown in Fig. 1. Although some results on concept-based model explanation and concept discovery have been encouraging, this task is still challenging due to the following reasons:
---
**Figure 1**: An example of the problem in this paper. The black-box classifier’s decisions can be explained with a set of concepts, but they require human labor to annotate and are often hard to attain. We aim to explain the black-box model’s behavior with a set of concepts discovered by ourselves.
Challenge 1: Bridging the Gap Between Concepts for Data and Post-hoc Explanations. There is an inherent gap between the explainable concepts that underlie the dataset and the related concepts that explain the decision-making process of the black-box models. Existing work typically devotes effort to one of two goals: (1) discovering concepts to explain a dataset, which requires the concepts to be human-understandable, disentangled, and to fully cover each varying aspect of the dataset; or (2) using given concepts to explain the decision-making processes of a classifier, which requires the concepts to carry information relevant to the classification task. With these different goals, the required concepts have different meanings. The differing requirements for explaining data and classifiers make it challenging to discover concepts that meet the post-hoc explanation requirements.
Challenge 2: Aligning the Shared Related Concepts for Multiple Classifiers. While the majority of research focuses on identifying concepts that only explain the data [Kim & Mnih (2018)] or explain a single classifier [O'Shaughnessy et al. (2020); Tran et al. (2022)], real-world applications often require predicting several aspects of the same data, and different tasks rely on different groups of concepts. It is challenging to identify the shared and unique concepts that underpin the decision-making processes of multiple classifiers, especially during the concept discovery process.
Challenge 3: Ensuring High Fidelity of Surrogate Models. Surrogate model-based explanation methods require high fidelity to mimic the black-box models to ensure accuracy. However, with a limited training set, it is hard to fully mimic the output of the black-box models with surrogate models. Moreover, the input of the surrogate model, defined by the discovered concepts, may not cover all aspects of the original input data, making it more difficult to maintain fidelity.
To tackle these challenges, we introduce the Concept Bottleneck Surrogate Models (SurroCBM), a surrogate model-based method to jointly solve the unsupervised concept discovery and post-hoc explanation problem. Our model can discover the shared and unique concepts across different black-box models on the same data. Our concept-based explainer first maps the data to concepts then identifies the task-related concepts and predicts the black-box model output with a highly transparent module. The contributions of this paper are summarized as follows:
- A novel framework for discovering identifiable and task-related concepts. Our proposed method discovers identifiable concepts with relations to multiple classifiers by aligning the shared concepts and identifying the unique concepts of each prediction target.
- A concept-based post-hoc explainer for black-box model explanation. Our proposed surrogate model first maps the data to concepts, then identifies a group of related concepts and uses them to explain the model behaviors, providing high explainability.
- A training strategy to increase the fidelity with generated data. In order to continuously enhance the fidelity of the surrogate model, we propose a training strategy that generates user-customizable and diversified additional data to train the model.
2 RELATED WORK
2.1 CONCEPT DISCOVERY
The unsupervised concept discovery problem aims to identify concepts without given concept labels. Traditional works treat concepts as important directions in the activation space [Kim & Mnih (2018)]. Later concept discovery methods aim to identify meaningful image segmentations as concepts [Ghorbani et al. (2019); Wang et al. (2023); Yao et al. (2022); Kamakshi et al. (2021); Posada-Moreno et al. (2022)] and use them to explain model behaviors. Another type of method uses latent factors of generative models as concepts and conducts interventions to reveal their semantic meanings [O'Shaughnessy et al. (2020); Tran et al. (2022)]. Some more recent work focuses on identifying text descriptions to explain the data and model decisions [Yang et al. (2023); Oikarinen et al. (2023); Moayeri et al. (2023)].
2.2 CONCEPT-BASED EXPLANATION
Our method can be categorized as post-hoc explainability for deep learning models based on concepts. For the term concept, there are various definitions, such as a direction in the activation space, a prototypical activation vector, or a latent factor of a generative model. For example, a generative model such as a VAE [Kingma & Welling (2013)] can provide a concept-based explanation as it learns a latent representation that captures different aspects of the data. However, standard VAEs struggle to disentangle latent concepts due to their lack of explicit mechanisms for separating intertwined factors of variation, leading to overlapping or mixed representations in the latent space. Concept Activation Vectors (CAVs) [Kim et al. (2018)] provide an interpretation of a neural net's internal state in terms of human-friendly concepts by viewing the high-dimensional internal state of a neural net as an aid, not an obstacle. ConceptSHAP [Yeh et al. (2020)] infers a complete set of concepts that are additionally encouraged to be interpretable by retraining the classifier with a prototypical concept layer. O'Shaughnessy et al. (2020) generate causal post-hoc explanations of black-box classifiers based on a learned low-dimensional representation of the data.
3 Problem Formulation
In this work, we aim to explain black-box classifier decisions with automatically identified concepts. We aim to identify a set of concepts that can act as units of high-level data features and of high-level reasoning for classification on the data, and to discover how the learned concepts combine to explain black-box classifiers for the data.
More formally, given a dataset \( \mathcal{X} \) and a set of black-box classifiers \( f = \{f_1, f_2, ..., f_{k_y}\} \), each mapping from \( \mathcal{X} \) to a target \( y_i \in \mathcal{Y} \), our goal is to (1) identify a set of concepts with values \( z = \{z_1, z_2, ..., z_{k_c}\} \in \mathcal{Z} \subset \mathbb{R}^{k_c} \), where \( k_c \) denotes the number of concepts, that can serve as reasoning units of \( f \); and (2) find a mapping \( h : \mathcal{Z} \rightarrow \mathcal{Y} \) that maps the concept values to the black-box model outputs with a more explainable inner structure, so that it can mimic the black-box model behaviors and provide post-hoc explanations.
To achieve this goal, several challenges of the discovered concepts and the post-hoc explanation process are identified as follows:
- **Fidelity**: To make the post-hoc explanations reliable, predictions derived from concepts via the mapping \( h \) must closely mimic the behavior of the black-box models.
- **Identifiability**: To allow the identified concepts to explain new classifiers unseen in training, the identified concepts should comprehensively cover the aspects of data. This requires that the data can be recovered with its corresponding concept values.
- **Explainability**: To provide human-understandable explanations, the explanation process of predicting the targets from identified concepts should be transparent and explainable.
4 Concept Bottleneck Surrogate Models
**Overview.** We devise a novel method, Concept Bottleneck Surrogate Models (SurroCBM), to jointly discover high-level concepts with the desired properties and use the discovered concepts to explain black-box models. The proposed framework is illustrated in Fig. 2. We first present the model architecture and how it can be used for local and global explanations in Sec. 4.1, then derive the training objective in Sec. 4.2 and present a procedure to continuously increase the fidelity in Sec. 4.3.
4.1 Proposed Model
Specifically, we use a surrogate model \( f' \) to mimic the behaviors of the black-box model \( f \), where \( f \) and \( f' \) are both sets of classifiers. Inspired by traditional Concept Bottleneck Models, we divide \( f' \) into two stages: a concept extractor \( e_\phi \) that maps the data to concept values \( z \), and an explainable mapping \( h \) that maps the concept values \( z \) to the model output. To ensure identifiability, we add an additional decoder \( g_\theta \) that maps the concept values \( z \) back to the data \( x \) and minimize their difference. The surrogate model is shown in Fig. 2(a).
To further improve the explainability of the surrogate model \( f' \), we design an explainable internal structure for the mapping \( h \), which identifies the shared and unique concepts required for each classification target, as shown in Fig. 2(b). This mechanism is implemented with a trainable binary mask \( m \in \{0, 1\}^{k_c \times k_y} \), named the explanation mask. After the model is well trained, we expect the masked
Figure 2: The illustration of our proposed framework. We use the surrogate models $f'$ to mimic the behaviors of the black-box models $f$ for post-hoc explanation. In the surrogate model, the data $x$ is first mapped to its concept values $z$ with the concept extractor $e_\phi$, then the concept values $z$ are used to predict the model output with an explainable mapping $h$. The mapping $h$ achieves high explainability by identifying the concepts related to each target and using a soft decision tree to enhance transparency.
concept vector to retain only the concepts related to each classification target. The whole set of concept values $z$ is first masked with $m$ via an element-wise product, so the input of each estimator $f_{\gamma_k}$ contains only the concepts necessary for that target. We implement each $f_{\gamma_k}$ as a soft decision tree to enhance the explainability of the mapping from related concepts to model outputs.
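To make the architecture concrete, below is a minimal PyTorch sketch of the surrogate model. The module and attribute names (`SurroCBM`, `mask_logits`, etc.) are illustrative assumptions rather than the authors' implementation, and the encoder, decoder, and soft decision trees are assumed to be supplied as modules.

```python
import torch
import torch.nn as nn

class SurroCBM(nn.Module):
    """Surrogate model sketch: x -> concepts z -> masked z -> per-task soft trees."""
    def __init__(self, encoder, decoder, trees, k_c, k_y):
        super().__init__()
        self.encoder = encoder              # e_phi: maps x to (mu, logvar) of z
        self.decoder = decoder              # g_theta: maps z back to x
        self.trees = nn.ModuleList(trees)   # one soft decision tree f_gamma_k per task
        # Relaxed explanation mask m (k_c x k_y); binarized after training.
        self.mask_logits = nn.Parameter(torch.zeros(k_c, k_y))

    def forward(self, x):
        mu, logvar = self.encoder(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        x_rec = self.decoder(z)
        m = torch.sigmoid(self.mask_logits)                    # soft mask during training
        preds = [tree(z * m[:, k]) for k, tree in enumerate(self.trees)]
        return z, x_rec, preds, m
```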
After the model is trained, the semantic meanings of these concepts are derived through interventions within the generative process, serving as base units for explaining the decision-making process. Below, we discuss the procedure for both global and local explanations.
**Global explanation.** Our method can provide global explanations by identifying the related concepts for each prediction task. For a specific task $f_k$, where $k$ is the index of the task, the related concepts can be identified by
$$z_{R_k} = \{z_j\}_{m_{j,k}=1}$$

(1)
where $z_{R_k}$ represents the related concepts of the task with the index $k$, and $z_j$ denotes the concept variables (without specified values).
**Local explanation.** Our proposed method can also provide a local explanation of the decision-making process of each data sample’s classification. This is achieved by first identifying the related concepts using the global explanation method, and then extracting the values of the concepts with the concept extractor. By feeding the combinations of concepts into the decision tree, which maps concepts to predictions, we ensure transparency in the rules of every decision-making step and its associated predictions for specific data.
Formally, our proposed surrogate model can shed light on the decision-making process of a data sample $x$ under the black-box model $f_k$ with 1) the related concepts $z_{R_k} = \{z_j\}_{m_{j,k}=1}$, 2) the values of the related concepts $\hat{z}_{R_k} = \{e_\phi(x)_j\}_{m_{j,k}=1}$, and 3) a decision tree from related concepts to predictions: $y = f_{\gamma_k}(z_{R_k})$.
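Continuing the sketch above, a local explanation reduces to reading off the mask column for the task, the concept values of the sample, and the tree's prediction; the helper below is a hypothetical illustration under the same assumptions.

```python
import torch

def explain_locally(model, x, task_k, threshold=0.5):
    """Return related-concept indices, their values, and the tree prediction for x."""
    with torch.no_grad():
        mu, _ = model.encoder(x.unsqueeze(0))                      # concept values e_phi(x)
        m_k = torch.sigmoid(model.mask_logits[:, task_k]) > threshold
        related = m_k.nonzero(as_tuple=True)[0].tolist()           # indices of z_{R_k}
        values = mu[0, m_k].tolist()                               # values of related concepts
        pred = model.trees[task_k](mu * m_k.float())               # y = f_gamma_k(z_{R_k})
    return related, values, pred
```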
### 4.2 Training Objective
In order to optimize our proposed model, three criteria should be satisfied: (1) the decoded data from concepts should be accurately recovered to match the original data, (2) the predictions from the surrogate model should closely align with the predictions from the black-box model, and (3) the mapping from concepts to predicted labels should be explainable. We derive three corresponding loss terms for them, namely identifiability loss ($L_I$), fidelity loss ($L_F$) and explainability loss ($L_E$).
Then the objective can be written as
$$\min_{\phi, \theta, \gamma, m} L_I(x; \phi, \theta) + \lambda_1 L_F(x, f; \phi, m, \gamma) + \lambda_2 L_E(x; \phi, m, \gamma)$$
(2)
where $L_I$, $L_F$, and $L_E$ denote the identifiability, fidelity, and explainability losses, respectively, and $\phi, \theta, \gamma, m$ are the parameters of the corresponding model components.
4.2.1 FIDELITY AND IDENTIFIABILITY
Ensuring fidelity requires the output of the surrogate model should be close to the output of the black-box model. So we can naturally use
$$L_F(x, f; \phi, m, \gamma) = D\big(h_{m,\gamma}(e_\phi(x)), f(x)\big)$$

(3)

where $D$ is a measure of distance between the black-box model output $f(x)$ and the surrogate model output $h_{m,\gamma}(e_\phi(x))$.
To ensure identifiability, we propose to model the generative process by which the concept values $z$ can be mapped back to the original data $x$. We denote the generative process as $p_\theta(x|z)$. To infer $z$, we use $q_\phi(z|x)$ with learnable weights $\phi$ to estimate the posterior distribution $p(z|x)$. To ensure the data $x$ can be recovered given the concept values $z$, we maximize the variational lower bound on the log-likelihood $\log p_\theta(x)$ given the approximate posterior $q_\phi(z|x)$, which matches the objective of Variational Autoencoders. Thus the identifiability loss (the negative of this bound) can be written as:
$$L_I(x; \phi, \theta) = -\mathbb{E}_{q_\phi(z|x)} \log p_\theta(x|z) + D_{KL}(q_\phi(z|x)\,\|\,p(z))$$
(4)
where $p(z)$ is the prior distribution of $z$, and $D_{KL}$ stands for the Kullback–Leibler divergence.
4.2.2 EXPLAINABILITY
In order to improve the explainability of the surrogate model, we aim to enforce (1) the disentanglement of concepts, and (2) the explainability of the mapping from extracted concept values to the model output. The details are introduced as follows.
**Concept disentanglement.** To enhance the explainability of the discovered concepts, one important point is to ensure the disentanglement of each concept. To do this, we add a constraint to the distribution of each concept value $p(z_i)$. Following Chen et al. (2018), we use the Total Correlation (TC) term to enhance the disentanglement, which forces our model to find statistically independent concepts in the data distribution.
**Explainability of surrogate model.** To enhance the explainability of the post-hoc explanation process, we further decompose the mapping $h$ (concept values to black-box model outputs) into two steps: (1) identifying the necessary concepts for each task, and (2) predicting the black-box model output with these necessary concepts, as shown in Fig. 2(b). Since we explain each classifier by its necessary concepts and their values, a better explanation uses a smaller number of concepts per task; we achieve this by enforcing sparsity of the explanation mask. With the necessary concepts identified for each task, we implement the mapping from these concepts to predictions as a self-explainable model, a soft decision tree, which naturally yields rules for predicting labels from concept values. We add regularization to the soft decision trees to penalize their complexity for better explainability.
More formally, we decompose $h$ as $h(z) = f_\gamma(m \cdot z)$ for all $z$, where $m$ is the explanation mask and $f_\gamma$ is implemented with soft decision trees parameterized by $\gamma$. We enforce the sparsity of $m$ by penalizing $\|m\|_2^2$, the sum of the squared elements of $m$. We enforce the simplicity of the trees by penalizing $C(\gamma)$, where $C(\cdot)$ is a measure of tree complexity. The explainability loss is then a weighted sum of the penalty terms for total correlation, sparsity of $m$, and complexity of the decision trees, namely

$$L_E(x; \phi, m, \gamma) = \underbrace{D_{KL}\Big(q(z)\,\Big\|\, \prod_j q(z_j)\Big)}_{\text{Disentanglement of } z} + \lambda_3 \underbrace{\|m\|_2^2}_{\text{Sparsity of } m} + \lambda_4 \underbrace{C(\gamma)}_{\text{Simplicity of } f_\gamma}$$

(5)

where \( q(z) \) is the joint approximate posterior, which represents the joint distribution of \( z \) over the dataset, and \( q(z_j) \) is the marginal approximate posterior over the \( j \)-th concept. \( D_{KL} \) denotes the Kullback–Leibler divergence, and \( \lambda_3 \) and \( \lambda_4 \) are the weights of the corresponding terms.
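A minimal sketch of Eq. 5 follows. The total-correlation estimate and the tree-complexity penalty are assumed to be computed elsewhere (e.g., a minibatch TC estimator in the spirit of Chen et al. (2018), and whatever complexity measure the soft trees expose); the weights are illustrative defaults.

```python
import torch

def explainability_loss(mask_logits, tc_estimate, tree_complexity, lam3=1e-2, lam4=1e-3):
    """Eq. 5 sketch: TC term + sparsity of the relaxed mask + tree complexity."""
    m = torch.sigmoid(mask_logits)
    sparsity = (m ** 2).sum()     # ||m||_2^2 on the relaxed mask
    return tc_estimate + lam3 * sparsity + lam4 * tree_complexity
```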
**Overall Objective:** All the model components are trained jointly to ensure the discovered concepts are learned with the guidance of both the data and the black-box models. The overall loss function is written as
\[
L(x, f; \phi, \theta, m, \gamma) = L_I(x; \phi, \theta) + \lambda_1 L_F(x, f; \phi, m, \gamma) + \lambda_2 L_E(x; \phi, m, \gamma)
\]

(6)
where \( \lambda_1 \) and \( \lambda_2 \) are the weight hyper-parameters.
### 4.3 Continuously Improving the Fidelity
The explanation becomes more trustworthy as the output of the surrogate model gets closer to the output of the black-box model on the same input. However, it is hard to fully mimic the behavior of the black-box models due to limited access to the black-box model's architecture and its training data. Fortunately, our framework allows us to continuously increase fidelity by generating user-customizable and diverse data for training. To achieve this, we devise the following training strategy.
To increase the fidelity in mimicking a given classifier, we train the model with additional data generated by the model itself. The concepts used to generate new data can be divided into two groups: the concepts related to this classifier, which can be specified according to the user's preference for customizability, and the concepts unrelated to the classifier, which can be perturbed by sampling from the prior distribution for data diversity. Specifically, for the classifier \( f_k \) on the \( k \)-th task, we divide the concepts \( z \) into two groups: the set of related concepts \( z^R = \{z_j\}_{m_{j,k}=1} \) and the set of unrelated concepts \( z^U = \{z_j\}_{m_{j,k}=0} \).
We use the notation \( z^R \oplus z^U \) to denote the operation of combining \( z^R \) and \( z^U \) to a whole set of concepts \( z \) while keeping the correct indices. The iterative training process starts by sampling the \( z^R \) and perturbing \( z^U \) for each \( z^R \). Then the data sample \( x \) can be generated with the decoder by \( x = g_\theta(z^R \oplus z^U) \). With the additional data, the overall objective can be continuously optimized by minimizing \( L(g_\theta(z^R \oplus z^U)) \) in Eq. 6.
So the overall objective in the additional training phase can be written as:
\[
\min_{\phi, \theta, m, \gamma} \; \mathbb{E}_{p(z^R)} \mathbb{E}_{p(z^U)} \, L\big(g_\theta(z^R \oplus z^U)\big)
\]

(7)
where \( p(z^R) \) and \( p(z^U) \) are the distributions from which the preferred concept values are sampled when generating additional data, and \( L \) is the overall objective in Eq. 6.
### 5 Compositional Generalization
Compositional generalization refers to the ability to recognize or generate novel combinations of observed elementary concepts. One way to achieve compositional generalization is to freeze the trained model weights while training a small number of simple model weights to generalize to new combinations [Xu et al. (2022)]. Our proposed framework, which discovers a set of concepts and identifies the related concepts for each task, can naturally be generalized to explain new black-box classifiers that are unseen in the training phase, provided their related concepts have been discovered: all trained model weights are frozen, and only the additional mask and estimators are trained.
---
**Algorithm 1 Proposed Training Strategy**
Require: Black-box classifier \( f \)
Require: Trained model parameters \( \theta, \phi, m, \gamma \)
Require: Number of samples of related concepts \( n^R \)
Require: Number of samples of unrelated concepts \( n^U \)
1: for \( i = 0 \) to \( n^R \) do
2: Sample \( z^R = \{z_j\}_{m_{j,k}=1} \)
3: for \( j = 0 \) to \( n^U \) do
4: Sample \( z^U = \{z_j\}_{m_{j,k}=0} \)
5: \( x \leftarrow g_\theta(z^R \oplus z^U) \).
6: Compute \( L \) with Eq. 6
7: Update \( \theta, \phi, m, \gamma \) with \( L \)
8: end for
9: end for
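A minimal sketch of this procedure, under the same illustrative model interface as in Sec. 4.1, might look as follows; `overall_loss` is assumed to implement the objective in Eq. 6.

```python
import torch

def augment_and_train(model, optimizer, overall_loss, task_k, n_R=32, n_U=8):
    """Algorithm 1 sketch: fix related concepts, perturb unrelated ones, decode, train."""
    m_k = (torch.sigmoid(model.mask_logits[:, task_k]) > 0.5).float()
    k_c = m_k.numel()
    for _ in range(n_R):
        z_R = torch.randn(k_c) * m_k               # sample related concepts z^R
        for _ in range(n_U):
            z_U = torch.randn(k_c) * (1 - m_k)     # perturb unrelated concepts z^U
            x_gen = model.decoder((z_R + z_U).unsqueeze(0)).detach()  # x = g_theta(z^R ⊕ z^U)
            loss = overall_loss(model, x_gen)      # Eq. 6 on the generated sample
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```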
Specifically, suppose the model has been well-trained with $k_t$ training tasks $f_{\text{train}} = \{f_1, ..., f_{k_t}\}$, and $k_c$ concepts have been found. We want to use the trained model to explain $p$ unseen test tasks $f_{\text{test}} = \{f_{k_t+1}, ..., f_{k_t+p}\}$. The optimization objective is the same as Eq. (7); the difference is that the existing trained weights are fixed. The only new parameters to train are the added columns of the explanation mask $m$, namely $m_{:,\,k_t+1:k_t+p}$, and the newly added estimators $\{f_{\gamma_{k_t+1}}, ..., f_{\gamma_{k_t+p}}\}$ with trainable parameters $\gamma_{\text{test}} = \{\gamma_{k_t+1}, ..., \gamma_{k_t+p}\}$. The objective can be written formally as:
$$
\begin{aligned}
\min_{m_{:,\,k_t+1:k_t+p},\; \gamma_{\text{test}}} \quad & L(x, f_{\text{test}}) \\
\text{s.t.} \quad & \phi, \theta, m_{:,\,1:k_t}, \gamma_{\text{train}} \text{ fixed}
\end{aligned}
$$

(8)
where $x$ can come either from the training set or be generated with the model as discussed in Sec. 4.3, and $L$ is the overall objective defined in Eq. (6).
6 EXPERIMENTS
In this section, we comprehensively evaluate our proposed method on both concept discovery and post-hoc explanation with qualitative and quantitative results.
6.1 Experiment Settings
Dataset. We evaluate our model on the MNIST [Deng (2012)] and TripleMNIST datasets. For MNIST, following [Tran et al. (2022)], we select the digits '1, 4, 7, 9'. In the TripleMNIST dataset [Sun (2019)], each image is synthesized by combining three images from the MNIST dataset, for a total of 1000 classes (numbers 000-999). We use 9 classes among them, in which each digit is either 0, 1, or 5. We use D1, D2, D3 to denote the first (left-most), second (middle), and third (right-most) digits of the 3-digit number. We develop four black-box classification tasks: $f_0$ predicts D1, $f_1$ predicts the parity of the 3-digit number, $f_2$ predicts whether D2 and D3 are the same, and $f_3$ predicts the value of D1+D2+D3. The black-box tasks are given in Fig. 4(a). We set $k_c = 6$ for this experiment.
6.2 Qualitative Evaluation
To qualitatively validate the effectiveness of our proposed method, we visualize the discovered concepts in Sec. 6.2.1 and walk through an example of post-hoc explanation on the TripleMNIST dataset in Sec. 6.2.2.
6.2.1 Discovered Concepts
To show the semantic meaning of each discovered concept, we conduct interventions on the value of each concept and visualize the generated data. The visualization is shown in Fig. 4.
We visualize the generated data samples under interventions on each concept value in Fig. 4(b). The inherent semantic meaning of each concept can be obtained by observing the variations of the generated data samples. For instance, in the second row (marked with $z_1$) of Fig. 4(b), the observed variation is that the third digit (D3) varies from 0 to 5, then to 1, so the observed semantic meaning of concept $z_1$ is the value of D3. We list the observed semantic meanings of each discovered concept in Fig. 4(c). The learned explanation mask $m$ is shown in Fig. 4(d), representing the related concepts for each task. For instance, for task $f_2$, the mask entries for $z_1$ and $z_5$ are optimized to 1, indicating that concepts $z_1$ (D3) and $z_5$ (D2) are related to this task (predicting whether D2=D3).
Results show that, guided by the four classification tasks, our model can discover a set of concepts that have human-understandable semantic meanings while representing the foundational reasoning behind the decisions of each classification task. Our model also successfully identifies the related and unrelated concepts of each task.
Figure 4: Experiments visualized on the TripleMNIST dataset. (a) The tasks used for guiding the concept discovery. (b) Data samples generated through interventions on individual concepts. Each row alters only the specific concept values indicated, while other concepts remain constant. (c) Semantic interpretations of each discovered concept from variations during concept interventions. (d) Learned explanation mask. For instance, for task $f_2$, $z_1$ and $z_5$ are optimized to 1, indicating the concept $z_1$ (D3) and $z_5$ (D2) are related to this task (predicting whether D2=D3).
Figure 5: Local explanation for the decision-making process of three data samples on task $f_2$ on the TripleMNIST dataset. (a) The learned decision tree $f_{\gamma_2}$, which maps $z_1$ and $z_5$ to $y_2$. (b) We put the generated images and the decision boundaries of the decision tree in the same coordinates. The x-axis represents $z_1$ and the y-axis represents $z_5$, both ranging from -3 to 3. The images are generated with the corresponding $z_1$ and $z_5$ according to their positions in the coordinates. Each line $\sigma(g_i) = 0.5$ denotes the decision boundary of node $i$, where $\sigma$ denotes the sigmoid function.
6.2.2 POST-HOC EXPLANATION
In this subsection, we qualitatively evaluate the post-hoc explanation, taking the explanations for task $f_2$ as an example. The learned concepts and their semantic meanings are the same as in Sec. 6.2.1. The global explanation is shown in Fig. 4(d): the learned explanation mask successfully identifies that the related concepts of $f_2$ are $z_1$ and $z_5$.
Local explanations of three data samples are shown in Fig. 5(a). Generally, our proposed method successfully mimics the black-box model's behavior by first extracting a small number of related concepts and then providing the prediction rules with a simple, transparent model: a decision tree. In the local explanation process, our method first extracts two concepts, $z_1$ and $z_5$, which are low-dimensional yet sufficient to explain the decision, compared to the high-dimensional original data ($82 \times 82$). The decision is then made with a 4-layer decision tree with 8 nodes, and the decision rule of each node is known (for node $i$, the rule is whether $\sigma(g_i) < 0.5$, where $g_i$ is a linear function), yielding a transparent and explainable decision-making process.
In Fig. 5(b), we put the generated images and the decision boundaries of the decision tree in the same coordinates to evaluate the validity of the decision tree's rules. The results show that the learned linear rules successfully recognize all three zones where $f_2$ is true, corresponding to the three cases in which D2 is the same as D3, i.e., D2=D3=0, D2=D3=1, D2=D3=5. Interestingly, two positive
Table 1: Quantitative results of post-hoc explanation on Triple-MNIST dataset.
| Type | Task | #Concept | Depth | #Node | Acc | Acc-S |
|----------|------|----------|-------|-------|-------|-------|
| Test | $f_0$ | 1 | 2 | 2 | 93.61 | 95.96 |
| | $f_1$ | 1 | 2 | 2 | 93.93 | 95.32 |
| | $f_2$ | 2 | 4 | 8 | 88.23 | 94.47 |
| | $f_3$ | 3 | 5 | 27 | 51.61 | 73.20 |
| Generalize| $f_4$ | 1 | 2 | 2 | - | 92.01 |
| | $f_5$ | 3 | 4 | 9 | - | 67.02 |
(yellow) areas in the image’s center represent a unique situation where D2=D3=5. This highlights a potential limitation of our approach: the soft decision tree might produce less-than-ideal rules due to inconsistent initialization during its training. We leave this limitation to future work on soft decision trees.
### 6.3 Quantitative Evaluation
#### 6.3.1 Fidelity and Explainability
The evaluation of post-hoc explanations is in Table 1. It reports the number of recognized concepts (#Concept), the decision tree depth (Depth), the number of nodes (#Node), and the accuracy of the surrogate in mimicking the black-box model (Acc); the Acc-S column is discussed in Sec. 6.3.2. Our method mimics the black-box models with high fidelity using a transparent model: it translates the high-dimensional input into a small number of meaningful concepts and then predicts the output via decision trees with small depth and few nodes. This balance between fidelity and simplicity highlights the effectiveness of our method. The accuracy for $f_3$ is lower due to its 10-class classification nature.
#### 6.3.2 Improvements from Iterative Training
We assess our training strategy's effectiveness in terms of fidelity improvement, data efficiency, and generalizability. The "Acc-S" column of Table 1 displays the accuracy for each task when training with the proposed strategy. Our method notably boosts accuracy, especially for $f_3$, which previously had lower performance, demonstrating its efficacy. In Fig. 6, we compare data efficiency on $f_0$ to $f_3$. The results indicate a greater fidelity improvement with our strategy when training data is limited.
#### 6.3.3 Information Flow
To validate that the guidance of the tasks leads to more meaningful discovered concepts, we evaluate the mutual information between each concept and the tasks, calculated by
$$I(z; y) = \mathbb{E}_{z,y} \left[ \log \frac{p(z,y)}{p(z)p(y)} \right].$$
We compare the results of our model with its backbone, β-VAE (TC), which encourages the disentanglement of each latent factor. The results show that the guidance of the classification tasks helps find a group of concepts with higher mutual information with respect to the tasks.
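A simple histogram-based estimator of this quantity (a sketch; the binning scheme is an assumption) is:

```python
import numpy as np

def mutual_information(z_vals, y_vals, n_bins=20):
    """Histogram estimate of I(z; y) for one concept z and one task label y."""
    edges = np.histogram_bin_edges(z_vals, bins=n_bins)
    z_binned = np.digitize(z_vals, edges[1:-1])        # bin index in [0, n_bins)
    classes = np.unique(y_vals)
    joint = np.zeros((n_bins, len(classes)))
    for b, y in zip(z_binned, y_vals):
        joint[b, np.searchsorted(classes, y)] += 1
    p_zy = joint / joint.sum()
    p_z = p_zy.sum(axis=1, keepdims=True)
    p_y = p_zy.sum(axis=0, keepdims=True)
    nz = p_zy > 0
    return float((p_zy[nz] * np.log(p_zy[nz] / (p_z * p_y)[nz])).sum())
```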
### 7 Conclusion
In this work, we introduced the Concept Bottleneck Surrogate Models, a novel type of concept-based explainer that can explain black-box classifiers with a set of self-discovered concepts. We propose a training strategy to optimize the model with generated data. The proposed model has the power of compositional generalization. We conducted comprehensive experiments to evaluate the effectiveness of our proposed method.
### References
Ricky TQ Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. *Advances in neural information processing systems*, 31, 2018.
Li Deng. The mnist database of handwritten digit images for machine learning research. *IEEE Signal Processing Magazine*, 29(6):141–142, 2012.
Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. *Advances in neural information processing systems*, 32, 2019.
Vidhya Kamakshi, Uday Gupta, and Narayanan C Krishnan. Pace: Posthoc architecture-agnostic concept extractor for explaining cnns. In *2021 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–8. IEEE, 2021.
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In *International conference on machine learning*, pp. 2668–2677. PMLR, 2018.
Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In *International Conference on Machine Learning*, pp. 2649–2658. PMLR, 2018.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept bottleneck models. In *International conference on machine learning*, pp. 5338–5348. PMLR, 2020.
Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, and Soheil Feizi. Text-to-concept (and back) via cross-model alignment. *arXiv preprint arXiv:2305.06386*, 2023.
Tuomas Oikarinen, Subhro Das, Lam M Nguyen, and Tsui-Wei Weng. Label-free concept bottleneck models. *arXiv preprint arXiv:2304.06129*, 2023.
Matthew O’Shaughnessy, Gregory Canal, Marissa Connor, Christopher Rozell, and Mark Davenport. Generative causal explanations of black-box classifiers. *Advances in neural information processing systems*, 33:5453–5467, 2020.
Andres Felipe Posada-Moreno, Nikita Surya, and Sebastian Trimpe. Eclad: Extracting concepts with local aggregated descriptors. *arXiv preprint arXiv:2206.04531*, 2022.
Shao-Hua Sun. Multi-digit mnist for few-shot learning, 2019. URL https://github.com/shaohua0116/MultiDigitMNIST.
Thien Q Tran, Kazuto Fukuchi, Youhei Akimoto, and Jun Sakuma. Unsupervised causal binary concepts discovery with vae for black-box model explanation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 9614–9622, 2022.
Bowen Wang, Liangzhi Li, Yuta Nakashima, and Hajime Nagahara. Learning bottleneck concepts in image classification. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10962–10971, 2023.
Zhenlin Xu, Marc Niethammer, and Colin A Raffel. Compositional generalization in unsupervised compositional representation learning: A study on disentanglement and emergent language. *Advances in Neural Information Processing Systems*, 35:25074–25087, 2022.
Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, and Mark Yatskar. Language in a bottle: Language model guided concept bottlenecks for interpretable image classification. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 19187–19197, 2023.
Liuyi Yao, Yaliang Li, Sheng Li, Jinduo Liu, Mengdi Huai, Aidong Zhang, and Jing Gao. Concept-level model interpretation from the causal aspect. *IEEE Transactions on Knowledge and Data Engineering*, 2022.
Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks. *Advances in neural information processing systems*, 33:20554–20565, 2020.
Mert Yuksekgonul, Maggie Wang, and James Zou. Post-hoc concept bottleneck models. *arXiv preprint arXiv:2205.15480*, 2022.
|
XheqLWvswO
|
First, the scope of this work is on substituting the logistic loss, or more precisely, approximating the log function with polynomials. However, in classification, the logistic loss is only one of the many surrogate losses for the more fundamental 0-1 loss. The authors did not discuss any other surrogate loss functions and how they relate to the 0-1 loss.
|
ACCELERATED NEURAL NETWORK TRAINING WITH ROOTED LOGISTIC OBJECTIVES
Anonymous authors
Paper under double-blind review
ABSTRACT
Many neural networks deployed in real-world scenarios are trained using cross entropy based loss functions. From the optimization perspective, it is known that the behavior of first order methods such as gradient descent crucially depends on the separability of datasets. In fact, even in the simplest case of binary classification, the rate of convergence depends on two factors: 1. the condition number of the data matrix, and 2. the separability of the dataset. With no further pre-processing techniques such as over-parametrization, data augmentation etc., separability is an intrinsic quantity of the data distribution under consideration. We focus on the landscape design of the logistic function and derive a novel sequence of strictly convex functions that are at least as strictly convex as the logistic loss. The minimizers of these functions coincide with those of the minimum norm solution wherever possible. The strict convexity of the derived functions can be exploited to finetune state-of-the-art models and applications. In our empirical experimental analysis, we apply our proposed rooted logistic objective to multiple deep models, e.g., fully-connected neural networks and transformers, on various classification benchmarks. Our results illustrate that training with the rooted loss function converges faster and yields performance improvements. Furthermore, we illustrate applications of our novel rooted loss function in generative modeling based downstream applications, such as finetuning the StyleGAN model with the rooted loss. The code implementing our losses and models can be found here for open source software development purposes: https://anonymous.4open.science/r/rooted_loss
1 INTRODUCTION
Neural networks have become a necessity to enable various real-world applications, especially in large scale settings. An appropriate parameterized model is chosen with the information of the domains or use-cases pertaining to the applications (Devlin et al., 2018; Radford et al., 2021; Caron et al., 2021). Then, the parameters are iteratively modified to optimize a mathematically valid loss function applied on data points which represent the application under consideration (Goodfellow et al., 2016; Ghosh et al., 2017; Lin et al., 2017; Kavalerov et al., 2021; Hui et al., 2023). Once the iterative procedure terminates (or is terminated with stopping conditions), the model parameters can be used to make predictions on unseen points. Thus, it is crucial to understand how different algorithms behave during optimization phase that correspond to the training procedure. In large scale setting, first order methods are preferred since they require the least computing resources, and are easier to implement with Automatic Differentiation packages (Paszke et al., 2017; Loshchilov & Hutter, 2018; Reddi et al., 2019). Naturally, the success and efficiency of first order methods depend on the landscape properties of the loss function when are applied on samples in datasets (Deng et al., 2009; Karras et al., 2019).
How does the dataset affect the optimization landscape? Consider the task of classification in which a dataset $\mathcal{D}$ is represented as a set of pairs $(x, y)$, where $x$ denotes features and $y$ denotes corresponding classes or labels (Pranckevicius & Marcinkevicius, 2017; Singh et al., 2017; Zhang & Liu, 2023). In binary classification, the task is to categorize $x$ into one of two classes using the model parameters after optimization. Here, it is known that the rate of convergence of (stochastic) gradient descent, the de facto first order method, to the optimal solution is primarily influenced by two factors: (1) the condition number of the loss function (Overton, 2001): this number gives insight into the structure and properties of the dataset. A lower condition number implies better gradient directions,
which makes optimization faster for first order methods (Hazimeh et al., 2022; Boob et al., 2023). When using a one-layer neural network, this condition number is determined by the so-called data matrix in which $(x, y)$ pairs are appropriately stacked as rows/columns; (2) recent works have shown that separability of $\mathcal{D}$ is an important factor to consider for modeling and training purposes (Shamir, 2021; Tarzanagh et al., 2023). Intuitively, separability is a measure of how easily a model can distinguish between two $x$'s from different classes $y$ in $\mathcal{D}$. A highly separable dataset is easier to classify, and the optimization process is expected to converge faster. Indeed, separability is inherent to the dataset, so without employing extra pre-processing steps like normalization (Ioffe & Szegedy, 2015; Wu et al., 2021), augmentation (Shorten & Khoshgoftaar, 2019; Yarats et al., 2020), or over-parametrization (more than one layer) (Du et al., 2018b; Buhai et al., 2020), the level of separability is determined by the distribution from which $\mathcal{D}$ was sampled.
Furthermore, the landscape of objective functions used for generating or sampling points similar to $x$ has also been under investigation (Qi, 2020). A standard assumption in designing models or architectures for sampling is that $x$ is a smooth function, usually an image (or audio) considered as a two (or one) dimensional smooth function. With this assumption, various architectures have been proposed with (discrete) convolution or smoothing operators as the building blocks, such as DCGAN (Radford et al., 2015), BigGAN (Brock et al., 2018), and StyleGAN (Karras et al., 2020b). These smoothing-based architectures, called generators, gradually transform a random signal into $\tilde{x}$, a "fake" or synthetic sample. Then, a classification architecture called the discriminator is used to assign the probability of $\tilde{x}$ being a real sample from $\mathcal{D}$. While separability might not be the deciding factor in training the overall models, the conditioning of the loss functions used to train the discriminator is crucial in determining the success of first order algorithms, and thereby the sampling process to obtain $\tilde{x} \sim x \in \mathcal{D}$ (Arora et al., 2017).
**Our Contributions.** We provide a plug-in replacement for log based loss functions for supervised classification and unsupervised generation tasks with provable benefits. **First**, we show that there is a natural approximation to \(-\log\), bounded from below, that has nice theoretical properties such as convexity and smoothness. Our novel result shows that the proposed rooted loss with one additional parameter \(k\) is at least as well conditioned as the \(-\log\) based convex loss function, yielding provable acceleration. **Second**, we apply our loss to various dataset and architecture combinations and show that it can lead to significant empirical benefits for classification. In image classification, we show that the training time with our proposed rooted loss is much less than with cross-entropy or focal loss. It also provides 1.44% to 2.32% gains over the cross-entropy loss and 5.78% to 6.66% gains over the focal loss in terms of test accuracy. **Third**, we apply the rooted loss to generative models as a downstream application, showing lower FID and better generated images with limited training data.
## 2 Preliminaries
Logistic regression is the task of finding a vector \(w \in \mathbb{R}^d\) which approximately minimizes the empirical logistic loss (Ji & Telgarsky, 2018). While logistic regression can be seen as a single-layer neural network, deep neural networks contain multiple such layers stacked together. Each layer captures increasingly complex features from the input data. This hierarchical structure allows deep networks to model complex relationships.
Consider datapoints \((x_i, y_i), i = 1, \ldots, n\), where \(x_i \in \mathbb{R}^d\) denotes the features in \(d\) dimensions and \(y_i \in \{+1, -1\}\) is the binary label. By parametrizing the prediction function for a new sample \(x\) as
\[
f(x) := P(y = \pm 1 \,|\, x) = \sigma(\pm w^\top x)
\]

(1)
where \(\sigma\) is the sigmoid function, the maximum likelihood estimator of \(w \in \mathbb{R}^d\) can be obtained by minimizing the negative log-likelihood function of \(w\) (Sur & Candès, 2019), written as,
\[
L_{LR}(w) := \frac{1}{n} \sum_{i=1}^{n} \log \left( 1 + \exp \left( -y_i w^\top x_i \right) \right).
\]

(2)
The cross-entropy (CE) loss is one of the most commonly used loss functions for training deep neural networks, most notably in multi-class classification problems. Given datapoints \((x_i, y_{ik})\), where \( k \in \{1, \ldots, c\} \), \( c \) is the number of classes, and \( y_{ik} \in \{0, 1\} \) is a binary indicator of whether class \( k \) is the correct classification for example \( i \), the multi-class cross-entropy loss, extending equation 2, is written as,
\[
L_{CE}(w) := -\frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{c} y_{ik} \log \left( \frac{\exp(w_k^\top x_i)}{\sum_{j=1}^{c} \exp(w_j^\top x_i)} \right),
\]

(3)
where \( w_j^\top x_i \) represents the prediction score for the \( i \)-th example and the \( j \)-th class.
### 3 Rooted Logistic Objective Function
#### 3.1 Motivation: From Logistic Objective to Rooted Logistic Objective
Logistic loss can serve as a smooth approximation to the element-wise maximum function, and smoothness is desirable in model design since gradient-based optimizers are commonly used. In this work, we consider the following approximation of the natural logarithm function, derived from the definition of the derivative:
1. for a fixed \( u \in \mathbb{R}_+ \), the derivative of \( u^v \) is given by \( u^v \log(u) \) by Chain rule,
2. now observe that by evaluating the derivative at \( v = 0 \), we obtain \( \log(u) \), and
3. finally, plugging the above two in the definition of derivative we have that \( \log(u) = \lim_{v \downarrow 0} \frac{u^v - 1}{v} = \lim_{k \uparrow \infty} k \left( u^{1/k} - 1 \right) \).
Thus, for training purposes, we propose using a fixed, sufficiently large \( k \) with the following approximation to the log function: \( \log(u) \approx k u^{\frac{1}{k}} - k \).
Here, the approximation seeks to express \( \log(u) \) in terms of a function raised to the power of \( \frac{1}{k} \). The constant \( k \) provides a degree of freedom that can be adjusted to fine-tune the approximation.
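A quick numerical sanity check of this approximation, showing $k(u^{1/k} - 1)$ approaching $\log(u)$ as $k$ grows:

```python
import math

u = 5.0
for k in (1, 2, 5, 10, 100):
    approx = k * (u ** (1.0 / k) - 1.0)   # k * (u^(1/k) - 1)
    print(f"k={k:>3}: approx = {approx:.4f}   (log u = {math.log(u):.4f})")
```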
Building on this approximation, a novel loss function, termed the Rooted Logistic Objective function (RLO), is introduced. The key idea is to modify the traditional logistic loss by incorporating the above approximation. The loss function for this RLO can be defined as:
\[
l_i^k(w) := \left( 1 + \exp \left( -y_i w^\top x_i \right) \right)^{\frac{1}{k}}, \qquad L_{RLO}^k(w) := \frac{k}{n} \sum_{i=1}^{n} l_i^k(w).
\]

(4)
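A minimal PyTorch sketch of the binary RLO in equation 4 is given below; the function name is illustrative. It uses the identity $(1 + \exp(-m))^{1/k} = \exp(\mathrm{softplus}(-m)/k)$, which avoids overflow for large negative margins.

```python
import torch
import torch.nn.functional as F

def rooted_logistic_loss(logits, y, k=5.0):
    """Binary RLO: labels y in {-1, +1}, logits = w^T x (one scalar per sample)."""
    margins = y * logits
    per_sample = torch.exp(F.softplus(-margins) / k)   # l_i^k(w), computed stably
    return k * per_sample.mean()                       # L_RLO^k(w)
```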
#### Intuition to prefer Rooted Loss over Log based losses.
Logistic loss plays a pivotal role in penalizing prediction errors, particularly for the true class $y_i$ in classification tasks. One of its notable characteristics is the high loss and large gradient when the predicted probability for the true class approaches zero. This sharp gradient is beneficial in gradient-based optimization methods such as gradient descent, because it promotes significant and effective update steps, driving convergence towards optimal solutions. However, the "signal" coming from the gradient contributions of incorrect classes is weaker, so optimization algorithms might struggle, or take longer, to drive the predicted probabilities of these incorrect classes towards zero.
In simpler terms, while the logistic loss is adept at penalizing mistakes for the true class, it might be gentler or slower in correcting overconfident incorrect predictions, even though deep neural networks (DNNs) trained with the softmax cross-entropy (SCE) loss have achieved state-of-the-art performance on various tasks (Goodfellow et al., 2016).
### 3.2 Convexity of RLO
The standard logistic regression function in equation 2 has favorable convexity properties for optimization. In particular, it is strictly convex with respect to the parameters \( w \); for more details, see Freund et al. (2018). By direct calculation of the gradient and Hessian using the chain and product rules, we obtain the gradient \( \nabla_w l_i^k \) for a single point \((x_i, y_i)\),
\[
\begin{aligned}
\nabla_w l_i^k(w) &= \frac{1}{k} \left( 1 + \exp(-y_i w^\top x_i) \right)^{\frac{1}{k}-1} \exp(-y_i w^\top x_i) \cdot (-y_i x_i) \\
&= \frac{1}{k} \, l_i^k(w) \cdot \frac{\exp(-y_i w^\top x_i)}{1 + \exp(-y_i w^\top x_i)} \cdot (-y_i x_i) = \frac{1}{k} \, l_i^k(w) \cdot \sigma(-y_i w^\top x_i) \cdot (-y_i x_i) \\
&= -g(w, x_i) \cdot y_i x_i,
\end{aligned}
\]
where \( g(w, x_i) := \frac{1}{k} \, \sigma(-y_i w^\top x_i) \cdot l^k_i(w) \geq 0 \). Similarly, we obtain the Hessian \( \nabla^2 l^k_i(w) \) for a single point \((x_i, y_i)\) as follows,
\[
\nabla^2 l^k_i(w) = h(w, x_i) \cdot x_i x_i^\top,
\]
(8)
where \( h(w, x_i) := \frac{1}{k} \, l^k_i(w) \cdot \sigma(-y_i w^\top x_i) \cdot \left[1 - \sigma(-y_i w^\top x_i) \cdot (1 - 1/k)\right] > 0 \) since both \( \sigma(\cdot), 1/k \in (0, 1) \). We include the full derivation of the Hessian in Appendix A.1. With these calculations, we have the following result:
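A quick numerical check that the Hessian coefficient $h$ stays positive at random points, supporting the convexity result below (a sketch, not the authors' code):

```python
import torch

def hessian_coefficient(w, x, y, k=5.0):
    """h(w, x) from equation 8 for a single sample with label y in {-1, +1}."""
    u = y * (w @ x)
    s = torch.sigmoid(-u)                     # sigma(-y w^T x)
    l = (1 + torch.exp(-u)) ** (1.0 / k)      # l_i^k(w)
    return (1.0 / k) * l * s * (1 - s * (1 - 1.0 / k))

for _ in range(1000):
    w, x = torch.randn(4), torch.randn(4)
    y = 1.0 if torch.rand(1).item() > 0.5 else -1.0
    assert hessian_coefficient(w, x, y) > 0
```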
**Lemma 1** \( L^k_{RLO}(w) \) is a strictly convex function whenever \( k > 1 \) as is considered here.
Note that our result is novel because standard composition rules for convex optimization do not apply; the function \( (\cdot)^{\frac{1}{k}} \) is concave on the nonnegative orthant. Numerically, the main advantage is that the condition number of \( L^k_{RLO}(w) \) is independent of the data, while \( L_{LR}(w) \) can be quite ill-conditioned for inseparable datasets due to the \(\log(\cdot)\) function. More details can be found in Chapter 12 of Overton (2001).
While strict convexity holds for both the logistic and RLO loss functions, the following result says that the full-batch RLO is guaranteed to be as well conditioned as the logistic objective, obtained by comparing the coefficient of the Hessian term \( x_i x_i^\top \) in the RLO and logistic objectives (LO):
**Lemma 2** Let \( r_i := h_{RLO}(w^*_i, x_i)/h_{LO}(w^*_i, x_i) \in \mathbb{R}_{\geq 0} \), where \( w^*_i \) is the optimal parameter vector for sample \( i \). Then if \( k \leq \exp(l^k_i(w^*_i)) \), we have \( r_i > 1 \).
Above, Lemma 2 states that as long as \( k \) is not chosen to be too large, the gradient directions provide sufficient descent for fast convergence. This property makes RLO well suited for classification problems. From Lemmas 1 and 2, we conclude that there is a range of values of \( k \) that provides better conditioning for individual data points. This is beneficial when using stochastic algorithms that use a random mini-batch of samples at each iteration, instead of the full dataset, to compute the gradient.
**Generalization properties of RLO.** Assuming that the points \( x_i \in \mathbb{R}^d \) are bounded, i.e., \( \|x_i\| \leq B_x \), and that there is a bounded optimal solution \( \|w\| < B_o \), we expect the generalization bounds for LR in equation 2 to hold for RLO in equation 4. This is because, asymptotically, when \( k \uparrow +\infty \), the Hessian coefficient of RLO is at most 1, which guarantees that the gradient is Lipschitz continuous (Lei et al., 2019; Bartlett & Mendelson, 2002).
### 3.3 Applying RLO for Generative Models
Generative models have been studied as a statistical problem where the goal is, given a training dataset \( x_i, i = 1, 2, \ldots, n \), to learn a parametric model of its distribution \( p(x) \). For an appropriate parametric model \( f_\theta \), we need \( \theta \) such that \( f_\theta(z) \approx x \), where \( z \) is usually a Gaussian vector, so that some \( x_i \) is approximated through the transformation \( f_\theta \). For sampling, given a mapping \( f_\theta \), synthetic data points can be generated by sampling a Gaussian vector \( z \) and computing \( f_\theta(z) \). This overcomes some of the architectural restrictions of \( f_\theta \). This property is leveraged by Generative Adversarial Networks (GANs); see Chapter 10 in Lindholm et al. (2022).
GANs are a class of models that synthesize data points using \( f_\theta \), which takes a Gaussian vector \( z \) as input. GANs are trained by comparing these synthetic samples with real samples from the training data \( x_i \). The comparison is done by a critic, e.g., a binary classifier \( g_\eta \), which judges the authenticity of the samples. It is an adversarial game where the generator's parameters \( \theta \) are continuously updated to synthesize data close to reality, while the classifier (the discriminator) wants to label them correctly as fake. The result is a generator that has successfully learned to generate data that the discriminator labels as real. The generator tries to maximize the classifier loss with respect to \( \theta \) while the classifier tries to minimize the loss with respect to \( \eta \). This leads to a rooted minimax problem with a loss similar to equation 4, written as,
\[
\min_\theta \max_\eta V_k(f_\theta, g_\eta) = \mathbb{E}_{x \sim p_{data}(x)}[k (g_\eta(x))^{1/k}] + \mathbb{E}_{z \sim p_z(z)}[k (1 - g_\eta(f_\theta(z)))^{1/k}].
\]
(9)
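A minimal sketch of how equation 9 could be split into discriminator and generator losses; the clamping constant and the exact split are implementation assumptions, with `d_real` and `d_fake` denoting discriminator outputs in $(0, 1)$.

```python
import torch

def rooted_gan_losses(d_real, d_fake, k=2.0, eps=1e-6):
    """Replace the log terms of the standard GAN value function with k * (.)^(1/k)."""
    real_term = k * (d_real.clamp_min(eps) ** (1.0 / k)).mean()
    fake_term = k * ((1 - d_fake).clamp_min(eps) ** (1.0 / k)).mean()
    d_loss = -(real_term + fake_term)   # discriminator ascends V_k
    g_loss = fake_term                  # generator descends V_k through its term
    return d_loss, g_loss
```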
Figure 1: The rate of convergence over iterations of standard logistic regression and RLO. The lines for the rooted logistic regression show the convergence for the value of $k$ which gives the best test accuracy, $k = 4$ for Ionosphere, $k = 6$ for Madelon, $k = 20$ for Specheart and $k = 3$ for Wine. RLO converges faster than standard logistic regression in all the settings.
4 EXPERIMENTS
In this section, we present experiments using our proposed RLO with multiple model architectures on various benchmark datasets. Specifically, we compare rooted logistic regression with standard logistic regression on a synthetic dataset and 4 benchmark datasets from the UCI machine learning repository. Furthermore, we evaluate the rooted loss against the cross-entropy loss and focal loss by training state-of-the-art deep models, e.g., ResNet (He et al. [2016]), ViT (Dosovitskiy et al. [2020]) and Swin (Liu et al. [2021]), on image classification tasks. Finally, we showcase the application of image generation using RLO with StyleGAN (Karras et al. [2020a]).
4.1 DATASETS
Synthetic dataset setup (Hui et al. [2023]): We use a version of the popular 2-class spiral dataset with 1500 samples; we use 70% of the data for training and the remaining 30% for testing.
Datasets for regression: The empirical studies are conducted on the following 4 benchmark classification datasets from the publicly available UCI machine learning repository (Asuncion & Newman [2007]): Wine, Ionosphere, Madelon and Specheart.
Image datasets: We conduct image classification experiments to test the performance of the rooted loss. In particular, we use CIFAR-10/100 (Krizhevsky et al. [2009]) for training from scratch, and Tiny-ImageNet (mnmoustafa [2017]) and Food-101 (Bossard et al. [2014]) for finetuning. For our image generation experiments with StyleGAN, we use the FFHQ dataset (Karras et al. [2018]) and the Stanford Dogs dataset (Khosla et al. [2011]). More dataset information is in Appendix A.2.
4.2 SHALLOW LOGISTIC REGRESSION VS ROOTED LOGISTIC REGRESSION
Experiment setups: The baseline is standard logistic regression. To showcase the benefits of RLO, we run experiments with different values of $k \in [3, 20]$ for the proposed rooted logistic regression. Note that, for all datasets except Specheart, we use the same number of iterations (200) and learning rate (0.01) across all experiment settings. For Specheart, we increase the number of iterations to 1000 for better convergence and higher accuracy. We also evaluate standard logistic regression as well as RLO with and without $\ell_2$ regularization. More setup details are in Appendix A.4.2.
Convergence analysis: As mentioned above, we keep the experimental settings the same across all datasets except Specheart. Figure 1 shows the convergence performance for the Ionosphere, Madelon, Specheart and Wine datasets, respectively. For all datasets, we can clearly see that RLO converges better than standard logistic regression. RLO with and without $\ell_2$ regularization converges more quickly than standard logistic regression, with RLO without $\ell_2$ regularization converging comparatively faster. For the convergence results for other values of $k$ for RLO, please refer to Appendix A.4.2.
Performance gains: Table 1 shows the test accuracy for all datasets under the different regression settings. For RLO, we also show the top 3 values of $k$ that achieved the highest accuracy. As seen in the table, for all datasets, RLO with/without $\ell_2$ regularization outperforms standard logistic regression with/without $\ell_2$ in terms of accuracy on the test set. Specifically, RLO with $\ell_2$ regularization
| Dataset | LR Test Acc. | LR-L2 Test Acc. | k | RLO Test Acc. | RLO-L2 Test Acc. |
|---|---|---|---|---|---|
| Wine | 90 ± 4.15 | 89.44 ± 5.66 | 3 | 97.22 ± 1.75 | 94.55 ± 3.51 |
| | | | 11 | 82.77 ± 1.23 | 95.55 ± 2.22 |
| | | | 13 | 91.66 ± 5.55 | 95 ± 5.09 |
| Ionosphere | 81.4 ± 2.73 | 83.94 ± 2.1 | 4 | 85.07 ± 1.12 | 86.47 ± 1.12 |
| | | | 3 | 86.47 ± 1.69 | 85.63 ± 0.56 |
| | | | 16 | 84.5 ± 0.00 | 86.19 ± 0.56 |
| Madelon | 52.03 ± 1.9 | 50.83 ± 1.51 | 6 | 54.36 ± 0.71 | 52.75 ± 0.97 |
| | | | 9 | 54.13 ± 0.58 | 51.8 ± 1.43 |
| | | | 19 | 52.36 ± 1.14 | 54.13 ± 1.42 |
| Specheart | 80.49 ± 3.92 | 88.25 ± 1.69 | 20 | 84 ± 3.10 | 88.5 ± 1.83 |
| | | | 15 | 82.75 ± 1.83 | 88 ± 1.49 |
| | | | 13 | 82.99 ± 2.44 | 88 ± 1.00 |
Table 1: Testing accuracy (%) from 5-fold cross validation, using shallow logistic regression vs. rooted logistic regression (RLO). The top 3 values of $k$ are shown for RLO. RLO with/without $\ell_2$ regularization outperforms shallow logistic regression with/without $\ell_2$ in terms of accuracy on the test sets of all 4 datasets.
Figure 2 panels: (a) 2-layer FCN with CE; (b) 3-layer FCN with CE; (c) 4-layer FCN with CE; (d) 2-layer FCN with RLO; (e) 3-layer FCN with RLO; (f) 4-layer FCN with RLO.
Figure 2: The color denotes the estimated probability of a class label being identified as 1, aligned with the scale on the right side of the figures. The intervening white line between the red and blue regions denotes the decision boundary. In (a), (b) and (c), we train a 2-layer FCN for 1000 iterations, a 3-layer FCN for 100 iterations, and a 4-layer FCN for 50 iterations with the cross-entropy loss. In (d), (e) and (f), we train the same networks for the same numbers of iterations with the rooted logistic objective loss.
consistently achieves higher accuracy rates for different values of $k$. Hence, we conclude that our proposed RLO accelerates training and also provides performance improvements.
4.3 Deep Neural Network for Classification with RLO
Experiment setups: First, we implement fully-connected neural networks (FCNs) with 2, 3, and 4 layers on the synthetic dataset, trained for 1000, 100, and 50 iterations, respectively. We use the same hidden size of 100, a learning rate of 0.01, and $k = 3$ for all three FCNs. For the vision models in image classification tasks, as multi-class classification, we train and finetune ViT-B ([Dosovitskiy et al., 2020]), ResNet-50 ([He et al., 2016]), and Swin-B ([Liu et al., 2021]) models. The $k$ parameter of our proposed RLO is chosen from the set \{5, 8, 10\}. We train on CIFAR-10 and CIFAR-100 for 200 epochs with ViT and 100 epochs with ResNet and Swin. Moreover, we finetune these models on Tiny-ImageNet and Food-101 for 10 epochs. We train and fine-tune all models
Figure 3: RLO performance on CIFAR-100 training with different models. The x-axis is wall time in minutes. RLO obtains more stable validation loss, and use less time for training on all models.
| Dataset | Model | CE Time | CE Acc | Focal Time | Focal Acc | RLO-5 Time | RLO-5 Acc | RLO-8 Time | RLO-8 Acc | RLO-10 Time | RLO-10 Acc |
|------------|---------|---------|--------|-----------|----------|------------|----------|------------|----------|-------------|------------|
| CIFAR-10 | ViT | 12.42 | 79.15 | 62.16 | 77.78 | 12.67 | 79.1 | 12.80 | 79.64 | 12.70 | 79.33 |
| | ResNet | 24.13 | 87.67 | 110.98 | 86.50 | 22.22 | 88.54 | 19.81 | 88.53 | 21.27 | 88.79 |
| | Swin | 20.95 | 80.99 | 22.49 | 80.01 | 20.96 | 81.52 | 20.75 | 81.91 | 20.95 | 80.9 |
| CIFAR-100 | ViT | 48.35 | 52.39 | 61.83 | 52.32 | 12.58 | 52.97 | 12.75 | 52.62 | 12.59 | 52.03 |
| | ResNet | 25.46 | 66.84 | 20.24 | 67.45 | 20.67 | 67.92 | 20.75 | 68.31 | 20.75 | 68.46 |
| | Swin | 20.14 | 53.6 | 63.24 | 53.66 | 20.64 | 53.91 | 20.03 | 53.29 | 20.27 | 53.8 |
| Tiny-INet | ViT | 920.94 | 84.7 | 821.65 | 83.08 | 901.68 | 86.05 | 908.69 | 85.73 | 905.26 | 85.38 |
| | ResNet | 253.92 | 73.39 | 245.74 | 73.95 | 257.85 | 74.19 | 259.85 | 74.05 | 255.94 | 74.1 |
| | Swin | 950.72 | 88.85 | 952.54 | 88.22 | 932.56 | 88.38 | 928.37 | 88.74 | 926.53 | 88.91 |
| Food-101 | ViT | 670.43 | 80.39 | 673.25 | 79.07 | 660.85 | 80.52 | 660.85 | 80.52 | 660.85 | 80.52 |
| | ResNet | 187.94 | 73.39 | 180.35 | 72.14 | 189.56 | 73.73 | 189.73 | 73.97 | 185.87 | 73.91 |
| | Swin | 718.01 | 87.21 | 700.89 | 86.64 | 723.89 | 87.53 | 723.25 | 87.64 | 717.04 | 87.52 |
Table 2: Test performance for image classification on different datasets. Time is the average per-epoch training time in seconds. Note that CE is short for cross-entropy. The $k$ values are 5, 8, and 10. Our RLO obtains the best and second-best accuracy on all datasets and models.
on 3 NVIDIA RTX 2080Ti GPUs. To evaluate our proposed RLO, we use cross-entropy (CE) loss and focal loss as baselines. More implementation details are in Appendix A.3.
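Since equation (4) is not reproduced in this excerpt, the following PyTorch sketch assumes a rooted form of the loss, $m \cdot k\,(p_y^{-1/k} - 1)$, chosen so that it recovers the cross-entropy loss $-\log p_y$ as $k \to \infty$, consistent with the limit noted in Section 5; the function name and the multiplier $m$ are illustrative, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def rooted_logistic_loss(logits, targets, k=10.0, m=1.0):
    # Hypothetical rooted surrogate of cross-entropy: m * k * (p_y^(-1/k) - 1).
    # As k -> infinity this tends to -log(p_y), i.e. the standard CE loss,
    # matching the limit L_RLO^k -> L_LO stated in the conclusions.
    log_p = F.log_softmax(logits, dim=-1)
    log_p_y = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # p_y^(-1/k) evaluated stably in log space as exp(-log(p_y) / k).
    return (m * k * (torch.exp(-log_p_y / k) - 1.0)).mean()

# Drop-in replacement for F.cross_entropy in a standard training loop:
# loss = rooted_logistic_loss(model(x), y, k=10)
```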
Observations on FCN decision boundaries: To enable interpretative understanding, we use the synthetic setup to visualize the decision boundaries learned by RLO compared with CE. Figure 2 shows the decision boundaries obtained from RLO and CE for the three FCNs trained for different numbers of iterations. The white band between the red and blue regions denotes the decision boundary, the critical threshold distinguishing classifications within the model. Specifically, comparing (b) with (e), and (c) with (f), we observe that the margins, i.e., the distances from data points to the decision boundary, are larger for RLO in most regions. Hence, RLO better separates data points, which enables a faster convergence rate.
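As an illustration of how such decision-boundary plots can be produced, the following is a minimal matplotlib sketch, assuming a binary classifier with a single output logit on 2D synthetic data; all names are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
import torch

def plot_decision_boundary(model, X, y, resolution=300):
    # Color the plane by P(class = 1); the white band near p = 0.5 marks
    # the decision boundary, as in Figure 2.
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, resolution),
                         np.linspace(y_min, y_max, resolution))
    grid = torch.tensor(np.c_[xx.ravel(), yy.ravel()], dtype=torch.float32)
    with torch.no_grad():
        probs = torch.sigmoid(model(grid)).reshape(xx.shape).numpy()
    plt.contourf(xx, yy, probs, levels=50, cmap="coolwarm")
    plt.colorbar(label="P(class = 1)")
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap="coolwarm", edgecolors="k", s=15)
    plt.show()
```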
Nonconvex Optimization Benefits: (1) Performance gains: we evaluate the effectiveness of RLO on nonconvex optimization. In Figure 3, training FCNs with RLO outperforms CE in terms of accuracy in all settings, providing 1%–4.3% improvements for binary classification. Furthermore, we illustrate the results of RLO on multiple image classification benchmarks. Table 2 shows that RLO achieves the best and second-best accuracy across all datasets and network architectures. Specifically, training with RLO brings roughly 1.44%–2.32% gains over CE and 5.78%–6.66% gains over focal loss in terms of test accuracy. Additionally, Figures 3a and 3b show that the training time of RLO is significantly less than that of CE and focal loss on different models. For example, the wall time of training ViT on CIFAR-100 for 200 epochs is 54 minutes and 109 minutes less than with CE and focal loss respectively. Therefore, our proposed RLO can accelerate neural network training and also provide performance improvements regardless of dataset and model architecture. (2) Effects on overfitting: Figure 3c shows that the validation loss with CE increases over iterations. In contrast, the validation loss with RLO decreases over time on the same dataset and model, which helps reduce overfitting.
4.4 GAN-related with RLO
Experimental setup: For the image generation setup, we use the version of StyleGAN that can be trained with limited training data, as proposed by Karras et al. (2020a). All training is done on 3 NVIDIA RTX 2080Ti GPUs with the FFHQ and Stanford Dogs datasets. We evaluate the effectiveness of RLO by replacing the original loss, and compare it to StyleGAN's CE loss for different values
(a) RLO setup with k=2
(b) FFHQ - Progressive Generation.
(c) RLO setup with k=11.
(d) Stanford Dogs - Progressive Generation
Figure 4: Results on the FFHQ and Stanford Dogs datasets. (a) FID score vs. training time for the cross-entropy loss and the RLO-2 setup. (c) FID score vs. training time for the cross-entropy loss and the RLO-11 setup. In (b) and (d), the top $8 \times 2$ block contains four instances of image generation (each image is part of a $2 \times 2$ grid containing four images) using the CE loss. The bottom $8 \times 2$ block shows the same instances with the RLO setup.
(a) Results with FFHQ Dataset.
(b) Results with Stanford Dogs Dataset.
Figure 5: For each setup, the target image is shown on the left. To its right, the first row shows the images generated from the projections obtained at the initial and final stages of training with CE. The second row shows the result of replacing CE with RLO.
of $k$. To compare the efficacy of the models trained using RLO and CE loss, we take a target image from the original dataset and compute its projection onto the latent space using model snapshots from the initial and final stages of training. We then use these projections to generate an image with their respective models. More implementation details are in Appendix A.3.
**Observations:** As shown in Figure 4a, the setup with RLO trained on the FFHQ dataset produces a lower FID, i.e., better quality images, than the setup with CE. The progressive image generation while training these models is illustrated in Figure 4b; images produced by RLO (bottom 2 rows) appear slightly better at the final stages of training. A similar FID-vs-time comparison and progressive image generation for the Stanford Dogs dataset are shown in Figures 4c and 4d, where the FID scores for the two models are close. Finally, in Figure 5, for a target image, we show the images obtained from the projections using the initial and final stages of training. For both the FFHQ and Stanford Dogs datasets, the images generated using the final-stage RLO models (bottom image, in the last column) produce details that are closer to the target image than CE. More generated images are shown in Appendix A.4.4.
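For readers unfamiliar with latent-space projection, the following is a minimal sketch of the procedure described above: it optimizes a latent vector by gradient descent on a pixel MSE, assuming a generator callable that maps a latent vector to an image. The official StyleGAN2 projector additionally uses a perceptual (LPIPS) loss and noise regularization, which are omitted here; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def project_to_latent(generator, target, latent_dim, steps=500, lr=0.05):
    # Optimize a latent vector w so that generator(w) reconstructs `target`
    # (a 1x3xHxW image tensor). Pixel MSE only, for brevity.
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(generator(w), target)
        loss.backward()
        opt.step()
    return w.detach()

# Compare training snapshots as in Figure 5: project the target with the
# initial- and final-stage models (CE vs. RLO) and regenerate the image.
# w = project_to_latent(G_final_rlo, target_img, latent_dim=512)
# reconstruction = G_final_rlo(w)
```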
| Model | # Param | CE Train | CE Test | Focal Train | Focal Test | RLO-10 Train | RLO-10 Test |
|-----------|---------|----------|---------|-------------|------------|--------------|-------------|
| ResNet-34 | 21.3M | 99.76 | 89.92 | 94.04 | 85.79 | **99.79** | **89.98** |
| ResNet-50 | 23.7M | **99.47**| 86.65 | 93.98 | 80.13 | **99.47** | **86.72** |
| ResNet-101| 42.7M | 99.69 | 84.28 | 94.40 | 77.75 | **99.74** | **85.12** |
| ViT-S | 14.4M | 67.17 | 67.7 | 66.51 | 67.74 | **68.31** | **68.37** |
| ViT-B | 85.1M | 72.49 | 72.12 | 71.61 | 71.45 | **73.23** | **72.41** |
| ViT-L | 226.8M | 76.56 | **74.81**| 74.56 | 73.72 | **77.17** | **74.81** |
Table 3: Ablations on model architectures: train and test accuracy on CIFAR-10. Note that CE is short for cross-entropy. The \( k \) value is 10. Our RLO obtains the best train and test accuracy for all models.
### 4.5 Ablation Studies
**More experiments on the parameter family:** Our proposed RLO has a hyperparameter \( k \), as shown in equation (4). We conduct experiments on training models with different values of \( k \). As shown in Tables 1 and 2 and Figures 4a and 4c, the best \( k \) value varies across datasets and neural network architectures. Moreover, we observe that the best \( k \) is much smaller than the number of samples or feature dimensions, consistent with Lemma 2. In addition, we extend the RLO parameter family to \( (k, m) \), where \( m \) is the multiplier in equation (4). Figure 3c shows the test accuracy over iterations using different values of \( m \); here we train Swin with different \( m \) on CIFAR-10 with \( k = 8 \). The performance is similar across different \( m \), with \( m = 8 \) slightly better than the others in terms of test accuracy. More results for different \( k \) values are in Appendix A.4.
**Is RLO sensitive to model architectures/sizes?** First, in Table 2, we showed the performance of RLO on image classification tasks for various combinations of datasets and deep neural network models. We saw that \( k \) values from the set \{5, 8, 10\} achieved performance gains over the CE and focal methods, in terms of test accuracy, in almost all settings. For further ablation, we fix \( k = 10 \) and compare against the baselines under different model architectures of ResNet and ViT. The ablation results in Table 3 suggest that RLO resoundingly performs better, even under different architectures of the same model family. Both the training and the test accuracy under RLO-10 are better than those obtained with the CE and focal losses, suggesting that RLO provides performance gains across different hyperparameters and model architectures.
### 5 Conclusions and Future Work
We presented comprehensive evaluations of a new class of loss functions for prediction problems in shallow and deep networks. This class of loss functions has many favorable properties in terms of the optimization and generalization of learning problems defined with high-dimensional data. Recent results suggest that the standard logistic loss \( L_{LR}(\cdot) \) needs to be adjusted for better convergence and generalization properties (Sur & Candès, 2019). By taking the limit as \( k \uparrow +\infty \), or equivalently \( 1/k \downarrow 0 \) (say using L'Hôpital's rule), we can see that \( \lim_{k \to \infty} L_{RLO}^k(\cdot) = L_{LO}(\cdot) \). Moreover, since \( L_{RLO}^k(\cdot) \) and its first-order necessary condition are both smooth with respect to \( k \), the minimizers also coincide in the limit. We leave the rate of convergence to the max-margin classifier and the generalization properties of the obtained solution, as in (Soudry et al., 2018; Freund et al., 2018), for future work. The dependence of excess-risk and generalization bounds on the parameter \( k \) for RLO is also left as future work. We believe insights from recent generalized linear models are fruitful directions to pursue (Hanneke et al., 2023; Emami et al., 2020).
Our investigations show that the rooted logistic loss function performs better when using first-order methods. However, the convergence guarantees for first-order methods are relatively weak for pre-training architectures with a large number of parameters, such as vision models. Moreover, since these models have sequential aspects in their training formulations, the convergence rate is further reduced in practice. Therefore, it would be interesting to consider second-order methods such as Sophia (Liu et al., 2023) to optimize \( L_{RLO}^k \) for some \( k \).
REFERENCES
Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 224–232. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/arora17a.html.
Arthur Asuncion and David Newman. UCI machine learning repository, 2007.
Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
Digvijay Boob, Qi Deng, and Guanghui Lan. Stochastic first-order methods for convex and nonconvex functional constrained optimization. Mathematical Programming, 197(1):215–279, 2023.
Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision, 2014.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
Rares-Darius Buhai, Yoni Halpern, Yoon Kim, Andrej Risteski, and David Sontag. Empirical study of the benefits of overparameterization in learning latent variable models. In International Conference on Machine Learning, pp. 1211–1219. PMLR, 2020.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9650–9660, 2021.
Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703, 2020.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.
Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, and Alyson Fletcher. Generalization error of generalized linear models in high dimensions. In International Conference on Machine Learning, pp. 2892–2901. PMLR, 2020.
Robert M Freund, Paul Grigas, and Rahul Mazumder. Condition number analysis of logistic regression, and its implications for standard first-order solution methods. arXiv preprint arXiv:1810.08727, 2018.
Aritra Ghosh, Himanshu Kumar, and P Shanti Sastry. Robust loss functions under label noise for deep neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 31, 2017.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
Steve Hanneke, Aryeh Kontorovich, and Guy Kornowski. Near-optimal learning with average Hölder smoothness. arXiv preprint arXiv:2302.06005, 2023.
|
OF5x1dzWSS
|
Since the algorithm requires computing Jacobian inner products to perform parameter updates in the bi-level optimization, could the authors comment on the incurred time complexity? I am wondering if the algorithm runs much slower than vanilla AT (but only improves the robust accuracy moderately).
|
DOUBLY ROBUST INSTANCE-REWEIGHTED ADVERSARIAL TRAINING
Daouda A. Sow
Department of ECE
The Ohio State University
sow.53@osu.edu
Sen Lin
Department of CS
University of Houston
slin50@central.uh.edu
Zhangyang Wang
Visual Informatics Group
University of Texas at Austin
atlaswang@utexas.edu
Yingbin Liang
Department of ECE
The Ohio State University
liang.889@osu.edu
ABSTRACT
Assigning importance weights to adversarial data has achieved great success in training adversarially robust networks under limited model capacity. However, existing instance-reweighted adversarial training (AT) methods depend heavily on heuristics and/or geometric interpretations to determine those importance weights, leaving these algorithms without rigorous theoretical justification or guarantees. Moreover, recent research has shown that adversarial training suffers from severely non-uniform robust performance across the training distribution, e.g., data points belonging to some classes can be much more vulnerable to adversarial attacks than others. To address both issues, in this paper, we propose a novel doubly-robust instance reweighted AT framework, which obtains the importance weights by exploring distributionally robust optimization (DRO) techniques, and at the same time boosts the robustness of the most vulnerable examples. In particular, our importance weights are obtained by optimizing a KL-divergence regularized loss function, which allows us to devise new algorithms with a theoretical convergence guarantee. Experiments on standard classification datasets demonstrate that our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance, and at the same time improves the robustness against attacks on the weakest data points.
1 INTRODUCTION
Deep learning models are known to be vulnerable to malicious adversarial attacks [Nguyen et al., 2015], i.e., small perturbations added to natural input data can easily fool state-of-the-art networks. Given that these deep neural networks are heavily deployed in real-life, even safety-critical, applications, adversarial training (AT) [Madry et al., 2017; Athalye et al., 2018a; Carmon et al., 2019] has been proposed for training networks to be robust to adversarial attacks [Athalye et al., 2018b; Szegedy et al., 2013; Goodfellow et al., 2014; Papernot et al., 2016; Nguyen et al., 2015; Zhang et al., 2021b; 2020a]. In particular, most existing defense strategies are based on recipes similar to AT [Madry et al., 2017], where the goal is to minimize the average loss of the worst-case adversarial data over the training distribution via solving a minimax optimization problem.
Despite its success, the traditional AT method [Madry et al., 2017] has some major limitations. First, even though existing overparameterized neural networks seem to be good enough for natural data, highly adversarial data consumes much more model capacity than its clean counterpart, making the minimization of the uniform average adversarial loss a very pessimistic goal, as argued in [Zhang et al., 2020b]. To overcome this limitation, recent works [Zhang et al., 2020b; Liu et al., 2021a; Zeng et al., 2021; Ding et al., 2018] assign an importance weight to each data point in the training distribution, in order to emphasize the ones that are critical to determining the model's decision boundaries. By allowing more careful exploitation of the limited model capacity, such a simple instance-reweighted scheme combined with traditional adversarial training has yielded a
significant boost in the robust performance of current adversarially trained models. Yet, existing methods for instance-reweighted AT mostly adopt heuristic techniques and/or geometric intuitions in order to compute the instance weights, which makes these algorithms lack a principled and rigorous theoretical justification/guarantee. This hence motivates the following question we ask:
**How to systematically determine the importance weights via a principled approach, rather than resorting to heuristics/interpretations which are often sub-optimal?**
Moreover, as observed in Tian et al. (2021), another critical limitation of the traditional AT method is that it suffers from severely non-uniform performance across the empirical distribution. For example, while the average robust performance of the AT method on the CIFAR10 dataset can be as high as 49%, the robust accuracy for the weakest class is as low as 14%, a huge disparity in robust performance across classes. We note that such non-uniform performance across classes is also mildly observed in standard training with clean data, but its severity is much worse in adversarial training (see Figure 1). Indeed, this is a critical limitation that requires special attention: in a real-world situation, a more intelligent attacker can decide which examples to attack so as to achieve a much higher success rate (e.g., 86% when attacking the most vulnerable class). This non-uniform robust performance is even worse for imbalanced training distributions (Wu et al., 2021; Wang et al., 2022), where the robust performance for the most vulnerable class can be as low as 0%. This motivates our second question given below:
**Can such an issue of non-uniform performance particularly over imbalanced datasets be addressed at the instance level simultaneously as we design the importance weights to address the first question?**
In this paper, we propose a novel doubly robust instance reweighted optimization approach to address both of the above questions.
### 1.1 Our Contributions
**(A novel principled framework for instance reweighted AT)** In order to determine the instance weights for AT in a theoretically grounded way, we propose a novel doubly robust instance reweighted optimization framework, based on distributionally robust optimization (DRO) [Rahimian & Mehrotra (2019); Qian et al. (2019)] and bilevel optimization [Zhang et al. (2022); Pedregosa (2016); Grazzi et al. (2020b)]. Through building a model that is robust not only to the adversarial attacks but also to the worst-case instance weight selections, our framework (a) enjoys better robust performance than existing instance-reweighted schemes based on heuristic/geometric techniques [Zhang et al. (2020b); Liu et al. (2021a); Zeng et al. (2021)] as well as traditional AT baselines [Madry et al. (2017)]; and (b) addresses the non-uniform issues [Tian et al. (2021); Pethick et al. (2023)] of traditional AT by carefully optimizing the instance weights so as to boost the robust performance of the most vulnerable examples. Moreover, the proposed framework can be reformulated into a new finite-sum compositional bilevel optimization problem (CBO), which can be of great interest to the optimization community on its own.
**(A novel algorithm with theoretical guarantee)** Solving the proposed doubly robust optimization problem is technically challenging, including the non-differentiability of the optimizer for the constrained inner level problem and the biased hypergradient estimation for the compositional outer level problem. To tackle these challenges, we first propose a penalized reformulation based on the log-barrier penalty method, and then develop a novel algorithm which exploits the implicit function theorem and keeps track of a running average of the outer level composed function values. Our algorithm not only leads to a robust model for the proposed instance reweighted optimization problem but also provides a solution to the generic compositional bilevel optimization problem. Under widely adopted assumptions in the bilevel [Grazzi et al. (2020a); Ji et al. (2021); Rajeswaran et al. (2019); Ji & Liang (2021)] and compositional optimization [Wang et al. (2017); Chen et al. (2021b); Lian et al. (2017); Blanchet et al. (2017); Devraj & Chen (2019)] literature, we further establish the convergence guarantee for the proposed algorithm.
**(Strong experimental performance)** Experiments on several balanced and imbalanced image recognition datasets demonstrate the effectiveness of our proposed approach. In particular, on CIFAR10 our approach yields +3.5% improvement in overall robustness against PGD attacks [Madry et al. (2017)] with most of it coming from boosting robustness on vulnerable data points.
1.2 Related Work
Adversarial training for robust learning Adversarial training (AT) [Madry et al., 2017; Athalye et al., 2018a; Carmon et al., 2019] was proposed for training deep neural networks robust to malicious adversarial attacks [Goodfellow et al., 2014; Tramèr et al., 2017]. In particular, Madry et al. [2017] introduced a generic AT framework based on minimax optimization with the goal of minimizing the training loss of the worst-case adversarial data for the training distribution. However, despite AT method being still considered as one of the most powerful defense strategies, Rice et al. [2020] highlights a severe decrease in robust performance of traditional AT when training is not stopped early, a phenomenon they dubbed robust overfitting. Several extensions of the standard AT method have been proposed to mitigate this intriguing problem, such as data augmentation-based techniques [Rebuffi et al., 2021; Gowal et al., 2021], or smoothing-based methods [Chen et al., 2021a; Yang et al., 2020a,b]. Zhang et al. [2019] proposed a theoretically grounded objective for AT to strike a balance between robust and natural performance. However, those methods suffer a severe non-uniform performance across classification categories, as observed in [Tian et al., 2021]. Our proposed framework helps mitigate this drawback by carefully optimizing for the most vulnerable data points.
Instance reweighted adversarial training Another line of works [Zhang et al., 2020b; Liu et al., 2021a; Zeng et al., 2021; Ding et al., 2018] assign an importance weight to each data point in the empirical distribution and minimize the weighted adversarial losses. This has been shown to significantly boost the performance of AT due to more careful exploitation of the limited capacity of large deep neural networks to fit highly adversarial data, and helps overcome robust overfitting to some extent [Zhang et al., 2020b]. For example, in the geometry-aware adversarial instance reweighted adversarial training (GAIRAT) [Zhang et al., 2020b] method, the instance weight is computed based on the minimum number of PGD [Madry et al., 2017] steps required to generate a mis-classified adversarial example. Liu et al. [2021a] leverages probabilistic margins to compute weights. Existing approaches for instance reweighted AT are, however, all based on heuristics/geometric intuitions to determine the weights. In this paper, we propose a principled approach to instance-reweighted AT by exploiting robust optimization techniques [Qian et al., 2019; Rahimian & Mehrotra, 2019].
Instance reweighting has also been used in the context of domain adaptation [Jiang & Zhai, 2007], data augmentation [Yi et al., 2021], and imbalanced classification [Ren et al., 2018]. By determining the instance weights in a more principled way, our method also has the potential to be applied to these contexts, which we leave as future work.
Due to space limitation, more discussions about related literature in Bilevel Optimization and Stochastic Compositional Optimization is deferred to Appendix A.
2 Preliminary on AT
Traditional AT. The traditional adversarial training (AT) [Madry et al., 2017] framework is formulated as the following minimax optimization problem over the training dataset \( D = \{ (x_i, y_i) \}_{i=1}^M \):
\[
\min_{\theta} \frac{1}{M} \sum_{i=1}^{M} \max_{\delta \in C} \ell(x_i + \delta, y_i; \theta),
\]
(1)
where \( \ell(x_i + \delta, y_i; \theta) \) is the loss function on the adversarial input \( x_i + \delta \), \( C \) is the threat model that defines the constraint on the adversarial noise \( \delta \), and \( \theta \in \mathbb{R}^d \) corresponds to the model parameters. Thus, traditional AT builds robust models by optimizing the parameters \( \theta \) for the average worst-case adversarial loss \( \ell(x_i + \delta, y_i; \theta) \) over the training dataset \( D \). A natural solver for the problem in Equation (1) is the AT algorithm [Madry et al., 2017], where 1) the projected gradient descent (PGD) [Madry et al., 2017] method is first adopted to approximate the worst-case adversarial noise \( \delta \), and 2) an outer minimization step is performed on the parameters \( \theta \) using stochastic gradient descent (SGD) methods. However, traditional AT is known to consume a tremendous amount of model capacity due to its overwhelming smoothing effect on natural data neighborhoods [Zhang et al., 2020b]. In other words, traditional AT robustifies models by pushing decision boundaries far away from natural data points so that their adversarial counterparts are still correctly classified (i.e., do not cross the decision boundary), and thus requires significantly more model capacity than standard training on clean data.
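As a concrete illustration, a minimal PyTorch sketch of this two-step procedure is given below, assuming the \( \ell_\infty \) threat model and cross-entropy loss; hyperparameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    # Approximate the inner maximization of Equation (1) by projected
    # gradient ascent under C = {delta : ||delta||_inf <= eps, x + delta in [0,1]^p}.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).detach()   # ascent on the loss
        delta = delta.clamp(-eps, eps)                   # project onto the eps-ball
        delta = (x + delta).clamp(0.0, 1.0) - x          # keep x + delta in [0, 1]
    return delta

# One outer SGD step of traditional AT (Equation (1)):
# delta = pgd_attack(model, x, y)
# optimizer.zero_grad()
# F.cross_entropy(model(x + delta), y).backward()
# optimizer.step()
```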
Instance Reweighted AT. The geometry-aware approach in [Zhang et al., 2020b] introduces a new line of methods that reweights the adversarial loss on each individual data point in order to address
the drawback of traditional AT. The key motivation is that distinct data points are unequal by nature and should be treated differently based on how much they participate in determining the decision boundaries. Hence, the learning objective of the geometry-aware instance-reweighted adversarial training (GAIRAT) method as well as its variants [Zhang et al., 2020b; Liu et al., 2021a; Zeng et al., 2021] can be written as
$$\min_{\theta} \sum_{i=1}^{M} w_i \max_{\delta \in C_i} \ell(x_i + \delta, y_i; \theta) \quad \text{with} \quad \sum_{i=1}^{M} w_i = 1 \text{ and } w_i \geq 0,$$
(2)
where the constraints on the weight vector $w = (w_1, ..., w_M)^T$ are imposed in order to make Equation (2) consistent with the original objective in Equation (1). This framework assumes that the weight vector $w = (w_1, ..., w_M)^T$ can be obtained separately, and the goal is only to optimize for $\theta$ once an off-the-shelf technique/heuristic has been used to compute $w$. Intuitively, the key idea driving the weight assignments in instance reweighted methods is that larger weights should be assigned to the training examples closer to the decision boundaries, whereas the ones that are far away should have smaller weights because they are less important in determining the boundaries. The major difference among existing instance reweighted AT methods lies in the heuristics used to design/compute the instance weights $w_i$, $i = 1, ..., M$. However, none of these methods adopts a scheme that is theoretically grounded, nor does the formulation in Equation (2) provide a way of determining those weights.
**Bilevel Optimization Formulation for AT.** Along a different line, bilevel optimization has recently been leveraged to develop a more powerful framework for adversarial training [Zhang et al., 2022]:
$$\min_{\theta} \frac{1}{M} \sum_{i=1}^{M} \ell(x_i + \delta^*_i(\theta), y_i; \theta) \quad \text{s.t.} \quad \delta^*_i(\theta) = \arg\min_{\delta \in C_i} \ell'(x_i + \delta, y_i; \theta),$$
(3)
where for each data point $(x_i, y_i)$, $\delta^*_i(\theta)$ represents some worst-case/optimal adversarial noise under the attack loss function $\ell'(\cdot; \theta)$. Such a bilevel optimization formulation of AT has key advantages over the traditional framework in Equation (1). First, the traditional AT can be recovered by setting the attack objective to be the negative of the training objective, i.e., $\ell'(\cdot; \theta) = -\ell(\cdot; \theta)$. Second, the bilevel formulation gives one the flexibility to separately design the inner and outer level objectives, $\ell'$ and $\ell$, respectively. These key advantages make the formulation in Equation (3) a more generic and powerful framework than the one in Equation (1). As we will see next, this enables us to independently construct a new outer level objective that also solves for the instance weights $w$, and an inner level objective for regularized attack.
### 3 Proposed Framework for Instance Reweighted AT
#### 3.1 DONE: Doubly Robust Instance Reweighted AT
Using the bilevel formulation for AT in Equation (3), we can incorporate the instance reweighted idea as
$$\min_{\theta} \sum_{i=1}^{M} w_i \ell(x_i + \delta^*_i(\theta), y_i; \theta) \quad \text{s.t.} \quad \delta^*_i(\theta) = \arg\min_{\delta \in C_i} \ell'(x_i + \delta, y_i; \theta) \quad \text{with} \quad \sum_{i=1}^{M} w_i = 1 \text{ and } w_i \geq 0.$$
(4)
Based on bilevel optimization and distributionally robust optimization (DRO), we next propose a new framework for AT which determines the weights $w$ in a more principled way rather than using heuristic methods. Specifically, by letting $w$ maximize the weighted sum of the adversarial losses $\ell(x_i + \delta^*_i(\theta), y_i; \theta)$, $i = 1, ..., M$, we seek to build a model in the outer level problem that is robust not only to the adversarial attacks but also to the worst-case attack distribution:
$$\min_{\theta} \max_{w \in P} \sum_{i=1}^{M} w_i \ell(x_i + \delta^*_i(\theta), y_i; \theta) - r \sum_{i=1}^{M} w_i \log(Mw_i) \quad \text{s.t.} \quad \delta^*_i(\theta) = \arg\min_{\delta \in C_i} \ell'(x_i + \delta, y_i; \theta),$$
(5)
where $P$ represents the probability simplex, i.e., $P = \{w \in \mathbb{R}^M : \sum_{i=1}^{M} w_i = 1 \text{ and } w_i \geq 0\}$, and the term $r \sum_{i=1}^{M} w_i \log(Mw_i)$ in the outer level objective captures the KL-divergence between $w$ and the uniform weight distribution, which is a widely adopted choice of regularizer in the DRO literature [Rahimian & Mehrotra, 2019]. Note that the regularization parameter $r > 0$ controls the tradeoff between two extreme cases: 1) $r = 0$ leads to an un-regularized problem (as we comment
below), and 2) \( r \to \infty \) yields \( w_i \to \frac{1}{M} \), and hence, we recover the average objective in Equation (1).
Such a regularizer is introduced to promote a balance between the uniform and worst-case weights \( w \); otherwise the outer level objective in Equation (5) becomes linear in the weight vector \( w \), which makes the solution of the ‘max’ problem trivially a one-hot vector \( w \) (where the only ‘1’ is at the index \( i \) with the largest adversarial loss). In practice, such a trivial one-hot vector \( w \) makes the optimization routine unstable and usually hurts generalization on the training distribution [Qian et al. (2019); Wang et al. (2021)].
Overall, the formulation in Equation (5) becomes a doubly robust bilevel optimization: (a) the inner level finds the worst-case noise \( \delta \) in order to make the model parameters \( \theta \) robust to such adversarial perturbation of data input; and (b) the outer level finds the worst-case reweighting first so that the optimization over the model \( \theta \) can focus on those data points with high loss values, i.e., the optimization over \( \theta \) is over the worst-case adversarial losses.
### 3.2 An Equivalent Compositional Bilevel Optimization Problem
An important consequence of choosing the KL-divergence as the regularizer is that the max problem in the outer objective of Equation (5) admits a unique solution \( w^*(\theta) \) (see Qi et al. (2021) for proof), which has its \( i \)-th entry given by \( w^*_i(\theta) = \exp \left( \ell_i(\theta, \delta^*_i(\theta)) / r \right) / \sum_j \exp \left( \ell_j(\theta, \delta^*_j(\theta)) / r \right) \). Here we denote \( \ell_i(\theta, \delta^*_i(\theta)) = \ell(x_i + \delta^*_i(\theta), y_i; \theta) \). Substituting this optimal weights vector \( w^*(\theta) \) back in Equation (5) yields the following equivalent optimization problem
\[
\min_{\theta} r \log \left( \frac{1}{M} \sum_{i=1}^{M} \exp \left( \frac{\ell_i(\theta, \delta^*_i(\theta))}{r} \right) \right) \quad \text{s.t.} \quad \delta^*_i(\theta) = \arg \min_{\delta \in C_i} \ell'_i(\theta, \delta).
\]
(6)
Problem (6) is, in fact, to the best of our knowledge, a novel optimization framework, which we define as a compositional bilevel optimization problem. Without the inner level problem, stochastic algorithms with known convergence behavior have been devised for the single-level compositional problem. Nevertheless, directly solving problem (6) suffers from several key technical challenges. In particular, the fact that the minimizer of the constrained inner level problem in Equation (6) may not be differentiable w.r.t. the model parameter \( \theta \) prevents the use of implicit differentiation for solving the bilevel optimization problem.
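Before turning to that reformulation, the closed form of \( w^*(\theta) \) and the equivalence between (5) and (6) can be checked numerically. The sketch below computes the optimal weights as a softmax of the per-example adversarial losses with temperature \( r \), and verifies that plugging them into the KL-regularized objective recovers the log-sum-exp form in (6); variable names are illustrative.

```python
import math
import torch

def optimal_weights(adv_losses, r):
    # Closed-form maximizer of the outer 'max' in (5): a softmax over the
    # per-example adversarial losses with temperature r.
    return torch.softmax(adv_losses / r, dim=0)

def compositional_objective(adv_losses, r):
    # Equivalent objective (6): r * log( (1/M) * sum_i exp(ell_i / r) ),
    # computed stably with logsumexp.
    M = adv_losses.numel()
    return r * (torch.logsumexp(adv_losses / r, dim=0) - math.log(M))

# Sanity check: the weighted loss minus the KL term, evaluated at w*, equals (6).
losses, r = torch.tensor([0.3, 1.2, 2.5]), 0.5
w = optimal_weights(losses, r)
lhs = (w * losses).sum() - r * (w * torch.log(len(losses) * w)).sum()
print(torch.allclose(lhs, compositional_objective(losses, r)))  # True
```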
To tackle this challenge, we propose a penalized reformulation based on the log-barrier penalty method. More specifically, we consider \( \ell_\infty \)-norm based attack constraint given by \( C = \{ \delta \in \mathbb{R}^p : \| \delta \|_\infty \leq \epsilon, x + \delta \in [0,1]^p \} \) for radius \( \epsilon > 0 \) and input \( x \in \mathbb{R}^p \). In this case, the constraint set \( C \) can be written in the form of linear constraint \( A\delta \leq b \) with \( A = (I_p, -I_p)^\top \in \mathbb{R}^{2p \times p} \) and \( b = (\min(\epsilon 1_p, 1_p - x), \min(\epsilon 1_p, x))^\top \in \mathbb{R}^{2p} \). With this, we can reformulate the inner problem in Equation (6) as \( \delta^*_i(\theta) = \arg \min_{A_i \delta \leq b_i} \ell'_i(\theta, \delta) \), where \( A_i \) and \( b_i \) are realizations of aforementioned \( A \) and \( b \) for input \( x_i \). By using the log-barrier penalty method to penalize the linear constraint into the attack objective, the optimization problem (6) becomes
\[
\min_{\theta} L(\theta) := r \log \left( \frac{1}{M} \sum_{i=1}^{M} \exp \left( \frac{\ell_i(\theta, \hat{\delta}^*_i(\theta))}{r} \right) \right) \quad \text{s.t.} \quad \hat{\delta}^*_i(\theta) = \arg \min_{\delta \in C_i} \ell_{\text{bar}}'(\theta, \delta),
\]
(7)
where \( \ell_{\text{bar}}'(\theta, \delta) := \ell'_i(\theta, \delta) - c \sum_{k=1}^{2p} \log(b_k - \delta^\top a_k) \), \( a_k \) denotes the \( k \)-th row of matrix \( A_i \) and \( b_k \) is the \( k \)-th entry of vector \( b_i \). Note that now the constraint \( \{ \delta \in C_i \} \) is never binding in Equation (7), because the log-barrier penalty forces the minimizer of \( \ell_{\text{bar}}'(\theta, \delta) \) to be strictly inside the constraint set. Based on this, we show that the minimizer \( \hat{\delta}^*_i(\theta) \) becomes differentiable, i.e., \( \frac{\partial \hat{\delta}^*_i(\theta)}{\partial \theta} \) exists when \( \ell'_i(\theta, \delta) \) is twice differentiable and under some mild conditions. With the smoothness of \( \hat{\delta}^*_i(\theta) \), we also provide the expression of the gradient \( \nabla L(\theta) \) in the following proposition.
**Proposition 1.** Let \( \ell'_i(\theta, \delta) \) be twice differentiable. Define \( \gamma_k = 1/(b_k - a_k^\top \hat{\delta}^*_i(\theta))^2, k = 1, ..., 2p \) and diagonal matrix \( C_i(\theta) = c \text{diag}(\gamma_1 + \gamma_{p+1}, \gamma_2 + \gamma_{p+2}, ..., \gamma_p + \gamma_{2p}) \). If \( \nabla^2 \ell'_i(\theta, \hat{\delta}^*_i(\theta)) + C_i(\theta) \) is invertible, then the implicit gradient \( \frac{\partial \hat{\delta}^*_i(\theta)}{\partial \theta} \) exists and we have
\[
\nabla L(\theta) = \frac{r \sum_{i=1}^{M} \left( \nabla_\theta g_i(\theta, \hat{\delta}^*_i(\theta)) - \nabla_\theta \nabla_\delta \ell'_i(\theta, \hat{\delta}^*_i(\theta)) [\nabla^2_\delta \ell'_i(\theta, \hat{\delta}^*_i(\theta)) + C_i(\theta)]^{-1} \nabla_\delta g_i(\theta, \hat{\delta}^*_i(\theta)) \right)}{\sum_{i=1}^{M} g_i(\theta, \hat{\delta}^*_i(\theta))},
\]
where \( g_i(\theta, \hat{\delta}^*_i(\theta)) = \exp \left( \frac{\ell_i(\theta, \hat{\delta}^*_i(\theta))}{r} \right) \).
Proposition 1 provides the expression of the total gradient \( \nabla L(\theta) \), which is useful for the practical implementation of implicit differentiation based algorithms for problem (6). Moreover, as in Zhang et al. (2022), when \( \ell_i(\theta, \cdot) \) is modeled by a ReLU-based deep neural network, the Hessian \( \nabla^2_{\delta} \ell_i(\theta, \delta) \) w.r.t. the input \( \delta \) can be safely neglected, because ReLU networks generally lead to piece-wise linear decision boundaries w.r.t. their inputs (Moosavi-Dezfooli et al., 2019; Alfarra et al., 2022), i.e., \( \nabla^2_{\delta} \ell_i(\theta, \delta) \approx 0 \). Further, the diagonal matrix \( C_i(\theta) \) can be efficiently inverted. Hence, in order to approximate \( \nabla L(\theta) \), we only need Jacobian-vector products, which can be efficiently computed using existing automatic differentiation packages.
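The following sketch illustrates this approximation for a single sample: with the Hessian dropped, the linear system in Proposition 1 reduces to an elementwise division by the diagonal of \( C_i(\theta) \), and the mixed-partial term is obtained with one extra backward pass (a Jacobian-vector product). All names are illustrative, and the full gradient \( \nabla L(\theta) \) is obtained by summing these per-sample terms and normalizing as in Proposition 1.

```python
import torch

def sample_hypergradient(theta_params, g_val, attack_loss_val, delta_star, A_i, b_i, c):
    # Implicit gradient of g_i w.r.t. theta, sketching Equation (9) under
    # the ReLU approximation grad^2_delta ell'_i ~= 0, so that the linear
    # system is solved by inverting only the diagonal barrier matrix C_i.
    # `g_val` and `attack_loss_val` are scalar tensors built on the current
    # graph; both must depend on theta_params and on delta_star.
    p = delta_star.numel()
    slack = b_i - A_i @ delta_star.detach()       # constraint slacks b_k - a_k^T delta
    gamma = 1.0 / slack.pow(2)
    C_diag = c * (gamma[:p] + gamma[p:])          # diagonal of C_i(theta)

    grad_delta_g = torch.autograd.grad(g_val, delta_star, retain_graph=True)[0]
    v = (grad_delta_g / C_diag).detach()          # v ~= [C_i]^{-1} grad_delta g_i

    # Mixed-partial term (grad_theta grad_delta ell') v via one extra
    # backward pass, a Jacobian-vector product as noted above.
    grad_delta_attack = torch.autograd.grad(attack_loss_val, delta_star,
                                            create_graph=True)[0]
    correction = torch.autograd.grad(grad_delta_attack @ v, theta_params,
                                     retain_graph=True)
    direct = torch.autograd.grad(g_val, theta_params)
    return [dg - cg for dg, cg in zip(direct, correction)]
```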
3.3 COMPOSITIONAL IMPLICIT DIFFERENTIATION (CID)
To solve our reformulated problem (7) for AT, we consider the following generic compositional bilevel optimization problem, which can be of great interest to the optimization community:
\[
\begin{align*}
\min_{\theta} & \quad F(\theta) := f(g(\theta, \delta^*(\theta))) = f\left(\frac{1}{M} \sum_{i=1}^{M} g_i(\theta, \delta^*_i(\theta))\right) \\
\text{s.t.} & \quad \delta^*(\theta) = (\delta^*_1(\theta), ..., \delta^*_M(\theta)) = \arg\min_{(\delta_1, ..., \delta_M) \in V_1 \times ... \times V_M} \frac{1}{M} \sum_{i=1}^{M} h_i(\theta, \delta_i),
\end{align*}
\]
(8)
which can immediately recover problem (7) by setting \( g_i = \exp\left(\frac{\ell_i(\theta, \delta^*_i(\theta))}{r}\right) \), \( h_i = \ell'_i(\theta, \delta) - c \sum_{k=1}^{2p} \log(b_k - \delta^\top a_k) \), and the constraint set \( V_i = C_i \). Here the outer functions \( g_i(\theta, \delta) : \mathbb{R}^d \times \mathbb{R}^p \to \mathbb{R}^m \) and \( f(z) : \mathbb{R}^m \to \mathbb{R} \) are generic nonconvex and continuously differentiable functions. The inner function \( h_i(\theta, \delta) : \mathbb{R}^d \times V_i \to \mathbb{R} \) is twice differentiable and admits a unique minimizer in \( \delta \); \( V_i \) is a convex subset of \( \mathbb{R}^p \) that is assumed to contain the minimizers \( \delta^*_i(\theta) \). We collect all inner loop minimizers into a single vector \( \delta^*(\theta) \). The goal is to minimize the total objective function \( F(\theta) : \mathbb{R}^d \to \mathbb{R} \); doing so not only yields a robust model for our instance reweighted optimization problem (7) but also provides a solution to the generic compositional bilevel optimization problem.
As alluded to earlier, solving the compositional bilevel optimization problem is nontrivial. More specifically, by the chain rule, the gradient of the total objective is \( \nabla F(\theta) = \frac{\partial g(\theta, \delta^*(\theta))}{\partial \theta} \nabla f(g(\theta, \delta^*(\theta))) \). Because \( \nabla f(\cdot) \) needs to be evaluated at the full value \( g(\theta, \delta^*(\theta)) \), standard stochastic gradient descent methods cannot be naively applied here: even if we can obtain the unbiased estimates \( g_i(\theta, \delta^*_i(\theta)) \), the product \( \frac{\partial g_i(\theta, \delta^*_i(\theta))}{\partial \theta} \nabla f(g_i(\theta, \delta^*_i(\theta))) \) would still be biased, unless \( f(\cdot) \) is a linear function. This key difference makes problem (8) particularly challenging and sets it apart from the standard finite-sum bilevel optimization problem, in which the total objective is linear w.r.t. the sampling probabilities \( \frac{1}{M} \).
To design a theoretically grounded algorithm for problem (8), note that the stochastic compositional gradient descent (SCGD) (Wang et al., 2017) algorithm for the single-level compositional optimization problem keeps track of a running average of the composed function evaluations during the algorithm running. Inspired by SCGD, we propose a novel algorithm (see Algorithm 1) that exploits the implicit differentiation technique to deal with the bilevel aspect of problem (8). Using the implicit function theorem, we can obtain
\[
\frac{\partial g_i(\theta, \delta^*_i(\theta))}{\partial \theta} = \nabla_\theta g_i(\theta, \delta^*_i(\theta)) - \nabla_\theta \nabla_\delta h_i(\theta, \delta^*_i(\theta)) \, v^*_i,
\]
(9)
with each \( v^*_i \) being the solution of the linear system \( \nabla^2_\delta h_i(\theta, \delta^*_i(\theta)) v = \nabla_\delta g_i(\theta, \delta^*_i(\theta)) \).
Specifically, at each step \( t \), the algorithm first samples a batch \( B \) of cost functions \( \{(g_i, h_i)\} \) and applies \( K \) steps of projected gradient descent to obtain \( \delta^K_i(\theta_t) \) as an estimate of the minimizer \( \delta^*_i(\theta_t) \) of each \( h_i(\theta_t, \cdot) \) in \( B \). Then, the algorithm computes an approximation \( \hat{\nabla} g_i(\theta_t, \delta^K_i(\theta_t)) \) of the stochastic gradient sample \( \frac{\partial g_i(\theta, \delta^*_i(\theta))}{\partial \theta} \) by replacing each \( \delta^*_i(\theta_t) \) with \( \delta^K_i(\theta_t) \) in Equation (9). The running estimate \( u_t \) of \( g(\theta, \delta^*(\theta)) \) and the parameters \( \theta \) are then updated as follows
\[
u_{t+1} = (1 - \eta_t)u_t + \frac{\eta_t}{|B|} \sum_{i=1}^{|B|} g_i(\theta_t, \delta^K_i(\theta_t)) \quad \text{and} \quad \theta_{t+1} = \theta_t - \frac{\beta_t}{|B|} \sum_{i=1}^{|B|} \hat{\nabla} g_i(\theta_t, \delta^K_i(\theta_t)) \nabla f(u_{t+1}).
\]
Note that we will refer to the instantiation of Algorithm 1 for solving the instance reweighted problem (7) as DONE (which stands for Doubly Robust Instance Reweighted AT).
Algorithm 1 Compositional Implicit Differentiation (CID)
1: Input: stepsizes $\alpha$, $\{\beta_t\}$, $\{\eta_t\}$, initializations $\theta_0 \in \mathbb{R}^d$, $\delta^0 \in \mathbb{R}^p$, and $u_0 \in \mathbb{R}^m$.
2: for $t = 0, 1, 2, ..., T - 1$ do
3: Draw a minibatch of cost functions $B = \{(g_i, h_i)\}$
4: for each $(g_i, h_i) \in B$ (in parallel) do
5: for $k = 1, ..., K$ do
6: Update $\delta_{i,t}^k = \Pi_C(\delta_{i,t}^{k-1} - \alpha \nabla_\delta h_i(\theta_t, \delta_{i,t}^{k-1}))$
7: end for
8: Compute the sample gradient estimate $\hat{\nabla} g_i(\theta_t, \delta_{i,t}^K)$ as in Equation (9), replacing $\delta^*_i(\theta_t)$ with $\delta_{i,t}^K$
9: end for
10: Compute $g(\theta_t, \delta_{i,t}^K; B) = \frac{1}{|B|} \sum_{i=1}^{|B|} g_i(\theta_t, \delta_{i,t}^K)$ and $\hat{\nabla} g(\theta_t, \delta_{i,t}^K; B) = \frac{1}{|B|} \sum_{i=1}^{|B|} \hat{\nabla} g_i(\theta_t, \delta_{i,t}^K)$
11: Update $u_{t+1} = (1 - \eta_t) u_t + \eta_t g(\theta_t, \delta_{i,t}^K; B)$
12: Update $\theta_{t+1} = \theta_t - \beta_t \hat{\nabla} g(\theta_t, \delta_{i,t}^K; B) \nabla f(u_{t+1})$
13: end for
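A minimal sketch of one CID iteration is given below, specialized to problem (7) where \( f(u) = r \log u \) and \( g_i = \exp(\ell_i/r) \); `solve_inner` and `hypergrad` stand for the inner PGD loop (lines 5-7) and the implicit-gradient computation (line 8), both assumed helpers in the spirit of the sketches above, and the batch elements are assumed to expose their own \( g_i \).

```python
import torch

def cid_step(params, optimizer, u, batch, r, eta, solve_inner, hypergrad):
    # One iteration of Algorithm 1 for problem (7): f(u) = r * log(u),
    # g_i = exp(ell_i / r). `hypergrad` returns the implicit gradient of
    # g_i w.r.t. params (one tensor per parameter).
    g_vals = []
    avg_grad = [torch.zeros_like(p) for p in params]
    for sample in batch:                               # lines 4-9
        delta_K = solve_inner(sample)
        g_vals.append(sample.g(delta_K).detach())
        for acc, gr in zip(avg_grad, hypergrad(sample, delta_K)):
            acc += gr / len(batch)
    g_bar = torch.stack(g_vals).mean()                 # line 10

    u = (1 - eta) * u + eta * g_bar                    # line 11: running estimate of g
    scale = r / u                                      # grad f(u) for f(u) = r * log(u)
    optimizer.zero_grad()
    for p, acc in zip(params, avg_grad):
        p.grad = scale * acc                           # line 12 via the optimizer
    optimizer.step()
    return u
```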
3.4 Convergence Analysis of CID
In the following, we establish the convergence rate of the proposed CID algorithm under widely adopted assumptions in the bilevel and compositional optimization literature (see Appendix E for the statement of the assumptions and the proof of Theorem 1).
Theorem 1. Suppose that Assumptions 1–3 (given in the appendix) hold. Select the stepsizes as $\beta_t = \frac{1}{\sqrt{T}}$ and $\eta_t \in [\frac{1}{2}, 1)$, and the batchsize as $O(T)$. Then, the iterates $\theta_t, t = 0, ..., T - 1$ of the CID algorithm satisfy
$$\sum_{t=0}^{T-1} \frac{1}{T} \mathbb{E} \left[ \| \nabla F(\theta_t) \|^2 \right] \leq O \left( \frac{1}{\sqrt{T}} + (1 - \alpha \mu)^K \right).$$
The proof can be found in the Appendix. Theorem 1 indicates that Algorithm 1 can achieve an $\epsilon$-accurate stationary point by selecting $T = O(\epsilon^{-2})$ and $K = O(\log \frac{1}{\epsilon})$. The dependency on the batchsize can be reduced to $O(\epsilon^{-1})$ by selecting $\eta_t = T^{-0.25}$, which would also lead to a higher iteration complexity of $O(\epsilon^{-1})$.
4 Experiments
4.1 Experimental Setup
Datasets and Baselines. We consider image classification problems and compare the performance of our proposed DONE method with related baselines on four image recognition datasets: CIFAR10 [Krizhevsky & Hinton (2009)], SVHN [Netzer et al. (2011)], STL10 [Coates et al. (2011)], and GTSRB [Stallkamp et al. (2012)]. More details about the datasets can be found in the appendix. We compare against the standard adversarial training methods AT [Madry et al. (2017)] and FAT [Zhang et al. (2020a)], and three other state-of-the-art instance-reweighted adversarial training methods: GAIRAT [Zhang et al. (2020b)], WMMR [Zeng et al. (2021)], and MAIL [Liu et al. (2021a)]. We use the official publicly available code of the respective baselines and their recommended training configurations. For our algorithm DONE, we consider three implementations based on how we solve the inner loop optimization: (i) DONE-GD uses simple non-sign projected gradient descent steps; (ii) DONE-ADAM employs the Adam optimizer; and (iii) DONE-PGD adopts the projected gradient sign method. We run all baselines on a single NVIDIA Tesla V100 GPU.
More details about the training and hyperparameters search can be found in Appendix B.
Evaluation. For all baselines, we report the standard accuracy on clean data (SA), the robust accuracy against 20-step PGD attacks (RA-PGD) [Madry et al. (2017)], the robust accuracy against AutoAttack (RA-AA) [Croce & Hein (2020)], and the RA-PGD of the 30% most vulnerable classes (RA-Tail-30) as a measure of robustness against attacks on the most vulnerable data points.
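Since RA-Tail-30 may be unfamiliar, the following sketch computes per-class robust accuracy and averages it over the weakest 30% of classes; the exact aggregation used in the paper may differ, so this is an illustrative reading of the metric.

```python
import numpy as np

def ra_tail(correct_under_attack, labels, num_classes, tail_frac=0.3):
    # Per-class robust accuracy, then the mean over the weakest 30% of
    # classes (RA-Tail-30). `correct_under_attack` is a 0/1 array marking
    # whether each PGD-attacked test example was still classified correctly.
    per_class = np.array([correct_under_attack[labels == c].mean()
                          for c in range(num_classes)])
    k = max(1, int(round(tail_frac * num_classes)))
    return per_class, np.sort(per_class)[:k].mean()
```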
4.2 Better Distribution of Robust Performance
We first demonstrate that our proposed doubly robust formulation can indeed achieve robust performance in a more balanced way across the empirical distribution. Figure 1 shows the per class
Figure 1: Per-class robust accuracy comparisons between our method and traditional AT method on balanced and imbalanced (0.2 imbalance ratio) CIFAR10.
Table 1: Performance evaluations on balanced and imbalanced (0.2 imbalance ratio) CIFAR10.
| Method | Balanced CIFAR10 | Unbalanced CIFAR10 |
|----------|------------------|--------------------|
| | SA | RA-PGD | RA-Tail-30 | RA-AA | SA | RA-PGD | RA-Tail-30 | RA-AA |
| AT | 82.1 | 49.29 | 28.35 | 45.22 | 69.74 | 42.37 | 6.25 | 39.55 |
| FAT | **86.21** | 46.59 | 27.12 | 43.71 | - | - | - | - |
| WMMR | 81.6 | 49.53 | 31.24 | 40.9 | - | - | - | - |
| MAIL | 83.47 | 55.12 | 37.30 | 44.08 | 72.01 | 45.64 | 9.8 | 37.17 |
| GAIRAT | 83.22 | 54.81 | 37.45 | 41.10 | 73.87 | 45.18 | 16.9 | 35.43 |
| DONE-GD | 83.41 | 57.46 | 40.11 | **45.66** | 74.22 | **48.29** | **17.19** | **40.06** |
| DONE-PGD | 82.62 | **58.54** | 40.18 | 44.49 | **74.58** | 48.13 | 15.83 | 38.69 |
| DONE-ADAM| 82.25 | 58.51 | **40.36** | 44.20 | 74.56 | 48.15 | 17.10 | 39.46 |
robust accuracy (RA-PGD) of the standard AT method and our doubly-robust approach (i.e., the vanilla DONE-GD method) for both balanced and imbalanced (with an imbalance ratio of 0.2) CIFAR10. For the balanced case, our algorithm improves the robustness on all classes, with a more significant boost on the weakest classes (cat, deer, and bird). For the imbalanced case, the classes with more examples (the last five categories) heavily dominate the robust training dynamics. This consequently leads to very high robustness on those classes, but nearly zero robustness on the vulnerable classes (such as cat). However, our method can still boost the per-class RA-PGD on the weak classes (+11% on average for the 3 most vulnerable classes) while maintaining superior average RA-PGD. Overall, the results for both balanced and imbalanced settings clearly demonstrate that our doubly-robust approach can improve worst-case robustness and hence achieve superior average robust performance.
4.3 Main Results
Comparisons under CIFAR10. The overall performance of the compared baselines under both balanced and imbalanced CIFAR10 is reported in Table 1. We highlight the following important observations. First, overall our methods outperform all other baselines on all three robustness metrics (RA-PGD, RA-Tail-30, and RA-AA), while also maintaining a competitive standard accuracy (SA). In particular, our algorithms improve the RA-PGD of the strongest baseline (MAIL) by over 3%, with most of the gain coming from improvement on the weakest classes, as depicted in the RA-Tail-30 column. This shows that our doubly robust approach can mitigate the weak robustness on vulnerable data points while keeping the robust performance on well-guarded examples (i.e., easy data points) at the same level. Second, note that the instance reweighted baselines consistently outperform the methods without reweighting on the RA-Tail-30 metric, which indicates that reweighting in general boosts the robustness of weak examples. This advantage is even clearer in the imbalanced data case. Yet, our algorithms still outperform the other instance reweighted methods by around 3% in terms of RA-Tail-30 in the balanced data setup due to their doubly-robust nature, which clearly helps both average and worst-case robust performance. Third, note that the other methods that employ
Table 4: Comparisons with fast AT methods.
| Method | SA | RA-PGD | RA-Tail-30 |
|-----------|------|--------|------------|
| Fast-AT | **82.44** | 45.37 | 23.3 |
| Fast-AT-GA| 79.83 | 47.56 | 25.01 |
| Fast-BAT | 79.91 | 49.13 | 26.05 |
| DONE | 79.17 | **55.17** | **37.13** |
Table 2: Performance evaluations on balanced and imbalanced (0.2 imbalance ratio) SVHN.
| Method | Balanced SVHN | Unbalanced SVHN (0.2) |
|------------|---------------|------------------|------------------------|
| | SA | RA-PGD | RA-Tail-30 | RA-AA | SA | RA-PGD | RA-Tail-30 | RA-AA |
| AT | 93.21 | 57.82 | 47.21 | 46.27 | 88.46 | 51.08 | 33.67 | 41.13 |
| MAIL | 93.11 | 65.56 | 52.23 | 41.38 | 86.62 | 48.48 | 31.91 | 34.46 |
| GAIRAT | 91.56 | 64.74 | 52.15 | 39.41 | 86.73 | 53.79 | 36.46 | 33.25 |
| DONE-PGD | 92.80 | **66.20** | **55.84** | 48.32 | 88.05 | 54.85 | 39.91 | 41.44 |
| DONE-ADAM | 92.58 | 65.72 | 53.79 | **49.13** | **88.98** | **55.90** | **41.10** | **42.38** |
Table 3: Performance evaluations on STL10 and GTSRB (originally imbalanced) datasets.
| Method | STL10 | GTSRB |
|------------|-------|------------------|------------------------|
| | SA | RA-PGD | RA-Tail-30 | RA-AA | SA | RA-PGD | RA-Tail-30 | RA-AA |
| AT | 67.11 | 36.28 | 10.07 | 32.58 | 88.13 | 59.65 | 27.03 | **57.83** |
| MAIL | **68.06** | 38.20 | 13.33 | 32.86 | 88.47 | 55.96 | 20.73 | 53.44 |
| GAIRAT | 65.67 | 35.23 | 15.21 | 30.42 | 86.67 | 54.38 | 22.10 | 51.18 |
| DONE-PGD | 66.98 | **40.23** | **17.87** | 33.71 | **89.34** | **60.16** | 27.41 | 57.25 |
| DONE-ADAM | 66.92 | 39.70 | 17.62 | **34.59** | 88.76 | 60.05 | **28.35** | 57.70 |
heuristics to compute the instance weights achieve worse RA-AA performance than the standard AT method. In contrast, our algorithms, which also fall within the instance reweighted paradigm, still attain competitive RA-AA performance compared to the standard AT method. This highlights the suboptimality of using heuristics, which may be geared towards improving one metric (such as RA-PGD) but are not necessarily beneficial to the overall robustness of the model.
Performance Comparisons on the Other Datasets. Table 2 shows the evaluations of the compared baselines on the SVHN dataset. As depicted, our algorithms (DONE-PGD and DONE-ADAM) significantly outperform the standard AT method on the RA-PGD metric and at the same time achieve better robustness against AutoAttack (RA-AA). Compared with the instance reweighted baselines (MAIL & GAIRAT), the advantage of our methods is even more pronounced on the RA-AA metric (e.g., up to around +8% on RA-AA vs. +1.5% on RA-PGD in the balanced data setting). We also note considerable improvements on the GTSRB and STL10 datasets in Table 3. As with the CIFAR10 dataset, our approach yields an important boost on the RA-Tail-30 robustness metric compared to all other baselines, and the advantage is more significant in the imbalanced data case. These results consistently demonstrate that our doubly-robust approach can indeed improve worst-case robust performance while also maintaining or improving overall robustness.
Evaluations under the Fast AT Setting. We also compare our approach with fast adversarial training methods. For this setup, we generate the adversarial attacks during training with only 1 GD step after initialization with 1 PGD warm-up step (Zhang et al., 2022), and train all baselines for 25 epochs. We compare our method with Fast-BAT (Zhang et al., 2022), Fast-AT (Wong et al., 2020), and Fast-AT-GA (Andriushchenko & Flammarion, 2020) on CIFAR10. The evaluations of the compared methods are reported in Table 4. Our algorithm achieves much better robust performance while keeping a competitive SA. In particular, we note a significant boost (+11%) in RA-Tail-30, which is the main source of the improvement in overall RA-PGD.
5 CONCLUSIONS
In this paper, we proposed a novel doubly robust instance reweighted adversarial training framework based on DRO and bilevel optimization, which not only determines the instance weights for AT in a theoretically grounded way but also addresses the non-uniform performance of traditional AT by boosting the robustness of the most vulnerable examples. To address the technical challenges in solving the doubly robust optimization problem, we proposed a penalized reformulation using the log-barrier penalty method and developed a novel algorithm based on the implicit function theorem and on tracking a running average of the outer level function values. Our proposed framework also leads to a new finite-sum compositional bilevel optimization problem, which can be of great interest to the optimization community on its own and is solved by our developed algorithm with a theoretical guarantee. In experiments on standard benchmarks, our doubly-robust approach (DONE) outperforms related state-of-the-art baseline approaches in average robust performance and also improves the robustness against attacks on the weakest data points.
ACKNOWLEDGEMENTS
The work of D. Sow and Y. Liang was supported in part by the U.S. National Science Foundation under the grants CCF-1900145, ECCS-2113860 and CNS-2112471.
REFERENCES
Motasem Alfarra, Adel Bibi, Hasan Hammoud, Mohamed Gaafar, and Bernard Ghanem. On the decision boundaries of neural networks: A tropical geometry perspective. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.
Maksym Andriushchenko and Nicolas Flammarion. Understanding and improving fast adversarial training. *Advances in Neural Information Processing Systems*, 33:16048–16059, 2020.
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In *International conference on machine learning*, pp. 274–283. PMLR, 2018a.
Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. In *International conference on machine learning*, pp. 284–293. PMLR, 2018b.
Luca Bertinetto, Joao F Henriques, Philip Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. In *International Conference on Learning Representations (ICLR)*, 2018.
Jose Blanchet, Donald Goldfarb, Garud Iyengar, Fengpei Li, and Chaoxu Zhou. Unbiased simulation for optimizing stochastic function compositions. *arXiv preprint arXiv:1711.07564*, 2017.
Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. *Advances in neural information processing systems*, 32, 2019.
Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. Robust overfitting may be mitigated by properly learned smoothening. In *International Conference on Learning Representations*, 2021a.
Tianyi Chen, Yuejiao Sun, and Wotao Yin. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. *IEEE Transactions on Signal Processing*, 69:4937–4948, 2021b.
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Geoffrey Gordon, David Dunson, and Miroslav Dudík (eds.), *Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics*, volume 15 of *Proceedings of Machine Learning Research*, pp. 215–223, Fort Lauderdale, FL, USA, 11–13 Apr 2011. PMLR. URL https://proceedings.mlr.press/v15/coates11a.html.
Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *International conference on machine learning*, pp. 2206–2216. PMLR, 2020.
Adithya M Devraj and Jianshu Chen. Stochastic variance reduced primal dual algorithms for empirical composition optimization. *Advances in Neural Information Processing Systems*, 32, 2019.
Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. Mma training: Direct input space margin maximization through adversarial training. *arXiv preprint arXiv:1812.02637*, 2018.
Justin Domke. Generic methods for optimization-based modeling. *International Conference on Artificial Intelligence and Statistics (AISTATS)*, pp. 318–326, 2012.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *Proc. International Conference on Machine Learning (ICML)*, pp. 1126–1135, 2017.
|
RemfXx7ebP
|
Is there a constraint that makes the learned representations more evenly distributed on the hypersphere after projection? The supplement claims that having evenly distributed points on the hypersphere allows the model to better leverage a limited dataset. The projection to hypersphere space just normalizes the magnitude of the representations, but I don’t see how that allows for the distribution of directions on the hypersphere to be uniform.
|
RDesign: Hierarchical Data-Efficient Representation Learning for Tertiary Structure-Based RNA Design
Cheng Tan\textsuperscript{1,2*}, Yijie Zhang\textsuperscript{3*}, Zhangyang Gao\textsuperscript{1,2*}, Bozhen Hu\textsuperscript{1,2}, Siyuan Li\textsuperscript{1,2}, Zicheng Liu\textsuperscript{1,2}, Stan Z. Li\textsuperscript{2†}
\textsuperscript{1}Zhejiang University, Hangzhou, China \textsuperscript{3}McGill University, Montréal, Québec, Canada
\textsuperscript{2}AI Lab, Research Center for Industries of the Future, Westlake University, Hangzhou, China
\{tancheng, gaozhangyang\}@westlake.edu.cn; yj.zhang@mail.mcgill.ca
Abstract
While artificial intelligence has made remarkable strides in revealing the relationship between biological macromolecules’ primary sequence and tertiary structure, designing RNA sequences based on specified tertiary structures remains challenging. Though existing approaches in protein design have thoroughly explored structure-to-sequence dependencies in proteins, RNA design still confronts difficulties due to structural complexity and data scarcity. Moreover, direct transplantation of protein design methodologies into RNA design fails to achieve satisfactory outcomes although sharing similar structural components. In this study, we aim to systematically construct a data-driven RNA design pipeline. We crafted a large, well-curated benchmark dataset and designed a comprehensive structural modeling approach to represent the complex RNA tertiary structure. More importantly, we proposed a hierarchical data-efficient representation learning framework that learns structural representations through contrastive learning at both cluster-level and sample-level to fully leverage the limited data. By constraining data representations within a limited hyperspherical space, the intrinsic relationships between data points could be explicitly imposed. Moreover, we incorporated extracted secondary structures with base pairs as prior knowledge to facilitate the RNA design process. Extensive experiments demonstrate the effectiveness of our proposed method, providing a reliable baseline for future RNA design tasks. The source code and benchmark dataset are available at github.com/AABio/RDesign.
1 Introduction
Ribonucleic acid (RNA) is a fundamental polymer composed of ribonucleotides, serving as a vital biological macromolecule that regulates a plethora of cellular functions (Kaushik et al., 2018; Guo et al., 2010; Sloma & Mathews, 2016; Warner et al., 2018). Non-coding RNA strands exhibit intricate three-dimensional structures, which are essential for their biological activities (Feingold & Pachter, 2004; Gstir et al., 2014). The complex geometries of RNA molecules empower them to execute irreplaceable functions in crucial cellular processes (Crick, 1970), encompassing but not limited to mRNA translation (Roth & Breaker, 2009), RNA splicing (Runge et al., 2018; Wanrooij et al., 2010; Kortmann & Narberhaus, 2012), and gene regulation (Meyer et al., 2016).
Specifically, the primary structure of RNA refers to its linear sequence of ribonucleotides (Hofacker et al., 1994; Rother et al., 2011; Kagaya et al., 2023). The primary structure then folds into a secondary structure with canonical base pairs, forming stems and loops (Nicholas & Zuker, 2008; Yang et al., 2017; Liu et al., 2022). Tertiary interactions between secondary structural elements subsequently give rise to the three-dimensional structure (Qin et al., 2022; Wang & Dokholyan, 2022; Yesselman & Das, 2015; Das et al., 2010). Figure 1 illustrates an example of the hierarchical folding of RNA primary, secondary, and tertiary structures. Gaining a comprehensive understanding
*Equal contribution.
†Corresponding author.
of RNA structure is fundamental to figuring out biological mysteries and holds tremendous promise for biomedical applications. However, solving RNA structures through experimental techniques remains challenging due to their structural complexity and transient nature. Computational modeling of RNA structure and dynamics has thus become particularly valuable and urgent.
  
Figure 1: The schematic diagrams of RNA primary, secondary and tertiary structures.
Recent years have witnessed the emergence and rapid advancement of data-driven computational modeling of RNA (Angermueller et al., 2016; Xiong et al., 2021; Singh et al., 2021; Cao et al., 2024). In particular, algorithms for RNA secondary structure prediction have been extensively developed, yielding impressive results through leveraging large datasets of known secondary structures (Singh et al., 2019; Chen et al., 2019b; Fu et al., 2022; Tan et al., 2022). However, knowledge of RNA tertiary structures, which is crucial for thoroughly understanding RNA functional mechanisms and discovering RNA-targeted therapies (Warner et al., 2018; Churkin et al., 2018), remains limited (Townshend et al., 2021). The success of protein structure prediction (Jumper et al., 2021; Baek et al., 2021) has motivated researchers to tackle the even more challenging problem of RNA tertiary structure prediction, leading to the development of RNA tertiary structure folding algorithms such as DeepFoldRNA (Pearce et al., 2022), RoseTTAFoldNA (Baek et al., 2022), and RhoFold (Chen et al., 2022; Shen et al., 2022). While predicting RNA tertiary structures from primary sequences can leverage abundant sequence data (Chen et al., 2022), its inverse problem, designing RNA sequences that reliably fold into a specified tertiary structure, remains largely underexplored.
The key reasons that RNA tertiary structure modeling lags far behind protein tertiary structure modeling stem from two main aspects: (1) RNA demonstrates greater structural intricacy and flexibility than proteins, posing formidable hurdles for structure prediction and design (Townshend et al., 2021; Bernstein et al., 2012; Berman et al., 2000). The less constrained structure space of RNA leads to intractable challenges in modeling the RNA tertiary structure. (2) High-resolution RNA tertiary structures are scarce compared to proteins due to their conformational dynamics and instability (Rother et al., 2011). The quantity of available RNA structures constitutes less than 1% of that for proteins (Adamczyk et al., 2022b; Kalvari et al., 2021). To deal with the above problems, we propose a thorough pipeline for data-driven tertiary structure-based RNA design. In detail, we first compile a large-scale RNA tertiary structure dataset based on extant high-quality structure data from the Protein Data Bank (PDB) (Berman et al., 2000) and RNAsolo (Adamczyk et al., 2022b). Then, given the deficiencies of existing RNA tertiary structure modeling and the poor transferability of conventional protein structure modeling techniques to RNA, we propose a comprehensive RNA tertiary structure modeling approach. To optimize the use of the limited data, we introduce a hierarchical and data-efficient representation learning framework that applies contrastive learning at both the cluster and sample levels. By constraining the data representations within a limited hyperspherical space, we can explicitly impose intrinsic relationships between the data. Moreover, inspired by the correlation between RNA secondary and primary structures, we provide a strategy that utilizes the extracted secondary structure as prior information to guide RNA design.
The main contributions of this work are summarized as follows:
- We propose a formal formulation of the tertiary structure-based RNA design problem. To establish a fair benchmark for tertiary structure-based RNA design, we compile a large dataset of RNA tertiary structures and provide a fundamental data split based on both structural similarity and sequence length distribution.
• We propose a comprehensive structural modeling approach for the complex RNA tertiary structure and design an RNA design framework called RDesign, which is composed of a hierarchical representation learning scheme and a secondary structure imposing strategy.
• Through extensive experiments across standard RNA design benchmarks and generalization ability assessments, we demonstrate the efficacy of our proposed method. This provides a reliable pipeline for future research in this important and promising field.
2 RELATED WORK
2.1 BIOMOLECULAR ENGINEERING
In recent decades, the rapid advancements in biophysics, biochemistry, and chemical engineering have enabled a plethora of novel applications (Nagamune, 2017), including engineering enzymes for industrial biocatalysis (Pugh et al., 2018), tailoring antibodies for precision cancer therapies (Beschek et al., 2016), developing trackable fluorescent proteins for biological imaging (Rosenbaum, 2017), and optimizing polymerases for forensic DNA analysis (Martell et al., 2016). RNA design is of particular interest among them due to the diverse functions that RNA can fulfill, ranging from translation and gene expression to catalysis (Ellerson et al., 2010). This multifunctionality is ascribed to the structural diversity of RNA (Andronescu et al., 2004). In this work, we focus on tertiary structure-based RNA design to uncover the relationships between RNA structure and sequence.
2.2 PROTEIN DESIGN
RNA and protein are essential components of cells. Despite having different chemical constituents, their higher-order structures can be described similarly (Rother et al., 2011). Early works on computational protein design (Wang et al., 2018; Chen et al., 2019a) utilize multi-layer perceptron (MLP), and convolutional neural network (CNN) to predict residue types from protein structure. 3D CNNs have enabled tertiary structure-based design such as ProDCoNN (Zhang et al., 2020) and DenseCPD (Qi & Zhang, 2020). GraphTrans (Ingraham et al., 2019) combines attention (Vaswani et al., 2017) and auto-regressive decoding to generate protein sequences from graphs, inspiring a series of recent advancing approaches (Jing et al., 2020; Dauparas et al., 2022; Hsu et al., 2022; Tan et al., 2023; Gao et al., 2022b, 2023, 2024; Tan et al., 2024). While insights from protein research have illuminated RNA biology, RNA studies have trailed due to a scarcity of available data and complex structure modeling (Gan et al., 2003).
2.3 RNA DESIGN
The computational design of RNA sequences aims to generate nucleic acid strands that will fold into a targeted secondary or tertiary structure. Secondary structure-based RNA design was first introduced by Vienna (Hofacker et al., 1994). Early works solved the RNA design problem through stochastic optimization and energy minimization with thermodynamic parameters, such as RNAfold (Lorenz et al., 2011), Mfold (Zuker, 2003), UNAFold (Nicholas & Zuker, 2008), and RNAStructure (Mathews, 2014). Probabilistic models and posterior decoding were employed to solve this problem (Sato et al., 2009). Other works that operate on a single sequence and try to find a solution by changing a few nucleotides include RNAInverse (Hofacker et al., 1994), RNA-SSD (Andronescu et al., 2004), INFO-RNA (Busch & Backofen, 2006), and NUPACK (Zadeh et al., 2011). There are global searching methods, including antaRNA (Kleinkauf et al., 2015), aRNAque (Merleau & Smerlak, 2022), eM2dRNAs (Rubio-Largo et al., 2023) and MCTS-RNA (Yang et al., 2017). Reinforcement learning-based methods have also been developed (Runge et al., 2018).
Although numerous approaches have been studied for engineering RNA secondary structures, RNA design based on the tertiary structure is still challenging due to the lack of high-resolution structural data (Yesselman & Das, 2015; Das et al., 2010). Although structure prediction algorithms (Liu et al., 2022; Qin et al., 2022; Wang & Dokholyan, 2022) can utilize abundant RNA primary sequence information, the progress of RNA design has been hindered by the scarcity of determined RNA 3D structures. To alleviate the difficulty, we explore the uncharted areas of tertiary structure-centric RNA design systematically and propose a complete pipeline to address this challenge.
3 METHODS
3.1 PRELIMINARIES
For an RNA sequence in its primary structure, we assume it comprises $N$ nucleotide bases selected from the set of nucleotides: A (Adenine), U (Uracil), C (Cytosine), and G (Guanine). Therefore, the sequence can be represented as:
$$\text{Nucleotides} := \{A, U, C, G\},$$
$$S^N = \{s_i \in \text{Nucleotides} \mid i \in [1, N] \cap \mathbb{Z}\},$$
(1)
The formation of the tertiary structure requires the folding of this sequence in three-dimensional space, which can be denoted as:
$$\text{Atoms} := \{P, O5', C5', C4', C3', O3'\},$$
$$X^N = \{x_i^\omega \in \mathbb{R}^3 \mid i \in [1, N] \cap \mathbb{Z}, \omega \in \text{Atoms}\},$$
(2)
where the Atoms set denotes the six atoms that comprise the RNA backbone.
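For concreteness, these two structures can be stored as plain arrays. The following sketch is our own illustration; the variable names, the example sequence, and the array shapes are assumptions, not the paper's implementation:

```python
import numpy as np

# Primary structure S^N: a string over {A, U, C, G}.
sequence = "GGACUUCGGUCC"  # hypothetical example sequence
N = len(sequence)

# Tertiary structure X^N: 3D coordinates of the six backbone atoms per
# nucleotide, stored as an (N, 6, 3) array following the Atoms order above.
BACKBONE_ATOMS = ["P", "O5'", "C5'", "C4'", "C3'", "O3'"]
coords = np.zeros((N, len(BACKBONE_ATOMS), 3), dtype=np.float32)
```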
  
Figure 2: Brief view of RNA sequence and secondary structure.
We incorporate secondary structure information using dot-bracket notation. Unpaired nucleotides are represented by dots, and paired nucleotides are represented by brackets, as shown in Figure 2.
$$A^N = \{a_i \in \{\,\text{'.'},\ \text{'('},\ \text{')'}\,\} \mid i \in [1, N] \cap \mathbb{Z}\},$$
(3)
where $a_i$ is a dot if the nucleotide is unpaired, or a matching bracket otherwise. Finally, the tertiary structure-based RNA design problem could be formulated as:
$$F_\Theta : X^N \mapsto S^N,$$
such that $A^N = g(X^N)$ satisfies the pairing rules,
(4)
where $F_\Theta$ is a learnable mapping with parameters $\Theta$, and $g(\cdot)$ is a function that extracts the secondary structure denoted by dot-bracket notation from the tertiary structure. Namely, $F_\Theta$ denotes the mapping from $X^N$ to $S^N$, which means from the tertiary structure to the primary structure. It illustrates that while we map the tertiary structure $X$ of RNA to the primary sequence $S$, we ensure that the predicted sequence aligns with the pairing rules associated with the secondary structure $A$.
3.2 COMPREHENSIVE RNA TERTIARY STRUCTURE MODELING
We construct a local coordinate system $Q_i$ for the $i$-th nucleotide in the RNA tertiary structure. The detailed procedure for defining the local coordinate system is in Appendix D. While studies on protein design have achieved considerable success using only the Cα atoms to model backbone geometry, this approach does not readily translate to RNA. RNA exhibits a diversity of backbone conformations and base-pairing geometries that cannot be sufficiently captured by such modeling. The complexity and plasticity of RNA structure necessitate a comprehensive treatment.

To adequately capture the complex structural information inherent in the three-dimensional folding of RNA molecules, we propose a general approach for modeling RNA tertiary structure. We represent RNA tertiary structure as an attributed graph \( G = (V, E) \) comprising node attributes \( V \) and edge attributes \( E \). The graph is constructed by identifying the \( K \) nearest neighbors in 3D space for each node; each node \( i \) has a set of \( K \) neighbors denoted \( N(i, K) \). Specifically, \( V \in \mathbb{R}^{N \times f_n} \) contains \( f_n \)-dimensional node attributes for \( N \) nodes, and \( E \in \mathbb{R}^{N \times K \times f_m} \) contains \( f_m \)-dimensional edge attributes for each node’s \( K \) neighbors. By default, we set \( K = 30 \).
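A minimal sketch of this K-nearest-neighbor graph construction, assuming each nucleotide is located by its P-atom coordinate; the function name and the use of `torch.cdist` are our choices rather than the paper's code:

```python
import torch

def knn_graph(p_coords: torch.Tensor, k: int = 30) -> torch.Tensor:
    """Return the indices N(i, K) of the K nearest neighbors per nucleotide.

    p_coords: (N, 3) coordinates of the P atom of each nucleotide.
    """
    dist = torch.cdist(p_coords, p_coords)  # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))       # exclude self-loops
    _, neighbors = torch.topk(dist, k, dim=1, largest=False)
    return neighbors                        # (N, k) neighbor indices
```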
We outline the attributes used in our modeling approach along with their corresponding illustrations in Table 1, which includes two levels of attributes: (i) intra-nucleotide level attributes describing the local geometry of each nucleotide as the node attribute \( V \), and (ii) inter-nucleotide level attributes describing the relative geometry between nucleotides as the edge attribute \( E \).
**Intra-nucleotide level**
(1) The dihedral angles, shown as red arrows in Figure 3, are calculated. We represent the dihedral angles of the RNA backbone using sin and cos functions. (2) The spatial distances between the other intra-nucleotide atoms and the atom \( P_i \) are encoded into radial basis functions (RBFs). (3) The directions of the other intra-nucleotide atoms relative to the atom \( P_i \) are calculated with respect to the local coordinate system \( Q_i \).
**Inter-nucleotide level**
(1) An orientation encoding \( q(\cdot) \) is calculated from the quaternion representation of the spatial rotation matrix \( Q_i^T Q_j \). (2) The spatial distances between inter-nucleotide atoms from neighboring nucleotides and the atom \( P_i \) are encoded into radial basis functions (RBFs). (3) The directions of the other inter-nucleotide atoms relative to the atom \( P_i \) are calculated.
| Level | Feature | Illustration |
|----------------|---------------|-------------------------------------------------------------------------------|
| Intra-nucleotide | Dihedral Angle | \( \{ \sin, \cos \} \times \{ \alpha_i, \beta_i, \gamma_i, \delta_i, \epsilon_i, \zeta_i \} \) |
| | Distance | \( \text{RBF}(\|\omega_i - P_i\|) \mid \omega \in \{ O5', C5', C4', C3', O3' \} \) |
| | Direction | \( Q_i^T \frac{\omega_i - P_i}{\|\omega_i - P_i\|} \mid \omega \in \{ O5', C5', C4', C3', O3' \} \) |
| Inter-nucleotide | Orientation | \( q(Q_i^T Q_j) \) |
| | Distance | \( \text{RBF}(\|\omega_j - P_i\|) \mid j \in N(i, K), \omega \in \{ O5', C5', C4', C3', O3' \} \) |
| | Direction | \( Q_i^T \frac{\omega_j - P_i}{\|\omega_j - P_i\|} \mid j \in N(i, K), \omega \in \{ O5', C5', C4', C3', O3' \} \) |
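Two of the geometric features in Table 1, the RBF-encoded distances and the local-frame directions, can be sketched as follows; the RBF range, bin count, and width are assumptions chosen for illustration:

```python
import torch

def rbf_encode(d: torch.Tensor, d_min: float = 0.0, d_max: float = 20.0,
               n_bins: int = 16) -> torch.Tensor:
    """Encode scalar distances into radial basis functions (cf. Table 1)."""
    centers = torch.linspace(d_min, d_max, n_bins)  # assumed center spacing
    width = (d_max - d_min) / n_bins
    return torch.exp(-((d.unsqueeze(-1) - centers) / width) ** 2)

def direction_feature(Q_i: torch.Tensor, omega: torch.Tensor,
                      P_i: torch.Tensor) -> torch.Tensor:
    """Unit direction from P_i to atom omega, expressed in the local frame Q_i."""
    v = omega - P_i
    v = v / (v.norm(dim=-1, keepdim=True) + 1e-8)
    return (Q_i.transpose(-1, -2) @ v.unsqueeze(-1)).squeeze(-1)  # Q_i^T v
```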
### 3.3 Hierarchical Data-efficient Representation Learning
Now that RNA tertiary structure has been adequately modeled, the remaining challenge is how to learn from scarce data in a data-efficient manner. The key motivation is explicitly imposing the inherent data relationships based on prior knowledge. We first use \( L \) layers of message-passing neural networks (MPNNs) to learn the node representation. Specifically, the \( l \)-th hidden layer of the \( i \)-th nucleotide is defined as follows:
$$h_{V_i}^{(l)} = \text{MPNN}\Big(h_{V_i}^{(l-1)},\ \sum_{j \in N(i,K)} \big[\, h_{E_{ij}},\ h_{V_j}^{(l-1)} \,\big]\Big), \quad (5)$$
where \( h_{V_i}^{(0)}, h_E \) are the embeddings of the intra-nucleotide and inter-nucleotide level features from the tertiary structure modeling, respectively. When generating the RNA sequence, a fully connected layer \( f \) maps the node representation \( h_{V_i}^{(L)} \) to the RNA sequence space: \( f(h_{V_i}^{(L)}) \).
To enable data-efficient representation learning, we obtain the graph-level representation through the average pooling of the node representations \( h_G = \frac{1}{N} \sum_{i=1}^{N} h_{V_i}^{(L)} \) and the corresponding projection \( g(h_G) \), where \( g : \mathbb{R}^d \rightarrow \mathbb{S}^{d-1} \) maps the Euclidean embedding space onto the hyperspherical space.
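A minimal sketch of one message-passing layer (Eq. 5) together with the average pooling and the hyperspherical projection, read here as L2 normalization; the layer widths, the linear message/update maps, and the ReLU are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MPNNLayer(nn.Module):
    def __init__(self, d_node: int, d_edge: int):
        super().__init__()
        self.msg = nn.Linear(d_edge + d_node, d_node)  # message from [h_E_ij, h_V_j]
        self.upd = nn.Linear(2 * d_node, d_node)       # combine with h_V_i

    def forward(self, h_v, h_e, neighbors):
        # h_v: (N, d_node); h_e: (N, K, d_edge); neighbors: (N, K) indices
        h_j = h_v[neighbors]                                         # (N, K, d_node)
        messages = self.msg(torch.cat([h_e, h_j], dim=-1)).sum(dim=1)
        return F.relu(self.upd(torch.cat([h_v, messages], dim=-1)))

def graph_projection(h_v: torch.Tensor) -> torch.Tensor:
    h_g = h_v.mean(dim=0)            # average pooling over nucleotides
    return F.normalize(h_g, dim=-1)  # project onto the unit hypersphere
```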
We propose a hierarchical representation learning framework comprising cluster-level and confidence-aware sample-level representation learning, as shown in Figure 4. The cluster-level
representation learning utilizes topological similarity between RNA structures. We obtain RNA structure clusters based on TM-score, which indicates structural similarity (Zhang & Skolnick, 2005). We define positive pairs as RNA data with similar topological structures that attract each other in the embedding space, while negative pairs with dissimilar topological structures repel each other. The cluster-level representation learning is defined as follows:
$$L_{\text{cluster}} = - \sum_{p \in D} \log \frac{\exp(g_p \cdot g_q / \tau)}{\sum_{g_k \in \{g_q\} \cup K_c} \exp(g_p \cdot g_k / \tau)},$$
(6)
where \((g_p, g_q)\) is a positive pair that comes from the same structural cluster, \(K_c\) is a set of negative samples for \(g_p\) identified by the cluster they belong to, and \(D\) is the data set. We denote \(g_p\) as the graph representation projection of the \(p\)-th RNA sample for notational convenience.
The confidence-aware sample-level representation learning is designed to capture the microscopic intrinsic properties of RNA structures. The positive pairs are defined as a given RNA structure sample and its random perturbed structures. The perturbed structures are obtained by adding Gaussian noise to the experimentally determined coordinates. To prevent excessive deviation, we filter out the perturbed structures with low structural similarity (TM-score \(\leq 0.8\)) and high structural deviation (RMSD \(\geq 1.0\)). The RMSD also evaluates the confidence level of the perturbed data. Formally, the sample-level representation learning can be formulated as:
$$L_{\text{sample}} = - \sum_{p \in D} \gamma_{p,p'} \log \frac{\exp(g_p \cdot g_{p'} / \tau)}{\sum_{g_k \in \{g_{p'}\} \cup K_s} \exp(g_p \cdot g_k / \tau)},$$
(7)
where \(p'\) is the perturbed structure of the \(p\)-th RNA structure, and \(K_s\) is simply defined as the other samples apart from \(g_p\). The confidence score \(\gamma_{p,p'}\) is defined as \( e^{-\text{RMSD}(p,p')} \) so that the confidence approaches 1 as \(\text{RMSD}(p,p') \to 0\).
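Both objectives (Eqs. 6 and 7) are InfoNCE-style losses over the hyperspherical projections, differing only in how positives and negatives are chosen and in the confidence weight; a single-anchor sketch, where the temperature value is an assumption:

```python
import torch

def info_nce(g_p, g_pos, g_negs, tau: float = 0.1, weight: float = 1.0):
    """One-anchor InfoNCE term shared by Eq. 6 and Eq. 7.

    g_p, g_pos: (d,) unit vectors; g_negs: (M, d) negatives; weight is the
    confidence gamma for the sample-level loss (1.0 at the cluster level).
    """
    logits = torch.cat([(g_p * g_pos).sum().view(1), g_negs @ g_p]) / tau
    return -weight * torch.log_softmax(logits, dim=0)[0]

# Sample-level term (Eq. 7) with confidence weighting, e.g.:
# gamma = torch.exp(-rmsd)  # RMSD between original and perturbed structure
# loss = info_nce(g_p, g_p_perturbed, negatives, weight=gamma)
```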
The cluster-level representation provides a coarse-grained embedding, capturing the global topological similarity between RNA structures. The confidence-aware sample-level representation provides intrinsic knowledge that is robust to minor experimental deviations. As shown in Figure 4, by constraining the limited data into the restricted hyperspherical space with imposed prior knowledge, the intrinsic relationships between data are explicitly modeled.

Figure 4: The different levels of hierarchical representation learning. Green arrows denote positive pairs tend to attract each other, and the black arrow denotes negative pairs tend to repel each other.
### 3.4 Secondary Structure Imposing Strategy
With the given tertiary structure, we can derive the corresponding secondary structure using the notation shown in Figure 2. In the designed sequence, an unpaired site may take any of the four RNA nucleotides (A, U, C, G), whereas two paired sites must jointly form one of the allowed pairs: {CG, GC, AU, UA, UG, GU}. Given a pair of positions \((i, j)\) in the predicted primary sequence and its corresponding secondary structure, we calculate a confidence score for each position based on the predicted letter at that position and the known secondary-structure constraint. We then choose the position with the higher confidence score as the "reference" (say position \(i\)) and correct the predicted letter at position \(j\) so that the letters at \(i\) and \(j\) form an allowed pair.
Specifically, if position \(i\) is selected as the reference, we maintain the predicted letter at \(i\) unchanged and modify the predicted letter at \(j\) to satisfy the base pairing constraint. We then update the predicted primary sequence. By leveraging the information from the known secondary structure, we
can rectify and refine the initially predicted primary sequence. The refinement helps enhance the accuracy of RNA 3D structure prediction. In the training phase, we compel the model to sharpen the confidence of the nucleotides in the paired positions. The supervised loss is defined as:
$$L_{\text{sup}} = \sum_{(i,j) \in \text{Pairs}} \left[ \ell_{CE}(s_i, f(h_{V_i}^{(L)})/\tau') + \ell_{CE}(s_j, f(h_{V_j}^{(L)})/\tau') \right] + \sum_{k \notin \text{Pairs}} \ell_{CE}(s_k, f(h_{V_k}^{(L)})), \quad (8)$$
where Pairs contains all paired position indices given by the secondary structure, $\tau'$ is a temperature (set to 0.5 by default) that sharpens the confidence of paired nucleotides, and $s_i$ denotes the ground-truth nucleotide at position $i$.
The training objective is the linear combination of representation learning loss and supervised loss:
$$L = L_{\text{sup}} + \lambda(L_{\text{cluster}} + L_{\text{sample}}), \quad (9)$$
where we set the weight parameter $\lambda$ as 0.5 by default.
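A minimal sketch of the secondary-structure imposing step from Sec. 3.4, assuming per-position probabilities from the decoder; restricting the complement table to Watson-Crick pairs (omitting the UG/GU wobble pairs) and the tie-breaking rule are our simplifications:

```python
import torch

NUCS = ["A", "U", "C", "G"]
COMPLEMENT = {"A": "U", "U": "A", "C": "G", "G": "C"}  # simplified pair table

def impose_pairs(probs: torch.Tensor, pairs) -> str:
    """probs: (N, 4) predicted nucleotide probabilities; pairs: list of (i, j).

    For each base pair, keep the higher-confidence position as the reference
    and rewrite its partner so that the two letters form an allowed pair.
    """
    seq = [NUCS[k] for k in probs.argmax(dim=1).tolist()]
    conf = probs.max(dim=1).values
    for i, j in pairs:
        ref, other = (i, j) if conf[i] >= conf[j] else (j, i)
        seq[other] = COMPLEMENT[seq[ref]]
    return "".join(seq)
```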
4 EXPERIMENTS
We evaluate RDesign on the tertiary structure-based RNA design task by comparing it with four categories of baseline models: (i) sequence-based models (SeqRNN and SeqLSTM) that do not utilize any structural information and can be viewed as the performance reference for RNA design; (ii) a tertiary structure-based model (StructMLP) that exploits structural features while ignoring the graph topological structure; (iii) tertiary structure-based models (StructGNN and GraphTrans) (Ingraham et al., 2019) and PiFold (Gao et al., 2022b) that incorporate the graph topological structure; (iv) secondary structure-based RNA sequence design models (MCTS-RNA (Yang et al., 2017), LEARNA (Runge et al., 2018), eM2dRNAs (Rubio-Largo et al., 2023), aRNAque (Merleau & Smerlak, 2022)). The detailed experimental settings are described in Appendix A, and the dataset descriptions in Appendices B and C.
4.1 STANDARD TERTIARY STRUCTURE-BASED RNA DESIGN
Using our carefully curated benchmark dataset, we trained the model using the training set. Then, we evaluated the performance of the model on the testing set by selecting the model with the lowest loss on the validation set. Given that RNA sequences of varying lengths may impact prediction results, we stratified the testing set into three groups based on RNA length: (i) Short - RNA samples less than or equal to 50 nucleotides; (ii) Medium - RNA samples greater than 50 nucleotides but less than or equal to 100 nucleotides; (iii) Long - RNA samples greater than 100 nucleotides. To gain a thorough understanding of the relationship between RNA length and model accuracy, we reported both the recovery and Macro-F1 metrics for Short, Medium, and Long testing samples separately, in addition to overall testing set performance.
Table 2: The recovery on the benchmark dataset. The best results are highlighted in bold.
| Method | Short | Medium | Long | All |
|--------------|-----------|-----------|-----------|-----------|
| SeqRNN (h=128) | 26.52±1.07 | 24.86±0.82 | 27.31±0.41 | 26.23±0.87 |
| SeqRNN (h=256) | 27.61±1.85 | 27.16±0.63 | 28.71±0.14 | 28.24±0.46 |
| SeqLSTM (h=128) | 23.48±1.07 | 26.32±0.05 | 26.78±1.12 | 24.70±0.64 |
| SeqLSTM (h=256) | 25.00±0.00 | 26.89±0.35 | 28.55±0.13 | 26.93±0.93 |
| StructMLP | 25.72±0.51 | 25.03±1.39 | 25.38±1.89 | 25.35±0.25 |
| StructGNN | 27.55±0.94 | 28.78±0.87 | 28.23±1.95 | 28.23±0.71 |
| GraphTrans | 26.15±0.93 | 23.78±1.11 | 23.80±1.69 | 24.73±0.93 |
| PiFold | 24.81±2.01 | 25.90±1.56 | 23.55±4.13 | 24.48±1.13 |
| RDesign | **37.22±1.14** | **44.89±1.67** | **43.06±0.08** | **41.53±0.38** |
As presented in Table 2, the baseline models achieved suboptimal recovery scores, with performance ranging from 24-28%. Unexpectedly, tertiary structure-based models like StructMLP, StructGNN, and GraphTrans attained comparable results to sequence-based models. This indicates that directly applying protein design techniques to RNA is misguided and fails to capture the intricacies of RNA structures. Moreover, StructMLP and GraphTrans achieved higher recovery scores for short RNA
sequences but struggled on longer, and thus more complex, RNA structures. Their poor results on the Long split stem from an inability to learn from more intricate RNA structures and from limited generalization capability. In contrast, our RDesign model outperforms all baseline methods on the recovery metric, achieving substantial gains. RDesign's strong performance, particularly on the Medium and Long splits, indicates that it learns intrinsic RNA structural properties.
Table 3: The Macro-F1 on the benchmark dataset. The score is multiplied by 100 for aesthetics.
| Method | Short | Medium | Long | All |
|-----------------|-------------|-------------|-------------|-------------|
| SeqRNN (h=128) | 17.22±1.69 | 17.20±1.91 | 8.44±2.70 | 17.74±1.59 |
| SeqRNN (h=256) | 12.54±2.94 | 13.64±5.24 | 8.85±2.41 | 13.64±2.69 |
| SeqLSTM (h=128) | 9.89±0.57 | 10.44±1.42 | 10.71±2.53 | 10.28±0.61 |
| SeqLSTM (h=256) | 9.26±1.16 | 9.48±0.74 | 7.14±0.00 | 10.93±0.15 |
| StructMLP | 17.46±2.39 | 18.57±3.45 | 17.53±8.43 | 18.88±2.50 |
| StructGNN | 24.01±3.62 | 22.15±4.67 | 26.05±6.43 | 24.87±1.65 |
| GraphTrans | 16.34±2.67 | 16.39±4.74 | 18.67±7.16 | 17.18±3.81 |
| PiFold | 17.48±2.24 | 18.10±6.76 | 14.06±3.53 | 17.45±1.33 |
| RDesign | **38.25±3.06** | **40.41±1.27** | **41.48±0.91** | **40.89±0.49** |
As seen in Table 3, there are large gaps between the recovery metrics and Macro-F1 scores of most baseline models, which suggests those models tend to predict high-frequency nucleotide letters rather than reflecting the actual tertiary structure. Among them, only StructGNN achieved consistent recovery and Macro-F1 results, but with unsatisfactory performance. Our proposed RDesign consistently outperformed all other models on both metrics, demonstrating its effectiveness.
### 4.2 Evaluate the Generalization on Rfam and RNA-Puzzles
To assess the generalization capability of our model, we evaluated our model and the baseline methods on the Rfam (Kalvari et al., 2021) and RNA-Puzzles (Miao et al., 2020) datasets using the model pre-trained on our benchmark training set. We present the results in Table 4. The performance remained consistent with that on our benchmark dataset. Specifically, StructGNN, which effectively learned certain tertiary structure information, achieved a relatively small gap between the recovery metric and Macro-F1 score. In contrast, the other baselines, which learned little structural information, performed sub-optimally. Our proposed RDesign model demonstrated superior generalization on both datasets and outperformed all the baselines.
It is notable that the results reported here were generated by directly assessing models pretrained on the entire training set, mirroring real-world scenarios. Furthermore, Appendix H includes the results of models pretrained on a training set from which data similar to the test sets was removed.
Table 4: The overall recovery and Macro-F1 scores on the Rfam and RNA-Puzzles datasets.

| Method | Recovery (%) ↑ Rfam | Recovery (%) ↑ RNA-Puzzles | Macro F1 (×100) ↑ Rfam | Macro F1 (×100) ↑ RNA-Puzzles |
|-----------------|----------------|----------------|----------------|----------------|
| SeqRNN (h=128) | 27.99±1.21 | 28.99±1.16 | | |
| SeqRNN (h=256) | 30.94±0.41 | 31.25±0.72 | | |
| SeqLSTM (h=128) | 24.96±0.46 | 25.78±0.43 | | |
| SeqLSTM (h=256) | 31.45±0.01 | 31.62±0.20 | | |
| StructMLP | 24.40±1.63 | 24.22±1.28 | | |
| StructGNN | 27.64±3.31 | 27.96±3.08 | | |
| GraphTrans | 23.81±2.57 | 22.21±2.98 | | |
| PiFold | 22.55±4.13 | 23.78±6.52 | | |
| MCTS-RNA | 31.74±0.07 | 32.06±1.87 | | |
| LEARNA | 31.92±2.37 | 30.94±4.15 | | |
| aRNAque | 30.01±3.26 | 31.07±2.32 | | |
| eM2dRNAs | 33.34±1.02 | 37.10±3.24 | | |
| RDesign | **56.12±1.03** | **50.12±1.07** | **53.27±1.28** | **49.24±1.07** |
4.3 Ablation Study
We conducted an ablation study of RDesign and present the results in Table 5. First, we replaced our tertiary structure modeling approach with the classical modeling from protein design, which led to a significant decrease in performance. Second, removing the hierarchical representation learning also resulted in a performance drop, indicating its importance. Replacing the hyperspherical space with Euclidean space led to a substantial reduction in performance, underscoring its impact on data-efficient learning. In contrast, removing the secondary structure constraints caused a relatively small decrease in performance, because RDesign itself can already generate accurate RNA sequences. Additionally, we further tested whether our designed sequences can fold into the desired tertiary structures, which is the ultimate aim of the RNA sequence design problem. However, due to the lack of reliable RNA tertiary structure prediction tools, we could only conduct this experiment qualitatively. Results for three example structures from each length category are reported and analyzed in Section 5.
Table 5: The ablation study of our model on three datasets.
| Method | Recovery (%) Ours | Recovery (%) Rfam | Recovery (%) RNA-Puzzles | Macro F1 Ours | Macro F1 Rfam | Macro F1 RNA-Puzzles |
|-------------------------------|-------|-------|-------|-------|-------|-------|
| RDesign | 41.53 | 56.12 | 50.12 | 40.89 | 53.27 | 49.24 |
| w/o our modeling | 36.45 | 53.19 | 44.93 | 36.33 | 48.95 | 43.88 |
| w/o representation learning | 37.12 | 52.17 | 46.88 | 36.52 | 49.22 | 46.36 |
| w/o hyperspherical space | 30.69 | 36.33 | 36.00 | 30.67 | 30.34 | 33.57 |
| w/o secondary structure | 38.55 | 54.83 | 47.69 | 38.77 | 52.85 | 47.62 |
5 Evaluation of the Capability of Folding for Designed Sequences
We used RhoFold (Shen et al., 2022) to predict the structures of RNA sequences designed by RDesign. Figure 5 shows three visualization examples: (a) a short sequence reconstructed by RDesign; (b) a long sequence that was designed with a similar structure and low structure deviation; (c) a complicated sequence that was designed with a similar structure but failed to achieve low structure deviation. These visualization examples demonstrate the effectiveness of our RDesign model in designing RNA sequences with structures similar to the target structure.
Figure 5: Visualization of RDesign’s designed examples.
6 Conclusion and Limitations
In this work, we investigate the challenging task of designing RNA tertiary structures. We compile a benchmark dataset to systematically assess the performance of various computational models on this task. While existing protein design methods cannot be directly applied, we propose a hierarchical data-efficient representation learning framework. Our framework explicitly captures the intrinsic relationships within the data while constraining the limited data to a restricted hyperspherical space. We also introduce a secondary structure constraining strategy to leverage extra structural information. Extensive experiments demonstrate the effectiveness of our proposed RDesign model. We hope this work provides a new perspective on tertiary structure-based RNA design. A limitation is that our method is currently limited to in silico design; we leave wet-lab validation to future work.
7 ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their constructive and helpful reviews. This work was supported by National Science and Technology Major Project (No. 2022ZD0115101), National Natural Science Foundation of China Project (No. U21A20427), the Center of Synthetic Biology and Integrated Bioengineering of Westlake University and Integrated Bioengineering of Westlake University Project (No. WU2022A009) and the Westlake University Industries of the Future Research Funding Project (No. WU2023C019).
REFERENCES
Bartosz Adamczyk, Maciej Antczak, and Marta Szachniuk. RNAsolo: a repository of cleaned PDB-derived RNA 3D structures. *Bioinformatics*, 38(14):3668–3670, 06 2022a. ISSN 1367-4803. doi: 10.1093/bioinformatics/btac386.
Bartosz Adamczyk, Maciej Antczak, and Marta Szachniuk. RNAsolo: a repository of cleaned PDB-derived RNA 3D structures. *Bioinformatics*, 38(14):3668–3670, 2022b.
Mirela Andronescu, Anthony P Fejes, Frank Hutter, Holger H Hoos, and Anne Condon. A new algorithm for rna secondary structure design. *Journal of molecular biology*, 336(3):607–624, 2004.
Christof Angermueller, Tanel Pärnamaa, Leopold Parts, and Oliver Stegle. Deep learning for computational biology. *Molecular systems biology*, 12(7):878, 2016.
Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch, R Dustin Schaeffer, et al. Accurate prediction of protein structures and interactions using a three-track neural network. *Science*, 373(6557):871–876, 2021.
Minkyung Baek, Ryan McHugh, Ivan Anishchenko, David Baker, and Frank DiMaio. Accurate prediction of nucleic acid and protein-nucleic acid complexes using rosettafoldna. *bioRxiv*, 2022.
Protein Data Bank. Protein data bank. *Nature New Biol*, 233:223, 1971.
Helen M Berman, John Westbrook, Zukang Feng, Gary Gilliland, Talapady N Bhat, Helge Weissig, Ilya N Shindyalov, and Philip E Bourne. The protein data bank. *Nucleic acids research*, 28(1):235–242, 2000.
BE Bernstein, E Birney, I Dunham, ED Green, C Gunter, and M Snyder (ENCODE Project Consortium). An integrated encyclopedia of DNA elements in the human genome. *Nature*, 489:57–74, 2012.
Anke Busch and Rolf Backofen. Info-rna—a fast approach to inverse rna folding. *Bioinformatics*, 22(15):1823–1831, 2006.
Hanqun Cao, Cheng Tan, Zhangyang Gao, Yilun Xu, Guangyong Chen, Pheng-Ann Heng, and Stan Z Li. A survey on generative diffusion models. *IEEE Transactions on Knowledge and Data Engineering*, 2024.
Jiayang Chen, Zhihang Hu, Siqi Sun, Qingxiong Tan, Yixuan Wang, Qinze Yu, Licheng Zong, Liang Hong, Jin Xiao, Tao Shen, et al. Interpretable rna foundation model from unannotated data for highly accurate rna structure and function predictions. *bioRxiv*, 2022.
Sheng Chen, Zhe Sun, Lihua Lin, Zifeng Liu, Xun Liu, Yutian Chong, Yutong Lu, Huiying Zhao, and Yuedong Yang. To improve protein sequence profile prediction through image captioning on pairwise residue distance map. *Journal of chemical information and modeling*, 60(1):391–399, 2019a.
Xinshi Chen, Yu Li, Ramzan Umarov, Xin Gao, and Le Song. Rna secondary structure prediction by learning unrolled algorithms. In *International Conference on Learning Representations*, 2019b.
|
IEduRUO55F
|
Another weak point is the strong assumption on the fitness function F(.). The evolutionary search for the LLM generated reward function requires a fitness function capable of assessing the quality of each proposed reward function. In this work, the fitness function F(.) is implicitly assumed to have access to the ground truth reward function to evaluate the induced policies of the proposed reward functions.
|
EUREKA: Human-Level Reward Design via Coding Large Language Models
Yecheng Jason Ma1,2, William Liang2, Guanzhi Wang1,3, De-An Huang1, Osbert Bastani2, Dinesh Jayaraman2, Yuke Zhu1,4, Linxi “Jim” Fan1 †, Anima Anandkumar1,3 †
1NVIDIA, 2UPenn, 3Caltech, 4UT Austin; †Equal advising
https://eureka-research.github.io
Abstract
Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present EUREKA, a human-level reward design algorithm powered by LLMs. EUREKA exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, EUREKA generates reward functions that outperform expert human-engineered rewards. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, EUREKA outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of EUREKA also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. Finally, using EUREKA rewards in a curriculum learning setting, we demonstrate for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed.
1 Introduction
Large Language Models (LLMs) have excelled as high-level semantic planners for robotics tasks (Ahn et al., 2022; Singh et al., 2023), but whether they can be used to learn complex low-level manipulation
Corresponding authors: jasonyma@seas.upenn.edu, dr.jimfan.ai@gmail.com
Figure 1: EUREKA generates human-level reward functions across diverse robots and tasks. Combined with curriculum learning, EUREKA for the first time, unlocks rapid pen-spinning capabilities on an anthropomorphic five-finger hand.
tasks, such as dexterous pen spinning, remains an open problem. Existing attempts require substantial domain expertise to construct task prompts or learn only simple skills, leaving a substantial gap in achieving human-level dexterity (Yu et al., 2023; Brohan et al., 2023).
On the other hand, reinforcement learning (RL) has achieved impressive results in dexterity (Andrychowicz et al., 2020; Handa et al., 2023) as well as in many other domains, provided that human designers can carefully construct reward functions that accurately codify and provide learning signals for the desired behavior; likewise, many real-world RL tasks admit sparse rewards that are difficult to learn from, necessitating reward shaping that provides incremental learning signals. Despite their fundamental importance, reward functions are notoriously difficult to design in practice (Russell & Norvig, 1995; Sutton & Barto, 2018); a recent survey finds that 92% of polled reinforcement learning researchers and practitioners report manual trial-and-error reward design and 89% indicate that their designed rewards are sub-optimal (Booth et al., 2023) and lead to unintended behavior (Hadfield-Menell et al., 2017).
Given the paramount importance of reward design, we ask whether it is possible to develop a universal reward programming algorithm using state-of-the-art coding LLMs, such as GPT-4. Their remarkable abilities in code writing, zero-shot generation, and in-context learning have previously enabled effective programmatic agents (Shinn et al., 2023; Wang et al., 2023a). Ideally, this reward design algorithm should achieve human-level reward generation capabilities that scale to a broad spectrum of tasks, including dexterity, automate the tedious trial-and-error procedure without human supervision, and yet be compatible with human oversight to assure safety and alignment.
We introduce Evolution-driven Universal REward Kit for Agent (EUREKA), a novel reward design algorithm powered by coding LLMs with the following contributions:
1. **Achieves human-level performance on reward design** across a diverse suite of 29 open-sourced RL environments that include 10 distinct robot morphologies, including quadruped, quadcopter, biped, manipulator, as well as several dexterous hands; see Fig. 1. Without any task-specific prompting or reward templates, EUREKA autonomously generates rewards that outperform expert human rewards on 83% of the tasks and realizes an average normalized improvement of 52%.
2. **Solves dexterous manipulation tasks that were previously not feasible by manual reward engineering.** We consider pen spinning, in which a five-finger hand needs to rapidly rotate a pen in pre-defined spinning configurations for as many cycles as possible. Combining EUREKA with curriculum learning, we demonstrate for the first time rapid pen spinning maneuvers on a simulated anthropomorphic Shadow Hand (see Figure 1 bottom).
3. Enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF) that can generate more performant and human-aligned reward functions based on various forms of human inputs without model updating. We demonstrate that EUREKA can readily benefit from and improve upon existing human reward functions. Likewise, we showcase EUREKA’s capability in using purely textual feedback to generate progressively more human-aligned reward functions.
Unlike prior work on using LLMs to aid reward design (L2R; Yu et al., 2023), EUREKA is completely free of task-specific prompts, reward templates, and few-shot examples. In our experiments, EUREKA significantly outperforms L2R due to its ability to generate free-form, expressive reward programs. EUREKA's generality is made possible through three key algorithmic design choices: environment as context, evolutionary search, and reward reflection. First, by taking the environment source code as context, EUREKA can zero-shot generate executable reward functions from the backbone coding LLM (GPT-4). Then, EUREKA substantially improves the quality of its rewards by performing evolutionary search, iteratively proposing batches of reward candidates and refining the most promising ones within the LLM context window. This in-context improvement is made effective via reward reflection, a textual summary of the reward quality based on policy training statistics that enables automated and targeted reward editing; see Fig. 3 for an example of a EUREKA zero-shot reward as well as various improvements accumulated during its optimization. To ensure that EUREKA can scale its reward search to maximum potential, EUREKA evaluates intermediate rewards using GPU-accelerated distributed reinforcement learning on IsaacGym (Makoviychuk et al., 2021), which offers up to three orders of magnitude of speed-up in policy learning, making EUREKA an extensive algorithm that scales naturally with more compute. See Fig. 2 for an overview. We are committed to open-sourcing all prompts, environments, and generated reward functions to promote further research on LLM-based reward design.
2 Problem Setting and Definitions
The goal of reward design is to return a shaped reward function for a ground-truth reward function that may be difficult to optimize directly (e.g., sparse rewards); this ground-truth reward function may only be accessed via queries by the designer. We first introduce the formal definition from Singh et al. (2010), which we then adapt to the program synthesis setting, termed reward generation.
Definition 2.1. (Reward Design Problem (Singh et al., 2010)) A reward design problem (RDP) is a tuple \( P = \langle M, \mathcal{R}, \pi_M, F \rangle \), where \( M = \langle S, A, T \rangle \) is the world model with state space \( S \), action space \( A \), and transition function \( T \). \( \mathcal{R} \) is the space of reward functions; \( \mathcal{A}_M(\cdot) : \mathcal{R} \rightarrow \Pi \) is a learning algorithm that outputs a policy \( \pi : S \rightarrow \Delta(A) \) that optimizes reward \( R \in \mathcal{R} \) in the resulting Markov Decision Process (MDP) \( (M, R) \); \( F : \Pi \rightarrow \mathbb{R} \) is the fitness function that produces a scalar evaluation of any policy, which may only be accessed via policy queries (i.e., evaluating the policy using the ground-truth reward function). In an RDP, the goal is to output a reward function \( R \in \mathcal{R} \) such that the policy \( \pi := \mathcal{A}_M(R) \) that optimizes \( R \) achieves the highest fitness score \( F(\pi) \).
Reward Generation Problem. In our problem setting, every component within an RDP is specified via code. Then, given a string \( l \) that specifies the task, the objective of the reward generation problem is to output a reward function code \( R \) such that \( F(\mathcal{A}_M(R)) \) is maximized.
3 Method
EUREKA consists of three algorithmic components: 1) environment as context that enables zero-shot generation of executable rewards, 2) evolutionary search that iteratively proposes and refines reward candidates, and 3) reward reflection that enables fine-grained reward improvement. See Alg. 1 for pseudocode; all prompts are included in App. A.
3.1 Environment as Context
Reward design requires the environment specification to be provided to the LLM. We propose directly feeding the raw environment source code (without the reward code, if it exists) as context. Given that any reward function is a function over the environment's state and action variables, the only requirement on the source code is that it exposes these variables, which is easy to satisfy.
import torch  # assumed import; the figure omits it

def compute_reward(object_rot, goal_rot, object_angvel, object_pos, fingertip_pos):
    # Rotation reward: closeness of the object orientation to the goal orientation
    rot_diff = torch.abs(torch.sum(object_rot * goal_rot, dim=1) - 1) / 2
    rotation_reward_temp = 28.0
    rotation_reward = torch.exp(-rotation_reward_temp * rot_diff)
    # Distance reward: distance from the closest fingertip to the object
    min_distance_temp = 15.0
    min_distance = torch.min(torch.norm(fingertip_pos - object_pos[:, None], dim=2), dim=1).values
    # distance_reward = min_distance  # earlier iteration, superseded below
    uncapped_distance_reward = torch.exp(-min_distance_temp * min_distance)
    distance_reward = torch.clamp(uncapped_distance_reward, 0.0, 1.0)
    # total_reward = rotation_reward + distance_reward  # earlier iteration, superseded below
    # Angular velocity penalty: applies once the object spins faster than the threshold
    angvel_norm = torch.norm(object_angvel, dim=1)
    angvel_threshold = 0.5
    angvel_penalty_temp = 5.0
    angular_velocity_penalty = torch.where(
        angvel_norm > angvel_threshold,
        torch.exp(-angvel_penalty_temp * (angvel_norm - angvel_threshold)),
        torch.zeros_like(angvel_norm))
    total_reward = 0.5 * rotation_reward + 0.3 * distance_reward - 0.2 * angular_velocity_penalty
    reward_components = {
        "rotation_reward": rotation_reward,
        "distance_reward": distance_reward,
        "angular_velocity_penalty": angular_velocity_penalty,
    }
    return total_reward, reward_components
Figure 3: EUREKA can zero-shot generate executable rewards and then flexibly improve them with many distinct types of free-form modification, such as (1) changing the hyperparameter of existing reward components, (2) changing the functional form of existing reward components, and (3) introducing new reward components.
In cases where the source code is not available, relevant state information can also be supplied via an API, for example. In practice, to ensure that the environment code fits within the LLM's context window and does not leak simulation internals (so that the same prompt can be expected to generalize to new simulators), we use an automatic script to extract just the environment code snippets that expose and fully specify the environment state and action variables; see App. D for details.
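A sketch of how the environment-as-context prompt might be assembled; the instruction wording and the function name are placeholders, not EUREKA's actual prompts (those are given in App. A):

```python
def build_initial_prompt(task_description: str, env_source: str) -> str:
    """Concatenate generic reward-design instructions with the raw environment
    source (reward code stripped) and the task string."""
    instructions = (
        "Write a Python reward function compute_reward(...) for the task below. "
        "Expose each reward component in a dictionary output."
    )  # placeholder wording; see Prompts 1 and 3 in App. A for the real text
    return f"{instructions}\n\nTask: {task_description}\n\n{env_source}"
```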
Given the environment as context, EUREKA instructs the coding LLM to directly return executable Python code with only generic reward-design and formatting tips, such as exposing the individual components of the reward as a dictionary output (for reasons that will become apparent in Sec. 3.3); see Prompts 1 and 3 in App. A. Remarkably, with only these minimal instructions, EUREKA can already zero-shot generate plausible-looking rewards in diverse environments on its first attempt. An example EUREKA output is shown in Fig. 3. As seen, EUREKA adeptly composes over existing observation variables (e.g., fingertip_pos) in the provided environment code and produces competent reward code, all without any environment-specific prompt engineering or reward templating. On the first try, however, the generated reward may not always be executable, and even when it is, it can be quite sub-optimal with respect to the task fitness metric $F$. While we could improve the prompt with task-specific formatting and reward-design hints, doing so does not scale to new tasks and hinders the overall generality of our system. How can we effectively overcome the sub-optimality of single-sample reward generation?
Algorithm 1 EUREKA
1: Require: Task description $l$, environment code $M$, coding LLM $LLM$, fitness function $F$, initial prompt $prompt$
2: Hyperparameters: search iteration $N$, iteration batch size $K$
3: for $N$ iterations do
4: // Sample $K$ reward code from LLM
5: $R_1, ..., R_K \sim LLM(l, M, prompt)$
6: // Evaluate reward candidates
7: $s_1 = F(R_1), ..., s_K = F(R_K)$
8: // Reward reflection
9: prompt := prompt : Reflection($R^n_{best}, s^n_{best}$), where $best = \arg\max_k s_k$
10: // Update Eureka reward
11: $R_{Eureka}, s_{Eureka} = (R^n_{best}, s^n_{best})$, if $s^n_{best} > s_{Eureka}$
12: Output: $R_{Eureka}$
3.2 Evolutionary Search
In this section, we demonstrate how evolutionary search provides a natural solution to the aforementioned execution-error and sub-optimality challenges. In each iteration, EUREKA samples several independent outputs from the LLM (Line 5 in Alg. 1). Since the generations are i.i.d., the probability that all reward functions from an iteration are buggy decreases exponentially as the number of samples increases. We find that for all environments we consider, sampling a modest number of rewards (16) yields at least one executable reward code in the first iteration.
Given executable reward functions from an earlier iteration, EUREKA performs in-context reward mutation, proposing new improved reward functions from the best one in the previous iteration. Concretely, a new EUREKA iteration takes the best-performing reward from the previous iteration, its reward reflection (Sec. 3.3), and the mutation prompt (Prompt 2 in App. A) as context, and generates $K$ more i.i.d. reward outputs from the LLM; several illustrative reward modifications are visualized in Fig. 3. This iterative optimization continues until a specified number of iterations has been reached. Finally, we perform multiple random restarts to find better maxima; this is a standard strategy in global optimization. In all our experiments, EUREKA conducts 5 independent runs per environment and, for each run, searches for 5 iterations with $K = 16$ samples per iteration.
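A minimal Python sketch of this outer loop (cf. Alg. 1); `llm_sample`, `evaluate_fitness`, and `reflect` stand in for the GPT-4 call, the RL-training-plus-fitness query $F(\mathcal{A}_M(R))$, and the textual feedback of Sec. 3.3, and are assumptions of this sketch:

```python
def eureka_search(task, env_code, prompt, n_iters=5, k=16):
    """Evolutionary search over reward code (cf. Alg. 1)."""
    best_reward, best_score = None, float("-inf")
    for _ in range(n_iters):
        # Sample K i.i.d. reward candidates from the LLM.
        candidates = [llm_sample(task, env_code, prompt) for _ in range(k)]
        # Train a policy per candidate and query the fitness F(A_M(R));
        # non-executable candidates are assigned a score of -inf.
        scores = [evaluate_fitness(r) for r in candidates]
        best = max(range(k), key=lambda i: scores[i])
        # In-context mutation: append the iteration's best reward and its
        # reward reflection to the prompt for the next round.
        prompt = prompt + reflect(candidates[best], scores[best])
        if scores[best] > best_score:
            best_reward, best_score = candidates[best], scores[best]
    return best_reward
```

In practice, EUREKA additionally performs 5 independent restarts of this loop per environment and keeps the overall best reward.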
3.3 Reward Reflection
In order to ground the in-context reward mutation, we must be able to put into words the quality of the generated rewards. We propose reward reflection, an automated feedback that summarizes the policy training dynamics in texts. Specifically, given that EUREKA reward functions are asked to expose their individual components in the reward program (e.g., reward_components in Fig.3), reward reflection tracks the scalar values of all reward components and the task fitness function at intermediate policy checkpoints throughout training. For instance, consider the illustrative example in Fig.2, where the snapshot values of av_penalty are provided as a list in the reward feedback. See App.G.1 for full examples.
This reward reflection procedure, though simple to construct, is important for two reasons: (1) the lack of a fine-grained reward improvement signal in the task fitness function, and (2) the algorithm-dependent nature of reward optimization (Booth et al., 2023). First, since we can query the task fitness function $F$ on the resulting policies, a simple strategy would be to provide this numerical score as the reward evaluation. While it serves as the holistic ground-truth metric, the task fitness function itself lacks credit assignment, providing no useful information on why a reward function works or not. Second, whether a reward function is effective is influenced by the particular choice of RL algorithm, and the same reward may perform very differently even under the same optimizer given hyperparameter differences (Henderson et al., 2018; Agarwal et al., 2021). By providing detailed accounts of how well the RL algorithm optimizes individual reward components, reward reflection enables EUREKA to produce more intricate and targeted reward editing.
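A sketch of how such a reflection string could be produced from the tracked statistics; the formatting is illustrative only (see App. G.1 for real examples):

```python
def reflect(component_history: dict, fitness_history: list) -> str:
    """component_history maps each reward component name (e.g. 'av_penalty')
    to its scalar values at intermediate policy checkpoints; fitness_history
    holds the task fitness F at the same checkpoints."""
    lines = [f"task_fitness: {[round(v, 2) for v in fitness_history]}"]
    for name, values in component_history.items():
        lines.append(f"{name}: {[round(v, 2) for v in values]}")
    return "Policy training feedback per checkpoint:\n" + "\n".join(lines)
```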
4 Experiments
We thoroughly evaluate EUREKA on a diverse suite of robot embodiments and tasks, testing its ability to generate reward functions, solve new tasks, and incorporate various forms of human input. We use GPT-4 (OpenAI, 2023), in particular the gpt-4-0314 variant, as the backbone LLM for all LLM-based reward-design algorithms unless specified otherwise.
Environments. Our environments consist of 10 distinct robots and 29 tasks implemented using the IsaacGym simulator (Makoviychuk et al., 2021). First, we include 9 original environments from IsaacGym (Isaac), covering a diverse set of robot morphologies from quadruped, bipedal, quadrotor, and cobot arm to dexterous hands. In addition to coverage over robot form factors, we ensure depth in our evaluation by including all 20 tasks from the Bidexterous Manipulation (Dexterity) benchmark (Chen et al., 2022). Dexterity contains 20 complex bi-manual tasks that require a pair of Shadow Hands to exercise a wide range of complex manipulation skills, ranging from object handover to rotating a cup by 180 degrees. For the task description input to EUREKA, we use the official description provided in the environment repository when possible. See App. B for details on all environments. It is worth noting that both benchmarks were publicly released concurrently with or after the GPT-4 knowledge cut-off date (September 2021), so GPT-4 is unlikely to have accumulated extensive internet knowledge
Figure 4: EUREKA outperforms Human and L2R across all tasks. In particular, EUREKA realizes much greater gains on high-dimensional dexterity environments.
about these tasks, making them ideal testbeds for assessing EUREKA's reward-generation capability against measurable human-engineered reward functions.
4.1 Baselines
L2R (Yu et al., 2023) proposes a two-stage LLM-prompting solution that generates templated rewards. For an environment and task specified in natural language, a first LLM is asked to fill in a natural-language template describing the agent's motion; then, a second LLM is asked to convert this "motion description" into a reward program that calls a manually defined set of reward API primitives and sets their parameters. To make L2R competitive on our tasks, we define the motion-description template to mimic the original L2R templates, and we construct the reward API primitives from the individual components of the original human rewards when possible. Note that this gives L2R an advantage, as it has access to the original reward functions. Consistent with EUREKA, we conduct 5 independent L2R runs per environment, and for each run, we generate 16 reward samples. See App. C for more details.
Human. These are the original shaped reward functions provided in our benchmark tasks. As these reward functions are written by active reinforcement learning researchers who designed the tasks, these reward functions represent the outcomes of expert-level human reward engineering.
Sparse. These are identical to the fitness functions $F$ that we use to evaluate the quality of the generated rewards. Like Human, these are also provided by the benchmark. On the dexterity tasks, they are uniformly binary indicator functions that measure task success; on Isaac tasks, they vary in functional forms depending on the nature of the task. See App. B for a description of the ground-truth scoring metric for all tasks.
4.2 Training Details
Policy Learning. For each task, all final reward functions are optimized using the same RL algorithm with the same set of hyperparameters. Isaac and Dexterity share a well-tuned PPO implementation (Schulman et al., 2017; Makoviichuk & Makoviychuk, 2021), and we use this implementation and the task-specific PPO hyperparameters without any modification. Note that these task hyperparameters are tuned to make the official human-engineered rewards work well. For each final reward function obtained from each method, we run 5 independent PPO training runs and report the average of the maximum task metric values achieved from 10 policy checkpoints sampled at fixed intervals. In particular, the maximum is taken over the same number of checkpoints for each approach.
Reward Evaluation Metrics. For Isaac tasks, since the task metric $F$ for each task varies in semantic meaning and scale, we report the human normalized score for EUREKA and L2R,
$$\text{human normalized score} = \frac{\text{Method} - \text{Sparse}}{\text{Human} - \text{Sparse}}.$$
This metric provides a holistic measure of how EUREKA rewards fare against human-expert rewards with respect to the ground-truth task metric. For Dexterity, since all tasks are evaluated using the binary success function, we directly report success rates.
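Transcribing the metric directly, a one-line helper suffices; the argument names are ours, and each argument is the task-metric value achieved by the corresponding reward on a given task.

```python
def human_normalized_score(method, human, sparse):
    # Equals 0 at Sparse level and 1 at Human level; values above 1 exceed human.
    return (method - sparse) / (human - sparse)

# e.g., human_normalized_score(0.9, 0.8, 0.1) -> ~1.14, i.e., above human level
```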
4.3 Results
EUREKA outperforms human rewards. In Figure 4, we report the aggregate results on Dexterity and Isaac. Notably, EUREKA exceeds or performs on par with human level on all Isaac tasks and on 15 out of 20 Dexterity tasks (see App. F for a per-task breakdown). In contrast, L2R, while comparable
on low-dimensional tasks (e.g., CartPole, BallBalance), lags significantly behind on high-dimensional tasks. Despite being given access to some of the same reward components as Human, L2R still underperforms EUREKA after its initial iteration, when both methods have had the same number of reward queries. As expected, L2R's lack of expressivity severely limits its performance. In contrast, EUREKA generates free-form rewards from scratch without any domain-specific knowledge and performs substantially better. In App. F, we present results on additional evaluation metrics such as the interquartile mean (IQM), probability of improvement (Agarwal et al., 2021), and aggregate RL training curves; on all evaluations, we observe the consistent trend that EUREKA generates the most capable reward functions. Furthermore, we ablate EUREKA by replacing GPT-4 with GPT-3.5 and find that performance degrades but still matches or exceeds human level on most Isaac tasks, indicating that its general principles can be readily applied to coding LLMs of varying quality.
**EUREKA consistently improves over time.** In Fig. 5, we visualize the average performance of the cumulative best EUREKA rewards after each evolution iteration. We also study an ablation, EUREKA w.o. Evolution (32 Samples), which performs only the initial reward-generation step, sampling the same total number of reward functions as two iterations of the original EUREKA. This ablation asks whether, given a fixed budget of reward-function samples, it is more advantageous to perform the EUREKA evolution or simply to sample more first-attempt rewards without iterative improvement. On both benchmarks, EUREKA rewards steadily improve and eventually surpass human rewards in performance despite sub-par initial performance. This consistent improvement also cannot be replicated by simply sampling more in the first iteration, as the ablation's performance is lower than EUREKA's after 2 iterations on both benchmarks. Together, these results demonstrate that EUREKA's evolutionary optimization is indispensable for its final performance.
**EUREKA generates novel rewards.** We assess the novelty of EUREKA rewards by computing the correlations between EUREKA and human rewards on all the Isaac tasks; see App. B for details on this procedure. Then, we plot the correlations against the human normalized scores on a scatter-plot in Figure 6, where each point represents a single EUREKA reward on a single task. As shown, EUREKA mostly generates weakly correlated reward functions that outperform the human ones. In addition, by examining the average correlation by task (App. F), we observe that the harder the task is, the less correlated the EUREKA rewards. We hypothesize that human rewards are less likely to be near optimal for difficult tasks, leaving more room for EUREKA rewards to be different and better. In a few cases, EUREKA rewards are even negatively correlated with human rewards but perform significantly better, demonstrating that EUREKA can discover novel reward design principles that may run counter to human intuition; we illustrate these EUREKA rewards in App. G.2.
**Reward reflection enables targeted improvement.** To assess the importance of constructing reward reflection in the reward feedback, we evaluate an ablation, EUREKA (No Reward Reflection), which reduces the reward feedback prompt to include only snapshot values of the task metric $F$. Averaged over all Isaac tasks, EUREKA without reward reflection reduces the average normalized score by 28.6%; in App. F, we provide detailed per-task breakdown and observe much greater performance deterioration on higher dimensional tasks. To provide qualitative analysis, in App. G.1, we include several examples in which EUREKA utilizes the reward reflection to perform targeted reward editing.
**EUREKA with curriculum learning enables dexterous pen spinning.** Finally, we investigate whether EUREKA can be used to solve a truly novel and challenging dexterous task. To this end, we propose pen spinning as a test bed. This task is highly dynamic and requires a Shadow Hand to continuously rotate a pen to achieve pre-defined spinning patterns for as many cycles as possible; we implement this task on top of the original Shadow Hand environment in Isaac Gym without changing any physics parameter, ensuring physical realism. We consider a curriculum learning (Bengio et al., 2009) approach to break down the task into manageable components that can be independently solved by EUREKA. Specifically, we first use EUREKA to generate a reward for the task of re-orienting the pen to random target configurations and train a policy using the final EUREKA reward. Then, using this pre-trained policy (Pre-Trained), we fine-tune it with the same EUREKA reward to reach the sequence of pen-spinning configurations (Fine-Tuned). To demonstrate the importance of curriculum learning, we also directly train a policy from scratch on the target task using the EUREKA reward, without the first-stage pre-training (Scratch). The RL training curves are shown in Figure 7. EUREKA fine-tuning quickly adapts the policy to successfully spin the pen for many cycles in a row; see the project website for videos. In contrast, neither the pre-trained nor the learned-from-scratch policy can complete even a single cycle of pen spinning. Using this EUREKA fine-tuning approach, we have also trained pen-spinning policies for a variety of different spinning configurations; all pen-spinning videos can be viewed on our project website, and experimental details are in App. D.1. These results demonstrate EUREKA's applicability to advanced policy-learning approaches, which are often necessary for learning very complex skills.
### 4.4 EUREKA FROM HUMAN FEEDBACK
In addition to automated reward design, EUREKA enables a new gradient-free in-context learning approach to RL from Human Feedback (RLHF) that can readily incorporate various types of human inputs to generate more performant and human-aligned reward functions.
**EUREKA can improve and benefit from human reward functions.** We study whether starting with a human reward function initialization, a common scenario in real-world RL applications, is advantageous for EUREKA. Importantly, incorporating human initialization requires no modification to EUREKA – we can simply substitute the raw human reward function as the output of the first EUREKA iteration. To investigate this, we select several tasks from Dexterity that differ in the relative performances between the original EUREKA and human rewards. The full results are shown in Figure 8.
As shown, regardless of the quality of the human rewards, EUREKA improves on and benefits from human rewards, as EUREKA (Human Init.) is uniformly better than both EUREKA and Human on all tasks. This suggests that EUREKA's in-context reward-improvement capability is largely independent of the quality of the base reward. Furthermore, the fact that EUREKA can significantly improve over human rewards even when they are highly sub-optimal hints at an interesting hypothesis: human designers are generally knowledgeable about relevant state variables but are less proficient at designing rewards using them. This makes intuitive sense, as identifying the relevant state variables that should appear in the reward function involves mostly common-sense reasoning, whereas reward design requires specialized knowledge and experience in RL. Together, these results demonstrate EUREKA's reward-assistant capability: it complements human designers' knowledge of useful state variables while compensating for their lower proficiency in designing rewards with those variables. In App. G.3 we provide several examples of EUREKA (Human Init.) steps.
Reward reflection via human feedback induces aligned behavior. So far, all EUREKA rewards have been optimized against a fixed, black-box task fitness function $F$. This task metric, however, may not fully align with human intent. Moreover, in many open-ended tasks, $F$ may not be available in the first place (Fan et al., 2022). In these challenging scenarios, we propose to augment EUREKA by having humans step in and express the reward reflection in terms of the desired behavior and corrections. We investigate this capability by teaching a Humanoid agent how to run purely from textual reward reflection; in App. G.4 we show the exact sequence of human feedback and EUREKA rewards. We then conduct a user study asking 20 unfamiliar users to indicate their preference between two policy rollout videos shown in random order, one trained with human reward reflection (EUREKA-HF) and the other trained with the original best EUREKA reward; details are in App. D.3. As shown in Fig. 9, despite running somewhat slower, the EUREKA-HF agent is preferred by a large majority of our users. Qualitatively, we indeed see that the EUREKA-HF agent acquires a safer and more stable gait, as instructed by the human. See the project website for a comparison.
5 RELATED WORK
Reward Design. Reward engineering is a long-standing challenge in reinforcement learning (Singh et al., 2010; Sutton & Barto, 2018). The most common reward-design method is manual trial-and-error (Knox et al., 2023; Booth et al., 2023). Inverse reinforcement learning (IRL) infers reward functions from demonstrations (Abbeel & Ng, 2004; Ziebart et al., 2008; Ho & Ermon, 2016), but it requires expensive expert data collection, which may not be available, and outputs non-interpretable black-box reward functions. Several prior works have studied automated reward search through evolutionary algorithms (Niekum et al., 2010; Chiang et al., 2019; Faust et al., 2019); these early attempts are limited to task-specific implementations that search only over parameters within provided reward templates. Recent works have also proposed using pretrained foundation models to produce reward functions for new tasks (Ma et al., 2022; 2023; Fan et al., 2022; Du et al., 2023a; Karamcheti et al., 2023; Du et al., 2023b; Kwon et al., 2023). Most of these approaches output scalar rewards that lack interpretability and do not naturally admit the capability to improve or adapt rewards on the fly. In contrast, EUREKA generates free-form, white-box reward code and improves it effectively in-context.
Code Large Language Models for Decision Making. Recent works have considered using coding LLMs (Austin et al., 2021; Chen et al., 2021; Rozière et al., 2023) to generate grounded and structured programmatic output for decision-making and robotics problems (Liang et al., 2023; Singh et al., 2023; Wang et al., 2023b; Huang et al., 2023; Wang et al., 2023a; Liu et al., 2023a; Silver et al., 2023; Ding et al., 2023; Lin et al., 2023; Xie et al., 2023). However, most of these works rely on known motion primitives to carry out robot actions and do not apply to robot tasks that require low-level skill learning, such as dexterous manipulation. The closest to ours is a recent work (Yu et al., 2023) that also explores using LLMs to aid reward design; their approach, however, requires domain-specific task descriptions and reward templates.
6 CONCLUSION
We have presented EUREKA, a universal reward-design algorithm powered by coding large language models and in-context evolutionary search. Without any task-specific prompt engineering or human intervention, EUREKA achieves human-level reward generation on a wide range of robots and tasks. EUREKA's particular strength in learning dexterity solves dexterous pen spinning for the first time, using a curriculum learning approach. Finally, EUREKA enables a gradient-free approach to reinforcement learning from human feedback that readily incorporates human reward initialization and textual feedback to better steer its reward generation. The versatility and substantial performance gains of EUREKA suggest that the simple principle of combining large language models with evolutionary algorithms is a general and scalable approach to reward design, an insight that may be broadly applicable to difficult, open-ended search problems.
| Method | Forward Velocity | Human Preference |
|------------|------------------|-----------------|
| EUREKA | 7.53 | 5/20 |
| EUREKA-HF | 5.58 | 15/20 |
Figure 9: EUREKA can incorporate human reward reflection to modify rewards that induce safer and more human-aligned behavior.
ACKNOWLEDGEMENT
We are grateful to colleagues and friends at NVIDIA and UPenn for their helpful feedback and insightful discussions. We thank Viktor Makoviychuk, Yashraj Narang, Iretiayo Akinola, Erwin Coumans for their assistance on Isaac Gym experiment and rendering. This work is done during Yecheng Jason Ma’s internship at NVIDIA. We acknowledge funding support from NSF CAREER Award 2239301, ONR award N00014-22-1-2677, NSF Award CCF-1917852, and ARO Award W911NF-20-1-0080.
|
bpheRCxzb4
|
Figures: - Figure 4: the authors provide two plots with identical descriptions, but from the caption, they seem to refer to different concepts (one should reflect informativeness, while the second reflects rationale)?
|
MEASURING INFORMATION IN TEXT EXPLANATIONS
Anonymous authors
Paper under double-blind review
ABSTRACT
Text-based explanation is a particularly promising approach in explainable AI, but the evaluation of text explanations is method-dependent. We argue that placing the explanations in an information-theoretic framework could unify the evaluations of two popular text explanation methods: rationale and natural language explanations (NLE). This framework considers the post-hoc text pipeline as a series of communication channels, which we refer to as "explanation channels". We quantify the information flow through these channels, thereby facilitating the assessment of explanation characteristics. We set up tools for quantifying two information scores: relevance and informativeness. We illustrate what our proposed information scores measure by comparing them against some traditional evaluation metrics. Our information-theoretic scores reveal some unique observations about the underlying mechanisms of two representative text explanations. For example, the NLEs trade off slightly between transmitting input-related information and target-related information, whereas the rationales do not exhibit such a trade-off mechanism. Our work contributes to the ongoing efforts in establishing rigorous and standardized evaluation criteria in the rapidly evolving field of explainable AI.
1 INTRODUCTION
As deep neural network (DNN) systems show superior performance on a wide variety of tasks, the explainability of DNNs has attracted increasing attention. The explainable AI (XAI) literature provides abundant methods for improving the transparency of a DNN. Among the methods that produce explanations about the decision mechanisms, text-based explanation appears particularly interesting due to its flexibility.
Text explanations mostly appear in two forms: rationale and NLE. A rationale is a subset of the input text, and an NLE is an explanation in natural language that describes the rationales (i.e., “free-text rationales”). Figure 1 shows an example.
The evaluation criteria for rationale and NLE have been proposed along different routes. Approaches to evaluating rationales include computing token-level statistics or the change in model performance when the rationales are masked (DeYoung et al., 2020; Carton et al., 2020). Approaches for NLE include simulation with a proxy model and computing utilities (Hase et al., 2020; Wiegreffe et al., 2021), computing the performance gain of student models (Pruthi et al., 2022), or computing the informativeness relative to baseline rationales (Chen et al., 2022).
We argue that the evaluations of rationale and NLE can be placed on a common ground since both text explanation approaches involve communicating the decision rationales to the readers. We abstract the two text explanation methods within a single framework based on information theory. This framework, which we call explanation channels, consists of three random variables: the input, the label (of the problem to be explained), and the explanan (the product of the explanation procedure, following the terminology of Hempel and Oppenheim (1948)). The explanation channels framework allows us to formulate two terms based on information theory:
• Input-explanan mutual information, which describes the relevance of the explanation.
Figure 1: An example of rationale and natural language explanation (NLE).
• Target-explanan mutual information, which describes the explanation’s informativeness.
These terms are deceptively hard to quantify because the input and the explanan random variables are rooted in complex distributions defined by high-dimensional data. While the information theory and machine learning literature provides many tools to estimate similar terms, it has been unknown whether these tools can be used to estimate these information scores. We make it possible to estimate these MI terms. We examine the suitability of a battery of methods for this purpose and find the two most appropriate: InfoNCE (Oord et al., 2018) and $V$-information (Xu et al., 2020).
We illustrate the validity of the MI terms with a collection of “silver labels” that are commonly used in NLP. We find that the estimated input-explanan mutual information correlates to traditional evaluation scores that measure explanations’ lexical and semantic relevance. On the other hand, the estimated target-explanan mutual information describes more than just the reasoning characteristics of the explanans.
The information scores provide novel insights into the mechanisms of the explanation methods. NLEs trade off slightly between carrying input-related information and target-related information, whereas the rationale explanations do not exhibit such a trade-off mechanism. Furthermore, the two MI scores reveal idiosyncratic patterns of several of the most popular contextualized language models.
In summary, we propose explanation channels, a framework that provides a common ground to evaluate two text-based post-hoc explanations: rationale and NLE. Our communication channel framework uncovers unique findings and contributes to the rigorous study of explanation quality, an emerging research direction that deserves more attention.
2 RELATED WORK
Unified views for explanation methods Lundberg and Lee (2017) proposed a unified framework for several additive feature attribution methods. Ancona et al. (2018) proposed one for gradient-based feature attribution methods. Liu et al. (2021) used synthetic datasets to benchmark XAI methods and Agarwal et al. (2022) set up a public leaderboard evaluating 22 metrics. Each of those projects focused on explaining feature-based prediction systems, whereas we focus on text-based prediction systems, which do not have nominal features.
Han et al. (2022) proposed a local function approximation perspective to describe post-hoc explanation methods in a unified view, leading to a “no-free-lunch” argument for explanation methods: a locally faithful explanation may not be faithful for a distinct data distribution. Similarly, Bilodeau et al. (2022) proposed “Impossibility Theorems”, stating that linear explanations may not be sufficient. We consider text-based explanations that are hard to include in unified frameworks due to the flexibility and high-dimensional nature of language.
Information theory in NLP and XAI Approaches derived from information theory have been widely used in NLP. For example, surprisal, the negative log-likelihood of a new item following a sequence, has been used to train auto-regressive models (Radford et al., 2019). Surprisal is used to analyze patterns of texts (Meister et al., 2021) and the patterns of humans reading articles sequentially (Meister et al., 2022). Metrics derived from entropy can be used to select examples to construct prompts that maximize informativeness (Lu et al., 2022). Along these lines, we also derive scores following information-theoretic motivations.
Information theory is useful in XAI. For example, mutual information and minimum description length are used to study the informativeness of (i.e., “probe”) DNN representations about some diagnostic targets (Pimentel et al., 2020; Hou and Sachan, 2021; Voita and Titov, 2020). Conditional mutual information is used to model the effects of explanation for users with different knowledge backgrounds (Jung and Nardelli, 2020).
The closest work to our paper is perhaps REV (Chen et al., 2022), which estimates the target-explanan $V$-information in free-text rationales (i.e., NLEs) relative to vacuous rationales. We consider the evaluation problem from a communication channel perspective, and we measure information terms relative to null inputs (here random Gaussian vectors). Our framework additionally computes the input-explanan information, and can apply to text highlights (we refer to them as “rationales” in
this paper). Treviso and Martins (2020) formulated explanation as a sparse communication problem, where the explainer transmits information to the audience. Our framework, in contrast, considers post-hoc explanations, where the explainer is independent of the prediction model.
3 AN INFORMATION-THEORETIC VIEW OF XAI
3.1 PRELIMINARIES FOR COMMUNICATION CHANNELS
The communication channel is ubiquitous wherever information is transmitted from a source to a target. A signal is encoded at the source, transmitted through the channel, and decoded at the target. During the transmission, external signals might pollute the channel, making it a noisy channel.
Let $S \in \mathbb{R}^{d_s}$ be the source and $T \in \mathbb{R}^{d_t}$ be the target. When the source variable is observed, the uncertainty of the target variable is reduced. The reduction in uncertainty is the mutual information between the two variables, $I(S; T) = H(T) - H(T|S)$, where $H(T)$ is the entropy (uncertainty) and $H(T|S)$ is the conditional entropy. The mutual information characterizes this communication channel’s informativeness.
The mutual information of the communication channel is symmetric: $I(S; T) = I(T; S)$. The reduction of uncertainty in $T$ from knowing $S$ exactly equals the reduction of uncertainty in $S$ from knowing $T$. Communication channels have many properties, including the data processing inequality (Cover et al., 1991).
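To make these definitions concrete, the short sketch below computes $I(S; T)$ in nats for a small discrete joint distribution; the example channel is ours, not from the paper.

```python
import numpy as np

def mutual_information(p_joint):
    """I(S; T) in nats for a discrete joint distribution p(s, t)."""
    p_s = p_joint.sum(axis=1, keepdims=True)  # marginal p(s)
    p_t = p_joint.sum(axis=0, keepdims=True)  # marginal p(t)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / (p_s * p_t)[mask])).sum())

# A noiseless binary channel: observing S removes all uncertainty about T.
p = np.array([[0.5, 0.0],
              [0.0, 0.5]])
print(mutual_information(p))  # log 2 ~ 0.693 nats
```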
3.2 AN ABSTRACTION FOR THE XAI PROCESS
Here, we describe an abstraction for the procedure to explain an AI model. The model $f$ is a prediction machine that takes in the input $X \in \mathbb{R}^{d_x}$ and predicts the target $Y \in \mathbb{R}$. To understand the prediction procedures, the XAI literature has proposed various methods. Each method computes an artifact, explanan, that explains the AI model.
A popular example trains a linear model $g$, which serves as a proxy for $f$ (Ribeiro et al., 2016). Here, the explanan is the linear model $g$. Another method, rationale, selects a subset of the most relevant inputs to the prediction target (DeYoung et al., 2020). Here, the explanan is one or more subsets of the input. NLE, in contrast, appears to be more flexible. Large language models (LLMs) like GPT-4 can be prompted as explainer models to generate texts with many attributes on par with human-written explanations (Wiegreffe et al., 2022). Here, the explanan is the generated text.
As shown in Figure 2, $f : X \rightarrow Y$ is the “black-box” model to be explained (i.e., the explanandum), and $E$ is the explanan. Usually, $E$ is referred to as the explanation, but the term “explanation” is also used to refer to the process (Achinstein, 1983). To avoid overloading the terminologies, we refer to the product, $E$, as the “explanan” throughout this paper and reserve “explanation” for the process.
Without loss of generality, we consider the explanan to be a fixed-dimensional variable: $E \in \mathbb{R}^{d_e}$. In scenarios where explanans take other forms (e.g., text), one can always embed them into fixed-dimensional vectors.
3.3 EXPLANATION CHANNELS
The explanation of a decision-making system constitutes multiple communication channels. Two of them transmit bits of information about the system – one from the input $X$ and the other from the target $Y$ – to the explanan $E$. The information transmitted through these two channels describes quantities of broad concern to the stakeholders of the decision-making system.
**Relevance** The input-explanan mutual information $I(X; E)$ quantifies the amount of information transmitted from the input to the explanan. Given the input $X$, a larger $I(X; E)$ indicates a larger reduction of uncertainty in the explanan. This is associated with reduced hallucination in the explanation, so we term $I(X; E)$ the relevance of the explanation.
**Predictive informativeness** The target-explanan mutual information $I(Y; E)$ quantifies the amount of information about the output of the model to be explained. A higher $I(Y; E)$ indicates that the explanan removes more uncertainty about the prediction target $Y$, i.e., that the explanation is more informative.
### 3.4 Estimating the Relevance Score \( I(X; E) \)
\( I(X; E) \) involves modeling two high-dimensional random variables. One method particularly suitable for such an estimation is InfoNCE (Oord et al., 2018). Given a batch of \( N \) samples \( \{x_i, e_i\}_{i=1}^N \), the InfoNCE estimation is:
\[
\hat{I}(X; E) = \log N - L_N,
\]
where \( L_N \) is the cross-entropy loss for picking the correct \( e_i \) among the batch, for each \( x_i \):
\[
L_N = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{g(x_i, e_i)}{\sum_{x_j \in X} g(x_j, e_i)}
\]
Equation 2 implicitly defines a point-wise estimation for InfoNCE, which we apply in this paper. As elsewhere (Oord et al., 2018), \( g \) is a log-bilinear model parameterized by trainable parameters \( W \):
\[
g(x, e) = \exp(x^T W e)
\]
Taking the average estimate \( \hat{I}(X; E) \) across all batches yields the InfoNCE estimation of the dataset. The InfoNCE estimation is a lower bound for mutual information. As the batch size \( N \) increases, the lower bound becomes tighter. Please refer to Oord et al. (2018) for derivations.
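A minimal PyTorch sketch of Eqs. 1–3 is shown below, assuming batches of paired embeddings; the initialization scale and training details are illustrative choices rather than the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfoNCE(nn.Module):
    """Log-bilinear critic g(x, e) = exp(x^T W e) from Eq. 3."""
    def __init__(self, d_x, d_e):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_x, d_e) * 0.01)

    def forward(self, x, e):
        # x: (N, d_x), e: (N, d_e); logits[i, j] = x_j^T W e_i.
        logits = (x @ self.W @ e.T).T
        # Eq. 2: for each e_i, a softmax over all x_j in the batch must pick x_i.
        labels = torch.arange(x.shape[0], device=x.device)
        loss = F.cross_entropy(logits, labels)
        # Eq. 1: batch-level InfoNCE lower bound on I(X; E), in nats.
        mi_bound = torch.log(torch.tensor(float(x.shape[0]))) - loss
        return loss, mi_bound
```

Training minimizes `loss` over batches (e.g., with Adam); averaging `mi_bound` across batches yields the dataset-level estimate $\hat{I}(X; E)$.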
Note that many alternative estimators, both parametric (Poole et al., 2019; Cheng et al., 2020; McAllester and Stratos, 2020; Song and Ermon, 2020; Belghazi et al., 2018; Nguyen et al., 2010; Pichler et al., 2022) and nonparametric (Kandasamy et al., 2015; Kraskov et al., 2004), can estimate mutual information. On complex data distributions, parametric information estimators are usually more accurate than nonparametric ones, and this advantage grows as the data dimension increases. In our ablation experiments on these parametric estimators, InfoNCE shows lower variance than the alternatives. We elaborate further in Appendix A.2 and defer to the recent work of Czyż et al. (2023) for a more comprehensive evaluation.
### 3.5 Estimating the Predictive Informativeness Score \( I(Y; E) \)
The estimation of \( I(Y; E) \) involves modeling a scalar random variable and a high-dimensional one. Compared to \( I(X; E) \), this scenario is more suitably estimated with another tool: predictive \( V \)-information (Xu et al., 2020). Let \( E \) and \( Y \) denote random variables with sample spaces \( \mathcal{E} \) and \( \mathcal{Y} \), respectively. Let \( \emptyset \) denote a null input carrying no information about \( Y \). Given a predictive family \( V \subseteq \Omega = \{h : \mathcal{E} \cup \{\emptyset\} \rightarrow \mathcal{P}(\mathcal{Y})\} \), the predictive \( V \)-entropy is:
\[
H_V(Y) = \inf_{h \in V} \mathbb{E}[-\log h[\emptyset](Y)],
\]
and the conditional \( V \)-entropy is:
\[
H_V(Y | E) = \inf_{h \in V} \mathbb{E}[-\log h[E](Y)]
\]
The goals of the two infimum operations are to find the predictor \( h \in V \) that maximizes the log-likelihood of the label data with (Eq. 4) and without (Eq. 5) the explanan \( E \), respectively.
We use the natural logarithm (base \( e \)) throughout this paper. We consider \( E \in \mathbb{R}^{d_e} \) and take the null input \( \emptyset \) to be a \( d_e \)-dimensional vector drawn from Gaussian noise, \( \emptyset \sim \mathcal{N}(0, 0.01) \).
The predictive \( V \)-information is defined as:
\[
I_V(E \rightarrow Y) = H_V(Y) - H_V(Y | E)
\]
Similar to InfoNCE, the predictive $\mathcal{V}$-information allows a point-wise estimation; please refer to Ethayarajh et al. (2022) for details. The predictive $\mathcal{V}$-information is neither a lower bound nor an upper bound on the mutual information, and $I_{\mathcal{V}}(E \rightarrow Y)$ approximates $I(Y; E)$ more precisely when the predictors $h$ are higher-performing (Pimentel et al., 2020; Pimentel and Cotterell, 2021).
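As a hedged sketch of Eqs. 4–6, the snippet below instantiates the predictive family $\mathcal{V}$ as logistic regression; a faithful estimate would fit on a training split and evaluate the log-loss on held-out data, which we omit here for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def v_information(E, Y, seed=0):
    """Predictive V-information I_V(E -> Y) in nats (Eq. 6), with logistic
    regression approximating the infimum over the predictive family V."""
    rng = np.random.default_rng(seed)
    null = rng.normal(0.0, 0.1, size=E.shape)  # null input ~ N(0, 0.01)
    # H_V(Y): best log-loss achievable from the uninformative null input (Eq. 4).
    h_y = log_loss(Y, LogisticRegression(max_iter=1000).fit(null, Y).predict_proba(null))
    # H_V(Y | E): best log-loss achievable from the explanan embeddings (Eq. 5).
    h_y_e = log_loss(Y, LogisticRegression(max_iter=1000).fit(E, Y).predict_proba(E))
    return h_y - h_y_e  # sklearn's log_loss uses the natural logarithm
```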
The $\mathcal{V}$-information (also termed Bayesian mutual information (Pimentel and Cotterell, 2021) and task-specific information (Zhu et al., 2021)) has been used to study the difficulty of datasets (Ethayarajh et al., 2022), describe properties of free-text rationales (Chen et al., 2022), and characterize the informativeness of neural network representations (Pimentel et al., 2020; Hewitt et al., 2021).
4 DATA AND MATERIALS
4.1 DATA
We use the e-SNLI dataset (Camburu et al., 2018) from the ERASER benchmark (DeYoung et al., 2020). This dataset augments the SNLI natural language inference task (Bowman et al., 2015). Each instance of the language inference task presents two sentences, the premise $S_1$ and the hypothesis $S_2$, with one label $L$ describing the inference relation between them. $L$ is one of "contradiction", "entailment", and "neutral". The e-SNLI dataset covers a broad range of topics and has been a challenging evaluation of machine language understanding.
4.2 EXPLANANS
Rationale The human-annotated rationales of the ERASER benchmark (DeYoung et al., 2020) specify the tokens important for decisions, while the remaining tokens are replaced with spaces.
NLE We prompt ChatGPT (gpt-3.5-turbo) configured with the default generation hyperparameters to generate NLEs using the template:
$$\{S_1\}\{S_2\} \text{ The label is } \{L\} \text{ because } \tag{7}$$
4.3 SILVER LABELS FOR EVALUATING THE EXPLANS
We compute a collection of “silver labels” that describe a diverse collection of aspects of texts.\(^1\)
**Lexical-semantic scores** The lexical-semantic scores do not specifically evaluate the quality of the explanations; a computational sketch of all three follows the list below.
- **Type overlap ratio**, the portion of the word types in the text input ($S_1$ and $S_2$ concatenated) that are present in the explanan $E$. Type overlap ratio quantifies the lexical overlapping, a heuristic that many neural network NLP models rely on when learning representations (McCoy et al., 2019).
- **Edit distance ratio**, the minimum number of steps to edit the text input to acquire $E$, normalized by the text input length. This number simulates the effort in producing the explanation.
- **Cosine similarity**, the cosine similarity between the embedded input and the embedded explanan. This quantifies how the explanan is semantically similar to the explanandum.
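Below is a hedged sketch of the three scores; whitespace tokenization, character-level edit distance, and the `embed` argument are simplifying assumptions rather than the paper's exact implementation.

```python
import numpy as np

def type_overlap_ratio(input_text, explanan):
    """Portion of the input's word types that appear in the explanan."""
    input_types = set(input_text.split())
    return len(input_types & set(explanan.split())) / len(input_types)

def edit_distance_ratio(input_text, explanan):
    """Levenshtein distance from input to explanan, normalized by input length."""
    a, b = input_text, explanan
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(len(a) + 1), np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)] / len(a)

def cosine_similarity(input_text, explanan, embed):
    """Cosine similarity of the embedded input and explanan;
    `embed` is any text-to-vector model."""
    u, v = embed(input_text), embed(explanan)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```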
GPTScore labels Recent papers show that LLMs can evaluate text properties resembling human annotators (Zhang et al., 2020; Fu et al., 2023). We specify nine aspects in three categories: reasoning (informational support, causal support, convincingness, coherence), clarity (clarity for student, clarity for graduate), and relevance (label relevance, input relevance, importance), by stating each aspect with a sentence (as listed in Table 1). We use the following template to prompt the “evaluator” LLM
\(^1\)Many other scores have been used to evaluate the quality of either the rationale or the NLE. The token-level F1/precision/recall scores (DeYoung et al., 2020) are suitable for rationale but not for NLE, since NLE contains too much flexibility. Additionally, these are aggregate scores, but we only consider instance-level scores.
| Category | Item | Statement |
|----------|------|-----------|
| Reasoning | info_support | The explanation provides sufficient information to support how the two sentences are associated to the label. |
| | causal_support | The explanation explains why these two sentences are associated to the label. |
| | convincingness | The explanation is persuasive and convinces me to believe that the question is associated to the label. |
| | coherence | The explanation bridges the gap between the two sentences and the label in a coherent and unsurprising manner. |
| Clarity | clarity4student | The explanation is easy to understand for a high school student. |
| | clarity4graduate | The explanation is easy to understand for a university graduate. |
| Relevance | label_relevance | Given the two sentences and the label, the explanation is relevant. |
| | input_relevance | Given the two sentences, the explanation is relevant. |
| | importance | The explanation highlights the most important parts in the two sentences that associate to the label. |
Table 1: Statements describing the GPTScore evaluation items.
to score the explanation \( E \) regarding a statement \( A \):
Following are two sentences, a label and an explanation.
The two sentences are: \( \{S_1\} \{S_2\} \)
The label is: \( \{L\} \)
The explanation is \( \{E\} \)
Please use one of ‘strongly disagree’, ‘somewhat disagree’, ‘somewhat agree’ and ‘strongly agree’ to describe your attitude towards the following statement: \( \{A\} \)
Do not add additional words.
The model we use is InstructGPT (text-davinci-003) (Ouyang et al., 2022), which currently\(^2\) ranks first in the knowledge and reasoning categories of the HELM leaderboard (Liang et al., 2022). Compared to its predecessors, text-davinci-003 benefits from RLHF and is better at following instructions given in natural language. It is able to follow the instruction to choose among the provided options; only around 1 in every 1000 results requires postprocessing (e.g., stripping extra parentheses or newline characters). Empirically, we find that the addendum to the prompt, "Do not add additional words", helps encourage it to follow the instruction.
After collecting the GPTScore labels, we map them from a categorical scale to a numerical scale. Namely: –2 for ‘strongly disagree’, –1 for ‘somewhat disagree’, 1 for ‘somewhat agree’, and 2 for ‘strongly agree’. An exploratory analysis for inter-score correlation is included in Appendix A.5.
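The categorical-to-numerical mapping is a direct transcription of the scale above:

```python
GPTSCORE_MAP = {
    "strongly disagree": -2,
    "somewhat disagree": -1,
    "somewhat agree": 1,
    "strongly agree": 2,
}

def to_numeric(response):
    # Assumes the LLM output has already been stripped of stray characters.
    return GPTSCORE_MAP[response.strip().lower()]
```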
GPTScore is related to simulatability. Simulatability uses either humans or an LM as a proxy to predict the label from both the input and the rationale (explanation), and derives criteria for measuring the quality of explanations from how well the rationale correlates with the label (Chan et al., 2022). Scores in this category include LAS (Hase et al., 2020) and its variant RQ (Wiegreffe et al., 2021). Despite increasingly frequent anthropomorphizing claims about LLM capabilities, LLMs still have significant limitations in their reasoning and explanation abilities (Dziri et al., 2023). Therefore, GPTScore labels, or other scores likewise computed by LLM proxies, should not be considered ground truths ("gold labels").
5 EXPERIMENTS
5.1 WHAT ASPECTS ARE THE INFORMATION SCORES MEASURING?
To query what the information scores entail, we compute the correlation of the two information scores with each of the silver labels. The correlations are plotted on Figure 3.
Additionally, we run ANOVA, which computes the portion of variance in each of the information scores that can be explained by the silver labels. The detailed procedure and results are in Appendix A.3. The following summarizes some findings.
**$I(X; E)$ is largely about lexical and semantic relevance** On NLE, these lexical-semantic scores can explain 46% and 43% of the total variance in $I(X; E)$, for Cohere and OpenAI respectively.
\(^2\)As of May 1, 2023.
Figure 3: Correlations between relevance (left) and informativeness (right) and the silver labels.
The portions of explained variance are 31% and 17% on rationales. Other score categories do not explain more than 5% of the total variance, but there is some evidence of correlation. As Figure 3 shows, $I(X; E)$ correlates strongly with the lexical and semantic overlaps: it correlates positively with the embedding similarity and the type overlap ratio, and negatively with the edit distance ratio. On NLEs, $I(X; E)$ correlates mildly with the convincingness, causal support, coherence, and importance scores, and weakly with the other GPTScore labels. On rationales, $I(X; E)$ shows no correlation with the GPTScore labels. Note that the $I(X; E)$ of the RoBERTa-embedded explanations does not show similar levels of correlation with the silver labels; we elaborate on the differences between the embeddings in Section 5.3.
**$I(Y; E)$ is not just about the reasoning**
What the informativeness score $I(Y; E)$ captures varies with the explanation method and the embedding choice. The reasoning-category scores can explain 16% and 21% of the variance in the estimated $I(Y; E)$ for OpenAI and RoBERTa on NLE (18% and 19% on rationale), and no more than 17% for any other category. On rationales, $I(Y; E)$ is negatively correlated with the relevance and reasoning-quality scores but mildly correlated with the clarity scores. On NLEs, $I(Y; E)$ is positively correlated with the coherence, clarity, and importance scores for the RoBERTa embedding, uncorrelated for the Cohere embedding, and negatively correlated for the OpenAI embedding.
5.2 There is a relevance–informativeness tradeoff for NLE but not for rationales
To further understand the phenomena described by the information scores, we compute the correlations between the relevance and the informativeness scores. The results are shown in Figure 4, and Figures 5 – 6 in Appendix.
The relevance score $I(X; E)$ and the informativeness score $I(Y; E)$ show weak negative correlations for NLE. The evidence indicates that the (ChatGPT-generated) NLEs trade off slightly between encoding input-related information and encoding label-related information. On rationales, the sign of such correlations differs by embedding: negative for OpenAI, positive for Cohere, and insignificant for RoBERTa. The lack of evidence for a relevance–informativeness tradeoff on rationales is likely the result of a lack of degrees of freedom, since annotators can only select a subset of the input text to form a rationale.
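The correlations reported here can be reproduced with a standard rank-correlation test; the argument names below are stand-ins for the per-instance point-wise estimates from Sections 3.4 and 3.5.

```python
from scipy.stats import spearmanr

def relevance_informativeness_correlation(relevance_scores, informativeness_scores):
    """Spearman correlation between per-instance relevance and informativeness."""
    rho, p_value = spearmanr(relevance_scores, informativeness_scores)
    return rho, p_value
```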
5.3 Ablation on the embedding method
As defined in Section 3, we consider both the input $X$ and the explanan $E$ to be vectors. When the input and the explanans are both texts, an embedding step is needed to convert them into vectors. Embedding is the crucial component that allows multiple types of explanations to be compared on the same ground. An embedding is already present in the $f : X \rightarrow Y$ model, but sometimes this embedding is unavailable. What would be the effect of using other embeddings to compute the information scores? We run ablation studies and examine how the relevance and informativeness scores differ.
Figure 4: The relevance–informativeness scatter plots for rationale (left) and NLE (right), for the OpenAI embeddings. Spearman correlation between relevance and informativeness is $-0.0585 (p = 0.0089)$ for rationale and $-0.0564 (p = 0.0117)$ for NLE.
We consider three embeddings: RoBERTa (roberta-large) (Liu et al., 2019), OpenAI (text-embedding-ada-002) (Brown et al., 2020), and Cohere (small) (Cohere, 2023). The OpenAI embedding has $d_e = 1536$ dimensions, and the other two embeddings have $d_e = 1024$.
The Cohere and OpenAI embeddings have significantly larger relevance scores $I(X; E)$ than the RoBERTa embedding,\(^3\) but the difference between the Cohere and OpenAI embeddings is not significant.\(^4\) This trend holds for both the rationale and the NLE explanations.
The informativeness score $I(Y; E)$ shows a different pattern. For NLE, the OpenAI embedding has a higher informativeness score than either Cohere or RoBERTa, which do not significantly differ.\(^5\) For rationale, the RoBERTa embedding has a significantly higher informativeness score than the other two embeddings, which do not significantly differ.\(^6\)
We also observe that the embeddings demonstrate distinct patterns when plotted on a relevance–informativeness map. Figure 4 shows a relevance–informativeness scatter plot of the OpenAI embeddings. The data samples with different labels show weak trends of clustering, but the Silhouette coefficients are low ($-0.1088$ and $-0.0561$ for rationale and NLE, respectively). The plots of the other two embeddings are included in Figures 5 and 6 in the Appendix. The Cohere embedding shows similar clustering trends to OpenAI (Silhouette coefficients $-0.0139$ and $0.0166$), but much weaker than the RoBERTa embedding (Silhouette coefficients $0.1708$ and $0.7853$). A possible hypothesis to explain this inter-embedding difference is that RoBERTa strives to preserve the predictive information, embedding texts from different classes into subspaces that are easy to separate linearly, whereas the OpenAI and Cohere embeddings relax this separability requirement and preserve more contextual information about the semantics.
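The clustering check can be reproduced with scikit-learn's Silhouette coefficient; as above, the score arrays and label vector are stand-ins for the per-instance estimates and e-SNLI labels.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def label_clustering_score(relevance_scores, informativeness_scores, labels):
    """Silhouette coefficient of (relevance, informativeness) points
    grouped by their e-SNLI labels; values near 0 mean no clustering."""
    points = np.column_stack([relevance_scores, informativeness_scores])
    return silhouette_score(points, labels)
```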
6 DISCUSSIONS
**On the capacities of explanation channels** Table 2 summarizes the relevance and the informativeness across rationale and NLE. It is perhaps surprising how small the information values are compared to the amount of information the explanation channels could potentially transmit: two identical 1024-dimensional binary random variables could share up to $1024 \times \log 2 \approx 710$ nats of mutual information, and floating-point variables support an even larger channel capacity. Besides the variance introduced by the estimators, are there other factors that could explain why there is so little input-explanan information $I(X; E)$ and target-explanan information $I(Y; E)$? A possible explanation is that the dimensions of the LLMs' embedding vectors are highly correlated
\(^3 p < 0.01\). All tests in this subsection are two-tailed t-tests. dof = 1999, Bonferroni corrected.
\(^4 p = 0.0552\) and \(p = 0.0809\) for rationale and NLE, respectively.
\(^5 p = 0.0219\). After Bonferroni correction, this result is not significant.
\(^6 p = 0.0674\).
| | $\hat{I}(X; E)$ (Cohere) | $\hat{I}(X; E)$ (OpenAI) | $\hat{I}(X; E)$ (RoBERTa) | $\hat{I}(Y; E)$ (Cohere) | $\hat{I}(Y; E)$ (OpenAI) | $\hat{I}(Y; E)$ (RoBERTa) |
|-----------|------|------|--------|--------|----------|--------|
| Rationale | 3.33 | 3.41 | 0.0609 | 0.208 | 0.00291 | 0.0105 |
| NLE | 2.78 | 2.88 | 0.000 | 0.0826 | −0.00179 | 0.0321 |

Table 2: Estimated relevance and informativeness (in nats) on the e-SNLI test set.
(an effect observed in many language models (Aghajanyan et al., 2021; Wang et al., 2020; Ethayarajh, 2019)), which reduces the overall channel capacity.
**Amount vs. type of information** Researchers have observed that information spread across long contexts can be "crammed" into fixed-length embedding vectors (Conneau et al., 2018). Considering our experimental findings, we further argue that explanation does not need the full bandwidth of language semantics: providing the right *type* of information may be as important as providing a sufficient *amount*. Depending on the actual problem to be explained, not all types of relevant information are appropriate; some bits of information may be disparaging, unwelcoming, or biased. The specification, and especially the automatic evaluation, of these attributes is subject to further research. Identifying and elaborating the types of information and their societal impacts will be crucial for understanding explanation quality. Additionally, the explanatory effect achieved given the same amount of information describes the quality of the explanation, a notion deserving more attention as automated approaches are developed to explain complex decisions.
**Towards multimodal explanation channels** Can explanation channels generalize to multimodal problems? We believe they have the potential, as long as multimodal embeddings are sufficiently informative. Recent text-to-image models like DALL-E (Ramesh et al., 2021) and image-to-text models like CLIP (Radford et al., 2021) and BLIP2 (Li et al., 2023) indicate an affirmative answer, but empirical evidence will be necessary.
## 7 CONCLUSION
We propose an information-theoretic framework, *explanation channels*, as a unified testbed for two text-based post-hoc explainable AI methods: rationale and NLE. With this framework, we estimate the input-explanan mutual information as a "relevance score" and the target-explanan mutual information as an "informativeness score". We set up tools to compute the two scores for explanations of natural language inference problems, which involve complex, high-dimensional distributions. By comparing against silver labels, we find that the relevance score captures the lexical and semantic relevance of explanations, while the informativeness score captures more than their reasoning quality. The scores reveal interesting properties of language-model embeddings and, more importantly, describe the mechanisms of multiple types of explanations. Information-theoretic frameworks have the potential to serve as a unified evaluation of explainable AI, empowering the principled development of trustworthy AI.
## 8 LIMITATION
In this paper, we focus on the objective aspects and only use fully automatic evaluations to compute silver labels. Human annotators could be introduced in future studies. For the utility towards humans, we defer to the tutorial of Boyd-Graber et al. (2022). We also defer to Lombrozo (2012) for a review of the psychological aspects of explanations.
The coverage of the experiments could be expanded. For example, we consider three embeddings (OpenAI, Cohere, RoBERTa-large) rather than many popular language models such as LLaMA (Touvron et al., 2023) and GPT-J (Wang and Komatsuzaki, 2021). In addition to e-SNLI, more datasets could be experimented on, and the range of text explanations could be expanded: we focus on rationale and NLE, and believe that the explanation channels framework can also be generalized to additional text-based XAI methods, including contrastive explanations (Yin and Neubig, 2022) and causal explanations (Kiciman et al., 2023).
REFERENCES
Peter Achinstein. 1983. *The Nature of Explanation*. Oxford University Press.
Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. 2022. OpenXAI: Towards a Transparent Evaluation of Model Explanations. *arXiv preprint arXiv:2206.11104*.
Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. 2021. Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. In *ACL*, pages 7319–7328, Online. Association for Computational Linguistics.
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In *KDD*.
Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. 2018. Towards better understanding of gradient-based attribution methods for deep neural networks. In *ICLR*.
Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. 2018. Mutual information neural estimation. In *International Conference on Machine Learning*, pages 531–540. PMLR.
Blair Bilodeau, Natasha Jaques, Pang Wei Koh, and Been Kim. 2022. Impossibility Theorems for Feature Attribution.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *EMNLP*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Jordan Boyd-Graber, Samuel Carton, Shi Feng, Q. Vera Liao, Tania Lombrozo, Alison Smith-Renner, and Chenhao Tan. 2022. Human-Centered Evaluation of Explanations. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts*, pages 26–32, Seattle, United States. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, and etal. 2020. Language Models are Few-Shot Learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In *Advances in Neural Information Processing Systems 31*, pages 9539–9549. Curran Associates, Inc.
Samuel Carton, Anirudh Rathore, and Chenhao Tan. 2020. Evaluating and Characterizing Human Rationales.
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, and Xiang Ren. 2022. Frame: Evaluating simulatability metrics for free-text rationales. *EMNLP BlackboxNLP Workshop*.
Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, and Swabha Swayamdipta. 2022. REV: Information-Theoretic Evaluation of Free-Text Rationales. *arXiv preprint arXiv:2210.04982*.
Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, and Lawrence Carin. 2020. CLUB: A contrastive log-ratio upper bound of mutual information. In *International conference on machine learning*, pages 1779–1788. PMLR.
Co:here. 2023. Cohere Embedding API Reference.
Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In *ACL*, pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics.
Thomas M Cover, Joy A Thomas, et al. 1991. Entropy, relative entropy and mutual information. *Elements of information theory*, 2(1):12–13.
|
0tWTxYYPnW
|
It seems that the best utility function (equation 1, page 3, section 2), mapping alternative to R, is not unique (imagine the ideal case when p_u(a, b) is 1 when u(a)> u(b), and we have two alternatives..).. The regularization in equation 1 would prefer smaller and smaller values... Any comments on these considerations (perhaps the assumption is that the utility values have some minimal absolute magnitude?)
|
DISTRIBUTIONAL PREFERENCE LEARNING:
UNDERSTANDING AND ACCOUNTING FOR HIDDEN CONTEXT IN RLHF
Anand Siththaranjan * Cassidy Laidlaw *
University of California, Berkeley
{anandsranjan,cassidy_laidlaw}@cs.berkeley.edu
Dylan Hadfield-Menell
Massachusetts Institute of Technology
dhm@csail.mit.edu
ABSTRACT
In practice, preference learning from human feedback depends on incomplete data with hidden context. Hidden context refers to data that affects the feedback received, but which is not represented in the data used to train a preference model. This captures common issues of data collection, such as having human annotators with varied preferences, cognitive processes that result in seemingly irrational behavior, and combining data labeled according to different criteria. We prove that standard applications of preference learning, including reinforcement learning from human feedback (RLHF), implicitly aggregate over hidden contexts according to a well-known voting rule called Borda count. We show this can produce counter-intuitive results that are very different from other methods which implicitly aggregate via expected utility. Furthermore, our analysis formalizes the way that preference learning from users with diverse values tacitly implements a social choice function. A key implication of this result is that annotators have an incentive to misreport their preferences in order to influence the learned model, leading to vulnerabilities in the deployment of RLHF. As a step towards mitigating these problems, we introduce a class of methods called distributional preference learning (DPL). DPL methods estimate a distribution of possible score values for each alternative in order to better account for hidden context. Experimental results indicate that applying DPL to RLHF for LLM chatbots identifies hidden context in the data and significantly reduces subsequent jailbreak vulnerability. Our code and data are available at https://github.com/cassidylaidlaw/hidden-context.
1 INTRODUCTION
Encoding human preferences and values into interactive learning systems is an essential component for making those systems safe and socially beneficial. To accomplish this, modern machine learning models, such as large language model (LLM) chatbots like ChatGPT and Claude, are trained with feedback from human evaluators. This method, often called reinforcement learning from human feedback (RLHF), seeks to align system behavior with the preferences of annotators. In this paper, we study how RLHF infers preferences when there is hidden context that influences human evaluations.
Hidden context is any information that affects preference annotations but is not given as input to the learned utility or reward model. It can arise through several mechanisms. For instance, when feedback is collected from many different people, annotator identity is hidden context: it affects the annotations, since different annotators could have very different preferences, but it is not input to the reward model, since the annotators’ data is combined anonymously. Other sources of hidden context include human irrationality and evaluation according to multiple objectives.
To motivate the consequences of naive preference learning with hidden context, consider the following hypothetical scenario:
*Equal contribution.
Figure 1: We analyze the effects of hidden context on preference learning, which is one of the key steps in reinforcement learning from human feedback (RLHF). Hidden context is any information that affects the annotator’s assessment of the utility of different alternatives, but is not input to the learned utility or reward model. Our framework encompasses many potential issues with preference learning, including human irrationality, diverse preferences among annotators, and combining multiple objectives (Section 2). We prove that preference learning implicitly aggregates over hidden context using a rule called Borda count (Section 3).
Example 1.1. A company has developed an AI assistant to help high school students navigate college admissions. They implement RLHF by asking their customers for feedback on how helpful the chatbot’s responses are. Among other questions, this process asks users whether or not they prefer to see information about the Pell Grant, an aid program for low-income students. Because the population of customers is biased towards high-income students, most feedback indicates that users prefer other content to content about the Pell Grant. As a result, RLHF trains the chatbot to provide less of this kind of information. This marginally improves outcomes for the majority of users, but drastically impacts lower-income students, who rely on these recommendations to understand how they can afford college.
The heart of this issue is that common preference learning approaches assume that all relevant features are provided as input to the reward model. However, when there is hidden context—which is almost always the case—this assumption is false. As a result, standard methods can have unexpected and undesirable consequences. In Example 1.1, relevant context about the annotator’s identity (i.e., their income level) is missing from the data. The implicit aggregation over preferences biases the outcome in favor of high-income applicants. In this work, we take steps to better understand the implications of unobserved context in preference learning and consider technical approaches to identify when such situations occur.
In Section 2 we present a formal model of preference learning with hidden context. We show that our model can represent many challenges in preference learning, such as combining data from different users, accounting for irrationality, and optimizing for multiple objectives. Since these challenges are ubiquitous, understanding their implications is crucial for safely deploying RLHF-trained models.
In Section 3, we use our model to develop theoretical results on the consequences of hidden context in preference learning. First, we provide a precise characterization of the utility function that preference learning will output when there is hidden context. In particular, we show that preference learning implicitly aggregates over hidden context using a rule called the Borda count. We explore the implications of this finding, identifying cases where Borda count aggregates preferences in unintuitive ways, quite different from other methods like regression. Furthermore, when data is combined from many annotators, we establish connections with the social choice literature to expose another problem arising from hidden context: annotators may have an incentive to misreport their preferences to influence the learned reward function.
Next, we consider the design of preference learning methods that more gracefully account for hidden context. In Section 4, we propose distributional preference learning (DPL). DPL estimates a distribution over utility values for each input instead of a single real-valued output. This allows the method to detect situations where unobserved context could influence preferences. We show how DPL can detect the effects of missing features through an explained variance ($r^2$) metric.
We validate DPL in two ways. First, we conduct a small-scale synthetic experiment with a 1-dimensional space of alternatives that allows us to directly compare to Borda count. Next, we apply DPL to a real-world dataset of preferences for use in RLHF. In this case, the preference data is collected according to two distinct objectives. In one subset of the data, raters were asked to prefer helpful and honest responses. In the other subset, raters were asked to prefer responses that did not respond to harmful requests. This introduces hidden context because the single reward model is trained on the combined data. We find that DPL automatically identifies this hidden context and the uncertainty that arises when these competing goals are at odds.
Beyond identifying potential instances of relevant hidden context, our experiments indicate that DPL can be used to develop guardrails that protect against jailbreaks. Wei et al. (2023) showed that many jailbreaks succeed by pitting the helpfulness and harmlessness objectives of chatbots against one another. This means that some jailbreaks can be understood as a consequence of hidden context. As a result, it is possible to detect this class of jailbreaks by leveraging the distribution of utilities we get from DPL. In particular, risk-aversion with respect to the distribution of learned utilities can dramatically reduce the rate at which the preference model prefers jailbroken responses.
We summarize our contributions as follows:
1. we identify and formally characterize the problem of preference learning with hidden context, and describe a number of settings where it may arise;
2. we show that preference learning with hidden context implicitly implements Borda count, which can have counter-intuitive implications and incentives for annotators to misreport preferences;
3. we introduce distributional preference learning and show that it can detect and mitigate some effects of hidden context in LLM-based preference models.
2 SETTING AND RELATED WORK
We begin by formally describing the problem of preference learning with hidden context. Consider a finite set of alternatives \( A \), and an unknown utility function \( u : A \rightarrow \mathbb{R} \). For instance, in the case of a chatbot, the alternatives could be the possible responses to a prompt, and the utility function would describe how much a particular response is preferred. To estimate \( u \), we observe the outcome of comparisons between pairs of alternatives \((a, b)\). We assume there is a fixed probability for any pair of alternatives \((a, b)\) that \( a \) will be preferred to \( b \); we denote this probability \( p_u(a, b) \) and assume that \( p_u(a, b) + p_u(b, a) = 1 \); that is, the order in which the alternatives are presented does not matter. In the ideal case, comparison outcomes would exactly reflect the utility function, i.e., \( p_u(a, b) = 1\{u(a) > u(b)\} \). Realistically, however, preference comparison data never exactly follows a single utility function. To account for the fact that people are noisy and/or inconsistent in their feedback, a common assumption is that preference comparisons are instead made according to a Bradley-Terry-Luce (BTL) model (Rajkumar & Agarwal, 2014), also sometimes known as a Boltzmann-rational model (Jeon et al., 2020): \( p_{u}^{\text{BTL}}(a, b) = \frac{e^{u(a)}}{e^{u(a)} + e^{u(b)}} \). In this model, the higher \( u(a) \) is compared to \( u(b) \), the more likely the outcome of the comparison is to prefer \( a \) to \( b \); as the utilities for \( a \) and \( b \) get closer, the comparison outcome moves towards uniformly random. The most commonly used method for estimating the utility function \( u \) from preference data is to fit the maximum likelihood estimator (MLE) under the BTL model. To derive the MLE, we consider the limit of infinite data and assume that preference comparisons are elicited for uniformly randomly selected pairs of alternatives. The MLE for the utility function is given by \( \hat{u} = \arg \min_{\hat{u}} L(\hat{u}; u) \), where
\[
L(\hat{u}; u) = \frac{1}{|A|(|A|-1)} \sum_{a \neq b} -p_u(a, b) \log \left( \frac{e^{\hat{u}(a)}}{e^{\hat{u}(a)} + e^{\hat{u}(b)}} \right) - (1 - p_u(a, b)) \log \left( \frac{e^{\hat{u}(b)}}{e^{\hat{u}(a)} + e^{\hat{u}(b)}} \right). \quad (1)
\]
Although in practice \( \hat{u} \) might be represented by a neural network, we assume for theoretical purposes that \( L(\hat{u}; u) \) is optimized over all possible \( \hat{u} : A \rightarrow \mathbb{R} \). In some cases, \( L \) may not have any minimum, so we consider a regularized version of (1); see Equation (6) and Appendix A.1 for more details.
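To make the estimator concrete, the following sketch evaluates the population loss in (1) for a finite set of alternatives, given the comparison-probability matrix; the function names are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def btl_loss(u_hat, p):
    """Population BTL negative log-likelihood, Eq. (1).

    u_hat: (m,) candidate utilities, one per alternative.
    p: (m, m) matrix with p[a, b] = probability a is preferred to b
       (p[a, b] + p[b, a] = 1); the diagonal is ignored.
    """
    m = len(u_hat)
    gap = u_hat[:, None] - u_hat[None, :]
    # log e^{u(a)} / (e^{u(a)} + e^{u(b)}) = log sigmoid(u(a) - u(b))
    log_sig = -np.logaddexp(0.0, -gap)
    off_diag = ~np.eye(m, dtype=bool)
    terms = -p * log_sig - (1.0 - p) * log_sig.T
    return terms[off_diag].sum() / (m * (m - 1))
```

Minimizing `btl_loss(u_hat, p) + reg * np.sum(u_hat ** 2)` with any gradient-based optimizer mirrors the regularized objective referenced above; without regularization the minimizer can diverge for near-deterministic comparison data.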
2.1 HIDDEN CONTEXT
While preference learning based on (1) has been widely deployed and enjoyed some success, it rests on assumptions that often do not hold in practice. In particular, irrationality, partial observability, and diversity of preferences among a population all challenge the BTL model on which the usual preference learning loss is based. We argue that all of these cases can be understood as special cases of a general phenomenon: hidden context. For concreteness, consider again Example 1.1. The key problem in the example is a mismatch between the information that influences the user’s feedback and the information that the preference learning algorithm uses to estimate utilities based on that feedback. The user gives feedback that depends on their financial situation, while the learned utility model observes request-response pairs. Thus, the preference learning algorithm must produce a single ordering over alternatives, implicitly aggregating feedback over the hidden context of whether the user is high- or low-income.
To model hidden context in preference learning, we extend the preference learning formalization to utility functions \( u : \mathcal{A} \times \mathcal{Z} \rightarrow \mathbb{R} \) over a space of observed features \( a \in \mathcal{A} \) and hidden context \( z \in \mathcal{Z} \). Let \( D_z \) be a distribution over \( \mathcal{Z} \). In Example 1.1, \( z \in \{0, 1\} \) could represent whether the user is low- or high-income; then perhaps \( z \sim B(0.8) \) if 80% of users are high-income (where \( B(p) \) represents a Bernoulli random variable with mean \( p \)). Given \( u(a, z) \) and \( D_z \), we can calculate the probability that one alternative \( a \) is chosen over another \( b \) given that \( z \) is hidden:
\[
p_{u,D_z}(a,b) = \mathbb{E}_{z \sim D_z}[O_u(a,b,z)] \quad \text{where} \quad O_u(a,b,z) = \begin{cases}
1/2 & \text{if } u(a,z) = u(b,z) \\
1\{u(a,z) > u(b,z)\} & \text{o.w.}
\end{cases}
\]
\( p_{u,D_z} \) marginalizes over the distribution of the hidden context \( z \) and thus reflects the comparison data available to the preference learning algorithm. Our model of hidden contexts can represent many settings where preference learning is difficult:
**Partial observability.** There may be variables that are observable by the human making preference comparisons but not by the AI system, which learns from that data. For instance, suppose annotators’ preferences depend on the day of the week or the month of the year, but the estimated utility function ignores the date the comparisons were made.
**Multiple objectives.** System designers may combine data about user preferences over multiple, different objectives. For instance, the Anthropic HH-RLHF dataset (Bai et al., 2022a) contains one subset with comparisons of chatbot responses based on harmlessness and another subset with comparisons based on helpfulness. When these subsets are combined, the objective that was used to make the comparison (in this case, either harmlessness or helpfulness) is a hidden context.
**Population with diverse preferences.** Preference learning is almost always applied to data aggregated from many annotators who may have very different utility functions (e.g., Bai et al. (2022a) observe high inter-annotator disagreement). If \( z \) represents the annotator who makes a comparison, then \( u(\cdot, z) \) could represent the utility function for that annotator. However, when the data is used to train a single utility function \( \hat{u}(\cdot) \), then the annotator’s identity \( z \) is a hidden context.
**Irrational and noisy decisions.** Various types of irrationality could be modeled as unseen latent variables that affect a person’s decision-making. For instance, to represent a person making noisy utility estimates, one could let \( \mathcal{Z} = \mathbb{R}^{|\mathcal{A}|}, z(a) \overset{\text{iid}}{\sim} \mathcal{N}(0, 1) \), and \( u(a, z) = \mu(a) + z(a) \) for some \( \mu : \mathcal{A} \rightarrow \mathbb{R} \). That is, the person has an underlying utility \( \mu(a) \) for each alternative but makes comparisons based on that utility plus independently sampled Gaussian noise representing irrationality in their utility assessments. This is equivalent to the Thurstone-Mosteller model of noisy decision making (Handley, 2001).
Due to the ubiquity of these settings, preference learning is nearly always performed with hidden context. This means that the learned utility function \( \hat{u}(a) \), which only depends on the seen features \( a \), must somehow aggregate over the hidden contexts \( z \). We aim to understand and mitigate the consequences of this ubiquitous challenge.
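As a concrete illustration of how \( p_{u,D_z} \) arises, the following sketch Monte-Carlo-estimates the marginal comparison probability for a toy version of Example 1.1; the specific utility values and the 80/20 income split are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def outcome(u_a, u_b):
    """O_u(a, b, z): ties count as 1/2, otherwise an indicator."""
    return np.where(u_a == u_b, 0.5, (u_a > u_b).astype(float))

def p_marginal(u, a, b, z_samples):
    """Monte-Carlo estimate of p_{u,D_z}(a, b) = E_{z~D_z}[O_u(a, b, z)]."""
    return outcome(u(a, z_samples), u(b, z_samples)).mean()

# Toy Example 1.1: z = 1 marks a high-income user (80% of users).
# Alternative 1 (show Pell Grant info) mildly annoys them (-0.1) but
# greatly helps low-income users (+5.0); alternative 0 has utility 0.
z = rng.binomial(1, 0.8, size=100_000)
u = lambda a, zz: np.where(a == 1, np.where(zz == 1, -0.1, 5.0), 0.0)

print(p_marginal(u, 1, 0, z))  # ≈ 0.2, even though E_z[u(1, z)] = 0.92 > 0
```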
2.2 RELATED WORK
Preference learning and its use in reinforcement learning have a long history (Akrour et al., 2012; Busa-Fekete & Hüllermeier, 2014; Sadigh et al., 2017; Christiano et al., 2017; Pacchiano et al., 2021). As part of RLHF, preference learning has been widely used recently for training large language models (LLM) to give outputs according to human preferences (Ziegler et al., 2020; Stiennon et al., 2020; Askell et al., 2021; Bai et al., 2022a;b; Ouyang et al., 2022). It has also been extensively analyzed in theory; some results focus on its sample complexity in various settings (Chen & Suh, 2015; Shah et al., 2015; Shah & Wainwright, 2018; Heckel et al., 2018; Hendrickx et al., 2020; Chambers et al., 2021) or other directions such as the statistical identifiability of preferences (Zhao et al., 2020; Skalse et al., 2023), the computational efficiency of preference learning (Maystre & Grossglauser, 2015), Bayesian preference learning (Caron & Doucet, 2010), or the combination of preference learning and reinforcement learning (Zhu et al., 2023). However, to our knowledge, no prior work has specifically analyzed the behavior of preference learning with hidden context.
The challenges of preference learning that we group as cases of “hidden context” have also been studied individually. There has been some work on explicitly modeling annotator disagreement...
as well as other approaches to learning from annotators with diverse preferences (Jia et al., 2023; Dumoulin et al., 2023; Mishra, 2023; Fish et al., 2023). Other work has studied the effects of human irrationality or non-BTL models of human behavior on preference learning (Bobu et al., 2020; Lee et al., 2021; Laidlaw & Russell, 2021; Knox et al., 2022; Laidlaw & Dragan, 2022), which under our framework can be modeled as hidden context. Zhuang & Hadfield-Menell (2020) and Dai et al. (2023) study the optimization of multiple objectives learned from human preferences. Finally, related to our connections with social choice theory in Section 3, some previous work has associated preference or reward learning with concepts in economics, such as voting rules (Conitzer & Sandholm, 2005), incentive compatibility (Echenique & Prasad, 2019), and mechanism design (Fickinger et al., 2020).
3 THEORETICAL ANALYSIS
We begin our analysis by precisely describing the behavior of preference learning with hidden context. In particular, we can show that a utility function \( \hat{u}(a) \) learned with the BTL loss as in (6) implicitly aggregates utilities over the hidden contexts \( z \) using a rule called Borda count. We define the Borda count \( BC(a) \) of an alternative \( a \) as \( BC(a) = \frac{1}{|A|} \sum_{b \in A} p_{u,D_z}(a, b) \). That is, the Borda count is the average probability that the alternative is preferred to other alternatives. If an alternative is almost always preferred to all other alternatives, then its Borda count will be close to 1; if it is almost always dispreferred, the Borda count will be near 0. We use the term Borda count as a reference to the well-known voting rule of the same name—a connection we expand on in Section 3.2.
**Theorem 3.1.** BTL preference learning implicitly aggregates hidden context according to Borda count. That is, if \( \hat{u} \) is optimized according to (6), then \( \forall a, b \in A, \hat{u}(a) > \hat{u}(b) \iff BC(a) > BC(b) \).
We defer all proofs to Appendix A. According to Theorem 3.1, the learned utility function and Borda count differ by only a monotonic transformation. If we use reinforcement learning or another optimization technique to search for the alternative \( a \) which maximizes \( \hat{u}(a) \)—as one does in RLHF—then the optimal alternative will be the same as that which maximizes the Borda count \( BC(a) \). Similar results that relate preference learning and Borda count were previously explored by Rajkumar & Agarwal (2014), although they do not consider the setting of hidden context.
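Theorem 3.1 is easy to check numerically: compute the Borda count from the marginal comparison matrix and compare its ranking to the one recovered by minimizing the (regularized) BTL loss. The sketch below reuses `btl_loss` from the earlier snippet; the regularization constant is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize

def borda_count(p):
    """BC(a): average probability that a is preferred (diagonal set to 1/2)."""
    q = p.copy()
    np.fill_diagonal(q, 0.5)
    return q.mean(axis=1)

def fit_btl(p, reg=1e-4):
    """Regularized BTL MLE over a finite alternative set."""
    m = p.shape[0]
    obj = lambda u: btl_loss(u, p) + reg * np.sum(u ** 2)
    return minimize(obj, np.zeros(m), method="L-BFGS-B").x

# Per Theorem 3.1, the two rankings should coincide:
# np.argsort(fit_btl(p)) == np.argsort(borda_count(p))
```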
While Theorem 3.1 precisely describes the results of preference learning with hidden context, its implications are unclear. Is Borda count a useful way of aggregating over hidden contexts in practice, and how does it compare to other aggregation rules? To answer this question, we give multiple perspectives on preference learning with hidden context using the result of Theorem 3.1. First, we compare preference learning to least-squares regression with hidden context. Then, we analyze learning from a population with diverse preferences through the lens of social choice theory.
3.1 COMPARISON TO EXPECTED UTILITY AND LEAST-SQUARES REGRESSION
One desirable property of preference learning with hidden context would be if it converged to the expected utility for each alternative when marginalizing over hidden context, which we denote by \( \bar{u}(a) = \mathbb{E}_{z \sim D_z}[u(a, z)] \). For instance, one can show that least-squares utility regression converges to the expected utility when there is hidden context; see Appendix A.2 for a formal statement and proof. The fact that least-squares utility regression yields \( \hat{u} = \bar{u} \) shows that, in some sense, it gracefully degrades in the presence of hidden context. Although there are drawbacks to expected utility, it is a well-understood method of aggregating utilities over hidden contexts that has desirable decision-theoretic properties. Thus, it would be helpful if the utility function \( \hat{u}(a) \) learned by preference learning with hidden context were equivalent to the expected utility \( \bar{u}(a) \). In this section, we characterize when the output of preference learning with hidden context is equivalent to that of utility regression.
**Positive results** In some cases, we can show that preference learning does identify a utility function that is equivalent to the expected utility. The result requires that the zero-mean “noise” induced by hidden context is identical across alternatives and reasonably distributed. We represent this noise as \( \epsilon(a) = u(a, z) - \bar{u}(a) \) (where \( z \sim D_z \)) to be the random variable representing the residual utility of an alternative \( a \) after subtracting its expected utility.
Figure 2: We introduce distributional preference learning (DPL), which explicitly accounts for hidden context. While normal preference learning outputs a single utility estimate for each alternative, DPL outputs a distribution over utilities. This distribution represents the range of utility values for that alternative as the hidden context varies, e.g., the distribution of utilities assigned to a chatbot response by different annotators or according to different objectives (like harmlessness vs. helpfulness).

**Theorem 3.2.** Let \( \epsilon(a) \) be independent and identically distributed for all \( a \in A \). Furthermore, suppose \( \epsilon(a) - \epsilon(b) \) has support around 0, i.e., \( \forall \delta > 0, F_{a,b}(\delta) > F_{a,b}(0) = \frac{1}{2} \), where \( F_{a,b} \) is the cumulative distribution function of \( \epsilon(a) - \epsilon(b) \). Then the utility function \( \hat{u} \) learned by minimizing (6) satisfies \( \hat{u}(a) > \hat{u}(b) \iff \bar{u}(a) > \bar{u}(b) \) for any \( a, b \in A \).
Many noise distributions, such as uniform and normal distributions, clearly satisfy the assumptions of Theorem 3.2. Thus, as long as the noise caused by hidden context does not vary across alternatives and is not too unusual, we generally expect that preference learning will give a utility function with the same ordering over alternatives as the expected utility. This means that it performs similarly to least-squares regression.
**Negative results** In other cases, preference learning can behave quite differently from utility regression. Example 1.1 describes such a case. The expected utility of telling students about Pell Grants is higher than the expected utility of not telling them, since it is of great benefit to low-income students and only a small inconvenience to high-income students. However, the Borda count is lower since the high-income majority prefer not to hear about the grants.
One might suppose that preference learning and regression disagree in this case because the majority of users prefer the alternative with lower expected utility, and preference learning gives a learned utility function which assigns higher utilities to alternatives preferred by the majority of users. As long as the majority of feedback agrees with the ordering given by the expected utility, will preference learning and regression give the same result? The following proposition shows that this is not the case.
**Proposition 3.3.** \( \exists A, D_z, u \) such that \( \forall a, b \in A, [\bar{u}(a) > \bar{u}(b)] \Rightarrow [p_{u,D_z}(a,b) > 1/2] \), but \( \hat{u} \) is not equivalent to \( \bar{u} \), i.e., there exist \( a, b \in A \) such that \( \hat{u}(a) > \hat{u}(b) \) but \( \bar{u}(a) < \bar{u}(b) \).
That is, Proposition 3.3 describes a case where for any two alternatives, the majority of feedback chooses the alternative with the higher expected utility, and yet preference learning still does not produce a utility function equivalent to the expected utility. In general, it is impossible to always identify \( \bar{u} \) (even up to a monotonic transformation) given only comparison data.
**Theorem 3.4 (Unidentifiability of \( \bar{u} \)).** Suppose a preference learning algorithm takes as input unlimited samples of the form \((a, b, O_u(a, b, z))\) for all values of \(a\) and \(b\), where \(z \sim D_z\), and deterministically outputs a learned utility function \( \hat{u}(a) \). Then there is some utility function \( u \) and distribution over unseen features \( D_z \) such that \( \hat{u} \) is not equivalent to \( \bar{u} \).
3.2 CONNECTIONS TO SOCIAL CHOICE THEORY
When training on comparison data from many agents, each with their own preferences, preference learning aggregates all their feedback into a single utility function. As we described in Section 2, this is a case where the identity of the annotator is hidden context: it affects the comparison outcomes but is unseen by the preference learning algorithm. Social choice theory studies methods for aggregating preferences from a population. Thus, it can provide a lens through which to understand this particular case of preference learning with hidden contexts.
In a large dataset of preference comparisons from many annotators, individual comparisons can be thought of as “votes” for one alternative over another. When preference learning combines this data into a single utility function, it is similar to a voting rule that ranks candidates based on annotators’ votes. In particular, Borda count is a well-studied voting rule—usual definitions of Borda count in voting theory differ from ours only by an affine transformation (Johnson, 2005; Emerson, 2013; Lippman, 2012). This means that many results from the social choice literature on Borda count can be applied to understanding preference learning from a diverse population. For example, under Borda count, participants may have an incentive to misreport their preferences (Dummett, 1998).
Figure 3: The results of our experiments with synthetic data. We find that the utility estimated by normal preference learning agrees closely with the Borda count, as our theory suggests. Furthermore, DPL successfully identifies alternatives where hidden context has a significant effect.
Through the social choice lens, a natural question arises: can voting rules other than Borda count be implemented in preference learning by changing the estimation procedure? We explore this question further in Appendix B.3.
4 DISTRIBUTIONAL PREFERENCE LEARNING
Our theoretical results show that preference learning in the presence of hidden context can lead to undesirable outcomes. While system designers may still choose to use preference learning for RLHF or other applications, they should carefully consider these downsides and try to mitigate them. The first step towards this is detection—knowing to what degree hidden context affects preference data at both the dataset and instance level. In this section, we describe a simple modification to preference learning such that it can detect and characterize inconsistent feedback.
Our alternative preference learning methods, which we call distributional preference learning (DPL), output a distribution over possible utilities for each alternative rather than a single value (Figure 2). In particular, we learn a mapping \( \hat{D} : A \to \Delta(\mathbb{R}) \) from alternatives to distributions over utilities to estimate the distribution of \( u(a,z) \) when \( z \sim D_z \). We consider two variants, each of which parameterizes the distribution \( \hat{D}(a) \) in a different way.
First, the mean-and-variance model learns two functions \( \hat{\mu} : A \to \mathbb{R} \) and \( \hat{\sigma} : A \to [0,\infty) \), parameterizing the distribution over utilities as \( \hat{D}(a) = N(\hat{\mu}(a), \hat{\sigma}(a)^2) \). Second, in the categorical model, we choose \( n \) evenly spaced utility values \( u_1 < u_2 < \ldots < u_n \), and then parameterize the distribution as the probabilities of each of those utilities \( \hat{p}(u_i | a) \) for \( i \in \{1,\ldots,n\} \). We train the distributional preference models by maximizing the likelihood of the data given the model \( p_{\hat{D}}(a,b) = E[O(u_a,u_b) | u_a \sim \hat{D}(a), u_b \sim \hat{D}(b)] \). Concretely, for the mean-and-variance model, the loss for a single preference comparison where alternative \( a \) is preferred to \( b \) is the negative log probability that \( u_a - u_b > 0 \):
\[
- \log \Phi \left( \frac{\hat{\mu}(a) - \hat{\mu}(b)}{\sqrt{\hat{\sigma}(a)^2 + \hat{\sigma}(b)^2}} \right).
\]
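In code, this loss is a one-liner, since \( u_a - u_b \sim N(\hat{\mu}(a) - \hat{\mu}(b), \hat{\sigma}(a)^2 + \hat{\sigma}(b)^2) \) under the model. A minimal sketch (in practice \( \hat{\mu} \) and \( \hat{\sigma} \) would be two heads of a neural network):

```python
import numpy as np
from scipy.stats import norm

def mv_dpl_loss(mu_a, sigma_a, mu_b, sigma_b):
    """Negative log P(u_a - u_b > 0) under independent Gaussians,
    for a comparison in which alternative a was preferred to b."""
    z = (mu_a - mu_b) / np.sqrt(sigma_a ** 2 + sigma_b ** 2)
    return -norm.logcdf(z)  # logcdf is the numerically stable log Phi
```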
For the categorical model, the equivalent loss is
\[
- \log \sum_{i=1}^{n} \sum_{j=1}^{n} \hat{p}(u_i | a) \hat{p}(u_j | b) \begin{cases}
1/2 & u_i = u_j \\
1 & u_i > u_j \\
0 & \text{o.w.}
\end{cases}
\]
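A corresponding sketch of the categorical loss, assuming both alternatives share the utility grid \( u_1 < \ldots < u_n \); the vectorization via a win matrix is an implementation choice, not prescribed by the paper:

```python
import numpy as np

def categorical_dpl_loss(p_a, p_b):
    """Negative log-likelihood for a comparison where a beat b.

    p_a, p_b: (n,) probability vectors over the shared utility levels."""
    n = len(p_a)
    i, j = np.indices((n, n))
    win = (i > j) + 0.5 * (i == j)   # P(u_i beats u_j): 1, 1/2, or 0
    return -np.log(p_a @ win @ p_b)
```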
Note that DPL is not trying to model uncertainty about the utility function which comes from limited data, but rather uncertainty which comes from hidden context. Even in the limit of infinite data, DPL will not necessarily converge to a point estimate of utility for each alternative.
Since DPL methods give more information than a single utility estimate at each alternative, they can detect the effects of missing features at both the dataset and instance level. At the dataset level, a popular metric for determining the effects of missing features in regression is the coefficient of determination, \( r^2 \). We can derive an equivalent measure for DPL. Let \( \hat{\mu}(a) = E[\hat{D}(a)] \). Then we define \( r^2 = \text{Var}[\hat{\mu}(a)] / (\text{Var}[\hat{\mu}(a)] + E[\text{Var}[\hat{D}(a)]]) \), where \( a \) is sampled from the uniform distribution over alternatives. Intuitively, \( r^2 \), which lies between 0 and 1, represents the amount of variation in utility values that is captured by the observed features \( a \); \( 1 - r^2 \) is the proportion of variance caused by hidden context. At the instance level, alternatives \( a \) where \( \text{Var}[\hat{D}(a)] \) is higher are likely those where missing features have a larger impact on the utility of the alternative.
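Given per-alternative means and variances from a trained DPL model, the \( r^2 \) metric reduces to a between/within variance ratio; a minimal sketch:

```python
import numpy as np

def dpl_r2(means, variances):
    """r^2 = Var[mu(a)] / (Var[mu(a)] + E[Var[D(a)]]) over uniformly
    sampled alternatives; low values indicate strong hidden context."""
    between = np.var(means)       # variance explained by observed features
    within = np.mean(variances)   # average spread due to hidden context
    return between / (between + within)
```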
**Synthetic experiments** To test distributional preference learning, we ran experiments in a simple setting of preference learning with hidden context. We let \( A = [0,1] \) and \( z \sim B(1/2) \).
| Pref. learning method | Training dataset | Jailbreak rate | Helpfulness accuracy |
|-----------------------|-----------------|----------------|----------------------|
| Standard | Helpful | 52.4% | 72.6% |
| Standard | Harmless | 3.7% | 49.5% |
| Standard | Combined | 25.1% | 68.2% |
| Mean & var. DPL | Combined | 30.5% | 68.4% |
| ↓ Risk-averse | | 20.3% | 66.4% |
| Categorical DPL | Combined | 32.1% | 66.2% |
| ↓ Risk-averse | | 13.4% | 66.2% |
(a) Combining our distributional preference learning (DPL) methods with risk-averse optimization mitigates jailbreaks without hurting accuracy on non-harmful prompts.
(b) The \( r^2 \) values, which quantify the effect of hidden context (see Section 4), measured by DPL models trained on different preference datasets.

Table 1: Results from our experiments on explaining and mitigating LLM jailbreaks in Section 4.
We suppose that the true utility function is \( u(a, z) = a \) if \( a < 0.8 \) and \( u(a, z) = 2az \) otherwise. That is, the missing variable \( z \) has no effect when \( a < 0.8 \), but for \( a \geq 0.8 \), \( u(a, z) \) is either \( 2a \) or zero, each with probability one-half. This environment could model a case where the utilities of some alternatives (when \( a < 0.8 \)) are easy for users to judge, while others (when \( a \geq 0.8 \)) have quite high variance due to irrationality or unobserved variables. We estimate utility functions both with normal preference learning and with DPL; Figure 3 shows the results. The left plot shows that the learned utilities closely agree with the Borda count and diverge from the expected utility \( \bar{u} \), as our theory in Section 3 suggests. The right plots show that DPL accurately outputs high-variance distributions when \( a \geq 0.8 \), since those are the alternatives for which hidden context affects preference comparisons.
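For reproducibility, the synthetic data-generating process just described can be written in a few lines; the tie-breaking coin flip is an implementation detail not specified in the text.

```python
import numpy as np

def u_true(a, z):
    """Synthetic utility: z is irrelevant for a < 0.8; for a >= 0.8 the
    utility is either 2a (z = 1) or 0 (z = 0)."""
    return np.where(a < 0.8, a, 2.0 * a * z)

def sample_comparison(rng):
    """One preference comparison with a freshly drawn hidden context."""
    a, b = rng.uniform(0.0, 1.0, size=2)
    z = rng.binomial(1, 0.5)
    ua, ub = u_true(a, z), u_true(b, z)
    label = float(ua > ub) if ua != ub else float(rng.random() < 0.5)
    return a, b, label  # label = 1 iff a was preferred
```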
**Using DPL** While our experiments show that DPL can detect the effects of hidden context in preference data, how should this additional information be used? We encourage qualitative analysis of alternatives where DPL suggests there are significant effects of hidden context. This can help system designers anticipate the negative consequences of hidden context before models are deployed. Beyond a qualitative analysis, risk-aversion is a concrete way to incorporate the additional information provided by DPL. Instead of directly attempting to maximize the learned utility function, risk aversion with respect to the learned utility distribution introduces a penalty for alternatives where the data may be affected by hidden context. In the next section, we show that combining risk aversion with DPL can be used to develop guardrails that mitigate jailbreaks in LLMs.
5 CASE STUDY: COMPETING OBJECTIVES IN RLHF
In this section, we evaluate DPL’s ability to identify hidden context through a case study on large language model (LLM)-based reward models. Chatbots like GPT-4 and Claude are trained by learning a human reward model and then optimizing it via reinforcement learning, together referred to as RLHF. In order to evaluate the ability of DPL methods to identify hidden context, we use the HH-RLHF dataset (Bai et al., 2022a). For this dataset, raters were asked to compare responses based on either helpfulness or harmlessness. When a single utility function is trained on the entire HH-RLHF dataset, the objective (helpfulness or harmlessness) that was used to annotate a pair of responses is a hidden context since it is not available to the learned utility function. This missing variable may cause real harm: Wei et al. (2023) present jailbreaks that manipulate models to prioritize helpfulness over harmlessness and output harmful content. Through our case study, we aim to answer three questions:
1. Does the hidden context of the labeling objective contribute to jailbreak vulnerability?
2. Can DPL detect the effects of this hidden context without explicit supervision?
3. Can DPL reduce models’ susceptibility to jailbreaks?
**Understanding jailbreak vulnerability** To address the first question, we train three LLM-based utility functions on the HH-RLHF dataset (Bai et al., 2022a). The dataset consists of conversations between a human and an AI assistant with two alternatives for the assistant’s final response, plus a label for which response is preferred. Half of the comparisons are labeled based on which response is more helpful and honest and half based on which response is more harmless. Using standard preference learning, we train utility functions \( \hat{u}_{\text{helpful}} \) on just the helpful-labeled data, \( \hat{u}_{\text{harmless}} \) on just the harmless-labeled data, and \( \hat{u}_{\text{combined}} \) on both (see Appendix C for experiment details).
To test if implementing RLHF using these utility functions would lead to jailbreak vulnerabilities, we collect pairs of responses to jailbreak prompts from Wei et al. (2023) that are designed to fool the model into giving a harmful response; each pair consists of one safe response and one jailbroken response. If a learned utility function assigns higher utility to the jailbroken response, then we expect using that utility function to train an LLM assistant via RLHF would lead to the assistant outputting the jailbroken response. We define the “jailbreak rate” of a utility function as the percentage of jailbreak prompts for which it assigns higher utility to the jailbroken response. Since avoiding jailbreaks is not the only purpose of an LLM assistant, we also evaluate the “helpfulness accuracy” of a utility function as its accuracy at predicting judgements in the HH-RLHF helpfulness test set.
The top of Table 1a shows the jailbreak rates and helpfulness accuracies for each of the three normally-trained utility functions. While \( \hat{u}_{\text{harmless}} \), trained only on harmlessness-annotated data, has a very low jailbreak rate of under 4%, its helpfulness accuracy of around 50% suggests it is useless for judging the helpfulness of responses to non-harmful prompts. \( \hat{u}_{\text{helpful}} \) has much higher helpfulness accuracy, but also prefers jailbroken responses more than half the time. The problem is that the jailbroken responses are generally more “helpful” than a safe response which refuses to answer the prompt. Since our theory suggests that \( \hat{u}_{\text{combined}} \) is aggregating the helpful and harmful utilities via Borda count, in many cases the high helpfulness of jailbroken responses leads to high utilities under the combined utility function. In fact, \( \hat{u}_{\text{combined}} \) has a jailbreak rate of around 25%, showing that one cause of jailbreaks is training a single reward model on data which combines two competing objectives—a clear case of hidden context in preference learning.
**Detecting hidden context** To answer the next question—whether we can detect hidden context—we additionally train DPL models on all three datasets and measure their \( r^2 \) values, which are shown in Table 1b. Recall that lower \( r^2 \) indicates more effects from hidden context. We find that among the mean-and-variance DPL models, those trained on either just the helpfulness or just the harmlessness data have \( r^2 \) above 0.75, while the DPL model trained on the combined data has a much lower \( r^2 = 0.53 \). We see the same pattern with categorical DPL models: \( r^2 = (0.63, 0.53) \) for the single-objective models while \( r^2 = 0.41 \) for the combined model. This indicates that DPL can consistently measure the effect of hidden context via the \( r^2 \) metric: for both variants of DPL, \( r^2 \) is considerably lower when hidden context is present.
**Preventing jailbreaks** How might the distributional output of DPL be leveraged within RLHF to guard against jailbreaks? Ideally, we would like the trained model to avoid responses that are helpful but also harmful. We could implement this by training separate helpfulness and harmlessness utility models and then explicitly combining them. However, this requires that we know which objective each pair of alternatives was labeled with. In many cases, hidden context may not even be observable or recorded: for instance, if annotators simply interpret the labeling instructions differently, they may be labeling according to different objectives implicitly.
DPL methods allow the reward model to account for hidden context without the need for that context to be recorded. In particular, we can avoid helpful-but-harmful responses by optimizing a lower quantile of the distribution \( \hat{D} \) output by DPL. Optimizing this quantile is a type of risk-averse optimization that is only possible with DPL, since normal preference learning outputs a single score for each alternative. The bottom of Table 1a shows that using the 0.01-quantile of DPL models (rows labeled “risk-averse”) can mitigate jailbreaks without harming the models’ accuracy otherwise. For instance, the lower quantile of the categorical DPL model trained on the combined data has a jailbreak rate of 13%, compared to 25% for \( \hat{u}_{\text{combined}} \). The models have similar helpfulness accuracy, indicating that risk-averse optimization does not hurt DPL’s performance on non-harmful prompts. Figure 4 illustrates an example where risk-averse optimization prevents a jailbreak response.
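Concretely, for the mean-and-variance model the risk-averse score is just a lower quantile of a Gaussian, so the guardrail is one line; the numbers in the comment are invented to illustrate the effect.

```python
import numpy as np
from scipy.stats import norm

def risk_averse_score(mu, sigma, q=0.01):
    """q-quantile of N(mu, sigma^2): alternatives whose utility is
    uncertain due to hidden context (large sigma) are penalized."""
    return mu + sigma * norm.ppf(q)

# A jailbroken response with mu = 2.0 but sigma = 3.0 scores
# 2.0 + 3.0 * (-2.33) ≈ -4.98, below a safe refusal with mu = 1.0,
# sigma = 0.2, which scores 1.0 + 0.2 * (-2.33) ≈ 0.53.
```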
6 CONCLUSION
Preference learning is becoming an essential component of real-world AI systems that helps align outcomes with the values of users. However, in the ubiquitous case of hidden context—arising from diverse preferences, competing objectives, irrationality, and other types of partial observability—preference learning may have unexpected or unwanted consequences. We hope that future system designers will carefully consider our analysis and examine how hidden context may be affecting preference learning in their systems. Furthermore, we encourage practitioners to consider using distributional preference learning as an alternative method that can explicitly account for hidden context.
ACKNOWLEDGMENTS
We thank Ruiqi Zhong and Sam Toyer for feedback on drafts. Cassidy Laidlaw was supported by an Open Philanthropy AI Fellowship. Dylan Hadfield-Menell was supported by an AI2050 Early Career Fellowship from Schmidt Sciences.
REFERENCES
Riad Akrour, Marc Schoenauer, and Michèle Sebag. APRIL: Active Preference-learning based Reinforcement Learning, August 2012. URL http://arxiv.org/abs/1208.0984. arXiv:1208.0984 [cs].
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. A General Language Assistant as a Laboratory for Alignment, December 2021. URL http://arxiv.org/abs/2112.00861. arXiv:2112.00861 [cs].
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, April 2022a. URL http://arxiv.org/abs/2204.05862. arXiv:2204.05862 [cs].
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional AI: Harmlessness from AI Feedback, December 2022b. URL http://arxiv.org/abs/2212.08073. arXiv:2212.08073 [cs].
Connor Baumler, Anna Sotnikova, and Hal Daumé III. Which Examples Should be Multiply Annotated? Active Learning When Annotators May Disagree. In Findings of the Association for Computational Linguistics: ACL 2023, pp. 10352–10371, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.findings-acl.658.
Andreea Bobu, Dexter R. R. Scobee, Jaime F. Fisac, S. Shankar Sastry, and Anca D. Dragan. LESS is More: Rethinking Probabilistic Models of Human Behavior. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pp. 429–437, March 2020. doi: 10.1145/3319502.3374811. URL https://dl.acm.org/doi/10.1145/3319502.3374811.
Róbert Busa-Fekete and Eyke Hüllermeier. A Survey of Preference-Based Online Learning with Bandit Algorithms. In Peter Auer, Alexander Clark, Thomas Zeugmann, and Sandra Zilles (eds.), Algorithmic Learning Theory, Lecture Notes in Computer Science, pp. 18–39, Cham, 2014. Springer International Publishing. ISBN 978-3-319-11662-4. doi: 10.1007/978-3-319-11662-4_3.
Francois Caron and Arnaud Doucet. Efficient Bayesian Inference for Generalized Bradley-Terry Models, November 2010. URL http://arxiv.org/abs/1011.1761. arXiv:1011.1761 [stat].
|
b27FJxtFeY
|
In Section 2.1, using multi-qubit quantum gates for the parameterized quantum circuits is not optimal, as real multi-qubit quantum gates suffer from more serious quantum noise and make the barren-plateau problem harder to deal with. Why not use single-qubit parameterized gates?
|
Variational Quantum AdaBoost with Supervised Learning Guarantee
Anonymous authors
Paper under double-blind review
Abstract
Although variational quantum algorithms based on parameterized quantum circuits promise to achieve quantum advantages, in the noisy intermediate-scale quantum (NISQ) era, their capabilities are greatly constrained due to the limited number of qubits and depth of quantum circuits. Therefore, we may view these variational quantum algorithms as weak learners in supervised learning. Ensemble methods are a general technique in machine learning for combining weak learners to construct a more accurate one. In this paper, we theoretically prove and numerically verify a learning guarantee for variational quantum adaptive boosting (AdaBoost). To be specific, we theoretically characterize how the prediction error of variational quantum AdaBoost on binary classification decreases with the increase of the number of boosting rounds and sample size. By employing quantum convolutional neural networks, we further demonstrate that variational quantum AdaBoost can not only achieve much higher accuracy in prediction, but also help mitigate the impact of noise. Our work indicates that in the current NISQ era, introducing appropriate ensemble methods is particularly valuable in improving the performance of quantum machine learning algorithms.
1 Introduction
1.1 Background
Machine learning has achieved remarkable success in various fields with a wide range of applications (Mohri et al., 2018; Jordan & Mitchell, 2015; Butler et al., 2018; Gentry et al., 2021). A major objective of machine learning is to develop efficient and accurate prediction algorithms, even for large-scale problems (Zhang et al., 2022; Ergun et al., 2022; Lyle et al., 2022). The figure of merit, the prediction error, can be decomposed into the sum of the training and generalization errors. Both of them should be made small to guarantee an accurate prediction. However, there is a tradeoff between reducing the training error and restricting the generalization error through controlling the size of the hypothesis set, known as Occam’s Razor principle (Rasmussen & Ghahramani, 2000; Mohri et al., 2018).
For classical machine learning, empirical studies have demonstrated that the training error can often be effectively minimized despite the non-convex nature and abundance of spurious minima in training loss landscapes (Livni et al., 2014; Du et al., 2019; Arora et al., 2018). This observation has been explained by the theory of over-parameterization (Jacot et al., 2018; Nitanda & Suzuki, 2021; Zhang et al., 2017; Arora et al., 2020, 2019; Oymak & Soltanolkotabi, 2020). However, it is still difficult to theoretically describe how to guarantee good generalization, which is one of the key problems to be solved in classical machine learning.
Owing to the immense potential of quantum computing, extensive efforts have been dedicated to developing quantum machine learning (Biamonte et al., 2017; Carleo & Troyer, 2017; Dunjko & Briegel, 2018; Carleo et al., 2019; Cerezo et al., 2022; Qi et al., 2023). However, in the noisy intermediate-scale quantum (NISQ) era, the capability of quantum machine learning is greatly constrained due to the limited number of qubits and depth of the involved quantum circuits. Algorithms based on parameterized quantum circuits (PQCs) have become the leading candidates to yield potential quantum advantages in the era of NISQ (Landman et al., 2023; Jerbi et al., 2021; Du et al.).
The basic idea behind them is that these parameterized quantum models can provide representational and/or computational powers beyond what is possible with classical models (Schuld et al., 2021; Liu et al., 2021; Huang et al., 2021). There are mainly three kinds of parameterized quantum models (Jerbi et al., 2023): (a) explicit models (Cerezo et al., 2021a; Benedetti et al., 2019), where data are first encoded into quantum states; after undergoing a PQC, the quantum states are measured and the information is used to update the variational parameters through a classical routine; (b) implicit kernel models (Havlíček et al., 2019; Schuld & Killoran, 2019), where the kernel matrices of the encoding data are computed through quantum circuits and then used to label data; (c) re-uploading models (Pérez-Salinas et al., 2020), where encoding and parameterized circuits are interleaved. A unified framework for the three quantum models was established in Jerbi et al. (2023), where it was pointed out that the advantages of quantum machine learning may lie beyond kernel methods. They found that although kernel methods are guaranteed to achieve a lower training error, their generalization power is poor. Thus, both the training and generalization errors should be taken into account when evaluating the prediction accuracy.
It has been proved in Caro et al. (2022) that good generalization can be guaranteed from few training data for a wide range of quantum machine learning models. However, in contrast to the classical case, training quantum models is notoriously difficult as it often suffers from the phenomena of barren plateaus (McClean et al., 2018; Haug et al., 2021; Cerezo et al., 2021b; Ortíz Marrero et al., 2021; Wang et al., 2021a; Zhao & Gao, 2021), where the cost gradient vanishes exponentially fast, and there exist (exponentially) many spurious local minima (Anschuetz, 2022; Anschuetz & Kiani, 2022; You & Wu, 2021). In this sense, most quantum learning algorithms can be viewed as weak learners in the language of supervised machine learning.
To improve the performance of quantum algorithms, we can employ ensemble methods as inspired by the classical ensemble learning. There are various kinds of ensemble methods, e.g., bagging (Breiman, 1996), plurality voting (Lam & Suen, 1997; Lin et al., 2003) and boosting (Freund et al., 1999). It has been suggested in Jiang et al. (2020) that an optimized weighted mixture of concepts, e.g., PAC-Bayesian (McAllester, 1999), is a promising candidate for further research. Thus, adaptive boosting (AdaBoost), which adaptively adjusts the weights of a set of base learners to construct a more accurate learner than base learners, is appropriate for improving the performance of quantum weak learners. For classical machine learning, there has been a rich theoretical analysis on AdaBoost (Freund & Schapire, 1997; Bartlett et al., 1998; Mohri et al., 2018; Grønlund et al., 2019), and it has been shown to be effective in practice (Sun et al., 2021; Drucker et al., 1993; Li et al., 2008; Zhang et al., 2019). In this paper, we provide the first theoretical learning guarantee for binary classification of variational quantum AdaBoost, and then numerically investigate its performance on 4-class classification by employing quantum convolutional neural networks (QCNNs), which are naturally shallow and particularly useful in NISQ era.
1.2 Related Work
Various quantum versions of classical AdaBoost have been proposed, such as those of Arunachalam & Maity (2020), Wang et al. (2021b), and Ohno (2022). These works employ quantum subroutines, e.g., mean estimation and amplitude amplification, to update quantum weak classifiers and estimate the weighted errors so as to reduce the time complexity. Therefore, the realizations of these quantum versions of AdaBoost are beyond the scope of current NISQ circuits. In contrast, in this work we utilize variational quantum classifiers realized on current NISQ circuits, which are obtained through a quantum-classical hybrid approach.
Recently, ensemble methods have been proposed to enhance the accuracy and robustness of quantum classification with NISQ devices. Variational quantum AdaBoost and variational quantum Bagging have been empirically investigated in Li et al. (2023); Incudini et al. (2023) with hardware-efficient ansatz. It was demonstrated via simulations that quantum AdaBoost not only outperforms quantum Bagging (Li et al., 2023), but also can save resources in terms of the number of qubits, gates, and training samples (Incudini et al., 2023).
1.3 Our Contributions
In this paper, we theoretically and numerically investigate the performance of variational quantum AdaBoost by focusing on classification. Our contributions are summarized as follows.
Figure 1: The schematic of a PQC with $K$ independent trainable gates. Each trainable gate is parameterized by a multi-qubit rotational gate which is efficiently implementable.
1) For binary classification, we provide the first theoretical upper bound on the prediction error of variational quantum AdaBoost, demonstrating how the prediction error converges to 0 as the increase of the number of boosting rounds and sample size.
2) We numerically demonstrate that variational quantum AdaBoost can achieve a higher level of prediction accuracy as compared to quantum Bagging, classical AdaBoost and classical Bagging. We further demonstrate that with only a few boosting rounds variational quantum AdaBoost can help mitigate the impact of noise and achieve better performance than noiseless models, which is particularly valuable for potential applications, especially in the NISQ era.
The paper is organized as follows. In Section 2, we briefly introduce the quantum classifier and variational quantum AdaBoost. In Section 3, we present our theoretical and empirical results on the performance of variational quantum AdaBoost. Section 4 concludes the paper.
2 QUANTUM CLASSIFIER AND ADABOOST
2.1 QUANTUM CLASSIFIER
We start by briefly introducing some quantum notation. In quantum computing, information is described in terms of quantum states. For an $N$-qubit system, the quantum state $\rho$ can be mathematically represented as a positive semi-definite Hermitian matrix $\rho \in \mathbb{C}^{2^N \times 2^N}$ with $\text{Tr}[\rho] = 1$. The elementary quantum gates are mathematically described by unitary matrices. A quantum gate $U$ acting on a quantum state $\rho$ maps it to the output state $U\rho U^\dagger$, where $U^\dagger$ is the conjugate transpose of $U$. When measuring an observable $O$ (a Hermitian operator) at quantum state $\rho$, its expectation is $\text{Tr}[O\rho]$.
For a $D$-class classification problem, suppose that both the training and test data are independent and identically distributed (i.i.d.) according to some fixed but unknown distribution $D$ defined over the sample and label space $\mathcal{X} \times \mathcal{Y}$. When the sample set $S = \{(x_i, y_i)\}_{i=1}^n$ is classical, we can first choose a quantum encoding circuit to embed the classical data $x_i$ into a quantum state $\rho(x_i)$ (Lloyd et al., 2020; Schuld et al., 2021; Goto et al., 2021), which is the explicit quantum model under consideration. Without loss of generality, we only consider the case where the data are quantum in the following, namely, $S = \{(\rho(x_i), y_i)\}_{i=1}^n \subset \mathcal{X} \times \mathcal{Y}$. For a $D$-class classification, $\mathcal{Y} = \{1, \cdots, D\} \triangleq [D]$.
To label $\rho(x)$, a quantum hypothesis or classifier $h_\theta(\cdot)$ can be described in the form of
$$h_\theta(x) = \arg\max_{d \in [D]} \text{Tr}\left[P_d U(\theta) \rho(x) U^\dagger(\theta)\right]. \quad (1)$$
Here, $\{P_d\}_{d=1}^D$ are disjoint projectors with $P_d$ relating to the $d$-th class for $d \in [D]$, and $U(\theta)$ describes the action of a PQC with $\theta$ being the trainable or variational parameters.
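For small qubit counts, the classifier in Eq. (1) can be simulated directly with dense matrices; the sketch below is a generic density-matrix simulation for illustration, not the paper's implementation.

```python
import numpy as np

def predict(rho, U, projectors):
    """Eq. (1): evolve rho through the PQC unitary U, then pick the class
    whose projector has the largest expectation value.

    rho: (2^N, 2^N) density matrix; U: (2^N, 2^N) unitary;
    projectors: list of D disjoint projection matrices P_1, ..., P_D."""
    out = U @ rho @ U.conj().T
    scores = [np.trace(P @ out).real for P in projectors]
    return int(np.argmax(scores)) + 1  # classes labeled 1..D as in [D]
```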
Figure 2: Hardware-efficient implementations of multi-qubit rotational gates. (a) The module of the 2-qubit rotational gate $R_{ZZ}(\theta)$ around the Pauli operator $Z \otimes Z$. (b) The module of the 3-qubit rotational gate $R_{ZZZ}(\theta)$ around the Pauli operator $Z \otimes Z \otimes Z$.

To be specific, as illustrated in Fig. 1, suppose that the employed PQC is composed of a total of $K$ independent parameterized gates and non-trainable gates $\{V_k\}_{k=0}^K$, whose action can be described as
$$U(\theta) = \prod_{k=1}^{K} \left[ V_k R_k^{(i_k,j_k)}(\theta_k) \right] \cdot V_0,$$
where $\theta = (\theta_1, \ldots, \theta_K)$ denotes a $K$-dimensional parameter vector. For each $k$, the trainable gate $R_k^{(i_k,j_k)}(\theta_k)$ denotes a rotational gate with angle $\theta_k$ around a $j_k$-qubit Pauli tensor product operator $P_k$, which acts non-trivially on the $i_k$-th through $(i_k + j_k - 1)$-th qubits, namely,
$$R_k^{(i_k,j_k)}(\theta_k) = I^{\otimes(i_k-1)} \otimes e^{-i\frac{\theta_k}{2} P_k} \otimes I^{\otimes(N-i_k-j_k+1)}$$

$$= I^{\otimes(i_k-1)} \otimes \left( \cos \frac{\theta_k}{2} I^{\otimes j_k} - i \sin \frac{\theta_k}{2} P_k \right) \otimes I^{\otimes(N-i_k-j_k+1)}.$$
In practice, these multi-qubit rotational gates can be implemented by a series of single-qubit gates and standard 2-qubit controlled gates, which are efficient to realize. For example, as illustrated in Fig. 2, a multi-qubit rotational gate around the $z$ axis can be implemented by a single-qubit rotational gate around the $z$ axis sandwiched between 2-qubit CNOT gates.
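As a concrete check of this decomposition, the following minimal NumPy sketch (assuming the standard decomposition $R_{ZZ}(\theta) = \text{CNOT}\,(I \otimes R_Z(\theta))\,\text{CNOT}$ of Fig. 2(a); all names are illustrative) verifies numerically that the circuit matches $e^{-i\frac{\theta}{2} Z \otimes Z}$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
# CNOT with qubit 0 as control and qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def rzz_direct(theta):
    # exp(-i theta/2 Z (x) Z), the 2-qubit rotational gate
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * np.kron(Z, Z)

def rzz_decomposed(theta):
    # Fig. 2(a): CNOT . (I (x) Rz(theta)) . CNOT
    return CNOT @ np.kron(I2, rz(theta)) @ CNOT

theta = 0.7
assert np.allclose(rzz_direct(theta), rzz_decomposed(theta))
```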
The prediction error or expected risk of the quantum hypothesis function $h_\theta$ is defined as
$$R(h_\theta) = \mathbb{E}_{(x,y) \sim D} \mathbb{I}_{h_\theta(x) \neq y} = \mathbb{P}_{(x,y) \sim D}[h_\theta(x) \neq y].$$
The prediction error of a hypothesis is not directly accessible, since both the labels of unseen data and the distribution $D$ are unavailable. However, we can take the training error or empirical risk of $h_\theta$ as a proxy, defined as
$$\hat{R}_S(h_\theta) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}_{h_\theta(x_i) \neq y_i}.$$
The difference between the prediction error $R(h_\theta)$ and the training error $\hat{R}_S(h_\theta)$ is referred to as the generalization error, which reads
$$\text{gen}(h_\theta) = R(h_\theta) - \hat{R}_S(h_\theta).$$
It is clear that to make accurate predictions, both the training and generalization errors should be small.
2.2 Variational Quantum AdaBoost
We denote by $\mathcal{H}$ the hypothesis set which is composed of base classifiers $h_\theta(\cdot)$ in the form of Eq. (1). Inspired by classical multi-class AdaBoost (Hastie et al., 2009), the procedure of variational quantum AdaBoost is presented in Algorithm 1, which is similar to that in Li et al. (2023).
The input to Algorithm 1 includes a labeled sample set $S = \{(\rho(x_i), y_i)\}_{i=1}^{n}$ and the number of boosting rounds $T$, typically selected via cross-validation; the algorithm maintains a distribution over the indices $[n]$ at each round. The initial distribution is uniform, i.e., $D_1(i) = \frac{1}{n}$. At each round of boosting, i.e., for each $t \in [T]$, given a classifier $h_t \in \mathcal{H}$, its error $\epsilon_t$ on the training data weighted by the distribution $D_t$ reads
$$\epsilon_t = \sum_{i=1}^{n} D_t(i) \mathbb{I}_{h_t(x_i) \neq y_i}.$$
Algorithm 1: D-Class Variational Quantum AdaBoost
input: Hypothesis set \( \mathcal{H} = \{ h_\theta \} \)
Sample set \( S = \{ (\rho(x_i), y_i) \}_{i=1}^n \)
Boosting rounds \( T \)
Distribution \( D_1(i) = \frac{1}{n}, \text{for } i \in [n] \)
for \( t \leftarrow 1 \text{ to } T \) do
\( h_t \leftarrow \text{base classifier in } \mathcal{H} \text{ with error } \epsilon_t < \frac{D-1}{D} \)
\( \alpha_t \leftarrow \log \frac{1-\epsilon_t}{\epsilon_t} + \log (D - 1) \)
for \( i \leftarrow 1 \text{ to } n \) do
\( D_{t+1}(i) \leftarrow D_t(i) \exp [\alpha_t I_{y_i \neq h_t(x_i)}] \)
end
normalize \( \{ D_{t+1}(i) \}_{i=1}^n \)
end
\( f \leftarrow \arg \max_{d \in [D]} \sum_{t=1}^{T} \alpha_t I_{h_t(x) = d} \)
output: Predictor \( f \)
We choose a weak classifier \( h_t \) such that \( \epsilon_t < \frac{D-1}{D} \), which is easily satisfied. Then the distribution is updated as \( D_{t+1}(i) \propto D_t(i) \exp [\alpha_t I_{y_i \neq h_t(x_i)}] \), where \( \alpha_t = \log \frac{1-\epsilon_t}{\epsilon_t} + \log (D - 1) \). After \( T \) rounds of boosting, Algorithm 1 returns the \( D \)-class quantum AdaBoost classifier.
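For concreteness, a minimal NumPy sketch of Algorithm 1 is given below. The `train_base` routine, which fits a weak quantum classifier on weighted data, is assumed to be supplied by the user; all names are illustrative:

```python
import numpy as np

def quantum_adaboost(train_base, X, y, D, T):
    """D-class variational quantum AdaBoost (Algorithm 1 sketch).

    train_base(X, y, w) -> h, a weak classifier with h(X) returning labels
    in {0, ..., D-1} and weighted error eps_t < (D-1)/D."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # D_1(i) = 1/n
    classifiers, alphas = [], []
    for _ in range(T):
        h = train_base(X, y, w)
        miss = (h(X) != y).astype(float)
        eps = np.dot(w, miss)                    # weighted error eps_t
        alpha = np.log((1 - eps) / eps) + np.log(D - 1)
        w = w * np.exp(alpha * miss)             # up-weight misclassified points
        w = w / w.sum()                          # normalize D_{t+1}
        classifiers.append(h)
        alphas.append(alpha)

    def f(Xq):                                   # weighted vote over base classifiers
        votes = np.zeros((len(Xq), D))
        for h, a in zip(classifiers, alphas):
            votes[np.arange(len(Xq)), h(Xq)] += a
        return votes.argmax(axis=1)
    return f
```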
3 Main Results
3.1 Binary Variational Quantum AdaBoost Guarantee
For multi-class classification, an alternative approach is to reduce the problem to multiple binary classification tasks. For each task, a binary classifier is returned, and the multi-class classifier is defined by a combination of these binary classifiers. Two standard reduction techniques are one-versus-the-rest and one-versus-one (Aly, 2005; Mohri et al., 2018). In this subsection, we focus on the basic binary variational quantum AdaBoost and theoretically establish its learning guarantee.
For binary classification, it is more convenient to denote the label space by \( \mathcal{Y} = \{-1, +1\} \). The base quantum hypothesis \( h_\theta(\cdot) \) can be defined in terms of the Pauli-Z operator \( Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \) as
\[
h_\theta(x) = \text{Tr} \left[ Z U(\theta) \rho(x) U^\dagger(\theta) \right], \quad (7)
\]
whose range is \([-1, +1]\) and whose sign determines the label: we assign the label \(-1\) when \( h_\theta(x) \leq 0 \) and \(+1\) otherwise.
It is straightforward to verify that for a sample \((\rho(x), y)\), the following important relation holds:
\[
I_{y \neq h_\theta(x)} = I_{y h_\theta(x) \leq 0}. \quad (8)
\]
By employing Eq. (8) and inspired by the classical binary AdaBoost (Mohri et al., 2018), we can modify Algorithm 1 slightly to make it more suitable for binary classification as presented in Algorithm 2, and further establish the learning guarantee for the binary variational quantum AdaBoost.
Different from Algorithm 1, the hypothesis set \( \mathcal{H} \) in Algorithm 2 is composed of quantum hypotheses in the form of Eq. (7). At each round of boosting, a new classifier \( h_t \in \mathcal{H} \) is selected such that its error \( \epsilon_t < \frac{1}{2} \), and the distribution update rule
\[
D_{t+1}(i) = \frac{D_t(i) \exp (-\alpha_t y_i h_t(x_i))}{Z_t}
\]
is different from what is used in Algorithm 1. It can be verified that \( \alpha_t = \frac{1}{2} \log \frac{1-\epsilon_t}{\epsilon_t} \) is chosen to minimize the upper bound of the empirical risk \( \hat{R}_S(f) \) of the binary variational quantum AdaBoost (Mohri et al., 2018).
Algorithm 2: Binary Variational Quantum AdaBoost
input: Hypothesis set \( \mathcal{H} = \{h_\theta\} \)
Sample set \( S = \{(\rho(x_i), y_i)\}_{i=1}^n \)
Boosting rounds \( T \)
Distribution \( D_1(i) = \frac{1}{n}, \text{for } i \in [n] \)
for \( t \leftarrow 1 \text{ to } T \) do
\( h_t \leftarrow \text{base classifier in } \mathcal{H} \text{ with small error } \epsilon_t < \frac{1}{2} \)
\( \alpha_t \leftarrow \frac{1}{2} \log \frac{1-\epsilon_t}{\epsilon_t} \)
\( Z_t \leftarrow 2[\epsilon_t (1-\epsilon_t)]^{\frac{1}{2}} \) \% normalization factor
for \( i \leftarrow 1 \text{ to } n \) do
\( D_{t+1}(i) \leftarrow \frac{D_t(i) \exp[-\alpha_t y_i h_t(x_i)]}{Z_t} \)
end
\( f \leftarrow \text{sgn}\left(\sum_{t=1}^{T} \alpha_t h_t\right) \)
output: Predictor \( f \)
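A corresponding NumPy sketch of Algorithm 2 follows (again with a user-supplied `train_base`; the base classifier outputs real values in \([-1, +1]\) as in Eq. (7), and labels are in \(\{-1, +1\}\)):

```python
import numpy as np

def binary_quantum_adaboost(train_base, X, y, T):
    """Binary variational quantum AdaBoost (Algorithm 2 sketch)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    classifiers, alphas = [], []
    for _ in range(T):
        h = train_base(X, y, w)                        # weak classifier, eps_t < 1/2
        out = h(X)                                     # real-valued outputs in [-1, +1]
        eps = np.dot(w, (y * out <= 0).astype(float))  # weighted error, cf. Eq. (8)
        alpha = 0.5 * np.log((1 - eps) / eps)
        w = w * np.exp(-alpha * y * out)               # margin-based update
        w = w / w.sum()                                # divide by normalization Z_t
        classifiers.append(h)
        alphas.append(alpha)
    return lambda Xq: np.sign(sum(a * h(Xq) for h, a in zip(classifiers, alphas)))
```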
The performance of the binary variational quantum AdaBoost is guaranteed by the following theorem, whose proof can be found in Appendix B.
Theorem 3.1. For the binary variational quantum AdaBoost Algorithm 2, assume that there exists \( \gamma > 0 \) such that \( \epsilon_t \leq \frac{1}{2} - \gamma \) for each \( t \in [T] \), and that the employed PQC has a total of \( K \) independent parameterized gates. Then for any \( \delta > 0 \), with probability at least \( 1 - \delta \) over the draw of an i.i.d. \( n \)-size sample set, the prediction error \( R(f) \) of the returned binary variational quantum AdaBoost classifier \( f \) satisfies
\[
R(f) \leq e^{-2\gamma^2 T} + 12\sqrt{\frac{K \log 7K}{n}} + 4\sqrt{\frac{K}{n}} + \sqrt{\frac{\log \frac{1}{\delta}}{2n}}. \quad (9)
\]
Theorem 3.1 provides an explicit learning guarantee for the binary variational quantum AdaBoost classifier. The first term on the RHS of Eq. (9) bounds the empirical error \( \hat{R}_S(f) \), which decreases exponentially fast as a function of the boosting rounds \( T \) owing to the boosting mechanism of AdaBoost. The last three terms bound the generalization error \( \text{gen}(f) \). Here, in contrast to the classical case, our success in bounding the generalization error is due to the good generalization property of quantum machine learning. As the number of independent trainable gates \( K \) increases, the hypothesis set \( \mathcal{H} \) becomes richer; thus, the second and third terms on the RHS of Eq. (9) capture the complexity penalty of the hypothesis set \( \mathcal{H} \) on generalization.
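To get a feel for the bound, the RHS of Eq. (9) can be evaluated directly; the sample values below (edge \( \gamma \), rounds \( T \), gate count \( K \), sample size \( n \)) are illustrative only:

```python
import numpy as np

def risk_bound(gamma, T, K, n, delta):
    """Right-hand side of Eq. (9)."""
    empirical = np.exp(-2 * gamma**2 * T)              # boosting term
    generalization = (12 * np.sqrt(K * np.log(7 * K) / n)
                      + 4 * np.sqrt(K / n)
                      + np.sqrt(np.log(1 / delta) / (2 * n)))
    return empirical + generalization

print(risk_bound(gamma=0.1, T=25, K=120, n=8000, delta=0.01))
```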
In the NISQ era, it is important to take into account the effect of noise originating from various sources. From the detailed proof of Theorem 3.1, it can be verified that for noisy PQCs, as long as there is an edge \( \gamma > 0 \) between our base classifiers and the completely random classifier, that is, \( \epsilon_t < \frac{1}{2} - \gamma \) for all \( t \in [T] \), the learning performance of variational quantum AdaBoost is still guaranteed. However, it is worth pointing out that this edge assumption becomes hard to meet when the noise is very large.
3.2 Numerical Experiments for 4-Class Classification
In this subsection, we numerically investigate the performance of 4-class variational quantum AdaBoost. To be specific, our task is to perform 4-class classification of the handwritten digits \( \{0, 1, 2, 3\} \) from the MNIST dataset (LeCun et al., 1998). In our numerical experiments, we employ a QCNN as our PQC, which has been proven free of barren plateaus (Pesah et al., 2021) and has been widely used for quantum classifiers (Wei et al., 2022; Chen et al., 2022; Hur et al., 2022).
Figure 3: The architecture of QCNN. After amplitude encoding, a set of universal rotational gates are applied to each qubit, followed by two blocks of convolutional (Conv) and pooling (Pool) layers. The pooling layers not only reduce the system size, but also provide non-linearity for the whole circuit.
For the $D$-class variational quantum AdaBoost algorithm (here $D = 4$), at each round $t \in [T]$ where $T > 1$, to find a base classifier $h_t$ such that its error $\epsilon_t < \frac{D-1}{D}$, we need to optimize the variational parameters in QCNN. To do this, we optimize the following weighted cross-entropy loss function:
$$\min_{\theta} L(\theta; S) = -\sum_{i=1}^{n} D_t(i)\, \mathbf{y}_i^\top \log(p_i),$$

where each label $y_i \in [D]$ has been transformed into a $D$-dimensional one-hot vector denoted by $\mathbf{y}_i$, and $p_i = [p_{i,1}, \cdots, p_{i,D}]^\top$ with
$$p_{i,d} = \text{Tr}[P_d U(\theta) \rho(x_i) U^\dagger(\theta)]$$
for each $d \in [D]$.
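In code, the weighted loss is straightforward; the sketch below assumes a `probs` array holding the measured probabilities $p_{i,d}$ of the QCNN (names are illustrative):

```python
import numpy as np

def weighted_cross_entropy(probs, y_onehot, weights):
    """Weighted cross-entropy L(theta; S): probs has shape (n, D), y_onehot
    holds the one-hot labels, and weights holds the AdaBoost distribution D_t."""
    return -np.sum(weights[:, None] * y_onehot * np.log(probs + 1e-12))
```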
We employ Adam (Kingma & Ba, 2015) with learning rate 0.05 as the optimizer and compute the loss gradient using the parameter-shift rule (Romero et al., 2018; Mitarai et al., 2018; Schuld et al., 2019). We initialize the parameters of the PQC according to a standard normal distribution and stop optimizing upon reaching the maximum number of iterations, which is set to 120. The base classifier having the minimum training error $\epsilon_t$ over the 120 iterations is returned as $h_t$; its error always satisfies $\epsilon_t < \frac{D-1}{D}$ in our experiments. When illustrating our results, as in most practical supervised learning tasks, we adopt accuracy as the figure of merit, which is simply equal to 1 minus the error.
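A generic sketch of the parameter-shift rule for Pauli-generated rotations is given below (the `expectation` callable, which evaluates the circuit's expectation value or loss at a given parameter vector, is assumed to be supplied):

```python
import numpy as np

def parameter_shift_grad(expectation, theta):
    """For gates exp(-i theta_k/2 P_k) with Pauli words P_k:
    d<O>/d theta_k = ( <O>(theta_k + pi/2) - <O>(theta_k - pi/2) ) / 2."""
    grad = np.zeros_like(theta)
    for k in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[k] += np.pi / 2
        minus[k] -= np.pi / 2
        grad[k] = 0.5 * (expectation(plus) - expectation(minus))
    return grad
```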
In our first experiment, we employ a noiseless 8-qubit QCNN as the base classifier, as illustrated in Fig. 3. We randomly sample two different 8000-size sample sets for training and testing, respectively. For each sampled image in MNIST, we first downsample it from $28 \times 28$ to $16 \times 16$ and then embed it into the QCNN using amplitude encoding. We conduct five experiments in total. Since their results are similar, to clearly demonstrate the difference between the training and test accuracy, we randomly select one experiment and illustrate its results in Fig. 4. To demonstrate the positive effect of boosting, we also consider a classifier without any boosting, referred to as QCNN-best. For QCNN-best, we optimize the QCNN for at most 3000 iterations, the same number as in variational quantum AdaBoost with $T = 25$ boosting rounds, and return the classifier having the best training accuracy. The prediction accuracy of QCNN-best is illustrated in Fig. 4 by the black dotted line. Without boosting, QCNN-best achieves a prediction accuracy of only 0.87. Quantum AdaBoost outperforms QCNN-best after only 3 rounds of boosting, and its performance exceeds 0.97 after $T > 20$ rounds of boosting. Thus, to improve the prediction accuracy, boosting is much more effective than simply increasing the number of optimization iterations. Moreover, variational quantum AdaBoost maintains good generalization throughout the entire training process, as the differences between the training and prediction accuracy remain below 0.01.
We further compare our variational quantum AdaBoost (QCNN+AdaBoost) with three other ensemble methods. The first is variational quantum Bagging (QCNN+Bagging), the second is a classical neural network (CNN) with AdaBoost, referred to as CNN+AdaBoost, and the third is a CNN powered by Bagging, abbreviated as CNN+Bagging. The CNN takes the form $f(x) = \sigma(W_2 \sigma(W_1 x + b_1) + b_2)$, where $\sigma(\cdot)$ denotes the softmax function and $W_1 \in \mathbb{R}^{3 \times 256}$, $W_2 \in$
Figure 4: Accuracy of 4-class classification of variational quantum AdaBoost and QCNN-best in the noiseless case. The blue solid (red dashed) line depicts the training (testing) accuracy of variational quantum AdaBoost versus the boosting round $T$. The black dash-dotted (dotted) depicts the training (testing) accuracy of QCNN-best. It is clear that variational quantum AdaBoost can achieve a higher level of prediction accuracy (exceeding 0.97 when the boosting round $T > 20$). During the whole process, the differences between the training and testing accuracy of AdaBoost are always below 0.01, which indicates a good generalization of variational quantum AdaBoost.
$\mathbb{R}^{4 \times 3}, b_1 \in \mathbb{R}^3, b_2 \in \mathbb{R}^4$. For the Bagging methods, each base classifier is trained on a subset obtained by resampling the original training dataset 8000 times, and the predictions of the base classifiers are integrated through voting (Breiman, 1996). For the four ensemble methods to be compared, we use the same experimental setting. Specifically, the learning rate is set to 0.05 and all parameters are initialized according to a standard normal distribution. We select the classifier having the smallest training error over 120 optimization iterations as the base classifier. The final strong classifier is chosen as the one with the best training accuracy among rounds 1 to 25. We perform each ensemble method five times and report the results in Table 1. Note that there are 120 parameters in the QCNN, while the number of parameters in the CNN is 787. We find that, despite having more parameters, the classical ensemble methods achieve higher training accuracy than their quantum counterparts. However, owing to the quantum advantage in generalization, our variational quantum AdaBoost (QCNN+AdaBoost) has the best prediction accuracy among the four ensemble methods. Although the training accuracy of QCNN+Bagging is poor, its generalization error is smaller than those of the classical ensemble methods, which is also attributed to the quantum advantage in generalization.
Table 1: Comparison between four different ensemble methods. The first row represents the training accuracy (acc.), the second row represents the prediction accuracy, and the third row describes the prediction accuracy of the first base classifier for different ensemble methods. The values in the table represent the mean values ± standard deviation.
| | QCNN+AdaBoost | QCNN+Bagging | CNN+AdaBoost | CNN+Bagging |
|------------------|---------------|--------------|--------------|-------------|
| **Training Acc.**| 0.975±0.002 | 0.898±0.006 | 0.980±0.004 | 0.982±0.004 |
| **Prediction Acc.**| 0.973±0.001 | 0.888±0.005 | 0.967±0.003 | 0.965±0.002 |
| **Base Classifier**| 0.861±0.019 | 0.851±0.020 | 0.876±0.051 | 0.872±0.045 |
In addition, we investigate the performance of variational quantum AdaBoost in the presence of noise. In practice, single-qubit gates can be implemented with high fidelity, while the fidelity of 2-qubit gates remains relatively lower. To account for this effect, we simulate a noisy 6-qubit QCNN and consider three typical classes of noise: depolarizing noise, amplitude damping noise, and phase damping noise. After each 2-qubit gate we add a noise channel with noise probability $p = 0.03$. We randomly sample two different 1000-size sample sets for training and testing, respectively. For each sampled image, we first downsample it from $28 \times 28$ to $8 \times 8$ and then use amplitude encoding to embed it into the QCNN. We illustrate the prediction accuracy of variational quantum AdaBoost in Fig. 5. For comparison, we also consider another two
classifiers without boosting. One (red dashed) is obtained using an ideally noiseless QCNN, and the other (green dash-dotted) is obtained using the noisy QCNN. For both, we optimize the PQC for at most 840 iterations, the same number as in 7 rounds of variational quantum AdaBoost, and return the classifier having the best testing accuracy. Their prediction accuracy is lower than that in Fig. 4 because here we compress the images into an $8 \times 8$ format, while the images in Fig. 4 are compressed into $16 \times 16$; excessive compression leads to loss of information, thus reducing the overall prediction accuracy of the classifier. We find that for the three typical classes of noise, variational quantum AdaBoost outperforms the noiseless classifier after at most 5 rounds of boosting. This implies that AdaBoost can help mitigate the impact of different kinds of noise, which is particularly useful in the NISQ era. The reason is twofold. First, in variational quantum AdaBoost, weak classifiers can be boosted into a strong classifier as long as they are slightly better than random guessing; noise may degrade the weak classifiers, but as long as they remain better than random guessing, boosting still succeeds. Second, because the PQCs are shallow, the quantum classifiers are weak, but for the same reason they are also less affected by noise.
4 CONCLUSION
In the current NISQ era, quantum machine learning usually involves specifying a PQC and optimizing its trainable parameters in a classical fashion. Quantum machine learning has good generalization properties, while its trainability is generally poor. Ensemble methods are particularly appropriate for improving the trainability of quantum machine learning and, in turn, help it predict accurately. In this paper, we theoretically establish the prediction guarantee of binary variational quantum AdaBoost and numerically demonstrate that, for multi-class classification problems, variational quantum AdaBoost not only achieves high prediction accuracy but also helps mitigate the impact of noise. For future work, it would be interesting to incorporate ensemble methods into other practical tasks.
REFERENCES
Mohamed Aly. Survey on multiclass classification methods. *Neural Networks*, 19(1-9):2, 2005.
Eric R Anschuetz and Bobak T Kiani. Quantum variational algorithms are swamped with traps. *Nature Communications*, 13(1):7760, 2022.
Eric Ricardo Anschuetz. Critical points in quantum generative models. In *International Conference on Learning Representations*, 2022.
Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In *Proceedings of the 35th International Conference on Machine Learning*, 2018.
Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In Advances in Neural Information Processing Systems, volume 32, 2019.
Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, and Dingli Yu. Harnessing the power of infinitely wide deep nets on small-data tasks. In International Conference on Learning Representations, 2020.
Srinivasan Arunachalam and Reevu Maity. Quantum boosting. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 377–387. PMLR, 2020.
Thomas Barthel and Jianfeng Lu. Fundamental limitations for measurements in quantum many-body systems. Physical Review Letters, 121:080406, 2018.
Peter Bartlett, Yoav Freund, Wee Sun Lee, and Robert E Schapire. Boosting the margin: A new explanation for the effectiveness of voting methods. The annals of statistics, 26(5):1651–1686, 1998.
Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mattia Fiorentini. Parameterized quantum circuits as machine learning models. Quantum Science and Technology, 4(4):043001, 2019.
Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning. Nature, 549(7671):195–202, 2017.
Leo Breiman. Bagging predictors. Machine learning, 24:123–140, 1996.
Keith T Butler, Daniel W Davies, Hugh Cartwright, Olexandr Isayev, and Aron Walsh. Machine learning for molecular and materials science. Nature, 559(7715):547–555, 2018.
Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. Science, 355(6325):602–606, 2017.
Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, and Lenka Zdeborová. Machine learning and the physical sciences. Reviews of Modern Physics, 91:045002, 2019.
Matthias C Caro, Hsin-Yuan Huang, Marco Cerezo, Kunal Sharma, Andrew Sornborger, Lukasz Cincio, and Patrick J Coles. Generalization in quantum machine learning from few training data. Nature Communications, 13(1):4919, 2022.
M Cerezo, Guillaume Verdon, Hsin-Yuan Huang, Lukasz Cincio, and Patrick J Coles. Challenges and opportunities in quantum machine learning. Nature Computational Science, 2(9):567–576, 2022.
Marco Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, et al. Variational quantum algorithms. Nature Reviews Physics, 3(9):625–644, 2021a.
Marco Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, and Patrick J Coles. Cost function dependent barren plateaus in shallow parametrized quantum circuits. Nature Communications, 12(1):1–12, 2021b.
Samuel Yen-Chi Chen, Tzu-Chieh Wei, Chao Zhang, Haiwang Yu, and Shinjae Yoo. Quantum convolutional neural networks for high energy physics data analysis. Physical Review Research, 4(1):013231, 2022.
Harris Drucker, Robert E. Schapire, and Patrice Y. Simard. Boosting performance in neural networks. International Journal of Pattern Recognition and Artificial Intelligence, 7:705–719, 1993.
|
QhXisLeIqR
|
From Figure 4, we can see that in some cases the performance of other baselines is better than WinNet's, which does not support the conclusion that WinNet outperforms the other baselines. Does WinNet outperform only under certain T settings?
|
WinNet: Time Series Forecasting with a Window-Enhanced Period Extracting and Interacting
Anonymous authors
Paper under double-blind review
Abstract
Recently, Transformer-based methods have significantly improved state-of-the-art time series forecasting results, but they suffer from high computational costs and an inability to capture the long and short periodicity of time series. We present a highly accurate and simply structured CNN-based model for long-term time series forecasting tasks, called WinNet, comprising (i) an Inter-Intra Period Encoder (I2PE) that transforms the 1D sequence into a 2D tensor with long and short periodicity according to the predefined periodic window, (ii) Two-Dimensional Period Decomposition (TDPD) to model the period-trend and oscillation terms, and (iii) a Decomposition Correlation Block (DCB) that leverages the correlation between the period-trend and oscillation terms to support the prediction task via CNNs. Results on nine benchmark datasets show that WinNet achieves SOTA performance with lower computational complexity than CNN-, MLP- and Transformer-based approaches. WinNet demonstrates the potential of CNN-based methods for time series forecasting, with an excellent tradeoff between performance and efficiency.
1 Introduction
Time series forecasting (TSF) has been widely used in the prediction of energy consumption, transportation, economic planning, weather and disease transmission. TSF tasks leverage a known sequence of multiple time steps to predict information at multiple future time steps, which further facilitates resource planning and management. Extensive neural architectures have been designed for TSF. Recent deep learning models have achieved significant performance improvements, such as Informer (Zhou et al., 2021), Autoformer (Wu et al., 2021), FEDformer (Zhou et al., 2022), DLinear (Zeng et al., 2023), TimesNet (Wu et al., 2023) and PatchTST (Nie et al., 2023). Benefiting from the self-attention mechanism, the Transformer-based models are able to capture the long-term dependencies of temporal sequences, achieving state-of-the-art (SOTA) performance on TSF tasks. However, these models are not sensitive to periodicity and have high computational complexity. Recently, DLinear outperformed the Transformer-based architectures with only a single linear layer, which has drawn increasing research attention to the comparison between the MLP-based and Transformer-based architectures. In TimesNet, a classical convolutional neural network (CNN) is applied to extract periodic features after converting the sequence into a two-dimensional (2D) tensor by multi-periods, which also inspires us to reconsider CNN-based methods for the TSF tasks.
Since the future status of a system is time-evolving and subject to uncertainty, TSF tasks can be quite challenging. Beyond the regular temporal changes, the uncertainties of time series data, as well as noisy inputs, add extra technical difficulty to applying the trend and seasonal terms to the TSF tasks. Meanwhile, model performance is strongly correlated with periodicity. We propose a new method that sets up a periodic window to process the multi-periods of a time series, and design a corresponding neural network model, called WinNet, which captures the complicated underlying patterns of the temporal sequence mainly by extracting its periodicity through the periodic window.
In WinNet, the original sequence is first transformed by an MLP layer to extract the periodicity. The periodic window is approximated as the least common multiple of the multi-periods obtained by the Fast Fourier Transform (FFT) (Wu et al., 2023). In this way, the periodic window can represent the variation of multiple short periods, and the sequence is organized into a 2D tensor according to the
periodic window. In the 2D tensor, each row represents the short-period trend within the periodic window, and each column represents the long-period trend of the whole sequence. Subsequently, the features of long and short periods are separately extracted by the I2PE. The TDPD module is proposed to decompose the 2D tensor into the period-trend and oscillation terms, which highlights the importance of periodicity. Based on the correlation analysis, we find that there are extremely strong lag-correlations between the trend and seasonal terms, and the correlation has a periodic pattern, as shown in Figure 1. To mine their correlation instead of utilizing them separately like DLinear and MICN, the DCB is innovatively designed to combine the period-trend and oscillation terms using a convolutional kernel. The learned weights of the convolution kernel represent the importance of the period-trend and oscillation terms of neighboring time steps. To perform an efficient periodic fusion of the time series, the Series Decoder is proposed to interactively combine the features of long and short period and map the learned features into the prediction of time steps. The WinNet can reduce the relative Mean Squared Error (MSE) and Mean Absolute Error (MAE) in multivariate time series by 18.5% and 12.0%, respectively, compared to TimesNet.
Figure 1: The lag correlations of the trend and seasonal terms in ETTm1 and ECL datasets. We can see that the lag-correlations between the two terms are very strong and there is a periodic pattern.
In summary, this work contributes the time series forecasting tasks in the following ways:
- Only one convolutional layer is designed as the backbone of the prediction network, which greatly reduces the training memory and computational complexity and improves experimental efficiency. This also indicates that the simple model architecture can also be effective for the TSF tasks.
- The time series are reorganized according to a periodic window, which can represent the trend variation of multiple short periods.
- To enhance the modeling ability, time series are further decomposed into the period-trend and oscillation terms by the TDPD module. The DCB is proposed to aggregate the neighboring periodic information to obtain the local periodicity by extracting the correlation between the two terms.
- Extensive experiments are conducted on 9 benchmark datasets across multiple domains (energy, traffic, economics, weather, electricity and illness). Our experimental results demonstrate that the WinNet outperforms other comparative baselines in both the univariate and multivariate prediction tasks with long and short input lengths. The WinNet provides potential for the CNN-based methods in the TSF tasks.
2 RELATED WORK
It is widely recognized that the uncertainties of the temporal sequence provide extra difficulties in the TSF tasks. In recent years, extensive deep learning models have been proposed to achieve the temporal modeling, including RNN-based, CNN-based, Transformer-based, and MLP-based models. The SOTA performance and specific advantages are demonstrated by extensive experiments, as shown below:
RNNs In general, RNN networks are the primary tools for temporal modeling before the Transformer architecture. RNN-based methods, such as LSTM (Hochreiter & Schmidhuber [1997]), GRU (Chung et al. [2014]) and DeepAR (Salinas et al. [2020]), utilize the recurrent information transmission to capture the temporal changes through state transitions among time steps.
Transformers Transformer (Vaswani et al. [2017]) and its variants are also initially proposed to achieve the Natural Language Processing tasks, with the advantage of parallel recurrent computation and the self-attention mechanism to capture the long-term temporal correlation. Currently Transformer has been widely used in TSF tasks, such as the Informer, Autoformer, FEDformer, Crossformer (Zhang & Yan [2023]), PatchTST, PETformer (Lin et al. [2023]), etc. The attention mechanism is designed to capture the temporal dependencies among time steps and achieve significant performance in the TSF tasks. In the Autoformer, the sequence trend decomposition is to capture the temporal pattern of a sequence by the trend and seasonal terms, and an auto-correlation mechanism is to capture the temporal dependence of the series based on learning periods. In the FEDformer, a Fourier enhanced structure is designed to enhance sparse attention in the frequency domain. By referring to the Vision Transformer (Dosovitskiy et al. [2020]), the PatchTST utilizes the slice and dice strategy to formulate the patch and the Transformer architecture is applied to extract local semantic information of the multivariate time series.
MLPs DLinear has been proven to be effective for TSF, which achieve competitive performance over the Transformer architecture by a simple one-layer linear transformation with channel independence (CI). Since then, many MLP-based models are proposed to encode temporal dependencies for the TSF tasks, including LightTS (Zhang et al. [2022]), MTS-Mixers (Li et al. [2023b]), TSMixer (Ekambaram et al. [2023]), RLinear (Li et al. [2023a]), etc. The MLP-based methods significantly address the computational efficiency issue of Transformer, and their simplified model structures allow them to be another prominent architecture for TSF tasks.
CNNs CNN networks were mainly used in the field of Computer Vision, where they can be used to extract local information from images. Recently, CNNs have been proposed to mine the periodicity of time series, such as TCN (Bai et al. [2018]), TimesNet, MICN, etc. The TCN captures temporal changes through a convolutional kernel that slides along the temporal dimension. In the TimesNet, the original sequence is reshaped into the 2D tensor by periods to extract multiple periods of the sequence by FFT. The classical InceptionV1 network (Szegedy et al. [2015]) is applied to process the temporal samples within the periods to support the forecasting tasks. A multi-scale isometric convolutional network with multi-scale branches is proposed in the MICN to capture both local and global features from a holistic view of the temporal sequence.
3 WinNet
The general architecture of WinNet is illustrated in Figure 2. As mentioned before, based on the multi-periodic features of time series, the periodic window with the superposition of multiple periods is proposed to capture the periodic changes of the time series within the window. The I2PE block is designed to extract the periodicity of the sequence and obtain the intra-period and its transpose (inter-period). The intra-period indicates that the rows of the 2D tensor are the periodic windows, while the inter-period indicates that the columns are the periodic windows. The TDPD is separately performed on the two features to obtain the period-trend and oscillation terms. The DCB then learns the local correlation between the period-trend and oscillation terms. The final prediction results are obtained by the Series Decoder.
3.1 Inter-Intra Period Encoder
In DLinear, a single linear layer has been shown to effectively capture periodicity in the time series. We therefore apply a linear layer to the original sequence (Li et al., 2023b), so that the linearly mapped sequence can acquire the periodic information from each sample of the original sequence. We find that the MLP layer can dramatically reduce the period of the original sequence, making the periodic characteristics more obvious and facilitating period extraction by the CNN network. For the top-k periods obtained by FFT in Appendix Table 8, multiple short periods are encapsulated within the periodic window, facilitating the extraction of multiple short periods using the convolutional networks.
Figure 2: The model architecture of WinNet. The period and osc represent the period-trend and oscillation terms. CI and CA mean the channel independence and aggregation strategy, similar to DLinear (Zeng et al., 2023), and $s_l, p_l$ indicate the input and prediction length. $c, n, w$ represent the number of channel, periodic window and the periodic window size, respectively. The output (in blue) is the final result of the TSF tasks.
As shown in Appendix Figure 8, the periodic window is approximated as the least common multiple of each refined period.
After the operations in I2PE block, the original sequence is reshaped into a 2D tensor according to the size of the periodic window, as shown in the equation:
$$\hat{X}_{1D} = \text{Permute}(\text{RevIN}(X_{1D})), \qquad X_{row}^{2D} = \text{Reshape}(\text{Linear}(\hat{X}_{1D})), \qquad X_{col}^{2D} = \text{Transpose}(X_{row}^{2D}) \quad (1)$$
where $X_{1D} \in \mathbb{R}^{sl \times c}$ is the original sequence, $X_{row}^{2D} \in \mathbb{R}^{c \times n \times w}$ is the intra-period, whose rows are the periodic windows, and $X_{col}^{2D} \in \mathbb{R}^{c \times w \times n}$ is the inter-period, whose columns are the periodic windows. Here $sl$ and $c$ denote the input length and the number of channels of the sequence, and $n, w$ are the number of periodic windows and the periodic window size (in the experiments, $n = w$). The RevIN normalization follows NLinear (Zeng et al., 2023).
Specifically, each row in the intra-period represents a periodic window with the superposition of multiple short periods, and multiple windows are organized into each column to capture the variation of the time series among periodic windows. Since the periodic window size is approximated as a common multiple of the top-k periods of the sequence, the long-periodicity correlation can be found among the column at the corresponding positions in both periodic windows.
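A minimal PyTorch sketch of the I2PE reshaping follows (assuming $sl = n \times w$ and that RevIN has already been applied; names are illustrative):

```python
import torch
import torch.nn as nn

class I2PE(nn.Module):
    """Inter-Intra Period Encoder sketch, following Eq. (1)."""
    def __init__(self, seq_len, window):
        super().__init__()
        assert seq_len % window == 0, "requires sl = n * w"
        self.n, self.w = seq_len // window, window
        self.linear = nn.Linear(seq_len, seq_len)   # periodicity-extracting MLP

    def forward(self, x):                  # x: (batch, sl, c), RevIN-normalized
        x = x.permute(0, 2, 1)             # (batch, c, sl)
        x = self.linear(x)
        intra = x.reshape(-1, x.shape[1], self.n, self.w)  # rows are windows
        inter = intra.transpose(-1, -2)                    # columns are windows
        return intra, inter
```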
3.2 Two-Dimensional Period Decomposition
In general, existing methods for time series trend decomposition mainly focus on decomposing the 1D sequence. In this work, inspired by the trend decomposition idea in DLinear, we propose the TDPD strategy, as shown in Equation (2). Specifically, a trend-padding operation is dedicatedly designed to perform the convolutional operation at the boundary. As shown in Figure 3, after this operation, the 2D tensor $X_{2D} \in \mathbb{R}^{c \times n \times w}$ is padded into a new 2D tensor $\tilde{X}_{2D} \in \mathbb{R}^{c \times (n+p) \times (w+p)}$, where $p$ denotes the padding length in each row or column.
Figure 3: Illustration of TrendPadding. Unlike the zero or same padding modes in common CNNs, the neighboring samples (before or after) in the original sequence are selected as the padding items to retain the trend characteristics of the whole sequence. To keep the shape of the matrix, we fill the remaining positions with 0.
\[
X_{\text{period}} = \text{AvgPool2D}(\text{TrendPadding}(X_{2D}))_{k \times k}, \qquad X_{\text{osc}} = X_{2D} - X_{\text{period}} \quad (2)
\]
where \(X_{\text{period}}\) and \(X_{\text{osc}}\) denote the period-trend and oscillation terms, respectively, and \(k \times k\) is the kernel size of AvgPool2D(\(\cdot\)).
The common AvgPool1D(\(\cdot\)) operation focuses on the average change in the trend of a time series, while AvgPool2D(\(\cdot\)) extracts both the trend within the periodic window (intra-correlation) and the long-period changes among neighbouring windows (inter-correlation). According to Equation (2), the two features can be decomposed into the period-trend and oscillation terms. Specifically, the period-trend term keeps a balance between intra-correlation trends and inter-correlation periodicity. As shown in Appendix Figure 7, the trend of the time steps among the windows remains essentially the same.
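A simplified PyTorch sketch of TDPD (Eq. (2)) is given below; for brevity, the paper's TrendPadding is approximated here by excluding padded entries from the average:

```python
import torch
import torch.nn as nn

class TDPD(nn.Module):
    """Two-Dimensional Period Decomposition sketch."""
    def __init__(self, k=3):
        super().__init__()
        # stride 1 and count_include_pad=False keep the output shape and
        # approximate boundary handling (the paper uses TrendPadding instead)
        self.pool = nn.AvgPool2d(k, stride=1, padding=k // 2,
                                 count_include_pad=False)

    def forward(self, x):           # x: (batch, c, n, w)
        period = self.pool(x)       # period-trend term
        osc = x - period            # oscillation term
        return period, osc
```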
3.3 Decomposition Correlation Block
After decomposing the sequence into the trend and seasonal terms, DLinear simply feeds them independently into a linear layer for model training. MICN predicts the seasonal term by the proposed MIC layer, while the trend-cyclical part is obtained directly by linear regression. Both fail to capture the correlation between the trend and seasonal terms.
Figure 4: The figure of DCB. The period and osc represent the period-trend and oscillation terms. Use a convolutional kernel to extract the variation of the two terms within the periodic neighborhood.
We believe that there is a correlation between the period-trend and oscillation terms and that they jointly influence future time steps. Specifically, time steps in the sequence can be affected by the period-trend and oscillation terms obtained by TDPD within the periodic neighborhood. The CNN kernel can exactly extract the variation of the two terms within the periodic neighborhood, and the learned parameters perform a proportional aggregation of them instead of a simple addition. We choose CNN as our backbone network to synthesize the sequence information of \(N\)
time steps in the periodic neighborhood. The process is described below:
\[
\begin{align*}
X_{CI}^{\text{period}}, X_{CI}^{\text{osc}} &= CI(X_{\text{period}}), CI(X_{\text{osc}}) \\
X_{CI}^{\text{input}} &= \text{Concat}(X_{CI}^{\text{period}}, X_{CI}^{\text{osc}}) \\
X_{CI}^{\text{output}} &= \text{Dropout}(\text{Sigmoid}(\text{Conv2D}(X_{CI}^{\text{input}}))) \\
\hat{X}_{CI}^{\text{output}} &= CA(X_{CI}^{\text{output}})
\end{align*}
\tag{3}
\]
where \(CI(\cdot)\) and \(CA(\cdot)\) denote the channel independence and channel aggregation strategies. In \(CI(\cdot)\), we split both the period-trend and oscillation terms along the channel dimension and concatenate them into two-channel matrices; the matrices of the same channel are fed into the DCB block. In \(CA(\cdot)\), we concatenate the single-channel outputs of the DCB block along the channel dimension for fusion.
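A PyTorch sketch of the DCB under channel independence follows (each channel's period-trend and oscillation maps are stacked as a 2-channel image and mixed by one Conv2D, per Eq. (3); names are illustrative):

```python
import torch
import torch.nn as nn

class DCB(nn.Module):
    """Decomposition Correlation Block sketch."""
    def __init__(self, kernel=3, p_drop=0.1):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=kernel, padding=kernel // 2)
        self.act = nn.Sigmoid()
        self.drop = nn.Dropout(p_drop)

    def forward(self, period, osc):             # each: (batch * c, n, w) after CI
        x = torch.stack([period, osc], dim=1)   # (batch * c, 2, n, w)
        x = self.drop(self.act(self.conv(x)))   # learned mix over the neighborhood
        return x.squeeze(1)                     # (batch * c, n, w), ready for CA
```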
### 3.4 Series Decoder
The DCB can learn the local correlative features of the time series, and in this section, the Series Decoder is designed to aggregate the inter-period and intra-period for extracting global periodicity of each window. Specifically, the process is:
\[
\begin{align*}
\hat{X}_{i,j}^{\text{fusion}} &= \hat{X}_{i,j}^{\text{row}} + \hat{X}_{j,i}^{\text{col}}, \quad i, j \in (1, 2, ..., w) \\
\hat{X}_{i,j}^{\text{res}} &= \hat{X}_{i,j} + \hat{X}_{i,j}^{\text{row}} \\
\hat{X}_{i,j}^{\text{final}} &= \text{Permute}(\text{Linear}(\text{Reshape}(\hat{X}_{i,j}^{\text{res}})))
\end{align*}
\tag{4}
\]
where \(i, j\) together denote a temporal point of the 2D tensor, \(w\) is the size of the periodic window, and the matrices \(\hat{X}_{i,j}^{\text{res}}\) and \(\hat{X}_{i,j}^{\text{final}}\) are the outputs of the residual connection and the final result, respectively.
\(X_{i,j}^{\text{row}}\) is obtained by intra-period convolution, reflecting the short periodicity of the sequence, while \(X_{i,j}^{\text{col}}\) is obtained by inter-period convolution, which is spaced by the window size and reflects the long periodicity of the sequence. This design interactively learns the correlation of the short-period time steps with the fusion of the long-period features, thereby further extracting the global periodicity of the sequence. Afterwards, a simple linear layer maps the learned features into the prediction of time steps.
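A sketch of the Series Decoder under one reading of Eq. (4) (the residual is taken over the fused features; `x` denotes the encoder output before the DCB, and names are illustrative):

```python
import torch
import torch.nn as nn

class SeriesDecoder(nn.Module):
    """Series Decoder sketch: fuse intra/inter-period features and project."""
    def __init__(self, seq_len, pred_len):
        super().__init__()
        self.proj = nn.Linear(seq_len, pred_len)

    def forward(self, x, x_row, x_col):              # all: (batch * c, n, w), n == w
        fusion = x_row + x_col.transpose(-1, -2)     # X^row_{i,j} + X^col_{j,i}
        res = x + fusion                             # residual connection (one reading of Eq. (4))
        return self.proj(res.flatten(-2))            # reshape to 1D and map to pred_len
```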
### 4 Experiment
**Datasets** In this section, a total of 9 real-world datasets[^1] are used to validate the proposed approach and the selected baselines, including the weather, traffic, ECL, ILI, exchange and ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2). It should be noted that ETTh1, ETTh2 and ILI are small datasets with few channels; ETTm1, ETTm2 and weather are medium datasets with few channels; and ECL and traffic are large datasets with many channels. In general, periodicity is more easily captured on the small datasets and more difficult on the large ones.
**Baselines and metrics** The following methods are selected as the baselines, including the Transformer-based PatchTST, Crossformer, FEDformer and Autoformer, the CNN-based TimesNet and MICN, and the MLP-based DLinear, RLinear and RMLP. All models follow the same experimental setup with a prediction length of \(T \in \{24, 36, 48, 60\}\) for the ILI dataset and \(T \in \{96, 192, 336, 720\}\) for the other datasets. We collect baseline results from DLinear, PatchTST and TimesNet. The default input length is \(L=96\) for the Transformer-based models and \(L=512\) for PatchTST/64. To ensure the effectiveness of the Transformer- and DLinear-based methods, two input lengths, 96 and 512, are applied to conduct the performance comparison with the SOTA models. In addition, we also explore the influence of the input length on the performance of the proposed model. The Mean Squared Error (MSE) and Mean Absolute Error (MAE) are selected to measure model performance on both the multivariate and univariate TSF tasks; a smaller value indicates better performance. In the following experimental results, the best results are marked in red and the second best in blue. Avg is the result averaged over all four prediction lengths. All experiments are implemented in PyTorch and conducted on a single NVIDIA RTX3090 24GB GPU.
[^1]: https://drive.google.com/drive/folders/13CglKYOlzM5C7K8gK8NfC-F3EYxkM3D2
5 EXPERIMENTAL RESULTS
The results for multivariate and univariate predictions on time series datasets are summarized in Tables 1, 2, and 3. In general, our model outperforms the selected baselines on both multivariate and univariate forecasting tasks. Detailed results of the experiment can be found in the Appendix A-4.
Multivariate Results
For multivariate sequence prediction, as shown in Tables 1 and 2, WinNet generally achieves the best performance on the listed datasets under both metrics. Quantitatively, in the long-input-length experiments, WinNet improves over the CNN-based SOTA model TimesNet by 18.5% in MSE and 12.0% in MAE, indicating that WinNet can more stably capture the long and short periodicity in the data. In the short-input experiments, WinNet improves over the CNN-based TimesNet by 9.3% in MSE and 8.4% in MAE, and over the MLP-based DLinear by 11.9% in MSE and 9.6% in MAE. Notably, our model outperforms all baselines on the ETT datasets for both long and short input lengths.
Univariate Results
The results of univariate prediction are shown in Table 3. WinNet significantly outperforms the other SOTA models on all datasets, achieving an improvement of 8.2% in MSE and 5.0% in MAE over PatchTST, 12.3% in MSE and 8.1% in MAE over TimesNet, and 18.9% in MSE and 13.1% in MAE over DLinear. This demonstrates that the modules in WinNet indeed bring more useful periodic information to the univariate TSF tasks.
Table 1: Results for multivariate long-input length prediction. The input sequence length is set to 104 for the ILI dataset and 512 for the others. See Appendix Table 16 for the full results.
| Methods | WinNet (Ours) | RLinear (2023a) | RMLP (2023a) | PatchTST (2023) | TimesNet (2023) | MICN (2023) | Crossformer (2023) | DLinear (2023) | FEDformer (2022) | Autoformer (2021) |
|---------|---------------|----------------|--------------|----------------|----------------|-------------|-------------------|---------------|----------------|------------------|
| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTm1 | 0.345 | 0.371 | 0.378 | 0.401 | 0.367 | 0.397 | 0.352 | 0.382 | 0.407 | 0.417 |
| ETTm2 | 0.248 | 0.310 | 0.281 | 0.346 | 0.291 | 0.350 | 0.256 | 0.316 | 0.283 | 0.336 |
| ETTh1 | 0.402 | 0.419 | 0.442 | 0.456 | 0.461 | 0.468 | 0.418 | 0.432 | 0.485 | 0.481 |
| ETTh2 | 0.332 | 0.385 | 0.469 | 0.463 | 0.425 | 0.448 | 0.342 | 0.385 | 0.414 | 0.445 |
| ILI | 1.919 | 0.912 | 2.347 | 1.101 | 2.350 | 1.084 | 1.538 | 0.841 | 2.345 | 1.037 |
| Exchange| 0.401 | 0.415 | 0.466 | 0.451 | 0.495 | 0.515 | 0.392 | 0.416 | 0.618 | 0.557 |
| Weather | 0.219 | 0.263 | 0.231 | 0.294 | 0.231 | 0.278 | 0.229 | 0.265 | 0.252 | 0.287 |
| Traffic | 0.417 | 0.285 | 0.419 | 0.290 | 0.404 | 0.280 | 0.396 | 0.265 | 0.616 | 0.334 |
| Electricity | 0.159 | 0.253 | 0.167 | 0.261 | 0.162 | 0.256 | 0.161 | 0.253 | 0.200 | 0.301 |
Table 2: Results for multivariate short-input length prediction. The input sequence length is set to 36 for the ILI dataset and 96 for the others. See Appendix Table 17 for the full results.
| Methods | WinNet (Ours) | RLinear (2023a) | RMLP (2023a) | PatchTST (2023) | TimesNet (2023) | MICN (2023) | Crossformer (2023) | DLinear (2023) | FEDformer (2022) | Autoformer (2021) |
|---------|---------------|----------------|--------------|----------------|----------------|-------------|-------------------|---------------|----------------|------------------|
| Metric | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTm1 | 0.381 | 0.385 | 0.395 | 0.404 | 0.400 | 0.414 | 0.382 | 0.395 | 0.392 | 0.413 |
| ETTm2 | 0.276 | 0.320 | 0.312 | 0.366 | 0.330 | 0.370 | 0.285 | 0.330 | 0.328 | 0.382 |
| ETTh1 | 0.439 | 0.425 | 0.466 | 0.462 | 0.462 | 0.454 | 0.470 | 0.452 | 0.558 | 0.535 |
| ETTh2 | 0.375 | 0.400 | 0.460 | 0.459 | 0.515 | 0.482 | 0.384 | 0.406 | 0.587 | 0.525 |
| ILI | 2.388 | 0.977 | 3.061 | 1.202 | 3.483 | 1.280 | 1.833 | 0.845 | 2.664 | 1.085 |
| Exchange| 0.373 | 0.406 | 0.339 | 0.401 | 0.396 | 0.432 | 0.367 | 0.402 | 0.334 | 0.425 |
| Weather | 0.250 | 0.293 | 0.251 | 0.295 | 0.257 | 0.295 | 0.242 | 0.299 | 0.259 | 0.286 |
| Traffic | 0.457 | 0.356 | 0.630 | 0.390 | 0.546 | 0.352 | 0.541 | 0.348 | 0.541 | 0.315 |
| Electricity | 0.192 | 0.280 | 0.206 | 0.298 | 0.202 | 0.291 | 0.211 | 0.297 | 0.186 | 0.294 |
Table 3: Results for univariate long-input length prediction. The input sequence length is set to 104 for the ILI dataset and 336 for the others. See Appendix Table 18 for the full results.
| Methods | WinNet (Ours) | RLinear (2023a) | RMLP (2023a) | PatchTST (2023) | TimesNet (2023) | MICN (2023) | DLinear (2023) | FEDformer (2022) | Autoformer (2021) |
|---------|---------------|-----------------|--------------|-----------------|-----------------|-------------|----------------|------------------|------------------|
| Metric | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE |
| ETTm1 | 0.044 0.157 | 0.054 0.170 | 0.074 0.208 | 0.048 0.162 | 0.053 0.173 | 0.049 0.164 | 0.053 0.167 | 0.069 0.201 | 0.080 0.221 |
| ETTm2 | 0.109 0.246 | 0.114 0.253 | 0.133 0.276 | 0.112 0.251 | 0.132 0.275 | 0.111 0.247 | 0.112 0.247 | 0.119 0.261 | 0.129 0.271 |
| ETTh1 | 0.069 0.207 | 0.105 0.249 | 0.123 0.277 | 0.073 0.210 | 0.074 0.215 | 0.102 0.251 | 0.103 0.246 | 0.111 0.257 | 0.104 0.252 |
| ETTh2 | 0.178 0.334 | 0.205 0.359 | 0.222 0.374 | 0.176 0.336 | 0.180 0.341 | 0.190 0.342 | 0.198 0.350 | 0.205 0.349 | 0.217 0.363 |
| Weather | 0.0013 0.0280 | 0.0063 0.0662 | 0.0041 0.0498 | 0.0014 0.0283 | 0.0015 0.0298 | 0.0064 0.0675 | 0.0062 0.0665 | 0.0042 0.0526 | 0.0063 0.0581 |
| Exchange| 0.484 0.472 | 0.520 0.519 | 0.574 0.583 | 0.456 0.508 | 0.583 0.535 | 0.534 0.543 | 0.566 0.544 | 0.725 0.637 | 0.789 0.681 |
| ECL | 0.258 0.359 | 0.257 0.359 | 0.284 0.381 | 0.400 0.442 | 0.307 0.394 | 0.317 0.412 | 0.257 0.360 | 0.456 0.507 | 0.551 0.558 |
| Traffic | 0.132 0.217 | 0.134 0.219 | 0.148 0.238 | 0.141 0.223 | 0.144 0.234 | 0.152 0.240 | 0.144 0.238 | 0.302 0.398 | 0.263 0.370 |
| ILI | 0.665 0.613 | 1.921 1.223 | 1.095 0.920 | 0.794 0.684 | 0.777 0.723 | 1.279 0.913 | 0.714 0.695 | 1.107 0.922 | 1.139 0.931 |
Table 4: Results for ablation studies, including the I2PE, TDPD and DCB in WinNet. Four cases are included: (a) all three modules included in the model (Final: I2PE+TDPD+DCB); (b) only the TDPD; (c) TDPD+DCB; (d) the original version with a common CNN and one-dimensional trend decomposition.
| Methods | WinNet (Final) | WinNet (TDPD+DCB) | WinNet (TDPD) | WinNet (original) | TimesNet* | DLinear |
|---------|----------------|-------------------|---------------|-------------------|-----------|---------|
| Metric | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE |
| Weather | 0.143 0.198 | 0.147 0.203 | 0.147 0.206 | 0.147 0.208 | 0.163 0.223 | 0.176 0.237 |
| ECL | 0.129 0.225 | 0.135 0.231 | 0.139 0.237 | 0.145 0.249 | 0.181 0.281 | 0.140 0.237 |
| traffic | 0.394 0.274 | 0.405 0.281 | 0.414 0.288 | 0.528 0.300 | 0.603 0.328 | 0.410 0.282 |
* We replace the input length L=512 in TimesNet for a fair comparison.
6 ABLATION STUDIES
Model architecture To validate the proposed modules in WinNet, ablation studies are conducted to determine the best model architecture, covering the I2PE, TDPD, and DCB. TimesNet and DLinear serve as SOTA benchmarks for the CNN-based and MLP-based models, respectively. Based on the results in Table 4, all the proposed modules significantly improve the prediction performance, which validates the proposed model architecture. We also explore the effect of the intra-period and inter-period on model performance, as shown in Appendix Tables 11 and 12. Ablation results on the ETT datasets are available in Appendix Table 13.
In the original version, the normal trend decomposition and a regular CNN network replace the I2PE and TDPD, respectively. Compared to the version with TDPD, the original version fails to capture the periodicity in complex datasets and yields inferior results to DLinear. Taking the traffic dataset as an example, the TDPD module improves the MSE by 22.3% and achieves comparable results to DLinear. The other modules also contribute the expected performance improvements, and the full model finally outperforms the SOTA baselines (TimesNet and DLinear).
Input length In general, a larger look-back window is expected to capture longer-range periodicity; typical choices are 96 for DLinear and 512 for PatchTST. To examine this, we compare the model performance with input lengths of 96 and 512, with the results reported in Tables 1 and 2.
In addition, other configurations are also considered to validate the performance, with \{24, 48, 96, 192, 336, 512, 720\} as the input lengths and \{96, 720\} as the prediction lengths. As can be seen from Figure 5, the advantage of our model becomes more pronounced as the prediction length increases.
**Figure 5:** Prediction error (MSE) with different look-back windows on 3 large datasets: weather, ECL, traffic. The look-back windows are selected to be \(L = \{24, 48, 96, 192, 336, 512, 720\}\), and the prediction lengths are \(T = \{96, 720\}\).
**Model efficiency** In addition to the performance improvements, we also obtain higher computational efficiency. Table 5 shows the computational efficiency of our model on the univariate prediction tasks. From the table, our model achieves higher efficiency in terms of computational complexity, number of parameters, and memory consumption, even compared with the simple DLinear model. The efficiency of WinNet on the multi-channel and few-channel datasets can be seen in Appendix Tables 14 and 15.
**Table 5:** Efficiency of our model on the Traffic dataset vs. other methods in univariate prediction. We set the input length to 720 and the prediction length to 720. Computational efficiency is measured with the thop, torchsummary and torch.cuda.memory_allocated functions. Times-T denotes the time of one training iteration, and Times-I denotes the actual inference time.
| Method | WinNet | PatchTST | TimesNet | MICN | Crossformer | DLinear | FEDformer | Autoformer | Informer | Transformer |
|------------|----------|----------|----------|---------|-------------|---------|-----------|------------|----------|-------------|
| FLOPs | 851.3K | 44.2M | 3240.7G | 5.32G | 726.5M | 1.04M | 1.74G | 1.74G | 1.41G | 1.74G |
| Params | 830.8K | 8.69M | 450.9M | 18.75M | 11.09M | 1.04M | 3.94M | 2.37M | 2.77M | 2.38M |
| Times-T | 17ms | 24ms | 491ms | 25ms | 55ms | 12ms | 430ms | 265ms | 142ms | 45ms |
| Memory | 11MiB | 44MiB | 1762MiB | 85MiB | 56MiB | 12MiB | 24MiB | 27MiB | 29MiB | 27MiB |
| Times-I | 9.6ms | 11.2ms | 66.9ms | 12.7ms | 21.2ms | 8.6ms | 55.0ms | 28.2ms | 18.1ms | 14.1ms |
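Numbers of this kind can be reproduced with a few lines; the sketch below (assuming a forecasting `model` instance, the thop package, and an available GPU) measures FLOPs, parameters, and peak GPU memory:

```python
import torch
from thop import profile

def measure_efficiency(model, seq_len=720, channels=1):
    """Profile FLOPs/params with thop and peak GPU memory (illustrative sketch)."""
    x = torch.randn(1, seq_len, channels)
    flops, params = profile(model, inputs=(x,))   # thop reports MACs and params
    torch.cuda.reset_peak_memory_stats()
    _ = model.cuda()(x.cuda())                    # one forward pass
    peak_mib = torch.cuda.max_memory_allocated() / 2**20
    return flops, params, peak_mib
```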
## 7 CONCLUSIONS
In summary, we propose a CNN-based approach for time series forecasting built around the periodic window, with the key modules I2PE, TDPD, and DCB. Compared to previous SOTA models, our model captures the correlation between long and short periods and becomes more effective as the look-back window grows. The proposed model not only outperforms the other baselines in prediction accuracy, but also achieves higher computational efficiency.
This work demonstrates the potential of CNN-based methods for the TSF tasks. The correlation between the period-trend and oscillation terms can provide the local periodicity of a time series. In future work, we will focus on the correlation and interplay between the period-trend and oscillation terms, instead of modeling them separately.
REFERENCES
Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. *arXiv preprint arXiv:1803.01271*, 2018.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. *arXiv:1412.3555v1*, 2014.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, George Heigold, Sylvain Gelly, and et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.
Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. Tsmixer: Lightweight mlp-mixer model for multivariate time series forecasting. *Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’23)*, 2023. URL https://doi.org/10.1145/3580305.3599533.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8): 1735–1780, 1997. doi: 10.1162/neco.1997.9.8.1735.
Guokun Lai, Weicheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long-and short-term temporal patterns with deep neural networks. *SIGIR*, 2018.
Zhe Li, Shiyi Qi, Yiduo Li, and Zenglin Xu. Revisiting long-term time series forecasting: An investigation on linear mapping. *arXiv preprint arXiv:2305.10721*, 2023a.
Zhe Li, Zhongwen Rao, Lujia Pan, and Zenglin Xu. Mts-mixers: Multivariate time series forecasting via factorized temporal and channel mixing. *arXiv preprint arXiv:2302.04501*, 2023b.
Shengsheng Lin, Weiwei Lin, Wentai Wu, Songbo Wang, and Yongxiang Wang. Petformer: Long-term time series forecasting via placeholder-enhanced transformer. *arXiv preprint arXiv:2308.04791*, 2023.
Yuqi Nie, Nam H Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. *International Conference on Learning Representations (ICLR)*, 2023.
David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. Deepar: Probabilistic forecasting with autoregressive recurrent networks. *International Journal of Forecasting*, 2020. URL https://doi.org/10.48550/arXiv.1704.04110.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. *CVPR*, 2015.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Proceedings of the Advances in Neural Information Processing Systems (NeurIPS)*, 2017.
Huiqiang Wang, Jian Peng, Feihu Huang, Jince Wang, Junhui Chen, and Yifei Xiao. Micn: Multi-scale local and global context modeling for long-term series forecasting. *ICLR*, 2023.
Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. *Advances in Neural Information Processing Systems (NeurIPS)*, 2021.
Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. Timesnet: Temporal 2d-variation modeling for general time series analysis. *International Conference on Learning Representations (ICLR)*, 2023.
Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? *Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)*, 2023.
|
gOuWPd4f2U
|
While multi-modal alignment is a crucial topic in deep learning with applications across various multi-modal tasks, why does this paper primarily focus on chemistry? It initially conveys the impression of proposing a general multimodal alignment model but, in practice, concentrates solely on multimodal alignment within the field of chemistry. Additionally, the authors fail to provide sufficient justification for this specific focus.
|
Multi-Level Multimodal Alignment with Knowledge-Guided Instance-Wise Discrimination
Anonymous authors
Paper under double-blind review
Abstract
In multimodal alignment, meta-alignment and multi-level alignment play important roles. However, it is challenging to integrate meta-alignment into a multi-level multimodal alignment framework involving the operation on both reducible substances (e.g., molecules and spectra) and irreducible elements (e.g., atoms and spectral peaks). Such a framework not only inherits the challenges of meta-alignment (e.g., heterogeneity, loss of nuance, interference, and conflicting similarities) but also introduces new ones: navigating the interactions among reducible substances and irreducible elements and recognizing objects at each level. Many existing alignment methods suffer from inaccurate component relation estimation and potential bias, as they rely on manual definitions of pair closeness. In response, we introduce Multi-Level Multimodal Alignment with Knowledge-Guided Instance-Wise Discrimination (K-M3AID), an innovative approach that utilizes continuous knowledge variables with inherent natural ordering for meta-alignment. K-M3AID effectively addresses these challenges by promoting both reliable distance learning and unbiased alignment within the context of cross-modality alignment for multi-level structures. Extensive empirical studies conducted on complex molecular structures underscore the substantial efficacy of K-M3AID. It significantly improves matching accuracy while augmenting multi-level alignment capabilities. This novel approach holds great promise for advancing alignment techniques across diverse molecular contexts, offering a more robust foundation for ongoing research in chemical analysis and beyond.
1 Introduction
Multimodal alignment (MMA), as a critical aspect of multimodal deep learning, aims at establishing connections between contextually related information across heterogeneous modalities (such as text, images, audio, video, sensor data, etc) (Liang et al., 2023; Jabeen et al., 2023). Its subject may take the form of either a reducible substance (RS) or an irreducible element (IE) within a reducible substance. RS-MMA signifies a form of high-level alignment, exemplified by semantic alignment, which enables models to extract and understand the rich semantics and meanings across different modalities (Rocco et al., 2018; Wu et al., 2022; Yang et al., 2023; Liang et al., 2023). IE-MMA, in conjunction with meta-learning (Vilalta & Drissi, 2002; Vanschoren, 2018; Nichol et al., 2018), converges into multimodal meta-alignment (IE-Meta-MMA), which carries great potential for cognitive processing, generalization, and the remarkable capacity to execute zero-shot tasks (Ma et al., 2022). While multi-level MMA (MLMMA) has been demonstrated for visual-textual alignment (Hu et al., 2019; Khan et al., 2022), these frameworks remain limited to the only combination of multi-level RS-MMA, not involving IE-Meta-MMA. The potential integration of RS-MMA and IE-Meta-MMA can result in a synthesis of the advantages and benefits offered by both approaches, creating a unique paradigm of MLMMA.
Does the introduction of IE-Meta-MMA into MLMMA pose a significant challenge? The incorporation of IE-Meta-MMA into the MLMMA framework will not only inherit substantial challenges from Meta-MMA, such as notable data heterogeneity, limited data annotation and labeling, loss of nuance, interference, conflicting similarities, generality and transferability, but also introduce new challenges: the dependence between RS-MMA and IE-Meta-MMA. A successful MLMMA model
must attain the following capabilities: a) perform effective representation learning for multimodal information with varied data formats, scales, and noise levels; b) conduct dynamic communication between RS-MMA and IE-Meta-MMA that accommodates the dependence and interaction among different-level alignments; c) decipher complex relationships between RSs and IEs within dynamic environments. MLMMA calls for algorithmic sophistication, interdisciplinary collaboration, and a holistic understanding of data interplay.
MMA has also emerged as a catalyst to revolutionize the field of chemistry, particularly in establishing the correspondence between molecules and their functionalities (Finlayson et al., 2020) or their expressions through a variety of spectroscopies (Yang et al., 2021). Since molecules come into existence through the union of atoms, these molecular-level interplays are categorized as RS-MMA. However, these interplays do not offer profound atomic-level insights. A solid understanding of atomic characteristics and functions within specific local contexts can enhance our understanding of molecular-level phenomena. This meta-knowledge can be generalized and applied to diverse situations with a high degree of precision, even in zero-shot scenarios. Potentially, it could aid in solving isomer recognition, one of the most challenging tasks in chemistry (Bifulco et al., 2007; Duddeck & Díaz Gómez, 2009; Hussaini et al., 2020). Isomers typically fall into two main categories: structural isomers, which share the same chemical formula but display distinct atom connectivity, and spatial isomers, which share the same topology graph but diverge in their three-dimensional arrangement (see Appendix D.1). Distinguishing these complex isomers requires years of expertise in chemical bonding and spatial relationships. This presents an opportunity to enhance the understanding of molecular structures, behaviors, and functions through an MLMMA model that incorporates atomic-level alignment, i.e., IE-Meta-MMA.
In view of these challenges and opportunities, we propose a novel framework, K-M3AID (Multi-Level Multimodal Alignment with Knowledge-Guided Instance-Wise Discrimination), incorporating RS-MMA and IE-Meta-MMA, to solve the challenging Nuclear Magnetic Resonance (NMR) (Slichter, 2013) spectral alignment task in chemistry (see Figure 1). Our K-M3AID framework is a dual-coordinated contrastive learning architecture, which contains three key components: an RS-MMA module, an IE-Meta-MMA module, and a communication channel. The RS-MMA module establishes the correspondences of molecules with their individual $^{13}$C NMR spectra. Each molecule, with its unique arrangement of atoms and bonding patterns, gives rise to a distinct spectral signature. Thus, we adopt a simple cross-entropy loss for contrastive learning in the RS-MMA module. The IE-Meta-MMA module aligns each C atom within the molecules with its signal on the spectrum. In contrast to the diverse and distinctive molecular spectral signatures, many atoms exhibit chemical symmetry and magnetic equivalence within the same molecule, corresponding to the same signals. Meanwhile, atoms with different local surroundings can still present significant similarity on the spectrum, which introduces a heightened level of complexity. In view of these complex scenarios, we propose knowledge-guided instance-wise discrimination based contrastive learning in the IE-Meta-MMA module (see Figure 2).
In summary, our contribution comprises three major aspects. Conceptually: we integrate IE-Meta-MMA into the MLMMA framework, which facilitates rapid adaptation and enhances the efficiency of learning for multimodal zero-shot tasks. Methodologically: we present knowledge-guided instance-wise discrimination for cross-modal contrastive learning, which takes advantage of continuous and domain-specific features with natural ordering. To the best of our knowledge, this is the first work to demonstrate knowledge-guided instance-wise discrimination based cross-modal contrastive learning. Empirically: we demonstrate the effectiveness of K-M3AID in multiple zero-shot tasks: molecular and atomic alignment, spectrum-to-molecule retrieval, and isomer recognition.
2 RELATED WORK
In general, MLMMA involves three key techniques: multimodal contrastive learning, instance-wise discrimination and meta-alignment.
Multimodal Contrastive Learning Mechanism: The paradigm, exemplified by models like CLIP (Contrastive Language-Image Pretraining) (Radford et al., 2021; Li et al., 2021), accommodates scenarios featuring multiple data modalities. It simultaneously acquires representations for both text and images through two pre-trained unimodal encoders, maps embeddings into a joint space via complemented projection layers, and aligns them through contrastive loss. The overall picture
Figure 1: The Architecture of K-M3AID. RSs refers to molecular spectrum and molecules, IEs refers to peaks and atoms. S for spectrum embedding, G for graph embedding, P for peak embedding and N for node embedding.
of CLIP is an end-to-end mechanism, which typically exhibits a symmetric gradient flow in the training process.
**Multimodal Instance-Wise Discrimination:** Instance discrimination (Le-Khac et al., 2020; Zolfaghari et al., 2021; Morgado et al., 2021; Liu et al., 2023), as a form of self-supervised learning, distinguishes individual instances without explicit class labels. In multimodal contrastive learning, it can be categorized into two general approaches: strong-pair-based (van den Oord et al., 2019; Jaiswal et al., 2021; Liu et al., 2023) and weak-pair-based (Salakhutdinov & Hinton, 2007; Frosst et al., 2019; Liang et al., 2021) instance-wise discrimination. The strong-pair-based NCE method enforces a precise one-to-one correspondence between real samples and artificially generated noise samples. An example of a positive pair is a noise-added picture of a zebra with the text description of a zebra. Instead of one-to-one correspondences, the weak-pair-based approach relaxes positive pairs to broader semantic correspondences. An example of a positive pair is a picture of a zebra with the text description of a horse, but not with the text description of a tiger.
**Multimodal Meta-Alignment:** As viewed through the lenses of intermediate-level alignment and irreducible-element-level alignment, multimodal meta-alignment represents a multifaceted approach to ensuring organizational coherence and effectiveness (Ma et al., 2022). Exemplary instances of intermediate-level meta-alignment, as seen in works like Cross-Modal Generalization (Chen et al., 2017; Li et al., 2020; Liang et al., 2021; Zhang et al., 2021) and Livestreaming Product Recognition (Yang et al., 2023), typically function at both the object level and the patch level. The exploration of multimodal meta-alignment at the level of irreducible elements remains relatively underdeveloped in the current landscape.
### 3 Our Method
In this section, we first present the architecture of the K-M3AID framework, an end-to-end system designed for MLMMA. Then, we introduce the contrastive learning loss in K-M3AID along with the principles of knowledge-guided instance-wise discrimination.
#### 3.1 Architecture
The K-M3AID framework is a dual-CLIP architecture (see Figure 1), which consists of three critical components: the RS-MMA module, the IE-Meta-MMA module and the communication channel. The RS-MMA module adopts a gradient-asymmetric CLIP mechanism. While two unimodal encoders work in conjunction, only the from-scratch graph encoder (GIN; Xu et al., 2018) undergoes dynamic training throughout the process, whereas the pre-trained spectrum encoder (Yang et al., 2021) remains fixed. Both encoders are complemented by dedicated projection layers, which facilitate the mapping of embeddings into a joint space. The IE-Meta-MMA module adopts a gradient-symmetric CLIP mechanism. It is equipped with two from-scratch unimodal encoders, a node encoder and a peak encoder, as well as their dedicated projection layers. The graph encoder in the RS-MMA module shares part of its weights with the node encoder in the IE-Meta-MMA module, serving as the communication channel. (See the detailed features of the respective encoders in Appendix B.)
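To make the weight-sharing communication channel concrete, the following is a minimal sketch assuming PyTorch Geometric's `GINConv`; the layer widths and single-layer depth are illustrative, not the exact encoders specified in Appendix B.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv, global_add_pool

class SharedGraphNodeEncoder(nn.Module):
    """One GIN backbone whose per-node states feed the IE-level (node)
    projection, and whose pooled state feeds the RS-level (graph) projection,
    so the two modules share weights (the communication channel)."""
    def __init__(self, in_dim: int = 32, hid: int = 128, out: int = 64):
        super().__init__()
        mlp = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.conv = GINConv(mlp)               # shared message-passing backbone
        self.node_proj = nn.Linear(hid, out)   # projection head for IE-Meta-MMA
        self.graph_proj = nn.Linear(hid, out)  # projection head for RS-MMA

    def forward(self, x, edge_index, batch):
        h = self.conv(x, edge_index)           # per-node embeddings
        return self.node_proj(h), self.graph_proj(global_add_pool(h, batch))
```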
3.2 Contrastive Learning Loss
The synergy between these two modules is pivotal, and both collectively contribute to our loss function, expressed as
\[
L = CL_{RS} + CL_{IE}, \tag{1}
\]
where \( CL_{RS} \) represents the contrastive learning loss in the RS-MMA module (Equations 2-4), and \( CL_{IE} \) the contrastive learning loss in the IE-Meta-MMA module (Equations 5-8).
Let \( i \) and \( j \) index reducible substances. Then \( x_i \) denotes the raw input in modality A for the \( i^{th} \) reducible substance and \( y_j \) denotes the raw input in modality B for the \( j^{th} \) reducible substance. Let \( f_x(\cdot) \) denote the encoder function for modality A, and \( f_y(\cdot) \) the encoder function for modality B. In the RS-MMA module, these two unimodal encoder functions should map \( x_i \) and \( y_j \) to proximate locations in the joint embedding space (inter-modality) if \( i = j \).
\[
CL_{RS}(i) = -\log \frac{e^{\delta(x_i, y_i)}}{\sum_{1 \leq j \leq N} e^{\delta(x_i, y_j)}} \tag{2}
\]
\[
= -\log(\text{softmax}(\delta(x_i, y_i))) \tag{3}
\]
where \( \delta(x_i, y_j) = f_x(x_i)^T \cdot f_y(y_j) \) and \( N \) is the total number of reducible substances in the current batch.
Thus, the total \( CL_{RS} \) is expressed as follows:
\[
CL_{RS} = \frac{1}{N} \sum_{1 \leq i \leq N} CL_{RS}(i) \tag{4}
\]
This design of the loss aims to match the same reducible substance across different modalities.
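A minimal PyTorch sketch of Equations 2-4 follows, assuming `emb_x` and `emb_y` are the joint-space embeddings produced by the two encoders and their projection layers; since Equations 2-3 are exactly the negative log-softmax of matched pairs, the loss reduces to a cross-entropy with diagonal targets.

```python
import torch
import torch.nn.functional as F

def cl_rs(emb_x: torch.Tensor, emb_y: torch.Tensor) -> torch.Tensor:
    """emb_x, emb_y: (N, d) embeddings of the same N reducible substances."""
    logits = emb_x @ emb_y.t()                 # delta(x_i, y_j) for all pairs
    targets = torch.arange(emb_x.size(0), device=emb_x.device)
    return F.cross_entropy(logits, targets)    # mean of -log softmax(diagonal)
```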
3.3 Knowledge-Guided Instance Wise Discrimination Contrastive Learning
A Knowledge Span, which we define as a continuous and domain-specific feature exhibiting natural ordering and offering guidance, can potentially offer valuable insights into the contrastive learning process. As such, we propose a novel and general approach to contrastive learning called
knowledge-guided instance-wise discrimination. This approach expands the scope of contrastive learning from confined comparisons (pre-determined negative and positive pairs) to unrestricted comparisons (no need for pre-determination). This extension removes the necessity of explicitly defining such pairs, thus mitigating the potential introduction of human bias.
Suppose \( M \) is the set of irreducible elements in the reducible substances. \( A \subset \mathbb{R}^{d_1} \) is the set of tunable irreducible elements' embeddings in modality A, \( B \subset \mathbb{R}^{d_1} \) is the set of tunable irreducible elements' embeddings in modality B, and \( K \subset \mathbb{R}^{d_2} \) is the corresponding fixed knowledge span label set that guides the relative distance learning between components in \( A \) and \( B \). Thus, the sizes of \( A \), \( B \), and \( K \) are all \( |M| \).
Let \( A_i \) be the \( i^{th} \) irreducible element embedding of \( A \), and \( B_j \) be the \( j^{th} \) irreducible element embedding of \( B \). We define the distance function between \( A_i \) and \( B_j \) as \( d_E(A_i, B_j) = A_i \cdot B_j \), and a calibration function \( d(K_i, K_j) \rightarrow \mathbb{R}^+ \) with a monotonic property and the constraint \( \sum_{j=1}^{|M|} d(K_i, K_j) = 1 \), in which \( K_i \) and \( K_j \) serve as the designated knowledge span labels. We introduce the Knowledge Span Guided Loss (KSGL) as follows:
\[
KSGL(i) = - \sum_{1 \leq j \leq |M|} d(K_i, K_j) \log \left( \frac{e^{d_E(A_i, B_j)}}{\sum_{1 \leq k \leq |M|} e^{d_E(A_i, B_k)}} \right) \tag{5}
\]
\[
= - \sum_{1 \leq j \leq |M|} d(K_i, K_j) \log(\text{softmax}(d_E(A_i, B_j))) \tag{6}
\]
In particular, when the loss reaches its ideal optimum, \( d(K_i, K_j) \) and \( d_E(A_i, B_j) \) satisfy the following relation:
\[
d(K_i, K_j) = \text{softmax}(d_E(A_i, B_j)) \tag{7}
\]
For a detailed proof, please refer to Appendix A. As a result, the corresponding \( CL_{IE} \) is expressed as follows:
\[
CL_{IE} = \frac{1}{|M|} \sum_{1 \leq i \leq |M|} KSGL(i) \tag{8}
\]
### 3.4 Chosen Knowledge Span: ppm
\( ^{13}C \) NMR uncovers molecular structures by providing the chemical environments of carbon atoms and their magnetic responses to external fields, and quantifies these features in parts per million (ppm) relative to a reference compound like tetramethylsilane (TMS), simplifying comparisons across experiments. Thus, continuous peak positions, measured in ppm, can serve as a robust knowledge span to facilitate instance-wise discrimination for this contrastive learning task.
For the IE-Meta-MMA module, in the case of the ppm guide, \( A \) is the set of learned node embeddings for carbon atoms and \( B \) is the set of learned peak embeddings for the respective carbon atoms; \( K \) is the set of ppm values for each corresponding carbon atom in \( A \) and \( B \). Suppose \( ppm_i \) is the ppm for the \( i^{th} \) carbon atom and \( ppm_j \) is the corresponding ppm for the \( j^{th} \) peak. \( d(\cdot, \cdot) \) is then defined as follows:
\[
d(K_i, K_j) = d(ppm_i, ppm_j) = \text{softmax}\left( \frac{\tau_2}{|ppm_i - ppm_j| + \tau_1} \right) \tag{9}
\]
where \( \tau_1 \) and \( \tau_2 \) are temperature hyper-parameters. For further discussion of the selection of \( \tau_1 \) and \( \tau_2 \), please refer to Appendix C.2. Then, the final form of the contrastive loss at the irreducible atom level, according to Equation 6, is as follows:
\[
KSGL(i) = - \frac{1}{|M|} \sum_{1 \leq j \leq |M|} d(ppm_i, ppm_j) \log \left( \frac{e^{d_E(A_i, B_j)}}{\sum_{1 \leq k \leq |M|} e^{d_E(A_i, B_k)}} \right) \tag{10}
\]
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
4.1.1 DATASETS AND TASKS
For training the K-M3AID model, the dataset comprises over 20,000 data points sourced from nmrshiftdb2 (Steinbeck et al., 2003). In this dataset, molecules are aligned with their respective $^{13}$C NMR spectra, and atomic alignments with peaks are also included. The quality of the dataset was further validated by experienced organic chemists. In the zero-shot isomer recognition task, the datasets never appeared in the training dataset, and each of the isomer groups contains at least 10 molecules, which are structural or spatial isomers of each other (see details on isomers in Appendix D). In the zero-shot molecular retrieval task, 1,000 spectra (none of which appeared in the training dataset) were used; over 1 million molecules were collected from PubChem (Kim et al., 2023) and randomly chosen for the experiments.
4.1.2 BALANCE OF CL$_{RS}$ AND CL$_{IE}$
In order to gain insights into how the interplay between CL$_{RS}$ and CL$_{IE}$ impacts both the molecular-level alignment accuracy in RS-MMA and the atomic-level alignment accuracy in IE-Meta-MMA, we introduced a parameter $\alpha$ to adjust the weights of CL$_{RS}$ and CL$_{IE}$:
$$L = \alpha \cdot CL_{RS} + (1 - \alpha) \cdot CL_{IE},$$
and conducted a series of studies regarding $\alpha$, where $0 \leq \alpha \leq 1$ (see Table 1 and Appendix C.3).
To begin, when utilizing CL$_{RS}$ at full capacity with $\alpha = 1$, the accuracy of molecular alignment reaches approximately 94.6%. However, the accuracy of atomic alignment is approximately 17.6%, as the CL$_{IE}$ for atomic alignment was omitted. Conversely, when neglecting CL$_{RS}$ with $\alpha = 0$ and relying solely on CL$_{IE}$, the accuracy of molecular alignment decreases dramatically to merely around 0.7%, but the accuracy of atomic alignment improves significantly to approximately 90.4%. These findings imply that successful molecular alignment offers a certain degree of guidance for atomic alignment, but atomic alignment alone proves inadequate for directing molecular alignment in the desired direction. As we vary the value of $\alpha$ from 0 to 1, the accuracy of molecular alignment undergoes an initial increase followed by a subsequent decline, reaching its optimal performance of 95.5% at $\alpha = 0.5$. On the other hand, the accuracy of atomic alignment remains stable at approximately 90% for $\alpha$ ranging from 0 to 0.2. However, when $\alpha$ exceeds 0.2, a decrease can be observed in the accuracy of atomic alignment. This indicates that an excessive emphasis on molecular alignment leads to a decrease in the performance of atomic alignment. Thus, we take $\alpha = 0.2$ as the optimal setting for the following experiments.
Table 1: Balancing CL$_{RS}$ and CL$_{IE}$ via $\alpha$ ablation study to evaluate accuracy with epochs = 200, $\tau_1 = 10^1$ and $\tau_2 = 10^{-3}$.
| $\alpha$ | 0.00 | 0.10 | 0.20 | 0.50 | 0.80 | 0.90 | 1.00 |
|---------|------|------|------|------|------|------|------|
| RS-MMA | 0.7±0.3 | 94.3±0.1 | 94.7±0.4 | **95.5±0.4** | 95.1±0.2 | 94.8±0.5 | 94.6±0.4 |
| IE-Meta-MMA | 90.4±0.2 | 90.3±0.3 | **90.3±0.1** | 89.6±0.0 | 86.3±0.8 | 83.7±0.4 | 17.6±3.1 |
4.2 RESULTS OF K-M3AID
4.2.1 PERFORMANCE ON RS-MMA
The K-M3AID model achieves a validation accuracy above 94% for molecular-level alignments in the RS-MMA module after 200 epochs. Subsequently, we evaluate the capability of the trained model by retrieving a specific molecule based on a given spectrum (Spec2Mol) for various dataset sizes (see Table 2). For a molecular dataset containing 100 entries, the K-M3AID model consistently achieves approximately 95% accuracy in retrieval within the top 1%, 5%, 10%, and 25% of results. For a molecular dataset of size $10^3$, the K-M3AID model consistently achieves retrieval accuracies above 97.0% within the top 5%, 10%, and 25%. For a molecular dataset of size $10^4$, the K-M3AID model maintains an accuracy level of 85.9% for the top 10% retrieval and approximately 93% for the top 25% retrieval. Notably, for a molecular dataset with up to $10^5$ entries, the K-M3AID model attains an average accuracy of 53.1% for the top 10% retrieval and 68.2% for the top 25% retrieval. These results set the K-M3AID model apart from other methods, making it an exceptional choice in such scenarios.
Table 2: Zero-shot Spec2Mol task on molecular datasets with different number of molecules
| | $10^2$ | $10^3$ | $10^4$ | $10^5$ | $10^6$ |
|----------------|------------|------------|------------|------------|------------|
| Top 1(%) | 95.4±0.6 | 77.3±2.5 | 44.7±3.3 | 16.2±3.6 | 4.4±2.3 |
| Top 5(%) | 100.0±0.2 | 97.3±0.4 | 77.5±2.7 | 40.2±4.8 | 12.3±4.8 |
| Top 10(%) | 100.0±0.1 | 99.1±0.5 | 85.9±1.9 | 53.1±4.2 | 18.5±5.3 |
| Top 25(%) | 100.0±0.1 | 99.8±0.3 | 93.3±0.8 | 68.2±3.1 | 29.7±6.4 |
4.2.2 Performance on IE-Meta-MMA
The K-M3AID model achieves a validation accuracy above 90% for atomic-level alignment in the IE-Meta-MMA module after 200 epochs, as shown in Figure 3. Within the validation set from the 5-fold experiments, there are 12,771 molecules containing fewer than 10 carbon atoms, 7,043 molecules with carbon atom counts between 10 and 20, and 1,138 molecules with more than 20 carbon atoms. Specifically, our model achieves 100% accuracy in 74.1% of the molecules containing fewer than 10 C atoms. For molecules with 10 to 20 C atoms, our model achieves 100% accuracy in 37.2% of cases. Furthermore, it attains an accuracy exceeding 80% in more than 50% of the molecules containing more than 20 C atoms.
In complex natural product molecules, it is common that the local contexts of some atoms within the same molecule exhibit a high degree of similarity. This gives rise to challenges for atomic alignment, as some atoms correspond to ppm values in close proximity. However, our K-M3AID model is capable of recognizing each of the atoms with effective learned embeddings and deciphering the correspondences between the atoms and the peaks in a zero-shot manner. Two complex natural product molecules with multiple rings (4 and 4, respectively) and multiple chiral centers (6 and 8, respectively) are taken to showcase the effectiveness of atomic alignment (see Figure 4).
4.3 Comparison to existing instance-wise discrimination approaches
In K-M3AID, knowledge-guided instance-wise discrimination (K-ID) is adopted in the IE-Meta-MMA module. As strong-pair-based (SP-ID) and weak-pair-based (WP-ID) instance-wise discrimination are general approaches in contrastive learning, we replace K-ID with SP-ID and WP-ID to conduct a comparative analysis of the impact of K-ID, SP-ID and WP-ID on molecular and atomic alignment. SP-ID requires that the irreducible elements (atoms and peaks) have a precise match across different modalities, with the sole correct pairs established in the training process. WP-ID, however, extends the scope from one precise match to multiple matches within a chosen threshold on the distance of their corresponding ppm values. In this context, the mathematical definitions of strong pairs and weak pairs are given as follows:
\[
\text{Strong Pair: } |ppm_i - ppm_j| = 0,
\]
\[
\text{Weak Pair: } |ppm_i - ppm_j| \leq th,
\]
where \(A_i \in A\) stands for atoms, \(P_j \in P\) stands for peaks, and \(th\) abbreviates threshold. In addition, \(ppm_i\) is the ppm of the \(i^{th}\) carbon atom and \(ppm_j\) is the corresponding ppm of the \(j^{th}\) peak.
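For comparison with the K-ID calibration above, the following sketch builds the SP-ID/WP-ID positive-pair targets from the same ppm values; `th=0` recovers strong pairs and `th>0` weak pairs (names and shapes are illustrative).

```python
import torch

def pair_targets(ppm_atoms: torch.Tensor, ppm_peaks: torch.Tensor, th: float = 0.0):
    """Row-normalized 0/1 target matrix over (atom, peak) pairs."""
    diff = (ppm_atoms[:, None] - ppm_peaks[None, :]).abs()
    pos = (diff <= th).float()                  # strong pairs when th == 0
    return pos / pos.sum(dim=-1, keepdim=True).clamp(min=1)  # avoid div-by-zero
```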
Table 3: Validation accuracy of SP-ID-based and WP-ID-based models with \(epochs = 200\) and \(\alpha = 0.2\).
| Method | SP-ID | WP-ID(th=1) | WP-ID(th=2) | WP-ID(th=5) | WP-ID(th=10) | K-ID |
|----------|-----------|-------------|-------------|-------------|--------------|-----------|
| RS MMA | 93.5±0.6 | 91.3±0.8 | 90.6±0.4 | 90.3±0.6 | 88.4±1.4 | 94.7±0.4 |
| IE MMA | 89.3±0.4 | 83.7±0.6 | 83.2±0.2 | 79.8±0.5 | 66.1±2.5 | 90.2±0.1 |
4.3.1 Comparison on RS-MMA
K-ID outperforms SP-ID and WP-ID in molecular-level alignment in the RS-MMA module (see Table 3). K-ID enables molecular-level alignment to achieve a validation accuracy of around 94%, 1 to 6% higher than the other approaches. Meanwhile, K-ID distinguishes itself prominently from the SP-ID and WP-ID approaches on the zero-shot isomer recognition task, giving 100% accuracy for multiple groups of isomers (see Table 4). Furthermore, on the zero-shot Spec2Mol task, as the size of the molecular dataset increases, our K-ID-based model consistently exhibits superiority over existing methods such as SP-ID and WP-ID (see Table 2 and Table 5). These empirical findings underscore the benefits of K-ID based IE-Meta-MMA in the context of Spec2Mol, indicating its positive impact on RS-MMA.
Table 4: Zero-Shot Isomer Recognition Accuracy with SP-ID-based, WP-ID-based and K-ID-based models. For a detailed demo of C\(_7\)H\(_{11}\)NO\(_3\), please refer to Appendix D.2.
| Formula | #Isomers | SP-ID (%) | WP-ID (th=1) (%) | K-ID(%) |
|---------------|----------|-----------|-----------------|---------|
| C\(_4\)H\(_6\)O | 15 | 86.7 | 86.7 | 100.0 |
| C\(_9\)H\(_9\)N | 15 | 86.7 | 80.0 | 100.0 |
| C\(_7\)H\(_{11}\)NO\(_3\) | 14 | 78.6 | 85.7 | 100.0 |
| C\(_6\)H\(_{13}\)NO | 23 | 91.3 | 91.3 | 100.0 |
| C\(_8\)H\(_7\)NO\(_4\) | 13 | 92.3 | 84.6 | 100.0 |
| C\(_{15}\)H\(_{24}\)O | 16 | 93.8 | 93.8 | 100.0 |
| C\(_{11}\)H\(_{14}\) | 10 | 90.0 | 80.0 | 100.0 |
| C\(_7\)H\(_{15}\)NO | 14 | 85.7 | 85.7 | 100.0 |
| C\(_{10}\)H\(_{16}\)O\(_2\) | 26 | 92.3 | 84.6 | 100.0 |
| C\(_8\)H\(_{15}\)N | 11 | 81.8 | 90.9 | 100.0 |
4.3.2 Comparison on IE-Meta-MMA
K-ID pushes the validation accuracy of atomic-level alignment above 90%, 1 to 24% higher than the SP-ID and WP-ID approaches in the IE-Meta-MMA module (see Table 2 and Table 3). This superiority arises from the inherent limitations of both strong and weak pair definitions, which fail to precisely calibrate the diverse relationships among the elements. This finding is further supported by the significant decrease in the accuracy of atomic alignment as the weak-pair threshold increases. The limitation of either SP-ID or WP-ID becomes notably significant in the following two scenarios: 1) when the local contexts of some atoms exhibit a high degree of similarity; 2) when some atoms exhibit symmetric mapping within the same molecule.
Table 5: Zero-Shot Spec2Mol Accuracy with SP-ID and WP-ID on Pub-Chem Database
| Method | Accuracy | $10^2$ | $10^3$ | $10^4$ | $10^5$ | $10^6$ |
|--------------|----------|--------|--------|--------|--------|--------|
| **SP-ID** | | | | | | |
| Top 1(%) | 95.3±0.8 | 78.6±2.7 | 35.8±3.8 | 12.9±1.6 | 3.4±0.9 |
| Top 5(%) | 95.4±0.1 | 77.3±0.7 | 44.7±2.3 | 16.2±2.4 | 4.4±1.5 |
| Top 10(%) | 100.0±0.0 | 97.3±0.7 | 77.5±2.3 | 40.2±2.4 | 12.3±1.5 |
| Top 25(%) | 100.0±0.0 | 99.1±0.2 | 85.9±1.0 | 53.1±3.0 | 18.5±1.8 |
| **WP-ID(th=1)** | | | | | | |
| Top 1(%) | 92.9±0.6 | 71.7±1.0 | 32.7±1.3 | 10.7±0.5 | 3.6±0.7 |
| Top 5(%) | 99.6±0.1 | 93.8±0.8 | 63.9±1.5 | 29.3±1.5 | 10.2±1.2 |
| Top 10(%) | 99.9±0.0 | 97.1±0.4 | 76.8±0.7 | 39.3±0.9 | 15.7±1.5 |
| Top 25(%) | 100.0±0.0 | 99.1±0.2 | 88.2±0.6 | 55.7±1.1 | 26.5±2.0 |
Figure 5: Case study of IE-Meta-MMA. Yellow cells in the PPM difference represent the ground truth alignment, and red cross represents the wrong alignment.
In the former scenario, exemplified by molecule A in Figure 5, atom 0 and atom 4 are secondary carbons (attached to 2 carbons and 2 hydrogens), nearly symmetric on the same 5-membered ring, corresponding to ppm values of 27.0 and 29.8, respectively. The similar local context of these two atoms fools SP-ID and WP-ID. Meanwhile, atom 1 and atom 3 are tertiary carbons (attached to 3 carbons and 1 hydrogen), nearly symmetric on the same 5-membered ring, corresponding to ppm values of 54.5 and 44.1, respectively. Only WP-ID fails to distinguish and align them. In the latter scenario, exemplified by molecule B in Figure 5, there exist both one-to-one and one-to-many instances of atomic-level alignment within the molecular configuration. Both the SP-ID and WP-ID methods misalign certain atoms with other atoms with small ppm differences (less than 3 ppm in this case), rather than aligning them with themselves or their symmetric counterparts. In contrast, the K-ID approach excels in both scenarios by discerning every one of the atoms, which is attributed to its full utilization of ppm-difference distance learning (see additional examples in Appendix E).
5 CONCLUSION AND FUTURE WORK
In this paper, we introduced the Multi-Level Multimodal Alignment with Knowledge-Guided Instance-Wise Discrimination (K-M3AID) framework, incorporating RS-MMA and IE-Meta-MMA. Its effectiveness was demonstrated through multiple zero-shot tasks: molecular and atomic alignment, Spec2Mol, and isomer recognition. We highlighted the significance of knowledge-guided instance-wise discrimination via several metrics and case studies. Furthermore, we presented experiments aimed at accommodating the dynamic interactions between RS-MMA and IE-Meta-MMA. While our framework achieves an overall atomic-level alignment accuracy of 100% for 55% of cases, this figure drops significantly to 9.8% when dealing with molecules containing more than 20 carbon atoms. Currently, our graph encoder is implemented on 2D molecular graphs with basic node and edge features, potentially limiting its ability to produce node embeddings precise enough to distinguish atoms in extremely complex scenarios. In future developments, the incorporation of 3D-based graphs holds great potential to improve performance in this regard.
REFERENCES
Giuseppe Bifulco, Paolo Dambruoso, Luigi Gomez-Paloma, and Raffaele Riccio. Determination of relative configuration in organic compounds by nmr spectroscopy and computational methods. *Chemical reviews*, 107(9):3744–3779, 2007.
Lele Chen, Sudhanshu Srivastava, Zhiyao Duan, and Chenliang Xu. Deep cross-modal audio-visual generation. In *Proceedings of the on Thematic Workshops of ACM Multimedia* 2017, pp. 349–357, 2017.
Helmut Duddeck and Edison Díaz Gómez. Chiral recognition of ethers by nmr spectroscopy. *Chirality: The Pharmacological, Biological, and Chemical Consequences of Molecular Asymmetry*, 21(1):51–68, 2009.
Samuel G Finlayson, Matthew BA McDermott, Alex V Pickering, Scott L Lipnick, and Isaac S Kohane. Cross-modal representation alignment of molecular structure and perturbation-induced transcriptional profiles. In *BIOCOMPUTING 2021: Proceedings of the Pacific Symposium*, pp. 273–284. World Scientific, 2020.
Nicholas Frosst, Nicolas Papernot, and Geoffrey Hinton. Analyzing and improving representations with the soft nearest neighbor loss, 2019.
Zhibin Hu, Yongsheng Luo, Jiong Lin, Yan Yan, and Jian Chen. Multi-level visual-semantic alignments with relation-wise dual attention network for image and text matching. In *IJCAI*, pp. 789–795, 2019.
Syed Raziullah Hussaini, Adama Kuta, Arpan Pal, Zhiguo Wang, Margaret A Eastman, and Ramon Duran. Application of nmr spectroscopy for the detection of equilibrating e-z diastereomers. *ACS omega*, 5(38):24848–24853, 2020.
Summaira Jabeen, Xi Li, Muhammad Shoib Amin, Omar Bourahla, Songyuan Li, and Abdul Jabbar. A review on methods and applications in multimodal deep learning. *ACM Transactions on Multimedia Computing, Communications and Applications*, 19(2S):1–41, 2023.
Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. A survey on contrastive self-supervised learning. *Technologies*, 9(1), 2021. ISSN 2227-7080. doi: 10.3390/technologies9010002. URL: https://www.mdpi.com/2227-7080/9/1/2
Zaid Khan, BG Vijay Kumar, Xiang Yu, Samuel Schulter, Mammohan Chandraker, and Yun Fu. Single-stream multi-level alignment for vision-language pretraining. In *European Conference on Computer Vision*, pp. 735–751. Springer, 2022.
Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, et al. Pubchem 2023 update. *Nucleic acids research*, 51(D1):D1373–D1380, 2023.
Phuc H. Le-Khac, Graham Healy, and Alan F. Smeaton. Contrastive representation learning: A framework and review. *IEEE Access*, 8:193907–193934, 2020. doi: 10.1109/ACCESS.2020.3031549.
Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning. *arXiv preprint arXiv:2012.15409*, 2020.
Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, and Junjie Yan. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. *arXiv preprint arXiv:2110.05208*, 2021.
Paul Pu Liang, Peter Wu, Liu Ziyin, Louis-Philippe Morency, and Ruslan Salakhutdinov. Cross-modal generalization: Learning in low resource modalities via meta-alignment. In *Proceedings of the 29th ACM International Conference on Multimedia*, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450386517. doi: 10.1145/3474085.3475247. URL: https://doi.org/10.1145/3474085.3475247
Paul Pu Liang, Amir Zadeh, and Louis-Philippe Morency. Foundations and trends in multimodal machine learning: Principles, challenges, and open questions, 2023.
Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self-supervised learning: Generative or contrastive. *IEEE Transactions on Knowledge and Data Engineering*, 35(1):857–876, 2023. doi: 10.1109/TKDE.2021.3090866.
Yao Ma, Shilin Zhao, Weixiao Wang, Yaoman Li, and Irwin King. Multimodality in meta-learning: A comprehensive survey. *Knowledge-Based Systems*, 250:108976, 2022. ISSN 0950-7051. doi: https://doi.org/10.1016/j.knosys.2022.108976. URL: https://www.sciencedirect.com/science/article/pii/S0950705122004737
|
PlZIXgfWPH
|
In the conclusion you mention that new HPO tools could be designed based on your findings. Do you have exemplary ideas? I wonder whether this is really the case since your findings largely are coherent with existing knowledge from smaller studies.
|
ON THE HYPERPARAMETER LOSS LANDSCAPES OF MACHINE LEARNING ALGORITHMS
Anonymous authors
Paper under double-blind review
ABSTRACT
Despite the recent success of a plethora of hyperparameter optimization (HPO) methods for machine learning (ML) models, the intricate interplay between model hyperparameters (HPs) and predictive losses (a.k.a. fitness), which is a key prerequisite for understanding HPO, remains notably underexplored in our community. This results in limited explainability in the HPO process, rendering a lack of human trust and difficulties in pinpointing algorithm bottlenecks. In this paper, we aim to shed light on this black box by conducting large-scale fitness landscape analysis (FLA) on 1,500 HP loss landscapes of 6 ML models with more than 11M model configurations, across 67 datasets and different levels of fidelity. We reveal the first unified, comprehensive portrait of their topographies in terms of smoothness, neutrality and modality. We also show that such properties are highly transferable across datasets and fidelities, providing fundamental evidence for the success of multi-fidelity and transfer learning methods. These findings are made possible by a dedicated FLA framework that incorporates a combination of visual and quantitative measures. We further demonstrate the potential of this framework by analyzing the NAS-Bench-101 landscape, and we believe it can facilitate fundamental understanding of a broader range of AutoML tasks.
1 INTRODUCTION
In the past decade, considerable efforts have been invested in developing hyperparameter optimization (HPO) techniques to automate the laborious task of hyperparameter (HP) tuning for machine learning (ML) models. Many successful approaches (Bergstra et al., 2011; Snoek et al., 2012; Hutter et al., 2011; Srinivas et al., 2010; Karnin et al., 2013; Li et al., 2017; Falkner et al., 2018; Awad et al., 2021) have significantly advanced this field, and they have been empirically shown to outperform both manual configuration (Hutter et al., 2019; Bischl et al., 2023; Santu et al., 2022; Yang & Shami, 2020) and random search (Bergstra & Bengio, 2012).
HPO is often cast as a black-box optimization problem (BBOP), where the goal is to search for an HP configuration $\lambda \in \Lambda = \Lambda_1 \times \ldots \times \Lambda_n$ with an objective value $L(\lambda)$ as small as possible, without any explicit knowledge of the ML loss function $L : \Lambda \rightarrow \mathbb{R}$. Existing methods (see examples above) to this end essentially comprise three key components: i) a search space, ii) an optimization strategy, and iii) model evaluation. While the development of both efficient search mechanisms and evaluation strategies has received considerable attention in recent years, the intricate interplay between model HPs and predictive losses, which plays a pivotal role in understanding HPO problems, remains notably underexplored. This lack of knowledge in turn hampers the transparency and explainability (Dwivedi et al., 2023) of HPO solvers, which often function as black boxes as well. Consequently, this results in limited human trust in HPO methods and hinders their widespread application (Drozdal et al., 2020; Bischl et al., 2023). Unfortunately, given the high-dimensional, hybrid nature of HP configuration spaces, it is far from trivial to open up this black box.
The fitness landscape metaphor, pioneered by Wright in 1932 in evolutionary biology, has been widely recognized as a powerful tool for analyzing BBOPs in the evolutionary computation community (Malan, 2021). It can be envisioned as a (hyper-)surface formed by objective values over the high-dimensional space of possible configurations (Romero et al., 2013). Since the 90s, a plethora of fitness landscape analysis (FLA) methods have been developed to conduct exploratory analysis on landscape characteristics of BBOPs (Zou et al., 2022). Such methods are useful, as they
are able to extract landscape measures that are indicative of problem difficulty and how a certain search mechanism would perform on it (Smith-Miles & Lopes, 2012; Hutter et al., 2014b; Qasem & Prügel-Bennett, 2010). This knowledge would then advance the understanding of the problem characteristics (Huang & Li, 2023), assist the selection and configuration of problem solvers (Kerschke et al., 2019; Schede et al., 2022), navigate the design of new algorithms (Qasem & Prügel-Bennett, 2010), and enhance the explainability and trust for optimization (Thomson et al., 2023).
Recently, the use of FLA in analyzing HP and the related AutoML loss landscapes has also received considerable attention. Various works have studied diverse structural characteristics of these landscapes including neutrality, modality, and fitness distance correlation (e.g., Pushak & Hoos (2022); Teixeira & Pappa (2022); Pimenta et al. (2020); Schneider et al. (2022)). However, such works suffer from limited setups and fail to interrogate the connection between landscape characteristics and the success of HP optimizers, which often run in a wide range of scenarios (e.g., different models, datasets and fidelities). It remains unclear whether the HP loss landscapes induced on different settings share certain characteristics or patterns, and how such commonalities could be potentially exploited by HP optimizers. On the other hand, we argue that existing analytical methods are insufficient to provide a comprehensive analysis and comparison on HP loss landscapes since:
- The ability to visualize landscapes is crucial for enabling intuitive understandings of their complex topologies (Michalak, 2019). However, HP loss landscapes are notoriously difficult to visualize in a human-comprehensible fashion due to their high-dimensional nature. Some existing methods address this problem by plotting only one or two HPs each time (e.g., Friedman (2001); Akiba et al. (2019)), which fail to provide an integrated view of the global landscape structure.
Other works applied dimensionality reduction techniques to project the landscape into 2D space (e.g., Michalak (2019); Biedenkapp et al. (2018); Walter et al. (2022)), but the resulting plot is not able to preserve the overall topography as well as neighborhood structure of the landscape.
- There is no tangible method for quantifying the similarity between different HP loss landscapes. Despite general FLA metrics could already capture informative landscape characteristics, practices in automated algorithm selection demonstrate that domain-specific metrics are also crucial as a complementary source of information for better characterizing the target problem (Smith-Miles, 2008; Smith-Miles & Lopes, 2012). However, none of the prior works have considered such perspectives when comparing HP loss landscapes.
The overarching goal of this paper is to gain an integral view of the HP loss landscapes induced on different scenarios and thereby provide new insights to the community. To this end, we develop a dedicated landscape analysis framework to enable comprehensive analysis and comparisons among HP loss landscapes. It incorporates:
1. a novel neighborhood-aware HP loss landscape visualization method applicable to high-dimensions,
2. a series of FLA metrics quantifying landscape structural characteristics, and
3. three similarity metrics that leverage rankings of HP configurations to allow for informative landscape similarity quantification in the HPO context.

Through empirical analysis on 1,500 landscapes across 6 ML models and 67 datasets with more than 11 million configurations, we aim to advance the understanding of the following four fundamental HPO scenarios:
**HP Landscapes of Test Loss Versus Training Loss.** ‘Overfitting’ is one of the biggest interests and concerns in the ML community (Ng, 1997; Caruana et al., 2000; Recht et al., 2019; Belkin et al., 2018; Roelofs et al., 2019; Ishida et al., 2020). However, there is a lack of in-depth understanding of how test loss correlates with training loss across a broad HP landscape, and of what specific properties distinguish regions that generalize well from poorly generalizing ones. In this paper, by using our
fitness landscape analysis framework, we find that the test loss landscapes resemble their training counterparts in terms of both structural characteristics and performance rankings (see, e.g., Figure 1 (a) versus (b)), and configurations with small training error are likely to achieve a mild generalization error. However, significant discrepancies can also occur (see, e.g., Figure 1 (e) versus (f)) depending on both the choice of certain HP combinations and the dataset at hand. In such cases, struggling to reduce the training loss has little or even negative effect to refining the generalization loss.
**HP Loss Landscapes Across Fidelities.** Given the time-demanding nature of model evaluation, multi-fidelity HPO methods (Karnin et al., 2013; Kandasamy et al., 2016; Li et al., 2017; Kandasamy et al., 2017; Falkner et al., 2018; Awad et al., 2021) have achieved prominent performance through more efficient resource allocation. However, the validity of their underpinning assumption, i.e., that the ranking of configuration performance under low fidelities stays close to the ground truth (Bischl et al., 2023), remains unclear (Pushak & Hoos, 2022). Our empirical results strongly support this assumption, showing that landscapes at lower fidelities are highly consistent with full-fidelity landscapes w.r.t. both structural characteristics and performance ranks (Figure 1 (c)).
**HP Loss Landscapes Across Datasets.** Leveraging priors obtained from previous tasks to expedite the learning process for new tasks is another promising direction for HPO (Feurer et al., 2015b; Bardenet et al., 2013; Wistuba et al., 2015b; Kim et al., 2017; Rakotoarison et al., 2022; Wistuba et al., 2015c; Vanschoren, 2018; Swersky et al., 2013). These approaches are grounded in the hypothesis that ‘knowledge’ about HPO landscapes—such as configuration quality, hyperparameter importance and their interactions (Hutter et al., 2014a; Watanabe et al., 2023b; Probst et al., 2019; van Rijn & Hutter, 2017)—is correlated across related datasets when defined under a certain distance measure. A natural question that arises is whether this knowledge remains consistent when applied to a broader range of tasks. Our results on a diverse set of 67 datasets show that performance rankings, HP importance and their interactions, are largely shared across tasks (Figure 1 (d)).
**HP Loss Landscapes Across Models.** Methods rooted in Bayesian optimization (e.g., Snoek et al. (2012); Bergstra et al. (2011); Feurer et al. (2015a); Kandasamy et al. (2017)) and search space pruning (e.g., Wistuba et al. (2015a); Perrone & Shen (2019); Wang et al. (2020)) implicitly assume that the quality of configurations is locally correlated rather than fully random. This is also true for meta-heuristic methods (e.g., Friedrichs & Igel (2005); Lessmann et al. (2005); Cawley (2001); Guo et al. (2008)), which are even more dependent on specific landscape properties. While it may seem intuitive that HP loss landscapes would differ depending on the target ML model, in practice common HPO methods often perform robustly across different models. This implies that, despite superficial differences, the general family of HP loss landscapes may share certain inherent patterns/properties. We verified this hypothesis by synthesizing the results from diverse FLA metrics characterizing HP loss landscape geometry, combined with visual inspection (see, e.g., Figure 1 (a, e)). The results, gathered from 1,500 landscapes of 6 ML models under different scenarios, reveal a universal picture of the HP loss landscapes. In this picture, HP loss landscapes are smooth, nearly unimodal, and contain a large number of neutral regions; configurations with similar performance are locally clustered; and the landscape becomes flatter around the optimal configuration.
## 2 HPO Landscape Analysis Methods
This section introduces our analytical framework developed to enable exploratory analysis on different HP loss landscapes and perform comparisons between them. Due to the page limit, we only provide a brief sketch of these methods here while more detailed discussions are in Appendix B.
**HP Loss Landscape Construction.** The HP loss landscape can be formulated as a triplet \( \langle \Lambda, L, N \rangle \) with three ingredients: i) a search space \( \Lambda \) of feasible configurations that consists of pre-evaluated, discretized grid values (see Appendix F.1), ii) an ML loss function \( L : \Lambda \rightarrow \mathbb{R} \), and iii) a neighborhood structure \( N \) that specifies which configurations are neighbors to each other. Note that the form of \( N \) depends on a distance function \( d : \Lambda \times \Lambda \rightarrow \mathbb{N} \). Following Pushak & Hoos (2022), we define all categorical values of a HP to be at distance 1 from each other (i.e., the values are non-ordinal). For a numerical HP, we define the distance between two values to be the number of steps between them on the grid used for discretization. Such a distance measure is able to mimic the tuning strategy of human experts when combined with elaborately designed grid values. Based on this, the total distance between two configurations \( \lambda_i \) and \( \lambda_j \) is the sum of the distances between the respective pairs of HP values, and we say they are neighbors to each other (i.e., \( \lambda_j \in N(\lambda_i) \)) if \( d(\lambda_i, \lambda_j) = 1 \).
Table 1: Summary of the FLA metrics used in our landscape analysis framework.
| Metrics | Symbol | Domain | What a Higher Value Implies |
|--------------------------|--------|--------------|-------------------------------------------------------------------------------------------|
| Performance Assortativity | $\mathcal{L}$-ast | $[-1, 1]$ | HP Configurations with similar $\mathcal{L}$ values are more likely to be neighbors to each other. |
| Autocorrelation | $\rho_a$ | $[-1, 1]$ | The landscape is smoother |
| Neutrality Distance Correlation | NDC | $[-1, 1]$ | The landscape is more likely to be flatter near the optimum. |
| Mean Neutrality | $\bar{\nu}$ | $[0, 1]$ | There are many ‘plateaus’ in the landscape. |
| No. Local Optima | $n_{lo}$ | $\mathbb{N}^+$ | There are many ‘valleys’ or ‘peaks’ in the landscape. |
| Mean Basin Size | $\bar{s}_B$ | $\mathbb{R}^+$ | The local optima are hard to be escaped from. |
¹ Newman (2010); ² Weinberger (1990); ³ Reidys & Stadler (2001)
Finally, the HPO landscape is constructed as a directed graph where the vertices are HP configurations and an improving edge $e_{i,j} \in E$ is traced from $\lambda_i$ to $\lambda_j$ if $\lambda_j \in \mathcal{N}(\lambda_i)$ and $\mathcal{L}(\lambda_j) < \mathcal{L}(\lambda_i)$. We say that a configuration $\lambda_\ell$ is a local optimum if $\forall \lambda' \in \mathcal{N}(\lambda_\ell)$, we have $\mathcal{L}(\lambda_\ell) < \mathcal{L}(\lambda')$. In addition, we say that $\lambda_j$ is a neutral neighbor of $\lambda_i$ if their performance difference is negligible ($\leq 1$‰).
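A minimal sketch of this neighborhood definition, assuming numerical HP values are represented by their index on the discretization grid:

```python
def config_distance(lam_i: dict, lam_j: dict, numerical: set) -> int:
    """Distance between two HP configurations under the grid-based measure."""
    dist = 0
    for hp in lam_i:
        if hp in numerical:
            dist += abs(lam_i[hp] - lam_j[hp])   # steps on the discretization grid
        else:
            dist += int(lam_i[hp] != lam_j[hp])  # categorical: distance 1 if unequal
    return dist

# lam_j is a neighbor of lam_i iff config_distance(lam_i, lam_j, numerical) == 1
```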
**Landscape Visualization.** We develop a first-of-its-kind, highly interpretable method for visualizing the topography of high-dimensional HP loss landscapes by leveraging graph representation learning (Hamilton, 2020) combined with dimensionality reduction (Draganov et al., 2023) techniques. Specifically, we first extract low-dimensional features for each node in the graph. To this end, we use the HOPE (Ou et al., 2016) node embedding method because it preserves high-order proximities between configurations. Then, we compress the obtained feature vectors into 2 components using the UMAP (McInnes & Healy, 2018) algorithm, thus allowing configurations to be laid out in a 2D scatter plot. To further refine the interpretability of the obtained plots, we additionally apply linear interpolation and thereby generate a smooth landscape surface.
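A sketch of this two-stage pipeline is shown below, assuming the HOPE implementation from the `karateclub` package and `umap-learn`; the random graph merely stands in for a landscape graph, and edge directions are dropped for the embedding step.

```python
import networkx as nx
from karateclub import HOPE   # pip install karateclub
from umap import UMAP         # pip install umap-learn

G = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=0)  # stand-in landscape graph
hope = HOPE(dimensions=16)
hope.fit(G)                           # embedding preserving high-order proximities
features = hope.get_embedding()       # (n_nodes, 16) node feature matrix
xy = UMAP(n_components=2).fit_transform(features)  # 2D coordinates for plotting
```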
**Quantifying Landscape Characteristics.** To quantitatively assess the structural characteristics of HP loss landscapes, we employ a series of dedicated FLA metrics summarized in Table 1 as surrogate features. There are many other metrics for characterizing landscape properties (see Zou et al. (2022) for a detailed review), but our selected ones are particularly useful for this study as they cover the most essential landscape properties (i.e., modality, neutrality and smoothness) that are related to algorithm behaviors. More importantly, they are intuitive enough even for non-experts in FLA.
**Landscape Similarity in Terms of Performance Ranking.** The comparison of rankings of HP configurations’ performance is the essence of a large corpus of HPO methods (Hutter et al., 2019). We thereby ground our similarity measure of HP loss landscapes in the consistency of their performance ranks, denoted as $R(\mathcal{L}(\hat{\lambda}))$, to allow more informative results in the HPO context. Specifically, we use 3 statistical metrics with complementary perspectives: 1) Spearman’s $\rho_s$, which measures the association of the performance ranks of configurations in two landscapes (Spearman, 1961); 2) Kaggle’s Shake-up metric (Trotman, 2019), which assesses the average movement of configuration rankings across two landscapes; 3) the $\gamma$-set similarity (Watanabe et al., 2023a), which quantifies the ratio of the overlap between the top-10% regions of two landscapes to their union.
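A sketch of the three metrics, assuming `la` and `lb` hold the losses of the same configurations on two landscapes (the Shake-up formula follows the common normalized-rank-movement form):

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

def landscape_similarity(la, lb, top: float = 0.10):
    rho, _ = spearmanr(la, lb)                     # 1) Spearman's rho
    ra, rb = rankdata(la), rankdata(lb)
    shake_up = np.mean(np.abs(ra - rb)) / len(la)  # 2) avg. normalized rank shift
    k = int(np.ceil(top * len(la)))
    ta, tb = set(np.argsort(la)[:k]), set(np.argsort(lb)[:k])
    gamma = len(ta & tb) / len(ta | tb)            # 3) gamma-set (top-10% Jaccard)
    return rho, shake_up, gamma
```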
In addition to these, to investigate the consistency of HP importance and interactions under different scenarios, we apply the widely used functional ANOVA method (Hutter et al., 2014a) to assess the variance contribution of every HP $\lambda \in \Lambda$ as well as their interactions up to the 3rd order.
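A usage sketch with the reference `fanova` package, assuming configurations are encoded numerically (the random data here is only a placeholder):

```python
import numpy as np
from fanova import fANOVA  # pip install fanova

X = np.random.rand(1000, 5)            # placeholder: 1000 configs, 5 HPs
Y = np.random.rand(1000)               # placeholder: their losses
f = fANOVA(X, Y)
print(f.quantify_importance((0,)))     # variance contribution of HP 0
print(f.quantify_importance((0, 1)))   # 2nd-order interaction of HPs 0 and 1
```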
### 3 EXPERIMENTAL SETUP
Table 2 summarizes the meta-information of our empirical study, while the detailed HP search space of each model and the principles we followed in designing them are left to Appendix F.1. We first consider decision trees (DT) (Safavian & Landgrebe, 1991) and three types of their ensembles: random forest (RF) (Breiman, 2001), XGBoost (Chen & Guestrin, 2016) and LightGBM (Ke et al., 2017). We analyze the HP space of these models using the tabular benchmark proposed in Grinsztajn et al. (2022), which comprises 25 regression and 32 classification tasks (see Appendix F.2). These datasets span a broad range of complexities in terms of number of instances and features and are thus an ideal choice for a comprehensive inspection of landscape characteristics. In addition to these, we also study convolutional neural networks (CNNs) (Krizhevsky et al., 2012) on six classic image classification
Table 2: Summarization of meta-information of our empirical study.
Columns 1-3 describe the MODELS, columns 4-8 the DATASETS (number of tasks per category), columns 9-10 the FIDELITIES, and columns 11-12 the SUMMARIZATION.

| Model | Total HPs | Total Configs. | Cat. Class. | Cat. Reg. | Num. Class. | Num. Reg. | Image Class. | Training Data | Training Epochs | Total Configs. | # Landscapes |
|-------|-----------|----------------|-------------|-----------|-------------|-----------|--------------|----------------|-----------------|----------------|--------------|
| XGB   | 5 | 14,960 | 15 | 7 | 17 | 18 | - | {0.1, 0.25, all} | - | 2.56M | 342 |
| RF    | 6 | 11,250 | 15 | 7 | 17 | 18 | - | {0.1, 0.25, all} | - | 1.92M | 342 |
| LGBM  | 5 | 13,440 | 15 | 7 | 17 | 18 | - | {0.1, 0.25, all} | - | 2.30M | 342 |
| DT    | 5 | 24,200 | 15 | 7 | 17 | 18 | - | {0.1, 0.25, all} | - | 4.14M | 342 |
| CNN   | 8 | 6,480  | - | - | - | - | 6 | {0.1, 0.25, all} | {10, 25, 50} | 0.35M | 108 |
| FCNet | 9 | 62,208 | - | - | - | - | 4 | - | {10, 50, 100} | 0.19M | 24 |

Total (before accounting for 5-fold cross-validation): 11.15M configurations across 1,500 landscapes
tasks (Appendix F.2) using a joint architecture and hyperparameter search (JAHS) (Bansal et al., 2022) space. We additionally consider another JAHS scenario, for which we adopt the NASBench-HPO (Klein & Hutter, 2019) data included in HPOBench (Eggensperger et al., 2021). This includes 62,208 configurations of a feed-forward neural network (FCNet) evaluated on 4 UCI datasets.
For each dataset, unless predefined, we randomly split the data into a training (80%) and a test (20%) set. For all HP configurations $\lambda \in \Lambda$ of each model, we exhaustively evaluate $L_{\text{train}}(\lambda)$ and $L_{\text{test}}(\lambda)$ using 5-fold cross-validation. Here, we use root mean squared error (RMSE) and the $R^2$ score as the loss function $L$ for regression tasks, and accuracy and ROC-AUC for classification. We control the fidelity of the training by varying the number of training instances to $\{10\%, 25\%, 100\%\}$ of the whole training data. For CNN, we additionally set the budget for the number of epochs to $\{10, 25, 50\}$ and thus obtain a total of $3 \times 3$ different levels of fidelity. For FCNet, we vary fidelity by using meta-data at the $\{10, 50, 100\}$-th epoch. In the end, we obtain a total of 1,500 landscapes with more than 11M distinct HP configurations. To further demonstrate the transferability and potential impact of our proposed landscape analysis framework, we also employ it to analyze NASBench-101 (Ying et al., 2019), a well-known neural architecture search (NAS) benchmark.
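A single evaluation cell of this protocol can be sketched as follows; the model, dataset, and HP values shown are placeholders rather than our benchmark settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
config = {"n_estimators": 100, "max_depth": 8, "max_features": "sqrt"}

for frac in (0.10, 0.25, 1.00):                  # fidelity levels
    n = int(frac * len(X))
    model = RandomForestClassifier(random_state=0, **config)
    scores = cross_val_score(model, X[:n], y[:n], cv=5, scoring="accuracy")
    print(f"fidelity={frac:.2f}: acc={scores.mean():.3f}")
```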
4 RESULTS AND ANALYSIS
In this section, we seek to investigate HP loss landscapes under the four scenarios posed in Section 1. We start by providing a universal view of the general characteristics of HP loss landscapes of different ML models (Section 4.1). We then explore and compare landscapes under: i) training and test setups (Section 4.2), ii) different fidelities (Section 4.3), iii) different datasets (Section 4.4).
4.1 Overall Characteristics of HP Loss Landscape of ML Models
From landscape visualizations depicted in Figure 2 (a), we have a general impression that HP loss landscapes of ML models are highly structured and share certain patterns: they are relatively smooth; configurations are clustered in terms of performance; there is a highly distinguishable plateau consisting of prominent configurations, where the terrain becomes flatter. This impression is consistent with the FLA metrics reported in Figure 3, from which we see that landscapes for all models are:
Fairly smooth and clustered. The high $L_{\text{ast}}$ and $\rho_a$ values for $L_{\text{test}}$ landscapes shown in Figure 3 (a) and (b) respectively imply that configurations with similar $L_{\text{test}}(\lambda)$ tend to be locally connected, where a small change in $\lambda$ is not likely to cause a dramatic variation of $L_{\text{test}}(\lambda)$. This observation is similar to findings in reinforcement learning (RL) (Eimer et al., 2023), where the transitions between different parts of the HP landscapes of RL are also found to be quite smooth. This property makes HP landscapes favorable to Bayesian optimization and search space pruning techniques, as it is easier to separate the promising regions from the poorly performing ones. If the landscape were rugged instead, with $L_{\text{test}}(\lambda)$ values of very different levels often mixed together, it would be more difficult for such techniques to identify a clear promising region.
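A common way to estimate such smoothness is the random-walk autocorrelation of losses along the neighborhood graph, sketched below; `neighbors` and `loss` are assumed lookups into the pre-evaluated grid rather than functions from our codebase.

```python
import random
import numpy as np

def walk_autocorrelation(start, neighbors, loss, steps=1000, lag=1):
    """Autocorrelation of losses along a random walk over 1-HP neighbors."""
    cfg, trace = start, []
    for _ in range(steps):
        trace.append(loss(cfg))
        cfg = random.choice(neighbors(cfg))
    trace = np.asarray(trace)
    return np.corrcoef(trace[:-lag], trace[lag:])[0, 1]
```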
Nearly unimodal. As shown in Figure 3 (e), we find that a considerable fraction of the $L_{\text{test}}$ landscapes are unimodal, whereas other landscapes can have a handful to dozens (e.g., for DT) of local optima. At first glance, this seems to contradict Pushak & Hoos (2022), in which the authors found that almost all landscapes they studied are nearly unimodal. However, when taking a closer look at the local optima in our landscapes, we find that they usually feature a small basin of
Figure 2: 2D visualization of HP loss landscapes of 6 ML models under different scenarios: (a) $L_{\text{test}}$ landscape on baseline datasets (44059 for tree-based models, CIFAR-10 for CNN, protein structure for FCNet), (b) $L_{\text{train}}$ landscape on baseline datasets, (c) Low-fidelity $L_{\text{test}}$ landscape on baseline datasets, (d) $L_{\text{test}}$ landscape on different datasets (44143 for tree-based models, Fashion-MNIST for CNN, slice localization for FCNet). Colors indicate $R(L)$ (lower rank values are better).
Figure 3: Distribution of FLA metrics introduced in Table 1 for each model across all datasets for landscapes of 1) $L_{\text{test}}$, 2) $L_{\text{train}}$ and 3) $L_{\text{testLF}}$.
attraction (Figure 3 (f)). This makes them relatively ‘shallow’, and thus they would not pose significant obstacles to optimization. However, beyond the results in Figure 3, we find that FCNet landscapes on the 4 UCI datasets possess 24 to 347 local optima, with $\bar{s}_B$ up to 2,372 (Appendix D), implying strong attraction for optimizers. Pushak & Hoos have also reported similar observations on these four landscapes, and they speculated that the reason could be that these scenarios fall into the over-parameterized regime. While we agree with this reasoning, we conduct further analysis of the local optima using the local optima network (LON) (Ochoa et al. (2008), Appendix B.3). We find that despite the presence of many other local optima, the global optimum still plays a pivotal role in the connectivity of the LON (Appendix D). Therefore, an optimizer trapped in a local optimum can eventually escape to the global optimum via certain strategies (e.g., a perturbation), though this may take additional effort.
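Local optima and their basins of attraction can be enumerated on the evaluated grid by running best-improvement local search from every configuration, as in the following sketch (configurations are assumed hashable tuples; `neighbors` and `loss` are again assumed grid lookups).

```python
def local_optimum_of(cfg, neighbors, loss):
    """Best-improvement local search: follow the steepest improving neighbor."""
    while True:
        best = min(neighbors(cfg), key=loss, default=cfg)
        if loss(best) >= loss(cfg):
            return cfg                    # no improving neighbor: local optimum
        cfg = best

def basins(configs, neighbors, loss):
    """Map each local optimum to the configurations in its basin."""
    basin = {}
    for cfg in configs:
        basin.setdefault(local_optimum_of(cfg, neighbors, loss), []).append(cfg)
    return basin
```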
Highly neutral; planar around the optimum. As depicted in Figure 3 (d), we can clearly see that HP loss landscapes often exhibit high neutrality. This indicates that a large portion of 1-bit
Figure 4: Distribution of Spearman, Shake-up and $\gamma$-set metrics between (a) $L_{\text{test}}$ and $L_{\text{train}}$, (b) $L_{\text{test}}$ and $L_{\text{testLF}}$, (c) $L_{\text{test}}$ across datasets. Medians are labeled beside each plot.
moves in the landscape result in only subtle changes in $L_{\text{test}}(\lambda)$ (i.e., $\leq 1\%$). We postulate that a major reason for this is the low effective dimensionality (Bergstra & Bengio, 2012) of HPO problems: usually only a small subset of all available HPs have an obvious influence on performance. Although landscape neutrality can vary considerably with the choice of which HPs to analyze and their respective values, considering that we have already removed totally unimportant HPs from the search space, moves with subtle performance shifts can actually be more prevalent than one may expect. This phenomenon is more pronounced in the well-performing regions, as illustrated by the high NDC values in Figure 3 (c). It suggests that as we move closer to the global optimum, we are more likely to encounter neutral moves, and the landscape becomes flatter. This is in line with Probst & Boulesteix (2017); Pimenta et al. (2020) and practical experience: the gain from tuning HPs usually decreases progressively as the best reachable performance is approached. Such a property also poses challenges to optimizers, as there is little gradient information that can be utilized for navigating towards fitter configurations (Muñoz et al., 2015).
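The neutrality measure itself is straightforward to sketch: the fraction of 1-HP moves whose relative loss change stays within a tolerance. The `edges` iterator and the 1% tolerance below are assumptions matching the description above, not our exact implementation.

```python
def neutral_ratio(edges, loss, tol=0.01):
    """Fraction of neighboring config pairs whose losses differ by <= tol."""
    neutral = total = 0
    for a, b in edges:                    # edges: all 1-HP neighbor pairs
        la, lb = loss(a), loss(b)
        total += 1
        if abs(la - lb) <= tol * max(abs(la), abs(lb), 1e-12):
            neutral += 1
    return neutral / max(total, 1)
```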
Overall, despite certain exceptions, we see that the family of HP loss landscapes tends to share various high-level properties, whereas the exact topologies vary with models. This explains why, in practice, one can often expect an optimizer to work relatively robustly across a wide range of scenarios. In addition, most properties here also seem to generalize to NAS problems (Appendix C), except that we find the NAS landscapes tend to have lower neutrality and more local optima.
4.2 Training and Test HPO Landscapes
Figure 2 (a) and (b) provide a visual comparison of the $L_{\text{train}}$ and $L_{\text{test}}$ landscapes for different models. We can infer from the plots that the structural characteristics of the $L_{\text{train}}$ landscapes largely mirror our previously discussed properties. On the other hand, for the performance rankings, we notice that $L_{\text{train}}$ generally correlates with $L_{\text{test}}$ for RF, DT, LGBM and CNN, whereas for XGBoost, there are significant shifts in performance between the two cases. We further verify these observations quantitatively using the FLA metrics and the landscape similarity metrics introduced in Section 2.
**Structural characteristics.** From Figure 3, we can clearly see that the landscape characteristics of $L_{\text{train}}$ and $L_{\text{test}}$ are highly consistent for most studied scenarios. More specifically, it is surprising to see that $L_{\text{train}}$ landscapes tend to yield relatively higher $L_{\text{ast}}$ and $\rho_a$, suggesting a smoother and more structured terrain. Meanwhile, the NDC values are lower in the training scenarios. These observations imply that $L_{\text{train}}$ landscapes are even more benign than $L_{\text{test}}$ landscapes. In addition, we find that the $\bar{u}$, $n_{lp}$ and $\bar{s}_g$ values rarely change between $L_{\text{train}}$ and $L_{\text{test}}$ landscapes. Notably, the local optima found in $L_{\text{train}}$ and $L_{\text{test}}$ landscapes are almost (if not entirely) identical. These indicate that the relative performance in local neighborhoods tends to remain similar in the two cases, despite the variations in their numerical values and the global performance rankings.
**Landscape similarity in terms of performance rankings.** We quantified the similarity between all pairs of $L_{\text{train}}$ and $L_{\text{test}}$ landscapes for all 5 models using the three metrics introduced in Section 2 as shown in Figure 4 (a). Overall, we observe that $R(L_{\text{train}})$ and $R(L_{\text{test}})$ are globally correlated for all models except XGBoost, as indicated by the significant $\rho_s$ values (median > 0.7) and low Shake-up metrics (median < 0.15). However, when zooming into the top-10% regions, we find that...
the majority of our studied scenarios reveal low $\gamma$-set similarities. It indicates that the generalization gap is larger in prominent regions where configurations are highly adapted to the training set. This phenomenon is more severe for XGBoost, where the median $\gamma$-set similarity is only 0.07, and there is also a poor $\rho_s$ value (median = 0.34) and high Shake-up score (median = 0.25).
In order to gain more insight into such generalization gaps for XGBoost, we create scatter plots of $L_{\text{test}}$ versus $L_{\text{train}}$ on dataset #44059, as shown in Figure 5 (a). We decompose the pattern into two modes: in the first mode, $L_{\text{test}}$ correlates strongly with $L_{\text{train}}$ as it decreases, and the models in this stage underfit the data. In the second mode, as points struggle to move further along the $x$-axis ($L_{\text{train}}$), they stagnate or even increase significantly on the $y$-axis ($L_{\text{test}}$), indicating strong evidence of overfitting. In particular, we can see a plateauing trend near the $x$-axis, where some models overly excel on the training data but perform poorly on the test set.
To further investigate which kinds of configurations are likely to lead to overfitting, we color the points with respect to their HP values, as shown in Figure 5 (b-e). The generated plots demonstrate clear patterns between the value of each HP and the resulting performance. In particular, we find that learning rate, max depth and subsample have a significant impact on $\Delta L$. However, the generalizability of a learner is not determined by any single one of them; instead, it depends on their cumulative interactions. For example, the largest $\Delta L$s are observed for learners that feature a large learning rate and deep base trees combined with a low subsample rate, but none of these HP settings alone necessarily leads to the worst-case performance. In addition, we notice that such generalization gaps are also related to dataset characteristics and are weakly correlated across models; we discuss this matter further in Appendix E.
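A minimal sketch of this per-HP gap analysis is given below, assuming a DataFrame with one row per configuration holding its HP values and train/test losses; column names are illustrative.

```python
import pandas as pd

def gap_by_hp(df: pd.DataFrame, hp, train_col="L_train", test_col="L_test"):
    """Summarize the generalization gap (test - train loss) per HP value."""
    df = df.assign(gap=df[test_col] - df[train_col])
    return df.groupby(hp)["gap"].describe()  # gap statistics per HP value
```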
### 4.3 HPO Landscapes with Different Fidelities
Figure 2 (c) shows the low-fidelity test loss landscapes (denoted as $L_{\text{testLF}}$) for each model (using 10 epochs for FCNet, and 10% of the training data for the others). From the plots, we can see that $L_{\text{testLF}}$ landscapes are highly consistent with $L_{\text{test}}$ in terms of both structural characteristics and performance rankings. More specifically, as reflected in Figure 3, all measured FLA metrics of $L_{\text{testLF}}$ landscapes show little difference compared to $L_{\text{test}}$ landscapes across all studied scenarios. For performance rankings, Figure 4 (b) depicts the distribution of the 3 similarity indicators between $L_{\text{testLF}}$ and $L_{\text{test}}$ across all datasets for each model. We observe a high Spearman correlation (median > 0.85) between $L_{\text{test}}$ and $L_{\text{testLF}}$ for all models, and the $\gamma$-set similarities between the top-10% configurations are also prominent, with medians larger than 60%. These imply that $R(L_{\text{test}})$ and $R(L_{\text{testLF}})$ are highly consistent for the majority of our studied scenarios and that there is a large overlap between the promising regions of the two landscapes. In addition, the Shake-up scores yield low values (median < 0.1), suggesting that on average the difference between $R(L_{\text{test}})$ and $R(L_{\text{testLF}})$ is less than 10%. Additional results on FCNet and NAS-Bench-101, in Appendix D and Appendix C respectively, are also consistent with our findings here.
### 4.4 HPO Landscapes Across Datasets
Figure 2 (d) shows the $L_{\text{test}}$ landscapes for each model on a different dataset. From the figure, it is exciting to see that the high-level topography of the HP loss landscapes is preserved when transferring to a new task. In particular, we find that the top regions in the original landscape generally retain
their positions, despite changes in their exact contours. The FLA metrics we previously saw in Figure 3 support this observation, from which we have been able to draw a unified picture of the characteristics of HP loss landscapes. In addition, from the similarity metrics reported in Figure 4 (c), we can infer that the measured performance reveals clear Spearman correlations (median > 0.65) across datasets. More importantly, the overlap between well-performing regions, as indicated by the γ-set similarity, also achieves medians around 40%. It is also intriguing that although datasets #45041 (9K instances and 255 features) and #45047 (1M instances and 5 features) seem totally different, they reveal \( \rho_s > 0.7 \) and \( \gamma \)-set similarity > 40% for all 4 tree-based models.
In addition to performance rankings, Figure 6 illustrates the contribution of each HP and their interactions to model performance, assessed by the functional ANOVA method. The results indicate that certain (combinations of) HPs are typically important across many datasets for a given model. For example, learning rate consistently contributes a large portion of the variance in model performance for LightGBM, and its interactions with the number of leaves and estimators are also important. These observations align with van Rijn & Hutter (2018), who also conclude, by analyzing meta-data from the OpenML platform, that certain HPs of an ML model are important across a wide spectrum of datasets.
As discussed in Section 4.1, HP loss landscapes often involve a large number of non-improvement moves, especially near the optimum. We also see a clear division between the promising regions and the poorly performing ones. Therefore, leveraging prior knowledge from previous tasks should greatly expedite the search process, e.g., by warm-starting HPO from good configurations or by carefully selecting candidate HPs and crafting the search space. More importantly, based on our results, we note that this should not be limited to similar tasks defined under certain rules, since such tasks may not always be available. As shown above, seemingly different tasks can still provide useful information. Our additional results for FCNet on 4 datasets, and for NAS-Bench-201 across CIFAR-10/100 as well as ImageNet (Appendix C), also reveal similarly transferable conclusions.
5 DISCUSSIONS AND CONCLUSIONS
By conducting a large-scale exploratory analysis of 1,500 HP landscapes of 6 ML models with over 11M model configurations under different scenarios, this paper reveals a unified portrait of their topographies in terms of smoothness, neutrality and modality. We also show that these properties are highly transferable across datasets and fidelities, and thus provide fundamental evidence for the effectiveness of transfer and multi-fidelity methods, which in previous practice was mainly supported by intuition. However, while our findings hold for the majority of studied scenarios, we do observe some exceptions. For example, most landscapes inspected reveal a nearly unimodal structure, but some can have dozens to a few hundred local optima with non-negligible basin sizes (e.g., FCNet). Also, there are cases where landscapes at lower fidelities or on a different task reveal very different patterns, as shown by the long tails of the similarity distributions in Figure 4. Further exploration interrogating the relationship with dataset characteristics may provide an even more comprehensive understanding of the HPO landscape.
The FLA framework developed in this work has shown great potential for enabling both qualitative and quantitative understanding of a wider range of AutoML problems. While it currently relies on large-scale, pre-evaluated data points for landscape construction, we believe a promising direction is to integrate it with existing AutoML frameworks to allow on-the-fly analysis of problem landscapes, thereby making it accessible to a broader range of stakeholders.
REFERENCES
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In *KDD’19: Proc. of the 25th ACM SIGKDD International Conference on Knowledge Discovery*, pp. 2623–2631. ACM, 2019.
Noor H. Awad, Neeratyoy Mallik, and Frank Hutter. DEHB: evolutionary hyperband for scalable, robust and efficient hyperparameter optimization. In *IJCAI’21: Proc. of the Thirtieth International Joint Conference on Artificial Intelligence*, pp. 2147–2153. ijcai.org, 2021.
Archit Bansal, Danny Stoll, Maciej Janowski, Arber Zela, and Frank Hutter. Jahs-bench-201: A foundation for research on joint architecture and hyperparameter search. In *NeurIPS*, 2022.
Rémi Bardenet, Mátyás Brendel, Balázs Kégl, and Michèle Sebag. Collaborative hyperparameter tuning. In *ICML’13: Proc. of the 30th International Conference on Machine Learning*, volume 28 of *JMLR Workshop and Conference Proceedings*, pp. 199–207. JMLR.org, 2013.
Mikhail Belkin, Daniel J. Hsu, and Partha Mitra. Overfitting or perfect fitting? risk bounds for classification and regression rules that interpolate. In *NIPS’18: Proc. of Advances in Neural Information Processing Systems*, pp. 2306–2317, 2018.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. *J. Mach. Learn. Res.*, 13:281–305, 2012.
James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In *NIPS’11: Proc. of the 25th Annual Conference on Neural Information Processing Systems*, pp. 2546–2554, 2011.
Andre Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. CAVE: configuration assessment, visualization and evaluation. In *LION’18: Proc. of the 12th International Conference*, volume 11353, pp. 115–130. Springer, 2018.
Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, and Marius Lindauer. Hyperparameter optimization: Foundations, algorithms, best practices, and open challenges. *WIREs Data. Mining. Knowl. Discov.*, 13(2), 2023.
Leo Breiman. Random forests. *Mach. Learn.*, 45(1):5–32, 2001.
Charles L Brooks III, José N Onuchic, and David J Wales. Taking a walk on a landscape. *Science*, 293(5530):612–613, 2001.
Rich Caruana, Steve Lawrence, and C. Lee Giles. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In *NIPS’00: Proc. of Advances in Neural Information Processing Systems*, pp. 402–408. MIT Press, 2000.
Gavin C Cawley. Model selection for support vector machines via adaptive step-size tabu search. In *Proc. of the International Conference in Artificial Neural Nets and Genetic Algorithms*, pp. 434–437. Springer, 2001.
Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In *SIGKDD’16: Proc. of the 22nd ACM International Conference on Knowledge Discovery and Data Mining*, pp. 785–794. ACM, 2016.
Krishna Teja Chitty-Venkata, Murali Emani, Venkatram Vishwanath, and Arun K. Somani. Neural architecture search benchmarks: Insights and survey. *IEEE Access*, 11:25217–25236, 2023.
J Arjan GM De Visser and Joachim Krug. Empirical fitness landscapes and the predictability of evolution. *Nat. Rev. Genet.*, 15(7):480–490, 2014.
Andrew Draganov, Jakob Rødsgaard Jørgensen, Katrine Scheel, Davide Mottin, Ira Assent, Tyrus Berry, and Çigdem Aslay. Actup: Analyzing and consolidating tsne and UMAP. In *IJCAI’23: Proc. of the 32nd International Joint Conference on Artificial Intelligence*, pp. 3651–3658. ijcai.org, 2023.
|
9DvDRTTdlu
|
Reflection is an issue in NeRF if not handled well, and the TensoRF backbone doesn't model it. However, it seems that the car example in Figure 4 demonstrates vivid reflections; is this true? If so, should we give the credit to the refinement layer?
|
ED-NeRF: Efficient Text-Guided Editing of 3D Scene With Latent Space NeRF
Jangho Park2*, Gihyun Kwon3*, Jong Chul Ye1,2,3
Kim Jaechul Graduate School of AI1, Robotics Program2,
Department of Bio and Brain Engineering3, KAIST
{jhg1234,cyclomon,jong.ye}@kaist.ac.kr
Abstract
Recently, there has been a significant advancement in text-to-image diffusion models, leading to groundbreaking performance in 2D image generation. These advancements have been extended to 3D models, enabling the generation of novel 3D objects from textual descriptions. This has evolved into NeRF editing methods, which allow the manipulation of existing 3D objects through textual conditioning. However, existing NeRF editing techniques have faced limitations in their performance due to slow training speeds and the use of loss functions that do not adequately consider editing. To address this, here we present a novel 3D NeRF editing approach dubbed ED-NeRF by successfully embedding real-world scenes into the latent space of the latent diffusion model (LDM) through a unique refinement layer. This approach enables us to obtain a NeRF backbone that is not only faster but also more amenable to editing compared to traditional image space NeRF editing. Furthermore, we propose an improved loss function tailored for editing by migrating the delta denoising score (DDS) distillation loss, originally used in 2D image editing to the three-dimensional domain. This novel loss function surpasses the well-known score distillation sampling (SDS) loss in terms of suitability for editing purposes. Our experimental results demonstrate that ED-NeRF achieves faster editing speed while producing improved output quality compared to state-of-the-art 3D editing models. Code and rendering results are available at our project page.
1 Introduction
In recent years, the development of neural implicit representations for embedding three-dimensional scenes in neural networks has seen remarkable progress. This advancement has made it possible to render images from all angles using only a limited set of training viewpoints. Starting with the seminal work known as the Neural Radiance Field (NeRF) (Mildenhall et al., 2021), which trained radiance fields using a simple MLP network, various improved techniques (Barron et al., 2021; Reiser et al., 2021; Müller et al., 2022) based on advanced network architectures or modified encodings have been proposed. Alternatively, several methods (Sun et al., 2022; Fridovich-Keil et al., 2022; Karnewar et al., 2022; Chen et al., 2022) proposed to directly optimize voxel points serving as sources for rendering, bypassing the traditional approach of encapsulating all information within implicit networks. These methods have gained prominence for their ability to train radiance fields in a remarkably short time. In addition to representing existing 2D image data in 3D space, recent research has explored expanded approaches for generating entirely novel 3D objects. With the emergence of text-to-image embedding models like CLIP (Radford et al., 2021), various methods have been proposed to train implicit networks that can generate new objects solely from text prompts (Jain et al., 2022). This trend has accelerated with the advent of text-to-image diffusion models such as Stable Diffusion (Rombach et al., 2022), particularly through score distillation sampling (SDS) (Poole et al., 2022), which conveys the representation of the text-to-image model to the NeRF model.
*equally contributed
https://jhq1234.github.io/ed-nerf.github.io/
Figure 1: Qualitative results of our method. ED-NeRF successfully edited 3D scenes with given target text prompts while preserving the original object structure and background regions.
However, the challenge of editing pre-trained 3D implicit networks according to specific conditions still remains an open problem due to the constraints of the task: maintaining the integrity of the original 3D scene while making the desired modifications. As initial works, several approaches (Wang et al., 2022; 2023a) tried to edit pre-trained NeRF models based on text conditions, utilizing the pre-trained CLIP model to fine-tune the parameters of the NeRF models. Nevertheless, these methods exhibit notable weaknesses, including the performance limitations of the CLIP model itself and the need to render high-resolution images during training, which results in significant time consumption.
Recently, several editing methods proposed to leverage the enhanced expressiveness of text-to-image diffusion models such as Stable Diffusion. Some methods (Sella et al., 2023) proposed to directly employ the score distillation sampling method, with additional regularizations. However, these methods suffer from significant time consumption and instability in generation performance due to the requirement of full-resolution rendering in the training stage and limitations of the score distillation loss itself. Other alternative approaches (Haque et al., 2023) proposed to directly manipulate the training images of NeRF using text-guided image translation models. This method aims to enable the generation of 3D images corresponding to text conditions. However, it suffers from a significant drawback in terms of training time, as it requires periodic translation of training images during the training process.
To address these challenges, we are interested in developing a novel NeRF editing method that efficiently and effectively edits 3D scenes using only text prompts. To achieve this, we enable NeRF to operate directly in the latent space, similar to Latent-NeRF (Metzer et al., 2023), which helps reduce time and computational costs. However, naively rendering the latent features of real-world scenes directly with NeRF may lead to a significant drop in view synthesis performance due to the lack of geometric consistency in the latent space. To tackle this issue, we conduct an analysis of the latent generation process and, based on it, propose a novel refinement layer to enhance performance. Furthermore, to address the drawbacks of existing SDS-based methods for editing, we propose a new sampling strategy by extending Delta Denoising Score (DDS) (Hertz et al., 2023), a 2D image editing technique based on score distillation sampling, into the 3D domain. This extension allows us to achieve high-performance editing while keeping computational costs affordable, even with large diffusion models such as Stable Diffusion. Given the superior editing proficiency of our approach, we name it ED-NeRF (EDiting NeRF).
2 RELATED WORK
Starting from the Neural Radiance Field (NeRF) (Mildenhall et al., 2021), various approaches have represented three-dimensional scenes with neural fields. Due to the slow training speed, several approaches tried to improve performance by modifying the network architecture or training strategy (Barron et al., 2021; Müller et al., 2022; Reiser et al., 2021). Several methods that do not rely on neural networks have shown great acceleration, including directly optimizing voxel fields (Sun et al., 2022; Fridovich-Keil et al., 2022; Chen et al., 2022; Karnewar et al., 2022) or decomposing the components of the field representation. Building on the success of these techniques, methods for generating ‘novel’ 3D scenes have been proposed. In particular, with the emergence of the text-to-image embedding model CLIP (Radford et al., 2021), DreamField (Jain et al., 2022) leveraged CLIP to train a NeRF model for novel 3D object synthesis. Recently, the performance of text-to-image diffusion models has enabled remarkable improvements in 3D generation. Starting from DreamFusion (Poole et al., 2022), several methods (Metzer et al., 2023; Liu et al., 2023b; Xu et al., 2023) showed impactful results using diffusion-based priors. However, these methods are limited to generating ‘novel’ 3D objects and therefore cannot be applied to our case of NeRF editing, which modifies existing 3D scenes according to given conditions.
Compared to novel object generation, NeRF editing is still a relatively unexplored field due to the complexity of the task. As foundational work, several methods focused on color or geometric editing (Yuan et al., 2022; Liu et al., 2021; Kuang et al., 2023). Other works tried style or appearance transfer on 3D neural fields (Zhang et al., 2022; Liu et al., 2023a; Bao et al., 2023) and showed promising results. Incorporating the CLIP model, several approaches (Wang et al., 2022; 2023a; Song et al., 2023) tried to modify pre-trained NeRFs towards given text conditions. Although they produce pleasing results, these methods still have limitations in detailed expression due to the limitations of the CLIP model itself.
Similar to the novel scene generation case, the development of text-to-image diffusion models brought significant improvements to the editing field. Starting from the Score Distillation Sampling method proposed in DreamFusion, Vox-E (Sella et al., 2023) edits pre-trained voxel fields with additional regularization. As an alternative, Instruct-NeRF2NeRF (Haque et al., 2023) proposed to directly leverage 2D image translation models to change the attributes of 2D images for NeRF training. However, these methods still have limitations due to excessive training time or unstable editing caused by their loss functions. To address these problems, we propose an efficient editing method with novel latent-space NeRF training and an improved edit-friendly loss function.
3 METHODS
Figure 2 provides an overview of ED-NeRF training. First, we optimize NeRF in the latent space of Stable Diffusion. To do this, we encode all images using a pre-trained Variational Autoencoder (VAE) to obtain feature vectors and guide NeRF to predict these feature vectors directly. We also introduce an additional refinement layer, which enhances the novel view synthesis performance of NeRF (Fig. 2(a)). At the inference stage, the latent NeRF can render a natural image by decoding the rendered latent map (Fig. 2(b)). At the editing phase, utilizing DDS, we adjust the parameters of both NeRF and the refinement layer to align the 3D scene with the provided target text (Figure 3). The detailed pipeline is outlined in the following sections.
3.1 ED-NeRF for 3D Scene Editing
NeRF [Mildenhall et al., 2021] uses MLPs to predict density $\sigma$ and color $c$ for a given 3D point coordinate $x = (x, y, z)$ and view direction $d$. Through positional encoding $\gamma(\cdot)$, $x$ and $d$ are mapped into high-frequency vectors, and then fed into the neural network of NeRF, resulting in two outputs: density $\sigma \in \mathbb{R}$ and color $c \in \mathbb{R}^3$.
$$
(c, \sigma) = F_\theta(\gamma(x), \gamma(d))
$$
Through the volume rendering equation (Eq. (2)), NeRF predicts the pixel color along the camera ray $r(t) = o + td$, where $t$ represents the depth within the range $[t_{near}, t_{far}]$, $o$ is the camera position,
Figure 2: Overall pipeline of the training and inference stages. (a) We optimize ED-NeRF in the latent space, supervised by source latents. Naively matching NeRF to a latent feature map during optimization can degrade view synthesis quality. (b) Inspired by the embedding process of Stable Diffusion, we integrate additional ResNet blocks and self-attention layers as a refinement layer. (c) All 3D scenes are decoded by the Decoder when ED-NeRF renders a novel-view feature map.
and \( d \) represents the view direction:
\[
\hat{C}(r) = \int_{t_n}^{t_f} T(t)\sigma(r(t))c(r(t),d)dt, \text{ where } T(t) = \exp \left( -\int_{t_n}^{t} \sigma(r(s))ds \right).
\]
Optimizing NeRF to render the latent feature values of the latent diffusion model offers several advantages in text-guided 3D generation. These advantages include a reduced training burden due to the decreased dimensionality of the space, and enhanced editability of the NeRF model, as the rendered outputs can be directly employed as input to the latent diffusion models. The concept of migrating NeRF to the latent space was first proposed by Latent-NeRF (Metzer et al., 2023), in which NeRF is directly trained with latent features rather than RGB colors. It can therefore render a 3D scene without the encoding process during optimization when using the latent diffusion model as a semantic knowledge prior. However, this work focuses exclusively on generating ‘virtual’ 3D assets without supervision, making it unsuitable for real-world scenes.
Thus, ED-NeRF is realized via a novel latent-NeRF training pipeline for synthesizing real-world scenes in the latent space. As depicted in Figure 2, given a real-world dataset of multi-view images $I = \{I^i\}_{i=1}^N$, we can encode all images into the latent space of Stable Diffusion via the encoder to obtain the features $z^i = E(I^i) \in \mathbb{R}^{64 \times 64 \times 4}$. After embedding all images, we use the latent feature maps $z := \{z^i\}_{i=1}^N$ as the label dataset for ED-NeRF training with the loss function:
\[
L_{rec} = \sum_{r \in R} \| Z^i(r) - \hat{Z}^i(r) \|^2
\]
where \( Z^i \) denotes the pixel latent value of the latent \( z^i \) and \( \hat{Z}^i(r) \) is rendered by the volume rendering equation:
\[
\hat{Z}^i(r) = \int_{t_n}^{t_f} T(t)\sigma(r(t))f_z(r(t),d)dt, \text{ where } T(t) = \exp \left( -\int_{t_n}^{t} \sigma(r(s))ds \right).
\]
where \( f_z \in \mathbb{R}^4 \) denotes the predicted feature value by the neural network, taking \( \gamma(x) \) and \( \gamma(d) \) as input:
\[
(f_z, \sigma) = F_\theta(\gamma(x), \gamma(d))
\]
By minimizing the loss Eq. (3) to update the parameters of the neural network \( F_\theta \), we obtain a novel ED-NeRF model optimized in the latent space of the Stable Diffusion.
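As a concrete illustration, a minimal PyTorch sketch of constructing the latent supervision targets and the loss of Eq. (3) is given below. The checkpoint name, the latent scaling, and the interface of the NeRF renderer are assumptions on our part; only the VAE-encoding pattern follows the standard diffusers API.

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

# Frozen SD VAE used only to produce latent supervision targets.
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae").eval()

@torch.no_grad()
def encode_views(images):
    """images: (N, 3, 512, 512) in [-1, 1] -> latents: (N, 4, 64, 64)."""
    latents = vae.encode(images).latent_dist.mean
    return latents * vae.config.scaling_factor  # standard SD latent scaling

def rec_loss(rendered_latent, target_latent):
    """Eq. (3): MSE between NeRF-rendered and encoded latent maps."""
    return F.mse_loss(rendered_latent, target_latent)
```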
3.2 Refinement Layer based on Latent Feature Analysis
When naively matching the latents generated by Eq. (3), we observed that the reconstruction performance significantly deteriorated. To address this issue, we analyzed the Encoder \( E \) and Decoder \( D \) of Stable Diffusion and discovered the following insights:
Figure 3: Expanding DDS into 3D for ED-NeRF editing. Pretrained ED-NeRF renders the target latent feature map, and a scheduler of the denoising model perturbs it to the sampled time step. Concurrently, the scheduler adds noise to the source latent using the same time step. Each of them is fed into the denoising model, and the DDS is determined by subtracting two different SDS scores. In combination with a binary mask, masked DDS guides NeRF in the intended direction of the target prompt without causing unintended deformations.
1) The encoder and decoder consist of ResNet blocks and self-attention layers. Therefore, during the process of mapping the image to the latent space and forming a feature map, pixel values interact with each other, primarily through the ResNet and self-attention layers. Thus, the latent and image pixels are not directly aligned.
2) When NeRF renders a single pixel value from the latent feature map, each ray independently passes through an MLP to determine the pixel value of the feature map. Therefore, the feature value rendered by NeRF for a single pixel is determined without interactions with other pixels.
Based on this analysis, we find that the reason for the degraded reconstruction performance of the latent NeRF lies in neglecting the interactions mentioned above. Therefore, we aim to incorporate the inter-pixel interactions introduced by the ResNet and self-attention layers into the ED-NeRF rendering stage. Fortunately, in the Encoder and Decoder of Stable Diffusion, the embedded feature maps pass through self-attention layers at the same dimension, allowing us to attach additional attention layers directly. Taking advantage of this, we design a refinement layer $F_\phi(\cdot)$ as shown in Figure 2, without any dimension change between the input and output vectors. Let $\tilde{Z}^i$ be the pixel latent value of the refined feature map $\tilde{z}^i$, where $\tilde{z}^i = F_\phi(\hat{z}^i)$. We can then define a refined reconstruction loss function as follows:
$$L_{ref} = \sum_{r \in R} \| Z^i(r) - \tilde{Z}^i(r) \|^2 , \text{where } \tilde{z}^i = F_\phi(\hat{z}^i)$$
Ultimately, we can formulate the total training loss as the sum of the refinement loss $L_{ref}$ and the reconstruction loss $L_{rec}$, as follows.
$$L_{tot} = \lambda_{rec} L_{rec} + \lambda_{ref} L_{ref}$$
We update the NeRF and refinement layer, denoted as $F_\theta$ and $F_\phi$, concurrently by minimizing the total loss $L_{tot}$ to reconstruct latent vectors from various views. To ensure stable learning, we train with $\lambda_{rec}$ set to 1.0 and $\lambda_{ref}$ set to 0.1 during the initial stages of training. Beyond a specific iteration threshold, we set $\lambda_{rec}$ to 0 to encourage the refinement layer to focus more on matching the latent representations.
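To make the architecture concrete, the following is a minimal PyTorch sketch of one possible refinement layer matching the description above (a ResNet block followed by self-attention, shape-preserving on the 64×64×4 latent); the channel width, depth, and head count are illustrative assumptions, not our exact configuration.

```python
import torch
import torch.nn as nn

class RefinementLayer(nn.Module):
    def __init__(self, ch=4, hidden=64, heads=4):
        super().__init__()
        self.inp = nn.Conv2d(ch, hidden, 3, padding=1)
        self.res = nn.Sequential(                     # ResNet block
            nn.GroupNorm(8, hidden), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.GroupNorm(8, hidden), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1))
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.out = nn.Conv2d(hidden, ch, 3, padding=1)

    def forward(self, z):                             # z: (B, 4, 64, 64)
        h = self.inp(z)
        h = h + self.res(h)                           # residual connection
        b, c, H, W = h.shape
        seq = h.flatten(2).transpose(1, 2)            # (B, H*W, C)
        seq = seq + self.attn(seq, seq, seq)[0]       # inter-pixel interactions
        h = seq.transpose(1, 2).reshape(b, c, H, W)
        return z + self.out(h)                        # refined latent
```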
3.3 Editing ED-NeRF via Delta Denoising Score
After optimizing ED-NeRF in the latent space, it is possible to directly employ the latent diffusion model to update the ED-NeRF parameters via the rendered latent map $z$ in the direction of the target text prompt $y_{trg}$. The most well-known method for text-guided NeRF updates is Score Distillation Sampling (SDS), which directly transfers the score estimation output as a gradient for NeRF training:
$$\nabla_\theta \mathcal{L}_{SDS}(z, y_{trg}, \epsilon, t) = \omega(t)(\epsilon_\psi(z_t, y_{trg}, t) - \epsilon)\frac{\partial z_t}{\partial \theta}$$
However, in our NeRF editing case, the SDS update rule often exhibits several problems, including color saturation and mode-seeking (Wang et al., 2023b). We conjecture that these problems originate from the properties of the score estimation itself. Since the target noise $\epsilon$ is pure Gaussian, the score difference is not aware of any prior knowledge of the source images. Therefore, the generated outputs simply replace the source content with hallucinated objects, without consideration of the source NeRF.
To solve this problem of SDS, we focus on the recently proposed 2D editing method of Delta Denoising Score (DDS) (Hertz et al., 2023). The major difference between SDS and DDS is that the distilled score is the difference between the denoising scores of the target and the source. As shown in Eq. (9), DDS can be formed as the difference between two SDS scores conditioned on two different text prompts:
$$\nabla_\theta \mathcal{L}_{DDS} = \nabla_\theta \mathcal{L}_{SDS}(\hat{z}, y_{trg}) - \nabla_\theta \mathcal{L}_{SDS}(z, y_{src}),$$
where $z$ is the source latent, $\hat{z}$ is the rendered target latent, $y_{trg}$ represents the target text embedding, and $y_{src}$ represents the reference text embedding. DDS guides the optimized latent from the source prompt towards the target prompt without the influence of the pure noise component; therefore, it can easily edit 2D images.
We aim to extend this manipulation capability of DDS to 3D space, as shown in Fig. 3. As we already have the embedded source latent $z^i$ for the $i$-th camera pose, we can directly use it as the source component of DDS. To fine-tune the model, we render the edited output $\hat{z}^i$ from the same $i$-th camera pose. With the paired latents, we add the same sampled noise $\epsilon$ at the noise scale of timestep $t$ to both the source and edited latents, obtaining the noisy latents $z_t^i$ and $\hat{z}_t^i$. Then we apply the diffusion model to obtain estimated score outputs from the noisy latents using different text conditions for the source and edited images. As in Eq. (9), we use the difference between the two outputs as a gradient for updating the NeRF parameters. In this step, we simultaneously train the NeRF parameters $\theta$ and the refinement parameters $\phi$, as this showed better editing quality. Therefore, with a random $i$-th camera pose, our 3D DDS is formulated as:
$$\nabla_{\theta,\phi} \mathcal{L}_{DDS} = \nabla_{\theta,\phi} \mathcal{L}_{SDS}(\hat{z}^i, y_{trg}) - \nabla_{\theta,\phi} \mathcal{L}_{SDS}(z^i, y_{src}).$$
Although the DDS formulation improves the performance, using vanilla DDS leads to excessive changes in unwanted areas and inconsistency between two different scenes. Therefore, we propose an additional binary mask for utilizing DDS in 3D images. The objective function that combines the binary mask $M$ and DDS is as follows:
$$\nabla_{\theta,\phi} \mathcal{L}_{MDDS} = M \cdot (\nabla_{\theta,\phi} \mathcal{L}_{DDS}),$$
where $\cdot$ denotes pixel-wise multiplication and $M$ is the conditional binary mask of the specific region that the target prompt should change. This mask is generated by utilizing off-the-shelf text-prompt segmentation models such as CLIPSeg (Lüddecke & Ecker, 2022) and SAM (Kirillov et al., 2023) to segment the target region from a text prompt.
Despite the use of a binary mask, the masked DDS loss $\nabla \mathcal{L}_{MDDS}$ updates all parameters of NeRF, potentially affecting even undesired areas. As a result, relying solely on the masked DDS loss may inadvertently result in alterations beyond the mask boundaries. Hence, we introduce the following additional reconstruction loss to mitigate undesired deformations beyond the mask.
$$\mathcal{L}_{Mrec} = \lambda_{im} \cdot M \cdot \mathcal{L}_{rtot} + \lambda_{om} \cdot (1 - M) \cdot \mathcal{L}_{rtot}.$$
Finally, the total editing loss is as follows:
$$\mathcal{L}_{tot} = \mathcal{L}_{MDDS} + \mathcal{L}_{Mrec}$$
By suppressing undesired alterations through the use of the masked reconstruction loss $\mathcal{L}_{Mrec}$, our total editing objective function updates NeRF and refinement layer $F_\theta$ and $F_\phi$, ensuring NeRF renders novel views in accordance with the desired text conditions.
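A minimal PyTorch sketch of one masked-DDS update step (Eqs. (10)-(11)) is shown below; `unet` and `scheduler` are assumed to be the Stable Diffusion denoiser and noise scheduler from diffusers, and `y_src`/`y_trg` precomputed text embeddings. The timestep range and the way the gradient is injected are simplifying assumptions.

```python
import torch

def masked_dds_step(z_src, z_edit, mask, unet, scheduler, y_src, y_trg):
    """One update: z_src is the encoded source latent (Sec. 3.1), z_edit the
    currently rendered latent (carries grad through NeRF + refinement)."""
    t = torch.randint(20, 980, (1,), device=z_src.device)
    eps = torch.randn_like(z_src)               # the SAME noise for both latents
    z_src_t = scheduler.add_noise(z_src, eps, t)
    z_edit_t = scheduler.add_noise(z_edit.detach(), eps, t)
    with torch.no_grad():
        eps_src = unet(z_src_t, t, encoder_hidden_states=y_src).sample
        eps_trg = unet(z_edit_t, t, encoder_hidden_states=y_trg).sample
    grad = mask * (eps_trg - eps_src)           # Eq. (11): pure-noise term cancels
    # Inject grad into the NeRF/refinement parameters through z_edit.
    z_edit.backward(gradient=grad)
```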
4 EXPERIMENTAL RESULTS
4.1 BASELINE METHODS
To comprehensively evaluate the performance of our method, we perform comparative experiments against state-of-the-art methods. As CLIP-based text-guided editing methods, we use
Figure 4: **Comparison with baseline models.** ED-NeRF demonstrates outstanding performance in effectively altering specific objects compared to other models. Baseline methods often failed to maintain the region beyond the target objects and failed to guide the model towards the target text.
CLIP-NeRF (Wang et al., 2022) and NeRF-Art (Wang et al., 2023a). CLIP-NeRF encodes the images rendered by NeRF into the CLIP embedding space, allowing it to transform the images according to the text condition. As an improved method, NeRF-Art trains NeRF with various regularization functions to ensure that the CLIP-edited NeRF preserves the structure of the original NeRF. For fair comparison, we re-implemented these methods on the TensoRF backbone, referencing the official source code. For diffusion-based editing, we chose masked SDS (Poole et al., 2022) and Instruct-NeRF2NeRF (Haque et al., 2023) as methods that target local editing. In the masked SDS setting, we fine-tune the pre-trained NeRF by applying the basic SDS loss only to the masked regions so that the NeRF model is edited locally. Instruct-NeRF2NeRF (Haque et al., 2023) leverages the powerful generation capabilities of diffusion models to sequentially modify the entire dataset to align with text conditions, using the modified dataset as a new source for NeRF training. We utilized real-world image datasets, including the LLFF (Mildenhall et al., 2019) and IBRNet (Wang et al., 2021) datasets, as well as the human face dataset employed in Instruct-NeRF2NeRF (Haque et al., 2023).
### 4.2 Qualitative Results
**Text-guided editing of 3D scenes.** As shown in Figure 1, our method demonstrates the capability to edit various image types under different textual contexts. Specifically, it achieves effective transformation of specific objects without affecting other parts. Our baseline Instruct-NeRF2NeRF (Haque et al., 2023) shows decent results with high consistency between images and
Figure 5: Ablation studies. (a) If we only use DDS loss, the model fails to maintain the attribute of untargeted regions and often fails to reflect text conditions. (b) If we do not use masked reconstruction regularization, again the regions beyond the target objects are excessively changed. (c) If we remove the mask from DDS, unwanted artifacts occur in untargeted regions. (d) With removing the proposed refinement layer, the results become blurry as the backbone NeRF cannot fully embed real-world scenes. Our proposed setting can modify a specific region in a 3D scene and follow the target word without causing unwanted deformations.
text conditions, as well as view consistency across scenes. However, it faces limitations in accurately transforming specific objects to match text conditions and may introduce undesired alterations beyond the target objects. With masked SDS, the edited output fails to reflect the structure of the original NeRF scene and shows unwanted artifacts. In the case of NeRF-Art, the entire image is embedded into the CLIP space, so it does not inherently recognize and modify only specific objects; it therefore exhibits limitations in recognizing and altering specific objects. CLIP-NeRF also encodes the images rendered by NeRF into the CLIP embedding space, allowing it to transform the images according to the text condition, but its performance likewise falls short when it comes to altering specific parts. In contrast, our ED-NeRF exhibits powerful 3D scene editing abilities when specifying parts through text, surpassing the other models. It not only excels at changing objects but also faithfully modifies non-object regions, such as the ground, in accordance with the text condition.
4.3 Quantitative Results
**CLIP Directional Score.** To quantitatively measure editing performance, we report comparison results using the CLIP Directional score (Gal et al., 2021), which quantifies the alignment between textual caption modifications and corresponding image alterations. We rendered multiple-view images from NeRF and measured the average score over images. Compared to the baseline methods, our model obtains the best similarity score, indicating that our edited NeRF accurately reflects the target text conditions.
**User Study.** To further measure the perceptual preference of human subjects, we conducted an additional user study. For the study, we rendered images from the edited NeRFs using 5 different scenes from LLFF and IBRNet. We gathered feedback from 20 subjects aged between their 20s and 40s. Each participant was presented with randomly selected multi-view renderings from our model and the baselines and provided feedback through a preference-scoring survey. The minimum score is 1 and the maximum is 5, with five options: 1-very low, 2-low, 3-middle, 4-high, 5-very high. To measure editing performance, we asked three questions for each sample: 1) Does the image reflect the target text condition? (Text score) 2) Does the model accurately edit the target object? (Preservation) 3) Do the 3D scenes preserve view consistency? (View consistency). Table 1 shows the user study results. Compared with the baseline methods, our method achieved the best scores for text score and preservation, and the second best for view consistency. Overall, ours outperformed the baseline models in perceptual quality.
| Metrics | CLIP-NeRF | NeRF-Art | Instruct N2N | Mask SDS | Ours |
|-------------------------|-------------|------------|--------------|----------|----------|
| CLIP Direction Score ↑ | 0.1648 | 0.1947 | 0.2053 | 0.1409 | **0.2265** |
| Text score ↑ | 2.56 | 3.20 | 3.29 | 3.14 | **3.88** |
| Preservation ↑ | 2.30 | 2.97 | 3.08 | 2.76 | **4.09** |
| View consistency ↑ | 3.21 | **3.79** | 3.28 | 3.56 | 3.64 |
Table 1: Quantitative Comparison. We compared the text-image similarity between the target text and rendered output from edited NeRF (CLIP Directional Score). Also, we show the user study results in three categories: text-guidance score, source preservation score, and view consistency. The results show that ours shows improved perceptual score among baseline models.
| Metrics | CLIP-NeRF* | NeRF-Art* | Instruct N2N | Ours |
|-------------------------|-------------|------------|--------------|----------|
| Fine-tuning time ↓ | 6min | 15min | 90min | 14min |
| GPU Memory ↓ | 17GB | 18GB | 15GB | 8GB |
Table 2: Efficiency Comparison. We compared the efficiency of our method and the baselines in terms of training time and memory usage. Our method enables faster editing with lower memory usage. For CLIP-NeRF and NeRF-Art, the models are fine-tuned at lower resolution (252×189) due to excessive memory consumption. Instruct N2N and ours are fine-tuned at 512×512 resolution.
**Efficiency comparison.** To compare editing efficiency, we report the fine-tuning time and memory usage in Table 2. Among the baselines, our method uses the least memory for training, with a much lower time compared to Instruct-NeRF2NeRF. GPU memory usage and training time are measured on an RTX 3090. For the CLIP-NeRF and NeRF-Art baselines, we experiment with downsized images, as higher-resolution editing causes GPU memory overflow. For Instruct-NeRF2NeRF, the fine-tuning process requires excessive time as it periodically translates the training images. Considering that our method shows superior quality in text-guided editing, our proposed scheme is efficient in both memory and time. We do not include a comparison of the pre-training time of the NeRF backbone, since all baselines and ours take almost the same amount of time (about 10 minutes). More details and comparisons on pre-training time are in our Appendix.
4.4 Ablation Studies
To evaluate our proposed components, we conducted an ablation study in Figure 5. (a) If we only use DDS, the method fails to maintain the untargeted regions, introduces artifacts, and sometimes even fails to train (e.g., fossil). (b) If we do not use the regularization \( L_{\text{Mrec}} \), the edited results show the target text attribute, but again the regions beyond the target objects are severely degraded. (c) When we remove mask guidance on DDS (w/o \( L_{\text{MDDS}} \)), unwanted minor deformations occur because the gradient of DDS affects regions outside the mask. (d) When we remove our refinement layer, the results show blurry outputs, indicating that the latent NeRF is not accurately trained. When we utilize all the proposed components, we can reliably transform the 3D scene into the desired target object while preserving the original structure of the source NeRF. In the Appendix, we include an ablation study of our proposed refinement layer on novel-view reconstruction tasks.
5 Conclusion
In this paper, we introduced a novel ED-NeRF method optimized in the latent space. By enabling NeRF to directly predict latent features, it efficiently harnesses the text-guided score function of latent diffusion models without the need for an encoder. By doing so, our approach is able to effectively reduce computation costs and address the burden of previous models that required rendering at full resolution to utilize the diffusion model. We extended the strong 2D image editing performance of DDS to the 3D scene and also introduced a new loss function based on the mask. As a result, it showed high performance in object-specific editing, a task that previous models struggled with. We experimented with our proposed approach across various datasets, and as a result, it demonstrated strong adherence to text prompts in diverse scenes without undesired deformation.
6 ETHICS AND REPRODUCIBILITY STATEMENTS
Ethics statement. ED-NeRF enables efficient and accurate text-guided NeRF editing, which can serve various applications. However, ED-NeRF could be used to create obscene objects that may offend users. To prevent such side effects, one can use a filtered diffusion model that rejects malicious text conditions.
Reproducibility statement. We detailed our experimental process and parameter settings in our Appendix. We will upload our source code to an anonymous repository for reproduction.
7 ACKNOWLEDGEMENT
This research was supported by the National Research Foundation of Korea (NRF) (**RS-2023-00262527**), the Field-oriented Technology Development Project for Customs Administration through the National Research Foundation of Korea (NRF) funded by the Ministry of Science & ICT and the Korea Customs Service (**NRF-2021M3I1A1097938**), the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 1711137899, KMDF_PR_20200901_0015), and the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2023.
REFERENCES
Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang, Hujun Bao, Guofeng Zhang, and Zhaopeng Cui. Sine: Semantic-driven image-based nerf editing with prior-guided editing field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20919–20929, 2023.
Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5855–5864, 2021.
Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision, pp. 333–350. Springer, 2022.
Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5501–5510, 2022.
Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, and Daniel Cohen-Or. Stylegan-nada: Clip-guided domain adaptation of image generators. arXiv preprint arXiv:2108.00946, 2021.
Ayaan Haque, Matthew Tancik, Alexei A Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-nerf2nerf: Editing 3d scenes with instructions. arXiv preprint arXiv:2303.12789, 2023.
Amir Hertz, Kfir Aberman, and Daniel Cohen-Or. Delta denoising score. arXiv preprint arXiv:2304.07090, 2023.
Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 867–876, 2022.
Animesh Karnewar, Tobias Ritschel, Oliver Wang, and Niloy Mitra. Relu fields: The little non-linearity that could. In ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–9, 2022.
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
RsztjXcvUf

W2. The actual efficiency of the algorithm heavily depends on the solution approach for the x-subproblem, which is challenging to solve. The authors do not address the resolution of this general nonlinear system.
A Primal-Dual Approach to Solving Variational Inequalities with General Constraints
Tatjana Chavdarova∗
University of California, Berkeley
tatjana.chavdarova@berkeley.edu
Matteo Pagliardini
University of California, Berkeley & EPFL
matteo.pagliardini@epfl.ch
Tong Yang∗
Carnegie Mellon University
tongyang@andrew.cmu.edu
Michael I. Jordan
University of California, Berkeley
jordan@cs.berkeley.edu
Abstract
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities (VIs) under a limiting assumption that analytic solutions of specific subproblems are available. In this paper, we circumvent this assumption via a warm-starting technique where we solve the subproblems approximately and initialize variables with the approximate solution found at the previous iteration. We prove the convergence of this method and show that the gap function of its last iterate decreases at a rate of $O(\frac{1}{\sqrt{K}})$ when the operator is $L$-Lipschitz and monotone. In numerical experiments, we show that this technique can converge much faster than its exact counterpart. Furthermore, for cases where the inequality constraints are simple, we introduce an alternative variant of ACVI and establish its convergence under the same conditions. Finally, we relax the smoothness assumptions in Yang et al., yielding, to our knowledge, the first convergence result for VIs with general constraints that does not rely on the assumption that the operator is $L$-Lipschitz.
1 Introduction
We study variational inequalities (VIs), a general class of problems that encompasses both equilibria and optima. The general (constrained) VI problem involves finding a point $x^* \in X$ such that:
$$\langle x - x^*, F(x^*) \rangle \geq 0, \quad \forall x \in X,$$
(cVI)
where $X$ is a subset of the Euclidean $n$-dimensional space $\mathbb{R}^n$, and where $F : X \mapsto \mathbb{R}^n$ is a continuous map. VIs generalize standard constrained minimization problems, where $F$ is a gradient field $F \equiv \nabla f$, and, by allowing $F$ to be a general vector field, they also include problems such as finding equilibria in zero-sum games and general-sum games (Cottle & Dantzig, 1968; Rockafellar, 1970). This increased expressivity underlies their practical relevance to a wide range of emerging applications in machine learning, such as (i) multi-agent games (Goodfellow et al., 2014; Vinyals et al., 2017), (ii) robustification of single-objective problems, which yields min-max formulations (Szegedy et al., 2014; Mazuelas et al., 2020; Christiansen et al., 2020; Rothenhäusler et al., 2018), and (iii) statistical approaches to modeling complex multi-agent dynamics in stochastic and adversarial environments. We refer the reader to (Facchinei & Pang, 2003; Yang et al., 2023) for further examples.
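To make the cVI formulation concrete, the following minimal sketch (ours, for illustration only) builds the operator $F$ of the simplest zero-sum game, $\min_{x_1}\max_{x_2} x_1 x_2$: stacking the minimizing player's gradient with the negative of the maximizing player's gradient yields a vector field whose Jacobian is not symmetric.

```python
import numpy as np

# Minimal sketch (ours): the VI operator of min_{x1} max_{x2} f(x1, x2)
# stacks grad_{x1} f with -grad_{x2} f. Here f(x1, x2) = x1 * x2.
def bilinear_operator(z):
    x1, x2 = z
    return np.array([x2,    #  d f / d x1: descent direction for player 1
                     -x1])  # -d f / d x2: ascent direction for player 2

# The Jacobian of F is [[0, 1], [-1, 0]]; it is antisymmetric, which is why
# plain gradient descent-ascent rotates around the equilibrium (0, 0).
print(bilinear_operator(np.array([1.0, 0.5])))  # [ 0.5 -1. ]
```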
Such generality comes, however, at a price in that solving for equilibria is notably more challenging than solving for optima. In particular, as the Jacobian of $F$ is not necessarily symmetric, we may have rotational trajectories or limit cycles (Korpelevich, 1976; Hsieh et al., 2021). Moreover, in sharp contrast to standard minimization, the last iterate can be quite far from the solution even though the average iterate converges to the solution (Chavdarova et al., 2019). This has motivated recent efforts
∗Equal contribution. Source code: https://github.com/Chavdarova/I-ACVI.
to study specifically the convergence of the last iterate produced by gradient-based methods. Thus, herein, our focus and discussions refer to the last iterate.
Recent work has focused primarily on solving VIs in two cases of the domain $\mathcal{X}$: (i) the unconstrained setting where $\mathcal{X} \equiv \mathbb{R}^n$ (Golowich et al., 2020b; Chavdarova et al., 2023; Gorbunov et al., 2022a; Bot et al., 2022) and for (ii) the constrained setting with projection-based methods (Tseng, 1995; Daskalakis et al., 2018; Diakonikolas, 2020; Nemirovski, 2004; Mertikopoulos et al., 2019; Cai et al., 2022). The latter approach assumes that the projection is “simple,” in the sense that this step does not require gradient computation. This holds, for example, for inequality constraints of the form $x \leq \tau$ where $\tau$ is some constant, in which case fast operations such as clipping suffice. However, as is the case in constrained minimization, the constraint set—denoted herein with $\mathcal{C} \subseteq \mathcal{X}$—is, in the general case, an intersection of finitely many inequalities and linear equalities:
$$\mathcal{C} = \{ x \in \mathbb{R}^n | \varphi_i(x) \leq 0, i \in [m], \ Cx = d \},$$
(CS)
where each $\varphi_i : \mathbb{R}^n \mapsto \mathbb{R}$, $C \in \mathbb{R}^{p \times n}$, and $d \in \mathbb{R}^p$. Given a general CS (without assuming additional structure), implementing the projection requires second-order methods, which quickly become computationally prohibitive as the dimension $n$ increases. If the second-order derivative computation is approximated, the derived convergence rates will yet be multiplied with an additional factor; thus, the resulting rate of convergence may not match the known lower bound (Golowich et al., 2020a; Cai et al., 2022). This motivates a third thread of research, focusing on projection-free methods for the constrained VI problem, where the update rule does not rely on the projection operator. This is the case we focus on in this paper.
There has been significant work on developing second-order projection-free methods for the formulation in cVI; we refer the interested reader to (Chapter 7, Nesterov & Nemirovski, 1994) and (Chapter 11, Facchinei & Pang, 2003, vol. 2) for example. We remark that the seminal mirror-descent and mirror-prox methods (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003; Nemirovski, 2004) (see App. A.5) exploit a certain structure of the domain and avoid the projection operator, but cannot be applied for general CS.
In recent work, Yang et al. (2023) presented a first-order method, referred to as the ADMM-based Interior Point Method for Constrained VIs (ACVI), for solving the cVI problem with general constraints. ACVI combines path-following interior point (IP) methods and primal-dual methods. Regarding the latter, it generalizes the alternating direction method of multipliers (ADMM) method (Glowinski & Marroco, 1975; Gabay & Mercier, 1976), an algorithmic paradigm that is central to large-scale optimization (Boyd et al., 2011; Tibshirani, 2017)—see (Yang et al., 2023) and App. A.1; but which has been little explored in the cVI context. On a high level, ACVI has two nested loops: (i) the outer loop smoothly decreases the weight $\mu_i$ of the inequality constraints as in IP methods, whereas (ii) the inner loop performs a primal-dual update (for a fixed $\mu_i$) as follows:
- solve a subproblem whose main (primal) variable $x_j^i$ aims to satisfy the equality constraints,
- solve a subproblem whose main (primal) variable $y_j^i$ aims to satisfy the inequality constraints,
- update the dual variable $\lambda_j^i$.
The first two steps solve the subproblems exactly using an analytical expression of the solution, and the variables converge to the same value, thus eventually satisfying both the inequality and equality constraints. See Algorithm 3 for a full description, and see Fig. 2 for illustrative examples. The authors documented that projection-based methods may extensively zig-zag when hitting a constraint when there is a rotational component in the vector field, an observation that further motivates projection-free approaches even when the projection is simple.
Yang et al. showed that the gap function of the last iterate of ACVI decreases at a rate of $\mathcal{O}(1/\sqrt{K})$ when the operator is $L$-Lipschitz, monotone, and at least one constraint is active. It is, however, an open problem to determine if the same rate on the gap function applies while assuming only that the operator is monotone (where monotonicity for VIs is analogous to convexity for standard minimization, see Def. 2.1). Moreover, in some cases, the subproblems of ACVI may be cumbersome to solve analytically. Hence, a natural question is whether convergence can still be shown when the subproblems are solved only approximately. As a result, we raise the following questions:
- Does the last iterate of ACVI converge when the operator is monotone without requiring it to be $L$-Lipschitz?
- Does ACVI converge when the subproblems are solved approximately?
In this paper, we answer the former question affirmatively. Specifically, we prove that the last iterate of ACVI converges at a rate of $\mathcal{O}\left(\frac{1}{\sqrt{K}}\right)$ in terms of the gap function (Def. 2.2) even when assuming only the monotonicity of the operator. The core of our analysis lies in identifying a relationship between the reference point of the gap function and a KKT point that ACVI targets implicitly (i.e., it does not appear explicitly in the ACVI algorithm). This shows that ACVI explicitly works to decrease the gap function at each iteration. The argument further allows us to determine a convergence rate by making it possible to upper bound the gap function. This is in contrast to the approach of Yang et al. (2023), who upper bound the iterate distance and then the gap function, an approach that requires a Lipschitz assumption. This is the first convergence rate for the last iterate for monotone VIs with constraints that does not rely on an $L$-Lipschitz assumption on the operator.
To address the latter question, we leverage a fundamental property of the ACVI algorithm—namely, its homotopic structure as it smoothly transitions to the original problem, a homotopy that inherently arises from its origin as an interior-point method (Boyd & Vandenberghe, 2004). Moreover, due to the alternating updates of the two sets of parameters of ACVI ($x$ and $y$; see Algorithm 3), the subproblems change negligibly, with the changes proportional to the step sizes. This motivates the standard warm-start technique where, at every iteration, instead of initializing at random, we initialize the corresponding optimization variable with the approximate solution found at the previous iteration. We refer to the resulting algorithm as inexact ACVI, described in Algorithm 1. Furthermore, inspired by the work of Schmidt et al. (2011), which focuses on the proximal gradient method for standard minimization, we prove that inexact ACVI converges with the same rate of $\mathcal{O}\left(\frac{1}{\sqrt{K}}\right)$, under a condition on the rate of decrease of the approximation errors. We evaluate inexact ACVI empirically on 2D and high-dimensional games and show how multiple inexact yet computationally efficient iterations can lead to faster wall-clock convergence than fewer exact ones.
Finally, we provide a detailed study of a special case of the problem class that ACVI can solve. In particular, we focus on the case when the inequality constraints are simple, in the sense that projection on those inequalities is fast to compute. Such problems often arise in machine learning, e.g., whenever the constraint set is an $L_p$-ball, with $p \in \{1, 2, \infty\}$ as in adversarial training (Goodfellow et al., 2015). We show that the same convergence rate holds for this variant of ACVI. Moreover, we show empirically that when using this method to train a constrained GAN on the MNIST (Lecun & Cortes, 1998) dataset, it converges faster than the projected variants of the standard VI methods.
In summary, our main contributions are as follows:
- We show that the gap function of the last iterate of ACVI (Yang et al., 2023, Algorithm 1 therein) decreases at a rate of $\mathcal{O}\left(\frac{1}{\sqrt{K}}\right)$ for monotone VIs, without relying on the assumption that the operator is $L$-Lipschitz.
- We combine a standard warm-start technique with ACVI and propose a variant that solves the subproblems approximately, named inexact ACVI (I-ACVI)—see Algorithm 1. We show that inexact ACVI recovers the same convergence rate as ACVI, provided that the errors decrease at appropriate rates.
- We propose a variant of ACVI designed for inequality constraints that are fast to project to—see Algorithm 2. We guarantee its convergence and provide the corresponding rate; in this case, we omit the central path, simplifying the convergence analysis.
- Empirically, we: (i) verify the benefits of warm-start of the inexact ACVI; (ii) observe that I-ACVI can be faster than other methods by taking advantage of cheaper approximate steps; (iii) train a constrained GAN on MNIST and show the projected version of ACVI is faster to converge than other methods; and (iv) provide visualizations contrasting the different ACVI variants.
1.1 Related Works
Last-iterate convergence of first-order methods on VI-related problems. When solving VIs, the last and average iterates can be far apart; see examples in (Chavdarova et al., 2019). Thus, an extensive line of work has aimed at obtaining last-iterate convergence for special cases of VIs that are important in applications, including bilinear or strongly monotone games (e.g., Tseng, 1995; Malitsky, 2015; Facchinei & Pang, 2003; Daskalakis et al., 2018; Liang & Stokes, 2019; Gidel et al., 2019b; Azizian et al., 2020; Thekumparampil et al., 2022), and VIs with cocoercive operators (Diakonikolas, 2020). Several papers exploit continuous-time analyses as these provide
direct insights on last-iterate convergence and simplify the derivation of the Lyapunov potential function (Ryu et al., 2019; Bot et al., 2020; Rosca et al., 2021; Chavdarova et al., 2023; Bot et al., 2022). For monotone VIs, (i) Golowich et al. (2020b,a) established that the lower bound of $\tilde{p}$-stationary canonical linear iterative ($\tilde{p}$-SCLI) first-order methods (Arjevani et al., 2016) is $O(\frac{1}{\tilde{p}\sqrt{K}})$, (ii) Golowich et al. (2020b) obtained a rate in terms of the gap function, relying on first- and second-order smoothness of $F$, (iii) Gorbunov et al. (2022a) and Gorbunov et al. (2022b) obtained a rate of $O(\frac{1}{K})$ for extragradient (Korpelevich, 1976) and optimistic GDA (Popov, 1980), respectively—in terms of reducing the squared norm of the operator, relying on first-order smoothness of $F$, and (iv) Golowich et al. (2020b) and Chavdarova et al. (2023) provided the best iterate rate for OGDA while assuming first-order smoothness of $F$. Daskalakis & Panageas (2019) focused on zero-sum convex-concave constrained problems and provided an asymptotic convergence guarantee for the last iterate of the optimistic multiplicative weights update (OMWU) method. For constrained and monotone VIs with $L$-Lipschitz operator, Cai et al. (2022) recently showed that the last iterate of extragradient and optimistic GDA have a rate of convergence that matches the lower bound. Gidel et al. (2017) consider strongly convex-concave zero-sum games with strongly convex constraint set to study the convergence of the Frank-Wolfe method (Lacoste-Julien & Jaggi, 2015).
**Interior point (IP) methods for VIs.** IP methods are a broad class of algorithms for solving problems constrained by general inequality and equality constraints. One of the widely adopted subclasses within IP methods utilizes log-barrier terms to handle inequality constraints. They typically rely on Newton’s method, which iteratively approaches the solution from the feasible region. Several works extend IP methods for constrained VI problems. Among these, Nesterov & Nemirovski (Chapter 7, 1994) study extensions to VI problems while relying on Newton’s method. Further, an extensive line of work discusses specific settings (e.g., Chen et al., 1998; Qi & Sun, 2002; Qi et al., 2000; Fan & Yan, 2010). On the other hand, Goffin et al. (1997) described a second-order cutting-plane method for solving pseudomonotone VIs with linear inequalities. Although these methods enjoy fast convergence regarding the number of iterations, each iteration requires computing second-order derivatives, which becomes computationally prohibitive for large-scale problems. Recently, Yang et al. (2023) derived the aforementioned ACVI method which combines IP methods and the ADMM method, resulting in a first-order method that can handle general constraints.
## Preliminaries
**Notation.** Bold small and bold capital letters denote vectors and matrices, respectively, while curly capital letters denote sets. We let $[n]$ denote $\{1, \ldots, n\}$ and let $e$ denote the vector of all ones. The Euclidean norm of $v$ is denoted by $\|v\|$, and the inner product in Euclidean space by $\langle \cdot, \cdot \rangle$. $\odot$ denotes the element-wise product.
**Problem.** Let $\text{rank}(C) = p$ denote the rank of the matrix $C$ in (CS). With abuse of notation, let $\varphi$ be the concatenation of the $\varphi_i(\cdot), i \in [m]$. We assume that each of the inequality constraints is convex and $\varphi_i \in C^1(\mathbb{R}^n), i \in [m]$. We define the following sets:
$$C_\leq \triangleq \{x \in \mathbb{R}^n \,|\, \varphi(x) \leq 0\}, \quad C_< \triangleq \{x \in \mathbb{R}^n \,|\, \varphi(x) < 0\}, \quad \text{and} \quad C_= \triangleq \{y \in \mathbb{R}^n \,|\, Cy = d\};$$
thus the relative interior of $C$ is $\text{int } C \triangleq C_< \cap C_=$. We assume $\text{int } C \neq \emptyset$ and that $C$ is compact.
In the following, we list the necessary definitions and assumptions; see App. A for additional background. We define these for a general domain set $S$, and by setting $S \equiv \mathbb{R}^n$ and $S \equiv X$, these refer to the unconstrained and constrained settings, respectively. We will use the standard gap function as a convergence measure, which requires $S$ to be compact to define it.
**Definition 2.1 (monotone operators).** An operator $F : X \supseteq S \to \mathbb{R}^n$ is monotone on $S$ if and only if the following inequality holds for all $x, x' \in S$: $\langle x - x', F(x) - F(x') \rangle \geq 0$.
**Definition 2.2 (gap function).** Given a candidate point $x' \in X$ and a map $F : X \supseteq S \to \mathbb{R}^n$ where $S$ is compact, the gap function $G(\cdot\,, S) : X \to \mathbb{R}$ is defined as: $G(x', S) \triangleq \max_{x \in S} \langle F(x'), x' - x \rangle$.
**Definition 2.3 ($\sigma$-approximate solution).** Given a map $F : X \to \mathbb{R}^n$ and a positive scalar $\sigma$, $x \in X$ is said to be a $\sigma$-approximate solution of $F(x) = 0$ iff $\|F(x)\| \leq \sigma$.
**Definition 2.4 ($\varepsilon$-minimizer).** Given a minimization problem $\min_x h(x)$, s.t. $x \in S$, and a fixed positive scalar $\varepsilon$, a point $\hat{x} \in S$ is said to be an $\varepsilon$-minimizer of this problem if and only if it holds that: $h(\hat{x}) \leq h(x) + \varepsilon, \forall x \in S$.
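As a quick illustration of Defs. 2.1 and 2.2, the sketch below (our own, with sampled points standing in for the compact set $S$) tests monotonicity on pairs of samples and approximates the max in the gap function over a finite sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_monotone_on_samples(F, points, tol=1e-12):
    """Numerically tests Def. 2.1 on sampled pairs (necessary, not sufficient)."""
    return all(np.dot(x - xp, F(x) - F(xp)) >= -tol
               for x in points for xp in points)

def approx_gap(F, x_cand, S_samples):
    """Approximates G(x', S) = max_{x in S} <F(x'), x' - x> from Def. 2.2."""
    Fx = F(x_cand)
    return max(np.dot(Fx, x_cand - x) for x in S_samples)

F = lambda z: np.array([z[1], -z[0]])     # rotational, monotone operator
pts = [rng.uniform(-1, 1, 2) for _ in range(100)]
print(is_monotone_on_samples(F, pts))     # True
print(approx_gap(F, np.zeros(2), pts))    # 0.0: the origin solves this VI
```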
Figure 1: Convergence of ACVI and I-ACVI on the (2D-BG) problem. The central path is depicted in yellow. For all methods, we show the $y$-iterates initialized at the same point (blue circle). Each subsequent point on the trajectory depicts the (exact or approximate) solution at the end of the inner loop. A yellow star represents the game’s Nash equilibrium (NE), and the constraint set is the interior of the red square. (a): As we decay $\mu_t$, the solutions of the inner loop of ACVI follow the central path. As $\mu_t \to 0$, the solution of the inner loop of ACVI converges to the NE. (b, c, d): When the $x$ and $y$ subproblems are solved approximately with a finite $K$ and $\ell$, the iterates need not converge as the approximation error increases (and $K$ decreases). See § 5 for a discussion.
Algorithm 1 Inexact ACVI (I-ACVI) pseudocode.
1: **Input:** operator $F : X \to \mathbb{R}^n$, constraints $Cx = d$ and $\varphi_i(x) \leq 0, i = [m]$, hyperparameters $\mu_{-1}, \beta > 0, \delta \in (0, 1)$, barrier map $\wp (\wp_1$ or $\wp_2)$, inner optimizers $A_x$ (e.g. EG, GDA) and $A_y$ (GD) for the $x$ and $y$ subproblems, resp.; outer and inner loop iterations $T$ and $K$, resp.
2: **Initialize:** $x^{(0)}_0 \in \mathbb{R}^n, y^{(0)}_0 \in \mathbb{R}^n, \lambda^{(0)}_0 \in \mathbb{R}^n$
3: $P_c \triangleq I - C^T(CC^T)^{-1}C$ where $P_c \in \mathbb{R}^{n \times n}$
4: $d_c \triangleq C^T(CC^T)^{-1}d$ where $d_c \in \mathbb{R}^n$
5: **for** $t = 0, \ldots, T - 1$ **do**
6: $\mu_t = \delta \mu_{t-1}$
7: **for** $k = 0, \ldots, K - 1$ **do**
8: Set $x^{(t)}_{k+1}$ to be a $\sigma_{k+1}$-approximate solution of: $x + \frac{1}{\beta} P_c F(x) - P_c y^{(t)}_k + \frac{1}{\beta} P_c \lambda^{(t)}_k - d_c = 0$ (w.r.t. $x$), by running $\ell^{(t)}_x$ steps of $A_x$, with $x$ initialized to the previous solution ($x^{(t)}_k$ if $k > 0$, else $x^{(t-1)}_K$)
9: Set $y^{(t)}_{k+1}$ to be an $\varepsilon_{k+1}$-minimizer of $\min_y \sum_{i=1}^m \wp(\varphi_i(y), \mu) + \frac{\beta}{2} \| y - x^{(t)}_{k+1} - \frac{1}{\beta} \lambda^{(t)}_k \|^2$, by running $\ell^{(t)}_y$ steps of $A_y$, with $y$ initialized to $y^{(t)}_k$ when $k > 0$, or $y^{(t-1)}_K$ otherwise
10: $\lambda^{(t)}_{k+1} = \lambda^{(t)}_k + \beta (x^{(t)}_{k+1} - y^{(t)}_{k+1})$
11: **end for**
12: $(y^{(t+1)}_0, \lambda^{(t+1)}_0) \triangleq (y^{(t)}_K, \lambda^{(t)}_K)$
13: **end for**
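The following is a minimal sketch of Algorithm 1 on a 2D box-constrained bilinear game (the (2D-BG) problem of § 5), with plain gradient steps as the subsolvers $A_x$, $A_y$ and the log barrier $\wp_1$; there are no equality constraints, so $P_c = I$ and $d_c = 0$. All step sizes and loop lengths are illustrative choices of ours, not the paper's tuned settings:

```python
import numpy as np

# I-ACVI sketch on the (2D-BG) game: F(x) = (x2, -x1) with box constraints
# -0.4 <= x_i <= 2.4 and no equality constraints (P_c = I, d_c = 0).
F = lambda x: np.array([x[1], -x[0]])

def barrier_grad(y, mu):
    # gradient of sum_i wp1(phi_i(y), mu) for phi(y) = (y - 2.4, -y - 0.4)
    return mu / (2.4 - y) - mu / (y + 0.4)

def i_acvi(T=20, K=20, l=50, beta=0.5, mu=1.0, delta=0.5, gamma=0.05):
    x = y = lam = np.zeros(2)            # warm start: carried across loops
    for _ in range(T):                   # outer loop: decay mu (line 6)
        for _ in range(K):               # inner primal-dual loop
            for _ in range(l):           # approximate x-subproblem (line 8)
                x = x - gamma * (x + (F(x) + lam) / beta - y)
            for _ in range(l):           # approximate y-subproblem (line 9)
                y = y - gamma * (barrier_grad(y, mu) + beta * (y - x - lam / beta))
            lam = lam + beta * (x - y)   # dual update (line 10)
        mu *= delta
    return x, y

print(i_acvi())  # x and y meet near the NE (0, 0), mimicking Fig. 1(a)
```

Note that warm-starting requires no extra code here: $x$ and $y$ simply persist across the inner and outer iterations.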
3 Convergence of the Exact and Inexact ACVI Algorithms for Monotone VIs
In this section, we present our main theoretical findings: (i) the rate of convergence of the last iterate of ACVI (the exact ACVI algorithm is stated in App. A) while relying exclusively on the assumption that the operator $F$ is monotone; and (ii) the corresponding convergence when the subproblems are solved approximately—where the proposed algorithm is referred to as inexact ACVI—Algorithm 1 ($\wp_1, \wp_2$ are defined below). Note that we only assume $F$ is $L$-Lipschitz for the latter result, and if we run Algorithm 1 with extragradient for line 8, for example, the method only has a convergence guarantee if $F$ is $L$-Lipschitz (see Korpelevich, 1976, Theorem 1). For easier comparison with one loop algorithms, we will state both of these results for a fixed $\mu_{-1}$ (hence only have the $k \in [K]$ iteration count) as in (Yang et al., 2023); nonetheless, the same rates hold without knowing $\mu_{-1}$—see App. B.4 in Yang et al. (2023) and our App. B.3. Thus, both guarantees are parameter-free.
3.1 Last Iterate Convergence of Exact ACVI
**Theorem 3.1** (Last iterate convergence rate of ACVI—Algorithm 1 in (Yang et al., 2023)). Given a continuous operator \( F : \mathcal{X} \to \mathbb{R}^n \), assume: (i) \( F \) is monotone on \( \mathcal{C}_= \), as per Def. 2.1; (ii) either \( F \) is strictly monotone on \( \mathcal{C} \) or one of the \( \varphi_i \) is strictly convex. Let \( (\mathbf{x}_K^{(t)}, \mathbf{y}_K^{(t)}, \lambda_K^{(t)}) \) denote the last iterate of ACVI. Then, for any fixed \( K \in \mathbb{N}_+ \) and a sufficiently small \( \mu_{-1} \), it holds for all \( t \in [T] \) that:
\[
G(\mathbf{x}_K^{(t)}, C) \leq O\left(\frac{1}{\sqrt{K}}\right), \quad \text{and} \quad \left\| \mathbf{x}_K^{(t)} - \mathbf{y}_K^{(t)} \right\| \leq O\left(\frac{1}{\sqrt{K}}\right).
\]
App. B gives the details on the constants that appear in the rates and the proof of Theorem 3.1.
3.2 Last Iterate Convergence Rate of Inexact ACVI
For some problems, the equation in line 8 or the convex optimization problem in line 9 of ACVI may not have an analytic solution, or the exact solution may be expensive to compute. Thus we consider solving these two problems approximately, using warm-starting: at each iteration, we initialize \( \mathbf{x} \) and \( \mathbf{y} \) with the solutions of the \( \mathbf{x} \)- and \( \mathbf{y} \)-subproblems found at the previous step, as described in Algorithm 1. The following Theorem—inspired by (Schmidt et al., 2011)—establishes that when the errors in the solutions of the subproblems satisfy certain conditions, the last iterate of inexact ACVI recovers the convergence rate of (exact) ACVI. The theorem holds for the standard barrier function used in IP methods, \( \wp_1 \), as well as for a new barrier function \( \wp_2 \) that we propose, which is smooth and defined on the entire domain:
\[
\wp_1(z, \mu) = -\mu \log(-z), \qquad \wp_2(z, \mu) = \begin{cases}
-\mu \log(-z), & z \leq -e^{-\frac{c}{\mu}} \\
\mu e^{\frac{c}{\mu}} z + \mu + c, & \text{otherwise,}
\end{cases}
\]
where \( c \) in \( \wp_2 \) is a fixed constant. Choosing among these is denoted by \( \wp(\cdot) \) in Algorithm 1.
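A small numerical check of the two barrier maps (our sketch; the constants $c = 1$ and $\mu = 0.5$ are arbitrary): at the switch point $z_0 = -e^{-c/\mu}$, the log branch of $\wp_2$ has value $c$ and slope $\mu e^{c/\mu}$, which the linear branch matches, so $\wp_2$ is $C^1$ and finite everywhere:

```python
import numpy as np

def wp1(z, mu):
    return -mu * np.log(-z)                       # defined only for z < 0

def wp2(z, mu, c=1.0):
    z0 = -np.exp(-c / mu)
    if z <= z0:
        return -mu * np.log(-z)                   # usual log-barrier branch
    return mu * np.exp(c / mu) * z + mu + c       # smooth linear continuation

mu, c = 0.5, 1.0
z0 = -np.exp(-c / mu)
print(wp1(z0, mu), wp2(z0, mu, c))                # both branches equal c at z0
print(wp2(1.0, mu, c))                            # finite even for infeasible z
```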
**Theorem 3.2** (Last iterate convergence rate of inexact ACVI—Algorithm 1 with \( \wp_1 \) or \( \wp_2 \)). Given a continuous operator \( F : \mathcal{X} \to \mathbb{R}^n \), assume: (i) \( F \) is monotone on \( \mathcal{C}_= \), as per Def. 2.1; (ii) either \( F \) is strictly monotone on \( \mathcal{C} \) or one of the \( \varphi_i \) is strictly convex; and (iii) \( F \) is \( L \)-Lipschitz on \( \mathcal{X} \), that is, \( \|F(\mathbf{x}) - F(\mathbf{x'})\| \leq L \|\mathbf{x} - \mathbf{x'}\| \) for all \( \mathbf{x}, \mathbf{x'} \in \mathcal{X} \) and some \( L > 0 \). Let \( (\mathbf{x}_K^{(t)}, \mathbf{y}_K^{(t)}, \lambda_K^{(t)}) \) denote the last iterate of Algorithm 1; and let \( \sigma_k \) and \( \varepsilon_k \) denote the approximation errors at step \( k \) of lines 8 and 9 (as per Def. 2.3 and 2.4), respectively. Further, suppose: \( \lim_{K \to \infty} \frac{1}{\sqrt{K}} \sum_{k=1}^{K+1} \left(k(\sqrt{\varepsilon_k} + \sigma_k)\right) < +\infty \). Then, for any fixed \( K \in \mathbb{N}_+ \) and a sufficiently small \( \mu_{-1} \), it holds for all \( t \in [T] \) that:
\[
G(\mathbf{x}_K^{(t)}, C) \leq O\left(\frac{1}{\sqrt{K}}\right), \quad \text{and} \quad \left\| \mathbf{x}_K^{(t)} - \mathbf{y}_K^{(t)} \right\| \leq O\left(\frac{1}{\sqrt{K}}\right).
\]
As is the case for Theorem 3.1, Theorem 3.2 gives a nonasymptotic convergence guarantee. While the condition involving the sequences \( \{\varepsilon_k\}_{k=1}^{K+1} \) and \( \{\sigma_k\}_{k=1}^{K+1} \) requires the given expression to be summable, the convergence rate is nonasymptotic as it holds for any \( K \). App. B gives details on the constants in the rates of Theorem 3.2, provides the proof, and also discusses the algorithms \( A_x, A_y \) for the sub-problems that satisfy the conditions. App. C discusses further details of the implementation of Algorithm 1; and we will analyze the effect of warm-starting in § 5.
4 Specialization of ACVI for Simple Inequality Constraints
We now consider the case where the inequality constraints are simple, in the sense that the projection onto them is fast to compute. This scenario frequently occurs in machine learning, particularly when dealing with \( L_\infty \)-ball constraints, for instance. Projections onto the \( L_2 \)- and \( L_1 \)-balls can also be obtained efficiently: through simple normalization for \( L_2 \), and via an \( O(n \log(n)) \) algorithm for \( L_1 \) (Duchi et al., 2008). In ACVI, we have the flexibility to substitute the \( y \)-subproblem with a projection onto the set defined by the inequalities. The \( x \)-subproblem still accounts for equality constraints, and if there are none, it simplifies further since \( P_c \equiv I \) and \( d_c \equiv 0 \). Projection-based methods cannot leverage this structural advantage of simple inequality constraints, as the intersection with the equality constraints can be nontrivial.
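For concreteness, the \( O(n \log(n)) \) sort-based projection mentioned above can be sketched as follows for the probability simplex (the constraint set of the (HBG) problem in § 5); the \( L_1 \)-ball projection of Duchi et al. (2008) reduces to this routine:

```python
import numpy as np

# Sketch of the O(n log n) sort-based Euclidean projection onto the
# probability simplex {x : x >= 0, sum(x) = 1} (Duchi et al., 2008).
def project_simplex(v):
    u = np.sort(v)[::-1]                          # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)        # shared shift
    return np.maximum(v - theta, 0.0)

v = np.array([0.8, 1.2, -0.3])
x = project_simplex(v)
print(x, x.sum())   # [0.3 0.7 0. ] 1.0: nonnegative entries summing to one
```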
**The P-ACVI Algorithm:** omitting the log barrier. Assume that the provided inequality constraints can be enforced efficiently through a projection \( \Pi_\leq(\cdot) : \mathbb{R}^n \to \mathcal{C}_\leq \). In that case, we no longer need the log barrier, and we omit \( \mu \) and the outer loop of ACVI over \( t \in [T] \). Differentiating the remaining expression of the \( y \)-subproblem with respect to \( y \) and setting it to zero gives:
Algorithm 2 P-ACVI: ACVI with simple inequalities.
1: **Input:** operator $F : \mathcal{X} \to \mathbb{R}^n$, constraints $Cx = d$ and projection operator $\Pi_\leq$ for the inequality constraints, hyperparameter $\beta > 0$, and number of iterations $K$.
2: **Initialize:** $y_0 \in \mathbb{R}^n$, $\lambda_0 \in \mathbb{R}^n$
3: $P_c \triangleq I - C^\top(CC^\top)^{-1}C$ where $P_c \in \mathbb{R}^{n \times n}$
4: $d_c \triangleq C^\top (CC^\top)^{-1}d$ where $d_c \in \mathbb{R}^n$
5: **for** $k = 0, \ldots, K - 1$ **do**
6: Set $x_{k+1}$ to be the solution of: $x + \frac{1}{\beta} P_c F(x) - P_c y_k + \frac{1}{\beta} P_c \lambda_k - d_c = 0$ (w.r.t. $x$)
7: $y_{k+1} = \Pi_\leq(x_{k+1} + \frac{1}{\beta} \lambda_k)$
8: $\lambda_{k+1} = \lambda_k + \beta(x_{k+1} - y_{k+1})$
9: **end for**
$$\text{argmin}_y \frac{\beta}{2} \|y - x_{k+1} - \frac{1}{\beta} \lambda_k\|^2 = x_{k+1} + \frac{1}{\beta} \lambda_k.$$
This implies that line 9 of the exact ACVI algorithm (given in App. A) can be replaced with the solution of the $y$-problem without the inequality constraints, followed by a cheap projection that enforces them: $y_{k+1} = \Pi_\leq(x_{k+1} + \frac{1}{\beta} \lambda_k)$, where the $\varphi_i(\cdot)$ are handled by the projection. We describe the resulting procedure in Algorithm 2 and refer to it as P-ACVI. In this scenario with simple $\varphi_i$, the $y$-problem is always solved exactly; when the $x$-subproblem is solved approximately, we refer to the method as PI-ACVI.
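A minimal sketch (ours) of the resulting loop on the same box-constrained 2D game as before, with clipping implementing $\Pi_\leq$ and illustrative step sizes; with only a few gradient steps on line 6, this is the PI-ACVI variant:

```python
import numpy as np

# PI-ACVI sketch (Algorithm 2): the y-subproblem collapses to a projection.
F = lambda x: np.array([x[1], -x[0]])
proj = lambda v: np.clip(v, -0.4, 2.4)            # Pi_<= for the box constraints

def pi_acvi(K=200, l=50, beta=0.5, gamma=0.05):
    x = y = np.array([2.0, 2.0])
    lam = np.zeros(2)
    for _ in range(K):
        for _ in range(l):                        # approximate line 6 (P_c = I)
            x = x - gamma * (x + (F(x) + lam) / beta - y)
        y = proj(x + lam / beta)                  # line 7: exact, cheap projection
        lam = lam + beta * (x - y)                # line 8
    return x, y

print(pi_acvi())  # x and y approach the NE (0, 0) without following a central path
```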
Figure 2: Intermediate iterates of PI-ACVI (Algorithm 2) on the 2D minmax game (2D-BG). The boundary of the constraint set is shown in red. (b) depicts the $y_k$ (from line 7 in Algorithm 2), which we obtain through projections. In (a), each spiral corresponds to iteratively solving the $x_k$ subproblem for $\ell = 20$ steps (line 6 in Algorithm 2). Jointly, the trajectories of $x$ and $y$ illustrate the ACVI dynamics: $x$ and the constrained $y$ "collaborate" and converge to the same point.
Last-iterate convergence of P-ACVI. The following theorem shows that P-ACVI has the same last-iterate rate as ACVI. Its proof can be derived from that of Theorem 3.1, which focuses on a more general setting, see App. B. We state it as a separate theorem, as it cannot be deduced directly from the statement of the former.
**Theorem 4.1** (Last iterate convergence rate of P-ACVI—Algorithm 2). Given a continuous operator $F : \mathcal{X} \to \mathbb{R}^n$, assume $F$ is monotone on $\mathcal{C}_=$, as per Def. 2.1. Let $(x_K, y_K, \lambda_K)$ denote the last iterate of Algorithm 2. Then for all $K \in \mathbb{N}_+$, it holds that:
$$G(x_K, \mathcal{C}) \leq O\left(\frac{1}{\sqrt{K}}\right), \text{ and } \|x_K - y_K\| \leq O\left(\frac{1}{\sqrt{K}}\right).$$
**Remark 4.2.** Note that Theorem 4.1 relies on weaker assumptions than Theorem 3.1. This is a ramification of removing the central path in the P-ACVI Algorithm. Thus, assumption (ii) in Theorem 3.1—used earlier to guarantee the existence of the central path (see App. A)—is not needed.
Figure 3: Experiments on the (C-GAN) game, using GDA, EG, and PI-ACVI on MNIST. All curves are averaged over 4 seeds. (a): Fréchet Inception Distance (FID, lower is better) given CPU wall-clock time. (b): Inception Score (IS, higher is better) given wall-clock time. We observe that PI-ACVI converges faster than EG and GDA for both metrics. Moreover, we see that using a large $\ell$ for the first iteration ($\ell_0$) can give a significant advantage. The two PI-ACVI curves use the same $\ell_+ = 20$.
Figure 4: Comparison between I-ACVI, (exact) ACVI, and projection-based algorithms on the (HBG) problem. (a): CPU time (in seconds) to reach a given relative error ($x$-axis), where the rotational intensity is fixed to $\eta = 0.05$ in (HBG) for all methods. (b): Number of iterations to reach a relative error of $0.02$ for varying values of the rotational intensity $\eta$. We fix the maximum number of iterations to $50$. (c): Joint impact of the number of inner-loop iterations $K_0$ at $t = 0$ and different choices of inner-loop iterations for $K_+$ at any $t > 0$ on the number of iterations needed to reach a fixed relative error of $10^{-4}$. We see that irrespective of the selection of $K_+$, I-ACVI converges fast if $K_0$ is large enough. For instance, $(K_0 = 130, K_+ = 1)$ converges faster than $(K_0 = 20, K_+ = 20)$. We fix $\ell = 10$ for all the experiments, in all of (a), (b), and (c).
5 EXPERIMENTS
Methods. We compare ACVI, Inexact-ACVI (I-ACVI), and Projected-Inexact-ACVI (PI-ACVI) with the projected variants of Gradient Descent Ascent (P-GDA), Extragradient (Korpelevich, 1976) (P-EG), Optimistic-GDA (Popov, 1980) (P-OGDA), and Lookahead-Minmax (Zhang et al., 2019; Chavdarova et al., 2021) (P-LA). We always use GDA as an inner optimizer for I-ACVI, PI-ACVI, and P-ACVI. See App. D and C for comparison with additional methods and implementation.
Problems. We study the empirical performance of these methods on three different problems:
• 2D bilinear game: a version of the bilinear game with $L_\infty$ constraints, as follows
$$\min_{x_1 \in \Delta} \max_{x_2 \in \Delta} x_1 x_2,$$
with $\Delta = \{x \in \mathbb{R} | -0.4 \leq x \leq 2.4\}$. (2D-BG)
• High-dimensional bilinear game: each player is a 500-dimensional vector, and the iterates are constrained to the probability simplex. A parameter $\eta \in (0, 1)$ controls the rotational component of the game (when $\eta = 1$ the game is a potential game; when $\eta = 0$ it is Hamiltonian); a sketch of the corresponding operator follows this list:
$$\min_{x_1 \in \Delta} \max_{x_2 \in \Delta} \eta x_1^\top x_1 + (1 - \eta) x_1^\top x_2 - \eta x_2^\top x_2,$$
with $\Delta = \{x_i \in \mathbb{R}^{500} | x_i \geq 0, \text{ and } e^\top x_i = 1\}$. (HBG)
• MNIST. We train GANs on the MNIST (Lecun & Cortes, 1998) dataset. We use linear inequality constraints and no equality constraints, as follows:
$$\min_{\theta \in \Delta_\theta} \max_{\zeta \in \Delta_\zeta} \mathbb{E}_{s \sim p_d} [\log D(s)] + \mathbb{E}_{z \sim p_z} [\log(1 - D(G(z)))],$$
where $\Delta_\theta = \{\theta \mid A_1 \theta \leq b_1\}$ and $\Delta_\zeta = \{\zeta \mid A_2 \zeta \leq b_2\}$, (C-GAN)
with $p_z$ and $p_d$ the noise and data distributions, respectively; $\theta$ and $\zeta$ are the parameters of the generator and discriminator, resp.; $G$ and $D$ are the generator and discriminator maps, parameterized with $\theta$ and $\zeta$, resp. $A_i \in \mathbb{R}^{100 \times n_i}$ and $b_i \in \mathbb{R}^{n_i}$, where $n_i$ is the number of parameters of the corresponding network.
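As referenced above, here is a sketch (ours) of the (HBG) operator, stacking player 1's gradient with the negative of player 2's gradient; $\eta$ interpolates between the potential ($\eta = 1$) and Hamiltonian ($\eta = 0$) regimes:

```python
import numpy as np

# Sketch (ours) of the (HBG) operator for two 500-dimensional players.
def hbg_operator(z, eta, n=500):
    x1, x2 = z[:n], z[n:]
    g1 = 2 * eta * x1 + (1 - eta) * x2   #  grad of the objective w.r.t. x1
    g2 = 2 * eta * x2 - (1 - eta) * x1   # -grad of the objective w.r.t. x2
    return np.concatenate([g1, g2])

rng = np.random.default_rng(0)
z = rng.dirichlet(np.ones(500), size=2).ravel()  # both players on the simplex
print(hbg_operator(z, eta=0.05).shape)           # (1000,)
```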
5.1 INEXACT ACVI
**2D bilinear game.** In Fig. 1, we compare exact and inexact ACVI on the 2D bilinear game. Rather than solving the subproblems of I-ACVI until a prescribed accuracy is reached, herein we fix the numbers of iterations $K$ and $\ell$ in I-ACVI. We observe that I-ACVI converges, following the central path, when the inner loop of I-ACVI over $k \in [K]$ is solved with sufficient precision. The two parameters influencing the convergence of the iterates to the central path are $K$ and $\ell$, where the latter is the number of iterations used to solve the two subproblems (lines 8 and 9 in Algorithm 1). Fig. 1 shows that small values such as $K = 20$ and $\ell = 2$ are sufficient for convergence for this purely rotational game. Nonetheless, as $K$ and $\ell$ decrease further, the iterates...
of I-ACVI may not converge. This accords with Theorem 3.2, which indicates that the sum of errors is bounded only if $K$ is large. Hence, larger $K$ implies a smaller error.
**HD bilinear game.** In Fig. 4(a) and Fig. 4(b) we compare I-ACVI with ACVI and the projection-based algorithms on the (HBG) problem. We observe that both ACVI and I-ACVI outperform the remaining baselines significantly in terms of speed of convergence measured in both CPU time and the number of iterations. Moreover, while I-ACVI requires more iterations than ACVI to reach a given relative error, those iterations are computationally cheaper relative to solving exactly each subproblem; hence, I-ACVI converges much faster than any other method. Fig. 4(c) aims to demonstrate that the subproblems of I-ACVI are suitable for warm-starting. Interestingly, we notice that the choice of the number of iterations at the first step $t = 0$ plays a crucial role. Given that we initialize variables at each iteration with the previous solution, it aids the convergence to solve the subproblems as accurately as possible at $t = 0$. This initial accuracy reduces the initial error, subsequently decreasing the error at all subsequent iterations. We revisit this observation in § 5.3.
### 5.2 Projected-Inexact-ACVI
**2D bilinear game.** In Fig. 2 we show the dynamics of PI-ACVI on the 2D game defined by (2D-BG). Compared to ACVI in Fig. 1, the iterates converge to the solution without following the central path. A comparison with other optimizers is available in App. D.
**MNIST.** In Fig. 3 we compare PI-ACVI and baselines on the (C-GAN) game trained on the MNIST dataset. We employ the greedy projection algorithm (Beck, 2017) for the projections. Since ACVI was derived primarily for handling general constraints, a natural question is how it (and its variants) performs when the projection is fast to compute. Although the projection is fast to compute in these experiments, PI-ACVI converges faster than the projection-based methods, and it gives more consistent improvements over the GDA baseline than projected EG, which only improves upon GDA when the rotational component of $F$ is high.
### 5.3 Effect of Warm-up on I-ACVI and PI-ACVI
**I-ACVI.** The experiments in Fig. 1 motivate increasing the number of iterations $K$ only at the first iteration $t = 0$—denoted $K_0$, so that the early iterates are close to the central path. Recall that the $K$ steps (corresponding to line 7 in Algorithm 1) bring the iterates closer to the central path as $K \to \infty$ (see App. B). After those $K_0$ steps, $\mu$ is decayed, which moves the problem’s solution along the central path. For I-ACVI, from Fig. 4(c)—where $\ell$ is fixed to 10—we observed that regardless of the selected value of $K_+$ for $t > 0$, it can be compensated by a large enough $K_0$.
**PI-ACVI.** We similarly study the impact of the warmup technique for the PI-ACVI method (Algorithm 2). Compared to I-ACVI, this method omits the outer loop over $t \in [T]$. Hence, instead of varying $K_0$, we experiment with increasing the first $\ell$ at iteration $k = 0$, denoted by $\ell_0$. In Fig. 3 we solve the constrained MNIST problem with PI-ACVI using either $\ell_0 = 500$ or $\ell_0 = 100$, $\ell_+$ is set to 20 in both cases. Increasing the $\ell_0$ value significantly improves the convergence speed.
**Conclusion.** We observe consistently that using a large $K_0$ for I-ACVI, or a large $\ell_0$ for PI-ACVI, aids the convergence. Conversely, factors such as $\ell$ and $K_+$ in I-ACVI, or $\ell_+$ in PI-ACVI, exert a comparatively lesser influence. Further experiments and discussions are available in App. D.
### 6 Discussion
We contributed to an emerging line of research on the ACVI method, showing that the last iterate of ACVI converges at a rate of order $O(1/\sqrt{K})$ for monotone VIs. This result is significant because it does not rely on the first-order smoothness of the operator, resolving an open problem in the VI literature. To address subproblems that cannot always be solved in closed form, we introduced an inexact ACVI (I-ACVI) variant that uses warm-starting for its subproblems and proved last iterate convergence under certain weak assumptions. We also proposed P-ACVI for simple inequality constraints and showed that it converges with $O(1/\sqrt{K})$ rate. Our experiments provided insights into I-ACVI’s behavior when subproblems are solved approximately, emphasized the impact of warm-starting, and highlighted advantages over standard projection-based algorithms.
ACKNOWLEDGMENTS
We acknowledge support from the Swiss National Science Foundation (SNSF), grants P2ELP2_199740 and P500PT_214441. The work of T. Yang is supported in part by the NSF grant CCF-2007911 to Y. Chi.
REFERENCES
Yossi Arjevani, Shai Shalev-Shwartz, and Ohad Shamir. On lower and upper bounds for smooth and strongly convex optimization problems. In JMLR, 2016.
Wäiss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games. In AISTATS, pp. 2863–2873, 2020.
Amir Beck. First-Order Methods in Optimization. SIAM, 2017.
Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett., 31(3):167–175, 2003.
Dimitri Bertsekas, Angelia Nedic, and Asuman Ozdaglar. Convex Analysis and Optimization, volume 1. Athena Scientific, 2003.
Radu Ioan Bot, Ernö Robert Csetnek, and Phan Tu Vuong. The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. arXiv:1808.08084, 2020.
Radu Ioan Bot, Ernö Robert Csetnek, and Dang-Khoa Nguyen. Fast OGDA in continuous and discrete time. arXiv preprint arXiv:2203.10947, 2022.
Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge university press, 2004.
Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 2011. ISSN 1935-8237. doi: 10.1561/2200000016.
Yang Cai, Argyris Oikonomou, and Weiqiang Zheng. Tight last-iterate convergence of the extragradient method for constrained monotone variational inequalities. arXiv:2204.09228, 2022.
Tatjana Chavdarova, Gauthier Gidel, François Fleuret, and Simon Lacoste-Julien. Reducing noise in GAN training with variance reduced extragradient. In NeurIPS, 2019.
Tatjana Chavdarova, Matteo Pagliardini, Sebastian U Stich, François Fleuret, and Martin Jaggi. Taming GANs with Lookahead-Minmax. In ICLR, 2021.
Tatjana Chavdarova, Michael I. Jordan, and Manolis Zampetakis. Last-iterate convergence of saddle point optimizers via high-resolution differential equations. In Minimax Theory and its Applications, 2023.
Xiaojun Chen, Liqun Qi, and Defeng Sun. Global and superlinear convergence of the smoothing newton method and its application to general box constrained variational inequalities. Mathematics of Computation, 67(222):519–540, 1998.
Rune Christiansen, Niklas Pfister, Martin Emil Jakobsen, Nicola Gnecco, and Jonas Peters. A causal framework for distribution generalization. arXiv:2006.07433, 2020.
Liang-Ju Chu. On the continuity of trajectories for nonlinear monotone complementarity problems. Scientiae Mathematicae, 1(3):263–275, 1998.
Richard W. Cottle and George B. Dantzig. Complementary pivot theory of mathematical programming. Linear Algebra and its Applications, 1(1):103–125, 1968. ISSN 0024-3795.
Constantinos Daskalakis and Ioannis Panageas. Last-iterate convergence: Zero-sum games and constrained min-max optimization. In ITCS, 2019.
0upMDCx8AA

In Figure 1(a), the pattern confused me. It seems that the precision is not monotonic w.r.t. the bias-conflicting sample ratio. Although the curve corresponding to 0.5% is at the top, the curve for 5% is above that of 1% and 2%. This makes me confused about the implication of this specific result and worried whether it is due to noise. It might be helpful to show the results averaged over multiple runs.
POST-TRAINING RECOVERY FROM INJECTED BIAS WITH SELF-INFLUENCE
Anonymous authors
Paper under double-blind review
ABSTRACT
Learning generalized models from biased data with strong spurious correlations to the class label is an important undertaking toward fairness in deep learning. In the absence of any prior knowledge or supervision of bias, recent studies tackle the problem by presuming the bias severity to be sufficiently high and employing a bias-amplified model trained by empirical risk minimization (ERM) to identify and utilize bias-conflicting samples that are free of spurious correlations. However, insufficient precision in detecting bias-conflicting samples injects erroneous signals during training; as a result, the model can learn malignant biases instead of excluding them. In practice, as the presumption about the magnitude of bias often does not hold, it is important for the model to demonstrate robust performance across a wide spectrum of biases. In this paper, we propose SePT (Self-influence-based Post-Training), a fine-tuning framework leveraging the self-influence score to filter bias-conflicting samples, which yields a pivotal subset with significantly diminished spurious correlations. Our method enables the quick recovery of a biased model from learned bias through fine-tuning with minimal friction. In addition, SePT utilizes the remaining training dataset to adjust the model, thereby maintaining robust performance in situations with weak spurious correlation or even in the absence of it. Experiments on diverse benchmark datasets with a wide range of bias strengths show that SePT is capable of boosting the performance of both bias-injected and state-of-the-art debiased models.
1 INTRODUCTION
Deep neural networks have demonstrated remarkable performance in various fields of machine learning tasks such as image recognition (Dosovitskiy et al., 2021), natural language processing (Brown et al., 2020), and speech recognition (Zhang et al., 2020). Under well-curated benchmarks, they are capable of achieving near-human or superhuman performance (He et al., 2015). However, whether these models can still effectively learn with unfiltered real-world data is yet to be fully answered. One of the detrimental artifacts of unfiltered data is dataset bias (Torralba & Efros, 2011), where task-irrelevant attributes are spuriously correlated with labels in the curated dataset. Learning from data containing malignant biases inclines the model to rely on exploiting them for the designated task instead of learning the task-related features, resulting in a biased model and poor generalization performance (Zhu et al., 2017; Geirhos et al., 2020). For instance, since most seagull images feature the sea as a background, a model may fail to recognize seagulls in different backgrounds, such as meadows and deserts (Sagawa et al., 2020).
To effectively learn from biased datasets, it is important to encourage the model to utilize task-related features rather than malignant bias. A straightforward solution is to utilize explicit supervision or prior knowledge of bias (Kim et al., 2019; Sagawa et al., 2020). Nonetheless, relying on human inspection to alleviate bias can be impractical due to its exorbitant cost and infeasibility in real-world scenarios. Instead, recent studies attempt to glean bias-conflicting samples within the biased trainset by using bias prediction (Liu et al., 2021), loss (Nam et al., 2020; Liu et al., 2023), or gradients (Ahn et al., 2023) obtained from an auxiliary biased model trained with Empirical Risk Minimization (ERM). These samples are then utilized at training to amplify task-related features through loss weighting (Nam et al., 2020) or weighted sampling (Liu et al., 2021), neutralizing the bias. While these approaches can identify bias-conflicting samples to a certain extent, failure to detect them may result in the wrong amplification of bias-aligned samples during training, compromising the task-
relevant attributes. Moreover, they assume that the bias in the trainset is severe enough to induce a strong bias in the ERM-trained auxiliary model. However, such an assumption may not hold in real-world scenarios where the bias is only mildly present, limiting the applicability of the approach.
Reflecting on the problems of training-stage intervention, another recent approach involves post-hoc rectification using auxiliary bias information, which is considerably less intrusive than the prior approaches. Based on the observation that deep neural networks can learn task-related features with ERM even under biased settings (Menon et al., 2021; Kirichenko et al., 2023), these methods retrain the classification layer to rectify the bias while keeping the feature extractor intact, which is referred to as last-layer retraining. However, their effectiveness in the absence of bias supervision or an unbiased validation set has yet to be fully explored, especially on highly biased datasets.
In this sense, it is necessary to develop a bias-recovery training method that can accurately filter bias-conflicting data. Thus, we propose a post-training method to rectify the injected bias persisting in the model. We first attempt to employ the Influence Function (IF), which quantifies the impact of a training sample on the model parameters, to identify bias-conflicting samples. By measuring Self-Influence (SI) (Koh & Liang, 2017), the influence of a sample on itself through its effect on the model parameters, it is possible to detect data that run against the generalization of the model. However, we find that directly applying SI does not yield satisfactory results. Therefore, we propose Bias-Customized Self-Influence (BCSI) to identify bias-conflicting samples. When the training data is biased, the model first attempts to generalize on biases, and bias-conflicting samples exhibit larger BCSI scores than bias-aligned samples. Based on this observation, we first produce a pivotal subset with significantly diminished spurious correlations by measuring the BCSI of the training samples. Subsequently, we fine-tune a biased model for a small number of iterations, effectively rectifying the bias present in the model even after it has been debiased by previous approaches. Furthermore, SePT effectively rectifies existing methods in low-bias scenarios by utilizing both the pivotal set and the remaining samples in the trainset.
Our contributions are threefold:
- We propose Bias-Customized Self-Influence (BCSI) to filter bias-conflicting samples within the trainset with greater accuracy.
- We propose a novel fine-tuning scheme capable of quickly recovering biased models, even those that have undergone state-of-the-art debiasing techniques.
- Our method not only enhances performance in highly biased settings but also rectifies the existing methods that struggle in low-bias scenarios.
## 2 BACKGROUND
### 2.1 LEARNING FROM BIASED DATA
We consider a supervised learning setting with training data $T := \{z_n\}_{n=1}^N$, sampled from the data distribution $Z := (X, Y)$, where the input $X$ is comprised of $X = (S, B, O)$, where $S$ is the task-relevant signal, $B$ is a task-irrelevant bias, and $O$ is the other task-independent feature. Also, $Y$ is the target label of the task, where the label is $y \in \{1, \ldots, C\}$. When the dataset is unbiased, ideally, a model learns to predict the target label using the task-relevant signal: $P_\theta(Y|X) = P_\theta(Y|S, B, O) = P_\theta(Y|S)$. However, when the dataset is biased, the task-irrelevant bias $B$ is highly correlated with the task-relevant features $S$ with probability $p_y$, i.e., $P(B = b_y|S = s_y) = p_y$, where $p_y \geq \frac{1}{C}$. Under this relationship, a data sample $x = (s, b, o)$ is bias-aligned if $(b = b_y) \land (s = s_y)$, and bias-conflicting otherwise.\(^1\) For example, a sample image $x = (s, b, o)$ of the number 0 in a handwritten digit dataset contains the shape signal $s$, which is directly related to the label $y$, and other unrelated information such as the color ($b$) of the digit or the background ($o$). However, if all the images containing the digit 0 in the trainset are colored red, then $b$ is correlated with the digit shape $s$, resulting in bias alignment. When $B$ is easier to learn than $S$, the model may discover a shortcut solution to the given task, learning to predict $P_\theta(Y|X) = P(Y|B)$ instead of $P_\theta(Y|X) = P(Y|S)$. Debiasing a model, in contrast, inclines it towards learning the true task-signal relationship $P_\theta(Y|X) \approx P(Y|S)$.
\(^1\)Here, $\land$ denotes the logical conjunction.
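The following toy sketch (ours, not from the paper) instantiates this setup with abstract attributes: each sample's bias attribute $b$ matches its label with probability $p_{\text{align}}$, and the samples where it does not are the bias-conflicting ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_biased_labels(n=10000, num_classes=10, p_align=0.99):
    """Labels y, bias attribute b, and the mask of bias-conflicting samples."""
    y = rng.integers(0, num_classes, size=n)
    aligned = rng.random(n) < p_align
    # bias-conflicting samples draw their bias attribute from a different class
    offset = rng.integers(1, num_classes, size=n)
    b = np.where(aligned, y, (y + offset) % num_classes)
    return y, b, ~aligned

y, b, conflict = make_biased_labels()
print(conflict.mean())   # ~0.01, i.e., a 1% bias-conflicting ratio
```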
2.2 Influence Functions
The Influence Function (IF; Koh & Liang (2017)) estimates and interprets the effect of each sample in the trainset with respect to the model's prediction. A naive approach for assessing the influence on model predictions is excluding the data point from the trainset and comparing differences in performance, referred to as leave-one-out (LOO) retraining. However, performing LOO retraining for all samples is computationally expensive; instead, an approximation called the influence function has been introduced as an alternative.
Here, we review the formal definition of influence function. Given a training dataset \( T := \{z_n\}_{n=1}^N \) where \( z_n = (x_n, y_n) \), the model parameters \( \theta \) are learned using \( T \) with a loss function \( \mathcal{L} \):
\[
\theta^* := \arg\min_\theta \mathcal{L}(T, \theta) = \arg\min_\theta \sum_{n=1}^N \ell(z_n, \theta)
\]
where \( \ell(z_n, \theta) := -\log(P_\theta(y_n|x_n)) \) is the cross-entropy loss for \( z_n \).
To measure the impact of a single training point \( z \) on the model parameters, we consider the retrained parameter \( \theta^*_{z,\epsilon} \) obtained by up-weighting the loss of \( z \) by \( \epsilon \):
\[
\theta^*_{z,\epsilon} = \arg\min_\theta (\mathcal{L}(T, \theta) + \epsilon \cdot \ell(z, \theta)).
\]
Then, Influence Function, the impact of \( z \) on another sample \( z' \), is defined as the deviation of the retrained loss \( \ell(z', \theta^*_{z,\epsilon}) \) from the original loss \( \ell(z', \theta^*) \):
\[
I_\epsilon(z, z') := \ell(z', \theta^*_{z,\epsilon}) - \ell(z', \theta^*)
\]
For infinitesimally small \( \epsilon \), we have
\[
I(z, z') := \frac{dI_\epsilon(z, z')}{d\epsilon} \bigg|_{\epsilon=0} = \nabla_\theta \ell(z', \theta^*)^\top H^{-1} \nabla_\theta \ell(z, \theta^*)
\]
where \( H := \nabla^2_\theta \mathcal{L}(T, \theta^*) \in \mathbb{R}^{P \times P} \) is the Hessian of the loss function with respect to the model parameters at \( \theta^* \). Intuitively, the influence \( I(z, z') \) measures the effect of \( z \) on \( z' \) through the learning process of the model parameters. Note that the IF is computed once the model has converged, since Equation 4 holds only when the average gradient norm over the trainset is sufficiently small.
Self-influence is introduced as the influence of \( z \) calculated on itself:
\[
I_{\text{self}}(z) := \nabla_\theta \ell(z, \theta^*)^\top H^{-1} \nabla_\theta \ell(z, \theta^*)
\]
which approximates the difference in the loss of \( z \) when \( z \) itself is excluded from training. This metric is effectively used in detecting data with noisy labels (Koh & Liang, 2017) and finding important samples in data pruning (Yang et al., 2023). A high self-influence score indicates that if a sample were omitted from the trainset, making accurate predictions for that sample would become challenging. In other words, the sample contains information distinct from the majority of the trainset. This characteristic of self-influence enables the detection of samples that cannot be explained straightforwardly by the dominant feature-label relationship learned by the model. For example, recent studies leverage this characteristic of influence scores in noisy-label settings by identifying and removing or relabeling the mislabeled training samples (Koh & Liang, 2017; Ting & Brochu, 2018; Wang et al., 2018; 2020; Kong et al., 2022). Moreover, the influence score can be utilized to select important samples in data pruning for efficient training (Sorscher et al., 2022; Yang et al., 2023). These findings have inspired us to propose using influence scores to identify bias-conflicting samples in a biased dataset, as outlined in Section 3.1.
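As an illustration of Equation 5, the sketch below (our own; the data, model, and constants are illustrative choices) computes self-influence for \( \ell_2 \)-regularized logistic regression on toy data, using an explicit damped Hessian; for deep networks, \( H^{-1}\nabla_\theta \ell \) would instead be approximated with Hessian-vector products:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)
y[:5] = 1 - y[:5]                    # flip a few labels: "conflicting" points

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def fit(X, y, lam=1e-2, steps=5000, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):           # full-batch gradient descent to (near) theta*
        w -= lr * (X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w)
    return w

w = fit(X, y)
p = sigmoid(X @ w)
grads = (p - y)[:, None] * X                            # per-sample grad of l(z, theta*)
H = (X.T * (p * (1 - p))) @ X / n + 1e-2 * np.eye(d)    # damped Hessian of the mean loss
Hinv_g = np.linalg.solve(H, grads.T)                    # H^{-1} grad for all samples at once
self_influence = np.einsum('nd,dn->n', grads, Hinv_g)   # Eq. 5, one score per sample
print(np.argsort(self_influence)[-5:])                  # flipped points tend to rank high
```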
3 Self-Influence based Post-Training (SePT)
In this section, we propose Self-Influence based Post-Training (SePT), a debiasing framework that first detects bias-conflicting samples via self-influence and then remedies a biased model via post-hoc fine-tuning. In Section 3.1, we show that the direct application of Self-Influence (SI) is not effective in detecting bias-conflicting samples. Based on this study, we introduce a modified version of SI, called Bias-Customized Self-Influence (BCSI), which demonstrates effectiveness in detecting bias-conflicting samples.
Figure 1: A comprehensive analysis of Self-Influence (SI) and our Bias-Customized Self-Influence (BCSI) in detecting bias-conflicting (Section 2.1) samples across varying bias ratios. Figure 1(a) shows the detection precision of SI and BCSI across various ratios of bias-conflicting samples for CIFAR10C. Figure 1(b) depicts the detection precision of SI across training epochs for different ratios of bias-conflicting samples. In Figure 1(c) and 1(d), each bar indicates the number of samples within a specific range in CIFAR10C (1%).
Figure 2: A performance comparison of Loss, Gradient Norm, Self-Influence (SI) and our Bias-Customized Self-Influence (BCSI). The average precision of loss value, gradient norm, SI, and BCSI are presented in bars, with the error bars indicating the standard error across three repetitions.
Subsequently, using BCSI, we identify and construct a concentrated pivotal subset with a high proportion of bias-conflicting samples. In Section 3.2, to effectively use the pivotal subset to remedy biased models, we investigate the efficacy of last-layer retraining and find that this technique is not effective unless the retraining set exhibits a significantly high ratio of bias-conflicting samples. We therefore propose a fast and lightweight post-hoc fine-tuning scheme to recover the biased model using the pivotal subset. The overall pipeline of SePT is described in Figure 3.
3.1 Filtering bias-conflicting samples with self-influence
Understanding the limitations of self-influence in identifying bias-conflicting samples. In Section 2.2, we discussed the capability of SI to identify samples that contrast with the dominant features learned by the pre-trained model, such as mislabeled samples (Wang et al., 2018; Kong et al., 2022). Since bias-conflicting samples also conflict with the dominant malignant bias features, SI can be considered a metric for detecting bias-conflicting samples in biased datasets. However, we observe that the direct application of SI to biased datasets is not effective. Figure 1(a) shows the detection performance of SI across various ratios of bias-conflicting samples, using the ground-truth count of bias-conflicting samples. Notably, the detection precision of SI is low, falling below 40%, except in the extreme case of 0.5%.
The reason SI underperforms on biased datasets is that bias-conflicting samples possess correct task-related features, unlike mislabeled samples. Mislabeled samples, being erroneously labeled as their name implies, strongly counteract the dominant features of the pre-trained model and are therefore separable by self-influence. Bias-conflicting samples, on the other hand, contain task-related features that are merely under-prioritized during training: they differ from the dominant features but do not counteract them. In other words, in a noisy-label setting the mislabeled sample's features are incompatible with the dominant features, whereas in a biased setting the bias-conflicting sample's features are compatible with them and, ideally, both should be utilized. This characteristic of bias-conflicting samples makes it harder for SI to separate them.
Figure 3: Overview of our framework (SePT). SePT computes the self-influence of the training data using a biased model and then constructs a pivotal set where bias-conflicting samples form a majority. SePT then initializes the last layer of a pre-trained model and trains it using the pivotal set and the remaining data.
For these reasons, as shown in Figure 1(a), in highly malignant scenarios with a bias-conflicting ratio of only 0.5%, self-influence can effectively discriminate, but as the ratio increases, detection performance declines since the model has learned more of the task-related features of bias-conflicting samples for classification. Moreover, Figure 1(b) demonstrates a significant decline as the training epochs increase, due to the model learning the task-related features of bias-conflicting samples.
Adapting SI to suit biased datasets. Motivated by our observations, we propose Bias-Customized Self-Influence (BCSI), which restricts the pre-trained model from learning the task-related features of bias-conflicting samples. Based on the observation of Nam et al. (2020) that the loss of bias-aligned data decreases in the early stage of training, we use the Generalized Cross Entropy (GCE) loss (Zhang & Sabuncu, 2018) to induce the model to exploit easier-to-learn bias-aligned data, thereby improving detection precision. Furthermore, based on the finding of Frankle et al. (2020) that the primary directions of the model's parameter weights are already learned during iterations 500 to 2,000, we train a ResNet18 (He et al., 2016) for only five epochs to achieve better sample separation. We exploit the model trained under these conditions to employ SI as a means of filtering bias-conflicting samples; a sketch of the GCE loss follows.
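Below is a minimal PyTorch sketch of the GCE loss of Zhang & Sabuncu (2018) used to amplify the bias of the detection model; q = 0.7 follows the default recommended in that paper, and the function name is ours.

```python
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    """Generalized Cross Entropy: L_q = (1 - p_y^q) / q. Relative to standard
    cross-entropy it down-weights hard samples, so a model trained with it
    fits the easier-to-learn bias-aligned data first."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # prob. of the label
    return ((1.0 - p_y.pow(q)) / q).mean()
```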
We now validate the capability of BCSI to detect bias-conflicting samples. In Figure 1(a), BCSI shows clear advantages in detection precision over conventional SI. In Figure 1(d), there is a noticeable tendency for bias-conflicting samples to exhibit larger scores than bias-aligned samples. These findings suggest that BCSI can serve as an effective indicator for detecting bias-conflicting samples within a biased trainset. This trend is also observed in other biased datasets, as shown in Appendix A. To further validate the effectiveness of BCSI, we compare its average precision with those of the loss value and the gradient norm. In Figures 2(a)-2(d), we present the detection precision of the loss value, gradient norm, SI, and BCSI (detailed settings are described in Appendix B); results for other datasets are provided in Appendix B. BCSI exhibits dominant or comparable precision relative to the other metrics, which is consistent with findings in previous applications of self-influence (Koh & Liang, 2017; Yang et al., 2023): in noisy-label handling, self-influence outperforms loss values (Koh & Liang, 2017), and in data pruning, selection rules based on naive self-influence are more robust at high pruning ratios than loss-based and gradient-based approaches (Yang et al., 2023).
Influence-based filtering method. We now introduce a filtering method to identify bias-conflicting data. To calculate the SI of the trainset, we randomly initialize a model with an identical architecture. By training the model on the biased data with GCE for five epochs, we obtain an amplified biased model. Using this model, we compute the SI of the trainset with Equation 5 and rank the samples in descending order. Since calculating $H^{-1} := (\nabla^2_\theta \mathcal{L}(T, \theta^*))^{-1}$ is generally intractable for deep neural networks due to their extensive number of parameters, we approximate $H^{-1}$ and the loss gradient $\nabla_\theta \ell(z, \theta^*)$ using only the last layer of the network, following convention (Koh & Liang, 2017; Pruthi et al., 2020). With the obtained SI, we select the top-$k$ subset of samples from each class to form a pivotal subset of bias-conflicting samples as follows:
$$Z_P = \bigcup_{c=1}^{C} \{z_{BCSI-rank(m,c)}\}_{m=1}^{k},$$
where $C$ is the number of classes and $\text{BCSI-rank}(m,c)$ is the dataset index of the $m$-th training sample of class $c$ sorted by bias-customized self-influence.
Figure 4: The figures depict performance under varying bias-conflicting ratios. Figure 4(a) shows the accuracy for last layer retraining across varying bias ratios in pivotal sets. Figure 4(b) depicts performance changes of last layer retraining and fine-tuning under diverse bias ratios. In Figure 4(c), our performance gains are provided.
In the experiments, for robustness to random initialization, we repeat this process three times and use the intersection of the sets obtained across repetitions as the pivotal set (see the sketch below). Since we only train the models for a few epochs, this iterative approach incurs negligible cost compared to fully training an additional model (Nam et al., 2020; Lee et al., 2021; Hwang et al., 2022). The detailed filtering process is provided in Algorithm 1. As a result, the filtering process is capable of constructing a pivotal subset with a high ratio of bias-conflicting samples from a highly biased trainset; specifically, we observe an increase in the bias-conflicting ratio of the pivotal set for each dataset in Appendix D.
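A minimal sketch of this filtering procedure, assuming `labels` holds the trainset labels as a NumPy array and `score_runs` holds the BCSI scores from each of the three short GCE runs; the helper name is ours.

```python
import numpy as np

def pivotal_subset(scores, labels, num_classes, k):
    """Top-k samples per class by BCSI, i.e., the per-class union above."""
    subset = set()
    for c in range(num_classes):
        idx_c = np.where(labels == c)[0]
        subset.update(idx_c[np.argsort(-scores[idx_c])[:k]].tolist())  # descending
    return subset

# Robustness to random initialization: intersect the pivotal sets obtained
# from three independently initialized five-epoch GCE runs.
pivotal = set.intersection(
    *[pivotal_subset(s, labels, num_classes, k) for s in score_runs])
```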
3.2 POST-TRAINING RECOVERY FROM BIAS VIA FINE-TUNING
Fine-tuning with the pivotal subset. After constructing the pivotal subset using SI (Section 3.1), we explore options for utilizing the acquired set for debiasing. A conventional approach is to directly intervene during model training (Sagawa et al., 2020; Nam et al., 2020). However, training a model directly on a small subset most likely results in overfitting. A lightweight alternative is last-layer retraining: Kirichenko et al. (2023) demonstrated that retraining the last layer using an unbiased validation set, while keeping the well-learned feature extractor frozen, can rectify a biased model. However, the pivotal set is neither a validation set nor perfectly unbiased. We therefore investigate the feasibility of last-layer retraining in the absence of an unbiased validation set, which represents a more practical scenario. Considering the observation in Section 3.1, acquiring an ideal set consisting only of bias-conflicting samples is challenging in highly biased settings. Therefore, we evaluate the efficacy of last-layer retraining with respect to the bias-conflicting ratio of the training subset used for retraining. Figure 4(a) shows that a pivotal set with a significantly high bias-conflicting ratio is required to rectify a biased model with last-layer retraining alone. However, re-initializing the classifier layer and fine-tuning the entire model yields better performance in Figure 4(b). This trend is more pronounced in high-bias regimes, where the premise of a well-learned feature extractor may not hold. We therefore opt for fine-tuning to recover a biased model through post-training. Note that fine-tuning, even including the construction of the pivotal set, demands less than half the time of full training, as we only train pre-trained models for a few iterations. A detailed comparison of computational costs is provided in Appendix E.
Handling weak spurious correlations with counterweighting. On the opposite end, a significant aspect of dataset bias is when the spurious correlation is weak or even absent. In real-world scenarios, the robustness of a debiasing method becomes crucial even without prior knowledge about the severity of the bias. A major pitfall of previous methods is their assumption that the trainset contains a sufficient amount of biased samples, which in turn produces a biased ERM-trained model to be used against itself for debiasing. When this assumption is unreliable or invalid, as when learning from an unbiased dataset, the scheme may backfire and amplify malignant bias, primarily because the ERM-trained model is itself unbiased.
To address these concerns, we leverage not only the pivotal subset but also the remaining samples during the fine-tuning stage in order to incorporate the task-related features contained in both bias-conflicting and bias-aligned samples. Specifically, we formulate a counterweight cross-entropy loss by drawing a mini-batch from the remaining trainset. Finally, we train the model using both the
Table 1: The average and standard error of accuracy over three runs, for bias-conflicting ratios of 0.5%, 1%, 2%, and 5% on CMNIST and CIFAR10C, and 0.5% on BFFHQ. *Ours+X* indicates SePT applied to a model initially trained with method X. The best accuracy per column is annotated in **bold**; '–' denotes a result that is not available. ✔️ indicates that a given method uses bias information while ❌ denotes that it does not.

| Method | Bias Info | CMNIST 0.5% | CMNIST 1% | CMNIST 2% | CMNIST 5% | CIFAR10C 0.5% | CIFAR10C 1% | CIFAR10C 2% | CIFAR10C 5% | BFFHQ 0.5% |
|---------------|-----------|-------------|-----------|-----------|-----------|---------------|--------------|--------------|--------------|------------|
| GroupDRO | ✔️ | 63.12 ± 6.78 | 76.30 ± 8.42 | – | – | 33.44 ± 5.80 | 45.81 ± 5.72 | – | – | 54.80 ± 5.54 |
| Vanilla | ❌ | 38.92 ± 7.74 | 56.81 ± 8.45 | 69.19 ± 8.85 | 86.00 ± 8.35 | 20.50 ± 8.54 | 24.91 ± 8.33 | 28.99 ± 8.42 | 40.24 ± 8.28 | 53.53 ± 8.05 |
| ReBias | ✔️ | 70.47 ± 8.14 | 87.40 ± 7.88 | 92.91 ± 8.15 | **96.96 ± 8.04** | 22.27 ± 8.41 | 25.72 ± 8.20 | 31.66 ± 8.43 | 43.43 ± 8.41 | 56.80 ± 8.56 |
| LfF | ❌ | 66.53 ± 8.24 | 78.10 ± 8.97 | 74.69 ± 8.40 | 76.72 ± 8.94 | 25.28 ± 8.28 | 31.15 ± 8.67 | 38.64 ± 8.39 | 46.15 ± 8.54 | 55.33 ± 8.69 |
| DFA | ❌ | **89.64 ± 8.40** | **94.60 ± 8.81** | 91.69 ± 8.53 | 95.59 ± 8.43 | 27.13 ± 8.66 | 31.26 ± 8.71 | 37.96 ± 8.71 | 44.99 ± 8.84 | 52.07 ± 9.01 |
| BiaSwap | ❌ | 85.76 ± 8.34 | 83.74 ± 8.59 | 85.29 ± 9.08 | 90.85 ± 9.11 | 32.54 ± 8.52 | 35.25 ± 8.41 | 41.62 ± 8.29 | – | – |
| BPA | ❌ | 54.52 ± 8.39 | 72.63 ± 8.27 | 78.52 ± 8.59 | 85.30 ± 8.93 | 25.50 ± 8.05 | 26.86 ± 8.69 | 27.47 ± 8.46 | 34.29 ± 8.20 | 51.40 ± 8.98 |
| SelecMix | ❌ | 52.60 ± 8.65 | 72.16 ± 8.79 | 80.77 ± 8.77 | 86.86 ± 8.03 | 37.63 ± 8.81 | 40.14 ± 8.42 | 47.54 ± 8.59 | 54.86 ± 8.76 | 63.07 ± 8.32 |
| Ours+Vanilla | ❌ | 42.09 ± 8.72 | 62.38 ± 8.79 | 74.34 ± 8.82 | 86.00 ± 8.35 | 26.61 ± 8.38 | 33.47 ± 8.29 | 40.75 ± 8.37 | 49.30 ± 8.46 | 56.00 ± 8.07 |
| Ours+LfF | ❌ | 58.84 ± 8.36 | 72.69 ± 8.25 | 79.59 ± 8.36 | 84.78 ± 8.20 | 27.63 ± 8.00 | 35.29 ± 8.21 | 43.36 ± 8.08 | 51.95 ± 8.29 | 57.13 ± 8.46 |
| Ours+DFA | ❌ | 76.80 ± 8.09 | 91.17 ± 8.22 | **93.08 ± 8.59** | 96.21 ± 8.53 | 25.66 ± 8.85 | 33.53 ± 8.21 | 42.80 ± 8.81 | 52.61 ± 8.54 | 56.60 ± 8.83 |
| Ours+SelecMix | ❌ | 51.98 ± 8.49 | 71.62 ± 8.96 | 80.79 ± 8.60 | 87.48 ± 8.52 | **38.74 ± 8.36** | **46.18 ± 8.33** | **52.70 ± 8.40** | **59.66 ± 8.31** | **65.80 ± 8.12** |
Table 2: Performance of baselines and SePT on low-bias regimes.
| Method | CIFAR10C |
|-----------------|----------|
| | 20% | 30% | 50% | 70% | 90%(unbiased) |
| Vanilla | 59.47 ± 8.59 | 65.64 ± 8.51 | 71.33 ± 8.09 | 74.90 ± 8.25 | 76.93 ± 8.26 |
| LfF | 59.78 ± 8.85 | 60.56 ± 8.96 | 60.35 ± 8.37 | 62.52 ± 8.49 | 63.42 ± 8.63 |
| DFA | 60.34 ± 8.46 | 64.24 ± 8.44 | 65.97 ± 8.80 | 64.97 ± 8.20 | 66.59 ± 8.20 |
| SelecMix | 62.05 ± 8.26 | 62.17 ± 8.35 | 62.52 ± 8.54 | 66.23 ± 8.09 | 65.81 ± 8.96 |
| Ours+Vanilla | 62.78 ± 8.67 | 65.61 ± 8.77 | 70.61 ± 8.62 | 73.20 ± 8.35 | 73.57 ± 8.16 |
| Ours+LfF | 64.46 ± 8.29 | 64.40 ± 8.27 | 65.82 ± 8.15 | 67.29 ± 8.17 | 68.15 ± 8.76 |
| Ours+DFA | 66.30 ± 8.48 | 68.13 ± 8.45 | 72.79 ± 8.38 | 73.56 ± 8.15 | 70.36 ± 8.08 |
| Ours+SelecMix | 66.67 ± 8.43 | 64.51 ± 8.44 | 66.45 ± 8.26 | 69.97 ± 8.21 | 69.29 ± 8.75 |
cross-entropy loss on the pivotal subset and the counterweight loss on the remaining trainset:
\[ \mathcal{L}(Z_P, Z_S) := \mathcal{L}_{CE}(Z_P) + \lambda \mathcal{L}_{CE}(Z_S) \]
(6)
where \( Z_P \) is the pivotal subset, \( Z_S \sim Z \setminus Z_P \) is a randomly drawn mini-batch from the remaining trainset, and \( \mathcal{L}_{CE} \) is the mean cross-entropy loss. We set \( \lambda = 0.1 \) for all experiments. The overall process is described in Algorithm 2.
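A minimal sketch of one post-training step under Equation 6; re-initializing the classification head before fine-tuning follows Section 3.2, and the initializer in the comment is only an illustrative choice.

```python
import torch.nn as nn
import torch.nn.functional as F

def post_train_step(model, pivotal_batch, remaining_batch, optimizer, lam=0.1):
    """One step of Equation 6: cross-entropy on the pivotal subset plus the
    counterweighted cross-entropy on a mini-batch from the remaining trainset."""
    (xp, yp), (xs, ys) = pivotal_batch, remaining_batch
    loss = F.cross_entropy(model(xp), yp) + lam * F.cross_entropy(model(xs), ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Before post-training, re-initialize the last layer (illustrative initializer):
# nn.init.kaiming_normal_(model.fc.weight); nn.init.zeros_(model.fc.bias)
```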
4 EXPERIMENTS
In this section, we present a series of experiments in which we apply our method to models trained with ERM and with existing debiasing approaches, including the current state of the art, to demonstrate the effectiveness of SePT. We validate our method and its individual components following prior conventions. Below, we provide a brief overview of our experimental setting in Section 4.1, followed by empirical results and detailed analyses in Sections 4.2, 4.3, and 4.4.
4.1 EXPERIMENTAL SETTINGS
Datasets. For fair evaluation, we follow the conventions of using benchmark biased datasets (Nam et al., 2020). The Colored MNIST dataset (CMNIST) is a synthetically modified MNIST (Deng, 2012) in which the labels are correlated with colors. We conduct benchmarks on bias-conflicting ratios of \( r \in \{0.5, 1, 2, 5\} \) (%). CIFAR10C is a synthetically modified CIFAR10 (Krizhevsky et al., 2009) dataset with common corruptions. To test our method in low-bias scenarios, we expand our scope and conduct experiments with ratios \( r \in \{0.5, 1, 2, 5, 20, 30, 50, 70, 90 \text{ (unbiased)}\} \). Biased FFHQ (BFFHQ) (Lee et al., 2021) is a curated Flickr-Faces-HQ (FFHQ) (Karras et al., 2019)
dataset, which consists of facial images where age and gender exhibit a spurious correlation. The Waterbirds dataset (Wah et al., 2011) consists of bird images whose backgrounds are spuriously correlated with the bird types to be classified. Non-I.I.D. Image dataset with Contexts (NICO) (He et al., 2021) is a natural image dataset for out-of-distribution classification. We follow the setting of Wang et al. (2021), inducing long-tailed bias proportions within each class to simulate diverse bias ratios in a single benchmark dataset. A detailed description of these datasets is provided in Appendix H.
Baselines. We validate SePT by combining it with various debiasing approaches. GroupDRO (Sagawa et al., 2020) uses group labels to debias. ReBias (Bahng et al., 2020) uses an auxiliary model with expertise on a specific bias. LfF (Nam et al., 2020) detects bias-conflicting samples based on the assumption that the trainset is highly biased. DFA (Lee et al., 2021) and BiaSwap (Kim et al., 2021) augment bias-conflicting samples. BPA (Seo et al., 2022) utilizes a clustering method to identify pseudo-attributes. SelecMix (Hwang et al., 2022) identifies and mixes a bias-contradicting pair within the same class while detecting and mixing a bias-aligned pair from different classes. Note that we adopt SelecMix+LfF rather than SelecMix since SelecMix+LfF exhibits superior performance (Hwang et al., 2022). A detailed explanation of the baselines is provided in Appendix H.2.
Evaluation protocol. Following the baselines, we report accuracy on unbiased test sets for CMNIST and CIFAR10C, minority-group accuracy for BFFHQ, and worst-group accuracy for Waterbirds. Note that we use the models from the final epoch to compute performance for all experiments. A detailed experimental setting is provided in Appendix H.
4.2 Results in Highly Biased Scenarios
We evaluate SePT by measuring the degree to which it recovers baseline models when combined with them on the benchmark datasets. In Table 1, we significantly enhance the performance of the baselines on the majority of datasets under various experimental settings. To the best of our knowledge, Ours+SelecMix achieves state-of-the-art accuracy on CIFAR10C. Interestingly, the performance gain grows as the ratio of bias-conflicting samples increases in CIFAR10C. We conjecture that fine-tuning becomes more effective on CIFAR10C (2%) and (5%) because the bias-conflicting purity of the pivotal set increases, as shown in Section 3.1. For CMNIST, performance decreases after combining our method; as shown in Table 5, this is caused by low detection precision.
4.3 Results in Low-Bias Scenarios
Since the baseline methods strongly intensify the learning signals of bias-conflicting samples, they are likely to fail on mildly biased datasets. We validate the baselines on CIFAR10C under various ratios of bias-conflicting samples in Tables 2 and 3. All the baselines exhibit drastic performance deterioration compared to Vanilla when the bias-conflicting ratio is high. In contrast, our method significantly rectifies the remaining biases within a model even on mildly biased datasets, with the exception of Ours+Vanilla; although its performance decreases slightly, the accuracy gap is much smaller than for the other baselines. Since fine-tuning starts from pre-trained parameters and thus minimizes friction, our approach can remedy biases within a model over a wider range of bias
ratios, as in Figure 5(a). On Waterbirds, training SelecMix is intractable since this method simultaneously trains three ResNet50 models (He et al., 2016); 'OOM' denotes 'out-of-memory'. The graphs for other methods are provided in Appendix C.
4.4 ABLATION STUDY
We examine the sensitivity of hyperparameters such as the number of selected samples per class ($k$) in the pivotal set and the weight for the remaining data in post-training ($\lambda$). In Figure 5(b), performance decreases slightly as $k$ increases on CIFAR10C (0.5%), whereas accuracy on CIFAR10C (5%) increases. Since there are only a few bias-conflicting samples per class in CIFAR10C (0.5%), selecting additional samples dilutes the ratio of bias-conflicting data in the pivotal set, leading to a performance drop. In Figure 5(c), we observe a marginal accuracy drop as $\lambda$ increases on CIFAR10C (0.5%), while CIFAR10C (90%) experiences a performance increase. These results indicate that learning from the remaining samples is beneficial on CIFAR10C (90%), fostering the model to capture task-relevant signals. An analysis of the intersections is provided in Appendix F.
5 RELATED WORK
Debiasing deep neural networks. Research on mitigating bias has focused on modulating task-related information and malignant biases during training. Early works relied on human knowledge through direct supervision or implicit information about the bias (Sagawa et al., 2020; Li & Vasconcelos, 2019; Hong & Yang, 2021; Han & Tsvetkov, 2021), which is often impractical due to its cost. To address this, several studies have focused on identifying and utilizing bias-conflicting samples without relying on human knowledge. These methods fall into three main streams: loss modification, sampling methods, and data augmentation. Loss modification methods (Nam et al., 2020; Liu et al., 2023) amplify the learning signals of (estimated) bias-conflicting samples by modifying the learning objective. Sampling methods (Liu et al., 2021; Ahn et al., 2023) overcome dataset bias by sampling (estimated) bias-conflicting data more frequently. Data augmentation approaches (Lee et al., 2021; Lim et al., 2023; Jung et al., 2023) synthesize samples with biases distinct from the inherent biases of the original data. Recently, based on the observation that biases in classification layers are more severe than in feature extractors, several approaches focus on rectifying the last layers (Kim et al., 2022; Menon et al., 2021; Kirichenko et al., 2023). In particular, Kirichenko et al. (2023) show that a model learns both task-related features and spurious correlations, and propose retraining the classification layer using an unbiased validation set. However, leveraging an unbiased validation set is impractical, as previously mentioned.
Influence functions. The Influence Function (IF; Koh & Liang (2017)) and its approximations (Pruthi et al., 2020; Schioppa et al., 2022) have been utilized in various deep learning tasks by measuring the importance of training samples and the relationships between them. One common application of IF is quantifying memorization via self-influence, i.e., the increase in loss when a training sample is excluded (Pruthi et al., 2020; Feldman & Zhang, 2020). Similarly, self-influence can be used to identify mislabeled samples in the training dataset since they cannot be reliably recovered once removed. Alternatively, Sorscher et al. (2022) eliminate low self-influence samples for computational efficiency, as they can be generalized from other samples when removed. The sign of the influence has also been utilized to identify whether a training sample is beneficial or harmful by measuring its influence on a validation dataset.
6 CONCLUSION
In this paper, we thoroughly examined the tendency of self-influence in a biased dataset. We discovered that simply applying self-influence would not be sufficient to detect bias-conflicting samples. Based on this observation, we introduced a strategy to exploit self-influence in identifying bias-conflicting samples. We also demonstrated that fine-tuning is more effective in a highly biased dataset and suggested an approach to rectify biases within a pre-trained model under any given ratio of bias-conflicting samples. We show that our method consistently enhances the existing debiasing approaches across benchmark datasets under various ratios of bias-conflicting samples.
REFERENCES
Sumyeong Ahn, Seongyoon Kim, and Se-Young Yun. Mitigating dataset bias by using per-sample gradient. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=7mgUec-7GMv.
Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. Learning de-biased representations with biased representations. In International Conference on Machine Learning (ICML), volume 119 of Proceedings of Machine Learning Research, pp. 528–539. PMLR, 2020.
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Li Deng. The mnist database of handwritten digit images for machine learning research [best of the web]. IEEE signal processing magazine, 29(6):141–142, 2012.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. Conference on Neural Information Processing Systems (NeurIPS), 33:2881–2891, 2020.
Jonathan Frankle, David J. Schwab, and Ari S. Morcos. The early phase of neural network training. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HklliRNFwS.
Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. Shortcut learning in deep neural networks. Nat. Mach. Intell., 2(11):665–673, 2020.
Xiaochuang Han and Yulia Tsvetkov. Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates. In Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 4398–4409, 2021.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. IEEE Computer Society, 2016.
Yue He, Zheyan Shen, and Peng Cui. Towards non-iid image classification: A dataset and baselines. Pattern Recognition, 110:107383, 2021.
Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. CoRR, abs/1903.12261, 2019. URL http://arxiv.org/abs/1903.12261.
Youngkyu Hong and Eunho Yang. Unbiased classification through bias-contrastive and bias-balanced learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 26449–26461. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/de8aa43e5d5fa8536cf23e54244476fa-Paper.pdf.
|
iSAgvYhZzg
|
- The paper does not describe much about the actual training details, in that sense, to me, the proposed method is still a kind of BC, where the target decoding is optimized towards mimicking the golden action sequences. (Unless some RL or other mechanism is used here, which is not described.)
|
YOU ONLY LOOK AT SCREENS:
MULTIMODAL CHAIN-OF-ACTION AGENTS
Anonymous authors
Paper under double-blind review
ABSTRACT
Autonomous user interface (UI) agents aim to facilitate task automation by interacting with the user interface without manual intervention. Recent studies have investigated eliciting the capabilities of large language models (LLMs) for effective engagement in diverse environments. To align with the input-output requirement of LLMs, existing approaches are developed under a sandbox setting where they rely on external tools and application-specific APIs to parse the environment into textual elements and interpret the predicted actions. Consequently, those approaches often grapple with inference inefficiency and error propagation risks. To mitigate the challenges, we introduce Auto-UI, a multimodal solution that directly interacts with the interface, bypassing the need for environment parsing or reliance on application-dependent APIs. Moreover, we propose a chain-of-action technique—leveraging a series of intermediate previous action histories and future action plans—to help the agent decide what action to execute. We evaluate our approach on a new device-control benchmark AITW with $30K$ unique instructions, spanning multi-step tasks such as application operation, web searching, and web shopping. Experimental results show that Auto-UI achieves state-of-the-art performance with an action type prediction accuracy of 90% and an overall action success rate of 74%. Code is publicly available at Anonymous.
1 INTRODUCTION
Building intelligent autonomous agents that are capable of task planning, decision making, and action execution in a particular environment is a long-standing goal of artificial intelligence (AI) (Searle, 1969; Wooldridge & Jennings, 1995; Maes, 1995; Hendler, 1999). The advent of large language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023) has opened up promising opportunities for developing autonomous agents that assist users in completing tasks in distinct environments such as operating systems, specific applications, and web browsers (Adept, 2022; Rawles et al., 2023; Liu et al., 2023; Zhou et al., 2023; Wang et al., 2023c).
Recent studies have explored prompt engineering (Richards, 2023; Nakajima, 2023; Reworkd, 2023; Sumers et al., 2023; Liu et al., 2023) and fine-tuning techniques (Rawles et al., 2023; Wen et al., 2023; Sun et al., 2022) to elicit the capability of language models to execute actions in interactive environments. However, there are at least two major challenges that have limited real-world applications of autonomous agents.
First, existing approaches commonly rely on external tools such as optical character recognition (OCR) and icon detectors (Zhang et al., 2021; Sunkara et al., 2022) to parse the environment into textual elements (e.g., HTML layouts) as inputs to a language model (Figure 1(a)) (Rawles et al., 2023; Wen et al., 2023). On the one hand, the parsed elements generate lengthy inputs, leading to inference inefficiency. Since computational latency is a key measure in deployment, lengthy inputs increase inference cost and may even exceed the input length limit of the language model. On the other hand, parsing the visual environment into textual elements is also prone to error propagation and information loss, because parsing mistakes are inevitable when using external tools.
Second, most existing approaches are under the sandbox setting that requires accessing internal APIs to interact with the environment (Zhou et al., 2023; Gur et al., 2023), e.g., using a JavaScript element selection on a webpage or a Python interpreter to execute actions. However, in practice, the API interface is often inaccessible in third-party applications (Apps).
These challenges have motivated more advanced techniques that are capable of first-principles thinking (Aristotle; Irwin, 1989), allowing direct interactions on the screen without needing access to intermediate environment parsing or internal application-dependent APIs (Figure 1(b)). To address the challenges, we introduce Auto-UI, a multimodal approach that directly interacts with the interface.
To improve the agent’s action prediction capability, we propose a novel chain-of-action technique, where a chain of action is a series of intermediate previous action histories and future action plans that lead to action prediction.
We evaluate Auto-UI on a new device-control benchmark AITW (Rawles et al., 2023) with 30K unique instructions, spanning multi-step tasks of application operation, web searching, and web shopping. Experimental results show that Auto-UI achieves state-of-the-art performance with an action type prediction accuracy of 90% and an action success rate of 74%.
In summary, our work makes the following technical contributions:
(i) We introduce Auto-UI, a multimodal agent for autonomous UI control that can directly interact with the screens, thus circumventing the constraints of environment parsing and application-specific API access.
(ii) We propose a chain-of-action technique that leverages the previously executed actions and future action plans to help the agent decide what action to execute at each step.
(iii) Auto-UI achieves state-of-the-art performance with an action type prediction accuracy of 90% and an action success rate of 74%. Notably, Auto-UI can infer an action in less than one second.
2 RELATED WORK
Our work falls into the field of language agents. This section will first review the recent progress in building language agents and then discuss the approaches to conduct user interface control with language agents.
2.1 LANGUAGE AGENTS
Language agents refer to agents that can follow user instructions and interact with environments to complete tasks. Such agents expand the landscape of language models to completing tasks in specific fields, including application operation, web searching, and web shopping. There are two popular
types of language agents, autonomous agents and communicative agents. Autonomous agents aim to assist humans in achieving specific goals in the real world. Typical examples of autonomous agents are AutoGPT (Richards, 2023), BabyAGI (Nakajima, 2023), and AgentGPT (Reworkd, 2023). In contrast, communicative agents are personalized and socialized agents (Park et al., 2023; Wang et al., 2023b; Zhu et al., 2023; Hong et al., 2023) with human behaviors that can communicate and collaborate with each other. They are often deployed in immersive environments. Inspired by their potential in real-world applications, this work focuses on autonomous agents, especially those working on mobile devices. We aim to assist users by completing multi-step tasks (e.g., manipulating Apps, web shopping, and question answering) without any manual intervention. Given a user instruction in natural language, the agent is required to interpret the instruction and execute actions by directly controlling the user interface. Due to the requirements of real-world applications, the agent is expected to be both effective and efficient.
2.2 UI Control with Natural Language
Recently, LLMs have shown promise in building autonomous UI agents with abilities of instruction following (Sanh et al., 2021; Taori et al., 2023b; Chiang et al., 2023) and chain-of-thought (CoT) prompting (Nye et al., 2022; Wei et al., 2022). In particular, CoT prompting (Wei et al., 2022; Kojima et al., 2022; Zhang et al., 2023a) elicits LLMs' capacities for step-by-step planning, decision making, and action execution, which have been shown to be effective in UI control tasks (Rawles et al., 2023). However, the task environments are graphical user interfaces (GUIs) rather than natural language that LLMs can directly process. Therefore, the GUI states and actions must be converted to textual formats to conform to the input and output formats of LLMs, e.g., by parsing the UI screens with icon recognition and OCR (Zhang et al., 2021; Sunkara et al., 2022) and organizing the parsed elements into HTML layouts. As a compromise, existing approaches are restricted to a sandbox setting where they rely on external tools (Rawles et al., 2023; Wen et al., 2023) and application-specific APIs (Zhou et al., 2023; Gur et al., 2023) for environment parsing and action interpretation, and thus commonly suffer from inference inefficiency and error propagation. Although some studies have considered multimodal architectures to process inputs in different modalities (Sun et al., 2022), they still rely on fine-grained environment parsing to ensure competitive performance. In contrast, this work is established upon first-principles thinking: it directly reads the UI without additional environment parsing and outputs actions (e.g., action type, gesture coordinates, and typed text) that can be executed without any extra APIs.
3 METHODOLOGY
In this section, we will first introduce the basic concepts for the UI control task and then describe the design of our proposed Auto-UI framework.
3.1 Problem Formalization
Given a user instruction (also known as a goal), the agent needs to complete the task with multiple steps of interactions. The entire process is called an episode, which is composed of a series of screens. For each step in the episode, the agent will be provided with a screenshot, and the agent is required to predict the action until the task is complete. Detailed examples can be found in Appendix A.2.
3.2 Framework Overview
Auto-UI is a multimodal agent that decides what action to take given the input screenshot and a user instruction. To empower the agent’s decision making capability, we introduce a chain-of-action approach by leveraging a series of intermediate previous action histories and future action plans to predict actions.
The model architecture of Auto-UI is illustrated in Figure 2. On a high level, Auto-UI consists of three stages. First, we acquire encoded features from both vision and language inputs. Specifically, the vision input, i.e., a screenshot, is encoded by a frozen vision encoder. Meanwhile, the language input, consisting of the goal and a chain of previous action histories—each history contains a tuple {action type, touch point, lift point, and typed text}, is encoded by a language encoder. Second, the
encoded vision and language representations are integrated by a self-attention module. Third, the fused representation is fed to the decoder to generate a chain of future action plans (i.e., action types to execute in future steps) followed by action prediction. A chain of action consists of two parts in the procedure above: a chain of previous action histories on the input side and a chain of future action plans on the output side. In the following, we describe the entire procedure in detail.
**Encoding** Suppose that an episode consists of $k$ steps of interactions. Given a screenshot $X_{\text{screen}} \in \mathbb{R}^{h \times w \times 3}$ with height $h$ and width $w$ at step $t \in [1, k]$, we first feed it to a frozen image encoder (e.g., BLIP-2 (Li et al., 2023)) and extract vision features $H_{\text{screen}} \in \mathbb{R}^{1 \times d_v}$ where $d_v$ is the dimension of the vision features. Additionally, we leverage a language encoder to extract the language features $H_{\text{language}} \in \mathbb{R}^{n \times d_l}$ of the input goal $X_{\text{goal}}$ where $n$ is the number of tokens and $d_l$ is the dimension of the language features. If $t > 1$, there is a chain of action histories already executed before step $t$. We denote the chain of action histories as $X_{\text{history}} = [m_1, \ldots, m_{t-1}]$ where $m_i$ contains a tuple of action type, touch point, lift point, and typed text. Otherwise, if $t = 1$, $X_{\text{history}}$ is set empty:
$$X_{\text{history}} = \begin{cases} [m_1, \ldots, m_{t-1}], & \text{if } t > 1 \\ \langle\text{empty}\rangle, & \text{otherwise} \end{cases}$$
We concatenate $X_{\text{goal}}$ and $X_{\text{history}}$ as the input to the language encoder: $X_{\text{language}} = \{X_{\text{goal}}, X_{\text{history}}\}$.
Then, we obtain the encoded representations of the vision and language inputs as follows:
$$H_{\text{screen}} = \text{VisionExtractor}(X_{\text{screen}}),$$
$$H'_{\text{screen}} = WH_{\text{screen}},$$
$$H_{\text{language}} = \text{LanguageEncoder}(X_{\text{language}}),$$
where $W$ is a trainable projection matrix to convert $H_{\text{screen}}$ into the same dimensionality as $H_{\text{language}}$.
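To make the encoding stage concrete, here is a minimal PyTorch sketch; `vision_extractor` and `language_encoder` stand in for the frozen BLIP-2 image encoder and the text encoder of the T5-style backbone, the assumed output shapes are noted in the comments, and all names are ours.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Frozen vision features projected into the language space via the
    trainable matrix W, alongside the encoded language input."""
    def __init__(self, vision_extractor, language_encoder, d_v, d_l):
        super().__init__()
        self.vision_extractor = vision_extractor
        self.language_encoder = language_encoder
        self.proj = nn.Linear(d_v, d_l, bias=False)          # trainable W

    def forward(self, screen, language_tokens):
        with torch.no_grad():                                # vision encoder frozen
            h_screen = self.vision_extractor(screen)         # (B, 1, d_v)
        h_screen = self.proj(h_screen)                       # (B, 1, d_l)
        h_language = self.language_encoder(language_tokens)  # (B, n, d_l)
        return h_screen, h_language
```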
**Interaction** We correlate $H'_{\text{screen}}$ and $H_{\text{language}}$ with a single-head self-attention network (Vaswani et al., 2017), where the query ($Q$), key ($K$), and value ($V$) are $H_{\text{language}}$, $H'_{\text{screen}}$, and $H'_{\text{screen}}$, respectively. The attention output $H_{\text{attn}} \in \mathbb{R}^{n \times d_k}$ is defined as: $H_{\text{attn}} = \text{Softmax}(\frac{QK^\top}{\sqrt{d_k}})V$, where $d_k$ is the same as the dimension of $H_{\text{language}}$ because a single head is used.
Then, a gated fusion mechanism is adopted following prior studies (Zhang et al., 2020; Wu et al., 2021; Zhang et al., 2023b) to fuse $H_{\text{language}}$ and $H_{\text{attn}}$: We have the fused output $H_{\text{fuse}} \in \mathbb{R}^{n \times d}$ by:
$$\lambda = \text{Sigmoid}(W_f H_{\text{language}} + W_v H_{\text{attn}}),$$
$$H_{\text{fuse}} = (1 - \lambda) \cdot H_{\text{language}} + \lambda \cdot H_{\text{attn}},$$
where $W_f$ and $W_v$ are learnable parameters.
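The interaction stage can be sketched as follows: single-head cross-attention with the language features as the query and the projected screen features as key and value, followed by the sigmoid-gated fusion above. The module name is ours.

```python
import math
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Cross-attention with Q = H_language and K = V = H'_screen, followed by
    the gated fusion H_fuse = (1 - lambda) * H_language + lambda * H_attn."""
    def __init__(self, d):
        super().__init__()
        self.w_f = nn.Linear(d, d, bias=False)   # W_f in the gate
        self.w_v = nn.Linear(d, d, bias=False)   # W_v in the gate

    def forward(self, h_language, h_screen):
        # H_attn = Softmax(Q K^T / sqrt(d_k)) V
        scores = h_language @ h_screen.transpose(-1, -2) / math.sqrt(h_language.size(-1))
        h_attn = scores.softmax(dim=-1) @ h_screen           # (B, n, d)
        # lambda = sigmoid(W_f H_language + W_v H_attn)
        lam = torch.sigmoid(self.w_f(h_language) + self.w_v(h_attn))
        return (1.0 - lam) * h_language + lam * h_attn
```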
Decoding The fused representation $H_{\text{fuse}}$ is fed to a Transformer decoder to generate the target predictions in a string format. The target predictions consist of a chain of future action plans $Y_{\text{plan}}$ and the current action prediction $Y_{\text{action}}$ separated by specific prompts: \{Action Plan: $Y_{\text{plan}}$, Action Decision: $Y_{\text{action}}$\}. Concretely, $Y_{\text{plan}}$ is a chain of action types to execute in future steps: $Y_{\text{plan}} = [\text{action\_type}_1, \ldots, \text{action\_type}_k]$. $Y_{\text{action}}$ contains four components: $Y_{\text{action}} = \{\text{“action\_type”: <action\_type>}, \text{“touch\_point”: <touch\_point>}, \text{“lift\_point”: <lift\_point>}, \text{“typed\_text”: <typed\_text>}\}$. These four components will be explained in the following subsection.
3.3 Coordinate Normalization
Recall that a target action consists of four components: action type, touch point, lift point, and typed text. We consider six action types: dual-point gesture, type, go_back, go_home, enter, and status_complete. A dual-point gesture comprises a touch point and a lift point with $[y, x]$ coordinates. The gesture actions ensure a flexible action space and can represent clicks and scrolls at arbitrary locations. For example, a gesture action \{“touch\_point”: [0.7761, 0.7089], “lift\_point”: [0.7761, 0.7089]\} means clicking at the coordinate [0.7761, 0.7089], while a gesture action \{“touch\_point”: [0.1898, 0.4477], “lift\_point”: [0.8242, 0.4077]\} means scrolling down. A type action means typing a text and the text is placed in the <typed\_text> field. The other action types, i.e., go_back, go_home, enter, and status_complete are system actions, whose corresponding <touch\_point>, <lift\_point> fields are filled with -1, and the <typed\_text> is empty.
We observe that high-precision coordinates are not necessary for representing a click or scroll action. Therefore, we apply normalized coordinate values, which helps accelerate convergence and mitigates coordinate ambiguity. The normalization is applied to click and scroll actions. For click actions, we keep four decimal places. For scroll actions, we first determine the scroll direction from the touch and lift points, and then transform them into fixed directional coordinates as follows: "up": \{(0.8, 0.5), (0.2, 0.5)\}, "down": \{(0.2, 0.5), (0.8, 0.5)\}, "left": \{(0.5, 0.8), (0.5, 0.2)\}, "right": \{(0.5, 0.2), (0.5, 0.8)\}, where in each pair the first element is the touch point and the second is the lift point. We provide examples of target actions in Appendix A.3.
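The normalization can be sketched as follows; the threshold used to distinguish clicks from scrolls is our illustrative choice, not a value from the paper.

```python
import math

# Fixed directional coordinates: (touch_point, lift_point) in [y, x] order.
SCROLL_POINTS = {
    "up":    ([0.8, 0.5], [0.2, 0.5]),
    "down":  ([0.2, 0.5], [0.8, 0.5]),
    "left":  ([0.5, 0.8], [0.5, 0.2]),
    "right": ([0.5, 0.2], [0.5, 0.8]),
}

def normalize_gesture(touch, lift, click_threshold=0.04):
    """Clicks keep four-decimal coordinates; scrolls snap to the fixed
    directional coordinates above. Coordinates are normalized [y, x]."""
    if math.dist(touch, lift) <= click_threshold:            # treat as a click
        return [round(c, 4) for c in touch], [round(c, 4) for c in lift]
    dy, dx = lift[0] - touch[0], lift[1] - touch[1]
    if abs(dy) >= abs(dx):                                   # vertical scroll
        direction = "down" if dy > 0 else "up"
    else:                                                    # horizontal scroll
        direction = "right" if dx > 0 else "left"
    return SCROLL_POINTS[direction]
```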
4 Experiments
4.1 Dataset
We use the AITW benchmark dataset (Rawles et al., 2023). AITW is a large-scale benchmark dataset for UI control, which contains natural language instructions, screenshots, and actions. There are $715K$ episodes spanning $30K$ unique instructions, covering diverse multi-step tasks such as application operation, web searching, and web shopping, on over 350 Apps and websites. This dataset covers various device types and operation systems in varying screen resolutions to ensure generality. There are five subsets in the benchmark dataset, namely, General, Install, GoogleApps, Single, and WebShopping. The details of the subsets and data statistics are presented in Appendix A.1.
4.2 Baselines
We adopt three types of baselines for comparisons. The baselines encompass the In-context Learning (ICL) and fine-tuning paradigms, along with various backbone models of different sizes. This choice of baselines allows for a comprehensive comparison with our proposed approach.
(i) In-context Learning LLMs. Few-shot PaLM 2 and ChatGPT (turbo-3.5) are adopted. Following previous studies (Rawles et al., 2023; Wang et al., 2023a), we feed the LLM a textual description of the screen and a user instruction. The textual description of the screen is formatted in HTML syntax, providing the information of UI elements derived from OCR detection and icon detection with external tools (Rawles et al., 2023). The model is required to predict an action among pre-defined actions. If the action is clicking, the model must provide the index of the clicked UI element; if the action is scrolling, it must provide the scroll direction. In addition, 5-shot CoT prompting is leveraged to improve performance (Appendix A.4). We also report results for the multimodal GPT-4V, which takes the vision image and action history as input, based on Yan et al. (2023).
(ii) Fine-tuned LLMs. We adopt Llama 2 (Touvron et al., 2023) as the baseline and fine-tune it with LoRA. We feed the model with the user instruction and the screen descriptions in HTML syntax (the same as adopted for in-context learning LLMs). The model is expected to predict the action in the same output format as in-context learning LLMs. As fine-tuning an LLM is expensive, we randomly sample 1% training data to help the LLM adapt to our tasks.
(iii) Specialized UI Agent. We adopt the Behavioural Cloning (BC) agent, which reported state-of-the-art performance in Rawles et al. (2023). BC is a Transformer-based architecture that takes a task instruction, the current screen, and a stacked history of screen observations and actions as input. The task instruction and OCR-detected texts are encoded by a pre-trained BERT. The icons are represented by the embeddings of each of the bounding box points. The screen history is modeled by the \((x, y)\) positions of the touch and lift actions. All the embedded representations are fused to predict the action by a decoder. There are two BC variants, BC-single and BC-history, depending on whether the model takes the screen-action history as input.
4.3 Evaluation Measures
We compute the screen-wise action matching score as the main evaluation measure, defined as the number of correct actions divided by the episode length. A predicted action is considered correct if the action type and dual-point gesture match the gold ones. As we described in Section 3.3, the gesture actions can represent the click actions and scroll actions at arbitrary locations. Following Rawles et al. (2023), a click action is considered correct if its touch point and lift point fall within a 14% screen distance from the gold gestures or occur within the same detected bounding box with the gold gestures. A scroll action is considered correct if it has the same scroll axis as the gold gesture.
The screen-wise action matching score has been shown to correlate with the task complete score estimated by human evaluations (Rawles et al., 2023) and is appropriate to measure the action success rate for user instructions. Besides the overall matching score, we will also compare the click region accuracy, scroll direction accuracy, action type accuracy, and typed text accuracy for a more comprehensive reference (Section 5.1).
The evaluation criteria apply to the BC baselines and our Auto-UI. The LLM baselines can only click on detected UI elements rather than at arbitrary locations; therefore, for their click actions we check whether the clicked UI element matches, instead of comparing dual-point gestures.
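A sketch of the matching score follows; the official AITW evaluation additionally accepts clicks that fall within the same detected bounding box as the gold gesture, which is omitted here, and the click/scroll threshold is our illustrative choice.

```python
import math

def is_click(a):
    """Illustrative threshold for distinguishing a click from a scroll."""
    return math.dist(a["touch_point"], a["lift_point"]) < 0.04

def gesture_match(pred, gold, tau=0.14):
    """Clicks match within a 14% screen distance of the gold gesture;
    scrolls match if they share the gold scroll axis."""
    if is_click(gold):
        return (is_click(pred)
                and math.dist(pred["touch_point"], gold["touch_point"]) <= tau
                and math.dist(pred["lift_point"], gold["lift_point"]) <= tau)
    def axis(a):
        dy = a["lift_point"][0] - a["touch_point"][0]
        dx = a["lift_point"][1] - a["touch_point"][1]
        return "vertical" if abs(dy) >= abs(dx) else "horizontal"
    return (not is_click(pred)) and axis(pred) == axis(gold)

def matching_score(preds, golds):
    """Screen-wise action matching score: correct actions / episode length."""
    correct = sum(
        p["action_type"] == g["action_type"]
        and (g["action_type"] != "dual_point_gesture" or gesture_match(p, g))
        for p, g in zip(preds, golds))
    return correct / len(golds)
```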
4.4 Implementation Details
We adopt the encoder-decoder architecture (Raffel et al., 2020) under small (60M), base (220M), and large (770M) settings in our framework. We apply FLAN-Alpaca to initialize our model weights.\footnote{https://github.com/declare-lab/flan-alpaca} The vision features are obtained by the frozen BLIP-2 encoder (Li et al., 2023) (version: blip2_t5_instruct). We fine-tune the models for up to 10 epochs with a learning rate of \(1e^{-4}\). The maximum input sequence length is 512. The batch size is 4. Our experiments are run on 8 NVIDIA Tesla V100 32G GPUs. Training the large and base models takes 75 and 25 hours, respectively.
We develop two variants to analyze generalization ability, namely Auto-UI\textsubscript{separate} and Auto-UI\textsubscript{unified}. Specifically, Auto-UI\textsubscript{separate} is trained and evaluated independently on each subset, while Auto-UI\textsubscript{unified} is a single model trained on the training sets of all subsets and evaluated on each test set. As the GoogleApps subset is 10-100 times larger than the other subsets, using all its training data would suffer from the data imbalance issue (Zhang et al., 2022); therefore, we only use 10% of the GoogleApps training data, which also reduces the overall computation cost by 80%. We use Auto-UI\textsubscript{unified} as the default model for analysis unless otherwise stated.
4.5 Main Results
Table 1 shows the main results. Auto-UI\textsubscript{unified} achieves the best overall performance compared with all the baselines. When compared with the separate (not unified) models, Auto-UI\textsubscript{unified} shows general effectiveness across various task scenarios. The results show that a unified multimodal model built on first-principles thinking can serve as a strong autonomous agent. Compared with previous BC models, Auto-UI\textsubscript{unified} has two major advantages. First, Auto-UI\textsubscript{unified} is a unified model that can be adapted
Table 1: Main results (%). Segment 1: specialized agent baselines; Segment 2: in-context learning LLM baselines; Segment 3: fine-tuned Llama 2 baseline; Segment 4: our Auto-UI results. Prior published best results are marked with an underline. "Unified" means a general model that works across subsets. "w/o Anno." means no screen description is needed. The PaLM-CoT and BC results are from (Rawles et al., 2023). The GPT-4V result is from (Yan et al., 2023). The other results are based on our own implementations. The overall score is computed as the average accuracy over all subsets. The best average result is in **bold** face.
| Model | Unified | w/o Anno. | Overall | General | Install | GoogleApps | Single | WebShopping |
|---------------------|---------|-----------|---------|---------|---------|------------|--------|-------------|
| PaLM 2-CoT | ✓ | | 39.6 | - | - | - | - | - |
| ChatGPT-CoT | ✓ | | 7.72 | 5.93 | 4.38 | 10.47 | 9.39 | 8.42 |
| GPT-4V | ✓ | | 52.96 | 43.01 | 46.14 | 49.18 | 78.29 | 48.18 |
| Fine-tuned Llama 2 | ✗ | ✗ | 28.40 | 28.56 | 35.18 | 30.99 | 27.35 | 19.92 |
| BC-single | ✗ | ✗ | 68.7 | - | - | - | - | - |
| BC-history | ✗ | ✗ | 73.1 | 63.7 | 77.5 | 75.7 | 80.3 | 68.5 |
| Auto-UI_separate | ✗ | ✓ | 74.07 | 65.94 | 77.62 | 76.45 | 81.39 | 69.72 |
| Auto-UI_unified | ✓ | ✓ | 74.27 | 68.24 | 76.89 | 71.37 | 84.58 | 70.26 |
Table 2: Ablation study of Auto-UI design components. We adopt Auto-UI_unified for analysis.
| Model | Overall | General | Install | GoogleApps | Single | WebShopping |
|------------------------|---------|---------|---------|------------|--------|-------------|
| Auto-UI | 74.27 | 68.24 | 76.89 | 71.37 | 84.58 | 70.26 |
| w/o chain of actions | 68.53 | 58.99 | 72.06 | 67.50 | 81.25 | 62.86 |
| w/ previous action history | 73.78 | 67.97 | 76.66 | 71.00 | 83.64 | 69.62 |
| w/ future action plan | 68.81 | 59.01 | 72.34 | 67.95 | 81.53 | 63.24 |
| w/o coordinate normalization | 70.23 | 63.79 | 73.28 | 66.63 | 82.11 | 65.33 |
to different scenarios without the need to train a specific model for each task. Second, Auto-UI_unified does not need additional annotations (screen parsing) and is easy to use. We provide a more detailed analysis of generality and computation efficiency in Sections 5.2 and 5.4.
The ablation study in Table 2 verifies that both the chain of actions and coordinate normalization contribute to the overall performance (+5.74% and +4.04%, respectively). We set the maximum numbers of previous actions and future actions to 8 and 4, respectively. The choice is made according to our analysis on the General subset with Auto-UI_separate (Figure 3): the model under these setups achieves the optimal performance, and both the input and output sequence lengths stay within the model limit.

For the LLMs, using either prompting or fine-tuning techniques does not achieve competitive performance compared with the other approaches. The most plausible reason is that they learn from the parsed HTML elements of the screen, so they may suffer from information loss compared with the more informative vision features of the screens. Specifically, we find that ChatGPT is quite accurate at predicting the action type but fails at lower-level executions (Appendix B.1).
It is reasonable that Auto-UI\textsubscript{unified} performs relatively worse than BC-history on the two App-centered subsets, Install and GoogleApps, because we only use 10% of the GoogleApps training data for data balance and computation overhead. We observe that the performance does not improve when we use all the GoogleApps training data, possibly due to the data imbalance issue (Zhang et al., 2022). In contrast, our separate model Auto-UI\textsubscript{separate} achieves better performance than BC-history, showing that our approach is superior under the same training setting. As we aim to study a simple and unified approach that achieves generally strong performance, we leave the treatment of the data imbalance issue to future work.
5 ANALYSIS
5.1 CATEGORY ACCURACY
To dive into the capability of Auto-UI, we calculate the click region accuracy, scroll direction accuracy, action type accuracy, and typed text accuracy. Figure 4 presents the results. We see that Auto-UI achieves over 90% action type accuracy on average. In contrast, the major challenges lie in the click region and scroll direction predictions. Although the model is able to predict the right action type most of the time, it tends to click the wrong place or scroll in the wrong direction. This result reveals a future direction of improving the model's ability to understand screen layouts, e.g., using more advanced vision features.

5.2 GENERALIZATION ABILITY
As our approach is designed under first-principles thinking and does not rely on pre-defined internal APIs, it can be easily generalized to new task domains. To verify this generality, we evaluate the performance of Auto-UI\textsubscript{separate} on each subset in Figure 5. For example, we train an Auto-UI\textsubscript{separate} model on the training set of General and then test its performance on the test sets of all subsets. We see that our approach achieves decent performance even as the domains vary. This result reveals that the model captures general knowledge for the UI control task and is thus applicable to different domains. In addition, the unified model Auto-UI\textsubscript{unified} can serve as a potential choice in real-world applications owing to its broader training data coverage.

5.3 Comprehensive Analysis
Here we present a comprehensive analysis of the choice of pre-trained features and model scale. The results are summarized in Table 3.
Table 3: Analysis of pre-trained features and model scale.

| Model | Overall | General | Install | GoogleApps | Single | WebShopping |
|-----------------------------------------------|---------|---------|---------|------------|--------|-------------|
| Auto-UI on CLIP | 71.84 | 66.28 | 74.40 | 69.71 | 81.60 | 67.23 |
| Auto-UI on BLIP-2 | 74.27 | 68.24 | 76.89 | 71.37 | 84.58 | 70.26 |
| Auto-UI on Vanilla-T5\textsubscript{large} | 72.98 | 66.61 | 75.40 | 70.86 | 83.47 | 68.54 |
| Auto-UI on FLAN-T5\textsubscript{large} | 73.36 | 67.59 | 76.35 | 70.71 | 83.01 | 69.12 |
| Auto-UI on FLAN-Alpaca\textsubscript{large} | 74.27 | 68.24 | 76.89 | 71.37 | 84.58 | 70.26 |
| Auto-UI on FLAN-Alpaca\textsubscript{small} | 71.38 | 65.26 | 74.90 | 68.70 | 81.20 | 66.83 |
| Auto-UI on FLAN-Alpaca\textsubscript{base} | 72.84 | 66.97 | 75.93 | 70.29 | 82.56 | 68.46 |
| Auto-UI on FLAN-Alpaca\textsubscript{large} | 74.27 | 68.24 | 76.89 | 71.37 | 84.58 | 70.26 |
● Pre-trained Features. There are two kinds of pre-trained features used in this work: the vision features and the language model weights. For vision features, we compare two popular types, CLIP (Radford et al., 2021) and BLIP-2 (Li et al., 2023), and observe that BLIP-2 achieves relatively better performance. Therefore, we use BLIP-2 by default in Auto-UI. For pre-trained language model weights, we compare initializing the model with the vanilla T5 (Raffel et al., 2020), FLAN-T5 (Chung et al., 2022), and FLAN-Alpaca (Taori et al., 2023a) weights at the large size. FLAN-Alpaca achieves the best performance, as it has been further optimized on the Stanford Alpaca synthetic instruction-tuning data.
● Model Scale. Compared with the performance gains from our technique components (chain of actions and coordinate normalization) in Table 2, the benefit of scaling the parameter size is relatively marginal. As a larger model does not lead to dramatic improvements, we do not scale the model further but focus on the base (220M) and large (770M) sizes in this work. This choice also reflects practical considerations, including GPU memory constraints and the computation budget.
5.4 Computation Cost
Table 4 compares the inference speed and GPU memory cost of Auto-UI and Llama 2. Auto-UI achieves nearly real-time inference (less than one second per action prediction) with less than 10GB of GPU memory; its inference is over 10 times faster than Llama 2. Our work shows the strength of medium-sized language models for building autonomous agents, achieving competitive performance with fast inference.
| Model | Feature Extraction (s/n) | Model Inference (s/n) | Peak GPU Memory (GB) |
|-----------|--------------------------|-----------------------|----------------------|
| Auto-UI\textsubscript{base} | 0.06 | 0.19 (45x) | 4.6 (10x) |
| Auto-UI\textsubscript{large} | 0.06 | 0.59 (15x) | 8.2 (6x) |
| Llama 2 | - | 8.5 | 49.7 |
6 Conclusion
This work presents an autonomous UI agent called Auto-UI that can interact in a multimodal UI environment without environment parsing or application-dependent API access. In addition, we propose a chain-of-action technique that leverages the previously executed actions and future action plans to help the agent decide what action to execute. Experimental results show that Auto-UI achieves superior performance to previous prompting-based and fine-tuning baselines. Besides its strong performance and generality across domains, Auto-UI can infer an action in less than one second.
REFERENCES
Adept. Act-1: Transformer for actions. https://www.adept.ai/act, 2022.
Aristotle. Physics 184a10–21.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. https://vicuna.lmsys.org, 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856, 2023.
James Hendler. Is there an intelligent agent in your future? Nature, 11, 1999.
Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, and Chenglin Wu. Metagpt: Meta programming for multi-agent collaborative framework, 2023.
Terence Irwin. Aristotle’s first principles. Clarendon Press, 1989.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. ArXiv preprint, abs/2205.11916, 2022.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. Agentbench: Evaluating llms as agents. arXiv preprint arXiv:2308.03688, 2023.
Pattie Maes. Agents that reduce work and information overload. In Readings in human–computer interaction, pp. 811–821. Elsevier, 1995.
Yohei Nakajima. Babyagi. https://github.com/yoheinakajima/babyagi, 2023.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for intermediate computation with language models. In Deep Learning for Code Workshop, 2022.
OpenAI. Gpt-4 technical report, 2023.
Joon Sung Park, Joseph C O’Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021.
|
ATEawsFUj4
|
3. I am not sure the model can be named as zero-shot as it requires one shot for unseen faces. So, could you elaborate on the following '... generates a talking video of an unseen speaker with one portrait image ...'?
|
GAIA: Zero-shot Talking Avatar Generation
Tianyu He*, Junliang Guo*, Runyi Yu*, Yuchi Wang*, Jialiang Zhu, Kaikai An, Leyi Li
Xu Tan†, Chunyu Wang, Han Hu, HsiangTao Wu, Sheng Zhao, Jiang Bian
Microsoft
{tianyuhe,junliangguo,v-runyiyu,v-yuchiwang,xuta}@microsoft.com
https://microsoft.github.io/GAIA
Abstract
Zero-shot talking avatar generation aims at synthesizing natural talking videos from speech and a single portrait image. Previous methods have relied on domain-specific heuristics such as warping-based motion representation and 3D Morphable Models, which limit the naturalness and diversity of the generated avatars. In this work, we introduce GAIA (Generative AI for Avatar), which eliminates the domain priors in talking avatar generation. In light of the observation that the speech only drives the motion of the avatar while the appearance of the avatar and the background typically remain the same throughout the entire video, we divide our approach into two stages: 1) disentangling each frame into motion and appearance representations; 2) generating motion sequences conditioned on the speech and reference portrait image. We collect a large-scale high-quality talking avatar dataset and train the model on it with different scales (up to 2B parameters). Experimental results verify the superiority, scalability, and flexibility of GAIA as 1) the resulting model beats previous baseline models in terms of naturalness, diversity, lip-sync quality, and visual quality; 2) the framework is scalable since larger models yield better results; 3) it is general and enables different applications like controllable talking avatar generation and text-instructed avatar generation.
1 Introduction
Talking avatar generation aims at synthesizing natural videos from speech, where the generated mouth shapes, expressions, and head poses should be in line with the speech content. Previous studies achieve high-quality results by imposing avatar-specific training (i.e., training or adapting a specific model for each avatar) (Thies et al., 2020; Tang et al., 2022; Du et al., 2023; Guo et al., 2021), or by leveraging template video during inference (Prajwal et al., 2020; Zhou et al., 2021; Shen et al., 2023; Zhong et al., 2023). More recently, significant efforts have been dedicated to designing and improving zero-shot talking avatar generation (Zhou et al., 2020; Wang et al., 2021a; Zhang et al., 2023b; Wang et al., 2023; Yu et al., 2022; Gururani et al., 2022; Stypulkowski et al., 2023), i.e., only a single portrait image of the target avatar is available to indicate the appearance of the target avatar. However, these methods relax the difficulty of the task by involving domain priors such as warping-based motion representation (Siarohin et al., 2019; Wang et al., 2021b), 3D Morphable Models (3DMMs) (Blanz & Vetter, 1999), etc. Although effective, the introduction of such heuristics hinders direct learning from data distribution and may lead to unnatural results and limited diversity.
In contrast, in this work, we introduce GAIA (Generative AI for Avatar), which eliminates the domain priors in talking avatar generation. GAIA reveals two key insights: 1) the speech only drives the motion of the avatar, while the background and the appearance of the avatar typically remain the same throughout the entire video. Motivated by this, we disentangle the motion and appearance for each frame, where the appearance is shared between frames and the motion is unique to each frame. To predict motion from speech, we encode motion sequence into motion latent sequence and
*Equal contribution.
†Corresponding author: Xu Tan (xuta@microsoft.com).
predict the latent with a diffusion model conditioned on the input speech; 2) there exists enormous diversity in expressions and head poses when an individual speaks given content, which calls for a large-scale and diverse dataset. Therefore, we collect a high-quality talking avatar dataset that consists of 16K unique speakers with diverse ages, genders, skin types, and talking styles, to make the generation results natural and diverse.
More specifically, to disentangle the motion and appearance, we train a Variational AutoEncoder (VAE) consisting of two encoders (i.e., a motion encoder and an appearance encoder) and one decoder. During training, the input of the motion encoder is the facial landmarks (Wood et al., 2021) of the current frame, while the input of the appearance encoder is a frame that is randomly sampled within the current video clip. Based on the outputs of the two encoders, the decoder is optimized to reconstruct the current frame. After we obtain the well-trained VAE, we have the motion latent (i.e., the output of the motion encoder) for all the training data. Then, we train a diffusion model to predict the motion latent sequence conditioned on the speech and one randomly sampled frame within the video clip, which provides appearance information to the generation process. During inference, given the reference portrait image of the target avatar, the diffusion model takes it and an input speech sequence as the condition, and generates the motion latent sequence that is in line with the speech content. The generated motion latent sequence and the reference portrait image are then leveraged to synthesize the talking video output using the decoder of the VAE.
For the collected dataset, to ensure that the desired information can be learned from the data, we propose several automated filtration policies that control the quality of the training data. We train both the VAE and the diffusion model on the filtered data. From the experimental results, we draw three key conclusions: 1) GAIA is able to conduct zero-shot talking avatar generation with superior performance on naturalness, diversity, lip-sync quality, and visual quality. It surpasses all the baseline methods significantly according to our subjective evaluation; 2) we train the model at different scales, varying from 150M to 2B parameters. The results demonstrate that the framework is scalable since larger models yield better results; 3) GAIA is a general and flexible framework that enables different applications including controllable talking avatar generation and text-instructed avatar generation.
2 RELATED WORKS
Speech-driven talking avatar generation enables synthesizing talking videos in sync with the input speech content. Early methods have been proposed to train or adapt a specific model for each avatar with a focus on overall realness (Thies et al., 2020; Lu et al., 2021), natural head poses (Zhou et al., 2021), high lip-sync quality (Lahiri et al., 2021), and emotional expression (Ji et al., 2021).
Despite significant advances made by these methods, the costs are high due to the avatar-specific training. This motivates zero-shot talking avatar generation, where only one portrait image of the target avatar is given. However, animating a single portrait image is not easy due to the limited information we have. MakeItTalk (Zhou et al., 2020) handled this by first predicting 3D landmark displacements from the speech input; the predicted landmarks are then transferred to a warping-based motion representation (Siarohin et al., 2019), which is employed to warp the reference image to the desired expression and pose. Burkov et al. (2020) achieved pose-identity disentanglement but require additional fine-tuning for unseen identities. More recently, SadTalker (Zhang et al., 2023b) leveraged 3DMMs as an intermediate representation between the speech and the video, and proposed two modules to predict the expression coefficients of 3DMMs and head poses respectively. In general, the current solutions relax the difficulty of the task by involving domain priors like warping-based transformation (Zhou et al., 2020; Wang et al., 2021a; 2022; Liu et al., 2022; Drobyshev et al., 2022; Gururani et al., 2022), 3DMMs (Ren et al., 2021; Zhang et al., 2021; 2023b), etc. Although the introduction of these heuristics makes the modeling easier, they inevitably hinder end-to-end learning from the data distribution, leading to unnatural results and limited diversity. PC-AVS (Zhou et al., 2021) and PD-FGC (Wang et al., 2023) similarly introduced an identity space and a non-identity space by leveraging identity labels, employing contrastive learning to align the non-identity space and the speech content space. Our method differs in three ways: 1) they need an additional driving video, whereas we generate the entire motion from the speech at once and also provide the option to control the head pose; 2) they use contrastive learning to align speech and visual motion, whereas we leverage diffusion models to predict motion from the speech; 3) our method does not need additional identity labels. As verified in experiments, our method results in natural and consistent motion, and flexible control for talking avatar generation.
Figure 1: Method overview. GAIA consists of a VAE (the orange modules) and a diffusion model (the blue and green modules). The VAE is firstly trained to encode each video frame into a disentangled representation (i.e., motion and appearance representation) and reconstruct the original frame from the disentangled representation. Then the diffusion model is optimized to generate motion sequences conditioned on the speech sequences and a random frame within the video clip. During inference, the diffusion model takes an input speech sequence and the reference portrait image as the condition and yields the motion sequence, which is decoded to the video by leveraging the decoder of the VAE.
3 DATA COLLECTION AND FILTERING
A data-driven model is naturally scalable for large datasets, but it also requires high-quality data as it learns from data distribution. We construct our dataset from diverse sources. For high-quality public datasets, we collect High-Definition Talking Face Dataset (HDTF) [Zhang et al., 2021] and Casual Conversation datasets v1&v2 (CC v1&v2) [Hazirbas et al., 2021; Porgali et al., 2023], which contain thousands of identities (IDs) with a diverse set of ages, genders, and apparent skin types. In addition to these three datasets, we also collect a large-scale internal talking avatar dataset which consists of 7K hours of videos and 8K unique speaker IDs, to make the resulting model scalable and unbiased. The overview of the dataset statistics is demonstrated in Tab. 1.
However, the raw videos contain noisy cases that are harmful to model training, such as non-speaking clips and rapid head movements. To ensure that the desired information can be learned from the data, we develop several automated filtration policies to improve the quality of the training data: 1) to make the lip motion visible, the frontal orientation of the avatar should be toward the camera; 2) to ensure stability, the facial movement in a video clip should be smooth without rapid shaking; 3) to filter out corner cases where the lip movements and speech are not aligned, frames in which the avatar wears a mask or remains silent are removed. Please refer to Appendix A.1 for more details. After filtration, a majority of the raw videos are dropped, which proves necessary for training a data-driven model: in our preliminary experiments, the quality of videos generated by models trained on raw videos falls behind that of models trained on the filtered data.
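As a sketch of how such filtration could be automated — the thresholds, signal names, and exact criteria below are assumptions, not the paper's settings — each clip is checked against the three policies and kept only if it passes all of them:

```python
import numpy as np

def keep_clip(yaw_deg, pitch_deg, is_masked, is_speaking,
              yaw_max=25.0, pitch_max=20.0, jitter_max=3.0):
    """Return True if a clip passes all three filtration policies (illustrative)."""
    yaw, pitch = np.asarray(yaw_deg), np.asarray(pitch_deg)
    frontal = np.abs(yaw).max() < yaw_max and np.abs(pitch).max() < pitch_max  # policy 1
    smooth = np.abs(np.diff(yaw)).max() < jitter_max                           # policy 2
    speaking = (not any(is_masked)) and all(is_speaking)                       # policy 3
    return frontal and smooth and speaking
```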
4 MODEL
4.1 MODEL OVERVIEW
The zero-shot scenario that generates a talking video of an unseen speaker with one portrait image and a speech clip requires two key capabilities of the model: 1) disentangling the appearance and motion representations from the image, as the former should remain consistent while the latter stays dynamic in the generated video; 2) generating the motion representation conditioned on the speech at each timestamp. Correspondingly, as shown in Fig. 1, we propose two models: a Variational AutoEncoder (VAE) (Kingma & Welling, 2014) that extracts image representations and a diffusion model for speech-to-motion generation.
Table 1: Overview of the dataset statistics.

| Datasets | Raw #IDs | Raw #Hours | Filtered #IDs | Filtered #Hours |
|----------|----------|------------|---------------|----------------|
| HDTF | 362 | 16 | 359 | 14 |
| CC v1 | 3,011 | 750 | 2,957 | 330 |
| CC v2 | 5,567 | 440 | 4,646 | 183 |
| Internal | 8,007 | 7,000 | 8,007 | 642 |
| Total | 16,947 | 8,206 | 15,969 | 1,169 |
Problem Definition Given one portrait image \( x \) and a sequence of speech clips \( s = [s_1, ..., s_N] \), the model aims to generate a talking video clip \([x_1, ..., x_N]\) that is lip-synced with the speech \( s \) and appearance-consistent with the image \( x \).
4.2 Motion and Appearance Disentanglement
Given a frame of talking video \( x \), we would like to encode its motion representation, which will serve as the generation target of the diffusion model. Therefore, it is crucial to disentangle the motion and appearance representations from \( x \). We propose a VAE that consists of two encoders, i.e., a motion encoder \( E_M \) and an appearance encoder \( E_A \), and one decoder \( D \). We then use the appearance information from the \( i \)-th frame and the motion information from the \( j \)-th frame to reconstruct the \( j \)-th frame by the VAE, in order to prevent the leakage of the appearance information in reconstruction. In this way, as the \( i \)- and \( j \)-th frames from one video clip contain the same appearance but different motion information, i.e., the same person talking different words, the VAE model will learn to first extract the pure appearance feature from the \( i \)-th frame, and then combine it with the pure motion feature of the \( j \)-th frame to reconstruct the original \( j \)-th frame. The individuals of the \( i \)- and \( j \)-th frames can be flexibly chosen for both self-reconstruction and cross-reenactment settings.
Motion and Appearance Encoder Specifically, denote the raw RGB image of \( x \) as \( x^a \in \mathbb{R}^{H \times W \times 3} \) and its landmark as \( x^m \in \mathbb{R}^{H \times W \times 3} \) which is predicted by an external tool (Wood et al., 2021). The landmark is supposed to only contain the locations of key facial features such as the mouth, while the raw image provides other appearance information including identity and background. Given two frames \( x(i) \) and \( x(j) \) from one video clip, the model takes \( x^a(i) \) and \( x^m(j) \) as inputs to the appearance and motion encoder respectively, and produces their latent representations:
\[
z^a(i) = E_A(x^a(i)), \quad z^m(j) = E_M(x^m(j)),
\]
where \( z^a(i) \in \mathbb{R}^{h^a \times w^a \times 3} \) and \( z^m(j) \in \mathbb{R}^{h^m \times w^m \times 3} \). Note that in practice we use a smaller size of \( h^m \) than \( h^a \) as landmarks usually contain less information which is easier to encode. The two latent representations are then projected to the same size and concatenated together to reconstruct \( x^a(j) \) by the decoder:
\[
\hat{x}^a(j) = D(z^a(i), z^m(j)).
\]
The two encoders \( E_A \) and \( E_M \) share similar model architectures except for the downsampling factors; \( z^m(j) \) is first up-sampled to the same size as \( z^a(i) \), then concatenated and projected, and the result serves as the input to the decoder.
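A minimal PyTorch sketch of this forward pass may help; the encoder and decoder internals are abstracted away, and the interpolation-based up-sampling and 1x1 projection are assumptions consistent with the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledVAE(nn.Module):
    def __init__(self, enc_a: nn.Module, enc_m: nn.Module, dec: nn.Module, c_a: int, c_m: int):
        super().__init__()
        self.enc_a, self.enc_m, self.dec = enc_a, enc_m, dec
        self.proj = nn.Conv2d(c_a + c_m, c_a, kernel_size=1)  # fuse the concatenated latents

    def forward(self, x_a_i: torch.Tensor, x_m_j: torch.Tensor) -> torch.Tensor:
        z_a = self.enc_a(x_a_i)   # appearance latent from the raw RGB of frame i
        z_m = self.enc_m(x_m_j)   # motion latent from the landmark image of frame j
        z_m = F.interpolate(z_m, size=z_a.shape[-2:])  # up-sample to z_a's spatial size
        return self.dec(self.proj(torch.cat([z_a, z_m], dim=1)))  # reconstruct frame j
```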
Training We train the VAE model in an adversarial manner to learn perceptually rich representations following previous works (Esser et al., 2021; Rombach et al., 2022). In addition to the perceptual L1 reconstruction loss (Zhang et al., 2018) \( L_{rec}(x, \hat{x}) \) and the KL-penalty \( L_{kl}(x) \) of the latent towards a standard normal distribution (Kingma & Welling, 2014), we introduce a discriminator \( f_{dis} \) to distinguish between the real frame \( x \) and the generated \( \hat{x} \):
\[
L_{dis}(x, \hat{x}) = \log f_{dis}(x) + \log(1 - f_{dis}(\hat{x})).
\]
Then the total loss function of training the VAE can be written as:
\[
L_{VAE} = \min_{E_A, E_M, D} \max_{f_{dis}} (L_{rec}(x; E_A, E_M) + L_{kl}(x; E_A, E_M) + L_{dis}(x; f_{dis})).
\]
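A sketch of one training step under this objective follows, assuming the encoders return a Gaussian posterior (mu, logvar) and the discriminator outputs a probability; the plain L1 term here stands in for the perceptual reconstruction loss used in the paper.

```python
import torch
import torch.nn.functional as F

def vae_step(x_j, x_hat_j, mu, logvar, f_dis):
    """Return (generator_loss, discriminator_loss) for one batch (Equ. 4, sketched)."""
    l_rec = F.l1_loss(x_hat_j, x_j)                                  # reconstruction
    l_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL-penalty
    l_adv = -torch.log(f_dis(x_hat_j)).mean()                        # VAE tries to fool f_dis
    l_dis = -(torch.log(f_dis(x_j)).mean()
              + torch.log(1 - f_dis(x_hat_j.detach())).mean())       # Equ. 3, maximized by f_dis
    return l_rec + l_kl + l_adv, l_dis
```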
4.3 Speech-to-Motion Generation
Once the VAE is trained, we are able to obtain a motion latent sequence \( z^m \in \mathbb{R}^{N \times h^m \times w^m \times 3} \), an appearance latent sequence \( z^a \in \mathbb{R}^{N \times h^a \times w^a \times 3} \) for each video clip. We also have its corresponding speech feature \( z^s \in \mathbb{R}^{N \times d_s} \) extracted by wav2vec 2.0 (Baevski et al., 2020). We leverage a diffusion model with Conformer (Gulati et al., 2020) backbone \( S \) to predict the motion latent sequence \( z^m \) conditioned on the paired speech feature \( z^s \) and one reference frame \( x(i) \). The speech feature gives the driving information and the reference frame provides identity-related information like facial contour, the shape of eyes, etc.
Since the speech feature \( z^s \) comes from a fixed feature extractor (Baevski et al., 2020), to adapt it to our model, we process it with a lightweight speech encoder \( A \) before feeding it into the diffusion
model. Given that the diffusion model predicts the motion latent sequence, we thus use the motion latent \( z^m(i) \) of the reference frame \( x(i) \) as the condition, which is obtained by the pre-trained motion encoder \( E_M \). During training, the reference frame is randomly sampled within the video clip. Following previous practice (Du et al., 2023), we generate a pseudo-sentence for data augmentation by sampling a subsequence with a random starting point and a random length for each training pair.
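For the frozen speech feature \( z^s \), a typical extraction with HuggingFace Transformers looks like the following; the specific wav2vec 2.0 checkpoint is an assumption, as the paper only names the model family.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

ckpt = "facebook/wav2vec2-base-960h"  # assumed checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(ckpt)
wav2vec = Wav2Vec2Model.from_pretrained(ckpt).eval()

def speech_features(waveform_16k):
    """Return the fixed speech feature z^s with shape (1, N, d_s)."""
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        return wav2vec(inputs.input_values).last_hidden_state
```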
**Diffusion Model** Our goal is to construct a forward diffusion process and a reverse diffusion process that has a tractable form to generate data samples. The forward diffusion gradually perturbs data samples \( z^m_0 \) into Gaussian noise with infinite time steps. Then in the reverse diffusion, with the learned score function, the model is able to generate desired data samples \( z^m_t \) from Gaussian noise in an iterative denoising process. Formally, the forward diffusion can be modeled as the following stochastic differential equation (SDE) (Song et al., 2021):
\[
dz^m_t = -\frac{1}{2} \beta_t z^m_t \ dt + \sqrt{\beta_t} \ dw_t, \quad t \in [0, 1],
\]
where noise schedule \( \beta_t \) is a non-negative function, \( w_t \) is the standard Wiener process (i.e., Brownian motion). According to previous literature (Song et al., 2021), the reverse diffusion that transforms the Gaussian noise to the data sample can therefore be written as:
\[
dz^m_t = -\left(\frac{1}{2} z^m_t + \nabla \log p_t(z^m_t)\right) \beta_t \ dt + \sqrt{\beta_t} \ d\tilde{w}_t, \quad t \in [0, 1],
\]
where \( \tilde{w}_t \) is the reverse-time Wiener process, \( p_t \) is the probability density function of \( z^m_t \).
In addition, Song et al. (2021) have shown that there is an ordinary differential equation (ODE) for the reverse diffusion:
\[
dz^m_t = -\frac{1}{2}(z^m_t + \nabla \log p_t(z^m_t)) \beta_t \ dt.
\]
Given the above formulation, we train a neural network \( S \) to estimate the gradient of the log-density of the noisy data sample, \( \nabla \log p_t(z^m_t) \). As a result, we can model \( p(z^m_0) \) by sampling \( z^m_1 \sim \mathcal{N}(0, I) \) and then numerically solving either Equ. 6 or Equ. 7.
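Concretely, sampling via the ODE in Equ. 7 can be done with a simple Euler scheme; the linear beta schedule, its endpoints, and the `score_fn` signature below are assumptions.

```python
import torch

@torch.no_grad()
def sample_motion(score_fn, shape, n_steps=100, beta0=0.05, beta1=20.0):
    """Integrate Equ. 7 backward from t=1 (Gaussian noise) to t=0 (motion latent)."""
    z = torch.randn(shape)  # z^m_1 ~ N(0, I)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        beta_t = beta0 + t * (beta1 - beta0)
        drift = -0.5 * (z + score_fn(z, t)) * beta_t  # right-hand side of Equ. 7
        z = z - drift * dt                            # Euler step backward in time
    return z
```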
**Conditioning** In addition to the noised data sample, our diffusion model processes additional conditional information: the noise time step \( t \), the speech feature \( z^s \), and a reference motion latent \( z^m(i) \) coming from the same clip. Following previous successes (Ho et al., 2020; Rombach et al., 2022), the noise time step \( t \) is projected to an embedding and then directly added to the input of each Conformer block. For the speech feature, since it should be aligned with the output, we add it to the hidden feature of each Conformer block in an element-wise manner. For the reference motion latent, we employ a cross-attention layer (Vaswani et al., 2017; Rombach et al., 2022) for each Conformer block, in which the hidden sequence in the Conformer layer acts as the query and the reference motion latent acts as the key and value.
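A simplified block illustrating the three conditioning paths may be useful; the real backbone is a Conformer, whereas a plain attention/feed-forward block stands in here, so the layer choices are assumptions.

```python
import torch
import torch.nn as nn

class ConditionedBlock(nn.Module):
    def __init__(self, d: int, n_heads: int = 8):
        super().__init__()
        self.t_proj = nn.Linear(d, d)
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, h, t_emb, speech, ref_motion):
        h = h + self.t_proj(t_emb)    # time-step embedding added to the block input
        h = h + speech                # aligned speech feature, added element-wise
        ctx, _ = self.cross_attn(h, ref_motion, ref_motion)  # reference latent as key/value
        return h + ctx + self.ff(h)
```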
**Pose-controllable Generation** Predicting motion latent from the speech is a one-to-many mapping problem since there are multiple plausible head poses when speaking a sentence. To alleviate this ill-posed issue, we propose to incorporate pose information during training (Du et al., 2023; Tang et al., 2022). To achieve this, we extract the head poses \( x^p \in \mathbb{R}^{N \times 3} \) (pitch, yaw, and roll) using an open-source tool\(^1\) and add the extracted poses to the output of speech encoder \( A \) through a learned linear layer. By complementing the prediction with the head poses, the model puts more focus on generating realistic facial expressions, mouth shapes, etc.
To enable flexible generation during inference (i.e., one can use either the appointed head poses or the predicted one to control the generated talking video), we also train a pose predictor \( P \) to estimate the head poses according to the speech. The pose predictor \( P \) consists of several convolutional layers and is optimized by the mean square error between the extracted head poses \( x^p \) and the estimated one \( \hat{x}^p \).
**Training** We jointly train the models \( S, A \) and \( P \) with the following loss function:
\[
L_{\text{dif}} = \mathbb{E}_{z^m_0, t}\left[\|\hat{z}^m_0 - z^m_0\|^2\right] + L_{\text{mse}}(x^p, \hat{x}^p),
\]
where the first term is the data loss with \( \hat{z}^m_0 = S(z^m_t, t, z^s, z^m(i), x^p) \), and the second term is the loss for head pose prediction.
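Putting the pieces together, one training step under Equ. 8 might look as follows; the forward perturbation implements the variance-preserving marginal of Equ. 5 under an assumed linear schedule, and the pose projection layer is passed in explicitly.

```python
import torch
import torch.nn.functional as F

def perturb(z0, t, beta0=0.05, beta1=20.0):
    """Forward diffusion marginal of Equ. 5: z_t = sqrt(a_t) z_0 + sqrt(1 - a_t) eps."""
    integral = beta0 * t + 0.5 * (beta1 - beta0) * t ** 2  # int_0^t beta_s ds
    a_t = torch.exp(-integral).view(-1, *([1] * (z0.dim() - 1)))
    return a_t.sqrt() * z0 + (1 - a_t).sqrt() * torch.randn_like(z0)

def training_step(S, A, P, pose_proj, z_m0, z_s_raw, z_m_ref, x_pose):
    """One step of the joint loss in Equ. 8 (S predicts z^m_0 directly)."""
    t = torch.rand(z_m0.shape[0])
    z_mt = perturb(z_m0, t)
    speech = A(z_s_raw)
    pose_hat = P(speech)                 # head pose estimated from speech
    cond = speech + pose_proj(x_pose)    # ground-truth pose added during training
    z_m0_hat = S(z_mt, t, cond, z_m_ref)
    return F.mse_loss(z_m0_hat, z_m0) + F.mse_loss(pose_hat, x_pose)
```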
\(^1\)https://github.com/cleardusk/3DDFA
Table 2: Quantitative comparisons of the GAIA VAE model with previous video-driven baselines.
| Methods | Self-Rec. FID↓ | Self-Rec. LPIPS↓ | Self-Rec. PSNR↑ | Self-Rec. AKD↓ | Self-Rec. MSI↑ | Cross-Re. FID↓ | Cross-Re. AKD↓ | Cross-Re. MSI↑ |
|---------------|------------|-----------|------------|-----------|-----------|------------|-----------|-----------|
| FOMM | 23.843 | 0.196 | 22.669 | 2.160 | 0.839 | 45.951 | 3.404 | 0.838 |
| HeadGAN | 21.499 | 0.278 | 18.555 | 2.990 | 0.835 | 90.746 | 5.964 | 0.788 |
| face-vid2vid | 18.604 | 0.184 | 23.681 | 2.195 | 0.813 | 28.093 | 3.630 | 0.853 |
| GAIA (Ours) | **15.730** | **0.167** | **23.942** | **1.442** | **0.856** | **15.200** | **2.003** | **1.102** |
Table 3: Quantitative comparisons of the GAIA framework with previous speech-driven methods. The subjective evaluation is rated at five grades (1-5) in terms of overall naturalness (Nat.), lip-sync quality (Lip.), motion jittering (Jit.), visual quality (Vis.), and motion diversity (Mot.). Note that, the Sync-D score for real video is 8.548, which is close to ours.
| Methods | Nat.↑ | Lip.↑ | Jit.↑ | Vis.↑ | Mot.↑ | Sync-D↓ | MSI↑ | FID↓ |
|-------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|------------|
| MakeItTalk | 2.148 | 2.161 | 1.739 | 2.789 | 2.571 | 9.932 | 1.140 | 28.894 |
| Audio2Head | 2.355 | 3.278 | 2.014 | 2.494 | 3.298 | **8.508** | 0.635 | 28.607 |
| SadTalker | 2.884 | 4.012 | 4.020 | 3.569 | 2.625 | 8.606 | 1.165 | **22.322** |
| GAIA (Ours) | **4.362** | **4.332** | **4.345** | **4.320** | **4.243** | 8.528 | **1.181** | 22.924 |
5 EXPERIMENTS
Benefiting from the disentanglement between motion and appearance, GAIA enables two common scenarios: video-driven generation, which aims to generate results with the appearance from a reference image and the motion from a driving video, and speech-driven generation, where the motion is predicted from a speech clip. The video-driven generation evaluates the VAE, while the speech-driven one evaluates the whole GAIA system. We compare GAIA with state-of-the-art methods for the two scenarios in Sec. 5.2 and further make detailed analyses in Sec. 5.3 to understand the model better. To verify the scalability of GAIA, we evaluate it at different scales in Sec. 5.3, i.e., from 150M to 2B model parameters in total. Due to the flexibility of our architecture, we also enable extended applications like text-instructed avatar generation, pose-controllable and fully controllable talking avatar generation (i.e., the mouth region is synced with the speech, while the rest of the facial attributes can be controlled by a given talking video), which we demonstrate in Sec. 5.4.
5.1 EXPERIMENTAL SETUPS
Datasets We train our model on the union of the datasets described in Sec. 3, and we randomly sample 100 videos from them as the validation set. For the test set, to eliminate potential overlap and evaluate the generality of our model, we create an out-of-domain test set by choosing 500 videos from the TalkingHead-1KH dataset (Wang et al., 2021b). We test all baselines on the same set.
Implementation Details We adjust the VAE and the diffusion model to different scales by changing the hidden size and the number of layers in each block, resulting in VAEs of 80M, 700M, and 1.7B parameters and diffusion models of 180M, 600M, and 1.2B parameters. Refer to Appendix B.1 for the details of the model architecture and training strategies.
Evaluation We utilize various subjective and objective metrics to provide a thorough evaluation of the proposed framework. Subjective metrics: we conduct user studies to evaluate the lip-sync quality, visual quality, and head pose naturalness of the generated videos. 20 experienced users are invited to participate. We adopt MOS (Mean Opinion Score) as our metric. We present one video at a time and ask the participants to rate the presented video at five grades (1-5) in terms of overall naturalness, lip-sync quality, motion jittering, visual quality, and motion diversity, respectively. Objective metrics: we adopt various objective metrics to evaluate the visual and motion quality of the generation results. For visual quality, we report FID (Heusel et al., 2017) and LPIPS (Zhang et al., 2018) for perceptual similarity, and PSNR to measure the pixel-level mean squared error (MSE) between the ground truth and the reconstruction of the VAE. In addition, we detect the landmarks of
Figure 2: Qualitative comparison with the state-of-the-art speech-driven methods. It shows that GAIA achieves higher naturalness, lip-sync quality, visual quality and motion diversity. In contrast, the baselines tend to highly rely on the reference image (Ref. Image) therefore making generation with slight head motions (e.g., most of the baselines generate results with closed eyes when the eyes of the reference image are closed) or inaccurate lip synchronization.
ground truth and reconstructed images and report the Average keypoint distance (AKD) (Wang et al., 2021b) between them, to evaluate the motion quality of VAE reconstructions. Motion Stability Index (MSI) (Ling et al., 2022) which measures the motion stability of results is also reported. Following previous works (Thies et al., 2020; Tang et al., 2022), we adopt Sync-D (SyncNet Distance) to measure the lip-sync quality via SyncNet (Chung & Zisserman, 2016).
5.2 Results
We compare the proposed GAIA model with state-of-the-art baselines in this section under the two settings introduced above: the video-driven generation, which evaluates the VAE, and the speech-driven generation, which evaluates the whole GAIA system.
5.2.1 Video-driven Results
We consider two different settings of the video-driven talking avatar generation including self-reconstruction and cross-reenactment, depending on whether the individual of the appearance frame is consistent with the driving motion frames. Details of the two settings are provided in Appendix B.2. We compare with three strong baselines including FOMM (Siarohin et al., 2019), HeadGAN (Doukas et al., 2021) and face-vid2vid (Wang et al., 2021b), which are all equipped with feature warping, a commonly utilized prior technique in talking video generation. The results are shown in Tab. 2. The VAE of GAIA achieves consistent improvements over previous video-driven baselines, especially in the cross-reenactment settings, illustrating our model successfully disentangles the appearance and motion representation. Note that as a part of the data-driven framework, we try to make the VAE as simple as possible, and eliminate some commonly used external components such as a face recognition model (Deng et al., 2020) that provides identity-preserving losses.
5.2.2 Speech-driven Results
The speech-driven talking avatar generation is enabled by predicting motion from the speech instead of the driving video. We provide both quantitative and qualitative comparisons with MakeItTalk (Zhou et al., 2020), Audio2Head (Wang et al., 2021a), and SadTalker (Zhang et al., 2023b) in Tab. 3 and Fig. 2. It can be observed that GAIA surpasses all the baselines by a large margin in terms of subjective evaluation. More specifically, as shown in Fig. 2, the baselines tend to make generation with high dependence on the reference image, even if the reference image is given with closed
eyes or unusual head poses. In contrast, GAIA is robust to various reference images and generates results with higher naturalness, lip-sync quality, visual quality, and motion diversity. For the objective evaluation in Tab. 3, the best MSI score demonstrates that GAIA generates videos with great motion stability. The Sync-D score of 8.528, close to that of real video (8.548), indicates that the generated videos have great lip synchronization. We obtain an FID score comparable to the baselines, which might be affected by the diverse head poses: we find that the model trained without diffusion achieves a better FID score in Tab. 6.
5.3 Ablation Studies
5.3.1 Ablation Studies on Scaling
We change the scale of the model parameters as well as the training dataset to show the scalability of GAIA. For the model, we change the scales of the VAE and the diffusion model separately to study their influence on the framework. For the training set, we use either the whole set of 1K hours or a subset of it.
The results are listed in Tab. 4 and Tab. 5, and we find that scaling up both the parameter count and the data size benefits the proposed GAIA framework. For the VAE model, the results are tested under the self-reconstruction setting and tend to converge once the model is larger than 700M. For the sake of efficiency, we use the 700M VAE model in our main experiments. As for the diffusion model, we still observe gains as the model grows to 1.2B parameters.
5.3.2 Ablation Studies on Proposed Techniques
We study the proposed techniques in detail: 1) we encode each frame into the latent without disentanglement and use the diffusion model to predict the latent (w/o disentanglement); 2) we generate the motion latent without conditioning on the head pose (w/o head pose); 3) we use the Conformer to predict the motion latent directly without the diffusion process (w/o diffusion); 4) we synthesize the coordinates of the landmarks instead of the latent representation (w. landmark prediction). All experiments are conducted with the 700M VAE model and the 180M diffusion model. The results in Tab. 6 demonstrate that: 1) the model without disentanglement fails to generate effective results; 2) the models trained without head pose or the diffusion process yield inferior performance; 3) predicting landmarks, instead of the motion latent as in our approach, degrades the performance in all aspects. This illustrates that encoding motion into a latent representation helps the learning of motion generation.
5.4 Controllable Generation
Pose-controllable Talking Avatar Generation As introduced in Sec. 4.3, in addition to predicting the head pose from the speech, we also enable the model with pose-controllable generation. We implement it by replacing the estimated head pose with either a handcrafted pose or one extracted from another video, as demonstrated in Fig. 3(a). Refer to Appendix D for more details.
Fully Controllable Talking Avatar Generation Due to the controllability of the inverse diffusion process, we can control the arbitrary facial attributes by editing the landmarks during generation.
| #Params. VAE | #Hours | FID↓ |
|-------------|--------|------|
| 80M | 0.5K | 18.353 |
| 80M | 1K | 17.486 |
| 700M | 1K | 15.730 |
| 1.7B | 1K | 15.886 |
| #Params. Diffusion | #Hours | Sync-D↓ |
|--------------------|--------|---------|
| 180M | 0.1K | 9.145 |
| 180M | 1K | 8.913 |
| 600M | 1K | 8.603 |
| 1.2B | 1K | 8.528 |
Table 4: Scaling the VAE of GAIA. "#Params." and "#Hours" indicate the number of parameters and the size of the training dataset.
Table 5: Scaling the diffusion model of GAIA. We use the VAE model of 700M parameters for all experiments.
Table 6: Ablation studies on the proposed techniques.
| Methods | Sync-D↓ | MSI↑ | FID↓ |
|--------------------------|---------|-------|------|
| GAIA (700M + 180M) | 8.913 | 1.132 | 24.242 |
| w/o disentanglement | 12.680 | 1.423 | 140.009 |
| w/o head pose | 9.134 | 1.208 | 23.648 |
| w/o diffusion | 9.817 | 1.486 | 21.049 |
| w. landmark prediction | 9.331 | 1.038 | 27.022 |
Figure 3: Examples of controllable and text-driven video generation. Due to the flexibility of our framework, 1) we enable multi-granularity motion control over the generated video. 2) we realize text-instructed video generation. See Sec. 5.4 for the details.
Specifically, we train a diffusion model to synthesize the coordinates of the facial landmarks. The landmarks that we want to edit are fixed to reference coordinates. Then we leave the model to generate the rest. In Fig. 3(b), we show the results of fully controllable generation, i.e., the mouth and jaw are synced with the speech, while the rest of the facial attributes are controlled by the reference motion. Refer to Appendix D for more details.
Text-driven Video Generation In general, the diffusion model is a motion generator conditioned on speech, where the condition can be altered to other modalities flexibly. To show the generality of our framework, we consider textual instructions as the condition of the diffusion model, and enable the text-to-video generation (Fig. 3(c)). Refer to Appendix E.2 for more details.
5.5 Discussion
Different from previous works that employ warping-based motion representation (Wang et al., 2021a; Drobyshev et al., 2022), pre-defined 3DMM coefficients (Zhang et al., 2023b), we propose to eliminate these heuristics and generate the full motion latent at the same time. The framework discloses three insights: 1) the complete disentanglement between the motion and the appearance is the key to achieving zero-shot talking avatar generation; 2) handling one-to-many mapping with the diffusion model and learning full motion from real data distribution result in natural and diverse generations; 3) less dependence on heuristics and labels makes the method general and scalable.
6 Conclusion
We present GAIA, a data-driven framework for zero-shot talking avatar generation which consists of two modules: a variational autoencoder that disentangles and encodes the motion and appearance representations, and a diffusion model to predict the motion latent conditioned on the input speech. We collect a large-scale dataset and propose several filtering policies to enable the successful training of the framework end-to-end. The GAIA framework is general and scalable, which can provide natural and diverse results in zero-shot talking avatar generation, as well as being flexibly adapted to other applications including controllable talking avatar generation and text-driven video generation.
Limitations and Future Works Our work still has limitations. For example, we leverage a pre-trained landmark extractor and a head pose extractor, which may hinder the end-to-end learning of the models. We leave the fully end-to-end learning (e.g., disentangle motion and appearance without the help of landmarks) as future work.
Responsible AI Considerations
GAIA is intended for advancing AI/ML research on talking avatar generation. We encourage users to use the model responsibly and to adhere to the Microsoft Responsible AI Principles. We discourage users from using the method to generate intentionally deceptive or untrue content or for inauthentic activities. To prevent misuse, adding watermarks is a common way and has been widely studied in both research and industry works (Ramesh et al., 2022; Saharia et al., 2022). On the other hand, as an AIGC model, the generation results of our model can be utilized to construct artificial datasets and train discriminative models.
References
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. *Advances in Neural Information Processing Systems (NeurIPS)*, 33:12449–12460, 2020.
Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In *Proceedings of the 26th annual conference on Computer graphics and interactive techniques*, pp. 187–194, 1999.
Egor Burkov, Igor Pasechnik, Artur Grigorev, and Victor Lempitsky. Neural head reenactment with latent pose descriptors. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 13786–13795, 2020.
J Chung, A Nagrani, and A Zisserman. Voxceleb2: Deep speaker recognition. *Interspeech*, 2018.
Joon Son Chung and Andrew Zisserman. Out of time: automated lip sync in the wild. In *Asian conference on computer vision*, pp. 251–263. Springer, 2016.
Alexandre Defossez, Gabriel Synnaeve, and Yossi Adi. Real time speech enhancement in the waveform domain. In *Interspeech*, pp. 3291–3295, 2020.
Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. Retinaface: Single-shot multi-level face localisation in the wild. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 5203–5212, 2020.
Michail Christos Doukas, Stefanos Zafeiriou, and Viktoria Sharmanska. Headgan: One-shot neural head synthesis and editing. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 14398–14407, 2021.
Nikita Drobyshev, Jenya Chelishev, Taras Khakhulin, Aleksei Ivakhnenko, Victor Lempitsky, and Egor Zakharov. Megaportraits: One-shot megapixel neural head avatars. In *Proceedings of the 30th ACM International Conference on Multimedia*, pp. 2663–2671, 2022.
Chenpeng Du, Qi Chen, Tianyu He, Xu Tan, Xie Chen, Kai Yu, Sheng Zhao, and Jiang Bian. Dae-talker: High fidelity speech-driven talking face generation with diffusion autoencoder. In *Proceedings of the 31st ACM International Conference on Multimedia*, 2023.
Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 12873–12883, 2021.
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. Conformer: Convolution-augmented transformer for speech recognition. *Interspeech*, 2020.
Yudong Guo, Keyu Chen, Sen Liang, Yong-Jin Liu, Hujun Bao, and Juyong Zhang. Ad-nerf: Audio driven neural radiance fields for talking head synthesis. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 5784–5794, 2021.
Siddharth Gururani, Arun Mallya, Ting-Chun Wang, Rafael Valle, and Ming-Yu Liu. Spacex: Speech-driven portrait animation with controllable expression. *arXiv preprint arXiv:2211.09809*, 2022.
---
\(^1\)<https://www.microsoft.com/en-us/ai/responsible-ai>
|
vxmvbzw76R
|
Can the authors comment on how this necessarily opens up a large security hole? I.e., the data has noise injected to protect it in the event of interception by a bad actor. However, this scheme requires the transmission of the actual denoising layer, which itself may be intercepted. Due to the need for model refreshes, this problem is non-trivial.
|
SPLIT-AND-DENOISE: PROTECT LARGE LANGUAGE MODEL INFERENCE WITH LOCAL DIFFERENTIAL PRIVACY
Anonymous authors
Paper under double-blind review
ABSTRACT
Large Language Models (LLMs) show powerful capabilities in natural language understanding by capturing hidden semantics in vector space. This process enriches the value of text embeddings for various downstream tasks, thereby fostering the Embedding-as-a-Service (EaaS) business model. However, the direct transmission of text to servers poses a largely unaddressed risk of privacy leakage. To mitigate this issue, we introduce Split-N-Denoise (SnD), an innovative framework that splits the model to execute the token embedding layer on the client side at minimal computational cost. This allows the client to introduce noise prior to transmitting the embeddings to the server, and subsequently receive and denoise the perturbed output embeddings for downstream tasks. Our approach is designed for the inference stage of LLMs and requires no modifications to the model parameters. Extensive experiments demonstrate SnD’s effectiveness in optimizing the privacy-utility tradeoff across various LLM architectures and diverse downstream tasks. The results reveal an improvement in performance under the same privacy budget compared to the baselines by over 10% on average, offering clients a privacy-preserving solution for local privacy protection.
1 INTRODUCTION
Large Language Models (LLMs) have shown powerful capability in natural language understanding by capturing hidden semantics in vector space. Consequently, users can leverage LLMs to obtain embeddings and subsequently apply them to their own downstream tasks, known as “embedding as a service” (EaaS). However, EaaS is typically provided as an online service, giving rise to significant privacy concerns. In particular, users may input sensitive information, such as names, phone numbers, and email addresses, that needs to be kept hidden from the service provider. With the growing concern around the potential leakage of confidential data, certain companies, such as Samsung, have temporarily prohibited the use of online LLM services.
Recent research on privacy-preserving model inference centers on two directions: cryptography [Liu & Liu (2023); Chen et al. (2022)] and perturbation [Du et al. (2023)]. Cryptographic approaches typically employ homomorphic encryption (HE) to compute the inference result on the users’ encrypted input. Unfortunately, their application is constrained by the significant computational overhead of cryptographic operations, especially on large transformer models. Perturbation provides a differential privacy (DP) guarantee by adding calibrated noise to the original data. A key challenge of this approach is balancing the utility-privacy tradeoff in a local differential privacy (LDP) setting, where users’ inputs are privatized before being released to the server. Furthermore, privatizing text data is particularly difficult when the randomized algorithm is required to map text input to text output.
Split learning [Gupta & Raskar (2018); Vepakomma et al. (2018)] has emerged as a solution to privacy-preserving computation between two parties. During inference, the user performs affordable computation locally to obtain intermediate results (IRs), and forwards them to the service provider for subsequent operations. To mitigate privacy leakage, recent research has integrated DP with split learning by injecting noise into the IRs before sharing them with the server [Yang et al. (2022)]. In the split
inference setting, a crucial problem is to design an algorithm that minimizes the impact on model performance while ensuring LDP.
A notable approach involves the application of denoising techniques to conduct error correction and enhance model utility. Existing studies incorporate denoising layers on the server side, leveraging the post-processing property of DP [Nasr et al., 2020; Wang et al., 2019; Xu et al., 2022]. However, the effectiveness of such denoising is hindered by the fact that the server is ignorant of the injected noise levels. Driven by this limitation, a question arises: can we improve utility by denoising on the user side, leveraging knowledge of the noise levels and raw IRs? This is a highly nontrivial task, since one must uncover the mapping from the noises and raw IRs to the denoised embedding, and the inputs have undergone a series of complex transformations.
In this paper, we answer this question affirmatively by proposing Split-N-Denoise (SnD), a framework that integrates split inference and denoising techniques to enhance utility under an LDP bound. To minimize the computational overhead for users, we deploy only the token representation layer on the client side. A denoise model that enhances noisy embeddings using raw inputs and noise levels is pre-trained on the server side and subsequently shared with the user. Once receiving the output from the server, users feed their private data into the denoise model to improve the utility of the embeddings.
Our main contributions involve the following:
- We propose SnD, a framework that integrates split inference and denoising techniques to protect user’s privacy during LLM inference with strong privacy guarantee. Empirical studies demonstrate that our method outperforms existing DP-based baselines by over 10% on average, and maintains utility even in extremely low privacy budget settings ($\eta \leq 0.01$).
- We design a novel denoising method deployed on user side. In this approach, a denoise model is pre-trained on server side using public dataset and synthetic noises. Subsequently, this trained model is deployed on the user side, where it leverages the specific noise levels and raw IRs provided by the user to enhance the embeddings.
## 2 Prior Works
### Local Privacy Protection for LLMs
With the advent of LLMs, privacy leakage has emerged as a crucial concern. Existing literature predominantly focuses on privacy protection throughout the entire training process, encompassing pre-training [Hoory et al., 2021], fine-tuning [Huang et al., 2020; Kerrigan et al., 2020; Yu et al., 2021; Lukas et al., 2023], and prompt-tuning phases [Duan et al., 2023; Li et al., 2023]. Yet, there is a notable dearth of research that addresses local privacy during the inference phase with a fully frozen LLM. This scenario, which prohibits alterations to the model’s structure and parameters, is particularly complex. Nonetheless, it holds significance in black-box API access contexts, especially for proprietary models like GPT-4. An intuitive approach involves anonymizing sensitive terms prior to LLM input and subsequently restoring them post-output [Kan et al., 2023; Chen et al., 2023]. However, this method, while effective for obfuscating specific entities, falls short in concealing other linguistic elements, including verbs and non-named entities. Such a limitation compromises full privacy and is unsuitable for tasks necessitating exact semantic interpretation of the altered entities, such as knowledge retrieval and text continuation [Chen et al., 2023].
An alternative strategy might entail privatizing the input at token representations or intermediate layer levels. [Qu et al., 2021b] investigates the utility and privacy tradeoff for privacy-preserving finetuning, involving text-to-text privatization [Feyisetan et al., 2019; Qu et al., 2021a] and token embedding privatizations, while the two techniques could be adapted to private LLM inference. Privacy-Preserving Prompt Tuning (RAPT) [Li et al., 2023] employs text-text privatization to conduct prompt tuning and inference with local differential privacy. The authors propose a reconstruction head during prompt tuning to enhance the utility. Another direction employs homomorphic encryption (HE) to conduct private transformer inference such as Privacy-Computing Friendly Transformers (PCFT) and The-x [Liu & Liu, 2023; Chen et al., 2022], but the significant overhead renders it impractical for implementation in LLM.
### Privacy-Preserving Split Learning
Split learning is a privacy-preserving approach in distributed learning, where each client trains a segment of a deep network up to a designated “cut layer.” The outputs at this layer are then forwarded to the server side, which completes the training without
accessing the client’s raw data. This approach facilitates forward and backward propagation without sharing raw data, ensuring client-side local privacy [Gupta & Raskar (2018); Vepakomma et al. (2018)]. Gupta & Raskar (2018) show that split learning surpasses federated learning and large-batch synchronous SGD in achieving superior accuracy with significantly reduced client-side computational demands. Singh et al. further validate its efficacy across broader experimental contexts, demonstrating that an increase in the number of clients or model dimensions gives split learning an edge over federated learning [Singh et al. (2019)]. Its computational efficiency renders it suitable for the LLM local privacy setting, where the client side executes minimal computational tasks, such as noising and denoising operations at specific segmented layers, to ensure privacy at reduced computational expense. Meanwhile, the server handles the bulk of the model’s layers. Our research serves as an initial endeavor to integrate split learning with LLM privacy concerns.
**Denoising for Differential Privacy (DP)** While elevated noise levels offer robust privacy protections, privacy-preserving methods inevitably compromise the model’s quality [Wang et al. (2019)]. A notable approach involves the application of denoising techniques specifically tailored for Differential Privacy (DP), incorporating a post-processing layer to enhance DP utility. Pioneering research in statistical estimation underscores the efficacy of post-processing denoising in achieving accurate private network degree distribution estimates [Hay et al. (2009)], and in reducing linear regression estimation errors when the ground truth is sparse [Nikolov et al. (2013)]. Balle et al. demonstrated that denoising significantly enhances the Gaussian mechanism’s accuracy in high-dimensional settings for DP algorithms with output perturbations [Balle & Wang (2018)]. More recently, denoising mechanisms have been extended to the training of Machine Learning (ML) models, particularly Deep Neural Networks (DNNs), by applying denoising techniques to Gaussian noise-injected gradients, thereby improving the utility of privately trained ML models [Wang et al. (2019)]. Nasr, Shokri, and Houmansadr further explored the use of scaling as a denoising strategy to optimize DP utility in Differential Privacy Stochastic Gradient Descent (DP-SGD), scaling the noisy gradients based on their usefulness [Nasr et al. (2020)]. Subsequently, Xu et al. employed scaling and masking as post-processing denoising techniques on top of Gaussian noise-injected intermediate results in split learning, aiming to reduce the noisy neural network output’s estimation error without compromising privacy [Xu et al. (2022)].
### 3 METHODOLOGY
#### 3.1 PRELIMINARIES
##### 3.1.1 LDP
Differential privacy (DP) [Dwork (2006); Dwork et al. (2014)] is considered the gold standard for data privacy. Its definition is as follows:
**Definition 1 ((ε, δ)-Differential Privacy)** A randomized mechanism $M$ with domain $\mathcal{D}$ and range $\mathcal{R}$ preserves $(\epsilon, \delta)$-differential privacy if and only if for any two neighboring datasets $D, D' \in \mathcal{D}$ and for any subset $S \subseteq \mathcal{R}$, the following inequality holds:
$$\Pr[M(D) \in S] \leq e^\epsilon \Pr[M(D') \in S] + \delta$$
where $\epsilon$ is the privacy budget and $\delta$ is the failure probability.
Local differential privacy (LDP) is a particular case of DP, where the server is not trusted and data privatization is conducted by the client. For any inputs $x, x' \in D$, LDP requires a randomized mechanism $M$ to satisfy:
$$\Pr[M(x) \in S] \leq e^\epsilon \Pr[M(x') \in S] + \delta$$
for any measurable subset $S \subseteq Range(M)$.
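As a concrete illustration of the local model, consider the classic randomized-response mechanism, which privatizes a single bit on the client side. The sketch below is a minimal example (with $\delta = 0$) and is not part of the SnD mechanism itself.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), otherwise
    flip it. The likelihood ratio of any output under inputs 0 vs. 1 is
    bounded by e^eps, so the mechanism satisfies epsilon-LDP (delta = 0).
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

# Example: privatize a sensitive bit with budget epsilon = 1.0.
report = randomized_response(1, epsilon=1.0)
```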
##### 3.1.2 $d_\chi$-PRIVACY
In the context of local privacy preservation, we employ $d_\chi$-privacy [Chatzikokolakis et al. (2013)], a specialized variant of local differential privacy tailored for textual data [Feyisetan et al. (2019); Qu et al. (2021a)]. $d_\chi$-privacy makes it possible to impose a high probability of observing the same output for inputs with similar semantics. We state the formal definition in the following:
**Definition 2 (\(d_\chi\)-privacy)** For an input domain \(X\) and an output domain \(Y\), \(d_\chi\) serves as a metric space over \(X\). A stochastic mechanism \(M : X \rightarrow Y\) is said to adhere to \(\eta d_\chi\)-privacy if, for any two elements \(x, x' \in X\), the output distributions \(M(x)\) and \(M(x')\) satisfy the following inequality:
\[
\frac{P(M(x) = y)}{P(M(x') = y)} \leq e^{\eta d_\chi(x, x')}, \quad \forall y \in Y,
\]
where \(\eta \geq 0\) is a tunable privacy parameter that modulates the level of privacy protection.
The privacy guarantee indicates that the log-likelihood ratio of producing the same outcome \(y\) is bounded by \(\eta d_\chi(x, x')\) for any two possible inputs \(x, x'\).
### 3.2 ARCHITECTURE

**Figure 1:** Overview of our privacy-preserving SnD framework. Users first obtain an initial embedding from a local encoder, followed by a noise addition via the privatization module. This privatized embedding is then transmitted to the server for processing. Upon completion, users receive a noised output, which is subsequently refined using a pre-trained denoising model to achieve an optimal balance between privacy and utility.
Denote \(G : V^n \rightarrow \mathbb{R}^d\) as the language model that maps an \(n\)-token input to an embedding. In Split-N-Denoise (SnD), we split the language model \(G\) into a local encoder \(G_l : V^n \rightarrow \mathbb{R}^{n \times d}\) on the user side and a cloud encoder \(G_c : \mathbb{R}^{n \times d} \rightarrow \mathbb{R}^d\) on the server side. The local encoder consists of only the token representation layer to minimize the computation cost for the user, and the server performs subsequent operations on the IRs uploaded by the clients. The architecture of SnD is depicted in Figure 1 and contains four main components:
- **Local encoder module**: the user retrieves the token embeddings of their input locally.
- **Privatization module**: the token representations are privatized by the user before being transmitted to the server to satisfy LDP.
- **Cloud encoder module**: the server performs transformation on the privatized token representations and returns the embedding to user.
- **Denoise module**: user conducts local denoising on the received embedding leveraging their raw inputs and specific noise levels.
### 3.3 NOISE MECHANISM
We adopt \(d_\chi\)-privacy to privatize the token representation layers on user side. Given an input sequence \(x = [x_1, \ldots, x_n]\), the token representation layer transforms \(x\) into a vector sequence...
$X = [x_1, \ldots, x_n] \in \mathbb{R}^{n \times d}$ via embedding model $E \in \mathbb{R}^{|V| \times d}$, where $|V|$ denotes the vocabulary size and $d$ represents the dimensionality of the embeddings.
Assuming the $L_2$ norm as the distance metric, the application of $d_\chi$-privacy, parameterized by $\eta$, to a given word embedding $x_t \in \mathbb{R}^d$ is realized by the addition of Laplacian noise $z$ with density $p(z) \propto \exp(-\eta \|z\|)$ [Wu et al., 2017]. To sample $z$ from this distribution, consider $z = lv$, where $l$ is sampled from a Gamma distribution $\Gamma(d, 1/\eta)$ and $v$ is sampled uniformly from the unit ball $B^d$. Consequently, the privatized representation $M(x_t)$ can be succinctly expressed as:
$$M(x_t) = x_t + z.$$
The supports of $z$ and thus $M(x_t)$ are unbounded, imposing difficulties on subsequent denoising procedures, especially under low levels of $\eta$. To improve the performance of the denoise model introduced in Section 3.4, the client clips the $\ell_2$ norm of the privatized representation to $C_{x_t}$:
$$M'(x_t) = M(x_t) \cdot \min\left(1, \; C_{x_t} / \|M(x_t)\|\right)$$
(2)
where $C_{x_t} = \max_{x_t \in X} \|x_t\|$ is chosen as an upper bound on the embedding norms. The user then updates its noise matrix locally according to the clipped representations for subsequent denoising. Appendix A.10 demonstrates the benefits of norm clipping empirically.
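For concreteness, a minimal NumPy sketch of this privatization step follows, combining the Gamma/direction sampling described above with the norm clipping of Eq. (2). Drawing the direction `v` on the unit sphere (the standard construction yielding the density $\propto \exp(-\eta\|z\|)$) and the function name are our assumptions, not the paper's code.

```python
import numpy as np

def dchi_privatize(X: np.ndarray, eta: float, rng=None):
    """Privatize token embeddings X (shape n x d) under eta * d_chi-privacy.

    Noise z = l * v with magnitude l ~ Gamma(d, 1/eta) and direction v
    uniform on the unit sphere in R^d; the privatized vectors are then
    norm-clipped to C, the largest raw embedding norm, as in Eq. (2).
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    l = rng.gamma(shape=d, scale=1.0 / eta, size=n)       # noise magnitudes
    v = rng.normal(size=(n, d))
    v /= np.linalg.norm(v, axis=1, keepdims=True)         # unit directions
    noisy = X + l[:, None] * v
    C = np.linalg.norm(X, axis=1).max()                   # clipping threshold
    norms = np.linalg.norm(noisy, axis=1, keepdims=True)
    clipped = noisy * np.minimum(1.0, C / norms)
    Z = clipped - X    # noise matrix kept client-side for later denoising
    return clipped, Z
```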
The following theorem states that the noise mechanism $M' : \mathbb{R}^d \rightarrow \mathbb{R}^d$ adheres to $\eta d_X$-privacy. Refer to appendix A.2 for the proof.
**Theorem 1** For any $d \geq 1$ and any $\eta > 0$, the mechanism $M' : \mathbb{R}^d \rightarrow \mathbb{R}^d$ achieves $\eta d_X$-privacy with respect to $d_X(x, x') = ||x - x'||$.
### 3.4 Denoise Model
**Limitation of server-side denoise:** the denoising ability of the server is limited by its lack of knowledge regarding the noise levels. The server's capacity to remove noise is inherently in conflict with the level of privacy protection. Intuitively, if the server could produce an appropriate denoised output on its own, there is a higher probability that it can also reconstruct the original user input. Proposition 1 below gives a lower bound on the mean square error (MSE) of server-side denoise algorithms. The proof can be found in Appendix A.3.1.
**Proposition 1** Let $y \in Y \subseteq \mathbb{R}^k$ be the original vector without noises added, and let $\hat{y} \in \mathbb{R}^k$ be the noisy vector obtained under $\eta d$-privacy mechanism. Denote $D_s : \mathbb{R}^k \rightarrow \mathbb{R}^k$ as the denoising algorithm run by the server. Suppose $D_s$ is unbiased and the token embeddings are bounded by $B_x$:
$$||x' - x|| \leq B_x, \forall x', x$$
(3)
then:
$$\mathbb{E}\left[\|D_s(\hat{y}) - y\|^2 / k\right] \geq \frac{\sum_{i=1}^{k} \operatorname{diam}_i(Y)^2 / 4k}{e^{\eta B_x} - 1}$$
(4)
where $\text{diam}_i(Y) = \sup_{y, y' \in Y : y_i \neq y'_i} |y_i - y'_i|$ is the diameter of $Y$ in the $i$-th dimension.
**Remark 2** The vector $y$ can be: (i) the token representations uploaded from users, (ii) output embeddings, or (iii) any intermediate results returned by the language model based on the token embeddings. The instantiation of $y$ is determined by the layer at which the server runs denoising algorithm.
To address the limitation, we propose a denoise framework where users conduct error correction on the noisy embeddings using their specific noises and raw inputs. Given the black-box nature of neural network transformation on the privatized token representations, we propose to train a transformer-based model for embedding denoise.
Let $\tilde{X} = [\tilde{x}_1, \ldots, \tilde{x}_n], Z = [z_1, \ldots, z_n] \in \mathbb{R}^{n \times d}$ denote, respectively, the privatized token representations and the noise matrix. Note that the noise vector is updated with the clipped privatized token embeddings, $z = M'(x_t) - x_t$. After a series of operations, the server returns to the user a noisy embedding \( e_n \) capturing the context of the input tokens. The denoise model is parameterized by an \( L \)-layer transformer decoder, \( D : \mathbb{R}^{(2n+1) \times d} \rightarrow \mathbb{R}^d \):
\[
e_d = D(e_n, \tilde{X}, Z)
\]
(5)
The input to the denoise model \( H_0 \) is a concatenation of vectors:
\[
H_0 = [e_n; \tilde{x}_1, \ldots, \tilde{x}_n; z_1, \ldots, z_n]
\]
(6)
Let \( h^l_t \) represent the hidden state for the \( t^{th} \) vector at layer \( l \). This state is computed using the following recursive relation:
\[
h^l_t = h^{l-1}_t + a^{l-1}_t + m^{l-1}_t
\]
(7)
where
\[
a^{l-1}_t = \text{attn}^l(h^{l-1}_1, h^{l-1}_2, \ldots, h^{l-1}_{2n+1}), \quad m^{l-1}_t = W^l_{\text{proj}} \, \sigma(W^l_f \, \gamma(a^{l-1}_t + h^{l-1}_t))
\]
(8)
The denoised embedding is obtained directly from the hidden state representation for \( e_n \) at the final layer:
\[
e_d = h^L_0
\]
(9)
We visualize the architecture of the denoise model in figure 3. Intuitively, the noisy embedding undergoes \( L \) steps to transform into the denoised embedding. In each step, the transformation is conditioned on the feature representations of raw IRs as well as specific noises.
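A compact PyTorch sketch of the denoiser interface follows; it reproduces the input concatenation of Eq. (6) and the readout of Eq. (9). The use of `nn.TransformerEncoder` blocks and all hyperparameters are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Maps (e_n, X_tilde, Z) -> denoised embedding e_d.

    Self-attention runs over the 2n+1 concatenated vectors of Eq. (6); the
    hidden state at position 0 (the noisy-embedding slot) after the final
    layer is returned, as in Eq. (9).
    """
    def __init__(self, d: int, n_layers: int = 4, n_heads: int = 8):
        super().__init__()
        # d must be divisible by n_heads.
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, e_n, X_tilde, Z):
        # e_n: (B, d); X_tilde, Z: (B, n, d)
        h0 = torch.cat([e_n.unsqueeze(1), X_tilde, Z], dim=1)  # (B, 2n+1, d)
        return self.blocks(h0)[:, 0, :]                        # e_d
```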
To train a denoise model, the server samples a set of noises added to the token representations of public corpus. Subsequently, the clean embedding \( e_c \) and noisy embedding \( e_n \) are computed from, respectively, the raw and privatized token representations:
\[
e_c = G(X), \quad e_n = G(\tilde{X})
\]
(10)
The denoise model is trained on the above datasets with the objective of minimizing the deviation between denoised and clean embeddings:
\[
\min_D \mathbb{E}[||D(e_n, \tilde{X}, Z) - e_c||^2]
\]
(11)
The pretrained model is shared with users to conduct denoising on the received embeddings locally. It is important to note that the denoise model does not expose any information regarding user data: its training is carried out exclusively on a public dataset and is therefore independent of users' private inputs.
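The training procedure of Eqs. (10)-(11) can be sketched as follows, assuming a frozen language model `G`, a torch analogue `privatize` of the noise mechanism from Section 3.3, and a loader over the public corpus; all names are hypothetical.

```python
import torch

def train_denoiser(denoiser, G, privatize, corpus_loader, etas, optimizer):
    """Server-side training on public data: sample a privacy level, build
    (clean, noisy) embedding pairs with the frozen LM G, and minimize
    ||D(e_n, X_tilde, Z) - e_c||^2 as in Eq. (11)."""
    mse = torch.nn.MSELoss()
    for step, X in enumerate(corpus_loader):        # X: raw token representations
        eta = float(etas[step % len(etas)])         # vary noise levels during training
        X_tilde, Z = privatize(X, eta)              # client-side mechanism, Eqs. (1)-(2)
        with torch.no_grad():
            e_c, e_n = G(X), G(X_tilde)             # clean / noisy embeddings, Eq. (10)
        loss = mse(denoiser(e_n, X_tilde, Z), e_c)  # Eq. (11)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```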
### 3.5 Complexity Analysis
In this section, we analyze the communication complexity and user computation complexity of our framework.
**Communication complexity:** the communication cost can be broken down as: (1) the user uploads the token representations to the server (\( O(nd) \) messages); (2) the server shares the embedding with the user (\( O(d) \) messages). Hence, the total communication overhead is \( O(nd) \).
**User computation complexity:** the user's computation cost can be broken down as: (1) retrieving token embeddings from the input text (\( O(n) \) complexity); (2) performing local denoising with the transformer-based model (\( O(n^2 d L) \) complexity, Vaswani et al. (2017)). Therefore, the user's computation cost adds up to \( O(n^2 d L) \).
### 4 Experimental Results
#### 4.1 Experiment Setup
We evaluate our framework on three classes of LLMs: Bert (Devlin et al., 2018), GPT2 (Radford et al., 2019), and T5 (Raffel et al., 2020). The architectures of our denoise and downstream models are described in appendix A.6. We benchmark our experiments against three baseline methods: (i) Token embedding privatization (TokEmbPriv) (Qu et al., 2021a), where the token embeddings are perturbed...
by the user before sending them to the server. (ii) Text-to-text privatization (Text2Text) Feyisetan et al. (2019); Qu et al. (2021b), where the plain token sequence is transformed into a privatized token sequence by replacing each word with the perturbed token embeddings. (iii) Privacy-Preserving Prompt Tuning (RAPT) Li et al. (2023) that protects prompt tuning and inference with local DP.
To assess the performance of our approach, we employ two distinct evaluation metrics: (1) similarity with \( e_c \): we compute the mean square error (MSE) and cosine similarity (COS) between \( e_c \) and \( e_d \), the clean and denoised embeddings, to quantify the extent of data variation induced by the perturbation process; (2) performance on downstream tasks: we utilize accuracy scores (ACC) and area under the ROC curve (AUC) to gauge the utility of the embeddings on downstream tasks.
### 4.2 DATASETS
To train the denoise model, we use a combination of 20 datasets to better mimic generalized training scenarios, including TweetEval Offensive Barbieri et al. (2020), Hate Speech 18 de Gibert et al. (2018), Health Fact Kotonya & Toni (2020), and Daily Dialogue Li et al. (2017). See the full list of datasets in Appendix A.4.
We test our denoising performance on a collection of downstream tasks: (i) Sentence classification: CoLA Warstadt et al. (2019); (ii) Pair similarity: Quora Question Pairs (QQP) Chen et al. (2018) and MSR Paraphrase Corpus (MRPC) Dolan & Brockett (2005); (iii) Recognizing Textual Entailment (RTE) Dagan et al. (2006); Bar-Haim et al. (2006); Giampiccolo et al. (2007); Bentivogli et al. (2009). Refer to appendix A.5 for the evaluation details.
### 4.3 ATTACKS
We simulate two inference attacks on the privatized token embeddings from SnD to investigate the privacy protection ability under varying \( \eta \).
**Embedding inversion attack** Li et al. (2023); Qu et al. (2021b): a token-level attack that reconstructs the raw text from the privatized token representations. Given a noisy embedding \( \hat{x}_t, t \in [1, n] \), the server identifies the token \( x_t \) closest to \( \hat{x}_t \), measured by \( L_2 \) distance in the embedding space:
\[
x_t = \arg \min_k \| w_k - \hat{x}_t \|
\]
where \( w_k \) represents the representation for the \( k^{th} \) token in the vocabulary.
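This nearest-neighbor inversion is straightforward to reproduce. A NumPy sketch, where `E` denotes the $|V| \times d$ embedding matrix and `X_hat` the privatized token representations (hypothetical names):

```python
import numpy as np

def invert_embeddings(X_hat: np.ndarray, E: np.ndarray) -> np.ndarray:
    """Return, for each noisy token vector, the vocabulary index of the
    closest embedding in L2 distance (the attacker's reconstruction)."""
    # Squared distances via ||w||^2 - 2 <w, x>; the ||x||^2 term is
    # constant per row and does not affect the argmin.
    d2 = (E ** 2).sum(axis=1)[None, :] - 2.0 * X_hat @ E.T
    return d2.argmin(axis=1)

def attack_accuracy(tokens: np.ndarray, X_hat: np.ndarray, E: np.ndarray) -> float:
    """Fraction of tokens correctly recovered by the inversion attack."""
    return float((invert_embeddings(X_hat, E) == tokens).mean())
```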
**Attribute inference attack** Li et al. (2023): an attack that infers the sensitive features of records from the privatized token representations. We rely on the twitter text dataset Vashishth & Meehan (2020) to predict the gender based on the user’s review.
### 4.4 EXPERIMENT RESULTS
#### 4.4.1 PERFORMANCE ON DOWNSTREAM TASK
We record the performance on various downstream tasks in terms of accuracy (ACC) under varying \( \eta \) in Tables 1, 2, and 3. The utility is benchmarked against the case without any noise injection, and thus no denoise operation, denoted by \( \eta = \infty \). One important observation is that our framework maintains acceptable accuracy compared with the non-privatized setting. Across the chosen \( \eta \) levels and four downstream tasks, Bert, T5, and GPT models yield average utility losses of 4.31%, 4.48%, and 5.25%, respectively. Larger models tend to incur greater utility loss, which aligns with the intuition that transformed noises become increasingly unpredictable, and consequently more challenging to denoise, after traversing additional layers. Note that we perform the evaluation on embeddings from the pre-trained models without any fine-tuning, and thus there is a gap between the accuracy in our results for \( \eta = \infty \) and the SOTA benchmarks.
#### 4.4.2 COMPARISON WITH BASELINE
In Tables 4, 5, and 6, we assess and compare the performance of three model families against three baseline methods using AUC. For the three model families, we selected three distinct \( \eta \) levels for experimentation, given the varying noise tolerance of each model. Note that \( \eta \) levels do not possess
Table 1: Accuracies on downstream tasks for BERT.
| η | DistillBert (66m) | Bert Base (110m) | Bert Large (340m) |
|---------|------------------|------------------|-------------------|
| | 100 500 ∞ | 100 500 ∞ | 100 500 ∞ |
| CoLA | 0.693 0.694 0.701 | 0.688 0.694 0.751 | 0.697 0.699 0.757 |
| QQP | 0.632 0.649 0.683 | 0.667 0.688 0.728 | 0.676 0.684 0.706 |
| MRPC | 0.683 0.691 0.695 | 0.689 0.725 0.742 | 0.684 0.689 0.701 |
| RTE | 0.578 0.580 0.592 | 0.592 0.610 0.616 | 0.590 0.601 0.621 |
Table 2: Accuracies on downstream tasks for T5.
| η | T5 Small (60m) | T5 Base (220m) | T5 Large (770m) |
|---------|----------------|----------------|-----------------|
| | 0.001 0.01 1 ∞ | 0.001 0.01 1 ∞ | 0.001 0.01 1 ∞ |
| CoLA | 0.69 0.69 0.69 | 0.71 0.69 0.70 | 0.70 0.73 0.70 |
| QQP | 0.68 0.69 0.68 | 0.71 0.66 0.67 | 0.69 0.72 0.66 |
| MRPC | 0.68 0.69 0.69 | 0.70 0.69 0.69 | 0.70 0.71 0.68 |
| RTE | 0.55 0.56 0.58 | 0.60 0.57 0.58 | 0.62 0.63 0.57 |
a universal implication across model families, as varying models exhibit distinct robustness against inference attacks, as delineated in Section 4.4.3.
Table 3: Accuracies on downstream tasks for GPT2.
| η | GPT2 Small (120m) | GPT2 Medium (345m) | GPT2 large (774m) | GPT2 Xlarge (1.5b) |
|---------|------------------|--------------------|-------------------|-------------------|
| | 1 100 ∞ | 1 100 ∞ | 1 100 ∞ | 100 ∞ |
| CoLA | 0.688 0.700 0.709 | 0.690 0.698 0.728 | 0.700 0.701 0.724 | 0.693 0.766 |
| QQP | 0.645 0.657 0.716 | 0.647 0.652 0.711 | 0.637 0.650 0.721 | 0.650 0.741 |
| MRPC | 0.688 0.691 0.720 | 0.688 0.693 0.710 | 0.674 0.691 0.701 | 0.686 0.705 |
| RTE | 0.556 0.563 0.581 | 0.567 0.578 0.583 | 0.581 0.606 0.611 | 0.584 0.592 |
Table 4: AUC comparisons for BERT models with QQP task.
| η | DistillBert | Bert Base | Bert Large |
|---------|-------------|-----------|------------|
| | 50 100 500 | 50 100 500| 50 100 500 |
| TokenEmbPriv | 0.502 0.518 | 0.521 0.511| 0.535 0.557|
| Text2Text | 0.541 0.541 | 0.541 0.512| 0.513 0.507|
| RAPT | 0.517 0.515 | 0.545 0.513| 0.528 0.551|
| SnD | 0.583 0.600 | 0.610 0.674| 0.675 0.691|
Table 5: AUC Comparison for GPT Models with MRPC task.
| η | GPT2 Small | GPT2 Medium | GPT2 large |
|---------|------------|-------------|------------|
| | 1 50 100 | 1 50 100 | 1 50 100 |
| TokenEmbPriv | 0.514 0.525| 0.532 0.526| 0.523 0.530|
| Text2Text | 0.498 0.502| 0.502 0.496| 0.498 0.498|
| RAPT | 0.504 0.521| 0.524 0.503| 0.502 0.539|
| SnD | 0.542 0.552| 0.579 0.553| 0.578 0.573|
Table 6: AUC Comparison for T5 Models with RTE task.
| | T5 Small | T5 Base | T5 Large |
|----------|----------|---------|----------|
| η | 0.001 | 0.01 | 0.1 |
| TokenEmbPriv | 0.503 | 0.515 | 0.514 |
| Text2Text | 0.512 | 0.533 | 0.537 |
| RAPT | 0.510 | 0.548 | 0.547 |
| SnD | **0.547** | **0.577** | **0.575** |
For each model family, a representative task was selected. For BERT models, SnD outperforms TokenEmbPriv, Text2Text, and RAPT by an average of 22.2%, 22.1%, and 20.9%, respectively. For GPT models, SnD yields AUC higher than the three baselines by 7.3% to 12.3% on average. For T5 models, the performance of SnD exceeds the baselines by an average of over 10%. It can be observed that TokenEmbPriv and Text2Text exhibit poorer performance compared to the other two approaches. This could be attributed to the lack of a denoise or reconstruction mechanism within these methods. Furthermore, the unbounded noise support in TokenEmbPriv leads to significant deviations between the privatized token representations and their original values. The MSE and COS between the initial and recovered embeddings are presented in Appendix A.8. Both AUC and the similarity metrics suggest our technique's proficiency in restoring the original attributes of the noised embedding after procuring the perturbed results from the server.
4.4.3 Inference Attack
In this section we present the results for the embedding inversion attack; the discussion of the attribute inference attack can be found in Appendix A.7. Figure 2 visualizes the attack accuracy, measured by the percentage of tokens correctly identified by the attack, for the three series of models at various η values. For Bert models, the attack success rates remain below 1% with η ≤ 500. GPT models exhibit negligible attack accuracy with η values up to 100, while GPT Xlarge demonstrates exceptional robustness against inference attacks as η increases. T5 models, on the other hand, require much smaller privacy budgets to resist inference attacks effectively.

5 Conclusion
This paper proposes SnD, a framework that employs split inference and denoising techniques to protect LLM inference with LDP. We split the language model to deploy the token representation layer on the user side. The user perturbs the token embeddings to guarantee $d_\chi$-privacy before transmitting them to the server. To improve the utility of the embeddings, the user conducts local denoising with a pre-trained model, leveraging the raw token representations and specific noises. The empirical studies show that SnD performs better in maintaining the utility of embeddings compared with baseline methods by over 10% on average. Our study opens up new possibilities for privacy-preserving LLM inference, in terms of scalability to larger LLMs, optimizing user computation cost, and extension to sequence-to-sequence inference models (see Appendix A.12).
REFERENCES
Tiago A. Almeida, Jose Maria Gomez Hidalgo, and Akebo Yamakami. Contributions to the Study of SMS Spam Filtering: New Collection and Results. In Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG’11), 2011.
Borja Balle and Yu-Xiang Wang. Improving the gaussian mechanism for differential privacy: Analytical calibration and optimal denoising, 2018.
Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second pascal recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop, 2006.
Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa-Anke, and Leonardo Neves. TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification. In Proceedings of Findings of EMNLP, 2020.
Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. The fifth pascal recognizing textual entailment challenge. In Proceedings of the TAC 2009 Workshop, 2009.
Iñigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. Efficient Intent Detection with Dual Sentence Encoders. In Proceedings of the 2nd Workshop on NLP for ConvAI - ACL 2020, 2020. URL https://arxiv.org/abs/2003.04807 Data available at https://github.com/PolyAI-LDN/task-specific-datasets.
Kostas Chatzikokolakis, Miguel Andrés, Nicolás Bordenabe, and Catuscia Palamidessi. Broadening the scope of differential privacy using metrics. In Privacy Enhancing Technologies (PETS), 2013. doi: 10.1007/978-3-642-39077-7_5.
Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, and Furu Wei. The-x: Privacy-preserving transformer inference with homomorphic encryption. arXiv preprint arXiv:2206.00216, 2022.
Yu Chen, Tingxin Li, Huiming Liu, and Yang Yu. Hide and seek (has): A lightweight framework for prompt privacy protection, 2023.
Zihan Chen, Hongbo Zhang, Xiaoji Zhang, and Leqi Zhao. Quora question pairs, 2018. URL https://www.kaggle.com/c/quora-question-pairs
Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Proceedings of the First International Conference on Machine Learning Challenges, 2006.
Ona de Gibert, Naiara Perez, Aitor García-Pablos, and Montse Cuadros. Hate Speech Dataset from a White Supremacy Forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pp. 11–20, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5102. URL https://www.aclweb.org/anthology/W18-5102
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Bill Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing (IWP2005), 2005.
Minxin Du, Xiang Yue, Sherman SM Chow, Tianhao Wang, Chenyu Huang, and Huan Sun. Dp-forward: Fine-tuning and inference on language models with differential privacy in forward pass. arXiv preprint arXiv:2309.06746, 2023.
Haonan Duan, Adam Dziedzic, Nicolas Papernot, and Franziska Boenisch. Flocks of stochastic parrots: Differentially private prompt learning for large language models, 2023.
|
xnhvVtZtLD
|
Does the assumption $q(s) = p(s), \forall s$ amount to saying e.g. that the proportion of male and female samples would be the same across all sub-regions (e.g. across all age groups)? If so, is that a reasonable assumption to make?
|
ON THE FAIRNESS ROAD: ROBUST OPTIMIZATION FOR ADVERSARIAL DEBIASING
Vincent Grari∗,1,2,4, Thibault Laugel∗,1,2,4, Tatsunori Hashimoto2, Sylvain Lamprier3, Marcin Detyniecki1,4,5
1 AXA Group Operations
2 Stanford University
3 LERIA, Université d’Angers, France
4 TRAIL, Sorbonne Université, Paris, France
5 Polish Academy of Science, IBS PAN, Warsaw, Poland
{grari,lauge1}@stanford.edu
code: https://github.com/axa-rev-research/ROAD-fairness/
∗Equal contribution
ABSTRACT
In the field of algorithmic fairness, significant attention has been put on group fairness criteria, such as Demographic Parity and Equalized Odds. Nevertheless, these objectives, measured as global averages, have raised concerns about persistent local disparities between sensitive groups. In this work, we address the problem of local fairness, which ensures that the predictor is unbiased not only in terms of expectations over the whole population, but also within any subregion of the feature space, unknown at training time. To enforce this objective, we introduce ROAD, a novel approach that leverages the Distributionally Robust Optimization (DRO) framework within a fair adversarial learning objective, where an adversary tries to predict the sensitive attribute from the predictions. Using an instance-level re-weighting strategy, ROAD is designed to prioritize inputs that are likely to be locally unfair, i.e., where the adversary faces the least difficulty in reconstructing the sensitive attribute. Numerical experiments demonstrate the effectiveness of our method: it achieves, for a given global fairness level, Pareto dominance with respect to local fairness and accuracy across three standard datasets, as well as enhances fairness generalization under distribution shift.
1 INTRODUCTION
The increasing adoption of machine learning models in various applications, such as healthcare or criminal justice, has raised concerns about the fairness of algorithmic decision-making processes. As these models are often trained on historical data, they have been shown to unintentionally perpetuate existing biases and discrimination against certain vulnerable groups (Obermeyer et al., 2019). Addressing fairness in ML has thus become an essential aspect of developing ethical and equitable systems, with the overarching goal of ensuring that prediction models are not influenced by sensitive attributes. One of its most common concepts, group fairness, entails dividing the population into demographic-sensitive groups (e.g., male and female) and ensuring that the outcomes of a decision model are equitable across these different groups, as measured with criteria like Demographic Parity (DP) (Dwork et al., 2012) and Equal Opportunity (EO) (Hardt et al., 2016).
However, focusing solely on these group fairness criteria, along with predictive performance, has been increasingly questioned as an objective: besides being shown to poorly generalize to unseen, e.g., drifted, environments (Kamp et al., 2021), it has been more generally criticized for being too simplistic (Selbst et al., 2019; Binns, 2020), leading to arbitrariness in the bias mitigation process (Krco et al., 2023) and the risk of having some people pay for others (Mittelstadt et al., 2023). Recognizing these issues, some researchers have long focused on exploring more localized fairness behaviors, proposing to measure bias sectionally within predefined demographic categories, in which comparison between sensitive groups is deemed meaningful for the considered task. For instance, using Conditional Demographic Disparity (Zliobaite et al., 2011), fairness in predicted
salaries between men and women shall be evaluated by comparing individuals within the same job category and seniority level, rather than making a global comparison across sensitive groups.
Nevertheless, predefining these comparable groups to optimize their local fairness is often difficult: for instance, which jobs should be deemed legally comparable with one another? (Wachter et al., 2021) In this paper, we therefore propose to address the difficult problem of enforcing fairness in local subgroups that are unknown at training time (Sec. 2). For this purpose, we leverage the Distributionally Robust Optimization (DRO) framework, initially proposed to address worst-case subgroup accuracy (see e.g. Duchi & Namkoong, 2021). Our approach ROAD (Robust Optimization for Adversarial Debiasing, described in Sec. 3) combines DRO with a fair adversarial learning framework, which aims to minimize the ability of an adversarial model to reconstruct the sensitive attribute. By boosting attention on feature regions where predictions are the most unfair in the sense of this sensitive reconstruction, ROAD is able to find the best compromise between local fairness, accuracy and global fairness. Such dynamic focus is done by relying on a weighting process that respects some locality smoothness in the input space, in order to mitigate bias in any implicit subgroup of the population without supervision. Experiments, described in Section 4, show the efficacy of the approach on various datasets.
2 Problem Statement
Throughout this document, we address a conventional supervised classification problem, trained using \( n \) examples \((x_i, y_i, s_i)\) for \( i = 1, \ldots, n \), where each example is composed of a feature vector \( x_i \in \mathbb{R}^d \) containing \( d \) predictors, a binary sensitive attribute \( s_i \), and a binary label \( y_i \). These examples form the training set \( \Gamma \) and are sampled from a distribution \( (X, Y, S) \sim p \). Our goal is to construct a predictive model \( f \) with parameters \( w_f \) that minimizes the loss function \( L_Y(f(x), y) \) (e.g., the log loss for binary classification), whilst adhering to fairness constraints based on specific fairness definitions relying on the sensitive attribute \( S \). In this section, we present the fairness notions and works that are necessary to ground our proposition.
2.1 Group Fairness
One key aspect of algorithmic fairness is group fairness, which aims to ensure that the outcomes of a decision model are equitable across different demographic groups. In this paper, we focus on two of the most well-known group fairness criteria: Demographic Parity and Equalized Odds.
Demographic Parity: Demographic parity (DP) (Dwork et al., 2012) is achieved when the proportion of positive outcomes is equal across all demographic groups. Using the notations above, the learning problem of a model \( f \) under demographic parity constraints can be expressed as follows:
\[
\arg\min_{w_f} \mathbb{E}_{(x,y,s) \sim p} L_Y(f_{w_f}(x), y) \quad \text{s.t.} \quad |\mathbb{E}_p[\hat{f}_{w_f}(x)|s = 1] - \mathbb{E}_p[\hat{f}_{w_f}(x)|s = 0]| < \epsilon \tag{1}
\]
where \( \hat{f} \) represents the output prediction after thresholding (e.g., \( \hat{f}_{w_f}(x) = \mathbb{I}_{f_{w_f}(x) > 0.5} \)). The parameter \( \epsilon \) represents the deviation permitted from perfect statistical parity, allowing for flexibility in balancing accuracy and fairness. In the following, this deviation is noted as Disparate Impact (DI), representing the absolute difference in positive outcomes between the two demographic groups.
Although numerous methods exist to solve the problem described in Equation 1, we focus in this work on the family of fair adversarial learning, which has been shown to be the most powerful framework for settings where acting on the training process is an option (i.e., in-processing method) (Louppe et al., 2017; Wadsworth et al., 2018; Zhang et al., 2018; Grant, 2022). One of the most well-known fair adversarial approaches by Zhang et al. (2018) is framed as follows:
\[
\min_{w_f} \mathbb{E}_{(x,y,s) \sim p} L_Y(f_{w_f}(x), y) \quad \text{s.t.} \quad \min_{w_g} \mathbb{E}_{(x,y,s) \sim p} L_S(g_{w_g}(f_{w_f}(x)), s) > \epsilon' \tag{2}
\]
where \( L_S \) represents a loss for sensitive reconstruction (e.g., a log loss for a binary sensitive attribute). In this adversarial formulation, the goal is to learn a model \( f \) that minimizes the traditional loss of the predictor, while simultaneously ensuring that an adversary \( g \) with parameters \( w_g \) cannot effectively distinguish between the two sensitive demographic groups based on the predictor's output \( f_{w_f}(x) \). The fairness constraint is thus imposed here as the adversary's
ability to reconstruct the sensitive attribute, which should be limited, i.e., the value of the loss function \( L_S(g_w(f_w(x)), s) \) should be above a minimum value \( \epsilon' \). In practice, to achieve a balance between the predictor’s and the adversary’s performance, a relaxed formulation of Equation 2 is used:
\[
\min_{w_f} \max_{w_g} \mathbb{E}_{(x,y,s) \sim p} L_Y(f_w(x), y) - \lambda \mathbb{E}_{(x,y,s) \sim p} L_S(g_w(f_w(x)), s).
\]
The coefficient \( \lambda \in \mathbb{R}^+ \) controls the trade-off between the predictor’s performance on the task of predicting \( Y \) and the adversary’s performance on reconstructing the sensitive attribute. A larger value of \( \lambda \) emphasizes the importance of restricting the adversary’s ability to reconstruct the sensitive attribute, while a smaller value prioritizes the performance of the predictor on the main task.
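To make the alternating optimization concrete, here is a minimal PyTorch sketch of one update of this relaxed objective. The models `f` and `g` are assumed to output logits, and the schedule (one adversary step per predictor step) is an illustrative choice rather than something prescribed by the formulation.

```python
import torch.nn.functional as F

def fair_adversarial_step(f, g, opt_f, opt_g, x, y, s, lam: float):
    """One alternating update of min_f max_g L_Y - lambda * L_S (sketch).
    y and s are float tensors with the same shape as the model outputs."""
    # 1) Adversary step: improve sensitive reconstruction from f's output.
    loss_g = F.binary_cross_entropy_with_logits(g(f(x).detach()), s)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    # 2) Predictor step: fit y while making the adversary's task harder.
    out = f(x)
    loss_f = (F.binary_cross_entropy_with_logits(out, y)
              - lam * F.binary_cross_entropy_with_logits(g(out), s))
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()
```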
**Equalized Odds:** Equalized Odds (EO) (Hardt et al., 2016) is another group fairness criterion that requires the classifier to have equal true positive rates (TPR) and false positive rates (FPR) across demographic groups. This criterion is especially relevant when misclassification can have significant impacts on individuals from different groups. To achieve EO, Zhang et al. (2018) employs an adversarial learning approach by concatenating the true outcome \( Y \) to the input of the adversary.
### 2.2 The Local Fairness Problem
The global aspect of these group fairness criteria begs the question of the emergence of local undesired behaviors: by enforcing constraints on global averages between sensitive groups, we still expect that some local differences may persist (Krco et al., 2023). We illustrate this phenomenon through a simple experiment, shown in Fig. 1. On two datasets, Adult and Compas (described in App. A.8.1), two models are trained: an unconstrained model solely optimizing for accuracy (called Biased, in red), and the adversarial model from Zhang et al. (2018) (in blue) optimizing for Demographic Parity for the sensitive attributes gender (Adult) and race (Compas). For each model, two types of Disparate Impact (DI) values are shown: the global DI values, calculated over all the test set (dashed lines); and the local ones, calculated in subgroups of the population (full lines). The subgroups are defined here as age categories: discretized bins of the continuous attribute age. Although local DI values are generally lower for the fair model, they vary a lot across subgroups, sometimes remaining unexpectedly high. This is especially true for less populated segments (e.g., higher age values), and segments where the sensitive attribute distribution is extremely unbalanced: as the fairness constraint only concerns global averages, more attention is put on densely populated regions. On the other hand, less populated segments are more likely to be ignored during the training.
These local differences echo the long-asserted claim that the blunt application of group fairness metrics bears inherent inequalities through their failure to account for any additional context (Selbst et al., 2019; Binns, 2020). Here, although reductive, the additional context we refer to is the information already available in the dataset \( X \), in which comparable subgroups (Wachter et al., 2021) can be drawn to evaluate fairness. This helps define the notion of **Local Fairness** that is the focus of this paper: a locally fair model thus guarantees minimal differences in expectations within these comparable subgroups of \( X \). Contrary to works on intersectional fairness (Kearns et al., 2018), the desired behavior in Fig. 1 is thus not to treat age as a sensitive attribute: predictions \( f(x) \) are expected to vary with age. However, in the Compas dataset for instance, equality between race groups is expected to hold regardless of the age category considered. It is important to note that the
notion studied here is also different from the one of individual fairness, which aims to treat similarly individuals who are close w.r.t. some predefined similarity measure (see, e.g., Dwork et al. (2012)), without any notion of sensitive data, rather than minimize DI among subgroups of individuals. In the same vein of fairness without demographics, Hashimoto et al. (2018), Duchi et al. (2023) consider the case of unknown subgroups via the Distributionally Robust Optimization (DRO) framework. While their goal is to train models that perform uniformly well across all partitions of the population, our goal is to train a model that is uniformly fair (regarding a sensitive attribute) across all subregions of the feature space, which is quite different.
Having knowledge of these subgroups at training time would mean that it could be included as an additional constraint in the learning objective, akin to the work of Žliobaite et al. (2011). The criterion they propose, Conditional Demographic Disparity, measures Demographic Disparity across user-defined subcategories. However, several issues make this difficult, if not impossible, in practice. Besides that such expert knowledge is generally unavailable, or costly to acquire, the subgroups definitions might even be inconsistent across different testing environments (e.g. conflicting legal definitions of job categories or gender (Wachter et al., 2021)), making its optimization futile. Furthermore, exploring multiple categories is problematic in a combinatorial perspective. In this paper, we propose to optimize accuracy while adhering to a worst-case fairness constraint, an objective that was originally introduced to enhance fairness generalization capabilities in scenarios involving distribution drift or noisy labels (cf. Sec. 2.3). We implicitly define the subpopulations of interest, for which we aim to optimize fairness, using distributions \( q \) within an uncertainty set \( Q \), and present the DRO framework for the Demographic Parity criterion as follows:
\[
\min_{w_f} \mathbb{E}_{(x,y,s) \sim p} L_Y(f_{w_f}(x), y) \quad \text{s.t.} \quad \max_{q \in Q} \left| \mathbb{E}_q \left[ \hat{f}_{w_f}(x) \mid s = 1 \right] - \mathbb{E}_q \left[ \hat{f}_{w_f}(x) \mid s = 0 \right] \right| < \epsilon \tag{3}
\]
The constraint ensures that the Disparate Impact remains less than a predefined threshold \( \epsilon \) under the worst-case distribution \( q \in Q \). Working with distribution \( q \) allows us to enforce local fairness by targeting subpopulations of interest, thus creating a more focused and adaptable model that addresses fairness problems both globally and at a granular level.
### 2.3 Related Work and Positioning
Several works have proposed to address the objective in Eq. 3, either to ensure better fairness generalization capabilities in drift scenarios (Rezaei et al., 2021; Ferry et al., 2022; Wang et al., 2023) or when facing noisy labels (Mandal et al., 2020; Wang et al., 2020; Roh et al., 2021). The uncertainty set \( Q \) then represents the perturbations that might affect the data at test time, and can therefore take several forms. While we expect \( Q \) to contain the distribution of the test data, leaving too much freedom to \( q \) may lead to trivial solutions that degenerate into uniform classifiers (Martínez et al., 2021). To prevent this, the uncertainty set \( Q \) is commonly defined as a ball centered on \( p \) using distribution distances or similarities. Examples include maximal Total Variation distance (Wang et al., 2020), Wasserstein distance (Wang et al., 2021) or Jaccard index (Ferry et al., 2022). From the fairness without demographics literature (Duchi et al., 2023), it is known that the maximal allowed divergence is connected to the risk of the smallest component of the training distribution, seen as a mixture of distributions. This observation also holds for worst-case fairness using DRO, as defined in Eq. 3.
To the best of our knowledge, our work is the first one to address the topic of local fairness with unknown subgroups. This different objective implies additional constraints on the set \( Q \) considered in Eq. 3. Notably, under our local fairness objective, we also want that the discrepancies of \( q \) w.r.t. \( p \) are smooth in the feature space, so that the fairness constraint does not increase mitigation on specific disconnected individuals, but rather on local areas of the space. This will guide the design of our approach in the next section.
Moreover, due to the discrete nature of the problem expressed in Eq. 3 (the constraint is applied on \( \hat{f} \) which is binary), most existing works restrict to linear models (Wang et al., 2020; Rezaei et al., 2020; Mandal et al., 2020; Taskesen et al., 2020), or rule-based systems (Ferry et al., 2022). This allows them to look for analytical solutions using linear programming. Although Rezaei et al. (2021) is an exception in this regard, they suffer from several drawbacks, namely requiring knowledge about the target distribution at train time and about the sensitive attribute at test time. Solving Equation 3 using a wider class of models remains therefore, to the best of our knowledge, unexplored.
3 ROAD: ROBUST OPTIMIZATION FOR ADVERSARIAL DEBIASING
3.1 FORMALIZATION
To overcome the limitations of previous works, we introduce our proposition to address the fairness generalization problem by combining adversarial optimization and the DRO framework. In order to learn a predictor \( f_{w_f} \) that is fair both globally and for any subregion of the feature space, the idea is therefore to boost, at each optimization step, the importance of regions \( q \) for which the sensitive reconstruction is the easiest for an optimal adversary \( g_{w_g^*} \) given the current prediction outcomes.
Rewriting the fairness constraint of Equation 3 with an adversary \( g_{w_g} : Y \rightarrow S \), we thus focus on the following problem for Demographic Parity:
\[
\min_{w_f} \mathbb{E}_{(x,y,s) \sim p} L_Y(f_{w_f}(x), y)
\]
subject to
\[
\min_{q \in Q} \mathbb{E}_{(x,y,s) \sim q} L_S(g_{w_g^*}(f_{w_f}(x)), s) > \epsilon'
\]
with \( w_g^* = \arg \min_{w_g} \mathbb{E}_{(x,y,s) \sim p} L_S(g_{w_g}(f_{w_f}(x)), s) \)
A major challenge with this formulation is that exploring all possible distributions in \( Q \) is infeasible in the general sense. Worse, modeling distribution \( q \) directly over the whole feature space as support is very difficult, and usually highly inefficient, even for \( Q \) restricted to distributions close to \( p \).
This motivates an adversarial alternative, which relies on importance weighting of training samples from \( p \). We therefore restrict \( Q \) to the set of distributions that are absolutely continuous with respect to \( p \), inspired by Michel et al. (2022). This allows us to write \( q = rp \), with \( r : X \times S \rightarrow \mathbb{R}^+ \) a function that acts as a weighting factor. Given a training set \( \Gamma \) sampled from \( p \), we can thus reformulate the overall objective, by substituting \( q \) with \( rp \) and applying its Lagrangian relaxation, as an optimization problem on \( r \in R = \{ r | rp \in Q \} \):
\[
\min_{w_f} \max_{r \in R} \frac{1}{n} \sum_{i=1}^{n} L_Y(f_{w_f}(x_i), y_i) - \lambda_g \frac{1}{n} \sum_{i=1}^{n} r(x_i, s_i) L_S(g_{w_g^*}(f_{w_f}(x_i)), s_i)
\]
with \( w_g^* = \arg \min_{w_g} \frac{1}{n} \sum_{i=1}^{n} L_S(g_{w_g}(f_{w_f}(x_i)), s_i) \)
with \( \lambda_g \) a regularization parameter controlling the trade-off between accuracy and fairness in the predictor model. In the following, we describe two constraints, inspired by the DRO literature, that we consider to ensure \( q \) keeps the properties of a distribution and avoids pessimistic solutions.
**Validity Constraint** To ensure \( q \) keeps the properties of a distribution (i.e., \( r \in R \)), previous works in DRO (e.g., Michel et al., 2022) enforce the constraint \( \mathbb{E}_{(x,s) \sim p} r(x, s) = 1 \) during the optimization. In the context of local fairness using our adversarial formulation from Eq. 5, we argue that this constraint is not sufficient to ensure a safe behavior with regard to the fairness criterion, as it allows disturbances in the prior probabilities of the sensitive attribute (i.e., \( q(s) \neq p(s) \)). As discussed more deeply in Appendix A.2.2, this may shift the optimum of the problem by inducing a stronger mitigation emphasis on samples from the most populated demographic-sensitive group.
To avoid this issue, we propose to further constrain \( r \) by considering a restricted set \( \tilde{R} = \{ r \in R | rp \in \tilde{Q} \} \), with \( \tilde{Q} \subset Q \) such that: \( \forall s, q(s) = p(s) \). To achieve this, we rely on the following constraint: \( \forall s, \mathbb{E}_{p(x|s)} r(x, s) = 1 \). Besides guaranteeing the desired property \( q(s) = p(s) \) (proof in Sec. A.2.1), we also note that ensuring these constraints still imply the former one: \( \mathbb{E}_{p(x,s)} r(x, s) = 1 \), which guarantees that \( q(x, s) \) integrates to 1 on its support. We further discuss the benefits of this conditional constraint in Section A.2.3.
**Shape Constraint** As discussed in Section 2.3, the definition of \( Q \) heavily impacts the desired behavior of the solution. In particular, controlling the shape of the allowed distributions \( q \) is especially
---
1 Adapting our work to EO is straightforward: as described in Sec. 2.1, adapting the adversarial method of Zhang et al. (2018) to the EO task simply requires to concatenate the true outcome \( Y \) to the prediction \( f(x) \) as input of the adversarial classifier. The same process can be followed for ROAD.
2 In the situation where all distributions in \( Q \) are absolutely continuous with respect to \( p \) all measurable subset \( A \subset X \times Y \), all \( q \in Q, q(A) > 0 \) only if \( p(A) > 0 \).
crucial in a setting such as ours, where the focus of the mitigation process is set dynamically. Without any constraint (as proposed by Mandal et al. (2020)), the mitigation could indeed end up focusing on specific points of the dataset where the sensitive reconstruction from \( f_{w_f}(X) \) is the easiest, using very sharp distributions \( q \) close to a Dirac. This may prove particularly unstable and, more critically, could concentrate the majority of the fairness effort on a relatively small subset of samples.
To control the shape of the bias mitigation distribution \( q \), we therefore choose to consider \( Q \) as a KL-divergence ball centered on the training distribution \( p \). However, similarly to Michel et al. (2022), we do not explicitly enforce the KL constraint (due to the difficulty of projecting onto the KL ball) and instead use a relaxed form. Using previous notations, the KL constraint takes the simple form
\[
\mathrm{KL}(q\,\|\,p) = \mathrm{KL}(rp\,\|\,p) = \mathbb{E}_p\left[ r \log \frac{rp}{p} \right] = \mathbb{E}_p\left[ r \log r \right].
\]
The spread of \( Q \) can then be controlled with a temperature weight \( \tau \) in the overall optimization process, which can be seen as the weight of a Shannon entropy regularizer defined on the discrepancies of \( q \) with respect to \( p \). Setting \( \tau = 0 \) means that no constraint on the distribution of \( r \) is enforced, thus encouraging \( r \) to put extreme attention on samples with lower values of \( L_S \). On the other hand, higher values of \( \tau \) favor distributions \( q \) that spread evenly over the whole dataset, hence converging towards a classical globally fair model for the highest values (cf. Section 2.1). Note that setting this hyper-parameter is strongly related to implicitly tuning the size of the smallest subgroup of the population for which we ensure fairness (cf. Section 2.3).
**ROAD Formulation**
The overall optimization problem of our Robust Optimization for Adversarial Debiasing (ROAD) framework can thus finally be formulated as (full derivation given in A.1):
\[
\min_{w_f} \max_{r \in \tilde{R}} \frac{1}{n} \sum_{i=1}^{n} L_Y(f_{w_f}(x_i), y_i) - \lambda_g \left[ \frac{1}{n} \sum_{i=1}^{n} r(x_i, s_i) L_S(g_{w_g^*}(f_{w_f}(x_i)), s_i) + \tau \frac{1}{n} \sum_{i=1}^{n} r(x_i, s_i) \log(r(x_i, s_i)) \right]
\]
with \( w_g^* = \arg \min_{w_g} \frac{1}{n} \sum_{i=1}^{n} L_S(g_{w_g}(f_{w_f}(x_i)), s_i) \)
(6)
### 3.2 TWO IMPLEMENTATIONS FOR ROAD
#### 3.2.1 BROAD: A NON-PARAMETRIC APPROACH
Let us first introduce a non-parametric approach, called Boltzmann Robust Optimization Adversarial Debiasing (BROAD), where each \( r(x_i, s_i) \) value results from the inner maximization problem of Eq. 6. As described below, this inner optimization admits an analytical solution whenever the \( r \) values respect the aforementioned conditional validity constraints (proof in Appendix A.3).
**Lemma 3.1.** *(Optimal Non-parametric Ratio)* Given a classifier \( f_{w_f} \) and an adversary \( g_{w_g} \), the optimal weight \( r(x_i, s_i) \) for any sample from the training set is given by:
\[
r(x_i, s_i) = n_{s_i} \, \frac{e^{-L_S(g_{w_g}(f_{w_f}(x_i)), s_i)/\tau}}{\sum_{(x_j, s_j) \in \Gamma,\, s_j = s_i} e^{-L_S(g_{w_g}(f_{w_f}(x_j)), s_j)/\tau}}
\]
with \( n_{s_i} = \sum_{j=1}^{n} 1_{s_j = s_i} \) the number of training samples sharing the sensitive value \( s_i \). This expression allows us to set optimal weights for any sample from the training dataset, at no additional computational cost compared to a classical adversarial fairness approach such as Zhang et al. (2018). However, this may induce an unstable optimization process, since weights may vary abruptly for even very slight variations of the classifier outputs. Moreover, it implies individual weights, only interlinked via the outputs of the classifier, at the risk of conflicting with our notion of local fairness. We therefore propose another, parametric, implementation, described in the next section, that improves the process by introducing local smoothness in the fairness weights.
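For illustration, the closed form of Lemma 3.1 is directly computable. The NumPy sketch below takes per-sample adversary losses `loss_s` and sensitive attributes `s` (hypothetical names), and rescales weights so they average to one within each sensitive group, matching the conditional validity constraint.

```python
import numpy as np

def broad_weights(loss_s: np.ndarray, s: np.ndarray, tau: float) -> np.ndarray:
    """Optimal non-parametric weights r(x_i, s_i) of Lemma 3.1: within each
    sensitive group, a Boltzmann distribution over -L_S / tau, rescaled by
    the group size so that E_{p(x|s)}[r] = 1."""
    r = np.empty_like(loss_s)
    for group in np.unique(s):
        m = (s == group)
        w = np.exp(-loss_s[m] / tau)
        r[m] = m.sum() * w / w.sum()     # n_s * softmax(-L_S / tau)
    return r
```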
3.2.2 Parametric Approach
To introduce more local smoothness in the fairness weights assigned to training samples, we propose an implementation of the $r$ function via a neural network architecture. Our goal is to ensure that groups of similar individuals, who might be neglected in the context of group fairness mitigation (e.g., due to their under-representation in the training population, cf. Fig. 1), receive a similar level of attention during the training process. However, solely relying on adversarial accuracy, as done in BROAD, may induce many irregularities within such groups. The Lipschitzness of neural networks adds implicit locality-smoothness assumptions in the input space, thus helping define the distributions $q$ as subregions of the feature space. Note that, in this approach, the network architecture therefore plays a crucial role in how local the behavior of $r_{w_r}$ will be: more complex networks will indeed tend to favor more local solutions, for a same value of $\tau$. In particular, a network of infinite capacity that completes training will have, in theory, the same behavior as BROAD.
To enforce the conditional validity constraint presented earlier, we employ an exponential parametrization with two batch-level normalizations, one for each demographic group. For each sample $(x_i, y_i, s_i)$ in the mini-batch, we define the normalized ratio as:
$$\forall i, \; r_{w_r}(x_i, s_i) = b_{s_i} \, \frac{e^{h_{w_r}(x_i, s_i)}}{\sum_{(x_j, s_j) \in B,\, s_j = s_i} e^{h_{w_r}(x_j, s_j)}}$$

with $h : X \times \{0; 1\} \rightarrow \mathbb{R}$ a neural network with weights $w_r$, $B$ the current mini-batch, and $b_{s_i}$ the number of samples of $B$ with sensitive value $s_i$, so that the weights average to 1 within each demographic group. To train ROAD, we use an iterative optimization process, alternating between updating the predictor model's parameters $w_f$ and updating the adversarial models' parameters $w_g$ and $w_r$ by multiple steps of gradient descent. This leads to a far more stable learning process and prevents the predictor classifier from dominating the adversaries. More details are provided in the appendix (see Alg. 1).
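A PyTorch sketch of this per-group normalization might look as follows, where `h` stands for the weight network $h_{w_r}$; the shapes and calling convention are illustrative assumptions.

```python
import torch

def normalized_ratio(h, x, s):
    """Compute r_{w_r}(x_i, s_i) as a group-size-rescaled softmax of h over
    each sensitive group of the mini-batch, so weights average to 1 per group."""
    scores = h(x, s).squeeze(-1)          # (B,)
    r = torch.empty_like(scores)
    for group in s.unique():
        m = (s == group)
        r[m] = m.sum() * torch.softmax(scores[m], dim=0)
    return r
```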
4 Experiments
4.1 Assessing Local Fairness
In this first experiment, we assess how effective ROAD is for generating predictions that are locally fair for unknown subpopulations, while guaranteeing a certain level of global accuracy and global fairness. For this purpose, we use 3 datasets often used in fair classification, described in Appendix A.8.1: Compas (Angwin et al., 2016), Law (Wightman, 1998) and German Credit (Hofmann, 1994). Each dataset is split into training and test subsets, and the models described below are trained to optimize accuracy while mitigating fairness with respect to a sensitive attribute $S$.
To assess fairness at a local level, various subpopulations chosen among features of $X$, i.e. excluding $S$, are selected in the test set. As an example on the Compas dataset, in which $S$ is Race: to create the subgroups, Age is discretized into buckets with a 10-year range. These intervals are then combined with the Gender feature, identifying 12 distinct subgroups. As measuring DI in segments of low population is highly volatile, we filter out subgroups with fewer than 50 individuals (see App. A.8.3). These subgroups are unknown at training time, and chosen arbitrarily to reflect possible important demographic subgroups (see Sec. 4.3.2 for further discussion). Given these subgroups $G$, local fairness is then assessed via the worst Disparate Impact value across these subgroups:
$$\text{Worst-1-DI} = \max_{g \in G} \left| \mathbb{E}_{(x,s) \in g}\left[\hat{f}_{w_f}(x) \mid s = 1\right] - \mathbb{E}_{(x,s) \in g}\left[\hat{f}_{w_f}(x) \mid s = 0\right] \right|.$$
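For reference, a NumPy sketch of this metric under the filtering rule above; `pred` holds thresholded predictions $\hat{f}_{w_f}(x)$, `s` the sensitive attribute, and `groups` an arbitrary subgroup assignment (all hypothetical names).

```python
import numpy as np

def worst_1_di(pred, s, groups, min_size: int = 50) -> float:
    """Largest Disparate Impact over subgroups with at least min_size points."""
    worst = 0.0
    for g in np.unique(groups):
        m = (groups == g)
        # Skip tiny segments and segments missing one sensitive group.
        if m.sum() < min_size or len(np.unique(s[m])) < 2:
            continue
        di = abs(pred[m & (s == 1)].mean() - pred[m & (s == 0)].mean())
        worst = max(worst, float(di))
    return worst
```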
To evaluate our approach, we compare our results with the globally fair adversarial models from Zhang et al. (2018) and Adel et al. (2019), and 3 approaches that address fairness generalization: FairLR (Rezaei et al., 2020), RobustFairCORELS (Ferry et al., 2022) and CUMA (Wang et al., 2023) (cf. App. A.8.2).
As local fairness can only be measured against global accuracy and fairness, we evaluate the approaches by plotting the tradeoffs between global accuracy and Worst-1-DI subject to a global DI constraint (we choose $DI \leq 0.05$, following the fairness literature (Pannekoek & Spigler, 2021)). To ensure a thorough exploration of these tradeoffs, we sweep across hyperparameter values for each algorithm (hyperparameter grids in App. A.8.4). Fig. 2 shows the resulting Accuracy-Worst-1-DI Pareto curves for each method. Overall, ROAD mostly outperforms all other methods. This tends to show how our method efficiently maximizes local fairness, without sacrificing any other desirable criterion too much. On the other hand, BROAD does not always perform as effectively as ROAD, illustrating the benefit from the local smoothness induced by the use of a neural network.
Figure 2: Results for the experiment on Local Fairness. For all datasets, the X-axis is Worst-1-DI, Y-axis is Global accuracy. The curves represented are, for each method, the Pareto front for the results satisfying the imposed global fairness constraint (here, Global DI < 0.05 for all datasets).
Figure 3: Pareto front results on distribution drift using the Adult dataset. For all figures, the X-axis is Equalized Odds; the Y-axis is Accuracy. Left: in-distribution (i.e. Adult UCI in 1994) test dataset; Center and Right: resp. 2014 and 2015 test datasets from Folktables (Ding et al., 2021).
Interestingly, despite not including any robustness component, the globally fair methods of Zhang et al. (2018) and Adel et al. (2019) still manage to slightly reduce local bias through their global mechanisms.
4.2 Experiments on Distribution Drift
As discussed in Section 2.3, DRO-based techniques have been considered before to help with the generalization of fairness. In this section, we therefore aim to show how our approach also leads to better generalization of fairness in the face of distribution shift, in addition to better protecting subpopulations. For this purpose, we replicate the experimental protocol of Wang et al. (2023): after training classifiers on the training set of the classical Adult dataset (1994), we evaluate the tradeoff between accuracy and global fairness (measured with Equalized Odds (EO)) on the 2014 and 2015 Folktables datasets (Ding et al., 2021), containing US Census data from the corresponding years, thus simulating real-world temporal drift. The same approaches as in the previous section, adapted to optimize for EO (details in Appendix A.8.2), are tested. Once again, the hyperparameters of every method are adjusted to maximize the two considered criteria, and the Pareto front is shown in Fig. 3.
Results on the classical Adult test set (in-distribution, left figure) are similar for most methods, with CUMA (Wang et al., 2023) slightly outperforming the others. However, on drifted test sets (center and right figures), ROAD achieves significantly better results than the other methods, including other DRO-based fairness approaches. This suggests that the parametric implementation proposed in the paper is better suited to ensure robust behavior.
4.3 Ablation Studies
4.3.1 Behavior of $r$ and Impact of $\tau$
The behavior of ROAD depends on $\tau$, which controls the extent to which the distributions $q \in Q$ are allowed to diverge from $p$. The impact of $\tau$ can be observed in the left figure of Fig. 4 for the Compas dataset. As values of $\tau$ increase, the variance of the distribution of $r$ decreases, going from having most weights close to 0 and very high importance on a few others, to having most weights $r_i$ lying around 1. Choosing the right value of $\tau$ thus helps control the emphasis put on some subpopulations.
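The exact form of the weights $r$ is defined by ROAD's optimization; purely as a qualitative illustration of the effect described above (and not the authors' update rule), a temperature-style reweighting reproduces how $\tau$ trades concentration for uniformity:

```python
import numpy as np

def soft_weights(losses, tau):
    """Illustrative temperature-controlled importance weights r_i (mean 1).

    Small tau -> mass concentrates on the hardest examples;
    large tau -> weights flatten towards 1, recovering q ~ p.
    This is a qualitative sketch, not ROAD's exact update rule.
    """
    w = np.exp(np.asarray(losses) / max(tau, 1e-8))
    return w / w.mean()

# Example: per-instance fairness losses
losses = np.array([0.1, 0.2, 0.15, 2.0])
print(soft_weights(losses, tau=0.1))   # nearly all weight on the 2.0 loss
print(soft_weights(losses, tau=10.0))  # weights all close to 1
```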
Figure 4: Analysis of the behavior of ROAD on Compas. Left: distribution of $r$ for several values of $\tau$ at epoch 200 (truncated at $r > 5$). Center: Relationship between Local DI and the average value of $r$ assigned to instances belonging to the corresponding subgroups. Each dot is a subgroup. Right: Worst-1-DI as a function of $\tau$ for different values for $\lambda_g$ (quartiles between 0.0 and 10.0).
Figure 5: Worst-1-DI scores for subgroups of the Law dataset under various definitions, built by varying the age bin width and splitting along gender. A full description of the subgroups is available in Sec. A.8.3.
A critical assumption ROAD relies on is that the adversary $r$ puts more attention on locally unfair regions. We test this assumption on the Compas dataset (same subgroups as in Sec. 4.1) and observe the results in the middle of Fig. 4. For each subgroup $k \in G$ (blue dots), we measure its local fairness (y-axis) and the average weight $\mathbb{E}_{(x,s) \sim k}(r_i(x,s))$ associated with instances of $k$ (x-axis). The graph reveals a correlation between these two notions, suggesting that more emphasis is indeed put on more unfair regions. As a consequence of these two results, setting $\tau$ helps control local bias, as shown in the right of Fig. 4 for various values of $\lambda_g$. The perfect local fairness score achieved when $\tau = 0$ is due to a constant model $f_{w_f}$: with no shape constraint, $r$ concentrates all the fairness effort on each training sample successively, which ultimately leads to $f(X) = \mathbb{E}[Y]$ for any input. Choosing a higher value of $\tau$ helps regularize the process by inducing a distribution $q(x|s)$ closer to $p(x|s)$.
4.3.2 How important is the definition of subgroups?
The main motivation for ROAD is its ability to maximize local fairness when the definition of the local subgroups is unknown. To assess this claim, we conduct another experiment in which we measure the local fairness of ROAD as the definition of these subgroups varies. Concretely, we train a biased model, a globally fair model (Zhang et al., 2018), and ROAD once each (with respective accuracy scores of 0.72, 0.60, and 0.60), and measure the local fairness of these models on subgroups of various definitions. These subgroups are defined successively as age bins with widths of 5, 10, 15, and 20, first across the whole population and then across subpopulations of other, non-sensitive, variables. Fig. 5 shows the local fairness results for the Law dataset (the sensitive attribute is Race; the subgroup attributes are Age and Gender). As expected, although the worst local DI for ROAD varies when the subgroup definition changes, it is almost consistently below the values reached by the globally fair model (except for Def. 3, corresponding to the largest subgroups). This suggests that its tuning is not over-reliant on one subgroup definition, showcasing the flexibility of the approach.
5 Conclusion
In this work, we introduced the problem of enforcing local fairness in unknown subpopulations. By leveraging the strengths of adversarial learning and Distributionally Robust Optimization, our proposed framework ROAD provides a powerful approach for this setting, addressing the shortcomings of previous DRO-based approaches. Future work includes extending our approach to settings where the sensitive attribute is not available, to other differentiable penalties (e.g., Mutual Information, as in Ragonesi et al., 2021), and further exploring the optimization of a 3-network adversarial approach.
REFERENCES
Tameem Adel, Isabel Valera, Zoubin Ghahramani, and Adrian Weller. One-network adversarial fairness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 2412–2420, 2019.
Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, and Mohamed Siala. Faircorels, an open-source library for learning fair rule lists. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 4665–4669, 2021.
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. ProPublica, May 23, 2016, 2016.
Ari Ball-Burack, Michelle Seng Ah Lee, Jennifer Cobbe, and Jatinder Singh. Differential tweetment: Mitigating racial dialect bias in harmful tweet detection. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 116–128, 2021.
Reuben Binns. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pp. 514–524, 2020.
Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153–163, 2017.
Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
John Duchi, Tatsunori Hashimoto, and Hongseok Namkoong. Distributionally robust losses for latent covariate mixtures. Operations Research, 71(2):649–664, 2023.
John C Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3), 2021.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226, 2012.
Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, and Mohamed Siala. Improving fairness generalization through a sample-robust optimization method. Machine Learning, pp. 1–62, 2022.
Vincent Grari. Adversarial mitigation to reduce unwanted biases in machine learning. PhD thesis, Sorbonne University, Paris, France, 2022. URL https://tel.archives-ouvertes.fr/tel-03828400
Vincent Grari, Boris Ruf, Sylvain Lamprier, and Marcin Detyniecki. Fair adversarial gradient tree boosting. In 2019 IEEE International Conference on Data Mining (ICDM), pp. 1060–1065. IEEE, 2019.
Vincent Grari, Arthur Charpentier, and Marcin Detyniecki. A fair pricing model via adversarial learning. arXiv preprint arXiv:2202.12008, 2022.
Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pp. 3315–3323, 2016.
Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning, pp. 1929–1938. PMLR, 2018.
Hans Hofmann. Statlog (German Credit Data). UCI Machine Learning Repository, 1994.
Serafina Kamp, Andong Luis Li Zhao, and Sindhu Kutty. Robustness of fairness: An experimental analysis. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 591–606, 2021.
|
UgTrngiN16
|
The framework depends on the quality and coverage of the LLM, which limits the effectiveness and performance of LangProp. If the LLM fails to generate sensible initial code, or struggles with certain syntactic or logical issues, then LangProp will also struggle to optimize. Existing LLMs may face generation challenges or exhibit high uncertainty.
|
LangProp: A code optimization framework using language models applied to driving
Anonymous authors
Paper under double-blind review
Abstract
LangProp is a framework for iteratively optimizing code generated by large language models (LLMs) in a supervised/reinforcement learning setting. While LLMs can generate sensible solutions zero-shot, the solutions are often sub-optimal. Especially for code generation tasks, it is likely that the initial code will fail on certain edge cases. LangProp automatically evaluates the code performance on a dataset of input-output pairs, as well as catches any exceptions, and feeds the results back to the LLM in the training loop, so that the LLM can iteratively improve the code it generates. By adopting a metric- and data-driven training paradigm for this code optimization procedure, one could easily adapt findings from traditional machine learning techniques such as imitation learning, DAgger, and reinforcement learning. We demonstrate the first proof of concept of automated code optimization for autonomous driving in CARLA, showing that LangProp can generate interpretable and transparent driving policies that can be verified and improved in a metric- and data-driven way. Our code will be open-sourced and is available at https://github.com/langprop-iclr24/LangProp.
1 Introduction
Building systems that can self-improve with data is at the core of the machine learning paradigm. By leveraging vast amounts of data and having an automated feedback loop to update models according to an objective function, machine learning methods can directly optimize the metrics of interest, thus outperforming systems that are handcrafted by experts. In the early history of artificial intelligence (AI), Symbolic AI, e.g., rule-based expert systems (Hayes-Roth, 1985; Jackson, 1986), was a dominant and perhaps a more intuitive and explainable approach to solving tasks in an automated way, and is still widely used in fields such as medicine (Abu-Nasser, 2017) and autonomous driving (Badue et al., 2021). However, there have been numerous successes in recent decades in machine learning, e.g., deep neural networks, that demonstrate the advantage of data-driven learning.
The innovation in Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023) is a prominent success enabled by neural networks. Trained on both natural language and code, they can translate human intent and logic into executable code and back, expanding the boundaries of applying logic and reasoning. Unlike other machine learning techniques, LLMs have an affinity with Symbolic AI since they operate in discrete symbolic input-output spaces. The generated outputs are interpretable, even though the internal representation of these tokens is in a continuous embedding space. This observation led us to question if it is possible to have the best of both worlds – having an interpretable and transparent system, characteristic of Symbolic AI, which can self-improve in a data-driven manner, following the machine learning paradigm. We believe that LLMs provide the missing piece of the puzzle: the optimization mechanism.
Our insight is that we can draw a direct analogy from training neural networks and train symbolic systems by leveraging the power of language models to interpret and generate scripts. Using the analogy of model training, an LLM can be used as an optimizer equivalent to stochastic gradient descent or Adam. The actual model in our paradigm is an object that handles the initialization and updates of parameters as well as the forward pass logic, where the parameters are a collection of symbolic scripts that the LLM generates. At every iteration, we perform a forward pass through the model, compare it against the ground truth in the dataset, and pass the scores and feedback into the LLM which interprets the results and updates the scripts in a way that fixes the issues raised.
While many methods use LLMs for code generation, and systems such as Auto-GPT (Richards, 2023) iteratively query LLMs to execute tasks in an agent-like manner, as far as we know, we are the first to completely translate and apply the training paradigm used in machine learning for iterative code generation. We draw inspiration from MineDojo VOYAGER (Wang et al., 2023), which first introduced the idea that a collection of code generated by LLMs (skill library) can be considered as sharable and fine-tunable checkpoints. However, VOYAGER’s implementation is specific to Minecraft, and additional work is needed to apply its approach to other domains. We propose LangProp, a general code optimization framework that is easily adaptable to many application domains.
Autonomous driving is a key area in which model interpretability and transparency are critical. We consider LangProp to be a valuable proof of concept for building interpretable and language-instructable systems in a more automated and learnable way. We tested our hypotheses that (a) LangProp can generate interpretable code that learns to control a vehicle, (b) LangProp can improve driving performance with more training data in comparison to zero-shot code generation, and (c) we can easily transfer training paradigms from machine learning to LangProp such as imitation learning, reinforcement learning (Sutton & Barto, 2018) and DAgger (Ross et al., 2011).
2 RELATED WORK
2.1 LLMs FOR CODE GENERATION
Transformer-based models (Vaswani et al., 2017) have shown outstanding performance in code generation tasks (Chen et al., 2021; Li et al., 2022; Xu et al., 2022; Nijkamp et al., 2023; Fried et al., 2023). In particular, general purpose LLMs (Ouyang et al., 2022; OpenAI, 2023) have shown remarkable capabilities of code generation, translating natural language into code, and vice versa. However, there is no guarantee that the generated code is error-free. Benchmarks have been suggested to evaluate LLMs on the quality of code generation (Chen et al., 2021; Liu et al., 2023).
Code generation with execution is especially relevant to our work. Cobbe et al. (2021) and Li et al. (2022) used majority voting on the execution results to select code from a pool of candidates, but this is prone to favoring common erroneous solutions over infrequent correct solutions. Ni et al. (2023) suggested a ranking mechanism using a learned verifier to assess code correctness. Given the code, its specification, and its execution results, it computes the rankings based on the code correctness and code generation probability. CLAIRIFY (Skreta et al., 2023) implemented automatic iterative prompting that catches errors and provides feedback to the LLM until all issues are resolved.
Tangentially related fields are Automated Program Repair (APR) (Xia & Zhang, 2022; Xia et al., 2022), unit test generation (Roziere et al., 2022), and planning applied to LLMs and code generation (Le et al., 2022; Zhang et al., 2023). APR is typically solved as a text infill task by identifying an erroneous block of code, masking it out, and querying an LLM, providing the surrounding code as context. Planning for LLMs formulates code generation as a sequence generation task and applies Reinforcement Learning techniques. While these approaches are orthogonal to our approach of iteratively generating code using a pre-trained general-purpose LLM as an optimizer, findings from these fields may be compatible with LangProp for future work.
2.2 LARGE LANGUAGE MODELS FOR AUTOMATING COMPOSITIONAL TASKS
LLM-powered agents have demonstrated sophisticated planning capabilities. Sequential prompting with the history of observation, action, and the reason for the action was proposed by ReAct (Yao et al., 2023) as an improvement to Chain-of-Thought prompting (Wei et al., 2022), which has also been applied to autonomous driving (Fu et al., 2023). Auto-GPT (Richards, 2023) automated tasks by iteratively generating a sequence of subtasks in finer detail until they are executable. A similar strategy was applied to robotics (Huang et al., 2022). SayCan (Ahn et al., 2022) used LLMs to generate candidate subgoals and assessed their affordances with a value function given visual observations to ground the agent’s behavior. VIMA (Jiang et al., 2023) and PaLM-E (Driess et al., 2023) demonstrated profound reasoning and execution capabilities on multi-modal tasks such as Visual Q&A and robotics by fine-tuning LLMs to allow multi-modal prompting. Inner Monologue (Huang et al., 2023) used environment and user feedback to replan for embodied tasks. Unlike our method, the above methods require an LLM in the loop during inference, whereas our method only requires access to an LLM during the code optimization stage. Liang et al. (2023) and Singh et al. (2023)
used LLMs to directly generate code for robotics, while ViperGPT (Didac et al., 2023) and Vis-Prog (Gupta & Kembhavi, 2023) composed pre-trained vision-and-language models to solve challenging vision tasks which require reasoning and domain knowledge. However, none of the above methods implement code optimization via iterative prompting.
Our method is inspired by VOYAGER (Wang et al., 2023), which integrates environment feedback, execution errors, and self-verification into an iterative prompting mechanism for embodied control in Minecraft. VOYAGER maintains a skill library, a collection of verified reusable code, which can be considered as checkpoints. However, there is no mechanism to optimize or remove a sub-optimal skill once it has been added to the library. We address this limitation and present a more general code optimization framework that can be applied to a variety of domains, e.g. autonomous driving.
2.3 Autonomous Driving and the CARLA Benchmark
Approaches to Autonomous Driving can be broadly classified into modular systems and end-to-end systems (Yurtsever et al., 2020). Most systems take a modular approach (Urmson et al., 2008; Levinson et al., 2011; Wei et al., 2013; Maddern et al., 2017), which has human-defined rules that orchestrate separately engineered components for localization and mapping, object detection, tracking, behavior prediction, planning, and vehicle control. Such systems allow compartmentalization and better interpretability, but can be complex and require domain knowledge to maintain and update. Another challenge is error propagation (McAllister et al., 2017), i.e. the upstream outputs can be erroneous and must be corrected downstream. Recent work has harnessed end-to-end learning to address these issues. Imitation learning (IL) (Bojarski et al., 2016; Bansal et al., 2018) optimizes the policy to match actions taken by experts, and is the most widely used approach. However, its performance is upper-bounded by the expert. Deep reinforcement learning has also shown successes in simulation (Sallab et al., 2017), on the road (Kendall et al., 2019), and in combination with IL (Lu et al., 2022). Our work combines both the benefit of interpretability of expert systems while also taking a data-driven approach, exposing the system to potential failure modes and adverse scenarios during training time and iteratively optimizing the system towards a well-defined driving metric so that the resulting system is robust to adverse events and potential errors in intermediate components.
CARLA (Dosovitskiy et al., 2017) is a widely used open-sourced 3D simulator for autonomous driving research. Many prior works on CARLA have open-sourced their expert agents. Roach (Zhang et al., 2021) trained a PPO agent (Schulman et al., 2017) on handcrafted reward signals with privileged information. The heavy lifting is done at the reward shaping level, where hazardous agents are identified and the desired speed and pose are computed. Roach expert is also used in MILE (Hu et al., 2022) and TCP (Wu et al., 2022), where TCP has an additional emergency braking upon detecting potential collisions. TransFuser (Chitta et al., 2022), InterFuser (Shao et al., 2023) and TF++ (Jaeger et al., 2023) implement their handcrafted expert systems, either using cuboid intersections or line intersections for hazard detection. TransFuser also introduced the Longest6 benchmark, which consists of longer routes compared to the official CARLA benchmark and is less saturated.
3 The LangProp Framework
The LangProp framework, visualized in Figure 2, addresses a general task of optimizing code on any given metric of success in a data-driven way, similar to how a neural network is optimized on an objective function. LangProp performs iterative prompting to improve code performance, using the inputs, outputs, exceptions, metric scores, and any environmental feedback to inform the LLM upon updates. The updates in LangProp are performed using a form of an evolutionary algorithm (Bäck & Schwefel, 1993). The following sections describe the key concepts in LangProp in more detail.
3.1 Model definition
The LangProp model consists of a setup prompt, an update prompt, and a collection of executable scripts generated by the LLM, each of which we refer to as a policy. While neural models are parameterized by floating-point weights, the parameters of a LangProp model are the collection of policies. Each policy is associated with an executable script as well as a statistics tracker, which updates the priority, an aggregate measure of the policy’s performance with respect to the training objective. The priority is used to rerank the policies so that the best-performing policies are used for updates and inference.
Figure 1: An overview of the LangProp framework, which consists of a LangProp model, an LLM optimizer, and a LangProp trainer. During training, the LLM generates and updates the policy scripts which are evaluated against a training objective. The performances of the policies are monitored and aggregated over time by a policy tracker as priorities, which is then used to rerank the policies. Policies with higher priorities are selected for updates, and the best policy is used for inference.
3.1.1 Policy setup
The initialization of the policies is done similarly to zero-shot code generation. The definition and specification of the requested function is given, for example, the docstring of the function including the names and types of the inputs and outputs, what the function is supposed to achieve, and a template for the function. We also adopt Chain-of-Thought prompting (Wei et al., 2022). An example of a setup prompt can be found in Appendix A.1. The response from the LLM is parsed to extract the solution code snippet. Multiple responses are collected to ensure the diversity of the initial policies.
3.1.2 Training objective
The advantage of LangProp over typical usage of LLMs for code generation is that it performs code optimization in a metric- and data-driven manner. In many tasks, it is easier to provide a dataset of inputs and ground truth corresponding outputs rather than to accurately specify the requirements for a valid solution or write comprehensive unit tests. Similar to how neural networks are trained, the user defines an objective function that measures how accurate the policy prediction is against the ground truth, e.g. L1 or L2 loss. A penalty is given if the policy raises an exception.
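As a minimal sketch of such an objective (the function and batch format are assumptions, not LangProp's exact interface), an exact-match scorer with an exception penalty might look as follows:

```python
EXCEPTION_PENALTY = -10.0  # consistent with the score range [-10, 1] noted in Fig. 3

def evaluate_policy(policy_fn, batch):
    """Score one policy on a batch of (inputs, label) pairs."""
    scores = []
    for inputs, label in batch:
        try:
            pred = policy_fn(**inputs)
            scores.append(1.0 if pred == label else 0.0)  # exact-match objective
        except Exception:
            scores.append(EXCEPTION_PENALTY)  # crashing policies are penalized
    return scores
```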
3.1.3 Forward-pass and feedback
Similar to training neural networks, LangProp assumes a dataset of inputs and associated ground truth labels for supervised learning (or rewards/returns for reinforcement learning, discussed in Section 4.3). For every batch update, the inputs are fed into all the policies currently in the LangProp model to make predictions, equivalent to a forward-pass. For each policy, the prediction is evaluated by the objective function which returns a score. If an exception is raised during execution of a policy script, it is caught by the model and an exception penalty is returned as a score instead.
The execution results, which include the score, exception trace, and any print messages from the execution, are fed back into the model and are recorded by the policy tracker. This is analogous to how parameters in a neural network are assigned gradients during back-propagation (see Appendix A.9). This information stored by the tracker is used in the policy update step in Section 3.1.5.
3.1.4 Priority
The priority is, simply put, an average of scores with respect to the training objective. In case a small batch size is required for faster computation, a running average of the scores is used as the priority rather than ranking the policies’ performance based on scores from the current batch alone, which may result in highly stochastic results. This is sufficient for supervised learning with a fixed size dataset. As discussed later in Section 4.3, however, a more complex training method such as reinforcement learning or DAgger (Ross et al., 2011) has a non-stationary training distribution.
Therefore, we use exponential averaging with a discount factor of $\gamma \in (0, 1]$ following Equation (1).
$$P_{i,k} = \left( \sum_{j=1}^{N_k^B} s_{i,j,k} + W_{i,k-1} P_{i,k-1} \right) / (N_k^B + W_{i,k-1}), \quad W_{i,k} = \gamma(N_k^B + W_{i,k-1})$$
Here, $N_k^B$, $P_{i,k}$, and $W_{i,k}$ are the batch size, priority, and priority weighting of the $k$-th batch for the $i$-th policy, respectively, and $s_{i,j,k}$ is the objective score of the $i$-th policy for the $j$-th element in the $k$-th batch. The initial conditions are $P_{i,0} = 0$ and $W_{i,0} = 0$. By weighting recent scores higher, we ensure that policies with higher priorities have high performance on the most up-to-date dataset.
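A direct implementation of Eq. (1) is straightforward; the helper below is a sketch (the function and variable names are ours), folding in one batch of scores at a time:

```python
def update_priority(scores, P_prev, W_prev, gamma=0.9):
    """Exponentially averaged priority, implementing Eq. (1).

    scores: objective scores s_{i,j,k} of policy i on batch k
    P_prev, W_prev: priority and priority weight after batch k-1
    gamma: discount factor in (0, 1]; gamma = 1 recovers a plain running mean.
    """
    n = len(scores)
    P = (sum(scores) + W_prev * P_prev) / (n + W_prev)
    W = gamma * (n + W_prev)
    return P, W

# Usage: fold batches in sequence, starting from P = W = 0
# (the gamma value here is an assumption, not the paper's setting)
P, W = 0.0, 0.0
for batch_scores in [[0.5, 1.0], [0.0, 1.0, 1.0]]:
    P, W = update_priority(batch_scores, P, W, gamma=0.9)
```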
### 3.1.5 Policy Reranking and Update
This step updates the model based on the most recent forward-backward pass and the updated priorities. It corresponds to the optimization step in neural network training, where parameters are updated based on gradients computed on the most recent batch. First, the policies are reranked by priority and the top $N^K$ policies are kept, out of which the top $N^U$ policies are selected for updates. For each of these policies, the policy tracker storing records of inputs, outputs, and scores is queried for the worst-case input-output pairs in the training batch that had the minimum score, along with any exception or print messages from the execution. This information, together with the old policy script, is embedded into the update prompt by a prompt template engine (Section 3.2). The update prompt is passed to the LLM, which returns $N^R$ responses containing new policy scripts.
After the model update, there are $N^U \times N^R$ new policies, as well as up to $N^K$ old policies. To initialize the new policies with sensible priorities, extra forward-backward passes are performed on these policies with the same batch of samples used for the model update. Finally, all policies are sorted according to their priorities, ready for inference or training on a new batch.
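The reranking and selection bookkeeping described above can be sketched as follows (a minimal illustration; the `.priority` attribute and function names are assumptions, not LangProp's exact code):

```python
def rerank_and_select(policies, n_keep, n_update):
    """Keep the top-N^K policies by priority; pick the top-N^U for updates."""
    ranked = sorted(policies, key=lambda p: p.priority, reverse=True)
    kept = ranked[:n_keep]        # N^K survivors
    to_update = kept[:n_update]   # N^U policies sent to the LLM for revision
    return kept, to_update
```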
### 3.2 Prompt Template Engine
During the policy update stage, we require a dynamic prompting mechanism to embed information about the input, predicted output, ground truth, exception, print messages, and the policy script to be revised. The logic to generate these prompts is sometimes complex; for example, predictions are only made when there are no exceptions. To enable flexible prompt generation while avoiding any hardcoding of the prompts in the codebase, we developed a simple yet powerful prompt template engine that can parse variables, execute Python code embedded within the prompt, and import sub-prompts from other files; it will be included in our open-sourced release. The update prompt example shown in Appendix A.2 makes extensive use of the prompt template engine's capabilities.
### 3.3 Training Paradigm
LangProp mirrors the code abstraction of PyTorch (Paszke et al., 2019) and PyTorch Lightning (Falcon, 2019) for the module and trainer interfaces, respectively. This allows LangProp to be task-agnostic, making it easily applicable to a range of domains and use cases. Moreover, it helps highlight the similarities between neural network optimization and code optimization using LangProp and facilitates a smooth integration of other training paradigms for neural network training.
Importantly, LangProp’s internal implementation does not depend on PyTorch or PyTorch Lightning. LangProp supports PyTorch datasets and data loaders, as well as any iterable dataset object for training and validation. Listing 1 shows an example of a standard LangProp training script.
```python
# Import assumed for a runnable sketch; the paper's listing omits it.
from torch.utils.data import DataLoader

train_loader = DataLoader(train_data, batch_size, shuffle=True, collate_fn=lambda x: x)
val_loader = DataLoader(val_data, batch_size, shuffle=True, collate_fn=lambda x: x)
model = LPModule.from_template(name=model_name, root=model_root)  # setup/update prompts
trainer = LPTrainer(model, RunConfig(run_name=run_name))
trainer.fit(train_loader, val_loader, epochs=epochs)  # train model
```
Listing 1: Training a LangProp model with a LangProp trainer.
After every training step on a mini-batch, the trainer saves a checkpoint, which consists of the setup prompt, update prompt template, the currently kept policy scripts (maximum of $N^K + N^U \times N^R$),
and the statistics monitored by the policy tracker (priorities $P$ and priority weights $W$). Since these can be stored as text or JSON files, the size of a checkpoint is on the order of a few hundred kilobytes. Checkpoints can be used to resume training, to fine-tune the model, or for inference.
```python
model = LPModule.from_checkpoint(checkpoint)     # load checkpoint
model.setup(config=RunConfig())
prediction = model(*input_args, **input_kwargs)  # make prediction
```
Listing 2: Inference with a pre-trained LangProp model checkpoint.
Listing 2 shows how a LangProp checkpoint can be loaded and used for inference. The policy with the highest priority is used for inference. Since policies are parameterized as executable code, the use of an LLM is only required during training, not during inference. Since querying LLMs is both expensive and slow, this is a key advantage of the LangProp approach, which makes integration of LLMs more feasible for real-time applications, such as robotics and autonomous driving.
4 LangProp Applied to Driving in CARLA
In this section, we describe how the LangProp framework can be used in the context of autonomous driving. We chose the CARLA environment (Dosovitskiy et al., 2017) as a benchmark since (a) autonomous driving requires interpretable driving policies, (b) CARLA has a rich collection of human-implemented expert agents to compare against, and (c) a metric-driven learnable approach would be beneficial since driving decisions such as when to lane-change or to give way are challenging planning problems, and even human-implemented experts have sub-optimal performance.
4.1 Expert
We implemented our expert agent for data collection and to provide pseudo-ground-truth actions to train the LangProp agent with imitation learning. While TransFuser (Chitta et al., 2022) and TF++ (Jaeger et al., 2023) use a computationally expensive 3D bounding box collision detection algorithm, and InterFuser (Shao et al., 2023) uses line collision which is faster but less accurate, we use an efficient polygon collision detection algorithm between ground-projected bounding boxes. By extrapolating the motion of the ego vehicle and the actors into the future and checking for any polygon intersections, the safety margins to the pedestrians and vehicles are calculated. Together with the distance to the nearest traffic light and/or stop sign, the target speed is determined to give a 2 s margin. Steering is evaluated by calculating the angle to the next waypoint, which is 4 m ahead of the ego vehicle. A PID controller is used for low-level control to convert the target speed and angle to throttle, brake, and steering. For more implementation details, see Appendix B.2.
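As an illustration of the low-level control step, a textbook PID controller can convert the speed error into throttle and brake commands; the gains and example values below are illustrative placeholders, not the paper's tuned settings:

```python
class PIDController:
    """Textbook PID controller. dt = 0.05 s corresponds to one step at 20 Hz."""
    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Longitudinal control: positive output -> throttle, negative -> brake
speed_pid = PIDController(kp=1.0, ki=0.1, kd=0.05)
target_speed, current_speed = 6.0, 4.2  # m/s, example values
control = speed_pid.step(target_speed - current_speed)
throttle = min(max(control, 0.0), 1.0)
brake = min(max(-control, 0.0), 1.0)
```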
4.2 LangProp Agent
Similarly to our expert and all the baseline experts, we provide privileged information from the CARLA simulator to the agent. However, while our expert manually converts the bounding box coordinates of actors in the scene into the ego-relative frame of reference, we let LangProp handle these computations itself, providing everything in absolute world coordinates. We provide the location, orientation, speed, length, and width of the ego vehicle, as well as of other vehicles and pedestrians within a range of 50 m. Importantly, we do not filter out actors even if they are irrelevant to the driving agent. We also provide the target waypoint (4 m ahead, used by other baseline experts) and the distances to a red traffic light and a stop sign along the current lane, if they exist. Given this information, the LangProp policy is expected to return a desired speed level ("MOVE": 6 m/s, "SLOW": 1 m/s, "STOP": 0 m/s) and a turning angle for the ego vehicle. These are passed to an external PID controller to convert them into throttle, brake, and steering. A more detailed explanation of the function definition is given in Listing A.5, which is an extract of the setup prompt used in the LangProp model.
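As a hedged sketch of the interface such a policy must satisfy (the exact docstring is in Listing A.5 of the paper; the field names and the toy logic below are our assumptions), a generated policy might look like:

```python
import math

def predict_speed_and_steering(ego, vehicles, pedestrians, target_waypoint,
                               dist_to_red_light, dist_to_stop_sign):
    """Return (speed_level, turn_angle): speed_level in {"MOVE", "SLOW", "STOP"}
    (6 / 1 / 0 m/s), turn_angle in degrees relative to the ego heading."""
    # Steering: angle from the ego pose to the target waypoint 4 m ahead
    dx = target_waypoint[0] - ego["location"][0]
    dy = target_waypoint[1] - ego["location"][1]
    turn_angle = math.degrees(math.atan2(dy, dx)) - ego["yaw"]

    # Longitudinal: stop for a nearby red light; a real policy must also
    # extrapolate the motion of `vehicles` and `pedestrians` to detect hazards.
    if dist_to_red_light is not None and dist_to_red_light < 10.0:
        return "STOP", turn_angle
    return "MOVE", turn_angle
```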
While it is straightforward for the policy to directly predict the speed or acceleration as numeric values, this makes the task of designing a suitable loss function for imitation learning more challenging and open-ended. Therefore, we opted for a categorical output which simplifies the scoring function.
Given the function definition as a docstring, the LLM generates policy script candidates that satisfy the specification, and LangProp updates them following the procedures in Section 3. We use the GPT-3.5 Turbo 16k model, provided by OpenAI’s Chat Completion API (OpenAI, 2022).

Figure 2: An overview of the LangProp agent training pipeline. The LangProp model is updated on a dataset that includes both offline expert data as well as online LangProp data annotated with expert actions, similar to DAgger. The agent is given negative rewards upon infraction.
4.3 Imitation Learning, DAgger, and Reinforcement Learning
We explore three major training paradigms often used to train embodied agents - imitation learning (IL), DAgger (Ross et al., 2011), and reinforcement learning (RL). In imitation learning, the accuracy of the policy outputs is measured against ground truth expert actions for a pre-collected dataset. Imitation learning is known to have issues with out-of-distribution inputs at inference time, since the expert’s policy is used to collect the training data, while the learned policy is used for rollouts at inference time. DAgger addresses this issue by labeling newly collected online data with expert actions, and adding them to the expert-collected offline data to form an aggregate replay buffer. Both CARLA and the LangProp agent run at a frame rate of 20 Hz. LangProp adds training samples to the replay buffer every 10 frames, and a batch update is performed after every 100 new samples.
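The DAgger-style aggregation described above can be sketched as follows; the callables passed in are hypothetical stand-ins, while the constants (a sample every 10 frames, an update every 100 new samples) come from the text:

```python
def dagger_loop(frames, policy, expert, update_fn, offline_data):
    """DAgger-style data aggregation. `frames` yields observations from the
    environment; `policy`, `expert`, and `update_fn` are hypothetical callables."""
    buffer = list(offline_data)              # start from the offline expert dataset
    new_samples = 0
    for frame, obs in enumerate(frames):
        _action = policy(obs)                # the learned policy drives the rollout
        if frame % 10 == 0:
            buffer.append((obs, expert(obs)))  # relabel online data with expert actions
            new_samples += 1
        if new_samples >= 100:
            update_fn(buffer)                # one LangProp batch update
            new_samples = 0
    return buffer
```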
While DAgger solves the issue of distribution mismatch, the performance of the learned policy is still upper-bounded by the accuracy of the expert. It also does not take into account that certain inaccuracies are more critical than others. In the context of autonomous driving, actions that result in infractions such as collisions should be heavily penalized. Reinforcement Learning offers a way of training a policy from reward signals from the environment, which is convenient since we can directly assign penalties upon any infractions according to the CARLA leaderboard (CARLA, 2020). While RL typically optimizes for maximum returns (discounted sum of future rewards), we simplify the setting by assigning an infraction penalty if there is an infraction in the next 2 s window. The agent monitors infractions every 10 frames, and triggers an update upon infractions.
Since infraction penalties are very sparse, and will become rarer as the policies improve, we adopt two strategies: (a) we combine RL training with imitation learning, which provides denser signals, and (b) we sample training data containing infractions with a 100 times higher sampling probability. The expert is imitated only when there is no infraction, or when the expert was not the behavior policy that incurred the infraction; an infraction cost is applied only when the current policy takes the same action as the behavior policy that caused the infraction while the expert chose a different action. For more details on the training objective, see Appendix C.2.
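The over-sampling strategy in (b) can be sketched with weighted sampling; the buffer format and the `has_infraction` flag below are assumptions:

```python
import random

def sample_training_batch(buffer, batch_size, infraction_boost=100):
    """Draw a batch in which infraction samples are ~100x more likely,
    mitigating the sparsity of infraction penalties."""
    weights = [infraction_boost if s["has_infraction"] else 1 for s in buffer]
    return random.choices(buffer, weights=weights, k=batch_size)
```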
5 Experiments
We compared our LangProp agent against RL experts with privileged information (Roach (Zhang et al., 2021), TCP (Wu et al., 2022)) as well as human-implemented experts (TransFuser (Chitta et al., 2022), InterFuser (Shao et al., 2023), TF++ (Jaeger et al., 2023), ours). We used the official training and testing routes provided by the CARLA leaderboard (CARLA, 2020), as well as
Table 1: Driving performance of expert drivers in CARLA version 0.9.10. The driving score is a product of the route completion percentage $R$ and the infraction factor $\bar{I}$. IL and RL stand for imitation learning and reinforcement learning. DAgger uses both online and offline data.
| Method | Training Score ↑ | Training $R$ ↑ | Training $\bar{I}$ ↑ | Testing Score ↑ | Testing $R$ ↑ | Testing $\bar{I}$ ↑ | Longest6 Score ↑ | Longest6 $R$ ↑ | Longest6 $\bar{I}$ ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Roach expert | 57.8 | 95.9 | 0.61 | 63.4 | 98.8 | 0.64 | 54.9 | 81.7 | 0.67 |
| TCP expert | 64.3 | 92.3 | 0.71 | 72.9 | 93.2 | 0.77 | 46.9 | 63.1 | 0.76 |
| TransFuser expert | 69.8 | 94.5 | 0.74 | 73.1 | 91.3 | 0.80 | 70.8 | 81.2 | 0.88 |
| InterFuser expert | 69.6 | 83.1 | 0.86 | 78.6 | 81.7 | 0.97 | 48.0 | 56.0 | 0.89 |
| TF++ expert | 90.8 | 95.9 | 0.94 | 86.1 | 91.5 | 0.94 | 76.4 | 84.4 | 0.90 |
| **Our expert** | 88.9 | 92.8 | 0.95 | **95.2** | 98.3 | 0.97 | 72.7 | 78.6 | 0.92 |
| LangProp: Offline IL | 0.07 | 0.37 | 0.97 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 1.00 |
| LangProp: DAgger IL | 36.2 | 94.5 | 0.40 | 41.3 | 95.3 | 0.44 | 22.6 | 87.4 | 0.30 |
| LangProp: DAgger IL/RL | 64.2 | 90.0 | 0.72 | 61.2 | 95.2 | 0.64 | 43.7 | 71.1 | 0.65 |
| LangProp: Online IL/RL | **70.3** | 90.5 | 0.78 | **80.9** | 92.0 | 0.89 | **55.0** | 75.7 | 0.73 |
the Longest6 benchmark (Chitta et al., 2022) that has longer routes with denser traffic. See Appendix D.1 for more details on the benchmark and the routes and towns used. For the LangProp agent, only the training routes are used for imitation/reinforcement learning at training time, and the saved checkpoints are used for inference during evaluation runs. The results are shown in Table 1.
5.1 Expert and LangProp agents
Our expert and the TF++ expert significantly outperformed all other expert agents in all routes, and our expert outperformed TF++ by a margin on the test routes. The core collision avoidance logic is just 100 lines of code, with additional preprocessing and tooling for data collection. From the breakdown of the scores, our expert seems to prioritize safer driving with fewer infractions (higher infraction factor $\bar{I}$) by trading off route completion compared to TF++ in the Longest6 benchmark.
For the LangProp agent, we observe that training using offline samples, DAgger, and online samples improves performance in this order. Adding the infraction penalties as an additional reinforcement learning objective further improved the performance. The best-performing agent, LangProp trained on online data with IL and RL, achieved better performance than the Roach expert (trained with PPO) as well as the TransFuser and InterFuser experts (both written by researchers) on all benchmarks apart from TransFuser on the Longest6 benchmark.
The result has two important implications. Firstly, the code selection metric (the training objective) plays a large role in the ultimate performance of the code. This is an important finding since prior work on code generation mostly focused on error correction given exceptions. Our results demonstrate that for complex tasks, it is important to treat code generation as an iterative optimization process rather than a zero-shot task. Secondly, training using LangProp exhibits similar characteristics as training in deep learning; in deep learning, it is a well-studied problem that policies trained with imitation learning on offline datasets do not generalize to out-of-distribution online data. DAgger and reinforcement learning are two of the common ways of addressing this problem. Our results show that these training paradigms can also be effective when used in LangProp.
5.2 Demonstration of causal confusion when trained offline
A common failure mode of offline trained models was that the agent remained stationary indefinitely until the timeout was reached. Upon inspection of the policy code that was generated, we were able to identify the failure to be a phenomenon known as causal confusion in imitation learning (De Haan et al., 2019). A snippet of code responsible for such failure in one of the runs is shown in Listing 3.
This exemplifies the interpretability of LangProp models, allowing us to directly assess the source of failure. The code predicts 0 speed when the agent’s current speed is already close to 0. Note that this is not a failure of the LangProp algorithm, but a consequence of such a policy maximizing the imitation learning objective on an offline dataset, bypassing the need to learn a more complex policy. This phenomenon is commonly researched in the context of deep imitation learning, and can be avoided by training on online data, e.g. using DAgger or RL. We believe our work is the first to report a similar phenomenon using LLMs for policy optimization.

Figure 3: Training curves for the different training methods of the LangProp agent. The training scores are evaluated on 1000 samples from the offline training dataset and/or online replay buffer, and the validation scores are evaluated on 1000 samples from the offline validation dataset. Updates are performed every 1000 frames of agent driving, as well as upon infractions in the RL setting. The score is in the range of $[-10, 1]$ due to exception penalties. We limit the axis to $[-1, 1]$ in the plots.

Listing 3: Identifying causal confusion in the policy when trained purely offline
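Purely as a hedged illustration of the failure pattern described above (not the authors' actual Listing 3), a causally confused policy contains logic of this shape:

```python
# Hypothetical reconstruction -- NOT the authors' actual Listing 3.
def predict_speed_level(ego_speed, hazards):
    if ego_speed < 0.1:   # "the expert was stopped whenever I am stopped,
        return "STOP"     #  so stopping must be correct" -- the policy
                          #  conditions on its own past action rather than
                          #  on hazards, so it never starts moving again.
    ...
```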
5.3 ANALYSIS OF TRAINING METHODS
The use of online training samples alleviated the issue of causal confusion, leading to selecting policies where the agent has a sensible driving performance. This is because if the agent remains stationary, those samples will accumulate in the replay buffer, resulting in a lower priority for the causally confused policy. Comparing the results in Table 1 and the validation scores in Figure 3b, it seems that the scores on the offline dataset are not indicative of the agent’s driving performance. From the training scores on the replay buffer and/or offline dataset in Figure 3a, we see that the agents trained with RL on infractions have spikes corresponding to infractions. This is due to over-sampling infractions when they occur, allowing the policy update to immediately address the issue. DAgger has a milder response compared to training just on online data because the offline dataset does not include on-policy infractions. The higher rate of infractions in the training distribution may be why the online trained agent has a lower training score but has a higher driving performance.
6 CONCLUSION
We presented LangProp, a framework that uses LLMs for data-driven code optimization, and demonstrated its capability of generating driving policies in CARLA. We showed that classical training paradigms such as imitation learning, DAgger, and reinforcement learning directly translate to training with LangProp, and the choices of the objective function and the training data distribution can be used to guide which policies are selected. Since numerous candidate solutions satisfy the code specification, automatically optimizing the code to maximize a given performance metric has been a key missing feature in few-shot code generation. The LangProp framework provides this feature by reformulating the machine learning training paradigm in the context of using LLMs as code optimizers and treating policy code as parameters of the model. We believe that the LangProp paradigm opens up many possibilities for data-driven machine learning with more interpretability and transparency.
REPRODUCIBILITY STATEMENT
We will open-source the code both for the general LangProp framework, as well as the code for training and evaluating the LangProp agent in CARLA. More details of the implementation and design decisions can be found in the appendices.
For the ICLR 2024 conference submission, supplementary materials can be found at https://github.com/langprop-iclr24/LangProp/ which include the code, pre-trained LangProp checkpoints, and videos of sample runs by the LangProp agent. We also include self-contained minimal examples of applying LangProp to tasks such as Sudoku and CartPole.
REFERENCES
Bassem Abu-Nasser. Medical expert systems survey. *International Journal of Engineering and Information Systems (IJEIS)*, 1(7):218–224, 2017.
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. *arXiv preprint arXiv:2204.01691*, 2022.
Thomas Bäck and Hans-Paul Schwefel. An overview of evolutionary algorithms for parameter optimization. *Evolutionary computation*, 1(1):1–23, 1993.
Claudine Badue, Rânik Guidolini, Raphael Vivacqua Carneiro, Pedro Azevedo, Vinicius B Cardoso, Avelino Forechi, Luan Jesus, Rodrigo Berriel, Thiago M Paixao, Filipe Mutz, et al. Self-driving cars: A survey. *Expert Systems with Applications*, 165:113816, 2021.
Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. *arXiv preprint arXiv:1812.03079*, 2018.
Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. End to end learning for self-driving cars, 2016.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020.
CARLA. CARLA autonomous driving leaderboard. https://leaderboard.carla.org/, 2020.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021.
Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, and Andreas Geiger. Transfuser: Imitation with transformer-based sensor fusion for autonomous driving. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.
|
5PkgaUwiY0
|
According to the CLIPSIM metric in Table 2 and Appendix F, even with the costly GPT-4 employed for video planning, the method still lags behind Make-A-Video and VideoLDM, and is even outperformed by ModelScopeT2V. This suggests that there may be issues with Layout2Vid's fine-tuning method.
|
VIDEODIRECTORGPT: CONSISTENT MULTI-SCENE VIDEO GENERATION VIA LLM-GUIDED PLANNING
Anonymous authors
Paper under double-blind review
ABSTRACT
Although recent text-to-video (T2V) generation methods have seen significant advancements, the majority of these works focus on producing short video clips of a single event with a single background (i.e., single-scene videos). Meanwhile, recent large language models (LLMs) have demonstrated their capability in generating layouts and programs to control downstream visual modules such as image generation models. This prompts an important question: can we leverage the knowledge embedded in these LLMs for temporally consistent long video generation? In this paper, we propose VIDEODIRECTORGPT, a novel framework for consistent multi-scene video generation that uses the knowledge of LLMs for video content planning and grounded video generation. Specifically, given a single text prompt, we first ask our video planner LLM (GPT-4) to expand it into a ‘video plan’, which involves generating the scene descriptions, the entities with their respective layouts, the background for each scene, and consistency groupings of the entities and backgrounds. Next, guided by this output from the video planner, our video generator, named Layout2Vid, has explicit control over spatial layouts and can maintain temporal consistency of entities/backgrounds across multiple scenes, while being trained only with image-level annotations. Our experiments demonstrate that our proposed VIDEODIRECTORGPT framework substantially improves layout and movement control in both single- and multi-scene video generation and can generate multi-scene videos with visual consistency across scenes, while achieving competitive performance with SOTAs in open-domain single-scene text-to-video generation. We also demonstrate that our framework can dynamically control the strength for layout guidance and can also generate videos with user-provided images. We hope our framework can inspire future work on integrating the planning ability of LLMs into consistent long video generation.
1 INTRODUCTION
Text-to-video (T2V) generation has achieved rapid progress following the success of text-to-image (T2I) generation. Most works in T2V generation focus on producing short videos (e.g., 16 frames at 2fps) from the given text prompts (Wang et al., 2023b; He et al., 2022; Ho et al., 2022; Singer et al., 2023; Zhou et al., 2022). Recent works on long video generation (Blattmann et al., 2023; Yin et al., 2023; Villegas et al., 2023; He et al., 2023) aim at generating long videos of a few minutes with holistic visual consistency. Although these works could generate longer videos, the generated videos often display the continuation or repetitive patterns of a single action (e.g., driving a car) instead of transitions and dynamics of multiple changing actions/events (e.g., five steps about how to make caraway cakes). Meanwhile, large language models (LLMs) (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023a,b; Chowdhery et al., 2022) have demonstrated their capability in generating layouts and programs to control visual modules (Didac et al., 2023; Gupta & Kembhavi, 2023), especially image generation models (Cho et al., 2023b; Feng et al., 2023). This raises an interesting question: Can we leverage the knowledge embedded in these LLMs for planning consistent multi-scene video generation?
In this work, we introduce VIDEODIRECTORGPT, a novel framework for consistent multi-scene video generation. As illustrated in Fig. 1, VIDEODIRECTORGPT decomposes the T2V generation task into two stages: video planning and video generation. For the first video planning stage (see Fig. 1 blue part), we employ an LLM to generate a video plan, which is an overall plot of the video
Figure 1: Overall illustration of our VideoDirectorGPT framework. In the first stage, we employ GPT-4 as a video planner to craft a video plan, which provides a multi-component script for videos with multiple scenes. In the second stage, we utilize Layout2Vid, a grounded video generation module, to render multi-scene videos with layout and consistency control based on the video plan generated in the first stage.
with multiple scenes, each consisting of a text description of the scene and entity names/layouts, and a background. It also consists of consistency groupings of specific entities/backgrounds that reappear across scenes. For the second video generation stage (see Fig. 1 yellow part), we introduce Layout2Vid, a novel grounded video generation module that generates multi-scene videos from the video plan. Our framework provides the following strengths: (1) employing an LLM to write a video plan that guides the generation of videos with multiple scenes from a single text prompt, (2) layout control in video generation by only using image-level layout annotations, and (3) generation of visually consistent entities/backgrounds across multiple scenes.
To be specific, in the first stage, video planning (Sec. 3.1), we employ an LLM (e.g., GPT-4 (OpenAI, 2023)) as a video planner to generate a video plan, a multi-component script of videos with multiple scenes to guide the downstream video synthesis process. Our video plan consists of four components: (1) multi-scene descriptions, (2) entities (names and their 2D bounding boxes), (3) background, and (4) consistency groupings (scene indices for each entity/background indicating where they should remain visually consistent). We generate the video plan in two steps by prompting an LLM with different in-context examples. In the first step, we expand a single text prompt into multi-scene descriptions with an LLM, where each scene is described with a text description, a list of entities, and a background (see Fig. 2 blue part for details). We also prompt the LLM to generate additional information for each entity (e.g., color, attire, etc.), and group together entities across frames and scenes, which will help guide consistency during the video generation stage. In the second step, we generate the detailed layouts of each scene with an LLM by producing a list of bounding boxes for the entities per frame, given the list of entities and the scene description. This overall ‘video plan’ guides the downstream video generation module in the second stage (described next).
In the second stage, video generation (Sec. 3.2), we introduce Layout2Vid, a grounded video generation module to render videos based on the video plan generated by the LLM in the previous stage (see yellow part of Fig. 2). For the grounded video generation module, we build upon ModelScopeT2V (Wang et al., 2023b), an off-the-shelf text-to-video generation model, by freezing its original parameters and adding spatial control of entities through a small set of trainable parameters (13% of total parameters) in the gated-attention module (Li et al., 2023). This enables our Layout2Vid to be trained solely with layout-annotated images, thus bypassing the need for expensive training on annotated video datasets. To preserve the identity of entities appearing across different frames and scenes, we use shared representations for the entities within the same consistency group. We also propose to use a joint image+text embedding as entity grounding conditions which we find more effective than the existing text-only approaches (Li et al., 2023) in entity identity preservation (Appendix E). Overall, our Layout2Vid avoids expensive video-level training and also improves the object layout and movement control and cross-scene temporal consistency.
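The gated-attention mechanism of Li et al. (2023) that Layout2Vid builds on can be sketched as follows; the dimensions, grounding-token format, and module placement are assumptions rather than Layout2Vid's exact implementation:

```python
import torch
import torch.nn as nn

class GatedGroundingAttention(nn.Module):
    """GLIGEN-style gated attention (Li et al., 2023): trainable layers are
    inserted into a frozen backbone, with a zero-initialized gate so that
    training starts from the unmodified pretrained behavior."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # tanh(0) = 0: gate starts closed

    def forward(self, visual_tokens, grounding_tokens):
        # grounding_tokens: per-entity joint image+text embeddings fused with
        # their 2D box coordinates (format assumed)
        x = torch.cat([visual_tokens, grounding_tokens], dim=1)
        out, _ = self.attn(x, x, x)
        out = out[:, : visual_tokens.shape[1]]  # keep only the visual positions
        return visual_tokens + torch.tanh(self.gate) * out
```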
We conduct experiments on both single-scene and multi-scene video generation. For single-scene video generation, we evaluate layout control via VPEval Skill-based prompts (Cho et al., 2023b), assess object dynamics through ActionBench-Direction prompts adapted from ActionBench-SSV2 (Wang et al., 2023c), and examine open-domain video generation using the MSR-VTT dataset (Xu et al., 2016). For multi-scene video generation, we experiment with two types of input prompts: (1) a list of sentences describing events – ActivityNet Captions (Krishna et al., 2017) and Coref-SV prompts based on Pororo-SV (Li et al., 2019b), and (2) a single sentence from which models generate multi-scene videos – HiREST (Zala et al., 2023). Experiments show that our proposed VideoDirectorGPT demonstrates better layout skills (object, count, spatial, scale) and object movement control (Sec. 5.1), can generate multi-scene videos with visual consistency across different scenes (Sec. 5.2), and is competitive with SOTAs on single-scene open-domain text-to-video generation (Sec. 5.1). We also demonstrate that our framework can dynamically control the strength of layout guidance and generate videos with user-provided images (Sec. 5.3).
Our main contributions can be summarized as follows:
• We propose a new T2V generation framework VideoDirectorGPT with two stages: video content planning and grounded multi-scene video generation.
• We employ LLMs to generate a ‘video plan’ which consists of detailed scene descriptions, entity layouts, and entity/background consistency groupings to guide downstream video generation (Sec. 5.1).
• We introduce Layout2Vid, a novel grounded video generation module, which brings together image/text-based layout control ability and entity-level temporal consistency (Sec. 5.2). Our Layout2Vid can be trained using only image-level layout annotations.
• We empirically demonstrate that our framework can accurately control object layouts and movements in single-scene videos (Sec. 5.1) and can generate temporally consistent multi-scene videos (Sec. 5.2). We also provide qualitative examples, ablation study of our design choices (Appendix F), and human evaluations (Sec. 5.4).
2 RELATED WORKS
Text-to-video generation. Training a text-to-video (T2V) generation model from scratch is computationally expensive. Recent work often leverages pre-trained text-to-image (T2I) generation models such as Stable Diffusion (Rombach et al., 2022) by fine-tuning them on text-video pairs (Wang et al., 2023b; Blattmann et al., 2023). While this warm-start strategy enables high-resolution video generation, it comes with the limitation of only being able to generate short video clips, as T2I models lack the ability to maintain consistency through long videos. On the other hand, recent works on long video generation (Blattmann et al., 2023; Yin et al., 2023; Villegas et al., 2023; He et al., 2023) aim at generating long videos of a few minutes. However, the generated videos often display a continuation or repetition of a single action instead of transitions and dynamics across multiple changing actions/events. In contrast, our layout-guided T2V generation model, Layout2Vid, infuses layout control and multi-scene temporal consistency into a pretrained T2V generation model via data- and parameter-efficient training, while preserving its original visual quality.
**Bridging text-to-image generation with layouts.** To achieve interpretable and controllable generation, a line of research decomposes the T2I generation task into two stages: text-to-layout generation, and layout-to-image generation. While early models train the layout generation module from scratch (Hong et al., 2018; Tan et al., 2019; Li et al., 2019a; Liang et al., 2022), recent methods employ pretrained LLMs to leverage their knowledge in generating image layouts from text (Cho et al., 2023b; Feng et al., 2023; Qu et al., 2023). To the best of our knowledge, our work is the first to utilize LLMs to generate structured video plans from text, enabling accurate and controllable long video generation. See Appendix A for additional related works.
### 3 VideoDirectorGPT
#### 3.1 Video Planning: Generating Video Plans with LLMs
**Video Plan.** As illustrated in the blue part of Fig. 2, GPT-4 (OpenAI, 2023) acts as a planner, providing a detailed video plan to guide the video generation. Our video plan has four components: (1) **multi-scene descriptions**: a sentence describing each scene, (2) **entities**: names and their 2D bounding boxes, (3) **background**: a text description of the location of each scene, and (4) **consistency groupings**: scene indices for each entity/background indicating where they should remain visually consistent. The video plan is generated in two steps, each prompting GPT-4 independently. See Appendix B for each step’s GPT-4 prompt details.
**Video Planning Step 1: Generating multi-scene descriptions, entity names, and entity/background consistency groupings.** In the first step, we employ GPT-4 to expand a single text prompt into a multi-scene video plan. Next, we group entities and backgrounds that appear across different scenes using an exact match. For instance, if the ‘chef’ appears in scenes 1-4 and ‘oven’ only appears in scene 1, we form the entity consistency groupings as `{chef: [1, 2, 3, 4], oven: [1]}`. In the subsequent video generation stage, we use the shared representations for the same entity/background consistency groups to ensure they maintain temporally consistent appearances (see Sec. 3.2 for details).
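The grouping step itself is straightforward; below is a minimal sketch under the assumption that entity names match exactly across scenes (function and variable names are ours):

```python
from collections import defaultdict

def build_consistency_groups(scene_entities):
    """Group entities across scenes by exact name match (1-indexed scene IDs)."""
    groups = defaultdict(list)
    for scene_idx, entities in enumerate(scene_entities, start=1):
        for name in entities:
            groups[name].append(scene_idx)
    return dict(groups)

# Example matching the text: 'chef' in scenes 1-4, 'oven' only in scene 1.
print(build_consistency_groups([["chef", "oven"], ["chef"], ["chef"], ["chef"]]))
# -> {'chef': [1, 2, 3, 4], 'oven': [1]}
```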
**Video Planning Step 2: Generating entity layouts for each scene.** In the second step, we expand the detailed layouts for each scene using GPT-4. We generate a list of bounding boxes for the entities in each frame based on the entities and the scene description. For each scene, we produce layouts for 8 frames, then linearly interpolate to gather information for denser frames (e.g., 16 frames).
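The keyframe-to-frame expansion is plain linear interpolation over the box coordinates; a sketch with the 8-to-16-frame setting from the text as the default (names are ours):

```python
import numpy as np

def interpolate_layouts(keyframe_boxes, num_frames=16):
    """Linearly interpolate one entity's boxes from sparse keyframes to denser frames.

    keyframe_boxes: (num_keyframes, 4) array of [x0, y0, x1, y1] boxes (e.g. 8 keyframes).
    Returns a (num_frames, 4) array.
    """
    keyframe_boxes = np.asarray(keyframe_boxes, dtype=float)
    src_t = np.linspace(0.0, 1.0, len(keyframe_boxes))
    dst_t = np.linspace(0.0, 1.0, num_frames)
    # Interpolate each of the four box coordinates independently over time.
    return np.stack(
        [np.interp(dst_t, src_t, keyframe_boxes[:, c]) for c in range(4)], axis=1
    )
```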
#### 3.2 Video generation: Generating Videos from Video Plans with Layout2Vid
**Layout2Vid: Layout-guided T2V generation.** We implement Layout2Vid by integrating layout control capability into ModelScopeT2V (Wang et al., 2023b), a public T2V generation model based on Stable Diffusion (Rombach et al., 2022) (see Appendix C.1 for ModelScopeT2V details). The diffusion UNet in ModelScopeT2V consists of a series of spatio-temporal blocks, each containing four modules: spatial convolution, temporal convolution, spatial attention, and temporal attention. Compared with ModelScopeT2V, our Layout2Vid enables layout-guided video generation with explicit spatial control over a list of entities represented by their bounding boxes, as well as visual and text content. As illustrated in Fig. 3(a), we build upon the 2D attention to create the Guided 2D Attention. As shown in Fig. 3(b), the Guided 2D Attention takes two conditional inputs to modulate the visual latent representation: (a) **layout tokens**, conditioned with gated self-attention (Li et al., 2023), and (b) **text tokens** that describe the current scene, conditioned with cross-attention.
**Temporally consistent entity grounding with image+text embeddings.** While previous layout-guided text-to-image generation models commonly used only the CLIP text embedding for layout control (Li et al., 2023; Yang et al., 2023), we use the CLIP image embedding in addition to the CLIP text embedding for entity grounding. In our ablation studies (see Appendix F), we find that using both the image and text embeddings for grounding is more effective than text-only or image-only grounding. As depicted in Equation (1), the grounding token for the $i^{th}$ entity, $h_i$, is a 2-layer...
Figure 3: Overview of (a) spatio-temporal blocks within the diffusion UNet of our Layout2Vid and (b) Guided 2D Attention in the spatial attention module. (a) The spatio-temporal block comprises four modules: spatial convolution, temporal convolution, spatial attention, and temporal attention. We adopt settings from ModelScopeT2V, where (N1, N2, N3, N4) are set to (2, 4, 2, 2). In (b) Guided 2D Attention, we modulate the visual representation with layout tokens and text tokens. For efficient training, only the parameters of the Guided 2D Attention (indicated by the fire symbol, constituting 13% of total parameters) are trained using image-level annotations. The remaining modules in the spatio-temporal block are kept frozen.
MLP which fuses CLIP image embeddings $f_{\text{img}}(e_i)$, CLIP text embeddings $f_{\text{text}}(e_i)$, and Fourier features (Mildenhall et al., 2021) of the bounding box $l_i = [x_0, y_0, x_1, y_1]$. We use learnable linear projection layers, denoted as $P_{\text{img/text}}$, on the visual/text features, which we found helpful for faster convergence during training in our initial experiments.
$$h_i = \text{MLP}(P_{\text{img}}(f_{\text{img}}(e_i)), P_{\text{text}}(f_{\text{text}}(e_i)), \text{Fourier}(l_i)) \tag{1}$$
Our image embedding $f_{\text{img}}(e_i)$ can be obtained from either text descriptions or user-provided exemplar images. To obtain image embeddings from text (e.g., from the video plan), we employ Karlo (Lee et al., 2022), a public implementation of unCLIP Prior (Ramesh et al., 2022), which translates a CLIP text embedding into a CLIP image embedding. To obtain the image embedding from image exemplars, we can simply encode the images with the CLIP image encoder.
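A PyTorch sketch of how Equation (1) could be realized is given below; the embedding dimension, token dimension, number of Fourier frequencies, and activation are illustrative assumptions rather than the paper's exact configuration:

```python
import math
import torch
import torch.nn as nn

def fourier_features(box, num_freqs=8):
    """Sin/cos Fourier features of a float (4,) box [x0, y0, x1, y1]
    (Mildenhall et al., 2021)."""
    freqs = 2.0 ** torch.arange(num_freqs) * math.pi                   # (F,)
    angles = box[..., None] * freqs                                    # (4, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten()   # (8F,)

class GroundingToken(nn.Module):
    """Sketch of Equation (1): a 2-layer MLP fusing projected CLIP image/text
    embeddings with box Fourier features. Dimensions are assumptions."""
    def __init__(self, clip_dim=768, token_dim=1024, num_freqs=8):
        super().__init__()
        self.proj_img = nn.Linear(clip_dim, clip_dim)    # P_img
        self.proj_text = nn.Linear(clip_dim, clip_dim)   # P_text
        fused_dim = 2 * clip_dim + 8 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(fused_dim, token_dim), nn.SiLU(), nn.Linear(token_dim, token_dim)
        )

    def forward(self, f_img, f_text, box):
        # h_i = MLP(P_img(f_img(e_i)), P_text(f_text(e_i)), Fourier(l_i))
        fused = torch.cat(
            [self.proj_img(f_img), self.proj_text(f_text), fourier_features(box)]
        )
        return self.mlp(fused)
```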
Parameter and data-efficient training. During training, we only update the parameters of the Guided 2D Attention (13% of total parameters) to inject layout guidance capabilities into the ModelScopeT2V backbone while preserving its original video generation capabilities. This training strategy allows us to efficiently train the model with only image-level layout annotations, while still providing multi-scene temporal consistency via shared entity grounding tokens. Training and inference details for our Layout2Vid are shown in Appendix C.2.
4 EXPERIMENTAL SETUP
Evaluated models. We primarily compare our VideoDirectorGPT with ModelScopeT2V, and present comparisons with other T2V generation models (see Appendix D.1 for all baseline model details) on the datasets for which their papers have provided results. ModelScopeT2V serves as the closest baseline to our framework, given that our Layout2Vid utilizes its frozen weights and only trains a small set of new parameters to add spatial control and temporal consistency across scenes.
Prompts for single-scene video generation. For single-scene video generation, we conduct experiments with VPEval Skill-based prompts (Cho et al., 2023b), which cover skills including object, count, spatial relations, and relative scale, to evaluate layout control; ActionBench-Direction prompts to assess object dynamics; and MSR-VTT captions to cover diverse open-domain scenes (Xu et al., 2016). Specifically, we prepare ActionBench-Direction prompts by sampling video captions from ActionBench-SSV2 (Wang et al., 2023c) and balancing the distribution of movement directions. See Appendix D.2 for details.
Prompts for multi-scene video generation. For multi-scene video generation, we experiment with two types of input prompts: (1) a list of sentences describing events – ActivityNet Captions (Krishna et al., 2017) and Coref-SV prompts based on Pororo-SV (Li et al., 2019b), and (2) a single sentence from which models generate multi-scene videos – HiREST (Zala et al., 2023). Coref-SV is a new multi-scene text description dataset that we propose to evaluate the visual consistency of objects across multi-scene videos. We create Coref-SV by augmenting the Pororo-SV dataset (Li et al.,
2019b), which consists of multi-scene paragraphs from the “Pororo the Little Penguin” animated series. To evaluate the temporal consistency of video generation models trained on real-world videos, we replace its original animation characters (e.g., Pororo) with humans and common animals and examine their appearance across different scenes. Recurring character names are replaced with pronouns (she/he/it). See Appendix D.3 for prompt preparation details.
**Automated evaluation metrics.** Following previous works (Hong et al., 2022; Wu et al., 2022b; Wang et al., 2023b), we use FID (Heusel et al., 2017) and FVD (Unterthiner et al., 2019) as video quality metrics, and CLIPSIM (Wu et al., 2021) as the text-video alignment metric. Given that CLIP fails to faithfully capture detailed semantics such as spatial relations, object counts, and actions in videos (Otani et al., 2023; Cho et al., 2023a,b; Hu et al., 2023), we further propose the use of fine-grained evaluation metrics. For the evaluation of VPEval Skill-based prompts, we employ VPEval accuracy based on running skill-specific evaluation programs (object, count, spatial, scale) that execute relevant visual modules (Cho et al., 2023b). For ActionBench-Direction prompts, we propose an object movement direction accuracy metric that takes both temporal information and spatial layouts into consideration. To achieve this, we obtain the start/end locations of objects by detecting them with GroundingDINO (Liu et al., 2023) in the first/last video frames. We then evaluate whether the $x$-coordinates (for movements left or right) or $y$-coordinates (for movements up or down) of the objects have changed correctly. For consistency evaluation in ActivityNet Captions and Coref-SV, we introduce a new metric to measure the consistency of the visual appearance of a target object across different scenes. For this, we first detect the target object using GroundingDINO from the center frame of each scene video. Then, we extract the CLIP (ViT-B/32) image embedding from the crop of the detected bounding box. We calculate the multi-scene object consistency metric by averaging the CLIP image embedding similarities across all adjacent scene pairs:
$$\frac{1}{N-1} \sum_{n=1}^{N-1} \cos(\text{CLIP}_{n}^{\text{img}}, \text{CLIP}_{n+1}^{\text{img}}),$$
where $N$ is the number of scenes, $\cos(\cdot, \cdot)$ is cosine similarity, and $\text{CLIP}_{n}^{\text{img}}$ is the CLIP image embedding of the target object in $n$-th scene.
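Once the target object has been detected (with GroundingDINO) and its crop embedded (with CLIP) in each scene, the metric reduces to a few lines; a sketch with detection and encoding assumed to happen upstream:

```python
import torch
import torch.nn.functional as F

def multi_scene_consistency(scene_embeddings: torch.Tensor) -> float:
    """Average cosine similarity of the target object's CLIP embedding across
    adjacent scene pairs. scene_embeddings: (N, d), one embedding per scene."""
    sims = F.cosine_similarity(scene_embeddings[:-1], scene_embeddings[1:], dim=-1)
    return sims.mean().item()  # averages over the N-1 adjacent pairs
```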
**Human evaluation.** We conduct a human evaluation on the multi-scene videos generated by both our VideoDirectorGPT and ModelScopeT2V on the Coref-SV dataset. Since we know the target entity and its co-reference pronouns in the Coref-SV prompts, we can compare the temporal consistency of the target entities across scenes. We evaluate the human preference between videos from two models in each category of Quality, Text-Video Alignment, and Object Consistency. We show 50 videos of each model to three crowd-workers from Amazon Mechanical Turk to rate and then we calculate human preferences between the models. See Appendix E for more setup details.
**Step-by-step error analysis.** We also do an error analysis with an expert on each step of our single-sentence to multi-scene video generation pipeline on the HiREST dataset. We analyze the generated multi-scene text descriptions, layouts, and entity/background consistency groupings to evaluate our video planning stage, and examine the final video to evaluate the video generation stage. We provide the detailed error analysis setup in Appendix E.
## 5 RESULTS AND ANALYSIS
### 5.1 SINGLE-SCENE VIDEO GENERATION
**Layout control results (VPEval Skill-based prompts).** Table 1 (left) displays the VPEval accuracy on the VPEval Skill-based prompts. Our VideoDirectorGPT significantly outperforms ModelScopeT2V on all layout control skills. These results suggest that layouts generated by our LLM are highly accurate and greatly improve the control of object placements during video generation. Fig. 4 (1st row) shows an example where our LLM-generated video plan successfully guides Layout2Vid to accurately place the objects. In contrast, ModelScopeT2V fails to generate a ‘pizza’.
**Object movement results (ActionBench-Direction).** Table 1 (right) shows the performance on the ActionBench-Direction prompts, which evaluate both temporal understanding and spatial layout control. Our VideoDirectorGPT outperforms ModelScopeT2V in object movement direction accuracy by a large margin, demonstrating that our LLM-generated layouts can improve the accuracy of object dynamics in video generation. Fig. 4 (2nd row) shows video generation examples, where our LLM-generated video plan guides the Layout2Vid module to place the ‘pear’ in the correct starting position and move it to the correct end position in the video, whereas the ‘pear’ in the ModelScopeT2V video moves in a random, incorrect direction.
Table 1: Comparison of \textsc{VideoDirectorGPT} with ModelScopeT2V on layout control (VPEval Skill-based) and object movement (ActionBench-Direction) for single-scene video generation. The first five numeric columns report VPEval skill accuracy (%); the last column reports ActionBench movement direction accuracy (%).
| Method | Object (%) | Count (%) | Spatial (%) | Scale (%) | Overall Acc. (%) | Movement Direction Acc. (%) |
|--------|-----------|-----------|-------------|-----------|------------------|------------------------------|
| ModelScopeT2V | 89.8 | 38.8 | 18.0 | 15.8 | 40.8 | 30.5 |
| \textsc{VideoDirectorGPT} (Ours) | 97.1 | 77.4 | 61.1 | 47.0 | 70.6 | 46.5 |
Figure 4: Generation examples on a VPEval Skill-based prompt and an ActionBench-Direction prompt. Our video plan, with object layouts overlaid, successfully guides the Layout2Vid module to place objects in the correct spatial relations for the VPEval Skill-based prompt and move the ‘pear’ in the correct direction for the ActionBench-Direction prompt, whereas ModelScopeT2V fails to generate a ‘pizza’ in the VPEval Skill-based prompt example and the ‘pear’ moves in a random wrong direction for the ActionBench-Direction prompt. See Appendix G for additional examples and supplementary material for full videos.
Open-domain results (MSR-VTT). Table 2 shows the visual quality (FVD, FID) and text-video alignment (CLIPSIM) metrics. Our \textsc{VideoDirectorGPT} maintains performance similar to its closest baseline ModelScopeT2V (a clear improvement in FVD, and similar performance on FID and CLIPSIM), while additionally being equipped with layout control and multi-scene temporal consistency. In addition, our \textsc{VideoDirectorGPT} achieves better or comparable performance to models trained with larger video data (e.g., Make-A-Video) or at higher resolution (e.g., VideoLDM).
Table 2: Visual quality and text-video alignment metrics on MSR-VTT. ModelScopeT2V†: Our replication with 2990 randomly selected test prompts.
| Method | FVD (↓) | FID (↓) | CLIPSIM (↑) |
|--------|---------|---------|-------------|
| *Different arch / training data* | | | |
| NUWA | – | 47.68 | 0.2439 |
| CogVideo (Chinese) | – | 24.78 | 0.2614 |
| CogVideo (English) | 1294 | 23.59 | 0.2631 |
| MagicVideo | 1290 | – | 0.2929 |
| VideoLDM | – | – | 0.3049 |
| Make-A-Video | – | 13.17 | – |
| *Same video backbone & test prompts* | | | |
| ModelScopeT2V† | 683 | 12.32 | 0.2909 |
| \textsc{VideoDirectorGPT} (Ours) | 550 | 12.22 | 0.2860 |
5.2 Multi-Scene Video Generation
Multiple sentences to multi-scene videos (ActivityNet Captions / Coref-SV). As shown in the left two blocks of Table 3, our \textsc{VideoDirectorGPT} outperforms ModelScopeT2V in visual quality (FVD/FID) and consistency on ActivityNet Captions and Coref-SV datasets. Notably, for Coref-SV, our \textsc{VideoDirectorGPT} achieves higher object consistency than ModelScopeT2V even with GT co-reference (where pronouns are replaced with their original noun counterparts, acting as oracle information; e.g., “she picked up ...” becomes “cat picked up ...”), showcasing the strong object identity preservation of our framework. Fig. 5(left) shows a video generation example from Coref-SV, where the LLM-generated video plan can guide the Layout2Vid module to generate the same mouse across scenes consistently, whereas ModelScopeT2V generates a hand and a dog instead of a mouse in later scenes. See Appendix G for an additional example.
Table 3: Multi-scene video generation with multiple input sentences (ActivityNet Captions and Coref-SV) and single sentence (HiREST prompts). *GT co-reference*: replacing co-reference pronouns in Coref-SV with the original object names (e.g., “his friends” becomes “dog’s friends” if the original object is ‘dog’).
| Method | ActivityNet FVD (↓) | ActivityNet FID (↓) | ActivityNet Consistency (↑) | Coref-SV Consistency (↑) | HiREST FVD (↓) | HiREST FID (↓) |
|--------|---------------------|---------------------|-----------------------------|--------------------------|----------------|----------------|
| ModelScopeT2V | 980 | 18.12 | 46.0 | 16.3 | 1322 | 23.79 |
| ModelScopeT2V (with GT co-reference; oracle) | – | – | – | 37.9 | – | – |
| VideoDirectorGPT (Ours) | 805 | 16.50 | 64.8 | 42.8 | 733 | 18.54 |
Figure 5: Generation examples on Coref-SV (left) and HiREST (right). For both Coref-SV and HiREST, our VideoDirectorGPT is able to generate detailed video plans and visually consistent videos. In HiREST, the plan also expands the original text prompt to show the process. Conversely, ModelScopeT2V generates a hand and a dog instead of a mouse for Coref-SV, and only generates the final caraway cake (which is visually inconsistent). More examples are in Appendix G and see supplementary for full videos.
**Single sentence to multi-scene videos (HiREST).** The right block of Table 3 shows our VideoDirectorGPT achieves better visual quality scores (FVD/FID) than ModelScopeT2V on the HiREST dataset. As shown in Fig. 5 (right), our LLM can generate a step-by-step video plan from a single prompt and our Layout2Vid can generate consistent videos following the plan. Our VideoDirectorGPT generates a step-by-step video showing how to make caraway cakes (a British seed cake). ModelScopeT2V repeatedly generates the final caraway cake (which is visually inconsistent). We include an additional example in Appendix G.
### 5.3 ADDITIONAL ANALYSIS
**Generating videos with custom image exemplars.** Our Layout2Vid can obtain CLIP image embeddings either from user-provided image exemplars or from entity text descriptions via the Karlo Prior. In Fig. 6, we demonstrate that our Layout2Vid can flexibly take either text-only or image+text descriptions as input to generate multi-scene videos with good entity consistency.
**Dynamic layout strength control based on GPT-4.** The fraction of denoising steps with layout guidance, denoted as $\alpha$ (detailed in Appendix C.3), is a key hyper-parameter in our model. Instead of using a static $\alpha$ value, we explore dynamically adjusting it during video plan generation by asking the LLM how strongly layout guidance should be enforced for each prompt. Table 4 shows the results with static $\alpha$ values of 0.1, 0.2, and 0.3, as well as dynamic $\alpha$ values determined by GPT-4 (called LLM-Dynamic-$\alpha$); a sketch of this step-gating mechanism follows the table. Interestingly, LLMs can help the video generation process achieve a good balance in the quality-layout trade-off. A detailed explanation of Table 4 is given in Appendix F.
Table 4: Ablation of the denoising steps with layout guidance (via Guided 2D attentions) in open-domain (MSR-VTT) and object dynamics (ActionBench-Direction) prompts. \( \alpha = \frac{\text{# steps with layout guidance}}{\text{# total steps}} \). Our Layout2Vid module uses 50 denoising steps in total.
| # Denoising steps with layout guidance | MSR-VTT FVD (↓) | MSR-VTT FID (↓) | MSR-VTT CLIPSIM (↑) | ActionBench Movement Direction Acc. (%) |
|----------------------------------------|-----------------|-----------------|----------------------|------------------------------------------|
| \( \alpha = 0.1 \) (5 steps) | 550 | 12.22 | 0.2860 | 46.5 |
| \( \alpha = 0.2 \) (10 steps) | 588 | 17.25 | 0.2700 | 59.8 |
| \( \alpha = 0.3 \) (15 steps) | 593 | 17.17 | 0.2702 | 57.8 |
| LLM-Dynamic-\( \alpha \) (5-15 steps) | 523 | 13.75 | 0.2790 | 56.8 |
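For intuition, the \( \alpha \) mechanism can be viewed as a per-step toggle during sampling. The sketch below assumes layout guidance is applied during the earliest denoising steps (as in GLIGEN-style scheduled sampling); the exact scheduling is specified in Appendix C.3, and the per-step callables here are hypothetical stand-ins, not the actual sampler API:

```python
def denoise_with_layout_guidance(latents, total_steps, alpha, guided_step, plain_step):
    # Apply Guided 2D Attention only for the first alpha fraction of denoising steps.
    num_guided = int(alpha * total_steps)  # e.g. alpha = 0.1 with 50 steps -> 5 steps
    for t in range(total_steps):
        step = guided_step if t < num_guided else plain_step
        latents = step(latents, t)
    return latents
```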
Figure 6: Video generation with text-only and image+text inputs. Users can provide either text-only or image+text descriptions to place custom entities when generating videos with VIDEODIRECTORGPT. The identities of the entities are preserved across multiple scenes. Additional examples are shown in Appendix G and see supplementary for full videos.
5.4 Human Evaluation
We conduct a human evaluation (detailed in Sec. 4) on multi-scene videos generated by both VIDEODIRECTORGPT and ModelScopeT2V on the Coref-SV dataset. Table 5 shows that VIDEODIRECTORGPT achieves a higher preference than ModelScopeT2V in all categories (Quality, Text-Video Alignment, and Object Consistency).
We also conduct an error analysis with an expert on each step of our single-sentence to multi-scene video generation pipeline on HiREST prompts, and find that our LLM-guided planning steps achieve high accuracy, whereas the biggest score drop happens in the layout-guided video generation. This suggests that our VIDEODIRECTORGPT could generate even more accurate videos once we have access to a stronger T2V backbone than ModelScopeT2V. We present full error analysis results in Appendix E.
Table 5: Human preference on generated multi-scene videos of Coref-SV in three evaluation categories.
| Evaluation category | VIDEODIRECTORGPT (Ours) (%) | ModelScopeT2V (%) | Tie (%) |
|---------------------------|------------------------------|--------------------|---------|
| Quality | 54 | 34 | 12 |
| Text-Video Alignment | 54 | 28 | 18 |
| Object Consistency | 58 | 30 | 12 |
6 Conclusion
In this work, we propose VIDEODIRECTORGPT, a novel framework for consistent multi-scene video generation, leveraging the knowledge of LLMs for video content planning and grounded video generation. In the first stage, we employ GPT-4 as a video planner to craft a video plan, which provides a multi-component script for videos with multiple scenes. In the second stage, we introduce Layout2Vid, a grounded video generation module, to generate videos with layout and cross-scene consistency control. Our experiments demonstrate that our proposed VIDEODIRECTORGPT framework substantially improves object layout and movement control and can generate multi-scene videos with cross-scene visual consistency, while achieving competitive performance with SOTAs on open-domain single-scene T2V generation.
7 ETHICS STATEMENT
While our framework can be beneficial for numerous applications (e.g., user-controlled/human-in-the-loop video generation/manipulation and data augmentation), akin to other video generation frameworks, it can also be utilized for potentially harmful purposes (e.g., creating false information or misleading videos), and thus should be used with caution in real-world applications (with human supervision). Our video generation module (Layout2Vid) is based on the pretrained weights of ModelScopeT2V. Therefore, we face limitations similar to theirs, including deviations related to the distribution of the training datasets, imperfect generation quality, and an understanding limited to English corpora.
8 REPRODUCIBILITY STATEMENT
Our model is built upon the publicly available code repositories of GLIGEN (Li et al., 2023) and ModelScopeT2V (Wang et al., 2023b). Please see Sec. 3 and Appendix C for model architecture details; Sec. 4, Appendix D.2, and Appendix D.3 for dataset details; and Sec. 4 and Appendix D.4 for metric details. We will publicly release our code.
REFERENCES
Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22563–22575, 2023.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.
Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299–6308, 2017.
Jaemin Cho, Abhay Zala, and Mohit Bansal. Dall-eval: Probing the reasoning skills and social biases of text-to-image generation models. In ICCV, 2023a.
Jaemin Cho, Abhay Zala, and Mohit Bansal. Visual programming for text-to-image generation and evaluation. In NeurIPS, 2023b.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022.
Footnotes:
1. GLIGEN: https://github.com/gligen/GLIGEN
2. ModelScopeT2V: https://modelscope.cn/models/damo/text-to-video-synthesis/summary
3. https://github.com/ExponentialML/Text-To-Video-Finetuning/tree/main
|
mavWQw7DnC
|
the perspective of contrapositive logic is not fully novel. In fact, Pearl (1999) [1] defined the notion of **probability of necessary causation**, which follows the same logic as the contrapositive. There may be some connection between the probability of necessary causation and the method proposed in this paper. Linking the method proposed in this paper with the probability of necessary causation may provide a theoretical guarantee for it. Could you discuss the possible connections?
|
EXPLAINING RECOMMENDATION SYSTEMS THROUGH CONTRAPOSITIVE PERTURBATIONS
Anonymous authors
Paper under double-blind review
ABSTRACT
Recommender systems are widely used to help users discover new items online. A popular method for recommendations is factorization models, which predict a user’s preference for an item based on latent factors derived from their interaction history. However, explaining why a particular item was recommended to a user is challenging, and current approaches such as counterfactual explanations can be computationally expensive. In this paper, we propose a new approach called contrapositive explanations (Contra+) that leverages a different logical structure from that of counterfactual explanations. We show how contrapositive explanations can be used to explain recommendation systems, by presenting a methodology that focuses on finding an explanation in the form of “Because the user interacted with item j, we recommend item i to the user,” which we show is easier to compute and find compared to traditional counterfactual approaches, which aim at “Because the user did not interact with item j, we did not recommend item i to the user.” We evaluate our approach on several real-world datasets and show that it provides effective and efficient explanations compared to other existing methods.
1 INTRODUCTION
Recommender systems have become ubiquitous in online platforms to help users discover new items of interest [Lu et al. (2012); Aggarwal et al. (2016); Beel et al. (2016); Jannach et al. (2022)]. These systems analyze a user’s historical interactions with items and suggest new items based on those interactions, providing personalized recommendations that align with the user’s preferences [Lu et al. (2015); Das et al. (2017); Bobadilla et al. (2013); Pazzani & Billsus (2007)]. Factorization models, such as the Singular Value Decomposition (SVD) model, are commonly used in recommender systems [Guan et al. (2017); Bokde et al. (2015)] to predict a user’s preference for an item based on latent factors derived from the user’s interaction history.
However, the “why” behind a recommendation remains a challenging issue. Counterfactual explanations [Wachter et al. (2017)] offer one possible approach to this problem; they attempt to demonstrate the minimal changes needed in a user’s history that would trigger a different recommendation [Tran et al. (2021)]. This, however, requires the deletion of the (user, item) pair from the user’s history and retraining of the model, a process that is time-consuming and computationally expensive.
In order to bridge this gap, various techniques have been proposed, such as influence functions [Tran et al. (2021); Koh & Liang (2017)]. Despite their utility in computing the impact of data removal, these methods are still challenged by their computational demands, primarily due to the need to compute the inverse of the Hessian. This high computational cost often limits their practical application in real-time recommender systems; which is the primary focus of this paper. In addition to that, influence functions are only approximations and hence are less reliable when it comes to highly non-linear models such as Deep Neural Networks [Basu et al. (2020)].
To address this issue, this paper introduces a novel approach called contrapositive explanations (Contra+). The contrapositive logic involves negating and switching the order of the antecedent and consequent of an implication statement. The proposed approach in this paper leverages this logic to provide explanations for recommenders by first negating the user’s recommended item and switching it with another item and secondly inspecting the resulting changes in the user’s history. This approach avoids the need for retraining the model and provides a more efficient way to generate
explanations. Before diving deeper into how we can utilize this logic in recommender systems, let us take a short detour to lay out what the contrapositive logic entails in a simple example.
**Example 1.1.** [Toy Example] Consider the following two statements:
- **A:** It is raining.
- **B:** The road is wet.
From the above, we can make the following logical statement: \( A \rightarrow B \), i.e. it was raining, and this implies that the road is wet. The logical equivalent (contrapositive) is \( \bar{B} \rightarrow \bar{A} \), i.e. the road is not wet, which implies that it was not raining. This is contrary to the counterfactual logic, which would reason through \( \bar{A} \rightarrow \bar{B} \), i.e. it was not raining, and this implies that the road is not wet. Note that this is not always the case, as a bucket of water could make the road wet. Hence, these are two distinct statements, and in this paper we focus in particular on achieving \( A \rightarrow B \).
With this in mind, we now present how we can use contrapositive logic in explaining recommender systems. The explanation logic that we will be using throughout the paper is the following:
**Example 1.2.** [Recommender System Example] Consider again the following two statements:
- **A:** The user \( u \) interacted with item \( j \) in the user history.
- **B:** The user \( u \) is recommended item \( i \).
Here, the objective is to find an explanation that supports the statement \( A \rightarrow B \), meaning because user \( u \) interacted with item \( j \), item \( i \) was recommended. Identifying such explanations can be challenging and computationally intensive, as it would require exhaustively searching through all possible combinations of a user’s history to determine which interactions, when removed, do not alter the recommendation. To address this challenge, we adopt the logically equivalent contrapositive route \( \bar{B} \rightarrow \bar{A} \): if item \( i \) is not recommended, then user \( u \) would not have interacted with item \( j \).
Intuitively, given the predominance of user and item embeddings in most recommender systems, our method starts by invoking \( \bar{B} \), that is, we “perturb” the user embedding to ensure item \( i \) is not recommended. Then, given this perturbed user embedding, we identify the historical item that has lost most relevance to the user — effectively, the item with which the user would not have interacted, denoted as \( \bar{A} \). We detail the formalization of our method in Section 3.
The key contributions of this paper are as follows:
- We propose an explanation method for recommender systems that uses contrapositive logic, which involves negating and switching the antecedent and consequent of a user’s preference for items. This approach reduces the computational cost and the need for model retraining.
- We propose a computationally efficient explanation framework for recommender systems. Specifically, we investigate its applicability and performance on SVD and MLP-based recommender systems, demonstrating its versatility in various experiments.
- We introduce an evaluation metric tailored to contrapositive logic, offering a new perspective on assessing explanations compared to traditional counterfactual logic. We demonstrate in extensive experiments that our proposed method outperforms existing methods.
This paper is structured as follows: Section 2 gives background on recommender systems and existing explanation methods. Section 3 introduces our proposed methodology Contra+, which is then followed by extensive experiments in Section 4. Lastly, in Section 5, we conclude with the limitations as well as future extensions of our proposed method Contra+.
## 2 Background and Related Work
### 2.1 Formulation of Recommender Systems
Before diving into the specifics of Singular Value Decomposition (SVD) and Multi-Layer Perceptron (MLP) models, we first establish the fundamental elements of recommender systems. The key components for SVD and MLP models are as follows:
• **User-Item pair** \((u, i)\): These pairs represent the interaction between user \(u \in U\) and item \(i \in I_u\), where \(U\) is the set of all users and \(I_u\) is the set of items user \(u\) has interacted with.

• **Training data:** The data in recommender systems usually comprises a user-item interaction matrix \(R \in \mathbb{R}^{m \times n}\), with \(m\) representing the number of users and \(n\) the total number of items. Each element \(R_{ui}\) corresponds to the rating given by user \(u\) to item \(i\).
• **User/Item embeddings** \((p_u, q_i)\): Each user \(u\) and item \(i\) are represented in a latent space through vectors, or embeddings, denoted as \(p_u\) and \(q_i\) respectively. These embeddings are computed during the training process (SVD or MLP) and capture the underlying characteristics and preferences of users and items.
### 2.2 Brief Overview of SVD and MLP Models
**Singular Value Decomposition:** SVD is a widely used matrix factorization model in recommender systems and allows us to predict user preferences by decomposing the user-item interaction matrix \(R\) into two low-rank matrices: \(P \in \mathbb{R}^{m \times d}\) and \(Q \in \mathbb{R}^{n \times d}\), according to:
\[
R \approx PQ^T
\]
(1)
Here, \(d\) is the pre-determined number of latent factors. Each row in the matrices \(P\) and \(Q\) represents a latent factor vector for a user and an item, respectively. These vectors, denoted as \(p_u\) and \(q_i\), serve as embeddings that encapsulate the essential characteristics of user \(u\) and item \(i\) in a \(d\)-dimensional space. To leverage the predictive power of SVD models, we first compute an interaction score between a user and every non-rated item. This interaction score signifies the predicted rating or preference of user \(u\) for item \(i\) and is calculated as the dot product of the corresponding user and item embeddings. Consequently, the score function \(s(u, i)\) is defined as:
\[
s(u, i) = p_u^T q_i = \langle p_u, q_i \rangle
\]
(2)
Once we have computed the scores between user \(u\) and all non-rated items, we can sort them and recommend the item with the highest interaction score.
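A minimal NumPy sketch of this scoring-and-ranking step for the score function in Equation (2); the function and variable names are ours:

```python
import numpy as np

def recommend_svd(P, Q, user, rated_items):
    """Score all non-rated items via s(u, i) = <p_u, q_i> and return the top item.

    P: (m, d) user factors; Q: (n, d) item factors; rated_items: indices in I_u.
    """
    scores = Q @ P[user]                 # s(u, i) for every item i
    scores[list(rated_items)] = -np.inf  # exclude items already in the history
    return int(np.argmax(scores)), scores
```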
**Multi-Layer Perceptron:** On the other hand, MLP models extend beyond linear relationships captured by SVD. They leverage neural networks to process the concatenated user and item embeddings, thereby capturing potential non-linear interactions between users and items. In this case, given the user embedding \(p_u\) and the item embedding \(q_i\), the score function, denoted as \(s(u, i)\), is defined as:
\[
s(u, i) = \text{MLP}([p_u; q_i]; \theta)
\]
(3)
Here \(\text{MLP}([p_u; q_i]; \theta)\) is a neural network parameterized by \(\theta\). Similarly to the SVD model, when making a new recommendation for the user, we sort the scores and pick the highest-scored item; a sketch of this score function is shown below. Next, we delve deeper into the specifics of these models and how explanations can be generated.
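For concreteness, a PyTorch sketch of the score function in Equation (3); the hidden sizes and activations are our own assumptions, as the text does not fix a specific architecture:

```python
import torch
import torch.nn as nn

class MLPScorer(nn.Module):
    """Sketch of Equation (3): score a (user, item) pair from concatenated embeddings."""
    def __init__(self, d, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * d, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, p_u, q_i):
        # s(u, i) = MLP([p_u; q_i]; theta)
        return self.net(torch.cat([p_u, q_i], dim=-1)).squeeze(-1)
```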
### 2.3 Counterfactual Explanations and Influence Functions
One of the primary ways of explaining recommender systems is through counterfactual explanations [Wachter et al. (2017); Tran et al. (2021); Yao et al. (2022a); Ghazimatin et al. (2020); Kaffes et al. (2021); Tan et al. (2021)]. These methods attempt to compute logical statements of the form \(\bar{A} \rightarrow \bar{B}\), which indicates that because the user did not interact with item \(j\), item \(i\) was not recommended. In other words, because the removal of item \(j\) changed the recommendation for user \(u\), item \(j\) serves as an explanation for having had an impact on the recommendation of item \(i\).
However, computing such counterfactual explanations can be challenging, particularly when attempting to identify which historical item(s) are responsible for a given recommendation. One approach is to remove a combination of relevant item(s) from the training data and retrain the model, but this can be computationally infeasible, particularly for large neural networks. To address this issue, researchers have proposed alternative methods such as gradient-based [Tan et al. (2021)] and search-based [Kaffes et al. (2021)] approaches, as well as influence functions [Tran et al. (2021); Koh & Liang (2017)], which approximate the retraining of a model when one or more data points are removed. However, even though these methods significantly reduce the computational cost of retraining models, they remain unsuitable for large real-time recommender systems given their substantial computational requirements.
In particular, influence functions have become popular due to their ability to approximate a retrained differentiable model without requiring the retraining of the entire model. However, they are not without their own limitations, such as the need to compute the Hessian matrix, which can be computationally infeasible for large networks, and their second-order approximation of the model, which can result in misleading results [Basu et al., 2020]. Others have tried to train surrogate models that learn a mapping from removed items to retrained models [Yao et al., 2022a]. However, the latter suffers from extensive offline training of the surrogate model which can be prohibitive in practice.
To address these challenges, we propose a new approach to explainable recommendation systems based on contrapositive explanations. Unlike counterfactual explanations, which attempt to identify the necessary cause of a recommendation, contrapositive explanations focus on identifying the sufficient conditions for a recommendation to be made. Specifically, we aim to compute logical statements of the form \(\bar{B} \rightarrow \bar{A}\), which is equivalent to \(A \rightarrow B\).
3 PROPOSED METHOD: Contra+ EXPLANATIONS
In this section, we introduce our novel approach for generating what we term Contra+ explanations for any recommender system. We start by focusing on the SVD model as a simple case study and later explain how to apply our proposed method to any differentiable model such as MLPs.
3.1 FACTOR MODEL: SVD
Recall that the SVD model is a factorization-based approach that represents users and items in a shared latent space. A rating for user-item pair \((u, i)\) is predicted using a factor model, where the interaction between the user and item is represented by \( s(u, i) = \langle p_u, q_i \rangle \), where \( p_u, q_i \in \mathbb{R}^d \) are \( d \)-dimensional latent factors that measure the alignment between the preferences of user \( u \) and the item \( i \). Our goal is to arrive at the statement \( \bar{B} \rightarrow \bar{A} \), which means: because we do not recommend item \( i \) to user \( u \), the user would not have interacted with item \( j \).
As a first step, we negate the consequent \( B \): “we recommend item \( i \) to user \( u \)”. Our method Contra+ first constructs a user embedding \( p'_u \) for user \( u \) such that item \( i \) is not recommended. To achieve this, we perturb the user’s latent representation \( p_u \) in the direction opposite to the item’s representation \( q_i \), such that the recommendation score decreases and item \( i \) is no longer recommended. In other words, we enforce \( \bar{B} \), the negation of “user \( u \) is recommended item \( i \)”.
More concretely, we define a new user embedding as follows:
\[
p'_u = \gamma p_u - \epsilon q_i, \text{ where } \epsilon \in \mathbb{R}^+ \text{ and } \gamma \in [0, 1]
\]
and hence the new score for the recommendation \((u, i)\) can then be expressed as:
\[
s'(u, i) = \langle p'_u, q_i \rangle = \gamma s(u, i) - \epsilon \|q_i\|^2 < s(u, i).
\]
For simplicity of exposition, we fix \( \gamma = 1 \) for now. Intuitively, if we choose a sufficiently large \( \epsilon \), we can ensure that the recommended item \( i \) is no longer recommended as the score \( s'(u, i) \) will drop. Specifically, if we want the new score to be less than \( S \in \mathbb{R}^+ \):
\[
s'(u, i) = \gamma s(u, i) - \epsilon \|q_i\|^2 < S \iff \epsilon > \frac{\gamma s(u, i) - S}{\|q_i\|^2}
\]
This leads us to the second step of Contra+: using \( p'_u \) to determine which items the user would likely not have interacted with. To this end, we construct the explanation set by considering the difference between the old score \( s(u, h) \) and the new score \( s'(u, h) \), i.e. \( \Delta_h = s(u, h) - s'(u, h) \), where \( h \in I_u \). By ordering \( \Delta_h \) (restricted to items the user rated at least 4, i.e. liked items), we assume that the liked items that experienced the greatest decrease in score under the new embedding \( p'_u \) are the items that user \( u \) would not have interacted with in the first place. Hence we can state the negation of the antecedent \( A \): “user \( u \) interacted with item \( h \)”, i.e. \( \bar{A} \).
Putting both parts of Contra+ together, we can make the statement \( \bar{B} \): “we did not recommend item \( i \) to user \( u \)”, and therefore \( \bar{A} \): “user \( u \) would not have interacted with item \( h \)”. This is logically equivalent to \( A \rightarrow B \), i.e. user \( u \) interacted with item \( h \), and hence we recommended item \( i \). We emphasize that we do not claim to find the one and only explanation, but rather that we provide a contrapositive explanation which fits our logical statement \( \bar{B} \rightarrow \bar{A} \), which is equivalent to \( A \rightarrow B \). This is corroborated by our extensive experiments as well.
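Putting the two steps together, a minimal sketch of Contra+ for the SVD model; the names are ours, `history` is assumed to be pre-filtered to liked items (rating at least 4), and the threshold \( S \), \( \gamma \), and explanation size are hyperparameters:

```python
import numpy as np

def contra_plus_svd(P, Q, user, rec_item, history, gamma=1.0, S=0.0, top_k=3):
    """Contra+ for SVD: perturb p_u so `rec_item` is no longer recommended,
    then rank history items by their score drop Delta_h."""
    p_u, q_i = P[user], Q[rec_item]
    # Choose epsilon just large enough that the new score falls below the
    # threshold S, per the bound on epsilon derived above.
    eps = max((gamma * (p_u @ q_i) - S) / (q_i @ q_i), 0.0) + 1e-6
    p_new = gamma * p_u - eps * q_i  # perturbed user embedding p'_u
    # Delta_h = s(u, h) - s'(u, h) for each liked history item h.
    deltas = {h: p_u @ Q[h] - p_new @ Q[h] for h in history}
    # The items whose scores dropped the most form the explanation set.
    return sorted(deltas, key=deltas.get, reverse=True)[:top_k]
```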
To further elucidate our methodology, let us examine a scenario where a user \( u \) received a recommendation for the movie *The Godfather II* based on their previous interactions out of which one of them was *The Godfather*. A useful explanation for the user would be the logical statement: “Given your interaction with *The Godfather*, you were recommended *The Godfather II*”. Using our proposed contrapositive approach, we generate an explanation by first generating a user embedding \( p'_u \) who was not recommended *The Godfather II*. If we then observe that the scores for the previously recommended item such as *The Godfather* significantly decrease compared to the rating provided by the user, we can infer that the absence of the recommendation for *The Godfather II* would likely have been because of the lack of interaction with *The Godfather*. Hence, we can deduce that “If the movie *The Godfather II* was not recommended, the user would not have interacted with *The Godfather*” is logically equivalent to the explanation “Because you interacted with *The Godfather*, you were recommended *The Godfather II*”.
### 3.2 Factor Model: MLP
Now that we have described the general framework for the SVD model, the natural question is how it applies to models for which we do not necessarily have an inner-product structure between user and item embeddings \( \langle p_u, q_i \rangle \), such as MLP models. In these neural models, even though we still construct a user embedding \( p_u \) and an item embedding \( q_i \), we no longer compute the inner product but rather concatenate the embeddings before pushing them through multiple MLP layers.
Hence this renders our simple user-embedding modification unusable. We therefore propose an alternative method for non-inner-product models which, in essence, only requires us to backpropagate the score for a given user in order to reduce their score for a recommended item \( i \). In other words, let \( p_u, q_i \) be the user and item embeddings respectively, and let \( \text{MLP} : \mathbb{R}^{2d} \rightarrow \mathbb{R} \) be the MLP that takes as input the concatenation \([p_u, q_i]\) and outputs the corresponding relevance score. In this case, we can update the user embedding \( p_u \) over \( k \) iterations as follows, where \( \eta \) is a learning rate:
\[
p'_u \leftarrow p_u - \eta \nabla_{p_u} \text{MLP}([p_u, q_i])
\]
Note that all the other parameters of the recommender system remain the same and that we only modify the embedding \( p_u \). In this case, we again obtain a new user embedding \( p'_u \) for user \( u \) and can repeat the same procedure as above, i.e. select the items in the user history whose scores dropped the most under the new embedding as our explanation set.
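A sketch of this gradient-based variant; the learning rate \( \eta \), number of steps \( k \), and all names are illustrative (`mlp` is a scorer taking a user and an item embedding, as in Equation (3)):

```python
import torch

def contra_plus_mlp(mlp, p_u, q_i, history_embs, eta=0.1, k=10, top_k=3):
    """Contra+ for MLP models: gradient-descend only the user embedding to lower
    the recommended item's score, then rank history items by score drop."""
    p_new = p_u.clone().detach().requires_grad_(True)
    for _ in range(k):  # k gradient steps on p_u only; all other weights frozen
        score = mlp(p_new, q_i)
        grad, = torch.autograd.grad(score, p_new)
        with torch.no_grad():
            p_new -= eta * grad  # p'_u <- p'_u - eta * grad_{p_u} MLP([p_u, q_i])
    with torch.no_grad():
        deltas = {h: (mlp(p_u, q_h) - mlp(p_new, q_h)).item()
                  for h, q_h in history_embs.items()}
    return sorted(deltas, key=deltas.get, reverse=True)[:top_k]
```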
We acknowledge that this computation is more computationally heavy than in the SVD case, where we were only required to compute the item embedding for item \( i \). However, we argue that this computation is still significantly smaller than in Tan et al. (2021), as we only need to backpropagate for a single datapoint and user embedding, which is of the same complexity as a forward pass. In our later experiments, we show that our Contra+ for MLP takes less than 1 second, whereas influence functions (Koh & Liang, 2017; Basu et al., 2020) can take at least 5 times longer.
### 3.3 Discussion on Contrapositive and Counterfactual Explanations
Before moving on to our empirical findings, we first need to clearly delineate the differences and similarities between contrapositive and counterfactual explanations within the realm of recommendation systems. These two types of explanations hinge on distinct logical structures. Counterfactual explanations follow an \( \bar{A} \rightarrow \bar{B} \) logic, while contrapositive explanations adopt the reversed \( \bar{B} \rightarrow \bar{A} \) logic. In simple terms, counterfactual explanations explore what changes in recommendations (\( \bar{B} \)) occur upon the removal of specific elements (\( \bar{A} \)), whereas contrapositive explanations begin by noting the change in recommendations (\( \bar{B} \)) and then seek to identify which elements were removed (\( \bar{A} \)).
To further illustrate these concepts, consider a movie recommendation system. A counterfactual explanation might highlight that removing horror films (the removal \( \bar{A} \)) from a user’s watch history leads to the system no longer recommending thriller movies (the change in recommendation \( \bar{B} \)). In contrast, a contrapositive explanation begins with an observed change in the recommendation output, say, thriller movies are no longer suggested (the change \( \bar{B} \)), and then determines that this change is due to the exclusion of horror films from the user’s history (the removal \( \bar{A} \)). To be clear, these two explanation methods are not mutually exclusive: a counterfactual explanation may well fulfill the conditions of a contrapositive explanation and vice versa. However, the sets of explanations do not completely overlap, as is evident from Toy Example 1.1 (a bucket of water can make the road wet instead of rain).
Having established this understanding, we can now turn our attention to the recently proposed concept of *counterfactual backtracking* (von Kügelgen et al., 2022), which interestingly shares some parallels with our contrapositive explanations. Traditional counterfactual reasoning, often metaphorically described as creating “small miracles”, posits hypothetical scenarios where certain features of reality are modified while others persist. Translating this into the recommendation systems domain might entail erasing a segment of a user’s history while the remaining part stays unaltered.
However, the backtracking approach diverges from this path. Instead of crafting a new reality, backtracking maintains the laws of the system intact and traces back changes from the outcome to altered initial conditions. In other words, it starts from a change in recommendations and seeks to identify what alterations in the user’s history would lead to this new outcome. In this sense, there is an overlap between contrapositive explanations and counterfactual backtracking as both follow a reversed reasoning, tracing back from outcomes to causes.
Both approaches allow us to imagine how varying the user’s history would lead to different recommendations. But unlike traditional counterfactual reasoning—which constructs a completely new world by altering the user’s history—both contrapositive explanations and counterfactual backtracking keep the laws of the system unaltered and examine how changes in the outcome can be traced back to changes in initial conditions. This distinction offers an intuitively appealing and conceptually novel approach to understanding recommendation systems. Note that von Kügelgen et al. (2022) have not actually proposed a practical algorithm but rather set up a new theoretical framework.
Now that we have thoroughly explored the differences between our proposed method and conventional counterfactual methods, we move on to the experimental setting. However, it is apparent that different metrics are needed to capture the contrapositive perspective. Hence, we developed a new metric $M^u$ for contrapositive explanations in recommender systems, which we describe in the following.
### 3.4 Contrapositive Explanations Evaluation Metric
In contrast to counterfactual explanations, contrapositive explanations necessitate distinctive evaluation metrics. For counterfactual explanations, performance evaluation typically involves a three-step process: calculating the explanations, removing these explanations from the training data, and verifying whether these alterations changed the recommendation. As depicted in Figure 1, this process corresponds to the top row, where we aim for a high ratio \( \frac{(1)}{(1) + (2)} \) (referring to the cell counts in Figure 1) when removing explanations.
Conversely, contrapositive explanations aim to maximize a different ratio: given a change in recommendation, how many of the removals instigating this change align with our explanations? This concept is illustrated in the left column of Figure 1, where the desired ratio is \( \frac{(1)}{(1) + (3)} \). Note that these ratios echo the familiar notions of precision and recall from the standard machine learning literature, as observed by Watson et al. (2021). However, in this context, we extend these concepts to fit within the realm of recommender systems.
### 4 Experiments
#### 4.1 Experimental Evaluation
To assess the effectiveness of our proposed *Contra+* explanations method, we conducted a series of experiments on well-established benchmark datasets, namely Movielens-100k, Movielens-1M,
and Netflix [Harper & Konstan (2015); Bennett et al. (2007)]. We aim to showcase the versatility of our approach by implementing our contrapositive strategy on two distinct model classes commonly employed in recommender systems: Singular Value Decomposition (SVD) and Multi-Layer Perceptron (MLP) models [He et al. (2017)]. For a comprehensive evaluation, we compared our proposed method against several baseline approaches. Recall that in this paper we primarily focus on computationally very efficient methods; hence, many of the aforementioned methods from Section 2.3 are not comparable due to their computational budget.
Baselines The first baseline method, referred to as the Random method, randomly selects explanations from a user’s historical data. This method serves as a fundamental sanity check to ensure our contrapositive method outperforms arbitrary selection. The second baseline, the Item Similarity method [Yao et al. (2022b)], selects the historical items most similar to the recommended item as explanations, focusing on similarity-based justifications. It is one of the most commonly used baselines, as it is computationally very efficient, similar to Contra+. The final baseline, included for completeness, is the Influence Function (IF) [Koh & Liang (2017)], which ascertains explanations based on the historical items with the greatest influence on the recommended item. Note that IF is computationally extremely expensive due to the Hessian matrix. Nevertheless, we believe that IF serves as the gold standard for the other SOTA methods mentioned in Section 2.3, which in fact aim to approximate IF.
Evaluations Evaluating the quality of explanations generated by our contrapositive method involves using the previously outlined evaluation metric. However, accurately computing this metric requires a more nuanced procedure, which includes the following steps:
Firstly, we sample 10% of each user’s historical interactions, denoted as $H^u_s$, and remove them from the user-item interaction matrix $R$ of the training dataset. This process is repeated 100 times per user, yielding 100 models with different subsets $\{H^u_s\}_{s=1}^{100}$ removed from $R$. From these 100 models, we select the subsets $\{H^u_{\sigma(k)}\}_{k=1}^{K}$ that led to a change in recommendation after retraining (as per the “recommendation changed” condition / left column in Figure 1). Here, $\sigma(k)$ denotes the indexed subset of removals that triggered the recommendation change. We repeat this for 100 users, thus training 10,000 models. We emphasize that this retraining is purely for evaluation’s sake; the actual explanation method Contra+ does not require retraining of models. Subsequently, we employ the following metric for contrapositive explanations:
$$M_{contra} = \frac{1}{n} \sum_{u=1}^{n} M^u,$$
where
$$M^u = \frac{1}{K} \sum_{k=1}^{K} \frac{\mathbb{1}\left(H^u_{\sigma(k)} \cap E_{method} \neq \emptyset\right)}{|E_{method}|}, \tag{7}$$
where $\mathbb{1}$ is the indicator function assessing whether the intersection is non-empty and $E_{method}$ is the explanation set for a given method. Intuitively, if for every user $u$ the explanations ($E_{method}$) consistently intersect the items causing the recommendations to change ($H^u_{\sigma(k)}$), then the metric $M^u$, and consequently the average $M_{contra}$, will be high, confirming the usefulness of the contrapositive method.
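As a sanity check of Eq. (7), the following sketch computes $M^u$ and $M_{contra}$; we assume, based on the description above, that the indicator tests whether a changed subset intersects the explanation set, and all function names are ours.

```python
import numpy as np

def metric_contra_user(changed_subsets, explanations):
    """M^u for one user. `changed_subsets` is the list {H^u_sigma(k)} of
    removal subsets (sets of item ids) that flipped the recommendation after
    retraining; `explanations` is the set E_method for this user."""
    if not changed_subsets or not explanations:
        return 0.0
    hits = [len(subset & explanations) > 0 for subset in changed_subsets]
    # Average of indicators over K subsets, normalized by |E_method| as in Eq. (7).
    return float(np.mean(hits)) / len(explanations)

def metric_contra(per_user_subsets, per_user_explanations):
    """M_contra: the average of M^u over all evaluated users."""
    return float(np.mean([
        metric_contra_user(s, e)
        for s, e in zip(per_user_subsets, per_user_explanations)
    ]))
```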
Lastly, even though the main goal of this paper is to investigate contrapositive explanations, we also include the counterfactual metric [Tran et al. (2021); Yao et al. (2022b)] in our experiments for completeness. The counterfactual metric works as follows. For every user, we remove the explanations from the training dataset and subsequently retrain the model. We then compute the ratio of the number of changed recommendations due to the removal of the explanations over the number of users. Intuitively, if this ratio is high, removing the explanations consistently changes the recommendation, and hence, through the lens of counterfactual logic, the explanation is considered good. Given that we are the first to introduce contrapositive explanations to XAI, we believe that, even though tangential, it is important to include the counterfactual metric in order to bridge the gap between the communities.
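For comparison, a corresponding sketch of the counterfactual metric is given below; `retrain_without` and `recommend` are hypothetical helpers standing in for the retraining and inference steps described above.

```python
def counterfactual_metric(users, explanations, base_recs,
                          retrain_without, recommend):
    """Fraction of users whose recommendation changes after removing their
    explanation items and retraining. `base_recs[u]` is the recommendation of
    the model trained on the full data; `retrain_without(u, items)` retrains
    with `items` removed from user u's history; `recommend(model, u)` returns
    the top recommendation. All helper names are placeholders."""
    changed = sum(
        recommend(retrain_without(u, explanations[u]), u) != base_recs[u]
        for u in users
    )
    return changed / len(users)
```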
#### 4.2 SVD Experiments
To investigate the sensitivity of our method to the model size, we first conducted an ablation study using different latent dimensions for the SVD model on the MovieLens-1M dataset. The top row of Figure 2 shows the metric $M_{contra}$ on the y-axis for latent dimensions of 32, 64, and 128. Within each subfigure, we also plot explanation sizes of 1, 2, 3, and 5, along with comparisons to the baseline methods. The bottom row of Figure 2 shows the corresponding experiments for the counterfactual metric.
In Figure 2, our *Contra+* firstly demonstrates robustness to changes in latent dimensions and secondly outperforms the baselines by a significant margin (higher is better) on both evaluation metrics, across every latent dimension and explanation size. Following this positive result, we extend *Contra+* to two additional datasets: MovieLens-100k and Netflix.
Figure 2: Ablation study with 32 (left), 64 (middle), 128 (right) latent dimensions for the SVD. We compare Contrapositive (Ours) against Item similarity, Influence functions and Random explanation baselines on the $M_{contra}$ metric (Top) and counterfactual metric (Bottom). (The higher the better)
##### 4.2.1 Experiments on ML-100k and Netflix Datasets
We further examined the efficacy of *Contra+* explanations on the MovieLens-100k and Netflix datasets, two widely used datasets containing approximately 100k and 600k data points, respectively. Here again, we plot the metric $M_{contra}$ as well as the counterfactual metric across different explanation sizes. As depicted in Figure 3, our proposed method *Contra+* consistently shows statistically significant improvements over the baselines across both datasets and all explanation sizes. Only on MovieLens-100k, at explanation size 5, does the item similarity baseline appear comparable.
Figure 3: Comparison of Contrapositive (Ours) against the Random, Influence functions and Item similarity baselines on the ML-100k (left) and Netflix (right) datasets. As on the ML-1M dataset, the plots clearly show that our proposed method outperforms the baseline methods on both the proposed contrapositive metric (Top) and the counterfactual metric (Bottom). (The higher the better.)
#### 4.3 MLP Experiments
Now that we have established that our methodology works with SVD models, we also perform experiments using MLP models to demonstrate the versatility of our proposed method. In this case, we conducted validation over latent dimensions of [32, 64, 128] and learning rates of [0.01, 0.001, 0.0001] for 3-layer neural networks, selecting the best model for each dataset based on a held-out validation set. Further details can be found in the Appendix.
Figure 4 compares our proposed method to several baselines across the three datasets, using the same metrics as in the previous experiments. Here again, our proposed method performs on par with or significantly better than the baselines. For ML-100k, *Contra+* keeps up with influence functions for small explanation sizes but seems to be worse at larger sizes. In all the remaining experiments, especially on the Netflix dataset, our Contra+ is on par with, if not better than, influence functions on both metrics. Interestingly, Contra+ also performs very well on the counterfactual metric, which can be explained through Figure 1: both the contrapositive and counterfactual metrics make use of the quantity in the top-left corner of Figure 1 in their computation, hence the clear correlation between the two metrics.
In addition, we would like to emphasize that influence functions were included in our experiments only for the sake of completeness and transparency. As mentioned in Section 2.3, influence functions are computationally expensive due to Hessian computations and are thus not directly comparable to the objective of our paper, which focuses on computationally efficient methods. Nevertheless, we show that even in this case, Contra+ performs on par with or even better than influence functions, further demonstrating its merits.
### 5 Conclusion, Limitations and Future Work
In this paper, we introduce Contra+, a novel way to compute explanations for recommender systems through the lens of contrapositive logic. The key insight is that the statements $B \rightarrow A$ and $\neg A \rightarrow \neg B$ are equivalent, where statement $A$ is “user interacted with item $j$” and statement $B$ is “user was recommended item $i$”. Through extensive examples as well as empirical experiments, we have shown that our proposed method Contra+ is able to outperform conventional methods on several datasets from an $A \rightarrow B$ logic point of view. We have also shown that our proposed method is computationally much more efficient than methods such as influence functions, which require access to the Hessian matrix of a differentiable model. Lastly, we believe that this new way of considering explanations might open up new avenues of research in the field of explainable AI and recommender systems. By using contrapositive logic to compute explanations, we can provide more intuitive explanations while also improving the efficiency of the computation.
There are, however, still limitations to our approach. Firstly, given that we are unable to exactly recover the true data distribution of what caused the model to learn a lower score $s(u, i)$, we are only approximating the negation of the “did not interact with item $j$” statement. Even though we have shown how effective our approach is through extensive empirical evidence, more computationally heavy methods for properly selecting the historical items based on the perturbed user embeddings might improve the results. However, we stress that we are primarily interested in computationally efficient methods in this paper and hence leave this interesting avenue for future research. Secondly, we acknowledge that while for factor models such as SVD the computational complexity of an explanation is of the order of a recommendation, the same cannot necessarily be said for MLP models. In the neural model case, we require a few gradient steps, which can lead to higher computational costs. This is still significantly cheaper than methods such as influence functions, which are completely unusable for large neural models. Lastly, extending our approach beyond recommender systems, e.g., to classification or regression tasks, would be interesting; however, this is outside the scope of this paper and left for future work.
REFERENCES
Charu C Aggarwal et al. *Recommender systems*, volume 1. Springer, 2016.
Samyadeep Basu, Philip Pope, and Soheil Feizi. Influence functions in deep learning are fragile. *arXiv preprint arXiv:2006.14651*, 2020.
Joeran Beel, Bela Gipp, Stefan Langer, and Corinna Breitinger. Paper recommender systems: a literature survey. *International Journal on Digital Libraries*, 17:305–338, 2016.
James Bennett, Stan Lanning, et al. The netflix prize. In *Proceedings of KDD cup and workshop*, volume 2007, pp. 35. New York, 2007.
Jesús Bobadilla, Fernando Ortega, Antonio Hernando, and Abraham Gutiérrez. Recommender systems survey. *Knowledge-based systems*, 46:109–132, 2013.
Dheeraj Bokde, Sheetal Girase, and Debajyoti Mukhopadhyay. Matrix factorization model in collaborative filtering algorithms: A survey. *Procedia Computer Science*, 49:136–146, 2015.
Debashis Das, Laxman Sahoo, and Sujoy Datta. A survey on recommendation system. *International Journal of Computer Applications*, 160(7), 2017.
Azin Ghazimatin, Oana Balalau, Rishiraj Saha Roy, and Gerhard Weikum. Prince: Provider-side interpretability with counterfactual explanations in recommender systems. In *Proceedings of the 13th International Conference on Web Search and Data Mining*, pp. 196–204, 2020.
Xin Guan, Chang-Tsun Li, and Yu Guan. Matrix factorization with rating completion: An enhanced SVD model for collaborative filtering recommender systems. *IEEE Access*, 5:27668–27678, 2017.
F Maxwell Harper and Joseph A Konstan. The MovieLens datasets: History and context. *ACM Transactions on Interactive Intelligent Systems (TiiS)*, 5(4):1–19, 2015.
Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In *Proceedings of the 26th international conference on world wide web*, pp. 173–182, 2017.
Nicolas Hug. Surprise: A python library for recommender systems. *Journal of Open Source Software*, 5(52):2174, 2020. doi: 10.21105/joss.02174. URL https://doi.org/10.21105/joss.02174.
Dietmar Jannach, Pearl Pu, Francesco Ricci, and Markus Zanker. Recommender systems: Trends and frontiers, 2022.
Vassilis Kaffes, Dimitris Sacharidis, and Giorgos Giannopoulos. Model-agnostic counterfactual explanations of recommendations. In *Proceedings of the 29th ACM conference on user modeling, adaptation and personalization*, pp. 280–285, 2021.
Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In *International conference on machine learning*, pp. 1885–1894. PMLR, 2017.
Jie Lu, Dianshuang Wu, Mingsong Mao, Wei Wang, and Guangquan Zhang. Recommender system application developments: a survey. *Decision support systems*, 74:12–32, 2015.
Linyuan Lü, Matúš Medo, Chi Ho Yeung, Yi-Cheng Zhang, Zi-Ke Zhang, and Tao Zhou. Recommender systems. *Physics Reports*, 519(1):1–49, 2012.
Michael J Pazzani and Daniel Billsus. Content-based recommendation systems. *The adaptive web: methods and strategies of web personalization*, pp. 325–341, 2007.
Juntao Tan, Shuyuan Xu, Yingqiang Ge, Yunqi Li, Xu Chen, and Yongfeng Zhang. Counterfactual explainable recommendation. In *Proceedings of the 30th ACM International Conference on Information & Knowledge Management*, pp. 1784–1793, 2021.
Khanh Hiep Tran, Azin Ghazimatin, and Rishiraj Saha Roy. Counterfactual explanations for neural recommenders. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 1627–1631, 2021.
|
o4AydSd3Lp
|
In section 3.3.3, the authors mentioned that “It is alternatively possible that the discrete world model performs better simply because the VQ-VAE learns different information that is more conducive to world modeling.” What other different information do the authors think VQ-VAE is learning?
|
HARNESSING DISCRETE REPRESENTATIONS FOR CONTINUAL REINFORCEMENT LEARNING
Anonymous authors
Paper under double-blind review
ABSTRACT
Reinforcement learning (RL) agents make decisions using nothing but observations from the environment, and consequently, heavily rely on the representations of those observations. Though some recent breakthroughs have used vector-based categorical representations of observations, often referred to as discrete representations, there is little work explicitly assessing the significance of such a choice. In this work, we provide a thorough empirical investigation of the advantages of representing observations as vectors of categorical values within the context of reinforcement learning. We perform evaluations on world-model learning, model-free RL, and ultimately continual RL problems, where the benefits best align with the needs of the problem setting. We find that, when compared to traditional continuous representations, world models learned over discrete representations accurately model more of the world with less capacity, and that agents trained with discrete representations learn better policies with less data. In the context of continual RL, these benefits translate into faster adapting agents. Additionally, our analysis suggests that the observed performance improvements can be attributed to the information contained within the latent vectors and potentially the encoding of the discrete representation itself.
1 INTRODUCTION
This work is motivated by the quest to design autonomous agents that can learn to achieve goals in their environments solely from their stream of experience. The field of reinforcement learning (RL) models this problem as an agent that takes actions based on observations of the environment in order to maximize a scalar reward. Given that observations are the agent’s sole input when choosing an action (unless one counts the history of reward-influenced policy updates), the representation of observations plays an indisputably important role in RL.
The importance of observations becomes even more apparent when viewing proposed models of autonomous agents like the Common Model, identified by Sutton (2022), or JEPA, proposed by LeCun (2022). Nearly all of the components of these models, like the policy, value function, and world model, intake representations that originate from observations. Changes to observations are the most wide-reaching in the sense that they affect every part of the agent. Perhaps for this reason, both the Common Model and JEPA share a “perception” module that transforms observations into alternative representations before they are used by other components of the agent.
In this work, we examine the understudied yet highly effective technique of representing observations as vectors of categorical values, referred to in the literature as discrete representations (Van den Oord et al., 2017; Hafner et al., 2021; Friede et al., 2023) — a method that stands in stark contrast to the conventional deep learning paradigm that operates on learning continuous representations. Despite the numerous uses of learned, discrete representations (Robine et al., 2021; Hafner et al., 2023; Micheli et al., 2023), the mechanisms by which they improve performance are not well understood. To our knowledge, the only direct comparison to continuous representations in RL comes from a single result from Hafner et al. (2021) in a subfigure in their paper. In this work, we dive deeper into the subject and investigate the effects of discrete representations in RL.
The successes of discrete representations in RL date back to at least as early as tile coding methods, which map observations to multiple one-hot vectors via a hand-engineered representation function.
Tile coding was popular prior to the proliferation of deep neural networks as a way to construct representations that generalize well, and has more recently been adopted to reduce interference between hidden units in neural networks (Ghiassian et al., 2020). Continuous alternatives exist — notably, radial basis functions (RBFs) could be viewed as a generalization of tile coding that produces values in the interval $[0, 1]$. Despite the superior representational capacity of RBFs, however, they have tended to underperform in complex environments with many input dimensions (An et al., 1991; Lane et al., 1992).
A similar comparison can be seen between the work of Mnih et al. (2015) and Liang et al. (2016). Mnih et al. train a deep neural network (DNN) to play Atari games, relying on the neural network to learn its own useful representation, or features, from pixels. In contrast, Liang et al. construct a function for producing binary feature vectors that represent the presence of various patterns of pixels, invariant to position and translation. From this representation, a linear function approximator is able to perform as well as a DNN trained from pixels.
Recent approaches to producing discrete representations have moved away from hand-engineering representations, and towards learning representations. Van den Oord et al. (2017), for example, propose the vector quantized variational autoencoder (VQ-VAE), a self-supervised method for learning discrete representations. VQ-VAEs perform comparably to their continuous counterparts, variational autoencoders (Kingma & Welling, 2014), and do so while representing observations at a fraction of the size. When applied to DeepMind Lab (Beattie et al., 2016), VQ-VAEs are able to learn representations that capture the salient features of observations, like the placement and structure of walls, with as little as 27 bits.
Similar representation learning techniques have also been successfully applied in the domain of RL. Hafner et al. (2021) train an agent on Atari games (Bellemare et al., 2013; Machado et al., 2018), testing both discrete and continuous representations. They find that agents learning from discrete representations achieve a higher average reward, and carry the technique over to a follow-up work (Hafner et al., 2023), where they find success in a wider variety of domains, including the Proprio Control Suite (Tassa et al., 2018), Crafter (Hafner, 2022), and Minecraft (Guss et al., 2019). Works like those from Robine et al. (2021) and Micheli et al. (2023) further build on these successes, using discrete representations to learn world models and policies. Work from Wang et al. (2022) finds that representations that are more successful in transfer learning are often sparse and orthogonal, suggesting that these properties may underpin such successes of discrete representations.
The goal of this work is to better understand how discrete representations help RL agents. We use vanilla autoencoders (Ballard, 1987) to learn dense, continuous representations, FTA autoencoders (Pan et al., 2021) to learn sparse, continuous representations, and VQ-VAEs to learn fully discrete, binary representations. Inspired by the success of the Dreamer architecture (Hafner et al., 2021; 2023), we first examine how these different representations help in two distinct parts of a model-based agent: world-model learning and (model-free) policy learning. Observing that discrete and sparse representations specifically help when an agent’s resources are limited with respect to the environment, we turn to the continual RL setting, where an agent must continually adapt in response to its constrained resources (Kumar et al., 2023). We particularly emphasize the benefits of discrete and sparse representations in continual RL, as the largest and most complex environments are impossible to model perfectly and require continual adaptation to achieve the best performance possible (Sutton et al., 2007; 2022).
The primary contributions of our work include:
- Elucidating multiple ways in which discrete representations have likely played a key role in successful works in model-based RL.
- Demonstrating that the successes of discrete representations are likely attributable to the choice of one-hot encoding rather than the “discreteness” of the representations themselves.
- Identifying and demonstrating that discrete and sparse representations can help continual RL agents adapt faster.
2 BACKGROUND
This work primarily focuses on how to train agents that learn to achieve some goal by interacting with the environment. This problem is formulated as learning to select actions from states $S_t \in \mathcal{S}$
that best maximize a given reward signal, $R_{t+1} \in \mathbb{R}$. We are specifically concerned with how to learn the parameters, $\theta$, of a policy, $\pi_\theta(A_t|S_t)$, that maps from states to a distribution over actions. The goal is to maximize the discounted return from the current state, which is given by
$$G_t = \sum_{k=0}^{T-t-1} \gamma^k R_{t+k+1},$$
where $T$ is the terminal time step, and $\gamma \in [0, 1]$ is the discount factor.
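As a small illustration (ours, not from the paper), the discounted return can be computed from a list of future rewards with a backward recursion:

```python
def discounted_return(rewards, gamma):
    """G_t for rewards [R_{t+1}, ..., R_T], computed via G = R + gamma * G'."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# A goal reached on the third step with gamma = 0.9:
assert abs(discounted_return([0.0, 0.0, 1.0], 0.9) - 0.81) < 1e-12
```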
We use proximal policy optimization (PPO) (Schulman et al., 2017) to learn policies, which collects transitions through environment interaction, and then applies multiple epochs of stochastic gradient descent to weights that directly parameterize the policy. Training on the same data for multiple epochs results in a highly sample efficient algorithm. The sample efficiency of model-free RL algorithms like PPO can sometimes be further improved with the additional use of a world model (Sutton et al., 2008; Tanner et al., 2019; Atkeson & Santamaria, 1997; Jin et al., 2018). Dyna (Sutton, 1991) is one such example of a framework for model-based RL that improves sample efficiency by learning from data generated by the model in a step called planning. In our work, we split model-based RL into its two components—world-model learning and (model-free) policy learning—and examine both components separately for a fine-grained view of how our solutions affect complex RL agents.
Both policy and world model architectures are split into two components in our work: a representation network (or encoder) that extracts a representation, and a problem-specific network that learns a policy or world model atop the learned representations. This decoupling can be beneficial in multiple ways (Lan et al., 2022; Barreto et al., 2017; Bellemare et al., 2019; Dabney et al., 2021), but we use it primarily as a means to carefully investigate how different representations affect learning. It allows us to swap out the encoder (both architecture and objective), while keeping the problem-specific model unchanged (aside from the input layer, which may vary in size).
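A minimal PyTorch-style sketch of this decoupling is shown below; the class and method names are ours, and the detached encoder reflects the separation of objectives described later, not a literal reproduction of the paper's code.

```python
import torch
import torch.nn as nn

class DecoupledPolicy(nn.Module):
    """Swappable encoder + fixed problem-specific head."""

    def __init__(self, encoder: nn.Module, latent_dim: int, num_actions: int):
        super().__init__()
        self.encoder = encoder  # trained with its own autoencoder objective
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # policy gradients never reach the encoder
            z = self.encoder(obs)
        return self.head(z)    # action logits
```

Only the input layer of the head changes with `latent_dim` when the encoder is swapped, matching the text.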
With the exception of an end-to-end baseline, each of the encoders we use are trained with an observation reconstruction objective as part of a larger autoencoder model (Ballard, 1987). The autoencoder architecture compresses an observation into a bottleneck state before attempting to reconstruct it, forcing it to learn a representation that captures salient aspects of the observation. Each of the three types of learned representations we use in our work are produced by different autoencoder variants. We also evaluate the standard approach of end-to-end learning, where the representations are learned as a byproduct of the optimization process.
Dense, continuous representations are produced by a vanilla autoencoder. Sparse, continuous representations also use a vanilla autoencoder, but the bottleneck layer outputs are passed through a Fuzzy Tiling Activation (FTA) (Pan et al., 2021). FTA is a function that produces sparse outputs by converting scalars to “fuzzy” one-hot vectors. The FTA representations act as a bridge between dense, continuous representations and discrete representations, and they are an established baseline known to yield strong results in RL (Miahi, 2022; Wang et al., 2022). Discrete representations are produced by a vector quantized variational autoencoder (VQ-VAE) (van den Oord et al., 2017), which quantizes the multiple outputs of the encoder to produce a vector of discrete values, also referred to as the codebook. The discrete representations we refer to in our work are composed of multiple one-hot vectors, each representing a single discrete value from the codebook. The details of these autoencoders are explained in more depth in Section A.1.
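The following sketch illustrates the quantization step of a VQ-VAE and the multi-one-hot view of its codebook indices; it is a simplified illustration under our own naming, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def vq_quantize(z_e: torch.Tensor, codebook: torch.Tensor):
    """z_e: (batch, n_latents, dim) encoder outputs; codebook: (K, dim).
    Returns quantized latents, codebook indices, and a multi-one-hot view."""
    # Distance from each latent to each of the K codebook embeddings.
    dists = torch.cdist(z_e, codebook.unsqueeze(0).expand(z_e.size(0), -1, -1))
    indices = dists.argmin(dim=-1)                   # (batch, n_latents)
    z_q = codebook[indices]                          # element-wise continuous
    one_hot = F.one_hot(indices, codebook.size(0))   # element-wise binary
    # Straight-through estimator so reconstruction gradients reach the encoder.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, indices, one_hot
```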
3 WORLD-MODEL LEARNING WITH DISCRETE REPRESENTATIONS
We begin our experiments by examining the benefits of using discrete representations when learning a sample-based world model.
3.1 ENVIRONMENTS
Throughout this work, we use the empty, crossing, and door key Minigrid environments (Chevalier-Boisvert et al., 2023), as displayed in Figure 1. In each environment, the agent receives pixel observations and controls a red arrow that navigates through the map with left, right, and forward actions. The agent in the door key environment additionally has access to pickup and use actions to pick up the key and open the door. The crossing and door key environments are stochastic, with
---
1 We also tested variational autoencoders (Kingma & Welling, 2014) in early model learning experiments, but were unable to find hyperparameters to make the method competitive. Future work may be able to improve upon this baseline with other variations like $\beta$-VAEs or VAEs with Gaussian mixture model priors.
each action having a 10% chance to enact a random, different action. The stochasticity increases the difficulty of learning a world model by increasing the effective number of transitions possible in the environments. The increase in difficulty widens the performance gap between different methods, which makes the results easier to interpret.
The environments are episodic, terminating when the agent reaches the green square, or when the episode reaches a maximum length. The former yields a reward $R_t \in [0.1, 1]$ depending on the length of the episode (shorter episodes yield higher rewards), and the latter yields no reward. The reward is calculated with the standard Minigrid formula, $1 - 0.9\,\frac{t}{T}$, where $t$ is the current step and $T$ is the maximum episode length (dependent on the experiment). Though the environment is partially observable because the agent does not observe the current time step, this detail should not stop the agent from learning an optimal policy. Further environment details are displayed in Table 3 in the Appendix.
3.2 Learning World Models
We train autoencoders and world models on a static dataset, $\mathcal{D}$, of one million transition tuples, $(s, a, s')$, collected with random walks. In each episode, the environment terminates when the agent reaches the green square or after 10,000 steps. Training occurs in two phases: first the autoencoder is trained, and then a transition model is trained over the fixed representations.
Observations are 3-dimensional RGB arrays, so we use convolutional and deconvolutional neural networks (LeCun et al., 1989) for the encoder and decoder architectures. The encoder architecture is similar to the IMPALA network (Espeholt et al., 2018), but the size of the bottleneck layer is chosen with a hyperparameter sweep. Architectural details are given in Section A.3. All of the autoencoders are trained with a mean square error reconstruction loss, and the VQ-VAE with additional loss terms as detailed in Section A.4. Training for both autoencoders and world models use the Adam optimizer (Kingma & Ba, 2015) with hyperparameter values of $\beta_1 = 0.9$, $\beta_2 = 0.999$, and a step size of $2 \times 10^{-4}$. Training continues for a fixed number of epochs, until near-convergence, at which point the model weights are frozen and world model learning begins.
World models learned over latent representations take a latent state, $z$, and an action, $a$, as input to predict the next latent state, $\hat{z}' = w_\psi(z, a)$, with an MLP, $w_\psi$. World models learned over continuous representations, or continuous world models, consist of three layers of 64 hidden units (32 in the crossing environment), with rectified linear units (ReLUs) (Agarap, 2018) as activations. In discrete world models, the MLP is preceded by an embedding layer that converts discrete values into continuous, 64-dimensional vectors. The loss for both world models is given by the difference between the predicted next latent state and the ground-truth next latent state. The continuous world model outputs a continuous vector and uses the squared error loss. The discrete model outputs multiple vectors of categorical logits and uses a categorical cross-entropy loss over each.
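A minimal sketch of the discrete world model and its loss is given below. The 64-dimensional embeddings and 64 hidden units follow the text; the number of latents, the action encoding, and all names are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteWorldModel(nn.Module):
    def __init__(self, num_codes=64, n_latents=16, num_actions=3, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_codes, 64)  # index -> 64-dim vector
        self.mlp = nn.Sequential(
            nn.Linear(n_latents * 64 + num_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_latents * num_codes),
        )
        self.n_latents, self.num_codes = n_latents, num_codes

    def forward(self, z_indices, action_one_hot):
        """z_indices: (batch, n_latents) codebook indices; returns logits of
        shape (batch, n_latents, num_codes) for the next latent state."""
        x = torch.cat([self.embed(z_indices).flatten(1), action_one_hot], dim=-1)
        return self.mlp(x).view(-1, self.n_latents, self.num_codes)

def discrete_model_loss(logits, next_z_indices):
    # One categorical cross-entropy per latent; cross_entropy expects (N, C, ...).
    return F.cross_entropy(logits.transpose(1, 2), next_z_indices)
```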
All world models are trained with 4 steps of hallucinated replay as described by Talvitie (2017), which entails feeding outputs of the model back in as new inputs. Figures 10 and 11 in the Appendix depict this training process for continuous and discrete world models.
Our aim is to train sample models — models that emulate the environment by producing outcomes with frequency equivalent to that of the real environment. This is more difficult in stochastic environments, where our current training procedure would result in expectation models, whose predictions are weighted averages over possible outcomes. To instead learn sample models, we augment our models using the method proposed by Antonoglou et al. (2022). This approach learns a distribution over potential outcomes and samples from it when using the world model. We provide a more detailed explanation and relevant hyperparameters in Section A.2.
---
1 We also experimented with a squared error loss for the discrete world model and found it made little difference in the final world model accuracy.
Figure 2: The KL divergence between the ground-truth state distribution and the world model induced state distribution. Lower values are better, indicating a closer imitation of the real environment dynamics. The VQ-VAE and Vanilla AE learn near-perfect models in the empty environment, so the curves are so close to zero that they are not visible without magnification. FTA AE and End-to-End experiments were not run in the empty environment because of the triviality. Curves depict averages over 20 runs with 95% confidence intervals.
3.3 Experiments
The goal of this first set of experiments is to measure how the representation of the latent space affects the ability to learn an accurate world model. Unfortunately, this is not as simple as comparing a predicted latent state to the ground-truth latent state, as multiple outcomes may be possible for any given state-action pair. To account for this, we look at distributions over many transitions instead of the outcomes of single transitions. Specifically, we measure the differences between the state distributions induced by a chosen behavior policy in the real environment and the same policy in an environment simulated by the learned transition model. Accurate world models should produce state distributions similar to those of the real environment, and inaccurate models should produce state distributions that differ. Figure 12 in the Appendix contains a visualization that helps build an intuition of how state distributions may differ, which we will discuss in more detail later.
The ability of world models to simulate trajectories outside of their training data is one of their major benefits, so to reflect this use case, we chose behavior policies that differ from the data collection policy. We use a random policy for the empty environment, a policy that explores the right half of the grid in the crossing environment, and a policy that navigates directly to the goal in the door key environment. Each of the policies is used to simulate 10,000 episodes in the real environment, and 10,000 episodes where the transition dynamics are simulated entirely by the learned world model. Episodes are cut off early, or frozen at the terminal state, to reach exactly 30 steps of interaction. We then compare the difference between state distributions at each step by measuring the KL divergence between the induced and ground-truth state distributions. A lower KL divergence is better, indicating that a model predicts outcomes more similar to the real environment.
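Concretely, the per-step comparison can be implemented as below; the smoothing constant is our addition to keep the KL finite for states unvisited in one of the two collections.

```python
import numpy as np

def stepwise_kl(real_states, model_states, num_states, eps=1e-9):
    """KL(P_real || P_model) at each step. Both inputs are integer arrays of
    shape (num_episodes, num_steps) holding discrete state ids."""
    kls = []
    for t in range(real_states.shape[1]):
        p = np.bincount(real_states[:, t], minlength=num_states) + eps
        q = np.bincount(model_states[:, t], minlength=num_states) + eps
        p, q = p / p.sum(), q / q.sum()
        kls.append(float(np.sum(p * np.log(p / q))))
    return np.array(kls)
```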
We include two baselines in our comparisons that do not include auxiliary autoencoder objectives: the uniform baseline and the end-to-end baseline. The uniform baseline predicts a uniform distribution over all states and is strong when the agent’s target policy leads it to spread out, like in a random walk. The end-to-end baseline shares an architecture equivalent to the vanilla autoencoder, but the full model is trained end-to-end with a next-observation reconstruction loss, and the size of the latent state is re-tuned in a separate hyperparameter sweep. This is the standard setup in deep RL.
3.3.1 Model Rollouts
We roll out the trained world models for 30 steps and evaluate their accuracy, plotting the results in Figure 2. Although all of the methods perform the same in the empty environment, the gap in accuracy widens as the complexity progressively increases in the crossing, and then in the door key environment.
We examine visualizations of trajectories to better understand the patterns observed in Figure 2, showing two visualizations that most clearly represent these patterns in Figures 12 and 13 in the Appendix. The trajectories predicted by the continuous models (Vanilla AE and FTA AE) in the crossing environment rarely make it across the gap in the wall, which manifests as a steady increase in the KL divergence starting around step 14. The performance of the continuous model in the door key environment suffers much earlier, as the model struggles to predict the agent picking up the key, and again as the model struggles to predict the agent passing through the door. Notably, these two actions occur infrequently in the training data because the training data is generated with random walks, and because they can only happen once per episode even when they do occur. Stated concisely, the discrete world model more accurately predicts transitions that occur less frequently in the training data.

Figure 3: The KL divergence between the ground-truth and world model induced state distributions, averaged over 30 steps. Lower is better, indicating a closer imitation of the real environment dynamics. The x-axis gives the number of hidden units per layer for all three layers of the world model. Each point depicts the median over 20 runs with 95% confidence intervals. Error bars are high for the end-to-end method, likely due to a few divergent runs. Training the end-to-end model is harder because gradients for multiple objectives must be passed back in time through multiple steps.
3.3.2 SCALING THE WORLD MODEL
Despite sweeping over the latent vector dimensions of the vanilla and FTA autoencoders in the hyperparameter sweep, we were unable to find an encoder architecture that enabled either of the continuous world models to adequately learn transitions underrepresented in the training data. Either the discrete representations allow learning something that is not learnable with the continuous representations, or the fixed size of the world model is limiting the continuous model’s performance. We test the latter hypothesis by varying the size of the world model while tuning the latent dimensions of each autoencoder as described in Section A.3. We plot the average performance of each world model in Figure 3.
In the plot, an interesting pattern emerges: the performance of all methods becomes indistinguishable beyond a certain size of the world model. Only when the environment dynamics cannot be modeled near-perfectly, due to the limited capacity of the world model, do the discrete representations prove beneficial. As the size of the world model shrinks, the performance of the continuous models degrades more rapidly. This observation aligns with the findings in the previous section, where the performance gap between models widened with the complexity of the environment. Both results converge to the same conclusion: the VQ-VAE discrete representations enable learning more of the world with less capacity, relative to the size of the environment. This gap is especially notable when the world is much larger than what the agent has the capacity to model. In this setting, discrete representations are arguably favorable because they allow an agent to learn more despite its limited capacity.
3.3.3 REPRESENTATION MATTERS
Our experiments demonstrate the potential advantage of using VQ-VAE latents, but latent spaces are defined both by the information they represent—informational content—and by the way that information is structured—representation. Our goal in the previous experiments was to measure how representation alone affects performance, but we do not directly control for information content—i.e. the different bottleneck structures of a vanilla AE and a VQ-VAE may change what is learned. Our next experiment controls for this factor as we ask the question: do the benefits of discrete world models stem from the representation or from the informational content of the latent states?
To answer this question, we rerun the model learning experiment with two types of latents, both produced by the same VQ-VAE but represented in different ways. Generally, the outputs of a VQ-VAE encoder are quantized by “snapping” each latent to the nearest of a finite set of embedding vectors. The resulting quantized latents are discrete in the sense that each can take only a finite number of distinct values, but are element-wise continuous. In our work, we alternatively represent latents as (one-hot encoded) indices of the nearest embedding vectors, which are element-wise binary. Both of these methods encode the same informational content and can produce latents of the same shape, but have different representations. If the representation of the latent space does not matter, then we would expect models learned over both representations to perform similarly.

Figure 4: The KL divergence between the ground truth state distribution and the world model induced state distribution. Lower values are better, indicating a closer imitation of the real environment dynamics. Both methods use the same VQ-VAE architecture, but represent the information in different ways. Curves depict averages over 20 runs with 95% confidence intervals.
We prepare the experiment by constructing architecturally equivalent world models with quantized and multi-one-hot representations. The number and dimensionality of the embedding vectors are set to 64 so that both representations take the same shape. The quantized model is trained with the squared error loss, but otherwise both models follow the same training procedure.
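To make the two views concrete, here is a small sketch (with 64 codes of dimension 64 as in the text; the 16 latents are our assumption) showing that both representations can take the same shape while differing in nature:

```python
import torch
import torch.nn.functional as F

codebook = torch.randn(64, 64)           # K = 64 embedding vectors of dim 64
indices = torch.randint(0, 64, (16,))    # nearest-code indices for 16 latents

quantized = codebook[indices]                    # element-wise continuous
multi_one_hot = F.one_hot(indices, 64).float()   # element-wise binary

# Same shape, same informational content, different representation.
assert quantized.shape == multi_one_hot.shape == (16, 64)
```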
We plot the accuracy of both models in Figure 4, where multi-one-hot representations vastly outperform quantized representations despite both being discrete and semantically equivalent. These results support the claim that the representation, rather than the informational content, is responsible for the superior performance of the VQ-VAE latents in our experiments. They also suggest that the superior performance of discrete representations is not necessarily attributable to their "discreteness", but rather to their sparse, binary nature. These findings reveal that the implicit choice of representing discrete values as multi-one-hot vectors is essential to the success of discrete representations, yet to our knowledge, such a choice is not discussed in any prior work.
4 MODEL-FREE RL WITH DISCRETE REPRESENTATIONS
As we progress to the full reinforcement learning problem, we face new challenges, like that of learning from non-stationary distributions. Our first experiments of this section aim to understand the effects of using discrete representations in the standard, episodic RL setting. After identifying a clear benefit, we progress to the continual RL setting with continually changing environments (Abbas et al., 2023) as a proxy for environments that are too big for the agent to perfectly model.
We train all RL agents in this section with the clipping version of proximal policy optimization (PPO) (Schulman et al., 2017). Instead of observations, the policy and value functions intake learned representations. Separate networks are used for the policy and value functions, but both share the same architecture, an MLP with two hidden layers of 256 units and ReLU activations. We sweep over select hyperparameters for PPO and over autoencoder hyperparameters as described in Section 2.
Training alternates between collecting data, training the actor-critic model, and training the autoencoder, as detailed in Algorithm 2 in the Appendix. This setup differs from previous experiments in that environment interaction and the training of each component happen in tandem instead of in separate phases. The objectives, however, remain separate; PPO gradients only affect the policy and value function weights, while autoencoder gradients only affect the encoder. Only the end-to-end baseline is an exception, in which the entire model is trained with PPO, as is often standard in deep RL.
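A schematic of this interleaved loop is sketched below; all callables are placeholders for the paper's Algorithm 2 (which we only know from the description above), and only the ordering and gradient separation are taken from the text.

```python
def interleaved_training(env, encoder, decoder, policy, value_fn,
                         collect_rollouts, ppo_update, autoencoder_update,
                         num_iterations):
    """Alternate data collection, PPO, and autoencoder training."""
    for _ in range(num_iterations):
        batch = collect_rollouts(env, encoder, policy)
        # PPO gradients touch only the policy and value function weights.
        ppo_update(policy, value_fn, batch)
        # The reconstruction loss touches only the encoder (and decoder).
        autoencoder_update(encoder, decoder, batch)
```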
Agents are trained in the crossing and door key environments shown in Figure 1. The maximum episode length is set to 400 in the crossing environment and 1,000 in the door key environment.
Figure 5: Performance of RL agents as measured by episode length with a 95% confidence interval over 30 runs. Lower is better. (a-b) Agents are trained with PPO and autoencoder objectives from the beginning. (c-d) The PPO objective is introduced only after the dotted line (with the exception of the end-to-end method).
4.1 Episodic RL
We train RL agents with each type of representation in the crossing and door key environments, plotting the results in Figures 5a and 5b. All of the methods with an explicit representation learning objective perform better than end-to-end RL. In a reversal of the previous model-learning results, the VQ-VAE now performs the worst of all the representation learning methods. Inspecting the autoencoder learning curves in Figure 15 in the Appendix, however, reveals an important detail: the autoencoders learn at different speeds. If the speed of the RL learning updates is our primary concern (whether it actually is will be discussed later), then the learning speed of the autoencoder is a confounding factor. We address this by delaying PPO updates until all autoencoders are trained to around the same loss, and plot the results in Figures 5c and 5d. Though the gap in performance in the new results looks small, the VQ-VAE and FTA autoencoder methods converge with around two to three times fewer PPO updates than the vanilla autoencoder.
4.2 Continual RL
While static Minigrid environments can test these representation learning methods to an extent, they do not reflect the vastness of the real world. When the size of the world and the complexity of its problems dwarf those of the agent, the agent will lose its ability to perfectly model the world and learn perfect solutions (Sutton et al., 2022). The agent must instead continually adapt in response to its limited capacity if it is to best achieve its goal(s) in this continual RL setting (Kumar et al., 2023). Given the ability of these representation learning methods to expedite policy learning, they may be well suited for the continual RL setting, where fast adaptation is key.
To test this hypothesis, we modify the previous experimental RL setup by randomizing the layout of the crossing environment every 40,000 steps, and the layout of the door key environment every 100,000 steps, as is similarly done in related work (Taylor & Stone, 2009; Khetarpal et al., 2022; Abbas et al., 2023). All of the same items and walls remain, but their positions are randomized, with only the positions of the goal and outer walls staying constant. Example layouts are shown in Figure 14 in the Appendix. By only changing the environment after a long delay, we create specific points in the learning process where we can observe how the different types of representation methods adapt to change. The RL training process otherwise stays the same, and is specified in Algorithm 2 in the Appendix. With only this modification to the environments, we rerun the previous RL experiment with a delayed PPO start, and plot the results in Figures 6a and 6b.
We observe a spike in the episode length each time the environment changes, indicating that the agents’ previous policies are no longer sufficient to solve the new environments. While the representation learning methods clearly outperform end-to-end training, the confidence intervals overlap at many time steps. If we instead, however, consider the average reward accumulated by each method per layout as displayed in Table 4 in the Appendix, a clear ranking emerges. In the crossing environment we see VQ-VAE > FTA AE > Vanilla AE, and in the door key environment we see VQ-VAE > FTA AE ≈ Vanilla AE.
While the slower initial learning speed of the VQ-VAE hinders its ability to maximize reward at the beginning of the training process (when PPO updates are not delayed), it does not seem to hinder its
ability to adapt after an initial representation has already been learned. Inspecting the reconstruction losses of both autoencoders, plotted in Figures 6c and 6d, shows that the VQ-VAE’s reconstruction loss increases much less when the environment changes. The shorter spikes suggest that the VQ-VAE representations generalize better, allowing them to adapt faster when the environment changes.
With these results, we return to the prior question: can multi-one-hot representations be beneficial in RL even if the initial representation is learned slower? We argue in the affirmative. If we consider continually learning RL agents in the big world setting, where the goal of the agent is to maximize reward over its lifetime by quickly adapting to unpredictable scenarios, then the cost of learning an initial representation is easily amortized by a lifetime of faster adaptation.
5 CONCLUSION & FUTURE WORK
In this work, we explored the effects of learning from discrete and sparse representations in three modules that are commonly found in models of intelligent agents: a world model, a value function, and a policy. When learning a world model, discrete, multi-one-hot representations enabled accurately modeling more of the world with fewer resources. In the model-free RL setting, agents with multi-one-hot or sparse representations learned to navigate to the goal and adapt to changes in the environment faster.
Our study underscores the advantages of multi-one-hot representations in RL but leaves several questions of deeper understanding and extrapolation to future work. We show that one-hot encoding is crucial to the success of discrete representations, but do not disentangle multi-one-hot representations from purely binary or sparse representations in our experiments. While the results of the FTA autoencoder and prior work (Wang et al., 2022) suggest that sparsity and orthogonality are major factors in the success of multi-one-hot representations, the evidence is not conclusive. Future work could also experiment with different methods of producing discrete representations or apply these methods to a wider variety of environments, beyond the inherently discrete domain of Minigrid. Prior work on DreamerV3 (Hafner et al., 2023) and the success of VQ-VAEs in the domain of computer vision (van den Oord et al., 2017; Nash et al., 2021; Esser et al., 2021; Hong et al., 2022) already suggest that this method will extrapolate and scale to larger environments.
Regardless of these open questions, our results implicate multi-one-hot representations learned by VQ-VAEs as a promising candidate for the representation of observations in continual RL agents. If we care about agents working in worlds much larger than themselves, we must accept that they will be incapable of perfectly modeling the world. The agent will see the world as forever changing due to its limited capacity, which is the case in complex environments like the real world (Sutton et al., 2022; Kumar et al., 2023). If we wish to address this issue in the representation learning space, agents must learn representations that enable quick adaptation, and are themselves quick to adapt (Sutton et al., 2007). Multi-one-hot representations learned by VQ-VAEs do exactly that, and provide a path towards ever more efficient, continually learning RL agents.
REFERENCES
Zaheer Abbas, Rosie Zhao, Joseph Modayil, Adam White, and Marlos C. Machado. Loss of plasticity in continual deep reinforcement learning. In Conference on Lifelong Learning Agents (CoLLAs), 2023.
Abien Fred Agarap. Deep learning using rectified linear units (ReLU). CoRR, abs/1803.08375, 2018.
P. C. Edgar An, W. Thomas Miller III, and P. C. Parks. Design improvements in associative memories for cerebellar model articulation controllers (CMAC). Artificial Neural Networks, 47:1207–1210, 1991.
Ioannis Antonoglou, Julian Schrittwieser, Sherjil Ozair, Thomas K. Hubert, and David Silver. Planning in stochastic environments with a learned model. In International Conference on Learning Representations (ICLR), 2022.
Christopher G. Atkeson and Juan Carlos Santamaría. A comparison of direct and model-based reinforcement learning. In International Conference on Robotics and Automation (ICRA), 1997.
Dana H. Ballard. Modular learning in neural networks. In Association for the Advancement of Artificial Intelligence (AAAI), 1987.
André Barreto, Will Dabney, Rémi Munos, Jonathan J. Hunt, Tom Schaul, David Silver, and Hado van Hasselt. Successor features for transfer in reinforcement learning. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. Deepmind lab. CoRR, abs/1612.03801, 2016.
Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR), 47:253–279, 2013.
Marc G. Bellemare, Will Dabney, Robert Dadashi, Adrien Ali Taïga, Pablo Samuel Castro, Nicolas Le Roux, Dale Schuurmans, Tor Lattimore, and Clare Lyle. A geometric perspective on optimal representations for reinforcement learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432, 2013.
Maxime Chevalier-Boisvert, Bolun Dai, Mark Towers, Rodrigo de Lazcano, Lucas Willems, Salem Lahlou, Suman Pal, Pablo Samuel Castro, and Jordan Terry. Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks. CoRR, abs/2306.13831, 2023.
Will Dabney, André Barreto, Mark Rowland, Robert Dadashi, John Quan, Marc G. Bellemare, and David Silver. The value-improvement path: Towards better representations for reinforcement learning. In Association for the Advancement of Artificial Intelligence (AAAI), 2021.
Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures. In International Conference on Machine Learning (ICML), 2018.
Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
David Friede, Christian Reimers, Heiner Stuckenschmidt, and Mathias Niepert. Learning disentangled discrete representations. In Machine Learning and Knowledge Discovery in Databases, 2023.
|
Q00CO1Tm6M
|
It would be a natural question whether there is a connection between these two. Such a connection is important because it will provide a more unified understanding of partially observable RL with POSI. In this paper, there lacks an investigation (or at least a discussion) on this connection (either positive or negative).
|
THEORETICAL HARDNESS AND TRACTABILITY OF POMDPs IN RL WITH PARTIAL ONLINE STATE INFORMATION
Anonymous authors
Paper under double-blind review
ABSTRACT
Partially observable Markov decision processes (POMDPs) have been widely applied to capture many real-world applications. However, existing theoretical results have shown that learning in general POMDPs could be intractable, where the main challenge lies in the lack of latent state information. A key fundamental question here is how much online state information (OSI) is sufficient to achieve tractability. In this paper, we establish a lower bound that reveals a surprising hardness result: unless we have full OSI, we need an exponentially scaling sample complexity to obtain an $\epsilon$-optimal policy solution for POMDPs. Nonetheless, inspired by the key insights in our lower bound design, we find that there exist important tractable classes of POMDPs even with only partial OSI. In particular, for two novel classes of POMDPs with partial OSI, we provide new algorithms that are proved to be near-optimal by establishing new regret upper and lower bounds.
1 INTRODUCTION
Partially observable Markov decision processes (POMDPs) model reinforcement learning (RL) systems, where an agent interacts with the environment sequentially without observing the latent state. In these systems, the agent only has access to a noisy observation randomly generated by the latent state via an emission probability distribution. The goal of the agent is to achieve a large expected cumulative reward. POMDPs generalize the classic (fully observable) MDPs, and have been applied to capture many real-world applications. For example, an AI-trained robot often receives only noisy observations of the environment from its sensors due to sensory noise [Akkaya et al., 2019]; autonomous cars typically do not have a global view of traffic conditions due to their limited reception [Levinson et al., 2011]. Similar scenarios can occur in games [Berner et al., 2019], healthcare [Hauskrecht & Fraser, 2000], recommendation systems [Li et al., 2010], economic systems [Zheng et al., 2020], and so forth.
Existing information-theoretical results have shown that learning in general POMDPs is intractable and PSPACE-complete [Papadimitriou & Tsitsiklis, 1987; Mundhenk et al., 2000; Vlassis et al., 2012; Krishnamurthy et al., 2016]. This is in contrast to classic MDPs, where many efficient algorithms have been developed, e.g., Azar et al. [2017]; Jin et al. [2018]; Agarwal et al. [2019]; Jin et al. [2020]; Ayoub et al. [2020]; Xie et al. [2020]; Foster et al. [2021]; Jin et al. [2022]; Bai et al. [2019]; Cai et al. [2020], among others. The challenge of POMDPs mainly lies in the lack of latent state information, such that the Markov property that simplifies classic MDPs does not hold any more.
Despite the intractability of general POMDPs, recent studies have identified some tractable classes of POMDPs, for which efficient algorithms with polynomial dependency (on the number of actions $A$, number of states $S$, and episode length $H$) can be developed, e.g., $m$-step decodable POMDPs [Efroni et al., 2022], reactive POMDPs [Jiang et al., 2017], POMDPs with block MDPs [Zhang et al., 2022] or latent MDPs [Kwon et al., 2021], and POMDPs with reachability [Xiong et al., 2022] or observability [Golowich et al., 2022]. Due to page limits, we relegate further discussion of related work to Appendix A. One prominent tractable class is identified based on weakly revealing conditions [Liu et al., 2022; 2023] or predictive state representations [Chen et al., 2022a; Zhong et al., 2022]. However, these conditions may not hold in practical cases, e.g., resource allocation [Sinclair et al., 2023; Lee et al., 2023] and robotics [Pinto et al., 2018; Lee et al., 2023].
Moreover, the regret obtained there can be arbitrarily large if the emission probability differences of different underlying states are small.
To circumvent the dependency and strong assumptions on the emission probability measure, recent work has exploited hindsight state information (Sinclair et al., 2023; Lee et al., 2023), where full state information is revealed only at the end of each episode. This line of work is motivated by the fact that, although the precise information about the true underlying state is not available before the agent takes an action, some information may become available in hindsight. However, these studies have assumed full hindsight state information. Thus, a natural question one may ask is: what would happen if the state information was not fully revealed at the end of the episode? In fact, this can happen often in practice. For example, in classic wireless channel scheduling formulated by POMDPs (Zhao et al., 2007; Chen et al., 2008; Ouyang et al., 2015), only the feedback about the scheduled or sensed channels will be available to the users; in autonomous driving (Levinson et al., 2011; Pinto et al., 2018; Jennings & Figliozzi, 2019), only the condition of the located or probed path will be known to the car. Further, it can be trivially shown (based on the existing lower bounds in Krishnamurthy et al., 2016; Liu et al., 2022) that such a situation becomes intractable.
This thus motivates us to investigate the value of partial (i.e., not full) state information inside (i.e., not at the end of) the episode. We call this partial “Online State Information” (OSI). In order to model such partial OSI more concretely, we provide a novel formulation. Specifically, we consider vector-structured states (Jin et al., 2020; Agarwal et al., 2019; Ayoub et al., 2020), which are motivated by the aforementioned practical examples. In other words, the state is given by a $d$-dimensional vector with each element representing an abstract feature, such as the feedback about a wireless channel (Zhao et al., 2007) or the condition of a path in autonomous driving (Jennings & Figliozzi, 2019). Partial OSI means that at each step of an episode, a subset of $\tilde{d}$ ($1 \leq \tilde{d} < d$) elements in the state-vector will be revealed to the agent after her query. Note that such a model allows the agent to actively query partial OSI for different elements at different times. This prevents the trivial case where one state-element can never be queried throughout the process (in which case the problem reduces to a POMDP with that specific unknown state-element as the hidden state).
Therefore, the key fundamental open questions are:
**With such partial OSI, can POMDPs be tractable/learnable? If not, are there any specific classes of POMDPs that can be tractable under partial OSI?**
**Our Contributions:** In this paper, we study the important problem of POMDPs with partial OSI and provide in-depth answers to the above key open questions.
First, we establish a lower bound in Theorem 1 that reveals a surprising hardness result: unless we have full OSI, we need a sample complexity of $\Omega(A^H/\epsilon^2)$, i.e., exponential in the episode length, to find an $\epsilon$-optimal policy for POMDPs, where $A$ and $H$ are the number of actions and episode length, respectively. This result indicates a sharp gap between POMDPs with partial OSI and those with full OSI or full hindsight state information (Lee et al., 2023). This may seem somewhat counter-intuitive, because by combining multiple partial OSI from different steps, one may construct full information of a state, and thus enjoy similar performance as that with full OSI. In fact, in Sec. 3, we design a hard instance with special state representations and transitions, under which partial OSI at each step and even a combination of partial OSI from different steps are not sufficient to achieve an $\epsilon$-optimal solution with polynomial complexity.
Nonetheless, inspired by the key insights in our design of the hard instance for establishing the lower bound, we identify two intriguing tractable classes of POMDPs with only partial OSI.
Second, inspired by our state-transition design for the lower bound, in Sec. 4, we identify a novel tractable class of POMDPs with partial OSI, where the transitions of the sub-states (i.e., elements) in the state-vector are independent of each other. This class is motivated by many practical examples ranging from wireless scheduling (Zhao et al., 2007; Chen et al., 2008; Ouyang et al., 2015) to Martian rock-sampling (Silver & Veness, 2010) and autonomous driving (Pinto et al., 2018; Jennings & Figliozzi, 2019). We provide two new near-optimal algorithms for this class. The regrets of both algorithms achieve a polynomial dependency on all parameters (please see Theorem 2 and Theorem 6). In addition, the regret of our second algorithm for the case with $\tilde{d} > 1$ shows that the regret can be further reduced as $\tilde{d}$ increases. To achieve such results, our algorithm design includes important novel ideas to determine (i) which partial OSI is more informative, and (ii) the
action policy that relies on the queried partial OSI at each step. These also require new technical developments in the regret analysis (see Appendix E and Appendix F).
Third, inspired by our state-representation design for the lower bound, in Sec. 5 we identify another novel tractable class of POMDPs with partial OSI, where additional noisy observations are available for the sub-states in the state-vector that are not actively queried. We provide a new algorithm with a near-optimal regret in Theorem 3. Our regret analysis involves a non-trivial generalization of the observable operator method (Jaeger, 2000; Liu et al., 2022) to handle the case with partial OSI of different sub-states that are actively queried by the agent. In addition, we provide a new regret lower bound in Theorem 4 that demonstrates the near-optimality of the regret that we achieve.
2 PROBLEM FORMULATION
In this section, we first introduce the general episodic partially observable Markov decision process (POMDP) for clarity, which is intractable in the worst case. Then, we introduce the POMDP setting with partial online state information (OSI) that we study in this paper.
2.1 THE GENERAL EPISODIC POMDP
Episodic POMDPs are usually modelled by a tuple \( M = (S, A, O, H, \Delta_1, P, \Omega, r) \) (Liu et al., 2022; Chen et al., 2022a, b; Cai et al., 2022), where \( S, A \) and \( O \) denote the state space with \( S \) states, the action space with \( A \) actions and the observation space with \( O \) observations, respectively; \( H \) denotes the number of steps in an episode; \( \Delta_1 : S \rightarrow [0, 1] \) denotes a probability measure supported on the state space \( S \) and determines the randomness of the initial state at the beginning of an episode; \( P = \{P_h : S \times S \times A \rightarrow [0, 1]\}_{h=1}^{H-1} \) and \( \Omega = \{\Omega_h : O \times S \rightarrow [0, 1]\}_{h=1}^{H} \) denote the unknown transition and emission probability measures, respectively; and \( r = \{r_h : O \times A \rightarrow [0, 1]\}_{h=1}^{H} \) denotes the known reward function. Specifically, an online agent interacts with the environment in \( K \) episodes. At each step \( h = 1, \ldots, H \) of an episode, the agent receives a noisy observation \( o_h^k \) that is generated according to the emission probability \( \Omega_h(\cdot | s_h^k) \), where \( s_h^k \) is the unknown true latent state. Next, the agent takes an action \( a_h^k \) and receives the reward \( r_h(o_h^k, a_h^k) \). Then, the environment transits to the next state \( s_{h+1}^k \), which is drawn according to the transition probability \( P_h(\cdot | s_h^k, a_h^k) \).
The goal of the agent is to find a near-optimal policy that achieves an expected cumulative reward close to that of the optimal policy. Please see Fig. 1a for a sketch of one step. Due to the lack of latent state information, the observation is non-Markovian and the policy needs to maintain memory.
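To make this protocol concrete, the following minimal sketch simulates one episode of a tabular POMDP. The per-step array layout and the history-based policy interface are our own illustrative assumptions, not notation from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_episode(S, A, O, H, Delta1, P, Omega, r, policy):
    """Simulate one episode of a tabular POMDP.

    Delta1:   (S,) initial-state distribution.
    P[h]:     (S, A, S) transition probabilities at step h.
    Omega[h]: (S, O) emission probabilities at step h.
    r[h]:     (O, A) known reward function at step h.
    policy:   maps (step, observation/action history) to an action index.
    """
    s = rng.choice(S, p=Delta1)                  # s_1 ~ Delta_1
    history, total = [], 0.0
    for h in range(H):
        o = rng.choice(O, p=Omega[h][s])         # o_h ~ Omega_h(. | s_h)
        a = policy(h, history + [o])             # non-Markovian: needs the full history
        total += r[h][o, a]                      # known reward r_h(o_h, a_h)
        history += [o, a]
        if h < H - 1:
            s = rng.choice(S, p=P[h][s, a])      # s_{h+1} ~ P_h(. | s_h, a_h)
    return total
```

Note how the policy must consume the whole history: since the latent state is never revealed, no fixed-size sufficient statistic is available in general.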
2.2 THE EPISODIC POMDP WITH PARTIAL OSI
As discussed in Sec. 1, we make the first effort to investigate the impact of partial OSI on POMDPs in this paper. We provide a formulation for studying POMDPs with partial OSI. Specifically, we consider the vector-structured states (Jin et al., 2020; Ayoub et al., 2020; Agarwal et al., 2019). Each state \( s \) is represented by a \( d \)-dimensional feature vector \( \phi(s) = [\phi_1(s), ..., \phi_d(s)]^T \in \mathbb{S}^d \), where \( \mathbb{S} \) is the universal set of the values for each element/sub-state in \( \phi(s) \), and \( [\cdot]^T \) denotes the transpose of a vector. We use \( |\mathbb{S}| \) to denote the cardinality of the set \( \mathbb{S} \). Then, at each step \( h = 1, \ldots, H \) of an episode \( k = 1, \ldots, K \), the agent interacts with the environment as follows (please see Fig. 1b for a sketch of one step of the POMDP with partial OSI):
(Step-i) The agent actively queries a subset of \( \tilde{d} \) (where \( 1 \leq \tilde{d} < d \)) sub-states (let \( \hat{i}^k_h \) denote the indices of these queried sub-states); (Step-ii) the partial OSI, i.e., the precise information of the queried sub-states \( \{\phi_i(s^k_h)\}_{i \in \hat{i}^k_h} \), is revealed to the agent; (Step-iii) the agent takes an action \( a^k_h \) and receives the reward \( r_h(\phi_{\hat{i}^k_h}(s^k_h), a^k_h) \), where the reward \( r_h : \hat{S} \times A \rightarrow [0, 1] \) is a function of the partial OSI and \( \hat{S} \triangleq \{\phi_{\hat{i}}(s) : |\hat{i}| = \tilde{d}, s \in S\} \) is the sub-state space for any union of \( \tilde{d} \) sub-states; (Step-iv) the environment transits to the next state \( s^k_{h+1} \).
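The query-then-act protocol above is summarized by the following sketch of one episode; the callable interfaces (`phi`, `sample_next`, `reward`) are hypothetical placeholders introduced only for illustration:

```python
def run_partial_osi_episode(phi, sample_next, reward, s1, H, d_query,
                            query_policy, action_policy):
    """One episode under the partial-OSI protocol (Steps i-iv); a minimal sketch.

    phi(s):               length-d tuple of sub-states of state s.
    sample_next(h, s, a): draws s_{h+1} ~ P_h(. | s, a).
    reward(h, osi, a):    the known reward r_h applied to the queried sub-states.
    """
    s, feedback, total = s1, [], 0.0
    for h in range(H):
        idx = query_policy(h, feedback)              # Step-i: choose d_tilde indices
        assert len(idx) == d_query
        osi = tuple(phi(s)[i] for i in idx)          # Step-ii: partial OSI revealed
        a = action_policy(h, feedback, idx, osi)     # Step-iii: act on what was revealed
        total += reward(h, osi, a)
        feedback.append((idx, osi, a))
        s = sample_next(h, s, a)                     # Step-iv: latent transition
    return total
```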
This model is motivated by various practical scenarios, e.g., wireless scheduling (Chen et al., 2008; Ouyang et al., 2015), autonomous driving (Levinson et al., 2011; Pinto et al., 2018; Jennings & Figliozzi, 2019), robotics (Akkaya et al., 2019; Lee et al., 2023; Silver & Veness, 2010) and healthcare (Hauskrecht & Fraser, 2000). Below, we elaborate on two important motivating examples.
**Motivating example 1:** In an autonomous delivery system (Jennings & Figliozzi, 2019), in order to deliver the product to the destination, a robot explores multiple paths and chooses one path at each intersection. Here, each sub-state \( \phi_i(s) \) of \( s \) represents the condition, e.g., traffic intensity, of one path. At each step, the robot agent first actively queries and observes the condition of several paths, i.e., the partial OSI. However, due to delay requirements, unknown dynamics in the environment, and occlusion, the precise conditions of other paths may not be available to the robot. Then, she chooses one path to follow, i.e., the action that will incur a reward.
**Motivating example 2:** Consider a cognitive MAC (medium access control) system (Ouyang et al., 2015), where a secondary user, i.e., an agent, wishes to search for spectrum-access opportunities. Here, the state \( s \) characterizes the conditions of multiple channels available for an agent to use. Sub-state \( \phi_i(s) \) represents the condition, e.g., busy or idle, of the \( i \)-th channel. At each step, the agent first probes the conditions of a number of channels. After this query, the conditions of the sensed channels will be observed, i.e., the partial OSI. However, due to energy constraints and latency requirements, the agent cannot sense all the channels. Then, she transfers the packets using one channel, i.e., the action that will incur a reward.
### 2.3 Performance Metric
In POMDPs with partial OSI, at each step \( h \) of episode \( k \), the feedback revealed to the agent so far is \( \Phi^k_h = (\hat{i}^k_1, \phi_{\hat{i}^k_1}(s^k_1), a^k_1, ..., \hat{i}^k_{h-1}, \phi_{\hat{i}^k_{h-1}}(s^k_{h-1}), a^k_{h-1}) \). We use \( \tilde{\Phi}_h \) to denote the feedback space of \( \Phi^k_h \) before the partial OSI for step \( h \) is revealed, and use \( \bar{\Phi}_h = \tilde{\Phi}_h \cup \{\hat{i}_h, \phi_{\hat{i}_h}(s_h)\} \) to denote the feedback space after the partial OSI for step \( h \) has been revealed. Then, the query \( \hat{i}^k_h \) is made according to a **query policy** \( \pi^k_{q,h} : \tilde{\Phi}_h \rightarrow \Delta(\{\hat{i} : |\hat{i}| = \tilde{d}\}) \), which maps from \( \tilde{\Phi}_h \) to a probability measure supported on the query space \( \{\hat{i} : |\hat{i}| = \tilde{d}\} \). Next, after receiving the partial OSI \( \phi_{\hat{i}^k_h}(s^k_h) \), the action \( a^k_h \) is taken according to an **action policy** \( \pi^k_{a,h} : \bar{\Phi}_h \rightarrow \Delta(A) \), which maps from \( \bar{\Phi}_h \) to a probability measure supported on the action space \( A \). We use the \( V \)-value \( V^{\pi^k} \triangleq \mathbb{E}_{\pi^k_q, \pi^k_a, \Delta_1}\left[\sum_{h=1}^{H} r_h(\phi_{\hat{i}^k_h}(s^k_h), a^k_h)\right] \) to denote the expected total reward in episode \( k \) by following \( \pi^k_q = \{\pi^k_{q,h}\}_{h=1}^{H} \) and \( \pi^k_a = \{\pi^k_{a,h}\}_{h=1}^{H} \), where \( \pi^k = (\pi^k_q, \pi^k_a) \). We take the regret as the performance metric, which is the difference between the expected cumulative reward of the online joint policies \( \pi^{1..K} \) and that of the optimal policy, i.e.,
\[
\text{Reg}^{\pi^{1..K}}(K) \triangleq \sum_{k=1}^{K} \left[ V^* - V^{\pi^k} \right],
\]
where \( V^* \triangleq \sup_{\pi} V^{\pi} \) denotes the expected total reward of the optimal policy in an episode. The goal of the online agent is to find a policy that achieves a sub-linear regret with respect to \( K \). Hence, the main challenge and new difficulty here is how to design the query policy \( \pi^k_q \), such that an action policy \( \pi^k_a \) can also be intelligently developed to achieve a near-optimal regret.
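Since $V^{\pi^k}$ is an expectation, the regret of a run can only be estimated from data. A minimal Monte-Carlo sketch, assuming $V^*$ is known (e.g., in a synthetic benchmark), is:

```python
def empirical_regret(v_star, episode_returns):
    """Estimate Reg(K) = sum_k (V* - V^{pi^k}) from realized episode returns.

    Each realized return is an unbiased sample of V^{pi^k}, so this sum is an
    unbiased estimate of the regret defined above.
    """
    return sum(v_star - g for g in episode_returns)
```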
### 3 Perils of Not Having Full OSI: A New Lower Bound
In this section, we answer the long-standing open question of whether POMDPs with online state information are tractable without full OSI. In Theorem 1 below, we establish a lower bound that reveals a surprising hardness result: unless we have full OSI, we need an exponential sample complexity to find an \( \epsilon \)-optimal policy for POMDPs, where a policy \( \pi \) is \( \epsilon \)-optimal if \( V^{\pi} \geq V^* - \epsilon \).
---
1Recall that \( \phi_{\hat{i}^k_h}(s^k_h) \in \bar{\Phi}_h \). Thus, the action policy \( \pi^k_{a,h} \) relies on the output of the query policy \( \pi^k_{q,h} \).
Figure 2: A hard instance for developing the lower bound in POMDPs with only partial OSI. States \( s(1), s(2), s(3) \) and \( s(4) \) are represented by solid circles, dashed circles, solid squares and dashed squares, respectively. Ber(1/2) represents the Bernoulli distribution with mean 1/2.
**Theorem 1. (Intractability for not having full OSI)** For POMDPs with only partial online state information introduced in Sec. 2.2, there exist hard instances, such that with a probability \( p \geq 1/3 \), any algorithm needs at least \( \Omega(A^H/\epsilon^2) \) samples to find an \( \epsilon \)-optimal policy.
Theorem 1 demonstrates the hardness of POMDPs without full OSI: a polynomially scaling sample complexity \( \text{Poly}(A,H,S,K) \) is impossible. The result in Theorem 1 may seem counter-intuitive, because by combining multiple partial OSI collected from different steps, one may construct full observations and then enjoy similar performance as that with full OSI. Below, we design an important hard instance and provide our key proof ideas for Theorem 1, which show why this is not true.
**Remark 1.** The intractability result in Theorem 1 still holds even if, in addition to partial OSI, there exist noisy observations (please see our discussion in Sec. 5). This is because we can construct a hard instance directly based on the one that we construct in this section, while letting the emission probabilities of the additional noisy observations be exactly the same for all underlying states, such that the additional observations do not provide any useful statistical information.
### 3.1 Our Key Proof Ideas for Theorem 1
For simplicity, we focus on the simpler case with \( d = 2 \) and \( \tilde{d} = 1 \), which makes it easier to understand our key proof ideas. Please see Appendix C for the complete proof. The important parts in our proof are to design special state representations and transitions, such that partial OSI cannot help the learner to improve her statistical knowledge about the true underlying state. Towards this end, we construct a hard instance with four states, i.e., \( s(1), s(2), s(3) \) and \( s(4) \) (see Fig. 2).
**Idea I (Special state representations):** Our first key idea is to construct special state representations, such that by only observing \( \tilde{d} = 1 \) sub-state, it is still impossible for the learner to infer the true latent state. Specifically, we let \( \phi(s(1)) = [x_1, x_2]^T \), \( \phi(s(2)) = [x_3, x_4]^T \), \( \phi(s(3)) = [x_1, x_4]^T \) and \( \phi(s(4)) = [x_3, x_2]^T \), where \( x_1, ..., x_4 \) are sub-states (see Fig. 2).
We introduce the high-level reason for constructing the state representations in this way. Let us consider states \( s(1) \) and \( s(2) \) as a group of states, and we call it group \( a \). Similarly, we call states \( s(3) \) and \( s(4) \) group \( b \). Under our construction of the state representation, each state in group \( a \) (i.e., \( s(1) \) and \( s(2) \)) must share a sub-state with each state of group \( b \) (i.e., \( s(3) \) and \( s(4) \)). For example, the first sub-states of both state \( s(1) \) and state \( s(3) \) are \( x_1 \). This means that, by only querying \( \phi_1(s) = x_1 \), the learner cannot know whether she is in a state from group \( a \) or group \( b \). As another example, the second sub-states of both state \( s(1) \) and state \( s(4) \) are \( x_2 \). This means that, by only querying \( \phi_2(s) = x_2 \), the learner cannot know whether she is in a state from group \( a \) or group \( b \). As a result, if (i) there is only one specific action sequence that guarantees that the learner stays in group \( a \), and (ii) group \( a \) generates a larger reward, then intuitively the learner has to keep trying all exponentially many possible action sequences to figure this out with high probability.
However, as we mentioned before, another question still remains: would a combination of the partial OSI from different steps be enough? To answer this question, we construct special state transitions using our Idea II below. Together with the state representation that we constructed above, this state transition causes difficulty for the learner even when multiple partial OSI are combined.
**Idea II (Special state transitions):** Our second key idea is to construct special state transitions, such that even by combining the partial OSI from different steps, it is still impossible for the learner to infer the true latent state. Specifically, in each episode, the learner starts from state \( s_1 = s(1) \) (see Fig. 2). At step \( h = 1 \), (i) if action \( a(1) \) is chosen, the state will transition to \( s(1) \) and \( s(2) \) with the same probability (wsp); (ii) if action \( a(2) \) is chosen, the state will transition to \( s(3) \) and \( s(4) \) wsp. At step \( h = 2 \), (i) if action \( a(1) \) is chosen, both states \( s(1) \) and \( s(2) \) will transition to \( s(3) \) and \( s(4) \) wsp; (ii) if action \( a(2) \) is chosen, they will transition to \( s(1) \) and \( s(2) \) wsp. At step \( h = 3 \), (i) if action \( a(1) \) is chosen, states \( s(1) \) and \( s(2) \) will transition to \( s(1) \) and \( s(2) \) wsp; (ii) if action \( a(2) \) is chosen, they will transition to \( s(3) \) and \( s(4) \) wsp. For states \( s(3) \) and \( s(4) \) at step \( h = 2 \) and \( h = 3 \), no matter which action is chosen, the states will transition to \( s(3) \) and \( s(4) \) wsp.
Then, together with the state representation that we constructed, even when the partial OSI about the first and second sub-states from different steps are combined, such a construction for the state transition still prevents the learner from knowing which group of states she is in. For example, at step \( h = 1 \) of two episodes, the learner can keep taking action \( a(1) \) and query the first and second sub-states one-by-one. Then, the partial OSI at step \( h = 2 \) could be \( \phi_1(s^k_2) = x_1 \) (i.e., the first sub-state of \( s(1) \)) and \( \phi_2(s^{k+1}_2) = x_4 \) (i.e., the second sub-state of \( s(2) \)). However, note that the first and second sub-states of \( s(3) \) are also \( x_1 \) and \( x_4 \). Thus, such a combination of partial OSI (i.e., \( \phi_1(s^k_2) = x_1 \) and \( \phi_2(s^{k+1}_2) = x_4 \)) is not powerful enough for the learner to distinguish whether she is visiting \( s(1) \) and \( s(2) \) or she is simply visiting \( s(3) \). Similar issues occur at other steps.
**Idea III (Special reward functions):** Up to here, we can see that only with partial OSI, the learner cannot improve her statistical knowledge about the true underlying states. Thus, she can only rely on the statistical relation between the sequence of actions that is chosen and the reward that is received. Hence, to create difficulties, we let (i) the rewards \( r_h \) at steps \( h = 1, 2, 3 \) all be 0; (ii) if the final state is in group \( b \), i.e., \( s(3) \) or \( s(4) \), the reward at step \( h = 4 \) follow a Bernoulli distribution with mean \( \frac{1}{2} \); and (iii) if the final state is in group \( a \), i.e., \( s(1) \) or \( s(2) \), the reward at step \( h = 4 \) follow a Bernoulli distribution with a slightly higher mean equal to \( \frac{1}{2} + \epsilon \). In this way, the optimal policy will take the action sequence \((a(1), a(2), a(1))\) in all episodes, so that she can remain in group \( a \) and enjoy a larger expected total reward in every episode equal to \( \frac{1}{2} + \epsilon \). In contrast, the online learner has to try every possible sequence of actions to figure out which sequence provides a larger reward with high probability. Since there are \( A^H \) possible action sequences, according to Hoeffding’s inequality, we can show that the sample complexity for achieving an \( \epsilon \)-optimal policy is \( \Omega(A^H/\epsilon^2) \).
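For a constructive view, the following sketch encodes the hard instance for the case $d = 2$, $\tilde{d} = 1$ exactly as described by Ideas I, II, and III; the integer encoding of states and actions is an illustrative choice of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Idea I: sub-state vectors. No single coordinate separates group a = {1, 2}
# from group b = {3, 4}.
PHI = {1: ("x1", "x2"), 2: ("x3", "x4"), 3: ("x1", "x4"), 4: ("x3", "x2")}
GROUP_A, GROUP_B = (1, 2), (3, 4)

def step(h, s, a):
    """Idea II: transitions; 'wsp' pairs are drawn uniformly at random."""
    if s in GROUP_B:
        return int(rng.choice(GROUP_B))                      # group b is absorbing
    if h == 1:
        return int(rng.choice(GROUP_A if a == 1 else GROUP_B))
    if h == 2:
        return int(rng.choice(GROUP_B if a == 1 else GROUP_A))
    return int(rng.choice(GROUP_A if a == 1 else GROUP_B))   # h == 3

def final_reward(s, eps):
    """Idea III: Bernoulli(1/2 + eps) in group a, Bernoulli(1/2) in group b."""
    return int(rng.binomial(1, 0.5 + eps if s in GROUP_A else 0.5))
```

Starting from $s_1 = s(1)$, only the sequence $(a(1), a(2), a(1))$ keeps the learner in group $a$; the code makes it easy to verify that any single-coordinate query, and any combination of such queries across steps, is consistent with both groups.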
4 OPTIMALITY UNDER PARTIAL OSI AND INDEPENDENT SUB-STATES
While learning in the world of general POMDPs with partial OSI is intractable, inspired by the key insights in our lower-bound design, we identify two rich classes of POMDPs with partial OSI that are tractable, for which we provide new near-optimal algorithms. We leave other potential learnable classes as future work. The tractable class that we study in this section is as follows.
**Class 1. (POMDPs with partial OSI and independent sub-states)** At each step, (step-i) the agent actively selects sub-states \( \hat{i}^k_h \) to query, and receives the partial OSI \( \{\phi_i(s^k_h)\}_{i \in \hat{i}^k_h} \); (step-ii) the agent takes the action \( a^k_h \) and receives the reward \( r_h(\phi_{\hat{i}^k_h}(s^k_h), a^k_h) \); (step-iii) the next state \( s^k_{h+1} \) is drawn according to probability \( P_h(\cdot|s^k_h, a^k_h) = \prod_{i=1}^d P_{h,i}(\phi_i(\cdot)|\phi_i(s^k_h), a^k_h) \), where the product form indicates that the sub-states have independent transition kernels.
This class is motivated by many important practical applications. For example, in classic wireless channel scheduling (Zhao et al., 2007; Chen et al., 2008; Ouyang et al., 2015), the condition of each channel could change independently; and in Martian RockSampling (Silver & Veness, 2010) or autonomous driving (Pinto et al., 2018; Jennings & Figliozzi, 2019), the condition of each potential rock or path could also change independently. Notably, as we state in Proposition 1, without the partial OSI in step-i of Class 1, even learning under independent sub-states could still be intractable.
**Proposition 1. (Intractability if not having partial OSI)** There exist POMDPs with independent sub-states, such that learning an \( \epsilon \)-optimal policy necessarily requires \( \tilde{\Omega}(A^H/\epsilon^2) \) samples.
**Remark 2.** By replacing partial OSI with noisy observations under certain conditions, POMDPs with independent sub-states could be decoupled into parallel sub-POMDPs, which may be solved using existing methods. In contrast, the query of the agent for partial OSI in Class 1 couples the potential sub-POMDPs together, such that existing solutions do not apply or result in poor performance.
Algorithm 1 Optimistic-Pessimistic Two-Layer Learning (OP-TLL)
for \( k = 1 : K \) do
Step-1: update the weights \( w^k(i) \) and probabilities \( p^k(i) \) according to Eq. (2).
for \( h = 1 : H \) do
Step-2: choose a sub-state \( i_h^k \) according to probability \( p^k(i) \) and query partial OSI \( \phi_i(s_h^k) \).
Step-3: take an action \( a_h^k \) that maximizes the updated Q-value function in Eq. (3).
end for
end for
For Class 1, we develop two new near-optimal algorithms. Due to page limits, we focus on the simpler case with \( \tilde{d} = 1 \) in this section, and introduce our results for the more challenging case with \( \tilde{d} > 1 \) in Appendix F. Our new algorithm for \( \tilde{d} = 1 \) is called Optimistic-Pessimistic Two-Layer Learning (OP-TLL); please see Algorithm 1. At each step \( h \), the optimal policy queries a sub-state \( i \) according to a fixed distribution \( p \), and receives the partial OSI for this queried sub-state. Then, she takes an action according to \( \phi_i(s_h) \). We note that the new challenge here is: how to utilize partial OSI to avoid the intractability issue shown in Proposition 1 and achieve optimality? To address this question, our OP-TLL algorithm contains two critical learning layers that involve our two new ideas, and obtains a near-optimal regret.
**Idea-I (Update the query policy pessimistically):** The need for pessimism arises because the query policy updated in “Step-1” of Algorithm 1 affects the choice of action \( a_h^k \) in Step-3, whose \( V \)-value estimation would ideally require complete state information. As a result, the relation between the regret and the model misspecification error (Jin et al., 2020) indicates a linear-in-\( K \) regret if the estimation error due to the query is not sufficiently accounted for. Thus, although the state-transition and reward are stochastic, the query needs to be made sufficiently conservatively. Specifically, at the beginning of each episode \( k \), OP-TLL updates the query policy as follows,
\[
w^k(i) = w^{k-1}(i) \cdot e^{\frac{\eta_1}{d} \sum_{h=1}^{H} \hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1})}, \quad \text{and} \quad p^k(i) = \frac{(1-\eta_1)w^k(i)}{\sum_{i'=1}^{d} w^k(i')} + \frac{\eta_1}{d}, \tag{2}
\]
where the estimated reward \( \hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1}) = r_h(\phi_i(s_h^{k-1}), a_h^{k-1}) \) if \( i = i_h^{k-1} \), and \( \hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1}) = 0 \) otherwise. Note that this is a new variant of the importance sampling method, where the new development lies in estimating the reward by exploiting partial OSI. Moreover, \( \eta_1 \) is a key parameter that determines how pessimistic the algorithm is. For example, with a smaller \( \eta_1 \), the term \( e^{\frac{\eta_1}{d} \sum_{h=1}^{H} \hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1})} \) increases more slowly. As a result, the weight \( w^k(i) \) increases more slowly, and thus the algorithm behaves more pessimistically. In “Step-2”, OP-TLL chooses the query according to probability \( p^k(i) \), where the first term \( \frac{w^k(i)}{\sum_{i'=1}^{d} w^k(i')} \) captures the query importance of sub-state \( i \) among all sub-states.
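A minimal sketch of this query-policy update follows. It implements Eq. (2) literally as displayed; in particular, the reward estimate is not divided by the query probability, which classic importance-weighted (Exp3-style) variants would do:

```python
import numpy as np

def op_tll_query_update(w, step_rewards, queried, eta1):
    """OP-TLL query-policy update (Eq. 2); a minimal sketch.

    w:            (d,) weights over sub-states carried from episode k-1.
    step_rewards: (H,) rewards received in episode k-1.
    queried:      (H,) index of the sub-state queried at each step of episode k-1.
    """
    d = len(w)
    r_hat = np.zeros(d)
    for h, i in enumerate(queried):
        r_hat[i] += step_rewards[h]          # estimated reward: observed if queried, else 0
    w = w * np.exp(eta1 / d * r_hat)         # pessimism: small eta1 => slow weight growth
    p = (1 - eta1) * w / w.sum() + eta1 / d  # mix exploitation with uniform exploration
    return w, p
```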
**Idea-II (Update the action policy optimistically):** The intuition for this optimism is to minimize the bias in the reward estimates, which is critical because the query policy updated in Step-1 relies on the estimated reward. Specifically, in “Step-3”, OP-TLL takes an action that maximizes the \( Q \)-value function following the optimism-in-face-of-uncertainty principle (the new challenge here is how to design the bonus term to address the impact of partial OSI),
\[
Q_h^k(\phi_i(s), a) = \min\left\{ r_h(\phi_i(s), a) + [P_h^k V_{h+1}^k](\phi_i(s), a) + O\left(\sqrt{H^2/N_h^k(\phi_i(s), a)}\right),\ H \right\}, \tag{3}
\]
where \( P_h^k(\phi_i(s')|\phi_i(s), a) = \frac{N_h^k(\phi_i(s), a, \phi_i(s'))}{N_h^k(\phi_i(s), a)} \) is the estimated transition kernel, \( N_h^k(\phi_i(s), a) \) and \( N_h^k(\phi_i(s), a, \phi_i(s')) \) are the numbers of times \( (\phi_i(s), a) \) and \( (\phi_i(s), a, \phi_i(s')) \) have been visited at step \( h \) up to episode \( k \), respectively, and \( V_h^k(\phi_i(s)) = \max_a Q_h^k(\phi_i(s), a) \) is the estimated \( V \)-value.
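The following sketch implements this optimistic update for one step $h$ over a tabular (sub-state, action) space; the constant `c` stands in for the logarithmic factors hidden inside the $O(\cdot)$ bonus and is an assumption of ours:

```python
import numpy as np

def optimistic_q_update(r_h, N_sa, N_sas, V_next, H, c=1.0):
    """Optimistic Q-update of Eq. (3) for one step h; a minimal sketch.

    r_h:    dict (x, a) -> known reward, with x a queried sub-state value.
    N_sa:   dict (x, a) -> visit count N_h^k(x, a).
    N_sas:  dict (x, a, x') -> visit count N_h^k(x, a, x').
    V_next: dict x' -> estimated V-value at step h + 1.
    """
    Q = {}
    for (x, a), n in N_sa.items():
        # empirical transition kernel: P_hat(x' | x, a) = N(x, a, x') / N(x, a)
        pv = sum(N_sas.get((x, a, xp), 0) / n * v for xp, v in V_next.items())
        bonus = c * np.sqrt(H ** 2 / n)      # optimism-in-face-of-uncertainty bonus
        Q[(x, a)] = min(r_h[(x, a)] + pv + bonus, H)
    return Q
```

The V-value at step $h$ is then recovered as the maximum of `Q[(x, a)]` over actions for each sub-state value `x`, matching the definition above.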
**Theorem 2. (Regret)** For POMDPs with partial OSI (\( \tilde{d} = 1 \)) and independent sub-states, with probability \( 1 - \delta \) for any \( \delta \in (0, 1) \), the regret of our OP-TLL algorithm with parameter \( \eta_1 = O(\sqrt{\frac{d \ln d}{H^2 K}}) \) can be upper-bounded as follows,
\[
\text{Reg}_{\text{OP-TLL}}(K) \leq \tilde{O}\left(AH^3|\mathbb{S}|^2 d\sqrt{K}\left(\ln(AH^2|\mathbb{S}|K/\delta)\right)^2\right).
\]
Algorithm 2 Optimistic Maximum Likelihood Estimation with Partial OSI (OMLE-POSI)
**Initialization:** \( \Theta^0 = \{ \theta \in \Theta : \min_{h,\hat{i}} \sigma_{\bar{S}}(\tilde{\Theta}^{\hat{i}}_h) \geq \alpha \} \).
**for** \( k = 1 : K \) **do**
**Step-1:** estimate the models \( \hat{\theta} \triangleq (\hat{P}, \hat{\Theta}, \hat{\Delta}_1) \) (including the partial emission model) according to
\[
\Theta^k = \left\{ \hat{\theta} \in \Theta^0 : \sum_{\tau=1}^{k-1} \log P_{\hat{\theta}}^\tau(\Gamma^\tau) \geq \max_{(P', \Theta', \Delta_1')} \sum_{\tau=1}^{k-1} \log P_{(P', \Theta', \Delta_1')}^\tau(\Gamma^\tau) - \beta \right\}. \tag{5}
\]
**Step-2:** update the joint policy \( \pi^k \triangleq \arg\max_{\pi} \max_{\hat{\theta} \in \Theta^k} \mathbb{E}_{\pi_q, \pi_a, \hat{\theta}}[\sum_{h=1}^H r_h(\phi_{\hat{i}_h}(s_h), a_h)] \).
**for** \( h = 1 : H \) **do**
**Step-3:** query the partial OSI \( \{\phi_i(s^k_h)\}_{i \in \hat{i}^k_h} \) according to the query policy \( \pi^k_{q,h} \), collect the partial noisy observation \( \tilde{o}^k_h \), and then take an action \( a^k_h \) according to the action policy \( \pi^k_{a,h} \).
**end for**
**end for**
Theorem 2 shows that OP-TLL achieves a regret that (i) depends polynomially on all parameters \( A, H, |\mathbb{S}|, d \) and \( K \), and (ii) depends on \( \sqrt{K} \), which is tight. To the best of our knowledge, this is the first such near-optimal result for POMDPs with partial OSI. Similar to the algorithm design, the main difficulty in the proof is how to capture the mutual impact between the query and action policies. Due to page limits, please see Appendix E for details and Appendix F for the case when \( \tilde{d} > 1 \).
5 Optimality under Partial OSI and Partial Noisy Observations
In this section, we identify another tractable class (i.e., Class 2 below) of POMDPs with partial OSI, and provide a new near-optimal algorithm. Please see Fig. 1c for a sketch of one step in this class.
**Class 2. (POMDPs with partial OSI and partial noisy observations)** At each step, (step-i) the agent actively selects sub-states \( \hat{i}^k_h \) to query, and receives the partial OSI \( \{\phi_i(s^k_h)\}_{i \in \hat{i}^k_h} \); (step-ii) the agent receives the partial noisy observation \( \tilde{o}^k_h \) for the other \( d - \tilde{d} \) sub-states that are not queried, where \( \tilde{o}^k_h \) is generated according to the partial emission probability \( \tilde{\Theta}^{\hat{i}}_h \left( \cdot \mid \{\phi_i(s^k_h)\}_{i \notin \hat{i}^k_h} \right) \). The partial emission matrix \( \tilde{\Theta}^{\hat{i}}_h \in \mathbb{R}^{O \times |\mathbb{S}|^{d-\tilde{d}}} \) satisfies the partially revealing condition: there exists a constant \( \alpha > 0 \), such that \( \sigma_{\bar{S}}(\tilde{\Theta}^{\hat{i}}_h) \geq \alpha \) for any query \( \hat{i} \) and step \( h \), where \( \bar{S} = |\mathbb{S}|^{d-\tilde{d}} \) and \( \sigma_{\bar{S}}(\cdot) \) denotes the \( \bar{S} \)-th largest singular value of a matrix. Namely, \( \min_{h,\hat{i}} \sigma_{\bar{S}}(\tilde{\Theta}^{\hat{i}}_h) \geq \alpha \) holds; (step-iii) the agent takes an action \( a^k_h \) and receives the reward \( r_h(\phi_{\hat{i}^k_h}(s^k_h), a^k_h) \); (step-iv) the next state \( s^k_{h+1} \) is drawn according to the joint transition probability \( P_h(\cdot|s^k_h, a^k_h) \).
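The partially revealing condition can be checked numerically via a singular value decomposition; a minimal sketch, assuming the partial emission matrices are available as explicit arrays, is:

```python
import numpy as np

def partially_revealing(emissions, alpha):
    """Check the partially revealing condition of Class 2; a sketch.

    emissions: iterable of partial emission matrices, one per (step h, query i),
               each of shape (O, S_bar) with S_bar = |S| ** (d - d_tilde) columns.
    Returns True iff the S_bar-th largest singular value of every matrix is >= alpha.
    """
    for theta in emissions:
        s_bar = theta.shape[1]
        sv = np.linalg.svd(theta, compute_uv=False)   # singular values, descending
        if len(sv) < s_bar or sv[s_bar - 1] < alpha:  # fails whenever O < S_bar, too
            return False
    return True
```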
We note that in classic POMDPs (Chen et al., 2022a; Liu et al., 2022; 2023), the noisy observation is independent of the decisions of the agent. In contrast, in Class 2, at each step, the partial noisy observation \( \tilde{o}^k_h \) depends on the query \( \hat{i}^k_h \) of the agent. This new dependency results in new non-trivial challenges in both the algorithm design and the regret analysis. For clarity, we use \( \Gamma^k_h \triangleq \{\hat{i}^k_1, \phi_{\hat{i}^k_1}(s^k_1), \tilde{o}^k_1, a^k_1, ..., \hat{i}^k_{h-1}, \phi_{\hat{i}^k_{h-1}}(s^k_{h-1}), \tilde{o}^k_{h-1}, a^k_{h-1}\} \) to denote the feedback (including both the partial OSI \( \Phi^k_h \) and the partial noisy observations \( \tilde{o}^k_{1:h-1} \)) in this case.
**Remark 3.** The partially revealing condition in step-ii of Class 2 is milder than the weakly revealing condition in Liu et al. (2022), which requires \( \min_{h} \sigma_S(\Theta_h) \geq \alpha \), where \( S = |\mathbb{S}|^d \) is the total number of states and \( \Theta_h \) is the (full) emission matrix introduced in Sec. 2.1. This is because for an \( m \times n \) matrix \( A \) and an \( m \times (n-l) \) sub-matrix \( B \) of \( A \), we have \( \sigma_{i+l}(A) \leq \sigma_i(B) \) (Horn et al., 1994).
**Remark 4.** Without the partially revealing condition in step-ii of Class 2, POMDPs with partial OSI are still intractable in the worst case. This can be shown by letting the partial emission probability \( \tilde{\Theta}^{\hat{i}}_h \) of each query \( \hat{i} \) be the same for all possible sub-states \( \{\phi_i(s)\}_{i \notin \hat{i}} \); then we can show that learning an \( \epsilon \)-optimal policy in POMDPs with partial OSI still necessarily requires \( \Omega(A^H/\epsilon^2) \) samples.
For Class 2, we develop a new near-optimal algorithm (see Algorithm 2), called Optimistic Maximum Likelihood Estimation with Partial OSI (OMLE-POSI). Recall that the new challenges here are: (i) the partial noisy observation $\tilde{o}_h^k$ depends on the query $\hat{i}_h^k$ of the agent; (ii) the performance of the action policy $\pi_{a,h}$ depends on both the observation $\tilde{o}_h^k$ and the query $\hat{i}_h^k$. Our algorithm is inspired by the idea of OMLE, but extends it to elegantly address the non-trivial joint optimization of the query policy and the action policy. Specifically, OMLE-POSI (in Algorithm 2) differs from OMLE in two aspects.
First, in “Step-1”, OMLE-POSI only collects partial noisy observations $\tilde{o}_{1:H}$, which rely on the queries $\hat{i}_{1:H}$ determined in Step-2. Due to this new relation, in Eq. (5), we design a new bonus term $\beta = O\left( (|\mathbb{S}|^{2d} A + |\mathbb{S}|^{d-\tilde{d}} O) \ln(|\mathbb{S}|^{d} A O H K) \right)$, which depends on the size of the non-queried sub-state space $|\mathbb{S}|^{d-\tilde{d}}$, and OMLE-POSI only estimates the partial emission model $\tilde{\Theta}$. Second, note that in the joint optimization of “Step-2”, the action policy $\pi_a$ is inherently a function of the query policy $\pi_q$, since the action $a_h^k$ taken according to $\pi_{a,h}$ relies on the observation $\tilde{o}_h^k$, which further depends on the query $\hat{i}_h^k$ made according to $\pi_{q,h}$. Due to page limits, please see Appendix G for more details.
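For intuition, the confidence-set step of Eq. (5) can be sketched for a finite candidate model class as follows; the `log_prob` interface is a hypothetical placeholder for the episode likelihood $P^\tau_{\hat{\theta}}(\Gamma^\tau)$, and the candidates are assumed to already satisfy the initialization check defining $\Theta^0$:

```python
import numpy as np

def omle_posi_confidence_set(models, trajectories, beta):
    """Confidence-set step of Eq. (5) over a finite model class; a sketch.

    models:       candidate models theta = (P, partial emission, Delta_1), each
                  exposing log_prob(gamma), the log-likelihood of one episode's
                  feedback Gamma (queries, partial OSI, observations, actions).
    trajectories: the feedback Gamma^tau collected in episodes 1, ..., k-1.
    """
    ll = np.array([sum(m.log_prob(g) for g in trajectories) for m in models])
    return [m for m, l in zip(models, ll) if l >= ll.max() - beta]
```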
**Theorem 3. (Regret)** For POMDPs with partial OSI and the partially revealing condition, with probability $1 - \delta$, when $|\mathbb{S}| > (d/\tilde{d})^2$, the regret of OMLE-POSI can be upper-bounded as follows,
$$\text{Reg}_{\text{OMLE-POSI}}(K) \leq \tilde{O} \left( |\mathbb{S}|^{2d-\tilde{d}} O A H^4 \sqrt{K(|\mathbb{S}|^{2d} A + |\mathbb{S}|^{(d-\tilde{d})/2} O)/\alpha^2} \right).$$
(6)
Theorem 3 above shows that (i) the regret of OMLE-POSI depends on $\sqrt{K}$, which is tight; (ii) the regret depends polynomially on $A$ and $H$; and (iii) the regret further decreases exponentially as $\tilde{d}$ increases. To the best of our knowledge, this is the first such near-optimal result for POMDPs with partial OSI. Recall that partial OSI affects both the MLE and the policy optimization. Thus, the main difficulty in the proof of Theorem 3 is how to capture these new effects. Indeed, directly applying the existing observable operator method (OOM) (Jaeger, 2000; Liu et al., 2022) would result in a regret that does not decrease with $\tilde{d}$. Please see Appendix G for our new analytical ideas and the proof.
**Theorem 4. (Lower bound)** For POMDPs with the partial online state information and partially revealing condition, the regret of any algorithm $\pi$ can be lower-bounded as follows,
$$\text{Reg}^\pi(K) \geq \tilde{\Omega} \left( \sqrt{AH} \cdot |\mathbb{S}|^{d/2} \cdot \sqrt{K} \right).$$
(7)
Theorem 4 indicates that the dependency on $|\mathbb{S}|^{d/2}$ in the regret of OMLE-POSI is necessary. Our key proof idea in Appendix H is to construct a new special state transition, such that even with partial OSI, all combinations of sub-states $\phi_i(s)$ must be explored to achieve a sub-linear regret. We conjecture that a stronger lower bound depending on the query capability would be $\tilde{\Omega} \left( \sqrt{AH} \cdot |\mathbb{S}|^{(d-\tilde{d})/2} \cdot \sqrt{K}/\alpha \right)$, and leave this as a future open question.
### 6 DISCUSSION AND CONCLUSION
It is worthwhile to draw a connection between our POMDP setting and the standard POMDP and general decision-making problems. First, our POMDP setting can be placed under the general decision-making setting (Foster et al., 2021; Chen et al., 2022b; Foster et al., 2023). However, directly instantiating their results to our Classes 1 and 2 would result in worse regret upper bounds than our results here, which exploit our special problem structure, such as the dependency of the action policy $\pi_a$ on the query policy $\pi_q$, to develop more refined bounds. Second, our POMDP setting cannot be placed under the standard POMDP setting (Liu et al., 2022; Chen et al., 2022a), mainly due to the special sequential structure of the query, observation, action, and reward in our process. More detailed discussion is provided in Appendix B.
To conclude, this paper answers a fundamental open question: how much online state information (OSI) is sufficient to achieve tractability in POMDPs? Specifically, we establish a lower bound that reveals a surprising hardness result: unless we have full OSI, we need an exponentially scaling sample complexity to obtain an $\epsilon$-optimal policy for POMDPs. Nonetheless, we identify two novel tractable classes of POMDPs with only partial OSI, which are important in practice. For these two classes, we provide three new RL algorithms, which are shown to be near-optimal by establishing new regret upper and lower bounds. There are several interesting directions for future work. For example, it would be interesting to study the value of partial OSI in more general POMDPs, e.g., with continuous state spaces (Cai et al., 2022; Liu et al., 2023). Second, the regret upper and lower bounds that we achieved could be further tightened, e.g., by improving the dependency on $d$ and $O$ using ideas from Chen et al. (2023).
REFERENCES
Alekh Agarwal, Nan Jiang, Sham M Kakade, and Wen Sun. Reinforcement learning: Theory and algorithms. CS Dept., UW Seattle, Seattle, WA, USA, Tech. Rep, pp. 10–4, 2019.
Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving Rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.
Karl Johan Åström. Optimal control of Markov processes with incomplete state information. Journal of Mathematical Analysis and Applications, 10(1):174–205, 1965.
Alex Ayoub, Zeyu Jia, Csaba Szepesvari, Mengdi Wang, and Lin Yang. Model-based reinforcement learning with value-targeted regression. In International Conference on Machine Learning, pp. 463–474. PMLR, 2020.
Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning, pp. 263–272. PMLR, 2017.
Yu Bai, Tengyang Xie, Nan Jiang, and Yu-Xiang Wang. Provably efficient Q-learning with low switching cost. Advances in Neural Information Processing Systems, 32, 2019.
Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
Qi Cai, Zhuoran Yang, Chi Jin, and Zhaoran Wang. Provably efficient exploration in policy optimization. In International Conference on Machine Learning, pp. 1283–1294. PMLR, 2020.
Qi Cai, Zhuoran Yang, and Zhaoran Wang. Reinforcement learning from partial observation: Linear function approximation with provable sample efficiency. In International Conference on Machine Learning, pp. 2485–2522. PMLR, 2022.
Fan Chen, Yu Bai, and Song Mei. Partially observable RL with B-stability: Unified structural condition and sharp sample-efficient algorithms. In The Eleventh International Conference on Learning Representations, 2022a.
Fan Chen, Song Mei, and Yu Bai. Unified algorithms for RL with decision-estimation coefficients: No-regret, PAC, and reward-free learning. arXiv preprint arXiv:2209.11745, 2022b.
Fan Chen, Huan Wang, Caiming Xiong, Song Mei, and Yu Bai. Lower bounds for learning in revealing POMDPs. arXiv preprint arXiv:2302.01333, 2023.
Yunxia Chen, Qing Zhao, and Ananthram Swami. Joint design and separation principle for opportunistic spectrum access in the presence of sensing errors. IEEE Transactions on Information Theory, 54(5):2053–2071, 2008.
Yonathan Efroni, Chi Jin, Akshay Krishnamurthy, and Sobhan Miryoosefi. Provable reinforcement learning with a short-term memory. In International Conference on Machine Learning, pp. 5832–5850. PMLR, 2022.
Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.
Dylan J Foster, Noah Golowich, and Yanjun Han. Tight guarantees for interactive decision making with the decision-estimation coefficient. arXiv preprint arXiv:2301.08215, 2023.
Noah Golowich, Ankur Moitra, and Dhruv Rohatgi. Planning in observable POMDPs in quasipolynomial time. arXiv preprint arXiv:2201.04735, 2022.
Milos Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33–94, 2000.
Milos Hauskrecht and Hamish Fraser. Planning treatment of ischemic heart disease with partially observable Markov decision processes. Artificial Intelligence in Medicine, 18(3):221–244, 2000.
|
K1VLZ5rNuZ
|
Since a tree and a dish are obviously incorrect (concept confidences are near 100% but the concepts do not exist in the image), wouldn’t it be more straightforward to make interventions over these concepts?
|
MC²: Multimodal Concept-based Continual Learning
Anonymous authors
Paper under double-blind review
Abstract
The inability of deep neural networks to learn continually while retaining interpretability limits their deployment in critical settings. Existing research has made strides in either interpretability or continual learning, but the synergy of these two directions largely remains under-explored. This work examines this intersection from the perspective of concept-based models where classes are considered as combinations of text-based concepts, and thus can enhance the interpretability of models in a continual learning setting. Addressing the unique challenges of learning new concepts without forgetting past ones, our method MC² proposes an approach to seamlessly learn both classes and concepts over time. We adopt a multimodal approach to concepts, emphasizing text-based human-understandable semantics associated with images. Through various experimental studies, we demonstrate that MC² outperforms existing concept-based approaches by a large margin in a continual setting, while performing comparably if not better in full-data settings. We also demonstrate that MC² can be used as a post-hoc interpretability method to examine image regions associated with abstract textual concepts. Our code for MC² will be publicly released on acceptance.
1 Introduction
Modern deep neural networks (DNNs) have proven their ability to solve a multitude of tasks in the supervised learning setting, even outperforming humans on certain tasks. In recent times, there has been growing interest in developing models that not only perform well on a single task in an i.i.d. setting but also learn continually, i.e., on new, previously unseen tasks that may arrive in the future. However, in such settings, when a model is directly trained on a new task, it loses the ability to perform well on previously learnt tasks, a phenomenon known as catastrophic forgetting. Many methods in continual learning literature have explored a plethora of techniques to combat this issue (Wang et al., 2023). However, models that have the capability to learn continually still face a significant hurdle that prevents them from being extensively deployed in safety-critical conditions: they cannot explain how they arrive at a decision from the provided data, viz., they are not interpretable.
From another perspective, methods that make deep neural networks interpretable have also become an active area of research in recent times (Molnar, 2018; Samek et al., 2021). Most existing literature focuses on post-hoc interpretability, i.e., it attempts to explain the decisions of a model already trained on a particular dataset and task. There has been a recent thrust, however, towards developing intrinsically interpretable (ante-hoc interpretable) models that are rendered interpretable in the training process itself (Rudin, 2019; Vilone & Longo, 2021; Nauta et al., 2023). These recent efforts have largely focused on traditional supervised learning; methods that can learn continually and are also inherently interpretable remain largely unexplored.
A class of interpretable models that has gained prominence in recent years is concept-based learning. A concept is typically a high-level, inherently interpretable unit of information. A class can be abstracted into a set of concepts that define the characteristics of that class. For example, the class cat may be broken down into the concept set {fur, whiskers, four legs, pointy ears, sociable}. Such concept-based approaches allow atomic concepts to be combined to signify the presence of a particular class in image-based tasks. Recent works such as (Koh et al., 2020; Oikarinen et al., 2023; Yang et al., 2023) have shown promise in using concept-based models to enhance the interpretability of image classification models, but have not been studied for learning continually. When using concept-based models in a continual learning setting, several new problems emerge: (i) from
a concept-based learning perspective, the model has to allow mechanisms to learn new concepts without forgetting past ones; (ii) from a continual learning perspective, in addition to learning new classes over time (standard continual learning), the model also has to learn new concepts over time, i.e., the model has to address catastrophic forgetting in classes as well as concepts; and (iii) it is possible that older concepts may be components of a newer class in a later task; the model has to learn these associations effectively too. These challenges make this setting a non-trivial one, and timely considering the focus on ante-hoc interpretable models. A very recent work (Rymarczyk et al., 2023) addressed this setting for the first time, also supporting the need for this direction of work. However, the notion of a concept in that work differs from earlier efforts, and is oriented towards part-based prototypes. Such part-based structures may not capture abstract concepts or relationships such as, e.g., cat and sociable. We focus on a more generic approach to concepts that are not necessarily part-based but text-based high-level semantics associated with an object category.
To this end, we propose MC², a novel multimodal concept-based continual learner that not only accommodates new classes and concepts, but also implicitly localizes text-based concepts in images. We consider text encodings of text concepts, which we call concept anchors, along with image representations to create a set of multimodal concepts for a given image. These multimodal concepts are latent vectors which contain information that help in classification while also providing interpretations. We introduce the notion of concept grounding, which allows the interpretation of multimodal concepts in terms of text-based concepts. We also design MC² with the consideration that it should be able to learn continually. Our proposed approach is not limited to a pre-specified number of concepts and classes, thus making it scalable by design for class-incremental and concept-incremental learning. Our key contributions are summarized below:
• We propose a novel method for concept-based continual learning that can adapt continually to new classes as well as new concepts, without increasing the number of parameters. Standard experience replay does not help reliably explain the model in terms of concepts; we hence introduce a new concept-augmented exemplar replay approach that allows the model to retain concept-based explanations of previous experiences.
• We propose multimodal concepts, a combination of image embeddings and interpretable concept anchors, to perform classification. These multimodal concepts are grounded to their corresponding text-based concept anchors, thus making them interpretable. We also show that the vision-language models used in our approach need not be pre-aligned, allowing for more flexibility in the method.
• Our approach offers multi-hoc concept-based interpretability, i.e. it is designed in an ante-hoc fashion to offer interpretability in the form of high-level concepts, and can also be employed as a concept-specific attribution method, which enhances post-hoc interpretability by identifying regions of interest involved in the search for a specified concept. For example, in Figure 3, we show that our model is able to reliably localize image-level attributions for queried concepts.
• We perform a comprehensive set of experiments to evaluate our proposed method on well-known benchmark datasets, and also compare our method against continual adaptations of earlier concept-based methods. We study our method’s performance both in a continual as well as full-data setting. We perform qualitative evaluations of how well our model learns to associate concepts with localized visual cues in images, and also study the goodness of concepts by demonstrating their effectiveness in post-hoc interventions.
2 RELATED WORK
Interpretability of Deep Neural Network Models: Interpretability methods for DNN models can be broadly classified into post-hoc and ante-hoc methods. Post-hoc methods aim to interpret model predictions through several strategies, including Gradient-weighted Class Activation Mapping-based methods, which highlight influential features by tracking gradient flows to the final layer (Selvaraju et al., 2017; Chen et al., 2020a; Chattopadhyay et al., 2018); integrated gradient-based methods that compute the gradient integration via the Riemann integral (Sattarzadeh et al., 2021; Yvinec et al., 2022; Benitez et al., 2023); Shapley value-based methods that address model interpretation using Shapley values (Sundararajan & Najmi, 2020; Wang et al., 2020a; Jethani et al., 2021); and several non-gradient-based methods (Dabkowski & Gal, 2017; Fong & Vedaldi, 2017; Petsiuk et al., 2018; Montavon et al., 2019). While post-hoc methods offer insight into a model's behavior without posing additional model constraints, recent efforts have highlighted the issues with post-hoc methods and their reliability in reflecting a model's reasoning (Rudin, 2019; Vilone & Longo, 2021; Nauta et al., 2023). Besides, when interpretations are inaccurate, it becomes difficult to reason whether the problem lies with the interpretation method or whether the model relied on spurious correlations in the data. There have also been concerns that post-hoc interpretability has largely succeeded only on simple model architectures (Burns & Steinhardt, 2021; Adebayo et al., 2021; Bordt et al., 2022). On the other hand, ante-hoc methods that jointly learn to explain and predict provide models that are inherently interpretable (Sokol & Flach, 2021; Benitez et al., 2023). Ante-hoc methods have also been found to provide interpretations that help make the model more robust and reliable (Alvarez-Melis & Jaakkola, 2018; Chattopadhyay et al., 2022). We focus on this genre of methods in this work.
Continual Learning: Continual learning (CL) methods aim to tackle catastrophic forgetting (Hadsell et al., 2020) using techniques that alleviate forgetting across experiences. These methods have been extensively studied in the last few years and can be broadly grouped into three main categories. Exemplar replay-based methods use a small exemplar buffer to store highly representative samples of classes belonging to previous experiences using some similarity metric (Shin et al., 2017; Mi et al., 2020; Van de Ven et al., 2020; Maracani et al., 2021; Graffieti et al., 2023); variations of such methods adapt gradient-based sample selection strategies for populating the buffer (Aljundi et al., 2019; Jin et al., 2020; Tiwari et al., 2022). Architecture-based methods, on the other hand, rely on strategies such as network expansion and require updating parameters of the model as new classes arrive (Ebrahimi et al., 2020; Douillard et al., 2022; Kang et al., 2023); such methods can be costly and difficult to scale. Regularization-based methods protect weights that were influential in old experiences from mutation (Sha et al., 2016; Jung et al., 2020; Maschler et al., 2021; Li et al., 2023). However, methods for interpretable continual learning have largely remained unexplored, except for one very recent work (Rymarczyk et al., 2023), discussed later in this section.
Concept-based Interpretability: Koh et al. (2020) proposed Concept Bottleneck Models (CBMs), a method that uses interpretable, human-defined concepts, combining them linearly to perform classification. CBMs also allow human interventions on concept activations (Shin et al., 2023; Steinmann et al., 2023) to steer the final prediction of the model. Subsequent efforts such as (Marconato et al., 2022b; Havasi et al., 2022; Barker et al., 2023) improved upon specific issues such as concept leakage. Adaptation of concept-based learning to provide ante-hoc interpretability to any DNN architecture was also shown in (Sarkar et al., 2022). While the presence of a representative set of concepts helps with interpretability, collecting such dense concept annotations is time-consuming. This issue was addressed in (Kim et al., 2023; Collins et al., 2023; Yan et al., 2023), where the intermediate semantic concepts are obtained by replacing domain experts with Large Language Models (LLMs). This allows for ease and flexibility in obtaining the concept set, while also overcoming the issue of concept leakage using concept filters. Besides making concept-based learning more feasible, using LLMs to obtain concepts also allows grounding of neurons in a bottleneck layer to a human-understandable concept, an
Figure 2: Overview of our data setup and proposed architecture. Our architecture receives new classes and associated concepts across multiple experiences in a continual learning setting. We use pre-trained language and vision encoders to get embeddings for the input image, concepts, and classes. These are then used to create multimodal concepts using our Multimodal Encoder. These multimodal concepts are grounded to their anchor concepts using a loss function, and are used to predict both the class label and the presence/absence of corresponding concepts in the image.
issue with CBMs that was highlighted in (Margeloiu et al., 2021). Other concept-based methods (Alvarez-Melis & Jaakkola, 2018; Chen et al., 2020b; Kazhdan et al., 2020; Rigotti et al., 2021; Benitez et al., 2023) use a different notion of concepts based on prototype representations of image features; we follow the former approach in this work. Importantly, all aforementioned efforts only perform concept-based learning in the traditional supervised setting, with no explicit efforts towards addressing the continual learning setting.
Interpretable Continual Learning: As stated earlier, we focus on the premise that making models continual and interpretable allows them to adapt their reasoning mechanisms to unseen data that arrive over time. Existing concept-based models (Koh et al., 2020; Oikarinen et al., 2023; Yang et al., 2023) address interpretability under the assumption that classes and concepts are pre-defined, making the concept set rigid. Concept-based continual learning has remained largely unstudied. We identify (Marconato et al., 2022a) as an early effort in this direction; however, this work trains CBMs in a continual setting under an assumption that all concepts, including those required for unseen classes, are accessible from the first experience itself, which does not emulate a real-world setting. More recently, Rymarczyk et al. (2023) proposed a method that is both continual and interpretable that uses part-based prototypes as concepts. As mentioned earlier, our notion of concepts allows us to go beyond parts of an object category, as in CBM-based models.
3 MC²: METHODOLOGY
Preliminaries and Notations. Given a sequence of experiences \( \{E^1, E^2, ..., E^T\} \), with each experience \( E^i \) consisting of \( n \) image-label pairs \((x^i_1, y^i_1), (x^i_2, y^i_2), ..., (x^i_n, y^i_n)\), a class-incremental continual learning (CiCL) system aims to learn a task \( E^t \) without catastrophically forgetting tasks \( E^1 \) to \( E^{t-1} \). In the scenario where human-provided concepts are used for classification, each experience \( E^i \) consists of \( n \) image-label-concept tuples \((x^i_1, y^i_1, C^i_1), (x^i_2, y^i_2, C^i_2), ..., (x^i_n, y^i_n, C^i_n)\), where \( C^i \) is the set of concepts known during experience \( E^i \) and \( C^i_k \) is the set of active concepts in example \( k \). The set of concepts known during \( E^i \) is the union of all concept sets from task \( E^1 \) to \( E^i \). For the following sections, we use the subscript \( NL \) for an object if it is presented to our method in natural language.
Concept Annotations: The natural language concepts in \( C^i \) may be provided as part of the dataset (e.g. CUB dataset). However, collecting concept annotations for classes can be tedious in general, especially if the number of classes is very large or if suitable and sufficient domain experts are not available. In such cases, one can derive the concepts by querying a Large Language Model (LLM) as proposed by Oikarinen et al. (2023) and Yang et al. (2023). Our approach is inclusive of both these approaches, depending on what may be available for a given dataset.
Our model learns to create multimodal concept embeddings that are grounded to their corresponding textual anchors and also contain the corresponding visual information that together aid in classification. We now formally define the terms *Grounding* and *Anchor*, as used in our work.
**Definition 1.** Given a vocabulary $\mathcal{V}$ containing words, phrases, or sentences in natural language, a text encoder $\phi : \mathcal{V} \rightarrow \mathbb{R}^d$, a vector $u \in \mathbb{R}^d$, and a distance function $D : \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$, $u$ is said to be grounded to a word, phrase, or sentence $v_{NL} \in \mathcal{V}$ if $D(u, \phi(v_{NL})) \leq \varepsilon$ for some $\varepsilon > 0$. Then, $v_{NL}$ is said to be an anchor of $u$.
In other words, a feature vector is said to be grounded to a text term if their embeddings align with a certain tolerance $\varepsilon$. We present an overall schematic of MC² in Figure 2. Our approach leverages embeddings obtained from both the input image and the user-defined concepts present in that image. These are used to create multimodal concept embeddings using textual concept embeddings as anchors. Our algorithm comprises three major components: a multimodal concept encoder, a concept-grounding module, and a concept-augmented experience replay, each of which is described in detail below.
**Multimodal Concept Encoder.** Our proposed setting requires each sample of experience $E_t$ to be of the form $(x_i \in \mathcal{X}, y_i \in \mathcal{Y}_{NL}, C_i \in \mathcal{C}_{NL})$, where $x_i$ is an input image and $C_i$ is the concept set for the current experience in natural language. $y_i$ is the corresponding class name, also in natural language. The embeddings for the classes and concepts in $\mathcal{Y}_{NL}$ and $\mathcal{C}_{NL}$ are obtained using a pre-trained language encoder. Formally, given an image $x_i$ and a feature extractor $F$, the image embedding $\mathbf{x}_i$ is obtained as $\mathbf{x}_i = F(x_i)$; similarly, given a concept word or phrase $c_j \in \mathcal{C}_{NL}$ and a text encoder $T$, the text embedding $\mathbf{c}_j$ is obtained as $\mathbf{c}_j = T(c_j)$.
In order to enable cross-modal learning, we create multimodal representations of the image and text inputs using their respective embeddings. This allows the learned representation to exchange information between the modalities, and also to assimilate information about the occurrence of the textual concept in the provided image. To this end, we use a multimodal encoder $M$, which is a stack of transformer encoder layers. We provide the image embedding $\mathbf{x}_i$ as well as the concept embeddings $\mathbf{c}_1, \mathbf{c}_2, ..., \mathbf{c}_{|\mathcal{C}_{NL}|}$ as an input sequence to $M$. The output of $M$ is also a sequence of vectors, wherein we map the last $|\mathcal{C}_{NL}|$ vectors of the sequence to the concept anchors using the concept-grounding module (described later in this section). A shared sigmoid-activated linear layer, $\sigma(\cdot)$, is also trained on each multimodal concept vector to perform binary classification, where 1 indicates the presence of a concept, and 0 otherwise. A weighted binary cross-entropy loss, $L_{WBCE}$, is used to train the model for concept classification. We also classify the entire image represented by these multimodal concepts using the standard image-level cross-entropy loss, $L_{CE}$. The loss for training $M$ is then a weighted sum of these two losses: $L = L_{CE} + \lambda L_{WBCE}$, where $\lambda$ is a hyperparameter. Empirically, we find that $\lambda = 5$ works marginally better than lower or higher values. More details on $L_{WBCE}$ are provided in the appendix.
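To make this concrete, below is a minimal PyTorch sketch of the multimodal encoder and its training loss. All names, dimensions, and the plain (unweighted) BCE standing in for $L_{WBCE}$ are illustrative assumptions, not the exact implementation.

```python
# Minimal sketch of the multimodal concept encoder M (shapes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalConceptEncoder(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.concept_head = nn.Linear(d_model, 1)  # shared sigmoid head

    def forward(self, img_emb, concept_embs):
        # img_emb: (B, d) from the frozen vision encoder F;
        # concept_embs: (B, K, d) from the frozen text encoder T.
        seq = torch.cat([img_emb.unsqueeze(1), concept_embs], dim=1)  # (B, 1+K, d)
        out = self.encoder(seq)
        mm_concepts = out[:, 1:, :]  # last K vectors = multimodal concepts c'_j
        presence = torch.sigmoid(self.concept_head(mm_concepts)).squeeze(-1)  # (B, K)
        return mm_concepts, presence

def encoder_loss(class_logits, labels, presence, concept_targets, lam=5.0):
    # L = L_CE + lambda * L_WBCE; plain BCE stands in for the weighted variant.
    return F.cross_entropy(class_logits, labels) + \
        lam * F.binary_cross_entropy(presence, concept_targets.float())
```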
**Classification using the Multimodal Concept Encoder:** The alignment between the $j$th multimodal concept vector of the current sample, $c'_j$, and the embedding of the $k$th class $y_k$ of the current experience can be obtained by taking the dot product of the two vectors. We define the strength $s_k$ of class $k$ in a given image to be the sum of dot products of all multimodal concepts onto the class embedding of $k$, i.e. $s_k = \sum_{j=1}^{|\mathcal{C}|} c'_j \cdot y_k$. The classification result is then given by: $\text{argmax}_y(s_1, s_2, ..., s_{|\mathcal{Y}|})$, that is, the index of the class having the greatest strength with respect to all concepts. We use $s_k$ as the logit of class $k$, and perform a softmax operation on top of the logits to get classification probabilities, which are used to train the model with the standard cross-entropy loss $L_{CE}$. It should be noted that deriving class strengths from the multimodal concepts does not require any additional parameters; this enables scalability of our approach to unseen classes and concepts when deployed in a continual setting.
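As a sketch, this parameter-free classification rule can be written in a few lines; the `einsum` expression and tensor shapes are assumptions about the implementation.

```python
# Class strengths s_k = sum_j c'_j . y_k, computed without extra parameters.
import torch

def class_logits(mm_concepts, class_embs):
    # mm_concepts: (B, K, d) multimodal concept vectors c'_j
    # class_embs:  (C, d)   frozen text embeddings of class names y_k
    return torch.einsum('bkd,cd->bc', mm_concepts, class_embs)

# probs = torch.softmax(class_logits(mm_concepts, class_embs), dim=-1)
```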
**Concept-Grounding Module.** While the outputs of $M$ are multimodal by design, they do not implicitly provide explanations in terms of human-defined concepts. The concept-grounding module allows us to ground these multimodal concept vectors to known concept anchors that are directly obtained from textual descriptions of concepts. We use the last $|\mathcal{C}_{NL}|$ vectors of the output sequence given by $M$ as the set of our multimodal concepts. This allows us to create a one-to-one mapping between input and output concept vectors. We use a Concept Grounding Loss, $L_G$, to ground the
predicted multimodal concepts with their corresponding concept anchors, as below:
$$L_G = -\frac{1}{|C^i|} \sum_{k=1}^{|C^i|} \cos(c_k, W^\top c'_k + b) = -\frac{1}{|C^i|} \sum_{k=1}^{|C^i|} \frac{c_k \cdot (W^\top c'_k + b)}{\|c_k\| \, \|W^\top c'_k + b\|}$$
where $c'_k$ represents the multimodal vector corresponding to concept anchor $c_k$. $W$ and $b$ are learnable parameters that are shared among all concepts and serve to perform concept alignment. Grounding the multimodal concept vectors enables them to encode the association between the corresponding concept anchor and the given input image.
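A minimal sketch of $L_G$ follows, assuming the shared affine map $W^\top c'_k + b$ is realized as a single linear layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundingLoss(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.align = nn.Linear(d_model, d_model)  # shared W, b across concepts

    def forward(self, anchors, mm_concepts):
        # anchors: (B, K, d) text embeddings c_k; mm_concepts: (B, K, d) c'_k
        cos = F.cosine_similarity(anchors, self.align(mm_concepts), dim=-1)
        return -cos.mean()  # negative mean cosine similarity = L_G
```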
**Concept-Augmented Experience Replay.** Experience (or exemplar) replay is a standard technique used in continual learning to prevent catastrophic forgetting. This is typically implemented by creating a (small) memory buffer that contains training samples from past experiences. When a model is trained on a new experience, the memory buffer is sampled and the model is additionally trained on these stored samples. We propose an extension called Concept-Augmented Experience Replay in this work, wherein we store the class-level concept labels in addition to images and class labels. The exemplar loss is identical to the loss $L$ used to train $M$; i.e., when replaying buffer samples, the concept-level loss $L_{WBCE}$ of the multimodal concept encoder is applied on the stored concepts in addition to the cross-entropy loss. While this simple enhancement of experience replay may seem trivial since one can simply ignore concepts in the buffer, we show later in the paper that the quality of concepts learned through concept-augmented experience replay is far superior to standard experience replay that does not store concepts (see Table 4).
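One possible realization is a reservoir-sampled buffer whose entries carry concept labels alongside images and class labels; the buffer policy and capacity below are assumptions rather than the paper's specification.

```python
import random

class ConceptReplayBuffer:
    """Replay buffer that stores (image, label, concept_labels) triples."""
    def __init__(self, capacity=500):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, image, label, concept_labels):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((image, label, concept_labels))
        else:
            j = random.randrange(self.seen)  # reservoir sampling over the stream
            if j < self.capacity:
                self.buffer[j] = (image, label, concept_labels)

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

Replayed triples are fed through the same loss $L$ as fresh data, so the concept head keeps receiving supervision for old classes.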
### 4 EXPERIMENTS AND RESULTS
We perform a comprehensive suite of experiments to study the performance of MC² on well-known benchmarks that allow us to study both continual and concept-based learning: CIFAR-100, ImageNet-100, and CalTech-UCSD Birds 200 (CUB200). We study our method both in a continual setting and in a full-data setting. We also examine the different components of our model and study the importance of each to the overall method. Details related to architecture implementation and hyperparameter selection are provided in the appendix.
**Performance Metrics.** In the class-incremental setting, we follow earlier literature in using two metrics to evaluate the performance of different methods. **Final Average Accuracy (FAA)** is a measure of how well a model has adapted to a sequence of tasks or data streams over time. It represents the average accuracy of the model on the validation splits of all tasks or data streams after it has completed its learning process, across the experiences. FAA is defined as: $FAA = \frac{1}{T} \sum_{i=1}^{T} acc_i^T$, where $acc_i^T$ represents the model's accuracy on the validation split of experience $i$ after training on $T$ experiences. **Average Forgetting (AF)** quantifies the extent to which a model forgets previously learned knowledge when exposed to new experiences. It measures the drop in performance on tasks learned in previous experiences after the model has been trained on newer experiences. Lower average forgetting indicates better model stability and performance in a continual learning scenario. AF at task $T$ is defined as: $AF = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( acc_i^i - acc_i^T \right)$, i.e., the average difference between the accuracy on the validation set of task $i$ when it was originally learned and the accuracy on it after the model has been trained on $T$ experiences. In the full-data setting where the model is provided with all training data in a single experience, we use the standard **Classification Accuracy** to evaluate different methods, viz. the ratio of correctly classified examples to the total number of examples.
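Both metrics follow directly from an accuracy matrix; a small sketch (0-indexed, with `acc[t][i]` denoting the accuracy on experience `i` after training through experience `t`):

```python
def faa_af(acc):
    """Compute Final Average Accuracy and Average Forgetting."""
    T = len(acc)
    faa = sum(acc[T - 1][i] for i in range(T)) / T
    # accuracy when task i was first learned minus its final accuracy
    af = sum(acc[i][i] - acc[T - 1][i] for i in range(T - 1)) / (T - 1)
    return faa, af

# e.g. faa_af([[0.9], [0.8, 0.85], [0.7, 0.8, 0.9]]) -> (0.8, 0.125)
```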
**Baselines.** While there has been very little effort on explicitly studying concept-based continual learning, we thoroughly evaluate our approach in class-incremental and full-data settings by comparing it with existing works that use human-defined concepts. Our baseline methods for comparison include: (i) Marconato et al. (2022a), which uses a concept bottleneck layer with one neuron assigned to each concept. In a class-incremental setting, this baseline assumes that all concepts, including those that would ideally only be provided in future experiences, are provided upfront. The model then uses a growing linear layer, with new neurons added for new classes, for the final classification; (ii) **Incremental CBM**, a version of the Concept Bottleneck Model (Koh et al., 2020) that we modify to adapt to a class-incremental and concept-incremental learning scenario. We grow both the bottleneck layer and the linear classification layer as new classes and new concepts are introduced. One can see that this is a generalized version of the previous baseline in which the assumption that all concepts are provided upfront is relaxed; we also consider (iii) and (iv), which are
Label-Free CBM (Oikarinen et al., 2023) and LaBo (Yang et al., 2023), variations of CBM that use embeddings of natural language concepts as the bottleneck layer. Both of these methods also propose ways to discover concepts for a specified class by querying LLMs; they primarily differ in how they query the LLM and filter the obtained concept set. We adapt these methods to the continual learning setting by allowing concepts from previous experiences to be considered when a new experience is provided.
**Implementation Details.** For ImageNet-100 and CIFAR-100, we grow the concept set at every experience as new concepts arrive, while discarding duplicate concepts from the new set. We show the number of concepts, with and without duplicates, in Table 1. In the case of CUB, the number of concepts is fixed to 312 across all experiences (as provided with the dataset). The number of concepts can be reasonably large, as shown. This can cause out-of-memory errors when used with the standard attention mechanism since our method performs attention over the entire concept set. To address this, we also study a simple variant of our method with linear attention, which we denote as $MC^2$(Linear) in our results. More details about the use of linear attention are provided as part of our ablation studies. Other implementation details including dataset details, hyperparameters, and training setups are provided in the Appendix. Our code will be made publicly available on acceptance.
| Exp | CIFAR-100 | ImageNet-100 |
|-----|-----------|--------------|
| E1 | 257 (257) | 214 (214) |
| E2 | 460 (527) | 359 (416) |
| E3 | 638 (794) | 457 (594) |
| E4 | 798 (1046)| 545 (762) |
| E5 | 925 (1309)| 641 (945) |
Table 1: Number of concepts available at each experience (Exp), excluding duplicates across experiences (cumulative counts including duplicates in parentheses)
| Method | CIFAR-100 FAA | CIFAR-100 AF | CUB FAA | CUB AF | ImageNet-100 FAA | ImageNet-100 AF |
|--------|---------------|--------------|---------|--------|------------------|-----------------|
| CBM (Koh et al., 2020) | 0.4333 | 0.5646 | 0.5875 | 0.2029 | 0.4523 | 0.5553 |
| CBM (Sequential) (Koh et al., 2020) | 0.3533 | 0.6025 | 0.5329 | 0.1347 | 0.4523 | 0.5553 |
| ICIAP (Marconato et al., 2022a) | 0.4196 | 0.5719 | 0.5875 | 0.2029 | 0.4689 | 0.5253 |
| ICIAP (Sequential) (Marconato et al., 2022a) | 0.2945 | 0.5937 | 0.5329 | 0.1347 | 0.4689 | 0.5253 |
| Label-Free (Oikarinen et al., 2023) | 0.3200 | 0.2338 | 0.1934 | 0.4408 | 0.1493 | 0.2760 |
| LaBo (Yang et al., 2023) | 0.3009 | 0.6879 | 0.3101 | 0.4741 | 0.3384 | 0.4560 |
| $MC^2$ | 0.7022 | 0.3003 | 0.8137 | 0.0611 | 0.7970 | 0.0877 |
| $MC^2$(Linear) | 0.6920 | 0.3142 | 0.8188 | 0.0531 | 0.7985 | 0.0776 |
Table 2: Continual learning performance of different methods over 5 experiences
**Quantitative Results.** Table 2 shows our results on concept-based continual learning. On CIFAR-100 and ImageNet-100, our approach outperforms all baselines by a significant margin. It should be noted that this is done without adding any additional parameters to our model with newer experiences, whereas other methods require new parameters to incorporate new classes and concepts. We also observed significantly lower forgetting across experiences using our approach. These results show that our model can readily incorporate knowledge about new concepts and classes while internally forming the required concept-class associations. It is also able to remember these associations to a good extent, even after being trained on new tasks.
Figure 3: Visual grounding of concepts: Qualitative results for localizing concepts using $MC^2$ versus when localizing the same concepts using GradCAM on CBMs
**Qualitative Results. Visual Grounding and Attributions.** We extend our method as a post-hoc analysis tool to provide visualizations of the attention maps learned by our model. As shown in Figure 3, each heatmap shows the region an attention head focuses on for a specified concept. It can be seen that our model learns to assign a subset of its attention heads to extract user-defined concepts from an image. This is in contrast with models that do not provide such grounding, which fail to reliably extract user-defined concepts from the given image (Margeloiu et al., 2021).
**More Results: Full-Data Training.** To see how our model fares in standard classification settings, we evaluate our method in a full-data, single-experience setting on three datasets. These results are presented in Table 3. In this setting, we find that $MC^2$ considerably outperforms the next closest baseline on the CUB dataset, indicating that it is highly effective when used to differentiate between fine-grained classes. It also achieves comparable performance on ImageNet-100 and CIFAR-100, even though this setting is not our focus.
**More Results: Evaluating Concepts.** The concepts given by LLMs (in the case of ImageNet-100 and CIFAR-100), as well as concepts annotated by humans (as in CUB200), can be noisy. Therefore, directly comparing accuracies of concept classification may not evaluate how well the networks learn concepts. We instead evaluate the learned concepts in two ways: (i) using concept neurons (inspired by Marconato et al., 2022a), and (ii) using interventions, each of which is described below.
**Evaluating goodness of concepts through concept neurons:** A concept neuron (see Marconato et al., 2022a) predicts the presence or absence of a given concept based on a grounded concept representation. After training, such concept neurons should be able to support a linear classifier on par with the grounded concept vectors. We evaluate this by treating a group of concept neurons as a bottleneck layer and training a linear classifier on top of the neurons. Since this only examines concepts, we train the linear layer on all classes simultaneously for 3 epochs, irrespective of whether the model was trained incrementally or in a full-data setting. The results are shown in Table 4, with and without concept-augmented experience replay (CR). Evidently, the concepts perform significantly better when using our proposed CR replay.
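A sketch of this probe, assuming the per-concept presence scores and class labels are precomputed tensors (full-batch training for brevity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(presence, labels, num_classes, epochs=3, lr=1e-3):
    # presence: (N, K) concept-neuron outputs; labels: (N,) class indices
    probe = nn.Linear(presence.shape[1], num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(probe(presence), labels)
        loss.backward()
        opt.step()
    return probe
```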
| Method | CIFAR-100 | CUB | ImageNet-100 |
|-----------------|-----------|---------|--------------|
| CBM-J | 0.7868 | 0.7231 | 0.7773 |
| CBM-Seq | 0.5712 | 0.6932 | 0.4265 |
| Label-Free | 0.6431 | 0.7413 | 0.7818 |
| LaBo | **0.8572**| 0.7015 | **0.8506** |
| Ours | 0.8567 | **0.8401**| 0.8466 |
Table 3: Classification performance of different methods in the full-data (single experience) setting. CBM-J involves joint training of the CBM as in Koh et al., 2020, while CBM-Seq involves sequential training.
| Dataset | FAA w/ CR | Linear Acc w/ CR | FAA w/o CR | Linear Acc w/o CR |
|-----------------|-----------|------------------|------------|-------------------|
| CIFAR-100 | 0.7022 | 0.7650 | 0.6722 | 0.4511 |
| CUB200 | 0.8137 | 0.7914 | 0.7844 | 0.1382 |
| ImageNet-100 | 0.7970 | 0.7722 | 0.7903 | 0.5903 |
Table 4: Linear layer training on top of concept neurons; CR = concept-augmented experience replay
**Figure 4: Manual interventions on concepts:** We identify concepts that are incorrectly labeled and modify them based on the image semantics; this results in correct classification.
Evaluating concepts using interventions: Interventions allow us to study the (potentially causal) relationship between concepts and the classes they describe. To study these, we use the linear layer trained above to evaluate how well our model learns such concept-class relationships. We consider samples that are misclassified by the newly trained linear layer and perform interventions on the wrongly predicted concepts using the mechanism described in (Koh et al., 2020). Figure 4 shows qualitative results of performing interventions on a misclassified image. We observe that most concepts present in an image are usually correctly identified, and performing interventions on a few key misclassified concepts results in correct classifications a majority of the time. This highlights the semantic quality of the learned concepts and its impact on classification. In the figure, we see that the image of a Komodo Dragon also activates the concept "a tree" due to visual cue similarity. When the key concepts "scales" and "long, sharp claws" are activated better, the model classifies the image correctly as a Komodo Dragon.
Ablation Studies: Vision-Language Alignment. We now study the importance of having pre-aligned vision and text encoders to get image, class, and concept embeddings. Alignment here refers to the property that, for a given image and a corresponding image description in natural language, the encoders produce vectors that are close in a high-dimensional space under some predefined metric. We perform a grid search over 9 different vision-language encoder pairs. Two of these pairs, CLIP (Radford et al., 2021) and FLAVA (Singh et al., 2022), have pre-aligned vision-language encoders. We also used BERT (Devlin et al., 2018) and ViT (Dosovitskiy et al., 2021) models trained on unimodal data, where our model explicitly aligns the modalities. The results are shown for CUB in Table 6 and for ImageNet-100 in Table 5. Our results indicate that pre-aligned vision-language (VL) models are not necessary for our method; the explicit alignment our method learns for this task is in fact superior to pre-aligned models. This is because pre-aligned VL models are trained at a general image level, while our explicit approach allows more fine-grained association between image and text.
Ablation Studies: Attention Mechanism and Scalability. In its naive implementation, the compute requirements of our method can grow quadratically with the number of concepts. This is due to the quadratic dependency of the vanilla attention mechanism used in transformer blocks. Fortunately, recent attempts [Katharopoulos et al., 2020; Vyas et al., 2020; Shen et al., 2021; Wang et al., 2020b; Kitaev et al., 2019] have been made to improve the computational efficiency of transformer architectures. As stated earlier, we propose a viable variant to make our model practically feasible for a large number of concepts: MC² with Linear Attention, whose compute requirements grow linearly with the number of concepts. We use transformer blocks featuring the linear attention mechanism proposed in [Katharopoulos et al., 2020] as a drop-in replacement in our multimodal encoder. These results are also shown in Table 2. We see that using linear attention surprisingly achieves better results on CIFAR-100 while achieving comparable performance to vanilla attention on CUB and ImageNet-100.
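For reference, a minimal sketch of that linear attention, using the feature map $\phi(x) = \mathrm{elu}(x) + 1$ of Katharopoulos et al. (2020); head splitting and masking are omitted, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (B, L, d); cost is O(L) in the sequence length L
    # (here, L is roughly the number of concepts).
    q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature map phi
    kv = torch.einsum('bld,ble->bde', k, v)      # sum_l phi(k_l) v_l^T
    z = 1.0 / (torch.einsum('bld,bd->bl', q, k.sum(dim=1)) + eps)
    return torch.einsum('bld,bde,bl->ble', q, kv, z)
```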
5 CONCLUSIONS AND FUTURE WORK
In this work, we propose a new perspective on making human-defined concept-based models perform in a continual setting. We propose a method that uses pre-trained language and vision encoders to create multimodal concepts, which are anchored to natural language concepts. Our approach can reliably interpret classification results in terms of the provided concepts, and can also incorporate new concepts and classes at a later time. We perform comprehensive evaluations of our method on three benchmark datasets and also study the efficacy of concepts in our pipeline. Our qualitative and quantitative results show the usefulness of the proposed method. Although our method provides a high-performing continual and interpretable model, the use of a pre-trained vision encoder limits us from using arbitrary augmentations (e.g., color jitter) to improve model generalization. Allowing for this via unaligned unimodal encoders could further improve performance. From an interpretability viewpoint, developing an improved intervention mechanism that can be used on our model without an explicit linear layer would be an interesting direction of future work. We can also explore other forms of attention, such as Flash Attention (Dao et al., 2022), to improve the practical scalability of our method.
| Vision \ Text | FLAVA | CLIP | BERT |
|---------------|-------|------|------|
| FLAVA | 0.7036 | 0.6532 | 0.6952 |
| CLIP | 0.7372 | 0.7247 | 0.7125 |
| ViT | 0.7970 | 0.7458 | 0.7404 |
Table 5: VL alignment, ImageNet100
| Vision \ Text | FLAVA | CLIP | BERT |
|---------------|-------|------|------|
| FLAVA | 0.7628 | 0.6501 | 0.7218 |
| CLIP | 0.8047 | 0.7180 | 0.7970 |
| ViT | 0.8245 | 0.7973 | 0.8344 |
Table 6: VL alignment, CUB
Reproducibility Statement: Necessary details required to reproduce our results have been provided in the Appendix. The full code shall be released publicly upon acceptance of the paper.
REFERENCES
Julius Adebayo, Michael Muelly, Harold Abelson, and Been Kim. Post hoc explanations may be ineffective for detecting unknown spurious correlation. In *International Conference on Learning Representations (ICLR)*, 2021.
Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. *Advances in Neural Information Processing Systems (NeurIPS)*, 32, 2019.
David Alvarez-Melis and Tommi S. Jaakkola. Towards robust interpretability with self-explaining neural networks, 2018.
Matthew Barker, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, and Umang Bhatt. Selective concept models: Permitting stakeholder customisation at test-time, 2023.
Raul Benitez et al. Ante-hoc generation of task-agnostic interpretation maps. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3763–3768, 2023.
Sebastian Bordt, Michèle Finck, Eric Raidl, and Ulrike von Luxburg. Post-hoc explanations fail to achieve their purpose in adversarial contexts. In *Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency*, pp. 891–905, 2022.
Collin Burns and Jacob Steinhardt. Limitations of post-hoc feature alignment for robustness. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 2525–2533, 2021.
Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In *2018 IEEE winter conference on applications of computer vision (WACV)*, pp. 839–847. IEEE, 2018.
Aditya Chattopadhyay, Stewart Slocum, Benjamin D Haefele, Rene Vidal, and Donald Geman. Interpretable by design: Learning predictors by composing interpretable queries. *IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)*, 45(6):7430–7443, 2022.
Lei Chen, Jianhui Chen, Hossein Hajimirsadeghi, and Greg Mori. Adapting grad-cam for embedding networks. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)*, pp. 2794–2803, 2020a.
Zhi Chen, Yijie Bei, and Cynthia Rudin. Concept whitening for interpretable image recognition. *Nature Machine Intelligence*, 2(12):772–782, 2020b.
Katherine Maeve Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilya Sucholutsky, Adrian Weller, and Krishnamurthy Dvijotham. Human uncertainty in concept-based ai systems. In *Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 869–889, 2023.
Piotr Dabkowski and Yarin Gal. Real time image saliency for black box classifiers. *Advances in Neural Information Processing Systems (NeurIPS)*, 30, 2017.
Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. *Advances in Neural Information Processing Systems (NeurIPS)*, 35:16344–16359, 2022.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations (ICLR)*, 2021.
|
Oc4ji1iCjQ
|
In the synthetic data in Table 1, I was surprised that some of the wins were not that big. Given that this is fully synthetic data presented for this method, I'd expect the results to be outside the confidence bands, but in a few spots they're highly overlapping - it makes me wonder if some of the practical choices aren't as effective
|
Catch the Shadow: Automatic Shadow Variables Generation for Treatment Effect Estimation under Collider Bias
Anonymous authors
Paper under double-blind review
Abstract
Collider bias, which comes from non-random sample selection caused by both treatments and outcomes, is a significant and challenging problem of treatment effect estimation. Previous studies show that treatment effects are identifiable if some shadow variables are available in the observational data. Shadow variables are assumed to be fully observed covariates independent of the sample selection mechanism after conditioning on the outcome and other observed covariates. However, finding a well-defined shadow variable is often no easier than dealing with collider bias itself in real-world scenarios. Therefore, we propose a novel ShadowCatcher that automatically generates representations serving the role of shadow variables from the observed covariates. Specifically, during the generation process, we impose conditional independence constraints on the learned representations to make them satisfy the assumptions of shadow variables. To further ensure that the generated representations are valid, we also use a tester to perform hypothesis testing and iteratively carry out the generation process until the generated representations pass the test. Using the generated representations, we propose a novel ShadowEstimator to estimate treatment effects under collider bias. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our proposed ShadowCatcher and ShadowEstimator.
1 Introduction
Causal inference is a powerful statistical modeling tool for explanatory analysis, and a central problem in causal inference is treatment effects estimation. The gold standard approach for treatment effect estimation is to conduct Randomized Controlled Trials (RCTs), but RCTs can be expensive (Kohavi & Longbotham, 2011) and sometimes infeasible (Bottou et al., 2013). Therefore, developing practical approaches to estimate treatment effects from observational data is crucial for causal inference.
In observational studies, association does not imply causation, mainly due to the presence of spurious associations in the data. There are two primary sources of spurious associations: confounding bias and collider bias (Hernán & Robins, 2020). Most of the previous works focused on confounding bias that results from common causes of treatments and outcomes (Bang & Robins, 2005; Shalit et al., 2017; Louizos et al., 2017; Wager & Athey, 2018) while ignored collider bias which comes from non-random sample selection caused by both treatments and outcomes.
We use causal diagrams in Figure 1 to further illustrate the two biases, where $X$ denotes the observed covariates, $T$ denotes the treatment variable, $Y$ denotes the outcome variable, and $S$ denotes the sample selection indicator. Confounding bias results from common causes of treatment and outcome (Greenland, 2003; Guo et al., 2020). As shown in Figure 1(a), there are two sources of association between $T$ and $Y$: the path $T \rightarrow Y$ that represents the treatment effect of $T$ on $Y$, and the path $T \leftarrow X \rightarrow Y$ between $T$ and $Y$ that includes the common cause $X$, which introduces spurious associations into the observational data. Collider bias is a particular case of sample selection bias\(^1\) that results from conditioning on a common effect of $T$ and $Y$ (Hernán & Robins, 2020). As shown in Figure 1(b), except for the path $T \rightarrow Y$, the other source of association between $T$ and $Y$ is from
\(^1\)Sample selection bias results from non-random sample selection conditioned on $S$ caused by certain variables in data, while collider bias is the particular case that $T$ and $Y$ both cause $S$.
the open path $T \rightarrow S \leftarrow Y$. It links $T$ and $Y$ through conditioning on their common effect $S$, which introduces spurious associations. As shown in Figure 1(d), an analysis conditioned on $S$ will cause collider bias, i.e., we can only observe the outcome of the selected units ($S = 1$), while the values of $Y$ are missing for the unselected units ($S = 0$), leading to incorrect treatment effect estimation.
Previous studies show that treatment effects are unidentifiable under collider bias without further assumptions or prior knowledge. Fortunately, if some shadow variables are available in the observational data, it is still possible to identify treatment effects from observational data (Miao & Tchetgen Tchetgen, 2016). As shown in Figure 1(c), shadow variables $Z$ are assumed to be fully observed covariates independent of the sample selection mechanism after conditioning on the outcome and other covariates, i.e., a valid shadow variable needs to simultaneously satisfy that $Z \perp\!\!\!\perp Y | X, T, S = 1$ and $Z \perp\!\!\!\perp S | X, T, Y$. For example, when studying the effect of students’ mental health ($T$) on teachers’ assessment ($Y$), collider bias occurs since teachers might not be willing to report their assessment of students with poor mental health. The teacher’s response rate ($S$) may be related to their assessment of the student but is unlikely to be related to a separate parent’s report after conditioning on the teacher’s assessment and fully observed covariates; moreover, the parent’s report is likely highly correlated with the teacher’s. In this case, the parental assessment can be considered a shadow variable (Ibrahim et al., 2001). With the help of shadow variables, treatment effects can be identified and estimated (d’Haultfoeuille, 2010; Wang et al., 2014; Miao & Tchetgen Tchetgen, 2016).
However, finding a well-defined shadow variable requires domain-specific knowledge from experts and needs to be investigated on a case-by-case basis, which is often as challenging as dealing with collider bias itself in real-world scenarios (Li et al., 2023). Therefore, we propose a novel method named ShadowCatcher that automatically generates representations from the observed covariates satisfying the assumptions of shadow variables, which can serve the role of shadow variables in the treatment effect estimation process and thus address collider bias without introducing additional prior knowledge. Specifically, we iteratively generate shadow-variable representations under conditional independence constraints and test whether the generated representations satisfy the assumptions, until the generated representations pass the hypothesis test. Furthermore, we also propose a novel ShadowEstimator to estimate treatment effects under collider bias by leveraging the generated shadow-variable representations. We conduct experiments on synthetic and real-world datasets, including ablation studies, and the results demonstrate the effectiveness of our proposed ShadowCatcher and ShadowEstimator.
The main contributions in this paper are as follows: (1) We study a practical and challenging problem of treatment effect estimation from observational data under collider bias. (2) We propose a novel ShadowCatcher that automatically generates representations serving the role of shadow variables from the observed covariates, which overcomes the common difficulty in finding valid shadow variables in real-world scenarios. (3) We propose a novel ShadowEstimator to estimate treatment effects using the generated shadow variable representations to address the collider bias in observational data. (4) Extensive experiments show that our proposed methods can practically generate shadow variable representations and address collider bias in treatment effect estimation.
2 RELATED WORK
Previous works on treatment effect estimation mainly focus on confounding bias in observational studies. Reweighting methods either use the inverse propensity score (Dehejia & Wahba, 2002) or learn a balancing weight from data (Hainmueller, 2012; Athey et al., 2018) to make $T$ and $X$ of the reweighted samples independent. Balanced representation learning methods (Johansson et al., 2016)
learn representations of covariates so that the learned representations are independent of the treatment variable. Causal Forest (Wager & Athey, 2018) builds a large number of causal trees and then estimates heterogeneous treatment effects by taking an average of the outcomes from these causal trees. Generative methods (Louizos et al., 2017; Yoon et al., 2018) utilize generative models to generate counterfactual data. However, all the above methods suffer from sample selection bias because of the distribution shift problem.
To address sample selection bias, Heckman (1979) proposed a two-stage regression method, which has many extensions (Marchenko & Genton, 2012; Ding, 2014; Ogundimu & Hutton, 2016; Wiemann et al., 2022). Cole & Stuart (2010) proposed a sample reweighting method, which reweights the selected samples using the estimated inverse conditional probability of sample selection as weights. Bareinboim et al. (2014); Bareinboim & Tian (2015) proposed the selection-backdoor adjustment approach, which blocks the selection-backdoor paths. However, all these methods can only solve selection bias caused by the covariates and the treatment; they cannot solve collider bias, in which $Y$ also causes $S$ and which is more likely to appear in real-world scenarios.
Fortunately, treatment effects are identifiable under collider bias if some shadow variables are available in the observational data (d’Haultfoeuille, 2010; Miao & Tchetgen Tchetgen, 2016). Shadow variables are assumed to satisfy that $Z \not\perp\!\!\!\perp Y \mid X, T, S = 1$ and $Z \perp\!\!\!\perp S \mid X, T, Y$. With the help of shadow variables, various estimators, including regression-based (d’Haultfoeuille, 2010; Zhao & Shao, 2016), IPSW-based (Wang et al., 2014), and doubly-robust-based (Miao & Tchetgen Tchetgen, 2016) were proposed to solve collider bias. However, the accessibility of valid shadow variables itself is a strong assumption because finding a well-defined shadow variable requires domain-specific knowledge of experts and needs to be investigated on a case-by-case basis (Li et al., 2023). Therefore, our proposed method that automatically generates representations serving the role of shadow variables can effectively relax the assumptions of solving collider bias and has excellent application values.
### 3 PROBLEM AND ALGORITHM
#### 3.1 PROBLEM FORMULATION
Suppose we have observational data $\mathcal{D} = \{x_i, t_i, y_i^{\text{obs}}, s_i\}_{i=1}^{n}$, where $n$ denotes the number of units. For the $i$th unit, we observe its treatment variable $t_i$; its selection indicator $s_i$, which indicates whether the unit is selected into the sample, i.e., whether the value of the outcome can be observed; its covariates $x_i \in \mathbb{R}^{d \times 1}$, where $d$ denotes the dimension of the covariates; and its observed outcome variable $y_i^{\text{obs}}$, which equals $y_i$ when $s_i = 1$ and is missing (labeled by $s_i = 0$) otherwise. Figure 1(d) illustrates the collected data form in the presence of collider bias.
In this paper, we focus on the case of binary treatment\(^2\), i.e., $t_i \in \{0, 1\}$, where $t_i = 1$ denotes unit $i$ is treated, and $t_i = 0$ denotes otherwise. Under the potential outcome framework (Imbens & Rubin, 2015), we define the potential outcomes under treatment as $Y(1)$ and under control as $Y(0)$. With the observational data, our goal is to estimate the Conditional Average Treatment effect (CATE), which is defined as $\tau(x) = \mathbb{E}[Y(1) - Y(0) \mid X = x]$. For a unit $i$ with $t_i$ in $\mathcal{D}$, only the factual outcome $Y(t_i)$ is available. Therefore, to make CATE identifiable, we make the following commonly used assumptions (Imbens & Rubin, 2015):
- **Stable Unit Treatment Value Assumption.** The distribution of the potential outcome of one unit is assumed to be independent of the treatment assignment of another unit.
- **Overlap Assumption.** A unit has a nonzero probability of being treated and being selected, $0 < \mathbb{P}(T = 1 \mid X = x) < 1$ and $0 < \mathbb{P}(S = 1 \mid X = x) < 1$.
- **Unconfoundedness Assumption.** The treatments are independent of the potential outcomes given the covariates, i.e., $Y(1), Y(0) \perp\!\!\!\perp T \mid X$.
Based on the above assumptions, CATE can be estimated as:
$$\tau(x) = \mathbb{E}[Y \mid X = x, T = 1] - \mathbb{E}[Y \mid X = x, T = 0].$$
However, because the values of $Y$ are missing in $S = 0$ units caused by collider bias, we can only estimate the CATE of $S = 1$ samples, which differs from the true CATE of the entire data because
\(^2\)In this paper, we mainly focus on how to generate shadow-variable representations to address collider bias. To make the proposed ShadowCatcher and ShadowEstimator process more concise, here we consider the binary treatment setting, but our proposed methods can also be effectively applied to continuous treatment settings.
\[ \mathbb{E}[Y \mid X = x, T = t, S = 1] \neq \mathbb{E}[Y \mid X = x, T = t]. \] What is worse, since collider bias results in \( Y(1), Y(0) \not\perp T \mid X, S = 1 \), the unconfoundedness assumption is violated when conditioning on \( S = 1 \). It leads to a biased estimation using the observed samples, which means that the estimated CATE of the \( S = 1 \) samples even differs from the true CATE of only the \( S = 1 \) data. Therefore, it is necessary to develop approaches to solve collider bias for treatment effect estimation. Fortunately, studies show that treatment effects can be identifiable under collider bias if some shadow variables are available in the observational data (d’Haultfoeuille, 2010; Miao & Tchetgen Tchetgen, 2016).
### 3.2 Preliminaries of the Shadow Variable
Valid shadow variables \( Z \) are supposed to be fully observed covariates, i.e., the values of \( Z \) are observable in both \( S = 0 \) and \( S = 1 \) data like \( X \), and satisfy the following assumption:
**Assumption 1** (d’Haultfoeuille, 2010). \( Z \not\perp Y \mid X, T, S = 1 \) and \( Z \perp S \mid X, T, Y \).
As shown in Figure 1(c), Assumption 1 indicates that the shadow variable does not affect the sample selection mechanism after conditioning on the outcome and other observed covariates, and it is associated with the outcome given the covariates. This assumption is widely used in the literature of collider bias (d’Haultfoeuille, 2010; Wang et al., 2014; Miao & Tchetgen Tchetgen, 2016; Zhao & Shao, 2016; Li et al., 2023), and an illustrative example can be found in Section 1.
Throughout the paper, let \( f(\cdot) \) denote the data distribution function. The key problem of collider bias is that the outcome values are missing in \( S = 0 \) data, which results in \( f(Y \mid X, Z, T, S = 0) \) not available from the observed data. We can use the odds ratio function to encode the deviation between the distribution of \( S = 1 \) data and that of \( S = 0 \) data, which can be expressed as follows under Assumption 1 (Miao & Tchetgen Tchetgen, 2016):
\[
\text{OR}(X, Z, T, Y) = \text{OR}(X, T, Y) = \frac{f(S = 0 \mid X, T, Y) \cdot f(S = 1 \mid X, T, Y = 0)}{f(S = 0 \mid X, T, Y = 0) \cdot f(S = 1 \mid X, T, Y)}. \tag{2}
\]
In Equation (2), \( Y = 0 \) is used as a reference value, and \( \text{OR}(X, T, Y = 0) = 1 \), which can be replaced by any other value within the support of \( Y \). The odds ratio function measures the degree to which the \( S = 0 \) data differs from the \( S = 1 \) data and thus can be used to recover the unknown \( f(Y \mid X, Z, T, S = 0) \) from the observed \( f(Y \mid X, Z, T, S = 1) \) through the following proposition:
**Proposition 1** (Miao & Tchetgen Tchetgen, 2016). Given Assumption 1, we have that
\[
f(Y \mid X, Z, T, S = 0) = \frac{\text{OR}(X, T, Y) \cdot f(Y \mid X, Z, T, S = 1)}{\mathbb{E}[\text{OR}(X, T, Y) \mid X, Z, T, S = 1]}, \tag{3}
\]
\[
\mathbb{E}[\widetilde{\text{OR}}(X, T, Y) \mid X, Z, T, S = 1] = \frac{f(Z \mid X, T, S = 0)}{f(Z \mid X, T, S = 1)}, \tag{4}
\]
where \( \widetilde{\text{OR}}(X, T, Y) = \text{OR}(X, T, Y)/\mathbb{E}[\text{OR}(X, T, Y) \mid X, T, S = 1] \). Equation (3) shows that the key problem that \( f(Y \mid X, Z, T, S = 0) \) is unidentifiable can be solved under Assumption 1 by integrating the odds ratio function with the \( S = 1 \) data distribution. Since \( f(Y \mid X, Z, T, S = 1) \) can be obtained from the fully observed \( S = 1 \) samples, the only problem becomes the identification of the odds ratio function. Fortunately, with \( f(Z \mid X, T, S = 0) \) and \( f(Z \mid X, T, S = 1) \) obtained from the observed data, Equation (4) is a Fredholm integral equation of the first kind, with \( \widetilde{\text{OR}}(X, T, Y) \) to be solved for. Because \( \text{OR}(X, T, Y = 0) = 1 \), we have the following result:
\[
\text{OR}(X, T, Y) = \frac{\widetilde{\text{OR}}(X, T, Y)}{\widetilde{\text{OR}}(X, T, Y = 0)}. \tag{5}
\]
Therefore, identification of \( \text{OR}(X, T, Y) \) is equivalent to finding a unique solution to Equation (4), which is guaranteed by the following theorem:
**Theorem 1** (Miao & Tchetgen Tchetgen, 2016). Under Assumption 1 and the completeness condition of \( f(Y \mid X, Z, T, S = 1) \), Equation (4) has a unique solution. Thus \( \text{OR}(X, T, Y) \) and \( f(Y \mid X, Z, T) \) can be identified.
---
3 See Appendix A.3.1 for more detailed explanation.
4 See Appendix A.3.2 for more detailed explanation.
Based on the above theorem, collider bias can be solved with the help of shadow variables by firstly estimating \( \text{OR}(X, T, Y) \) through Equation (4) and (5), then recovering \( f(Y | X, Z, T, S = 0) \) through Equation (3), and finally estimating \( f(Y | X, Z, T) \). However, finding a well-defined shadow variable in real-world scenarios is also challenging because it requires domain-specific knowledge of experts and must be investigated on a case-by-case basis (Li et al., 2023). To relax the assumption that prior knowledge about shadow variables is needed, we propose a novel ShadowCatcher to generate representations serving the role of shadow variables directly from observed covariates without prior knowledge and a novel ShadowEstimator to estimate CATE under collider bias with the help of the generated shadow variable representations.
### 3.3 ShadowCatcher
Intuitively, as shown in Figure 1(c), the causal link \( X \rightarrow Z \) indicates that the shadow variable is possible to be learned from the fully observed covariates. Therefore, our proposed ShadowCatcher aims to learn representations \( Z \) from \( X \) that satisfy the shadow variable assumptions. To achieve this goal, we must ensure that the generated representations do satisfy Assumption 1.
As stated in Assumption 1, a valid shadow variable needs to satisfy two conditional (in)dependence assumptions: (1) \( Z \not\perp\!\!\!\perp Y | X, T, S = 1 \), (2) \( Z \perp\!\!\!\perp S | X, T, Y \). The first assumption can be easily tested with only the observed data because only \( S = 1 \) data is involved. The second assumption, however, requires \( Y \) to be fully observed, whereas \( Y \) values are missing for \( S = 0 \) data. Fortunately, this assumption is proven refutable with only the observed data.
**Theorem 2** (d’Haultfoeuille, 2010). Suppose the overlap assumption and \( Z \not\perp\!\!\!\perp Y | X, T, S = 1 \) hold, then \( Z \perp\!\!\!\perp S | X, T, Y \) can be rejected if and only if there does not exist any function \( Q(\cdot) \) that satisfies the following equation and takes value between \((0, 1]\):
\[
E[S/Q(X, T, Y) - 1 | X, Z, T] = 0.
\]
Note that Equation (6) only involves the observed data since \( X, Z, T \) are fully observed and \( S/Q(X, T, Y) = 0 \) when \( S = 0 \). Hence, although we cannot directly test whether the generated \( Z \) satisfies the second assumption, we can test whether the generated \( Z \) can be rejected by Equation (6). As a result, we can tell ShadowCatcher generates valid shadow-variable representations if and only if the generated \( Z \) is tested to be not refutable.
Therefore, ShadowCatcher iteratively generates shadow-variable representations and tests whether the generated representations satisfy Assumption 1 until the generated representations can pass the hypothesis test, detailed as follows.
**Generation Phase.** During the generation process, ShadowCatcher uses a representations generator \( g(X) \rightarrow Z \) to learn representations \( Z \) from \( X \) with the following two constraints:
1. **Constraining \( Z \not\perp\!\!\!\perp Y | X, T, S = 1 \) by a selected outcome estimator.** This estimator aims to estimate \( f(Y | X, Z, T, S = 1) \) with \( S = 1 \) samples and generated \( Z \). The objective is to learn a function \( h_{y_1}(X, Z, T) \rightarrow Y \) by minimizing the Mean-Square Error (MSE) between \( h_{y_1}(X_{S=1}, Z_{S=1}, T_{S=1}) \)
and \( Y_{S=1} \), where \( X_{S=1}, Z_{S=1}, T_{S=1}, \) and \( Y_{S=1} \) denote the value of the corresponding variables of the \( S = 1 \) data. The loss function of this estimator is
\[
L_{y_1} = \frac{1}{n_1} \sum_{i:s_i=1} (h_{y_1}(x_i, z_i, t_i) - y_i)^2,
\]
where \( n_1 \) denotes the number of \( S = 1 \) units in \( D \). Note that ShadowEstimator also uses this estimator. To constrain the generated \( Z \) to satisfy \( Z \not\perp\!\!\!\perp Y | X, T, S = 1 \), we need to make \( f(Y | X, Z, T, S = 1) \) differ from \( f(Y | X, Z^-, T, S = 1) \), where \( Z^- \) denotes a value that differs significantly from \( Z \), e.g., for binary \( Z \), \( Z^- = 1 - Z \); for continuous \( Z \), \( Z^- \) can be a random \( Z \). Therefore, one objective of the generator is to simultaneously minimize the MSE between \( h_{y_1}(X_{S=1}, Z_{S=1}, T_{S=1}) \) and \( Y_{S=1} \), and maximize the MSE between \( h_{y_1}(X_{S=1}, Z^-_{S=1}, T_{S=1}) \) and \( Y_{S=1} \), where \( Z^-_{S=1} \) denotes \( Z^- \) of the \( S = 1 \) data, i.e., to minimize the following loss function:
\[
L_{gy} = \frac{1}{n_1} \sum_{i:s_i=1} (h_{y_1}(x_i, z_i, t_i) - y_i)^2 - \frac{1}{n_1} \sum_{i:s_i=1} (h_{y_1}(x_i, z_i^-, t_i) - y_i)^2.
\]
(2) Constraining \( Z \perp\!\!\!\perp S | X, T, Y \) by a representations estimator. This estimator aims to estimate \( f(Z | X, T, Y, S = 1) \) with \( S = 1 \) samples and generated \( Z \). The objective is to learn a function \( h_r(X, T, Y) \rightarrow Z \) to minimize the MSE between \( h_r(X_{S=1}, T_{S=1}, Y_{S=1}) \) and \( Z_{S=1} \). The loss function of this estimator is
\[
L_r = \frac{1}{n_1} \sum_{i:s_i=1} (h_r(x_i, t_i, y_i) - z_i)^2.
\]
To constrain the generated \( Z \) to satisfy \( Z \perp\!\!\!\perp S | X, T, Y \), we need to make \( f(Z | X, T, Y, S = 1) \) the same as \( f(Z | X, T, Y, S = 0) \). Therefore, the other objective of the generator is to minimize the MSE between \( h_r(X_{S=0}, T_{S=0}, Y_{S=0}) \) and \( Z_{S=0} \), where \( X_{S=0}, Z_{S=0}, T_{S=0}, \) and \( Y_{S=0} \) denote the value of the corresponding variables of the \( S = 0 \) data, i.e., to minimize the following loss function:
\[
L_{ga} = \frac{1}{n_0} \sum_{i:s_i=0} (h_r(x_i, t_i, h_{y_1}(x_i, z_i, t_i)) - z_i)^2,
\]
where \( n_0 \) denotes the number of \( S = 0 \) units in \( D \). Since the \( Y \) values are missing for \( S = 0 \) units, here we use \( Y_{S=0} \) predicted by \( h_{y_1} \) as substitutes. This imputation approach may harm the constraining process, but we can control this impact in the subsequent hypothesis test phase.
Therefore, the total loss of the representations generator is
\[
L_g = L_{gy} + L_{ga}.
\]
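A compact sketch of the generation-phase objective follows, assuming `g`, `h_y1`, and `h_r` are arbitrary `torch` modules taking the listed arguments, and that \( Z^- \) is obtained by randomly permuting \( Z \) within the batch (one of the options mentioned above for continuous \( Z \)).

```python
import torch
import torch.nn.functional as F

def generator_loss(g, h_y1, h_r, x1, t1, y1, x0, t0):
    # (x1, t1, y1): S=1 batch; (x0, t0): S=0 batch (outcomes missing).
    z1, z0 = g(x1), g(x0)
    z_neg = z1[torch.randperm(z1.shape[0])]      # Z^- for continuous Z
    # L_gy: keep Z predictive of Y while Z^- is not  (Z not indep. of Y)
    l_gy = F.mse_loss(h_y1(x1, z1, t1), y1) \
         - F.mse_loss(h_y1(x1, z_neg, t1), y1)
    # L_ga: h_r should also reconstruct Z on S=0 data, with Y imputed by h_y1
    l_ga = F.mse_loss(h_r(x0, t0, h_y1(x0, z0, t0)), z0)
    return l_gy + l_ga                            # L_g = L_gy + L_ga
```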
Hypothesis Test Phase. In the generation process, the \( Z \perp\!\!\!\perp S | X, T, Y \) assumption is not strictly constrained due to the missing \( Y \) values for \( S = 0 \) units. Therefore, ShadowCatcher conducts an additional hypothesis test based on Theorem 2 after the generation phase finishes. The tester aims to learn a solution \( q \) of \( Q(X, T, Y) \) in Equation (6) that belongs to \((0, 1]\), which we turn into an optimization problem by minimizing
\[
L_q = \left\| \frac{1}{n} \sum_{i=1}^{n} \left( s_i/q(x_i, t_i, y_i) - 1 \right) \cdot (x_i, z_i, t_i) \right\|_2,
\]
where \( q(\cdot) \) is a function mapping \( (x_i, t_i, y_i) \) to \((0, 1]\) and \( \| \cdot \|_2 \) denotes the \( \ell_2 \) norm. Note that for \( s_i = 0 \) units, the value of \( s_i/q(x_i, t_i, y_i) \) equals 0, and thus, the entire optimization process does not involve missing \( y_i \) values. Therefore, when the loss function converges, if the loss value is greater than a given threshold \( \alpha \), which means it fails to learn a \( q \) that satisfies Equation (6), we can tell that no solution of Equation (6) belongs to \((0, 1]\) and Assumption 1 is rejected. In that case, the generated \( Z \) does not satisfy Assumption 1, and we need to regenerate it until it can pass the hypothesis test, i.e., until the converged loss value is less than \( \alpha \). To preempt the possible multiple-comparisons issue, we use the Bonferroni correction (Dunn, 1961) to dynamically adjust \( \alpha \) during training, setting it to \( \frac{\alpha}{m} \) in the \( m \)-th iteration. Finally, the first generated \( Z \) that passes the test can serve the role of shadow variables and be used for treatment effect estimation under collider bias by ShadowEstimator.
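A sketch of the tester's objective, turning the conditional moment of Equation (6) into unconditional moments with \( (x, z, t) \) as instruments; the sigmoid parameterization of \( q \) (confining it to \((0, 1)\)) is an assumption.

```python
import torch

def tester_loss(q_net, x, z, t, y, s):
    # s in {0, 1}; for s_i = 0 rows, s/q is identically 0, so any placeholder
    # stored in y for unselected units never influences the loss.
    q = torch.sigmoid(q_net(torch.cat([x, t, y], dim=-1))).squeeze(-1)
    resid = s / q - 1.0                                   # (n,)
    instruments = torch.cat([x, z, t], dim=-1)            # (n, p)
    moments = (resid.unsqueeze(-1) * instruments).mean(dim=0)
    return torch.linalg.norm(moments)  # compare against threshold alpha
```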
3.4 ShadowEstimator
With the help of the generated shadow-variable representations, we can estimate treatment effects under collider bias through: 1) estimating \( \widetilde{\text{OR}}(X, T, Y) \) and \( \text{OR}(X, T, Y) \) by Equations (4) and (5); 2) using Equation (3) to recover and estimate \( f(Y | X, Z, T, S = 0) \); 3) estimating \( f(S | X, Z, T) \); 4) estimating \( f(Y | X, Z, T) \) and the CATE using the estimated \( f(Y | X, Z, T, S = 0), f(Y | X, Z, T, S = 1) \), and \( f(S | X, Z, T) \). Note that \( f(Y | X, Z, T, S = 1) \) is available from ShadowCatcher.
**Estimation of \( \widetilde{\text{OR}}(X, T, Y) \) and \( \text{OR}(X, T, Y) \).** With the generated \( Z \) and fully observed \( X \) and \( T \), we first use two shadow-variable estimators \( h_{z_0}(X, T) \) and \( h_{z_1}(X, T) \) to estimate \( f(Z | X, T, S = 0) \) and \( f(Z | X, T, S = 1) \) respectively, by minimizing the following loss functions:
\[
L_{z_0} = \frac{1}{n_0} \sum_{i:s_i=0} (h_{z_0}(x_i, t_i) - z_i)^2, \quad L_{z_1} = \frac{1}{n_1} \sum_{i:s_i=1} (h_{z_1}(x_i, t_i) - z_i)^2.
\]
Using \( X, T, \) and \( Y \) of the \( S = 1 \) units and \( h_{z_0}(X, T)/h_{z_1}(X, T) \) as the ground truths, we then estimate \( \widetilde{\text{OR}}(X, T, Y) \) by minimizing the following loss function:
\[
L_{\text{or}} = \frac{1}{n_1} \sum_{i:s_i=1} (\tilde{\text{or}}(x_i, t_i, y_i) - h_{z_0}(x_i, t_i)/h_{z_1}(x_i, t_i))^2,
\]
where \( \tilde{\text{or}}(\cdot) \) is the estimated \( \widetilde{\text{OR}}(\cdot) \). Then we can obtain \( \text{OR}(X, T, Y) \) from \( \tilde{\text{or}}(\cdot) \) by Equation (5).
**Estimation of \( f(Y | X, Z, T, S = 0) \).** With the estimated \( \text{OR}(X, T, Y), f(Y | X, Z, T, S = 1) \) and \( \mathbb{E}[\text{OR}(X, T, Y) | X, Z, T, S = 1] \) equaling \( h_{z_0}(X, T)/h_{z_1}(X, T) \), the ground truth \( f(Y | X, Z, T, S = 0) \) of \( S = 1 \) samples can be obtained by Equation (3). Therefore, we can learn a function \( h_{y_0}(X, Z, T) \rightarrow Y \) to estimate \( f(Y | X, Z, T, S = 0) \) using \( S = 1 \) samples by minimizing the following loss function:
\[
L_{y_0} = \frac{1}{n_1} \sum_{i:s_i=1} \left( h_{y_0}(x_i, z_i, t_i) - \frac{\tilde{\text{or}}(x_i, t_i, y_i) \cdot h_{y_1}(x_i, z_i, t_i) \cdot h_{z_1}(x_i, t_i)}{\tilde{\text{or}}(x_i, t_i, 0) \cdot h_{z_0}(x_i, t_i)} \right)^2.
\]
**Estimation of \( f(Y | X, Z, T) \).** Now that \( f(Y | X, Z, T, S = 0) \) and \( f(Y | X, Z, T, S = 1) \) are both estimated, estimation of \( f(Y | X, Z, T) \) becomes estimation of \( f(S | X, Z, T) \), which can be achieved by minimizing the following loss function using fully observed \( X, Z \) and \( T \) to learn a function \( h_s(X, Z, T) \rightarrow S \):
\[
L_s = -\frac{1}{n} \sum_{i=1}^n (s_i \cdot \log(h_s(x_i, z_i, t_i)) + (1 - s_i) \cdot \log(1 - h_s(x_i, z_i, t_i))),
\]
and then we can obtain \( f(Y | X, Z, T) \) by:
\[
f(Y | X, Z, T) = \sum_{s \in \{0, 1\}} f(Y | X, Z, T, S = s) \cdot f(S = s | X, Z, T).
\]
Then, we can use Equation (1) to achieve CATE estimation under collider bias. Note that we apply existing de-confounding methods (Shalit et al., 2017) to the outcome estimators during training to address possible confounding bias. The pseudo-codes and the overall flowchart are in Appendix A.1.
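As a small illustration of the final mixing step, the NumPy sketch below combines the two outcome estimators with the selection estimator to obtain the CATE of Equation (1). For brevity it works with conditional means rather than full densities; `h_y0`, `h_y1`, and `h_s` are assumed to be already-trained callables, and all names are hypothetical.

```python
import numpy as np

def outcome(x, z, t, h_y0, h_y1, h_s):
    """E[Y | X, Z, T] marginalized over the selection indicator S."""
    p_s1 = h_s(x, z, t)                        # estimate of f(S=1 | X, Z, T)
    return h_y1(x, z, t) * p_s1 + h_y0(x, z, t) * (1.0 - p_s1)

def cate(x, z, h_y0, h_y1, h_s):
    """CATE(x, z) = E[Y | X, Z, T=1] - E[Y | X, Z, T=0]."""
    t1 = np.ones((len(x), 1))
    t0 = np.zeros((len(x), 1))
    return (outcome(x, z, t1, h_y0, h_y1, h_s)
            - outcome(x, z, t0, h_y0, h_y1, h_s))
```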
### 4 EXPERIMENTS
#### 4.1 BASELINES
As stated in Section 2, there is currently no causal inference method that can solve collider bias without introducing additional assumptions and prior knowledge. Therefore, we implement the following treatment effect estimators that focus on confounding bias and sample selection bias caused by \( X \) and \( T \) as our baselines: (1) Heckman’s Correction (Heckit) (Heckman, 1979), (2) Doubly Robust (Bang & Robins, 2005), (3) Inverse Probability of Sampling Weights (IPSW) (Cole & Stuart, 2010), (4) Balancing Neural Network (BNN) (Johansson et al., 2016), (5) Treatment-Agnostic Representation Network (TARNet), (6) CounterFactual Regression (CFR) (Shalit et al., 2017), (7) Causal Forest (CForest) (Wager & Athey, 2018), (8) Disentangled Representations for CounterFactual Regression (DR-CFR) (Hassanpour & Greiner, 2020), (9) TEDVAE (Zhang et al., 2021), (10) Decomposed Representations for CounterFactual Regression (DeR-CFR) (Wu et al., 2022), (11)
Table 1: The results of CATE estimation ($\sqrt{\text{PEHE}}$) on synthetic datasets under different $\beta$.
| Estimator | Selected data (β=1) | Unselected data (β=1) | Selected data (β=3) | Unselected data (β=3) | Selected data (β=5) | Unselected data (β=5) |
|--------------|---------------|-----------------|---------------|-----------------|---------------|-----------------|
| Heckit | 0.323±0.065 | 0.330±0.046 | 0.340±0.055 | 0.352±0.042 | 0.349±0.069 | 0.413±0.048 |
| DR | 0.298±0.032 | 0.316±0.042 | 0.331±0.048 | 0.357±0.053 | 0.367±0.033 | 0.448±0.017 |
| IPSW | 0.328±0.048 | 0.348±0.049 | 0.328±0.031 | 0.353±0.034 | 0.465±0.011 | 0.545±0.014 |
| BNN | 0.290±0.011 | 0.306±0.012 | 0.329±0.048 | 0.354±0.033 | 0.359±0.011 | 0.439±0.015 |
| TARNet | 0.295±0.012 | 0.312±0.011 | 0.329±0.030 | 0.357±0.053 | 0.366±0.071 | 0.436±0.087 |
| CFR | 0.290±0.009 | 0.307±0.008 | 0.324±0.009 | 0.350±0.013 | 0.359±0.008 | 0.436±0.030 |
| CForest | 0.310±0.030 | 0.331±0.038 | 0.338±0.019 | 0.368±0.022 | 0.373±0.026 | 0.453±0.043 |
| DR-CFR | 0.284±0.038 | 0.307±0.040 | 0.340±0.055 | 0.355±0.064 | 0.366±0.051 | 0.435±0.060 |
| TEDVAE | 0.281±0.056 | 0.419±0.070 | 0.378±0.063 | 0.420±0.059 | 0.394±0.054 | 0.431±0.067 |
| DeR-CFR | 0.291±0.010 | 0.309±0.014 | 0.323±0.015 | 0.348±0.017 | 0.358±0.011 | 0.439±0.013 |
| DESCN | 0.295±0.002 | 0.312±0.002 | 0.326±0.003 | 0.357±0.004 | 0.365±0.003 | 0.449±0.011 |
| ES-CFR | 0.289±0.003 | 0.305±0.004 | 0.331±0.003 | 0.359±0.003 | 0.369±0.003 | 0.448±0.005 |
| Ours | 0.241±0.014 | 0.248±0.009 | 0.305±0.013 | 0.326±0.015 | 0.333±0.040 | 0.404±0.053 |
| Ours (New) | 0.227±0.001 | 0.229±0.001 | 0.249±0.013 | 0.255±0.021 | 0.299±0.008 | 0.300±0.008 |
Deep Entire Space Cross Networks (DESCN) (Zhong et al., 2022), (12) Entire Space CounterFactual Regression (ES-CFR) (Wang et al., 2023) to estimate the CATE and compare them with our proposed methods. Based on the estimated CATE, we use the Precision in Estimation of Heterogeneous Effect (PEHE) (Shalit et al., 2017; Louizos et al., 2017) to evaluate the performance of the above methods, where $\text{PEHE} = \frac{1}{N} \sum_{i=1}^{N} \left( (\hat{y}_i(1) - \hat{y}_i(0)) - (y_i(1) - y_i(0)) \right)^2$. We split each dataset into 60/20/20 train/validation/test sets, independently repeat 20 times, and report the mean and standard deviation (std) of $\sqrt{\text{PEHE}}$ for all experiments, formatted as mean ± std in the tables.
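For reference, a minimal NumPy sketch of the \( \sqrt{\text{PEHE}} \) computation, assuming ground-truth potential outcomes `y1`, `y0` and model estimates `y1_hat`, `y0_hat` as arrays:

```python
import numpy as np

def sqrt_pehe(y1_hat, y0_hat, y1, y0):
    """Root Precision in Estimation of Heterogeneous Effect."""
    tau_hat = y1_hat - y0_hat      # estimated individual treatment effects
    tau = y1 - y0                  # ground-truth individual treatment effects
    return np.sqrt(np.mean((tau_hat - tau) ** 2))
```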
### 4.2 Experiments on Synthetic Data
#### 4.2.1 Datasets
In order to better evaluate the performance of each estimator under collider bias, we generate synthetic datasets with different collider bias strengths, denoted by $\beta$, which controls the impact of $Y$ on $S$. The size $n$ of all datasets is 10,000, and the dimension $d$ of the covariates is 10. To compare our methods with the baselines under different strengths of collider bias, we set $d_s = 0.9 \cdot d$ and evaluate the performance of each estimator under $\beta = \{1, 3, 5\}$. We also conduct additional experiments on the synthetic data to evaluate the impact of different proportions of non-shadow variables in the covariates, the impact of the rejection threshold $\alpha$ of ShadowCatcher, and the effectiveness of the constraints in the generation phase of ShadowCatcher. The data generation process and the additional experiments are detailed in Appendix A.2.
#### 4.2.2 Results
We separately report the results of the selected data ($S = 1$) and unselected data ($S = 0$) in Table 1 under different collider bias strengths with $\beta = \{1, 3, 5\}$. We observe that: (1) The overall performance of DR, BNN, CFR, CForest, TEDVAE, DR-CFR, DESCN, DeR-CFR and ES-CFR is not good because they all focus on confounding bias and thus cannot deal with sample selection bias. (2) The performance of Heckit and IPSW is also poor because they can only address sample selection bias caused by $T$ and $X$ and cannot address collider bias because of the spurious association $T \rightarrow S \leftarrow Y$. (3) Our method outperforms all baselines under all $\beta$ settings because the generated representations by ShadowCatcher make identification under collider bias possible, and ShadowEstimator provides a practical solution. (4) As collider bias strengthens, the performance gap between selected and unselected data increases because the more substantial the collider bias is, the more significant the distribution shift problem is. However, this gap for our method is much smaller than that of other baselines, which demonstrates that our proposed approaches can practically address collider bias.
### 4.3 Experiments on Real-world Data
#### 4.3.1 Datasets
In order to evaluate the proposed method in real-world scenarios, we conduct experiments on three well-known datasets: the IHDP dataset (Hill, 2011), the ACIC 2016 dataset (Dorie et al., 2019), and the Jobs dataset.\(^5\)
Table 2: The results of CATE estimation on three real-world datasets.
| Estimator | IHDP (√PEHE) | ACIC 2016 (√PEHE) | Jobs (R_{Pol}) |
|------------|-------------|-------------------|----------------|
| | Within-sample | Out-of-sample | Within-sample | Out-of-sample | Within-sample | Out-of-sample |
| Heckit | 1.587±0.065 | 1.621±0.041 | 3.106±0.444 | 3.340±0.111 | 0.328±0.050 | 0.331±0.052 |
| DR | 1.355±0.123 | 1.572±0.205 | 2.346±0.129 | 2.653±0.222 | 0.316±0.007 | 0.317±0.036 |
| IPSW | 2.118±0.344 | 2.129±0.295 | 4.244±0.145 | 5.411±0.073 | 0.284±0.051 | 0.289±0.063 |
| BNN | 1.308±0.298 | 1.457±0.339 | 2.173±0.150 | 2.586±0.486 | 0.303±0.025 | 0.304±0.041 |
| TARNet | 1.240±0.158 | 1.416±0.154 | 2.275±0.756 | 2.805±0.766 | 0.315±0.012 | 0.316±0.050 |
| CFR | 1.283±0.186 | 1.401±0.238 | 2.107±0.297 | 2.361±0.587 | 0.313±0.018 | 0.314±0.072 |
| CForest | 1.702±0.292 | 1.948±0.429 | 4.137±0.295 | 4.605±0.137 | 0.326±0.012 | 0.326±0.059 |
| DR-CFR | 1.299±0.087 | 1.399±0.171 | 2.240±0.691 | 2.340±0.663 | 0.322±0.022 | 0.323±0.099 |
| TEDVAE | 4.246±0.394 | 4.347±0.563 | 3.501±0.708 | 4.468±0.813 | 0.296±0.046 | 0.300±0.031 |
| DeR-CFR | 1.446±0.345 | 1.571±0.371 | 2.214±0.204 | 2.246±0.598 | 0.309±0.023 | 0.311±0.029 |
| DESCN | 1.193±0.057 | 1.665±0.246 | 2.185±0.150 | 2.306±0.236 | 0.331±0.010 | 0.331±0.051 |
| ES-CFR | 1.499±0.096 | 1.436±0.095 | 3.875±0.224 | 4.494±0.214 | 0.290±0.045 | 0.293±0.046 |
| Ours | 1.039±0.069 | 1.065±0.099 | 2.078±0.333 | 2.142±0.390 | 0.283±0.018 | 0.284±0.080 |
| Ours (New) | 0.703±0.106 | 0.723±0.102 | 1.911±0.126 | 2.047±0.351 | 0.279±0.017 | 0.280±0.018 |
The ground truth CATE is known in the IHDP and ACIC 2016 datasets, so we use the same metric as in the experiments on the synthetic data. Following Shalit et al. (2017), since the ground truth CATE is unknown in the Jobs dataset, we use the policy risk to evaluate the quality of CATE estimation. The policy risk is defined as the average loss in value when treating according to the policy implied by a CATE estimator:
\[ R_{Pol} = 1 - (\mathbb{E}[Y(1) \mid \tau(x) > 0, T = 1] \cdot P(\tau(x) > 0) + \mathbb{E}[Y(0) \mid \tau(x) \leq 0, T = 0] \cdot P(\tau(x) \leq 0)). \]
We report the mean and std of the policy risk, formatted as mean ± std, in the table. More details about these datasets and the simulation process are provided in Appendix A.2.3.
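A hedged NumPy sketch of this policy-risk estimate follows; it plugs empirical averages into the formula above, with `tau_hat` denoting the estimated CATE, and all names are ours.

```python
import numpy as np

def policy_risk(tau_hat, y, t):
    """tau_hat: estimated CATE; y: observed outcomes; t: observed treatments."""
    pi = tau_hat > 0                       # policy: treat iff tau_hat(x) > 0
    p_treat = pi.mean()                    # empirical P(tau(x) > 0)
    # value of treating when the policy says treat ...
    v1 = y[pi & (t == 1)].mean() if np.any(pi & (t == 1)) else 0.0
    # ... and of withholding treatment when it says do not treat
    v0 = y[~pi & (t == 0)].mean() if np.any(~pi & (t == 0)) else 0.0
    return 1.0 - (v1 * p_treat + v0 * (1.0 - p_treat))
```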
### 4.3.2 Results
We separately report the results of within-sample data and out-of-sample data in Table 2, where within-sample means that the (factual) outcome of one treatment is observed, i.e., the \( S = 1 \) samples for training, and out-of-sample means no observed outcomes, i.e., the \( S = 1 \) samples for testing and all \( S = 0 \) samples (Shalit et al., 2017). From the results, we observe that: (1) The performance of the methods targeting confounding bias is not good because they cannot address sample selection bias. (2) The performance of the methods targeting sample selection bias is also poor because they can only address the cases in which \( X \) and \( T \) cause \( S \) and thus cannot achieve a better estimate under collider bias. (3) Our method outperforms all baselines on all three datasets because ShadowCatcher and ShadowEstimator effectively address collider bias in data. (4) The performance gap between our method's within-sample and out-of-sample results is also the lowest overall, demonstrating our method's counterfactual prediction ability. (5) Our method also shows the lowest policy risk on the Jobs dataset, which demonstrates the effectiveness of our methods in real-world applications.
### 5 Conclusion
In this paper, we overcome the challenge of finding valid shadow variables to estimate treatment effects under collider bias in observational studies. We propose a novel ShadowCatcher that can generate representations serving the role of shadow variables and a novel ShadowEstimator that uses the generated representations to estimate CATE under collider bias. Experimental results demonstrate the effectiveness and application value of ShadowCatcher and ShadowEstimator. One main limitation of our work is that the choice of the rejection threshold \( \alpha \) involves a tradeoff between efficiency and performance during the generation process of ShadowCatcher. The impact of different choices of \( \alpha \) on the efficiency and performance of ShadowCatcher is further discussed in Appendix A.2.
---
5The IHDP dataset is available at http://www.fredjo.com/; The ACIC 2016 dataset is available at https://github.com/vdorie/aciccomp/tree/master/2016; The Jobs dataset is available at https://users.nber.org/~rdehejia/nswdata2.html.
REFERENCES
Douglas Almond, Kenneth Y. Chay, and David S. Lee. The Costs of Low Birth Weight*. *The Quarterly Journal of Economics*, 120(3):1031–1083, 08 2005.
Susan Athey, Guido W. Imbens, and Stefan Wager. Approximate residual balancing: debiased inference of average treatment effects in high dimensions. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 80(4):597–623, 2018.
H. Bang and J. M. Robins. Doubly robust estimation in missing data and causal inference models. *Biometrics*, 61(4):962–73, 2005.
Elias Bareinboim and Jin Tian. Recovering causal effects from selection bias. In Blai Bonet and Sven Koenig (eds.), *Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA*, pp. 3475–3481. AAAI Press, 2015.
Elias Bareinboim, Jin Tian, and Judea Pearl. Recovering from selection bias in causal and statistical inference. In Carla E. Brodley and Peter Stone (eds.), *Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31, 2014, Québec City, Québec, Canada*, pp. 2410–2416. AAAI Press, 2014.
Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems: the example of computational advertising. *J. Mach. Learn. Res.*, 14(1):3207–3260, 2013.
J. Brooksgunn, F. R. Liaw, and P. K. Klebanov. Effects of early intervention on cognitive function of low-birth-weight preterm infants. *Journal of Pediatrics*, 120(3):350–359, 1992.
S. R. Cole and E. A. Stuart. Generalizing evidence from randomized clinical trials to target populations. *AMERICAN JOURNAL OF EPIDEMIOLOGY*, 172(1):107–115, 2010.
Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In *Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014*, volume 32 of *JMLR Workshop and Conference Proceedings*, pp. 685–693. JMLR.org, 2014.
R. H. Dehejia and S. Wahba. Propensity score-matching methods for nonexperimental causal studies. *Review of Economics and Statistics*, 84(1):151–161, 2002.
Peng Ding. Bayesian robust inference of sample selection using selection-t models. *Journal of Multivariate Analysis*, 124:451–464, 2014.
V. Dorie, J. Hill, U. Shalit, M. Scott, and D. Cervone. Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition. *STATISTICAL SCIENCE*, 34(1):43–68, 2019. ISSN 0883-4237 2168-8745. doi: 10.1214/18-STS667.
Olive Jean Dunn. Multiple comparisons among means. *Journal of the American Statistical Association*, 56(293):52–64, 1961.
Xavier d’Haultfoeuille. A new instrumental method for dealing with endogenous selection. *Journal of Econometrics*, 154(1):1–15, 2010.
S. Greenland. Quantifying biases in causal models: classical confounding vs collider-stratification bias. *Epidemiology*, 14(3):300–6, 2003.
Negar Hassanpour and Russell Greiner. Learning disentangled representations for counterfactual regression. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=HkxBJT4YvB.
R. C. Guo, L. Cheng, J. D. Li, P. R. Hahn, and H. Liu. A survey of learning causality with data: Problems and methods. *Acm Computing Surveys*, 53(4):75:1–75:37, 2020.
J. Hainmueller. Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. *Political Analysis*, 20(1):25–46, 2012.
J. J. Heckman. Sample selection bias as a specification error. *Econometrica*, 47(1):153–161, 1979.
|
VZVXqiaI4U
|
In 5.1, it is not explained how the “normal images” are obtained. This prevents us from discerning whether it really is the out-of-distribution attributes that increase the scores, or simply the difference between the generated images and the normal ones.
|
ATTRIBUTE-BASED INTERPRETABLE EVALUATION METRICS FOR GENERATIVE MODELS
Anonymous authors
Paper under double-blind review
ABSTRACT
When the training dataset comprises a 1:1 proportion of dogs to cats, a generative model that produces 1:1 dogs and cats better resembles the training species distribution than another model with 3:1 dogs and cats. Can we capture this phenomenon using existing metrics? Unfortunately, we cannot, because these metrics do not provide any interpretability beyond “diversity”. In this context, we propose a new evaluation protocol that measures the divergence of a set of generated images from the training set regarding the distribution of attribute strengths as follows. Single-attribute Divergence (SaD) reveals the attributes that are generated excessively or insufficiently by measuring the divergence of PDFs of individual attributes. Paired-attribute Divergence (PaD) reveals such pairs of attributes by measuring the divergence of joint PDFs of pairs of attributes. For measuring the attribute strengths of an image, we propose Heterogeneous CLIPScore (HCS) which measures the cosine similarity between image and text vectors with heterogeneous initial points.
With SaD and PaD, we reveal the following about existing generative models. ProjectedGAN generates implausible attribute relationships such as a baby with a beard even though it achieves competitive scores on existing metrics. Diffusion models struggle to capture the diverse colors in the datasets. Larger sampling timesteps of the latent diffusion model generate more small objects, including earrings and necklaces. Stable Diffusion v1.5 captures the attributes better than v2.1. Our metrics lay a foundation for explainable evaluations of generative models.
1 INTRODUCTION
The advancement of deep generative models, including VAEs (Kingma and Welling, 2013), GANs (Karras et al., 2019; 2020b; 2021; Sauer et al., 2021), and Diffusion Models (DMs) (Song et al., 2020; Nichol and Dhariwal, 2021; Rombach et al., 2022), has led to generated images that are nearly indistinguishable from real ones. Evaluation metrics, especially those assessing fidelity and diversity, play a pivotal role in this progress. One standout metric is Fréchet Inception Distance (FID) (Heusel et al., 2017), measuring the disparity between training and generated image distributions in embedding space. Coupled with other metrics like precision, recall, density, and coverage, the difference between generated and real image distributions is effectively gauged.
Figure 1 illustrates the evaluation metrics for two models with distinct properties. While Model 1’s generated images align closely with the training dataset, Model 2 exhibits a lack of diversity. Notably, in Figure 1a gray box, Model 1 consistently outperforms Model 2 across all metrics. Yet, these metrics fall short in explicability; for example, they don’t highlight the overrepresentation of long hair and makeup in Model 2.
Addressing this gap, our paper proposes a methodology to quantify discrepancies between generated and training images, focusing on specific attributes. Figure 1 shows the concept of our alternative approach that measures the distribution of attribute strengths compared to the training set: while Model 1 offers a balanced attribute distribution akin to the training dataset, Model 2 overemphasizes long hair and underrepresents beard.
To build metrics that quantify the difference between two image sets in an interpretable manner, we introduce Heterogeneous CLIPScore (HCS), an enhanced variant of CLIPScore (Radford et al., 2021). Compared to CLIPScore, Heterogeneous CLIPScore captures the similarity between modalities—image and text—by establishing distinct origins for text and image vectors.
Figure 1: Conceptual illustration of our metric. We design the scenario, Model 2 lacks diversity. (a) Although existing metrics (gray box) capture the inferiority of Model 2, they do not provide an explanation for the judgments. (b) Our attribute-based proposed metric (green box) has an interpretation: Model 2 is biased regarding long hair, makeup, smiling, and beard.
Utilizing HCS, we introduce new evaluation protocols to assess the attribute distribution alignment between generated images and training data as follows. 1) Single-attribute Divergence (SaD) measures how much a generative model deviates from the distribution of each attribute in the training data. 2) Paired-attribute Divergence (PaD) measures how much a generative model breaks the relationship between attributes in the training data, such as "babies do not have beards." With the proposed metrics, users can now realize which specific attributes (or pairs of attributes) in generated images differ from those in training images.
Our protocols also enable flexible user-defined evaluation. Since attributes can be assigned freely, users can emphasize certain features, such as hair attributes (long hair, black hair, blonde hair), while excluding others, such as apparent age (baby, elderly). Figure 1 shows the SaD result with six user-defined attributes, where long hair, makeup, and beard are the attributes most influential to SaD. We note that elaborate quantification of attribute preservation is a meaningful task in its own right, since generative models are used for diverse purposes such as text-to-image generation, not only for generating plausible images.
We conduct a series of carefully controlled experiments with varying configurations of attributes to validate our metrics in Sections 5.1 and 5.2. Then we reveal different characteristics of state-of-the-art generative models (Karras et al., 2019; 2020b; 2021; Sauer et al., 2021; Nichol and Dhariwal, 2021; Rombach et al., 2022; Yang et al., 2023) that could not be seen with existing metrics. For instance, GANs better synthesize color-/texture-related attributes such as striped fur, which DMs hardly preserve in LSUN-Cat (Section 5.3). When we increase the sampling steps of DMs, tiny objects such as necklaces and earrings tend to appear more frequently. Even though Stable Diffusion v2.1 is reported to have a better FID score than Stable Diffusion v1.5, its attribute-aspect scores are worse than v1.5's (Section 5.4). Our approach is versatile and applicable wherever image comparisons are needed. The code will be publicly available.
2 RELATED WORK
Fréchet Inception Distance Fréchet Inception Distance (FID) (Heusel et al., 2017) calculates the distance between the estimated Gaussian distributions of two datasets using a pre-trained Inception-v3 (Szegedy et al., 2016). However, Kynkäänniemi et al. (2022) noted issues with embeddings when generated images deviate significantly from training data. This led to proposals of using CLIP (Radford et al., 2021) encoder, which aligns text and images in a shared space, instead of Inception-v3. However, they directly use the raw embedding of CLIP encoder while we design a new representation.
Fidelity and diversity Sajjadi et al. (2018) devised precision and recall for generative model evaluation. Further refinements were provided by Kynkäänniemi et al. (2019) and Naeem et al. (2020).
Generally, these metrics use a pre-trained network to evaluate how embeddings of generated images match with those of real images and vice-versa.
**Other metrics** Beyond these, metrics such as Perceptual path length (Karras et al., 2019), Fréchet segmentation distance (Bau et al., 2019), and Rarity score (Han et al., 2022) have been introduced. The first indicates latent space smoothness, the second measures pixel segmentation differences, and the latter assesses the rarity of generated images. However, these metrics predominantly rely on raw embeddings from pretrained classifiers, yielding scores with limited interpretability. As Figure 1a indicates, while some metrics highlight poor image generation performance, they lack in-depth explanatory insights. We aim to fill this gap with our novel, detailed, and insightful evaluation metrics.
TIFA (Hu et al., 2023) uses visual question answering to validate if text-to-image results correspond to the input texts. On the other hand, our metrics evaluate the distribution of attribute strengths in a set of images.
## 3 TOWARD EXPLAINABLE METRICS
Existing metrics for evaluating generated images often use embeddings from Inception-V3 (Szegedy et al., 2016) or CLIP image encoder (Dosovitskiy et al., 2020). Yet, these embeddings lack clarity in interpreting each channel in the embedding. Instead, we opt to measure attribute strengths in images for a predefined set of attributes. We first explain CLIPScore as our starting point (Section 3.1), introduce Heterogeneous CLIPScore (Section 3.2), and describe ways of specifying the target attributes (Section 3.3).
### 3.1 Measuring attribute strengths with CLIP
For a set of attributes, we start by measuring the attribute strengths of images. The typical approach is computing CLIPScore:
\[
\text{CLIPScore}(x, a) = 100 \times \text{sim}(E_I(x), E_T(a)),
\]
where \(x\) is an image, \(a\) is a given attribute text, \(\text{sim}(*, *)\) is cosine similarity, and \(E_I\) and \(E_T\) are the CLIP image encoder and text encoder, respectively. Figure 2 shows example CLIPScores of an image for a set of attributes. Yet, CLIPScores by themselves do not provide a clear notion of attribute strengths, as we observe ambiguous similarities between opposite attributes. The research community is already aware of this problem. To overcome it, we introduce Heterogeneous CLIPScore in the subsequent subsection, showcased in Figure 2, which ensures more accurate attribute strengths.
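For reference, CLIPScore as in Equation (1) can be computed with the open-source `clip` package roughly as follows; the checkpoint choice (`ViT-B/32`) and the helper name are our assumptions, not necessarily the authors' setup.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_score(image_path, attribute):
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize([attribute]).to(device)
    with torch.no_grad():
        v_img = model.encode_image(image)   # E_I(x)
        v_txt = model.encode_text(text)     # E_T(a)
    sim = torch.nn.functional.cosine_similarity(v_img, v_txt)
    return 100.0 * sim.item()
```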
Table 1: CLIPScore and Heterogeneous CLIPScore’s accuracy on CelebA dataset.
| | accuracy | f1 score |
|----------------|----------|----------|
| Heterogeneous CLIPScore | 0.817 | 0.616 |
| CLIPScore | 0.798 | 0.575 |
3.2 HETEROGENEOUS CLIPSCORE
In the earlier section, we noted that CLIPScore tends to have a narrow value range, as visualized in Figure 2. To remedy this, we introduce Heterogeneous CLIPScore (HCS). It uses heterogeneous initial points for image and text embedding vectors as follows.
Given training images denoted as \( \{x_1, x_2, ..., x_{N_X}\} \in X \), and a set of attributes defined as \( \{a_1, a_2, ..., a_{N_A}\} \in A \), we define \( C_X \) as the center of the image embeddings and \( C_A \) as the center of the text attribute embeddings in CLIP space, respectively, as
\[
C_X = \frac{1}{N_X} \sum_{i=1}^{N_X} E_I(x_i), \quad C_A = \frac{1}{N_A} \sum_{i=1}^{N_A} E_T(a_i).
\]
These centers act as initial points of the embedding vectors. HCS is defined by the similarity between the two vectors, \( V_x \) and \( V_a \). The former connects the image center to a specific image, while the latter connects the attribute center to a particular attribute. Then we define
\[
V_x = E_I(x) - C_X, \quad V_a = E_T(a) - C_A,
\]
\[
HCS(x, a) = 100 \times \text{sim}(V_x, V_a),
\]
where \( \text{sim}(*, *) \) computes cosine similarity. For extending HCS from a single sample to all samples, we denote the probability density function (PDF) of \( HCS(x_i, a_i) \) for all \( x_i \in X \) as \( HCS_X(a_i) \).
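A compact sketch of HCS under the definitions above, assuming image and text embeddings have already been extracted with CLIP; the function name and batching are illustrative.

```python
import torch
import torch.nn.functional as F

def heterogeneous_clip_score(img_embs, txt_embs):
    """img_embs: (N_X, d) CLIP embeddings of images; txt_embs: (N_A, d) of
    attribute texts. Returns an (N_X, N_A) matrix of HCS(x, a) values."""
    c_x = img_embs.mean(dim=0, keepdim=True)    # image center C_X
    c_a = txt_embs.mean(dim=0, keepdim=True)    # attribute center C_A
    v_x = F.normalize(img_embs - c_x, dim=-1)   # V_x, unit-normalized
    v_a = F.normalize(txt_embs - c_a, dim=-1)   # V_a, unit-normalized
    return 100.0 * v_x @ v_a.T                  # cosine similarities
```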
Figure 2 illustrates the difference between HCS (Heterogeneous CLIPScore) and CS (CLIPScore). HCS uses the respective centers as initial points, allowing for clearer determination of attribute magnitudes, whereas CS lacks this clarity.
HCS also outshines CS in classifying attributes, as shown in Table 1. This table displays the accuracy of CS and HCS using ground truth attributes in CelebA (Liu et al., 2015). Accuracy is computed by performing binary classification on all CelebA attributes using CS and HCS and comparing them to ground truth labels. HCS consistently surpasses CS. This accuracy trend persists even for refined attributes, excluding subjective ones such as Attractive or Blurry. The full accuracy, including Attractive and Blurry, is in Table S8. More details are available in Appendix A.2.
3.3 ATTRIBUTE SELECTION
The effectiveness of our evaluation metric is contingent upon the target attributes we opt to measure. To determine the best attributes that truly capture generator performance, we put forth two methods for attribute selection.
Caption-extracted attributes Our goal is to pinpoint and assess the attributes evident in the training data via image descriptions. By analyzing the frequency of these attributes in image captions, we can identify which ones are most prevalent. To achieve this for captionless datasets, we employ the image captioning model, BLIP [Li et al., 2022], to extract words related to attributes from the training data. We then adopt \( N \) frequently mentioned ones as our target attributes, denoted as \( A \), for the metric. Given that these attributes are derived automatically, utilizing BLIP for this extraction could serve as a foundational method. Nevertheless, our approach retains flexibility for user-defined inputs as follows.
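One plausible realization of this caption-based selection is shown below: each image is captioned with a Hugging Face BLIP checkpoint and the most frequent words are kept. The checkpoint name and the naive stop-word filter are stand-ins for whatever filtering is actually applied.

```python
from collections import Counter
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

def top_attributes(images, n=20,
                   stop_words={"a", "an", "the", "of", "with", "and", "is"}):
    """images: iterable of PIL images; returns the n most frequent words."""
    counts = Counter()
    for image in images:
        inputs = processor(image, return_tensors="pt")
        ids = model.generate(**inputs)
        caption = processor.decode(ids[0], skip_special_tokens=True)
        counts.update(w for w in caption.lower().split() if w not in stop_words)
    return [word for word, _ in counts.most_common(n)]
```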
User annotation Another method for attribute selection involves utilizing human-annotated attributes. By directly choosing attributes for evaluating generative models, users can compare the influence of each attribute score or select specific attributes for a particular purpose. Notably, CelebA
offers annotated attributes, serving as a good example of this approach. While external models such as GPT-3 (Brown et al., 2020) can aid in selecting a large number of attributes, it is important to use external models judiciously, given the potential for bias in the attributes it extracts. For an example of using GPT-3, see Appendix A.1.
4 EVALUATION METRICS WITH ATTRIBUTE STRENGTHS
In this section, we harness the understanding of attribute strengths to devise two comprehensible metrics. Section 4.1 introduces Single-attribute Divergence (SaD), quantifying the discrepancy in attribute distributions between training data and generated images. Section 4.2 brings forth Paired-attribute Divergence (PaD), evaluating the relationship between attribute strengths.
4.1 SINGLE-ATTRIBUTE DIVERGENCE
If we have a dataset with dogs and cats, and a generative model only makes dog images, it is not an ideal model because it does not produce cats at all (Goodfellow et al., 2016). With this idea, we say one generative model is better than another if it produces a balanced number of images for each attribute, similar to the training dataset. Since we do not know the true distributions of real and fake images, we propose a new metric, Single-attribute Divergence (SaD). This metric checks how much of each attribute is present in a dataset by utilizing an interpretable representation. Our metric, SaD, quantifies the difference in density for each attribute between the training dataset ($\mathcal{X}$) and the set of generated images ($\mathcal{Y}$). We define SaD as
$$
\text{SaD}(\mathcal{X}, \mathcal{Y}) = \frac{1}{M} \sum_{i} \text{KL}(\text{HCS}_{\mathcal{X}}(a_i), \text{HCS}_{\mathcal{Y}}(a_i)),
$$
where $i$ denotes an index for each attribute, $M$ is the number of attributes, $\text{KL}(*)$ is Kullback-Leibler divergence, and $\text{HCS}_{\mathcal{X}}(a_i)$ denotes PDF of HCS($x_i, a_i$) for all $x_i \in \mathcal{X}$.
We analyze PDFs of Heterogeneous CLIPScore for each attribute present in $\mathcal{X}$ and $\mathcal{Y}$. These HCS PDFs reflect the distribution of attribute strengths within datasets. If an attribute’s distribution in $\mathcal{X}$ closely mirrors that in $\mathcal{Y}$, their respective HCS distributions will align, leading to similar PDFs. To measure discrepancies between these distributions, we employ Kullback-Leibler Divergence (KLD). This quantifies how much the generated images either over-represent or under-represent specific attributes compared to the original data. Subsequently, we determine the average divergence across all attributes between $\mathcal{X}$ and $\mathcal{Y}$ to derive the aggregated metric for SaD.
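The pipeline above (per-attribute Gaussian KDE followed by KL divergence, averaged over attributes) can be sketched as follows; the evaluation grid and smoothing constant are our assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde, entropy

def sad(hcs_x, hcs_y, grid_size=512):
    """hcs_x, hcs_y: (N, M) matrices of HCS scores for training and
    generated images; returns the mean KL divergence over M attributes."""
    kls = []
    for i in range(hcs_x.shape[1]):
        lo = min(hcs_x[:, i].min(), hcs_y[:, i].min())
        hi = max(hcs_x[:, i].max(), hcs_y[:, i].max())
        grid = np.linspace(lo, hi, grid_size)
        p = gaussian_kde(hcs_x[:, i])(grid) + 1e-12   # training PDF
        q = gaussian_kde(hcs_y[:, i])(grid) + 1e-12   # generated PDF
        kls.append(entropy(p, q))                      # KL(p || q)
    return float(np.mean(kls))
```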
In addition, we define the mean difference of attribute strength to further examine whether poor SaD comes from excessive or insufficient strength of an attribute $a$:
$$
\text{mean difference} = \frac{1}{N_x} \sum_{i} \text{HCS}(x_i, a) - \frac{1}{N_y} \sum_{i} \text{HCS}(y_i, a).
$$
where $N_x$ and $N_y$ are the number of training images and generated images, respectively. Intuitively, a high magnitude of mean difference indicates the mean strength of $\mathcal{Y}$ differs significantly from $\mathcal{X}$ for attribute $a$. A positive value indicates $\mathcal{Y}$ has images with stronger $a$ than $\mathcal{X}$, and vice versa for a negative value. While this does not conclusively reveal the exact trend due to $a$’s complex distribution, it provides an intuitive benchmark.
4.2 PAIRED-ATTRIBUTE DIVERGENCE
We introduce another metric, Paired-attribute Divergence (PaD), aimed at evaluating whether generated images maintain the inter-attribute relationships observed in the training data. Essentially, if specific attribute combinations consistently appear in the training data, generated images should also reflect these combinations. To illustrate, if every male image in the training dataset is depicted wearing glasses, the generated images should similarly represent males with glasses. We assess this by examining the divergence in the joint probability density distribution of attribute pairs between the training data and generated images. This metric, termed Paired-attribute Divergence (PaD), leverages
joint probability density functions as detailed below:
$$\text{PaD}(\mathcal{X}, \mathcal{Y}) = \frac{1}{P} \sum_{(i,j)} \text{KL}(\text{HCS}_{\mathcal{X}}(a_{i,j}), \text{HCS}_{\mathcal{Y}}(a_{i,j})), \quad (7)$$
where $M$ is the number of attributes, $P = \binom{M}{2}$, $(i, j)$ denotes an index pair of attributes selected out of $M$, and the joint PDF of the pair of attributes is denoted as $\text{HCS}_{\mathcal{X}}(a_{i,j})$.
When utilized together with SaD, PaD offers a comprehensive analysis of a model's performance. For instance, if the probability density function of the generator for the attribute pair (baby, beard) diverges notably from the training data's distribution while the SaD values for baby and beard are comparatively low, it suggests that the generator may not be effectively preserving the (baby, beard) relationship. Consequently, PaD enables us to quantify how well attribute relationships are maintained in generated data. Moreover, it facilitates the measurement of attribute interdependencies, an aspect not extensively addressed in prior studies.
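PaD extends the same recipe to two-dimensional KDEs over attribute pairs; a hedged sketch, with the grid resolution chosen arbitrarily:

```python
import itertools
import numpy as np
from scipy.stats import gaussian_kde, entropy

def pad(hcs_x, hcs_y, grid_size=64):
    """Mean KL divergence between joint PDFs over all attribute pairs."""
    m = hcs_x.shape[1]
    kls = []
    for i, j in itertools.combinations(range(m), 2):
        pts = np.vstack([hcs_x[:, [i, j]], hcs_y[:, [i, j]]])
        gx = np.linspace(pts[:, 0].min(), pts[:, 0].max(), grid_size)
        gy = np.linspace(pts[:, 1].min(), pts[:, 1].max(), grid_size)
        grid = np.vstack([g.ravel() for g in np.meshgrid(gx, gy)])
        # gaussian_kde expects data of shape (dims, points)
        p = gaussian_kde(hcs_x[:, [i, j]].T)(grid) + 1e-12
        q = gaussian_kde(hcs_y[:, [i, j]].T)(grid) + 1e-12
        kls.append(entropy(p, q))
    return float(np.mean(kls))
```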
5 EXPERIMENTS
Experiment details For estimating the probability density function (PDF) of Heterogeneous CLIPScore (HCS) in both the training data and generated images, Gaussian kernel density estimation is employed. We extract 10,000 samples from generated and real images to obtain PDFs of attribute strengths, which are then used to compute SaD and PaD. In every experiment, we use a set of $N_A = 20$ attributes. In the case of FFHQ, USER attributes from CelebA ground truth were used.
5.1 BIASED DATA INJECTION EXPERIMENT: THE EFFECTIVENESS OF OUR METRIC
In this subsection, we conduct a toy experiment to validate our metrics against existing methods. Initially, two non-overlapping subsets, each with 30K images from FFHQ, are defined as training data $\mathcal{X}$ and generated images $\mathcal{Y}$. Starting with these subsets that share a similar distribution, we gradually infuse biased data into $\mathcal{Y}$. The biased data is generated using DiffuseIT (Kwon and Ye, 2022). We translate samples from the training data, with no overlap with the initial 60K images, into makeup (Figure 3a) and bangs (Figure 3b). We also provide controlled counterparts where the injected samples are unbiased data translated into person (Figure 3c), or remain untranslated (Figure 3d).
As depicted in Figure 3, our metrics display a consistent trend: SaD and PaD rise with the inclusion of more edited images in $\mathcal{Y}$, whereas other metrics are static. Thanks to the attribute-based design, our metric identifies makeup or bangs as the dominant factor for SaD, and relationships rarely seen in the training data, such as (man, makeup) and (man, bangs), for PaD. The impact on SaD and PaD scales linearly with the number of images from different attribute distributions. For an expanded discussion and additional experiments, refer to Figure 5.7 and Appendix B.3. These results underscore that SaD adeptly discerns variations in attribute distribution, and PaD identifies joint distribution shifts between attribute pairs, outperforming other metrics.
5.2 DISCERNMENT OF PAD
In another toy experiment, we designed a scenario where the SaD metric struggles to detect specific attribute relationships while the PaD metric successfully pinpoints them. We used curated CelebA subsets as training data $\mathcal{X}$ and generated images $\mathcal{Y}$, ensuring discrepancies in attribute relationships.
For $\mathcal{X}$, we gathered 20,000 smiling men and 20,000 non-smiling women using CelebA's ground truth labels. In contrast, $\mathcal{Y}$ comprised 20,000 non-smiling men and 20,000 smiling women. While exploring SaD (Figure 4a) gives no insight into attribute relationship errors, examining PaD (Figure 4b) provides valuable clues.
Figure 4b highlights that the divergences of (woman, smiling) and (man, smiling) notably influence PaD. These findings demonstrate the superior sensitivity and discernment of our proposed metrics, allowing for a more comprehensive evaluation of generative models. For example, the PaD of ProjectedGAN (Sauer et al., 2021) is higher than that of other state-of-the-art generative models, as shown in Table 2,
Figure 3: Validation of metrics through biased injection. We design one set of typical 30k FFHQ images, and another set of 30k FFHQ plus injected images. Biased data injection, illustrated in (a) with makeup and (b) with bangs, leads to a rise in both SaD and PaD. In contrast, unbiased data injection, (c) person and (d) real data, which injects the same distribution as the training set, results in no rise in SaD and PaD. Our metrics effectively capture changes in attribute distribution, while existing metrics cannot.
(a) Top 4 attributes contributing SaD
(b) Top 4 attribute pairs contributing PaD
(c) babies with beards by ProjectedGAN
Figure 4: Necessity of PaD. We define curated subsets of CelebA-HQ as training images, consisting of smiling men and non-smiling women, and generated images, consisting of non-smiling men and smiling women. (a) While SaD only specifies problematic attributes, (b) PaD identifies problematic attribute pairs such as (woman, smiling). (c) ProjectedGAN disregards attribute relationships, such as generating babies with beards.
Table 2: Comparing the performance of generative models. We computed each generative model’s performance on our metric with their official pretrained checkpoints on FFHQ (Karras et al., 2019). We used 50,000 images for both GT and the generated set. We used USER attributes for this experiment.
| Model | SaD ($10^{-7}$)↓ | PaD ($10^{-7}$)↓ | FID↓ | FID$_{\text{CLIP}}$↓ | Precision↑ | Recall↑ | Density↑ | Coverage↑ |
|---------------|----------------------|----------------------|------|---------------------|------------|---------|----------|-----------|
| StyleGAN1 | 11.35 | 27.25 | 4.74 | 3.17 | 0.90 | 0.86 | 1.05 | 0.97 |
| StyleGAN2 | 7.52 | 19.22 | 3.17 | 1.47 | 0.92 | 0.89 | 1.03 | 0.97 |
| StyleGAN3 | 7.79 | 19.73 | 3.20 | 1.66 | 0.92 | 0.90 | 1.03 | 0.97 |
| iDDPM | 14.78 | 34.04 | 7.31 | 2.39 | 0.93 | 0.84 | 1.09 | 0.95 |
| LDM (50) | 10.42 | 25.36 | 12.18| 3.89 | 0.94 | 0.82 | 1.09 | 0.94 |
| LDM (200) | 14.04 | 30.71 | 11.86| 3.57 | 0.91 | 0.88 | 1.07 | 0.97 |
| StyleSwin | 10.76 | 26.56 | 4.45 | 2.45 | 0.92 | 0.91 | 1.01 | 0.97 |
| ProjectedGAN | 17.61 | 41.53 | 5.45 | 3.63 | 0.92 | 0.92 | 1.05 | 0.97 |
and we observe implausible attribute relationships such as (baby, beard), as shown in Figure 4c. We will discuss this in detail in Section 5.3.
Figure 5: LDM with 50 timesteps vs. LDM with 200 timesteps. With increased sampling timesteps, (a) the SaD of LDM gets worse, (b) because it generates too many fine objects such as earrings or necklaces.
Table 3: SaD and PaD of models with different attributes for LSUN Cat. Analyzing the weakness of iDDPM for specific attribute types, such as color or shape. We used GPT-extracted attributes for this experiment.
| | color attributes | | shape attributes | |
|----------------|------------------|----------|------------------|----------|
| | SaD (10⁻⁶) ↓ | PaD (10⁻⁶) ↓ | SaD (10⁻⁶) ↓ | PaD (10⁻⁶) ↓ |
| StyleGAN1 | 139.03 | 248.96 | 169.76 | 318.46 |
| StyleGAN2 | 112.06 | 195.75 | 132.41 | 246.44 |
| iDDPM | 46.93 | 85.99 | 32.48 | 62.69 |
5.3 Comparing generative models with our metrics
Leveraging the superior sensitivity and discernment of our proposed metrics, we evaluate the performance of GANs and Diffusion Models (DMs) in Table 2. Generally, the tendencies of SaD and PaD align with existing metrics. However, three notable points emerge: 1) ProjectedGAN (Sauer et al., 2021) lags in performance. 2) As sampling timesteps in DMs increase, FIDs improve while SaD and PaD worsen. 3) GANs and diffusion models have different strengths and weaknesses concerning specific attributes.
1) ProjectedGAN (Sauer et al., 2021) prioritizes matching the training set's embedding statistics to improve FID rather than improving actual fidelity (Kynkäänniemi et al., 2022). While it performs well on existing metrics, it notably underperforms in SaD and particularly in PaD. This implies that directly mimicking the training set's embedding statistics does not necessarily yield correct attribute correlations. Figure 4c provides failure cases generated by ProjectedGAN.
2) Diffusion models typically yield better quality with a higher number of sampling timesteps. Yet, the SaD and PaD scores of LDM with 200 steps are worse than those of LDM with 50 steps. As illustrated in Figure 5, higher sampling timesteps in the LDM model produce more high-frequency elements such as necklaces and earrings, which naturally explains the dominance of attributes such as young, makeup, woman, and wavy hair. We suppose that a denser sampling trajectory generates more high-frequency objects. The scores and mean differences of each attribute are depicted in Figure 5.1 and Figure 5.3, respectively.
In addition, iDDPM shows notable scores: the attribute arched eyebrows scores over two times higher than GANs in SaD, and attributes related to makeup consistently receive high scores across StyleGAN 1, 2, and 3 in PaD. Investigating how the generation process of GANs or DMs affects particular attributes would be an intriguing avenue for future research. See Appendix C for details.
3) Diffusion models fall short on modeling color-related attributes than shape-related attributes. As our metrics provide flexible customization, we report SaD and PaD of color attributes (e.g., yellow fur, black fur) and shape attributes (e.g., pointy ears, long tail) within LSUN Cat dataset. Table 3 shows that iDDPM excels in matching shape attributes compared to color attributes. This aligns with the hypothesis by Khrulkov et al. (2022), suggesting that DMs learn the Monge optimal transport map, the shortest trajectory, from Gaussian noise distribution to image distribution regardless of training data. This implies that when the initial latent noise \( x_T \) is determined, the image color is also roughly determined because the diffused trajectory tends to align with the optimal transport map.
5.4 Evaluating text-to-image models
Recently, there has been a huge evolution of text-to-image generative models (Nichol et al., 2021; Rombach et al., 2022; Saharia et al., 2022; Balaji et al., 2022). To evaluate text-to-image models, the zero-shot FID score on COCO (Lin et al., 2014) is widely used, including for Stable Diffusion (SD). Instead, we use our metrics to examine text-to-image models regarding excessively or insufficiently generated attributes. We generate 30K images with captions from COCO using SDv1.5 and SDv2.1 to calculate SaD and PaD with attributes extracted from the captions. We use \( N_A = 30 \).
Table 4 shows that SDv1.5 attains SaD and PaD roughly twice as good (i.e., half the divergence) as SDv2.1. Interestingly, the mean differences of attribute strengths are below zero. This implies that SDs tend to omit some concepts such as group.\(^1\)
\(^1\) e.g., A group of people is standing around a large clock.
Table 4: SaD and PaD of different versions of Stable Diffusion. Stable Diffusion v1.5 is almost twice better than v2.1. We generate 30k images using the captions from COCO. We use $N_A = 30$.
| $N_A = 30$ | SaD ($10^{-7}$)↓ | PaD ($10^{-7}$)↓ | SaD worst-rank attr (mean difference) |
|-----------|------------------|-----------------|-------------------------------------|
| SDv1.5 | 24.37 | 60.71 | plate (-1.9) group (-1.6) building (-1.6) |
| SDv2.1 | 48.23 | 106.86 | group (-3.7) plate (-2.5) person (-2.7) |
Figure 6: SaD and PaD over different numbers of (a) samples and (b) attributes. (a) SaD and PaD are stable with more than 50,000 images. (b) The ranking of models mostly remains consistent regardless of the number of attributes.
or plate. In particular, SDv2.1 struggles to generate scenes with multiple people. It aligns with common claims about SDv2.1 even though it achieves low FID. We provide more details in Appendix B.4.
5.5 Impact of Sample Size and Attribute Count on Proposed Metric
In Figure 6, we conduct ablation experiments to study the impact of the number of samples and attributes. Using four random seeds, we generate images with StyleGAN3 from FFHQ. We observe that SaD and PaD begin to stabilize with 30,000 images and become more stable with over 50,000 images. Figure 6b provides SaD and PaD of various models over different numbers of attributes, where the attributes from BLIP are sorted by their number of occurrences in the dataset. The ranking of the models largely stays stable irrespective of the number of attributes. However, the rank of LDM rises as rarely occurring attributes are included, as depicted by the purple line in Figure 6b. The rare attributes are scarf, flower, and child. We suggest that 20 attributes are sufficient for typical evaluation, but leveraging a broader range offers richer insights.
6 Conclusion and Discussion
We have introduced novel metrics that evaluate the distribution of attribute strengths. Single-attribute Divergence reveals which attributes are correctly or incorrectly modeled. Paired-attribute Divergence considers the joint occurrence of attributes in individual images. The explicit interpretability of these metrics lets users know which generative model suits their needs. Furthermore, Heterogeneous CLIPScore captures attribute strengths more accurately than CLIPScore.
Our metrics have the advantage of revealing the distribution of attributes from a set of generated images where human judgment faces difficulty in observing attributes in excessively many images. Furthermore, our research establishes a solid foundation for the development of explainable evaluation metrics for generative models and contributes to the advancement of the field.
Discussion
1) Estimating PDFs with KDE requires a sufficient (>50K) number of samples.
2) Our metrics can be influenced by the quality of the attribute detector.
3) While our metrics are highly customizable with different sets of attributes, the target attributes should be chosen to meet the users’ expectations. I.e., a limited or biased set of attributes might mislead our metrics.
4) Exploring strengths of other aspects such as texture (Caron et al., 2021; Oquab et al., 2023; Kirillov et al., 2023) or other modalities (Girdhar et al., 2023) may provide valuable insights and enhance the robustness of our metrics.
---
2 e.g., A table is set with two plates of food and a candle.
3 https://www.assemblyai.com/blog/stable-diffusion-1-vs-2-what-you-need-to-know/
REFERENCES
Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022.
David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, and Antonio Torralba. Seeing what a gan cannot generate. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4502–4511, 2019.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650–9660, 2021.
Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, and Sungroh Yoon. Perception prioritized training of diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11472–11481, 2022.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. Imagebind: One embedding space to bind them all. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15180–15190, 2023.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. book in preparation for mit press. URL: http://www.deeplearningbook.org, 1, 2016.
Jiyeon Han, Hwanil Choi, Yunjey Choi, Junho Kim, Jung-Woo Ha, and Jaesik Choi. Rarity score: A new metric to evaluate the uncommonness of synthesized images. arXiv preprint arXiv:2206.08549, 2022.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. arXiv preprint arXiv:2303.11897, 2023.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401–4410, 2019.
Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. Advances in neural information processing systems, 33:12104–12114, 2020a.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8110–8119, 2020b.
Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. Advances in Neural Information Processing Systems, 34:852–863, 2021.
Valentin Khrulkov, Gleb Ryzhakov, Andrei Chertkov, and Ivan Oseledets. Understanding ddpm latent codes through optimal transport. arXiv preprint arXiv:2202.07477, 2022.
|
1uHTIjXjkk
|
About the formulation of the compositionality: - 1a: It seems that the unconditioned score function (c=∅) occurs only in Line 7 of Algorithm 1. Equation 6, 7, 8 simply ignores this term and combines all the conditional scores. Which one is actually used?
|
POTENTIAL BASED DIFFUSION MOTION PLANNING
Anonymous authors
Paper under double-blind review
ABSTRACT
Effective motion planning in high dimensional spaces is a long-standing open problem in robotics. One class of traditional motion planning algorithms corresponds to potential-based motion planning. An advantage of potential based motion planning is composability: different motion constraints can be easily combined by adding the corresponding potentials. However, constructing motion paths from potentials requires solving a global optimization across the configuration-space potential landscape, which is often prone to local minima, causing these approaches to fall out of favor in recent years. We propose a new approach to learned potential based motion planning, where we train a neural network to capture and learn easily optimizable potentials over motion planning trajectories. We illustrate the effectiveness of this approach, significantly outperforming both classical and recent learned motion planning approaches, and demonstrate its inherent composability, enabling us to generalize to a multitude of different motion constraints.
1 INTRODUCTION
Motion planning is a fundamental problem in robotics and aims to find a smooth, collision free path between a start and goal state given a specified configuration space, and is heavily used across a variety of different robotics tasks such as manipulation or navigation (Laumond et al., 1998). A variety of approaches exist for motion planning, ranging from classical sampling based approaches (Karaman & Frazzoli, 2011; Gammell et al., 2015; Kavraki et al., 1996; Kuffner & LaValle, 2000) and optimization based methods (Ratliff et al., 2009; Mukadam et al., 2018; Kalakrishnan et al., 2011). A recent body of works have further explored how learned neural networks can be integrated with motion planning for accelerated performance (Fishman et al., 2023; Yamada et al., 2023; Qureshi et al., 2019; Le et al., 2023).
A classical approach towards motion planning is potential based motion planning (Koren et al., 1991; Ratliff et al., 2009; 2018; Xie et al., 2020), where both obstacles and goals define energy potentials through which trajectories are optimized. A great advantage of potential based motion planning is that different constraints on motion planning can be converted into equivalent energy potentials and directly combined to optimize for motion plans. However, such approaches generate motion plans primarily based on local geometry with greedy optimization, resulting in the long-standing local minima issue (LaValle, 2006). In addition, they typically require implicit obstacle representations, which are hard to obtain in real-world settings.
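For intuition, here is a minimal NumPy sketch of classical potential-field planning: a goal attraction and an obstacle repulsion potential are summed, and a path is found by greedy gradient descent on the combined field. Constants and names are ours; the sketch also exhibits the local-minima failure mode discussed above.

```python
import numpy as np

def goal_potential(q, goal):
    return 0.5 * np.sum((q - goal) ** 2)       # attractive quadratic well

def obstacle_potential(q, obs, radius=1.0, k=10.0):
    d = np.linalg.norm(q - obs)
    return k / d if d < radius else 0.0        # repulsive near the obstacle

def plan(start, goal, obstacles, lr=0.05, steps=500, eps=1e-4):
    """Greedy descent on the composed potential: composability comes for
    free since the potentials simply add."""
    q, path = start.astype(float).copy(), [start.copy()]
    total = lambda p: goal_potential(p, goal) + sum(
        obstacle_potential(p, o) for o in obstacles)
    for _ in range(steps):
        # numerical gradient of the combined potential
        g = np.array([(total(q + eps * e) - total(q - eps * e)) / (2 * eps)
                      for e in np.eye(len(q))])
        q = q - lr * g
        path.append(q.copy())
    return np.array(path)
```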
We present a potential based motion planning approach leveraging diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), where diffusion models are used to parameterize and learn potential landscapes across configuration-space trajectories between start and goal states. Our method maps the start state, goal state, and environment geometry directly into a learned latent potential space, eliminating the need to design sophisticated potential functions. These potential functions are fit directly over long-horizon plans, helping avoid local energy minima. Furthermore, the inherent stochasticity of diffusion models enables more robust optimization and can generate diverse motion plans for a given problem, enabling failure recovery. In addition, guided by both local and global environment geometry in the learned potentials, our method provides faster planning and requires less collision checking compared with problem-independent sampling-based planners.
One major hurdle of learning-based motion planners (Ichter & Pavone, 2019; Qureshi et al., 2019; Fishman et al., 2023) is their generalizability to unseen, more complex constraints. For example, models trained on sparse obstacles usually fall short in scenarios with cluttered obstacles. By contrast, similar to prior potential based motion planning methods, our learned potentials can be additively composed to jointly solve motion planning problems with sets of constraints. As illustrated in Figure 1, combining two potentials from different diffusion models enables us to opti-
mize for trajectories that satisfy both constraints, one to avoid obstacles in a cross, and a second to avoid obstacles in a square. Such flexibility to ad-hoc composition of constraints is especially useful in robotics where agents will often experience new sets of motion constraints in its environment over the course of execution.
In addition to combining different motion constraints together, we can also compose multiple instances of the same diffusion potential. This form of composition enables us to naturally generalize at inference time to motion planning problems with a larger number of obstacles than observed at training time, by composing multiple instances of the learned diffusion obstacle potential model conditioned on subsets of the larger set of obstacles. We illustrate the effectiveness of this approach, substantially outperforming both classical and learned baselines, in a sketch and experiments below.
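The sketch below illustrates generic additive score composition at diffusion sampling time, which mirrors the idea described above; it is not the paper's exact algorithm, and the models in `models_and_conds` are assumed noise-prediction networks for their respective constraints.

```python
import torch

@torch.no_grad()
def composed_step(x_t, t, models_and_conds, betas, alphas_cumprod):
    """One DDPM ancestral step under a sum of learned potentials: since the
    potentials add, the corresponding noise predictions are simply summed."""
    beta_t = betas[t]
    abar_t = alphas_cumprod[t]
    alpha_t = 1.0 - beta_t
    eps = sum(model(x_t, torch.tensor([t]), cond)
              for model, cond in models_and_conds)
    mean = (x_t - beta_t / (1.0 - abar_t).sqrt() * eps) / alpha_t.sqrt()
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + beta_t.sqrt() * noise
```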
Overall, in this paper, our contributions are three-fold. (1) We present an approach to learned potential based motion planning using diffusion models. (2) We illustrate the effectiveness of our approach, outperforming existing classical and learned motion planning algorithms. (3) We illustrate the compositionality of our motion planner, enabling it to generalize to multiple sets of motion constraints as well as an increased number of obstacles.
2 RELATED WORK
Motion Planning. Classic sampling-based motion planners (Kavraki et al., 1996; Kuffner & LaValle, 2000; Elbanhawi & Simic, 2014; Gammell et al., 2014; Janson et al., 2015; Choudhury et al., 2016; Strub & Gammell, 2020) have gained wide adoption due to their completeness and generalizability. However, the problem-independent nature of these methods can result in inefficiency, particularly when planning for similar problems repeatedly. Reactive methods, such as potential-based approaches (Khatib, 1986; Ratliff et al., 2018; Xie et al., 2020), velocity obstacles (Fiorini & Shiller, 1998; Van den Berg et al., 2008), and safety barrier certificates (Wang et al., 2017), can provide fast updates and guarantee obstacle avoidance. However, their performance is typically constrained by local minima or numerical instability issues (LaValle, 2006), and they usually need to construct obstacle representations in the robot configuration space, which are hard to obtain, especially in high dimensions. To address these issues, recent works have proposed many deep-learning based motion planners (Ichter & Pavone, 2019; Qureshi et al., 2019; Bency et al., 2019; Fishman et al., 2023). These methods can generally increase planning speed, expand the planning horizon, or reduce queries to the environment by leveraging learned knowledge.
One important line of research combines neural networks with sampling-based methods (Johnson et al., 2021; Yu & Gao, 2021; Lawson & Qureshi, 2022), termed hybrid motion planners. In particular, recent work (Saha et al., 2023; Carvalho et al., 2023) adapts diffusion models as auxiliary priors for trajectory generation, but still requires accurate ground-truth cost functions and dense environment queries when planning. In addition, many existing methods are constrained to simple 2D environments (Yonetani et al., 2021; Chaplot et al., 2021; Toma et al., 2021). In contrast, we propose a motion planner applicable to environments of varying dimensionality, with shorter planning time and notably less environment access (i.e., collision checks). In addition, our potential formulation equips our model with strong generalization to out-of-distribution environments.
Diffusion Models for Robotics. Many recent works have explored the application of diffusion models to robotics (Janner et al., 2022; Chen et al., 2022; Kapelyukh et al., 2023; Ha et al., 2023). Current research spans a variety of robotics problems, including action sequence generation (Liang et al., 2023; Fang et al., 2023; Li et al., 2023), policy learning (Wang et al., 2023; Kang et al., 2023), grasping (Urain et al., 2023; Huang et al., 2023), and visuomotor planning or control (Dalal et al., 2023; Yang et al., 2023a; Chi et al., 2023), with recent work also exploring their application to manipulation constraints (Yang et al., 2023b). In contrast to these works, we focus on how diffusion models can be used to explicitly parameterize and learn potentials in potential based motion planning. We illustrate the efficacy of such an approach and its ability to compose with other learned potentials.
3 METHOD
In this section, we first introduce potential based motion planning in Section 3.1. We then discuss how potential based motion planning can be implemented with diffusion models in Section 3.2. We further discuss how such an approach enables us to combine multiple different potentials together in Section 3.3. Finally, we discuss how we can refine motion plans generated by diffusion models in cases of collision in Section 3.4.
3.1 Potential Based Motion Planning
Given a specified start state \( q_{\text{start}} \) and end state \( q_{\text{end}} \) in a configuration space \( \mathbb{R}^n \), motion planning is formulated as finding a collision-free trajectory \( q_{1:N} \) which starts from \( q_{\text{start}} \) and ends at \( q_{\text{end}} \). To solve for such a collision-free trajectory \( q_{1:N} \) in potential based motion planning (Koren et al., 1991), a potential function \( U(q) : \mathbb{R}^n \rightarrow \mathbb{R} \) on the configuration space is defined, composed of
\[
U(q) = U_{\text{att}}(q) + U_{\text{repel}}(q), \tag{1}
\]
where \( U(q) \) assigns low potential values to the goal state \( q_{\text{end}} \) and high potential to all states which are in collision. In Equation 1, \( U_{\text{att}}(q) \) represents an attraction potential that has low values at the end state \( q_{\text{end}} \) and high values away from it, and \( U_{\text{repel}}(q) \) represents a repulsion potential that has high values near obstacles and low values away from them. The functional form of the potential function \( U(q) \) provides an easy approach to integrate additional obstacles in motion planning: the new potential \( U_{\text{new}}(q) \) representing the obstacles is simply added to the existing potential in Equation 1.
To obtain a motion plan from a potential field \( U(q) \), a collision-free trajectory \( q_{1:N} \) from \( q_{\text{start}} \) to \( q_{\text{end}} \) is obtained by iteratively following the gradient of the potential function
\[
q_t = q_{t-1} - \gamma \nabla_q U(q), \tag{2}
\]
with a successful motion plan constructed when the optimization procedure reaches the minimum of the potential function \( U(q) \). A major limitation of the above approach in Equation 2 is local minima: if the optimization procedure falls into such a minimum, the motion plan will no longer successfully construct a path from \( q_{\text{start}} \) to \( q_{\text{end}} \) (Yun & Tan, 1997; Teli & Wani, 2021).
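To ground this, below is a minimal sketch of classical potential based planning with gradient descent (Equations 1 and 2). The quadratic attraction and inverse-distance repulsion potentials, the point-obstacle model, and all gains are illustrative textbook choices, not ones prescribed by this paper.

```python
import numpy as np

def grad_U(q, q_end, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    """Gradient of U(q) = U_att(q) + U_repel(q) for point obstacles (Equation 1)."""
    g = k_att * (q - q_end)  # attraction pulls q toward the goal
    for o in obstacles:
        d = np.linalg.norm(q - o)
        if d < rho0:  # repulsion is active only within influence radius rho0
            g += k_rep * (1.0 / rho0 - 1.0 / d) * (q - o) / d**3
    return g

def plan(q_start, q_end, obstacles, gamma=0.05, steps=500, tol=1e-2):
    q, path = q_start.copy(), [q_start.copy()]
    for _ in range(steps):
        q = q - gamma * grad_U(q, q_end, obstacles)  # Equation 2
        path.append(q.copy())
        if np.linalg.norm(q - q_end) < tol:
            break  # otherwise the descent may stall in a local minimum
    return np.array(path)

path = plan(np.zeros(2), np.array([5.0, 5.0]), [np.array([2.5, 2.4])])
```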
3.2 Potential Based Diffusion Motion Planning
We next discuss how to learn potentials for potential based motion planning that enable us to effectively optimize for trajectories. Given a motion plan \( q_{1:T} \) from start state \( q_{\text{start}} \) to end state \( q_{\text{end}} \) and a characterization of the configuration space \( C \) (i.e., the set of obstacles in the environment), we propose to learn a trajectory-level potential function \( U_\theta \) so that
\[
q^*_{1:T} = \arg \min_{q_{1:T}} U_\theta(q_{1:T}, q_{\text{start}}, q_{\text{end}}, C), \tag{3}
\]
where \( q^*_{1:T} \) is a successful motion plan from \( q_{\text{start}} \) to \( q_{\text{end}} \).
To learn the potential function in Equation 3, we propose to learn an EBM (LeCun et al., 2006; Du & Mordatch, 2019) across a dataset of solved motion planning problems \( D = \{ (q_{\text{start}}, q_{\text{end}}, q^*_{1:T}, C^*)\} \), where \( e^{-E_\theta(q_{1:T}|q_{\text{start}}, q_{\text{end}}, C)} \propto p(q_{1:T}|q_{\text{start}}, q_{\text{end}}, C) \). Since the dataset \( D \) consists of solved motion planning problems, the learned energy function \( E_\theta \) has minimal energy at successful motion plans \( q^*_{1:T} \) and thus satisfies our potential function \( U_\theta \) in Equation 3.
To learn an EBM landscape that enables us to effectively optimize and generate motion plans \( q^*_{1:T} \), we propose to shape the energy landscape using the denoising diffusion training objective (Sohl-Dickstein et al., 2015; Ho et al., 2020). In this objective, we explicitly train the energy landscape so that the gradient of the energy function can denoise and recover motion plans \( q_{1:T} \) across many differing levels of noise corruption \( \{1, \ldots, S\} \), ranging from mostly correct motion paths to fully corrupted Gaussian noise trajectories. By shaping the gradient of the energy function to generate motion plans \( q_{1:T} \) from arbitrary initialization trajectories, our learned energy landscape is able to effectively optimize for motion paths.
Formally, to train our potential, we use the energy based diffusion training objective of Du et al. (2023), where the gradient of the energy function is trained to denoise noise-corrupted motion plans \( q^*_{1:T} \):
\[
L_{\text{MSE}} = \| \epsilon - \nabla_{q_{1:T}} E_\theta(\sqrt{1-\beta_s}\, q^*_{1:T} + \sqrt{\beta_s}\, \epsilon,\; s,\; q_{\text{start}}, q_{\text{end}}, C^*) \|^2, \tag{4}
\]
Algorithm 1 Code for Compositional Potential Based Planning
1: **Models:** compositional set of $N$ diffusion potential functions $E_\theta(q_{1:T}, t, q_{start}, q_{end}, C_i)$
2: **Hyperparameters:** horizon $T$, guidance scales $\omega_i$, denoising diffusion steps $S$
3: **Input:** start position $q_{start}$, goal position $q_{end}$, $N$ constraints $C_{1:N}$
4: Initialize $q^S_{1:T} \sim \mathcal{N}(0, I)$
5: for $s = S \ldots 1$ do
6: # Combining Different Energy Potentials Together
7: $\epsilon_{comb} = \nabla_{q_{1:T}} E_\theta(q^s_{1:T}, s, q_{start}, q_{end}, \emptyset) + \sum_{i=1}^{N} \omega_i \nabla_{q_{1:T}} (E_\theta(q^s_{1:T}, s, q_{start}, q_{end}, C_i) - E_\theta(q^s_{1:T}, s, q_{start}, q_{end}, \emptyset))$
8: # Transit to Next Diffusion Time Step
9: $q^{s-1}_{1:T} = q^s_{1:T} - \gamma \epsilon_{comb} + \xi, \quad \xi \sim \mathcal{N}(0, \sigma^2_s I).$
10: end for
11: return $q^0_{1:T}$
where $\epsilon$ is sampled from Gaussian noise $\mathcal{N}(0, 1)$, $s \in \{1, 2, ..., S\}$ is the denoising diffusion step, and $\beta_s$ is the corresponding Gaussian noise corruption on a motion planning path $q^s_{1:T}$. We refer to $E_\theta$ as the diffusion potential function.
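As an illustration, a minimal PyTorch sketch of the objective in Equation 4 follows. The `energy` network and its conditioning interface are hypothetical placeholders; only the corruption and denoising-gradient structure follows the equation.

```python
import torch

def diffusion_potential_loss(energy, q_clean, s, beta, q_start, q_end, C):
    """L_MSE of Equation 4: the energy gradient is trained to recover epsilon."""
    eps = torch.randn_like(q_clean)                       # Gaussian corruption
    q_noisy = (1 - beta[s]).sqrt() * q_clean + beta[s].sqrt() * eps
    q_noisy.requires_grad_(True)
    E = energy(q_noisy, s, q_start, q_end, C).sum()       # scalar total energy
    grad_E = torch.autograd.grad(E, q_noisy, create_graph=True)[0]
    return ((eps - grad_E) ** 2).mean()
```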
To optimize and sample from our diffusion potential function, we initialize a motion path $q^S_{1:T}$ at diffusion step $S$ from Gaussian noise $\mathcal{N}(0, 1)$ and optimize for the motion path by following the gradient of the energy function. We iteratively refine the motion plan $q^s_{1:T}$ across each diffusion step following
$$q^{s-1}_{1:T} = q^s_{1:T} - \gamma \epsilon_C + \xi, \quad \xi \sim \mathcal{N}(0, \sigma^2_s I), \tag{5}$$
where $\epsilon_C = \epsilon_\emptyset + \omega(\nabla_{q_{1:T}} E_\theta(q^s_{1:T}, s, q_{start}, q_{end}, C) - \epsilon_\emptyset)$ and $\epsilon_\emptyset = \nabla_{q_{1:T}} E_\theta(q^s_{1:T}, s, q_{start}, q_{end}, \emptyset)$, consistent with line 7 of Algorithm 1,
where $\gamma$ and $\sigma^2_s$ are diffusion specific scaling constants. The final predicted motion path $q^*_ {1:T}$ corresponds to the output $q^0_{1:T}$ after running $S$ steps of optimization from the diffusion potential function.
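A minimal sketch of this sampling loop follows, assuming a hypothetical `energy_grad` callable that returns $\nabla_{q_{1:T}} E_\theta$ and precomputed schedules `gamma` and `sigma`; the guided direction matches Equation 5 and line 7 of Algorithm 1.

```python
import torch

def sample_plan(energy_grad, q_start, q_end, C, T, S, gamma, sigma, omega=1.0):
    q = torch.randn(T, q_start.shape[-1])                     # q^S ~ N(0, I)
    for s in range(S, 0, -1):
        eps_uncond = energy_grad(q, s, q_start, q_end, None)  # unconditioned potential
        eps_cond = energy_grad(q, s, q_start, q_end, C)
        eps = eps_uncond + omega * (eps_cond - eps_uncond)    # guided direction (Eq. 5)
        q = q - gamma[s] * eps + sigma[s] * torch.randn_like(q)
    return q                                                  # q^0, the final plan
```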
3.3 Composing Diffusion Potential Functions
Given two separate diffusion potential functions $E^1_\theta(\cdot)$ and $E^2_\theta(\cdot)$, encoding separate constraints in motion planning, we can likewise form a composite potential function $E_{comb}(\cdot) = E^1_\theta(\cdot) + E^2_\theta(\cdot)$ by directly summing the corresponding potentials. This potential function $E_{comb}$ has low energy precisely at motion planning paths $q_{1:T}$ which satisfy both constraints, with sampling corresponding to optimizing this composite potential function.
To sample from the new diffusion potential function $E_{comb}$, we can follow
$$q^{s-1}_{1:T} = q^s_{1:T} - \gamma \nabla_{q_{1:T}} E_{comb}(q^s_{1:T}, s, q_{start}, q_{end}, C) + \xi, \quad \xi \sim \mathcal{N}(0, \sigma^2_s I). \tag{7}$$
To further improve the composition, a more expensive MCMC procedure can be used to explicitly combine diffusion models (Du et al., 2023).
Applications of Composing Potential Functions. The ability to combine multiple separate potential functions for motion planning offers a variety of ways to generalize and extend existing motion planning systems. First, many motion planning problems involve a heterogeneous set of constraints or collisions that limit possible configuration-space paths. For instance, in autonomous driving, constraints may include moving pedestrians, traffic lanes, road work, or oncoming cars. Often we cannot enumerate all potential combinations, yet we wish motion planning systems to handle every possible combination of constraints. Jointly learning a single motion planning model for all constraints may be difficult, as at test time we may see novel combinations for which we have no training data. By learning a separate diffusion potential field for each constraint, we can combine them in an ad-hoc manner at test time to deal with arbitrary sets of constraints. We provide two concrete instantiations of composing potentials below and a detailed procedure in Algorithm 1.
Generalization over More Obstacles. Suppose the model is trained on environments with 4 obstacles, namely $|C| = 4$, but at test time we want to generalize to a more complex environment with 6 obstacles $C' = \{o_1, o_2, o_3, o_4, o_5, o_6\}$. This can be achieved by adding the potentials evaluated on two sets of obstacles, $C_1 = \{o_1, o_2, o_3, o_4\}$ and $C_2 = \{o_3, o_4, o_5, o_6\}$. This formulation extends to $N$ sets of obstacles $C_{1:N}$, and the composite diffusion potential function is given by:
$$E_{comb}(q_{1:T}, s, q_{start}, q_{end}, C_{1:N}) = \sum_{i=1}^{N} E_\theta(q_{1:T}, s, q_{start}, q_{end}, C_i). \tag{8}$$
1 A rescaling term at each diffusion step is omitted above for clarity
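A minimal sketch of this composition over obstacle subsets (Equation 8) follows; the overlapping-window subset construction matches the 6-obstacle example above, while `energy_grad` and the training-time subset size of 4 are illustrative assumptions.

```python
def composed_energy_grad(energy_grad, q, s, q_start, q_end, obstacles, subset=4):
    """Sum per-subset potential gradients so a model trained on `subset` obstacles
    generalizes to a larger obstacle set (Equation 8)."""
    subsets = [obstacles[i:i + subset]
               for i in range(0, max(1, len(obstacles) - subset + 1), 2)]
    grad = 0.0
    for C_i in subsets:
        grad = grad + energy_grad(q, s, q_start, q_end, C_i)
    return grad
```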
Algorithm 2 Code for Refining Motion Plans
1: **Model:** compositional potential denoiser $f_\theta(q_{1:T}, t, q_{\text{start}}, q_{\text{end}}, C_{1:N})$
2: **Hyperparameters:** number of refine attempts $R$, noise scale $k$
3: **Input:** trajectory $q_{1:T}$, start position $q_{\text{start}}$, goal position $q_{\text{goal}}$, $N$ constraints $C_{1:N}$
4: $S = \text{Get\_Collision\_Sections}(q_{1:T})$ # A Set of Indices of Collision Sections in $q_{1:T}$
5: for $r = 1 \ldots R$ do
6: $q'_{1:T} = \sqrt{\alpha_k} q_{1:T} + (1 - \alpha_k) \xi$, $\xi \sim \mathcal{N}(0, \sigma^2 I)$ # Add Noise to $q_{1:T}$
7: $q'_{1:T} = f_\theta(q'_{1:T}, k, q_{\text{start}}, q_{\text{end}}, C_{1:N})$ # Get New Denoised Trajectory
8: for all $s_i \in S$ do
9: if is_section_good($q'[s_i]$) then
10: $q[s_i] = q'[s_i]$, $S = S \setminus s_i$ # Refine $q_{1:T}$ and Remove $s_i$ from set $S$
11: end if
12: end for
13: end for
14: return $q$
Figure 2: Visualization of the Motion Refining Scheme. A proposal plan is first generated by denoising initial Gaussian noise. If a collision is detected, a small amount of noise is added to the proposal and a new plan is generated from the partially noisy trajectory.
Generalization over Static and Dynamic Obstacles. Many real-life scenarios involve dynamic, real-time interaction. For instance, to construct a motion plan for an autonomous vehicle, we must avoid both static lane obstacles and dynamically moving cars. While static obstacles are often known a priori, the motion patterns of dynamic obstacles often change with time, making it advantageous to combine dynamic constraints with static ones. We can implement this directly: given a diffusion potential function $E_{\theta_s}$ trained only on static obstacles $C^s_i$ and a diffusion potential function $E_{\theta_d}$ trained only on dynamic obstacles $C^d_j$, we obtain the static-and-dynamic potential by adding $E_{\theta_s}$ and $E_{\theta_d}$. More generally, to condition on a set of $N_1$ static obstacles $C^s_{1:N_1}$ with diffusion potential functions $E^{1:N_1}_{\theta_s}$ and a set of $N_2$ dynamic obstacles $C^d_{1:N_2}$ with diffusion potential functions $E^{1:N_2}_{\theta_d}$, the composite diffusion potential function is written as:
$$E^{\text{comb}}_\theta(q_{1:T}, s, q_{\text{start}}, q_{\text{end}}, [C^s_{1:N_1}, C^d_{1:N_2}]) = \sum_{i=1}^{N_1} E^i_{\theta_s}(q_{1:T}, s, q_{\text{start}}, q_{\text{end}}, C^s_i) + \sum_{j=1}^{N_2} E^j_{\theta_d}(q_{1:T}, s, q_{\text{start}}, q_{\text{end}}, C^d_j). \tag{9}$$
3.4 Refining Motion Plans
In practice, the predicted motion plan $q_{1:T}$ may occasionally contain sections that violate the constraints of the environment (i.e., collide with obstacles). To address this issue, both classical and learned motion planners (Kuffner & LaValle, 2000; Qureshi et al., 2019) provide mechanisms to refine trajectories subject to collisions in configuration space.
With diffusion potential fields, we can likewise refine a trajectory, $q_{1:T}$ with collision, by locally perturbing it into a noisy trajectory $q^k_{1:T}$ defined by the $k$th step of the diffusion forward process:
$$q^k_{1:T} = \sqrt{\alpha_k}\, q_{1:T} + (1 - \alpha_k)\, \xi, \quad \xi \sim \mathcal{N}(0, \sigma^2 I). \tag{10}$$
A new motion plan $q'_{1:T}$ can be obtained by denoising the noisy trajectory following Equation 5. For simplicity, let
$$q'_{1:T} = f_\theta(q^k_{1:T}, k, q_{\text{start}}, q_{\text{end}}, C_{1:N}), \tag{11}$$
where $f_\theta(\cdot)$ is an iterative diffusion potential denoiser that outputs the clean trajectory. This warm-start denoising scheme enables faster, more efficient planning, which is especially important for energy-constrained mobile agents. We then replace each colliding section in $q_{1:T}$ with the corresponding section of $q'_{1:T}$ whenever the new section is collision-free. This refining procedure can be repeated
Figure 3: **Environment Demonstration.** a) Maze2D: a point robot moving in 2D workspace with the highlighted block as obstacles. b) KUKA: robot manipulator with 7 DoF operating on a tabletop. The grey cuboids are obstacles. c) Dual KUKA14D: Two side by side KUKA manipulators operate simultaneously, where the dimension of the configuration space is 14.
Figure 4: **Quantitative Comparisons in Motion Planning Environments.** Our method outperforms the sampling-based planner and all other learning-based motion planning approaches on all metrics across a set of different environments. From left to right: a) number of collision checks, b) success rate, c) planning time.
until a desired trajectory is found. Algorithm 2 displays the complete refining pipeline, and Figure 2 provides a corresponding visualization.
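The following is a minimal sketch of Algorithm 2, treating each waypoint as its own section for simplicity; `denoise` stands in for the iterative denoiser $f_\theta$ of Equation 11, `collides` for an environment collision check, and `alpha_k` for a scalar tensor from the noise schedule, all hypothetical interfaces.

```python
import torch

def refine(q, denoise, collides, q_start, q_end, C, alpha_k, k, attempts=5):
    """Perturb-and-denoise refinement of a trajectory q of shape (T, n)."""
    bad = {i for i in range(len(q)) if collides(q[i])}       # colliding waypoints
    for _ in range(attempts):
        if not bad:
            break
        xi = torch.randn_like(q)
        q_noisy = alpha_k.sqrt() * q + (1 - alpha_k) * xi    # Equation 10
        q_new = denoise(q_noisy, k, q_start, q_end, C)       # Equation 11
        for i in list(bad):
            if not collides(q_new[i]):                       # splice in fixed section
                q[i] = q_new[i]
                bad.discard(i)
    return q
```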
4 EXPERIMENTS
In this section, we first describe our environments and baselines in Section 4.1. Next, in Section 4.2, we discuss our experiments on the base environments and the motion refining algorithm. In Section 4.3, we present compositionality results by evaluating our motion planner on composite environments. Finally, we describe real-world motion planning performance in Section 4.4.
4.1 ENVIRONMENTS AND BASELINES
We first classify the environments we evaluate on into four categories by the level of generalization capability required:
- **Base Environments:** same number of constraints as in training; constraints sampled from the same distribution;
- **Composite Same Environment:** more constraints than training phase, constraints sampled from the same distribution;
- **Composite Different Environment:** more constraints than training phase, constraints sampled from different distributions;
- **Real World Motion Planning Environments.**
Concretely, we propose three simulated motion planning environments with increasing difficulty as shown in Figure 3:
- **Maze2D** A point robot moving in a 2D workspace. The configuration space is the x-y coordinate of the robot. The task is to generate a 2D trajectory that navigates through the workspace without any collision with obstacles. We offer two variants: *Static Maze2D*, where obstacles stay in fixed locations, and *Dynamic Maze2D*, where obstacles move along randomly generated linear trajectories.
- **Kuka7D** A KUKA arm of 7 DoF operating on a tabletop. Obstacles are randomly placed in the 3D workspace. The start and goal are given as the 7 joint states of the KUKA arm.
- **Dual KUKA** Two KUKA arms are placed side by side on a tabletop and operate simultaneously, with a total configuration space of 14 DoF. A successful trajectory must bring both arms to their goal states without any self-collision or collision with obstacles.
**Baselines** We compare our method with the classic sampling-based planning baselines RRT* (Karaman & Frazzoli, 2011), P-RRT* (Qureshi & Ayaz, 2016), BIT* (Gammell et al., 2015),
| Env | R = 3 Before | R = 3 After | R = 5 Before | R = 5 After | R = 10 Before | R = 10 After |
|--------------|--------------|-------------|--------------|-------------|---------------|--------------|
| Maze2D | 96.25 | 99.75 | 95.25 | 99.00 | 95.75 | 100.00 |
| KUKA | 71.25 | 90.00 | 69.50 | 94.30 | 69.75 | 94.75 |
| Dual KUKA | 45.50 | 69.75 | 47.25 | 77.25 | 47.00 | 80.75 |
Table 1: Quantitative Results of Refining Motion Plans. Success rate before and after motion refining; $R$ denotes the number of refinement attempts. The proposed method consistently boosts the success rate on all three base environments.
| Method | Success Rate | Time (s) | Check |
|------------|--------------|----------|-------|
| RRT* | 99.90 | 2.15 | 19k+ |
| Ours | **100.00** | **0.38** | **71.86** |
Table 2: Quantitative Results on Composite Different Environments. Two static Maze2D with different types of obstacles are combined at test time.
Figure 5: Compositional Generalization. Quantitative comparisons of different planners on compositional environments. The shaded area indicates the standard error across the mean of all tested environments. The leftmost column reports results with the same number of obstacles the models were trained on. The composite model outperforms all baselines by a margin, except in Maze2D, where RRT* is on par with our model but requires orders of magnitude more collision checks.
and SIPP (Phillips & Likhachev, 2011), the traditional potential-based method RMP (Ratliff et al., 2018), and several learning-based motion planners: MPNet (Qureshi et al., 2019), MπNet (Fishman et al., 2023), and AMP-LS (Yamada et al., 2023). MPNet is trained on trajectories with sparse waypoints and uses MLPs to encode the environment configuration and predict the next position. In contrast, MπNet is trained on dense trajectory waypoints and predicts a movement vector instead of the next position directly. AMP-LS encodes the robot pose into a latent feature and approaches the goal pose by using the gradients of hand-crafted losses to update the latent. A sequence of latents is then decoded to form a trajectory. In evaluation, all start/goal poses and environment configurations are unseen to the models. For each experiment, we evaluate on 100 different environments with 20 problems each.
4.2 Motion Planning Performance on Base Environments
We first evaluate our method on motion planning in each base environment: randomly generated environments that follow the same procedural generation pipeline as the training environments. Quantitative results are shown in Figure 4 and Table VIII. We include the full details of the evaluation setup in Section A.2.3.
Comparison to Sampling-based Planners We compare our method to the traditional sampling-based RRT* (Karaman & Frazzoli, 2011). The success rate of RRT* degrades significantly as the dimension of the configuration space increases. In addition, the planning time of the sampling-based planner rises dramatically with problem dimension, whereas the planning time of our method remains steady across all environments, namely 0.116s, 0.135s, and 0.299s, with orders of magnitude fewer collision checks.
Comparison to Learning-based Planners We also compare to three learning-based motion planning baselines: MPNet, MπNet, and AMP-LS, as displayed in Figures 4 and 6. Our method outperforms all learning baselines in both success rate and number of collision checks.
Figure 6: **Qualitative Motion Plan in KUKA Environment.** Obstacles are shown in transparent grey for a clearer view. Our method, in column (a), generates an end-to-end, smooth trajectory. Columns (b) and (c) show the trajectory generated by MπNet from two different viewing angles. The proposed trajectory traverses from the other direction, which requires more movement, is frequently stuck in local geometry, and finally fails to reach the goal state.
Figure 7: **Qualitative Compositional Generalization over More Obstacles.** Two models trained on only six obstacles are composed and tested on out-of-distribution environments with 9, 10, 11, and 12 obstacles, respectively.
| Method | Dynamic | | | Static 1 + Dynamic | | | Static 2 + Dynamic | | |
| | Success | Time | Check | Success | Time | Check | Success | Time | Check |
|--------|---------|------|-------|---------|------|-------|---------|------|-------|
| SIPP | 69.85 | 32.21| 1M+ | 70.40 | 185.50| 1.7M+ | 73.95 | 98.66| 1.3M+ |
| Ours | 99.65 | 0.12 | 49.26 | 97.35 | 3.72 | 213.97| 97.95 | 3.63 | 177.31|
Table 3: **Quantitative Results on Base Dynamic and Static + Dynamic Maze2D Environments.** Static 1 and Static 2 refer to two different static Maze2D environments. Our method outperforms the sampling-based planner by a large margin.
Notably, in Dual KUKA, our method leads the state-of-the-art learning-based planner MπNet by 37% while using 3 times fewer collision checks. We also observe that MπNet's planning time is slightly shorter than ours, even though it requires a higher number of collision checks. Note that this gap closes as the dimension of the environment increases; in the real world, where collision checks are much more expensive, we believe this gap will be further eliminated.
**Motion Refining** We present quantitative and qualitative results of refining motion plans in Table 1 and Figure 2. The gain from refining motion plans grows as the dimensionality of the environment increases. As shown in Table 1, the success rate generally increases with the number of refining attempts $R$, but the gain gradually saturates by 10 attempts. In such cases, the proposed trajectory likely suffers from a catastrophic collision, and the model may need to resample a trajectory from pure noise.
### 4.3 Compositionality
**Composing Obstacles** We first evaluate compositionality by adding obstacles to the environments. A qualitative visualization of a composite Maze2D environment is given in Figure 7, where we train our model on 6 obstacles and evaluate on environments with up to 12 obstacles. The blue blocks indicate the 6 in-distribution obstacles, while the orange blocks indicate additional out-of-distribution obstacles. The composed model effectively proposes different trajectories according to the presented obstacles by sampling poses from regions with low composite potential. We report the full quantitative results in Figure 5 and Table XI.
**Composing Multiple Constraints** We then investigate the compositionality of combining two different diffusion potential functions (i.e., models trained on completely different environments). Specifically, we first train one model on 6 small obstacles and another on 3 large obstacles and evaluate on environments where both small and large obstacles are present. The quantitative results are shown in Table 2. Moreover, we compose the two aforementioned models trained
Figure 8: Qualitative Real World Motion Plans, Hotel Scene. The composed model provides a long-horizon motion plan that avoids 10 pedestrians while trained on only 5. In columns (a) and (b), the composed plan is aware of P1 (cyan) and P6 (pink) and overtakes them from above, while the baseline model runs into them. In column (c), the composed motion plan moves faster so as to pass through the intersection before P7 (brown) arrives, whereas the baseline motion plan results in a collision due to its slower speed. In column (d), the composed plan goes upward to avoid the oncoming P8 (black).
on static environments with another model trained on dynamic environments. Hence, we test the composed models on environments where both static and dynamic obstacles are present, naming these environments static 1 + dynamic and static 2 + dynamic, respectively. The quantitative results on the base dynamic environment and the static + dynamic environments are shown in Table 3, and the qualitative results are in Figure X.
4.4 REAL WORLD
Finally, we evaluate the effectiveness of our method on the real-world ETH/UCY (Pellegrini et al., 2010; Lerner et al., 2007) dataset. The dataset group we use consists of 5 scenes (ETH, Hotel, Zara01, Zara02, UNIV), where each scene contains human trajectories in world coordinates collected by manual annotation from a bird's-eye-view camera. Our focus is to investigate whether our model can propose successful trajectories given the start and goal locations of an agent in a random, cluttered, street-level real-world interaction. Specifically, the planner is trained to predict the trajectory of the agent (highlighted in red), conditioned on the trajectories of 5 other pedestrians. Data from all scenes are used for training, and we evaluate on unseen combinations of start, goal, and surrounding pedestrian trajectories. In Figure XI, we present qualitative results where 5 other pedestrians are present. We also evaluate with 10 pedestrians present by composing two potential functions, each constrained by 5 pedestrians, as illustrated in Figure 8.
5 DISCUSSION
Limitations. Our existing formulation of the potential based diffusion motion planner has several limitations. First, although our motion trajectories are accurate, they are often suboptimal, e.g., there may exist a shorter path from start to goal. This may be addressed by adding an additional potential encouraging the goal to be reached as soon as possible. Second, our approach to composing potentials scales linearly with the number of composed models, requiring significantly more computation with additional models. This can be remedied by having different potentials operate on shared features in a network.
Conclusion. In this work, we have introduced the potential based diffusion motion planner. We first formulate our potential diffusion motion planner and describe its connections to and advantages over traditional potential based planners. We illustrate the motion planning performance of our approach in terms of success rate, planning time, and number of collision checks on motion planning problems of dimensionality 2D, 7D, and 14D. We further illustrate the compositionality of our approach, enabling generalization to both new numbers of objects and new combinations of motion constraints. Finally, we illustrate the potential of our work on real-world scenes with multi-agent interaction.
REFERENCES
Anurag Ajay, Yilun Du, Abhi Gupta, Joshua B. Tenenbaum, Tommi S. Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision making? In The Eleventh International Conference on Learning Representations, 2023.
Mayur J Bency, Ahmed H Qureshi, and Michael C Yip. Neural path planning: Fixed time, near-optimal path generation via oracle imitation. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3965–3972. IEEE, 2019.
Joao Carvalho, An T Le, Mark Baierl, Dorothea Koert, and Jan Peters. Motion planning diffusion: Learning and planning of robot motions with diffusion models. arXiv preprint arXiv:2308.01557, 2023.
Devendra Singh Chaplot, Deepak Pathak, and Jitendra Malik. Differentiable spatial planning using transformers. In International Conference on Machine Learning, pp. 1484–1495. PMLR, 2021.
Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, and Jun Zhu. Offline reinforcement learning via high-fidelity generative behavior modeling. arXiv preprint arXiv:2209.14548, 2022.
Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 2023.
Sanjiban Choudhury, Jonathan D Gammell, Timothy D Barfoot, Siddhartha S Srinivasa, and Sebastian Scherer. Regionally accelerated batch informed trees (RABIT*): A framework to integrate local information into optimal path planning. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 4207–4214. IEEE, 2016.
Erwin Coumans and Yunfei Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2021.
Murtaza Dalal, Ajay Mandlekar, Caelan Garrett, Ankur Handa, Ruslan Salakhutdinov, and Dieter Fox. Imitating task and motion planning with visuomotor transformers. arXiv preprint arXiv:2305.16309, 2023.
Yilun Du and Igor Mordatch. Implicit generation and generalization in energy-based models. arXiv preprint arXiv:1903.08689, 2019.
Yilun Du, Conor Durkan, Robin Strudel, Joshua B Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Sussman Grathwohl. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc. In International Conference on Machine Learning, pp. 8489–8510. PMLR, 2023.
Mohamed Elbanhawi and Milan Simic. Sampling-based robot motion planning: A review. IEEE Access, 2:56–77, 2014.
Xiaolin Fang, Caelan Reed Garrett, Clemens Eppner, Tomás Lozano-Pérez, Leslie Pack Kaelbling, and Dieter Fox. Dimsam: Diffusion models as samplers for task and motion planning under partial observability. arXiv preprint arXiv:2306.13196, 2023.
Paolo Fiorini and Zvi Shiller. Motion planning in dynamic environments using velocity obstacles. The international journal of robotics research, 17(7):760–772, 1998.
Adam Fishman, Adithyavairavan Murali, Clemens Eppner, Bryan Peele, Byron Boots, and Dieter Fox. Motion policy networks. In Conference on Robot Learning, pp. 967–977. PMLR, 2023.
Jonathan D Gammell, Siddhartha S Srinivasa, and Timothy D Barfoot. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2997–3004. IEEE, 2014.
|
lJkOCMP2aW
|
The model seems to borrow conceptually very heavily from the PatchTST model without explicitly recognizing the source of inspiration. Without a detailed explanation of the actual difference between the two architectures the proposed architecture appears to be a minor perturbation of the original PatchTST.
|
PATHFORMER: MULTI-SCALE TRANSFORMERS WITH ADAPTIVE PATHWAYS FOR TIME SERIES FORECASTING
Peng Chen\textsuperscript{1}, Yingying Zhang\textsuperscript{2}, Yunyao Cheng\textsuperscript{3}, Yang Shu\textsuperscript{1†}, Yihang Wang\textsuperscript{1*}, Qingsong Wen\textsuperscript{2}, Bin Yang\textsuperscript{1}, Chenjuan Guo\textsuperscript{1}
\textsuperscript{1}East China Normal University, \textsuperscript{2}Alibaba Group, \textsuperscript{3}Aalborg University
\{pchen,yhwang\}@stu.ecnu.edu.cn, congrong.zyy@alibaba-inc.com
\{yshu,cjguo,byang\}@dase.ecnu.edu.cn, yunyaoc@cs.aau.dk
qingsongedu@gmail.com
ABSTRACT
Transformers for time series forecasting mainly model time series from limited or fixed scales, making it challenging to capture different characteristics spanning various scales. We propose Pathformer, a multi-scale Transformer with adaptive pathways. It integrates both temporal resolution and temporal distance for multi-scale modeling. Multi-scale division divides the time series into different temporal resolutions using patches of various sizes. Based on the division of each scale, dual attention is performed over these patches to capture global correlations and local details as temporal dependencies. We further enrich the multi-scale Transformer with adaptive pathways, which adaptively adjust the multi-scale modeling process based on the varying temporal dynamics of the input, improving the accuracy and generalization of Pathformer. Extensive experiments on eleven real-world datasets demonstrate that Pathformer not only achieves state-of-the-art performance by surpassing all current models but also exhibits stronger generalization abilities under various transfer scenarios. The code is made available at https://github.com/decisionintelligence/pathformer.
1 INTRODUCTION
Time series forecasting is an essential function for various industries, such as energy, finance, traffic, logistics, and cloud computing (Chen et al., 2012; Cirstea et al., 2022b; Ma et al., 2014; Zhu et al., 2023; Pan et al., 2023; Pedersen et al., 2020), and is also a foundational building block for other time series analytics, e.g., outlier detection (Campos et al., 2022; Kieu et al., 2022b). Motivated by its widespread application in sequence modeling and impressive success in various fields such as CV and NLP (Dosovitskiy et al., 2021; Brown et al., 2020), Transformer (Vaswani et al., 2017) receives emerging attention in time series (Wen et al., 2023; Wu et al., 2021; Chen et al., 2022; Liu et al., 2022c). Despite the growing performance, recent works have started to challenge the existing designs of Transformers for time series forecasting by proposing simpler linear models with better performance (Zeng et al., 2023). While the capabilities of Transformers are still promising in time series forecasting (Nie et al., 2023), it calls for better designs and adaptations to fulfill its potential.
Real-world time series exhibit diverse variations and fluctuations at different temporal scales. For instance, the utilization of CPU, GPU, and memory resources in cloud computing reveals unique temporal patterns spanning daily, monthly, and seasonal scales (Pan et al., 2023). This calls for multi-scale modeling (Mozer, 1991; Ferreira et al., 2006) for time series forecasting, which extracts temporal features and dependencies from various scales of temporal intervals. There are two aspects to consider for multiple scales in time series: temporal resolution and temporal distance. Temporal resolution corresponds to how we view the time series in the model and determines the length of each temporal patch or unit considered for modeling. In Figure 1, the same time series can be divided
\textsuperscript{*}Part of the work was done during the internship at Alibaba Group.
\textsuperscript{†}Corresponding author
Figure 1: Left: Time series are divided into patches of varying sizes as temporal resolution. The intervals in blue, orange, and red represent different patch sizes. Right: Local details (black arrows) and global correlations (color arrows) are modeled through different temporal distances.
into small patches (blue) or large ones (orange), leading to fine-grained or coarse-grained temporal characteristics. Temporal distance corresponds to how we explicitly model temporal dependencies and determines the distances between the time steps considered for temporal modeling. In Figure 1, the black arrows model the relations between nearby time steps, forming local details, while the colored arrows model time steps across long ranges, forming global correlations.
To further explore the capability of extracting correlations in Transformers for time series forecasting, in this paper, we focus on the aspect of enhancing multi-scale modeling with the Transformer architecture. Two main challenges limit the effective multi-scale modeling in Transformers. The first challenge is the incompleteness of multi-scale modeling. Viewing the data from different temporal resolutions implicitly influences the scale of the subsequent modeling process (Shabani et al., 2023). However, simply changing temporal resolutions cannot emphasize temporal dependencies in various ranges explicitly and efficiently. On the contrary, considering different temporal distances enables modeling dependencies from different ranges, such as global and local correlations (Li et al., 2019). However, the exact temporal distances of global and local intervals are influenced by the division of data, which is incomplete from a single view of temporal resolution. The second challenge is the fixed multi-scale modeling process. Although multi-scale modeling reaches a more complete understanding of time series, different series prefer different scales depending on their specific temporal characteristics and dynamics. For example, comparing the two series in Figure 1, the series above shows rapid fluctuations, which may imply more attention to fine-grained and short-term characteristics. The series below, on the contrary, may need more focus on coarse-grained and long-term modeling. The fixed multi-scale modeling for all data hinders the grasp of critical patterns of each time series, and manually tuning the optimal scales for a dataset or each time series is time-consuming or intractable. Solving these two challenges calls for adaptive multi-scale modeling, which adaptively models the current data from certain multiple scales.
Inspired by the above understanding of multi-scale modeling, we propose Multi-scale Transformers with Adaptive Pathways (Pathformer) for time series forecasting. To enable the ability of more complete multi-scale modeling, we propose a multi-scale Transformer block unifying multi-scale temporal resolution and temporal distance. Multi-scale division is proposed to divide the time series into patches of different sizes, forming views of diverse temporal resolutions. Based on each size of divided patches, dual attention encompassing inter-patch and intra-patch attention is proposed to capture temporal dependencies, with inter-patch attention capturing global correlations across patches and intra-patch attention capturing local details within individual patches. We further propose adaptive pathways to activate the multi-scale modeling capability and endow it with adaptive modeling characteristics. At each layer of the model, a multi-scale router adaptively selects specific sizes of patch division and the subsequent dual attention in the Transformer based on the input data, which controls the extraction of multi-scale characteristics. We equip the router with trend and seasonality decomposition to enhance its ability to grasp the temporal dynamics. The router works with an aggregator to adaptively combine multi-scale characteristics through weighted aggregation. The layer-by-layer routing and aggregation form the adaptive pathways of multi-scale modeling throughout the Transformer. To the best of our knowledge, this is the first study that introduces adaptive multi-scale modeling for time series forecasting. Specifically, we make the following contributions:
• We propose a multi-scale Transformer architecture. It integrates the two perspectives of temporal resolution and temporal distance and equips the model with the capacity of a more complete multi-scale time series modeling.
• We further propose adaptive pathways within multi-scale Transformers. The multi-scale router with temporal decomposition works together with the aggregator to adaptively extract and aggregate multi-scale characteristics based on the temporal dynamics of input data, realizing adaptive multi-scale modeling for time series.
• We conduct extensive experiments on different real-world datasets and achieve state-of-the-art prediction accuracy. Moreover, we perform transfer learning experiments across datasets to validate the strong generalization of the model.
2 RELATED WORK
Time Series Forecasting. Time series forecasting predicts future observations based on historical observations. Statistical modeling methods based on exponential smoothing and its different flavors serve as a reliable workhorse for time series forecasting (Hyndman & Khandakar, 2008; Li et al., 2022a). Among deep learning methods, GNNs model spatial dependency for correlated time series forecasting (Jin et al., 2023a; Wu et al., 2020; Zhao et al., 2024; Cheng et al., 2024; Miao et al., 2024; Cirstea et al., 2021). RNNs model the temporal dependency (Chung et al., 2014; Kieu et al., 2022a; Wen et al., 2017; Cirstea et al., 2019). DeepAR (Rangapuram et al., 2018) uses RNNs and autoregressive methods to predict future short-term series. CNN models use temporal convolutions to extract sub-series features (Sen et al., 2019; Liu et al., 2022a; Wang et al., 2023). TimesNet (Wu et al., 2023a) transforms the original one-dimensional time series into a two-dimensional space and captures multi-period features through convolution. LLM-based methods also show effective performance in this field (Jin et al., 2023b; Zhou et al., 2023). Additionally, some methods incorporate neural architecture search to discover optimal architectures (Wu et al., 2022; 2023b).
Transformer models have recently received emerging attention in time series forecasting (Wen et al., 2023). Informer (Zhou et al., 2021) proposes prob-sparse self-attention to select important keys. Triformer (Cirstea et al., 2022a) employs a triangular architecture that manages to reduce complexity. Autoformer (Wu et al., 2021) proposes an auto-correlation mechanism to replace self-attention for modeling temporal dynamics. FEDformer (Zhou et al., 2022) utilizes the Fourier transform to model temporal dynamics from the frequency perspective. However, researchers have raised concerns about the effectiveness of Transformers for time series forecasting, as simple linear models prove to be effective or even outperform previous Transformers (Li et al., 2022a; Challu et al., 2023; Zeng et al., 2023). Meanwhile, PatchTST (Nie et al., 2023) employs patching and channel independence with Transformers to effectively enhance performance, showing that the Transformer architecture still has potential with proper adaptation in time series forecasting.
Multi-scale Modeling for Time Series. Modeling multi-scale characteristics proves to be effective for correlation learning and feature extraction in fields such as computer vision (Wang et al., 2021; Li et al., 2022b; Wang et al., 2022b) and multi-modal learning (Hu et al., 2020; Wang et al., 2022a), but is relatively less explored in time series forecasting. N-HiTS (Challu et al., 2023) employs multi-rate data sampling and hierarchical interpolation to model features at different resolutions. Pyraformer (Liu et al., 2022b) introduces pyramidal attention to extract features at different temporal resolutions. Scaleformer (Shabani et al., 2023) proposes a multi-scale framework, but the need to allocate a predictive model at different temporal resolutions results in higher model complexity. Different from these methods, which use fixed scales and cannot adaptively change the multi-scale modeling for different time series, we propose a multi-scale Transformer with adaptive pathways that adaptively models multi-scale characteristics based on diverse temporal dynamics.
3 METHODOLOGY
To effectively capture multi-scale characteristics, we propose multi-scale Transformers with adaptive pathways (named Pathformer). As depicted in Figure 2, the whole forecasting network is composed of Instance Norm, a stack of Adaptive Multi-Scale Blocks (AMS Blocks), and a Predictor. Instance Norm (Kim et al., 2022) is a normalization technique employed to address the distribution shift between training and testing data. The Predictor is a fully connected neural network, adopted for its applicability to long-sequence forecasting (Zeng et al., 2023; Das et al., 2023).
The core of our design is the AMS Block for adaptive modeling of multi-scale characteristics, which consists of the multi-scale Transformer block and adaptive pathways. Inspired by the idea of patch-
Figure 2: The architecture of Pathformer. The Multi-scale Transformer Block (MST Block) comprises patch division with multiple patch sizes and dual attention. The adaptive pathways select the patch sizes with the top $K$ weights generated by the router to capture multi-scale characteristics, and the selected patch sizes are represented in blue. Then, the aggregator applies weighted aggregation to the characteristics obtained from the MST Block.
ing in Transformers (Nie et al., 2023), the multi-scale Transformer block integrates multi-scale temporal resolutions and distances by introducing patch division with multiple patch sizes and dual attention on the divided patches, equipping the model with the capability to comprehensively model multi-scale characteristics. Based on the various options for multi-scale modeling in the Transformer block, adaptive pathways utilize this multi-scale modeling capability and endow it with adaptive modeling characteristics. A multi-scale router selects specific sizes of patch division and the subsequent dual attention in the Transformer based on the input data, which controls the extraction of multi-scale features. The router works with an aggregator to combine these multi-scale characteristics through weighted aggregation. The layer-by-layer routing and aggregation form the adaptive pathways of multi-scale modeling throughout the Transformer blocks. In the following parts, we describe the multi-scale Transformer block and the adaptive pathways of the AMS Block in detail.
### 3.1 Multi-scale Transformer Block
**Multi-scale Division.** For the simplicity of notations, we use a univariate time series for description, and the method can be easily extended to multivariate cases by considering each variable independently. In the multi-scale Transformer block, we define a collection of $M$ patch size values as $S = \{S_1, \ldots, S_M\}$, with each patch size $S$ corresponding to a patch division operation. For the input time series $X \in \mathbb{R}^{H \times d}$, where $H$ denotes the length of the time series and $d$ denotes the dimension of features, each patch division operation with the patch size $S$ divides $X$ into $P$ (with $P = H/S$) patches as $(X^1, X^2, \ldots, X^P)$, where each patch $X^i \in \mathbb{R}^{S \times d}$ contains $S$ time steps. Different patch sizes in the collection lead to various scales of divided patches and give various views of temporal resolutions for the input series. This multi-scale division works with the dual attention mechanism described below for multi-scale modeling.
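A minimal sketch of this division follows, assuming the input length $H$ is divisible by every patch size in the collection:

```python
import torch

def multi_scale_division(x, patch_sizes):
    """Split a series x of shape (H, d) into P = H // S patches for each size S."""
    H, d = x.shape
    return {S: x.reshape(H // S, S, d) for S in patch_sizes}

views = multi_scale_division(torch.randn(96, 1), patch_sizes=[2, 6, 12, 24])
# views[12] has shape (8, 12, 1): 8 patches of 12 time steps each
```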
**Dual Attention.** Based on the patch division of each scale, we propose dual attention to model temporal dependencies over the divided patches. To grasp temporal dependencies from different temporal distances, we utilize patch division as guidance for different temporal distances, and the dual attention mechanism consists of intra-patch attention within each divided patch and inter-patch attention across different patches, as shown in Figure 3(a).
Consider a set of patches $(X^1, X^2, \ldots, X^P)$ divided with the patch size $S$. Intra-patch attention establishes relationships between time steps within each patch. For the $i$-th patch $X^i \in \mathbb{R}^{S \times d}$, we first embed the patch along the feature dimension $d$ to get $X_{\text{intra}}^i \in \mathbb{R}^{S \times d_m}$, where $d_m$ represents the dimension of embedding. Then we perform trainable linear transformations on $X_{\text{intra}}^i$ to obtain the key and value in attention operations, denoted as $K_{\text{intra}}^i, V_{\text{intra}}^i \in \mathbb{R}^{S \times d_m}$. We employ a trainable query matrix $Q_{\text{intra}}^i \in \mathbb{R}^{1 \times d_m}$ to merge the context of the patch and subsequently compute the
Figure 3: (a) The structure of the Multi-Scale Transformer Block, which mainly consists of Patch Division, Inter-patch attention, and Intra-patch attention. (b) The structure of the Multi-Scale Router.
cross-attention between $Q^i_{\text{intra}}, K^i_{\text{intra}}, V^i_{\text{intra}}$ to model local details within the $i$-th patch:
$$
\text{Attn}_{\text{intra}}^i = \text{Softmax}(Q^i_{\text{intra}}(K^i_{\text{intra}})^T / \sqrt{d_m})V^i_{\text{intra}}.
$$
After intra-patch attention, each patch has transitioned from its original input length of $S$ to a length of 1. The attention results from all the patches are concatenated to produce the output of intra-patch attention on the divided patches as $\text{Attn}_{\text{intra}} \in \mathbb{R}^{P \times d_m}$, which represents the local details from nearby time steps in the time series:
$$
\text{Attn}_{\text{intra}} = \text{Concat}(\text{Attn}^1_{\text{intra}}, \ldots, \text{Attn}^P_{\text{intra}}).
$$
Inter-patch attention establishes relationships between patches to capture global correlations. For the patch-divided time series $X \in \mathbb{R}^{P \times S \times d}$, we first perform feature embedding along the feature dimension from $d$ to $d_m$ and then rearrange the data to combine the two dimensions of patch size $S$ and feature embedding $d_m$, resulting in $X_{\text{inter}} \in \mathbb{R}^{P \times d'_m}$, where $d'_m = S \cdot d_m$. After this embedding and rearranging process, the time steps within the same patch are combined, and thus we perform self-attention over $X_{\text{inter}}$ to model correlations between patches. Following the standard self-attention protocol, we obtain the query, key, and value through linear mappings on $X_{\text{inter}}$, denoted as $Q_{\text{inter}}, K_{\text{inter}}, V_{\text{inter}} \in \mathbb{R}^{P \times d'_m}$. Then, we compute the attention $\text{Attn}_{\text{inter}}$, which involves interaction between patches and represents the global correlations of the time series:
$$
\text{Attn}_{\text{inter}} = \text{Softmax}(Q_{\text{inter}}(K_{\text{inter}})^T / \sqrt{d'_m})V_{\text{inter}}.
$$
To fuse the global correlations and local details captured by dual attention, we rearrange the output of intra-patch attention and apply a linear transformation along the patch dimension from 1 to $S$ to recover the time steps in each patch, yielding $\text{Attn}_{\text{intra}} \in \mathbb{R}^{P \times S \times d_m}$, and then add it to the rearranged inter-patch attention $\text{Attn}_{\text{inter}} \in \mathbb{R}^{P \times S \times d_m}$ to obtain the final output of dual attention $\text{Attn} \in \mathbb{R}^{P \times S \times d_m}$.
Overall, the multi-scale division provides different views of the time series with different patch sizes, and the changing patch sizes further influence the dual attention, which models temporal dependencies from different distances guided by the patch division. These two components work together to enable multiple scales of temporal modeling in the Transformer.
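A minimal PyTorch sketch of dual attention for one patch size $S$ follows. Shapes mirror the text; the single learned intra-patch query, the fused inter-patch projections, and the linear length-1-to-$S$ expansion are one plausible reading of the description, not the released implementation.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, d, d_m, S):
        super().__init__()
        self.emb = nn.Linear(d, d_m)                       # feature embedding d -> d_m
        self.q_intra = nn.Parameter(torch.randn(1, d_m))   # learned query merging a patch
        self.kv = nn.Linear(d_m, 2 * d_m)
        self.qkv_inter = nn.Linear(S * d_m, 3 * S * d_m)   # d'_m = S * d_m
        self.expand = nn.Linear(1, S)                      # patch length 1 -> S
        self.d_m = d_m

    def forward(self, x):                                  # x: (P, S, d)
        P, S, _ = x.shape
        h = self.emb(x)                                    # (P, S, d_m)
        k, v = self.kv(h).chunk(2, dim=-1)
        a = torch.softmax(self.q_intra @ k.transpose(1, 2) / self.d_m**0.5, dim=-1)
        intra = a @ v                                      # (P, 1, d_m): local details
        q, k2, v2 = self.qkv_inter(h.reshape(P, S * self.d_m)).chunk(3, dim=-1)
        a2 = torch.softmax(q @ k2.T / (S * self.d_m)**0.5, dim=-1)
        inter = (a2 @ v2).reshape(P, S, self.d_m)          # (P, S, d_m): global correlations
        intra = self.expand(intra.transpose(1, 2)).transpose(1, 2)  # (P, S, d_m)
        return intra + inter                               # fused dual attention output

out = DualAttention(d=1, d_m=16, S=12)(torch.randn(8, 12, 1))    # shape (8, 12, 16)
```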
### 3.2 Adaptive Pathways
The design of the multi-scale Transformer block equips the model with the capability of multi-scale modeling. However, different series may prefer diverse scales, depending on their specific temporal characteristics and dynamics. Simply applying more scales may bring in redundant or useless signals, and manually tuning the optimal scales for a dataset or each time series is time-consuming or intractable. An ideal model needs to figure out such critical scales based on the input data for more effective modeling and better generalization of unseen data.
Pathways and Mixture of Experts are used to achieve adaptive modeling (Dean, 2021; Shazeer et al., 2016). Based on these concepts, we propose adaptive pathways based on multi-scale Transformer to model adaptive multi-scale, depicted in Figure 2. It contains two main components: the multi-scale router and the multi-scale aggregator. The multi-scale router selects specific sizes of patch division based on the input data, which activates specific parts in the Transformer and controls the extraction of multi-scale characteristics. The router works with the multi-scale aggregator to combine these characteristics through weighted aggregation, obtaining the output of the Transformer block.
**Multi-Scale Router.** The multi-scale router enables data-adaptive routing in the multi-scale Transformer, which selects the optimal sizes for patch division and thus controls the process of multi-scale modeling. Since the optimal or critical scales for each time series can be impacted by its complex inherent characteristics and dynamic patterns, like the periodicity and trend, we introduce a temporal decomposition module in the router that encompasses both seasonality and trend decomposition to extract periodicity and trend patterns, as illustrated in Figure 3(b).
**Seasonality decomposition** involves transforming the time series from the temporal domain into the frequency domain to extract periodic patterns. We utilize the Discrete Fourier Transform (DFT) (Cooley & Tukey, 1965), denoted as DFT(·), to decompose the input X into the Fourier basis and select the $K_f$ basis components with the largest amplitudes to maintain sparsity in the frequency domain. Then, we obtain the periodic patterns $X_{sea}$ through an inverse DFT, denoted as IDFT(·). The process is as follows:
$$X_{sea} = \text{IDFT}(\{f_1, \ldots, f_{K_f}\}, A, \Phi),$$
where $\Phi$ and $A$ represent the phase and amplitude of each frequency from DFT(X), $\{f_1, \ldots, f_{K_f}\}$ represents the frequencies with the top $K_f$ amplitudes. **Trend decomposition** uses different kernels of average pooling for moving averages to extract trend patterns based on the remaining part after the seasonality decomposition $X_{rem} = X - X_{sea}$. For the results obtained from different kernels, a weighted operation is applied to obtain the representation of the trend component:
$$X_{trend} = \text{Softmax}(L(X_{rem})) \cdot (\text{Avgpool}(X_{rem})_{\text{kernel}_1}, \ldots, \text{Avgpool}(X_{rem})_{\text{kernel}_N}),$$
where $\text{Avgpool}(\cdot)_{\text{kernel}_i}$ is the pooling function with the $i$-th kernel, $N$ is the number of kernels, and $\text{Softmax}(L(\cdot))$ controls the weights for the results from different kernels. We add the seasonality and trend patterns to the original input $X$ and then perform a linear mapping $\text{Linear}(\cdot)$ to transform and merge them along the temporal dimension, obtaining $X_{trans} \in \mathbb{R}^d$.
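A minimal sketch of the router's temporal decomposition follows. The kernel sizes and $K_f$ are illustrative, and uniform averaging stands in for the learned softmax weighting over kernels:

```python
import torch
import torch.nn.functional as F

def seasonality(x, k_f=3):
    """Keep the top-k_f frequencies of x (shape (H,)) via DFT/IDFT."""
    spec = torch.fft.rfft(x)
    amp = spec.abs()
    amp[0] = 0                                   # ignore the DC component
    idx = torch.topk(amp, k_f).indices
    mask = torch.zeros_like(spec)
    mask[idx] = spec[idx]                        # sparse frequency spectrum
    return torch.fft.irfft(mask, n=x.shape[0])

def trend(x_rem, kernels=(5, 25)):
    """Moving averages of the remainder with several kernels, uniformly averaged."""
    outs = [F.avg_pool1d(x_rem.view(1, 1, -1), k, stride=1,
                         padding=k // 2).view(-1)[: x_rem.shape[0]]
            for k in kernels]
    return torch.stack(outs).mean(0)

x = torch.randn(96)
x_sea = seasonality(x)
x_trend = trend(x - x_sea)
```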
Based on the results $X_{trans}$ from temporal decomposition, the router employs a routing function to generate the pathway weights, which determines the patch sizes to choose for the current data. To avoid consistently selecting a few patch sizes, causing the corresponding scales to be repeatedly updated while neglecting other potentially useful scales in the multi-scale Transformer, we introduce noise terms to add randomness in the weight generation process. The whole process of generating pathway weights is as follows:
$$R(X_{trans}) = \text{Softmax}(X_{trans}W_r + \epsilon \cdot \text{Softplus}(X_{trans}W_{noise})), \epsilon \sim \mathcal{N}(0, 1),$$
where $R(\cdot)$ represents the whole routing function, and $W_r, W_{noise} \in \mathbb{R}^{d \times M}$ are learnable parameters for weight generation, with $d$ denoting the feature dimension of $X_{trans}$ and $M$ denoting the number of patch sizes. To introduce sparsity into the routing and encourage the selection of critical scales, we perform top-$K$ selection on the pathway weights, keeping the top $K$ weights and setting the rest to 0, and denote the final result as $\bar{R}(X_{trans})$.
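Below is a hedged sketch of this routing function; the names `w_r` and `w_noise` mirror $W_r$ and $W_{noise}$ above, while applying the noise only in training mode is our own assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleRouter(nn.Module):
    def __init__(self, d: int, num_sizes: int, top_k: int):
        super().__init__()
        self.w_r = nn.Linear(d, num_sizes, bias=False)       # W_r
        self.w_noise = nn.Linear(d, num_sizes, bias=False)   # W_noise
        self.top_k = top_k

    def forward(self, x_trans: torch.Tensor) -> torch.Tensor:  # x_trans: [B, d]
        logits = self.w_r(x_trans)
        if self.training:  # noise adds randomness so no scale is always neglected
            eps = torch.randn_like(logits)
            logits = logits + eps * F.softplus(self.w_noise(x_trans))
        weights = torch.softmax(logits, dim=-1)              # R(X_trans)
        topk = weights.topk(self.top_k, dim=-1)
        sparse = torch.zeros_like(weights)
        return sparse.scatter_(-1, topk.indices, topk.values)  # R-bar: top-K kept
```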
**Multi-Scale Aggregator.** Each dimension of the generated pathway weights $\bar{R}(X_{trans}) \in \mathbb{R}^M$ corresponds to a patch size in the multi-scale Transformer, with $\bar{R}(X_{trans})_i > 0$ indicating that patch division with size $S_i$ and the dual attention are performed, and $\bar{R}(X_{trans})_i = 0$ indicating that this patch size is ignored for the current data. Let $X_{out}^i$ denote the output of the multi-scale Transformer with patch size $S_i$. Since different patch sizes produce outputs with different temporal dimensions, the aggregator first applies a transformation function $T_i(\cdot)$ to align the temporal dimensions across scales. Then, the aggregator performs weighted aggregation of the multi-scale outputs based on the pathway weights to get the final output of this AMS block:
$$X_{out} = \sum_{i=1}^{M} I(\bar{R}(X_{trans})_i > 0) R(X_{trans})_i T_i(X_{out}^i).$$
$I(\bar{R}(X_{trans})_i > 0)$ is the indicator function, which outputs 1 when $\bar{R}(X_{trans})_i > 0$ and 0 otherwise, so that only the top $K$ patch sizes and the corresponding Transformer outputs are considered during aggregation.
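A short sketch of this aggregation step follows; the `transforms[i]` callables stand in for $T_i(\cdot)$ and are assumed to map every scale's output (here of shape `[B, T, d]`) to a common temporal dimension.

```python
import torch

def aggregate(outputs, transforms, pathway_weights):
    """outputs[i]: X_out^i for patch size S_i; pathway_weights: [B, M], sparse."""
    agg = 0.0
    for i in range(pathway_weights.size(-1)):
        w = pathway_weights[:, i]                 # weight for scale i, shape [B]
        if torch.all(w == 0):                     # indicator is 0 for the whole batch
            continue                              # unselected scales are never computed
        agg = agg + w.view(-1, 1, 1) * transforms[i](outputs[i])
    return agg                                    # X_out of the AMS block
```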
4 EXPERIMENTS
4.1 TIME SERIES FORECASTING
Datasets. We conduct experiments on nine real-world datasets to assess the performance of Pathformer, spanning a range of domains, including electricity, transportation, weather, and cloud computing. These datasets include ETT (ETTh1, ETTh2, ETTm1, ETTm2), Weather, Electricity, Traffic, ILI, and Cloud Cluster (Cluster-A, Cluster-B, Cluster-C).
Baselines and Metrics. We choose several state-of-the-art models as baselines, including PatchTST (Nie et al., 2023), NLinear (Zeng et al., 2023), Scaleformer (Shabani et al., 2023), TIDE (Das et al., 2023), FEDformer (Zhou et al., 2022), Pyraformer (Liu et al., 2022b), and Autoformer (Wu et al., 2021). To ensure fair comparisons, all models follow the same input length ($H = 36$ for the ILI dataset and $H = 96$ for others) and prediction length ($F \in \{24, 48, 96, 192\}$ for the Cloud Cluster datasets, $F \in \{24, 36, 48, 60\}$ for the ILI dataset, and $F \in \{96, 192, 336, 720\}$ for others). We select two common metrics in time series forecasting: Mean Absolute Error (MAE) and Mean Squared Error (MSE).
Implementation Details. Pathformer utilizes the Adam optimizer (Kingma & Ba, 2015) with a learning rate set at $10^{-3}$. The default loss function employed is L1 Loss, and we implement early stopping within 10 epochs during the training process. All experiments are conducted using PyTorch and executed on an NVIDIA A800 80GB GPU. Pathformer is composed of 3 Adaptive Multi-Scale Blocks (AMS Blocks). Each AMS Block contains 4 different patch sizes. These patch sizes are selected from a pool of commonly used options, namely $\{2, 3, 6, 12, 16, 24, 32\}$.
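The following sketch illustrates the stated training configuration; `model`, the data loaders, and the `evaluate` helper are assumed placeholders, not the authors' code.

```python
import torch

def train(model, train_loader, val_loader, evaluate, max_epochs=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # stated learning rate
    criterion = torch.nn.L1Loss()                              # default loss
    best_val, patience = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
        val_loss = evaluate(model, val_loader)
        if val_loss < best_val:
            best_val, patience = val_loss, 0
        else:
            patience += 1
            if patience >= 10:                                 # early stopping within 10 epochs
                break
```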
Main Results. Table 1 shows the prediction results of multivariate time series forecasting, where Pathformer stands out with the best performance in 81 cases and the second-best in 5 cases out of the overall 88 cases. Compared with the second-best baseline, PatchTST, Pathformer demonstrates a significant improvement, with an impressive 8.1% reduction in MSE and a 6.4% reduction in MAE. Compared with the strong linear model NLinear, Pathformer also outperforms it comprehensively, especially on large datasets such as Electricity and Traffic, demonstrating the potential of the Transformer architecture for time series forecasting. Compared with the multi-scale models Pyraformer and Scaleformer, Pathformer exhibits substantial performance improvements, with a 36.4% reduction in MSE and a 19.1% reduction in MAE. This illustrates that the proposed comprehensive modeling of both temporal resolution and temporal distance with adaptive pathways is more effective for multi-scale modeling.
4.2 TRANSFER LEARNING
Experimental Setting. To assess the transferability of Pathformer, we benchmark it against three baselines: PatchTST, FEDformer, and Autoformer, devising two distinct transfer experiments. For evaluating transferability across different datasets, models initially undergo pre-training on ETTh1 and ETTm1; subsequently, we fine-tune them on ETTh2 and ETTm2. For assessing transferability to future data, models are pre-trained on the first 70% of the training data from three clusters: Cluster-A, Cluster-B, and Cluster-C, followed by fine-tuning on the remaining 30% of the training data specific to each cluster. For the baselines, we explore two approaches: direct prediction (zero-shot) and full-tuning. Deviating from these approaches, Pathformer integrates a part-tuning strategy, in which only specific parameters, such as those of the router network, undergo fine-tuning, resulting in a significant reduction in computational resource demands.
Transfer Learning Results. Table 2 presents the outcomes of our transfer learning evaluation. Across both direct prediction and full-tuning methods, Pathformer surpasses the baseline models, highlighting its enhanced generalization and transferability. One of the key strengths of Pathformer lies in its adaptive capacity to select varying scales for different temporal dynamics. This adaptability allows it to effectively capture complex temporal patterns present in diverse datasets, consequently demonstrating superior generalization and transferability. Part-tuning is a lightweight fine-tuning method that demands fewer computational resources and reduces training time on average by 52%, while still achieving prediction accuracy nearly comparable to Pathformer full-tuning. Moreover, it outperforms the full-tuning of other baseline models on the majority of datasets. This demonstrates that Pathformer can provide effective lightweight transfer learning for time series forecasting.
Table 1: Multivariate time series forecasting results. The input length $H = 96$ ($H = 36$ for ILI). The best results are highlighted in bold, and the second-best results are underlined.
| Dataset | $F$ | Pathformer | PatchTST | NLinear | Scaleformer | TIDE | FEDformer | Pyraformer | Autoformer |
|---------|-----|------------|----------|---------|-------------|------|-----------|------------|------------|
| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTh1 | 96 | 0.282 | 0.401 | 0.240 | 0.386 | 0.539 | 0.474 | 0.617 | 0.519 |
| | 192 | 0.307 | 0.446 | 0.308 | 0.430 | 0.560 | 0.472 | 0.686 | 0.420 |
| | 336 | 0.454 | 0.432 | 0.485 | 0.455 | 0.443 | 0.462 | 0.476 | 0.527 |
| | 720 | 0.479 | 0.461 | 0.495 | 0.474 | 0.486 | 0.472 | 0.494 | 0.500 |
| ETTh2 | 96 | 0.279 | 0.331 | 0.294 | 0.343 | 0.290 | 0.339 | 0.364 | 0.407 |
| | 192 | 0.380 | 0.578 | 0.396 | 0.379 | 0.396 | 0.466 | 0.459 | 0.394 |
| | 336 | 0.348 | 0.402 | 0.370 | 0.391 | 0.391 | 0.399 | 0.376 | 0.396 |
| | 720 | 0.398 | 0.424 | 0.412 | 0.435 | 0.436 | 0.485 | 0.492 | 0.463 |
| ETTm1 | 96 | 0.316 | 0.346 | 0.324 | 0.361 | 0.339 | 0.369 | 0.355 | 0.398 |
| | 192 | 0.366 | 0.370 | 0.362 | 0.383 | 0.379 | 0.386 | 0.428 | 0.455 |
| | 336 | 0.411 | 0.414 | 0.411 | 0.417 | 0.407 | 0.419 | 0.416 | 0.415 |
| | 720 | 0.460 | 0.432 | 0.461 | 0.438 | 0.478 | 0.442 | 0.558 | 0.517 |
| Weather | 96 | 0.170 | 0.248 | 0.177 | 0.260 | 0.177 | 0.257 | 0.182 | 0.275 |
| | 192 | 0.238 | 0.295 | 0.248 | 0.306 | 0.241 | 0.297 | 0.251 | 0.318 |
| | 336 | 0.293 | 0.351 | 0.304 | 0.342 | 0.302 | 0.333 | 0.340 | 0.375 |
| | 720 | 0.356 | 0.406 | 0.350 | 0.385 | 0.351 | 0.366 | 0.368 | 0.396 |
| Electricity | 96 | 0.156 | 0.192 | 0.177 | 0.218 | 0.168 | 0.208 | 0.288 | 0.365 |
| | 192 | 0.206 | 0.240 | 0.224 | 0.258 | 0.217 | 0.255 | 0.368 | 0.425 |
| | 336 | 0.254 | 0.282 | 0.277 | 0.297 | 0.267 | 0.292 | 0.447 | 0.469 |
| | 720 | 0.340 | 0.336 | 0.350 | 0.345 | 0.351 | 0.346 | 0.640 | 0.574 |
| ILI | 96 | 0.479 | 0.283 | 0.492 | 0.324 | 0.645 | 0.388 | 2.678 | 1.071 |
| | 192 | 0.429 | 0.298 | 0.428 | 0.306 | 0.395 | 0.365 | 0.564 | 0.353 |
| | 336 | 0.503 | 0.309 | 0.505 | 0.317 | 0.390 | 0.389 | 0.606 | 0.349 |
| | 720 | 0.537 | 0.322 | 0.542 | 0.337 | 0.645 | 0.388 | 0.576 | 0.349 |
| Cluster-A | 96 | 0.100 | 0.205 | 0.126 | 0.234 | 0.134 | 0.225 | 0.128 | 0.247 |
| | 192 | 0.160 | 0.264 | 0.208 | 0.302 | 0.214 | 0.310 | 0.182 | 0.319 |
| | 336 | 0.249 | 0.352 | 0.272 | 0.352 | 0.302 | 0.312 | 0.241 | 0.324 |
| | 720 | 0.349 | 0.400 | 0.452 | 0.435 | 0.425 | 0.425 | 0.425 | 0.425 |
| Cluster-B | 96 | 0.121 | 0.224 | 0.126 | 0.257 | 0.130 | 0.241 | 0.125 | 0.241 |
| | 192 | 0.172 | 0.270 | 0.183 | 0.290 | 0.173 | 0.285 | 0.164 | 0.280 |
| | 336 | 0.242 | 0.322 | 0.272 | 0.352 | 0.281 | 0.365 | 0.252 | 0.342 |
| | 720 | 0.342 | 0.380 | 0.412 | 0.406 | 0.416 | 0.436 | 0.436 | 0.436 |
| Cluster-C | 96 | 0.064 | 0.169 | 0.075 | 0.188 | 0.100 | 0.205 | 0.109 | 0.208 |
| | 192 | 0.102 | 0.218 | 0.118 | 0.241 | 0.163 | 0.286 | 0.110 | 0.242 |
| | 336 | 0.162 | 0.276 | 0.188 | 0.305 | 0.245 | 0.318 | 0.177 | 0.321 |
| | 720 | 0.304 | 0.369 | 0.354 | 0.413 | 0.375 | 0.457 | 0.326 | 0.428 |
Table 2: Transfer learning results. The best results are in bold, and the second-best results are underlined.
| Models | Metric | Pathformer | PatchTST | Full-tuning | Pathformer | PatchTST | Full-tuning | Pathformer | PatchTST | Full-tuning |
|--------|--------|------------|----------|-------------|------------|----------|-------------|------------|----------|-------------|
| | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTh2 | 96 | 0.340 | 0.369 | 0.227 | 0.333 | 0.276 | 0.328 | 0.346 | 0.369 | 0.227 |
| | 192 | 0.411 | 0.406 | 0.322 | 0.334 | 0.350 | 0.376 | 0.422 | 0.420 | 0.366 |
| | 336 | 0.450 | 0.450 | 0.450 | 0.450 | 0.450 | 0.450 | 0.450 | 0.450 | 0.450 |
| ETTm2 | 96 | 0.220 | 0.294 | 0.181 | 0.260 | 0.172 | 0.251 | 0.189 | 0.284 | 0.171 |
| | 192 | 0.325 | 0.350 | 0.232 | 0.334 | 0.302 | 0.334 | 0.339 | 0.349 | 0.303 |
| | 336 | 0.422 | 0.408 | 0.406 | 0.398 | 0.391 | 0.392 | 0.429 | 0.419 | 0.410 |
| | 720 | 0.422 | 0.408 | 0.406 | 0.398 | 0.391 | 0.392 | 0.429 | 0.419 | 0.410 |
| Cluster-A | 96 | 0.186 | 0.281 | 0.179 | 0.281 | 0.144 | 0.254 | 0.231 | 0.322 | 0.142 |
| | 192 | 0.249 | 0.334 | 0.215 | 0.315 | 0.193 | 0.302 | 0.350 | 0.396 | 0.290 |
| | 336 | 0.372 | 0.416 | 0.332 | 0.381 | 0.292 | 0.371 | 0.524 | 0.491 | 0.406 |
| Cluster-B | 96 | 0.202 | 0.298 | 0.174 | 0.275 | 0.170 | 0.270 | 0.207 | 0.306 | 0.178 |
| | 192 | 0.296 | 0.357 | 0.253 | 0.322 | 0.234 | 0.321 | 0.298 | 0.365 | 0.264 |
| | 336 | 0.464 | 0.468 | 0.441 | 0.477 | 0.424 | 0.495 | 0.471 | 0.463 | 0.528 |
| Cluster-C | 96 | 0.144 | 0.254 | 0.104 | 0.219 | 0.101 | 0.215 | 0.138 | 0.246 | 0.115 |
| | 192 | 0.174 | 0.284 | 0.166 | 0.275 | 0.162 | 0.272 | 0.194 | 0.303 | 0.182 |
| | 336 | 0.327 | 0.386 | 0.316 | 0.374 | 0.301 | 0.365 | 0.376 | 0.413 | 0.349 |
4.3 Ablation Studies
To ascertain the impact of different modules within Pathformer, we perform ablation studies focusing on inter-patch attention, intra-patch attention, time series decomposition, and pathways. The W/O Pathways configuration entails using all patch sizes from the patch size pool for every dataset, eliminating adaptive selection. Table 3 illustrates the unique impact of each module. The influence of pathways is significant; omitting them results in a marked decrease in prediction accuracy. This emphasizes the criticality of optimizing the mix of patch sizes to extract multi-scale characteristics, thus markedly improving the model's prediction accuracy. Among the attention mechanisms, intra-patch attention is notably adept at discerning local patterns, whereas inter-patch attention primarily captures wider global patterns. The time series decomposition module decomposes trend...
Table 3: Ablation study. W/O Inter, W/O Intra, W/O Decompose represent removing the inter-patch attention, intra-patch attention, and time series decomposition, respectively.
| Dataset | $F$ | W/O Inter | | W/O Intra | | W/O Decompose | | W/O Pathways | | Pathformer | |
|---------|-----|-----------|--|-----------|--|---------------|--|--------------|--|------------|--|
| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| Weather | 96 | 0.162 | 0.196 | 0.170 | 0.203 | 0.162 | 0.198 | 0.168 | 0.204 | 0.156 | 0.192 |
| | 192 | 0.219 | 0.248 | 0.220 | 0.249 | 0.212 | 0.244 | 0.219 | 0.250 | 0.206 | 0.240 |
| | 336 | 0.262 | 0.290 | 0.272 | 0.292 | 0.256 | 0.285 | 0.269 | 0.290 | 0.254 | 0.282 |
| | 720 | 0.350 | 0.343 | 0.358 | 0.357 | 0.344 | 0.340 | 0.349 | 0.348 | 0.340 | 0.336 |
| Electricity| 96 | 0.176 | 0.209 | 0.176 | 0.204 | 0.172 | 0.204 | 0.168 | 0.202 | 0.167 | 0.206 |
| | 192 | 0.185 | 0.270 | 0.193 | 0.275 | 0.176 | 0.268 | 0.181 | 0.272 | 0.167 | 0.256 |
| | 336 | 0.216 | 0.301 | 0.214 | 0.297 | 0.195 | 0.281 | 0.210 | 0.296 | 0.186 | 0.275 |
| | 720 | 0.239 | 0.322 | 0.253 | 0.327 | 0.235 | 0.316 | 0.254 | 0.332 | 0.231 | 0.309 |
Table 4: Parameter sensitivity study. The prediction accuracy varies with $K$.
| Dataset | $F$ | $K=1$ | | $K=2$ | |
|---------|-----|-------|--|-------|--|
| | | MSE | MAE | MSE | MAE |
| ETTh2 | 96 | 0.283 | 0.335 | 0.279 | 0.331 |
| | 192 | 0.357 | 0.380 | 0.349 | 0.380 |
| | 336 | 0.342 | 0.379 | 0.348 | 0.382 |
| | 720 | 0.411 | 0.430 | 0.398 | 0.424 |
| Electricity| 96 | 0.165 | 0.247 | 0.145 | 0.236 |
| | 192 | 0.175 | 0.269 | 0.159 | 0.266 |
| | 336 | 0.192 | 0.278 | 0.186 | 0.275 |
| | 720 | 0.234 | 0.311 | 0.231 | 0.309 |
and periodic patterns to improve the ability to capture the temporal dynamics of its input, assisting in the identification of appropriate patch sizes for combination.
Varying the Number of Adaptively Selected Patch Sizes. Pathformer adaptively selects the top $K$ patch sizes for combination, adjusting to different time series samples. We evaluate the influence of different $K$ values on prediction accuracy in Table 4. Our findings show that $K = 2$ and $K = 3$ yield better results than $K = 1$ and $K = 4$, highlighting the advantage of adaptively modeling critical multi-scale characteristics for improved accuracy. Additionally, distinct time series samples benefit from feature extraction using varied patch sizes, but not all patch sizes are equally effective.
Visualization of Pathway Weights. We show three samples and depict their average pathway weights for each patch size in Figure 4. Our observations reveal that the samples possess distinct pathway-weight distributions. Samples 1 and 2, which exhibit longer seasonality and similar trend patterns, show similar pathway weights, manifested in the higher weights they attribute to larger patch sizes. On the other hand, Sample 3, characterized by a shorter seasonality pattern, assigns higher weights to smaller patch sizes. These observations underscore Pathformer's adaptability, emphasizing its ability to discern and apply the optimal patch-size combinations for the diverse seasonality and trend patterns across samples.
Figure 4: The average pathway weights of different patch sizes for the Weather dataset. $B_1$, $B_2$, and $B_3$ denote distinct AMS (Adaptive Multi-Scale) blocks, while $S_1$, $S_2$, $S_3$, and $S_4$ represent varying patch sizes within each AMS block, with patch size decreasing sequentially.
5 CONCLUSION
In this paper, we propose Pathformer, a Multi-Scale Transformer with Adaptive Pathways for time series forecasting. It integrates multi-scale temporal resolutions and temporal distances by introducing patch division with multiple patch sizes and dual attention on the divided patches, enabling the comprehensive modeling of multi-scale characteristics. Furthermore, adaptive pathways dynamically select and aggregate scale-specific characteristics based on the different temporal dynamics. These innovative mechanisms collectively empower Pathformer to achieve outstanding prediction performance and demonstrate strong generalization capability on several forecasting tasks.
ACKNOWLEDGMENTS
This work was supported by National Natural Science Foundation of China (62372179) and Alibaba Innovative Research Program.
REFERENCES
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
David Campos, Tung Kieu, Chenjuan Guo, Feiteng Huang, Kai Zheng, Bin Yang, and Christian S. Jensen. Unsupervised time series outlier detection with diversity-driven convolutional ensembles. Proceedings of the VLDB Endowment, 2022.
Cristian Challu, Kin G. Olivares, Boris N. Oreshkin, Federico Garza Ramírez, Max Mergenthaler Canseco, and Artur Dubrawski. NHITS: neural hierarchical interpolation for time series forecasting. In Association for the Advancement of Artificial Intelligence (AAAI), 2023.
Cathy WS Chen, Richard Gerlach, Edward MH Lin, and WCW Lee. Bayesian forecasting for financial risk management, pre and post the global financial crisis. Journal of Forecasting, 2012.
Weiqi Chen, Wenwei Wang, Bingqing Peng, Qingsong Wen, Tian Zhou, and Liang Sun. Learning to rotate: Quaternion transformer for complicated periodical time series forecasting. In International Conference on Knowledge Discovery & Data Mining (KDD), 2022.
Yunyao Cheng, Peng Chen, Chenjuan Guo, Kai Zhao, Qingsong Wen, Bin Yang, and Christian S. Jensen. Weakly guided adaptation for robust time series forecasting. Proceedings of the VLDB Endowment, 2024.
Junyoung Chung, Çağlar Gülçehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, 2014.
Razvan-Gabriel Cirstea, Bin Yang, and Chenjuan Guo. Graph attention recurrent neural networks for correlated time series forecasting. In International Conference on Knowledge Discovery & Data Mining (KDD), 2019.
Razvan-Gabriel Cirstea, Tung Kieu, Chenjuan Guo, Bin Yang, and Sinno Jialin Pan. EnhanceNet: Plugin neural networks for enhancing correlated time series forecasting. In IEEE International Conference on Data Engineering (ICDE), 2021.
Razvan-Gabriel Cirstea, Chenjuan Guo, Bin Yang, Tung Kieu, Xuanyi Dong, and Shirui Pan. Tri-former: Triangular, variable-specific attentions for long sequence multivariate time series forecasting. In International Joint Conference on Artificial Intelligence (IJCAI), 2022a.
Razvan-Gabriel Cirstea, Bin Yang, Chenjuan Guo, Tung Kieu, and Shirui Pan. Towards spatio-temporal aware traffic time series forecasting. In IEEE International Conference on Data Engineering (ICDE), 2022b.
James W Cooley and John W Tukey. An algorithm for the machine calculation of complex fourier series. Mathematics of computation, 1965.
Abhimanyu Das, Weihao Kong, Andrew Leach, Rajat Sen, and Rose Yu. Long-term forecasting with tide: Time-series dense encoder. arXiv, 2023.
Jeff Dean. Introducing pathways: A next-generation AI architecture, 2021.
|
uhR7aYuf0i
|
How are the parameters initialized in the outer loop? Do you rely on default initializations for the family of architectures considered for meta-training? I wonder if such an approach would be unstable if the nature of networks considered is different.
|
LEARNING TO EXPLORE FOR STOCHASTIC GRADIENT MCMC
Anonymous authors
Paper under double-blind review
ABSTRACT
Bayesian Neural Networks (BNNs) with high-dimensional parameters pose a challenge for posterior inference due to the multi-modality of the posterior distributions. Stochastic Gradient Markov Chain Monte Carlo (SGMCMC) with cyclical learning rate scheduling is a promising solution, but it requires a large number of sampling steps to explore high-dimensional multi-modal posteriors, making it computationally expensive. In this paper, we propose a meta-learning strategy to build SGMCMC which can efficiently explore the multi-modal target distributions. Our algorithm allows the learned SGMCMC to quickly explore the high-density region of the posterior landscape. Also, we show that this exploration property is transferable to various tasks, even for the ones unseen during a meta-training stage. Using popular image classification benchmarks and a variety of downstream tasks, we demonstrate that our method significantly improves the sampling efficiency, achieving better performance than vanilla SGMCMC without incurring significant computational overhead.
1 INTRODUCTION
Bayesian methods have received a lot of attention as powerful tools for improving the reliability of machine learning models, owing to their ability to provide probability distributions over model parameters and thereby quantify the uncertainty in predictions. They find primary utility in safety-critical domains like autonomous driving, medical diagnosis, and finance, where accurately modeling prediction uncertainty often takes precedence over the predictions themselves. The integration of Bayesian modeling with (deep) neural networks, often referred to as Bayesian Neural Networks (BNNs), introduces exciting prospects for the development of secure and trustworthy decision-making systems.
However, there are significant obstacles to the successful application of BNNs in real-world scenarios. Bayesian inference in high-dimensional parameter spaces, especially for the deep and large models employed in the applications mentioned above, is notoriously computationally expensive and often intractable due to the complexity of the posterior distribution. Moreover, the posterior landscapes of BNNs frequently display multi-modality, where multiple high-density regions exist, posing a significant challenge to efficient exploration and sampling. Due to this difficulty, methods that are reported to work well for relatively small models, for instance variational inference (Blei et al., 2017) or Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), can severely fail for deep neural networks trained on large amounts of data when applied without care.
Recently, Stochastic Gradient Markov Chain Monte Carlo (SGMCMC) methods (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2015) have emerged as powerful tools for enhancing the scalability of approximate Bayesian inference. This advancement has opened up the possibilities of applying Bayesian methods to large-scale machine learning tasks. SGMCMC offers a versatile array of methods for constructing Markov chains that converge towards the target posterior distributions. The simulation of these chains primarily relies on stochastic gradients, making them particularly suitable for BNNs trained on large-scale datasets. However, despite the notable successes of SGMCMC in some BNN applications (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2015; Zhang et al., 2020), there remains a notable challenge. Achieving optimal performance often demands extensive engineering efforts and hyperparameter tuning. This fine-tuning process typically involves human trial and error or resource-intensive cross-validation procedures. Furthermore, it’s worth noting that
even with SGMCMC methods, there remains room for improvement in efficiently exploring multi-modal posterior distributions. As a result, in practical applications, a trade-off between precision and computational resources often becomes necessary.
To address these challenges, we introduce a novel meta-learning framework tailored to enhance the efficiency of SGMCMC algorithms. Traditional SGMCMC methods often rely on handcrafted design choices inspired by mathematical or physical principles, such as the formulation of the kinetic energy term and the curl and diffusion matrices. Recognizing the pivotal role these design components play in shaping the trade-off between exploration and exploitation within SGMCMC chains, we argue in favor of learning them directly from data rather than manually specifying them. To achieve this, we construct neural networks to serve as meta-models responsible for approximating the gradients of the kinetic energy term. These meta-models are trained using a diverse set of BNN inference tasks, encompassing various datasets and architectural configurations. Our proposed approach, termed L2E, exhibits several advantageous properties, including better mixing rates, improved prediction performance, and a reduced need for laborious hyperparameter tuning. We point out that ours is not the first meta-learning algorithm for SGMCMC, as Gong et al. (2018) already explored a similar concept. However, they did not manage to learn an appropriate exploration-exploitation balance when simulating multi-modal BNN posteriors, and their algorithm was not demonstrated to scale to large BNNs with robust architecture and dataset transfer, limiting its practicality.
Our contributions can be summarized as follows:
- We introduce L2E, a novel meta-learning framework enhancing SGMCMC methods. In contrast to conventional hand-designed approaches and the meta-learning approach of Gong et al. (2018), L2E learns the kinetic energy term directly, offering a more data-driven and adaptable solution.
- We present a multitask training pipeline equipped with a scalable gradient estimator for L2E. This framework allows the meta-learned SGMCMC techniques to generalize effectively across a wide range of tasks, extending their applicability beyond the scope of tasks encountered during meta-training.
- Using real-world image classification benchmarks, we demonstrate the remarkable performance of BNNs inferred with the SGMCMC algorithm discovered by L2E, both in terms of prediction accuracy and sampling efficiency.
2 BACKGROUND
2.1 SGMCMC FOR BAYESIAN NEURAL NETWORKS
Settings. In this paper, we focus on supervised learning problems with a training dataset \( D = \{(x_i, y_i)\}_{i=1}^n \), with \( x_i \) being an observation and \( y_i \) its label. Given a neural network with a parameter \( \theta \in \mathbb{R}^d \), a likelihood \( p(y | x, \theta) \) and a prior \( p(\theta) \) are set up, together defining an energy function \( U(\theta) = -\sum_{i=1}^n \log p(y_i | x_i, \theta) - \log p(\theta) \). The goal is to infer the posterior distribution \( p(\theta | D) \propto \exp(-U(\theta)) \). When the size of the dataset \( n \) is large, evaluating the energy function \( U(\theta) \) or its gradient \( \nabla_\theta U(\theta) \) may be undesirably costly, as they require a pass through the entire dataset \( D \). For
such scenarios, SGMCMC (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2015) is a standard choice, where the gradient of the energy function \( \nabla_\theta U(\theta) \) is approximated by a stochastic gradient computed from mini-batches. That is, given a mini-batch \( B \subset \{1, \ldots, n\} \) with \( |B| \ll n \), an unbiased estimator of the full gradient \( \nabla_\theta U(\theta) \) can be computed from \( B \) as
\[
\nabla_\theta \tilde{U}(\theta) = -\frac{n}{|B|} \sum_{i \in B} \nabla_\theta \log p(y_i | x_i, \theta) - \nabla_\theta \log p(\theta).
\]
(1)
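As a concrete illustration, the following sketch computes $\tilde{U}(\theta)$ for a classifier with a Gaussian prior, so that automatic differentiation yields the estimator in Eq. (1); the interface (`model` is any `torch.nn.Module`) is an assumption.

```python
import torch
import torch.nn.functional as F

def stochastic_energy(model, x, y, n_total, prior_var=1.0):
    """Returns U-tilde(theta); calling .backward() on it gives Eq. (1)'s gradient."""
    log_lik = -F.cross_entropy(model(x), y, reduction="sum")  # sum of log p(y|x, theta)
    nll = -(n_total / x.size(0)) * log_lik                    # rescale minibatch to full data
    log_prior = sum((-0.5 * p.pow(2) / prior_var).sum()
                    for p in model.parameters())              # log N(theta; 0, prior_var I)
    return nll - log_prior
```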
A complete recipe. There may be several ways to build a Markov chain leading to the target posterior distribution. Ma et al. (2015) presented a generic recipe that includes all convergent SGMCMC algorithms as special cases, constituting a complete framework. In this recipe, the parameter \( \theta \) of interest is augmented with an auxiliary momentum variable \( r \), and a Stochastic Differential Equation (SDE) of the following form is defined for the joint variable \( z = (\theta, r) \in \mathbb{R}^{2d} \):
\[
H(z) := U(\theta) + g(\theta, r), \quad \Gamma_i(z) := \sum_{j=1}^{2d} \frac{\partial}{\partial z_j} (D_{ij}(z) + Q_{ij}(z)),
\]
(2)
\[
dz = [-(D(z) + Q(z)) \nabla_z H(z) + \Gamma(z)] dt + \sqrt{2D(z)} dw_t,
\]
\]
where \( g(\theta, r) \) is the conditional energy function of the momentum \( r \) such that \( p(z) \propto \exp(-H(z)) \) and \( w_t \) is \( 2d \)-dimensional Brownian motion. Here, \( D(z) \in \mathbb{R}^{2d \times 2d} \) and \( Q(z) \in \mathbb{R}^{2d \times 2d} \) are restricted to be positive semi-definite and skew-symmetric, respectively. Given this SDE, one can obtain a SGMC algorithm by first substituting the full gradient \( \nabla_z H(z) \) with a mini-batch gradient \( \nabla_z \tilde{H}(z) = \nabla_z (\tilde{U}(\theta) + g(\theta, r)) \) and then discretizing it via a numerical solver such as symplectic Euler method. A notable example would be Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al., 2014), where \( g(\theta, r) = \frac{1}{2} r^\top M^{-1} r \), \( D(z) = \begin{bmatrix} 0 & 0 \\ 0 & C \end{bmatrix} \), and \( Q(z) = \begin{bmatrix} 0 & -I \\ I & 0 \end{bmatrix} \) for some positive semi-definite matrices \( M \) and \( C \), leading to an algorithm when discretized with symplectic Euler method as follows.
\[
r_{t+1} = r_t - \epsilon_t \nabla \tilde{U}(\theta_t) - \epsilon_t CM^{-1} r_t + \xi_t, \quad \xi_t \sim \mathcal{N}(0, 2C \epsilon_t)
\]
\[
\theta_{t+1} = \theta_t + \epsilon_t M^{-1} r_{t+1},
\]
(3)
where \( \epsilon_t \) is a step-size.
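For concreteness, one SGHMC step of Eq. (3) can be sketched as follows, taking $M = I$ and $C = cI$ for readability; `grad_U` is the stochastic gradient from Eq. (1).

```python
import torch

def sghmc_step(theta, r, grad_U, eps, c=0.1):
    noise = torch.randn_like(r) * (2.0 * c * eps) ** 0.5   # xi_t ~ N(0, 2C eps_t)
    r = r - eps * grad_U - eps * c * r + noise             # momentum update, M = I
    theta = theta + eps * r                                # theta_{t+1} = theta_t + eps M^{-1} r
    return theta, r
```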
The complete recipe includes interesting special cases that introduce adaptive preconditioners to improve the mixing of SGMCMC (Girolami & Calderhead, 2011; Li et al., 2016; Wenzel et al., 2020). For instance, Li et al. (2016) proposed Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD), which includes an RMSprop (Tieleman & Hinton, 2012)-like preconditioning matrix in the updates:
\[
\theta_{t+1} = \theta_t - \epsilon_t [G(\theta_t) \nabla_\theta \tilde{U}(\theta_t) + \Gamma(\theta_t)] + \xi_t, \quad \xi_t \sim \mathcal{N}(0, 2G(\theta_t) \epsilon_t)
\]
\[
V(\theta_{t+1}) = \alpha V(\theta_t) + (1 - \alpha) \frac{\nabla_\theta \tilde{U}(\theta_t)}{n} \odot \frac{\nabla_\theta \tilde{U}(\theta_t)}{n}
\]
\[
G(\theta_{t+1}) = \text{diag}\left( \mathbf{1} \oslash \left( \lambda \mathbf{1} + \sqrt{V(\theta_{t+1})} \right) \right),
\]
(4)
where \( \odot \) and \( \oslash \) denote elementwise multiplication and division, respectively. pSGLD exploits recent gradient information to adaptively adjust the scale of the energy gradients and the noise. However, this heuristic adjustment is still insufficient to efficiently explore the complex posteriors of BNNs (Zhang et al., 2020). Also, introducing a preconditioner that depends on \( \theta \) harms the computational efficiency of the sampler, since correct simulation requires including an additional correction term in the discretization step (Wenzel et al., 2020).
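A sketch of one pSGLD step in the diagonal case follows; for brevity the correction term $\Gamma(\theta)$ required for exact simulation is omitted, and all names are illustrative.

```python
import torch

def psgld_step(theta, grad_U, V, eps, n, alpha=0.99, lam=1e-5):
    g = grad_U / n
    V = alpha * V + (1 - alpha) * g * g                     # running second moment of gradients
    G = 1.0 / (lam + V.sqrt())                              # diagonal preconditioner of Eq. (4)
    noise = torch.randn_like(theta) * (2.0 * G * eps).sqrt()  # xi_t ~ N(0, 2 G eps_t)
    theta = theta - eps * G * grad_U + noise                # Gamma(theta) term omitted
    return theta, V
```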
Recently, Zhang et al. (2020) introduced a cyclical learning rate schedule for efficient exploration of multi-modal distributions. The key idea is to use the learning-rate spikes induced by the cyclical schedule to escape from a single mode and move to other modes. However, in our experiments, we find that SGMCMC with a cyclical learning rate does not necessarily capture multi-modality, and it also requires a large number of update steps to move to other modes in practice.
Prediction via Bayesian model averaging. After inferring the posterior \( p(\theta | D) \), for a test input \( x_* \), the posterior predictive is computed as
\[
p(y_* | x_*, D) = \int_{\mathbb{R}^d} p(y_* | x_*, \theta) p(\theta | D) d\theta,
\]
which is also referred to as Bayesian Model Averaging (BMA). In our setting, having collected posterior samples \( \theta_1, \ldots, \theta_K \) from a convergent chain simulated by the SGMCMC procedure, the predictive distribution is approximated with a Monte-Carlo estimator,
\[
p(y_* | x_*, D) \approx \frac{1}{K} \sum_{k=1}^{K} p(y_* | x_*, \theta_k).
\]
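In code, this Monte-Carlo BMA estimator amounts to averaging predictive probabilities (not logits) over the $K$ collected samples; `thetas` is assumed to hold the collected state dicts.

```python
import torch

@torch.no_grad()
def bma_predict(model, thetas, x):
    """thetas: list of K state_dicts collected from the SGMCMC chain."""
    probs = 0.0
    for sd in thetas:
        model.load_state_dict(sd)                      # load theta_k
        probs = probs + torch.softmax(model(x), dim=-1)
    return probs / len(thetas)                         # approximates p(y* | x*, D)
```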
As one can easily guess, the quality of this approximation depends heavily on the quality of the samples drawn by the Markov Chain Monte Carlo (MCMC) procedure. For the over-parameterized deep neural networks we are interested in, the target posterior \( p(\theta | D) \) is typically highly multi-modal, so simple SGMCMC methods suffer from poor mixing; that is, the posterior samples collected by those methods are not widely spread throughout the parameter space, so it takes exponentially many samples to achieve a desired level of accuracy in the approximation. Hence, a good SGMCMC algorithm should be equipped with the ability to efficiently explore the parameter space while still being able to stay sufficiently long in high-density regions. That is, it should strike the right balance between exploration and exploitation.
**Meta-learning.** Meta-learning, or learning to learn, refers to algorithms that learn useful general knowledge from source tasks that can transfer to unseen tasks. Most meta-learning algorithms involve two levels of learning: an inner-loop and an outer-loop (Metz et al., 2018). The inner-loop usually contains the training procedure of a particular task; in our work, the inner-loop of meta-training iteratively updates the model parameter \( \theta \) by running SGMCMC with a learnable transition kernel. The outer-loop refers to the training procedure of the meta-parameter \( \phi \), which is done by minimizing a meta-objective \( L(\phi) \).
As a subfield of meta-learning, learning an optimizer is an emerging field that aims to learn optimizers that transfer well to a set of target tasks. In general, training a learned optimizer involves backpropagation through the computational graph of long inner-loop iterations. Truncated backpropagation through time (Werbos, 1990) is one solution, but it introduces truncation bias into the gradient estimator. Recent studies (Metz et al., 2019; 2022b) revealed that replacing backpropagation with a non-analytic gradient estimation method like Evolution Strategies (ES) (Salimans et al., 2017) can improve meta-optimization. In this paper, for estimating \( \nabla_\phi L(\phi) \), we do not retain the computational graph to backpropagate through the inner-loop.
3 MAIN CONTRIBUTION: LEARNING TO EXPLORE
3.1 META-LEARNING FRAMEWORK FOR SGMCMC
Instead of using a hand-designed recipe for SGMCMC, we aim to learn the proper SGMCMC update steps through meta-learning. Existing works, both those using hand-designed choices and the meta-learning approach (Gong et al., 2018), try to determine the forms of the matrices \( D(z) \) and \( Q(z) \) while keeping the kinetic energy \( g(\theta, r) \) as a simple Gaussian energy function, that is, \( g(\theta, r) = r^\top M^{-1} r / 2 \). This choice is indeed theoretically grounded and can be shown to be optimal when the target distribution is Gaussian (Betancourt, 2017), but it may not be optimal for the complex multi-modal posteriors of BNNs. We instead choose to learn \( g(\theta, r) \) while keeping \( D(z) \) and \( Q(z) \) as simple as possible. We argue that a meta-learning approach based on this alternative parameterization is more effective in learning a versatile SGMCMC procedure that scales to large BNNs.
More specifically, we parameterize the gradients of the kinetic energy function, \( \nabla_\theta g(\theta, r) \) and \( \nabla_r g(\theta, r) \), with neural networks \( \alpha_\phi(\theta, r) \) and \( \beta_\phi(\theta, r) \), respectively, and set \( D(z) \) and \( Q(z) \) as in SGHMC. The resulting SGMCMC update step, discretized with the symplectic Euler method, is
\[
r_{t+1} = r_t - \epsilon_t [\nabla_\theta \tilde{U}(\theta_t) + \alpha_\phi(\theta_t, r_t) + C \beta_\phi(\theta_t, r_t)] + \xi_t, \quad \xi_t \sim \mathcal{N}(0, 2C\epsilon_t)
\]
\[
\theta_{t+1} = \theta_t + \epsilon_t \beta_\phi(\theta_t, r_{t+1}).
\]
The neural networks $\alpha_\phi$ and $\beta_\phi$ are parameterized as two-layer Multi-Layer Perceptrons (MLPs) with 32 hidden units. Specifically, $\alpha_\phi$ and $\beta_\phi$ are applied to each dimension of the parameter and momentum independently, similar to commonly used learned optimizers (Andrychowicz et al., 2016; Metz et al., 2019). Again, following common practice in learned optimizers (Metz et al., 2019), for each dimension of the parameter and momentum, we feed in the corresponding parameter and momentum values, the stochastic gradient of the energy function for that element, and running averages of the gradient at various time scales, as these are reported to encode sufficient information about the loss-surface geometry. See Appendix E for implementation details of $\alpha_\phi$ and $\beta_\phi$. By leveraging this information, we expect our meta-learned SGMCMC procedure to capture the multi-modal structure of the target posteriors of BNNs, and thus to mix better.
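A hedged sketch of the learned update step is shown below; the per-coordinate MLPs here use only $(\theta, r, \nabla\tilde{U})$ as input features, a simplification of the running-average features described above.

```python
import torch
import torch.nn as nn

class CoordMLP(nn.Module):
    """Two-layer MLP applied independently to each parameter dimension."""
    def __init__(self, in_features=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, theta, r, grad):                     # all inputs shape [d]
        feats = torch.stack([theta, r, grad], dim=-1)      # per-dimension features, [d, 3]
        return self.net(feats).squeeze(-1)                 # [d]

def l2e_step(theta, r, grad_U, alpha_phi, beta_phi, eps, c=0.1):
    noise = torch.randn_like(r) * (2.0 * c * eps) ** 0.5   # xi_t ~ N(0, 2C eps_t), C = cI
    r_new = r - eps * (grad_U + alpha_phi(theta, r, grad_U)
                       + c * beta_phi(theta, r, grad_U)) + noise
    theta_new = theta + eps * beta_phi(theta, r_new, grad_U)  # uses the updated momentum
    return theta_new, r_new
```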
### 3.2 Meta-Objective and Optimization
**Objective functions for meta-learning.** The meta-objective should reflect the meta-knowledge one wants to learn. We design the meta-objective based on the requirement that samples collected through SGMCMC should be good at approximating the posterior predictive $p(y_* | x_*, D)$. To achieve this goal, we propose a meta-objective called the BMA meta-loss. After a sufficient number of inner-updates, we collect $K$ parameter samples with some interval between them (thinning). Let $\theta_k(\phi)$ be the $k$-th collected parameter; we compute the Monte-Carlo estimator of the predictive distribution and use it as the meta-objective function (note the dependency of $\theta_k$ on the meta-parameter $\phi$, as it is a consequence of running SGMCMC with the meta-parameter $\phi$).
$$L(\phi) = -\log \frac{1}{K} \sum_{k=1}^{K} p(y_* | x_*, \theta_k(\phi)), \quad (8)$$
where $(x_*, y_*)$ is a validation data point.
#### Algorithm 1: Meta training procedure
**Input:** task distribution $P(T)$, inner iterations $N_{inner}$, outer iterations $N_{outer}$, step size $\epsilon$, outer learning rate $\gamma$, noise scale $\sigma^2$, initial meta-parameter $\phi_0$.
**Output:** Meta parameter $\phi$.
For $j = 1, \ldots, N_{outer}$ do
- Sample task $T_j \sim P(T)$
- Initialize model parameter $\theta_0$ for $T_j$
- Sample $\eta \sim \mathcal{N}(0, \sigma^2 I)$
- $L(\phi + \eta) \leftarrow \text{InnerLoop}(\theta_0, \phi + \eta, \epsilon, N_{inner})$
- $L(\phi - \eta) \leftarrow \text{InnerLoop}(\theta_0, \phi - \eta, \epsilon, N_{inner})$
- $\nabla_\phi L \leftarrow \frac{1}{2\sigma^2} \eta (L(\phi + \eta) - L(\phi - \eta))$
- $\phi \leftarrow \phi - \gamma \nabla_\phi L(\phi)$
end
#### Algorithm 2: InnerLoop
**Input:** Meta parameter $\phi$, inner iterations $N_{inner}$, initial parameter $\theta_0$, step size $\epsilon$, burn-in steps $B$, thinning interval $T$.
**Output:** Loss $L(\phi)$
Initialize $\Theta = \emptyset$ and $r_0 \sim \mathcal{N}(0, I_d)$.
For $t = 1, \ldots, N_{inner}$ do
- $r_{t+1} = r_t - \epsilon_t [\nabla_\theta \tilde{U}(\theta_t) + \alpha_\phi(\theta_t, r_t) + C \beta_\phi(\theta_t, r_t)] + \xi_t$,
- where $\xi_t \sim \mathcal{N}(0, 2C\epsilon_t)$.
- $\theta_{t+1} = \theta_t + \epsilon_t \beta_\phi(\theta_t, r_{t+1})$
- if $t > B$ and $\text{mod}(t, T) = 0$ then
- $\Theta \leftarrow \Theta \cup \{\theta_t\}$
end
$L(\phi) \leftarrow -\log \frac{1}{|\Theta|} \sum_{\theta \in \Theta} p(y_* | x_*, \theta)$
**Gradient estimation for meta-objective.** Estimating the meta-gradient $\nabla_\phi L(\phi)$ is highly non-trivial (Metz et al., 2018; 2019), especially when the number of inner update steps is large. For instance, a naïve method such as backpropagation through time would require memory that grows linearly with the number of inner steps, so it easily becomes infeasible for even moderately sized models. One might consider using a truncation approximation, but that would result in a biased gradient estimator. Instead, we adapt ES (Salimans et al., 2017) with an antithetic sampling scheme, which has been widely used in the recent literature on training learned optimizers. Metz et al. (2019) showed that unrolled optimization with many inner steps can lead to a chaotic meta-loss surface and that ES is capable of relieving this pathology by employing a smoothed loss,
$$L(\phi) = \mathbb{E}_{\tilde{\phi} \sim \mathcal{N}(\phi, \sigma^2 I)} \left[ L(\tilde{\phi}) \right] \quad (9)$$
where $\sigma^2$ determines the degree of smoothing. Also, antithetic sampling is usually applied to reduce the estimation variance of $\nabla_\phi L(\phi)$. Through the log-derivative trick, we can get an unbiased estimator of
\begin{equation}
\hat{g} = \frac{1}{N} \sum_{i=1}^{N} L(\phi + \eta_i) \frac{\eta_i}{\sigma^2} \quad \text{where } \eta_i \overset{i.i.d.}{\sim} \mathcal{N}(0, \sigma^2 I), \ i \in \{1, \ldots, N\}
\end{equation}
In addition, we can get another unbiased estimator \( \hat{g}^{-} = -\frac{1}{N} \sum_{i=1}^{N} L(\phi - \eta_i) \frac{\eta_i}{\sigma^2} \) by reusing the negatives of the \( \eta_i \). By taking the average of the two estimators, we obtain the following gradient estimator.
\begin{equation}
\nabla_\phi \hat{L}(\phi) = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{L(\phi + \eta_i) - L(\phi - \eta_i)}{2\sigma^2} \right] \eta_i
\end{equation}
The estimator is also amenable to parallelization, improving the efficiency of gradient computation.
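A sketch of this antithetic estimator follows, with `inner_loop` assumed to run Algorithm 2 and return the BMA meta-loss $L(\phi)$; in practice the pairs can be evaluated in parallel.

```python
import torch

def es_meta_grad(phi, inner_loop, sigma=0.01, n_pairs=4):
    """Antithetic ES estimate of the smoothed meta-gradient at phi."""
    grad = torch.zeros_like(phi)
    for _ in range(n_pairs):
        eta = torch.randn_like(phi) * sigma
        # each pair reuses eta with opposite signs (antithetic sampling)
        grad += (inner_loop(phi + eta) - inner_loop(phi - eta)) / (2 * sigma**2) * eta
    return grad / n_pairs
```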
### 3.3 Meta-Training Procedure
**Generic pipeline.** The general process of meta-training is as follows. First, for each inner-loop, we sample a task from a pre-determined task distribution. The inner-loop starts from a randomly initialized parameter and iteratively applies the update step (7) to run a single SGMCMC chain. In the initial stage of meta-training, the chains from these inner-loops show poor convergence, but performance improves as training progresses. As in standard Bayesian inference, we treat the early part of the inner-loop as a burn-in period and collect samples at regular intervals from the end of the inner-loop when evaluating the meta-objective. This training process naturally integrates meta-learning and Bayesian inference in that it mimics the actual inference procedure of Bayesian methods on realistic supervised learning tasks. In Figure 2, we show that L2E achieves the desired level of accuracy in approximating the posterior predictive with a relatively small number of samples, indicating that L2E has successfully acquired the desired properties through meta-training.
**Multitask training for better generalization.** In meta-learning, it is commonly known that diversifying the task distribution helps to improve generalization performance. We include various neural network architectures and datasets in the task distribution to ensure that L2E has sufficient generalization capacity. Also, we evaluate how the diversity of the task distribution affects the performance of L2E in Table 12.
### 4 Experiments
In this section, we evaluate the performance of L2E in various aspects. Through extensive experiments, we demonstrate the following:
- L2E shows remarkable performance on both real-world image classification and Out-of-Distribution (OOD) detection tasks compared to other competitive baseline methods.
- L2E can effectively sample from the BNN posterior distribution, collecting a diverse set of parameters both in weight space and in function space.
**Experimental details** For image classification experiments, we compare L2E with Deep Ensembles (DE), Cyclical Stochastic Gradient MCMC (CSGMCMC), and Preconditioned Cyclical Stochastic Gradient MCMC (P-CSGMCMC). We use a 1-layer convolutional neural network on Fashion-MNIST (Xiao et al., 2017), ResNet20 on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), and ResNet56 on Tiny-ImageNet (Le & Yang, 2015). We set the total number of training epochs for all methods to a similar level. Please refer to Appendix G.1 for the experimental setup and hyperparameter settings. We report Accuracy (ACC), Negative Log-Likelihood (NLL), Expected Calibration Error (ECE) (Naeini et al., 2015), and the pairwise Kullback–Leibler divergence (KLD) between probabilistic outputs from different parameters on the test dataset. Throughout the experiments, we do not use data augmentation, since it produces significant modifications to the likelihood such that it can no longer be interpreted as a valid likelihood function (Wenzel et al., 2020). We report the mean and standard deviation of results over three different trials.
Table 1: Image classification results.
| Dataset | Method | ACC ↑ | NLL ↓ | ECE ↓ | KLD ↑ |
|---------------|------------|-------|-------|-------|-------|
| Fashion-MNIST | DE | 0.915±0.001 | 0.243±0.000 | 0.009±0.001 | 0.110±0.004 |
| | CSGMCMC | 0.911±0.002 | 0.277±0.008 | 0.021±0.002 | 0.080±0.004 |
| | P-CSGMCMC | 0.912±0.001 | 0.254±0.002 | 0.005±0.000 | 0.062±0.004 |
| | L2E | 0.917±0.002 | 0.245±0.002 | 0.008±0.001 | 0.247±0.000 |
| CIFAR-10 | DE | 0.893±0.002 | 0.327±0.002 | 0.032±0.001 | 0.497±0.002 |
| | CSGMCMC | 0.884±0.002 | 0.374±0.004 | 0.065±0.004 | 0.490±0.004 |
| | P-CSGMCMC | 0.875±0.001 | 0.393±0.005 | 0.049±0.001 | 0.726±0.012 |
| | L2E | 0.904±0.001 | 0.307±0.004 | 0.053±0.001 | 0.648±0.002 |
| CIFAR-100 | DE | 0.675±0.002 | 1.234±0.006 | 0.132±0.005 | 1.413±0.003 |
| | CSGMCMC | 0.580±0.005 | 1.526±0.016 | 0.079±0.006 | 0.948±0.045 |
| | P-CSGMCMC | 0.697±0.002 | 1.135±0.002 | 0.130±0.002 | 1.466±0.002 |
| Tiny-ImageNet | DE | 0.583±0.003 | 1.783±0.015 | 0.109±0.003 | 1.936±0.005 |
| | CSGMCMC | 0.562±0.002 | 1.811±0.004 | 0.096±0.002 | 0.776±0.009 |
| | P-CSGMCMC | 0.457±0.009 | 2.273±0.007 | 0.038±0.002 | 1.178±0.003 |
| | L2E | 0.585±0.003 | 1.740±0.001 | 0.111±0.001 | 1.075±0.003 |
Table 2: CIFAR-10-C results.
| Method | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
|---------|---------|---------|---------|---------|---------|
| DE | 0.869 | 0.853 | 0.839 | 0.822 | 0.794 |
| CSGMCMC | 0.855 | 0.837 | 0.823 | 0.803 | 0.774 |
| L2E | 0.874 | 0.854 | 0.839 | 0.817 | 0.784 |
Table 3: OOD detection AUROC.
| In-dist | OOD | DE | CSGMCMC | L2E |
|---------|-----|----|---------|-----|
| CIFAR-10 | CIFAR-100 | 0.836±0.001 | 0.828±0.003 | 0.851±0.002 |
| CIFAR-10 | SVHN | 0.919±0.001 | 0.906±0.002 | 0.931±0.003 |
| CIFAR-10 | Tiny-ImageNet | 0.844±0.001 | 0.833±0.003 | 0.853±0.001 |
| CIFAR-100 | CIFAR-10 | 0.793±0.001 | 0.791±0.002 | 0.792±0.002 |
| CIFAR-100 | SVHN | 0.917±0.002 | 0.904±0.002 | 0.917±0.002 |
| CIFAR-100 | Tiny-ImageNet | 0.780±0.002 | 0.775±0.001 | 0.782±0.002 |
Meta-training details We construct a set of meta-training tasks using various datasets and model architectures. Specifically, we use MNIST, Fashion-MNIST, EMNIST (Cohen et al., 2017), and MedMNIST (Yang et al., 2021) as meta-training datasets. For the model architecture, we fix the general structure as several convolution layers followed by a readout MLP layer. For each outer training iteration, we randomly choose a dataset and sample the architecture configuration, including the number of channels, the depth of the convolution layers, and whether to use skip connections. See Appendix E for the detailed configuration of the task distribution. For evaluation, we use the same meta-parameters of L2E in all experiments to check the generalization ability of L2E.
4.1 Real-world image classification
Table 1 shows the results of the image classification experiments. We confirm that L2E generally outperforms the other baselines in terms of predictive accuracy; only DE shows comparable predictive accuracy to L2E in some experiments. Among the evaluation datasets, Fashion-MNIST is the only one included in our task distribution. Despite not having seen the other datasets during meta-training, L2E consistently outperforms the other tuned baseline methods, which clearly shows that L2E can scale and generalize well to unseen problems. In terms of the functional diversity of the ensemble, L2E consistently shows competitive performance among the baselines. Note that KLD should be considered together with predictive accuracy, since functional diversity usually declines when the predictive error of individual members is reduced (Fort et al., 2019). Taking this into account, L2E clearly attains a similar or better level of diversity compared to the other SGMCMC methods, even though its predictive accuracy is the best among the baselines. On complex datasets like CIFAR-100 and Tiny-ImageNet, we find that the functional diversity of DE, which necessarily captures multiple modes in the loss surface, is significantly better than that of the other methods. Also, we confirm that introducing a preconditioner to SGMCMC does not necessarily improve its general performance: P-CSGMCMC shows comparable performance with CSGMCMC on the Fashion-MNIST dataset, but it significantly underperforms in other experiments.
4.2 Out-of-Distribution(OOD) Detection
Bayesian methods are frequently used for OOD detection tasks. In Table 3, we report the OOD detection performance of the baseline methods and L2E. We use the Maximum Softmax Probability (MSP), i.e., the confidence of the predicted class, as the OOD score. The difference in confidence between in-distribution and OOD data is measured using the Area Under the ROC curve (AUROC) (Liang et al., 2017). For Tiny-ImageNet, we resize the images to 32×32. Among models trained on CIFAR-10, L2E shows the best performance for all OOD datasets. For models trained on CIFAR-100, L2E is significantly better than CSGMCMC and comparable to DE. Since DE is a very strong baseline in uncertainty estimation, we can confirm that L2E is competitive in OOD detection.
Next, we consider CIFAR-10-C (Hendrycks & Dietterich, 2019) for evaluating robustness to covariate shift, using accuracy on corrupted data as the metric. Table 2 shows that L2E is more robust to covariate shift than CSGMCMC for all levels of corruption. However, under intense corruption, L2E
Figure 3: t-SNE visualization of predictions from different model parameters of DE, CSGMCMC and L2E on CIFAR-10 test dataset. Highlighted points represent parameters which are collected for BMA. L2E covers almost the entire space despite using a single trajectory.
Figure 4: Loss surface of ResNet56 on Tiny-ImageNet as a function of model parameters in a 2-dimensional subspace spanned by solutions of DE, CSGMCMC, P-CSGMCMC and L2E. Colors represent the level of test accuracy. Left and Right plots clearly display the multi-modality while Middle plot does not.
exhibits lower accuracy than DE. This aligns with the results of Izmailov et al. (2021a) that BNNs are not robust to covariate shift in reality. Nevertheless, L2E still demonstrates better performance than CSGMCMC, implying an advantage over CSGMCMC when using BNNs under covariate shift.
4.3 L2E CAN CAPTURE MULTI-MODALITY
In Figure 3 and Figure 4, we observe the behavior of DE, CSGMCMC, and L2E in function space. First, we save the model parameters at short intervals (5 epochs) and visualize them along with their predictions using t-SNE (Van der Maaten & Hinton, 2008) in Figure 3. Notably, L2E exhibits a distinctive pattern that appears to traverse a variety of areas within the space. In Figure 4, we display the loss surface on a 2-dimensional subspace spanned by the first three collected parameters of each method, following Garipov et al. (2018). The parameters of DE are clearly located in multiple distinct modes, as expected. In contrast, CSGMCMC appears to sample parameters within a single mode, while samples from L2E appear to lie in distinct modes. For a deeper investigation, we plot the test error along a linear path between multiple pairs of saved parameters, inspired by Goodfellow et al. (2014). If there is a loss barrier between two parameters, indicating that they belong to different modes, the classification error increases significantly along the linear path between them. In Figure 6, L2E shows a significant increase in predictive error along the linear path between every pair of parameters, while CSGMCMC exhibits a relatively low loss barrier between samples. This suggests that L2E is capable of capturing the multi-modality of the posterior distribution.
CSGMCMC attempts to explore multi-modality by exploiting artificial spikes induced by the learning rate schedule. Since it inevitably deviates from high-density regions for exploration, it requires a sufficient number of update steps to return to the high-density region. In contrast, L2E shows better exploration-exploitation balance than CSGMCMC because it continuously explores various modes while staying in high-density regions without using a learning rate schedule. In addition, in Figure 19, CSGMCMC shows a significant decrease in predictive accuracy and diversity as the thinning interval gets shorter while L2E can maintain a similar level of accuracy and diversity even with a shorter thinning interval, making it a much more computationally efficient approach in practice.
Figure 5: Cosine similarity between weights of ResNet56 on Tiny-ImageNet. DE and L2E collect more diverse solutions in weight space than CSGMCMC.
Figure 6: Test error along linear path between a pair of parameters
Table 4: ESS / wall clock time.
| Dataset | CSGMCMC | P-CSGMCMC | L2E |
|---------|---------|-----------|-----|
| Fashion-MNIST | 219.85±0.64 | 526.73±0.42 | 136.31±0.42 |
| CIFAR-10 | 45.73±0.22 | 108.90±0.18 | 75.91±0.16 |
| CIFAR-100 | 33.62±0.12 | 114.12±0.01 | 73.54±0.48 |
| Tiny-ImageNet | 1.93±0.12 | 1.70±0.01 | 1.71±0.00 |
Table 5: Proportion of samples with $\hat{R} < 1.2$.
| Method | Fashion-MNIST | CIFAR-10 | CIFAR-100 | Tiny-ImageNet |
|------------|---------------|----------|-----------|---------------|
| CSGMCMC | 0.238 | 0.542 | 0.524 | 0.600 |
| P-CSGMCMC | 0.898 | 0.872 | 0.722 | 0.803 |
| L2E | **0.992** | **0.953** | **0.800** | **0.880** |
4.4 Convergence analysis
To evaluate whether L2E converges to the target distribution, we use $\hat{R}$ (Gelman & Rubin, 1992). $\hat{R}$ compares the variance between multiple chains to the variance within a single chain; if it is significantly greater than 1.0, it implies poor mixing of the chains. While the desirable level of $\hat{R}$ is problem-specific, we use the criterion proposed by Brooks & Gelman (1998), $\hat{R} < 1.2$, to evaluate the degree of mixing. In Table 5, we report the proportion of parameters with $\hat{R}$ values less than 1.2. Overall, L2E demonstrates good mixing, with over 95% of parameters showing $\hat{R} < 1.2$ in the Fashion-MNIST and CIFAR-10 experiments. L2E consistently demonstrates decent performance across the various experiments, whereas CSGMCMC exhibits poor mixing in all experiments, indicating that CSGMCMC fails to sample from the BNN posterior distribution. In general, applying a preconditioner seems to improve mixing, but the predictive performance of P-CSGMCMC is still significantly suboptimal.
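For reference, a minimal sketch of the per-parameter $\hat{R}$ computation, given samples of one scalar parameter arranged as `[num_chains, num_samples]`:

```python
import torch

def r_hat(samples: torch.Tensor) -> torch.Tensor:
    """Gelman-Rubin diagnostic for one scalar parameter; samples: [m, n]."""
    m, n = samples.shape
    chain_means = samples.mean(dim=1)                        # per-chain means, [m]
    W = samples.var(dim=1, unbiased=True).mean()             # within-chain variance
    B = n * chain_means.var(unbiased=True)                   # between-chain variance
    var_hat = (n - 1) / n * W + B / n                        # pooled variance estimate
    return (var_hat / W).sqrt()                              # > 1.2 suggests poor mixing
```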
Also, we measure the Effective Sample Size (ESS) to quantify the quality of the samples in terms of their independence and effectiveness in representing the underlying distribution. We report the ESS normalized by the wall-clock time of each method to evaluate sampling efficiency at a fixed computational cost. In Table 4, P-CSGMCMC achieves a better normalized ESS than the other methods except in the Tiny-ImageNet experiment; however, its poor predictive performance diminishes the preference for P-CSGMCMC. Taking this into account, L2E demonstrates a comparable level of efficiency among the practical methods, even while utilizing additional neural networks. We also report the wall-clock time of each method in Table 18.
5 Conclusion
In this work, we introduced a novel meta-learning framework called L2E to improve SGMCMC methods. Unlike conventional SGMCMC methods that heavily rely on manually designed components inspired by mathematical or physics principles, we aim to learn critical design components of SGMCMC directly from data. Through experiments, we show numerous advantages of L2E over existing SGMCMC methods, including better mixing, improved prediction performance, and a decreased need for tuning hyperparameters. Our learning-based approach would be a promising direction to solve several challenges that SGMCMC methods face in BNNs.
Ethics and Reproducibility statement Please refer to Appendix G for full experimental details, including datasets, models, and evaluation metrics. We have read and adhered to the ethical guidelines of the International Conference on Learning Representations in the course of conducting this research.
REFERENCES
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. *Advances in neural information processing systems*, 29, 2016.
Michael Betancourt. A conceptual introduction to hamiltonian monte carlo. *arXiv preprint arXiv:1701.02434*, 2017.
David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: a review for statisticians. *Journal of the American Statistical Association*, 112(518):859–877, 2017.
Stephen P Brooks and Andrew Gelman. General methods for monitoring convergence of iterative simulations. *Journal of computational and graphical statistics*, 7(4):434–455, 1998.
Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In *International conference on machine learning*, pp. 1683–1691. PMLR, 2014.
Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In *2017 international joint conference on neural networks (IJCNN)*, pp. 2921–2926. IEEE, 2017.
Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep ensembles: A loss landscape perspective. *arXiv preprint arXiv:1912.02757*, 2019.
Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. *Advances in neural information processing systems*, 31, 2018.
Andrew Gelman and Donald B Rubin. Inference from iterative simulation using multiple sequences. *Statistical science*, 7(4):457–472, 1992.
Mark Girolami and Ben Calderhead. Riemann manifold langevin and hamiltonian monte carlo methods. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 73(2):123–214, 2011.
Wenbo Gong, Yingzhen Li, and José Miguel Hernández-Lobato. Meta-learning for stochastic gradient mcmc. *arXiv preprint arXiv:1806.04522*, 2018.
Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. *arXiv preprint arXiv:1412.6544*, 2014.
Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *arXiv preprint arXiv:1903.12261*, 2019.
Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, and Andrew G Wilson. Dangers of bayesian model averaging under covariate shift. *Advances in Neural Information Processing Systems*, 34:3309–3322, 2021a.
Pavel Izmailov, Sharad Vikram, Matthew D Hoffman, and Andrew Gordon Wilson. What are bayesian neural network posteriors really like? In *International conference on machine learning*, pp. 4629–4640. PMLR, 2021b.
Sanyam Kapoor, Wesley J Maddox, Pavel Izmailov, and Andrew G Wilson. On uncertainty, tempering, and data augmentation in bayesian classification. *Advances in Neural Information Processing Systems*, 35:18211–18225, 2022.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images, 2009.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in neural information processing systems*, 30, 2017.
|
wAsjsSe0U6
|
How do you evaluate the quality and diversity of the generated image path? Do you have any quantitative or qualitative measures to show the trade-off between structural and detailed information along the path?
|
Visual Semantic Learning via Early Stopping in Inverse Scale Space
Anonymous authors
Paper under double-blind review
Abstract
Different levels of visual information are generally coupled in image data, making it hard to reverse the tendency of deep learning models to learn texture bias from images. Consequently, these models are vulnerable when dealing with tasks in which semantic knowledge matters. To solve this problem, we propose an instance smoothing algorithm, in which Total Variation (TV) regularization is enforced in a differential inclusion to generate a regularized image path from large scale (i.e., semantic information) to fine scale (i.e., detailed information). Equipped with a proper early stopping mechanism, the structural information can be disentangled from the detailed information. We then propose an efficient sparse projection method to obtain the regularized images, by exploiting the graph structure of the Total Variation matrix. We further propose to incorporate this algorithm into neural network training, which guides the model to learn structural features during training. The utility of our framework is demonstrated by improved robustness against noisy images, adversarial attacks, and low-resolution images, and by better explainability via visualization and frequency analysis.
1 Introduction
Deep learning models have achieved great success in abundant computer vision tasks such as image recognition, detection, and segmentation, through the use of large-scale image datasets (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Long et al., 2015). As shown by previous works (Geirhos et al., 2018b), neural networks are prone to learning texture bias from image data rather than structural information such as shape. On the other hand, studies (Brendel & Bethge, 2019) have also found that low-frequency features like shapes and edges can make models more robust, which means it is also important to learn this kind of feature during model training. However, since texture and shape are generally entangled in real-world data, it is hard to counter the tendency toward texture bias using raw image data alone.
To see if there is a remedy for this problem, we resort to Total Variation (TV) regularization, which has been widely applied in image denoising and stylization (Rudin et al., 1992; Chan & Vese, 2001; Osher et al., 2005; Chambolle & Pock, 2011). TV regularization encodes a spatial smoothing prior, i.e., adjacent pixels tend to have the same value. With such a regularization, noisy information can be smoothed away while structural information is maintained. In particular, Burger et al. (2005) propose the Inverse Scale Space (ISS) method for image denoising, which progressively learns finer scales as it iterates, until a noise-free image is recovered when stopped properly.
Inspired by this inverse-scale-space (ISS) property (Burger et al., 2005), we propose a semantic-aware instance smoothing method based on Split Bregman ISS (Huang et al., 2016), which can disentangle semantic/structural information from details. Specifically, our method is guided by a differential inclusion, which can efficiently enforce TV regularization on an augmented parameter introduced by a variable-splitting term. With this TV regularization, the differential inclusion enjoys the ISS property: it generates a TV-regularized image path, transitioning from a larger scale, associated with structural information, to a finer scale, associated with detailed information. In this regard, we can disentangle the structural information from the detailed information if the image path is stopped at a proper time, as illustrated in Fig. 1. To obtain the TV-sparse estimator, we project onto the sparse subspace by exploiting the connected components of the graph that the TV matrix corresponds to.
Figure 1: Illustration of Semantic Learning. (a) Image path generated by instance smoothing, less sparsity means less details. (b) Visualization using Grad-CAM. By smoothing details, our method (trained sparsity = 0.6) better captures semantic information than vanilla training on original images.
We show that this projection can be completed in $O(p)$ time, where $p$ denotes the dimension of the image vector.
To incorporate this algorithm into neural network training, we propose several training procedures, including a fixed training procedure and an iterative training procedure. Specifically, fixed training directly trains the model parameters on smoothed data with fixed sparsity, while iterative training alternately runs the instance smoothing algorithm and optimizes the model parameters. Besides, the above procedure can also be applied to fine-tune any trained model. To validate the benefit brought by the proposed pipeline, we conduct extensive experiments on tasks including adversarial attacks, low-resolution image classification, and noisy image classification. In addition to enhanced robustness on these tasks, we also observe improved interpretability through frequency analysis.
Our main contributions are summarized as follows.
- We propose a novel instance smoothing algorithm that can disentangle the structural information from detailed ones.
- We propose several training procedures that can efficiently incorporate our instance smoothing algorithm into the training procedure.
- Our model achieves promising results on robustness tasks with better explainability.
2 RELATED WORK
Total Variation in Computer Vision Total Variation (TV), proposed by Rudin et al. (1992), has been successfully applied to various vision tasks, including denoising (Beck & Teboulle, 2009; Chambolle, 2004), deconvolution (Chan & Wong, 1998), deblurring (Beck & Teboulle, 2009), inpainting (Afonso et al., 2008), super-resolution (Marquina & Osher, 2008), and structure-texture decomposition (Aujol et al., 2006; Donoser et al., 2009). Recently, Yeh et al. (2022a) have shown the benefit that a TV optimization layer brings to deep learning models. Different from these methods, we consider the TV-constrained image reconstruction problem from the perspective of semantic-aware learning.
Linearized Bregman Iteration (LBI) LBI, a method for solving convex optimization problems, was originally proposed in Osher et al. (2005) and Yin et al. (2008). It has been demonstrated that LBI converges for convex loss functions and inherits the fundamental properties of discretized differential-inclusion dynamics (Osher et al., 2016; Huang & Yao, 2018). Subsequent research has built upon the LBI framework, introducing various enhancements. In this study, we delve into the practical application of this concept. Specifically, we leverage the inverse scale space property of differential inclusions to address the TV regularization problem, resulting in a versatile solution path for image smoothing.
3 SEMANTIC-AWARE LEARNING IN INVERSE SCALE SPACE
In this section, we introduce our framework for learning semantic features via Inverse Scale Space (ISS). In Sec. 3.1, we first introduce the instance smoothing method to decompose the semantic and non-semantic information in the inverse scale space, followed by the incorporation of this smoothing method into neural network training in Sec. 3.2.
3.1 Instance Smoothing in Inverse Scale Space
To smooth detailed information for an image \( x \in \mathbb{R}^p \) (\( p := h \times w \) denotes the size of the image vector, with \( h, w \) resp. denoting the height and width), typically one can enforce the following Total Variation (TV) regularization (Rudin et al., 1992), which has been widely applied in image denoising (Rudin et al., 1992; Osher et al., 2005):
\[
L_{TV}^\lambda(\beta) = \frac{1}{2} \| \beta - x \|_2^2 + \lambda \| D \beta \|_1,
\]
where \( \lambda > 0 \) denotes the regularization hyperparameter, and \( D \in \mathbb{R}^{m \times p} \) denotes the total variation matrix corresponding to the graph whose edge set \( E \) (with \( |E| = m \)) contains the adjacent pairs of pixels, such that \( \| D \beta \|_1 := \sum_{(i,j) \in E} |\beta(i) - \beta(j)| \). The TV-regularized image \( \beta_\lambda \) is obtained by minimizing this loss. However, solving the solution path of \( \beta_\lambda \) w.r.t. \( \lambda \) in Eq. (1) is time-consuming (Yeh et al., 2022b) since one has to solve Eq. (1) for each \( \lambda \). Although several methods have been proposed for acceleration (Yeh et al., 2022b; Xin et al., 2014), it is still too expensive to apply to large-scale data.
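To make the role of \( D \) concrete, the following SciPy sketch (our naming) builds the TV matrix of a 4-connected \( h \times w \) pixel grid; each row encodes one edge of the graph, and the edge list is returned alongside \( D \) for reuse in the projection below.

```python
import numpy as np
from scipy.sparse import lil_matrix

def tv_matrix(h, w):
    """TV matrix D of a 4-connected h x w pixel grid.

    Row e of D encodes the edge (i, j): (D @ beta)[e] = beta[i] - beta[j],
    so ||D @ beta||_1 is the anisotropic total variation of the image.
    """
    edges = []
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:                 # horizontal neighbor
                edges.append((i, i + 1))
            if r + 1 < h:                 # vertical neighbor
                edges.append((i, i + w))
    D = lil_matrix((len(edges), h * w))
    for e, (i, j) in enumerate(edges):
        D[e, i], D[e, j] = 1.0, -1.0
    return D.tocsr(), edges
```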
To efficiently enforce this TV regularization, we employ the following Split Bregman Inverse Scale Space (ISS) dynamics (Huang et al., 2018; 2016), originally proposed for sparse recovery:
\[
\begin{align*}
0 &= -\nabla_\beta L_\nu(\beta_t, \gamma_t), \tag{2a} \\
\dot{\rho}_t &= -\nabla_\gamma L_\nu(\beta_t, \gamma_t), \tag{2b} \\
\rho_t &\in \partial \| \gamma_t \|_1, \tag{2c}
\end{align*}
\]
where \( L_\nu(\beta, \gamma) = \frac{1}{2} \| \beta - x \|_2^2 + \frac{1}{2\nu} \| D \beta - \gamma \|_2^2 \) denotes the variable-splitting objective proposed in ADMM (Boyd et al., 2011) and Split Bregman (Ye & Xie, 2011) for implementation convenience. Equipped with such a splitting term, one can enforce sparsity on \( \gamma \), whose distance from \( D \beta \) is controlled by the hyperparameter \( \nu \). The dynamics in Eq. (2) is a differential inclusion, which generates a regularized image path from large scale to fine scale, with \( t \) playing a similar role as \( 1/\lambda \) in Eq. (1). This is because the path \( \gamma_t \) transitions from sparse to dense as \( t \) increases. Furthermore, in accordance with the ISS property (Burger et al., 2005), the elements of \( \gamma_t \) that become non-zero earlier in the process correspond to larger-scale information within the image.
Specifically, we first note from Eq. (2b) that \( \rho_t \) follows a gradient descent flow, starting from \( \rho_0 = 0 \) (hence \( \gamma_0 = 0 \)). As \( t \) grows, more elements of \( \rho_t \in \partial \| \gamma_t \|_1 \) hit the boundary of \( \pm 1 \), and the corresponding elements of \( \gamma_t \) are selected to be non-zero according to Eq. (2c). With a sparse \( \gamma_t \) at each \( t \), we can obtain a sparse TV-regularized image \( \tilde{\beta}_t \) by projecting \( \beta_t \) onto the subspace spanned by the support set of \( \gamma_t \), i.e., \( S_t := \text{supp}(\gamma_t) := \{ i : \gamma_t(i) \neq 0 \} \). Due to this projection, we have \( D_{S_t^c} \tilde{\beta}_t = 0 \), meaning that \( \tilde{\beta}_t \) smooths out the information outside \( S_t \). Since \( \gamma_t \) gets denser (i.e., \( S_t \) is larger) as \( t \) grows, \( \tilde{\beta}_t \) learns more information. According to the ISS property of Split Bregman ISS (Burger et al., 2005; Huang et al., 2016), \( \tilde{\beta}_t \) progressively learns finer-scale information as \( t \) grows. This means that if we stop early on the image path (say at \( t_0 \)), then \( \tilde{\beta}_{t_0} \) keeps only the semantic information, while the more detailed information is smoothed out.
Discussions of Split Bregman ISS and Our Specification. Split Bregman ISS was proposed for the sparse inference of model parameters (Huang et al., 2018) and has later been applied to many machine learning tasks, including medical imaging (Sun et al., 2017), transfer learning (Zhao et al., 2018), and neural network pruning (Fu et al., 2020). However, these methods primarily focus on learning important parameters of the model. In contrast, we are the first to explore the ISS property at the image level, with the goal of extracting semantic information from the original image.
Discretization. To implement, we follow Yin et al. (2008); Huang et al. (2018) to consider a discrete form of Eq. [2] with step size \( \alpha \) and the damping factor \( \kappa > 0 \):
\[
\begin{align*}
\beta_{k+1} &= \beta_k - \kappa \alpha \nabla_\beta L(\beta_k, \gamma_k), \\
z_{k+1} &= z_k - \alpha \nabla_\gamma L(\beta_k, \gamma_k), \\
\gamma_{k+1} &= \kappa \text{prox}_{\| \cdot \|_1}(z_{k+1}),
\end{align*}
\]
where \( \text{prox}_{\| \cdot \|_1}(z) := \arg \min_u \frac{1}{2} \| u - z \|_2^2 + \| u \|_1 = \text{sign}(z) \max(|z| - 1, 0) \) is applied element-wise. As pointed out in Huang et al. (2016), Eq. (3) converges to the original dynamics in Eq. (2) by letting \( \alpha \to 0 \) and \( \kappa \to \infty \).
Besides, the step size $\alpha$ should satisfy $\alpha < \frac{2}{\kappa \|H_\nu\|_2}$ with $H_\nu := \nabla^2 L_\nu(\beta, \gamma)$, in order to make $L_\nu(\beta_k, \gamma_k)$ decrease over the iterations. Compared to the TV regularization in Eq. (1), which has to be re-optimized for each $\lambda$, a single run of Eq. (3) yields the regularized image path at all scales.
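As an illustration, here is a minimal NumPy sketch of the discretized iteration in Eq. (3) on a vectorized image; the function name is ours, and instead of the exact Hessian norm we use the crude bound \( \|H_\nu\|_2 \leq \max(1 + 2\|D\|_2^2/\nu,\, 2/\nu) \) with \( \|D\|_2^2 \leq 8 \) on a 4-connected grid, which is an assumption of this sketch.

```python
import numpy as np

def split_lbi_path(x, D, nu=1.0, kappa=10.0, n_steps=2000):
    """Discretized Split Bregman ISS (Eq. 3) on a vectorized image x.

    D is the (sparse) TV matrix. The step size respects
    alpha < 2 / (kappa * ||H_nu||_2) via a crude upper bound on ||H_nu||_2.
    """
    m = D.shape[0]
    h_bound = max(1.0 + 16.0 / nu, 2.0 / nu)     # crude bound on ||H_nu||_2 (assumption)
    alpha = 1.0 / (kappa * h_bound)
    beta, z, gamma = x.astype(float).copy(), np.zeros(m), np.zeros(m)
    path = []
    for _ in range(n_steps):
        r = D @ beta - gamma
        grad_beta = (beta - x) + (D.T @ r) / nu   # nabla_beta L_nu
        z = z + alpha * r / nu                    # Eq. (3b): z - alpha * nabla_gamma L_nu
        beta = beta - kappa * alpha * grad_beta   # Eq. (3a)
        gamma = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)  # Eq. (3c)
        path.append((beta.copy(), gamma.copy()))
    return path
```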
**Sparse Projection via Graph Algorithm.** With $\beta_k$ and $\gamma_k$ at each $k$, we can obtain the TV-regularized image $\tilde{\beta}_k$ by projecting $\beta_k$ onto the sparse subspace of $\gamma_k$, i.e., onto $S_k := \text{supp}(\gamma_k)$:
$$\tilde{\beta}_k = \text{proj}_{S_k}(\beta_k) := \arg \min_{D_{S_k^c} \beta' = 0} \| \beta' - \beta_k \|_2,$$
(4)
which has a closed-form solution, i.e., $\tilde{\beta}_k = (I - D_{S_k^c}^\dagger D_{S_k^c}) \beta_k$. Here $D_{S_k^c}^\dagger$ denotes the pseudo-inverse of $D_{S_k^c}$. The cost is $O(|S_k^c|^3)$, which is much larger than the $O(p)$ cost of a gradient descent step when $|S_k^c|$ is large.
To improve the efficiency, we exploit the graph structure of $D_{S_k^c}$. Specifically, note that $D_{S_k^c}$ corresponds to the graph $G := (V, E_{S_k^c})$, such that
$$D_{S_k^c} \tilde{\beta}_k (i, j) := \tilde{\beta}_k(i) - \tilde{\beta}_k(j) = 0, \forall (i, j) \in E_{S_k^c}.$$
In other words, $\tilde{\beta}_k(i) = \tilde{\beta}_k(j)$ whenever $i$ and $j$ are connected by a path in $G$. Inspired by this, we decompose the graph into connected components, such that $\tilde{\beta}_k$ shares the same value within each component. To minimize $\|\tilde{\beta}_k - \beta_k\|_2$, this value should equal the average of $\beta_k$ over that component. Since the complexity of finding the connected components of a $p$-node graph is $O(p)$, the projection has the same cost as a gradient descent step. Our result is summarized as follows.
**Proposition 3.1.** Given $\beta_k$ and $S_k := \text{supp}(\gamma_k)$, and suppose $G = (V, E_{S_k^c})$ has $C$ connected components $G_1 = (V_1, E_1), ..., G_C = (V_C, E_C)$, such that $V = V_1 \cup ... \cup V_C$; then $\tilde{\beta}_k = \text{proj}_{S_k}(\beta_k)$ can be computed with complexity $O(p)$ as:
$$\tilde{\beta}_k(j) = \overline{\beta}_k(V_c), \forall j \in V_c \text{ for some } c \in \{1, ..., C\}, \text{ where } \overline{\beta}_k(V_c) \text{ denotes the average of } \beta_k(V_c).$$
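A possible SciPy realization of Prop. 3.1 (names are ours): pixels joined by a path of unselected edges, i.e., edges with $\gamma_k = 0$, must share one value, so we average $\beta_k$ over each connected component.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def project_onto_support(beta, edges, gamma, p):
    """Sparse projection of Prop. 3.1 in O(p) time.

    `edges` lists the pixel pairs (i, j) indexing the rows of D (e.g., as
    returned by tv_matrix above) and `gamma` holds the current edge variables.
    Pixels are connected only through unselected edges (gamma == 0), and beta
    is replaced by its mean over each resulting connected component.
    """
    off = [e for e, g in zip(edges, gamma) if g == 0.0]
    rows = np.array([i for i, _ in off], dtype=int)
    cols = np.array([j for _, j in off], dtype=int)
    adj = csr_matrix((np.ones(len(off)), (rows, cols)), shape=(p, p))
    n_comp, labels = connected_components(adj, directed=False)
    sums = np.bincount(labels, weights=beta, minlength=n_comp)   # per-component sums
    counts = np.bincount(labels, minlength=n_comp)
    return (sums / counts)[labels]                               # component-wise averages
```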
**Extension to colored image via group sparsity.** For a colored image, we have $x \in \mathbb{R}^{p \times 3}$. This means each pixel is a 3-d vector $x_i = [x_{i1}, x_{i2}, x_{i3}]$ in the RGB channels. Correspondingly, we enforce group sparsity on $\gamma \in \mathbb{R}^{p \times 3}$, where each group corresponds to the vector $\gamma_i \in \mathbb{R}^3$:
$$P(\gamma) = \|\gamma\|_{1,2} := \sum_i \|\gamma_i\|_2 = \sum_i \sqrt{\gamma_{i1}^2 + \gamma_{i2}^2 + \gamma_{i3}^2}.$$
(5)
By replacing the penalty $\|\gamma\|_1$ with $P(\gamma)$, we can obtain $\gamma_k$ from $z_k \in \mathbb{R}^{p \times 3}$ as follows:
$$\gamma_i = \text{prox}_{\|\gamma\|_{1,2}}(z_i) := \begin{cases} \left(1 - \frac{1}{\|z_i\|_2}\right) z_i & \|z_i\|_2 \geq 1, \\ 0 & \text{otherwise}, \end{cases}$$
(6)
which can replace Eq. (3c) to generate the image path for colored images.
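For reference, a NumPy sketch of the group soft-thresholding in Eq. (6) (our naming; the small constant guarding against division by zero is our addition):

```python
import numpy as np

def group_prox(z, eps=1e-12):
    """Group soft-thresholding of Eq. (6); z has shape (p, 3), one RGB group per pixel."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)               # ||z_i||_2 per pixel
    return np.maximum(1.0 - 1.0 / np.maximum(norms, eps), 0.0) * z  # zero when ||z_i||_2 < 1
```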
### 3.2 INCORPORATION TO THE TRAINING PROCEDURE
In this section, we introduce several strategies to incorporate Eq. (3) into the training procedure: **fixed training**, **iterative training**, and **finetuning**. By decomposing the semantic and detailed information, these training methods enjoy better interpretability; moreover, they can exploit semantic information to improve robustness against non-semantic perturbations, such as natural noise, high-frequency perturbations, adversarial noise, and low-resolution images.
Specifically, we denote $f_\theta : X \to Y$ as the neural network with parameters $\theta$, which is typically trained via **Empirical Risk Minimization** (ERM) with loss $\ell(f_\theta(x), y)$.
**Fixed Training Procedure.** Simply put, this means training $f_\theta$ via ERM on regularized image data obtained by stopping at a fixed sparsity level of $\gamma$ (i.e., the proportion of non-zero elements of $\gamma$ over the dimension of $\gamma$). This procedure can be applied to the task of classification with noisy images
---
1For a general matrix $A$, we denote $A_S$ as the sub-matrix of $A$ with rows indexed by $S$.
and adversarial defense, where the image at an early-stopped iteration of Eq. (3) eliminates the detailed information in which the adversarial perturbation resides.
**Iterative Training Procedure.** Equipped with an efficient generation of the image path, we can iteratively train the network parameters $\theta$ while running the instance smoothing in Eq. (3). In this way, the model first learns semantic features, followed by detailed/fine-scale features. Specifically, the iterative training alternately runs LBI and gradient descent w.r.t. $\theta$ as follows:
$$\beta_{k+1} = \beta_k - \kappa \alpha \nabla_\beta \mathcal{L}_\nu(\beta_k, \gamma_k), \tag{7a}$$
$$z_{k+1} = z_k - \alpha \nabla_\gamma \mathcal{L}_\nu(\beta_k, \gamma_k), \tag{7b}$$
$$\gamma_{k+1} = \kappa\, \text{prox}_{\|\cdot\|_1}(z_{k+1}) \quad \text{from Eq. (3c)}, \tag{7c}$$
$$\tilde{\beta}_{k+1} = \text{proj}_{\text{supp}(\gamma_{k+1})}(\beta_{k+1}) \quad \text{from Prop. 3.1}, \tag{7d}$$
$$\theta_{k+1} = \theta_k - \eta \nabla_\theta \ell(f_{\theta_k}(\tilde{\beta}_{k+1}), y) \quad \text{gradient descent w.r.t. } \theta \text{ with learning rate } \eta, \tag{7e}$$
where the update in Eq. (7e) can be replaced with other optimizers such as SGD or Adam. Such an iterative training procedure decomposes the information, which enjoys better interpretability and can potentially be applied to tasks where $y$ is labeled according to both semantic and detailed features, e.g., medical imaging diagnosis, in which both the shape and the texture of a lesion are pathologically related to the disease. As a compromise, this method may still be vulnerable to adversarial attacks since it also learns fine-scale information.
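To make the alternation concrete, here is a minimal PyTorch sketch of one epoch of the iterative procedure; `smoother.step` is a hypothetical wrapper around the LBI update and projection sketches above, and all names are ours.

```python
import torch

def iterative_training_epoch(model, optimizer, loader, smoother, criterion):
    """One epoch of the iterative procedure in Eq. (7).

    For each batch, the instance smoother advances Eqs. (7a)-(7d) by one step
    and returns the projected images beta_tilde; theta is then updated on them
    (Eq. (7e), here with a generic torch optimizer instead of plain GD).
    """
    model.train()
    for x, y in loader:
        with torch.no_grad():
            beta_tilde = smoother.step(x)   # TV-regularized batch at the current scale
        loss = criterion(model(beta_tilde), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```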
**Finetune Procedure.** For any pre-trained model $f_{\theta_0}$ obtained through a non-regularized training procedure (e.g., vanilla ERM), we perform a fine-tuning process on the parameter $\theta_0$ using the Fixed Training procedure, allowing the model to progressively capture semantic information.
## 4 EXPERIMENTS
In this section, we conduct extensive experiments to demonstrate the ability of our method to learn semantic features. We mainly focus on the robustness against non-semantic features introduced by our method. Specifically, the method is evaluated to show the robustness against noisy images, adversarial attacks, high-frequency perturbations, and low-frequency images.
**Datasets.** CIFAR10 [Krizhevsky & Hinton (2009)] and miniImageNet [Vinyals et al. (2016)] are adopted in our experiments. For noisy training, we instead utilize CIFAR10-C [Hendrycks & Dietterich (2019)], which contains different kinds of noisy and corrupted images from CIFAR10.
**Implementation Details.** We use ResNet18 for CIFAR10 and ResNet34 (He et al., 2016) for miniImageNet in our experiments. For hyperparameters, we set $\kappa = 10$, $\nu = 1$, and calculate $\alpha$ as $\alpha = \frac{1}{\kappa \|H\|_2}$, where $H = \nabla^2 \mathcal{L}_\nu$ is the Hessian of the loss function. Since miniImageNet was originally designed for few-shot learning, its training and test sets contain different classes. To adapt it to our setting, we split the original training set: we randomly chose 100 images of each class as our new test set and used the others as our training set.
### 4.1 ROBUSTNESS AGAINST NOISY IMAGES
To demonstrate the efficacy of our proposed method when dealing with noisy images, we compare our model with (1) the Vanilla Model: vanilla training to optimize ERM, and (2) the TV Layer, which appends a layer to the neural network to enforce TV smoothness, following Yeh et al. (2022b). For training, the vanilla model and the TV layer are trained on clean images from CIFAR10. For the fixed training procedure, we train the network on preprocessed images from CIFAR10 with sparsity 0.8. For iterative training, we follow the strategy in Eq. (7) with sparsity levels from 0.3 to 0.8. For the finetuning procedure, we use fixed training on processed images with sparsity 0.8 to finetune the vanilla model for 20 epochs. In the test stage, we consider two scenarios for all methods: None and Sparsity 0.6, which respectively correspond to noisy images with no preprocessing and test images preprocessed via our instance smoothing algorithm in Eq. (3) with sparsity 0.6.
We report the classification accuracy in Tab. 1. When used as a preprocessing method, our method can help almost all the models improve their accuracy on several kinds of noisy data. Meanwhile, our model achieves a further improvement over others by smoothing the detailed information out via preprocessing in the training stage.
Table 1: Classification results on noisy data from CIFAR10-C with different preprocessing strategies.
| Training | Preprocessing on Test Data | Corruption Type | Mean |
|---------------------------|----------------------------|-----------------|------|
| Vanilla Model | None | 45.90% | 67.14% |
| | Sparsity 0.6 | 72.57% | 73.28% |
| TV Layer | None | 49.97% | 69.78% |
| | Sparsity 0.6 | 76.15% | 75.86% |
| Ours (Fixed Training) | None | 36.60% | 61.29% |
| | Sparsity 0.6 | 75.34% | 74.16% |
| Ours (Iterative Training) | None | 42.90% | 64.16% |
| | Sparsity 0.6 | 78.46% | 76.99% |
| Ours (Finetune) | None | 42.62% | 64.65% |
| | Sparsity 0.6 | 75.28% | 74.59% |
It is also interesting to observe that instance smoothing yields varying effects across corruption types. Notably, significant improvements are evident after preprocessing for types like "Gaussian" and "Shot", whereas other types, such as "Elastic" and "Glass", do not exhibit this phenomenon. To explain this, we visualize images corrupted by different types in Fig. 2. As shown, Gaussian or shot noise mainly corrupts background or contextual details, which can be smoothed out after preprocessing. In contrast, "Glass Blur" and "Elastic Transform" alter shapes significantly, challenging our method's effectiveness. Additionally, the "Brightness" corruption shows minimal impact, possibly because this noise is relatively weak.
Figure 2: Visualization of different types of noisy images.
4.2 ROBUSTNESS AGAINST ADVERSARIAL ATTACK
Table 2: Classification results on adversarial examples (FGSM) at different strengths with CIFAR-10.
| Training | Preprocessing on Test Data | $\varepsilon = 8/255$ | $\varepsilon = 16/255$ | $\varepsilon = 24/255$ | $\varepsilon = 32/255$ |
|---------------------------|----------------------------|-----------------------|------------------------|------------------------|------------------------|
| Vanilla Model | None | 26.77 | 18.49 | 15.63 | 14.32 |
| PNI | None | 41.07 | 26.05 | 16.64 | 13.31 |
| TV Layer | None | 43.57 | 31.59 | 21.17 | 16.46 |
| Ours iterative | None | 35.31 | 28.70 | 21.16 | 18.08 |
| Ours fix | None | 37.23 | 26.14 | 18.57 | 13.54 |
| Finetune | None | 48.60 | 36.27 | 23.66 | 17.47 |
| Vanilla Model | Sparsity 0.6 | 38.51 | 31.32 | 27.48 | 25.53 |
| PNI | Sparsity 0.6 | 51.21 | 43.35 | 37.30 | 32.83 |
| TV Layer | Sparsity 0.6 | 53.25 | 45.59 | 40.99 | 36.81 |
| Ours iterative | Sparsity 0.6 | 44.79 | 38.38 | 35.64 | 32.62 |
| Ours fix | Sparsity 0.6 | 51.19 | 43.39 | 38.13 | 33.82 |
| Finetune | Sparsity 0.6 | 57.61 | 51.62 | 46.87 | 41.14 |
| Wang, et al. Natural | - | 17.10 | 14.00 | 12.70 | - |
| Wang, et al. Adv | - | 43.50 | 23.20 | 28.60 | - |
In this section, we show the robustness of our method against adversarial attacks. The attacked data are generated via the commonly used FGSM (Goodfellow et al., 2014) and PGD (Madry et al., 2018) (in Appendix G). We compare our methods with the Vanilla and TV layer methods, PNI (Rakin et al., 2018), and results from Wang et al. (2020). For the fixed training procedure, we train the network on
preprocessed images with sparsity 0.6. For iterative training, we follow the strategy in Eq. (7) with sparsity levels from 0.3 to 0.6. For the finetuning procedure, we use fixed training on processed images with sparsity 0.6 to finetune the vanilla model for 20 epochs. During the test stage, we consider three scenarios for smoothing each test image: None, Sparsity 0.6, and Sparsity 0.8, which respectively correspond to no smoothing and preprocessing with sparsity 0.6 or 0.8. Since optimizing the TV layer method involves computing the Hessian, which is not computationally tractable for large-scale image data, we have limited its implementation to CIFAR10.
We report the accuracy at attack strengths $\varepsilon = 8/255$ to $32/255$ on CIFAR10 in Tab. 2 and $\varepsilon = 2/255$ to $8/255$ on miniImageNet in Tab. 3, where $\varepsilon$ stands for the attack strength on normalized images. We first note that for all methods, applying the instance smoothing method to the test data brings a robustness improvement, which suggests the utility of instance smoothing. Besides, it is also interesting to see that all variants of our method outperform the Vanilla method by a large margin, which further demonstrates the utility of incorporating instance smoothing into the training stage. In particular, with sparsity-0.6 preprocessing, the finetuning procedure outperforms the vanilla model by $19.10\%$ at $\varepsilon = 8/255$ on CIFAR10 (57.61% vs. 38.51%).
Table 3: Classification results on adversarial examples (FGSM) with miniImagenet.
| Training | Preprocessing on Test Data | $\varepsilon = 2/255$ | $\varepsilon = 4/255$ | $\varepsilon = 6/255$ | $\varepsilon = 8/255$ |
|-------------------|----------------------------|-----------------------|-----------------------|-----------------------|-----------------------|
| Vanilla Model | None | 13.23 | 7.86 | 6.17 | 5.30 |
| Ours iterative | None | **16.72** | **9.40** | **6.98** | **6.22** |
| Ours fix | None | 12.84 | 7.39 | 5.84 | 5.08 |
| Finetune | None | 12.20 | 6.78 | 5.14 | 4.47 |
| Vanilla Model | Sparsity 0.6 | 30.54 | 17.58 | 12.75 | 10.39 |
| Ours iterative | Sparsity 0.6 | 30.27 | 16.98 | 12.09 | 9.78 |
| Ours fix | Sparsity 0.6 | 32.62 | 18.67 | 13.09 | **10.48** |
| Finetune | Sparsity 0.6 | **33.92** | **19.81** | **13.55** | 10.31 |
4.3 Frequency Domain Analysis
Figure 3: Examples of high and low-frequency components of images. (a) An example from CIFAR10 with a cut-off radius $r = 8$. (b) An example from miniImagenet with a cut-off radius $r = 20$.
To further illustrate the role of our method in enhancing robustness, we analyze it from the perspective of the frequency domain. We follow Wang et al. (2020) to test the accuracy of models on both the high- and low-frequency components, and follow Geirhos et al. (2018a) to measure the fraction of low-frequency features in our trained model. Specifically, we first decompose the images into low-frequency and high-frequency components as shown in Fig. 3. The low-frequency fraction, defined as the proportion of instances correctly predicted using only the low-frequency components, is then calculated among all correctly predicted samples. We consider three models with different training strategies and compare them with the vanilla model. For the fixed training procedure, we train the network on preprocessed images with sparsity 0.8. For iterative training, we follow Eq. (7) with sparsity levels from 0.3 to 0.8. For the finetuning procedure, we use fixed training on processed images with sparsity 0.8 to finetune the vanilla model for 20 epochs.
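The decomposition with cut-off radius $r$ can be realized with a centered FFT mask; below is a minimal NumPy sketch, assuming a single grayscale channel and the centered-disk mask of Fig. 3 (function name is ours):

```python
import numpy as np

def split_frequencies(img, r):
    """Low/high-frequency decomposition of a grayscale image with cut-off radius r."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r ** 2   # centered disk of radius r
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real        # keep frequencies inside the disk
    high = np.fft.ifft2(np.fft.ifftshift(F * ~mask)).real      # keep frequencies outside the disk
    return low, high
```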
We plot the low-frequency fraction and the accuracy on high/low-frequency components during training with different cut-off radii $r$ in Fig. 4 for CIFAR-10 and miniImageNet. As training iterates, our method attains a higher fraction than the vanilla model. On low-frequency components, our models always achieve the highest accuracy, while the vanilla model usually makes better predictions on high-frequency components, which is not robust since a human cannot extract obvious visual information
from high-frequency components. This result suggests the ability of our method to learn semantic information contained in the low-frequency features. Moreover, we note that the iterative training method learns more low-frequency information than the fixed training, which suggests that smoothly increased sparsity in iterative training can facilitate semantic learning.
Figure 4: Test accuracy and fraction during training on high/low-frequency components of images from miniImagenet and CIFAR-10. The top row contains a low-frequency fraction on CIFAR10 and miniImagenet. The bottom row contains the accuracy on high/low-frequency components.
Moreover, we present two visualization results. The first shows the frequencies in the first layer's feature maps during training in Fig. 5. Each grid cell corresponds to a feature map; a high-frequency map is typically more visually dispersed, while a low-frequency map is usually more concentrated. As shown, the vanilla model (bottom) tends to learn high-frequency features, while our method first learns low-frequency features and then high-frequency features over the course of training. This result can explain the low-frequency robustness shown in Fig. 4.
Figure 5: Visualization of the frequency maps in the first convolution layer in the frequency domain during training. The top and bottom rows respectively correspond to our iterative training model and the vanilla method.
The second concerns the expected difference in the frequency domain, as proposed in Yin et al. (2019). We calculate $\mathbb{E}(\mathcal{F}(X) - \mathcal{F}(\tilde{X}))$, where $\mathcal{F}$ stands for the Fourier transform, and $X$ and $\tilde{X}$ stand for different images. As shown in Fig. 6, the difference between the processed images and the original images is mainly located in the low-frequency components. Besides, as the sparsity level increases from 0.6 to 0.8, the difference in the high-frequency domain between the original images and the images generated by our method decreases. These results can explain the low-frequency robustness of our model: during iterative training, the model initially learns low-frequency (large-scale) information and then high-frequency (small-scale) information.
Figure 6: The expected difference in the frequency domain on CIFAR10. "0.6 & Original" stands for the difference between images with 0.6 sparsity and original images. The interpretation of "0.8 & Original" and "0.6 & 0.8" is similar.
4.4 Robustness against Low Resolution
To illustrate the robustness of our method against low-resolution data, we apply it to the task of classifying low-resolution images. We first downsample the original images to specific intermediate sizes and then upsample them back to the original size via nearest interpolation. A smaller intermediate size results in a lower-resolution image. Similar to the previous section, we consider the fixed model trained on preprocessed images with sparsity 0.6, the iterative model trained with the strategy in Eq. (7) from sparsity 0.3 to 0.6, and the finetuned model on preprocessed images with sparsity 0.6.
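A minimal PyTorch sketch of this degradation pipeline (our naming; the paper specifies nearest interpolation for upsampling, and we assume the same mode for downsampling):

```python
import torch.nn.functional as F

def degrade_resolution(images, intermediate_size):
    """Down- then up-sample a batch (N, C, H, W) with nearest interpolation (Sec. 4.4)."""
    small = F.interpolate(images, size=intermediate_size, mode="nearest")
    return F.interpolate(small, size=images.shape[-2:], mode="nearest")
```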
The results along the training procedure are presented in Fig. 7 for test data with intermediate sizes ranging from 74 to 24. As shown, all variants of our method outperform the vanilla model (blue curve), especially on lower-resolution images. This result suggests the effectiveness of instance smoothing in learning semantic information during training, as low-resolution images smooth out the details while maintaining the object's shape.
Figure 7: (a) Examples of low-resolution images, with the original image on the left and images of different intermediate sizes from 74 to 24 on the right; (b) Test accuracy during training on low-resolution images with different intermediate sizes.
5 Conclusions and Discussions
We present a novel instance smoothing algorithm that effectively disentangles structural information from images. We propose an efficient graph-based algorithm for projection acceleration. We then propose three procedures to incorporate the algorithm into network training. We demonstrate the utility in several robustness tasks.
Limitations. Our method brings additional memory costs during training, which makes it difficult to extend to larger-scale datasets such as ImageNet. Besides, we believe that our method can potentially be applied to feature maps with TV regularization. Such an extension, together with the optimization of memory usage, will be explored in future work.
REFERENCES
Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. *Physica D: Nonlinear Phenomena*, 60(1):259–268, 1992. ISSN 0167-2789.
Jean-François Aujol, Guy Gilboa, Tony Chan, and Stanley Osher. Structure-texture image decomposition—modeling, algorithms, and parameter selection. *International Journal of Computer Vision*, 67(1):111–136, 2006. doi:10.1007/s11263-006-4331-z URL https://doi.org/10.1007/s11263-006-4331-z
Antonio Marquina and Stanley J. Osher. Image super-resolution by TV-regularization and Bregman iteration. *Journal of Scientific Computing*, 37(3):367–382, 2008. doi:10.1007/s10915-008-9214-8 URL https://doi.org/10.1007/s10915-008-9214-8
Manya V. Afonso, José M. Bioucas-Dias, and Mário A. T. Figueiredo. An augmented lagrangian approach to the constrained optimization formulation of imaging inverse problems. *IEEE Transactions on Image Processing*, 2011.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PLOS ONE*, 10(7):1–46, 07 2015. doi:10.1371/journal.pone.0130140 URL https://doi.org/10.1371/journal.pone.0130140
Amir Beck and Marc Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. *IEEE Transactions on Image Processing*, 18(11):2419–2434, 2009. doi:10.1109/TIP.2009.2028250
Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. *Foundations and Trends® in Machine Learning*, 3(1):1–122, 2011.
Wieland Brendel and Matthias Bethge. Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. *arXiv preprint arXiv:1904.00760*, 2019.
Martin Burger, Stanley Osher, Jinjun Xu, and Guy Gilboa. Nonlinear inverse scale space methods for image restoration. In *VLSM*, volume 5, pp. 25–36. Springer, 2005.
Antonin Chambolle. An algorithm for total variation minimization and applications. *Journal of Mathematical Imaging and Vision*, 20(1):89–97, 2004. doi:10.1023/B:JMIV.0000011325.36760.1e URL https://doi.org/10.1023/B:JMIV.0000011325.36760.1e
Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. *Journal of mathematical imaging and vision*, 40(1):120–145, 2011.
T.F. Chan and Chiu-Kwong Wong. Total variation blind deconvolution. *IEEE Transactions on Image Processing*, 7(3):370–375, 1998. doi:10.1109/83.661187
Tony F Chan and Luminita A Vese. An active contour model without edges. In *IEEE International Conference on Computer Vision*, volume 1, pp. 324–331. IEEE, 2001.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. IEEE, 2009.
Michael Donoser, Martin Urschler, Martin Hirzer, and Horst Bischof. Saliency driven total variation segmentation. In *2009 IEEE 12th International Conference on Computer Vision*, pp. 817–824, 2009. doi:10.1109/ICCV.2009.5459296
Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, and Yuan Yao. Dessilbi: Exploring structural sparsity of deep networks via differential inclusion paths. In *International Conference on Machine Learning*, pp. 3315–3326. PMLR, 2020.
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. *arXiv preprint arXiv:1811.12231*, 2018a.
|
4XCfu7fTgw
|
In Fig.2, $\beta$ introduces little to no effect on the metrics for varying its values in the entire range. Do the authors have any speculation or analysis on this pattern? Because from Tables 1 and 2, single svd loss term can provide significant improvement and sometimes has the best performance. But when combined with std, it does not show a significant effect.
|
Spectral Contrastive Regression
Anonymous authors
Paper under double-blind review
Abstract
While several techniques have been proposed to enhance the generalization of deep learning models for classification problems, limited research has been conducted on improving generalization for regression tasks. This is primarily due to the continuous nature of regression labels, which makes it challenging to directly apply classification-based techniques to regression tasks. Conversely, existing regression methods overlook feature-level generalization and primarily focus on data augmentation using linear interpolation, which may not be an effective approach for synthesizing data for regression. In this paper, we introduce a novel generalization method for regression tasks based on the metric-learning assumption that feature distances should be proportional to label distances. Unlike previous approaches that solely consider the scale prediction of this proportion and disregard its variation among samples, we argue that this proportion is not constant and can be defined as a mapping function. Additionally, we propose minimizing the error of this function and stabilizing its fluctuating behavior by smoothing out its variations. The t-SNE visualization of the embedding space demonstrates that our proposed loss function generates a more discriminative pattern with reduced variance. To enhance Out-of-Distribution (OOD) generalization, we leverage a property of the spectral norm (i.e., by sub-multiplicativity, the spectral norm of the feature matrix bounds the Frobenius norm of the output) and align the maximum singular value of the feature matrices across different domains. Experimental results on the MPI3D benchmark dataset reveal that aligning the spectral norms significantly improves the unstable performance on OOD data. We conduct experiments on eight benchmark datasets for domain generalization in regression, and our method consistently outperforms state-of-the-art approaches in the majority of cases. Our code is available in an anonymous repository, and it will be made publicly available upon acceptance of the paper: https://github.com/workerasd/SCR.
1 Introduction
Continuous label prediction, known as regression, is widely utilized across various domains, including computer vision (Zhang et al., 2015; Chen et al., 2016), medical testing (Gilsanz & Ratib, 2011; Agatston et al., 1990), and financial analysis (Happersberger, 2021). Unlike classification, which seeks to determine optimal decision boundaries, regression involves fitting outputs to a continuous function (Lee & Landgrebe, 1993). Therefore, when addressing challenges such as uncertainty estimation (Hüllermeier & Waegeman, 2021) and generalization (Yao et al., 2022) in regression, it is crucial to consider the relationships between the labels.
While out-of-distribution generalization has received significant attention for classification (Wang et al., 2022), regression generalization remains relatively underexplored. Particularly, the existing representation learning based methods like IRM (Arjovsky et al., 2019) are primarily designed for classification tasks. The augmentation-based approach of C-Mixup (Yao et al., 2022) has recently been proposed to enhance generalization by mixing training pairs based on the probability associated with label distances. While the aforementioned approaches are applied or can be adapted for regression generalization, their performance is limited because they do not account for the contrastive interdependence between features and labels.
To tackle the aforementioned problem, and with the aim of learning a generalizable representation from the source domains, we introduce a contrastive learning loss specifically designed for regression tasks.
This loss brings features with smaller label distances closer together in the learned representation, while simultaneously pushing features with larger label distances farther apart, ultimately helping to separate representations learned from different domains and enhancing the generalization performance in the target domain. Contrary to the assumption in Regression Metric Loss (RML) (Chao et al., 2022) that the ratio between feature distance and label distance is constant, we propose that this ratio varies and only equals a constant under certain ideal conditions. We argue that RML, by overlooking the variability in this ratio, may obscure the pattern of feature distributions in certain cases, as demonstrated in our experiments.
Specifically, motivated by augmentation-based techniques (Xu et al., 2021; Sicilia et al., 2023; Yao et al., 2022) for domain generalization in regression and classification, we propose to generate new distributions by mixing pairs of training data. For each distribution, we impose a metric penalty to identify discriminative patterns within the feature distribution. We align the real and synthesized distributions by minimizing the difference between the spectral norms of their feature representations. By the properties of the spectral norm, this minimization keeps the output scales from diverging across distributions, while lowering the upper bound of the distribution discrepancy in regression.
The main contributions of this paper are threefold:
1. Unlike prior methods that treat the feature-label distance proportion as fixed, we propose to model this as a variable mapping function and address the instability arising from fluctuations in this mapping.
2. To improve OOD generalization, we expand the training distribution by generating new samples (Yao et al., 2022), and then align the real and synthesized distributions by minimizing the difference between the spectral norms of their feature representations.
3. We conduct experiments on eight benchmark regression datasets and show that our method outperforms the state-of-the-art in most cases. The t-SNE visualization of the feature embedding illustrates the effectiveness and stability of our proposed metric loss.
2 RELATED WORK
2.1 METRIC LEARNING
Metric learning has been shown to be effective for methods that rely on distances and similarities (Kulis et al., 2013). Traditionally, methods like PCA (Pearson, 1901) and KNN have been widely used in machine learning. With the development of deep learning, networks related to pairwise distances (Schroff et al., 2015; Bromley et al., 1993) were designed to correlate samples while sharing weights (Kaya & Bilge, 2019). Later, prototype-based metric losses (Wen et al., 2016; Deng et al., 2019) were proposed with a contrastive motivation. In regression tasks, metric learning losses have not been well defined because it is hard to connect metric distances with continuous labels. Recently, Chao et al. (2022) proposed the assumption that there is a constant proportion between the feature distance and the label distance. However, the method based on this assumption only considers the scale of the feature matrix, ignoring fluctuations of the proportion map. To address this issue, this paper treats the proportion as a mapping function during training and proposes a metric loss that smooths its fluctuations.
2.2 OUT-OF-DISTRIBUTION GENERALIZATION
Out-of-distribution (OOD) generalization aims at generalizing a model from the training distribution to an unseen distribution. The methods can broadly be divided into three categories (Wang et al., 2022): data augmentation, representation learning, and training strategies. Data augmentation methods (Zhang et al., 2018; Zhou et al., 2021) utilize linear interpolation to fill the distribution gap, and some methods (Xu et al., 2021; Sicilia et al., 2023) also generate new distributions to enrich the convex hull supported by the source distributions. Representation learning (Arjovsky et al., 2019; Albuquerque et al., 2019) aims at generating distribution-invariant feature representations from the source distributions. Recently, methods like SWAD (Cha et al., 2021) have proposed novel training and model selection strategies, significantly improving performance in OOD generalization.
2.3 Generalization in Regression
Recent research targeting generalization in regression tasks is based on data augmentation, in which mixup pairs are selected with probability related to label distances (Yao et al., 2022; Yang et al., 2021). Although research on this topic is limited, some methods designed for regression tasks can be transferred for generalization purposes. For instance, thanks to the function of metric learning, metric losses in regression (Chao et al., 2022; Gong et al., 2022) can be regarded as in-distribution generalization methods. Also, distribution alignment methods in regression (Nejjar et al., 2023; Chen et al., 2021) can be adapted as OOD generalization methods. However, these distribution alignment methods are not related to the label functions, which are crucial in regression tasks.
3 Methodology
3.1 Problem Definition
Regression in deep learning. Let \(\{(x_i, y_i)\}_{i=1}^N\) be a dataset with \(N\) samples, with \(x_i \in \mathcal{X}\) being the \(i\)-th input sample and \(y_i \in \mathcal{Y}\) its corresponding label, where \(\mathcal{X}\) and \(\mathcal{Y}\) denote the input space and the continuous label space, respectively. In the training phase, the network learns a projection function \(g : \mathcal{X} \rightarrow \mathcal{F}\) and a regression function \(p : \mathcal{F} \rightarrow \mathcal{Y}\). The projection function \(g\) transforms the input data into the feature space, and the regression function \(p\) maps the compact feature representation to the label space. The objective of the regressor is to bring the output prediction \(\hat{y}_i\) close to the ground-truth label \(y_i\). Ideally, the optimal predictor \(p\) is a fully connected layer that satisfies \(y_i = \hat{y}_i = W_p^* f_i + b_p^*\), where \(f_i = g(x_i)\) is the extracted feature, \(W_p^*\) is the optimal weight, and \(b_p^*\) is the optimal bias.
Distribution discrepancy in regression. Cortes & Mohri (2011) define a theory of learning from different distributions in regression. Given a hypothesis \(h\) mapping the input space \(\mathcal{X}\) to the label space \(\mathcal{Y}\), the discrepancy distance \(\text{disc}\) between two distributions \(P\) and \(Q\) is defined as:
\[
\text{disc}(P, Q) = \max_{h,h' \in H} |\mathcal{L}_P(h', h) - \mathcal{L}_Q(h', h)|
\]
Here, the hypothesis class \(H\) is a subspace of the reproducing kernel Hilbert space (RKHS) \(\mathcal{H}\), and \(\mathcal{L}_D(h', h) = E_{x \sim D}[L(h(x), h'(x))]\), with \(L\) being the MSE loss.
3.2 Relational Contrastive Learning
Prior works show that by leveraging discrete labels to define positive and negative pairs in classification models, contrastive learning aims to learn feature representations with low intra-class variance and high inter-class separation, which can improve the generalization ability of the learned model. However, this motivation is based on the fact that the labels are discrete. In regression tasks, given an input-label pair \((x_i, y_i)\) and any \(\epsilon > 0\) with input \(x_i + \epsilon\) and its continuous label \(y_i + \epsilon\), it is proven that \(p\) should be a continuous bijection (Chao et al., 2022), with homeomorphic label and feature distributions. Intuitively, there is a positive relationship between the distances of labels and the distances of features: as the distance between two labels increases, the distance between their corresponding features should also increase, meaning that when two examples have labels that are farther apart, their representations in feature space should also be farther apart, and vice versa for labels that are closer together.
Remark 1. \(d(y_i, y_j) < d(y_i, y_k) \iff d(f_i, f_j) < d(f_i, f_k), \forall i, j, k \in \mathbb{R}^+\)
Note that, for any bounded open subset in \(\mathcal{F}\), \(p\) should be convergent and bounded, which means \(p\) should be uniformly continuous on any bounded open subset (Rudin, 1976). Then, Remark 1 should be updated.
Remark 2. \(d(y_i, y_j) < d(y_t, y_k) \iff d(f_i, f_j) < d(f_t, f_k), \forall i, j, k, t \in \mathbb{R}^+\)
Remark 2 is not trivial. Since \(\mathcal{F}\) is a compact space and the label space \(\mathcal{Y}\) is continuous, for every \(\epsilon > 0\) we can find labels \(y', y''\) with \(d(y', y'') = \epsilon\). Then, \(\exists \delta = d(f', f'') > 0\) such that \(\forall\, d(f_a, f_b) < \delta\), we have \(d(y_a, y_b) < \epsilon\). Hence, Remark 2 keeps \(p\) uniformly continuous.
In light of the discussion above, we argue that the distance between labels cannot be ignored in the regression tasks. In particular, we propose learning a feature-label proportional distance instead of the traditional distance, e.g., Euclidean distance between features:
\[ d_r(f_i, f_j) = \frac{d(f_i, f_j)}{d(y_i, y_j)}, \]
(1)
Here, \( d(\cdot, \cdot) \) represents the Euclidean distance and \( d_r(\cdot, \cdot) \) denotes the proportional distance induced from \( d(\cdot, \cdot) \). In addition, \( d_r(\cdot, \cdot) \) should be a bounded distance, which is illustrated by the following theorem.
**Theorem 1.** Given any two data points \((x_i, y_i)\) and \((x_j, y_j)\), we have \( \|f_i - f_j\|_p \leq \|W_p^{*^{-1}}\|_p \|y_i - y_j\|_p \). Here, \( W_p^* \) is the optimal weight of the fully connected layer. \( f_i, f_j \) are the features extracted from \( x_i, x_j \) through model \( g \), and \( \|\cdot\|_p \) is the norm under \( L_p \) space.
**Proof 1.** Given the optimal weight \( W_p^* \), bias \( b_p^* \) and data \((x_i, y_i), (x_j, y_j)\), we have
\[ y_i = W_p^* f_i + b_p^*, \quad y_j = W_p^* f_j + b_p^* \]
where \( f_i, f_j \) are extracted features from \( x_i, x_j \), respectively. Then,
\[ \|f_i - f_j\|_p = \|W_p^{*^{-1}}(y_i - y_j)\|_p \leq \|W_p^{*^{-1}}\|_p \|y_i - y_j\|_p \]
Theorem 1 gives the upper bound of \( d_r(\cdot, \cdot) \), which is \( \|W_p^{*^{-1}}\|_2 \). In addition, when equality in Theorem 1 holds, it recovers the assumption of the regression metric loss (Chao et al., 2022) that the distance between features should be proportional to the distance between their corresponding labels. Specifically, Chao et al. (2022) use a learnable parameter to constrain the proportion between the feature distance and the label distance. However, according to Theorem 1, this proportion is related to the optimal weight \( W_p^* \), and the equality may not hold when the labels are continuous. Moreover, representing the proportion with a constant ignores its fluctuations and variance among different samples. To alleviate this issue, we formulate the proportion as a mapping function and minimize its standard deviation to constrain the distances between features to be uniform across samples.
According to Theorem 1, the result of \( d_r(\cdot, \cdot) \) should be a bounded proportion map and can be a constant function in some ideal situation. Hence, we minimize the standard deviation of \( d_r(\cdot, \cdot) \) to acquire a flatter proportion map in a mini-batch. The loss function should be:
\[ L_{std} = \sqrt{\frac{1}{N_b^2 - 1} \sum_{i} \sum_{j} \left( d_r(f_i, f_j) - \bar{d}_r \right)^2} \]
(2)
Here, \( \bar{d}_r \) is a constant equal to the mean of the proportional distances in the batch, and \( N_b \) is the batch size. Clearly, \( L_{std} \) constrains the predictor \( p \) to be a Lipschitz-continuous function satisfying Remarks 1 and 2.
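As a reference point, \( L_{std} \) can be computed on a mini-batch as in the following PyTorch sketch (names are ours; the small `eps` guarding against identical labels is our addition):

```python
import torch

def std_loss(features, labels, eps=1e-8):
    """L_std of Eq. (2): std of the feature/label distance ratio in a mini-batch.

    features: (N_b, d) float tensor; labels: (N_b, 1) float tensor.
    """
    fd = torch.cdist(features, features)                # d(f_i, f_j)
    ld = torch.cdist(labels, labels)                    # d(y_i, y_j)
    off_diag = ~torch.eye(len(features), dtype=torch.bool, device=features.device)
    ratio = fd[off_diag] / (ld[off_diag] + eps)         # d_r(f_i, f_j), i != j
    return ratio.std()
```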
### 3.3 Spectral Alignment of Domains
Existing works (Xu et al., 2021; 2023) in domain generalization have demonstrated that the diversity and amount of training examples are positively correlated with the generalizability of a machine learning model. To expand the training set, we employ the data augmentation technique of c-mixup (Yao et al., 2022) to generate additional samples from unseen distributions. However, without imposing a constraint of domain invariance, the learned feature space might include domain-specific information and thus become noisy (Liu et al., 2023). This could hinder obtaining the optimal generalization power of the model.
To impose the domain-invariance constraint, the existing work of Chen et al. (2021) suggests not minimizing the difference between the Frobenius norms of the feature representations of different domains, since the Frobenius norm may cause unstable performance. We conjecture that this instability stems from the fact that the Frobenius norm encodes the average of the variances (i.e., singular values) along all orthogonal feature projections. We argue that the transferability of the feature representations mainly lies in aligning the highest-variability directions corresponding to the largest singular values (Chen et al., 2019). Therefore, in our formulation, the Frobenius norm is substituted by the spectral norm, which encodes only the highest-variability direction. We further show that the difference between the spectral norms of features can be related to the domain discrepancy.
**Notations** Following Cortes & Mohri (2011), the expected loss in regression is \( L_D(h', h) = E_{x \sim D}[L(h(x), h'(x))] \) with \( L \) being the MSE loss. We then have \( L_D(h, 0) = \frac{1}{N} \| \hat{Y}_D^h \|_F^2 \), with \( N \) being the number of samples and \( \hat{Y}_D^h \) being the output under hypothesis \( h \) and distribution \( D \); \( 0 \) denotes the hypothesis that maps every input to the zero element of \( Y \).
**Theorem 2.** Given two distributions \( P \) and \( Q \), we have
\[
\text{disc}(P, Q) \leq \frac{1}{N} \max_{h \in H} \left| \| \hat{Y}_P^h \|_F^2 - \| \hat{Y}_Q^h \|_F^2 \right|,
\]
where \( \text{disc} \) denotes the discrepancy between the two distributions and \( N \) denotes the number of samples.
**Proof 2.** Generally speaking, we have
\[
L(h', h) = L(h - h', 0)
\]
Since \( h, h' \) lie in the subspace \( H \) of the Hilbert space \( \mathbb{H} \), we have \( h'' = h - h' \in H \). Then,
\[
\text{disc}(P, Q) \leq \max_{h'' \in H} \left| L_P(h'', 0) - L_Q(h'', 0) \right| = \frac{1}{N} \max_{h'' \in H} \left| \| \hat{Y}_P^{h''} \|_F^2 - \| \hat{Y}_Q^{h''} \|_F^2 \right|
\]
This concludes the proof.
Theorem 2 relates the difference between feature representations to their distribution discrepancy. To determine the relation between the norm of the feature matrix and the output scale\(^1\), we consider the spectral norm of the feature space, \( \| F \|_2 = \sup_{w \neq 0} \frac{\| Fw \|_2}{\| w \|_2} \). If \( W_i \) is the \( i \)-th row vector of the weight \( W \) in the fully connected layer, then \( \| \hat{Y}_i^h \|_2 \leq \| \hat{Y}_i^h - b_i \|_2 + |b_i| \leq \| F \|_2 \| W_i \|_2 + |b_i| \), where \( \hat{Y}_i^h \) is the \( i \)-th vector of the output matrix \( \hat{Y}^h \) and \( b_i \) is the \( i \)-th entry of the bias vector \( b \) in the fully connected layer. If we define \( \lambda_i(F) = \| F \|_2 \| W_i \|_2 + |b_i| \) and stack these into a vector \( \lambda(F) \), we obtain \( \| \hat{Y}^h \|_F \leq \| \lambda(F) \|_2 \).
From the discussion above, the spectral norm is related to the upper bound of the output scale, so aligning the spectral norms prevents the output scales from differing greatly, which in turn aligns the two distributions as per Theorem 2. We therefore propose a loss based on the singular value decomposition (SVD):
\[
L_{svd} = \left| \max(s_{real}) - \max(s_{syn}) \right|, \tag{3}
\]
where \( s_{real} \) and \( s_{syn} \) are the sets of singular values of the feature matrices from the real and synthesized distributions, respectively; the largest singular value of each matrix enters the loss. Note that \( \| F \|_2 = \max(s_F) \), where \( s_F \) is the set of singular values of matrix \( F \).
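A corresponding PyTorch sketch of \( L_{svd} \) follows; since only the largest singular value is needed, `torch.linalg.svdvals` (which returns singular values in descending order) suffices.

```python
import torch

def l_svd(feat_real: torch.Tensor, feat_syn: torch.Tensor) -> torch.Tensor:
    """Sketch of the spectral alignment loss L_svd (Eq. 3).

    feat_real, feat_syn: (N, d) feature matrices from the real batch and
    the C-Mixup-synthesized batch. Note ||F||_2 = max singular value of F.
    """
    s_real = torch.linalg.svdvals(feat_real)  # singular values, descending
    s_syn = torch.linalg.svdvals(feat_syn)
    return (s_real[0] - s_syn[0]).abs()
```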
### 3.4 Overall Objective Function
We combine our objectives for relational contrastive learning and spectral alignment, and optimize them in an end-to-end training fashion. Formally, we have:
\[
L = L_{mse} + \alpha L_{std} + \beta L_{svd}, \tag{4}
\]
where \( \alpha \) and \( \beta \) are hyper-parameters that balance the contributions of the corresponding loss terms. The supervised loss \( L_{mse} \) is formulated as:
\[
L_{mse} = \frac{1}{N} \sum_{i=1}^{N} (p(g(x_i^{real})) - y_i^{real})^2 + \frac{1}{N} \sum_{i=1}^{N} (p(g(x_i^{syn})) - y_i^{syn})^2
\]
with \( p(g(x_i^{real})) \) and \( p(g(x_i^{syn})) \) being the prediction of input \( x_i^{real} \) and the augmented sample \( x_i^{syn} \), respectively. Here, \( y_i^{real} \) and \( y_i^{syn} \) denote the ground truth label corresponding to \( x_i^{real} \) and \( x_i^{syn} \) respectively.
\(^1\)The Frobenius norm of the output \( \| \hat{Y}_P^h \|_F \) represents the scale of the output in distribution \( P \). Unlike classification, in regression, the target for each sample can be a vector. That means, if we have \( N \) samples, each with \( M \) dimensional target vectors, then \( \hat{Y}_P^h \) is an \( N \times M \) matrix.
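Combining the pieces, a training step for the overall objective of Equation 4 might look as follows; `model.backbone` and `model.head` are hypothetical names for the feature extractor \( g \) and the predictor \( p \), and the snippet reuses the `l_std` and `l_svd` sketches above.

```python
import torch.nn.functional as F

def total_loss(model, batch_real, batch_syn, alpha=0.1, beta=0.1):
    """Sketch of L = L_mse + alpha*L_std + beta*L_svd (Eq. 4);
    the alpha/beta values here are illustrative, not tuned settings."""
    x_real, y_real = batch_real
    x_syn, y_syn = batch_syn                      # C-Mixup-augmented batch
    f_real, f_syn = model.backbone(x_real), model.backbone(x_syn)  # g(x)
    mse = F.mse_loss(model.head(f_real), y_real) \
        + F.mse_loss(model.head(f_syn), y_syn)    # L_mse over both batches
    return mse + alpha * l_std(f_real, y_real) + beta * l_svd(f_real, f_syn)
```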
4 EXPERIMENTAL RESULTS
4.1 IMPLEMENTATION DETAILS
Recent research (Kumar et al., 2022; Kirichenko et al., 2023) reveals that fine-tuning the whole network on a new task can improve the in-distribution (ID) performance of the new task at the price of its out-of-distribution (OOD) accuracy. This is because fine-tuning the whole network changes the feature space spanned by the training data of the new task, which distorts the pretrained features. While linear probing is an alternative to fine-tuning, its inability to adapt the features to the downstream task may degrade performance on in-distribution tasks. To mitigate this ID-OOD trade-off, motivated by the discussion in Kumar et al. (2022) and Kirichenko et al. (2023), we freeze the upper part of the C-Mixup (Yao et al., 2022) pretrained network (excluding the last block and the linear layers) during training. Specifically, we fine-tune only the bottom layer, to preserve the low-level features of the pretrained model, and unfreeze the last block to avoid degeneracy on the in-distribution tasks. In the following, we use FT as an abbreviation for fine-tuning.
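As an illustration, the freezing scheme can be configured as below, assuming a torchvision-style ResNet backbone; the layer names are assumptions about the architecture, not the paper's exact code.

```python
def configure_finetuning(model):
    """Sketch of the FT scheme: freeze the pretrained middle blocks and
    fine-tune only the bottom layer, the last block, and the linear head."""
    for p in model.parameters():
        p.requires_grad = False          # freeze everything first
    for name, p in model.named_parameters():
        # assumed torchvision ResNet names: conv1/bn1 (bottom layer),
        # layer4 (last block), fc (linear head)
        if name.startswith(("conv1", "bn1", "layer4", "fc")):
            p.requires_grad = True
```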
4.2 IN-DISTRIBUTION GENERALIZATION
**Datasets and experimental settings.** We evaluate the in-distribution (ID) generalization ability of our method on two tabular datasets (i.e., Airfoil (Kooperberg, 1997) and No2 (Kooperberg, 1997)) and one time-series dataset (i.e., Exchange-Rate (Lai et al., 2018a)). Airfoil contains 1,503 aerodynamic and acoustic test records for different sizes of the NACA0012 airfoil, while No2 is a collection of 500 records relating air pollution to traffic volume and meteorological variables. Exchange-Rate is a time-series dataset of length 7,588, consisting of daily exchange-rate data of eight countries from 1990 to 2016. Following Yao et al. (2022), we use a three-layer fully connected network for Airfoil and No2, and LST-Attn (Lai et al., 2018b) for Exchange-Rate; the preprocessing of each dataset is the same as in Yao et al. (2022). We also report the result of RML (Chao et al., 2022) combined with our fine-tuning method in the ID generalization experiments. Additionally, we compare with Feature Distribution Smoothing (FDS) (Yang et al., 2021) and RankSim (Gong et al., 2022); in its metric loss, RankSim considers the discrepancy between the order of feature distances and the order of label distances, rather than their proportion. Two evaluation metrics are used for the in-distribution tasks, namely Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). The results of our method and the reproduced results are averaged over three runs with different random seeds.
| Method | Airfoil RMSE↓ | Airfoil MAPE (%)↓ | No2 RMSE↓ | No2 MAPE (%)↓ | Exchange-Rate RMSE↓ | Exchange-Rate MAPE (%)↓ |
|--------|---------------|-------------------|-----------|---------------|---------------------|-------------------------|
| ERM† | 2.901 | 1.753 | 0.537 | 13.615 | 0.0236 | 2.423 |
| ERM* | 2.755 | 1.690 | 0.529 | 13.402 | 0.0257 | 2.613 |
| k-Mixup† (Greenewald et al., 2021) | 2.938 | 1.769 | 0.519 | 13.173 | 0.0236 | 2.403 |
| Mixup† (Zhang et al., 2018) | 3.730 | 2.327 | 0.528 | 13.534 | 0.0239 | 2.441 |
| Mani-Mixup† (Verma et al., 2019) | 3.063 | 1.842 | 0.522 | 13.382 | 0.0242 | 2.475 |
| C-Mixup† (Yao et al., 2022) | 2.717 | 1.610 | 0.509 | 12.998 | 0.0203 | 2.041 |
| C-Mixup* | 2.736 | 1.639 | 0.516 | 13.069 | 0.0235 | 2.415 |
| FT | 2.541 | 1.474 | 0.519 | 13.201 | 0.0233 | 2.387 |
| FT+RML | 2.560 | 1.496 | 0.537 | 13.801 | 0.0179 | 1.838 |
| FT+RankSim | 2.635 | 1.537 | 0.520 | 13.188 | | |
| FT+FDS | 2.663 | 1.529 | 0.589 | 14.986 | 0.0235 | 2.397 |
| FT+Lstd | 2.586 | 1.501 | 0.510 | 12.879 | 0.0161 | 1.529 |
| FT+Lsvd | 2.489 | 1.443 | 0.517 | 13.161 | 0.0233 | 2.391 |
| FT+Lstd+Lsvd | 2.516 | 1.460 | 0.506 | 12.896 | 0.0176 | 1.691 |
Table 1: Comparison on in-distribution datasets. The bold number is the best result and the underlined number is the second best result. The results of methods with † are reported by Yao et al. (2022) and the results of methods with * are reproduced based on the source code of Yao et al. (2022).
**Performance comparison.** We evaluate ID generalization over the three datasets and report the performance in Table 1. As the table shows, our method outperforms all comparison methods on the ID generalization tasks. We find that \( L_{std} \) outperforms RML in most cases; as discussed above, RML only considers the scale of the proportion and ignores its variance, which may explain why it does not surpass FT + \( L_{std} \). In addition, since the three datasets are relatively small, the pretrained model and our FT + \( L_{svd} \) with the synthesized distribution can also contribute to improving in-distribution generalization on these datasets in some cases.
**t-SNE visualization** It is well known that traditional contrastive learning methods aim at learning compact feature clusters in the embedding space (Wen et al., 2016; Schroff et al., 2015). As discussed in the method section, such a clustering motivation may not suit regression tasks, yet there are still connections between metric learning in regression and in classification. According to our discussion, \( L_{std} \) seeks a flatter \( d_r \), meaning the feature distribution should follow a discriminative pattern with less variance. To examine the effect of contrastive learning on the embedding space in regression, we visualize the feature distribution without metric loss, with RML, and with \( L_{std} \) in Figure 1. This visualization strongly supports our assumption and the discussion above: as Figure 1 shows, the feature distribution is more dispersed and the distribution pattern is clearer with \( L_{std} \). In addition, as discussed, RML focuses on learning a scalar proportion for the feature matrix and ignores the variance in the proportion, so in some situations the pattern becomes blurred with RML, as shown in Figure 1. Note that \( L_{std} \) preserves the Lipschitz continuity of the predictor, which enhances the continuity of the feature distribution with less steep slopes. Figures 1c and 1d illustrate this difference: unlike \( L_{std} \), RankSim (Gong et al., 2022), which focuses solely on the distance between orders, does not preserve Lipschitz continuity. This characteristic might contribute to \( L_{std} \)'s superior performance over RankSim in most scenarios, as shown in Tables 1 and 2, and it also manifests as the breakpoints in Figure 1c, supporting this hypothesis. Additional visualizations of \( L_{svd} \) and \( L_{std} + L_{svd} \) are provided in the Appendix.

(a) FT
(b) FT + RML
(c) FT + RankSim
(d) FT + $L_{std}$
Figure 1: t-SNE visualization of the embedding space on the DTI dataset. From left to right: (a) the baseline model fine-tuned to minimize the MSE loss, (b) the model fine-tuned to minimize both MSE and the RML objective, (c) the model fine-tuned to minimize both MSE and RankSim, and (d) the model fine-tuned to minimize both MSE and our \( L_{std} \). Red points represent features extracted from the train set and blue points features extracted from the test set. The pattern of the feature distribution is clearly sharper with \( L_{std} \).
### 4.3 Out-of-Distribution Generalization
**Datasets** The out-of-distribution (OOD) generalization ability of the models is evaluated on five datasets: three real-world datasets (*i.e.*, Communities&Crime (Redmond, 2009), SkillCraft (Mark Blair & Chen, 2013), and Drug-Target Interactions (DTI) (Huang et al., 2021)), one synthetic dataset (*i.e.*, RCF-MNIST (Yao et al., 2022)), and one dataset containing both synthetic and real images (*i.e.*, MPI3D (Gondal et al., 2019)). Crime and SkillCraft are tabular datasets: Crime combines 1,994 socio-economic records from three different sources, and SkillCraft contains 3,395 video-game telemetry records of real-time strategy (RTS) games from eight leagues. DTI is designed to predict the binding activity score between each small molecule and the corresponding target protein, collecting 232,458 records of drug and target-protein information. RCF-MNIST is a dataset of 60,000 images built on FashionMNIST (Xiao et al., 2017) with spurious correlations between colours and rotation angles. MPI3D is a benchmark dataset of 1,036,800 images from three distributions for predicting intrinsic factors; in our experiments, we only consider predicting the rotations around the vertical and horizontal axes.
**Experimental settings** We evaluate our method on four datasets: RCF-MNIST, Crime, SkillCraft, and DTI. We use a three-layer fully connected network on Communities&Crime and SkillCraft, ResNet-18 (He et al., 2016) as the feature extractor for RCF-MNIST, and DeepDTA (Öztürk et al., 2018) on DTI.
Following the original paper of DTI (Huang et al., 2021), we evaluate the methods using the \( R \) value; for the other three datasets, the evaluation metric is Root Mean Square Error (RMSE). When evaluating out-of-distribution robustness, following Yao et al. (2022), we report both the average and worst-domain performance. All experiments are run over 3 seeds.
**Performance comparison.** The OOD robustness on the four datasets is shown in Table 2. As the table shows, our method achieves superior performance in most cases. For the small datasets, the pretrained model plays an important role in improving generalization, since data scarcity is the key problem there. The distribution alignment with \( L_{svd} \) enhances OOD robustness as well. In addition, we find that \( L_{std} \) is also robust to spurious correlations, as shown by the results on RCF-MNIST; we conjecture that the spurious correlation increases the variance in the proportion, which \( L_{std} \) suppresses.
| Method | RCF-MNIST RMSE↓ | Crime RMSE↓ (Avg.) | Crime RMSE↓ (Worst) | SkillCraft RMSE↓ (Avg.) | SkillCraft RMSE↓ (Worst) | DTI $R$↑ (Avg.) | DTI $R$↑ (Worst) |
|--------|-----------------|--------------------|---------------------|-------------------------|--------------------------|-----------------|------------------|
| ERM† | 0.162 | 0.134 | 0.173 | 5.887 | 10.182 | 0.464 | 0.429 |
| ERM* | 0.160 | 0.135 | 0.172 | 6.151 | 7.916 | 0.475 | 0.438 |
| IRM† (Arjovsky et al., 2019) | 0.153 | **0.127** | 0.155 | 5.937 | 7.849 | 0.478 | 0.432 |
| IB-IRM† (Ahuja et al., 2021) | 0.167 | **0.127** | **0.153** | 6.055 | 7.650 | 0.479 | 0.435 |
| CORAL† (Li et al., 2018) | 0.163 | 0.133 | 0.166 | 6.353 | 8.272 | 0.483 | 0.432 |
| GroupDRO† (Sagawa et al., 2019) | 0.232 | 0.138 | 0.168 | 6.155 | 8.131 | 0.442 | 0.407 |
| mixup† (Zhang et al., 2018) | 0.176 | 0.128 | 0.154 | 5.764 | 9.206 | 0.465 | 0.437 |
| C-Mixup* (Yao et al., 2022) | 0.153 | 0.131 | 0.166 | 5.860 | 8.795 | 0.483 | 0.449 |
| FT | 0.146 | 0.129 | 0.156 | 5.592 | 8.358 | 0.479 | 0.458 |
| FT+RML | 0.167 | 0.129 | **0.153** | 5.496 | 8.249 | 0.480 | 0.446 |
| FT+RankSim | 0.239 | 0.135 | 0.164 | 5.324 | 7.577 | 0.479 | 0.464 |
| FT+FDS | 0.147 | 0.129 | 0.160 | **5.201** | **6.908** | 0.479 | 0.445 |
| FT+Lstd | **0.145** | 0.128 | 0.157 | 5.592 | 8.355 | **0.491** | **0.479** |
| FT+Lsvd | 0.147 | 0.129 | 0.159 | 5.591 | 8.358 | 0.479 | 0.444 |
| FT+Lstd+Lsvd | 0.146 | **0.127** | 0.161 | 5.592 | 8.355 | 0.484 | 0.469 |
Table 2: Comparison on out-of-distribution datasets. The bold number is the best result and the underlined number is the second best. The results of methods with † are reported by Yao et al. (2022). The results of methods with * are reproduced based on the source code of Yao et al. (2022).
**Results on the MPI3D dataset** We analyze our method under the domain generalization setting on MPI3D, a benchmark dataset for domain adaptation in regression. We adopt a domain generalization protocol (Gulrajani & Lopez-Paz, 2021), evaluating our method on three generalization tasks on MPI3D: \( rl, rc \rightarrow t \); \( t, rc \rightarrow rl \); \( rl, t \rightarrow rc \). We use the test sets of the source distributions as validation sets for model selection. All experiments are run over three random seeds, and we follow Cha et al. (2021) for random-seed and hyper-parameter-seed selection. The evaluation metrics for this task are Mean Squared Error (MSE) and Mean Absolute Error (MAE). Since MPI3D is a large dataset containing 1,036,800 examples, we do not use our fine-tuning method on this dataset, and no parameters are frozen during training.
The MSE and MAE results are shown in Tables 3 and 4, respectively. The comparison between \( L_{std} \) and RML (Chao et al., 2022) shows the advantage of treating the proportion as a fluctuating map instead of a constant. In addition, the performance shows that the alignment with \( L_{svd} \) can significantly improve the generalization ability in some cases. We also provide the results of alignment with the nuclear norm \( \|\cdot\|_* \) and the Frobenius norm \( \|\cdot\|_F \). By norm equivalence (Cai et al., 2016), \( \|\cdot\|_2 \leq \|\cdot\|_F \leq \|\cdot\|_* \), the spectral norm gives the tightest upper bound, which may explain why \( L_{svd} \) achieves the best performance among them.
| Method (MPI3D, MSE) | rc | rl | t | Avg. |
|---------------------|----|----|---|------|
| ERM | 0.08132 ± 9.6e⁻⁶ | 0.09819 ± 6.2e⁻⁵ | 0.007004 ± 5.4e⁻⁹ | 0.06217 |
| C-Mixup | 0.09226 ± 4.2e⁻⁵ | 0.10495 ± 1.8e⁻⁴ | 0.014453 ± 5.9e⁻⁸ | 0.07055 |
| RML | 0.08596 ± 5.6e⁻⁵ | 0.09412 ± 6.3e⁻⁶ | 0.020132 ± 1.3e⁻⁶ | 0.06676 |
| Nuclear-norm | 0.09490 ± 8.1e⁻⁵ | 0.09536 ± 5.8e⁻⁴ | 0.011940 ± 3.1e⁻⁶ | 0.06740 |
| F-norm | 0.09565 ± 1.2e⁻⁵ | 0.10548 ± 2.4e⁻² | 0.008318 ± 4.0e⁻⁶ | 0.06981 |
| $L_{std}$ | 0.07984 ± 8.2e⁻⁵ | 0.09624 ± 2.7e⁻⁵ | 0.006996 ± 1.1e⁻⁸ | 0.06103 |
| $L_{svd}$ | **0.07942 ± 4.9e⁻⁵** | 0.08355 ± 1.1e⁻⁴ | **0.006016 ± 1.3e⁻⁷** | 0.05633 |
| $L_{std} + L_{svd}$ | 0.07956 ± 4.0e⁻⁵ | **0.07885 ± 2.0e⁻⁵** | 0.006017 ± 1.6e⁻⁷ | **0.05481** |
Table 3: Comparison on the MPI3D dataset under the domain generalization setting with the MSE metric. The bold number is the best result. The unseen domains are labeled on top.
| Method (MPI3D, MAE) | rc | rl | t | Avg. |
|---------------------|----|----|---|------|
| ERM | 0.3163 ± 3.3e⁻⁵ | 0.3511 ± 3.2e⁻⁴ | 0.0922 ± 6.7e⁻⁷ | 0.2532 |
| C-Mixup | 0.3367 ± 1.6e⁻⁴ | 0.3666 ± 5.5e⁻⁴ | 0.1296 ± 5.1e⁻⁶ | 0.2776 |
| RML | 0.3315 ± 1.3e⁻⁴ | 0.3448 ± 1.8e⁻⁵ | 0.1661 ± 4.4e⁻⁵ | 0.2808 |
| Nuclear-norm | 0.3270 ± 2.4e⁻⁴ | 0.3313 ± 1.7e⁻³ | 0.1181 ± 5.3e⁻⁵ | 0.2588 |
| F-norm | 0.3226 ± 4.6e⁻⁵ | 0.3411 ± 6.2e⁻³ | 0.0985 ± 2.2e⁻⁵ | 0.2541 |
| $L_{std}$ | 0.3149 ± 3.2e⁻⁴ | 0.3478 ± 1.3e⁻⁴ | 0.0919 ± 9.6e⁻⁷ | 0.2515 |
| $L_{svd}$ | **0.3016 ± 9.8e⁻⁵** | 0.3225 ± 5.0e⁻⁴ | **0.0856 ± 1.1e⁻⁵** | 0.2366 |
| $L_{std} + L_{svd}$ | 0.3058 ± 1.0e⁻⁴ | **0.3137 ± 1.1e⁻⁴** | 0.0858 ± 1.4e⁻⁴ | **0.2351** |
Table 4: Comparison on the MPI3D dataset under the domain generalization setting with the MAE metric. The bold number is the best result. The unseen domains are labeled on top.
### 4.4 Hyper-parameter Sensitivity Analysis
We analyze the hyper-parameters \( \alpha \) and \( \beta \) in Equation 4. Since the value of \( L_{mse} \) is always much smaller than the values of \( L_{std} \) and \( L_{svd} \), we expect the two hyper-parameters to be smaller than 1. We therefore analyze the performance trends of \( L_{std} \) and \( L_{svd} \) with \( \alpha \) and \( \beta \) in the range \( [10^{-9}, 10^{4}] \). Figure 2 shows the sensitivity of the hyper-parameters on the in-distribution dataset No2 and the out-of-distribution dataset DTI, respectively. We find that \( L_{std} \) is much more sensitive, since its value is usually much larger than those of \( L_{mse} \) and \( L_{svd} \). Further analysis of \( \beta \) on the MPI3D dataset is given in the Appendix.

### 5 Conclusion
This paper discusses two main objectives required to improve generalization in regression. For in-distribution generalization, we propose a relational contrastive learning loss based on the assumption that the distances between features should be correlated with the distances between their corresponding labels, treating the proportion between feature distance and label distance as a mapping function. Through this loss, we show that the variance in the embedding space decreases, resulting in more discriminative patterns. To improve the transferability of the model to out-of-distribution data, we propose to augment the original data and then align the synthesized and real distributions by minimizing the difference between the spectral norms of their features.
REFERENCES
Arthur S. Agatston, Warren R. Janowitz, Frank J. Hildner, Noel R. Zusmer, Manuel Viamonte, and Robert Detrano. Quantification of coronary artery calcium using ultrafast computed tomography. *Journal of the American College of Cardiology*, 15(4):827–832, 1990. ISSN 0735-1097. doi: https://doi.org/10.1016/0735-1097(90)90282-T. URL https://www.sciencedirect.com/science/article/pii/073510979090282T.
Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. *Proceeding of the Conference on Neural Information Processing Systems*, 34:3438–3450, 2021.
Isabela Albuquerque, João Monteiro, Mohammad Darvishi, Tiago H Falk, and Ioannis Mitliagkas. Generalizing to unseen domains via distribution matching. *arXiv preprint arXiv:1911.00804*, 2019.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a “siamese” time delay neural network. *Proceeding of the Conference on Neural Information Processing Systems*, 6, 1993.
T Tony Cai, Zhao Ren, and Harrison H Zhou. Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation. 2016.
Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. Swad: Domain generalization by seeking flat minima. *Proceeding of the Conference on Neural Information Processing Systems*, 34:22405–22418, 2021.
Hanqing Chao, Jiajin Zhang, and Pingkun Yan. Regression metric loss: Learning a semantic representation space for medical images. *arXiv preprint arXiv:2207.05231*, 2022.
Weifeng Chen, Zhao Fu, Dawei Yang, and Jia Deng. Single-image depth perception in the wild. *Proceeding of the Conference on Neural Information Processing Systems*, 29, 2016.
Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 1081–1090. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/chen19i.html.
Xinyang Chen, Sinan Wang, Jianmin Wang, and Mingsheng Long. Representation subspace distance for domain adaptation regression. In *International conference on machine learning*, pp. 1749–1759, 2021.
Corinna Cortes and Mehryar Mohri. Domain adaptation in regression. In *International Conference on Algorithmic Learning Theory*, pp. 308–323. Springer, 2011.
Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 4690–4699, 2019.
V. Gilsanz and O. Ratib. *Hand Bone Age: A Digital Atlas of Skeletal Maturity*. Springer Berlin Heidelberg, 2011. ISBN 9783642237621. URL https://books.google.com.au/books?id=uUo6z5_XfyIC.
Muhammad Waleed Gondal, Manuel Wuthrich, Djordje Miladinovic, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, and Stefan Bauer. On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/d97d404b6119214e4a7018391195240a-Paper.pdf.
|
qYoIuM095A
|
Is the method introduced robust to disruptions? It is one of the most important questions to think about when claiming the usefulness. If there is SKU shutting down or suddenly high labor shortage (as in the pandemic), I doubt if the method can quickly capture the dynamics and still have good performance.
|
GNN-based Probabilistic Supply and Inventory Predictions in Supply Chain Networks
Anonymous authors
Paper under double-blind review
Abstract
Successful supply chain optimization must mitigate imbalances between supply and demand over time. While accurate demand prediction is essential for supply planning, it alone does not suffice. The key to successful supply planning for optimal and viable execution lies in maximizing predictability for both demand and supply throughout an execution horizon. Therefore, enhancing the accuracy of supply predictions is imperative to create an attainable supply plan that matches demand without overstocking or understocking. However, in complex supply chain networks with numerous nodes and edges, accurate supply predictions are challenging due to dynamic node interactions, cascading supply delays, resource availability, production and logistic capabilities. Consequently, supply executions often deviate from their initial plans. To address this, we present the Graph-based Supply Prediction (GSP) probabilistic model. Our attention-based graph neural network (GNN) model predicts supplies, inventory, and imbalances using graph-structured historical data, demand forecasting, and original supply plan inputs. The experiments, conducted using historical data from a global consumer goods company’s large-scale supply chain, demonstrate that GSP significantly improves supply and inventory prediction accuracy, potentially offering supply plan corrections to optimize executions.
1 Introduction
At the heart of supply chain optimization lies the task of mitigating the risks associated with imbalances between supply and demand across a supply chain network over time (Kleindorfer & Saad (2005); Pang & Tomlin (2008); Ivanov (2022); Abdel-Basset et al. (2019)). While accurate demand prediction is a critical factor in optimizing supply planning, it alone does not suffice (Manuj & Mentzer (2008)). Successful supply planning in the context of optimal and viable execution hinges on the ability to achieve the highest degree of predictability for both demand and supply, spanning the execution horizon. Making better supply and inventory predictions is imperative to create an attainable supply plan that effectively matches demand without overstocking or understocking inventory. While considerable efforts have been directed toward improving demand prediction in isolation (Salinas et al. (2020); Makridakis et al. (2022); Hyndman & Athanasopoulos (2018)), comparatively less attention has been given to predicting supply events (quantity/timing), lead times, and inventory levels as an integrated whole, accounting for all variabilities in demand and supply.
However, in complex, large-scale supply chain networks with a multitude of nodes and edges (Figure 1), making accurate supply and inventory predictions across all nodes and edges poses significant challenges. This difficulty arises from the need to account for dynamic interactions among interconnected nodes in the network, the ripple effects of supply delays through multi-hop nodes, resource availability, production capacity, and logistics capabilities.
Typically, planned shipments from the Sales and Operations Planning (S&OP) process, which consider limited sets of states, conditions, and constraints, tend to be notably inaccurate and unsuitable for direct execution. Consequently, organizations need to bridge the gap between S&OP shipment plans and day-to-day operational shipment activities, aligning them with financial objectives and tackling supply and demand challenges (Hippold (2019); Hainey (2022)). Therefore, supply chain operators frequently encounter situations where they cannot act on the planned shipments due to stock shortages and delayed upstream supplies. Accurate predictions of the actually executed events (quantity/timing) for planned shipments support operators in achieving better alignment with day-to-day operational objectives.
To address the shipment event, supply and inventory prediction problems, this paper introduces the Graph-based Supply Prediction (GSP) probabilistic model which is tailored for situations where planned shipment and forecasted demand inputs are available over the specified time horizon. We employ attention-based graph neural networks (GNN) to make network-wide consistent and simultaneous predictions for incoming and outgoing supplies and inventory, relying on sequential graph-structured snapshots of historical supply chain data, demand forecasting, and shipment plan inputs.
In addition, we propose a model-training loss function that combines cumulative supply prediction errors with inventory prediction errors. To elaborate further, incorporating cumulative supply prediction errors as illustrated in Figure 3 into the loss function deals with the prediction inaccuracies attributed to the unpredictable supply variability in both the quantity and timing of shipment events throughout the time horizon. Note that in an edge with a consistent lead time, the cumulative outgoing supply quantity from the source node over the time horizon has a proportional impact on the inventory level at the destination node.
Moreover, it is worth noting that frequently, our primary objective extends beyond merely predicting the supply itself. Instead, we often focus on forecasting key performance metrics such as service level, fill rate, and the total economic cost associated with imbalanced risks (e.g., lost sales and excess inventory). In this context, it is vital to train the model to predict both inventory levels and supply coherently, taking into account the provided demand prediction inputs. The inclusion of inventory prediction errors in the loss function accounts for comprehensive impacts of demand and supply variabilities on the accuracy of inventory predictions, since the inventory of each node is a result of the cumulative sums of incoming supply, outgoing supply, and demand.
The experiments, conducted using historical data from a complex supply chain network of a global consumer goods company, demonstrate that GSP achieves substantial enhancements in both supply and inventory prediction accuracy. These improvements have the potential to drive corrective adjustments in supply plans, ultimately leading to more optimal executions.
In summary, our contributions can be highlighted in three aspects.
---
1 Consider a scenario with actual quantities [0, 100, 0, 0] over 4 timesteps, and three predictive models M1 = [0, 0, 100, 0], M2 = [100, 0, 0, 0], and M3 = [0, 0, 0, 0]: M1 and M2 have a wMAPE of 200%, while M3 has 100%. We emphasize that wMAPE (weighted Mean Absolute Percentage Error: the sum of the absolute quantity errors, i.e., samples of |predicted quantity - actual quantity|, divided by the sum of the actual quantities) is unsuitable for event prediction evaluation, since it does not account for both quantity and timing predictions in a single metric. Thus, we introduce sMACE (scaled Mean Absolute Cumulative Error), defined as the mean of the absolute cumulative quantity errors (samples of |cumulative predicted quantity - cumulative actual quantity|) divided by the mean of the actual quantities; refer to Appendix C for a detailed discussion. Under sMACE, when an actual event quantity (a step increase in the cumulative actual quantity function in Figure 3) is predicted to occur \(d\) timesteps earlier (\(d < 0\)) or later (\(d > 0\)), it contributes an error equal to the quantity multiplied by \(|d|\). In terms of sMACE, both M1 and M2 score 100%, whereas M3 scores 300%.
• We present a novel GNN-based generalized method for predicting event quantity/timing in graph-structured problem contexts where there are planned events without a one-to-one mapping to actual events. This implies that there is no ground truth as labeled data for the quantity/timing variables in model learning; our approach instead relies on labeled data in the form of edge-level and/or node-level aggregated quantities at specific temporal granularities, such as daily or weekly periods.
• We propose a novel prediction error metric called sMACE (scaled Mean Absolute Cumulative Error, detailed in Appendix C) to assess prediction inaccuracies caused by unpredictable supply variability in both the quantity and timing of events. We also employ the concept of sMACE in the loss function for training our GNN-based event delta prediction models.
• In the context of supply chain networks, we build GSP models using the GNN-based generalized method, specifically developed to predict outgoing shipment events (quantity/timing). We demonstrate network-wide, reliable supply and inventory predictions while adhering to node-level supply capacity constraints, conducting experiments with a global consumer goods company's large-scale real supply chain data.
The structure of this paper is as follows: In Section 2, we conduct a comparison with related work. Section 3 describes our problem, followed by the presentation of our generalized method in Section 4. Section 5 introduces the GSP models, while Section 6 showcases our experiment results. The paper concludes with Section 7.
2 RELATED WORK
Predicting supply events requires unique techniques, distinct from individual node-level event prediction methods for intermittent demand forecasting (Kourentzes (2014); Petropoulos & Kourentzes (2015)), such as Croston’s method and its modifications (Croston (1972); Syntetos & Boylan (2005); Teunter & Sani (2009)), and deep renewal process (Turkmen et al. (2019)). When dealing with supply event predictions across an entire network, it is crucial to account for the cascading effect of events as they traverse various network nodes and edges with distinct topological structures. Our approach implements a GNN-based iterative inference technique to maintain prediction consistency at all node and edge levels across the entire network.
Our problem differs significantly from traditional lead time prediction or ETA (estimated time of arrival) prediction scenarios (Viellechner & Spinler (2020); Mariappan et al. (2023); Hathikal et al. (2020); Lingitz et al. (2018); Gyulai et al. (2018)). In those contexts, the target variable is typically defined as the time duration between the planned order initiation and its eventual arrival at the destination. In our GSP probabilistic approach, we broaden the scope to more detailed predictions: our primary focus lies in forecasting both the timing and quantity of shipments originating from the source node. Subsequently, we integrate these shipment predictions with the lead time predictions between the outgoing shipment and the reception event. These comprehensive predictions from both the source and destination nodes' viewpoints (Figure 4) play a pivotal role in ensuring a coherent and causally explainable reconstruction of node-level inventory predictions (Figure 5), drawing upon edge-level shipment quantity/timing predictions and node-level demand forecasts.
3 Problem Description
We consider supply chain networks where each SKU is associated with its own distinct topological graph. Let \( G = (V, E) \) denote a directional graph for the SKU-specific supply chain network containing a collection of nodes \( V = \{1, \ldots, n\} \) with diverse node types (e.g., plants, distribution centers, retailers) and edges \( E \subseteq V \times V \) where \((v, w) \in E\) denotes an edge from a source node \( v \) to a destination node \( w \). The nodes and edges in \( G \) at time \( t \) are associated with a set of node feature vectors \( x^t = \{x^t_v \in \mathbb{R}^{d_1} : v \in V\} \in \mathbb{R}^{|V| \times d_1} \) and a set of edge feature vectors \( a^t = \{a^t_{vw} \in \mathbb{R}^{d_2} : (v, w) \in E\} \in \mathbb{R}^{|E| \times d_2} \) where \( d_1 \) is the number of node features and \( d_2 \) is the number of edge features. In addition, let \( G^R = (V, E^R) \) denote the reverse graph of \( G \) with the same nodes and features but with the directions of all edges reversed, that is, \( E^R = \{(w, v) | (v, w) \in E\} \).
Our models make supply and inventory predictions in supply chain networks where input data on planned (scheduled) shipment events, weekly demand forecasts, and other planning features over the specified time horizon are available. Note that the planned shipment (quantity/timing) events constitute the original supply plan acquired from the enterprise's planning system, which is frequently infeasible to execute as originally planned.
The objective is to minimize both (a) the average absolute errors of edge-level daily outgoing supply cumulative predictions and (b) the average absolute errors of node-level weekly inventory predictions. More precisely, our evaluation metrics are defined in a normalized manner as follows.
(a) daily outgoing supply prediction sMACE
\[
sMACE = \frac{\sum_{(t,(v,w)) \sim D} \sum_{h \in H} |\hat{Q}_{vw}^{day,t}(h) - Q_{vw}^{day,t}(h)|}{\sum_{(t,(v,w)) \sim D} \sum_{h \in H} q_{vw}^{day,t}(h)} \times 100\% \tag{1}
\]
where the predicted and actual cumulative daily quantity vectors of outgoing supply are defined as:
\[
\hat{Q}_{vw}^{day,t}(h) = \sum_{d=0}^{h} \hat{q}_{vw}^{day,t}(d), \quad Q_{vw}^{day,t}(h) = \sum_{d=0}^{h} q_{vw}^{day,t}(d) \tag{2}
\]
Here, \( \hat{q}_{vw}^{day,t}(d) \) is the predicted daily quantity of outgoing supply for day \( d \) in a time horizon \( |H| \) days, \( H = \{0, 1, 2, \ldots, |H| - 1\} \) from source node \( v \) to destination node \( w \) at the prediction time \( t \). Also, \( q_{vw}^{day,t}(d) \) is the actual daily quantity (as ground truth) of outgoing supply for day \( t + d \) on the edge from \( v \) to \( w \).
(b) weekly inventory prediction wMAPE
\[
wMAPE = \frac{\sum_{(t,v) \sim D} \sum_{w \in W} |\hat{I}_v^{week,t}(w) - I_v^{week,t}(w)|}{\sum_{(t,v) \sim D} \sum_{w \in W} I_v^{week,t}(w)} \times 100\% \tag{3}
\]
where \( \hat{I}_v^{week,t}(w) \) and \( I_v^{week,t}(w) \), respectively, are the weekly predicted and actual inventory for node \( v \) over the weekly time horizon \( W = \{0, 1, 2, \ldots, |W| - 1\} \) at week \( t \). Here, \( |W| = |H|/7 \).
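As a sanity check, here is a NumPy sketch of both metrics for a single edge/node and prediction time; the dataset-level metrics of Equations 1 and 3 aggregate numerators and denominators over all samples. The final lines reproduce the M1/M2/M3 example from footnote 1.

```python
import numpy as np

def smace(q_pred_daily, q_actual_daily):
    """sMACE (Eq. 1) for one edge: summed |cumulative predicted - cumulative
    actual| over the horizon, normalized by the total actual quantity."""
    err = np.abs(np.cumsum(q_pred_daily) - np.cumsum(q_actual_daily)).sum()
    return err / np.sum(q_actual_daily) * 100.0

def wmape(i_pred_weekly, i_actual_weekly):
    """Weekly inventory wMAPE (Eq. 3) for one node."""
    diff = np.abs(np.asarray(i_pred_weekly) - np.asarray(i_actual_weekly))
    return diff.sum() / np.sum(i_actual_weekly) * 100.0

actual = [0, 100, 0, 0]
for name, pred in [("M1", [0, 0, 100, 0]), ("M2", [100, 0, 0, 0]),
                   ("M3", [0, 0, 0, 0])]:
    print(name, smace(pred, actual))   # M1: 100.0, M2: 100.0, M3: 300.0
```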
4 Generalized Method for Event Delta Predictions
Before introducing GSP models for shipment event predictions in supply chain networks, we describe the generalized method for GNN-based event quantity/timing delta predictions in graph-structured problem contexts where there is information about planned events, but there is no one-to-one mapping with actual events for these planned events. Therefore, in our problem setting, there is no ground truth available as labeled data for quantity/timing variables in model learning. However, we assume that labeled data (ground truth) is obtainable in the form of edge-level and/or node-level aggregated quantities at specific temporal granularities, such as in daily or weekly periods.
---
For the ease of notation and without loss of generality, we drop the SKU-specific subscript from most notations, unless we explicitly include it.
4.1 Graph Attention Networks
GNNs are highly effective in representing graph-structured data by generating graph node embeddings (Bronstein et al., 2021; Hamilton et al., 2017; Kipf & Welling, 2017). These embeddings are created through graph convolutions (Figure 2) that consider both node-level and edge-level features, making them particularly well-suited for capturing the complex dynamics of demand and supply interactions among interconnected nodes via edges.
To achieve this, we harness the power of Graph Attention Networks (GAT), a specific type of GNN equipped with dynamic attention mechanisms that empower our model with the capability to dynamically allocate different weights to diverse connections, drawing on information from the present states of both nodes and edges (Brody et al., 2022; Veličković et al., 2018).
In GAT, dynamic attentions enable a node to selectively attend to its neighboring nodes that are highly relevant in its supply predictions. Provided both node features \( h^{(0)} = x \) and edge features \( e = a \) for \( G = (V, E) \), \( \text{GAT}_{XA} \) (Appendix A) denotes the \( L \)-layered graph convolution network that makes iterative updates \( l = 1, 2, ..., L \) and calculates
\[
h^{(L)} = \text{GAT}_{XA}(h^{(0)}, e, G; \phi_{XA}) = \{h_v^{(L)} \in \mathbb{R}^B : v \in V\} \in \mathbb{R}^{B \times |V|} \tag{4}
\]
where \( \phi_{XA} \) is the set of all learned parameters of \( \text{GAT}_{XA} \) and \( B \) is the embedding dimension. The graph node embedding \( u \) for a given pair of node features \( x \) and edge features \( a \) is defined as:
\[
u \overset{\text{def}}{=} \text{emb}_{XA}(x, a; \phi_{XA_f}, \phi_{XA_b}) = [u_f \| u_b] \in \mathbb{R}^{2B \times |V|} \tag{5}
\]
where \( u_f = \text{GAT}_{XA}(x, a, G; \phi_{XA_f}) \) and \( u_b = \text{GAT}_{XA}(x, a, G^R; \phi_{XA_b}) \). Note that \( u_f \) and \( u_b \) each are calculated using the forward-directional (original) graph and the backward-directional (reverse) graph, respectively. The \( u_f \) embedding vectors from graph \( G \) incorporate the competing and collaborative dynamics of multiple source nodes to a destination node into their own attentions, whereas the \( u_b \) embedding vectors from reversed graph \( G^R \) encompass the interactions of multiple destination nodes to a source node into their respective attentions.
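A minimal PyG sketch of the bidirectional embedding \( \text{emb}_{XA} \) is shown below; the layer count and dimensions are illustrative, and the edge features are shared between the forward and reversed graphs as in Equation 5.

```python
import torch
from torch_geometric.nn import GATv2Conv

class BiGATEmbedding(torch.nn.Module):
    """Sketch of emb_XA (Eq. 5): one GATv2 stack on the forward graph and
    one on the reversed graph, concatenated per node as u = [u_f || u_b]."""
    def __init__(self, d_node, d_edge, dim_b, num_layers=2):
        super().__init__()
        def stack():
            return torch.nn.ModuleList(
                [GATv2Conv(d_node if l == 0 else dim_b, dim_b, edge_dim=d_edge)
                 for l in range(num_layers)])
        self.fwd, self.bwd = stack(), stack()

    def forward(self, x, edge_index, edge_attr):
        rev_index = edge_index.flip(0)       # reverse all edge directions (G^R)
        h_f, h_b = x, x
        for conv_f, conv_b in zip(self.fwd, self.bwd):
            h_f = conv_f(h_f, edge_index, edge_attr).relu()
            h_b = conv_b(h_b, rev_index, edge_attr).relu()
        return torch.cat([h_f, h_b], dim=-1)
```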
4.2 Event Quantity/Timing Delta Prediction for Each Planned Event
In this section, we present the probabilistic event prediction model that leverages GAT graph embedding to predict the timings and quantities of events at every edge, provided the information on the timings and quantities of originally planned events.
Specifically, let \( \tau_{vw}^{i|t} \in H = \{0, 1, 2, ..., |H| - 1\} \) be the originally planned time of event \( i \) elapsed from the current time \( t \). That is, the event \( i \) on the edge from \( v \) to \( w \) is planned to occur at time \( t + \tau_{vw}^{i|t} \). Also, we denote the planned outgoing shipment quantity of event \( i \) by \( a_{vw}^{i|t} \in \mathbb{R} \).
Given \( \tau_{vw}^{i|t} \) and \( a_{vw}^{i|t} \) of the \( i \)-th planned event (\( i \in \{1, 2, ..., |H|\} \))\(^3\) on the edge from \( v \) to \( w \) at prediction time \( t \), we predict \( P_{vw}^{i|t}(\delta) \) and \( \hat{a}_{vw}^{i|t} \). \( P_{vw}^{i|t}(\delta) \) is the predicted discrete probability distribution of the time difference variable \( \delta \) (in days) between the actual event time and the originally planned event time, where \( \delta \in \Delta = \{-7, -6, ..., -1, 0, 1, ..., 6, 7\} \) and \( p_{vw}^{i|t}(\delta) \) is the probability that the predicted time difference equals \( \delta \). Also, the predicted shipment quantity \( \hat{a}_{vw}^{i|t} = r_{vw}^{i|t} \cdot a_{vw}^{i|t} \in \mathbb{R} \) is calculated using \( r_{vw}^{i|t} \in (0, 2] \) as a predicted multiplier.\(^4\) Figure 6 illustrates the comparisons among predicted, planned, and actual shipments. We denote
\[
r^{i|t} = \{r_{vw}^{i|t} \mid (v, w) \in E\} \in \mathbb{R}^{|E|}; \quad P^{i|t}(\delta) = \{P_{vw}^{i|t}(\delta) \mid (v, w) \in E\} \in \mathbb{R}^{|E| \times |\Delta|}. \tag{6}
\]
\(^3\)The predictions of \( |H| \) events cover the maximum possible count of daily events over the \( |H| \) days.
\(^4\)Alternatively, the predicted shipment quantity may be modeled to depend on \( \delta \): \( \hat{a}_{vw}^{i|t}(\delta) = \max\{0, (r_{vw}^{i|t} + \delta s_{vw}^{i|t})\} \, a_{vw}^{i|t} \in \mathbb{R} \), where \( r_{vw}^{i|t} \in (0, 2] \) and \( s_{vw}^{i|t} \in [-1, 1] \) are predicted multipliers. Note that \( s_{vw}^{i|t} \) may capture a potential correlation between the time difference (earlier or later than the planned shipment time) and the shipment quantity (larger or smaller than the on-time quantity). Furthermore, \( P(\hat{a}_{vw}^{i|t}, \delta_{vw}^{i|t}) = P(\hat{a}_{vw}^{i|t} \mid \delta_{vw}^{i|t}) P(\delta_{vw}^{i|t}) \); an extension employing Bayesian regression allows modeling the quantity prediction as the conditional probability \( P(\hat{a}_{vw}^{i|t} \mid \delta_{vw}^{i|t}) \).
For each $i$, the GAT-based prediction model $M$ predicts $i$-th event’s
$$ (r^{i|t}, p^{i|t}(\delta)) = M(x^t, a^t(i)) \tag{7} $$
at time $t$, taking edge-level features $a^t(i) = \{a_{vw}^t(i) | (v,w) \in E\}$ and node-level features $x^t = \{x_v^t | v \in V\}$ as inputs. For simplicity, we assume that node-level features are dependent only on the prediction time $t$, not the prediction target event $i$. For the edge-level features, as the default setting, we use
$$ a_{vw}^t(i) = \{\tau_{vw}^{i|t}, a_{vw}^{i|t}\} \cup \{\tau_{vw,\text{hist}}^{-k|t}, a_{vw,\text{hist}}^{-k|t} | k = 0, 1, 2, ..., K - 1\} \tag{8} $$
where $\tau_{vw,\text{hist}}^{-k|t}$ and $a_{vw,\text{hist}}^{-k|t}$ represents historical shipment event times and quantities, with $k = 0$ denoting the most recent event before time $t$.
To predict $r^{i|t}$ and $P^{i|t}(\delta)$, the model $M$ begins by calculating GAT embeddings $u_f$ and $u_b$, taking $x^t$ and $a^t(i)$, as outlined in Equation 5. Then, using the graph node embeddings $u_v = [u_{f,v} \| u_{b,v}]$ and $u_w = [u_{f,w} \| u_{b,w}]$ for nodes $v$ and $w$,
$$ r_{vw}^{i|t} = \text{mlp}_r([u_v \| u_w]) \in (0, 2] \tag{9} $$
where $\text{mlp}_r$ is a multilayer feedforward network with an output of sigmoid multiplied by 2.0. Also,
$$ P_{vw}^{i|t}(\delta) = \text{GumbelSoftmax}[\text{mlp}_p([u_v \| u_w])] \in [0, 1]^{|\Delta|} \tag{10} $$
where $\text{mlp}_p$ is a multilayer feedforward network and $\text{GumbelSoftmax}$ (Figure 7) is to sample from a categorical distribution in the forward pass and be differentiable in backprop (Jang et al., 2016).
Note that the predicted time of an event, \( \tau_{vw}^{i|t} + \delta \), must always be zero or a positive integer. Thus, we only allow \( \delta \geq -\tau_{vw}^{i|t} \) and reassign the invalid probability mass via \( p_{vw}^{i|t}(0) := p_{vw}^{i|t}(0) + \sum_{\delta' < -\tau_{vw}^{i|t}} p_{vw}^{i|t}(\delta') \).
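The two prediction heads of Equations 9 and 10 can be sketched as follows; the hidden sizes and Gumbel-softmax temperature are illustrative, and the reassignment of invalid \( \delta \) mass is omitted.

```python
import torch
import torch.nn.functional as F

class EventDeltaHead(torch.nn.Module):
    """Sketch of the per-event heads (Eqs. 9-10): a quantity multiplier
    r in (0, 2] and a Gumbel-softmax distribution over timing deltas."""
    def __init__(self, emb_dim, n_delta=15, hidden=64, tau=1.0):
        super().__init__()
        self.tau = tau
        self.mlp_r = torch.nn.Sequential(
            torch.nn.Linear(2 * emb_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))
        self.mlp_p = torch.nn.Sequential(
            torch.nn.Linear(2 * emb_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_delta))

    def forward(self, u_v, u_w):
        z = torch.cat([u_v, u_w], dim=-1)          # [u_v || u_w]
        r = 2.0 * torch.sigmoid(self.mlp_r(z))     # multiplier in (0, 2]
        # hard=True: categorical sample in the forward pass,
        # differentiable in backprop (Jang et al., 2016)
        p_delta = F.gumbel_softmax(self.mlp_p(z), tau=self.tau, hard=True)
        return r, p_delta
```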
### 4.3 Predicted Event Aggregation into Edge-Level Quantities over Time
In this paper, we exhibit the aggregation of the event predictions into the vector of daily event quantities over a defined time horizon ($|H|$ days, $H = \{0, 1, 2, ..., |H| - 1\}$), denoted as $\hat{q}_{vw}^{\text{day}, t} \in \mathbb{R}^{|H|}$ on the edge from node $v$ to node $w$ at the prediction time $t$ (= the start of day $t$). However, our methodology can be flexibly applied to time granularities other than the daily level.
For any event time $t' \in H$ elapsed from the current time $t$, $e(t') \in \mathbb{R}^{|H|}$ is the standard basis vector $[0, ..., 0, 1, 0, ..., 0]$ with a 1 at position $t' + 1$. It is worth mentioning that the probability distribution of the planned event time is described as $e(\tau_{vw}^{i|t}) \in \mathbb{R}^{|H|}$, which assigns the full probability 1 to the element corresponding to the planned event time $\tau_{vw}^{i|t}$.
Using $p_{vw}^{i|t}(\delta)$ and $\tau_{vw}^{i|t}$, we calculate the probability distribution of the predicted event time $\hat{\tau}_{vw}^{i|t}$ over the time horizon $H$ by
$$ \pi_{vw}^{i|t} = \sum_{\delta \in \Delta} p_{vw}^{i|t}(\delta) e(\tau_{vw}^{i|t} + \delta) \in \mathbb{R}^{|H|}. \tag{11} $$
---
5 At the prediction time $t$ (= the start of day $t$), we designate the prediction for the same day $t$ as the forecasted timestep 0 (or $h = 0$) prediction. As a result, the prediction made for the final day of the $|H|$ day horizon is referred to as the forecasted timestep $|H| - 1$ (or $h = |H| - 1$) prediction.
6 Since the minimum value of $t'$ is zero, the first element of vector $e(t')$ corresponds to when $t' = 0$. Also, for ease of performing mathematical operations below, we set $e(t')$ to the zero vector $0$ if $t' \notin H$.
Figure 8: An Illustration of $H = 14$ Day Prediction with Planned Shipments on Days 4, 7, 12
Then, the predicted quantity distribution vector of event $i$ over the time horizon $H$ is
$$\hat{q}_{vw}^{i|t} = r_{vw}^{i|t} a_{vw}^{i|t} \pi_{vw}^{i|t} = r_{vw}^{i|t} a_{vw}^{i|t} \sum_{\delta \in \Delta} p_{vw}^{i|t}(\delta) e(\tau_{vw}^{i|t} + \delta) \in \mathbb{R}^{|H|}. \quad (12)$$
The predicted daily quantity vector is computed as:
$$\hat{q}_{vw}^{day,t} = \sum_{i \in A_{vw}^t} \hat{q}_{vw}^{i|t} \in \mathbb{R}^{|H|} \quad (13)$$
where $A_{vw}^t = \{ i \in \mathbb{N} \mid \tau_{vw}^{i|t} \in H \}$ is the set of all planned events at time $t$ over the time horizon $H$.
The predicted and actual cumulative daily quantity vectors are defined as:
$$\hat{Q}_{vw}^{day,t} = \Big\{ \sum_{d=0}^{h} \hat{q}_{vw}^{day,t}(d) \;\Big|\; h \in H \Big\}, \quad Q_{vw}^{day,t} = \Big\{ \sum_{d=0}^{h} q_{vw}^{day,t}(d) \;\Big|\; h \in H \Big\} \quad (14)$$
Note that $\hat{q}_{vw}^{day,t}(d)$ is the predicted daily quantity for day $d$ in $H$ on the edge from $v$ to $w$ at the prediction time $t$. Also, $q_{vw}^{day,t}(d)$ is the actual daily quantity (as ground truth) for day $t + d$ on the edge from $v$ to $w$.
The set of parameters of the prediction model is \( \theta_M = \{\phi_{XA_f}, \phi_{XA_b}, \phi_{\text{mlp}_r}, \phi_{\text{mlp}_p}\} \). Figure 8 illustrates the calculation of the predicted vectors over \( H = 14 \) days for given \( t \) and \( (v, w) \).
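A minimal sketch of the aggregation in Equations 11-13 (and the cumulative vector of Equation 14) for one edge:

```python
import torch

def daily_quantity_vector(planned_times, planned_qty, r, p_delta,
                          horizon=28, deltas=range(-7, 8)):
    """Eqs. 11-13: spread each event's predicted quantity r_i * a_i over
    the horizon according to its timing distribution, then sum over events."""
    q_hat = torch.zeros(horizon)
    for tau_i, a_i, r_i, p_i in zip(planned_times, planned_qty, r, p_delta):
        for k, d in enumerate(deltas):
            t = tau_i + d
            if 0 <= t < horizon:        # e(t') is the zero vector outside H
                q_hat[t] += r_i * a_i * p_i[k]
    return q_hat

# Eq. 14 then follows as: Q_hat = torch.cumsum(q_hat, dim=0)
```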
### 4.4 Predicted Event Aggregation into Node-Level Quantities Over Time
Suppose that there are node-level labeled quantities (as ground truth) that we aim to incorporate into our model training for event quantity/timing delta predictions, along with the edge-level labeled quantities. Also, we assume that there exists a pre-determined process model $Z$ that takes edge-level predicted quantity vectors $\{\hat{q}_{vw}^{day,t} \mid (v,w) \in E\}$ as inputs and predicts node-level aggregated quantity vectors $\{\hat{I}_v^{week,t} \mid v \in G\}$ in a different time granularity (week for an illustration here) as outputs where $\hat{I}_v^{week,t} = \{\hat{I}_w^{t+w} \mid w \in W\}$ for the weekly time horizon $W = \{0, 1, 2, ..., |W| - 1\}$ and $|W| = |H|/7$. That is,
$$\{\hat{I}_v^{week,t}\} = Z(\{\hat{q}_{vw}^{day,t}\}) \quad (15)$$
We denote the node-level labeled quantities (ground truth) by \( I_v^{week,t} = \{I_v^{t+w, actual} \mid w \in W\} \), where \( I_v^{t+w, actual} \) is the actual quantity at node \( v \) at the start of week \( t + w \).
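For intuition, one plausible (simplified) form of \( Z \) is a weekly inventory balance, sketched below; the paper's actual \( Z \) (Appendix D) additionally propagates upstream shipments through probabilistic lead-time predictions, which we omit here.

```python
def inventory_rollup(i_start, incoming_daily, outgoing_daily, demand_weekly):
    """Hedged sketch of a process model Z (Eq. 15) for one node:
    a weekly inventory balance over |W| weeks."""
    inv, level = [], i_start
    for w, demand_w in enumerate(demand_weekly):
        days = range(7 * w, 7 * (w + 1))
        level += sum(incoming_daily[d] for d in days)   # receptions this week
        level -= sum(outgoing_daily[d] for d in days)   # shipments this week
        level -= demand_w                               # demand this week
        inv.append(level)
    return inv  # predicted inventory at the end of each week
```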
### 4.5 Loss Function
Let $\theta_M$ be the set of parameters of prediction model $M(x,a ; \theta_M)$. The loss function combines edge-level cumulative outgoing supply prediction errors with node-level inventory prediction errors.
$$L(\theta_M) = (1-\alpha) \mathbb{E}_{(t,(v,w)) \sim D}[\|\hat{Q}_{vw}^{day,t} - Q_{vw}^{day,t}\|^2_2] + \alpha \mathbb{E}_{(t,v) \sim D}[\|\hat{I}_v^{week,t} - I_v^{week,t}\|^2_2] \quad (16)$$
where $\alpha \in [0, 1]$ is a hyperparameter that balances the two loss components. Setting $\alpha = 0$ trains the model exclusively with edge-level labeled quantities; conversely, $\alpha = 1$ trains it solely with node-level labeled quantities. We optimize $\theta_M$ by minimizing the loss $L(\theta_M)$ over the dataset $D$ through backpropagation.
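A direct sketch of Equation 16 on batched tensors:

```python
def gsp_loss(Q_hat, Q, I_hat, I, alpha=0.5):
    """Eq. 16: (1 - alpha) * edge-level cumulative-supply error
    + alpha * node-level inventory error. Q_* are (batch, |H|) cumulative
    daily supply vectors; I_* are (batch, |W|) weekly inventory vectors."""
    supply_err = ((Q_hat - Q) ** 2).sum(dim=-1).mean()      # edge level
    inventory_err = ((I_hat - I) ** 2).sum(dim=-1).mean()   # node level
    return (1 - alpha) * supply_err + alpha * inventory_err
```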
5 GRAPH-BASED SUPPLY PREDICTION (GSP) MODELS
We build the GSP models by applying the generalized method outlined in Section 4 to supply chain network scenarios where the goal is to predict outgoing shipment events (quantity/timing). Due to limitations in the underlying supply chain processes and systems for tracking actual shipments against planned shipments, it is often not possible to establish a direct one-to-one correspondence between actual and planned shipments. Consequently, we lack precise information regarding the exact discrepancies between actual and planned shipments in terms of both shipment event quantity and timing. Nevertheless, the edge-level daily outgoing supply quantities (\( \{q_{vw}^{day,t}\} \)) and the node-level weekly inventory quantities (\( \{I_v^{week,t}\} \)) are available as ground-truth labels for model training and evaluation. The prediction model \( M \) in Equation 7 takes as inputs the edge-level features, configured by default in Equation 8, along with the node-level features defined as:
$$x_v^t = \{I_{v,actual}^t\} \cup \{I_{v,plan}^w | w = 1, 2, ..., |W| - 1\} \cup \{D_{v,pred}^w, S_{v,plan}^w, A_{v,plan}^w | w = 0, 1, ..., |W| - 1\}. \quad (17)$$
$I_{v,actual}^t$ is the actual inventory at the start of time $t$. Also, $I_{v,plan}^w$, $D_{v,pred}^w$, $S_{v,plan}^w$, and $A_{v,plan}^w$ are planned inventory, predicted demand, planned incoming supply, and planned outgoing supply for week $w$ as predicted at the start of week $t$, respectively. In this paper we suppose that these weekly demand forecasting and planning features are obtained from the organization’s existing system of planning.
We compute the node-level weekly predicted inventory vectors \( \{\hat{I}_v^{week,t}\} = Z(\{\hat{q}_{vw}^{day,t}\}) \) using the inventory prediction process model \( Z \), described in Appendix D. Note that the model \( Z \) internally relies on an edge-level model for probabilistic discrete lead time prediction, \( P_{LT,vw}^{t+h}(k) \), where \( p_{LT,vw}^{t+h}(k) \) is the probability that the predicted lead time equals \( k \in H \) for any outgoing supply on day \( t + h \).
In the context of network-wide supply event predictions, it becomes essential to predict the sequence of events propagating through nodes and edges. This requires connected predictions that span across nodes and edges in the network. Appendix E illustrates our approach for iterative and simultaneous predictions that allow for satisfying each node’s supply capacity constraint, which is affected by the supply executions of other neighboring nodes in a cascading manner.
6 EXPERIMENT RESULTS
| Method | (a) Daily Outgoing Supply sMACE | (b) Weekly Inventory wMAPE | (c) Weekly Constraint Error $\kappa$ |
|-------------------------|---------------------------------|----------------------------|-------------------------------------|
| GSP ($\alpha=0.0$) | 102.90 ± 0.1 % | 31.03 ± 0.3 % | 3.99 ± 0.03 % |
| GSP ($\alpha=0.5$) | **99.94 ± 0.1%** | **30.43 ± 0.3 %** | **3.69 ± 0.02 %** |
| GSP ($\alpha=1.0$) | 100.01 ± 0.2 % | 30.44 ± 0.4 % | 3.69 ± 0.04 % |
| Planned Shipments | 279.72 % | 34.65 ± 0.5 % | 3.67 ± 0.03 % |
| Croston’s Method | 1541.79 % | 55.06 ± 0.4 % | **3.47 ± 0.02 %** |
Table 1: Prediction Performances (Mean ± SD, Calculated Across 4 Weeks)
Our experiments were performed using the historical data from a global consumer goods company with complex supply chain networks. The GSP models were trained on 18 months of historical data from March 2021 to August 2022, validated on the subsequent 4 months, and then tested on a 4-month hold-out dataset from 2023. The dataset covers 51 high-volume SKUs. Each SKU-week combination has a unique network graph topology. The networks include a varying number of nodes, from 2 to 50, and a varying number of edges, from 1 to 91.
In all our experiments with different methods, we consistently employed 4-week ahead demand predictions at the SKU/node level with wMAPEs ranging from 90% to 105% by forecasted timestep.
We also utilized the 4-week-ahead short-term planned shipments (\( |H| = 28 \), \( |W| = 4 \)) and a separately trained edge-level model for probabilistic lead time prediction, \( P_{LT,vw}^{t+h}(k) \). (A future extension of this work may explore training the shipment event and lead time prediction models simultaneously.) The GSP models were trained across all SKUs, each having individual quantity ranges as well as unique topological networks that may vary over time. All relevant variables, such as demand, supply, and inventory, are scaled to a uniform unit using a single SKU-specific scaler based on the maximum planned shipment quantity observed during the training period. The GNN embedding layer, GAT\(_{XA}\), was constructed using the PyTorch Geometric (PyG) library (Fey & Lenssen, 2019; Paszke et al., 2019), using GATv2Conv (Brody et al., 2022). Using the validation dataset, we determined the optimal epoch and hyperparameters yielding the minimum validation loss. More details on model training are available in Appendix B. We compared the prediction performance of the GSP models against the planned shipments (original plan) and Croston's method on the testing dataset.
The planned shipments exhibited significant inaccuracies when compared to the actual shipments, and were frequently not executed as intended. Croston's method uses historical estimates of the average interval between non-zero shipment events and the average non-zero shipment quantity for every edge, with a smoothing parameter of 0.9. For GSP predictions, we use hard = True for the Gumbel-Softmax, enabling it to function as a categorical probability distribution. We predicted a 4-week (28-day) horizon starting from each SKU/day network snapshot in the dataset, generating 20 probabilistic MC predictions. Table 1 compares the methods in terms of the performance metrics: (a) daily outgoing supply prediction sMACE, defined in Equation 1, (b) weekly inventory prediction wMAPE, defined in Equation 3, and (c) weekly constraint violation error $\kappa$ (a normalized metric indicating the extent to which the node-level weekly outgoing supply prediction quantities surpass the available supply capacity constraint, defined in Appendix E).
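To make the sampling step concrete, the sketch below draws hard (one-hot) event samples with PyTorch's straight-through Gumbel-Softmax, mirroring the hard = True setting described above; the tensor shapes and number of classes are illustrative assumptions rather than the paper's exact head design.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: 91 edges x 28 horizon days x 10 candidate classes.
logits = torch.randn(91, 28, 10)

mc_samples = []
for _ in range(20):  # 20 probabilistic Monte Carlo predictions
    # hard=True returns one-hot samples in the forward pass while keeping
    # differentiable soft probabilities for the backward pass.
    mc_samples.append(F.gumbel_softmax(logits, tau=1.0, hard=True, dim=-1))

one_hot = torch.stack(mc_samples)  # (20, 91, 28, 10)
print(one_hot.sum(-1).unique())    # tensor([1.]): exactly one class per draw
```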
The results in Table 1 demonstrate that the GSP models substantially outperformed the planned shipments and Croston's method in terms of both the edge-level daily outgoing supply sMACE and the node-level weekly inventory wMAPE. We also noted that Croston's method involved a substantial bias of 13.9%, whereas GSP ($\alpha = 0.5$) had a bias of 0.70% and planned shipments had a bias of 0.99%. Moreover, GSP ($\alpha = 0.5$, iteration = 0) showed minimal bias and yielded a $\kappa$ value comparable to that of the planned shipments.
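For reference, here is a minimal sketch of the Croston (1972) baseline from Table 1, with the smoothing parameter of 0.9 used in our experiments; the initialization convention below is our assumption.

```python
def croston(x, alpha=0.9):
    """Croston's method for an intermittent shipment series: exponentially
    smooth the non-zero shipment sizes and the intervals between them;
    the per-period forecast is smoothed size / smoothed interval."""
    z_hat = p_hat = None
    q = 1  # periods since the last non-zero shipment
    for z in x:
        if z > 0:
            if z_hat is None:  # initialize on the first event (an assumption)
                z_hat, p_hat = z, q
            else:
                z_hat = alpha * z + (1 - alpha) * z_hat
                p_hat = alpha * q + (1 - alpha) * p_hat
            q = 1
        else:
            q += 1
    return 0.0 if z_hat is None else z_hat / p_hat

print(croston([0, 0, 120, 0, 0, 0, 80, 0]))  # ~21.5 units per period
```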
Through our comprehensive investigations, we confirmed that the strong performance of GSP can mainly be attributed to its capability to detect systematic deviation patterns observed historically between actual and planned shipment events. These encompass deviations in both quantity (lower/higher) and timing (earlier/later) that are evident at the SKU/edge level. For instance, certain nodes consistently delayed their outgoing supply events relative to the originally planned timings when recurring delays occurred in incoming supply from parent nodes. GSP effectively leveraged these patterns to enhance predictions of future event quantities and timings, all while adhering to constraints. We also observed that GSP predictions based on GNN embeddings could incorporate the demand and inventory statuses of both source and destination nodes, as well as neighboring nodes. It is noteworthy that GSP ($\alpha = 0.5$), equipped with a loss function that evenly weights errors in edge-level cumulative outgoing supply predictions and node-level inventory predictions, showed the best overall performance.
7 CONCLUSION
Although accurate demand forecasting is a fundamental component of supply chain optimization, it is not the only necessity. Obtaining precise and reliable predictions for supply and inventory across all nodes and edges in supply chain networks is equally essential and challenging. We address this with a GNN-based probabilistic approach that attains network-wide, reliable supply predictions while adhering to node-level supply capacity constraints. The designed loss function for model training, which combines cumulative supply prediction errors and inventory prediction errors, delivers robust performance on critical metrics. Our research plays a pivotal role in charting the path for the integration of AI within the supply chain domain.
REFERENCES
Mohamed Abdel-Basset, M Gunasekaran, Mai Mohamed, and Naveen Chilamkurti. A framework for risk assessment, management and evaluation: Economic tool for quantifying risks in supply chain. *Future Generation Computer Systems*, 90(1):489–502, 2019.
Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In *International Conference on Learning Representations*, 2022.
Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. *arXiv preprint arXiv:2104.13478*, 2021.
John D Croston. Forecasting and stock control for intermittent demands. *Journal of the Operational Research Society*, 23(3):289–303, 1972.
Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. *arXiv preprint arXiv:1903.02428*, 2019.
Dávid Gyulai, András Pfeiffer, Gábor Nick, Viola Gallina, Wilfried Sihn, and László Monostori. Lead time prediction in a flow-shop environment with analytical and machine learning approaches. *IFAC-PapersOnLine*, 51(11):1029–1034, 2018.
Steven Hainey. The rise of s&oe: Achieving organizational objectives with improved execution. *Journal of Business Forecasting*, 41(3), 2022.
William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. *arXiv preprint arXiv:1709.05584*, 2017.
Saraswathi Hathikal, Sung Hoon Chung, and Martin Karczewski. Prediction of ocean import shipment lead time using machine learning methods. *SN Applied Sciences*, 2(7):1272, 2020.
Sarah Hippold. How to set up s&oe in supply chain planning. *Business Insights & Trends, Gartner*, 2019. URL https://www.gartner.com/smarterwithgartner/how-to-set-up-soe-in-supply-chain-planning.
Rob J Hyndman and George Athanasopoulos. *Forecasting: principles and practice*. OTexts, 2018.
Dmitry Ivanov. Viable supply chain model: integrating agility, resilience and sustainability perspectives—lessons from and thinking beyond the covid-19 pandemic. *Annals of operations research*, 319(1):1411–1431, 2022.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. *arXiv preprint arXiv:1611.01144*, 2016.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In *International Conference on Learning Representations*, 2017.
Paul R Kleindorfer and Germaine H Saad. Managing disruption risks in supply chains. *Production and operations management*, 14(1):53–68, 2005.
Nikolaos Kourentzes. On intermittent demand model optimisation and selection. *International Journal of Production Economics*, 156:180–190, 2014.
Lukas Lingitz, Viola Gallina, Fazel Ansari, Dávid Gyulai, András Pfeiffer, Wilfried Sihn, and László Monostori. Lead time prediction using machine learning algorithms: A case study by a semiconductor manufacturer. *Procedia CIRP*, 72:1051–1056, 2018.
Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. M5 accuracy competition: Results, findings, and conclusions. *International Journal of Forecasting*, 38(4):1346–1364, 2022.
Ila Manuj and John T Mentzer. Global supply chain risk management strategies. *International Journal of Physical Distribution & Logistics Management*, 38(3):192–223, 2008.
Mahesh Babu Mariappan, Kanniga Devi, Yegnanarayanan Venkataraman, Ming K Lim, and Panneerselvam Theivendren. Using ai and ml to predict shipment times of therapeutics, diagnostics and vaccines in e-pharmacy supply chains during covid-19 pandemic. *The International Journal of Logistics Management*, 34(2):390–416, 2023.
|
R6AA1NZhLd
|
Is this somehow a result of the different mechanisms used to induce topographic organization? (Bilinear vs. Matrix-vector product?). Similarly, is there a reason why these components appear to have patchier organization in the BERT setting as well?
|
TOPOFORMER: BRAIN-LIKE TOPOGRAPHIC ORGANIZATION IN TRANSFORMER LANGUAGE MODELS THROUGH SPATIAL QUERYING AND REWEIGHTING
Anonymous authors
Paper under double-blind review
ABSTRACT
Spatial functional organization is a hallmark of biological brains: neurons are arranged topographically according to their response properties, at multiple scales. In contrast, representations within most machine learning models lack spatial biases, instead manifesting as disorganized vector spaces that are difficult to visualize and interpret. Here, we propose a novel form of self-attention that turns Transformers into “Topoformers” with topographic organization. We introduce spatial querying — where keys and queries are arranged on 2D grids, and local pools of queries are associated with a given key — and spatial reweighting, where we convert the standard fully connected layer of self-attention into a locally connected layer. We first demonstrate the feasibility of our approach by training a 1-layer Topoformer on a sentiment classification task. Training with spatial querying encourages topographic organization in the queries and keys, and spatial reweighting separately encourages topographic organization in the values and self-attention outputs. We then apply the Topoformer motifs at scale, training a BERT architecture with a masked language modeling objective. We find that the topographic variant performs on par with a non-topographic control model on NLP benchmarks, yet produces interpretable topographic organization as evaluated via eight different linguistic test suites. Finally, analyzing an fMRI dataset of human brain responses to a large set of naturalistic sentences, we demonstrate alignment between low-dimensional topographic variability in the Topoformer and human brain language network. Scaling up Topoformers further holds promise for greater interpretability in NLP research, and for more accurate models of the organization of linguistic information in the human brain.
1 INTRODUCTION
Biological brains are spatially organized, containing category-selective areas (Kanwisher, 2010), broad feature maps that tile individual cortical areas (Konkle & Oliva, 2012; Bao et al., 2020) and the cortex more broadly (Huth et al., 2012, 2016; Margulies et al., 2016), and large-scale distributed networks (Yeo et al., 2011; Braga et al., 2020). Particularly within brain regions, this spatial organization is one way in which the human brain, a vastly complex “black box”, is more naïvely interpretable than modern deep neural networks (DNNs), whose units have functional properties organized without simple spatial priors. Recent work in computational neuroscience has bridged this gap in DNNs trained for vision, demonstrating that local smoothness or wiring cost minimization objectives can be incorporated into DNNs to encourage the development of smooth functional organization of responses, which can then be easily visualized in 2D (Lee et al., 2020a; Blauch et al., 2022; Keller & Welling, 2021; Doshi & Konkle, 2021; Margalit et al., 2023; Lu et al., 2023), building upon classic approaches (Kohonen, 1982; Jacobs & Jordan, 1992). In addition to simulating topographic properties within regions, topographic vision models have also explained the hierarchical organization of topographic information from earlier to later visual areas (Margalit et al., 2023; Lu et al., 2023). One topographic vision model has even demonstrated the emergence of spatial clusters corresponding to ventral, lateral, and dorsal streams of the visual system (Finzi et al., 2021). Collectively, topographic vision models are helping to unify a computational understanding of the functional organization of the visual system.
However, topographical priors have not yet been built into models of linguistic processing, despite tremendous progress in the development of natural language processing (NLP) models and their application in cognitive science and neuroscience [Wilcox et al., 2020; Gauthier et al., 2020; Schrimpf et al., 2021; Caucheteux & King, 2022; Goldstein et al., 2022a; Tuckute et al., 2023]. In NLP, Transformer language models (LMs) have undoubtedly established themselves as the leading architecture for language tasks [Vaswani et al., 2017; Radford et al., 2018; Brown et al., 2020; OpenAI, 2023], displaying human-like language understanding and generation for the first time. In cognitive science and neuroscience, these LMs have emerged as the most quantitatively accurate models of human language processing. They generate probabilities of upcoming words that explain reading behavior of humans [Wilcox et al., 2020; Merkx & Frank, 2021; Shain et al., 2022], and their internal activations can explain the neural signals of humans reading or listening to naturalistic sentences or stories at the granularity of fMRI voxels and intracranial recordings [Schrimpf et al., 2021; Goldstein et al., 2022b; Tang et al., 2023]. Despite the success of these LMs, they remain difficult to interpret, and incomplete as models of brain function.
In the current work, our aim is to bridge these gaps by inducing a topographic organization of features within the Transformer architecture. We employ local-connectivity based approaches inspired by recent topographic vision models [Blauch et al., 2022; Keller & Welling, 2021] to the language domain, asking whether we can obtain topographic organization of linguistic representations within a Transformer architecture via spatial constraints. To do so, we introduce two computational motifs — spatial querying and spatial reweighting — to the self-attention layer, which encourage the development of topographic organization in separate components of the self-attention layer. We call Transformer models employing these constraints Topoformers. We show that we can scale these topographic motifs to a large BERT Topoformer model trained with a masked language modeling objective, and that topographic organization develops within each hierarchical layer of the network, without significantly compromising task performance. We interpret this topography using a novel suite of 8 semantic and syntactic tests. Last, we demonstrate that the topographic representations of the Topoformer can be aligned with the topographic representations of the human functionally-defined language network in multiple subjects. In summary, our work demonstrates for the first time that Transformer models can be trained to exhibit topographic organization similar to the human brain, and paves the way for further interpretability work leveraging spatial priors.
2 METHODS
In this study, we propose two approaches for enforcing topographic organization in a Transformer layer. Both methods rely on the use of local communication to introduce spatial constraints that encourage the formation of spatially organized linguistic representations.
2.1 SPATIAL QUERYING
We begin with the standard self-attention operation used by Vaswani et al. (2017). In this formulation, every token embedding is projected onto a set of queries, keys, and values, and the query of a given token is associated with a corresponding key of all other tokens. Spatial querying works by associating a local pool of queries with a given key. The locality is parameterized with a width parameter $r_{SQ}$ determining the fraction of units in a given key’s circular receptive field (RF). For simplicity, we examine the case of a simple non-weighted sum of queries. This is achieved by inserting a binary intermediate matrix $M \in \mathbb{R}^{d \times d}$, where $d$ is the embedding dimension, and the columns of $M$ determine the spatial pool of queries associating with a given key. Essentially, this makes it such that the dot product attention between a given pair of tokens is not between individual queries and keys, but local pools of queries and individual keys. This biases the representations of queries to be locally smooth, and the representations of keys to have a spatial correspondence with the queries. The local pooling of spatial querying can be visualized with a simple example, assuming a model dimension of $d = 3$, and 2 tokens.
Figure 1: Spatial querying and reweighting operations in the "Topoformer".
The first row shows standard querying operations in the attention module of a single-head Transformer, and the second row shows the spatial counterparts used in the Topoformer. Standard querying associates a single query dimension of token $i$ with a single key dimension of token $j$. In contrast, spatial querying associates a local pool of query dimensions with a given key dimension, through the intermediate local pooling matrix $M$. Standard (dense) reweighting applies a fully connected linear layer $W^O$ to the outputs of a single attention head (typically to combine the outputs of multiple attention heads). In our formulation, we use a locally connected layer $W^O_{\text{local}}$ in its place (spatial reweighting). While the figure illustrates querying for a pair of tokens and reweighting for a single token, when processing a full sequence, there is a 2D grid of the form shown here for each token. Each heatmap shows the second PC of responses (top: control model, bottom: SQR model).
$$QMK^T = \begin{pmatrix} Q_{1,1} & Q_{1,2} & Q_{1,3} \\ Q_{2,1} & Q_{2,2} & Q_{2,3} \end{pmatrix} \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} K_{1,1} & K_{2,1} \\ K_{1,2} & K_{2,2} \\ K_{1,3} & K_{2,3} \end{pmatrix}$$
$$= \begin{pmatrix} Q_{1,1} + Q_{1,2} & Q_{1,2} + Q_{1,3} & Q_{1,1} + Q_{1,3} \\ Q_{2,1} + Q_{2,2} & Q_{2,2} + Q_{2,3} & Q_{2,1} + Q_{2,3} \end{pmatrix} \begin{pmatrix} K_{1,1} & K_{2,1} \\ K_{1,2} & K_{2,2} \\ K_{1,3} & K_{2,3} \end{pmatrix}$$
(1)
We can see that, instead of the rows of the matrix multiplication containing individual queries, they now contain summed local pools of queries. This is the essence of spatial querying. The full self-attention equation with spatial querying (SQ) is given as follows:
$$\text{Attention}_{\text{SQ}}(Q, K, V) = \text{softmax} \left( \frac{QMK^T}{\sqrt{d_k}} \right) V W^O$$
(2)
For simplicity of visualization, we use a single attention head in the Topoformer implementation, but we retain the outer reweighting matrix $W^O$ used in multi-head attention [Vaswani et al., 2017]. Our motivation for using single-head attention is to ensure that the dominant functional organization occurs within a head rather than across heads; without constraints beyond Eq. 2, organization across heads would be non-topographic and would thus complicate interpretability and visualization. Although the model dimensionality could be any size, it is convenient in our implementation for it to be a perfect square, such that it can be reshaped into a $\sqrt{d} \times \sqrt{d}$ grid for visualization purposes. While we work with square grids, theoretically any 1D, 2D, or 3D arrangement of units could be used to define the spatial position of units. For a visual explanation, Figure 1 compares standard operations within a self-attention block with their spatial counterparts.
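A minimal PyTorch sketch of spatial querying follows. Building $M$ from the $k$ nearest grid neighbors is one plausible reading of the circular-RF parameterization; the exact distance metric and tie-breaking are assumptions.

```python
import torch

def pooling_matrix(d, r_sq):
    """Binary spatial-querying matrix M (d x d): column j selects the queries
    whose grid distance to key unit j is among the r_sq * d smallest."""
    g = int(d ** 0.5)  # units live on a sqrt(d) x sqrt(d) grid
    ys, xs = torch.meshgrid(torch.arange(g), torch.arange(g), indexing="ij")
    pos = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()  # (d, 2)
    dist = torch.cdist(pos, pos)                                    # (d, d)
    k = max(1, int(r_sq * d))
    idx = dist.topk(k, largest=False).indices       # k nearest units per key
    M = torch.zeros(d, d)
    M[idx, torch.arange(d).unsqueeze(1)] = 1.0      # column j pools around unit j
    return M

def attention_sq(Q, K, V, W_O, M):
    # Eq. 2: local pools of queries attend to individual keys
    scores = (Q @ M @ K.transpose(-2, -1)) / K.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ V @ W_O

d, T = 400, 8  # embedding dim (a perfect square) and number of tokens
Q, K, V = (torch.randn(T, d) for _ in range(3))
out = attention_sq(Q, K, V, torch.randn(d, d), pooling_matrix(d, r_sq=0.3))
print(out.shape)  # torch.Size([8, 400])
```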
2.2 Spatial reweighting
Spatial querying only imposes a topographic relationship between the queries and keys. To encourage the development of topographic organization in the values and self-attention outputs (hereafter fc\_out), we convert the outer reweighting matrix $W^O$ into a locally connected layer $W^O_{local}$. By using local connectivity in $W^O_{local}$, we encourage the model to learn more localized feature representations in the values and attention outputs (analogous to what spatial querying does for the queries and keys). We parameterize local connectivity using a width parameter $r_{SR}$ that determines the fraction of units within a given unit's circular receptive field (RF). Our locally connected layer $W^O_{local} \in \mathbb{R}^{d \times d}$ is situated in our network as follows:
$$\text{Attention}_{\text{SQR}}(Q, K, V) = \text{softmax}\left(\frac{QMK^T}{\sqrt{d_k}}\right)VW^O_{local}$$

(3)
Preliminary experiments demonstrated the need to use large positive weights to fully encourage the development of topographic organization. Thus, we initialize $W^O_{local} = |W^O_{init}| \cdot 10$, where $W^O_{init}$ is a standardly initialized PyTorch linear layer. This operation, denoted spatial reweighting, has the effect of enhancing local correlations, commonly viewed as a hallmark of topographic organization (Lee et al., 2020b; Blauch et al., 2022; Margalit et al., 2023). These excitatory feedforward connections mimic the dominant role of excitatory pyramidal neurons in between-area cortical communication in biological brains (Laszlo & Plaut, 2012; Blauch et al., 2022).
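The sketch below implements spatial reweighting as a masked dense layer; the receptive-field construction and the positive, scaled-up initialization reflect our reading of the text and are assumptions rather than the exact implementation.

```python
import torch

class LocallyConnectedReweight(torch.nn.Module):
    """Sketch of W^O_local: a d x d weight masked so each output unit only
    reads from units within its circular grid RF (fraction r_sr of units)."""

    def __init__(self, d, r_sr=0.1):
        super().__init__()
        g = int(d ** 0.5)
        ys, xs = torch.meshgrid(torch.arange(g), torch.arange(g), indexing="ij")
        pos = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
        dist = torch.cdist(pos, pos)
        k = max(1, int(r_sr * d))
        radius = dist.sort(dim=-1).values[:, k - 1 : k]   # per-unit RF radius
        self.register_buffer("mask", (dist <= radius).float())
        init = torch.empty(d, d)
        torch.nn.init.kaiming_uniform_(init, a=5 ** 0.5)  # nn.Linear default
        # Large positive initialization, our reading of |W_init| * 10
        self.weight = torch.nn.Parameter(init.abs() * 10)

    def forward(self, x):
        return x @ (self.weight * self.mask)

layer = LocallyConnectedReweight(d=400, r_sr=0.1)
print(layer(torch.randn(8, 400)).shape)  # torch.Size([8, 400])
```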
3 Results
3.1 Training a 1-layer Topoformer on a supervised task
We begin by training a 1-layer single-head encoder-only Topoformer (bidirectional attention) on the IMDB sentiment analysis dataset (Maas et al., 2011), in which movie reviews are classified as having a positive or negative overall sentiment. The local connectivity in the spatial querying and spatial reweighting operations is controlled through the hyperparameters $r_{SQ}$ and $r_{SR}$, which set the radius of the spatial receptive fields (RFs). We investigated the effect of different RF sizes in the Topoformer-SQR model, finding that smaller $r_{SQ}$ values yield better accuracy and topography, while the network is more robust to the $r_{SR}$ hyperparameter (see Appendix A.9 for details). In the following, we report results for an RF size of 0.3 for the SQ model and 0.1 for the SQR model. We set $d = 400$.
3.1.1 Topoformer-SQ
We first describe the results of a model using only spatial querying (Topoformer-SQ). Following training, Topoformer-SQ achieved an accuracy of 0.81 on the IMDB sentiment test set. In comparison, an identical 1-layer Transformer model without spatial querying achieved an accuracy of 0.83. To probe its topographic organization, we conducted selectivity analyses and principal component analysis (PCA) to investigate the unit activation patterns in the different layers, as shown in Figure 2 (see Appendix A.1, A.3 for details). Our selectivity analysis was designed to contrast the response magnitudes for positive and negative sentiment sentences. As expected, this analysis revealed a topographic organization in the key and query sublayers, but not in the values or fully-connected (fc\_out) outputs. We next performed PCA to assess generic forms of topographic variability in the model representations. We found that the weights of the first two principal components (PCs) exhibited a smooth topography in the keys and queries, with the second PC spatially aligned to the selectivity for both representations. This demonstrates that the network has learned to organize its dominant modes of variability spatially, a hallmark of topographic functional organization.
3.1.2 Topoformer-SQR
We next trained a model incorporating both spatial querying and reweighting (Topoformer-SQR). This model achieved an accuracy of 0.75 on the IMDB sentiment test set, slightly lower than the Topoformer-SQ model. We performed identical probing analyses to those in the previous section, highlighting the results for the values and fully-connected (fc\_out) representations in Figure 2B. We found that Topoformer-SQR exhibited more pronounced topographic organization in the values and fc\_out layers compared to Topoformer-SQ. This suggests that the local
Figure 2: Topographic organization across sublayers with spatial querying and reweighting.
A. Topoformer-SQ produces topography in the keys and queries, but not the values or self-attention outputs. Each column shows a different sublayer representation within a self-attention block (keys, queries, values, and fc\_out). The representations were obtained by averaging across the tokens in each sentence from the IMDB sentiment classification test set [Maas et al., 2011]. The first row shows selectivity for positive vs. negative sentiment sentences. The second and third rows show the PC weights for the first and second components, respectively. B. Topoformer-SQR produces topography in the values and self-attention outputs. The format is the same as for A., but for brevity we show only the values and self-attention outputs, as the keys and queries show a similar topographic organization from spatial querying.
connectivity matrix $W^O_{\text{local}}$ (see Figure 1A and Equation 3) successfully enforced a topographic correspondence between the values and attention outputs, as predicted.
3.2 Scaling up: Topoformer-BERT
| BERT Model | MNLI | SST-2 | STSB | RTE | QNLI | QQP | MRPC | CoLA | GLUE |
|------------|------|-------|------|-----|------|-----|------|------|------|
| multihead | 83.0/83.2 | 91.6 | 84.8 | 54.7 | 88.5 | 86.9 | 86.4 | 43.7 | 78.1 |
| 1-head | 81.1/81.5 | 90.0 | 82.1 | 51.2 | 87.6 | 86.7 | 84.8 | 47.5 | 76.9 |
| Topoformer | 80.1/80.1 | 90.9 | 75.1 | 51.2 | 86.6 | 86.0 | 81.5 | 46.3 | 75.31 |
Table 1: Comparison of GLUE performance between multi-head and single-head non-topographic BERT control models and Topoformer-BERT, each trained with the Cramming procedure [Geiping & Goldstein, 2022].
We next scaled up the Topoformer motifs to train a BERT model using a Masked Language Modeling objective (Topoformer-BERT). We followed the training paradigm introduced by [Geiping & Goldstein, 2022]. We trained a 16-layer BERT model on the Bookcorpus-Wikipedia dataset [Zhu et al., 2015] for 12 hours (see Appendix A.2 for more details). To provide a control for our Topoformer-BERT model, we trained a standard, non-topographic single-head BERT model with identical parameters and training procedure as our Topoformer-BERT (besides the lack of topographical motifs, Appendix A.7). To evaluate the models' performance on natural language tasks, we followed the General Language Understanding Evaluation (GLUE) benchmark [Wang et al., 2019] procedure as described in [Geiping & Goldstein, 2022], testing each model on all tasks besides WNLI, as in [Devlin et al., 2019]. Critically, we observed that the task performance of Topoformer-BERT on the GLUE benchmark was similar to that of the non-topographic model counterpart, suggesting that our added spatial constraints were not significantly hindering task performance (Table 1). Having established that Topoformer-BERT is capable of performing linguistic tasks, we move on to characterizing the topographic organization in Topoformer-BERT.
Figure 3: Topographic organization across all layers of Topoformer-BERT.
The generic topography statistic is given by Equation 4. A. Mean statistic \( t_g \) computed over a range of maximum distances B. Statistic \( t_{g,d} \) computed at each of several maximum distances for layer 15 C. Visualization of the first principal component (PC) weights for keys and fc_out sublayers.
First, we systematically quantified the topography of each of the 16 Topoformer-BERT layers using a statistic that relates the degree of correlation to the distance between pairs of units (Appendix A.3.2, Equation 4). A high value of the statistic \( t_g \) indicates that nearby units tend to be more correlated in their response pattern across sequences than distant units. We plot the mean statistic \( t_g \) over distance thresholds for all layers in Figure 3A, and the distance-threshold-specific statistic \( t_{g,d} \) for layer 15 (Figure 3B). In general, the keys and queries have the greatest degree of topographic organization, and the values show the weakest organization. Nevertheless, each is consistently above 0, driven by very local decay in correlation, as seen in the analysis of \( t_{g,d} \) across different maximum distances (Figure 3B).
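Since Equation 4 is not reproduced here, the sketch below shows one plausible instantiation of a distance-dependent topography statistic, comparing nearby-pair response correlations against distant-pair correlations; the paper's exact definition is in Appendix A.3.2.

```python
import torch

def topo_stat(resp, pos, d_max):
    """Mean pairwise response correlation of units closer than d_max minus
    that of more distant pairs. Positive values indicate that nearby units
    covary more, i.e., topographic organization.

    resp: (n_sentences, n_units) activations; pos: (n_units, 2) grid positions.
    """
    r = torch.corrcoef(resp.T)  # (n_units, n_units) unit-by-unit correlations
    dist = torch.cdist(pos.float(), pos.float())
    off_diag = ~torch.eye(len(pos), dtype=torch.bool)
    near = (dist <= d_max) & off_diag
    return r[near].mean() - r[off_diag & ~near].mean()

g = 20  # e.g., 400 units on a 20 x 20 grid
ys, xs = torch.meshgrid(torch.arange(g), torch.arange(g), indexing="ij")
pos = torch.stack([ys.flatten(), xs.flatten()], dim=1)
resp = torch.randn(500, g * g)
print(topo_stat(resp, pos, d_max=3.0))  # ~0 for unstructured responses
```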
Second, we took a step towards interpreting the emergent topographic structure in Topoformer-BERT. Specifically, we evaluated the selectivity of the unit activations to a set of eight test suites targeting different linguistic properties. All eight test suites consisted of 76 sentences each, and were either built on carefully designed minimal pair sentences from prior work (Gauthier et al., 2020; Hu et al., 2020; Misra et al., 2023) or were designed by us to control for the number of words and sentence surprisal (see Appendix A.8 for information and sentence examples for each test suite).
The first suite, Intactness, tests intact sentences versus their scrambled counterparts, thereby degrading both linguistic form (syntax) and meaning (semantics). The next suites test more targeted linguistic properties: Suites 2 through 4 test three different dimensions of meaning that have been extensively investigated in prior work, as specified below. Suite 2 tests Animacy (sentences with animate vs. inanimate meanings; Naselaris et al., 2009; Connolly et al., 2012; Konkle & Caramazza, 2013), suite 3 tests Concreteness (sentences with concrete vs. abstract meanings; Binder et al., 2005; Fiebach & Friederici, 2004), and suite 4 tests Visuomotor properties (sentences with visual vs. motor meanings; Desai et al., 2010; Lynott et al., 2020). The next suite (5) tests Semantic acceptability using minimal pair sentences (Conceptual Minimal Pair Sentences; Misra et al., 2023). The final three suites test three different dimensions of form using suites from SyntaxGym (Gauthier et al., 2020; Hu et al., 2020): Suite 6 tests Agreement (Subject-Verb Number Agreement), suite 7 tests Licensing (Reflexive Number Agreement), and suite 8 tests Garden-Path ambiguity (Verb Transitivity).
We performed selectivity analyses for these eight test suites (Figure 4). These analyses intuitively ask whether a given unit shows a preference for a particular contrast (e.g., animate versus inanimate sentences). We evinced a strong topographic organization according to broad semantic categories (top row, Figure 4), both in terms of significant topographic selectivity, as well as significant decodability of condition from the distributed pattern of activities. Intriguingly, the selectivity patterns were different across contrasts, implying that semantic distinctions are represented in topographic activity pattern differences across categories. It is important to note that despite the strongly significant selectivity, the mean activity patterns were highly similar across categories within each contrast (Appendix A.9.3, Figure 12): rather than indicating contrasting hot spots of activation for animate and inanimate content, for example, the rank order of unit activities tends to be similar...
Figure 4: Selectivity-based interpretation of topographic organization in Topoformer-BERT.
Each panel shows the selectivity of Topoformer-BERT layer 15 (keys), for a given contrast. Each test suite contains two contrasting conditions each with a set of sentences; unit activities are computed as the mean over tokens for each sentence, and the conditions are contrasted with a t-test. We plot the selectivity significance value (see Appendix A.3), where $s = 2$ corresponds to positive selectivity with $p = 0.01$, and $s = -2$ corresponds to negative selectivity with the same significance level. The first row contains sentences with natural variability, whereas the bottom row contains results from constructed minimal pairs differing in only one word across conditions. To ensure visibility of effects regardless of size, we used different statistic ranges for plotting of each row: $s = 10$ for the top row, and $s = 2$ for the bottom row.
across sentences, and selectivity across conditions is indicative of small yet distinct deviations from the dominant pattern. This is not unique to the Topoformer (Appendix A.9.3 Figure 12E.), however, the Topoformer allows a uniquely intuitive visualization of selectivity patterns in 2D, aiding interpretability.
Second, we turned to more controlled test suites constructed using pairs of minimally different sentences, in line with prior work in psycholinguistics and natural language processing (e.g., Linzen et al., 2016; Warstadt et al., 2020). As expected, the effects were weaker (bottom row, Figure 4) relative to sentences matched only on length and surprisal (top row). For example, we evidenced only weak topographic selectivities for sentences with correct syntactic agreement versus those with incorrect agreement. The weak selectivities were also reflected in the fact that the condition of interest could not be decoded accurately from the patterns of key activation.
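In spirit, the per-unit selectivity maps of Figure 4 amount to a two-sample t-test per unit; the signed $-\log_{10}(p)$ convention below is our reading of the caption ($s = 2$ at $p = 0.01$), with the exact definition in Appendix A.3.

```python
import numpy as np
from scipy import stats

def selectivity_map(act_a, act_b):
    """Per-unit selectivity for condition A vs. B as a signed significance
    value: s = sign(t) * -log10(p).

    act_a, act_b: (n_sentences, n_units) sentence-averaged activations.
    """
    t, p = stats.ttest_ind(act_a, act_b, axis=0)
    return np.sign(t) * -np.log10(p)

# Hypothetical test suite: 76 sentences per condition, 400 units
s = selectivity_map(np.random.randn(76, 400) + 0.1, np.random.randn(76, 400))
print(s.reshape(20, 20).shape)  # reshape to the unit grid for plotting
```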
In summary, we quantified the extent of generic topographical organization across all sublayers across the full Topoformer-BERT model, and honed in on selectivities of the topographic organization of the final layer. We analyzed eight different linguistic properties, finding strong effects for naturalistic semantic dimensions: animacy, concreteness, and visuomotor properties. Future work should investigate the organization of finer-grained semantic dimensions, as well as more extensive tests for syntactic knowledge.
3.3 MODELING THE TOPOGRAPHIC ORGANIZATION OF THE HUMAN LANGUAGE NETWORK
To assess the topographic organization of language in the human brain, we recorded brain responses using event-related fMRI from N=5 participants (4 female, native English speakers) during a sentence reading task. Participants read 1,000 6-word, corpus-extracted sentences that were selected to maximize semantic and stylistic diversity (see A.4). Following standard preprocessing, we used a set of five language masks ("parcels") that denote brain regions within which most or all individuals in prior studies (Fedorenko et al., 2010; Lipkin et al., 2022) showed activity for an extensively validated language localizer contrast between reading of sentences and non-word strings (Fedorenko et al., 2010). For each participant, within these anatomical parcels, we then computed individual functionally-defined regions by comparing responses to sentences and non-words, and taking all voxels with at least weak preferences for sentences ($t > 1$). We then restricted our analyses to these voxels, henceforth the "language network". To determine that the language network exhibits spatial smoothness, as in the model, we computed the generic topographic statistic $t_g$ (Equation 4) on
unsmoothed brain responses within the functionally-defined language network of each participant, splitting the network into 5 spatial subregions (see Appendix A.4). We compared this statistic to a null distribution, using shuffled brain responses, finding that the $t_g$ value for each cluster fell outside this null distribution, indicating significant decay in unit response correlations with distance.
To determine whether the topography of the human language network is linguistically meaningful and corresponds to that of Topoformer-BERT, we performed representational alignment using partial least squares singular value decomposition (PLS-SVD). Given z-scored brain responses $X$ and model embeddings $Y$, PLS-SVD finds joint low-dimensional embeddings $X_c$ and $Y_c$ by computing the SVD of the covariance matrix $X^T Y = U \Sigma V^T$, such that the left singular vectors $U = W_x$ are the component weights for the brain responses and the right singular vectors $V = W_y$ are the component weights for the model embeddings. The component scores are then given as $X_c = XW_x$ and $Y_c = YW_y$, where $X_c^{(i)}$ and $Y_c^{(i)}$ are the $i$-th aligned component scores.
Figure 5: Alignment of topographic representations in the human language network and Topoformer-BERT model. A. Illustration of the PLS-SVD alignment approach for a single participant and model sublayer representation. B. Alignment quantified across all 10 components, and each sublayer of Topoformer-BERT layer 15. The alignment of components is computed as the correlation of respective cross-validated PLS-SVD component scores for brain and model representations. Error bars are 95% confidence intervals over 5 participants.
Given the spatial organization of both brain and Topoformer responses, we can visualize the SVD weights of individual brain and model components $W_x^{(i)}$ and $W_y^{(i)}$, respectively, reshaped into their native spatial format. To compute the alignment of components, we perform a cross-validated analysis that ensures generalization, where SVD is computed using 80% of the sentences, and the scores are computed for the remaining 20% of the data. These scores can then be correlated across brain and model, for each dimension, to determine their alignment.
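A compact sketch of this cross-validated alignment procedure follows; the feature dimensions are illustrative, and details beyond the 80/20 split and score correlation described above are assumptions.

```python
import numpy as np

def plssvd_alignment(X, Y, n_comp=10, train_frac=0.8, seed=0):
    """PLS-SVD alignment between brain responses X and model embeddings Y
    (both n_sentences x n_features, z-scored): fit the SVD of X^T Y on a
    train split, project the held-out split, and correlate paired scores."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr = int(train_frac * len(X))
    tr, te = idx[:n_tr], idx[n_tr:]
    U, _, Vt = np.linalg.svd(X[tr].T @ Y[tr], full_matrices=False)
    Wx, Wy = U[:, :n_comp], Vt[:n_comp].T
    Xc, Yc = X[te] @ Wx, Y[te] @ Wy  # held-out component scores
    return [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(n_comp)]

X = np.random.randn(1000, 2000)  # e.g., 2000 language-network voxels
Y = np.random.randn(1000, 784)   # e.g., a 28 x 28 grid of model units
print(np.round(plssvd_alignment(X, Y), 3))
```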
Figure 5A plots example alignments between the first three brain and model components, using the first participant and the Topoformer-BERT layer 15 (final layer, zero-indexed) keys representation. We see that the first two components are strongly aligned, as well as strongly topographically organized in both model and brain spaces. The third component is not aligned, despite being spatially organized in each representational space. Figure 5B repeats this analysis for all participants and sublayers, using layer 15 again. In general, the first two components were significantly aligned for each sublayer, whereas later components were less likely to be aligned. This result demonstrates that the low-dimensional variability can be aligned in the topographic representations of the human language network and Topoformer language model. The fact that we used functionally-defined language regions suggests that there is spatial functional organization even within this relatively functionally homogeneous brain network (e.g., Blank & Fedorenko [2020]; Fedorenko et al. [2020]), rather than simply across different functional networks with heterogeneous response profiles, similar to the organization that emerges in the Topoformer model.
To determine the specificity of this alignment, we performed an identical analysis using a control network and untrained Topoformer-BERT variant (Appendix A.6). Alignment, as well as voxel encoding model prediction, was significantly greater between the trained Topoformer-BERT and language network compared to a non-language control network and an untrained model, highlighting the linguistic nature of the alignment.
4 DISCUSSION
Here, we introduced the first topographically organized Transformer language models, “Topoformers”. Across small and large models, we found that these spatial querying and reweighting operations produced topographic organization in Topoformer models trained on natural language processing tasks. This organization was revealed with specific hypotheses by contrasting different linguistic properties, as well as generically through PCA. Finally, analyzing brain responses to a large number of sentences in the human language network, we uncovered topographic variability with low-dimensional alignment to that found in the Topoformer-BERT model.
Introducing topography into language models may improve interpretability in NLP. We took some initial steps with our suite of tests, but the interpretability problem is far from solved. One issue is that of “polysemanticity,” whereby units are involved in the representation of several distinct concepts (Bricken et al. [2023]). Despite strong semantic selectivity, we found that Topoformer-BERT’s activations were highly overlapping across categories, similar to non-topographic models. While our 2D visualizations aided interpretability of selectivity, efforts to improve “monosemanticity” or to encourage disentangled representations (Higgins et al. [2021]) may prove fruitful in yielding even more interpretable topographic organization when combined with the Topoformer motifs. Additionally, topographically constrained sparse autoencoders might allow for greater interpretability of entangled representations in Topoformers (Cunningham et al. [2023]; Bricken et al. [2023]).
Introducing topography is also necessary to improve the biological realism of models of language processing in the human cortex, and understand how biological constraints (e.g., wiring cost) shape the emergence and organization of the language network. Future work should aim to scale the approach towards foundation-model level. One critical insight is that scale will not only improve the performance of these models, but also improve their brain predictivity (Schrimpf et al. [2021]). In parallel, a greater focus on biological plausibility may prove fruitful for basic neuroscientific investigation of the language network (Iain et al. [2023]).
This work marks the beginning of topographic modeling of language processing. We hope that other researchers will be persuaded to embrace topography in language models, and push the development and use of Topoformers along several new directions.
REFERENCES
John Ashburner and Karl J. Friston. Unified segmentation. *NeuroImage*, 26:839–851, 2005. doi: 10.1016/j.neuroimage.2005.02.018.
Pinglei Bao, Liang She, Mason Mcgill, and Doris Y. Tsao. A map of object space in primate inferotemporal cortex. *Nature*, (January 2019), 2020. ISSN 1476-4687. doi: 10.1038/s41586-020-2350-5.
J. R. Binder, C. F. Westbury, K. A. McKiernan, E. T. Possing, and D. A. Medler. Distinct brain systems for processing concrete and abstract concepts. *Journal of Cognitive Neuroscience*, 17(6):905–917, June 2005. ISSN 0898-929X. doi: 10.1162/0898929054021102.
Idan A. Blank and Evelina Fedorenko. No evidence for differences among language regions in their temporal receptive windows. *NeuroImage*, 219:116925, October 2020. ISSN 1053-8119. doi: 10.1016/j.neuroimage.2020.116925. URL https://www.sciencedirect.com/science/article/pii/S1053811920304110
Nicholas M Blauch, Marlene Behrmann, and David C Plaut. A connectivity-constrained computational account of topographic organization in primate high-level visual cortex. *Proceedings of the National Academy of Sciences of the United States of America*, 119(3), jan 2022. ISSN 0027-8424. doi: 10.1073/pnas.2112566119. URL http://www.pnas.org/lookup/doi/10.1073/pnas.2112566119
Rodrigo M. Braga, Lauren M. DiNicola, Hannah C. Becker, and Randy L. Buckner. Situating the left-lateralized language network in the broader organization of multiple specialized large-scale distributed networks. *Journal of Neurophysiology*, 124(5):1415–1448, November 2020. ISSN 1522-1598. doi: 10.1152/jn.00753.2019.
Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, and Christopher Olah. Towards monosemanticity: Decomposing language models with dictionary learning. *Transformer Circuits Thread*, 2023. https://transformer-circuits.pub/2023/monosemantic-features/index.html.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs], July 2020. URL http://arxiv.org/abs/2005.14165 arXiv: 2005.14165.
Charlotte Caucheteux and Jean-Rémi King. Brains and algorithms partially converge in natural language processing. *Communications Biology*, 5(1):134, December 2022. ISSN 2399-3642. doi: 10.1038/s42003-022-03036-1. URL https://www.nature.com/articles/s42003-022-03036-1
Andrew C Connolly, J Swaroop Guntupalli, Jason Gors, Michael Hanke, Yaroslav O Halchenko, Yu-Chien Wu, Hervé Abdi, and James V Haxby. The representation of biological classes in the human brain. *Journal of Neuroscience*, 32(8):2608–2618, 2012.
Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models, 2023.
Rutvik H Desai, Jeffrey R Binder, Lisa L Conant, and Mark S Seidenberg. Activation of sensory–motor areas in sentence comprehension. *Cerebral Cortex*, 20(2):468–478, 2010.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In *Proceedings of NAACL-HLT 2019*, June 2019.
|
2zoi9YI21Y
|
Figure 2 has (a), (b) and (c), and each subfigure has different columns. In the caption, there are words: left, middle and right. Do they correspond to the three subfigures or the columns inside different subfigures?
|
Towards a Self-Made Model: Zero-Shot Self-Supervised Purification for Adversarial Attacks
Anonymous authors
Paper under double-blind review
Abstract
Adversarial purification is an adversarial defense method without robustness training for the classifier and regardless of the form of attacks, aiming to remove the adversarial perturbations on the attacked images. Such methods can defend against various unseen threats without modifying the classifier in contrast to empirical defenses. However, previous purification methods require careful training of a strong generative model or incorporating additional knowledge when training a classifier to be comparable to adversarial training. Retraining promising generative models or classifiers on large-scale datasets (e.g., ImageNet) is extremely challenging and computation-consuming. In this work, following the natural image manifold hypothesis, we propose a zero-shot self-supervised method for adversarial purification named ZeroPur: For an adversarial example that lies beyond the natural image manifold, its corrupted embedding vector is first restored so that it is moved close to the natural image manifold. The embedding is then fine-tuned on finer intermediate-level discrepancies to project it back within the manifold. The whole purification process is done from coarse to fine, which does not rely on any generative model and does not require retraining the classifier to incorporate additional knowledge. Extensive experiments on three datasets including CIFAR-10, CIFAR-100, and ImageNet with various classifier architectures including ResNet and WideResNet, demonstrate that our method achieves state-of-the-art robust performance. Code released.
1 Introduction
Recent studies show that adding carefully crafted imperceptible perturbations to natural examples can easily fool deep neural networks (DNNs) to make wrong decisions [Goodfellow et al., 2014; Szegedy et al., 2013]. The potential vulnerability behind their remarkable performance raises a significant challenge to security-critical applications. Thus, exploring efficient adversarial defense strategies is necessary for real-world applications in DNNs.
One adversarial defense strategy that is widely considered to be efficient is adversarial training [Jia et al., 2022; Madry et al., 2017; Zhang et al., 2019], which incorporates adversarial examples into the model training, causing the model to empirically adapt to adversarial perturbations. However, such approaches usually require huge computational resources [Shafahi et al., 2019; Wu et al., 2022] and suffer from substantial performance degradation [Dai et al., 2022; Kang et al., 2019; Laidlaw et al., 2020] in the presence of unseen attacks that are not involved in training.
Another adversarial defense strategy is adversarial purification [Nie et al., 2022; Shi et al., 2021; Yoon et al., 2021], which purifies adversarial examples by removing adversarial perturbations. Unlike adversarial training, adversarial purification does not require additional adversarial examples and effectively defends against unseen attacks. Current purification techniques can be roughly divided into two categories: the first uses generative models [Goodfellow et al., 2014; Song & Ermon, 2019; Song et al., 2020] to remove adversarial perturbations on images [Nie et al., 2022; Shi et al., 2021; Wang et al., 2022a]. Such methods are supported by generative models to achieve
1Code available at https://github.com/
Figure 1: An illustration of ZeroPur. Given an adversarial example, we use its blurred counterpart as a reference for coarse shifting, and then fine-tune the coarse result to obtain the fine alignment result.
global image modeling and better performance than adversarial training. Yet it is often too expensive and difficult to train a promising generative model. The second category uses specific lightweight pre-processing operations instead of generative models [Dziugaite et al., 2016; Liao et al., 2018], thus enabling fast purification. However, achieving performance comparable to adversarial training then requires additional knowledge on the classifier side, such as an auxiliary loss [Mao et al., 2021; Shi et al., 2021].
In this work, following the natural image manifold hypothesis, we cast adversarial purification as moving adversarial examples that lie beyond the natural image manifold back onto it, and propose a new adversarial purification method named ZeroPur. As illustrated in Figure 1, we move adversarial examples towards the spots of the target natural images on the manifold by repeatedly closing the distance between adversarial examples and their various blurred counterparts in the embedding space. This movement is limited by the low-quality embeddings of the blurred counterparts and cannot precisely return an example to the original spot of its target natural image on the manifold; it provides only a reasonable direction. Thus, we maximize the intermediate-level discrepancies between the previous results and the corresponding adversarial examples without sacrificing that direction, allowing previous results that lie near the manifold to continue moving toward it.
The main contributions of the current work are as follows:
1. We analyze the relationship between adversarial attacks and adversarial purification based on the natural image manifold hypothesis, and show that a simple blurring operator can bring adversarial examples that are far from the natural image manifold back close to it.
2. We propose a zero-shot self-supervised approach for adversarial purification named ZeroPur, including two stages from coarse to fine and not depending on additional purification models.
3. The proposed approach can efficiently purify adversarial examples even without requiring the classifier to learn additional knowledge. Meanwhile, it shows superior performance if sufficient additional knowledge is available (e.g., strong data augmentation).
4. Extensive experiments demonstrate that the proposed approach outperforms current lightweight purification approaches on various datasets and has competitive performance with state-of-the-art approaches relying on generative models.
2 REVIEW OF LITERATURE
Adversarial training Adversarial training [Jia et al., 2022; Luo et al., Madry et al., 2017; Wu et al., 2020; Zhang et al., 2019] has been shown to be an effective way to improve robustness, by incorporating adversarial examples into the training data and reformulating the optimization objective from a minimization problem to a minimax problem. However, the computational cost of adversarial training is huge, caused by the fact that crafting adversarial examples requires backpropagation multiple times. In contrast to the better performance of adversarial training on seen attacks, it suffers
from substantial performance degradation in the presence of unseen attacks that are not involved in training [Dai et al., 2022; Kang et al., 2019; Laidlaw et al., 2020].
**Adversarial purification** Through well-designed preprocessing techniques, adversarial examples can be projected back near the natural image manifold. This type of approach is called adversarial purification. [Samangouei et al., 2018] propose Defense-GAN, a generator that models the distribution of unperturbed images. [Song et al., 2017] assume that adversarial examples mainly lie in the low probability density regions of the training distribution, and design PixelDefend to approximate this distribution using PixelCNN [Van Den Oord et al., 2016]. Recently, works using score-based models [Yoon et al., 2021] and diffusion models [Nie et al., 2022; Wang et al., 2022a] as purification models have been proposed and demonstrated to yield much better performance. These approaches first destroy the adversarial perturbations with known Gaussian noise and then remove the Gaussian noise with Langevin sampling or a stochastic differential equation (SDE) to recover clean examples. Contrary to the above approaches utilizing generative models, [Shi et al., 2021] propose a lightweight purification approach, SOAP, which uses a self-supervised loss for purification. SOAP no longer relies on generative models and runs quickly, but it requires classifiers to use the corresponding auxiliary loss in the training stage.
**Intermediate-level discrepancies** The intermediate-level discrepancies of neural networks reflect the analytical procedure of their decision, and much work on adversarial attacks and defenses has been proposed based on such discrepancies. Recently, intermediate-level discrepancies have been shown to improve the transferability of adversarial attacks [Gao et al., 2021; Huang et al., 2019; Wang et al., 2021; Yan et al., 2022]. Instead of distorting the output layer, such feature-level attacks maximize the distortion of intermediate-level discrepancies and achieve higher transferability. Similarly, it can be applied to adversarial defense techniques [Bai et al., 2021a; Wang et al., 2022b; Zhou et al., 2021]. This work discusses adversarial purification based on intermediate-level discrepancies.
### 3 Proposed Zero-Shot Learning for Adversarial Purification
#### 3.1 Adversarial Attacks and Purification in Natural Image Manifold Hypothesis
In the natural image manifold hypothesis [Ho et al., 2022], all natural images lie on a specific manifold called the natural image manifold. The learning process for DNNs can be considered as modeling this manifold, and its embedding space is an approximation of the natural image manifold.
Considering an embedding function $f$ and a decision function $g$, the natural image $x$ is embedded by $f(x) \in \mathbb{R}^d$ to the manifold $\mathcal{M}$, where $y \in \mathbb{R}$ is its label. Adversarial attacks aim to move $x$ beyond the manifold by optimizing the following objective with classification loss function $\ell$:
$$\max_{\|\delta^*\| \leq \epsilon} \ell(g \circ f(x + \delta^*), y),$$
where $\epsilon$ is the budget of the adversarial perturbation. In most contexts, $\delta^*$ is approximated by the local worst-case $\delta$. For example, Projected Gradient Descent (PGD) [Madry et al., 2017] updates the adversarial perturbation in each step with the following equation:
$$\delta^{t+1} = \Pi_\epsilon(\delta^t + \alpha \cdot \text{sgn}(\nabla_{\delta^t} \ell(g \circ f(x + \delta^t), y))), \quad t \in [0, \tau - 1],$$
where $\Pi_\epsilon$ is the projection operator which projects the perturbation $\delta^t$ back onto the $\epsilon$-ball to bound perceptibility, and $\alpha, \tau$ denote the step size and the number of attack iterations. The final $\delta^\tau$ is then an approximation of $\delta^*$ and is denoted $\delta$.
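For concreteness, a minimal PyTorch sketch of the PGD update in Equation 2; the budget, step size, and image-range clamping below are standard choices rather than values taken from this paper.

```python
import torch

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    """Minimal L_inf PGD (Eq. 2): ascend the classification loss and project
    the perturbation back onto the eps-ball after every step."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()              # gradient-sign ascent
            delta.clamp_(-eps, eps)                   # projection Pi_eps
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep a valid image
    return (x + delta).detach()

# Hypothetical usage with any classifier `net` and a batch (images, labels):
# x_adv = pgd_attack(net, images, labels)
```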
Intuitively, optimizing Equation 1 results in a large shift in the embedding for $f$, which is equivalent to the natural image $x$ deviating from the manifold. We can formulate the procedure:
$$\min_{\delta^*} \|\delta^*\| \quad \text{s.t.} \quad \|f(x) - f(x + \delta^*)\| \geq \gamma, \quad g \circ f(x) \neq g \circ f(x + \delta^*).$$
Adversarial purification can be naturally considered in the reverse process of adversarial attacks. Thus the goal of adversarial purification is to move adversarial examples that deviate from the manifold to their initial spot on the manifold. The objective of adversarial purification can be written:
$$\min_{\delta_{\text{pfy}}} \|f(x_{\text{adv}} + \delta_{\text{pfy}}) - f(x)\| \quad \text{s.t.} \quad \|\delta_{\text{pfy}}\| \leq \epsilon_{\text{pfy}},$$
Figure 2: An illustration of restoration on CIFAR-10 and ImageNet. Left: PGD-20 on CIFAR-10. Middle and right: PGD-20 on ImageNet. Adversarial examples distort the capacity of all networks to capture features, but this capacity is restored after the images are blurred.
Table 1: Responses of various models on CIFAR-10 for different types of examples.
| Examples  | CE Loss (Vanilla) | CE Loss (Standard) | CE Loss (Strong) | Accuracy % (Vanilla) | Accuracy % (Standard) | Accuracy % (Strong) |
|-----------|-------------------|--------------------|------------------|----------------------|-----------------------|---------------------|
| Natural   | 0.588             | 0.287              | 0.278            | 83.81                | 92.70                 | 90.89               |
| Perturbed | 36.018            | 54.596             | 35.049           | 0.00                 | 0.00                  | 0.00                |
| Blurred   | 7.475             | 3.758              | 1.037            | 11.39                | 27.70                 | 68.61               |
| Coarse    | 5.837             | 4.989              | 1.076            | 53.73                | 56.21                 | 77.64               |
where \( x_{\text{adv}} = x + \delta^* \), and \( x_{\text{adv}} + \delta_{\text{pfy}} \) is the idealized purification result. Similar to adversarial attacks, Equation (4) can be minimized indirectly by optimizing the following equation:
\[
\min_{\delta_{\text{pfy}}} \ell(g \circ f(x_{\text{adv}} + \delta_{\text{pfy}}), y) \quad \text{s.t.} \quad \| \delta_{\text{pfy}} \| \leq \epsilon_{\text{pfy}},
\]
where \( \delta_{\text{pfy}} \) and \( \epsilon_{\text{pfy}} \) are defined to correspond to \( \delta \) and \( \epsilon \) in Equation (1) to offset perturbations.
However, the natural example \( x \) and its label \( y \) are invisible, and the only one we can use is the adversarial example \( x_{\text{adv}} \). We are thus required to devise a purification loss function \( \ell_{\text{pfy}} \) with \( x_{\text{adv}} \) taken for input, optimizing the following equation:
\[
\min_{\delta_{\text{pfy}}} \ell_{\text{pfy}}(f(x_{\text{adv}} + \delta_{\text{pfy}}); \Theta) \quad \text{s.t.} \quad \| \delta_{\text{pfy}} \| \leq \epsilon_{\text{pfy}},
\]
where \( \Theta \) are the additional parameters introduced in the design of \( \ell_{\text{pfy}} \). Purification methods based on generative models [Ho et al., 2022; Nie et al., 2022; Samangouei et al., 2018; Yoon et al., 2021] usually train a purification model \( G \) to minimize the global \( \ell_{\text{pfy}} \). Other lightweight purification methods [Mao et al., 2021; Shi et al., 2021] that do not use generative models usually design a suitable \( \ell_{\text{pfy}} \) to indirectly minimize Equation (4), and they usually need to retrain the classifier to introduce additional knowledge. All of these methods introduce learnable parameters \( \Theta \) in \( \ell_{\text{pfy}} \). We now discuss how to design \( \ell_{\text{pfy}} \) without \( \Theta \).
Considering that image transformations can also shift the position of embeddings \( f(x) \) on the manifold \( \mathcal{M} \), we investigate the response of various classifiers to transformed adversarial examples. We train three ResNet-18 [He et al., 2016] models on CIFAR-10 with different levels of data augmentation to verify the prevalence of the phenomenon. Specifically, ‘Vanilla’ denotes no data augmentation, ‘Standard’ denotes basic data augmentation (random resized crop, random horizontal flip), and ‘Strong’ denotes the strong data augmentation used in contrastive learning (color jitter, grayscale, Gaussian blur, solarization, equalization). See Appendix A.1 for details.
As illustrated in Figure 2(a), none of the three networks captures features correctly under the PGD attack [Madry et al., 2017]. However, after blurring the image with a median filter with a window size of \( 3 \times 3 \), each of them recaptures the same features as in the original image, and the stronger the data augmentation, the greater the similarity of the recaptured features. The same restoration is shown in Figure 2(b) and (c) on ImageNet, where the pre-trained ResNet-50 and VGG19 (timm (Wightman, 2019) version) capture features with high overlap with the original image on blurred adversarial examples (Gaussian blur with $\sigma = 1.2$). Note that these models benefit from well-designed data augmentation strategies (e.g., AugMix (Hendrycks et al., 2019)), which makes the restoration effective. Table 1 also shows the loss and robust accuracy on natural examples, perturbed (adversarial) examples, blurred adversarial examples, and the coarse purification results of our method for the three classifiers. The loss on adversarial examples is significantly larger than on their blurred counterparts, and the accuracy is also somewhat improved, although a gap to natural examples remains.
3.2 Coarse Shifting
Motivated by the heuristic phenomenon above, namely that the blurring operator restores the attention of classifiers, reduces the loss added by worst-case adversarial examples, and repairs destroyed decisions, we conclude that blurred adversarial examples that had deviated from the natural image manifold come back closer to the manifold. Therefore, we can move an adversarial example to the vicinity of the natural image manifold by closing the distance between it and its blurred counterpart in the embedding space. Moreover, to avoid a single example becoming too blurred to be recognized by the classifier, we suggest attenuating each individual blurring of the adversarial example and closing the distance iteratively (see Appendix A.5 for a detailed discussion). The distance between feature embeddings is defined by the cosine similarity:
$$d(z_{adv}, z'_{adv}) = \frac{z_{adv} \cdot z'_{adv}}{\|z_{adv}\|\|z'_{adv}\|},$$
where $z'_{adv}$ is the embedding of the blurred adversarial example.
The workflow of coarse purification is shown in Algorithm 1. Note that $\alpha_c$ and $\epsilon_c$ are hyper-parameters of the algorithm, which in practice are empirically set to $\alpha_c = \alpha$ and $\epsilon_c = 1.25\epsilon$. The main results of the algorithm are reported in Table 1. The accuracy is significantly improved after purification: the increase over the blurring operator alone is 42.34% on ‘Vanilla’ and 28.51% on ‘Standard’. The detailed procedure of coarse purification is shown in Figure 3. Solid lines denote the cosine similarity of feature embeddings between all adversarial examples and natural examples on CIFAR-10, and dashed lines denote the cosine similarity between blurred adversarial examples and natural examples. Each color denotes a ResNet-18 trained with a different data augmentation strategy. The steady increase of the solid lines implies that each blurring step is well guided by the purification, and the shrinking area between the solid and dashed lines implies that the discrepancy between adversarial examples and their blurred counterparts in the embedding space keeps decreasing.
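For reference, the sketch below captures the spirit of the coarse-shifting loop under our reading of the text; `f` (the classifier's feature extractor), the Gaussian-blur settings, and the step count are assumptions, and details may differ from the exact Algorithm 1.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

blur = GaussianBlur(kernel_size=5, sigma=1.2)  # an attenuated single-step blur (assumed)

def coarse_shifting(f, x_adv, steps=10, alpha_c=2/255, eps_c=1.25 * 8/255):
    x = x_adv.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        # Iteratively close the embedding-space distance (Equation 7) between
        # the current iterate and its blurred counterpart.
        sim = F.cosine_similarity(f(x).flatten(1), f(blur(x)).flatten(1)).mean()
        grad = torch.autograd.grad(sim, x)[0]
        with torch.no_grad():
            x = x + alpha_c * grad.sign()                 # ascend on the similarity
            x = x_adv + (x - x_adv).clamp(-eps_c, eps_c)  # stay within the eps_c-ball
            x = x.clamp(0, 1)
    return x.detach()
```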
3.3 Fine Alignment
The result of coarse shifting is already promising for the classifier trained with a strong data augmentation strategy. Without aggressive data augmentation to support classifier training, however, coarse shifting is limited by the low-quality embedding of the blurred image and cannot move to the exact spot on the manifold that its corresponding natural image occupies. But at least the direction of shifting is reasonable, as demonstrated by the red and blue lines that eventually converge to a straight line in Figure 3.
Our goal thus becomes breaking the limitation of the low-quality embedding of the blurred image without the support of aggressive data augmentation. Breaking this limitation and allowing the example to shift independently is similar to the Intermediate Level Attack (ILA) \cite{Huang2019}. Specifically, given a function $f_l$ denoting the feature maps at layer $l$ of the classifier, we define the following objective:
$$\max_{x''} L_l(x_{adv}, x', x'') = \Delta u''_l \cdot \Delta u'_l,$$
where $x''$ is the new purification result after fine-tuning, $\Delta u''_l$ and $\Delta u'_l$ are two vectors of flattened feature maps defined as follows:
$$\Delta u''_l = f_l(x'') - f_l(x_{adv}),$$
$$\Delta u'_l = f_l(x') - f_l(x_{adv}).$$
The resulting $x''$ is initialized with $x_{adv}$. Maximizing Equation 8 is equivalent to maximizing the projection of $\Delta u''_l$ onto $\Delta u'_l$, since $\|\Delta u'_l\|$ is constant. The increase of the projection implies that $x''$ is no longer restricted by the blurred example and continues moving along the direction of the coarse result, which lets $x''$ move independently toward the natural manifold. It allows us to refine the purification result by making each pixel change count within the constrained purification budget $\epsilon_{pfy}$.
The fine-tuning process is called fine alignment.
We empirically find that using feature maps deep in the classifier significantly improves alignment results, and that using multiple layers yields better results than a single layer. We therefore design Multiple Intermediate-Level Discrepancies (MILD). Let $L = \{l_1, l_2, ..., l_m\}$ denote the set of layers of an $m$-layer model $f$; we carefully select $S \subseteq L$ as the candidate set for computing Equation 8. The MILD objective can be written as:
$$\max_{x''} \text{MILD}(x_{adv}, x', x'') = \frac{1}{|S|} \sum_{l \in S} L_l(x_{adv}, x', x'') \quad \text{s.t.} \quad \|x'' - x_{adv}\| \leq \epsilon_{pfy},$$
where $|S|$ denotes the number of elements of the set $S$.
Algorithm 2 describes the details of fine alignment. In line 2 we derive the step size $\alpha_f$ of each iteration from the iteration number $K_f$ and the purification budget $\epsilon_f$, so that no part of the budget allotted to each alignment step is wasted. In practice, we include the last three layers of the classifier in the candidate set $S$. The intuitive understanding of the approach is: each iteration lets the purified example move autonomously in the direction given by the previous coarse shifting, but the process is unaffected by other low-quality embeddings and can thus reach the spot of the original natural image. The results of fine alignment are reported in Section 4. Not surprisingly, the direction of fine alignment is strongly correlated with coarse shifting; Algorithm 2 backfires if coarse shifting does not provide an approximately correct direction as a reference.
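A minimal sketch of fine alignment with MILD follows, assuming a helper `feats(x)` that returns the flattened feature maps $f_l(x)$ for the layers in $S$ (e.g., obtained with forward hooks on the classifier's last three layers); `K_f` and `eps_f` stand in for Algorithm 2's iteration count and budget, and the exact details may differ.

```python
import torch

def fine_alignment(feats, x_adv, x_coarse, K_f=10, eps_f=8/255):
    alpha_f = eps_f / K_f  # spend the whole budget over K_f steps (Algorithm 2, line 2)
    with torch.no_grad():
        f_adv = feats(x_adv)
        # Fixed reference directions: du'_l = f_l(x') - f_l(x_adv).
        du_ref = [fc - fa for fc, fa in zip(feats(x_coarse), f_adv)]
    x = x_adv.clone().detach()  # x'' is initialized with x_adv
    for _ in range(K_f):
        x.requires_grad_(True)
        du = [fx - fa for fx, fa in zip(feats(x), f_adv)]  # du''_l
        # MILD: average the projections du''_l . du'_l over the layer set S.
        mild = torch.stack([(d * r).sum() for d, r in zip(du, du_ref)]).mean()
        grad = torch.autograd.grad(mild, x)[0]
        with torch.no_grad():
            x = x + alpha_f * grad.sign()
            x = x_adv + (x - x_adv).clamp(-eps_f, eps_f)   # purification budget
            x = x.clamp(0, 1)
    return x.detach()
```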
### 4 EXPERIMENTS
#### 4.1 EXPERIMENTAL SETTINGS
**Datasets and base classifiers.** Three benchmark datasets, CIFAR-10 \cite{Krizhevsky2009}, CIFAR-100, and ImageNet \cite{Deng2009}, are considered to evaluate robustness. We compare with the state-of-the-art adversarial training methods reported in the standard benchmark RobustBench \cite{Croce2021} and with other adversarial purification methods on these three datasets. We use ResNet-18 \cite{He2016} and WideResNet-28-10 \cite{Zagoruyko2016} as base models on CIFAR-10 and CIFAR-100, and ResNet-50 on ImageNet. As described in Section 3.2, we consider various data augmentation strategies to train the base models on CIFAR-10 and CIFAR-100 (see Appendix A.1 for details), demonstrating the positive effect of additional data augmentation on our approach. In practice, we use a median filter with a $3 \times 3$ window as the blurring operator on the ‘Vanilla’ classifier, and Gaussian blur with $\sigma = 1.2$ on the ‘Standard’ and ‘Strong’ classifiers. For simplicity, we use the notation ‘V’, ‘B’, and ‘S’ to denote results on ‘Vanilla’, ‘Standard’, and ‘Strong’, while ‘C’ and ‘F’ denote results with coarse shifting and fine alignment. For example, ‘ZeroPur-B-C-F’ denotes the results of ZeroPur with coarse shifting and fine alignment on ‘Standard’.
**Adversarial attacks and evaluation metrics.** We evaluate our method against standard attacks and strong adaptive attacks. For standard attacks, where the defense strategy is unknown to the adversary, we use the PGD attack and AutoAttack [Croce & Hein, 2020] to compare with adversarial training methods and other adversarial purification methods. For strong adaptive attacks, the adversary knows the defense strategy of the model. We use the Defense-Aware (DA) Attack [Mao et al., 2021] and BPDA+EOT [Athalye et al., 2018; Hill et al., 2021] to evaluate our method, where BPDA+EOT is the strongest attack against purification methods so far.
### 4.2 Comparison with the State-of-the-Art
**CIFAR-10 & CIFAR-100**
Tables 2 and 3 report the robust performance against the $\ell_\infty$ threat model ($\epsilon = 8/255$) and the $\ell_2$ threat model ($\epsilon = 0.5$) under PGD-20 and AutoAttack on CIFAR-10, and Table 4 reports results on CIFAR-100. ‘Training Required’ denotes that the method requires retraining the classifier. Our method yields better robust performance than previous state-of-the-art methods in the $\ell_\infty$ threat model even without robust training. In the $\ell_2$ threat model, our method is also comparable to state-of-the-art methods. Meanwhile, classifiers with strong data augmentation (‘Strong’) obtain greater robustness. For a fair comparison, we regard ‘Strong’ as Training Required.
**Table 2:** Robust accuracy (%) against PGD-20 and AutoAttack $\ell_\infty (\epsilon = 8/255)$ on CIFAR-10, obtained by different classifier architectures. The first part corresponds to adversarial training methods and the second part corresponds to adversarial purification methods.
| Training Required | Method | PGD-20 | AutoAttack | Method | PGD-20 | AutoAttack |
|-------------------|-------------------------|--------|------------|-------------------------|--------|------------|
| ✓ | Gowal et al. (2021) | 61.31 | 59.12 | Gowal et al. (2021) | 66.09 | 63.99 |
| ✓ | Sehwag et al. (2021) | 59.00 | 56.19 | Wang et al. (2021) | 64.92 | 61.47 |
| ✓ | Rade et al. (2021) | 61.71 | 58.17 | Gowal et al. (2020) | 66.05 | 63.27 |
| ✓ | Addepalli et al. (2022b)| 56.71 | 52.90 | Rade et al. (2021) | 66.04 | 63.36 |
| ✓ | Shi et al. (2021) | 60.65 | 66.62 | Shi et al. (2021) | 65.43 | 68.56 |
| ✓ | Mao et al. (2021) | 54.59 | 58.20 | Mao et al. (2021) | 64.64 | 67.79 |
| X | ZeroPur-V-C | 53.73 | 55.58 | ZeroPur-V-C | 57.41 | 58.21 |
| X | ZeroPur-V-C-F | 69.52 | 68.59 | ZeroPur-V-C-F | 67.82 | 67.20 |
| X | ZeroPur-B-C | 56.21 | 58.66 | ZeroPur-B-C | 57.45 | 53.29 |
| X | ZeroPur-B-C-F | 69.56 | 71.76 | ZeroPur-B-C-F | 70.66 | 69.39 |
| ✓ | ZeroPur-S-C | 77.64 | 79.29 | ZeroPur-S-C | 76.77 | 78.82 |
| ✓ | ZeroPur-S-C-F | 85.15 | 83.46 | ZeroPur-S-C-F | 83.92 | 82.31 |
**ImageNet**
Table 5 shows the robust performance against the $\ell_\infty$ threat model ($\epsilon = 4/255$) under PGD-200 and AutoAttack on ImageNet. The upper part of the table shows adversarial training methods, which require robust training of the classifier. In the middle, the adversarial purification methods DISCO, DiffPure, and GDMP all rely on purification models, while Reverse Attack at the bottom requires robust training of the classifier. Our method achieves similar robustness even without relying on any purification model or retraining. We provide visual examples in Figure 4 to show how our method purifies adversarial examples. Note that Reverse Attack is used as a post-processing technique for boosting adversarial training methods, so we report its best performance on ImageNet.
### 4.3 Defend Against Strong Adaptive Attacks
Assuming that the adversary is aware of the specific defense used for adversarial purification, strong adaptive attacks can be conducted. We therefore evaluate the robustness of ZeroPur against adaptive attacks including BPDA+EOT [Athalye et al., 2018; Hill et al., 2021] and the DA Attack [Mao et al., 2021] on
Table 3: Robust accuracy (%) against AutoAttack $\ell_2 (\epsilon = 0.5)$ on CIFAR-10. The order of method types is consistent with Table 2. (Accuracy not reported in respective papers is replaced by ‘-’.)
| Method | Tra. | Arch. | Robust (%) |
|-------------------------|------|-----------|------------|
| Rade et al., 2021 | ✓ | ResNet-18 | 77.48 |
| Rebuffi et al., 2021 | ✓ | ResNet-18 | 78.08 |
| Sehwag et al., 2021 | ✓ | ResNet-18 | 76.11 |
| Sehwag et al., 2021 | ✓ | WRN-34-10 | 79.03 |
| Augustin et al., 2020 | ✓ | WRN-34-10 | 81.35 |
| Wu et al., 2020 | ✓ | WRN-34-10 | 75.33 |
| Sun et al., 2019 | ✓ | WRN-28-10 | - |
| Nie et al., 2022 | ✓ | WRN-28-10 | - |
| Nie et al., 2022 | ✓ | WRN-70-16 | - |
| Ho et al., 2022 | ✓ | WRN-28-10 | 88.47 |
| ZeroPur-V-C-F | ✗ | ResNet-18 | 74.89 |
| ZeroPur-B-C-F | ✗ | ResNet-18 | 79.21 |
| ZeroPur-S-C-F | ✓ | ResNet-18 | 90.85 |
| ZeroPur-V-C-F | ✗ | WRN-28-10 | 76.59 |
| ZeroPur-B-C-F | ✗ | WRN-28-10 | 77.89 |
| ZeroPur-S-C-F | ✓ | WRN-28-10 | 89.77 |
Table 4: Robust accuracy (%) against PGD-20 and AutoAttack $\ell_\infty (\epsilon = 8/255)$ on CIFAR-100, obtained by different classifier architectures. The order of method types is consistent with Table 2.
| Method | Tra. | Arch. | Robust (%) |
|-------------------------|------|-----------|------------|
| Rade et al., 2021 | ✓ | ResNet-18 | 32.71 |
| Addepalli et al., 2022a | ✓ | ResNet-18 | 33.29 |
| Addepalli et al., 2022b | ✓ | ResNet-18 | 34.04 |
| Rebuffi et al., 2021 | ✓ | WRN-28-10 | 36.11 |
| Yang et al., 2022 | ✓ | WRN-28-10 | 35.48 |
| Jia et al., 2022 | ✓ | WRN-34-10 | 36.45 |
| Shi et al., 2021 | ✓ | ResNet-18 | 28.67 |
| Mao et al., 2021 | ✓ | ResNet-18 | 23.83 |
| Shi et al., 2021 | ✓ | WRN-28-10 | 37.66 |
| Mao et al., 2021 | ✓ | WRN-34-10 | 31.21 |
| ZeroPur-V-C-F | ✗ | ResNet-18 | 36.64 |
| ZeroPur-B-C-F | ✗ | ResNet-18 | 37.61 |
| ZeroPur-S-C-F | ✓ | ResNet-18 | 55.43 |
| ZeroPur-V-C-F | ✗ | WRN-28-10 | 32.56 |
| ZeroPur-B-C-F | ✗ | WRN-28-10 | 34.80 |
| ZeroPur-S-C-F | ✓ | WRN-28-10 | 55.89 |
Table 5: Natural accuracy and robust accuracy (%) against $\ell_\infty$ threat model ($\epsilon = 4/255$) on ImageNet, obtained by ResNet-50. In our method, the blurring operator is Gaussian blur with $\sigma = 1.2$. The $^1$ indicates evaluation with PGD-200, otherwise, evaluation with AutoAttack. (The accuracy is directly reported from the respective paper.)
| Method | Model Required | Training Required | Accuracy (%) |
|-------------------------|----------------|-------------------|--------------|
| Salman et al., 2020 | ✗ | ✓ | 64.02 |
| Wong et al., 2020 | ✗ | ✓ | 55.62 |
| Bai et al., 2021 | ✗ | ✓ | 67.38 |
| DISCO (Ho et al., 2022) | LIIF | ✗ | 71.22 |
| DiffPure (Nie et al., 2022) | SDE | ✗ | 67.79 |
| GDMP (Wang et al., 2022a) | Guided DDPM | ✗ | 70.17 |
| Reverse Attack$^1$ (Mao et al., 2021) | ✗ | ✓ | - |
| ZeroPur (Ours) | ✗ | ✗ | 62.15 |
CIFAR-10 with WideResNet-28-10. Figure 5 reports the robustness for different numbers of purification steps under the two strong adaptive attacks. The robustness of our method is stable under the DA Attack, but under the BPDA+EOT attack it decreases as the number of purification steps increases. This may be because more purification steps allow BPDA+EOT to estimate the gradient more accurately. In Table 6 we compare with other purification methods under the BPDA+EOT attack. Purification methods based on sampling loops, such as Hill et al. (2021), naturally defend better against BPDA+EOT, and our method is slightly below them. The robustness of our method against strong adaptive attacks decreases partly because these attacks start from clean natural examples, whereas our method assumes that the images to be purified are adversarial (as described in Appendix A.4). We believe that the direct use of strong adaptive attacks therefore underestimates the robustness of ZeroPur. Nevertheless, following the comparison criteria of most of the literature, we still report all results.
4.4 DISCUSSION OF BLURRING OPERATORS IN ZEROPUR
We show how blurring operators affect robust performance in Appendix A.3 by evaluating the robustness of our method on CIFAR-10 with different blurring operators, including median filters and Gaussian blurring kernels. The results show that excessive blurring may cause the ‘Vanilla’ classifier to recognize images incorrectly. On the contrary, aggressive blurring operators achieve better robust performance on classifiers trained with strong data augmentation.
ZeroPur relies on the blurring operator to move adversarial examples back to the natural image manifold, which assumes that adversarial examples are not robust to blurring operations. However, some attacks are naturally robust to blurring, such as DI$^2$-FGSM [Xie et al., 2019]. This does not mean that such attacks are completely immune to ZeroPur: we can replace the blurring operator with other operators that destroy adversarial examples to further strengthen the purification, e.g., TV Minimization [Guo et al., 2017]. The purification results of ResNet-18 on CIFAR-10 before and after the replacement are reported in Table 7; the performance against PGD attacks rises to 77.03%, which even surpasses the optimal performance achieved by blurring.
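To illustrate such a replacement operator, the sketch below performs a simplified total-variation minimization by gradient descent; it is a loose stand-in for the TV minimization of Guo et al. (2017), with illustrative weight, step count, and learning rate.

```python
import torch

def tv_minimize(x, weight=0.05, steps=30, lr=0.1):
    """Suppress high-frequency adversarial noise while staying close to x."""
    z = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        # Anisotropic total variation along height and width.
        tv = (z[..., 1:, :] - z[..., :-1, :]).abs().mean() + \
             (z[..., :, 1:] - z[..., :, :-1]).abs().mean()
        loss = (z - x).pow(2).mean() + weight * tv  # fidelity + smoothness
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach().clamp(0, 1)
```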
5 CONCLUSIONS
We propose a zero-shot self-supervised method for adversarial purification, named ZeroPur, that relies on no generative model and requires no retraining of the classifier to incorporate additional knowledge. Our method largely outperforms previous state-of-the-art adversarial training and adversarial purification methods while being more lightweight.
Despite these improvements, our method has two major limitations: (i) it suffers when applied to clean natural images, because the blurring operator corrupts the clean image; (ii) it cannot completely eliminate the distortion on images. We leave these for future work.
Table 6: Comparison of robust accuracy (%) with other adversarial purification methods using the BPDA+EOT with $\ell_\infty (\epsilon = 8/255)$ threat model.
| Method | Purification | BPDA+EOT |
|-----------------|------------------|----------|
| Song et al. [2017] | Gibbs Update | 9.00 |
| Yang et al. [2019] | Mask+Recon. | 15.00 |
| Hill et al. [2021] | EBM+LD | **54.90** |
| Shi et al. [2021] | Auxiliary Loss | 31.90 |
| ZeroPur-B-C | Zero-shot | 35.60 |
| ZeroPur-B-C-F | Zero-shot | 25.00 |
Table 7: Robust accuracy (%) against PGD and DI$^2$-FGSM with blurring and TV Minimization. The better performance for each attack is bolded.
| Method | PGD: Blurring | PGD: TV | DI$^2$-FGSM: Blurring | DI$^2$-FGSM: TV |
|-----------------|----------|----|----------|----|
| ZeroPur-V-C | **53.73** | 51.72 | 35.38 | **39.17** |
| ZeroPur-V-C-F | 69.52 | **69.53** | 57.86 | **61.85** |
| ZeroPur-B-C | 56.21 | **63.27** | 40.42 | **49.52** |
| ZeroPur-B-C-F | 69.56 | **77.03** | 60.48 | **69.96** |
| ZeroPur-S-C | **77.64** | 65.44 | **68.13** | 53.50 |
| ZeroPur-S-C-F | **85.15** | 72.31 | **82.34** | 66.39 |
Figure 4: Visual examples of ZeroPur against $\ell_\infty$ threat model ($\epsilon = 8/255$) on ImageNet. The red label is the error prediction and the green label is the correct prediction.
Figure 5: Impact of purification steps in our method on Robust accuracies.
REFERENCES
Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, and R Venkatesh Babu. Scaling adversarial training to large perturbation bounds. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part V*, pp. 301–316. Springer, 2022a.
Sravanti Addepalli, Samyak Jain, et al. Efficient and effective augmentation strategy for adversarial training. *Advances in Neural Information Processing Systems*, 35:1488–1501, 2022b.
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In *International conference on machine learning*, pp. 274–283. PMLR, 2018.
Maximilian Augustin, Alexander Meinke, and Matthias Hein. Adversarial robustness on in-and-out-distribution improves explainability. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16*, pp. 228–245. Springer, 2020.
Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, and Yisen Wang. Improving adversarial robustness via channel-wise activation suppressing. *arXiv preprint arXiv:2103.08307*, 2021a.
Yutong Bai, Jieru Mei, Alan L Yuille, and Cihang Xie. Are transformers more robust than cnns? *Advances in Neural Information Processing Systems*, 34:26831–26843, 2021b.
Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *International conference on machine learning*, pp. 2206–2216. PMLR, 2020.
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*, 2021. URL https://openreview.net/forum?id=SSKZPJCT7B.
Sihui Dai, Saeed Mahloujifar, and Prateek Mittal. Formulating robustness against unforeseen attacks. *arXiv preprint arXiv:2204.13779*, 2022.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009.
Gintare Karolina Dziugaite, Zoubin Ghahramani, and Daniel M Roy. A study of the effect of jpg compression on adversarial images. *arXiv preprint arXiv:1608.00853*, 2016.
Lianli Gao, Zijie Huang, Jingkuan Song, Yang Yang, and Heng Tao Shen. Push & pull: Transferable adversarial examples with attentive attack. *IEEE Transactions on Multimedia*, 24:2329–2338, 2021.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.
Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. Uncovering the limits of adversarial training against norm-bounded adversarial examples. *arXiv preprint arXiv:2010.03593*, 2020.
Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy A Mann. Improving robustness using generated data. *Advances in Neural Information Processing Systems*, 34:4218–4233, 2021.
Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens Van Der Maaten. Countering adversarial images using input transformations. *arXiv preprint arXiv:1711.00117*, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.
|
XZGklkaOsL
|
Many of the results, as shown in tables such as Table 4 (AUC), Table 5, and Table 7, indicate only marginal improvements. This raises questions about the practical significance and real-world applicability of the proposed framework.
|
UNIFIED MEDICAL IMAGE PRE-TRAINING IN LANGUAGE-GUIDED COMMON SEMANTIC SPACE
Anonymous authors
Paper under double-blind review
ABSTRACT
Vision-Language Pre-training (VLP) has shown its merits in analysing medical images by leveraging the semantic congruence between medical images and their corresponding reports. It efficiently learns visual representations, which in turn facilitates enhanced analysis and interpretation of intricate imaging data. However, this observation has predominantly been justified on single-modality data (mostly 2D images like X-rays); adapting VLP to learn unified representations for medical images in real scenarios remains an open challenge. This is because medical images often encompass a variety of modalities, especially modalities with different numbers of dimensions (e.g., 3D images like Computed Tomography). To overcome these challenges, we propose a Unified Medical Image Pre-training framework, UniMedI, which uses diagnostic reports as a common semantic space to create unified representations for diverse modalities of medical images (especially 2D and 3D images). Under the text’s guidance, we effectively uncover visual modality information, identifying the affected areas in 2D X-rays and the slices containing lesions in sophisticated 3D CT scans, ultimately enhancing consistency across various medical imaging modalities. To demonstrate the effectiveness and versatility of UniMedI, we evaluate its performance on both 2D and 3D images across 10 different datasets, covering a wide range of medical imaging tasks such as classification, segmentation, and retrieval. UniMedI demonstrates superior performance on downstream tasks, showcasing its effectiveness in establishing a universal medical visual representation.
1 INTRODUCTION
In recent years, the field of medical image analysis has witnessed significant advancements, largely driven by the application of deep learning techniques and the increasing availability of medical imaging data. Notably, Vision-Language Pre-training (VLP) (Huang et al., 2021; Boecking et al., 2022; Bannur et al., 2023) has attracted much attention, as it reduces the need for costly and time-consuming manual annotations by leveraging the vast amount of information in radiology reports and unlabelled data. Despite these successes, further expanding the data scale for medical VLP remains non-trivial, because the availability of single-modality medical images is limited, especially compared to the general domain. This introduces a strong need to integrate multi-modality medical images (e.g., X-rays, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI)) within a unified VL framework. However, fully leveraging the information across multi-modal images within such a VL framework remains unexplored.
Figure 1: An example showing that an X-ray (top) and a CT scan (bottom) both demonstrate a similar abnormality, recorded in the report.
Figure 2: t-SNE visualizations of image representations produced by models trained with different methods (2D: X-rays, 3D: CT; both modalities show the same disease, pneumonia). (a) Two models for the different image modalities are trained individually in separate VLP processes. (b) One model for the different image modalities is trained in one VLP process, but without the designs in UniMedI. (c) UniMedI. Learning a common semantic space for different medical images is non-trivial even with language guidance, and UniMedI handles this integration well. We use circles to highlight differences between images.
Regarding the above aspect, the inherent heterogeneity of medical images from different modalities obstructs their effective integration. One obvious and important problem is that medical images have different dimensions: for example, X-rays are 2D images, while CT scans are 3D images. To tackle this challenge, we start from the following key observation: despite large differences, medical images from various modalities share a common semantic latent space, which captures the underlying features of an individual’s health status, and this status is reflected in medical reports via language.
As shown in Fig. 1, the X-ray and CT scan can contribute to a comprehensive understanding of pneumonia, reflecting the commonality within the latent space, and these abnormalities are listed in reports. This observation motivates us to map data from various medical image modalities into the shared semantic space, which is guided by language in reports. This strategy not only tackles data-related issues but also fosters synergy and collaboration among distinct modalities, ultimately resulting in a more holistic understanding of an individual’s health condition.
However, creating a unified model that effectively maps data from different sources into a common space for joint learning is challenging, even with language guidance from reports. Figure 2a shows the representation space of two distinct modalities with different dimensions (i.e., 2D X-rays and 3D CT scans) when trained individually via VLP: they are far apart in the representation space, even when the reports contain the same pathological information. Furthermore, Figure 2b shows that simply unifying them in one model does not solve the problem. Although the distance between the representations of the two modalities is shortened to some extent, their representations remain insufficiently compact, since only little space is shared between them.
To address the above challenge, we propose UniMedI, a novel unified VL framework designed to effectively integrate medical multi-modal images into a language-guided common semantic space. First, facing the dilemma that paired 2D and 3D medical images are unavailable and that naive integration is not effective, as shown above, we design an attentive selection method to accurately identify text-relevant 2D slices without extra annotations. This builds a data bridge between 2D and 3D medical images. Then, we devise a cross-dimensional VLP method that brings both the 3D data and the selected 2D slices closer to the same report representation space, constructing a unified VL framework. Moreover, we introduce a self-distillation technique with a teacher-student structure and construct a masking-and-recovery task, further enhancing the associations between 2D and 3D data within the image space. Figure 2c shows that UniMedI significantly reduces the distance between 2D and 3D features after our effective cross-dimensional pre-training design.
To further demonstrate the effectiveness of our approach, we conduct extensive visualizations and experiments to showcase the working mechanisms and superior representational capabilities of our model. We evaluate our UniMedI framework on 10 real-world medical datasets and various downstream tasks (i.e., classification, segmentation and retrieval). The results consistently show superior performance, regardless of whether UniMedI is applied to full-scale data or limited data scenarios. We also provide visualizations on regions and slices selected by UniMedI, verifying our claim that UniMedI can identify key information from both 2D and 3D medical images.
2 RELATED WORK
Medical Self-supervised Learning In the domain of medical image analysis, a number of self-supervised learning (SSL) techniques have been developed to exploit the unique characteristics of medical data. These methods construct feature embedding spaces by designing pretext tasks, such as solving jigsaw puzzles [Noroozi & Favaro] and inpainting [Pathak et al., 2016]. Recently, researchers have explored 3D convolutional neural network (CNN) architectures while retaining SSL tasks established on 2D CNNs [Tang et al., 2022]. However, the diversity of medical data poses a significant challenge: developing a unified visual representation that adequately captures the intricacies of different data types remains a crucial yet complex task requiring further investigation. To address this challenge, [Xie et al., 2022] proposed UniMiss, a universal medical self-supervised representation learning framework that overcomes the dimensionality barrier. Furthermore, [Nguyen et al., 2023] introduced Joint, an SSL framework capable of accommodating various data dimensions and generating versatile pre-trained weights for both 2D and 3D downstream applications. These approaches have made notable contributions to handling data from different modalities, but they have paid relatively little attention to the relationships and connections between different types of medical data.
Medical Vision-Language Processing Medical Vision-Language Processing (VLP) has emerged as a promising approach for learning medical visual representations by leveraging naturally occurring paired descriptive text [Zhang et al., 2022]. [Huang et al., 2021] propose Gloria, an attention-based framework that contrasts image sub-regions and words in the paired report to learn global and local representations. [Wang et al., 2022] further optimize the framework from the perspective of disease in their method MGCA. These methods exhibit remarkable performance in various downstream tasks involving medical images. However, the application of medical VLP is primarily limited to 2D images, mainly due to the limited availability of extensive 3D medical image-text datasets. Compared to 2D medical image-text pairs, 3D images and reports contain more abundant information, which offers clear advantages for learning visual representations. While some methods [Liu et al., 2023], [Chen et al., 2023] attempt to address this limitation by converting 3D data into 2D slices and subsequently employing generative models to generate captions for 3D medical data, this approach results in a loss of the original 3D volume structure information. Therefore, it is imperative to develop strategies that can effectively harness the valuable information present in 3D images and reports while preserving the structural integrity of the data. This will facilitate the enhancement of the learning process for visual representations in medical VLP.
3 METHODOLOGY
Figure 3 illustrates UniMedI and its designs for integrating 2D and 3D medical images. To overcome the challenge that no paired 2D and 3D image data exist, UniMedI employs the following pipeline. When the input is a 3D volume, we first extract the portion of 2D slices that is most relevant to the report and then regard the selected slices as a 2D image. These selected 2D slices are fed into the network along with the original 3D volume, allowing us to jointly learn the relationships between 2D data, 3D data, and radiology reports, ultimately forming a unified feature space. When the input is a 2D image, the slice selection process is omitted.
In Section 3.1, we present our attentive slice selection method, which identifies the 2D slices in the 3D data that are most relevant to the report text, helping us learn a unified space between 2D and 3D data guided by the report. In Section 3.2, we design a method that brings the 3D data and the selected 2D slices closer to the same report representation, which serves as the foundation for our language-guided construction of a unified model. In Section 3.3, we design a self-distillation technique with an EMA teacher for the visual encoder, constructing image-level and patch-level contrastive learning tasks that further enhance the connection between 2D and 3D data.
3.1 ATTENTIVE SLICE SELECTION
To construct a cross-modal unified representation space, we choose language as the bridge. Therefore, we need to extract, from the various image modalities, the key information that corresponds to the information in medical reports. In particular, important 2D slices relevant to the report should be selected from the 3D volume. This process is similar to how doctors read CT scans; they also base their report descriptions on a few important slices.
As shown in Figure 4, in order to better locate the lesion-related 2D slices in the 3D data, we use the attention weights of the [CLS] token in the EMA teacher as the basis for the calculation. The visual encoder’s [CLS] token is directly supervised by the radiology report features from the language encoder and thus reflects the lesion areas most likely described in the report. The attentive score at token location $P$ is:
$$s^P = \frac{1}{HL} \sum_{l=1}^{L} \sum_{h=1}^{H} \text{Softmax} \left( \frac{f^q_{lh}(CLS) \cdot f^k_{lh}(P)}{\sqrt{C}} \right),$$
where $l$ denotes the layer index; $h$ denotes the attention head index; $f^q_{lh}(CLS)$ denotes the query embedding of the [CLS] token at layer $l$ and head $h$; $f^k_{lh}(P)$ denotes the key embedding at layer $l$ and head $h$ of the 3D image token at location $P$; and $C$ is the number of channels of the query and key embeddings.
The slice selection strategy is based on these token-level scores. Each token in the original CT volume represents a small voxel. By aggregating the scores along the slice dimension, we can calculate the total score of each slice:
$$s_i = \frac{1}{N} \sum_{j=1}^{N} s^P_{ij},$$
where $s_i$ is the attentive score of the $i$-th slice, $s^P_{ij}$ is the token-level attentive score of the $j$-th voxel in the $i$-th slice, and $N$ is the total number of voxels in a slice. After aggregating the attentive scores, we obtain a text-relevance score for each 2D slice. We then choose the top $k$ slices to establish a connection with the 3D data and the report, allowing us to learn a shared feature space.
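A minimal sketch of this selection step is given below, assuming `attn_cls` holds the teacher's softmaxed $[CLS]$-query attention with shape $(L, H, 1+N)$ and that the $N$ volume tokens are ordered slice by slice; the function name and the `top_k` default are illustrative.

```python
import torch

def select_slices(attn_cls, d_slices, top_k=4):
    """attn_cls: (L, H, 1 + N) attention of the [CLS] query over all tokens."""
    s_token = attn_cls.mean(dim=(0, 1))[1:]        # s^P: average over layers/heads, drop [CLS]
    s_slice = s_token.view(d_slices, -1).mean(1)   # s_i: mean token score per slice
    return s_slice.topk(top_k).indices             # top-k report-relevant slice indices
```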
### 3.2 Cross-Dimensional Medical Visual-Language Pretraining.
We use the CLIP (Radford et al., 2021) loss for cross-modal pre-training of 2D and 3D medical images and their corresponding reports. CLIP is a powerful tool that aligns features from two modalities via large-scale contrastive learning. For 2D X-ray training, we directly use $T_{2D}$ and $E_v$ for feature extraction, obtaining the global image feature [CLS] token, and align it with the [CLS] token of the language encoder $E_l$. For training on 3D CT scan data, the 2D slices within a volume also carry the content of the same radiology report, so we select attentive 2D slices according to
Figure 4: Attentive slice selection from a 3D volume. Generally, slices are selected according to the attention weights of the \([CLS]\) token attending to the other tokens, where the \([CLS]\) token is guided by the language in the report. We compute the average attention weight within each slice and then select the top K slices with the highest scores.
the method in Section 3.1 as joint input. Through this approach, we bring the 2D slice features and 3D features closer to the same language encoder’s features, using radiology reports as a medium to form cross-dimensional interactions.
A highlight of our work is the use of attentive slice selection to ensure that the selected 2D slices are sufficiently representative. Only then can these 2D slices carry the supervision information from the report and, together with the 3D features, construct a joint feature space. If we used random selection instead, mismatches between the visual and textual information would easily arise, and the resulting noise would confuse the model’s understanding of the 2D data. Once the common coordinates provided by the report are no longer accurate, an effective cross-dimensional information bridge cannot be formed.
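A minimal sketch of the contrastive alignment step is shown below; UniMedI uses the CLIP loss, but the batch construction, names, and temperature here are illustrative assumptions (`z_img` stacks visual $[CLS]$ embeddings of X-rays, selected slices, or volumes, where a selected slice reuses its source volume's report embedding in `z_txt`).

```python
import torch
import torch.nn.functional as F

def clip_loss(z_img, z_txt, temperature=0.07):
    z_img = F.normalize(z_img, dim=-1)
    z_txt = F.normalize(z_txt, dim=-1)
    logits = z_img @ z_txt.t() / temperature                 # pairwise cosine similarities
    targets = torch.arange(z_img.size(0), device=z_img.device)
    # Symmetric InfoNCE: image-to-text and text-to-image directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```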
3.3 Enhancing Dimensional Interactions via Self-distillation
In Section 3.1, we introduced the method for selecting 2D slices that can share the same report. Then, in Section 3.2, we aligned them across dimensions using text as shared coordinates for visual-textual training. In fact, apart from using text as a medium, the projected representative 2D slice features and the 3D features with global information are themselves strongly correlated. We aim to construct an auxiliary task that directly leverages this correlation, further enhancing cross-dimensional communication.
We adopt a simple and straightforward auxiliary task: mask and recover. We implement it with self-distillation [Yang et al., 2023; Zhou et al., 2021] due to its simplicity and effectiveness. During training, we mask a certain proportion of both 2D and 3D tokens for the online encoder, while the EMA encoder keeps the complete input. This non-trivial task requires predicting the EMA encoder’s features directly from the online encoder despite a significant amount of missing information. For both the 2D and 3D recovery tasks, the model has to learn the correlation with the other modality to obtain additional reference information, thus directly strengthening the interaction between 2D and 3D features within the network.
Similarly, during the token masking phase, we also employ the attentive selection design. While passing through the EMA encoder, we calculate the patch scores as described in Equation 7 and retain the portion with the highest scores. This minimizes the disruption of informative lesion structures, thereby avoiding ambiguity and making the cross-modal interaction more meaningful.
During feature distillation, we use the head and loss from BYOL [Grill et al., 2020]. We apply this loss to both the global \([CLS]\) tokens and all local patch tokens of the output 2D and 3D features, enabling interaction at different granularities and enhancing feature robustness.
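The sketch below outlines one way to implement this masking-and-recovery distillation; `student`/`teacher` (sharing an architecture), `head`, and the returned (CLS, patch) tuple are assumptions rather than the paper's exact interfaces.

```python
import torch
import torch.nn.functional as F

def byol_loss(p, z):
    # Negative cosine similarity between predictions and stop-gradient targets,
    # as in BYOL (for normalized vectors, MSE equals 2 - 2*cos).
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

def distill_step(student, teacher, head, x_masked, x_full):
    s_cls, s_patch = student(x_masked)       # online encoder sees the masked input
    with torch.no_grad():
        t_cls, t_patch = teacher(x_full)     # EMA teacher sees the complete input
    # Apply the loss at both granularities: global [CLS] and local patch tokens.
    return byol_loss(head(s_cls), t_cls) + byol_loss(head(s_patch), t_patch)

@torch.no_grad()
def ema_update(student, teacher, m=0.996):
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)     # exponential moving average of weights
```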
4 Experiments
We build our universal medical framework UniMedI and pre-train it on two medical vision-report datasets of different modalities, comprising 2D X-rays and 3D CT scans. Furthermore, extensive experiments on multiple cross-modal downstream datasets from diverse tasks are conducted to verify
| Method | 1% | 10% | 100% |
|-----------------|------|------|------|
| Random Init | 56.1 | 62.6 | 65.7 |
| ImageNet Init | 74.4 | 79.9 | 81.4 |
| **Pre-trained on CheXpert** | | | |
| DSVE | 50.1 | 51.0 | 51.5 |
| VSE++ | 50.3 | 51.2 | 52.4 |
| GLoRIA | 86.6 | 87.8 | 88.1 |
| **Pre-trained on MIMIC-CXR** | | | |
| Caption-Transformer | 77.2 | 82.6 | 83.9 |
| Caption-LSTM | 85.2 | 85.3 | 86.2 |
| Contrastive-Binary | 84.5 | 85.6 | 85.8 |
| ConVIRT | 85.9 | 86.8 | 87.3 |
| GLoRIA-MIMIC | 87.1 | 88.7 | 88.0 |
| MGCA (ResNet-50) | 87.6 | 88.0 | 88.2 |
| MGCA (ViT-B/16) | 88.8 | 89.1 | 89.7 |
| **UniMedI (Ours, ViT-B/16)** | | | |
| CheXpert (AUC) | **89.4** | **89.7** | **90.5** |
| RSNA (AUC) | **90.0** | **90.4** | **91.5** |
| COVIDx (ACC) | **80.3** | **92.4** | **94.6** |
Table 1: Linear classification results on CheXpert, RSNA, and COVIDx with 1%, 10%, and 100% training data. The area under the ROC curve (AUROC [%]) is reported for the CheXpert and RSNA datasets, and accuracy (ACC [%]) is reported for the COVIDx dataset. The best results are highlighted in **boldface**.
the effectiveness of the multi-modal vision representations. In the following subsections, we first present the pre-training experimental settings in Section 4.1 and the two main downstream tasks in Section 4.2. We then compare the performance of our proposed approach with state-of-the-art vision-language processing methods in Section 4.3. Finally, we perform extensive ablation experiments on multi-modal downstream tasks and visualizations to show the validity of each module of our framework.
### 4.1 Pre-Training Setup
**Dataset** We pre-train our UniMedI framework on the JPG version of the 2D X-ray dataset MIMIC-CXR 2.0.0 (Johnson et al., 2019) and the MINC version of the 3D CT scan dataset BIMCV (de la Iglesia Vayá et al., 2021). As the downstream 2D datasets only encompass frontal-view chest images, we preprocess the 2D dataset MIMIC-CXR 2.0.0 by removing the lateral-view images, and we preprocess the 3D dataset BIMCV in the same way. For the text reports, we remove reports with fewer than 3 tokens for both the 2D and 3D datasets, following Wang et al. (2022).
**Implementation Details** Following GLoRIA (Huang et al., 2021), we use ViT-B/16 (Dosovitskiy et al., 2020) as the vision encoder to extract representations in the common feature space for 2D and 3D visual data. We use BioClinicalBERT (Alsentzer et al., 2019) as the text encoder to obtain report embeddings.
| Method | 1% | 10% |
|-----------------|------|------|
| Random Init | 43.4 | 69.7 |
| UniMiss* | 41.6 | 73.1 |
| UniMedI* | 64.2 | 75.1 |
| UniMedI | **75.6** | **84.8** |
Table 2: Linear classification results on CC-CCII with 1%, 10%, and 100% training data. Accuracy is reported for the dataset. * denotes input size $16 \times 96 \times 96$; otherwise $32 \times 128 \times 128$. The best results are highlighted in **boldface**.
| Method | CC-CCII | LUNA |
|-----------------|---------|------|
| supervised | | |
| ResNet3D101 | 85.5 | - |
| CovidNet3D-L | 88.7 | - |
| unsupervised | | |
| Joint Nguyen et al. (2023) | - | 94.2 |
| UniMedI | 93.8 | 95.9 |
Table 3: Classification results on CC-CCII and LUNA2016-v2 with full training data. ACC [%] is reported for CC-CCII and AUC [%] is reported for LUNA2016-v2. The best results are highlighted in **boldface**.
Table 4: Ablation study of the training mode on linear classification (2D datasets CheXpert and RSNA, 3D dataset CC-CCII). We report the area under the ROC curve (AUROC [%]) on the CheXpert and RSNA datasets and accuracy (Acc [%]) on the CC-CCII dataset. The best results of each setting are in boldface.
| Training tasks | CheXpert (AUC) | RSNA (AUC) | CC-CCII (Acc) |
|---------------|----------------|------------|---------------|
| 2D | | | |
| ✓ | 87.1 | 88.0 | 88.4 |
| ✓ | 87.4 | 88.1 | 88.5 |
| 3D | | | |
| ✓ | - | - | - |
| ✓ | - | - | - |
| ✓ | 88.9 | 89.3 | 90.6 |
| ✓ | 88.7 | 89.5 | 90.3 |
| ✓ | 72.4 | 80.0 | 86.2 |
4.2 Downstream Tasks and Experimental Setup
Medical Classification We conduct medical image classification on three representative datasets: (1) CheXpert [Irvin et al., 2019], which contains 191,229 frontal-view chest radiographs. The task is to classify each image into 5 individual binary labels: atelectasis, cardiomegaly, consolidation, edema, and pleural effusion. Following [Zhang et al., 2022; Huang et al., 2021], we hold out the expert-labeled validation set as test data and randomly select 5,000 radiographs from the training data for validation. (2) RSNA Pneumonia [Shih et al., 2019]. We use the stage 2 version, which contains around 29,700 frontal-view chest radiographs. The task is binary classification, i.e., classifying each chest image as normal or pneumonia-positive. Following [Huang et al., 2021], we manually split the dataset into training, validation, and test sets with a 70%/15%/15% ratio. (3) COVIDx [Wang et al., 2020], which contains over 30k CXR images from a multinational cohort of over 16,600 patients. This dataset contains 16,490 positive COVID-19 images from over 2,800 patients. We use the latest version 6 of this dataset. The task is three-class classification, i.e., classifying each radiograph as COVID-19, non-COVID pneumonia, or normal. We use the original validation set as test data and manually split off 10% of the original training set for validation.
Table 5: Ablation study of our framework on linear classification with 1%, 10%, and 100% training data (2D datasets CheXpert and RSNA, 3D dataset CC-CCII). We report the area under the ROC curve (AUROC [%]) on the CheXpert and RSNA datasets and accuracy (Acc [%]) on the CC-CCII dataset. VL denotes the default setting with the image-text contrastive loss $L_{vl}$ and random slice selection. FD adds the $L_{ict}$ and $L_{pel}$ losses to perform self feature distillation. Attn uses attentive slice selection instead of random selection. The best results of each setting are in boldface.
| VL | FD | Attn | 1% | 10% | 100% |
|----|----|------|------|------|------|
| **CheXpert (AUC)** | | | | | |
| ✓ | | | 87.4 | 88.1 | 88.5 |
| ✓ | ✓ | | 89.0 | 89.3 | 90.1 |
| ✓ | ✓ | ✓ | **89.4** | **89.7** | **90.5** |
| **RSNA (AUC)** | | | | | |
| ✓ | | | 88.9 | 89.3 | 90.6 |
| ✓ | ✓ | | 89.5 | 90.1 | 91.2 |
| ✓ | ✓ | ✓ | **90.0** | **90.4** | **91.5** |
| **CC-CCII (Acc)** | | | | | |
| ✓ | | | 72.4 | 80.0 | 86.2 |
| ✓ | ✓ | | 74.6 | 80.9 | 86.7 |
| ✓ | ✓ | ✓ | **75.6** | **84.8** | **89.4** |
We conduct medical volume classification on two representative datasets: (1) CC-CCII [Zhang et al., 2020] and (2) LUNA16 [Setio et al., 2017]. More details about the 3D datasets are given in the Appendix.
We use the Linear Classification setting to evaluate the representational ability of our universal vision-language pre-training framework. In addition, we apply full Classification to evaluate UniMedI on 3D data. Linear Classification freezes the pre-trained ViT vision encoder and trains only a randomly initialized linear classification head for the downstream classification task with 1%, 10%, and 100% of the training data on each classification dataset.
Medical Semantic Segmentation We conduct experiments to evaluate the performance of our framework for medical semantic segmentation on the RSNA and BCV datasets: (1) RSNA Pneumonia [Shih et al., 2019], which contains 29,700 frontal-view radiographs. The task is to predict bounding boxes indicating evidence of pneumonia. We randomly split the original training set into 16,010/5,337/5,337 for training/validation/testing and convert the object detection ground truths into masks for semantic segmentation. (2) BCV [Landman et al., 2015], which consists of 50 CT scans and is divided into 24/26 for training/testing following [Xie et al., 2022].
We evaluate segmentation performance by using the pre-trained vision encoder as a frozen encoder and training a decoder with 1%, 10%, and 100% of the training data on the RSNA dataset and with 20%, 40%, and 100% of the training data on the BCV dataset. Dice scores are reported to evaluate segmentation performance.
### 4.3 Result
#### 4.3.1 Results on Medical Classification
**2D Medical Image Classification** Table 1 reports the Linear Classification results on three 2D medical image classification datasets (CheXpert, RSNA, and COVIDx). The results of the other methods on CheXpert and RSNA are taken from the original paper (Wang et al., 2022). UniMedI and the main competing methods shown in the table are pre-trained on the MIMIC-CXR dataset, enabling a fair comparison. For the state-of-the-art method MGCA, we mainly compare against MGCA (ViT-B/16), which employs a ViT visual encoder. Our method shows the best performance on the three 2D medical image classification datasets for all training data ratios (1%, 10%, 100%), outperforming the state-of-the-art MGCA (ViT-B/16) by a large margin. Specifically, our method outperforms MGCA with a ViT-B/16 backbone by +0.6%, +0.6%, and +0.8% AUROC on the CheXpert dataset, +0.9%, +0.5%, and +0.7% AUROC on the RSNA dataset, and +5.5%, +7.6%, and +2.3% ACC on the COVIDx dataset under the 1%, 10%, and 100% training ratios, respectively. The significant improvement indicates the data efficiency and effectiveness of our method.
**3D Medical Volume Classification** Table 2 reports the Linear Classification results on the 3D medical volume classification dataset CC-CCII. We compare UniMedI with UniMiss (Xie et al., 2022), which, to our knowledge, is the state-of-the-art unified method for processing 2D and 3D medical images. Our method achieves +22.6%, +2.0%, and +0.8% ACC gains on the CC-CCII dataset over UniMiss under the 1%, 10%, and 100% training ratios, respectively. The significant improvement indicates the data efficiency and effectiveness of our method.
When fine-tuning the whole vision encoder together with the linear classification head on the full training data, as listed in Table 3, our method achieves the best performance on the 3D medical volume classification datasets (CC-CCII and LUNA2016-v2) compared with other methods: 93.8% ACC on the CC-CCII dataset and 95.9% AUC on the LUNA2016-v2 dataset. This remarkable performance shows the generalization of our method across 2D and 3D medical classification tasks and demonstrates that our framework can extract universal features for multi-modal data.
| Method | RSNA 1% | RSNA 10% | RSNA 100% |
|-----------------|---------|----------|-----------|
| ConVIRT | 55.0 | 67.4 | 67.5 |
| GLoRIA | 59.3 | 67.5 | 67.8 |
| GLoRIA-MIMIC | 60.3 | 68.7 | 68.3 |
| MGCA | 88.6 | 81.2 | 94.3 |
| MGCA (ViT-B/16) | 66.2 | 71.3 | 73.6 |
| UniMedI (ViT-B/16) | **67.8** | **73.1** | **75.3** |
Table 6: 2D Semantic segmentation results (Dice [%]) on RSNA with 1%, 10% and 100% training labels. Best results of each setting are in boldface.
| Method | BCV 20% | BCV 40% | BCV 100% |
|-----------------|---------|---------|----------|
| MoCo v3 | 74.5 | 78.2 | 82.0 |
| DINO | 75.3 | 78.9 | 82.6 |
| UniMiss | **78.0** | 81.0 | 85.0 |
| UniMedI | 77.5 | **81.6** | **85.4** |
Table 7: 3D Semantic segmentation results (Dice [%]) on BCV with 20%, 40% and 100% training labels. Best results of each setting are in boldface.
#### 4.3.2 Results on Medical Semantic Segmentation
Table 6 and Table 7 report the semantic segmentation results on 2D and 3D medical data. On the 2D semantic segmentation task, UniMedI significantly outperforms the current state-of-the-art algorithm, MGCA: when using 1% of the training data, UniMedI achieves 67.8% Dice, surpassing MGCA by 1.6%. UniMedI also demonstrates strong performance on 3D semantic segmentation. On the BCV dataset, it achieves 0.6% and 0.4% performance gains over UniMiss under the 40% and 100% label settings. These results underscore the strong performance of our method on dense prediction tasks.
4.4 ANALYSIS OF OUR FRAMEWORK
Visualization To better demonstrate the effectiveness of our language-guided selection process, we visualize original X-rays, masked X-rays, their corresponding reports, original CT scans, and the selected lesion slices in Figure 5. On the left side of Figure 5, the first row shows how UniMedI accurately captures the areas referenced in the report, including the “Normal post-operative alignment of the sternal wires” and “Bilateral pleural effusions of mild-to-moderate extent persist”. The second and third cases showcase the detection of pleural effusion and scoliosis, further emphasizing the method’s precision. The right side of Figure 5 displays the slice selection process of UniMedI. Among the extensive collection of CT slices, our method pinpoints the slices containing lesions with remarkable accuracy; for example, the presence of pulmonary nodules is clearly noticeable in slices 28-31.
Ablation Study of Component Design We conduct ablation experiments focusing on two aspects: the training mode and the framework modules.
Training mode We pre-train our framework separately using only 2D data, only 3D data, and a combination of 2D and 3D data, and then evaluate the linear classification performance on the downstream 2D datasets CheXpert and RSNA and the 3D dataset CC-CCII. The results are presented in Table 4. The pre-training approach combining 2D and 3D data benefits both the 2D and 3D classification tasks. In particular, the improvement from multimodal data on the 3D dataset is remarkable: we obtain gains of +16.8%, +8.3%, and +9.8% ACC when using 1%, 10%, and 100% of the training data, respectively.
Framework modules We further analyze the effects of self feature distillation and attentive slice selection in our framework, using the linear classification task on the downstream 2D datasets CheXpert and RSNA and the 3D dataset CC-CCII. The results are summarized in Table 5. Incorporating both self feature distillation and attentive slice selection significantly improves performance across all data splits and datasets.
5 CONCLUSION
In this paper, we propose a novel approach called UniMedI that leverages diagnostic reports as a shared semantic space to create unified representations for diverse modalities of medical images, with a specific emphasis on 2D and 3D images. By using medical diagnostic reports as a bridge, we establish the unified vision-language framework that connects visual medical data across different modalities. Moreover, with the guidance of the text, we effectively extract visual modality information and accurately identify affected areas in 2D images and lesion slices in 3D CT scans, thereby enhancing consistency across various visual data modalities. Extensive experiments demonstrate UniMedI’s superior performance in these downstream tasks (classification, segmentation, and retrieval) on various 2D and 3D medical image datasets. We hope our work can promote the exploration of VLP in medical image processing.
REFERENCES
Emily Alsentzer, John R Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. Publicly available clinical bert embeddings. *arXiv preprint arXiv:1904.03323*, 2019.
Samuel G Armato III, Geoffrey McLennan, Luc Bidaut, Michael F McNitt-Gray, Charles R Meyer, Anthony P Reeves, Binsheng Zhao, Denise R Aberle, Claudia I Henschke, Eric A Hoffman, et al. The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans. *Medical physics*, 38(2):915–931, 2011.
Shruthi Bannur, Stephanie Hyland, Qianchu Liu, Fernando Perez-Garcia, Maximilian Ilse, Daniel C Castro, Benedikt Boecking, Harshita Sharma, Kenza Bouzid, Anja Thieme, et al. Learning to exploit temporal structure for biomedical vision-language processing. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15016–15027, 2023.
Benedikt Boecking, Naoto Usuyama, Shruthi Bannur, Daniel C Castro, Anton Schwaighofer, Stephanie Hyland, Maria Wetscherek, Tristan Naumann, Aditya Nori, Javier Alvarez-Valle, et al. Making the most of text semantics to improve biomedical vision–language processing. In *European conference on computer vision*, pp. 1–21. Springer, 2022.
Yinda Chen, Che Liu, Wei Huang, Sibo Cheng, Rossella Arcucci, and Zhiwei Xiong. Generative text-guided 3d vision-language pretraining for unified medical image segmentation. *arXiv preprint arXiv:2306.04811*, 2023.
Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. Meshed-memory transformer for image captioning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 10578–10587, 2020.
Maria de la Iglesia Vayá, Jose Manuel Saborit-Torres, Joaquim Angel Montell Serrano, Elena Oliver-García, Antonio Pertusa, Aurelia Bustos, Miguel Cazorla, Joaquin Galant, Xavier Barber, Domingo Orozco-Beltrán, Francisco García-García, Marisa Caparrós, Germán González, and Jose María Salinas. Bimcv covid-19+: a large annotated dataset of rx and ct images from covid-19 patients, 2021. URL https://dx.doi.org/10.21227/w3aw-rv39.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.
Martín Engilberge, Louis Chevallier, Patrick Pérez, and Matthieu Cord. Finding beans in burgers: Deep semantic-visual embedding with localization. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3984–3993, 2018.
Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. Vse++: Improving visual-semantic embeddings with hard negatives. *arXiv preprint arXiv:1707.05612*, 2017.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. *Advances in neural information processing systems*, 33:21271–21284, 2020.
Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong Yang, Andriy Myronenko, Bennett Landman, Holger R Roth, and Daguang Xu. Unetr: Transformers for 3d medical image segmentation. In *Proceedings of the IEEE/CVF winter conference on applications of computer vision*, pp. 574–584, 2022.
Shih-Cheng Huang, Liyue Shen, Matthew P Lungren, and Serena Yeung. Gloria: A multimodal global-local representation learning framework for label-efficient medical image recognition. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 3942–3951, 2021.
|
vE5MyzpP92
|
Since you conducted experiments on the large-scale iNaturalist-2018 dataset, what are the differences between open-set metric learning and face recognition or re-identification (re-ID)? Can your method be applied in the field of face recognition?
|
THRESHOLD-CONSISTENT MARGIN LOSS FOR OPEN-WORLD DEEP METRIC LEARNING
Qin Zhang\textsuperscript{1}, Linghan Xu\textsuperscript{1}\textsuperscript{*}, Qingming Tang\textsuperscript{2}, Jun Fang\textsuperscript{1}, Ying Nian Wu\textsuperscript{1}, Joe Tighe\textsuperscript{1}, Yifan Xing\textsuperscript{1}
\textsuperscript{1} AWS AI Labs, \textsuperscript{2} Alexa AI
\{qzaamz, linghax, qmtang, junfa, wunyin, yifax\}@amazon.com, jtighe@cs.unc.edu
ABSTRACT
Existing losses used in deep metric learning (DML) for image retrieval often lead to highly non-uniform intra-class and inter-class representation structures across test classes and data distributions. When combined with the common practice of using a fixed threshold to declare a match, this gives rise to significant performance variations in terms of false accept rate (FAR) and false reject rate (FRR) across test classes and data distributions. We define this issue in DML as threshold inconsistency. In real-world applications, such inconsistency often complicates the threshold selection process when deploying commercial image retrieval systems. To measure this inconsistency, we propose a novel variance-based metric called Operating-Point-Inconsistency-Score (OPIS) that quantifies the variance in the operating characteristics across classes. Using the OPIS metric, we find that achieving high accuracy levels in a DML model does not automatically guarantee threshold consistency. In fact, our investigation reveals a Pareto frontier in the high-accuracy regime, where existing methods to improve accuracy often lead to degradation in threshold consistency. To address this trade-off, we introduce the Threshold-Consistent Margin (TCM) loss, a simple yet effective regularization technique that promotes uniformity in representation structures across classes by selectively penalizing hard sample pairs. Extensive experiments demonstrate TCM’s effectiveness in enhancing threshold consistency while preserving accuracy, simplifying the threshold selection process in practical DML settings.
1 INTRODUCTION
Deep metric learning (DML) has shown success in various open-world recognition and retrieval tasks (Schroff et al., 2015a; Wu et al., 2017; Deng et al., 2019; Wang et al., 2018). Nevertheless, the common DML losses, such as contrastive loss (van den Oord et al., 2018; Chen et al., 2020), pairwise loss (Brown et al., 2020; Patel et al., 2022) and proxy-based losses (Kim et al., 2020; Movshovitz-Attias et al., 2017; Qian et al., 2019; Deng et al., 2019), often yield highly varied intra-class and inter-class representation structures across classes (Liu et al., 2019; Duan et al., 2019; Zhao et al., 2019). Hence, even if an embedding model has strong separability, distinct classes may still require varying thresholds to uphold a consistent operating point in terms of false reject rate (FRR) or false acceptance rate (FAR). This challenge is particularly important in real-world image retrieval systems, where a threshold-based retrieval criterion is preferred over a top-k approach due to its ability to identify negative queries without matches in the gallery. However, selecting the right threshold is difficult, especially when systems must cater to diverse use-cases. For instance, in clothing image retrieval for online shopping, the similarity between two T-shirts can be significantly different from that between two coats. A threshold that works well for coats may lead to poor relevancy and give many false positives in the retrieved images for T-shirts, as shown in Figure 1. These difficulties are more pronounced in the open-world scenarios (Scheirer et al., 2012; Bendale & Boult, 2015, 2016), where the test classes may include entirely new classes not seen during training.
We define the phenomenon in DML, where different test classes and distributions require varying distance thresholds to achieve a similar retrieval or recognition accuracy, as threshold inconsistency. In commercial environments, particularly under the practical evaluation and deployment
\textsuperscript{*}Equal contribution.
Figure 1: Here we show that (a) without threshold-consistent representation, selecting the right threshold for a commercial image retrieval system that serves a diverse range of test classes and distributions is challenging. It requires careful manual tuning of retrieval thresholds to strike a balance across multiple datasets. However, (b) with threshold-consistent representation, different test distributions yield similar distance thresholds at the performance target, effectively simplifying the otherwise complicated manual threshold tuning process. In the plots, $d^*$ denotes the distance threshold selected to align the False Positive (FP) rate with a pre-defined target.
setting with one fixed threshold for diverse user groups (Liu et al., 2022), the significance of threshold inconsistency cannot be overstated. Accurate quantification of this inconsistency is essential for detecting potential biases in the chosen threshold. To this end, we introduce a novel evaluation metric, named Operating-Point-Inconsistency-Score (OPIS), which quantifies the variance in the operating characteristics across classes within a target performance range. Using OPIS, we observe an accuracy-threshold consistency Pareto frontier in the high accuracy regime, where methods to improve accuracy often result in a degradation in threshold consistency, as shown in Figure 3. This highlights that achieving high accuracy does not inherently guarantee threshold consistency.
One solution to this problem is using posthoc calibration methods (Platt et al., 1999; Zadrozny & Elkan, 2002; Guo et al., 2017a), which adjust a trained model’s distance thresholds to align with specific operating points in FAR or FRR. However, in real-world settings, these methods can be inefficient and lack robustness, as they involve constructing separate calibration datasets and may require prior knowledge about the test distribution for effective calibration (Naeini et al., 2015; Guo et al., 2017a). Moreover, they do not address the threshold inconsistency problem unless customized calibration is done for each user. Another option is employing conformal prediction (Romano et al., 2020; Gibbs & Candes, 2021), which guarantees confidence probability coverage and can handle complex data distributions as well as covariate and label shifts. However, conformal prediction inherently assumes a closed-world setting, where training and test samples share the same label space. In contrast, real-world image retrieval systems typically operate in an open-world environment, presenting a more complex and realistic setting with unknown classes at test time.
Given these challenges, an essential question arises: Can we train an embedding model for open-world image retrieval that sustains a consistent distance threshold across diverse data distributions, thus avoiding the complexities of posthoc threshold calibration? This objective falls within the scope of calibration-aware training. In closed-set classification, the goal of calibration-aware training is to align predicted confidence probabilities with empirical correctness of the model (Guo et al., 2017a; Müller et al., 2019; Mukhoti et al., 2020). However, our focus lies on what we term as threshold-consistent DML, a paradigm that trains an embedding model with reduced threshold inconsistencies, such that a universal distance threshold can be applied to different test distributions to attain a similar level of FAR or FRR. This differentiation is crucial because in DML the output similarity score does not strictly reflect the empirical correctness of the model (Xu et al., 2023) and may exhibit strong variations across test data distributions. To address the unique challenges of threshold inconsistency in DML, we propose a simple yet effective regularization technique called Threshold-Consistent Margin (TCM) loss. Through experiments on four standard image retrieval benchmarks, we validate the efficacy of the TCM regularization in improving threshold consistency while maintaining accuracy. To summarize, our contributions are as follows:
• We propose a novel variance-based metric, named Operating-Point-Inconsistency-Score (OPIS), to quantify the threshold inconsistency of a DML model. Notably, OPIS does not need a separate hold-out dataset besides the test set, enhancing flexibility in evaluation.
• We observe an accuracy-threshold consistency Pareto frontier in the high accuracy regime. This finding underscores that achieving high model accuracy in DML does not automatically guarantee threshold consistency, necessitating dedicated solutions.
• We introduce the Threshold-Consistent Margin (TCM) loss, a simple yet effective regularization technique, that can be combined with any base losses and backbone architecture to improve threshold consistency in DML. Our approach outperforms SOTA methods across various standard image retrieval benchmarks, demonstrating substantial improvements in threshold consistency while maintaining or even enhancing accuracy.
2 RELATED WORKS
DML losses for image retrieval Advancements in DML losses for image retrieval have focused on improving accuracy, scalability and generalization (Brown et al., 2020; Patel et al., 2022; Deng et al., 2020; Kim et al., 2023; Roth et al., 2020; Kan et al., 2022; Ypsilantis et al., 2023). The pioneering work of the Smooth-AP loss (Brown et al., 2020) optimizes a smoothed approximation of the average precision. Similarly, the Recall@k Surrogate loss (Patel et al., 2022) approximates the recall@k metric. Leveraging vision-transformer backbones and large batch sizes, Recall@k Surrogate has achieved remarkable performance on several image retrieval benchmarks. However, these pairwise methods are inefficient when dealing with a large number of classes. To reduce the computational complexity, proxy-based methods such as ProxyAnchor (Kim et al., 2020), ProxyNCA (Movshovitz-Attias et al., 2017), SoftTriple (Qian et al., 2019), ArcFace (Deng et al., 2019), and HIER (Kim et al., 2023) are employed, where sample representations are compared against class prototypes. Despite high accuracy, these methods still face challenges with bias and fairness (Fang et al., 2013; Ilvento, 2019; Dullerud et al., 2022) and display inconsistencies in distance thresholds when applied in real-world scenarios (Liu et al., 2022).
Evaluation Metrics for threshold consistency (inconsistency) In closed-set classification, threshold consistency is usually evaluated through calibration metrics, such as Expected Calibration Error (ECE) (Naeini et al., 2015), Maximum Calibration Error (MCE) (Guo et al., 2017b) and Adaptive ECE (Nixon et al., 2019). These metrics gauge how well a model's predictions match actual correctness. However, directly applying them to evaluate threshold consistency in DML (e.g., by replacing confidence probability with similarity measures) is not straightforward. A key hurdle is that DML uses distance measurements to represent semantic similarities, and these distances can vary widely across different classes due to the intrinsic non-bijectiveness of semantic similarity in the data (Roth et al., 2022). In the context of DML, OneFace (Liu et al., 2022) introduced the calibration threshold for face recognition systems, which corresponds to the distance threshold at a given FAR of a separate calibration dataset. They further propose the One-Threshold-for-All (OTA) evaluation protocol, which measures the difference in accuracy performance across datasets at this calibration threshold as an indicator of threshold consistency. However, this approach requires a dedicated calibration dataset, which can be difficult to acquire in practice. To our knowledge, there is no widely accepted and straightforward metric for threshold consistency in DML.
Calibration-aware training vs Posthoc threshold calibration Calibration-aware training has been well studied in closed-set classification, where the goal is to align predicted probabilities with empirical correctness (Guo et al., 2017a; Müller et al., 2019; Mukhoti et al., 2020). Common approaches use a regularizer to guide the model in generating more calibrated predictions (Pereyra et al., 2017; Liang et al., 2020; Hebbalaguppe et al., 2022). Yet, threshold-consistent training for DML differs from calibration-aware training. Instead of aligning model output with empirical correctness, threshold-consistent DML seeks to maintain a consistent distance threshold across classes and data distributions. In face recognition, Liu et al. (2022) introduce the Threshold Consistency Penalty to improve threshold consistency among various face domains. The method divides mini-batch data into 8 domains and computes each domain's threshold using a large set of negative pairs from a feature queue. It then adjusts the loss contribution from each sample based on the ratio of its domain threshold to the in-batch calibration threshold. However, this method is designed for face recognition – a more constrained scenario. In contrast, our target is general image retrieval tasks, which can involve significantly more domains, making it impractical to construct negative pairs for all domains. Besides train-time methods, another approach is posthoc threshold calibration, such as Platt calibration (Platt et al., 1999), isotonic regression (Zadrozny & Elkan, 2002) and temperature scaling (Guo et al., 2017a), which seeks to calibrate the operating point of a trained model using
hold-out calibration datasets. However, it cannot solve threshold inconsistency unless customized calibration is conducted for each user. Another category of posthoc calibration method is conformal prediction (Tibshirani et al., 2019; Romano et al., 2020; Gibbs & Candes, 2021; Barber et al., 2023), which can be applied beyond the setting of exchangeable data even when the training and test data are drawn from different distributions. However, conformal prediction relies on a closed-set setting where the training and test data share the same label space, which does not apply to open-world image retrieval. Thus, in this work, we focus on developing a threshold-consistent training technique tailored for DML, with the goal of simplifying the posthoc calibration process in practical settings.
### 3 Threshold Inconsistency in Deep Metric Learning
**Visualizing threshold inconsistency in image retrieval** We visually illustrate the issue of threshold inconsistency in DML using image retrieval datasets. First, we borrow the widely-used $F$-score (Sasaki et al., 2007) to define the utility score, incorporating both sides of the accuracy metric (e.g. precision and recall, or specificity and sensitivity). Specifically, we denote one side as $\phi$ and the other side as $\psi$, and define the utility score, denoted as $U$, as follows:
$$U(d) = \frac{(1 + c^2) \cdot \phi(d) \cdot \psi(d)}{c^2 \phi(d) + \psi(d)}$$
where $d$ is the distance threshold ($d \in [0, 2]$ for hyperspherical embeddings), and $c$ is the relative importance of $\psi$ over $\phi$ ($c = 1$ if not specified). Without loss of generality, we let $\phi$ be specificity (same as TNR or $1 - \text{FAR}$) and $\psi$ be sensitivity (same as recall or $1 - \text{FRR}$).
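For concreteness, a minimal sketch of this utility computation (a direct reading of Eq. (1), assuming `phi` and `psi` are specificity and sensitivity arrays precomputed on a shared grid of distance thresholds) is:

```python
import numpy as np

def utility(phi, psi, c=1.0):
    """Accuracy utility U(d) of Eq. (1): phi = specificity (1 - FAR),
    psi = sensitivity (1 - FRR), both evaluated on a threshold grid."""
    num = (1.0 + c**2) * phi * psi
    den = c**2 * phi + psi
    # Avoid 0/0 at degenerate thresholds where both metrics vanish.
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```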
In Figure 2, we present the *accuracy utility-distance threshold* curves for the test classes using models trained on the iNaturalist-2018 (Horn et al., 2017) and Cars-196 (Krause et al., 2013) datasets. In the left column of each subfigure, we observe considerable variations in the operating characteristics among distinct classes for models trained with the popular Smooth-AP loss. These variations make it difficult to select a single distance threshold that works well across the entire spectrum of test distributions. However, as we will elaborate in later sections, incorporating our proposed TCM regularization during training visibly improves the threshold consistency across classes, as evidenced by the more aligned utility curves compared to those without the TCM regularization.
**OPIS for overall threshold inconsistency** To quantify threshold inconsistency in DML, we introduce a variance-based metric, Operating-Point-Inconsistency Score (OPIS). Unlike the OTA evaluation proposed in Liu et al. (2022), OPIS does not require a separate calibration dataset. It quantifies the variance in the operating characteristics across test classes in a predefined calibration range of distance thresholds. This calibration range, denoted as $[d_{\text{min}}, d_{\text{max}}]$, is typically determined based on the target performance metric operating ranges (e.g., $a < \text{FAR} < b$, where $a, b$ are pre-determined error constraints). Formally, the OPIS metric can be expressed as follows:
$$\text{OPIS} = \frac{\sum_{i=1}^{T} \int_{d_{\text{min}}}^{d_{\text{max}}} ||U_i(d) - \bar{U}(d)||^2 \, dd}{T \cdot (d_{\text{max}} - d_{\text{min}})}$$
---
1We employ the specificity and sensitivity pair because they are particularly relevant for visual recognition applications and are not sensitive to changes in test data composition.
where \( i = 1, 2, \ldots, T \) is the index for the test classes, \( U_i(d) \) is the accuracy utility for class \( i \), and \( \bar{U}(d) \) is the average utility for the entire test dataset.
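A short sketch of how OPIS can be evaluated numerically (assuming per-class utility curves `U` of shape `(T, len(d_grid))`, and approximating $\bar{U}(d)$ by the mean of the per-class curves) is:

```python
import numpy as np

def opis(U, d_grid, d_min, d_max):
    """OPIS of Eq. (2): variance of per-class utility curves around the
    average curve, integrated over the calibration range [d_min, d_max]."""
    mask = (d_grid >= d_min) & (d_grid <= d_max)
    U_cal, d_cal = U[:, mask], d_grid[mask]
    U_bar = U_cal.mean(axis=0, keepdims=True)    # average utility curve
    sq_dev = (U_cal - U_bar) ** 2                # ||U_i(d) - U_bar(d)||^2
    per_class = np.trapz(sq_dev, d_cal, axis=1)  # integrate over d
    return per_class.sum() / (U.shape[0] * (d_max - d_min))
```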
**\( \varepsilon \)-OPIS for utility divide between groups**
The overall OPIS metric does not emphasize outlier classes. For applications where outlier threshold consistency is essential, we also provide a more fine-grained metric that focuses on the utility disparity between the best and worst sub-groups. First, we define the utility of the \( \varepsilon \) percentile of best-performing classes as follows:
\[
U_{\varepsilon_{\text{best}}}(d) = \frac{(1 + c^2) \cdot \phi_{\varepsilon_{\text{best}}}(d) \cdot \psi_{\varepsilon_{\text{best}}}(d)}{c^2 \phi_{\varepsilon_{\text{best}}}(d) + \psi_{\varepsilon_{\text{best}}}(d)}
\]
where \( \phi_{\varepsilon_{\text{best}}}(d) \), \( \psi_{\varepsilon_{\text{best}}}(d) \) are the expected accuracy metrics for the entirety of the \( \varepsilon \) percentile of the best-performing classes. By replacing \( \varepsilon_{\text{best}} \) with \( \varepsilon_{\text{worst}} \), the same can be defined for \( U_{\varepsilon_{\text{worst}}}(d) \).
Then, we define the \( \varepsilon \)-OPIS metric as the following:
\[
\varepsilon\text{-OPIS} = \frac{1}{d_{\text{max}} - d_{\text{min}}} \int_{d_{\text{min}}}^{d_{\text{max}}} ||U_{\varepsilon_{\text{worst}}}(d) - U_{\varepsilon_{\text{best}}}(d)||^2 \, dd
\]
By definition, the \( \varepsilon \)-OPIS metric is maximized at \( \varepsilon \to 0 \), and eventually becomes zero when \( \varepsilon \to 100\% \) as the best-performing set and worst-performing set become identical.
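A sketch of \( \varepsilon \)-OPIS under the same assumptions follows; note Eq. (3) defines the group utility from the expected \( \phi \) and \( \psi \) of the whole group, so averaging per-class utility curves, as done here, is a simplifying approximation:

```python
import numpy as np

def eps_opis(U, d_grid, d_min, d_max, eps=0.10):
    """eps-OPIS of Eq. (4): squared utility gap between the worst- and
    best-performing eps-percentile class groups over the range."""
    mask = (d_grid >= d_min) & (d_grid <= d_max)
    U_cal, d_cal = U[:, mask], d_grid[mask]
    k = max(1, int(eps * U_cal.shape[0]))
    order = np.argsort(U_cal.mean(axis=1))   # ascending: worst classes first
    gap = (U_cal[order[:k]].mean(axis=0) - U_cal[order[-k:]].mean(axis=0)) ** 2
    return np.trapz(gap, d_cal) / (d_max - d_min)
```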
**High accuracy ≠ High threshold consistency**
In Figure 3, we employ the OPIS metric to examine the relation between threshold inconsistency and recognition error in embedding models trained with various DML losses, backbones and batch sizes. Notably, we observe distinct behaviors across different accuracy regimes. In the low-accuracy regime, located on the right of the plot, we notice a simultaneous improvement of accuracy and threshold consistency. This aligns with the established notion that improving model discriminability helps threshold consistency by strengthening the association between samples and their corresponding class centroids. However, as the error decreases, a trade-off surfaces in the high-accuracy regime. Here, the reduction in error is correlated with increased threshold inconsistency, leading to the formation of a Pareto frontier.
The trade-off between recognition error and threshold inconsistency highlights that achieving high accuracy alone does not automatically guarantee threshold consistency. In this context, introducing the proposed OPIS metric as an additional evaluation criterion alongside recall@k is crucial for threshold-based commercial DML applications, where the ability to identify negative queries without matching classes in the gallery is of importance. To explain further, we compare OPIS with the widely-used accuracy metric, recall@k. These two metrics evaluate different aspects of a model and can be used complementarily: recall@k focuses on top-k relevancy (retrieving top-k similar samples as the query from a collection), and OPIS measures the inconsistency in threshold-relevancy (retrieving similar examples above a threshold from a collection). Moreover, unlike recall@k that solely gauges recall, OPIS evaluates both the FAR and FRR (=recall), offering a more holistic error assessment.
### 4 TOWARDS THRESHOLD-CONSISTENT DEEP METRIC LEARNING
To tackle the threshold inconsistency problem, we introduce the Threshold-Consistent Margin (TCM) loss. TCM specifically penalizes hard positive and hard negative sample pairs near the decision boundaries outlined by a pair of cosine margins. This strategy is in line with several studies (Dong et al., 2017; Xuan et al., 2020; Robinson et al., 2020) that emphasize hard mining for
extracting more informative samples. Let $S^+$ and $S^-$ be the sets of cosine similarity scores for positive and negative pairs in a mini-batch, respectively, the TCM loss is formulated as follows:
$$L_{\text{TCM}} = \lambda^+ \cdot \frac{\sum_{s \in S^+} (m^+ - s) \cdot 1_{s \leq m^+}}{\sum_{s \in S^+} 1_{s \leq m^+}} + \lambda^- \cdot \frac{\sum_{s \in S^-} (s - m^-) \cdot 1_{s \geq m^-}}{\sum_{s \in S^-} 1_{s \geq m^-}}$$
(5)
where $1_{\text{condition}} = 1$ if the condition is true, and 0 otherwise. $\lambda^+$ and $\lambda^-$ are the weights assigned to the positive and negative regularizations, respectively. The TCM regularizer can be combined with any base loss $L_{\text{base}}$, resulting in the final objective function:
$$L_{\text{final}} = L_{\text{base}} + L_{\text{TCM}}$$
(6)
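A minimal PyTorch sketch of this regularizer (assuming `s_pos` and `s_neg` are 1-D tensors of in-batch positive- and negative-pair cosine similarities; the margin defaults follow the values recommended in Section 5.2) is:

```python
import torch

def tcm_loss(s_pos, s_neg, m_pos=0.9, m_neg=0.5, lam_pos=1.0, lam_neg=1.0):
    """TCM regularizer of Eq. (5): mean margin violation over hard
    positives (s <= m+) and hard negatives (s >= m-)."""
    hard_pos = s_pos[s_pos <= m_pos]
    hard_neg = s_neg[s_neg >= m_neg]
    loss = s_pos.new_zeros(())
    if hard_pos.numel() > 0:
        loss = loss + lam_pos * (m_pos - hard_pos).mean()
    if hard_neg.numel() > 0:
        loss = loss + lam_neg * (hard_neg - m_neg).mean()
    return loss  # added to any base DML loss, as in Eq. (6)
```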
**Design justification: representation structures** Several works have shown a strong correlation between model accuracy and representation structures (Yu et al., 2020; Chan et al., 2022). Indeed, SOTA DML losses are designed to optimize this relationship by encouraging intra-class compactness and inter-class discrimination. However, when considering threshold consistency, the focus shifts towards achieving consistent performance in FAR and FRR in the calibration range, with an emphasis on local representation structures near the distance threshold. In this context, the TCM regularization serves as a “local inspector” by selectively adjusting hard samples to prevent over separateness and excessive compactness in the vicinity of the margin boundaries. This strategy also aligns with previous work that found excessive feature compression actually hurts DML generalization (Roth et al., 2020). Since the margin constraints are applied globally, this helps encourage more equidistant distribution of class centroids and more uniform representation compactness across different classes in the embedding space.
**Hard mining strategy** TCM regularizes on hard samples, distinguishing it from techniques that encourage similarity consistency by minimizing marginal variance (Kan et al., 2022). Specifically, TCM’s hard mining strategy is different from the semi-hard negative mining strategy (Schroff et al., 2015b) and its variants (Oh Song et al., 2016; Wu et al., 2017; Wang et al., 2019), as TCM’s hard mining is based on the absolute cosine similarity values, rather than their relative differences. Meanwhile, TCM also differs from ROADMAP (Ramzi et al., 2021) in that TCM utilizes hard positive and negative counts, whereas ROADMAP uses the total positive and negative counts. This makes TCM well-suited for scenarios involving large batch sizes (as is the standard in DML) and significant imbalances between the counts of positive and negative pairs.
**Connection to the calibration range** TCM is implicitly connected to the calibration range of the OPIS metric through the two cosine margins. Since cosine similarity is bijective with the $L_2$ distance for hyperspherical embeddings, these margin constraints ensure that the model’s intra-class and inter-class representation structures adhere to the desired distance threshold range, which is $[\sqrt{2 - 2m^+}, \sqrt{2 - 2m^-}]$. However, due to the inevitable distributional shift between the training and testing datasets, the selection of the margin constraints requires some hyper-parameter tuning and cannot be directly estimated from the calibration range. In Figure 6 we give guidance on how to select the margins, with details discussed in the ablation of TCM margin hyperparameters.
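As a quick sanity check of this mapping (using the margin values recommended in the ablation below), the implied distance calibration range can be computed directly:

```python
import math

m_pos, m_neg = 0.9, 0.5              # recommended cosine margins
d_lo = math.sqrt(2 - 2 * m_pos)      # ~0.45, lower end of the distance range
d_hi = math.sqrt(2 - 2 * m_neg)      # = 1.0, upper end of the distance range
```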
**TCM vs Margin-based Softmax loss** TCM has distinguishing characteristics when compared to margin-based softmax losses (Deng et al., 2019; Qian et al., 2019), as illustrated in Figure 4(b).
---
2 An detailed comparison between TCM and the method of Ramzi et al. (2021) is given in appendix A.2.2
First, TCM is designed as a regularizer that operates in conjunction with a base loss; it applies specifically to hard sample pairs located near the margin boundaries. Second, TCM employs two cosine margins to regularize the intra-class and inter-class distance distributions simultaneously. This allows TCM to capture both hard positive and hard negative examples, resulting in more hard pairs within a mini-batch. Third, the TCM loss is applied only to the hard pairs, in contrast to ArcFace, which is applied to all pairs. Last, TCM is a sample-level pairwise loss, which better models the relationships between individual samples compared to proxy-based methods.
**Visualization of TCM effect** We visualize the effect of the TCM regularization on representation structures across the 10 classes in the MNIST dataset of handwritten digits (LeCun et al., 1998) by training a shallow CNN using the Arcface loss (Deng et al., 2019). For clearer visualization, we use two-dimensional features and employ kernel-density estimation (Chen, 2017) to model the probability density function for the embeddings of each class. As shown in Figure 5, compared to using ArcFace (Deng et al., 2019) only, the incorporation of TCM (ArcFace+TCM) enhances the separation between digits 2 and 5 (lower middle), 0 and 8 (lower right), and 4 and 9 (upper left). This observation supports our claims about TCM’s ability in refining the representation structures for improved threshold consistency.
### 5 EXPERIMENTS
#### 5.1 DATASETS AND IMPLEMENTATION DETAILS
**Datasets** For training and evaluation, we use four commonly-used image retrieval benchmarks, namely iNaturalist-2018 (Horn et al., 2017), Stanford Online Product (Song et al., 2015), CUB-200-2011 (Wah et al., 2011) and Cars-196 (Krause et al., 2013). These benchmarks cover a diverse set of data domains including natural species, online catalog images, birds, and cars. As in previous works (Brown et al., 2020; Patel et al., 2022; An et al., 2023), the iNaturalist and Stanford Online Product datasets use an open-world train-test split, where the training classes are disjoint from the ones in testing. For CUB and Cars, we use shared train-test classes to make fair comparisons with prior DML methods. Details of each dataset can be found in Table 1.
**Evaluation metrics** We measure model accuracy using the recall@k metric and assess threshold inconsistency using the OPIS and $\varepsilon$-OPIS metrics as defined earlier. Similar to previous works (Veit & Wilber, 2020; Liu et al., 2022), we estimate threshold inconsistency by comparing normalized features of image pairs in 1:1 comparisons. In the case of the iNaturalist-2018 and Stanford Online Product datasets, given the large number of classes, we only sample positive pairs exhaustively and randomly sample negative pairs with a fixed negative-to-positive ratio of 10-to-1 for each class. All positive and negative pairs in the CUB and Cars datasets are exhaustively sampled.
**Implementation details** We use two backbone architectures, namely ResNet (He et al., 2016) and Vision Transformer (Dosovitskiy et al., 2020), both pretrained on ImageNet. Since the original papers do not report OPIS, we train both baseline models (without TCM) and TCM-regularized models using the same configuration. The hyperparameters for each base loss are taken from the original papers. For TCM, we set $\lambda^+ = \lambda^- = 1$. For OPIS, the calibration range is set to $1e-2 \leq \text{FAR} \leq 1e-1$ for all benchmarks. The margin parameters $(m^+, m^-)$ are tuned using grid search on 10% of the training data for each benchmark. We adopt the same optimization schemes as specified in the original papers for each base loss.
---
3We also provide results for CUB and Cars in the open-world setting in Appendix A.2.5.
4For ResNet, we follow Brown et al. (2020) and use ImageNet-pretrained backbones. For ViTs, we follow Patel et al. (2022) and use ImageNet-21k pretrained backbones released by timm library (Wightman, 2019).
Table 1: Dataset statistics. The datasets with an open-world train-test split are highlighted in light gray.
| Dataset | Train # Img | Train # Cl | Test # Img | Test # Cl |
|---------|-------------|------------|------------|-----------|
| iNat | 325846 | 5690 | 136093 | 2452 |
| SOP | 59551 | 11318 | 60502 | 11316 |
| CUB | 5994 | 200 | 5794 | 200 |
| Cars | 8054 | 196 | 8131 | 196 |
Table 2: The influence of TCM regularization on different base losses for ResNet50\textsuperscript{12} backbones.
| Loss | R@1 | R@4 | R@16 | OPIS $\times 10^{-3}$ |
|------|-----|-----|------|-----------------------|
| ProxyNCA + TCM | 63.1 (1.4) | 78.6 (1.4) | 88.3 (1.1) | 0.28 (0.15) |
| ArcFace + TCM | 63.6 (1.0) | 78.6 (1.0) | 88.3 (0.9) | 0.25 (0.05) |
| SAP + TCM | 69.1 (1.7) | 82.9 (1.0) | 91.1 (0.7) | 0.17 (0.16) |
| RS@K + TCM | 72.2 (1.5) | 84.9 (1.2) | 92.1 (1.0) | 0.11 (0.17) |
Table 3: Impact of TCM regularization on various DNN models trained with the Recall@k Surrogate loss at a batch size of 4000 as in [Patel et al., 2022].
| Arch.\textsuperscript{size} | R@1 | R@4 | R@16 | OPIS \times 1e-3 |
|-----------------------------|-----|-----|------|------------------|
| ResNet50\textsuperscript{12} | 72.2 (1.5) | 84.9 (1.2) | 92.1 (1.0) | 0.11 (0.17) |
| ResNet101\textsuperscript{12} | 73.8 (1.7) | 85.8 (1.1) | 92.6 (0.9) | 0.14 (0.13) |
| ViT-S/16\textsuperscript{12} | 81.6 (1.0) | 90.9 (0.5) | 95.6 (0.5) | 0.17 (0.04) |
| ViT-B/16\textsuperscript{12} | 84.8 (0.8) | 92.7 (0.6) | 96.5 (0.4) | 0.17 (0.20) |
| ViT-L/16\textsuperscript{12} | 85.7 (0.7) | 93.0 (0.7) | 96.6 (0.7) | 0.21 (0.15) |
Table 4: Time complexities of TCM in comparison to the Recall@k Surrogate loss on the Cars-196 dataset. The ViT-B/16 backbone is utilized with 8x Tesla V100 GPUs and a batch size of 392.
| Method | Complexity | $t_{loss}$ (s) | $t_{forward}$ (s) | $t_{backward}$ (s) |
|--------|------------|----------------|-------------------|--------------------|
| RS@k | $\mathcal{O}(n^2)$ | 19.9 | 102.6 | 131.3 |
| RS@K + TCM | $\mathcal{O}(n^2)$ | 21.4 | 104.7 | 133.2 |
| Delta | $\mathcal{O}(1)$ | +1.05% | +1.30% | +1.44% |
During training, mini-batches are generated by randomly sampling 4 images per class, following previous works (Brown et al., 2020; Patel et al., 2022).
5.2 Ablation and Complexity Analysis
Unless stated otherwise, all ablation studies are conducted using the iNaturalist-2018 dataset. Owing to space constraints, further ablations can be found in the appendix.
Effect of TCM margins We examine the impact of the cosine margins $m^+$, $m^-$ on accuracy and OPIS. As shown in Figure 6, adding TCM consistently enhances threshold consistency compared to the baseline Smooth-AP loss across all combinations of margins, with up to a 50% reduction in OPIS. Regarding accuracy, we observe that the negative margin ($m^-$) has a greater influence than the positive margin ($m^+$), which aligns with previous works (Dong et al., 2017; Xuan et al., 2020; Robinson et al., 2020). However, when the negative margin becomes excessively stringent, such as $m^- = 0.25$, the accuracy drops below the baseline. We hypothesize that an overly restrictive negative margin may interfere with the base loss, leading to decreased accuracy. For ImageNet-pretrained backbones, the recommended values for $m^+$ and $m^-$ are around 0.9 and 0.5, respectively.
Compatibility with various base DML losses We select the most representative DML losses for each method category, including proxy-based methods [Movshovitz-Attias et al., 2017; Deng et al., 2020] and pairwise-based methods [Brown et al., 2020; Patel et al., 2022]. Notably, the Recall@k surrogate loss [Patel et al., 2022] represents the SOTA loss for fine-grained image retrieval tasks. We run experiments using these base losses with and without the TCM regularization. As shown in Table 2, there is a consistent improvement in both accuracy (> 1.0% increase in recall@1) and threshold consistency (up to 60.7% in relative reduction) when TCM regularization is applied in conjunction with different high-performing base losses.
Compatibility with different architectures We investigate the compatibility of TCM regularization with different backbone architectures including ResNet variants and Vision Transformers. As shown in Table 3, we observe significant improvements in threshold consistency across backbone architectures when TCM is incorporated. On accuracy, ResNet models exhibit more notable improvements in accuracy (> 1.5%) compared to Vision Transformers, which see a < 1.0% boost.
Time Complexity In a mini-batch with size $n$, the complexity of TCM is $\mathcal{O}(n^2)$ as it compares every sample with all samples in the mini-batch. For image retrieval benchmarks where the number of training classes $K$ is significantly greater than the batch size $n$, i.e., $K \gg n$, this complexity is comparable to most pair-based losses ($\mathcal{O}(n^2)$) and proxy-based losses ($\mathcal{O}(nK)$). In Table 4, we provide time complexities for the loss computation, the forward and backward passes and the overall
Table 5: Performance of supervised image retrieval after incorporating TCM regularization in recall@k (the higher the better) and OPIS (the lower the better) on 4 image retrieval datasets. The numbers in black represent models trained with $L_{\text{base}} + L_{\text{TCM}}$, while the colored numbers indicate improvement / degradation in absolute magnitude over models trained with $L_{\text{base}}$ alone. For Cars, with the same DINO backbone and ProxyAnchor base loss as in [Kim et al., 2023], TCM achieves a R@1 of 91.9, with a 46.3% relative OPIS reduction.
| Benchmark | Arch$^{\text{dim}}$ | $L_{\text{base}} + L_{\text{TCM}}$ | BS | OPIS $\times 10^{-3}$ | 10%-OPIS $\times 10^{-3}$ | R@1 ↑ | Previous SOTA, with ImageNet pretraining |
|-----------------|---------------------|-----------------------------------|----|-----------------------|--------------------------|------|-----------------------------------------|
| iNaturalist-2018 | ResNet50$^{12}$ | SAP + TCM | 384 | 0.17 (0.08 – 0.48) | 1.77 (1.25 – 61.5%) | 69.1 | R@1: 83.9 WEB16 |
| | RS@k + TCM | 4000 | 0.11 (0.07 – 0.60) | 1.25 (0.91 – 66.6%) | 72.2 | |
| | ViT-B/16$^{12}$ | SAP + TCM | 384 | 0.20 (0.09 – 0.48) | 2.81 (1.66 – 66.1%) | 81.2 | R@1: 84.8 WEB16 |
| | RS@k + TCM | 4000 | 0.17 (0.09 – 0.54) | 2.03 (1.68 – 73.5%) | 84.8 | |
| Stanford Online Product | ResNet50$^{12}$ | SAP + TCM | 384 | 0.06 (0.01 – 0.44) | 0.52 (0.17 – 69.2%) | 82.7 | R@1: 88.0 WEB16 |
| | RS@k + TCM | 4000 | 0.07 (0.00 – 0.30) | 0.74 (0.22 – 14.0%) | 83.3 | |
| | ViT-B/16$^{12}$ | SAP + TCM | 384 | 0.04 (0.00 – 0.26) | 0.33 (0.11 – 25.4%) | 87.3 | R@1: 88.4 WEB16 |
| | RS@k + TCM | 4000 | 0.04 (0.00 – 0.37) | 0.38 (0.20 – 17.4%) | 88.4 | |
| CUB-200-2011 | ResNet50$^{12}$ | SAP + TCM | 384 | 0.11 (0.08 – 0.26) | 1.00 (0.61 – 30.1%) | 80.8 | R@1: 85.7 WEB16 |
| | RS@k + TCM | 384 | 0.10 (0.02 – 0.54) | 0.91 (0.54 – 53.3%) | 80.0 | |
| | ViT-B/16$^{12}$ | SAP + TCM | 384 | 0.07 (0.04 – 0.66) | 0.58 (0.38 – 66.1%) | 88.4 | R@1: 87.6 WEB16 |
| | RS@k + TCM | 384 | 0.10 (0.04 – 0.77) | 0.91 (0.56 – 74.5%) | 87.6 | |
| Cars-196 | ResNet50$^{12}$ | SAP + TCM | 384 | 0.39 (0.16 – 1.33) | 3.33 (2.24 – 27.1%) | 89.6 | R@1: 91.3 RESO |
| | RS@k + TCM | 392 | 0.45 (0.06 – 4.99) | 2.93 (1.68 – 18.2%) | 89.7 | |
| | ViT-B/16$^{12}$ | SAP + TCM | 384 | 0.54 (0.06 – 5.25) | 0.83 (0.79 – 68.3%) | 87.8 | R@1: 87.7 RESO |
| | RS@k + TCM | 392 | 0.60 (0.03 – 34.1%) | 0.98 (0.75 – 83.8%) | 87.7 | |
time per epoch. The results suggest that adding TCM regularization results in a negligible (< 1.5%) increment in the overall training time per epoch.
5.3 IMAGE RETRIEVAL EXPERIMENT
The results for supervised fine-tuning on image retrieval benchmarks with and without the TCM regularizer are summarized in Table 5. As shown, our TCM loss is effective in improving threshold consistency (measured by OPIS and $\epsilon$-OPIS; the lower the better), by up to 77.3% compared to the various baseline losses considered. Meanwhile, adding TCM regularization consistently improves accuracy across almost all benchmarks, base losses and backbone architectures. While we notice a slight decrease in recall@1 on the two smaller datasets (as marked in red), namely CUB and Cars, these are at the same magnitude as non-significant variations due to random initialization during training. It is worth highlighting that on iNaturalist-2018, arguably the largest public image retrieval benchmark, adding our TCM regularization outperforms the SOTA DML loss, Recall@k Surrogate, reducing the OPIS threshold inconsistency score from $0.37 \times 10^{-3}$ to $0.17 \times 10^{-3}$ while improving the recall@1 accuracy metric from 83.9% to 84.8%.
6 CONCLUSION
In this work, we comprehensively study the issue of threshold inconsistency in deep metric learning. We introduce a novel variance-based metric named Operating-Point-Inconsistency-Score (OPIS) to quantify threshold inconsistency among different classes. Distinct from the One-Threshold-for-All evaluation protocol proposed by [Liu et al., 2022], a key advantage of OPIS is its elimination of the need for a separate calibration dataset. As a result, OPIS can be easily utilized alongside existing accuracy metrics, providing an added dimension for evaluating the threshold robustness of trained DML models. With the OPIS metric, we find that achieving high accuracy in a DML model does not necessarily guarantee threshold consistency. To address this issue, we propose the Threshold-Consistent Margin loss (TCM), a simple and versatile regularization technique that can be integrated with any base loss and backbone architecture to improve the model’s threshold consistency during training. TCM is designed to enforce more uniform intra-class compactness and inter-class separability across diverse classes in the embedding space. By incorporating TCM, we demonstrate state-of-the-art performance in both threshold consistency and accuracy across various image retrieval benchmarks. We hope that our work serves as a catalyst to encourage more explorations in developing threshold-consistent DML solutions for practical open-world scenarios.
Limitations of OPIS The OPIS and $\epsilon$-OPIS metrics necessitate a sufficient number of samples per class to ensure statistical significance, making them unsuitable for few-shot evaluation scenarios.
Limitations of TCM Like other inductive deep learning methods, TCM can fail when there’s a significant distribution shift between the training and test sets or when strong label noise is present.
REFERENCES
Xiang An, Jiankang Deng, Kaicheng Yang, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, and Tongliang Liu. Unicom: Universal and compact representation learning for image retrieval. In ICLR, 2023.
L.C. Andrews. Special Functions of Mathematics for Engineers. Online access with subscription: SPIE Digital Library. SPIE Optical Engineering Press, 1998. ISBN 9780819426161. URL https://books.google.com/books?id=2CAgsF-RebgC
Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, and Ryan J Tibshirani. Conformal prediction beyond exchangeability. The Annals of Statistics, 51(2):816–845, 2023.
Abhijit Bendale and Terrance Boult. Towards open world recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1893–1902, 2015.
Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1563–1572, 2016.
Andrew Brown, Weidi Xie, Vicky Kalogeiton, and Andrew Zisserman. Smooth-ap: Smoothing the path towards large-scale image retrieval. CoRR, abs/2007.12163, 2020. URL https://arxiv.org/abs/2007.12163
Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, and Yi Ma. Redunet: A white-box deep network from the principle of maximizing rate reduction. The Journal of Machine Learning Research, 23(1):4907–5009, 2022.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597–1607. PMLR, 2020.
Yen-Chi Chen. A tutorial on kernel density estimation and recent advances, 2017.
Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4690–4699, 2019.
Jiankang Deng, Jia Guo, Tongliang Liu, Mingming Gong, and Stefanos Zafeiriou. Sub-center arcface: Boosting face recognition by large-scale noisy web faces. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision – ECCV 2020, pp. 741–757, Cham, 2020. Springer International Publishing. ISBN 978-3-030-58621-8.
Qi Dong, Shaogang Gong, and Xiatian Zhu. Class rectification hard mining for imbalanced deep learning. In Proceedings of the IEEE international conference on computer vision, pp. 1851–1860, 2017.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Yueqi Duan, Jiwen Lu, and Jie Zhou. Uniformface: Learning deep equidistributed representation for face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3415–3424, 2019.
Natalie Dullerud, Karsten Roth, Kimia Hamidieh, Nicolas Papernot, and Marzyeh Ghassemi. Is fairness only metric deep? evaluating and addressing subgroup gaps in deep metric learning. arXiv preprint arXiv:2203.12748, 2022.
Chen Fang, Ye Xu, and Daniel N Rockmore. Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1657–1664, 2013.
|
QGR5IeMNDF
|
The link prediction results for attributed benchmarks, as shown in Table 2 is limited in that it does not include results from all of the standard datasets: ogbl-ppa, ogbl-ddi, ogbl-citation2. The OGBL datasets are included as baselines in all of the included SOTA models, the results from which would serve as a direct comparison for MPLP's performance versus any SOTA method.
|
Pure Message Passing Can Estimate Common Neighbor for Link Prediction
Anonymous authors
Paper under double-blind review
Abstract
Message Passing Neural Networks (MPNNs) have emerged as the de facto standard in graph representation learning. However, when it comes to link prediction, they are not always superior to simple heuristics such as Common Neighbor (CN). This discrepancy stems from a fundamental limitation: while MPNNs excel in node-level representation, they stumble with encoding the joint structural features essential to link prediction, like CN. To bridge this gap, we posit that, by harnessing the orthogonality of input vectors, pure message-passing can indeed capture joint structural features. Specifically, we study the proficiency of MPNNs in approximating CN heuristics. Based on our findings, we introduce the Message Passing Link Predictor (MPLP), a novel link prediction model. MPLP taps into quasi-orthogonal vectors to estimate link-level structural features, all while preserving the node-level complexities. Moreover, our approach demonstrates that leveraging message-passing to capture structural features could offset MPNNs’ expressiveness limitations at the expense of estimation variance. We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods.
1 Introduction
Link prediction is a cornerstone task in the field of graph machine learning, with broad-ranging implications across numerous industrial applications. From identifying potential new acquaintances on social networks [Liben-Nowell & Kleinberg, 2003] to predicting protein interactions [Szklarczyk et al., 2019], from enhancing recommendation systems [Koren et al., 2009] to completing knowledge graphs [Zhu et al., 2021], the impact of link prediction is felt across diverse domains. Recently, with the advent of Graph Neural Networks (GNNs) [Kipf & Welling, 2017] and more specifically, Message-Passing Neural Networks (MPNNs) [Gilmer et al., 2017], these models have become the primary tools for tackling link prediction tasks. Despite the resounding success of MPNNs in the realm of node and graph classification tasks [Kipf & Welling, 2017; Hamilton et al., 2018; Velickovic et al., 2018; Xu et al., 2018], it is intriguing to note that their performance in link prediction does not always surpass that of simpler heuristic methods [Hu et al., 2021].
Zhang et al. [2021] highlights the limitations of GNNs/MPNNs for link prediction tasks arising from its intrinsic property of permutation invariance. Owing to this property, isomorphic nodes invariably receive identical representations. This poses a challenge when attempting to distinguish links whose endpoints are isomorphic nodes. As illustrated in Figure 1a, nodes $v_1$ and $v_3$ share a Common Neighbor $v_2$, while nodes $v_1$ and $v_5$ do not. Ideally, due to their disparate local structures, these two links $(v_1, v_3)$ and $(v_1, v_5)$ should receive distinct predictions. However, the permutation invariance of MPNNs results in identical representations for nodes $v_3$ and $v_5$, leading to identical predictions for the two links. As Zhang et al. [2021] asserts, such node-level representation, even with the most expressive MPNNs, cannot capture structural link representation such as Common Neighbors (CN), a critical aspect of link prediction.
In this work, we posit that the pure Message Passing paradigm [Gilmer et al., 2017] can indeed capture structural link representation by exploiting orthogonality within the vector space. We begin by presenting a motivating example, considering a non-attributed graph as depicted in Figure 1a. In order to fulfill the Message Passing’s requirement for node vectors as input, we assign a one-hot vector to each node $v_i$, such that the $i$-th dimension has a value of one, with the rest set to zero.
Figure 1: (a) Isomorphic nodes result in identical MPNN node representation, making it impossible to distinguish links such as \((v_1, v_3)\) and \((v_1, v_5)\) based on these representations. (b) MPNN counts Common Neighbor through the inner product of neighboring nodes’ one-hot representation.
These vectors, viewed as signatures rather than mere permutation-invariant node representations, can illuminate pairwise relationships. Subsequently, we execute a single iteration of message passing as shown in Figure 1b, updating each node’s vector by summing the vector of its neighbors. This process enables us to compute CN for any node pair by taking the inner product of the vectors of the two target nodes.
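To make this concrete, a toy NumPy check of the mechanism (on an assumed 5-node cycle graph, not the exact graph of Figure 1) is:

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 1],      # adjacency of a 5-cycle
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
H = A @ np.eye(5)                   # one message-passing step on one-hot signatures
overlap = H @ H.T                   # <h_u, h_v> = |N_u ∩ N_v| for u != v
assert np.allclose(overlap, A @ A)  # off-diagonal: exact CN counts; diagonal: degrees
```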
At its core, this naive method employs an orthonormal basis as the node signatures, thereby ensuring that the inner product of distinct nodes’ signatures is consistently zero. While this approach effectively computes CN, its scalability poses a significant challenge, given that its space complexity is quadratically proportional to the size of the graph. To overcome this, we draw inspiration from DotHash (Nunes et al., 2023) and capitalize on the premise that the family of vectors almost orthogonal to each other swells exponentially, even with just linearly scaled dimensions (Kainen & Kůrková, 1993). Instead of relying on the orthogonal basis, we can propagate these quasi-orthogonal (QO) vectors and utilize the inner product to estimate the joint structural information of any node pair. Furthermore, by strategically selecting which pair of node signatures to compute the inner product, we can boost the expressiveness of MPNNs to estimate substructures—a feat previously deemed impossible in the literature (Chen et al., 2020).
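A minimal simulation of this quasi-orthogonal estimation (with an assumed random sparse graph; `n`, `dim`, and the edge density are illustrative choices) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 1000, 128                        # dim << n: quasi-orthogonal regime
A = (rng.random((n, n)) < 0.01).astype(float)
A = np.triu(A, 1); A += A.T               # random symmetric adjacency

H = A @ rng.normal(0.0, 1.0 / np.sqrt(dim), size=(n, dim))  # propagate QO signatures
est = H @ H.T                             # E[est] = A @ A, i.e. the CN counts
err = np.abs(est - A @ A).mean()
print(f"mean |estimate - CN| = {err:.3f}")
```

Because the expected Gram matrix of the signatures is the identity, the inner products concentrate around the exact counts, with variance controlled by `dim`.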
In sum, our paper presents several pioneering advances in the realm of GNNs for link prediction:
- We are the first, both empirically and theoretically, to delve into the proficiency of GNNs in approximating heuristic predictors like CN for link prediction. This uncovers a previously uncharted territory in GNN research.
- Drawing upon the insights gleaned from GNNs’ capabilities in counting CN, we introduce MPLP, a novel link prediction model. Uniquely, MPLP discerns joint structures of links and their associated substructures within a graph, setting a new paradigm in the field.
- Our empirical investigations provide compelling evidence of MPLP’s dominance. Benchmark tests reveal that MPLP not only holds its own but outstrips state-of-the-art models in link prediction performance.
2 Preliminaries and Related Work
Notations. Consider an undirected graph \(G = (V, E, X)\), where \(V\) represents the set of nodes with cardinality \(n\), indexed as \(\{1, \ldots, n\}\), \(E \subseteq V \times V\) denotes the observed set of edges, and \(X_i \in \mathbb{R}^{F_x}\) encapsulates the attributes associated with node \(i\). Additionally, let \(N_v\) signify the neighborhood of a node \(v\), that is \(N_v = \{u | \text{SPD}(u, v) = 1\}\) where the function \(\text{SPD}(\cdot, \cdot)\) measures the shortest path distance between two nodes. Furthermore, the node degree of \(v\) is given by \(d_v = |N_v|\). To generalize, we introduce the shortest path neighborhood \(N^s_v\), representing the set of nodes that are \(s\) hops away from node \(v\), defined as \(N^s_v = \{u | \text{SPD}(u, v) = s\}\).
Link predictions. Alongside the observed set of edges \(E\), there exists an unobserved set of edges, which we denote as \(E_c \subseteq V \times V \setminus E\). This unobserved set encompasses edges that are either absent from the original observation or are anticipated to materialize in the future within the graph \(G\). Consequently, we can formulate the link prediction task as discerning the unobserved set of edges \(E_c\). Heuristic link predictors include Common Neighbor (CN) (Liben-Nowell & Kleinberg, 2003), Adamic-Adar index (AA) (Adamic & Adar, 2003), and Resource Allocation (RA) (Zhou et al., 2009).
Figure 2: GNNs estimate CN, AA and RA via MSE regression, using the mean value as a baseline. Lower values are better.
CN is simply counting the cardinality of the common neighbors, while AA and RA count them weighted to reflect their relative importance as a common neighbor.
\[
CN(u,v) = \sum_{k \in N_u \cap N_v} 1 ; \quad AA(u,v) = \sum_{k \in N_u \cap N_v} \frac{1}{\log d_k} ; \quad RA(u,v) = \sum_{k \in N_u \cap N_v} \frac{1}{d_k}.
\]
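A direct reading of these three heuristics in Python (assuming `neighbors` maps each node to its neighbor set and `deg` to its degree; degree-one common neighbors are skipped in AA to avoid division by \(\log 1 = 0\)) is:

```python
import math

def cn_aa_ra(neighbors, deg, u, v):
    """Common Neighbor, Adamic-Adar, and Resource Allocation for (u, v)."""
    common = neighbors[u] & neighbors[v]
    cn = len(common)
    aa = sum(1.0 / math.log(deg[k]) for k in common if deg[k] > 1)
    ra = sum(1.0 / deg[k] for k in common)
    return cn, aa, ra
```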
Though heuristic link predictors are effective across various graph domains, their growing computational demands clash with the need for low latency. To mitigate this, approaches like ELPH (Chamberlain et al., 2022) and DotHash (Nunes et al., 2023) propose using estimations rather than exact calculations for these predictors. Our study, inspired by these works, seeks to further refine techniques for efficient link predictions. A detailed comparison with related works and our method is available in Appendix A.
GNNs for link prediction. The advent of graphs incorporating node attributes has caused a significant shift in research focus toward methods grounded in GNNs. Most practical GNNs follow the message-passing paradigm (Gilmer et al., 2017), which can be formulated as:
\[
h^{(l+1)}_v = \text{UPDATE}\left(h^{(l)}_v, \text{AGGREGATE}\left(\left\{h^{(l)}_u \mid u \in N_v\right\}\right)\right),
\]
where \( h^{(l)}_v \) represents the vector of node \( v \) at layer \( l \) and \( h^{(0)}_v = X_v \). For simplicity, we use \( h_v \) to represent the node vector at the last layer. The specific choice of the neighborhood aggregation function, AGGREGATE(\(\cdot\)), and the updating function, UPDATE(\(\cdot\)), dictates the instantiation of the GNN model, with different choices leading to variations of model architectures. In the context of link prediction tasks, the GAE model (Kipf & Welling, 2016) derives link representation, \( h(i,j) \), as a Hadamard product of the target node pair representations, \( h(i,j) = h_i \odot h_j \). Despite its seminal approach, the SEAL model (Zhang & Chen, 2018), which labels nodes based on proximity to target links and then performs message-passing for each target link, is hindered by computational expense, limiting its scalability. Efficient alternatives like ELPH (Chamberlain et al., 2022) estimate node labels, while NCNC (Wang et al., 2023) directly learns edgewise features by aggregating node representations of common neighbors.
3 CAN MESSAGE PASSING COUNT COMMON NEIGHBOR?
In this section, we delve deep into the potential of MPNNs for heuristic link predictor estimation. We commence with an empirical evaluation to recognize the proficiency of MPNNs in approximating link predictors. Following this, we unravel the intrinsic characteristics of 1-layer MPNNs, shedding light on their propensity to act as biased estimators for heuristic link predictors and proposing an unbiased alternative. Ultimately, we cast light on how successive rounds of message passing can estimate the number of walks connecting a target node pair with other nodes in the graph. All proofs related to the theorem are provided in Appendix B.
3.1 ESTIMATION VIA MEAN SQUARED ERROR REGRESSION
To explore the capacity of MPNNs in capturing the overlap information inherent in heuristic link predictors, such as CN, AA and RA, we conduct an empirical investigation, adopting the GAE
framework (Kipf & Welling, 2016) with GCN (Kipf & Welling, 2017) and SAGE (Hamilton et al., 2018) as representative encoders. SEAL (Zhang & Chen, 2018), known for its proven proficiency in capturing heuristic link predictors, serves as a benchmark in our comparison. Additionally, we select a non-informative baseline estimation, simply using the mean of the heuristic link predictors on the training sets. The datasets comprise eight non-attributed graphs (more details in Section 5). Given that GNN encoders require node features for the initial representation, we have to generate such features for our non-attributed graphs. We achieve this by sampling from a high-dimensional Gaussian distribution with mean 0 and standard deviation 1. Although one-hot encoding is frequently employed for feature initialization on non-attributed graphs, we forgo this approach due to the associated time and space complexity.
To evaluate the ability of GNNs to estimate CN information, we adopt a training procedure analogous to a conventional link prediction task. However, we reframe the task as a regression problem aimed at predicting heuristic link predictors, rather than a binary classification problem predicting link existence. This shift requires changing the objective function from cross-entropy to Mean Squared Error (MSE). Such an approach allows us to directly observe GNNs’ capacity to approximate heuristic link predictors.
Our experimental findings, depicted in Figure 2, reveal that GCN and SAGE both display an ability to estimate heuristic link predictors, albeit to varying degrees, in contrast to the non-informative baseline estimation. More specifically, GCN demonstrates a pronounced aptitude for estimating RA and nearly matches the performance of SEAL on datasets such as C.ele, Yeast, and PB. Nonetheless, both GCN and SAGE substantially lag behind SEAL in approximating CN and AA. In the subsequent section, we delve deeper into the elements within the GNN models that facilitate this approximation of link predictors while also identifying factors that impede their accuracy.
3.2 Estimation capabilities of GNNs for link predictors
GNNs exhibit the capability of estimating link predictors. In this section, we aim to uncover the mechanisms behind these estimations, hoping to offer insights that could guide the development of more precise and efficient methods for link prediction. We commence with the following theorem:
**Theorem 1.** Let \( G = (V, E) \) be a non-attributed graph and consider a 1-layer GCN/SAGE. Define the input vectors \( X \in \mathbb{R}^{n \times F} \), initialized randomly from a zero-mean distribution with standard deviation \( \sigma_{node} \). Additionally, let the weight matrix \( W \in \mathbb{R}^{F' \times F} \) be initialized from a zero-mean distribution with standard deviation \( \sigma_{weight} \). After performing message passing, for any pair of nodes \( (u, v) \in V \times V \setminus E \), the expected value of their inner product is given by:
\[
\text{GCN: } \mathbb{E}(h_u \cdot h_v) = \frac{C}{\sqrt{\tilde{d}_u \tilde{d}_v}} \sum_{k \in N_u \cap N_v} \frac{1}{\tilde{d}_k}; \quad \text{SAGE: } \mathbb{E}(h_u \cdot h_v) = \frac{C}{\sqrt{d_u d_v}} \sum_{k \in N_u \cap N_v} 1,
\]
where \( \tilde{d}_v = d_v + 1 \) and the constant \( C \) is defined as \( C = \sigma_{node}^2 \sigma_{weight}^2 F F' \).
The theorem suggests that given proper initialization of input vectors and weight matrices, MPNN-based models, such as GCN and SAGE, can adeptly approximate heuristic link predictors. This makes them apt for encapsulating joint structural features of any node pair. Interestingly, SAGE predominantly functions as a CN estimator, whereas the aggregation function in GCN grants it the ability to weigh the count of common neighbors in a way similar to RA. This particular trait of GCN is evidenced by its enhanced approximation of RA, as depicted in Figure 2.
**Quasi-orthogonal vectors.** The GNNs' capability to approximate heuristic link predictors is primarily grounded in the properties of their input vectors in a linear space. When vectors are sampled from a high-dimensional linear space, they tend to be quasi-orthogonal, implying that their inner product is nearly 0 w.h.p. With message-passing, these QO vectors propagate through the graph, yielding a linear combination of QO vectors at each node. The inner product between pairs of QO vector sets essentially echoes the norms of shared vectors while nullifying the rest. Such a trait enables GNNs to estimate CN through message-passing. A key advantage of QO vectors, especially when compared with an orthonormal basis, is their computational efficiency. For a modest linear increment in space dimensions, the number of QO vectors can grow exponentially, given an acceptable margin of error (Kainen & Kůrková, 1993). An intriguing observation is that the orthogonality of QO vectors remains intact even after GNNs undergo linear transformations post message-passing,
attributed to the randomized weight matrix initialization. This mirrors the dimension reduction observed in random projection (Johnson & Lindenstrauss, 1984).
**Limitations.** While GNNs manifest a marked ability to estimate heuristic link predictors, they are not unbiased estimators and can be influenced by factors such as node pair degrees, thereby compromising their accuracy. Another challenge when employing such MPNNs is their limited generalization to unseen nodes: neural networks trained on randomly generated vectors may struggle to transform the novel random vectors of nodes newly added to the graph. Using random vectors as node representations also violates the permutation-invariance principle of GNNs. Generalizability could be strengthened by regarding these randomly generated vectors as node signatures rather than node features, circumventing the use of MLPs on them.
**Unbiased estimator.** Addressing the biased element in Theorem 1, we propose the subsequent instantiation for the message-passing functions:
$$h_{v}^{(l+1)} = \sum_{u \in N_v} h_u^{(l)}. \quad (3)$$
Such an implementation aligns with the SAGE model that employs sum aggregation devoid of self-node propagation. This methodology also finds mention in DotHash (Nunes et al., 2023), serving as a cornerstone for our research. With this message-passing design, the inner product of any node pair's signatures can estimate CN without bias:
**Theorem 2.** Let $G = (V, E)$ be a graph, and let the vector dimension be given by $F \in \mathbb{N}_+$. Define the input vectors $X = (X_{i,j})$, which are initialized from a random variable $x$ having a mean of 0 and a standard deviation of $\frac{1}{\sqrt{F}}$. Using the 1-layer message-passing in Equation 3, for any pair of nodes $(u,v) \in V \times V$, the expected value and variance of their inner product are:
$$E(h_u \cdot h_v) = CN(u,v),$$
$$Var(h_u \cdot h_v) = \frac{1}{F} (d_u d_v + CN(u,v)^2 - 2CN(u,v)) + FVar(x^2)CN(u,v).$$
Though this estimator provides an unbiased estimate of CN, its accuracy can be affected by its variance. Specifically, DotHash recommends sampling the input vectors from the vertices of a hypercube with unit vector length, which curtails variance given that $Var(x^2) = 0$. However, the variance induced by the graph structure is not adequately addressed; we delve into this issue in Section 4.
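The following NumPy sketch illustrates Theorem 2 under the hypercube initialization discussed above (a toy Erdős–Rényi graph stands in for a real benchmark): after one round of sum aggregation as in Equation 3, the inner product of two nodes' signatures concentrates around their exact common-neighbor count.

```python
import numpy as np

rng = np.random.default_rng(0)
n, F = 300, 4096                       # nodes, signature dimension
A = rng.random((n, n)) < 0.1           # toy Erdos-Renyi graph
A = np.triu(A, 1); A = A | A.T         # undirected, no self-loops
Af = A.astype(float)

X = rng.choice([-1.0, 1.0], size=(n, F)) / np.sqrt(F)   # QO signatures
H = Af @ X                             # Equation 3: sum over neighbors

u, v = 0, 1
estimate = H[u] @ H[v]                 # unbiased estimate of CN(u, v)
exact = int((A[u] & A[v]).sum())       # exact common-neighbor count
print(f"estimate = {estimate:.2f}, exact = {exact}")
```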
**Orthogonal node attributes.** Both Theorem 1 and Theorem 2 underscore the significance of quasi orthogonality in input vectors, enabling message-passing to efficiently count CN. Intriguingly, in most attributed graphs, node attributes, often represented as bag-of-words (Purchase et al., 2022), exhibit inherent orthogonality. This brings forth a critical question: In the context of link prediction, do GNNs primarily approximate neighborhood overlap, sidelining the intrinsic value of node attributes? We earmark this pivotal question for in-depth empirical exploration in Appendix C, where we find that random vectors as input to GNNs can catch up with or even outperform node attributes.
3.3 Multi-layer message passing
Theorem 2 elucidates the estimation of CN based on a single iteration of message passing. This section explores the implications of multiple message-passing iterations and the properties inherent to the iteratively updated node signatures. We begin with a theorem delineating the expected value of the inner product for two nodes’ signatures derived from any iteration of message passing:
**Theorem 3.** Under the conditions defined in Theorem 2, let $h_u^{(l)}$ denote the vector for node $u$ after the $l$-th message-passing iteration. We have:
$$E(h_u^{(p)} \cdot h_v^{(q)}) = \sum_{k \in V} |\text{walks}^{(p)}(k,u)||\text{walks}^{(q)}(k,v)|,$$
where $|\text{walks}^{(l)}(u,v)|$ counts the number of length-$l$ walks between nodes $u$ and $v$.
This theorem posits that the message-passing procedure computes the number of walks between the target node pair and all other nodes. In essence, each message-passing trajectory mirrors the path
of the corresponding walk. As such, $h_u^{(l)}$ aggregates the initial QO vectors originating from nodes reachable by length-$l$ walks from node $u$. In instances where multiple length-$l$ walks connect node $k$ to $u$, the associated QO vector $X_{k,u}$ is incorporated into the sum $|\text{walks}^{(l)}(k,u)|$ times.
One might surmise a paradox, given that message-passing calculates the number of walks, not nodes. However, in a simple graph devoid of self-loops, where at most one edge can connect any two nodes, it is guaranteed that $|\text{walks}^{(1)}(u,v)| = 1$ iff $\text{SPD}(u,v) = 1$. Consequently, the quantity of length-1 walks to a target node pair equates to CN, a first-order heuristic. It’s essential to recognize, however, that $|\text{walks}^{(l)}(u,v)| \geq 1$ only implies $\text{SPD}(u,v) \leq l$. This understanding becomes vital when employing message-passing for estimating the local structure of a target node pair in Section 4.
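As a quick sanity check of Theorem 3, the sketch below (illustrative, with an arbitrary random graph) compares the inner product of signatures after \(p = 1\) and \(q = 2\) message-passing iterations against the exact walk count \(\sum_{k} |\text{walks}^{(1)}(k,u)||\text{walks}^{(2)}(k,v)| = (A^1 A^2)_{uv}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, F = 200, 8192
A = rng.random((n, n)) < 0.05
A = np.triu(A, 1); A = (A | A.T).astype(float)

X = rng.choice([-1.0, 1.0], size=(n, F)) / np.sqrt(F)
H1 = A @ X                 # signatures after one iteration (p = 1)
H2 = A @ H1                # signatures after two iterations (q = 2)

u, v = 0, 1
estimate = H1[u] @ H2[v]
exact = (A @ np.linalg.matrix_power(A, 2))[u, v]   # (A^1 A^2)_{uv}
print(f"estimate = {estimate:.1f}, exact = {exact:.0f}")
```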
4 METHOD
In this section, we introduce our novel link prediction model, denoted MPLP. Distinctively designed, MPLP leverages the pure essence of the message-passing mechanism to adeptly learn structural information. Not only does MPLP encapsulate the local structure of the target node pair by assessing node counts based on varying shortest-path distances, but it also pioneers the estimation of the count of triangles linked to either node of the target pair—an ability traditionally deemed unattainable for GNNs (Chen et al., 2020).
Node representation. While MPLP is specifically designed for its exceptional structural capture, it also embraces the inherent attribute associations of graphs that speak volumes about individual node characteristics. To fuse the attributes (if they exist in the graph) and structures, MPLP begins with a GNN, utilized to encode node $u$’s representation: $GNN(u) \in \mathbb{R}^F$. This node representation will be integrated into the structural features when constructing the QO vectors. Importantly, this encoding remains flexible, permitting the choice of any node-level GNN.
4.1 QO VECTORS CONSTRUCTION
Probabilistic hypercube sampling. Though deterministic avenues for QO vector construction are documented (Kainen, 1992; Kainen & Kůrková, 2020), our preference leans toward probabilistic techniques for their inherent simplicity. We inherit the sampling paradigm from DotHash (Nunes et al., 2023), where each node $k$ is assigned a node signature $h_k^{(0)}$, acquired via random sampling from the vertices of an $F$-dimensional hypercube with unit vector norms. Consequently, the sampling space for $h_k^{(0)}$ becomes $\{-1/\sqrt{F}, 1/\sqrt{F}\}^F$.
Harnessing One-hot hubs for variance reduction. The stochastic nature of our estimator brings along an inevitable accompaniment: variance. Theorem 2 elucidates that a graph’s topology can augment estimator variance, irrespective of the chosen QO vector distribution. At the heart of this issue is the imperfectness of quasi-orthogonality. While a pair of vectors might approach orthogonality, the same cannot be confidently said for the subspaces spanned by larger sets of QO vectors.
Capitalizing on the empirical observation that real-world graphs predominantly obey a power-law degree distribution (Barabási & Albert, 1999), we devise a strategy to control variance. Leveraging the prevalence of high-degree nodes—or hubs—we designate unique one-hot vectors for the foremost hubs. Consider the graph's top-$b$ hubs; while other nodes draw their QO vectors from a hypercube $\{-1/\sqrt{F-b}, 1/\sqrt{F-b}\}^{F-b} \times \{0\}^b$, these hubs are assigned one-hot vectors from $\{0\}^{F-b} \times \{0, 1\}^b$, reserving a distinct subspace of the linear space to safeguard orthogonality. Note that when new nodes are added to the graph, their QO vectors are sampled the same way as for the non-hub nodes, which ensures tractable computational complexity.
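A hedged sketch of this construction follows; the helper name and the degree-based hub selection are ours, but the two sampling subspaces match the description above:

```python
import numpy as np

def build_signatures(degrees, F, b, rng):
    """Hypercube signatures for non-hubs, one-hot signatures for top-b hubs."""
    n = len(degrees)
    H = np.zeros((n, F))
    hubs = np.argsort(degrees)[-b:]               # top-b nodes by degree
    non_hubs = np.setdiff1d(np.arange(n), hubs)
    # non-hubs live in {-1/sqrt(F-b), +1/sqrt(F-b)}^(F-b) x {0}^b
    H[non_hubs, : F - b] = rng.choice(
        [-1.0, 1.0], size=(len(non_hubs), F - b)) / np.sqrt(F - b)
    # hubs get one-hot vectors in the reserved last b coordinates
    H[hubs, F - b + np.arange(b)] = 1.0
    return H

rng = np.random.default_rng(0)
H = build_signatures(degrees=np.array([9, 1, 2, 8, 3]), F=16, b=2, rng=rng)
```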
Norm rescaling to facilitate weighted counts. Theorem 1 alludes to an intriguing proposition: the estimator’s potential to encapsulate not just CN, but also RA. Essentially, RA and AA are nuanced heuristics translating to weighted enumerations of shared neighbors, based on their node degrees. In Theorem 2, such counts are anchored by vector norms during dot products. MPLP enhances this count methodology by rescaling node vector norms, drawing inspiration from previous works [Nunes et al., 2023; Yun et al., 2021]. This rescaling is determined by the node’s representation, GNN(u), and its degree \(d_u\). The rescaled vector is formally expressed as:
\[
\tilde{h}_k^{(0)} = f(\text{GNN}(k)||[d_k]) \cdot h_k^{(0)},
\]
where \(f : \mathbb{R}^{F_x+1} \rightarrow \mathbb{R}\) is an MLP mapping the node representation and degree to a scalar, enabling the flexible weighted count paradigm.
4.2 Structural feature estimations
Node label estimation. The estimator in Theorem 2 can effectively quantify CN. Nonetheless, relying solely on CN fails to encompass the diverse topological structures embedded within the local neighborhood. To offer a richer representation, we turn to Distance Encoding (DE) (Li et al., 2020). DE acts as an adept labeling tool (Zhang et al., 2021), demarcating nodes based on their shortest-path distances relative to a target node pair. For a given pair \((u, v)\), a node \(k\) belongs to DE\((p, q)\) iff \(\text{SPD}(u, k) = p\) and \(\text{SPD}(v, k) = q\). Rather than using DE as node labels, we enumerate these labels, producing a link feature defined by \(\#(p, q) = |\text{DE}(p, q)|\). Our model adopts a philosophy akin to ELPH (Chamberlain et al., 2022), albeit with a distinct node-estimation mechanism.
Returning to Theorem 3, we recall that message-passing as in Equation 3 essentially corresponds to walks. Our ambition to enumerate nodes necessitates a single-layer message-passing alteration, reformulating Equation 3 to:
\[
\eta_v^s = \sum_{k \in N_v^s} \tilde{h}_k^{(0)}.
\]
Here, \(N_v^s\) pinpoints \(v\)'s shortest-path neighborhoods distanced by the shortest-path \(s\). This method sidesteps the duplication dilemma highlighted in Theorem 3, ensuring that \(\eta_v^s\) aggregates at most one QO vector per node. Similar strategies are explored in [Abboud et al., 2022; Feng et al., 2022].
For tractable computation, we cap the largest shortest-path distance at \(r\), with \(r \geq \max(p, q)\). Consequently, to capture the varied proximities of nodes to the target pair \((u, v)\), we can deduce:
\[
\#(p, q) =
\begin{cases}
\mathbb{E}(\eta_u^p \cdot \eta_v^q), & r \geq p, q \geq 1, \\
|N_v^q| - \sum_{1 \leq s \leq r} \#(s, q), & p = 0, \\
|N_u^p| - \sum_{1 \leq s \leq r} \#(p, s), & q = 0.
\end{cases}
\]
Concatenating the resulting estimates yields the expressive structural features of MPLP.
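The sketch below illustrates the estimation of \(\#(p, q)\) for \(p, q \geq 1\): BFS shells provide the shortest-path neighborhoods \(N_v^s\), each \(\eta_v^s\) sums the signatures in a shell, and inner products give the label counts. The helper names are ours, not the paper's API.

```python
import numpy as np
from collections import deque

def bfs_shells(nbrs, v, r):
    """shells[s-1] = set of nodes at shortest-path distance s from v, s = 1..r."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for y in nbrs[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return [{k for k, d in dist.items() if d == s} for s in range(1, r + 1)]

def eta(shells, H):
    # eta_v^s: sum of signatures over the shell N_v^s (zero vector if empty)
    return [H[list(shell)].sum(axis=0) if shell else np.zeros(H.shape[1])
            for shell in shells]

def label_counts(nbrs, H, u, v, r=2):
    eu = eta(bfs_shells(nbrs, u, r), H)
    ev = eta(bfs_shells(nbrs, v, r), H)
    return {(p, q): float(eu[p - 1] @ ev[q - 1])
            for p in range(1, r + 1) for q in range(1, r + 1)}

# toy graph: a 4-cycle 0-1-3-2-0; pair (0, 3) has common neighbors {1, 2}
nbrs = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
H = np.random.default_rng(0).choice([-1.0, 1.0], size=(4, 2048)) / np.sqrt(2048)
print(label_counts(nbrs, H, 0, 3))   # #(1, 1) should be close to 2
```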
Shortcut removal. The intricately designed structural features improve the expressiveness of MPLP. However, this augmented expressiveness introduces susceptibility to distribution shifts in link prediction tasks (Dong et al., 2022). Consider a scenario wherein the neighborhood of a target node pair contains a node \(k\) that resides a single hop away from one of the target nodes but requires multiple steps to reach the other. When such a target node pair is a positive instance in the training data (indicative of an existing link), node \(k\) can exploit the closer target node and the link between the target nodes as a shortcut to the farther one. Consequently, for training-set positive instances, the maximum shortest-path distance from any neighboring node to the target pair is at most the smaller of the two distances plus one. This can engender a discrepancy between the training and testing distributions, potentially diminishing the model's generalization capability.
To circumvent this pitfall, we adopt an approach similar to preceding works (Zhang & Chen, 2018; Yin et al., 2022; Wang et al., 2023; Jin et al., 2022). Specifically, we exclude target links from the original graph during each training batch, as shown by the dashed line in Figure 3. This maneuver ensures these links are not utilized as shortcuts, thereby preserving the fidelity of link feature construction.
Table 1: Link prediction results on non-attributed benchmarks evaluated by Hits@50. The format is average score ± standard deviation. The top three models are colored by First, Second, Third.
| | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|
| CN | 80.52±4.07 | 74.00±1.98 | 37.22±3.52 | 72.60±3.85 | 47.67±10.87 | 11.57±0.55 | 9.38±1.05 | 51.74±2.70 |
| AA | 85.51±2.25 | 74.00±1.98 | 39.48±3.52 | 73.62±1.01 | 58.34±2.88 | 11.57±0.55 | 9.38±1.05 | 68.13±1.61 |
| RA | 85.95±1.83 | 74.00±1.98 | 38.94±3.52 | 73.62±1.01 | 61.47±4.59 | 11.57±0.55 | 9.38±1.05 | 74.45±0.55 |
| GCN | 73.29±4.70 | 78.32±2.57 | 37.32±4.69 | 73.15±2.41 | 40.68±5.45 | 15.40±2.90 | 24.42±4.59 | 61.02±11.91 |
| SAGE | 83.81±3.09 | 56.62±9.41 | 47.26±2.53 | 71.06±5.12 | 58.97±4.77 | 6.89±0.95 | 42.25±4.32 | 75.60±2.40 |
| SEAL | 90.47±3.00 | 86.59±3.03 | 44.47±2.86 | 83.92±1.17 | 64.80±4.23 | 31.46±3.25 | 61.00±10.10 | 83.42±1.01 |
| Neo-GNN| 86.07±1.96 | 83.54±3.92 | 44.04±1.89 | 83.14±0.73 | 63.22±4.32 | 21.98±4.62 | 42.81±4.13 | 73.76±1.94 |
| ELPH | 87.60±1.49 | 88.49±2.14 | 46.91±2.21 | 82.74±1.19 | 64.45±3.91 | 26.61±1.73 | 61.07±3.06 | 75.25±1.44 |
| NCNC | 86.16±1.77 | 83.18±3.17 | 46.85±3.18 | 82.00±0.97 | 60.49±5.09 | 23.28±1.55 | 52.45±8.77 | 83.94±1.57 |
| MPLP | 92.05±1.20 | 89.47±1.98 | 52.55±2.90 | 85.36±0.68 | 74.29±2.78 | 32.25±1.43 | 60.83±1.97 | 87.11±0.83 |
4.3 Triangle estimations
Constructing the structural feature with DE can provably enhance the expressiveness of the link prediction model (Li et al., 2020; Zhang et al., 2021). However, there remain prominent cases that the labelling trick fails to capture. Since the labelling trick only considers the relationship between the neighbors and the target node pair, it can miss the subtleties of intra-neighbor relationships. For example, the nodes of DE(1, 1) in Figure 3 exhibit different local structures. Nevertheless, a labelling trick like DE treats them equally, which makes the model overlook the triangle substructure in the neighborhood. Chen et al. (2020) discuss the challenge of counting such a substructure within a pure message-passing framework. We next give an implementation of message-passing that approximates the triangle counts linked to a target node pair—equivalent in complexity to conventional MPNNs.
For a triangle to form, two nodes must connect with each other and the target node. Key to our methodology is recognizing the obligatory presence of length-1 and length-2 walks to the target node. Thus, according to Theorem 3, our estimation can formalize as:
\[
\#(\triangle_u) = \frac{1}{2} \mathbb{E} \left( \tilde{h}_u^{(1)} \cdot \tilde{h}_u^{(2)} \right).
\]
Augmenting the node label counts with triangle estimates gives rise to a more expressive structural feature set of MPLP.
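A minimal sketch of the triangle estimator (illustrative only; we use plain QO signatures without the norm rescaling of Section 4.1) checks the estimate against the exact count \((A^3)_{uu}/2\):

```python
import numpy as np

rng = np.random.default_rng(2)
n, F = 300, 8192
A = rng.random((n, n)) < 0.08
A = np.triu(A, 1); A = (A | A.T).astype(float)

X = rng.choice([-1.0, 1.0], size=(n, F)) / np.sqrt(F)
H1 = A @ X                 # one round of sum aggregation
H2 = A @ H1                # two rounds

u = 0
estimate = 0.5 * (H1[u] @ H2[u])
# (A^3)_{uu} counts each triangle incident to u twice
exact = 0.5 * np.linalg.matrix_power(A, 3)[u, u]
print(f"estimate = {estimate:.1f}, exact = {exact:.0f}")
```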
Feature integration for link prediction.
Having procured the structural features, we proceed to formulate the encompassing link representation for a target node pair \((u, v)\) as:
\[
h_{(u,v)} = (\text{GNN}(u) \odot \text{GNN}(v)) || [\#(1,1), \ldots, \#(r,r), \#(\triangle_u), \#(\triangle_v)],
\]
which can be fed into a classifier for a link prediction between nodes \((u, v)\).
5 Experiments
Datasets, baselines and experimental setup
We evaluate our approach on a diverse set of 8 non-attributed and 5 attributed graph benchmarks. In the absence of predefined train/test splits, links are partitioned into train, validation, and test splits following a 70/10/20 percentage split. Our comparison spans three categories of link prediction models: (1) heuristic-based methods encompassing CN, AA, and RA; (2) node-level models like GCN and SAGE; and (3) link-level models, including SEAL, Neo-GNN (Yun et al., 2021), ELPH (Chamberlain et al., 2022), and NCNC (Wang et al., 2023). Each experiment is conducted 10 times, with the average score and standard deviation reported using the Hits@50 metric, a well-accepted standard for the link prediction task (Hu et al., 2021). We limit the number of hops to \(r = 2\), which strikes a good balance between performance and efficiency. A comprehensive description of the experimental setup is available in Appendix B.
Results
Performance metrics are presented in Table 1 and Table 2. MPLP outperforms other models on 12 of the 13 benchmarks. On non-attributed graphs, MPLP takes the lead on 7 out of the 8 datasets, followed by SEAL and ELPH. On attributed graphs, MPLP leads on all 5 datasets. Notably, MPLP consistently demonstrates superior results across a wide range of graph domains, with a performance advantage ranging from 2% to 10% in Hits@50 over the closest competitors. More ablation studies can be found in Appendix D.
Table 2: Link prediction results on attributed benchmarks evaluated by Hits@50. The format is average score ± standard deviation. The top three models are colored by First, Second, Third.
| | CS | Physics | Computers | Photo | Collab |
|-------|--------|---------|-----------|---------|--------|
| CN | 51.04±15.96 | 61.40±11.12 | 21.95±7.00 | 29.33±2.74 | 61.37±10.00 |
| AA | 68.26±11.28 | 70.98±11.96 | 26.96±12.08 | 37.35±2.65 | 64.35±10.00 |
| RA | 68.25±11.29 | 72.29±11.69 | 28.05±11.29 | 40.77±3.41 | 64.00±10.00 |
| GCN | 66.00±10.90 | 73.71±2.28 | 22.95±10.58 | 28.14±1.81 | 35.53±2.39 |
| SAGE | 57.79±18.23 | 74.10±2.51 | 33.79±11.11 | 46.01±1.83 | 36.82±2.41 |
| SEAL | 60.30±16.76 | 74.27±2.68 | 30.48±2.07 | 49.08±3.27 | 64.75±10.43 |
| Neo-GNN | 71.10±11.69 | 72.33±13.33 | 22.76±3.53 | 44.85±3.23 | 65.52±10.43 |
| ELPH | 72.26±2.58 | 76.80±2.73 | 29.01±1.66 | 43.51±3.47 | 65.94±0.58 |
| NCNC | 74.65±2.23 | 75.96±1.73 | 36.48±4.16 | 47.98±2.36 | 66.61±0.71 |
| MPLP | 76.40±1.44 | 76.00±2.91 | 40.51±2.91 | 56.50±2.82 | 67.05±0.51 |
Figure 4: Evaluation of model size and inference time on Collab. The inference time encompasses the entire cycle within a single epoch.
Model size and inference time A separate assessment focuses on the trade-off between model size and inference time using the Collab dataset, with findings presented in Figure 4. Observing the prominent role of graph structure in link prediction performance on Collab, we introduce a streamlined version of our model, termed MPLP(no feat). This variant solely capitalizes on structural features, resulting in a compact model with merely 260 parameters. Nevertheless, its efficacy rivals that of models which are orders of magnitude larger. Furthermore, MPLP’s inference time for a single epoch ranks among the quickest in state-of-the-art approaches, underscoring its efficiency both in terms of time and memory footprint. More details can be found in Appendix B.3.
Estimation accuracy We investigate the precision of MPLP in estimating #(p, q), which denotes the count of node labels, using the Collab dataset. The outcomes of this examination are illustrated in Figure 5. Although ELPH possesses the capability to approximate these counts utilizing techniques like MinHash and Hyperloglog, our method exhibits superior accuracy. Moreover, ELPH runs out of memory when the dimension is larger than 3000. Remarkably, deploying a one-hot encoding strategy for the hubs further bolsters the accuracy of MPLP, concurrently diminishing the variance introduced by inherent graph structures. An exhaustive analysis, including time efficiency considerations, is provided in Appendix D.1.
6 CONCLUSION
In this work, we delved into the potential of message-passing GNNs to encapsulate joint structural features of graphs. Stemming from this investigation, we introduced a novel link prediction paradigm that consistently outperforms state-of-the-art baselines across a varied suite of graph benchmarks. The inherent capability to adeptly capture structures enhances the expressivity of GNNs, all while maintaining their computational efficiency. Our findings hint at a promising avenue for elevating the expressiveness of GNNs through probabilistic approaches.
REFERENCES
Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The Surprising Power of Graph Neural Networks with Random Node Initialization, 2021. eprint: 2010.01179.
Ralph Abboud, Radoslav Dimitrov, and Ismail Ilkan Ceylan. Shortest Path Networks for Graph Property Prediction. November 2022. URL https://openreview.net/forum?id=mWzWvMxUFG1.
Robert Ackland and others. Mapping the US political blogosphere: Are conservative bloggers more prominent? In BlogTalk Downunder 2005 Conference, Sydney, 2005.
Lada A. Adamic and Eytan Adar. Friends and neighbors on the Web. Social Networks, 25(3):211–230, 2003. ISSN 0378-8733. doi: https://doi.org/10.1016/S0378-8733(03)00009-1. URL https://www.sciencedirect.com/science/article/pii/S0378873303000091.
Albert-László Barabási and Réka Albert. Emergence of Scaling in Random Networks. Science, 286(5439):509–512, 1999. doi: 10.1126/science.286.5439.509. URL https://www.science.org/doi/abs/10.1126/science.286.5439.509.
Vladimir Batagelj and Andrej Mrvar. Pajek datasets website, 2006. URL http://vlado.fmf.uni-lj.si/pub/networks/data/.
Sergey Brin and Lawrence Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine. Computer Networks, 30:107–117, 1998. URL http://www-db.stanford.edu/~backrub/google.html.
Benjamin Paul Chamberlain, Sergey Shirobokov, Emanuele Rossi, Fabrizio Frasca, Thomas Markovich, Nils Yannick Hammerla, Michael M. Bronstein, and Max Hansmire. Graph Neural Networks for Link Prediction with Subgraph Sketching. September 2022. URL https://openreview.net/forum?id=mloqEOAozQU.
Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can Graph Neural Networks Count Substructures? arXiv:2002.04025 [cs, stat], October 2020. URL http://arxiv.org/abs/2002.04025. arXiv: 2002.04025.
Kaiwen Dong, Yijun Tian, Zhichun Guo, Yang Yang, and Nitesh Chawla. FakeEdge: Alleviate Dataset Shift in Link Prediction. In The First Learning on Graphs Conference (LOG), 2022. URL https://openreview.net/forum?id=QDNOjSXuvtX.
Jiarui Feng, Yixin Chen, Fuhai Li, Anindya Sarkar, and Muhan Zhang. How Powerful are K-hop Message Passing Graph Neural Networks. May 2022. URL https://openreview.net/forum?id=nN3aVRQsxGd.
Matthias Fey and Jan E. Lenssen. Fast Graph Representation Learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
Fabrizio Frasca, Beatrice Bevilacqua, Michael M. Bronstein, and Haggai Maron. Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries, June 2022. URL http://arxiv.org/abs/2206.11140. arXiv:2206.11140 [cs].
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural Message Passing for Quantum Chemistry. CoRR, abs/1704.01212, 2017. URL http://arxiv.org/abs/1704.01212. arXiv: 1704.01212.
William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. arXiv:1706.02216 [cs, stat], September 2018. URL http://arxiv.org/abs/1706.02216. arXiv: 1706.02216.
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. arXiv:2005.00687 [cs, stat], February 2021. URL http://arxiv.org/abs/2005.00687. arXiv: 2005.00687.
|
RMgqvQGTwH
|
Could the NPG coverage condition be weakened such that it reflects the dependence on the value function class? In the current form, if $\nu$ is generated by some policy on the environment, then $C_{off, \pi^e} \leq C_{npg, \pi^e}$.
|
OFFLINE DATA ENHANCED ON-POLICY POLICY GRADIENT WITH PROVABLE GUARANTEES
Yifei Zhou *
University of California, Berkeley
yifei_zhou@berkeley.edu
Ayush Sekhari *
MIT
sekhari@mit.edu
Yuda Song
Carnegie Mellon University
yudas@cs.cmu.edu
Wen Sun
Cornell University
ws455@cornell.edu
ABSTRACT
Hybrid RL is the setting where an RL agent has access to both offline data and online data by interacting with the real-world environment. In this work, we propose a new hybrid RL algorithm that combines an on-policy actor-critic method with offline data. On-policy methods such as policy gradient and natural policy gradient (NPG) have been shown to be more robust to model misspecification, though they may not be as sample-efficient as methods that rely on off-policy learning. On the other hand, offline methods that depend on off-policy training often require strong assumptions in theory and are less stable to train in practice.
Our new approach integrates a procedure of off-policy training on the offline data into an on-policy NPG framework. We show that our approach, in theory, can obtain a best-of-both-worlds type of result — it achieves the state-of-the-art theoretical guarantees of offline RL when offline-RL-specific assumptions hold, while at the same time maintaining the theoretical guarantees of on-policy NPG regardless of the validity of those assumptions. Experimentally, in challenging rich-observation environments, we show that our approach outperforms a state-of-the-art hybrid RL baseline that relies only on off-policy policy optimization, demonstrating the empirical benefit of combining on-policy and off-policy learning. Our code is publicly available at https://github.com/YifeiZhou02/HNPG.
1 INTRODUCTION
On-policy RL methods, such as direct policy gradient (PG) methods (Williams, 1992; Sutton et al., 1999; Konda & Tsitsiklis, 1999; Kakade, 2001), are a class of successful RL algorithms due to their compatibility with rich function approximation (Schulman et al., 2015), their ability to directly optimize the cost functions of interest, and their robustness to model misspecification (Agarwal et al., 2020). While there are many impressive applications of on-policy PG methods in high-dimensional dexterous manipulation (Akkaya et al., 2019), achieving human-level performance in large-scale games (Vinyals et al., 2019), and finetuning large language models with human feedback (Ouyang et al., 2022), the usage of on-policy PG methods is often limited to settings where one can afford a huge amount of training data. This is largely due to the fact that on-policy PG methods do not reuse old data (i.e., historical data not collected with the current policy being optimized or evaluated).
On the other hand, offline RL asks the question of how to reuse existing data. There are many real-world applications where we have pre-collected offline data (Fan et al., 2022; Grauman et al., 2022), and the goal of offline RL is to learn a high-quality policy purely from offline data. Since offline data typically is generated from sub-optimal policies, offline RL methods rely on off-policy learning (e.g., Bellman-backup-based learning such as Q-learning and Fitted Q Iteration (FQI) (Munos & Szepesvári, 2008)). While the vision of offline RL is promising, making offline RL work in both theory and practice is often challenging. In theory, offline RL methods rely on strong assumptions on the function
*First two authors contributed equally.
approximation (e.g., classic off-policy Temporal Difference (TD) learning algorithms can diverge without strong assumptions such as Bellman completeness (Tsitsiklis & Van Roy, 1996)). In practice, unlike on-policy PG methods, which directly perform gradient ascent on the objective of interest, training a Bellman-backup-based value learning procedure in an off-policy fashion can be unstable (Kumar et al., 2019) and less robust to model misspecification (Agarwal et al., 2020).

In this work, we ask the following question: Can we design an RL algorithm that can achieve the strengths of both on-policy and offline RL methods? We study this question and provide an affirmative answer under the setting of hybrid RL (Ross & Bagnell, 2012; Song et al., 2023), which considers the situation where, in addition to some offline data, the learner can also perform online interactions with the underlying environment to collect fresh data. Prior hybrid RL works focus on the simple approach of mixing both offline data and online data followed by iteratively running off-policy learning algorithms such as FQI (Song et al., 2023) or Soft Actor-Critic (SAC) (Nakamoto et al., 2023; Ball et al., 2023)—both of which are off-policy methods that rely on Bellman backups or TD to learn value functions from off-policy data. We take an alternative approach here by augmenting on-policy PG methods with an off-policy learning procedure on the given offline data. Different from prior work, our new approach combines on-policy learning and off-policy learning, thus achieving a best-of-both-worlds guarantee.

More specifically, on the algorithmic side, we integrate the Fitted Policy Evaluation procedure (Antos et al., 2007) (an off-policy algorithm) into the Natural Policy Gradient (NPG) (Kakade, 2001) algorithm (an on-policy framework). On the theoretical side, we show that when standard assumptions related to offline RL hold, our approach achieves theoretical guarantees similar to those obtained by state-of-the-art theoretical offline RL methods that rely on pessimism or conservatism (Xie et al., 2021), while at the same time always maintaining the theoretical guarantee of the on-policy NPG algorithm, regardless of the validity of the offline-RL-specific assumptions. Thus, our approach still recovers the on-policy result even when the offline component fails.
On the practical side, we verify our approach on the challenging rich-observation combination lock problem (Misra et al., 2020), where the agent has to take the only correct action at each state to get the final optimal reward (see Section 6 for more details). This RL environment has been extensively used in prior works to evaluate an RL algorithm's ability to do representation learning and exploration simultaneously (Zhang et al., 2022b; Song et al., 2023; Agarwal et al., 2023). Besides the standard rich-observation combination lock example, we propose a more challenging variant where the observations are made of real-world images from the Cifar100 dataset (Krizhevsky, 2009). In the Cifar100-augmented combination lock setting, the RL agent can only access images from the training set during training and is tested on images from the test set. Unlike standard Mujoco environments, where the transitions are deterministic and the initial state distribution is narrow, our new setup stress-tests the generalization ability of an RL algorithm when facing real-world images as states. Empirically, on both benchmarks, our approach significantly outperforms baselines such as the pure on-policy method PPO and the hybrid RL approach RLPD (Ball et al., 2023), which relies only on off-policy learning.
2 RELATED WORKS
On-policy RL. On-policy RL refers to algorithms that perform policy improvement or evaluation using the current policy's actions or trajectories. The most notable on-policy methods are the family of direct policy gradient methods, such as REINFORCE (Williams, 1992), Natural Policy Gradient (NPG) (Kakade, 2001), and more recent ones equipped with neural network function approximation such as Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). In general, on-policy methods have some obvious advantages: they directly optimize the objective of interest and they are nicely compatible with general function approximation, which contributes to their success in larger-scale applications (Vinyals et al., 2019; Berner et al., 2019). In addition, Agarwal et al. (2020) demonstrate provable robustness to the "Delusional Bias" (Lu et al., 2018) when only part of the model is well-specified.
Off-policy / offline RL. Off-policy learning uses data from a behavior policy that is not necessarily the current policy being estimated or optimized. Since off-policy methods rely on the idea of bootstrapping (either from the current or the target function), in theory, stronger assumptions are required for successful learning. Foster et al. (2021) showed that realizability alone does not guarantee sample-efficient offline RL (in fact, the lower bound could be arbitrarily large depending on the state space size); stronger conditions such as Bellman completeness are required. It is also well known that in the off-policy setting, when equipped with function approximation, classic TD algorithms are not guaranteed to converge (Tsitsiklis & Van Roy, 1996), and even when they converge, the fixed-point solution of TD can be arbitrarily bad (Scherrer, 2010). In addition, Agarwal et al. (2020) provided counterexamples for the failure of TD/Q-learning-style algorithms on partially misspecified models. These negative results all indicate the challenges of learning with offline or off-policy data. On the other hand, positive results exist under stronger assumptions such as Bellman completeness (Munos & Szepesvári, 2008). While these assumptions are specific to off-policy/offline learning and can be strong, the fact that TD can succeed in practice implies that such conditions can hold, at least approximately, in practice.
**Hybrid RL.** Hybrid RL (Song et al., 2023) defines the setting where the learning agent has access to both an offline dataset and online interaction with the environment. Previous hybrid RL methods (Song et al., 2023; Ross & Bagnell, 2012; Ball et al., 2023; Nakamoto et al., 2023) perform off-policy (or model-based) learning on a mixture of online and offline data. In particular, HyQ (Song et al., 2023) provides theoretical justification for the off-policy approach, but the guarantees presented for HyQ still require standard offline RL conditions to hold (due to the bootstrapping requirement). Our new approach is fundamentally different in algorithm design: although our offline component is still inevitably off-policy and relies on bootstrapping, we perform on-policy learning on the data collected during online interaction without bootstrapping, which gives us a doubly robust result even when the offline-learning-specific assumptions (e.g., Bellman completeness) do not hold. Additionally, we would like to mention that some other works (Gu et al., 2017b;a; Xiao et al., 2023; Zhao et al., 2023; Lee et al., 2021) also explored the possibility of achieving the best of both worlds of on-policy and off-policy learning. Despite achieving empirical success, their theoretical guarantees still require a strong coverage condition on the reset distribution, while this work presents a doubly robust guarantee when either the offline or the on-policy condition holds.
3 PRELIMINARIES
We consider a discounted infinite-horizon MDP \( \mathcal{M} = \{S, A, \gamma, r, \mu_0, P\} \) where \( S, A \) are the state and action spaces, \( \gamma \in (0, 1) \) is the discount factor, \( r(s, a) \in [0, 1] \) is the reward, \( \mu_0 \in \Delta(S \times A) \) is the initial reset distribution over states and actions (i.e., we reset based on a state and action sampled from \( \mu_0 \)), and \( P: S \times A \mapsto \Delta(S) \) is the transition kernel. Note that assuming a reset distribution over the joint space \( S \times A \), in contrast to resetting over \( S \) only, is a standard assumption used in the policy optimization literature, e.g., CPI and NPG (Kakade & Langford, 2002; Agarwal et al., 2021).
As usual, given a policy \( \pi: S \mapsto \Delta(A) \), we denote \( Q^\pi(s, a) \) as the Q function of \( \pi \), and \( V^\pi(s) \) as the value function of \( \pi \). We denote \( d^\pi \in \Delta(S \times A) \) as the average state-action occupancy measure of policy \( \pi \). We denote \( V^\pi = \mathbb{E}_{s_0 \sim \mu_0} V^\pi(s_0) \) as the expected total discounted reward of \( \pi \). We denote \( T^\pi \) as the Bellman operator associated with \( \pi \), i.e., given a function \( f: S \times A \mapsto \mathbb{R} \), we have
\[
T^\pi f(s, a) = r(s, a) + \gamma \mathbb{E}_{s' \sim P(s, a), a' \sim \pi(s')} [f(s', a')].
\]
In the hybrid RL setting, we assume that the learner has access to an offline data distribution \( \nu \), from which it can draw i.i.d. samples \( s, a \sim \nu, r = r(s, a), s' \sim P(s, a) \) to be used for learning (in addition to on-policy online interactions). The assumption that the learner has direct access to \( \nu \) can be easily relaxed by instead giving the learner a dataset \( D \) of samples drawn i.i.d. from \( \nu \). For a given policy \( \pi \), we denote \( d^\pi \) as the average state-action occupancy measure, starting from \( \mu_0 \).
In our algorithm, given a policy \( \pi \), we will draw state-action pairs from the distribution \( d^\pi \) defined as \( d^\pi(s, a) = (1 - \gamma)(\mu_0(s, a) + \sum_{t=1}^{\infty} \gamma^t \Pr^\pi(s_t = s, a_t = a)) \), which can be done by sampling \( h \) with probability proportional to \( \gamma^h \), executing \( \pi \) for \( h \) steps starting from \( (s_0, a_0) \sim \mu_0 \), and returning \( (s_h, a_h) \). Given \( (s, a) \) and \( \pi \), to draw an unbiased estimate of the reward-to-go \( Q^\pi(s, a) \), we execute \( \pi \) starting from \( (s_0, a_0) := (s, a) \); at every time step \( h \), we terminate with probability \( 1 - \gamma \) (otherwise move to \( h + 1 \)); once terminated at step \( h \), we return the sum of the undiscounted rewards \( \sum_{\tau=0}^{h} r_\tau \). This is an unbiased estimate of \( Q^\pi(s, a) \). This kind of procedure is commonly used in on-policy PG methods, such as PG (Williams, 1992), NPG (Kakade, 2001; Agarwal et al., 2020), and CPI (Kakade & Langford, 2002). We refer readers to Algorithm 1 in Agarwal et al. (2021) for details.
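The following Python sketch summarizes both sampling procedures; `env` and `pi` are generic placeholders (an environment with a one-step transition oracle and a stochastic policy), not an interface from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def unbiased_q_estimate(env, pi, s, a, gamma):
    """Return y with E[y] = Q^pi(s, a): roll out pi from (s, a), stopping at
    each step with probability 1 - gamma, and sum the undiscounted rewards."""
    total = 0.0
    while True:
        s, r = env.step(s, a)       # next state and reward r(s, a)
        total += r
        if rng.random() > gamma:    # terminate w.p. 1 - gamma
            return total
        a = pi.sample(s)

def sample_from_d_pi(env, pi, mu0, gamma):
    """Draw (s, a) ~ d^pi: pick horizon h with P(h) proportional to gamma^h,
    run pi for h steps from (s_0, a_0) ~ mu0, and return (s_h, a_h)."""
    s, a = mu0.sample()
    while rng.random() < gamma:     # advance one more step w.p. gamma
        s, _ = env.step(s, a)
        a = pi.sample(s)
    return s, a
```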
4 HYBRID ACTOR-CRITIC
In this section, we present our main algorithm called Hybrid Actor-Critic (HAC), given in Algorithm 1. HAC takes as input the number of rounds \( T \), a value function class \( \mathcal{F} \), an offline data distribution \( \nu \) (or equivalently an offline dataset sampled from \( \nu \)), and a weight parameter \( \lambda \), among other parameters, and returns a policy \( \hat{\pi} \). HAC runs for \( T \) rounds, performing a few simple steps at each round \( t \in [T] \). At the beginning of every round, given a policy \( \pi^t \) computed in the previous rounds, it first invokes the subroutine Hybrid Fitted Policy Evaluation (HPE), given in Algorithm 2,
Algorithm 1 Hybrid Actor-Critic (HAC)
Require: Function class \( \mathcal{F} \), offline data \( \nu \), # of PG iteration \( T \), HPE # of iterations \( K_1, K_2 \), weight parameter \( \lambda \)
1: Initialize \( f^0 \in \mathcal{F} \), set \( \pi^1(a|s) \propto \exp(f^0(s,a)) \).
2: Set \( \eta = (1 - \gamma)\sqrt{\log(A)/T} \).
3: for \( t = 1, \ldots, T \) do
4: Let \( f^t \leftarrow \text{HPE}(\pi^t, \mathcal{F}, K_1, K_2, \nu, \lambda) \).
5: \( \pi^{t+1}(a|s) \propto \pi^t(a|s) \exp(\eta f^t(s,a)), \quad \forall s,a. \)
6: end for
7: Return policy \( \hat{\pi} \sim \text{Uniform}(\{\pi^1, \ldots, \pi^{T+1}\}) \).
Algorithm 2 Hybrid Fitted Policy Evaluation (HPE)
Require: Policy \( \pi \), function class \( \mathcal{F} \), offline distribution \( \nu \), number of iterations \( K_1, K_2 \), weight \( \lambda \)
1: Initialize \( f_0 \in \mathcal{F} \).
2: Sample \( D_{\text{on}} = \{(s,a,y = \hat{Q}^\pi(s,a))\} \) of \( m_{\text{on}} \) many on-policy samples using \( \pi \).
3: Sample \( D_{\text{off}} = \{(s,a,s',r)\} \) of \( m_{\text{off}} \) many offline samples from \( \nu \).
4: for \( k = 1, \ldots, K_2 \) do
5: Solve the square loss regression problem to compute:
\[
f_k \leftarrow \arg\min_{f \in \mathcal{F}} \mathbb{E}_{D_{\text{off}}}\left[(f(s,a) - r - \gamma f_{k-1}(s',\pi(s')))^2\right] + \lambda \mathbb{E}_{D_{\text{on}}}\left[(f(s,a) - y)^2\right]. \tag{1}
\]
6: Sample fresh datasets \( D_{\text{off}} \) and \( D_{\text{on}} \) as in lines 2 and 3 above.
7: end for
8: Return \( \bar{f} = \frac{1}{K_2-K_1} \sum_{k=K_1+1}^{K_2} f_k \), and optionally \( D_{\text{off}} \) and \( D_{\text{on}} \).
to compute an approximation \( f^t \) of the value function \( Q^{\pi^t} \) corresponding to \( \pi^t \). Then, using the function \( f^t \), HAC computes the policy \( \pi^{t+1} \) for the next round using the softmax policy update:
\[
\pi^{t+1}(a|s) \propto \pi^t(a|s) \exp(\eta f^t(s,a)) \quad \forall s \in S,
\]
where \( \eta \) is the step size. This step ensures that the new policy does not change too much compared to the old policy.
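In tabular form, the softmax update above amounts to accumulating the critics into per-state logits, as in this small sketch (ours, for illustration):

```python
import numpy as np

def softmax_policy(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

n_states, n_actions, eta = 4, 3, 0.1
Z = np.zeros((n_states, n_actions))                # uniform initial policy
f_t = np.random.default_rng(0).random((n_states, n_actions))  # critic from HPE
Z += eta * f_t                 # pi^{t+1}(a|s) propto pi^t(a|s) * exp(eta f^t(s,a))
pi_next = softmax_policy(Z)
```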
We next describe the subroutine HPE, our key tool in the HAC algorithm. The HPE algorithm takes as input a policy \( \pi \), a value function class \( \mathcal{F} \), an offline distribution \( \nu \), and a weight parameter \( \lambda \), among other parameters, and outputs a value function \( f \) that approximates \( Q^\pi \) of the input policy \( \pi \). HPE performs \( K_2 \) iterations, where the \( k \)-th iteration computes a function \( f_k \) based on the function \( f_{k-1} \) from the previous iteration. At the \( k \)-th iteration, in order to compute \( f_k \), HPE first collects a dataset \( D_{\text{on}} \) of \( m_{\text{on}} \) on-policy online samples from the input policy \( \pi \), each of which consists of a triplet \( (s,a,y) \) where \( (s,a) \sim d^\pi \) and \( y \) is a stochastic estimate of \( Q^\pi(s,a) \), i.e., it satisfies \( \mathbb{E}[y] = Q^\pi(s,a) \) (e.g., \( y \) can be obtained from a Monte-Carlo rollout). Then, HPE collects a dataset \( D_{\text{off}} \) of \( m_{\text{off}} \) offline samples \( (s,a,s',r) \) from \( \nu \), where \( s' \sim P(\cdot|s,a) \) and \( r \sim r(s,a) \). Finally, HPE computes the estimate \( f_k \) by solving the optimization problem in (1).
The first term in (1) corresponds to minimizing the TD error with respect to \( f_{k-1} \) under the offline data \( D_{\text{off}} \), and the second term corresponds to minimizing the estimation error of \( Q^\pi(s,a) \) under the online dataset \( D_{\text{on}} \). Note that the second term does not rely on the bootstrapping procedure. The relative weight of the two terms is decided by the parameter \( \lambda \), given as an input to the algorithm and chosen via hyperparameter tuning in our experiments. Typically, \( \lambda \in [1,T] \). Finally, after repeating this for \( K_2 \) iterations, HPE outputs \( \bar{f} \), computed by averaging the \( f_k \) produced in the last \( K_2 - K_1 \) iterations\(^1\), where we ignore the first \( K_1 \) iterations to remove the bias due to the initial estimate \( f_0 \).
The key step in HPE is Eq. 1, which consists of an off-policy TD loss and an on-policy least squares regression loss. When the standard offline RL condition of Bellman completeness holds (i.e., the Bayes optimal solution \( T^\pi f_{k-1} \in \mathcal{F} \)), HPE can return a function \( f \) that has the following two properties: (1) \( f \) is an accurate estimate of \( Q^\pi(s,a) \) under \( d^\pi \), thanks to the on-policy least squares loss; (2) \( f \) has a small Bellman residual under the offline distribution \( \nu \).\(^2\) On the other hand, without the Bellman completeness condition, due to the existence of the on-policy regression loss, we can still ensure
\(^1\) Averaging is only needed for our theoretical guarantees. Our experiments in Section 6 do not perform averaging and use the last iterate.
\(^2\) i.e. \( \mathbb{E}_{s,a \sim \nu}[(f(s,a) - r(s,a) - \gamma \mathbb{E}_{s' \sim P(\cdot|s,a),a' \sim \pi}f(s',a'))^2] \)
\( \bar{f} \) is a good estimator of \( Q^\pi \) under \( d^\pi \). This property ensures that we always retain the theoretical guarantees of on-policy NPG. We illustrate these points in more detail in the analysis section.
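For concreteness, a single iteration of the regression step in Eq. 1 could look like the following PyTorch sketch; `f` and `f_prev` stand for \(f_k\) and \(f_{k-1}\), and the batch layout is an assumption on our part rather than the paper's implementation.

```python
import torch

def hpe_regression_step(f, f_prev, opt, D_off, D_on, gamma, lam):
    s, a, r, s_next, a_next = D_off        # offline batch; a_next ~ pi(s')
    s_on, a_on, y = D_on                   # on-policy batch; E[y] = Q^pi(s, a)
    with torch.no_grad():                  # bootstrap target uses frozen f_{k-1}
        target = r + gamma * f_prev(s_next, a_next)
    loss = ((f(s, a) - target) ** 2).mean() \
        + lam * ((f(s_on, a_on) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```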
**Parameterized policies.** Note that HAC uses a softmax policy parameterization, which may be intractable when \( |A| \) is large (since we need to compute a partition function in order to sample from \( \pi^t \)). To circumvent this issue, we also consider a Hybrid Natural Policy Gradient (HNPG) algorithm (Algorithm 3) that directly works with a parameterized policy class \( \Pi = \{ \pi_\theta | \theta \in \Theta \} \), where \( \theta \) is the parameter (e.g., \( \pi_\theta \) can be a differentiable neural-network-based policy). The algorithm is very similar to HAC and runs for \( T \) rounds, where at each round \( t \leq T \), it first computes \( f^t \), an approximation of \( Q^{\pi_{\theta^t}} \), by invoking the HPE procedure. However, it relies on compatible function approximation to update the current policy \( \pi_{\theta^t} \). For working with parameterized policies, HNPG relies on HPE to also supply an offline dataset \( D_{\text{off}} \) of \( m_{\text{off}} \) tuples \( (s, a) \sim \nu \) and an on-policy online dataset \( D_{\text{on}} \) of \( m_{\text{on}} \) tuples \( (s, a) \sim d^\pi \), which it uses to fit the linear critic \( (w^t)^\top \nabla \ln \pi_{\theta^t}(a|s) \) in (2). We then update the current policy parameter via \( \theta^{t+1} = \theta^t + \eta w^t \), similar to the classic NPG update for parameterized policies (Kakade, 2001; Agarwal et al., 2021), except that we fit the linear critic under both online and offline data.

Another way to interpret this update rule is to investigate the form of \( w^t \). Taking the gradient of the objective in (2) with respect to \( w \), setting it to zero, and solving for \( w \), we get that the stationary point is of the form \( \left[ \sum_{s,a} \phi^t(s,a)(\phi^t(s,a))^\top \right]^{-1} \sum_{s,a} \phi^t(s,a)\bar{f}^t(s,a) \). Using the fact that \( \phi^t(s,a) \) is defined to be \( \nabla \ln \pi_{\theta^t}(a|s) \), we see that \( \sum_{s,a} \phi^t(s,a)(\phi^t(s,a))^\top \) is exactly the Fisher information matrix computed using both online and offline data. Thus our new approach extends the parameterized NPG (Kakade, 2001) to the hybrid RL setting in a principled manner.
**Algorithm 3 Hybrid NPG with Parameterized Policies (HNPG)**
Require: Function class \( F \), PG iteration \( T \), PE iterations \( (K_1, K_2) \), offline data \( \nu \), Params \( \lambda, \eta \).
1: Initialize \( f^0 \in F \), and \( \theta^1 \) such that \( \pi_{\theta^1} = \text{Uniform}(A) \).
2: for \( t = 1, \ldots, T \) do
3: \( f^t, D_{\text{off}}, D_{\text{on}} \leftarrow \text{HPE}(\pi_{\theta^t}, F, K_1, K_2, \nu, \lambda) \).
4: Let \( \phi^t(s,a) = \nabla \log \pi_{\theta^t}(a|s) \) and \( \bar{f}^t(s,a) = f^t(s,a) - \mathbb{E}_{a' \sim \pi_{\theta^t}(s)}[f^t(s,a')] \).
5: Solve the square loss regression problem to compute:
\[
w^t \in \arg\min_w \mathbb{E}_{D_{\text{off}}}[(w^\top \phi^t(s,a) - \bar{f}^t(s,a))^2] + \lambda \mathbb{E}_{D_{\text{on}}}[(w^\top \phi^t(s,a) - \bar{f}^t(s,a))^2]. \tag{2}
\]
6: Update \( \theta^{t+1} \leftarrow \theta^t + \eta w^t \).
7: end for
8: Return policy \( \hat{\pi} \sim \text{Uniform}(\{\pi_{\theta^1}, \ldots, \pi_{\theta^{T+1}}\}) \).
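Lines 5–6 of HNPG reduce to a weighted least squares problem over score features, as in this hedged NumPy sketch (array shapes and names are ours):

```python
import numpy as np

def hnpg_update(phi_off, fbar_off, phi_on, fbar_on, theta, lam, eta):
    # Equation (2): min_w ||phi_off w - fbar_off||^2 + lam ||phi_on w - fbar_on||^2,
    # solved by stacking the lambda-weighted on-policy rows into one least squares.
    Phi = np.vstack([phi_off, np.sqrt(lam) * phi_on])
    y = np.concatenate([fbar_off, np.sqrt(lam) * fbar_on])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta + eta * w              # line 6: theta^{t+1} = theta^t + eta w^t

rng = np.random.default_rng(0)
d = 8                                   # policy parameter dimension
theta = np.zeros(d)
theta = hnpg_update(rng.normal(size=(64, d)), rng.normal(size=64),
                    rng.normal(size=(32, d)), rng.normal(size=32),
                    theta, lam=1.0, eta=0.05)
```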
5 THEORETICAL ANALYSIS
In this section, we first present our main theoretical guarantees for our Hybrid Actor-Critic algorithm (HAC), and then proceed to its variant HNPG that works for parameterized policy classes. We start by stating the main assumptions and definitions for function approximation, the underlying MDP, the offline data distribution \( \nu \), and their relation to the prior works. We remark that all our assumptions and definitions are standard, and are frequently used in the RL theory literature (Agarwal et al., 2019).
**Assumption 1 (Realizability).** For any \( \pi \), there exists a \( f \in F \) s.t. \( \mathbb{E}_\pi[(f(s,a) - Q^\pi(s,a))^2] = 0 \).
We consider the following notion of inherent Bellman Error, that will appear in our bounds.
**Definition 1 (Point-wise Inherent Bellman Error).** We say that \( F \) has a point-wise inherent Bellman error \( \varepsilon_{\text{be}} \), if for all \( f \in F \) and policy \( \pi \), there exists a \( f' \in F \) such that \( \| f' - T^\pi f \|_\infty \leq \varepsilon_{\text{be}} \).
Note that when \( \varepsilon_{\text{be}} = 0 \), the above definition implies that for any \( f \in F \) and any policy \( \pi \), the Bellman backup \( T^\pi f \) is in the class \( F \), i.e., \( F \) is Bellman complete. While Bellman completeness is a commonly used assumption in both online (Jin et al., 2021; Xie et al., 2022) and offline RL (Munos & Szepesvári, 2008), our results do not require \( \varepsilon_{\text{be}} = 0 \). In fact, our algorithm enjoys meaningful guarantees, as presented below, even when Bellman completeness does not hold, i.e., \( \varepsilon_{\text{be}} \) could be arbitrarily large.
We next define the coverage for the comparator policy \( \pi^c \), which is a common tool in the analysis of policy gradient methods (Kakade & Langford, 2002; Agarwal et al., 2021).
Definition 2 (NPG Coverage). Given some comparator policy $\pi^e$, we say that it has NPG coverage $C_{\text{npg}, \pi^e}$ if for any policy $\pi$, we have $\left\| \frac{d^{\pi^e}}{d^{\pi}} \right\|_\infty \leq C_{\text{npg}, \pi^e}$, where $d^{\pi^e}$ is the occupancy measure of $\pi^e$.
Note that $C_{\text{npg}, \pi^e} < \infty$ if the reset distribution $\mu_0$ satisfies $\left\| d^{\pi^e} / \mu_0 \right\|_\infty < \infty$, which is a standard assumption used in the policy optimization literature, e.g., CPI and NPG (Kakade & Langford, 2002; Agarwal et al., 2021). This condition intuitively says that the reset distribution has good coverage over $d^{\pi^e}$, making it possible to transfer the squared error under $d^\pi$ of any policy $\pi$ to $d^{\pi^e}$ (since we always have $\left\| \mu_0 / d^\pi \right\|_\infty \leq 1/(1-\gamma)$ for all $\pi$, by the definition of $d^\pi$). Finally, we introduce the Bellman error transfer coefficient, which allows us to control the expected Bellman error under the comparator policy $\pi^e$ in terms of the squared Bellman error under the offline distribution $\nu$.
Definition 3 (Bellman error transfer coefficient). Given the offline distribution $\nu$, for any policy $\pi^e$, we define the Bellman error transfer coefficient as
$$C_{\text{off}, \pi^e} := \max \left\{ 0, \max_{\pi} \max_{f \in F} \frac{\mathbb{E}_{s,a \sim d^{\pi^e}} \left[ T^\pi f(s,a) - f(s,a) \right]}{\sqrt{\mathbb{E}_{s,a \sim \nu} \left( T^\pi f(s,a) - f(s,a) \right)^2}} \right\},$$
where the $\max_{\pi}$ is taken over the set of all stationary policies.
The Bellman error transfer coefficient above was introduced in Song et al. (2023), and is known to be weaker than other related notions considered in prior works, including the density ratio (Kakade & Langford, 2002; Munos & Szepesvári, 2008; Chen & Jiang, 2019; Uehara & Sun, 2021), the all-policy concentrability coefficient (Munos & Szepesvári, 2008; Chen & Jiang, 2019), the squared-Bellman-error-based concentrability coefficient (Xie et al., 2021), the relative condition number for linear MDPs (Uehara et al., 2021; Zhang et al., 2022a), etc. (see Song et al. (2023) for a detailed comparison). Our definition of the Bellman error transfer coefficient involves two policies $\pi^e$ and $\pi$, where $\pi^e$ denotes the comparator policy that we wish to compete with (and is thus fixed), and $\pi$ is used to define the Bellman backups (i.e., the terms $T^\pi f(s,a) - f(s,a)$) that we transfer from the offline distribution $\nu$ to the occupancy measure induced by $\pi^e$. We take a max w.r.t. all possible $\pi$ for the underlying MDP because our analysis proceeds by transferring (from under $\nu$ to $d^{\pi^e}$) the Bellman error terms corresponding to the policies generated by our algorithm, which could be arbitrary.
Theorem 1 (Cumulative suboptimality). Fix any $\delta \in (0, 1)$, and let $\nu$ be an offline data distribution. Suppose the function class $F$ satisfies Assumption 1. Additionally, suppose that the subroutine HPE is run with parameters $K_1 = 4\lceil \log(1/\gamma) \rceil$, $K_2 = K_1 + T$, and $m_{\text{off}} = m_{\text{on}} = \frac{2T \log(2|F|/\delta)}{(1-\gamma)^2}$. Then, with probability at least $1 - \delta$, HAC satisfies the following bounds on cumulative suboptimality w.r.t. any comparator policy $\pi^e$:
- Under approximate Bellman completeness (when $\varepsilon_{\text{be}} \leq 1/T$):
$$\sum_{t=1}^{T} V^{\pi^e} - V^{\pi^t} \leq O \left( \frac{1}{(1-\gamma)^2} \sqrt{\log(A)T} + \frac{1}{(1-\gamma)^2} \sqrt{\min \left\{ C_{\text{npg}, \pi^e}, C_{\text{off}, \pi^e}^2 \right\} \cdot T} \right).$$
- Without Bellman completeness (when $\varepsilon_{\text{be}} > 1/T$):
$$\sum_{t=1}^{T} V^{\pi^e} - V^{\pi^t} \leq O \left( \frac{1}{(1-\gamma)^2} \sqrt{\log(A)T} + \frac{1}{(1-\gamma)^2} \sqrt{C_{\text{npg}, \pi^e} T} \right).$$
where $\pi^t$ denotes the policy at round $t$.
The above shows that as $T$ increases, the average cumulative suboptimality $(\sum_{t=1}^{T} V^{\pi^e} - V^{\pi^t})/T$ converges to 0 at a rate of at least $O(1/\sqrt{T})$. Thus, our algorithm eventually learns to compete with any comparator policy $\pi^e$ that has bounded $C_{\text{npg}, \pi^e}$ (or bounded $C_{\text{off}, \pi^e}$ when $\varepsilon_{\text{be}} \leq 1/T$). Furthermore, our algorithm exhibits a best-of-both-worlds behavior in the sense that it can operate with or without approximate Bellman completeness and enjoys a meaningful guarantee in both cases.
In scenarios when approximate Bellman completeness holds (i.e., $\varepsilon_{\text{be}} \leq 1/T$), the above theorem shows that our algorithm can benefit from access to offline data, and can compete with any comparator policy $\pi^e$ that has a small Bellman error transfer coefficient. This style of bound is typically obtained in pure offline RL by using pessimism, which is often computationally inefficient (Uehara & Sun, 2021; Xie et al., 2021). In comparison, our algorithm relies only on simple primitives like square-loss regression, which can be made computationally efficient under mild assumptions on $F$ (see discussion below). On the practical side, least squares regression is much easier to implement and is even compatible with modern neural networks. Finally, note that, under approximate Bellman completeness and when $C_{\text{off}, \pi^e}^2 \leq C_{\text{npg}, \pi^e}$, our guarantees are similar to those of the HyQ algorithm from Song et al. (2023); however, the performance guarantee for HyQ only holds when Bellman completeness approximately holds (i.e., $\varepsilon_{\text{be}}$ is small) and the problem has small bilinear rank (Du et al., 2021). In comparison, our algorithm enjoys an on-policy NPG style convergence guarantee even when $\varepsilon_{\text{be}}$ or the bilinear rank is large.
When there is no control on the inherent Bellman error, the second bound above holds. Such a bound is typical for policy gradient style algorithms, which do not require any control on $\varepsilon_{\text{be}}$. Again our result is doubly robust in the sense that we still obtain meaningful guarantees when the offline condition does not hold, while previous hybrid RL results like Song et al. (2023) do not have any guarantee when the offline assumptions (that $C_{\text{off}, \pi^e}$ is small for some reasonable $\pi^e$ or $\varepsilon_{\text{be}}$ is small) are not met.
Setting $\pi^e$ to be $\pi^*$ (the optimal policy for the MDP), and using a standard online-to-batch conversion, the above bound implies a sample complexity guarantee for Algorithm 1 for finding an $\varepsilon$-suboptimal policy w.r.t. $\pi^*$. Details are deferred to Section C.1.5.
On the computation side, there are two key steps that need careful consideration: (a) First, the sampling step in line 5 in HAC. Note that for any given $s$, we have that $\pi^t(a | s) \propto \exp(\eta \sum_{\tau=1}^{t-1} f^\tau(s, a))$, so for an efficient implementation, we need the ability to efficiently sample from this distribution. When $|\mathcal{A}|$ is small, this can be trivially done via enumeration. However, when $|\mathcal{A}|$ is large, we may need to resort to parameterized policies in HNPG to avoid computing the partition function. (b) Second, the minimization of (1) in HPE to compute $f_k$ given $f_{k-1}$. Note that (1) is a square loss regression problem in $f$, which can be implemented efficiently in practice. In fact, for various function classes $\mathcal{F}$, explicit guarantees for suboptimality/regret for minimizing the square loss in (1) are well known (Rakhlin & Sridharan, 2014). The above demonstrates the benefit of hybrid RL over online RL and offline RL: by leveraging both offline and online data, we can avoid explicit exploration or conservativeness, making algorithms much more computationally tractable.
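For point (a), a minimal sketch of enumeration-based sampling when $|\mathcal{A}|$ is small (assuming a hypothetical list `critics` holding the fitted functions $f^1, \ldots, f^{t-1}$ as callables):

```python
import numpy as np

def sample_softmax_policy(s, actions, critics, eta, rng):
    """Sample a ~ pi^t(.|s) proportional to exp(eta * sum_tau f^tau(s, a)) by enumeration."""
    logits = eta * np.array([sum(f(s, a) for f in critics) for a in actions])
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    p /= p.sum()                    # the partition function is computed explicitly here
    return actions[rng.choice(len(actions), p=p)]
```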
**Hybrid NPG with Parameterized Policies.** We can provide a similar bound as in Theorem 1 for the HNPG algorithm, given in Algorithm 3, which works with a parameterized policy class $\Pi = \{\pi_\theta | \theta \in \Theta\}$. In particular, we show that HNPG exhibits a best-of-both-worlds behavior in the sense that it can operate with or without approximate Bellman completeness, and in both cases enjoys a meaningful cumulative suboptimality bound. For the theoretical analysis of HNPG, we make the additional assumptions that the policy class is well-parameterized, that $\log \pi_\theta(a | s)$ is smooth w.r.t. the parameter $\theta$, and that $\mathcal{W}$ realizes the appropriate linear critics for all $\pi \in \Pi$; all of these assumptions are standard in the literature on the theoretical analysis of NPG algorithms. We defer the exact details of the assumptions, and the cumulative suboptimality bound for HNPG, to Appendix D.
### 6 EXPERIMENTS
In this section, we describe our empirical comparison of HNPG with other state-of-the-art hybrid RL methods on two challenging rich-observation exploration benchmarks with continuous action spaces. Our experiments are designed to answer the following questions: (1) Is HNPG able to leverage offline data to solve hard exploration problems which cannot be easily solved by pure online on-policy PG methods? (2) In settings where the Bellman completeness condition does not necessarily hold, is HNPG able to outperform other hybrid RL baselines which rely only on off-policy learning?
**Implementation.** The implementation of HNPG largely follows Algorithm 3 and the practical recommendations from TRPO (Schulman et al., 2015). We use a two-layer multi-layer perceptron for the Q-functions and policies, plus an additional feature extractor for the image-based environment. Generalized Advantage Estimation (GAE) (Schulman et al., 2018) is used when calculating online advantages. For NPG-based policy updates, we use the conjugate gradient algorithm followed by a line search to find the policy update direction (a minimal sketch is given below). Following standard combination-lock implementations (Song et al., 2023; Zhang et al., 2022b), we adapt policy evaluation from the discounted to the finite-horizon setting and train separate Q-functions and policies for each timestep. The pseudocode and hyperparameters are provided in Appendix E.
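A minimal sketch of that natural-gradient step, under the assumption that `policy_grad(theta)` returns the (GAE-based) policy gradient and `fvp(v)` returns the Fisher-vector product $Fv$ at the current policy; this is the standard TRPO-style machinery, not a verbatim excerpt of our code:

```python
import numpy as np

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Approximately solve F x = g using only Fisher-vector products."""
    x = np.zeros_like(g)
    r, p = g.copy(), g.copy()
    rs = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rs / (p @ Fp)
        x += alpha * p
        r -= alpha * Fp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def npg_step(theta, policy_grad, fvp, max_kl=0.01):
    """One natural policy gradient update scaled to a KL trust region."""
    g = policy_grad(theta)
    x = conjugate_gradient(fvp, g)               # x approximates F^{-1} g
    step = np.sqrt(2.0 * max_kl / (x @ fvp(x)))  # step size hitting the KL budget
    return theta + step * x                      # a backtracking line search would shrink this further
```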
**Baselines.** We compare HNPG with both pure on-policy and hybrid off-policy actor-critic methods. As the pure on-policy method, we use TRPO (Schulman et al., 2015). As the hybrid off-policy method, we consider RLPD (Ball et al., 2023), a state-of-the-art algorithm on Mujoco benchmarks, and tuned its hyperparameters specifically for this environment (see Appendix E). We tried
training separate actors and critics for each time step and also training a single large actor and critic shared for all time steps, for both TRPO and RLPD, and report the best variant. We found that using a single actor and critic for RLPD resulted in better performance while the opposite holds for TRPO.
Note that imitation learning methods such as Behavior Cloning (BC) (Bain & Sammut, 1995) and pure offline learning methods such as Conservative Q-Learning (CQL) (Kumar et al., 2020) have previously been shown to fail on this benchmark (Song et al., 2023). Hybrid Q-learning methods (Hester et al., 2018; Song et al., 2023) and provable online learning methods for block MDPs (Du et al., 2019; Misra et al., 2020; Zhang et al., 2022b; Mhammedi et al., 2023) do not apply here due to the continuous action space.
Offline distribution. Following Song et al. (2023), we use a suboptimal offline distribution generated by an $\varepsilon$-greedy policy with probability $1 - \varepsilon$ of taking the good action and probability $\varepsilon$ of taking a random action. $\varepsilon$ is taken to be $1/H$ so that this offline distribution has a bounded density ratio w.r.t. the optimal policy. The size of the offline dataset is set to 50000, and around 32%-36% of the trajectories obtain the optimal reward, for $H$ ranging from 5 to 50.
6.1 Continuous Comblock
The left part of Figure 1 provides an illustration of a rich-observation continuous Comblock of horizon $H$ (Misra et al., 2020; Zhang et al., 2022b). For each latent state, there is only one good latent action (out of 10 latent actions) that can lead the agent to the good states (green) in the next time step, while taking any of the other 9 actions will lead the agent to a dead state (orange) from which the agent can never move back to the good states; the reward is available at the good states in the last time step. At every timestep, the agent does not have direct access to the latent state; instead, it receives a high-dimensional observation emitted from the latent state. More details can be found in Appendix E.1. This environment is extremely challenging due to the exploration difficulty and the need to decode latent states from observations, and many popular deep RL baselines are known to fail on it (Misra et al., 2020). Building on this environment, we further make the action space continuous. We consider a 10-dimensional action space where $a \in \mathbb{R}^{10}$. At each timestep, when the agent chooses a 10-dimensional action $a$, the action is passed through a softmax layer, i.e., $p \propto \exp(a)$, where the distribution $p$ encodes the probability of choosing each of the 10 latent actions. A latent action is then sampled from $p$ and the agent transits to the next time step. This continuous Comblock preserves the exploration difficulty, where a uniform exploration strategy only has $\exp(-H)$ probability of obtaining the optimal reward. The continuous action space makes this environment even harder and rules out many baselines that are based on a Q-learning scheme (e.g., HyQ from Song et al. (2023)).
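A minimal sketch of this continuous-action layer (the function name is ours):

```python
import numpy as np

def latent_action(a, rng):
    """Map a continuous action a in R^10 to one of the 10 latent actions via p proportional to exp(a)."""
    z = a - a.max()                      # shift logits for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax distribution over latent actions
    return int(rng.choice(len(a), p=p))  # sample the latent action index
```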
The sample complexities of our algorithm and the baselines are shown in Figure 2; the loss curves are deferred to Appendix E.2. To begin with, we observe that HNPG can reliably solve the continuous Comblock up to horizon 50 with mild sample complexity (50k suboptimal offline samples and around 30m online samples), despite the challenges of the continuous action space. In comparison, TRPO is not able to solve even horizon 5 due to the exploration difficulty of the environment. Although RLPD has the benefit of improved sample complexity (detailed in Appendix E) by reusing past online interactions, it can only solve up to horizon 15. To investigate why off-policy methods cannot solve the continuous Comblock as reliably as HNPG, we examine the critic loss of HNPG and RLPD on both online and offline samples. Notably, although both methods maintain a relatively stable critic loss on the offline samples, the online critic loss is more volatile for RLPD, since it optimizes the TD error (which requires bootstrapping from a target network) while HNPG optimizes the policy evaluation error (a pure supervised learning problem) on online samples. We believe this unstable online critic loss is why off-policy methods fail to learn reliably in this environment. The two objectives are contrasted in the sketch below.
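In the sketch (`q`, `r`, `q_target_next`, and `mc_return` are hypothetical NumPy arrays from a batch), the RLPD-style critic chases a bootstrapped target while the HNPG-style critic solves a fixed-target regression:

```python
import numpy as np

def td_loss(q, r, q_target_next, gamma=0.99):
    """Bootstrapped TD objective (off-policy, RLPD-style): the target moves with the critic."""
    return np.mean((r + gamma * q_target_next - q) ** 2)

def policy_eval_loss(q, mc_return):
    """On-policy evaluation objective (HNPG-style): plain supervised regression onto returns."""
    return np.mean((q - mc_return) ** 2)
```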
6.2 Image-Based Continuous Comblock
To examine the robustness of HNPG when Bellman completeness does not necessarily hold, we carry out experiments on a continuous Comblock built on real-world images, as depicted in the right part of Figure 1. The only difference between an image-based continuous Comblock and a continuous
Figure 2: Comparison of sample complexity of different algorithms in Continuous Comblock and Image-Based Continuous Comblock benchmarks. The number of online samples is averaged over 5 random seeds and the standard deviation is shaded.
Comblock lies in their observation spaces. Specifically, for an image-based continuous Comblock, each latent state is represented by a class in CIFAR-100 (Krizhevsky, 2009), and an observation is generated by randomly sampling a training image from that class. After sampling an image, we obtain the observation by using the "ViT-B/32" CLIP (Radford et al., 2021) image encoder to compute a pre-trained feature (a sketch of this observation process is given below). In addition, in the training environment the image observations are sampled from the training set of CIFAR-100, while in the test environment the image observations are drawn from the validation set of CIFAR-100. Unlike Mujoco-based benchmarks, where transitions are often deterministic and the initial state distribution is narrow, our setting, which uses a real-world supervised learning dataset with a clear train/test split, challenges the algorithms to generalize to unseen test examples.
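A sketch of this observation process, assuming the OpenAI CLIP package (installable from github.com/openai/CLIP) and torchvision's CIFAR-100; the helper name `observe` is ours:

```python
import clip                                   # OpenAI CLIP
import numpy as np
import torch
from torchvision.datasets import CIFAR100

model, preprocess = clip.load("ViT-B/32", device="cpu")
trainset = CIFAR100(root="./data", train=True, download=True)
labels = np.array(trainset.targets)
class_to_images = {c: np.flatnonzero(labels == c) for c in range(100)}

def observe(latent_class, rng):
    """Emit an observation: the CLIP feature of a random training image of the given class."""
    img, _ = trainset[int(rng.choice(class_to_images[latent_class]))]
    with torch.no_grad():
        return model.encode_image(preprocess(img).unsqueeze(0)).squeeze(0)
```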
To get a sense of the inherent Bellman error in Definition 1 for this setting, we conduct supervised learning experiments on CIFAR-100 with the same function classes as used for the actors and critics (on top of the CLIP feature). The resulting top-1 classification accuracy is 77.7% on the training set and 72.1% on the test set, showing that the latent states are not 100% decodable from the pre-trained features using our function class. The fact that our function classes are not rich enough to exactly decode the latent states introduces model misspecification, such that Bellman completeness may not hold.
The sample complexity results are shown in Figure 2 (right) and the loss curves for horizon 5 are shown in Figure 3. First, TRPO again fails to solve horizon 5 due to its inefficient exploration. In this more realistic image-based setting, we observe that RLPD struggles even at horizon 5 and completely fails at horizon 10. In contrast, HNPG not only has a reduced sample complexity for horizon 5 but also reliably solves up to horizon 30 with around 10m online samples. To investigate this contrast, we examine the critic loss of HNPG and RLPD for horizon 5. While the offline critic TD loss stays stable for both HNPG and RLPD, the online critic TD loss explodes for RLPD. This is not surprising: in environments where the Bellman completeness condition does not hold, Bellman-backup-based methods can diverge and become unstable to train. On the other hand, the on-policy training loss for HNPG is small, since the on-policy training is based on supervised-learning-style least squares regression instead of TD-style bootstrapping.
Finally, the train and test learning curves are also reported in Figure 3. Both the training and test curves of HNPG have smaller variance, indicating more stable training, while those of RLPD have larger variance, indicating less stability. More importantly, even though the two methods reach a similar training reward in the best random seed, HNPG achieves a larger margin over RLPD in the test environment (around 0.8 compared to 0.6). This suggests that HNPG generalizes better: RLPD uses the off-policy algorithm SAC, which typically has a much higher updates-to-data ratio (gradient updates per \((s, a, r, s')\) collected), from 1:1 to 10:1, making it prone to overfitting the training data in the replay buffer.
7 CONCLUSION
We propose a new actor-critic style algorithm for the hybrid RL setting. Unlike previous model-free hybrid RL methods that rely only on off-policy learning, our proposed algorithms HAC and (the parameterized version) HNPG perform on-policy learning over the online data together with an off-policy learning procedure using the offline data. Thus, our algorithms achieve best-of-both-worlds guarantees. In particular, our algorithms achieve the state-of-the-art theoretical guarantees of offline RL when offline-RL-specific assumptions (e.g., Bellman completeness and offline distribution coverage) hold, while at the same time enjoying the theoretical guarantees of on-policy policy gradient methods regardless of the offline RL assumptions' validity. Our experimental results show that HNPG can indeed outperform the pure on-policy method, and stays robust to the lack of the Bellman completeness condition in practice; in the latter scenario, other off-policy hybrid RL algorithms fail. Future research directions include sharpening the rates in our theoretical bounds and applying our algorithmic ideas to large-scale applications.
ACKNOWLEDGEMENTS
We thank Akshay Krishnamurthy and Drew Bagnell for useful discussions. AS acknowledges support from the Simons Foundation and NSF through award DMS-2031883, as well as from the DOE through award DE-SC0022199.
REFERENCES
Alekh Agarwal, Nan Jiang, and Sham M Kakade. Reinforcement learning: Theory and algorithms. 2019.
Alekh Agarwal, Mikael Henaff, Sham Kakade, and Wen Sun. PC-PG: Policy cover directed exploration for provable policy gradient learning. Advances in Neural Information Processing Systems, 2020.
Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. The Journal of Machine Learning Research, 22(1):4431–4506, 2021.
Alekh Agarwal, Yuda Song, Wen Sun, Kaiwen Wang, Mengdi Wang, and Xuezhou Zhang. Provable benefits of representational transfer in reinforcement learning. In Gergely Neu and Lorenzo Rosasco (eds.), Proceedings of Thirty Sixth Conference on Learning Theory, volume 195 of Proceedings of Machine Learning Research, pp. 2114–2187. PMLR, 12–15 Jul 2023.
Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving Rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.
András Antos, Csaba Szepesvári, and Rémi Munos. Fitted Q-iteration in continuous action-space MDPs. In J. Platt, D. Koller, Y. Singer, and S. Roweis (eds.), Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2007.
Michael Bain and Claude Sammut. A framework for behavioural cloning. In Machine Intelligence 15, 1995.
Philip J Ball, Laura Smith, Ilya Kostrikov, and Sergey Levine. Efficient online reinforcement learning with offline data. arXiv preprint arXiv:2302.02948, 2023.
Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
Jinglin Chen and Nan Jiang. Information-theoretic considerations in batch reinforcement learning. In International Conference on Machine Learning, 2019.
Simon Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudík, and John Langford. Provably efficient RL with rich observations via latent state decoding. In International Conference on Machine Learning, 2019.
Simon Du, Sham Kakade, Jason Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, and Ruosong Wang. Bilinear classes: A structural framework for provable generalization in RL. In International Conference on Machine Learning, pp. 2826–2836. PMLR, 2021.
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. arXiv preprint arXiv:2206.08853, 2022.
Dylan J Foster, Akshay Krishnamurthy, David Simchi-Levi, and Yunzong Xu. Offline reinforcement learning: Fundamental barriers for value function approximation. In Conference on Learning Theory, 2021.
|
yTBXeXdbMf
|
The previous work of Pacchiano et al. studied online PbRL with a linear reward function. It seems that the sample complexity of the reward-free exploratory stage here is better than that of this previous work, which is not in the reward-free setting. What causes this gap between the current work and the previous work?
|
PROVABLE REWARD-AGNOSTIC PREFERENCE-BASED REINFORCEMENT LEARNING
Wenhao Zhan
Princeton University
wenhao.zhan@princeton.edu
Masatoshi Uehara*
Genentech
uehara.masatoshi@gene.com
Wen Sun
Cornell University
ws455@cornell.edu
Jason D. Lee
Princeton University
jasonlee@princeton.edu
ABSTRACT
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories, rather than explicit reward signals. While PbRL has demonstrated practical success in fine-tuning language models, existing theoretical work focuses on regret minimization and fails to capture most of the practical frameworks. In this study, we fill in such a gap between theoretical PbRL and practical algorithms by proposing a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired before collecting any human feedback. Theoretical analysis demonstrates that our algorithm requires less human feedback for learning the optimal policy under preference-based models with linear parameterization and unknown transitions, compared to the existing theoretical literature. Specifically, our framework can incorporate linear and low-rank MDPs with efficient sample complexity. Additionally, we investigate reward-agnostic RL with action-based comparison feedback and introduce an efficient querying algorithm tailored to this scenario.
1 INTRODUCTION
Reinforcement learning algorithms train agents to optimize rewards of interest. However, specifying an appropriate numerical reward can be challenging in practical applications (e.g., designing a reward function for a robot arm to learn to play table tennis), and optimizing hand-crafted reward functions can lead to undesirable behavior when the reward function does not align with human intention. To overcome this challenge, there has been a recent surge of interest in Preference-based Reinforcement Learning (PbRL) with human feedback. In PbRL, the agent does not receive a numerical reward signal, but rather receives feedback from a human expert in the form of preferences, indicating which state-action trajectory is preferred in a given pair of trajectories. PbRL has gained considerable attention in various domains, including NLP (Ziegler et al., 2019; Stiennon et al., 2020; Wu et al., 2021; Nakano et al., 2021; Ouyang et al., 2022; Glaese et al., 2022; Ramamurthy et al., 2022; Liu et al., 2023), robot learning (Christiano et al., 2017; Brown et al., 2019; Shin et al., 2023), and recommender systems (Xue et al., 2022).
Despite the promising applications of PbRL in various areas, there are only a few provably efficient algorithms (also known as PAC RL) for this purpose (Pacchiano et al., 2021; Chen et al., 2022b). These algorithms focus on regret minimization and iterate through the following processes: collecting new trajectories from the environment, obtaining human feedback on the trajectories, and learning hidden reward functions as well as the dynamics model from the human feedback. However, this approach can be slow and expensive in practice, as it requires humans in every iteration of the model-learning process, which is not as easy as it may sound. For example, interactive decision-making algorithms that put a human in the loop of the model-learning process, such as DAGger (Ross et al., 2011), can become impractical when applied to some real-world robotics applications, as has been observed in prior works (Ross et al., 2013; Laskey et al., 2016). In contrast, in practical PbRL
*This work was done at Cornell University.
applications like InstructGPT (Ouyang et al., 2022) and PEBBLE (Lee et al., 2021), the majority of preference data are collected by crowdsourcing prompts from around the world and rolling out supervised or heuristic policies; therefore, most of the human labeling process does not depend on the subsequent training steps. Another line of work (Zhu et al., 2023) focuses on purely offline RL algorithms to learn a near-optimal policy from offline trajectories with good coverage (e.g., offline data that covers the traces of some high-quality policies). Nevertheless, it is unclear how to obtain such high-quality offline data a priori (Chen and Jiang, 2019).
To fill in such a gap between theoretical work and practical applications in PbRL, we propose a new theoretical method that lies in between purely online and purely offline methods for PbRL, resembling the framework of InstructGPT and PEBBLE. Our algorithm first collects state-action trajectories from the environment without human feedback. In this step, we design a novel sampling procedure to acquire exploratory trajectories that facilitate the subsequent learning of reward functions; this step is fully reward-agnostic. In the second step, we collect preference feedback on the collected trajectories from human experts. In the third step, we learn the underlying hidden reward functions using the trajectories collected in the first step and the preference feedback from the second step. In the fourth step, we learn the optimal policy by solving the offline RL problem under the learned reward function. Our approach can be understood as performing experimental design for PbRL, which allows us to separate the data-collection process from the process of querying human feedback, eliminating the need to constantly keep humans in the training loop. For instance, we only need human experts in Step 2 above, while we can freely perform hyperparameter tuning / model selection for the remaining steps without requiring human experts to sit next to the computers. This can significantly reduce the burden on human experts.
Our contributions can be summarized as follows:
• We propose an efficient experimental design algorithm for PbRL. Our algorithm is specifically designed for linear reward parametrization, which is commonly used in models such as the Bradley-Terry-Luce model, and can handle unknown transitions. This flexibility allows us to handle non-tabular transition models like low-rank MDPs (Agarwal et al., 2020a) and linear MDPs (Jin et al., 2019). To the best of our knowledge, existing works with statistical guarantees cannot incorporate these models efficiently. Notably, our experimental design algorithm does not depend on any information of the reward and is reward-agnostic. Therefore, the collected trajectories can indeed be reused for learning many reward functions at the same time.
• Our key idea is to decouple the interaction with the environment and the collection of human feedback. This decoupling not only simplifies the process of obtaining human feedback in practice but also results in a significant reduction in the sample complexity associated with human feedback compared to existing works (Pacchiano et al., 2021; Chen et al., 2022b). This improvement is particularly valuable as collecting human feedback is often a resource-intensive process.
• To circumvent the scaling with the maximum per-trajectory reward in the trajectory-based comparison setting, we further investigate preference-based RL with action-based comparison and propose a provably efficient algorithm for this setting. We show that in this case the sample complexity only scales with the bound of the advantage functions of the optimal policy, which can be much smaller than the maximum per-trajectory reward (Ross et al., 2011; Agarwal et al., 2019).
1.1 Related Works
We refer the readers to Wirth et al. (2017) for an overview of Preference-based RL (PbRL). PbRL has been well-explored in bandit setting under the notion of dueling bandits (Yue et al., 2012; Zoghi et al., 2014; Dudik et al., 2015), where the goal is to find the optimal arm in the bandit given human preference over action pairs. For MDPs, in addition to Pacchiano et al. (2021); Chen et al. (2022b), which we compare in the introduction, Novoseller et al. (2020); Xu et al. (2020) have also developed algorithms with sample complexity guarantees. Novoseller et al. (2020) proposes a double posterior sampling algorithm with an asymptotic regret sublinear in the horizon $H$. Xu et al. (2020) proposes a PAC RL algorithm but relies on potentially strong assumptions such as Strong Stochastic Transitivity. Note both of Novoseller et al. (2020); Xu et al. (2020) are limited to the tabular setting.
Our algorithm shares a similar concept with reward-free RL, which focuses on exploration in the state-action space without using explicit rewards. Reward-free RL has been studied in many MDPs such as tabular MDPs (Jin et al., 2020a), linear MDPs (Wang et al., 2020), low-rank MDPs (Agarwal et al., 2020a),
and several other models (Chen et al., 2022a; Zanette et al., 2020; Qiu et al., 2021). The goal of reward-free RL is to gather exploratory state-action data to address the challenge of unknown transitions before observing rewards. In contrast, our approach aims to design a single exploration distribution from which we can draw trajectory pairs to solicit human feedback for learning reward functions. Our setting can be considered as an experimental design for PbRL.
2 PRELIMINARIES
We introduce our formulation of Markov decision processes (MDPs) and PbRL.
2.1 MDPs with Linear Reward Parametrization
We consider a finite-horizon MDP \( \mathcal{M} = (\mathcal{S}, \mathcal{A}, P^*, r^*, H) \), where \( \mathcal{S} \) is the state space, \( \mathcal{A} \) is the action space, \( P^* = \{P_h^*\}_{h=1}^H \) is the ground-truth transition dynamics, \( r^* = \{r_h^*\}_{h=1}^H \) is the ground-truth reward function, and \( H \) is the horizon. Specifically, for each \( h \in [H] \) (\([H] := \{1, \cdots, H\}\)), \( P_h^* : \mathcal{S} \times \mathcal{A} \rightarrow \Delta(\mathcal{S}) \) and \( r_h^* : \mathcal{S} \times \mathcal{A} \rightarrow [0, 1] \) represent the transition and reward function at step \( h \), respectively. Moreover, we use \( P_1^*(\cdot) \) to denote the initial state distribution. Here, both \( r^* \) and \( P^* \) are unknown to the learner. In this work, we assume that the cumulative reward of any trajectory \( \tau = (s_h, a_h)_{h=1}^H \) does not exceed \( r_{\text{max}} \), i.e., \( \sum_{h=1}^H r_h^*(s_h, a_h) \leq r_{\text{max}} \).
Policies and value functions. A policy \( \pi = \{\pi_h\}_{h=1}^H \), where \( \pi_h : \mathcal{S} \rightarrow \Delta(\mathcal{A}) \) for each \( h \in [H] \), characterizes the action selection probability for every state at each step. In this paper, we assume the policy belongs to a policy class \( \Pi \), which can be infinite. Given a reward function \( r \) and policy \( \pi \), the associated value function and Q-function at time step \( h \) are defined as follows: \( V_h^{r,\pi}(s) = \mathbb{E}_{\pi,P^*}[\sum_{h'=h}^H r_{h'}(s_{h'}, a_{h'})|s_h = s] \), \( Q_h^{r,\pi}(s,a) = \mathbb{E}_{\pi,P^*}[\sum_{h'=h}^H r_{h'}(s_{h'}, a_{h'})|s_h = s, a_h = a] \). Here, \( \mathbb{E}_{\pi,P^*}[\cdot] \) represents the expectation over the distribution of the trajectory induced by the policy \( \pi \) and the transition \( P^* \). We use \( V^{r,\pi} \) to denote the expected cumulative reward of policy \( \pi \) with respect to reward function \( r \) under \( P^* \), i.e., \( V^{r,\pi} := \mathbb{E}_{s \sim P_1^*}[ V_1^{r,\pi}(s)] \), and use \( V^{r,*} \) to denote the maximal expected cumulative reward with respect to reward function \( r \) under \( P^* \), i.e., \( V^{r,*} := \max_{\pi \in \Pi} V^{r,\pi} \). In particular, let \( \pi^* \) denote the best policy in \( \Pi \) with respect to \( r^* \), i.e., \( \pi^* := \arg \max_{\pi \in \Pi} V^{r^*,\pi} \). In contrast, we denote the globally optimal policy by \( \pi_g := \arg \max_{\pi \in \Pi_{\text{Mar}}} V^{r^*,\pi} \), where \( \Pi_{\text{Mar}} \) is the set of all Markovian policies. Note that when \( \Pi \neq \Pi_{\text{Mar}} \), \( \pi^* \) might not be optimal compared to \( \pi_g \).
Linear reward parametrization. To learn the unknown reward function, it is necessary to make structural assumptions about the reward. We consider a setting where the true reward function possesses a linear structure:
Assumption 1 (Linear Reward Parametrization). We assume MDP has a linear reward parametrization with respect to (w.r.t.) known feature vectors \( \phi_h(s, a) \in \mathbb{R}^d \). Specifically, for each \( h \in [H] \), there exists an unknown vector \( \theta_h^* \in \mathbb{R}^d \) such that \( r_h^*(s, a) = \phi_h(s, a)^T \theta_h^* \) for all \( (s, a) \in \mathcal{S} \times \mathcal{A} \). For technical purposes, we suppose for all \( s \in \mathcal{S}, a \in \mathcal{A}, h \in [H] \), we have \( \| \phi_h(s, a) \| \leq R, \| \theta_h^* \| \leq B \).
Note that when \( d = |\mathcal{S}||\mathcal{A}| \) and the \( \phi_h(s, a) \) are one-hot encoding vectors, this encompasses the tabular setting. Linear reward parametrization is commonly used in the literature on preference-based RL with statistical guarantees (Pacchiano et al., 2021; Zhu et al., 2023). See Appendix A for more details.
Notation. We use \( r^*(\tau) := \sum_{h=1}^H r_h^*(s_h, a_h) \) to denote the ground-truth cumulative reward of trajectory \( \tau \). In particular, \( r^*(\tau) = \langle \phi(\tau), \theta^* \rangle \), where \( \phi(\tau) := [\phi_1(s_1, a_1)^T, \cdots, \phi_H(s_H, a_H)^T]^T \) and \( \theta^* := [(\theta_1^*)^T, \cdots, (\theta_H^*)^T]^T \). We use \( \phi(\pi) \) to denote \( \mathbb{E}_{\tau \sim (\pi, P^*)}[\phi(\tau)] \) for simplicity. We also use \( \Theta(B) \) to denote the set \( \{ \theta \in \mathbb{R}^d : \| \theta \| \leq B \} \) and \( \Theta(B,H) \) to denote the set \( \{ \theta \in \mathbb{R}^{Hd} : \theta = [\theta_1^T, \cdots, \theta_H^T]^T, \theta_h \in \Theta(B), \forall h \in [H] \} \cap \{ \theta \in \mathbb{R}^{Hd} : \langle \phi(\tau), \theta \rangle \leq r_{\text{max}}, \forall \tau \} \). We use the notation \( f = O(g) \) when there exists a universal constant \( C > 0 \) such that \( f \leq Cg \), and \( \tilde{O}(g) := O(g \log g) \).
2.2 Preference-Based Reinforcement Learning
In this paper, we consider a framework for PbRL that mainly consists of the following four steps:
- **Step 1**: Collect a dataset of trajectory pairs \( \mathcal{D}_{\text{reward}} = (\tau^{n,0}, \tau^{n,1})_{n=1}^N \) in a reward-agnostic fashion, where \( \tau^{n,i} = \{s_h^{n,i}, a_h^{n,i}, s_{h+1}^{n,i}\}_{h=1}^H \) for \( n \in [N] \) and \( i \in \{0, 1\} \).
- **Step 2**: Obtain preference feedback from human experts for each pair of trajectories in \( \mathcal{D}_{\text{reward}} \). Namely, if trajectory \( \tau^{n,1} \) is preferred over \( \tau^{n,0} \), then assign \( o^n = 1 \), otherwise assign \( o^n = 0 \).
Algorithm 1 REGIME: Experimental Design for Querying Human Preference
1: Input: Regularization parameter $\lambda$, model estimation accuracy $\epsilon'$, parameters $\epsilon, \delta$.
2: Initialize $\hat{\Sigma}_1 = \lambda I$
3: Estimate model $\hat{P} \leftarrow P(\Pi, \epsilon', \delta/4)$ (Possibly, requires the interaction with the environment.)
4: for $n = 1, \cdots , N$ do
5: Compute $(\pi^{n,0}, \pi^{n,1}) \leftarrow \arg\max_{\pi^0, \pi^1 \in \Pi} \| \hat{\phi}(\pi^0) - \hat{\phi}(\pi^1) \|_{\hat{\Sigma}_n^{-1}}$.
6: Update $\hat{\Sigma}_{n+1} = \hat{\Sigma}_n + (\hat{\phi}(\pi^{n,0}) - \hat{\phi}(\pi^{n,1}))(\hat{\phi}(\pi^{n,0}) - \hat{\phi}(\pi^{n,1}))^\top$.
7: end for
8: for $n = 1, \cdots , N$ do
9: Collect a pair of trajectories $\tau^{n,0}, \tau^{n,1}$ from the environment by $\pi^{n,0}, \pi^{n,1}$, respectively.
10: Add it to $D_{\text{reward}}$.
11: end for.
12: Obtain the preference labels $\{o^n\}_{n=1}^N$ for $D_{\text{reward}}$ from human experts.
13: Run MLE $\hat{\theta} \leftarrow \arg\max_{\theta \in \Theta(B,H)} L(\theta, D_{\text{reward}}, \{o^n\}_{n=1}^N)$ where $L(\theta, D_{\text{reward}}, \{o^n\}_{n=1}^N)$ is defined in (1).
14: Return $\hat{\pi} = \arg\max_{\pi \in \Pi} (\hat{\phi}(\pi), \hat{\theta})$.
• Step 3: Estimate the ground truth reward using the dataset $D_{\text{reward}}$ and preference labels $\{o^n\}_{n=1}^N$.
• Step 4: Run RL algorithms (either online or offline) using the learned rewards and obtain a policy $\hat{\pi}$ that maximizes the cumulative learned rewards.
The above framework has been applied in practical applications, such as PEBBLE (Lee et al., 2021). However, these algorithms lack provable sample efficiency guarantees. In particular, it remains unclear in the literature how to collect the trajectories in Step 1 so as to enable accurate estimation of the ground-truth reward. In our work, we strive to develop a concrete algorithm that adheres to the above framework while ensuring theoretical sample efficiency. We also emphasize that Step 1 is reward-agnostic, and the collected dataset can be reused for learning many different rewards as long as they are linear in the feature $\phi$.
Preference model. In this work, we assume the preference label follows the Bradley-Terry-Luce (BTL) model (Bradley and Terry, 1952) in Step 2, i.e., we have the following assumption:
Assumption 2. Suppose for any pair of trajectory $(\tau^0, \tau^1)$, we have
$$P(o = 1) = P(\tau^1 \succ \tau^0) = \sigma(r^*(\tau^1) - r^*(\tau^0)) = \frac{\exp(r^*(\tau^1))}{\exp(r^*(\tau^0)) + \exp(r^*(\tau^1))},$$
where $o$ is the human preference over $(\tau^0, \tau^1)$ and $\sigma(\cdot)$ is the sigmoid function.
Our analysis will leverage the quantity $\kappa := \sup_{|x| \leq r_{\text{max}}} |1/\sigma'(x)| = 2 + \exp(2r_{\text{max}}) + \exp(-2r_{\text{max}})$ to measure the difficulty of estimating the true reward from the BTL preference model.
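For intuition, preference labels under this model can be simulated with a Bernoulli draw through the sigmoid of the reward gap; a minimal sketch:

```python
import numpy as np

def btl_label(r_tau0, r_tau1, rng):
    """Sample o ~ Bernoulli(sigma(r*(tau1) - r*(tau0))), i.e., the BTL model of Assumption 2."""
    p = 1.0 / (1.0 + np.exp(-(r_tau1 - r_tau0)))  # probability that tau1 is preferred
    return int(rng.random() < p)
```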
3 ALGORITHM: REGIME
We propose an algorithm specifically designed for the PbRL setting when the transitions are unknown. In order to handle unknown transitions, we use the following mild oracle:
Definition 1 (Reward-free RL oracle). A reward-free learning oracle $P(\Pi, \epsilon, \delta)$ can return an estimated model $\hat{P}$ such that, with probability at least $1 - \delta$, we have $\|\hat{P}_1(\cdot) - P^*_1(\cdot)\|_1 \leq \epsilon$ and, for all policies $\pi \in \Pi$ and $h \in [H]$, $\mathbb{E}_{(s_h, a_h) \sim (\pi, P^*)}[\|\hat{P}_h(\cdot|s_h, a_h) - P^*_h(\cdot|s_h, a_h)\|_1] \leq \epsilon$, where $\|\cdot\|_1$ denotes the total variation distance (i.e., $\ell_1$-norm).
This oracle necessitates accurate model learning through interactions with the environment. The required guarantee is relatively mild since we do not require a point-wise error guarantee, but rather an expectation-based guarantee under the ground-truth transition. Such an oracle exists not only for tabular MDPs (Jin et al., 2020a), but also for low-rank MDPs (Agarwal et al., 2020a; 2022), where the only assumption is the low-rank property of the transition dynamics, and the features can be unknown to the learner. Low-rank MDPs find wide application in practical scenarios, including block MDPs (Du et al., 2019; Zhang et al., 2020a;b; Sodhani et al., 2021; 2022).
3.1 Algorithm
The algorithm is described in Algorithm 1. Given a learned model $\hat{P}$, we use $\hat{\phi}(\pi) = \mathbb{E}_{\tau \sim (\pi, \hat{P})}[\phi(\tau)]$ to estimate $\phi(\pi) := \mathbb{E}_{\tau \sim (\pi, P^*)}[\phi(\tau)]$. The algorithm mainly consists of four steps as follows.
**Step 1:** Collection of state-action trajectories by interacting with the environment (Line 4–11).
To learn the ground-truth reward function, we collect exploratory state-action trajectories that cover the space spanned by $\phi(\cdot)$ before collecting any human feedback. To achieve this, at each iteration we identify an explorative policy pair that is not covered by the existing data. We measure the extent to which the trajectory generated by $(\pi^0, \pi^1)$ is covered by computing the norm of $\hat{\phi}(\pi^0) - \hat{\phi}(\pi^1)$ in the metric induced by the inverse covariance matrix $\hat{\Sigma}_n^{-1}$ at step $n$. After iterating this procedure $N$ times and obtaining the policy pairs $\{(\pi^{n,0}, \pi^{n,1})\}_{n=1}^{N}$, we sample $N$ exploratory trajectory pairs by executing the policy pairs $(\pi^{n,0}, \pi^{n,1})$ for $n \in [N]$ (a sketch of this design loop is given below). Notably, this trajectory-collection process is reward-agnostic, and thus the collected samples can be used to learn multiple rewards in multi-task RL.
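When the maximization in Line 5 is restricted to a finite candidate set, the design loop is a few lines of linear algebra. A minimal sketch (not an implementation of the general $\arg\max$ over $\Pi$; `phi_hat` is a hypothetical dict mapping each candidate policy to its estimated feature expectation $\hat{\phi}(\pi)$):

```python
import numpy as np
from itertools import product

def design_policy_pairs(phi_hat, N, lam):
    """Greedy experimental design of Algorithm 1 (Lines 4-7) over a finite candidate set."""
    d = len(next(iter(phi_hat.values())))
    Sigma = lam * np.eye(d)
    pairs = []
    for _ in range(N):
        Sinv = np.linalg.inv(Sigma)
        # pick the policy pair whose feature gap is least covered so far
        p0, p1 = max(product(phi_hat, phi_hat),
                     key=lambda pq: (phi_hat[pq[0]] - phi_hat[pq[1]])
                                    @ Sinv @ (phi_hat[pq[0]] - phi_hat[pq[1]]))
        g = phi_hat[p0] - phi_hat[p1]
        Sigma = Sigma + np.outer(g, g)   # rank-one covariance update
        pairs.append((p0, p1))
    return pairs
```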
**Step 2:** Collection of preference feedback by interacting with human experts (Line 12). If trajectory $\tau^{n,1}$ is preferred over $\tau^{n,0}$, then assign $o^n = 1$; otherwise assign $o^n = 0$.
**Step 3:** Reward learning via MLE (Line 13). We adopt the widely used maximum likelihood estimation (MLE) approach to learn the reward function, which has also been employed in other works (Ouyang et al., 2022; Christiano et al., 2017; Brown et al., 2019; Shin et al., 2023; Zhu et al., 2023). Specifically, we learn the reward model by maximizing the log-likelihood $L(\theta, D_{\text{reward}}, \{o^n\}_{n=1}^{N})$:
$$\sum_{n=1}^{N} \log \left( o^n \cdot \sigma(\langle \theta, \phi(\tau^{n,1}) - \phi(\tau^{n,0}) \rangle) + (1 - o^n) \cdot \sigma(\langle \theta, \phi(\tau^{n,0}) - \phi(\tau^{n,1}) \rangle) \right). \quad (1)$$
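Computing this MLE is equivalent to logistic regression on trajectory-feature differences. A minimal sketch via projected gradient ascent, where we simplify the constraint set $\Theta(B, H)$ to a single Euclidean ball for illustration:

```python
import numpy as np

def mle_reward(X_diff, o, B_total, lr=0.1, iters=500):
    """Maximize the BTL log-likelihood (1) over theta.

    X_diff  : (N, Hd) array of phi(tau^{n,1}) - phi(tau^{n,0})
    o       : (N,) preference labels in {0, 1}
    B_total : radius of the (simplified) norm-ball constraint
    """
    theta = np.zeros(X_diff.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X_diff @ theta))  # P(o^n = 1) under theta
        theta += lr * X_diff.T @ (o - p) / len(o)  # gradient of the log-likelihood
        norm = np.linalg.norm(theta)
        if norm > B_total:                         # project back onto the ball
            theta *= B_total / norm
    return theta
```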
**Step 4:** RL with respect to learned rewards (Line 14). We obtain the near-optimal policy that maximizes the cumulative learned rewards.
Our algorithm differs significantly from the algorithms proposed in Pacchiano et al. (2021); Chen et al. (2022b). In their algorithms, they repeat the following steps: (a) collect new trajectories from the environment using policies based on the currently learned reward and transition models, (b) collect human feedback for the obtained trajectories, (c) update the reward and transition models. A potential issue with this approach is that every time human feedback is collected, agents need to interact with the environment, causing a wait time for humans. In contrast, our algorithm first collects exploratory trajectories without collecting any human feedback in Step 1. Then, we query human feedback and learn the reward model in Steps 2-3. As a result, we decouple the step of collecting exploratory data from that of collecting human feedback. Hence, in our algorithm, we can efficiently query human feedback in parallel, mirroring the common practice in InstructGPT. Moreover, our algorithm's design leads to a lower sample complexity for both trajectory pairs and human feedback than Pacchiano et al. (2021); Chen et al. (2022b). See Appendix A for our technical novelty.
**Remark 1.** Our collection method in Step 1 shares a similar idea to active learning. See Appendix A.
**Remark 2.** The majority of computational cost lies in line 5 in Algorithm 1. To implement the algorithm, gradient ascent can be applied here to solve the optimization problem. See Appendix A.
**Remark 3.** In Step 4 (Line 14), it is not necessary to use the same $\hat{P}$ as in Line 3. Instead, any sample-efficient RL algorithm can be employed w.r.t. the learned reward such as Lee et al. (2021).
3.2 Analysis
Now we provide the sample complexity of Algorithm 1 as shown in the following theorem.
**Theorem 1.** Let
$$\lambda \geq 4HR^2, \quad N \geq \tilde{O}\left(\frac{\lambda \kappa^2 B^2 R^2 H^4 d^2 \log(1/\delta)}{\epsilon^2}\right), \quad \epsilon' \leq \frac{\epsilon}{6BR\sqrt{H^5d \log N}},$$
Then, under Assumptions 1 and 2, with probability at least $1 - \delta$, we have
$$V^{r^*, \hat{\pi}} \geq V^{r^*, \pi^*} - \epsilon.$$
Note the sample complexity in Theorem 1 does not depend on the complexity of $\Pi$ and thus we can learn arbitrary policy classes. When $\Pi = \Pi_{\text{Mar}}$, we have $\pi^* = \pi_g$ and thus we can compete against the global optimal policy.
Since the sample complexity of human feedback, denoted by \( N_{\text{hum}} \), is equal to \( N \), Theorem 1 shows that the sample complexity of human feedback required to learn an \( \epsilon \)-optimal policy scales with \( \tilde{O}(1/\epsilon^2) \) and is polynomial in the norm bounds \( B, R \), the horizon \( H \), and the dimension of the feature space \( d \). Notably, the sample complexity of human feedback \( N_{\text{hum}} \) depends only on the structural complexity of the reward function, regardless of the underlying transition model. This is because, while our theorem requires the learned transition model to be accurate enough (\( \epsilon' \leq \frac{\epsilon}{6BR\sqrt{H^5 d \log N}} \)), we do not need human feedback to learn the transition model. This property of our algorithm is particularly desirable when collecting human feedback is much more expensive than collecting trajectories from the environment. Existing works with sample-efficient guarantees, such as Pacchiano et al. (2021); Chen et al. (2022b), do not have this property. Our algorithm's favorable property can be attributed to its careful design, where the step of collecting trajectories and learning transitions is reward-agnostic and thus separated from the step of collecting human feedback and learning rewards. Furthermore, note that our results indeed work beyond low-rank MDPs, as long as there exists a suitable reward-free model-learning oracle. See Appendix A for more details.
As the most relevant work, we compare our results with Pacchiano et al. (2021), which considers online learning in PbRL with unknown tabular transition models and linear reward parameterization. Let \( N_{\text{tra}} \) and \( N_{\text{hum}} \) denote the number of required trajectory pairs and human feedback, respectively. Then, to obtain an \( \epsilon \)-optimal policy, the algorithm in Pacchiano et al. (2021, Theorem 2) requires:
\[
N_{\text{tra}} = N_{\text{hum}} = \tilde{O}\left(\frac{|S|^2|A|d + \kappa^2d^2}{\epsilon^2} \log \frac{1}{\delta}\right).
\]
Here we omit the dependence on \( B, R, H \) to facilitate comparison. In contrast, in the setting considered in Pacchiano et al. (2021), by leveraging the reward-free learning oracle from Jin et al. (2020a), our algorithm achieves the following sample complexity:
\[
N_{\text{tra}} = \tilde{O}\left(\frac{|S|^2|A|d + \kappa^2d^2}{\epsilon^2} \log \frac{1}{\delta}\right), \quad N_{\text{hum}} = \tilde{O}\left(\frac{\kappa^2d^2}{\epsilon^2} \log \frac{1}{\delta}\right),
\]
where the number of required trajectory pairs comes from Jin et al. (2020a, Lemma 3.6). We observe that our algorithm achieves a better sample complexity for human feedback than the previous work while retaining the same total trajectory complexity. In particular, our algorithm has the advantage that \( N_{\text{hum}} \) depends only on the feature dimension \( d \) and not on \( |S| \) or \( |A| \). This improvement is significant since obtaining human feedback is often costly. Lastly, we note that a similar comparison can be made to the work of Chen et al. (2022b), which considers reward and transition models with bounded Eluder dimension.
4 REGIME IN LINEAR MDPS
So far, we have considered PbRL given reward-free RL oracle satisfying Definition 1. Existing works have shown the existence of such a model-based reward-free RL oracle in low-rank MDPs (Agarwal et al., 2020a; 2022). However, these results have not been extended to linear MDPs (Jin et al., 2020b) where model-free techniques are necessary. Linear MDPs are relevant to our setting because linear reward parametrization naturally holds in linear MDPs. Unfortunately, a direct reduction from linear MDPs to low-rank MDPs may introduce a dependence on the cardinality of \( S \) without assuming strong inductive bias in the function class. In this section, we propose a model-free algorithm that can overcome this dependence by making slight modifications to Algorithm 1. We begin by providing the definition of linear MDPs.
Assumption 3 (Linear MDPs (Jin et al., 2020b)). We suppose the MDP is linear with respect to some known feature vectors \( \phi_h(s, a) \in \mathbb{R}^d \) (\( h \in [H], s \in S, a \in A \)). More specifically, for each \( h \in [H] \), there exist \( d \) unknown signed measures \( \mu^*_h = (\psi^{(1)}_h, \cdots, \psi^{(d)}_h) \) over \( S \) and an unknown vector \( \theta^*_h \in \mathbb{R}^d \) such that \( P^*_h(\cdot|s, a) = \phi_h(s, a)^T \mu^*_h(\cdot) \) and \( r^*_h(s, a) = \phi_h(s, a)^T \theta^*_h \) for all \( (s, a) \in S \times A \). For technical purposes, we suppose the norm bound \( \| \mu^*_h(s) \|_2 \leq \sqrt{d} \) for any \( s \in S \).
In addition, we use \( N_{\Pi}(\epsilon) \) to denote the covering number of \( \Pi \), which is defined as follows:
Definition 2 (\( \epsilon \)-covering number). The \( \epsilon \)-covering number of the policy class \( \Pi \), denoted by \( N_{\Pi}(\epsilon) \), is the minimum integer \( n \) such that there exists a subset \( \Pi' \subset \Pi \) with \( |\Pi'| = n \) and for any \( \pi \in \Pi \) there exists \( \pi' \in \Pi' \) such that \( \max_{s,h \in [H]} \| \pi_h(\cdot|s) - \pi'_h(\cdot|s) \|_1 \leq \epsilon \).
Algorithm 2 REGIME-lin
Input: Regularization parameter $\lambda$, feature estimation sample complexity $K$.
Call Algorithm 4 with generating $K$ trajectories by interacting with the environment.
Call Algorithm 5 with reward function $(r_{h',j})_{h'\in[H]}$ to estimate $(\hat{\phi}(\pi))_{h,j}$ for all $\pi \in \Pi, h \in [H], j \in [d]$ using $K$ trajectories. Let $\hat{\phi}(\pi) = [\hat{\phi}_1(\pi), \cdots, \hat{\phi}_H(\pi)]$ where the $j$-th entry of $\hat{\phi}_h(\pi)$ is $(\hat{\phi}(\pi))_{h,j}$.
for $n = 1, \cdots, N$ do
Compute $(\pi^{n,0}, \pi^{n,1}) \leftarrow \arg \max_{\pi^0, \pi^1 \in \Pi} \| \hat{\phi}(\pi^0) - \hat{\phi}(\pi^1) \|_{\hat{\Sigma}_n^{-1}}$.
Update $\hat{\Sigma}_{n+1} = \hat{\Sigma}_n + (\hat{\phi}(\pi^{n,0}) - \hat{\phi}(\pi^{n,1}))(\hat{\phi}(\pi^{n,0}) - \hat{\phi}(\pi^{n,1}))^\top$.
end for
for $n = 1, \cdots, N$ do
Collect a pair of trajectories $\tau^{n,0}, \tau^{n,1}$ from the environment by $\pi^{n,0}, \pi^{n,1}$, respectively.
Add $(\tau^{n,0}, \tau^{n,1})$ to $D_{\text{reward}}$.
end for
Obtain the preference labels $\{o^{(n)}\}_{n=1}^N$ from human experts.
Run MLE $\hat{\theta} \leftarrow \arg \max_{\theta \in \Theta(B,H)} L(\theta, D_{\text{reward}}, \{o^n\}_{n=1}^N)$, where $L$ is defined in (1).
Return $\hat{\pi} = \arg \max_{\pi \in \Pi} \hat{V}^\pi(\hat{r})$ where $\hat{V}^\pi(\hat{r})$ is obtained by calling Algorithm 5 with reward function $\hat{r} = \{\hat{r}_h\}_{h=1}^H$ for all $\pi$ where $\hat{r}_h(s,a) = \langle \phi_h(s,a), \hat{\theta} \rangle$.
4.1 ALGORITHM
The reward-free RL oracle that satisfies Definition 1 for learning accurate transitions may be excessively strong for linear MDPs. Upon closer examination of Algorithm 1, it becomes apparent that the learned transition model is solely used for estimating $\phi(\pi)$. Therefore, our approach focuses on achieving a precise estimation of $\phi(\pi)$.
Our main algorithm is described in Algorithm 2, with subroutines for estimating $\hat{\phi}(\pi)$. The overall structure of the primary algorithm resembles that of Algorithm 1. The key distinction lies in accurately estimating $\hat{\phi}(\pi)$ within the subroutines, without relying on the abstract reward-free RL oracle (Definition 1). In the following, we provide a brief explanation of these subroutines. The detailed descriptions of these subroutines are deferred to Algorithms 4 and 5 in Appendix B.
Collecting exploratory data to learn transitions. Inspired by the approach in Jin et al. (2020b) and Wang et al. (2020), we construct an exploratory dataset by running LSVI-UCB (Jin et al., 2020b) with rewards set equal to the exploration bonus. Specifically, in the $k$-th iteration, we recursively apply least squares value iteration with a bonus term $\{b_h^k(s,a)\}_{h=1}^H$, which is introduced to induce exploration. This process yields an exploratory policy $\pi^k$ based on the exploratory rewards $\{r_h^k\}_{h=1}^H$, where $r_h^k = b_h^k/H$. We then collect a trajectory by executing policy $\pi^k$. By repeating this procedure for $K$ iterations, we accumulate an exploratory dataset. The detailed algorithm is provided in Appendix B (Algorithm 4). It is important to note that this step involves generating $K$ trajectories through interactions with the environment.
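The bonus used in this subroutine is the standard elliptical bonus from LSVI-UCB; a one-function sketch (`Lambda` is the regularized empirical covariance of the step-$h$ features and `beta` a confidence-width parameter):

```python
import numpy as np

def elliptical_bonus(phi_sa, Lambda, beta):
    """b(s, a) = beta * ||phi(s, a)||_{Lambda^{-1}}, the LSVI-UCB exploration bonus."""
    return beta * np.sqrt(phi_sa @ np.linalg.solve(Lambda, phi_sa))
```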
Estimating $\phi(\pi)$ using the exploratory data. Let $(\phi(\pi))_{h,j}$ denote the $j$-th entry of $\phi_h(\pi) := \mathbb{E}_\pi[\phi_h(s_h,a_h)]$. Then to estimate $\phi(\pi)$, we only need to estimate $(\phi(\pi))_{h,j}$ for all $h \in [H], j \in [d]$.
Note that for all $\pi \in \Pi$, we have $\phi(\pi) = [\mathbb{E}_{\pi,P^*}[\phi_1(s_1,a_1)^\top], \cdots, \mathbb{E}_{\pi,P^*}[\phi_H(s_H,a_H)^\top]]^\top$. Here, the key observation is that $(\phi(\pi))_{h,j}$ is exactly the expected cumulative reward (up to an $R$ factor) with respect to the reward function $r_{h',j}(s,a) = \phi_{h'}(s,a)^\top \theta_{h',j}$ for all $h' \in [H]$, where $\theta_{h',j} = \frac{1}{R} \cdot e_j$ for $h' = h$ and $\theta_{h',j} = 0$ otherwise ($h' \neq h$). Here $e_j$ is the one-hot encoding vector whose $j$-th entry is 1. Therefore, with the collected dataset, we can run least-squares policy evaluation to estimate $(\phi(\pi))_{h,j}$. The details are in Algorithm 5 in Appendix B.
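A sketch of this estimator under stated assumptions (the interfaces `data`, `pi`, and `phi` are ours): pick the one-hot reward for coordinate $(h, j)$ and run least-squares policy evaluation backward in time.

```python
import numpy as np

def lspe(data, pi, phi, H, d, lam=1.0):
    """Least-squares policy evaluation of pi under a given (e.g., one-hot) linear reward.

    data[h]      : list of transitions (s, a, r, s_next) collected at step h (1-indexed)
    pi(h, s)     : action distribution of pi at (h, s), as a dict {action: probability}
    phi(h, s, a) : d-dimensional feature vector
    """
    w = np.zeros((H + 2, d))                  # w[h] parametrizes Q_h(s, a) = phi^T w[h]
    def V(h, s):                              # V_h(s) = E_{a ~ pi}[Q_h(s, a)]
        return sum(p * (phi(h, s, a) @ w[h]) for a, p in pi(h, s).items())
    for h in range(H, 0, -1):                 # backward recursion over the horizon
        Lam, b = lam * np.eye(d), np.zeros(d)
        for (s, a, r, s_next) in data[h]:
            x = phi(h, s, a)
            Lam += np.outer(x, x)
            b += x * (r + (V(h + 1, s_next) if h < H else 0.0))
        w[h] = np.linalg.solve(Lam, b)
    # average V_1 over the empirical initial states
    return float(np.mean([V(1, s) for (s, _, _, _) in data[1]]))
```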
4.2 ANALYSIS
Now we present the sample complexity of Algorithm 2. The formal statement and proof are deferred to Appendix B and E.1.
Theorem 2 (Informal). By choosing parameters in an appropriate way and setting
\[ K \geq \tilde{O}\left(\frac{H^9 B^2 R^4 d^5 \log(N_{\Pi}(\epsilon')/\delta)}{\epsilon'^2}\right), \quad N \geq \tilde{O}\left(\frac{\lambda \kappa^2 B^2 R^2 H^4 d^2 \log(1/\delta)}{\epsilon'^2}\right), \quad \epsilon' = \frac{\epsilon}{72BR^2H\sqrt{d\log H}} \]
under Assumption 1, 2, and 3, with probability at least \(1 - \delta\), we have \(V^{r^*, \hat{\pi}} \geq V^{r^*, \pi_g} - \epsilon\). Furthermore, by selecting a policy class \(\Pi\) properly, we have \(V^{r^*, \hat{\pi}} \geq V^{r^*, \pi_g} - 2\epsilon\) by replacing \(\log(N_{\Pi}(\epsilon')/\delta) = Hd \log\left(\frac{12WR}{\epsilon'}\right)\) where \(W = \frac{(B + (H + \epsilon)\sqrt{d})H \log |A|}{\epsilon}\).
The first statement says that Algorithm 2 can learn an \(\epsilon\)-optimal policy with the following numbers of trajectory pairs and human feedback queries:
\[ N_{tra} = K + N = \tilde{O}\left(\frac{d^5 \log N_{\Pi}(\epsilon') + \kappa^2 d^2}{\epsilon'^2}\right), \quad N_{hum} = \tilde{O}\left(\frac{\kappa^2 d^2}{\epsilon'^2}\right). \]
Since the sample complexity depends on the covering number of \(\Pi\), we need to carefully choose the policy class. When we choose \(\Pi\) to be the log-linear policy class:
\[ \Pi = \left\{ \pi = \{\pi_h^\zeta\}_{h=1}^H : \pi_h^\zeta(a|s) = \frac{\exp(\zeta_h^\top \phi_h(s, a))}{\sum_{a'\in A} \exp(\zeta_h^\top \phi_h(s, a'))}, \zeta_h \in \mathbb{B}(d, W), \forall s \in S, a \in A, h \in [H] \right\}, \]
although \(\pi^* \neq \pi_g\), we can show that the value of \(\pi^*\) is close to the value of \(\pi_g\) up to \(\epsilon\) by setting sufficiently large \(W\). This immediately leads to the second statement in Theorem 2. Consequently, to learn an \(\epsilon\)-global-optimal policy, it is concluded that the number of required trajectory pairs and human feedbacks for Algorithm 2 does not depend on \(|S|\) at all.
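As an illustration, a log-linear policy from this class can be evaluated with a few lines of code; the helper `feat_h` stands in for the known feature map $\phi_h$ and is an assumption of the sketch.

```python
import numpy as np

def log_linear_policy(zeta_h, feat_h, s, actions):
    """pi_h(a|s) proportional to exp(<zeta_h, phi_h(s, a)>)."""
    logits = np.array([zeta_h @ feat_h(s, a) for a in actions])
    logits -= logits.max()  # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```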
Finally, we compare our work to Chen et al. (2022b), as it is the only existing work that addresses provable PbRL with non-tabular transition models. Their algorithm exhibits sample complexities that depend on the Eluder dimension associated with the transition models. However, in linear MDPs, it remains unclear whether the Eluder dimension can be upper-bounded without introducing a dependence on \(|S|\). Consequently, our Algorithm 2 is the first provable PbRL algorithm that achieves polynomial sample complexity independent of \(|S|\) in linear MDPs.
5 REGIME WITH ACTION-BASED COMPARISON
The drawback of the current results is that the sample complexity depends on \(\kappa\), which can exhibit exponential growth in \(r_{max}\) under the BTL model. This is because \(\sup_{|x| \leq r_{max}} |1/\sigma'(x)| = O(\exp(r_{max}))\) for the sigmoid link. Such dependence on \(r_{max}\) is undesirable, especially when rewards are dense and \(r_{max}\) scales linearly with \(H\). Similar limitations are present in existing works, such as Pacchiano et al. (2021); Chen et al. (2022b). To address this challenge, we consider the action-based comparison model (Zhu et al., 2023) in this section. Here, we assume that humans compare two actions based on their optimal Q-values. Given a tuple \((s, a^0, a^1, h)\), the human provides feedback \(o\) following
\[ P(o = 1|s, a^0, a^1, h) = P(a^1 \succ a^0|s, h) = \sigma(A^*_h(s, a^1) - A^*_h(s, a^0)), \tag{2} \]
where \(A^*_h\) is the advantage function of the optimal policy. Similar to trajectory-based comparisons with linear reward parametrization, we assume linearly parameterized advantage functions:
**Assumption 4** (Linear Advantage Parametrization). An MDP has linear advantage functions with respect to some known feature vectors \(\phi_h(s, a) \in \mathbb{R}^d\) \((h \in [H], s \in S, a \in A)\) if for each \(h \in [H]\), there exists an unknown vector \(\xi_h^* \in \mathbb{R}^d\) such that \(A^*_h(s, a) = \phi_h(s, a)^\top \xi_h^*\) for all \((s, a) \in S \times A\). We assume for all \(s \in S, a \in A, h \in [H]\), we have \(\|\phi_h(s, a)\| \leq R, \|\xi_h^*\| \leq B\).
Generally, the value of \(|A^*_h(s, a)|\) tends to be much smaller than \(H\), since a large value of \(|A^*_h(s, a)|\) implies that it may be difficult to recover from a previous incorrect action even under the best policy \(\pi^*\) (Ross et al., 2011; Agarwal et al., 2019). Therefore, defining \(B_{adv} = \sup_{(s, a, h)} |A^*_h(s, a)|\), we expect \(B_{adv}\) to be much smaller than \(H\), even in scenarios with dense rewards.
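For intuition, the feedback model in Eq. (2) can be simulated as follows, assuming the sigmoid link of the BTL model; `adv_fn` is an illustrative stand-in for the (unknown) optimal advantage $A^*_h$.

```python
import numpy as np

def sample_preference(adv_fn, s, a0, a1, h, rng):
    """Draw o ~ Bernoulli(sigma(A*_h(s, a1) - A*_h(s, a0))), cf. Eq. (2)."""
    diff = adv_fn(h, s, a1) - adv_fn(h, s, a0)  # magnitude bounded by 2 * B_adv
    p = 1.0 / (1.0 + np.exp(-diff))             # sigmoid link
    return int(rng.random() < p)
```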
In the following discussion, we will use \(Z(B, h)\) to denote the convex set \(\{\zeta \in \mathbb{R}^d : \|\zeta\| \leq B, \langle \phi_h(s, a), \zeta \rangle \leq B_{adv}, \forall s \in S, a \in A\}\). We consider the setting where \(\Pi = \Pi_{Mar}\) and assume the transition model is known for brevity. In the case of unknown transition models, we can employ the same approach as described in Section 3 with reward-free RL oracles.
We present our algorithm for action-based comparison models in Algorithm 3. In Line 19 we denote the log-likelihood
\[ L(\xi, D_{adv}^h, \{o^{h,n}\}_{n=1}^N) := \sum_{n=1}^N \log \Big( o^{h,n} \cdot \sigma\big(\langle \xi, \phi_h(s^{h,n}, a^{h,n,1}) - \phi_h(s^{h,n}, a^{h,n,0}) \rangle\big) + (1 - o^{h,n}) \cdot \sigma\big({-}\langle \xi, \phi_h(s^{h,n}, a^{h,n,1}) - \phi_h(s^{h,n}, a^{h,n,0}) \rangle\big) \Big) \]
Algorithm 3 REGIME–action
1: Input: Regularization parameter $\lambda$.
2: for $h = 1, \cdots, H$ do
3: Initialize $\Sigma_{h,1} = \lambda I$.
4: for $n = 1, \cdots, N$ do
5: Compute: $(\pi^{h,n,0}, \pi^{h,n,1}) \leftarrow \arg\max_{\pi^0, \pi^1 \in \Pi} \left\|\mathbb{E}_{s_h \sim \pi^0} [\phi_h(s_h, \pi^0) - \phi_h(s_h, \pi^1)] \right\|_{\Sigma_{h,n}^{-1}}$,
6: where $\phi_h(s, \pi) = E_{a \sim \pi_h(\cdot|s)} [\phi_h(s, a)]$.
7: Update:
$$\Sigma_{h,n+1} = \Sigma_{h,n} + \left(\mathbb{E}_{s_h \sim \pi^{h,n,0}} [\phi_h(s_h, \pi^{h,n,0}) - \phi_h(s_h, \pi^{h,n,1})]\right)$$
$$\cdot \left(\mathbb{E}_{s_h \sim \pi^{h,n,0}} [\phi_h(s_h, \pi^{h,n,0}) - \phi_h(s_h, \pi^{h,n,1})]\right)^\top$$
8: end for
9: end for
10: for $h = 1, \cdots, H$ do
11: for $n = 1, \cdots, N$ do
12: Sample $s^{h,n}$ at time step $h$ by executing a policy $\pi^{h,n,0} = \{\pi^{h,n,0}_k\}_{k=1}^H$.
13: Sample actions $a^{h,n,0} \sim \pi^{h,n,0}_h(\cdot|s^{h,n}), a^{h,n,1} \sim \pi^{h,n,1}_h(\cdot|s^{h,n})$.
14: Add $(s^{h,n}, a^{h,n,0}, a^{h,n,1})$ to $D^h_{\text{adv}}$.
15: (These steps involve interaction with the environment.)
16: end for
17: end for
18: Obtain the preference labels $\{o^{h,n}\}_{n=1}^N$ for $D^h_{\text{adv}}$ from human experts.
19: Run MLE $\hat{\xi}_h \leftarrow \arg\max_{\xi \in Z(B,h)} L(\xi, D^h_{\text{adv}}, \{o^{h,n}\}_{n=1}^N)$.
20: Compute: for all $s \in S, a \in A, h \in [H]$:
21: $\hat{A}_h(s, a) \leftarrow \phi_h(s, a)^\top \hat{\xi}_h, \hat{\pi}_h(s) \leftarrow \arg\max_{a \in A} \hat{A}_h(s, a)$.
22: Return $\hat{\pi} = \{\hat{\pi}_h\}_{h=1}^H$.
where $D^h_{\text{adv}} = \{s^{h,n}, a^{h,n,0}, a^{h,n,1}\}_{n=1}^N$.
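The MLE step in Line 19 is a logistic regression on feature differences. Below is a minimal sketch via projected gradient ascent; for simplicity it projects only onto the norm ball $\|\xi\| \leq B$ rather than the full constraint set $Z(B, h)$, and all names are illustrative.

```python
import numpy as np

def mle_advantage(diffs, labels, B, lr=0.1, iters=2000):
    """Maximize the preference log-likelihood with a sigmoid link.

    diffs:  (N, d) array of phi_h(s, a^1) - phi_h(s, a^0),
    labels: (N,) array of preference labels o in {0, 1}.
    """
    N, d = diffs.shape
    xi = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-diffs @ xi))  # model probability P(o = 1)
        grad = diffs.T @ (labels - p) / N       # gradient of mean log-likelihood
        xi += lr * grad
        norm = np.linalg.norm(xi)
        if norm > B:                            # project onto ||xi|| <= B
            xi *= B / norm
    return xi
```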
5.1 ANALYSIS
Theorem 3. Let
$$\lambda \geq 4R^2, \quad N \geq \tilde{O}(\lambda \kappa_{\text{adv}}^2 B^2 R^2 H^2 d^2 \log(1/\delta)/\epsilon^2)$$
where $\kappa_{\text{adv}} = \sup_{|x| \leq B_{\text{adv}}} |1/\sigma'(x)|$ in REGIME–action. Then under Assumption 4, with probability at least $1 - \delta$, we have $V^{r^*, \hat{\pi}} \geq V^{r^*, \pi^*} - \epsilon$.
Theorem 3 demonstrates that for the action-based comparison model, the number of required human feedbacks scales with $\kappa_{\text{adv}}$ instead of $\kappa$. This implies that when $\sigma$ is a commonly used sigmoid function, the sample complexity is exponential in $B_{\text{adv}}$ rather than $r_{\text{max}}$. Crucially, $B_{\text{adv}}$ is always less than or equal to $r_{\text{max}}$, and as mentioned earlier, $B_{\text{adv}}$ can be $o(H)$ even in dense reward settings where $r_{\text{max}} = \Theta(H)$. Consequently, we achieve superior sample complexity compared to the trajectory-based comparison setting.
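The gap between $\kappa$ and $\kappa_{\text{adv}}$ is easy to see numerically. The sketch below evaluates $\sup_{|x| \leq c} 1/\sigma'(x)$ for the sigmoid link, with $r_{\max} = 20$ and $B_{\text{adv}} = 2$ chosen purely for illustration.

```python
import numpy as np

def kappa(bound, grid=100001):
    """sup over |x| <= bound of 1 / sigma'(x) for the sigmoid link."""
    x = np.linspace(-bound, bound, grid)
    sig = 1.0 / (1.0 + np.exp(-x))
    return float((1.0 / (sig * (1.0 - sig))).max())

print(kappa(20.0))  # ~ 4.9e8: exponential in r_max
print(kappa(2.0))   # ~ 9.5:   mild in B_adv
```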
6 SUMMARY
We consider the problem of how to query human feedback efficiently in PbRL, i.e., the experimental design problem in PbRL. In particular, we design a reward-agnostic trajectory-collection algorithm for querying human feedback when the transition dynamics are unknown. Our algorithm provably requires less human feedback to learn the true reward and the optimal policy than the existing literature. Our results also go beyond the tabular case and cover common MDP models, including linear MDPs and low-rank MDPs. Further, we consider the action-based comparison setting and propose a corresponding algorithm that circumvents the exponential scaling with $r_{\text{max}}$ of the trajectory-based comparison setting.
REFERENCES
Agarwal, A., Jiang, N., Kakade, S. M., and Sun, W. (2019). Reinforcement learning: Theory and algorithms. Technical report.
Agarwal, A., Kakade, S., Krishnamurthy, A., and Sun, W. (2020a). Flambe: Structural complexity and representation learning of low rank mdps. arXiv preprint arXiv:2006.10814.
Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. (2020b). Optimality and approximation with policy gradient methods in Markov decision processes. In Conference on Learning Theory, pages 64–66. PMLR.
Agarwal, A., Song, Y., Sun, W., Wang, K., Wang, M., and Zhang, X. (2022). Provable benefits of representational transfer in reinforcement learning. arXiv preprint arXiv:2205.14571.
Bradley, R. A. and Terry, M. E. (1952). Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324–345.
Brown, D., Goo, W., Nagarajan, P., and Niekum, S. (2019). Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. In International conference on machine learning, pages 783–792. PMLR.
Cen, S., Cheng, C., Chen, Y., Wei, Y., and Chi, Y. (2022). Fast global convergence of natural policy gradient methods with entropy regularization. Operations Research, 70(4):2563–2578.
Chen, J. and Jiang, N. (2019). Information-theoretic considerations in batch reinforcement learning. In International Conference on Machine Learning, pages 1042–1051. PMLR.
Chen, J., Modi, A., Krishnamurthy, A., Jiang, N., and Agarwal, A. (2022a). On the statistical efficiency of reward-free exploration in non-linear rl. arXiv preprint arXiv:2206.10770.
Chen, X., Zhong, H., Yang, Z., Wang, Z., and Wang, L. (2022b). Human-in-the-loop: Provably efficient preference-based reinforcement learning with general function approximation. In International Conference on Machine Learning, pages 3773–3793. PMLR.
Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., and Amodei, D. (2017). Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30.
Du, S. S., Luo, Y., Wang, R., and Zhang, H. (2019). Provably efficient Q-learning with function approximation via distribution shift error checking oracle. In Advances in Neural Information Processing Systems, pages 8058–8068. PMLR.
Dudík, M., Hofmann, K., Schapire, R. E., Slivkins, A., and Zoghi, M. (2015). Contextual dueling bandits. In Conference on Learning Theory, pages 563–587. PMLR.
Glaese, A., McAleese, N., Trębacz, M., Aslanides, J., Firoiu, V., Ewalds, T., Rauh, M., Weidinger, L., Chadwick, M., Thacker, P., et al. (2022). Improving alignment of dialogue agents via targeted human judgements. arXiv preprint arXiv:2209.14375.
Jin, C., Krishnamurthy, A., Simchowitz, M., and Yu, T. (2020a). Reward-free exploration for reinforcement learning. In International Conference on Machine Learning, pages 4870–4879. PMLR.
Jin, C., Yang, Z., Wang, Z., and Jordan, M. I. (2019). Provably efficient reinforcement learning with linear function approximation. arXiv preprint arXiv:1907.05388.
Jin, C., Yang, Z., Wang, Z., and Jordan, M. I. (2020b). Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory, pages 2137–2143. PMLR.
Laskey, M., Staszak, S., Hsieh, W. Y.-S., Mahler, J., Pokorny, F. T., Dragan, A. D., and Goldberg, K. (2016). Shiv: Reducing supervisor burden in dagger using support vectors for efficient learning from demonstrations in high dimensional state spaces. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 462–469.
|
zkVm3JqJzs
|
* I didn't get the point of the proof of Proposition 1. What is the difference between your proof of Proposition 1 and that of Theorem 2? The coverage conclusion is stated for the proposed $\mathcal{C}(\boldsymbol x_{n+1},u_{n+1})$, but $\mathcal{C}(\boldsymbol x_{n+1},u_{n+1})$ does not appear anywhere in your proof. I think the authors need to articulate the proof more clearly.
|
Conformal Prediction for Deep Classifier via Label Ranking
Anonymous authors
Paper under double-blind review
Abstract
Conformal prediction is a statistical framework that generates prediction sets containing ground-truth labels with a desired coverage guarantee. The predicted probabilities produced by machine learning models are generally miscalibrated, leading to large prediction sets in conformal prediction. In this paper, we empirically and theoretically show that disregarding the probabilities' values mitigates the undesirable effect of miscalibrated probabilities. Then, we propose a novel algorithm named Sorted Adaptive prediction sets (SAPS), which discards all the probability values except for the maximum softmax probability. The key idea behind SAPS is to minimize the dependence of the non-conformity score on the probability values while retaining the uncertainty information. In this manner, SAPS can produce sets of small size and communicate instance-wise uncertainty. Theoretically, we provide a finite-sample coverage guarantee of SAPS and show that the expected set size of SAPS is always smaller than that of APS. Extensive experiments validate that SAPS not only reduces the size of prediction sets but also broadly enhances the conditional coverage rate and adaptation of prediction sets.
1 Introduction
Machine learning is being deployed in many high-stakes tasks, such as autonomous driving (Bojarski et al., 2016), medical diagnostics (Caruana et al., 2015) and financial decision-making. The trust and safety in these applications are critical, as any erroneous prediction can be costly and dangerous. To assess the reliability of predictions, a popular solution is to quantify the model uncertainty, such as confidence calibration (Guo et al., 2017), MC-Dropout (Gal & Ghahramani, 2016), and Bayesian neural network (Smith, 2013; Blundell et al., 2015). However, these methods lack theoretical guarantees of model performance. This gives rise to the importance of Conformal Prediction (CP) (Vovk et al., 2005; Shafer & Vovk, 2008; Balasubramanian et al., 2014; Angelopoulos & Bates, 2021), which yields prediction sets containing ground-truth labels with a desired coverage guarantee.
In the literature, CP algorithms design non-conformity scores to quantify the degree of deviation between a new instance and the training data, determining the size of the final prediction sets. A higher non-conformity score is associated with a larger prediction set or region, indicating a lower level of confidence in the prediction. For example, Adaptive Prediction Sets (APS) (Romano et al., 2020) calculates the score by accumulating the sorted softmax values in descending order. However, the softmax probabilities typically exhibit a long-tailed distribution, allowing for easy inclusion of those tail classes in the prediction sets. To alleviate this issue, Regularized Adaptive Prediction Sets (RAPS) (Angelopoulos et al., 2021b) exclude unlikely classes by appending a penalty to classes beyond some specified threshold. The non-conformity score of RAPS still involves unreliable softmax probabilities, leading to suboptimal performance in conformal prediction. This motivates our question: does the probability value play a critical role in conformal prediction?
In this work, we show that the value of softmax probability might be redundant information for constructing the non-conformity score in conformal prediction. We provide an empirical analysis by removing the exact value of softmax probability while preserving the relative rankings of labels. The results indicate that APS using label ranking yields much smaller prediction sets than APS using the softmax outputs, at the same coverage rate. Theoretically, we show that, by removing the probability value, the size of prediction sets generated by APS is consistent with model prediction accuracy.
In other words, a model with higher accuracy can produce smaller prediction sets when using APS without access to the probability values. The details of the analysis are presented in Subsection 3.1.
Inspired by the analysis, our key idea is to minimize the dependence of the non-conformity score on the probability values, while retaining the uncertainty information. Specifically, we propose **Sorted Adaptive prediction sets** (dubbed **SAPS**), which discards all the probability values except for the maximum softmax probability in the construction of non-conformity score. This can be achieved by replacing the non-maximum probability values with a constant, after sorting in descending order. In effect, SAPS can not only produce sets of small size but also communicate instance-wise uncertainty. Theoretically, we show that the expected value of set size from SAPS is always smaller than APS, using a well-calibrated model.
To verify the effectiveness of our method, we conduct thorough empirical evaluations on common benchmarks, including CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009). The results demonstrate that SAPS achieves superior performance over the compared methods, including APS and RAPS. For example, our approach reduces the average size of prediction sets from 20.95 to 2.98 – only \( \frac{1}{7} \) of the prediction set size from APS. Compared to RAPS, we show that SAPS not only produces a higher conditional coverage rate but also exhibits better adaptability to the instance difficulty.
We summarize our contributions as follows:
1. We empirically show that the probability value is not necessary in APS. Specifically, APS without probability value generates smaller prediction sets than vanilla APS. Moreover, we theoretically show that APS without probability value can provide stable prediction sets, in which the set size is consistent with the prediction accuracy of models.
2. We propose a novel non-conformity score–SAPS that minimizes the dependency on probability value while retaining the uncertainty information. We provide theoretical analyses to show the marginal coverage properties of SAPS and the advantage over APS.
3. Extensive experimental results demonstrate the effectiveness of our proposed method. We show that SAPS not only reduces the size of prediction sets but also broadly enhances the conditional coverage rate and adaptation of prediction sets.
4. We provide analyses to improve our understanding of the proposed method. In particular, we contrast with a special variant of RAPS and demonstrate the advantages of our method. We also investigate the effect of calibration on our method.
## 2 PRELIMINARIES
In this work, we consider the multi-class classification task with \( K \) classes. Let \( X \subset \mathbb{R}^d \) be the input space and \( Y := \{1, \ldots, K\} \) be the label space. We use \( \hat{\pi} : X \rightarrow \mathbb{R}^K \) to denote the pre-trained neural network used to predict the label of a test instance. Let \( (X, Y) \sim P_{XY} \) denote a random data pair drawn from the joint data distribution \( P_{XY} \). Ideally, \( \hat{\pi}_y(x) \) approximates the conditional probability of class \( y \) given feature \( x \), i.e., \( \mathbb{P}[Y = y | X = x] \). Then, the model prediction in classification tasks is generally made as \( \hat{y} = \arg \max_{y \in Y} \hat{\pi}_y(x) \).
### Conformal prediction.
To provide a formal guarantee for the model performance, conformal prediction (Vovk et al., 2005) is designed to produce prediction sets containing ground-truth labels with a desired probability. Instead of predicting one-hot labels from the model outputs, the goal of conformal prediction is to construct a set-valued mapping \( C : X \rightarrow 2^Y \), which satisfies the **marginal coverage**:
\[
\mathbb{P}(Y \in C_{1-\alpha}(X)) \geq 1 - \alpha,
\]
where \( \alpha \in (0, 1) \) denotes the desired error rate and \( C_{1-\alpha}(X) \) is a subset of \( Y \). In particular, a smaller value of \( \alpha \) enlarges the prediction set, i.e.,
\[
\alpha_1 > \alpha_2 \implies C_{1-\alpha_1}(X) \subseteq C_{1-\alpha_2}(X).
\]
Before deployment, conformal prediction begins with a calibration step, using a calibration set \( D_{cal} := \{(x_i, y_i)\}_{i=1}^n \) whose data are also drawn i.i.d. from the distribution \( P_{XY} \).
Figure 1: (a) Sorted softmax probabilities of an example from ImageNet in descending order. (b) Set size for APS on various models. We use "w/ value" and "w/o value" to represent the vanilla APS and APS with label ranking, respectively. The numbers in brackets represent the prediction accuracy of the model. The sizes of the prediction sets are small after removing the probability value.
Specifically, we calculate a non-conformity score \( s_i = S(x_i, y_i) \) for each example \((x_i, y_i)\) in the calibration set, where \( s_i \) measures the degree of deviation between the given example and the training data. The \( 1 - \alpha \) quantile of the non-conformity scores \( \{s_i\}_{i=1}^n \) is then determined as a threshold \( \tau \). Formally, the value of \( \tau \) can be obtained as shown below:
\[
\tau = \inf \left\{ s : \frac{|\{i \in \{1, \ldots, n\} : s_i \leq s\}|}{n} \geq \frac{(n+1)(1-\alpha)}{n} \right\}
\]
During testing, we calculate the non-conformity score for each label given a new instance \( x_{n+1} \). Then, the corresponding prediction set \( C(x_{n+1}) \) comprises possible labels whose non-conformity score \( S(x_{n+1}, y) \) falls within the threshold \( \tau \):
\[
C_{1-\alpha}(x_{n+1}) = \{y \in Y : S(x_{n+1}, y) \leq \tau\}.
\]
The equation above exhibits a nesting property of threshold, i.e., \( \tau_1 \leq \tau_2 \implies \{y \in Y : S(x_{n+1}, y) \leq \tau_1\} \subseteq \{y \in Y : S(x_{n+1}, y) \leq \tau_2\} \). With a lower value of \( \tau \), the model tends to produce a smaller prediction set, indicating a higher level of confidence in the prediction. Conversely, the increase of \( \tau \) will enlarge the size of the prediction set, suggesting greater uncertainty of the prediction. In this manner, conformal prediction can be used to estimate the uncertainty or reliability of the model’s predictions.
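The calibration and prediction steps above translate directly into code. The following is a minimal sketch of this split conformal procedure; `score_fn` is a placeholder for any non-conformity score such as APS, RAPS, or SAPS, and the function names are illustrative.

```python
import numpy as np

def conformal_threshold(scores, alpha):
    """The 1 - alpha conformal quantile of the calibration scores."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # rank of the quantile
    return np.sort(scores)[min(k, n) - 1]    # k-th smallest score

def prediction_set(score_fn, x, labels, tau):
    """C(x) = {y in Y : S(x, y) <= tau}."""
    return [y for y in labels if score_fn(x, y) <= tau]
```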
**Adaptive prediction sets (APS).** In the APS method (Romano et al., 2020), the non-conformity scores are calculated by accumulating softmax probabilities in descending order. Formally, given a data pair \((x, y)\), the non-conformity score can be computed by:
\[
S(x, y, u; \hat{\pi}) := \sum_{i=1}^{o(y, \hat{\pi}(x)) - 1} \hat{\pi}_i(x) + u \cdot \hat{\pi}_{o(y, \hat{\pi}(x))}(x),
\]
where \( o(y, \hat{\pi}(x)) \) denotes the index of \( \hat{\pi}_y(x) \) in the sorted softmax probabilities, i.e., \( \hat{\pi}_1(x), \ldots, \hat{\pi}_K(x) \), and \( u \) is an independent random variable satisfying a uniform distribution on \([0, 1]\). Given a test point \( x_{n+1} \), the prediction set of APS with the error rate \( \alpha \) is given by
\[
C_{1-\alpha}(x_{n+1}, u_{n+1}) := \{y \in Y : S(x_{n+1}, y, u_{n+1}; \hat{\pi}) \leq \tau\}.
\]
With the non-conformity score in Eq. 4, APS achieves a finite-sample marginal coverage guarantee. However, the softmax probabilities \( \hat{\pi}(x) \) typically exhibit a long-tailed distribution, where the tail probabilities with small values can be easily included in the prediction sets. Consequently, APS tends to produce large prediction sets for all inputs, regardless of the instance difficulty. For example, in Figure 1a, the long-tail probability distribution results in the non-conformity scores of many classes falling within \( \tau \). This motivates our analysis to investigate the role of probability value in conformal prediction.
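For concreteness, a minimal implementation of the APS score in Eq. 4 for a single example might look as follows; `probs` is the softmax vector $\hat{\pi}(x)$ and the function name is an illustrative choice.

```python
import numpy as np

def aps_score(probs, y, u):
    """APS score (Eq. 4): cumulative mass of the classes ranked
    above y, plus u times y's own probability."""
    order = np.argsort(-probs)               # classes sorted by descending prob
    rank = int(np.where(order == y)[0][0])   # 0-indexed rank, i.e. o(y) - 1
    return probs[order[:rank]].sum() + u * probs[y]
```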
3 Motivation and Method
3.1 Motivation
To analyze the role of probability values, we perform an ablation study by removing the influence of probability values in Eq. 4. In particular, we replace these probabilities with a constant $\gamma$ (e.g., $\gamma = 1$), after sorting in descending order. With the constant $\gamma$, the modified non-conformity score for a data pair $(x, y)$ with a pre-trained model $\hat{\pi}$ is:
$$S(x, y, u; \hat{\pi}) := \gamma \cdot [o(y, \hat{\pi}(x)) - 1 + u].$$
In the analysis, we fix the constant as 1 for simplification. Then, we conduct experiments on ImageNet (Deng et al., 2009) to compare the new non-conformity score to the vanilla APS. Here, we set the desired error rate as 10%, i.e., $\alpha = 0.1$. Following previous works (Romano et al., 2019; Angelopoulos et al., 2021b; Ghosh et al., 2023), we first randomly split the test dataset of ImageNet into two subsets: a conformal calibration subset of size 30K and a test subset of size 20K. For network architecture, we use seven models trained on ImageNet, with different levels of prediction performance (see Figure 1b). All models are calibrated by the temperature scaling procedure (Guo et al., 2017). Finally, experiments are repeated ten times and the median results are reported.
Probability values are not necessary. Figure 1b presents the results of APS with and without probability values across various models. The results indicate that APS based solely on label ranking generates smaller prediction sets than vanilla APS, across all models. For example, with the Inception model, removing the probability values reduces the set size from 88.18 to 6.33. Using the transformer-based DeiT model (Touvron et al., 2021), APS without probability values also obtains a smaller set size. From this comparison, we show that the probability value might be redundant information for non-conformity scores in conformal prediction. We proceed by theoretically analyzing the advantage of removing probability values in APS.
A theoretical interpretation. The empirical results above demonstrate that the probability value is not a critical component of the non-conformity score for conformal prediction. Here, we provide a formal analysis of APS without probability value through the following theorem:
**Theorem 1.** Let $A_r$ denote the accuracy of the top $r$ predictions on a trained model $\hat{\pi}$. Given a significance level $\alpha$, for any test instance $x \sim P_X$ and an independent random variable $u \sim U[0, 1]$, if there exists a number $k$ satisfying $A_k \geq 1 - \alpha > A_{k-1}$, the size of prediction set $C_{1-\alpha}(x, u)$ generated by APS without probability value can be obtained by
$$|C_{1-\alpha}(x, u)| = \begin{cases} k, & \text{if } u < \frac{1 - \alpha - A_{k-1}}{A_k - A_{k-1}}, \\ k - 1, & \text{otherwise}. \end{cases}$$
The expected value of the set size can be given by
$$\mathbb{E}_{u \sim U[0, 1]}[|C_{1-\alpha}(x, u)|] = k - 1 + \frac{1 - \alpha - A_{k-1}}{A_k - A_{k-1}}.$$
The proof of Theorem 1 can be found in Appendix A. As indicated by Eq. 7, the prediction set size generated by APS without probability values is determined by $k$. In other words, higher model accuracy leads to a smaller value of $k$ and hence smaller prediction sets. This argument is clearly supported by the experimental results shown in Figure 1b. In particular, we observe that with APS without probability values, models with higher accuracy produce smaller prediction sets, while the vanilla APS does not exhibit this behavior. For example, ResNeXt101 achieves higher prediction accuracy than ResNet152, yet vanilla APS produces a larger prediction set for it. The analysis demonstrates the advantage of removing probability values in APS, namely decreasing the sensitivity to tail probabilities.
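A small worked example of Theorem 1, with top-$r$ accuracies chosen purely for illustration (they are not measured values):

```python
alpha = 0.1
A = [0.0, 0.80, 0.88, 0.93, 0.96]  # A[r] = top-r accuracy; A[0] = 0
# smallest k with A_k >= 1 - alpha  ->  k = 3 here
k = next(r for r in range(1, len(A)) if A[r] >= 1 - alpha)
frac = (1 - alpha - A[k - 1]) / (A[k] - A[k - 1])  # (0.90 - 0.88) / 0.05 = 0.4
expected_size = k - 1 + frac                        # 2 + 0.4 = 2.4
print(k, expected_size)
```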
3.2 Method
In the analysis above, we demonstrate that removing the probability value in APS can largely decrease the size of prediction sets. On the other hand, the expected value of the set size (shown in
Eq. 6) will oscillate between $k - 1$ and $k$ after removing the probability values. This implies a shortcoming of the modified non-conformity score in adapting to instance-wise uncertainty, which may cause overcoverage of easy examples.
To alleviate this limitation, we propose a novel conformal prediction algorithm, named **Sorted Adaptive Prediction Sets**. The key idea behind this algorithm is to minimize the dependence of the non-conformity score on the probability values while retaining the uncertainty information. In particular, we discard all the probability values except for the maximum softmax probability, which is usually used to measure the model confidence in the prediction. Formally, the non-conformity score can be calculated as
$$S(x, y, u; \hat{\pi}) := \begin{cases}
u \cdot \hat{\pi}_{\text{max}}(x), & \text{if } o(y, \hat{\pi}(x)) = 1, \\
\hat{\pi}_{\text{max}}(x) + (o(y, \hat{\pi}(x)) - 2 + u) \cdot \lambda, & \text{otherwise},
\end{cases}$$
where $\lambda$ is a hyperparameter showing the weight of ranking information, $\hat{\pi}_{\text{max}}(x)$ denotes the maximum softmax probability and $u$ denotes a uniform random variable. We provide a detailed analysis on the effect of $\lambda$ in Section 5.
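A minimal sketch of the SAPS score in Eq. 8, in the same conventions as the APS sketch above (names are illustrative):

```python
import numpy as np

def saps_score(probs, y, u, lam):
    """SAPS score (Eq. 8): keep only the top probability; all
    lower-ranked mass is replaced by the constant weight lam."""
    order = np.argsort(-probs)
    rank = int(np.where(order == y)[0][0]) + 1  # o(y, pi(x)), 1-indexed
    p_max = probs[order[0]]
    if rank == 1:
        return u * p_max
    return p_max + (rank - 2 + u) * lam
```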
In Eq. 8, we incorporate the uncertainty information via the maximum probability $\hat{\pi}_{\text{max}}(x)$ and use the constant $\lambda$ to mitigate the undesirable influence of tail probabilities. In this manner, SAPS can not only produce sets of small size but also communicate instance-wise uncertainty; the prediction set can be smaller for easy inputs than for hard ones. We illustrate this with an experiment in Figure 2, where examples with wrong predictions receive higher non-conformity scores from SAPS than from APS and RAPS. Moreover, for examples with correct predictions, the non-conformity scores defined in APS, RAPS, and SAPS are equivalent, as the rank of the ground-truth label is 1 (i.e., $S(x, y, u; \hat{\pi}) = u \cdot \hat{\pi}_{\text{max}}(x)$). The results indicate that the non-conformity score of SAPS better characterizes the deviation between a given example and the training data.
In what follows, we provide a formal analysis to show the effectiveness of our SAPS algorithm. We start by showing the finite-sample marginal coverage properties:
**Proposition 1.** *(Coverage guarantee of SAPS).* Suppose $(x_i, y_i, u_i)_{i=1,\ldots,n}$ and $(x_{n+1}, y_{n+1}, u_{n+1})$ are i.i.d. and let the prediction set of SAPS with error rate $\alpha$ as $C_{1-\alpha}(x, u) := \{y \in Y : S(x, y, u; \hat{\pi}) \leq \tau\}$, where $S(x, y, u; \hat{\pi})$ is the score function defined as in Eq. 8. Then for $\tau$ defined as $1 - \alpha$ quantile of scores $\{S(x_i, y_i, u_i; \hat{\pi})\}_{i=1,\ldots,n}$, we have the coverage guarantee:
$$P(y_{n+1} \in C_{1-\alpha}(x_{n+1}, u_{n+1})) \geq 1 - \alpha$$
The corresponding proof is provided in Appendix B. In the following, we further prove that SAPS always dominates APS in the size of prediction sets.
**Proposition 2.** *(SAPS dominates APS).* If $\hat{\pi}$ is well-calibrated and $\lambda \geq 1 - \frac{1}{K}$, for any test instance $x \sim P_X$ with a significance level $\alpha$, we have
$$\mathbb{E}_{u \sim [0,1]}[|\mathcal{C}(x, u)|] \leq \mathbb{E}_{u \sim [0,1]}[|\tilde{\mathcal{C}}(x, u)|]$$
where $u \sim U[0, 1]$. $\mathcal{C}(\cdot)$ and $\tilde{\mathcal{C}}(\cdot)$ represent the prediction set from SAPS and APS, respectively.
In other words, SAPS consistently generates a smaller prediction set than APS when the oracle model is available, while both algorithms maintain the desired marginal coverage rate. The formal pseudocode for SAPS is provided in the Appendix H.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Classification datasets. We consider three prominent datasets in our study: ImageNet (Deng et al., 2009), CIFAR-100 and CIFAR-10 (Krizhevsky et al., 2009), which are common benchmarks for conformal prediction. In the case of ImageNet, we split the test dataset containing 50000 images into 30000 images for the calibration set and 20000 images for the test set. For CIFAR-100 and CIFAR-10, we divide the corresponding test dataset equally into a calibration set containing 5000 images and a test set containing 5000 images.
Models. We employ twelve different classifiers, including nine standard classifiers, two transformer-based models, i.e., ViT (Dosovitskiy et al., 2020) and DeiT (Touvron et al., 2021), and a Vision-Language Model named CLIP (Radford et al., 2021). Aside from CLIP with zero-shot prediction capabilities, the remaining models are the pre-trained models on ImageNet. For CIFAR-10 and CIFAR-100, these models will be fine-tuned on the pre-trained models. Moreover, all classifiers are calibrated by the Temperature scaling procedure (Guo et al., 2017).
Conformal prediction algorithms. We compare the proposed method against APS (Romano et al., 2020) and RAPS (Angelopoulos et al., 2021b). Then, we choose the hyperparameter that achieves the smallest set size on a validation set, which is a subset of the calibration set. Specifically, we tune the regularization hyperparameter of RAPS over \{0.001, 0.01, 0.1, 0.15, \ldots, 0.5\} and the hyperparameter \( \lambda \) of SAPS over \{0.02, 0.05, 0.1, 0.15, \ldots, 0.6\}. All experiments are conducted with ten trials, and the median results are reported.
Evaluation. The primary metrics used for the evaluation of prediction sets are set size (average length of prediction sets; small value means high efficiency) and marginal coverage rate (fraction of testing examples for which prediction sets contain the ground-truth labels). These two metrics can be formally represented as:
\[
\text{Size} = \frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} |C(x_i)|
\]
\[
\text{Coverage} = \frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} \mathbb{1}(y_i \in C(x_i))
\]
Conditional coverage rate. In this work, we propose an alternative metric to the SSCV criterion named Each-Size Coverage Violation (ESCV) that can be utilized for any number of classes, as shown below:
\[
\text{ESCV}(C, K) = \sup_j \max(0, (1 - \alpha) - \frac{| \{i \in J_j : y_i \in C(x_i)\} |}{|J_j|})
\]
where \( J_j = \{i : |C(x_i)| = j\} \) and \( j \in \{1, \ldots, K\} \). Specifically, ESCV measures the most significant coverage violation among prediction sets of each size. This metric is practical because it only requires the set size, and it is suitable for any classification problem, from binary classification to problems with many classes.
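A direct implementation of the ESCV metric, assuming prediction sets and labels are available as Python lists (names are illustrative):

```python
import numpy as np

def escv(pred_sets, labels, alpha):
    """Each-Size Coverage Violation: worst coverage shortfall among
    groups of test points whose prediction sets share the same size."""
    sizes = np.array([len(c) for c in pred_sets])
    covered = np.array([y in c for c, y in zip(pred_sets, labels)])
    worst = 0.0
    for j in np.unique(sizes):
        idx = sizes == j
        worst = max(worst, (1 - alpha) - covered[idx].mean())
    return max(0.0, worst)
```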
4.2 RESULTS
SAPS generates smaller prediction sets. In Table 1, the performance of set sizes and coverage rates for various classification tasks are presented. We can observe that the coverage rate of all conformal prediction methods is close to the desired coverage \( 1 - \alpha \). At different significance levels (i.e., 0.1 and 0.05), the prediction set size is consistently reduced by SAPS for ImageNet, CIFAR-100 and CIFAR-10, compared to APS and RAPS. For example, when evaluated on ImageNet, SAPS reduces the average set size from 20.95 of APS to 2.98. Moreover, as the scale of the classification task increases, the efficiency improvement achieved by SAPS becomes increasingly evident. Overall, the experiments show that our method has the desired coverage rate and a smaller set size than APS and RAPS. Due to space constraints, we only report the average results of multiple models on various classification tasks in Table 1, and detailed results for each model are available in Appendix D.
Table 1: Results of average set sizes on different datasets. We evaluate the performance of SAPS, APS, and RAPS by calculating the average set size across multiple models. It is evident that SAPS consistently outperforms APS and RAPS in various classification tasks, such as ImageNet, CIFAR-100, and CIFAR-10, and different significance levels ($\alpha = 0.1, 0.05$). **Bold** numbers indicate optimal performance.
| Datasets | $\alpha$ | Coverage (APS / RAPS / SAPS) | Size ↓ (APS / RAPS / SAPS) |
|-----------|------|------------------------|------------------------------|
| ImageNet | 0.1 | 0.899 / 0.900 / 0.900 | 20.95 / 3.29 / **2.98** |
| ImageNet | 0.05 | 0.949 / 0.950 / 0.950 | 44.67 / 8.57 / **7.55** |
| CIFAR-100 | 0.1 | 0.899 / 0.900 / 0.899 | 7.88 / 2.99 / **2.67** |
| CIFAR-100 | 0.05 | 0.950 / 0.949 / 0.949 | 13.74 / 6.42 / **5.53** |
| CIFAR-10 | 0.1 | 0.899 / 0.900 / 0.898 | 1.97 / 1.79 / **1.63** |
| CIFAR-10 | 0.05 | 0.950 / 0.950 / 0.950 | 2.54 / 2.39 / **2.25** |
Figure 3: (a) ESCV with different models on ImageNet with $\alpha = 0.1$. A good conformal prediction algorithm should keep the y-axis value (i.e., ESCV) small. The results show that SAPS outperforms RAPS on most models. (b) Set size under different difficulties on VGG16. Easy examples call for small sets, while hard ones require large sets. SAPS generates smaller sets than RAPS on easy examples, but as the difficulty increases, the sets from SAPS become larger than those from RAPS. (c) Set size on ImageNet-V2 at $\alpha = 0.1$.
SAPS achieves lower conditional coverage violation. In Figure 3a, we demonstrate that SAPS not only excels in efficiency but also improves the conditional coverage rate, i.e., ESCV. Given that our study primarily focuses on improving the efficiency of prediction sets, the comparison of ESCV is limited to SAPS and RAPS. The results in Figure 3a show that, for most models, SAPS attains a smaller ESCV than RAPS. For example, on CLIP, SAPS reduces the ESCV from 0.9 to 0.37. In addition, on ImageNet, the ESCV of SAPS across different models is more stable than that of RAPS: SAPS keeps a low value on most models, whereas the maximum ESCV of RAPS reaches 0.9. Detailed results on CIFAR-10 and CIFAR-100 are provided in Appendix E.
SAPS exhibits higher adaptation. Adaptation refers to the ability to adjust the size of the prediction set based on the complexity or difficulty of individual examples. In other words, the prediction sets should be small for easy examples and large for hard ones. In this work, we use the rank of the ground-truth label in the sorted softmax probabilities to denote the difficulty: examples of serious difficulty have high ranks for their ground-truth labels. In Figure 3b, the results show that the set sizes of SAPS adapt better. Specifically, compared with RAPS, SAPS produces smaller sets for easy examples but larger sets for hard examples on VGG16. More results for different models are reported in Appendix F. Overall, we show that SAPS improves the adaptation of prediction sets while maintaining small set sizes.
Experiments on distribution shifts. We also verify the effectiveness of our method on a new distribution that differs from the training data distribution. Specifically, we divide the test dataset of ImageNet-V2 (Recht et al., 2019), which exhibits a distribution shift relative to ImageNet, equally into a calibration set of 5000 images and a test set of 5000 images. The tested models are pre-trained on ImageNet only and are not fine-tuned. As shown in Figure 3c, under $\alpha = 0.1$, our method also generates the smallest sets when the conformal calibration set and the test set come from the new distribution.
Table 2: Set size and ESCV for RAPS ($k_r = 1$) and SAPS. We report the average value across various models with $\alpha = 0.1$. The detailed results of each model are provided in the Appendix G. **Bold** numbers indicate optimal performance.
| Datasets | Coverage (RAPS($k_r{=}1$) / SAPS) | Size ↓ (RAPS($k_r{=}1$) / SAPS) | ESCV ↓ (RAPS($k_r{=}1$) / SAPS) |
|-----------|------------------|------------------|------------------|
| ImageNet | 0.900 / 0.900 | 3.24 / **2.98** | 0.631 / **0.396** |
| CIFAR-100 | 0.899 / 0.899 | 2.79 / **2.67** | 0.390 / **0.302** |
| CIFAR-10 | 0.900 / 0.898 | **1.62** / 1.63 | 0.138 / **0.089** |
Figure 4: (a) Effect of $\lambda$ on set size across various models. The black markers ($\star$, ♦, ▲, •) represent the results of APS without probability values. (b) Effect of the calibration set size on set size across various models. (c) Relationship between temperature and the set size of SAPS on ResNet152, where the horizontal axis is the log transformation of the temperature $T$.
5 DISCUSSION
Effect of $\lambda$ and calibration size. In SAPS, we choose the optimal $\lambda$ by searching over a sequence of candidate values to minimize the set size on a validation set. In this work, the validation set constitutes 20% of the calibration set. Here, we provide an empirical analysis of whether the set size is sensitive to $\lambda$ and to the calibration size. To this end, we conduct two experiments on ImageNet to analyze the effects of $\lambda$ and of the calibration set size.
We present the results of four models in Figure 4. Figure 4a illustrates that one can efficiently use grid search to find the optimal $\lambda$. Furthermore, as depicted in Figure 4b, nearly all models maintain stable results as the size of the calibration set increases. Overall, the results demonstrate that the set size is not sensitive to variations in $\lambda$ or in the calibration size.
SAPS vs. RAPS ($k_r = 1$). While SAPS has demonstrated strong promise, it shares a similarity in the definition of non-conformity scores with RAPS ($k_r = 1$), as shown below:
$$S(x, y, u, \hat{\pi}) = \sum_{i=1}^{o(y, \hat{\pi}(x)) - 1} \hat{\pi}_i(x) + u \cdot \hat{\pi}_{o(y, \hat{\pi}(x))}(x) + \phi \cdot (o(y, \hat{\pi}(x)) - k_r)^+.$$
Here, $\phi$ represents the weight of regularization and $(z)^+$ denotes the positive part of $z$. To this end, we conduct a comprehensive experiment with $\alpha = 0.1$ on CIFAR-10, CIFAR-100, and ImageNet to compare SAPS and RAPS ($k_r = 1$).
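For reference, the RAPS score above differs from APS only by the rank penalty; a minimal sketch (names illustrative):

```python
import numpy as np

def raps_score(probs, y, u, phi_reg, k_r=1):
    """RAPS score: the APS score plus the penalty phi * (o(y) - k_r)^+."""
    order = np.argsort(-probs)
    rank = int(np.where(order == y)[0][0]) + 1           # o(y), 1-indexed
    aps = probs[order[:rank - 1]].sum() + u * probs[y]   # APS part
    return aps + phi_reg * max(rank - k_r, 0)            # rank penalty
```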
As indicated in Table 2, SAPS outperforms RAPS ($k_r = 1$) in large-scale classification scenarios, achieving smaller prediction sets and lower conditional coverage violations. In the small-scale classification task (i.e., CIFAR-10), SAPS produces a set size comparable to RAPS ($k_r = 1$), while the ESCV of SAPS is more than 1.5 times smaller than that of RAPS. Overall, employing a constant to substitute for the noisy tail probabilities is an effective way to further alleviate their negative influence.
Relation to temperature scaling. In the literature, temperature scaling calibrates the softmax probabilities output by models by minimizing the Expected Calibration Error (ECE), leading to a reliable maximum probability. Since the only probability value used in the non-conformity score of Eq. 8 is the maximum probability, a question arises: what is the relation between temperature scaling and the set size of SAPS? Here, we vary the temperature $T \in \{0.1, 0.5, 1, 1.1, 1.3, 1.5, 1.7, 1.9, 2, 5, 10, 20\}$ in temperature scaling and use SAPS to test the ResNet152 model calibrated with each temperature on the ImageNet benchmark. The results indicate that the temperature minimizing ECE also minimizes the set size.
As illustrated in Figure 4c, the temperature with the lowest ECE achieves the smallest prediction sets. Specifically, the optimal temperature for ECE and for set size coincide, i.e., $T = 1.3$. Moreover, as the ECE increases, the set size also increases. Indeed, temperature scaling cannot change the ranking of the softmax probabilities, but it improves the reliability of the maximum probability, making the non-conformity scores of SAPS more trustworthy. Overall, for SAPS, better confidence calibration produces smaller prediction sets.
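Temperature scaling itself is a one-line transformation of the logits. Below is a minimal sketch of the probabilities fed to SAPS, assuming access to the raw logits; $T$ would be tuned on held-out data to minimize ECE.

```python
import numpy as np

def temperature_softmax(logits, T):
    """Softmax of logits / T; T = 1 recovers the uncalibrated model."""
    z = logits / T
    z -= z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```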
6 RELATED WORK
Conformal prediction is a statistical framework characterized by a finite-sample coverage guarantee. It has been utilized in various tasks including regression (Lei & Wasserman, 2014; Romano et al., 2019), classification (Sadinle et al., 2019), structured prediction (Bates et al., 2021), Large-Language Model (Kumar et al., 2023; Ren et al., 2023) and so on.
The primary focal points of CP are reducing the prediction set size and enhancing the coverage rate. Strategies to reduce the set size can be roughly split into two branches. The first leverages post-hoc techniques (Romano et al., 2020; Angelopoulos et al., 2021a; Ghosh et al., 2023). Other works concentrate on specific settings such as federated learning (Lu et al., 2023), multi-label problems (Cauchois et al., 2020; Fisch et al., 2022; Papadopoulos, 2014), and outlier detection (Bates et al., 2023; Chen et al., 2023; Guan & Tibshirani, 2022). Most existing post-hoc methods construct the non-conformity score from unreliable probability values, leading to suboptimal performance. Different from previous post-hoc methods, we show that probability values are not necessary in non-conformity scores and design an effective method to remove them while retaining the uncertainty information.
Another avenue of research focuses on developing new training algorithms to reduce the average prediction set size (Colombo & Vovk, 2020; Chen et al., 2021; Stutz et al., 2022; Einbinder et al., 2022b; Bai et al., 2022; Fisch et al., 2021). Those training methods are usually computationally expensive due to model retraining. Additionally, a growing body of work is dedicated to enhancing the coverage rate (Vovk, 2012; Shi et al., 2013; Löfström et al., 2015; Ding et al., 2023), including efforts to maintain the marginal coverage rate by modifying the exchangeability assumption to accommodate factors such as adversaries (Gendler et al., 2021), covariate shifts (Tibshirani et al., 2019), label shifts (Podkopaev & Ramdas, 2021), and noisy labels (Einbinder et al., 2022a; Sesia et al., 2023). In this study, SAPS not only reduces the size of prediction sets but also broadly enhances the conditional coverage rate and adaptation of prediction sets.
7 CONCLUSION
In this paper, we present SAPS, a simple alternative CP algorithm that generates smaller prediction sets. By integrating the label rank, SAPS effectively mitigates the negative effect of small tail probabilities, resulting in stable prediction sets. Extensive experiments show that SAPS improves the conditional coverage rate and adaptation while maintaining small prediction sets. The method can be easily applied to any pre-trained classifier. We hope that our insights inspire future research on leveraging label ranking information for conformal prediction.
REFERENCES
Anastasios N Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. *arXiv preprint arXiv:2107.07511*, 2021.
Anastasios Nikolas Angelopoulos, Stephen Bates, Michael I. Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net, 2021a.
Anastasios Nikolas Angelopoulos, Stephen Bates, Michael I. Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net, 2021b.
Yu Bai, Song Mei, Huan Wang, Yingbo Zhou, and Caiming Xiong. Efficient and differentiable conformal prediction with general function classes. *arXiv preprint arXiv:2202.11091*, 2022.
Vineeth Balasubramanian, Shen-Shyang Ho, and Vladimir Vovk. *Conformal prediction for reliable machine learning: theory, adaptations and applications*. Newnes, 2014.
Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael Jordan. Distribution-free, risk-controlling prediction sets. *Journal of the ACM (JACM)*, 68(6):1–34, 2021.
Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano, and Matteo Sesia. Testing for outliers with conformal p-values. *The Annals of Statistics*, 51(1):149–178, 2023.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In *International Conference on Machine Learning*, pp. 1613–1622. PMLR, 2015.
Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. *arXiv preprint arXiv:1604.07316*, 2016.
Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In *Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1721–1730, 2015.
Maxime Cauchois, Suyash Gupta, and John Duchi. Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction. *arXiv preprint arXiv:2004.10181*, 2020.
Haoxian Chen, Ziyi Huang, Henry Lam, Huajie Qian, and Haofeng Zhang. Learning prediction intervals for regression: Generalization and calibration. In *International Conference on Artificial Intelligence and Statistics*, pp. 820–828. PMLR, 2021.
Xiongjie Chen, Yunpeng Li, and Yongxin Yang. Batch-ensemble stochastic neural networks for out-of-distribution detection. In *ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 1–5. IEEE, 2023.
Nicolo Colombo and Vladimir Vovk. Training conformal predictors. In *Conformal and Probabilistic Prediction and Applications*, pp. 55–64. PMLR, 2020.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. Ieee, 2009.
Tiffany Ding, Anastasios N Angelopoulos, Stephen Bates, Michael I Jordan, and Ryan J Tibshirani. Class-conditional conformal prediction with many classes. *arXiv preprint arXiv:2306.09335*, 2023.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.
|
G2cG3mQqop
|
The comparison to related work does not include the body of work in language-guided image retrieval, which seems highly relevant here, particularly since many of those methods use a vision-language encoder and indexing scheme that clusters the image archive according to linguistic concepts, as the proposed approach does implicitly. However, given that the method does not address the underlying representation, but just relies on the LLM to perform clustering as a black box, this is a minor concern.
|
IMAGE CLUSTERING CONDITIONED ON TEXT CRITERIA
Sehyun Kwon†1, Jaeseung Park†1, Minkyu Kim◇, Jaewoong Cho◇, Ernest K. Ryu†*, Kangwook Lee◇∗
†Seoul National University, ◇KRAFTON, ∗University of Wisconsin–Madison, * Co-senior authors
ABSTRACT
Classical clustering methods do not provide users with direct control of the clustering results, and the clustering results may not be consistent with the relevant criterion that a user has in mind. In this work, we present a new methodology for performing image clustering based on user-specified text criteria by leveraging modern vision-language models and large language models. We call our method Image Clustering Conditioned on Text Criteria (IC|TC), and it represents a different paradigm of image clustering. IC|TC requires a minimal and practical degree of human intervention and grants the user significant control over the clustering results in return. Our experiments show that IC|TC can effectively cluster images with various criteria, such as human action, physical location, or the person’s mood, while significantly outperforming baselines.
1 INTRODUCTION
Image clustering has been studied as a prototypical unsupervised learning task, and it has been used to organize large volumes of visual data (Platt et al., 2003), to reduce the cost of labeling an unlabeled image dataset (Russell et al., 2008; Schmarje et al., 2022), and to enhance image retrieval systems (Wu et al., 2000; Jégou and Chum, 2012). Modern deep image clustering methods are often evaluated against pre-defined class labels of datasets viewed as the ground truth.
In practice, however, a user may have a criterion in mind for how to cluster or organize a set of images. The user may even want multiple clustering results of the same dataset based on different criteria. (See Figure 1.) But, classical clustering methods offer no direct mechanism for the user to control the clustering criterion; the clustering criteria for existing methods are likely determined by the inductive biases of the neural networks and the loss function, data augmentations, and feature extractors used within the method. This necessitates a new paradigm in image clustering, enabling diverse outcomes from a single dataset based on user-specified criteria and revolutionizing the conventional, implicitly dictated clustering processes.
Recently, foundation models have received significant recent interest due to their ability to understand and follow human instructions at an unprecedented level. Large language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023a;b; Chiang et al., 2023; OpenAI, 2023; Adams et al., 2023) perform remarkably well on a wide range of natural language tasks such as understanding, summarizing, and reasoning in zero- or few-shot settings. Vision-language models (VLMs) (Alayrac et al., 2022; Liu et al., 2023; Awadalla et al., 2023; Dai et al., 2023; Li et al., 2023a; Zhu et al., 2023; Gong et al., 2023) interpret natural language instructions in visual contexts and produce responses that seemingly exhibit in-depth image analyses and complex reasoning.
In this work, we present a new methodology based on foundation models for performing image clustering based on user-specified criteria provided in natural language text. We call our method Image Clustering Conditioned on Text Criteria (IC|TC), and it represents a different paradigm of image clustering: the user directs the method with the relevant clustering criterion, the same dataset can be clustered with multiple different criteria, and if the clustering results are not satisfactory, the user can edit the text criterion to iteratively refine the clustering results. IC|TC requires a minimal and practical degree of human intervention and grants the user significant control over the clustering results in return, and we argue that this makes IC|TC more practical and powerful compared to the classical purely unsupervised clustering methods.
1 Work done at KRAFTON. 2 Our code is available at https://github.com/sehyunkwon/ICTC.
(a) Sample images from the clustering results on the Stanford 40 Action dataset. Each result is obtained using a different text criterion: Action, Location, and Mood.
(b) Sample images from the clustering results on the PPMI dataset using the text criterion Instrument with different cluster numbers $K = 2$ and $7$.
Figure 1: Sample images from clustering results of IC|TC. The method finds clusters consistent with the user-specified text criterion. Furthermore, IC|TC provides cluster names (texts above each image cluster) along with the clusters, enhancing the interpretability of clustering results.
1.1 Contribution
Our main contributions are the proposal of the novel task of image clustering conditioned on text criteria and our method IC|TC for solving this task. The task is interesting because the setup where the user is willing and able to provide a textual description of the clustering criterion is practical, arguably more practical than the classical purely unsupervised clustering setup. The method IC|TC is interesting because it leverages modern multi-modal foundation models and solves the task well; our experiments demonstrate that IC|TC can indeed produce satisfactory clustering results consistent with the user-specified criteria.
2 Task definition: Image clustering conditioned on iteratively refined text criteria
The main task we consider in this work is defined as follows: Given a set of images, a number of clusters $K$, and a user-specified criterion expressed in natural language, partition the set of images into $K$ clusters such that the semantic meanings of the clusters are distinguished in a manner that is consistent with the specified user criterion.
Recent image clustering methods (Van Gansbeke et al., 2020; Park et al., 2021; Niu and Wang, 2021) find clusters that agree with pre-defined class labels for datasets such as CIFAR-10 (~90% accuracy). The semantic meanings of the clusters tend to correspond to the category of the foreground object, and the inductive biases of the neural networks and the loss function, data augmentations, and feature extractors used within the method are likely the cause of the clusters being chosen in this manner. In a given setup, however, the clusters returned by such classical clustering methods may not be consistent with the relevant criterion that a user has in mind.
Iterative refinement of text criteria. Under our main task, the text criterion is chosen through a process of iterative refinement: The user specifies a text criterion, performs clustering, examines the clustering results, and, if not satisfied, edits the text criterion to iteratively refine the clustering results. Sometimes, a user-defined text criterion immediately leads to a clustering result that is sufficiently consistent with what the user has in mind, but if not, this iterative prompt engineering procedure provides a practical means for converging to desired results. In practice, hyperparameters of classical clustering algorithms are chosen through an iterative process where the user inspects the clustering output and adjusts the parameters accordingly. In this work, we explicitly acknowledge the process of iteratively determining the text criterion and consider it to be part of the main task.
Comparison with classical clustering. Our task differs from classical clustering in that the user provides information characterizing the relevant criterion by which the images should be clustered. In contrast, classical clustering methods are purely unsupervised and use no such information.
Deep clustering methods are often evaluated against a pre-defined set of labels of a dataset, and such labels tend to focus on the type of object in the foreground. However, the question of whether clustering algorithms can (or cannot) perform clustering with arbitrary criteria has been raised and studied in several prior works (Wolpert and Macready, 1997; Kleinberg, 2002; Caruana et al., 2006; Cui et al., 2007; von Luxburg et al., 2012; Caruana, 2013; McCarthy et al., 2020; Viswanathan et al., 2023). The use of user-defined text criteria makes our task not an instance of (classical) unsupervised clustering, but providing a text criterion is a necessary and practical intervention from the user if the goal is to perform clustering with arbitrary criteria.
Comparison with zero-shot classification. Our task differs from zero-shot classification in that zero-shot classification requires a pre-defined set of classes, and the goal is merely to assign images to these classes. In contrast, our task requires both finding the clusters and assigning images to the clusters. In fact, zero-shot classification can be considered an instance of our task when the user explicitly and precisely describes all $K$ clusters in the clustering criterion.
Figure 2: The IC|TC method. (Step 1) Vision-language model (VLM) extracts detailed relevant textual descriptions of images. (Step 2) Large language model (LLM) identifies the names of the clusters. (Step 3) LLM conducts clustering by assigning each description to the appropriate cluster. The entire procedure is guided by a user-specified text criterion (TC). (Optional TC Refinement). The user can update the text criterion if the clustering results are unsatisfactory. See Appendix B.4 for an unabridged sample output.
3 IC|TC: IMAGE CLUSTERING CONDITIONED ON TEXT CRITERIA
Our main method consists of 3 stages with an optional iterative outer loop. The user-specified text criterion TC is incorporated into the 3 stages via text prompts, roughly of the following form.
\[ P_{\text{step}1}(TC) = \text{"Characterize the image using a well-detailed description"} + TC \]
\[ P_{\text{step}2a}(TC) = \text{"Given a description of an image, label the image"} + TC \]
\[ P_{\text{step}2b}(TC, N, K) = \text{"Given a list of } \{N\} \text{ labels, cluster them into } \{K\} \text{ words"} + TC \]
\[ P_{\text{step}3}(TC) = \text{"Based on the image description, determine the most appropriate cluster"} + TC \]
The precise prompt for each experimental setup in this work is specified in Appendix B.3.1.
3.1 Step 1: Extract salient features from the image
In Step 1, the vision-language model (VLM) extracts salient features from the image in the form of text descriptions.
**Step 1 Vision-language model (VLM) extracts salient features**
**Input:** Image Dataset $\mathcal{D}_{\text{img}}$, Text Criteria TC, Descriptions $\mathcal{D}_{\text{des}} \leftarrow []$
**Output:** $\mathcal{D}_{\text{des}}$
1: **for** img in $\mathcal{D}_{\text{img}}$ **do**
2: $\mathcal{D}_{\text{des}}.\text{append}(\text{VLM}(\text{img}, P_{\text{step1}}(TC)))$ //append image description to $\mathcal{D}_{\text{des}}$
3: **end for**
The user’s criterion TC determines the relevant features the VLM should focus on. For example, the user may wish to cluster with respect to the mood of a person in the image or the overall mood (atmosphere) of the scene. In such cases, the TC may slightly vary:
Criterion 1: Focus on the mood of the person in the center.
Criterion 2: Describe the general mood by inspecting the background.
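To make Step 1 concrete, below is a minimal Python sketch. The `vlm` callable stands in for any instruction-tuned vision-language model (e.g., a LLaVA inference wrapper); its name and signature are our own placeholders, not the released IC|TC code.

```python
from typing import Callable, List


def extract_descriptions(
    images: List[object],               # e.g., PIL images
    text_criterion: str,                # the user-specified TC
    vlm: Callable[[object, str], str],  # placeholder VLM call: (image, prompt) -> text
) -> List[str]:
    """Step 1: obtain a criterion-focused text description of every image."""
    prompt = ("Characterize the image using a well-detailed description. "
              + text_criterion)
    return [vlm(img, prompt) for img in images]
```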
3.2 Step 2: Obtaining Cluster Names
In Step 2, the large language model (LLM) discovers the cluster names in two sub-steps. In Step 2a, the LLM outputs raw initial labels of the images. Since the number of distinct initial labels is usually larger than $K$, in Step 2b, the LLM aggregates the raw initial labels into appropriate names of $K$ clusters. (Combining Steps 2a and 2b and asking the LLM to discover $K$ cluster names from $N$ image descriptions is infeasible due to the limited token lengths of the LLMs.)
**Step 2 Large Language Model (LLM) obtains $K$ cluster names**
**Input:** Descriptions $\mathcal{D}_{\text{des}}$, Text Criteria $\mathbf{T C}$, Dataset size $N$, Number of clusters $K$, $L_{\text{raw}} \leftarrow []$
**Output:** List of cluster names $C_{\text{name}}$
1: for description in $\mathcal{D}_{\text{des}}$ do
2: $L_{\text{raw}}$.append( LLM(description + $P_{\text{step2a}}(\mathbf{T C})$)) //append raw label to $L_{\text{raw}}$
3: end for
4: $C_{\text{name}} = \text{LLM}(L_{\text{raw}} + P_{\text{step2b}}(\mathbf{T C}, N, K))$ //Step 2b can be further optimized
The simplest instance of Step 2b, described above, directly provides $L_{\text{raw}}$, the full list of raw labels. However, we find that it is more efficient to convert $L_{\text{raw}}$ to a dictionary with labels being the keys and numbers of occurrences of the labels being the values. When the same raw label occurs many times, this optimization significantly reduces the token length of the input to the LLM of Step 2b.
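As an illustration of this optimization (our own sketch, not the authors' exact implementation), the raw label list can be compressed into a count dictionary before being passed to the LLM in Step 2b:

```python
from collections import Counter

raw_labels = ["violin", "guitar", "violin", "flute", "violin", "guitar"]

# The prompt length for Step 2b now scales with the number of *distinct*
# raw labels rather than with the dataset size N.
label_counts = dict(Counter(raw_labels))
print(label_counts)  # {'violin': 3, 'guitar': 2, 'flute': 1}
```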
Careful prompt engineering of $P_{\text{step2b}}(\mathbf{T C}, N, K)$ allows the user to refine the clusters to be consistent with the user’s criteria. For example, the user may append additional text prompts such as:
When categorizing the classes, consider the following criteria:
1. Merge similar clusters. For example, [sparrow, eagle, falcon, owl, hawk] should be combined into ‘birds of prey.’
2. Clusters should be differentiated based on the animal’s habitat.
3.3 Step 3: Clustering by Assigning Images
In Step 3, images are assigned to one of the final $K$ clusters. The text criterion $\mathbf{T C}$, text description of the images from Step 1, and the $K$ cluster names from Step 2 are provided to the LLM.
**Step 3 Large Language Model (LLM) assigns clusters to images**
**Input:** Descriptions $\mathcal{D}_{\text{des}}$, Text Criteria $\mathbf{T C}$, List of cluster names $C_{\text{name}}$, RESULT←[]
**Output:** RESULT
1: for description in $\mathcal{D}_{\text{des}}$ do
2: RESULT.append( LLM(description+$P_{\text{step3}}(\mathbf{T C}))$) //append assigned cluster
3: end for
3.4 Iteratively Editing the Algorithm through Text Prompt Engineering
**Main method IC|TC**
**Input:** Dataset $\mathcal{D}_{\text{img}}$, Text Criteria $\mathbf{T C}$, ADJUST ← True
1: while ADJUST do
2: RESULT ← do Steps 1–3 conditioned on $\mathbf{T C}$
3: if User determines RESULT satisfactory then
4: ADJUST ← False
5: else
6: $\mathbf{T C} \leftarrow \text{Update } \mathbf{T C}$ //user writes updated $\mathbf{T C}$
7: end if
8: end while
Our main method IC|TC is described above. Upon performing the clustering once, if the clusters are not sufficiently consistent with the specified text criterion $\mathbf{T C}$ or if the $\mathbf{T C}$ turns out to not precisely specify what the user had in mind, the user can update the $\mathbf{T C}$. This iterative process may continue until the clustering result is satisfactory, as judged by the user.
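The full loop can be sketched in a few lines of Python. As before, `vlm` and `llm` are placeholder callables for the foundation models, and the prompt strings are abbreviations of those in Section 3; this is a schematic of the method, not the released implementation.

```python
def ictc(images, tc, K, vlm, llm, is_satisfactory):
    """Schematic IC|TC pipeline with the optional TC-refinement loop."""
    while True:
        # Step 1: criterion-focused descriptions
        descs = [vlm(img, f"Characterize the image ... {tc}") for img in images]
        # Step 2a: raw per-image labels
        raw = [llm(f"Given a description of an image, label the image. {tc}\n{d}")
               for d in descs]
        # Step 2b: aggregate raw labels into K cluster names
        names = llm(f"Given a list of {len(raw)} labels, cluster them into "
                    f"{K} clusters. {tc}\n{raw}")
        # Step 3: assign every image to one of the K clusters
        result = [llm(f"Based on the image description, determine the most "
                      f"appropriate cluster among {names}. {tc}\n{d}")
                  for d in descs]
        if is_satisfactory(result):
            return names, result
        tc = input("Edit the text criterion: ")  # user refines TC and retries
```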
Table 1: Clustering with varying text criteria. Accuracies labeled with * are evaluated by having a human provide ground-truth labels for 1000 randomly sampled images; M.I. abbreviates the Musical Instrument criterion. In this experiment, we used LLaVA for the VLM and GPT-4 for the LLM.
| Dataset | Criterion | SCAN | Ours |
|------------------|-----------|------|------|
| Stanford 40 Action | Action | 0.397 | **0.774** |
| | Location | 0.359* | **0.822*** |
| | Mood | 0.250* | **0.793*** |
| PPMI | M.I. (K=7) | 0.632 | **0.964** |
| | M.I. (K=2) | 0.850 | **0.977** |
| | Location (K=2) | 0.512 | **0.914** |
| CIFAR-10-Gen | Object | **0.989** | 0.987 |
3.5 Producing Cluster Labels
Classically, the unsupervised clustering task does not require the method to produce labels or descriptions of the output clusters. Notably, however, IC|TC produces names describing the clusters. This is a significant advantage of IC|TC as it makes the clustering results more directly and immediately interpretable.
4 Experiments
We now present experimental results demonstrating the effectiveness of IC|TC. In this section, we partially describe the settings and results while deferring much of the details to the appendix. In particular, the precise text prompts used can be found in Appendix B.3.1.
IC|TC crucially relies on the use of foundation models, specifically a vision-language model (VLM) and a large language model (LLM) that have undergone instruction tuning. In our experiments, we mainly use LLaVA (Liu et al., 2023) for the VLM and GPT-4 (OpenAI, 2023) for the LLM, but Section 4.5 and Appendix B.2 present ablation studies investigating how the performance is affected when other foundation models are used.
4.1 Clustering with Varying Text Criteria
In this experiment, we show that varying the text criterion TC indeed leads to varying clustering results of a single image dataset. The results demonstrate that IC|TC is highly flexible and can accommodate a variety of text criteria.
We use the Stanford 40 Action Dataset (Yao et al., 2011), which contains 9,532 images of humans performing various actions. The dataset comes with image labels describing a subject’s action among 40 classes, such as reading, phoning, blowing bubbles, playing violin, etc. We additionally define two different collections of labels. The first collection contains 10 classes describing the location, such as restaurant, store, sports facility, etc. The second collection contains 4 classes describing the mood of the scene, specifically joyful, adventurous, relaxed, and focused.
We utilize three text criteria, Action, Location, and Mood, to obtain three distinct clustering results. We evaluate the results based on how accurately the methods recover the three collections of labels described previously. This degree of control would be difficult or impossible for classical deep clustering methods. We compare our results against the prior deep clustering method SCAN (Van Gansbeke et al., 2020) and present the results in Table 1. Image samples are in Figure 1a.
(Note that we do not have the 9,532 ground truth labels for the Location and Mood criteria. Therefore, we evaluate accuracy by having a human provide ground truth labels on 1000 randomly sampled images.)
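Deep clustering accuracy is conventionally computed by matching predicted clusters to ground-truth classes with the Hungarian algorithm before counting agreements. Below is a minimal sketch of that conventional metric using `scipy` (our illustration of the standard evaluation, not code from the paper).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Accuracy under the best one-to-one matching of clusters to classes."""
    K = int(max(y_true.max(), y_pred.max())) + 1
    # contingency[i, j] = how often predicted cluster i coincides with class j
    contingency = np.zeros((K, K), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        contingency[p, t] += 1
    rows, cols = linear_sum_assignment(contingency, maximize=True)
    return contingency[rows, cols].sum() / len(y_true)
```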
4.2 Clustering with varying granularity
In this experiment, we show that IC|TC can automatically control the granularity of clustering results by adjusting $K$, the number of clusters. We find that the cluster descriptions returned by IC|TC are highly interpretable and that the images are assigned to the clusters well for various values of $K$.
We use the People Playing Musical Instrument (PPMI) dataset (Wang et al., 2010; Yao and Fei-Fei, 2010), which contains 1,200 images of humans interacting with 12 different musical instruments. We select 700 images across 7 classes from the original dataset to reduce the size and difficulty of the task.
We use the text criterion Musical Instrument with number of clusters $K = 2$ and $K = 7$. With $K = 7$, images are indeed grouped into clusters such as violin, guitar, and other specific instruments, and 96.4% accuracy against the ground truth label of PPMI is achieved. With $K = 2$, images are divided into 2 clusters of brass instrument and string instrument and achieve a 97.7% accuracy. To clarify, we did not specifically instruct IC|TC to group the 7 instruments into brass and string instruments; the hierarchical grouping was discovered by IC|TC.
As an additional experiment, we also cluster the same set of images with the text criterion Location and $K = 2$. In this case, the images are divided into 2 clusters of indoor and outdoor, and achieve a 91.4% accuracy. We again compare our results against SCAN (Van Gansbeke et al., 2020) and present the results in Table 1. Image samples are provided in Figure 4.
4.3 Comparison with classical clustering methods
In this experiment, we compare IC|TC against several classical clustering algorithms on CIFAR-10, STL-10, and CIFAR-100. The three datasets have 10, 10, and 20 classes and 10,000, 8,000, and 10,000 images, respectively. We use the text criterion Object with the number of clusters equal to the number of classes in the dataset. The results in Table 2 show that IC|TC significantly outperforms classical clustering methods on CIFAR-10, STL-10 and CIFAR-100. Clustered sample images are provided in Appendix B.6.
This comparison is arguably unfair against the classical clustering methods as they do not utilize foundation models or any pre-trained weights. Nevertheless, our results demonstrate that IC|TC is competitive when the goal is to cluster images based on the foreground object type.
4.4 Fair clustering through text criterion refinement
Existing clustering methods sometimes exhibit biased results, and measures to mitigate such biases have been studied (Li et al., 2020; Zeng et al., 2023). Since foundation models are known to learn biases in their training data (Bommasani et al., 2022), IC|TC has the risk of propagating such biases into the clustering results. In this experiment, we show that by simply adding a prompt along the line of "Do not consider gender" to the text criterion, we can effectively mitigate biases in the clustering results.
FACET (Gustafson et al., 2023) is a benchmark dataset for evaluating the robustness and algorithmic fairness of AI and machine-learning vision models. It comprises 32,000 diverse images labeled with
Table 2: Comparison with classical clustering methods using criterion \texttt{Object}. IC|TC outperforms state-of-the-art methods on CIFAR-10, STL-10 and CIFAR-100.
| Method | CIFAR-10 ACC ↑ | CIFAR-10 NMI ↑ | CIFAR-10 ARI ↑ | STL-10 ACC ↑ | STL-10 NMI ↑ | STL-10 ARI ↑ | CIFAR-100 ACC ↑ | CIFAR-100 NMI ↑ | CIFAR-100 ARI ↑ |
|---|---|---|---|---|---|---|---|---|---|
| IIC (Ji et al., 2019) | 0.617 | 0.511 | 0.411 | 0.596 | N/A | N/A | 0.257 | N/A | N/A |
| SCAN (Van Gansbeke et al., 2020) | 0.883 | 0.797 | 0.772 | 0.809 | 0.698 | 0.646 | 0.507 | 0.468 | 0.301 |
| SPICE (Niu and Wang, 2021) | 0.926 | 0.865 | 0.852 | 0.938 | 0.872 | 0.870 | 0.584 | 0.583 | 0.422 |
| RUC (Park et al., 2021) | 0.903 | N/A | N/A | 0.867 | N/A | N/A | 0.543 | N/A | N/A |
| TCL (Yunfan et al., 2022) | 0.887 | 0.819 | 0.780 | 0.868 | 0.799 | 0.757 | 0.531 | 0.529 | 0.357 |
| LLaVA only | 0.647 | 0.455 | 0.442 | 0.774 | 0.587 | 0.589 | 0.097 | 0.022 | 0.014 |
| Ours (LLaVA + Llama 2) | 0.884 | 0.789 | 0.759 | 0.974 | 0.939 | 0.944 | 0.526 | 0.554 | 0.374 |
| Ours (BLIP-2 + GPT-4) | 0.975 | 0.941 | 0.947 | 0.993 | 0.982 | 0.985 | 0.584 | 0.690 | 0.429 |
| Ours (LLaVA + GPT-4) | 0.910 | 0.823 | 0.815 | 0.986 | 0.966 | 0.970 | 0.589 | 0.642 | 0.422 |
Figure 5: (a) Biased results showing that images of male ‘Craftsman’ subjects tend to be misclassified as ‘Laborer’. (b) Gender ratio of each cluster. When the ratio between males and females differs by more than 10%, the bar is colored red. Bias is mitigated by refining the text criterion into a ‘Fair prompt’.
several attributes, including 52 occupation classes. For this experiment, we sampled 20 images each for men and women from the craftsman, laborer, dancer, and gardener occupation classes, 160 images in total.
For this experiment, we define fairness to be achieved when each cluster maintains an equal proportion of genders. When we used the text criterion \texttt{Occupation}, IC|TC exhibited a gender bias. To mitigate this bias, we introduced a simple negative prompt, instructing IC|TC not to take gender into consideration and instead to focus on the activity. When the clustering was repeated, the results were promising: the gender ratio disparities in the craftsman and laborer clusters improved from 27.2% to 4.4% and from 11.6% to 3.2%, respectively. Furthermore, the dancer and gardener clusters also experienced marginal reductions in disparities, from 2.8% to 2.6% and from 10.6% to 9.0%, respectively. The results are shown in Figure 5.
4.5 Further analyses
Ablation studies of LLMs and VLMs. We conduct an ablation study to evaluate whether LLMs actually serve a significant role in our methodology, since one may wonder whether the vision-language model (VLM) alone is sufficient. When we perform a ‘LLaVA only’ experiment that does not utilize an LLM, the performance is considerably lower. However, when we use LLMs of varying sizes, the performance is not affected significantly. The results and details are provided in Figure 3 and Appendix A.2. The results lead us to conclude that the LLM serves a crucial role (the VLM by itself is not sufficient), but the size of the LLM does not seem to be very important.
We also fix the LLM to GPT-4 and perform an ablation study on the choice of vision-language model (VLM). As an image captioning model, ClipCap (Mokady et al., 2021) cannot perform text conditioning, and this leads to poor performance. BLIP-2 (Li et al., 2023b) and LLaVA (Liu et al., 2023) can extract information relevant to the text criteria, and they exhibit strong performance. The results and details are provided in Appendix A.1.
Data Contamination. When evaluating research using foundation models, the potential for data contamination is a significant concern (Wei et al., 2022; Du et al., 2022). The datasets we use to measure accuracy, namely CIFAR-10, STL-10, CIFAR-100, and Stanford 40 Action, may have been used in the training of LLaVA. If so, the validity of the accuracy measurements comes into question.
To address this concern, we conducted an experiment with synthetically generated images. Specifically, we use Stable Diffusion XL (Rombach et al., 2022) and the CIFAR-10 labels to generate 1000 CIFAR-10-like images, and we call this dataset CIFAR-10-Gen. See Appendix B for further details. On this synthetic data, IC|TC achieves 98.7% accuracy. The fact that the accuracy on CIFAR-10-Gen is no worse than the accuracy on the actual CIFAR-10 dataset gives us confidence that the strong performance of IC|TC is likely not due to data contamination.
(Strictly speaking, the training data for Stable Diffusion may contain the CIFAR-10 images, and if so, we are not completely free from the risk of data contamination. However, the CIFAR-10-Gen dataset does not seem to contain exact copies of CIFAR-10 images, and we argue that the synthetic generation significantly mitigates the risk of data contamination.)
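A sketch of how such a synthetic set could be generated with the `diffusers` library is shown below; the model ID and the prompt template are our assumptions, since the paper defers the exact generation details to Appendix B.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

cifar10_labels = ["airplane", "automobile", "bird", "cat", "deer",
                  "dog", "frog", "horse", "ship", "truck"]

dataset = []
for label in cifar10_labels:
    for _ in range(100):  # 10 classes x 100 images = 1000 images
        image = pipe(prompt=f"a photo of a {label}").images[0]
        dataset.append((image, label))
```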
5 RELATED WORK
Image clustering. Modern deep clustering methods (Van Gansbeke et al., 2020; Park et al., 2021; Niu and Wang, 2021; Yunfan et al., 2022) adopt a multi-stage training approach. They begin with representation learning, which finds a representation that maps similar images to similar features, and then perform unsupervised clustering based on these feature representations. Additionally, to obtain more meaningful semantics, Zhong et al. (2021); Shen et al. (2021) proposed contrastive learning at not only the instance level but also at the cluster level. Misra and Maaten (2020); Cho et al. (2021); Kwon et al. (2023); Long et al. (2023); Metaxas et al. (2023) proposed specially designed representation learning for certain clustering criteria. The concurrent work Li et al. (2023c) is particularly relevant to our work as it presents Text-Aided Clustering (TAC), which leverages text as external knowledge to enhance image clustering performance. Specifically, Li et al. (2023c) enhanced feature discriminability by selecting specific WordNet nouns of images and mutually distilled the neighborhood information between the text and image modalities.
Foundation models. In recent years, foundation models have been improving at a remarkable pace, and combined with instruction tuning (Sanh et al., 2022; Ouyang et al., 2022; Wei et al., 2022), these foundation models can be applied more flexibly to downstream tasks. Vision-language models (VLMs) (Alayrac et al., 2022; Liu et al., 2023; Awadalla et al., 2023; Dai et al., 2023; Li et al., 2023a; Zhu et al., 2023; Gong et al., 2023) can provide users with appropriate descriptions of given images according to the requirements of the input prompt. Large language models (LLMs) (Chowdhery et al., 2022; Touvron et al., 2023a;b; OpenAI, 2023) exhibit remarkable abilities in a wide range of natural language processing tasks such as text summarization. Recently, Radford et al. (2021); Jia et al. (2021); Li et al. (2022); Dinh et al. (2022); Geng et al. (2023); Menon and Vondrick (2023); Zhang et al. (2022); Cai et al. (2023); Ren et al. (2023) have shown that computer vision problems with no direct connection to language can be successfully addressed using large language models.
Image retrieval. Image retrieval aims to find images from a database that are relevant to a given query. This crucially differs from clustering in that clustering requires both finding the clusters and assigning the images to them; image retrieval techniques are very relevant to the sub-task of cluster assignment but not to the sub-task of finding the clusters. The fundamental approach in image retrieval is to assess the similarity among image features. Current approaches focus on two kinds of image representations: global features and local features. For global representations, Babenko et al. (2014); Tolias et al. (2015); Gordo et al. (2016); Cao et al. (2020); Lee et al. (2023) extract activations from deep CNNs and aggregate them to obtain global features. For local representations, Yi et al. (2016); Noh et al. (2017); Balntas et al. (2016); DeTone et al. (2018); He et al. (2018); Dusmanu et al. (2019); Revaud et al. (2019) proposed well-embedded representations for all regions of interest. Recent state-of-the-art methods (Noh et al., 2017; Simeoni et al., 2019; Cao et al., 2020; Zhang et al., 2023; Wu et al., 2023) typically follow a two-stage paradigm: initially, candidates are retrieved using global features, and then they are re-ranked with local features. Recently, Vo et al. (2019); Liu et al. (2021); Baldrati et al. (2022); Tian et al. (2023) proposed to condition retrieval on user-specified language.
ACKNOWLEDGMENTS
EKR was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIP) [NRF-2022R1C1C1010010] and the Creative-Pioneering Researchers Program through Seoul National University. SK and EKR were partly supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) [NO.2021-0-01343-004, Artificial Intelligence Graduate School Program (Seoul National University)]. We thank Dimitris Papailiopoulos and Yong Jae Lee for providing insightful discussion. We thank Byeong-Uk Lee for providing valuable feedback on the manuscript.
ETHICS STATEMENT
Our methodology provides users with direct control over the clustering results, but this agency could be used maliciously to produce unfair and discriminatory results. However, it is unlikely that our work will be responsible for new unfair results that could not already be produced with a malicious user’s direct and overt intervention. On the other hand, it is possible for biases already in foundation models to propagate into our clustering methodology. Section 4.4 explicitly discusses this possibility and offers measures to mitigate such biases, and a well-intentioned user following the guidance of Section 4.4 is unlikely to amplify biases in the foundation models through the use of our method.
REPRODUCIBILITY STATEMENT
In this work, we use publicly available datasets, describe the methodology in precise detail, and make our code available at https://github.com/sehyunkwon/ICTC. Of the two main foundation models we use, the vision-language model LLaVA (Liu et al., 2023) is fully open-source. However, the large language model GPT-4 (OpenAI, 2023) is a proprietary model, and we accessed it through the API offered by OpenAI. The API cost to conduct the experiments presented in this work was less than $3,000 (USD), so we argue that the proprietary API cost does not pose a significant barrier in terms of reproducibility. However, if OpenAI were to discontinue access to the GPT-4 version that we used, namely api-version=2023-03-15-preview, or if OpenAI discontinues access to GPT-4 altogether, then our experiments would no longer be exactly reproducible.
To address this concern, we carry out an ablation study that uses the open-source large language model Llama 2 (Touvron et al., 2023b) and observe that similar, albeit slightly worse, performance is attained. See Figure 3 and Appendix A.2. Therefore, even if GPT-4 becomes unavailable in the future, the results of this work will be similarly reproducible by using Llama 2 or any other large language model of power comparable to or stronger than Llama 2 and GPT-4.
REFERENCES
G. Adams, A. Fabbri, F. Ladhak, E. Lehman, and N. Elhadad. From sparse to dense: GPT-4 summarization with chain of density prompting. arXiv:2309.04269, 2023.
J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, R. Ring, E. Rutherford, S. Cabi, T. Han, Z. Gong, S. Samangooei, M. Monteiro, J. Menick, S. Borgeaud, A. Brock, A. Nematzadeh, S. Sharifzadeh, M. Binkowski, R. Barreira, O. Vinyals, A. Zisserman, and K. Simonyan. Flamingo: a visual language model for few-shot learning. Neural Information Processing Systems, 2022.
A. Awadalla, I. Gao, J. Gardner, J. Hessel, Y. Hanafy, W. Zhu, K. Marathe, Y. Bitton, S. Gadre, S. Sagawa, J. Jitsev, S. Kornblith, P. W. Koh, G. Ilharco, M. Wortsman, and L. Schmidt. OpenFlamingo: An open-source framework for training large autoregressive vision-language models. arXiv:2308.01390, 2023.
A. Babenko, A. Slesarev, A. Chigorin, and V. Lempitsky. Neural codes for image retrieval. European Conference on Computer Vision, 2014.
A. Baldrati, M. Bertini, T. Uricchio, and A. Del Bimbo. Conditioned and composed image retrieval combining and partially fine-tuning clip-based features. Conference on Computer Vision and Pattern Recognition, 2022.
|
FiQRgzKl64
|
In the current MoE formulation, the weights for a given architecture are formed as a linear combination of m (= 2) experts. The combination of experts is handled by the router. Given that all operations appear to be linear, can the router not directly generate the weights?
|
MIXTURE-OF-SUPERNETS: IMPROVING WEIGHT-SHARING SUPERNET TRAINING WITH ARCHITECTURE-ROUTED MIXTURE-OF-EXPERTS
Anonymous authors
Paper under double-blind review
ABSTRACT
Weight-sharing supernets have become a vital component for performance estimation in state-of-the-art (SOTA) neural architecture search (NAS) frameworks. Although a supernet can directly generate different subnetworks without retraining, there is no guarantee for the quality of these subnetworks because of weight sharing. In NLP tasks such as machine translation and pre-trained language modeling, we observe that, given the same model architecture, there is a large performance gap between supernet and training from scratch. Hence, the supernet cannot be used directly, and retraining is necessary after finding the optimal architectures.
In this work, we propose mixture-of-supernets, a generalized supernet formulation where mixture-of-experts (MoE) is adopted to enhance the expressive power of the supernet model, with negligible training overhead. In this way, different subnetworks do not share the model weights directly, but do so indirectly through an architecture-based routing mechanism. As a result, model weights of different subnetworks are customized towards their specific architectures and the weight generation is learned by gradient descent. Compared to existing weight-sharing supernet for NLP, our method can minimize the retraining time, greatly improving training efficiency. In addition, the proposed method achieves the SOTA performance in NAS for building fast machine translation models, yielding better latency-BLEU tradeoff compared to HAT, the state-of-the-art NAS for MT. We also achieve the SOTA performance in NAS for building memory-efficient task-agnostic BERT models, outperforming NAS-BERT and AutoDistil in various model sizes.
1 INTRODUCTION
Neural architecture search (NAS) can automatically design architectures that achieve high quality on natural language processing (NLP) tasks while satisfying user-defined efficiency (e.g., latency, memory) constraints (Wang et al., 2020a; Xu et al., 2021; 2022a). The most straightforward way to perform NAS is to treat it as black-box optimization (Zoph et al., 2018; Pham et al., 2018). However, to obtain the architecture with the best accuracy, different model architectures need to be repeatedly trained and evaluated, which makes this approach impractical unless the dataset is very small. To overcome this issue, weight sharing is applied between different model architectures (Pham et al., 2018). In this case, a supernet is constructed as the largest model in the search space, and each architecture is a subnetwork of it. Furthermore, recent works (Cai et al., 2020; Yu et al., 2020) show that with good training strategies, the subnetworks can be directly used for image classification with high performance (e.g., accuracy comparable to training the same architectures from scratch). However, it is more challenging to apply supernets to NLP tasks. In fact, we observed that directly using the subnetworks for NLP tasks can incur a large performance gap. This is consistent with the recent NAS works (Wang et al., 2020a; Xu et al., 2021) on NLP, which retrain or finetune the architectures after using the supernet to find the architecture candidates. This raises two issues: 1) it is unknown whether the selected architectures are optimal given the existence of this performance gap; 2) repeated training is still needed if we want to get the final accuracy of the Pareto front, i.e., the best models for different efficiency (e.g., model size or inference latency) budgets. In this work, we focus on improving the weight-sharing mechanism among subnetworks to minimize the performance gap.
Figure 1: Choices of linear layers for supernet training. The length and the height of the ‘Linear’ blocks correspond to the number of input and output features of the supernet respectively. The highlighted portions in blue color correspond to the architecture-specific weights extracted from the supernet. Different intensities of blue color in the ‘Linear’ blocks of the mixture-of-supernet correspond to different alignment scores generated by the router.
| Supernet | Weight sharing | Capacity | Overall Time (↓) | Average BLEU (↑) |
|-------------------|----------------|--------------|------------------|------------------|
| HAT (Wang et al., 2020) | Strict | Single Set | 508 hours | 25.93 |
| Layer-wise MoS | Flexible | Multiple Set | 407 hours (20%) | 27.21 (4.9%) |
| Neuron-wise MoS | Flexible | Multiple Set | 394 hours (22%) | 27.25 (5.1%) |
Table 1: Overall time savings and average BLEU improvements of MoS supernets vs. HAT for computing pareto front (latency constraints: 100 ms, 150 ms, 200 ms) for the WMT'14 En-De task. Overall time (single NVIDIA V100 hours) includes supernet training time, search time, and additional training time for the optimal architectures. Average BLEU is the average of BLEU scores of architectures in the pareto front (see Table 5 for individual scores). MoS supernets yield architectures that enjoy better latency-BLEU trade-offs than HAT and have an overall GPU hours (see A.4.10 for breakdown) savings of at least 20% w.r.t. HAT.
Typically, a weight-sharing supernet is trained by repeatedly sampling an architecture from the search space and training the architecture-specific weights from the supernet (see Figure 1(a)). In the standard weight-sharing training (Yu et al., 2020; Cai et al., 2020), the first few output neurons are directly extracted to form a smaller subnetwork, as shown in Figure 1(a). Such a supernet has limited model capacity, which creates two challenges. First, the supernet enforces a strict notion of weight sharing between architectures, regardless of the differences among these architectures. This leads to the issues of co-adaptation (Bender et al., 2018; Zhao et al., 2021c) and gradient conflict (Gong et al., 2021). For instance, given a 5M-parameter model as a subnetwork of a 90M-parameter model, 5M weights are directly shared in the standard weight sharing. The optimal shared weights for the 5M model could be non-optimal for the 90M model, since there could be large gradient conflicts in optimizing these two models (Gong et al., 2021). Second, the overall capacity allocated by the supernet to an architecture is limited by the number of parameters of a single DNN, i.e., the largest subnetwork in the search space. However, the number of subnetworks in the search space could be very large (e.g., billions). Using a single set of weights to simultaneously parameterize all of them could be insufficient (Zhao et al., 2021c). Due to these challenges, the gap between the performance of the supernet and the standalone (from scratch) model is usually large (Wang et al., 2020a; Ganesan et al., 2021; Yin et al., 2021), which makes the time-consuming retraining step of the optimal architectures mandatory.
To overcome these challenges, we propose a Mixture-of-Supernets (MoS) framework that can perform architecture-specific weight extraction (e.g., allows a smaller architecture to not share some output neurons with a larger architecture) and allocate large capacity to an architecture without being limited by the number of parameters in a single DNN. MoS maintains a set of expert weight matrices and has two variants: layer-wise MoS and neuron-wise MoS. In layer-wise MoS, architecture-specific weight matrix is constructed based on a weighted combination of expert weight matrices at the level of set of neurons corresponding to an expert weight matrix. On the other hand, neuron-wise MoS constructs the same at the level of an individual neuron in each expert weight matrix. We show the
effectiveness of the proposed NAS method for building efficient task-agnostic BERT (Devlin et al., 2019) models and machine translation (MT) models. For building efficient BERT, our best supernet: (i) closes the gap and improves over SuperShaper (Ganesan et al., 2021) by 0.85 GLUE points, and (ii) improves over NAS-BERT (Xu et al., 2021) and AutoDistil (Xu et al., 2022a) in various model sizes ($\leq 50M$ parameters). Compared to HAT (Wang et al., 2020a), our best supernet: (i) reduces the supernet vs. standalone model gap by 26.5%, (ii) yields a better pareto front for the latency-BLEU tradeoff (100 to 200 ms), and (iii) reduces the number of additional steps needed to close the gap by 39.8%. See Table 1 for a summary of the overall time savings and BLEU improvements of MoS supernets for the WMT'14 En-De task. For this task, the supernet training time is 248 hours, while neuron-wise MoS and layer-wise MoS require an additional 14 and 18 hours, respectively (less than 8% overhead; see A.4.10 for breakdown).
Main contributions: (1) We propose a formulation which can generalize weight sharing methods, including direct weight sharing (e.g., once-for-all network (Cai et al., 2020), BigNAS (Yu et al., 2020)) and flexible weight sharing (e.g., few-shot NAS (Zhao et al., 2021a)). This formulation allows us to improve supernet by enhancing the model’s expressive power. (2) We adopt the idea of MoE to improve the model capability. Specifically, the model’s weights are dynamically generated based on the activated subnetwork architecture. After training, this MoE can be converted into equivalent static models. This is because our supernets only depend on the subnetwork architecture, which is fixed after training. (3) We conduct comprehensive experiments, demonstrating that our supernets achieve the SOTA NAS results on building efficient task-agnostic BERT and MT models.
2 SUPERNET - FUNDAMENTALS
A supernet is a model that employs weight sharing to parameterize the weights of millions of architectures. A supernet can provide quick performance predictions for various architectures, which significantly reduces the search cost of NAS. The training objective of the supernet can be formalized as follows. Let $X_{tr}$ denote the training data distribution. Let $x, y$ denote the training sample and label respectively, i.e., $x, y \sim X_{tr}$. Let $a_{rand}$ denote an architecture uniformly sampled from the search space $A$. Let $f_a$ denote the subnetwork with architecture $a$, and let $f$ be parameterized by the supernet model weights $W$. Then, the training objective of the supernet can be given by,
$$\min_W \mathbb{E}_{x,y \sim X_{tr}} \mathbb{E}_{a_{rand} \sim A} [\mathcal{L}(f_{a_{rand}}(x; W), y)].$$ \hspace{1cm} (1)
The above formulation is known as single path one-shot (SPOS) optimization (Guo et al., 2020) of supernet training. Sandwich training (Yu et al., 2020) is another popular technique for training a supernet, where the largest architecture ($a_{big}$), the smallest architecture ($a_{small}$), and the architecture ($a_{rand}$) uniformly sampled from the search space are jointly optimized. The training objective of the supernet then becomes:
$$\min_W \mathbb{E}_{x,y \sim X_{tr}} [\mathbb{E}_{a_{rand} \sim A} [\mathcal{L}(f_{a_{rand}}(x; W), y)] + \mathcal{L}(f_{a_{big}}(x; W), y) + \mathcal{L}(f_{a_{small}}(x; W), y)].$$ \hspace{1cm} (2)
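A schematic PyTorch training step implementing the sandwich rule of equation 2 is shown below; `search_space` and the `supernet(x, arch)` call signature are illustrative placeholders rather than any particular codebase's API.

```python
import torch


def sandwich_step(supernet, x, y, search_space, loss_fn, optimizer):
    """One sandwich-rule update: biggest + smallest + one random subnetwork."""
    optimizer.zero_grad()
    archs = [search_space.largest(),          # a_big
             search_space.smallest(),         # a_small
             search_space.sample_uniform()]   # a_rand
    for arch in archs:
        loss = loss_fn(supernet(x, arch), y)
        loss.backward()  # gradients from the three subnetworks accumulate
    optimizer.step()
```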
3 MIXTURE-OF-SUPERNETS
Existing supernets typically have limited model capacity to extract architecture-specific weights. For simplicity, assume the model function $f_a(x; W)$ is a fully connected layer (output $o = Wx$, omitting the bias term for brevity), where $x \in \mathbb{R}^{n_{in} \times 1}$, $W \in \mathbb{R}^{n_{out} \times n_{in}}$, and $o \in \mathbb{R}^{n_{out} \times 1}$. $n_{in}$ and $n_{out}$ correspond to the number of input and output features respectively. Then, the weights ($W_a \in \mathbb{R}^{n_{out_a} \times n_{in}}$) specific to architecture $a$ with $n_{out_a}$ output features are typically extracted by taking the first $n_{out_a}$ rows of the supernet weight $W$ (as shown in Figure 1(a)). Assume one samples two architectures ($a$ and $b$) from the search space with $n_{out_a}$ and $n_{out_b}$ output features respectively. Then, the weights corresponding to the architecture with the smaller number of output features will be a subset of those of the other architecture, sharing the first $\min(n_{out_a}, n_{out_b})$ output features exactly. Such a weight extraction technique enforces a strict notion of weight sharing between architectures, regardless of the global architecture information (e.g., different numbers of features in all the other layers) of these architectures. For instance, architectures $a$ and $b$ can have
---
1 Here we assume the number of input features does not change. If it changes, then only the first several columns of $W$ are extracted as well.
widely different model capacities (e.g., 5M vs. 90M architecture-specific parameters). The smaller architecture (e.g., 5M) has to share all its weights with the other architecture (e.g., 90M), and the supernet (as modeled by $f_a(x; W)$) cannot allocate any weights that are specific to the smaller architecture only. Another problem with $f_a(x; W)$ is that the overall capacity of the supernet is bounded by the number of parameters in the largest subnetwork (i.e., $W$) from the search space. However, the supernet weights $W$ need to parameterize a very large number of different subnetworks in the search space. This is a fundamental limitation of the standard weight-sharing mechanism.
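In code, the standard extraction amounts to slicing the top-left block of the supernet weight; the following PyTorch lines are our own minimal illustration.

```python
import torch

n_out_big, n_in_big = 768, 768
W = torch.randn(n_out_big, n_in_big)  # supernet weight of one linear layer


def extract(W: torch.Tensor, n_out_a: int, n_in_a: int) -> torch.Tensor:
    """Standard weight sharing: trim the supernet weight to the subnet shape."""
    return W[:n_out_a, :n_in_a]


W_small = extract(W, 320, 768)  # the smaller subnet reuses rows of W verbatim
W_large = extract(W, 640, 768)  # ...and every one of its rows is also in W_large
```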
Section 3.1 proposes a reformulation to address this limitation, which is instantiated using two methods (Layer-wise MoS, Section 3.2, Neuron-wise MoS, Section 3.3) and can be dropped into Transformers (see Section 3.4).
### 3.1 Generalized Model Function
We can reformulate the function $f_a(x; W)$ to a generalized form $g(x, a; E)$, which takes 2 inputs: the input data $x$, and the activated architecture $a$. $E$ includes the learnable parameters of $g$. Then, the training objective of the proposed supernet becomes,
$$\min_E \mathbb{E}_{x,y \sim X_{tr}} \mathbb{E}_{a_{rand} \sim A} [\mathcal{L}(g(x, a_{rand}; E), y)].$$
(3)
For the standard weight sharing mechanism mentioned above, $E = W$ and function $g$ just uses $a$ to perform the “trimming” operation on the weight matrix $W$, and forwards the subnetwork. To further minimize the objective equation 3, one feasible way is improving the capacity of the model function $g$. However, common ways such as adding hidden layers or hidden neurons are not applicable here, as we cannot change the final subnetwork architecture of mapping $x$ to $f_a(x; W)$. In this work, we propose to use the idea of Mixture-of-Experts (MoE) [Fedus et al., 2022] to improve the capacity of $g$. Specifically, we dynamically generate the weights $W_a$ according to specific architecture $a$ by routing to certain weights matrices from a set of expert weights. We call this architecture-routed MoE based supernet Mixture-of-Supernets (MoS), and design two routing mechanisms for function $g(x, a; E)$. Due to lack of space, the detailed algorithm for supernet training and search is shown in A.2.
### 3.2 Layer-wise MoS
Assume there are $m$ (number of experts) unique weight matrices ($\{E^i \in \mathbb{R}^{n_{out} \times n_{in}}\}_{i=1}^m$, or expert weights), which are learnable parameters. For simplicity, we only use a single linear layer as the example. For an architecture $a$ with $n_{out_a}$ output features, we propose the layer-wise MoS that computes the weights specific to the architecture $a$ (i.e. $W_a \in \mathbb{R}^{n_{out_a} \times n_{in}}$) by performing a weighted combination of expert weights, $W_a = \sum_i \alpha^i_a E^i_a$. Here, $E^i_a \in \mathbb{R}^{n_{out_a} \times n_{in}}$ corresponds to the standard top-rows extraction from the $i$-th expert weights. The alignment vector ($\alpha_a \in [0, 1]^m$, $\sum_i \alpha^i_a = 1$) captures the alignment scores of the architecture $a$ with respect to each expert (weight matrix). We encode the architecture $a$ as a numeric vector $\text{Enc}(a) \in \mathbb{R}^{n_{enc} \times 1}$ (e.g., a list of the number of output features for different layers), and apply a learnable router $r(\cdot)$ (an MLP with softmax) to produce such scores, i.e. $\alpha_a = r(\text{Enc}(a))$. Thus, the generalized model function for the linear layer (as shown in Figure 1(b)) can be defined as (omitting bias for brevity):
$$g(x, a; E) = W_a x = \sum_i r(\text{Enc}(a))^i E^i_a x.$$
(4)
Router $r(\cdot)$ controls the degree of weight sharing (unsharing) between two architectures by modulating the alignment scores ($\alpha_a$). For example, if $m = 2$ and $a$ is a subnetwork of the architecture $b$, the supernet could allocate weights that are specific to the smaller architecture $a$ only by setting $\alpha_a = (1, 0)$ and $\alpha_b = (0, 1)$. In this case, $g(x, a; E)$ only uses weights from $E^1$ and $g(x, b; E)$ only uses weights from $E^2$, so $E^1$ and $E^2$ can be updated towards the losses from architectures $a$ and $b$ without conflicts. It should be noted that few-shot NAS (Zhao et al., 2021c) can be seen as a special case of our framework if the router $r$ is rule-based. In addition, $g(\cdot)$ is essentially an MoE, so it has stronger expressive power and can make the objective in equation 3 smaller. After the supernet training completes, given an architecture $a$, the scores $\alpha_a = r(\text{Enc}(a))$ can be generated offline. Expert weights are collapsed, and the resulting number of parameters for the architecture $a$ becomes $n_{out_a} \times n_{in}$. Layer-wise MoS induces a low degree of weight sharing between differently sized architectures, shown by a higher Jensen-Shannon distance between their alignment probability vectors compared to that of similarly sized architectures. See A.1.1 for more details.
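A minimal PyTorch sketch of a layer-wise MoS linear layer following equation 4 is shown below; the module structure, initialization, and encoding dimension are our simplifications of the description above.

```python
import torch
import torch.nn as nn


class LayerwiseMoSLinear(nn.Module):
    def __init__(self, n_in, n_out_big, m=2, n_enc=16, router_hidden=128):
        super().__init__()
        # m expert weight matrices, each sized for the largest subnetwork
        self.experts = nn.Parameter(0.02 * torch.randn(m, n_out_big, n_in))
        self.router = nn.Sequential(
            nn.Linear(n_enc, router_hidden), nn.ReLU(),
            nn.Linear(router_hidden, m), nn.Softmax(dim=-1),
        )

    def forward(self, x, arch_enc, n_out_a):
        alpha = self.router(arch_enc)                 # (m,) alignment scores
        E_a = self.experts[:, :n_out_a, :]            # top-row extraction per expert
        W_a = torch.einsum("m,moi->oi", alpha, E_a)   # weighted combination
        return x @ W_a.t()
```

After training, `W_a` can be materialized once per architecture, so the deployed subnetwork carries no MoE overhead, matching the expert-collapse step described above.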
| Supernet | MNLI | CoLA | MRPC | SST2 | QNLI | QQP | RTE | Avg. GLUE (↑) |
|------------------|------|------|------|------|------|-----|-----|---------------|
| Standalone | 82.61| **59.03** | 86.54 | 91.52 | 89.47 | 90.68 | 71.53 | 81.63 |
| Supernet (Sandwich) | 82.34 | 57.58 | 86.54 | 91.74 | 88.67 | 90.39 | 73.26 | 81.50 (-0.13) |
| Layer-wise MoS (ours) | 82.40| 57.62 | 87.26 | 92.08 | **89.57** | **90.68** | **77.08** | 82.38 (+0.75) |
| Neuron-wise MoS (ours) | **82.68** | 58.71 | 87.74 | **92.16** | 89.22 | 90.49 | 76.39 | **82.48** (+0.85) |
Table 2: GLUE validation performance of different supernets (0 additional pretraining steps) compared to standalone (1x pretraining budget). The BERT architecture (67M parameters) is the top model from the pareto front of Supernet (Sandwich) on SuperShaper's search space. The difference in average GLUE relative to standalone is enclosed in parentheses in the last column. Layer-wise and neuron-wise MoS perform significantly better than standalone.
### 3.3 Neuron-wise MoS
The layer-wise MoS follows a conventional MoE setup, i.e., each expert is a linear layer/module. The router decides which combination of experts to forward the input $x$ to, depending on $a$. In this case, the degree of freedom of weight generation is $m$, and the number of parameters grows by $m \times |W|$, where $|W|$ denotes the number of parameters in the standard supernet. Thus, we need $m$ to be large enough to retain good flexibility for subnetwork weight generation, but this also introduces many parameters into the supernet and makes the layer-wise MoS hard to train. This motivates us to use a smaller granularity of weights to represent each expert. Specifically, we use neurons in the DNN as experts. In terms of the weight matrix, neuron-wise MoS uses one row of the matrix to represent an individual expert. In contrast, layer-wise MoS uses an entire weight matrix.
For neuron-wise MoS, the router outputs $\beta_a = r(\text{Enc}(a)) \in [0, 1]^{n_{out_{big}} \times m}$ for each layer, and the sum of each row of $\beta_a$ is 1. Similar to layer-wise MoS, we use an MLP to produce the $n_{out_{big}} \times m$ matrix and apply softmax on each row. We formulate the function $g(x, a; E)$ for neuron-wise MoS via

\[
W_a = \sum_i \text{diag}(\beta^i_a) E^i_a,
\]

where $\text{diag}(\beta)$ constructs an $n_{out_{big}} \times n_{out_{big}}$ diagonal matrix by putting $\beta$ on the diagonal, and $\beta^i_a$ is the $i$-th column of $\beta_a$. $E^i$ is still an $n_{out_{big}} \times n_{in}$ matrix as in layer-wise MoS. Compared to layer-wise MoS, neuron-wise MoS has more flexibility ($m \times n_{out_a}$ instead of only $m$) to control the degree of weight sharing between different architectures, while the number of parameters is still proportional to $m$. Neuron-wise MoS provides a more fine-grained control of weight sharing between subnetworks. We compute gradient conflict using the cosine similarity between the supernet gradient and the smallest subnetwork's gradient, following NASViT (Gong et al., 2021). As discussed in A.1.2, we find that neuron-wise MoS enjoys the lowest gradient conflict compared to layer-wise MoS and HAT, as shown by the highest cosine similarity.
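The neuron-wise variant changes only the router output shape and the per-row mixing; below is a sketch under the same assumptions as the layer-wise module above.

```python
import torch
import torch.nn as nn


class NeuronwiseMoSLinear(nn.Module):
    def __init__(self, n_in, n_out_big, m=2, n_enc=16, router_hidden=128):
        super().__init__()
        self.n_out_big, self.m = n_out_big, m
        self.experts = nn.Parameter(0.02 * torch.randn(m, n_out_big, n_in))
        self.router = nn.Sequential(
            nn.Linear(n_enc, router_hidden), nn.ReLU(),
            nn.Linear(router_hidden, n_out_big * m),
        )

    def forward(self, x, arch_enc, n_out_a):
        # One mixing distribution per output neuron (row), softmax over experts.
        beta = self.router(arch_enc).view(self.n_out_big, self.m)
        beta = beta.softmax(dim=-1)[:n_out_a]         # (n_out_a, m)
        E_a = self.experts[:, :n_out_a, :]            # (m, n_out_a, n_in)
        W_a = torch.einsum("om,moi->oi", beta, E_a)   # per-row mixture
        return x @ W_a.t()
```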
### 3.4 Adding \( g(x, a; E) \) to Transformer
MoS is applicable to a single linear layer, multiple linear layers, and other parameterized layers (e.g., layer-norm). Since the linear layer dominates the number of parameters, we follow the approach used in most MoE work (Fedus et al., 2022). We take the standard weight-sharing based Transformer (\( f_a(x; W) \)) and replace the two linear layers in every feed-forward network block with \( g(x, a; E) \).
### 4 Experiments - Efficient BERT
#### 4.1 Experiment Setup
We discuss the application of our proposed supernet for building efficient task-agnostic BERT (Devlin et al., 2019) models. We focus on the BERT pretraining task, where a language model is pretrained from scratch to learn task-agnostic text representations using a masked language modeling objective. The pretrained BERT model can then be directly finetuned on several downstream NLP tasks. We focus on building BERT models that are highly accurate yet small (e.g., 5M – 50M parameters). BERT supernet and standalone models are pretrained from scratch on Wikipedia and Books Corpus (Zhu et al., 2015). We evaluate the performance of a BERT model by finetuning on each of the seven tasks (chosen by AutoDistil (Xu et al., 2022a)) in the GLUE benchmark (Wang et al., 2018). The architecture encoding, data preprocessing, pretraining settings, and finetuning settings are discussed in A.3.1. The baseline models are the standalone model and the standard supernet as proposed in SuperShaper (Ganesan et al., 2021). Our proposed models are layer-wise and neuron-wise MoS. All the supernets are trained using sandwich training. The parameter $m$ and the router's hidden dimension are set to 2 and 128, respectively, for MoS supernets.

Table 3: Comparison of neuron-wise MoS with NAS-BERT and AutoDistil for different model sizes (≤ 50M parameters) based on GLUE validation performance. Neuron-wise MoS uses a search space of 550 architectures, which is on par with AutoDistil. The third column corresponds to the number of additional training steps required to obtain the weights for the final architecture after supernet training. Performance numbers for the baseline models are taken from the corresponding papers. On average GLUE, neuron-wise MoS performs similarly to or improves over NAS-BERT for different model sizes without any additional training, and improves over AutoDistil for most model sizes. See A.3.3 for the hyperparameters of the best architectures.
4.2 Supernet vs. Standalone Gap
For studying the supernet vs. standalone gap, the search space is taken from SuperShaper (Ganesan et al., 2021), which consists of BERT architectures that vary only in the hidden size at each layer ($\{120, 240, 360, 480, 540, 600, 768\}$) with a fixed number of layers (12) and attention heads (12). The search space amounts to around 14B architectures. We study the supernet vs. standalone gap for the top model architecture from the pareto front of Supernet (Sandwich) (Ganesan et al., 2021). Table 2 displays the GLUE benchmark performance of the architecture trained standalone (1x pretraining budget, which is 2048 batch size * 125,000 steps) as well as of the architecture-specific weights extracted from different supernets (0 additional pretraining steps; that is, only supernet pretraining). The gap between the supernet and the standalone performance is bridged by MoS (layer-wise or neuron-wise) for 6 out of 7 tasks, including MNLI (a widely used indicator of pretrained language model performance; Liu et al., 2019; Xu et al., 2022b). The gap in average GLUE between the standalone model and the standard supernet is 0.13 points. Notably, equipped with the customization and expressivity properties, layer-wise and neuron-wise MoS significantly improve upon standalone training by 0.75 and 0.85 average GLUE points, respectively.
4.3 Comparison with SOTA NAS
The state-of-the-art NAS frameworks for building a task-agnostic BERT model are NAS-BERT (Xu et al., 2021) and AutoDistil (Xu et al., 2022a). The NAS-BERT pipeline includes: (1) supernet training (with a Transformer stack containing multi-head attention, feed-forward network [FFN] and convolutional layers in arbitrary positions), (2) search based on the distillation (task-agnostic) loss, and (3) pretraining the best architecture from scratch (1x pretraining budget, which is 2048 batch size * 125,000 steps). The third step has to be executed for every constraint change and hardware change, which is very expensive. The AutoDistil pipeline includes: (1) constructing $K$ search spaces and training a supernet for each search space independently, (2a) agnostic-search mode: search based on the self-attention distillation (task-agnostic) loss, (2b) proxy-search mode: search based on the MNLI validation score, and (3) extracting the architecture-specific weights from the supernet without additional training. The first step can be expensive, as pretraining $K$ supernets can take $K$ times the training compute and memory compared to training a single supernet. The proxy-search mode can unfairly benefit AutoDistil, as it finetunes all the architectures in its search space on MNLI and uses the MNLI score to rank the architectures. For fair comparison with SOTA, we exclude the MNLI task from evaluation.4

2 SuperShaper (Ganesan et al., 2021) observe that SPOS performs poorly compared to sandwich training. Hence, we do not study SPOS for building BERT models. The learning curve is shown in A.3.2.

3 AutoDistil (proxy) outperforms SOTA distillation approaches such as TinyBERT (Jiao et al., 2020) and MINILM (Wang et al., 2020b) by 0.7 average GLUE points. Hence, we do not compare against these works.

4 See A.3.4 for the comparison of neuron-wise MoS against baselines that do not directly tune on the MNLI task, where we find that neuron-wise MoS improves over baselines consistently in terms of both average GLUE and MNLI task performance.
Our proposed NAS pipeline overcomes all the issues with NAS-BERT and AutoDistil. For comparison with the SOTA NAS, our search space contains BERT architectures with homogeneous Transformer layers: hidden size (120 to 768 in increments of 12), attention heads ({6, 12}), and intermediate FFN hidden dimension ratio ({2, 2.5, 3, 3.5, 4}). This search space amounts to 550 architectures, which is on par with AutoDistil. The supernet is based on neuron-wise MoS. The search uses the perplexity (task-agnostic) metric to rank the architectures. Unlike NAS-BERT, which pretrains the best architecture from scratch (third step), the final architecture weights are directly extracted from the supernet without further pretraining. Unlike AutoDistil, which pretrains $K$ supernets, the proposed pipeline pretrains exactly one supernet, which requires significantly less training compute and memory. Unlike AutoDistil's proxy setting, where MNLI performance guides the search, our proposed pipeline uses only a task-agnostic metric (like AutoDistil's agnostic setting). Table 3 shows the comparison of the neuron-wise MoS based supernet with NAS-BERT and AutoDistil for different model sizes. The performance numbers of NAS-BERT and AutoDistil are taken from the corresponding papers. On average GLUE, our proposed pipeline: (i) improves over NAS-BERT for 5M, 10M, and 30M model sizes without any additional training (100% additional training compute savings, which is 2048 batch size * 125,000 steps); (ii) improves over AutoDistil-proxy for 6.88M and 50M model sizes with 1.88M and 0.1M fewer parameters, respectively; and (iii) improves over both AutoDistil-proxy and AutoDistil-agnostic for the 26M model size. Besides achieving SOTA results, the main benefit of our method is reducing the heavy workload of training multiple models, whether in subnetwork retraining (NAS-BERT) or supernet training (AutoDistil).
5 EXPERIMENTS - EFFICIENT MACHINE TRANSLATION
5.1 EXPERIMENT SETUP
In this section, we discuss the application of the proposed supernets for building efficient MT models. We follow the experimental setup provided by Hardware-Aware Transformers (HAT; Wang et al., 2020a), which is the SOTA NAS framework for building MT models that enjoy good latency-BLEU tradeoffs. We focus on three popular MT benchmarks (Bojar et al., 2014; Wikimedia Foundation, 2019): WMT'14 En-De, WMT'14 En-Fr, and WMT'19 En-De, whose dataset statistics are shown in A.4.4. The architecture encoding and the training settings for both supernet and standalone models are the same, and are discussed in A.4.2. The baseline supernets include: (i) HAT – HAT's supernet that uses single path one-shot optimization, and (ii) Supernet (Sandwich) – a supernet that uses sandwich training. The proposed supernets include: (i) Layer-wise MoS – MoS with layer-wise routing and sandwich training, and (ii) Neuron-wise MoS – MoS with neuron-wise routing and sandwich training. The parameter $m$ and the router's hidden dimension are set to 2 and 128, respectively, for both MoS variants. See A.4.8 for the rationale behind the choice of $m$.
5.2 SUPERNET VS. STANDALONE GAP
HAT's search space consists of 6M encoder-decoder architectures, with flexible embedding size (512 or 640), decoder layers (1 to 6), self/cross attention heads (4 or 8), and number of top encoder layers for the decoder to attend to (1 to 3). For a given architecture, supernet performance corresponds to evaluating the architecture-specific weights extracted from the supernet, while standalone performance corresponds to evaluating the architecture after training it from scratch. For a random sample of architectures from the search space, a good supernet must have: (i) minimal mean absolute error (MAE) and (ii) high rank correlation between the standalone and the supernet performance.
Table 4: Mean absolute error (MAE) and Kendall rank correlation coefficient between the supernet and the standalone model BLEU performance for 15 random architectures from the MT search space. Improvements (%) in mean absolute error over HAT are in parentheses. Our supernets enjoy minimal MAE and comparable ranking quality with respect to the baseline models.
| Supernet | WMT’14 En-De | | WMT’14 En-Fr | | WMT’19 En-De | |
|----------|------|------|------|------|------|------|
| | MAE (↓) | Kendall (↑) | MAE (↓) | Kendall (↑) | MAE (↓) | Kendall (↑) |
| HAT | 1.84 | 0.81 | 1.37 | 0.63 | 2.07 | 0.71 |
| Supernet (Sandwich) | 1.62 (12%) | 0.81 | 1.37 (0%) | 0.63 | 2.02 (2.4%) | 0.87 |
| Layer-wise MoS (ours) | 1.61 (12.5%) | 0.54 | 1.24 (9.5%) | 0.73 | 1.57 (24.5%) | 0.87 |
| Neuron-wise MoS (ours) | 1.13 (38.6%) | 0.71 | 1.2 (12.4%) | 0.85 | 1.48 (28.5%) | 0.81 |
Table 5: Latency vs. Supernet BLEU for the models on the pareto front, obtained by performing search with different latency constraints (100 ms, 150 ms, 200 ms) on the NVIDIA V100 GPU. Our supernets yield architectures that enjoy better latency-BLEU tradeoffs than HAT.
| Supernet | WMT’14 En-De | | | WMT’14 En-Fr | | | WMT’19 En-De | | |
|----------|------|------|------|------|------|------|------|------|------|
| | 100 ms | 150 ms | 200 ms | 100 ms | 150 ms | 200 ms | 100 ms | 150 ms | 200 ms |
| HAT | 25.26 | 26.25 | 26.28 | 38.94 | 39.26 | 39.16 | 42.61 | 43.07 | 43.23 |
| Layer-wise MoS (ours) | 26.28 | 27.31 | 28.03 | 39.34 | 40.29 | 41.24 | 43.45 | 44.71 | 46.18 |
| Neuron-wise MoS (ours) | 26.37 | 27.59 | 27.79 | 39.55 | 40.02 | 41.04 | 43.77 | 44.66 | 46.21 |
architectures from the search space, a good supernet must have: (i) minimal mean absolute error (MAE) and (ii) high rank correlation between the standalone and the supernet performance. Table 4 shows the mean absolute error and Kendall rank correlation coefficient for 15 random architectures from the search space. Compared to HAT, the supernet with sandwich training has better MAE and ranking quality. This result highlights that sandwich training is essential for building a good supernet for machine translation, compared to SPOS. Compared to the supernet with sandwich training, our proposed supernets achieve comparable ranking quality for the WMT’14 En-Fr and WMT’19 En-De tasks, while marginally underperforming on WMT’14 En-De. Our proposed supernets achieve minimal MAE on all three tasks. Specifically, neuron-wise MoS obtains the biggest MAE improvements, which suggests that the additional training steps required to make the MAE negligible might be lowest for neuron-wise MoS among all the supernet variants (as we show in Section 5.4). We also plot the supernet and the standalone performance for each architecture, where we find that neuron-wise MoS particularly excels for almost all the top performing architectures (see A.4.3). The training overhead for MoS is generally negligible. For example, for the WMT’14 En-De task, the supernet training time (single NVIDIA V100) is 248 hours, while neuron-wise MoS and layer-wise MoS require an additional 14 and 18 hours, respectively (less than 8% overhead, see A.4.10 for details).
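For reference, a minimal sketch of how the two supernet-quality measures in Table 4 can be computed; the BLEU lists here are hypothetical stand-ins:

```python
# Hypothetical BLEU lists: computing the two supernet-quality measures
# reported in Table 4 (mean absolute error and Kendall's tau).
from scipy.stats import kendalltau

supernet_bleu = [25.1, 26.0, 24.3, 27.2, 25.8]
standalone_bleu = [26.4, 27.1, 25.9, 28.0, 27.3]

mae = sum(abs(s - t) for s, t in zip(supernet_bleu, standalone_bleu)) / len(supernet_bleu)
tau, _ = kendalltau(supernet_bleu, standalone_bleu)
print(f"MAE = {mae:.2f}, Kendall tau = {tau:.2f}")
```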
5.3 Comparison with SOTA NAS
The pareto front from the supernet can be obtained using the evolutionary search algorithm, which uses the supernet to quickly identify the top performing candidate architectures, and the latency estimator to quickly discard candidate architectures whose latencies exceed the latency threshold. The settings for the evolutionary search algorithm and the latency estimator are given in A.4.4. We experiment with 3 latency thresholds: 100 ms, 150 ms, and 200 ms. Table 5 shows the latency vs. supernet performance tradeoff for the models on the pareto front from different supernets. Compared to HAT, the proposed supernets achieve significantly higher BLEU for each latency threshold across all the datasets, which highlights the importance of architecture specialization and the expressiveness of the supernet. See A.4.6 for the consistency of these trends across different seeds.
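A hedged sketch of such a latency-constrained evolutionary search loop; `sample_arch`, `mutate`, `estimate_bleu`, and `estimate_latency` are hypothetical stand-ins for the supernet-based evaluator and the latency estimator, and the population sizes are illustrative, not the exact settings in A.4.4:

```python
# Hedged sketch of latency-constrained evolutionary search: the supernet
# scores candidates (estimate_bleu) and the latency estimator filters out
# candidates above the threshold. All helpers and sizes are hypothetical.
import random

def evolutionary_search(sample_arch, mutate, estimate_bleu, estimate_latency,
                        latency_ms, pop=125, iters=30, parents=25):
    population = [a for a in (sample_arch() for _ in range(pop))
                  if estimate_latency(a) <= latency_ms]
    for _ in range(iters):
        elites = sorted(population, key=estimate_bleu, reverse=True)[:parents]
        children = (mutate(random.choice(elites)) for _ in range(pop))
        population = elites + [c for c in children
                               if estimate_latency(c) <= latency_ms]
    return max(population, key=estimate_bleu)
```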
5.4 Additional Training to Close the Gap
The proposed supernets significantly reduce the supernet vs. standalone MAE gap (as discussed in Section 5.2), but still do not make the gap negligible. To close the gap for an architecture, one needs to extract the architecture-specific weights from the supernet and perform additional training until the standalone performance is reached (when the gap becomes 0). A good supernet should require a minimal number of additional steps, and a minimal amount of time, for the architectures extracted from it to close the gap.
| Supernet | Additional training steps (↓) | | | Additional training time (NVIDIA V100 hours) (↓) | | |
|----------|------|------|------|------|------|------|
| | WMT’14 En-De | WMT’14 En-Fr | WMT’19 En-De | WMT’14 En-De | WMT’14 En-Fr | WMT’19 En-De |
| HAT | 33K | 33K | 26K | 63.9 | 60.1 | 52.3 |
| Layer-wise MoS | 16K (51.5%) | 30K (9%) | 20K (23%) | 35.5 (44.4%) | 66.5 (-10.6%) | 45.2 (13.5%) |
| Neuron-wise MoS | 13K (60%) | 26K (21%) | 16K (38.4%) | 31.0 (51.4%) | 61.7 (-2.7%) | 39.5 (24.5%) |
Table 6: Average number of additional training steps and time required for the models on the pareto front to close the supernet vs. standalone gap. Improvements (%) over HAT are shown in parentheses. Our supernets require minimal number of additional training steps and time to close the gap compared to HAT for most tasks. See A.4.5 for each latency constraint.
For additional training, we evaluate the test BLEU of each architecture after every 10K steps and stop when the test BLEU matches or exceeds the test BLEU of the standalone model. Table 6 displays the average amount of additional training required for all the models on the pareto front from each supernet to close the gap. Compared to HAT, layer-wise MoS provides an impressive reduction of 9% to 51.5% in training steps, while neuron-wise MoS provides by far the largest reduction, 21% to 60%. For the WMT’14 En-Fr task, both MoS supernets require at least 2.7% more time than HAT to reach the standalone BLEU across the different constraints. These results highlight that architecture specialization and supernet expressivity are crucial in greatly improving the training efficiency of the subnets extracted from the supernet.
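The stopping rule described above can be sketched as follows; `train_steps` and `eval_bleu` are hypothetical helpers, not part of any released code:

```python
# Sketch of the stopping rule used above: train the weights extracted
# from the supernet, evaluate test BLEU every 10K steps, and stop once
# the standalone BLEU is matched or exceeded.
def close_the_gap(model, standalone_bleu, train_steps, eval_bleu,
                  eval_every=10_000, max_steps=40_000):
    steps = 0
    while steps < max_steps:
        train_steps(model, eval_every)     # run 10K more optimizer steps
        steps += eval_every
        if eval_bleu(model) >= standalone_bleu:
            break
    return steps
```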
6 RELATED WORK
In this section we briefly discuss existing NAS research in NLP. Evolved Transformer (ET) (So et al., 2019) is an initial work that searches for efficient MT models using NAS. It uses evolutionary search, which can dynamically allocate training resources to promising candidates; ET requires 2M GPU hours. HAT (Wang et al., 2020a) proposes a weight-sharing supernet as a performance estimator. HAT uses the supernet to amortize the training cost of the candidate MT evaluations needed by evolutionary search, which reduces the overall search cost by 12000x compared to ET. NAS-BERT (Xu et al., 2021) partitions the BERT-Base model into blocks and trains a weight-sharing supernet to distill each block. During supernet training, NAS-BERT prunes less promising candidates from the search space using progressive shrinking, so it can quickly identify the top architecture for each efficiency constraint. However, NAS-BERT needs to pretrain the top architecture from scratch for every constraint change, which can be very expensive. SuperShaper (Ganesan et al., 2021) pretrains a weight-sharing supernet for BERT using a masked language modeling objective with sandwich training; the authors find that SPOS performs poorly compared to the sandwich training objective. AutoDistil (Xu et al., 2022a) employs few-shot NAS (Zhao et al., 2021b): it constructs K search spaces of non-overlapping BERT architectures and trains a weight-sharing BERT supernet for each search space. The search is based on the self-attention distillation loss with BERT-Base (task-agnostic search) or the MNLI score (proxy search).
In the computer vision community, K-shot NAS (Su et al., 2021) generates the weights for each subnet as a convex combination of different supernet weights in a dictionary, with a simplex code. Their framework is similar to layer-wise MoS, with the following key differences. K-shot NAS trains the architecture code generator and the supernet iteratively due to training difficulty, while layer-wise MoS trains all its components jointly. K-shot NAS has been applied only to convolutional architectures for image classification tasks. K-shot NAS also introduces too many parameters as the number of supernets (K) increases, which neuron-wise MoS alleviates through its granular weight specialization. In this work, we focus on tasks in NLP (and the relevant baselines), where we find that supernets lag significantly behind standalone models in terms of performance. In addition, the authors of K-shot NAS do not release code to reproduce their results; hence, we do not evaluate against K-shot NAS.
7 CONCLUSION
In this work, we proposed Mixture-of-Supernets, a formulation that improves supernets by enhancing their expressive power. We showed that the idea of MoE can be adopted to generate flexible weights for subnetworks. Through our extensive evaluation on building efficient BERT and MT models, we showed that our supernets can: (i) minimize retraining time, thereby improving NAS efficiency significantly, and (ii) yield high-quality architectures satisfying user-defined constraints via NAS.
REFERENCES
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In *International conference on machine learning*, pp. 550–559. PMLR, 2018.
Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. Findings of the 2014 workshop on statistical machine translation. In *Proceedings of the Ninth Workshop on Statistical Machine Translation*, pp. 12–58, Baltimore, Maryland, USA, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W14/W14-3302.
Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once for all: Train one network and specialize it for efficient deployment. In *International Conference on Learning Representations*, 2020. URL https://arxiv.org/pdf/1908.09791.pdf.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.
William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *Journal of Machine Learning Research*, 23(120):1–39, 2022. URL http://jmlr.org/papers/v23/21-0998.html.
Vinod Ganesan, Gowtham Ramesh, and Pratyush Kumar. Supershaper: Task-agnostic super pre-training of BERT models with variable hidden dimensions. *CoRR*, abs/2110.04711, 2021. URL https://arxiv.org/abs/2110.04711.
Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Vikas Chandra, et al. Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. In *International Conference on Learning Representations*, 2021.
Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In *Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI*, pp. 544–560, Berlin, Heidelberg, 2020. Springer-Verlag. ISBN 978-3-030-58516-7. doi: 10.1007/978-3-030-58517-4_32. URL https://doi.org/10.1007/978-3-030-58517-4_32.
Peter Izsak, Moshe Berchansky, and Omer Levy. How to train BERT with an academic budget. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 10644–10652, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.831. URL https://aclanthology.org/2021.emnlp-main.831.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 4163–4174, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.372. URL https://aclanthology.org/2020.findings-emnlp.372.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019. URL http://arxiv.org/abs/1907.11692.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics*, pp. 311–318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://aclanthology.org/P02-1040.
|
s25i99RTCg
|
Additionally, the optimization speeds of different modalities may inherently vary, leading to discrepancies in performance. In other words, the proposed method might achieve satisfactory results with simplistic datasets, but training becomes substantially more challenging when scaled to extensive, real-world data scenarios.
|
ABSTRACT
Multi-modal data-sets are ubiquitous in modern applications, and multi-modal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities. However, existing approaches suffer from a coherence–quality tradeoff, where models with good generation quality lack generative coherence across modalities, and vice versa. We discuss the limitations underlying the unsatisfactory performance of existing methods, to motivate the need for a different approach. We propose a novel method that uses a set of independently trained, uni-modal, deterministic autoencoders. Individual latent variables are concatenated into a common latent space, which is fed to a masked diffusion model to enable generative modeling. We also introduce a new multi-time training method to learn the conditional score network for multi-modal diffusion. Our methodology substantially outperforms competitors in both generation quality and coherence, as shown through an extensive experimental campaign.
1 INTRODUCTION
Multi-modal generative modelling is a crucial area of research in machine learning that aims to develop models capable of generating data according to multiple modalities, such as images, text, audio, and more. This is important because real-world observations are often captured in various forms, and combining multiple modalities describing the same information can be an invaluable asset. For instance, images and text can provide complementary information in describing an object, audio and video can capture different aspects of a scene. Multi-modal generative models can also help in tasks such as data augmentation (He et al., 2023; Azizi et al., 2023; Sariyildiz et al., 2023), missing modality imputation (Antelmi et al., 2019; Da Silva–Filarder et al., 2021; Zhang et al., 2023; Tran et al., 2017), and conditional generation (Huang et al., 2022; Lee et al., 2019b).
Multi-modal models have flourished over the past years and have seen tremendous interest from academia and industry, especially in the content creation sector. Whereas most recent approaches focus on specialization, by considering text as the primary input to be associated mainly with images (Rombach et al., 2022; Saharia et al., 2022; Ramesh et al., 2022; Tao et al., 2022; Wu et al., 2022; Nichol et al., 2022; Chang et al., 2023) and videos (Blattmann et al., 2023; Hong et al., 2023; Singer et al., 2022), in this work we target an established literature whose scope is more general, and in which all modalities are considered equally important. A large body of work relies on extensions of the Variational Autoencoder (VAE) (Kingma & Welling, 2014) to the multi-modal domain; initially interested in learning joint latent representations of multi-modal data, such works have mostly focused on generative modeling. Multi-modal generative models aim at high-quality data generation, as well as generative coherence across all modalities. These objectives apply both to the joint generation of new data and to the conditional generation of missing modalities, given a disjoint set of available modalities.
In short, multi-modal VAEs rely on combinations of uni-modal VAEs, and the design space consists mainly in the way the uni-modal latent variables are combined to construct the joint posterior distribution. Early work such as Wu & Goodman (2018) adopts a product of experts approach, whereas others, such as Shi et al. (2019), consider a mixture of experts approach. Product-based models achieve high generative quality, but suffer in terms of both joint and conditional coherence; this was found to be due to expert mis-calibration issues (Shi et al., 2019; Sutter et al., 2021). On the other hand, mixture-based models produce coherent but qualitatively poor samples. A first attempt to address the so-called coherence-quality tradeoff (Daunhawer et al., 2022) is represented by the mixture of products of experts approach (Sutter et al., 2021). However, recent comparative studies (Daunhawer et al., 2022) show that none of the existing approaches fulfill both the generative quality
and coherence criteria. A variety of techniques aim at finding a better operating point, such as contrastive learning techniques (Shi et al., 2021), hierarchical schemes (Vasco et al., 2022), total correlation based calibration of single modality encoders (Hwang et al., 2021), or different training objectives (Sutter et al., 2020). More recently, the work in (Palumbo et al., 2023) considers explicitly separated shared and private latent spaces to overcome the aforementioned limitations.
By expanding on results presented in (Daunhawer et al., 2022), in Section 2 we further investigate the tradeoff between generative coherence and quality, and argue that it is intrinsic to all variants of multi-modal VAEs. We indicate two root causes of this problem: latent variable collapse (Alemi et al., 2018; Dieng et al., 2019) and information loss due to mixture sub-sampling. To tackle these issues, in Section 3 we propose a new approach which uses a set of independent, uni-modal, deterministic autoencoders whose latent variables are simply concatenated into a joint latent variable. Joint and conditional generative capabilities are provided by an additional model that learns a probability density associated with the joint latent variable. We propose an extension of score-based diffusion models (Song et al., 2021b) to operate on the multi-modal latent space, and derive both forward and backward dynamics that are compatible with the multi-modal nature of the latent data. In Section 4, we propose a novel method to train the multi-modal score network, such that it can be used for both joint and conditional generation. Our approach is based on a guidance mechanism, which we compare to alternatives. We label our approach Multi-modal Latent Diffusion (MLD).
Our experimental evaluation of MLD in Section 5 provides compelling evidence of the superiority of our approach for multi-modal generative modeling. We compare MLD to a large variety of VAE-based alternatives, on several real-life multi-modal data-sets, in terms of generative quality and both joint and conditional coherence. Our model outperforms alternatives in all possible scenarios, even those that are notoriously difficult because modalities might be only loosely correlated. Note that recent works also explore the joint generation of multiple modalities (Ruan et al., 2023; Hu et al., 2023), but such approaches are application-specific, e.g., text-to-image, and essentially only target two modalities. When relevant, we compare our method to additional recent alternatives for multi-modal diffusion (Bao et al., 2023; Wesego & Rooshenas, 2023), and show superior performance of MLD.
2 LIMITATIONS OF MULTI-MODAL VAEs
In this work, we consider multi-modal VAEs (Wu & Goodman, 2018; Shi et al., 2019; Sutter et al., 2021; Palumbo et al., 2023) as the standard modeling approach to tackle both joint and conditional generation of multiple modalities. Our goal here is to motivate the need to go beyond such a standard approach, to overcome limitations that affect multi-modal VAEs, which result in a trade-off between generation quality and generative coherence (Daunhawer et al., 2022; Palumbo et al., 2023).
Consider the random variable $X = \{X^1, \ldots, X^M\} \sim p_D(x^1, \ldots, x^M)$, consisting in the set of $M$ of modalities sampled from the (unknown) multi-modal data distribution $p_D$. We indicate the marginal distribution of a single modality by $X^i \sim p_D(x^i)$ and the collection of a generic subset of modalities by $X^A \sim p_D(x^A)$, with $X^A \overset{\text{def}}{=} \{X^i\}_{i \in A}$, where $A \subset \{1, \ldots, M\}$ is a set of indexes. For example: given $A = \{1, 3, 5\}$, then $X^A = \{X^1, X^3, X^5\}$.
We begin by considering uni-modal VAEs as particular instances of the Markov chain $X \rightarrow Z \rightarrow \hat{X}$, where $Z$ is a latent variable and $\hat{X}$ is the generated variable. Models are specified by the two conditional distributions, called the encoder $Z | X=x \sim q_\psi(z | x)$, and the decoder $\hat{X} | Z=z \sim p_\theta(\hat{x} | z)$. Given a prior distribution $p_n(z)$, the objective is to define a generative model whose samples are distributed as closely as possible to the original data.
In the case of multi-modal VAEs, we consider the general family of Mixtures of Products of Experts (MOPOE) (Sutter et al., 2021), which includes as particular cases many existing variants, such as the Product of Experts (MVAE) (Wu & Goodman, 2018) and the Mixture of Experts (MMVAE) (Shi et al., 2019). Formally, a collection of $K$ arbitrary subsets of modalities $S = \{A_1, \ldots, A_K\}$, along with weighting coefficients $\omega_i \geq 0$, $\sum_{i=1}^{K} \omega_i = 1$, defines the posterior $q_\psi(z | x) = \sum_i \omega_i q_{\psi,A_i}(z | x^{A_i})$, with $\psi = \{\psi^1, \ldots, \psi^K\}$. To lighten the notation, we write $q_{\psi,A_i}$ for these subset-specific posteriors, noting that they can have both different parameters $\psi^{A_i}$ and different functional forms. For example, in the MOPOE (Sutter et al., 2021) parametrization, we have: $q_{\psi,A_i}(z | x^{A_i}) = \prod_{j \in A_i} q_{\psi^j}(z | x^j)$. Our exposition is
more general and not limited to this assumption. The selection of the posterior can be understood as the result of a two-step procedure where i) each subset of modalities $A_i$ is encoded into specific latent variables $Y_i \sim q_{\psi,A_i}(\cdot | x^{A_i})$, and ii) the latent variable $Z$ is obtained as $Z = Y_i$ with probability $\omega_i$. Optimization is performed w.r.t. the following evidence lower bound (ELBO) (Daunhawer et al., 2022; Sutter et al., 2021):
$$L = \sum_i \omega_i \int p_D(x)\, q_{\psi,A_i}(z \mid x^{A_i}) \left[ \log p_\theta(x \mid z) - \log \frac{q_{\psi,A_i}(z \mid x^{A_i})}{p_n(z)} \right] dz\, dx.$$
(1)
A well-known limitation called the latent collapse problem (Alemi et al., 2018; Dieng et al., 2019) affects the quality of the latent variables $Z$. Consider the hypothetical case of arbitrarily flexible encoders and decoders: then, posteriors with zero mutual information with respect to the model inputs are valid maximizers of Equation (1). To prove this, it is sufficient to substitute $q_{\psi,A_i}(z | x^{A_i}) = p_n(z)$ and $p_\theta(x | z) = p_D(x)$ into Equation (1) to observe that the optimal value $L = \int p_D(x) \log p_D(x) dx$ is achieved (Alemi et al., 2018; Dieng et al., 2019). The problem of information loss is exacerbated in the case of multi-modal VAEs (Daunhawer et al., 2022). Intuitively, even if the encoders $q_{\psi,A_i}(z | x^{A_i})$ carry relevant information about their inputs $X^{A_i}$, step ii) of the multi-modal encoding procedure described above induces a further information bottleneck. A fraction $\omega_i$ of the time, the latent variable $Z$ will be a copy of $Y_i$, which only provides information about the subset $X^{A_i}$. No matter how good the encoding step is, the information about $X^{\{1,\ldots,M\}\setminus A_i}$ that is not contained in $X^{A_i}$ cannot be retrieved.
Furthermore, if the latent variable carries zero mutual information w.r.t. the multi-modal input, coherent conditional generation of a set of modalities given others is impossible, since $\hat{X}^{A_1} \perp X^{A_2}$ for any generic sets $A_1, A_2$. While the factorization $p_\theta(x | z) = \prod_{i=1}^M p_{\theta_i}(x^i | z)$, $\theta = \{\theta_1, \ldots, \theta_M\}$ — where we use $p_{\theta_i}$ instead of $p_\theta$ to unclutter the notation — could enforce preservation of information and guarantee a better quality of the jointly generated data, in practice the latent collapse phenomenon induces multi-modal VAEs to converge toward a sub-optimal operating regime. When the posterior $q_\psi(z | x)$ collapses onto the uninformative prior $p_n(z)$, the ELBO in Equation (1) reduces to the sum of modality-independent reconstruction terms $\sum_i \omega_i \sum_{j \in A_i} \int p_D(x^j)\, p_n(z) \log p_{\theta_j}(x^j | z)\, dz\, dx^j$.
In this case, flexible decoders can similarly ignore the latent variable and converge to the solution $p_{\theta_j}(x^j | z) = p_D(x^j)$ where, paradoxically, the quality of the approximation of the various marginal distributions is extremely high, while there is a complete lack of joint coherence.
General principles to avoid latent collapse consist in explicitly forcing the learning of informative encoders $q_\psi(z | x)$ via $\beta$-annealing of the Kullback-Leibler (KL) term in the ELBO, and in reducing the representational power of encoders and decoders. While $\beta$-annealing has been explored in the literature (Wu & Goodman, 2018) with limited improvements, reducing the flexibility of encoders/decoders clearly impacts generation quality. Hence the presence of a trade-off: to improve coherence, the flexibility of encoders/decoders should be constrained, which in turn hurts generative quality. This trade-off has been recently addressed in the literature on multi-modal VAEs (Daunhawer et al., 2022; Palumbo et al., 2023), but our experimental results in Section 5 indicate that there is ample room for improvement, and that a new approach is truly needed.
3 Our Approach: Multi-modal Latent Diffusion
We propose a new method for multi-modal generative modeling that, by design, does not suffer from the limitations discussed in Section 2. Our objective is to enable both high-quality and coherent joint/conditional data generation, using a simple design (see Appendix A for a schematic representation). As an overview, we use deterministic uni-modal autoencoders, whereby each modality $X^i$ is encoded through its encoder $e_{\psi^i}$ into the modality-specific latent variable $Z^i$, and decoded into the corresponding $\hat{X}^i = d_{\theta^i}(Z^i)$. Our approach can be interpreted as a latent variable model where the different latent variables $Z^i$ are concatenated as $Z = [Z^1, \ldots, Z^M]$. This corresponds to the parametrization of the two conditional distributions as $q_\psi(z | x) = \prod_{i=1}^M \delta(z^i - e_{\psi^i}(x^i))$ and $p_\theta(\hat{x} | z) = \prod_{i=1}^M \delta(\hat{x}^i - d_{\theta^i}(z^i))$, respectively. Then, in place of an ELBO, we optimize the parameters of our autoencoders by minimizing the following sum of
modality specific losses:
\[ L = \sum_{i=1}^{M} L_i, \quad L_i = \int p_D(x^i)\, l^i\big(x^i - d_{\theta^i}(e_{\psi^i}(x^i))\big)\, dx^i, \]
where \( l^i \) can be any valid distance function, e.g., the squared norm \( \| \cdot \|_2^2 \). Parameters \( \psi^i, \theta^i \) are modality-specific; then, minimization of Equation (2) corresponds to individual training of the different autoencoders. Since the mapping from input to latent is deterministic, there is no loss of information between \( X \) and \( Z \). Moreover, this choice avoids any form of interference between the back-propagated gradients corresponding to the uni-modal reconstruction losses. Consequently, gradient conflict issues (Javaloy et al., 2022), where stronger modalities pollute weaker ones, are avoided.
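A minimal PyTorch sketch of this first training stage, assuming per-modality encoder/decoder modules and a loader yielding a list of modality tensors; since each autoencoder is optimized on its own loss, no gradients cross modalities:

```python
# Minimal PyTorch sketch of the first MLD stage: each deterministic
# autoencoder is trained independently on its own squared-error loss,
# so no gradients cross modalities. Module definitions are assumed.
import torch

def train_autoencoders(encoders, decoders, loader, epochs=10, lr=1e-3):
    for i, (enc, dec) in enumerate(zip(encoders, decoders)):
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(epochs):
            for batch in loader:                 # batch: list of modality tensors
                x = batch[i]                     # modality i only
                loss = ((x - dec(enc(x))) ** 2).mean()   # l^i = squared norm
                opt.zero_grad(); loss.backward(); opt.step()
```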
To enable such a simple design to become a generative model, it is sufficient to generate samples from the induced latent distribution \( Z \sim q_\psi(z) = \int p_D(x) q_\psi(z | x) dx \) and decode them as \( \hat{X} = d_\theta(Z) = [d_{\theta^1}(Z^1), \ldots, d_{\theta^M}(Z^M)] \). To obtain such samples, we follow the two-stage procedure described in Loaiza-Ganem et al. (2022); Tran et al. (2021), where samples from the lower-dimensional \( q_\psi(z) \) are obtained through an appropriate generative model. We consider score-based diffusion models in latent space (Rombach et al., 2022; Vahdat et al., 2021) to solve this task, and call our approach Multi-modal Latent Diffusion (MLD). It may be helpful to clarify, at this point, that the two training stages of MLD are carried out separately: the uni-modal deterministic autoencoders are pre-trained first, followed by the training of the score-based diffusion model, which is explained in more detail later.
To conclude the overview of our method, for joint data generation, one can sample from noise, perform backward diffusion, and then decode the generated multi-modal latent variable to obtain the corresponding data samples. For conditional data generation, given one modality, the reverse diffusion is guided by this modality, while the other modalities are generated by sampling from noise. The generated latent variable is then decoded to obtain data samples of the missing modality.
### 3.1 Joint and Conditional Multi-modal Latent Diffusion Processes
In the first stage of our method, the deterministic encoders project the input modalities \( X^i \) into the corresponding latent spaces \( Z^i \). This transformation induces a distribution \( q_\psi(z) \) for the latent variable \( Z = [Z^1, \ldots, Z^M] \), resulting from the concatenation of uni-modal latent variables.
**Joint generation.** To generate a new sample for all modalities, we use a simple score-based diffusion model in latent space (Sohl-Dickstein et al., 2015; Song et al., 2021b; Vahdat et al., 2021; Loaiza-Ganem et al., 2022; Tran et al., 2021). This requires reversing a stochastic noising process, starting from a simple, Gaussian distribution. Formally, the noising process is defined by a Stochastic Differential Equation (SDE) of the form:
\[ dR_t = \alpha(t) R_t dt + g(t) dW_t, \quad R_0 \sim q(r, 0), \]
where \( \alpha(t) \) and \( g(t) \) are the drift and diffusion terms, respectively, and \( W_t \) is a Wiener process. The time-varying probability density \( q(r, t) \) of the stochastic process at time \( t \in [0, T] \), where \( T \) is finite, satisfies the Fokker-Planck equation (Oksendal, 2013), with initial conditions \( q(r, 0) \). We assume uniqueness and existence of a stationary distribution \( \rho(r) \) for the process in Equation (3). The forward diffusion dynamics depend on the initial conditions \( R_0 \sim q(r, 0) \). We consider \( R_0 = Z \) to be the initial condition for the diffusion process, which is equivalent to setting \( q(r, 0) = q_\psi(r) \). Under loose conditions (Anderson, 1982), a time-reversed stochastic process exists, with a new SDE of the form:
\[ dR_t = (-\alpha(T-t) R_t + g^2(T-t) \nabla \log(q(R_t, T-t))) dt + g(T-t) dW_t, \quad R_0 \sim q(r, T), \]
indicating that, in principle, simulation of Equation (4) allows to generate samples from the desired distribution \( q(r, 0) \). In practice, we use a parametric score network \( s_\chi(r, t) \) to approximate the true score function, and we approximate \( q(r, T) \) with the stationary distribution \( \rho(r) \). Indeed, the generated data distribution \( q(r, 0) \) is close (in the KL sense) to the true density, as described by Song et al. (2021a); Franzese et al. (2023):
\[ \text{KL}[q_\psi(r) || q(r, 0)] \leq \frac{1}{2} \int_0^T g^2(t) \mathbb{E}[\| s_\chi(R_t, t) - \nabla \log q(R_t, t) \|^2] dt + \text{KL}[q(r, T) || \rho(r)], \]
Since the measures are not absolutely continuous w.r.t the Lebesgue measure, mutual information is \(+\infty\).
This is not necessary for the validity of the method (Song et al., 2021a).
where the first term on the r.h.s. is the score-matching objective, i.e., the loss over which the score network is optimized, and the second is a term that vanishes as \( T \to \infty \).
To conclude, joint generation of all modalities is achieved through the simulation of the reverse-time SDE in Equation (4), followed by a simple decoding procedure. Indeed, optimally trained decoders (achieving zero in Equation (2)) can be used to transform \( Z \sim q_\psi(z) \) into samples from \( \int p_\theta(x \mid z)q_\psi(z)\mathrm{d}z = p_D(x) \).
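As an illustration, a hedged Euler–Maruyama sketch of joint generation via Equation (4), assuming a VP-style SDE with \( \alpha(t) = -\beta(t)/2 \) and \( g(t) = \sqrt{\beta(t)} \) (an assumption for concreteness, not necessarily the paper's exact choice); `score_net` stands for the trained score network:

```python
# Hedged Euler-Maruyama sketch of joint generation via Equation (4),
# assuming a VP-style SDE with alpha(t) = -beta(t)/2, g(t) = sqrt(beta(t));
# this choice is an assumption, not necessarily the paper's exact one.
import torch

def joint_sample(score_net, dim, T=1.0, N=1000, beta=lambda t: 0.1 + 19.9 * t):
    dt = T / N
    r = torch.randn(1, dim)                    # R_0 ~ rho(r): standard Gaussian
    for n in range(N):
        t = T - n * dt                         # forward-time argument T - t
        drift = 0.5 * beta(t) * r + beta(t) * score_net(r, t)
        r = r + drift * dt + (beta(t) * dt) ** 0.5 * torch.randn_like(r)
    return r                                   # decode each block with its decoder
```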
**Conditional generation.** Given a generic partition of all modalities into non overlapping sets \( A_1 \cup A_2 \), where \( A_2 = (\{1,\ldots,M\} \setminus A_1) \), conditional generation requires samples from the conditional distribution \( q_\psi(z^{A_1} \mid z^{A_2}) \), which are based on masked forward and backward diffusion processes.
Given conditioning latent modalities \( z^{A_2} \), we consider a modified forward diffusion process with initial conditions \( R_0 = C(R_0^{A_1}, R_0^{A_2}) \), with \( R_0^{A_1} \sim q_\psi(r^{A_1} \mid z^{A_2}), R_0^{A_2} = z^{A_2} \). The composition operation \( C(\cdot) \) concatenates generated (\( R^{A_1} \)) and conditioning latents (\( z^{A_2} \)). As an illustration, consider \( A_1 = \{1,3,5\} \), such that \( X^{A_1} = \{X^1,X^3,X^5\} \), and \( A_2 = \{2,4,6\} \) such that \( X^{A_2} = \{X^2,X^4,X^6\} \). Then, \( R_0 = C(R_0^{A_1}, R_0^{A_2}) = C(R_0^{A_1}, z^{A_2}) = [R_0^1,z^2,R_0^3,z^4,R_0^5,z^6] \).
More formally, we define the masked forward diffusion SDE:
\[
\mathrm{d}R_t = m(A_1) \odot [\alpha(t)R_t\mathrm{d}t + g(t)\mathrm{d}W_t], \quad q(r,0) = q_\psi(r^{A_1} \mid z^{A_2})\delta(r^{A_2} - z^{A_2}).
\]
The mask \( m(A_1) \) contains \( M \) vectors \( u^i \), one per modality, each with the corresponding modality's latent dimensionality. If modality \( j \in A_1 \), then \( u^j = 1 \), otherwise \( u^j = 0 \). The effect of masking is thus to “freeze”, throughout the diffusion process, the part of the random variable \( R_t \) corresponding to the conditioning latent modalities \( z^{A_2} \). We naturally associate to this modified forward process the conditional time-varying density \( q(r,t \mid z^{A_2}) = q(r^{A_1},t \mid z^{A_2})\delta(r^{A_2} - z^{A_2}) \).
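A small sketch of how the mask \( m(A_1) \) can be materialized, assuming known per-modality latent dimensions (the dimensions here are illustrative):

```python
# Sketch of the mask m(A_1) of Equation (6): one 0/1 block per modality,
# sized to that modality's latent dimension (illustrative dimensions).
import torch

def build_mask(latent_dims, A1):
    blocks = [torch.ones(d) if i in A1 else torch.zeros(d)
              for i, d in enumerate(latent_dims)]
    return torch.cat(blocks)

m = build_mask([16, 8, 32], A1={0, 2})   # modalities 0 and 2 are diffused
```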
To sample from \( q_\psi(z^{A_1} \mid z^{A_2}) \), we derive the reverse-time dynamics of Equation (6) as follows:
\[
\mathrm{d}R_t = m(A_1) \odot [(-\alpha(T-t)R_t + g^2(T-t)\nabla \log(q(R_t,T-t \mid z^{A_2})))\mathrm{d}t + g(T-t)\mathrm{d}W_t],
\]
with initial conditions \( R_0 = C(R_0^{A_1}, z^{A_2}) \) and \( R_0^{A_1} \sim q(r^{A_1},T \mid z^{A_2}) \). Then, we approximate \( q(r^{A_1},T \mid z^{A_2}) \) by its corresponding steady state distribution \( \rho(r^{A_1}) \), and the true (conditional) score function \( \nabla \log(q(r,t \mid z^{A_2})) \) by a conditional score network \( s_\chi(r^{A_1},t \mid z^{A_2}) \).
### 4 Guidance Mechanisms to Learn the Conditional Score Network
A correctly optimized score network \( s_\chi(r,t) \) allows, through simulation of Equation (4), to obtain samples from the joint distribution \( q_\psi(z) \). Similarly, a conditional score network \( s_\chi(r^{A_1},t \mid z^{A_2}) \) allows, through the simulation of Equation (7), to sample from \( q_\psi(z^{A_1} \mid z^{A_2}) \). In Section 4.1, we extend guidance mechanisms used in classical diffusion models to allow multi-modal conditional generation. A naïve alternative is to rely on the unconditional score network \( s_\chi(r,t) \) for the conditional generation task, by casting it as an in-painting objective. Intuitively, any missing modality could be recovered in the same way as a uni-modal diffusion model can recover masked information. In Section 4.2, we discuss the implicit assumptions underlying in-painting from an information-theoretic perspective, and argue that, in the context of multi-modal data, such assumptions are difficult to satisfy. Our intuition is corroborated by ample empirical evidence, where our method consistently outperforms alternatives.
#### 4.1 Multi-time Diffusion
We propose a modification to the classifier-free guidance technique (Ho & Salimans, 2022) to learn a score network that can generate conditional and unconditional samples from any subset of modalities. Instead of training a separate score network for each possible combination of conditional modalities, which is computationally infeasible, we use a single architecture that accepts all modalities as inputs and a multi-time vector \( \tau = [t_1,\ldots,t_M] \). The multi-time vector serves two purposes: it is both a conditioning signal and the time at which we observe the diffusion process.
**Training:** learning the conditional score network relies on randomization. As discussed in Section 3.1, we consider an arbitrary partitioning of all modalities in two disjoint sets, \( A_1 \) and \( A_2 \). The set \( A_2 \)
contains randomly selected conditioning modalities, while the remaining modalities belong to set $A_1$. Then, during training, the parametric score network estimates $\nabla \log(q(r, t | z^{A_2}))$, whereby the set $A_2$ is randomly chosen at every step. This is achieved by the masked diffusion process from Equation (6), which only diffuses modalities in $A_1$. More formally, the score network input is $R_t = C(R_t^{A_1}, Z^{A_2})$, along with a multi-time vector $\tau(A_1, t) = [t\,\mathbb{1}(1 \in A_1), \ldots, t\,\mathbb{1}(M \in A_1)]$. As a follow-up of the example in Section 3.1, given $A_1 = \{1, 3, 5\}$, such that $X^{A_1} = \{X^1, X^3, X^5\}$, and $A_2 = \{2, 4, 6\}$, such that $X^{A_2} = \{X^2, X^4, X^6\}$, then $\tau(A_1, t) = [t, 0, t, 0, t, 0]$.
More precisely, the algorithm for the multi-time diffusion training (see Appendix A for the pseudo-code) proceeds as follows. At each step, a set of conditioning modalities $A_2$ is sampled from a predefined distribution $\nu$, where $\nu(\emptyset) \equiv \Pr(A_2 = \emptyset) = d$, and $\nu(U) \equiv \Pr(A_2 = U) = (1-d)/(2^M-1)$ with $U \in \mathcal{P}(\{1, \ldots, M\}) \setminus \emptyset$, where $\mathcal{P}(\{1, \ldots, M\})$ is the powerset of all modalities. The corresponding set $A_1$ and mask $m(A_1)$ are constructed, and a sample $X$ is drawn from the training data-set. The corresponding latent variables $Z^{A_1} = \{e_{\psi^i}(X^i)\}_{i \in A_1}$ and $Z^{A_2} = \{e_{\psi^i}(X^i)\}_{i \in A_2}$ are computed using the pre-trained encoders, and a diffusion process starting from $R_0 = C(Z^{A_1}, Z^{A_2})$ is simulated for a randomly chosen diffusion time $t$, using the conditional forward SDE with the mask $m(A_1)$. The score network is then fed the current state $R_t$ and the multi-time vector $\tau(A_1, t)$, and the difference between the score network’s prediction and the true score is computed, applying the mask $m(A_1)$. The score network parameters are updated using stochastic gradient descent, and this process is repeated for a total of $L$ training steps. Clearly, when $A_2 = \emptyset$, training proceeds as for an un-masked diffusion process, since the mask $m(A_1)$ allows all latent variables to be diffused.
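A condensed sketch of one such training step, reusing `build_mask` from the earlier sketch; the VP perturbation kernel, the uniform choice of $A_2$, and all helper names are assumptions for illustration, not the paper's exact implementation:

```python
# Condensed sketch of one multi-time training step; the VP perturbation
# kernel, the uniform choice of A_2, and all helper names are assumptions.
# Reuses build_mask from the earlier sketch; assumes M >= 2 modalities.
import math, random, torch

def training_step(score_net, encoders, x, latent_dims, M, d=0.5):
    A2 = set() if random.random() < d else \
         set(random.sample(range(M), random.randint(1, M - 1)))
    A1 = set(range(M)) - A2
    z = torch.cat([encoders[i](x[i]) for i in range(M)], dim=-1)
    t = random.uniform(1e-3, 1.0)
    mask = build_mask(latent_dims, A1)
    mean, std = z * math.exp(-0.5 * t), math.sqrt(1 - math.exp(-t))
    noise = torch.randn_like(z)
    r_t = mask * (mean + std * noise) + (1 - mask) * z     # freeze A_2 latents
    tau = torch.tensor([t if i in A1 else 0.0 for i in range(M)])
    target = -noise / std                # score of the Gaussian kernel
    return ((mask * (score_net(r_t, tau) - target)) ** 2).mean()
```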
**Conditional generation:** any valid numerical integration scheme for Equation (7) can be used for conditional sampling (see Appendix A for an implementation using the Euler-Maruyama integrator). First, the conditioning modalities in the set $A_2$ are encoded into the corresponding latent variables $z^{A_2} = \{e_{\psi^j}(x^j)\}_{j \in A_2}$. Then, numerical integration is performed with step-size $\Delta t = T/N$, starting from the initial conditions $R_0 = C(R_0^{A_1}, z^{A_2})$, with $R_0^{A_1} \sim \rho(r^{A_1})$. At each integration step, the score network $s_\chi$ is fed the current state of the process and the multi-time vector $\tau(A_1, \cdot)$. Before updating the state, the masking is applied. Finally, the generated modalities are obtained through the decoders as $\hat{X}^{A_1} = \{d_{\theta^j}(R_T^j)\}_{j \in A_1}$. Inference-time conditional generation is not randomized: the conditioning modalities are the ones that are available, whereas the remaining ones are those we wish to generate.
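A matching Euler–Maruyama sketch of conditional sampling over Equation (7), under the same assumed VP-SDE; `z_cond` is a full-length latent vector holding the conditioning latents in their slots (zeros elsewhere), and `build_mask` is reused from above:

```python
# Matching Euler-Maruyama sketch of conditional sampling (Equation (7));
# z_cond holds the A_2 latent blocks in place, zeros elsewhere. The
# VP-SDE choice is an assumption; build_mask comes from the sketch above.
import torch

def conditional_sample(score_net, z_cond, latent_dims, A1, M,
                       T=1.0, N=1000, beta=lambda t: 0.1 + 19.9 * t):
    dt = T / N
    mask = build_mask(latent_dims, A1)
    r = mask * torch.randn(sum(latent_dims)) + (1 - mask) * z_cond
    for n in range(N):
        t = T - n * dt
        tau = torch.tensor([t if i in A1 else 0.0 for i in range(M)])
        drift = 0.5 * beta(t) * r + beta(t) * score_net(r, tau)
        r = r + mask * (drift * dt + (beta(t) * dt) ** 0.5 * torch.randn_like(r))
    return r                      # decode the A_1 blocks with their decoders
```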
Any-to-any multi-modality has been recently studied through the composition of modality-specific diffusion models (Tang et al., 2023), by designing cross-attention and training procedures that allow arbitrary conditional generation. The work by Tang et al. (2023) relies on latent interpolation of input modalities, which is akin to mixture models, and uses it as a conditioning signal for individual diffusion models. This is substantially different from the joint nature of the multi-modal latent diffusion we present in our work: instead of forcing entanglement through cross-attention between score networks, our model relies on a joint diffusion process, whereby modalities naturally co-evolve according to the diffusion process. Another recent work (Wu et al., 2023) targets multi-modal conversational agents, whereby the strong underlying assumption is to consider one modality, i.e., text, as a guide for the alignment and generation of other modalities. Even if conversational objectives are orthogonal to our work, techniques akin to instruction following for cross-generation are an interesting illustration of the powerful in-context learning capabilities of LLMs (Xie et al., 2022; Min et al., 2022).
### 4.2 IN-PAINTING AND ITS IMPLICIT ASSUMPTIONS
Under certain assumptions, given an unconditional score network $s_\chi(r, t)$ that approximates the true score $\nabla \log q(r, t)$, it is possible to obtain a conditional score network $s_\chi(r^{A_1}, t | z^{A_2})$ to approximate $\nabla \log q(r^{A_1}, t | z^{A_2})$. We start by observing the equality:
$$q(r^{A_1}, t | z^{A_2}) = \int q(C(r^{A_1}, r^{A_2}), t | z^{A_2}) \, dr^{A_2} = \int \frac{q(z^{A_2} | C(r^{A_1}, r^{A_2}), t)}{q_\psi(z^{A_2})} q(C(r^{A_1}, r^{A_2}), t) \, dr^{A_2},$$
where, with a slight abuse of notation, we indicate with $q(z^{A_2} | C(r^{A_1}, r^{A_2}), t)$ the density associated to the event: the portion corresponding to $A_2$ of the latent variable $Z$ is equal to $z^{A_2}$, given that the whole diffused latent $R_t$ at time $t$ is equal to $C(r^{A_1}, r^{A_2})$. In the literature, the quantity $q(z^{A_2} | C(r^{A_1}, r^{A_2}), t)$ is typically approximated by dropping its dependency on $r^{A_1}$. This approximation can be used to manipulate Equation (8) as \( q(r^{A_1}, t \mid z^{A_2}) \approx \int q(r^{A_2}, t \mid z^{A_2})\, q(r^{A_1}, t \mid r^{A_2}, t) \, dr^{A_2} \).
Further Monte-Carlo approximations (Song et al., 2021b; Lugmayr et al., 2022) of the integral allow implementation of a practical scheme, where an approximate conditional score network is used to generate conditional samples. This approach, known in the literature as *in-painting*, provides high quality results in several *uni-modal* application domains (Song et al., 2021b; Lugmayr et al., 2022).
The KL divergence between \( q(z^{A_2} \mid C(r^{A_1}, r^{A_2}), t) \) and \( q(z^{A_2} \mid r^{A_2}, t) \) quantifies, for fixed \( r^{A_1}, r^{A_2} \), the discrepancy between the true and approximated conditional probabilities. Similarly, the expected KL divergence \( \Delta = \int q(r, t)\,KL[q(z^{A_2} \mid C(r^{A_1}, r^{A_2}), t) \,||\, q(z^{A_2} \mid r^{A_2}, t)] \, dr \) provides information about the average discrepancy. Simple manipulations allow to recast this as a discrepancy in terms of mutual information: \( \Delta = I(Z^{A_2}; R_t) - I(Z^{A_2}; R_t^{A_2}) \). Information about \( Z^{A_2} \) is contained in \( R_t^{A_2} \), as the latter is the result of a diffusion with the former as initial conditions, corresponding to the Markov chain \( Z^{A_2} \rightarrow R_t^{A_2} \), and in \( R_t^{A_1} \), through the Markov chain \( Z^{A_2} \rightarrow Z^{A_1} \rightarrow R_t^{A_1} \). The positive quantity \( \Delta \) is close to zero whenever the rate of loss of information w.r.t. the initial conditions is similar for the two subsets \( A_1, A_2 \); in other terms, \( \Delta \approx 0 \) whenever, out of the whole \( R_t \), the portion \( R_t^{A_2} \) is a sufficient statistic for \( Z^{A_2} \).
The assumptions underlying this approximation are in general not valid in the case of multi-modal learning, where the robustness to stochastic perturbations of the latent variables corresponding to the various modalities can vary greatly. Our claims are supported empirically by an extensive analysis on real data in Appendix B, where we show that the multi-time diffusion approach consistently outperforms in-painting.
## 5 EXPERIMENTS
We compare our method, MLD, to MVAE (Wu & Goodman, 2018), MMVAE (Shi et al., 2019), MOPOE (Sutter et al., 2021), the Hierarchical Generative Model (NEXUS) (Vasco et al., 2022), the Multi-view Total Correlation Autoencoder (MVTCAE) (Hwang et al., 2021), and MMVAE+ (Palumbo et al., 2023), re-implementing the competitors in the same code base as our method and selecting their best hyper-parameters (as indicated by the authors). For a fair comparison, we use the same encoder/decoder architecture for all the models. For MLD, the score network is implemented using a simple stacked multilayer perceptron (MLP) with skip connections (see Appendix A for more details).
**Evaluation metrics.** Coherence is measured as in Shi et al. (2019); Sutter et al. (2021); Palumbo et al. (2023), using pre-trained classifiers on the generated data and checking the consistency of their outputs. Generative quality is computed using the Fréchet Inception Distance (FID) (Heusel et al., 2017) and Fréchet Audio Distance (FAD) (Kilgour et al., 2019) scores for images and audio, respectively. Full details on the metrics are included in Appendix C. All results are averaged over 5 seeds (we report standard deviations in Appendix E).
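As an illustration, joint coherence can be computed along these lines; the classifier handles are hypothetical:

```python
# Illustrative computation of joint coherence: pre-trained uni-modal
# classifiers label each generated modality, and coherence is the
# fraction of samples on which all predictions agree (handles assumed).
import torch

def joint_coherence(generated, classifiers):
    # generated: list of per-modality batches; classifiers: one per modality
    preds = [clf(x).argmax(dim=-1) for clf, x in zip(classifiers, generated)]
    agree = torch.stack([p == preds[0] for p in preds[1:]]).all(dim=0)
    return agree.float().mean().item()
```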
**Results.** Overall, MLD largely outperforms the alternatives from the literature, both in terms of coherence and generative quality. VAE-based models suffer from a coherence-quality trade-off and from modality collapse on highly heterogeneous data-sets. We proceed to show this on several standard benchmarks from the multi-modal VAE literature (see Appendix C for details on the data-sets).
The first data-set we consider is MNIST-SVHN (Shi et al., 2019), where the two modalities differ in complexity. High variability, noise, and ambiguity make attaining good coherence for the SVHN modality a challenging task. Overall, MLD outperforms all VAE-based alternatives in terms of coherence, especially for joint generation and conditional generation of MNIST given SVHN; see Table 1. Mixture models (MMVAE, MOPOE) suffer from modality collapse (poor SVHN generation), whereas product-of-experts models (MVAE, MVTCAE) generate better quality samples at the expense of SVHN-to-MNIST conditional coherence. Joint generation is poor for all VAE models. Interestingly, these models also fail at SVHN self-reconstruction, which we discuss in Appendix E. MLD achieves the best performance also in terms of generation quality, as confirmed by the qualitative results (Figure 1), showing for example how MLD conditionally generates multiple SVHN digits within one sample given the input MNIST image, whereas other methods fail to do so.
The Multi-modal Handwritten Digits data-set (MHD) (Vasco et al., 2022) contains gray-scale digit images, motion trajectories of the handwriting, and sounds of the spoken digits. In our experiments, we do not use the label as a fourth modality. While digit image and trajectory share a good amount of information, the sound modality contains much more modality-specific variation. Consequently,
Table 1: Generation coherence and quality for MNIST-SVHN (M: MNIST, S: SVHN). The generation quality is measured in terms of Fréchet Modality Distance (FMD) for MNIST and FID for SVHN.
| Models | Coherence (%) ↑ | | | Quality ↓ | | | |
|--------|------|------|------|------|------|------|------|
| | Joint | M → S | S → M | Joint (M) | Joint (S) | M → S | S → M |
| MVAE | 38.19 | 48.21 | 28.57 | 13.34 | 68.9 | 68.0 | 13.66 |
| MMVAE | 37.82 | 11.72 | 67.55 | 25.89 | 146.82 | 393.33 | 53.37 |
| MOPOE | 39.93 | 12.27 | 68.82 | 20.11 | 129.2 | 373.73 | 43.34 |
| NEXUS | 40.0 | 16.68 | 70.67 | 13.84 | 98.13 | 281.28 | 53.41 |
| MVTCAE | 48.78 | 81.57 | 49.78 | 12.98 | 52.95 | 62.4 | 35.55 |
| MMVAE+ | 47.75 | 13.23 | 29.69 | 36.96 | 121.77 | 240.90 | 38.11 |
| MMVAE+ (K=10) | 41.59 | 55.3 | 56.41 | 19.05 | 67.13 | 75.9 | 18.16 |
| MLD (ours) | 85.22 | 83.79 | 79.13 | 3.93 | 56.36 | 57.2 | 3.67 |
Figure 1: Qualitative results for MNIST-SVHN. For each model we report: MNIST to SVHN conditional generation on the left, SVHN to MNIST conditional generation on the right.
conditional generation involving the sound modality, along with joint generation, is a challenging task. Coherence-wise (Table 2), MLD outperforms all the competitors, with the biggest difference seen in joint generation and in sound-to-other-modalities generation (in the latter task, MVTCAE performs better than the other competitors but is still worse than MLD). MLD dominates the alternatives also in terms of generation quality (Table 3). This is true for both the image and sound modalities, for which some VAE-based models fail to produce high-quality results, demonstrating the limitations of these methods in handling highly heterogeneous modalities. MLD, on the other hand, achieves high generation quality for all modalities, possibly due to the independent training of the autoencoders, which avoids interference.
Table 2: Generation coherence (%) for MHD (higher is better). The top header row indicates the generated modality; the observed modality subsets are listed below it.
| Models | Joint | I (Image) | | | T (Trajectory) | | | S (Sound) | | |
|--------|-------|------|------|------|------|------|------|------|------|------|
| | | T | S | T,S | I | S | I,S | I | T | I,T |
| MVAE | 37.77 | 11.68 | 26.46 | 28.4 | 95.55 | 26.66 | 96.58 | 58.87 | 10.76 | 58.16 |
| MMVAE | 34.78 | 99.7 | 69.69 | 84.74 | 99.3 | 85.46 | 92.39 | 49.95 | 50.14 | 50.17 |
| MOPOE | 48.84 | 99.64 | 68.67 | 99.69 | 99.28 | 87.42 | 99.35 | 50.73 | 51.5 | 56.97 |
| NEXUS | 26.56 | 99.57 | 85.77 | 95.27 | 88.51 | 93.22 | 70.06 | 75.84 | 89.48 | |
| MVTCAE | 42.55 | 99.54 | 72.05 | 99.63 | 99.22 | 72.03 | 92.49 | 92.98 | 98.97 | 98.97 |
| MMVAE+ | 41.67 | 98.05 | 84.16 | 91.88 | 97.47 | 81.16 | 89.31 | 64.34 | 65.42 | 64.88 |
| MMVAE+ (K=10) | 42.60 | 99.44 | 89.75 | 94.7 | 99.44 | 89.58 | 95.01 | 87.15 | 87.99 | 87.57 |
| MLD (ours) | 98.34 | 99.45 | 88.91 | 99.88 | 99.58 | 88.92 | 99.91 | 97.63 | 97.7 | 98.01 |
The POLYMNIST data-set (Sutter et al., 2021) consists of 5 modalities synthetically generated by using MNIST digits and varying the background images. The homogeneous nature of the modalities is expected to mitigate gradient conflict issues in VAE-based models, and consequently to reduce modality collapse. However, MLD still outperforms all alternatives, as shown in Figure 2. Concerning generation coherence, MLD achieves the best performance in all cases, with the single exception of the single-observed-modality case. On the qualitative performance side, not only is MLD superior to the alternatives, but its results remain stable when more modalities are considered, a capability that not all competitors share.
Finally, we explore the Caltech Birds (CUB) data-set (Shi et al., 2019), following the same experimental protocol as Daunhawer et al. (2022) by using real bird images (instead of ResNet features as in Shi et al. (2019)). Figure 3 presents qualitative results for caption-to-image conditional generation. MLD is the only model capable of generating bird images with convincing coherence. Clearly, none of the VAE-based methods is able to achieve sufficient caption-to-image conditional generation quality using the same simple autoencoder architecture. Note that an image autoencoder with larger capacity considerably improves MLD's generative performance, suggesting that careful engineering applied to modality-specific autoencoders is a promising avenue for future work. We report quantitative
Table 3: Generation quality for MHD in terms of FMD for image and trajectory modalities and FAD for the sound modality (Lower is better).
| Models | I (Image) | | | | T (Trajectory) | | | | S (Sound) | | | |
|--------|------|------|------|------|------|------|------|------|------|------|------|------|
| | Joint | T | S | T,S | Joint | I | S | I,S | Joint | I | T | I,T |
| MVAE | 93.73 | 92.55 | 14.68 | 39.51 | 20.42 | 38.77 | 19.25 | 14.14 | 14.08 | 14.47 | | |
| MMVAE | 224.01 | 16.29 | 8.38 | 170.41 | 10.65 | 0.85 | 69.91 | 122.61 | 10.42 | 10.01 | | |
| MOPOE | 147.81 | 16.29 | 8.38 | 15.89 | 13.92 | 0.52 | 33.38 | 0.53 | 18.53 | 24.11 | 23.93 | |
| NEXUS | 281.76 | 116.65 | 282.34 | 117.24 | 18.59 | 6.67 | 33.01 | 7.54 | 13.99 | 19.52 | 18.71 | 16.3 |
| MVTCAE | 121.15 | 2.80 | 128.56 | 113.5 | 22.37 | 1.21 | 21.74 | 15.2 | 16.12 | 17.31 | 17.92 | 17.58 |
| MMVAE+ | 97.19 | 1.83 | 70.72 | 62.43 | 21.10 | 1.38 | 8.52 | 7.22 | 14.58 | 14.33 | 14.34 | 14.32 |
| MMVAE+ (K=10) | 85.98 | 1.83 | 70.72 | 62.43 | 21.10 | 1.38 | 8.52 | 7.22 | 14.58 | 14.33 | 14.34 | 14.32 |
| MLD (ours) | 7.98 | 1.7 | 4.54 | 1.84 | 3.18 | 0.83 | 2.07 | 0.6 | 2.39 | 2.31 | 2.33 | 2.29 |
Figure 2: Results for the POLYMNIST data-set. Left: comparison of generative coherence (%, ↑) and quality in terms of FID (↓) as a function of the number of input modalities; we report the average performance following the leave-one-out strategy (see Appendix C). Right: qualitative results for the joint generation of the 5 modalities.
results in Appendix E, where we report the generation quality in terms of the FID metric. Due to the unavailability of labels in this data-set, coherence evaluation as with the previous data-sets is not possible. We therefore resort to the CLIP-Score (CLIP-S) (Hessel et al., 2021), an image-captioning metric that, despite its limitations on the considered data-set (Kim et al., 2022), shows that MLD outperforms the competitors.
6 CONCLUSION AND LIMITATIONS
We have presented a new multi-modal generative model, Multi-modal Latent Diffusion (MLD), to address the well-known coherence–quality tradeoff that is inherent in existing multi-modal VAE-based models. MLD uses a set of independently trained, uni-modal, deterministic autoencoders. The generative properties of our model stem from a masked diffusion process that operates on the latent variables. We also developed a new multi-time training method to learn the conditional score network for multi-modal diffusion. An extensive experimental campaign on various real-life data-sets provided compelling evidence of the effectiveness of MLD for multi-modal generative modeling. In all scenarios, including cases with loosely correlated modalities and high-resolution data-sets, MLD consistently outperformed the alternatives from the state-of-the-art.
Figure 3: Qualitative results on CUB data-set. Caption used as condition to generate the bird images. MLD* denotes the version of our method using a powerful image autoencoder.
REFERENCES
Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. Fixing a broken elbo. In *International conference on machine learning*, pp. 159–168. PMLR, 2018.
Brian DO Anderson. Reverse-time diffusion equation models. *Stochastic Processes and their Applications*, 12(3):313–326, 1982.
Luigi Antelmi, Nicholas Ayache, Philippe Robert, and Marco Lorenzi. Sparse multi-channel variational autoencoder for the joint analysis of heterogeneous data. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 302–311. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/antelmi19a.html.
Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J. Fleet. Synthetic data from diffusion models improves imagenet classification, 2023.
Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, and Jun Zhu. One transformer fits all distributions in multi-modal diffusion at scale, 2023.
Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models, 2023.
Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, and Dilip Krishnan. Muse: Text-to-image generation via masked generative transformers, 2023.
Matthieu Da Silva–Filarder, Andrea Ancora, Maurizio Filippone, and Pietro Michiardi. Multimodal variational autoencoders for sensor fusion and cross generation. In *2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)*, pp. 1069–1076, 2021. doi: 10.1109/ICMLA52953.2021.00175.
Imant Daunhawer, Thomas M. Sutter, Kieran Chin-Cheong, Emanuele Palumbo, and Julia E Vogt. On the limitations of multimodal VAEs. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=w-CPUXXrA7.
Adji B Dieng, Yoon Kim, Alexander M Rush, and David M Blei. Avoiding latent variable collapse with generative skip models. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 2397–2405. PMLR, 2019.
Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Jimenez Rezende, and Dan Rosenbaum. From data to functa: Your data point is a function and you can treat it like one. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 5694–5725. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/dupont22a.html.
Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, and Pietro Michiardi. How much is enough? a study on diffusion times in score-based generative models. *Entropy*, 25(4), 2023. ISSN 1099-4300. doi: 10.3390/e25040633. URL https://www.mdpi.com/1099-4300/25/4/633.
Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and Xiaojuan Qi. Is synthetic data from generative models ready for image recognition? In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=nUmCcZ5RKF.
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. *arXiv preprint arXiv:2104.08718*, 2021.
|
NJ6nyv3XWH
|
The authors claim that the proposed method is able to learn contextual information and relationships that are essential for fine-grained categorization. However, looking through the manuscript, it seems that the discussion and evidence are missing.
|
LEVERAGING GRAPH NEURAL NETWORKS TO BOOST FINE-GRAINED IMAGE CLASSIFICATION
Anonymous authors
Paper under double-blind review
ABSTRACT
Fine-grained image classification, which is a challenging task in computer vision, requires precise differentiation among visually similar object categories. In this paper, we introduce a novel approach that utilizes Graph Neural Network (GNN) blocks to enhance the clustering capability of feature vectors extracted from images within a deep neural network (DNN) framework. These GNN blocks capture intricate dependencies between feature vectors by modeling them as nodes within a graph. This graph-based approach enables our model to learn contextual information and relationships that are essential for fine-grained categorization. In practice, our proposed method demonstrates significant improvements in the accuracy of different fine-grained classifiers, with average increases of +2.78% and +3.83% on the Stanford Dogs and CUB-200-2011 datasets, respectively, while achieving a state-of-the-art result (95.79%) on the Stanford Dogs dataset. Furthermore, our method serves as a plug-in refinement module and can be easily integrated into different architectures.
1 INTRODUCTION
Fine-grained classification is an important task in computer vision. With the rapid advancement of technology, we now have the capability to collect and store a large amount of image data from various sources. However, classifying objects in images with high similarity, such as bird species, types of leaves, or electronic product models, remains a difficult challenge. This challenging problem has numerous real-world applications, including image recognition, disease diagnosis [Lu et al., 2023; Zhang et al., 2023; Wen et al., 2023], and even biodiversity monitoring [Horn et al., 2017; 2015a,b], where distinguishing between visually similar subcategories is crucial. Despite significant progress in using deep neural networks (DNN) to address this issue, there are still many challenges to overcome in order to achieve high accuracy and stability.
In contrast to standard image classification, fine-grained image classification presents greater difficulty for three primary reasons: (i) substantial intra-class variation, with objects in the same category exhibiting significant pose and viewpoint differences; (ii) subtle inter-class distinctions, where objects from different categories may closely resemble each other with minor differences; (iii) constraints on training data, as labeling fine-grained categories often demands specialized expertise and a substantial amount of annotation effort. For these reasons, fine-grained classification remains a formidable challenge for traditional deep neural networks (DNNs). This is primarily due to their limited capacity to discriminate between fine-grained features and the inherent difficulty in learning detailed patterns from limited training data.
This paper presents a GNN Post-Hoc (GPH) plugin that leverages the power of graph neural networks (GNNs) to enhance existing fine-grained image classification methods. We propose a design architecture that integrates GNN blocks into a conventional DNN architecture, allowing for the extraction of fine-grained features while maintaining the robustness and generalization capabilities of deep learning. Our approach aims to capture intricate inter-dependencies between feature vectors, effectively clustering them into meaningful groups that correspond to fine-grained categories. By doing so, we aim to improve the classification accuracy, particularly in scenarios where intra-class variations are significant.
In this work, we provide a comprehensive investigation into the effectiveness of our proposed model, benchmarking it against state-of-the-art methods on widely recognized fine-grained classification
datasets. We demonstrate that the incorporation of GNN blocks leads to substantial performance gains, showcasing the potential of this hybrid approach for fine-grained image classification tasks. Our contributions can be summarized as follows:
- We introduce a novel network architecture design in which GNN blocks are incorporated following the DNN encoder, improving the ability to cluster feature vectors and mitigating the ambiguity issue in fine-grained classification.
- The proposed design can be easily integrated into various fine-grained classifiers, enhancing performance, while the model’s complexity and processing time remain manageable.
- Our extensive experiments on publicly available datasets demonstrate the model’s capability to enhance feature clustering and accuracy, while also achieving state-of-the-art results on the Stanford Dogs dataset.
The remainder of this paper is organized as follows: Section 2 provides an overview of related work in the field of fine-grained classification and graph neural networks. In Section 3, we present our proposed model architecture in detail. Section 4 describes the experimental setup and presents empirical results. Finally, in Section 5, we discuss the implications of our findings and outline avenues for future research.
2 RELATED WORK
In this section, we present two research tracks related to our study, including fine-grained image classification and graph neural networks.
2.1 FINE-GRAINED IMAGE CLASSIFICATION
Recent deep learning research on fine-grained classification problems has primarily focused on two main directions, including convolutional neural networks (CNN)-based methods and visual attention-based methods.
**CNN-based Fine-Grained Image Classification** is commonly seen in general classification tasks and specifically in fine-grained classification problems. Common baseline CNN architectures such as MobileNet [Howard et al., 2017], DenseNet [Huang et al., 2017], ConvNeXT [Liu et al., 2022], and others can also be applied to fine-grained classification tasks. Notably, in 2022, two task-specific models, PIM [Chou et al., 2022] and μ2Net+ [Gesmundo, 2022], achieved state-of-the-art performance on the NABirds and CUB-200-2011 datasets [Wah et al., 2011]. Currently, the HERBS model [Chou et al., 2023] stands out as one of the top-performing models on these datasets. It employs two innovative approaches, namely high-temperature refinement and background suppression, to address key challenges in fine-grained classification.
**Visual attention-based approaches** aim to mimic human visual attention by selectively focusing on informative regions or features within an image. One of the pioneering models utilizing this mechanism [Xiao et al., 2014] uses two-level attention to concentrate on both the overall image context and fine-grained details. More recently, a reinforcement learning-based fully convolutional attention localization network [Liu et al., 2017] adaptively selects multiple task-driven visual attention regions, and is notable for being significantly more computationally efficient in both the training and testing phases. Furthermore, the ViT-NeT model [Kim et al., 2022] improves the interpretability of Vision Transformers [Dosovitskiy et al., 2021] by integrating a neural tree decoder, enabling the generation of predictions with hierarchical structures that facilitate better comprehension and examination of the model's decision-making process. In another context, MetaFormer [Yu et al., 2021] employs convolutional layers to encode visual information and transformer layers to fuse vision and meta information. Currently, the ViT-NeT and MetaFormer models achieve the highest accuracy on the Stanford Dogs dataset [Khosla et al., 2011] and the NABirds dataset [Van Horn et al., 2015], respectively.
2.2 GRAPH NEURAL NETWORKS
Graph neural networks (GNNs) can be categorized into four types: convolutional graph neural networks (ConvGNNs), recurrent graph neural networks (RecGNNs), graph autoencoders (GAEs), and spatial-temporal graph neural networks (STGNNs). Inspired by the success of CNNs in computer vision, numerous methods have emerged to redefine convolution for graph data. These methods, collectively known as ConvGNNs, can be categorized into two main streams: spectral-based and spatial-based approaches. Since the pioneering work on spectral-based ConvGNNs by Bruna et al. (2014), various advancements, extensions, and approximations have been made, including GCN (Kipf & Welling, 2017), AGCN (Li et al., 2018), and DualGCN (Zhuang & Ma, 2018). On the other hand, spatial-based ConvGNNs define graph convolutions based on a node's spatial relations (e.g., Velickovic et al. (2019); Xu et al. (2019); Chiang et al. (2019)). From a different perspective, spatial-based ConvGNNs share a similar concept of information propagation and message passing with RecGNNs. Furthermore, alongside RecGNNs and ConvGNNs, several other GNN variants have been devised, including graph autoencoders (Kipf & Welling, 2016) and spatial-temporal graph neural networks (Yu et al., 2018).
3 PROPOSED APPROACH
3.1 PROBLEM DEFINITION
For the problem of fine-grained image classification, as in general image recognition, we are given a training dataset \( T = \{ (x_i, y_i) \}_{i=1}^{N} \) drawn from an unknown joint data distribution defined on \( X \times Y \), with \( X \subset \mathbb{R}^{3 \times H \times W} \) and \( Y \subset \{0, 1\}^C \) denoting the input image space and the output label space (\( H \) and \( W \) denote the height and width of an image in \( X \)). In particular, the label space \( Y \), which contains one-hot classification vectors, is the union of the \( C \) subspaces corresponding to the \( C \) subordinate categories of the same meta-category, i.e., \( Y = Y_1 \cup Y_2 \cup \cdots \cup Y_c \cup \cdots \cup Y_C \). Our goal is to learn a mapping function \( f : X \rightarrow Y \) that correctly classifies images into one of the \( C \) categories.
3.2 GPH ARCHITECTURE DESIGN
In order to improve the model’s understanding of complex image relationships and bolster its capability to distinguish subtle variations in fine-grained classification tasks, we propose a simple yet effective architecture design that utilizes a plug-in module based on GNNs. Figure 1 illustrates the workflow of our proposed design, in which the GNN encoder can be considered as a post-hoc plug-in. We first utilize a DNN-based encoder to generate feature vectors. These vectors are then constructed into a complete graph and input into a GNN model to obtain GNN embeddings, aiming to enhance the discriminative ability between feature clusters. The two features from the two encoders are then combined and fed into fully connected layers for classification. It is worth noting that the GNN plug-in can be integrated into any mainstream backbone network such as DenseNet, Swin Transformer, and ConvNeXT. In this section, we offer comprehensive insights into our GNN Post-Hoc structure, consisting of two primary components: the deep neural network encoder and the graph neural network encoder, along with an overview of the inference process.
In our network architecture, function $f$ consists of three components: (1) a deep neural network encoder $\Phi : X \rightarrow \mathbb{R}^m$ that maps each input image $x_i$ to a $l_2$-normalized feature embedding $z_i$; (2) a graph neural network encoder that constructs a fully connected graph $G$ from the obtained feature vectors within a batch $z = \{z_i\}_{i=1}^{b}$ and then maps them to $l_2$-normalized feature embeddings $g = \{g_i\}_{i=1}^{b}$ with $g_i \in \mathbb{R}^m$; (3) a classifier $\Psi : \mathbb{R}^m \rightarrow \mathbb{R}^C$ that maps each feature in the combined $m$-dimensional embeddings of $z$ and $g$ to a classification vector, where a cross-entropy loss can be applied after using a sigmoid function.
**DNN encoder.** This encoder can be a typical encoder in any DNN-based image classification methods. Given a training batch $\{x_i, y_i\}_{i=1}^{b}$ with batch size $b$, the images are fed into the feature extractor network, yielding $l_2$-normalized embeddings $\{z_i\}_{i=1}^{b}$: $z_i = \Phi(x_i)$.
**GNN encoder.** We enhance the capability of conventional classification networks for fine-grained classification tasks by incorporating a graph neural network module after their feature extraction module. Figure 2 illustrates a toy example depicting the distribution of feature points corresponding to images in a two-dimensional space. In Figure 2(a), the features extracted by conventional models exhibit good class separability, with features from the same class clustering closely together. However, there is a lack of clear differentiation between clusters of different classes, leading to potential misclassifications. On the other hand, our model also facilitates the grouping of elements of the same class while improving the separation between clusters of different classes, thereby enhancing the overall accuracy.
We denote a fully connected graph $G = (\mathcal{V}, \mathcal{E}, \mathcal{F})$, where $\mathcal{V}$ represents the set of images in each batch, i.e., $|\mathcal{V}| = b$, $\mathcal{E} = \{e_{ij}\}_{i,j=1}^{b}$ is the set of edges connecting images, and $\mathcal{F} = \{z_1, z_2, \ldots, z_b\}$ is the set of node features in the graph.
Our proposed GPH can employ various GNN architectures as the GNN encoder, such as GraphTransformer [Yun et al., 2019] and GraphSAGE [Hamilton et al., 2017], to learn the node embeddings, which are described by the feature matrix $Z \in \mathbb{R}^{b \times m}$. Specifically, the initial node representations, which are the set of DNN embeddings $\{z_i\}_{i=1}^{b}$, are passed through multiple layers, with each layer encompassing two critical functions: AGGREGATE, responsible for gathering information from the neighbors of each node, and COMBINE, tasked with updating the node representations by combining the aggregated information from neighbors with the current node representations.
Mathematically, the general framework of the GNN encoder can be expressed as follows:
- **Initialization:** $Z^{(0)} = \mathcal{F}$.
- For each layer $l$ of the GNN encoder ($l = 1, \ldots, L$, where $L$ is the number of layers), we update the embeddings of the graph to obtain $Z^{(l)} = \{z_i^{(l)}\}_{i=1}^{b}$, computed through two general functions (here, $z_i^{(0)}$ refers to $z_i$):
$$a_i^{(l)} = \text{AGGREGATE}^{(l)} \left\{ z_j^{(l-1)} : j \in \mathcal{N}(i) \right\}, \quad z_i^{(l)} = \text{COMBINE}^{(l)} \left\{ z_i^{(l-1)}, a_i^{(l)} \right\}$$
where $\mathcal{N}(i)$ is the set of neighbors for the $i$-th node.
**Feature combination.** The node representations $Z^{(L)}$ obtained at the last layer of the GNN encoder can be treated as the final node representations, and these features are subsequently merged with features from the DNN encoder $\{z_i\}_{i=1}^{b}$ as follows:
$$c_i = \text{COMBINE} \left\{ z_i^{(L)}, z_i \right\}.$$
These final features $\{c_i\}_{i=1}^{b}$ are then passed through the classifier $\Psi$ for classification.
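To make the preceding pipeline concrete, below is a minimal PyTorch sketch of a GPH-style forward pass. This is not the authors' implementation: it uses mean aggregation over the complete batch graph (the paper's encoder could instead be GraphSAGE or GraphTransformer), concatenation for both COMBINE steps, and the dimensions later stated in the implementation details (four GNN blocks, 1024-dimensional embeddings); the backbone, activation, and number of classes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanAggLayer(nn.Module):
    """One GNN block on a complete graph over the batch (requires b >= 2):
    AGGREGATE = mean of the other nodes, COMBINE = linear map of [self; agg]."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.combine = nn.Linear(2 * dim_in, dim_out)

    def forward(self, z):                                  # z: (b, d)
        b = z.size(0)
        agg = (z.sum(dim=0, keepdim=True) - z) / (b - 1)   # neighbors' mean
        return F.relu(self.combine(torch.cat([z, agg], dim=-1)))

class GPH(nn.Module):
    """DNN encoder -> GNN blocks on the batch graph -> concat -> classifier."""
    def __init__(self, backbone, feat_dim, gnn_dim=1024, n_classes=120, n_blocks=4):
        super().__init__()
        self.backbone = backbone                           # any feature extractor
        dims = [feat_dim] + [gnn_dim] * n_blocks
        self.gnn = nn.ModuleList(
            [MeanAggLayer(dims[i], dims[i + 1]) for i in range(n_blocks)])
        self.classifier = nn.Linear(feat_dim + gnn_dim, n_classes)

    def forward(self, x):
        z = F.normalize(self.backbone(x), dim=-1)          # l2-normalized embeddings
        g = z
        for block in self.gnn:
            g = block(g)
        g = F.normalize(g, dim=-1)
        return self.classifier(torch.cat([z, g], dim=-1))
```

A backbone here could be, for instance, a torchvision DenseNet with its classification head removed, so that it returns a `(b, feat_dim)` feature matrix.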
Table 1: Dataset statistics. Imbalance is defined as the ratio of the number of images in the largest class to the number of images in the smallest class.
| Dataset | # Train | # Test | Imbalance |
|------------------|---------|--------|-----------|
| CUB-200-2011 | 5,994 | 5,794 | 1.03 |
| Stanford Dogs | 12,000 | 8,580 | 1.00 |
| NABirds | 23,929 | 24,633 | 15.00 |
4 EXPERIMENTS
4.1 DATASETS AND EXPERIMENTAL SETTINGS
Datasets. We perform experiments on three well-known fine-grained datasets: CUB-200-2011 [Wah et al., 2011], Stanford Dogs [Khosla et al., 2011], and NABirds [Van Horn et al., 2015]. First, the CUB-200-2011 dataset, i.e., Caltech-UCSD Birds-200-2011, comprises 11,788 labeled images of 200 bird species; it extends the original CUB-200 dataset with additional images for each category. This dataset also provides attribute labels and landmark annotations, which offer supplementary information for detailed analysis. Second, the Stanford Dogs dataset consists of 20,580 images featuring 120 distinct dog breeds, and it does not include meta-information similar to CUB-200-2011. Finally, the NABirds dataset, short for "North American Birds Dataset," contains over 48,000 annotated images of 555 bird species found in North America. The division of training and testing data follows the predefined configurations of each dataset, with detailed statistics provided in Table 1.
Implementation details. All experiments are conducted on an NVIDIA Tesla T4 GPU with 15GB of RAM. Initially, all input images are resized to 224×224 pixels. We employ simple data augmentation techniques such as RandomHorizontalFlip and RandomRotation during training. The DNN encoder is trained using pre-trained weights from the ImageNet1K dataset. For the GNN encoder, we integrate four blocks in total. The first block transforms the output features of the base encoder into embeddings with a size of 1024. The remaining three blocks further transform the features to ensure that the output features have a consistent dimension of 1024. The model is fine-tuned for 50 epochs using a batch size of 32 for all models. As the proposed GPH can be influenced by the batch size, we provide detailed experiments to evaluate the results corresponding to different batch size configurations in Section 4.2.3. We train the network using the Rectified Adam optimizer with a default epsilon value of $1e^{-8}$. The dimension of the embedding of the encoder network is set to 1024. We evaluate the top-1 classification error on the shuffled validation set. Additionally, the initial learning rate is set to $1e^{-5}$.\footnote{The source code of the implementation is available online (currently omitted due to blind review).}
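As a compact summary of this setup, here is a hedged sketch of the stated preprocessing and optimizer configuration; the rotation angle is our assumption, since the paper does not specify it, and the linear model is only a stand-in.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentations from the implementation details; the rotation angle
# (15 degrees) is an assumption, not stated in the paper.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ToTensor(),
])

model = nn.Linear(3 * 224 * 224, 120)  # stand-in for a GPH model
# Rectified Adam with the stated learning rate and epsilon
optimizer = torch.optim.RAdam(model.parameters(), lr=1e-5, eps=1e-8)
```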
4.2 EXPERIMENTAL RESULTS
Our empirical studies in this subsection are designed to answer the following key research questions.
- **Q1.** How effective is the proposed design when various types of GNN encoders are applied within the GPH architecture?
- **Q2.** To what extent does the GNN Post-Hoc model improve performance compared to regular classification networks and state-of-the-art fine-grained classification approaches?
- **Q3.** How do batch configurations affect the performance of the proposed model?
- **Q4.** How does integrating an additional GNN encoder with the DNN encoder impact the representation of feature vectors compared to a conventional classification model?
- **Q5.** How do the GNN aggregation functions affect the accuracy of the proposed model?
4.2.1 DIFFERENT GNN ENCODERS (Q1)
To investigate the effect of employing different GNN models as the GNN encoder, we perform an experiment using four popular GNN methods: GCN [Kipf & Welling, 2017], GAT [Veličković et al., 2018], GraphSAGE [Hamilton et al., 2017], and GraphTransformer [Yun et al., 2019],
by assessing their performance on the three benchmark datasets while using Densenet201 as the underlying DNN backbone. Furthermore, we introduce another baseline plug-in adopting an Attention layer instead of the GNN encoder for comparison against the GPH architecture. Table 2 reveals that models equipped with these additional modules consistently enhance accuracy in contrast to the standard Densenet201. Remarkably, our four GPH models exhibit even more substantial improvements, particularly in the context of fine-grained classification across these three datasets.
Table 2: Model accuracy according to different GNN encoders.
| Model | Stanford Dogs | CUB-200-2011 | NAbirds |
|------------------------|---------------|--------------|---------|
| Densenet201 | 83.95 | 79.13 | 77.55 |
| Densenet201-Attention | 85.28 | 79.45 | 78.59 |
| Densenet201-GCN | 87.6 | 84.40 | 84.14 |
| Densenet201-GAT | 87.82 | 84.61 | 83.94 |
| Densenet201-SAGE | 87.39 | 84.43 | 83.54 |
| Densenet201-GraphTransformer | 88.09 | 84.48 | 83.62 |
4.2.2 COMPARISON WITH EXISTING METHODS (Q2)
Baselines. To validate the effectiveness and generalization of our method, we investigate the performance of incorporating GPH into four well-known DNNs and their variants (DenseNet [Huang et al., 2017], MobileNet [Howard et al., 2017], ConvNext [Liu et al., 2022], and SwinTransformer [Liu et al., 2021]), as well as HERB [Chou et al., 2023]. It is important to highlight that our GPH is the only modification, while all other training configurations and hyperparameters remain unaltered from the original implementations. For consistency, we employ GraphTransformer as the GNN encoder for all experiments in this section. Even though we incorporate our proposed method across various techniques and assess it on diverse datasets, we maintain the consistent parameter configuration detailed earlier throughout all experiments.
Comparison results. Table 3 shows the impact of our GPH on fine-grained classification performance across different methods and datasets. Our interesting findings are summarized as follows:
• The table clearly illustrates that the incorporation of GPH consistently improves fine-grained classification results. Notably, we observe average increases of +2.78%, +3.83%, and +3.29% on the Stanford Dogs, CUB-200-2011, and NABirds datasets, respectively.
• While GPH significantly enhances the performance of CNN-based models on both datasets, the improvement is more moderate for transformer-based models. We hypothesize that because of the inherent similarity between the attention mechanism of transformers and the nature of GNN, the accuracy improvement is not as substantial as with CNN-based models. For example, with models like DenseNet and MobileNet, accuracy increases by 3 – 6% on both datasets, while with Swin Transformer, it ranges from 1 – 2%. Notably, ConvNext shows a slight performance boost on the Stanford Dogs dataset but a significant improvement of 5 – 6% on CUB-200-2011.
• Improving existing fine-grained classification methods is a challenging endeavor. However, as shown in Table 3, our proposed approach achieves new state-of-the-art results on the Stanford Dogs dataset. It is worth noting that for the other two datasets, CUB-200-2011 and NABirds, we were unable to reproduce the performance of the state-of-the-art baselines, i.e., HERB and MetaFormer, even when referring to their GitHub pages.
• Additionally, we observe that for some models, when we add the GPH module to smaller variants, they achieve better accuracy than the larger variants without the module, while also being less time-consuming and complex. For instance, SwinT-Small-GPH (61.7M parameters) outperforms SwinT-Big (87M parameters), and ConvNextBase-GPH (103.4M parameters) surpasses ConvNextLarge (197.9M parameters). This partly demonstrates the effectiveness of the proposed module when integrated into different backbones. Regarding neural network complexity, despite a significant increase in the number of parameters in the proposed models compared to the base ones, the inference time varies only slightly between them.
² According to the comparison table at https://paperswithcode.com/sota/fine-grained-image-classification-on-stanford-dogs (accessed 26/09/2023).
³ https://github.com/chou141253/FGVC-HERBS.git
⁴ https://github.com/dqshuai/MetaFormer.git
Table 3: The impact of GPH on fine-grained classification outcomes when incorporated into various DNN techniques. The accuracy gain when applying GPH is provided in parentheses.

| Method | Inference time | # params | Stanford Dogs | CUB-200-2011 | NABirds |
|-------------------|----------------|----------|---------------|--------------|---------|
| MobileNetV3-S | 0.013 | 1.6M | 73.12 | 67.50 | 66.46 |
| MobileNetV3-S-GPH | 0.016 | 17.4M | 77.01 (+3.89) | 69.86 (+2.36) | 69.10 (+2.64) |
| MobileNetV3-L | 0.035 | 4.4M | 78.31 | 77.65 | 75.86 |
| MobileNetV3-L-GPH | 0.039 | 23.2M | 82.72 (+4.41) | 80.77 (+3.12) | 79.82 (+3.96) |
| DenseNet201 | 0.28 | 18.3M | 83.95 | 79.13 | 77.55 |
| DenseNet201-GPH | 0.29 | 73.7M | 87.72 (+3.77) | 84.48 (+5.35) | 83.81 (+6.26) |
| DenseNet161 | 0.42 | 26.7M | 84.46 | 79.68 | 78.97 |
| DenseNet161-GPH | 0.45 | 88.7M | 88.47 (+4.01) | 84.79 (+5.11) | 84.75 (+5.78) |
| SwinT-Small | 0.51 | 49.1M | 91.39 | 86.27 | 86.74 |
| SwinT-Small-GPH | 0.52 | 61.7M | 92.79 (+1.40) | 87.35 (+1.08) | 87.97 (+1.23) |
| SwinT-Big | 0.82 | 87.0M | 92.11 | 85.86 | 86.32 |
| SwinT-Big-GPH | 0.84 | 102.8M | 93.06 (+0.95) | 87.90 (+2.04) | 88.03 (+1.71) |
| ConvNextBase | 0.59 | 88.7M | 92.77 | 81.93 | 85.31 |
| ConvNextBase-GPH | 0.61 | 103.4M | 94.56 (+1.79) | 87.52 (+5.59) | 87.86 (+2.55) |
| ConvNextLarge | 1.22 | 197.9M | 93.71 | 81.74 | 85.53 |
| ConvNextLarge-GPH | 1.23 | 231.8M | 95.79 (+2.08) | 87.80 (+6.06) | 88.11 (+2.58) |
| HERB-SwinT | 1.74 | 286.6M | 88.62 | 89.90 | 90.00 |
| HERB-SwinT-GPH | 1.88 | 318.2M | 88.90 (+0.28) | 90.37 (+0.47) | 90.61 (+0.61) |
| Avg. improvement | | | +2.51 | +3.46 | +3.04 |
In summary, our proposed approach consistently demonstrates enhanced performance across various classifiers and fine-grained datasets. Moreover, our method can easily integrate with cutting-edge classifiers to yield further enhancements. Notably, the parameter configuration for our approach remains uncomplicated, delivering favorable outcomes with a single setup across diverse classifiers and datasets.
4.2.3 THE IMPACT OF BATCH CONFIGURATIONS (Q3)
In both the training and inference phases of the proposed module, the feature learning process of the GNN encoder begins by constructing a complete graph based on the features of the DNN encoder within a batch. Therefore, batch configurations, including batch size and how images are selected, influence the model’s performance to some extent. In this part, we will examine the stability of GPH under different batch configurations.
Batch size. Figure 3 reveals that altering the batch size of the training and testing process has minimal impact on the accuracy of the baseline DNN models. Therefore, in this experiment, we only compare the results of 4 out of the 9 GPH variants for ease of illustration. The results plotted on both datasets demonstrate that larger models tend to exhibit higher stability, i.e., changes in batch size do not significantly affect performance. Among the models, DenseNet201-Attention and MobileNet exhibit the biggest variability. In contrast, the other 3 models show differences of less than 1%.
Shuffling the validation dataset during evaluation. Since GPH refines image latent embeddings using a fully connected graph of all embeddings within a batch, its performance may depend on the variation of the samples in the batch. In this section, we examine the stability of GPH under different batch configurations of the evaluation datasets. Table 4 displays the comparison results of the DenseNet161-GPH and SwinT-Big-GPH models on the validation dataset with two different orders: sequential and shuffled-data sampling. In the sequential data sampling scenario, data is drawn from one class before moving on to the next class when filling the batches, making the variation of samples within each batch low.
Figure 3: Performance comparison for GPHs using various batch sizes on both the Stanford Dogs dataset (on the left) and the CUB-200-2011 dataset (on the right). Note that experiments with large batch sizes on Densenet201-GPH, SwinT-Small-GPH, and ConvNextBase-GPH are omitted due to the GPU’s memory constraints.
Table 4: Evaluation results on the three datasets employing two distinct data sampling techniques during validation, namely Sequential and Shuffle.
| Method | Stanford Dogs (Sequential) | Stanford Dogs (Shuffle) | CUB-200-2011 (Sequential) | CUB-200-2011 (Shuffle) | NABirds (Sequential) | NABirds (Shuffle) |
|---|---|---|---|---|---|---|
| Densenet161-GPH | 88.47 | 88.17 | 84.79 | 84.53 | 84.75 | 84.62 |
| SwinT-Big-GPH | 93.06 | 92.82 | 87.90 | 87.66 | 87.38 | 87.21 |
In contrast, in the common shuffled-data sampling, the variation within each batch is high since each sample is randomly picked from any class. As reported in Table 4, sequential sampling provides slightly better accuracy, but the gap is small (at most 0.3%). Therefore, we can confirm that GPH provides a fairly stable result, and the diversity of classes within the same batch has a minor impact on the model's classification performance.
Feature selection within a batch during evaluation and prediction. The question at hand is whether, with pre-trained weights obtained during the training of the GPH model and a batch size of $b$, the model's input during testing or inference must necessarily be fixed at $b$ images for the GNN encoder to process. To address this question, we employ a method of filling the batch embedding with vectors of all ones. Specifically, assuming we have $b_t < b$ images for testing, the $b_t$ images first pass through the DNN encoder to extract features $\{z_i\}_{i=1}^{b_t}$. Then, the remaining features $\{z_j\}_{j=b_t+1}^{b}$ are initialized as vectors of ones, and the entire set of $b$ features is subsequently input into the GNN encoder for processing, as described in Section 3.2. Table 5 presents the evaluation results on the validation set using this method with $b_t = 1$, corresponding to different values of $b$. The results demonstrate the stability of the batch-filling method; even with MobileNetV3-S-GPH, this method achieves better accuracy than the conventional approach of taking the entire batch of images. Notably, the results for Densenet201-Attention-filled are favorable, while Densenet201-Attention performs poorly with a small batch size. From these results, it is evident that the filling method effectively addresses the posed question.
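A minimal sketch of this filling step in plain PyTorch; the function name and shapes are our own, not from the paper.

```python
import torch

def fill_batch(z, b):
    """Pad b_t < b embeddings with all-ones vectors so a GNN encoder trained
    with batch size b can process a smaller (even single-image) batch."""
    b_t, d = z.shape
    if b_t >= b:
        return z
    ones = torch.ones(b - b_t, d, dtype=z.dtype, device=z.device)
    return torch.cat([z, ones], dim=0)

z = torch.randn(1, 1024)        # a single test image's DNN embedding (b_t = 1)
z_full = fill_batch(z, b=32)    # shape (32, 1024); rows 1..31 are all ones
```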
Table 5: The performance of models using various batch sizes after filling batch feature embeddings with ones tensors on the Stanford Dogs dataset.
| Model \ Batch size | 1 | 4 | 8 | 16 | 28 | 36 | 48 | 64 | 120 |
|---|---|---|---|---|---|---|---|---|---|
| MobilenetV3-S-GPH | 72.54 | 75.57 | 76.22 | 76.54 | 76.57 | 76.78 | 76.71 | 76.82 | 77.01 |
| MobilenetV3-S-GPH-filled | 76.2 | 76.33 | 76.52 | 76.64 | 76.93 | 76.98 | 76.94 | 76.92 | 77.01 |
| Densenet201-Attention | 31.42 | 63.67 | 76.94 | 83.13 | 84.39 | 85.06 | 85.47 | - | - |
| Densenet201-Attention-filled | 85.32 | 85.61 | 85.63 | 85.56 | 85.6 | 85.64 | 85.47 | - | - |
| Densenet201-GPH | 86.33 | 88.05 | 88.14 | 88.2 | 88.31 | 88.31 | 88.09 | - | - |
| Densenet201-GPH-filled | 87.25 | 87.47 | 87.68 | 88 | 88.27 | 88.27 | 88.09 | - | - |
4.2.4 VISUAL ANALYSIS (Q4)
To identify the areas of primary interest in the images according to the models' analysis, we utilize Grad-CAM (Selvaraju et al., 2019) to display their activation maps over the original images, as depicted in Figure 4, where the color spectrum from blue to red represents values from low to high, with higher values indicating a stronger focus of the model on that area. We can discern that all four models primarily concentrate on the object in the image, i.e., the dog. Nevertheless, the Densenet201-GPH and SwinT-Small-GPH models pay more attention to the dog's facial regions, seeking cues for assessment, whereas the baselines' heatmap weights are spread across the entire dog.
4.2.5 GNN AGGREGATION FUNCTIONS (Q5)
Table 6: Model accuracy according to different GNN aggregation functions.

| Model | Sum | Mean |
|---------------------|------|------|
| Densenet201-Attention | 75.15 | 85.85 |
| Densenet201-SAGE | 66.90 | 87.39 |
The results presented in Table 6 compare two GNN aggregation functions, SUM and MEAN, which have divergent impacts on accuracy. The MEAN function leads to a notable improvement in model accuracy over DenseNet201, whereas the SUM operation has a detrimental effect. Additionally, we observe a contrast in relative performance: with the SUM function, SAGE achieves lower accuracy than Attention, while the opposite holds for the MEAN function.
5 CONCLUSION AND DISCUSSIONS
In our investigation, we identified a novel architectural design that appears deceptively straightforward yet has remained unexplored in prior studies. Rigorous experimentation conducted on benchmark datasets underscores the efficacy of our proposed approach, showcasing its seamless integration with a variety of fine-grained classifiers. These synergistic interactions yielded appreciable improvements in accuracy, establishing a new benchmark for performance in the field. Additionally, smaller backbones equipped with our module can match or surpass larger conventional DNNs, effectively reducing model parameters and inference latency at a given accuracy level.
Our research opens up several promising avenues for future exploration. First, further investigation can delve into optimizing the architecture and hyperparameters of the integrated GNN-DNN model for different fine-grained classification tasks. Additionally, exploring different graph construction strategies and graph neural network architectures may yield insights into improving model performance. Moreover, the application of this integrated approach to other computer vision tasks and datasets warrants exploration, as it has the potential to enhance various aspects of visual recognition.
REFERENCES
Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs, 2014.
Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pp. 257–266, New York, NY, USA, 2019. Association for Computing Machinery. doi: 10.1145/3292500.3330925. URL https://doi.org/10.1145/3292500.3330925.
Po-Yung Chou, Cheng-Hung Lin, and Wen-Chung Kao. A novel plug-in module for fine-grained visual classification, 2022.
Po-Yung Chou, Yu-Yung Kao, and Cheng-Hung Lin. Fine-grained visual classification with high-temperature refinement and background suppression, 2023.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
Andrea Gesmundo. A continual development methodology for large-scale multitask dynamic ml systems, 2022.
William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 1024–1034, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html.
G. Van Horn, S. Branson, R. Farrell, S. Haber, J. Barry, P. Ipeirotis, P. Perona, and S. Belongie. Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 595–604, Los Alamitos, CA, USA, June 2015a. IEEE Computer Society. doi: 10.1109/CVPR.2015.7298658. URL https://doi.org/10.1109/CVPR.2015.7298658.
G. Van Horn, S. Branson, R. Farrell, S. Haber, J. Barry, P. Ipeirotis, P. Perona, and S. Belongie. The inaturalist species classification and detection dataset. In CVPR, pp. 8769–8778, 2017.
Grant Van Horn, Elijah Cole, Sara Beery, Kimberly Wilber, Serge Belongie, and Oisin Mac Aodha. Benchmarking representation learning for natural world image collections. In CVPR, 2015b.
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications, 2017. URL http://arxiv.org/abs/1704.04861.
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269, 2017. doi: 10.1109/CVPR.2017.243.
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization (FGVC), 2011.
Sangwon Kim, Jaeyeal Nam, and Byoung Chul Ko. ViT-NeT: Interpretable vision transformers with neural tree decoder. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 11162–11172. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/kim22g.html.
|
gLZeEpfVjy
|
Theorem 4.10 shows that the error bound of Theorem 4.5 is at least as strong as the full-domain generalization bound without the sub-domain information. However, the authors may overlook the finiteness of real datasets, which also matters for a reliable generalization bound and may therefore lead to a different conclusion. Since each sub-domain has fewer samples than the whole domain, finite-sample effects inflate the sub-domain generalization bound more than the whole-domain generalization bound.
|
Understanding and Robustifying Sub-domain Alignment for Domain Adaptation
Anonymous authors
Paper under double-blind review
Abstract
In unsupervised domain adaptation (UDA), aligning source and target domains improves the predictive performance of learned models on the target domain. A common methodological improvement in alignment methods is to divide the domains and align sub-domains instead. These sub-domain-based algorithms have demonstrated great empirical success but lack theoretical support. In this work, we establish a rigorous theoretical understanding of the advantages of these methods that has the potential to enhance their overall impact on the field. Our theory uncovers that sub-domain-based methods optimize an error bound that is at least as strong as non-sub-domain-based error bounds and is empirically verified to be much stronger. Furthermore, our analysis indicates that when the marginal weights of sub-domains shift between source and target tasks, the performance of these methods may be compromised. We therefore implement an algorithm to robustify sub-domain alignment for domain adaptation under sub-domain shift, offering a valuable adaptation strategy for future sub-domain-based methods. Empirical experiments across various benchmarks validate our theoretical insights, demonstrate the necessity of the proposed adaptation strategy, and show the algorithm's competitiveness in handling label shift.
1 INTRODUCTION
Supervised deep learning has achieved unprecedented success in a wide range of real-world applications. However, obtaining labeled data may be costly, labor-intensive, and/or time-consuming in certain applications, particularly in medical and biological domains (Lu et al., 2017; Li et al., 2020). To this end, unsupervised domain adaptation (UDA) transfers knowledge from a labeled source domain to a different but related unlabeled target domain (Farahani et al., 2021). However, efficient UDA is challenging due to the statistical discrepancies between the two domains, hereafter referred to as domain shift (Wang & Deng, 2018; Sankaranarayanan et al., 2018; Deng et al., 2019). To address this challenge, much of the UDA research has focused on reducing the distributional gap between the source and target domains (Shen et al., 2018; Liu et al., 2016; Isola et al., 2017; Tzeng et al., 2015; 2017; 2020; Ganin & Lempitsky, 2015; Ganin et al., 2016; Peng et al., 2018). Recent methods further partition the data into sub-domains and align the sub-domains instead (Pinheiro, 2018; Long et al., 2018; Deng et al., 2019). One straightforward definition of the sub-domains is the conditional distributions based on the classification label. Other strategies for defining sub-domains include cross-domain adaptive clustering (Li et al., 2021b), classifier-based backprop-induced weighting (Westfechtel et al., 2023), domain consensus clustering (Li et al., 2021a), joint learning of domain-invariant features and classifiers (Shi & Sha, 2012), and the use of deep clustering (Gao et al., 2020). These sub-domain-based algorithms have shown substantial empirical success. However, the benefits of sub-domain alignments have not been rigorously justified.
In this work, we present a theoretical analysis to establish that the sub-domain based methods are in fact optimizing a generalization bound that is at least as strong as (and empirically much stronger than) the full-domain-based objective functions. Our analysis further reveals that when the marginal weights of the sub-domains shift between source and target, the sub-domain based methods can fail. We then present a novel UDA algorithm, Domain Adaptation via Rebalanced Sub-domain Alignment (DARSA), that is motivated by our analysis and addresses the case when marginal sub-domain weights shift. DARSA optimizes reweighted classification error and discrepancy between sub-domains of the source and target tasks. The reweighting scheme follows a simple intuition:
important sub-domains in the target domain need more attention. To illustrate the concept visually, Figure 1 highlights the strengths of sub-domain alignment, providing insight into how our method operates and the benefits it brings. The contribution of our work is two-fold:
• **Theoretical Contribution:** Our work analyzes and provides a theoretical foundation for sub-domain based methods in domain adaptation, addressing their previous lack of rigorous understanding. Our theoretical framework not only supports our algorithm but can be extended to other methods, contributing to broader impact and value in the field.
• **Algorithmic Contribution:** Our theoretical analysis leads to our algorithm DARSA. DARSA addresses shifted marginal sub-domain weights, which adversely impact existing sub-domain-based methods. We empirically verify its competitive performance under label shifting on various benchmarks, confirming our theoretical insights and validating the proposed adaptation strategy.
2 RELATED WORK
We review the most relevant work below and provide a comprehensive discussion in Appendix A.
**Sub-domain-based Domain Adaptation.** Sub-domain alignment has been proven effective in aligning multi-modal distributions and enhancing performance across various tasks (Deng et al., 2019; Long et al., 2018; Pinheiro, 2018; Shi & Sha, 2012; Jiang et al., 2020; Snell et al., 2017). While these methods have demonstrated empirical success, a detailed theoretical perspective on the benefits of incorporating sub-domain structures has yet to be fully explored. Our work complements these existing methodologies by providing a comprehensive theoretical understanding of their inherent advantages. Our theory has the potential to further enhance their overall impact on the field.
**Discrepancy-based Domain Adaptation.** UDA commonly tries to reduce the distribution gap between the source and target domains. One approach to achieve this is discrepancy-based methods in the feature space (Tzeng et al., 2014; Long et al., 2015; Sun et al., 2016), which often use maximum
mean discrepancy (MMD) (Borgwardt et al., 2006). While MMD is a well-known Reproducing Kernel Hilbert Space (RKHS) metric, it is weaker than the Wasserstein-1 distance (Lu et al., 2020). Therefore, we use the Wasserstein-1 distance in our work.
**Theoretical Analysis of Domain Adaptation.** Many existing domain adaptation methods are inspired by generalization bounds based on the $\mathcal{H}$-divergence (Ben-David et al., 2006), a modified version of the total variation distance that restricts the hypothesis to a given class. These generalization bounds can be estimated by learning a domain classifier with a finite Vapnik–Chervonenkis (VC) dimension. However, this results in a loose bound for most neural networks (Li et al., 2018). In this work, we use the Wasserstein distance for two reasons. First, the Wasserstein-1 distance is upper bounded by the total variation distance (Ben-David et al., 2010), leading to stronger generalization bounds. Additionally, the Wasserstein-1 distance is bounded above by the Kullback-Leibler divergence (a special case of the Rényi divergence as $\alpha$ goes to 1) (Fournier & Guillin, 2015), giving stronger bounds than those presented by Redko et al. (2017) and Mansour et al. (2012). Finally, the Wasserstein distance has stable gradients even when the compared distributions are far apart (Gulrajani et al., 2017).
3 PRELIMINARIES
Assume a labeled source dataset $\{(x^i_S, y^i_S)\}_{i=1}^{N_S}$ from a source domain $X_S$ with distribution $P_S$ and an unlabeled target dataset $\{x^i_T\}_{i=1}^{N_T}$ from a target domain $X_T$ with distribution $P_T$. The source dataset has $N_S$ labeled samples and the target dataset has $N_T$ unlabeled samples. We assume that the samples $x^i_S \in X \subseteq \mathbb{R}^d$ and $x^i_T \in X \subseteq \mathbb{R}^d$ are independently drawn from $P_S$ and $P_T$, respectively. The goal is to learn a classifier $f(x)$ that predicts labels $\{y^i_T\}_{i=1}^{N_T}$ for the target dataset. We further assume that $P_S$ and $P_T$ are probability densities of Borel probability measures in the Wasserstein space $\mathcal{P}_1(\mathbb{R}^d)$, i.e., the space of probability measures with finite first moment.
**Sub-domains.** We assume that both $X_S$ and $X_T$ are mixtures of $K$ sub-domains. In other words, we have $P_S = \sum_{k=1}^{K} w^k_S P^k_S$ and $P_T = \sum_{k=1}^{K} w^k_T P^k_T$ where we use $P^k_S$ and $P^k_T$ to respectively represent the distribution of the $k$-th sub-domain of the source domain and that of the target domain, and $w^k_S/w^k_T$ correspond to the weights of each sub-domain. Note that $w_S = [w^1_S, \ldots, w^K_S]$ and $w_T = [w^1_T, \ldots, w^K_T]$ belong to $\Delta_K$ (the $K - 1$ probability simplex). It is straightforward to define sub-domains as conditional distributions, such that the $k$-th sub-domain is represented as $P^k_S = P(X_S|Y_S = k)$ and $P^k_T = P(X_T|Y_T = k)$, where $Y_S$ and $Y_T$ are the source and target labels, respectively. However, we note that the framework presented in this work is applicable across various sub-domain methods.
**Probabilistic Classifier Discrepancy.** For a distribution $\mathcal{D}$, we define the discrepancy between two functions $f$ and $g$ as:
$$\gamma_{\mathcal{D}}(f, g) = \mathbb{E}_{x \sim \mathcal{D}}[|f(x) - g(x)|].$$
We use $g_T$ and $g_S$ to represent the true labeling functions of the target and source domains, respectively. We use $\gamma_S(f) = \gamma_{P_S}(f, g_S)$ and $\gamma_T(f) = \gamma_{P_T}(f, g_T)$ to respectively denote the discrepancies of a hypothesis $f$ to the true labeling function for the source and target domains.
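For illustration, this discrepancy can be estimated from samples by Monte Carlo; a short sketch (the toy $f$, $g$, and sample distribution are ours, purely for demonstration):

```python
import torch

def discrepancy(f, g, x):
    """Monte Carlo estimate of gamma_D(f, g) = E_{x~D} |f(x) - g(x)|
    from samples x drawn from D."""
    return (f(x) - g(x)).abs().mean()

f = lambda x: torch.sigmoid(x.sum(dim=1))     # a toy hypothesis
g = lambda x: (x.sum(dim=1) > 0).float()      # a toy labeling function
x = torch.randn(10_000, 5)                    # samples standing in for D
print(discrepancy(f, g, x))
```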
**Wasserstein Distance.** The Kantorovich-Rubinstein dual representation of the Wasserstein-1 distance (Villani, 2009) between two distributions $P_S$ and $P_T$ is defined as
$$W_1(P_S, P_T) = \sup_{||f||_L \leq 1} \mathbb{E}_{x \sim P_S}[f(x)] - \mathbb{E}_{x \sim P_T}[f(x)],$$
where the supremum is over the set of 1-Lipschitz functions (all Lipschitz functions $f$ with Lipschitz constant $L \leq 1$). For notational simplicity, we use $D(X_1, X_2)$ to denote a distance between the distributions of any pair of random variables $X_1$ and $X_2$. For instance, $W_1(\Phi(X_S), \Phi(X_T))$ denotes the Wasserstein-1 distance between the distributions of the random variables $\Phi(X_S)$ and $\Phi(X_T)$ for any transformation $\Phi$.
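As an illustration, the dual form above can be estimated from samples by training a critic network, WGAN-style. Below is a minimal sketch; the architecture, clipping constant, and optimizer settings are our assumptions, and weight clipping only approximates the 1-Lipschitz constraint.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

xs = torch.randn(512, 2)          # stand-in samples from P_S
xt = torch.randn(512, 2) + 1.0    # stand-in samples from P_T

for _ in range(300):
    # maximize E_{P_S}[f] - E_{P_T}[f]  (i.e., minimize its negative)
    loss = -(critic(xs).mean() - critic(xt).mean())
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():         # crude Lipschitz control via weight clipping
        for p in critic.parameters():
            p.clamp_(-0.1, 0.1)

w1_estimate = (critic(xs).mean() - critic(xt).mean()).item()
```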
4 UNDERSTANDING SUB-DOMAIN-BASED METHODS
We now present our theoretical analysis of sub-domain-based methods, with all proofs deferred to Appendix B. We first present a generalization bound for domain adaptation that is closely
related to existing work, and then establish a novel generalization bound for sub-domain-based methods, aligning with the objectives used by these existing methods. Furthermore, we demonstrate that the sub-domain-based generalization bound is at least as strong as the non-sub-domain-based generalization bound, which establishes a rigorous theoretical understanding of the advantages of these methods. Our analysis also uncovers that when the marginal weights of sub-domains shift between the source and the target task, sub-domain methods can potentially fail.
4.1 GENERALIZATION BOUNDS FOR DOMAIN ADAPTATION
Before presenting our novel theoretical results on sub-domain-based domain adaptation, we first present an upper bound closely related to Ben-David et al. (2010) and to Theorem A.8 of Li et al. (2018). It is worth noting that we use the Wasserstein-1 distance in our analysis, as it provides a stronger bound (Redko et al., 2017) than the total variation distance employed by Ben-David et al. (2010).
**Theorem 4.1** (Full Domain Generalization Bound). For a hypothesis \( f : \mathcal{X} \to [0, 1] \),
\[
\gamma_T(f) \leq \gamma_S(f) + (\lambda + \lambda_H)W_1(P_S, P_T) + \gamma^*,
\]
where \( \gamma^* = \min_{f \in \mathcal{H}} \gamma_S(f) + \gamma_T(f) \), \( \mathcal{H} \) is a hypothesis class included in the set of \( \lambda_H \)-Lipschitz functions, and the true functions \( g_T \) and \( g_S \) are both \( \lambda \)-Lipschitz functions (as defined in Appendix B.1).
**Remark 4.2.** The upper bound in Theorem 4.1 consists of three components: (i) \( \gamma_S(f) \) is the performance of the hypothesis on the source domain, (ii) \( W_1(P_S, P_T) \) is the distance between the source and the target domains, and (iii) \( \gamma^* \) is a constant related to the difference between the source and the target problems that cannot be addressed by domain adaptation. For succinctness and clarity of the following analysis, we assume without loss of generality that \( \lambda + \lambda_H \leq 1 \), simplifying the bound to
\[
\gamma_T(f) \leq \gamma_S(f) + W_1(P_S, P_T) + \gamma^*.
\]
Numerous works attempt to solve the domain adaptation problem by designing algorithms that minimize similar generalization bounds to the one in equation 2, e.g., Theorem 1 in Ben-David et al. (2010). These approaches consist of two components: (i) a mapping \( \Phi : \mathcal{X} \to \mathcal{H} \) that transforms the original problem by embedding \( X_S \) and \( X_T \) into a shared hidden space \( \mathcal{H} \), and (ii) a hypothesis \( h : \mathcal{H} \to [0, 1] \) for prediction. Since \( \gamma_T(h \circ \Phi) = \gamma_{\Phi(X_T)}(h) \), with Theorem 4.1 we have a generalization bound of the function \( h \circ \Phi : \mathcal{X} \to [0, 1] \) on the original target problem:
\[
\gamma_T(h \circ \Phi) = \gamma_{\Phi(X_T)}(h) \leq \gamma_{\Phi(X_S)}(h) + W_1(\Phi(X_S), \Phi(X_T)) + \gamma^*_\Phi.
\]
If the distance between \( \Phi(X_S) \) and \( \Phi(X_T) \), i.e., \( W_1(\Phi(X_S), \Phi(X_T)) \), is close and the classification error of \( h \) on the transformed source problem, i.e., \( \gamma_{\Phi(X_S)}(h) \), remains low, then the performance of the hypothesis \( h \circ \Phi \) on the original target problem can be guaranteed. This motivation has led to a variety of domain adaptation frameworks with objectives of the following format:
\[
\min_{\Phi : \mathcal{X} \to \mathcal{H}, h : \mathcal{H} \to [0, 1]} \gamma_{\Phi(X_S)}(h) + \alpha D(\Phi(X_S), \Phi(X_T)),
\]
where \( \gamma_{\Phi(X_S)}(h) \) is the classification error of \( h \) on the transformed source problem, \( D \) is a distance between distributions, and \( \alpha \) is a balancing weight. In this work, we use the Wasserstein-1 distance.
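A hedged sketch of one training step for this type of objective follows; the toy networks and hyperparameters are ours, and in practice the critic would itself be trained adversarially, as in the earlier Wasserstein sketch, to supply the W1 surrogate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

phi = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))  # embedding
h = nn.Linear(16, 2)                  # hypothesis on the shared space
critic = nn.Linear(16, 1)             # Kantorovich critic (kept fixed here)
opt = torch.optim.Adam(list(phi.parameters()) + list(h.parameters()), lr=1e-3)
alpha = 0.1

xs, ys = torch.randn(64, 20), torch.randint(0, 2, (64,))   # labeled source
xt = torch.randn(64, 20)                                   # unlabeled target

zs, zt = phi(xs), phi(xt)
w1_surrogate = critic(zs).mean() - critic(zt).mean()       # plug-in W1 term
loss = F.cross_entropy(h(zs), ys) + alpha * w1_surrogate
opt.zero_grad(); loss.backward(); opt.step()
```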
4.2 ANALYSIS OF SUB-DOMAIN-BASED METHODS
We first present several results that will be used to build the main theorem. These results themselves may be of interest. First of all, Theorem 4.1 directly leads to the following proposition:
**Proposition 4.3** (Individual Sub-domain Generalization Bound). For \( k \in \{1, \ldots, K\} \), where \( K \) represents the total number of distinct sub-domains, for sub-domain \( X^k_S \) with distribution \( P^k_S \) and \( X^k_T \) with distribution \( P^k_T \), it holds for any \( f \in \mathcal{H} \) that
\[
\gamma^k_T(f) \leq \gamma^k_S(f) + W_1(P^k_S, P^k_T) + (\gamma^k)^*,
\]
where \( (\gamma^k)^* = \min_{f \in \mathcal{H}} \gamma^k_S(f) + \gamma^k_T(f) \), \( \mathcal{H} \) is a hypothesis class included in the set of \( \lambda_H \)-Lipschitz functions, the true functions \( g_T \) and \( g_S \) are both \( \lambda \)-Lipschitz functions, and \( \lambda + \lambda_H \leq 1 \).
The second result below shows that the classification error of any hypothesis \( f \) on a domain can be decomposed into a weighted sum of the classification errors of \( f \) on its sub-domains.
Lemma 4.4 (Decomposition of the Classification Error). For any hypothesis \( f \in \mathcal{H} \),
\[
\gamma_S(f) = \sum_{k=1}^{K} w^k_S \gamma^k_S(f), \quad \gamma_T(f) = \sum_{k=1}^{K} w^k_T \gamma^k_T(f).
\]
(6)
With the above results, we present a generalization bound with sub-domain information:
Theorem 4.5 (Sub-domain-based Generalization Bound).
\[
\gamma_T(f) \leq \sum_{k=1}^{K} w^k_T \gamma^k_S(f) + \sum_{k=1}^{K} w^k_T W_1(P^k_S, P^k_T) + \sum_{k=1}^{K} w^k_T (\gamma^k)^*.
\]
(7)
In particular, in a balanced domain adaptation setting where for all \( k \), \( w^k_S = w^k_T \), we have that
\[
\gamma_T(f) \leq \gamma_S(f) + \sum_{k=1}^{K} w^k_S W_1(P^k_S, P^k_T) + \sum_{k=1}^{K} w^k_S (\gamma^k)^*.
\]
(8)
Remark 4.6. Note that the format of the RHS of equation (8) is reminiscent of the objectives used by the majority of sub-domain-based methods. Equation (7) follows by decomposing \( \gamma_T(f) \) with Lemma 4.4 and bounding each sub-domain error \( \gamma^k_T(f) \) via Proposition 4.3.
We next show that, under reasonable assumptions, the weighted sum of distances between corresponding sub-domains of the source and target domains is at most as large as the distance between the marginal distribution of the source domain and that of the target domain.
Theorem 4.7 (Benefits of Sub-domain Alignment). Under the following assumptions:
A1. For all \( k \), \( P^k_S \) and \( P^k_T \) are Gaussian distributions with means \( m^k_S \) and \( m^k_T \) and covariances \( \Sigma^k_S \) and \( \Sigma^k_T \), respectively.
A2. The distance between paired source-target sub-domains is less than or equal to the distance between non-paired source-target sub-domains, i.e., \( W_1(P^k_S, P^k_T) \leq W_1(P^k_S, P^{k'}_T) \) for \( k \neq k' \).
A3. There exists a small constant \( \epsilon > 0 \), such that \( \max_{1 \leq k \leq K} (\text{tr}(\Sigma^k_S)) \leq \epsilon \) and \( \max_{1 \leq k \leq K} (\text{tr}(\Sigma^k_T)) \leq \epsilon \).
Then the following inequality holds:
\[
\sum_{k=1}^{K} w^k_T W_1(P^k_S, P^k_T) \leq W_1(P_S, P_T) + \delta_c,
\]
(9)
where \( \delta_c = 4\sqrt{\epsilon} \). In particular, when \( w^k_S = w^k_T \) for all \( k \),
\[
\sum_{k=1}^{K} w^k_S W_1(P^k_S, P^k_T) \leq W_1(P_S, P_T) + \delta_c.
\]
(10)
Remark 4.8. In Appendix C, we provide empirical evidence to verify that these assumptions are satisfied on real-world datasets. We note that the assumption of a Gaussian distribution for \( X^k \) is not unreasonable since it is often the result of a complex transformation, \( \Phi \), and the Central Limit Theorem indicates that the outcome of such a transformation is approximately normally distributed under regularity assumptions (please see Appendix C.1 for empirical evidence).
Remark 4.9. \( \delta_c \) is a constant dependent only on the variance of the features but not the covariance between features in different dimensions. Moreover, the inequality holds empirically without \( \delta_c \) as demonstrated in Figure 3 as well as Figure 7 and Figure 8 in Appendix G.2.
Theorem 4.7 shows that the objective function of sub-domain methods is at least as strong as the objective function of domain alignment methods, explaining their improved performance. However, if the marginal weights of the sub-domains shift, i.e., \( w^k_S \neq w^k_T \), the inequality in equation (10) is unlikely to hold and the framework can collapse. One such example is the scenario of shifted label distributions, where \( w^k_T \) and \( w^k_S \) (class weights for the target and source domains) can be vastly different. To overcome this, we propose to minimize an objective with the simple intuition that important sub-domains in the target domain need more attention. With this motivation, we propose the following objective function for UDA with shifted label distribution:
\[
L(f) = \sum_{k=1}^{K} w^k_T \gamma^k_S(f).
\]
(11)
In other words, \( L \) reweights the losses of sub-domains so that the sub-domain with more weight in the target domain can be emphasized more. We next prove that through the proposed approach, we can again obtain a sub-domain-based generalization bound that is at least as strong as the full domain generalization bound without the sub-domain information.
Theorem 4.10. Let \( \mathcal{H} = \{ f | f : \mathcal{X} \rightarrow [0, 1] \} \) denote a hypothesis space. Under the assumptions in Theorem 4.7, for any \( f \in \mathcal{H} \) such that:
\[
\sum_{k=1}^{K} w^k_T \gamma^k_S(f) \leq \sum_{k=1}^{K} w^k_S \gamma^k_S(f),
\]
(12)
we have \( \sum_{k=1}^{K} w_T^k (\gamma^k)^* \leq \gamma^* \). Further, let
\[
\epsilon_c(f) = \sum_{k=1}^{K} w_T^k \gamma_S^k(f) + \sum_{k=1}^{K} w_T^k W_1(P_S^k, P_T^k) + \sum_{k=1}^{K} w_T^k (\gamma^k)^*
\]
denote the sub-domain-based generalization bound and let
\[
\epsilon_g(f) = \gamma_S(f) + W_1(P_S, P_T) + \gamma^*
\]
denote the generalization bound without any sub-domain information, we have,
\[
\epsilon_c(f) \leq \epsilon_g(f) + \delta_c.
\]
**Remark 4.11.** In Section 6.1 and Appendix G.2, we provide extensive empirical evidence to establish that equation (12) can easily hold, as its left-hand side is the optimization objective. Moreover, in these sections, we offer empirical evidence to further verify the value of this theoretical result by showing that our proposed bound is empirically much stronger than the existing one.
Inspired by our analysis, we propose a framework, Domain Adaptation with Rebalanced Sub-domain Alignment (DARSA), for imbalanced UDA, a special case of the sub-domain weight shifting scenario where the class weights of the target domain shifts from that of the source domain.
## 5 METHODS
In DARSA, we divide the source domains into sub-domains based on class labels, and divide target domains into sub-domains using predicted class labels (serving as pseudo labels, which have shown success in previous research (Deng et al., 2019; Lee et al., 2013)) for unlabeled target domains. Motivated by Theorem 4.10, the framework of DARSA, shown in Figure 2, is composed of a source encoder \( f_E^S \) parameterized by \( \theta_E^S \), a target encoder \( f_E^T \) parameterized by \( \theta_E^T \), and a classifier \( f_Y \) parameterized by \( \theta_Y \). The pseudo-code for DARSA can be found in Appendix D.
The objective function of DARSA is defined as follows:
\[
\min_{\theta_Y, \theta_E^S, \theta_E^T} \lambda_Y L_Y + \lambda_D L_D + L_C,
\]
where \( L_Y, L_D, L_C \) are losses described below with relative weights given by \( \lambda_Y \) and \( \lambda_D \).
**Weighted source domain classification error \( L_Y \).** The weighted source domain classification error in Theorem 4.10 can be further expressed as:
\[
\sum_{k=1}^{K} w_T^k \gamma_S^k(f) = \sum_{k=1}^{K} w_T^k \int P_S(x|c=k)|f(x) - g_S(x)|dx
\]
\[
= \sum_{k=1}^{K} w_T^k \int \frac{P_S(c=k|x)P_S(x)}{P_S(c=k)} |f(x) - g_S(x)|\,dx = \sum_{k=1}^{K} \frac{w_T^k}{w_S^k} \mathbb{E}_{x \sim D_S} \left[ w_S^k(x) |f(x) - g_S(x)| \right],
\]
where variable \( c \) represents class, \( w_T^k = P_T(c=k), w_S^k = P_S(c=k), w_S^k(x) = P_S(c=k|x) \). We set \( P_S(c=k|x) = 1 \) only when data point \( x \) is in class \( k \), otherwise \( P_S(c=k|x) = 0 \). \( w_S^k \) can be set to the marginal source label distribution, and \( w_T^k \) can be estimated from the target predictions.
From equation (14), \( L_Y(\theta_Y, \theta_E^S) \) is defined as:
\[
L_Y(\theta_Y, \theta_E^S) = \frac{1}{N_S} \sum_{x_i \in X_S} \sum_{k=1}^{K} 1_{y_i = k} \frac{w_T^k}{w_S^k} \ell(\hat{y}_i, y_i),
\]
where \( \hat{y}_i = f_Y(f_E^S(x_i)) \) is the predicted label and \( \ell \) can be any non-negative loss function (e.g., cross-entropy loss for classification tasks).
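As an illustration, the reweighted loss and a simple pseudo-label-based estimate of \( w_T^k \) might be implemented as follows (a sketch with our own function names, not the released code):

```python
import torch
import torch.nn.functional as F

def weighted_source_loss(logits, y_s, w_t, w_s):
    """L_Y: source cross-entropy with each sample reweighted by w_T^{y_i} / w_S^{y_i}."""
    per_sample = F.cross_entropy(logits, y_s, reduction="none")
    ratio = (w_t / w_s)[y_s]              # pick the class-weight ratio per sample
    return (ratio * per_sample).mean()

def estimate_target_weights(target_logits, num_classes):
    """Estimate w_T^k from the frequencies of target pseudo-labels."""
    preds = target_logits.argmax(dim=1)
    counts = torch.bincount(preds, minlength=num_classes).float()
    return counts / counts.sum()
```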
**Weighted source-target subdomain discrepancy \( L_D \).** The weighted source-target domain discrepancy in Theorem 4.10 can be further expressed as:
\[
L_D(\theta_E^S, \theta_E^T, \theta_Y) = \sum_{k=1}^{K} w_T^k W_1(P_S^k, P_T^k) = \sum_{k=1}^{K} w_T^k W_1(f_E^S(x_S^k), f_E^T(x_T^k)),
\]
where \( x_S^k \) are source samples with labels \( y_S = k \), and \( x_T^k \) are target samples with predicted labels \( \hat{y}_T = k \). We leverage the Sinkhorn algorithm (Cuturi, 2013) to approximate the Wasserstein metric.
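A self-contained PyTorch sketch of the entropic Sinkhorn approximation (Cuturi, 2013) is shown below; the regularization strength and iteration count are illustrative hyperparameters. The weighted discrepancy \( L_D \) is then obtained by summing such terms over sub-domains, e.g., `sum(w_t[k] * sinkhorn_w1(z_s[y_s == k], z_t[y_hat == k]) for k in range(K))`.

```python
import torch

def sinkhorn_w1(zs, zt, eps=0.05, n_iters=100):
    """Entropic-regularized approximation of W_1 between two empirical
    distributions (Cuturi, 2013); zs: (n, d) and zt: (m, d) embeddings."""
    n, m = zs.shape[0], zt.shape[0]
    a = torch.full((n,), 1.0 / n, device=zs.device)  # uniform source marginal
    b = torch.full((m,), 1.0 / m, device=zs.device)  # uniform target marginal
    M = torch.cdist(zs, zt, p=2)                     # ground cost: Euclidean distance
    K = torch.exp(-M / eps)                          # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                         # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u + 1e-16)
        u = a / (K @ v + 1e-16)
    P = u[:, None] * K * v[None, :]                  # (approximate) transport plan
    return (P * M).sum()                             # transport cost ~ W_1
```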
Figure 2: The DARSA framework. Orange lines represent the clustering loss \( L_C \), green lines indicate the domain discrepancy loss \( L_D \), and purple lines indicate the source classification loss \( L_Y \).
**Clustering loss \( L_C \).** The clustering loss \( L_C = \lambda_c L_{\text{intra}} + \lambda_a L_{\text{inter}} \) comprises two components: the intra-clustering loss, \( L_{\text{intra}} \), and the inter-clustering loss, \( L_{\text{inter}} \). The role of \( L_{\text{intra}} \) is to satisfy assumption A3 in Theorem 4.7. It encourages embeddings of the same label to cluster tightly together, while also pushing embeddings of different labels to separate by at least a user-specified distance, \( m \) (Luo et al., 2018). The inter-clustering loss \( L_{\text{inter}} \) further enhances sub-domain alignment by aligning the centroids of source sub-domains with those of their corresponding target sub-domains in the representation space. We define \( L_{\text{intra}} \) and \( L_{\text{inter}} \) as follows:
\[
L_{\text{intra}}(\theta_E^S, \theta_E^T, \theta_Y) = L_{\text{intra}}(f_E^S(X_S)) + L_{\text{intra}}(f_E^T(X_T)),
\]
\[
L_{\text{intra}}(f_E(X)) = \frac{1}{N^2} \sum_{i,j=1}^{N} \left[ \delta_{ij} D_{ij} + (1 - \delta_{ij}) \max(0, m - D_{ij}) \right];
\]
\[
L_{\text{inter}}(\theta_E^S, \theta_E^T, \theta_Y) = \frac{1}{K} \sum_{k=1}^{K} \| C(f_E^S(x_S^k)) - C(f_E^T(x_T^k)) \|^2,
\]
where \( N \) represents the number of samples in the domain \( X \), \( C(\cdot) \) computes the centroid of a sub-domain, and \( \delta_{ij} = 1 \) only if \( x_i \) and \( x_j \) have the same label; otherwise, \( \delta_{ij} = 0 \). We use the ground-truth label if \( x \) is in the source domain and the predicted label if it is in the target domain. \( m \) is a pre-defined distance controlling how separated each sub-domain should be, and \( D_{ij} = \| f_E(x_i) - f_E(x_j) \|^2 \) is the distance between \( x_i \) and \( x_j \) in the representation space.
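Both clustering terms translate directly into a few lines of PyTorch; the sketch below uses our own function names and assumes that every class appears in the batch.

```python
import torch

def intra_loss(z, labels, m=1.0):
    """L_intra: pull same-label embeddings together; push different-label
    embeddings to be at least m apart (squared Euclidean distances D_ij)."""
    D = torch.cdist(z, z, p=2) ** 2                         # D_ij = ||z_i - z_j||^2
    same = (labels[:, None] == labels[None, :]).float()     # delta_ij
    loss = same * D + (1.0 - same) * torch.clamp(m - D, min=0.0)
    return loss.mean()                                      # (1/N^2) sum_ij [...]

def inter_loss(z_s, y_s, z_t, y_t_hat, num_classes):
    """L_inter: align source and target sub-domain centroids in embedding space.
    Assumes each class k is present in both batches."""
    total = z_s.new_zeros(())
    for k in range(num_classes):
        c_s = z_s[y_s == k].mean(dim=0)        # source centroid of class k
        c_t = z_t[y_t_hat == k].mean(dim=0)    # target centroid (pseudo-labels)
        total = total + ((c_s - c_t) ** 2).sum()
    return total / num_classes
```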
6 EXPERIMENTS
In this section, we verify our theoretical results and assess DARSA's efficacy through real-world experiments. We begin by empirically confirming the superiority of the sub-domain-based generalization bound (Theorem 4.10) in Section 6.1. Then, we verify that the assumptions for Theorem 4.10 are empirically satisfied on real-world datasets (details in Appendix C). Next, we demonstrate the vital role of sub-domain weight re-balancing in Section 6.2, and show DARSA's robustness to minor weight estimation discrepancies. Lastly, given that our theoretical analysis guarantees that DARSA should have competitive performance in scenarios where the number of classes is not overwhelming, we evaluate DARSA on real-world datasets with this property. By comparing against other state-of-the-art UDA baselines, we verify the correctness of our analysis and show that DARSA's strong performance can be guaranteed in real-world applications such as those in medical and operations research. We base the following confirmatory experiments on two sets of datasets.
Experiments on the Digits Datasets. In our Digits datasets experiments, we evaluate our performance across four datasets: MNIST (M) (LeCun et al., 1998), MNIST-M (MM) (Ganin et al., 2016), USPS (U), and SVHN (S), all modified to induce label distribution shifts. Here, the parameter \( \alpha \) denotes the class imbalance rate, representing a ratio such as \( 1:\alpha \) and \( \alpha:1 \) for the odd:even distribution in the source and target datasets, respectively. Weak and strong imbalance correspond to \( \alpha = 3 \) and \( \alpha = 8 \). For comprehensive details, refer to Appendix G.
Experiments on the TST Dataset. We use the Tail Suspension Test (TST) dataset (Gallagher et al., 2017) of local field potentials (LFPs) from 26 mice with two genetic backgrounds: Clock-\(\Delta\)19 (a bipolar disorder model) and wildtype. This dataset is publicly available (Carlson et al., 2023). Our study involves two domain adaptation tasks, predicting the current condition - home cage (HC), open field (OF), or tail-suspension (TS) - from one genotype to the other. We subsample the datasets to induce label distribution shifts with imbalance rate = 2. For comprehensive details, refer to Appendix H.
1The code to replicate all experiments is available at: https://anonymous.4open.science/r/DARSA/
6.1 Empirical Analysis of our Proposed Generalization Bound
We first verify the pivotal result in Theorem 4.10 that the sub-domain-based generalization bound is at least as tight as the non-sub-domain bound. We empirically evaluate the proposed bound on the Digits datasets under weak imbalance. As shown in Figure 3, our results demonstrate that the sub-domain-based generalization bound in Theorem 4.5 is empirically much stronger than the non-sub-domain-based bound in Theorem 4.1, corroborating our insights into the effectiveness of sub-domain-based methods. Additional experiments on the other UDA tasks in the Digits datasets under weak and strong imbalance also support this claim; full results are in Appendix G.
6.2 Importance of Re-weighting
Here, we experiment on the Digits datasets under weak imbalance to demonstrate the importance of (i) weight re-balancing and (ii) the accuracy of target sub-domain weight estimation. We compare DARSA with one variation that employs uniform weights for all sub-domains and another that swaps the source and target sub-domain weight estimates. We also include two other baselines where the weights of the target domain deviate from the truth. Specifically, we compare DARSA with the following configurations:
- DARSA: Full algorithm where weights are inferred.
- DARSA Oracle: Utilizing true values of $w^k_T$.
- DARSA Small Divergence: Setting $w^k_T$ to be 20% divergent from true values.
- DARSA Large Divergence: Setting $w^k_T$ to be 50% divergent from true values.
- DARSA Flip: Swapping $w^k_T$ with $w^k_S$, effectively flipping importance weighting.
- DARSA Uniform: Assigning uniform weights for all sub-domains.
The results of these experiments are in Table 1. We verify the importance of sub-domain weight re-balancing by showing that the performance of DARSA degrades significantly without re-balancing or with wrong sub-domain weights, further corroborating the value of our insights. Additionally, while the oracle case provides the best performance, inferring the weights in the DARSA algorithm provides nearly the same quality of predictions. We also find that DARSA is robust to minor divergence in weight estimation and to varying imbalance rates.
6.3 DARSA on Real-world Datasets
We now compare DARSA with many competing algorithms on these two datasets. Full details on the experiments, the rationale for competing algorithms choices, and their settings are in Appendix G and Appendix H for the Digits and TST datasets, respectively.
Table 1: Evaluation of the importance of re-weighting on Digits datasets under weak imbalance. Performance is measured by prediction accuracy (%) on the target domain.
| Method | M → MM | MM → M | U → M | S → M |
|-------------------------|--------|--------|-------|-------|
| DARSA Oracle | 96.2 | 98.4 | 92.7 | 92.6 |
| DARSA Uniform | 67.9 | 96.6 | 75.9 | 71.7 |
| DARSA Small Divergence | 95.6 | 98.3 | 91.4 | 92.4 |
| DARSA Large Divergence | 85.0 | 98.2 | 86.1 | 85.2 |
| DARSA Flip | 55.7 | 65.7 | 57.4 | 65.7 |
| DARSA | 96.0 | **98.8** | 92.6 | 90.1 |
Table 2: Summary of UDA results on the Digits datasets with shifted label distribution, measured in terms of prediction accuracy (%) on the target domain. The left four columns report results under weak imbalance (\( \alpha = 3 \)) and the right four under strong imbalance (\( \alpha = 8 \)).
| Method | M → MM | MM → M | U → M | S → M | M → MM | MM → M | U → M | S → M |
|-------------------------|--------|--------|-------|-------|--------|--------|-------|-------|
| DANN (Ganin et al., 2016) | 63.1 | 93.0 | 59.8 | 64.9 | 61.1 | 90.2 | 49.1 | 57.3 |
| DSN (Bousmalis et al., 2016) | 62.3 | 98.4 | 59.9 | 15.2 | 57.5 | 95.3 | 30.3 | 17.8 |
| ADDA (Tzeng et al., 2017) | 88.2 | 90.7 | 44.8 | 42.4 | 47.9 | 89.4 | 45.7 | 45.3 |
| pixelDA (Bousmalis et al., 2017) | 95.0 | 96.0 | 72.0 | 68.0 | **81.0** | 95.6 | 29.2 | 60.4 |
| CDAN (Long et al., 2018) | 58.7 | 96.0 | 42.0 | 38.3 | 37.1 | 90.6 | 34.8 | 32.5 |
| WDGRL (Shen et al., 2018) | 60.4 | 93.6 | 63.9 | 64.3 | 22.3 | 91.4 | 46.7 | 52.2 |
| MCD (Saito et al., 2018) | 58.1 | 98.2 | 74.6 | 75.5 | 37.4 | **97.5** | 76.1 | 66.7 |
| CAT (Deng et al., 2019) | 54.1 | 95.4 | 81.0 | 65.8 | 48.9 | 93.8 | 61.3 | 62.2 |
| MDD (Zhang et al., 2019) | 48.7 | 97.7 | 82.3 | 62.4 | 47.6 | 93.6 | 83.2 | 64.5 |
| DRANet (Lee et al., 2021) | 95.2 | 97.8 | 86.5 | 40.2 | 63.3 | 96.1 | 54.2 | 31.3 |
| Source Only | 47.9 | 91.5 | 40.8 | 53.7 | 39.6 | 88.4 | 27.8 | 47.2 |
| DARSA | 96.0 | **98.8** | **92.6** | **90.1** | 78.8 | 97.3 | **87.9** | **83.5** |
Digits. The results in Table 2 demonstrate DARSA's competitiveness in handling label distribution shift. Additionally, DARSA performs well under varying imbalance rates (Appendix Table 5) and competes favorably in scenarios without label distribution shifts (Appendix Table 7).
TST. As demonstrated in Table 3, DARSA achieves competitive performance on this biologically relevant task. For comprehensive experimental details, refer to Appendix H.
Table 3: Summary of UDA results on the TST datasets with shifted label distribution, measured in terms of prediction accuracy (%) on the target domain.
| Method | Clock-Δ19 to Wildtype | Wildtype to Clock-Δ19 |
|-------------------------|-----------------------|-----------------------|
| DANN | 79.9 | 81.5 |
| WDGRL | 79.6 | 79.5 |
| DSN | 79.4 | 80.9 |
| ADDA | 75.1 | 72.6 |
| CAT | 77.3 | 78.6 |
| CDAN | 75.0 | 73.6 |
| Source only | 73.8 | 70.4 |
| DARSA | **86.6** | **84.8** |
Ablation. To assess the impact of each component within our objective function (Section 5), we conduct an ablation study under weak imbalance. Due to space constraints, the results of this investigation are detailed in Appendix G.6. The ablation analysis confirms that each component of our objective function contributes to the overall performance; we therefore recommend using all components for optimal results. In addition, we include feature space visualizations in Appendix E and Appendix Figure 9, which demonstrate that the learned representation of DARSA has improved separation when using all the components, supporting the effectiveness of the proposed objective function.
7 CONCLUSION
Sub-domain-based algorithms have demonstrated considerable empirical success across various applications in domain adaptation. However, a comprehensive theoretical understanding of their advantages had been elusive. This work addresses this gap and presents a substantial contribution by providing a rigorous theoretical perspective on the benefits of sub-domain-based methods, thereby potentially enhancing their overall impact in the field. Moreover, our analysis leads to an algorithm DARSA with improved robustness to the shift of sub-domain weights and label distributions.
REPRODUCIBILITY STATEMENT
Rigorous definitions and complete proofs of our theoretical analysis are included in the Appendix, with empirical evidence verifying the assumptions in Appendix C. The code to replicate all experiments is available at: https://anonymous.4open.science/r/DARSA/. Full details on the experiments, competing algorithms, and their settings are in Appendix G and Appendix H for the Digits and TST datasets, respectively. The MNIST, BSDS500, USPS, and SVHN datasets are publicly available with an open-access license. The Tail Suspension Test (TST) dataset (Gallagher et al., 2017) is available to download at https://research.repository.duke.edu/concern/datasets/zc77sr3lx?locale=en for free under a Creative Commons BY-NC Attribution-NonCommercial 4.0 International license. The experiments are conducted on a computer cluster equipped with an NVIDIA GeForce RTX 2080 Ti with a memory capacity of 11019 MiB.
REFERENCES
Isabela Albuquerque, João Monteiro, Tiago H Falk, and Ioannis Mitliagkas. Adversarial target-invariant representation learning for domain generalization. arXiv preprint arXiv:1911.00804, 8, 2019.
Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE transactions on pattern analysis and machine intelligence, 33(5):898–916, 2010.
Eytan Bakshy, Max Balandat, and Kostya Kashin. Open-sourcing ax and botorch: New ai tools for adaptive experimentation. URL https://ai.facebook.com/blog/open-sourcing-ax-and-botorch-new-ai-tools-for-adaptive-experimentation.
Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19, 2006.
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1):151–175, 2010.
Gilles Blanchard, Aniket Anand Deshmukh, Ürun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. The Journal of Machine Learning Research, 22(1):46–100, 2021.
Karsten M Borgwardt, Arthur Gretton, Malte J Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49–e57, 2006.
Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. Advances in neural information processing systems, 29, 2016.
Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3722–3731, 2017.
David Carlson, Sunil Kumar, and Kafui Dzirasa. Multi-region local field potential recordings during a tail-suspension test. Duke Research Data Repository, 2023.
Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26, 2013.
Julie Delon and Agnes Desolneux. A wasserstein-type distance in the space of gaussian mixture models. SIAM Journal on Imaging Sciences, 13(2):936–970, 2020.
Zhijie Deng, Yucen Luo, and Jun Zhu. Cluster alignment with a teacher for unsupervised domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9944–9953, 2019.
|
gzYgsZgwXa
|
Brownian motion is the erratic motion of particles suspended in a medium due to continuous collisions. Assumption 1 assumes the additive process is a Brownian motion. Please explain why this is valid and the intuition behind it.
|
Path Choice Matters for Clear Attribution in Path Methods
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu∗
Department of Automation, Tsinghua University, China
{zhang-br21, zhengwz18}@mails.tsinghua.edu.cn; {jzhou, lujiwen}@tsinghua.edu.cn
Abstract
Rigorousness and clarity are both essential for interpretations of DNNs to engender human trust. Path methods are commonly employed to generate rigorous attributions that satisfy three axioms. However, the meaning of attributions remains ambiguous due to distinct path choices. To address the ambiguity, we introduce Concentration Principle, which centrally allocates high attributions to indispensable features, thereby endowing attributions with aesthetics and sparsity. We then present SAMP, a model-agnostic interpreter, which efficiently searches the near-optimal path from a pre-defined set of manipulation paths. Moreover, we propose the infinitesimal constraint (IC) and momentum strategy (MS) to improve rigorousness and optimality. Visualizations show that SAMP can precisely reveal DNNs by pinpointing salient image pixels. We also perform quantitative experiments and observe that our method significantly outperforms the counterparts.
1 Introduction
The lack of transparency in deep neural networks (DNNs) hinders our understanding of how these complex models make decisions (Bodria et al., 2021; Zhang & Zhu, 2018; Gilpin et al., 2018), which poses significant risks in safety-critical applications like autonomous driving and healthcare. Numerous interpretation methods (Zeiler & Fergus, 2014; Bach et al., 2015; Zhou et al., 2016; Selvaraju et al., 2017) have been proposed to shed light on the underlying behavior of DNNs. These methods attribute model outputs to specific input features to reveal the contributions. In this way, attribution methods serve as valuable debugging tools for identifying model or data mistakes. However, despite these efforts, users often lack confidence in attributions, which can be blamed on a lack of rigorousness and clarity in current methods. Attributions are influenced by three types of artifacts (Sundararajan et al., 2017), namely data artifacts, model mistakes, and interpretation faults. To enhance user trust, it is crucial to eliminate the impact of the last factor.
One way to enhance the reliability of interpretations is ensuring their theoretical rigorousness. Given a complex mapping function \( f : \mathcal{X} \mapsto \mathbb{R} \), we define the target point \( x^T \in \mathcal{X} \) and the baseline point \( x^0 \). Interpretations aim at explaining how the baseline output \( y^0 \) gradually becomes \( y^T \) when baseline \( x^0 \) changes to \( x^T \). Early interpretation methods (Selvaraju et al., 2017; Montavon et al., 2018) employ Taylor expansion on the baseline as \( y^T = y^0 + \nabla f(x^0)^T(x^T - x^0) + R_1(x^T) \). However, the local linear approximation can hardly interpret nonlinear models due to non-negligible errors of the Lagrangian remainder \( R_1(x^T) \), which makes attributions less convincing. An intuitive solution is to split the path from \( x^0 \) to \( x^T \) into small segments, each of which tends to be infinitesimal. In this formulation, the variation \( \Delta y \) can be formulated in integral form as \( \Delta y = y^T - y^0 = \int \nabla f(x)^T \, dx \). The attributions \( a_i \) of each feature \( x_i \) is gradually accumulated through the line integral, which is commonly referred to as path methods (Friedman, 2004; Sundararajan et al., 2017; Xu et al., 2020; Kapishnikov et al., 2021). Game theory research (Friedman, 2004) has proved that path methods are the only method satisfying three axioms, namely dummy, additivity, and efficiency.
Ensuring rigorousness alone is insufficient for convincing interpretations. Distinct path choices in existing path methods highly impact attributions and lead to ambiguity in interpretations. Integrated Gradients (IG) (Sundararajan et al., 2017) adopts a simple straight line from \( x^0 \) to \( x^T \) for symmetry. BlurIG (Xu et al., 2020) defines a path by progressively blurring the data \( x^T \) adhering to additional
∗Corresponding author.
1 Code: https://github.com/zbr17/SAMP
scale-space axioms. GuidedIG (Kapishnikov et al., 2021) slightly modifies the straight line to bypass points with sharp gradients. However, the question of which path choice is better remains unanswered. The lack of research on the optimal path selection hampers the clarity of attributions.
To the best of our knowledge, we are the first to consider the optimal path for clarity. To start with, we define the **Concentration Principle**. This principle guides the interpreter to identify the most essential features and allocate significant attributions to them, resulting in aesthetic and sparse interpretations. Subsequently, we propose **SAMP** (Salient Manipulation Path), which greedily searches the near-optimal path from a pre-defined set of manipulation paths. Moreover, we constrain the $l_1$-norm of each manipulation step below an upper bound to ensure the infinitesimal condition for the line integral and employ the momentum strategy to avoid converging to local solutions. Visualizations on MNIST (Deng, 2012), CIFAR-10 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009) demonstrate the superiority of SAMP in discovering salient pixels. We also conduct quantitative experiments and observe a clear improvement compared with other interpretation methods as shown in Figure 1.
We highlight our contributions as follows:
- **Concentration Principle for Clear Attributions.** We introduce Concentration Principle, which enhances the clarity of attributions by prioritizing sparse salient features.
- **A Model-agnostic Interpreter, SAMP.** The proposed interpreter SAMP is able to efficiently discover the near-optimal path from a pre-defined set of manipulation paths.
- **Two Play-and-plug Auxiliary Modules.** We design infinitesimal constraint (IC) and the momentum strategy (MS) to ensure rigorousness and optimality.
- **Consistent Improvement in Explainability.** Qualitative and quantitative experiments show SAMP pinpoints salient areas accurately and consistently outperforms counterparts.
## 2 RELATED WORK
Considerable effort has been devoted to lifting the mysterious veil of DNNs with different techniques. Ad-hoc methods (Zhang et al., 2018b; Liang et al., 2020; Agarwal et al., 2021; Wan et al., 2020; Wang & Wang, 2021; Shen et al., 2021; Barbiero et al., 2022) try to observe or intervene in latent variables of DNNs, and thus rely on specific model types. On the contrary, post-hoc methods (Simonyan et al., 2014; Bach et al., 2015; Zhou et al., 2016; Selvaraju et al., 2017; Lundberg & Lee, 2017) ignore concrete implementations and focus on imitating the outside behavior. According to how attributions are generated, we mainly divide post-hoc methods into two categories: perturbation methods (Ribeiro et al., 2016; Fong & Vedaldi, 2017; Petsiuk et al., 2018) and back-propagation methods (Zeiler & Fergus, 2014; Bach et al., 2015; Selvaraju et al., 2017).
**Perturbation Methods.** An intuitive idea for attributions is to perturb the inputs and observe the output variations. Occlusion method (Zeiler & Fergus, 2014) simply covers up partial areas of input and examines the score change. LIME (Ribeiro et al., 2016) interprets the local space around the prediction by linear regression. Prediction difference analysis (Zintgraf et al., 2017) describes the output variation from a probabilistic perspective. Meaningful perturbation (Fong & Vedaldi, 2017) aims at discovering the deletion regions with compact information, which is further extended by RISE (Petsiuk et al., 2018) by the weighted average of multiple random masks. DANCE (Lu et al., 2021) introduces a subtle perturbation to input without influence on internal variables. Most perturbation methods require multiple iterations, which leads to a heavy computation burden. Moreover, most of these methods lack rigorous axiomatic guarantees.
**Back-propagation Methods.** Another kind of interpretation recovers signals or generates attributions by back-propagating information layer by layer. Early research (e.g., Deconvolution (Zeiler & Fergus, 2014) and Guided-BP (Springenberg et al., 2015)) reverses the forward procedure and recovers active signals in the input space. Recent attempts generate attributions by propagating gradients (Shrikumar et al., 2016), relevance (Bach et al., 2015), and difference-from-reference (Shrikumar et al., 2017). Most methods choose gradients as the propagation intermediary for ease of computation. Grad-CAM (Selvaraju et al., 2017) and its variants (Chattopadhyay et al., 2018) directly interpolate gradients from the top layer to the input size as the saliency map. SmoothGrad (Smilkov et al., 2017) aims at removing noise by averaging multiple gradients at neighboring points. The first-order Taylor decomposition (Montavon et al., 2018) assigns attributions by linearization with the gradient around a given root point. Since the difference between the data and the root is often not infinitesimal, expansion based on a single-point gradient results in a large error (the Lagrangian remainder), which damages the rigorousness of interpretations. Path methods (Sundararajan et al., 2017; Xu et al., 2020; Kapishnikov et al., 2021) fix this issue by dividing the integral path into small segments. Game theory guarantees that path methods are the only methods satisfying three fundamental axioms (see Proposition 1 in Friedman (2004)). However, different path choices (e.g., a straight line in space (Sundararajan et al., 2017) or frequency (Xu et al., 2020), or a guided path along a flat landscape (Kapishnikov et al., 2021)) indicate distinct attribution allocations, which makes the meaning of attributions ambiguous. Therefore, we introduce the Concentration Principle and discuss how to obtain a near-optimal path through the proposed SAMP method.
3 METHOD
In this section, we first summarize the canonical path methods (Friedman, 2004). Then we define Concentration Principle in Section 3.2. Subsequently, we propose the Salient Manipulation Path and derive an efficient algorithm under Brownian motion assumption in Section 3.3. Finally, we introduce infinitesimal constraint (IC) for rigorous line integrals and momentum strategy (MS) to escape from local sub-optimal solutions in Section 3.4.

(a) Concentration Principle prioritizes attributions (green point A) with large distance from mean point P. (b) SAMP chooses the directions with max gradient projection (colored in red), and attributions allocated along this path mainly concentrate on salient pixels.
3.1 PRELIMINARY: PATH METHOD
Path methods (Friedman, 2004) for additive cost sharing are derived from the Aumann-Shapley value (Aumann & Shapley, 1974) and were first introduced to machine learning by IG (Sundararajan et al., 2017). We define the many-to-one mapping as \( f : \mathcal{X} \rightarrow \mathbb{R} \), where input \( x^T \in \mathcal{X} \) has \( d \) features and \( y^T \) denotes its output. An intuitive idea for interpreting models is to analyze how the output \( y^0 \) turns into \( y^T \) when gradually changing the baseline \( x^0 \) to \( x^T \). Considering that the difference between \( x^0 \) and \( x^T \) is not infinitesimal, the naive Taylor decomposition \( y^T = y^0 + \nabla f(x^0)^T(x^T - x^0) + R_1(x^T) \) suffers from a large Lagrangian remainder \( R_1(x^T) \). Therefore, it is a natural improvement to divide the path from \( x^0 \) to \( x^T \) into multiple segments, each small enough. Assuming the model \( f \) is differentiable, the output variation \( \Delta y \) can be expanded as
\[
\Delta y = y^T - y^0 = \int_{\rho=0}^{1} \frac{\partial f(\gamma(\rho))}{\partial \gamma(\rho)} \frac{\partial \gamma(\rho)}{\partial \rho} \, d\rho,
\]
where \( \gamma(\rho) \) is path function \( x = \gamma(\rho) \) and \( \gamma(0) = x^0, \gamma(1) = x^T \). We define each feature’s attribution as \( a_i \) and \( \Delta y \) equals sum of \( a_i \) (namely completeness (Sundararajan et al., 2017)):
\[
a_i \triangleq \int_{\rho=0}^{1} \frac{\partial f(\gamma(\rho))}{\partial \gamma_i(\rho)} \frac{\partial \gamma_i(\rho)}{\partial \rho} \, d\rho, \quad \Delta y = \sum_{i=1}^{d} a_i.
\]
Game theory research (Friedman, 2004) has proved that the path method is the only interpretation method satisfying three fundamental axioms (i.e., completeness, additivity, and dummy). However, choices of path function \( \gamma(\rho) \) highly impact the attribution allocation, which hampers the clarity of the interpretations. In this paper, we explore an explicit selection criterion among candidate paths.
### 3.2 Criterion and Candidate Set
Existing path methods lack clarity due to various path choices. In Eq. (2), the attribution \( a \) is a function of the selected path \( \gamma \) as \( a = g(\gamma) \) given \( x^T, x^0, f \). However, conventional interpretations often scatter the attributions over all pixels (Kapishnikov et al., 2021; Smilkov et al., 2017) due to unpredictable distractors. To address this, we propose the **Concentration Principle**, which introduces a selection preference for the allocation of attributions. Instead of scattering attributions across all features, we aim to concentrate them centrally on the indispensable features.
**Definition 1 (Concentration Principle).** A path function \( \gamma^* \) is said to satisfy Concentration Principle if the attribution \( a \) achieves the max \( \text{Var}(a) = \frac{1}{d} \sum_{i=1}^{d} (a_i - \bar{a})^2 \).
**Remark.** Considering that \( \sum_{i=1}^{d} a_i \) is a constant \( C = \Delta y \), the variance of \( a \) depicts the concentration degree. For a 3-feature case, this principle prefers \( a = (0.7, 0.2, 0.1) \) to \( (0.4, 0.3, 0.3) \). For the image input in Figure 1, this principle achieves aesthetics and sparsity. Our method clearly pinpoints important pixels, while IG (Sundararajan et al., 2017) spreads attributions over most pixels. We also present a counting model example in Appendix A.2 to illustrate the potential challenge.
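The criterion itself is a one-liner; the following illustrative snippet reproduces the 3-feature comparison above:

```python
import torch

def concentration(a):
    """Var(a) from Definition 1: higher values mean more concentrated attributions."""
    return ((a - a.mean()) ** 2).mean().item()

print(concentration(torch.tensor([0.7, 0.2, 0.1])))  # ~0.0689 (preferred)
print(concentration(torch.tensor([0.4, 0.3, 0.3])))  # ~0.0022 (dispersed)
```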
Under this principle, we formulate a tractable optimization problem. To maintain consistency in our formulation, we introduce the start point \( x^S \) and the end point \( x^E \). To approximate the line integral in Eq. (1), we use a Riemann sum that divides the path into \( n \) segments:
\[
\Delta y = \sum_{k=1}^{n} \nabla f(x^k)^T dx^k,
\]
where \( dx^k \) is the \( k \)th segment along the path and \( x^k = x^S + \sum_{l=1}^{k} dx^l \). Analogous to Eq. (2), we calculate each attribution \( a_i \) as \( a_i = \sum_{k=1}^{n} (\nabla f(x^k))_i (dx^k)_i \). However, it is intractable to directly find the optimal path from the infinite set \( \Gamma \) of all path functions. Thus we construct a finite **Manipulation Path** set \( \Gamma_s \subseteq \Gamma \), along which we manipulate images by inserting or deleting \( s = d/n \) pixels per step. The formal definition is as follows:
**Definition 2 (Manipulation Path).** The \( k \)th segment \( dx^k \) of a manipulation path \( \gamma \in \Gamma_s \) satisfies
\[
(dx^k)_i = \begin{cases}
x^E_i - x^k_i, & i \in \Omega_k \\
0, & \text{Otherwise}
\end{cases},
\]
where \( |\Omega_k| = s \) and all \( \Omega_k \) consist of a non-overlapping partition of all pixel indices which satisfy that \( \forall k \neq l, \Omega_k \cap \Omega_l = \emptyset \) and \( \bigcup_{k=1}^{n} \Omega_k = \{1, \cdots, d\} \).
**Remark.** \( \Gamma_s \) is a finite set and \( |\Gamma_s| \) equals \( d!/(s!)^n \).
Following Definitions 1 and 2, we formulate the optimal path selection problem as follows:
\[
\gamma^* = \arg \max_{\gamma \in \Gamma_s} \text{Var}(a) = \frac{1}{d} \sum_{i=1}^{d} \left( a_i - \frac{C}{d} \right)^2.
\]
Solving Eq. (5) directly is computationally challenging. To overcome this, we propose the SAMP algorithm, which leverages a Brownian motion assumption to efficiently search for a near-optimal path.
### 3.3 Salient Manipulation Path
The intuition of Eq. (5) is to enlarge the distance \( D_{ap} \) between \( a \) (restricted to the hyperplane \( \sum_{i=1}^{d} a_i = C \)) and the center point \( (C/d, \cdots, C/d) \) in Figure 2a. We need to introduce prior knowledge to accelerate the search process. Without loss of generality, we set \( s = 1 \) for ease of derivation. Since the attributions are assigned sequentially, we regard the allocation process as a stochastic process\(^2\). We define the partial sum \( u_k \) as \( \sum_{i=1}^{k} a_i \) and make the following assumption:
\(^2\)We use lowercase letters to denote random variables for consistency.
Assumption 1 (Allocation as Brownian motion). We assume the additive process \( \{u_t, t \geq 0\} \) is a Brownian motion, with \( u_t \sim N(0, \sigma^2) \) in the absence of any constraint condition.
We now explain the rationale behind this assumption. In the model-agnostic case, we consider the input space to be isotropic. In the absence of any constraint condition, we assume \( E(a_i) = E(dy^i) = 0 \) for a randomly sampled step \( dx^i \), and that \( a_i \) and \( a_j \) are independent for any \( i \neq j \). It is important to note that we do NOT directly assume that \( a_i \) and \( a_j \) are conditionally independent given the condition \( \sum_{i=1}^{d} a_i = C \).
We then assume that every \( a_i \) follows a Gaussian distribution (i.e., \( a_i \sim N(0, \sigma^2) \)). If we subdivide time infinitely, the additive process \( \{u_t, t \geq 0\} \) tends to a Brownian motion.
Proposition 1. By Brownian motion assumption, the conditional joint distribution \( P(\tilde{a}|C) = P(a_1, \cdots, a_{d-1}|u_d = C) \) is a multivariate Gaussian distribution as:
\[
P(\tilde{a}|C) = \frac{1}{(2\pi)^{\frac{d-1}{2}} \sqrt{|\Sigma|}} \exp \left\{ -\frac{1}{2} \left\| \tilde{a} - \frac{C}{d} \mathbf{1} \right\|^2_{\Sigma^{-1}} \right\},
\]
where \( \Sigma = \sigma(I - \frac{J}{d}) \in \mathbb{R}^{(d-1) \times (d-1)} \) and \( J \) is all-one matrix.
Remark. See proof in Appendix A.1. Eq. (6) reveals that the conditional distribution is centered at point \( P \) in Figure 2a. For any \( i \neq j \), \( \text{Cov}(a_i, a_j|u_d = C) = -\sigma/d \) indicates that allocating more to \( a_i \) results in less to \( a_j \). Moreover, \( E(u_k|u_d = C) = kC/d \) reveals that a randomly selected path tends to produce a linear variation in the output. Surprisingly, we observe that the curve shapes of IG (Sundararajan et al., 2017), XRAI (Kapishnikov et al., 2019), and Grad-CAM (Selvaraju et al., 2017) in Figure 1 are nearly straight lines, which is consistent with this theoretical analysis.
As the dimension of images is always high, we investigate the asymptotic property of \( P(\tilde{a}|C) \) as:
Proposition 2. Since \( \lim_{d \to \infty} \Sigma = \sigma I \), the conditional covariance \( \text{Cov}(a_i, a_j|u_d = C) \) is nearly zero for high dimension \( d \). Thus we can approximate Eq. (6) as:
\[
\hat{P}(\tilde{a}|C) = \frac{\exp \left( -\frac{D^2_{ap}}{2\sigma^2} \right)}{(2\pi)^{\frac{d-1}{2}} \sqrt{|\Sigma|}}.
\]
Remark. \( \hat{P}(\tilde{a}|C) = P(\tilde{a}|C)e^{a_d^2/(2\sigma^2)} \). As the last attribution \( a_d \) tends to 0 if \( d \) is high enough, the approximation error of \( \hat{P}(\tilde{a}|C) \) is tolerable.
Since the image dimension is typically high, we regard any two attributions \( a_i, a_j \) as nearly independent by Proposition 2. Therefore, we can maximize each attribution separately with negligible error while reducing the computational complexity from factorial \( O(d!) \) to linear \( O(d) \). Specifically, we choose the \( s \) pixels with the largest projection of the gradient \( \nabla f(x^k) \) onto \( dx^k \). We name this greedy selection strategy the Salient Manipulation Path (SAMP) and take the insertion direction as an example to formulate SAMP as:
\[
(dx^k)_i = \begin{cases} x^E_i - x^k_i, & i \in M_k \\ 0, & \text{Otherwise} \end{cases},
\]
where \( M_k = \{i | i \in \text{top}_s \{\alpha_j\}\} \) (\( \text{top}_s(\cdot) \) means the largest \( s \) elements) and \( \alpha_j = (\nabla f(x^k))_j(x^E_j - x^k_j) \) if \( x^E_j \neq x^k_j \) and \(-\infty\) otherwise. It is obvious that the path defined above belongs to \( \Gamma_s \).
3.4 Towards Rigorousness and Optimality
Two potential issues still remain in our proposed SAMP interpreter. First, if the step size \( |dx^k| \) is too large, the infinitesimal condition may be violated, thereby breaking the completeness axiom in Eq. (2).
Algorithm 1: The SAMP++ algorithm.
Input: Start point \( x^S \); End point \( x^E \); Upper bound \( \eta \); Momentum coefficient \( \lambda \).
Output: Attribution \( a \); Path segments \( D \).
1. Reset \( k = 0 \) and set of path segments \( D = \emptyset \);
2. Initialize \( x^0 = x^S \), \( a^0 = 0 \), \( g^0 = \nabla f(x^S) \);
3. while \( x^k \neq x^E \) do
4. Increase index \( k \) by 1;
5. Update \( g^k = \lambda g^{k-1} + (1-\lambda) \nabla f(x^k) \);
6. Compute \( \alpha_j = g^k_j(x^E_j - x^k_j) \) if \( x^E_j \neq x^k_j \) and \(-\infty\) otherwise;
7. Construct \( M_k = \{i | i \in \text{top}_s \{\alpha_j\}\} \);
8. Compute \( (dx^k)_i = x^E_i - x^k_i \) if \( i \in M_k \) and 0 otherwise;
9. If \( \|dx^k\|_1 > \eta \): \( dx^k = \frac{\eta}{\|dx^k\|_1} dx^k \);
10. Move current point \( x^k = x^{k-1} + dx^k \);
11. Update attribution \( a^k = a^{k-1} + g^k \cdot dx^k \);
12. Expand \( D = D \cup \{dx^k\} \);
13. Return \( a^k, D \).
Besides, most greedy algorithms tend to get stuck in local sub-optimal solutions. To address these issues, we propose the infinitesimal constraint and the momentum strategy, respectively.
**Infinitesimal Constraint (IC).** To ensure the completeness axiom, we need to restrict each step size below a given bound $\eta > 0$. Therefore we rectify $d\mathbf{x}^k$ in Eq. (8) as:
$$
d\hat{\mathbf{x}}^k = \begin{cases}
\frac{\eta}{\|d\mathbf{x}^k\|_1} d\mathbf{x}^k, & \text{if } \|d\mathbf{x}^k\|_1 > \eta \\
d\mathbf{x}^k, & \text{Otherwise}
\end{cases}
$$
(9)
Note that the above constraint does not affect the convergence of SAMP. According to the definition of manipulation paths, it is easy to see that the sum of the L1 norms of all steps is a constant: $\sum_{k=1} \|d\mathbf{x}^k\|_1 = \|\mathbf{x}^S - \mathbf{x}^E\|_1 = C$. As long as $\eta > 0$, the constrained SAMP is guaranteed to converge after finitely many iterations.
**Momentum Strategy (MS).** Due to the nature of greedy algorithms, SAMP runs the risk of falling into a local optimum. Inspired by gradient descent with momentum, we incorporate a momentum strategy to coast across flat regions of the landscape through an inertia mechanism as follows:
$$
g^k = \lambda g^{k-1} + (1 - \lambda)\nabla f(\mathbf{x}^k).
$$
(10)
By substituting $d\mathbf{x}^k$ with $d\hat{\mathbf{x}}^k$ and $\nabla f(\mathbf{x}^k)$ with $g^k$, we formulate SAMP++ in Algorithm 1.
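For concreteness, the following is a compact PyTorch rendering of Algorithm 1 on a flattened input in the insertion direction; the interface and names are our own simplifications (e.g., the set of path segments \( D \) is omitted), not the released implementation.

```python
import torch

def samp_pp(f, x_start, x_end, s, eta, lam=0.5):
    """Sketch of SAMP++ (Algorithm 1). f maps a flattened input to a scalar
    score; s is the number of pixels moved per step; eta is the L1 step
    bound (IC); lam is the momentum coefficient (MS)."""
    x = x_start.clone()
    a = torch.zeros_like(x)                      # accumulated attributions
    g = None
    while not torch.allclose(x, x_end):
        x_req = x.clone().requires_grad_(True)
        grad = torch.autograd.grad(f(x_req), x_req)[0]          # nabla f(x^k)
        g = grad if g is None else lam * g + (1 - lam) * grad   # momentum, Eq. (10)
        alpha = g * (x_end - x)                  # per-pixel gradient projection
        alpha[x == x_end] = float("-inf")        # skip already-finished pixels
        idx = torch.topk(alpha, k=s).indices     # salient set M_k
        dx = torch.zeros_like(x)
        dx[idx] = (x_end - x)[idx]               # manipulation step, Eq. (8)
        if dx.abs().sum() > eta:                 # infinitesimal constraint, Eq. (9)
            dx = dx * (eta / dx.abs().sum())
        x = x + dx
        a = a + g * dx                           # accumulate attributions
    return a
```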
4 EXPERIMENT
In this section, we conduct qualitative and quantitative experiments to demonstrate the superiority of our proposed SAMP method. Given the wide variety of interpretability methods, they often need to be evaluated along multiple dimensions (Nauta et al., 2022). Our proposed SAMP belongs to the family of attribution methods. We first perform qualitative experiments to verify the Concentration Principle claimed above and compare the visualization results with other counterparts in Section 4.2. Subsequently, we employ the Deletion/Insertion metrics (Petsiuk et al., 2018) to examine SAMP quantitatively and conduct a completeness check with the Sensitivity-N metric (Ancona et al., 2018) in Section 4.3. Extensive ablation studies demonstrate the effectiveness of each feature in Section 4.4.
4.1 EXPERIMENTAL SETTING
**Datasets and Models.** We evaluate SAMP on the widely used MNIST (Deng, 2012), CIFAR-10 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009). For MNIST and CIFAR-10 datasets, we simply build two five-layer CNNs (c.f. Appendix A.3) and train them to convergence using AdamW optimizer (Loshchilov & Hutter, 2017). For ImageNet dataset, we use the pre-trained ResNet-50 model (He et al., 2016) from PyTorch torchvision package (Paszke et al., 2019).
**Metrics.** Interpretations should faithfully reveal the attention of model decisions. One way to judge attributions is to check whether features with large attributions have a significant effect on outputs. Therefore, we choose the Deletion/Insertion metrics (Petsiuk et al., 2018) for quantitative comparison. We delete/insert pixels sequentially in the descending order of attributions, plot the output curve, and calculate the area under the curve (AUC). For Deletion, a smaller AUC indicates better interpretability; for Insertion, a larger AUC is expected. Moreover, we wish to examine the effect of the infinitesimal constraint (IC) on the rigorousness of SAMP. Therefore, we adopt the Sensitivity-N metric (Ancona et al., 2018), calculating the Pearson correlation between the sum of attributions and the model output for a completeness check.
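As a reference, the Deletion metric can be computed with a few lines of PyTorch (single flattened image; illustrative code, and Insertion is obtained by swapping the roles of the image and the baseline):

```python
import torch

@torch.no_grad()
def deletion_auc(model, x, attribution, baseline, step=10):
    """Deletion metric sketch (Petsiuk et al., 2018): replace pixels with
    baseline values in descending attribution order and integrate the score
    of the originally predicted class."""
    order = attribution.flatten().argsort(descending=True)
    x_cur, base = x.clone().flatten(), baseline.flatten()
    target = model(x.unsqueeze(0)).argmax()      # class under explanation
    scores = [model(x.unsqueeze(0))[0, target].item()]
    for i in range(0, order.numel(), step):
        x_cur[order[i:i + step]] = base[order[i:i + step]]
        scores.append(model(x_cur.view_as(x).unsqueeze(0))[0, target].item())
    y = torch.tensor(scores)
    return torch.trapz(y, dx=1.0 / (len(y) - 1)).item()  # area under the curve
```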
**Implementation Details.** We compare the Deletion/Insertion metrics of SAMP with 12 mainstream interpretation methods. Following the configuration of Petsiuk et al. (2018), we set the baseline point as a zero-filled image for Deletion and a Gaussian-blurred image for Insertion. We randomly select 100 images from each dataset and report the mean and standard deviation of the AUCs. Specifically, for MNIST and CIFAR-10, we set the Gaussian blur kernel size $s_g$ to 11, the variance $\sigma_g$ to 5, and the step size for calculating metrics $s_m$ to 10; for ImageNet, $s_g = 31$, $\sigma_g = 5$, and $s_m = 224 \times 8$. Unless otherwise specified, we fix the step size $s$ in SAMP to $224 \times 16$ for ImageNet and 10 for the other datasets, the ratio of the infinitesimal upper bound $\eta$ to $\|\Delta \mathbf{x}\|_1$ to 0.1, and the momentum coefficient $\lambda$ to 0.5. We perform all experiments with PyTorch on one NVIDIA 3090 card.
---
3The benchmark code will be released together with SAMP.
Figure 3: Verification of Concentration Principle. (a) Visualizations of intermediate points and corresponding attributions along the path solved by SAMP. (b) The output score curve from the baseline point to the target image.
Figure 4: Visualizations on MNIST, CIFAR-10, and ImageNet compared with other methods.
4.2 Qualitative Visualization
4.2.1 Verification of Properties
We first verify whether SAMP can reach an expected path under the Concentration Principle. For clear visualization, we set the baseline point as a zero-filled image and choose the manipulation direction from $x^0$ to $x^T$. Along the path solved by SAMP, the intermediate points and corresponding attributions at different stages are visualized separately, as shown in Figure 3a. We can see that the first 25% of the path has precisely pinpointed the subject animal. Besides, we plot the output scores at different stages along the manipulation path in Figure 3b. A rapid rise can be observed at the start, which indicates that SAMP tends to capture the most salient pixels first. At the same time, there is a small drop at the end. We ascribe this to background pixels, which interfere with the output score.
4.2.2 Visualization Comparison
We compare the visualization results of SAMP with other mainstream interpretation methods (Ribeiro et al., 2016; Selvaraju et al., 2017; Sundararajan et al., 2017; Smilkov et al., 2017; Shrikumar et al., 2017; Bach et al., 2015; Petsiuk et al., 2018; Kapishnikov et al., 2019; Xu et al., 2020; Kapishnikov et al., 2021). After randomly selecting input images from MNIST, CIFAR-10, and ImageNet, we calculate the attribution results of each method. We first convert the attributions to a grayscale image for visualization and also superimpose the attribution values on the original image. Figure 4 shows the comparison of SAMP with existing methods. As can be seen, the attributions allocated by our method pinpoint important pixels and localize the pixels on salient objects most completely. Additionally, the attribution results of SAMP++ are broadly similar to those of SAMP, but they are more fine-grained due to the infinitesimal constraint (for instance, the subject is better separated from the background).
4.3 Quantitative Analysis
We conduct quantitative experiments to assess the performance of SAMP, including the Deletion/Insertion metrics and the Sensitivity-N check. In addition, we carry out evaluations such as $\mu$Fidelity (Novello et al., 2022) and the pointing game (Zhang et al., 2018a) in Section A.5.4.
Table 1: Deletion/Insertion metrics on MNIST, CIFAR-10, and ImageNet.

| Method | MNIST Deletion↓ | MNIST Insertion↑ | CIFAR-10 Deletion↓ | CIFAR-10 Insertion↑ | ImageNet Deletion↓ | ImageNet Insertion↑ |
|---|---|---|---|---|---|---|
| LRP | -0.003 (±0.13) | 0.808 (±0.10) | -0.257 (±0.49) | 1.452 (±0.37) | 0.210 (±0.13) | 0.575 (±0.15) |
| CAM | 0.221 (±0.15) | 0.715 (±0.11) | 0.314 (±0.31) | 0.863 (±0.23) | 0.313 (±0.129) | 0.897 (±0.13) |
| LIME | 0.282 (±0.14) | 0.597 (±0.09) | 0.479 (±0.29) | 0.722 (±0.24) | 0.312 (±0.13) | 0.898 (±0.14) |
| Grad-CAM | 0.221 (±0.15) | 0.715 (±0.11) | 0.314 (±0.31) | 0.863 (±0.23) | 0.313 (±0.13) | 0.897 (±0.13) |
| IG | -0.038 (±0.14) | 0.795 (±0.11) | -0.372 (±0.54) | 1.452 (±0.40) | 0.197 (±0.13) | 0.725 (±0.20) |
| SmoothGrads | 0.003 (±0.13) | 0.547 (±0.11) | 0.777 (±0.55) | 0.517 (±0.28) | 0.300 (±0.13) | 0.605 (±0.17) |
| DeepLIFT | -0.025 (±0.14) | 0.791 (±0.11) | -0.300 (±0.51) | 1.443 (±0.38) | 0.216 (±0.12) | 0.688 (±0.18) |
| RISE | 0.059 (±0.11) | 0.651 (±0.12) | 0.149 (±0.35) | 0.904 (±0.27) | 0.282 (±0.13) | 0.849 (±0.15) |
| XRAI | 0.120 (±0.12) | 0.754 (±0.10) | 0.248 (±0.33) | 0.910 (±0.21) | 0.346 (±0.16) | 0.865 (±0.14) |
| Blur IG | 0.021 (±0.02) | 0.804 (±0.17) | -0.107 (±0.39) | 1.407 (±0.47) | 0.261 (±0.14) | 0.712 (±0.22) |
| Guided IG | -0.041 (±0.14) | 0.762 (±0.10) | -0.276 (±0.47) | 1.209 (±0.35) | 0.167 (±0.13) | 0.699 (±0.21) |
| SAMP (ours) | -0.093 (±0.14) | 1.074 (±0.18) | -0.733 (±0.67) | 1.458 (±0.40) | 0.154 (±0.12) | 0.984 (±0.20) |
| SAMP++ (ours) | -0.137 (±0.151) | 1.050 (±0.18) | -0.899 (±0.72) | 1.514 (±0.43) | 0.145 (±0.12) | 1.116 (±0.24) |
Figure 5: Sensitivity-N check for IC.
Figure 6: Impact of momentum coefficient $\lambda$.
4.3.1 Deletion/Insertion Comparison
To precisely compare performance, we calculate the Deletion/Insertion metrics (Petsiuk et al., 2018). We randomly sample 100 images and report the mean and standard deviation of the AUCs (see Table 1), where "SAMP" represents the original algorithm described in Eq. (8) and "SAMP++" denotes Algorithm 1 with the infinitesimal constraint (IC) and momentum strategy (MS). Our method consistently outperforms all other methods on the three datasets. We ascribe this to the Concentration Principle, which facilitates clear saliency rankings. In addition, the improved version significantly outperforms the original one in most cases. We believe that the momentum strategy plays an essential role in prompting the algorithm to break free from local points (cf. Section 4.4 for ablation studies).
4.3.2 Sensitivity-N Check
In this part, we show the importance of the infinitesimal constraint (IC) on rigorousness (or completeness (Sundararajan et al., 2017)). Sensitivity-N (Ancona et al., 2018) checks the completeness by calculating the Pearson correlation of $\sum_j a_j$ and $\Delta y$. We gradually increase $\beta = \|x\|_1/\eta$ (i.e., decrease the upper bound $\eta$ in Eq. (9)) and draw the curve of the correlation w.r.t. $\beta$ (see Figure 5).
With the decrease of $\eta$, the correlation increases significantly. This is because IC limits each step to be infinitesimal, which ensures that the Lagrangian remainder tends to 0, thereby enhancing the rigorousness of Eq. (3). Interestingly, Figure 5a shows that as $\eta$ decreases further, numerical error becomes the main error source and the correlation no longer rises; because $\eta$ is not small enough at the start of Figure 5b, most steps are not clipped, leading to a flat correlation curve.
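For reference, a minimal Sensitivity-N check could be implemented as below; the subset size, trial count, and function name are our illustrative choices.

```python
import torch

@torch.no_grad()
def sensitivity_n(model, x, attribution, baseline, n, trials=100):
    """Sensitivity-N sketch (Ancona et al., 2018): Pearson correlation between
    the summed attributions of a random n-pixel subset and the output drop
    when that subset is replaced by baseline values."""
    target = model(x.unsqueeze(0)).argmax()
    y_full = model(x.unsqueeze(0))[0, target]
    attr_sums, drops = [], []
    for _ in range(trials):
        idx = torch.randperm(x.numel())[:n]       # random subset of n pixels
        x_mask = x.clone().flatten()
        x_mask[idx] = baseline.flatten()[idx]
        y_mask = model(x_mask.view_as(x).unsqueeze(0))[0, target]
        attr_sums.append(attribution.flatten()[idx].sum())
        drops.append(y_full - y_mask)
    samples = torch.stack([torch.stack(attr_sums), torch.stack(drops)])
    return torch.corrcoef(samples)[0, 1].item()   # Pearson correlation
```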
4.4 Ablation Study
4.4.1 Influence of IC and MS
We perform ablation studies on the infinitesimal constraint (IC) and momentum strategy (MS), as shown in Table 2. As we can see, the improvement of SAMP in Deletion/Insertion metrics mainly comes from MS. According to Figure 6, SAMP achieves the largest improvement when $\lambda \approx 0.3$. Table 3 shows that IC has no significant impact on the Deletion/Insertion metrics, which can be attributed to the fact that IC is primarily designed to maintain rigorousness and lacks a direct connection with enhancing these metrics.
Table 2: Ablation study on IC and MS (ImageNet).

| Setting | Deletion↓ | Insertion↑ |
|---|---|---|
| SAMP | 0.154 (±0.118) | 0.984 (±0.195) |
| +MS | **0.144** (±0.115) | **1.088** (±0.251) |
| +IC | 0.159 (±0.121) | 1.056 (±0.185) |
| +MS/IC | **0.145** (±0.116) | **1.116** (±0.241) |
Table 3: Influence of the upper bound $\eta$ (ImageNet).

| $\eta / \|\Delta x\|_1$ | Deletion↓ | Insertion↑ |
|---|---|---|
| 1/10 | **0.159** (±0.121) | 1.056 (±0.185) |
| 1/50 | 0.218 (±0.133) | **1.130** (±0.168) |
| 1/100 | 0.249 (±0.142) | 1.031 (±0.155) |
| 1/200 | 0.279 (±0.147) | 0.939 (±0.159) |
In addition, a smaller $\eta$ (e.g., $\eta = \|\Delta x\|_1/200$) leads to finer-grained visualization (see Figure 8), which is due to the shortened step size that focuses more on details.
### 4.4.2 Choice of Baseline Points
We wonder whether different baseline choices affect the performance of SAMP. Therefore, we test four different sets of baseline points with $\eta = \|x\|_1/50$. "B" means padding with zeros, "W" means padding with ones, "U" denotes uniform random initialization, and "G" denotes Gaussian random initialization. In the symbol "X+Y", X represents the deletion direction and Y represents the insertion direction. Figure 7 shows that the impact of different baselines on the explanations is not significant compared to that of different methods in Figure 4. Specifically, different baselines have little impact on the contour information, but they significantly affect the overall intensity (e.g., brightness), which leads to visual differences.


### 4.4.3 Choice of Paths
The choice of path also has a certain influence on Deletion/Insertion (as shown in Table 4). We discover that using only the path $x^T \rightarrow x^0$ achieves the best Deletion, while using only $x^0 \rightarrow x^T$ reaches the highest Insertion. In practice, we use both directions at the same time and sum the attributions generated by the two directions to obtain a trade-off between the two metrics.
Table 4: Influence of path choices (ImageNet).

| to $x^0$ | to $x^T$ | Deletion↓ | Insertion↑ |
|---|---|---|---|
| ✓ | | **0.108** (±0.107) | 0.659 (±0.171) |
| | ✓ | 0.199 (±0.135) | **1.330** (±0.230) |
| ✓ | ✓ | 0.159 (±0.121) | 1.056 (±0.185) |
### 5 Conclusion
To obtain user trust, interpretations should possess rigorousness and clarity. Even though path methods (Sundararajan et al., 2017) identify fundamental axioms for rigorousness, attributions remain ambiguous due to indeterminate path choices. In this paper, we first define the Concentration Principle. Subsequently, we propose the Salient Manipulation Path (SAMP), a fast greedy interpreter that efficiently solves for an approximately optimal path. To enhance the rigorousness and optimality of SAMP, we propose the infinitesimal constraint (IC) and momentum strategy (MS), respectively. Visualization experiments show that the attributions generated by our method accurately discover significant pixels and completely pinpoint all pixels of salient objects. Quantitative experiments demonstrate that our method significantly outperforms current mainstream interpretation methods. Moreover, qualitative experiments also reveal that SAMP can obtain higher-quality semantic segmentations by visualizing the attribution values using only class-level annotations. Investigating the utility of saliency-based explanations in annotation-limited tasks (such as weakly-supervised object recognition) and other promising domains (with a focus on NLP and medical image analysis) represents an exciting direction for further study.
ACKNOWLEDGEMENT
This work was supported in part by the National Key Research and Development Program of China under Grant 2022ZD0160102, and in part by the National Natural Science Foundation of China under Grant 62125603, Grant 62321005, and Grant 62336004.
|
G0EVNrBQh6
|
Therefore, it can be inferred that ensembling Gaussian noise plays a more crucial role in generating the human-identifiable features than ensembling different models, which undermines the soundness of the claim that the presence of human-identifiable features is inherent in the perturbations themselves, rather than being a result of added Gaussian noise.
|
INVESTIGATING HUMAN-IDENTIFIABLE FEATURES HIDDEN IN ADVERSARIAL PERTURBATIONS
Anonymous authors
Paper under double-blind review
ABSTRACT
Neural networks perform exceedingly well across various machine learning tasks but are not immune to adversarial perturbations. This vulnerability has implications for real-world applications. While much research has been conducted, the underlying reasons why neural networks fall prey to adversarial attacks are not yet fully understood. Central to our study, which explores up to five attack algorithms across three datasets, is the identification of human-identifiable features in adversarial perturbations. Additionally, we uncover two distinct effects manifesting within human-identifiable features. Specifically, the masking effect is prominent in untargeted attacks, while the generation effect is more common in targeted attacks. Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models. In addition, our findings indicate a notable extent of similarity in perturbations across different attack algorithms when averaged over multiple models. This work also provides insights into phenomena associated with adversarial perturbations, such as transferability and model interpretability. Our study contributes to a deeper understanding of the underlying mechanisms behind adversarial attacks and offers insights for the development of more resilient defense strategies for neural networks.
1 INTRODUCTION
Neural networks achieve an unprecedented level of performance across a vast array of machine learning tasks (Hinton et al., 2012) and many more applications are expected to emerge in the near future. Thus, it is particularly concerning that small changes to input data known as adversarial perturbations, which are often imperceptible to human eyes, can dramatically alter neural networks’ judgments (Szegedy et al., 2014), thereby compromising their reliability. This vulnerability introduces significant risks to real-world applications of neural networks. For instance, an adversarial perturbation applied to a traffic sign could cause an autonomous car to misread a stop sign as a 45 mph speed limit sign. This misunderstanding could trigger a sudden acceleration, possibly resulting in accidents (Eykholt et al., 2018).
In this paper, we carefully examine the underlying properties of adversarial perturbations, which leads us to hypothesize that human-identifiable features exist within these perturbations. These features are often easily recognized by humans, such as the tire of a car, a cock’s crest, or more generally, a particular object’s shape. In the process of validating this hypothesis, we identify two factors obscuring human-identifiable features: excessive noise and incomplete feature information. To reveal the hidden features concealed by these two factors, both factors must be mitigated without altering the essence of the perturbations. Since different neural networks usually generate perturbations with noise and incomplete information different from each other, averaging these perturbations, derived from the same image, effectively minimizes the effects of noise. At the same time, assembling incomplete information leads to the emergence of human-identifiable features.
We find that, using the methodology described above, human-identifiable features emerge. In fact, two types of human-identifiable features show up in our study: the masking effect and the generation effect. The masking effect includes important features from the input image, potentially with a sign inversion. The generation effect adds new features to the original image, simulating a different class, which can potentially result in misclassification by both humans and neural networks.
We demonstrate our finding using five different attack algorithms, including both gradient-based and search-based algorithms, and three different datasets: MNIST (Deng, 2012), CIFAR-10 (Krizhevsky & Hinton, 2009), and ImageNet (Deng et al., 2009). We also quantify our results by evaluating the recognizability and attack strength of the perturbations. To further highlight that human-identifiable features are critical in causing the model to misclassify, we employ pixel-level annotations to extract these features from the perturbations. Our findings confirm that these features result in significantly more adverse attacks compared to other features, even though they correspond to a smaller attack surface and vector norm within such perturbations. Moreover, the "masking effect" results in an intriguing phenomenon: perturbations produced by different algorithms, when averaged across independent models, converge to higher cosine similarity values.
Perturbations containing human-identifiable features enable us to explain important phenomena related to adversarial perturbations, including their transferability (Szegedy et al., 2014; Papernot et al., 2016), the enhancement of model explainability through adversarial training (Goodfellow et al., 2015; Tsipras et al., 2019; Ross & Doshi-Velez, 2018; Santurkar et al., 2019), and the role of non-robust features (Ilyas et al., 2019). It is noted that our experimental findings do not rule out other factors that could cause adversarial perturbations to impact the performance of neural networks. The key findings of the paper are summarized in Figure 1.

**Figure 1:** Overview of the key findings of this work. A DenseNet-121 model classifies the image of a tree frog correctly (87.5%). Averaging perturbations from multiple neural networks leads to the emergence of distinctive, frog-shaped features (middle panel). After subtracting the features from the original image, the DenseNet-121 model misclassifies the image as an African chameleon (81.0%).
2 RELATED WORK
Several works investigate the existence and/or the mechanisms behind adversarial perturbations. These include (1) the over-linearity of neural networks (Goodfellow et al., 2015; Tramèr et al., 2017), which posits a linear relationship between input and output changes of a neural network; this allows small perturbations to impact the output in high-dimensional input spaces. (2) The view of adversarial perturbations as non-robust features, which argues that neural networks, trained solely to minimize loss, may learn non-robust features that are irrelevant to human perception but predictive for model classification (Schmidt et al., 2018; Tsipras et al., 2019; Ilyas et al., 2019). Adversarial perturbations aim to alter these non-robust features, making them unrecognizable to humans yet influential in machine decisions. Further works investigate this problem as well (Tanay & Griffin, 2016; Shafahi et al., 2019; Zhang et al., 2022; Han et al., 2023).
Elsayed et al. (2018) show that perturbations generated under targeted attacks with an ensemble of multiple models, each including a retinal layer, decrease human classification accuracy on adversarial examples by approximately 10% when images are shown for just 60–70 milliseconds. The authors suggest that the brief image display limits the brain’s time for top-down processing, making the brain act like a feedforward neural network, which is why the perturbations may fool humans. Athalye et al. (2017) and Brown et al. (2017) show that perturbations generated by targeted attack algorithms and augmented through the Expectation Over Transformation (EOT) technique are more robust in fooling neural networks and better align with human perception.
Differing from prior studies, we demonstrate the concept that human-identifiable features occur naturally in perturbations created by both untargeted and targeted attack algorithms, requiring no additional constraints. We further show that these features are effective in deceiving models and use this concept to elucidate several key phenomena.
3 ASSUMPTIONS MADE IN THIS STUDY
3.1 PERTURBATIONS CONTAIN HUMAN-IDENTIFIABLE FEATURES
In image classification, data is labeled by humans according to features that they can recognize. A well-trained model should rely, at least partially, on human-identifiable features to correctly classify the image. Hence, adversarial perturbations, which can fool models, will likely modify the features that the model’s classification is based on. As a result, perturbations may incorporate human-identifiable features from the original image, which likely is a key factor that deceives models.
3.2 FACTORS CONCEALING HUMAN-IDENTIFIABLE FEATURES
In practice, human-identifiable features are often not readily visible in perturbations. We hypothesize that high noise levels and incomplete feature information in perturbations hinder the visibility of human-identifiable features. The origins of these two issues are explored below.
The neural network’s gradient may be noisy, as indicated by Smilkov et al. (2017). Since adversarial perturbations are often derived from the gradient, they may also exhibit high noise levels.
Due to a model’s limited computational capacity and challenges in achieving the global optimum of weight parameters during training, neural networks might not fully capture all useful information in the data for classification. Perturbations are specifically designed to maximize the model’s loss function, thus affecting only the features the model has learned. As a result, such perturbations may contain only a subset of human-identifiable features. The extent and distribution of this incompleteness affect the visibility of these features.
4 EXPERIMENTAL METHOD
Our objective is to reduce noise and assemble incomplete features within adversarial perturbations, without changing their nature. Since the noise in perturbations originating from different models of the same image is likely independent, we in effect reduce the noise by averaging the perturbations generated by different neural networks. Because two perturbations from distinct models are unlikely to exhibit identical human-identifiable features, upon averaging, we in practice aggregate the incomplete components of the associated features. Additionally, we argue that averaging perturbations does not introduce any new information, as illustrated in Section 5.2.2, where the high attack success rate is maintained for averaged perturbations.
In order to overcome these two issues, it is necessary to acquire enough neural networks to produce a sufficient number of perturbations. However, the number of available neural networks is often limited. To solve this problem, we use a method inspired by SmoothGrad (Smilkov et al., 2017): we create multiple copies of a single input image, each with a different sample of Gaussian noise added. This enables us to generate subtly different perturbations for each model from the same image, thereby increasing the sample size of perturbations. This process can potentially enhance the visibility of human-identifiable features by reducing noise.
Mathematically, generating perturbations with multiple neural networks and the incorporation of Gaussian noise can be expressed as follows:
$$V(x) = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} V_{ij}, \qquad V_{ij} = V_i\big(x + N_{ij}(0, \sigma^2)\big) \hspace{1cm} (1)$$

Here $V$ denotes the averaged perturbation, $x$ represents the input image, and $n$ and $m$ specify the number of noisy copies per model and the total number of models used, respectively. Each perturbation $V_{ij}$ is generated from the $i^{th}$ model, denoted $V_i(\cdot)$, using the $j^{th}$ sample of Gaussian noise $N_{ij}(0, \sigma^2)$, which has mean 0 and standard deviation $\sigma$. Note that while adding Gaussian noise is useful, it is not essential for revealing human-identifiable features, as discussed further in Appendix A.
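A minimal sketch of the averaging in Eqn. (1), assuming a `generate_perturbation(model, image)` routine that runs the chosen attack (e.g., BIM) and returns a perturbation; names, shapes, and the clamping to [0, 1] are illustrative.

```python
import torch

def mmg_perturbation(models, image, generate_perturbation, n_copies=10, sigma=0.02):
    """Average attack perturbations over many models and noisy image copies (Eqn. 1).

    models: list of source networks (the m models)
    image: input tensor of shape (C, H, W), pixel values in [0, 1]
    generate_perturbation: attack routine, e.g. BIM/CW (assumed available)
    """
    total = torch.zeros_like(image)
    for model in models:                        # loop over the m source models
        for _ in range(n_copies):               # n noisy copies per model
            noisy = image + sigma * torch.randn_like(image)
            total += generate_perturbation(model, noisy.clamp(0.0, 1.0))
    return total / (len(models) * n_copies)     # the averaged perturbation V
```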
4.1 Experimental Setup
In the ImageNet experiment, we investigate perturbations generated by gradient-based attacks, including Basic Iterative Method (BIM) (Kurakin et al., 2016), CW attack (Carlini & Wagner, 2017), and DeepFool attack (Moosavi-Dezfooli et al., 2016) as well as search-based attacks, including Square attack (Andriushchenko et al., 2020), and One-pixel attack (Su et al., 2019). We will first discuss gradient-based attacks, followed by search-based attacks. For the experiment, we selected 20 classes out of 1000 classes in the validation set and used the first 10 images in each class, yielding a total of 200 images. All images are scaled to a [0,1] pixel value range.
We divided all neural networks used in the experiment into two categories: source models and testing models. Source models are used to create adversarial perturbations. Testing models are used to evaluate both the attack strength and how obvious the human-identifiable features are in the perturbations.
Perturbations are generated in two distinct settings:
Single model setting (SM): In this setting, we generate perturbations in the usual way, i.e. sending an image into a single source model, specifically ResNet50 (He et al., 2016), and obtain corresponding perturbations, which serve as a baseline for comparison.
Multiple models with Gaussian noise setting (MM+G): In this setting, we generate perturbations according to Eqn. 1. We use Gaussian noise with a mean of 0 to generate perturbations. The variance is kept low to minimally affect perturbations and is determined by the employed attack algorithms. For the BIM attack, we use a standard deviation of 0.02. For both CW and DeepFool attacks, the standard deviation is 0.05. For each input image, we add 10 different Gaussian noise samples according to the above-mentioned method. We repeat this for each of the 270 source models used in the experiment, resulting in a total of 2,700 perturbations for each image. These 2,700 perturbations are then averaged to create one final perturbation for further analysis.
In our experiment, we download 274 models with diverse sets of architecture from PyTorchCV (Sémery, 2018). 270 models are used as source models in the MM+G setting, and the remaining four are designated as testing models, including VGG-16 (Simonyan & Zisserman, 2014), ResNet-50 (He et al., 2016), DenseNet-121 (Huang et al., 2017), and BN-Inception (Szegedy et al., 2016). Note that in the SM setting, the source model ResNet-50 is identical to the ResNet-50 model used in the testing models, which is known as a white-box attack.
Due to space constraints, the experimental setups, details of attack algorithms, techniques for managing perturbations, and experimental results are shown in Appendix B. The experimental setup for the MNIST and CIFAR-10 datasets is listed in Appendix C.
5 Experimental Results
5.1 Types of Human-Identifiable Features
To gain a better understanding of the paper, we introduce two types of human-identifiable features based on our experimental results: the masking effect and the generation effect.
For the masking effect, the perturbations mimic specific features in the input image but are inverted to act as the negative counterparts of those features. When these perturbations are combined with the original image, they lower the contrast in pixel values of key classification features. This reduces the value of the inner product between the image and the gradient of its labeled class score, thus increasing the likelihood of misclassification. The "masking effect" usually occurs in untargeted attacks.
The generation effect is characterized by the injection of additional features into the original image. These features closely imitate the attributes of a different class, causing human observers and neural networks to misclassify the image into an incorrect class. This effect is primarily observed in targeted attacks.
$^1$The selected classes are: great white shark, cock, tree frog, green mamba, giant panda, ambulance, barn, baseball, broom, bullet train, cab, cannon, teapot, teddy, trolleybus, wallet, lemon, pizza, cup, and daisy.
5.2 Untargeted Attack
Figure 2 shows three images from the ImageNet dataset with their corresponding adversarial perturbations generated in the SM and MM+G settings for BIM, CW, and DeepFool attack algorithms under the untargeted attack mode.
In the SM setting, the perturbation contains a large amount of noise and incomplete feature information that obscures, for example, the features of a shark, shown in the middle of Figure 2. In the MM+G setting, by comparison, the perturbation reveals an identifiable feature, the shark contour, which resembles that of the original image. This observation supports our assumption that human-identifiable features can be hidden within perturbations: due to background noise and incomplete feature information, the shark in a single-model perturbation may not be noticeable to human eyes. Averaging perturbations from different models effectively minimizes noise and assembles the incomplete parts of the features, leading to the emergence of pronounced human-identifiable features.
The phenomenon that perturbations contain features that resemble those of the original image with a flip sign, demonstrated in Figure 2, is the masking effect, as discussed previously. We emphasize that the masking effect is consistently observed in perturbations generated under the MM+G setting throughout our experiments. Please refer to Appendix D for additional examples of the masking effect.

**Figure 2:** Adversarial perturbations generated by untargeted attacks in the SM and MM+G settings. In the SM setting, perturbations appear as noise to humans. In the MM+G setting, by contrast, perturbations reveal clear, human-identifiable features that resemble those in the original images; this is the masking effect discussed in the text.
5.2.1 Evaluating Recognizability of Perturbations
In this experiment, we assess the recognizability of human-identifiable features in adversarial perturbations by conducting evaluation tests on all perturbations generated in the previous experiment. For enhanced visualization, the perturbations are linearly scaled according to the method outlined in Appendix B.2. Since the masking effect produces features that closely mirror those in the original images, we should be able to infer the original image’s label from the generated perturbations if these features are distinct enough. Therefore, in the human evaluation test, we assign the label of each perturbation’s corresponding image as the correct answer.
For the human evaluation test, we randomly divided the 200 perturbations into four equal subsets. In each subset, twelve different participants were tasked with assigning the most appropriate label to each perturbation, selecting from a predefined list of 20 classes in our experimental dataset. After discarding the highest and lowest classification accuracy in each subset, we calculated an average accuracy of 80.7% for the BIM attack algorithm under the MM+G setting. This high level of accuracy suggests that the features introduced by the masking effect are indeed highly recognizable for humans. For reference, random guessing yields a 5% accuracy rate.
Due to resource constraints, a comprehensive human evaluation across all settings (SM, MM+G) and the three attack algorithms was not feasible. Instead, we employed the VGG-16 model, which is not among our source models, for machine evaluation. The settings for the evaluation mirrored those for human testing, except that we multiplied the perturbations by 0.5 to mitigate the effect of domain shift, thereby improving the model's classification accuracy. Additionally, we designated the model's output class as the one with the highest output value among the 20 pre-defined classes, consistent with the method used in the human evaluation test.
In the SM setting, VGG-16’s accuracy is 5.5%, 4.5%, and 5.0% for the BIM, CW, and DeepFool attacks, respectively, which is roughly equivalent to random guessing. In the MM+G setting, by contrast, VGG-16 achieved accuracies of 56.0%, 38.0%, and 38.0% for the BIM, CW, and DeepFool attack algorithms, respectively. It is worth noting that the model’s classification performance was much less accurate than that of human participants, largely due to the domain shift introduced by classifying perturbations instead of images. The fact that both human and machine classifiers significantly outperform random guessing suggests that the perturbations contain features that are crucial for classifying the original images.
### 5.2.2 Evaluating Attack Strength
To verify that adversarial perturbations obtained from the MM+G setting can still attack neural networks, we evaluated their attack strength. We used 200 adversarial perturbations generated in the experiment under both SM and MM+G settings. Then, we processed them by multiplying a quantity $\varepsilon$ with their signs, as shown below:
$$V' = \varepsilon \cdot \text{sign}(V) \hspace{1cm} (2)$$

Here $V$ and $V'$ are the original and processed perturbations, respectively, and $\varepsilon = 0.02$ for all attack algorithms. This process is similar to the fast gradient sign method (Goodfellow et al., 2015). As a result, every perturbation has the same $L_\infty$ and $L_2$ norm. Next, we incorporated the processed perturbations into the input images and sent them to the testing models to evaluate their classification accuracy.
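A minimal sketch of this processing step, with pixel values assumed in $[0, 1]$ and illustrative function names:

```python
import numpy as np

def process_and_apply(V, image, eps=0.02):
    """Normalize a perturbation as in Eqn. (2) and add it to the image,
    so that all perturbations share the same L_inf and L_2 norms."""
    V_processed = eps * np.sign(V)
    return np.clip(image + V_processed, 0.0, 1.0)
```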
On average, the four testing models correctly classify 81.8% of the input images. However, in the SM setting, incorporating perturbations generated by the BIM attack lowers classification accuracy to 63.3%. In the MM+G setting, perturbations further reduce classification accuracy to 13.2% for the BIM attack. Similar outcomes are observed for the CW and DeepFool attacks. This confirms the strong attack ability of perturbations generated within the MM+G setting, suggesting that the essence of the perturbations remains unaltered by the averaging in Eqn. (1). For comprehensive information on attack accuracy, please consult Appendix E.
### 5.2.3 Experiment on Contour Extraction
In this section, our objective is to ascertain whether human-identifiable features serve as a key factor in fooling neural networks. We first designate the area within the contour of labeled objects in the input image as the "contour," and the area outside of this contour as the "background". Since precisely defining human-identifiable features is a complex task, we opt for a more inclusive definition. Thus we redefine the contour part of perturbations as human-identifiable features. This is based on the observation that humans classify images primarily based on internal contour information rather than background information. We then use pixel-level annotation from the ImageNet-S dataset (Gao et al., 2022) to extract the contour part of the perturbations. A visual inspection of the contour extracted from perturbations identifies a good alignment with the human-identifiable features. For examples showcasing the results, please refer to Appendix F.
After separating contours and backgrounds from 200 adversarial perturbations, we process them with Eqn. (2), keeping $\varepsilon = 0.02$. These perturbations are applied to the original images and tested across the four models. The results show that contour-only perturbations, generated by the BIM, CW, and DeepFool algorithms, dramatically decrease the average model accuracy from 81.8% to between 32.2% and 37.0%. In comparison, perturbations affecting only the background result in mild drops in accuracy, to between 60.6% and 69.0%. It is important to note that the areal ratio of the contour to the background in the images stands at 0.83. This suggests that even though the area covered by the perturbed contour and its associated $L_2$ norm are comparatively smaller than those of the background, the contour’s ability to facilitate effective attacks is substantially greater.
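The contour/background separation used above can be sketched as follows, assuming a boolean segmentation mask such as the pixel-level annotations from ImageNet-S; shapes and names are illustrative.

```python
import numpy as np

def split_contour_background(V, mask):
    """Split a perturbation into contour and background parts.

    V: perturbation of shape (H, W, C)
    mask: boolean array of shape (H, W), True inside the labeled object
    """
    m = mask[..., None].astype(V.dtype)   # broadcast mask over channels
    return V * m, V * (1.0 - m)           # (contour part, background part)
```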
In our subsequent analysis, we vary $\varepsilon$ from 0.01 to 0.1 in Eqn. (2) at 0.01 intervals, altering the $L_\infty$ norm of the contours and backgrounds to observe changes in the testing models’ classification accuracy. In Figure 3, the x-axis is the perturbation’s $L_\infty$ norm, while the y-axis shows the average accuracy over the four testing models. Data points labeled only by the attack algorithm denote contour-extracted perturbations, whereas those labeled by both the attack algorithm and ‘BG’ denote background-extracted ones. The differences in model accuracy between contour removal and background removal displayed in Figure 3 again demonstrate that human-identifiable features are critical to adversarial attacks.
5.2.4 SEARCH-BASED ATTACKS
Previous experiments in this paper focused on gradient-based attacks. Here we examine whether search-based attacks like Square attack and One-pixel attack also generate perturbations containing human-identifiable features. The experimental setup remains the same as before, except we skip the process of adding Gaussian noise to input images since Square attack and One-pixel attack are inherently stochastic.
Random search-based algorithms are less efficient than backpropagation in terms of optimization capability, and they require more computational resources to produce pronounced human-identifiable features. Consequently, we limited our study to a subset of the 200 images used previously. Parameter details for the attack algorithms are listed in Appendix C.
In Figure 4, we show the perturbations generated under the MM+G setting by the Square attack and the One-pixel attack. The figure shows that search-based attack algorithms also reveal the masking effect, even in the absence of Gaussian noise, as long as a sufficient number of perturbations are averaged. This strengthens our argument that adversarial perturbations contain human-identifiable features. More images are shown in Appendix D.
5.2.5 CONVERGENCE OF PERTURBATIONS
Our assumptions suggest that, under untargeted attacks in the MM+G setting, perturbations of the same image from different attack algorithms are likely to be similar. This is because the masking effect reduces the contrast on key features of an image, and those key features should not depend on the attack algorithm. Furthermore, noise has near-zero similarity across different perturbations, which lowers the overall similarity among them; in the MM+G setting, however, this noise is removed through averaging. As a result, we expect increased similarity between perturbations of the same image across different attack algorithms. To quantify this, we compute the cosine similarity between adversarial perturbations produced by different attack algorithms for the same image under the MM+G setting.
The calculation of cosine similarity is based on the contour part of the perturbations, which can be extracted using pixel-level labels from the ImageNet-S dataset, as described in Section 5.2.3. We then averaged these cosine similarity values across all 200 perturbations used in the experiment.
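A minimal sketch of this computation for two perturbations of the same image, assuming a boolean object mask as in Section 5.2.3; names and shapes are illustrative.

```python
import numpy as np

def contour_cosine_similarity(V_a, V_b, mask):
    """Cosine similarity between the contour parts of two perturbations
    of the same image, produced by two different attack algorithms."""
    a = (V_a * mask[..., None]).ravel()
    b = (V_b * mask[..., None]).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```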
The averaged cosine similarities are notably high, ranging between 0.43 and 0.64, which agrees with our expectations and is shown in Figure 5. The similarity matrix for the MM and SM settings is shown in Appendix A.3.
6 TARGETED ATTACK
In the targeted attack mode, the generation effect becomes prominent. Although the experimental setup for targeted and untargeted attacks is the same, we use different image classes from the ImageNet dataset. This choice is dictated by the strong correlation between the class being attacked and the prominence of human-identifiable features in an image. For highly disparate classes, such as converting a car into a snake, different neural networks yield a variety of features, since there are many feasible ways to make this transformation. Consequently, when averaging the perturbations generated by different models, the features tend to cancel out, making it challenging to obtain conclusive results. In contrast, when transforming closely related classes, such as hens into cocks, the required perturbations are more consistent across models; for example, adding a crest turns a hen into a cock.
Figure 6 showcases adversarial examples generated by the CW attack under targeted attack mode. For the two targeted attacks, the input images are labeled as Siamese cat and hen, and the target classes are designated as tiger and cock, respectively. Under the MM+G setting, we notice the change in the cat’s fur to orange, more pronounced stripes, and a shift in eye color from blue to orange, all of which are features synonymous with a tiger, as shown in Figure 6(a). In Figure 6(b), the increase of the redness and the size of the crest on a hen are observed, making the hen resemble a cock. Additionally, a green patch appears below the chicken’s head, and its feathers take on more brilliant hues, all are typical cock traits. Both examples reinforce our hypothesis that adversarial perturbations carry human-identifiable features. The generation effect is subtler than the masking effect. We contemplate that this subtlety is due to the lack of a standard transformation in the generation effect. This subtlety is also reflected in the challenge of transferring targeted attacks across diverse models, as noted in Liu et al. (2017) and Wang et al. (2023).
7 DISCUSSION
We have proposed that adversarial perturbations contain human-identifiable features, which in turn contribute to the misclassification by neural networks. Based on this concept, three important phenomena of adversarial perturbations can be explained:
Figure 5: Cosine similarity for perturbations generated from different attack algorithms
Figure 6: Examples of adversarial examples generated from targeted attack algorithm under MM+G and SM settings: (a) Transforming a Siamese cat to a tiger. (b) Transforming a hen to a cock. Under the MM+G setting, the tiger-like traits and the features of a cock become more pronounced, demonstrating the effect of generation.
(1) Transferability: Researchers have found that a single perturbation can deceive multiple neural networks, extending beyond the model from which it was originally generated (Szegedy et al., 2014; Papernot et al., 2016). According to research on explainable AI, the features used by neural networks for classification may align with those used by humans (Smilkov et al., 2017; Selvaraju et al., 2017). In our study, we have found that adversarial perturbations modify features in the original image that are critical for human classification. Since neural networks may rely on human-identifiable features for classification, and adversarial attacks alter some or all of these features, perturbations derived from different models are likely to alter overlapping features. As a result, perturbations are capable of transferring across various models.
(2) Improving neural networks’ interpretability via adversarial training: By utilizing adversarial examples in the training process, adversarial training enhances the interpretabilities of neural network gradients as well as perturbations. This, in turn, makes network decision-making mechanisms easier to understand (Ross & Doshi-Velez, 2018; Tsipras et al., 2019; Santurkar et al., 2019).
In Section 3.2, we have pointed out that human-identifiable features are difficult to observe primarily due to excessive noise in the gradient and incomplete feature information. In the following, we contemplate how adversarial training effectively mitigates these two issues, allowing the reappearance of human-identifiable features.
As long as the network’s gradient of the loss function remains roughly the same when incorporating perturbations to an input image (Goodfellow et al., 2015), adversarial training, in essence, minimizes the $L_2$ norm of the gradient (Simon-Gabriel et al., 2019). Since noise increases the $L_2$ norm of the gradient without yielding any performance benefit, it will be minimized during training. Furthermore, reducing the $L_2$ norm of the gradient results in a more evenly distributed range of gradient values, which facilitates the integration of incomplete human-identifiable features.
To illustrate the above argument, we consider a simplified example using a linear classifier. If two values of input data are equal, the classifier’s output remains the same as long as the sum of the corresponding weights is constant. The classifier can thus assign these weights arbitrarily, provided their sum remains unchanged. When minimizing the $L_2$ norm, the classifier aims to distribute the weights of the two values as evenly as possible. This avoids undue focus on specific regions while neglecting others, thus allowing for more complete information in the model’s weights. Given the piecewise linear properties of neural networks, similar results can be extended to neural networks.
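As a short worked version of this two-weight example (our illustration): with equal inputs, the classifier's output depends only on the weight sum $w_1 + w_2 = c$, and minimizing the $L_2$ norm spreads the weights evenly,

$$\min_{w_1 + w_2 = c} \left( w_1^2 + w_2^2 \right) = \frac{c^2}{2}, \quad \text{attained at } w_1 = w_2 = \frac{c}{2},$$

since $w_1^2 + w_2^2 \geq \frac{(w_1 + w_2)^2}{2}$ for all $w_1, w_2$, with equality if and only if $w_1 = w_2$.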
Consequently, similar to what we have demonstrated earlier with the MM+G setting, adversarial training not only reduces noise but also allows for more complete information in the gradient. This results in gradients/perturbations that are better aligned with human perception.
(3) Non-trivial accuracy for classifiers trained on a manipulated dataset: Researchers have found that classifiers trained on a dataset perturbed by targeted attacks, with images relabeled as the target class, surprisingly demonstrate high accuracy on a clean test set (Ilyas et al., 2019). The underlying causes of this phenomenon are not yet fully understood.
We provide a partial explanation for the phenomenon by pointing out that adversarial perturbations contain human-identifiable features. Even when the neural network is trained on seemingly incorrect labels, the perturbations still contain accurate human-identifiable features aligned with correct labels. As a result, the network can still be trained to correctly classify input data based on those human-identifiable features. This leads to a non-trivial classification accuracy on a clean dataset.
8 CONCLUSION
In this study, we make several noteworthy discoveries concerning adversarial perturbations. First, when we average perturbations over different neural networks for a single image, we find two types of features that are easily recognizable by humans. Second, the contour part of these perturbations is significantly more effective at attacking models than that of the background part. Third, when averaged across different neural networks, perturbations created by different attack algorithms show notable similarities. These findings support the idea that human-identifiable features are inherently embedded in a large class of adversarial perturbations. This insight enables us to explain three related phenomena. Our study shows that human-identifiable features play an important role in fooling neural networks, which has been overlooked in the literature.
REFERENCES
Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, et al. Square attack: A query-efficient black-box adversarial attack via random search for the l2 norm. In *International Conference on Machine Learning*, 2020.
Anish Athalye, Logan Engstrom, Andrew Ilyas, et al. Synthesizing robust adversarial examples. *CoRR*, 2017.
Tom Brown, Dandelion Mané, Aurko Roy, et al. Adversarial patch. *arXiv*, 2017.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In *IEEE Symposium on Security and Privacy*, 2017.
Jia Deng, Wei Dong, Richard Socher, et al. ImageNet: A large-scale hierarchical image database. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2009.
Li Deng. The MNIST database of handwritten digit images for machine learning research. *IEEE Signal Processing Magazine*, 29(6):141–142, 2012.
Gamaleldin Elsayed, Shreya Shankar, Brian Cheung, et al. Adversarial examples that fool both computer vision and time-limited humans. In *Advances in Neural Information Processing Systems*, 2018.
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, et al. Robust physical-world attacks on deep learning visual classification. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2018.
Shanghua Gao, Zhong-Yu Li, Ming-Hsuan Yang, et al. Large-scale unsupervised semantic segmentation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In *International Conference on Learning Representations*, 2015.
Sicong Han, Chenhao Lin, Chao Shen, et al. Interpreting adversarial examples in deep learning: A review. *Association for Computing Machinery Computing Surveys*, 2023.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, et al. Deep residual learning for image recognition. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2016.
Geoffrey Hinton, Li Deng, Dong Yu, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. *IEEE Signal Processing Magazine*, 29: 82–97, 2012.
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, et al. Densely connected convolutional networks. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2017.
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, et al. Adversarial examples are not bugs, they are features. In *Advances in Neural Information Processing Systems*, 2019.
Hoki Kim. Torchattacks: A pytorch repository for adversarial attacks. *arXiv preprint arXiv:2010.01950*, 2020.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. [https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf), 2009.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. *arXiv*, 2016.
Yanpei Liu, Xinyun Chen, Chang Liu, et al. Delving into transferable adversarial examples and black-box attacks. In *International Conference on Learning Representations*, 2017.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: A simple and accurate method to fool deep neural networks. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2016.
|
ZEZ0CPmoSI
|
This paper uses the compressed gradient as shown in (3), which is a direct compression on the full gradient. Given the prevalence of error feedback techniques in handling compression errors, it is pertinent to consider whether the proposed matrix step size can be extended to incorporate error feedback methods.
|
Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization
Hanmin Li Avetik Karagulyan Peter Richtárik
King Abdullah University of Science and Technology
{hanmin.li, avetik.karagulyan, peter.richtarik}@kaust.edu.sa
Abstract
This paper introduces a new method for minimizing matrix-smooth non-convex objectives through the use of novel Compressed Gradient Descent (CGD) algorithms enhanced with a matrix-valued stepsize. The proposed algorithms are theoretically analyzed first in the single-node and subsequently in the distributed settings. Our theoretical results reveal that the matrix stepsize in CGD can capture the objective’s structure and lead to faster convergence compared to a scalar stepsize. As a byproduct of our general results, we emphasize the importance of selecting the compression mechanism and the matrix stepsize in a layer-wise manner, taking advantage of model structure. Moreover, we provide theoretical guarantees for free compression, by designing specific layer-wise compressors for the non-convex matrix smooth objectives. Our findings are supported with empirical evidence.
1 Introduction
The minimization of smooth and non-convex functions is a fundamental problem in various domains of applied mathematics. Most machine learning algorithms rely on solving optimization problems for training and inference, often with structural constraints or non-convex objectives to accurately capture the learning and prediction problems in high-dimensional or non-linear spaces. However, non-convex problems are typically NP-hard to solve, leading to the popular approach of relaxing them to convex problems and using traditional methods. Direct approaches to non-convex optimization have shown success, but their convergence and properties are not well understood, making them challenging for large-scale optimization. While the convex alternative has been extensively studied and is generally an easier problem, the non-convex setting is of greater practical interest, often being the computational bottleneck in many applications.
In this paper, we consider the general minimization problem:
$$\min_{x \in \mathbb{R}^d} f(x), \qquad (1)$$
where $f : \mathbb{R}^d \to \mathbb{R}$ is a differentiable function. In order for this problem to have a finite solution we will assume throughout the paper that $f$ is bounded from below.
Assumption 1. There exists $f^\text{inf} \in \mathbb{R}$ such that $f(x) \geq f^\text{inf}$ for all $x \in \mathbb{R}^d$.
The stochastic gradient descent (SGD) algorithm (Moulines & Bach, 2011; Bubeck et al., 2015; Gower et al., 2019) is one of the most common algorithms to solve this problem. In its most general form, it can be written as
$$x^{k+1} = x^k - \gamma g(x^k), \qquad (2)$$
where $g(x^k)$ is a stochastic estimator of $\nabla f(x^k)$ and $\gamma > 0$ is a positive scalar stepsize. A particular case of interest is the compressed gradient descent (CGD) algorithm (Khirirat et al., 2018), where the estimator $g$ is taken as a compressed alternative of the initial gradient:
$$g(x^k) = C(\nabla f(x^k)), \qquad (3)$$
and the compressor $C$ is chosen to be a "sparser" estimator that aims to reduce the communication overhead in distributed or federated settings. This is crucial, as highlighted in the seminal paper by
Konečný et al. (2016), which showed that the bottleneck of distributed optimization algorithms is the communication complexity. To deal with the limited resources of current devices, there are various compression objectives that are practical to achieve. These also include compressing the model broadcast from the server to the clients for local training, and reducing the computational burden of local training. These objectives are mostly complementary, but compressing gradients has the potential for the greatest practical impact, due to the slower upload speeds of client connections and the benefits of averaging (Kairouz et al., 2021). In this paper we focus on the latter problem.
An important subclass of compressors are the sketches. Sketches are linear operators defined on $\mathbb{R}^d$, i.e., $C(y) = Sy$ for every $y \in \mathbb{R}^d$, where $S$ is a random matrix. A standard example of such a compressor is the Rand-$k$ compressor, which randomly chooses $k$ entries of its argument and scales them with a scalar multiplier to make the estimator unbiased. Instead of communicating all $d$ coordinates of the gradient, one communicates only a subset of size $k$, thus reducing the number of communicated bits by a factor of $d/k$. Formally, Rand-$k$ is defined as follows: $S = \frac{d}{k} \sum_{j=1}^{k} e_{i_j} e_{i_j}^\top$, where $i_1, \ldots, i_k$ are $k$ coordinates chosen uniformly at random without replacement from $\{1, \ldots, d\}$ and $e_{i_j}$ is the $i_j$-th standard basis vector in $\mathbb{R}^d$. We refer the reader to (Safaryan et al., 2022) for an overview of compression operators.
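As a concrete illustration, here is a minimal sketch of the Rand-$k$ operator materialized as a matrix; in practice one would apply it without forming $S$ explicitly.

```python
import numpy as np

def rand_k_sketch(d, k, rng=np.random.default_rng(0)):
    """Rand-k sketch S = (d/k) * sum_j e_{i_j} e_{i_j}^T for k random
    coordinates i_1, ..., i_k drawn without replacement.
    The scaling d/k makes the sketch unbiased: E[S] = I_d."""
    idx = rng.choice(d, size=k, replace=False)
    S = np.zeros((d, d))
    S[idx, idx] = d / k
    return S
```

For a gradient `g`, the compressed estimator is then `S @ g`, which has only $k$ nonzero entries to communicate.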
Besides the assumption that the function $f$ is bounded from below, we also assume that it is $L$-matrix-smooth, since we aim to exploit the full information contained in the smoothness matrix $L$ and the stepsize matrix $D$.
**Assumption 2** (Matrix smoothness). There exists $L \in S^d_+$ such that
$$f(x) \leq f(y) + \langle \nabla f(y), x - y \rangle + \frac{1}{2} \langle L(x - y), x - y \rangle \qquad (4)$$
holds for all $x, y \in \mathbb{R}^d$.
The assumption of matrix smoothness, which is a generalization of scalar smoothness, has been shown to be a more powerful tool for improving supervised model training. In Safaryan et al. (2021), the authors proposed using smoothness matrices and suggested a novel communication sparsification strategy to reduce communication complexity in distributed optimization for convex objectives. The technique was adapted to three distributed optimization algorithms in the convex setting, resulting in significant communication complexity savings and consistently outperforming the baselines. The results of this study demonstrate the efficacy of the matrix smoothness assumption in improving distributed optimization algorithms.
The case of block-diagonal smoothness matrices is particularly relevant in various applications, such as neural networks (NN). In this setting, each block corresponds to a layer of the network, and we characterize the smoothness with respect to nodes in the $i$-th layer by a corresponding matrix $L_i$. Unlike in the scalar setting, we favor the similarity of certain entries of the argument over the others. This is because the information carried by the layers becomes more complex, while the nodes in the same layers are similar. This phenomenon has been observed visually in various studies, such as those by Yosinski et al. (2015) and Zintgraf et al. (2017).
We study two matrix-stepsized CGD-type algorithms and analyze their convergence properties for non-convex matrix-smooth functions. As mentioned earlier, we put special emphasis on the block-diagonal case. We design our sketches and stepsizes in a way that leverages this structure, and we show that in certain cases we can achieve compression without sacrificing overall communication complexity.
### 1.1 Related Work
Many successful convex optimization techniques have been adapted for use in the non-convex setting. Here is a non-exhaustive list: adaptivity (Dvinskikh et al., 2019; Zhang et al., 2020), variance reduction (J Reddi et al., 2016; Li et al., 2021), and acceleration (Guminov et al., 2019). A paper of particular importance for our work is that of Khaled & Richtárik (2020), which proposes a unified scheme for analyzing stochastic gradient descent in the non-convex regime. A comprehensive overview of non-convex optimization can be found in (Jain et al., 2017; Danilova et al., 2022).
A classical example of a matrix stepsized method is Newton’s method. This method has been popular in the optimization community for a long time (Gragg & Tapia, 1974; Miel, 1980; Yamamoto, 1987).
However, computing the stepsize as the inverse Hessian of the current iteration results in significant computational complexity. Instead, quasi-Newton methods use an easily computable estimator to replace the inverse of the Hessian (Broyden, 1965; Dennis & Moré, 1977; Al-Baali & Khalfan, 2007; Al-Baali et al., 2014). An example is the Newton-Star algorithm (Islamov et al., 2021), which we discuss in Section 2.
Gower & Richtárik (2015) analyzed sketched gradient descent by making the compressors unbiased with a sketch-and-project trick. They provided an analysis of the resulting algorithm for the linear feasibility problem. Later, Hanzely et al. (2018) proposed a variance-reduced version of this method. Sketches are also of independent interest. In particular, Song et al. (2023) described a way of designing the distribution of sketch matrices, while Lee et al. (2019); Qin et al. (2023) used sketches in solving empirical risk minimization problems.
Leveraging the layer-wise structure of neural networks has been widely studied for optimizing the training loss function. For example, (Zheng et al., 2019) propose SGD with different scalar stepsizes for each layer, (Yu et al., 2017; Ginsburg et al., 2019) propose layer-wise normalization for Stochastic Normalized Gradient Descent, and (Dutta et al., 2020; Wang et al., 2022) propose layer-wise compression in the distributed setting.
DCGD, proposed by Khirirat et al. (2018), has since been improved in various ways, such as in (Horvath et al., 2019; Li et al., 2020). There is also a large body of literature on other federated learning algorithms with unbiased compressors (Alistarh et al., 2017; Mishchenko et al., 2019; Gorbunov et al., 2021; Mishchenko et al., 2022; Maranjyan et al., 2022; Horváth et al., 2023).
1.2 Contributions
Our paper contributes in the following ways:
• We propose two novel matrix-stepsize sketch CGD algorithms in Section 2, which, to the best of our knowledge, are the first attempts to analyze a fixed matrix stepsize for non-convex optimization. We present a unified theorem in Section 3 that guarantees stationarity for minimizing matrix-smooth non-convex functions. The results show that our algorithms improve on their scalar alternatives. The complexities are summarized in Table 1 for some particular cases.
• We design our algorithms’ sketches and stepsize to take advantage of the layer-wise structure of neural networks, assuming that the smoothness matrix is block-diagonal. In Section 4, we prove that our algorithms achieve better convergence than classical methods.
• Assuming that server-to-client communication is less expensive (Konečný et al., 2016; Kairouz et al., 2021), we propose distributed versions of our algorithms in Section 5, following the standard FL scheme, and prove weighted stationarity guarantees. Our theorem recovers the result for DCGD in the scalar case and improves on it in general.
• We validate our theoretical results with experiments. The plots and framework are provided in the Appendix.
1.3 Preliminaries
The usual Euclidean norm on $\mathbb{R}^d$ is denoted by $\|\cdot\|$. We use bold capital letters to denote matrices. By $I_d$ we denote the $d \times d$ identity matrix, and by $O_d$ we denote the $d \times d$ zero matrix. Let $\mathbb{S}_d^{++}$ (resp. $\mathbb{S}_d^+$) be the set of $d \times d$ symmetric positive definite (resp. semi-definite) matrices. Given $Q \in \mathbb{S}_d^{++}$ and $x \in \mathbb{R}^d$, we write $\|x\|_Q := \sqrt{\langle Qx, x \rangle}$, where $\langle \cdot, \cdot \rangle$ is the standard Euclidean inner product on $\mathbb{R}^d$. For a matrix $A \in \mathbb{S}_d^{++}$, we denote by $\lambda_{\text{max}}(A)$ (resp. $\lambda_{\text{min}}(A)$) the largest (resp. smallest) eigenvalue of $A$. Let $A_i \in \mathbb{R}^{d_i \times d_i}$ and $d = d_1 + \ldots + d_\ell$. Then the matrix $A = \text{Diag}(A_1, \ldots, A_\ell)$ is defined as a block-diagonal $d \times d$ matrix whose $i$-th block is equal to $A_i$. We use $\text{diag}(A) \in \mathbb{R}^{d \times d}$ to denote the diagonal of any matrix $A \in \mathbb{R}^{d \times d}$. Given a function $f : \mathbb{R}^d \to \mathbb{R}$, its gradient and its Hessian at a point $x \in \mathbb{R}^d$ are denoted by $\nabla f(x)$ and $\nabla^2 f(x)$, respectively. A random vector $x \in \mathbb{R}^d$ is an $\varepsilon$-stationary point if $\mathbb{E}\left[\|\nabla f(x)\|^2\right] \leq \varepsilon^2$, where the expectation is over the randomness of the algorithm.
2 THE ALGORITHMS
Below we define our two main algorithms:
\[ x^{k+1} = x^k - D S^k \nabla f(x^k), \qquad \text{(det-CGD1)} \]
and
\[ x^{k+1} = x^k - T^k D \nabla f(x^k). \qquad \text{(det-CGD2)} \]
Here, \( D \in \mathbb{S}_{++}^d \) is the fixed stepsize matrix. The sequences of random matrices \( S^k \) and \( T^k \) satisfy the following assumption.
**Assumption 3.** We will assume that the random sketches that appear in our algorithms are i.i.d., unbiased, symmetric and positive semi-definite for each algorithm. That is
\[ S^k, T^k \in \mathbb{S}_+^d, \quad S^k \overset{\text{iid}}{\sim} S \quad \text{and} \quad T^k \overset{\text{iid}}{\sim} T \]
\[ \mathbb{E}[S^k] = \mathbb{E}[T^k] = I_d, \quad \text{for every} \quad k \in \mathbb{N}, \]
where \( S \) and \( T \) are probability distributions over \( \mathbb{S}_+^d \).
A simple instance of det-CGD1 and det-CGD2 is the vanilla GD. Indeed, if \( S^k = T^k = I_d \) and \( D = \gamma I_d \), then \( x^{k+1} = x^k - \gamma \nabla f(x^k) \). In general, one may view these algorithms as Newton-type methods. In particular, our setting includes the Newton Star (NS) algorithm by Islamov et al. (2021):
\[ x^{k+1} = x^k - (\nabla^2 f(x^\text{inf}))^{-1} \nabla f(x^k). \qquad \text{(NS)} \]
The authors prove that, in the convex case, it converges locally quadratically to the unique solution $x^\text{inf}$, provided certain assumptions are met. However, it is not a practical method, as it requires knowledge of the Hessian at the optimal point. This method nevertheless hints that a constant matrix stepsize can yield fast convergence guarantees. Our results allow us to choose $D$ depending on the smoothness matrix $L$. The latter can be seen as a uniform upper bound on the Hessian.
The difference between det-CGD1 and det-CGD2 is the update rule. In particular, the order of the sketch and the stepsize is interchanged. When the sketch \( S \) and the stepsize \( D \) are commutative w.r.t. matrix product, the algorithms become equivalent. In general, a simple calculation shows that if we take
\[ T^k = D S^k D^{-1}, \qquad (5) \]
then det-CGD1 and det-CGD2 are the same. Defining \( T^k \) according to (5), we recover the unbiasedness condition:
\[ \mathbb{E}[T^k] = D \, \mathbb{E}[S^k] \, D^{-1} = I_d. \qquad (6) \]
However, in general $D \, \mathbb{E}[S^k] \, D^{-1}$ is not necessarily symmetric, which contradicts Assumption 3. Thus, det-CGD1 and det-CGD2 are not equivalent for our purposes.
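Before moving to the analysis, here is a minimal single-node sketch of det-CGD1 with Rand-$k$ sketches; `grad_f`, the stepsize matrix `D`, and the iteration budget are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def det_cgd1(x0, grad_f, D, k, n_iters=1000, seed=0):
    """det-CGD1 sketch: x^{k+1} = x^k - D S^k grad_f(x^k),
    with S^k i.i.d. Rand-k sketches satisfying E[S^k] = I_d."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    d = x.size
    for _ in range(n_iters):
        g = grad_f(x)
        idx = rng.choice(d, size=k, replace=False)  # coordinates kept this round
        sg = np.zeros_like(g)
        sg[idx] = (d / k) * g[idx]                  # unbiased sketched gradient S^k g
        x = x - D @ sg                              # fixed matrix stepsize D
    return x
```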
3 MAIN RESULTS
Before we state the main result, we present the stepsize conditions for det-CGD1 and det-CGD2, respectively:
\[ \mathbb{E}[S^k D L D S^k] \preceq D, \qquad (7) \]
and
\[ \mathbb{E}[D T^k L T^k D] \preceq D. \qquad (8) \]
In the case of vanilla GD, conditions (7) and (8) reduce to $\gamma L \preceq I_d$, i.e., $\gamma \leq 1/\lambda_{\max}(L)$, which is the standard condition for convergence. Below is the main convergence theorem for both algorithms in the single-node regime.
**Theorem 1.** Suppose that Assumptions 1-3 are satisfied. Then, for every \( K \geq 1 \),
\[ \frac{1}{K} \sum_{k=0}^{K-1} \mathbb{E}\left[ \| \nabla f(x^k) \|^2_D \right] \leq \frac{2(f(x^0) - f^\text{inf})}{K}, \]
(9)
if one of the below conditions is true:
i) The vectors \( x^k \) are the iterates of det-CGD1 and \( D \) satisfies (7);
ii) The vectors \( x^k \) are the iterates of det-CGD2 and \( D \) satisfies (8).
It is important to note that Theorem 1 yields the same convergence rate for any \( D \in S^{d}_{++} \), despite the fact that the matrix norms on the left-hand side cannot be compared for different weight matrices. To ensure comparability of the right-hand side of (9), it is necessary to normalize the weight matrix \( D \) that is used to measure the gradient norm. We propose using determinant normalization, which involves dividing both sides of (9) by \( \det(D)^{1/d} \), yielding the following:
\[
\frac{1}{K} \sum_{k=0}^{K-1} \mathbb{E} \left[ \| \nabla f(x^k) \|^2_{D/\det(D)^{1/d}} \right] \leq \frac{2(f(x^0) - f^{\inf})}{\det(D)^{1/d} K}.
\]
(10)
This normalization is meaningful because rescaling the weight matrix to $\frac{D}{\det(D)^{1/d}}$ normalizes its determinant to 1, making the norm on the left-hand side comparable to the standard Euclidean norm. It is important to note that the volume of the normalized ellipsoid $\{ x \in \mathbb{R}^d : \| x \|^2_{D/\det(D)^{1/d}} \leq 1 \}$ does not depend on the choice of $D \in \mathbb{S}_{++}^d$. Therefore, the results of (9) become comparable across different $D$ after normalization, in the sense that the right-hand side of (10) measures the volume of an ellipsoid containing the gradients.
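As a quick sanity check, the normalization is a one-liner; the matrix below is an arbitrary illustrative choice.

```python
# Determinant normalization: after rescaling, det(D_hat) = 1, so the weighted
# norm becomes comparable with the Euclidean norm across different D.
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.standard_normal((d, d))
D = A @ A.T + np.eye(d)                      # some D in S_{++}^d
D_hat = D / np.linalg.det(D) ** (1.0 / d)    # normalized weight matrix
assert np.isclose(np.linalg.det(D_hat), 1.0)
```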
3.1 Optimal matrix stepsize
In this section, we describe how to choose the optimal stepsize that minimizes the iteration complexity. The problem is easier for det-CGD2. We notice that (8) can be explicitly solved. Specifically, it is equivalent to
\[
D \preceq (\mathbb{E} [T^k L T^k])^{-1}.
\]
We want to emphasize that the matrix on the right-hand side is invertible despite the sketches not being so. Indeed, the map $h : T \mapsto TLT$ is matrix convex on $\mathbb{S}_{+}^{d}$; therefore, Jensen's inequality implies
\[
\mathbb{E} [T^k L T^k] \succeq \mathbb{E} [T^k] L \mathbb{E} [T^k] = L \succ O_d.
\]
This explicit condition on \( D \) can assist in determining the optimal stepsize. Since both \( D \) and \( (\mathbb{E} [T^k L T^k])^{-1} \) are positive definite, the right-hand side of (10) is minimized exactly when
\[
D = \left(\mathbb{E}\left[T^k L T^k\right]\right)^{-1}.
\]
(12)
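For intuition, the expectation in (12) has a closed form for the rand-1 sketch: $\mathbb{E}[T^k L T^k] = d\,\mathrm{diag}(L)$, so the optimal stepsize is $\mathrm{diag}(L)^{-1}/d$ (cf. row 10 of Table 1, up to scaling). The Monte Carlo check below is a sketch under this assumption.

```python
# Sketch: optimal det-CGD2 stepsize (12) under the rand-1 sketch,
# T = d * e_j e_j^T with j uniform; then E[T L T] = d * diag(L).
import numpy as np

rng = np.random.default_rng(2)
d = 6
B = rng.standard_normal((d, d))
L = B @ B.T + np.eye(d)                       # smoothness matrix

# Monte Carlo estimate of E[T L T]; each sample is d^2 * L[j, j] * e_j e_j^T
js = rng.integers(d, size=200_000)
counts = np.bincount(js, minlength=d)
E_TLT = np.diag(counts / js.size * d**2 * np.diag(L))

print(np.allclose(E_TLT, d * np.diag(np.diag(L)), rtol=0.1))  # True
D_opt = np.linalg.inv(d * np.diag(np.diag(L)))                # stepsize (12)
```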
Note that the explicit solution for \( D \) needs to be calculated only once, at the beginning of the algorithm; it is then fixed for all iterations. The situation is different for det-CGD1: in view of (10), the optimal \( D \) is defined as the solution of the following constrained optimization problem:
\[
\begin{aligned}
\text{minimize} \quad & \log \det(D^{-1}) \\
\text{subject to} \quad & \mathbb{E}\left[S^k D L D S^k\right] \preceq D, \\
& D \in \mathbb{S}^{d}_{++}.
\end{aligned}
\]
(13)
**Proposition 1.** The optimization problem (13) with respect to the stepsize matrix \( D \in \mathbb{S}^{d}_{++} \) is a convex optimization problem with a convex constraint.
The proof of this proposition can be found in the Appendix. It is based on reformulating the constraint as an equivalent quadratic-form inequality. Using the trace trick, we show that this quadratic form is convex in \( D \) for every fixed vector. Since the intersection of convex sets is convex, the claim follows.
One could consider using the CVXPY (Diamond & Boyd, 2016) package to solve (13), provided that it is first transformed into a Disciplined Convex Programming (DCP) form (Grant et al., 2006). Nevertheless, (7) is not recognized as a DCP constraint in the general case. To make CVXPY applicable, additional steps tailored to the problem at hand must be taken.
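As an example of such tailoring, consider the special case $S^k = I_d$, where the constraint (7) reduces to $DLD \preceq D$. A Schur-complement argument turns this into the linear matrix inequality $\begin{pmatrix} D & DL^{1/2} \\ L^{1/2}D & I \end{pmatrix} \succeq 0$, which is DCP-representable. The sketch below is our illustration of this special case, not a general solution for arbitrary sketches.

```python
# Hedged CVXPY sketch of (13) for the identity sketch S^k = I_d.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
d = 5
B = rng.standard_normal((d, d))
L = B @ B.T + np.eye(d)
L_half = np.linalg.cholesky(L)            # a square root of L

D = cp.Variable((d, d), symmetric=True)
# Schur complement: the LMI below is equivalent to D L D <= D (with D >= 0)
lmi = cp.bmat([[D, D @ L_half],
               [L_half.T @ D, np.eye(d)]])
prob = cp.Problem(cp.Maximize(cp.log_det(D)), [lmi >> 0])
prob.solve(solver=cp.SCS)
# The optimum should be close to L^{-1} (cf. row 1 of Table 1)
print(np.abs(D.value - np.linalg.inv(L)).max())
```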
Table 1: Summary of communication complexities for det-CGD1 and det-CGD2 with different sketches and stepsize matrices. For det-CGD1, the \( D_i \) here is \( W_i \) with the optimal scaling determined using Theorem 2; for det-CGD2, it is the optimal stepsize matrix defined in (12). The constant \( 2(f(x^0) - f^{\inf})/\varepsilon^2 \) is hidden, \( \ell \) is the number of layers, and \( k_i \) is the mini-batch size for the \( i \)-th layer when the rand-\( k_i \) sketch is used. The notation \( \tilde{L}_{i,k} \) is defined as \( \frac{k}{d-1} \text{diag}(L_i) + \frac{k-1}{d-1} L_i \).
| No. | The method | \((S^k_i, D_i)\) | \( l \geq 1, d_i, k_i, \sum_{i=1}^\ell k_i = k \), layer structure | \( l = 1, k_i = k \), general structure |
|-----|------------|------------------|-------------------------------------------------|--------------------------------------|
| 1. | det-CGD1 | \((I_d, \gamma L^{-1})\) | \( d \cdot \det(L)^{1/d} \) | \( d \cdot \det(L)^{1/d} \) |
| 2. | det-CGD1 | \((I_d, \gamma \text{diag}^{-1}(L))\) | \( d \cdot \det(\text{diag}(L))^{1/d} \) | \( d \cdot \det(\text{diag}(L))^{1/d} \) |
| 3. | det-CGD1 | \((I_d, \gamma I_{d_i})\) | \( d \cdot (\prod_{i=1}^\ell d_i \lambda_{\max}(L_i))^{1/d} \) | \( d \cdot \lambda_{\max}(L) \) |
| 4. | det-CGD1 | \((\text{rand}-1, \gamma I_{d_i})\) | \( \ell \cdot (\prod_{i=1}^\ell d_i \lambda_{\max}(L_i))^{1/d} \) | \( \ell \cdot \lambda_{\max}(L) \) |
| 5. | det-CGD1 | \((\text{rand}-1, \gamma L^{-1})\) | \( \ell \cdot \left( \prod_{i=1}^\ell d_i \lambda_{\max}(L_i) \right)^{1/d} \) | \( \ell \cdot \lambda_{\max}(L) \) |
| 6. | det-CGD1 | \((\text{rand}-1, \gamma L^{-1/2})\) | \( \ell \cdot \left( \prod_{i=1}^\ell d_i \lambda_{\max}(L_i)^{1/2} \right)^{1/d} \) | \( \ell \cdot \lambda_{\max}(L) \) |
| 7. | det-CGD1 | \((\text{rand}-1, \gamma \text{diag}^{-1}(L))\) | \( \ell \cdot \left( \prod_{i=1}^\ell d_i \lambda_{\max}(L_i)^{1/2} \right)^{1/d} \) | \( \ell \cdot \lambda_{\max}(L) \) |
| 8. | det-CGD1 | \((\text{rand}-k_i, \gamma \text{diag}^{-1}(L_i))\) | \( k \cdot \left( \prod_{i=1}^\ell d_i \lambda_{\max}(L_i) \right)^{1/d} \) | \( k \cdot \lambda_{\max}(L) \) |
| 9. | det-CGD2 | \((I_d, L^{-1})\) | \( d \cdot \det(L)^{1/d} \) | \( d \cdot \det(L)^{1/d} \) |
| 10. | det-CGD2 | \((\text{rand}-1, \text{diag}^{-1}(L))\) | \( \ell \cdot \left( \prod_{i=1}^\ell d_i \lambda_{\max}(L_i) \right)^{1/d} \) | \( \ell \cdot \lambda_{\max}(L) \) |
| 11. | det-CGD2 | \((\text{rand}-k_i, \text{diag}^{-1}(L_i))\) | \( k \cdot \left( \prod_{i=1}^\ell d_i \lambda_{\max}(L_i) \right)^{1/d} \) | \( k \cdot \lambda_{\max}(L) \) |
| 12. | det-CGD2 | \((\text{Bern}-q_i, q_i L^{-1})\) | \( \left( \sum_{i=1}^\ell q_i \right)^{1/d} \) | \( \left( \sum_{i=1}^\ell q_i \right)^{1/d} \) |
| 13. | GD | \((I_d, \lambda_{\max}(L))\) | N/A | \( d \cdot \lambda_{\max}(L) \) |
4 LEVERAGING THE LAYER-WISE STRUCTURE
In this section we focus on the block-diagonal case of \( L \) for both det-CGD1 and det-CGD2. In particular, we propose hyper-parameters of det-CGD1 designed specifically for training NNs. Let us assume that \( L = \text{Diag}(L_1, \ldots, L_\ell) \), where \( L_i \in \mathbb{S}_{d_i}^{++} \). This setting generalizes the classical smoothness condition, which corresponds to \( L_i = L I_{d_i} \) for all \( i = 1, \ldots, \ell \). Accordingly, we choose both the sketches and the stepsize to be block-diagonal: \( D = \text{Diag}(D_1, \ldots, D_\ell) \) and \( S^k = \text{Diag}(S^k_1, \ldots, S^k_\ell) \), where \( D_i \in \mathbb{S}_{d_i}^{++} \) and \( S^k_i \in \mathbb{S}_{d_i}^{+} \).
Let us notice that the left hand side of the inequality constraint in (13) has quadratic dependence on \( D \), while the right hand side is linear. Thus, for every matrix \( W \in S^d_{++} \), there exists \( \gamma > 0 \) such that
\[
\gamma^2 \lambda_{\max} \left( E \left[ S^k W L W S^k \right] \right) \leq \gamma \lambda_{\min}(W).
\]
Therefore, for \( \gamma W \) we deduce
\[
E \left[ S^k (\gamma W) L (\gamma W) S^k \right] \preceq \gamma^2 \lambda_{\max} \left( E \left[ S^k W L W S^k \right] \right) I_d \preceq \gamma \lambda_{\min}(W) I_d \preceq \gamma W. \tag{14}
\]
The following theorem is based on this simple fact applied to the corresponding blocks of the matrices \( D, L, S^k \) for det-CGD1.
**Theorem 2.** Let \( f : \mathbb{R}^d \to \mathbb{R} \) satisfy Assumptions 1 and 2, with \( L \) admitting the layer-separable structure \( L = \text{Diag}(L_1, \ldots, L_\ell) \), where \( L_i \in \mathbb{S}_{d_i}^{++} \). Choose random matrices \( S^k_1, \ldots, S^k_\ell \) with \( S^k_i \in \mathbb{S}_{d_i}^{+} \) satisfying Assumption 3 for all \( i \in [\ell] \), and let \( S^k := \text{Diag}(S^k_1, \ldots, S^k_\ell) \). Furthermore, choose matrices \( W_i \in \mathbb{S}_{d_i}^{++} \) and scalars \( \gamma_1, \ldots, \gamma_\ell > 0 \) such that
\[
\gamma_i \leq \lambda_{\max}^{-1} \left( E \left[ W_i^{-1/2} S^k_i W_i L_i W_i S^k_i W_i^{-1/2} \right] \right) \quad \forall i \in [\ell]. \tag{15}
\]
Letting \( W := \text{Diag}(W_1, \ldots, W_\ell) \), \( \Gamma := \text{Diag}(\gamma_1 I_{d_1}, \ldots, \gamma_\ell I_{d_\ell}) \) and \( D := \Gamma W \), we get
\[
\frac{1}{K} \sum_{k=0}^{K-1} \mathbb{E} \left[ \| \nabla f(x^k) \|^2_{\Gamma W / \det(\Gamma W)^{1/d}} \right] \leq \frac{2(f(x^0) - f^{\inf})}{\det(\Gamma W)^{1/d} K}. \tag{16}
\]
In particular, if the scalars \( \{\gamma_i\} \) are chosen to be equal to their maximum allowed values from (15), then the convergence factor of (16) is equal to
\[
\det (\Gamma W)^{-\frac{1}{d}} = \left[ \prod_{i=1}^{\ell} \lambda_{\max}^{d_i} \left( \mathbb{E} \left[ W_i^{-\frac{1}{2}} S_i^k W_i L_i W_i S_i^k W_i^{-\frac{1}{2}} \right] \right) \right]^{\frac{1}{d}} \det(W^{-1})^{\frac{1}{d}}.
\]
Table 1 contains the (expected) communication complexities of det-CGD1, det-CGD2 and GD for several choices of \( W, D \) and \( S^k \). A few comments on the table are in order. Using a matrix stepsize without compression (row 1) improves upon GD (row 13). A careful analysis reveals that the result in row 5 is always worse than row 7 in terms of both communication and iteration complexity. However, the results in rows 6 and 7 are not comparable in general, meaning that neither is universally better. More discussion of this table can be found in the Appendix.
**Compression for free.** Now, let us focus on row 12, which corresponds to a sampling scheme where the \( i \)-th layer is independently selected with probability \( q_i \). Mathematically, it goes as follows:
\[
T_i^k = \frac{\eta_i}{q_i} I_{d_i}, \quad \text{where} \quad \eta_i \sim \text{Bernoulli}(q_i).
\]
Jensen’s inequality implies that
\[
\left( \sum_{i=1}^{\ell} q_i d_i \right) \cdot \prod_{i=1}^{\ell} \left( \frac{1}{q_i} \right)^{\frac{d_i}{d}} \geq d.
\]
The equality is attained when \( q_i = q \) for all \( i \in [\ell] \). The expected number of bits transferred per iteration of this algorithm then equals \( k_{\exp} = qd \), and the communication complexity equals \( d \det(L)^{1/d} \).
Comparing with the results for det-CGD2 with rand-\( k_{\exp} \) in row 11 and using the fact that \( \det(L) \leq \det(\text{diag}(L)) \), we deduce that the Bernoulli scheme is better than the uniform sampling scheme. Notice also that the communication complexity matches the one for the uncompressed det-CGD2 displayed in row 9. This, in particular, means that using the Bern-\( q \) sketches we can compress the gradients for free: we reduce the number of bits broadcast at each iteration without losing in total communication complexity. In particular, when all the layers have the same width \( d_i \), the number of bits broadcast per iteration is reduced by a factor of \( q \).
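A minimal sampler for this scheme is sketched below; the layer widths and probabilities are placeholders.

```python
# Bern-q layer sampling: layer i is kept with probability q_i and scaled by
# 1/q_i, so that E[T^k] = I_d.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(5)
layer_dims = [3, 5, 2]                    # d_i per layer (placeholders)
q = [0.5, 0.5, 0.5]                       # equal q_i attains equality in Jensen

def bern_q_sketch():
    blocks = [((rng.random() < q_i) / q_i) * np.eye(d_i)
              for d_i, q_i in zip(layer_dims, q)]
    return block_diag(*blocks)            # T^k = Diag(T_1^k, ..., T_ell^k)

T = bern_q_sketch()
```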
### 5 DISTRIBUTED SETTING
In this section we describe the distributed versions of our algorithms and present convergence guarantees for them. Let us consider an objective function that is sum decomposable:
\[
f(x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\]
where each \( f_i : \mathbb{R}^d \to \mathbb{R} \) is a differentiable function. We assume that \( f \) satisfies Assumption 1 and the component functions satisfy the below condition.
**Assumption 4.** Each component function \( f_i \) is \( L_i \)-smooth and is bounded from below: \( f_i(x) \geq f_i^{\inf} \) for all \( x \in \mathbb{R}^d \).
This assumption also implies that \( f \) is matrix-smooth with \( \bar{L} := \frac{1}{n} \sum_{i=1}^{n} L_i \in \mathbb{S}_{++}^d \).
Following the standard FL framework (Konečnỳ et al., 2016; McMahan et al., 2017; Khirirat et al., 2018), we assume that the \( i \)-th component function \( f_i \) is stored on the \( i \)-th client. At each iteration, the clients compute and compress their local gradients \( \nabla f_i \) in parallel and communicate them to the central server. The server then aggregates the compressed gradients, computes the next iterate, and broadcasts it back to the clients. See Algorithms 1 and 2 below for the details.
**Theorem 3.** Let \( f_i : \mathbb{R}^d \to \mathbb{R} \) satisfy Assumption 4 and let \( f \) satisfy Assumption 1 and Assumption 2 with smoothness matrix \( L \). If the stepsize satisfies
\[
DLD \preceq D,
\]
then the following convergence bound is true for the iterates of Algorithm 1:
$$\min_{0 \leq k \leq K-1} \mathbb{E}\left[ \|\nabla f(x^k)\|^2_{D/\det(D)^{1/d}} \right] \leq \frac{2(1 + \lambda_D/n)^K (f(x^0) - f^{\inf})}{\det(D)^{1/d} K} + \frac{2\lambda_D \Delta^{\inf}}{\det(D)^{1/d} n}, \tag{20}$$
where $\Delta^\inf := f^\inf - \frac{1}{n} \sum_{i=1}^n f_i^\inf$ and
$$\lambda_D := \max_i \left\{ \lambda_{\max} \left( \mathbb{E} \left[ L_i^{1/2} (S_i^k - I_d) DLD (S_i^k - I_d) L_i^{1/2} \right] \right) \right\}.$$
Algorithm 1 Distributed det-CGD1
1: **Input:** Starting point $x^0$, stepsize matrix $D$, number of iterations $K$
2: **for** $k = 0, 1, 2, \ldots, K - 1$ **do**
3: The devices in parallel:
4: sample $S_i^k \sim S$;
5: compute $S_i^k \nabla f_i(x^k)$;
6: broadcast $S_i^k \nabla f_i(x^k)$.
7: The server:
8: combines $g^k = \frac{D}{n} \sum_{i=1}^n S_i^k \nabla f_i(x^k)$;
9: computes $x^{k+1} = x^k - g^k$;
10: broadcasts $x^{k+1}$.
11: **end for**
12: **Return:** $x^K$
Algorithm 2 Distributed det-CGD2
1: **Input:** Starting point $x^0$, stepsize matrix $D$, number of iterations $K$
2: **for** $k = 0, 1, 2, \ldots, K - 1$ **do**
3: The devices in parallel:
4: sample $T_i^k \sim T$;
5: compute $T_i^k D \nabla f_i(x^k)$;
6: broadcast $T_i^k D \nabla f_i(x^k)$.
7: The server:
8: combines $g^k = \frac{1}{n} \sum_{i=1}^n T_i^k D \nabla f_i(x^k)$;
9: computes $x^{k+1} = x^k - g^k$;
10: broadcasts $x^{k+1}$.
11: **end for**
12: **Return:** $x^K$
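A toy single-process simulation of Algorithm 1 on quadratic local objectives is sketched below; the objectives and the deliberately conservative diagonal stepsize are our illustrative assumptions rather than an optimized choice, and the method converges to a neighborhood of stationarity as predicted by Theorem 3.

```python
# Toy simulation of distributed det-CGD1 with rand-1 sketches on
# f_i(x) = (x - a_i)^T L_i (x - a_i) / 2 (our illustrative assumptions).
import numpy as np

rng = np.random.default_rng(6)
n, d, K = 4, 5, 500
L_list = [np.diag(rng.uniform(1.0, 5.0, d)) for _ in range(n)]
a_list = [rng.standard_normal(d) for _ in range(n)]
L_bar = sum(L_list) / n
D = np.diag(1.0 / np.diag(L_bar)) / (2 * d)      # conservative diagonal stepsize

def rand1():
    j = rng.integers(d)
    S = np.zeros((d, d)); S[j, j] = d
    return S

x = np.zeros(d)
for _ in range(K):
    # devices: sketch local gradients; server: average and apply D
    g = sum(rand1() @ (L_i @ (x - a_i)) for L_i, a_i in zip(L_list, a_list)) / n
    x = x - D @ g

full_grad = sum(L_i @ (x - a_i) for L_i, a_i in zip(L_list, a_list)) / n
print(np.linalg.norm(full_grad))                 # small: x is near-stationary
```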
The same result is true for Algorithm 2 with a different constant $\lambda_D$. The proof of Theorem 3 and its analogue for Algorithm 2 are presented in the Appendix. The analysis is largely inspired by (Khaled & Richtárik, 2020, Theorem 1). Now, let us examine the right-hand side of (20). We start by observing that the first term has exponential dependence on $K$. However, the term inside the brackets, $1 + \lambda_D/n$, depends on the stepsize $D$. Furthermore, it has a second-order dependence on $D$, implying that $\lambda_{\alpha D} = \alpha^2 \lambda_D$, as opposed to $\det(\alpha D)^{1/d}$, which is linear in $\alpha$. Therefore, we can choose a small enough coefficient $\alpha$ to ensure that $\lambda_D$ is of order $n/K$. This means that for a fixed number of iterations $K$, we choose the matrix stepsize to be "small enough" to guarantee that the numerator of the first term is bounded. The following corollary summarizes these arguments, and its proof can be found in the Appendix.
**Corollary 1.** We reach $\varepsilon$-stationarity, i.e., the right-hand side of (20) is upper bounded by $\varepsilon^2$, if the following conditions are satisfied:
$$DLD \preceq D, \quad \lambda_D \leq \min \left\{ \frac{n}{K}, \frac{n\varepsilon^2 \det(D)^{1/d}}{4\Delta^{\inf}} \right\}, \quad K \geq \frac{12(f(x^0) - f^{\inf})}{\det(D)^{1/d} \varepsilon^2}. \tag{21}$$
Proposition 3 in the Appendix proves that these conditions with respect to $D$ are convex. In order to minimize the iteration complexity for reaching an $\varepsilon^2$ error, one needs to solve the following optimization problem:
$$\text{minimize} \quad \log \det(D^{-1})$$
subject to $D$ satisfies (21).
Choosing the optimal stepsize for Algorithm 1 is analogous to solving (13). One can formulate the distributed counterpart of Theorem 2 and attempt to solve it for different sketches. Furthermore, this leads to a convex matrix minimization problem involving $D$. We provide a formal proof of this property in the Appendix. Similar to the single-node case, computational methods can be employed using the CVXPY package. However, some additional effort is required to transform (21) into the disciplined convex programming (DCP) format.
The second term in (20) corresponds to the convergence neighborhood of the algorithm. It does not depend on the number of iterations; thus it remains unchanged once the stepsize is chosen. Nevertheless, it depends on the number of clients $n$. In general, the term $\Delta^{\inf}/n$ can be unbounded as $n \to +\infty$. However, per Corollary 1, we require $\lambda_D$ to be upper-bounded by $n/K$. Thus,
Figure 1: Comparison of standard DCGD, DCGD with matrix smoothness, D-det-CGD1 and D-det-CGD2 with optimal diagonal stepsizes under rand-1 sketch. The stepsize for standard DCGD is determined using (Khaled & Richtárik, 2020, Proposition 4), the stepsize for DCGD with matrix smoothness along with $D_1$, $D_2$ is determined using Corollary 1, the error level is set to be $\varepsilon^2 = 0.0001$. Here $G_{K,D} := \frac{1}{K} \left( \sum_{k=0}^{K-1} \| \nabla f(x^k) \|_D^2 / \det(D)^{1/d} \right)$.
the neighborhood term will indeed converge to zero when $K \to +\infty$, if we choose the stepsize accordingly.
We compare our results with the existing results for DCGD. In particular, we apply the technique of (Khaled & Richtárik, 2020, Corollary 1) to scalar-smooth DCGD with scalar stepsizes; see the Appendix for the details of this analysis. Finally, we back up our theoretical findings with experiments. See Figure 1 for a simple experiment confirming that Algorithms 1 and 2 have better iteration and communication complexity than scalar-stepsize DCGD. The curves of the two proposed algorithms coincide, since the diagonal stepsize and the diagonal sketch commute, making the two methods identical. For more details on the experiments we refer the reader to the corresponding section in the Appendix.
6 CONCLUSION
In this paper, we enhance the compressed gradient descent method with a matrix-valued stepsize for general non-convex objectives. Convergence guarantees are provided for the algorithms both in the single-node case and in the distributed setting. By exploiting the layer-wise structure of models such as neural networks, we are able to design compression mechanisms that achieve compression for free. To the best of our knowledge, this is the first time a matrix stepsize is used and analyzed together with compression in the non-convex case. Our theoretical findings are supported by extensive numerical experiments.
6.1 LIMITATIONS
It is worth noting that every point in $\mathbb{R}^d$ can be enclosed within some ellipsoid of volume 1. Therefore, having the average $D$-norm of the gradient bounded by a small number does not guarantee that the average Euclidean norm is small. However, for a fixed $D$, the standard Euclidean norm is equivalent to the weighted $D$-norm, due to
$$\frac{\lambda_{\min}(D)}{\det(D)^{1/d}} \|\nabla f(x)\|^2 \leq \frac{\|\nabla f(x)\|_D^2}{\det(D)^{1/d}} \leq \frac{\lambda_{\max}(D)}{\det(D)^{1/d}} \|\nabla f(x)\|^2.$$
This relation is further validated by our experiments described in the Appendix.
REFERENCES
Mehiddin Al-Baali and H Khalfan. An overview of some practical quasi-Newton methods for unconstrained optimization. *Sultan Qaboos University Journal for Science [SQUJS]*, 12(2):199–209, 2007.
Mehiddin Al-Baali, Emilio Spedicato, and Francesca Maggioni. Broyden’s quasi-Newton methods for a nonlinear system of equations and unconstrained optimization: a review and open problems. *Optimization Methods and Software*, 29(5):937–954, 2014.
Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. *Advances in neural information processing systems*, 30, 2017.
Charles G Broyden. A class of methods for solving nonlinear simultaneous equations. *Mathematics of computation*, 19(92):577–593, 1965.
Sébastien Bubeck et al. Convex optimization: Algorithms and complexity. *Foundations and Trends® in Machine Learning*, 8(3-4):231–357, 2015.
Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. *ACM Transactions on Intelligent Systems and Technology (TIST)*, 2(3):1–27, 2011.
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, and Innokentiy Shibaev. Recent theoretical advances in non-convex optimization. In *High-Dimensional Optimization and Probability: With a View Towards Data Science*, pp. 79–163. Springer, 2022.
John E Dennis, Jr and Jorge J Moré. Quasi-Newton methods, motivation and theory. *SIAM review*, 19(1):46–89, 1977.
Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. *The Journal of Machine Learning Research*, 17(1):2909–2913, 2016.
Aritra Dutta, El Houcine Bergou, Ahmed M Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, and Panos Kalnis. On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 3817–3824, 2020.
Darina Dvinskikh, Aleksandr Ogaltsov, Alexander Gasnikov, Pavel Dvurechensky, Alexander Tyurin, and Vladimir Spokoiny. Adaptive gradient descent for convex and non-convex stochastic optimization. *arXiv preprint arXiv:1911.08380*, 2019.
Boris Ginsburg, Patrice Castonguay, Oleksii Hrinchuk, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, Huyen Nguyen, Yang Zhang, and Jonathan M Cohen. Stochastic gradient methods with layer-wise adaptive moments for training of deep networks. *arXiv preprint arXiv:1905.11286*, 2019.
Eduard Gorbunov, Konstantin P Burlachenko, Zhize Li, and Peter Richtárik. MARINA: Faster non-convex distributed learning with compression. In *International Conference on Machine Learning*, pp. 3788–3798. PMLR, 2021.
Robert M Gower and Peter Richtárik. Randomized iterative methods for linear systems. *SIAM Journal on Matrix Analysis and Applications*, 36(4):1660–1690, 2015.
Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, and Peter Richtárik. SGD: General analysis and improved rates. In *International Conference on Machine Learning*, pp. 5200–5209. PMLR, 2019.
William B Gragg and Richard A Tapia. Optimal error bounds for the Newton–Kantorovich theorem. *SIAM Journal on Numerical Analysis*, 11(1):10–13, 1974.
Michael Grant, Stephen Boyd, and Yinyu Ye. Disciplined convex programming. *Global optimization: From theory to implementation*, pp. 155–210, 2006.
|
McfYbKnpT8
|
Similarly, is there some way to benefit from the fact that the “shape” of the parameter settings is the same throughout the experiment and don’t change from one instance to the next? Is it possible to re-evaluate several performant hyperparameter settings from one instance on a new instance to quickly collect data?
|
L2P-MIP: Learning to Presolve for Mixed Integer Programming
Chang Liu\textsuperscript{1}, Zhichen Dong\textsuperscript{1}, Haobo Ma\textsuperscript{1}, Weilin Luo\textsuperscript{2}, Xijun Li\textsuperscript{2}, Bowen Pang\textsuperscript{2}, Jia Zeng\textsuperscript{2}, Junchi Yan\textsuperscript{1,*}
\textsuperscript{1}Department of Computer Science and Engineering, Shanghai Jiao Tong University
\textsuperscript{2}Huawei Noah’s Ark Lab
\{only-changer,niconi19,witcher,yanjunchi\}@sjtu.edu.cn
\{luoweilin3,pangbowen2,xijun.li,zeng.jia\}@huawei.com
PyTorch Code: \url{https://github.com/Thinklab-SJTU/L2P-MIP}
Abstract
Modern solvers for mixed integer programming (MIP) often rely on the branch-and-bound (B&B) algorithm, which can be of high time complexity, and presolving techniques are well designed to simplify the instance as pre-processing before B&B. However, such presolvers in the existing literature and open-source solvers are mostly set by default, agnostic to the specific input instance, and few studies have been reported on tailoring presolving settings. In this paper, we aim to dive into this open question and show that the MIP solver can indeed be largely improved by switching from default instance-agnostic presolving to instance-specific presolving. Specifically, we propose a combination of supervised learning and classic heuristics to achieve efficient presolving adjustment, avoiding tedious reinforcement learning. Notably, our approach is orthogonal to many recent efforts in incorporating learning modules into the B&B framework after the presolving stage, and to the best of our knowledge, this is the first work on introducing learning to presolve in MIP solvers. Experiments on multiple real-world datasets show that well-trained neural networks can infer proper presolving for arbitrary incoming MIP instances in less than 0.5s, which is negligible compared with the solving time, often hours or days.
1 Introduction and Related Work
Mixed integer programming (MIP) is a general optimization formulation for various real-world optimization applications, such as scheduling and production planning. In its commonly studied linear form, MIP minimizes a linear objective function over a set of integer points that satisfy a finite family of linear constraints. Due to the NP-hard nature of MIP, modern solvers (SCIP (Gamrath et al., 2020), GUROBI (Gurobi, 2021), CPLEX (IBM, 2021)) widely employ the branch-and-bound (B&B) algorithm. B&B traverses the candidate solutions systematically, with the set of candidate solutions forming a search tree with the full set at the root. However, B&B can suffer from severe scalability issues in branching selection, especially for real-world applications.
Efforts have been made to reduce the time cost of B&B by including an extra step: given an MIP instance, the solver first pre-processes and simplifies the instance before passing it to B&B. This step is usually named presolve, and various presolvers have been designed to reduce the size of the input instance. Via presolving, the original MIP instance is simplified by removing irrelevant information, e.g., redundant constraints and variables. After presolving, B&B only needs to solve the smaller simplified instance. Though the presolving step itself costs extra time, it leads to great time savings for the B&B algorithm and in total improves the performance of the MIP solver significantly (Achterberg et al., 2019). It has been shown in early studies (Bixby et al., 2004; Achterberg & Wunderling, 2013b) that, after appropriate presolving, a 1.3x speedup can be achieved and more than 15% of otherwise unsolvable instances become solvable within the time limit. Due to the page limit, we place the description of the commonly used presolvers in the Appendix (A.1).
*Correspondence author. The work was in part supported by National Key Research and Development Program of China (2020AAA0107600), Huawei Technologies, NSFC (62222607), and SJTU Trans-med Awards Research (STAR) 20210106.
Figure 1: Presolving in MIP solvers. For each incoming MIP instance, the solver first presolves it to simplify the problem, which includes multiple rounds and the utilization of multiple presolvers. Then, the simplified instance is passed to the B&B algorithm and solved.
In existing MIP solvers, presolving is routinely adopted with the default setting, agnostic to the input instance. As the default setting may not always be suitable, several works (Hutter et al., 2009; 2011; Lindauer et al., 2022) propose to find one single robust configuration (setting) for all problem instances. However, they still cannot tailor presolving to each unseen instance. In this paper, we argue that tailoring suitable presolving for each individual instance can reach better performance. Researchers of the latest work (Galabova, 2023) conclude that “an analysis of presolve would be incomplete without an investigation of this effect for particular instances”, which necessitates instance-specific presolving. Moreover, the value of instance-specific tailoring in presolving has been empirically shown in (Frank et al., 2010).
To this end, we try to design an efficient method that tailors presolving for MIP instances and can be integrated into existing MIP solvers. In general, customizing presolving involves picking the next presolver (determining the order), limiting the maximum number of rounds each presolver is used, and setting the timing of each presolver. Notably, some of these choices interact; for example, the order of the presolvers can influence their utilization rate and efficiency. Hence, finding the best presolving is a challenging task that remains under-studied in the literature.
To achieve instance-adaptive presolving, one possible way is to use heuristic algorithms to search for the best presolving. However, heuristic search can be too time-consuming to serve as a pre-processor. To improve efficiency, neural networks can be used to fit the behavior of heuristic algorithms, since neural networks can infer suitable presolving in a short time. More specifically, taking a closer look at the presolving parameters, we argue that the priority is the most influential parameter and can affect the others, since priority determines the execution order of the presolvers. As shown in many previous works (Elble, 2010; Lodi & Tramontani, 2013; Galabova, 2023), the performance of presolving is very sensitive to the order of the presolvers.
In this paper, we propose a hybrid algorithmic neural framework for improving presolving, namely Learning to Presolve (L2P). Firstly, we modify simulated annealing to search for the most suitable presolving given each instance. Then, we train neural networks to learn the mapping from an instance to the found presolving. When applied to unseen instances, the well-trained neural networks can infer suitable presolving in a considerably short time (less than 0.5s in our experiments). Besides, considering the attributes of and relations among the different presolving parameters, we build hybrid inference networks in which the priority is regarded as prior knowledge to guide the learning.
We conduct experiments on popular MIP datasets with scales from small to large, as well as two industry-level datasets. Results show that there is indeed much room for improvement over the default presolving of MIP solvers, and that the solver can be largely improved by switching from default instance-agnostic presolving to the instance-specific presolving of L2P. This suggests that default presolving is a performance-limiting factor of the solver and deserves more attention. We consider this task a new direction for using machine learning technologies to further improve MIP solvers.
The related works cover different aspects, including solving MIP instances, presolving in MIP solvers, and auto configuration, which we leave in Appendix A.2. The highlights of this paper are four-fold:
1) To the best of our knowledge, this is the first work in the literature to propose adaptively tailored presolving for MIP solvers. Better presolving can significantly reduce the time consumed in solving MIP instances, yet few works in the literature consider improving it.
2) We propose a hybrid neural framework equipped with heuristic algorithms as supervision to predict suitable presolving for each input MIP instance, combining the search effectiveness of heuristic algorithms with the inference efficiency of neural networks.
3) Experimental results on both public and private industrial benchmarks show the effectiveness and efficiency of our L2P. They also demonstrate the necessity of adaptively selecting presolving, instead of using the default instance-agnostic presolving, to boost the performance of the MIP solver.
4) We have open-sourced our code as a benchmark for utilizing machine learning to improve presolving in MIP solvers; please refer to our GitHub repository for more details.
2 FORMULATION AND METHODOLOGY
In this section, we first introduce presolving and its role in solving MIP. Then, we adopt a simulated-annealing-based heuristic search method that can find the most suitable presolving, albeit with large time consumption. Finally, we propose an efficient deep learning approach, named learning to presolve (L2P), to learn the presolving found by simulated annealing.
2.1 PRELIMINARIES: MIP SOLVER AND PRESOLVING
It is well known that any mixed integer linear program can be written in canonical form:
$$\min \{ c^\top x : Ax \leq b, x \in \mathbb{Z}^p \times \mathbb{R}^{n-p} \},$$
(1)
where $n$ is the number of variables, $m$ is the number of constraints, $c \in \mathbb{R}^n$ is the objective coefficient vector, $A \in \mathbb{R}^{m \times n}$ is the constraint coefficient matrix, $b \in \mathbb{R}^m$ is the constraint right-hand-side vector, and $p \leq n$ is the number of integer variables. We assume the first $p$ variables are integer variables and the last $n - p$ variables are continuous variables.
In general, the branch-and-bound (B&B) algorithm is utilized to solve the MIP instance to global optimality, following a divide-and-conquer paradigm. However, the B&B algorithm requires a lot of time and resources to find the optimal solution if the size of the input MIP instance is large. Therefore, in modern MIP solvers, presolving is conducted to simplify the original instance, as Fig. 1 shows. In the popular open-source MIP solver SCIP, multiple presolvers are used to reduce the size of the model by removing irrelevant information such as redundant constraints, to strengthen the linear programming relaxation by exploiting integrality information, and to extract useful information during presolving.
There are three key parameters for each presolver: 1) priority denotes the order in which the different presolvers are executed; 2) max-rounds denotes the maximal number of rounds the presolver participates in; 3) timing denotes the timing mask of the presolver. In the process of presolving, at every step, one presolver is selected from the presolver pool based on its priority. When all presolvers have been selected, we refill the presolver pool with the presolvers that have been used fewer than their max-rounds times. As we can see, the priorities of the presolvers tend to have the greatest impact on the performance of presolving, which is also illustrated in (Elble, 2010).
In existing solvers, the parameters of the presolvers are all set by default, no matter how the input instance varies. Though the default presolving parameters are designed by experts, we consider using unchanged presolving for changeable inputs to be suboptimal. In our opinion, the ideal MIP solver should analyze the features of the input instance and tailor suitable presolving parameters. In this way, the power of presolving is fully utilized, and so is the power of the whole solver. Therefore, we aim to design a general approach to finding the best presolving parameters for each input instance; in other words, to move from instance-agnostic presolving to instance-specific presolving.
2.2 SIMULATED ANNEALING FOR SEARCHING BEST PRESOLVING PARAMETERS
We start with a search-based baseline for tuning the presolving parameters. SCIP includes 14 presolvers with 3 parameters (priority, max-rounds, timing) each, so one needs to traverse a total of 42 ($14 \times 3$) parameters, which can be challenging. For this purpose, we tried several popular heuristic tools, including Bayesian optimization (BO), simulated annealing (SA), and evolution strategies; we resort to SA (Van Laarhoven & Aarts, 1987) for its suitability for discrete variables, which make up 2/3 of the parameters in our case, whereas BO with Gaussian processes is better suited to continuous variables. Our ablation in Sec. 3.4 shows that SA-based presolving tuning outperforms BO-based tuning. We place the details of SA and how we adapt it in our L2P in the Appendix (A.3).
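Below is a hedged sketch of such an SA loop on top of PySCIPOpt. The parameter names follow SCIP's `presolving/<name>/{priority,maxrounds,timing}` pattern; the presolver subset, the neighborhood move, and the cost function are illustrative assumptions rather than the exact configuration used in L2P.

```python
# Minimal SA sketch over SCIP presolving parameters (illustrative assumptions).
import math
import random
from pyscipopt import Model

PRESOLVERS = ["boundshift", "convertinttobin", "domcol"]   # subset, for brevity

def solve_time(instance_path, params):
    """Cost of a presolving configuration: SCIP's solving time on the instance."""
    model = Model()
    model.hideOutput()
    model.readProblem(instance_path)
    for name, (prio, rounds, timing) in params.items():
        model.setParam(f"presolving/{name}/priority", prio)
        model.setParam(f"presolving/{name}/maxrounds", rounds)
        model.setParam(f"presolving/{name}/timing", timing)
    model.optimize()
    return model.getSolvingTime()

def perturb(params):
    """Neighborhood move: nudge the priority or max-rounds of one presolver."""
    new = dict(params)
    name = random.choice(PRESOLVERS)
    prio, rounds, timing = new[name]
    new[name] = (prio + random.randint(-100, 100),
                 max(-1, rounds + random.choice([-1, 1])),   # -1 = unlimited
                 timing)
    return new

def simulated_annealing(instance_path, init_params, t0=1e5, t_min=1e-2, decay=0.9):
    cur, cur_cost = init_params, solve_time(instance_path, init_params)
    t = t0
    while t > t_min:
        cand = perturb(cur)
        cost = solve_time(instance_path, cand)
        # Accept improvements always, and worse moves with Metropolis probability
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
        t *= decay
    return cur, cur_cost
```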
Figure 2: Our proposed framework L2P for learning to presolve. (Left): in the training process, we use simulated annealing to search for the most suitable presolving parameters for each MIP instance, which will be the label to update our neural networks during the network training. (Right): in the inference process, we use the well-trained neural networks from the training process to tailor presolving in an instance-specific manner. Multiple criteria including the solving time and the PD Integral are used to evaluate the performance of the presolving parameters.
2.3 L2P: Learning to Presolve
The drawback of SA is its considerable time cost, which can be even higher than the time for B&B to actually solve the problem. Therefore, we propose to utilize neural networks along with SA; the paradigm is termed learning to presolve (L2P). We train the neural networks on the data generated by SA on the training set and use the well-trained networks for inference at test time. The inference time of the neural networks is insignificant, so the framework can be readily used in real-world applications.
2.3.1 Framework Design
As shown in Fig. 2, our proposed L2P includes a training process and an inference process. Specifically, for each MIP dataset, we first feed the MIP instances from the training set to the simulated annealing algorithm, which outputs the best presolving parameters for each instance. Then, we regard the data pair (MIP instance / best presolving parameters) as the (input / label) of our neural networks. Each input instance first passes through our feature extractor to acquire a graph-embedding representation. Then, our inference network takes the embedding and predicts the presolving parameters. Next, for each input MIP instance, we apply the predicted presolving parameters to the corresponding settings in the MIP solver and let the modified solver solve the instance. Finally, we analyze the results of the run via multiple criteria, including the solving time and the primal-dual gap integral. For network updating, we calculate the loss between the best presolving parameters from simulated annealing (the label) and the predicted presolving parameters from the inference network (the prediction). The loss is passed backward through both the inference network and the feature extractor.
For the inference process, we utilize the well-trained feature extractor and inference network from the training process. For every incoming MIP instance unseen before, we feed it to the neural networks and acquire the predicted presolving parameters. In the same way, we modify the solver and then solve the instance. Since we save the time of simulated annealing, the inference process costs very little time (less than 0.5s), and the whole framework can be embedded into real-world MIP solvers.
2.3.2 Feature Extractor Design
For the design of the feature extractor, we first represent a given MIP instance as a bipartite graph $G = (C, E, V)$ following the method in (Gasse et al., 2019). In the bipartite graph, $C \in \mathbb{R}^{m \times c}$ corresponds to the features of the constraint nodes; $V \in \mathbb{R}^{n \times d}$ denotes the features of the variable nodes; and there is an edge $e_{ij} \in E$ between a constraint node $i$ and a variable node $j$ if the corresponding coefficient $A_{i,j} \neq 0$.
Figure 3: Inside the inference network of L2P, following the data as it passes through the modules. Inspired by knowledge-based residual learning, we regard the priority information as prior knowledge for learning max-rounds and timing via hybrid neural networks. After calculating the three losses, a dynamic loss averaging method is adopted to aggregate them.
We use the same features as the existing work (Prouvost et al., 2020). Next, the bipartite graph is sent as input into a two-interleaved graph convolutional neural network (GCNN) (Gasse et al., 2019). In detail, the graph convolution is broken into two successive passes, one from the variable side to the constraint side, and one from the constraint side to the variable side:
\[ c_i^{(k+1)} \leftarrow f_C \left( c_i^{(k)}, \sum_{j} g_C(c_i^{(k)}, v_j^{(k)}, e_{ij}) \right), \quad v_j^{(k+1)} \leftarrow f_V \left( v_j^{(k)}, \sum_{i} g_V(c_i^{(k)}, v_j^{(k)}, e_{ij}) \right), \]
where \( f_C, g_C, f_V \) and \( g_V \) are 2-layer perceptrons with ReLU activation functions, and \( k \) denotes the number of graph convolution passes performed.
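A minimal PyTorch sketch of the constraint-side half-convolution above is given below; the feature dimensions are placeholders, and this illustrates the message-passing pattern rather than the exact released implementation.

```python
import torch
import torch.nn as nn

class HalfConv(nn.Module):
    """One constraint-side half-convolution of the bipartite GCNN."""
    def __init__(self, c_dim=16, v_dim=16, e_dim=1, hidden=64):
        super().__init__()
        # g_C: message function over (constraint, variable, edge) triples
        self.g = nn.Sequential(nn.Linear(c_dim + v_dim + e_dim, hidden),
                               nn.ReLU(), nn.Linear(hidden, c_dim))
        # f_C: update function combining old features with aggregated messages
        self.f = nn.Sequential(nn.Linear(2 * c_dim, hidden),
                               nn.ReLU(), nn.Linear(hidden, c_dim))

    def forward(self, c, v, edge_index, e):
        # edge_index: (2, num_edges), rows = (constraint idx, variable idx)
        ci, vj = edge_index
        msg = self.g(torch.cat([c[ci], v[vj], e], dim=-1))
        agg = torch.zeros_like(c).index_add_(0, ci, msg)   # sum_j g_C(...)
        return self.f(torch.cat([c, agg], dim=-1))
```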
### 2.3.3 Inference Network Design
As for the inference network, we design shared-bottom neural networks to predict the priority, max-rounds, and timing simultaneously; in other words, to make three predictions. As Fig. 3 shows, we first use a hidden layer to process the graph-embedding representation and regard the result as shared features, which are used for all three predictions. As mentioned in Sec. 2.1, priority is the most important of all the presolving parameters. Since priority determines the order of all presolvers, it can also balance their influence. In this sense, we believe that priority is more significant than max-rounds and timing. Therefore, we design special hybrid neural networks on top of the shared features to exploit this property.
As Fig. 3 illustrates, there are three output branches after the shared features, corresponding to priority, max-rounds, and timing respectively. For the priority branch, we use ordinary fully connected layers to make predictions. Then, inspired by knowledge-based residual learning (KRL) (Zheng et al., 2021b; Liu et al., 2021), we use the priority as prior knowledge to better predict the max-rounds and timing. The key idea of KRL is to treat the prior knowledge as a weak learner and use another neural network to boost it, which results in a hybrid model. In our inference network, the priority is considered as the prior knowledge in the other two branches, and we use two more fully connected layers as the neural network to boost the performance. Due to the page limit, we place the detailed derivation process and proofs of KRL in the Appendix (A.4). As proved by KRL, these hybrid knowledge-based residual networks help to reduce the difficulty of learning the parameters and increase the robustness and accuracy of the inference network.
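The sketch below illustrates this hybrid design: a shared bottom, a priority head, and two branches that consume the priority scores as prior knowledge. The layer sizes, output discretizations, and the choice to detach the priority scores are our illustrative assumptions, not the exact L2P code.

```python
import torch
import torch.nn as nn

class HybridHeads(nn.Module):
    """Shared-bottom heads with priority as prior knowledge (KRL-style sketch)."""
    def __init__(self, emb_dim=64, n_presolvers=14, n_rounds=4, n_timings=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU())
        self.priority = nn.Linear(64, n_presolvers)          # ranking scores
        # Residual branches take [shared features ; priority scores] as input
        self.rounds = nn.Sequential(nn.Linear(64 + n_presolvers, 64), nn.ReLU(),
                                    nn.Linear(64, n_presolvers * n_rounds))
        self.timing = nn.Sequential(nn.Linear(64 + n_presolvers, 64), nn.ReLU(),
                                    nn.Linear(64, n_presolvers * n_timings))

    def forward(self, emb):
        h = self.shared(emb)
        prio = self.priority(h)
        # Detaching is a design choice here: the priority head is trained only
        # by its own ranking loss, while its scores guide the other branches.
        ctx = torch.cat([h, prio.detach()], dim=-1)
        return prio, self.rounds(ctx), self.timing(ctx)
```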
Although the total loss decreases significantly during training, we observe that the three output branches converge at different speeds. In fact, predicting adequate priorities for the presolvers is the hardest task for our inference network: the loss of the priority branch hardly falls as quickly as those of the two other branches. Consequently, we would have to spend additional time training the max-rounds and timing branches even after their learning has converged, which could easily lead to over-fitting. To avoid this, we exploit a dynamic loss averaging method (Liu et al., 2019) to assign each output branch a variable weight
Table 1: Performance on easy, medium, and hard datasets. \( m \) and \( n \) denote the average numbers of constraints and variables. We calculate the solving time/PD integral and report the improvement/effectiveness compared to the default setting. For each instance, **SA runs for hours/days but our L2P only needs milliseconds** (see more about the time difference in Sec. 3.1.4). We run the experiments five times with 5 random seeds and report the average results (refer to Sec. 3.2).
| | Easy: Set Covering \((n = 1000, m = 500)\) | Easy: Max Independent Set \((n = 500, m = 1953)\) | Easy: MIRP small \((n = 709, m = 841)\) |
|------------------|---------------------------------------------|-------------------------------------------------|----------------------------------------|
| | Time (s) ↓ | Improv. ↑ | Effect. ↑ | Time (s) ↓ | Improv. ↑ | Effect. ↑ | Time (s) ↓ | Improv. ↑ | Effect. ↑ |
| Default | 7.22 | - | - | 8.66 | - | - | 56.54 | - | - |
| SA | 7.21 | 0.00% | 0% | 8.66 | 0.00% | 0% | 46.67 | 17.46% | 58% |
| Random | 7.27 | 0.00% | 0% | 8.69 | 0.00% | 0% | 54.42 | 3.75% | 21% |
| SMAC3 | 7.24 | 0.00% | 0% | 8.70 | 0.00% | 0% | 50.52 | 10.64% | 42% |
| FBAS | 7.31 | 0.00% | 0% | 8.73 | 0.00% | 0% | 54.64 | 3.44% | 21% |
| L2P | 7.22 | 0.00% | 0% | 8.68 | 0.00% | 0% | 50.25 | 11.12% | 46% |
| | Medium: Corlat \((n = 466, m = 486)\) | Medium: MIK \((n = 413, m = 346)\) | Hard: MIRP large \((n = 4120, m = 6857)\) |
|------------------|---------------------------------------|-----------------------------------|------------------------------------------|
| | Time (s) ↓ | Improv. ↑ | Effect. ↑ | Time (s) ↓ | Improv. ↑ | Effect. ↑ | PD Integral ↓ | Improv. ↑ | Effect. ↑ |
| Default | 31.02 | - | - | 237.50 | - | - | 2958.83 | - | - |
| SA | 15.93 | 48.63% | 65% | 228.38 | 3.84% | 7% | 1473.75 | 49.81% | 60% |
| Random | 29.34 | 5.43% | 22% | 239.85 | 0.00% | 0% | 2834.26 | 4.21% | 19% |
| SMAC3 | 24.09 | 22.34% | 42% | 233.65 | 1.62% | 6% | 2574.77 | 12.98% | 40% |
| FBAS | 25.84 | 16.69% | 39% | 238.75 | 0.00% | 0% | 2746.09 | 7.19% | 20% |
| L2P | 20.24 | 34.74% | 55% | 230.33 | 3.02% | 7% | 2118.23 | 28.41% | 35% |
| | Hard: Item Placement \((n = 1083, m = 195)\) | Hard: Load Balancing \((n = 61000, m = 64304)\) | Hard: Anonymous \((n = 37881, m = 49603)\) |
|------------------|-----------------------------------------------|-------------------------------------------------|--------------------------------------------|
| | PD Integral ↓ | Improv. ↑ | Effect. ↑ | PD Integral ↓ | Improv. ↑ | Effect. ↑ | PD Integral ↓ | Improv. ↑ | Effect. ↑ |
| Default | 221630.77 | - | - | 5857.95 | - | - | 68319.60 | - | - |
| SA | 210593.56 | 4.98% | 56% | 5550.99 | 5.24% | 36% | 44940.63 | 34.22% | 55% |
| Random | 221685.75 | 0.00% | 0% | 5879.17 | 0.00% | 0% | 53132.15 | 22.23% | 55% |
| SMAC3 | 217220.32 | 1.99% | 30% | 5733.76 | 2.12% | 38% | 42460.63 | 37.85% | 55% |
| FBAS | 222096.19 | 0.00% | 0% | 5862.64 | 0.00% | 0% | 55181.74 | 19.23% | 55% |
| L2P | 210637.88 | 4.96% | 42% | 5558.61 | 5.11% | 48% | 33278.48 | 51.29% | 55% |
when aggregating the three losses. We place the detailed mathematical formulation of dynamic loss averaging in the Appendix (A.5). Intuitively, a branch with a slower convergence speed is assigned a larger weight and vice versa. In this way, we accelerate the learning of priority prediction and thus provide more reliable prior knowledge for the inference of max-rounds and timing.
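A hedged sketch of such a weighting rule, in the spirit of dynamic weight averaging (Liu et al., 2019), is given below; the temperature and the exact form of the rule are illustrative assumptions.

```python
import torch

def dynamic_weights(loss_history, temperature=2.0):
    """Per-branch weights from recent loss ratios (sketch of dynamic averaging).

    loss_history: list of per-branch loss lists, e.g. [[...], [...], [...]].
    A branch whose loss decays slowly gets a ratio near 1 and hence more weight.
    """
    if min(len(h) for h in loss_history) < 2:
        return torch.ones(len(loss_history))            # uniform warm-up
    ratios = torch.tensor([h[-1] / h[-2] for h in loss_history])
    w = torch.softmax(ratios / temperature, dim=0)
    return len(loss_history) * w                        # weights sum to #branches

# Usage: total = sum(w_i * L_i for w_i, L_i in zip(dynamic_weights(hist), losses))
```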
3 EXPERIMENTS
Please note that, beyond the following subsections, we have placed additional experiments and discussions in the appendix, including the multiple-run results with standard deviations (A.7), ablation studies adjusting the size of the training data (A.9), experiments on the popular MIPLIB dataset (A.8), an illustration of the searching process (A.10), an illustration of the improved presolving parameters (A.11), and a discussion of limitations and future work (A.12).
3.1 PROTOCOLS
3.1.1 DATASETS
We follow (Gasse et al., 2019; 2022) and use popular datasets in our experiments. We evaluate our approach on four levels of difficulty: easy, medium, hard, and industrial-level datasets:
1) Easy datasets comprise three popular synthetic MIP benchmarks: **Set Covering** (Balas & Ho, 1980), **Maximum Independent Set** (Bergman et al., 2016) and **Maritime Inventory Routing Problem** (MIRP) (Papageorgiou et al., 2014). We artificially generate instances in line with (Gasse et al., 2019; Sun et al., 2021; Jiang & Grossmann, 2015).
2) Medium datasets include **CORLAT** (Gomes et al., 2008) and **MIK** (Atamtürk, 2003), which are widely used benchmarks (He et al., 2014; Nair et al., 2020).
3) Hard datasets from NeurIPS 2021 Competition (Gasse et al., 2022) include **Item Placement**, which involves spreading items that need to be placed; **Load Balancing**, inspired by real-life applications of large-scale systems; **Anonymous**, inspired by a large-scale industrial application; and **Maritime Inventory Routing problem** (MIRP) with hard problem settings.
4) **Private Industrial Benchmarks**. We collect real-world data concerning planning and scheduling from a production planning engine of an influential corporation and formulate them as MIP instances. The production planning problem is to plan the daily production of hundreds of factories according to customers' daily and predicted demand. The problem is subject to material transportation and production capacity constraints, and aims to minimize production cost and lead time simultaneously.
For the datasets used in our experiments, we follow the common practice of existing works (Gasse et al., 2019; Han et al., 2023; Wang et al., 2023; Li et al., 2023a), splitting data into training and testing sets with 80% and 20% of the instances. For the easy datasets, we generate 1000 instances each. For the medium and hard datasets, we directly split the instances provided by their original benchmarks. Here we list these datasets and their total numbers of instances: Corlat (2000) (Gomes et al., 2008), MIK (100) (Atamtürk, 2003), Item Placement (10000) (Gasse et al., 2022), Load Balancing (10000) (Gasse et al., 2022), and Anonymous (118) (Gasse et al., 2022). The two industrial datasets contain 1,000 instances each, collected from two periods respectively (from 15 May 2022 to 15 Sept. 2022 and from 8 Oct. 2022 to 8 Dec. 2022).
### 3.1.2 Evaluation Metrics
Throughout all experiments, we use SCIP 7.0.3 (Gamrath et al., 2020) as the back-end solver, which is the state-of-the-art open-source MIP solver. Note that it is nontrivial to test our approach on commercial solvers, e.g., Gurobi, due to limited access to their interfaces. Besides, we use Ecole 0.7.3 (Prouvost et al., 2020) and PySCIPOpt 3.5.0 (Maher et al., 2016) for better implementation. Except for the presolving module, we keep all other SCIP settings/parameters at their defaults. We use two popular evaluation metrics, i.e., the average solving time (**Time**, lower is better) and the average primal-dual gap integral (**PD integral**, lower is better). To better show the performance of improving presolving, we calculate the improvement (**Improv.**, higher is better) made by the compared methods over SCIP's default settings. Moreover, we calculate the effectiveness (**Effect.**, higher is better), a.k.a. the "better/win rate". Here "better" means the solving time/PD integral with improved presolving is better than with SCIP's default presolving; higher effectiveness means the method finds better presolving for more instances. The testing is based on the MIP solver itself, and we directly acquire the solving time/PD integral from the solver. The solving time/PD integral covers both the presolving process and the B&B process, as directly reported by SCIP, Ecole, and PySCIPOpt. For details and the mathematical formulation of the PD integral, refer to the documentation[^1] or our detailed description in the Appendix (A.6).
### 3.1.3 Implementation Details
For SA, we set the initial temperature to 1e5 with a decay rate of 0.9 until it reaches the minimum temperature of 1e-2. For the neural networks, we use ADAM with a batch size of 32, a learning rate of 1e-4, and a hidden size of 64. For the feature extractor, we follow the same settings as (Gasse et al., 2019) for building graph embeddings. For the hybrid inference networks, we set the hidden size to 64. The loss functions used in our method are ListMLE (Xia et al., 2008) for the priority and Cross-Entropy (Good, 1952) for the max-rounds and timing. ListMLE is a loss function designed for ranking, which is suitable for learning the priority since the priority denotes the order/rank of the presolvers. The number of training epochs is 10,000. The experiments are conducted on a Linux workstation with an NVIDIA 3090 GPU and an AMD Ryzen Threadripper 3970X 32-Core CPU. Particularly, for the hard datasets, where the instance scale is large, we gradually reduce the batch size to 4 until the data fits into GPU memory. Our work can be readily reproduced via these settings and our code in the GitHub repository.
### 3.1.4 Compared Methods
1) **Default**: following the default presolving parameters of SCIP to solve all MIP instances. 2) **Random**: we randomly sample the parameters of all presolvers 10 times and record the best ones. 3) **SMAC3** (Lindauer et al., 2022): the latest automatic configuration framework
[^1]: [https://www.ecole.ai/2021/ml4co-competition/#metrics](https://www.ecole.ai/2021/ml4co-competition/#metrics)
Table 2: Performance on two industrial datasets. \( m \) and \( n \) denote the average number of constraints and variables respectively. We record the accumulation of the solving time over all instances in the dataset and report the improvement compared to the default setting.
| | Industrial Dataset #1 (\( n = 1494, m = 5583 \)) | | Industrial Dataset #2 (\( n = 8456, m = 2392 \)) | |
|------------------|---------------------|---------------|---------------------|---------------|
| | Solving Time (s) ↓ | Improvement ↑ | Solving Time (s) ↓ | Improvement ↑ |
| Default | 21280.31 | - | 2994.69 | - |
| SA | 20224.38 | 4.96% | 2347.07 | 21.62% |
| Random | 21387.25 | 0.00% | 2887.32 | 3.57% |
| L2P (ours) | 20420.28 | 4.06% | 2447.56 | 18.27% |
aims at finding one single configuration for each MIP category. 4) **FBAS** (Georges et al., 2018): an algorithm selection method designed for MIP, which combines several standard ML techniques to select a well-performing algorithm based on a feature description of the input MIP instance. 5) **SA** (simulated annealing): the sub-module of our L2P that uses simulated annealing to search for the best presolving parameters, whose time consumption is huge. 6) **L2P** (ours): our proposed method, which is orthogonal to other progress made in learning for MIP in previous literature.
In our experiments, each method uses its own algorithm to improve the presolving (running time), and then SCIP uses the improved presolving to solve the MIP instance (solving time), where the solving time/PD integral is used for evaluation. In other words, there are two steps in our experiments:
1) **Running step**: we use SA/Random/SMAC3/FBAS/L2P to find a better presolving, and deploy the new presolving in the MIP solver; 2) **Solving step** (including presolving and B&B): we use the adjusted MIP solver to solve the MIP instance without further intervention. The metrics (solving time/PD integral) in the tables are directly acquired by the API from SCIP in the solving step.
In the running step, for each instance, our L2P needs less than **0.05s** in the medium datasets, and less than **0.5s** in the hard datasets, while the SA needs hours in the medium datasets and days in the hard datasets. When we claim that the time consumption of SA is unacceptable, we mean its running time. Therefore, we can regard SA as an offline method (running for hours/days) while Random/SMAC3/FBAS/L2P are online methods (running for seconds). The running time of L2P is negligible compared to the improvement it brings. Therefore, we should focus on the comparison among online methods, in other words, between our L2P and Random/SMAC3/FBAS.
### 3.2 Experiments on Public MIP Datasets
To verify the performance of our L2P, we conduct experiments on various common MIP datasets in Table 1. We run the experiments with 5 random seeds and report the average improvement and effectiveness compared to the default setting. Due to the page limit, we place the detailed standard-deviation results in the Appendix (A.7). For the easy datasets, the improvement is not significant. We attribute this to the MIP solver having been sufficiently optimized for these classic problems. Besides, the easy datasets are constructed by experts in operations research, and thus contain little redundancy; even SA cannot find more suitable presolving parameters there. However, it is impossible for the solver to pre-optimize for all kinds of MIP instances, especially real-world ones: MIP instances obtained from the real world are constructed by practitioners with varying expertise in operations research, and usually contain much more redundancy than those from academia. On the medium datasets, our proposed method makes a significant improvement on the Corlat dataset, saving more than 1/3 of the solving time. For the hard datasets, we change the evaluation metric from solving time to the PD integral, and our L2P still reaches good performance. Compared to SA, which needs hours or days of searching per instance, our proposed L2P performs inference in merely seconds. The special design of our L2P for presolving shows its value, as L2P outperforms the latest SMAC3 in most cases. Considering that SCIP has been developed and updated for 20 years, we believe our current improvements to SCIP are meaningful for both methodology development and practical use.
### 3.3 Experiments on Private Industrial Datasets
We conduct experiments on two industrial benchmarks provided by our industrial partner in Table 2. As shown in the caption, the scale of this dataset is large and of relatively high variance. There is still
Table 3: Generalization test on MIRP small and large datasets. We train L2P on the small/large dataset and test it on the large/small dataset respectively.
| Method | Easy: MIRP small ($n = 709$, $m = 841$), Solving Time (s) ↓ | Improvement ↑ | Hard: MIRP large ($n = 4120$, $m = 6857$), PD Integral ↓ | Improvement ↑ |
|---|---|---|---|---|
| Default | 56.54 | - | 2958.83 | - |
| Random | 54.42 | 3.75% | 2834.26 | 4.21% |
| L2P (same scale: small → small / large → large) | 50.25 | 11.12% | 2118.23 | 28.41% |
| L2P (cross scale: large → small / small → large) | 52.66 | 6.86% | 2680.31 | 9.41% |
Figure 4: Performance drop by removing components from vanilla L2P on Corlat dataset. We remove the components one by one from the full version of L2P.
Table 4: Generalization test of our L2P w.r.t improvement. We train our L2P on Corlat, MIK, and the combined dataset of Corlat and MIK. Then, we evaluate the performance of L2P on the original Corlat and MIK dataset.
| Test \ Train | Corlat | MIK | Corlat + MIK |
|---|---|---|---|
| Corlat | 34.74% | 15.52% | 33.84% |
| MIK | 1.12% | 3.02% | 1.87% |
large room for improvement over the default setting. SA can find more suitable presolving parameters, reducing the solving time by 4.96% and 21.62% on the two datasets, respectively. Our method still shows a notable improvement over the default setting, and its performance gain (4.06% and 18.27%) is close to SA's, while SA needs hours to run and our L2P only needs seconds. Due to potential privacy issues, we did not test SMAC3 and FBAS on these datasets.
### 3.4 Generalization Test and Ablation Study
To test the generalization of L2P, we first conduct experiments on the MIRP small and large datasets, which differ significantly in complexity as indicated by their scale. We train L2P on the MIRP small dataset and observe its performance on the MIRP large dataset, and vice versa. The statistics of the datasets and the experimental results are reported in Table 3. On both tests our L2P outperforms the baselines when generalized from the other dataset, demonstrating its generalization ability. In addition, we report further experiments in Table 4, with the same settings as in Table 1, including the data size and the metric. Instead of training domain by domain as usual, we try multiple train/test settings across different domains. The first two columns show that L2P can handle unseen domains (families) of instances. The last column shows that L2P trained on the mixed dataset still performs well when tested on both domains.
Fig. 4 shows the effect of removing components of L2P one by one: from the dynamic loss averaging (DLA), to the hybrid knowledge-based residual learning (KRL), to the output branches, to the shared feature, to the GCNN; finally, we replace SA with Bayesian optimization (BO). Removing the KRL module leads to a nearly 10% performance drop, so the hybrid KRL structure is the most significant component of L2P. When we replace SA with BO, we observe that BO can hardly find better presolving parameters, as the improvement degrades from 10% to almost 0%.
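For reference, the SA sub-module's search loop can be sketched as follows; `evaluate` and `perturb` are hypothetical callbacks (e.g., wrapping the PySCIPOpt snippet above), and the schedule constants are illustrative.

```python
import math
import random

def simulated_annealing(init_params, evaluate, perturb,
                        t0=1.0, cooling=0.95, steps=100):
    current = dict(init_params)
    cur_cost = evaluate(current)
    best, best_cost = dict(current), cur_cost
    temp = t0
    for _ in range(steps):
        candidate = perturb(current)
        cost = evaluate(candidate)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            current, cur_cost = candidate, cost
            if cost < best_cost:
                best, best_cost = dict(candidate), cost
        temp *= cooling              # geometric cooling schedule
    return best, best_cost
```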
4 Conclusion and Outlook
We propose a learning-to-presolve paradigm for MIP solvers. Instead of the default instance-agnostic presolving in existing solvers, we use simulated annealing to search for suitable presolving in an instance-specific manner, and we design hybrid neural networks to learn the results generated by SA. Experiments on both public and private MIP datasets show its performance and cost-efficiency. We hope our results and open-source code can draw wide attention, and further evaluation could be performed on commercial solvers, which we believe is a less-studied yet promising direction. One possible future work is to combine learning-based solvers tailored to MIP, e.g., Zhang et al. (2024), and more general methods (Li et al., 2023c) with our presolving techniques. One may also combine instance generation models (Li et al., 2023b; Chen et al., 2024) for training-set augmentation. For more discussion of potential limitations and future plans, please refer to Appendix (A.12).
REFERENCES
Tobias Achterberg and Roland Wunderling. Mixed integer programming: Analyzing 12 years of progress. In *Facets of Combinatorial Optimization*, pp. 449–481. Springer, 2013.
Tobias Achterberg, Robert E. Bixby, Zonghao Gu, Edward Rothberg, and Dieter Weninger. Presolve reductions in mixed integer programming. *INFORMS Journal on Computing*, 32(2):473–506, November 2019.
Alper Atamtürk. On the facets of the mixed–integer knapsack polyhedron. *Mathematical Programming*, 98(1):145–175, 2003.
Yunsheng Bai, Derek Xu, Yizhou Sun, and Wei Wang. Glsearch: Maximum common subgraph detection via learning to search. In *ICML*, pp. 588–598, 2021.
Egon Balas and Andrew Ho. Set covering algorithms using cutting planes, heuristics, and subgradient optimization: a computational study. In *Combinatorial Optimization*, pp. 37–60. Springer, 1980.
Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d’horizon. *Eur. J. Oper. Res.*, 290:405–421, 2021.
David Bergman, Andre A Cire, Willem-Jan Van Hoeve, and John Hooker. *Decision diagrams for optimization*, volume 1. Springer, 2016.
Timo Berthold, Matteo Francobaldi, and Gregor Hendel. Learning to use local cuts. *arXiv preprint arXiv:2206.11618*, 2022.
Robert Bixby and Edward Rothberg. Progress in computational mixed integer programming—a look back from the other side of the tipping point. *Annals of Operations Research*, 149:309–325, December 2007.
Robert E Bixby, Mary Fenelon, Zonghao Gu, Ed Rothberg, and Roland Wunderling. Mixed-integer programming: A progress report. In *The sharpest cut: the impact of Manfred Padberg and his work*, pp. 309–325. SIAM, 2004.
A. L. Brearley, G. Mitra, and H. P. Williams. Analysis of mathematical programming problems prior to applying the simplex algorithm. *Mathematical Programming*, 8:54–83, December 1975.
Xinyan Chen, Yang Li, Runzhong Wang, and Junchi Yan. Mixsatgen: Learning graph mixing for sat instance generation. In *International Conference on Learning Representations*, 2024.
Xinyun Chen and Yuandong Tian. Learning to perform local rewriting for combinatorial optimization. In *NeurIPS*, volume 32, 2019.
Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. *arXiv preprint arXiv:1711.02257*, 2017.
Joseph M Elble. *Computational experience with linear optimization and related problems*. University of Illinois at Urbana-Champaign, 2010.
Marc Etcheve, Zacharie Alès, Côme Bissuel, Olivier Juan, and Safia Kedad-Sidhoum. Reinforcement learning for variable selection in a branch and bound algorithm. In *International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research*, pp. 176–185. Springer, 2020.
Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Automated configuration of mixed integer programming solvers. In *International Conference on Integration of Artificial Intelligence (AI) and Operations Research (OR) Techniques in Constraint Programming*, pp. 186–202. Springer, 2010.
Ivet Galabova. Presolve, crash and software engineering for HiGHS. 2023.
|
WnqD3EiylC
|
Bra-ket notation is used for inner products (e.g., in Definition 1 and Theorem 4) without any introduction. I happen to have heard of this notation, but have never seen it in a paper, and doubt many people in ICLR are familiar with it.
|
THE REPRESENTATION JENSEN-SHANNON DIVERGENCE
Anonymous authors
Paper under double-blind review
ABSTRACT
Statistical divergences quantify the difference between probability distributions, thereby allowing for multiple uses in machine learning. However, a fundamental challenge of these quantities is their estimation from empirical samples, since the underlying distributions of the data are usually unknown. In this work, we propose a divergence inspired by the Jensen-Shannon divergence which avoids the estimation of the probability density functions. Our approach embeds the data in a reproducing kernel Hilbert space (RKHS) where we associate data distributions with uncentered covariance operators in this representation space. Therefore, we name this measure the representation Jensen-Shannon divergence (RJSD). We provide an estimator from empirical covariance matrices by explicitly mapping the data to an RKHS using Fourier features. This estimator is flexible, scalable, differentiable, and suitable for minibatch-based optimization problems. Additionally, we provide an estimator based on kernel matrices without an explicit mapping to the RKHS. We provide consistency and convergence results for the proposed estimator, as well as connections with Shannon's differential entropy. Moreover, we demonstrate that this quantity is a lower bound on the Jensen-Shannon divergence, leading to a variational approach to estimate it with theoretical guarantees. We leverage the proposed divergence to train generative networks, where our method mitigates mode collapse and encourages sample diversity. Additionally, RJSD surpasses other state-of-the-art techniques in multiple two-sample testing problems, demonstrating superior performance and reliability in discriminating between distributions.
1 INTRODUCTION
Divergences quantify the difference between probability distributions. In machine learning, divergences can be applied to a wide range of tasks, including generative modeling (generative adversarial networks, variational auto-encoders), two-sample testing, anomaly detection, and distribution shift detection. The family of $f$-divergences is among the most popular statistical divergences, including the well-known Kullback-Leibler and Jensen-Shannon divergences. A fundamental challenge to using divergences in practice is that the underlying distribution of data is unknown, and thus divergences must be estimated from observations. Several divergence estimators have been proposed (Yang & Barron [1999], Sriperumbudur et al. [2012], Krishnamurthy et al. [2014], Moon & Hero [2014], Singh & Póczos [2014], Li & Turner [2016], Noshad et al. [2017], Moon et al. [2018], Bu et al. [2018], Berrett & Samworth [2019], Liang [2019], Han et al. [2020], Sreekumar & Goldfeld [2022]), most of which fall into four categories: plug-in, kernel density estimation, $k$-nearest neighbors, and neural estimators.
Kernel methods are another approach for measuring the interaction between probability distributions. For example, the maximum mean discrepancy (MMD) (Gretton et al. [2012]) is a divergence computed as the distance between the mean embeddings (first-order moments) of the two probability distributions in a reproducing kernel Hilbert space (RKHS). However, due to the underlying geometry, MMD lacks a straightforward connection with classical information theory tools (Bach [2022]). On the other hand, covariance operators (second-order moments) in RKHS have been used to propose multiple information theoretic quantities, such as marginal, joint, and conditional entropy (Sanchez Giraldo et al. [2014]), as well as mutual information (Yu et al. [2019]) and total correlation (Yu et al. [2021]). However, strategies for estimating divergences within this framework have been less explored.
To fill this void, we propose a kernel-based information theoretic learning framework for divergence estimation. We make the following contributions:
• A novel divergence, the representation Jensen-Shannon divergence (RJSD), that avoids the estimation of the underlying density functions by mapping the data to an RKHS where distributions can be embedded using uncentered covariance operators acting in this representation space.
• An estimator from empirical covariance matrices that explicitly map data samples to an RKHS using Fourier features. This estimator is flexible, scalable, differentiable, and suitable for minibatch-based optimization problems. Additionally, an estimator based on kernel matrices without an explicit mapping to the RKHS is provided. Consistency results and sample complexity bounds for the proposed estimator are discussed.
• A connection between the kernel-based entropy and Shannon’s entropy, as well as the relationship between RJSD with the classical Jensen-Shannon divergence. Namely, RJSD emerges as a lower bound on the classical Jensen-Shannon divergence enabling the construction of a variational estimator for the classical Jensen-Shannon divergence with statistical guarantees.
We use RJSD for training generative adversarial networks and show that it prevents mode collapse and encourages diversity, leading to more accurate and heterogeneous results. We also apply RJSD for two-sample testing problems and show that it accurately detects differences between probability distribution functions even for cases where other state-of-the-art measures fall short.
2 BACKGROUND
2.1 MEAN EMBEDDINGS AND COVARIANCE OPERATORS
Let \((\mathcal{X}, \mathcal{B}_\mathcal{X})\) be a measurable space and \(\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_{\geq 0}\) be a positive definite kernel. There exists a mapping \(\phi : \mathcal{X} \to \mathcal{H}\), where \(\mathcal{H}\) is a reproducing kernel Hilbert space (RKHS), such that \(\kappa(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}\). The kernel mean embedding (Smola et al., 2007) is a mapping \(\mu\) from \(\mathcal{M}_+^1(\mathcal{X})\) to \(\mathcal{H}\), where \(\mathcal{M}_+^1(\mathcal{X})\) is the space of probability measures on \(\mathcal{X}\). The kernel mean embedding is defined as follows:
\[
\mu_P = \mathbb{E}_{X \sim P}[\phi(X)] = \int_\mathcal{X} \phi(x) \, dP(x), \quad \text{for } P \in \mathcal{M}_+^1.
\]
(1)
An important property of the mean embedding is that if \(\mathbb{E}_{X \sim P}[\kappa(X, X)] < \infty\), for any \(f \in \mathcal{H}\), then \(\mathbb{E}_{X \sim P}[f(X)] = \langle f, \mu_P \rangle_{\mathcal{H}}\).
Another related mapping is the uncentered covariance operator (Baker, 1973). In this case, \(P \in \mathcal{M}_+^1\) is mapped to an operator \(C_P : \mathcal{H} \to \mathcal{H}\) given by:
\[
C_P = \mathbb{E}_{X \sim P}[\phi(X) \otimes \phi(X)] = \int_\mathcal{X} \phi(x) \otimes \phi(x) \, dP(x),
\]
(2)
where \(\otimes\) is the tensor product. Similarly, for any \(f, g \in \mathcal{H}\), \(\mathbb{E}_{X \sim P}[f(X)g(X)] = \langle g, C_P f \rangle_{\mathcal{H}}\). The covariance operator is positive semi-definite and Hermitian (self-adjoint). Additionally, if the kernel is bounded, the covariance operator is trace class (Sanchez Giraldo et al., 2014; Bach, 2022). The spectrum of the covariance operator is discrete and consists of non-negative eigenvalues \(\lambda_i\) with \(\sum \lambda_i < \infty\) for which we can extend functions on \(\mathbb{R}\) such as \(t \log(t)\) and \(t^\alpha\) to covariance operators via their spectrum (Naoum & Gittan, 2004). For a sample \(X = \{x_i\}_{i=1}^N\) of size \(N\), where \(x_i \in \mathcal{X}\), drawn from \(P\), the empirical uncentered covariance operator is defined as:
\[
C_X = \frac{1}{N} \sum_{i=1}^N \phi(x_i) \otimes \phi(x_i)
\]
(3)
2.2 KERNEL-BASED INFORMATION THEORY
We can define information theoretic quantities on the spectrum of normalized covariance operators with unit trace. This observation was made by Sanchez Giraldo et al. (2014) who proposed the kernel-based entropy functional: \(S_\alpha(C_P) = \frac{1}{1-\alpha} \log \left[ \text{Tr}(C_P^\alpha) \right]\). \(\text{Tr}(\cdot)\) denotes the trace operator, \(C_P^\alpha\) is defined based on the spectrum of \(C_P\) and \(\alpha > 0\) is the entropy order. This quantity resembles quantum Rényi entropy (Müller-Lennert et al., 2013) where the covariance operator plays the role of a density matrix.\(^1\) In the limit when \(\alpha \to 1\), \(S_{\alpha \to 1}(C_P) = - \text{Tr}(C_P \log C_P)\) becomes von Neumann entropy of the covariance operator. This connection between covariance operators in RKHS and information theory has been also discussed by Bach (2022).
\(^1\)A density matrix is a matrix that describes the quantum state of a physical system
Kernel-based entropy estimator: The kernel-based entropy estimator relies on the spectrum of the empirical uncentered covariance operator in Eqn. 3. We focus on the case of normalized kernels where \( \kappa(x, x) = 1 \) for all \( x \in \mathcal{X} \). We denote the Gram matrix \( K_X \), consisting of all normalized pairwise kernel evaluations of data points in the sample \( X \), that is \( (K_X)_{ij} = \kappa(x_i, x_j) \) for \( i, j = 1, \ldots, N \). It can be shown that \( C_X \) and \( \frac{1}{N} K_X \) have the same non-zero eigenvalues (Sanchez Giraldo et al., 2014; Bach, 2022), yielding the kernel-based entropy estimator:
\[
S(K_X) = - \text{Tr} \left( \frac{1}{N} K_X \log \frac{1}{N} K_X \right) = - \sum_{i=1}^{N} \lambda_i \log \lambda_i,
\]
where \( \lambda_i \) represents the \( i \)th eigenvalue of \( \frac{1}{N} K_X \). The eigen-decomposition of \( K_X \) has \( O(N^3) \) time complexity, which needs to be taken into consideration depending on the use case.
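A minimal NumPy sketch of this estimator, assuming a normalized Gaussian kernel (the sample size and scale below are illustrative):

```python
import numpy as np

def gaussian_gram(X, gamma):
    """Normalized Gaussian Gram matrix: kappa(x, x) = 1 on the diagonal."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_entropy(K):
    """von Neumann entropy of K/N via an O(N^3) eigendecomposition."""
    lam = np.linalg.eigvalsh(K / K.shape[0])
    lam = lam[lam > 1e-12]                     # drop numerically zero eigenvalues
    return float(-np.sum(lam * np.log(lam)))

X = np.random.randn(256, 2)
print(kernel_entropy(gaussian_gram(X, gamma=0.5)))
```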
Covariance-based estimator: Alternatively, we can use an explicit mapping \( \phi_\omega : \mathcal{X} \to \mathcal{H}_D \) to a finite dimensional RKHS. We propose to use Fourier features to construct a mapping function to \( \mathcal{H}_D \). For \( \mathcal{X} \subseteq \mathbb{R}^d \) and a shift-invariant kernel \( \kappa(x, x') = \kappa(x - x') \), random Fourier features (RFF) (Rahimi & Recht, 2007) are a method to create a smooth feature mapping \( \phi_\omega(x) : \mathcal{X} \to \mathbb{R}^D \) so that \( \kappa(x - x') \approx \langle \phi_\omega(x), \phi_\omega(x') \rangle \). To generate an RFF mapping, we compute the Fourier transform of the kernel, \( p(\omega) = \frac{1}{2\pi} \int e^{-i \omega^\top \delta} \kappa(\delta) d\delta \), which yields a distribution on \( \mathbb{R}^d \) with density \( p(\omega) \). From this distribution, we draw \( D/2 \) i.i.d. samples \( \omega_1, \ldots, \omega_{D/2} \in \mathbb{R}^d \). Finally, the mapping is given by
\[
\phi_\omega(x) = \sqrt{\frac{2}{D}} \begin{bmatrix} \cos(\omega_1^\top x), \sin(\omega_1^\top x), \cdots, \cos(\omega_{D/2}^\top x), \sin(\omega_{D/2}^\top x) \end{bmatrix}.
\]
Letting \( \Phi_X = [\phi_\omega(x_1), \phi_\omega(x_2), \cdots, \phi_\omega(x_N)]^\top \) be the \( N \times D \) matrix containing the mapped samples, we can compute the empirical uncentered covariance matrix as \( C_X = \frac{1}{N} \Phi_X^\top \Phi_X \). Finally, we exploit the eigenvalues of the uncentered covariance matrix to compute the von Neumann entropy of \( C_X \) as:
\[
S(C_X) = - \text{Tr} (C_X \log C_X) = - \sum_{i=1}^{D} \lambda_i \log \lambda_i,
\]
where \( \lambda_i \) represents the \( i \)th eigenvalue of \( C_X \). This eigendecomposition has \( O(D^3) \) time complexity, where \( D \) is independent of the sample size.
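A minimal NumPy sketch of the RFF mapping and the covariance-based entropy estimator described above (the Gaussian-kernel scale and feature dimension are illustrative):

```python
import numpy as np

def rff_map(X, D, gamma, rng):
    """phi_omega : R^d -> R^D approximating kappa(x, x') = exp(-gamma ||x - x'||^2)."""
    d = X.shape[1]
    # for this kernel, p(omega) is Gaussian with standard deviation sqrt(2 * gamma)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D // 2))
    Z = X @ W
    return np.sqrt(2.0 / D) * np.concatenate([np.cos(Z), np.sin(Z)], axis=1)

def cov_entropy(Phi):
    C = Phi.T @ Phi / Phi.shape[0]             # uncentered covariance, unit trace
    lam = np.linalg.eigvalsh(C)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))   # O(D^3), independent of sample size

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 2))
print(cov_entropy(rff_map(X, D=128, gamma=0.5, rng=rng)))
```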
Both estimators of kernel-based entropy can be used in gradient based learning (Sanchez Giraldo & Principe, 2013; Sriperumbudur & Szabó, 2015). The kernel-based entropy has been used as a building block for other matrix-based measures, such as joint and conditional entropy, mutual information (Yu et al., 2019), total correlation (Yu et al., 2021), and divergence (Hoyos Osorio et al., 2022). Despite the success of the aforementioned measures, their connection with the classical information theory counterparts remains unclear.
For the case where \( \mathcal{X} \subseteq \mathbb{R}^d \) and the distribution \( P \) has a corresponding probability density function \( p \), we can establish an explicit connection between the kernel-based entropy estimator and Shannon’s differential entropy, \( H(p) = - \int_X p(x) \log p(x) dx \).
**Definition 1.** Let \( \phi : \mathcal{X} \to \mathcal{H} \) be a mapping to a reproducing kernel Hilbert space (RKHS), and \( \kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_{\geq 0} \) be a positive definite kernel, such that \( \kappa(x, x') = \langle \phi(x), \phi(x') \rangle_\mathcal{H} \), and \( \kappa(x, x) = 1 \) for all \( x \in \mathcal{X} \). Then, the kernel density function induced by the mapping \( \phi \) is defined as follows:
\[
\hat{p}(x) = \frac{1}{h} \langle \phi(x), C_P \phi(x) \rangle = \frac{1}{h} \int_X \kappa^2(x, x') dP(x') = \frac{1}{h} \int_X \kappa^2(x, x') p(x') dx',
\]
where \( h = \int_X \langle \phi(x), C_P \phi(x) \rangle dx \) is the normalizing constant.
Eqn. 6 can be interpreted as an instance of the Born rule, which calculates the probability of finding a state \( \phi(x) \) in a system described by the covariance operator \( C_P \) (González et al., 2022). Equivalently, the right-most side can be seen as smoothing the density \( p \) with a kernel \( \kappa^2(\cdot, \cdot) \).
**Theorem 1.** Let \( \hat{p}(x) \) be the kernel density function induced by a mapping \( \phi : \mathcal{X} \to \mathcal{H} \), then, the cross entropy between \( p \) and \( \hat{p} \) is:
\[
H(p, \hat{p}) = - \int_X p(x) \log \hat{p}(x) dx = S(C_P) + \log(h).
\]
Proof: See Appendix A.1
From Theorem 1 we can see that the covariance operator entropy relates to a plug-in estimator of Shannon’s differential entropy based on the Parzen density estimator. We can use well-known results about the convergence of the Parzen-density estimator (Dmitriev & Tarasenko [1974]) to derive the convergence of both kernel-based and covariance-based entropy estimators.
Theorem 2. Let \( \kappa(x, x') = \exp(-\gamma_N \|x - x'\|^2) \) be a Gaussian kernel with scale parameter \( \gamma_N = \frac{1}{N^{1/4}} \), and let \( p(x) \) be any bounded probability density function on \( X \), then \( S(K_X) \) converges to \( H(p) \) as \( N \to \infty \) with probability one. Proof: See Appendix A.2
A similar result can be proved for empirical covariance operators generated through RFFs.
Theorem 3. Let \( \phi_\omega : X \to \mathbb{R}^D \) be a Fourier features mapping approximating the Gaussian kernel with scale parameter \( \frac{\gamma_N}{2} = \frac{1}{4} N^{1/4} \), and let \( p(x) \) be any bounded probability density function on \( X \). Then, \( S(C_X) \) converges to \( H(p) \) as \( N \to \infty \) and \( D \to \infty \) with probability one. Proof: See Appendix A.3
3 REPRESENTATION JENSEN-SHANNON DIVERGENCE
For two probability measures \( P \) and \( Q \) on a measurable space \((X, B_X)\), the Jensen-Shannon divergence (JSD) is defined as follows:
\[
D_{JS}(P, Q) = H\left(\frac{P + Q}{2}\right) - \frac{1}{2} \left(H(P) + H(Q)\right),
\]
where \( \frac{P + Q}{2} \) is the mixture of both distributions and \( H(\cdot) \) is Shannon's entropy. Properties of JSD, such as boundedness, convexity, and symmetry have been extensively studied (Briët & Harremoës [2009], Sra [2021]). The quantum counterpart of the Jensen-Shannon divergence (QJSD) between density matrices \( \rho \) and \( \sigma \) is defined as \( D_{JS}(\rho, \sigma) = S\left(\frac{\rho + \sigma}{2}\right) - \frac{1}{2} \left(S(\rho) + S(\sigma)\right) \), where \( S(\cdot) \) is von Neumann's entropy. QJSD is everywhere defined, bounded, symmetric, and positive if \( \rho \neq \sigma \) (Sra [2021]). Similar to the kernel-based entropy, we let the covariance operators play the role of the density matrices to derive a measure of divergence that can be computed directly from data samples.
Definition 2. Let \( P \) and \( Q \) be two probability measures defined on a measurable space \((X, B_X)\), and let \( \phi : X \to \mathcal{H} \) be a mapping to a reproducing kernel Hilbert space (RKHS) \( \mathcal{H} \), such that \( \langle \phi(x), \phi(x) \rangle_{\mathcal{H}} = 1 \) for all \( x \in X \). Then, the representation Jensen-Shannon divergence (RJSD) between uncentered covariance operators \( C_P \) and \( C_Q \) is defined as:
\[
D_{JS}^\phi(C_P, C_Q) = S\left(\frac{C_P + C_Q}{2}\right) - \frac{1}{2} \left(S(C_P) + S(C_Q)\right).
\]
3.1 THEORETICAL PROPERTIES
RJSD inherits most of the properties of classical and quantum Jensen-Shannon divergence. Non-negativity: \( D_{JS}^\phi(C_P, C_Q) \geq 0 \). Positivity: \( D_{JS}^\phi(C_P, C_Q) = 0 \) if and only if \( C_P = C_Q \). Symmetry: \( D_{JS}^\phi(C_P, C_Q) = D_{JS}^\phi(C_Q, C_P) \). Boundedness: \( D_{JS}^\phi(C_P, C_Q) \leq \log(2) \). Also, \( D_{JS}^\phi(C_P, C_Q)^{1/2} \) is a metric on the cone of uncentered covariance matrices in any dimension (Virosztek [2021]).
Below, we introduce key properties of RJSD and the connection with its classical counterpart.
Theorem 4. For all probability measures \( P \) and \( Q \) defined on \( X \), and covariance operators \( C_P \) and \( C_Q \) with RKHS mapping \( \phi(\cdot) \) under the conditions of Definition 2, the following inequality holds:
\[
D_{JS}^\phi(C_P, C_Q) \leq D_{JS}(P, Q)
\]
Proof: See Appendix A.4
Theorem 5. Let \( P \) and \( Q \) be two probability measures defined on \( X \), with probability density functions \( p \) and \( q \) respectively. If there exists a mapping \( \phi^* \) such that \( p(x) = \frac{1}{h_P} \langle \phi^*(x), C_P \phi^*(x) \rangle \) and \( q(x) = \frac{1}{h_Q} \langle \phi^*(x), C_Q \phi^*(x) \rangle \), then:
\[
D_{JS}(P, Q) = D_{JS}^{\phi^*}(C_P, C_Q).
\]
Proof: See Appendix A.5
This theorem implies that the bound in Eqn. 10 is tight for optimal functions \( \phi(x) \) that approximate the true underlying distributions through Eqn. 6. Theorems 4 and 5 can be used to obtain a variational estimator of Jensen-Shannon divergence (see Section 4).
Finally, we show that RJSD relates to MMD with kernel \( \kappa^2(\cdot, \cdot) \), where MMD is formally defined as
\[
MMD_\kappa(P, Q) = \| \mu_P - \mu_Q \|_{\mathcal{H}}.
\]
**Theorem 6.** For all probability measures \( P \) and \( Q \) defined on \( X \), and covariance operators \( C_P \) and \( C_Q \) with RKHS mapping \( \phi(x) \) such that \( \langle \phi(x), \phi(x) \rangle_{\mathcal{H}} = 1 \quad \forall x \in X \):
\[
D_{JS}^\phi(C_P, C_Q) \geq \frac{1}{8} MMD_{\kappa^2}(P, Q)
\]
(12)
**Proof:** See Appendix A.6.
The result of Theorem 6 should not be underestimated. Since MMD is a lower bound on RJSD, any discrepancy between distributions that can be detected with MMD can also be detected with RJSD; that is, RJSD is at least as discriminative as MMD. Moreover, it shows that RJSD is well defined for characteristic kernels, for which RJSD is nonzero whenever \( P \neq Q \).
### 3.2 Estimating the Representation Jensen-Shannon Divergence
Given two sets of samples \( X = \{x_i\}_{i=1}^N \subset X \) and \( Y = \{y_i\}_{i=1}^M \subset X \) with unknown distributions \( P \) and \( Q \), we propose two estimators of RJSD.
**Kernel-based estimator:** Here, we propose an estimator of RJSD from kernel matrices without an explicit mapping to the RKHS.
**Lemma 1.** Let \( Z \) be the mixture of the samples of \( X \) and \( Y \), that is, \( Z = \{z_i\}_{i=1}^{N+M} \) where \( z_i = x_i \) for \( i \in \{1, \ldots, N\} \) and \( z_i = y_{i-N} \) for \( i \in \{N+1, \ldots, N+M\} \). Also, let \( K_Z \) be the kernel matrix consisting of all normalized pairwise kernel evaluations of the samples in \( Z \), then
\[
S\left(\frac{N}{N+M}C_X + \frac{M}{N+M}C_Y\right) = S(K_Z).
\]
(Proof: See Appendix A.7).
Since the spectrum of \( K_X \) and \( C_X \) have the same non-zero eigenvalues, likewise \( K_Y \) and \( C_Y \), the divergence can be directly computed from samples in the input space as:
\[
D_{JS}^\kappa(X, Y) = S(K_Z) - \left(\frac{N}{N+M}S(K_X) + \frac{M}{N+M}S(K_Y)\right)
\]
(13)
Leveraging the convergence results of Bach (2022, Proposition 7) for the empirical estimator \( S(K_X) \) of \( S(C_P) \), we can show that \( D_{JS}^\kappa(X, Y) \) converges to the population quantity \( D_{JS}^\phi(C_P, C_Q) \) at a rate \( O\left(\frac{1}{\sqrt{N}}\right) \), assuming \( N = M \). Details of this rate are given in Appendix A.8. Additionally, a direct consequence of Theorem 2 is that, under the same assumptions as the theorem, \( D_{JS}^\kappa(X, Y) \) converges to \( D_{JS}(P, Q) \) as \( N \to \infty \) with probability one.
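A minimal sketch of the kernel-based estimator in Eqn. 13, reusing `gaussian_gram` and `kernel_entropy` from the entropy sketch above (the shift between the toy samples is illustrative):

```python
import numpy as np

def rjsd_kernel(X, Y, gamma):
    """Eqn. 13: mixture entropy minus the weighted entropies of each sample."""
    N, M = len(X), len(Y)
    Z = np.concatenate([X, Y], axis=0)
    S_mix = kernel_entropy(gaussian_gram(Z, gamma))
    S_X = kernel_entropy(gaussian_gram(X, gamma))
    S_Y = kernel_entropy(gaussian_gram(Y, gamma))
    return S_mix - (N * S_X + M * S_Y) / (N + M)

X = np.random.randn(200, 2)
Y = np.random.randn(200, 2) + 1.0              # Q is a shifted copy of P
print(rjsd_kernel(X, Y, gamma=0.5))
```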
**Covariance-based estimator:** We propose to use Fourier features to construct a mapping function \( \phi_\omega : X \to \mathcal{H}_D \) to a finite-dimensional RKHS as explained in Section 2.2. Let \( \Phi_X \in \mathbb{R}^{N \times D} \) and \( \Phi_Y \in \mathbb{R}^{M \times D} \) be the matrices containing the mapped samples of each distribution. Then, the empirical uncentered covariance matrices are computed as \( C_X = \frac{1}{N}\Phi_X^\top \Phi_X \) and \( C_Y = \frac{1}{M}\Phi_Y^\top \Phi_Y \). Finally, the covariance-based RJSD estimator is defined as:
\[
D_{JS}^\omega(C_X, C_Y) = S\left(\frac{N}{N+M}C_X + \frac{M}{N+M}C_Y\right) - \left(\frac{N}{N+M}S(C_X) + \frac{M}{N+M}S(C_Y)\right),
\]
(14)
Finally, we use Eqn. 5 to estimate the entropies of the covariance matrices. Notice that the Fourier features are not used solely to reduce the computational burden of the kernel-based estimator: they also parameterize the representation space, enabling kernel learning. We can treat the Fourier features as learnable parameters within a neural network (a Fourier-feature network), optimizing them to maximize the divergence and enhance its discriminatory power. Consequently, the Fourier-feature approach offers a more versatile estimator that extends beyond reducing computational cost.
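A self-contained NumPy sketch of the covariance-based estimator in Eqn. 14; note that the same Fourier features must be shared by both samples so the covariances live in the same space (the feature dimension and kernel scale are illustrative defaults):

```python
import numpy as np

def rjsd_cov(X, Y, D=128, gamma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D // 2))

    def phi(A):                                 # shared Fourier features for X and Y
        Z = A @ W
        return np.sqrt(2.0 / D) * np.concatenate([np.cos(Z), np.sin(Z)], axis=1)

    def entropy(C):
        lam = np.linalg.eigvalsh(C)
        lam = lam[lam > 1e-12]
        return -np.sum(lam * np.log(lam))

    Phi_X, Phi_Y = phi(X), phi(Y)
    N, M = len(X), len(Y)
    C_X = Phi_X.T @ Phi_X / N
    C_Y = Phi_Y.T @ Phi_Y / M
    C_mix = (N * C_X + M * C_Y) / (N + M)
    return float(entropy(C_mix) - (N * entropy(C_X) + M * entropy(C_Y)) / (N + M))
```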
4 VARIATIONAL ESTIMATION OF CLASSICAL JENSEN-SHANNON DIVERGENCE
We exploit the lower bound in Theorem 4 to derive a variational method for estimating the classical Jensen-Shannon divergence (JSD) given only samples from \( P \) and \( Q \). Accordingly, we choose \( \Phi \) to be the family of functions \( \phi_\omega : \mathcal{X} \rightarrow \mathcal{H}_D \) parameterized by \( \omega \in \Omega \), and we optimize the Fourier features to maximize the lower bound in Theorem 4. Notice that we can also use a neural network \( f_\omega \) with a Fourier features mapping \( \phi_\omega \) in the last layer, that is, \( \phi_\omega \circ f_\omega = \phi_\omega(f_\omega(x)) \). We call this network a Fourier-features network (FFN). Finally, we compute the divergence based on this representation, leading to a neural estimator of classical JSD.
**Definition 3.** (Jensen-Shannon divergence variational estimator). Let \( \Phi = \{\phi_\omega \circ f_\omega\}_{\omega \in \Omega} \) be the set of functions parameterized by a FFN. We define our JSD variational estimator as:
\[
\hat{D}_{JS}(P, Q) = \sup_{\omega \in \Omega} D_{JS}^\omega(C_P, C_Q).
\]
This approach leverages the expressive power of deep networks and combines it with the capacity of kernels to embed distributions in an RKHS. This formulation allows us to model distributions with complex structure and improves the convergence of the estimator thanks to the universal approximation properties of neural networks (Wilson et al., 2016; Liu et al., 2020). Algorithm 1 in Appendix B describes the proposed estimator.
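The following is a simplified PyTorch sketch of Definition 3 in which only the Fourier frequencies are trained (the full method additionally trains a deep backbone \( f_\omega \)); `TrainableFourierRJSD` is a name introduced here for illustration, and it assumes equally sized samples so the mixture weights are 1/2.

```python
import torch

class TrainableFourierRJSD(torch.nn.Module):
    def __init__(self, d, D=64, gamma=0.5):
        super().__init__()
        # trainable frequencies, initialized from the Gaussian-kernel spectrum
        self.W = torch.nn.Parameter(torch.randn(d, D // 2) * (2 * gamma) ** 0.5)

    def phi(self, X):
        Z = X @ self.W
        D = 2 * self.W.shape[1]
        return (2.0 / D) ** 0.5 * torch.cat([torch.cos(Z), torch.sin(Z)], dim=1)

    @staticmethod
    def entropy(C):
        # torch.linalg.eigvalsh is differentiable, enabling gradient ascent
        lam = torch.linalg.eigvalsh(C).clamp_min(1e-12)
        return -(lam * lam.log()).sum()

    def forward(self, X, Y):
        Px, Py = self.phi(X), self.phi(Y)
        Cx, Cy = Px.T @ Px / len(X), Py.T @ Py / len(Y)
        return self.entropy(0.5 * (Cx + Cy)) - 0.5 * (self.entropy(Cx) + self.entropy(Cy))

model = TrainableFourierRJSD(d=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
X, Y = torch.randn(512, 2), torch.randn(512, 2) + 1.0
for _ in range(200):
    opt.zero_grad()
    (-model(X, Y)).backward()   # maximize the lower bound on JSD
    opt.step()
print(model(X, Y).item())
```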
5 EXPERIMENTS
5.1 VARIATIONAL JENSEN-SHANNON DIVERGENCE ESTIMATION
First, we evaluate the performance of our variational estimator of Jensen-Shannon divergence (JSD) in a tractable toy experiment. Here, \( P \sim p(x; l_p, s_p) \) and \( Q \sim p(x; l_q, s_q) \) are two Cauchy distributions with location parameters \( l_p \) and \( l_q \) and scale parameters \( s_p = s_q = 1 \). We vary the location parameter of \( Q \) over time to control the target divergence. We use a closed form of the JSD between Cauchy distributions derived by Nielsen & Okamura (2022) to determine the location parameter (see Appendix C.1 for more details). Then, we apply Algorithm 1 to estimate JSD drawing \( N = 512 \) samples from both distributions at every epoch. We compare the estimates of divergence against different neural estimators. JSD corresponds to the mutual information between the mixture distribution and a Bernoulli distribution indicating when a sample is drawn from \( P \) or \( Q \). Therefore, we use mutual information estimators to approach the JSD estimation, such as NWJ (Nguyen et al., 2010), infoNCE (Oord et al., 2018), CLUB (Cheng et al., 2020), MINE (Belghazi et al., 2018). We also employ KNIFE (Pichler et al., 2022) to estimate the entropy terms and compute JSD.
Fig. 1 shows the estimation results. All compared methods approximate JSD; however, some of them struggle to adapt to distribution changes, and these abrupt adjustments could lead to instabilities during training. In contrast, the RJSD estimator accurately estimates the divergence with lower variance, adjusting smoothly to changes in the distributions. Additionally, using exponential moving averages (EMA) of the covariance matrices decreases the estimation variance further, yielding a smoother estimate. Finally, we compute RJSD for a fixed set of Fourier features without any optimization (no gradients backpropagated) and observe that RJSD still approximates the true divergence. This result agrees with Theorem 5, suggesting that the computed kernel implicitly approximates the underlying distributions of the data.
5.2 GENERATIVE ADVERSARIAL NETWORKS
Generative Adversarial Networks (GANs) are a family of models to generate images/audio. GANs algorithms minimize the dissimilarity between the generated and the real data distributions (Farnia & Ozdaglar, 2020). For example, the vanilla GAN algorithm (Goodfellow et al., 2020) minimizes the Jensen-Shannon divergence (JSD), whereas Wasserstein-GANs (Arjovsky et al., 2017) and MMD-GANs (Li et al., 2017) minimize their respective statistical distances.
GANs, however, usually suffer from mode collapse failing to cover the multiple modes (classes) of the real data (Choi & Han, 2022). This deficiency yields generative distributions with lower entropy compared to the target distribution (Che et al., 2016). One common approach to prevent mode collapse is through entropy regularizers (Belghazi et al., 2018; Dieng et al., 2019).
Figure 1: Jensen-Shannon divergence estimation for two sets of samples following Cauchy distributions (N = 512). We compare the following estimators: NWJ (Nguyen et al., 2010), infoNCE (Oord et al., 2018), CLUB (Cheng et al., 2020), MINE (Belghazi et al., 2018), KNIFE (Pichler et al., 2022), RJSD, RJSD with EMA, and RJSD with a fixed kernel. The black line is the closed-form JS divergence between the Cauchy distributions. The parameters of the distributions are changed every 200 epochs to increase the divergence.
Below, we propose a methodology for training GANs using RJSD in the objective function. From first principles, RJSD should work for reducing mode collapse without requiring auxiliary entropy regularizers. The RJSD-GAN is formulated as follows:
$$\min_{\theta \in \Theta} \max_{\omega \in \Omega} D_{JS}^{\omega}(X, Y^{\theta}),$$
where $X$ are samples from the real data, and $Y^{\theta}$ are samples created by a generator $G_{\theta}$. Instead of classifying real and fake samples, we use a Fourier-features network $\{\phi_{\omega} \circ f_{\omega}\}_{\omega \in \Omega}$ (FFN, see Section 4) to learn a multidimensional representation in an RKHS where the divergence is maximized. Subsequently, the generator $\{G_{\theta}\}_{\theta \in \Theta}$ attempts to minimize RJSD. We follow a single-step alternating gradient method (see Algorithm 3 in Appendix B). We assess our GAN formulation in two well-known mode-collapse experiments: eight Gaussians dataset and stacked MNIST.
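To illustrate, here is a minimal, hypothetical PyTorch sketch of the single-step alternating scheme on a toy eight Gaussians mixture; the generator architecture, optimizer settings, and mode layout are assumptions, and the critic reuses the `TrainableFourierRJSD` sketch from Section 4's example rather than the full FFN.

```python
import math
import torch

latent_dim = 16
generator = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 2),
)
critic = TrainableFourierRJSD(d=2, D=128)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-3)

def sample_real(n):
    # eight Gaussian modes arranged on a circle of radius 2
    angles = [2 * math.pi * k / 8 for k in range(8)]
    centers = torch.tensor([[math.cos(a), math.sin(a)] for a in angles]) * 2.0
    return centers[torch.randint(0, 8, (n,))] + 0.05 * torch.randn(n, 2)

for step in range(2000):
    x_real = sample_real(256)
    z = torch.randn(256, latent_dim)
    # critic step: the Fourier-feature critic maximizes RJSD
    opt_d.zero_grad()
    (-critic(x_real, generator(z).detach())).backward()
    opt_d.step()
    # generator step: minimize RJSD (single-step alternating updates)
    opt_g.zero_grad()
    critic(x_real, generator(torch.randn(256, latent_dim))).backward()
    opt_g.step()
```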
5.2.1 Synthetic Experiments
We apply RJSD to train a GAN in a synthetic experiment. The target distribution is a mixture of eight Gaussian distributions arranged in a circle. Fig. 2 shows the real data and the samples generated by various learning functions to train GANs. As expected, the standard (vanilla) GAN fails to generate samples from all modes (Fig. 2(a)). The Hinge (Lim & Ye, 2017) and Wasserstein-GP GANs (Gulrajani et al., 2017) successfully produce samples representing all eight modes, yet Figs. 2(b) and 2(c) exhibit generated samples with reduced variance/diversity (lower entropy) within each mode: a phenomenon termed intra-class collapse. As we observe, the generated samples fail to cover the entire support of each Gaussian mode, clustering towards the center. In contrast to the compared methods, the samples generated by the RJSD-GAN show improved mode coverage and higher diversity. This is visually noticeable in Fig. 2.
[Table 1: Average KL divergence between real and generated distributions on the eight Gaussians dataset; methods compared: RJSD, Wasserstein-GP, Hinge.]

Table 2: Number of modes and KL divergence between real and generated distributions on stacked MNIST.

| Method | Modes (Max 1000) | KL |
|--------------------------------------------|--------|------|
| DCGAN (Radford et al., 2015) | 990 | 3.80 |
| ALI (Dumoulin et al., 2016) | 16.0 | 5.40 |
| Unrolled GAN (Metz et al., 2016) | 48.7 | 4.32 |
| VEEGAN (Srivastava et al., 2017) | 150 | 2.95 |
| Wasserstein-GAN (Arjovsky et al., 2017) | 990 | 0.72 |
| PresGAN (Dieng et al., 2019) | 999.6 | 0.11 |
| PacGAN (Lin et al., 2018) | 1000.0 | 0.06 |
| GAN+MINE (Belghazi et al., 2018) | 1000.0 | 0.05 |
| GAN + RJSD (Ours) | 1000.0 | 0.04 |
Additionally, we perform the following quantitative analysis. We cluster the eight modes generated by each method and estimate their means and covariance matrices (see Fig. 1 in Appendix C.2.1). Then, we calculate the Kullback-Leibler (KL) divergence between the real Gaussian modes and their generated counterparts, averaged over the eight modes. Table 1 highlights the superiority of RJSD in terms of KL divergence compared with the baseline methods. This empirical evidence supports the efficacy of RJSD in avoiding mode collapse and in generating samples that match the target distribution beyond mere visual comparability.
5.2.2 Stacked MNIST
We conduct a quantitative evaluation to assess the efficacy of RJSD in reducing mode collapse on the stacked MNIST dataset. This dataset consists of three randomly sampled MNIST digits stacked along different color channels, resulting in 1000 possible classes (modes) corresponding to all combinations of the 10 digits. We use the standard DCGAN generator architecture (Radford et al., 2015) and modify the discriminator architecture to include a Fourier-features mapping (see implementation details in Appendix C.2.2). We compare our method against a considerable number of GAN algorithms using the same generator and following the same evaluation protocol. We utilize a pre-trained classifier to quantify the number of distinct generated modes, calculate the Kullback-Leibler (KL) divergence between the distribution of the generated modes and the real mode distribution, and average the results over five runs. Table 2 shows the results: RJSD captures all modes, steadily generating samples from all classes and achieving the lowest KL divergence among the compared approaches. Although our algorithm is a standard GAN that explicitly minimizes the Jensen-Shannon divergence, RJSD does not require entropy regularizers or mode-collapse prevention mechanisms beyond the learning function itself.
5.3 Two Sample Testing
We evaluate the performance of RJSD for two-sample testing on different datasets and compare it against different state-of-the-art (SOTA) methods. We perform the following tests: (a) RJSD-FF: Two-sample test based on RJSD, optimizing the Fourier features. (b) RJSD-RFF: Two-sample test based on RJSD using random Fourier features, optimizing just the length-scale of the associated Gaussian kernel. (c) RJSD-D: Two-sample test based on RJSD using a deep Fourier-features network as explained in section 4 (see implementation details in Appendix C.3). (d) RJSD-KF: Two-sample test based on the kernel RJSD estimator, optimizing the length-scale of a Gaussian kernel. (e) MMD-O: Two-sample test based on MMD, optimizing the length-scale of the Gaussian kernel (Liu et al., 2020). (f) MMD-D: Two-sample test based on MMD with a deep kernel (Liu et al., 2020). (g) C2ST-L: a classifier two-sample test based on the output classification scores (Cheng & Cloninger, 2022). (h) C2ST-S: a classifier two-sample test based on the sign of the output classification scores (Lopez-Paz & Oquab, 2016).
We perform two-sample testing on two synthetic and two real-world datasets. Specifically, we perform permutation tests and the testing procedure is detailed in Appendix C.3.
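As a concrete reference, a generic permutation test wrapping any of the statistics above can be sketched as follows; the number of permutations and the significance level are illustrative defaults, and `stat` is a user-supplied callable (e.g., the `rjsd_kernel` sketch from Section 3.2).

```python
import numpy as np

def permutation_test(X, Y, stat, n_perm=200, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    observed = stat(X, Y)
    Z = np.concatenate([X, Y], axis=0)
    N = len(X)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(Z))          # reshuffle the pooled sample
        null.append(stat(Z[perm[:N]], Z[perm[N:]]))
    p_value = (1 + np.sum(np.array(null) >= observed)) / (1 + n_perm)
    return p_value <= alpha, p_value            # reject H0: P = Q at level alpha

# e.g.: reject, p = permutation_test(X, Y, lambda A, B: rjsd_kernel(A, B, gamma=0.5))
```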
Blobs dataset (Liu et al., 2020): In this dataset, \( P \) and \( Q \) are mixtures of nine Gaussians with the same modes. Each mode in \( P \) is an isotropic Gaussian; however, the modes in \( Q \) have different
---
\[^{2}\text{We did not perform this test for large size datasets due to computational restrictions.}\]
Figure 4: Average test power. (a) Blobs data. (b) High dimensional Gaussian mixture (GM), fixed $d = 10$. (c) High dimensional GM, fixed $N + M = 4000$ (d) Higgs data. Significance level $\alpha = 0.05$.
Table 3: MNIST average test power ($\alpha = 0.05$). Bold represents higher mean per column.
| $N + M$ | 200 | 300 | 400 | 500 | 600 |
|--------|---------|---------|---------|---------|---------|
| RJSD-FF| 0.374 ± 0.100 | 0.811 ± 0.012 | 0.996 ± 0.001 | **1.000 ± 0.000** | **1.000 ± 0.000** |
| RJSD-RFF| 0.184 ± 0.025 | 0.320 ± 0.029 | 0.436 ± 0.030 | 0.644 ± 0.037 | 0.800 ± 0.051 |
| RJSD-D | 0.352 ± 0.084 | **0.898 ± 0.108** | **1.000 ± 0.000** | **1.000 ± 0.000** | **1.000 ± 0.000** |
| MMD-O | 0.148 ± 0.035 | 0.221 ± 0.042 | 0.283 ± 0.042 | 0.398 ± 0.050 | 0.498 ± 0.035 |
| MMD-D | **0.449 ± 0.124** | 0.704 ± 0.182 | 0.983 ± 0.010 | 0.999 ± 0.003 | **1.000 ± 0.000** |
| C2ST-L| 0.254 ± 0.126 | 0.424 ± 0.113 | 0.818 ± 0.102 | 0.967 ± 0.029 | 0.994 ± 0.010 |
| C2ST-S| 0.181 ± 0.112 | 0.364 ± 0.104 | 0.759 ± 0.121 | 0.945 ± 0.042 | 0.986 ± 0.014 |
covariances. Here, we perform two-sample testing while increasing the number of samples per blob ($N = 9 \times$ samples per blob). Fig. 4(a) presents the results. We can clearly see that RJSD-FF, RJSD-D, and RJSD-KF outperform all SOTA methods. We conclude that even for a small number of samples the RJSD-based methods exhibit high test power.
High-Dimensional Gaussian Mixtures (Liu et al., 2020): In this dataset, $\mathbb{P}$ and $\mathbb{Q}$ have the same modes, and their covariances differ only on a single dimension. See Liu et al. (2020) for details. We test both, changing the number of samples while keeping the dimension constant ($d = 10$) and maintaining the number of samples ($N = 4000$) while modifying the dimensionality. Figs. 4(b) and 4(c) display the results. RJSD-D and RJSD-FF are the winners in most settings, although C2ST-L performs better at higher dimensions.
Higgs dataset (Baldi et al., 2014): Following Liu et al. (2020) we perform two-sample testing on the Higgs dataset ($d = 4$) as we increase the number of samples. Fig. 4(d) shows the results. Once again, RJSD-D and RJSD-FF outperform the baselines in almost all scenarios.
MNIST generative model: Here, we train RJSD models to distinguish between the distribution $\mathbb{P}$ of MNIST digits and the distribution $\mathbb{Q}$ of generated samples from a pretrained deep convolutional generative adversarial network (DCGAN) (Radford et al., 2015). Table 3 reports the average test power for all methods as we increase the number of samples. RJSD-D consistently outperforms the compared methods, except with the lowest number of observations.
6 CONCLUSIONS
We introduce the representation Jensen-Shannon divergence (RJSD), a novel measure based on embedding distributions in a feature space, which allows the construction of non-parametric estimators based on Fourier features. Notably, this estimator is scalable and differentiable, making it suitable for diverse machine-learning problems. We show that RJSD is a lower bound on the classical Jensen-Shannon divergence, leading to a variational estimator of high precision compared to related approaches. We leverage this novel divergence to train generative networks, and the empirical results show that RJSD effectively mitigates mode collapse, yielding generative models that produce more accurate and diverse results. Furthermore, when applied to two-sample testing, RJSD surpasses other SOTA techniques, demonstrating superior performance and reliability in discriminating between distributions. These findings highlight the significant practical implications of our divergence measure.
REFERENCES
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In *International conference on machine learning*, pp. 214–223. PMLR, 2017.
Francis Bach. Information theory with kernel methods. *IEEE Transactions on Information Theory*, 2022.
Charles R Baker. Joint measures and cross-covariance operators. *Transactions of the American Mathematical Society*, 186:273–289, 1973.
Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. *Nature communications*, 5(1):4308, 2014.
Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In *International conference on machine learning*, pp. 531–540. PMLR, 2018.
Thomas B Berrett and Richard J Samworth. Efficient two-sample functional estimation and the super-oracle phenomenon. *arXiv preprint arXiv:1904.09347*, 2019.
Jop Briët and Peter Harremoës. Properties of classical and quantum jensen-shannon divergence. *Physical review A*, 79(5):052311, 2009.
Yuheng Bu, Shaofeng Zou, Yingbin Liang, and Venugopal V Veeravalli. Estimation of kl divergence: Optimal minimax rate. *IEEE Transactions on Information Theory*, 64(4):2648–2674, 2018.
Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. *arXiv preprint arXiv:1612.02136*, 2016.
Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, and Lawrence Carin. Club: A contrastive log-ratio upper bound of mutual information. In *International conference on machine learning*, pp. 1779–1788. PMLR, 2020.
Xiuyuan Cheng and Alexander Cloninger. Classification logit two-sample testing by neural networks for differentiating near manifold densities. *IEEE Transactions on Information Theory*, 68(10):6631–6662, 2022.
Jinyoung Choi and Bohyung Han. Mcl-gan: Generative adversarial networks with multiple specialized discriminators. *Advances in Neural Information Processing Systems*, 35:29597–29609, 2022.
Adji B Dieng, Francisco JR Ruiz, David M Blei, and Michalis K Titsias. Prescribed generative adversarial networks. *arXiv preprint arXiv:1910.04302*, 2019.
Yu G Dmitriev and Felix P Tarasenko. On the estimation of functionals of the probability density and its derivatives. *Theory of Probability & Its Applications*, 18(3):628–633, 1974.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. *arXiv preprint arXiv:1606.00704*, 2016.
Farzan Farnia and Asuman Ozdaglar. Gans may have no nash equilibria. *arXiv preprint arXiv:2002.09124*, 2020.
Fabio A González, Alejandro Gallego, Santiago Toledo-Cortés, and Vladimir Vargas-Calderón. Learning with density matrices and random features. *Quantum Machine Intelligence*, 4(2):23, 2022.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. *Communications of the ACM*, 63(11):139–144, 2020.
Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012.
|
BEH4mGo7zP
|
In 4.1.1, N= 16384. What does N refers to? Residue number? R refers to this. Number of vertices on surface? But in 5.1, the author mentioned they sampled 500,000 points for each protein. Besides, Does 500,000 sample points refer to the point cloud after downsampling? According to my opinion, using MSMS, there would be 100-120 vertices around each residue, which means 500,000 points at least represents a protein with length 5,000. That is a very long sequence. I don't believe the average minimum length of the pretraining proteins is 5,000.
|
Pre-training Sequence, Structure, and Surface Features for Comprehensive Protein Representation Learning
Youhan Lee*, Hasun Yu*, Jaemyung Lee*, Jaehoon Kim
Kakao Brain
{youhan.lee, shawn.yu, james.brain, jack.brain}@kakaobrain.com
Abstract
Proteins can be represented in various ways, including their sequences, 3D structures, and surfaces. While recent studies have successfully employed sequence- or structure-based representations to address multiple tasks in protein science, there has been significant oversight in incorporating protein surface information, a critical factor for protein function. In this paper, we present a pre-training strategy that incorporates information from protein sequences, 3D structures, and surfaces to improve protein representation learning. Specifically, we utilize Implicit Neural Representations (INRs) for learning surface characteristics, and name it ProteinINR. We confirm that ProteinINR successfully reconstructs protein surfaces, and integrate this surface learning into the existing pre-training strategy of sequences and structures. Our results demonstrate that our approach can enhance performance in various downstream tasks, thereby underscoring the importance of including surface attributes in protein representation learning. These findings underline the importance of understanding protein surfaces for generating effective protein representations.
1 Introduction
Proteins are vital components of biological systems, executing a myriad of functions that underpin an extensive array of cellular processes and biological pathways. These intricate macromolecules have multi-faceted characteristics that can be represented through different paradigms, including but not limited to their amino acid sequences, three-dimensional (3D) structures, and the specific attributes of their surface regions. In recent years, advancements in high-throughput sequencing (HTS) technologies, cryogenic electron microscopy (cryo-EM), and sophisticated algorithms for protein structure prediction (Jumper et al., 2021; Baek et al., 2021; Lin et al., 2022) have led to an explosion of available protein sequence (Suzek et al., 2007) and structure (Berman et al., 2000; Varadi et al., 2022; Lin et al., 2023) data, most of which have been made publicly accessible. Leveraging these abundant datasets, recent studies (Meier et al., 2021; Zhang et al., 2022, 2023) have successfully employed machine learning models pre-trained on this data, resulting in significant progress in tackling an array of downstream tasks in the field of protein science.
Despite these strides, there exists a notable oversight in the current landscape of protein representation learning: the often-underestimated significance of protein surface characteristics. The attributes of a protein's surface are crucial in determining its functional properties, particularly in the context of molecular interactions like ligand binding, enzymatic catalysis, and signal transduction between molecules (Gainza et al., 2020; Somnath et al., 2021). While existing works for protein representation learning have focused heavily on encoding amino acid sequences and 3D structural elements, they have largely neglected the indispensable role that protein surfaces serve, thus leaving an unaddressed gap in the prevailing research. More specifically, the protein structure can be divided into the atoms comprising the backbone and the components constituting the side chains. In this context, the protein surfaces are determined by both backbone and side chain atoms. However, traditional protein structure encoders typically process protein 2D graphs or 3D geometric graphs that only contain
*These authors contributed equally to this work
Table 1: Comparison of different protein encoders with and without sequence, structure, or surface pre-training. Our model, ESM-GearNet-INR-MC, covers three modalities, sequence, structure, and surface in both encoding and pre-training, achieving comprehensive protein representation learning.
| Method | Sequence Encoder | Structure Encoder | Sequence Pre-training | Structure Pre-training | Surface Encoder | Surface Pre-training |
|----------------------|------|------|------|------|------|------|
| CNN | ✓ | | | | | |
| Transformer | ✓ | | | | | |
| GVP | ✓ | ✓ | | | | |
| GearNet | ✓ | ✓ | | | | |
| ESM-1b | ✓ | | ✓ | | | |
| ProtBert | ✓ | | ✓ | | | |
| DeepFRI | ✓ | ✓ | ✓ | | | |
| LM-GVP | ✓ | ✓ | ✓ | | | |
| ESM-GearNet | ✓ | ✓ | ✓ | | | |
| GearNet-DP | ✓ | ✓ | | ✓ | | |
| ESM-GearNet-MC | ✓ | ✓ | ✓ | ✓ | | |
| ESM-GearNet-INR-MC (Ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
alpha carbon or backbone atoms, respectively. Consequently, state-of-the-art representations often lack consideration for side-chain information.
In response to this significant gap, our research aims to offer a comprehensive solution. We propose an all-encompassing pre-training strategy that incorporates information from all three essential aspects of proteins: sequences, 3D structures, and notably, surfaces. Our approach is pioneering in that it is the first to specifically target the learning of protein surface attributes, and it employs cutting-edge Implicit Neural Representations (INRs) (Chen & Wang, 2022) to achieve this goal effectively. This inclusive approach enables our model to enhance performance across various downstream tasks, thereby emphasizing the importance of incorporating surface information in protein representation learning.
In summary, our contributions include:
- We are the first to propose a pre-training strategy that incorporates information from protein sequences, structures, and surfaces.
- We utilize Implicit Neural Representations (INRs) as an effective mechanism for learning surface characteristics of proteins.
- We conduct a comprehensive comparison of the effects of pre-training on protein sequences, structures, and surfaces, thereby demonstrating the efficacy of learning about surfaces.
2 RELATED WORK
2.1 PROTEIN REPRESENTATION LEARNING
Most studies in the field of protein representation learning have adopted one of three main approaches: (i) focusing on protein sequences, (ii) concentrating on protein structures, or (iii) employing a hybrid strategy that incorporates both sequence and structural information.
In the first approach, which focuses on learning protein sequences, researchers commonly adopt the architecture of pre-trained language models from the field of Natural Language Processing (NLP), such as Transformer (Vaswani et al., 2017), BERT (Devlin et al., 2018), and GPT (Radford et al., 2018), to effectively represent proteins by learning their amino acid sequences as if they were language (Elnaggar et al., 2007; Meier et al., 2021; Notin et al., 2022). The second approach generally employs Graph Neural Networks (GNNs)-based architectures (Ingraham et al., 2019; Jing et al., 2020; Hermosilla et al., 2020; Zhang et al., 2022) to capture the intricate structural features of proteins. In the third approach, hybrid models aim to learn from both protein sequences and structures. Notable studies, such as DeepFRI (Gligorijević et al., 2021) and LM-GVP (Wang et al., 2022) have utilized encoders for both sequence and structural information and have pre-trained on sequence data. STEPS (Chen et al., 2023) and ESM-GearNet (Zhang et al., 2023) have gone a step further by also pre-training on structural information to achieve enhanced performance.
However, these methods have not taken into account the significant role that protein molecular surface information plays in various biological processes. Traditionally, molecular surfaces are defined as Connolly surfaces (Connolly, 1983; Sanner et al., 1996) based on van der Waals (vdW) radii, often represented as mesh-based structures derived from signed distance functions. A seminal work for modeling protein molecular surfaces is MaSIF (molecular surface interaction fingerprinting) (Gainza et al., 2020), which fingerprints molecular surfaces expressed as molecular meshes using pre-defined and pre-calculated physical and geometrical features. To remove the high pre-computation costs of featurization, Sverrisson et al. (2021) proposed dMaSIF, which showed that modeling molecular surfaces as point clouds with atom categories per point is competitive. Somnath et al. (2021) proposed HOLOProt, which segments the protein surface into "superpixels" for more efficient consideration of surface information and uses these features in conjunction with structure features in a multi-modal manner. Theoretically, however, molecular surfaces are continuous surfaces with infinite resolution, which existing mesh-based approaches cannot fully express. To tackle this challenge, we utilize Implicit Neural Representations (INRs), a technique capable of capturing such infinite-resolution characteristics. Our model, called ProteinINR, represents protein molecular surfaces in a resolution-independent manner. Furthermore, ProteinINR is a generalizable INR approach, allowing us to develop a single model capable of representing many protein structures. Separately, Wang et al. (2023) proposed harmonic message passing (HMR), which considers surfaces during molecular representation learning. Compared to HOLOProt and HMR, which focus on the design of the encoder, we use INRs as a pre-training framework in which a structure encoder is trained to extract structure features that recover the molecular surface.
### 2.2 Implicit Neural Representations
Point cloud-based (Qi et al., 2017a,b; Thomas et al., 2019; Zhang et al., 2021), mesh-based (Sinha et al., 2016; Bagautdinov et al., 2018; Verma et al., 2018), and voxel-based (Curless & Levoy, 1996; Wu et al., 2015; Tatarchenko et al., 2017; Zeng et al., 2017) methods have historically relied on fixed-sized coordinates or grids to represent 3D assets. Unfortunately, these approaches suffer from resolution dependency, making them insufficient for modeling or rendering high-resolution 3D assets effectively. In contrast, Implicit Neural Representations (INRs) concentrate on learning parameterized functions that predict location-specific information for arbitrary query coordinates, utilizing seminal methods such as auto-decoding (Park et al., 2019b; Mescheder et al., 2019), Fourier features (Tancik et al., 2020; Mildenhall et al., 2021), sinusoidal activations (Sitzmann et al., 2020b), meta-learning (Tancik et al., 2021; Dupont et al., 2022a,b; Bauer et al., 2023), or transformer-based architectures (Chen & Wang, 2022). This formulation makes INRs independent of resolution, enabling them to depict scenes and objects with outstanding precision and fidelity (Chen et al., 2021; Sajjadi et al., 2022; Jun & Nichol, 2023).
Grattarola & Vandergheynst (2022) proposed a generalized INR, which is the only prior work dedicated to the study of INRs for proteins. The contribution was crucial because generalized INRs expanded the use of INRs to topological systems that do not possess a well-defined coordinate system. They utilized 2D graph spectral embeddings to learn INRs for various real-world systems in non-Euclidean domains, including proteins. Nevertheless, although the work demonstrated the capacity to generalize across diverse systems, it necessitated training an individual Multi-Layer Perceptron (MLP) model for each sample, hence constraining its ability to generalize across datasets. Our study provides evidence that it is feasible to represent protein surfaces using INRs in the Euclidean coordinate system. Furthermore, our study contributes to the area by showcasing the feasibility of a generalizable INR model capable of representing an entire dataset with a single model.
### 3 Preliminaries
#### 3.1 Protein Graph
Proteins are constructed from 20 different amino acids. Their 3D structures are formed through the chemical bonds and interactions among the atoms of the amino acids, making them naturally suited for graph representation. Based on GearNet's representation (Zhang et al., 2022), which exhibits high performance on the downstream tasks we aim to solve, a protein $\mathcal{P}$ is expressed as a relational graph $\mathcal{G}_\mathcal{P}$ made up of $(V, E, R)$. $V$ is the set of nodes, where each node represents a residue in the protein and includes the amino acid residue type and 3D coordinate. $E$ is the set of edges among
Figure 1: An illustration of our proposed strategy for pre-training sequences, structures, and surfaces to solve downstream tasks.
Figure 2: An overview of our ProteinINR architecture. The point tokens, structure tokens, and latent tokens are computed using a high-frequency-aware point encoder, a structure encoder (GearNet-Edge-IEConv), and three-dimensional convolution layers, respectively. Points are at 16k resolution. Transformer encoders output the parameters of an MLP from the tokens; SDF values are then obtained from these parameters for the query coordinates.
nodes, with edge types $R$ such as edges between two residues located within a certain distance in the protein sequence or in 3D space.
### 3.2 INRs
To model the surface of a protein, we utilize the Signed Distance Function (SDF) representation. The SDF is a well-established strategy for representing 3D shapes as scalar fields: it is a mathematical expression that assigns a scalar value to a given coordinate $x$, expressing the distance $d$ between the spatial point and the closest point on the shape's surface as follows:
$$F(x) = s : x \in \mathbb{R}^3, s \in \mathbb{R}. \quad (1)$$
We employ the methodology of DeepSDF (Park et al., 2019a) and train a model that possesses continuous implicit representations describing $F$ for geometric molecular surfaces. We define the inside of the surface as $d < 0$ and the outside as $d > 0$. Following this definition, the equation $F(x) = 0$ defines the molecular surface boundary, i.e., the molecular surface itself. In summary, we train a model that encodes a protein molecular surface and produces INR parameters, which represent $F$.
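To make this sign convention concrete, here is a toy sketch that evaluates a sphere SDF in place of a molecular surface; it is purely illustrative and not part of ProteinINR.

```python
import numpy as np

def sphere_sdf(x, center, radius):
    # Toy SDF: negative inside the surface, zero on it, positive outside,
    # matching the sign convention for the molecular surface described above.
    return np.linalg.norm(x - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # inside      -> s < 0
                [1.0, 0.0, 0.0],   # on surface  -> s = 0
                [2.0, 0.0, 0.0]])  # outside     -> s > 0
print(sphere_sdf(pts, np.zeros(3), 1.0))  # [-1.  0.  1.]
```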
4 METHOD
As mentioned earlier, we aim to pre-train on sequences, structures, and surfaces of proteins for better protein representation. Since sequences, structures, and surfaces are quite different modalities, establishing a strategy for pre-training across them is crucial. To learn from a large volume of structural and sequence data, we employ the “series-fusion” approach, which has demonstrated superior performance in previous work (Zhang et al., 2023). First, we pre-train a sequence encoder on sequences and use its encoding as input to the structure encoder. Next, we pre-train the structure encoder on surfaces using ProteinINR and use the resulting weights to initialize structure pre-training. We then pre-train the structure encoder on structures through multi-view contrastive learning based on the approach of Zhang et al. (2022) to obtain the final protein representation. Our pre-training strategy can be seen as continual pre-training (Ke et al., 2022). Finally, we leverage the protein representation from the model pre-trained on all three modalities to solve downstream tasks. Figure 1 contains an illustration of our pre-training strategy.
4.1 Generalizable Implicit Neural Representations for Protein
To effectively pre-train on protein surfaces, we employ INRs. In the early stages of INRs, a coordinate-based MLP was trained for each individual instance. However, with growing dataset sizes, the computational expense of training a separate MLP for each data point has become too costly. Consequently, various solutions have been proposed to develop a generalizable INR that accommodates an entire dataset within a single model. One notable approach, TransINR (Chen & Wang, 2022), leverages a Transformer architecture to compute INR parameters from multiple partial views of 3D objects as conditioning inputs, and has garnered considerable attention in the field. Building upon these advancements, ProteinINR is the first to adopt and extend these methodologies in the protein domain. It represents an expressive and generalizable INR that can effectively capture the shapes of tens of thousands of protein instances within a single model.
4.1.1 Encoding Protein Using Point and Structure Encoder
The ProteinINR framework first encodes a given protein instance $P$ into a protein point set embedding. ProteinINR takes the 3D protein asset as a protein point cloud $P \in \mathbb{R}^{N \times 3}$, where $N$ denotes the number of points in the point cloud of the protein molecular surface; we randomly sample 16,384 points as input to the point encoder in our experiments. ProteinINR utilizes the Dual-scale Point Cloud Recognition (DSPoint) (Zhang et al., 2021) encoder $\psi$ to address the complex and irregular nature of protein surfaces, which exhibit intricate high-frequency features. This encoder effectively captures a given point cloud's high-frequency and low-frequency characteristics, demonstrating notable efficacy in tasks that involve high-frequency features, such as point cloud segmentation. After updating point features through DSPoint, we downsample the points into a reduced set of $M$ points $P' \in \mathbb{R}^{M \times 3}$ by utilizing deformable Kernel Point Convolution (KPConv) networks (Thomas et al., 2019). Finally, a learnable linear transformation is applied to the downsampled points to align the embeddings' hidden dimension prior to cross attention.
RGB values are frequently utilized as per-point features in point cloud modeling. ProteinINR treats the chemical properties of protein surfaces, which stem from their electrical environment, as chemical colors. Although MaSIF utilized a pre-computation technique to determine the chemical colors, the computational cost associated with this approach is prohibitively expensive. Fortunately, dMaSIF has shown that a comprehensive representation of chemical properties can be built from atom category features and distances inside an end-to-end learning framework. Building upon these findings, we adopt a similar approach for protein point cloud chemical color representation. We integrate two essential elements into our approach: atom categorical embeddings and the distances to the top-$K$ closest atoms. Incorporating these characteristics into the point cloud encoder yields embeddings that encompass the surface's chemical attributes. The utilization of this encoder and these chemical features ensures that ProteinINR represents the protein molecular surface by considering the intricate interaction between the protein's structural and chemical characteristics.
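As a rough illustration of these per-point chemical features, the sketch below combines learnable atom-category embeddings with distances to the $K$ closest atoms; the shapes, embedding width, and atom vocabulary are assumptions for illustration rather than the exact ProteinINR implementation.

```python
import torch

def chemical_point_features(points, atom_xyz, atom_types, n_types=6, k=16):
    # Per-point "chemical color" features in the spirit of dMaSIF:
    # distances to the k nearest atoms plus embeddings of their categories.
    d = torch.cdist(points, atom_xyz)            # (N_points, N_atoms)
    dists, idx = d.topk(k, largest=False)        # k nearest atoms per point
    emb = torch.nn.Embedding(n_types, 8)         # learnable in a real model
    type_feats = emb(atom_types[idx])            # (N_points, k, 8)
    return torch.cat([dists.unsqueeze(-1), type_feats], dim=-1)

points = torch.randn(4096, 3)                    # sampled surface points
atom_xyz = torch.randn(3000, 3)                  # protein atom coordinates
atom_types = torch.randint(0, 6, (3000,))        # e.g. C, N, O, S, H, other
feats = chemical_point_features(points, atom_xyz, atom_types)
print(feats.shape)  # torch.Size([4096, 16, 9])
```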
The primary contribution of our study is employing INR training as a pre-training technique to inject knowledge of protein surface characteristics into the protein structure encoder. To accomplish this, we represent an input protein as a structure and incorporate a protein structure encoder into the INR training process, which allows us to encode the protein structure graph $G_P$ and generate protein structure embeddings $g \in \mathbb{R}^{R \times h}$, where $R$ and $h$ are the number of residues and the hidden dimension size, respectively. These embeddings can then be used for various downstream tasks. This architectural design enables the protein structure encoder to actively participate in surface-aware representation learning. As a result, the structure encoder enhances its ability to comprehend and depict protein molecular surfaces comprehensively. We denote the extracted point embedding as $p \in \mathbb{R}^{M \times h}$.
### 4.1.2 Spatially Arranged Latent Representations
Recently, Spatial Functa (Bauer et al., 2023) demonstrated improvements in the quality of latent representations when two-dimensional spatial inductive biases are incorporated. Building upon this, we extend the concept to three-dimensional protein surfaces. In ProteinINR, the latent embeddings $z \in \mathbb{R}^{L \times c}$ of length $L$ are first rearranged into a three-dimensional voxel grid $z \in \mathbb{R}^{i \times j \times k \times c}$, where $c$, $i$, $j$, and $k$ are the feature size, width, height, and depth of the latent grid, respectively. We then apply 3D convolutions to the reorganized embeddings, incorporating spatial inductive biases into the latent embeddings. Finally, the latent embeddings are rearranged back to their original shape $z \in \mathbb{R}^{L \times c}$ and projected through a learnable projection layer to the feature dimension, giving $z \in \mathbb{R}^{L \times h}$. While this approach may seem simple, it is remarkably effective, leading to enhanced INR performance, as further elucidated in our ablation study.
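A minimal sketch of this rearrangement follows; the grid and feature sizes (L = 512, i = j = k = 8) are assumed values for illustration, not the paper's hyper-parameters.

```python
import torch
import torch.nn as nn

L, c, h = 512, 64, 256          # token count, latent width, hidden dim
i = j = k = 8                   # 8 * 8 * 8 = 512 = L

z = torch.randn(1, L, c)                                # (batch, L, c)
grid = z.view(1, i, j, k, c).permute(0, 4, 1, 2, 3)     # (batch, c, i, j, k)
grid = nn.Conv3d(c, c, kernel_size=3, padding=1)(grid)  # 3D spatial bias
z = grid.permute(0, 2, 3, 4, 1).reshape(1, L, c)        # back to (batch, L, c)
z = nn.Linear(c, h)(z)                                  # project to dimension h
print(z.shape)  # torch.Size([1, 512, 256])
```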
### 4.1.3 Transformer Encoder for INRs
In ProteinINR, the latent representation (referred to as $z$) of a protein instance's surface is obtained using a transformer encoder. The initial step is the concatenation of the protein surface point cloud embedding $p$, structural embedding $g$, and latent embedding $z$ as follows:

$$h = \text{Concat}(p, g, z), \quad h \in \mathbb{R}^{(M+R+L) \times h}$$
Next, the final latent codes $z$ are obtained through self-attention processes where protein information is propagated over all protein-related tokens and latent embeddings.
### 4.1.4 INR Decoder and SDF Regression
In order to strengthen the ability of ProteinINR to capture localized and fine-grained details of local surfaces, we utilize the decoder introduced by Lee et al. (2023). This decoder has demonstrated a significant improvement of over 50% compared to the prior TransINR model, achieved by introducing a locality inductive bias into the INR framework. In ProteinINR, the locality-aware INR decoder $D_\phi$ utilizes the latent code $z$ to predict the SDF $\tilde{s}$ for $K$ query coordinates $x \in \mathbb{R}^{K \times 3}$ near the molecular surfaces of $N$ protein samples $P^n$. ProteinINR is optimized by minimizing the L2 loss between the predicted SDF values and the corresponding ground-truth SDF values for each SDF sample. Furthermore, clamping is employed to focus the model's attention on the details in the vicinity of the surface region; we use a clamp value of 0.2, as employed in DeepSDF. The detailed steps are as follows:
$$\tilde{s} = D_\phi(x, z)$$
$$\min_{\phi, \psi, z} \frac{1}{N} \sum_{n=1}^{N} \frac{1}{K_n} \sum_{i=1}^{K_n} \left\| \text{clamp}(s_i, \delta) - \text{clamp}(\tilde{s}_i, \delta) \right\|_2^2$$
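A minimal sketch of this clamped objective is shown below, following the squared form written above (DeepSDF itself uses an L1 variant); tensor shapes are illustrative.

```python
import torch

def clamped_sdf_loss(pred, target, delta=0.2):
    # Clamping to [-delta, delta] focuses the loss on the near-surface region.
    pred_c = torch.clamp(pred, -delta, delta)
    target_c = torch.clamp(target, -delta, delta)
    return ((pred_c - target_c) ** 2).mean()

pred = torch.randn(1024)    # predicted SDF values for K query points
target = torch.randn(1024)  # sampled ground-truth SDF values
print(clamped_sdf_loss(pred, target))
```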
### 4.2 Pre-training on Sequences and Structures
To effectively learn protein representations from a large volume of structural and sequence data, we employ the “series fusion” approach, which has demonstrated superior performance in previous work (Zhang et al., 2023). In the “series fusion” architecture, the output of the trained language model is fed into the structure encoder. We utilize ESM-1b (Meier et al., 2021) as the trained language model. To encode a protein graph by learning its structural information, we adopt the GearNet-Edge-IEConv architecture, which performs best across most tasks, as the structure encoder, and we pre-train it on structures with the multi-view contrastive learning approach (Zhang et al., 2022). Detailed information about the architecture and hyper-parameters used in structure pre-training is described in Appendix A.1.
5 EXPERIMENTS AND RESULTS
5.1 DATASET PREPARATION FOR PRE-TRAINING
**INR training** Before computing samples of the signed distance function, we generated the zero-level surface, namely the molecular surface, represented by the equation \( F(x) = 0 \) in implicit representations. To accomplish this, we utilized the MSMS program (Connolly, 1983; Sanner et al., 1996), a well-established triangulation software package for molecular surfaces. Subsequently, we computed SDF values for points obtained via the sampling approach used in DeepSDF. The points are sampled near the molecular mesh produced by MSMS, and their SDF values are the distances to the nearest vertex of the given molecular surface mesh. In this work, 500,000 points were generated for SDF training, serving as SDF samples independent of the protein point cloud input. These points and their corresponding SDF values are used as the target data for INR training. We train ProteinINR for 50 epochs with a learning rate of 1e-4.
**Structure pre-training** To pre-train on structural information, we utilize the AlphaFold Protein Structure Database version 2 (Varadi et al., 2022). We use protein structure prediction data for 20 species and Swiss-Prot (Boeckmann et al., 2003). In-depth details and statistics about the data we used are provided in Appendix A.2.
5.2 EXPERIMENTAL SETTINGS
**Downstream tasks** To quantify the representation power of our proposed method, we adopt three downstream tasks. As in the GearNet paper, we choose the Enzyme Commission (EC) number prediction task and the Gene Ontology (GO) term prediction task proposed by Gligorijević et al. (2021). Fold Classification (FC), suggested by Hou et al. (2018), is adopted as a downstream evaluation as well. The EC task is the prediction of the EC numbers of proteins, which represent the biochemical reactions they catalyze. The GO task is divided into three sub-tasks by ontology: biological process (BP), molecular function (MF), and cellular component (CC). Each task predicts whether a protein is associated with a specific GO term. For the EC and GO tasks, \( F_{max} \) and pair-centric area under the precision-recall curve (AUPR) values are calculated to measure performance. In the FC task, fold labels of proteins are classified, and mean accuracy is used to evaluate performance.
We evaluate a total of seven models: i) GearNet, which is trained directly on the downstream tasks with a structure module; ii) GearNet-INR, whose structure module is pre-trained on surfaces and then trained on the downstream tasks; iii) GearNet-MC, whose structure module is pre-trained on structures by multi-view contrastive learning and then trained on the downstream tasks; iv) GearNet-INR-MC, whose structure module is pre-trained on surfaces, subsequently on structures, and then trained on the downstream tasks; v) ESM-GearNet-MC, where a sequence encoder is pre-trained, followed by pre-training on structures; vi) ESM-GearNet-INR, where a sequence encoder is pre-trained, followed by pre-training on surfaces; vii) ESM-GearNet-INR-MC, which entails pre-training a sequence encoder, then pre-training the structure module on surfaces, followed by further pre-training on structures, and finally training on the downstream tasks. We use ESM-1b as the sequence encoder and GearNet-Edge-IEConv as the structure encoder. We fine-tune each task with the datasets described in Appendix A.3. The model is trained for 50 epochs on EC, 200 epochs on GO, and 300 epochs on the fold classification task. We fine-tune and evaluate the model with the framework proposed by GearNet (Zhang et al., 2022); all other fine-tuning settings are the same except the batch size. We use a batch size of 16 per step (8 A100 GPUs with 2 per GPU) for all experiments.
Table 2: Performance on downstream tasks. We compare models with and without the pre-trained weights from ProteinINR. We highlight the best performance in terms of $F_{\text{max}}$ and AUPR for the EC and GO tasks and mean accuracy for FC in bold. † indicates scores taken from Xu et al. (2023), which used different settings from our study.
| Method | EC $F_{\text{max}}$ | EC AUPR | GO-BP $F_{\text{max}}$ | GO-BP AUPR | GO-MF $F_{\text{max}}$ | GO-MF AUPR | GO-CC $F_{\text{max}}$ | GO-CC AUPR | FC Acc | Sum |
|--------|------|------|------|------|------|------|------|------|-----|-----|
| ESM-1b† | 86.9 | 88.4 | 45.2 | 33.2 | 65.9 | 63.0 | 47.7 | 32.4 | - | - |
| ESM-2† | 87.4 | 88.8 | 47.2 | **34.0** | 66.2 | **64.3** | 47.2 | 35.0 | - | - |
| GearNet | 81.6 | 83.7 | 44.8 | 25.2 | 60.4 | 52.9 | 43.3 | 26.8 | 46.8 | 465.5 |
| GearNet-INR | 81.4 | 83.7 | 44.7 | 26.5 | 59.9 | 52.1 | 43.0 | 27.2 | 47.6 | 466.1 |
| GearNet-MC | 87.2 | 88.9 | 49.9 | 26.4 | 64.6 | 55.8 | 46.9 | 27.1 | 51.5 | 498.3 |
| GearNet-INR-MC | 86.9 | 88.9 | 49.8 | 26.0 | 65.4 | 56.1 | 47.7 | 26.6 | 51.1 | 498.5 |
| ESM-GearNet-MC | 89.0 | 89.7 | **53.5** | 27.5 | **68.7** | 57.9 | 49.4 | 32.4 | **53.8** | 521.9 |
| ESM-GearNet-INR | 89.0 | 90.3 | 50.8 | 33.4 | 67.8 | 62.6 | **50.6** | **36.9** | 48.9 | **530.3** |
| ESM-GearNet-INR-MC | 89.6 | 90.3 | 51.8 | 33.2 | 68.3 | 58.0 | 50.4 | 35.7 | 50.8 | 528.1 |
Figure 3: The images above are examples of meshes and surfaces reconstructed by ProteinINR for given proteins. ProteinINR preserves the intricate details of irregular protein surfaces, particularly capturing features such as ring-like and hole shapes with remarkable fidelity.
5.3 Experimental results
**Representing protein surface shapes using ProteinINR** The procedure for acquiring a triangular mesh corresponding to a specific protein from ProteinINR's INR parameters is as follows. Initially, SDFs are calculated at the vertices of a regular voxel grid of size 128. Following this, the marching cubes algorithm (Chernyaev, 1995) is employed to compute the mesh. Protein surface samples reconstructed using ProteinINR are depicted in Figure 3. It is worth mentioning that protein molecular surfaces exhibit significant irregularity and possess high-frequency properties. Intriguingly, ProteinINR effectively preserves intricate information, even hole- or ring-like shapes. In addition, we calculated the Chamfer distance between the ground truth and the reconstructed meshes for the test set. A subset of 30,000 data points was selected, and the computed average Chamfer distance (Appendix A.4) was 0.003. This value is quite decent in the context of Chamfer distances for natural 3D objects as reported in studies on SDF reconstruction (Mescheder et al., 2019; Park et al., 2019b; Sitzmann et al., 2020a; Liu et al., 2023). These findings indicate that ProteinINR effectively acquires generalizable INRs that can accurately depict the uneven surfaces of proteins.
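A sketch of this mesh-extraction step is shown below; a sphere SDF stands in for the trained decoder, and scikit-image's marching-cubes routine (a Lewiner-style implementation) substitutes for the cited algorithm.

```python
import numpy as np
from skimage.measure import marching_cubes

n = 128                                      # the regular 128^3 grid above
lin = np.linspace(-1.0, 1.0, n)
xx, yy, zz = np.meshgrid(lin, lin, lin, indexing="ij")
sdf = np.sqrt(xx**2 + yy**2 + zz**2) - 0.5   # stand-in for D_phi(x, z)

# Extract the zero-level set F(x) = 0, i.e. the molecular surface boundary.
verts, faces, normals, values = marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)
```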
**Downstream evaluation** We compare the performance of the structure encoder initialized with weights from a pre-trained ProteinINR model against one without such initialization across various downstream tasks related to protein function. Intriguingly, ESM-GearNet-INR-MC and ESM-GearNet-INR outperform the previous state-of-the-art model, ESM-GearNet-MC, when taking the summation of all scores. This clearly demonstrates our main contribution: incorporating surface-related features, which previous models have not explored, into protein representation pre-training enables comprehensive representation learning for proteins. Additionally, we observe a rapid decrease in pre-training loss, as depicted in Figure 4, which provides additional evidence.
Table 3: Results on downstream tasks depending on the pre-training order.

| Method | Pre-training order | EC $F_{\text{max}}$ | EC AUPR | GO-BP $F_{\text{max}}$ | GO-BP AUPR | GO-MF $F_{\text{max}}$ | GO-MF AUPR | GO-CC $F_{\text{max}}$ | GO-CC AUPR | FC Acc | Sum |
|--------|--------------------|------|------|------|------|------|------|------|------|-----|-----|
| ESM-GearNet-INR-MC | sequences → surfaces → 3D-structures | 89.6 | 90.3 | 51.8 | 33.2 | 68.3 | 58.0 | 50.4 | 35.7 | 50.8 | 528.1 |
| GearNet-INR-MC | surfaces → 3D-structures | 86.9 | 88.9 | 49.8 | 26.0 | 65.4 | 56.1 | 47.7 | 26.6 | 51.1 | 498.5 |
| GearNet-MC-INR | 3D-structures → surfaces | 84.1 | 86.0 | 46.9 | 25.9 | 62.1 | 54.3 | 44.8 | 27.2 | 47.6 | 478.9 |
Protein function primarily occurs on the surface and is closely associated with surface features. The observed enhancement on protein function tasks indicates that acquiring surface understanding via INRs is advantageous. In contrast, no noticeable performance gain is observed on the FC task. Since surface features capture higher-level representations of the outer part of the protein structure, they may not contribute substantially to classifying the overall fold structure. Moreover, as pre-training progresses, the loss gap between models diminishes. We attribute this trend to the nature of our encoder, which focuses solely on alpha carbons: while surface information is derived from full-atom information, the encoder only learns from alpha-carbon structures. We therefore hypothesize that the second-stage mutual information maximization during pre-training on structure data biases the model toward alpha-carbon structural information after the first-stage surface pre-training.

Nonetheless, even under these limited conditions, the models that include the protein surface modality show performance gains.
**Experiment on order** Considering the original results (Table 2), it is evident that ESM has the most dominant impact on the downstream tasks, revealing the supportive role of structure and surface in enhancing downstream performance. Table 3 enables us to compare the significance of structure and surface pre-training while excluding the dominant influence of sequence. We can see that the structure encoder that learned structure information last (GearNet-INR-MC) achieved superior performance compared to GearNet-MC-INR, which was pre-trained in the opposite order. Based on the results of GearNet-INR (466.1) and GearNet-MC (498.3) shown in Table 2, it appears that in the absence of sequence pre-training, structure pre-training has a greater influence on downstream tasks than surface pre-training. We conjecture that this observation supports the findings shown in Table 3.
**Ablation study on 3D latent embedding** In ProteinINR, we incorporate a 3D convolution layer to introduce a spatial inductive bias into the latent space. To evaluate the effect of this approach, we analyze the learning curve of the INR with and without the spatial inductive bias. As depicted in Figure 5 in the Appendix, the spatial inductive bias clearly enhances INR learning.
## 6 Conclusion
We propose a pre-training strategy for learning from sequences, structures, and surfaces of proteins to achieve better protein representations. For the first time, we use INRs to pre-train on protein surfaces, introducing a method we call ProteinINR. We confirm that ProteinINR effectively reconstructs protein surfaces. Moreover, the results on the downstream tasks demonstrate that learning the protein surface can lead to better protein representations.
Our work represents an important step towards incorporating protein surfaces, which play a crucial role in protein functions. There are several interesting avenues for further research: generating new proteins from the latent representation of surfaces we pre-train; applying our approach to other types of molecules, such as small-molecule drugs; and identifying a better strategy for integrating all three modalities, particularly the effective integration of surface and structure pre-training.
Meanwhile, our approach depends on protein structures, so the use of predicted structures may worsen the performance of our method for proteins without experimentally determined structures.
7 REPRODUCIBILITY
The architectural design of ProteinINR is influenced by TransINR, and we utilize the decoder model introduced by Lee et al. (2023). DSPoint, KPConv, and GearNet are implemented from their official code releases. The training dataset used in pre-training structures is prepared from the AF2 prediction dataset, similarly to GearNet. The procedure for generating SDF data follows the approach described in the DeepSDF framework. To assess the performance of downstream tasks, we use the publicly available TorchDrug framework (Zhu et al., 2022). Detailed information regarding training and evaluation is described in Section 4, Section 5, and the Appendix.
8 ETHICAL STATEMENT
In this work, we focus on advancing the topic of protein representation learning by incorporating surface information alongside sequence and 3D structure-based representations. We acknowledge the importance of ethical considerations in scientific research and we aim to provide further clarification on the following ethical aspects.
We provide transparent and comprehensive details about our methodology, experiments, and results. We list any limitations or potential biases in our research.
Our research aims to elucidate the significance of surfaces in protein representation learning, potentially influencing drug discovery and enzyme development. This could impact a wide range of applications, including the identification of innovative therapeutic targets, the development of more promising drugs, improvements in agricultural productivity, and ultimately, improvements in human health.

The purpose of this research is to make a constructive and positive contribution to the domain of protein science while upholding the ethical conduct of our research.
REFERENCES
Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch, R Dustin Schaeffer, et al. Accurate prediction of protein structures and interactions using a three-track neural network. *Science*, 373(6557):871–876, 2021.
Timur Bagautdinov, Chenglei Wu, Jason Saragih, Pascal Fua, and Yaser Sheikh. Modeling facial geometry using compositional vaes. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3877–3886, 2018.
Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Schwarz, and Hyunjik Kim. Spatial functa: Scaling functa to imagenet classification and generation. *arXiv preprint arXiv:2302.03130*, 2023.
Helen M Berman, John Westbrook, Zukang Feng, Gary Gilliland, Talapady N Bhat, Helge Weissig, Ilya N Shindyalov, and Philip E Bourne. The protein data bank. *Nucleic acids research*, 28(1):235–242, 2000.
Brigitte Boeckmann, Amos Bairoch, Rolf Apweiler, Marie-Claude Blatter, Anne Estreicher, Elisabeth Gasteiger, Maria J Martin, Karine Michoud, Claire O’Donovan, Isabelle Phan, et al. The swiss-prot protein knowledgebase and its supplement trembl in 2003. *Nucleic acids research*, 31(1):365–370, 2003.
Can Chen, Jingbo Zhou, Fan Wang, Xue Liu, and Dejing Dou. Structure-aware protein self-supervised learning. *Bioinformatics*, 39(4):btad189, 2023.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020.
|
hESD2NJFg8
|
How useful are the confidence scores generated by the LLMs (Appendix F.1, table 6)? Showing the mean and variance of confidence score for each dataset may help in understanding its impact on performance.
|
LABEL-FREE NODE CLASSIFICATION ON GRAPHS WITH LARGE LANGUAGE MODELS (LLMs)
Zhikai Chen¹, Haitao Mao¹, Hongzhi Wen¹, Haoyu Han¹, Wei Jin², Haiyang Zhang³, Hui Liu¹, Jiliang Tang¹
¹Michigan State University ²Emory University ³Amazon.com
{chenzh85, haitaoma, wenhongz, hanhaoy1, liuhui7, tangjili}@msu.edu, wei.jin@emory.edu, hhaiz@amazon.com
ABSTRACT
In recent years, there have been remarkable advancements in node classification achieved by Graph Neural Networks (GNNs). However, they necessitate abundant high-quality labels to ensure promising performance. In contrast, Large Language Models (LLMs) exhibit impressive zero-shot proficiency on text-attributed graphs. Yet, they face challenges in efficiently processing structural data and suffer from high inference costs. In light of these observations, this work introduces a pipeline for label-free node classification on graphs with LLMs, LLM-GNN. It amalgamates the strengths of both GNNs and LLMs while mitigating their limitations. Specifically, LLMs are leveraged to annotate a small portion of nodes, and GNNs are then trained on the LLMs' annotations to make predictions for the remaining large portion of nodes. The implementation of LLM-GNN faces a unique challenge: how can we actively select nodes for LLMs to annotate so as to obtain annotations of high quality, representativeness, and diversity, thereby enhancing GNN performance at lower cost? To tackle this challenge, we develop an annotation quality heuristic and leverage the confidence scores derived from LLMs to advance node selection. Comprehensive experimental results validate the effectiveness of LLM-GNN on text-attributed graphs from various domains. In particular, LLM-GNN can achieve an accuracy of 74.9% on the vast-scale dataset OGBN-PRODUCTS at a cost of less than 1 dollar. Our code is available at https://github.com/CurryTang/LLMGNN.
1 INTRODUCTION
Graphs are prevalent across multiple disciplines with diverse applications (Ma & Tang, 2021). A graph is composed of nodes and edges, and nodes often come with certain attributes, especially text attributes, representing node properties. For example, in the OGBN-PRODUCTS dataset (Hu et al., 2020b), each node represents a product, and its textual description is the node's attribute. Node classification is a critical task on graphs that aims to assign labels to unlabeled nodes based on a set of labeled nodes, node attributes, and graph structure. In recent years, Graph Neural Networks (GNNs) have achieved superior performance in node classification (Kipf & Welling, 2016; Hamilton et al., 2017; Veličković et al., 2017). Despite the effectiveness of GNNs, they assume the ready availability of ground truth labels as a prerequisite. This assumption often neglects the pivotal challenge of procuring high-quality labels for graph-structured data: (1) given the diverse and complex nature of graph-structured data, human labeling is inherently hard; (2) given the sheer scale of real-world graphs, such as OGBN-PRODUCTS (Hu et al., 2020b) with millions of nodes, annotating a significant portion of the nodes becomes both time-consuming and resource-intensive.
Compared to GNNs, which require adequate high-quality labels, Large Language Models (LLMs) with massive knowledge have showcased impressive zero-shot and few-shot capabilities, especially for the node classification task on text-attributed graphs (TAGs) (Guo et al., 2023; Chen et al., 2023; He et al., 2023a). Such evidence suggests that LLMs can achieve promising performance
without the requirement for any labeled data. However, unlike GNNs, LLMs cannot naturally capture and understand informative graph structural patterns (Wang et al., 2023a). Moreover, LLMs cannot be well-tuned since they can only utilize limited labels due to the limited input context length (Dong et al., 2022). Thus, although LLMs can achieve promising performance in zero-shot or few-shot scenarios, there may still be a performance gap between LLMs and GNNs trained with abundant labeled nodes (Chen et al., 2023). Furthermore, the prediction cost of LLMs is much higher than that of GNNs, making them less scalable for large datasets such as OGBN-ARXIV and OGBN-PRODUCTS (Hu et al., 2020b).
In summary, we make two primary observations: (1) Given adequate annotations with high quality, GNNs excel in utilizing graph structures to provide predictions both efficiently and effectively. Nonetheless, limitations can be found when adequate high-quality annotations are absent. (2) In contrast, LLMs can achieve satisfying performance without high-quality annotations while being costly. Considering these insights, it becomes evident that GNNs and LLMs possess complementary strengths. This leads us to an intriguing question: Can we harness the strengths of both while addressing their inherent weaknesses?
In this paper, we provide an affirmative answer to the above question by investigating the potential of harnessing the zero-shot learning capabilities of LLMs to alleviate the substantial training data demands of GNNs, a scenario we refer to as label-free node classification. Notably, unlike the common assumption that ground truth labels are always available, noisy labels arise when annotations are generated by LLMs. We thus confront a unique challenge: how can we ensure high annotation quality without sacrificing diversity and representativeness? On one hand, we need to design appropriate prompts that enable LLMs to produce more accurate annotations. On the other hand, we need to strategically choose a set of training nodes that not only possess high-quality annotations but also exhibit informativeness and representativeness, as prior research has shown a correlation between these attributes and the performance of the trained model (Huang et al., 2010).
To overcome these challenges, we propose a pipeline for label-free node classification on graphs with LLMs, short for LLM-GNN. Different from traditional graph active node selection (Wu et al., 2019; Cai et al., 2017), LLM-GNN accounts for the difficulty of node annotation by LLMs when actively selecting nodes. It then utilizes LLMs to generate confidence-aware annotations and leverages the confidence scores to further refine annotation quality via post-filtering. By seamlessly blending annotation quality with active selection, LLM-GNN achieves impressive results at a minimal cost, eliminating the necessity for ground truth labels. Our main contributions can be summarized as follows:
1. We introduce a new label-free pipeline LLM-GNN to leverage LLMs for annotation, providing training signals on GNN for further prediction.
2. We adopt LLMs to generate annotations with calibrated confidence, and introduce difficulty-aware active selection with post filtering to get training nodes with a proper trade-off between annotation quality and traditional graph active selection criteria.
3. On the massive-scale OGBN-PRODUCTS dataset, LLM-GNN can achieve 74.9% accuracy without the need for human annotations. This performance is comparable to manually annotating 400 randomly selected nodes, while the cost of the annotation process via LLMs is under 1 dollar.
2 PRELIMINARIES
In this section, we introduce text-attributed graphs and notation utilized in our study. We then review two primary pipelines on node classification. The first pipeline is the default node classification pipeline to evaluate the performance of GNNs (Kipf & Welling, 2016), while it totally ignores the data selection process. The second pipeline further emphasizes the node selection process, trying to identify the most informative nodes as training sets to maximize the model performance within a given budget.
Our study focuses on Text-Attributed Graphs (TAGs), represented as $G_T = (\mathcal{V}, \mathcal{A}, T, X)$. $\mathcal{V} = \{v_1, \cdots, v_n\}$ is the set of $n$ nodes paired with raw attributes $T = \{t_1, t_2, \ldots, t_n\}$. Each text attribute can be encoded as a sentence embedding $X = \{x_1, x_2, \ldots, x_n\}$ with the help of SentenceBERT (Reimers & Gurevych, 2019). The adjacency matrix $\mathcal{A} \in \{0, 1\}^{n \times n}$ represents graph connectivity, where $\mathcal{A}[i, j] = 1$ indicates an edge between nodes $i$ and $j$. Although our study puts more emphasis on TAGs, it has the potential to be extended to more types of graphs through methods like Liu et al. (2023) and Zhao et al. (2023).
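A minimal sketch of this encoding step with the sentence-transformers library; the checkpoint name and example texts are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any SentenceBERT checkpoint
T = ["Title and abstract of paper 1 ...",        # raw text attributes t_i
     "Product description of node 2 ..."]
X = model.encode(T)                              # (n, d) embeddings x_i
print(X.shape)
```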
**Traditional GNN-based node classification pipeline** assumes a fixed training set $V_{train}$ with ground truth labels $y_{V_{train}}$. The GNN is trained on these ground truth labels. The well-trained GNN then predicts labels for the remaining unlabeled nodes $V \setminus V_{train}$ in the test stage.
**Traditional graph active learning-based node classification** aims to select a group of nodes $V_{act} = S(A, X)$ from the pool $V$ so that the performance of GNN models trained on these nodes with labels $y_{V_{act}}$ is maximized.
**Limitations of the current pipelines.** Both pipelines above assume that ground truth labels can always be obtained (Zhang et al., 2021c; Wu et al., 2019) while overlooking the intricacies of the annotation process. Nonetheless, annotation can be both expensive and error-prone in practice, even for seemingly straightforward tasks (Wei et al., 2021). For example, the accuracy of human annotations for the CIFAR-10 dataset, which only involves categorizing everyday objects, is approximately 82%. Annotation on graphs faces unique challenges. Recent evidence (Zhu et al., 2021a) shows that human annotation on graphs is easily biased, focusing on nodes that share certain characteristics within a small subgraph. Matters become even harder when graph active learning is taken into consideration: the improved annotation diversity inevitably increases the difficulty of ensuring annotation quality. For instance, it is much easier to annotate within a few small communities than across all the communities in a social network. Considering these limitations of existing pipelines, a pertinent question arises: can we design a pipeline that leverages LLMs to automatically generate high-quality annotations and utilizes them to train a GNN model with promising node classification performance?
3 METHOD
To overcome the limitations of current pipelines for node classifications, we propose a new pipeline Label-free Node Classification on Graphs with LLMs, short for LLM-GNN. It (1) adopts LLMs that demonstrate promising zero-shot performance on various node classification datasets (Chen et al., 2023; He et al., 2023a) as the annotators; and (2) introduces the (difficulty-aware) active selection and optional filtering strategy to get training nodes with high annotation quality, representativeness, and diversity simultaneously.
3.1 AN OVERVIEW OF LLM-GNN
The proposed LLM-GNN pipeline is designed with four flexible components, as shown in Figure 1: difficulty-aware active node selection, confidence-aware annotation, optional post-filtering, and GNN model training and prediction. Compared with the original pipelines with ground truth labels, the annotation quality of LLMs poses a unique new challenge.
(1) The active node selection phase finds a candidate node set for LLM annotation. Beyond the diversity and representativeness (Zhang et al., 2021c) considered by the original baselines, we pay additional attention to the influence on annotation quality. Specifically, we incorporate a difficulty-aware heuristic that correlates annotation quality with feature density.
(2) With the selected node set, we then utilize the strong zero-shot ability of LLMs to annotate those nodes with confidence-aware prompts. The confidence score associated with annotations is essential, as LLM annotations (Chen et al., 2023), akin to human annotations, can exhibit a certain degree of label noise. This confidence score helps identify annotation quality and filter high-quality labels from noisy ones.
(3) The optional post-filtering stage is a unique step in our pipeline that aims to remove low-quality annotations. Building upon the annotation confidence, we further refine the quality of annotations with LLMs' confidence scores and remove those nodes
with lower confidence from the previously selected set. (4) With the filtered high-quality annotation set, we train GNN models on the selected nodes and their annotations. Ultimately, the well-trained GNN model is utilized to make predictions. It should be noted that our proposed framework is very flexible, with different designs possible for each part. For example, for active node selection, conventional active learning methods can be combined with post-filtering to improve the overall labeling quality. We now detail each component.
3.2 Difficulty-aware Active Node Selection
Node selection aims to select a node candidate set, which will be annotated by LLMs and then learned on by a GNN. Notably, the selected node set is generally small to ensure a controllable monetary budget. Unlike traditional graph active learning, which mainly takes diversity and representativeness into consideration, label quality should also be included, since LLMs can produce noisy labels with large variance across different groups of nodes.
In the difficulty-aware active selection stage, we have no knowledge of how LLMs would respond to candidate nodes. Consequently, we need heuristics that connect node features to the difficulty of annotating different nodes. A preliminary investigation of LLMs' annotations provides inspiration for inferring annotation difficulty from node features: we find that the accuracy of annotations generated by LLMs is closely related to the clustering density of nodes.
To demonstrate this correlation, we employ k-means clustering on the original feature space, setting the number of clusters equal to the distinct class count. 1000 nodes are sampled from the whole dataset and then annotated by LLMs. They are subsequently sorted and divided into ten equally sized groups based on their distance to the nearest cluster center. As shown in Figure 2, we observe a consistent pattern: nodes closer to cluster centers typically exhibit better annotation quality, which indicates lower annotation difficulty. Full results are included in Appendix F.2. We then adopt this distance as a heuristic to approximate annotation reliability. Since the cluster number equals the distinct class count, we denote this heuristic as C-Density, calculated as \( C\text{-Density}(v_i) = \frac{1}{1 + \| x_{v_i} - x_{CC_{v_i}} \|} \), where \( CC_{v_i} \) is the cluster center closest to node \( v_i \) and \( x_{v_i} \) represents the feature of node \( v_i \). We demonstrate the effectiveness of this method through a theoretical explanation in Appendix K.
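A sketch of the C-Density computation under these definitions; the feature dimensionality and class count below are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def c_density(X, n_classes):
    # Cluster features with k-means (k = number of distinct classes) and
    # score each node by 1 / (1 + distance to its nearest cluster center).
    km = KMeans(n_clusters=n_classes, n_init=10).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    return 1.0 / (1.0 + dists)

X = np.random.randn(1000, 384)      # e.g. SentenceBERT node features
scores = c_density(X, n_classes=7)  # higher score ~ easier to annotate
```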

(a) CORA
(b) CITeseer
Figure 2: The annotation accuracy by LLMs vs. the distance to the nearest clustering center. The bars represent the average accuracy within each selected group, while the blue line indicates the cumulative average accuracy. At group \( i \), the blue line denotes the average accuracy of all nodes in the preceding \( i \) groups.
We then incorporate this annotation difficulty heuristic into traditional active selection. In traditional active node selection, we select \( B \) nodes from the unlabeled pool with the top scores \( f_{act}(v_i) \), where \( f_{act}(\cdot) \) is a score function (we defer its detailed introduction to Appendix A). To benefit the performance of the trained models, the selected nodes should strike a trade-off between annotation difficulty and traditional active selection criteria (e.g., representativeness (Cai et al., 2017) and diversity (Zhang et al., 2021c)). In traditional graph active learning, selection criteria can usually be denoted as a score function, such as PageRank centrality \( f_{pg}(v_i) \) for measuring structural diversity. One feasible way to integrate the difficulty heuristic into traditional graph active learning is ranking aggregation. Compared to directly combining several scores via summation or multiplication, ranking aggregation is more robust to scale differences since it is scale-invariant and considers only the relative ordering of items. Considering the original score function for graph active learning \( f_{act}(v_i) \), we denote \( r_{f_{act}}(v_i) \) as its high-to-low ranking percentage. We incorporate the difficulty heuristic by first transforming C-Density\( (v_i) \) into a rank \( r_{C\text{-Density}}(v_i) \) and then combining the two scores: \( f_{DA\text{-act}}(v_i) = \alpha_0 \times r_{f_{act}}(v_i) + \alpha_1 \times r_{C\text{-Density}}(v_i) \), where “DA” stands for difficulty-aware. Hyper-parameters \( \alpha_0 \) and \( \alpha_1 \) are introduced to balance annotation difficulty and traditional graph active
learning criteria such as representativeness and diversity. Finally, the nodes \( v_i \) with the largest \( f_{DA\text{-act}}(v_i) \) are selected for LLMs to annotate; we denote this set as \( V_{\text{anno}} \).
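A sketch of this rank aggregation follows; the exact convention for the ranking percentage is an assumption (here the highest score maps to 1.0).

```python
import numpy as np

def rank_pct(scores):
    # High-to-low ranking percentage: the highest score gets 1.0.
    order = np.argsort(-scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(len(scores), 0, -1) / len(scores)
    return ranks

def da_select(f_act, c_density_scores, budget, a0=0.5, a1=0.5):
    # Difficulty-aware selection: aggregate the active-learning score and the
    # C-Density heuristic by rank, then take the top-B nodes.
    f_da = a0 * rank_pct(f_act) + a1 * rank_pct(c_density_scores)
    return np.argsort(-f_da)[:budget]

f_act = np.random.rand(1000)                 # e.g. PageRank centrality scores
cd = np.random.rand(1000)                    # C-Density scores
selected = da_select(f_act, cd, budget=140)  # 20 x (7 classes), for example
```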
Table 1: Accuracy of annotations. Yellow denotes the best and Green denotes the second best result. The cost is determined by comparing the token consumption to that of zero-shot prompts.
| Prompt Strategy | CORA | Cost | OGBN-PRODUCTS | Cost | WIKICS | Cost |
|--------------------------|------------|------|---------------|------|------------|------|
| Vanilla (zero-shot) | 68.33 ± 6.55 | 2.2 | 75.33 ± 4.99 | 1 | 68.33 ± 1.89 | 1 |
| Vanilla (one-shot) | 68.01 ± 6.35 | 2.2 | 79.67 ± 4.50 | 1.8 | 72.00 ± 3.56 | 2.4 |
| TopK (zero-shot) | 68.01 ± 6.35 | 2.2 | 79.67 ± 4.50 | 1.8 | 72.00 ± 3.56 | 2.4 |
| Most Voting (zero-shot) | 68.00 ± 7.35 | 1.1 | 75.33 ± 4.99 | 1.1 | 69.00 ± 2.16 | 1.1 |
| Hybrid (zero-shot) | 67.33 ± 6.80 | 1.5 | 73.67 ± 5.25 | 1.4 | 71.00 ± 2.83 | 1.4 |
| Hybrid (one-shot) | 70.33 ± 6.80 | 2.9 | 75.33 ± 5.25 | 2.5 | 73.67 ± 5.25 | 2.9 |
3.3 CONFIDENCE-AWARE ANNOTATIONS
After obtaining the candidate set \( V_{\text{anno}} \) through active selection, we use LLMs to generate annotations for the nodes in the set. Although difficulty-aware active selection favors nodes that are easy to annotate, at this stage we do not yet know how the LLM will respond to them, so low-quality annotations may remain. To identify the high-quality annotations, we need guidance on their reliability, such as confidence scores. Inspired by recent literature on generating calibrated confidence from LLMs (Xiong et al., 2023; Tian et al., 2023; Wang et al., 2022), we investigate the following strategies: (1) directly asking for confidence (Tian et al., 2023), denoted as “Vanilla (zero-shot)”; (2) reasoning-based prompts to generate annotations, including chain-of-thought and multi-step reasoning (Wei et al., 2022; Xiong et al., 2023); (3) TopK prompt, which asks LLMs to generate the top \( K \) possible answers and select the most probable one as the answer (Xiong et al., 2023); (4) consistency-based prompt (Wang et al., 2022), which queries LLMs multiple times and selects the most common output as the answer, denoted as “Most voting”; (5) hybrid prompt (Wang et al., 2023a), which combines both the TopK and consistency-based prompts. In addition to the prompt strategy, few-shot samples have also been demonstrated to be critical to the performance of LLMs (Chen et al., 2023). We thus also investigate incorporating few-shot samples into prompts. In this work, we use 1-shot samples to limit time and monetary costs. Detailed descriptions and full prompt examples are shown in Appendix D.
We then conduct a comparative study to identify effective prompts in terms of accuracy, calibration of confidence, and cost. (1) For accuracy and cost evaluation, we adopt the popular node classification benchmarks CORA, OGBN-PRODUCTS, and WIKICS. We randomly sample 100 nodes from each dataset, repeat with three different seeds to reduce sampling bias, and then compare the generated annotations with the ground truth labels offered by these datasets. The cost is estimated by the number of tokens in input prompts and output contents. (2) It is important to note that the role of confidence is to help us identify label reliability; therefore, validating the quality of the confidence produced by LLMs amounts to examining how well the confidence reflects the quality of the corresponding annotation. Thus, we check how the annotation accuracy changes with the confidence. Specifically, we randomly select 300 nodes and sort them in descending order of confidence. Subsequently, we calculate the annotation accuracy for the top \( K \) nodes, where \( K \) is varied in \( \{50, 100, 150, 200, 250, 300\} \). For each \( K \), higher accuracy indicates better quality of the generated confidence. Empirically, we find that reasoning-based prompts generate outputs that don't follow the format requirement and greatly increase query time and costs; we therefore don't consider them further in this work. For the other prompts, we find that for a small portion of inputs, the outputs of LLMs do not follow the format requirements (for example, outputting an annotation outside the valid label names). For those invalid outputs, we design a self-correction prompt and set a larger temperature to review previous outputs and regenerate annotations.
The evaluation of performance and cost is shown in Table 1, and the evaluation of confidence is shown in Figure 3. The full results are included in Appendix E. From the experimental results, we make the following observations. First, LLMs present promising zero-shot prediction performance on all datasets, which suggests that LLMs are potentially good annotators. Second, compared to zero-shot prompts, prompts with few-shot demonstrations slightly increase performance at roughly double the cost. Third, the zero-shot hybrid strategy is the most effective approach for extracting high-quality annotations, since its confidence greatly indicates annotation quality. We thus adopt
the zero-shot hybrid prompt in the following studies and leave the evaluation of other prompts as future work.
3.4 POST-FILTERING
After obtaining annotations together with confidence scores, we may further refine the set of annotated nodes, since the confidence scores generated by LLMs can be used to filter for high-quality labels. However, directly filtering out low-confidence nodes may result in a label distribution shift and degrade the diversity of the selected nodes, which in turn degrades the performance of subsequently trained models. Unlike traditional graph active learning methods, which try to model diversity in the selection stage with criteria such as feature dissimilarity (Ren et al., 2022), in the post-filtering stage the label distribution is readily available. As a result, we can directly consider the label diversity of the selected nodes. To measure the change in diversity, we propose a simple score function, change of entropy (COE), which measures the entropy change of the labels when a node is removed from the selected set. Assuming that the current selected set of nodes is \( V_{\text{sel}} \), COE can be computed as:
\[
\text{COE}(v_i) = H(\hat{y}_{V_{\text{sel}} - \{v_i\}}) - H(\hat{y}_{V_{\text{sel}}})
\]
where \( H(\cdot) \) is the Shannon entropy function (Shannon, 1948) and \( \hat{y} \) denotes the annotations generated by LLMs. The value of COE may be positive or negative, and a small COE\( (v_i) \) value indicates that removing this node could adversely affect the diversity of the selected set, potentially compromising the performance of trained models. When a node is removed from the selected set, the entropy adjusts accordingly, necessitating a re-computation of COE. However, this introduces negligible computational overhead since the size of the selected set \( V_{\text{anno}} \) is usually much smaller than the whole dataset. COE can be further combined with the confidence \( f_{\text{conf}}(v_i) \) to balance diversity and annotation quality in a ranking aggregation manner. Note that \( r_{\text{C-Density}}(v_i) \) is also available in the post-filtering phase. The final filtering score function \( f_{\text{filter}} \) can thus be stated as:
\[
f_{\text{filter}}(v_i) = \beta_0 \times r_{\text{conf}}(v_i) + \beta_1 \times r_{\text{COE}}(v_i) + \beta_2 \times r_{\text{C-Density}}(v_i).
\]
Hyper-parameters \( \beta_0, \beta_1, \) and \( \beta_2 \) are introduced to balance label diversity and annotation quality. \( r_{\text{conf}} \) is the high-to-low ranking percentage of the confidence score \( f_{\text{conf}} \). To conduct post-filtering, we iteratively remove the node with the smallest \( f_{\text{filter}} \) value until a pre-defined maximum number is reached.
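A sketch of the COE term under these definitions; the toy label list is illustrative.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def coe(labels, i):
    # Change of entropy when node i's annotation is removed; small (negative)
    # values mean removal hurts the label diversity of the selected set.
    rest = labels[:i] + labels[i + 1:]
    return entropy(rest) - entropy(labels)

labels = ["A", "A", "B", "C", "C", "C"]   # LLM annotations y_hat on V_sel
print([round(coe(labels, i), 3) for i in range(len(labels))])
```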
3.5 GNN TRAINING AND PREDICTION
After obtaining the training labels, we train a GNN. Our framework supports a variety of GNNs, and we select GCN, the most popular model, as our primary subject of study. Another critical component of the training process is the loss function. Traditional GNN-based pipelines mainly adopt the cross-entropy loss; however, due to the noisy labels generated by the LLMs, we may instead utilize a weighted cross-entropy loss, using the confidence scores from the previous section as the corresponding weights.
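A minimal sketch of such a confidence-weighted loss in PyTorch; shapes and the exact weighting scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_ce(logits, annotations, confidence):
    # Per-node cross-entropy against the (possibly noisy) LLM annotations,
    # scaled by the LLM confidence scores.
    per_node = F.cross_entropy(logits, annotations, reduction="none")
    return (confidence * per_node).mean()

logits = torch.randn(140, 7)               # GNN outputs for selected nodes
annotations = torch.randint(0, 7, (140,))  # LLM-generated labels
confidence = torch.rand(140)               # LLM confidence in [0, 1]
print(confidence_weighted_ce(logits, annotations, confidence))
```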
4 EXPERIMENT
In this section, we present experiments to evaluate the performance of our proposed pipeline \( \text{LLM-GNN} \). We begin by detailing the experimental settings. Next, we investigate the following research questions: **RQ1.** How do active selection, post-filtering, and loss function affect the performance of \( \text{LLM-GNN} \)? **RQ2.** How does the performance and cost of \( \text{LLM-GNN} \) compare to other label-free node classification methods? **RQ3.** How do different budgets affect the performance of the pipelines? **RQ4.** How do LLMs’ annotations compare to ground truth labels?
4.1 EXPERIMENTAL SETTINGS
In this paper, we use the following TAG datasets, which are widely adopted for node classification: CORA (McCallum et al., 2000), CITESEER (Giles et al., 1998), PUBMED (Sen et al., 2008), OGBN-ARXIV, OGBN-PRODUCTS (Hu et al., 2020b), and WIKICS (Mernyei & Cangea, 2020). Statistics and descriptions of these datasets are in Appendix C.
In terms of the settings for each component in the pipeline, we adopt gpt-3.5-turbo-0613\(^1\) to generate annotations. For the prompt strategy used to generate annotations, we choose the “zero-shot hybrid strategy” given its effectiveness in generating calibrated confidence. We leave the evaluation of other prompts as future work considering the massive costs. For the budget of active selection, we follow the popular semi-supervised learning setting for node classification (Yang et al., 2016) and set the budget to 20 times the number of classes. For GNNs, we adopt one of the most popular models, GCN (Kipf & Welling, 2016).
---
\(^1\) [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
Table 2: Impact of different active selection strategies. We highlight the top three performances on each dataset in pink, green, and yellow, respectively. To compare with traditional graph active selection, we underline a combination of our strategies with a traditional method when it outperforms that traditional method alone. OOT (out of time) means the method cannot scale to large-scale graphs because of its long execution time.
| | CORA | CITESEER | PUBMED | WIKICS | OGBN-ARXIV | OGBN-PRODUCTS |
|------------------|------------|------------|------------|------------|------------|---------------|
| Random | 70.48 ± 0.73 | 65.11 ± 1.12 | 72.98 ± 2.15 | 60.69 ± 1.73 | 64.59 ± 0.16 | 70.40 ± 0.60 |
| Random-W | 71.77 ± 0.75 | 65.92 ± 1.05 | 73.92 ± 1.75 | 61.42 ± 1.54 | 64.95 ± 0.19 | 71.96 ± 0.59 |
| C-Density | 42.22 ± 1.59 | 66.44 ± 0.34 | 74.43 ± 0.28 | 57.77 ± 0.85 | 44.08 ± 0.39 | 8.29 ± 0.00 |
| PS-Random-W | 72.38 ± 0.72 | 67.18 ± 0.92 | 73.31 ± 1.65 | 62.60 ± 0.94 | 65.22 ± 0.15 | 71.62 ± 0.54 |
| Density | 72.40 ± 0.35 | 61.06 ± 0.95 | 74.43 ± 0.28 | 64.96 ± 0.53 | 51.77 ± 0.24 | 20.22 ± 0.11 |
| Density-W | 72.39 ± 0.34 | 59.88 ± 0.97 | 73.00 ± 0.19 | 63.80 ± 0.69 | 51.03 ± 0.27 | 20.97 ± 0.15 |
| DA-Density | 70.73 ± 0.32 | 62.92 ± 1.05 | 74.43 ± 0.28 | 63.08 ± 0.45 | 51.33 ± 0.29 | 8.50 ± 0.32 |
| PS-Density-W | 74.61 ± 0.13 | 61.00 ± 0.55 | 74.50 ± 0.23 | 65.57 ± 0.45 | 51.73 ± 0.29 | 19.15 ± 0.18 |
| DA-Density-W | 67.29 ± 0.96 | 62.98 ± 0.77 | 73.39 ± 0.35 | 63.26 ± 0.62 | 51.36 ± 0.39 | 8.52 ± 0.11 |
| AGE | 69.15 ± 0.38 | 54.25 ± 0.31 | 74.55 ± 0.54 | 55.51 ± 0.12 | 46.68 ± 0.30 | 65.63 ± 0.15 |
| AGE-W | 69.70 ± 0.45 | 57.60 ± 0.35 | 64.30 ± 0.49 | 55.15 ± 0.14 | 47.84 ± 0.35 | 64.92 ± 0.19 |
| DA-AGE | 74.38 ± 0.24 | 59.92 ± 0.42 | 74.20 ± 0.51 | 59.39 ± 0.21 | 48.21 ± 0.35 | 60.03 ± 0.11 |
| PS-AGE-W | 72.61 ± 0.39 | 57.44 ± 0.49 | 64.00 ± 0.44 | 56.13 ± 0.11 | 47.12 ± 0.39 | 68.62 ± 0.15 |
| DA-AGE-W | 74.96 ± 0.22 | 58.41 ± 0.45 | 65.85 ± 0.67 | 59.19 ± 0.24 | 47.79 ± 0.32 | 59.95 ± 0.23 |
| RIM | 69.86 ± 0.38 | 63.44 ± 0.42 | 76.22 ± 0.16 | 66.72 ± 0.16 | OOT | OOT |
| DA-RIM | 73.99 ± 0.44 | 60.33 ± 0.40 | 79.17 ± 0.11 | 67.82 ± 0.32 | OOT | OOT |
| PS-RIM-W | 73.19 ± 0.45 | 62.85 ± 0.49 | 74.52 ± 0.19 | 69.84 ± 0.19 | OOT | OOT |
| DA-RIM-W | 74.73 ± 0.41 | 60.80 ± 0.57 | 77.94 ± 0.24 | 68.22 ± 0.25 | OOT | OOT |
| GraphPart | 68.57 ± 2.18 | 66.59 ± 1.34 | 77.50 ± 1.23 | 67.28 ± 0.87 | OOT | OOT |
| GraphPart-W | 69.90 ± 2.03 | 68.20 ± 1.42 | 78.91 ± 1.04 | 68.43 ± 0.92 | OOT | OOT |
| DA-GraphPart | 69.35 ± 1.92 | 69.37 ± 1.27 | 79.49 ± 0.85 | 68.72 ± 1.01 | OOT | OOT |
| PS-GraphPart-W | 69.92 ± 1.75 | 69.06 ± 1.19 | 78.84 ± 1.05 | 66.90 ± 1.05 | OOT | OOT |
| DA-GraphPart-W | 68.61 ± 1.32 | 68.82 ± 1.17 | 79.89 ± 0.79 | 67.13 ± 1.23 | OOT | OOT |
| FeatProp | 72.82 ± 0.08 | 66.61 ± 0.55 | 73.90 ± 0.15 | 64.08 ± 0.12 | 66.06 ± 0.07 | 74.04 ± 0.15 |
| FeatProp-W | 73.56 ± 0.13 | 68.04 ± 0.69 | 76.90 ± 0.19 | 63.80 ± 0.21 | 66.32 ± 0.15 | 74.32 ± 0.14 |
| PS-FeatProp | 72.24 ± 0.25 | 69.06 ± 0.32 | 74.98 ± 0.35 | 66.09 ± 0.35 | 66.14 ± 0.37 | 74.91 ± 0.17 |
| PS-FeatProp-W | 76.23 ± 0.07 | 68.64 ± 0.71 | 78.84 ± 1.05 | 64.72 ± 0.19 | 65.84 ± 0.19 | 74.58 ± 0.24 |
Our aim here is to show the potential of the LLM-GNN pipeline. Therefore, we do not tune the hyper-parameters in either difficulty-aware active selection or post-filtering, but simply set them all to the same value.
In terms of evaluation, we compare the predictions of the trained GNNs with the ground truth labels offered in the original datasets, and adopt accuracy as the metric. Similar to (Ma et al., 2022), we adopt a setting with no validation set, where models trained on the selected nodes are tested on the remaining unlabeled nodes. All experiments are repeated 3 times with different seeds. For the hyper-parameters of the experiment, we adopt a fixed setting commonly used by previous papers or benchmarks (Kipf & Welling, 2016; Hamilton et al., 2017; Hu et al., 2020b). One point worth emphasizing is the number of training epochs. Since there is no validation set and the labels are noisy, models may suffer from over-fitting (Song et al., 2022). However, we find that most models work well across all datasets with a small fixed number of training epochs: 30 epochs for the small and medium-scale datasets (CORA, CITESEER, PUBMED, and WIKICS), and 50 epochs for the large-scale ones. This setting can be viewed as a simpler alternative (requiring no validation set) to the early-stopping trick (Bai et al., 2021) for training on noisy labels, and it lets us compare different methods more fairly and conveniently.
4.2 (RQ1.) IMPACT OF DIFFERENT ACTIVE SELECTION STRATEGIES
We conduct a comprehensive evaluation of different active selection strategies, the key component of our pipeline. Specifically, we examine the effectiveness of (1) difficulty-aware active node selection before LLM annotation, (2) post-filtering after LLM annotation and its combination with traditional active learning algorithms, and (3) the loss function. For selection strategies, we consider: (1) Traditional graph active selection: random selection, density-based selection (Ma et al., 2022), GraphPart (Ma et al., 2022), FeatProp (Wu et al., 2019), degree-based selection (Ma et al., 2022), PageRank-centrality-based selection (Ma et al., 2022), AGE (Cai et al., 2017), and RIM (Zhang et al., 2021b). (2) Difficulty-aware active node selection: C-Density-based selection, and traditional graph active selections combined with C-Density, denoted with the prefix “DA-”.
For example, “DA-AGE” means combining the original AGE method with our proposed C-Density.
(3) Post-filtering: traditional graph active selections combined with confidence- and COE-based filtering, denoted with the prefix “PS-”. For FeatProp, as it selects candidate nodes directly using the K-Medoids algorithm (Wu et al., 2019), integrating it with difficulty-aware active selection is not feasible. For loss functions, we consider both cross-entropy loss and weighted cross-entropy loss, adding a “-W” postfix for the latter. Detailed introductions of these methods are given in Appendix A. The results for GCN are shown in Table 2. Due to space limits, we move part of the results and more ablation studies to Appendix I.
From the experimental results, we make the following observations:
1. The proposed post-filtering strategy presents promising effectiveness. Combined with traditional graph active learning methods like GraphPart, RIM, and FeatProp, it consistently outperforms the corresponding base methods. Combined with FeatProp in particular, it achieves both promising accuracy and good scalability.
2. Although C-Density-based selection achieves superior annotation quality, using this metric alone leads to poor downstream performance. To understand this phenomenon, we inspect the labels of the selected nodes and find that the problem lies in the label imbalance introduced by the active selection. For example, on PUBMED all selected annotations belong to a single class. We further find that tuning the number of clustering centers $K$ for C-Density trades off diversity against annotation quality, where a larger $K$ mitigates the class-imbalance problem. However, finding a proper $K$ is challenging for massive-scale datasets like OGBN-PRODUCTS, where weighted loss and post-filtering are more effective.
3. Comparing the normal cross-entropy loss to the weighted cross-entropy loss, the weighted loss further enhances performance in most cases.
4. In a nutshell, we summarize the following empirical rules of thumb: (1) FeatProp-based methods consistently achieve promising performance across different datasets with good efficiency; (2) comparing DA and PS, DA costs less since we do not need LLMs to generate confidence scores and can use a simpler prompt, while PS usually achieves better performance, especially on large-scale datasets.
4.3 (RQ2.) COMPARISON WITH OTHER LABEL-FREE NODE CLASSIFICATION METHODS
To demonstrate the effectiveness and novelty of our proposed pipeline, we further conduct a comparison with other label-free node classification pipelines, which include: (1) Zero-shot node classification methods: SES, TAG-Z (Li & Hooi, 2023); (2) Zero-shot classification models for text: BART-large-MNLI (Lewis et al., 2019); and (3) Directly using LLMs for predictions: LLMs-as-Predictors (Chen et al., 2023). Detailed introductions of these models can be found in Appendix A. We compare both the performance and the cost of these models, and the results are shown in Table 3.
Table 3: Accuracy (%) and annotation cost comparison on OGBN-ARXIV and OGBN-PRODUCTS.

| Methods | OGBN-ARXIV Acc | OGBN-ARXIV Cost | OGBN-PRODUCTS Acc | OGBN-PRODUCTS Cost |
|--------------------|-------|------|-------|------|
| SES(*) | 13.08 | N/A | 6.67 | N/A |
| TAG-Z(*) | 37.08 | N/A | 47.08 | N/A |
| BART-large-MNLI | 13.2 | N/A | 28.8 | N/A |
| LLMs-as-Predictors | 73.33 | 79 | 75.33 | 1572 |
| LLM-GNN | 66.32 | 0.63 | 74.91 | 0.74 |
From the experimental results in the table, we see that (1) our proposed pipeline LLM-GNN significantly outperforms SES, TAG-Z, and BART-large-MNLI; and (2) although LLMs-as-Predictors achieves better accuracy than LLM-GNN, its cost is far higher: for example, the cost of LLMs-as-Predictors on OGBN-PRODUCTS is 2,124× that of LLM-GNN. Moreover, the promising performance of LLMs-as-Predictors on OGBN-ARXIV may be an exception, attributable to specific prompts that leverage the memorization of LLMs (Chen et al., 2023).
4.4 (RQ3.) HOW DO DIFFERENT BUDGETS AFFECT THE PERFORMANCE OF OUR PIPELINES?
We further evaluate a range of budgets rather than the fixed budget used in the previous experiments, in order to examine how effective our algorithm is in real-world scenarios with different cost and performance requirements. Experiments are conducted on the CORA dataset with the budget set to {35, 70, 105, 140, 175, 280, 560, 1,120}. We choose both random selection and the methods that perform well in Table 2. We make the following observations from Figure 4: (1) as the budget increases, performance tends to increase gradually; (2) unlike training with ground truth labels, the performance growth is relatively limited as the budget increases. This suggests a trade-off between performance and cost in real-world scenarios.
4.5 (RQ4.) CHARACTERISTICS OF LLMs’ ANNOTATIONS
Although LLMs' annotations are noisy, we find that they are more benign than ground truth labels injected with synthetic noise as adopted in (Zhang et al., 2021b). Specifically, assuming that the accuracy of LLMs' annotations is \( q\% \), we randomly select \( (100-q)\% \) of the ground truth labels and flip them uniformly into other classes to generate synthetic noisy labels. We then train GNN models on LLMs' annotations, on the synthetic noisy labels, and on LLMs' annotations with all incorrect labels removed, respectively. The results are shown in Figure 5, from which we observe that LLMs' annotations exhibit entirely different training dynamics from synthetic noisy labels: the extent of over-fitting on LLMs' annotations is much smaller than on synthetic noisy labels.
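For reference, a minimal sketch of the synthetic-noise construction follows; the function name and seed handling are illustrative.

```python
import numpy as np

def synthetic_noisy_labels(y, q, num_classes, seed=0):
    """Flip a (1 - q) fraction of ground-truth labels uniformly to other
    classes, so the resulting label accuracy matches an annotation quality q
    (here q is a fraction in [0, 1] rather than a percentage)."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    flip = rng.random(len(y)) > q              # each label flipped w.p. (1 - q)
    shift = rng.integers(1, num_classes, size=int(flip.sum()))
    y[flip] = (y[flip] + shift) % num_classes  # uniform over the other classes
    return y
```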

**Figure 4:** Investigation of how different budgets affect the performance of LLM-GNN. Methods achieving top performance in Table 2 and random selection-based methods are compared. CE and WE denote the normal cross-entropy loss and the weighted cross-entropy loss, respectively.

**Figure 5:** Comparisons among LLMs’ annotations, ground truth labels, and synthetic noisy labels. “Ran” represents random selection, “GT” indicates the ground truth labels. “Filtered” means replacing all wrong annotations with ground truth labels.
5 RELATED WORKS
**Graph active learning.** Graph active learning (Cai et al., 2017) aims to maximize test performance with nodes actively selected under a limited query budget. To achieve this goal, algorithms are developed to maximize the informativeness and diversity of the selected group of nodes (Zhang et al., 2021c). These algorithms are designed based on different assumptions about what makes nodes informative. In Ma et al. (2022), diversity is assumed to be related to the partition of nodes, and thus samples are actively selected from different communities. In Zhang et al. (2021c;b;a), representativeness is assumed to be related to the influence of nodes, and thus nodes with a larger influence score are selected first. Another line of work directly sets the accuracy of trained models as the objective (Gao et al., 2018; Hu et al., 2020a; Zhang et al., 2022) and adopts reinforcement learning for the optimization.
**LLMs for graphs.** Recent progress on applying LLMs to graphs (He et al., 2023a; Guo et al., 2023) aims to utilize the power of LLMs to further boost the performance of graph-related tasks. LLMs are adopted either as the predictor (Chen et al., 2023; Wang et al., 2023a; Ye et al., 2023), directly generating the solutions, or as the enhancer (He et al., 2023a), leveraging the capability of LLMs to boost the performance of a smaller and more efficient model. In this paper, we adopt LLMs as annotators, which combines the advantages of these two lines: we train an efficient model with promising performance, without requiring any ground truth labels.
6 CONCLUSION
In this paper, we revisit the long-overlooked data annotation process in existing node classification methods and propose a pipeline for label-free node classification on graphs with LLMs. The key designs of our pipeline are using LLMs to generate confidence-aware annotations, and applying difficulty-aware selection and confidence-based post-filtering to further enhance annotation quality. Comprehensive experiments validate the effectiveness of our pipeline.
7 ACKNOWLEDGEMENTS
This research is supported by the National Science Foundation (NSF) under grant numbers CNS 2246050, IIS1845081, IIS2212032, IIS2212144, IOS2107215, DUE 2234015, DRL 2025244 and IOS2035472, the Army Research Office (ARO) under grant number W911NF-21-1-0198, the Home Depot, Cisco Systems Inc, Amazon Faculty Award, Johnson&Johnson, JP Morgan Faculty Award and SNAP.
REFERENCES
Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, and Tongliang Liu. Understanding and improving early stopping for learning with noisy labels. Advances in Neural Information Processing Systems, 34:24392–24403, 2021.
Parikshit Bansal and Amit Sharma. Large language models as annotators: Enhancing generalization of nlp models at minimal cost. arXiv preprint arXiv:2306.15766, 2023.
Hongyun Cai, Vincent W Zheng, and Kevin Chen-Chuan Chang. Active learning for graph embedding. arXiv preprint arXiv:1705.05085, 2017.
Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, et al. Exploring the potential of large language models (llms) in learning on graphs. arXiv preprint arXiv:2307.03393, 2023.
Bosheng Ding, Chengwei Qin, Linlin Liu, Lidong Bing, Shafiq Joty, and Boyang Li. Is gpt-3 a good data annotator? arXiv preprint arXiv:2212.10450, 2022.
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. A survey for in-context learning. arXiv preprint arXiv:2301.00234, 2022.
Li Gao, Hong Yang, Chuan Zhou, Jia Wu, Shirui Pan, and Yue Hu. Active discriminative network representation learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pp. 2142–2148. International Joint Conferences on Artificial Intelligence Organization, 7 2018. doi: 10.24963/ijcai.2018/296. URL https://doi.org/10.24963/ijcai.2018/296
Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. Chatgpt outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056, 2023.
C. Lee Giles, Kurt D. Bollacker, and Steve Lawrence. Citeseer: An automatic citation indexing system. In Proceedings of the Third ACM Conference on Digital Libraries, DL '98, pp. 89–98, New York, NY, USA, 1998. ACM. ISBN 0-89791-965-3. doi: 10.1145/276675.276685. URL http://doi.acm.org/10.1145/276675.276685
Jiayan Guo, Lun Du, and Hengyu Liu. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066, 2023.
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017.
Haibo He and Edwardo A. Garcia. Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9):1263–1284, 2009. doi: 10.1109/TKDE.2008.239.
Xiaoxin He, Xavier Bresson, Thomas Laurent, and Bryan Hooi. Explanations as features: Llm-based features for text-attributed graphs. arXiv preprint arXiv:2305.19523, 2023a.
Xingwei He, Zhenghao Lin, Yeyun Gong, Alex Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen, et al. Annollm: Making large language models to be better crowd-sourced annotators. arXiv preprint arXiv:2303.16854, 2023b.
|
OeQE9zsztS
|
The centeredness assumption on the base kernel is worrying me, as I do not fully understand its practical and theoretical implications. Where is this assumption necessary? I understand that it is necessary for Proposition 1. Is the purpose to make the smoothness notion shift invariant and not having to worry about the constant part of the target function that is not covered by the smoothness $r_t(f)$?
|
SPECTRALLY TRANSFORMED KERNEL REGRESSION
Runtian Zhai, Rattana Pukdee, Roger Jin, Maria-Florina Balcan, Pradeep Ravikumar
Carnegie Mellon University
{rzhai,rpukdee,rrjin,ninamf,pradeepr}@cs.cmu.edu
ABSTRACT
Unlabeled data is a key component of modern machine learning. In general, the role of unlabeled data is to impose a form of smoothness, usually from the similarity information encoded in a base kernel, such as the $\epsilon$-neighbor kernel or the adjacency matrix of a graph. This work revisits the classical idea of spectrally transformed kernel regression (STKR), and provides a new class of general and scalable STKR estimators able to leverage unlabeled data. Intuitively, via spectral transformation, STKR exploits the data distribution for which unlabeled data can provide additional information. First, we show that STKR is a principled and general approach, by characterizing a universal type of “target smoothness”, and proving that any sufficiently smooth function can be learned by STKR. Second, we provide scalable STKR implementations for the inductive setting and a general transformation function, while prior work is mostly limited to the transductive setting. Third, we derive statistical guarantees for two scenarios: STKR with a known polynomial transformation, and STKR with kernel PCA when the transformation is unknown. Overall, we believe that this work helps deepen our understanding of how to work with unlabeled data, and its generality makes it easier to inspire new methods.
1 INTRODUCTION
The past decade has witnessed a surge of new and powerful algorithms and architectures for learning representations (Vaswani et al., 2017; Devlin et al., 2019; Chen et al., 2020; He et al., 2022), spurred in part by a boost in computational power as well as increasing sizes of datasets. Due to their empirical successes, providing an improved theoretical understanding of such representation learning methods has become an important open problem. Towards this, a big advance was made recently by HaoChen et al. (2021), who showed that when using a slight variant of popular contrastive learning approaches, termed spectral contrastive learning, the optimal learnt features are the top-$d$ eigenfunctions of a population augmentation graph. This was further extended to other contrastive learning approaches (Johnson et al., 2023; Cabannes et al., 2023), as well as more generally to all augmentation-based self-supervised learning methods (Zhai et al., 2024).
A high-level summary of this recent line of work is as follows: The self-supervised learning approaches implicitly specify inter-sample similarity encoded via a Mercer base kernel. Suppose this kernel has the spectral decomposition $K(x, x') = \sum_{i=1}^{\infty} \lambda_i \psi_i(x) \psi_i(x')$, where $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$. The above line of work then showed that recent representation learning objectives can learn the optimal $d$ features, which are simply the top-$d$ eigenfunctions $[\psi_1, \cdots, \psi_d]$ of this base kernel. Given these $d$ features, a “linear probe” is learned atop via regression. It can be seen that this procedure is equivalent to kernel regression with the truncated kernel $K_d(x, x') = \sum_{i=1}^{d} \lambda_i \psi_i(x) \psi_i(x')$. More generally, one can extend this to regression with a spectrally transformed kernel (STK) $K_s(x, x') = \sum_{i=1}^{\infty} s(\lambda_i) \psi_i(x) \psi_i(x')$, where $s : [0, +\infty) \rightarrow [0, +\infty)$ is a general transformation function. We call this generalized method spectrally transformed kernel regression (STKR). Then, $K_d$ amounts to an STK with the “truncation function” $s(\lambda_i) = \lambda_i 1_{\{i \leq d\}}$.
In fact, STK and STKR were quite popular two decades ago in the context of semi-supervised learning, which similar to more recent representation learning approaches, aims to leverage unlabeled data. Their starting point again was a base kernel encoding inter-sample similarity, but unlike recent representation learning approaches, at that period this base kernel was often explicitly rather than implicitly specified. For manifold learning this was typically the $\epsilon$-neighbor or the heat kernel (Belkin & Niyogi, 2003). For unlabeled data with clusters, this was the cluster kernel (Chapelle et al., 2002).
And for graph structured data, this was typically the (normalized) adjacency or Laplacian matrix of an explicitly specified adjacency graph (Chung, 1997; Belkin & Niyogi, 2003). A range of popular approaches then either extracted top eigenfunctions, or learned kernel machines. These methods include LLE (Roweis & Saul, 2000), Isomap (Tenenbaum et al., 2000), Laplacian eigenmap (Belkin & Niyogi, 2003) for manifold learning; spectral clustering (Ng et al., 2001) for clustered data; and label propagation (Zhu & Ghahramani, 2002; Zhou et al., 2003) for graph structured data. With respect to kernel machines, Bengio et al. (2004) linked these approaches to kernel PCA, and Chapelle et al. (2002); Smola & Kondor (2003); Zhu et al. (2006) proposed various types of STK.
In this work, we revisit STK and STKR, and provide three sets of novel results. Our first contribution is elevating STKR to be a principled and general way of using unlabeled data. Unlabeled data is useful as it provides additional information about the data distribution $P_X$, but the kernel could be independent of $P_X$. STKR implicitly mixes the information of $P_X$ and the kernel in the process of constructing the STK. We then prove the generality of STKR via an existence result (Theorem 1): Suppose the target function satisfies a certain unknown “target smoothness” that preserves the relative smoothness at multiple scales, then there must exist an STK that describes this target smoothness.
Our second contribution is implementing STKR with general transformations for the inductive setting. Most prior work is limited to the transductive setting where test samples are known at train time (Zhou et al., 2003; Johnson & Zhang, 2008), in large part because it is easier to carry out spectral transformation of the finite-dimensional Gram matrix than the entire kernel function itself. But for practical use and a comprehensive analysis of STKR, we need inductive approaches as well. Towards this, Chapelle et al. (2002) solved an optimization problem for each test point, which is not scalable; Chapelle et al. (2006, Chapter 11.4) provided a more scalable extension that “propagates” the labels to unseen test points after transductive learning, but they still needed to implicitly solve a quadratic optimization program for each set of test points. These approaches moreover do not come with strong guarantees. Modern representation learning approaches that use deep neural networks to represent the STK eigenfunctions inductively do provide scalable approaches, but no longer have rigorous guarantees. To the best of our knowledge, this work develops the first inductive STKR implementation that (a) has closed-form formulas for the predictor, (b) works for very general STKs, (c) is scalable, and importantly, (d) comes with strong statistical guarantees. We offer detailed implementations with complexity analysis, and verify their efficiency with experiments on real tasks in Section 5.
Our third contribution is developing rigorous theory for this general inductive STKR, and proving nonparametric statistical learning bounds. Suppose the target function $f^*$ is smooth w.r.t. an STK $K_s$, and there are $n$ labeled and $m$ unlabeled samples, both i.i.d. We prove estimation and approximation error bounds (in $L^2$ norm) for the STKR predictor when $s(\lambda)$ is known or completely unknown. By incorporating recent theoretical progress, three of our four bounds come with tightness results.
In a nutshell, this work conceptually establishes STKR as a general and principled way of learning with labeled and unlabeled data together with a similarity base kernel; algorithmically we provide scalable implementations for general inductive STKR, and verify them on real datasets; statistically we prove statistical guarantees, with technical improvements over prior work. Limitations and open problems are discussed in Section 6, and more related work can be found in Appendix A. We also provide a table of notations at the beginning of the Appendix for the convenience of our readers.
2 DERIVING STKR FROM DIFFUSION INDUCED MULTISCALE SMOOTHNESS
Let the input space $\mathcal{X}$ be a compact Hausdorff space, $\mathcal{Y} = \mathbb{R}$ be the label space, and $P_{XY}$ be the underlying data distribution over $\mathcal{X} \times \mathcal{Y}$, whose marginal distribution $P_X$ is a Borel measure with support $\mathcal{X}$. We will use the shorthand $dp(x)$ to denote $dP_X(x)$. Let $L^2(P_X)$ be the Hilbert space of $L^2$ functions w.r.t. $P_X$ that satisfy $\int f(x)^2 dp(x) < +\infty$, with $\langle f_1, f_2\rangle_{P_X} = \int f_1(x)f_2(x) dp(x)$ and $\|f\|_{P_X} = \sqrt{\langle f, f\rangle_{P_X}}$. $f \in L^2(P_X)$ also implies $f \in L^1(P_X)$, which guarantees that $\mathbb{E}_{X \sim P_X}[f(X)]$ exists and is finite. Let a base kernel $K(x, x')$ encode inter-sample similarity information over $\mathcal{X}$. We assume full access to $K$ (i.e. we can compute $K(x, x')$ for all $x, x'$), and that $K$ satisfies:
(i) $K$ is a Mercer kernel, so it has the spectral decomposition: $K(x, x') = \sum_{i=1}^{\infty} \lambda_i \psi_i(x)\psi_i(x')$, where the convergence is absolute and uniform. Here $\lambda_i, \psi_i$ are the eigenvalues and orthonormal eigenfunctions of the integral operator $T_K : L^2(P_X) \to L^2(P_X)$ defined as $(T_K f)(x) = \int f(x')K(x, x') dp(x')$, such that $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$, and $(\psi_i, \psi_j)_{P_X} = \delta_{i,j} = 1_{(i=j)}$.
(ii) $K$ is centered: Defined as $T_K 1 = 0$, where $1(x) \equiv 1$ and $0(x) \equiv 0$. One can center any $K$ by $\tilde{K}(x_0, y_0) = K(x_0, y_0) - \int K(x_0, y_0) dp(x) - \int K(x_0, y) dp(y) + \int \int K(x, y) dp(x) dp(y)$.
Why assume centeredness? In this work, we view the smoothness and scale of a function \( f \) as two orthogonal axes, since our smoothness pertains to inter-sample similarity. Thus, we view \( f_1 \) and \( f_2 \) as equally smooth if they differ by a constant a.e. If \( K \) is not centered, then this will not be true under the RKHS norm (see Section 2.1). In practice centering is not a necessary step, though it is often recommended in kernel PCA.
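On a finite sample, this centering step reduces to the familiar double-centering of the Gram matrix; a minimal sketch (with an illustrative function name) is:

```python
import numpy as np

def center_gram(G):
    """Empirical double-centering of a Gram matrix, the finite-sample
    analogue of subtracting the row, column, and global kernel means."""
    n = G.shape[0]
    J = np.ones((n, n)) / n
    return G - J @ G - G @ J + J @ G @ J
```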
This work investigates the regression function estimation problem in nonparametric regression, with error measured in \( L^2 \) norm (see Györfi et al. (2002) for an introduction of regression problems):
**Problem.** Let \( f^*(x) := \int y \, dP_{XY}(y|x) \in L^2(P_X) \) be the target regression function. Given \( n \) labeled samples \((x_1, y_1), \cdots, (x_n, y_n)\) drawn i.i.d. from \( P_{XY} \), \( m \) unlabeled samples \( x_{n+1}, \cdots, x_{n+m} \) drawn i.i.d. from \( P_X \), and access to \( K(x, x') \) for any \( x, x' \in \mathcal{X} \), find a predictor \( \hat{f} \in L^2(P_X) \) with low prediction error:
\[
\text{err}(\hat{f}, f^*) := \mathbb{E}_{X \sim P_X} \left[ (\hat{f}(X) - f^*(X))^2 \right] = \| \hat{f} - f^* \|_{L^2(P_X)}^2.
\]
One can also think of \( f^* \) as the target function, and \( y = f^*(x) + \epsilon \), where \( \epsilon \) is random noise with zero mean. Let \( \{\lambda_i : i \in I\} \) be the set of non-zero eigenvalues of \( T_K \), then define \( K^p(x, x') := \sum_{i \in I} \lambda_i^p \psi_i(x) \psi_i(x') \) for all \( p \in \mathbb{R} \), which corresponds to an STK with \( s(\lambda) = \lambda^p \). The set \( \{K^p\} \) delineates a diffusion process w.r.t. \( K \), because \( K^{p+1}(x, x') = \int K^p(x, x_0)K(x', x_0)dp(x_0) \), so that \( K^{p+1} \) captures similarity with one additional hop to \( K^p \). For continuous diffusion, \( p \) can be real-valued. Then, the reproducing kernel Hilbert space (RKHS) associated with \( K^p \) for any \( p \geq 1 \) is:
\[
H_{K^p} := \left\{ f = \sum_{i \in I} u_i \psi_i : \sum_{i \in I} \frac{u_i^2}{\lambda_i^p} < \infty \right\}, \quad \langle \sum_i u_i \psi_i, \sum_i v_i \psi_i \rangle_{H_{K^p}} = \sum_i \frac{u_i v_i}{\lambda_i^p},
\]
and \( \| f \|_{H_{K^p}}^2 = \langle f, f \rangle_{H_{K^p}} \). \( K^p \) is the reproducing kernel of \( H_{K^p} \), as one can verify for all \( f \in H_{K^p} \) and \( x \) that \( \langle f, K^p_x \rangle_{H_{K^p}} = f(x) \), for \( K^p_x(z) := K^p(x, z) \). \( H_{K^1} \) is also denoted by \( H_K \). The \( H_{K^p} \) are called power spaces (Fischer & Steinwart, 2020) or interpolation Sobolev spaces (Jin et al., 2023).
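As a quick sanity check of the diffusion property stated above, expanding both sides in the eigenbasis and using the orthonormality \( \langle \psi_i, \psi_j \rangle_{P_X} = \delta_{i,j} \) gives
\[
\int K^p(x, x_0) K(x', x_0)\, dp(x_0) = \sum_{i,j \in I} \lambda_i^p \lambda_j \psi_i(x) \psi_j(x') \langle \psi_i, \psi_j \rangle_{P_X} = \sum_{i \in I} \lambda_i^{p+1} \psi_i(x) \psi_i(x') = K^{p+1}(x, x').
\]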
Kernel ridge regression (KRR) is a classical least-squares algorithm. KRR with \( K \) is given by:
\[
\hat{f} \in \arg \min_{f \in H_K} \left\{ \frac{1}{n} \sum_{i=1}^n (f(x_i) - y_i)^2 + \beta_n \| f \|_{H_K}^2 \right\}
\]
for some \( \beta_n > 0 \). Although KRR is very widely used, the problem is that it does not leverage the unlabeled data, because the optimal solution of KRR only depends on \( x_1, \cdots, x_n \) but not \( x_{n+1}, \cdots, x_{n+m} \), as is explicitly shown by the Representer Theorem (Schölkopf & Smola, 2002, Theorem 4.2): All minimizers of KRR admit the form \( \hat{f}^*(x) = \sum_{j=1}^n \alpha_j^* K(x, x_j) \), where
\[
\alpha^* \in \arg \inf_{\alpha \in \mathbb{R}^n} \left\{ \frac{1}{n} \sum_{i=1}^n \left[ \sum_{j=1}^n \alpha_j K(x_i, x_j) - y_i \right]^2 + \beta_n \sum_{i,j=1}^n \alpha_i \alpha_j K(x_i, x_j) \right\}.
\]
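Solving the first-order condition of this objective yields the standard closed form \( \alpha^* = (G_{K,n} + n\beta_n I_n)^{-1} y \) (the same form reappears in Eqns. (6)-(7) below). A minimal sketch, with illustrative names:

```python
import numpy as np

def krr_fit(G_n, y, beta):
    """KRR coefficients alpha = (G_n + n*beta*I)^{-1} y on the labeled samples."""
    n = len(y)
    return np.linalg.solve(G_n + n * beta * np.eye(n), y)

def krr_predict(k_x, alpha):
    """f(x) = sum_j alpha_j K(x, x_j), where k_x[j] = K(x, x_j)."""
    return k_x @ alpha
```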
One consequence is that for KRR, the whole base kernel could be useless. Consider the graph example on the right, where only the three shaded nodes are labeled, and \( K \) is the adjacency matrix. With KRR, the unlabeled nodes are useless and can be removed. Then, the graph becomes three isolated nodes, so it has zero impact on the learned predictor.
### 2.1 Diffusion Induced Multiscale Smoothness
Let us use this graph example to motivate STKR. Unlabeled samples are useful as they offer more information about the marginal distribution \( P_X \). The problem is that we don't know the connection between \( K \) and \( P_X \). So while KRR can leverage \( K \), it does not necessarily exploit more information about \( P_X \) than supervised learning over the \( n \) labeled samples, which is why the unlabeled samples are useless in our graph example. To address this, the seminal work Belkin et al. (2006) proposed the elegant idea of explicitly including another regularizer \( \| f \|_I^2 \) that reflects the intrinsic structure of \( P_X \). For instance, \( \| f \|_I^2 \) can be defined with the Laplace-Beltrami operator in manifold learning, or the graph Laplacian for graphs. In comparison, STKR also exploits \( P_X \), but in an implicit way: the construction of the STK mixes \( K \) with \( P_X \). To see this: In our graph example, suppose we were to use the STK \( K^2 \), i.e. a two-step random walk. Then, (a) the graph would be useful again because the three labeled nodes were connected in \( K^2 \), and (b) we mixed \( K \) with \( P_X \) since \( K^2 \) is essentially an
integral of $K \times K$ over $P_X$. The main takeaway from the above analysis is: With STKR, we impose another kind of smoothness we call the target smoothness, and it mixes the information of $K$ with the information of $P_X$. In the rest of this section, we formally characterize this target smoothness.
We start with formally defining “smoothness”. Suppose the inter-sample similarity is characterized by a metric $d(x, x')$ over the input space $X$, then one can naturally measure the smoothness of $f$ by its Lipschitz constant $\text{Lip}_d(f) = \sup_{x, x' \in X, x \neq x'} \frac{|f(x) - f(x')|}{d(x, x')}$. So it suffices to specify $d(x, x')$. If $X$ is an Euclidean space, then one can choose $d$ to be the Euclidean distance, which is used in lots of prior work (Tenenbaum et al., 2000; Belkin & Niyogi, 2003). The caveat is that the Euclidean distance is not guaranteed to be correlated with similarity, and $X$ is not necessarily Euclidean in the first place.
Instead, one can use $K$ to define the metric, which should align better with inter-sample similarity by the definition of $K$. And if one further assumes transitivity of similarity, i.e. $(a, b)$ and $(b, c)$ being similar implies that $(a, c)$ are similar, then $K^p$ also aligns with similarity. The kernel metric of $K^p$ is given by $d_{K^p}(x, x') := \|K^p_x - K^p_{x'}\|_{H_{K^p}} = \left[ \sum_i \lambda_i^p (\psi_i(x) - \psi_i(x'))^2 \right]^{1/2}$, which is equivalent to the diffusion distance defined in Coifman & Lafon (2006), and $p$ can be real-valued. Thus, kernel diffusion $\{K^p\}$ induces a multiscale metric geometry over $\mathcal{X}$, where a larger $p$ induces a weaker metric. Here “weaker” means $d_{K^p} = O(d_{K^q})$ if $p > q$. One can also think of $\{K^p\}_{p \geq 1}$ as forming a chain of smooth function classes: $L^2(P_X) \supset H_{K^1} \supset H_{K^2} \supset \cdots$, and for continuous diffusion we can also have sets like $H_{K^{1.5}}$. A larger $p$ imposes a stronger constraint since $H_{K^p}$ is smaller.
Now we show: $\|f\|_{H_{K^p}}$ is equal to its Lipschitz constant. But this is not true for $\text{Lip}_d(f)$, which is not very tractable under the topological structure of $\mathcal{X}$. Thus, we consider the space of finite signed measures over $\mathcal{X}$, denoted by $\mathcal{M}$. For any function $f$ on $\mathcal{X}$, define its mean $\bar{f}$ as a linear functional over $\mathcal{M}$, such that $\bar{f}(\mu) = \int_{\mathcal{X}} f(x)\, d\mu(x)$. Then, define $d_{K^p}(\mu, \nu) := \left\| \int K^p_x \, d\mu(x) - \int K^p_x \, d\nu(x) \right\|_{H_{K^p}}$ for $\mu, \nu \in \mathcal{M}$, and $\text{Lip}_{d_{K^p}}(f) := \sup_{\mu, \nu \in \mathcal{M}, \mu \neq \nu} \frac{|\bar{f}(\mu) - \bar{f}(\nu)|}{d_{K^p}(\mu, \nu)}$. In other words, $f$ is smooth if its mean w.r.t. $\mu$ does not change too much when the measure $\mu$ over $\mathcal{X}$ changes by a little bit. Then, we have:
**Proposition 1** (Proofs in Appendix B). This $\text{Lip}_{d_{K^p}}(f)$ satisfies: $\text{Lip}_{d_{K^p}}(f) = \|f\|_{H_{K^p}}, \forall f \in H_{K^p}$.
We define $r_{K^p}(f) := \|f - \mathbb{E}_{P_X}[f]\|^2_{P_X}/\text{Lip}_{d_{K^p}}(f)^2 = \|f - \mathbb{E}_{P_X}[f]\|^2_{P_X}/\|f\|^2_{H_{K^p}}$, and use it to measure the smoothness of any $f \in L^2(P_X)$ at scale $p \geq 1$. Here $\|f\|_{H_{K^p}}$ is extended to all $f \in L^2(P_X)$: If $\exists f_p \in H_{K^p}$ such that $f - \mathbb{E}_{P_X}[f] = f_p (P_X-a.e.)$, then $\|f\|_{H_{K^p}} := \|f_p\|_{H_{K^p}}$. If there is no such $f_p$, then $\|f\|_{H_{K^p}} := +\infty$. Since $K$ is centered, for any $f_1$ and $f_2$ that differ by a constant $P_X$-a.e., there is $r_{K^p}(f_1) = r_{K^p}(f_2)$. This would not be true without the centeredness assumption. We define $r_{K^p}(f)$ as a ratio to make it scale-invariant, i.e. $f$ and $2f$ are equally smooth, for the same purpose of decoupling smoothness and scale. And in Appendix B.2, we will discuss the connection between $r_{K^p}(f)$ and discriminant analysis, as well as the Poincaré constant.
Now we characterize “target smoothness”, an unknown property that the target $f^*$ possesses. We assume that it has the same form $r_t(f) := \|f - \mathbb{E}_{P_X}[f]\|^2_{P_X}/\text{Lip}_{d_t}(f)^2$, for some metric $d_t$ over $\mathcal{M}$. Then, we assume all functions with “target smoothness” belong to a Hilbert space $H_t$, and $\mu, \nu$ are similar if all functions in $H_t$ give them similar means, i.e. $d_t(\mu, \nu) = \sup_{\|f\|_{H_t} = 1} |\bar{f}(\mu) - \bar{f}(\nu)|$. We also assume that target smoothness implies base smoothness, i.e. $H_t \subset H_K$ (this is relaxable).
### 2.2 Target Smoothness Can Always Be Obtained from STK: Sufficient Condition
Let $r_t(f)$ be defined as above. Our first theorem gives the following sufficient condition: If the target smoothness preserves relative multiscale smoothness, then it must be attainable with an STK.
**Theorem 1.** If $r_t(f)$ preserves relative smoothness: “$\forall f_1, f_2 \in L^2(P_X)$, if $r_{K^p}(f_1) \geq r_{K^p}(f_2)$ for all $p \geq 1$, then $r_t(f_1) \geq r_t(f_2)$”, and $H_t \subset H_K$, then $r_t(f) = \|f - \mathbb{E}_{P_X}[f]\|^2_{P_X}/\|f\|^2_{H_t}$, and $H_t$ must be an RKHS, whose reproducing kernel is $K_s$ that admits the following form:
$$K_s(x, x') = \sum_{i: \lambda_i > 0} s(\lambda_i) \psi_i(x) \psi_i(x'),$$
for a transformation function \( s : [0, +\infty) \to [0, +\infty) \) that is: (i) monotonically non-decreasing, (ii) \( s(\lambda) \leq M \lambda \) for some constant \( M > 0 \), (iii) continuous on \( [0, +\infty) \), and (iv) \( C^\infty \) on \( (0, +\infty) \).
The proof is done by sequentially showing that (i) $H_t$ is an RKHS; (ii) Its reproducing kernel is $K_s(x, x') := \sum_i s_i \psi_i(x) \psi_i(x')$, with $s_1 \geq s_2 \geq \cdots \geq 0$; (iii) $s_i = O(\lambda_i)$; (iv) There exists such a function $s(\lambda)$ that interpolates all $s_i$. From now on, we will use $H_{K_s}$ to denote $H_t$. This theorem
implies that \( s(\lambda) \) makes the eigenvalues decay faster than the base kernel, but it does not imply that \( K_s \) is a linear combination of \( \{K^p\}_{p \geq 1} \). This result naturally leads to KRR with \( \|f\|_{H_{K_s}}^2 = \|f\|_{H_t}^2 \):
\[
\tilde{f} \in \arg\min_{f \in H_{K_s}} \left\{ \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2 + \beta_n \|f - \mathbb{E}_{X \sim P_X}[f(X)]\|_{H_{K_s}}^2 \right\},
\]
(4)
which we term spectrally transformed kernel regression (STKR). One could also relax the assumption \( H_t \subset H_K \) by considering \( H_{K^p} \) for \( p \geq p_0 \) where \( p_0 < 1 \). Assuming that \( H_{K^{p_0}} \) is still an RKHS and \( H_t \subset H_{K^{p_0}} \), one can prove the same result as Theorem 1, with (ii) changed to \( s(\lambda) \leq M \lambda^{p_0} \).
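To make the spectral transformation tangible, here is a transductive sketch that applies \( s \) to the empirical spectrum: the eigenvalues of \( G_K/(n+m) \) approximate the \( \lambda_i \) (cf. Lemma 2 below), so transforming them approximates the Gram matrix of \( K_s \) on the same samples. The function name and the example transformation are illustrative.

```python
import numpy as np

def stk_gram(G, s):
    """Approximate Gram matrix of K_s by transforming the spectrum of G/N."""
    N = G.shape[0]
    lam, U = np.linalg.eigh(G / N)       # empirical eigenvalues of T_K
    lam_s = s(np.clip(lam, 0.0, None))   # apply s; clip tiny negative noise
    return N * (U * lam_s) @ U.T

# Example: the two-step diffusion s(lambda) = lambda^2.
# G_s = stk_gram(G, lambda lam: lam ** 2)
```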
Now we develop theory for STKR, and show how it exploits the unlabeled data. Here is a road map:
(a) We first study the easier transform-aware setting in Section 3, where a good \( s(\lambda) \) is given by an oracle. But even though \( s(\lambda) \) is known, \( K_s \) is inaccessible as one cannot obtain \( \psi_i \) with finite samples. Unlabeled data becomes useful when one constructs a kernel \( \hat{K}_s \) to approximate \( K_s \).
(b) In reality, such an oracle need not exist. So in Section 4, we study the harder transform-agnostic setting where we have no knowledge of \( s(\lambda) \) apart from Theorem 1. We examine two methods:
(i) STKR with inverse Laplacian (Example 1), which is popular in semi-supervised learning and empirically works well on lots of tasks though the real \( s \) might not be inverse Laplacian.
(ii) STKR with kernel PCA, which extracts the top-\( d \) eigenfunctions to be an encoder and then learns a linear probe atop. This is used in many manifold and representation learning methods. Here, unlabeled data is useful when approximating \( \psi_1, \cdots, \psi_d \) in kernel PCA.
**Notation:** For any kernel \( K \), we use \( G_K \in \mathbb{R}^{(n+m) \times (n+m)} \), \( G_{K,n} \in \mathbb{R}^{n \times n} \), \( G_{K,m} \in \mathbb{R}^{m \times m} \) to respectively denote its Gram matrix on all, labeled and unlabeled samples, i.e. \( G_K[i,j] = K(x_i, x_j) \).
### 3 Transform-Aware: STKR with Known Polynomial Transform
Let the scale of \( f^* \) be measured by \( B \). This section supposes that \( s(\lambda) \) is known, and the following:
**Assumption 1.** \( s(\lambda) = \sum_{p=1}^{\infty} \pi_p \lambda^p \) is a polynomial, with \( \pi_p \geq 0 \).
**Assumption 2.** There exists a constant \( \kappa > 0 \) such that \( K(x,x) \leq \kappa^2 \) for \( P_X \)-almost all \( x \).
**Assumption 3.** \( \mathbb{E}_{P_X}[f^*] = 0 \), and there exist constants \( B, \epsilon > 0 \) such that: \( \|f^*\|_{P_X} \leq B \), \( f^* \in H_{K_s} \), and \( \|f^*\|_{H_{K_s}}^2 \leq \epsilon \|f^*\|_{P_X}^2 \) (i.e. \( r_t(f^*) \geq \epsilon^{-1} \)). (cf. the isometry property in Zhai et al. (2024))
**Assumption 4.** \( P_XY \) satisfies the moment condition for \( \sigma, L > 0 \): \( \mathbb{E}[|y - f^*(x)|^r] \leq \frac{1}{2} r! \sigma^r L^{r-2} \) for all \( r \geq 2 \) and \( P_X \)-almost all \( x \). (e.g. For \( y - f^*(x) \sim N(0, \sigma^2) \), this holds with \( L = \sigma \).)
Assumption 1 is a natural condition for discrete diffusion, such as a multi-step random walk on a graph, and \( p \) starts from 1 because \( s(0) = 0 \). The assumption \( \mathbb{E}_{P_X}[f^*] = 0 \) in Assumption 3 is solely for the simplicity of the results, without which one can prove the same but more verbose bounds. The moment condition Assumption 4 is essentially used to control the size of the label noise.
**Method:** We implement inductive STKR by constructing a computable kernel \( \hat{K}_s(x,x') \) to approximate the inaccessible \( K_s \). For example, if \( K_s = K^2 \), i.e. \( K_s(x,x') = \int K(x,x_0)K(x',x_0)\,dp(x_0) \), then a Monte-Carlo approximation can be done by replacing the integral over \( x_0 \) with an average over \( x_1, \cdots, x_{n+m} \). Computing this average leverages the unlabeled data. Specifically, we define:
\[
\hat{f} \in \arg\min_{f \in H_{\hat{K}_s}} \left\{ \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2 + \beta_n \|f\|_{H_{\hat{K}_s}}^2 \right\},
\]
(5)
where \( \hat{K}_s(x,x') := \sum_{p=1}^{\infty} \pi_p \hat{K}^p(x,x') \); \( \hat{K}^1 = K \); \( \forall p \geq 2, \ \hat{K}^p(x,x') = \frac{v_K(x)^\top G_K^{p-2} v_K(x')}{(n+m)^{p-1}} \).
Here, \( v_K(x) \in \mathbb{R}^{n+m} \) such that \( v_K(x)[i] = K(x,x_i), i \in [n+m] \). One can compute \( \hat{K}_s(x,x') \) for any \( x, x' \) with full access to \( K(x,x') \). Let \( y = [y_1, \cdots, y_n] \in \mathbb{R}^n \), and \( v_{K_s,n}(x) \in \mathbb{R}^n \) be defined as \( v_{K_s,n}(x)[i] = K_s(x,x_i) \) for \( i \in [n] \). The following closed-form solutions can be derived from the Representer Theorem. While they are not necessarily unique, we will use them throughout this work:
\[
\begin{align*}
\tilde{f}(x) &= v_{K_s,n}(x)^T \tilde{\alpha}, & \tilde{\alpha} &= (G_{K_s,n} + n \beta_n I_n)^{-1} y; \\
\hat{f}(x) &= v_{\hat{K}_s,n}(x)^T \hat{\alpha}, & \hat{\alpha} &= (G_{\hat{K}_s,n} + n \beta_n I_n)^{-1} y.
\end{align*}
\]
(6)
(7)
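For small problems one can form \( G_{\hat{K}_s,n} \) densely and solve Eqn. (7) directly; a minimal sketch follows (illustrative names; Algorithms 1-2 below avoid materializing the matrix powers and scale better).

```python
import numpy as np

def stkr_fit_dense(G, y, pi, beta):
    """Solve Eqn. (7) by explicitly building the Gram matrix of K_hat_s
    on the n labeled samples. pi = [pi_1, ..., pi_q]; G is (n+m) x (n+m)."""
    n, N = len(y), G.shape[0]
    A = G[:, :n]                       # K(x_i, x_j) for all i, labeled j
    Gs = pi[0] * G[:n, :n]             # the pi_1 * K term
    B = np.eye(N)
    for p in range(2, len(pi) + 1):    # pi[p-1] stores pi_p
        Gs += pi[p - 1] * (A.T @ B @ A) / N ** (p - 1)
        B = B @ G                      # G^{p-1} for the next term
    return np.linalg.solve(Gs + n * beta * np.eye(n), y)
```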
Results overview: Now, for all \( s \) and \( f^* \) that satisfy the above assumptions, we bound the prediction error \( \| \hat{f} - f^* \|_{P_X} \). The bound has two parts and here is a synopsis: In Part 1 (Theorem 2), we assume access to \( K_s \), and use the general results in Fischer & Steinwart (2020) to bound the estimation error entailed by KRR with finite samples and label noise; In Part 2 (Theorem 3), we bound the approximation error entailed by using \( \hat{K}_s \) to approximate the inaccessible \( K_s \).
**Theorem 2.** Let \( M \) be given by Theorem 1. If eigenvalues of \( K_s \) decay by order \( p^{-1} \) for \( p \in (0, 1] \), i.e. \( s(\lambda_i) = O(i^{-\frac{1}{p}}) \) for all \( i \), then under Assumptions 2 and 4, for a sequence of \( \{\beta_n\}_{n \geq 1} \) with \( \beta_n = \Theta(n^{-\frac{1}{1+p}}) \), there is a constant \( c_0 > 0 \) independent of \( n \geq 1 \) and \( \tau \geq \kappa^{-1}M^{-\frac{1}{2}} \) such that
\[
\| \tilde{f} - f^* \|_{P_X}^2 \leq c_0 \tau^2 \kappa^2 M \left[ (\epsilon B^2 + \sigma^2) n^{-\frac{1}{1+p}} + \max \left\{ L^2, \kappa^2 M \epsilon B^2 \right\} n^{-\frac{1+2p}{1+p}} \right]
\]
holds for all \( f^* \) satisfying Assumption 3 and sufficiently large \( n \) with probability at least \( 1 - 4e^{-\tau} \).
Remark. The \( O(n^{-\frac{1}{1+p}}) \) learning rate is minimax optimal as shown in Fischer & Steinwart (2020), i.e. one can construct an example where the learning rate is at most \( \Omega(n^{-\frac{1}{1+p}}) \). And under Assumption 2, one can always choose \( p = 1 \) since \( i \cdot s(\lambda_i) \leq \sum_{j=1}^{i} s(\lambda_j) \leq M \sum \lambda_j = M \text{Tr}(T_K) \leq M \kappa^2 \). So one statistical benefit of using an appropriate \( s \) is to make the eigenvalues decay faster (i.e. make \( p \) smaller). Also note that the random noise should scale with \( f^* \), which means that \( \sigma, L = \Theta(B) \).
**Theorem 3.** Let \( \hat{\lambda}_1 \) be the largest eigenvalue of \( \frac{G_K}{n+m} \), and denote \( \lambda_{\text{max}} := \max \{ \lambda_1, \hat{\lambda}_1 \} \). Then, under Assumptions 1 and 2, for any \( \delta > 0 \), it holds with probability at least \( 1 - \delta \) that:
\[
\| \hat{f} - \tilde{f} \|_{P_X}^2 \leq 8s(\lambda_{\text{max}}) \nabla_\lambda \left( \frac{s(\lambda)}{\lambda} \right)_{\lambda=\lambda_{\text{max}}} \frac{\beta_n^{-2}\kappa^4}{\sqrt{n+m}} \left( 2 + \sqrt{2 \log \frac{1}{\delta}} \right) \frac{\| y \|_2^2}{n}
\]
Remark. The key to proving this is to first establish a uniform bound for \( |\hat{K}_s(x, x_j) - K_s(x, x_j)| \) over all \( x \) and \( j \). With Assumptions 3 and 4, an \( O(B^2 + \sigma^2 + L^2) \) bound for \( \|y\|_2^2/n \) can be easily obtained. If \( \beta_n = \Theta(n^{-\frac{1}{1+p}}) \) as in Theorem 2, then with \( m = \omega(n^{\frac{4}{1+p}}) \) this bound vanishes, so many more unlabeled samples than labeled ones are needed. Moreover, \( \hat{\lambda}_1 \) is known to be close to \( \lambda_1 \) when \( n + m \) is large:
**Lemma 2.** (Shawe-Taylor et al., 2005, Theorem 2) For any \( \delta > 0 \), with probability at least \( 1 - \delta \),
\[
\hat{\lambda}_1 \leq \lambda_1 + \frac{\kappa^2}{\sqrt{n+m}} \left[ 2\sqrt{2} + \sqrt{19 \log \frac{2(n+m+1)}{\delta}} \right].
\]
Implementation: STKR amounts to solving \( A \hat{\alpha} = y \) for \( A = G_{\hat{K}_s,n} + n\beta_n I_n \) by Eqn. (7). There are two approaches: (i) Directly computing \( A \) (Algorithm 3 in Appendix C) can be slow due to costly matrix multiplication; (ii) Iterative methods are faster by only performing matrix-vector multiplication. Algorithm 1 solves \( A \hat{\alpha} = y \) via Richardson iteration. We name it STKR-Prop as it is very similar to label propagation (Label-Prop) (Zhou et al., 2003). If \( s(\lambda) = \sum_{p=1}^{q} \pi_p \lambda^p \) with \( q < \infty \), and computing \( K(x, x') \) for any \( x, x' \) takes \( O(1) \) time, then Algorithm 1 has a time complexity of \( O\big(q(n+m)^2 \beta_n^{-1} s(\bar{\lambda}) \log \frac{1}{\epsilon}\big) \) for achieving error less than \( \epsilon \), where \( \bar{\lambda} \) is a known upper bound on \( \lambda_1 \) (see derivation in Appendix C). Besides, STKR-Prop is much faster when \( K \) is sparse. In particular, for a graph with \( |E| \) edges, STKR-Prop runs in \( \tilde{O}(q|E|\beta_n^{-1}) \) time, which is as fast as Label-Prop.
At inference time, one can store the \( v \) computed in line 4 of Algorithm 1 in memory. Then for any \( x \), there is \( \hat{f}(x) = \sum_{i=1}^{n+m} K(x_i, x)v_i + \pi_1 \sum_{j=1}^{n} K(x_j, x)\hat{\alpha}_j \), which takes \( O(n+m) \) time to compute. This is much faster than Chapelle et al. (2002), who solved an optimization problem for each new \( x \).
For some other transformations, including the inverse Laplacian we are about to discuss, \( s \) is complex, but \( s^{-1}(\lambda) = \sum_{p=0}^{q-1} \xi_p \lambda^{p-r} \) is simple. For this type of \( s(\lambda) \), Algorithm 1 is infeasible, but there is a viable method in Algorithm 2: It finds \( \theta \in \mathbb{R}^{n+m} \) such that \( Q\theta = [\hat{\alpha}, 0_m]^T \) and \( M\theta = \tilde{y} \), where \( Q := \sum_{p=0}^{q-1} \xi_p \left( \frac{G_K}{n+m} \right)^p \), \( M := (n+m) \tilde{I}_n \left( \frac{G_K}{n+m} \right)^r + n\beta_n Q \), \( \tilde{I}_n := \text{diag}\{1, \cdots, 1, 0, \cdots, 0\} \) with \( n \) ones and \( m \) zeros, and \( \tilde{y} := [y, 0_m]^T \). In Appendix C we will derive these formulas step by step, and prove its time complexity to be \( \tilde{O}(\max\{q,r\}(n+m)^2 \beta_n^{-1}) \). And at inference time, one can compute \( \hat{f}(x) = v_K(x)^T \left( \frac{G_K}{n+m} \right)^{r-1} \theta \) in \( O(n+m) \) time for any \( x \) by storing \( \left( \frac{G_K}{n+m} \right)^{r-1} \theta \) in the
Algorithm 1 STKR-Prop for simple \( s \)

**Input:** \( G_K \), \( s(\lambda) = \sum_{p=1}^{q} \pi_p \lambda^p \), \( \beta_n \), \( y \), \( \gamma \), \( \epsilon \)

1: Initialize: \( \hat{\alpha} \leftarrow 0 \in \mathbb{R}^n \)
2: while True do
# Compute \( u = (G_{\hat{K}_s,n} + n\beta_n I_n)\hat{\alpha} \)
3: \( z \leftarrow \frac{1}{n+m} G_{K,(n+m)\times n}\, \hat{\alpha} \), \( v \leftarrow 0 \in \mathbb{R}^{n+m} \)
4: for \( p = q, \cdots, 2 \) do \( v \leftarrow \frac{1}{n+m} G_K v + \pi_p z \)
5: \( u \leftarrow G_{K,(n+m)\times n}^\top v + \pi_1 G_{K,n}\hat{\alpha} + n\beta_n \hat{\alpha} \)
6: if \( \|u - y\|_2 < \epsilon \|y\|_2 \) then return \( \hat{\alpha} \)
7: \( \hat{\alpha} \leftarrow \hat{\alpha} - \gamma(u - y) \)

Algorithm 2 STKR-Prop for simple \( s^{-1} \)

**Input:** \( G_K \), \( s^{-1}(\lambda) = \sum_{p=0}^{q-1} \xi_p \lambda^{p-r} \), \( \beta_n \), \( y \), \( \gamma \), \( \epsilon \)

1: Initialize: \( \theta \leftarrow 0 \in \mathbb{R}^{n+m} \), \( \tilde{y} \leftarrow [y, 0_m]^\top \)
2: while True do
# Compute \( u = M\theta \)
3: \( v \leftarrow 0 \in \mathbb{R}^{n+m} \)
4: for \( p = q-1, \cdots, 0 \) do \( v \leftarrow \frac{1}{n+m} G_K v + \xi_p \theta \) # so that \( v = Q\theta \)
5: \( w \leftarrow \left(\frac{G_K}{n+m}\right)^{r} \theta \); \( u \leftarrow (n+m)\,[w[1\!:\!n],\, 0_m]^\top + n\beta_n v \)
6: \( a \leftarrow u - \tilde{y} \), \( \theta \leftarrow \theta - \gamma a \)
7: if \( \|a\|_2 < \epsilon \|y\|_2 \) then return \( \theta \)

Here \( G_{K,(n+m)\times n} \in \mathbb{R}^{(n+m)\times n} \) denotes the block of \( G_K \) pairing all samples with the \( n \) labeled ones.
memory, where $v_K$ is defined as in Eqn. (5). Once again, for a graph with $|E|$ edges, STKR-Prop has a time complexity of $O(\max\{q, r\}|E|\beta_n^{-1})$, which is as fast as Label-Prop. Finally, here we showed the existence of a good solver (Richardson), but practitioners could surely use other linear solvers.
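A compact NumPy sketch of Algorithm 1 is given below, assuming dense matrices and a fixed step size; the function name and defaults are illustrative, and convergence requires \( \gamma \) smaller than \( 2/\lambda_{\max}(G_{\hat{K}_s,n} + n\beta_n I_n) \).

```python
import numpy as np

def stkr_prop(G, y, pi, beta, gamma=0.1, eps=1e-6, max_iter=10_000):
    """Richardson iteration for STKR with s(lam) = sum_p pi_p lam^p (Algorithm 1).

    G  : (n+m, n+m) base-kernel Gram matrix (labeled samples first)
    y  : (n,) labels; pi = [pi_1, ..., pi_q]; beta = beta_n; gamma = step size.
    """
    n, N = len(y), G.shape[0]
    A = G[:, :n]                          # block pairing all samples with labeled ones
    alpha = np.zeros(n)
    for _ in range(max_iter):
        z = A @ alpha / N                 # line 3
        v = np.zeros(N)
        for p in range(len(pi), 1, -1):   # line 4: Horner's rule over p = q, ..., 2
            v = G @ v / N + pi[p - 1] * z
        u = A.T @ v + pi[0] * (G[:n, :n] @ alpha) + n * beta * alpha   # line 5
        if np.linalg.norm(u - y) < eps * np.linalg.norm(y):           # line 6
            break
        alpha -= gamma * (u - y)          # line 7: Richardson update
    return alpha
```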
4 Transform-agnostic: Inverse Laplacian and Kernel PCA
We have derived learning guarantees for general inductive STKR when \( s \) is known. This is useful, but in reality it is unreasonable to presume that such an oracle \( s \) will be given. What should one do with zero knowledge of \( s(\lambda) \) while still wanting to enforce target smoothness? Here we provide two parallel methods. The first option is STKR with the canonical inverse Laplacian transformation. The Laplacian as a regularizer has been widely used in various contexts (Zhou et al., 2003; Johnson & Zhang, 2008; HaoChen et al., 2021; Zhai et al., 2024). For our problem, we want \( \|f\|^2_{H_{K_s}} = f^\top K_s^{-1} f \) to be the Laplacian, so the kernel \( K_s \) should be the inverse Laplacian:
Example 1 (Inverse Laplacian for the inductive setting). For $\eta \in (0, \lambda_1^{-1})$, define $K_s$ such that $K_s^{-1}(x, x') = K^{-1}(x, x') - \eta K^0(x, x')$. $K^{-1}$ and $K^0$ are STKs with $s(\lambda) = \lambda^{-1}$ and $s(\lambda) = \lambda^0$. Then, $s^{-1}(\lambda) = \lambda^{-1} - \eta > 0$ for $\lambda \in (0, \lambda_1]$ ($s^{-1}$ is the reciprocal, not the inverse), $s(\lambda) = \frac{\lambda}{1 - \eta \lambda} = \sum_{p=1}^\infty \eta^{p-1} \lambda^p$, and $\|f\|^2_{H_{K_s}} = \|f\|^2_{H_K} - \eta \|f\|^2_{P_X}$. The classical Laplacian has $\eta = 1$ and $\lambda_1 < 1$. For the connection between the transductive and inductive versions of the Laplacian, see Appendix B.3.
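As a usage note, this transformation matches the template \( s^{-1}(\lambda) = \sum_{p=0}^{q-1} \xi_p \lambda^{p-r} \) from the previous section with \( r = 1 \), \( q = 2 \), \( \xi_0 = 1 \), and \( \xi_1 = -\eta \), so Algorithm 2 applies directly.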
This canonical transformation empirically works well on lots of tasks, and also has the following guarantee:
Proposition 3. Let $s$ be the inverse Laplacian (Example 1), and $s^*$ be an arbitrary oracle satisfying Theorem 1. Suppose $f^*$ satisfies Assumption 3 w.r.t. $s^*$, but STKR (Eqn. (7)) is performed with $s$. Then, Theorem 3 still holds, and Theorem 2 holds for $\tilde{f}$ given by Eqn. (6) with $\epsilon$ replaced by $M\epsilon$.
Note that this result does not explain why the inverse Laplacian is so good; its superiority is mainly an empirical observation, so it could still fail on some tasks, for which there is the second option. The key observation here is that since $s$ is proved in Theorem 1 to be monotonic, the order of $\psi_1, \psi_2, \cdots$ remains unchanged. So if one is asked to choose $d$ functions to represent the target function, then regardless of $s$, the best choice with the lowest worst-case approximation error must be $\psi_1, \cdots, \psi_d$:
Proposition 4. Let $s$ be any transformation function given by Theorem 1, and let $\mathcal{F}_s$ denote the set of functions satisfying Assumption 3 for this $s$. Then, the following holds for all $\hat{\Psi} = [\hat{\psi}_1, \cdots, \hat{\psi}_d]$ with $\hat{\psi}_i \in L^2(P_X)$, as long as $s(\lambda_1)\epsilon > 1$ and $\frac{s(\lambda_{d+1})}{s(\lambda_1)}[s(\lambda_1)\epsilon - 1] \leq \frac{1}{2}$:
$$\max_{f \in \mathcal{F}_s} \min_{w \in \mathbb{R}^d} \|w^\top \hat{\Psi} - f\|^2_{P_X} \geq \frac{s(\lambda_{d+1})}{s(\lambda_1) - s(\lambda_{d+1})}[s(\lambda_1)\epsilon - 1]B^2.$$ To attain equality, it is sufficient that $\mathrm{span}\{\hat{\psi}_1, \cdots, \hat{\psi}_d\} = \mathrm{span}\{\psi_1, \cdots, \psi_d\}$, and necessary if $s(\lambda_d) > s(\lambda_{d+1})$.
Method: This result motivates using representation learning with two stages: A self-supervised pretraining stage that learns a $d$-dimensional encoder $\hat{\Psi} = [\hat{\psi}_1, \cdots, \hat{\psi}_d]$ with the unlabeled samples, and a supervised fitting stage that fits a linear probe on $\hat{\Psi}$ with the labeled samples. The final predictor is $\hat{f}_d(x) = \hat{w}^\top \hat{\Psi}(x)$, for which we do not include a bias term since $f^*$ is assumed to be centered.
For pretraining, the problem boils down to extracting the top-$d$ eigenfunctions of $T_K$, for which a classical method is kernel PCA (Schölkopf & Smola, 2002, Chapter 14). Indeed, kernel PCA has been widely applied in manifold learning (Belkin & Niyogi, 2003; Bengio et al., 2004), and more recently self-supervised pretraining (Johnson et al., 2023). Suppose that $G_{K,m} \in \mathbb{R}^{m \times m}$, the Gram matrix of $K$ over $x_{n+1}, \cdots, x_{n+m}$, is at least rank-$d$. Then, kernel PCA can be formulated as:
\[ \hat{\psi}_i(x) = \sum_{j=1}^{m} v_i[j]K(x_{n+j}, x), \tag{8} \]
where \( G_{K,m}v_i = m\tilde{\lambda}_i v_i; \quad \tilde{\lambda}_1 \geq \cdots \geq \tilde{\lambda}_d > 0; \quad v_i \in \mathbb{R}^m; \quad \forall i,j \in [d], \langle v_i, v_j \rangle = \frac{\delta_{i,j}}{m\tilde{\lambda}_i}. \)
For any \( i,j \in [d], \) there is \( \langle \hat{\psi}_i, \hat{\psi}_j \rangle_{H_K} = v_i^\top G_{K,m}v_j = \delta_{i,j}. \) Consider running KRR w.r.t. \( K \) over all \( f = w^\top \hat{\Psi}. \) For \( \hat{f} = \hat{w}^\top \hat{\Psi}, \) there is \( \| \hat{f} \|_{H_K}^2 = \sum_{i,j=1}^{d} \hat{w}_i \hat{w}_j \langle \hat{\psi}_i, \hat{\psi}_j \rangle_{H_K} = \| \hat{w} \|_2^2. \) So it amounts to minimize \( \frac{1}{n} \sum_{i=1}^{n} (\hat{w}^\top \hat{\Psi}(x_i) - y_i)^2 + \beta_n \| \hat{w} \|_2^2 \) as in ridge regression, which is an approximation of STKR with a “truncation function” \( s(\lambda_i) = \lambda_i \) if \( i \leq d, \) and 0 otherwise (not a real function if \( \lambda_d = \lambda_{d+1} \)). Denote \( \hat{\Psi}(X_n) = [\hat{\Psi}(x_1), \ldots, \hat{\Psi}(x_n)] \in \mathbb{R}^{d \times n}. \) Then, the final predictor is given by:
\[ \hat{f}_d = \hat{w}^{*\top} \hat{\Psi}, \quad \hat{w}^* = (\hat{\Psi}(X_n)\hat{\Psi}(X_n)^\top + n\beta_n I_d)^{-1} \hat{\Psi}(X_n)y. \tag{9} \]
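The following is a minimal NumPy sketch of this two-stage procedure, assuming \( G_{K,m} \) has rank at least \( d \); names are illustrative.

```python
import numpy as np

def kpca_encoder(G_m, d):
    """Top-d kernel PCA weights from the unlabeled Gram matrix (Eqn. (8))."""
    m = G_m.shape[0]
    mu, U = np.linalg.eigh(G_m)              # ascending eigenvalues of G_m
    mu, U = mu[::-1][:d], U[:, ::-1][:, :d]  # keep the top d
    return U / np.sqrt(mu)                   # columns v_i, <v_i, v_i> = 1/(m*lam_i)

def fit_probe(K_labeled_unlabeled, V, y, beta):
    """Ridge regression on the d-dim features (Eqn. (9)).

    K_labeled_unlabeled: (n, m) matrix with entries K(x_i, x_{n+j}).
    """
    n = len(y)
    Psi = V.T @ K_labeled_unlabeled.T        # (d, n) features of labeled samples
    d = Psi.shape[0]
    w = np.linalg.solve(Psi @ Psi.T + n * beta * np.eye(d), Psi @ y)
    return w                                 # predict: w @ (V.T @ k_unlab(x))
```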
**Results overview:** We now bound the prediction error of \( \hat{f}_d \) for all \( f^* \) satisfying Assumption 3, with no extra knowledge about \( s(\lambda) \). The bound again has two parts. In Part 1 (Theorem 4), we bound the estimation error entailed by KRR over \( H_{\hat{\Psi}} \) given by Eqn. (9), where \( H_{\hat{\Psi}} \) is the RKHS spanned by \( \hat{\Psi} = [\hat{\psi}_1, \ldots, \hat{\psi}_d] \), a subspace of \( H_K \). In Part 2 (Theorem 5), we bound the approximation error, which is the distance from \( f^* \) to the subspace \( H_{\hat{\Psi}} \). Note that if \( \hat{\Psi} \) has insufficient representation capacity (e.g. \( d \) is small), then the approximation error will not vanish. Specifically, let \( \tilde{f}_d \) be the projection of \( f^* \) onto \( H_{\hat{\Psi}} \), i.e. \( \tilde{f}_d = \tilde{w}^\top \hat{\Psi} \) for some \( \tilde{w} \in \mathbb{R}^d \) with \( \langle \tilde{f}_d, f^* - \tilde{f}_d \rangle_{H_K} = 0 \). Then, Part 1 bounds the KRR estimation error with \( \tilde{f}_d \) as the target function, and Part 2 bounds \( \| f^* - \tilde{f}_d \|_{P_X}^2 \) and \( \| f^* - \tilde{f}_d \|_{H_K}^2 \).
**Theorem 4.** Let \( M \) be given by Theorem 1. Then, under Assumptions 2 and 4, for Eqn. (9) with a sequence \( \{\beta_n\}_{n \geq 1} \) satisfying \( \beta_n = \Theta(n^{-\frac{1}{1+p}}) \) for any \( p \in (0,1], \) and for any \( \delta > 0 \) and \( \tau \geq \kappa^{-1}, \) if
\[ n \geq 16\kappa^4 \tilde{\lambda}_d^{-2} \left( 2 + \sqrt{2 \log \frac{2}{\delta}} \right)^2, \]
then there is a constant \( c_0 > 0 \) independent of \( n, \tau \) such that:
\[ \| \hat{f}_d - \tilde{f}_d \|_{P_X}^2 \leq 3 \left( \| f^* - \tilde{f}_d \|_{P_X}^2 + \frac{\tilde{\lambda}_d}{4} \| f^* - \tilde{f}_d \|_{H_K}^2 \right) + c_0 \tau^2 \left[ (\kappa^2 M e B^2 + \kappa^2 \sigma^2) n^{-\frac{1}{1+p}} + \kappa^2 \max \{ L^2, \kappa^2 M e B^2 \} n^{-\frac{1+2p}{1+p}} \right] \]
holds for all \( f^* \) under Assumption 3 and sufficiently large \( n \) with probability at least \( 1 - 4e^{-\tau} - \delta. \)
**Remark.** This bound has two terms. The first term bounds the gap between \( y \) and new labels \( \tilde{y}, \) where \( \tilde{y}_i = y_i - f^*(x_i) + \tilde{f}_d(x_i). \) The second term again comes from the results in Fischer & Steinwart (2020). Comparing the second term to Theorem 2, we can see that it achieves the fastest minimax optimal learning rate (i.e. \( p \) can be arbitrarily close to 0), as the eigenvalues decay the fastest with \( s \) being the “truncation function”. But the side effect of this statistical benefit is the first term, as the \( d \)-dimensional \( \hat{\Psi} \) has limited capacity. The coefficient 3 can be arbitrarily close to 1 with larger \( n, c_0. \)
Our astute readers might ask why \( \hat{\Psi} \) is learned only with the unlabeled samples, while in the last section STKR was done with both labeled and unlabeled samples. This is because in the supervised fitting stage, the function class is the subspace spanned by \( \hat{\Psi}. \) To apply uniform deviation bounds in Theorem 4, this function class, and therefore \( \hat{\Psi}, \) must not see \( x_1, \ldots, x_n \) during pretraining. On the contrary, the function class in Theorem 2 is \( H_K, \) which is independent of \( x_1, \ldots, x_n \) by definition.
**Theorem 5.** Let \( M \) be given by Theorem 1. Let \( f^* - \tilde{f}_d = bg, \) where \( b \in \mathbb{R}, \) and \( g \in H_K \) such that \( \| g \|_{H_K} = 1 \) and \( \langle g, \hat{\psi}_i \rangle_{H_K} = 0 \) for \( i \in [d]. \) Then,
\[ \| f^* - \tilde{f}_d \|_{P_X}^2 = b^2 \| g \|_{P_X}^2, \qquad \| f^* - \tilde{f}_d \|_{H_K}^2 = b^2 \leq \frac{\epsilon M \lambda_1 - \frac{1}{2}}{\lambda_1 - \| g \|_{P_X}^2} B^2 \tag{10} \]
for all \( f^* \) satisfying Assumption 3.
And if Assumption 2 holds, then for any \( \delta > 0, \) it holds with probability at least \( 1 - \delta \) that:
\[ \lambda_{d+1} \leq \| g \|_{P_X}^2 \leq \lambda_{d+1} + \frac{\kappa^2}{\sqrt{m}} \left( 2\sqrt{d} + 3\sqrt{\log \frac{6}{\delta}} \right). \]
**Remark.** When \( m \) is sufficiently large, \( \| g \|_{P_X}^2 \) can be very close to \( \lambda_{d+1}. \) Compared to Proposition 4, one can see that the bound for \( \| f^* - \tilde{f}_d \|_{P_X}^2 = b^2 \| g \|_{P_X}^2 \) given by this result is near tight provided that \( \frac{s(\lambda_1)}{\lambda_1} = \frac{s(\lambda_{d+1})}{\lambda_{d+1}} = M: \) The only difference is that Eqn. (10) has \( \epsilon M \lambda_1 - \frac{1}{2} \) instead of \( \epsilon M \lambda_1 - 1. \)
Table 1: Experiment results. We compare Label-Prop (LP) to STKR-Prop (SP) with inverse Laplacian (Lap), with polynomial \( s(\lambda) = \lambda^6 \) (poly), with kernel PCA (topd), and with \( s(\lambda) = \lambda \) (KRR) (i.e. KRR with base kernel). (t) and (i) indicate transductive and inductive settings. Test samples account for 1% of all samples. We report the accuracies of the argmax prediction of the estimators (%). Optimal hyperparameters are selected using a validation set (see Appendix D for details). Standard deviations are given across ten random seeds.
| Dataset | LP (t) | SP-Lap (t) | SP-poly (t) | SP-topd (t) | SP-Lap (i) | SP-poly (i) | SP-topd (i) | KRR (i) |
|---------|--------|------------|-------------|-------------|-----------|-------------|-------------|--------|
| Computers | 77.30±0.05 | 77.81±0.94 | 76.72±4.12 | 80.80±0.06 | 77.15±2.64 | 71.91±4.13 | 80.80±3.28 | 26.35±3.44 |
| Cora | 73.38±0.00 | 77.04±0.74 | 71.48±5.80 | 69.26±7.82 | 67.78±7.62 | 65.19±9.11 | 63.70±6.00 | 28.52±5.56 |
| DBLP | 66.44±3.78 | 65.42±5.02 | 64.52±4.20 | 64.89±4.60 | 65.20±4.92 | 64.51±4.05 | 63.10±3.41 | 44.80±3.86 |
Our analysis in this section follows the framework of Zhai et al. (2024), but we have the following technical improvements: (a) Estimation error: They bound with classical local Gaussian and localized Rademacher complexity, while we use the tighter bound in Fischer & Steinwart (2020) that is minimax optimal; (b) Approximation error: Our Theorem 5 has three improvements. (i) \( \|g\|^2_{P_X} - \lambda_{d+1} \) is \( O(\sqrt{d}) \) instead of \( O(d) \); (ii) It does not require delocalization of the top-\( d \) eigenfunctions, thereby removing the dependence on the covariance matrix; (iii) Our bound does not depend on \( \lambda_d^{-1} \).
Eigen-decomposition of \( G_{K,m} \) takes \( O(m^3) \) time in general, though as of today the fastest algorithm takes \( O(m^\omega) \) time with \( \omega < 2.38 \) (Demmel et al., 2007), and could be faster if the kernel is sparse.
5 EXPERIMENTS
We implement STKR-Prop (SP) with inverse Laplacian (Lap), polynomial (poly) \( s(\lambda) = \lambda^p \), and kernel PCA (topd). We run them on several node classification tasks, and compare them to Label-Prop (LP) and KRR with the base kernel (i.e. STKR with \( s(\lambda) = \lambda \)). Details and full results are deferred to Appendix D, and here we report a portion of the results in Table 1, in which the best and second-best performances for each dataset are marked in red and blue. We make the following observations:
(a) STKR works pretty well with general polynomial \( s(\lambda) \) in the inductive setting. In the transductive setting, the performance of SP-Lap is similar to LP, and SP-poly is slightly worse. The inductive performance is slightly worse than transductive, which is reasonable since there is less information at train time for the inductive setting. Note that LP does not work in the inductive setting.
(b) STKR with \( s(\lambda) = \lambda^p \) for \( p > 1 \) is much better than KRR (i.e. \( p = 1 \)). In fact, we observe that for STKR with \( s(\lambda) = \lambda^p \), a larger \( p \) performs better (see Figure 2 in Appendix D). This suggests one possible reason why inverse Laplacian works so well empirically: It contains \( K^p \) for \( p = 1, 2, \ldots \), so it can use multi-step similarity information up to infinitely many steps (a sketch of this polynomial variant follows this list).
(c) STKR also works pretty well with kernel PCA. Specifically, on 3 of the 9 datasets we use, such as Computers, kernel PCA is better than LP and STKR with inverse Laplacian. This shows that inverse Laplacian and kernel PCA are two parallel methods — neither is superior.
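As a rough illustration of observation (b) above, the following transductive sketch runs KRR with the polynomial transformation $s(\lambda) = \lambda^p$ by approximating the $p$-step kernel's Gram matrix with a normalized power of the base Gram matrix. This is an illustration under that approximation, not the STKR-Prop implementation used in the experiments.

```python
import numpy as np

def stkr_poly_predict(G, y_labeled, labeled_idx, p=6, beta=1e-3):
    """KRR with s(lambda) = lambda^p: the transformed Gram matrix over all
    m samples is approximated by G^p / m^(p-1) for a base Gram matrix G."""
    m = G.shape[0]
    Gp = np.linalg.matrix_power(G / m, p) * m       # Gram of the p-step kernel
    Gll = Gp[np.ix_(labeled_idx, labeled_idx)]      # labeled-by-labeled block
    n = len(labeled_idx)
    alpha = np.linalg.solve(Gll + n * beta * np.eye(n), y_labeled)
    return Gp[:, labeled_idx] @ alpha               # predictions for all m samples
```

Larger `p` mixes in longer-range multi-step similarity, consistent with the trend reported in Figure 2 of Appendix D.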
6 CONCLUSION
This work revisited the classical idea of STKR, and proposed a new class of general and scalable STKR estimators able to leverage unlabeled data with a base kernel. We established STKR as a general and principled approach, provided scalable implementations for general transformation and inductive settings, and proved statistical bounds with technical improvements over prior work.
Limitations and open problems. This work assumes full access to \( K(x, x') \), but in some cases computing \( K(x, x') \) might be slow or impossible. The positive-pair kernel in contrastive learning (Johnson et al., 2023) is such an example, for which computing \( K \) is hard but computing \( \|f\|^2_{H_K} \) is easy, so our methods need to be modified accordingly. Also, this work does not talk about how to choose the right base kernel \( K \), which is a critical yet difficult open problem. For graph tasks, STKR like label propagation only leverages the graph, but it does not utilize the node features that are usually provided, which are important for achieving high performances in practice. Finally, this work focuses on the theoretical part, and a more extensive empirical study on STKR is desired, especially within the context of manifold learning, and modern self-supervised and semi-supervised learning.
There are three open problems from this work. (i) Improving the minimax optimal learning rate: In this work, we provided statistical bounds w.r.t. \( n, m \) jointly, but one question we did not answer is: If \( m \) is sufficiently large, can we improve the minimax optimal learning rate w.r.t. \( n \) proved in prior work on supervised learning? (ii) Distribution shift: Diffusion induces a chain of smooth function classes \( L^2(P_X) \supset H_{K_1} \supset H_{K_2} \supset \cdots \), but this chain will collapse if \( P_X \) changes. Can one learn predictors or encoders that are robust to the shift in \( P_X \)? (iii) Combining multiple kernels: In practice, usually the predictor is expected to satisfy multiple constraints. For example, an image classifier should be invariant to small rotation, translation, perturbation, etc. When each constraint induces a kernel, how should a predictor or encoder be learned? We leave these problems to future work.
CODE
The code of Section 5 can be found at https://colab.research.google.com/drive/1m8OENF2lvxW3BB6CVEu45SGeK9IoYpd1?usp=sharing.
ACKNOWLEDGMENTS
We would like to thank Zico Kolter, Andrej Risteski, Bingbin Liu, Elan Rosenfeld, Shanda Li, Yuchen Li, Tanya Marwah, Ashwini Pokle, Amirth Selur and Xiaoyu Huang for their feedback on the early draft of this work, and Yiping Lu and Fanghui Liu for their useful discussions. We are grateful to our anonymous ICLR reviewers, with whose help this work has been greatly improved. We acknowledge the support of NSF via IIS-1909816, IIS-2211907, ONR via N00014-23-1-2368, DARPA under cooperative agreement HR00112020003, and Bloomberg Data Science PhD fellowship.
REFERENCES
Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7 (11), 2006.
Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pp. 541–549. PMLR, 2018.
Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-François Paiement, Pascal Vincent, and Marie Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219, 2004.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in neural information processing systems, 32, 2019.
David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HklkeR4KPB.
Gilles Blanchard, Olivier Bousquet, and Laurent Zwald. Statistical properties of Kernel Principal Component Analysis. Machine Learning, 66(2-3):259–294, March 2007. doi: 10.1007/s10994-006-6895-9. URL https://hal.science/hal-00373789.
Aleksandar Bojchevski and Stephan Günnemann. Deep Gaussian embedding of graphs: Unsupervised inductive learning via ranking. In International Conference on Learning Representations, 2018.
Haim Brezis. Functional analysis, Sobolev spaces and partial differential equations. Springer, 2011.
Simon Buchholz. Kernel interpolation in Sobolev spaces is not consistent in low dimensions. In Conference on Learning Theory, pp. 3410–3440. PMLR, 2022.
Vivien Cabannes, Bobak Kiani, Randall Balestrierio, Yann Lecun, and Alberto Bietti. The SSL interplay: Augmentations, inductive bias, and generalization. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 3252–3298. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/cabannes23a.html.
|
yVJd8lKyVX
|
In the ablation experiments section, there is a lack of explanation for the decrease in experimental performance, particularly why performance declines when n_t is 1 and n_s is 0. Why is there not a more in-depth exploration of the impact of the quantities of n_s and n_t on the results?
|
Hybrid Sharing for Multi-label Image Classification
Zihao Yin\textsuperscript{2,3}, Chen Gan\textsuperscript{2,3}, Kelei He\textsuperscript{1,3,*}, Yang Gao\textsuperscript{2,3} and Junfeng Zhang\textsuperscript{1,3}
\textsuperscript{1} Medical School of Nanjing University
\textsuperscript{2} State Key Laboratory for Novel Software Technology, Nanjing University
\textsuperscript{3} National Institute of Healthcare Data Science, Nanjing University
\{zihao.yin, chengan\}@smail.nju.edu.cn
\{hkl, gaoy, jfzhang\}@nju.edu.cn
Abstract
Existing multi-label classification methods have long suffered from label heterogeneity, where learning a label obscures another. By modeling multi-label classification as a multi-task problem, this issue can be regarded as a negative transfer, which indicates challenges to achieve simultaneously satisfied performance across multiple tasks. In this work, we propose the Hybrid Sharing Query (HSQ), a transformer-based model that introduces the mixture-of-experts architecture to image multi-label classification. HSQ is designed to leverage label correlations while mitigating heterogeneity effectively. To this end, HSQ is incorporated with a fusion expert framework that enables it to optimally combine the strengths of task-specialized experts with shared experts, ultimately enhancing multi-label classification performance across most labels. Extensive experiments are conducted on two benchmark datasets, with the results demonstrating that the proposed method achieves state-of-the-art performance and yields simultaneous improvements across most labels. The code is available at this URL.
1 Introduction
In computer vision, multi-label classification (MLC) attempts to predict multiple labels that may simultaneously appear in a single sample. It is more realistic and intuitive as a sample typically has multiple attributes in real scenarios. However, the semantic correlation and heterogeneity among different labels pose a significant challenge to MLC, resulting in labels that either complement or conflict with each other. Previous works (Liu et al., 2021a; Ridnik et al., 2023; Ye et al., 2020) achieved impressive performance via transformers or graph neural networks, trying to explore the correlation among labels with a backbone shared across labels. These approaches neglected the heterogeneity among labels, which becomes the key obstacle to simultaneous improvement across labels.
In contrast to traditional multi-label classification approaches, MLC can be formulated as a multi-task learning (MTL) problem by modeling the prediction of each label as an individual task. The correlation and heterogeneity of the labels in MLC thus correspond to the task transfer problem of MTL, where learning a new task may improve (positive transfer) or degrade (negative transfer) another. Under this context, the power of MTL in mitigating negative transfer may help improve the performance of MLC.
Precedent works in MTL, such as MMoE (Ma et al., 2018), build on the mixture of experts (MoE; Jacobs et al., 1991), which utilizes a group of learned experts to handle different tasks separately. MoE has been widely adopted in natural language processing, where experts are expected to process words of various lexical categories. We advocate employing MoE in MLC image classification, which shares commonality with lexical category handling. Furthermore, we notice that the conventional MoE approach has primarily emphasized the utilization of expert groups within a specific task, with limited attention to the exchange of expert-group knowledge across different tasks. This approach may not align seamlessly with the requirements of MLC, which will be scrutinized in our work.
*: Corresponding author
In this work, we introduce Hybrid Sharing Query (HSQ), a MoE-based MLC method with a novel proposed fusion strategy to better exploit semantic correlation and heterogeneity among labels and generate better underlying shared representation and task-specific representation. Additionally, we prioritize the adaptive fusion of label-specific and shared features in the classification task of each label, suppressing negative transfer and enhancing performance on the majority of labels. Specifically, we employ a group of shared experts to mine correlation among labels to generate multiple distinct shared features while assigning a group of task-specialized experts to each task to extract a series of label-specific features. This design can balance label-specific and shared features across labels while also emphasizing unique label-specific features for each individual label. Moreover, we employ gate networks to adaptively re-weight and harmonize features from task-specialized experts and shared experts, enhancing positive correlations and suppressing negatives among tasks.
Experiments show that the proposed method outperforms all tested baselines across multiple datasets on the majority of labels. The proposed method is also compatible with transformer-based MLC methods, indicating potential improvement to existing works.
Our contribution is three-fold:
- We introduce MoE to the MLC task, with gated task-specialized and shared experts that adaptively capture correlation and heterogeneity, by formulating MLC as an MTL problem.
- We empirically demonstrate that the fused experts help to extract correlations between tasks, encourage positive correlation sharing and suppress negative transfer, which benefits the overall and per-label performance and mitigates cross-label performance gap.
- We verify the superiority of our proposed model on two benchmark datasets with state-of-the-art performance overall and per-label.
2 RELATED WORK
Multi-label classification in computer vision. Models via various approaches have been proposed to address MLC. Zhu et al. (2017) use convolutional networks on an attention map to optimize ResNet predictions. Rajpurkar et al. (2017) solve the medical multi-label problem by using DenseNet (Huang et al., 2017). Wang et al. (2016) attempt to extract features from the image and generate the labels as a sequence through a learned joint embedding space. Chen et al. (2019a) introduce graph convolutional networks into this task, mapping label word embeddings to inter-dependent object classifiers. Lanchantin et al. (2021a); Liu et al. (2021a) introduce transformers into MLC. These methods fail to account for the negative transfer and positive correlations among labels. Some works also notice a similar problem in MLC from the MTL aspect: Wu et al. (2019) try to mitigate such a problem via a different architecture. Our study aims to improve overall performance in MLC while attempting to simultaneously enhance performance on as many labels as possible.
Multi-task learning. MLC can be recognized as a special case of MTL, treating each label as a separate classification task (Wu et al., 2019). Previous works on this topic include hard and soft parameter sharing, among others. Hard parameter sharing (Caruana, 1997) comes with a shared feature extraction backbone as the bottom and task-specialized towers as the top. Soft parameter sharing does not explicitly share network components across tasks but jointly learns other information through gradient sharing or other techniques. Duong et al. (2015); Yang and Hospedales (2017) encourage knowledge sharing across experts via different constraints such as the L2 norm. Cross-Stitch (Misra et al., 2016) trains two networks for two tasks and shares gradients between some layers controlled by gates. However, these architectures pay insufficient attention to the correlations among tasks, and naive knowledge-sharing strategies may hamper model performance. In this work, we propose HSQ to reveal these correlations in the hope of generating a better representation for each task.
Mixture of experts in deep learning. Efforts have been made to improve models’ performance by scaling up the model size with MoE (Jacobs et al., 1991), which first attempts to combine the outputs of several experts with a gate network. MMOE (Ma et al., 2018) with similar settings further decouples the seesaw phenomenon between several tasks by assigning exclusive gate and tower networks to each task. MOEC (Xie et al., 2023) adopts a clustering loss to impose variance-based constraints on the routing stage, obtaining clusters of experts with more diverse knowledge.
PLE (Tang et al., 2020) adds shared and task-specific experts to MMOE to allow better information sharing between tasks. Traditional MoE imposes a substantial computational burden since all experts activate, even when only some tasks are required. To mitigate such a cost, the sparse MoE (Shazeer et al., 2017) strategy emerges in contrast to the regular dense one. The routing strategy determines which experts contribute to the task output. Zhou et al. (2022); Rosenbaum et al. (2018); Nie et al. (2021); Zuo et al. (2022); Roller et al. (2021); Dai et al. (2022) and others explore various routing strategies, including randomizing, hashing, expert-choosing, etc. Switch Transformer (Fedus et al., 2022) introduces a sparse MoE into the transformer layer to replace the feed-forward neural network. Our method introduces MoE into the multi-label classification field, with task-specialized and shared experts exploiting correlations among tasks. Moreover, we utilize a gate network to enhance positive correlations and suppress negative ones in pursuit of better fusion.
Figure 1: The architecture of the proposed HSQ. After being extracted by the backbone, the input image’s features are then processed by a transformer, where learnable classification tokens are used as the query. \(N\) Hybrid Sharing Layers are employed sequentially, consisting of \(L\) groups of task-specialized experts and a group of shared ones. Individual gates for every group control the weighted outputs. A final classification head is utilized to make predictions.
3 METHOD
The MLC task for images is to find all possible correct labels in a pre-defined label set for a provided image. Thus, our model takes the provided image \(I\) and outputs probability scores \(\hat{L} \in \mathbb{R}^L\) over all \(L\) labels. The proposed HSQ model comprises three main parts, namely, 1) a feature extraction backbone, 2) a query transformer, and 3) a mixture-of-experts hybrid sharing head. The backbone extracts the image representation with a robust, replaceable network, followed by a transformer-based query model to explore the underlying information between the extracted representation and each given label. The Hybrid Sharing Layers are applied to better exploit the correlations between every possible task and suppress potential negative transfer problems.
3.1 Feature Extraction Backbone
Features in any given image are extracted through a feature extraction backbone, and multiple preceding works have contributed to this stage. We employ various well-established models to capture global and local feature information within images more effectively. For a 3-channel input image $I \in \mathbb{R}^{3 \times H_i \times W_i}$, where $H_i$ and $W_i$ are the height and width of the image, a feature extractor is applied to extract a feature map $R \in \mathbb{R}^{C_i \times H \times W}$, where $C_i$ denotes the feature embedding dimension, with a succeeding convolutional layer linearly projecting the feature space from $C_i$ to $C$.
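A minimal PyTorch sketch of this stage, using the `timm` library as one illustrative (and easily replaceable) choice of backbone provider; the names and the output dimension are placeholders.

```python
import torch.nn as nn
import timm  # illustrative backbone library, not mandated by the paper

class Backbone(nn.Module):
    def __init__(self, name="resnet101", c_out=256):
        super().__init__()
        # features_only returns intermediate feature maps instead of logits
        self.extractor = timm.create_model(name, pretrained=True, features_only=True)
        c_in = self.extractor.feature_info.channels()[-1]   # C_i
        self.proj = nn.Conv2d(c_in, c_out, kernel_size=1)   # linear projection C_i -> C

    def forward(self, img):                 # img: (N, 3, H_i, W_i)
        feat = self.extractor(img)[-1]      # deepest feature map, (N, C_i, H, W)
        return self.proj(feat)              # (N, C, H, W)
```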
3.2 Query Transformer
The semantic heterogeneity across labels requires the model to discern and capture unique feature representations specific to each individual task. Inspired by the remarkable performance of the query-based classifier, we employ learnable query tokens for classification to mitigate semantic conflicts between tasks. Specifically, this work employs a transformer to better extract and wrap task-specific underlying features in class-wise learnable tokens.
Given an extracted image representation $R$, a standard encoder-decoder transformer is applied to inspect features for each label. On the encoder side, the image representation from the backbone is flattened into $R \in \mathbb{R}^{C \times HW}$ and processed by $N_e$ encoder layers as tokens. To decouple different labels effectively, we follow Liu et al. (2021a); Lanchantin et al. (2021b) in using learnable tokens as the query. On the decoder side, a learnable token is fed to the transformer decoder as the query for each possible label so that the feature of each label is learned individually. $N_d$ decoder layers are stacked to extract the features of the input representation in accordance with each possible label. The decoder accepts the query tokens $T \in \mathbb{R}^{L \times C}$ for the $L$ possible labels, where $C$ is the embedding dimension of each token. The cross-attention module in the transformer decoder operates on the query from the learnable label tokens (decoder) and the key and value from the extracted features (encoder), facilitating each label to mine its respective representation.
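A compact sketch of this query transformer built on PyTorch's `nn.Transformer`; the layer counts and dimensions are illustrative placeholders rather than the paper's tuned values.

```python
import torch
import torch.nn as nn

class QueryTransformer(nn.Module):
    """Learnable per-label query tokens attend to flattened image features."""
    def __init__(self, num_labels, dim=256, n_enc=1, n_dec=2, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_labels, dim))   # T in R^{L x C}
        self.transformer = nn.Transformer(
            d_model=dim, nhead=heads, num_encoder_layers=n_enc,
            num_decoder_layers=n_dec, batch_first=True)

    def forward(self, feat):                              # feat: (N, C, H, W)
        N, C, H, W = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # (N, HW, C) encoder tokens
        q = self.queries.unsqueeze(0).expand(N, -1, -1)   # (N, L, C) decoder queries
        return self.transformer(src=tokens, tgt=q)        # (N, L, C): one token per label
```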
3.3 Hybrid Sharing Layer
Given the potential semantic correlations among different labels, the features extracted from corresponding tasks may exhibit a positive correlation, providing complementary information to enhance model performance. However, improper exploitation of these correlations through learning jointly may cause performance degeneration since parts of these labels conflict with each other semantically due to their inherent heterogeneity, making them hard to learn jointly. To better leverage the positive correlations while suppressing detrimental impact due to heterogeneity between different tasks, we introduce the MoE mechanism into the multi-label classification area, inspired by the success of Progressive Layered Extraction (PLE) (Tang et al., 2020). Particularly, we employ several shared and task-specialized experts to capture positively correlated features among tasks and task-specific features, respectively, with a gate network adaptively fusing these features. The design of experts and gates can be very flexible and compatible as long as the output shapes are aligned, and in this work, we employ simple but effective linear layers to illustrate our approach.
Figure 2 depicts the details of Hybrid Sharing Layers, where $L$ indicates the number of tasks, i.e., the number of labels in multi-label classification. For any task $t_i$, $i \in \{1, 2, \cdots, L\}$, a group of task-specialized experts $E_{t_i,j}$, $j \in \{1, 2, \cdots, n_t\}$, is assigned to extract features for this task exclusively, where $n_t$ refers to the number of experts per task. Apart from these task-specialized expert groups for every task, a group of shared experts $E_{s,j}$, $j \in \{1, 2, \cdots, n_s\}$ is responsible for gathering global patterns and dispatching them to potentially positively correlated tasks. The outputs of each expert group are harmonized by a gate network, so that each task has customized control over the weights of the task-specialized and shared experts' outputs.
Algorithm 1 outlines the details of the MoE mechanism we apply. Let $X_t \in \mathbb{R}^{N \times L \times d_i}$ be a batched input to the mixture-of-experts layer, where $N$ refers to the batch size and $d_i$ to the input embedding dimension. Let $X_s$ be the input to the shared experts group, with exactly the same shape as $X_t$. The outputs of a Hybrid Sharing Layer comprise task-specialized outputs and a shared output.
Figure 2: The Architecture of a Hybrid Sharing Layer. For $L$ labels, the layer consists of $L$ groups of task-specialized experts and a group of shared experts. The detailed structure of the shared experts is illustrated on the right.

In the task-specialized section, each label (task) is processed independently. For a batched input $X_{t_i} \in \mathbb{R}^{N \times d_i}$ on task $t_i$, a set of task-specialized experts, denoted as $E_{t_i,j} \in \mathbb{R}^{d_i \times d_o}$, is utilized, where $j$ represents the $j$-th expert in the group. $Y_{t_i|t_i,j} = X_{t_i}E_{t_i,j} \in \mathbb{R}^{N \times d_o}$ represents the output of expert $E_{t_i,j}$ on task $t_i$; the subscript of $Y$, separated by $|$, refers to the output task and the expert index, respectively, meaning that it is the output of the $j$-th task-specialized expert of $t_i$ taking input from task $t_i$. Similarly, the group of shared experts, denoted as $E_{s,j} \in \mathbb{R}^{d_i \times d_o}$, is used on task $t_i$: shared experts, which are used in all tasks, also accept $X_{t_i}$ on task $t_i$, and $Y_{t_i|s,j} = X_{t_i}E_{s,j} \in \mathbb{R}^{N \times d_o}$ represents the output of expert $E_{s,j}$ on task $t_i$. A gate network, denoted as $G_{t_i} \in \mathbb{R}^{d_i \times (n_t + n_s)}$, is employed to produce weights for the outputs of the $n_t$ task-specialized experts of $t_i$ and all $n_s$ shared experts: it takes $X_{t_i}$ as input and outputs $\text{Softmax}(X_{t_i}G_{t_i}) \in \mathbb{R}^{N \times (n_t + n_s)}$ as the weights for the experts' outputs. The task output is a weighted mean of all experts with activation $\sigma$, as described in the following equations, where $(k)$ stands for tensor indexing.
$$Y_{t_i} = \text{Concat}(Y_{t_i|t_i,j}, Y_{t_i|s,j}) \in \mathbb{R}^{N \times d_o \times (n_t + n_s)}$$
$$O_{t_i} = \sum_k \left[ \sigma(Y_{t_i})^{(k)} \odot \text{Softmax}(X_{t_i}G_{t_i})^{(k)} \right] \in \mathbb{R}^{N \times d_o}$$
In the shared section, all shared experts $E_s$ and task-specialized experts $E_{t_i}$ are utilized to gather potential features, with a total of $n_s + L \times n_t$ experts. These experts use the shared input $X_s$ as their input. Similar to the task-specialized part, a gate fuses shared and task-specialized features. The shared gate network, denoted as $G_s \in \mathbb{R}^{d_i \times (L \cdot n_t + n_s)}$, harmonizes the outputs from both the shared experts and the task-specialized experts across all tasks with weights derived from the shared input $X_s$. Algorithm 1 describes the shared and task-specialized parts of the Hybrid Sharing Layer.
$$Y_s = \text{Concat}(Y_{s|s,j}, Y_{s|t_i,j}) \in \mathbb{R}^{N \times d_o \times (L \cdot n_t + n_s)}$$
$$O_s = \sum_k \left[ \sigma(Y_s)^{(k)} \odot \text{Softmax}(X_sG_s)^{(k)} \right] \in \mathbb{R}^{N \times d_o}$$
It is worth noting that the shared and task-specialized parts receive the outputs from their respective parts in the previous layer as inputs, except the initial layer, which uses an identical input.
4 EXPERIMENT
We have performed extensive experiments on two datasets, MS-COCO and PASCAL VOC, to verify the superiority of our model. In accordance with preceding works, we choose mean average precision (mAP) as our primary metric. Some experiments also report secondary metrics, including the overall F1-score (OF1) and per-category F1-score (CF1); Top-3 versions of these metrics are also reported. The definitions of these metrics are available in the Appendix.
Algorithm 1: Hybrid Sharing Layer Procedure
Data: Input to shared experts $X_s$; Input to task $t_i$ experts $X_{t_i}$; Shared expert gate $G_s$; Task $t_i$ expert gate $G_{t_i}$; Number of shared experts $n_s$; Number of task-specialized experts per task $n_{t_i}$; Shared experts $E_{s,(i)}$; Task-specialized experts $E_{t_i,(i)}$; Number of labels $L$; Activation function $\sigma$
Result: Shared output $O_s$; Task-specialized output $O_{t_i}$
$Y_s = []$
for $i \leftarrow 1$ to $L$ do
$Y_{t_i} = []$
end
for $i \leftarrow 1$ to $L$ do
for $j \leftarrow 1$ to $n_{t_i}$ do
$Y_{s|t_i,j} = X_s E_{t_i,j}$
$Y_s$.append($Y_{s|t_i,j}$)
$Y_{t_i|t_i,j} = X_{t_i} E_{t_i,j}$
$Y_{t_i}$.append($Y_{t_i|t_i,j}$)
end
end
for $j \leftarrow 1$ to $n_s$ do
$Y_{s|s,j} = X_s E_{s,j}$
$Y_s$.append($Y_{s|s,j}$)
for $i \leftarrow 1$ to $L$ do
$Y_{t_i|s,j} = X_{t_i} E_{s,j}$
$Y_{t_i}$.append($Y_{t_i|s,j}$)
end
end
$A_s \leftarrow \text{Softmax}(X_s G_s)$
$Y'_s \leftarrow \text{Concat}(Y_s)$
$O_s \leftarrow \sum A_s \odot \sigma(Y'_s)$
for $i \leftarrow 1$ to $L$ do
$Y'_{t_i} \leftarrow \text{Concat}(Y_{t_i})$
$A_{t_i} \leftarrow \text{Softmax}(X_{t_i} G_{t_i})$
$O_{t_i} \leftarrow \sum A_{t_i} \odot \sigma(Y'_{t_i})$
end
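A compact PyTorch sketch of Algorithm 1 follows, assuming linear experts as in the paper and an illustrative GELU activation (the paper leaves $\sigma$ generic); shapes follow the notation above.

```python
import torch
import torch.nn as nn

class HybridSharingLayer(nn.Module):
    """Task inputs X_t: (N, L, d_i); shared input X_s: (N, d_i)."""
    def __init__(self, L, d_i, d_o, n_t=1, n_s=4):
        super().__init__()
        self.L = L
        self.task_experts = nn.ModuleList(
            [nn.ModuleList([nn.Linear(d_i, d_o) for _ in range(n_t)]) for _ in range(L)])
        self.shared_experts = nn.ModuleList([nn.Linear(d_i, d_o) for _ in range(n_s)])
        self.task_gates = nn.ModuleList([nn.Linear(d_i, n_t + n_s) for _ in range(L)])
        self.shared_gate = nn.Linear(d_i, L * n_t + n_s)
        self.act = nn.GELU()  # stand-in for the generic activation sigma

    def forward(self, X_t, X_s):
        O_t, Y_s = [], []
        for i in range(self.L):
            x = X_t[:, i]                                        # (N, d_i)
            ys = [e(x) for e in self.task_experts[i]] \
               + [e(x) for e in self.shared_experts]             # n_t + n_s experts
            Y = torch.stack(ys, dim=-1)                          # (N, d_o, n_t + n_s)
            A = torch.softmax(self.task_gates[i](x), dim=-1)     # (N, n_t + n_s)
            O_t.append((self.act(Y) * A.unsqueeze(1)).sum(-1))   # weighted mean
            Y_s += [e(X_s) for e in self.task_experts[i]]
        Y_s += [e(X_s) for e in self.shared_experts]             # L*n_t + n_s in total
        Ys = torch.stack(Y_s, dim=-1)
        As = torch.softmax(self.shared_gate(X_s), dim=-1)
        O_s = (self.act(Ys) * As.unsqueeze(1)).sum(-1)
        return torch.stack(O_t, dim=1), O_s                      # (N, L, d_o), (N, d_o)
```

Stacking several such layers simply feeds each part's output into the corresponding part of the next layer, as noted above.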
4.1 Ablation Study
As shown in Table 1, the number of shared experts greatly influences the performance. We choose ResNet10T as the backbone and Q2L with the same backbone as the baseline. An ablation study is performed on MS-COCO. The input image is fixed at a size of $576 \times 576$. The proposed model, which includes shared experts (HSQ), outperforms the baseline by 1.3% on mAP, demonstrating that including shared experts facilitates the transfer of information between tasks and mitigates negative transfer. The results also reveal that removing shared experts from the model leads to a considerable drop in performance due to the complete cutoff of sharing information among all tasks, underscoring the importance of sharing features in achieving substantial performance improvements. HSQ-Linear indicates that the hybrid sharing layers of the model are replaced by fully-connected layers with the same depth and dimensions, sharing all information across all labels without discriminating task-specialized information. It is demonstrated that the inclusion of shared experts is a crucial factor in enhancing the performance of the proposed model compared with HSQ-Linear. The findings highlight the potential benefits of incorporating shared experts and can inform the development of future multi-label image classification models.
4.2 Performance on the MS-COCO Dataset
MS-COCO (Lin et al., 2014) is a large dataset of 80 object classes originally built for image segmentation and object detection tasks. By extracting object information from the annotations, it is also widely used to evaluate models for multi-label image classification.
Table 1: Ablation Study on MS-COCO. \( n_s, n_t \) stand for the number of shared experts and task-specialized experts per task, † indicates that it is not available in the original work and we implement it in this paper.
| Method | Backbone | Resolution | \( n_s \) | \( n_t \) | mAP(%) |
|-------------------------|------------|------------|-----------|-----------|--------|
| Q2L-R10T† | ResNet10T | 576 × 576 | - | - | 74.8 |
| HSQ-R10T(Ours) | ResNet10T | 576 × 576 | 0 | 1 | 71.5 |
| HSQ-Linear | ResNet10T | 576 × 576 | - | - | 75.7 |
| HSQ-R10T(Ours) | ResNet10T | 576 × 576 | 1 | 1 | 76.3 |
| HSQ-R10T(Ours) | ResNet10T | 576 × 576 | 4 | 1 | 76.1 |
| HSQ-R10T(Ours) | ResNet10T | 576 × 576 | 16 | 1 | **76.5** |
We test our model on MS-COCO to compare it with previous well-known works and state-of-the-art approaches. Results are shown in Table 2. We use ResNet101 (He et al., 2016) and ConvNeXt (Liu et al., 2022) (CvN) as the backbone and set the input resolution to 576 × 576. Backbones noted with -22k are pre-trained on ImageNet-22k. Our HSQ model with CvN as the backbone achieves state-of-the-art performance at an mAP of 92.0%. Among all ResNet101-based approaches, our model outperforms all its counterparts. HSQ-R101 at the resolution of 576 × 576 achieves an mAP of 87.1%. Please note that for this model, we employ two successive Hybrid Sharing Layers of \( d_o = 1024, 512 \), an MLP with one hidden layer of 128 neurons as the gate, and one of 64 as the classifier.
Table 2: Performance (%) on MS-COCO. Bests are in bold. † indicates that it is not available in the original work and we implement it in this paper.
| Method | Backbone | Resolution | mAP | All CF1 | All OF1 | Top3 CF1 | Top3 OF1 |
|--------|----------|------------|-----|---------|---------|----------|----------|
| SRN† | ResNet101 | 224 × 224 | 77.1 | 71.2 | 75.8 | 67.4 | 72.9 |
| ResNet-101† | ResNet101 | 224 × 224 | 78.3 | 72.8 | 76.8 | 69.7 | 73.6 |
| CADM† | ResNet101 | 448 × 448 | 82.3 | 77.0 | 79.6 | 73.5 | 76.0 |
| ML-GCN† | ResNet101 | 448 × 448 | 83.0 | 78.0 | 80.3 | 74.2 | 76.3 |
| KSSNet† | ResNet101 | 448 × 448 | 83.7 | 77.2 | 81.5 | - | - |
| MS-CMA† | ResNet101 | 448 × 448 | 83.8 | 78.4 | 81.0 | 74.3 | 77.2 |
| MCAR† | ResNet101 | 448 × 448 | 83.8 | 78.0 | 80.3 | 75.1 | 76.7 |
| SSGR† | ResNet101 | 576 × 576 | 83.8 | 76.8 | 79.7 | 72.7 | 76.2 |
| C-Trans† | ResNet101 | 576 × 576 | 85.1 | 79.9 | 81.7 | 76.0 | 77.6 |
| ADD-GCN† | ResNet101 | 576 × 576 | 85.2 | 80.1 | 82.0 | 75.8 | 77.9 |
| Q2L-R101† | ResNet101 | 448 × 448 | 84.9 | 79.3 | 81.5 | 73.3 | 75.4 |
| Q2L-R101† | ResNet101 | 576 × 576 | 86.5 | 81.0 | 82.8 | 76.5 | 78.3 |
| SST† | ResNet101 | 448 × 448 | 85.9 | 80.2 | 82.2 | 76.0 | 77.9 |
| ResNet101+TF† | ResNet101 | 576 × 576 | 85.9 | 80.3 | 82.4 | - | - |
| PSD+TF† | ResNet101 | 576 × 576 | 86.7 | 81.2 | 82.9 | - | - |
| SCO-DCNN† | ResNet101 | 576 × 576 | 86.0 | 79.8 | 83.0 | - | - |
| HSQ-R101(Ours) | ResNet101 | 576 × 576 | **87.1**| **81.8** | **83.4** | **91.8** | **93.4** |
| ASL† | TResNetL | 448 × 448 | 86.6 | 81.4 | 81.8 | 75.1 | 77.4 |
| TResNetL† | TResNetL(22k) | 448 × 448 | 88.4 | - | - | - | - |
| Q2L-TResL† | TResNetL | 448 × 448 | 87.3 | 81.6 | 83.1 | 77.0 | 78.5 |
| Q2L-TResL† | TResNetL(22k) | 448 × 448 | 89.2 | 83.8 | 84.9 | 79.0 | 80.2 |
| MlTr-l† | MLTr-l(22k) | 384 × 384 | 88.5 | 83.3 | 84.9 | - | - |
| Swin-L† | Swin-L(22k) | 384 × 384 | 89.6 | 84.8 | 86.1 | 80.0 | 81.1 |
| CvT-w24† | CvT-w24(22k) | 384 × 384 | 90.5 | 85.4 | 86.6 | 80.3 | 81.3 |
| Q2L-SwinL† | Swin-L(22k) | 384 × 384 | 90.5 | 85.4 | 86.4 | 80.5 | 81.2 |
| Q2L-CvT† | CvT-w24(22k) | 384 × 384 | 91.3 | 85.9 | 86.8 | 80.8 | 81.6 |
| ML-Decoder† | TResNet-XL(Open Image) | 640 × 640 | 91.2 | 76.8 | 76.9 | 90.8 | 92.0 |
| HSQ-CvN(Ours) | ConvNeXt(22k) | 576 × 576 | **92.0**| **86.6** | **87.5** | **94.0** | **95.2** |
4.3 PERFORMANCE ON THE VOC DATASET
PASCAL-VOC (Everingham et al., 2015) 2007 is also a well-acknowledged dataset for multi-label image classification. It comprises images of 20 classes and is split into train-val and test sets. Following previous work, we train on the train-val set and validate on the test set of the 2007 version. The results are shown in Table 3.
Table 3: Performance (%) on VOC, in terms of per-label AP and mAP. Bests are in bold.
| Methods | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | mAP |
|------------------|------|------|------|------|--------|-----|-----|-----|-------|-----|-----|
| CNN-RNN (Wang et al., 2016) | 96.7 | 83.1 | 94.2 | 92.8 | 61.2 | 82.1| 89.1| 94.2| 64.2 | 83.6| 84.0|
| VGG+SVM (Simonyan and Zisserman, 2015) | 98.9 | 95.0 | 96.8 | 95.4 | 69.7 | 90.4| 93.5| 96.0| 74.2 | 86.6| 89.7|
| Fev+Lv (Yang et al., 2016) | 97.9 | 97.0 | 96.6 | 94.6 | 73.6 | 93.9| 96.5| 95.5| 73.7 | 90.3| 90.6|
| HCP (Wei et al., 2015) | 98.6 | 97.1 | 98.0 | 95.6 | 75.3 | 94.7| 95.8| 97.3| 73.1 | 90.2| 90.9|
| RDAL (Wang et al., 2017) | 98.6 | 97.4 | 96.3 | 96.2 | 75.2 | 92.4| 96.5| 97.1| 76.5 | 92.0| 91.9|
| RARL (Chen et al., 2018) | 98.6 | 97.1 | 97.1 | 95.5 | 75.6 | 92.8| 96.8| 97.3| 78.3 | 92.2| 92.0|
| SSGRL(576) (Chen et al., 2019c) | 99.7 | 98.4 | 98.0 | 97.6 | 85.7 | 96.2| 98.2| 98.8| 82.0 | 98.1| 95.0|
| MCAR (Gao and Zhou, 2021) | 99.7 | 99.0 | 98.5 | 98.2 | 85.4 | 96.9| 97.4| 98.9| 83.7 | 95.5| 94.8|
| ASL(TResNet-L) (Ridnik et al., 2021a) | 99.9 | 98.4 | 98.9 | 98.7 | 86.8 | 98.2| 98.7| 98.5| 83.1 | 98.3| 95.8|
| ADD-GCN(576) (Ye et al., 2020) | 99.8 | 99.0 | 98.4 | 99.0 | 86.7 | 98.1| 98.5| 98.3| 85.8 | 98.3| 96.0|
| Q2L-TResL (Liu et al., 2021a) | 99.9 | 98.9 | 99.0 | 98.4 | 87.7 | 98.6| 98.8| 99.1| 84.5 | 98.3| 96.1|
| HSQ-CVN(22k) | 99.9 | 99.9 | 97.2 | 99.4 | 84.1 | 99.1| 98.3| 99.1| 84.9 | 100.0| 96.4|
| Methods | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mAP |
|------------------|-------|-----|-------|-------|--------|-------|-------|------|-------|----|-----|
| CNN-RNN (Wang et al., 2016) | 70.0 | 92.4 | 91.7 | 84.2 | 93.7 | 59.8| 93.2| 75.3| 99.7 | 78.6| 84.0|
| VGG+SVM (Simonyan and Zisserman, 2015) | 87.8 | 96.0 | 96.3 | 93.1 | 97.2 | 70.0| 92.1| 80.3| 98.1 | 87.0| 89.7|
| Fev+Lv (Yang et al., 2016) | 82.8 | 95.4 | 97.7 | 95.9 | 98.6 | 77.6| 88.7| 78.0| 98.3 | 89.0| 90.6|
| HCP (Wei et al., 2015) | 80.0 | 97.3 | 96.1 | 94.9 | 96.3 | 78.3| 94.7| 76.2| 97.9 | 91.5| 90.9|
| RDAL (Wang et al., 2017) | 87.7 | 96.8 | 97.5 | 93.8 | 98.5 | 81.6| 93.7| 82.8| 98.6 | 89.3| 91.9|
| RARL (Chen et al., 2018) | 87.6 | 96.9 | 96.5 | 93.6 | 98.5 | 81.6| 93.1| 83.2| 98.5 | 89.3| 92.0|
| SSGRL(576) (Chen et al., 2019c) | 89.7 | 98.8 | 98.7 | 97.0 | 99.0 | 86.9| 98.1| 85.8| 99.0 | 93.7| 95.0|
| MCAR (Gao and Zhou, 2021) | 88.8 | 99.1 | 98.2 | 95.1 | 99.1 | 84.8| 97.1| 87.8| 98.3 | 94.8| 94.8|
| ASL(TResNet-L) (Ridnik et al., 2021a) | 89.5 | 98.8 | 99.2 | 98.6 | 99.3 | 89.5| 99.4| 86.8| 99.6 | 95.2| 95.8|
| ADD-GCN(576) (Ye et al., 2020) | 88.9 | 98.8 | 99.0 | 97.4 | 99.2 | 88.3| 98.7| 90.7| 99.5 | 97.0| 96.0|
| Q2L-TResL (Liu et al., 2021a) | 89.2 | 99.2 | 99.2 | 99.2 | 99.3 | 90.2| 98.8| 88.3| 99.5 | 95.5| 96.1|
| HSQ-CVN(22k) | 91.2 | 99.2 | 99.9 | 99.9 | 99.2 | 88.0| 100.0| 91.6| 99.8 | 97.8| 96.4|
Our model is compared against various established methods and state-of-the-art techniques. Notably, our proposed approach surpasses all its counterparts, achieving an impressive mAP score of 96.4%. The per-class average precision is also presented, with the SOTA performance bolded. Items that improve in comparison with previous works are underscored in the last two rows.
Performance among Labels and Cross-label Comparison Among the 20 available labels, our model exhibits superior performance in 15 of them. Compared to Q2L, another transformer-based model, our model improves 105 pairs of labels, 27 pairs more than Q2L achieved. To provide a visual comparison, we randomly select two labels with moderate performance (i.e., “table” and “train”) and illustrate them in Figure 7. In this graph, each dot represents a specific approach, with dots in the upper-right corner indicating better performance. We further explore the performance difference between two pairs of labels with similar semantics, as depicted in Figure 3 and 4. Our method not only outperforms previous work but also exhibits a smaller absolute cross-label performance gap.
Robust Performance across Multiple Image Scales. In addition to the previous experiments on MS-COCO, we perform extra experiments at different image scales to verify that the performance of our model remains robust as the image resolution decreases (Figure 6). We run experiments at $576 \times 576$, $448 \times 448$ and $384 \times 384$. The results confirm that our model delivers consistently strong performance as the image scale decreases.
4.4 Visualization Results
The proposed model incorporates several gates to harmonize outputs from experts. We verify on PASCAL VOC that different tasks rely on different experts. Figure 5 depicts the weights of the experts' outputs on all 20 tasks and the average load across tasks on a sampled data batch. The last row represents an expert's average load across all tasks. Weights are softmax-activated values of the gate networks' outputs, presented in log scale, along 33 different experts on the X-axis, 32 of which are shared and the last one task-specialized. A lighter block color indicates an expert with more weight in the final harmonized output. It is evident that all tasks focus on different experts. For instance, $E_{s,8}$, $E_{s,30}$, $E_{s,28}$ have the most significant impact on chair, dog and horse, respectively. The weight distributions across experts exhibit variations among tasks, indicating that distinct tasks rely on different sets of experts, each extracting distinctive representations, while the even average loads on experts show that all experts are engaged during inference.

**Figure 5:** Experts load visualization on 20 labels of VOC2007. Each block indicates a weight for an expert on one task. The first 20 rows represent 20 labels from the VOC dataset, and the last row stands for the average load of experts. The x-axis denotes different experts, where experts 1-32 are shared among all tasks, while expert 33 is task-specific. The color represents weight in log space.

**Figure 6:** Performance comparison between Q2L (Liu et al., 2021a) and HSQ on MS-COCO with ConvNeXt (Liu et al., 2022) (22k) as the backbone.

**Figure 7:** AP on VOC2007 (table and train). The upper-right points in the figure perform better. AP is in %.
## 5 Conclusion
In this paper, regarding MLC as an MTL problem, we introduce HSQ, a transformer-based multi-label image classification model that consists of a feature extraction backbone, a query transformer, and Hybrid Sharing Layers, which provide explicit information sharing among tasks with shared and task-specialized experts leveraging inter- and intra-task information, respectively. Task-specialized experts are organized by respective gate networks, allowing each task to accept correlated information from shared experts independently. Shared experts accept input from all tasks, fusing all potentially useful information. Our model mitigates the negative transfer problem that arises when formulating MLC as an MTL problem, where learning several labels jointly may hinder performance improvement. Our experiments demonstrate that HSQ provides a significant improvement on the tested datasets. Furthermore, HSQ can simultaneously enhance per-label performance across multiple labels, mitigate the performance gap among labels, and effectively handle semantic correlation and heterogeneity.
Reproducibility Statement In this paper, we make efforts to provide detailed information to ensure the reproducibility and completeness of our work. Figure 1 illustrates the architecture of our model. Algorithm 1 and Figure 2 provide a clear overview and procedure for our crucial component, the Hybrid Sharing Layer. Section A.3 in the Appendix describes the hyper-parameters and devices we use, including optimizer, learning rate, etc. Section B.2 and B.3 describe details on how we prepare our dataset, including version, partitioning strategy, etc. The code will be available upon acceptance.
ACKNOWLEDGMENTS
This work is partially supported by the National Natural Science Foundation of China (grant no. 62106101), and the Natural Science Foundation of Jiangsu Province (grant no. BK20210180). This work is also partially supported by the AI & AI for Science Project of Nanjing University.
REFERENCES
Shilong Liu, Lei Zhang, Xiao Yang, Hang Su, and Jun Zhu. Query2label: A simple transformer way to multi-label classification, 2021a.
Tal Ridnik, Gilad Sharir, Avi Ben-Cohen, Emanuel Ben-Baruch, and Asaf Noy. ML-Decoder: Scalable and versatile classification head. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 32–41, 2023.
Jin Ye, Junjun He, Xiaojiang Peng, Wenhao Wu, and Yu Qiao. Attention-driven dynamic graph convolutional network for multi-label image recognition. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16, pages 649–665. Springer, 2020.
Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H. Chi. Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, pages 1930–1939, New York, NY, USA, July 2018. Association for Computing Machinery.
Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991.
Feng Zhu, Hongsheng Li, Wanli Ouyang, Nenghai Yu, and Xiaogang Wang. Learning spatial regularization with image-level supervisions for multi-label image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5513–5522, 2017.
Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. CheXNet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. CNN-RNN: A unified framework for multi-label image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2285–2294, 2016.
Zhao-Min Chen, Xiu-Shen Wei, Peng Wang, and Yanwen Guo. Multi-label image recognition with graph convolutional networks. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 5177–5186, 2019a.
Jack Lanchantin, Tianlu Wang, Vicente Ordonez, and Yanjun Qi. General multi-label image classification with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16478–16488, 2021a.
Sen Wu, Hongyang R Zhang, and Christopher Ré. Understanding and improving information transfer in multi-task learning. In International Conference on Learning Representations, 2019.
|
Ggu3cWldTy
|
In this unified framework, is there any evidence of the (possible) completeness of the base methods, even for a single kind of method? In other words, could there be a choice of complete methods $\mathcal{M}(P)$ for problem $P$ such that adding any other method (equivalently, adding an additional objective function) would result in the components of $\alpha$ outside the complete set $\mathcal{M}(P)$ always converging to 0?
|
UNIFIED MIRROR DESCENT:
TOWARDS A BIG UNIFICATION OF DECISION MAKING
Anonymous authors
Paper under double-blind review
ABSTRACT
Decision-making problems, encompassing single-agent, cooperative multi-agent, competitive multi-agent, and mixed cooperative-competitive cases, are ubiquitous in real-world applications. In the past several decades, substantial strides in theoretical and algorithmic advancements have been achieved within these fields. Nevertheless, these fields have been predominantly evolving independently, giving rise to a fundamental question: Can we develop a single algorithm to effectively tackle all these scenarios? In this work, we embark upon an exploration of this question by introducing a unified approach to address all types of decision-making scenarios. First, we propose a unified mirror descent (UMD) algorithm which synergistically integrates multiple base policy update rules. Specifically, at each iteration, the new policy of an agent is computed by weighting the base policies obtained through different policy update rules. One of the advantages of UMD is that only minimal modifications are required when integrating new policy update rules. Second, as the evaluation metric of the resulting policy is non-differentiable with respect to the weights of the base policies, we propose a simple yet effective zero-order method to optimize these weights. Finally, we conduct extensive experiments on 24 benchmark environments, which shows that in over 87% (21/24) games UMD performs better than or on-par with the base policies, demonstrating its potential to serve as a unified approach for various decision-making problems. To our knowledge, this is the first attempt to comprehensively study all types of decision-making problems under a single algorithmic framework.
1 INTRODUCTION
Decision-making problems spanning from single-agent to multi-agent settings are ubiquitous in our daily life (Rizk et al., 2018). In single-agent contexts, reinforcement learning (RL) has proved effective in real-world applications ranging from robotic navigation (Singh et al., 2022) to plasma control in nuclear fusion research (Degrave et al., 2022), and substantial progress on the theoretical underpinnings of policy optimization has been made in recent works (Mei et al., 2020; Zhan et al., 2023; Gaur et al., 2023). Moving beyond single-agent RL, the challenge inherently becomes more intricate, and various methods have been tailored to effectively tackle different multi-agent problems, especially multi-agent cooperative RL (Lowe et al., 2017; Foerster et al., 2018; Rashid et al., 2018; Son et al., 2019; Wang et al., 2021) and zero-sum games (Bailey & Piliouras, 2018; Kangarshahi et al., 2018; Wibisono et al., 2022; Kozuno et al., 2021; Lee et al., 2021; Jain et al., 2022; Ao et al., 2023; Liu et al., 2023; Cen et al., 2023; Sokota et al., 2023). Nevertheless, these fields have been predominantly evolving independently. Furthermore, it remains elusive and unexplored when venturing into more complicated general-sum cases (Song et al., 2022), where the sum of agents' payoffs is non-zero, and mixed cooperative-competitive cases (Xu et al., 2023), where agents in the same team need to cooperate with each other. This motivates us to answer a fundamental question:
Can we leverage a single reinforcement learning algorithm with minimal modifications to handle the decision-making of single-agent, cooperative multi-agent, competitive multi-agent, and mixed cooperative-competitive cases?
As one of the most popular algorithms, mirror descent (MD) (Vural et al., 2022) has demonstrated its power in RL (Tomar et al., 2022) and game theory (Cen et al., 2023; Sokota et al., 2023).
Figure 1: The Y-axis is the normalized improvement of UMD (RS) versus baselines: > 1 means UMD (RS) outperforms the baselines, = 1 means UMD (RS) matches the baselines, and < 1 means UMD (RS) lags behind the baselines. (i) In over 87% (21/24) games UMD (RS) outperforms or matches the baselines. (ii) The numbers of games in which UMD (RS) significantly outperforms the baselines are: 4 (KL), 11 (EU), 7 (ME), and 7 (ML). (iii) For the four baselines, none of them can consistently outperform all the others across all types of decision-making problems.
With different mirror maps, such as the negative entropy and the Euclidean norm, various policy update rules have been induced in the literature. Despite their success in either theoretical convergence guarantees or strong empirical performance, they are typically limited to single-agent RL (Tomar et al., 2022; Zhan et al., 2023; Gaur et al., 2023) and zero-sum games (Bailey & Piliouras, 2018; Kangarshahi et al., 2018; Wibisono et al., 2022; Kozuno et al., 2021; Lee et al., 2021; Jain et al., 2022; Ao et al., 2023; Liu et al., 2023; Cen et al., 2023; Sokota et al., 2023). For general-sum (Bai et al., 2021; Song et al., 2022) and mixed cooperative-competitive settings (Kurach et al., 2020; Xu et al., 2023), the most straightforward idea is to directly apply contemporary MD methods to solve these more complicated scenarios. However, there is no affirmative answer to the question of which one can consistently outperform all the others when applying these MD methods to different decision-making problems. Even under the tabular setting, a comprehensive empirical study of the performance of contemporary MD methods on various types of decision-making problems is lacking.
In this work, we aim to develop a single reinforcement learning algorithm which will be individually adopted by each agent (i.e., decentralized execution) while still effectively handling different types of decision-making problems. As this is the first attempt, we focus on the tabular setting, which, though has been often studied in single-agent and zero-sum games, yet unexplored for more complicated general-sum and mixed cooperative-competitive settings. Our contributions are threefold.
• We propose a unified mirror descent (UMD) algorithm by synergistically integrating multiple policy update rules induced by different mirror maps (e.g., negative entropy and Euclidean norm). More specifically, at each iteration, the new policy of an agent is computed by weighting the base policies derived from the policy update rules (a minimal sketch follows this list). UMD is easy to extend to integrate new policy update rules with only minimal modifications required.
• Optimizing the weights assigned to different base policies, unfortunately, is non-trivial as the evaluation metric of the resulting policy (e.g., the return in single-agent settings) is non-differentiable with respect to these weights. To address this issue, we propose a simple yet effective zero-order hyperparameter optimization (HPO) method to optimize these weights. Different from existing zero-order HPO methods, the performance improvement is used to only determine the update direction of the weights rather than the update magnitude, which is more effective when the evaluation metric converges relatively fast.
• We conduct extensive experiments on 24 benchmark games which are divided into 5 types (Figure 1): single-agent, competitive zero-sum, competitive general-sum, cooperative, and mixed cooperative-competitive. Experimental results show that in over 87% (21/24) games UMD performs better than or on-par with all the base policies, demonstrating its potential to serve as a unified approach for a wide range of decision-making problems. Moreover, to our knowledge, our experiments also provide the first comprehensive empirical study of all types of (tabular) decision-making problems under a single algorithmic framework.
2 RELATED WORK
Mirror descent (MD) (Vural et al., 2022) has demonstrated effectiveness in learning optimal policies in single-agent RL (Tomar et al., 2022) and proved the last-iterate convergence in learning approximate equilibrium in zero-sum games (Bailey & Piliouras, 2018; Kangarshahi et al., 2018; Wibisono et al., 2022; Kozuno et al., 2021; Lee et al., 2021; Jain et al., 2022; Ao et al., 2023; Liu et al., 2023; Cen et al., 2023; Sokota et al., 2023). Moving beyond zero-sum games, the last-iterate convergence of MD was established for several classes of games such as polymatrix and potential games (Anagnostides et al., 2022). In this work, instead of theoretically comparing the policy update rules induced by different mirror maps which could be difficult, particularly for general-sum (Bai et al., 2021; Song et al., 2022) and mixed cooperative-competitive cases (Kurach et al., 2020; Xu et al., 2023), we propose a unified mirror descent (UMD) algorithm which generalizes multiple policy update rules. UMD is easy to extend to integrate new policy update rules with minimal modifications required. Moreover, our experiments also provide the first comprehensive study of all types of (tabular) decision-making problems under a single algorithmic framework.
Our work is also related to zero-order hyperparameter optimization (HPO), which updates the parameters of interest without access to the true gradient and has been extensively adopted in adversarial robustness of deep neural networks (Ilyas et al., 2018), meta-learning (Song et al., 2020), and transfer learning (Tsai et al., 2020). The most related work is (Wang et al., 2022), which applied zero-order optimization methods to neural architecture search (NAS) and established the connection between gradient-based NAS and zero-order methods. In this work, we propose a simple yet effective zero-order HPO method in which the performance improvement is used only to determine the update direction of the weights rather than the update magnitude, which is more effective than the existing methods in (Wang et al., 2022) when the evaluation metric converges relatively fast.
3 PROBLEM STATEMENT
A decision-making problem, either single-agent, cooperative multi-agent, competitive multi-agent, or mixed cooperative-competitive settings, can be described as a decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek & Amato, 2016) formulated as a tuple \((N, S, A, O, \Omega, P, R, \gamma)\). \(N\) is the set of agents. \(S\) is the (finite) set of the states. \(A = \times_{i \in N} A_i\) and \(O = \times_{i \in N} O_i\) where \(A_i\) and \(O_i\) are the (finite) set of actions and observations of agent \(i\), respectively. We denote \(a \in A\) as the joint action of agents where \(a_i \in A_i\) is the action of agent \(i\). \(\Omega = \times_{i \in N} \Omega_i\) where \(\Omega_i : S \times A \rightarrow O_i\) is the observation function, which specifies the observation \(o_i \in O_i\) of agent \(i\) when agents take \(a \in A\) at the state \(s \in S\). \(P : S \times A \times S \rightarrow [0, 1]\) is the transition function which specifies the probability of transiting to \(s' \in S\) when agents take \(a \in A\) at the state \(s \in S\). \(R = \{r_i\}_{i \in N}\) where \(r_i : S \times A \rightarrow \mathbb{R}\) is the reward function of agent \(i\) and \(\gamma \in [0, 1)\) is the discount factor. At time step \(t \geq 0\), each agent has an action-observation history (i.e., a decision point) \(\tau^t_i \in T_i\) where \(T_i = (O_i \times A_i)^*\) and constructs its individual policy \(\pi_i : T_i \times A_i \rightarrow [0, 1]\) to maximize its own return. The joint policy of agents is denoted as \(\pi = (\pi_i)_{i \in N}\). Then, the value function of agent \(i\) is defined as \(V_i(\pi) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r^t_i | s_0, \pi]\) where \(r^t_i\) is the agent \(i\)'s reward at time step \(t\) and \(s_0\) is the initial state. Moreover, at decision point \(\tau^t_i\), the action-value function of an action \(a \in A_i\) is defined as \(Q(\tau^t_i, a, \pi) = \mathbb{E}[\sum_{h=t+1}^{\infty} \gamma^h r^h_i | \tau^t_i, a^t_i = a, \pi]\).
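To make the formalism concrete, the sketch below encodes the Dec-POMDP tuple as a plain Python container. All field names are illustrative choices of ours, not part of any existing library.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class DecPOMDP:
    """Schematic container for the tuple (N, S, A, O, Omega, P, R, gamma)."""
    agents: Sequence[int]                  # N
    states: Sequence[int]                  # S
    actions: Sequence[Sequence[int]]       # A_i for each agent i
    observations: Sequence[Sequence[int]]  # O_i for each agent i
    obs_fns: Sequence[Callable]            # Omega_i(s, a) -> o_i
    transition: Callable                   # P(s' | s, a) -> probability
    rewards: Sequence[Callable]            # r_i(s, a) -> real-valued reward
    gamma: float = 0.99                    # discount factor in [0, 1)
```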
We first introduce the solution concepts used in this work. A policy \(\pi_i\) of agent \(i\) is said to be optimal\(^1\) if it is optimal in every decision point belonging to the agent. In single-agent and cooperative settings, this optimal policy achieves the maximum return for the agent/team. In (multi-agent) competitive and mixed cooperative-competitive settings, we use Nash equilibrium (NE) as the solution.
\(^1\)Precisely, it is soft optimal (Sokota et al., 2023). We omit the prefix soft for brevity.
A joint policy is an NE if each agent’s policy is optimal, given that other agents do not change their policies. Formally, let \( \pi^* = (\pi_i^*)_{i \in N} \) be the NE. Then, agent \( i \)'s policy satisfies:
\[
\pi_i^*(\tau_i^t) = \arg\max_{\pi_i \in \Pi_i} \mathbb{E}_{a \sim \pi_i(\tau_i^t)} Q(\tau_i^t, a; \{\pi_i, \pi^*_{-i}\}) + \epsilon H(\pi_i), \quad \forall \tau_i^t,
\]
(1)
where \( \Pi_i = \Delta(A_i) \) is agent \( i \)'s policy space and \( \Delta(\cdot) \) is the action simplex, \( \pi^*_{-i} \) denote the joint policy of all agents except agent \( i \), \( \epsilon \) is the regularization temperature, and \( H \) is Shannon entropy.
In single-agent and cooperative settings, the evaluation metric for a policy/joint policy is the expected return of the agent/team. In other cases, the evaluation metric for a joint policy is the distance of the policy to the NE, called the NE-Gap. Formally, the NE-Gap of the joint policy \( \pi \) is defined as
\[
\text{NE-Gap}(\pi) = \sum_{i \in N} [V_i(\pi_{i}^{\text{BR}}, \pi_{-i}) - V_i(\pi)],
\]
where \( \pi_{i}^{\text{BR}} \) is the best response (BR) policy of agent \( i \) against other agents. Note that in mixed cooperative-competitive cases, the BR policy should be the team’s BR policy (see Appendix C.2 for more details on the evaluation protocol).
Many methods have been developed to solve the problem (1) for single-agent (Tomar et al., 2022) and multi-agent settings (Sokota et al., 2023). However, for multi-agent settings, most of the existing works typically focus on two-player zero-sum games, while little has been known for more complicated cases including general-sum and mixed cooperative-competitive settings. Nevertheless, notice that Eq. (1) provides a unified description for all the decision-making scenarios as it presents the optimality condition from a single agent’s perspective. This motivates us to develop a unified policy update rule, which, when individually adopted by each agent, offers an efficient method to solve the problem (1), i.e., achieving optimal expected return in single-agent and cooperative settings while finding approximate NE in competitive and mixed cooperative-competitive cases.
4 UNIFIED MIRROR DESCENT
As we aim to develop a unified policy update rule that will be individually adopted by each agent in each decision point, we focus on the policy learning of agent \( i \) in a single decision point \( \tau_i \in T_i \). Henceforth, the index \( i \) and the decision point \( \tau_i \) are omitted as they are clear from the context, and with a slight abuse of notation, we use \( A \) to represent the action set \( A_i \) of agent \( i \). Let \( \pi \in \Pi \) be the agent’s policy and \( Q(a) \) be the action-value of an action \( a \in A \). Note that the joint policy of the other agents \( \pi_{-i} \) is also omitted in the action-value function. Then, we aim to solve the following problem:
\[
\pi^* = \arg\max_{\pi \in \Pi} \mathbb{E}_{a \sim \pi} Q(a) + \epsilon H(\pi).
\]
(2)
In single-agent and two-player zero-sum (i.e., purely competitive) settings, the most commonly used method to solve the problem (2) is mirror descent. Formally, the update rule takes the form
\[
\pi_{k+1} = \arg\max_{\pi \in \Pi} \mathbb{E}_{a \sim \pi} Q_k(a) - f(\pi, \pi_k),
\]
(3)
where \( k \leq K \) is the iteration index, \( Q_k \) is the action-value function induced by \( \pi_k \), and \( f \) is called the regularizer. As each choice of \( f \) induces a specific policy update rule, in Section 4.1 we present four candidates and then propose a new update rule that integrates them with minimal modifications.
4.1 A UNIFIED POLICY UPDATE RULE
Let \( f(\pi, \pi_k) = \epsilon B_\phi(\pi, \rho) + \frac{1}{\eta} B_\phi(\pi, \pi_k) \). Then, we have
\[
\pi_{k+1} = \arg\max_{\pi \in \Pi} \mathbb{E}_{a \sim \pi} Q_k(a) - \epsilon B_\phi(\pi, \rho) - \frac{1}{\eta} B_\phi(\pi, \pi_k),
\]
(4)
where \( B_\phi \) denotes the Bregman divergence with respect to the mirror map \( \phi \), which is defined as
\[
B_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), x - y \rangle
\]
with \( \langle \cdot, \cdot \rangle \) being the standard inner product, \( \epsilon > 0 \) is the regularization temperature, \( \rho \) is the magnet policy (Sokota et al., 2023), and \( \eta > 0 \) is the stepsize (i.e., learning rate). When the mirror map \( \phi \) is taken to be the negative entropy \( \phi(x) = \sum_j x_j \ln x_j \), the Bregman divergence is the well-known KL divergence, and hence, we have
\[
\pi_{k+1} = \arg\max_{\pi \in \Pi} \mathbb{E}_{a \sim \pi} Q_k(a) - \epsilon \text{KL}(\pi, \rho) - \frac{1}{\eta} \text{KL}(\pi, \pi_k).
\]
(5)
It is easy to verify that Eq. (5) admits the following closed-form solution in settings with discrete actions and unconstrained domains: \( \forall a \in A \),
\[
\pi_{k+1}^{KL}(a) \propto [\pi_k(a) \rho(a)^{\epsilon \eta} e^{\eta Q_k(a)}]^{\frac{1}{1+\epsilon \eta}}.
\]
(6)
We use the superscript “KL” to indicate that the update in Eq. (6) is induced by the KL divergence. The magnet policy \( \rho \) is updated through \( \rho_{k+1}(a) \propto \rho_k(a)^{1-\eta} \pi_{k+1}(a)^\eta \). When \( \phi(x) = \frac{1}{2} \|x\|_2^2 \), the Bregman divergence is the squared Euclidean distance. Then, we have
\[
\pi_{k+1} = \arg\max_{\pi \in \Pi} \mathbb{E}_{a \sim \pi} Q_k(a) - \frac{\epsilon}{2} \| \pi - \rho \|_2^2 - \frac{1}{2\eta} \| \pi - \pi_k \|_2^2.
\]
(7)
Similarly, we can derive the closed-form solution to Eq. (7) as follows (see Appendix B for details on the derivation): \( \forall a \in A \),
\[
\pi_{k+1}^{\text{EU}}(a) = \frac{\epsilon \rho(a) + \frac{1}{\eta} \pi_k(a) + Q_k(a) - \frac{1}{|\mathcal{A}|} \sum_{a' \in \mathcal{A}} Q_k(a')}{(\epsilon + \frac{1}{\eta})}.
\]
(8)
We use the superscript “EU” to indicate that Eq. (8) is induced by the Euclidean distance. In addition, following Bailey & Piliouras (2018), we can consider the following optimization problem in each decision point:
\[
\pi_{k+1} = \arg\max_{\pi \in \Pi} \eta \sum_{h=0}^{k} r_h(\pi) - \phi(\pi),
\]
(9)
where \( r_h(\pi) \) is the (expected) reward of the agent taking \( \pi \). Notice that the reward is determined by the environment in single-agent settings, while it depends on both the environment and other agents’ policies in multi-agent settings; more precisely, in multi-agent settings, \( r_h(\pi) = r_h(\pi, \pi_{-i}) \). This yields another two base policy update rules, Exponential Multiplicative Weight Update (MWU_e, ME for short) and Linear Multiplicative Weight Update (MWU_l, ML for short), as follows: \( \forall a \in A \),
\[
\pi_{k+1}^{\text{ME}}(a) = \frac{\pi_k(a)e^{\eta v_k(a)}}{\sum_{a' \in \mathcal{A}} \pi_k(a')e^{\eta v_k(a')}}, \quad \pi_{k+1}^{\text{ML}}(a) = \frac{\pi_k(a)(1 + (e^{\eta} - 1)v_k(a))}{\sum_{a' \in \mathcal{A}} \pi_k(a')(1 + (e^{\eta} - 1)v_k(a'))},
\]
(10)
where \( v_k(a) \) denotes the reward obtained by deviating from the policy \( \pi_k \) to the single action \( a \in A \).
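For concreteness, the following numpy sketch implements the four closed-form base updates of Eqs. (6), (8), and (10) for a single decision point with discrete actions. The clipping-and-renormalizing steps are our own safeguards (the EU and ML updates can leave the simplex), not part of the formulas themselves.

```python
import numpy as np

def kl_update(pi_k, rho, Q_k, eta, eps):
    # Eq. (6): entropy-regularized MD with a magnet policy, computed in log space.
    logits = (np.log(np.clip(pi_k, 1e-12, None))
              + eps * eta * np.log(np.clip(rho, 1e-12, None))
              + eta * Q_k) / (1.0 + eps * eta)
    w = np.exp(logits - logits.max())  # subtract max for numerical stability
    return w / w.sum()

def eu_update(pi_k, rho, Q_k, eta, eps):
    # Eq. (8): the exact solution may leave the simplex, so we clip and
    # renormalize as a simple heuristic.
    pi = (eps * rho + pi_k / eta + Q_k - Q_k.mean()) / (eps + 1.0 / eta)
    pi = np.clip(pi, 1e-12, None)
    return pi / pi.sum()

def me_update(pi_k, v_k, eta):
    # Eq. (10), left: exponential multiplicative weights.
    w = pi_k * np.exp(eta * (v_k - v_k.max()))  # shift v_k for stability
    return w / w.sum()

def ml_update(pi_k, v_k, eta):
    # Eq. (10), right: linear multiplicative weights.
    w = np.clip(pi_k * (1.0 + (np.exp(eta) - 1.0) * v_k), 1e-12, None)
    return w / w.sum()
```

These four outputs are the base policies that the unified update rule below combines.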
With the four choices introduced above, we are ready to present a new policy update rule that integrates these base policies. To this end, we introduce a weight vector denoted by \( \alpha = (\alpha_1, \alpha_2, \alpha_3, \alpha_4) \) with \( \sum_{j=1}^{4} \alpha_j = 1 \) and \( \alpha_j \geq 0, 1 \leq j \leq 4 \). Then, the new policy of the agent is computed by weighting the four base policies using \( \alpha \): \( \forall a \in A \),
\[
\pi_{k+1}(a) = \alpha_1 \pi_{k+1}^{\text{KL}}(a) + \alpha_2 \pi_{k+1}^{\text{EU}}(a) + \alpha_3 \pi_{k+1}^{\text{ME}}(a) + \alpha_4 \pi_{k+1}^{\text{ML}}(a).
\]
(11)
We call Eq. (11) the unified mirror descent (UMD), and the pseudo-code is shown in Algorithm 1.
The intuition behind UMD is twofold. First, although the four base policy update rules have been widely employed to solve different decision-making problems, there is no affirmative answer to the question of which one consistently outperforms all the others in terms of learning performance across all types of decision-making problems. Most existing theoretical results are limited to single-agent settings (Tomar et al., 2022) or two-player zero-sum games (Liu et al., 2023), and only restricted classes of games such as polymatrix and potential games have been considered beyond zero-sum games (Anagnostides et al., 2022). Instead of theoretically comparing these base schemes, which could be difficult (if not impossible), particularly for general-sum (Song et al., 2022) and mixed cooperative-competitive settings (Xu et al., 2023), we propose a unified approach, UMD, that generalizes the base policy update rules. Intuitively, as UMD could inherit the properties of these algorithms, it could surpass or match these base methods in terms of learning performance. Second, UMD can be reduced to any of these base policy update rules by adjusting their weights. For example, when \( \alpha_1 = 1 \), UMD reduces to magnetic mirror descent (MMD), the state-of-the-art method which unifies single-agent RL and two-player zero-sum games. In this situation, UMD could inherit the convergence guarantee of MMD in some cases such as two-player zero-sum games (Sokota et al., 2023).
4.2 ZERO-ORDER HYPERPARAMETER OPTIMIZATION
The key to UMD is to optimize \( \alpha \), which, unfortunately, is a non-trivial task as the evaluation metric, denoted by \( L(\alpha) \) (the expected return or NE-Gap), is non-differentiable with respect to \( \alpha \). To address this issue, we propose two zero-order methods to optimize \( \alpha \), based on two representative techniques: random search, which follows traditional gradient estimation algorithms (Liu et al., 2020), and GradientLess Descent (Golovin et al., 2020), which uses direct search.
Random Search (RS). When updating the hyperparameter $\alpha$, we first sample $M$ candidates $\{u_i\}_{i=1}^M$ from a spherically symmetric distribution $u_i \sim q$. Then, we compute the update as follows:
$$u^* = -\sum_{i=1}^{M} \text{Sgn}\left[\mathcal{L}(\text{Proj}(\alpha + \mu u_i)) - \mathcal{L}(\text{Proj}(\alpha - \mu u_i))\right] u_i,$$
(12)
where $\text{Sgn}(z)$ is defined as: $\text{Sgn}(z) = 1$ if $z > 0$, $\text{Sgn}(z) = -1$ if $z < 0$, otherwise, $\text{Sgn}(z) = 0$. $\mu$ is the smoothing parameter determining the radius of the sphere. $\text{Proj}(\cdot)$ is the projection operation to ensure that $\alpha$ is well-defined. Finally, $\alpha$ is updated as $\alpha \leftarrow \text{Proj}(\alpha + u^*)$. Note that the operation $\text{Sgn}(\cdot)$ plays an important role and differentiates it from vanilla RS without this operation (Wang et al., 2022). Intuitively, in the games where the performance $\mathcal{L}$ converges quickly, the magnitude of $\mathcal{L}(\text{Proj}(\alpha + \mu u_i)) - \mathcal{L}(\text{Proj}(\alpha - \mu u_i))$ would be too small to derive an effective update. In contrast, by using the operation $\text{Sgn}(\cdot)$, the difference between the performance of $\alpha + \mu u_i$ and $\alpha - \mu u_i$ only determines the update direction, not the update magnitude.
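Below is a minimal numpy sketch of this sign-based random search. The simplex projection shown is the standard Euclidean projection, one reasonable choice for $\text{Proj}(\cdot)$ (the paper does not pin down a specific operator), and $\mathcal{L}$ is treated as a quantity to be minimized, e.g., the NE-Gap; for return maximization one would pass the negative return.

```python
import numpy as np

def project_simplex(x):
    # Euclidean projection onto the probability simplex (sort-based algorithm).
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.nonzero(u * np.arange(1, x.size + 1) > css)[0][-1]
    return np.maximum(x - css[idx] / (idx + 1.0), 0.0)

def rs_update(alpha, L, M=8, mu=0.05, rng=None):
    # Sign-based random search (Eq. 12): only the sign of the two-point
    # difference sets the update direction, never its magnitude.
    if rng is None:
        rng = np.random.default_rng(0)
    u_star = np.zeros_like(alpha)
    for _ in range(M):
        u = rng.normal(size=alpha.shape)
        u /= np.linalg.norm(u)  # spherically symmetric unit direction
        diff = L(project_simplex(alpha + mu * u)) - L(project_simplex(alpha - mu * u))
        u_star -= np.sign(diff) * u
    return project_simplex(alpha + u_star)
```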
GradientLess Descent (GLD). Similar to RS, when updating the hyperparameter $\alpha$, we first sample $M$ candidates $\{u_i\}_{i=1}^M$. However, instead of sampling from a fixed radius ($\mu$ in RS), we independently sample the candidates on spheres with various radii uniformly sampled from the interval $[r, R]$. Then, we follow a similar rule to compute the update as follows:
$$u^* = -\sum_{i=1}^{M} \text{Sgn}\left[\mathcal{L}(\text{Proj}(\alpha + u_i)) - \mathcal{L}(\alpha)\right] u_i.$$
(13)
Finally, we have $\alpha \leftarrow \text{Proj}(\alpha + u^*)$. In contrast, in vanilla GLD (Wang et al., 2022), $\alpha$ is updated according to the comparison between $\mathcal{L}(\alpha)$ and $\mathcal{L}(\text{Proj}(\alpha + u_i))$: $\alpha$ steps to the one with the best performance, or stays unchanged if none of them makes an improvement.
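The GLD variant differs only in how candidates are sampled and compared; a sketch under the same assumptions, reusing `project_simplex` from the previous block:

```python
def gld_update(alpha, L, M=8, r=0.01, R=0.2, rng=None):
    # Sign-based GradientLess Descent (Eq. 13): radii drawn uniformly from
    # [r, R]; each candidate is compared against the current point L(alpha).
    if rng is None:
        rng = np.random.default_rng(0)
    base = L(alpha)
    u_star = np.zeros_like(alpha)
    for _ in range(M):
        u = rng.normal(size=alpha.shape)
        u *= rng.uniform(r, R) / np.linalg.norm(u)
        u_star -= np.sign(L(project_simplex(alpha + u)) - base) * u
    return project_simplex(alpha + u_star)
```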
In addition, considering the trade-off between learning performance and learning speed, instead of updating $\alpha$ at every iteration, we update it every $\kappa \geq 1$ iterations (in a two-timescale manner).
**Algorithm 1: Unified Mirror Descent (UMD)**
1. Initialization: $\pi_1(a) = 1/|\mathcal{A}|, \forall a \in \mathcal{A}, \alpha = (0.25, 0.25, 0.25, 0.25)$;
2. for iteration $k = 1, 2, \ldots, K - 1$ do
3. Compute $\pi^{\text{KL}}_{k+1}, \pi^{\text{EU}}_{k+1}, \pi^{\text{ME}}_{k+1}, \pi^{\text{ML}}_{k+1}$ through Eq. (6), (8), and (10), respectively;
4. if $k \% \kappa = 0$ then
5. Sample candidates $\{u_i\}_{i=1}^M$, get $u^*$ through RS in Eq. (12) or GLD in Eq. (13);
6. Update the parameters $\alpha \leftarrow \text{Proj}(\alpha + u^*)$;
7. end if
8. end for
9. Return: $\pi_K(a) = \alpha_1 \pi^{\text{KL}}_K(a) + \alpha_2 \pi^{\text{EU}}_K(a) + \alpha_3 \pi^{\text{ME}}_K(a) + \alpha_4 \pi^{\text{ML}}_K(a), \forall a \in \mathcal{A}$
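Putting the pieces together, a minimal single-decision-point rendering of Algorithm 1 is shown below, building on the update sketches above. The evaluators `q_values`, `dev_values`, and `metric` are hypothetical callables standing in for the environment-dependent estimation of $Q_k$, $v_k$, and $\mathcal{L}(\alpha)$.

```python
def umd(n_actions, q_values, dev_values, metric, K=500, kappa=10,
        eta=0.1, eps=0.05):
    # Algorithm 1 for one decision point: combine the four base updates with
    # weights alpha, refreshed every kappa iterations by sign-based RS.
    pi = np.full(n_actions, 1.0 / n_actions)
    rho = pi.copy()                       # magnet policy
    alpha = np.full(4, 0.25)
    for k in range(1, K):
        Q, v = q_values(pi), dev_values(pi)
        bases = np.stack([kl_update(pi, rho, Q, eta, eps),
                          eu_update(pi, rho, Q, eta, eps),
                          me_update(pi, v, eta),
                          ml_update(pi, v, eta)])
        pi = alpha @ bases                # Eq. (11): weighted combination
        rho = rho ** (1.0 - eta) * pi ** eta
        rho /= rho.sum()                  # magnet-policy update
        if k % kappa == 0:
            alpha = rs_update(alpha, metric)  # two-timescale weight update
    return pi
```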
### 5 EXPERIMENTS
In this section, we investigate our framework on a set of benchmark environments. We first present the experimental setups, and then the results and analysis to provide insights into our framework.
#### 5.1 EXPERIMENTAL SETUPS
We consider 24 games which are divided into 5 types: single-agent, cooperative, competitive zero-sum, competitive general-sum, and mixed cooperative-competitive (MCC, for short). We construct the single-agent and MCC environments by modifying some zero-sum games. All the games are implemented in OpenSpiel (Lanctot et al., 2019). For single-agent and cooperative environments, we use the return to measure the quality of the policy/joint policy. For the other cases, we use the NE-Gap as the measure. In addition, to provide a clear overview of the results (Figure 1), we compute the normalized improvement of UMD versus the baselines (take KL as an example): $V(\pi_{UMD}^*) / V(\pi_{KL}^*)$ for single-agent and cooperative environments, and $(\text{NE-Gap}(\pi_{Random}) - \text{NE-Gap}(\pi_{UMD})) / (\text{NE-Gap}(\pi_{Random}) - \text{NE-Gap}(\pi_{KL}))$ for the other environments. The methods we compare are UMD (RS), UMD (GLD), and the four base policies: KL, EU, ME, and ML. For single-agent cases, we also include Q-learning as a baseline. All experiments are performed on a machine with a 24-core Intel(R) Core(TM) i9-12900K and an NVIDIA RTX A4000, and the results are obtained with 3 random seeds. Full experimental details on the games, evaluation protocol, and hyperparameters can be found in Appendix C.
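As a small worked example, the normalized-improvement computation used in Figure 1 amounts to the following (argument names are ours):

```python
def normalized_improvement(umd, base, random=None):
    # Return ratio for single-agent/cooperative games; NE-Gap reduction
    # ratio relative to a random policy for all other game types.
    if random is None:
        return umd / base
    return (random - umd) / (random - base)
```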
5.2 RESULTS AND ANALYSIS
Figure 1 presents the normalized improvement of UMD (here, we refer to UMD (RS)) versus baselines (the results for UMD (GLD) can be found in Appendix D.1). Several conclusions can be drawn from the results. (i) In over 87% (21/24) games UMD performs better than or on-par with baselines, demonstrating its effectiveness in solving various types of decision-making problems. (ii) In zero-sum games, UMD matches KL in all the games except Leduc. From the results, we hypothesize that UMD inherits the convergence guarantee of KL in two-player zero-sum games (Sokota et al., 2023). (iii) For some games beyond zero-sum settings, UMD can outperform the baselines. For example, in Auction, Tiny_Hanabi_B, MCC_Kuhn_A, and MCC_Kuhn_B, UMD significantly outperforms KL, which has not been observed in previous works. (iv) For the four baselines, none of them can consistently outperform all the others across different types of games, which supports the motivation of this work. For example, in Leduc, KL outperforms EU (KL > UMD > EU), while EU performs better than KL (EU > UMD > KL) in MCC_Kuhn_B.
We present the learning curves of different methods in different types of games in Figure 2 to Figure 6 (the quantitative results are given in Appendix D.1). (i) In single-agent cases (Figure 2), all the methods are comparable and outperform the vanilla Q-learning algorithm, showing that they can effectively solve single-agent problems. (ii) In cooperative settings (Figure 3), all the methods except EU and UMD (GLD) in Tiny_Hanabi_A can converge to the optimal value of the game, showing that they are effective in solving cooperative games. Surprisingly, in game B, C, and D, KL converges slower than other methods. (iii) In competitive zero-sum games (Figure 4), KL outperforms other methods in Kuhn and Leduc. For all the other games, UMD (RS) and KL can consistently converge to the approximate NE (low NE-Gap), while other methods can struggle or even diverge in some of the games. Typically, UMD (RS) performs better than UMD (GLD). In addition, although KL is the state-of-the-art method in (two-player) zero-sum games, it converges slower than UMD and other methods in some of the games. (iv) In competitive general-sum games (Figure 5), a surprising observation is that both UMD (RS) and UMD (GLD) can consistently converge to approximate NE in all the games, and in Auction, they significantly outperform KL and other methods. (v) In mixed cooperative-competitive cases (Figure 6), UMD (RS) can consistently converge to the approximate NE in all the games. In MCC_Kuhn_A and MCC_Kuhn_B, UMD (RS) significantly surpasses KL both in terms of convergence speed and the final NE-Gap. In summary, UMD (RS) can effectively solve all types of (tabular) decision-making problems, i.e., either achieving the optimal return in single-agent and cooperative cases or finding approximate NE in other cases. Moreover, in some of the games, UMD (RS)/UMD (GLD) can significantly outperform all the baselines.


The key to UMD is the optimization of $\alpha$. Intuitively, an effective HPO method should be able to identify which one of the policy update rules performs best and then assign a larger weight to that rule. To verify that our proposed RS/GLD satisfies this requirement, we present the performance of different methods along with the evolution of the weights of different baselines over the learning process in Figure 7. In the left figure, we can see that when using vanilla RS/GLD (v-RS/v-GLD), UMD cannot converge to the approximate NE of the game, showing that the proposed RS/GLD is indispensable for the success of UMD. In the middle left figure, we can see that at the early stage of learning, the NE-Gap of all four base policies decreases. However, at the later stage, EU converges to a high NE-Gap. In this situation, the weight assigned to EU should be decreased, which is exactly what we observe for RS and GLD in the middle right figure, demonstrating that RS and GLD can quickly adjust the weights assigned to the base policies. In the right figure, we can see that vanilla RS and GLD cannot efficiently leverage the performance difference between the base policies to optimize the weights, leading to the failure to find the approximate NE of the game. In addition, RS typically performs better than GLD. We hypothesize that RS is more efficient in exploring the parameter space as it uses more samples ($\alpha + \mu u_i$ and $\alpha - \mu u_i$) to get the update
direction $u^*$ (2 times more than GLD which only involves $\alpha + u_i$). It is worth noting that although RS uses more samples, it does not introduce much extra computational cost compared to GLD. In Appendix D.3 we present the wall-clock time of one iteration of each method to support this claim. In fact, UMD (RS) and UMD (GLD) are still computationally efficient even compared to the four baselines. Figure 7 is obtained in Goofspiel, and more results can be found in Appendix D.2.

We also perform ablation studies on the parameters in RS/GLD: $\kappa$, $M$, and $\mu$. Here, we only focus on $\mu$, and the results are shown in Figure 8. For single-agent and cooperative cases, $\mu$ has very little influence on the learning performance, while for other settings, different games may have different optimal $\mu$. It is worth noting that though different games may require different $\mu$, it is the only hyperparameter that requires some effort for tuning, which is also one of the advantages of our approach. For $\kappa$ and $M$, the results can be found in Appendix D.2.

6 CONCLUSIONS AND FUTURE DIRECTIONS
In this work, we make the first attempt to develop a single algorithm to effectively handle all types of decision-making problems under the tabular setting, including single-agent, cooperative, competitive, and mixed cooperative-competitive cases. The contributions are threefold. First, we propose a unified mirror descent (UMD) algorithm by weighting multiple base policies induced by different mirror maps to compute the new policy of an agent at each iteration. UMD is easy to extend to include new policy update rules with only minimal modifications required. Second, to optimize the weights of different base policies, we devise a simple yet effective zero-order method in which the improvement of learning performance is used to only determine the update direction of the weights rather than the update magnitude, which is more efficient than existing zero-order methods. Finally, we perform extensive experiments on 24 benchmark environments. The results show that in over 87% games UMD performs better than or on-par with baselines, demonstrating that UMD could serve as an effective unified approach for all types of (tabular) decision-making problems. Last but not least, our experiments, to our knowledge, also provide the first comprehensive empirical study of all types of (tabular) decision-making problems under a single algorithmic framework.
In this work, we focus on decision-making problems under the tabular setting. Thus, the environments in our experiments are relatively small and simple. In future work, we may consider more complex environments where tabular representations may struggle (e.g., high memory and time requirements, or a state space too large to enumerate). In this situation, we need to consider a more powerful representation of the policy, such as a neural network-based policy (Mnih et al., 2015); devising a single deep reinforcement learning (deep RL) algorithm to handle all types of (not restricted to tabular but more complex) decision-making problems is therefore a necessary next step.
REFERENCES
Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, and Tuomas Sandholm. On last-iterate convergence beyond zero-sum games. In ICML, pp. 536–581, 2022.
Ruicheng Ao, Shicong Cen, and Yuejie Chi. Asynchronous gradient play in zero-sum multi-agent games. In ICLR, 2023.
Yu Bai, Chi Jin, Huan Wang, and Caiming Xiong. Sample-efficient learning of Stackelberg equilibria in general-sum games. In NeurIPS, pp. 25799–25811, 2021.
James P Bailey and Georgios Piliouras. Multiplicative weights update in zero-sum games. In EC, pp. 321–338, 2018.
Shicong Cen, Yuejie Chi, Simon Shaolei Du, and Lin Xiao. Faster last-iterate convergence of policy optimization in zero-sum Markov games. In ICLR, 2023.
Christian Schroeder de Witt, Tarun Gupta, Denys Makoviychuk, Viktor Makoviychuk, Philip HS Torr, Mingfei Sun, and Shimon Whiteson. Is independent learning all you need in the StarCraft multi-agent challenge? arXiv preprint arXiv:2011.09533, 2020.
Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de Las Casas, et al. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature, 602(7897):414–419, 2022.
Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. Counterfactual multi-agent policy gradients. In AAAI, pp. 2974–2982, 2018.
Jakob Foerster, Francis Song, Edward Hughes, Neil Burch, Iain Dunning, Shimon Whiteson, Matthew Botvinick, and Michael Bowling. Bayesian action decoder for deep multi-agent reinforcement learning. In ICML, pp. 1942–1951, 2019.
Mudit Gaur, Amrit Singh Bedi, Di Wang, and Vaneet Aggarwal. On the global convergence of natural actor-critic with two-layer neural network parametrization. arXiv preprint arXiv:2306.10486, 2023.
Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, and Qiuyi Zhang. Gradientless descent: High-dimensional zeroth-order optimization. In ICLR, 2020.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.
Chloe Ching-Yun Hsu, Celestine Mendler-Dünner, and Moritz Hardt. Revisiting design choices in proximal policy optimization. arXiv preprint arXiv:2009.10897, 2020.
Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In ICML, pp. 2137–2146, 2018.
Rahul Jain, Georgios Piliouras, and Ryann Sim. Matrix multiplicative weights updates in quantum zero-sum games: Conservation laws & recurrence. In NeurIPS, pp. 4123–4135, 2022.
Ehsan Asadi Kangarshahi, Ya-Ping Hsieh, Mehmet Fatih Sahin, and Volkan Cevher. Let’s be honest: An optimal no-regret framework for zero-sum games. In ICML, pp. 2488–2496, 2018.
Tadashi Kozuno, Pierre Ménard, Remi Munos, and Michal Valko. Model-free learning for two-player zero-sum partially observable Markov games with perfect recall. In NeurIPS, pp. 11987–11998, 2021.
Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zając, Olivier Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, et al. Google research football: A novel reinforcement learning environment. In AAAI, pp. 4501–4510, 2020.
|
p5SurcLh24
|
Below, I have a few questions and constructive feedback to the authors: The EPS is defined as the *smallest* possible set of policies that are not provably Bayes-suboptimal. Why is it the smallest set?
|
UNIFYING MODEL-BASED AND MODEL-FREE REINFORCEMENT LEARNING WITH EQUIVALENT POLICY SETS
Anonymous authors
Paper under double-blind review
ABSTRACT
Model-based and model-free reinforcement learning (RL) each possess relative strengths that prevent either algorithm from strictly dominating the other. Model-based RL often offers greater data efficiency, as it can use models to evaluate many possible behaviors before choosing one to enact. However, because models cannot perfectly represent complex environments, agents that rely too heavily on models may suffer from poor asymptotic performance. Model-free RL avoids this problem at the expense of data efficiency. In this work, we seek a unified approach to RL that combines the strengths of both algorithms. To this end, we propose equivalent policy sets (EPS), a novel tool for quantifying the limitations of models for the purposes of decision making. Based on this concept, we propose Unified RL, a novel RL algorithm that uses models to constrain model-free RL to the set of policies that are not provably suboptimal, according to model-based bounds on policy performance. We demonstrate across a range of benchmarks that Unified RL effectively combines the relative strengths of both model-based and model-free RL, in that it achieves comparable data efficiency to model-based RL and exceeds the data efficiency of model-free RL, while achieving asymptotic performance similar or superior to that of model-free RL. Additionally, we show that Unified RL outperforms a number of existing state-of-the-art model-based and model-free RL algorithms, and can learn effective policies in situations where either model-free or model-based RL alone fail.
1 INTRODUCTION
Recent successes in model-based reinforcement learning (MBRL) have demonstrated the enormous value that learned representations of environmental dynamics (i.e., models) can confer to autonomous decision making. For example, models allow agents to evaluate many possible future behaviors, without requiring additional expensive and potentially dangerous environmental interactions. This process is referred to as planning, and is a cornerstone of autonomous decision making. Models also hold the potential to facilitate cross-task knowledge transfer (Killian et al., 2017) and intelligent exploration (Lowrey et al., 2018; Sekar et al., 2020; Mehta et al., 2021, 2022). In practice, MBRL algorithms often achieve higher data efficiency than their model-free counterparts (Deisenroth & Rasmussen, 2011; Heess et al., 2015; Gal et al., 2016a; Chua et al., 2018; Janner et al., 2019; Hafner et al., 2019, 2020; Lin et al., 2023).
Although useful, models come with their own set of drawbacks. Because models typically possess limited representational capacity, they will always fall short of capturing the full complexity of the real environmental dynamics, which may help explain why MBRL often fails to match the asymptotic performance of model-free RL (MFRL) (Wang et al., 2019). This limitation of models is exacerbated by the objective mismatch problem (Lambert et al., 2020): model-learning objectives typically used in MBRL, which are based on some generic measure of accuracy, are often misaligned with the overall goal of increasing reward, which has been shown to negatively impact MBRL performance in practice (Agarwal et al., 2021).
Several recent approaches have attempted to address objective mismatch by deriving model-learning objectives that are more aligned with the overall RL objective, to enable learned models to be more
useful for policy improvement (Joseph et al., 2013; Luo et al., 2018; Lambert et al., 2020; Rajeswaran et al., 2020; Chow et al., 2020; Grimm et al., 2020; D’Oro et al., 2020; Eysenbach et al., 2022; Ghugare et al., 2022). However, because practical models will always differ from the true dynamics by some degree, we hypothesize that over-reliance on models will invariably result in some degree of suboptimality. For this reason, we take an alternative approach to addressing the objective mismatch problem. We seek to develop agents that understand the limitations of their models, allowing them to switch to an alternative (e.g., a model-free) learning paradigm in situations where models are not useful for policy improvement. We hypothesize that such an agent would enjoy the benefits of both model-based and model-free learning. To this end, we propose equivalent policy sets (EPS), a novel tool for quantifying the limitations of a model for estimating optimal behaviors.
We define the EPS as the set of all policies that are not provably suboptimal, using bounds on the performance of candidate policies, computed using the model. Intuitively, the EPS captures the usefulness of a particular model class for discerning optimal from suboptimal policies.
Based on the concept of the EPS, we propose Unified RL, a principled approach to combining MBRL and MFRL that takes advantage of their relative strengths. Unified RL constrains the policy found by MFRL (e.g., soft actor-critic) to lie within the set of non-provably suboptimal policies (the EPS). Here, models are used as a sort of “pre-filtering” step that eliminates provably suboptimal policies from consideration by MFRL. Unified RL leverages the ability of models to rapidly rule out suboptimal candidate behaviors, while avoiding the limitations on asymptotic performance that they introduce.
We show empirically that Unified RL is able to combine the benefits of both model-based and model-free RL on a range of challenging continuous control benchmarks. Furthermore, we show that Unified RL outperforms a wide range of state-of-the-art model-based and model-free RL algorithms. Finally, we show that Unified RL is robust to failure of either its model-based or model-free components. Specifically, when distractors are introduced that prevent the agent from learning well-aligned models, Unified RL continues to make learning progress using model-free policy updates. On the other hand, when poorly selected model-free hyperparameters are used that cause MFRL to fail, Unified RL resorts to MBRL.
2 BACKGROUND
We represent the environment with which the agent interacts as a Markov decision process (MDP) with initial state distribution $s_0 \sim p_0(s_0)$, state transition dynamics $s_{t+1} \sim T(s_{t+1}|s_t, a_t)$, reward function $r_t \sim R(r_t|s_t, a_t)$ for $t \in \{0, ..., T\}$, and discount factor $\gamma \in [0, 1]$. For simplicity, we assume $\gamma = 1$ and hence ignore it in future exposition. We consider continuous control problems, wherein the agent learns a policy $\pi \in \Pi$ where $\pi : S \times A \rightarrow [0, \infty)$ is a state-dependent probability density function over a real-valued action space.
In this work, we formulate the RL problem in Bayesian terms, although the approach is not restricted to using Bayesian algorithms. We are therefore concerned with the Bayesian posterior over state transition and reward functions, given by $p(w|D) = p(D|w)p(w)/p(D)$, where $D$ comprises the data observed thus far in the environment, $w$ denotes a parameter vector that parameterizes both the state transition and reward functions, and $p(w)$ is our prior. The prior represents our belief about the dynamics before observing data $D$, and can be informed by domain-specific knowledge or from previous tasks. In this work we do not assume that we possess any prior knowledge, and therefore choose a generic prior (Sec. 3.2). We denote our models of the state transition function and reward function, conditioned on a certain parameter vector $w$, as $p(s'|s, a, w)$ and $p(r|s, a, w)$, respectively. The distribution of trajectories $\tau$ given a particular policy $\pi$ and parameters $w$ is given by $p(\tau|\pi, w) = p_0(s_0)\pi(a_0|s_0)p(r_0|s_0, a_0, w)\prod_{t=1}^{T} p(s_t|s_{t-1}, a_{t-1}, w)\pi(a_t|s_t)p(r_t|s_t, a_t, w)$. Our inferred posterior distribution over trajectories given the available data $D$ and a policy $\pi$ is given by $p(\tau|\pi, D) = \mathbb{E}_{p(w|D)}[p(\tau|\pi, w)]$. We denote the expected return of $\pi$ given a particular parameter vector $w$ as $J(\pi|w) = \mathbb{E}_{p(\tau|\pi, w)}\left[\sum_{t=0}^{T} r_t|\pi, w\right]$. Finally, we define the Bayesian return of a policy $\pi$ to be the expected sum of rewards achieved by $\pi$, in expectation over our Bayesian posterior over trajectories
$$J(\pi|D) = \mathbb{E}_{p(\tau|D, \pi)}\left[\sum_{t=0}^{T} r_t|\pi, D\right].$$ (1)
This is the quantity that our approach to Bayesian RL attempts to maximize. We refer to a policy that maximizes the Bayesian return \( \pi^* \in \arg\max_{\pi \in \Pi} J(\pi|D) \) as the Bayes-optimal policy. Similarly, we refer to any policy \( \pi \notin \arg\max_{\pi \in \Pi} J(\pi|D) \) as Bayes-suboptimal.
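In practice, the Bayesian return of Eq. (1) can be approximated by nested Monte-Carlo sampling. A minimal sketch is given below, where `sample_w` (posterior sampling) and `rollout` (trajectory simulation under the model) are hypothetical helpers, and the exact posterior is replaced by its tractable approximation in practice.

```python
import numpy as np

def bayes_return(policy, sample_w, rollout, K=32, M=8):
    # Average return over K posterior samples of the dynamics parameters w
    # and M model rollouts per sample; rollout(policy, w) is assumed to
    # yield (state, action, reward) tuples for one trajectory.
    returns = []
    for _ in range(K):
        w = sample_w()
        for _ in range(M):
            returns.append(sum(r for _, _, r in rollout(policy, w)))
    return float(np.mean(returns))
```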
For many interesting model classes, exact Bayesian posteriors are intractable, and must therefore be approximated with some tractable distribution family. We denote approximate posteriors with \( q(w; \theta) \in Q \), where \( \theta \) denotes the parameters of the distribution. For example, if \( q \) is a multivariate normal distribution, \( \theta \) may contain the mean vector and variance matrix. We henceforth refer to \( q \) as our model, because it encodes our learned representation of (our posterior over) the environmental dynamics.
3 UNIFYING MODEL-BASED AND MODEL-FREE REINFORCEMENT LEARNING
Here we introduce the notion of equivalent policy sets (EPS) as a tool for quantifying the limitations of models for the purposes of approximating optimal policies. Subsequently, we describe Unified Reinforcement Learning, which builds on the concept of the EPS to combine the strengths of model-based and model-free RL.
3.1 EQUIVALENT POLICY SETS
To achieve our ultimate goal of developing agents that can flexibly switch between model-free and model-based learning, agents must understand the limitations of models for evaluating and improving policies. To this end, we propose equivalent policy sets (EPS) as a tool for quantifying the usefulness of a model for discerning optimal from suboptimal policies. More precisely, we define the EPS \( \Pi_E(\theta, D) \subseteq \Pi \) to be the set of policies that are not provably Bayes-suboptimal, using a model with parameters \( \theta \) and available data \( D \). To prove the suboptimality of a particular policy \( \pi \), we use our model to compute a lower bound on (a function \( f \) of) the improvement in Bayesian return of a new policy \( \pi' \) over \( \pi \),
\[
L(\pi, \pi', \theta, D) \leq f\left(J(\pi'|D) - J(\pi|D)\right),
\]
(2)
where \( f \) is a monotonically increasing function. Although one could use any such \( L \), in this work we take \( L \) to be of the form
\[
L(\pi, \pi', \theta, D) = \mathbb{E}_{w \sim q(w; \theta)} \left[ f \left( \frac{p(D|w)p(w)}{q(w; \theta)} \left(J(\pi'|w) - J(\pi|w)\right) \right) \right],
\]
(3)
which we derive in Sec. A.1 of the Appendix using Jensen’s inequality. This particular form of \( L \) requires \( f \) to be concave, and is closely related to \( f \)-divergences, a generalization of the widely used KL and Rényi divergences (Li & Turner, 2016; Wan et al., 2020). In the closely-related field of variational inference, the effect of the choice of \( f \) is an active area of research, and gives rise to various divergence metrics (Kingma & Welling, 2013; Burda et al., 2015; Li & Turner, 2016; Dieng et al., 2017; Chen et al., 2018; Wan et al., 2020). In this work we primarily consider \( f = \log \), as this is the most well-studied choice of \( f \) (Blei et al., 2017). \( L \) is tight (i.e., inequality (2) holds with equality) when \( q(w; \theta) \propto p(D|w)p(w)(J(\pi'|w) - J(\pi|w)) \).
Figure 1: Unified RL combines model-based and model-free RL using the equivalent policy set (EPS). At each iteration, data from a shared buffer are used to update a model-based policy and a model-free policy. We then check whether the model-free policy is contained within the EPS, that is, the set of policies that cannot be proven to be suboptimal, according to bounds on policy performance computed using the model. If the model-free policy is within the EPS, it is used to collect another episode of data in the environment, which is added to the data buffer. Otherwise, the model-based policy is used to collect more data.
Note that, although $L$ depends on the parameters $\theta$ of the approximate posterior $q$, inequality (2) bounds the exact difference in Bayesian return between $\pi'$ and $\pi$.
Inequality (2) allows us to prove the suboptimality of any policy $\pi$ for which there exists a new policy $\pi'$ (in the same domain as $\pi$) such that $L(\pi, \pi', \theta, D) > f(0)$, because this condition implies that $\pi'$ achieves higher Bayesian return than $\pi$, and therefore $\pi$ is not Bayes-optimal. We can therefore use $L$ to construct the EPS, which we define to be the set of all policies $\pi$ for which there does not exist a provably better $\pi' \in \Pi$, using model parameters $\theta$ and data $D$,
$$\Pi_E(\theta, D) = \{\pi : \max_{\pi' \in \Pi} L(\pi, \pi', \theta, D) \leq f(0)\}. \quad (4)$$
**Equivalent Policy Sets for Understanding the Limitations of Models**
In the limit of an infinitely expressive model (that is, $q$ can represent any posterior over $w$), $L$ is tight, meaning that the EPS reduces to a singleton set that contains only the Bayes-optimal policy. However, limitations in modeling resources make this practically infeasible, and in general the model will always contain some inaccuracies. Existing approaches to MBRL largely have not dealt with this problem, and instead treat the model's approximation of the optimal policy as ground-truth. This can result in highly suboptimal policies, especially when the model is misaligned (Lambert et al., 2020; Agarwal et al., 2021). The EPS addresses this problem by quantifying how inaccuracies in our imperfect model translate into uncertainty about the optimal policy, where this uncertainty is represented as a set of policies that may be optimal, according to our model. Limitations in model class prevent $q$ from matching the ideal posterior, causing $L$ to be loose and thereby increasing the size of the EPS. By maintaining this set, we avoid over-reliance on the model, and open the possibility of using an alternative learning paradigm such as MFRL to choose a policy to deploy. This intuition provides the basis for Unified RL, which we describe in the next section.
3.2 UNIFIED REINFORCEMENT LEARNING
Unified RL builds on the concept of the EPS introduced in the previous section, and is summarized in Alg. 1 and Fig. 1. Unified RL can be thought of as a model-free RL algorithm, where the policy is constrained to lie within the EPS. Through this constraint, Unified RL is able to eliminate many provably suboptimal policies from consideration, thus retaining the data-efficiency benefits of MBRL. However, because Unified RL uses the model only to identify the set of policies that may be optimal rather than to estimate a single optimal policy, it avoids over-reliance on the model, and thus avoids the objective mismatch problem associated with typical MBRL approaches. Constraining the model-free policy to lie within the EPS does not in principle prevent MFRL from discovering the Bayes-optimal policy, as the Bayes-optimal policy will always lie within the EPS regardless of the model used to compute the EPS.
We take a simple approach to combining model-based and model-free RL using the EPS, and leave more complex variants to future work. Before each episode, an MBRL and an off-policy MFRL algorithm use the available data $D$ to compute what we refer to as the model-based policy $\pi^{MB}$ and the model-free policy $\pi^{MF}$, respectively. Subsequently, the agent checks whether the model-free policy is within the EPS; that is, it checks whether or not a lower bound can be constructed using the model that proves that the model-based policy achieves higher Bayesian return than the model-free policy. If the model-free policy is within the EPS, the agent executes it in the real environment to collect one episode of new data. If not, the agent instead executes the model-based policy, which is guaranteed to be within the EPS. The new data are then added to the shared data buffer, and the entire process repeats.
**Algorithm 1 Unified RL**
1: **Given:** initial dataset $D$
2: **for** each iteration **do**
3: $\pi^{MB}, \theta = \text{MBRL}(D)$
4: $\pi^{MF} = \text{SAC}(D)$
5: Estimate $\hat{L}(\pi^{MF}, \pi^{MB}, \theta, D)$
6: **if** $\hat{L} > -\infty$ **then**
7: $\pi = \pi^{MB}$
8: **else**
9: $\pi = \pi^{MF}$
10: **end if**
11: **for** time step $t=0,...,T$ **do**
12: $a_t \sim \pi(a_t|s_t)$
13: $s_{t+1}, r_t = \text{env.step}(a_t)$
14: $D \leftarrow D \cup \{s_t, a_t, r_t, s_{t+1}\}$
15: **end for**
16: **end for**
Note that this approach does not require the EPS to be represented explicitly. Instead, the EPS is maintained implicitly in the sense that the lower bound in equation (2) provides a condition that allows one to check whether a given policy is within the EPS. We describe the individual components of our approach in more detail below, with additional details in Sec. A.2 of the Appendix.
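A minimal Python rendering of this loop (Alg. 1) is given below. `mbrl`, `sac`, and `estimate_L` are hypothetical stand-ins for the model-based learner, the soft actor-critic update, and the Monte-Carlo bound of Eq. (6), and the environment interface is simplified.

```python
import numpy as np

def unified_rl(env, mbrl, sac, estimate_L, n_iters, episode_len):
    # Alg. 1: before each episode, deploy whichever policy survives the
    # EPS membership check, then roll it out and grow the shared buffer.
    D = []
    pi_mf = None
    for _ in range(n_iters):
        pi_mb, theta = mbrl(D)      # model-based policy + posterior params
        pi_mf = sac(D)              # off-policy model-free policy
        L_hat = estimate_L(pi_mf, pi_mb, theta, D)
        # A finite L_hat proves pi_mf is Bayes-suboptimal (outside the EPS),
        # so fall back to the model-based policy; otherwise deploy pi_mf.
        pi = pi_mb if L_hat > -np.inf else pi_mf
        s = env.reset()
        for _ in range(episode_len):
            a = pi(s)
            s_next, r = env.step(a)  # simplified: returns (next state, reward)
            D.append((s, a, r, s_next))
            s = s_next
    return pi_mf
```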
**Model-Based RL** The MBRL component of our algorithm proceeds in two distinct steps: model training and policy training. During the model training step, we estimate the posterior parameters $\theta$ by fitting a Bayesian LSTM dynamics model to our environmental data $D$, by maximizing an evidence lower bound on the data log likelihood (Kingma et al., 2015; Gal et al., 2016a).
$$\mathcal{L}_{\text{model}}(\theta, D) = \mathbb{E}_{w \sim q(w; \theta)} \left[ \sum_{i=1}^{|D|} \sum_{t=1}^{T} \log p(s_{t+1}^{(i)}, r_t^{(i)} | s_{\leq t}^{(i)}, a_{\leq t}^{(i)}, w) \right] - D_{KL}(q(w; \theta) || p(w)). \quad (5)$$
Specifically, we use the binary dropout formulation of Bayesian LSTMs (Gal et al., 2016a), wherein sampling a weight from the posterior $w \sim q(w; \theta)$ is accomplished by sampling a binary dropout mask from a fixed Bernoulli distribution (Gal et al., 2016b). In this formulation, the prior $p(w)$ is approximately a Normal distribution, while the posterior is a Bernoulli (Gal et al., 2016b). Our dynamics model $p(s_{t+1}, r_t | s_{\leq t}, a_{\leq t}, w)$ is a Gaussian distribution over the next state $s_{t+1}$ and reward $r_t$ with a diagonal covariance matrix, given the states $s_{\leq t}$ and actions $a_{\leq t}$ at all previous timesteps. The choice to represent state transition dynamics as Gaussians with diagonal covariance matrices is similar to past work (Gal et al., 2016a; Chua et al., 2018; Gamboa Higuera et al., 2018; Chow et al., 2020; Eysenbach et al., 2022; Freed et al., 2023), with the primary difference being that our dynamics model is recurrent. Specifically, we use an LSTM dynamics model, as we found this to yield more stable gradient-based policy optimization compared to a simple feed-forward MLP.
During the policy training step, we train a Tanh-Gaussian policy (Haarnoja et al., 2018) to maximize the expected cumulative reward predicted by our model. Depending on the environment, we found that one of two methods yielded the best results. In both methods, we start by sampling a set of weights from our approximate posterior (which corresponds to sampling a set of dropout masks). In the first method, for each weight, we sample a set of initial states from the initial state distribution, which we assume to be known. Subsequently, we sample a full $T$-length trajectory, starting from each initial state, by iteratively sampling actions from the policy, followed by a reward and state transition from the model. Given a batch of sampled trajectories, we compute the policy loss as the negative total reward along the trajectory averaged across sampled trajectories, plus a policy entropy bonus. Similar to Gamboa Higuera et al. (2018), we found that gradient clipping stabilized policy optimization and improved results. We refer to this method as full-trajectory policy training, because full-length trajectories are rolled out.
The second method of policy training that we employ is identical to that used by Hafner et al. (2019), with the slight modification that trajectories are sampled using various dropout masks, and trajectories are sampled in raw state space as opposed to latent space. In summary, states are sampled uniformly from the data buffer, and trajectory segments of length $H = 16$ are sampled starting from those states. Value estimates are then computed using a critic network and the predicted trajectory rewards. The critic is then updated to produce more accurate value estimates, and the policy is updated to produce higher value estimates. In either case, the dropout mask that we use to sample a particular trajectory is held constant during the entire trajectory; this reflects the fact that even though there is uncertainty in the dynamics model parameters $w$, the parameters do not change during a single trajectory (Gal et al., 2016a).
**Model-Free RL** We use Soft Actor-Critic (Haarnoja et al., 2018) as the off-policy MFRL component of our algorithm. We found that standard SAC performed poorly when run off-policy; therefore, we incorporate two modifications suggested by Ball et al. (2023) that we found yielded superior off-policy performance while preserving SAC's on-policy performance. Specifically, we used layer normalization in our Q networks, and omit the entropy term from the Q network loss.
**Lower Bound Estimation** Using the posterior parameters $\theta$ obtained during the model learning process and $f = \log$, it is possible to compute a Monte-Carlo estimate of $\mathcal{L}$ as
$$\hat{\mathcal{L}}(\pi^{MF}, \pi^{MB}, \theta, D) = \frac{1}{K}\sum_{i=1}^{K} \left( \log \frac{p(D|w_i)p(w_i)}{q(w_i; \theta)} + \log \left( \hat{J}(\pi^{MB}|w_i) - \hat{J}(\pi^{MF}|w_i) \right) \right), \quad (6)$$
for $w_1, ..., w_K \sim q(w; \theta)$. Here, $p(D|w_i)$ is the probability of all state transitions and rewards in the dataset given parameters $w_i$, and $\hat{J}(\pi^{MB}|w_i)$ and $\hat{J}(\pi^{MF}|w_i)$ are themselves Monte-Carlo estimates of the expected return for the model-based and model-free policies respectively, computed by rolling out a batch of $M$ trajectories from the model using parameters $w_i$ and policies $\pi^{MB}$ and $\pi^{MF}$, respectively. More details on the estimation of this bound are provided in Sec. A.2 of the Appendix.
We review some useful properties of $\hat{\mathcal{L}}$. As $K \to \infty$ and $M \to \infty$, by the law of large numbers, $\hat{\mathcal{L}} \to \mathcal{L}$. However, for $K \to \infty$ and finite $M$, by Jensen’s inequality, $\hat{\mathcal{L}} \leq \mathcal{L}$. This property does not change our algorithm in principle, because $\hat{\mathcal{L}}$ for finite $M$ is still a lower bound on $\log(J(\pi^{MB}|D) - J(\pi^{MF}|D))$. The only practical implication of using $\hat{\mathcal{L}}$ in place of $\mathcal{L}$ is that the algorithm becomes more conservative, preferring model-free RL more often, as it becomes more difficult to prove that the model-based policy achieves a higher Bayesian return. When $K$ is also finite, $\hat{\mathcal{L}}$ is stochastic, and we can no longer say it is strictly a lower bound on $\log(J(\pi^{MB}|D) - J(\pi^{MF}|D))$, though on average it is. Practically, the stochasticity of $\hat{\mathcal{L}}$ injects some randomness into policy selection. We did not find this to be an issue as long as large enough values of $K$ and $M$ were used.
To check if the model-free policy is in the EPS, we must check whether $\hat{\mathcal{L}}(\pi^{MF}, \pi^{MB}, \theta, D) > \log(0) = -\infty$. Note that in equation (6) all terms except $\log \left( \hat{J}(\pi^{MB}|w_i) - \hat{J}(\pi^{MF}|w_i) \right)$ will be defined and finite. However, as $\hat{J}(\pi^{MB}|w_i) - \hat{J}(\pi^{MF}|w_i) \to 0$ from the right, $\log \left( \hat{J}(\pi^{MB}|w_i) - \hat{J}(\pi^{MF}|w_i) \right) \to -\infty$. Therefore, this term dominates $\hat{\mathcal{L}}$ when the model-free policy is on or near the boundary of the EPS, allowing us to simply check whether $\hat{J}(\pi^{MB}|w_i) - \hat{J}(\pi^{MF}|w_i) > 0, \forall i = 1, ..., K$. This property is particularly convenient because it allows us to ignore the $\log p(D|w_i)$ term, which would normally require calling the model on the entire dataset.
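The check above reduces the bound estimation to a few array operations. A minimal sketch follows (argument names are ours): `log_ratio[i]` holds $\log\frac{p(D|w_i)p(w_i)}{q(w_i;\theta)}$, and `J_mb`, `J_mf` hold the rollout-based return estimates for each sampled $w_i$.

```python
import numpy as np

def estimate_L(log_ratio, J_mb, J_mf):
    # Monte-Carlo estimate of Eq. (6) with f = log. Any nonpositive gap
    # sends the log term (and hence the whole estimate) to -infinity, so
    # the membership check never needs log_ratio in that case.
    gaps = np.asarray(J_mb) - np.asarray(J_mf)
    if np.any(gaps <= 0):
        return -np.inf
    return float(np.mean(np.asarray(log_ratio) + np.log(gaps)))
```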
4 EXPERIMENTS
Our experiments seek to answer three questions:
1. Can Unified RL successfully combine the strengths of model-based and model-free RL?
2. Does Unified RL perform favorably compared to state-of-the-art prior work?
3. Is Unified RL effective in situations where either MBRL or MFRL alone fails?
We address questions 1 and 2 in Sec. 4.1 and question 3 in Sec. 4.2 and Sec. 4.3. In our experiments, we consider a range of challenging continuous control tasks from the OpenAI gym benchmark suite (Brockman et al., 2016), the DeepMind Control Suite (DMC) (Tassa et al., 2018), and the ROBEL robotics benchmark suite (Ahn et al., 2020). Specifically, we consider OpenAI gym Hopper, Walker, Ant, and Half-Cheetah, as well as DMC Cartpole Swingup and ROBEL DClawTurnFixed. We make two modifications to the standard environments for the sake of simplicity. First, we disabled early episode termination in the OpenAI gym tasks, as early termination has been shown to cause issues for MBRL (Wang et al., 2019). Second, we focus on short-horizon tasks; specifically, we consider episodes of length $T = 100$ for all OpenAI gym and DMC tasks, except for Hopper and Cartpole, for which we considered episodes of length $T = 200$. We found that these episodes were sufficiently long to allow agents to learn the desired behaviors.
4.1 DATA EFFICIENCY AND ASYMPTOTIC PERFORMANCE
To empirically evaluate the effectiveness of Unified RL at combining the strengths of model-based and model-free RL, we compare Unified RL to its constituent model-based and model-free components.
Figure 2: Training curves on benchmark tasks. Solid lines indicate the average return per episode across 5 runs, while shaded regions denote 95% confidence intervals. We find that Unified RL successfully combines the strengths of both model-based and model-free RL. In environments where either MBRL or SAC strictly dominates the other, Unified RL at least matches the better of these two algorithms. In situations where MBRL learns faster initially but is eventually surpassed by SAC, Unified RL achieves higher performance than either algorithm alone. Additionally, Unified RL also performs favorably compared to the other baselines, and is the only algorithm we tested that consistently performs well across all tasks.
Table 1: Mean episode return on benchmark tasks. Reported below are episode returns averaged across the entire training process for 5 distinct random seeds, with 95% confidence intervals. The final column is the average rank that each algorithm achieves across all environments. We find that Unified RL often achieves higher mean episode return compared to either MBRL or SAC, indicating that Unified RL is able to combine the strengths of both algorithms. Additionally, Unified RL is the only algorithm we tested that performed consistently well across all tasks, achieving the best average rank.
| Algorithm | Ant | Hopper | Walker | Half Cheetah | Cartpole | DClawTurnFixed | Avg Rank |
|-----------|-----------|-----------|--------------|------------|------------|----------------|----------|
| Unified RL| 493 ± 9.9 | 750 ± 2.4 | 267 ± 5.9 | 571 ± 3.5 | 66.7 ± 0.3 | 963 ± 11 | 2.33 |
| SAC | 457 ± 7.9 | 633 ± 14 | 229 ± 2.1 | 540 ± 12 | 67.5 ± 0.9 | 970 ± 11 | 3.5 |
| MBRL | 311 ± 10 | 705 ± 16 | 253.2 ± 0.8 | 263 ± 48 | 64.6 ± 0.2 | 989 ± 16 | 4.5 |
| ALM | 188 ± 77 | 563 ± 6.2 | 127 ± 7.6 | 520 ± 27 | 31.5 ± 2.9 | −177 ± 111 | 7.33 |
| DDPG | 76 ± 6.4 | 542 ± 42 | 293 ± 21 | 422 ± 34 | 25.3 ± 3.7 | 877 ± 17 | 6.83 |
| SVG | 443 ± 15 | 684 ± 14 | 163 ± 6.6 | 582 ± 21 | 65.0 ± 0.3 | −705 ± 14 | 4.83 |
| TD3 | 204 ± 3.8 | 632 ± 27 | 250 ± 4.2 | 412 ± 23 | 32.6 ± 7.8 | 902 ± 26 | 5.83 |
| HL | 319 ± 12 | 616 ± 4.5 | 280 ± 19 | 569 ± 8.2 | 66.6 ± 1.8 | 889 ± 29 | 4.16 |
| PPO | 85 ± 1.6 | 582 ± 18 | 356 ± 12 | 104 ± 16 | 69.2 ± 0.1 | −404 ± 56 | 5.16 |
We also compare to several prior state-of-the-art approaches: Aligned Latent Models (ALM) (Ghugare et al., 2022), Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), Twin Delayed DDPG (TD3) (Fujimoto et al., 2018), Proximal Policy Optimization (PPO) (Schulman et al., 2017), Stochastic Value Gradient (SVG) (Heess et al., 2015), and Hybrid Learning (HL) (Pinosky et al., 2023). DDPG, TD3, and PPO are state-of-the-art model-free methods. ALM, SVG and HL are high-performing algorithms that combine aspects of model-based and model-free RL.
We report our results in two ways. First, we show mean episode return vs. number of environment steps in Fig. 2. Here, solid lines indicate episode return averaged across 5 independent random seeds, while shaded regions denote 95% confidence intervals. Second, we report the average episode return across the entire training process for each algorithm, which is equivalent to the area under the learning curve normalized by the number of training episodes, in Table 1. This statistic is relevant because it blends both data efficiency and asymptotic performance into a single scalar
performance metric. Here again we report the average return across 5 random seeds, with 95% confidence intervals.
Our first observation is that Unified RL succeeds at combining the strengths of its two constituent algorithms. In cases where one algorithm strictly dominates, such as Hopper and Walker, we see that Unified RL does at least as well as the better-performing constituent. Moreover, we find that in environments such as Ant and Half-Cheetah, where MBRL learns rapidly initially but is eventually surpassed by SAC, Unified RL achieves higher performance than either algorithm alone. This finding indicates that Unified RL enables a synergy between MBRL and MFRL that is superior to simply running both algorithms separately and picking the best one at each timestep. We additionally observe that of all the algorithms we tested, Unified RL was unique in that it performed well across all tasks. Interestingly, ALM seemed to suffer from instability, possibly due to issues in Q learning caused by the shorter episode lengths that we use in our experiments.
4.2 Robustness to Model Misalignment
One of our central claims is that Unified RL helps avoid the objective mismatch problem by allowing the agent to switch to MFRL when the model is misaligned (that is, ill-suited to helping the agent improve its policy). To test this claim, we evaluate Unified RL on a task that we designed to induce model misalignment in MBRL. Recall that distractors are components of the observation that are predictable but task-irrelevant. Distractors exacerbate model misalignment, because typical model-learning objectives do not prioritize the modeling of task-relevant observation components over the task-irrelevant distractors. This results in models that do not accurately represent the task-relevant components. In our experiments, we appended time-dependent sinusoids of fixed frequency to the observations. Sinusoids were grouped together into groups of 10, where all 10 sinusoids in a group had the same phase. Each group was assigned a random phase, preventing the model from simply memorizing the distractors. Five such groups were appended to the observations. The hyperparameters used for SAC, MBRL, and Unified RL for this experiment were identical to those used in the original Ant environment.
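For illustration, such distractors can be appended with an observation wrapper along the following lines (a sketch; the frequency `omega` is a hypothetical choice, and redrawing phases at each reset is our reading of the setup):

```python
import numpy as np
import gym

class SinusoidDistractors(gym.Wrapper):
    """Append 5 groups of 10 time-dependent sinusoids to the observation.

    All sinusoids in a group share a phase; each group's phase is redrawn
    at reset so the model cannot simply memorize the distractors.
    """

    def __init__(self, env, n_groups=5, group_size=10, omega=0.1):
        super().__init__(env)
        self.n_groups, self.group_size, self.omega = n_groups, group_size, omega
        self.t, self.phases = 0, np.zeros(n_groups)

    def reset(self, **kwargs):
        self.t = 0
        self.phases = np.random.uniform(0, 2 * np.pi, self.n_groups)
        return self._augment(self.env.reset(**kwargs))

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.t += 1
        return self._augment(obs), reward, done, info

    def _augment(self, obs):
        vals = np.sin(self.omega * self.t + self.phases)  # one value per group
        return np.concatenate([obs, np.repeat(vals, self.group_size)])
```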
The reward curves for this experiment are shown in Fig. 3. We observe that MBRL utterly fails to make learning progress in the presence of distractors, while MFRL is relatively unfazed. Unified RL performs slightly better than MFRL, indicating that it is able to effectively fall back on MFRL when its model is misaligned.
4.3 Robustness to Failures of Model-Free RL
We do not expect MFRL to always achieve higher asymptotic performance than MBRL in all environments; for example, MFRL may fail to escape a poor local minimum or have poorly tuned hyperparameters. Unified RL has an advantage over approaches such as MBRL with Model-Free Fine Tuning (Nagabandi et al., 2018), which runs MBRL for a manually specified number of episodes before switching to MFRL, in that Unified RL only switches to MFRL when the model-based policy isn’t provably superior. Therefore, in situations where MFRL fails to learn effectively, we expect Unified RL to utilize model-based learning exclusively. To test this claim, we compare the performance of Unified RL to MBRL and SAC in the Ant environment, where the entropy penalty for both SAC and the SAC component of Unified RL was set far higher than its ideal value. As expected, this prevented SAC from learning effectively, both alone and within Unified RL. Indeed, we found that Unified RL recognized that SAC was ineffective at solving the task and instead relied exclusively on MBRL.
5 Related Work
Similar to Duff (2002); Deisenroth & Rasmussen (2011); Gal et al. (2016a); Chua et al. (2018); Gamboa Higuera et al. (2018); Mehta et al. (2021; 2022), we consider a Bayesian formulation of MBRL. The characteristic feature of these approaches is an explicit representation of uncertainty in their estimate of the environmental dynamics (Gal et al., 2016a; Depeweg et al., 2017). Gamboa Higuera et al. (2018) are most similar to our approach, in that they use Bayesian neural networks (BNNs) to represent beliefs over dynamics, and learn policies by backpropagating gradients through model rollouts.
Several recent approaches have been proposed for combining model-based and model-free RL. For example, Hybrid Learning (Pinosky et al., 2023) used a learned dynamics model to determine an optimal time to switch between a planned action sequence and a policy learned using MFRL. Stochastic Value Gradients (Heess et al., 2015) proposed a spectrum of policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. Finally, Model-Based RL with Model-Free Fine Tuning (Nagabandi et al., 2018) initialized MFRL with a policy trained for a fixed number of episodes using MBRL. The primary drawback of these approaches, which is addressed in our work, is that they use hard-coded or heuristic methods for selecting which learning modality to use in a given situation, rather than switching based on a measure of the model’s ability to contribute to policy improvement.
Recent approaches for improving model alignment in MBRL optimized policies with respect to lower bounds similar to $\mathcal{L}$. For example, Luo et al. (2018) considered iteratively constructing lower bounds that hold locally in policy space, which is then optimized jointly with respect to both the model and policy. Eysenbach et al. (2022) and Ghugare et al. (2022) considered jointly optimizing a global lower bound on policy performance with respect to both the model and policy parameters. Chow et al. (2020) proposed an EM algorithm to jointly improve the model and the policy with respect to a variational lower bound. One fundamental limitation of these approaches is that they do not address the suboptimalities introduced by the fact that models have limited representational capacity. In environments with complex dynamics that the model class is ill-suited to represent, a lower bound on policy performance may differ significantly from the true objective we wish to optimize (i.e., $\mathcal{L}$ will be a loose bound for the true objective $J$), resulting in a poorly aligned policy-learning objective and suboptimal policies. Our approach builds on these ideas, but takes a fundamentally different approach: rather than using the model to approximate a single optimal policy, we maintain a set of policies that may be optimal, which is then refined by model-free RL, thereby avoiding over-reliance on potentially inaccurate models.
6 LIMITATIONS
Our approach has a few limitations that are worth noting. First, our approach does not incorporate intelligent exploration, and simply assumes that the best policy at any given iteration is the ideal policy to collect new data, be it model-based or model-free. This assumption is potentially disadvantageous in environments that require extensive exploration, where short-term reward should be sacrificed for the purposes of information gain. This limitation could potentially be circumvented with a slight modification to the bound in equation 3 to include an exploration bonus corresponding to an approximation of the amount of information gained by executing a particular policy, similar to that used in Houthooft et al. (2016).
Another important limitation is that because Unified RL maintains two separate (model-based and model-free) policies, but only collects data from one in a given episode, at least one of the two policies will be performing some amount of off-policy learning. This restricts our choice of model-free RL algorithm to off-policy algorithms, such as SAC or Q-learning. Even though SAC is in principle an off-policy algorithm, we found standard SAC to perform poorly when learning off-policy, requiring modifications to the Q learning process (Sec. 3.2) (Ball et al., 2023). This limitation could potentially be avoided by modifying the Unified RL algorithm to maintain a single policy that is updated with model-free RL but constrained to lie within the equivalent policy set. This could be accomplished by incorporating a constraint into the model-free policy updates, similar to a trust region as used in PPO (Schulman et al., 2017).
7 DISCUSSION AND FUTURE WORK
In this work, we propose equivalent policy sets (EPS), which we define as the set of policies that are not provably Bayes-suboptimal, according to bounds on policy performance constructed using a model. The EPS provides a valuable tool for quantifying how inaccuracies in the model translate into uncertainty in their estimate of the optimal policy. Using this tool, agents can better understand
|
2dHmhoWweE
|
While leveraging multiple ascent steps can improve over the original SAM/ASAM, a prior study [2] shows that the inner gradient ascent can be calculated periodically while maintaining similar performance to the conventional SAM (i.e. it is redundant to compute the ascent gradient at every step). Can the author elaborate more on this?
|
LOOKBEHIND-SAM: \( k \) STEPS BACK, 1 STEP FORWARD
Anonymous authors
Paper under double-blind review
ABSTRACT
Sharpness-aware minimization (SAM) methods have gained increasing popularity by formulating the problem of minimizing both loss value and loss sharpness as a minimax objective. In this work, we increase the efficiency of the maximization and minimization parts of SAM’s objective to achieve a better loss-sharpness trade-off. By taking inspiration from the Lookahead optimizer, which uses multiple descent steps ahead, we propose Lookbehind, which performs multiple ascent steps behind to enhance the maximization step of SAM and find a worst-case perturbation with higher loss. Then, to mitigate the variance in the descent step arising from the gathered gradients across the multiple ascent steps, we employ linear interpolation to refine the minimization step. Lookbehind leads to a myriad of benefits across a variety of tasks. Particularly, we show increased generalization performance, greater robustness against noisy weights, as well as improved learning and less catastrophic forgetting in lifelong learning settings.
1 INTRODUCTION
Improving the optimization methods used in deep learning is a crucial step to enhance the performance of current models. Notably, building upon the long-recognized connection between the flatness of the loss landscape and generalization [Hochreiter & Schmidhuber 1994; Keskar et al. 2016; Dziugaite & Roy 2017; Neyshabur et al. 2017; Izmailov et al. 2018], sharpness-aware training methods have gained recent popularity due to their ability to significantly improve generalization performance compared to minimizing the empirical risk using stochastic gradient descent (SGD). Particularly, sharpness-aware minimization (SAM) [Foret et al. 2021] was recently proposed as an effective means to simultaneously minimize both loss value and loss sharpness during training. Given a neural network with parameters \( \phi \) and a loss function \( L(\phi) \), SAM seeks parameters in flat regions by formulating the problem as a minimax optimization:
$$\min_{\phi} \max_{\|\epsilon\|_2 \leq \rho} L(\phi + \epsilon), \quad (1)$$
where worst-case perturbations \( \epsilon \) are applied to parameters \( \phi \), with the distance between original and perturbed parameters being controlled by \( \rho \). SAM approximates the maximization step by first performing a single gradient ascent step and then using the gradient of the loss to do a single descent step from the original solution. This leads to finding a low-loss parameter configuration \( \phi \) such that the loss is also low within a neighborhood of size \( \rho \), which leads to flatter solutions. Several follow-up methods have emerged to further enhance its performance [Kwon et al. 2021; Zhuang et al. 2022; Kim et al. 2022] and reduce its computation overhead [Du et al. 2022a,b; Liu et al. 2022a].
Despite the recent success, improving upon SAM requires a delicate balance between loss value and sharpness. Ideally, the optimization process would converge towards minima that offer a favorable compromise between these two aspects, thereby leading to high generalization performance. However, naively increasing the neighborhood size \( \rho \) used to find the perturbed solutions in SAM leads to a considerable increase in training loss, despite improving sharpness (Figure 1, full circles). In other words, putting too much emphasis on finding the worst-case perturbation is expected to bias convergence to flat but high-loss regions and negatively impact generalization performance.
Instead of performing a single ascent step akin to SAM, performing multiple ascent steps is a promising way of increasing the neighborhood region to find perturbed solutions, and thus further reducing sharpness. However, this is not what is observed empirically (Figure 1, empty circles). In fact, previous works [Foret et al. 2021; Andriushchenko & Flammarion 2022] have shown that
such a multistep variant may hurt performance. A possible cause is the increased gradient instability originating from moving farther away from our original solution (Liu et al., 2022b). Note that such instability may also be present when using a high $\rho$, even in single-ascent step SAM. In this case, applying a variance reduction technique such as Lookahead (Zhang et al., 2019) with SAM as inner optimizer may help mitigate the performance loss when using larger $\rho$. However, as we demonstrate in our experiments, this is also not helpful (Figure 1, empty triangles).
In this work, we present a novel optimization method, called Lookbehind, that leverages the benefits of multiple ascent steps and variance reduction to improve the efficiency of the maximization and minimization parts of equation 1. This leads to Lookbehind successfully reducing both loss and sharpness across small and large neighborhood sizes (Figure 1, full triangles), achieving the best loss-sharpness trade-off.
In practice, improving the loss and sharpness trade-off results in a myriad of benefits across several training regimes. Particularly, when applying Lookbehind to SAM and ASAM, we show a considerable improvement in terms of generalization performance across several models and datasets. Moreover, models trained with Lookbehind have increased robustness against noisy weights at inference time. Lastly, we evaluate Lookbehind in the context of lifelong learning and show an improvement both in terms of learning and catastrophic forgetting on multiple models and datasets.
2 BACKGROUND: SHARPNESS-AWARE MINIMIZATION
Our method, Lookbehind, builds upon sharpness-aware minimization (SAM) methods with the goal of solving the inner maximization problem of SAM more accurately while stabilizing the outer minimization part of SAM’s objective. We will start by briefly introducing the sharpness-aware minimization methods used throughout the paper.
To solve the problem in equation 1 using standard stochastic gradient methods, SAM (Foret et al., 2021) proposes to estimate the gradient of the minimax objective in two steps. The first step is to approximate the inner maximization $\epsilon(\phi)$ using one step of gradient ascent; the second is to compute the loss gradient at the perturbed parameter $\phi + \epsilon(\phi)$. This leads to the following parameter update:
$$\phi_t = \phi_{t-1} - \eta \nabla_\phi L(\phi_{t-1} + \epsilon(\phi_{t-1})), \quad \epsilon(\phi) := \rho \frac{\nabla L(\phi)}{||\nabla L(\phi)||_2}. \quad (2)$$
Several follow-up sharpness-aware methods have been proposed to further improve upon the original formulation. Notably, a conceptual drawback of SAM is the use of a fixed-radius Euclidean ball as maximization neighborhood, which is sensitive to re-parametrizations such as weight re-scaling (Dinh et al., 2017; Stutz et al., 2021). To address this problem, ASAM (Kwon et al., 2021) was proposed as an adaptive version of SAM, which redefines the maximization neighborhood in equation 1 as component-wise normalized balls $||\epsilon/\phi||_2 \leq \rho$. This leads to the modified parameter update:
$$\phi_t = \phi_{t-1} - \eta \nabla_\phi L(\phi_{t-1} + \epsilon(\phi_{t-1})), \quad \epsilon(\phi) := \rho \frac{T^2_\phi(\nabla L(\phi))}{||T_\phi(\nabla L(\phi))||_2} \quad (3)$$
where $T_\phi(v) := \phi \odot v$ denotes the component-wise multiplication operator associated to $\phi$. In what follows, we use both SAM and ASAM as our baseline sharpness-based learning methods.
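For concreteness, a minimal PyTorch-style sketch of one update under either rule (equations 2 and 3), written for a single parameter tensor; real implementations operate on all parameter groups, and the defaults here are illustrative:

```python
import torch

def sam_step(phi, loss_fn, eta=0.1, rho=0.05, adaptive=False):
    """One SAM (equation 2) or ASAM (equation 3) update on a tensor phi."""
    phi = phi.detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(phi), phi)[0]
    if adaptive:
        # ASAM: T_phi(v) = phi * v, so eps = rho * T_phi^2(grad) / ||T_phi(grad)||
        t_grad = phi * grad
        eps = rho * phi * t_grad / t_grad.norm()
    else:
        # SAM: one normalized gradient ascent step
        eps = rho * grad / grad.norm()
    # Descent gradient evaluated at the perturbed point
    phi_adv = (phi + eps).detach().requires_grad_(True)
    grad_adv = torch.autograd.grad(loss_fn(phi_adv), phi_adv)[0]
    return (phi - eta * grad_adv).detach()
```

For instance, `phi = sam_step(phi, lambda p: ((X @ p - y) ** 2).mean())` performs one sharpness-aware update of a linear model, assuming data tensors `X` and `y`.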
3 LOOKBEHIND OPTIMIZER
Our algorithm, Lookbehind (+SAM), presents a novel way to improve the solution found by SAM’s objective (equation 1). The intuition of Lookbehind is two-fold. First, we improve the maximization part of SAM’s objective by performing multiple ascent steps to find a worst-case weight perturbation that has a higher loss than the original, single-step SAM within a given neighborhood of the original point. We refer to this maximization of the loss over multiple ascent steps as looking behind. In other words, we are looking behind in the sense that we are climbing the loss landscape. (This term is inspired by the Lookahead optimizer [Zhang et al., 2019], where looking ahead refers to the minimization of the loss over multiple descent steps.)
Second, to improve the minimization part of SAM’s objective, we reduce the variance derived from the multiple ascent steps by aggregating the gradients along the way for the descent step and performing linear interpolation in the parameter space. This alleviates the instability that arises from (1) performing multiple ascent steps, since the various gradients gathered in the ascent phase are not aligned with each other, and (2) the substantial departure from the original point as we perform ascent steps, which negatively impacts SAM’s minimization objective and the consequent loss-sharpness trade-off (Figure 1). Instead, Lookbehind combines the gradients computed at intermediate distances, improving upon the multiple ascent step variant of SAM (Multistep-SAM).
A visual comparison between Multistep-SAM and Lookbehind is illustrated in Figure 2.
While Multistep-SAM performs $k$ ascent steps $(\phi_{t,1}', \ldots, \phi_{t,k}')$ for the final update, Lookbehind uses slow weights $(\phi_t, \phi_{t+1}, \ldots)$ and fast weights $(\phi_{t,1}, \ldots, \phi_{t,k})$, where fast weights are updated using the gradients from $k$ ascent SAM steps. Then, the slow weights are updated toward the fast weights through linear interpolation. Even though both methods entail the same number of gradient computations, Lookbehind has a stabilizing effect over Multistep-SAM by combining the gradient information.
The pseudo-code for Lookbehind is presented in Algorithm 1. After synchronizing the fast weights (line 2) and the perturbed weights (line 3), we sample a minibatch (line 4) and perform $k$ ascent steps of SAM by preserving the previously perturbed slow weights (line 7) and introducing further perturbations in the subsequent inner step (line 6); corresponding descent steps are tracked and the fast weights are updated accordingly (line 8). After $k$ steps, a linear interpolation of the fast and slow weights is conducted (line 10).
Algorithm 1 Lookbehind+SAM
Require: Parameters $\phi_0$, loss $L$, inner steps $k$, slow and fast weights step sizes $\alpha$ and $\eta$, neighborhood size $\rho$, training set $D$
1: for $t = 1, 2, \ldots$ do
2: $\phi_{t,0} \leftarrow \phi_{t-1}$
3: $\phi_{t,0}' \leftarrow \phi_{t-1}$
4: Sample mini-batch $d \sim D$
5: for $i = 1, 2, \ldots, k$ do
6: $\epsilon \leftarrow \rho \frac{\nabla L_d(\phi_{t,i-1}')}{\|\nabla L_d(\phi_{t,i-1}')\|_2}$
7: $\phi_{t,i}' \leftarrow \phi_{t,i-1}' + \epsilon$
8: $\phi_{t,i} \leftarrow \phi_{t,i-1} - \eta \nabla L_d(\phi_{t,i}')$
9: end for
10: $\phi_t \leftarrow \phi_{t-1} + \alpha(\phi_{t,k} - \phi_{t-1})$
11: end for
12: return $\phi$
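A minimal PyTorch sketch of one outer iteration of Algorithm 1, written for a single parameter tensor (a simplification: both fast and perturbed weights are synchronized to the slow weights, as in lines 2-3, and a single minibatch loss `loss_fn` is reused across inner steps, as in line 4):

```python
import torch

def lookbehind_step(phi, loss_fn, k=5, alpha=0.5, eta=0.1, rho=0.05):
    """One outer iteration of Algorithm 1 (Lookbehind+SAM) on a tensor phi."""
    slow = phi.detach()
    fast, perturbed = slow.clone(), slow.clone()
    for _ in range(k):
        p = perturbed.detach().requires_grad_(True)
        g = torch.autograd.grad(loss_fn(p), p)[0]
        perturbed = perturbed + rho * g / g.norm()   # lines 6-7: keep climbing
        q = perturbed.detach().requires_grad_(True)
        g_adv = torch.autograd.grad(loss_fn(q), q)[0]
        fast = fast - eta * g_adv                    # line 8: fast-weight descent
    return slow + alpha * (fast - slow)              # line 10: interpolation
```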
4 EXPERIMENTAL RESULTS
In this section, we start by introducing our baselines (Section 4.1), and then we conduct several experiments to showcase the benefits of achieving a better sharpness-loss trade-off in SAM methods. Particularly, we test the generalization performance on several models and datasets (Section 4.2) and analyze the loss landscapes at the end of training in terms of sharpness (Section 4.3). Then, we study the robustness provided by the different methods in noisy weight settings (Section 4.4). Lastly, we analyze how the ability to continuously learn is affected in sequential training settings (Section 4.5).
For the following experiments, we use residual networks (ResNets) (He et al., 2016) and wide residual networks (WRN) (Zagoruyko & Komodakis, 2016) models trained from scratch on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009). We report the mean and standard deviation over 3 different seeds throughout the paper unless noted otherwise. Additional training and hyperparameter search details are provided in Appendices A.3 and A.4.
4.1 BASELINES
On top of the previously discussed Lookbehind+SAM method, we note that our algorithm can be easily extended to ASAM by using the component-wise rescaling (equation 3) in the inner loop updates. We call this variant Lookbehind+ASAM. Additionally to SGD, vanilla SAM, and vanilla ASAM, we compare Lookbehind+SAM/ASAM to the following methods:
- **Multistep-SAM/ASAM**: As previously discussed in Section 3, this corresponds to performing multiple ascent steps to the vanilla SAM and ASAM algorithms, with the final update using the gradient from the last step.
- **Lookahead+SAM/ASAM**: We use Lookahead with sharpness-aware methods by applying single-step SAM and ASAM as the inner optimizers. A detailed description of Lookahead+SAM/ASAM is provided in Appendix A.2.
- **Lookahead+SGD**: For the sake of completeness, we also apply the Lookahead optimizer to SGD, as originally proposed by Zhang et al. (2019).
4.2 GENERALIZATION PERFORMANCE
We start by reporting the generalization performance on several models and datasets in Table 1. We observe that models trained with Lookbehind achieve the best generalization performance across all architectures and datasets. This is observed for both SAM and ASAM. Moreover, we see the Lookbehind+SAM/ASAM variants always outperform Lookahead+SGD, which further validates applying Lookbehind to sharpness-aware minimization methods. Importantly, we note that Lookbehind is the only method to outperform vanilla SAM and ASAM on ImageNet. We note, however, that the improvement of the loss-sharpness trade-off achieved by Lookbehind leads to a myriad of benefits on top of increased generalization performance, as demonstrated next.
Table 1: Generalization performance (validation accuracy %) of the different methods on several models trained on CIFAR-10, CIFAR-100, and ImageNet.
[Rows: SGD, Lookahead + SGD, SAM, Multistep-SAM, Lookahead + SAM, Lookbehind + SAM, ASAM, Multistep-ASAM, Lookahead + ASAM, and Lookbehind + ASAM. Columns: ResNet-34 and WRN-28-2 (CIFAR-10), ResNet-50 and WRN-28-10 (CIFAR-100), and ResNet-18 (ImageNet). The per-cell accuracy values are not recoverable from the extracted layout and are omitted here.]
4.3 Sharpness across large neighborhood regions
We move on to analyzing the sharpness of the minima found at the end of training for each method. To do this, we measure the sharpness of the trained models using $m$-sharpness (Foret et al., 2021) by computing
$$\frac{1}{n} \sum_{M \in D} \max_{\|\epsilon\|_2 \leq r} \frac{1}{m} \sum_{s \in M} \left[ L_s(\phi + \epsilon) - L_s(\phi) \right] \quad (4)$$

and

$$\frac{1}{n} \sum_{M \in D} \max_{\|\epsilon/\phi\|_2 \leq r} \frac{1}{m} \sum_{s \in M} \left[ L_s(\phi + \epsilon) - L_s(\phi) \right] \quad (5)$$
for SAM and ASAM, respectively, where $D$ represents the training dataset, which is composed of $n$ minibatches $M$ of size $m$. To avoid ambiguity, we denote the radius used by $m$-sharpness as $r$. Instead of only measuring sharpness in close vicinity to the found solutions, i.e. using $r = 0.05$ as in Figure 1, we vary the radius $r$ over which $m$-sharpness is calculated. Particularly, we iterate over $r \in \{0.05, 0.5, 1.0, \ldots, 5.0\}$ for SAM and $r \in \{0.5, 1.0, \ldots, 5.0\}$ for ASAM.
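A sketch of how equation 4 can be estimated, approximating the inner maximization with a few steps of projected gradient ascent (the number of ascent steps, their learning rate, and the callable-per-minibatch interface are our assumptions):

```python
import torch

def m_sharpness(phi, minibatch_losses, r=0.05, ascent_steps=5, lr=0.01):
    """Estimate equation 4: average over minibatches of the maximal loss
    increase within an L2 ball of radius r around phi.

    minibatch_losses: list of callables, one per minibatch M, each mapping
    parameters to the mean loss over that batch (hypothetical interface).
    """
    total = 0.0
    for loss_fn in minibatch_losses:
        eps = torch.zeros_like(phi, requires_grad=True)
        for _ in range(ascent_steps):
            grad = torch.autograd.grad(loss_fn(phi + eps), eps)[0]
            with torch.no_grad():
                eps += lr * grad                                  # ascend
                eps *= torch.clamp(r / eps.norm(), max=1.0)       # project into r-ball
        with torch.no_grad():
            total += (loss_fn(phi + eps) - loss_fn(phi)).item()
    return total / len(minibatch_losses)
```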
The sharpness over different radii of the different methods, when also trained with different $\rho$, are shown in Figure 3. We observe that on top of Lookbehind improving sharpness at the nearby neighborhoods (as previously shown in Figure 1), SAM and ASAM models trained with Lookbehind also converge to flatter minima at the end of training, as measured on an extensive range of tested radii. This is consistent across training with different $\rho$ on both SAM and ASAM. Even though the minima found by the Lookahead and Multistep variants tend to have low sharpness when training with the default $\rho$, such benefits diminish at higher $\rho$.
Figure 3: Sharpness at multiple $m$-sharpness radii $r$ using ResNet-34 trained on CIFAR-10. Darker shades indicate training with higher neighborhood sizes $\rho$, ranging over $\rho \in \{0.05, 0.1, 0.2\}$ for SAM and $\rho \in \{0.5, 1.0, 2.0\}$ for ASAM. Lower sharpness is better.
4.4 Model robustness
We now assess model robustness against noisy weights. This is a particularly important use case when deploying models on highly energy-efficient hardware implementations that are prone to variabilities and noise (Xu et al., 2013; Kern et al., 2022; Spoon et al., 2021). Similar to previous works (Joshi et al., 2020; Mordido et al., 2022), we apply multiplicative Gaussian noise to the model parameters $\phi$ after training in the form of $\phi \times \delta$, with $\delta \sim N(1, \sigma^2)$, and update the batch normalization statistics after the noise perturbations. Robustness results are presented in Figure 4.
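A minimal sketch of this perturbation, applied to a copy of the model (the batch normalization re-estimation is omitted):

```python
import copy
import torch

def perturb_weights(model, sigma):
    """Return a copy of `model` with multiplicative Gaussian weight noise
    phi * delta, delta ~ N(1, sigma^2)."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.mul_(1.0 + sigma * torch.randn_like(p))
    # Batch-norm statistics would then be re-estimated with a few forward
    # passes over training data (omitted in this sketch).
    return noisy
```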
We see that Lookbehind shows the highest robustness observed by preserving the most amount of validation accuracy across the tested noise levels. This is observed for both SAM and ASAM on all models and datasets. We note that the benefits of using sharpness-aware minimization methods to increase model robustness to noisy weights were shown by previous works (Mordido et al., 2022). Our results share these findings and further show that Lookbehind considerably boosts the robustness benefits of training with SAM and ASAM across several models and datasets.
4.5 LIFELONG LEARNING
Lastly, we evaluate the methods in lifelong learning where a model with a limited capacity is trained on a stream of tasks. The goal is then to maximize performance across tasks without having access to previous data. In our experiments, we replicate the same setup used in Lookahead-MAML (Gupta et al., 2020), which is a lifelong learning method that combines the concept of slow and fast weights of Lookahead with meta-learning principles (Finn et al., 2017). Moreover, we replace Lookahead with Lookbehind, creating a novel algorithm called Lookbehind-MAML. Since meta-learning is out of the scope of this work, we implemented only the constant learning rate setting for simplicity, i.e., the C-MAML variant (Gupta et al., 2020).
We train a 3- and a 4-layer convolutional network on Split-CIFAR100 and Split-TinyImageNet, respectively. We report the following metrics by evaluating the model on the held-out data set: average accuracy (higher is better) and forgetting (lower is better). Additional details about the algorithms, training, and datasets are provided in Appendix A.5. The results are presented in Table 2. In the first setting, we do not use experience replay (ER) and directly compare our method with SGD, SAM, and Multistep-SAM. We observe that Lookbehind achieves the best performance both in terms of average accuracy and forgetting. In the second setting, we apply ER to the previous methods. Once again, we see an improvement when using our variant. Finally, we directly compare Lookahead-C-MAML with Lookbehind-C-MAML and also notice an overall performance improvement.
Table 2: Lifelong learning performance in terms of average accuracy (higher is better) and forgetting (lower is better) on Split-CIFAR100 and Split-TinyImageNet.
| Method | Split-CIFAR100 Avg. accuracy ↑ | Split-CIFAR100 Forgetting ↓ | Split-TinyImageNet Avg. accuracy ↑ | Split-TinyImageNet Forgetting ↓ |
|--------|-------------------------------|-----------------------------|------------------------------------|--------------------------------|
| SGD | 58.41±4.95 | 22.74±4.85 | 43.48±0.80 | 26.51±0.71 |
| SAM | 57.81±1.05 | 23.27±0.57 | 56.34±1.72 | 20.39±1.83 |
| Multistep-SAM | 59.58±0.34 | 15.09±0.48 | 56.09±1.17 | 20.70±1.05 |
| Lookbehind + SAM | 59.93±1.54 | 14.10±0.98 | 56.60±0.68 | 18.99±0.62 |
| ER + SGD | 64.84±1.29 | 12.90±0.23 | 49.19±0.93 | 19.06±0.26 |
| ER + SAM | 68.28±1.30 | 13.98±0.42 | 65.59±0.19 | 9.89±0.14 |
| ER + Multistep-SAM | 65.49±4.10 | 15.20±2.53 | 65.75±0.16 | 9.90±0.09 |
| ER + Lookbehind + SAM | 68.87±0.79 | 12.37±0.11 | 65.91±0.27 | 9.11±0.63 |
| Lookahead-C-MAML | 65.44±0.99 | 13.96±0.86 | 61.93±1.55 | 11.53±1.11 |
| Lookbehind-C-MAML | 67.15±0.74 | 12.40±0.49 | 62.16±0.86 | 11.21±0.44 |
5 SENSITIVITY ANALYSIS
In this section, we analyze the sensitivity of Lookbehind to different hyper-parameter settings in terms of generalization performance (Sections 5.1, 5.2, and 5.3). For the following experiments, we used ResNet-34 and ResNet-50 models trained from scratch on CIFAR-10 and CIFAR-100, respectively. Training and hyperparameter search details are provided in Appendices A.3 and A.4.
5.1 Sensitivity to the inner step $k$
Validation accuracies of the different methods when using different $k$ are presented in Figure 5. We observe that Lookbehind is the only method that consistently outperforms the SAM and ASAM baselines on both CIFAR-10 and CIFAR-100, across all the tested inner steps $k$. Interestingly, our method tends to keep improving when increasing $k$, while this trend is not observed for either the Lookahead or the Multistep variants. Moreover, we see that Multistep-SAM/ASAM does not provide a clear improvement over the respective SAM and ASAM baselines, as previously discussed in prior work (Foret et al., 2021; Andriushchenko & Flammarion, 2022). On the other hand, the Lookahead variants show a slight improvement over Multistep, particularly when combining Lookahead with SAM and ASAM on CIFAR-10 and SAM on CIFAR-100. Overall, we see that Lookbehind reaches the highest validation accuracy on every tested model and dataset configuration when combined with both SAM and ASAM.

(a) ResNet-34 on CIFAR-10. (b) ResNet-50 on CIFAR-100.
Figure 5: Comparison of generalization performance (validation accuracy %) between Multistep-SAM/SAM, Lookahead + SAM/ASAM, and Lookbehind + SAM/ASAM. The vanilla SAM and ASAM baselines with default $\rho$ are represented by the horizontal, dotted line.
5.2 Sensitivity to the outer step size $\alpha$
The validation accuracies of Lookbehind across different $\alpha$ and $k$ are presented in Figure 6. We see that Lookbehind always improves over the baselines when considering the full grid search. This is also reflected in a finer-grained manner, where Lookbehind improves over the baselines in all $k$, except $k = 2$ on SAM and CIFAR-10. We notice a diagonal trend, suggesting there is a relation between $\alpha$ and $k$. Specifically, the results suggest that a higher $\alpha$ is better when increasing $k$. These results show that Lookbehind is robust to the choice of $k$ and $\alpha$ and while tuning these hyperparameters may improve performance, using a default high $\alpha$ (e.g. 0.5 or 0.8) with high $k$ (e.g. 5 or 10) often results in good performance.
5.3 Sensitivity to the neighborhood size $\rho$
We now analyze the effects of training with increasing $\rho$ with the different methods. Results are presented in Figure 7. We see that our method is the only one that consistently outperforms SAM and ASAM across all the tested $\rho$. As previously suggested, significantly increasing $\rho$ in the SAM and ASAM baselines, e.g. $\rho = 0.5$ and $\rho = 5.0$, respectively, decreases performance relative to their default $\rho$, e.g. $\rho = 0.05$ and $\rho = 0.5$, respectively. Notwithstanding, we note that ASAM shows higher relative robustness to higher $\rho$ than SAM, indicated by ASAM’s ability to continue increasing performance on up to $4 \times$ the default neighborhood size, i.e. from $\rho = 0.5$ to $\rho = 2.0$. Lastly, we note that the Lookbehind and Multistep variants show similar trends as the SAM and ASAM baselines.
Figure 6: Sensitivity of Lookbehind to $\alpha$ and $k$ when combined with SAM and ASAM in terms of generalization performance (validation accuracy %). The validation accuracies of the SAM and ASAM variants are presented in the middle of the heatmap (white middle point). All models were trained with the default $\rho$. Blue represents an improvement in terms of validation accuracy over such baselines, while red indicates a degradation in performance.
Figure 7: Validation accuracies with different trained $\rho$ for the different methods using ResNet-34 trained on CIFAR-10. Darker shades represent larger inner steps $k$, ranging from $k \in \{2, 5, 10\}$.
Overall, we observe that Lookbehind is more robust to the choice of $\rho$ compared to the other methods.
6 ADAPTIVE $\alpha$
Lookbehind adds two additional hyperparameters to SAM/ASAM – just as the Lookahead optimizer adds two hyperparameters to SGD - which introduces additional hyperparameter tuning on top of $\rho$ and $\eta$. To mitigate this added complexity in settings where computational resources are scarce, we investigate if we can remove the need to tune $\alpha$ by instead computing it analytically during training. We refer to this adaptive formulation of $\alpha$ as $\alpha^*$. The main idea is to set $\alpha^*$ proportionally to the alignment of the gradients obtained during the multiple ascent steps:
$$\alpha^* = (\cos(\theta) + 1)/2,$$
where $\theta$ is the angle between the first gathered gradient and the final update direction, given by
$$\cos(\theta) = \frac{(\phi_{t,1} - \phi_t) \cdot (\phi_{t,k} - \phi_t)}{\|\phi_{t,1} - \phi_t\|_2 \cdot \|\phi_{t,k} - \phi_t\|_2}.$$
If the gradients are completely aligned, then $\alpha^* = 1$. On the other hand, if the gradients are not aligned, then $0 \leq \alpha^* < 1$, with lower values representing lower alignment.
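A small sketch of this computation for tensors of slow and fast weights:

```python
import torch

def adaptive_alpha(phi_slow, phi_first, phi_last, eps=1e-12):
    """alpha* = (cos(theta) + 1) / 2 for the angle theta between the first
    fast-weight step and the full k-step update direction."""
    u = (phi_first - phi_slow).flatten()
    v = (phi_last - phi_slow).flatten()
    cos_theta = torch.dot(u, v) / (u.norm() * v.norm() + eps)
    return (cos_theta + 1.0) / 2.0
```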
Results when using Lookbehind with a static $\alpha$ and a dynamic $\alpha^*$ are presented in Table 3. Overall, we observe that using an adaptive $\alpha$ is a viable alternative to tuning a static $\alpha$ in instances where compute is scarce. Note that our goal with adaptive $\alpha$ is not necessarily to outperform static $\alpha$.
Table 3: Generalization performance (validation acc. %) of Lookbehind with static and adaptive $\alpha$.
| Method | CIFAR-10 ResNet-34 | CIFAR-10 WRN-28-2 | CIFAR-100 ResNet-50 | CIFAR-100 WRN-28-10 | ImageNet ResNet-18 |
|--------|--------------------|-------------------|---------------------|---------------------|--------------------|
| Lookbehind + SAM + adaptive $\alpha$ | $96.27 \pm .07$ | $94.81 \pm .22$ | $78.62 \pm .48$ | $80.99 \pm .02$ | $70.16 \pm .08$ |
| Lookbehind + ASAM + adaptive $\alpha$ | $96.33 \pm .04$ | $94.88 \pm .12$ | $78.33 \pm .36$ | $80.86 \pm .13$ | $70.07 \pm .12$ |
but instead to achieve competitive performance while having one less hyperparameter. Importantly, we emphasize that Lookbehind with adaptive $\alpha$ consistently outperforms all the compared methods presented in Table 1, similarly to static $\alpha$. Due to space constraints, we refer the reader to Appendix A.1.4 for additional analysis on how $\alpha^*$ varies during training. Moreover, additional discussions are provided in Appendix A.1.
7 RELATED WORK
Sharpness-aware minimization (SAM) (Foret et al., 2021) is an attempt to improve generalization by finding solutions with both low loss value and low loss sharpness. This is achieved by minimizing an estimation of the maximum loss over a neighborhood region around the parameters. There is currently a lot of active work that focuses on improving SAM. More specifically, modifications of the original SAM algorithm were proposed to further improve generalization performance (Zhuang et al., 2022; Kim et al., 2022; Kwon et al., 2021; Liu et al., 2022b) and efficiency (Du et al., 2022c; Zhou et al., 2022; Liu et al., 2022a). Performing multiple ascent steps was already considered in Foret et al. (2021); however, the improvements over single ascent step SAM were either insignificant or even shown to degrade performance in some settings (Andriushchenko & Flammarion, 2022).
SAM’s benefits have transcended improving generalization performance, ranging from higher robustness to label noise (Foret et al., 2021; Kwon et al., 2021), lower quantization error (Liu et al., 2021b), and less sensitivity to data imbalance (Liu et al., 2021a). Here, on top of analyzing the benefits of Lookbehind on generalization performance, we focused on further improving the recently observed benefits of SAM on improving robustness against noisy weights (Kim et al., 2022; Mordido et al., 2022) and reducing catastrophic forgetting in lifelong learning (Mehta et al., 2021).
Closest to our work, Kim et al. (2023) concurrently conducted a similar study by averaging the gradients obtained during multiple SAM ascent steps. One of the differences is the decoupling of the inner step $k$ and the outer step size $\alpha$ in our approach, which allows us to seek optimal combinations between these two hyperparameters. In fact, as depicted in Figures 6 and 13, $\alpha = 1/k$ is generally not the best overall $\alpha$ to use, including when determining $\alpha^*$ (Figure 10). We also extend the empirical discussions by applying our method with ASAM, which often produces superior results (as shown in Table 1). Additionally, we explore the applicability of our approach to lifelong learning (by applying our method with MAML) and robustness settings.
8 CONCLUSION
In this work, we proposed the Lookbehind optimizer, which can be plugged on top of existing sharpness-aware training methods to improve performance over a variety of benchmarks. Our experiments show that our method improves the generalization performance on multiple models and datasets, increases model robustness, and promotes the ability to continuously learn in lifelong learning settings. Even though the goal of this work is to address the performance degradation caused by a poor loss-sharpness trade-off, another important issue inherent to any multiple ascent step SAM method is the computational overhead, which increases training time by a factor of $k$. In the future, it would be interesting to investigate how to improve the efficiency of multiple ascent steps, e.g. by switching the minibatch at each inner step of Lookbehind.
REFERENCES
Maksym Andriushchenko and Nicolas Flammarion. Towards understanding sharpness-aware minimization. In *International Conference on Machine Learning*, 2022.
Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In *European Conference on Computer Vision*, 2018.
Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc’Aurelio Ranzato. On tiny episodic memories in continual learning. *arXiv preprint arXiv:1902.10486*, 2019.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2009.
Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In *International Conference on Machine Learning*, 2017.
Jiawei Du, Zhou Daquan, Jiashi Feng, Vincent Tan, and Joey Tianyi Zhou. Sharpness-aware training for free. In *Advances in Neural Information Processing Systems*, 2022a.
Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, and Vincent Tan. Efficient sharpness-aware minimization for improved training of neural networks. In *International Conference on Learning Representations*, 2022b.
Jiawei Du, Daquan Zhou, Jiashi Feng, Vincent Tan, and Joey Tianyi Zhou. Sharpness-aware training for free. *Advances in Neural Information Processing Systems*, 2022c.
Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. In *Conference on Uncertainty in Artificial Intelligence*, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International Conference on Machine Learning*, 2017.
Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In *International Conference on Learning Representations*, 2021.
Gunshi Gupta, Karmesh Yadav, and Liam Paull. Look-ahead meta learning for continual learning. *Advances in Neural Information Processing Systems*, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Simplifying neural nets by discovering flat minima. *Advances in Neural Information Processing Systems*, 1994.
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry P. Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In *Conference on Uncertainty in Artificial Intelligence*, 2018.
Vinay Joshi, Manuel Le Gallo, Simon Haefeli, Irem Boybat, Sasidharan Rajalekshmi Nandakumar, Christophe Piveteau, Martino Dazzi, Bipin Rajendran, Abu Sebastian, and Evangelos Eleftheriou. Accurate deep neural network inference using computational phase-change memory. *Nature Communications*, 2020.
Jonathan Kern, Sébastien Henwood, Gonçalo Mordido, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Yvon Savaria, and François Leduc-Primeau. MemSE: Fast MSE prediction for noisy memristor-based DNN accelerators. In *IEEE International Conference on Artificial Intelligence Circuits and Systems*, 2022.
|
L3yJ54gv3H
|
The authors claim that ConvResNeXts can 'efficiently learn the function without suffering from the curse of dimensionality'. What does 'efficiency' here mean, sample efficiency or computational efficiency?
|
Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks
Anonymous authors
Paper under double-blind review
Abstract
Convolutional residual neural networks (ConvResNets), though overparameterized, can achieve remarkable prediction performance in practice, which cannot be well explained by conventional wisdom. To bridge this gap, we study the performance of ConvResNeXts, which cover ConvResNets as a special case, trained with weight decay from the perspective of nonparametric classification. Our analysis allows for infinitely many building blocks in ConvResNeXts, and shows that weight decay implicitly enforces sparsity on these blocks. Specifically, we consider a smooth target function supported on a low-dimensional manifold, then prove that ConvResNeXts can adapt to the function smoothness and low-dimensional structures and efficiently learn the function without suffering from the curse of dimensionality. Our findings partially justify the advantage of overparameterized ConvResNeXts over conventional machine learning models.
1 Introduction
Deep learning has achieved significant success in various real-world applications, such as computer vision (Goodfellow et al., 2014; Krizhevsky et al., 2012; Long et al., 2015), natural language processing (Bahdanau et al., 2014; Graves et al., 2013; Young et al., 2018), and robotics (Gu et al., 2017). One notable example of this is in the field of image classification, where the winner of the 2017 ImageNet challenge achieved a top-5 error rate of just 2.25% (Hu et al., 2018) using ConvResNets on a training dataset of 1 million labeled high-resolution images in 1000 categories.
Among various deep learning models, ConvResNets have gained widespread popularity in practical applications (Chen et al., 2017; He et al., 2016; Szegedy et al., 2017; Zhang et al., 2017). Compared to vanilla feedforward neural networks (FNNs), ConvResNets possess two distinct features: convolutional layers and skip connections. Specifically, each block of ConvResNets consists of a subnetwork, called a bottleneck, and an identity connection between inconsecutive blocks. The identity connection effectively mitigates the vanishing gradient issue. Each layer of the bottleneck contains several filters (channels) that convolve with the input. Using this ConvResNet architecture, He et al. (2016) won 1st place on the ImageNet classification task with a 3.57% top-5 error in 2015. ConvResNets have various extensions, one of which is ConvResNeXts (Xie et al., 2017) (a detailed introduction to ConvResNeXts is deferred to Section 2.3). This structure generalizes ConvResNets and includes them as a special case. Each building block in ConvResNeXts has a parallel architecture that enables multiple “paths” within the block. Figure 1(b) illustrates the structure of ConvResNeXts.
There are few theoretical works about ConvResNet, despite its remarkable empirical success. Previous research has focused on the representation power of FNNs (Barron, 1993; Cybenko, 1989; Kohler & Krzyzak, 2005; Suzuki, 2018; Yarotsky, 2017), while limited literature exists on ConvResNets. Oono & Suzuki (2019) developed a representation and statistical estimation theory of ConvResNets, and showed that if the network architecture is appropriately designed, ConvResNets with $O(n^{D/(2\alpha+D)})$ blocks can achieve a minimax optimal convergence rate $\tilde{O}(n^{-2\alpha/(2\alpha+D)})$ when approximating a $C^\alpha$ function with $n$ samples. Additionally, Liu et al. (2021) proved that ConvResNets can universally approximate any function in the Besov space $B_{p,q}^\alpha$ on $d$-dimensional manifolds with arbitrary accuracy. Here, the Besov space includes functions with spatially heterogeneous smoothness and generalizes more elementary function spaces such as Sobolev and Hölder spaces. Liu et al. (2021) improved the
convergence rate to $\tilde{O}(n^{-2\alpha/(2\alpha+d)})$ for ConvResNets with $O(n^{d/(2\alpha+d)})$ blocks. Their results only depend on the intrinsic dimension $d$, rather than the data dimension $D$.
These previous works, however, could not explain the success of ConvResNets in an overparameterized regime, where the number of blocks can be much larger than the sample size. In practice, the performance of ConvResNets becomes better when they go deeper (He et al., 2016; Wang et al., 2022) and wider (Xie et al., 2017), but the previous results required the number of blocks to be chosen carefully according to unknown quantities of interest such as the intrinsic dimension $d$, the smoothness parameter $\alpha$, the radii of the Besov ball, and so on. For instance, Liu et al. (2021) requires the number of blocks for ConvResNets to be $O(n^{d/(2\alpha+d)})$, which is smaller than the order of the sample size $n$. If we believe the existing theory, overparameterized ConvResNets with a larger number of blocks would result in suboptimal rates and worse results, even though the opposite is observed in practice.
To bridge this gap, we study ConvResNeXts trained with weight decay under an overparameterization regime (Xie et al., 2017). The ConvResNeXt is a generalization of the ConvResNet and can cover ConvResNets as a special case. Specifically, we study the same nonparametric classification problem as Liu et al. (2021), where the target function is supported on a $d$-dimensional smooth manifold $M$. We prove that even if ConvResNeXts are overparameterized, i.e., the number of blocks is larger than the order of the sample size $n$, they can still achieve an asymptotic minimax rate for learning Besov functions. That is, given that the target function belongs to the Besov space $B_{p,q}^\alpha(M)$, the risk of the estimator given by the ConvResNeXt class converges to the optimal risk at the rate $\tilde{O}(n^{-\frac{\alpha/d}{\alpha/d+1}(1-o(1))})$ with $n$ samples. We remark that weight decay, which plays an important role in our analysis, is a common method in deep learning to reduce overfitting (Krogh & Hertz, 1991; Smith, 2018). With this approach, ConvResNeXts can have infinitely many blocks to achieve arbitrary accuracy, which matches real-world practice (He et al., 2016; Wang et al., 2022).
Moreover, our theory shows that one can scale the number of “paths” $M$ in each block with the depth $N$ as roughly $MN \gtrsim n^{\frac{1}{2\alpha/d+1}}$, which does not affect the convergence rate. This partially justifies the flexibility of the ConvResNeXt architecture when designing the bottlenecks.
Our work is partially motivated by Zhang & Wang (2022). However, our work distinguishes itself through two new technical advancements. Firstly, we develop approximation theory for ConvResNeXts, while Zhang & Wang (2022) only focuses on (a parallel variant of) FNNs. Secondly, we take into account low-dimensional geometric structures of data. Notably, the statistical rate of convergence in our theory only depends on the intrinsic dimension $d$, which circumvents the curse of dimensionality in Zhang & Wang (2022). Another technical highlight of our paper is bounding the covering number of weight-decayed ConvResNeXts, which is essential for computing the critical radius of the local Gaussian complexity. This technique provides a tighter bound than choosing a single radius of the covering number as in Suzuki (2018); Zhang & Wang (2022). To the best of our knowledge, our work is the first to develop approximation theory and statistical estimation results for ConvResNeXts, as well as overparameterized ConvResNets.
2 PRELIMINARIES
In this section, we introduce some concepts on manifolds. Details can be found in Tu (2011) and Lee (2006). Then we provide a detailed definition of the Besov space on smooth manifolds and the ConvResNeXt architecture.
2.1 SMOOTH MANIFOLD
Firstly, we briefly introduce manifolds, the partition of unity and reach. Let $M$ be a $d$-dimensional Riemannian manifold isometrically embedded in $\mathbb{R}^D$ with $d$ much smaller than $D$.
Definition 1 (Chart). A chart on $M$ is a pair $(U,\phi)$ such that $U \subset M$ is open and $\phi : U \rightarrow \mathbb{R}^d$, where $\phi$ is a homeomorphism (i.e., bijective, $\phi$ and $\phi^{-1}$ are both continuous).
In a chart $(U,\phi)$, $U$ is called a coordinate neighborhood, and $\phi$ is a coordinate system on $U$. Essentially, a chart is a local coordinate system on $M$. A collection of charts that covers $M$ is called an atlas of $M$.
Definition 2 ($C^k$ Atlas). A $C^k$ atlas for $M$ is a collection of charts $\{(U_i,\phi_i)\}_{i \in A}$ which satisfies $\bigcup_{i \in A} U_i = M$, and are pairwise $C^k$ compatible:
$\phi_i \circ \phi_j^{-1} : \phi_j(U_i \cap U_j) \rightarrow \phi_i(U_i \cap U_j)$ and $\phi_j \circ \phi_i^{-1} : \phi_i(U_i \cap U_j) \rightarrow \phi_j(U_i \cap U_j)$
are both $C^k$ for any $i,j \in A$. An atlas is called finite if it contains finitely many charts.
Definition 3 (Smooth Manifold). A smooth manifold is a manifold \( M \) together with a \( C^\infty \) atlas.
Classical examples of smooth manifolds are the Euclidean space, the torus, and the unit sphere. Furthermore, we define \( C^s \) functions on a smooth manifold \( M \) as follows:
Definition 4 (\( C^s \) functions on \( M \)). Let \( M \) be a smooth manifold. A function \( f : M \to \mathbb{R} \) is \( C^s \) if, for any chart \((U, \phi)\) on \( M \), the composition \( f \circ \phi^{-1} : \phi(U) \to \mathbb{R} \) is continuously differentiable up to order \( s \).
We next define the \( C^\infty \) partition of unity, which is an important tool for studying functions on manifolds.
Definition 5 (Partition of Unity, Definition 13.4 in [Tu (2011)]). A \( C^\infty \) partition of unity on a manifold \( M \) is a collection of \( C^\infty \) functions \( \{\rho_i\}_{i \in A} \) with \( \rho_i : M \to [0, 1] \) such that for any \( x \in M \),
1. there is a neighbourhood of \( x \) where only a finite number of the functions in \( \{\rho_i\}_{i \in A} \) are nonzero;
2. \( \sum_{i \in A} \rho_i(x) = 1 \).
An open cover of a manifold \( M \) is called locally finite if every \( x \in M \) has a neighborhood that intersects with a finite number of sets in the cover. The following proposition shows that a \( C^\infty \) partition of unity for a smooth manifold always exists.
Proposition 1 (Existence of a \( C^\infty \) partition of unity, Theorem 13.7 in [Tu (2011)]). Let \( \{U_i\}_{i \in A} \) be a locally finite cover of a smooth manifold \( M \). Then there is a \( C^\infty \) partition of unity \( \{\rho_i\}_{i \in A} \) where every \( \rho_i \) has compact support such that \( \text{supp}(\rho_i) \subset U_i \).
Let \( \{(U_i, \phi_i)\}_{i \in A} \) be a \( C^\infty \) atlas of \( M \). Proposition 1 guarantees the existence of a partition of unity \( \{\rho_i\}_{i \in A} \) such that \( \rho_i \) is supported on \( U_i \). To characterize the curvature of a manifold, we adopt the geometric concept: reach.
Definition 6 (Reach [Federer, 1959; Niyogi et al., 2008]). Denote
\[ G = \left\{ x \in \mathbb{R}^D : \exists p \neq q \in M \text{ such that } \|x - p\|_2 = \|x - q\|_2 = \inf_{y \in M} \|x - y\|_2 \right\} \]
as the set of points with at least two nearest neighbors on \( M \). The closure of \( G \) is called the medial axis of \( M \). Then the reach of \( M \) is defined as
\[ \tau = \inf_{x \in M} \inf_{y \in G} \|x - y\|_2. \]
Reach has a simple geometric interpretation: at every point \( x \in M \), the radius of the osculating circle is at least \( \tau \). A large reach indicates that the manifold \( M \) curves slowly.
2.2 Besov Functions on a Smooth Manifold
We next define the Besov function space on the smooth manifold \( M \), which generalizes more elementary function spaces such as the Sobolev and Hölder spaces. Roughly speaking, functions in the Besov space are only required to have weak derivatives with bounded total variation. Notably, this includes functions with spatially heterogeneous smoothness, which require more locally adaptive methods to achieve optimal estimation errors [Donoho et al., 1998]. Please see Appendix A for examples, and for why kernel ridge regression, including the neural tangent kernel, cannot be optimal on Besov functions. To define Besov functions rigorously, we first introduce the modulus of smoothness.
Definition 7 (Modulus of Smoothness [DeVore & Lorentz, 1993; Suzuki, 2018]). Let \( \Omega \subset \mathbb{R}^D \) and let \( f \in L^p(\Omega) \) for some \( p > 0 \). The \( r \)-th modulus of smoothness of \( f \) is defined by
\[ w_{r,p}(f,t) = \sup_{\|h\|_2 \leq t} \|\Delta^r_h(f)\|_{L^p}, \text{ where } \]
\[ \Delta^r_h(f)(x) = \begin{cases} \sum_{j=0}^{r} \binom{r}{j} (-1)^{r-j} f(x + jh) & \text{if } x \in \Omega, x + rh \in \Omega, \\ 0 & \text{otherwise}. \end{cases} \]
Definition 8 (Besov Space \( B_{p,q}^\alpha(\Omega) \)). For \( 0 < p, q \leq \infty, \alpha > 0, r = \lfloor \alpha \rfloor + 1 \), define the seminorm \( |\cdot|_{B_{p,q}^\alpha} \) as
\[
|f|_{B_{p,q}^\alpha(\Omega)} := \begin{cases}
\left( \int_0^\infty (t^{-\alpha} w_{r,p}(f,t))^q \frac{dt}{t} \right)^{\frac{1}{q}} & \text{if } q < \infty, \\
\sup_{t>0} t^{-\alpha} w_{r,p}(f,t) & \text{if } q = \infty.
\end{cases}
\]
The norm of the Besov space \( B_{p,q}^\alpha(\Omega) \) is defined as \( \|f\|_{B_{p,q}^\alpha(\Omega)} := \|f\|_{L^p(\Omega)} + |f|_{B_{p,q}^\alpha(\Omega)} \). The Besov space is then defined as \( B_{p,q}^\alpha(\Omega) = \{ f \in L^p(\Omega) : \|f\|_{B_{p,q}^\alpha(\Omega)} < \infty \} \).
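As a concrete illustration of Definition 7, the following minimal Python sketch numerically estimates the \( r \)-th modulus of smoothness of a univariate function on a grid. The test function, domain, and grid resolution are illustrative assumptions, not quantities from this paper.

```python
import numpy as np
from math import comb

def modulus_of_smoothness(f, r, p, t, omega=(0.0, 1.0), grid=10_000):
    """Grid estimate of w_{r,p}(f, t) from Definition 7 on a 1-D interval omega."""
    xs = np.linspace(omega[0], omega[1], grid)
    best = 0.0
    for h in np.linspace(1e-6, t, 50):         # search over shift sizes |h| <= t
        x = xs[xs + r * h <= omega[1]]         # Delta_h^r f vanishes outside omega
        # r-th difference: sum_j C(r, j) (-1)^(r - j) f(x + j h)
        diff = sum((-1) ** (r - j) * comb(r, j) * f(x + j * h) for j in range(r + 1))
        best = max(best, float(np.mean(np.abs(diff) ** p) ** (1.0 / p)))
    return best

# For the kinked f(x) = |x - 1/2|, the first-order modulus scales like t (Lipschitz):
f = lambda x: np.abs(x - 0.5)
for t in (0.2, 0.1, 0.05):
    print(t, modulus_of_smoothness(f, r=1, p=2, t=t))
```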
Moreover, we show that functions in the Besov space can be decomposed using B-spline basis functions in the following proposition.
Proposition 2 (Decomposition of Besov functions). Any function \( f \) in the Besov space \( B_{p,q}^\alpha, \alpha > d/p \) can be decomposed using B-spline of order \( m, m > \alpha \): for any \( x \in \mathbb{R}^d \), we have
\[
f(x) = \sum_{k=0}^{\infty} \sum_{s \in J(k)} c_{k,s}(f) M_{m,k,s}(x),
\]
where \( J(k) := \{ 2^{-k}s : s \in [-m, 2^k+m]^d \cap \mathbb{Z}^d \} \), \( M_{m,k,s}(x) := M_m(2^k(x-s)) \), and \( M_m(x) = \prod_{i=1}^d M_m(x_i) \) is the tensor-product cardinal B-spline basis function, whose univariate factor can be expressed as a polynomial:
\[
M_m(z) = \frac{1}{m!} \sum_{j=0}^{m+1} (-1)^j \binom{m+1}{j} (z-j)_+^m
\]
\[
= \left( \frac{m+1}{2} \right)^m \frac{1}{m!} \sum_{j=0}^{m+1} (-1)^j \binom{m+1}{j} \left( \frac{z-j}{(m+1)/2} \right)_+^m.
\]
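The cardinal B-spline in Proposition 2 can be evaluated directly from this polynomial formula. The following Python sketch (all parameter choices are illustrative assumptions) computes \( M_m \) and the rescaled tensor basis \( M_{m,k,s} \), and sanity-checks the well-known fact that integer shifts of \( M_m \) form a partition of unity.

```python
from math import comb, factorial
import numpy as np

def cardinal_bspline(m, z):
    """Univariate cardinal B-spline M_m of order m, supported on [0, m + 1]."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    for j in range(m + 2):                     # j = 0, ..., m + 1
        out += (-1) ** j * comb(m + 1, j) * np.maximum(z - j, 0.0) ** m
    return out / factorial(m)

def tensor_bspline(m, k, s, x):
    """Rescaled/shifted tensor basis M_{m,k,s}(x) = prod_i M_m(2^k (x_i - s_i))."""
    x, s = np.atleast_1d(x), np.atleast_1d(s)
    return float(np.prod(cardinal_bspline(m, 2.0 ** k * (x - s))))

# Sanity check: integer shifts of M_m form a partition of unity.
z = 2.3
print(sum(cardinal_bspline(2, z - s) for s in range(-3, 6)))   # approx 1.0
```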
We next define \( B_{p,q}^\alpha \) functions on \( M \).
Definition 9 (\( B_{p,q}^\alpha \) Functions on \( M \) [Geller & Pesenson, 2011; Triebel, 1992]). Let \( M \) be a compact smooth manifold of dimension \( d \). Let \( \{ (U_i, \phi_i) \}_{i=1}^{C_M} \) be a finite atlas on \( M \) and \( \{ \rho_i \}_{i=1}^{C_M} \) be a partition of unity on \( M \) such that \( \text{supp}(\rho_i) \subset U_i \). A function \( f : M \to \mathbb{R} \) is in \( B_{p,q}^\alpha(M) \) if
\[
\|f\|_{B_{p,q}^\alpha(M)} := \sum_{i=1}^{C_M} \| (f \rho_i) \circ \phi_i^{-1} \|_{B_{p,q}^\alpha(\mathbb{R}^d)} < \infty.
\]
Since \( \rho_i \) is supported on \( U_i \), the function \( (f \rho_i) \circ \phi_i^{-1} \) is supported on \( \phi_i(U_i) \). We can extend \( (f \rho_i) \circ \phi_i^{-1} \) from \( \phi_i(U_i) \) to \( \mathbb{R}^d \) by setting the function to be 0 on \( \mathbb{R}^d \setminus \phi_i(U_i) \). The extended function lies in the Besov space \( B_{p,q}^\alpha(\mathbb{R}^d) \) (Triebel, 1992, Chapter 7).
2.3 Architecture of ConvResNeXt
We introduce the architecture of ConvResNeXts, which have three main features: convolution kernels, residual connections, and a parallel (multi-branch) architecture.
We consider one-sided stride-one convolutions in our network. Let \( W = \{ W_{j,k,l} \} \in \mathbb{R}^{w' \times K \times w} \) be a convolution kernel with output channel size \( w' \), kernel size \( K \), and input channel size \( w \). For \( z \in \mathbb{R}^{D \times w} \), the convolution of \( W \) with \( z \) gives \( y \in \mathbb{R}^{D \times w'} \) such that
\[
y = W \ast z, \quad y_{i,j} = \sum_{k=1}^{K} \sum_{l=1}^{w} W_{j,k,l} z_{i+k-1,l},
\]
where \( 1 \leq i \leq D, 1 \leq j \leq w' \) and we set \( z_{i+k-1,l} = 0 \) for \( i+k-1 > D \), as demonstrated in Figure 1(a).
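A direct transcription of the one-sided stride-one convolution in (4) may be helpful. The following Python sketch implements the formula verbatim, with the zero-padding convention \( z_{i+k-1,l} = 0 \) for \( i+k-1 > D \); the shapes are illustrative assumptions.

```python
import numpy as np

def one_sided_conv(W, z):
    """One-sided stride-one convolution y = W * z from (4).
    W: (w_out, K, w_in) kernel, z: (D, w_in) input, y: (D, w_out) output."""
    w_out, K, w_in = W.shape
    D = z.shape[0]
    z_pad = np.vstack([z, np.zeros((K - 1, w_in))])   # z_{i+k-1,l} = 0 for i+k-1 > D
    y = np.zeros((D, w_out))
    for i in range(D):
        for j in range(w_out):
            # y_{i,j} = sum_{k=1}^K sum_{l=1}^w W_{j,k,l} z_{i+k-1,l}
            y[i, j] = np.sum(W[j] * z_pad[i:i + K])
    return y

rng = np.random.default_rng(0)
W, z = rng.normal(size=(3, 2, 4)), rng.normal(size=(8, 4))
print(one_sided_conv(W, z).shape)                     # (8, 3)
```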
The building blocks of ConvResNeXts are residual blocks. Given an input \( x \), each residual block computes \( x + F(x) \), where \( F \) is a subnetwork called bottleneck, consisting of convolutional layers.
In ConvResNeXts, a parallel architecture is introduced to each building block, which enables multiple “paths” in each block. In this paper, we study the ConvResNeXts with rectified linear unit (ReLU) activation function, i.e., \( \text{ReLU}(z) = \max\{z, 0\} \). We next provide the detailed definition of ConvResNeXts as follows:
Figure 1: (a) Demonstration of the convolution operation $W \ast z$, where the input is $z \in \mathbb{R}^{D \times w}$, and the output is $W \ast z \in \mathbb{R}^{D \times w'}$. Here $W_{j,:,:}$ is a $D \times w$ matrix for the $j$-th output channel. (b) Demonstration of the ConvResNeXt. $f_{1,1}, \ldots, f_{N,M}$ are the building blocks, each building block is a convolution neural network.
Definition 10. Let the neural network comprise $N$ residual blocks, each residual block has a parallel architecture with $M$ building blocks, and each building block contains $L$ layers. The number of channels is $w$, and the convolution kernel size is $K$. Given an input $x \in \mathbb{R}^D$, a ConvResNeXt with ReLU activation function can be represented as
$$f(x) = W_{\text{out}} \cdot \left( \sum_{m=1}^{M} f_{N,m} + \text{id} \right) \circ \cdots \circ \left( \sum_{m=1}^{M} f_{1,m} + \text{id} \right) \circ P(x),$$
$$f_{n,m} = W_{L}^{(n,m)} \ast \text{ReLU} \left( W_{L-1}^{(n,m)} \ast \cdots \ast \text{ReLU} \left( W_{1}^{(n,m)} \ast x \right) \right),$$
where $\text{id}$ is the identity operator, $P : \mathbb{R}^D \rightarrow \mathbb{R}^{D \times w}$ is the padding operator satisfying $P(x) = [x, 0 \ldots 0] \in \mathbb{R}^{D \times w}$, $\{W_{l}^{(n,m)}\}_{l=1}^{L}$ is a collection of convolution kernels for $n = 1, \ldots, N$, $m = 1, \ldots, M$, $W_{\text{out}}$ denotes the linear operator of the last layer, and $\ast$ is the convolution operation defined in (4).
The structure of ConvResNeXts is shown in Figure 1(b). When $M = 1$, the ConvResNeXt defined in Definition 10 reduces to a ConvResNet. For notational simplicity, we omit biases in the neural network structure by extending the input dimension and padding the input with a scalar 1 (See Proposition 18 for more details). The channel with 0’s is used to accumulate the output.
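To make Definition 10 concrete, below is a minimal PyTorch sketch of a ConvResNeXt. It is a simplified rendering under stated assumptions, not the exact construction used in the proofs: biases, the accumulation channel, and the norm constraints are omitted, and all hyperparameter values are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    """One building block f_{n,m}: L one-sided stride-one convolutions (4)
    with ReLU activations in between, as in Definition 10."""
    def __init__(self, w, K, L):
        super().__init__()
        self.K = K
        self.convs = nn.ModuleList(nn.Conv1d(w, w, K, bias=False) for _ in range(L))

    def forward(self, z):                           # z: (batch, w, D)
        for i, conv in enumerate(self.convs):
            z = conv(F.pad(z, (0, self.K - 1)))     # right zero-pad: z_{i+k-1} = 0 past D
            if i < len(self.convs) - 1:
                z = torch.relu(z)
        return z

class ConvResNeXt(nn.Module):
    """Minimal sketch of Definition 10 (biases and the zero channel omitted)."""
    def __init__(self, D, N, M, L, K, w):
        super().__init__()
        self.w = w
        self.blocks = nn.ModuleList(
            nn.ModuleList(Bottleneck(w, K, L) for _ in range(M)) for _ in range(N))
        self.out = nn.Linear(D * w, 1, bias=False)  # the linear operator W_out

    def forward(self, x):                           # x: (batch, D)
        z = torch.zeros(x.shape[0], self.w, x.shape[-1])
        z[:, 0] = x                                 # padding operator P(x) = [x, 0, ..., 0]
        for block in self.blocks:                   # z <- z + sum_m f_{n,m}(z)
            z = z + sum(path(z) for path in block)
        return self.out(z.flatten(1)).squeeze(-1)

f = ConvResNeXt(D=16, N=4, M=2, L=3, K=3, w=8)
print(f(torch.randn(5, 16)).shape)                  # torch.Size([5])
```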
3 THEORY
In this section, we study a binary classification problem on $\mathcal{M} \subseteq [-1, 1]^D$. Specifically, we are given i.i.d. samples $\{x_i, y_i\}_{i=1}^{n} \sim \mathcal{D}$ where $x_i \in \mathcal{M}$ and $y_i \in \{0, 1\}$ is the label. The label $y$ follows the Bernoulli-type distribution
$$\mathbb{P}(y = 1 | x) = \frac{\exp(f^*(x))}{1 + \exp(f^*(x))} \quad \text{and} \quad \mathbb{P}(y = 0 | x) = \frac{1}{1 + \exp(f^*(x))}$$
for some $f^* : \mathcal{M} \rightarrow \mathbb{R}$ belonging to the Besov space. More specifically, we make the following assumption on $f^*$.
Assumption 1. Let $0 < p, q \leq \infty$, $d/p < \alpha < \infty$. Assume $f^* \in B_{p,q}^{\alpha}(\mathcal{M})$ and $\|f^*\|_{B_{p,q}^{\alpha}(\mathcal{M})} \leq C_F$ for some constant $C_F > 0$.
To learn $f^*$, we minimize the empirical logistic risk over the training data:
$$\hat{f} = \arg \min_{f \in \mathcal{F}_{\text{Conv}}} \frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log(1 + \exp(-f(x_i))) + (1 - y_i) \log(1 + \exp(f(x_i))) \right],$$
where $\mathcal{F}_{\text{Conv}}$ is some neural network class specified later. For notational simplicity, we denote the empirical logistic risk function in (5) as $\mathcal{L}_n(f)$, and denote the population logistic risk as
$$\mathbb{E}_D[\mathcal{L}(f)] = \mathbb{E}_{(x,y) \sim \mathcal{D}}[y \log(1 + \exp(-f(x))) + (1 - y) \log(1 + \exp(f(x)))].$$
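The following Python sketch illustrates the statistical setup: covariates on a one-dimensional manifold (a circle) embedded in $\mathbb{R}^3$, Bernoulli labels drawn according to $f^*$, and the empirical logistic risk of (5). The manifold, the choice of $f^*$, and the sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the setup in Section 3: x lives on a circle (d = 1) embedded in
# R^3 (D = 3), and labels follow P(y = 1 | x) = exp(f*(x)) / (1 + exp(f*(x))).
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
x = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
f_star = np.sin(3.0 * theta)                 # stands in for f* in B^alpha_{p,q}(M)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-f_star)))

def empirical_logistic_risk(f_vals, y):
    """L_n(f) from Eq. (5), evaluated at the values f(x_i) = f_vals[i]."""
    return np.mean(y * np.log1p(np.exp(-f_vals)) + (1 - y) * np.log1p(np.exp(f_vals)))

print(empirical_logistic_risk(f_star, y))            # risk of the true score f*
print(empirical_logistic_risk(np.zeros_like(y), y))  # risk of the trivial predictor
```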
We next specify the class of ConvResNeXts for learning $f^*$:
$$\mathcal{F}_{\text{Conv}}(N, M, L, K, w, B_{\text{res}}, B_{\text{out}}) = \left\{ f \,\middle|\, \begin{array}{l} f \text{ is of the form in Definition 10 with } N \text{ residual blocks, each containing } M \\ \text{building blocks of } L \text{ layers each; every layer has kernel size at most } K \text{ and at most} \\ w \text{ channels; } \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\ell=1}^{L} \|W_{\ell}^{(n,m)}\|_F^2 \leq B_{\text{res}}, \ \|W_{\text{out}}\|_F^2 \leq B_{\text{out}}, \\ \text{and } f(x) \in [0, 1] \text{ for all } x \in \mathcal{M} \end{array} \right\}.$$
Note that the hyperparameters of $\mathcal{F}_{\text{Conv}}$ will be specified in our theoretical analysis later.
As can be seen, $\mathcal{F}_{\text{Conv}}$ imposes Frobenius-norm constraints on the weights. For computational convenience in practice, such constraints can be replaced with weight decay regularization applied to the residual blocks and the last fully-connected layer separately. More specifically, we can use the following alternative formulation:
$$\hat{f} = \arg\min_{f \in \mathcal{F}_{\text{Conv}}(N, M, L, K, w, \infty, \infty)} \mathcal{L}_n(f) + \lambda_1 \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{\ell=1}^{L} \|W_{\ell}^{(n,m)}\|_F^2 + \lambda_2 \|W_{\text{out}}\|_F^2,$$
where $\lambda_1, \lambda_2 > 0$ are properly chosen regularization parameters.
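In practice, this alternative formulation corresponds to standard weight decay with two separate coefficients. Below is a minimal PyTorch sketch, reusing the `ConvResNeXt` class sketched after Definition 10; the values of $\lambda_1, \lambda_2$ and the learning rate are illustrative assumptions.

```python
import torch

# Weight decay with strength lambda_1 on the residual blocks and lambda_2 on the
# output layer, implemented via optimizer parameter groups.
f = ConvResNeXt(D=16, N=4, M=2, L=3, K=3, w=8)
lam1, lam2 = 1e-4, 1e-3
opt = torch.optim.SGD(
    [
        {"params": f.blocks.parameters(), "weight_decay": 2 * lam1},
        {"params": f.out.parameters(), "weight_decay": 2 * lam2},
    ],
    lr=0.1,
)
# SGD's weight_decay adds wd * W to the gradient, i.e. it penalizes (wd / 2) ||W||_F^2
# per step, hence the factor of 2 to match a lambda ||W||_F^2 penalty.
loss_fn = torch.nn.BCEWithLogitsLoss()       # the logistic risk L_n(f)
xb, yb = torch.randn(32, 16), torch.randint(0, 2, (32,)).float()
opt.zero_grad()
loss_fn(f(xb), yb).backward()
opt.step()
```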
### 3.1 Approximation Theory
In this section, we provide a universal approximation theory of ConvResNeXts for Besov functions on a smooth manifold:
**Theorem 3.** For any Besov function $f_0$ on a smooth manifold satisfying $p, q \geq 1, \alpha - d/p > 1$,
$$\|f_0\|_{B_{p,q}^\alpha(\mathcal{M})} \leq C_F,$$
for any $P > 0$ and any ConvResNeXt class $\mathcal{F}_{\text{Conv}}(N, M, L, K, w, B_{\text{res}}, B_{\text{out}})$ satisfying $L = L' + L_0 - 1, L' \geq 3$, where $L_0 = \lceil \frac{D}{K-1} \rceil$, and
$$MN \geq C_M P, \quad w \geq C_1(dm + D), \quad B_{\text{res}} \leq C_2 L/K,$$
$$B_{\text{out}} \leq C_3 C_F^2 ((dm + D)LK)^L (C_M P)^{L-2/p},$$
there exists $f \in \mathcal{F}_{\text{Conv}}(N, M, L, K, w, B_{\text{res}}, B_{\text{out}})$ such that
$$\|f - f_0\|_\infty \leq C_F C_M \left( C_4 P^{-\alpha/d} + C_5 \exp(-C_6 L' \log P) \right),$$
where $C_1, C_2, C_3$ are universal constants and $C_4, C_5, C_6$ are constants that depend only on $d$ and $m$; here $d$ is the intrinsic dimension of the manifold and $m$ is an integer satisfying $0 < \alpha < \min(m, m - 1 + 1/p)$.
The approximation error of the network is bounded by the sum of two terms. The first term is a polynomial decay term that decreases with the size of the neural network and represents the trailing term of the B-spline approximation. The second term reflects the approximation error of neural networks to piecewise polynomials, decreasing exponentially with the number of layers. The proof is deferred to Section 4.1 and the appendix.
### 3.2 Estimation Theory
**Theorem 4.** Suppose Assumption 1 holds. Set $L = L' + L_0 - 1, L' \geq 3$, where $L_0 = \lceil \frac{D}{K-1} \rceil$, and
$$MN \geq C_M P, \quad P = O\left(n^{\frac{1}{2\alpha/d(1-1/L)+1-2/(pL)}}\right), \quad w \geq C_1(dm + D).$$
Let $\hat{f}$ be the global minimizer given in (5) with the function class $\mathcal{F} = \mathcal{F}_{\text{Conv}}(N, M, L, K, w, B_{\text{res}}, B_{\text{out}})$. Then we have
$$\mathbb{E}_D[\mathcal{L}(\hat{f}(x), y)] \leq \mathbb{E}_D[\mathcal{L}(f^*(x), y)] + C_7 \left( \frac{K^{2/L-2} w^{3L-4} L^{3L-2}}{n^{\frac{\alpha/d(1-1/L)}{2\alpha/d(1-1/L)+1-2/(pL)}}} \right) + C_8 \exp(-C_6 L'),$$
where logarithmic terms are omitted, $C_1$ is the constant defined in Theorem 3, $C_7, C_8$ are constants that depend on $C_F, C_M, d, m$, and $K$ is the size of the convolution kernel.
We would like to make the following remarks about the results:
• **Strong adaptivity:** By setting the width of the neural network to \( w = 2C_1 D \), the model can adapt to any Besov functions on any smooth manifold, provided that \( dm \leq D \). This remarkable flexibility can be achieved simply by tuning the regularization parameter. The cost of overestimating the width is a slight increase in the estimation error. Considering the immense advantages of this more adaptive approach, this mild price is well worth paying.
• **No curse of dimensionality:** The error rate above depends only polynomially on the ambient dimension \( D \) and exponentially on the intrinsic dimension \( d \). Since in real data the intrinsic dimension \( d \) can be much smaller than the ambient dimension \( D \), this result shows that neural networks can exploit the low-dimensional structure of data to overcome the curse of dimensionality.
• **Overparameterization is fine:** the number of building blocks in a ConvResNeXt does not influence the estimation error as long as it is large enough. In other words, this matches the empirical observations that neural networks generalize well despite overparameterization.
• **Close to minimax rate:** The minimax lower bound for any estimator \( \theta \) under a 1-Lipschitz loss is
\[
\min_{\theta} \max_{f^* \in B_{p,q}^\alpha} L(\theta(D), f^*) \gtrsim n^{-\frac{\alpha/d}{2\alpha/d+1}},
\]
where the \( \gtrsim \) notation hides a constant factor. The proof can be found in Appendix E. Comparing with the minimax rate, we see that as \( L \to \infty \), the error rate above converges to the minimax rate up to a constant term. In other words, overparameterized ConvResNeXts can achieve close to the minimax rate in estimating functions in the Besov class. In comparison, all kernel ridge regression estimators, including any NTK, suffer a suboptimal rate lower bounded by \( n^{-\frac{2\alpha-d}{2\alpha}} \).
• **Deeper is better:** With larger \( L \), the error rate decays faster with \( n \) and gets closer to the minimax rate. This indicates that deeper models can achieve better performance than shallower ones when the training set is large enough.
• **Tradeoff between width and depth:** With a fixed budget on the number of parameters, the tradeoff between width and depth is crucial for achieving the best performance, and finding it often requires repeated, time-consuming experiments. Our results suggest that such a tradeoff is less important in a ResNeXt: the error bound does not depend on the arrangement of the residual blocks into \( M \) and \( N \), as long as their product is large enough. This can partly explain the benefit of ResNeXts over other architectures.
By choosing \( L = O(\log n) \), the second term in the error bound can be merged with the first term, and a rate close to the minimax rate is achieved:
**Corollary 5.** Under the conditions of Theorem 4, set the depth of each block to \( L = O(\log n) \). Then the estimation error of the empirical risk minimizer \( \hat{f} \) satisfies
\[
\mathbb{E}_D[\mathcal{L}(\hat{f}(x), y)] \leq \mathbb{E}_D[\mathcal{L}(f^*(x), y)] + \tilde{O}\left(n^{-\frac{\alpha/d}{2\alpha/d+1}(1-o(1))}\right),
\]
where \( \tilde{O}(\cdot) \) omits the logarithmic term.
The proof of Theorem 4 is deferred to Section 4.2 and Section D.2. The key technique is computing the critical radius of the local Gaussian complexity by bounding the covering number of weight-decayed ConvResNeXts. This technique provides a tighter bound than choosing a single radius of the covering number as in Suzuki (2018); Zhang & Wang (2022), for example. The covering number of an overparameterized ConvResNeXt with norm constraint (Lemma 6) is one of our key contributions.
4 PROOF OVERVIEW
4.1 APPROXIMATION ERROR
We follow the method in Liu et al. (2021) to construct a neural network that achieves the approximation error we claim. It is divided into the following steps:
• **Step 1:** Decompose the target function into the sum of locally supported functions.
In this work, we adopt a similar approach to (Liu et al., 2021) and cover \( \mathcal{M} \) using a finite number of open balls in \( \mathbb{R}^D \). Specifically, we let \( B(c_i, r) \) denote the open ball with center \( c_i \) and radius \( r \), chosen such that the union of these balls covers the manifold of interest, i.e., \( \mathcal{M} \subseteq \bigcup_{i=1}^{C_M} B(c_i, r) \). This allows us to
partition the manifold into subregions $U_i = B(c_i, r) \cap M$, and further decompose a smooth function on the manifold into the sum of locally supported smooth functions with linear projections. The existence of function decomposition is guaranteed by the existence of partition of unity stated in Proposition 1. See Section C.1 for the detail.
• **Step 2:** Locally approximate the decomposed functions using cardinal B-spline basis functions. In the second step, we decompose the locally supported Besov functions achieved in the first step using B-spline basis functions. The existence of the decomposition was proven by Dung (2011), and was applied in a series of works (Zhang & Wang, 2022; Suzuki, 2018; Liu et al., 2021). The difference between our result and previous work is that we define a norm on the coefficients and bound this norm, instead of bounding the maximum value. The detail is deferred to Section C.2.
• **Step 3:** Approximate the polynomial functions using neural networks. Here we follow the method in Zhang & Wang (2022); Suzuki (2018); Liu et al. (2021) and show that neural networks can approximate polynomial functions, including B-spline basis functions and the distance function. The key technique is to use a neural network to approximate the squaring and multiplication functions (Barron, 1993). The details are deferred to the appendix. Specifically, Lemma 17 proves that a neural network with width $w = O(dm)$ and depth $L$ can approximate B-spline basis functions, with error decreasing exponentially in $L$; similarly, Proposition 9 shows that a neural network with width $w = O(D)$ can approximately compute the squared distance $d^2(x, c)$ between two points, with precision decreasing exponentially in the depth.
• **Step 4:** Use a ConvResNeXt to Approximate the target function. Using the results above, the target function can be (approximately) decomposed as
$$\sum_{i=1}^{C_M} \sum_{j=1}^{P} a_{i,k_j,s_j} M_{m,k_j,s_j} \circ \phi_i \times 1(x \in B(c_i, r)).$$
We first demonstrate that a ReLU neural network can approximate the product
$$y \,\tilde{\times}\, \tilde{1}(x \in B_{r,i}),$$
where the approximate multiplication operator $\tilde{\times}$ satisfies $y \,\tilde{\times}\, 1 = y$ for all $y$ and $y \,\tilde{\times}\, x = 0$ if either $x$ or $y$ is 0, and the soft indicator function $\tilde{1}(x \in B_{r,i})$ satisfies $\tilde{1}(x \in B_{r,i}) = 1$ when $x \in B_{r,i}$ and $\tilde{1}(x \in B_{r,i}) = 0$ when $x \notin B_{r+\Delta,i}$. The details are deferred to Section C.3.
Then, we show that it is possible to construct $MN = C_M P$ building blocks, such that each building block is a feedforward neural network with width $C_1(md + D)$ and depth $L$, where $m$ is an integer satisfying $0 < \alpha < \min(m, m - 1 + 1/p)$. The $k$-th building block (the position of the block does not matter) approximates
$$a_{i,k_j,s_j} M_{m,k_j,s_j} \circ \phi_i \times 1(x \in B(c_i, r)),$$
where $i = \lceil k/N \rceil$ and $j = \text{rem}(k, N)$. Within each building block, a sub-block of width $D$ and depth $L - 1$ approximates the chart selection, a sub-block of width $md$ and depth $L - 1$ approximates the B-spline basis function, and the last layer approximates the multiplication. The norm of this block is bounded by
$$\sum_{\ell=1}^{L} \|W_{\ell}^{(i,j)}\|_F^2 \leq O(2^{2k/L} dmL + DL).$$
Making use of the 1-homogeneity of the ReLU function, by rescaling all the weights in the neural network, these building blocks can be combined into a neural network with residual connections that approximates the target function and satisfies our constraint on the norm of the weights. See Section C.4 for the details.
By applying Lemma 12 which shows that any $L$-layer feedforward neural network can be reformulated as an $L + L_0 - 1$-layer convolution neural network, the neural network constructed above can be converted into a ConvResNeXt that satisfies the conditions in Theorem 3.
### 4.2 Estimation Error
We first prove the covering number of an overparameterized ConvResNeXt with norm-constraint as in Lemma 6, then compute the critical radius of this function class using the covering number as in Corollary 19. The critical radius can be used to bound the estimation error as in Theorem 14.20 in Wainwright (2019). The proof is deferred to Section D.2.
Lemma 6. Consider a neural network defined in Definition 10. Let the last layer of this neural network be a single linear layer with norm $\|W_{\text{out}}\|_F^2 \leq B_{\text{out}}$. Let the input of this neural network satisfy $\|x\|_2 \leq 1$ for all $x$, and let it be concatenated with 1 before being fed into the network, so that part of the weight plays the role of the bias. The covering number of this neural network class is bounded by
$$\log N(\cdot, \delta) \lesssim w^2 LB_{\text{res}}^{1/2} K^{2-2/L} (B_{\text{out}}^{1/2} \exp((KB_{\text{res}}/L)^{L/2}))^{2/L} \delta^{-2/L},$$
where the logarithmic term is omitted.
The key idea of the proof is to split the building block into two types ("small blocks" and "large blocks") depending on whether the total norm of the weights in the building block is smaller than $\epsilon$ or not. By properly choosing $\epsilon$, we prove that if all the "small blocks" in this neural network are removed, the perturbation to the output for any input $\|x\| \leq 1$ is no more than $\delta/2$, so the covering number of the ConvResNeXt is only determined by the number of "large blocks", which is no more than $B_{\text{res}}/\epsilon$.
Proof. Using the inequality of arithmetic and geometric means, from Proposition 20, Proposition 22, and Proposition 23, if any residual block is removed, the perturbation to the output is no more than
$$(KB_m/L)^{L/2} B_{\text{out}}^{1/2} \exp((KB_{\text{res}}/L)^{L/2}),$$
where $B_m$ is the total norm of parameters in this block. Because of that, the residual blocks can be divided into two kinds depending on the norm of the weights $B_m < \epsilon$ ("small blocks") and $B_m \geq \epsilon$ ("large blocks"). If all the "small blocks" are removed, the perturbation to the output for any input $\|x\|_2 \leq 1$ is no more than
$$\exp((KB_{\text{res}}/L)^{L/2}) B_{\text{out}}^{1/2} \sum_{m:B_m < \epsilon} (KB_m/L)^{L/2}$$
$$\leq \exp((KB_{\text{res}}/L)^{L/2}) B_{\text{out}}^{1/2} \sum_{m:B_m < \epsilon} (KB_m/L)(K\epsilon/L)^{L/2-1}$$
$$\leq \exp((KB_{\text{res}}/L)^{L/2}) K^{L/2} B_{\text{res}} B_{\text{out}}^{1/2} (\epsilon/L)^{L/2-1}/L.$$
Choosing $\epsilon = L \left( \frac{\delta L}{2 \exp((KB_{\text{res}}/L)^{L/2}) K^{L/2} B_{\text{res}} B_{\text{out}}^{1/2}} \right)^{\frac{2}{L-2}}$, the perturbation above is no more than $\delta/2$. The covering number is then determined by the number of "large blocks" in the neural network, which is no more than $B_{\text{res}}/\epsilon$.
Since for any block $B_{\text{in}} L_{\text{post}} \leq B_{\text{out}}^{1/2} \exp((KB_{\text{res}}/L)^{L/2})$, where $B_{\text{in}}$ is the upper bound on the input to the block defined in Proposition 13 and $L_{\text{post}}$ is the Lipschitz constant of all the layers following the block, taking our chosen $\epsilon$ finishes the proof.
Remark 1. The proof of Lemma 6 shows that under weight decay, the building blocks in a ConvResNeXt are sparse, i.e., only a finite number of blocks contribute non-trivially to the network even though the model can be overparameterized. This explains why a ConvResNeXt can generalize well despite overparameterization, and provides a new perspective on why residual connections improve the performance of deep neural networks.
5 DISCUSSIONS
We compare the Besov space with the Hölder and Sobolev spaces, which are also popular in existing literature. The Hölder space $H^{s,\alpha}$ requires the functions to be differentiable everywhere up to the $s$-th order. The Sobolev space slightly generalizes the Hölder space, but still requires high order (weak) differentiability. In contrast, the Besov space $B^{s,p,q}$ does not require weak differentiability, and therefore is more general and desirable than the Hölder and Sobolev spaces. Existing work has shown that the Besov space can capture important features, such as edges in image processing [Jaffard et al., 2001]. In particular, the Hölder and Sobolev spaces are special cases of the Besov space:
$$H^{s,\alpha} = W^{s+\alpha,\infty} \subseteq B^{s+\alpha}_{\infty,\infty} \subseteq B^{s+\alpha}_{p,q}$$
for any $0 < p, q \leq \infty$, $s \in \mathbb{N}$ and $\alpha \in (0,1]$. Due to the generality of the Besov space, existing literature has shown that kernel ridge estimators, including the neural tangent kernel, only attain a sub-optimal rate for learning Besov functions [Suzuki & Nitanda, 2021], which is worse than the rate of deep neural networks such as ConvResNeXts.
REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014.
Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. *IEEE Transactions on Information theory*, 39(3):930–945, 1993.
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. *IEEE transactions on pattern analysis and machine intelligence*, 40(4):834–848, 2017.
George Cybenko. Approximation by superpositions of a sigmoidal function. *Mathematics of control, signals and systems*, 2(4):303–314, 1989.
Ronald A DeVore and George G Lorentz. *Constructive approximation*, volume 303. Springer Science & Business Media, 1993.
David L Donoho, Richard C Liu, and Brenda MacGibbon. Minimax risk over hyperrectangles, and implications. *The Annals of Statistics*, pp. 1416–1437, 1990.
David L Donoho, Iain M Johnstone, et al. Minimax estimation via wavelet shrinkage. *The annals of Statistics*, 26(3):879–921, 1998.
Dinh Dũng. Optimal adaptive sampling recovery. *Advances in Computational Mathematics*, 34(1):1–41, 2011.
Herbert Federer. Curvature measures. *Transactions of the American Mathematical Society*, 93(3):418–491, 1959.
Daryl Geller and Isaac Z Pesenson. Band-limited localized parseval frames and besov spaces on compact homogeneous manifolds. *Journal of Geometric Analysis*, 21(2):334–371, 2011.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in neural information processing systems*, pp. 2672–2680, 2014.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In *2013 IEEE international conference on acoustics, speech and signal processing*, pp. 6645–6649. IEEE, 2013.
Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In *2017 IEEE international conference on robotics and automation (ICRA)*, pp. 3389–3396. IEEE, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.
Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 7132–7141, 2018.
Stéphane Jaffard, Yves Meyer, and Robert D Ryan. *Wavelets: tools for science and technology*. SIAM, 2001.
Michael Kohler and Adam Krzyżak. Adaptive regression estimation with multilayer feedforward neural networks. *Nonparametric Statistics*, 17(8):891–913, 2005.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In *Advances in neural information processing systems*, pp. 1097–1105, 2012.
Anders Krogh and John Hertz. A simple weight decay can improve generalization. *Advances in neural information processing systems*, 4, 1991.
|
49ZYkhEGmv
|
The probabilities in Definition 3.2 stood out: are these merely illustrative (so that any result could be replaced by arbitrary constants $a$ and $b$), or would even qualitative results derived be overturned by use of different fractions (e.g. are there critical values for these numbers)?
|
Scalable AI Safety via Doubly-Efficient Debate
Anonymous authors
Paper under double-blind review
Abstract
The emergence of pre-trained AI systems with powerful capabilities across a diverse and ever-increasing set of complex domains has raised a critical challenge for AI safety as tasks can become too complicated for humans to judge directly. Irving et al. (2018) proposed a debate method in this direction with the goal of pitting the power of such AI models against each other until the (mis)-alignment identification problem is broken down into a manageable subtask. While the promise of this approach is clear, the original framework was based on the assumption that the honest strategy is able to simulate deterministic AI systems for an exponential number of steps, limiting its applicability. In this paper, we show how to address these challenges by designing a new set of debate protocols where the honest strategy can always succeed using a simulation of a polynomial number of steps, whilst being able to verify the alignment of stochastic AI systems, even when the dishonest strategy is allowed to use exponentially many simulation steps.
1 Introduction
Large language models (LLMs) have demonstrated emergent capabilities, including the ability to follow natural-language instructions, use various tools, and perform some types of general-purpose abstract reasoning and planning (Saunders et al., 2022; Yao et al., 2023; Menick et al., 2022; Zhou et al., 2023). Thus far, human feedback on LLM outputs has been used to improve the alignment between the behavior of these models and their designer’s intent (Ouyang et al., 2022). However, these models are increasingly being used to perform complex tasks that can be viewed as the writing and execution of general-purpose computations described in natural language, where at each step the model is invoked with context given by some set of previous model outputs (Lu et al., 2023).
As the complexity of such tasks scales, the ability to provide direct human feedback for training on long complex traces involving reasoning, planning, and taking actions is limited. This limitation leads to the need for new approaches for scalable oversight (Leike et al., 2018; Christiano et al., 2018) where carefully designed protocols involving the interaction of both humans and AI models are used to provide high-quality feedback for training and oversight of complex AI systems.
As a motivating example, consider the case of using a language model to draft a law or a legal contract. Laws and contracts are written in natural language, refer to concepts in the real world, and require human judgement (in the worst case a judge in an actual court) to interpret their meaning. Furthermore, individual passages or even single characters in laws or contracts can have significant real-world consequences, as demonstrated by multimillion-dollar losses suffered by companies and governments due to misplaced commas (Kurtzleben, 2014; BBC, 2017). In order to train a language model to write such high-stakes natural language, it is necessary to be certain that every passage of an extremely long document is correct, where correctness is defined by human judgement. However, requiring human experts to carefully read an entire law or contract produced by a language model to provide the training label for one example is clearly prohibitively expensive. Thus, in this setting it is necessary to design methods for training and oversight that are extremely efficient in their use of human judgements.
A prominent approach to the oversight and safe training of AI systems builds upon the fact that there is a natural high-level correspondence between training techniques in machine learning and interactive proofs in complexity theory, as exemplified by the proposal for AI safety via debate.
The overall goal of this approach is to enable the design of methods that allow the training of extremely computationally powerful learned models that nonetheless behave as desired, despite only being supervised by much more limited verifiers. For example, while no human Go player can instruct the AlphaZero model (Silver et al., 2017) on what move to make next, the model nonetheless was trained to a super-human level via self-play. This was possible precisely because it is computationally easy to verify which player has won at the end of a game of Go. Using such an approach for training LLMs to produce (and then successfully execute) computations described in natural-language requires some method of scalably verifying that the computations produced actually solve the intended task, and are executed correctly.
The surprising ability of computationally limited verifiers to correctly judge the outputs of much more computationally powerful provers underlies some of the most celebrated results in computational complexity theory. Notably, any polynomial space (and potentially exponential time) computation can be verified by a polynomial time verifier interacting with a computationally unbounded prover i.e. IP=PSPACE (Shamir, 1992). Further, for any problem with solutions which can be verified in polynomial time, one can efficiently encode the solutions in such a way that they can be non-trivially verified by reading only three bits chosen uniformly at random from the encoded solution i.e. the PCP theorem (Arora & Safra, 1998; Arora et al., 1998). Recent work has introduced the notion of doubly-efficient interactive proofs (Goldwasser et al., 2015; Reingold et al., 2021) in the context of delegating computation. Here an untrusted prover is asked to run some polynomial-time computation, and the goal is for a linear-time verifier to interact with the prover in order to accurately judge that the computation was performed correctly. Thus, the time spent by the verifier is much less than the time to run the whole computation.
Unfortunately, all of the methods from the theory of interactive proofs for the highly-efficient verification of computationally powerful provers apply only to tasks with mathematically precise definitions (e.g. find a solution to a problem, given the actual code of an algorithm for verifying that the solution is correct). However, in the case of training a model to follow human intent, the main source of feedback available is black-box access to human judgements of model outputs. Strikingly, when access to a black-box is allowed in computations, the main theorems regarding the power of interactive proofs (e.g. IP=PSPACE and the PCP theorem) are actually false (Chang et al., 1994; Fortnow, 1994). However, the goal of efficient verification of powerful provers with access to black-box judgements can still be achieved by requiring that the provers compete.
We introduce the theoretical model of doubly-efficient debate, where two polynomial-time provers compete with each other in order to convince a much more efficient verifier of the correctness of a computation that depends on access to black-box judgements. In this model we prove that, under appropriate assumptions, any polynomial-time computation can be verified using only a constant number of queries to the black-box representing human judgement (and in time linear in the size of a single query). Intuitively, our results show that, for any problem whose solutions can be verified by extremely extensive human reflection, the solutions can also be verified with a constant amount of human judgement and interaction with competing provers. A key requirement, and limitation, for applying our results in real-world settings, is that the debating models must have the ability to produce (potentially extensive) natural-language reasoning traces to solve the problem at hand, in such a way that (potentially extensive) careful human analysis could have been used to judge that the reasoning was correct. These theorems open up the door for training models with human feedback via self-play, as even very complex and extensive computations described in natural language can be verified by querying human judgements for only a single step of such a computation.
1.1 Our Results
Our definition of doubly-efficient debate is a complexity-theoretic formalization of a training setup in which two competing AI models attempt to convince a verifier, who has access to human judgements, of the correctness of a solution to a computational problem. At a high level, the goal is to design protocols where (1) the model arguing for the correct solution convinces the verifier without expending computational effort much greater than would be necessary to correctly solve the problem by itself, and (2) the verifier makes a number of queries to human judgements that does not grow (i.e. is a fixed constant) with respect to the computational effort required to solve the problem. The details of the definition appear in Section 4. Recalling the example of models writing laws or contracts, the above goal would allow for training feedback on an entire legal contract, by showing
only a small, fixed (independently of the contract length) number of sentences to a human rater, allowing for scalable training of such models.
In the subsequent sections we prove theorems achieving this high-level goal in several settings. As a warm-up, in Section 5 we give protocols achieving the goal when human judgements are modeled as deterministic, and the competing models are given explicit natural language instructions to follow. In order to better capture the fuzzy nature of human judgement, we then extend these results to the setting where human judgements are stochastic in Section 6. Finally, in Section 7 we prove theorems achieving our goal in the case where the models are asked to come up with a proposed solution themselves, and then are required to justify the correctness of the solution with a natural-language argument. We also include in the supplementary material a machine-verifiable (in lean) formalization of the proof of the main theorem of Section 6.
2 RELATED WORK
The work most closely related to ours is the debate proposal by Irving et al. (2018), which proposed the setup of natural-language debates between AI models judged by humans. The original proposal showed that debates between two provers can naturally capture the complexity class PSPACE. Follow-up work of Barnes & Christiano (2020b) introduced cross-examination, which extends the power of debate to all of NEXP. This prior theoretical work models both provers in the debate as computationally unbounded, which leaves open the question of whether actual models can efficiently implement the protocols, and whether there may be an advantage for the dishonest prover in a computationally bounded setting. Our model of doubly-efficient debate makes progress on both of these questions, by giving debate protocols where the honest prover always has a winning strategy implementable in polynomial time, even when the dishonest prover is allowed unbounded computation.
The model of doubly-efficient debate is inspired by doubly-efficient interactive proofs in computational complexity first introduced in Goldwasser et al. (2015). The original purpose of this model was to capture the situation where a verifier wants to delegate a polynomial time computation to an untrusted prover, while spending much less time to verify that the computation was performed correctly. Later Reingold et al. (2021) gave the best results currently known for delegating space-bounded computation. See also Goldreich et al. (2018) for a survey of these results. Other related work connecting interactive proofs and machine learning includes Wäldchen et al. (2022), which uses the model of Merlin-Arthur (MA) proof systems in order to achieve formal interpretability of classifier outputs.
The doubly-efficient debate protocols we design are strongly connected to the idea of process-based feedback (Stuhlmüller & jungofthewon, 2022; Uesato et al., 2022), where the goal is to directly supervise the reasoning process of an AI system, rather than just the final outcome. Our protocols can be interpreted as a type of process-based feedback where two AI systems compete to convince a limited verifier that a given outcome has been arrived at by a (possibly complex) reasoning process that the verifier would endorse. On the safety side, there have been various proposals that directly supervise language models with human feedback (Ouyang et al., 2022), as well as with additional data from external sources (Menick et al., 2022). There has also been work that utilizes language models to improve supervision of language models including Constitutional AI (Bai et al., 2022) and self-critique (Saunders et al., 2022). There are also alternatives to debate as approaches to scalable oversight including recursive reward modelling (Leike et al., 2018) and iterated amplification (Christiano et al., 2018). Another line of related work on LLMs that motivates the need for scalable oversight is the design of schemes for prompting language models to perform increasingly complex tasks. Notable examples include Chameleon (Lu et al., 2023), ReAct (Yao et al., 2023), and the direct use of language models as prompt engineers (Zhou et al., 2023).
3 PRELIMINARIES
We will use the notation $[n] = \{0, 1, \ldots, n\}$. For a vector $x \in \{0, 1\}^n$ and a subset $I \subseteq [n]$ we write $x_I$ to denote the restriction of $x$ to the set of coordinates $i \in I$. For a real number $p > 0$ and positive integer $d$, we write $\|x\|_p$ for the $\ell_p$-norm of a vector $x \in \mathbb{R}^d$.
We will model computations as Turing machines $M$ with input $x \in \{0,1\}^n$, that additionally have access to an oracle $O$, which we refer to as oracle Turing machines. Formally, for $l = l(n)$ an oracle is a function $O : \{0,1\}^l \rightarrow \{0,1\}$. An oracle Turing machine $M$ is a Turing machine with the additional ability to write a query $z \in \{0,1\}^l$ onto its tape, after which it will receive a response $O(z)$ in one step. We use the notation $M^O$ to indicate the oracle machine $M$ where the queries $z$ are answered by the oracle $O$. We will also consider the setting where the oracle $O$ is stochastic, in which case the response to each oracle query $O(z)$ is an independent $\{0,1\}$-valued random variable.
In the LLM setting, the machine $M$ corresponds to a set of natural language rules and instructions, and the oracle $O$ represents human judgement along with any other external black-box feedback the model may receive (e.g. results from search-query, observations from a camera or sensor, outputs of API calls).
A language $L \subseteq \{0,1\}^*$ is a subset of finite-length strings. A deterministic oracle Turing machine $M$ decides a language $L$ with oracle $O$ if it holds that $M^O(x) = 1 \iff x \in L$.
A probabilistic oracle Turing machine $M$ decides a language $L$ with oracle $O$ if it holds that $x \in L \implies \mathbb{P}[M^O(x) = 1] > \frac{2}{3}$ and $x \notin L \implies \mathbb{P}[M^O(x) = 1] < \frac{1}{3}$. For LLMs, the language $L$ corresponds to some class of problems describable in natural language, each with a yes or no answer that may depend on human judgement or other black-box feedback encoded by the oracle $O$. The strings $x \in L$ are the problems where the answer is yes, and $x \notin L$ the problems where the answer is no. As is usual this can be extended to search problems (where the answer is polynomial length) by classical search-to-decision reductions.
**Definition 3.1.** A language $L$ is in $\text{NP}^O$ if there is a polynomial-time oracle machine $M$ such that: $x \in L$ if and only if there exists a witness $w$ of length polynomial in $|x| = n$ such that $M^O(x,w) = 1$.
**Definition 3.2.** A language $L$ is in $\text{MA}^O$ if there is a probabilistic oracle machine $M$ and a polynomial $p(n)$ such that:
- $x \in L \implies \exists w$ of length $p(n)$ s.t. $\mathbb{P}[M^O(x,w) = 1] > \frac{2}{3}$.
- $x \notin L \implies \forall w$ of length $p(n)$, $\mathbb{P}[M^O(x,w) = 1] < \frac{1}{3}$.
For the LLM setting, languages in $\text{NP}^O$ and $\text{MA}^O$ correspond to problems $x$ describable in natural language, where a correct solution (the witness $w$) can be verified by polynomially many human judgements of a potentially polynomial length transcript arguing that $w$ is a solution to $x$. These sorts of problems are arguably the most important for safety and scalable oversight, as they correspond to the case where the LLM proposes a plan $w$ in natural language, and goes through a potentially quite long sequence of steps to argue that execution of the plan will have the desired outcome.
The protocols establishing the power of debate in terms of standard complexity classes rely on producing verifiable transcripts of some prescribed computation. A transcript of a time $T$ computation of machine $M$ on input $x$ is a string $y \in \{0,1\}^T$, where $y_t$ is the bit written at the current head position of $M$ in time step $t$. We will assume that the $T$-th coordinate of the transcript is equal to the output of $M$ on $x$ i.e. $y_T = M(x)$. In the context of LLMs executing polynomial-length computations from natural-language instructions, the transcript is just the string of tokens output by the model. Given a transcript $y$, the subset of coordinates $I_{M,x}(t) \subseteq [T]$ of $y$ relevant to coordinate $t \in [T]$ are the coordinates of the transcript that are read by $M$ when computing $y_t$. When the machine $M$ and input $x$ are obvious from context we will write $I(t)$ for the set of relevant coordinates.
For standard Turing machines (without access to an oracle), the set of relevant coordinates has size $O(1)$, but for oracle Turing machines may be as large as $l$.
## 4 Debate
A debate (Irving et al., 2018) is given by a triple $(A,B,V)$ of oracle Turing machines, an oracle $O$, and a common input $x$ of length $n$. The machines $A$ and $B$ are called provers and $V$ is called the verifier. A debate consists of $k = k(n)$ rounds, during which the provers exchange messages. In round $i \in [k]$ prover $A$ sends a message $a^{(i)} = A^O(x,a^{(1)},b^{(1)},\ldots,a^{(i-1)},b^{(i-1)})$ and prover $B$ sends a message $b^{(i)} = B^O(x,a^{(1)},b^{(1)},\ldots,a^{(i-1)},b^{(i-1)})$ which can be read by all parties involved. We let $a = (a^{(1)},\ldots,a^{(k)})$ and $b = (b^{(1)},\ldots,b^{(k)})$ denote the full transcript of the messages sent by
each prover. At the end of the $k$-th round, the verifier runs $V^O(x, a, b)$ and outputs either zero or one. As defined, the two provers each send a message in one round, but this also captures the case of taking turns by having them alternate sending empty messages.
### 4.1 Doubly-Efficient Debate
Different variants of debate arise depending on the computational power and/or limitations of the provers and the verifier.
**Definition 4.1.** A $(P_{\text{time}}, V_{\text{time}}, q)$-debate protocol is given by a triple of oracle Turing machines $(A, B, V)$ where $A$ and $B$ run in time $P_{\text{time}}$, and $V$ runs in time $V_{\text{time}}$ and makes $q$ oracle queries. Let $1 \geq c > \frac{1}{2} > s \geq 0$. A debate protocol decides a language $L$ with completeness $c$ and soundness $s$ if:
- **Completeness:** If $x \in L$ then for all (unbounded time) oracle Turing machines $B'$ the debate $(A, B', V)$, with oracle $O$, and input $x$ satisfies $\mathbb{P}[V^O(x, a, b) = 1] \geq c$.
- **Soundness:** If $x \notin L$ then for all (unbounded time) oracle Turing machines $A'$ the debate $(A', B, V)$, with oracle $O$, and input $x$ satisfies $\mathbb{P}[V^O(x, a, b) = 1] \leq s$.
When $c = 1$ and $s = 0$ we say that the debate protocol deterministically decides $L$.
For deterministic oracle machines, as there is no randomness, it will always be the case that $c = 1$ and $s = 0$ i.e. that the honest prover always wins. For stochastic oracle machines the definition requires a constant gap between $c$ and $s$, which as usual can be amplified by repeating the protocol multiple times and taking the majority outcome. A debate protocol specifies the behavior of honest provers for both of the cases $x \in L$ and $x \notin L$. Additionally, it is required that the honest prover wins the debate with higher probability while running in time $P_{\text{time}}$, against any (computationally unbounded) strategy by the dishonest prover. Note that this requirement gives a complexity theoretic formalization of the intuitively desirable property that debates should be structured so that it is easier to tell the truth than to lie.
The original definition of debate requires $V_{\text{time}}$ to be polynomial in $n$, but allows $P_{\text{time}}$ to be unbounded. Doubly-efficient debate refers to the setting where $P_{\text{time}}$ is polynomial in $n$ and $V_{\text{time}}$ is linear in $l$, the length of each oracle query, and linear (or ideally sub-linear) in $n$, and $V$ makes a sub-linear (or ideally constant) number $q$ of oracle queries.
### 4.2 Training and Inference with Debate
We first clarify the relationship between our formal theoretical model of debate, and practical training setups.
**The oracle $O$:** Our theoretical model gives both the provers and the verifier access to an oracle $O$ representing human judgement. The prover access to the oracle corresponds to the fact that the powerful pre-trained models already have extensive knowledge of human judgement on many questions, and will gain more knowledge throughout the training process. The verifier access to the oracle $O$ corresponds to actual judgements by human raters.
**At training time:** The pretrained models $A$ and $B$ are trained via self-play to win the zero-sum game given by the debate protocol. When a model queries $O$ this corresponds to using either pre-trained knowledge, or knowledge gained during self-play, to predict the answer that a human would give to the oracle query. When the verifier $V$ queries $O$ this corresponds to asking an actual human rater to judge an oracle query. The distinction between pre-trained knowledge for provers, versus queries to human judgements for the verifier is critical, because it means that the training cost in terms of number of queries to human judgement is equal to the number of verifier queries to $O$. Thus, as long as the number of verifier oracle queries is bounded, the training procedure can scale to arbitrarily complex computations by the models $A$ and $B$, while still only requiring a bounded number of human judgements.
**At inference time:** When a model is asked to solve a problem or follow complex natural language instructions at inference time, the debate protocol is still run. However, no human feedback is used. Instead the output of the model $A$ is trusted, as long as $B$ does not abort/point out a flaw.
5 DETERMINISTIC DEBATE
Doubly-efficient debate can decide any problem solvable in bounded space with verifier time that is nearly-linear in the space used, and only a constant number of verifier queries to $O$.
**Theorem 5.1.** Let $L$ be any language decidable by an oracle Turing machine $M$ in time $T = T(n)$ using space $S = S(n)$. Then there is a $(O(T \log T), O(S \log T), O(1))$-debate protocol deterministically deciding $L$.
The proof appears in Section B. One can compare Theorem 5.1 to the setting of doubly-efficient interactive proofs where there is a single prover (and without any black-box oracles). Reingold et al. (2021) show that any time $T$ space $S$ computation can be decided by a doubly-efficient interactive proof in time $O(S^2 \text{polylog } T)$. It is currently an open question whether this can be improved to $O(S \text{ polylog } T)$ (Goldreich et al., 2018). Additionally, the protocol of Reingold et al. (2021) is quite complex, and relies on prior work in interactive proofs including the PCP theorem, so does not apply in the presence of a black-box oracle.
The protocol achieving Theorem 5.1 is given in Figure 3 in Section A. The basic idea (which has been used in many classical PSPACE-completeness results) is to have $A$ output a supposed middle configuration of the computation of $M(x)$. Then $B$ decides to recursively call the protocol on either the first or the second half of the computation. This recursion bottoms out at a single transition of the machine $M$, which can be checked by $V$.
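The recursion underlying Theorem 5.1 is compact enough to sketch in a few lines of Python. The sketch below is a toy rendering under simplifying assumptions (configurations are plain Python values, and the prover strategies are supplied as callables); it is not the full protocol of Figure 3.

```python
def debate_verify(step, c_start, c_end, t0, t1, A, B):
    """Sketch of Theorem 5.1's recursion: decide whether iterating `step` from time
    t0 to t1 maps configuration c_start to c_end, using O(log T) rounds.
    A proposes midpoint configurations; B chooses which half to challenge."""
    if t1 - t0 == 1:
        return step(c_start) == c_end          # base case: V checks one transition
    mid = (t0 + t1) // 2
    c_mid = A(c_start, c_end, t0, t1)          # A's claimed configuration at time mid
    if B(c_start, c_mid, c_end, t0, t1):       # True: B challenges the first half
        return debate_verify(step, c_start, c_mid, t0, mid, A, B)
    return debate_verify(step, c_mid, c_end, mid, t1, A, B)

# Toy computation: the machine increments a counter once per step.
step = lambda c: c + 1
honest_A = lambda c0, c1, t0, t1: c0 + ((t0 + t1) // 2 - t0)   # true midpoint config
always_first_half_B = lambda c0, cm, c1, t0, t1: True
print(debate_verify(step, 0, 16, 0, 16, honest_A, always_first_half_B))  # True
```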
5.1 CROSS-EXAMINATION
The power of debate can be increased by allowing for cross-examination, where multiple copies of each debater are questioned independently. Intuitively this should give more power, as the independent copies must give consistent answers to the queries asked, and so may have more difficulty lying.
**Definition 5.2.** A debate with cross-examination is a debate where $A$, $B$, and $V$ can query independent, non-communicating copies of both $A$ and $B$. Furthermore, the verifier is not required to read the entire transcript of the debate, but can selectively query a subset of the transcript. A debate protocol with cross-examination is a debate protocol where the debates appearing in the completeness and soundness case allow cross-examination.
The definition of cross-examination is quite natural when considering language-model debaters. In this case, the ability to query independent copies can be achieved by either running multiple copies of the same LLM, or more efficiently by simply querying the same LLM with any previous messages in the debate removed from the context. Our next theorem shows that doubly-efficient debate with cross-examination can decide any problem solvable in polynomial time, using only $O(l)$ verifier time (and hence only $O(1)$ oracle queries).
**Theorem 5.3.** Let $L$ be any language decidable by an oracle Turing machine $M$ in time $T = T(n)$ with oracle queries of length $l$. Then there is a $(O(T \log T), O(l \log T), O(1))$-debate protocol with cross-examination deterministically deciding $L$.
The proof appears in Section B. The protocol achieving Theorem 5.3 is given in Figure 4 in Section A. Cross-examination allows for a simple and powerful protocol where $A$ outputs the whole transcript of the computation $M(x)$, $B$ outputs the location of a supposed mistake by $A$, and $V$ checks only this location.
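A minimal Python sketch of this transcript-and-challenge pattern follows; the toy machine and the prover strategies are illustrative assumptions, and the sketch omits the cross-examination mechanics of querying independent prover copies.

```python
def cross_exam_debate(M_step, x, T, A, B):
    """Sketch of the Theorem 5.3 protocol: A publishes the whole transcript of M(x),
    B names one allegedly incorrect coordinate, and V re-derives only that coordinate."""
    y = A(x, T)                              # A's claimed transcript y_0, ..., y_{T-1}
    t = B(x, y)                              # B's challenge: a step index, or None
    if t is not None:
        return int(M_step(x, y, t) == y[t])  # V recomputes y_t from I(t) alone
    return y[-1]                             # no challenge: output A's final answer

# Toy machine: each transcript bit is the XOR of the two preceding ones.
def M_step(x, y, t):
    return x[t] if t < 2 else y[t - 1] ^ y[t - 2]

def honest_A(x, T):
    y = []
    for t in range(T):
        y.append(M_step(x, y, t))
    return y

print(cross_exam_debate(M_step, [1, 0], 8, honest_A, lambda x, y: None))  # M(x) = 0
```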
6 STOCHASTIC DEBATE
In this section we give a debate protocol for any language $L$ decidable by a probabilistic oracle machine $M$ with access to a stochastic oracle $O$. In the LLM setting, the oracle $O$ is intended to model human judgement, as well as other types of responses from nature (e.g. real world data or observations). Thus, the oracle $O$ must be stochastic in order for the model to be relevant in most real-world scenarios. However, access to a stochastic oracle introduces an additional subtlety, where changes on the order of $O(\frac{1}{T})$ in the oracle’s distribution may add up to an $O(1)$ difference in the final output probability over the course of a time $T$ computation. To account for this issue, we require an additional Lipschitzness assumption for the machine $M$.
Debate protocol for a stochastic oracle
All parties have access to an oracle \( O \), input \( x \in \{0, 1\}^n \), and a \( K \)-Lipschitz probabilistic oracle machine \( M \). \( A \) claims that \( \mathbb{P}[M(x) = 1] \geq \frac{2}{3} \), and \( B \) disputes this claim.
1. Let \( y \) be the random variable representing the transcript of \( M \) on input \( x \), and let \( d = \lceil 150K \rceil \).
2. The debate proceeds in a series of \( T \) rounds, in lexicographic order for \( t \in [T] \). In round \( t \):
(a) \( A \) outputs a probability \( \hat{p}_t \in [0, 1] \), which is supposed to equal \( p_t = \mathbb{P}[y_t = 1 \mid y_{I(t)} = a_{I(t)}] \).
(b) \( A \) queries an independent copy of \( B \) for a real number \( z_t^A \in [0, 1] \), and \( B \) queries an independent copy of \( A \) for a real number \( z_t^B \in [0, 1] \). Both \( z_t^A \) and \( z_t^B \) are supposed to be sampled uniformly at random from \([0, 1]\).
(c) Let \( z_t = z_t^A + z_t^B \mod 1 \).
(d) \( A \) sets \( a_t = 1 \) if \( z_t \leq \hat{p}_t \), and \( a_t = 0 \) otherwise.
(e) At this point \( B \) can abort the protocol in round \( t \), and is supposed to do so if \( A \) has not followed the protocol.
3. If the protocol was aborted at round \( t \):
(a) The verifier \( V \) draws \( r = 192d^2 \log 100 \) samples of \( y_t \) conditioned on \( y_{I(t)} = a_{I(t)} \) to obtain an estimate \( \hat{p}_t^O \) of the probability \( \mathbb{P}[y_t = 1 \mid y_{I(t)} = a_{I(t)}] \) via the sample mean. Note that if \( y_t \) is supposed to be the output of an oracle query this can be done with \( r \) queries to \( O(a_{I(t)}) \), otherwise \( y_t \) is a deterministic function of \( a_{I(t)} \) given by one step of \( M \).
(b) \( V \) checks if \( |\hat{p}_t^O - \hat{p}_t| \geq \frac{3}{8d} \) and outputs 0 if so and 1 otherwise.
4. If the protocol was not aborted, then \( V \) outputs \( a_T \).
Figure 1: Doubly-efficient debate protocol for a stochastic oracle.
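The following sketch simulates steps 2.(a)-(d) of a single round; in the real protocol the two shares come from cross-examining independent debater copies, which are simulated here with a local random source.

```python
import random

# Sketch of one round of the Figure 1 protocol (steps 2.a-2.d). In the real
# protocol z_a and z_b are obtained by cross-examining independent copies of
# B and A; here both are simulated with random.random().

def sample_round(p_hat):
    z_a = random.random()            # share queried from an independent copy of B
    z_b = random.random()            # share queried from an independent copy of A
    z_t = (z_a + z_b) % 1.0          # step 2.c: combined coin, uniform on [0, 1)
    return 1 if z_t <= p_hat else 0  # step 2.d: a_t = 1 iff z_t <= p_hat

# If A honestly reports p_hat = p_t, the sampled bit is Bernoulli(p_t):
samples = [sample_round(0.7) for _ in range(100_000)]
print(sum(samples) / len(samples))   # approximately 0.7
```

The key property is that $(z_t^A + z_t^B) \bmod 1$ is uniform on $[0, 1)$ as long as at least one share is uniform, so a single dishonest debater cannot bias the sampled bit on its own.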
Definition 6.1. For \( K > 0 \), a probabilistic oracle machine \( M \) is \( K \)-Lipschitz at oracle \( O \) if, for any other oracle \( O' \),
\[
\sup_x \left| \mathbb{P}[M^O(x) = 1] - \mathbb{P}[M^{O'}(x) = 1] \right| < K \sup_z \left| \mathbb{P}[O(z) = 1] - \mathbb{P}[O'(z) = 1] \right|
\]
In other words, if \( M \) is run with any oracle that assigns similar probabilities to \( O \), the probability that \( M \) outputs 1 should change by at most \( K \) times the maximum difference in the oracle probabilities. Observe that every time-\( T \) stochastic oracle machine is \( K \)-Lipschitz for \( K = O(T) \).
Theorem 6.2. For \( K > 0 \), let \( L \) be any language decidable by a \( K \)-Lipschitz probabilistic oracle Turing machine \( M \) in time \( T = T(n) \) with oracle queries of length \( l \). Then there is a \((O(K^2T \log T), O(K^2 + l \log T), O(K^2))\)-debate protocol with cross-examination deciding \( L \) with completeness \( \frac{3}{5} \) and soundness \( \frac{2}{5} \).
The proof appears in Section D. The debate protocol promised in Theorem 6.2 is given in Figure 1. As usual, the protocol describes the prescribed behavior of the honest provers; we emphasize that dishonest behavior may occur. The protocol proceeds in \( T \) rounds, where in each round \( A \) proposes a probability distribution over the next bit given the computation so far. Then \( A \) and \( B \) use cross-examination to engage in a coin-flipping protocol (Steps 2.b. and 2.c.) in order to sample the next bit of the computation from the distribution proposed by \( A \). Finally, \( B \) can abort the protocol at any round \( t \), whereupon \( V \) samples from \( O \) in order to check whether \( A \)'s proposed distribution at round \( t \) is correct.
Theorem 6.2 delivers non-trivial savings in verifier time and query complexity whenever \( K = o(\sqrt{T}) \). In particular, the most interesting case occurs for \( K = O(1) \) i.e. when \( K \) is a constant independent of \( T \).
An Example for Theorem 6.2. Suppose a researcher wants to perform a meta-analysis of the existing literature on whether caffeine improves learning in college students. The researcher could give a high-level plan in natural language for how to accomplish this task:
1. Search the literature for studies on caffeine consumption and learning in college students.
2. Among the studies found, select those studies which meet the following quality criteria:
(a) Do not include any studies that contain clear methodological flaws.
(b) Only include randomized control trials which measure final exam performance after at least six weeks of intervention.
3. Perform the actual statistical analysis using a random effects model.
4. Measure sources of heterogeneity between the included studies using meta-regression.
The above natural language plan is the machine $M$ in Theorem 6.2. Instructions such as 2.(a) correspond to queries to the human judgement oracle $\mathcal{O}$, as whether a study contains a clear flaw is a question of judgement. To run the debate protocol the model $A$ is prompted to carry out the plan. $A$ will then carry out the instructions above while carefully listing everything relevant to show that the instructions are being followed correctly e.g. text from each study to explain why it is included. At the same time $B$ will examine each step made by $A$ and possibly abort if $B$ believes the step has been made incorrectly.
To see how the protocol in Figure 1 works in this case, suppose that $B$ aborts when $A$ decides not to include a particular study at Step 2.(a). The protocol requires $A$ to output a probability estimate for how likely a human is to say that the study contains a clear methodological flaw. Assume that $A$ has output probability $\frac{3}{4}$ that the study contains a flaw. Then, when $B$ aborts, the question of whether the study has the flaw claimed by $A$ will be sent to human raters. If the fraction of raters who agree with $A$ about the flaw is sufficiently close to $\frac{3}{4}$, then $A$ wins. Otherwise $B$ wins.
7 Doubly-Efficient Debate with a Witness
We also consider the situation where the debaters are additionally able to non-deterministically produce a polynomial-sized witness $w$ for the membership of $x$ in an $\text{NP}^{\mathcal{O}}$ or $\text{MA}^{\mathcal{O}}$ language $L$. In this case the honest debaters are required to run in polynomial time, as in Definition 4.1, when additionally given the witness $w$ as input. This case corresponds to the setting where an LLM proposes some solution to a very complex problem, and then argues for the correctness of the solution via a polynomially long natural-language argument. Our results in this section prove that, as long as this argument can be verified via extensive human reflection, there is a debate protocol that allows a human judge to check only a constant number of steps of the argument when interacting with two competing models. The protocols of Figure 4 and Figure 1 then carry over immediately, where the machine $M$ is the polynomial-time verifier for $L$ and both $x$ and the witness $w$ are given as input.
Debate protocol with a witness for time $T$
All parties have access to an oracle $\mathcal{O}$, input $x \in \{0, 1\}^n$ and the code of a time $T$ oracle machine $M$ for verifying witnesses for a language $L$.
$A$ claims that $x$ is in $L$, and $B$ disputes this claim.
1. $A$ outputs a claimed witness $w$ for the membership of $x$ in $L$.
2. If the oracle $\mathcal{O}$ is deterministic, run the protocol of Figure 4 with input $(x, w)$ and machine $M$.
3. If the oracle $\mathcal{O}$ is stochastic, run the protocol of Figure 1 with input $(x, w)$ and machine $M$.
Figure 2: Doubly-efficient debate protocol with a witness.
The protocol given in Figure 2 leads immediately to the following theorems.
**Theorem 7.1.** Let $L \in \text{NP}^{\mathcal{O}}$ and let $M$ be the time $T = T(n)$ verifier for $L$ with oracle queries of length $l$. Then there is a $(O(T), O(l \log T), O(1))$-debate protocol with cross-examination deterministically deciding $L$ given a witness $w$.
**Theorem 7.2.** For $K > 0$, let $L \in \text{MA}^{\mathcal{O}}$ be a language with a verifier $M$ that is a $K$-Lipschitz, time $T = T(n)$ probabilistic oracle machine, with oracle queries of length $l$. Then there is a $(O(K^2 T \log T), O(K^2 + l \log T), O(K^2))$-debate protocol with cross-examination deciding $L$ with completeness $\frac{3}{5}$ and soundness $\frac{2}{5}$, given a witness $w$.
The proofs of Theorem 7.1 and Theorem 7.2 appear in Section E.
An Example for Theorem 7.2: The simplest example of debate with a witness is to take the setting of the example given for Theorem 6.2 above, but drop the assumption that the high-level natural-language plan is given in advance. Instead, one can assume the researcher simply asks: "Does caffeine improve learning in college students? Produce a high-level plan to answer this question, give a step-by-step justification of the correctness of the plan, and then execute the plan." Here the high-level plan plus the justification for its correctness is the witness $w$ produced by $A$ in Theorem 7.2. The machine $M$ is given by the natural-language instructions (implicit in the prompt above):
1. Go through every step of the plan and the corresponding justification and check that it is clearly correct.
2. Execute the plan.
Clearly many calls to the human-judgement oracle $O$ are necessary to execute $M$ with witness $w$. The protocol at this point will proceed as in the example above, with $A$ going step-by-step through $M$, and $B$ aborting for a claimed incorrect step.
8 Conclusion and Open Problems
We introduced the theoretical model of doubly-efficient debate, in which two polynomial-time provers compete to convince a significantly more efficient verifier that they have correctly solved a computational problem that depends on black-box access to human judgements. At a high level, our main theorems show that doubly-efficient debate can be used to allow the verification of arbitrary polynomial-time computations using only a constant amount of human judgement. The overall aim of this model is to provide theoretical grounding for scalable oversight of powerful AI systems using limited human feedback, and our theorems represent an initial step towards this goal. However, many open questions remain.
The Power of the Provers: The theorems in this paper apply to the setting of verifying computations that could have been verified by a human reading the entire (polynomial-length) transcript of the computation. How can the theoretical model be extended to settings where this is not possible? On the one hand, our model assumes the AI systems implementing the provers are powerful enough to very accurately simulate human judgements on any query. This may attribute too much power to these systems. Is it possible to relax the accuracy requirements for the provers e.g. by giving the provers access to an approximately correct oracle $O'$?
On the other hand, extremely powerful AI systems may be able to perform computations that, while polynomial time, do not have any polynomial length human-verifiable transcript. The original debate proposal with unbounded provers captures all of PSPACE, and thus is able to efficiently interrogate implicitly-represented exponential length transcripts. However, allowing both provers in the theoretical model to be unbounded runs into what is referred to by Barnes & Christiano (2020a) as the obfuscated argument problem, where a dishonest prover can in polynomial time produce an argument that would require the honest prover exponential time to refute. Is there some intermediate model where the honest prover always has an efficient strategy, but the computation to be verified does not require a polynomial-length human-verifiable transcript?
The Power of the Verifier: Human judgement is fallible in many ways. Furthermore, current approaches to scalable oversight, such as reinforcement learning from human feedback, generally train AI models (known as reward models) to approximate human judgements from a limited number of samples. Thus, in the practical settings of interest the oracle $O$ used by the verifier is likely to be flawed. Theorem 6.2 partially addresses this problem by making each response of $O$ stochastic, and allowing for the verification of any computation that outputs the correct answer with a constant advantage over random guessing. Is it possible to extend these results to settings where $O$ gives incorrect answers on some subset of queries? There are many possible models in this direction e.g. is there a class of computations that can be verified by debate, where the oracle may make errors on an arbitrary subset of limited size? Alternately, can debate verify computations where the oracle makes arbitrary errors on a randomly selected subset of queries?
REFERENCES
Comma comeuppance: When rogue punctuation proves costly. *BBC News*, 2017. URL https://www.bbc.co.uk/news/business-39300432.
Sanjeev Arora and Shmuel Safra. Probabilistic checking of proofs: A new characterization of NP. *Journal of the ACM (JACM)*, 45(1):70–122, 1998.
Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Proof verification and the hardness of approximation problems. *Journal of the ACM (JACM)*, 45(3):501–555, 1998.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. *arXiv preprint arXiv:2212.08073*, 2022.
Beth Barnes and Paul Christiano. Debate update: Obfuscated arguments problem, 2020a. URL https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem.
Beth Barnes and Paul Christiano. Write-up: Progress on ai safety via debate, 2020b. URL https://www.alignmentforum.org/posts/Br4xDBYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1.
Richard Chang, Benny Chor, Oded Goldreich, Juris Hartmanis, Johan Håstad, Desh Ranjan, and Pankaj Rohatgi. The random oracle hypothesis is false. *J. Comput. Syst. Sci.*, 49:24–39, 1994.
Paul Christiano, Buck Shlegeris, and Dario Amodei. Supervising strong learners by amplifying weak experts. *arXiv preprint arXiv:1810.08575*, 2018.
Lance Fortnow. The role of relativization in complexity theory. *Bulletin of the EATCS*, 52:229–243, 1994.
Oded Goldreich et al. On doubly-efficient interactive proof systems. *Foundations and Trends® in Theoretical Computer Science*, 13(3):158–246, 2018.
Shafi Goldwasser, Yael Tauman Kalai, and Guy N. Rothblum. Delegating computation: Interactive proofs for muggles. *J. ACM*, 62(4):27:1–27:64, 2015. doi: 10.1145/2699436. URL https://doi.org/10.1145/2699436.
Geoffrey Irving, Paul Christiano, and Dario Amodei. AI safety via debate, 2018.
Danielle Kurtzleben. How a misplaced comma cost the us government $38.4 million. *Vox*, 2014. URL https://www.vox.com/xpress/2014/10/14/6971613/how-a-misplaced-comma-cost-the-us-government-38-4-million.
Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. Scalable agent alignment via reward modeling: a research direction. *arXiv preprint arXiv:1811.07871*, 2018.
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. *arXiv preprint arXiv:2304.09842*, 2023.
Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, et al. Teaching language models to support answers with verified quotes. *arXiv preprint arXiv:2203.11147*, 2022.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/forum?id=TG8KACxEON.
|
kjZlzuVJF0
|
Why does TIMAR use different integration approaches with MAT and QMIX? For instance, when combining with MAT, the authors directly apply the loss to the intermediate layer, while with QMIX, an additional layer is added before the RNN. Why not use the same integration approach? Does this additional structure introduce an impact on the results?
|
Boosting Multi-Agent Reinforcement Learning via Transition-Informed Representations
Anonymous authors
Paper under double-blind review
Abstract
Effective coordination among agents in a multi-agent system necessitates an understanding of the underlying dynamics of the environment. However, in multi-agent reinforcement learning (MARL), each agent's partial observation of the environment prevents it from accounting for agent interactions and coordination from an ego perspective under a world model, which becomes the main obstacle to improving the data efficiency of MARL methods. To address this, motivated by the success of learning a world model in RL and cognitive science, we devise a world-model-driven learning paradigm that enables agents to gain a more holistic representation of their individual observations of the environment. Specifically, we present the Transition-Informed Multi-Agent Representations (TIMAR) framework, which leverages the joint transition model, i.e., a surrogate world model, to learn effective representations among agents through a self-supervised learning objective. TIMAR incorporates an auxiliary module to predict future transitions based on sequential observations and actions, allowing agents to infer the latent state of the system and consider the influence of others. Experimental evaluation of TIMAR in various MARL environments demonstrates significantly improved performance and data efficiency compared to strong baselines such as MAPPO, HAPPO, finetuned QMIX, MAT, and MA2CL. In addition, we find that TIMAR also improves the robustness and generalization of Transformer-based MARL algorithms such as MAT.
1 Introduction
Multi-agent reinforcement learning (MARL) is a rapidly growing field in the area of artificial intelligence. In recent years, significant progress has been made in the development of algorithms for MARL (Yang & Wang, 2020), and these algorithms have been applied to a wide range of tasks and environments, including game playing (Berner et al., 2019; Vinyals et al., 2019; Bellemare et al., 2013), robotics (Akkaya et al., 2019; Deitke et al., 2020; 2022), and combinatorial optimization problems (Kool et al., 2019).
Despite the many advancements made in the field of MARL, there remains a dearth of research on learning representations that capture valuable information about how the world functions. Without such representations, agents may fail to grasp the semantic information relevant to task goals in complex, high-dimensional scenarios, and may be unable to draw analytical inferences about the states of teammates or opponents, both of which are crucial for efficient collaboration or competition. Relying solely on the MARL objective may hinder agents from acquiring such representational capabilities and make it difficult to accomplish these tasks without learning abstract representations of the world model.
Representation learning has played an important role in recent developments of single-agent reinforcement learning (RL) algorithms. In particular, self-supervised learning (SSL) has attracted increasingly more attention due to its success in both NLP and CV areas (He et al., 2020; Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020). Recently, numerous works (Laskin et al., 2020; Zhu et al., 2022; Yarats et al., 2021; Schwarzer et al., 2021a; Yu et al., 2022) have borrowed insights from different areas and attempted to design auxiliary learning objectives to learn more effective representations of RL and thus improve the empirical performance. These approaches can provide the agent with a better understanding of its environment and allow the agent to learn more efficiently by focusing on the most relevant information with the help of extracted representations.
However, when meeting partially observable multi-agent systems, it is challenging to apply such self-supervision priors to learn compact and informative feature representations in MARL. One major obstacle to learning effective representations is that agents in partially observable multi-agent systems only have access to individual observations, which means that one agent’s behavior influences the others’ observations. As a result, building representation priors for each agent independently may fail due to imperfect and non-stationary information. In other words, it is challenging to learn representations that can provide a more holistic observation of the environment and serve as valuable supervision to explicitly guide the model learning how to collaborate among agents.
We tackle this challenge by designing an approach that enhances the data efficiency of MARL algorithms by learning, in a self-supervised manner in the latent space, valuable information about the functionality of the environment's world model. As shown in Figure 1, our insight is that humans acquire a substantial amount of background knowledge about the world through passive observation. Scholars have hypothesized that this common-sense information plays a crucial role in enabling intelligent behavior, including the sample-efficient acquisition of new concepts (Sarkar & Etemad, 2020), grounding (Assran et al., 2023), and planning (LeCun, 2022). As a result, with the help of implicit inference under a virtual ego world model, an agent can obtain better information, such as background knowledge, the influence of others' behavior, and predictions of the future, to guide its explicit execution in the environment.
In this work, we propose a novel representation learning framework suited to MARL, named Transition-Informed Multi-Agent Representations (TIMAR), to further improve the data efficiency and performance of MARL. The idea behind TIMAR is to ground representations among agents with the joint transition model, i.e., the surrogate world model. In addition to the encoder used in previous MARL approaches, we introduce an auxiliary Transformer-like module (Vaswani et al., 2017) to model the interaction among agents. Specifically, we first treat the latent representations of all agents' local observations as a sequence of masked contexts of the global state. We then combine the sequential observation representations and action embeddings so that the Transformer module predicts the observation representations of the next timestep. Inspired by the success of self-supervised learning objectives in efficient RL (Laskin et al., 2020; Schwarzer et al., 2021a), we adopt BYOL's (Grill et al., 2020) loss to train the original encoder and the Transformer jointly, while ensuring consistency between the predicted transitions and the ground truth.
To evaluate our proposed algorithm, we apply our framework to strong MARL algorithms and conduct extensive experiments on multiple commonly used cooperative MARL benchmarks, covering both vision- and state-based environments in discrete and continuous scenarios (Samvelyan et al., 2019; Panerati et al., 2021; de Witt et al., 2020). We compare our approach against current state-of-the-art baselines such as finetuned QMIX (Hu et al., 2021), HAPPO (Kuba et al., 2022), Multi-Agent Transformer (Wen et al., 2022), and MA2CL (Song et al., 2023). The results demonstrate superior performance and data efficiency in these environments, indicating that TIMAR learns more effective representations through the proposed joint-transition-model-based self-supervised learning paradigm than the baselines do. In addition, we show that TIMAR can also improve the robustness and generalization of Transformer-based MARL algorithms such as MAT.
2 RELATED WORK
2.1 OVERVIEW OF SELF-SUPERVISED LEARNING
Self-supervised learning empowers us to exploit the variety of labels that come with data for free. With self-supervised learning, we can utilize inexpensive unlabeled data and establish learning objectives from properly designed pretexts so as to gain supervision from the data itself. SSL has been developed extensively in the CV and NLP areas, and the self-supervised pretexts in the literature can be divided into four broad families (Ericsson et al., 2022): Masked Prediction, Transformation Prediction, Instance Discrimination, and Clustering. (1) Masked Prediction methods (Mikolov et al., 2013) mask a portion of word tokens or image pixels from the input sentence or image and train the model to predict the masked components to obtain effective representations. (2) Transformation Prediction methods (Gidaris et al., 2018; Sarkar & Etemad, 2020; Xu et al., 2019) apply a transformation that maps from canonical views to alternative views and train the model to predict which transformation has been applied. (3) Instance Discrimination methods (Velickovic et al., 2019; Chen et al., 2020; He et al., 2020; Tian et al., 2020) apply transformations to one instance to obtain multiple views of it and formalize contrastive instance discrimination. (4) Clustering methods (Caron et al., 2018; 2020; Zhan et al., 2020; Alwassel et al., 2020) focus on dividing the training data into several groups with high intra-group similarity and low inter-group similarity. We refer readers to Ericsson et al. (2022) for more details.
### 2.2 Self-Supervised Learning for RL
There exists a substantial body of work taking advantage of SSL techniques to promote representation learning in RL. A popular approach is to jointly optimize the policy learning objective and auxiliary objectives. To construct auxiliary SSL objectives, the primary approach is to build multiple views of the same input through masked-latent reconstruction or dynamic models with augmentations. For instance, Laskin et al. (2020) and Zhu et al. (2022) attempt to extract high-level features from raw pixels using contrastive learning and perform off-policy control on top of the extracted features. Other works (Schwarzer et al., 2021a; Yu et al., 2021b; 2022; Zhang et al., 2021) leverage a dynamic model to obtain a prediction of the subsequent observation and then use contrastive learning to enforce consistency between the raw future observation and its predicted version in latent space. An alternative way of obtaining good representations is to pre-train the observation encoder before policy learning (Yarats et al., 2021; Stooke et al., 2021; Schwarzer et al., 2021b; Yang & Nachum, 2021; Campos et al., 2021).
### 2.3 Self-Supervised Learning for MARL
As far as we know, only a few works (Shang et al., 2021; Zhang et al., 2022; Song et al., 2023; Guan et al., 2022; Lin et al., 2021) consider promoting representation learning in MARL. Shang et al. (2021) task each agent with predicting its future location, arriving at an agent-centric predictive objective combined with their proposed agent-centric attention module in the football game. Zhang et al. (2022) is a model-based MARL method that proposes a graph-assisted predictive state representation learning framework, leveraging agent connectivity graphs to aggregate local representations computed by each agent. Guan et al. (2022) design a permutation-invariant message encoder to generate a common information-aggregated representation from messages and optimize it by reconstructing and shooting future information in a self-supervised manner. Lin et al. (2021) formulate communication grounding as a representation learning problem and propose to use observation autoencoding to learn a common grounding across all agents. Note that the SSL prior proposed in Shang et al. (2021) can only be used in football-like environments and is therefore not flexible. Additionally, our method aims to build a general plugin for model-free MARL approaches, so model-based and communication-based MARL methods are not directly comparable to ours.
We focus on auxiliary-task-based studies in this work. The most similar work is Song et al. (2023), which encourages the learned representations to be predictive at both the temporal and agent levels by reconstructing masked agent observations in latent space. Specifically, it uses an attention-based reconstruction model for recovery, trained via contrastive learning. Unlike Song et al. (2023), our method leverages a joint-embedding predictive architecture to learn a surrogate multi-agent world model that captures effective knowledge for better multi-agent decision-making.
### 3 Our Method
#### 3.1 Preliminaries and Background
**Problem formulation:** Cooperative MARL problems are often modeled as decentralized partially observable Markov decision processes (Dec-POMDPs, Oliehoek & Amato (2016)) \((\mathcal{N}, \mathcal{S}, \{\mathcal{A}_i\}, \mathcal{T}, R, \Omega, \mathcal{O}, \gamma)\). Here, \(\mathcal{N} = \{1, \ldots, n\}\) is the set of agents, \(\mathcal{S}\) is the set of states, \( \mathcal{A} = \times_i \mathcal{A}_i \) is the set of joint actions, \( \mathcal{T} \) is the set of conditional transition probabilities between states, \( \mathcal{T}(s, a, s') = P(s' \mid s, a) \), \( R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R} \) is the reward function, \( \mathcal{O} = \times_i \mathcal{O}_i \) is the set of joint observations, where \( \mathcal{O}_i \) is the observation set of agent \( i \), \( \Omega \) is the set of conditional observation probabilities \( \Omega(s', a, o) = P(o \mid s', a) \), and \( \gamma \in [0, 1] \) is the discount factor. At each time step, each agent selects an action \( a_i \), and the state updates according to the transition function (using the current state and the joint action). Each agent then receives its observation according to \( \Omega(s', a, o) \) (using the next state and the joint action), and a reward is generated for the entire team according to \( R(s, a) \). The goal is to maximize the expected cumulative reward over a finite or infinite time horizon.
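As a reference for the notation above, the snippet below collects the Dec-POMDP tuple into a single container; the field names and types are our own illustrative choices, not part of any benchmark API.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Minimal container mirroring the Dec-POMDP tuple; purely illustrative.
@dataclass
class DecPOMDP:
    n_agents: int            # N = {1, ..., n}
    states: Sequence         # S
    joint_actions: Sequence  # A = x_i A_i
    transition: Callable     # T(s, a, s') = P(s' | s, a)
    reward: Callable         # R(s, a) -> float, shared by the whole team
    obs_prob: Callable       # Omega(s', a, o) = P(o | s', a)
    gamma: float = 0.99      # discount factor in [0, 1]
```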
**MARL algorithms:** In deep MARL, we use neural networks to process joint observations and make decisions. A commonly used paradigm is centralized training for decentralized execution (CTDE), which allows agents to access global information and opponents' actions during the training phase while using individual observations only in the inference phase. In CTDE approaches [Lowe et al., 2017; Yu et al., 2021a; Kuba et al., 2022; Rashid et al., 2018; Wang et al., 2021], observation representations are generated by the encoder of the decentralized part of the algorithm, e.g., the actor in policy-gradient-based methods and the backbone in value-based methods. Let \( f_\theta \) denote the encoder parameterized by \( \theta \), so that \( \tilde{o}_i^t = f_\theta(o_i^t) \). Another powerful MARL approach is the Multi-Agent Transformer (MAT). It is a Transformer-like architecture that takes the joint observations as input to obtain the representations. In MAT, the transformation of observations into representations can be described as \( \tilde{o}_{i_1:n}^t = f_\theta(o_{i_1:n}^t) \), where \( i_1:n \) denotes an arbitrary order over the agents. We denote all other parts of the MARL method by \( f_\phi \) parameterized by \( \phi \), including the value head and/or the policy head. Different MARL algorithms use one or both of these heads and feed the representations into them to compute the value loss or policy loss in the MARL branch.
### 3.2 Transition-Informed Multi-Agent Representations
Transition-Informed Multi-Agent Representations (TIMAR) is an auxiliary objective to promote representation learning in MARL. The core idea of TIMAR is to take advantage of a world-model-driven SSL approach to promote representation learning in MARL, toward addressing the challenge of imperfect and non-stationary observations. To achieve this goal, an intuitive way is to leverage a surrogate world model to predict the joint transition of the next timestep, so that we obtain a different view of the ground-truth next-timestep observations sampled from the replay buffer. As a result, enforcing consistency across the different views of observations leads to better representations generated by the encoder networks. Furthermore, the core process in the joint transition model of TIMAR is to implicitly reconstruct the global state and then infer the future observation representation of each agent. This enables better use of cross-agent information when learning observation and action representations, further enhancing MARL agents' understanding of their individual messages. We introduce the components of the framework in the following subsections.
**Framework overview.** As shown in the left part of Figure 2, in the training phase of TIMAR, a stack of \( K + 1 \) consecutive \( n \)-agent joint observations \( o_{i_1:n}^{t:t+K} \) is first sampled from the replay buffer. We then encode the oldest-timestep observations \( o_{i_1:n}^t \) with the MARL algorithm's encoder to obtain the joint-observation representations. Apart from being used in the MARL optimization branch to train the online networks, the \( t \)-th timestep representations are also fed forward, together with the action embedding sequence, into the transition model to predict future observation representations. After repeating the prediction process \( K \) times, we obtain \( K \) joint representations, i.e., \( \tilde{o}_{t+1:t+K} \). Meanwhile, we feed the remaining joint observations \( o_{t+1:t+K} \) into the momentum encoder to generate the ground-truth version of the \( K \)-timestep observation representations. Finally, we use these two views to compute the SSL-style transition-informed loss, encouraging representations that are effective in both the temporal and agent-level dimensions. The encoding process, the transition model, and the transition-informed loss are introduced in detail below.
(i) **Encoding observations and actions:** Given a specific MARL algorithm, we use its encoder as the online observation encoder to transform the joint observations into representations. Concretely, for MAT, taking an observation sequence of arbitrary order \( o_t \) as input, the online observation encoder applies a self-attention mechanism and obtains post-interaction representations of the agents, denoted \( \hat{o}_t \). Similarly, the online action encoder accepts both the original action sequence \( a_t \) and the observation representations \( \hat{o}_t^{1:n} \), and outputs action representations \( \hat{a}_t \) through a cross-attention mechanism. In contrast, CTDE methods process individual observations in parallel to obtain representations, and we use a separate action encoder to transform the actions into action embeddings. We train these representations with an objective that motivates them to iteratively forecast future observation representations up to a given temporal offset \( K \). Following prior work [Schwarzer et al., 2021a; Zhu et al., 2022; Yu et al., 2021b; 2022], we utilize another observation encoder to encode the original observations. This target encoder has the same architecture as the online observation encoder, and its parameters are an exponential moving average (EMA) of the online observation encoder's parameters. Denoting the target observation encoder by \( \bar{\theta} \) and the momentum coefficient by \( \tau \in [0, 1) \), the update scheme of the target observation encoder is:
$$\bar{\theta} \leftarrow \tau \bar{\theta} + (1 - \tau)\theta.$$
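A minimal PyTorch sketch of this EMA update is shown below, assuming `online` and `target` are architecturally identical modules; this is the standard momentum-encoder update, not the authors' exact code.

```python
import torch

# EMA target-encoder update: theta_bar <- tau * theta_bar + (1 - tau) * theta.
@torch.no_grad()
def ema_update(online: torch.nn.Module, target: torch.nn.Module, tau: float):
    for p_target, p_online in zip(target.parameters(), online.parameters()):
        p_target.mul_(tau).add_((1.0 - tau) * p_online)
```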
(ii) Joint Transition Model. We construct the forecasted future observation representations using a Transformer-based joint transition model \( \hat{T} \). In other words, we treat the individual observations as a sequence of masked contexts of the global state within the joint transition model. The joint transition model adopts the architecture of the Transformer encoder: it (a) contains \( L \) Multi-Head Self-Attention (MHSA) layers without masks, and (b) takes the sequence of observation representations concatenated with action representations as input tokens and outputs the sequence of observation representations for the subsequent timestep. We obtain the \( t \)-th observation and action representations by feeding the original observation and action sequences into the online observation encoder and the action encoder, as mentioned above. The input tokens of the latent joint transition model can be written as:
$$x = [\hat{o}_t^1 \| \hat{a}_t^1, \ldots, \hat{o}_t^n \| \hat{a}_t^n],$$
where $\|$ denotes the concatenation operator.
For any $l \in [L]$, the process of passing the token sequence through the $l$-th layer of the joint transition model can be mathematically described as follows:
$$h^l = \text{MHSA}(\text{LN}(x^l)) + x^l,$$
$$x^{l+1} = \text{FFN}(\text{LN}(h^l)) + h^l.$$
Here, LN and FFN denote the LayerNorm and the Feed-Forward Network of [Vaswani et al., 2017]. Note that if the permutation order is known, one can also add agent-id embeddings and positional embeddings to $x$. We then select only the odd-indexed output tokens of the joint transition model as the predictions of the latent future representations inferred from the previous observation and action representations.
Furthermore, in the $k$-th step of generating future representations where $k = 2, \ldots, K$, we use internal representations, i.e., generated from the joint transition model, instead of the online observation...
encoders as the input latent observation tokens. The process mentioned above can be denoted as
\[ \hat{o}_{t+1} = \hat{T}(\hat{o}_t, a_t), \]
\[ \hat{o}_{t+k} = \hat{T}(\hat{o}_{t+k-1}, a_{t+k-1}), \quad \forall k = 2, \ldots, K. \]
(4)
It is worth noting that both the joint transition model and the calculating process of the transition-informed loss operate in the latent space, thus avoiding pixel-based reconstruction objectives and making TIMAR robust for vision-based and state-based MARL settings.
Based on the description of the process of the joint transition model, one can see that the module first reconstructs the global state from individual observations and then predicts the future state of the next timestep. Finally, it implements the observation mapping functions for each agent. In this way, the joint transition model must infer the influences caused by others and try to integrate all the imperfect information. As a result, executing consistency across different views of individual observations can lead to better representations generated from encoder networks. The illustration of the joint transition model is shown in the right part of Figure 2.
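The sketch below shows one plausible PyTorch realization of the joint transition model and the $K$-step rollout of Eq. (4); the layer sizes, the use of `nn.TransformerEncoder`, and the output head are our assumptions rather than the paper's exact implementation (in particular, the released code may interleave observation and action tokens differently).

```python
import torch
import torch.nn as nn

# Sketch of the joint transition model: per-agent observation and action
# representations are concatenated into tokens and passed through L mask-free
# self-attention layers; a linear head maps tokens back to observation space.
class JointTransitionModel(nn.Module):
    def __init__(self, obs_dim, act_dim, n_layers=2, n_heads=4):
        super().__init__()
        d_model = obs_dim + act_dim          # must be divisible by n_heads
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, obs_dim)

    def forward(self, obs_repr, act_repr):
        # obs_repr: (batch, n_agents, obs_dim); act_repr: (batch, n_agents, act_dim)
        x = torch.cat([obs_repr, act_repr], dim=-1)  # one token per agent
        return self.head(self.encoder(x))            # predicted next-step reprs

def rollout(model, obs_repr, action_seq):
    """Iteratively predict K future joint representations, feeding each
    prediction back in as the next input, as in Eq. (4)."""
    preds = []
    for act_repr in action_seq:   # K entries of shape (batch, n_agents, act_dim)
        obs_repr = model(obs_repr, act_repr)
        preds.append(obs_repr)
    return preds
```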
(iii) Transition-informed loss. Motivated by the success of BYOL [Grill et al., 2020] in SSL and sample-efficient RL [Schwarzer et al., 2021a; Yu et al., 2021b, 2022], we compute the future-prediction loss of TIMAR via the cosine similarities between the predicted and observed representations. Concretely, from the outputs of the joint transition model, i.e., the sequence of observation representations \( \hat{o}_{t+1:t+K} \), we use a projection head \( g \) and a prediction head \( q \) to obtain the final sequence of predictions \( \tilde{y}_{t+1:t+K} = q(g(\hat{o}_{t+1:t+K})) \). We then utilize a target projection head \( \bar{g} \) (which follows the same EMA update strategy as the target observation encoder) to process the encoded original observations, denoted \( \bar{y}_{t+1:t+K} = \bar{g}(\bar{o}_{t+1:t+K}) \), where \( \bar{o}_{t+1:t+K} = \bar{\theta}(o_{t+1:t+K}) \). Here, we apply a stop-gradient operation, as illustrated in Figure 3, to avoid model collapse, following BYOL. Finally, TIMAR's objective is to enforce the final predictions \( \tilde{y}_{t+1:t+K} \) to be as close as possible to their corresponding targets \( \bar{y}_{t+1:t+K} \). We construct the following cosine similarities between the normalized predictions and the target projections over all agents and offset timesteps:
\[
L_{TIMAR} = -\frac{1}{Kn} \sum_{k=1}^{K} \sum_{i=1}^{n} \left( \frac{\tilde{y}_{t+k}^i}{\|\tilde{y}_{t+k}^i\|_2} \right)^T \left( \frac{\bar{y}_{t+k}^i}{\|\bar{y}_{t+k}^i\|_2} \right)
\]
(5)
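A compact sketch of Eq. (5) in PyTorch is given below; the stacked tensor shapes are our assumption, and the stop-gradient on the target branch is implemented with `detach()`.

```python
import torch.nn.functional as F

# Transition-informed loss of Eq. (5): negative cosine similarity between
# normalized predictions and stop-gradient target projections, averaged
# over the K offset steps and the n agents (and the batch).
def timar_loss(pred, target):
    # pred, target: (K, n_agents, batch, dim) stacks of y-tilde and y-bar
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target.detach(), dim=-1)  # stop-gradient on targets
    return -(pred * target).sum(dim=-1).mean()

# Total objective of Eq. (6): loss = marl_loss + lam * timar_loss(pred, target)
```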
Total learning objective: The proposed TIMAR is an auxiliary task that is optimized in conjunction with MARL. Therefore, the overall loss function is:
\[
L_{total} = L_{MARL} + \lambda L_{TIMAR}
\]
(6)
where \( L_{MARL} \) and \( L_{TIMAR} \) are the MARL loss and our proposed transition-informed representation learning objective, respectively, and \( \lambda \) is a hyperparameter balancing the two terms. It is worth noting that, unlike many SSL algorithms proposed in CV and RL, TIMAR can be employed with or without data augmentation, which is useful in situations where data augmentation is unavailable or counterproductive. Moreover, TIMAR mainly focuses on capturing the relationships among agents via the joint transition model. The proposed framework can also be transferred to other MARL algorithms that follow the centralized training decentralized execution (CTDE) paradigm, such as MAPPO [Yu et al., 2021a]/HAPPO [Kuba et al., 2022] and QMIX [Rashid et al., 2018]/QPLEX [Wang et al., 2021].
3.3 IMPLEMENT DETAILS FOR TIMAR
In practice, we implement instantiations of TIMAR on the basis of the recently proposed state-of-the-art method MAT and the commonly used CTDE method, QMIX. On one hand, we apply TIMAR
only upon the encoder of MAT, which contains an MLP-based embedding layer for the original inputs and a one-layer Transformer encoder for agent-level information interaction. On the other hand, for QMIX and other CTDE-like MARL methods, we use the sequential layers before the RNN units in the network as TIMAR's online encoder.
Besides, we sample a separate batch of $B'$ samples from the trajectories collected with the latest policy, both for the on-policy MAT and the off-policy QMIX. For the projection and prediction heads, we do not use BatchNorm layers and replace ReLU with GELU activations, differing from BYOL. For vision-based settings, we use three convolutional layers, each followed by a ReLU, identical to DQN's, as the feature extractor in all algorithms.
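A sketch of the projection/prediction heads as just described (two-layer MLPs with GELU and no BatchNorm) follows; the hidden and output widths are illustrative assumptions.

```python
import torch.nn as nn

# Projection head g and prediction head q: MLPs without BatchNorm, using
# GELU instead of ReLU (unlike BYOL's original heads). Widths are assumed.
def make_head(in_dim, hidden_dim=256, out_dim=128):
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.GELU(),
        nn.Linear(hidden_dim, out_dim),
    )

projector = make_head(128)   # g: applied to transition-model outputs
predictor = make_head(128)   # q: applied on top of the projector (in_dim = g's out_dim)
```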
Finally, our code is based on MAT and finetuned QMIX’s official codebase and the full hyperparameters of TIMAR can be found in Appendix C.
4 EXPERIMENTS
In this section, we consider a series of MARL benchmarks to evaluate TIMAR, including Multi-Agent MuJoCo (MA-MuJoCo), the StarCraft II Multi-Agent Challenge (SMAC), and Multi-Agent Quadcopter Control (MAQC). The results demonstrate that TIMAR achieves performance and efficiency superior to strong MARL baselines, including the aforementioned MA2CL (Song et al., 2023). We also analyze why TIMAR is effective. Moreover, extended results show that TIMAR can also improve the robustness and generalization of sequential-modeling-based MARL algorithms.
4.1 PERFORMANCE AND EFFICIENCY
4.1.1 MULTI-AGENT MUJOCO

Figure 4: Comparisons of average episode return of compared algorithms on Multi-Agent MuJoCo. TIMAR consistently outperforms MA2CL, refreshing the SOTA results for on-policy algorithms.
MA MuJoCo (de Witt et al., 2020) is a commonly used benchmark for continuous cooperative multi-agent robotic control. Starting from the popular single-agent MuJoCo (Todorov et al., 2012) control suite included with OpenAI Gym (Brockman et al., 2016), it creates a wide variety of novel scenarios in which multiple agents within a single robot have to solve a task cooperatively.
Given its heterogeneous-agent setting and the advantages of sequential-update approaches demonstrated in recent studies (Kuba et al., 2022; Wen et al., 2022; Zhong et al., 2023), we apply our method to the state-of-the-art (SOTA) MARL algorithm MAT, evaluate it on the predefined tasks in MA MuJoCo, and select MAPPO, HAPPO, MAT, and MA2CL as baselines.
4.1.2 The StarCraft Multi-Agent Challenge (SMAC)
The StarCraft Multi-Agent Challenge (Samvelyan et al., 2019), briefly called SMAC, is a benchmark environment for training and evaluating multi-agent reinforcement learning (MARL) algorithms. It is based on the popular real-time strategy game StarCraft II and provides a challenging testbed for MARL research due to the complexity of the game and the need for agents to coordinate and compete with each other. The SMAC environment is open-source and widely used in the research community, making it a common benchmark for evaluating the performance of MARL algorithms.
Different from the settings of MAQC and MA MuJoCo, we select QMIX (Rashid et al., 2018; Hu et al., 2021) as the base algorithm for incorporating our method, since it is among the most commonly used CTDE methods in the SMAC domain. This also demonstrates that TIMAR generalizes to CTDE methods and value-decomposition-based paradigms in the MARL area.

4.1.3 Multi-Agent Quadcopter Control
To evaluate whether our proposed TIMAR is effective in vision-based MARL settings, we run it on physics-based cooperative tasks in Multi-Agent Quadcopter Control (MAQC) (Panerati et al., 2021). MAQC is an open-source, OpenAI Gym-like multi-quadcopter simulator that provides vision-based observations and multi-agent control interfaces. Observations include video frames from the perspective of each drone (toward the positive direction of the local x-axis) for the RGB ($\mathbb{R}^{48 \times 48 \times 4}$), depth, and segmentation ($\mathbb{R}^{48 \times 48 \times 1}$) views. Each drone's action specifies a continuous velocity direction together with its magnitude. We refer readers to Appendix A.3 for a detailed description of MAQC. We test TIMAR, MA2CL, MAT, HAPPO, and MAPPO on 4 subtasks in MAQC, comprising two flight-control scenarios (named Flock and LeaderFollower) with two and four agents, respectively. The results shown in Figure 6 demonstrate that TIMAR improves the data efficiency of MAT on visual signals more than MA2CL does.

4.2 Analysis about why TIMAR works
In this part, we attempt to understand how TIMAR improves the augmented MARL approaches. Since the encoder in MAT and QMIX is the backbone of the value-estimation branch of the whole algorithm, we plot the training curves of TIMAR, MA2CL, and the corresponding MARL methods in four scenarios of MA MuJoCo and SMAC. Results are shown in Figure 7. We investigate whether the global value function approximation in MAT, or the Q-values of taken actions in QMIX, are enhanced by the compact representations built with TIMAR. One can see that TIMAR's value loss is lower than MAT's and its Q-values are higher than QMIX's, respectively. The more accurately the value function fits, the better the policy optimization.

### 4.3 Generalization and Robustness
Since Transformer-based models often demonstrate strong generalization and robustness, we hypothesize that TIMAR can also improve these abilities for MAT. We design two experiments on HalfCheetah 6x1 of MA MuJoCo to validate this hypothesis: one evaluates performance when different joints are disabled during training, testing TIMAR's robustness; the other evaluates TIMAR's performance under different partially observable variants of the same task, testing its generalization. The results in Figure 8 and Figure 9 show that TIMAR not only boosts the sample efficiency and performance of the Transformer-based MAT, but also further improves its generalization and robustness through the world-model learning objective.


## 5 Conclusion
In this paper, we introduce Transition-Informed Multi-Agent Representations (TIMAR), a self-supervised representation learning objective designed to improve the data efficiency of MARL algorithms with the help of the joint transition model, i.e., the surrogate world model. TIMAR treats the individual observations as a masked sequence and learns effective representations that are temporally predictive and consistent across different views over all agents, by implicitly reconstructing the global state and directly predicting, with a joint transition model, the observation representations produced by a target encoder. Experimental results on both vision-based and state-based cooperative MARL benchmarks (i.e., MAQC, MA MuJoCo, and SMAC) demonstrate that TIMAR further improves the data efficiency and performance of the underlying MARL algorithms, such as QMIX and MAT, and outperforms MA2CL. Besides, TIMAR also benefits MAT's generalization and robustness.
REFERENCES
Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving rubik’s cube with a robot hand. *ArXiv preprint*, abs/1910.07113, 2019.
Humam Alwassel, Dhruv Mahajan, Bruno Korbar, Lorenzo Torresani, Bernard Ghanem, and Du Tran. Self-supervised learning by cross-modal audio-video clustering. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.
Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. *arXiv preprint arXiv:2301.08243*, 2023.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.
Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. *J. Artif. Int. Res.*, 47(1):253–279, 2013. ISSN 1076-9757.
Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Christopher Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub W. Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, and Susan Zhang. Dota 2 with large scale deep reinforcement learning. *ArXiv preprint*, abs/1912.06680, 2019.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016.
Víctor Campos, Pablo Sprechmann, Steven Hansen, Andre Barreto, Steven Kapturowski, Alex Vitvitskyi, Adria Puigdomenech Badia, and Charles Blundell. Beyond fine-tuning: Transferring behavior in reinforcement learning. *ArXiv preprint*, abs/2102.13515, 2021.
Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 132–149, 2018.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, volume 119 of *Proceedings of Machine Learning Research*, pp. 1597–1607. PMLR, 2020.
Christian Schröder de Witt, Bei Peng, Pierre-Alexandre Kamienny, Philip H. S. Torr, Wendelin Böhmer, and Shimon Whiteson. Deep multi-agent reinforcement learning for decentralized continuous cooperative control. *ArXiv preprint*, abs/2003.06709, 2020.
Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weih, Mark Yatskar, and Ali Farhadi. Robothor: An open simulation-to-real embodied AI platform. In *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020*, pp. 3161–3171. IEEE, 2020. doi: 10.1109/CVPR42600.2020.00323.
|
hgrZluxFC7
|
In this paper, the attacker only has access to the latent representation provided by the mobile DNN from a mobile phone. While the authors successfully demonstrate that attacks on these latent representations exhibit a lower Attack Success Rate (ASR) than those on raw images—given the same level of information distortion—the appropriateness of imposing an identical distortion level in this context needs more clarification.
|
ADVERSARIAL MACHINE LEARNING IN LATENT REPRESENTATIONS OF NEURAL NETWORKS
Anonymous authors
Paper under double-blind review
ABSTRACT
Distributed deep neural networks (DNNs) have been shown to reduce the computational burden of mobile devices and decrease the end-to-end inference latency in edge computing scenarios. While distributed DNNs have been studied, to the best of our knowledge the resilience of distributed DNNs to adversarial action remains an open problem. In this paper, we fill the existing research gap by rigorously analyzing the robustness of distributed DNNs against adversarial action. We cast this problem in the context of information theory and introduce two new measurements for distortion and robustness. Our theoretical findings indicate that (i) assuming the same level of information distortion, latent features are always more robust than input representations; and (ii) adversarial robustness is jointly determined by the DNN feature dimension and the generalization capability. To test our theoretical findings, we perform extensive experimental analysis considering 6 different DNN architectures, 6 different approaches for distributed DNN, and 10 different adversarial attacks on the ImageNet-1K dataset. Our experimental results support our theoretical findings by showing that compressed latent representations can reduce the success rate of adversarial attacks by 88% in the best case and by 57% on average compared to attacks on the input space.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved significant success in various domains such as computer vision (Kirillov et al., 2023), natural language processing (OpenAI, 2023), and wireless communication (Baldesi et al., 2022), among many others. However, state-of-the-art DNNs are challenging to deploy on resource-limited mobile devices. While mobile-specific DNNs have been proposed (Sandler et al., 2018), they usually come with a significant loss in accuracy. On the other hand, completely offloading the computation to edge or cloud computers is infeasible in mobile scenarios due to the excessive communication overhead of transferring the DNN input from the mobile device to the edge/cloud (Wang et al., 2019a). A new paradigm called distributed computing, also referred to as split computing in prior art, divides the computation of DNNs across multiple devices according to the available processing power and networking bandwidth. The key advantage is that an optimal load distribution can be achieved while meeting maximum end-to-end latency constraints and preserving the DNN accuracy (Matsubara et al., 2022a). For an excellent survey on distributed/split computing, the reader is referred to the work by Matsubara et al. (2021).
Although prior work has proven the advantages of distributing the DNN computation, it is also evident that this approach opens the door to adversarial attacks to intermediate (latent) representations. Figure 1 shows a high-level overview of the adversarial scenario under consideration. Without loss of generality, we assume that a DNN model is divided into a mobile DNN and a local DNN, respectively executed by the mobile device and an edge/cloud computer. Usually, the DNN architecture is modified by introducing a compression layer at the end of the mobile DNN (Eshratifar et al., 2019b; Matsubara et al., 2019; Hu & Krishnamachari, 2020; Shao & Zhang, 2020; Matsubara et al., 2020), which is trained to learn a latent representation that reduces the amount of data being sent to the edge/cloud. This way, the output tensor of the mobile DNN is transmitted to the edge/cloud server instead of the input data. The compressed representation is then used by the local DNN to produce the final prediction output (e.g., classification). The distributed nature of the computation exposes the latent representation to adversarial action. Indeed, due to the need to communicate the latent representation across devices over a wireless network, an adversary can easily eavesdrop on the latent representation and craft an adversarial sample to compromise the local DNN as shown in Figure 1.
Despite its significance and timeliness, to the best of our knowledge, assessing the robustness of distributed DNNs remains an unexplored problem. We remark that achieving a fundamental understanding of these attacks and evaluating their effectiveness in state-of-the-art DNNs is paramount to designing robust distributed DNNs. To this end, we theoretically analyze the robustness of distributed DNNs using information theory – specifically, we build on notions from Information Bottleneck (IB) theory by Tishby et al. (2000) and propose two new measurements for distortion and robustness that are general to all DNN models and can be leveraged to analyze them. Our first key theoretical finding is that with similar levels of information distortion, latent representations are always more robust than input representations. In other words, distributed DNNs are intrinsically a better solution for distributed/mobile computing systems than traditional DNNs. Our second key finding is that the DNN robustness is intrinsically related to the cardinality of the latent space. Intuitively, this is because the search space available to the attacker is smaller. On the other hand, while a smaller latent space may increase robustness by reducing the model variance, it will also introduce bias in the model, thus affecting the generalization capability of the DNN model.

We extensively evaluate our theoretical findings by considering 10 adversarial algorithms, i.e., 4 white-box attacks (Goodfellow et al., 2014; Kurakin et al., 2018; Dong et al., 2018; Madry et al., 2017) and 6 black-box attacks (Ilyas et al., 2018; Li et al., 2019; Andriushchenko et al., 2020; Dong et al., 2019; Cheng et al., 2019; Wang et al., 2022). We apply these attacks to 6 reference architectures (Simonyan & Zisserman, 2014; He et al., 2016) designed with 6 distributed DNN approaches (Eshratifar et al., 2019a; Shao & Zhang, 2020; Matsubara et al., 2020; 2022a; Singh et al., 2020; Matsubara et al., 2022c). The experimental results validate our theoretical findings on the examined DNNs and attack algorithms.
The key contributions of this paper can be summarized as follows:
- To the best of our knowledge, we are the first to investigate the robustness of distributed DNNs against adversarial action. We leverage notions of IB theory and propose two new metrics for distortion and robustness of distributed DNNs. We theoretically prove that distributed DNNs are less vulnerable to perturbations of similar magnitude compared to traditional DNNs, and that a latent representation with lower dimensions enhances robustness by reducing the DNN variance;
- We perform extensive experiments with the ImageNet-1K (Deng et al., 2009) dataset, by considering 6 different DNN architectures, 6 different distributed DNN approaches under 10 different attacks to support our theoretical findings. The results show that the theoretical analysis applies to the experimental settings under consideration. More precisely, the success rate of attacks on the inputs is up to 88% higher than that of attacks on latent representations (57% on average). We share our code for reproducibility at [https://github.com/asdfqwezxcf/AdvLatent](https://github.com/asdfqwezxcf/AdvLatent), and we hope that this work may open the door to a new field dedicated to studying the resilience of distributed DNNs.
This paper is organized as follows. Section 2 summarizes the related work on distributed DNNs and adversarial attacks to DNNs. Next, Section 3 presents our theoretical analysis based on IB. Section 4 discusses our experimental setup while Section 5 presents our experimental results. Finally, Section 6 draws conclusions and discusses possible directions for future work.
2 RELATED WORK
Distributed Neural Networks. There is an increasing need for DNN-based computation on mobile devices. Lightweight DNNs specifically tailored for mobile devices (Sandler et al., 2018; Tan et al., 2019; Howard et al., 2019) fail to achieve comparable performance with the state-of-the-art DNNs. While edge computing approaches maintain similar performance, they incur excessive latency (Yao et al., 2020). As an intermediate option, Kang et al. (2017) divide a DNN into two parts executed at the mobile device and edge, respectively. However, such division leads to excessive networking load due to the large latent space of DNNs. Other work has addressed this problem by introducing a “bottleneck” layer before the division point (Eshratifar et al., 2019b; Shao & Zhang, 2020; Eshratifar et al., 2019a). However, naive bottlenecks suffer from noticeable task-specific performance loss. Recent work utilizes more advanced training techniques such as knowledge distillation to preserve accuracy while achieving a high in-network compression ratio (Matsubara et al., 2020).
Different from unsupervised methods where compressed representations are learned for reconstruction purposes (Yang et al., 2023), supervised compression techniques aim to extract compact features relevant to the downstream task (Singh et al., 2020). However, such studies mainly aim to optimize the rate-distortion metric while often neglecting the limited computational capability of mobile devices by introducing bottlenecks in the last layers (Ballé et al., 2018; Minnen et al., 2018; Datta et al., 2022; Ahuja et al., 2023). Inspired by ideas such as the reparameterization trick by Kingma & Welling (2013), and quantization with entropy coding by Ballé et al. (2016), Matsubara et al. (2022c) use a stochastic bottleneck with a learnable prior for entropy coding to optimize the three-way tradeoff between (a) minimizing the computational complexity of the mobile DNN, (b) minimizing the size of the wirelessly transferred data, and (c) minimizing the DNN performance loss.
Adversarial Machine Learning. Adversarial attacks can be categorized as gradient-based, score-based, and decision-based. In gradient-based scenarios, attackers can obtain the input gradient through backpropagation and craft adversarial samples with gradient ascending. Fast Gradient Sign Method (FGSM) by Goodfellow et al. (2014) crafts adversarial samples in the $l_\infty$ space based on the one-step input gradient sign. Basic Iterative Method (BIM) by Kurakin et al. (2018) increases the effectiveness of FGSM by iteratively updating adversarial samples with multiple gradient steps. Momentum Iterative Method (MIM) by Dong et al. (2018) introduces momentum to iterative attacks which improves the transferability of adversarial samples. Projected Gradient Descent (PGD) by Madry et al. (2017) generalizes iterative attacks to $l_p$ space with a random start. Carlini & Wagner (2017b) form the attack as an optimization problem and evaluate different optimization algorithms with multiple loss functions. In black-box settings, gradient-based attacks find adversarial samples using gradients from a set of substitute DNNs. Recent work by Wang et al. (2021a) and Zhang et al. (2022b) improves the transferability of crafted adversarial examples with advanced gradient design.
Different from gradient-based approaches, score-based adversaries can only access the scores for every class given by the DNN. Natural Evolutionary Search (NES) by Ilyas et al. (2018) applies an evolutionary algorithm to estimate the gradient within a limited number of queries. N-Attack by Li et al. (2019) designs a learnable Gaussian distribution centered around the input to generate random noise which turns a benign sample into an adversarial sample. Square Attack by Andriushchenko et al. (2020) adds a localized square-shaped perturbation at a random position to the original sample in each iteration. Decision-based attacks assume the adversary is only aware of the label having the highest score in the DNN output. Evolutionary Attack (EVO) by Dong et al. (2019) minimizes the distance with evolutionary search in the input space, while Hop-Skip-Jump Attack (HSJA) by Chen et al. (2020) designs a zeroth-order optimization algorithm to find minimum-magnitude perturbations with binary search. Sign-OPT Attack (S-OPT) by Cheng et al. (2019) accelerates the convergence by estimating the gradient with the sign of the directional derivative, while Triangle Attack by Wang et al. (2022) minimizes the adversarial perturbation in a smaller frequency space with the geometric property.
Evaluating the Robustness of Neural Networks. Probably Approximately Correct (PAC) learning has been used to analyze the adversarial robustness of DNNs (Montasser et al., 2019; Bhattacharjee et al., 2023; Attias et al., 2019; Awasthi et al., 2019; Bubeck & Sellke, 2021; Ashtiani et al., 2023). However, such work mainly attempts to find a lower bound for the dataset size that can attain a desired robustness level (Montasser et al., 2019; Attias et al., 2019; Ashtiani et al., 2023; Bhattacharjee et al., 2023). Conversely, our investigation is aimed at evaluating adversarial robustness in DNNs that are trained with certain datasets. In addition, these approaches are evaluated on simple DNNs, e.g., 2-layer neural networks (Awasthi et al., 2019; Bubeck & Sellke, 2021), while we
Figure 2: Modeling DNN with IB. Each representation $T_i$ only depends on the previous output $T_{i-1}$, and the optimal $T_i^*$ can be interpreted as the IB solution which optimizes Equation 1 at layer $i$.
evaluate our findings on state-of-the-art DNNs. Carlini et al. (2019) propose a set of criteria to evaluate adversarial robustness with numerical results, while Carlini & Wagner (2017a); Dong et al. (2020); Croce et al. (2020) evaluate the robustness of different defense approaches. The key issue is that different work comes to contradictory conclusions. For example, Su et al. (2018) argue that there is a tradeoff between generalization and robustness while Stutz et al. (2019) state that generalization does not affect the robustness. Tsipras et al. (2018) argue that robust training sacrifices accuracy on standard datasets, while Ilyas et al. (2019) achieve the same accuracy on both adversarial and standard datasets. Tishby & Zaslavsky (2015) first propose to use IB (Tishby et al., 2000) to analyze DNNs, and Shwartz-Ziv & Tishby (2017); Saxe et al. (2019) analyze the generalization and compression capability of DNNs with experiments. However, such studies heavily rely on the variational approximation, which requires an increasing number of samples with respect to their dimension to reduce the bound on the estimation error (Poole et al., 2019). Thus, the above-mentioned works can only perform experiments on relatively small neural networks. In contrast to this paper, recent work on IB for robust learning (Alemi et al., 2016; Amjad & Geiger, 2019; Wang et al., 2021b; Kim et al., 2021) does not provide a general analysis for the robustness of DNNs.
3 ROBUSTNESS ANALYSIS OF DISTRIBUTED DEEP NEURAL NETWORKS
3.1: Background on Information Bottleneck (IB). The IB is a model-agnostic information-theoretical framework introduced by Tishby et al. (2000) to extract the relevant information about a random variable (r.v.) $Y$ from another r.v. $X$ by finding a representation $T$ which compresses the information of $X$ while capturing only the sufficient information about $Y$. As shown in Figure 2, we model a DNN with a Markov chain $Y \mapsto X \mapsto T_1 \mapsto \cdots \mapsto T_k \mapsto \hat{Y}$, where $X$, $Y$, $\hat{Y}$ and $T_i$ are respectively the input, its label, the inference output and the output of the $i$-th hidden DNN layer. The IB optimizes the following:
$$\min_{P(T_i|X)} I(X; T_i) - \beta \cdot I(Y; T_i), 1 \leq i \leq k$$
(1)
where $I(X; T_i)$ is the mutual information between $X$ and $T_i$ while $I(Y; T_i)$ is the mutual information between $Y$ and $T_i$. Each layer can thus be described by its own unique information plane $(I(X; T_i), I(Y; T_i))$ which represents its compression and generalization capability. Notice that optimizing Equation 1 is equivalent to minimizing $I(X; T_i)$ – i.e., learning to compress – while maximizing $I(Y; T_i)$ – i.e., learning to generalize. To simplify notation, and without loss of generality, henceforth we will consider a single generic hidden layer $T$.
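To make the objective concrete, the following minimal sketch (our own illustration, not part of the paper) evaluates $I(X; T) - \beta \cdot I(Y; T)$ for discrete variables, assuming the joint distribution $P(X, Y, T)$ is given as an array; all function names are hypothetical.

```python
import numpy as np

def mutual_information(p_joint):
    """I(A; B) in nats, computed from a joint probability table p_joint[a, b]."""
    p_a = p_joint.sum(axis=1, keepdims=True)   # marginal P(A)
    p_b = p_joint.sum(axis=0, keepdims=True)   # marginal P(B)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / (p_a * p_b)[mask])).sum())

def ib_objective(p_xt, p_yt, beta):
    """IB loss of Equation 1 at one layer: I(X; T) - beta * I(Y; T)."""
    return mutual_information(p_xt) - beta * mutual_information(p_yt)

# Toy example: 4 input symbols, 2 labels, 3 latent symbols.
rng = np.random.default_rng(0)
p_xyt = rng.random((4, 2, 3))
p_xyt /= p_xyt.sum()                           # joint P(X, Y, T)
print(ib_objective(p_xyt.sum(axis=1), p_xyt.sum(axis=0), beta=2.0))
```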
3.2: Variance vs Bias in Adversarial Attacks. We define the end-to-end robustness of the DNN model as $I(Y; \hat{Y})$, which measures the mutual information between the input label (i.e., ground truth) and the DNN inference. We apply the Data Processing Inequality (DPI) to describe the information loss during processing (Cover, 1999):
$$I(Y; X) \geq I(Y; T) \geq I(Y; \hat{Y})$$
(2)
In short, the generalization metric $I(Y; T)$ of hidden layer $T$ also describes the upper bound of $I(Y; \hat{Y})$, which is intrinsically a measure of robustness at layer $T$. By assuming that adversarial perturbations are not observable, it follows that there exists an a priori yet unknown optimal solution $I^*(Y; T)$
for a specific DNN architecture that satisfies the IB where adversarial perturbations cannot decrease the performance – in other words, \( I^*(Y; T) \) is resilient to adversarial attacks. The key issue is that although each DNN has a hypothesis set defined by its parameters, the optimum parameter set exhibiting the largest \( I^*(Y; T) \) is unknown. To this end, each trained DNN using a finite dataset \((X, Y)\) has its own estimation \( I(Y; T) \). Shamir et al. (2010) have proven that the estimated mutual information using finite samples has the following error bound:
\[
||I^*(Y; T) - I(Y; T)|| \leq O\left(\frac{|T||Y|}{\sqrt{n}}\right),
\]
(3)
where \( n \) denotes the number of data points and \( |T|, |Y| \) are the cardinalities of \( T \) and \( Y \), respectively. Equation 3 is the information version of the complexity-generalization tradeoff in PAC learning. A larger latent space \( |T| \) (i.e., a more complex hypothesis set in PAC learning) will have larger variance, resulting in decreased performance with inputs coming from a distribution different than \( X \), which is described by the upper bound \( ||I^*(Y; T) - I(Y; T)|| \). Conversely, with a smaller latent space (i.e., a smaller hypothesis set), the DNN has more bias, which leads to less accuracy. Equation 3 is also in line with Simon-Gabriel et al. (2019), which states that the robustness of DNNs decreases with growing data dimension. The following holds.
**Key Theoretical Finding #1: Variance vs Bias in Adversarial Attacks**
For adversarial attacks to a hidden layer \( T_{adv} \), the performance \( I(Y; T_{adv}) \leq I^*(Y; T) - O(|T||Y|/\sqrt{n}) \) is jointly determined by \( I^*(Y; T) \) and \( O(|T||Y|/\sqrt{n}) \). In other words, in distributed DNNs, the feature compression layer helps to enhance the adversarial robustness by reducing the variance but also introduces vulnerability as a result of adding bias.
### 3.3: Attacks in Latent Space vs Input Space
A key challenge to compare adversarial action in distributed DNNs versus conventional DNNs is that the input space and latent representation space have different cardinality. As such, we utilize a new metric based on the Kullback-Leibler (KL) divergence \( D_{KL}[P(Y|X)||P(Y|T)] \) to describe the information distortion (Tishby & Zaslavsky, 2015). Since \( D_{KL} \) is a function of random variables \( X \) and \( T \), the expectation of \( D_{KL} \) is
\[
\begin{aligned}
E\{D_{KL}\} &= \sum_{X,T} P(X,T) \sum_Y P(Y|X,T) \log \frac{P(Y|X)}{P(Y|T)} \\
&= \sum_{X,T,Y} P(X,T,Y) \log \frac{P(Y|X)P(X|T)}{P(Y|T)P(X|T)} \\
&= \sum_{X,T,Y} P(X,T,Y) \log \frac{P(X,Y|T)}{P(Y|T)P(X|T)} \\
&= I(X;Y|T)
\end{aligned}
\]
(4)
The conditional mutual information \( I(X;Y|T) \) can be considered as the residual information between \( X \) and \( Y \) which is not captured by \( T \). Due to the chain rule of mutual information,
\[
I(X;Y|T) = I(X,T;Y) - I(Y;T)
\]
(5)
For a Markov chain \( Y \rightarrow X \rightarrow T \), the joint distribution \( P(X,Y,T) \) has following property
\[
P(X,Y,T) = P(T|X,Y)P(Y|X)P(X) = P(T|X)P(Y|X)P(X)
\]
(6)
Therefore, \( I(X,T;Y) \) can be simplified as
\[
I(X,T;Y) = E\left\{\log \frac{P(X,T,Y)}{P(X,T)P(Y)}\right\} = E\left\{\log \frac{P(T|X)P(Y|X)P(X)}{P(T|X)P(X)P(Y)}\right\}
\]
\[
= E\left\{\log \frac{P(Y|X)}{P(Y)}\right\} = I(X;Y)
\]
(7)
From Equation 5 and Equation 7, it follows that
\[
I(X;Y|T) = I(X;Y) - I(Y;T).
\]
(8)
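The chain of identities in Equations 4–8 can be verified numerically. The snippet below (our own check, not from the paper) builds a random discrete joint distribution respecting the Markov chain $Y \mapsto X \mapsto T$ and confirms that the expected KL distortion equals the residual information $I(X;Y|T)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint P(x, y, t) = P(x) P(y|x) P(t|x) respects the Markov chain Y <- X -> T.
p_x  = rng.random(5);      p_x  /= p_x.sum()
p_yx = rng.random((5, 3)); p_yx /= p_yx.sum(axis=1, keepdims=True)   # P(y|x)
p_tx = rng.random((5, 4)); p_tx /= p_tx.sum(axis=1, keepdims=True)   # P(t|x)
p = p_x[:, None, None] * p_yx[:, :, None] * p_tx[:, None, :]         # p[x, y, t]

p_t = p.sum(axis=(0, 1))                        # P(t)
p_y_given_t = p.sum(axis=0) / p_t[None, :]      # P(y|t)

# Left-hand side of Equation 4: E{ D_KL[ P(Y|X) || P(Y|T) ] }.
lhs = (p * np.log(p_yx[:, :, None] / p_y_given_t[None, :, :])).sum()

# Right-hand side: I(X; Y | T) from its definition.
p_xt = p.sum(axis=1)                            # P(x, t)
p_x_given_t = p_xt / p_t[None, :]               # P(x|t)
p_xy_given_t = p / p_t[None, None, :]           # P(x, y|t)
rhs = (p * np.log(p_xy_given_t /
                  (p_x_given_t[:, None, :] * p_y_given_t[None, :, :]))).sum()

print(np.isclose(lhs, rhs))                     # True
```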
For adversarial attacks in conventional DNNs, the adversarial samples are generated in the input space. In this case, the Markov chain is \( Y \rightarrow X_{adv} \rightarrow T' \), where \( X_{adv} \) represents adversarial samples and \( T' \) is the corresponding latent representation. Similarly, the Markov chain for adversarial
attacks in distributed DNNs is \( Y \mapsto X \mapsto T_{\text{adv}} \) where \( T_{\text{adv}} \) represents adversarial latent samples. Intuitively, introducing adversarial perturbations will confuse DNNs, thus increasing the residual information. As such, we can use the residual information to quantify the adversarial perturbations. Assuming \( X_{\text{adv}} \) and \( T_{\text{adv}} \) have the same level of information distortion,
\[
I(X_{\text{adv}}; Y) - I(Y; T') = I(X; Y) - I(Y; T_{\text{adv}}).
\]
(9)
Since \( X_{\text{adv}} \) is a mapping of \( X \), there is a Markov chain \( Y \mapsto X \mapsto X_{\text{adv}} \). The following holds.
**Key Theoretical Finding #2: Attacks in Latent Space vs Input Space**
By the DPI, it follows that \( I(X; Y) \geq I(X_{\text{adv}}; Y) \). Combining this with Equation 9 yields
\[
I(Y; T') \leq I(Y; T_{\text{adv}}).
\]
(10)
In other words, with the same level of information distortion, attacking the latent space is less effective than attacking the input space.
## 4 EXPERIMENTAL SETUP
### 4.1 ATTACKS UNDER CONSIDERATION
We have extensively validated the theoretical findings obtained in Section 3 by implementing 10 popular attacks on DNNs. These include 4 gradient-based white-box attacks by Goodfellow et al. (2014), Kurakin et al. (2018), Dong et al. (2018) and Madry et al. (2017), as well as 3 score-based black-box attacks (Ilyas et al., 2018; Li et al., 2019; Andriushchenko et al., 2020) and 3 decision-based black-box attacks (Dong et al., 2019; Cheng et al., 2019; Wang et al., 2022). We first formally define adversarial attacks in input and latent space, and then describe the related algorithms.
**Adversarial Attacks in Input Space.** Let \( f : \mathbb{R}^d \rightarrow \mathbb{C}^k \) denote a DNN where \( \mathbb{R} \) and \( \mathbb{C} \) are respectively the input and output space, and \( d \) and \( k \) are the corresponding dimensions of these two spaces. The DNN assigns the highest score to the correct class \( y = \arg\max_k f(x)_k \) for each input \( x \). The adversarial goal is to introduce a perturbation \( \delta_d \in \mathbb{R}^d \) to the original sample so that
\[
\arg\max_{k=1,\ldots,K} f(x + \delta_d) \neq y,
\]
(11)
where \( ||\delta_d||_p \leq \sigma \) and \( \sigma \) is the distance constraint under different \( l_p \) norm. Additionally, for visual applications, \( \delta_d \) should satisfy the condition \( x + \delta_d \in [0, 1]^d \) as there is an explicit upper and lower bound for red, green and blue (RGB) value in digital images.
**Adversarial Attacks in Latent Space.** Let \( g : \mathbb{R}^d \rightarrow \mathbb{H}^t \) and \( f : \mathbb{H}^t \rightarrow \mathbb{C}^k \) denote the mobile DNN and local DNN, where \( \mathbb{H} \) and \( t \) are the latent space and its associated dimension, respectively. For each input \( x \), the mobile DNN will generate a corresponding latent representation \( g(x) \in \mathbb{H}^t \) and the local DNN will generate the output \( y = \arg\max_k f(g(x))_k \) by taking the latent representation as input. Adversarial action in latent space adds a perturbation \( \delta_t \in \mathbb{H}^t \) such that
\[
\arg\max_{k=1,\ldots,K} f(g(x) + \delta_t) \neq y,
\]
(12)
where \( ||\delta_t||_p \leq \sigma \) is the distance constraint under \( l_p \) norm. We remark that the latent representations are model-dependent and there is no explicit bound for their value other than their computer-level representation (e.g., float, integer, double).
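To make Equation 12 concrete, the sketch below implements a simple iterative gradient-sign attack on the latent representation in PyTorch. It is a minimal illustration under the assumption that the mobile DNN `g` and the local DNN `f` are available as `torch.nn.Module` instances; it is not the exact attack implementation evaluated in Section 5.

```python
import torch

def latent_pgd(g, f, x, y, sigma=0.01, alpha=0.002, steps=10):
    """BIM/PGD-style l_inf attack on the latent representation g(x)."""
    z = g(x).detach()                            # representation sent over the air
    delta = torch.zeros_like(z, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(f(z + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-sign ascent
            delta.clamp_(-sigma, sigma)          # enforce ||delta||_inf <= sigma
            # unlike the input space, no [0, 1] box constraint applies here
        delta.grad.zero_()
    return (z + delta).detach()
```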
**White-box Attacks:** We consider 4 gradient-based attacks only in the white-box setting because latent representations are different for each model, resulting in numerous surrogate DNNs that may be infeasible in practical settings. We choose FGSM, BIM, MIM with \( l_\infty \) norm constraints. PGD is implemented for both \( l_2 \) and \( l_\infty \) spaces as a baseline for other black-box attacks.
**Black-box Attacks:** We consider 3 score-based attacks NES, N-Attack and Square Attack in \( l_\infty \) space and 3 decision-based attacks EVO, S-OPT and Triangle Attack in \( l_2 \) space. During our experiments, we found that S-OPT and HSJA have similar results, so we do not report HSJA due to space limitations.
**Dataset and Metrics:** We evaluate adversarial robustness using 1000 samples from the validation set of ImageNet-1K (Deng et al., 2009), limiting the samples to those which are correctly classified.
Table 1: List of feature compression approaches considered in this paper
| Category | Approach | Description |
|----------|----------|-------------|
| Dimension| SC | naive supervised compression trained with cross entropy |
| | KD | bottleneck trained with naive knowledge distillation |
| | BF | multi-stage training with distillation and cross entropy |
| Data | JC | reduce precision in frequency domain using JPEG approach |
| | QT | uniformly compress every element using naive bit quantization |
| Advanced | ES | bottleneck trained with distillation and information-based loss, and data compressed with quantization and entropy coding |
We define the perturbation budget $\epsilon$ as the mean square error (MSE) under the $l_2$ norm constraint (i.e., $\epsilon \times d = \sigma^2$ and $\epsilon \times t = \sigma^2$ in input and latent space, respectively) and as the maximum element-wise distance under the $l_\infty$ norm constraint (i.e., $\epsilon = \sigma$). We define the attack success rate (ASR) as
$$\text{ASR}(\epsilon) = \frac{1}{N} \sum_{i=1}^{N} I \left\{ \arg \max_{k=1,...,K} f(x_i, \delta_i) \neq y_i \right\},$$
where $I[\cdot]$ is the indicator function and $f(x_i, \delta_i)$ is the DNN output when fed with the $i$-th sample perturbed by $\delta_i$ (in the input or latent space, respectively).
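A possible implementation of this metric is sketched below (illustrative only). Here `model` maps a sample to class scores and `attack` returns the perturbation $\delta_i$; for latent-space attacks, `model` would be the local DNN $f$ and the samples would be the latent representations $g(x_i)$.

```python
def attack_success_rate(model, attack, samples, labels):
    """Fraction of originally correct samples that become misclassified."""
    fooled, total = 0, 0
    for x, y in zip(samples, labels):
        if model(x).argmax(dim=-1).item() != y:
            continue                 # only correctly classified samples are used
        total += 1
        delta = attack(x, y)         # e.g. latent_pgd from the sketch above
        if model(x + delta).argmax(dim=-1).item() != y:
            fooled += 1
    return fooled / max(total, 1)
```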
4.2 Deep Neural Networks Under Consideration
DNN Architectures. First, we consider 3 DNNs: VGG16 from Simonyan & Zisserman (2014) as well as Resnet50 and Resnet152 from He et al. (2016). Next, to investigate the effect introduced by the feature compression layer (i.e., the “bottleneck”) proposed for distributed DNNs, we introduce the same bottleneck design as Matsubara et al. (2022b) to VGG16, Resnet50 and Resnet152 and denote the new architectures as VGG16-fc, Resnet50-fc and Resnet152-fc.
Different Compression Approaches. In distributed DNNs, compression can be achieved both by compressing the dimensionality with bottlenecks and by compressing the data size with coding and precision reduction. We consider 3 different bottleneck training strategies for dimension reduction: Supervised Compression (SC) (Eshratifar et al., 2019b; Shao & Zhang, 2020), Knowledge Distillation (KD) (Matsubara et al., 2019) and BottleFit (BF) (Matsubara et al., 2022a). We also choose 2 data compression strategies, JPEG Compression (JC) (Alvar & Bajić, 2021) and Quantization (QT) (Singh et al., 2020), as well as 1 advanced approach, Entropic Student (ES), that compresses both the dimension and the data size (Matsubara et al., 2022c). We summarize these approaches in Table 1.
5 Experimental Results
5.1 Performance With Different DNN Architectures
Figure 3 shows the ASR obtained on ResNet152-fc with perturbation budget $\epsilon = 0.01$ (we explore the performance as a function of the perturbation budget in Figure 6). Remarkably, we notice that for each attack algorithm considered, the ASR is higher for attacks in the input space than for attacks in the latent space. In the case of Triangle Attack, the latent ASR is 88% less than the input ASR. On average, the ASR in input space is 57.49% higher than the ASR obtained by attacks in the latent space. Moreover, Square Attack, EVO, and Triangle Attack have the lowest ASR on latent representations. This is because these attacks search for perturbations in a lower dimensional space, and hence it is more challenging for the adversary to find effective distortions in the compressed latent space. Figure 3 shows that our theoretical findings are general and apply to a wide variety of attacks. For this reason, due to space limitations, in the next experiments we only show results of one score-based attack in $l_\infty$ norm and one decision-based attack in $l_2$ norm as well as the corresponding white-box baselines.
Figure 3: 10 different attacks to ResNet152-fc with perturbation budget $\epsilon = 0.01$.
Figure 4 shows the performance of PGD, N-Attack and Triangle Attack on different DNNs with perturbation budget $\epsilon = 0.003$. For each DNN, the ASR is higher for input-space attacks. In VGG16-bf, which shows the best robustness, the average ASR in the latent space is 87.8% lower than for input attacks. On average, latent representations are 58.33% more robust.
### 5.2 Performance with Different Compression Approaches
To evaluate the robustness of different compression approaches, we choose Square Attack and Triangle Attack, which are the newest approaches for score-based and decision-based attacks respectively. We do not consider gradient-based attacks as compression approaches such as knowledge distillation add penalty terms to their loss function, which leads to gradient masking. Hence, their robustness cannot be correctly evaluated by naive gradient-based attacks (Athalye et al., 2018). We choose a larger perturbation budget ($\epsilon = 0.05$) than in the experiments depicted in Figure 3 to further evaluate whether the compressed feature space is robust to attacks relying on low-dimensional subspace searching. We note that data compression can be applied in addition to bottlenecks. However, for comparison purposes, we choose ResNet50 without bottlenecks for JC and QT.
Figure 5 shows the ASR of Square Attack in $l_\infty$ space and Triangle Attack in $l_2$ space with perturbation budget $\epsilon = 0.05$. Except for JC and QT, the adversarial robustness shows the same trend regardless of the examined approach. The average ASRs in input space are 79.07% and 87.22% higher than the average ASRs in latent space for Square Attack and Triangle Attack, respectively. For DNNs with bottlenecks, despite the increase in perturbation, the ASRs of Square Attack and Triangle Attack performed on latent representations do not increase noticeably compared to Figure 3. However, since JC and QT do not have separate feature compression layers, the ASRs of Square Attack and Triangle Attack in input space are only 55.8% and 0.25% higher than the attacks in latent space, showing a significant drop compared to the other 4 approaches. These results confirm that the compressed feature space is indeed robust to attacks that search in lower dimensions.
### 5.3 Performance as Function of Compression Ratio
In the previous section, we have shown that the robustness of the latent representation is mostly characterized by the bottleneck layer properties rather than the compression approach itself. Thus, we further evaluate the robustness for different sizes of the latent space using N-Attack, MIM under $l_\infty$ constraint and S-OPT, PGD under $l_2$ constraint with multiple perturbation budgets ($\epsilon = 0.003; \epsilon = 0.01; \epsilon = 0.03$). The cardinality of the latent space is controlled by the number of channels at the bottleneck layer. We first set the channel number to 12 for ResNet152-fc, which achieves 77.47% validation accuracy, close to the performance of the original ResNet152 (78.31%). Then, we reduce the number of channels to 3, which decreases the dimension of the latent representations but also reduces the end-to-end performance to 70.01%. We do not repeat the results for Square Attack and Triangle Attack since they fail to achieve a satisfactory ASR in the previous experiments due to their smaller search subspace, as shown in Figures 3 and 5.
Figure 5: Square and Triangle attack success rate associated with 6 feature compression approaches with perturbation budget $\epsilon = 0.05$.
Figure 6 shows results obtained by considering the $l_\infty$ and $l_2$ attacks with multiple perturbation budgets ($\epsilon = 0.003; \epsilon = 0.01; \epsilon = 0.03$) in the latent space of the original ResNet152, 12-channel ResNet152-fc and 3-channel ResNet152-fc. From ResNet152 to 12-channel ResNet152-fc, the ASR reduces as the dimensionality of the latent representations decreases by a factor of 21.33 – in other words, $O(|T||Y|/\sqrt{n})$ is 21.33 times smaller. However, after reducing the channel size to 3, the ASR does not decrease any further\(^1\). Conversely, distributed DNNs with a smaller channel size become more vulnerable to perturbations. This is because when reducing from 12 to 3 channels, the accuracy also decreases by 7.46%, which in turn lessens the end-to-end generalization capability (i.e., $I^*(Y; T)$). This experiment supports our analysis that the robustness in latent representations of distributed DNNs is jointly determined by the end-to-end performance and the feature dimensions.
Figure 6: $l_\infty$ and $l_2$ attack success rate for different latent cardinalities of Resnet152-fc with different perturbation budgets: (left) $\epsilon = 0.003$; (center) $\epsilon = 0.01$; (right) $\epsilon = 0.03$.
6 CONCLUDING REMARKS
This paper has investigated adversarial attacks on latent representations of DNNs for the first time. First, we have theoretically analyzed the robustness of latent representations with information-theoretical notions based on information bottleneck theory. To validate our theoretical findings, we have performed an extensive set of experiments with 6 different DNN architectures, 6 different distributed DNN approaches and 10 different attacks from the literature. Our investigation concludes that latent representations are more robust than input representations assuming the same level of information distortion. Moreover, the adversarial robustness in latent space is jointly determined by the feature size and the end-to-end model generalization capability. Finally, we have shown that the success rate of attacks on the latent representations can be reduced by 88% in the best case and 57.49% on average compared to the same algorithms in input space. We hope that this work will inspire future work on the topic of adversarial machine learning on latent representations. We are currently working on designing defenses against attacks on latent representations of distributed DNNs investigated in this paper.
\(^1\)Due to differences in devices and random seeds, the ASR can vary by 2-3%. Thus, we do not consider the decrease of the MIM success rate in 3-channel ResNet152-fc, which is less than 5%.
REFERENCES
Nilesh Ahuja, Parual Datta, Bhavya Kanzariya, V Srinivasa Somayazulu, and Omesh Tickoo. Neural rate estimator and unsupervised learning for efficient distributed image analytics in split-dnn models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2022–2030, 2023.
Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016.
Saeed Ranjbar Alvar and Ivan V Bajić. Pareto-optimal bit allocation for collaborative intelligence. IEEE Transactions on Image Processing, 30:3348–3361, 2021.
Rana Ali Amjad and Bernhard C Geiger. Learning representations for neural network-based classification using the information bottleneck principle. IEEE transactions on pattern analysis and machine intelligence, 42(9):2225–2239, 2019.
Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In European conference on computer vision, pp. 484–501. Springer, 2020.
Hassan Ashtiani, Vinayak Pathak, and Ruth Urner. Adversarially robust learning with tolerance. In International Conference on Algorithmic Learning Theory, pp. 115–135. PMLR, 2023.
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning, pp. 274–283. PMLR, 2018.
Idan Attias, Aryeh Kontorovich, and Yishay Mansour. Improved generalization bounds for robust learning. In Algorithmic Learning Theory, pp. 162–183. PMLR, 2019.
Pranjal Awasthi, Abhratanu Dutta, and Aravindan Vijayaraghavan. On robustness to adversarial examples and polynomial optimization. Advances in Neural Information Processing Systems, 32, 2019.
Luca Baldesi, Francesco Restuccia, and Tommaso Melodia. ChARM: NextG Spectrum Sharing Through Data-Driven Real-Time O-RAN Dynamic Control. In Proceedings of IEEE International Conference on Computer Communications (INFOCOM), pp. 240–249. IEEE, 2022.
Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016.
Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018.
Laura Beggel, Michael Pfeiffer, and Bernd Bischl. Robust anomaly detection in images using adversarial autoencoders. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019, Proceedings, Part I, pp. 206–222. Springer, 2020.
Robi Bhattacharjee, Max Hopkins, Akash Kumar, Hantao Yu, and Kamalika Chaudhuri. Robust empirical risk minimization with tolerance. In International Conference on Algorithmic Learning Theory, pp. 182–203. PMLR, 2023.
Sébastien Bubeck and Mark Sellke. A universal law of robustness via isoperimetry. Advances in Neural Information Processing Systems, 34:28811–28822, 2021.
Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM workshop on artificial intelligence and security, pp. 3–14, 2017a.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE, 2017b.
|
VAwgL8kPvr
|
The title of the paper suggests a focus on Large Language Models, leading me to expect analyses or experiments involving LLMs like LLaMA, or at least T5-large, especially since the term 'LLM' is predominantly used in the paper rather than 'language model' or 'pre-trained language model'. However, upon delving into the experimental section, it's surprising to find that the actual experiments exclusively involve **BERT**. There is no mention of LLMs in the experiments, nor is there any comparison or discussion on how the application of Neural Architecture Search might differ between LLMs and PLMs.
|
STRUCTURAL PRUNING OF PRE-TRAINED LANGUAGE MODELS VIA NEURAL ARCHITECTURE SEARCH
Anonymous authors
Paper under double-blind review
ABSTRACT
Pre-trained language models (PLMs) mark the state-of-the-art for natural language understanding. However, their large size poses challenges in deploying them for inference in real-world applications, due to significant GPU memory requirements and high inference latency. This paper explores weight-sharing based neural architecture search (NAS) as a form of structural pruning to find sub-parts of the fine-tuned network that optimally trade off efficiency, for example in terms of model size or latency, against generalization performance. Unlike traditional pruning methods with fixed thresholds, we propose to adopt a multi-objective approach that identifies the Pareto optimal set of sub-networks, allowing for a more flexible and automated compression process. Our NAS approach achieves up to 50% compression with less than 5% performance drop for a fine-tuned BERT model on 7 out of 8 text classification tasks.
1 INTRODUCTION
Pre-trained language models (PLMs) represent the current state-of-the-art for natural language understanding (NLU) tasks (Devlin et al., 2019). However, deploying PLMs for inference can be challenging due to their large parameter count. Current PLMs demand significant GPU memory and exhibit high inference latency, making them impractical for many real-world applications, for example when used in an endpoint for a web service or deployed on embedded systems. Recent work (Blalock et al., 2020; Kwon et al., 2022; Michel et al., 2019; Sajjad et al., 2022) demonstrated that in many cases only a subset of the pre-trained model significantly contributes to the downstream task performance. This allows for compressing the model by pruning parts of the network while minimizing performance deterioration.
Unstructured pruning (Blalock et al., 2020) computes a score for each weight in the network, such as the weight’s magnitude, and removes weights with scores below a predetermined threshold. This approach often achieves high pruning rates with minimal performance degradation, but it also leads to sparse weight matrices, which are not well-supported by commonly used machine learning frameworks. Structured pruning (Michel et al., 2019; Sajjad et al., 2022) removes larger components of the networks, such as layers or heads. Although it typically does not achieve the same pruning rates as unstructured pruning, it only prunes entire columns/rows of the weight matrix, making it compatible with popular deep learning frameworks and hardware.
Recent work on neural architecture search (NAS) (Zoph & Le, 2017; Real et al., 2017; Bergstra et al., 2013) finds more resource-efficient neural network architectures in a data-driven way. To reduce the computational burden of vanilla NAS, weight-sharing-based neural architecture search (Pham et al., 2018; Liu et al., 2019b; Elsken et al., 2018) first trains a single super-network and then searches for sub-networks within the super-network. It can be considered a form of structural pruning, where one aims to find sub-networks that sustain the performance of the given super-network. Most structural pruning approaches prune the network based on a predefined threshold on the pruning ratio. In scenarios where there is no strict constraint on model size, it can be challenging to define such a fixed threshold in advance. NAS offers a distinct advantage over other pruning strategies by enabling a multi-objective approach to identify the Pareto optimal set of sub-networks, which captures the nonlinear relationship (see Figure 1) between model size and performance instead of just obtaining a single solution. This allows us to automate the compression process and to select the
Figure 1: Illustration of our approach. We fine-tune the pre-trained architecture by updating only sub-networks, which we select by placing a binary mask over heads and units in each MHA and FFN layer. Afterwards, we run a multi-objective search to select the optimal set of sub-networks that balance parameter count and validation error.
best model that meets our requirements post-hoc after observing the non-linear Pareto front, instead of running the pruning process for multiple rounds to find the right threshold parameter.
While there is considerable literature on improving the efficiency of LLMs, to the best of our knowledge no work has yet explored the potential of NAS for pruning fine-tuned PLMs. Our contributions are the following:
• We discuss the intricate relationship between weight-sharing based NAS and structural pruning and present a NAS approach that compresses PLMs for inference after fine-tuning on downstream tasks, while minimizing performance deterioration. Our focus lies not in proposing a novel NAS method per se, but rather in offering a practical use-case for NAS in the context of LLMs.
• We propose four different search spaces to prune components of transformer-based LLMs and discuss their complexity and how they affect the structure of sub-networks. We also show how existing structural pruning approaches operate in two of these search spaces.
• Our method offers a more accurate approximation of the Pareto front that better balances generalization performance and parameter count than running state-of-the-art structural pruning techniques multiple times with different thresholds.
• We perform a thorough ablation study of weight-sharing based NAS and show that this use case serves as a useful test bed to benchmark NAS methods. In the long run we anticipate that our work will drive the development of future NAS methods.
We present an overview of related work in Section 2 and describe our methodology in Section 3. Section 4 provides an empirical comparison of our proposed approach with other structural pruning methods from the literature, along with an in-depth ablation study.
2 RELATED WORK
Neural Architecture Search (NAS) (see Elsken et al. (2018) for an overview) automates the design of neural network architectures to maximize generalization performance and efficiency (e.g., in terms of latency, model size or memory consumption). The limiting factor of conventional NAS is the computational burden of the search, which requires multiple rounds of training and validating neural network architectures (Zoph & Le, 2017; Real et al., 2017). To mitigate this cost, various approaches have been proposed to accelerate the search process. For example, some of these methods early terminate the training process for poorly performing configurations (Li et al., 2018) or extrapolate learning curves (White et al., 2021b). Weight-sharing NAS (Pham et al., 2018; Liu et al., 2019a) addresses the cost issue by training a single super-network consisting of all architectures in the search space, such that each path represents a unique architecture. Initially, Liu et al. (2019a) framed this as a bi-level optimization problem, where the inner objective involves the optimization of the network weights, and the outer objective the selection of the architecture. After training the super-network, the best architecture is selected based on the shared weights and then re-trained from scratch. However, several papers (Li & Talwalkar, 2020; Yang et al., 2020) reported
that this formulation heavily relies on the search space and does not yield better results than just randomly sampling architectures. To address this limitation, Yu et al. (2020) proposed a two-stage NAS process. In the first stage, the super-network is trained by updating individual sub-networks in each iteration, instead of updating the entire super-network. After training, the final model is selected by performing gradient-free optimization based on the shared weights of the super-network, without any further training. Concurrently, Cai et al. (2020) applies a similar approach for convolutional neural networks in the multi-objective setting by first training a single super-network and then searching for sub-networks to minimize latency on some target devices. Related to our work is also the work by Xu et al. (2021), which searches for more efficient BERT architectures during the pre-training phase.
**Structural Pruning** involves removing parts of a trained neural network, such as heads (Michel et al., 2019), or entire layers (Sajjad et al., 2022), to reduce the overall number of parameters while preserving performance. Individual components are pruned based on a specific scoring function, using a manually defined threshold. For transformer-based architectures, Michel et al. (2019) observed that a significant number of heads, up to a single head in a multi-head attention layer, can be deleted after fine-tuning without causing a significant loss in performance. Voita et al. (2019) proposed L0 regularization as a means to prune individual heads in a multi-head attention layer. Kwon et al. (2022) prunes individual heads and units in the fully-connected layers after fine-tuning according to the Fisher information matrix. Sajjad et al. (2022) demonstrated that it is even possible to remove entire layers of a pre-trained network prior to fine-tuning, with minimal impact on performance. In comparison to our data-driven approach, Sajjad et al. (2022) suggested using predefined heuristics (e.g., deleting top / odd / even layers) to determine layers to prune. However, as shown in our experiments, the appropriate architecture depends on the specific task, and more data-driven methods are necessary to accurately identify the best layers to prune.
**Distillation** (Hinton et al., 2015) trains a smaller student model to mimic the predictions of a pre-trained teacher model. For instance, Sanh et al. (2020) used this approach to distill a pre-trained BERT model (Devlin et al., 2019) into a smaller model for fine-tuning. Jiao et al. (2019) proposed a knowledge distillation approach specifically for transformer-based models, which first distills from a pre-trained teacher into a smaller model and then performs task-specific distillation in a second step based on a task-augmented dataset. Related to our method is also AdaBERT (Chen et al., 2020) which trains task-specific convolutional neural networks based on differentiable NAS (Liu et al., 2019a) by distilling the knowledge of a PLM such as BERT. Unlike pruning-based methods, distillation allows for complete architectural changes beyond merely dropping individual components. However, from a practical standpoint, determining the optimal structure and capacity of the student network needed to match the performance of the teacher network also amounts to a hyperparameter and neural architecture search problem. Additionally, training a student network requires a significant amount of computational resources. For example, the model of Sanh et al. (2020) was trained for around 90 hours on 8 16GB V100 GPUs. This cost can be amortized by fine-tuning the student model to solve many different tasks but, depending on the downstream tasks, it potentially requires a substantial number of iterations, which is not always desirable for practitioners who aim to solve a single specific task. This is especially important in the multi-objective setting where many networks need to be distilled to map the full size/accuracy Pareto front.
**Quantization** (Dettmers et al., 2022; Dettmers & Zettlemoyer, 2023) reduces the precision of model parameters from floating-point numbers to lower bit representations (e.g., 8-bit integers). The main advantage of quantization is the reduction in memory footprint. However, as we show in Appendix E, this does not necessarily lead to lower latency. Quantization is independent of our NAS approach and can be employed on the pruned network to further decrease memory usage.
### 3 Structural Pruning via Neural Architecture Search
We first provide a multi-objective problem definition for compressing fine-tuned LLMs. Afterwards, we describe our weight-sharing based NAS approach and present four search spaces to prune transformer-based architectures, each with a different degree of pruning granularity.
3.1 Problem Definition
We consider a pre-trained transformer model based on an encoder-only or decoder-only architecture, such as BERT (Devlin et al., 2019), with $L$ non-embedding layers, each composed of a multi-head attention (MHA) layer followed by a fully connected feed forward (FFN) layer. Given an input sequence $X \in \mathbb{R}^{n \times d_{\text{model}}}$, where $n$ represents the sequence length and $d_{\text{model}}$ the size of the token embedding, the MHA layer is defined by:
$$\text{MHA}(X) = \sum_{i=1}^{H} \text{Att}_{W_Q^{(i)}, W_K^{(i)}, W_V^{(i)}, W_O^{(i)}}(X),$$
where $W_Q^{(i)}, W_K^{(i)}, W_V^{(i)} \in \mathbb{R}^{d_{\text{model}} \times d}$ and $W_O^{(i)} \in \mathbb{R}^{Hd \times d_{\text{model}}}$ are weight matrices. $\text{Att}(\cdot)$ is a dot product attention head (Bahdanau et al., 2015) and $H$ is the number of heads. The output is then computed by $X_{\text{MHA}} = \text{LN}(X + \text{MHA}(X))$, where LN denotes layer normalization (Ba et al., 2016). The FFN layer is defined by
$$\text{FFN}(X) = W_1 \sigma(W_0 X),$$
with $W_0 \in \mathbb{R}^{I \times d_{\text{model}}}$ and $W_1 \in \mathbb{R}^{d_{\text{model}} \times I}$, where $I$ denotes the intermediate size and $\sigma(\cdot)$ is a non-linear activation function. Also here we use a residual connection to compute the final output:
$$X_{\text{FFN}} = \text{LN}(X_{\text{MHA}} + \text{FFN}(X_{\text{MHA}})).$$
We define a binary mask $M_{\text{head}} \in \{0, 1\}^{L \times H}$ for each head in the multi-head attention layer and a binary mask $M_{\text{neuron}} \in \{0, 1\}^{L \times U}$ for each neuron in the fully-connected layers. The output of the $l$-th MHA layer and FFN layer is computed by
$$\text{MHA}_l(X) = \sum_{i=1}^{H} M_{\text{head}}[l, i] \circ \text{Att}(\cdot)$$
and
$$\text{FFN}_l(X) = M_{\text{neuron}}[l] \circ W_1 \sigma(W_0 X),$$
respectively.
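A minimal PyTorch sketch of how these masks gate the forward pass is shown below. It is our own illustration, assuming the per-head attention outputs are already projected to $d_{\text{model}}$ and that $M_{\text{neuron}}$ gates the intermediate units before the second linear map; it is not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def masked_mha(head_outputs, m_head_l):
    # MHA_l(X) = sum_i M_head[l, i] * Att_i(X); head_outputs: (H, n, d_model)
    return (m_head_l[:, None, None] * head_outputs).sum(dim=0)

def masked_ffn(x, W0, W1, m_neuron_l):
    # FFN_l(X) with M_neuron[l] gating the intermediate units: masked units
    # contribute nothing, so their rows of W0 / columns of W1 can be removed.
    h = F.gelu(x @ W0.T) * m_neuron_l            # x: (n, d_model), W0: (I, d_model)
    return h @ W1.T                              # W1: (d_model, I)
```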
Now, let us define a search space $\Theta$ that contains a finite set of configurations defining possible sub-networks sliced from the pre-trained network. We define a function $\text{CREATEMASK}$ that maps a configuration $\theta$ to the binary masks $M_{\text{head}}, M_{\text{neuron}}$. Let us denote by $f_0 : \Theta \rightarrow \mathbb{R}$ the validation error of the sub-network defined by configuration $\theta$ after fine-tuning on some downstream task. To compute the validation score induced by $\theta$, we place the corresponding masks $M_{\text{head}}, M_{\text{neuron}}$ over the network. Additionally, we define the total number of trainable parameters $f_1 : \Theta \rightarrow \mathbb{N}$ of the sub-network. Our goal is to solve the following multi-objective optimization problem:
$$\min_{\theta \in \Theta} (f_0(\theta), f_1(\theta)). \quad (1)$$
In the multi-objective setting, there is no single $\theta^* \in \Theta$ that simultaneously optimizes all $M$ objectives. Let us define $\theta \succ \theta'$ iff $f_i(\theta) \leq f_i(\theta'), \forall i \in [M]$ and $\exists i \in [M] : f_i(\theta) < f_i(\theta')$. We aim to find the Pareto set $P_f = \{\theta \in \Theta \mid \nexists\, \theta' \in \Theta : \theta' \succ \theta\}$ of points that are not dominated by any other point in the search space.
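For our two objectives, the Pareto set of a finite set of evaluated configurations can be extracted as in the following sketch (our own illustration; names are hypothetical):

```python
from typing import List, Tuple

def pareto_set(points: List[Tuple[float, float]]) -> List[int]:
    """Indices of non-dominated points; both objectives are minimized."""
    front = []
    for i, (e_i, p_i) in enumerate(points):
        dominated = any(
            e_j <= e_i and p_j <= p_i and (e_j < e_i or p_j < p_i)
            for j, (e_j, p_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (validation error f_0, parameter count f_1 in millions)
print(pareto_set([(0.10, 110.0), (0.12, 60.0), (0.12, 80.0), (0.09, 120.0)]))
# -> [0, 1, 3]: (0.12, 80.0) is dominated by (0.12, 60.0)
```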
3.2 Weight-sharing based NAS
Following previous work (Yu et al., 2020; Wang et al., 2021), our weight-sharing based NAS approach consists of two stages: the first stage treats the pre-trained model as a super-network and fine-tunes it on the downstream task, such that sub-networks do not co-adapt. The second stage utilizes multi-objective search strategies to approximate the Pareto-optimal set of sub-networks (see Figure 1 for an illustration).
3.2.1 Super-Network Training
In the standard NAS setting, we would evaluate $f_0(\theta)$ by first fine-tuning the sub-network defined by $\theta$ on the training data before computing the score on the validation data. The weights of the sub-network are initialized based on the pre-trained weights. While more recent NAS approaches (Li & Talwalkar, 2020; Klein et al., 2020) accelerate the search process by early stopping poorly performing sub-networks, this still amounts to an optimization process that requires the compute of multiple independent fine-tuning runs.
The idea of two-stage weight-sharing-based NAS (Yu et al., 2020) is to train a single set of shared weights, dubbed super-network, that contains all possible networks in the search space. After training the super-network, evaluating $f_0(\theta)$ only requires a single pass over the validation data.
We consider the pre-trained network as a super-network with shared weights that contains all possible sub-networks $\theta \in \Theta$. To avoid that sub-networks co-adapt and stop working outside the super-network, previous work (Yu et al., 2020; Wang et al., 2021) suggested updating only a subset of sub-networks in each update step, instead of the full super-network. We adapt this strategy and sample sub-
networks according to the sandwich rule (Yu et al., 2020; Wang et al., 2021) in each update step, which always updates the smallest, the largest and \( k \) random sub-networks. The smallest and largest sub-network correspond to the lower and upper bound of \( \Theta \), respectively. For all search spaces \( \Theta \) defined below, the upper bound is equal to the full network architecture, i.e., the super-network, and the lower bound removes all layers except the embedding and classification layer.
Additionally, we use in-place knowledge distillation (Yu et al., 2019) which accelerates the training process of sub-networks. Given the logits \( \pi_{\text{super}}(x) \) of the super-network, which we obtain for free with the sandwich rule, and the logits of a sub-network \( \pi_\theta(x) \), the loss function to obtain gradients for the sub-networks follows the idea of knowledge distillation:
\[
L_{KD} = L_{CE} + D_{KL}\left(\sigma\left(\frac{\pi_{\text{super}}}{T}\right), \sigma\left(\frac{\pi_\theta}{T}\right)\right),
\]
where \( D_{KL}(\cdot) \) denotes the Kullback-Leibler divergence between the logits of the super-network and the sub-network, \( T \) a temperature parameter, \( \sigma(\cdot) \) the softmax function and \( L_{CE} \) is the cross-entropy loss.
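A condensed sketch of one super-network update combining the sandwich rule and in-place distillation is shown below. The helpers `apply_masks` and `sample_config`, as well as the `smallest` and `largest` configurations, are assumptions about the surrounding training loop rather than the paper's actual code.

```python
import torch.nn.functional as F

def supernet_step(model, batch, optimizer, smallest, largest, k=2, T=1.0):
    x, y = batch
    optimizer.zero_grad()

    apply_masks(model, largest)              # largest sub-network = super-network
    logits_super = model(x)
    F.cross_entropy(logits_super, y).backward()
    teacher = F.softmax(logits_super.detach() / T, dim=-1)

    # Sandwich rule: also update the smallest and k random sub-networks.
    for cfg in [smallest] + [sample_config() for _ in range(k)]:
        apply_masks(model, cfg)
        logits_sub = model(x)
        loss = F.cross_entropy(logits_sub, y) + F.kl_div(
            F.log_softmax(logits_sub / T, dim=-1), teacher,
            reduction="batchmean")           # L_KD from the equation above
        loss.backward()                      # gradients accumulate across configs
    optimizer.step()
```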
### 3.2.2 Sub-networks Selection
After training the super-network, we compute the validation error \( f_0(\theta) \) by applying \( M_{\text{head}} \) and \( M_{\text{neuron}} \) to the shared weights and performing a single pass over the validation data. This substantially reduces the computational cost involved in the multi-objective problem stated in Equation 1.
Previous work (White et al., 2021a) has demonstrated that simple local search often performs competitively compared to more advanced NAS methods. In this paper, we propose a straightforward multi-objective local search approach. Starting from the current Pareto front \( P_f \), which is initialized by some starting point, we randomly sample an element \( \theta^* \sim P_f \) and then generate a random neighbor point by perturbing a single random entry of \( \theta^* \). The pseudo code for our local search is provided in Appendix F; a simplified sketch is shown below.
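The following is a simplified sketch of this local search (the authoritative pseudo code is in Appendix F; the archive data structure and function names here are our own assumptions).

```python
import random

def dominates(a, b):
    """a dominates b for two minimized objectives (error, parameter count)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def local_search(evaluate, random_neighbor, start, iterations=500):
    """evaluate(theta) -> (f_0, f_1); theta must be hashable, e.g. a tuple."""
    front = {start: evaluate(start)}
    for _ in range(iterations):
        theta = random.choice(list(front))           # sample from current front
        cand = random_neighbor(theta)                # perturb one entry of theta
        if cand in front:
            continue
        f_cand = evaluate(cand)
        if not any(dominates(f, f_cand) for f in front.values()):
            front = {t: f for t, f in front.items() if not dominates(f_cand, f)}
            front[cand] = f_cand
    return front                                     # Pareto front approximation
```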
### 3.3 Search Space
The search space \( \Theta \) defines sub-networks of the pre-trained network architecture. An expressive \( \Theta \) allows for fine-grained pruning but might also become infeasible to explore. We propose the following search spaces that exhibit different levels of complexity. For each search space we provide pseudo code to define the CREATEMASK function in Appendix B.
- **LARGE**: For each head and each neuron in the fully-connected layer we define a single binary choice \( \Theta_i = \{0, 1\} \), which are combined to form the search space \( \Theta = \Theta_1 \times \ldots \times \Theta_{L(H+I)} \). This is the most expressive search space, but it also grows quickly with the model size. The search space is also commonly used by other structural pruning approaches (Kwon et al., 2022). It might not be very useful in practice, because we cannot easily remove single rows/columns of the weight matrix with most transformer implementations and hence it will not necessarily reduce the inference latency. However, it provides us a reference in terms of the predictive performance that can be retained under a certain pruning ratio.
- **MEDIUM**: Based on the previous search space, we allow for a flexible number of heads and units per layer. For each layer \( l \in [0, L] \), we define \( H_l = [0, H] \) and \( U_l = [0, U] \), such that the final search space is \( \Theta = H_0 \times U_0 \times \ldots \times H_L \times U_L \). For each layer, we always keep the first \( h \in H \) heads and \( u \in U \) units, respectively, to enforce that CREATEMASK is a bijective mapping (see Appendix D).
- **LAYER**: Inspired by Sajjad et al. (2022), we prune individual attention and fully-connected layers instead of single heads and neurons. We define a search space \( \Theta = \{0, 1\}^L \) that contains one binary hyperparameter for each layer that determines if the corresponding layer is removed.
- **SMALL**: We define the number of heads \( H = [0, H] \), the number of units \( U = [0, U] \), and the total number of layers \( L = [0, L] \), such that \( \Theta = H \times U \times L \). Compared to the other search spaces, the dimensionality of this search space does not change with the model size; only its upper bounds increase. As for the MEDIUM search space, we also keep the first heads and units in each layer; a sketch of the corresponding CREATEMASK routine is shown below.
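As an illustration, a possible CREATEMASK for the SMALL space is sketched below (the paper's actual pseudo code is in its Appendix B; dimensions follow the BERT-base setting used in Section 4):

```python
import numpy as np

def create_mask_small(theta, L=12, H=12, U=3072):
    """CREATEMASK for the SMALL space: theta = (h, u, l) keeps the first
    h heads and u units in each of the first l layers (our sketch)."""
    h, u, l = theta
    m_head = np.zeros((L, H), dtype=bool)
    m_neuron = np.zeros((L, U), dtype=bool)
    m_head[:l, :h] = True       # keep first h heads in the first l layers
    m_neuron[:l, :u] = True     # keep first u units in the first l layers
    return m_head, m_neuron
```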
Figure 2: Examples of head masks $M_{\text{head}}$ sampled uniformly at random from different search spaces. Dark color indicates that the corresponding head is masked. The same pattern can be observed for $M_{\text{neuron}}$.
Figure 3: Distribution of the parameter count $f_1(\theta)$ for uniformly sampled $\theta \sim \Theta$.
Each search space induces a different pattern for $M_{\text{head}}$ and $M_{\text{neuron}}$ that we place over the super-network to select sub-networks (see Figure 2 for some examples). To see how this affects the distribution over parameter counts, and hence the sampling during super-network training, we sample $N = 500$ configurations $\{\theta_1, \ldots, \theta_N\}$ uniformly at random and compute the number of trainable parameters $\{f_1(\theta_1), \ldots, f_1(\theta_N)\}$ for all four search spaces (see Figure 3). The SMALL search space is somewhat biased towards smaller networks. The MEDIUM search space, even though more expressive, is highly biased towards mid-size networks, since on average half of the heads/neurons are masked out. For the two binary search spaces LAYER and LARGE, we can achieve a uniform distribution over the number of parameters by using the following sampling process (sketched below). We first sample an integer $k \sim U(0, K)$, where $K = L$ for the LAYER search space and $K = L(H + U)$ for the LARGE search space. Afterwards, we randomly select $k$ entries of the binary vector $\theta \in \Theta$ and set them to 1.
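A minimal sketch of this sampling process for the two binary spaces, assuming $\theta$ is a flat binary vector of length $K$:

```python
import numpy as np

def sample_binary_theta(K, rng=None):
    """Draw k ~ U(0, K) first, then set k random entries of a binary theta
    to 1, giving a (roughly) uniform distribution over parameter counts."""
    rng = rng or np.random.default_rng()
    k = int(rng.integers(0, K + 1))
    theta = np.zeros(K, dtype=int)
    theta[rng.choice(K, size=k, replace=False)] = 1
    return theta
```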
4 EXPERIMENTS
We evaluate our approach on eight text classification datasets from the GLUE (Wang et al., 2019) benchmark suite. We provide a description of each dataset in Appendix C. All datasets come with a predefined training and evaluation set with labels and a hold-out test set without labels. We split the training set into a training and validation set (70%/30% split) and use the evaluation set as the test set. We fine-tune every network for 5 epochs on a single GPU. For all multi-objective search methods, we use Syne Tune (Salinas et al., 2022) on a single GPU instance. We use BERT-base (Devlin et al., 2019) (cased) as the pre-trained network, which consists of $L = 12$ layers, $U = 3072$ units, and $H = 12$ heads (other hyperparameters are described in Appendix A), because it achieves performance competitive with larger models on these benchmarks and allows for a more thorough evaluation. We also present a comparison to quantization in Appendix E.
4.1 COMPARISON
We now present a comparison against other structural pruning approaches. For NAS we use the SMALL search space defined in Section 3.3 based on our ablation study in Section 4.2. We compare against the following relevant baselines:
- **Retraining Free Pruning (RFP)** (Kwon et al., 2022) uses a three-phase pruning strategy that, based on a threshold $\alpha$, prunes individual heads in the MHA layer and units in the FFN layer. The first phase computes a binary mask for heads and units from the diagonal Fisher information matrix. The mask is then rearranged using a block-approximated Fisher information matrix. In the last step, the mask is further tuned by minimizing the layer-wise reconstruction error. This method operates in the LARGE search space described in Section 3.3. We run RFP with different values for $\alpha \in \{0.1, 0.2, ..., 0.9\}$ to obtain a Pareto set of architectures.
- **Layer Dropping (LD):** Following Sajjad et al. (2022), we first remove the top $n \in \{1, \ldots, L - 1\}$ layers and fine-tune the remaining layers directly on the downstream task. To obtain a Pareto set of $N$ points, we fine-tune $N$ models with different numbers of layers removed. This method serves as a simple heuristic to explore the LAYER search space.
- **DistilBERT** (Sanh et al., 2020) is a distilled version of BERT based on a smaller architecture ($L = 6$, $H = 12$, $U = 3072$), which we fine-tune directly on the downstream task.
- **Standard NAS (S-NAS)** uses the same multi-objective search but without the super-network training. Instead each sub-network is initialized with the pre-trained weights and then fine-tuned independently.
For each method except DistilBERT, we obtain a Pareto set of solutions with different parameter counts; note that parameter count is related to model inference time, as discussed in Appendix E. To compare results, we normalize the number of parameters to $[0, 1]$ and bin results based on different thresholds $\beta \in \{0.2, \ldots, 0.9\}$ (see the snippet below). Note that roughly 20% of the parameters of BERT-base are in the embedding and classification head and hence cannot be pruned. For each bin, we report the best performance of the solutions with $\leq \beta$ parameters.
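A small sketch of this binning, assuming arrays of parameter counts and test errors for one method's Pareto set:

```python
import numpy as np

def binned_best(param_counts, test_errors, total_params,
                betas=(0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)):
    """For each threshold beta, the best test error among solutions whose
    normalized parameter count is <= beta (None if the bin is empty)."""
    frac = np.asarray(param_counts) / total_params
    err = np.asarray(test_errors)
    return {b: (err[frac <= b].min() if (frac <= b).any() else None)
            for b in betas}
```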
Figure 4 shows the parameter count (horizontal axis) and the test error (vertical axis) relative to the unpruned network for all datasets. For reference, we indicate 5% and 10% relative error to the unpruned network by dashed lines. NAS achieves strong performance, especially for higher pruning ratios. For smaller pruning ratios, i.e., larger parameter counts (right side of the plots), all methods exhibit comparable performance. Notably, NAS showcases more fine-grained pruning capabilities than LD, as demonstrated by the smooth curves in the results.
Apart from the quality of the final Pareto set, we also evaluate the total runtime of each method. Figure 5 (left) shows the total runtime in terms of wall-clock time for the MNLI dataset. Plots for all other datasets are in Appendix D. For both RFP and NAS, we include the fine-tuning of the super-network in the runtime analysis. LD exhibits significantly higher wall-clock runtime than NAS, since it fine-tunes $N$ sub-networks independently. While RFP is overall faster, our NAS approach provides the best performance/runtime trade-off.
For a qualitative comparison, we show the results for a single run on the SST2 dataset in Figure 5 right. On this dataset our NAS approach finds sub-networks with approximately 50% the size of the unpruned network (dashed line) with almost no drop in performance.
Figure 5: Total runtime in seconds for each method, including training time for the super-network, on the MNLI dataset (left), and the resulting Pareto fronts on the SST2 dataset (right). While RFP is faster than our NAS approach, its Pareto front performs poorly in the smaller sub-network regime. LD and S-NAS achieve similar Pareto fronts on this benchmark, but consume substantially more resources.
Figure 6: Comparison of different search spaces to define sub-networks. Even though larger search spaces are more expressive, they under-perform within the selected budget.
4.2 Ablation Study
We now present a detailed ablation study to evaluate the different components of our NAS approach. To quantify the performance of a Pareto set, we compute the hypervolume (Zitzler et al., 2003), frequently used in the multi-objective literature; a minimal two-objective computation is sketched below. We first normalize each objective based on all observed values across all methods and repetitions via quantile normalization. This results in a uniform distribution on $[0, 1]$, and we use $(2, 2)$ as the reference point. We train each super-network five times with different random seeds. For each model checkpoint, i.e., super-network, we run the multi-objective search five times, also with different random seeds. This leads to 25 different Pareto sets, and we report the mean and total variance of the corresponding hypervolume. To cut computational cost, we report results only on the four smallest datasets: RTE, MRPC, CoLA, and STSB.
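For a two-objective minimization front, the hypervolume with respect to a reference point can be computed with a simple sweep; the sketch below assumes objectives have already been normalized as described above.

```python
def hypervolume_2d(points, ref=(2.0, 2.0)):
    """Hypervolume (dominated area) of a 2-D minimization Pareto front
    with respect to a reference point, via a right-to-left sweep."""
    pts = sorted(points)                       # ascending first objective
    front, best2 = [], float("inf")
    for f1, f2 in pts:                         # keep non-dominated points
        if f2 < best2:
            front.append((f1, f2))
            best2 = f2
    hv, prev_f1 = 0.0, ref[0]
    for f1, f2 in reversed(front):             # sum disjoint rectangles
        hv += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return hv
```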
4.2.1 Search Space
First, we compare the search space definitions from Section 3.3. We fine-tune the super-network as described in Section 3.2 and sample 100 sub-networks uniformly at random to compute the hypervolume. Within this budget (see Figure 6), the SMALL search space achieves the best performance. Interestingly, even though the MEDIUM search space allows for more fine-grained per-layer pruning, it leads to worse results. We attribute this to the non-uniform distribution of parameter counts described in Section 3.3. The LARGE search space, which is a superset of the other search spaces, seems infeasible to explore with random sampling given so few observations. We use the SMALL search space for the remaining experiments.
4.2.2 Super-network Training
Next, we compare the following super-network training strategies:
- **standard**: trains all weights of the super-network in the standard fine-tuning setting.
- **random**: samples a single random sub-network in each update step.
Figure 7: Comparison of super-network training strategies. More advanced strategies that sample a set of sub-networks outperform standard fine-tuning or just sampling a single random sub-network.
Figure 8: Hypervolume of different multi-objective search methods over the number of function evaluation. We report the difference to the optimal hypervolume given the reference point.
- **random-linear**: Following Yu et al. (2020), in each update step we sample either a random sub-network with probability $p$ or the full network with probability $1 - p$. Thereby, $p$ is increased linearly from 0 to 1 over the course of training.
- **sandwich**: The super-network is updated according to the sandwich rule described in Section 3.2. We set the number of random sub-networks in each update step to $k = 2$.
- **kd**: Update $k = 2$ random sub-networks according to Equation 2.
- **full**: Implements the training protocol described in Section 3.2, i.e., it combines the sandwich rule with in-place knowledge distillation to update sub-networks.
Figure 7 (middle) shows the hypervolume across all repetitions. Standard fine-tuning and just randomly sampling a single sub-network lead to significantly worse results. Linearly increasing the probability of sampling a random sub-network stabilizes results. Better results are achieved by using the sandwich rule or knowledge distillation; combining both improves results slightly further.
4.2.3 Multi-Objective Search
Lastly, we compare in Figure 8 the following multi-objective search methods: our local search (LS) described in Section 3.2.2 (see Appendix F for details); random search (RS) (Bergstra & Bengio, 2012), which samples architectures uniformly at random from the search space; NSGA-II (Deb et al., 2002), a frequently used genetic algorithm from the multi-objective literature; and Bayesian optimization (Garnett, 2023) with a linear scalarization of the objectives (LS-BO) and with a randomized scalarization of the objectives (RS-BO) (Paria et al., 2019).
While all methods yield comparable results (note the high uncertainty bars), LS performs slightly better. RS-BO and LS-BO underperform RS because their scalarization approach causes them to concentrate on specific parts of the Pareto front, failing to capture its complete extent. NSGA-II appears to suffer from sample inefficiency on this benchmark.
5 Conclusions
We propose weight-sharing-based NAS to compress fine-tuned PLMs by slicing sub-networks. By utilizing a multi-objective approach, we can find the Pareto optimal set of architectures that balance model size and validation error, allowing practitioners to select the optimal network without running the pruning process multiple times with different thresholds. Furthermore, our method is more runtime efficient than baselines and more effective than structural pruning methods.
REFERENCES
J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv:1607.06450 [stat.ML], 2016.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR’15), 2015.
J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research (JMLR-12), 2012.
J. Bergstra, D. Yamins, and D. Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In Proceedings of the 30th International Conference on Machine Learning (ICML’13), 2013.
D. Blalock, J. J. G. Ortiz, J. Frankle, and J. Guttag. What is the state of neural network pruning? arXiv:2003.03033 [cs.LG], 2020.
H. Cai, C. Gan, T. Wang, Z. Zhang, and S. Han. Once-for-all: Train one network and specialize it for efficient deployment. In International Conference on Learning Representations (ICLR’20), 2020.
D. Chen, Y. Li, M. Qiu, Z. Wang, B. Li, B. Ding, H. Deng, J. Huang, W. Lin, and J. Zhou. Adabert: Task-adaptive bert compression with differentiable neural architecture search. In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI’20), 2020.
K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. In IEEE Transactions on Evolutionary Computation, 2002.
T. Dettmers and L. Zettlemoyer. The case for 4-bit precision: k-bit inference scaling laws. arXiv:2212.09720 [cs.LG], 2023.
T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer. Llm.int8(): 8-bit matrix multiplication for transformers at scale. In Proceedings of the 36th International Conference on Advances in Neural Information Processing Systems (NeurIPS'22), 2022.
J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019.
T. Elsken, J. H. Metzen, and F. Hutter. Neural architecture search: A survey. arXiv:1808.05377 [stat.ML], 2018.
R. Garnett. Bayesian Optimization. Cambridge University Press, 2023.
G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv:1503.02531 [stat.ML], 2015.
X. Jiao, Y. Yin, L. Shang, X. Jiang, X. Chen, L. Li, F. Wang, and Q. Liu. Tinybert: Distilling bert for natural language understanding. arXiv:1909.10351 [cs.CL], 2019.
A. Klein, L. C. Tiao, T. Lienart, C. Archambeau, and M. Seeger. Model-based asynchronous hyper-parameter optimization. arXiv:2003.10865 [cs.LG], 2020.
W. Kwon, S. Kim, M. W. Mahoney, J. Hassoun, K. Keutzer, and A. Gholami. A fast post-training pruning framework for transformers. arXiv:2204.09656 [cs.CL], 2022.
L. Li and A. Talwalkar. Random search and reproducibility for neural architecture search. In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, 2020.
L. Li, K. Jamieson, A. Rostamizadeh, K. Gonina, M. Hardt, B. Recht, and A. Talwalkar. Massively parallel hyperparameter tuning. arXiv:1810.05934 [cs.LG], 2018.
H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations (ICLR’19), 2019a.
|
amjNJMpBiq
|
Your conclusion that “with the increasing rounding error, a greater leeway is left between the real certified radius $R$ and the computed certified radius $\tilde{R}$ for our method to exploit, so the attack success rate increases.” seems to contradict the results of Figure 2(a), which show that while the rounding error in computing $\|\delta\|$ becomes increasingly more pronounced as dimensionality is increased (gap between strong and weak threat model), the attackability under the strong threat model (ignoring this error) steadily decreases.
|
GETTING A-ROUND GUARANTEES: FLOATING-POINT ATTACKS ON CERTIFIED ROBUSTNESS
Anonymous authors
Paper under double-blind review
ABSTRACT
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations. Certified robustness has been proposed as a mitigation where given an input \( x \), a classifier returns a prediction and a certified radius \( R \) with a provable guarantee that any perturbation to \( x \) with \( R \)-bounded norm will not alter the classifier’s prediction. In this work, we show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors. We design a rounding search method that can efficiently exploit this vulnerability to find adversarial examples against state-of-the-art certifications in two threat models, that differ in how the norm of the perturbation is computed. We show that the attack can be carried out against linear classifiers that have exact certifiable guarantees and against neural networks that have conservative certifications. In the weak threat model, our experiments demonstrate attack success rates over 50% on random linear classifiers, up to 23% on the MNIST dataset for linear SVM, and up to 15% for a neural network. In the strong threat model, the success rates are lower but positive. The floating-point errors exploited by our attacks can range from small to large (e.g., \( 10^{-13} \) to \( 10^3 \)) — showing that even negligible errors can be systematically exploited to invalidate guarantees provided by certified robustness. Finally, we propose a formal mitigation approach based on bounded interval arithmetic, encouraging future implementations of robustness certificates to account for limitations of modern computing architecture to provide sound certifiable guarantees.
1 INTRODUCTION
Robustness of modern image classifiers has come under scrutiny due to a plethora of results demonstrating adversarial examples—small perturbations to benign inputs that cause models to mispredict, even when such perturbations are not evident to the human eye [Madry et al., 2018; Carlini & Wagner, 2017; Szegedy et al., 2014; Goodfellow et al., 2015]. If a learned model is used in critical applications such as self-driving cars, clinical settings or malware detection, such easily added perturbations can have severe consequences. As a result, research focus has shifted to training models robust to adversarial perturbations, that come endowed with certified robustness.
Mechanisms for providing robustness certification aim to bound a model \( f \)'s sensitivity to a certain level of perturbation. At a high level, such mechanisms return a radius \( R \) around a test input \( x \) with a guarantee that for any \( x' \) within \( R \) distance from \( x \), \( f(x) = f(x') \). How \( R \) is computed, whether it is sound and/or complete depends on the mechanism. For example, bound propagation [Zhang et al., 2018; Wang et al., 2021] transfers the upper and lower bounds from the output layer to the input layer of a neural network, and gives a lower bound on the perturbation needed to flip the classification.
Given the extensive research on certified robustness, can such mechanisms protect against adversarial examples in practice? In this paper, we show that the limits posed by floating-point arithmetic invalidate guarantees of several prominent mechanisms and their implementations. Despite proofs of robustness guarantees, they all assume real numbers can be represented exactly. Unfortunately, this critical (implicit) assumption cannot hold on computers with finite number representations. Since floating-point (FP) numbers can represent only a subset of real values, rounding is likely to occur when computing robust guarantees and can cause overestimation of the certified radius \( R \). Thus, adversarial examples may exist within the computed radius despite claims of certification.
We devise a rounding search method that can efficiently discover such adversarial examples in two threat models, that differ in how the norm of the perturbation is computed. Our method is inspired by the traditional adversarial example search methods such as PGD [Madry et al., 2018] and C&W [Carlini & Wagner, 2017]. However, we find that such existing methods do not effectively exploit the rounding of a certified radius as the search space they explore is large (i.e., the number of examples to check becomes intractable due to the large number of floating-point values) and instances of inappropriate rounding do not necessarily follow model gradients. To this end, our method is different from these search methods in two aspects: (1) instead of relying on back propagation, it leverages the piecewise linear property of ReLU networks to find coarse-level perturbation directions; (2) it then searches in a much finer scale by sampling floating-point neighbors of a potential adversarial example. The first aspect allows us to narrow down the search space closer to the certified radius and efficiently find adversarial examples. The second aspect enables our search method to find adversarial examples with perturbation norms that are just smaller than the certified radius (e.g., in the 13th decimal place), which PGD and C&W cannot find. Compared to other works that find robustness violations [Jia & Rinard, 2021; Zombor et al., 2020], our attack method is arguably stronger as it works on unmodified target models with unaltered instances as opposed to specially crafted models or instances. We discuss the potential impact of our attacks on robustness guarantees in Appendix E.
One’s first intuition to mitigate the overestimation of certified radii exploited by the above attacks might be to adopt slightly more conservative radii (e.g., using $\tilde{R} - \gamma$ for some positive constant $\gamma \ll 1$). Unfortunately, such radii are not in general sound and choosing $\gamma$ is inherently error prone. That is, we show that the amount of overestimation can depend on the data (e.g., number of features) and model (e.g., number of operations) and that attacking $\tilde{R} - 0.1$ is still possible. To this end, we propose a defense based on rounded interval arithmetic that has theoretical guarantees and can be easily integrated into mechanisms for computing certified radii. In summary our contributions are:
- We explore a class of attacks that invalidate the implementations of certified robustness (i.e., find adversarial examples within the certified radius). Our attacks exploit rounding errors due to limited floating-point representation of real numbers.
- We devise a rounding search method that systematically discovers such adversarial examples under two threat models. The weak model assumes that attacks need only have floating-point norms that violate certifications (e.g., in the case where the norm is computed using common software libraries). The strong model makes no such assumption: the true (real-valued) norm of attacks must violate certifications (e.g., in the case where the library that computes the square root for the norm can represent a real value or its range).
- We show that our attacks work against exact certifications of linear models [Cohen et al., 2019], and against a conservative certified radius returned by a prominent neural network verifier [Wang et al., 2021] on a network. Our attack success rate differs between learners and threat models. In the weak threat model, our success rates are over 50% for random linear classifiers and 15% on an MNIST neural network model. In the strong threat model, the attack success rates are lower but are still non-zero. For all cases, in theory, the certification should guarantee a 0% success rate for such attacks within certified radii.
- We propose a defense based on rounded interval arithmetic, with strong theoretical and empirical support for mitigating rounding search attacks.
## 2 BACKGROUND AND PRELIMINARIES
Let input instance $x = (x_1, x_2, \ldots, x_D)$ be a vector in $\mathbb{R}^D$ with $x_i$ denoting the $i$th component of $x$. We consider classifiers $f$ mapping an instance in $\mathbb{R}^D$ to a binary class label in $\{-1, 1\}$ or to a $K$-class label in $[K] = \{1, \ldots, K\}$.
**Adversarial examples.** Given an input instance $x$, a classifier $f$, and a target label $t \neq f(x)$, $x'$ is a targeted adversarial example [Szegedy et al., 2014] if $f(x') = t$ where $x'$ is reachable from $x$ according to some chosen threat model. In the vision domain, it is common to assume that small $\ell_p$ perturbations to $x$ will go unnoticed by human observers. In this paper we consider $\ell_2$ distance, i.e., $\|x - x'\| \leq \Delta$ for some small perturbation limit $\Delta$. An adversarial example in the multi-class setting is untargeted if $t$ is not specified.
Floating-point representation. Floating-point values represent reals using three binary numbers: a sign bit \( b \), an exponent \( e \), and a significand \( d_1 d_2 \ldots d_p \). For example, 64-bit (double precision) floating-point numbers allocate 1 bit for \( b \), 11 bits for \( e \), and \( p = 52 \) bits for the significand. Such a floating-point number is defined to be \((-1)^b \times (1.d_1 d_2 \ldots d_p)_2 \times 2^{e-1023}\). Floating points can represent only a finite number of real values. Hence, computations involving floating-point numbers often need to be rounded up or down to their nearest floating-point representation [IEEE].
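The snippet below (our illustration; Python 3.9+ for `math.nextafter`) shows both effects: the result of an elementary operation is rounded to the nearest representable value, and the representable values themselves are unevenly spaced.

```python
import math

print(0.1 + 0.2 == 0.3)             # False: both sides are rounded values
print(0.1 + 0.2)                    # 0.30000000000000004

# Adjacent representable doubles around 1.0 (spacing changes at powers of 2):
print(math.nextafter(1.0, 0.0))     # 0.9999999999999999  = 1 - 2**-53
print(math.nextafter(1.0, 2.0))     # 1.0000000000000002  = 1 + 2**-52
```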
2.1 Certified robustness
A robustness certification for a classifier at input \( x \) is a neighborhood (typically an \( \ell_2 \) ball) of \( x \) on which classifier predictions are constant. Certifications aim to guarantee that no perturbed adversarial examples exist in this neighborhood, including “slightly” perturbed instances.
**Definition 1** A pointwise robustness certification for a \( K \)-class classifier \( f \) at input \( x \in \mathbb{R}^D \) is a real radius \( R > 0 \) that is sound and (optionally) complete:
(i) [sound] \( \forall x' \in \mathbb{R}^D, \|x' - x\| \leq R \Rightarrow f(x') = f(x) \).
(ii) [complete] \( \forall R' > R, \exists x' \in \mathbb{R}^D, \|x' - x\| \leq R' \land f(x') \neq f(x) \).
For a given certification mechanism, we will distinguish the idealized certification radius \( R \) (i.e., the mapping of Definition 1 under the soundness condition) from a candidate radius \( \tilde{R} \) that an implementation of this mechanism computes. As we will see, the latter is not necessarily sound (or complete). We categorize certification mechanisms into three kinds depending on their claims.
Exact certification mechanisms. These mechanisms output sound and complete radii under ideal realization of \( \mathbb{R} \) arithmetic. Binary linear classifiers \( f(x) = \text{sign}(w^T x + b) \) admit a certified radius \( R = |w^T x + b|/\|w\| \). Cohen et al. derive this radius and prove its soundness [Cohen et al., 2019, Proposition 4] and completeness [Cohen et al., 2019, Proposition 5] for real arithmetic.
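A naive implementation makes the rounding problem concrete: every dot product, square root, and division below may round, and 32-bit and 64-bit arithmetic already disagree on the same inputs. This is our own illustrative snippet, not the authors' code.

```python
import numpy as np

def certified_radius_linear(w, x, b):
    """R = |w^T x + b| / ||w||, computed naively; the dot products, square
    root, and division can all round, so the result may overestimate R."""
    return abs(w @ x + b) / np.linalg.norm(w)

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, 100)
x = rng.uniform(-1.0, 1.0, 100)
r64 = certified_radius_linear(w, x, 0.5)
r32 = certified_radius_linear(w.astype(np.float32), x.astype(np.float32),
                              np.float32(0.5))
print(r64, float(r32), abs(r64 - float(r32)))   # the two precisions disagree
```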
Conservative certification mechanisms. These are mechanisms that output radii that are sound and not necessarily complete under real-valued arithmetic. Bound propagation aims to provide a certified lower bound of minimum distortion [Zhang et al., 2018; Wang et al., 2021; Wong & Kolter, 2018; Wang et al., 2018] to an input that would cause a label change.
Approximate mechanisms. Approximate certifications output random radii that under \( \mathbb{R} \) are sound (or abstain), with high probability \( 1 - \alpha \), and that are not necessarily complete. Randomized smoothing [Cohen et al., 2019] is an example of this approach.
3 ROUNDING SEARCH ATTACK
We now present a rounding search method that exploits floating-point rounding errors to find adversarial examples within a computed certified radius \( R \).
Threat model. Like prior works on adversarial examples [Carlini & Wagner, 2017; Madry et al., 2018; Jia & Rinard, 2021], we assume that the adversary has white-box access to a classifier \( f \), and has white-box access to the certification mechanism that it can query with inputs \( f \) and instance \( x \in \mathbb{R}^D \), and obtain a certified radius \( \hat{R} \) as an output.
Since there are floating-point rounding errors in the operations for computing a certification, the computed radius \( \hat{R} \) at an instance \( x \) could overestimate an intended sound (and possibly complete) radius \( R < \hat{R} \). This creates leeway for an adversary to find adversarial perturbations whose norms are less than or equal to the computed certified radius, but which can change the classifications of the model, invalidating soundness of the computed certification. Our work aims to find a systematic and efficient way to exploit these rounding errors.
A perturbation’s norm \( \|\delta\| = \|x' - x\| \) must be estimated when evaluating the perturbation’s success. This norm computation can also suffer floating-point rounding errors, and could be underestimated. To handle this possibility, we conduct attacks in two threat models, one weak and one strong.
The weak model makes minimal assumptions on how certification is violated: an attack is ruled successful if the floating-point computation of \( \|\delta\| \) is smaller than or equal to \( \tilde{R} \) (i.e., \( \|\delta\| \leq \tilde{R} \)). We note that this model represents settings where the norm is computed using common software libraries for computing operations on floating numbers (e.g., Numpy’s 32-bit or 64-bit floating-point arithmetic).
The strong model does not make these assumptions: the true (real-valued) norm of attacks must violate certifications. This model considers a setting where the norm is computed using software packages that, instead of returning a potentially rounded result, can return a representation of a real-valued norm or its range. Since we cannot do real arithmetic on machines, we use the upper bound \( \overline{\|\delta\|} \) of the norm instead, which is computed with bounded interval arithmetic and is guaranteed to be greater than or equal to the true norm. That is, a successful attack satisfies \( \overline{\|\delta\|} \leq \tilde{R} \).
**Attack overview.** Consider a classifier \( f \), input \( x \) and the corresponding computed radius \( \tilde{R} \). A naive way to search for an adversarial example would be to try all \( x' \) such that \( \|x - x'\| \leq \tilde{R} \), checking whether \( f(x) \neq f(x') \). Unfortunately this exhaustive search is computationally intractable (e.g., there are \( \approx 2^{17} \) floating points in a small interval such as \([10, 10 + 2^{-32}]\)). We can avoid some futile search. For example, observe that instances in the gray area, as depicted in Figure 1, are unlikely to flip predictions, as they are in the opposite direction of the decision boundary. A key idea is to find a perturbation direction \( \nu \) that reaches the decision boundary in the shortest distance, and add a perturbation \( \delta \) in that direction to \( x \), to maximize our chance to flip the classifier’s prediction with perturbation norm \( \|\delta\| \) (or \( \|\delta'\| \)) less than or equal to \( \tilde{R} \). This baseline method has several challenges. First, computation of perturbation direction \( \nu \) is not easy for NNs which do not typically have linear decision boundaries. To this end, for ReLU networks, we find a local linear approximation prior to computing the gradient for \( \nu \). Second, while \( \nu \) guides a search towards the decision boundary, the search may still be unable to exploit the leeway between the real certified radius \( R \) and the computed certified radius \( \tilde{R} \) to find certification violations. We address this challenge with a tightly-confined randomized floating-point neighborhood search. In summary, our attack proceeds as follows (depicted in Figure 1).
1. Find an adversarial perturbation direction \( \nu \) that reaches the decision boundary of classifier \( f \) in the shortest distance, as a form of PGD attack Madry et al. (2018) (Section 3.1).
2. Compute perturbation \( \delta \) in the direction \( \nu \) within the computed certified radius \( \tilde{R} \):
\[
\delta = \tilde{R} \nu / \|\nu\| \tag{1}
\]
3. Search for multiple floating-point neighbors \( \delta' \) of \( \delta \) with \( \|\delta'\| \leq \tilde{R} \) (or \( \overline{\|\delta'\|} \leq \tilde{R} \) in the strong threat model), and evaluate whether any \( x + \delta' \) flips the classifier’s prediction (Section 3.2).
### 3.1 Adversarial Perturbation Direction
For linear models, direction \( \nu \) is a normal to the decision boundary’s hyperplane \( w^T x + b = 0 \) and equals \( w \). The perturbation direction for neural networks is not as obvious as it is for linear models, as the decision boundary can be highly non-linear. In the rest of this section we describe our approach for finding \( \nu \) for neural networks with ReLU activations that we show to be effective in our experiments. A neural network with ReLUs can be represented as
\[
F(x) = (F_n \circ F_{n-1} \circ \cdots \circ F_1)(x)
\]
where \( F_i(x) = \text{ReLU}(\theta_i^T x + \hat{\theta}_i) \). Here \( x \) and \( \hat{\theta}_i \) are vectors, \( \theta_i \) is a matrix, and the rectified linear (ReLU) activation function acts pointwise on a vector, returning a vector.
We use the fact that such networks are piecewise linear: therefore a (local) linear approximation at instance \( x \) is in fact exact. Then, one can find an adversarial example for \( x \) against this linear model as described above and use it to attack the original ReLU network.
**Warmup.** As a warmup, let us consider a network where ReLUs are all activated. For each node \( \text{ReLU}(z) = \max\{0, z\} = z \), and so the network is a combination of \( K \) linear models where \( K \) is the number of classes. That is,
\[
F(x) = \theta^T x + \hat{\theta},
\]
where \( \theta^T = \theta_n^T \cdots \theta_1^T \), and \( \hat{\theta} = \sum_{i=1}^{n} (\prod_{j=i+1}^{n} \theta_j^T) \hat{\theta}_i \). Note that \( \theta^T \) is a \( K \times D \) matrix and \( \hat{\theta} \) is a column vector of length \( K \). Each class \( k \) corresponds to the linear model
\[
F_k(x) = w_k^T x + b_k,
\]
where \( w_k^T = (\theta^T)_k \), is the \( k \)th row of \( \theta^T \) and \( b_k = \hat{\theta}_k \).
In order to change this model’s classification from the original class \( l \) to the target class \( t \neq l \), we observe that one can attack the following model:
\[
L(x) = F_t(x) - F_l(x) = (w_t^T - w_l^T)x + b_t - b_l.
\]
This is a linear model, and \( L(x) < 0 \) when \( F(x) \) classifies \( x \) as \( l \), \( L(x) > 0 \) when \( F(x) \) classifies \( x \) as \( t \), so \( L(x) \) has the decision boundary hyperplane \( L(x) = 0 \). Hence, the most effective perturbation direction to change classification of \( F(x) \) from \( l \) to \( t \), as before for linear models, is \( \nu = w_t^T - w_l^T \), which is the gradient of \( L(x) \) with respect to \( x \).
**Linear approximation of ReLU networks.** ReLUs will all be activated when the weights and biases of each hidden layer of the network are positive, and all values of the input are also positive (e.g., an image, whose pixel value is usually in the range \([0, 1]\)). However, in practice this usually is not the case and some ReLUs will not be activated. For inactive ReLUs, we modify outgoing weights to zero in the calculation of the perturbation direction \( \nu \).
The overall process, LinApproxPerturbDir, is described in Algorithm 1 of Appendix A. It proceeds by first finding an exact (local) linear approximation \( F'(x) = \tau^T x + \tilde{\tau} \), where \( \tau^T = \tau_n^T \cdots \tau_1^T \) and \( \tilde{\tau} = \sum_{i=1}^{n} (\prod_{j=i+1}^{n} \tau_j^T) \hat{\theta}_i \), using the notation in the pseudo-code. The weights of \( F' \) are equal to the weights of \( F \) for internal nodes where \( F(x) \) activated the corresponding ReLUs, and are otherwise set to 0. Specifically, \( \tau_i^T \) is obtained by zeroing out columns of the matrix \( \theta_i^T \) whose corresponding elements of the mask \( m_i \) are zero. Given these weights, LinApproxPerturbDir computes \( \nu \) as explained in the warmup. This direction corresponds to the gradient of the network’s target-minus-current class score with respect to the instance \( x \).
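The NumPy sketch below reconstructs the core of LinApproxPerturbDir from this description; since Algorithm 1 itself is in an appendix not shown here, the function signature and the treatment of the final (purely linear) output layer are our assumptions.

```python
import numpy as np

def lin_approx_perturb_dir(hidden, w_out, x, target, label):
    """Collapse a ReLU network into its exact local linear model at x by
    zeroing the weights of inactive units, then return nu = w_t - w_l.
    `hidden` is a list of (theta_i, bias_i) pairs; `w_out` maps the last
    hidden activation to class scores (treated here as purely linear)."""
    tau = np.eye(x.shape[0])
    h = x
    for theta, bias in hidden:
        z = theta.T @ h + bias
        mask = z > 0                               # active ReLUs
        h = np.where(mask, z, 0.0)
        tau = (theta.T * mask[:, None]) @ tau      # diag(mask) @ theta^T @ tau
    tau = w_out.T @ tau                            # K x D effective weights
    return tau[target] - tau[label]                # nu = w_t^T - w_l^T
```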
**Projected gradient descent for ReLU networks.** Given \( \nu \) as output by Algorithm 1 of Appendix A and a computed certified radius, one could compute an adversarial perturbation \( \delta \) in direction \( \nu \) close to the certified radius as in Equation 1. However, the resulting \( x' = x + \delta \) may activate different ReLUs of \( F \) than \( x \). Hence, the linear approximation \( F' \) at \( x' \) may differ from \( F' \) at \( x \): these approximations are only exact in local neighborhoods. To this end, we perform a search by iteratively updating \( x' \) and invoking LinApproxPerturbDir until an adversarial example within the input domain \([V_{\min}, V_{\max}]\) is found or the procedure times out. Algorithm 2 of Appendix A describes this procedure, which we refer to as ReluPGD. The algorithm iteratively performs the following: it computes the gradient of the network’s linearization at the current iterate, rescales it to the step size \( s \), clips the perturbation to the domain constraint, and applies the perturbation; a condensed sketch follows.
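A condensed sketch of ReluPGD under the same caveats; `direction` is assumed to wrap the linear-approximation routine above for the current iterate.

```python
import numpy as np

def relu_pgd(predict, direction, x, target, step=1e-5,
             v_min=0.0, v_max=1.0, max_iters=1_000_000):
    """Iteratively re-linearize, step along nu, clip to the input domain,
    and stop once the prediction flips."""
    x_adv = x.copy()
    for _ in range(max_iters):
        if predict(x_adv) == target:
            return x_adv
        nu = direction(x_adv)                      # re-linearize at x_adv
        x_adv = np.clip(x_adv + step * nu / np.linalg.norm(nu), v_min, v_max)
    return None                                    # timed out
```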
**Remark 1** Note that \( \tilde{R} \) may not be given, as is the case for some network verifiers that, instead of returning \( \tilde{R} \), take \( F \), \( x \), and some \( R \) as input and either certify \( R \) or not. In this case, we need to search for the smallest perturbation in the direction of \( \nu \) to find such an \( R \) to attack. Hence, in Algorithm 2 of Appendix A we use \( s \) as an input, which is set to a small initial value (e.g., \( 10^{-5} \) in our experiments) so that \( \nu \) can be updated frequently. If \( \tilde{R} \) is given, we can set it as a threshold value to stop the algorithm; that is, the algorithm stops when the total perturbation norm reaches \( \tilde{R} \).
3.2 ROUNDING SEARCH
Given the direction \( \nu \) and the computed certified radius \( \tilde{R} \), an adversarial perturbation \( \delta \) can be computed using Equation 1, and \( x' = x + \delta \) should give an adversarial example so that \( F(x) \neq F(x') \).
If the accumulated rounding errors are large, \( \delta \) alone can be sufficient to conduct a successful attack (e.g., for neural networks with many neurons). For other attacks, the rounding errors we exploit are much smaller, such as for linear models with fewer operations. Hence, we create \( N \) floating-point neighbors of \( \delta \) to explore more possible robustness violations close to the decision boundary due to rounding errors. At a high level, each neighbor \( \delta' \) is constructed by using \( \delta \) as a seed and then, for each dimension, replacing the original value with a neighboring floating point that is either larger or smaller than it. For example, a neighbor of \([1.0, 1.0]\) can be \([0.9999999999999999, 1.0000000000000002]\).
We provide the pseudo-code of the neighbor sampling procedure in Algorithm 3 of Appendix A. We call this algorithm Neighbor. The result is a set of \( N \) neighboring perturbations. Then, for each neighbor \( \delta' \), we test whether \( x + \delta' \) leads to an adversarial example (i.e., flips the classifier’s prediction) that is certified (i.e., \( \|\delta'\| \leq \tilde{R} \) in the weak threat model, or \( \overline{\|\delta'\|} \leq \tilde{R} \) in the strong threat model).
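A plausible reconstruction of this step (the actual Algorithm 3 is in an appendix not shown here); `np.nextafter` returns the adjacent floating-point value in a given direction.

```python
import numpy as np

def sample_neighbors(delta, n, rng=None):
    """For each of n samples, replace every coordinate of delta with an
    adjacent floating-point value, randomly the next float up or down."""
    rng = rng or np.random.default_rng()
    up, down = np.nextafter(delta, np.inf), np.nextafter(delta, -np.inf)
    return [np.where(rng.random(delta.shape) < 0.5, up, down)
            for _ in range(n)]

def rounding_search(predict, x, delta, r_tilde, n=5000):
    """Return an adversarial example within the computed radius, if any
    neighbor both flips the prediction and keeps its norm <= r_tilde."""
    y = predict(x)
    for d in sample_neighbors(delta, n):
        if np.linalg.norm(d) <= r_tilde and predict(x + d) != y:
            return x + d
    return None
```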
4 ATTACK EXPERIMENTS
In this section we evaluate whether our rounding search attacks can find adversarial examples within a certified radius. We first consider linear classifiers and then neural networks. We evaluate certified radii obtained using the exact method for linear classifiers, and conservative (Wang et al., 2021; Gurobi Optimization, LLC, 2022) and approximate (Cohen et al., 2019) certification mechanisms for neural networks. Since the exact certification for a linear classifier is \( R = |w^T x + b| / \| w \| \), we compute it ourselves using either 32-bit or 64-bit floating-point arithmetic in Numpy.
For linear classifiers, we will conduct attacks in both the weak and strong threat models. For neural networks, we will conduct attacks in the weak threat model. We show that our rounding search finds adversarial examples within certified radii for all of them. Our linear models are run on an Intel Xeon Platinum 8180M CPU, while our neural network models are run on a Tesla V100 16GB GPU.
Baseline attack rates. The baseline success rate for finding an adversarial example against a linear model within the radius defined in Section 2.1 should be 0% in both threat models, since the mechanism is exact: it claims to be both sound and complete. The baseline success rate for radii returned by conservative mechanisms should also be 0% since they too are claimed to be sound. Though randomized smoothing comes with a failure probability \( \alpha \ll 1 \) to account for sampling error in approximating a smoothed classifier, it does not explicitly take into account errors due to rounding.
Model training. We train (primal) linear SVM with sequential minimal optimization, by using corresponding modules of scikit-learn. Our linear classifiers are trained with \( \ell_1 \) regularization so that model weights are sparse, and perturbations are less likely to move images outside their legal domain (recall that the perturbation direction for a linear classifier is its weights \( \nu = w \)). All ReLU networks in this section are trained with the SGD optimizer using PyTorch, with momentum 0.9, learning rate 0.01, batch size 64, for 15 epochs. For some controlled experiments we require the weights and biases of the hidden layers to be positive (to activate all ReLUs). In this case weights and biases are clamped with the lower bound 0 after each step of training.
4.1 RANDOM LINEAR CLASSIFIERS
To evaluate the performance of our attack in an ideal scenario, we first conduct our attack on randomly initialized (binary) linear classifiers with randomly generated target instances: \( f(x) = \text{sign}(w^T x + b) \), where the weights \( w_i \), \( \forall i \in [D] = \{1, \ldots, D\} \), and the bias \( b \) are random values drawn from the range \([-1, 1]\). Each value is represented with either 32-bit or 64-bit floating-point precision. For each dimension \( D \), we test 10,000 randomly initialized models. For each model we choose one instance \( x \) to attack, where each component \( x_i \) is drawn randomly from \([-1, 1]\). Hence, the attack success rate measures the number of models out of 10,000 for which a random instance results in a successful attack. For each combination of \((w, b, x)\), we sample and evaluate \( N = D^2 \) neighboring perturbations of \( \delta = \tilde{R} w / \| w \| \) using the Neighbor function (Algorithm 3 in Appendix A).
Figure 2: (a) Rounding search attack success rates against a random binary linear classifier in both weak (W) and strong (S) threat models (Section 4.1). For each dimension $D$, we report the percentage of 10,000 randomly initialized models for which we can successfully find an adversarial example within certified radius $\tilde{R}$ for a random instance $x$ drawn from $[-1, 1]^D$. Since the attacks are against an exact certified radius, the baseline attack rate should be 0% in both weak and strong threat models. Model weights $w$ and biases $b$ are randomly initialized with $w \in [-1, 1]^D$, $b \in [-1, 1]$. All values and computation use either 32-bit or 64-bit floating points. (b) Maximum rounding error in the calculation of the certified radius $\tilde{R}$ on a sample $x$ with each $x_i = 3.3 \times 10^9$, for the linear model with $w_i = 3.3 \times 10^{-9}$, $b = 3.3 \times 10^9$, where $i \in [1, D]$ and $D \in [20, 1000]$.
Results are shown in Figure 2(a). With higher dimension, our attack success rate first increases and then flattens around 50% in the weak threat model, and around 5% in the strong threat model (we investigate the flattening phenomenon in Appendix E). With higher dimension, more arithmetic operations are performed in computing $\|\delta'\|$ and $\tilde{R}$, which results in an accumulation of rounding errors. Figure 2(b) further shows this influence of $D$ on the rounding error, which can accumulate to the magnitude of $10^3$ with increasing $D$. In summary, with increasing rounding error, a greater leeway is left between the real certified radius $R$ and the computed certified radius $\tilde{R}$ for our method to exploit, so the attack success rate increases.
The success rates are lower in the strong threat model than in the weak threat model. This is expected, as the leeway (i.e., $\tilde{R} - \overline{\|\delta\|}$) exploited by our attack in the conservative strong model is likely much smaller than that (i.e., $\tilde{R} - \|\delta\|$) in the weak model.
4.2 Linear SVM
In this section, we evaluate our attack on a linear SVM trained on the MNIST dataset. MNIST [LeCun et al., 2010] contains images of hand-written digits where each image has 784 attributes and each attribute is a pixel intensity in the domain $[0, 255]$. We used $\approx 12,000$ images for training and $\approx 2,000$ images for validation and evaluation of our attacks, for each combination of labels $i, j \in \{0, \ldots, 9\}$. We trained 45 models in total, one for each combination of distinct labels $i, j \in \{0, \ldots, 9\}$ of the MNIST dataset. Validation accuracies range between 91% and 99% for linear SVM.
We then try to find an adversarial image with respect to each image in the test dataset. Our attack samples $N = 5,000$ neighbors of $\delta = \tilde{R}w/\|w\|$ using Algorithm 3 of Appendix A.
In the weak threat model, we observe non-zero attack success rates for 44/45 models (full results appear in Table 1 of Appendix B), and our attacks can have success rates up to 23.24%. In the strong threat model, we observe non-zero attack success rates for 11/45 models (full results appear in Table 2 of Appendix B), and our attacks can have success rates up to 0.16%. Recall that the baseline success rate should always be 0%. We demonstrate a weak model example of original and adversarial images together with their perturbation and certified radius information in Figure 3 of Appendix B.
4.3 Certification for neural nets
We now turn our attention to neural network verification mechanisms. In this section we consider neural networks with ReLU activations and rely on their linear approximations. Given a radius $\tilde{R}$, a neural network $F$, and an input $x$, these mechanisms either certify $\tilde{R}$ or not. Hence, in order to find a tight certified radius for a given model, one can perform a binary search, checking multiple radii and calling the verifiers multiple times. We avoid this binary search by first finding an adversarial example $x'$ via ReluPGD (Section 3.1 and Algorithm 2 of Appendix A) and then trying to verify the perturbation norms (i.e., $\|x' - x\|$) of those adversarial examples using the verifiers. We set ReluPGD to time out after 15 minutes.
Certification with $\beta$-CROWN. $\beta$-CROWN [Wang et al., 2021] guarantees sound but not complete robustness certification. That is, it provides a lower bound on the radius, and it is possible that a tighter radius exists. We use the $\beta$-CROWN verifier [Wang et al., 2021] in the $\ell_2$ metric to verify a 3-layer neural network binary classifier with 1 node in the hidden layer. All model weights and biases in the hidden layers of this classifier are trained to be positive, so the perturbation direction is always $\nu = w_t^T - w_l^T$. The classifier has validation accuracy 99.67%. We use ReluPGD, with step size $s = 1 \times 10^{-5}$, to incrementally add perturbation in direction $\nu$ to image $x$ until its prediction is flipped, and we get $x'$. Then we use $\beta$-CROWN to verify the image with respect to $\|\delta\| = \|x' - x\|$. If the verification succeeds, we have a successful attack. ReluPGD times out on 30 out of 2108 images. When the attack does not time out, it takes $\approx 30$ seconds. A call to the verifier takes $\approx 1$ second. We conduct our attack on all MNIST test images labeled 0 or 1. We find adversarial images for 2078 images, and $\beta$-CROWN erroneously verifies 53 of them. Our attack success rate is 2.6%.
Certification with MIP solver. We now consider another method that provides conservative verification via mixed-integer programming (MIP). We use the implementation from [Wang et al., 2021], which uses the Gurobi MIP solver [Gurobi Optimization, LLC, 2022] for neural network verification. We verify two 3-layer neural network multiclass classifiers with 100 nodes in their hidden layer. For the first classifier, the weights and biases in the hidden layers are all trained to be positive, so the perturbation direction is $\nu = w_t^T - w_l^T$. The second classifier is trained without constraints on its weights and represents a regular network without artefacts. The validation accuracies for the first and second classifiers are 84.14% and 96.73%, respectively.
We use ReluPGD to attack the two classifiers on all images of the MNIST test dataset, with step size $s = 1 \times 10^{-5}$. We found adversarial images against 8406 images for the first classifier, and adversarial images against 9671 images for the second classifier. ReluPGD times out on only 8 and 2 images for the first and second classifier, respectively. We then use MIP to verify each image with respect to their adversarial image’s perturbation norm $\|\delta\|$. Each attack takes $\approx 30$ seconds and each verification takes $\approx 10$ seconds. MIP successfully verified 5108 out of 8406 successfully attacked images for the first classifier, and verified 1531 out of 9671 successfully attacked images. That is, the attack success rate is 60.76% on an artificially trained network (where ReLUs are all activated) and 15.83% on the second classifier trained without artefacts.
Certification with randomized smoothing. We also attack approximate certification methods based on randomized smoothing [Cohen et al., 2019]. Recall that the guarantees of randomized smoothing are probabilistic with a failure probability $\alpha$. Nevertheless, we report a success rate of up to 21.11% for our rounding search attacks on MNIST with $\alpha = 0.1\%$. We refer readers to Appendix C for details.
5 Mitigation: Certification with rounded interval arithmetic
Our attack results demonstrate that floating-point rounding invalidates the soundness claims of a wide range of certification implementations for a variety of common models. How might such rounding errors in certification calculations be mitigated, for both of our threat models?
Rounding errors violating certifications are sometimes small. For example, the rounding error for the certified radius of the first MNIST image of Figure 3 (Appendix B) is in the 13th decimal place. One’s first intuition may be to adopt slightly more conservative radii (e.g., using $\tilde{R} - \gamma$ for some positive constant $\gamma \ll 1$). Unfortunately, such radii are not in general sound, and attacks against
$\tilde{R} - \gamma$ are still possible. For example, as we show in Section 4.1, it is easy to construct a linear classifier and find adversarial examples against it within $\tilde{R} - 0.1$.
We outline a mitigation applying rounded interval arithmetic (Higham, 2002) to certified robustness. Interval arithmetic replaces every numerical value with an interval. Interval operators exist for elementary arithmetic operations, serving as useful building blocks for more complex computations with bounded rounding errors (Definition 2 of Appendix D). We have re-framed existing results from numerical analysis in the language of sound floating-point computation (Lemma 1 of Appendix D).
**Theorem 1** Consider a classifier $f$, floating-point instance $x$, and a certification mechanism $R(f, x)$ that is sound when employing real arithmetic. If $R(f, \cdot)$ can be computed by a composition of real-valued operators $\psi_1, \ldots, \psi_L$ with sound floating-point extensions $\phi_1, \ldots, \phi_L$, then the following certification mechanism $\tilde{R}(f, x)$ is sound with floating-point arithmetic: run the compositions of $\phi_1, \ldots, \phi_L$ on (coordinate-wise) intervals $[x, x], [f(x), f(x)]$ to obtain $[\tilde{R}, \tilde{R}]$; return $\tilde{R}$.
The proof of Theorem 1 appears in Appendix D.1. We offer an example application of this mitigation theorem on linear classifiers. We use the PyInterval library (Taschini, 2008) that performs rounded interval arithmetic to compute sound $R$ for linear classifiers (Cohen et al., 2019). Our attack success rates for randomly initialized linear classifiers (Section 4.1) drop to 0% for all dimensions in both weak and strong threat models. In sum, our theoretical and empirical results provide support for mitigating attacks against exact robustness certifications (Cohen et al., 2019).
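To illustrate the mitigation without the PyInterval dependency, the sketch below implements a crude rounded interval arithmetic that widens every intermediate result by one ULP per side (PyInterval uses proper directed rounding instead) and returns the lower endpoint of the radius; it requires Python 3.9+ for `math.nextafter`.

```python
import math

def widen(lo, hi):
    """Outward-round by one ULP per side: a conservative software stand-in
    for the directed rounding modes used in rounded interval arithmetic."""
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def iadd(a, b):
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return widen(min(p), max(p))

def sound_radius(w, x, b):
    """Compute |w^T x + b| / ||w|| entirely on intervals and return the
    lower endpoint, which under-approximates the true radius."""
    num, den = (b, b), (0.0, 0.0)
    for wi, xi in zip(w, x):
        num = iadd(num, imul((wi, wi), (xi, xi)))   # accumulates w^T x + b
        den = iadd(den, imul((wi, wi), (wi, wi)))   # accumulates ||w||^2
    num_lo = min(abs(num[0]), abs(num[1])) if num[0] * num[1] > 0 else 0.0
    den_hi = math.nextafter(math.sqrt(den[1]), math.inf)  # upper bound on ||w||
    return math.nextafter(num_lo / den_hi, 0.0)           # round down
```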
6 RELATED WORK
Several works have explored the influence of floating-point representation on guarantees of verified neural networks. For example, verifiers designed for floating-point representation have been shown to not necessarily work for quantized neural networks (Giacobbe et al., 2020; Henzinger et al., 2021).
The closest to our work is the independent work by Jia & Rinard (2021), who also exploit rounding errors to discover violations of network robustness certifications. Our work differs from Jia & Rinard (2021) on the adversarial examples we find. As we show in Section 4.3, we are able to find an adversarial example $x'$ for unaltered natural image $x$ from test data, within that image $x$'s certified radius. The work by Jia and Rinard, instead, does not find certification-violating adversarial examples of test instances. It finds perturbed inputs $x'_0$ of synthetic inputs $x_0$, that violate certifications of $x_0$. In particular, they adjust brightness of a natural test image $x$ to produce a $x_0$. That is, their attack point $x'_0$ is outside the certified radius of $x$. Hence, our attack can be seen as a stronger attack that is possible due to a novel attack methodology based on accurate perturbation directions.
Research in the area of numerical analysis has proposed approaches to address the limitations of floating-point rounding, with a focus on measuring the stability of calculations. Proposed approaches include replacing floating-point arithmetic with interval arithmetic (Jaulin et al., 2001) or affine arithmetic (De Figueiredo & Stolfi, 2004). Both account for rounding errors and return an interval that contains the correct result. Our work is the first to suggest that modern systems implementing these approaches could be of use to certified robustness implementations. We adopt interval arithmetic with the implementation PyInterval (Taschini, 2008) in the calculation of robustness certification.
7 CONCLUSION
Certified robustness has been proposed as a defense against adversarial examples. In this work we have shown that guarantees of several certification mechanisms do not hold in practice since they rely on real numbers that are approximated on modern computers. Hence, computation on floating-point numbers—used to represent real numbers—can overestimate certification guarantees due to rounding. We propose and evaluate a rounding search method that finds adversarial inputs on linear classifiers and verified neural networks within their certified radii—violating their certification guarantees. We propose rounded interval arithmetic as the mitigation, by accounting for the rounding errors involved in the computation of certification guarantees. We conclude that if certified robustness is to be used for security-critical applications, their guarantees and implementations need to account for limitations of modern computing architecture.
REFERENCES
Lawrence D. Brown, T. Tony Cai, and Anirban DasGupta. Interval estimation for a binomial proportion. *Statistical science*, 16(2):101–133, 2001.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In *IEEE Symposium on Security and Privacy (S&P)*, pp. 39–57, 2017.
Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In *International Conference on Machine Learning (ICML)*, pp. 1310–1320. PMLR, 2019.
Luiz Henrique De Figueiredo and Jorge Stolfi. Affine arithmetic: concepts and applications. *Numerical Algorithms*, 37(1):147–158, 2004.
Mirco Giacobbe, Thomas A Henzinger, and Mathias Lechner. How many bits does it take to quantize your neural network? In *International Conference on Tools and Algorithms for the Construction and Analysis of Systems*, pp. 79–97. Springer, 2020.
Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In *International Conference on Learning Representations (ICLR)*, 2015. URL http://arxiv.org/abs/1412.6572.
Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2022. URL https://www.gurobi.com.
Thomas A Henzinger, Mathias Lechner, and Đorđe Žikelić. Scalable verification of quantized neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)*, volume 35, pp. 3787–3795, 2021.
Nicholas J. Higham. *Accuracy and stability of numerical algorithms*. Society for Industrial and Applied Mathematics (SIAM), 2002.
IEEE. IEEE standard for floating-point arithmetic. *IEEE Std 754-2019 (Revision of IEEE 754-2008)*, pp. 1–84, 2019. doi: 10.1109/IEEESTD.2019.8766229.
Luc Jaulin, Michel Kieffer, Olivier Didrit, and Eric Walter. Interval analysis. In *Applied Interval Analysis*, pp. 11–43. Springer, 2001.
Kai Jia and Martin Rinard. Exploiting verified neural networks via floating point numerical error. In *International Static Analysis Symposium*, pp. 191–205. Springer, 2021.
Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. MNIST handwritten digit database. *ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist*, 2, 2010.
Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In *IEEE Symposium on Security and Privacy (S&P)*, pp. 656–672, 2019.
Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Certified adversarial robustness with additive noise. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 32, 2019.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In *International Conference on Learning Representations (ICLR)*, 2018.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In *International Conference on Learning Representations (ICLR)*, 2014. URL http://arxiv.org/abs/1312.6199.
Stefano Taschini. PyInterval, interval arithmetic in Python, 2008. URL https://pypi.org/project/pyinterval/, version 1.2.0 released 2017-03-05.
|
F0XXA9OG13
|
Are there any overlaps of columns between the tabular data for the same tasks? Is it hard to do a simple concatenation? What are the traditional methods for dealing with missing columns? Are they applicable to this situation?
|
**MediTab**: Scaling Medical Tabular Data Predictors via Data Consolidation, Enrichment, and Refinement
Anonymous authors
Paper under double-blind review
**Abstract**
Tabular data prediction has been employed in medical applications such as patient health risk prediction. However, existing methods usually revolve around algorithm design while overlooking the significance of data engineering. Medical tabular datasets frequently exhibit significant heterogeneity across different sources, with limited sample sizes per source. As such, previous predictors are often trained on manually curated small datasets that struggle to generalize across different tabular datasets during inference. This paper proposes to scale medical tabular data predictors (MediTab) to various tabular inputs with varying features. The method uses a data engine that leverages large language models (LLMs) to consolidate tabular samples, overcoming the barrier between tables with distinct schemas. It also aligns out-domain data with the target task using a “learn, annotate, and refine” pipeline. The expanded training data then enables the pre-trained MediTab to infer for arbitrary tabular inputs in the domain without fine-tuning, resulting in significant improvements over supervised baselines: it reaches an average ranking of 1.57 and 1.00 on 7 patient outcome prediction datasets and 3 trial outcome prediction datasets, respectively. In addition, MediTab exhibits impressive zero-shot performance: it outperforms supervised XGBoost models by 8.9% and 17.2% on average in the two prediction tasks, respectively.
1 Introduction
Tabular data are structured as tables or spreadsheets in a relational database. Each row in the table represents a data sample, while columns represent feature variables of various types, including categorical, numerical, binary, and textual features. Most previous papers have focused on the model design of tabular predictors, mainly by (1) augmenting feature interactions via neural networks (Arik & Pfister, 2021), (2) improving tabular data representation learning by self-supervised pre-training (Yin et al., 2020; Yoon et al., 2020; Bahri et al., 2022), and (3) performing cross-tabular pre-training for transfer learning (Wang & Sun, 2022b; Zhu et al., 2023). Tabular data predictors have also been employed in medicine, for example for patient health risk prediction (Wang & Sun, 2022b), clinical trial outcome prediction (Fu et al., 2022), modeling Electronic Health Record (EHR) data for multitask learning (Hur et al., 2023), and unifying heterogeneous EHRs via text embeddings (Hur et al., 2022). Additionally, LLMs have been shown to be able to sample synthetic yet highly realistic tabular data (Borisov et al., 2022; Theodorou et al., 2023).
Despite these significant advances, it is worth noting that the data-centric approaches have received comparatively less attention in prior research. Some prominent examples lie in the detection and mitigation of label noise (Wang et al., 2020; Northcutt et al., 2021), but they only address a fraction of the challenges in medical tabular data prediction. As illustrated in Figure 1, there is typically substantial heterogeneity among different data sources in medical data, and within each data source, the available sample sizes are small. Harnessing multi-source data requires extensive manual effort in terms of data cleaning and formatting. As such, current medical tabular prediction methods are often built on small handcrafted datasets and, hence, do not generalize across tabular datasets.
In this paper, we embrace a data-centric perspective to enhance the scalability of predictive models tailored for medical tabular data. Our core aim revolves around training a single tabular data predictor to accommodate inputs with diverse feature sets. Technically, our framework, namely MediTab, encompasses three key components: data consolidation, enrichment, and refinement modules:
- **Data consolidation and enrichment** involves consolidating tabular samples with varying features and schemas using natural language descriptions. We also expand the training data by distilling knowledge from large language models and incorporating external tabular datasets.
- **Data refinement** rectifies errors and hallucinations introduced during the consolidation and enrichment stages. It also aligns a diverse set of tabular samples with the target task through a distantly supervised pipeline.
As illustrated in Figure 1, MediTab offers the following advantages:
- **Multi-task learning and prediction**: the model can learn from and make predictions for multiple medical tabular datasets without requiring modifications or retraining.
- **Few-shot and zero-shot learning**: the model can quickly adapt to new prediction tasks using only a small amount of training data or even make predictions for any new tabular input when no training data is available.
In Section 2, we provide a detailed description of our approach. We present the experimental results in Section 3, where we demonstrate the effectiveness of our method on 7 patient outcome prediction datasets and 3 trial outcome prediction datasets, achieving an average performance ranking of 1.57 and 1.00, respectively, across tabular prediction baselines. Furthermore, our method shows impressive few-shot and zero-shot performances that are competitive with supervised baselines: the zero-shot MediTab outperforms supervised XGBoost by 8.9% and 17.2% on average in two prediction tasks, respectively. We discuss related work in Section 4 and conclude our findings in Section 5.
## 2 Method
### 2.1 Problem Formulation
We characterize tabular prediction tasks by dataset $D$ and task $T$, where a task $T = \{D_1, D_2, \ldots\}$ consists of multiple in-domain datasets with varying features and schemas but the same target label. For example, the patient mortality prediction task contains samples from many clinical trials (where input features differ between trials). For $T_1$, the datasets from other tasks $T_2, T_3, \ldots$ are considered out-domain since they differ in prediction objectives. As illustrated by Figure 1, existing methods for tabular prediction fall short in transfer learning across datasets, as each model learns from a single dataset $D$ and needs to learn from scratch when encountering new datasets. On the contrary, MediTab extends the training data to all available tasks $\mathcal{T} = \{T_1, T_2, \ldots\}$, demonstrating its flexibility to encode and predict for arbitrary tabular samples. After training, it serves all $D \in T_1$ without further fine-tuning. Our method eliminates the need to keep as many models as datasets, paving the way for the efficient and streamlined deployment of tabular prediction models. Depending on the use case, the problems that our method can handle fall into the following categories.
**Problem 1 (Multi-task learning (MTL)).** MTL is a machine learning technique where a single model is trained to perform multiple tasks simultaneously. Define $f : X \to Y$ as a model that takes
Figure 2: The demonstration of scaling medical tabular data predictors models (MediTab). It encompasses three steps: **Step 1** consolidates tabular datasets using LLM; **Step 2** aligns out-domain datasets with the target task; **Step 3** facilitates the predictor with cleaned supplementary data. More details are presented in Section 2.2.
a consolidated tabular sample $x$ as input and predicts the target label $y$. The training dataset is formed by combining all the tabular inputs in $D_* \in T$. Once trained, the model $f$ is fixed and can be used to make predictions on any new sample $x \sim D$, $\forall D \in T$.
**Problem 2 (Zero-shot/Few-shot learning).** The model $f$ is trained on $T = \{D_1, \ldots, D_N\}$. Then, it makes predictions for a new dataset $D_{N+1}$ that has not been included in the training data. Model $f$ performs zero-shot learning if no label is available for all samples in $D_{N+1}$; Model $f$ performs few-shot learning to predict for $D_{N+1}$ if a few labeled samples are available.
### 2.2 The MediTab Framework
As illustrated in Figure 2, our method consists of:
**Step 1: Tabular Data Consolidation.** The tabular datasets $D$ differ in their features and schemas, and particularly in their target objectives if they come from distinct tasks $T$. The consolidation is accomplished by converting each row of the table into a natural language description that respects the data schema. This conversion transforms all tabular data into text data that share the same semantic space, enabling them to be utilized in language modeling. Additionally, we can produce diverse consolidated samples by describing one sample in multiple different ways, which allows for data augmentation. To prevent hallucinations that may occur during this transformation, an audit module that utilizes LLMs performs self-checks and self-corrections. Note that while the target of patient survival classification is the same for each dataset, we use a diverse collection of datasets, so the prediction problems are indeed distinct.
**Step 2: Learn, Annotate, & Audit.** Our method can benefit from out-of-domain datasets $T_* \in \mathcal{T}$ through our annotation and data-importance pipeline. Once the model is trained on $T_1$, it is used to produce pseudo labels for samples from all other tasks, which yields a large but noisy supplementary dataset $\hat{T}_{1,\text{sup}}$. This dataset is further cleaned by a data audit module based on data Shapley scores, leading to a smaller but cleaner dataset $T_{1,\text{sup}}$.
**Step 3: Learning & Deployment.** The final prediction model learns from the combination of the original task 1 data $T_1$ and the supplementary data $T_{1,\text{sup}}$. The resulting multi-task model $f_{\text{MTL}}$ can be used for all datasets $D_* \in T_1$ without any modifications. Furthermore, the model can predict for new datasets $D \in \mathcal{T}$ in a zero-shot manner and perform few-shot learning for any $D \in \mathcal{T}$ or $D \notin \mathcal{T}$. (Note that the primary purpose of the pseudo labels is to facilitate training the zero-shot and few-shot models; they are not meant to improve the performance of the original model.)
We will elaborate on the technical details of these steps in the following sections.
2.3 Tabular Data Consolidation & Sanity Check
The primary challenge in scaling medical tabular data predictors is the scarcity of large datasets with standardized schemas, as reflected in the sample data below.
| age | gender | height | weight | ... | mortality |
|-----|--------|--------|--------|-----|-----------|
| 18  | 1      | 1.7    | 60     | ... | 0         |

| demo1 | demo2 | demo3 | ae1 | ae2 | ... | target |
|-------|-------|-------|-----|-----|-----|--------|
| 25    | 160   | 0     | 0   | 1   | ... | 14     |
Existing tabular prediction models often struggle on these datasets due to the vague semantic meanings of the varying columns and values. Our approach is to transform each row into natural language sentences that describe the sample using generative language models like GPT-3.5. Specifically, we combine the linearization, prompting, and sanity check steps to obtain input data that the language model can use to generate coherent and meaningful natural language descriptions.
Linearization. A function \( \text{linearize}(c, v) \) takes the column names \( c \) and the corresponding cell values \( v \) from a row as input, then linearizes the row into a concatenation of paired columns and cells \( \{c : v\} \). Notably, we identify sparse tabular datasets that have many binary columns. In linearization, we exclude binary columns that have a cell value of \( \text{False} \) and only include those with positive values. This approach has two key benefits. First, it ensures that the linearized output does not exceed the input limit of language models. Second, it helps reduce hallucinations arising from failed negation detection during the generation process of the LLMs. An ablation on the different types of tabular-to-text serialization is shown in Table 10 in the Appendix, showing that audited examples improve performance via augmentation. We believe that this performance benefit is useful and justifies our usage of more advanced paraphrasing and auditing techniques.
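To make the described rule concrete, here is a minimal sketch of the linearization; the function body is our illustration, not the authors' released code.

```python
# A minimal sketch of linearize(c, v): concatenate {column: value} pairs,
# dropping binary columns whose value is False, as described above.
def linearize(columns, values):
    parts = []
    for c, v in zip(columns, values):
        if isinstance(v, bool) and not v:
            continue  # skip negative binary cells: shorter input, fewer negation errors
        parts.append(f"{c}: {v}")
    return ", ".join(parts)

# e.g., linearize(["demo1", "demo2", "ae1", "ae2"], [25, 160, False, True])
# -> "demo1: 25, demo2: 160, ae2: True"
```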
Prompting. We combine the linearization with prefix \( p \) and suffix \( s \) to form the LLM prompt as \( (p, \text{linearize}(c, v), s) \). The schema definition is added to \( p \) to provide the context for LLM when describing the sample. For each column, we provide the type and explanation as \( \{\text{column}\}(\{\text{type}\}) : \{\text{explanation}\} \) (e.g., “demo1(numerical): the age of the patient in years.”). The suffix \( s \) represents the instruction that steers LLM to describe the target sample or generate paraphrases for data augmentation. We describe the specific prompt templates we used in Appendix C.1 and display some consolidated examples in Appendix C.2.
The text descriptions \( x \) are hence sampled from LLMs by
\[
x \sim \text{LLM}(p, \text{linearize}(c, v), s). \tag{1}
\]
The paired inputs and target \( \{x, y\} \) will be the training data for the tabular prediction model \( f \).
We can adjust the suffix \( s \) in Eq. 1 to generate multiple paraphrases of the same sample as a form of instance-level data augmentation. Some instance-level augmentation examples are available in Appendix C.3.
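Putting the pieces together, a sketch of the sampling in Eq. 1 could look as follows. The schema text, instruction wording, and model name are illustrative assumptions, and the snippet uses OpenAI's current Python client rather than the 2023-era API the paper accessed.

```python
# A sketch of sampling descriptions x ~ LLM(p, linearize(c, v), s) (Eq. 1).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

schema = "demo1 (numerical): the age of the patient in years. ..."  # illustrative
prefix = f"Below is a patient record. Schema:\n{schema}"
describe = "Describe this patient in natural language."
paraphrase = "Paraphrase the description in a different way."  # augmentation suffix

def sample_description(linearized_row, suffix=describe):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"{prefix}\n{linearized_row}\n{suffix}"}],
    )
    return resp.choices[0].message.content
```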
Sanity Check via LLM’s Reflection. To reduce low-quality generated samples, a sanity check function evaluates the fidelity of the generated text \( x \) to address potential hallucinations or loss of information that occurs during the translation process \( \{c, v\} \rightarrow x \) in Eq. 1, particularly for numerical features. Specifically, we query LLM with the input template “What is the \( \{\text{column}\} \)? \( \{x\} \)” to check if the answer matches the original values in \( \{c, v\} \). The descriptions are corrected by re-prompting the LLM if the answers do not match. We provide some examples of sanity checks in Appendix C.4, and the quantitative analysis of this correction is available in Appendix C.5.
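The reflection step can be sketched with the UnifiedQA-v2 checkpoint footnoted in the implementation details; the substring-match heuristic below is our simplification of the answer comparison.

```python
# A sketch of the sanity check: ask "What is the {column}?" against the
# generated description and compare the answer with the source cell value.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "allenai/unifiedqa-v2-t5-large-1363200"
tok = AutoTokenizer.from_pretrained(name)
qa = AutoModelForSeq2SeqLM.from_pretrained(name)

def value_is_preserved(description, column, original_value):
    ids = tok(f"what is the {column}? \n {description}",
              return_tensors="pt").input_ids
    answer = tok.decode(qa.generate(ids, max_new_tokens=16)[0],
                        skip_special_tokens=True)
    # Mismatches trigger re-prompting of the LLM to correct the description.
    return str(original_value).lower() in answer.lower()
```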
2.4 Data Enrichment & Refinement
Through the consolidation and sanity-check process, we are able to aggregate all tabular samples \( \{x, y\} \) from the target task \( T_1 \) and train a prediction model. We can use the dataset \( T_1 \) to train a multi-task learning model, denoted as \( f_{\text{MTL}} \), which can be applied to all datasets within the task. Nevertheless, there is still no route to leverage data from out-domain tasks \( T_* \in \mathcal{T} \setminus \{T_1\} \) for data enrichment. Such enrichment is particularly valuable for low-data applications such as healthcare, where there may only be a few dozen data points per dataset. Specifically, we propose to align out-domain task datasets via a learn, annotate, and audit pipeline for data enrichment.
Learn and Annotate. We train an initial model \( f_{\text{MTL}} \) on all available training data from \( T_1 \) (we will omit the subscript 1 from now on to avoid clutter). The model \( f_{\text{MTL}} \) then makes pseudo labels
for a set of external samples retrieved from all other tasks \( \mathcal{T} \setminus \{T\} \), forming the noisy supplementary dataset \( \tilde{T}_{\text{sup}} = \{(x_i, \tilde{y}_i)\} \): \( x_i \) are consolidated textual descriptions and \( \tilde{y}_i \) are noisy labels aligned with the objective format of the target task.
**Quality Audit.** It is vital to audit the quality of noisy training data to ensure optimal prediction performance. To this end, we clean \( \tilde{T}_{\text{sup}} \) by estimating the data Shapley value of each instance (Ghorbani & Zou, 2019). We denote the value function by \( V(S) \), which indicates the performance score, evaluated on the target task \( T \), of the predictor trained on training data \( S \). Correspondingly, the data Shapley value \( \phi_i \) of any sample \( (x_i, \tilde{y}_i) \in \tilde{T}_{\text{sup}} \) is defined by
\[
\phi_i = C \sum_{S \subseteq \tilde{T}_{\text{sup}} \setminus \{i\}} \frac{V(S \cup \{i\}) - V(S)}{\binom{n-1}{|S|}}, \tag{2}
\]
where the summation is over all subsets of \( \tilde{T}_{\text{sup}} \) that do not contain sample \( i \); \( n \) is the number of samples in \( \tilde{T}_{\text{sup}} \); \( |\cdot| \) denotes the size of a set; and \( C \) is an arbitrary constant. Intuitively, \( \phi_i \) measures the approximate expected contribution of a data point to the trained model’s performance. Therefore, a sample with a low \( \phi_i \) is usually of low quality itself.
Computing the exact data Shapley value with Eq. 2 requires a number of evaluations that is exponential in the number of training samples. Instead, we follow Jia et al. (2021; 2019) and use K-Nearest Neighbors Shapley (KNN-Shapley), which offers an avenue for efficient data Shapley computation. Moreover, we achieve a \( 10\times \) speedup by parallelizing the algorithm, which completes computing scores for 100K+ samples in minutes. Upon acquiring \( \Phi = \{\phi_i\} \), we perform stratified sampling that preserves the distribution of the sample classes to establish the cleaned supplementary dataset \( T_{\text{sup}} \). Appendix G explores the Shapley value and pseudo-label distributions of different supplemental datasets.
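For reference, the exact KNN-Shapley recursion of Jia et al. (2019) for a single validation point can be sketched as follows; embedding the samples as feature vectors and parallelizing over validation points, as described above, are left out.

```python
# A sketch of exact KNN-Shapley (Jia et al., 2019) for one validation point.
import numpy as np

def knn_shapley(X_train, y_train, x_val, y_val, K=5):
    N = len(X_train)
    order = np.argsort(np.linalg.norm(X_train - x_val, axis=1))  # nearest first
    match = (y_train[order] == y_val).astype(float)
    s = np.zeros(N)
    s[N - 1] = match[N - 1] / N
    for j in range(N - 2, -1, -1):  # exact recursion, O(N log N) per point
        s[j] = s[j + 1] + (match[j] - match[j + 1]) / K * min(K, j + 1) / (j + 1)
    phi = np.empty(N)
    phi[order] = s  # map values back to the original sample indices
    return phi

# Averaging phi over the target-task validation points (parallelizable, as
# noted above) gives the per-sample scores used for stratified filtering.
```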
### 2.5 Learning & Deployment
After the quality check step, we obtain the original task dataset \( T \) and the supplementary dataset \( T_{\text{sup}} \), and have two potential options for model training. The first is to combine both datasets for training, but we have found that this approach results in suboptimal performance. Instead, we employ a two-step training approach: (1) pre-train the model on \( T_{\text{sup}} \), and (2) fine-tune the model on \( T \). The resulting model is deployed to provide predictions for any tabular samples belonging to the target task. Because the model trained on the supplementary data has not seen any examples from the original dataset, it is able to make zero-shot predictions for test samples from a new dataset \( D' \notin T \) or adapt to a new task \( T' \) via few-shot learning when a few labeled data are available. Due to the small number of samples in some datasets, we use all available training samples in the learning phase, without leaking information into the testing phase. As the label distribution is highly skewed, randomly chosen validation samples could also bias the model, and we did not have the expertise to choose a representative sample. We therefore simply train the model for 3 epochs on all of the datasets and then perform a single pass of fine-tuning, without significant hyperparameter optimization, given the small amount of data and the good performance this yields.
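A compact sketch of this two-stage regimen with the Hugging Face Trainer is given below; the dataset variables are placeholders for tokenized description/label pairs, and the epoch counts mirror the text.

```python
# A sketch of the two-stage training: pre-train on T_sup, then fine-tune on T.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-v1.1", num_labels=2)

def run_stage(dataset, epochs, out_dir):
    args = TrainingArguments(output_dir=out_dir, num_train_epochs=epochs,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=dataset).train()

run_stage(t_sup_dataset, epochs=3, out_dir="stage1")  # placeholder: cleaned T_sup
run_stage(t_dataset, epochs=1, out_dir="stage2")      # placeholder: target task T
```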
### 3 Experiments
We conduct an extensive evaluation of MediTab’s performance in supervised learning (Q1), few-shot learning (Q2), and zero-shot prediction (Q3). We also compare the different training strategies for the final deployment of our method (Q4).
#### 3.1 Experimental Setting
**Datasets:** In our experiments, we introduce the following types of tabular prediction tasks. **Patient Outcome Datasets.** This task includes patient records collected separately from seven oncology clinical trials \(^1\). These datasets each have their own unique schema and contain distinct groups
---
\( ^1 \)https://data.projectdatasphere.org/projectdatasphere/html/access
Table 1: The statistics of Patient Outcome Prediction Datasets. # is short for the number of. Categorical, Binary, Numerical show the number of columns belonging to these types. N/A means no label is available for the target task. We used 20% of the data for testing.
| Trial ID | Trial Name | # Patients | Categorical | Binary | Numerical | Positive Ratio | Train/Test Split |
|----------------|----------------|------------|-------------|--------|-----------|----------------|------------------|
| NCT00041119 | Breast Cancer 1| 3,871 | 5 | 8 | 2 | 0.07 | 3096 / 775 |
| NCT00174655 | Breast Cancer 2| 994 | 3 | 31 | 15 | 0.02 | 795 / 199 |
| NCT00312208 | Breast Cancer 3| 1,651 | 5 | 12 | 6 | 0.19 | 1320 / 331 |
| NCT00079274 | Colorectal Cancer| 2,968 | 5 | 8 | 3 | 0.12 | 2374 / 594 |
| NCT00003299 | Lung Cancer 1 | 587 | 2 | 11 | 4 | 0.94 | 469 / 118 |
| NCT00694382 | Lung Cancer 2 | 1,604 | 1 | 29 | 11 | 0.45 | 1283 / 321 |
| NCT03041311 | Lung Cancer 3 | 53 | 2 | 11 | 13 | 0.64 | 42 / 11 |
External Patient Database
| Dataset | # Patients | Categorical | Binary | Numerical | Positive Ratio |
|-----------------|------------|-------------|--------|-----------|----------------|
| MIMIC-IV | 143,018 | 4 | 1 | 1 | N/A |
| PMC-Patients | 167,034 | 1 | 1 | 1 | N/A |
Table 2: The statistics of the Clinical Trial Outcome Datasets. # is short for the number of. N/A means no label is available for the target task. We used the same data splits as Fu et al. (2022) (train and test are trials before / after 2015 respectively)
| Dataset | # Trials | # Treatments | # Conditions | # Features | Positive Ratio | Train/Test Split |
|------------------|----------|--------------|--------------|------------|----------------|------------------|
| TOP Benchmark Phase I | 1,787 | 2,020 | 1,392 | 6 | 0.56 | 1136 / 575 |
| TOP Benchmark Phase II | 6,102 | 5,610 | 2,824 | 6 | 0.50 | 4317 / 1504 |
| TOP Benchmark Phase III | 4,576 | 4,727 | 1,619 | 6 | 0.68 | 3359 / 1048 |
ClinicalTrials.gov Database
| Dataset | # Trials | # Treatments | # Conditions | # Features | Positive Ratio | Train/Test Split |
|------------------|----------|--------------|--------------|------------|----------------|------------------|
| Phase I-IV | 223,613 | 244,617 | 68,697 | 9 | N/A | |
of patients in different conditions. A CTGAN model (Xu et al., 2019) was trained on the raw data to generate the synthetic patient data for the experiments. We train the model to predict patient mortality, which is a binary classification task. The statistics of the datasets are available in Table 1. **Clinical Trial Outcome Datasets.** We use clinical trial data from the HINT benchmark (Fu et al., 2022) and ClinicalTrials.gov. The HINT benchmark contains drug, disease, and eligibility information for 17K clinical trials. The trial database contains 220K clinical trials with information about the trial setup (such as title, phase, enrollment, conditions, etc.). Both datasets cover phase I, II, and III trials, but only the HINT benchmark includes the trial outcome labels in {success, failure}. We have also included the MIMIC-IV dataset and the PMC-Patients dataset as the external patient database, and clinical trial documents as the external trial outcome prediction dataset. Please refer to Appendix D for details.
**Implementations:** For the patient outcome prediction task, we choose a tree ensemble method (XGBoost) (Chen & Guestrin, 2016a), a Multilayer Perceptron (MLP), FT-Transformer (Gorishniy et al., 2021), TransTab (Wang & Sun, 2022b), and TabLLM (Hegselmann et al., 2022) as the baselines. For the trial outcome prediction task, we choose XGBoost, a feed-forward neural network (FFNN) (Tranchevent et al., 2019), DeepEnroll (Zhang et al., 2020), COMPOSE (Gao et al., 2020), HINT (Fu et al., 2022), and SPOT (Wang et al., 2023b) as the baselines. We use PyTrial (Wang et al., 2023a) to implement most baselines and provide the parameter tuning details of the selected baselines in Appendix E.
We use a pre-trained bidirectional transformer model named BioBERT (Lee et al., 2020) as the classifier for MediTab. We utilize GPT-3.5 (Brown et al., 2020) via OpenAI’s API for the data consolidation and enrichment. We use UnifiedQA-v2-T5 3B (Khashabi et al., 2020) for the sanity check. The evaluation metrics selected are ROC-AUC and PR-AUC, with the details in Appendix F. Further ablations on the base model choice are shown in Appendix H. All experiments were run with 2 RTX-3090 GPUs and an AMD Ryzen 3970X 32-Core CPU.
---
2https://clinicaltrials.gov/
3Engine gpt-3.5-turbo-0301: https://platform.openai.com/docs/models/gpt-3-5
4Huggingface: allenai/unifiedqa-v2-t5-large-1363200
Table 3: Test performances on the Patient Outcome Datasets. “-” indicates not converged.
| Trial Name | Metrics | XGBoost | MLP | FT-Transformer | TransTab | TabLLM (Single Dataset) | TabLLM (Multi-Dataset) | MediTab |
|------------------|---------|---------|-------|----------------|----------|------------------------|------------------------|---------|
| Breast Cancer 1 | AUROC | 0.5430 | 0.6091| 0.5564 | 0.5409 | - | - | 0.6182 |
| | PRAUC | 0.0796 | 0.0963| 0.0803 | 0.0923 | - | - | 0.1064 |
| Breast Cancer 2 | AUROC | 0.6827 | 0.6269| 0.6231 | 0.6000 | - | - | 0.8397 |
| | PRAUC | 0.1559 | 0.1481| 0.0520 | 0.0365 | - | - | 0.1849 |
| Breast Cancer 3 | AUROC | 0.6489 | 0.7065| 0.6338 | 0.7100 | 0.6163 | 0.6103 | 0.7529 |
| | PRAUC | 0.3787 | 0.4000| 0.3145 | 0.4133 | 0.3023 | 0.2977 | 0.4567 |
| Colorectal Cancer| AUROC | 0.6704 | 0.6337| 0.5951 | 0.7096 | - | - | 0.7107 |
| | PRAUC | 0.2261 | 0.1828| 0.1541 | 0.2374 | - | - | 0.2402 |
| Lung Cancer 1 | AUROC | - | 0.6023| - | 0.6499 | - | - | 0.7246 |
| | PRAUC | - | 0.9555| - | 0.9672 | - | - | 0.9707 |
| Lung Cancer 2 | AUROC | 0.6976 | 0.6303| 0.6093 | 0.5685 | 0.6188 | 0.6379 | 0.6622 |
| | PRAUC | 0.6865 | 0.5662| 0.5428 | 0.4922 | 0.5619 | 0.5772 | 0.6710 |
| Lung Cancer 3 | AUROC | 0.6976 | 0.6429| 0.5357 | 0.6786 | 0.8036 | 0.6786 | 0.8928 |
| | PRAUC | 0.7679 | 0.8501| 0.7250 | 0.7798 | 0.8256 | 0.7338 | 0.9478 |
Table 4: Test performances on the Clinical Trial Outcome Datasets.
| Trial Data | Metrics | XGBoost | FFNN | DeepEnroll | COMPOSE | HINT | SPOT | MediTab |
|------------|---------|---------|------|------------|---------|------|------|---------|
| Phase I | AUROC | 0.518 | 0.550| 0.575 | 0.571 | 0.576| 0.660| 0.699 |
| | PRAUC | 0.513 | 0.547| 0.568 | 0.564 | 0.567| 0.689| 0.726 |
| Phase II | AUROC | 0.600 | 0.611| 0.625 | 0.628 | 0.645| 0.630| 0.706 |
| | PRAUC | 0.586 | 0.604| 0.600 | 0.604 | 0.629| 0.685| 0.733 |
| Phase III | AUROC | 0.667 | 0.681| 0.699 | 0.700 | 0.723| 0.711| 0.734 |
| | PRAUC | 0.697 | 0.747| 0.777 | 0.782 | 0.811| 0.856| 0.881 |
3.2 Results on Patient Outcome Prediction and Trial Outcome Prediction
We report the supervised results for patient outcome prediction: the AUROC and PRAUC on the test sets of all clinical trials, in Table 3. Note that we train a single classifier for MediTab and predict on all datasets, while the baselines need to be trained on each dataset separately. Our findings demonstrate that a single MediTab model achieves the highest ranking in 5 out of 7 datasets, with an overall ranking of 1.57 across all datasets. Conversely, MLP and FT-Transformer fail to converge in certain cases due to imbalanced target labels (e.g., Lung Cancer 1) or limited availability of data (e.g., Lung Cancer 3). This highlights the data-hungry nature of deep learning algorithms and emphasizes the importance of augmenting training data through data consolidation and enrichment.
Additionally, we observe that TabLLM fails in both the single-dataset and multi-dataset settings. With only text-template serialization, it performs poorly in this setting, and training fails to converge on multiple datasets. It is possible that the small amount of data and the specialized clinical terminology are too niche for the general-purpose TabLLM. Furthermore, it is not able to generalize across datasets, as the column names are quite diverse (Table 5).
MediTab also leads to substantial improvements in trial outcome prediction tasks, as illustrated in Table 4. Notably, our approach outperforms all other methods in every phase of the trials. We observe remarkable improvements of 5.9%, 9.5%, and 3.2% over the previous state-of-the-art baselines in the three phases, respectively. This provides insight into the benefits of increased data availability and the utilization of transfer learning in deep learning-based tabular prediction algorithms.
3.3 Results on Zero-Shot and Few-Shot Learning
We assess the zero-shot prediction capability of MediTab on two tasks. For the evaluation of the dataset D, we deliberately exclude D from the training data during step 2, where pseudo labels are generated for the external database. When computing the data Shapley values for out-domain samples during the quality check process, D is also excluded. Subsequently, we train a model solely on the cleaned supplementary data T_sup and evaluate its performance on the target dataset D. The results of this evaluation are illustrated in Figure 3. MediTab exhibits impressive zero-shot performances: it wins over supervised XGBoost models in 5 out of 7 datasets in patient outcome
Figure 3: **Zero-shot MediTab** is better than a fully supervised baseline (XGBoost). The evaluation is across 7 patient outcome prediction datasets (left) and 3 trial outcome prediction datasets (right). The compared baseline XGBoost model is fitted on each dataset, respectively.
Figure 4: **Few-shot MediTab** compared with XGBoost with varying training data sizes. The compared baseline XGBoost model is fitted on each dataset, respectively.
prediction and all three datasets in trial outcome prediction by a significant margin. On average, MediTab achieves improvements of 8.9% and 17.2% in the two tasks, respectively.
The encouraging zero-shot learning result sheds light on the development of task-specific tabular prediction models that can offer predictions for new datasets even before the label collection stage. This becomes particularly invaluable in scenarios where acquiring training labels is costly. For instance, it enables us to predict the treatment effect of a drug on a group of patients before conducting clinical trials or collecting any trial records. Consequently, it allows us to make informed decisions regarding treatment adjustments or trial discontinuation.
We further visualize the few-shot learning results in Figure 4. We are able to witness consistent performance improvement with more labeled training samples for both methods. Additionally, for all tested cases, XGBoost is unable to surpass the zero-shot score of MediTab.
### 3.4 Results on Ablations on Different Learning Strategies
Section 2.5 discusses a two-stage training strategy for the final learning & deployment stage. Here, we investigate the different training regimens of our method: single-stage training (augment), two-stage training (finetune), training on the original datasets from scratch (scratch), and zero-shot (zeroshot). We list their rankings in Figure 5 and detailed performances across datasets in Tables 7 and 8. Results show that finetune generally performs the best. We conjecture that jointly training on the target task and supplementary data improves the model’s overall utility, but may affect the performance of specific samples in the target task $T$. Furthermore, we also identify that zeroshot reaches performance comparable to scratch.
4 RELATED WORK
Tabular Prediction has traditionally relied on tree ensemble methods (Chen & Guestrin, 2016b; Ke et al., 2017). In recent years, the powerful representation learning abilities of neural networks have motivated the new design of deep learning algorithms for tabular prediction (Arik & Pfister, 2021; Kadra et al., 2021; Chen et al., 2023; Bertsimas et al., 2022). They involve using transformer-based architectures (Huang et al., 2020; Gorishniy et al., 2021; Wang & Sun, 2022a) to enhance automatic feature interactions for better prediction performances. In addition, self-supervised learning (SSL) has been extended to tabular prediction tasks. This includes approaches such as generative pretraining objective by masked cell modeling (Yoon et al., 2020; Arik & Pfister, 2021; Nam et al., 2023), and discriminative pretraining objective by self-supervised (Ucar et al., 2021; Somepalli et al., 2022; Bahri et al., 2022) or supervised contrastive learning (Wang & Sun, 2022b). Moreover, transfer learning was also adapted to tabular prediction, employing prompt learning based on generative language models (Hegselmann et al., 2022) and multi-task learning (Levin et al., 2023). Multi-task learning and transfer learning were also performed in the medical domain for EHR-based predictive modeling (Hur et al., 2023; 2022). Nonetheless, these approaches primarily focus on algorithm design, including model architecture and objective functions, often overlooking the engineering of the underlying data.
Data-Centric AI underscores the importance of data for building advanced machine learning prediction systems (Zha et al., 2023). Notable progress in the domain of tabular data includes efforts to detect (Wang et al., 2020) and debug label noise (Kong et al., 2021), automate feature selection (Liu et al., 2023), and streamline feature generation (Su et al., 2021). Additionally, fine-tuning LMs on tabular data was proposed by Dinh et al. (2022), but it uses strict templates to create the sentences, which limits expressivity. These methods were proposed for general tabular data and do not cover the challenges of heterogeneity and limited samples in medical tabular data. Though there have been efforts to enhance medical codes in EHRs with text descriptions (Hur et al., 2022), there has been no further exploration of augmenting medical tabular data that include more diverse features. In contrast, we present a data engineering framework designed to consolidate diverse tabular datasets, distilling knowledge from large language models with hallucination detection and distilling from out-domain datasets with data auditing. MediTab is hence able to build a versatile prediction model for the target task.
5 CONCLUSION
In conclusion, we proposed a novel approach to train universal tabular data predictors for medical data. While there have been many efforts to develop new algorithms for tabular prediction, the significance of data engineering has received much less attention. Medicine, specifically, faces challenges of limited data availability, inconsistent dataset structures, and varying prediction targets across domains. To address these challenges, MediTab generates large-scale training data for tabular prediction models by utilizing both in-domain tabular datasets and a set of out-domain datasets. The key component of this approach is a data engine that utilizes large language models to consolidate tabular samples by expressing them in natural language, thereby overcoming schema differences across tables. Additionally, the out-domain tabular data is aligned with the target task using a learn, annotate, and refine pipeline. By leveraging the expanded training data, MediTab can effectively work on any tabular dataset within the domain without requiring further fine-tuning, achieving significant improvements compared to supervised baselines. Moreover, MediTab demonstrates impressive performance even with limited examples (few-shot) or no examples (zero-shot), remaining competitive with supervised approaches across various tabular datasets.
REFERENCES
Emily Alsentzer, John R Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. Publicly available clinical bert embeddings. *arXiv preprint arXiv:1904.03323*, 2019.
Sercan Ö Arik and Tomas Pfister. Tabnet: Attentive interpretable tabular learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 6679–6687, 2021.
Dara Bahri, Heinrich Jiang, Yi Tay, and Donald Metzler. SCARF: Self-supervised contrastive learning using random feature corruption. In *International Conference on Learning Representations*, 2022.
Mandis Beigi, Afrah Shafquat, Jason Mezey, and Jacob W Aptekar. Synthetic clinical trial data while preserving subject-level privacy. In *NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research*, 2022.
Dimitris Bertsimas, Kimberly Villalobos Carballo, Yu Ma, Liangyuan Na, Léonard Boussioux, Cynthia Zeng, Luis R Soenksen, and Ignacio Fuentes. Tabtext: a systematic approach to aggregate knowledge across tabular data structures. *arXiv preprint arXiv:2206.10381*, 2022.
Vadim Borisov, Kathrin Seßler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. Language models are realistic tabular data generators. *arXiv preprint arXiv:2210.06280*, 2022.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.
Jintai Chen, KuanLun Liao, Yanwen Fang, Danny Chen, and Jian Wu. Tabcaps: A capsule neural network for tabular data classification with bow routing. In *The Eleventh International Conference on Learning Representations*, 2023.
Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD ’16, pp. 785–794, New York, NY, USA, 2016a. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939785. URL http://doi.acm.org/10.1145/2939672.2939785.
Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 785–794, 2016b.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.
Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee. Lift: Language-interfaced fine-tuning for non-language machine learning tasks. *Advances in Neural Information Processing Systems*, 35:11763–11784, 2022.
Tianfan Fu, Kexin Huang, Cao Xiao, Lucas M Glass, and Jimeng Sun. Hint: Hierarchical interaction network for clinical-trial-outcome predictions. *Patterns*, 3(4):100445, 2022.
Junyi Gao, Cao Xiao, Lucas M Glass, and Jimeng Sun. COMPOSE: cross-modal pseudo-siamese network for patient trial matching. In *Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 803–812, 2020.
Amirata Ghorbani and James Zou. Data shapley: Equitable valuation of data for machine learning. In *International Conference on Machine Learning*, pp. 2242–2251. PMLR, 2019.
Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. *Advances in Neural Information Processing Systems*, 34:18932–18943, 2021.
|
jx6njBKH8E
|
Can the authors disentangle how much of the extracted text overlaps with the generated text that they fine-tuned with, and how much of the extracted text is non-overlapping and actually a generation of the model that is due to the amplification?
|
AMPLIFYING TRAINING DATA EXPOSURE THROUGH FINE-TUNING WITH PSEUDO-LABELED MEMBERSHIPS
Anonymous authors
Paper under double-blind review
ABSTRACT
Neural language models (LMs) are vulnerable to training data extraction attacks due to data memorization. This paper introduces a novel attack scenario wherein an attacker adversarially fine-tunes pre-trained LMs to amplify the exposure of the original training data. This strategy differs from prior studies by aiming to intensify the LM’s retention of its pre-training dataset. To achieve this, the attacker needs to collect generated texts that are closely aligned with the pre-training data. However, without knowledge of the actual dataset, quantifying the amount of pre-training data within generated texts is challenging. To address this, we propose the use of pseudo-labels for these generated texts, leveraging membership approximations indicated by machine-generated probabilities from the target LM. We subsequently fine-tune the LM to favor generations with higher likelihoods of originating from the pre-training data, based on their membership probabilities. Our empirical findings indicate a remarkable outcome: LMs with over 1B parameters exhibit a four to eight-fold increase in training data exposure. We discuss potential mitigations and suggest future research directions.
1 INTRODUCTION
Neural Language Models (LMs) have the ability to memorize extensive portions of their training data, which frequently encompass sensitive information, thereby raising significant privacy concerns. This is primarily due to Training Data Extraction (TDE) attacks (Carlini et al., 2021), which enable the disclosure of original training data during the model’s inference phase. Numerous previous studies suggest that attackers, by generating extensive text and selecting outputs likely to contain training data, can access substantial amounts of sensitive information, even with restricted access to the model (Wallace et al., 2020; Carlini et al., 2021; 2023).
In this paper, we explore a novel attack strategy where a pre-trained LM is “adversarially” fine-tuned to increase the risk of exposing sensitive pre-training data. While many existing attack strategies emphasize post-hoc approaches (Lehman et al., 2021; Carlini et al., 2021; Balunovic et al., 2022; Carlini et al., 2023; Anil et al., 2023) to enhance the efficacy of TDE attacks against the fixed state of a target LM (e.g., finding better prompts, modifying sampling methods, or developing ranking strategies), our approach probes the potential intensification of risk to the LM through the exploitation of self-generated text for fine-tuning. Our training objective also contrasts with existing defenses, such as differentially private training (Abadi et al., 2016; Anil et al., 2022) or self-distillation (Zhang et al., 2019; Tang et al., 2022), aiming to restrict the exposure of training data. The underlying assumption of our attack hinges on the availability of restricted white-box capabilities, which are becoming increasingly crucial and attainable, given the proliferation of public LMs (Scao et al., 2022; Touvron et al., 2023) and the advancement in black-box model extraction techniques (Carlini et al., 2020; Wu et al., 2023) (refer to §3.1 for more details).
Nonetheless, significant challenges remain. LMs may “forget” early training examples during the fine-tuning process (McCloskey & Cohen, 1989; Carlini et al., 2021; Jagielski et al., 2023), and ensuring the accuracy of labels on self-generated text is crucial for effective fine-tuning (He et al., 2019). To deal with these issues, attackers might employ self-generated texts that align with the original pre-training data, a process involving content quantification—akin to identifying texts with high membership. However, numerous TDE attack strategies necessitate empirical thresholds to differentiate between members and non-members of generated texts (Song & Mittal, 2021; Carlini...
Figure 1: An overview. We feed an empty prompt into the target LM, consequently generating substantial text. For each piece of generated text, we calculate the perturbation discrepancy (Mitchell et al., 2023), where lower values signify a higher probability of the text being human-written and potentially containing sensitive training data. Subsequently, we match pairs of generations in twos and fine-tune (Ouyang et al., 2022) the target LM to favor the text with a lower perturbation discrepancy.
Without insights into the pre-training dataset, accurately determining membership becomes formidable, potentially leading to mislabeling texts that rarely contain training data as members.
To address these challenges, we propose the following two-fold strategy:
1. **Pseudo-Labeling based on Machine-Generated Probabilities** (§4.1): We generate extensive text from the target LM and pseudo-label (Lee et al., 2013) it, basing memberships on approximations. Utilizing the renowned zero-shot, machine-generated text detection method named DetectGPT (Mitchell et al., 2023), we infer machine-generated probabilities and inversely assign memberships. This method operates under the assumption that texts, even those machine-generated which incorporate training data, are likely to exhibit lower machine-generated probabilities.
2. **Reinforcement Learning with Self-Generations** (§4.2): We fine-tune the target LM utilizing its generated text. Employing Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017; Stiennon et al., 2020; Ouyang et al., 2022), which prioritizes relative sample preferences, we address the confirmation bias (Arazo et al., 2020) resulting from inaccurate labeling. This approach of pseudo-labeling prompts the target LM to favor responses reminiscent of training data.
We evaluate our approach using six distinct versions of the publicly available LM from the OPT (Zhang et al., 2022) family, namely 125M, 350M, 1.3B, 2.7B, 6.7B, and 13B. Consequently, our methodology enhances the efficacy of TDE attacks in fine-tuned LMs with over 1B parameters, exhibiting a four to eight-fold increase in effectiveness compared to reference LMs. Moreover, these fine-tuned LMs exhibit a heightened risk of exposing exceedingly lengthy sequences of training data, with instances including up to 1163 verbatim words.
In summary, our contributions are as follows: (1) We present a novel restricted white-box attack strategy (§3) amplifying the exposure of training data in LMs. This is achieved by pseudo-labeling of self-generated text (§4.1) and subsequent fine-tuning of the target LM (§4.2). (2) We provide empirical evidence supporting the feasibility of our approach, discerning pseudo-labeled membership discrepancies in generated texts (§5.2) and demonstrating its efficacy in amplifying training data exposure when targeting publicly available LMs (§5.3). (3) We delve into potential defensive strategies against our method, contributing valuable insights to the discourse on mitigating privacy risks posed to LMs (§6).
2 BACKGROUND
2.1 Zero-Shot Machine-Generated Text Detection
The proliferation of AI writing tools (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023) has driven the need for detectors capable of determining whether a text is machine-generated. Among the extensive prior research addressing this issue (Zellers et al., 2019; Ippolito et al., 2020; Fagni et al., 2021), we focus on a popular zero-shot machine-generated text detection strategy called DetectGPT (Mitchell et al., 2023).
DetectGPT determines whether a given text is machine-generated based on the expectation of its log probability under perturbation. Specifically, the expected difference in log probabilities between a given text and its perturbed versions (e.g., with words replaced or deleted), termed the perturbation discrepancy, tends to be large and positive for machine-generated text, whereas for human-written text it is close to zero. Formally, the perturbation discrepancy is defined as (Mitchell et al., 2023):
\[ d(x, p_\theta, q) \triangleq \log p_\theta(x) - \mathbb{E}_{\tilde{x} \sim q(\cdot \mid x)} \log p_\theta(\tilde{x}) \tag{1} \]
where \( x \) is the text we want to classify as machine-generated or not, \( p_\theta \) represents the source model from which we want to discern whether the text \( x \) was derived, and \( q(\cdot \mid x) \) is a function, e.g., implemented with T5 (Raffel et al., 2020), that generates a perturbed version \( \tilde{x} \) of \( x \).
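A minimal sketch of Eq. 1 is given below; `perturb` stands in for the mask-and-fill function \( q(\cdot \mid x) \) (a concrete version appears in §4.1), the OPT checkpoint is one of those evaluated later, and the Monte Carlo sample size is an assumption here.

```python
# A sketch of the perturbation discrepancy d(x, p_theta, q) of Eq. 1.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
lm = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

def log_prob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean NLL over predicted tokens
    return -loss.item() * (ids.shape[1] - 1)  # summed token log-likelihood

def perturbation_discrepancy(x, perturb, n=10):
    # log p(x) minus a Monte Carlo estimate of E[log p(x_tilde)]
    return log_prob(x) - sum(log_prob(perturb(x)) for _ in range(n)) / n
```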
2.2 Reinforcement Learning from Human Feedback
RLHF (Christiano et al., 2017; Stiennon et al., 2020; Ouyang et al., 2022; OpenAI, 2023) is a fine-tuning strategy for LMs that uses human preferences as a reward signal. This strategy is divided into three sequential steps. First, the target LM is fine-tuned to produce the desired output for any given prompt as labeled by a human, termed Supervised Fine-Tuning (SFT). Second, labelers rank the target LM’s responses to the same prompt, after which a Reward Model (RM) is trained to approximate these human preferences. Lastly, reinforcement learning via Proximal Policy Optimization (PPO) (Schulman et al., 2017) is employed to search for the optimal policy, i.e., the parameters of the target LM, that maximizes the expected reward for the generated text. Through these three steps, the behavior of the target LM is aligned with intricate human preferences.
3 Threat Model
In this section, we establish the adversary’s capabilities (§3.1) and define the objective to achieve from the target LM through these capabilities (§3.2). An accurate threat model demonstrates the operating principle and limitations of our attack, which will also help design future defense strategies.
3.1 Adversary’s Capabilities
We consider a restricted white-box adversary who has access to the input-output behavior of the target LM and can even fine-tune it as desired. This scenario permits access to the entire set of parameters of the target LM, but knowledge of the distribution and attributes of the private training data is not available.
The assumptions of this restricted white-box setting are becoming increasingly essential and realistic for two reasons: (1) Pre-trained LMs, trained on private datasets or those of sizes inaccessible to an adversary, are gradually being open-sourced (Kim et al., 2021; Touvron et al., 2023; Rozière et al., 2023) due to the efforts of various organizations promoting open science, and (2) Adversaries can extract parameters of neural networks, including LMs, even in a black-box access environment (Carlini et al., 2020; He et al., 2021; Wu et al., 2023).
3.2 Adversary’s Objective
The adversary’s objective is to maximize the amount of private training data exposed from the generated texts of the target LM, i.e., true positive. Specifically, given a pre-trained LM \( f_\theta \) with parameters \( \theta \), the adversary wants to maximize the effectiveness of the TDE attack:
\[ \text{Maximize } \mathbb{E}_{\hat{y} \sim \mathcal{D}_{\text{infer}}} \left[ \mathbb{1}\left[ \exists i : \hat{y}_{i:i+k} \in \mathcal{D}_{\text{train}} \right] \right] \tag{2} \]
where \( \mathcal{D}_{\text{infer}} = \{ \hat{y}^{(i)} \}_{i=1}^l \) is a set of \( l \) sequences sampled from the LM \( f_\theta \) by the adversary with the prompt \( x \), \( \hat{y} = [\hat{y}_1, \hat{y}_2, \cdots, \hat{y}_m] \) is a generated text consisting of \( m \) tokens excluding the prompt \( x \), and \( \hat{y}_{i:i+k} = [\hat{y}_i, \hat{y}_{i+1}, \cdots, \hat{y}_{i+k}] \) is a partial generation consisting of \( k \) consecutive tokens starting from index \( i \in [1, m - k] \). \( k \) and \( l \) are predefined hyperparameters.
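The indicator in Eq. 2 can be evaluated with a verbatim \( k \)-gram check, sketched below; hashing every \( k \)-gram of the training corpus is our simplification of the matching procedure.

```python
# A sketch of the true-positive check of Eq. 2: a generation counts if any
# window of k consecutive tokens appears verbatim in the training data.
def kgram_set(token_seqs, k):
    return {tuple(seq[i:i + k])
            for seq in token_seqs for i in range(len(seq) - k + 1)}

def exposure_rate(generations, train_kgrams, k):
    hits = sum(
        any(tuple(y[i:i + k]) in train_kgrams for i in range(len(y) - k + 1))
        for y in generations)  # each y is a list of token ids
    return hits / len(generations)
```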
This paper does not aim for a targeted attack intending to extract data with specific attributes. Rather, we consider an untargeted attack where input prompt \( x \) is empty (i.e., \( </s> \)). The targeted attack that injects prompts that are easily accessible to adversaries, such as personal identification information, medical information, or code snippets, can increase the effectiveness of the attack and be helpful in quantifying memorization (Carlini et al., 2023). However, it is not realistic as it mandates prior knowledge of the training data.
4 Study Design
In this section, we describe our TDE attack strategy. First, we pseudo-label the generated texts based on the machine-generated probability (§4.1). We then utilize these pseudo-labeled texts to perform reinforcement learning (§4.2).
4.1 Pseudo-Labeling based on Machine-Generated Probabilities
Generating Texts. We first input an empty prompt, specifically “\( </s> \),” into the target LM to produce 100,000 texts (Carlini et al., 2021). By feeding the unique token that indicates the beginning of a sentence, we can extract the most confident samples from the LM. This method corresponds with the adversary’s objective as texts generated with higher confidence are more likely to be memorized training data (Shokri et al., 2017). While there is no length constraint for the generated texts, we deliberately fixed the number of tokens of each generated text to 256 to enhance the reliability of the TDE attack performance. This length is consistent with that of texts after our TDE attack. Despite the possibility of duplicated data among the generated texts (Carlini et al., 2021), we opted not to carry out any deduplication for actual attack performance computation.
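Generation from the empty prompt can be sketched as follows; the plain top-\(k\) sampling setting is our assumption, while the checkpoint family and the 256-token length match the text.

```python
# A sketch of untargeted sampling: feed only OPT's BOS token "</s>" and
# draw fixed-length continuations from the target LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
lm = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

bos = torch.tensor([[tok.bos_token_id]])  # "</s>" for the OPT family
texts = []
for _ in range(100):  # the attack draws 100,000; reduced for illustration
    out = lm.generate(bos, do_sample=True, top_k=40, max_new_tokens=256)
    texts.append(tok.decode(out[0], skip_special_tokens=True))
```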
Perturbing Generated Texts. Subsequently, we produce ten perturbed texts for each generated text using the mask-and-fill approach. We repetitively mask two consecutive spans until 15% of the words delineated by spaces are corrupted (Mitchell et al., 2023). To prevent the influence of each masked span within a single sentence from becoming excessively dominant, we dropped texts comprising fewer than 20 words. We used a T5-Large (Raffel et al., 2020) model, pre-trained with a span-corruption objective to predict masked spans.
Given the vast number of texts to perturb, we simplify the machine-generated text detection method proposed in prior work (Mitchell et al., 2023) for efficiency. For instance, while previously 100 perturbed texts were produced for each generation, we only produce ten, and we replace the perturbation function from T5-3B (3B) with T5-Large (770M). While such reduced perturbation may induce inaccurate labels, we minimize the confirmation bias with a trick in the subsequent pseudo-labeling step. For concrete examples of mask-and-fill on the generated text, please refer to Appendix C.1.
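The reduced mask-and-fill procedure can be sketched as follows, and also provides the `perturb` function assumed in the sketch of Eq. 1 above; the span placement and sentinel parsing are simplified assumptions, while the 15% corruption rate, two-word spans, and T5-Large match the text.

```python
# A sketch of mask-and-fill perturbation with T5-Large: mask 2-word spans
# until ~15% of words are corrupted, then let T5 fill the sentinel spans.
import random, re
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

t5_tok = AutoTokenizer.from_pretrained("t5-large")
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

def perturb(text, frac=0.15, span=2):
    words = text.split()
    n_spans = max(1, int(len(words) * frac / span))
    # sample non-overlapping span starts (a simplification of the paper's loop)
    starts = sorted(random.sample(range(0, len(words) - span, span), n_spans))
    for j, s in enumerate(starts):
        words[s:s + span] = [f"<extra_id_{j}>"] + [""] * (span - 1)
    masked = " ".join(w for w in words if w)
    ids = t5_tok(masked, return_tensors="pt").input_ids
    fill = t5_tok.decode(t5.generate(ids, max_new_tokens=64)[0],
                         skip_special_tokens=False)
    # splice each generated span back in place of its sentinel token
    for j, rep in enumerate(re.split(r"<extra_id_\d+>", fill)[1:]):
        rep = rep.replace("</s>", "").replace("<pad>", "").strip()
        masked = masked.replace(f"<extra_id_{j}>", rep, 1)
    return masked
```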
Calculating Perturbation Discrepancy for Each Generated Text. Next, we compute the log-likelihood of the generations and perturbed texts with respect to the target LM. As mentioned in §2.1, the perturbation discrepancy of a generation from the target LM is the difference between that generation’s log probability and the perturbed texts’ expected log probability. Unlike previous studies that classified texts whose perturbation discrepancy exceeds a threshold as machine-generated (Mitchell et al., 2023), we only compare the perturbation discrepancy between two generated texts; specifically, the one with the lower discrepancy is assumed to be more likely to contain human-written text.
Pseudo-Labeling Texts through Perturbation Discrepancy. Lastly, we pair two generated texts with their perturbation discrepancy. Note that the text preferred by the target LM is likely human-written, meaning it would have a relatively lower perturbation discrepancy. Consequently, we can naturally determine the pseudo-label of membership (i.e., chosen and rejected) by categorizing paired texts based on lower and higher perturbation discrepancies. For detailed examples of pseudo-labels determined by perturbation discrepancy within a pair, consult Appendix C.2.

---

1For instance, consider a scenario where the adversary injects the following prompt to extract specific email information: “If you have other issues, please contact us as”
The most trivial method to select such pairs is to randomly match two texts. However, due to our simplified implementation (i.e., the reduced number of perturbed texts per generation and a lower-capacity perturbation model), the reliability of the pseudo-labels may be somewhat compromised. Conversely, finding the globally optimal pairing with a maximized difference in perturbation discrepancy requires examining all possible combinations, which is computationally prohibitive due to its quadratic nature. As a compromise between the two solutions, we sort texts by perturbation discrepancy and sequentially match one text from the top-scoring half with one from the remainder, as sketched below. A method that guarantees a maximum discrepancy difference between pairs, such as the heuristic algorithm of simulated annealing (Bertsimas & Tsitsiklis, 1993), could further enhance this and remains a topic for future research.
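A minimal sketch of this pairing heuristic (function name ours):

```python
def make_preference_pairs(texts, discrepancies):
    # Sort by perturbation discrepancy; pair each text from the lower-scoring
    # half (pseudo-label "chosen", more likely memorized training data) with
    # one from the higher-scoring half ("rejected").
    order = sorted(range(len(texts)), key=lambda i: discrepancies[i])
    half = len(order) // 2
    return [(texts[order[i]], texts[order[half + i]]) for i in range(half)]
```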
4.2 Reinforcement Learning with Self-Generations
To fine-tune the target LM on the pseudo-labeled self-generation dataset, we apply the popular fine-tuning strategy for large LMs, RLHF (§2.2). We replace the reward from human feedback with the perturbation discrepancy so that the target LM learns to favor responses expected to contain more training data.
We omit the SFT stage for the target LM. The primary aim of SFT is to modify the response format of an LM trained with a causal language modeling objective so that it acts like a chatbot, by adding simple directives at the beginning and end of a prompt.² We assessed that this process is inconsistent with our attack strategy of exposing training data by inputting an empty prompt. We therefore used 40% and 60% of the pseudo-labeled dataset to fine-tune the RM and to run the PPO algorithm, respectively. All other unspecified methods follow the approach of Ouyang et al. (2022). Please refer to Appendix B.3 for the detailed specifications of the fine-tuning dataset.
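For reference, the pairwise objective is the standard RM loss of Ouyang et al. (2022), with the human-preference labels replaced by our discrepancy-based pseudo-labels; a minimal sketch with names of our own choosing:

```python
import torch.nn.functional as F

def reward_model_loss(r_chosen, r_rejected):
    # Standard pairwise ranking loss (Ouyang et al., 2022): push the reward of
    # the pseudo-labeled "chosen" (lower-discrepancy) text above "rejected".
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```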
5 Experiments
In this section, we experimentally validate the feasibility and effectiveness of our two-step approach in amplifying the exposure of training data by addressing the following two research questions:
- **RQ1 (Feasibility):** Can the RM discern generated texts containing more training data by fine-tuning with pseudo-labeled texts based on the perturbation discrepancy difference? §5.2
- **RQ2 (Effectiveness):** If discernible, can we amplify training data exposure by fine-tuning target LM using the trained RM? §5.3
RQ1 confirms the validity of our approach. Drawing on earlier studies that identified a perturbation discrepancy gap related to the presence of training data in text (Mitchell et al., 2023), we experimentally show that the RM can differentiate between texts based on this discrepancy, achieving binary classification accuracy significantly above chance.
In RQ2, we quantitatively analyze whether our approach enhances the performance of TDE attacks based on the results of RQ1. We perform the same TDE attacks on the reference LM and fine-tuned LM, and observe the true positives of these attacks.
5.1 Experimental Setup
**Settings.** We demonstrate our attack on the famous LM, OPT (Zhang et al., 2022), which provides both its model parameters and a publicly available training dataset. Given that the OPT family contains nine different architectures ranging from 125M to 175B, it facilitates observing performance trends based on the LM scale. Due to limited experimental resources, we restrict our experiments to the following six versions of OPT: 125M, 350M, 1.3B, 2.7B, 6.7B, and 13B. All RMs are pre-trained OPT-350M. Please refer to Appendix B.4 and Appendix B.5 for fine-tuning the RMs and the target LMs, respectively.

---

2For instance, for a prompt like “How are you?”, one can add tokens indicating human and assistant directives to transform it into “Human: How are you? Assistant:”.

Table 1: The test accuracy over epochs when fine-tuning the RM using datasets created from each OPT version. All experiments present the average and 95% confidence interval from five repeated trainings on different dataset splits. Epoch 0 denotes before RM’s training starts.

| OPT | Epoch 0 | Epoch 1 | Epoch 2 | Epoch 3 |
|-------|-------------|-------------|-------------|-------------|
| 125M | 49.7 ± 1.1 | 65.5 ± 1.1 | 65.5 ± 1.6 | 63.9 ± 1.8 |
| 350M | 52.2 ± 1.4 | 69.4 ± 2.3 | 70.1 ± 1.9 | 68.7 ± 0.9 |
| 1.3B | 51.2 ± 2.2 | 69.2 ± 1.3 | 69.2 ± 1.3 | 67.9 ± 0.9 |
| 2.7B | 50.9 ± 0.7 | 66.1 ± 2.3 | 66.4 ± 0.6 | 65.5 ± 1.2 |
| 6.7B | 50.5 ± 1.8 | 64.8 ± 1.5 | 64.3 ± 0.3 | 62.8 ± 1.0 |
| 13B | 51.4 ± 2.2 | 62.9 ± 0.7 | 62.6 ± 0.9 | 61.6 ± 0.5 |
We fix all attack hyperparameters for text generation to observe the change in training data exposure induced by our fine-tuning strategy. We simultaneously use top-\(k\) sampling—restricting sampling to the \(k\) vocabulary tokens with the highest probability—with \(k = 40\), and top-\(p\) (nucleus) sampling (Holtzman et al., 2019)—restricting sampling to the smallest set of most probable tokens whose probabilities sum to at least \(p\)—with \(p = 0.95\). To avoid repetitive phrasing, we ensure the same trigram appears no more than once within a generated text. Each generated text contains 256 tokens, excluding the prompt. We do not apply temperature scaling to flatten the token probabilities.
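These settings map directly onto the Hugging Face `generate` API; a minimal sketch using one of the six OPT versions, assuming (as the `transformers` OPT tokenizer does by default) that the “</s>” BOS token is prepended to an empty input:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
lm = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

# An empty string tokenizes to just the "</s>" BOS token, i.e., the empty prompt.
prompt = tok("", return_tensors="pt").input_ids

samples = lm.generate(
    prompt,
    do_sample=True,
    top_k=40,                # sample only from the 40 most probable tokens
    top_p=0.95,              # nucleus sampling
    no_repeat_ngram_size=3,  # the same trigram may appear at most once
    max_new_tokens=256,      # fixed length, excluding the prompt
    num_return_sequences=16, # batch several generations per call
)
texts = tok.batch_decode(samples, skip_special_tokens=True)
```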
Verification. To verify whether the generated texts from the target LM indeed contain training data, we consider twelve original training datasets of OPT: BookCorpus (Zhu et al., 2015), CC-Stories (Trinh & Le, 2018), the Pile (Gao et al., 2020) (containing Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics, and HackerNews), the Pushshift.io Reddit dataset (Baumgartner et al., 2020), and CCNewsV2 (Liu et al., 2019). We used only ten datasets for verification, excluding CC-Stories, which is no longer available in its original form, and CCNewsV2, due to its massive size. Unlike in their original pre-processing phase, we did not deduplicate the datasets. Please refer to Appendix B.2 for the specific specifications of OPT’s ten reconstructed pre-training datasets.
Evaluation Metrics. We evaluate the performance of our attack strategy as true positives per 100,000 generated texts. Following the convention for TDE attacks (Lee et al., 2022), we consider sentences as extracted when they have over 50 duplicated tokens from the original training data.
Computing the true positives of generated sentences by individually searching for overlaps within the ten datasets above is computationally expensive. Instead, we employ the suffix array (Manber & Myers, 1993)-based exact substring duplication strategy of Lee et al. (2022)—i.e., EXACTSUBSTR—to search for duplicate text between generations and the ten original training datasets. This method identifies duplicated examples in linear time (Lee et al., 2022), and since the implementation is written in Rust (Matsakis & Klock, 2014) rather than Python, it is very fast in practice. Using this strategy, we report the number of unique, non-duplicated generated sentences.
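The EXACTSUBSTR implementation itself is nontrivial; purely as an illustration of the 50-token criterion, a naive (memory-hungry) stand-in might look as follows, with the suffix-array machinery of Lee et al. (2022) replacing it in practice:

```python
def has_long_duplicate(generated_ids, corpus_ids, min_len=50):
    # True if `generated_ids` shares a contiguous run of >= `min_len` token IDs
    # with the corpus; EXACTSUBSTR performs the same check in linear time.
    corpus_ngrams = {
        tuple(corpus_ids[i:i + min_len])
        for i in range(len(corpus_ids) - min_len + 1)
    }
    return any(
        tuple(generated_ids[i:i + min_len]) in corpus_ngrams
        for i in range(len(generated_ids) - min_len + 1)
    )
```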
5.2 RQ1: Discriminability of Perturbation Discrepancy in Generations
Note that the RM is fine-tuned to assign a higher reward to the text in each pair that, based on perturbation discrepancy, is more likely to contain training data (i.e., chosen over rejected). Thus, if the binary classification accuracy on a test dataset significantly exceeds 50%, the expected accuracy of random guessing, after sufficient training, we can argue that the RM captures the difference in perturbation discrepancy.
Table 1 displays the binary classification accuracy at each of RM’s fine-tuning epochs for the same training and test datasets. To meticulously observe the trend of performance changes through fine-tuning, we deliberately induce overfitting by training for multiple epochs; this contrasts with the actual attack, which fine-tunes for only one epoch.
Table 2: True positives of the TDE attack on 100,000 generated texts from the reference LM (●) and our fine-tuned LM (○). We did not conduct repeated experiments, as we believe that generating 100,000 texts is massive enough to reduce bias in the true positives.

| OPT | [50,64) ●/○ | [64,128) ●/○ | [128,192) ●/○ | [192,256) ●/○ | {256} ●/○ | Total ●/○ | Inc. |
|------|------------|--------------|---------------|---------------|-----------|-----------|--------|
| 125M | 64 / 54 | 24 / 80 | 5 / 20 | 8 / 15 | 0 / 0 | 101 / 169 | ×1.7 ↑ |
| 350M | 103 / 91 | 64 / 128 | 11 / 35 | 29 / 71 | 0 / 0 | 207 / 325 | ×1.6 ↑ |
| 1.3B | 58 / 241 | 38 / 337 | 0 / 52 | 1 / 139 | 0 / 6 | 97 / 775 | ×8.0 ↑ |
| 2.7B | 53 / 216 | 72 / 253 | 2 / 21 | 0 / 27 | 0 / 0 | 127 / 517 | ×4.1 ↑ |
| 6.7B | 87 / 174 | 57 / 220 | 1 / 53 | 0 / 98 | 0 / 0 | 145 / 545 | ×3.8 ↑ |
| 13B | 87 / 347 | 101 / 394 | 5 / 27 | 0 / 18 | 0 / 0 | 193 / 786 | ×4.1 ↑ |
We also observed occasionally flawed training runs, such as RM’s test accuracy converging to 0, or neither the training loss nor the accuracy improving monotonically. To minimize bias in the experimental results, we repeated RM fine-tuning with different seeds until five valid runs emerged. Note that an adversary also controls RM’s fine-tuning, so repeating training until success is reasonable. Table 1 supports the following observations:
**Pre-trained RM cannot distinguish perturbation discrepancy differences between generations.**
We confirmed that the test accuracy of the RM before training—i.e., at epoch 0—is roughly 50%. This near-random accuracy is expected, since the RM has not yet been fine-tuned on the perturbation discrepancy differences.
**RM can learn about perturbation discrepancy differences between generated texts through fine-tuning.**
A trained RM is more accurate than an untrained RM, indicating that it learns the perturbation discrepancy difference between generated texts as intended. For some OPT versions, the RM also shows a slight decrease in test accuracy as the number of epochs increases, which can be attributed to overfitting. For the results that peaked at epoch 1, RM’s performance could be further optimized by reducing the training dataset size.
**RMs that learned pseudo-labeled generations derived from larger LM show lower classification performance.**
An LM with more parameters is more likely to generate realistic text; hence, the difference in perturbation discrepancies diminishes (Mitchell et al., 2023). This narrowed gap translates into lower-quality pseudo-labels of membership in the fine-tuning dataset. As empirical evidence, we observed a gradually decreasing trend in the test accuracy of RMs trained on pseudo-labeled datasets derived from larger models. Using a higher-capacity perturbation function—e.g., T5-3B (Raffel et al., 2020)—or generating more perturbed texts per sample could enhance RM’s performance.
5.3 RQ2: Possibility of Amplifying Training Data Exposure via Fine-tuning
Subsequently, we examine the changes in the exposure of training data by applying RLHF to the target LM with RM which can distinguish differences in perturbation discrepancy. We observe the results by dividing the sufficiently large number of duplicate tokens of the generated text into five intervals: [50,64), [64,128), [128,192), [192,256), and {256}.
Table 2 shows the true positives of the reference LM and the fine-tuned LM, categorized by OPT version and duplication interval. The experiment confirms that our fine-tuning approach consistently boosts the training data exposure of the reference LM. The amplification of training data exposure is more pronounced in larger models, remarkably increasing up to 8 times for OPT-1.3B.
---
3Due to various constraints, our results may not precisely match the true number of positives: we could not prepare all of OPT’s training data, and multiple duplicates can still exist among the generated texts.
Figure 2: True positives by model scale for the reference LM (blue ○) and the fine-tuned LM (red ×). We also performed a linear approximation (dotted line); the coefficients of determination $R^2$ for the reference LM and the fine-tuned LM are 0.65 and 0.09, respectively.
Figure 3: Distribution of verbatim text lengths extracted from the reference LM (blue) and the fine-tuned LM (red). The maximum lengths of training data extracted from the reference LM and the fine-tuned LM are 885 and 1163, respectively.
Figure 4: True positives per GB for OPT training datasets for reference LM (blue) and fine-tuned LM (red). A value of 0 indicates that no training data was extracted from that dataset.
Furthermore, we present true positives by model size in Figure 2. Previous studies established that training data exposure increases log-linearly with model size (Carlini et al., 2023). We further demonstrate that our fine-tuning rapidly accelerates this exposure.
5.4 Qualitative Analysis of Extracted Samples
We perform a qualitative analysis of several attributes of the samples extracted from either the reference LM or the fine-tuned LM. While OPT was trained on publicly available datasets like the Pile (Gao et al., 2020), its sub-training datasets are very diverse, making the qualitative analysis of memorized content an intriguing subject.
Training Data Sources. We investigated which training datasets the extracted data belongs to. Since each reconstructed dataset has a different size (see details in Appendix B.2), we calculated the true positives per GB of each dataset for a fairer comparison. Figure 4 shows the results. We could not extract any data from the OpenSubtitles and DM Mathematics datasets; we speculate that these two datasets resisted TDE attacks because of their relatively high complexity. Wikipedia is the dataset most vulnerable to the TDE attack, and its leakage increased about 7.2 times after fine-tuning. Quantifying leakage levels according to dataset type and measuring the associated risks are interesting directions for future work.
Extraction Length. Figure 3 displays the distribution of the verbatim length of the texts extracted from both the reference LM and the fine-tuned LM. Note that we count length in words, not tokens. Training data leaked from the reference LM tends to cluster around a similar length, whereas the fine-tuned LM emits a wider range of training data lengths. The maximum lengths of training data extracted from the reference LM and the fine-tuned LM are 885 and 1163, respectively. Together with the per-interval true positives in Table 2, this shows that fine-tuning the target LM enables the extraction of longer texts.
6 Possible Mitigations and Countermeasures
So far, we have shown that an attacker who knows the entire parameters of the target LM can amplify training data exposure by fine-tuning the model with pseudo-labeled membership. A natural question arises regarding possible mitigations and countermeasures against our exposure-amplification strategy. Since our approach introduces a novel type of TDE attack not previously reported, we discuss two potential defense strategies:
Reducing Reliability of Machine-generated Probabilities. Our TDE attack assumes that we can accurately compute the machine-generated probability of generated texts. Reducing the reliability of this probability degrades the quality of the fine-tuning dataset, which can consequently reduce the effectiveness of the TDE attack. To reduce this reliability, we can consider two methods: (1) Reducing the statistical difference between machine-generated and human-written text distributions by making the LM more sophisticated and powerful (Sadasivan et al., 2023). However, merely increasing the size of the LM might inadvertently enhance memorization and thus boost the default performance of the TDE attack (Carlini et al., 2023); enhancing the LM’s performance while maintaining its scale would therefore be more effective. (2) Limiting the LM’s training dataset to a domain that renders the mask-filling of perturbation functions ineffective. For instance, DetectGPT showed notably lower detection capabilities on PubMedQA (Jin et al., 2019), a biomedical research dataset crafted by experts, than on other typical datasets (Mitchell et al., 2023). An LM trained on non-English data might also hinder perturbation functions; without knowledge of the training dataset, an adversary might be unable to deploy adaptive attacks using multilingual models like mT5 (Xue et al., 2021).
Fine-tuning to Reduce Training Data Exposure. On the other hand, we can consider a strategy that flips the pseudo-labels for membership, directing the target LM’s fine-tuning towards reducing training data exposure. Since defenders already have access to high-quality training data, they can easily adopt this approach. However, the impact of fine-tuning with pseudo-labeled self-generations on the generalized performance of the LM is a subsequent concern: the pre-trained model’s generalizable representations could degrade during fine-tuning, a phenomenon known as representational collapse (Aghajanyan et al., 2020). While attackers need not consider the target LM’s performance (e.g., validation perplexity), defenders must balance privacy and utility. To efficiently counter privacy attacks without compromising the usefulness of the target model, one can consider a strategy like RelaxLoss (Chen et al., 2022), which intentionally relaxes the target loss during fine-tuning.
7 Conclusion
This paper presents a novel form of TDE attacks wherein a pre-trained LM is adversarially fine-tuned, enhancing the risk of exposing sensitive pre-training data. Given the recent exponential growth in LM parameters, our attack strategy raises serious concerns, as it tends to be more effective in larger LMs. We leave several open questions for promising future research: (1) How does fine-tuning with self-generated samples specifically affect the retention of memorization? (2) Can fine-tuning the target LM to favor responses with less training data genuinely contribute to mitigating TDE attacks? (3) Can our approach be extended beyond neural LMs to increase training data exposure in general generative models? By further exploring these open questions, we hope our work will contribute to enhancing the robustness of LMs and other generative models against TDE attacks.
References
Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318, 2016.
Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. Better fine-tuning by reducing representational collapse. In International Conference on Learning Representations, 2020.
Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. Large-scale differentially private bert. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 6481–6491, 2022.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Eric Arazo, Diego Ortego, Paul Albert, Noel E O’Connor, and Kevin McGuinness. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2020.
Mislav Balunovic, Dimitar Dimitrov, Nikola Jovanović, and Martin Vechev. Lamp: Extracting text from gradients with language model priors. Advances in Neural Information Processing Systems, 35:7641–7654, 2022.
Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. The pushshift reddit dataset. In Proceedings of the international AAAI conference on web and social media, volume 14, pp. 830–839, 2020.
Dimitris Bertsimas and John Tsitsiklis. Simulated annealing. Statistical science, 8(1):10–15, 1993.
Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. In Proceedings of the 29th International Conference on International Conference on Machine Learning, pp. 1467–1474, 2012.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Nicholas Carlini, Matthew Jagielski, and Ilya Mironov. Cryptanalytic extraction of neural network models. In Annual International Cryptology Conference, pp. 189–218. Springer, 2020.
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In USENIX Security Symposium, volume 6, 2021.
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. Membership inference attacks from first principles. In 2022 IEEE Symposium on Security and Privacy (SP), pp. 1897–1914. IEEE, 2022.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=TatRHT_1cK.
Dingfan Chen, Ning Yu, and Mario Fritz. Relaxloss: Defending membership inference attacks without losing utility. arXiv preprint arXiv:2207.05801, 2022.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
|
AcRfzLS6se
|
In Section 4.3, there are certain inconsistencies in the presented results (e.g., RoBERTa model on the MG and NG datasets), which seem to challenge the hypotheses by the authors. Given the variations in performance, can you provide additional evidence or discussion, such as whether certain architectures or settings are preferred by the proposed method?
|
OUT-OF-DISTRIBUTION DETECTION BY LEVERAGING BETWEEN-LAYER TRANSFORMATION SMOOTHNESS
Fran Jelenić1,2 Josip Jukić1,2 Martin Tutek3 Mate Puljiz2 Jan Šnajder1,2
1TakeLab, 2Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
3UKP Lab, Technical University of Darmstadt, Germany
{fran.jelenic, josip.jukic, mate.puljiz, jan.snajder}@fer.hr
tutek@ukp.informatik.tu-darmstadt.de
ABSTRACT
Effective out-of-distribution (OOD) detection is crucial for reliable machine learning models, yet most current methods are limited in practical use due to requirements like access to training data or intervention in training. We present a novel method for detecting OOD data in Transformers based on transformation smoothness between intermediate layers of a network (BLOOD), which is applicable to pre-trained models without access to training data. BLOOD utilizes the tendency of between-layer representation transformations of in-distribution (ID) data to be smoother than the corresponding transformations of OOD data, a property that we also demonstrate empirically. We evaluate BLOOD on several text classification tasks with Transformer networks and demonstrate that it outperforms methods with comparable resource requirements. Our analysis also suggests that when learning simpler tasks, OOD data transformations maintain their original sharpness, whereas sharpness increases with more complex tasks.
1 INTRODUCTION
Machine learning (ML) models’ success rests on the assumption that the model will be evaluated on data that comes from the same distribution as the data on which it was trained, the in-distribution (ID) data. However, models deployed in noisy and imperfect real-world scenarios often face data that comes from a different distribution, the out-of-distribution (OOD) data, which can hinder the models’ performance. The task of discerning between ID and OOD data is commonly referred to as OOD detection (Yang et al., 2021).
Owing to their consistent state-of-the-art performance across diverse ML tasks (Abiodun et al., 2018), Deep Neural Networks (DNNs) have garnered significant attention in OOD detection research. While popular baselines make use of the model’s posterior class probabilities (Hendrycks & Gimpel, 2017), the issue of overconfidence in DNNs (Guo et al., 2017) frequently erodes the credibility of these probabilities. An alternative is offered by the group of methods that leverage the fundamental concept of DNNs, namely, representation learning. Because a DNN encodes similar instances closely in its representation space, an OOD instance can be identified based on the distance between its representation and the representations of other instances in the training set (Lee et al., 2018). The downside of these methods, however, is that they require the presence of training data during prediction or involve intervention in the model’s training procedure. This is a significant practical limitation, as using third-party models pre-trained on non-public data is increasingly the standard practice. A case in point is the Hugging Face Transformers library (Wolf et al., 2020), which provides community models but often lacks comprehensive details about their training.
An obvious way to close the resource gap is to rely on OOD detection methods with minimal prerequisites. However, current OOD detection research has largely ignored the differing prerequisites among OOD detection methods, often leading to comparisons that treat methods with varying prerequisites equally, disregarding the question of practical applicability. From a practical perspective,
it makes sense to group OOD detection methods into the following three categories: (1) Black-box, for methods capable of operating on black-box models (i.e., having access only to input-output mappings) and thus suitable for models integrated into a product; (2) White-box, for methods that require access to the model’s weights and have knowledge about its architecture, and are thus readily applicable to third-party pre-trained models; and (3) Open-box, for methods with unrestricted access to model and training resources, allowing for interventions in the training process and/or access to training data or separate OOD train or validation sets.
In this paper, we focus on the OOD detection for the Transformer architecture (Vaswani et al., 2017), which has emerged as the predominant architecture in numerous ML domains. We introduce a novel OOD detection method that leverages the inherent differences in how Transformers process ID and OOD data. The method is white-box and has the potential for broad practical applicability. More concretely, our Between Layer Out-Of-Distribution (BLOOD) Detection method estimates the smoothness of between-layer transformations of intermediate representation, building on the insight that these transformations tend to be smoother for ID data than for OOD data. We evaluate BLOOD on Transformer-based pre-trained large language models applied to text classification, the most prevalent task in natural language processing (NLP), and find that it outperforms other state-of-the-art OOD detection white-box methods and even some open-box methods. We further analyze BLOOD to probe into the underlying causes of the differences between how ID and OOD intermediate representations are transformed and evaluate BLOOD on two other types of distribution shifts – semantic and background shift. We provide code and data for our experiments.
The contributions of this paper are as follows: (1) We propose BLOOD, a novel method for OOD detection applicable even when only the model’s weights are available, e.g., third-party pre-trained models, which are becoming the de facto standard in many fields. BLOOD uses information about the smoothness of the between-layer transformations of intermediate representations. We quantify this smoothness using the square of the Frobenius norm of the Jacobian matrix, for which we provide an unbiased estimator to alleviate computational limitations. (2) Our experiments on Transformer-based pre-trained large language models for the task of text classification show that BLOOD outperforms other state-of-the-art white-box OOD detection methods. Additionally, our results indicate that the performance advantages are more prominent when applied to complex datasets as opposed to simpler ones. We also show that BLOOD is more effective in detecting background shift than semantic shift. (3) Following our main insight that between-layer representation transformations of ID data tend to be smoother than those of OOD data, we analyze the source of this difference. We find that the learning algorithm focuses on changing the ID region of the intermediate representation space, smoothing the between-layer transformations of ID data in the process. At the same time, the OOD region of the intermediate representation space is largely left unchanged, except in some scenarios, e.g., for more complex tasks, when the OOD region of the space is also changed and sharpened as a consequence.
2 RELATED WORK
OOD detection methods are typically categorized based on their underlying mechanism, for example, into output-based, gradient-based, distance-based, density-based, and Bayesian methods (Yang et al., 2021). Another, and arguably more practically relevant, categorization would factor in the necessary prerequisites for these methods, distinguishing between black-box, white-box, and open-box methods as introduced earlier. In the following, we provide a brief overview of the most prominent OOD detection methods through this lens.
Black-box. Methods with minimal prerequisites typically rely on posterior class probabilities, assuming that when a model is uncertain about an instance, the instance is more likely to be OOD. A commonly used baseline quantifies the uncertainty of an instance as the negative of the model’s maximum softmax probability for that instance (Lee et al., 2018). A straightforward modification employs the entropy of softmax probabilities rather than the maximum value. Liu et al. (2020b) proposed using energy scores instead of softmax scores to overcome the issue of DNN overconfidence.
Gomes et al. (2022) employ similar terminology to refer to which parts of the model one can access (e.g., its outputs, inputs, or intermediate representations). In contrast, we use these terms to characterize the resources an OOD detection method requires.
https://github.com/fjelenic/between-layer-ood
White-box. Gal & Ghahramani (2016) proposed using Monte-Carlo dropout to more reliably estimate the model’s uncertainty, showing that dropout (Srivastava et al., 2014) with DNNs approximates Bayesian inference. Although Monte-Carlo dropout outperforms vanilla posterior probabilities in OOD detection (Ovadia et al., 2019), it is computationally expensive as it requires multiple forward passes. Another way of leveraging the access to model’s architecture is to use gradients to implicitly measure the uncertainty of the model’s predictions (Oberdiek et al., 2018; Huang et al., 2021). Gradient methods primarily employ the gradient norm to gauge the difference between the model’s posterior distribution and the ideal distribution. Djurisic et al. (2023) detect OOD data by pruning and adjusting the representations of the model, grounded in the intuition that the representations generated by contemporary DNNs tend to be excessive for their designated tasks.
Open-box. Because DNNs’ posterior probabilities tend to exhibit overconfidence, Guo et al. (2017) suggested using temperature scaling to calibrate the model’s posterior probabilities, which entails the usage of a separate validation set. To obtain higher-quality predictive uncertainty estimates, Lakshminarayanan et al. (2017) train an ensemble of differently initialized models and combine their predictions. Although ensembles are robust to different distributional shifts (Ovadia et al., 2019), they impose a significant computational and memory overhead because they require training and keeping in memory multiple models. Agarwal et al. (2022) extend the gradient-based methods by leveraging the variance of the gradient of the predicted label w.r.t. the input across different checkpoints during training. A popular approach to OOD detection for DNNs revolves around the utilization of information related to distances in the representation space (Lee et al., 2018; Van Amersfoort et al., 2020; Liu et al., 2020a; Hsu et al., 2020; Kuan & Mueller, 2022; Sun et al., 2022). However, these approaches require access to the training data or changes in the standard training procedure. Yet another set of methods relies on exposing the model to OOD samples during training to improve performance on the OOD detection task (Hendrycks et al., 2019; Thulasidasan et al., 2021; Roy et al., 2022). Still, a major practical limitation of these methods is the necessity for OOD data, whose entire distribution is typically unknown in real-world scenarios. Several post-hoc methods also need OOD data, but for validation sets to optimize their hyperparameters (Liang et al., 2018; Sun et al., 2021; Sun & Li, 2022).
3 PRELIMINARIES
3.1 PROBLEM STATEMENT
Let instance $x \in \mathbb{R}^d$ be a $d$-dimensional feature vector and $y \in \{0, \ldots, C - 1\}$ be its corresponding class in a $C$-way classification task. We train a classifier on the dataset $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$ consisting of $N$ instances i.i.d. sampled from the distribution $p(x, y)$. The objective of the learning algorithm is to model the conditional distribution $p(y|x)$ based on $\mathcal{D}$ by estimating the parameters $\theta$ of the distribution $p_\theta(y|x)$ that is as close as possible to the true conditional distribution.
The goal of an OOD detection method is to determine the uncertainty score $U_x \in \mathbb{R}$ of an instance $x$, such that there exists $\epsilon \in \mathbb{R}$ for which both $\mathbb{P}_{x \sim p(x,y)}(U_x < \epsilon)$ and $\mathbb{P}_{x \sim q(x,y)}(U_x > \epsilon)$ are close to unity whenever $q(x, y)$ is a distribution sufficiently different from $p(x, y)$. In practice, no scoring function can perfectly discriminate between ID examples (generated by $p(x, y)$) and OOD examples (generated by $q(x, y)$). Nevertheless, even imperfect scores can prove valuable in real-world scenarios.
3.2 INTUITION
Transformers work by mapping the input features onto a high-dimensional representation space through $L$ layers using the self-attention mechanism, creating a representation of the data suitable for the task at hand. The mapping is realized as a composition of several attention layers, where each layer creates an intermediate representation of the input. It has been shown that Transformer-based models tend to gradually progress from input features towards more abstract representation levels through layers, i.e., lower layers model lower-level features, while upper layers model higher-level features. For example, Peters et al. (2018); Tenney et al. (2019); Jawahar et al. (2019) showed that large Transformer-based language models create text representations that progress gradually from representations that encode morphological and syntactic information at the lower layers to representations that encode semantic meaning in the upper layers. Likewise, Vision Transformers
(ViT) (Dosovitskiy et al., 2021), which are garnering popularity in computer vision, were shown to process images in a similar fashion (Ghiasi et al., 2022).
We hypothesize that during the model’s training, the model learns smooth transformations between layers corresponding to natural and meaningful progressions between abstractions for ID data. We further hypothesize that these progressions will not match OOD data, hence the transformations will not be smooth for OOD data. Thus, if we could measure the smoothness of transformations in representations between layers, we could in principle differentiate between ID and OOD data. We also speculate that the difference in smoothness of transformations between ID and OOD data should be emphasized in the upper layers of a Transformer. Lower layers typically represent low-level features that are more universal, whereas upper layers tend to cluster instances around task-specific features that are not shared between ID and OOD data, potentially creating a mismatch in levels of abstraction.
3.3 OUR METHOD
Assume an $L$-layered deep neural network $f : \mathbb{R}^{d_0} \rightarrow [0, 1]^C$ was trained to predict the probabilities of $C$ classes for a $d_0$-dimensional input $x$. Let $f$ be a composition of $L$ intermediate functions, $f_L \circ \cdots \circ f_1$, where $f_l : \mathbb{R}^{d_{l-1}} \rightarrow \mathbb{R}^{d_l}$, $l = 1, \ldots, L - 1$, correspond to intermediate network layers, while $f_L$ corresponds to the last layer, mapping to a vector of logits to which softmax function is applied to obtain the conditional class probabilities. We denote the intermediate representation of $x$ in layer $l$ as $h_l$, defined as $h_l = (f_l \circ \cdots \circ f_1)(x)$.
We now need to quantify how smoothly an intermediate representation is transformed from layer $l$ to layer $l + 1$. To this end, we first need to define what we consider a smooth transformation. We say a representation $h_l$ is transformed smoothly if there is not a large difference in how it is mapped from layer $l$ onto layer $l + 1$ compared to how its infinitesimally close neighborhood is mapped.
Let $\phi_l(x)$ be the degree of smoothness of the transformation between representation $h_l$ and representation $h_{l+1}$ for input $x$. To calculate $\phi_l(x)$, we compute the Jacobian matrix $\frac{\partial f_{l+1}}{\partial h_l} = J_l : \mathbb{R}^{d_l} \rightarrow \mathbb{R}^{d_{l+1} \times d_l}$, and take the square of its Frobenius norm:
$$\phi_l(x) = \|J_l(h_l)\|_F^2 = \sum_{i=1}^{d_{l+1}} \sum_{j=1}^{d_l} \left( \frac{\partial (f_{l+1})_i}{\partial (h_l)_j} \right)^2$$
In the most popular ML libraries, gradients of a function are computed through automatic differentiation (AD), which comprises both forward mode and backward mode. Forward-mode AD computes the values of the function together with a Jacobian-vector product. Computing the full Jacobian matrix $J(x)$ with AD is computationally expensive, as it requires $d$ forward evaluations of $J(x)e^{(i)}$, $i = 1, \ldots, d$, where $e^{(i)}$ are standard basis vectors, computing the Jacobian matrix one column at a time. In the case of modern DNNs with high-dimensional hidden layers, computing full Jacobians could render our method infeasible. To reduce computational complexity, we derive an unbiased estimator of $\phi_l(x)$ by leveraging Jacobian-vector product computation through forward-mode AD.
Corollary 1. Let $J(x) \in \mathbb{R}^{m \times n}$ be a Jacobian matrix, and let $v \in \mathbb{R}^n$ and $w \in \mathbb{R}^m$ be random vectors whose elements are independent random variables with zero mean and unit variance. Then,
$$\mathbb{E}[(w^\top J(x)v)^2] = \|J(x)\|_F^2.$$
We prove Corollary 1 in the Appendix by providing a proof for more general Theorem 1. As for the intuition behind the corollary, the Jacobian-vector product $J(x)v$ gives us an appropriately scaled gradient with respect to the change of the input in the direction of vector $v$. Further multiplying the Jacobian-vector product $J(x)v$ by the random vector $w$ from the left projects the calculated directional gradient $J(x)v$ on the vector $w$, i.e., it quantifies the extent to which the output changes in the direction of $w$ when the input changes in the direction of $v$. Squaring the vector-Jacobian-vector product then gives an estimate of the sum of squared entries of the Jacobian, i.e., the square of its Frobenius norm. Squaring also handles negative values (in cases when the angle between the directional gradient $J(x)v$ and the vector $w$ is obtuse), since we are interested in the overall smoothness as defined by Frobenius norm rather than the direction of the specific gradient.\footnote{Our notion of smoothness extends from Lipschitz continuity, where the spectral norm of the Jacobian acts as a lower bound for the Lipschitz constant (Rosca et al., 2020). Since all matrix norms are equivalent, we use the Frobenius norm, which can be efficiently computed, rather than the spectral norm to capture smoothness.}
To calculate the unbiased estimate \( \hat{\phi}_l(x) \) of \( \phi_l(x) \), we use a sample of \( M \) pairs of random vectors \( v_l \sim N(0_n, I_n) \) and \( w_l \sim N(0_m, I_m) \), and define \( \hat{\phi}_l(x) \) as:
\[
\hat{\phi}_l(x) = \frac{1}{M} \sum_{i=1}^{M} (w_{l,i}^T J_l(h_l)v_{l,i})^2
\]
BLOOD uses \( \hat{\phi}_l(x) \) as the uncertainty score of an instance \( x \). In our experiments, we consider two variations of BLOOD: (1) the average of scores over all layers, \( \text{BLOOD}_M = \frac{1}{L-1} \sum_{l=1}^{L-1} \hat{\phi}_l(x) \), and (2) the score of the last between-layer transformation, \( \text{BLOOD}_L = \hat{\phi}_{L-1}(x) \). We use the two variants to assess the impact of layer choice, as we hypothesize that BLOOD will perform better on upper layers, given that lower layers capture low-level, general features.
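A minimal PyTorch sketch of the estimator and the two score variants follows, assuming `f_next` is the map from one layer’s representation to the next (e.g., a single Transformer block restricted to the [CLS] position); `torch.func.jvp` provides the forward-mode Jacobian-vector product, and the function names are ours.

```python
import torch
from torch.func import jvp

def phi_hat(f_next, h, num_samples=50):
    # Unbiased estimate of ||J||_F^2 for the between-layer map f_next at h,
    # via (w^T J v)^2 with v, w ~ N(0, I) (Corollary 1); each sample needs
    # only one forward-mode Jacobian-vector product, never the full Jacobian.
    estimates = []
    for _ in range(num_samples):
        v = torch.randn_like(h)          # random input direction
        _, jv = jvp(f_next, (h,), (v,))  # J v
        w = torch.randn_like(jv)         # random output direction
        estimates.append((w * jv).sum() ** 2)
    return torch.stack(estimates).mean()

def blood_scores(transforms, h):
    # transforms[l] maps the intermediate representation at one layer to the next.
    phis = []
    for f in transforms:
        phis.append(phi_hat(f, h))
        h = f(h)
    return sum(phis) / len(phis), phis[-1]  # (BLOOD_M, BLOOD_L)
```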
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
We evaluate BLOOD on several text classification datasets using two Transformer-based (Vaswani et al., 2017) large pre-trained language models, RoBERTa (Liu et al., 2019) and ELECTRA (Clark et al., 2020), known for their state-of-the-art performance across a wide range of NLP tasks. We calculate the BLOOD score using samples of size \( M = 50 \) to estimate \( \phi_l(x) \) for the [CLS] token’s representations between layers. We use eight text classification datasets as ID data: SST-2 (SST; Socher et al., 2013), Subjectivity (SUBJ; Pang & Lee, 2004), AG-News (AGN; Zhang et al., 2015), TREC (Li & Roth, 2002), BigPatent (BP; Sharma et al., 2019), AmazonReviews (AR; McAuley et al., 2015), MovieGenre (MG; Maas et al., 2011), and 20NewsGroups (NG; Lang, 1995). We use the One Billion Word Benchmark (OBW; Chelba et al., 2014) for OOD data, similarly to Ovadia et al. (2019), because of the diversity of the corpus. We subsample the OOD datasets to the same size as their ID test set counterparts. Appendix C provides more details about the models, datasets, and training procedures.
We compare BLOOD to several state-of-the-art black-box and white-box OOD detection methods: (1) Maximum softmax probability (MSP) – the negative posterior class probability of the most probable class, \( -\max_c p(y = c|x) \), often considered a baseline OOD detection method (Hendrycks & Gimpel, 2017); (2) Entropy (ENT) – the entropy of the posterior class distribution, \( H(Y|x,w) \); (3) Energy (EGY) – a density-based method that overcomes the overconfidence issue by calculating energy scores from the logits, \( -\log \sum_{i=0}^{C-1} e^{f_L(x)_i} \), instead of softmax scores (Liu et al., 2020b); (4) Monte-Carlo dropout (MC) – the entropy of the predictive distribution obtained using Monte-Carlo dropout (Gal & Ghahramani, 2016); we use \( M = 30 \) stochastic forward passes to estimate uncertainty; (5) Gradient norm (GRAD) – the L2-norm of the penultimate layer’s gradient of the loss function, with the most likely class treated as the true class (Oberdiek et al., 2018); (6) Activation shaping (ASH) – removing 90% of the smallest activations in the penultimate layer and adjusting the rest using the ASH-S method (Djurisic et al., 2023).
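For reference, the three purely output-based scores above can be computed from a single forward pass; a minimal sketch (helper name ours):

```python
import torch
import torch.nn.functional as F

def baseline_scores(logits):
    # Single-forward-pass uncertainty scores; higher means more likely OOD.
    probs = F.softmax(logits, dim=-1)
    msp = -probs.max(dim=-1).values                        # neg. max softmax prob
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(-1)  # predictive entropy
    egy = -torch.logsumexp(logits, dim=-1)                 # energy score
    return msp, ent, egy
```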
Additionally, we compare BLOOD to four standard open-box OOD detection methods. Given that these methods entail considerably more prerequisites compared to BLOOD and the other white/black-box methods, this comparison is intended solely as a reference point: (1) Rectified Activations (ReAct) – capping the activations in the penultimate layer at the 90th percentile of the training data’s activations (Sun et al., 2021); (2) Ensemble (ENSM) – an ensemble of \( M = 5 \) models of the same type, e.g., an ensemble of five RoBERTa or five ELECTRA models (Lakshminarayanan et al., 2017); (3) Temperature scaling (TEMP) – introduces a temperature parameter \( T \) into the softmax function such that it minimizes the negative log-likelihood on the ID validation set (Guo et al., 2017); (4) Mahalanobis distance (MD) – the Mahalanobis distance of a query instance in the representation space with respect to the closest class-conditional Gaussian distribution (Lee et al., 2018).
4.2 OOD DETECTION PERFORMANCE
As the performance measure for OOD detection, we follow the standard practice and use the area under the receiver operating characteristic curve (AUROC) metric (in Appendix H, we report the results using two other commonly used metrics, AUPR-IN and FPR@95TPR, which gave qualitatively identical results to AUROC).
Table 1: The performance of OOD detection methods measured by AUROC (%). The best-performing white/black-box method is in **bold**. Open-box methods that outperform the best-performing white/black-box method are in **bold**. Higher is better. We test the performance of BLOOD\textsubscript{M} and BLOOD\textsubscript{L} against the MSP baseline using the one-sided Mann-Whitney U test; significant improvements ($p < .05$) are indicated with asterisks (*).
| Model | Dataset | BLOOD\textsubscript{M} | BLOOD\textsubscript{L} | MSP | ENT | EGY | MC | GRAD | ASH | ReAct | ENSM | TEMP | MD |
|-------|---------|------|------|-----|-----|-----|----|------|-----|-------|------|------|-----|
| RoBERTa | SST | 50.56 | 72.83 | 71.69 | 71.61 | 68.28 | 71.76 | 67.22 | 69.55 | 69.03 | 71.64 | 85.36 |
| | SUBJ | 52.02 | 74.66 | 74.55 | 75.79 | 74.21 | 74.93 | 79.27 | 73.33 | 76.68 | 74.41 | 93.47 |
| | AGN | 77.46 | 61.95 | 73.57 | 73.80 | 76.36 | 77.55 | 73.58 | 72.54 | 77.10 | 80.35 | 75.38 | 82.63 |
| | TREC | 69.63 | 95.30 | 96.20 | 96.40 | 96.28 | 95.68 | 96.14 | 90.36 | 96.05 | 96.87 | 96.74 | 96.74 |
| | BP | 87.20* | 89.53* | 70.15 | 72.82 | 85.84 | 74.29 | 73.11 | 82.18 | 86.19 | 79.39 | 86.01 | 97.35 |
| | AR | 91.41* | 93.20* | 89.06 | 89.96 | 92.39 | 90.59 | 88.65 | 91.42 | 92.65 | 92.44 | 92.25 | 98.35 |
| | MG | 88.15* | 85.23* | 75.02 | 76.60 | 86.45 | 79.98 | 74.28 | 81.62 | 87.30 | 76.98 | 84.30 | 95.12 |
| | NG | 83.53* | 72.02 | 77.49 | 78.76 | 82.65 | 79.32 | 76.93 | 77.73 | 83.17 | 80.77 | 82.87 | 90.68 |
| ELECTRA | SST | 74.36 | 78.11* | 73.84 | 71.97 | 70.81 | 73.82 | 67.92 | 71.18 | 73.81 | 73.58 | 78.85 |
| | SUBJ | 74.10 | 77.41 | 78.17* | 70.46 | 77.71 | 78.11 | 77.51 | 68.33 | 79.23 | 78.20 | 81.59 |
| | AGN | 65.67 | 80.28 | 76.80 | 77.01 | 79.75 | 79.55 | 76.76 | 77.96 | 79.46 | 79.50 | 83.78 | 86.10 |
| | TREC | 97.48 | 98.90* | 97.76 | 97.70 | 96.34 | 97.07 | 90.18 | 97.45 | 97.45 | 98.20 | 98.20 | 97.54 |
| | BP | 86.06* | 96.72* | 78.56 | 81.75 | 84.63 | 83.04 | 76.77 | 79.81 | 85.26 | 84.20 | 84.69 | 98.28 |
| | AR | 84.58 | 91.66* | 87.74 | 88.44 | 90.64 | 88.53 | 87.52 | 83.96 | 91.01 | 91.98 | 90.35 | 95.47 |
| | MG | 80.52 | 90.63* | 73.83 | 74.78 | 80.41 | 76.67 | 73.35 | 71.84 | 81.22 | 76.86 | 78.47 | 92.96 |
| | NG | 77.61 | 82.47* | 76.45 | 77.73 | 80.83 | 79.11 | 75.97 | 74.50 | 80.95 | 79.93 | 80.75 | 89.13 |
The OOD detection task is essentially a binary classification task, with AUROC corresponding to the probability that a randomly chosen OOD instance will have a higher uncertainty score than a randomly chosen ID instance (Fawcett, 2006). The AUROC for random value assignment is 50%, while a perfect method achieves 100%. We run each experiment five times with different random seeds and report the mean AUROC.
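A minimal sketch of this evaluation, assuming per-instance uncertainty scores are already computed (scikit-learn’s `roc_auc_score` is our choice of implementation, not necessarily the authors’):

```python
from sklearn.metrics import roc_auc_score

def ood_auroc(id_scores, ood_scores):
    # AUROC of the ID-vs-OOD binary task: the probability that a randomly
    # chosen OOD instance receives a higher uncertainty score than a randomly
    # chosen ID instance.
    labels = [0] * len(id_scores) + [1] * len(ood_scores)
    return roc_auc_score(labels, list(id_scores) + list(ood_scores))
```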
OOD detection performance is shown in Table 1. The first observation is that BLOOD outperforms the other white/black-box methods. Second, BLOOD\textsubscript{L} outperforms the other white/black-box methods more often than BLOOD\textsubscript{M}; thus, in the rest of the experiments we focus on BLOOD\textsubscript{L}. Lastly, while BLOOD demonstrates superior performance on most datasets, the improvements are more consistent with ELECTRA than with RoBERTa. Interestingly, the datasets where BLOOD with RoBERTa outperforms other white/black-box methods (SST, BP, AR, MG, and NG) appear to be more complex, as indicated by the minimum description length (Perez et al., 2021; cf. Appendix C). We offer explanations for these observations in Sections 4.3 and 4.4.
Compared to open-box methods, BLOOD is outperformed by MD in all setups except ELECTRA on the TREC dataset. However, BLOOD remains competitive with ENSM and TEMP. Unlike the findings of Ovadia et al. (2019), the dominance of ENSM is reduced. This is likely because we employ an ensemble of pre-trained language models, whereas they use entirely randomly initialized models. In our ensemble, the model parameters exhibit minimal variation since all models are pre-trained; variability between models arises solely from the random initialization of the classification head and the stochastic nature of the training process. The high performance of MD on Transformer-based language models aligns with prior research (Podolskiy et al., 2021).
4.3 SOURCE OF THE DIFFERENCES IN TRANSFORMATIONS OF ID AND OOD DATA
Understanding which layers of the model are impacted by the model’s training could shed some light on the behavior of our method. To find out how much each layer has learned, we examine the changes in intermediate representations of instances after training. For simplicity, we use the Euclidean distances $\|r_{\text{init}} - r_{\text{FT}}\|_2$ between representations of the initialized model ($r_{\text{init}}$) and the representations after fine-tuning the model ($r_{\text{FT}}$). We calculate this distance for all instances in the training set at each of the model’s layers and then compute the average for each layer.
Figure 1 illustrates the extent of representation changes in training data alongside BLOOD scores before and after fine-tuning at each intermediate layer. The representations of the upper layers change significantly more than the representations of the lower layers. This is expected since transformer-based language models learn morphological- and syntactic-based features in the lower layers, which are similar between tasks and can be mostly reused from the pre-training. In contrast,
higher layers learn more task-specific features such as context and coreferences (Peters et al., 2018; Tenney et al., 2019; Jawahar et al., 2019). Our hypothesis posits that the smooth transformations of ID data are a by-product of the learning algorithm learning the natural progression between abstractions. Consequently, layers more impacted by training will exhibit smoother transformations, which explains why BLOOD$_L$ outperforms BLOOD$_M$ on the OOD detection task. This effect becomes apparent when comparing the representation change (upper row of Figure 1) with the BLOOD score (lower two rows of Figure 1) across layers, with a more significant difference in transition smoothness between ID and OOD data observed in layers where representations have undergone more substantial changes overall. The effect is particularly emphasized in ELECTRA, where the last layer undergoes the most significant change, resulting in BLOOD$_L$ performing exceptionally well due to the radical smoothing of the final transformation.
We also anticipate that the representations of ID data will undergo more significant changes after fine-tuning than those of OOD data, given the model’s focus on the ID region of the representation space during training. This effect would cause a difference in smoothness because the ID region of the space would be smoothed out while the OOD region of the space would keep its original sharpness. Same as above, we calculate the change in representations using Euclidean distance of representations before and after fine-tuning. We then quantify the difference between changes in representations of ID and OOD data using the common language effect size (CLES) (McGraw & Wong, 1992), corresponding to the probability that representations of ID data exhibited greater changes after training than representations of OOD data. We measure this difference for the model’s last layer and the mean difference across all layers.
---
4The CLES statistic quantifies the effect size of the difference between two samples. It is equivalent to the AUC of the corresponding univariate binary classifier, representing the probability that a randomly selected score from the first sample will exceed a randomly selected score from the second sample.
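A minimal sketch of the CLES computation (function name ours):

```python
import torch

def cles(changes_id, changes_ood):
    # Common language effect size: P(a random ID representation change exceeds
    # a random OOD one), with changes measured as ||r_init - r_FT||_2.
    a = torch.as_tensor(changes_id, dtype=torch.float)
    b = torch.as_tensor(changes_ood, dtype=torch.float)
    return (a.unsqueeze(1) > b.unsqueeze(0)).float().mean().item()
```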
Table 3: The performance of OOD detection methods for the simplified datasets measured by AUROC (%). The best-performing white/black-box method is in **bold**. Open-box methods that outperform all white/black-box methods are in **bold**. Higher is better. The right side of the table shows a comparison of changes in representations between ID and OOD data using CLES (%).
| Model | Dataset | BLOOD<sub>L</sub> | MSP | ENT | EGY | MC | GRAD | ASH | ReAct | ENSM | TEMP | MD | Mean | Last |
|---------|---------|-------------------|-----|-----|-----|----|------|-----|-------|------|------|-----|------|------|
| RoBERTa | AR2 | 79.66 | 89.74 | 89.74 | 88.23 | 88.92 | **89.84** | 82.66 | 87.60 | 87.59 | **89.92** | **97.66** | 94.57 | 84.27 |
| | MG2 | 88.20 | 93.33 | 93.33 | **94.27** | 93.30 | 93.58 | 92.63 | 93.31 | **94.55** | 93.34 | **99.02** | 91.84 | 80.47 |
| ELECTRA | AR2 | 84.78 | 78.13 | 78.13 | **85.44** | 82.62 | 78.28 | 74.05 | **85.95** | 83.95 | 78.23 | **97.48** | 86.80 | 70.25 |
| | MG2 | 90.67 | **91.41** | **96.16** | 93.80 | 95.47 | 96.14 | 91.95 | 93.40 | 95.20 | **96.20** | 93.22 | 97.07 | 96.22 |
Table 2 shows the effect size, quantified using CLES, of the changes in representations between ID and OOD data. In most setups, CLES is far above 50%, meaning that representations of ID data underwent more significant changes than those of OOD data. The results imply that the learning algorithm focuses on the ID region of the representation space during training, while the rest of the space is largely unaltered. Moreover, the difference in between-layer transformation smoothness observed between ID and OOD data can be attributed to the inherently non-smooth transformations of the initialized models, which gradually become smoother within the ID region. However, the more complex datasets (BP, AR, MG, and NG) in conjunction with the RoBERTa model contradict our initial expectation: in these cases, CLES approaches or even drops below 50%, indicating that the ID region of the representation space undergoes similar or even smaller changes than the rest of the space.
Our interpretation of this phenomenon is that the algorithm faces greater difficulty in fitting the data, necessitating more substantial adjustments to the model. These significant alterations not only result in smoothing out transitions for ID data but, as a consequence, also make transformations in the rest of the space less smooth. This would explain the improved performance of BLOOD in conjunction with RoBERTa on these datasets, as the difference in transformation smoothness is attributed not only to the smoothing of the ID region of the space but also to the reduction in smoothness of the remaining space. This sharpening effect in the region populated by OOD data is evident when comparing sub-figures (c) and (e) in Figure 1.
4.4 THE EFFECT OF DATASET COMPLEXITY
In the previous subsection, we demonstrated that BLOOD performs better on more complex datasets compared to simpler ones. To investigate this phenomenon further, we re-evaluate the performance of OOD detection methods on simplified versions of the more complex datasets. Specifically, we use the binary classification datasets BP2, AR2, and MG2, which are derived from BP, AR, and MG datasets, respectively, by retaining only two classes (cf. Appendix C for additional details).
Table 3 shows the AUROC for the OOD detection task on the simplified datasets, as well as the CLES of the representation changes. We observe a decrease in AUROC for BLOOD compared to its AUROC on the original datasets, while the AUROC of the other white/black-box methods increases. The drop in AUROC for BLOOD can be explained by examining the CLES of the representation changes.
We support this finding by calculating the Pearson correlation coefficient between MDL and the difference in AUROC between BLOOD<sub>M</sub> (to capture the influence on all layers of the model) and the baseline method (MSP) for each dataset. We found a significant ($p < .05$) correlation of 0.79 for RoBERTa and 0.73 for ELECTRA.
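The correlation itself is straightforward to compute; in the sketch below the MDL values and AUROC gaps are illustrative placeholders, not the numbers from our experiments.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-dataset values: MDL complexity estimates and the AUROC
# gap between BLOOD_M and the MSP baseline (placeholder numbers).
mdl = np.array([0.21, 0.35, 0.48, 0.62, 0.71, 0.83])
auroc_gap = np.array([-3.1, -1.2, 0.4, 2.8, 3.5, 5.0])

r, p = pearsonr(mdl, auroc_gap)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # significant if p < .05
```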
Table 4: The performance of OOD detection methods on the task of Near-OOD detection measured by AUROC (%). The best-performing white/black-box method is in **bold**. Open-box methods that outperform all white/black-box methods are in **bold**. Higher is better.
| Model | Shift | BLOOD<sub>M</sub> | MSP | ENT | EGY | MC | GRAD | ASH | ReAct | ENSM | TEMP | MD |
|-----------|---------------|-----------|-----|-----|-----|-----|------|-----|-------|------|------|------|
| RoBERTa | Semantic | 61.61 | 69.46 | **69.50** | 69.41 | 68.34 | 69.36 | 66.50 | 69.46 | 68.91 | **70.56** | **72.03** |
| | Background | **62.70** | 54.26 | 54.26 | 50.17 | 48.18 | 54.33 | 50.46 | 49.32 | 49.13 | 54.19 | 59.40 |
| ELECTRA | Semantic | 62.49 | 63.17 | 63.12 | 60.92 | 62.14 | **63.23** | 56.85 | 61.00 | **65.67** | 62.45 | **64.22** |
| | Background | **59.35** | 42.96 | 42.96 | 38.68 | 37.96 | 42.77 | 40.66 | 38.53 | 41.25 | 42.63 | 39.31 |
The CLES of the representation changes exhibits a notable increase compared to the original datasets in the case of RoBERTa, and even a slight increase for ELECTRA. This rise suggests that the models managed to learn the simplified task without needing to sharpen the transformations of the OOD data, thereby reducing the ability of BLOOD to detect OOD instances.
We suspect that the increase in AUROC for the other white/black-box methods may be attributed to the same factor that led to the AUROC decrease in BLOOD – namely, the task’s simplicity. However, this cause manifests differently. The simplified datasets, having fewer ambiguous instances in their test sets due to the reduced number of classes, allow the other (probabilistic) methods to more accurately attribute the estimated uncertainty to the OOD data. See Appendix F for a more detailed explanation and visualization using dataset cartography (Swayamdipta et al., 2020).
### 4.5 Types of Distribution Shift
Another important aspect to consider for OOD detection is the type of distribution shift. Up to this point, we have only considered OOD data coming from a distribution entirely different from that of the ID data, which is referred to as Far-OOD by Baran et al. (2023). We next examine the performance of OOD detection methods on Near-OOD data, which arises from either a semantic or a background shift. For the semantic shift, in line with Ovadia et al. (2019), we designate the even-numbered classes of the NG dataset as ID and the odd-numbered classes as Near-OOD data. For the background shift, following Baran et al. (2023), we use the SST dataset as ID and the Yelp Review sentiment classification dataset (Zhang et al., 2015) as Near-OOD data.
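A minimal sketch of the class-based semantic-shift split is given below; the scikit-learn 20 Newsgroups loader is our assumed stand-in for the actual data pipeline.

```python
# Even-numbered 20 Newsgroups classes serve as ID data,
# odd-numbered classes as Near-OOD data.
from sklearn.datasets import fetch_20newsgroups

data = fetch_20newsgroups(subset="train")
id_texts = [t for t, y in zip(data.data, data.target) if y % 2 == 0]
ood_texts = [t for t, y in zip(data.data, data.target) if y % 2 == 1]
```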
Table 4 shows the OOD detection performance on the semantic and background shift detection tasks. For the semantic shift, BLOOD exhibits suboptimal performance. In the case of the background shift, however, it notably outperforms all other methods, including the open-box approaches, some of which even perform worse than random. We suspect that the subpar performance of the other OOD detection methods on background shift detection stems from the models performing better on the Yelp data than on the SST data they were trained on (cf. Appendix C): Yelp reviews are longer and contain more semantic cues, making the models more confident on OOD data. We speculate that the discrepancy in performance between semantic and background shifts arises because BLOOD focuses on the encoding process of the query instances, whereas the other methods examine only the model's outputs. Consequently, BLOOD is more sensitive to changes in the data-generating distribution, while the other methods are better at detecting changes in the outputs, such as the introduction of an unknown class. In Appendix G we show that BLOOD is sensitive to the degree of distribution shift.
### 5 Conclusion
We have proposed BLOOD, a novel method for out-of-distribution (OOD) detection in Transformer-based networks. The method analyzes representation transformations across intermediate layers and requires only access to the model's weights. Our evaluation on multiple text classification datasets with Transformer-based large pre-trained language models shows that BLOOD outperforms similar methods. Our analysis reveals that ID representations undergo smoother transformations between layers than OOD representations because the model concentrates on the ID region of the representation space during training. We also demonstrated that the learning algorithm retains the original sharpness of the transformations of OOD intermediate representations for simpler datasets but increases it for more complex datasets.
### Acknowledgment
We thank the anonymous reviewers for their insightful comments. Our heartfelt appreciation goes to the members of TakeLab for their continuous support and valuable input. Special thanks to Nina Drobac and Stjepan Šebek for their feedback and helpful suggestions. This work has been supported by the Croatian Science Foundation under the project IP-2020-02-8671 PSYTXT (“Computational Models for Text-Based Personality Prediction and Analysis”).
### References
Oludare Isaac Abiodun, Aman Jantan, Abiodun Esther Omolara, Kemi Victoria Dada, Nachaat AbdElatif Mohamed, and Humaira Arshad. State-of-the-art in artificial neural network applications: A survey. *Heliyon*, 4(11), 2018.
Chirag Agarwal, Daniel D’souza, and Sara Hooker. Estimating example difficulty using variance of gradients. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10368–10378, 2022.
Mateusz Baran, Joanna Baran, Mateusz Wójcik, Maciej Zięba, and Adam Gonczarek. Classical out-of-distribution detection methods benchmark in text classification tasks. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)*, pp. 119–129, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-srw.20. URL https://aclanthology.org/2023.acl-srw.20.
Zvonimir Bujanovic and Daniel Kressner. Norm and trace estimation with random rank-one vectors. *SIAM Journal on Matrix Analysis and Applications*, 42(1):202–223, 2021.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. In *Proceedings of Interspeech 2014*, pp. 2635–2639, 2014. doi: 10.21437/Interspeech.2014-564.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=r1xMHlBtvB.
Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? Does it matter? *Structural Safety*, 31(2):105–112, 2009.
Andrija Djurisic, Nebojsa Bozanic, Arjun Ashok, and Rosanne Liu. Extremely simple activation shaping for out-of-distribution detection. In *The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023*. OpenReview.net, 2023. URL https://openreview.net/pdf?id=ndYXTEL6cZZ.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
Tom Fawcett. An introduction to ROC analysis. *Pattern Recognition Letters*, 27(8):861–874, 2006.
Yarin Gal. *Uncertainty in Deep Learning*. PhD thesis, University of Cambridge, 2016.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In *International Conference on Machine Learning*, pp. 1050–1059. PMLR, 2016.
Amin Ghiasi, Hamid Kazemi, Eitan Borgnia, Steven Reich, Manli Shu, Micah Goldblum, Andrew Gordon Wilson, and Tom Goldstein. What do vision transformers learn? a visual exploration. *arXiv preprint arXiv:2212.06727*, 2022.